Monday, January 27, 2025

Upgrading Kubernetes

I'm adding a new node to my home K8s cluster. Conveniently, I can generate the command to run on the node that wants to join by running this on the master [SO]:

kubeadm token create --print-join-command
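
That prints a ready-made join command to run (as root) on the new node. It looks something like this - the IP, token and hash here are placeholders rather than my real values:

sudo kubeadm join 192.168.1.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1234abcd...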

However, joining proved a problem because my new machine has Kubernetes 1.32 and my old cluster is still on 1.28. K8s tolerates a skew of one minor version, but this was just too big a jump.
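
You can see the skew by checking versions on each box (kubectl version reports both the client and the server versions):

kubectl version
kubeadm version
kubelet --version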

So, time to upgrade the cluster.

First I had to update the executables by following the official documents to put the correct keyrings in place. I also had to override the held packages with --allow-change-held-packages as all my boxes are Ubuntu and can be upgraded in lockstep. This meant I didn't have to run:

sudo kubeadm upgrade apply 1.32
sudo kubeadm config images pull --kubernetes-version 1.32.1
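
The keyring and package steps themselves go roughly like this - a sketch following the official docs for the 1.32 stream, so the repository URL and versions below are illustrative rather than exactly what I typed:

# add the signing key and apt repository for the v1.32 packages
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# upgrade the held packages in one go
sudo apt-get update
sudo apt-get install -y --allow-change-held-packages kubeadm kubelet kubectl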

However, I must have bodged something as kubectl get nodes showed the master in a state of Ready,SchedulingDisabled [SO], where kubectl uncordon did the trick. I was also getting "Unable to connect to the server: tls: failed to verify certificate: x509" (amongst other errors), so I rolled back [ServerFault] all the config with:

sudo kubeadm reset
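
Note that kubeadm reset itself warns that it doesn't clean up everything; clearing the leftovers before re-initialising looks something like this (adjust to taste):

# reset leaves the CNI config, iptables rules and kubeconfig behind
sudo rm -rf /etc/cni/net.d
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -X
rm -f ~/.kube/config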

To be honest, I had to reset a few times until I worked out that my bodge was an incorrect IP address for one of my nodes in my /etc/hosts file - d'oh.
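
In other words, every node needs correct entries for its peers; an illustrative /etc/hosts (the addresses and the worker name are made up):

192.168.1.10   nuc
192.168.1.11   worker1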

Then I followed the original instructions I used to set up the cluster last year. But don't forget the patch I mention in a previous post. Also note that you must add KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs" to /etc/default/kubelet and the JSON to /etc/docker/daemon.json on the worker nodes too.
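
For clarity, the kubelet part of that is just a one-line override (the daemon.json contents are whatever the earlier post specifies, so I won't repeat them here), and both services then want a restart:

$ cat /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs"

sudo systemctl restart docker kubelet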

[Addendum. I upgraded an Ubuntu box from 20 to 22 and had my flannel and proxy pods on that box constantly crashing. Proxy was reporting "nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`". Following the instructions in the previous paragraph a second time solved the problem as the upgrade had clearly blatted some config.]

I checked the services [SO] with:

systemctl list-unit-files | grep running | grep -i kube 

which showed the kubelet is running (enabled just means it will start on the next reboot; you can have one without the other) and

sudo systemctl status kubelet
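
If you want the running/enabled distinction spelled out:

systemctl is-active kubelet
systemctl is-enabled kubelet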

Things seemed OK:

$ kubectl get pods --all-namespaces 
NAMESPACE      NAME                          READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-7zmz8         1/1     Running   4 (88m ago)   89m
kube-flannel   kube-flannel-ds-sh4nk         1/1     Running   9 (64m ago)   86m
kube-system    coredns-668d6bf9bc-748ql      1/1     Running   0             94m
kube-system    coredns-668d6bf9bc-zcxfp      1/1     Running   0             94m
kube-system    etcd-nuc                      1/1     Running   8             94m
kube-system    kube-apiserver-nuc            1/1     Running   8             94m
kube-system    kube-controller-manager-nuc   1/1     Running   6             94m
kube-system    kube-proxy-dd4gc              1/1     Running   0             94m
kube-system    kube-proxy-hlzhj              1/1     Running   9 (63m ago)   86m
kube-system    kube-scheduler-nuc            1/1     Running   6             94m

Note the nuc node name indicates it's running on my cluster's master and the other pods (flannel, coredns and kube-proxy) have an instance on each node in the cluster.

Note also that we'd expect two Flannel pods as there are two nodes in the cluster.

It's worth noting at this point that kubectl is a client-side tool. In fact, you won't be able to see the master until you scp /etc/kubernetes/admin.conf from the master into your local ~/.kube/config.
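
Something along these lines - the user is made up, and admin.conf is root-only on the master, so you may need to copy it somewhere readable first:

scp alice@nuc:/etc/kubernetes/admin.conf ~/.kube/config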

Contrast this with kubeadm, which is a cluster-side tool.
