```

The upgrade controller watches for this plan and executes the upgrade on the labeled nodes. For more information about the system-upgrade-controller and plan options, see the official [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller) repository.
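Before applying a plan, you typically label the nodes it targets; afterwards you can watch the controller pick the plan up and create upgrade jobs. A minimal sketch, assuming the controller's default `system-upgrade` namespace; the node name and the `k3s-upgrade` label key are illustrative and must match your plan's `nodeSelector`:

```shell
# Label a node so the plan's nodeSelector matches it
# (node name and label key are illustrative)
kubectl label node my-node k3s-upgrade=true

# Watch the plan and the upgrade jobs the controller creates
kubectl -n system-upgrade get plans -o wide
kubectl -n system-upgrade get jobs -w
```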
## Warning

Kubernetes does not support downgrades of control-plane components. Starting with the 2023-07 releases ([v1.27.4+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.27.4%2Bk3s1), [v1.26.7+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.26.7%2Bk3s1), [v1.25.12+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.25.12%2Bk3s1), [v1.24.16+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.24.16%2Bk3s1)), the k3s-upgrade image used by upgrade plans refuses to downgrade K3s, failing the plan and leaving your nodes cordoned.

If you attempt a downgrade, the pods in your cluster will look something like this:
```
ubuntu@user:~$ kubectl get pods -A
NAMESPACE        NAME                                                             READY   STATUS    RESTARTS   AGE
kube-system      coredns-77ccd57875-9ng74                                         1/1     Running   0          19h
kube-system      local-path-provisioner-957fdf8bc-9vwzn                           1/1     Running   0          19h
kube-system      metrics-server-648b5df564-wzbnh                                  1/1     Running   0          19h
kube-system      svclb-traefik-0bda8e84-hbjq8                                     2/2     Running   0          19h
kube-system      svclb-traefik-0bda8e84-jg94l                                     2/2     Running   0          19h
kube-system      svclb-traefik-0bda8e84-qkcs7                                     2/2     Running   0          19h
kube-system      svclb-traefik-0bda8e84-tfhjq                                     2/2     Running   0          19h
kube-system      traefik-64f55bb67d-4mm6s                                         1/1     Running   0          19h
system-upgrade   apply-k3s-server-on-ip-172-31-0-16-with-7af95590a5af8e8c3-2cdc6  0/1     Error     0          9m25s
system-upgrade   apply-k3s-server-on-ip-172-31-10-23-with-7af95590a5af8e8c-9xvwg  0/1     Error     0          14m
system-upgrade   apply-k3s-server-on-ip-172-31-13-213-with-7af95590a5af8e8-8j72v  0/1     Error     0          18m
system-upgrade   system-upgrade-controller-7c4b84d5d9-kkzr6                       1/1     Running   0          20m
```
and your nodes something like this:
```
NAME               STATUS                     ROLES                       AGE   VERSION
ip-172-31-0-16     Ready,SchedulingDisabled   control-plane,etcd,master   19h   v1.27.4+k3s1
ip-172-31-10-23    Ready,SchedulingDisabled   control-plane,etcd,master   19h   v1.27.4+k3s1
ip-172-31-13-213   Ready,SchedulingDisabled   control-plane,etcd,master   19h   v1.27.4+k3s1
ip-172-31-2-13     Ready                      <none>                      19h   v1.27.4+k3s1
```
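To see why the apply jobs failed, you can inspect the logs of the pods they created. A sketch, assuming the controller's `upgrade.cattle.io/plan` job label and a plan named `k3s-server` (substitute your own plan name):

```shell
# Show recent log output from the pods created for the plan's jobs
kubectl -n system-upgrade logs -l upgrade.cattle.io/plan=k3s-server --tail=20
```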
You can return a node to service by uncordoning it with `kubectl uncordon NODE_NAME` (this only lasts until the system-upgrade-controller's next attempt), or by changing the version or channel in your plan to target a release that is the same as or newer than what is currently running on the cluster.
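For example, to uncordon a node by hand, or to retarget the plan at a non-downgrade release; the node name, plan name `k3s-server`, and version here are illustrative:

```shell
# Temporary: lasts only until the controller's next attempt
kubectl uncordon ip-172-31-0-16

# Durable: point the plan at a release the same as or newer than
# the version currently running on the cluster
kubectl -n system-upgrade patch plan k3s-server --type merge \
  -p '{"spec":{"version":"v1.27.4+k3s1"}}'
```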
# Contact
For bugs, questions, comments, corrections, suggestions, etc., open an issue in