
Commit cf8f903 (1 parent: 835094a)

add warning note for downgrade error

Signed-off-by: Ian Cardoso <[email protected]>


README.md

Lines changed: 28 additions & 0 deletions
@@ -62,6 +62,34 @@ spec:

The upgrade controller should watch for this plan and execute the upgrade on the labeled nodes. For more information about system-upgrade-controller and plan options please visit [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller) official repo.
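
For reference, a plan of this kind might look roughly like the following sketch (the plan name, node selector, service account, and version are illustrative; adjust them for your cluster and see the upstream repo for the full schema):

```yaml
# Illustrative sketch of a server upgrade plan; not taken verbatim from this repo.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: k3s-server
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.27.4+k3s1
```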

## Downgrade Prevention
Kubernetes does not support downgrades of control-plane components. Starting with the 2023-07 releases ([v1.27.4+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.27.4%2Bk3s1), [v1.26.7+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.26.7%2Bk3s1), [v1.25.12+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.25.12%2Bk3s1), [v1.24.16+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.24.16%2Bk3s1)), the k3s-upgrade image used by upgrade plans will therefore refuse to downgrade K3s, failing the plan and leaving your nodes cordoned.

Here is an example cluster, showing failed upgrade pods and cordoned nodes:
```console
ubuntu@user:~$ kubectl get pods -n system-upgrade
NAME                                                              READY   STATUS    RESTARTS   AGE
apply-k3s-server-on-ip-172-31-0-16-with-7af95590a5af8e8c3-2cdc6   0/1     Error     0          9m25s
apply-k3s-server-on-ip-172-31-10-23-with-7af95590a5af8e8c-9xvwg   0/1     Error     0          14m
apply-k3s-server-on-ip-172-31-13-213-with-7af95590a5af8e8-8j72v   0/1     Error     0          18m
system-upgrade-controller-7c4b84d5d9-kkzr6                        1/1     Running   0          20m
ubuntu@user:~$ kubectl get nodes
NAME               STATUS                     ROLES                       AGE   VERSION
ip-172-31-0-16     Ready,SchedulingDisabled   control-plane,etcd,master   19h   v1.27.4+k3s1
ip-172-31-10-23    Ready,SchedulingDisabled   control-plane,etcd,master   19h   v1.27.4+k3s1
ip-172-31-13-213   Ready,SchedulingDisabled   control-plane,etcd,master   19h   v1.27.4+k3s1
ip-172-31-2-13     Ready                      <none>                      19h   v1.27.4+k3s1
```
You can return your cordoned nodes to service by either of the following methods:

* Change the version or channel on your plan to target a release that is the same as or newer than what is currently running on the cluster, so that the plan succeeds.
* Delete the plan and manually uncordon the nodes.

Use `kubectl get plan -n system-upgrade` to find the plan name, then `kubectl delete plan -n system-upgrade PLAN_NAME` to delete it. Once the plan has been deleted, use `kubectl uncordon NODE_NAME` to uncordon each of the nodes. Both options are sketched below.
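
For the first option, one way to retarget an existing plan is to patch its `spec.version` (a rough sketch; the plan name `k3s-server` is illustrative, and `v1.27.4+k3s1` is the version already running in the example above):

```console
ubuntu@user:~$ kubectl patch plan -n system-upgrade k3s-server --type merge -p '{"spec":{"version":"v1.27.4+k3s1"}}'
plan.upgrade.cattle.io/k3s-server patched
```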
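
For the second option, the sequence might look like the following sketch (the plan name and command output are illustrative; the node names are the cordoned servers from the example above):

```console
ubuntu@user:~$ kubectl get plan -n system-upgrade
NAME         AGE
k3s-server   25m
ubuntu@user:~$ kubectl delete plan -n system-upgrade k3s-server
plan.upgrade.cattle.io "k3s-server" deleted
ubuntu@user:~$ kubectl uncordon ip-172-31-0-16 ip-172-31-10-23 ip-172-31-13-213
node/ip-172-31-0-16 uncordoned
node/ip-172-31-10-23 uncordoned
node/ip-172-31-13-213 uncordoned
```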
# Contact
For bugs, questions, comments, corrections, suggestions, etc., open an issue in
[k3s-io/k3s](//github.com/k3s-io/k3s/issues) with a title starting with `[k3s-upgrade] `.
