The upgrade controller watches for this plan and executes the upgrade on the labeled nodes. For more information about the system-upgrade-controller and plan options, please visit the official [system-upgrade-controller](https://github.com/rancher/system-upgrade-controller) repo.
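
For example, a minimal sketch of deploying the controller and applying a plan (the release-asset URL and the `plan.yaml` filename are assumptions here, not part of this repo; check the system-upgrade-controller releases page for the current install manifests):

```console
ubuntu@user:~$ kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
ubuntu@user:~$ kubectl apply -f plan.yaml
```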


## Downgrade Prevention

Starting with the 2023-07 releases ([v1.27.4+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.27.4%2Bk3s1), [v1.26.7+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.26.7%2Bk3s1), [v1.25.12+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.25.12%2Bk3s1), [v1.24.16+k3s1](https://github.com/k3s-io/k3s-upgrade/releases/tag/v1.24.16%2Bk3s1)), the k3s-upgrade image used by upgrade plans will refuse to downgrade K3s, because Kubernetes does not support downgrades of control-plane components. A plan that attempts a downgrade will fail and leave your nodes cordoned.

Here is an example cluster, showing failed upgrade pods and cordoned nodes:

```console
ubuntu@user:~$ kubectl get pods -n system-upgrade
NAME READY STATUS RESTARTS AGE
apply-k3s-server-on-ip-172-31-0-16-with-7af95590a5af8e8c3-2cdc6 0/1 Error 0 9m25s
apply-k3s-server-on-ip-172-31-10-23-with-7af95590a5af8e8c-9xvwg 0/1 Error 0 14m
apply-k3s-server-on-ip-172-31-13-213-with-7af95590a5af8e8-8j72v 0/1 Error 0 18m
system-upgrade-controller-7c4b84d5d9-kkzr6 1/1 Running 0 20m
ubuntu@user:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-172-31-0-16 Ready,SchedulingDisabled control-plane,etcd,master 19h v1.27.4+k3s1
ip-172-31-10-23 Ready,SchedulingDisabled control-plane,etcd,master 19h v1.27.4+k3s1
ip-172-31-13-213 Ready,SchedulingDisabled control-plane,etcd,master 19h v1.27.4+k3s1
ip-172-31-2-13 Ready <none> 19h v1.27.4+k3s1
```
You can return your cordoned nodes to service by either of the following methods (example commands follow this list):
* Change the version or channel on your plan to target a release that is the same as or newer than what is currently running on the cluster, so that the plan succeeds.
* Delete the plan and manually uncordon the nodes.
  Use `kubectl get plan -n system-upgrade` to find the plan name, then `kubectl delete plan -n system-upgrade PLAN_NAME` to delete it. Once the plan has been deleted, use `kubectl uncordon NODE_NAME` to uncordon each of the nodes.
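
A minimal sketch of both recovery paths, assuming a plan named `k3s-server` (a hypothetical name; substitute the name reported by `kubectl get plan`) and the node names from the example above:

```console
# Option 1: retarget the plan to a version >= what the nodes are running (v1.27.4+k3s1 above)
ubuntu@user:~$ kubectl -n system-upgrade patch plan k3s-server --type merge -p '{"spec":{"version":"v1.27.4+k3s1"}}'

# Option 2: delete the plan, then uncordon each node by hand
ubuntu@user:~$ kubectl delete plan -n system-upgrade k3s-server
ubuntu@user:~$ kubectl uncordon ip-172-31-0-16
```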

# Contact
For bugs, questions, comments, corrections, suggestions, etc., open an issue in
[k3s-io/k3s](https://github.com/k3s-io/k3s/issues) with a title starting with `[k3s-upgrade] `.