OKD-294: Migrate runtime from runc to crun on an upgrade for OKD #5389
Conversation
@Prashanth684: This pull request references OKD-294, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the branch this PR targets: the story was expected to target the "4.21.0" version, but no target version was set.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: Prashanth684. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 493b483 to 2c63371.
/test bootstrap-unit
/test okd-scos-e2e-aws-ovn
CentOS Stream 10 dropped the runc package. For now, OKD users follow the workaround of editing the MC to point to crun before the upgrade; this change performs the migration so no manual intervention is needed. It borrows from the implementation in #4635. This PR (sketched below):
- checks for the presence of the configmap
- if MCO is built for OKD, edits the MC so the runtime is crun
- deletes the configmap
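As a rough orientation for reviewers, a minimal sketch of that three-step flow using plain client-go might look like the following. The function shape, namespace, and ConfigMap name are assumptions for illustration, not the PR's actual wiring through the MCO controllers.

```go
package migration

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// migrateRuncToCrun sketches the described flow: look for a marker
// ConfigMap, rewrite the runtime to crun while it exists, then delete
// the ConfigMap so the migration runs only once.
func migrateRuncToCrun(ctx context.Context, client kubernetes.Interface, ns, cmName string) error {
	// 1. Check for the presence of the configmap.
	if _, err := client.CoreV1().ConfigMaps(ns).Get(ctx, cmName, metav1.GetOptions{}); err != nil {
		if apierrors.IsNotFound(err) {
			return nil // no marker: nothing to migrate
		}
		return err
	}

	// 2. If MCO is built for OKD, edit the MC so the runtime is crun.
	//    (Elided; the review threads below discuss how the MC is
	//    located and whether editing or deleting it is preferable.)

	// 3. Delete the configmap so subsequent syncs skip the migration.
	err := client.CoreV1().ConfigMaps(ns).Delete(ctx, cmName, metav1.DeleteOptions{})
	if apierrors.IsNotFound(err) {
		return nil
	}
	return err
}
```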
Force-pushed from 2c63371 to 7700e81.
/retest-required
```go
// Only process master and worker pools for the migration
for _, pool := range pools {
	if pool.Name != ctrlcommon.MachineConfigPoolMaster && pool.Name != ctrlcommon.MachineConfigPoolWorker {
		continue
	}
```
From the context in #4635 (comment), I'm gathering that the intention is to only target default MCPs. What about the arbiter MCP? Should it also be included in this list of default MCPs?
machine-config-operator/pkg/controller/common/constants.go, lines 66 to 67 at a1ac217:
```go
// MachineConfigPoolArbiter is the MachineConfigPool name given to the arbiter
MachineConfigPoolArbiter = "arbiter"
```
cc @yuqi-zhang Since it looks like you had the original suggestion to scope to default MCPs.
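To make the question concrete, a hypothetical variant of the quoted check that also treats arbiter as a default MCP could look like this; `ctrlcommon` is the package quoted above, and whether Arbiter belongs in the list is exactly the open question in this thread.

```go
import ctrlcommon "github.com/openshift/machine-config-operator/pkg/controller/common"

// isDefaultPool reports whether a pool is one of the built-in MCPs the
// migration should touch; including Arbiter here is the suggestion
// under discussion, not what the PR currently does.
func isDefaultPool(name string) bool {
	switch name {
	case ctrlcommon.MachineConfigPoolMaster,
		ctrlcommon.MachineConfigPoolWorker,
		ctrlcommon.MachineConfigPoolArbiter: // "arbiter"
		return true
	}
	return false
}
```

The quoted loop body would then reduce to `if !isDefaultPool(pool.Name) { continue }`.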
@yuqi-zhang do we need to consider the arbiter here? or are we good with the changes as is?
I think arbiter probably isn't relevant here, since 2-node-with-fencing is not yet GA'ed and even if it was, I don't think there would be any OKD overlap with it atm for upgrades, so we should be good.
So basically this is "forcing" any OKD clusters to use crun on upgrade since runc is gone. Are installs with runc still possible if you set it explicitly (and is there a guard against it if so? I forget if we deprecated it)? Is RHEL10 also dropping runc? Maybe we need this generally speaking. cc @sdodson
Yes, RHEL10 dropped runc, but AFAIK the plan is to keep building it for OCP, as we're hesitant to drop runc until crun sees more adoption, even though crun has been the default since 4.18.
@Prashanth684: The following tests failed.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
```go
	}

	// Get the MachineConfig name for this pool
	mcName := fmt.Sprintf("00-override-%s-generated-crio-default-container-runtime", pool.Name)
```
Hmm, where is this MachineConfig coming from? If you're trying to override what #4635 introduced, that wrote to 99-%s-generated-crio-default-container-runtime, I'm pretty sure. In our case, instead of playing around with MCs and overrides, we should just remove any runc references in the MC and use the system default, I think? (which would be crun)
This is coming from #4715, which was a change introduced in 4.17 alone, I believe.
Oh right, forgot that. Would we be able to just delete 00-override-pool-generated-crio-default-container-runtime on behalf of the user? I was under the impression that deleting the MC would reset to system defaults, which would be crun in this case. That comes with the benefit of not leaving an additional MC in the cluster, and we could remove this code block when we branch for 4.22, since all OKD clusters that went through 4.21 would have removed the MC entirely.
(Unless there's a need to explicitly set this on OKD)
> I was under the impression deleting the MC would reset to system defaults

I thought the same as well... but folks in the OKD community seem to have tried that, and it looks like deleting it wasn't enough to reset to crun. Let me double-check that behavior.
Ack, editing the MC should be OK as well; it's just that it'll be left in an unmanaged state if we ever want to change it in the future, so if we can get deleting to work, that would probably be better.
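For reference, the delete-instead-of-edit approach discussed here could be sketched as follows, assuming the machineconfiguration clientset from openshift/client-go; the function shape and error handling are illustrative, not the PR's code.

```go
import (
	"context"
	"fmt"

	mcfgclientset "github.com/openshift/client-go/machineconfiguration/clientset/versioned"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// deleteOverrideMC removes the per-pool override MC so that, if deletion
// does reset the runtime, the system default (crun) applies again.
// MachineConfigs are cluster-scoped, so no namespace is involved.
func deleteOverrideMC(ctx context.Context, client mcfgclientset.Interface, poolName string) error {
	mcName := fmt.Sprintf("00-override-%s-generated-crio-default-container-runtime", poolName)
	err := client.MachineconfigurationV1().MachineConfigs().Delete(ctx, mcName, metav1.DeleteOptions{})
	if apierrors.IsNotFound(err) {
		return nil // already gone; nothing to reset
	}
	return err
}
```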
```go
ctrl.clusterVersionLister = clusterVersionInformer.Lister()
ctrl.clusterVersionListerSynced = clusterVersionInformer.Informer().HasSynced
ctrl.queue.Add(forceSyncOnUpgrade)
```
Have you observed this to be necessary? The new controller is expected to re-generate all the MCs, so they should always be synced once the new controller is deployed.
Yes - I noticed that before I added this, upgrading to an image with these changes did not change the MC at all.
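The quoted snippet amounts to seeding the controller's workqueue with a sentinel key at startup so the sync handler fires at least once. A minimal sketch with a standard client-go workqueue (the key name mirrors the snippet; the rest is assumed):

```go
import "k8s.io/client-go/util/workqueue"

const forceSyncOnUpgrade = "force-sync-on-upgrade"

// newSeededQueue builds a rate-limited workqueue pre-loaded with a
// sentinel key, so one sync runs after the new controller is deployed
// even if no MC/MCP event fires in the meantime.
func newSeededQueue() workqueue.RateLimitingInterface {
	q := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	q.Add(forceSyncOnUpgrade)
	return q
}
```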
```diff
  _, ok := cfg.GetAnnotations()[ctrlcommon.MCNameSuffixAnnotationKey]
  arr := strings.Split(managedKey, "-")
- // the first managed key value 99-poolname-generated-containerruntime does not have a suffix
+ // the first managed key value 00-override-poolname-generated-containerruntime does not have a suffix
```
Did we change this at some point?
#4715, which creates this MC, was introduced in 4.17.
Ack, but this comment specifically refers to the MCs generated from containerruntimeconfig objects, which come with suffixes.
We don't generate multiple 00-override-poolname-generated-containerruntime MCs anyway.
Ah, right... let me fix that up.
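For readers unfamiliar with the convention under discussion: the first generated MC uses the bare managed key, and later duplicates append a numeric suffix tracked via the MCNameSuffixAnnotationKey annotation. A rough illustration (not MCO's actual helper):

```go
import (
	"strconv"
	"strings"
)

// hasNumericSuffix reports whether a managed key such as
// "99-worker-generated-containerruntime-1" carries a suffix; the bare
// first key ("99-worker-generated-containerruntime") does not.
func hasNumericSuffix(managedKey string) bool {
	arr := strings.Split(managedKey, "-")
	_, err := strconv.Atoi(arr[len(arr)-1])
	return err == nil
}
```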