Conversation

@pperiyasamy
Member

Manual cherry-pick for 4.20 PR #2835. No conflicts.

martinkennelly and others added 7 commits November 7, 2025 13:19
…odes

Previously, we checked whether the next hop IP is valid against the
current set of all nodes, but every EgressIP is assigned to only a
subset of the total nodes.

Stale LRPs could occur if a node hosted EIP-served pods, its
ovnkube-controller was down, and the EIP moved
to a new node while that controller was down.

Signed-off-by: Martin Kennelly <[email protected]>
Signed-off-by: Periyasamy Palanisamy <[email protected]>
(cherry picked from commit d2b7cbe)
(cherry picked from commit 688885f)
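The per-EIP validity check described above can be sketched as follows. This is a minimal illustration, not the real ovn-kubernetes code: the type and function names (`eipAssignments`, `isValidNextHop`) are hypothetical.

```go
package main

import "fmt"

// eipAssignments maps an EgressIP name to the set of next-hop IPs for the
// nodes that currently host one of its assignments. The fix validates an LRP
// next hop against this per-EIP subset rather than against all cluster nodes.
type eipAssignments map[string]map[string]bool

// isValidNextHop reports whether nextHop is a legitimate next hop for eip.
// Checking against every node in the cluster, instead of only the nodes this
// EIP is assigned to, is what allowed stale LRPs to survive.
func isValidNextHop(a eipAssignments, eip, nextHop string) bool {
	hosts, ok := a[eip]
	return ok && hosts[nextHop]
}

func main() {
	a := eipAssignments{"eip-1": {"10.0.0.2": true}} // eip-1 hosted on the node behind 10.0.0.2
	fmt.Println(isValidNextHop(a, "eip-1", "10.0.0.2")) // true: current assignment
	fmt.Println(isValidNextHop(a, "eip-1", "10.0.0.9")) // false: stale next hop, its LRP should be removed
}
```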
In IC mode, there is no expectation that we can fetch
a remote node's LSP; therefore, skipping it (continue)
caused us to skip generating valid next
hops for the remote node.

Later, during LRP sync, when a valid next hop is inspected,
we fail to recognize it as valid and remove it.

Handlers re-add it shortly afterward.

Signed-off-by: Martin Kennelly <[email protected]>
(cherry picked from commit a46e0e7)
(cherry picked from commit 6f6edf3)
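The IC-mode behavior described above can be sketched as follows: a remote node whose LSP cannot be resolved should still contribute its known next hop instead of being skipped. All names here (`node`, `validNextHops`, the IP values) are illustrative, not the real ovn-kubernetes identifiers.

```go
package main

import "fmt"

type node struct {
	name      string
	remote    bool   // in another interconnect zone; its LSP is not in our local DB
	transitIP string // known next hop for remote nodes
	lspIP     string // only resolvable for local nodes
}

// validNextHops builds the set of valid next hops across local and remote
// nodes. Before the fix, a bare `continue` on remote nodes dropped their
// valid next hops, so LRP sync later removed them as "invalid".
func validNextHops(nodes []node) []string {
	var hops []string
	for _, n := range nodes {
		if n.remote {
			hops = append(hops, n.transitIP) // keep the remote node's next hop
			continue
		}
		hops = append(hops, n.lspIP)
	}
	return hops
}

func main() {
	nodes := []node{
		{name: "node-1", lspIP: "10.244.0.2"},
		{name: "node-2", remote: true, transitIP: "100.88.0.3"},
	}
	fmt.Println(validNextHops(nodes)) // [10.244.0.2 100.88.0.3]
}
```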
Signed-off-by: Martin Kennelly <[email protected]>
(cherry picked from commit 2735d6b)
(cherry picked from commit 562c749)
Before this change, we did not emit an error log
for stale next hops.

Signed-off-by: Martin Kennelly <[email protected]>
(cherry picked from commit ab082bd)
(cherry picked from commit c337b16)
Scenario:
- Nodes: node-1, node-2, node-3
- Egress IPs: EIP-1
- Pods: pod1 on node-1, pod2 on node-3 (pods are created via deployment replicas)
- Egress-assignable nodes: node-1, node-2
- EIP-1 assigned to node-1

During a simultaneous reboot of node-1 and node-2, EIP-1 failed over to node-2 and
ovnkube-controller restarted at nearly the same time:

1) EIP-1 was reassigned to node-2 by the cluster manager.
2) EgressIP sync ran for EIP-1 with the stale status; it nevertheless
   cleaned up the SNATs/LRPs referring to node-1 because the pod IPs
   were outdated (pods are recreated when their nodes reboot).
3) pod1/pod2 Add events arrived while the informer cache still had the
   old EIP status, so new SNATs/LRPs were created pointing to node-1.
4) The EIP-1 Add event arrived with the new status; entries for node-2
   were added/updated.
5) Result: stale SNATs and LRPs with stale nexthops for node-1 remained.

Fix:
- Populate pod EIP status during EgressIP sync so podAssignment has
  accurate egressStatuses.
- Reconcile stale assignments using podAssignment (egressStatuses) when
  the informer cache is not up to date, ensuring SNAT/LRP for the
  previously assigned node are corrected.
- Remove stale EIP SNAT entries for remote-zone pods accordingly.
- Add coverage for simultaneous EIP failover and controller restart.

Signed-off-by: Periyasamy Palanisamy <[email protected]>
(cherry picked from commit 1667a51)
(cherry picked from commit 7060af6)
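The reconciliation step in the fix above can be sketched as follows: compare the egress statuses cached in podAssignment against the EgressIP's latest status and report the nodes whose SNAT/LRP entries are now stale. The function and map shapes here are hypothetical, not the real ovn-kubernetes ones.

```go
package main

import (
	"fmt"
	"sort"
)

// staleAssignedNodes returns the nodes whose cached assignment no longer
// matches the EgressIP's latest status: either the node dropped out of the
// status entirely, or it now carries a different egress IP. Both maps are
// node name -> egress IP.
func staleAssignedNodes(cachedStatuses, latestStatuses map[string]string) []string {
	var stale []string
	for node, ip := range cachedStatuses {
		if cur, ok := latestStatuses[node]; !ok || cur != ip {
			stale = append(stale, node)
		}
	}
	sort.Strings(stale) // deterministic output for the example
	return stale
}

func main() {
	cached := map[string]string{"node-1": "192.0.2.10"} // what the pods were programmed with
	latest := map[string]string{"node-2": "192.0.2.10"} // EIP-1 failed over to node-2
	fmt.Println(staleAssignedNodes(cached, latest))     // [node-1]: its SNAT/LRP must be removed
}
```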
During an ovnkube-controller restart, pod add/remove events
for EgressIP-served pods may occur before the factory.egressIPPod
handler is registered in the watch factory. As a result, the EIP
controller is never able to handle the pod delete, leaving a stale
logical router policy (LRP) entry.

Scenario:

1. ovnkube-controller starts.
2. The EIP controller processes the namespace add event
   (oc.WatchEgressIPNamespaces) and creates an LRP entry for the served pod.
3. The pod is deleted.
4. The factory.egressIPPod handler registration happens afterward via
   oc.WatchEgressIPPods.
5. The pod delete event is never processed by the EIP controller.

Fix:

1. Start oc.WatchEgressIPPods followed by oc.WatchEgressIPNamespaces.
2. Sync EgressIPs before registering the factory.egressIPPod event handler.
3. Remove the EgressIP sync for factory.EgressIPNamespaceType, which is no longer needed.

Signed-off-by: Periyasamy Palanisamy <[email protected]>
(cherry picked from commit 8975b00)
(cherry picked from commit b8303a2)
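The startup ordering in the fix above can be sketched as follows. The point is that the full EgressIP sync runs before the pod event handler is registered, and the pod watch starts before the namespace watch. The `controller` type and method names are illustrative only.

```go
package main

import "fmt"

// controller records the order in which its startup steps run, standing in
// for the real handler-registration sequence.
type controller struct{ log []string }

func (c *controller) syncEgressIPs()           { c.log = append(c.log, "syncEgressIPs") }
func (c *controller) watchEgressIPPods()       { c.log = append(c.log, "watchEgressIPPods") }
func (c *controller) watchEgressIPNamespaces() { c.log = append(c.log, "watchEgressIPNamespaces") }

func (c *controller) start() {
	c.syncEgressIPs()           // sync first, so stale state is repaired before any handler fires
	c.watchEgressIPPods()       // pod handler next, so pod deletes are never missed...
	c.watchEgressIPNamespaces() // ...and namespace events always see a live pod handler
}

func main() {
	c := &controller{}
	c.start()
	fmt.Println(c.log)
}
```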
When the EIP controller cleans up a stale EIP assignment for a pod,
it also removes the pod object from the podAssignment cache.
This is incorrect, as it prevents the EIP controller from processing
the subsequent pod delete event.

Scenario:

1. pod-1 is served by eip-1, both hosted on node1.
2. node1’s ovnkube-controller restarts.
3. Pod add event is received by the EIP controller — no changes.
4. eip-1 moves from node1 to node0.
5. The EIP controller receives the eip-1 add event.
6. eip-1 cleans up pod-1’s stale assignment (SNAT and LRP) for node1,
   but removes the pod object from the podAssignment cache when no other
   assignments are found.
7. The EIP controller programs the LRP entry with node0’s transit IP as
   the next hop, but the pod assignment cache is not updated with new
   podAssignmentState.
8. The pod delete event is received by the EIP controller but ignored,
   since the pod object is missing from the assignment cache.

This commit fixes the issue by adding the podAssignmentState back into
the podAssignment cache at step 7.

Signed-off-by: Periyasamy Palanisamy <[email protected]>
(cherry picked from commit 16dedd1)
(cherry picked from commit f4e2c17)
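The cache fix above can be sketched as follows: after cleaning the stale assignment and programming the new LRP (step 7), the pod's assignment state is written back into the cache instead of being left deleted, so the later pod delete event (step 8) still finds it. The `podAssignmentState` and `controllerCache` types are hypothetical stand-ins for the real structures.

```go
package main

import "fmt"

type podAssignmentState struct {
	egressStatuses map[string]string // node -> egress IP currently programmed
}

type controllerCache struct {
	podAssignment map[string]*podAssignmentState // keyed by "namespace/pod"
}

// reprogram models step 7: program the LRP for the new node and re-add the
// pod's state to the cache. Before the fix, cleanup could delete the entry
// and never restore it, so the pod delete event was ignored.
func (c *controllerCache) reprogram(podKey, node, eip string) {
	c.podAssignment[podKey] = &podAssignmentState{
		egressStatuses: map[string]string{node: eip},
	}
}

func main() {
	c := &controllerCache{podAssignment: map[string]*podAssignmentState{}}
	c.reprogram("ns/pod-1", "node0", "192.0.2.10")
	_, tracked := c.podAssignment["ns/pod-1"]
	fmt.Println(tracked) // true: the delete event will find the pod in the cache
}
```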
@pperiyasamy
Member Author

/payload 4.19 ci blocking
/payload 4.19 nightly blocking

@openshift-ci
Contributor

openshift-ci bot commented Nov 7, 2025

@pperiyasamy: trigger 5 job(s) of type blocking for the ci release of OCP 4.19

  • periodic-ci-openshift-release-master-ci-4.19-upgrade-from-stable-4.18-e2e-aws-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.19-upgrade-from-stable-4.18-e2e-azure-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.19-e2e-gcp-ovn-upgrade
  • periodic-ci-openshift-hypershift-release-4.19-periodics-e2e-aks
  • periodic-ci-openshift-hypershift-release-4.19-periodics-e2e-aws-ovn

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/0842cf70-bbe0-11f0-9806-4a3750713955-0

trigger 11 job(s) of type blocking for the nightly release of OCP 4.19

  • periodic-ci-openshift-hypershift-release-4.19-periodics-e2e-azure-aks-ovn-conformance
  • periodic-ci-openshift-hypershift-release-4.19-periodics-e2e-aws-ovn-conformance
  • periodic-ci-openshift-release-master-nightly-4.19-e2e-aws-ovn-serial
  • periodic-ci-openshift-release-master-ci-4.19-e2e-aws-upgrade-ovn-single-node
  • periodic-ci-openshift-release-master-ci-4.19-e2e-aws-ovn-techpreview
  • periodic-ci-openshift-release-master-ci-4.19-e2e-aws-ovn-techpreview-serial
  • periodic-ci-openshift-release-master-nightly-4.19-e2e-aws-ovn-upgrade-fips
  • periodic-ci-openshift-release-master-ci-4.19-e2e-azure-ovn-upgrade
  • periodic-ci-openshift-release-master-ci-4.19-upgrade-from-stable-4.18-e2e-gcp-ovn-rt-upgrade
  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-bm
  • periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ipi-ovn-ipv6

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/0842cf70-bbe0-11f0-9806-4a3750713955-1

@pperiyasamy
Member Author

/retest e2e-azure-ovn-upgrade

@openshift-ci
Contributor

openshift-ci bot commented Nov 10, 2025

@pperiyasamy: The /retest command does not accept any targets.
The following commands are available to trigger required jobs:

/test 4.19-upgrade-from-stable-4.18-e2e-aws-ovn-upgrade
/test 4.19-upgrade-from-stable-4.18-e2e-gcp-ovn-rt-upgrade
/test 4.19-upgrade-from-stable-4.18-images
/test e2e-aws-ovn
/test e2e-aws-ovn-edge-zones
/test e2e-aws-ovn-hypershift
/test e2e-aws-ovn-local-gateway
/test e2e-aws-ovn-local-to-shared-gateway-mode-migration
/test e2e-aws-ovn-serial
/test e2e-aws-ovn-shared-to-local-gateway-mode-migration
/test e2e-aws-ovn-upgrade
/test e2e-aws-ovn-upgrade-local-gateway
/test e2e-aws-ovn-windows
/test e2e-azure-ovn-upgrade
/test e2e-gcp-ovn
/test e2e-gcp-ovn-techpreview
/test e2e-metal-ipi-ovn-dualstack
/test e2e-metal-ipi-ovn-dualstack-bgp
/test e2e-metal-ipi-ovn-dualstack-bgp-local-gw
/test e2e-metal-ipi-ovn-ipv6
/test gofmt
/test images
/test lint
/test okd-scos-images
/test qe-perfscale-payload-control-plane-6nodes
/test unit

The following commands are available to trigger optional jobs:

/test 4.19-upgrade-from-stable-4.18-e2e-aws-ovn-upgrade-ipsec
/test e2e-agent-compact-ipv4
/test e2e-aws-ovn-clusternetwork-cidr-expansion
/test e2e-aws-ovn-fdp-qe
/test e2e-aws-ovn-hypershift-conformance-techpreview
/test e2e-aws-ovn-hypershift-kubevirt
/test e2e-aws-ovn-serial-ipsec
/test e2e-aws-ovn-single-node-techpreview
/test e2e-aws-ovn-techpreview
/test e2e-aws-ovn-upgrade-ipsec
/test e2e-aws-ovn-virt
/test e2e-azure-ovn
/test e2e-azure-ovn-techpreview
/test e2e-metal-ipi-ovn-dualstack-local-gateway
/test e2e-metal-ipi-ovn-dualstack-local-gateway-techpreview
/test e2e-metal-ipi-ovn-dualstack-techpreview
/test e2e-metal-ipi-ovn-ipv4
/test e2e-metal-ipi-ovn-ipv6-techpreview
/test e2e-metal-ipi-ovn-techpreview
/test e2e-openstack-ovn
/test e2e-ovn-hybrid-step-registry
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-techpreview
/test e2e-vsphere-windows
/test okd-scos-e2e-aws-ovn
/test openshift-e2e-gcp-ovn-techpreview-upgrade
/test ovncore-perfscale-aws-ovn-large-cluster-density-v2
/test ovncore-perfscale-aws-ovn-large-node-density-cni
/test ovncore-perfscale-aws-ovn-xlarge-cluster-density-v2
/test ovncore-perfscale-aws-ovn-xlarge-node-density-cni
/test perfscale-aws-ovn-medium-cluster-density-v2
/test perfscale-aws-ovn-medium-node-density-cni
/test perfscale-aws-ovn-small-cluster-density-v2
/test perfscale-aws-ovn-small-node-density-cni
/test qe-perfscale-aws-ovn-small-udn-density-l2
/test qe-perfscale-aws-ovn-small-udn-density-l3
/test security

Use /test all to run the following jobs that were automatically triggered:

pull-ci-openshift-ovn-kubernetes-release-4.19-4.19-upgrade-from-stable-4.18-e2e-aws-ovn-upgrade
pull-ci-openshift-ovn-kubernetes-release-4.19-4.19-upgrade-from-stable-4.18-e2e-aws-ovn-upgrade-ipsec
pull-ci-openshift-ovn-kubernetes-release-4.19-4.19-upgrade-from-stable-4.18-e2e-gcp-ovn-rt-upgrade
pull-ci-openshift-ovn-kubernetes-release-4.19-4.19-upgrade-from-stable-4.18-images
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-aws-ovn
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-aws-ovn-edge-zones
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-aws-ovn-hypershift
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-aws-ovn-local-gateway
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-aws-ovn-local-to-shared-gateway-mode-migration
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-aws-ovn-serial
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-aws-ovn-shared-to-local-gateway-mode-migration
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-aws-ovn-upgrade
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-aws-ovn-upgrade-local-gateway
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-aws-ovn-windows
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-azure-ovn-upgrade
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-gcp-ovn
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-gcp-ovn-techpreview
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-metal-ipi-ovn-dualstack
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-metal-ipi-ovn-dualstack-bgp
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-metal-ipi-ovn-dualstack-bgp-local-gw
pull-ci-openshift-ovn-kubernetes-release-4.19-e2e-metal-ipi-ovn-ipv6
pull-ci-openshift-ovn-kubernetes-release-4.19-gofmt
pull-ci-openshift-ovn-kubernetes-release-4.19-images
pull-ci-openshift-ovn-kubernetes-release-4.19-lint
pull-ci-openshift-ovn-kubernetes-release-4.19-okd-scos-e2e-aws-ovn
pull-ci-openshift-ovn-kubernetes-release-4.19-okd-scos-images
pull-ci-openshift-ovn-kubernetes-release-4.19-security
pull-ci-openshift-ovn-kubernetes-release-4.19-unit

In response to this:

/retest e2e-azure-ovn-upgrade

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@pperiyasamy
Member Author

/test e2e-azure-ovn-upgrade

@pperiyasamy
Member Author

/payload-job periodic-ci-openshift-release-master-ci-4.19-e2e-aws-upgrade-ovn-single-node

@openshift-ci
Contributor

openshift-ci bot commented Nov 10, 2025

@pperiyasamy: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-master-ci-4.19-e2e-aws-upgrade-ovn-single-node

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/a07c8740-be42-11f0-9639-d3bad25109c8-0

@pperiyasamy pperiyasamy changed the title [release-4.19] Fix stale EIP assignments during failover and controller restart [release-4.19] OCPBUGS-64854: Fix stale EIP assignments during failover and controller restart Nov 10, 2025
@openshift-ci-robot openshift-ci-robot added jira/severity-critical Referenced Jira bug's severity is critical for the branch this PR is targeting. jira/valid-reference Indicates that this PR references a valid Jira ticket of any type. jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Nov 10, 2025
@openshift-ci-robot
Contributor

@pperiyasamy: This pull request references Jira Issue OCPBUGS-64854, which is invalid:

  • expected dependent Jira Issue OCPBUGS-63686 to be in one of the following states: VERIFIED, RELEASE PENDING, CLOSED (ERRATA), CLOSED (CURRENT RELEASE), CLOSED (DONE), CLOSED (DONE-ERRATA), but it is POST instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

Manual cherry-pick for 4.20 PR #2835. No conflicts.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@huiran0826

/verified by @huiran0826

@openshift-ci-robot openshift-ci-robot added the verified Signifies that the PR passed pre-merge verification criteria label Nov 11, 2025
@openshift-ci-robot
Contributor

@huiran0826: This PR has been marked as verified by @huiran0826.

In response to this:

/verified by @huiran0826


@jcaamano
Contributor

/override ci/prow/lint
/lgtm
/approve
/label backport-risk-assessed

@openshift-ci openshift-ci bot added the backport-risk-assessed Indicates a PR to a release branch has been evaluated and considered safe to accept. label Nov 12, 2025
@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label Nov 12, 2025
@openshift-ci
Contributor

openshift-ci bot commented Nov 12, 2025

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jcaamano, pperiyasamy

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 12, 2025
@openshift-ci
Contributor

openshift-ci bot commented Nov 12, 2025

@jcaamano: Overrode contexts on behalf of jcaamano: ci/prow/lint

In response to this:

/override ci/prow/lint
/lgtm
/approve
/label backport-risk-assessed


@openshift-ci
Contributor

openshift-ci bot commented Nov 12, 2025

@pperiyasamy: all tests passed!

Full PR test history. Your PR dashboard.


@pperiyasamy
Member Author

/jira refresh

@openshift-ci-robot openshift-ci-robot added jira/valid-bug Indicates that a referenced Jira bug is valid for the branch this PR is targeting. and removed jira/invalid-bug Indicates that a referenced Jira bug is invalid for the branch this PR is targeting. labels Nov 13, 2025
@openshift-ci-robot
Contributor

@pperiyasamy: This pull request references Jira Issue OCPBUGS-64854, which is valid. The bug has been moved to the POST state.

7 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.19.z) matches configured target version for branch (4.19.z)
  • bug is in the state New, which is one of the valid states (NEW, ASSIGNED, POST)
  • release note type set to "Release Note Not Required"
  • dependent bug Jira Issue OCPBUGS-63686 is in the state Verified, which is one of the valid states (VERIFIED, RELEASE PENDING, CLOSED (ERRATA), CLOSED (CURRENT RELEASE), CLOSED (DONE), CLOSED (DONE-ERRATA))
  • dependent Jira Issue OCPBUGS-63686 targets the "4.20.z" version, which is one of the valid target versions: 4.20.0, 4.20.z
  • bug has dependents

Requesting review from QA contact:
/cc @huiran0826

In response to this:

/jira refresh


@openshift-ci openshift-ci bot requested a review from huiran0826 November 13, 2025 11:45
@openshift-merge-bot openshift-merge-bot bot merged commit 5f70205 into openshift:release-4.19 Nov 13, 2025
28 of 29 checks passed
@openshift-ci-robot
Contributor

@pperiyasamy: Jira Issue Verification Checks: Jira Issue OCPBUGS-64854
✔️ This pull request was pre-merge verified.
✔️ All associated pull requests have merged.
✔️ All associated, merged pull requests were pre-merge verified.

Jira Issue OCPBUGS-64854 has been moved to the MODIFIED state and will move to the VERIFIED state when the change is available in an accepted nightly payload. 🕓

In response to this:

Manual cherry-pick for 4.20 PR #2835. No conflicts.

