
Conversation


@muhamadazmy muhamadazmy commented Nov 13, 2025


@tillrohrmann tillrohrmann left a comment


Thanks for creating this PR @muhamadazmy. The changes look good to me :-)

I left a comment about the removal of the "delete an invocation" API. I think it is ok to remove the API since it was deprecated in v1.4.0 but we need to make sure that it's communicated as part of the release. The Admin API version might have to be bumped if older clients could still call the old invocation deletion API.

/// Terminate an invocation
#[openapi(
summary = "Delete an invocation",
deprecated = true,

This API was deprecated in 1.4.0. If we remove it with 1.6.0, then we certainly need to add a release note to make people aware. Additionally, we need to check the version information endpoint (`/// Version information endpoint`) and decide whether we need to remove support for `AdminApiVersion::V2` because we removed the old delete invocation endpoint. We should then also update `pub const MIN_ADMIN_API_VERSION: AdminApiVersion = AdminApiVersion::V2;` accordingly if a bump is needed.
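The version-gating idea above can be sketched as follows. This is a hypothetical, simplified model: the enum variants, the `is_supported` helper, and the bump to `V3` are assumptions for illustration, not Restate's actual definitions.

```rust
// Hypothetical sketch: an ordered version enum, where removing the old
// delete-invocation endpoint would force the minimum version up one step.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum AdminApiVersion {
    V1,
    V2,
    V3,
}

// If V2 support is dropped along with the old endpoint, the constant
// referenced in the review comment would be bumped from V2 to V3.
pub const MIN_ADMIN_API_VERSION: AdminApiVersion = AdminApiVersion::V3;

fn is_supported(client_version: AdminApiVersion) -> bool {
    client_version >= MIN_ADMIN_API_VERSION
}

fn main() {
    // Older clients still speaking V2 would now be rejected.
    assert!(!is_supported(AdminApiVersion::V2));
    assert!(is_supported(AdminApiVersion::V3));
    println!("min version check ok");
}
```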


cc @slinkydeveloper for double checking whether we can remove this API.

Comment on lines +252 to +255
warn!("Could not append state patching command to Bifrost: {err}");
MetaApiError::Internal(
"Failed sending state patching command to the cluster.".to_owned(),

Is this always true that the patch command could not be appended to Bifrost? What if the ingress is closing after having sent the command out?
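The concern can be sketched as follows. The error type and its variants are hypothetical, purely to illustrate the distinction the reviewer is drawing; they are not Restate's actual error enum.

```rust
// Hypothetical sketch: an append error does not always mean the command
// never reached Bifrost. If the ingress is shutting down after sending,
// the outcome is unknown and the message should not claim a definite failure.
#[derive(Debug)]
enum AppendError {
    // The append definitively failed before reaching the log.
    Rejected(String),
    // The ingress is closing; the command may or may not have committed.
    ShuttingDown,
}

fn error_message(err: &AppendError) -> &'static str {
    match err {
        AppendError::Rejected(_) => "Failed sending state patching command to the cluster.",
        AppendError::ShuttingDown => {
            "Shutting down; the state patching command may or may not have been applied."
        }
    }
}

fn main() {
    assert!(error_message(&AppendError::ShuttingDown).contains("may or may not"));
    assert!(error_message(&AppendError::Rejected("io".into())).starts_with("Failed"));
    println!("error mapping ok");
}
```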

Comment on lines 247 to 248
let result = state
.ingress
.ingest(
partition_key,
IngestRecord::from_parts(envelope.record_keys(), envelope),
)
.await

What about rolling upgrades when there are still a few nodes that don't support the PartitionLeaderService yet?

Contributor Author @muhamadazmy replied:

The ingress client will keep retrying on error until the record is committed. The only improvement I can think of here is to time out if the operation takes too long.
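The suggested improvement can be sketched as a retry loop with an overall deadline. This is a synchronous stand-in, not the real async ingress path: `try_ingest` and the function shape are assumptions for illustration only.

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch: retry the ingest until it commits, but give up
// once an overall deadline has elapsed. `try_ingest` stands in for the
// real ingress call; the actual implementation would be async.
fn ingest_with_deadline(
    mut try_ingest: impl FnMut() -> Result<(), String>,
    deadline: Duration,
) -> Result<(), String> {
    let start = Instant::now();
    loop {
        match try_ingest() {
            Ok(()) => return Ok(()),
            // Still within the deadline: keep retrying.
            Err(_) if start.elapsed() < deadline => continue,
            Err(e) => return Err(format!("timed out after retries: {e}")),
        }
    }
}

fn main() {
    // Simulate an ingest that commits on the third attempt.
    let mut attempts = 0;
    let result = ingest_with_deadline(
        || {
            attempts += 1;
            if attempts < 3 { Err("not committed".into()) } else { Ok(()) }
        },
        Duration::from_secs(1),
    );
    assert!(result.is_ok());
    assert_eq!(attempts, 3);
    println!("committed after {attempts} attempts");
}
```

In the real async code a combinator such as a timeout wrapper around the retrying future would play the role of the deadline check here.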

@muhamadazmy muhamadazmy force-pushed the pr3980 branch 7 times, most recently from a5593c7 to 2c784b6 on November 28, 2025 at 11:51
- `ingress-client` implements the runtime layer that receives ingress traffic, fans it out to the correct partition, and tracks completion. It exposes:
  - `Ingress`, which enforces inflight budgets and resolves partition IDs before sending work downstream.
  - The session subsystem, which batches `IngestRecords`, retries connections, and reports commit status to callers.
- `ingress-core` only ingests records and notifies the caller once the record is "committed" to Bifrost by the PP. This makes it useful for implementing Kafka ingress and other external ingestion paths.
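The "inflight budget" mentioned above can be sketched as a simple permit counter. The struct, method names, and capacity semantics below are assumptions illustrating the idea, not the actual `ingress-client` API.

```rust
// Hypothetical sketch of an inflight budget: a fixed number of permits,
// one acquired per record in flight, released when the PP reports commit.
struct InflightBudget {
    capacity: usize,
    inflight: usize,
}

impl InflightBudget {
    fn new(capacity: usize) -> Self {
        Self { capacity, inflight: 0 }
    }

    // Returns true if a new record may be sent downstream.
    fn try_acquire(&mut self) -> bool {
        if self.inflight < self.capacity {
            self.inflight += 1;
            true
        } else {
            false
        }
    }

    // Called once the record is reported as committed.
    fn release(&mut self) {
        self.inflight = self.inflight.saturating_sub(1);
    }
}

fn main() {
    let mut budget = InflightBudget::new(2);
    assert!(budget.try_acquire());
    assert!(budget.try_acquire());
    assert!(!budget.try_acquire()); // budget exhausted
    budget.release();
    assert!(budget.try_acquire()); // permit freed after commit
    println!("inflight budget ok");
}
```

In a real async implementation this role is typically played by a semaphore rather than a hand-rolled counter.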
Summary:
Handle the incoming `IngestRequest` messages sent by `ingress-core`.
Summary:
Refactor ingress-kafka to build on the `ingress-client` implementation. This replaces
the previous direct writes to Bifrost, which allows:
- Batching, which increases throughput
- The PP becomes the sole writer of its logs (WIP restatedev#3965)
- Use `IngressClient` instead of Bifrost to write to partition logs
- Remove the deprecated `delete_invocation`
