
Conversation

@tvaucher

Description

This PR fixes an issue in the conversation flow where the prompt was not stored in the central memory before being sent via `send_prompt_async` in `PromptNormalizer`. Previously, the flow would first call `send_prompt_async` and only afterward add the prompt to the central memory.

Additionally, `send_prompt_async` currently returns only a single `Message` containing `MessagePiece` objects of a single type. This limitation prevented proper handling of intermediary steps in targets such as `OpenAIResponseTarget`, since those intermediary messages were not being stored anywhere. Attempts to store them in the central memory directly by modifying the target resulted in out-of-order messages, due to the sequencing issue described above.

This PR addresses the problem by:

1. Ensuring that the prompt is inserted into the central memory before `send_prompt_async` is called.
2. Simplifying the error-handling logic by removing calls made redundant by the corrected flow.

As a result, intermediary messages can now be correctly ordered and consistently stored in the central memory (see the sketch below).
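
A minimal sketch of the corrected ordering (a simplification with hypothetical standalone names, not the actual `PromptNormalizer` implementation; `memory` and `target` stand in for the normalizer's memory interface and prompt target):

    # Sketch of the corrected flow; not the real PromptNormalizer code.
    async def send_prompt_async(memory, target, request):
        # 1. Persist the outgoing prompt first, so any intermediary messages
        #    the target produces (e.g. tool-call steps) are ordered after it.
        memory.add_message_to_memory(request=request)

        # 2. Only then dispatch to the target.
        response = await target.send_prompt_async(prompt_request=request)

        # 3. Store the assistant response; all returned pieces land in the
        #    same, correctly ordered turn.
        if response:
            memory.add_message_to_memory(request=response)
        return response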

Tests and Documentation

N/A


    response = None

    self._memory.add_message_to_memory(request=request)

As is, this would cause bugs, because targets rely on the last message not yet being in memory when they construct the conversation history. E.g.

    @limit_requests_per_minute
    @pyrit_target_retry
    async def send_prompt_async(self, *, prompt_request: Message) -> Message:
        """Asynchronously sends a prompt request and handles the response within a managed conversation context.

        Args:
            prompt_request (Message): The message object.

        Returns:
            Message: The updated conversation entry with the response from the prompt target.
        """

        self._validate_request(prompt_request=prompt_request)
        self.refresh_auth_headers()

        message_piece: MessagePiece = prompt_request.message_pieces[0]

        is_json_response = self.is_response_format_json(message_piece)

        conversation = self._memory.get_conversation(conversation_id=message_piece.conversation_id)
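        # If the in-flight request had already been written to memory at this
        # point, `conversation` would include it, and the target would end up
        # duplicating it when assembling the request history.
        ...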

Copy link
Contributor

@rlundeen2 (Contributor) commented on Oct 24, 2025


Another reason it's nice to store prompts after they're sent: we ran into issues when errors happen. If the prompt is never actually sent, storing it anyway can break our error-handling, retry, and history-reconstruction logic.
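
A rough illustration of that failure mode (hypothetical, simplified; not PyRIT code):

    # Hypothetical illustration: store-before-send leaves an orphaned
    # prompt in memory if the send itself fails.
    async def send_with_store_first(memory, target, request):
        memory.add_message_to_memory(request=request)  # prompt persisted...
        try:
            return await target.send_prompt_async(prompt_request=request)
        except Exception:
            # ...but never sent: memory now holds a prompt with no response,
            # which the retry and history-reconstruction logic must reconcile.
            raise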

@rlundeen2 (Contributor)

One thing I am not following: a `Message` can contain multiple message pieces of different types. We have piece types such as "function_call", "tool_call", "function_call_output", and "text", which can all be part of the assistant's `Message` response. If they are part of the response, then the normalizer should store them all in the database, as long as the target returns the right `Message`, and they should all end up in the correct turn.

Let me know if I'm missing something!
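
For reference, the shape being described: one assistant `Message` holding pieces of mixed types. The constructor arguments below are assumptions for illustration, not the exact PyRIT signatures.

    # Sketch: a single assistant Message carrying mixed piece types.
    # Field names (role, original_value, data_type) are assumptions.
    assistant_turn = Message(
        message_pieces=[
            MessagePiece(role="assistant", original_value='{"name": "get_weather"}', data_type="function_call"),
            MessagePiece(role="assistant", original_value='{"temp_c": 21}', data_type="function_call_output"),
            MessagePiece(role="assistant", original_value="It is 21 C today.", data_type="text"),
        ]
    )
    # If the target returns this single Message, the normalizer can store
    # every piece in the correct turn with one add_message_to_memory call.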

@rlundeen2 self-assigned this on Oct 24, 2025