InvocationScoped for AiService methods #1959
patriot1burke
started this conversation in
Ideas
Replies: 1 comment
I think it makes perfect sense!
TL;DR: I would like an @InvocationScoped annotation that clears the chat memory of the request after the AiService method finishes. I'm perfectly happy to create a PR; I just want feedback on whether this is a good approach or whether something like it already exists.
So, I ran into a problem. I have a @RequestScoped AI service method that was using the default memory id:
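(The actual interface wasn't included in the post; the sketch below is a reconstruction in the usual quarkus-langchain4j style, with illustrative annotations and messages. With no @MemoryId parameter, the default memory id is used.)

```java
@RegisterAiService
public interface MacroGeneratorPrompt {

    @SystemMessage("You generate macro commands from natural-language requests.")
    String chat(@UserMessage String request);
}
```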
MacroGeneratorPrompt.chat() is called by a tool from a larger, top-level AI request chain. Depending on the prompt, the tool method might be called twice, and thus MacroGeneratorPrompt.chat() might be called twice in the same @RequestScoped context. The issue was that the second call to MacroGeneratorPrompt.chat() would get polluted by the chat memory of the first call, because the memory hadn't been cleared yet.
I had two options:
a) pass in a unique memory id every time MacroGeneratorPrompt was executed, or
b) add a throwaway chat memory implementation using @RegisterAiService(chatMemoryProviderSupplier)
I decided on (b).
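Option (b) looks roughly like this (a sketch only: the supplier wiring follows the @RegisterAiService(chatMemoryProviderSupplier) style mentioned above, and ThrowawayMemorySupplier is an illustrative name):

```java
@RegisterAiService(chatMemoryProviderSupplier = ThrowawayMemorySupplier.class)
public interface MacroGeneratorPrompt {
    String chat(@UserMessage String request);
}

public class ThrowawayMemorySupplier implements Supplier<ChatMemoryProvider> {
    @Override
    public ChatMemoryProvider get() {
        // A small, non-persistent message window that is effectively
        // discarded rather than shared across the whole request.
        return memoryId -> MessageWindowChatMemory.withMaxMessages(10);
    }
}
```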
I'd like a quarkus-langchain4j feature that lets you specify @InvocationScoped on the AI service class or method to do the same thing.
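The proposed usage would be something like the following (hypothetical annotation: this is the feature being requested, not an existing API):

```java
@RegisterAiService
public interface MacroGeneratorPrompt {

    @InvocationScoped // proposed: clear this method's chat memory when it returns
    String chat(@UserMessage String request);
}
```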
Here's an example:
User: Create a new macro named that adds a prefix of "pre" and a suffix of "post"
The top prompt would break this up into two calls to the addCommandToMacro() tool:
(1) addCommandToMacro("add prefix of 'pre'")
(2) addCommandToMacro("add suffix of 'post'")
Call (1) would invoke MacroGeneratorPrompt.chat(), which would return "addPrefix('pre')". The tool method would then set the new macro code to "addPrefix('pre')".
Call (2) would invoke MacroGeneratorPrompt.chat(), which would return "addPrefix('pre');addSuffix('post')". The LLM decided to look at the chat memory of the previous call and add the previously generated command. So the new macro created would be
"addPrefix('pre');addPrefix('pre');addSuffix('post')"
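The duplication above can be reproduced with a tiny plain-Java simulation (no quarkus-langchain4j involved; all names are illustrative) that also shows how clearing the memory per invocation fixes it:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java simulation of the failure mode described above.
public class InvocationScopeDemo {

    // Stand-in for the request-scoped chat memory under the default memory id.
    static final List<String> memory = new ArrayList<>();

    // Stand-in for MacroGeneratorPrompt.chat(): like the LLM in the post, it
    // folds previously generated commands from memory into the new answer.
    static String chat(String request, String generatedCommand) {
        String previous = String.join(";", memory);
        String answer = previous.isEmpty() ? generatedCommand
                                           : previous + ";" + generatedCommand;
        memory.add(answer);
        return answer;
    }

    public static void main(String[] args) {
        // The tool appends each chat() result to the macro code.
        StringBuilder macro = new StringBuilder();
        macro.append(chat("add prefix of 'pre'", "addPrefix('pre')"));
        macro.append(";").append(chat("add suffix of 'post'", "addSuffix('post')"));
        // Stale memory duplicates the first command:
        System.out.println(macro); // addPrefix('pre');addPrefix('pre');addSuffix('post')

        // What @InvocationScoped would do: drop the memory after each call.
        memory.clear();
        StringBuilder fixed = new StringBuilder();
        fixed.append(chat("add prefix of 'pre'", "addPrefix('pre')"));
        memory.clear(); // invocation-scoped: memory dies with the invocation
        fixed.append(";").append(chat("add suffix of 'post'", "addSuffix('post')"));
        System.out.println(fixed); // addPrefix('pre');addSuffix('post')
    }
}
```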