feat(NGUI-348): next_gen_ui integrated as MCP server #2627
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.

Hi @lkrzyzanek. Thanks for your PR. I'm waiting for an openshift member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
`async def execute_tool_calls(`
This function can be called multiple times (up to max_rounds, or until finish_reason=stop). Do we want to run generate_ui during the intermediate steps too? @lkrzyzanek for a POC this is okay.
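For context, a minimal sketch of the loop shape under discussion, assuming an OpenAI-style chat completions API; `call_llm` and `run_mcp_tool` are hypothetical placeholders, not this PR's actual implementation:

```python
async def execute_tool_calls(messages, tools, max_rounds=5):
    """Run tool-call rounds until the model stops or max_rounds is reached."""
    for _ in range(max_rounds):
        response = await call_llm(messages, tools)   # hypothetical LLM call
        choice = response.choices[0]
        messages.append(choice.message)
        if choice.finish_reason == "stop":           # model gave a final answer
            return choice.message
        for tool_call in choice.message.tool_calls or []:
            # generate_ui arrives here like any other tool call, so it can
            # run in an intermediate round, not only in the last one
            result = await run_mcp_tool(tool_call)   # hypothetical MCP dispatch
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": result,
            })
    return messages[-1]  # give up after max_rounds
```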
The generate_ui task is generated by the LLM, and the LLM also decides when it happens.
I'm just making sure that the conversation (all the other tool results) is passed correctly to generate_ui.
So I think it's fine to keep it here.
My observation is that the LLM called the Kube MCP tool in the 1st round, and the 2nd round was NGUI MCP.
The presumption is that it will be the last call. However, the LLM can make a different decision when the user prompt asks for multiple actions. We need to explore that on exact use cases.
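A hedged illustration of that two-round flow; all tool names and payloads below are made up for the example, not taken from the PR:

```python
# Round 1: the LLM calls a Kube MCP tool; round 2: it forwards the result
# to the NGUI MCP generate_ui tool as a plain string argument.
messages = [
    {"role": "user", "content": "what are my pods? generate ui"},
    {"role": "assistant", "tool_calls": [
        {"id": "1", "name": "pods_list", "arguments": '{"namespace": "default"}'},
    ]},
    {"role": "tool", "tool_call_id": "1", "content": '{"pods": ["a", "b"]}'},
    {"role": "assistant", "tool_calls": [
        {"id": "2", "name": "generate_ui",
         "arguments": '{"input_data": "{\\"pods\\": [\\"a\\", \\"b\\"]}"}'},
    ]},
    {"role": "tool", "tool_call_id": "2", "content": '{"component": "table"}'},
]
```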
I have a few basic questions:
- How are we making sure that, even for a simple use case, the model calls the generate_ui tool at the end? I don't see any prompt change; is there some information in the tool definition? (See the sketch after this list.)
- Is it always going to be just one tool in the future as well?
- Are we expecting that the generate_ui tool will always be called?
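One common way to steer this without a system prompt change is through the tool's own description. A minimal sketch of what that could look like, assuming an OpenAI-style tool schema; the wording is hypothetical, not what this PR ships:

```python
# Hypothetical tool schema: the description nudges the model to call
# generate_ui as the final step of every answer.
generate_ui_tool = {
    "type": "function",
    "function": {
        "name": "generate_ui",
        "description": (
            "Render the final answer as a UI component. Always call this "
            "tool last, after all data-gathering tools have returned."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "user_prompt": {"type": "string"},
                "input_data": {"type": "string"},
            },
            "required": ["user_prompt", "input_data"],
        },
    },
}
```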
- @xrajesh is working on system prompt tuning.
- For UI generation, probably yes. For other use cases like layout management we will probably introduce additional tools.
- We originally thought it would always be called. However, it also depends on the use case / experience. It will be decided based on @xrajesh's work on system prompt tuning.
@xrajesh I removed the logic that gathers input data and passes it to NGUI, and improved NGUI to take string tool arguments so the LLM can pass the data directly (sketched below).
My observations so far:
- A bigger LLM can generate NGUI args with data from the previous tool call, with some level of probability.
- Small data works well and reasonably fast, e.g. `generate a table component showing pods in openshift-lightspeed namespace, include all available data`.
- Big data takes a lot of time and fails, e.g. `what are my namespaces, generate ui`.
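A minimal sketch of what a string-argument tool can look like, assuming the Python MCP SDK's FastMCP server; the server name, docstring, and return value are illustrative, not the actual next_gen_ui code:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("next-gen-ui")  # hypothetical server name

@mcp.tool()
def generate_ui_component(user_prompt: str, input_data: str) -> str:
    """Generate one UI component.

    input_data is a plain string so the LLM can copy the raw JSON output
    of a previous tool call into it directly, instead of the host
    gathering and forwarding the data programmatically.
    """
    # Illustrative stub: echo the data back inside a component definition.
    return f'{{"component": "table", "data": {input_data}}}'

if __name__ == "__main__":
    mcp.run()
```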
@xrajesh I refactored the code a bit and added support for both "LLM powered" and manual argument passing. `generate_ui_component` is meant for the LLM-powered path, `generate_ui_multiple_components` for manual processing. Which tool is available can be controlled by a parameter when starting the NGUI MCP server.
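The thread does not name that server parameter, so purely as a hedged sketch, a flag like the hypothetical `--tool-mode` below could gate which tool gets registered, reusing the FastMCP setup from the previous sketch:

```python
import argparse
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("next-gen-ui")

parser = argparse.ArgumentParser()
# Hypothetical flag; the real parameter name is not given in the thread.
parser.add_argument("--tool-mode", choices=["llm", "manual"], default="llm")
args = parser.parse_args()

if args.tool_mode == "llm":
    @mcp.tool()
    def generate_ui_component(user_prompt: str, input_data: str) -> str:
        """LLM-powered path: the model fills input_data itself."""
        return f'{{"component": "table", "data": {input_data}}}'
else:
    @mcp.tool()
    def generate_ui_multiple_components(user_prompt: str, input_data: list[str]) -> str:
        """Manual path: the host gathers and forwards the tool results."""
        return '{"components": []}'  # illustrative stub

if __name__ == "__main__":
    mcp.run()
```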
Description
DRAFT!
This code adds support for handling one MCP server result and passing it programmatically as the `generate_ui` tool call argument (Next Gen UI MCP).
The PR also enhances the API response by sending `artifact` next to `content`, which is a standard way to send structured data to the client.
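A hedged sketch of the resulting response shape; only the `content` / `artifact` split is what the PR describes, the field values are made up:

```python
# Illustrative response payload: human-readable text in "content",
# structured data for the client in "artifact".
response = {
    "content": "Here are the pods in the openshift-lightspeed namespace.",
    "artifact": {
        "component": "table",
        "data": [{"name": "lightspeed-app-server", "status": "Running"}],
    },
}
```

Type of change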
Related Tickets & Documents
Checklist before requesting a review
Testing