
Conversation

@JonahSussman (Contributor) commented Jul 29, 2025

Beginning explorations to integrate Lightspeed with kai.

Related to #838

Summary by CodeRabbit

  • New Features

    • Introduced a comprehensive YAML configuration file enabling flexible setup of providers, APIs, models, shields, tool groups, and server settings.
    • Added a test script to list available models, tool groups, and tools from a local server instance.
  • Chores

    • Updated dependencies to include support for "llama-stack".

@coderabbitai bot (Contributor) commented Jul 29, 2025

Walkthrough

A YAML configuration file was introduced for the solution server, specifying providers, APIs, models, and server settings. A Python test script was added to interact with the server via a client, listing models and tools. The project’s dependencies were updated to require the "llama-stack" package.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| **Configuration File**<br>`kai_mcp_solution_server/config/run.yaml` | Added a comprehensive YAML configuration defining API groups, providers, models, shields, tool groups, and the server port. |
| **Test Script**<br>`kai_mcp_solution_server/config/test_script.py` | Introduced a Python script that connects to the server and lists models, tool groups, and tools, with example usage comments. |
| **Dependency Update**<br>`kai_mcp_solution_server/pyproject.toml` | Added `llama-stack>=0.2.12` to the dependencies list. |

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

In YAML fields and Python scripts anew,
The server’s heart beats, configuration grew.
With llama-stack in the mix,
Models and tools at your fingertips,
A rabbit hops through code so bright—
Reviewing changes, all feels right! 🐇✨


@coderabbitai bot left a comment
Actionable comments posted: 3

🧹 Nitpick comments (1)
kai_mcp_solution_server/config/test_script.py (1)

22-24: Optional: pretty-print tool metadata instead of plain __str__

If you need richer output (arguments, description), iterate over tool.dict() or tool.model_dump() rather than relying on the implicit __repr__.
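
For instance, a minimal sketch of that idea, assuming the client objects are pydantic models and a tool-listing call along the lines of `client.tools.list()` (as the script's tool listing suggests):

```python
import json

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

for tool in client.tools.list():
    # pydantic v2 models expose model_dump(); fall back to dict() on v1
    data = tool.model_dump() if hasattr(tool, "model_dump") else tool.dict()
    print(json.dumps(data, indent=2, default=str))
```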

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 33688e2 and a934892.

⛔ Files ignored due to path filters (1)
  • kai_mcp_solution_server/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
  • kai_mcp_solution_server/config/run.yaml (1 hunks)
  • kai_mcp_solution_server/config/test_script.py (1 hunks)
  • kai_mcp_solution_server/pyproject.toml (1 hunks)
🧰 Additional context used
🪛 GitHub Actions: Test Code on PR
kai_mcp_solution_server/config/test_script.py

[error] 8-8: llama_stack_client.APIConnectionError: Connection error due to httpx.ConnectError: [Errno 111] Connection refused. The test failed because it could not connect to the required service.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: Run e2e test (macos-13, bash, ChatOpenAI, kai-test-generation)
  • GitHub Check: Run e2e test (ubuntu-22.04-arm, bash, ChatOpenAI, kai-test-generation)
  • GitHub Check: Run e2e test (windows-latest, cmd, ChatOpenAI, kai-test-generation)
  • GitHub Check: Run e2e test (macos-latest, bash, ChatOpenAI, kai-test-generation)
  • GitHub Check: Run e2e test (ubuntu-24.04, bash, ChatOpenAI, kai-test-generation)
🔇 Additional comments (3)
kai_mcp_solution_server/pyproject.toml (1)

15-15: Consider relaxing or pinning the new dependency more deliberately

Adding llama-stack>=0.2.12 means any future breaking release (0.3.x, 1.x) will be picked up automatically. If the API surface is still volatile, you may want either

  1. a pessimistic upper bound (>=0.2.12,<0.3), or
  2. a caret/tilde spec (^0.2.12), depending on your resolver.

Otherwise CI may suddenly fail once a backward-incompatible version lands.
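
For illustration, option 1 expressed in PEP 440 syntax in pyproject.toml (only the specifier changes):

```toml
[project]
dependencies = [
    # allow 0.2.x patch/minor releases, block the next potentially breaking series
    "llama-stack>=0.2.12,<0.3",
]
```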
kai_mcp_solution_server/config/run.yaml (2)

86-94: Home-directory shortcuts are not expanded by most OCI runners

~/.llama/... is resolved only by the shell. When this YAML is parsed inside a container or k8s Pod running as non-root, the tilde may remain verbatim and the path becomes invalid.
Use ${env.HOME} or an absolute path calculated at startup instead.
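
A hypothetical before/after; the `db_path` key and exact path are illustrative, since only the tilde is at issue:

```yaml
# Before: resolved only by an interactive shell, breaks in containers
db_path: ~/.llama/distributions/kai/registry.db
# After: expanded by the stack's own ${env.*} substitution
db_path: ${env.HOME}/.llama/distributions/kai/registry.db
```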


226-230: mcp_endpoint key may be ignored – confirm the schema

For tool_runtime.provider_id: model-context-protocol the stack usually expects the endpoint under config.url (or similar) rather than the sibling mcp_endpoint.uri. Double-check against the MCP provider spec or the request will 404 at runtime.
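
The two shapes being contrasted, sketched with an illustrative endpoint URL (verify the exact key names against the MCP provider spec):

```yaml
# As configured: endpoint as a sibling key of the provider entry
- provider_id: model-context-protocol
  mcp_endpoint:
    uri: http://localhost:8000/sse   # illustrative URL

# As the comment suggests the schema may expect: endpoint under config
- provider_id: model-context-protocol
  config:
    url: http://localhost:8000/sse   # illustrative URL
```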

Comment on lines +17 to +23
```yaml
- provider_id: ${env.CEREBRAS_API_KEY:+cerebras}
  provider_type: remote::cerebras
  config:
    base_url: https://api.cerebras.ai
    api_key: ${env.CEREBRAS_API_KEY:=}
- provider_id: ollama
  provider_type: remote::ollama
```

🛠️ Refactor suggestion

Provider IDs that depend on env-vars can produce invalid/empty IDs

provider_id: ${env.CEREBRAS_API_KEY:+cerebras} expands to an empty string when the key is unset.
Several registries (including Llama-Stack) reject blank IDs, which causes startup failure.
Guard with an explicit YAML anchor or move the conditional into a separate enabled: flag (see the sketch after the prompt below).

🤖 Prompt for AI Agents
In kai_mcp_solution_server/config/run.yaml around lines 17 to 23, the
provider_id uses a conditional expansion that results in an empty string if the
environment variable CEREBRAS_API_KEY is unset, causing invalid provider IDs and
startup failures. To fix this, remove the conditional from provider_id and
instead add an explicit enabled flag that checks if CEREBRAS_API_KEY is set,
ensuring the provider is only included when the key is present. Alternatively,
use a YAML anchor to define the provider only when the environment variable
exists, preventing empty or invalid IDs.
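
One possible shape of that fix, sketched below. The `enabled` key is an assumption about the schema — if the stack has no such flag, build the provider list programmatically before writing run.yaml:

```yaml
# Keep the ID stable; gate inclusion explicitly instead of via expansion.
# NOTE: `enabled` is hypothetical -- confirm the stack's schema supports it
# (an empty expansion here would also need to be treated as false).
- provider_id: cerebras
  provider_type: remote::cerebras
  enabled: ${env.CEREBRAS_API_KEY:+true}
  config:
    base_url: https://api.cerebras.ai
    api_key: ${env.CEREBRAS_API_KEY:=}
```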

Comment on lines +213 to +216
```yaml
- shield_id: llama-guard
  provider_id: ${env.SAFETY_MODEL:+llama-guard}
  provider_shield_id: ${env.SAFETY_MODEL:=}
vector_dbs: []
```

⚠️ Potential issue

Shield config can become partial / inconsistent

provider_id: ${env.SAFETY_MODEL:+llama-guard} is blank unless SAFETY_MODEL is set, but the object is still appended to shields:. This creates a shield with null/empty IDs and breaks validation.
Add an outer conditional or generate the list programmatically (see the sketch after the prompt below).

🤖 Prompt for AI Agents
In kai_mcp_solution_server/config/run.yaml around lines 213 to 216, the shield
configuration includes an entry with provider_id conditionally set based on
SAFETY_MODEL, but the shield object is always added, causing partial or empty
IDs that break validation. Fix this by adding an outer conditional to include
the entire shield entry only if SAFETY_MODEL is set, or generate the shields
list programmatically to exclude entries with missing environment variables,
ensuring no shield with null or empty IDs is created.
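
A sketch of the programmatic route, building the shields list in Python before the config is rendered (the rendering step itself is assumed; key names mirror the YAML above):

```python
import os

# Build the shields list only when SAFETY_MODEL is set, so no partial
# entry with empty IDs ever reaches the rendered run.yaml.
shields: list[dict[str, str]] = []
safety_model = os.environ.get("SAFETY_MODEL")
if safety_model:
    shields.append(
        {
            "shield_id": "llama-guard",
            "provider_id": "llama-guard",
            "provider_shield_id": safety_model,
        }
    )
```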

Comment on lines +4 to +8
```python
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

models = client.models.list()
```

⚠️ Potential issue

Import time side-effects cause the pipeline crash – wrap in __main__

The script connects to the server as soon as it is imported, and the GitHub action imports every *.py file while collecting tests. Because the server is not running, CI sees Connection refused.
Move the client logic behind a main guard and add minimal error handling:

```diff
-from llama_stack_client import LlamaStackClient
-
-client = LlamaStackClient(base_url="http://localhost:8321")
-
-models = client.models.list()
+# noqa: D400
+from llama_stack_client import LlamaStackClient
+from httpx import RequestError
+
+
+def main() -> None:  # pragma: no cover
+    client = LlamaStackClient(base_url="http://localhost:8321")
+    try:
+        models = client.models.list()
+    except RequestError as exc:
+        print(f"❌ Cannot reach stack server: {exc}")
+        return
```

Then indent the remainder of the script under main() and add:

```python
if __name__ == "__main__":
    main()
```

This prevents accidental execution during test discovery.
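
Put together, a complete minimal sketch of the guarded script. It catches llama_stack_client's APIConnectionError — the exception type the CI traceback actually reports, and which httpx's RequestError would not cover — assuming it is exported at the package top level, as the traceback's `llama_stack_client.APIConnectionError` suggests:

```python
from llama_stack_client import APIConnectionError, LlamaStackClient


def main() -> None:
    client = LlamaStackClient(base_url="http://localhost:8321")
    try:
        models = client.models.list()
    except APIConnectionError as exc:  # the type seen in the CI failure
        print(f"❌ Cannot reach stack server: {exc}")
        return
    for model in models:
        print(model)


if __name__ == "__main__":
    main()
```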

🧰 Tools

🪛 GitHub Actions: Test Code on PR

[error] 8-8: llama_stack_client.APIConnectionError: Connection error due to httpx.ConnectError: [Errno 111] Connection refused. The test failed because it could not connect to the required service.

🤖 Prompt for AI Agents

In kai_mcp_solution_server/config/test_script.py around lines 4 to 8, the client connection code runs immediately on import, causing CI failures when the server is not running. To fix this, wrap the client initialization and model listing code inside a main() function, add minimal error handling, and then call main() under the if __name__ == "__main__": guard. This prevents the code from executing during test discovery.

@JonahSussman marked this pull request as draft July 29, 2025 14:10