Feature/mistral support and cli enhancements #156
Conversation
- Replace vague 'ALL SQL queries require confirm=true' with explicit parameter specification
- Add concrete example showing exact execute_sql tool call syntax
- Clarify that the tool rejects ANY SQL query without the confirm=true parameter
- Specify boolean data type and automatic execution behavior
- Eliminate ambiguity for different LLM providers about confirmation requirements
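For illustration, a conforming tool call would carry the confirmation flag explicitly; the argument names below are an assumption about the execute_sql schema, not taken from the repository:

```python
# Hypothetical execute_sql tool call payload (argument names are assumed,
# not copied from PRT's actual tool schema).
tool_call = {
    "name": "execute_sql",
    "arguments": {
        "query": "SELECT name FROM contacts LIMIT 5",
        "confirm": True,  # boolean; calls without confirm=true are rejected
    },
}
```

A call that omits confirm, or sends it as a string instead of a boolean, would be refused rather than executed.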
… refactoring, enhanced CLI.
Phase 1: Extract Help System
- Create centralized help system in cli_modules/help.py
- Move help content to markdown file for single source of truth
- Update cli.py to use new help module
- Maintain backward compatibility

Phase 2: Extract Services
- Extract 5 service modules with zero UI dependencies:
  * export.py: Export and JSON serialization services
  * images.py: Profile image export services
  * directory.py: Interactive directory generation
  * import_google.py: Google import services
  * llm.py: LLM chat integration services
- Update imports in cli.py to use extracted services
- Remove duplicate function definitions
- All services are now independently testable

This refactoring improves maintainability and separation of concerns while preserving all existing functionality.
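As a rough sketch of the "zero UI dependencies" idea, a service module under this layout would expose plain functions that return data and leave all console output to cli.py; the function name and signature here are hypothetical:

```python
# Hypothetical sketch of a service in prt_src/cli_modules/export.py;
# the real module's functions and signatures may differ.
import json
from pathlib import Path


def export_contacts_json(contacts: list[dict], out_path: Path) -> Path:
    """Serialize contacts to JSON and return the written path.

    No printing or prompting happens here, so the function can be unit-tested
    without a terminal; the CLI layer decides how to report the result.
    """
    out_path.write_text(json.dumps(contacts, indent=2))
    return out_path
```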
… fixes

This commit completes the CLI modularization refactoring and resolves critical compatibility issues that were causing test failures.

CLI Modularization (Phases 1-3):
- Split monolithic CLI into specialized modules
- Maintained backward compatibility layer
- Added enhanced functionality with new flags
- Fixed missing imports and test compatibility issues

Key Features:
- --prt-debug-info flag for system diagnostics
- --classic flag for forced CLI mode
- Complete modular architecture with 26+ new files
- Zero test regressions, all functionality preserved

Validation: All CLI tests pass (8/8), debug info tests pass (13/13)
- Remove custom help flag handling in favor of Typer's native --help
- Eliminate print_custom_help() import and manual help processing
- Streamline command interface by letting Typer handle help automatically
- Reduce code complexity while maintaining the same functionality

This is a cleanup following the CLI modularization in PR #153.
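For context, Typer generates --help automatically from an app's commands, options, and docstrings, which is what makes a hand-rolled help handler unnecessary; the command below is a hypothetical stand-in, not one of PRT's actual commands:

```python
import typer

app = typer.Typer()


@app.command()
def chat(model: str = typer.Option("gpt-oss:20b", help="Ollama model to use")):
    """Start an interactive chat session."""
    typer.echo(f"Chatting with {model}")


if __name__ == "__main__":
    # `--help` and `chat --help` output is generated by Typer automatically.
    app()
```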
- Add specific RuntimeError handling for critical errors (permissions, disk space)
- Provide informative stderr warnings when credential setup fails
- Differentiate between critical errors and unexpected errors
- Improve user guidance about potential system issues
- Continue gracefully while preserving error context

This improves robustness of the credential system introduced in the CLI modularization, ensuring users get clear feedback about configuration issues.
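A minimal sketch of the pattern described, assuming a credential-setup helper (the function names and messages here are placeholders, not PRT's actual code):

```python
import sys


def setup_credentials() -> None:
    """Placeholder standing in for the real credential-setup routine."""
    raise RuntimeError("simulated permission error")


def ensure_credentials() -> None:
    try:
        setup_credentials()
    except RuntimeError as exc:
        # Critical but recoverable problems (permissions, disk space):
        # warn on stderr and continue instead of aborting the CLI.
        print(f"Warning: credential setup failed: {exc}", file=sys.stderr)
        print("Check directory permissions and free disk space.", file=sys.stderr)
    except Exception as exc:
        # Unexpected errors get a distinct message so they are easier to report.
        print(f"Warning: unexpected error during credential setup: {exc}", file=sys.stderr)


ensure_credentials()
```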
Security improvements:
- Replace random.choices() with secrets.choice() for Mistral tool call ID generation
- Use cryptographically secure random number generation for security-sensitive operations

Robustness improvements:
- Improve base URL processing to avoid corrupting domains like v1.example.com
- Use safer string manipulation for /v1 suffix removal
- Add explicit imports for security modules

These changes strengthen the security posture of the Mistral LLM integration added in PR #153 and improve reliability of URL handling.
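For illustration, a cryptographically secure ID generator along these lines could look as follows; the ID length and alphabet are assumptions rather than details taken from the Mistral integration:

```python
import secrets
import string

_ID_ALPHABET = string.ascii_letters + string.digits


def generate_tool_call_id(length: int = 9) -> str:
    # secrets.choice draws from the OS CSPRNG, unlike random.choices,
    # which is predictable and unsuitable for security-sensitive identifiers.
    return "".join(secrets.choice(_ID_ALPHABET) for _ in range(length))


print(generate_tool_call_id())  # e.g. 'a8Zk2QwP1'
```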
Use removesuffix() instead of rstrip() for URL handling to fix potential issues with domains containing 'v1' in their name. This aligns with the URL handling improvements from the main branch.
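The distinction matters because rstrip() removes a trailing run of characters, not a literal suffix; a quick illustration with a made-up URL:

```python
url = "https://example.dev/v1"  # illustrative URL only

# rstrip("/v1") strips any trailing characters in the set {'/', 'v', '1'},
# so it keeps eating past the suffix and corrupts the host:
print(url.rstrip("/v1"))        # https://example.de

# removesuffix("/v1") (Python 3.9+) removes only the exact trailing string:
print(url.removesuffix("/v1"))  # https://example.dev
```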
💡 Codex Review (code excerpt: lines 27 to 79 in ab7ad25)

After inheriting from … (code excerpt: lines 324 to 338 in ab7ad25)
- Fix README.md CLI examples from outdated 'python -m prt_src.cli' to current 'python -m prt_src' structure
- Add comprehensive CLI testing and debugging examples including:
  * prt-debug-info for system health checks
  * CLI chat mode for testing AI functionality
  * Model testing and selection commands
- Update docs/DEV_SETUP.md with current CLI patterns and troubleshooting workflows
- Add dedicated CLI Testing & Debugging Commands section with practical examples
- Modernize interface descriptions to reflect TUI as default with CLI as power-user tool
- Include specific examples mentioned for development: chat testing, directory generation, system diagnostics

The CLI is now properly documented as an excellent testing and debugging tool for developers.
Changes the model selection logic in OllamaModelRegistry.get_default_model() to prioritize gpt-oss:20b as the primary default, followed by any officially supported models, then any available models.

New priority order:
1. gpt-oss:20b (preferred default for tool calling support)
2. Any officially supported model (mistral:7b-instruct, llama3:8b, etc.)
3. Any available model (fallback)

This ensures users get the recommended model for PRT's tool calling features when no specific model is requested via command line.
- Remove outdated SQLCipher comments from init.sh (Issue #41 migration complete)
- Add bug spec for llama3-8b-local communication error with Ollama
- Add test script to reproduce llama3 communication issues for debugging

These changes support the ongoing LLM model registry and Ollama integration work.
Code Review - PR #156

I've reviewed this pull request and have the following feedback:

✅ Strengths
Code Review: Feature/mistral support and cli enhancements

Overall Assessment

This PR makes several valuable improvements to LLM model selection, documentation, and testing infrastructure. The changes are well-structured and align with the project's goals. However, there are some concerns around code organization, test coverage, and potential issues that should be addressed.

Summary:
Detailed Feedback

1. Code Quality & Best Practices

✅ Good: Model Selection Logic (prt_src/llm_model_registry.py:393-426)

The new
Minor suggestion: The import at line 404 is inside the method (from prt_src.llm_supported_models import get_supported_models). While this avoids circular imports, it's unconventional. Consider:
# Import here to avoid circular dependency between registry and supported_models
from prt_src.llm_supported_models import get_supported_models
for model_name in supported_models:
    if model_name in model_names:
        logger.debug(f"Found supported default model: {model_name}")
        return model_name
Bug: Default Model Selection Ignores Documented Priority
The get_default_model() method claims to prioritize officially supported models but actually iterates through all models in the registry regardless of support status. The loop at line 419 iterates over supported_models dictionary keys without filtering by support_status, so it can return experimental or deprecated models before checking other official ones. This contradicts the documented priority and could select a lower-quality experimental model when better official alternatives exist.
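A hedged sketch of the kind of filtering the reviewer is describing; the support_status field and 'official' value are assumptions about the registry schema, not verified against the actual code:

```python
def pick_default_model(available: list[str], supported_models: dict) -> str | None:
    """Choose a default model following the documented priority order."""
    # 1. Preferred default for tool-calling support.
    if "gpt-oss:20b" in available:
        return "gpt-oss:20b"

    # 2. Only models explicitly marked as officially supported; skipping
    #    experimental/deprecated entries addresses the bug described above.
    for name, meta in supported_models.items():
        if meta.get("support_status") == "official" and name in available:
            return name

    # 3. Fall back to anything the Ollama server reports as available.
    return available[0] if available else None
```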
import traceback

# Add prt_src to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "prt_src"))
Bug: Module Import Path Conflict
The path manipulation adds the prt_src subdirectory to sys.path, but the imports use from prt_src.api and from prt_src.llm_factory, which would look for modules at prt_src/prt_src/api.py. The correct fix is either to add the parent directory instead (sys.path.insert(0, os.path.dirname(__file__))) or change imports to from api import PRTAPI and from llm_factory import create_llm.
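A sketch of the first suggested fix (adding the directory that contains the prt_src package rather than the package itself), assuming the test script sits at the repository root:

```python
import os
import sys

# Put the repo root (the directory containing prt_src/) on sys.path so that
# `from prt_src.api import ...` resolves; assumes this script lives at the root.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from prt_src.api import PRTAPI              # noqa: E402
from prt_src.llm_factory import create_llm  # noqa: E402
```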
Contributor License Statement
By submitting this pull request, I confirm that:
Signed-off-by: (type your GitHub handle)
Note
Updates default LLM selection to prefer gpt-oss:20b and supported models, expands CLI/TUI usage docs, and adds a spec plus a repro test for the llama3-8b-local communication issue.
- prt_src/llm_model_registry.get_default_model(): updated to prioritize gpt-oss:20b, then officially supported models, then any available model.
- docs/DEV_SETUP.md: significantly expands CLI/TUI usage, debugging commands, and daily workflow, clarifying TUI as default and adding AI testing commands (prt-debug-info, list-models, chat examples).
- specs/bug_llama3_8b_local_communication_error.md: details an Ollama 400 error with llama3-8b-local and steps to fix/validate.
- test_llama3_communication.py: reproduces and compares model communication behavior.
- init.sh: mac/Linux setup messaging.

Written by Cursor Bugbot for commit 9efc8ed. This will update automatically on new commits.