Conversation

@valentt valentt commented Oct 14, 2025

Pull Request: Fix Critical Bugs and Add Manus API Integration

Summary

This PR fixes 17 critical bugs that made OpenManus non-functional and adds support for Manus API integration through a custom LLM client.

Status: ✅ All components now fully operational (Docker, Backend, Frontend, CLI)


Changes Overview

🔧 Infrastructure & Build Fixes (3 commits)

  • fix(docker): Resolve shell script line ending issues (CRLF → LF)
  • fix(deps): Add missing langchain-openai dependency
  • fix(backend): Add missing __init__.py for nodes package

🔄 Backend Workflow Fixes (5 commits)

  • fix(workflow): Add missing Command imports in browser and planner nodes
  • fix(coordinator): Add Command import and improve response handling
  • fix(workflow): Simplify graph configuration for Command-based routing
  • fix(prompts): Correct template regex and add TEAM_MEMBERS support
  • fix(workflow): Handle both dict and Pydantic message objects

🤖 LLM Integration Fixes & Features (2 commits)

  • fix(llm): Improve PlaceholderLLM and implement actual LLM client creation
  • feat(llm): Implement Manus API custom client ⭐

🎨 Frontend Fixes (3 commits)

  • fix(frontend): Implement ChatInput event handlers (UI was non-interactive)
  • fix(frontend): Connect UI to backend API with SSE streaming
  • fix(frontend): Improve ChatDisplay styling and UX

💻 CLI Client Fix (1 commit)

  • fix(cli): Update client.py for current API endpoint

📚 Documentation (3 commits)

  • docs: Create comprehensive .env.example template
  • docs: Update README with detailed configuration guide
  • docs: Add comprehensive bug fix report (BUGFIX_REPORT.md)

🧹 Maintenance (1 commit)

  • chore: Update .gitignore to exclude IDE files and planning docs

Detailed Bug Fixes

Critical Issues Fixed

  1. Container Startup Failure

    • Error: ": not found" in Docker container
    • Cause: Windows CRLF line endings incompatible with Unix bash
    • Fix: Line ending conversion + .gitattributes
  2. Application Crash on Startup

    • Error: TypeError: Expected a Runnable, callable or dict
    • Cause: Missing Command imports in node files
    • Fix: Added imports to all node files
  3. Graph Configuration Error

    • Error: add_edge() got an unexpected keyword argument 'condition'
    • Cause: Invalid LangGraph API usage
    • Fix: Simplified graph to use Command-based routing
  4. Non-functional Frontend

    • Issue: Send button did nothing
    • Cause: Missing event handlers in ChatInput component
    • Fix: Complete rewrite with proper state management
  5. Frontend Not Connected to Backend

    • Issue: No API communication
    • Cause: Missing fetch implementation
    • Fix: Implemented SSE streaming with proper parsing
  6. LLM Always Using Placeholder

    • Issue: Real LLM never instantiated even with API keys
    • Cause: Logic error in get_llm_by_type()
    • Fix: Added conditional client creation (see the sketch below)
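
For orientation, here is a minimal sketch of the conditional client creation described in item 6. The names get_llm_by_type, PlaceholderLLM, and ManusLLM come from this PR, while the import paths, environment variable names, and the OpenAI-compatible fallback are illustrative assumptions:

```python
import os

# get_llm_by_type, PlaceholderLLM and ManusLLM are names from this PR;
# the import paths and environment variable names below are assumptions.
from langchain_openai import ChatOpenAI
from src.llms.manus_llm import ManusLLM
from src.llms.placeholder import PlaceholderLLM  # path assumed

def get_llm_by_type(llm_type: str = "basic"):
    """Return a real client when credentials are configured, else the placeholder.

    The real function also dispatches on llm_type (basic/reasoning/vision);
    that part is omitted here.
    """
    if os.getenv("MANUS_API_KEY"):                             # env var name assumed
        return ManusLLM(api_key=os.environ["MANUS_API_KEY"])   # constructor assumed
    if os.getenv("OPENAI_API_KEY"):
        # Any OpenAI-compatible provider (OpenAI, DeepSeek, Azure, ...) fits here.
        return ChatOpenAI(model=os.getenv("BASIC_MODEL", "gpt-4o-mini"))
    # No keys configured: keep the previous placeholder behaviour so the app still runs.
    return PlaceholderLLM()
```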

See BUGFIX_REPORT.md for complete analysis of all 17 bugs.


🌟 New Feature: Manus API Integration

Overview

Manus API uses a task-based architecture (not OpenAI-compatible), requiring a custom implementation:

# Task Creation
POST /v1/tasks → {task_id, task_url}

# Task Polling
GET /v1/tasks/{task_id} → {status, output}

# Response Extraction
Extract assistant message from output array
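
A minimal Python sketch of that flow follows. It is an illustration, not the code in this PR: the endpoints and the task_id/status/output fields come from the description above, while the base URL, request payload, and status values are assumptions.

```python
import time
import requests

BASE_URL = "https://api.manus.ai"            # base URL assumed; see https://open.manus.ai/docs
HEADERS = {"API_KEY": "<your-manus-key>"}    # API_KEY header, not a Bearer token

def run_manus_task(prompt: str, timeout: float = 120.0) -> str:
    # 1. Task creation: POST /v1/tasks -> {task_id, task_url}
    created = requests.post(
        f"{BASE_URL}/v1/tasks",
        headers=HEADERS,
        json={"prompt": prompt},             # payload shape assumed
    ).json()
    task_id = created["task_id"]

    # 2. Task polling: GET /v1/tasks/{task_id} -> {status, output}
    deadline = time.time() + timeout
    while time.time() < deadline:
        task = requests.get(f"{BASE_URL}/v1/tasks/{task_id}", headers=HEADERS).json()
        if task.get("status") in ("completed", "failed"):    # status values assumed
            break
        time.sleep(2)

    # 3. Response extraction: pull the assistant message out of the output array
    for item in reversed(task.get("output", [])):
        if item.get("role") == "assistant":                  # item shape assumed
            return item.get("content", "")
    raise RuntimeError(f"Task {task_id} returned no assistant output")
```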

Implementation

Created ManusLLM class in src/llms/manus_llm.py (a simplified sketch follows the list):

  • ✅ LangChain-compatible invoke() and stream() methods
  • ✅ Task creation and polling logic
  • ✅ Response extraction from task output
  • ✅ API_KEY header authentication (not Bearer token)
  • ✅ Handles both dict and LangChain message objects
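
For context, a stripped-down sketch of what such a wrapper can look like. It reuses the run_manus_task helper from the previous snippet; the message normalization and the word-by-word stream() simulation are illustrative rather than the exact contents of src/llms/manus_llm.py:

```python
from langchain_core.messages import AIMessage

class ManusLLM:
    """Duck-typed, LangChain-style client for the task-based Manus API (sketch)."""

    def _to_prompt(self, messages) -> str:
        # Accept both plain dicts and LangChain message objects.
        parts = []
        for m in messages:
            content = m["content"] if isinstance(m, dict) else m.content
            parts.append(str(content))
        return "\n".join(parts)

    def invoke(self, messages, **kwargs) -> AIMessage:
        # Create a task, poll until it finishes, return the assistant reply.
        return AIMessage(content=run_manus_task(self._to_prompt(messages)))

    def stream(self, messages, **kwargs):
        # The task API returns a complete answer, so streaming is simulated by
        # yielding the finished reply in word-sized chunks (see Future Enhancements).
        for word in self.invoke(messages, **kwargs).content.split():
            yield AIMessage(content=word + " ")
```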

API Documentation

https://open.manus.ai/docs


Testing

✅ CLI Client

$ python src/client.py --task "What is 2+2?"
Response: 2+2 is 4.

✅ Web Interface

✅ Docker Containers

$ docker-compose up --build
✅ unified container (Backend + tools server)
✅ frontend container (Next.js dev server)

✅ Manus API Integration

  • Task creation tested
  • Polling and response extraction verified
  • Full conversation flow working

Breaking Changes

None - All changes are backwards compatible and additive.

  • Systems without Manus API can still use OpenAI, DeepSeek, Azure, or other providers
  • PlaceholderLLM still works when no API keys are configured
  • Existing functionality preserved

Files Changed

Modified: 15 files
Created: 7 files
Total Lines Changed: ~800+

Key Files:

  • src/llms/manus_llm.py (NEW) - 175 lines
  • BUGFIX_REPORT.md (NEW) - 413 lines
  • .env.example (NEW) - 46 lines
  • src/agents/nodes/__init__.py (NEW)
  • README.md - Updated configuration section
  • Multiple workflow and frontend fixes

Documentation

New Documentation Files

  1. BUGFIX_REPORT.md - Comprehensive analysis:

    • All 17 bugs with severity levels
    • Root cause analysis
    • Testing results and statistics
    • Recommendations for future work
  2. .env.example - Configuration template:

    • BASIC, REASONING, VISION LLM configs
    • Azure OpenAI alternatives
    • Browser configuration options
    • Detailed explanatory comments
  3. Updated README.md:

    • Detailed API key setup instructions
    • Supported LLM providers
    • Corrected CLI usage examples
    • Fixed docker-compose examples

Commit History

All 18 commits follow conventional commit format and are organized by functionality:

Phase 1: Infrastructure (commits 1-3)
Phase 2: Backend Workflow (commits 4-8)
Phase 3: LLM Integration (commit 9)
Phase 4: Frontend (commits 10-12)
Phase 5: CLI Client (commit 13)
Phase 6: Manus API Feature (commit 14)
Phase 7: Documentation (commits 15-16)
Phase 8: Project Docs (commit 17)
Phase 9: Maintenance (commit 18)

Each commit is independently reviewable and focused on a single concern.


Future Enhancements

Recommendations for follow-up work:

  1. Multi-turn Conversations: Implement task continuation using taskId parameter
  2. True Streaming: Consider WebSocket for real-time streaming (currently word-splitting simulation)
  3. Rate Limiting: Handle Manus API rate limits gracefully
  4. Vision Support: Implement attachment handling for Manus vision capabilities
  5. Connector Integration: Explore Manus connectors for enhanced capabilities
  6. Testing Suite: Add unit tests for ManusLLM and integration tests

Checklist

  • ✅ All bugs fixed and tested
  • ✅ Manus API integration implemented and tested
  • ✅ Documentation complete and comprehensive
  • ✅ Conventional commit format used
  • ✅ No breaking changes
  • ✅ Code follows existing patterns
  • ✅ .gitignore updated appropriately
  • ✅ .env excluded (only .env.example committed)

Questions?

Feel free to ask questions or request clarification on any of the changes. All fixes are documented in detail in BUGFIX_REPORT.md.

Thank you for considering this contribution!

Commit Messages

fix(docker): Resolve shell script line ending issues (CRLF → LF)

- Add .gitattributes to enforce LF line endings for shell scripts
- Change Dockerfile CMD from sh to bash for better compatibility
- Convert start.sh from CRLF to LF line endings

Fixes container startup failures with ": not found" errors caused by
Windows CRLF line endings being incompatible with Unix bash.
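
For reference, a .gitattributes entry along these lines is enough to keep shell scripts on LF regardless of the committer's platform (the exact patterns used in this PR may differ):

```
# Keep shell scripts on LF so they run correctly inside the Linux container
*.sh text eol=lf
```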

fix(deps): Add missing langchain-openai dependency

Add langchain-openai>=0.2.0 to requirements.txt to support
OpenAI-compatible LLM clients.

Fixes ModuleNotFoundError when attempting to import ChatOpenAI.

fix(backend): Add missing __init__.py for nodes package

Create __init__.py with proper exports for all node functions to enable
correct module imports.

Fixes node module import failures in the agent workflow.

fix(workflow): Add missing Command imports in browser and planner nodes

Add 'from langgraph.types import Command' import to browser_node.py
and planner_node.py.

Part of fixing TypeError: Expected a Runnable, callable or dict.
Instead got an unsupported type: <class 'module'>

fix(coordinator): Add Command import and improve response handling

- Add 'from langgraph.types import Command' import
- Add type checking to handle both string and message object responses
- Only apply JSON repair to content that looks like JSON (starts with { or [)

Fixes multiple issues in coordinator_node:
- TypeError: Expected a Runnable, callable or dict
- AttributeError: 'str' object has no attribute 'content'
- Prevents corruption of plain text messages by json_repair

fix(workflow): Simplify graph configuration for Command-based routing

Remove invalid conditional edge syntax and let Command objects handle
routing instead. Simplified graph to only set entry point.

Fixes TypeError: add_edge() got an unexpected keyword argument 'condition'
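
To illustrate the pattern this commit moves to, here is a minimal, self-contained sketch of Command-based routing in LangGraph; the state shape and node names are placeholders, not the project's actual graph:

```python
from typing import TypedDict
from langgraph.graph import END, StateGraph
from langgraph.types import Command

class State(TypedDict):
    messages: list

def coordinator_node(state: State) -> Command:
    # Each node decides where control goes next by returning a Command,
    # so no conditional edges need to be declared on the graph itself.
    return Command(update={"messages": state["messages"]}, goto="planner")

def planner_node(state: State) -> Command:
    return Command(update={"messages": state["messages"]}, goto=END)

builder = StateGraph(State)
builder.add_node("coordinator", coordinator_node)
builder.add_node("planner", planner_node)
builder.set_entry_point("coordinator")   # the only wiring the graph needs
graph = builder.compile()
```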

fix(prompts): Correct template regex and add TEAM_MEMBERS support

- Fix regex backreference from r"{1}" to r"{\1}"
- Add TEAM_MEMBERS variable support in apply_prompt_template
- Improve dynamic template variable handling

Fixes IndexError: tuple index out of range and KeyError: 'TEAM_MEMBERS'
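
The backreference difference is easy to see in isolation; the pattern and template below are only an illustration, not the project's actual prompt-template regex:

```python
import re

template = "Hello <NAME>, your team: <TEAM_MEMBERS>."   # hypothetical marker syntax

# Correct: \1 re-inserts the captured variable name inside literal braces.
re.sub(r"<(\w+)>", r"{\1}", template)
# -> 'Hello {NAME}, your team: {TEAM_MEMBERS}.'

# Broken: "{1}" is literal text, so every placeholder collapses to "{1}" and a
# later formatting step can fail with the IndexError this commit describes.
re.sub(r"<(\w+)>", r"{1}", template)
# -> 'Hello {1}, your team: {1}.'
```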

fix(workflow): Handle both dict and Pydantic message objects

Change from dict access (msg["role"]) to attribute access (msg.role)
for proper handling of Pydantic ChatMessage objects.

Fixes TypeError: 'ChatMessage' object is not subscriptable
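
A defensive helper in the spirit of this fix (purely illustrative):

```python
def message_role(msg) -> str:
    # Raw dicts use key access; Pydantic ChatMessage objects expose attributes.
    return msg["role"] if isinstance(msg, dict) else msg.role
```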

fix(llm): Improve PlaceholderLLM and implement actual LLM client creation

- Add stream() method to PlaceholderLLM for compatibility
- Implement actual LLM client creation when API keys are configured
- Add ManusLLM detection and routing for Manus API
- Import ManusLLM class

Fixes:
- AttributeError: 'PlaceholderLLM' object has no attribute 'stream'
- Issue where system used placeholder even with valid API credentials
- Enables Manus API integration

fix(frontend): Implement ChatInput event handlers

Complete rewrite of ChatInput component to add:
- useState for input value management
- onChange handler for input changes
- onClick handler for send button
- onKeyPress handler for Enter key submission

Fixes non-functional send button - UI was completely non-interactive.

fix(frontend): Connect UI to backend API with SSE streaming

Implement handleSendMessage function with:
- Fetch to /api/chat/stream endpoint
- SSE stream parsing
- Python dict format extraction (HumanMessage content parsing)
- Comprehensive error handling with helpful messages

Fixes network error and enables actual communication with backend.

fix(frontend): Improve ChatDisplay styling and UX

- Add message bubbles with color distinction for user/assistant
- Add empty state message
- Improve spacing and readability
- Add proper scrolling behavior

Improves user experience with better visual presentation.

fix(cli): Update client.py for current API endpoint

Complete rewrite of CLI client:
- Update port from 5000 to 8000
- Change endpoint from /task to /api/chat/stream
- Add SSE stream parsing
- Add response extraction from Python dict format

Fixes broken CLI client that couldn't connect to current API.
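
A compact sketch of what such a client boils down to; the port and endpoint come from this commit, while the request payload shape and the SSE terminator are assumptions:

```python
import requests

def send_task(task: str, base_url: str = "http://localhost:8000") -> None:
    # Stream the reply from the chat endpoint the backend now exposes.
    resp = requests.post(
        f"{base_url}/api/chat/stream",
        json={"messages": [{"role": "user", "content": task}]},  # payload shape assumed
        stream=True,
    )
    resp.raise_for_status()

    for raw in resp.iter_lines(decode_unicode=True):
        # SSE frames arrive as lines prefixed with "data: ".
        if not raw or not raw.startswith("data: "):
            continue
        data = raw[len("data: "):]
        if data == "[DONE]":          # terminator assumed; the real sentinel may differ
            break
        print(data, end="", flush=True)
    print()

if __name__ == "__main__":
    send_task("What is 2+2?")
```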

feat(llm): Implement Manus API custom client

Create ManusLLM class for Manus task-based API:
- Implements LangChain-compatible invoke() and stream() methods
- Handles task creation via POST /v1/tasks
- Implements polling for task completion via GET /v1/tasks/{task_id}
- Extracts responses from task output array
- Uses API_KEY header authentication (not Bearer token)
- Handles both dict and LangChain message objects

Manus API is task-based (not OpenAI-compatible chat API), requiring
custom implementation. Maintains compatibility with existing agent workflow.

API Documentation: https://open.manus.ai/docs

docs: Create comprehensive .env.example template

Add .env.example with:
- BASIC, REASONING, and VISION LLM configuration
- Azure OpenAI alternative configuration
- Browser configuration options
- Detailed comments explaining each section

Provides clear template for users to configure API keys.

docs: Update README with detailed configuration guide

Update Configuration section with:
- Detailed API key setup instructions
- Supported LLM providers list
- Fixed docker-compose example
- Updated API documentation
- Corrected CLI usage examples

Improves documentation accuracy and completeness for new users.

docs: Add comprehensive bug fix report (BUGFIX_REPORT.md)

Add detailed documentation of all bugs encountered and fixed:
- 17 bugs fixed (categorized by severity and component)
- 1 new feature (Manus API integration)
- Root cause analysis for each issue
- Testing results and statistics
- Recommendations for future work

Provides complete reference for maintainers and contributors.

chore: Update .gitignore to exclude IDE files and planning docs

- Add .claude/ directory (IDE-specific)
- Add COMMIT_PLAN.md and NEXT_STEPS.md (internal planning documents)