A minimal human-in-the-loop agent built with LangGraph and the MCP (Model Context Protocol). It streams responses, gates sensitive tool calls for human approval, and uses a Dockerized MCP filesystem server to safely access files.
## Features

- Human review for protected tools: `create_directory`, `edit_file`, `move_file`, `write_file`.
- Streaming output with clear tool-call annotations.
- YOLO mode toggle to skip human approvals when desired.
- MCP filesystem server via Docker, sandboxed to a configurable workspace.
## Requirements

- Python: 3.13+
- Docker: installed and running (pulls the `mcp/filesystem` image automatically)
- OpenAI API key: for the default `ChatOpenAI` model
## Setup

```bash
cd /Users/m_warid/Desktop/dev/human-in-the-loop
python -m venv .venv && source .venv/bin/activate
pip install -U pip
# Project deps + a small runtime extra used by the CLI
pip install -e . nest_asyncio
```

## Architecture

```mermaid
flowchart LR
start((START))
finish((END))
assistant["assistant_node<br/>LLM: respond or call tools"]
tools["Tools (ToolNode wrapper)"]
human["human_tool_review_node<br/>interrupt() and wait"]
d_toolcalls{Tool calls present?}
d_yolo{yolo_mode?}
d_protected{Any protected tool?}
d_review{Review action?}
legend["protected_tools:<br/>- create_directory<br/>- edit_file<br/>- move_file<br/>- write_file"]
start --> assistant
assistant --> d_toolcalls
d_toolcalls -->|no| finish
d_toolcalls -->|yes| d_yolo
d_yolo -->|yes| tools
d_yolo -->|no| d_protected
d_protected -->|yes| human
d_protected -->|no| tools
tools -->|after tool result| assistant
human --> d_review
d_review -->|continue / update| tools
d_review -->|feedback / reject / default| assistant
assistant -.-> legend
```
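The routing above fits in a few lines. Below is a minimal sketch of the decision logic, assuming the state fields and node names shown in the diagram; the LLM call, tool binding, and graph wiring in `src/goop/graph.py` are elided.

```python
# Minimal sketch of the routing in the diagram; node names match the diagram.
# LLM invocation, ToolNode wiring, and compile() are elided for brevity.
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import END
from langgraph.graph.message import add_messages
from langgraph.types import Command, interrupt

PROTECTED_TOOLS = {"create_directory", "edit_file", "move_file", "write_file"}

class State(TypedDict):
    messages: Annotated[list, add_messages]
    yolo_mode: bool

def route_after_assistant(state: State) -> str:
    """The three decision diamonds that follow assistant_node."""
    last = state["messages"][-1]
    calls = getattr(last, "tool_calls", None)
    if not calls:
        return END                        # no tool calls: finish
    if state.get("yolo_mode"):
        return "tools"                    # YOLO mode: skip review
    if any(c["name"] in PROTECTED_TOOLS for c in calls):
        return "human_tool_review_node"   # at least one protected call
    return "tools"

def human_tool_review_node(state: State) -> Command:
    """Pause with interrupt() and route based on the reviewer's decision."""
    tool_call = state["messages"][-1].tool_calls[0]
    decision = interrupt({"tool_call": tool_call})  # resumes with reviewer input
    if decision.get("action") in ("continue", "update"):
        return Command(goto="tools")       # arg editing for "update" is elided
    return Command(goto="assistant_node")  # feedback / reject / default
```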
## Configuration

Set the workspace directory that the MCP filesystem server can access, and your OpenAI key:

```bash
export WORKSPACE="/absolute/path/you/want/to/expose" # e.g., $PWD
export OPENAI_API_KEY="sk-..."
```

## Run

```bash
python -m frontend.chat_local
```

- The app prints assistant messages as they stream.
- When a protected tool is requested, you'll be prompted to choose: `reject`, `continue`, `update`, or `feedback`.
- Type `exit` or `quit` at the user prompt to stop.
## How it works

- `src/goop/graph.py` builds a LangGraph agent bound to tools discovered from the MCP servers (via `MultiServerMCPClient`).
- Protected tools trigger an interrupt handled in a human review node; your decision controls whether and how the tool executes.
- `frontend/chat_local.py` runs the graph, streams tokens, and manages the human-approval loop (a sketch follows this list).
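A sketch of that approval loop, assuming the compiled graph exported by `src/goop/graph.py` and the `{"action": ...}` resume payload shown in the review-node sketch above; token-level streaming and printing are simplified.

```python
# Sketch of the CLI loop: stream a turn, and if the graph paused on an
# interrupt(), ask the human for a decision and resume with it.
from langgraph.types import Command

config = {"configurable": {"thread_id": "demo"}}  # checkpointing key

def drain(stream) -> None:
    for state in stream:                  # stream_mode="values" yields full state
        state["messages"][-1].pretty_print()

def run_turn(graph, user_input: str) -> None:
    drain(graph.stream({"messages": [("user", user_input)]},
                       config, stream_mode="values"))
    while graph.get_state(config).next:   # non-empty while an interrupt is pending
        action = input("reject / continue / update / feedback > ").strip()
        drain(graph.stream(Command(resume={"action": action}),
                           config, stream_mode="values"))
```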
## Project layout

```
/frontend/
  chat_local.py    # CLI runner with streaming + approvals
/src/goop/
  graph.py         # LangGraph definition and human review node
  config.py        # Loads and resolves env vars in mcp_config.json
  mcp_config.json  # Dockerized MCP filesystem server binding ${WORKSPACE}
pyproject.toml     # Project metadata and dependencies
README.md
```
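For reference, a hypothetical `mcp_config.json` in the connection format accepted by `MultiServerMCPClient` might look like the following; the exact keys and mount paths in this repo may differ.

```json
{
  "filesystem": {
    "transport": "stdio",
    "command": "docker",
    "args": [
      "run", "-i", "--rm",
      "--mount", "type=bind,src=${WORKSPACE},dst=/projects/workspace",
      "mcp/filesystem", "/projects/workspace"
    ]
  }
}
```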
## Security notes

- Dockerized tool sandbox: The filesystem tool runs inside a Docker container (`mcp/filesystem`) with a bind mount only to `WORKSPACE`. The agent cannot read or write outside that path.
- Human approval for sensitive ops: `create_directory`, `edit_file`, `move_file`, and `write_file` are gated by a human review node unless YOLO mode is enabled.
- Fail-closed env handling: `src/goop/config.py` resolves `${VAR}` placeholders and raises if required env vars are missing (a sketch follows this list).
- No arbitrary shell exec: Tools are discovered from MCP servers; there is no direct shell execution path in the agent code.
- Checkpointed sessions: LangGraph checkpointing (with `thread_id`) makes the flow resumable and auditable.
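A minimal sketch of that fail-closed resolution, assuming the behavior described above; the function names here are illustrative, not the repo's actual API.

```python
# Illustrative fail-closed ${VAR} resolution: every placeholder must resolve,
# otherwise startup aborts with a clear error.
import json
import os
import re

_PLACEHOLDER = re.compile(r"\$\{(\w+)\}")

def resolve_env(text: str) -> str:
    """Replace each ${VAR} with os.environ[VAR]; raise if VAR is unset."""
    def sub(match: re.Match) -> str:
        name = match.group(1)
        value = os.environ.get(name)
        if value is None:
            raise RuntimeError(f"Required environment variable {name} is not set")
        return value
    return _PLACEHOLDER.sub(sub, text)

def load_mcp_config(path: str = "mcp_config.json") -> dict:
    with open(path) as f:
        return json.loads(resolve_env(f.read()))
```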
## Docker

Example run (build the image first):

```bash
# Build image (create a Dockerfile suited to your environment)
docker build -t goop .

# Run: pass through the Docker socket and mount the workspace
docker run -it --rm \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -e WORKSPACE="$(pwd)" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)":"$(pwd)" \
  -w "$(pwd)" \
  goop
```

## Future improvements

- Parallel tool call review: Currently only one tool call can be reviewed at a time. Implement a tool queue system to handle multiple simultaneous tool calls.
- Additional MCP servers: Integrate more MCP servers (database, web search, code analysis, etc.) beyond just filesystem operations.
- Persistent checkpointing: Replace `MemorySaver` with database-backed checkpointing (SQLite, PostgreSQL) for session persistence across restarts (see the sketch after this list).
- Multi-model support: Add configuration for switching between OpenAI, Anthropic, local models (Ollama), and other providers.
- Custom tool definitions: Allow users to define custom tools and their protection levels via configuration.
- Granular tool permissions: Replace binary protected/unprotected with role-based permissions (read-only, admin, specific operations).
- Audit logging: Log all tool calls, approvals, and rejections with timestamps and reasoning for compliance.
- Workspace isolation per session: Create isolated workspace directories per `thread_id` to prevent cross-session file access.
- Resource limits: Add CPU, memory, and disk usage limits to the Docker containers.
- Tool execution timeouts: Implement timeouts for long-running tool operations.
- Web interface: Replace CLI with a web-based chat interface with rich tool call previews and approval workflows.
- Mobile support: Responsive web UI for mobile approval workflows.
- Tool call previews: Show detailed previews of what each tool will do before requiring approval.
- Approval templates: Pre-configured approval rules (e.g., "always approve read operations").
- Session management: List, resume, and manage multiple conversation threads.
- Metrics dashboard: Track tool usage, approval rates, session duration, and error rates.
- Integration with observability tools: OpenTelemetry tracing, Prometheus metrics, structured logging.
- Performance profiling: Monitor and optimize graph execution times and memory usage.
- Health checks: API endpoints for monitoring the health of MCP servers and the main application.
- Dynamic MCP server management: Add/remove MCP servers without restarting the application.
- Environment-specific configs: Development, staging, production configuration profiles.
- Kubernetes deployment: Helm charts and manifests for container orchestration.
- API mode: REST/GraphQL API for programmatic access alongside the chat interface.
- Plugin architecture: Allow third-party plugins to extend functionality without core changes.
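For the persistent-checkpointing item above, a sketch using the optional `langgraph-checkpoint-sqlite` package; the placeholder graph stands in for the real builder in `src/goop/graph.py`.

```python
# Swap MemorySaver for SQLite-backed checkpointing so sessions survive restarts.
import sqlite3

from langgraph.checkpoint.sqlite import SqliteSaver
from langgraph.graph import StateGraph, MessagesState

builder = StateGraph(MessagesState)              # stand-in for the real builder
builder.add_node("assistant_node", lambda s: s)  # placeholder node
builder.set_entry_point("assistant_node")

conn = sqlite3.connect("checkpoints.db", check_same_thread=False)
graph = builder.compile(checkpointer=SqliteSaver(conn))

# Any prior thread_id can now be resumed after a process restart:
graph.invoke({"messages": []}, {"configurable": {"thread_id": "demo"}})
```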
## Troubleshooting

- Environment variable not set: If a required var in `mcp_config.json` (e.g., `WORKSPACE`) is missing, startup raises a clear error.
- Docker errors: Ensure Docker is running and can pull/run the `mcp/filesystem` image.
- OpenAI auth: Set `OPENAI_API_KEY`. To use a local model instead, switch to `ChatOllama` in `src/goop/graph.py` (a sketch follows).
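For the local-model option, a sketch assuming the `langchain-ollama` package and a running Ollama daemon; the model name is illustrative.

```python
# Drop-in replacement for ChatOpenAI in src/goop/graph.py (model name illustrative)
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1", temperature=0)
```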