Local-First AI Workspace powered by MLX, DrawThings, and the MCP Protocol
A modular, full-stack platform for creativity tools, academic assistants, and multi-agent workflows. Designed to run entirely local-first with optional cloud provider integrations.
- 🤖 Local-First AI — Image generation (DrawThings), LLM (MLX-RAG-Lab), and embeddings run entirely offline
- 🧩 Modular Apps — 13 independent micro-apps: Idea Lab, Image Booth, HugginPapers, Kanban, Planner, Character Lab, Calendar AI, Workflows, and more
- 🔌 MCP Protocol — Backend executes via Model Context Protocol for tool orchestration
- 🎨 Design System — Style Dictionary tokens with semantic spacing and consistent theming
- ☁️ Cloud-Optional — Boots with zero API keys; cloud providers are opt-in enhancements
```bash
git clone https://github.com/KBLLR/gen-idea-lab
cd gen-idea-lab
npm install
npm run dev
```

URLs:
- Frontend: http://localhost:3000
- Backend: http://localhost:8081
Create `.env` with:

```bash
# Required for OAuth
GOOGLE_CLIENT_ID=your_client_id
GOOGLE_CLIENT_SECRET=your_client_secret

# Required for session/encryption
SESSION_SECRET=random_secret
ENCRYPTION_KEY=$(openssl rand -hex 32)

# Optional: Cloud AI providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-...
HUME_API_KEY=...

# Figma OAuth (server uses FIGMA_CLIENT_SECRET from env; client ID can be public)
FIGMA_CLIENT_ID=gD6UQSwun8TeikH2ZONT56
FIGMA_CLIENT_SECRET=your_figma_client_secret
```

Development Mode (Auth Bypass):

```bash
# Require login by default; set to false to allow the bypass flags below
REQUIRE_AUTH=true
# AUTH_BYPASS=1        # Backend: bypass requireAuth middleware
# VITE_AUTH_BYPASS=1   # Frontend: auto-authenticate as demo user
```

Secrets check (optional preflight):

```bash
python .secretsbank/check_required_secrets.py --house tier2-orchestrator
```
Each app is independent and communicates only through the centralized Zustand store:
```
src/apps/
├── home/            # Dashboard
├── ideaLab/         # Multi-agent academic assistant
├── imageBooth/      # AI image transformations
├── hugginPapers/    # Research paper explorer
├── kanban/          # Task management
├── planner/         # Graph-based planning
├── workflows/       # Reusable AI workflows
└── [10 more apps]   # Character Lab, Calendar AI, Archiva, etc.
```
Core Principles:
- ✅ Apps declare UI via layout slots (left/right panes)
- ✅ Zero prop-passing between apps
- ✅ All state read via `useStore.use.sliceName()` selectors
- ✅ All mutations via `useStore.use.actions().actionName()`
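The store pattern can be illustrated without React or Zustand. `createSelectors` below is a hypothetical, framework-free stand-in (not the project's actual implementation) for how `useStore.use.sliceName()` exposes one auto-generated selector per slice:

```javascript
// Framework-free sketch of the `useStore.use.sliceName()` pattern.
// `createSelectors` and the slice contents are illustrative; the real
// store is built with Zustand.
function createSelectors(state) {
  const use = {};
  for (const key of Object.keys(state)) {
    use[key] = () => state[key]; // one auto-generated selector per slice
  }
  return { getState: () => state, use };
}

// Example slices: a kanban slice plus the shared actions slice.
const store = createSelectors({
  kanban: { columns: ['todo', 'doing', 'done'] },
  actions: { addCard: (title) => ({ type: 'ADD_CARD', title }) },
});

// Apps read state and mutate only through selectors/actions:
const columns = store.use.kanban().columns;
const action = store.use.actions().addCard('Write docs');
```

Because every read goes through a selector and every write through the `actions` slice, no app ever needs a prop from another app.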
```
server/
├── routes/
│   ├── mcpTools.js        # MCP backend execution
│   ├── auth.js            # Google OAuth
│   ├── models.js          # AI model discovery
│   └── [12 more routes]   # Services, kanban, rigging, etc.
├── lib/
│   ├── authMiddleware.js
│   └── encryptionUtils.js
└── index.js               # Express server
```
API Conventions:
- All routes are prefixed with `/api/`
- Protected routes use the `requireAuth` middleware
- Errors return `{ error: message }` with proper status codes
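A minimal sketch of these conventions; the helper names (`errorResponse`, `withAuth`) are illustrative, not the project's actual middleware:

```javascript
// Hypothetical sketch of the API error convention: every failure
// resolves to `{ error: message }` plus an explicit HTTP status code.
function errorResponse(status, message) {
  return { status, body: { error: message } };
}

// A handler wrapper in the same spirit as requireAuth: reject
// unauthenticated requests before the handler runs.
function withAuth(handler) {
  return (req) =>
    req.user ? handler(req) : errorResponse(401, 'Authentication required');
}

// Usage sketch for a protected route:
const getModels = withAuth((req) => ({ status: 200, body: { models: [] } }));
getModels({});               // status 401, body { error: ... }
getModels({ user: 'demo' }); // status 200, body { models: [] }
```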
```
┌───────────────────────────────┐
│        FRONTEND (Vite)        │
│  React Apps (IdeaLab, etc.)   │
│   Zustand Store + Actions     │
│                               │
│      LLM/Image Actions        │
│              │                │
│              ▼                │
│        FE MCP Client          │
│    POST /api/mcp/execute      │
└───────────────┬───────────────┘
                │
                ▼
┌────────────────────────────────────────┐
│             BACKEND (Node)             │
│   apiRouter.js → /api/mcp/execute      │
│----------------------------------------│
│               MCP LAYER                │
│      mcpTools.js + tool registry       │
│                                        │
│  Tools:                                │
│   • llm_chat                           │
│   • image_generate                     │
│   • rag_query                          │
│   • rag_upsert                         │
│                                        │
│ (Legacy cloud routes remain but return │
│  501 intentionally)                    │
└───────────────┬────────────────────────┘
                │
                ▼
┌───────────────────────────────────────┐
│          LOCAL AI RUNTIMES            │
│---------------------------------------│
│ DrawThings Server  → image_generate   │
│ MLX LLM Runtime    → llm_chat         │
│ MLX-RAG-Lab        → rag_query        │
│                      rag_upsert       │
└───────────────────────────────────────┘
```
Current Implementation Status:
- ✅ Frontend MCP client complete
- ✅ Backend MCP layer with tool registry complete
- ✅ MCP tools return stub responses (Phase 2)
- ✅ Phase-4 Orchestrator complete (Smart Campus integration)
- 🚧 Local runtime connections in Phase 4 (in progress)
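The MCP layer can be sketched as a registry keyed by tool name, with stub handlers standing in for the local runtimes until Phase 4 lands. The tool names come from the diagram above; the signatures and return shapes are illustrative, not the real `mcpTools.js`:

```javascript
// Illustrative sketch of the MCP tool registry. Stub handlers echo
// their inputs; Phase 4 will replace them with calls to the local
// runtimes (MLX, MLX-RAG-Lab, DrawThings).
const tools = new Map([
  ['llm_chat',       async ({ messages }) => ({ ok: true, stub: true, echo: messages })],
  ['image_generate', async ({ prompt })   => ({ ok: true, stub: true, prompt })],
  ['rag_query',      async ({ query })    => ({ ok: true, stub: true, query, chunks: [] })],
  ['rag_upsert',     async ({ docs })     => ({ ok: true, stub: true, upserted: docs.length })],
]);

// Backend entry point in the spirit of POST /api/mcp/execute.
async function executeTool(name, args) {
  const tool = tools.get(name);
  if (!tool) return { error: `Unknown tool: ${name}` }; // API error convention
  return tool(args);
}
```

A request such as `executeTool('rag_upsert', { docs: [...] })` resolves to a stub acknowledgement until the runtimes are wired in.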
The Phase-4 orchestrator provides Smart Campus-aware AI by fusing RAG, LLM, and Smart Campus providers into unified endpoints.
```
Tier-1 UIs (Smart Campus, CLIs)
        │
        ▼
┌───────────────────────────────┐
│     Tier-2: Orchestrator      │  ← Phase-4 Layer
│  • Room-aware query fusion    │
│  • Structured context[]       │
│  • HTDI metadata              │
│  • Health aggregation         │
└──────────┬────────────────────┘
           │
    ┌──────┼──────┐
    ▼      ▼      ▼
┌────────┬────────┬─────────┐
│Tier-3A │Tier-3B │ Tier-3C │
│  MLX   │  RAG   │  Smart  │
│  LLM   │ Engine │ Campus  │
└────────┴────────┴─────────┘
```
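Room-aware query fusion, the first Tier-2 responsibility listed above, can be sketched as merging RAG chunks and Smart Campus room state into one ordered `context[]`. Field names not pinned down by the contract examples (`source`, `text`, the per-entry `type`) are assumptions:

```javascript
// Sketch of Tier-2 fusion: combine RAG chunks and Smart Campus room
// state into a single ordered context[] handed to the LLM. Field
// names are illustrative where the contract does not specify them.
function fuseContext({ ragChunks = [], room = null }) {
  const context = [];
  for (const chunk of ragChunks) {
    context.push({ type: 'rag', source: chunk.source, text: chunk.text });
  }
  if (room) {
    context.push({
      type: 'room',
      id: room.id,
      text: `Room ${room.id} has ${room.entities.length} entities`,
    });
  }
  return context;
}

const context = fuseContext({
  ragChunks: [{ source: 'docs/peace.md', text: 'Peace room overview' }],
  room: { id: 'peace', entities: [{ id: 'light_1' }] },
});
```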
Queries AI with Smart Campus room and entity context:

```bash
curl -X POST http://localhost:8081/orchestrate/room_query \
  -H "Content-Type: application/json" \
  -d '{
    "requestId": "req_001",
    "source": "smart-campus",
    "timestamp": "2025-11-20T12:00:00Z",
    "room": "peace",
    "query": "What is the current state of this room?",
    "includeRag": true,
    "includeEntities": true
  }'
```

Response:

```jsonc
{
  "ok": true,
  "answer": "The Peace room currently has...",
  "ragContext": [...],   // RAG documentation chunks
  "roomContext": {       // Smart Campus context
    "id": "peace",
    "entities": [...]
  },
  "htdi": {              // Phase-4 metadata
    "providersUsed": {...},
    "contextUsage": {...}
  },
  "latencyMs": 1024.5
}
```
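For programmatic callers, the same envelope can be assembled in Node. `buildRoomQuery` is a hypothetical helper, not part of the codebase; its field names mirror the curl example:

```javascript
// Hypothetical builder for the /orchestrate/room_query payload.
// The envelope fields mirror the curl example above.
let seq = 0;
function buildRoomQuery(room, query, opts = {}) {
  return {
    requestId: `req_${String(++seq).padStart(3, '0')}`,
    source: opts.source ?? 'web-ui',
    timestamp: new Date().toISOString(),
    room,
    query,
    includeRag: opts.includeRag ?? true,
    includeEntities: opts.includeEntities ?? true,
  };
}

// POST it with the global fetch (Node 18+); sketched, not executed here:
// await fetch('http://localhost:8081/orchestrate/room_query', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildRoomQuery('peace', 'What is the current state of this room?')),
// });
```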
Standard chat with optional RAG:

```bash
curl -X POST http://localhost:8081/orchestrate/chat \
  -H "Content-Type: application/json" \
  -d '{
    "requestId": "req_002",
    "source": "web-ui",
    "timestamp": "2025-11-20T12:05:00Z",
    "messages": [
      {"role": "user", "content": "How do I set up MLX?"}
    ],
    "useRag": true,
    "ragCollection": "documentation"
  }'
```

Check health of all providers:

```bash
curl http://localhost:8081/health
```

Response:
```json
{
  "ok": true,
  "status": "healthy",
  "providers": {
    "mlx": {"ok": true, "models_healthy": true, "latencyMs": 12.3},
    "rag": {"ok": true, "latencyMs": 8.7},
    "smartCampus": {"ok": true, "latencyMs": 15.2}
  }
}
```

Add to `.env`:
```bash
# Tier-3 Provider URLs
MLX_URL=http://localhost:8000           # MLX LLM server
RAG_URL=http://localhost:5100           # RAG engine
SMART_CAMPUS_URL=http://localhost:5200  # Smart Campus service

# Defaults
DEFAULT_LLM_MODEL=mlx-qwen2.5-7b
DEFAULT_RAG_COLLECTION=smart-campus-docs
```

1. Start MLX LLM Server (Tier-3A):

```bash
cd ../mlx-openai-server-lab
python server.py --port 8000
```

2. Start RAG Engine (Tier-3B):

```bash
cd ../mlx-rag-lab
python app.py --port 5100
```

3. Start Smart Campus Service (Tier-3C, if available):

```bash
cd ../smart-campus-service
# Follow service-specific instructions
```

- PHASE4_ORCHESTRATOR_CONTRACT.md — Complete API specification
- Phase-4 Protocol Types — TypeScript/JSDoc types
- Provider Implementations — MLX, RAG, Smart Campus, Orchestrator
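The Tier-3 provider URLs and defaults above can be resolved in code with env-var fallbacks. This sketch hard-codes the same defaults as the `.env` example and is not the project's actual config loader:

```javascript
// Sketch: resolve Tier-3 provider settings from the environment,
// falling back to the defaults shown in the .env example above.
function providerConfig(env = process.env) {
  return {
    mlx: env.MLX_URL ?? 'http://localhost:8000',
    rag: env.RAG_URL ?? 'http://localhost:5100',
    smartCampus: env.SMART_CAMPUS_URL ?? 'http://localhost:5200',
    model: env.DEFAULT_LLM_MODEL ?? 'mlx-qwen2.5-7b',
    collection: env.DEFAULT_RAG_COLLECTION ?? 'smart-campus-docs',
  };
}
```

`providerConfig()` with no argument reads `process.env`, so values from `.env` win whenever they are set.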
| File | Purpose |
|---|---|
| `src/shared/lib/store.js` | Zustand store (single source of truth) |
| `src/shared/lib/routes.js` | React Router configuration |
| `src/shared/data/appManifests.js` | App metadata for dashboard |
| `src/shared/data/serviceConfigs.js` | Service registry (icons, colors, configs) |
| `server/apiRouter.js` | Route aggregator |
| `CLAUDE.md` | Complete architectural guide for AI assistants |
```bash
# Development
npm run dev            # Full stack (Vite + Express)
npm run dev:client     # Frontend only
npm run dev:server     # Backend only

# Testing
npm test               # Jest tests
npm run test:ui        # Vitest UI tests
npm run test:ui:watch  # UI tests (watch mode)

# Build
npm run build          # Production build
npm run preview        # Preview production build

# Design System
npm run tokens:build   # Generate CSS tokens
npm run tokens:watch   # Watch token changes
npm run ds:check       # Audit for hardcoded pixels

# Utilities
npm run storybook      # Component library
```

✅ Phase 0-3 Complete:
- Frontend defaults to local providers (DrawThings, MCP)
- Backend MCP stubs in place
- Service registry centralized
- Cloud providers fully optional
- All apps crash-proof and loading correctly
🚧 Phase 4 In Progress:
- Connect MCP → MLX-RAG-Lab runtime
- Connect MCP → DrawThings server
- Add streaming support
- Remove legacy cloud dependencies
Branch Naming: `feature/*`, `fix/*`, `claude/*`

Before Committing:

```bash
npm run build   # Must pass
npm test        # Must pass
```

Key Rules:
- Follow the local-first principle (no required cloud dependencies)
- Update `CHANGELOG.md` for significant changes
- Use Zustand store patterns (see `CLAUDE.md`)
- No direct prop-passing between apps
- CLAUDE.md — Complete architectural guide
- DATA_FLOW_ARCHITECTURE.md — Data contracts & patterns
- OAUTH_SETUP.md — Service integration guide
- .gemini/project-overview.md — 700+ line deep-dive
Apache-2.0
Local-First AI Workspace for Creativity & Learning