Connect with expert AI assistants for each subject and generate creative project ideas. A growing collection of micro-apps designed for agents with a human in the loop.


Gen-Idea-Lab

Local-First AI Workspace powered by MLX, DrawThings, and the MCP Protocol

A modular, full-stack platform for creativity tools, academic assistants, and multi-agent workflows. Designed to run entirely local-first with optional cloud provider integrations.


✨ Features

  • 🤖 Local-First AI — Image generation (DrawThings), LLM (MLX-RAG-Lab), and embeddings run entirely offline
  • 🧩 Modular Apps — 13 independent micro-apps: Idea Lab, Image Booth, HugginPapers, Kanban, Planner, Character Lab, Calendar AI, Workflows, and more
  • 🔌 MCP Protocol — Backend executes via Model Context Protocol for tool orchestration
  • 🎨 Design System — Style Dictionary tokens with semantic spacing and consistent theming
  • ☁️ Cloud-Optional — Boots with zero API keys; cloud providers are opt-in enhancements

🚀 Quick Start

git clone https://github.com/KBLLR/gen-idea-lab
cd gen-idea-lab
npm install
npm run dev

Environment Setup

Create .env with:

# Required for OAuth
GOOGLE_CLIENT_ID=your_client_id
GOOGLE_CLIENT_SECRET=your_client_secret

# Required for session/encryption
SESSION_SECRET=random_secret
ENCRYPTION_KEY=$(openssl rand -hex 32)

# Optional: Cloud AI providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-...
HUME_API_KEY=...

# Figma OAuth (server uses FIGMA_CLIENT_SECRET from env; client ID can be public)
FIGMA_CLIENT_ID=gD6UQSwun8TeikH2ZONT56
FIGMA_CLIENT_SECRET=your_figma_client_secret

Development Mode (Auth Bypass):

# Require login by default; set to false to allow the bypass flags below
REQUIRE_AUTH=true
# AUTH_BYPASS=1      # Backend: bypass requireAuth middleware
# VITE_AUTH_BYPASS=1 # Frontend: auto-authenticate as demo user

Secrets check (optional preflight):

python .secretsbank/check_required_secrets.py --house tier2-orchestrator

🏗️ Architecture

Micro-App System

Each app is independent and communicates only through the centralized Zustand store:

src/apps/
├── home/             # Dashboard
├── ideaLab/          # Multi-agent academic assistant
├── imageBooth/       # AI image transformations
├── hugginPapers/     # Research paper explorer
├── kanban/           # Task management
├── planner/          # Graph-based planning
├── workflows/        # Reusable AI workflows
└── [10 more apps]    # Character Lab, Calendar AI, Archiva, etc.

Core Principles:

  • ✅ Apps declare UI via layout slots (left/right panes)
  • ✅ Zero prop-passing between apps
  • ✅ All state via useStore.use.sliceName() selectors
  • ✅ All mutations via useStore.use.actions().actionName()
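To illustrate the convention, here is a tiny dependency-free store that mimics the `useStore.use.sliceName()` / `useStore.use.actions()` shape. It is illustrative only: the real app uses Zustand, and the `kanban` slice and `addTask` action below are hypothetical names:

```javascript
// Illustrative only: a minimal store mimicking the useStore.use.<slice>() convention.
// The real store is Zustand-based (src/shared/lib/store.js); names here are made up.
function createStore(initial, actions) {
  const state = { ...initial };
  const store = { use: {} };
  for (const key of Object.keys(initial)) {
    store.use[key] = () => state[key]; // one selector per slice
  }
  store.use.actions = () => {
    const bound = {};
    for (const [name, fn] of Object.entries(actions)) {
      bound[name] = (...args) => fn(state, ...args); // mutations go through actions
    }
    return bound;
  };
  return store;
}

const useStore = createStore(
  { kanban: { tasks: [] } },
  { addTask: (state, task) => state.kanban.tasks.push(task) }
);

useStore.use.actions().addTask({ id: 1, title: 'Wire up MCP' });
console.log(useStore.use.kanban().tasks.length); // 1
```

Apps never reach into each other's slices directly; they read via selectors and write via actions, which is what makes zero prop-passing workable.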

Backend Structure

server/
├── routes/
│   ├── mcpTools.js       # MCP backend execution
│   ├── auth.js           # Google OAuth
│   ├── models.js         # AI model discovery
│   └── [12 more routes]  # Services, kanban, rigging, etc.
├── lib/
│   ├── authMiddleware.js
│   └── encryptionUtils.js
└── index.js              # Express server

API Conventions:

  • All routes prefixed /api/
  • Protected routes use requireAuth middleware
  • Errors return { error: message } with proper status codes
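The error convention can be sketched as a plain handler that returns a status code plus an `{ error: message }` body (a hypothetical helper and route; the real middleware lives under `server/lib/`):

```javascript
// Sketch of the { error: message } convention: every failure pairs a proper
// HTTP status with a JSON body of that shape. Helper and route are hypothetical.
function errorResponse(status, message) {
  return { status, body: { error: message } };
}

function handleModelLookup(modelId, registry) {
  if (!modelId) return errorResponse(400, 'modelId is required');
  const model = registry[modelId];
  if (!model) return errorResponse(404, `Unknown model: ${modelId}`);
  return { status: 200, body: model };
}

console.log(handleModelLookup('missing', {}).status); // 404
```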

Data Flow Architecture

┌───────────────────────────────┐
│        FRONTEND (Vite)        │
│  React Apps (IdeaLab, etc.)   │
│  Zustand Store + Actions       │
│                               │
│  LLM/Image Actions             │
│        │                       │
│        ▼                       │
│  FE MCP Client                 │
│  POST /api/mcp/execute         │
└───────────────┬───────────────┘
                │
                ▼
┌────────────────────────────────────────┐
│             BACKEND (Node)             │
│      apiRouter.js → /api/mcp/execute   │
│----------------------------------------│
│              MCP LAYER                 │
│   mcpTools.js + tool registry          │
│                                        │
│   Tools:                               │
│     • llm_chat                         │
│     • image_generate                   │
│     • rag_query                        │
│     • rag_upsert                       │
│                                        │
│ (Legacy cloud routes remain but return │
│ 501 intentionally)                     │
└───────────────┬────────────────────────┘
                │
                ▼
┌───────────────────────────────────────┐
│          LOCAL AI RUNTIMES            │
│---------------------------------------│
│ DrawThings Server  → image_generate   │
│ MLX LLM Runtime    → llm_chat         │
│ MLX-RAG-Lab        → rag_query        │
│                       rag_upsert      │
└───────────────────────────────────────┘
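The frontend-to-backend hop in the diagram amounts to a single POST. A minimal request-builder sketch follows; the `{ tool, args }` payload shape is an assumption, so check the actual FE MCP client for the real contract:

```javascript
// Hypothetical payload builder for POST /api/mcp/execute.
// Tool names match the registry in the diagram; the body shape is assumed.
function buildMcpRequest(tool, args = {}) {
  const known = ['llm_chat', 'image_generate', 'rag_query', 'rag_upsert'];
  if (!known.includes(tool)) throw new Error(`Unknown MCP tool: ${tool}`);
  return {
    method: 'POST',
    url: '/api/mcp/execute',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ tool, args }),
  };
}

const req = buildMcpRequest('llm_chat', {
  messages: [{ role: 'user', content: 'hi' }],
});
console.log(JSON.parse(req.body).tool); // llm_chat
```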

Current Implementation Status:

  • ✅ Frontend MCP client complete
  • ✅ Backend MCP layer with tool registry complete
  • ✅ MCP tools return stub responses (Phase 2)
  • ✅ Phase-4 Orchestrator complete (Smart Campus integration)
  • 🚧 Local runtime connections in Phase 4 (in progress)

🏢 Phase-4 Smart Campus Orchestration

The Phase-4 orchestrator provides Smart Campus-aware AI by fusing RAG, LLM, and Smart Campus providers into unified endpoints.

Architecture

Tier-1 UIs (Smart Campus, CLIs)
        │
        ▼
┌───────────────────────────────┐
│   Tier-2: Orchestrator        │ ← Phase-4 Layer
│   • Room-aware query fusion   │
│   • Structured context[]      │
│   • HTDI metadata             │
│   • Health aggregation        │
└──────────┬────────────────────┘
           │
    ┌──────┼──────┐
    ▼      ▼      ▼
┌────────┬────────┬─────────┐
│Tier-3A │Tier-3B │ Tier-3C │
│  MLX   │  RAG   │ Smart   │
│  LLM   │ Engine │ Campus  │
└────────┴────────┴─────────┘

Endpoints

1. Room-Aware Query (POST /orchestrate/room_query)

Queries AI with Smart Campus room and entity context:

curl -X POST http://localhost:8081/orchestrate/room_query \
  -H "Content-Type: application/json" \
  -d '{
    "requestId": "req_001",
    "source": "smart-campus",
    "timestamp": "2025-11-20T12:00:00Z",
    "room": "peace",
    "query": "What is the current state of this room?",
    "includeRag": true,
    "includeEntities": true
  }'

Response:

{
  "ok": true,
  "answer": "The Peace room currently has...",
  "ragContext": [...],     // RAG documentation chunks
  "roomContext": {         // Smart Campus context
    "id": "peace",
    "entities": [...]
  },
  "htdi": {                // Phase-4 metadata
    "providersUsed": {...},
    "contextUsage": {...}
  },
  "latencyMs": 1024.5
}

2. Generic Chat (POST /orchestrate/chat)

Standard chat with optional RAG:

curl -X POST http://localhost:8081/orchestrate/chat \
  -H "Content-Type: application/json" \
  -d '{
    "requestId": "req_002",
    "source": "web-ui",
    "timestamp": "2025-11-20T12:05:00Z",
    "messages": [
      {"role": "user", "content": "How do I set up MLX?"}
    ],
    "useRag": true,
    "ragCollection": "documentation"
  }'

3. Aggregate Health (GET /health)

Check health of all providers:

curl http://localhost:8081/health

Response:

{
  "ok": true,
  "status": "healthy",
  "providers": {
    "mlx": {"ok": true, "models_healthy": true, "latencyMs": 12.3},
    "rag": {"ok": true, "latencyMs": 8.7},
    "smartCampus": {"ok": true, "latencyMs": 15.2}
  }
}
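The top-level `status` can presumably be derived by folding the per-provider `ok` flags (a sketch of one plausible aggregation; the orchestrator's real logic may differ):

```javascript
// Hypothetical health aggregation: healthy if all providers are ok,
// degraded if only some are, unhealthy if none respond.
function aggregateHealth(providers) {
  const results = Object.values(providers);
  const okCount = results.filter((p) => p.ok).length;
  const status =
    okCount === results.length ? 'healthy' : okCount > 0 ? 'degraded' : 'unhealthy';
  return { ok: okCount === results.length, status, providers };
}

const health = aggregateHealth({
  mlx: { ok: true, latencyMs: 12.3 },
  rag: { ok: true, latencyMs: 8.7 },
  smartCampus: { ok: false },
});
console.log(health.status); // degraded
```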

Configuration

Add to .env:

# Tier-3 Provider URLs
MLX_URL=http://localhost:8000        # MLX LLM server
RAG_URL=http://localhost:5100        # RAG engine
SMART_CAMPUS_URL=http://localhost:5200  # Smart Campus service

# Defaults
DEFAULT_LLM_MODEL=mlx-qwen2.5-7b
DEFAULT_RAG_COLLECTION=smart-campus-docs
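Reading these variables with fallbacks could look like the following (a sketch; the defaults simply mirror the values documented above, and the function name is hypothetical):

```javascript
// Read Tier-3 provider URLs and defaults from the environment,
// falling back to the documented local-first values.
function loadOrchestratorConfig(env = process.env) {
  return {
    mlxUrl: env.MLX_URL ?? 'http://localhost:8000',
    ragUrl: env.RAG_URL ?? 'http://localhost:5100',
    smartCampusUrl: env.SMART_CAMPUS_URL ?? 'http://localhost:5200',
    defaultModel: env.DEFAULT_LLM_MODEL ?? 'mlx-qwen2.5-7b',
    defaultCollection: env.DEFAULT_RAG_COLLECTION ?? 'smart-campus-docs',
  };
}

console.log(loadOrchestratorConfig({}).mlxUrl); // http://localhost:8000
```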

Provider Setup

1. Start MLX LLM Server (Tier-3A):

cd ../mlx-openai-server-lab
python server.py --port 8000

2. Start RAG Engine (Tier-3B):

cd ../mlx-rag-lab
python app.py --port 5100

3. Start Smart Campus Service (Tier-3C) (if available):

cd ../smart-campus-service
# Follow service-specific instructions

Documentation


📦 Key Files

| File | Purpose |
| --- | --- |
| src/shared/lib/store.js | Zustand store (single source of truth) |
| src/shared/lib/routes.js | React Router configuration |
| src/shared/data/appManifests.js | App metadata for dashboard |
| src/shared/data/serviceConfigs.js | Service registry (icons, colors, configs) |
| server/apiRouter.js | Route aggregator |
| CLAUDE.md | Complete architectural guide for AI assistants |

🛠️ Development Commands

# Development
npm run dev              # Full stack (Vite + Express)
npm run dev:client       # Frontend only
npm run dev:server       # Backend only

# Testing
npm test                 # Jest tests
npm run test:ui          # Vitest UI tests
npm run test:ui:watch    # UI tests (watch mode)

# Build
npm run build            # Production build
npm run preview          # Preview production build

# Design System
npm run tokens:build     # Generate CSS tokens
npm run tokens:watch     # Watch token changes
npm run ds:check         # Audit for hardcoded pixels

# Utilities
npm run storybook        # Component library

🎯 Current Status

✅ Phase 0-3 Complete:

  • Frontend defaults to local providers (DrawThings, MCP)
  • Backend MCP stubs in place
  • Service registry centralized
  • Cloud providers fully optional
  • All apps crash-proof and loading correctly

🚧 Phase 4 In Progress:

  • Connect MCP → MLX-RAG-Lab runtime
  • Connect MCP → DrawThings server
  • Add streaming support
  • Remove legacy cloud dependencies

🤝 Contributing

Branch Naming: feature/*, fix/*, claude/*

Before Committing:

npm run build    # Must pass
npm test         # Must pass

Key Rules:

  • Follow local-first principle (no required cloud dependencies)
  • Update CHANGELOG.md for significant changes
  • Use Zustand store patterns (see CLAUDE.md)
  • No direct prop-passing between apps

📚 Documentation


📄 License

Apache-2.0


Local-First AI Workspace for Creativity & Learning
