Welcome to Nestor, an AI-powered conversational agent system built with Letta and Chainlit. Nestor creates persistent, memory-enabled AI agents that can analyze codebases, answer technical questions, and provide insights about software architecture and implementation patterns.
Nestor is a sophisticated agent system that:
- Creates memory-enabled AI agents using Letta's stateful agent framework
- Analyzes software systems with deep understanding of code structure and logic
- Provides conversational interface through Chainlit for interactive Q&A
- Maintains context across conversations using persistent memory
- Accesses knowledge bases from Vamana's analysis outputs and archival stores
- Generates design insights including architecture patterns, logic flows, and domain models
The system bridges human developers with AI-powered code intelligence, making complex codebases more accessible and understandable.
Nestor is Phase 2 of a two-phase intelligent code analysis system. Phase 1 (Vamana) extracts knowledge from codebases, and Phase 2 (Nestor) loads and queries it.
```mermaid
graph TB
    subgraph "PHASE 1: Vamana Extraction"
        V1[📦 Git Repository]
        V2[🎯 CrewAI Agents]
        V3[📄 JSON Output]
    end
    subgraph "PHASE 2: Nestor Load & Query"
        N1[🔧 Agent Helper<br/>Loads JSON files]
        N2[🤖 Letta Agent<br/>AnalysisAgent]
        N3[💬 Chainlit UI<br/>localhost:8000]
        N2A[📝 Chat Memory]
        N2B[🔧 Custom Tools]
    end
    subgraph "Knowledge Storage"
        S1[📚 Archival Store<br/>PostgreSQL]
        S2[📁 Shared Folder<br/>File-based]
    end
    subgraph "User Interaction"
        U1[👨‍💻 Developer]
        U2[💡 AI Responses]
    end
    V1 -->|Analyzes| V2
    V2 -->|Generates| V3
    V3 -->|Reads| N1
    N1 -->|Creates| N2
    N1 -->|Option A| S1
    N1 -->|Option B| S2
    S1 -.->|Queries| N2
    S2 -.->|Searches| N2
    N2 --> N2A
    N2 --> N2B
    N2 <-->|Messages| N3
    U1 -->|Asks| N3
    N3 -->|Returns| U2
    classDef extraction fill:#4CAF50,stroke:#2E7D32,stroke-width:2px,color:#fff
    classDef load fill:#2196F3,stroke:#1565C0,stroke-width:3px,color:#fff
    classDef storage fill:#FF9800,stroke:#E65100,stroke-width:2px,color:#fff
    classDef user fill:#F44336,stroke:#C62828,stroke-width:2px,color:#fff
    class V1,V2,V3 extraction
    class N1,N2,N3,N2A,N2B load
    class S1,S2 storage
    class U1,U2 user
```
Nestor follows a Memory-Augmented Agent architecture:
```
┌───────────────────────────────────────────────────────┐
│                     Nestor System                     │
├───────────────────────────────────────────────────────┤
│                                                       │
│    ┌───────────────────────────────────────────┐      │
│    │             Chainlit UI Layer             │      │
│    │   - Conversational Interface              │      │
│    │   - Real-time Chat                        │      │
│    │   - Session Management                    │      │
│    └───────────────────────────────────────────┘      │
│                           │                           │
│    ┌───────────────────────────────────────────┐      │
│    │        Letta Agent (AnalysisAgent)        │      │
│    │   - Stateful Conversations                │      │
│    │   - Persistent Memory (ChatMemory)        │      │
│    │   - Tool Integration                      │      │
│    │   - Reasoning & Planning                  │      │
│    └───────────────────────────────────────────┘      │
│                           │                           │
│    ┌───────────────────────────────────────────┐      │
│    │             Knowledge Sources             │      │
│    │   - Archival Store (code analysis)        │      │
│    │   - Vamana Output (domain models)         │      │
│    │   - Custom Tools                          │      │
│    └───────────────────────────────────────────┘      │
│                                                       │
└───────────────────────────────────────────────────────┘
```
- Agent Manager (`agent_main.py`): Creates and manages Letta agents with memory and tools
- Chainlit Interface (`chainlit_conversational_agent.py`): Web-based chat interface
- Custom Tools (`custom_tools.py`): Specialized functions for agent capabilities (a sketch follows this list)
- Letta Server: Backend service managing agent state and memory (port 8283)
- Memory System: Persistent conversation history and context
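For illustration, a custom tool of the kind `custom_tools.py` provides might look like the sketch below. The tool name and body are hypothetical, and the registration call assumes the letta 0.5.x client API (`create_client`, `create_tool`):

```python
from letta import create_client


def summarize_domain_model(entity_name: str) -> str:
    """Summarize a domain entity from the loaded Vamana analysis.

    Args:
        entity_name: Name of the domain entity to summarize.

    Returns:
        str: A short summary of the entity.
    """
    # A real tool would query the archival store or shared folder;
    # this placeholder just echoes the request.
    return f"Summary for {entity_name} is not available in this sketch."


# Letta derives the tool's JSON schema from the signature and
# docstring, so typed arguments and an Args section matter.
client = create_client(base_url="http://localhost:8283")
tool = client.create_tool(summarize_domain_model)
print(f"Registered tool: {tool.name}")
```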
```mermaid
sequenceDiagram
    participant V as Vamana Output<br/>(JSON Files)
    participant H as Agent Helper<br/>(agent_main.py)
    participant L as Letta Agent<br/>(AnalysisAgent)
    participant S as Knowledge Store<br/>(DB or Folder)
    participant C as Chainlit UI<br/>(Port 8000)
    participant U as Developer

    Note over V,S: LOAD PHASE
    H->>V: Read JSON files from<br/>knowledge/output/
    H->>H: Parse & validate JSON
    H->>L: Create AnalysisAgent
    alt Storage Option A: Database
        H->>S: Insert passages to PostgreSQL
        Note right of S: Full-text search<br/>Semantic embeddings
    else Storage Option B: Folder
        H->>S: Upload files to shared folder
        Note right of S: File-based search<br/>Embedded vectors
    end
    H->>L: Configure storage access
    L->>S: Verify connection

    Note over C,U: QUERY PHASE
    U->>C: Enter question about code
    C->>L: Send user message
    L->>L: Parse query intent
    L->>S: Search for relevant context
    S-->>L: Return matching passages
    L->>L: Generate response with<br/>GPT-4o-mini + context
    L->>C: Send assistant message
    C->>U: Display answer with insights

    Note over U,C: Conversation continues...
    U->>C: Follow-up question
    C->>L: Send with conversation history
    L->>S: Search with updated context
    S-->>L: Return passages
    L->>C: Contextual response
    C->>U: Display answer
```
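Concretely, the load phase amounts to reading each Vamana JSON file and inserting its entries as archival passages. A minimal sketch, assuming the letta 0.5.x client API and a `knowledge/output/` directory of JSON arrays:

```python
import json
from pathlib import Path

from letta import create_client

client = create_client(base_url="http://localhost:8283")
agent_id = client.get_agent_id(agent_name="AnalysisAgent")

# Read every Vamana JSON file and insert its entries as passages.
# The server generates embeddings during insertion.
for json_file in Path("knowledge/output").glob("*.json"):
    entries = json.loads(json_file.read_text())
    for entry in entries:  # assumes each file holds a JSON array
        client.insert_archival_memory(agent_id, json.dumps(entry))
```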
```mermaid
flowchart TD
    Start([Start agent_main.py]) --> Check{Agent exists?}
    Check -->|Yes + Recreate| Delete[Delete existing agent]
    Check -->|No| Create
    Delete --> Create[Create AnalysisAgent]
    Create --> Config[Configure Agent]
    Config --> Persona[Set Persona:<br/>Software Engineer]
    Persona --> Memory[Add Chat Memory]
    Memory --> Guidelines[Set Guidelines]
    Guidelines --> Tools[Include Base Tools]
    Tools --> Storage{Choose Storage}
    Storage -->|Option A| Archival[Load to Archival Store]
    Archival --> ReadJSON1[Read JSON files]
    ReadJSON1 --> ParseJSON1[Parse JSON arrays]
    ParseJSON1 --> Insert[Insert passages to DB]
    Insert --> Verify1[Verify insertion]
    Storage -->|Option B| Folder[Load to Shared Folder]
    Folder --> CreateFolder[Create/Delete folder]
    CreateFolder --> ReadJSON2[Read JSON files]
    ReadJSON2 --> Upload[Upload files]
    Upload --> Associate[Associate folder to agent]
    Associate --> Verify2[Verify association]
    Verify1 --> Ready([Agent Ready])
    Verify2 --> Ready

    style Start fill:#4CAF50,color:#fff
    style Ready fill:#2196F3,color:#fff
    style Archival fill:#FF9800,color:#fff
    style Folder fill:#FF9800,color:#fff
```
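In code, this creation flow might look like the following sketch. It assumes the letta 0.5.x client API, with `get_agent_id` returning None for a missing agent; the persona text is abbreviated:

```python
from letta import create_client
from letta.schemas.memory import ChatMemory

client = create_client(base_url="http://localhost:8283")

# Recreate the agent if it already exists.
existing_id = client.get_agent_id(agent_name="AnalysisAgent")
if existing_id is not None:
    client.delete_agent(existing_id)

agent_state = client.create_agent(
    name="AnalysisAgent",
    memory=ChatMemory(
        persona="You are a Distinguished Software Engineer ...",
        human="A developer asking questions about an analyzed codebase.",
    ),
)
print(f"Agent ready: {agent_state.id}")
```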
| Feature | Archival Store (PostgreSQL) | Shared Folder (Files) |
|---|---|---|
| Setup Complexity | Medium (requires PostgreSQL) | Low (file system only) |
| Search Performance | Fast (indexed queries) | Moderate (file scanning) |
| Scalability | Excellent (100K+ passages) | Good (1K-10K files) |
| Data Format | Text passages with embeddings | Original JSON files (no embeddings) |
| Embeddings | Generated during insertion | Not generated |
| Query Method | `archival_memory_search()` | Folder file search |
| Best For | Large codebases, production | Small repos, development |
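Option B maps naturally onto Letta's data-source workflow. A sketch, under the assumption that the 0.5.x source endpoints (`create_source`, `load_file_to_source`, `attach_source_to_agent`) are what `agent_main.py` uses:

```python
from pathlib import Path

from letta import create_client

client = create_client(base_url="http://localhost:8283")
agent_id = client.get_agent_id(agent_name="AnalysisAgent")

# Create a source, upload the raw JSON files, then attach the
# source to the agent so its file-search tools can reach them.
source = client.create_source(name="vamana_output")
for json_file in Path("knowledge/output").glob("*.json"):
    client.load_file_to_source(filename=str(json_file), source_id=source.id)
client.attach_source_to_agent(agent_id=agent_id, source_id=source.id)
```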
```mermaid
graph LR
    subgraph "Vamana Output"
        J1[domain_details.json]
        J2[confluence_analysis.json]
        J3[Other analysis files]
    end
    subgraph "Agent Helper Processing"
        P1[DirectoryReadTool]
        P2[File Parser]
        P3[JSON Validator]
    end
    subgraph "Letta Agent"
        A1[AnalysisAgent]
        A2[Chat Memory]
        A3[Embeddings]
    end
    subgraph "Storage Layer"
        D1[(PostgreSQL<br/>Passages)]
        D2[📁 Folder<br/>Files]
    end
    subgraph "Query Interface"
        Q1[Chainlit UI]
        Q2[Search Engine]
        Q3[LLM Generator]
    end
    J1 --> P1
    J2 --> P1
    J3 --> P1
    P1 --> P2
    P2 --> P3
    P3 -->|Insert passages| D1
    P3 -->|Upload files| D2
    D1 -.-> A1
    D2 -.-> A1
    A1 --> A2
    A1 --> A3
    Q1 --> Q2
    Q2 --> A1
    A1 --> Q3
    Q3 --> Q1
    classDef input fill:#9C27B0,color:#fff
    classDef process fill:#4CAF50,color:#fff
    classDef agent fill:#2196F3,color:#fff
    classDef storage fill:#FF9800,color:#fff
    classDef query fill:#F44336,color:#fff
    class J1,J2,J3 input
    class P1,P2,P3 process
    class A1,A2,A3 agent
    class D1,D2 storage
    class Q1,Q2,Q3 query
```
```
nestor/
├── src/
│   ├── crews/
│   │   ├── agents/
│   │   │   ├── agent_main.py                     # Agent creation & management
│   │   │   ├── custom_tools.py                   # Custom agent tools
│   │   │   └── __init__.py
│   │   ├── chainlit/
│   │   │   ├── chainlit_conversational_agent.py  # Chat UI
│   │   │   └── chainlit.md                       # UI configuration
│   │   └── .chainlit/                            # Chainlit data
│   ├── .env                                      # Environment configuration
│   └── readme.md                                 # This file
├── openapi_letta.json                            # Letta API schema
└── openapi_openai.json                           # OpenAI API schema
```
- Python >=3.10, <3.13
- Conda environment: `Conversational-Agentic-Pipeline`
- OpenAI API key
- Letta server installed and configured
- Create and activate the conda environment:

```bash
conda create -n Conversational-Agentic-Pipeline python=3.11
conda activate Conversational-Agentic-Pipeline
```

- Install core dependencies:

```bash
# Letta (Memory-enabled agents)
pip install letta letta-client

# Chainlit (Chat interface)
pip install chainlit

# CrewAI Tools
pip install crewai-tools

# Additional dependencies
pip install python-dotenv
```

- Configure environment variables in `.env` (a sketch for loading them follows the block):
```bash
# OpenAI Configuration
OPENAI_API_KEY=your_openai_key_here

# Letta Server Configuration
LETTA_SERVER_URL=http://localhost:8283
LETTA_PG_URI=postgresql://letta:letta@localhost:5432/LettaDB

# Agent Configuration
AGENT_MODEL=gpt-4o-mini
EMBEDDING_MODEL=text-embedding-ada-002
```
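These settings can be loaded at startup with python-dotenv; a minimal sketch:

```python
import os

from dotenv import load_dotenv

# Reads key=value pairs from .env into the process environment.
load_dotenv()

LETTA_SERVER_URL = os.getenv("LETTA_SERVER_URL", "http://localhost:8283")
AGENT_MODEL = os.getenv("AGENT_MODEL", "gpt-4o-mini")

# Fail fast if the one mandatory secret is missing.
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; check your .env file")
```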
All core components are open-source with permissive licenses:

- Letta (>=0.5.0) - Stateful AI agents with persistent memory. License: Apache 2.0 | GitHub
- Letta Client - Python client for the Letta server. License: Apache 2.0
- Chainlit (>=1.0.0) - Conversational UI framework. License: Apache 2.0 | GitHub
- CrewAI Tools - Pre-built tools for agent capabilities. License: MIT | GitHub
- python-dotenv - Environment configuration management. License: BSD-3-Clause | GitHub
Note: While the frameworks are open-source, you'll need an OpenAI API key for LLM capabilities. Consider open-source alternatives like Ollama for fully local execution.
First, start the Letta server in a separate terminal:
```bash
conda activate Conversational-Agentic-Pipeline
letta server
```

The server will start on http://localhost:8283.
- Navigate to the agents directory:

```bash
cd /Users/pradip/Development/nestor/src/crews/agents
```

- Create the agent:

```bash
python agent_main.py
```

This will:
- Delete the existing agent if recreating
- Create a new `AnalysisAgent` with configured memory
- Register custom tools
- Set up archival store access
Start the conversational UI:
```bash
cd /Users/pradip/Development/nestor/src/crews/chainlit
chainlit run chainlit_conversational_agent.py
```

Access the interface at http://localhost:8000.
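Under the hood, the Chainlit app forwards each chat message to the Letta agent and relays the reply back. A simplified sketch, assuming Chainlit's `@cl.on_message` hook and the letta 0.5.x client; the exact shape of the response messages varies by Letta version, so the text extraction below is an assumption:

```python
import chainlit as cl
from letta import create_client

# Connect to the Letta server and look up the agent once at startup.
client = create_client(base_url="http://localhost:8283")
AGENT_ID = client.get_agent_id(agent_name="AnalysisAgent")


@cl.on_message
async def on_message(message: cl.Message):
    # send_message is blocking, so run it in a worker thread.
    response = await cl.make_async(client.send_message)(
        agent_id=AGENT_ID, role="user", message=message.content
    )
    # The response mixes reasoning, tool calls, and assistant text;
    # relay only the text parts back to the browser.
    for msg in response.messages:
        text = getattr(msg, "assistant_message", None) or getattr(msg, "text", None)
        if text:
            await cl.Message(content=text).send()
```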
The AnalysisAgent is configured with:
- Persona: Distinguished Software Engineer with expertise in software design
- Memory: Persistent chat memory across sessions
- Tools: Access to archival stores and code analysis data
- LLM: GPT-4o-mini for reasoning and responses
- Embeddings: text-embedding-ada-002 for semantic search
- Start Letta Server:

  ```bash
  letta server
  ```

- Create/Update Agent (if needed):

  ```bash
  python src/crews/agents/agent_main.py
  ```

- Launch Chainlit UI:

  ```bash
  chainlit run src/crews/chainlit/chainlit_conversational_agent.py
  ```

- Interact with Agent:
  - Ask questions about the codebase
  - Request design analysis
  - Query domain models
  - Get architecture insights
```
User: What is the overall architecture of the payment service?

Agent: Based on the analysis, the payment service follows a
layered architecture with a Controller → Service → Repository
pattern. It uses SQLite for persistence and implements...

User: How does the trade processing logic work?

Agent: The trade processing involves several steps:
1. Trade validation in TradeController
2. Business logic execution in TradeService
3. Persistence through TradeRepository...
```
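The same exchange can be driven without the UI. Because conversation history lives on the server, a follow-up message resolves references like "it" against the earlier turn; a sketch with the 0.5.x client:

```python
from letta import create_client

client = create_client(base_url="http://localhost:8283")
agent_id = client.get_agent_id(agent_name="AnalysisAgent")

# First question: the agent searches archival memory for context.
client.send_message(
    agent_id=agent_id,
    role="user",
    message="What is the overall architecture of the payment service?",
)

# Follow-up: history persists server-side, so no context is resent.
response = client.send_message(
    agent_id=agent_id, role="user", message="How does it persist trades?"
)
for msg in response.messages:
    print(msg)  # reasoning, tool calls, and the assistant reply
```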
All-in-one startup:
```bash
# Terminal 1: Start Letta Server
conda activate Conversational-Agentic-Pipeline
letta server

# Terminal 2: Launch Chainlit
cd /Users/pradip/Development/nestor/src/crews/chainlit
chainlit run chainlit_conversational_agent.py
```

Then open http://localhost:8000 and start chatting with the agent!
- Ensure PostgreSQL is running if using the database backend
- Check the `LETTA_PG_URI` configuration
- Verify the server is accessible at http://localhost:8283
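A quick, dependency-free way to confirm both backing services are reachable is a plain TCP check:

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


print("Letta server:", port_open("localhost", 8283))
print("PostgreSQL  :", port_open("localhost", 5432))
```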
- Confirm Letta server is running
- Check OpenAI API key is valid
- Verify all dependencies are installed
- Ensure port 8000 is available
- Check Chainlit is properly installed
- Verify agent exists before starting UI
For questions or issues:
- Review Letta documentation: https://docs.letta.com
- Check Chainlit guides: https://docs.chainlit.io
- Examine agent logs for debugging