Nestor - Intelligent Code Analysis Agents

Welcome to Nestor, an AI-powered conversational agent system built with Letta and Chainlit. Nestor creates persistent, memory-enabled AI agents that can analyze codebases, answer technical questions, and provide insights about software architecture and implementation patterns.

Overview

Nestor is a sophisticated agent system that:

  • Creates memory-enabled AI agents using Letta's stateful agent framework
  • Analyzes software systems with deep understanding of code structure and logic
  • Provides conversational interface through Chainlit for interactive Q&A
  • Maintains context across conversations using persistent memory
  • Accesses knowledge bases from Vamana's analysis outputs and archival stores
  • Generates design insights including architecture patterns, logic flows, and domain models

The system bridges human developers with AI-powered code intelligence, making complex codebases more accessible and understandable.

Architecture

System Overview

Nestor is Phase 2 of a two-phase intelligent code analysis system. Phase 1 (Vamana) extracts knowledge from codebases, and Phase 2 (Nestor) loads and queries it.

graph TB
    subgraph "PHASE 1: Vamana Extraction"
        V1[📦 Git Repository]
        V2[🎯 CrewAI Agents]
        V3[📄 JSON Output]
    end

    subgraph "PHASE 2: Nestor Load & Query"
        N1[🧠 Agent Helper<br/>Loads JSON files]
        N2[🤖 Letta Agent<br/>AnalysisAgent]
        N3[💬 Chainlit UI<br/>localhost:8000]

        N2A[📝 Chat Memory]
        N2B[🔧 Custom Tools]
    end

    subgraph "Knowledge Storage"
        S1[📚 Archival Store<br/>PostgreSQL]
        S2[📂 Shared Folder<br/>File-based]
    end

    subgraph "User Interaction"
        U1[👨‍💻 Developer]
        U2[💡 AI Responses]
    end

    V1 -->|Analyzes| V2
    V2 -->|Generates| V3
    V3 -->|Reads| N1
    
    N1 -->|Creates| N2
    N1 -->|Option A| S1
    N1 -->|Option B| S2
    
    S1 -.->|Queries| N2
    S2 -.->|Searches| N2
    
    N2 --> N2A
    N2 --> N2B
    N2 <-->|Messages| N3
    
    U1 -->|Asks| N3
    N3 -->|Returns| U2

    classDef extraction fill:#4CAF50,stroke:#2E7D32,stroke-width:2px,color:#fff
    classDef load fill:#2196F3,stroke:#1565C0,stroke-width:3px,color:#fff
    classDef storage fill:#FF9800,stroke:#E65100,stroke-width:2px,color:#fff
    classDef user fill:#F44336,stroke:#C62828,stroke-width:2px,color:#fff

    class V1,V2,V3 extraction
    class N1,N2,N3,N2A,N2B load
    class S1,S2 storage
    class U1,U2 user

Nestor Internal Architecture

Nestor follows a Memory-Augmented Agent architecture:

┌─────────────────────────────────────────────────────────┐
│                     Nestor System                       │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  ┌───────────────────────────────────────────┐          │
│  │         Chainlit UI Layer                 │          │
│  │  - Conversational Interface               │          │
│  │  - Real-time Chat                         │          │
│  │  - Session Management                     │          │
│  └───────────────────────────────────────────┘          │
│                      ↓                                  │
│  ┌───────────────────────────────────────────┐          │
│  │      Letta Agent (AnalysisAgent)          │          │
│  │  - Stateful Conversations                 │          │
│  │  - Persistent Memory (ChatMemory)         │          │
│  │  - Tool Integration                       │          │
│  │  - Reasoning & Planning                   │          │
│  └───────────────────────────────────────────┘          │
│                      ↓                                  │
│  ┌───────────────────────────────────────────┐          │
│  │         Knowledge Sources                 │          │
│  │  - Archival Store (code analysis)         │          │
│  │  - Vamana Output (domain models)          │          │
│  │  - Custom Tools                           │          │
│  └───────────────────────────────────────────┘          │
│                                                         │
└─────────────────────────────────────────────────────────┘

Key Components

  • Agent Manager (agent_main.py): Creates and manages Letta agents with memory and tools
  • Chainlit Interface (chainlit_conversational_agent.py): Web-based chat interface
  • Custom Tools (custom_tools.py): Specialized functions for agent capabilities
  • Letta Server: Backend service managing agent state and memory (port 8283)
  • Memory System: Persistent conversation history and context
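
The load phase performed by the Agent Helper can be sketched in a few lines of Python. This is a minimal illustration, not the actual agent_main.py: the input directory (knowledge/output/, from the data-flow diagram below) and the passage format are assumptions.

```python
import json
from pathlib import Path

def load_vamana_passages(output_dir):
    """Read Vamana's JSON analysis files and flatten them into text
    passages suitable for insertion into the agent's archival store."""
    passages = []
    for path in sorted(Path(output_dir).glob("*.json")):
        data = json.loads(path.read_text(encoding="utf-8"))
        # Accept either a JSON array of entries or a single object.
        entries = data if isinstance(data, list) else [data]
        for entry in entries:
            # Store each entry as one searchable passage, tagged with
            # its source file so answers can cite where they came from.
            passages.append(f"[{path.name}] {json.dumps(entry)}")
    return passages
```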

Data Flow

Complete System Data Flow

sequenceDiagram
    participant V as Vamana Output<br/>(JSON Files)
    participant H as Agent Helper<br/>(agent_main.py)
    participant L as Letta Agent<br/>(AnalysisAgent)
    participant S as Knowledge Store<br/>(DB or Folder)
    participant C as Chainlit UI<br/>(Port 8000)
    participant U as Developer

    Note over V,S: LOAD PHASE
    
    H->>V: Read JSON files from<br/>knowledge/output/
    H->>H: Parse & validate JSON
    H->>L: Create AnalysisAgent
    
    alt Storage Option A: Database
        H->>S: Insert passages to PostgreSQL
        Note right of S: Full-text search<br/>Semantic embeddings
    else Storage Option B: Folder
        H->>S: Upload files to shared folder
        Note right of S: File-based search<br/>Embedded vectors
    end
    
    H->>L: Configure storage access
    L->>S: Verify connection
    
    Note over C,U: QUERY PHASE
    
    U->>C: Enter question about code
    C->>L: Send user message
    
    L->>L: Parse query intent
    L->>S: Search for relevant context
    S-->>L: Return matching passages
    
    L->>L: Generate response with<br/>GPT-4o-mini + context
    L->>C: Send assistant message
    C->>U: Display answer with insights
    
    Note over U,C: Conversation continues...
    
    U->>C: Follow-up question
    C->>L: Send with conversation history
    L->>S: Search with updated context
    S-->>L: Return passages
    L->>C: Contextual response
    C->>U: Display answer

Agent Creation Flow

flowchart TD
    Start([Start agent_main.py]) --> Check{Agent exists?}
    
    Check -->|Yes + Recreate| Delete[Delete existing agent]
    Check -->|No| Create
    Delete --> Create[Create AnalysisAgent]
    
    Create --> Config[Configure Agent]
    Config --> Persona[Set Persona:<br/>Software Engineer]
    Persona --> Memory[Add Chat Memory]
    Memory --> Guidelines[Set Guidelines]
    Guidelines --> Tools[Include Base Tools]
    
    Tools --> Storage{Choose Storage}
    
    Storage -->|Option A| Archival[Load to Archival Store]
    Archival --> ReadJSON1[Read JSON files]
    ReadJSON1 --> ParseJSON1[Parse JSON arrays]
    ParseJSON1 --> Insert[Insert passages to DB]
    Insert --> Verify1[Verify insertion]
    
    Storage -->|Option B| Folder[Load to Shared Folder]
    Folder --> CreateFolder[Create/Delete folder]
    CreateFolder --> ReadJSON2[Read JSON files]
    ReadJSON2 --> Upload[Upload files]
    Upload --> Associate[Associate folder to agent]
    Associate --> Verify2[Verify association]
    
    Verify1 --> Ready([Agent Ready])
    Verify2 --> Ready
    
    style Start fill:#4CAF50,color:#fff
    style Ready fill:#2196F3,color:#fff
    style Archival fill:#FF9800,color:#fff
    style Folder fill:#FF9800,color:#fff

Storage Options Comparison

| Feature | Archival Store (PostgreSQL) | Shared Folder (Files) |
|---|---|---|
| Setup Complexity | Medium (requires PostgreSQL) | Low (file system only) |
| Search Performance | Fast (indexed queries) | Moderate (file scanning) |
| Scalability | Excellent (100K+ passages) | Good (1K-10K files) |
| Data Format | Text passages with embeddings | Original JSON files (no embeddings) |
| Embeddings | Generated during insertion | Not generated |
| Query Method | archival_memory_search() | Folder file search |
| Best For | Large codebases, production | Small repos, development |
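
To make the "file scanning" trade-off of Option B concrete, here is a minimal sketch of a folder search. It is illustrative only, not how Letta's actual folder search works: this version is a plain keyword scan with no embeddings, and every file is read on every query.

```python
import json
from pathlib import Path

def folder_search(folder, query, limit=5):
    """Naive keyword search over JSON files in a shared folder.
    Scores each file by how often the query terms appear and
    returns the best-matching file names."""
    terms = query.lower().split()
    hits = []
    for path in Path(folder).glob("*.json"):
        text = path.read_text(encoding="utf-8").lower()
        score = sum(text.count(t) for t in terms)
        if score:
            hits.append((score, path.name))
    hits.sort(reverse=True)
    return [name for _, name in hits[:limit]]
```

This is exactly why the table rates folder search as "Moderate": cost grows linearly with the number and size of files, whereas the archival store answers from indexed passages.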

Knowledge Flow Diagram

graph LR
    subgraph "Vamana Output"
        J1[domain_details.json]
        J2[confluence_analysis.json]
        J3[Other analysis files]
    end
    
    subgraph "Agent Helper Processing"
        P1[DirectoryReadTool]
        P2[File Parser]
        P3[JSON Validator]
    end
    
    subgraph "Letta Agent"
        A1[AnalysisAgent]
        A2[Chat Memory]
        A3[Embeddings]
    end
    
    subgraph "Storage Layer"
        D1[(PostgreSQL<br/>Passages)]
        D2[📁 Folder<br/>Files]
    end
    
    subgraph "Query Interface"
        Q1[Chainlit UI]
        Q2[Search Engine]
        Q3[LLM Generator]
    end
    
    J1 --> P1
    J2 --> P1
    J3 --> P1
    
    P1 --> P2
    P2 --> P3
    
    P3 -->|Insert passages| D1
    P3 -->|Upload files| D2
    
    D1 -.-> A1
    D2 -.-> A1
    
    A1 --> A2
    A1 --> A3
    
    Q1 --> Q2
    Q2 --> A1
    A1 --> Q3
    Q3 --> Q1
    
    classDef input fill:#9C27B0,color:#fff
    classDef process fill:#4CAF50,color:#fff
    classDef agent fill:#2196F3,color:#fff
    classDef storage fill:#FF9800,color:#fff
    classDef query fill:#F44336,color:#fff
    
    class J1,J2,J3 input
    class P1,P2,P3 process
    class A1,A2,A3 agent
    class D1,D2 storage
    class Q1,Q2,Q3 query

Directory Structure

nestor/
├── src/
│   ├── crews/
│   │   ├── agents/
│   │   │   ├── agent_main.py           # Agent creation & management
│   │   │   ├── custom_tools.py         # Custom agent tools
│   │   │   └── __init__.py
│   │   ├── chainlit/
│   │   │   ├── chainlit_conversational_agent.py  # Chat UI
│   │   │   └── chainlit.md             # UI configuration
│   │   └── .chainlit/                  # Chainlit data
│   ├── .env                            # Environment configuration
│   └── readme.md                       # This file
├── openapi_letta.json                  # Letta API schema
└── openapi_openai.json                 # OpenAI API schema

Installation

Prerequisites

  • Python >=3.10 <3.13
  • Conda environment: Conversational-Agentic-Pipeline
  • OpenAI API Key
  • Letta server installed and configured

Environment Setup

  1. Create and activate the conda environment:

     conda create -n Conversational-Agentic-Pipeline python=3.11
     conda activate Conversational-Agentic-Pipeline

  2. Install core dependencies:

     # Letta (memory-enabled agents)
     pip install letta letta-client

     # Chainlit (chat interface)
     pip install chainlit

     # CrewAI Tools
     pip install crewai-tools

     # Additional dependencies
     pip install python-dotenv

  3. Configure environment variables in .env:

     # OpenAI Configuration
     OPENAI_API_KEY=your_openai_key_here

     # Letta Server Configuration
     LETTA_SERVER_URL=http://localhost:8283
     LETTA_PG_URI=postgresql://letta:letta@localhost:5432/LettaDB

     # Agent Configuration
     AGENT_MODEL=gpt-4o-mini
     EMBEDDING_MODEL=text-embedding-ada-002
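
These variables might be read as sketched below. The defaults mirror the example .env above; in the real app, python-dotenv's load_dotenv() would populate os.environ first, and the function name here is illustrative.

```python
import os

def load_agent_config():
    """Collect Nestor's settings from the environment, falling back
    to the defaults shown in the .env example above."""
    return {
        "openai_api_key": os.getenv("OPENAI_API_KEY", ""),
        "letta_server_url": os.getenv("LETTA_SERVER_URL", "http://localhost:8283"),
        "agent_model": os.getenv("AGENT_MODEL", "gpt-4o-mini"),
        "embedding_model": os.getenv("EMBEDDING_MODEL", "text-embedding-ada-002"),
    }
```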

Major Libraries Used

All core components are open-source with permissive licenses:

  • Letta (>=0.5.0) - Stateful AI agents with persistent memory
    License: Apache 2.0 | GitHub

  • Letta Client - Python client for Letta server
    License: Apache 2.0

  • Chainlit (>=1.0.0) - Conversational UI framework
    License: Apache 2.0 | GitHub

  • CrewAI Tools - Pre-built tools for agent capabilities
    License: MIT | GitHub

  • python-dotenv - Environment configuration management
    License: BSD-3-Clause | GitHub

Note: While the frameworks are open-source, you'll need an OpenAI API key for LLM capabilities. Consider open-source alternatives like Ollama for fully local execution.

Execution Details

Starting the Letta Server

First, start the Letta server in a separate terminal:

conda activate Conversational-Agentic-Pipeline
letta server

The server will start on http://localhost:8283

Creating the Analysis Agent

  1. Navigate to the agents directory:

     cd /Users/pradip/Development/nestor/src/crews/agents

  2. Create the agent:

     python agent_main.py

This will:

  • Delete the existing agent if recreating
  • Create a new AnalysisAgent with configured memory
  • Register the custom tools
  • Set up archival store access
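
The create-or-recreate flow can be sketched as below. This is not the actual agent_main.py: the client methods (list_agents, delete_agent, create_agent) are illustrative names rather than the exact letta-client API, and the function is duck-typed so any client exposing those operations works.

```python
def create_or_replace_agent(client, name="AnalysisAgent", recreate=True):
    """Return the named agent, first deleting any existing one when
    `recreate` is set. Client method names are illustrative."""
    existing = next((a for a in client.list_agents() if a.name == name), None)
    if existing and recreate:
        # Drop the stale agent so memory and tools start fresh.
        client.delete_agent(existing.id)
        existing = None
    if existing:
        return existing
    return client.create_agent(
        name=name,
        persona="Distinguished Software Engineer with expertise in software design",
    )
```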

Running the Chainlit Interface

Start the conversational UI:

cd /Users/pradip/Development/nestor/src/crews/chainlit
chainlit run chainlit_conversational_agent.py

Access the interface at: http://localhost:8000

Agent Configuration

The AnalysisAgent is configured with:

  • Persona: Distinguished Software Engineer with expertise in software design
  • Memory: Persistent chat memory across sessions
  • Tools: Access to archival stores and code analysis data
  • LLM: GPT-4o-mini for reasoning and responses
  • Embeddings: text-embedding-ada-002 for semantic search

Typical Workflow

  1. Start Letta Server

    letta server
  2. Create/Update Agent (if needed)

    python src/crews/agents/agent_main.py
  3. Launch Chainlit UI

    chainlit run src/crews/chainlit/chainlit_conversational_agent.py
  4. Interact with Agent

    • Ask questions about codebase
    • Request design analysis
    • Query domain models
    • Get architecture insights

Example Interactions

User: What is the overall architecture of the payment service?

Agent: Based on the analysis, the payment service follows a 
       layered architecture with Controller β†’ Service β†’ Repository 
       pattern. It uses SQLite for persistence and implements...

User: How does the trade processing logic work?

Agent: The trade processing involves several steps: 
       1. Trade validation in TradeController
       2. Business logic execution in TradeService
       3. Persistence through TradeRepository...

Quick Start

All-in-one startup:

# Terminal 1: Start Letta Server
conda activate Conversational-Agentic-Pipeline
letta server

# Terminal 2: Launch Chainlit
cd /Users/pradip/Development/nestor/src/crews/chainlit
chainlit run chainlit_conversational_agent.py

Then open http://localhost:8000 and start chatting with the agent!

Troubleshooting

Letta Server Issues

  • Ensure PostgreSQL is running if using database backend
  • Check LETTA_PG_URI configuration
  • Verify server is accessible at http://localhost:8283

Agent Creation Fails

  • Confirm Letta server is running
  • Check OpenAI API key is valid
  • Verify all dependencies are installed

Chainlit Won't Start

  • Ensure port 8000 is available
  • Check Chainlit is properly installed
  • Verify agent exists before starting UI
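
When debugging any of the above, a quick stdlib-only reachability check often narrows things down. This is a generic sketch; the function name is ours:

```python
import socket

def service_reachable(host="localhost", port=8283, timeout=2.0):
    """Return True if something is accepting TCP connections at
    host:port -- a quick sanity check before digging deeper."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, service_reachable(port=8283) checks the Letta server, service_reachable(port=5432) the PostgreSQL backend, and service_reachable(port=8000) the Chainlit UI.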

Support

For questions or issues:
