PigeonAI

A personal AI assistant that reads your Telegram messages, learns your texting style, generates replies in your voice, and prioritizes conversations.

Features

  • Read All Messages: Access personal chats, groups, and channels via MTProto
  • Learn Your Style: Analyze your vocabulary, tone, emoji usage, and message patterns
  • Auto-Reply: Generate and send replies that match your writing style
  • Smart Prioritization: Score messages by urgency and sender importance
  • Complete Privacy: With the local LLM option, your data never leaves your machine
  • Rate Limiting: Built-in protection against Telegram bans

Requirements

  • Python 3.11+
  • macOS, Linux, or Windows
  • Ollama (for local LLM)
  • Telegram account

Quick Start

1. Install Dependencies

# Clone the repository
git clone https://github.com/0xinit/PigeonAI.git
cd PigeonAI

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -e .

2. Install Ollama and Model

# Install Ollama (macOS)
brew install ollama

# Or download from https://ollama.com

# Pull the model
ollama pull llama3.1:8b
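Once the model is pulled, replies can be generated through Ollama's local HTTP API. A minimal sketch using only the standard library (the endpoint and payload follow Ollama's documented `/api/generate` API; the function names here are illustrative, not the project's actual code):

```python
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"

def build_payload(prompt: str, model: str = "llama3.1:8b") -> dict:
    """Build a non-streaming generation request for Ollama's /api/generate."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate_reply(prompt: str, model: str = "llama3.1:8b") -> str:
    """POST the prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```

With `stream: false`, Ollama returns a single JSON object whose `response` field holds the full completion, which keeps the client code simple.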

3. Get Telegram API Credentials

  1. Go to my.telegram.org/apps
  2. Log in with your phone number
  3. Create a new application
  4. Copy the api_id and api_hash

4. Run the Assistant

# Start the assistant
python -m src.main

# Or use the command
pigeonai

The first run will guide you through:

  • Entering your API credentials
  • Authenticating with Telegram
  • Setting up the local LLM

Usage

Interactive Commands

status     - Show current status
stats      - Show message statistics
messages   - Show recent messages
pending    - Show pending reply drafts
send <id>  - Send a pending reply
sync       - Sync messages from all chats
style      - Show/update your style profile
config     - Show configuration
quit       - Exit the application

auto on/off       - Toggle auto-reply
threshold <0-1>   - Set confidence threshold
listen            - Start message listener
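Commands like `send <id>` and `threshold <0-1>` take arguments, so the interactive loop needs to split each line into a command name and its arguments. A minimal parsing sketch (the function name is illustrative; the actual CLI code may differ):

```python
def parse_command(line: str) -> tuple[str, list[str]]:
    """Split an interactive command into (name, args), e.g. 'send 3' -> ('send', ['3'])."""
    parts = line.strip().split()
    if not parts:
        return ("", [])
    # Command names are case-insensitive; arguments are kept as typed.
    return (parts[0].lower(), parts[1:])
```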

Configuration

Create a .env file or set environment variables:

# Telegram API (required)
TELEGRAM_API_ID=your_api_id
TELEGRAM_API_HASH=your_api_hash

# Local LLM
OLLAMA_MODEL=llama3.1:8b
OLLAMA_HOST=http://localhost:11434

# Auto-reply settings
AUTO_REPLY_ENABLED=true
CONFIDENCE_THRESHOLD=0.85

# Rate limiting
RATE_LIMIT_MESSAGES_PER_MINUTE=20
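The variables above map naturally onto a settings object with the same defaults. A minimal sketch of how they might be loaded (the class and field names are illustrative; the actual code in `src/config` may differ):

```python
import os
from dataclasses import dataclass

@dataclass
class Settings:
    api_id: str
    api_hash: str
    ollama_model: str = "llama3.1:8b"
    ollama_host: str = "http://localhost:11434"
    auto_reply_enabled: bool = True
    confidence_threshold: float = 0.85
    rate_limit_per_minute: int = 20

def load_settings(env=os.environ) -> Settings:
    """Read settings from environment variables; only the Telegram keys are required."""
    return Settings(
        api_id=env["TELEGRAM_API_ID"],
        api_hash=env["TELEGRAM_API_HASH"],
        ollama_model=env.get("OLLAMA_MODEL", "llama3.1:8b"),
        ollama_host=env.get("OLLAMA_HOST", "http://localhost:11434"),
        auto_reply_enabled=env.get("AUTO_REPLY_ENABLED", "true").lower() == "true",
        confidence_threshold=float(env.get("CONFIDENCE_THRESHOLD", "0.85")),
        rate_limit_per_minute=int(env.get("RATE_LIMIT_MESSAGES_PER_MINUTE", "20")),
    )
```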

How It Works

Style Learning

The assistant analyzes your outgoing messages to understand:

  • Average message length
  • Emoji usage patterns
  • Punctuation and capitalization style
  • Common phrases and abbreviations
  • Tone (casual, formal, playful, etc.)
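The metrics above reduce to simple per-message statistics. A minimal sketch of how such a profile could be computed (the metric names and emoji ranges are illustrative simplifications of what the project tracks):

```python
import re

# Rough emoji match: common pictograph and symbol ranges (not exhaustive).
EMOJI = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def style_profile(messages: list[str]) -> dict:
    """Compute simple style metrics from a list of outgoing messages."""
    n = len(messages)
    return {
        "avg_length": sum(len(m.split()) for m in messages) / n,     # words per message
        "emoji_rate": sum(len(EMOJI.findall(m)) for m in messages) / n,
        "lowercase_starts": sum(m[:1].islower() for m in messages) / n,
        "exclamation_rate": sum(m.count("!") for m in messages) / n,
    }
```

Feeding these statistics into the reply prompt ("write short, lowercase, one emoji max") is what keeps generated replies in the user's voice.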

Priority Scoring

Messages are scored based on:

  • Urgency keywords (urgent, asap, help, etc.)
  • Response expected (questions, requests)
  • Sender importance (VIP, normal, muted)
  • Recency (newer messages score higher)
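These signals can be combined into a single score in [0, 1]. A minimal sketch (the weights and keyword list are illustrative, not the project's tuned values):

```python
URGENT = {"urgent", "asap", "help", "emergency"}

def priority_score(text: str, sender_tier: str = "normal", age_minutes: float = 0) -> float:
    """Combine urgency keywords, questions, sender tier, and recency into [0, 1]."""
    words = set(text.lower().split())
    score = 0.0
    if words & URGENT:
        score += 0.4
    if "?" in text:                        # a question usually expects a reply
        score += 0.2
    score += {"vip": 0.3, "normal": 0.1, "muted": 0.0}[sender_tier]
    score += 0.1 / (1 + age_minutes / 60)  # recency bonus decays with age
    return min(score, 1.0)
```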

Auto-Reply Logic

When auto-reply is enabled:

  1. Message arrives → Priority calculated
  2. Style profile loaded → Reply generated
  3. Confidence calculated → If above threshold, sent automatically
  4. Otherwise → Saved as draft for review
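The final send-or-draft branch boils down to one comparison. A minimal sketch of that decision (type and function names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float

def decide(draft: Draft, auto_reply: bool, threshold: float = 0.85) -> str:
    """Return 'send' when auto-reply is on and confidence clears the threshold, else 'draft'."""
    if auto_reply and draft.confidence >= threshold:
        return "send"
    return "draft"
```

Keeping the decision this small makes it easy to audit: everything below the threshold, and everything while auto-reply is off, lands in the pending queue for manual review.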

Safety Features

  • Confidence Threshold: Only auto-sends above a configurable confidence (default 0.85)
  • Contact Blacklist: Disable auto-reply for specific contacts
  • Rate Limiting: Prevents sending too fast (avoids Telegram bans)
  • Sensitive Content Detection: Skips emotional/important messages
  • Draft Mode: Can run in draft-only mode (never auto-sends)
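The rate limiting described above is typically a sliding-window check against `RATE_LIMIT_MESSAGES_PER_MINUTE`. A minimal sketch, assuming a sliding window (the injectable `clock` parameter is for testability and is not necessarily how the project implements it):

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: allow at most `limit` sends per `window` seconds."""

    def __init__(self, limit: int = 20, window: float = 60.0, clock=time.monotonic):
        self.limit = limit
        self.window = window
        self.clock = clock
        self.sent: deque = deque()  # timestamps of recent sends

    def allow(self) -> bool:
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False
```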

Project Structure

PigeonAI/
├── src/
│   ├── main.py              # Entry point
│   ├── config/              # Configuration
│   ├── telegram/            # Telegram client
│   ├── ai/                  # LLM integration
│   ├── services/            # Priority & rate limiting
│   ├── storage/             # Database & encryption
│   ├── handlers/            # Message handling
│   └── ui/                  # CLI dashboard
├── tests/                   # Test suite
├── plans/                   # Project documentation
└── requirements.txt

Development

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run with coverage
pytest --cov=src

# Type checking
mypy src

# Linting
ruff check src

Privacy & Security

  • Credentials: Stored in OS keyring, never in plain text
  • Messages: Optionally encrypted at rest with AES-256
  • Session: Telegram session encrypted and stored securely
  • Local LLM: Data never sent to external servers

Limitations

  • Telegram may ban accounts for excessive automation
  • Local LLM quality is lower than cloud options (Claude, GPT-4)
  • Requires always-on computer for real-time monitoring
  • Style learning needs 50+ messages for good results

Contributing

Contributions welcome! Please read the plan document in plans/telegram-ai-assistant.md for architecture details.

License

MIT

Disclaimer

This tool is for personal use. Automated messaging may violate Telegram's Terms of Service. Use responsibly and at your own risk.
