Your digital memory. Search everything you've copied or typed, instantly.
jot is a fully local, privacy-first AI assistant that remembers your clipboard history and terminal commands. Ask questions in natural language and get instant answers: no scrolling, no searching, just results.
- 🔍 Natural Language Search - Find things by meaning, not just keywords
- 🔒 100% Private - Everything stays on your machine, encrypted
- ⚡ Lightning Fast - Search 10,000+ items in milliseconds
- 🎨 Dual Interface - Beautiful GUI or blazing-fast CLI
- 🧠 Context Aware - Understands what you're looking for
- 🧩 Plugin Ready - Extend jot with custom Rhai plugins that tap into lifecycle events (see the sketch below)
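The plugin interface isn't documented in this README, so the following is only a minimal sketch of what a Rhai lifecycle hook could look like when hosted from Rust; the on_capture event name and the redaction example are hypothetical, not jot's actual plugin API.

```rust
// Hypothetical host-side plugin hook using the rhai crate: the "on_capture"
// event name and its single-string signature are made up for illustration.
use rhai::{Engine, Scope};

fn main() {
    let engine = Engine::new();

    // A user plugin that redacts anything that looks like a secret before it is stored.
    let script = r#"
        fn on_capture(text) {
            if "PASSWORD" in text { "[redacted]" } else { text }
        }
    "#;
    let ast = engine.compile(script).expect("plugin failed to parse");

    let mut scope = Scope::new();
    let stored: String = engine
        .call_fn(&mut scope, &ast, "on_capture", ("export PASSWORD=hunter2".to_string(),))
        .expect("plugin call failed");

    println!("{stored}"); // -> [redacted]
}
```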
Use jotx ask (ja for short) for natural-language search and jotx search (js for short) for plain keyword search:
```sh
# Instead of scrolling through terminal history
jotx ask "ssh command for staging server"
→ ssh user@staging.example.com -i ~/.ssh/key.pem

# Find that email you copied hours ago (-c means search in clipboard history)
ja -c "email address from this morning"
→ john.doe@example.com

# Find that yarn command to run the server
js "yarn"
→ yarn start
```

Copy and paste this into your terminal:
```sh
curl -fsSL https://raw.githubusercontent.com/Jeffawe/Jot/main/install.sh | bash
```

That's it! The installer will:
- ✅ Download the right binary for your system
- ✅ Install Ollama (local AI)
- ✅ Set up shell hooks
- ✅ Start the daemon
Alternative with wget:
```sh
wget -qO- https://raw.githubusercontent.com/Jeffawe/Jot/main/install.sh | bash
```

```sh
# Start monitoring (runs in background)
jotx run

# Search your history
jotx search "ssh"

# Ask questions
jotx ask "what was that git command from yesterday?"
```

Under the hood, jot:

- Monitors clipboard and terminal commands using the copypasta Rust crate and shell hooks (see the sketch after this list)
- Stores everything locally in a SQLite database (~/.jotx/jotx.db)
- Indexes content with embedding models for semantic search
- Answers natural-language queries with pluggable LLMs (via Ollama) that query the database and return results fast
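To make the capture path concrete, here is a heavily simplified sketch (not jot's actual code) of polling the clipboard with the copypasta crate and appending new entries to SQLite with rusqlite; the items table and its schema are invented for illustration, and terminal commands (captured via shell hooks) are not shown.

```rust
// Simplified sketch of the clipboard capture loop: poll with copypasta and
// append changed contents to SQLite via rusqlite. The table name and schema
// are illustrative, not jot's real schema.
use copypasta::{ClipboardContext, ClipboardProvider};
use rusqlite::{params, Connection};
use std::{thread, time::Duration};

fn main() -> rusqlite::Result<()> {
    let db = Connection::open("jotx-demo.db")?;
    db.execute(
        "CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, kind TEXT, content TEXT, ts TEXT)",
        [],
    )?;

    let mut clipboard = ClipboardContext::new().expect("no clipboard available");
    let mut last = String::new();

    loop {
        // Only store the clipboard when its contents actually change.
        if let Ok(text) = clipboard.get_contents() {
            if !text.is_empty() && text != last {
                db.execute(
                    "INSERT INTO items (kind, content, ts) VALUES ('clipboard', ?1, datetime('now'))",
                    params![text],
                )?;
                last = text;
            }
        }
        thread::sleep(Duration::from_millis(500));
    }
}
```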
Install the GUI version from https://github.com/Jeffawe/Jot/releases and look for the desktop release
Configuration file: ~/.jotx/config.toml
```toml
[llm]
provider = "ollama"
api_base = "http://localhost:11434"
model = "qwen2.5:3b"
max_tokens = 500
temperature = 0.7
max_history_results = 10

[search]
similarity_threshold = 0.5
max_results = 10
fuzzy_matching = true

[storage]
maintenance_interval_days = 7
```
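To see what similarity_threshold in [search] actually gates, here is a rough sketch (not jot's internals) of scoring history items against a query with fastembed and cosine similarity, keeping only matches above the cutoff; the default embedding model and the scoring loop are assumptions for illustration.

```rust
// Sketch of threshold-gated semantic search using the fastembed crate.
// The model (fastembed's default) and this scoring loop are illustrative only.
use fastembed::TextEmbedding;

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

fn main() {
    let model = TextEmbedding::try_new(Default::default()).expect("failed to load embedding model");
    let similarity_threshold = 0.5; // from [search] in config.toml

    let items = vec![
        "ssh user@staging.example.com -i ~/.ssh/key.pem",
        "yarn start",
        "john.doe@example.com",
    ];
    let query = "ssh command for staging server";

    // Embed the query and every stored item in one batch, then filter by threshold.
    let mut texts = vec![query];
    texts.extend_from_slice(&items);
    let mut embeddings = model.embed(texts, None).expect("embedding failed");
    let query_vec = embeddings.remove(0);

    for (item, item_vec) in items.iter().zip(&embeddings) {
        let score = cosine(&query_vec, item_vec);
        if score >= similarity_threshold {
            println!("{score:.2}  {item}");
        }
    }
}
```

jot is built privacy-first: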
- ✅ 100% Local - No data ever leaves your machine
- ✅ No Telemetry - Zero analytics or tracking
- ✅ Configurable Exclusions - Block apps, files, or patterns (run jotx privacy)
- ✅ Open Source - Fully auditable code
- ✅ Clean Data - Wipe stored data anytime (run jotx clean-data)
- Language: Rust 🦀
- Storage: SQLite
- Search: fastembed (embedding models)
- AI: Ollama (example call below)
- GUI: Tauri (Rust + Web)
- CLI: clap for argument parsing
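Because answers come from a local Ollama server (the api_base and model values in [llm] above), a question ultimately becomes one HTTP call to Ollama's /api/generate endpoint. The sketch below uses reqwest's blocking client and serde_json; the prompt wording is illustrative, not jot's actual prompt.

```rust
// Minimal sketch of asking the local Ollama server a question over HTTP.
// Requires reqwest (with the "blocking" and "json" features) and serde_json.
use serde_json::{json, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = json!({
        "model": "qwen2.5:3b", // [llm].model
        "prompt": "Given my shell history, which command connects to the staging server over ssh?",
        "stream": false
    });

    let resp: Value = reqwest::blocking::Client::new()
        .post("http://localhost:11434/api/generate") // [llm].api_base + endpoint
        .json(&body)
        .send()?
        .json()?;

    println!("{}", resp["response"].as_str().unwrap_or(""));
    Ok(())
}
```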
Contributions are welcome! This project is built for learning Rust, so beginner-friendly PRs are encouraged.
```sh
# Clone the repo
git clone https://github.com/jeffawe/jot.git
cd jot

# Install dependencies (full setup)
make setup

# Build the Rust code
cargo build

# Run tests
cargo test
```

Good areas to contribute:

- 🐛 Bug fixes
- 📝 AI development
- ✨ New search algorithms
- 🎨 UI/UX enhancements
- 🔧 Performance optimizations
- 🧪 Test coverage
Apache License - see LICENSE for details
- Built with Rust
- Embeddings via fastembed
- AI via Ollama for running local LLMs
- Inspired by the need to remember things better
- 📫 Issues: GitHub Issues
- 💭 Discussions: GitHub Discussions
- 🐦 Twitter: @awagu_jeffery
Remember: Your digital memory, always at your fingertips. Never scroll through history again.
Built with ❤️ and Rust 🦀