# heyman

LLM-powered man page Q&A. Ask natural-language questions about command-line tools and get exact commands back.

## Features
- 🤖 Multiple LLM Providers: OpenAI, Anthropic Claude, and Ollama (local)
- 💾 Smart Caching: Responses cached for 30 days (configurable)
- 💰 Cost Tracking: Token usage and cost estimates with pricing warnings
- 📋 Multiple Output Modes: Plain text, JSON, or copy to clipboard
- ⚡ Fast: Cached responses return instantly
- 🎯 Focused: Queries the actual man page, not generic knowledge
- 🔧 Configurable: Multiple profiles for different providers/models
## Installation

### Homebrew

```sh
brew install alecf/tap/heyman
```

### Go install

```sh
go install github.com/alecf/heyman/cmd/heyman@latest
```

### Binary release

Download the latest release from GitHub Releases.
## Quick Start

Set up a profile:

```sh
heyman setup
```

Ask a question:

```sh
heyman ls how do I list files by size
```

Output:

```
ls -lhS
```

Get an explanation:

```sh
heyman --explain tar how do I create a compressed archive
```
## Usage

```
heyman [flags] <command> <question>
```

### Examples

```sh
# Basic usage
heyman grep how do I search recursively

# With explanation
heyman --explain find how do I find files modified today

# Show token usage and costs
heyman --tokens curl how do I download a file

# JSON output
heyman --json ssh how do I connect with a specific port

# Copy to clipboard
heyman --copy ps how do I find a process by name

# Use specific profile
heyman --profile openai-gpt4o lsof list open ports
```

### Flags

- `-e, --explain` - Include explanation (streaming)
- `-j, --json` - JSON output with metadata
- `-t, --tokens` - Show token usage and costs
- `-c, --copy` - Copy command to clipboard
- `-v, --verbose` - Show operation details
- `-d, --debug` - Show full request/response details
- `--no-cache` - Bypass cache for this query
- `-p, --profile` - LLM profile to use
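Flags can generally be combined in a single invocation; for example, to see token costs and copy the suggested command at the same time:

```sh
# Cost accounting plus clipboard copy in one run
heyman --tokens --copy curl how do I download a file
```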
## Configuration

Run the interactive setup wizard:

```sh
heyman setup
```

Or edit the config file directly:

- macOS: `~/Library/Application Support/heyman/config.toml`
- Linux: `~/.config/heyman/config.toml`
```toml
default_profile = "ollama-llama"
cache_days = 30

[profiles.ollama-llama]
provider = "ollama"
model = "llama3.2:latest"

[profiles.openai-gpt4o-mini]
provider = "openai"
model = "gpt-4o-mini"

[profiles.anthropic-haiku]
provider = "anthropic"
model = "claude-3-5-haiku-20241022"
```

### Environment Variables

- `HEYMAN_PROFILE` - Override default profile
- `HEYMAN_CACHE_DIR` - Cache location override
- `OPENAI_API_KEY` - OpenAI authentication
- `ANTHROPIC_API_KEY` - Anthropic authentication
- `OLLAMA_HOST` - Ollama server URL (default: `http://localhost:11434`)
## Providers

### OpenAI

```sh
export OPENAI_API_KEY=sk-...
heyman setup  # Select OpenAI
```

Recommended models:

- `gpt-4o-mini` - Fast and cheap ($0.15 / $0.60 per 1M tokens)
- `gpt-4o` - More capable ($2.50 / $10.00 per 1M tokens)
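The usage examples above reference an `openai-gpt4o` profile. A minimal sketch of adding one by hand (Linux config path shown; the profile block follows the same shape as the config above):

```sh
# Append a gpt-4o profile to the config (adjust the path on macOS)
cat >> ~/.config/heyman/config.toml <<'EOF'

[profiles.openai-gpt4o]
provider = "openai"
model = "gpt-4o"
EOF
```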
### Anthropic

```sh
export ANTHROPIC_API_KEY=sk-...
heyman setup  # Select Anthropic
```

Recommended models:

- `claude-3-5-haiku-20241022` - Fast and cheap ($0.80 / $4.00 per 1M tokens)
- `claude-sonnet-4-5-20250924` - High quality ($3.00 / $15.00 per 1M tokens)
### Ollama (local)

```sh
ollama serve          # Start Ollama
ollama pull llama3.2  # Download a model
heyman setup          # Select Ollama
```

Recommended models:

- `llama3.2:latest` - Fast, good quality
- `deepseek-r1:latest` - Reasoning model
- `llama3.3:70b` - Larger, more capable (requires more RAM)
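If setup can't see your models, check that the server is answering at the default address (`/api/tags` is Ollama's model-listing endpoint):

```sh
# Lists locally available models if Ollama is up
curl http://localhost:11434/api/tags
```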
## Caching

View cache statistics:

```sh
heyman cache-stats
```

Clear the cache:

```sh
heyman clear-cache
```

The cache is stored in:

- macOS: `~/Library/Caches/heyman/`
- Linux: `~/.cache/heyman/`
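The cache is an ordinary directory, so standard tools work if you want to see how much space it is using:

```sh
# Cache size on Linux; on macOS use ~/Library/Caches/heyman/ instead
du -sh ~/.cache/heyman/
```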
## Managing Profiles

List all profiles:

```sh
heyman list-profiles
```

Set the default profile:

```sh
heyman set-profile ollama-llama
```

Test the configuration:

```sh
heyman test-config
```

## Cost Tracking

The `--tokens` flag shows usage and estimated costs:
```
$ heyman --tokens ls how do I list files by size
ls -lhS

Token usage:
  Input: 8,192 tokens
  Output: 14 tokens
  Total: 8,206 tokens
  Cost: $0.0007 (estimated, based on 2026-01-12 pricing)

⚠️ Pricing may have changed. Check current rates:
https://openai.com/api/pricing/
```

Note: Prices are estimates. Always check the provider's current pricing page.
## How It Works

- Fetch man page: Executes `man <command>` to get the actual documentation
- Build prompt: Constructs a prompt with the full man page and your question (the first two steps are sketched below)
- Query LLM: Sends it to your configured provider (with an 8K context window)
- Parse response: Validates and extracts the command
- Cache: Stores the response for future use (30 days by default)
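A minimal sketch of the first two steps, assuming a plain `man | col -b` fetch and an illustrative prompt (the wording heyman actually sends is not shown here):

```sh
# Step 1: fetch the plain-text man page; col -b strips backspace formatting
manpage=$(man grep | col -b)

# Step 2: combine the page with the user's question into a single prompt
# (illustrative wording only, not heyman's real prompt)
printf 'Answer using only this man page. Reply with the exact command.\n\n%s\n\nQuestion: %s\n' \
  "$manpage" 'how do I search recursively'
```

Steps 3 through 5 (the provider call, validation, and the cache write) happen inside heyman itself.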
## Troubleshooting

No profile configured? Run `heyman setup` or manually edit the config file.

Missing API key? Set the environment variable:

```sh
export OPENAI_API_KEY=sk-...
```

Can't reach Ollama? Make sure Ollama is running:

```sh
ollama serve
```

No man page found? The command must have a man page installed:

```sh
man <command>  # Test if it exists
```

## Development

Build from source:

```sh
git clone https://github.com/alecf/heyman
cd heyman
go build -o heyman ./cmd/heyman
```

Run tests:

```sh
go test ./...
```

Releases use GoReleaser:
```sh
git tag v0.1.0
git push origin v0.1.0
goreleaser release
```

## License

MIT License - see the LICENSE file.
## Contributing

Contributions welcome! Please open an issue or PR.
## Built With
- Cobra - CLI framework
- Viper - Configuration
- OpenAI Go SDK
- Anthropic Go SDK
- Ollama API