A fast, lightweight, and configurable AI completion plugin that works with both local and cloud models. Get GitHub Copilot-like functionality for free or at a fraction of the cost! Built as an experiment to see what was possible with local Ollama and remote OpenRouter AI models.
FYI: If you are looking for a coding assistant plugin, check out: https://github.com/whatever555/vim-code-checker
- Real-time AI-powered code completion
- Support for local models via [Ollama](https://ollama.com)
- Cloud model support via OpenRouter
- Minimal latency, maximum productivity
- Filetype-specific enabling/disabling
- Fully customizable behavior
- Free/Low-Cost Alternative: Use local models or pay-as-you-go cloud services instead of expensive subscriptions
- Privacy-Focused: Run everything locally with Ollama
- Flexible: Choose between local and cloud models based on your needs
- Lightweight: Minimal impact on editor performance
- Customizable: Configure exactly how and when you want AI assistance
- Install Ollama
- Pull a coding-focused model (a quick sanity check follows this list):

```sh
ollama pull codellama:7b
# or for better results:
ollama pull codellama:13b
```

- Create an account at OpenRouter
- Get your API key
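Before pointing the plugin at Ollama, it helps to confirm the server is actually answering on the endpoint Free Pilot uses by default. A minimal sanity check, assuming Ollama's default port (11434) and the `codellama:7b` model pulled above:

```sh
# Ask the local Ollama server for a tiny completion.
# A JSON reply containing a "response" field means the backend is ready.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "codellama:7b", "prompt": "def add(a, b):", "stream": false}'
```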
Using Vundle:

```vim
Plugin 'whatever555/free-pilot-vim'
```

Using vim-plug:

```vim
Plug 'whatever555/free-pilot-vim'
```

Using packer.nvim:
```lua
use 'whatever555/free-pilot-vim'
```

```vim
" How long to wait before triggering completion (in ms)
let g:free_pilot_debounce_delay = 500
" Maximum number of suggestions to show
let g:free_pilot_max_suggestions = 3
" Enable debug logging
let g:free_pilot_debug = 0
" Choose backend: 'ollama' or 'openrouter'
let g:free_pilot_backend = 'ollama'
" AI temperature (0.0 - 1.0, lower = more focused)
let g:free_pilot_temperature = 0.1
" Debug log file location (empty = no logging)
let g:free_pilot_log_file = ''
" Maximum tokens to generate
let g:free_pilot_max_tokens = 120

" Model to use with Ollama
let g:free_pilot_ollama_model = 'codellama:13b'
" Ollama API endpoint
let g:free_pilot_ollama_url = 'http://localhost:11434/api/generate'

" Your OpenRouter API key
let g:free_pilot_openrouter_api_key = 'your-api-key'
" Preferred model
let g:free_pilot_openrouter_model = 'anthropic/claude-2:1'
" Your site URL for OpenRouter analytics
let g:free_pilot_openrouter_site_url = 'https://github.com/whatever555/free-pilot-vim'
" Your site name for OpenRouter analytics
let g:free_pilot_openrouter_site_name = 'FreePilot.vim'

" Enable on startup
let g:free_pilot_autostart = 1
" Only enable for specific filetypes (empty = all)
let g:free_pilot_include_filetypes = []
" Disable for specific filetypes
let g:free_pilot_exclude_filetypes = ['help', 'netrw', 'NvimTree', 'TelescopePrompt',
      \ 'fugitive', 'gitcommit', 'quickfix', 'prompt']
```

| Service | Cost | Notes |
|---|---|---|
| GitHub Copilot | $10/month | Fixed subscription |
| Free Pilot (Ollama) | $0 | Free, runs locally |
| Free Pilot (OpenRouter) | ~$0.01-0.10/1000 tokens | Pay for what you use |
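If you go the pay-as-you-go route, the options above combine into a short block in your vimrc. This is a sketch rather than canonical plugin docs; it assumes your shell exports `OPENROUTER_API_KEY` so the key never lands in your dotfiles:

```vim
" Minimal OpenRouter setup (sketch).
" Assumes: export OPENROUTER_API_KEY=... in your shell profile.
let g:free_pilot_backend = 'openrouter'
let g:free_pilot_openrouter_api_key = $OPENROUTER_API_KEY
let g:free_pilot_openrouter_model = 'anthropic/claude-2:1'
let g:free_pilot_max_tokens = 120
```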
- Start typing code as normal
- Watch as AI suggestions appear
- Press `Tab` to accept a suggestion
- Press `Ctrl-]` to skip a suggestion
- `:FreePilotEnable` - Enable completion
- `:FreePilotDisable` - Disable completion
- `:FreePilotToggle` - Toggle completion
- `:FreePilotStatus` - Check current status
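These commands compose with ordinary autocommands. As a hedged example (not from the plugin docs), you could keep completion off by default and enable it only where you want it:

```vim
" Start with Free Pilot disabled, then enable it per filetype.
" Uses only the documented g:free_pilot_autostart option and the
" :FreePilotEnable command listed above.
let g:free_pilot_autostart = 0
augroup FreePilotPerFiletype
  autocmd!
  autocmd FileType python,go FreePilotEnable
augroup END
```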
While Free Pilot offers completely free local AI completion through Ollama, it's important to set realistic expectations:
- Local models (like CodeLlama) running on consumer hardware may be:
  - Slower than cloud solutions
  - Less accurate in their suggestions
  - More memory-intensive
  - Limited in context understanding
This is not a limitation of Free Pilot itself, but rather the current state of running large language models locally. The outlook is improving:

- Smaller, more efficient models are being developed
- Local model performance is improving quickly
- Hardware acceleration is getting better
- Start with local models to test the waters
- If you need more reliable completion, consider using the OpenRouter backend
- Keep updating your Ollama models as new versions are released (see the sketch after this list)
- Consider this an investment in the future of local AI tools
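Updating a local model is just a re-pull of the same tag; Ollama fetches any newer layers and leaves the rest cached. A quick shell sketch:

```sh
# Re-pulling an existing tag updates it in place.
ollama pull codellama:13b
# Confirm what is installed (size, modified date).
ollama list
```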
Contributions are welcome! Feel free to:
- Report bugs
- Suggest features
- Submit pull requests
- **Completion not showing up?**
  - Check if the backend is running (`ollama ps` for local)
  - Verify your API key for OpenRouter
  - Check `:FreePilotStatus`
- **Slow completion?**
  - For Ollama: Try a smaller model
  - For OpenRouter: Check your internet connection
  - Adjust `g:free_pilot_debounce_delay`
- **Wrong completions?**
  - Try a different model
  - Check if the correct filetype is detected
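If none of that helps, the debug options from the configuration section can capture a trace worth attaching to a bug report. A sketch, assuming `~/.cache` exists:

```vim
" Turn on debug logging and write it somewhere easy to inspect.
let g:free_pilot_debug = 1
let g:free_pilot_log_file = expand('~/.cache/free-pilot.log')
```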
MIT License - see LICENSE file for details
- Ollama team for making local AI accessible
- OpenRouter for providing affordable cloud AI
- The Vim/Neovim community
Made with ❤️ by the Free Pilot team
Note: This is not affiliated with GitHub Copilot or OpenAI