A modern web interface for chatting with your local LLMs through Ollama
- 🖥️ Clean, modern interface for interacting with Ollama models
- 💾 Local chat history using IndexedDB
- 📝 Full Markdown support in messages
- 🌙 Dark mode support
- 🚀 Fast and responsive
- 🔒 Privacy-focused: All processing happens locally
- 🌐 Development proxy for easy network access
```shell
# Start Ollama server with your preferred model
ollama pull mistral # or any other model
ollama serve
```

```shell
# Clone and run the GUI
git clone https://github.com/HelgeSverre/ollama-gui.git
cd ollama-gui
yarn install
yarn dev
```
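To verify that the Ollama API is reachable before troubleshooting the GUI, you can query its model list directly (11434 is Ollama's default port):

```shell
# Lists locally installed models; a JSON response means the API is up
curl http://localhost:11434/api/tags
```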
The development server includes an automatic proxy that forwards API requests to your local Ollama instance. This allows other devices on your network to access both the UI and the Ollama API:

```shell
# Start dev server with network access
yarn dev --host

# Access from other devices using your machine's IP
# Example: http://192.168.1.100:5173
```

Note: This proxy feature is only available during development with `yarn dev`. For production deployments, you'll need to configure CORS on your Ollama instance or use a reverse proxy.
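As a quick check that the proxy is working, request the API through the dev server from another device (the IP below is the example address from above; this assumes the proxy forwards requests under Ollama's `/api` path prefix):

```shell
# Should return the same model list as querying Ollama directly
curl http://192.168.1.100:5173/api/tags
```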
To disable the proxy (e.g., when using a custom Ollama endpoint):

```shell
VITE_NO_PROXY=true yarn dev
```

To use the hosted version, run Ollama with:

```shell
OLLAMA_ORIGINS=https://ollama-gui.vercel.app ollama serve
```

The Docker setup runs both Ollama and the GUI together, so no proxy or CORS configuration is needed. Nothing needs to be installed other than Docker.
If you have a GPU, uncomment the following lines in `compose.yml`:
```yaml
# deploy:
#   resources:
#     reservations:
#       devices:
#         - driver: nvidia
#           count: all
#           capabilities: [gpu]
```

```shell
docker compose up -d

# Access at http://localhost:8080
```

To stop the containers:

```shell
docker compose down
```

To download a model, run `ollama pull` inside the `ollama` container:

```shell
# Enter the ollama container
docker exec -it ollama bash

# Inside the container
ollama pull <model_name>

# Example
ollama pull deepseek-r1:7b
```

Restart the containers using `docker compose restart`.
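To confirm the model is available, you can list the installed models without entering a shell (this uses the `ollama` container name from above):

```shell
docker exec -it ollama ollama list
```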
Models are downloaded into the `./ollama_data` folder in the repository. You can change this location in `compose.yml`.
When building the application for production (yarn build), the resulting static files do not include a proxy server. You have several options for production deployments:
```shell
# Allow your production domain
OLLAMA_ORIGINS=https://your-domain.com ollama serve
```

Alternatively, set up a reverse proxy (nginx, Apache, Caddy) to forward `/api` requests to your Ollama instance.
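If you rely on `OLLAMA_ORIGINS`, a rough way to verify that your domain is accepted is to send a request with an `Origin` header and check the response for an `Access-Control-Allow-Origin` header (the domain is the placeholder from above; 11434 is Ollama's default port):

```shell
# The response should include an Access-Control-Allow-Origin header
# when the origin is permitted
curl -i -H "Origin: https://your-domain.com" http://localhost:11434/api/tags
```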
The provided Docker setup runs both services together, eliminating CORS issues:

```shell
docker compose up -d
```

- Chat history with IndexedDB
- Markdown message formatting
- Code cleanup and organization
- Model library browser and installer
- Mobile-responsive design
- File uploads with OCR support
- Vue.js - Frontend framework
- Vite - Build tool
- Tailwind CSS - Styling
- VueUse - Vue Composition Utilities
- @tabler/icons-vue - Icons
- Design inspired by LangUI
- Hosted on Vercel
Released under the MIT License.
