Ollama GUI

A modern web interface for chatting with your local LLMs through Ollama

Powered by Ollama · MIT License · Live Demo: https://ollama-gui.vercel.app

✨ Features

  • 🖥️ Clean, modern interface for interacting with Ollama models
  • 💾 Local chat history using IndexedDB
  • 📝 Full Markdown support in messages
  • 🌙 Dark mode support
  • 🚀 Fast and responsive
  • 🔒 Privacy-focused: All processing happens locally
  • 🌐 Development proxy for easy network access

🚀 Quick Start

Prerequisites (only needed for local development)

  1. Install Ollama
  2. Install Node.js (v16+) and Yarn
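
On Linux, for example, you can install both with a couple of commands (macOS and Windows users should grab the installers from the respective project sites; the commands below assume a Linux shell and an existing Node.js install for the Yarn step):

# Install Ollama via the official install script (Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Install Yarn globally once Node.js (v16+) is available
npm install -g yarn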

Local Development

# Start Ollama server with your preferred model
ollama pull mistral  # or any other model
ollama serve

# Clone and run the GUI
git clone https://github.com/HelgeSverre/ollama-gui.git
cd ollama-gui
yarn install
yarn dev
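
Before opening the GUI, it can be worth confirming that Ollama is actually reachable on its default port (11434); the quick checks below assume a standard local install:

# List the models Ollama has installed locally
ollama list

# Confirm the API answers on the default port
curl http://localhost:11434/api/tags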

Network Access (Development Only)

The development server includes an automatic proxy that forwards API requests to your local Ollama instance, so other devices on your network can reach both the UI and the Ollama API:

# Start dev server with network access
yarn dev --host

# Access from other devices using your machine's IP
# Example: http://192.168.1.100:5173

Note: This proxy feature is only available during development with yarn dev. For production deployments, you'll need to configure CORS on your Ollama instance or use a reverse proxy.
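
To check the proxy from another device, you can query the forwarded API path directly; the IP is the example address above, and /api/tags is Ollama's standard model-listing endpoint (this assumes the proxy forwards the /api prefix, as described in the production notes below):

# Should return the model list from your local Ollama instance
curl http://192.168.1.100:5173/api/tags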

To disable the proxy (e.g., when using a custom Ollama endpoint):

VITE_NO_PROXY=true yarn dev

Using the Hosted Version

To use the hosted version, run Ollama with:

OLLAMA_ORIGINS=https://ollama-gui.vercel.app ollama serve
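
If you want to confirm that the origin was picked up, a request with an explicit Origin header should come back with a matching Access-Control-Allow-Origin header (just a manual sanity check):

# Look for Access-Control-Allow-Origin in the response headers
curl -i -H "Origin: https://ollama-gui.vercel.app" http://localhost:11434/api/tags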

Docker Deployment

The Docker setup runs both Ollama and the GUI together, so no proxy or CORS configuration is needed. The only thing you need to install is Docker.

If you have a GPU, uncomment the following lines in compose.yml:

    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]

Run

docker compose up -d

# Access at http://localhost:8080
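
Once the stack is up, these commands are a quick way to check that both containers started and, if you enabled the GPU block above, that the GPU is visible inside the Ollama container:

# Show container status and follow the logs
docker compose ps
docker compose logs -f

# Optional: confirm GPU passthrough (only if the GPU section in compose.yml is enabled)
docker exec -it ollama nvidia-smi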

Stop

docker compose down

Download more models

# Enter the ollama container
docker exec -it ollama bash

# Inside the container
ollama pull <model_name>

# Example
ollama pull deepseek-r1:7b
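
If you prefer not to open a shell in the container, the same pull can be run as a one-liner (ollama here is the container name used above):

docker exec -it ollama ollama pull deepseek-r1:7b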

Restart the containers using docker compose restart.

Models are downloaded to the ./ollama_data folder in the repository; you can change this location in compose.yml.

🏭 Production Deployment

When the application is built for production (yarn build), the resulting static files do not include a proxy server. You have several options for production deployment:
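
For reference, a typical build-and-serve flow looks like this; dist/ is Vite's default output directory, and npx serve is just one example of a static file server:

# Build the static bundle
yarn build

# Serve it with any static file server, for example:
npx serve dist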

Option 1: Configure CORS on Ollama

# Allow your production domain
OLLAMA_ORIGINS=https://your-domain.com ollama serve
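
OLLAMA_ORIGINS accepts a comma-separated list, so you can allow several origins at once, for example a production domain plus a local dev origin:

OLLAMA_ORIGINS=https://your-domain.com,http://localhost:5173 ollama serve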

Option 2: Use a Reverse Proxy

Set up a reverse proxy (nginx, Apache, Caddy) to forward /api requests to your Ollama instance.

Option 3: Use Docker Compose

The provided Docker setup runs both services together, eliminating CORS issues:

docker compose up -d

🛣️ Roadmap

  • Chat history with IndexedDB
  • Markdown message formatting
  • Code cleanup and organization
  • Model library browser and installer
  • Mobile-responsive design
  • File uploads with OCR support

🛠️ Tech Stack

The app is a Vue.js single-page application built with Vite, with chat history stored locally in IndexedDB.

📄 License

Released under the MIT License.
