AI-powered feedback system for educational institutions using Django and Large Language Models.
Try COFFEE instantly with a single command! Download the `docker-compose.demo.yml` file and run:

```bash
docker compose -f docker-compose.demo.yml up
```

Or use this one-liner (macOS/Linux/Windows):

```bash
curl -O https://raw.githubusercontent.com/hansesm/coffee/main/docker-compose.demo.yml && docker compose -f docker-compose.demo.yml up
```

Windows (PowerShell):

```powershell
Invoke-WebRequest -Uri https://raw.githubusercontent.com/hansesm/coffee/main/docker-compose.demo.yml -OutFile docker-compose.demo.yml; docker compose -f docker-compose.demo.yml up
```

This spins up PostgreSQL, Ollama (with the phi4 model), and the app itself using the pre-built image `ghcr.io/hansesm/coffee:latest`. On startup, migrations run automatically, default users are created, and demo data is imported.
Note: The phi4 model download can take a while, and Ollama may run slowly or time out when running in Docker. You can adjust the `request_timeout` setting in the Admin Panel to prevent timeouts.
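To see how far the model download has progressed and confirm Ollama is answering, you can follow the compose logs and query Ollama's REST API. This sketch assumes the demo compose file publishes Ollama on its default port 11434; if it doesn't, run the `curl` inside the Ollama container instead:

```bash
# Follow startup logs for all demo services (Ctrl+C stops following)
docker compose -f docker-compose.demo.yml logs -f

# List locally available models; phi4 shows up once the pull completes
curl http://localhost:11434/api/tags
```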
Access the app at http://localhost:8000.
To tear everything down:
```bash
docker compose -f docker-compose.demo.yml down -v
```

Important: Restarting the demo reruns the migrations and will likely fail, so this compose file is meant strictly for a one-off demo environment.
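If you do want a second run, wiping the volumes first should give the startup migrations a clean database to work against. A minimal sketch, assuming all demo state lives in the compose-managed volumes:

```bash
# Remove containers and volumes, then start from scratch
docker compose -f docker-compose.demo.yml down -v
docker compose -f docker-compose.demo.yml up
```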
- Prerequisites
  - Install uv
- Clone and set up

  ```bash
  git clone <repository-url>
  cd COFFEE
  uv venv --python 3.13
  uv sync
  ```
- Configure environment

  ```bash
  cp .env.example .env
  # Edit .env with your settings
  ```
- Set up the database

  Without the env variable `DATABASE_URL`, Django creates a SQLite database:

  ```bash
  uv run task migrate
  uv run task create-groups
  ```

  If you want to use a PostgreSQL database, you can spin it up with Docker Compose (see the `DATABASE_URL` sketch after this list):

  ```bash
  docker compose up -d
  uv run task migrate
  uv run task create-groups
  ```
- Run

  ```bash
  uv run task server
  ```
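As noted in the database step, a PostgreSQL connection can also be supplied through a single `DATABASE_URL` instead of the individual `DB_*` variables shown in the configuration section below. A minimal sketch with made-up credentials (substitute the values from your own Compose or server setup):

```bash
# Hypothetical credentials -- replace with your own
export DATABASE_URL=postgres://coffee:secret@localhost:5432/coffee

uv run task migrate
uv run task create-groups
uv run task server
```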
- Install Ollama
  - Follow the official instructions at ollama.com/download for your platform.
- Start the Ollama service
  - After installation the daemon normally starts automatically. You can verify with:

    ```bash
    ollama serve
    ```

    (Press Ctrl+C to stop if it is already running in the background.)
- Download a model

  ```bash
  ollama pull phi4
  ```
- Test the model locally

  ```bash
  ollama run phi4
  ```

  The default API endpoint is available at http://localhost:11434 (see the request sketch after this list).
- Register Ollama in Django Admin
  - Sign in at `<BASE_URL>/admin`.
  - Go to LLM Providers → Add, pick Ollama, set the host (e.g. `http://localhost:11434`), and save.
  - Go to LLM Models → Add, select the newly created Ollama provider, enter the model name (e.g. `phi4`), choose a display name, and save.
  - The provider and model can now be assigned to tasks and criteria inside the app.
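To confirm that the HTTP endpoint Django will call is reachable (and not just the interactive CLI), you can hit Ollama's REST API directly. A minimal sketch, assuming the default host and that `phi4` has already been pulled:

```bash
# One-off, non-streaming generation request against the local Ollama daemon
curl http://localhost:11434/api/generate \
  -d '{"model": "phi4", "prompt": "Say hello in one sentence.", "stream": false}'
```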
Optionally, import the demo data:

```bash
uv run task import-demo-data
```

All configuration is environment-based. Copy `.env.example` to `.env` and customize:

```env
# Django (REQUIRED)
SECRET_KEY=your-secret-key-here
DEBUG=True
DB_PASSWORD=<YOUR_DB_PASSWORD>
DB_USERNAME=<user>
DB_HOST=<host>
DB_PORT=<port>
DB_NAME=<db>
DB_PROTOCOL=<postgres|sqlite>
```
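For illustration, a local-development `.env` pointing at the Dockerized PostgreSQL might look like the sketch below. All values are placeholders, not real credentials, and the hostname assumes Postgres is published on localhost:

```env
SECRET_KEY=change-me-to-a-long-random-string
DEBUG=True
DB_PROTOCOL=postgres
DB_HOST=localhost
DB_PORT=5432
DB_NAME=coffee
DB_USERNAME=coffee
DB_PASSWORD=secret
```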
You can add your own LLM Providers and LLM Models in the Django Admin Panel (`<BASE_URL>/admin`). Currently supported LLM Providers:

- Ollama – see `ollama_api.py`
- Azure – see `azure_ai_api.py`
- Azure OpenAI – see `azure_openai_api.py`
Contributions for additional providers such as LLM Lite, AWS Bedrock, Hugging Face, and others are very welcome! 🚀
Add providers and models in the Django admin under LLM Providers / LLM Models. Each backend needs different connection details (a quick reachability check follows this list):

- Ollama – Set `Endpoint` to your Ollama host (e.g. `http://ollama.local:11434` or `http://localhost:11434`). Leave the API key empty unless you enabled token auth; optional TLS settings live in the JSON `config`.
- Azure AI – Use the Inference endpoint that already includes the deployment segment, for example `https://<azure-resource>/openai/deployments/<deployment>`. Add the matching API key.
- Azure OpenAI – Point `Endpoint` to the service base URL like `https://<azure-resource>.cognitiveservices.azure.com/`. Add the matching API key.
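Before saving a provider, it can save debugging time to confirm the endpoint is reachable from the machine running Django. A minimal sketch for the Ollama case, using the example host above (on a default install the version endpoint answers without authentication):

```bash
# Should return a small JSON document with the Ollama version
curl http://localhost:11434/api/version
```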
After running `python manage.py create_users_and_groups`, use these credentials:

- Admin: username `admin`, password `reverence-referee-lunchbox`
- Manager: username `manager`, password `expediter-saline-untapped`
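These credentials are published in this README, so change them before exposing an instance beyond your machine. Django's built-in `changepassword` management command handles this (shown with plain `python manage.py` to match the command above; prefix with `uv run` in the uv setup):

```bash
# Prompts interactively for a new password for each account
python manage.py changepassword admin
python manage.py changepassword manager
```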
- Admin: Create courses, tasks, and criteria at `/admin/`
- Students: Submit work and receive AI feedback
- Analysis: View feedback analytics and export data
```bash
docker build -t coffee .
docker run -p 8000:8000 --env-file .env coffee  # On Windows add '--network host'
```
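If the container does not run migrations on startup (the Podman walkthrough below runs them manually via `exec`), the same pattern works with Docker. A sketch, assuming the container was started with `--name coffee_app` (adjust to whatever `docker ps` shows):

```bash
# Initialize the schema and default accounts inside the running container
docker exec -it coffee_app python manage.py migrate
docker exec -it coffee_app python manage.py create_users_and_groups
```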
For RedHat Enterprise Linux systems using Podman:

```bash
# Install podman-compose if not already installed
sudo dnf install podman-compose

# Copy and configure environment
cp .env.example .env
# Edit .env with your actual configuration values

# Deploy with podman-compose
podman-compose -f podman-compose.yaml up -d

# Create initial users and database schema
podman exec -it coffee_app python manage.py migrate
podman exec -it coffee_app python manage.py create_users_and_groups

# Access the application
curl http://localhost:8000
```

Useful Podman commands:
```bash
# View logs
podman-compose logs -f coffee_app

# Stop services
podman-compose down

# Rebuild and restart
podman-compose up -d --build
```

This project was developed with assistance from Claude Code, Anthropic's AI coding assistant.
See LICENSE.md for details.