70 changes: 70 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,70 @@
# Changelog

## 2025-08-23

### Backend and Infra
- Added new FastAPI service under `kanban_api/` with SQLAlchemy and Pydantic.
- `kanban_api/app/main.py`: CORS, startup DB retry, dev seed (Default Board with To Do/Doing/Done), and router includes.
- `kanban_api/app/models.py`: ORM models `Board`, `Column`, `Application`, `Resume`. Resolved naming collision by aliasing SQLAlchemy `Column` to `SAColumn`.
- `kanban_api/app/schemas.py`: Pydantic models for serialization/validation.
- `kanban_api/app/routes_kanban.py`: CRUD endpoints for boards, columns, and applications.
- `kanban_api/app/routes_resumes.py`: Create/list resumes and link to `application_id`.
- `kanban_api/app/routes_ai.py`: AI endpoints using LangChain `ChatOllama` (default model `gemma3:1b`):
  - `POST /ai/summarize-board`
  - `POST /ai/tag-application`
  - `POST /ai/next-steps`
- `kanban_api/app/config.py`, `kanban_api/app/db.py`: settings and DB session.
- `kanban_api/Dockerfile`: production-ready Uvicorn container.
- Docker Compose updates in `docker-compose.yaml`:
  - Services: `postgres`, `mlflow`, `kanban_api`, `backend` (Node), `frontend` (CRA).
  - Postgres healthcheck and `POSTGRES_DB=app_db`; `kanban_api` waits for Postgres to become healthy.
  - MLflow switched to a local image built from `docker/mlflow/Dockerfile` with `psycopg2-binary`, exposed on host port `5002`.
- Postgres init script at `docker/postgres/init.sql`: creates `appuser` and the databases `app_db` and `mlflow`.
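The `Column` aliasing mentioned above can be sketched as follows. This is an illustration of the pattern, not the actual `models.py`; the table and field names here are assumptions:

```python
# Minimal sketch of the SAColumn aliasing pattern (illustrative only;
# the real kanban_api/app/models.py may define different fields).
from sqlalchemy import Column as SAColumn, ForeignKey, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Board(Base):
    __tablename__ = "boards"
    id = SAColumn(Integer, primary_key=True)
    name = SAColumn(String, nullable=False)

# Aliasing sqlalchemy.Column to SAColumn frees up the name `Column`
# for the ORM model representing a Kanban column.
class Column(Base):
    __tablename__ = "columns"
    id = SAColumn(Integer, primary_key=True)
    board_id = SAColumn(Integer, ForeignKey("boards.id"))
    title = SAColumn(String, nullable=False)
```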

### Kanban UI and Resume Integration
- Frontend `KanbanPage`: moved the header outside the grid and added `.kanban__page-header` styles for parity with the original board.
- Improved modal UX: wider modal, internal scrolling, and a better resume editor grid.
- Markdown preview uses `react-markdown` with improved typography and spacing.
- Fixed a code-fence issue by stripping leading/trailing ``` fences from AI output so previews no longer render as code blocks.
- Added a Save-to-Card workflow:
  - `POST /resumes` persists markdown linked to `application_id`.
  - The Save action shows a success notice and counts saved versions per card.
- Added export via Pandoc:
  - `GET /resumes/{resume_id}/export?format=pdf|docx`
  - `GET /resumes/applications/{application_id}/export?format=pdf|docx` (latest resume)
  - “Export PDF” / “Export DOCX” buttons in the Resume tab download the returned blobs.
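The fence-stripping fix above can be sketched like this. The real logic lives in the frontend; this Python sketch only shows the idea:

```python
def strip_code_fences(text: str) -> str:
    """Remove a leading/trailing Markdown code fence from AI output.

    Illustrative sketch of the fix described above; the actual
    implementation is in the frontend, this just shows the idea.
    """
    lines = text.strip().splitlines()
    if lines and lines[0].startswith("```"):
        lines = lines[1:]   # drop the opening fence (possibly ```markdown)
    if lines and lines[-1].startswith("```"):
        lines = lines[:-1]  # drop the closing fence
    return "\n".join(lines).strip()
```

For example, `strip_code_fences("```markdown\n# Resume\n```")` returns `# Resume`, so the preview renders as Markdown instead of a code block.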

### Developer Notes
- Frontend env: `REACT_APP_API_BASE` should point to `http://localhost:8000` when running via docker-compose.
- If styles seem off, hard refresh (Cmd+Shift+R) to invalidate cached CSS.

### How to run
1. Stop previous stack (if any):
```bash
docker compose down
```
2. Start services:
```bash
docker compose up -d --build
```
3. Verify:
- Kanban API health: http://localhost:8000/health
- Boards list: http://localhost:8000/kanban/boards
- Columns of board 1: http://localhost:8000/kanban/boards/1/columns
- MLflow UI: http://localhost:5002
4. Example AI requests:
```bash
curl -s -X POST http://localhost:8000/ai/summarize-board \
-H 'Content-Type: application/json' -d '{"board_id":1}'

curl -s -X POST http://localhost:8000/ai/tag-application \
-H 'Content-Type: application/json' -d '{"application_id":1, "max_tags":5}'

curl -s -X POST http://localhost:8000/ai/next-steps \
-H 'Content-Type: application/json' -d '{"application_id":1}'
```

### Notes
- Default model: `gemma3:1b`, served via `OLLAMA_BASE_URL`.
- Dev seed creates a "Default Board" with three columns on first run.
- Next steps: unify the frontend into a single CRA app with routes `/kanban` and `/resume`, port the original Kanban styles, add drag-and-drop and CRUD wiring, and persist generated resumes in Postgres.
173 changes: 156 additions & 17 deletions README.md
@@ -1,42 +1,181 @@
# 🧠 Resume Builder App

This application generates ATS-friendly resumes based on job descriptions and unstructured input data. The project includes:

- A classic Resume Generator (`backend` + `frontend`).
- A new Kanban API (`kanban_api/`) with AI endpoints using LangChain + Ollama.
- Postgres database and MLflow service.

All services are orchestrated using Docker Compose.

## 📦 Requirements

- [Docker](https://www.docker.com/)
- [Docker Compose](https://docs.docker.com/compose/)
- Optional (for local AI): [Ollama](https://ollama.com/) installed on the host

## 🚀 Run the Stack (Docker Compose)

Start all services (Postgres, MLflow, Kanban API, backend, frontend):

```bash
docker compose up -d --build
```

- Frontend (CRA): http://localhost:8080
- Node Backend: http://localhost:5001
- Kanban API (FastAPI): http://localhost:8000
- MLflow UI: http://localhost:5002

Check health of the Kanban API:

```bash
curl -s http://localhost:8000/health
```

Basic Kanban endpoints:

```bash
curl -s http://localhost:8000/kanban/boards
curl -s http://localhost:8000/kanban/boards/1/columns
curl -s http://localhost:8000/kanban/boards/1/applications
```

## 🌐 Backend Environment Variables

- `backend/` (Node):
  - `PORT`: port to listen on (default 5001)
  - `CORS_ORIGIN`: allowed origin for frontend requests
  - `LLM_URL`: URL to the LLM API (e.g., an Ollama instance)
  - `MODEL_NAME`: model name

- `kanban_api/` (FastAPI):
  - `DATABASE_URL`: `postgresql+psycopg2://appuser:apppass@postgres:5432/app_db`
  - `CORS_ORIGIN`: `http://localhost:8080`
  - `AI_PROVIDER`: `ollama` or `openai`
  - `MODEL_NAME`: `gemma3:1b` (default)
  - `OLLAMA_BASE_URL`: `http://host.docker.internal:11434`
  - `OPENAI_BASE_URL`: OpenAI-compatible base URL (e.g. `https://api.openai.com/v1` or a gateway)
  - `OPENAI_API_KEY`: API key when using the OpenAI-compatible provider
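A minimal sketch of how the `kanban_api` variables might be read, assuming plain `os.getenv` with the defaults listed above (the actual `config.py` may differ, e.g. by using pydantic settings):

```python
import os
from dataclasses import dataclass

# Illustrative settings loader; the real kanban_api/app/config.py may differ.
@dataclass
class Settings:
    database_url: str = os.getenv(
        "DATABASE_URL",
        "postgresql+psycopg2://appuser:apppass@postgres:5432/app_db",
    )
    cors_origin: str = os.getenv("CORS_ORIGIN", "http://localhost:8080")
    ai_provider: str = os.getenv("AI_PROVIDER", "ollama")
    model_name: str = os.getenv("MODEL_NAME", "gemma3:1b")
    ollama_base_url: str = os.getenv(
        "OLLAMA_BASE_URL", "http://host.docker.internal:11434"
    )

settings = Settings()
```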

## 📂 Output Directory

All generated resumes and related files are saved in the local `./output` directory, which is mounted into the backend container.

## 🧾 Kanban: Save Resume to Card & Export

You can generate, edit, save, and export resumes directly from the Kanban modal (Details → Resume tab).

### From the UI

1. Open a card → Details → Resume tab.
2. Paste the Job Description and optionally your Profile, then click "AI: Generate Resume".
3. Edit the Markdown as needed and click "Save to Card".
   - A notice will show the total number of saved versions linked to this card.
4. Click "Export PDF" or "Export DOCX" to download via Pandoc.

### API Endpoints (FastAPI)

- Create/save resume linked to a card:

```bash
curl -s -X POST http://localhost:8000/resumes \
-H 'Content-Type: application/json' \
-d '{
"application_id": 1,
"job_description": "...",
"input_profile": "...",
"markdown": "# My Resume..."
}'
```

- List resumes for a card:

```bash
curl -s http://localhost:8000/resumes/applications/1
```

- Export latest resume for a card (PDF or DOCX):

```bash
curl -L -o resume.pdf "http://localhost:8000/resumes/applications/1/export?format=pdf"
curl -L -o resume.docx "http://localhost:8000/resumes/applications/1/export?format=docx"
```

Pandoc is installed in the `kanban_api` container (see `kanban_api/Dockerfile`).
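A hedged sketch of how such an export endpoint might shell out to Pandoc. The function name and structure here are assumptions, not the actual `routes_resumes.py`:

```python
import subprocess
import tempfile
from pathlib import Path

def export_markdown(markdown: str, fmt: str = "pdf") -> bytes:
    """Convert resume Markdown to PDF/DOCX via Pandoc.

    Illustrative sketch only; the real export endpoint in
    kanban_api/app/routes_resumes.py may differ.
    """
    if fmt not in ("pdf", "docx"):
        raise ValueError(f"unsupported format: {fmt}")
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "resume.md"
        dst = Path(tmp) / f"resume.{fmt}"
        src.write_text(markdown, encoding="utf-8")
        # Pandoc infers the input/output formats from the file extensions.
        subprocess.run(["pandoc", str(src), "-o", str(dst)], check=True)
        return dst.read_bytes()
```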

## 🤖 AI (Ollama) Setup (Local Host)

The Kanban AI endpoints use Ollama via `OLLAMA_BASE_URL`. To run locally on the host:

1) Start the Ollama server (host):

```bash
ollama serve
```

2) In a separate terminal, pull the model tag used by this repo (smallest):

```bash
ollama pull gemma3:1b
```

3) Verify Ollama is up and reachable:

```bash
curl -s http://localhost:11434/api/tags
```

4) Test AI endpoints (kanban_api):

```bash
curl -s -X POST http://localhost:8000/ai/summarize-board \
-H 'Content-Type: application/json' -d '{"board_id":1}'

curl -s -X POST http://localhost:8000/ai/tag-application \
-H 'Content-Type: application/json' -d '{"application_id":1, "max_tags":5}'

curl -s -X POST http://localhost:8000/ai/next-steps \
-H 'Content-Type: application/json' -d '{"application_id":1}'
```

Note: `kanban_api` includes `extra_hosts: host.docker.internal:host-gateway` so the container can reach the host Ollama.

## 🔌 OpenAI-compatible Provider Configuration

Both the Node `backend/` and the Python `kanban_api/` can be configured to use OpenAI-compatible APIs.

- Backend (Node):
  - Select the provider via the `LLM` env var: `ollamaService` (default) or `openaiService`.
  - For Ollama (raw API):
    - `LLM=ollamaService`
    - `LLM_URL=http://host.docker.internal:11434/api/generate`
    - `MODEL_NAME=gemma3:1b`
  - For an OpenAI-compatible API (Chat Completions):
    - `LLM=openaiService`
    - `LLM_URL=https://api.openai.com/v1/chat/completions` (or a compatible gateway)
    - `OPENAI_API_KEY=...`
    - `MODEL_NAME=gpt-4o-mini` (or a compatible model on your provider)

- Kanban API (FastAPI):
  - Select the provider via `AI_PROVIDER=ollama|openai`.
  - For Ollama:
    - `AI_PROVIDER=ollama`
    - `OLLAMA_BASE_URL=http://host.docker.internal:11434`
    - `MODEL_NAME=gemma3:1b`
  - For an OpenAI-compatible API:
    - `AI_PROVIDER=openai`
    - `OPENAI_BASE_URL=https://api.openai.com/v1`
    - `OPENAI_API_KEY=...`
    - `MODEL_NAME=gpt-4o-mini` (or a compatible model on your provider)
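The provider switch can be sketched as a small factory over these variables. This is illustrative; the real `routes_ai.py` presumably builds a LangChain `ChatOllama` or OpenAI-compatible client from the same settings:

```python
import os

def resolve_llm_config(env=None):
    """Map AI_PROVIDER env vars to a provider config dict.

    Illustrative sketch; the real kanban_api code presumably constructs
    the actual LangChain client from these same variables.
    """
    env = dict(os.environ) if env is None else env
    provider = env.get("AI_PROVIDER", "ollama")
    if provider == "ollama":
        return {
            "provider": "ollama",
            "base_url": env.get("OLLAMA_BASE_URL", "http://host.docker.internal:11434"),
            "model": env.get("MODEL_NAME", "gemma3:1b"),
        }
    if provider == "openai":
        return {
            "provider": "openai",
            "base_url": env.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
            "api_key": env.get("OPENAI_API_KEY", ""),
            "model": env.get("MODEL_NAME", "gpt-4o-mini"),
        }
    raise ValueError(f"unknown AI_PROVIDER: {provider}")
```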

## 📌 Kanban-Board (New Frontend — Under Development)

We are unifying the frontend into a single CRA app with routes `/kanban` and `/resume`, porting the exact Kanban styles.

Current status:

- Resume generation works via the classic Node backend.
- Kanban API is live with CRUD and AI endpoints.
- Frontend unification in progress.

6 changes: 6 additions & 0 deletions backend/services/openaiService.js
@@ -127,6 +127,11 @@ async function callLLM(prompt) {
}
},
}
}, {
headers: {
"Content-Type": "application/json",
...(process.env.OPENAI_API_KEY ? { "Authorization": `Bearer ${process.env.OPENAI_API_KEY}` } : {})
}
});

logger.info("📡 OpenAI API Raw Response:", response.data);
@@ -145,3 +150,4 @@ async function callLLM(prompt) {
}

module.exports = { callLLM };

51 changes: 50 additions & 1 deletion docker-compose.yaml
@@ -17,4 +17,53 @@ services:
      - MODEL_NAME=gemma3:1b
      - PORT=5001
    ports:
      - 5001:5001

  postgres:
    image: postgres:16
    container_name: postgres
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: apppass
      POSTGRES_DB: app_db
    ports:
      - 5432:5432
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./docker/postgres/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d app_db"]
      interval: 3s
      timeout: 3s
      retries: 10

  mlflow:
    build: ./docker/mlflow
    container_name: mlflow
    ports:
      - 5002:5000
    depends_on:
      - postgres
    volumes:
      - mlflow_artifacts:/mlflow-artifacts

  kanban_api:
    build: ./kanban_api
    container_name: kanban_api
    environment:
      CORS_ORIGIN: http://localhost:8080
      DATABASE_URL: postgresql+psycopg2://appuser:apppass@postgres:5432/app_db
      AI_PROVIDER: ollama
      MODEL_NAME: gemma3:1b
      OLLAMA_BASE_URL: http://host.docker.internal:11434
    ports:
      - 8000:8000
    depends_on:
      postgres:
        condition: service_healthy
    extra_hosts:
      - "host.docker.internal:host-gateway"

volumes:
  pgdata:
  mlflow_artifacts:
16 changes: 16 additions & 0 deletions docker/mlflow/Dockerfile
@@ -0,0 +1,16 @@
FROM python:3.11-slim

WORKDIR /app

# System dependencies for psycopg2
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*

# Install MLflow and Postgres driver
RUN pip install --no-cache-dir mlflow psycopg2-binary

EXPOSE 5000

CMD ["mlflow", "server", "--host", "0.0.0.0", "--port", "5000", "--backend-store-uri", "postgresql+psycopg2://appuser:apppass@postgres:5432/mlflow", "--artifacts-destination", "/mlflow-artifacts"]
23 changes: 23 additions & 0 deletions docker/postgres/init.sql
@@ -0,0 +1,23 @@
-- Create application user and databases
DO
$$
BEGIN
IF NOT EXISTS (
SELECT FROM pg_catalog.pg_roles WHERE rolname = 'appuser') THEN
CREATE ROLE appuser LOGIN PASSWORD 'apppass';
END IF;
END
$$;

-- Create databases if they do not exist.
-- CREATE DATABASE cannot run inside a DO block (it is disallowed inside a
-- transaction), so use psql's \gexec to execute the generated statements.
SELECT 'CREATE DATABASE app_db OWNER appuser'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'app_db')\gexec

SELECT 'CREATE DATABASE mlflow OWNER appuser'
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'mlflow')\gexec