diff --git a/CHANGELOG.md b/CHANGELOG.md
new file mode 100644
index 0000000..5869c1a
--- /dev/null
+++ b/CHANGELOG.md
@@ -0,0 +1,70 @@
+# Changelog
+
+## 2025-08-23
+
+### Backend and Infra
+- Added new FastAPI service under `kanban_api/` with SQLAlchemy and Pydantic.
+ - `kanban_api/app/main.py`: CORS, startup DB retry, dev seed (Default Board with To Do/Doing/Done), and router includes.
+ - `kanban_api/app/models.py`: ORM models `Board`, `Column`, `Application`, `Resume`. Resolved naming collision by aliasing SQLAlchemy `Column` to `SAColumn`.
+ - `kanban_api/app/schemas.py`: Pydantic models for serialization/validation.
+ - `kanban_api/app/routes_kanban.py`: CRUD endpoints for boards, columns, and applications.
+ - `kanban_api/app/routes_resumes.py`: Create/list resumes and link to `application_id`.
+  - `kanban_api/app/routes_ai.py`: AI endpoints using LangChain `ChatOllama` (default model `gemma3:1b`):
+    - `POST /ai/summarize-board`
+    - `POST /ai/tag-application`
+    - `POST /ai/next-steps`
+    - `POST /ai/generate-resume`
+ - `kanban_api/app/config.py`, `kanban_api/app/db.py`: settings and DB session.
+ - `kanban_api/Dockerfile`: production-ready Uvicorn container.
+- Docker Compose updates in `docker-compose.yaml`:
+ - Services: `postgres`, `mlflow`, `kanban_api`, `backend` (Node), `frontend` (CRA).
+ - Postgres healthcheck and `POSTGRES_DB=app_db`. `kanban_api` waits for Postgres health.
+ - MLflow switched to local image built from `docker/mlflow/Dockerfile` with `psycopg2-binary`, exposed on host `5002`.
+- Postgres init script at `docker/postgres/init.sql`: creates `appuser`, databases `app_db` and `mlflow`.
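+
+A quick sanity check of the new infrastructure after `docker compose up` (service, user, and database names as configured above; adjust if you changed them):
+
+```bash
+# Containers and their health status
+docker compose ps
+
+# Tables created by the Kanban API on startup (boards, columns, applications, resumes)
+docker compose exec postgres psql -U appuser -d app_db -c '\dt'
+
+# Kanban API health endpoint reports model, Ollama base URL, and DB target
+curl -s http://localhost:8000/health
+```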
+
+### Kanban UI and Resume Integration
+- Frontend `KanbanPage`: moved the header outside the grid and added `.kanban__page-header` styles for parity with the original board.
+- Modal UX improved: wider modal, internal scrolling, better resume editor grid.
+- Markdown preview uses `react-markdown` with improved typography and spacing.
+- Fixed a code-fence issue by stripping leading/trailing ``` fences from AI output so the Markdown preview does not render as a code block.
+- Added Save-to-Card workflow:
+ - `POST /resumes` persists markdown linked to `application_id`.
+ - Save action shows success notice and counts saved versions per card.
+- Added Export via Pandoc:
+ - `GET /resumes/{resume_id}/export?format=pdf|docx`
+ - `GET /resumes/applications/{application_id}/export?format=pdf|docx` (latest resume)
+  - Frontend buttons "Export PDF" / "Export DOCX" in the Resume tab, downloading blobs.
+
+### Developer Notes
+- Frontend env: `REACT_APP_API_BASE` should point to `http://localhost:8000` when running via docker-compose.
+- If styles seem off, hard refresh (Cmd+Shift+R) to invalidate cached CSS.
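+
+The frontend falls back to `http://localhost:8000` if the variable is unset (see `frontend/src/api/kanban.js`). To override it, a minimal `frontend/.env` is enough (the file name is the usual CRA convention; it is not shipped in this repo):
+
+```bash
+# Read by frontend/src/api/kanban.js at build time
+REACT_APP_API_BASE=http://localhost:8000
+```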
+
+### How to run
+1. Stop previous stack (if any):
+ ```bash
+ docker compose down
+ ```
+2. Start services:
+ ```bash
+ docker compose up -d --build
+ ```
+3. Verify:
+ - Kanban API health: http://localhost:8000/health
+ - Boards list: http://localhost:8000/kanban/boards
+ - Columns of board 1: http://localhost:8000/kanban/boards/1/columns
+ - MLflow UI: http://localhost:5002
+4. Example AI requests:
+ ```bash
+ curl -s -X POST http://localhost:8000/ai/summarize-board \
+ -H 'Content-Type: application/json' -d '{"board_id":1}'
+
+ curl -s -X POST http://localhost:8000/ai/tag-application \
+ -H 'Content-Type: application/json' -d '{"application_id":1, "max_tags":5}'
+
+ curl -s -X POST http://localhost:8000/ai/next-steps \
+ -H 'Content-Type: application/json' -d '{"application_id":1}'
+ ```
+
+### Notes
+- Default model: `gemma3:1b`, served via `OLLAMA_BASE_URL`.
+- Dev seed creates a "Default Board" with three columns on first run.
+- Next steps: unify the frontend (single CRA app) with routes `/kanban` and `/resume`, port the original Kanban styles, add DnD and CRUD wiring, and wire resume generation to persist in Postgres.
diff --git a/README.md b/README.md
index dba48cf..0769a88 100644
--- a/README.md
+++ b/README.md
@@ -1,42 +1,181 @@
# Resume Builder App
-This application generates ATS-friendly resumes based on job descriptions and unstructured input data. It is composed of a `frontend` and `backend` service, orchestrated using Docker Compose.
+This project includes:
+
+- A classic Resume Generator (`backend` + `frontend`).
+- A new Kanban API (`kanban_api/`) with AI endpoints using LangChain + Ollama.
+- Postgres database and MLflow service.
+
+All services are orchestrated using Docker Compose.
## Requirements
- [Docker](https://www.docker.com/)
- [Docker Compose](https://docs.docker.com/compose/)
+- Optional (for local AI): [Ollama](https://ollama.com/) installed on the host
+
+## Run the Stack (Docker Compose)
+
+Start all services (Postgres, MLflow, Kanban API, backend, frontend):
+
+```bash
+docker compose up -d --build
+```
+
+- Frontend (CRA): http://localhost:8080
+- Node Backend: http://localhost:5001
+- Kanban API (FastAPI): http://localhost:8000
+- MLflow UI: http://localhost:5002
+
+Check health of the Kanban API:
-## Running the App
+```bash
+curl -s http://localhost:8000/health
+```
-To start the application, run:
+Basic Kanban endpoints:
```bash
-docker-compose up --build
+curl -s http://localhost:8000/kanban/boards
+curl -s http://localhost:8000/kanban/boards/1/columns
+curl -s http://localhost:8000/kanban/boards/1/applications
```
- • The frontend will be available at: http://localhost:8080
- • The backend API will be available at: http://localhost:5001
## Backend Environment Variables
-CORS_ORIGIN Allowed origin for frontend requests
-LLM_URL URL to the LLM API (e.g., Ollama instance)
-MODEL_NAME Model to use for inference
-PORT Port the backend service listens on
+- `backend/` (Node):
+ - `PORT`: port to listen on (default 5001)
+ - `CORS_ORIGIN`: allowed origin for frontend requests
+ - `LLM_URL`: URL to the LLM API (e.g., Ollama instance)
+ - `MODEL_NAME`: model name
+
+- `kanban_api/` (FastAPI):
+ - `DATABASE_URL`: `postgresql+psycopg2://appuser:apppass@postgres:5432/app_db`
+ - `CORS_ORIGIN`: `http://localhost:8080`
+ - `AI_PROVIDER`: `ollama` or `openai`
+ - `MODEL_NAME`: `gemma3:1b` (default)
+ - `OLLAMA_BASE_URL`: `http://host.docker.internal:11434`
+ - `OPENAI_BASE_URL`: OpenAI-compatible base URL (e.g. `https://api.openai.com/v1` or a gateway)
+ - `OPENAI_API_KEY`: API key when using the OpenAI-compatible provider
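+
+For reference, `docker-compose.yaml` wires the `kanban_api` service with the following values (shown here in `.env` form):
+
+```bash
+CORS_ORIGIN=http://localhost:8080
+DATABASE_URL=postgresql+psycopg2://appuser:apppass@postgres:5432/app_db
+AI_PROVIDER=ollama
+MODEL_NAME=gemma3:1b
+OLLAMA_BASE_URL=http://host.docker.internal:11434
+```
+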
## Output Directory
All generated resumes and related files are saved in the local ./output directory, which is mounted into the backend container.
-## Kanban-Board (New Version - Under Development)
+## Kanban: Save Resume to Card & Export
+
+You can generate, edit, save, and export resumes directly from the Kanban modal (Details → Resume tab).
-A new version of the application is being developed inside the kanban-board folder.
+### From the UI
-This new app is a standalone service that includes:
+1. Open a card → Details → Resume tab.
+2. Paste the Job Description and optionally your Profile, then click "AI: Generate Resume".
+3. Edit the Markdown as needed and click "Save to Card".
+ - A notice will show the total number of saved versions linked to this card.
+4. Click "Export PDF" or "Export DOCX" to download via Pandoc.
- • Resume generation (currently does not export yet)
- • A Kanban board to track your job applications
- • AI-powered actionables to help you move each application forward
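+
+Under the hood, the "AI: Generate Resume" button calls `POST /ai/generate-resume` on the Kanban API (see `kanban_api/app/routes_ai.py`). Roughly the same call from the command line, which returns `{ "markdown": "..." }`:
+
+```bash
+curl -s -X POST http://localhost:8000/ai/generate-resume \
+  -H 'Content-Type: application/json' \
+  -d '{"application_id": 1, "job_description": "...", "profile": "..."}'
+```
+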
+### API Endpoints (FastAPI)
+
+- Create/save resume linked to a card:
+
+```bash
+curl -s -X POST http://localhost:8000/resumes \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "application_id": 1,
+ "job_description": "...",
+ "input_profile": "...",
+ "markdown": "# My Resume..."
+ }'
+```
+
+- List resumes for a card:
+
+```bash
+curl -s http://localhost:8000/resumes/applications/1
+```
+
+- Export latest resume for a card (PDF or DOCX):
+
+```bash
+curl -L -o resume.pdf "http://localhost:8000/resumes/applications/1/export?format=pdf"
+curl -L -o resume.docx "http://localhost:8000/resumes/applications/1/export?format=docx"
+```
+
+Pandoc is installed in the `kanban_api` container (see `kanban_api/Dockerfile`).
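+
+To export a specific saved version instead of the latest, use its `resume_id` (returned by `POST /resumes` and by the list endpoint above):
+
+```bash
+curl -L -o resume.pdf "http://localhost:8000/resumes/1/export?format=pdf"
+```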
+
+## AI (Ollama) Setup (Local Host)
+
+The Kanban AI endpoints use Ollama via `OLLAMA_BASE_URL`. To run locally on the host:
+
+1) Start the Ollama server (host):
+
+```bash
+ollama serve
+```
+
+2) In a separate terminal, pull the model tag used by this repo (smallest):
+
+```bash
+ollama pull gemma3:1b
+```
+
+3) Verify Ollama is up and reachable:
+
+```bash
+curl -s http://localhost:11434/api/tags
+```
+
+4) Test AI endpoints (kanban_api):
+
+```bash
+curl -s -X POST http://localhost:8000/ai/summarize-board \
+ -H 'Content-Type: application/json' -d '{"board_id":1}'
+
+curl -s -X POST http://localhost:8000/ai/tag-application \
+ -H 'Content-Type: application/json' -d '{"application_id":1, "max_tags":5}'
+
+curl -s -X POST http://localhost:8000/ai/next-steps \
+ -H 'Content-Type: application/json' -d '{"application_id":1}'
+```
-To run or contribute to this new version, please refer to the documentation in kanban-board/README.md.
+Note: `kanban_api` sets `extra_hosts: host.docker.internal:host-gateway` so the container can reach Ollama running on the host.
+
+## OpenAI-compatible Provider Configuration
+
+Both the Node `backend/` and the Python `kanban_api/` can be configured to use OpenAI-compatible APIs.
+
+- Backend (Node):
+ - Select the provider via `LLM` env: `ollamaService` (default) or `openaiService`.
+ - For Ollama (raw API):
+ - `LLM=ollamaService`
+ - `LLM_URL=http://host.docker.internal:11434/api/generate`
+ - `MODEL_NAME=gemma3:1b`
+ - For OpenAI-compatible (Chat Completions):
+ - `LLM=openaiService`
+ - `LLM_URL=https://api.openai.com/v1/chat/completions` (or a compatible gateway)
+ - `OPENAI_API_KEY=...`
+ - `MODEL_NAME=gpt-4o-mini` (or a compatible model on your provider)
+
+- Kanban API (FastAPI):
+ - Select the provider via `AI_PROVIDER=ollama|openai`.
+ - For Ollama:
+ - `AI_PROVIDER=ollama`
+ - `OLLAMA_BASE_URL=http://host.docker.internal:11434`
+ - `MODEL_NAME=gemma3:1b`
+ - For OpenAI-compatible:
+ - `AI_PROVIDER=openai`
+ - `OPENAI_BASE_URL=https://api.openai.com/v1`
+ - `OPENAI_API_KEY=...`
+ - `MODEL_NAME=gpt-4o-mini` (or a compatible model on your provider)
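+
+As a rough sketch, running `kanban_api` directly on the host against an OpenAI-compatible endpoint could look like this (assumes a local Python environment and Postgres reachable on `localhost`; inside Compose the same variables are set on the `kanban_api` service instead):
+
+```bash
+cd kanban_api
+pip install -r requirements.txt
+export AI_PROVIDER=openai
+export OPENAI_BASE_URL=https://api.openai.com/v1
+export OPENAI_API_KEY=sk-...   # your key
+export MODEL_NAME=gpt-4o-mini
+export DATABASE_URL=postgresql+psycopg2://appuser:apppass@localhost:5432/app_db
+uvicorn app.main:app --host 0.0.0.0 --port 8000
+```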
+
+## Kanban-Board (New Frontend - Under Development)
+
+We are unifying the frontend into a single CRA app with routes `/kanban` and `/resume`, porting the exact Kanban styles.
+
+Current status:
+
+- Resume generation works via the classic Node backend.
+- Kanban API is live with CRUD and AI endpoints.
+- Frontend unification in progress.
diff --git a/backend/services/openaiService.js b/backend/services/openaiService.js
index d7560c3..2b3da91 100644
--- a/backend/services/openaiService.js
+++ b/backend/services/openaiService.js
@@ -127,6 +127,11 @@ async function callLLM(prompt) {
}
},
}
+ }, {
+ headers: {
+ "Content-Type": "application/json",
+ ...(process.env.OPENAI_API_KEY ? { "Authorization": `Bearer ${process.env.OPENAI_API_KEY}` } : {})
+ }
});
logger.info("OpenAI API Raw Response:", response.data);
@@ -145,3 +150,4 @@ async function callLLM(prompt) {
}
module.exports = { callLLM };
+
diff --git a/docker-compose.yaml b/docker-compose.yaml
index 8f7c6d1..2df624e 100644
--- a/docker-compose.yaml
+++ b/docker-compose.yaml
@@ -17,4 +17,53 @@ services:
- MODEL_NAME=gemma3:1b
- PORT=5001
ports:
- - 5001:5001
\ No newline at end of file
+ - 5001:5001
+
+ postgres:
+ image: postgres:16
+ container_name: postgres
+ environment:
+ POSTGRES_USER: appuser
+ POSTGRES_PASSWORD: apppass
+ POSTGRES_DB: app_db
+ ports:
+ - 5432:5432
+ volumes:
+ - pgdata:/var/lib/postgresql/data
+ - ./docker/postgres/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
+ healthcheck:
+ test: ["CMD-SHELL", "pg_isready -U appuser -d app_db"]
+ interval: 3s
+ timeout: 3s
+ retries: 10
+
+ mlflow:
+ build: ./docker/mlflow
+ container_name: mlflow
+ ports:
+ - 5002:5000
+ depends_on:
+ - postgres
+ volumes:
+ - mlflow_artifacts:/mlflow-artifacts
+
+ kanban_api:
+ build: ./kanban_api
+ container_name: kanban_api
+ environment:
+ CORS_ORIGIN: http://localhost:8080
+ DATABASE_URL: postgresql+psycopg2://appuser:apppass@postgres:5432/app_db
+ AI_PROVIDER: ollama
+ MODEL_NAME: gemma3:1b
+ OLLAMA_BASE_URL: http://host.docker.internal:11434
+ ports:
+ - 8000:8000
+ depends_on:
+ postgres:
+ condition: service_healthy
+ extra_hosts:
+ - "host.docker.internal:host-gateway"
+
+volumes:
+ pgdata:
+ mlflow_artifacts:
\ No newline at end of file
diff --git a/docker/mlflow/Dockerfile b/docker/mlflow/Dockerfile
new file mode 100644
index 0000000..1a8b4a4
--- /dev/null
+++ b/docker/mlflow/Dockerfile
@@ -0,0 +1,16 @@
+FROM python:3.11-slim
+
+WORKDIR /app
+
+# System dependencies for psycopg2
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ build-essential \
+ libpq-dev \
+ && rm -rf /var/lib/apt/lists/*
+
+# Install MLflow and Postgres driver
+RUN pip install --no-cache-dir mlflow psycopg2-binary
+
+EXPOSE 5000
+
+CMD ["mlflow", "server", "--host", "0.0.0.0", "--port", "5000", "--backend-store-uri", "postgresql+psycopg2://appuser:apppass@postgres:5432/mlflow", "--artifacts-destination", "/mlflow-artifacts"]
diff --git a/docker/postgres/init.sql b/docker/postgres/init.sql
new file mode 100644
index 0000000..9ae4455
--- /dev/null
+++ b/docker/postgres/init.sql
@@ -0,0 +1,23 @@
+-- Create application user and databases
+DO
+$$
+BEGIN
+ IF NOT EXISTS (
+ SELECT FROM pg_catalog.pg_roles WHERE rolname = 'appuser') THEN
+ CREATE ROLE appuser LOGIN PASSWORD 'apppass';
+ END IF;
+END
+$$;
+
+-- Create databases if they do not exist.
+-- CREATE DATABASE cannot run inside a DO block (it executes in a transaction),
+-- so emit the statements conditionally and run them with psql's \gexec.
+SELECT 'CREATE DATABASE app_db OWNER appuser'
+WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'app_db')\gexec
+
+SELECT 'CREATE DATABASE mlflow OWNER appuser'
+WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'mlflow')\gexec
diff --git a/docs/PR-feature-kanban-api-ollama-mlflow.md b/docs/PR-feature-kanban-api-ollama-mlflow.md
new file mode 100644
index 0000000..90cf545
--- /dev/null
+++ b/docs/PR-feature-kanban-api-ollama-mlflow.md
@@ -0,0 +1,89 @@
+# PR: Kanban API + Ollama + MLflow integration, AI provider support, and docs
+
+## Summary
+This PR introduces a FastAPI-based Kanban backend (`kanban_api/`) with Postgres, initial data seeding, and AI endpoints powered by LangChain. It adds provider-selectable AI (Ollama or OpenAI-compatible), aligns Docker Compose services, and documents end-to-end setup, including a connectivity test script.
+
+## Key changes
+- FastAPI service `kanban_api/` with SQLAlchemy 2.0 models and Pydantic schemas.
+- AI endpoints using LangChain with provider switch:
+ - `AI_PROVIDER=ollama|openai`
+ - Default model set to `gemma3:1b`.
+- Docker Compose improvements:
+ - Postgres healthcheck + dependency gating.
+ - `extra_hosts` so API can reach host Ollama via `host.docker.internal`.
+- README updated with detailed run instructions and provider configuration.
+- Connectivity test script `scripts/connectivity_test.sh` to validate services and AI endpoints.
+
+## Environment variables
+- `kanban_api/` (FastAPI):
+ - `DATABASE_URL=postgresql+psycopg2://appuser:apppass@postgres:5432/app_db`
+ - `CORS_ORIGIN=http://localhost:8080`
+ - `AI_PROVIDER=ollama|openai`
+ - `MODEL_NAME=gemma3:1b`
+ - `OLLAMA_BASE_URL=http://host.docker.internal:11434` (for Ollama)
+ - `OPENAI_BASE_URL`, `OPENAI_API_KEY` (for OpenAI-compatible)
+- `backend/` (Node):
+ - `LLM=ollamaService|openaiService`
+ - `LLM_URL` to either `http://host.docker.internal:11434/api/generate` or `https://api.openai.com/v1/chat/completions`
+ - `MODEL_NAME` (defaults to `gemma3:1b` for Ollama), `OPENAI_API_KEY` when using OpenAI-compatible
+
+## Data contracts (Pydantic schemas)
+Defined in `kanban_api/app/schemas.py`.
+
+- BoardRead:
+ - `{ id: int, name: str, created_at: datetime }`
+- ColumnRead:
+ - `{ id: int, board_id: int, name: str, position: int }`
+- ApplicationRead:
+ - `{ id: int, board_id: int, column_id: int, title: str, company: str, description: str|None, status: str|None, tags: List[str], created_at: datetime, updated_at: datetime }`
+- ResumeRead:
+  - `{ id: int, application_id: int|None, job_description: str|None, input_profile: str|None, markdown: str|None, docx_path: str|None, model: str|None, params: dict|None, created_at: datetime }`
+
+AI contracts in `kanban_api/app/routes_ai.py`:
+- SummarizeBoardRequest `{ board_id: int, focus?: str }` -> SummarizeBoardResponse `{ summary: str }`
+- TagApplicationRequest `{ application_id: int, max_tags?: int }` -> TagApplicationResponse `{ tags: List[str] }`
+- NextStepsRequest `{ application_id: int }` -> NextStepsResponse `{ steps: List[str] }`
+- GenerateResumeRequest `{ application_id: int, job_description: str, profile?: str }` -> GenerateResumeResponse `{ markdown: str }`
+
+## API routes
+- Kanban (`kanban_api/app/routes_kanban.py`):
+  - `POST /kanban/boards` -> `BoardRead`
+  - `GET /kanban/boards` -> `List[BoardRead]`
+  - `POST /kanban/boards/{board_id}/columns` -> `ColumnRead`
+  - `GET /kanban/boards/{board_id}/columns` -> `List[ColumnRead]`
+  - `POST /kanban/boards/{board_id}/applications` -> `ApplicationRead`
+  - `GET /kanban/boards/{board_id}/applications` -> `List[ApplicationRead]`
+  - `PUT /kanban/applications/{application_id}` -> `ApplicationRead`
+  - `POST /kanban/applications/{application_id}/move` -> `ApplicationRead`
+  - `DELETE /kanban/applications/{application_id}` -> `{ "ok": true }`
+- AI (`kanban_api/app/routes_ai.py`):
+  - `POST /ai/summarize-board` -> `SummarizeBoardResponse`
+  - `POST /ai/tag-application` -> `TagApplicationResponse`
+  - `POST /ai/next-steps` -> `NextStepsResponse`
+  - `POST /ai/generate-resume` -> `GenerateResumeResponse`
+- Resumes (`kanban_api/app/routes_resumes.py`):
+  - `POST /resumes` -> `ResumeRead`
+  - `GET /resumes/applications/{application_id}` -> `List[ResumeRead]`
+  - `GET /resumes/{resume_id}/export?format=pdf|docx` -> exported file
+  - `GET /resumes/applications/{application_id}/export?format=pdf|docx` -> exported file (latest resume)
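+
+For example, creating a card on board 1 (an illustrative payload; field names follow `ApplicationCreate` in `kanban_api/app/schemas.py`, and `column_id` should reference one of the seeded columns):
+
+```bash
+curl -s -X POST http://localhost:8000/kanban/boards/1/applications \
+  -H 'Content-Type: application/json' \
+  -d '{
+    "title": "Backend Engineer",
+    "company": "Acme",
+    "description": "Python/FastAPI role",
+    "status": "Applied",
+    "tags": ["python", "fastapi"],
+    "column_id": 1
+  }'
+```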
+
+## How to run
+```bash
+docker compose up -d --build
+# Start Ollama on host and pull model
+ollama serve &
+ollama pull gemma3:1b
+# Validate connectivity
+./scripts/connectivity_test.sh
+```
+
+## Tests performed
+- CRUD tested for boards/columns/applications.
+- AI endpoints tested against Ollama `gemma3:1b` via host mapping.
+- `scripts/connectivity_test.sh` consolidates health checks and AI calls.
+
+## Frontend unification (WIP)
+- Plan to unify into a single CRA app with `react-router-dom` and shared navbar.
+- Routes: `/kanban` (drag-and-drop board with CRUD) and `/resume` (resume builder), with consistent Kanban styling.
+- Integrate AI actions (summarize, tag, next steps) on application cards.
+
+## Next steps
+- Add Alembic migrations and a formal seed script.
+- Implement frontend unification pages and shared styles; wire to FastAPI.
+- Optional: Add Ollama as a Compose service and point `AI_PROVIDER=ollama` with `OLLAMA_BASE_URL=http://ollama:11434`.
+- Configure MLflow tracking in `llm-experiments` to use Postgres backend.
+
+## Notes
+- `kanban_api` includes retry and DB health gating to ensure reliable startup.
+- OpenAI-compatible support added via `langchain-openai` and `ChatOpenAI`, gated by envs.
diff --git a/frontend/package.json b/frontend/package.json
index b13366e..aa337fa 100644
--- a/frontend/package.json
+++ b/frontend/package.json
@@ -14,7 +14,8 @@
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-markdown": "^9.0.2",
- "react-scripts": "^5.0.1"
+ "react-scripts": "^5.0.1",
+ "react-router-dom": "^6.26.2"
},
"engines": {
"node": ">=16.0.0"
diff --git a/frontend/src/App.js b/frontend/src/App.js
index 746b27d..aa1c869 100644
--- a/frontend/src/App.js
+++ b/frontend/src/App.js
@@ -1,13 +1,21 @@
import React from "react";
-import ResumeForm from "./components/ResumeForm";
+import { Routes, Route, Navigate } from "react-router-dom";
+import Navbar from "./components/Navbar";
+import ResumePage from "./pages/ResumePage";
+import KanbanPage from "./pages/KanbanPage";
import "./styles/ResumeForm.css";
function App() {
return (
-    <div className="App">
-      <h1>ATS Resume Generator</h1>
-      <ResumeForm />
-    </div>
+    <>
+      <Navbar />
+      <Routes>
+        <Route path="/" element={<Navigate to="/kanban" replace />} />
+        <Route path="/kanban" element={<KanbanPage />} />
+        <Route path="/resume" element={<ResumePage />} />
+        <Route path="*" element={<Navigate to="/kanban" replace />} />
+      </Routes>
+    </>
);
}
diff --git a/frontend/src/api/kanban.js b/frontend/src/api/kanban.js
new file mode 100644
index 0000000..d3180ef
--- /dev/null
+++ b/frontend/src/api/kanban.js
@@ -0,0 +1,92 @@
+import axios from "axios";
+
+const API_BASE = process.env.REACT_APP_API_BASE || "http://localhost:8000";
+
+export const getBoards = async () => {
+ const { data } = await axios.get(`${API_BASE}/kanban/boards`);
+ return data;
+};
+
+export const getColumns = async (boardId) => {
+ const { data } = await axios.get(`${API_BASE}/kanban/boards/${boardId}/columns`);
+ return data;
+};
+
+export const getApplications = async (boardId) => {
+ const { data } = await axios.get(`${API_BASE}/kanban/boards/${boardId}/applications`);
+ return data;
+};
+
+export const createApplication = async (boardId, payload) => {
+ const { data } = await axios.post(`${API_BASE}/kanban/boards/${boardId}/applications`, payload);
+ return data;
+};
+
+export const updateApplication = async (applicationId, payload) => {
+ const { data } = await axios.put(`${API_BASE}/kanban/applications/${applicationId}`, payload);
+ return data;
+};
+
+export const moveApplication = async (applicationId, columnId) => {
+ const { data } = await axios.post(`${API_BASE}/kanban/applications/${applicationId}/move`, { column_id: columnId });
+ return data;
+};
+
+// AI endpoints
+export const aiSummarizeBoard = async (boardId, focus) => {
+ const { data } = await axios.post(`${API_BASE}/ai/summarize-board`, { board_id: boardId, focus });
+ return data; // { summary }
+};
+
+export const aiTagApplication = async (applicationId, max_tags = 5) => {
+ const { data } = await axios.post(`${API_BASE}/ai/tag-application`, { application_id: applicationId, max_tags });
+ return data; // { tags }
+};
+
+export const aiNextSteps = async (applicationId) => {
+ const { data } = await axios.post(`${API_BASE}/ai/next-steps`, { application_id: applicationId });
+ return data; // { steps }
+};
+
+// Resume endpoints
+export const aiGenerateResume = async ({ applicationId, jobDescription, profile }) => {
+ const { data } = await axios.post(`${API_BASE}/ai/generate-resume`, {
+ application_id: applicationId,
+ job_description: jobDescription,
+ profile,
+ });
+ return data; // { markdown }
+};
+
+export const createResume = async ({ applicationId, jobDescription, inputProfile, markdown, model, params }) => {
+ const { data } = await axios.post(`${API_BASE}/resumes`, {
+ application_id: applicationId,
+ job_description: jobDescription,
+ input_profile: inputProfile,
+ markdown,
+ model,
+ params,
+ });
+ return data; // ResumeRead
+};
+
+export const listResumesForApplication = async (applicationId) => {
+ const { data } = await axios.get(`${API_BASE}/resumes/applications/${applicationId}`);
+ return data; // ResumeRead[]
+};
+
+export const exportLatestResume = async (applicationId, format = "pdf") => {
+ const response = await axios.get(
+ `${API_BASE}/resumes/applications/${applicationId}/export`,
+ { params: { format }, responseType: "blob" }
+ );
+ return response;
+};
+
+export const exportResumeById = async (resumeId, format = "pdf") => {
+ const response = await axios.get(
+ `${API_BASE}/resumes/${resumeId}/export`,
+ { params: { format }, responseType: "blob" }
+ );
+ return response;
+};
diff --git a/frontend/src/components/Navbar.js b/frontend/src/components/Navbar.js
new file mode 100644
index 0000000..d1b3ea9
--- /dev/null
+++ b/frontend/src/components/Navbar.js
@@ -0,0 +1,25 @@
+import React from "react";
+import { Link, NavLink } from "react-router-dom";
+import "../styles/Navbar.css";
+
+export default function Navbar() {
+ return (
+
+ );
+}
diff --git a/frontend/src/index.js b/frontend/src/index.js
index c17ee95..61f27e6 100644
--- a/frontend/src/index.js
+++ b/frontend/src/index.js
@@ -1,10 +1,14 @@
import React from "react";
import ReactDOM from "react-dom";
import App from "./App";
+import { BrowserRouter } from "react-router-dom";
+import "./styles/theme.css";
ReactDOM.render(
  <React.StrictMode>
-    <App />
+    <BrowserRouter>
+      <App />
+    </BrowserRouter>
  </React.StrictMode>,
  document.getElementById("root")
);
\ No newline at end of file
diff --git a/frontend/src/pages/KanbanPage.js b/frontend/src/pages/KanbanPage.js
new file mode 100644
index 0000000..162b8e9
--- /dev/null
+++ b/frontend/src/pages/KanbanPage.js
@@ -0,0 +1,453 @@
+import React, { useEffect, useMemo, useState } from "react";
+import ReactMarkdown from "react-markdown";
+import { getColumns, getApplications, moveApplication, updateApplication, aiSummarizeBoard, aiTagApplication, aiNextSteps, aiGenerateResume, createResume, createApplication, exportLatestResume, listResumesForApplication } from "../api/kanban";
+import "../styles/Kanban.css";
+
+export default function KanbanPage() {
+ const BOARD_ID = 1;
+ const [columns, setColumns] = useState([]);
+ const [apps, setApps] = useState([]);
+ const [loading, setLoading] = useState(true);
+ const [error, setError] = useState(null);
+ const [editing, setEditing] = useState({}); // { [id]: { title, company, description } }
+ const [selected, setSelected] = useState(null); // card for modal
+ const [ai, setAi] = useState({ loading: false, summary: "", tags: [], steps: [], notice: "" });
+ const [tab, setTab] = useState("details"); // details | resume
+ const [resume, setResume] = useState({ jd: "", profile: "", markdown: "" });
+ const stripCodeFences = (md = "") => {
+ if (!md) return "";
+ // remove leading ```lang and trailing ``` if present
+ const startStripped = md.replace(/^```[a-zA-Z]*\n?/, "");
+ const endStripped = startStripped.replace(/\n?```\s*$/, "");
+ return endStripped.trim();
+ };
+
+ useEffect(() => {
+ let mounted = true;
+ async function load() {
+ try {
+ setLoading(true);
+ const [cols, applications] = await Promise.all([
+ getColumns(BOARD_ID),
+ getApplications(BOARD_ID),
+ ]);
+ if (!mounted) return;
+ setColumns(cols);
+ setApps(applications);
+ } catch (e) {
+ if (!mounted) return;
+ setError("Failed to load Kanban data");
+ } finally {
+ if (mounted) setLoading(false);
+ }
+ }
+ load();
+ return () => {
+ mounted = false;
+ };
+ }, []);
+
+ const addApplication = async () => {
+ try {
+ const defaultColId = columns[0]?.id || null;
+ const payload = {
+ title: "New Application",
+ company: "",
+ description: "",
+ status: "Applied",
+ tags: [],
+ column_id: defaultColId,
+ };
+ const created = await createApplication(BOARD_ID, payload);
+ setApps((prev) => [created, ...prev]);
+ } catch (e) {
+ setError("Failed to add application");
+ }
+ };
+
+ const generateResume = async () => {
+ if (!selected || !resume.jd) return;
+ try {
+ setAi((s) => ({ ...s, loading: true }));
+ const { markdown } = await aiGenerateResume({ applicationId: selected.id, jobDescription: resume.jd, profile: resume.profile });
+ setResume((r) => ({ ...r, markdown: stripCodeFences(markdown) }));
+ } catch (e) {
+ setError("AI resume generation failed");
+ } finally {
+ setAi((s) => ({ ...s, loading: false }));
+ }
+ };
+
+ const saveResume = async () => {
+ if (!selected || !resume.markdown) return;
+ try {
+ setAi((s) => ({ ...s, loading: true }));
+ await createResume({ applicationId: selected.id, jobDescription: resume.jd, inputProfile: resume.profile, markdown: stripCodeFences(resume.markdown) });
+ // optional: fetch count to reflect it was saved
+ try {
+ const list = await listResumesForApplication(selected.id);
+ setAi((s) => ({ ...s, notice: `Saved. Total resumes: ${list.length}` }));
+ } catch {}
+ } catch (e) {
+ setError("Failed to save resume");
+ } finally {
+ setAi((s) => ({ ...s, loading: false }));
+ }
+ };
+
+ const downloadBlob = (blob, filename) => {
+ const url = window.URL.createObjectURL(blob);
+ const a = document.createElement('a');
+ a.href = url;
+ a.download = filename;
+ document.body.appendChild(a);
+ a.click();
+ a.remove();
+ window.URL.revokeObjectURL(url);
+ };
+
+ const exportLatest = async (format) => {
+ if (!selected) return;
+ try {
+ setAi((s) => ({ ...s, loading: true }));
+ const response = await exportLatestResume(selected.id, format);
+ const filename = response.headers['content-disposition']?.split('filename=')[1]?.replace(/"/g, '') || `resume.${format}`;
+ downloadBlob(response.data, filename);
+ setAi((s) => ({ ...s, notice: `Exported ${format.toUpperCase()}` }));
+ } catch (e) {
+ setError(`Failed to export ${format.toUpperCase()}`);
+ } finally {
+ setAi((s) => ({ ...s, loading: false }));
+ }
+ };
+
+ const grouped = useMemo(() => {
+ const byCol = new Map();
+ for (const c of columns) byCol.set(c.id, []);
+ for (const a of apps) {
+ if (!byCol.has(a.column_id)) byCol.set(a.column_id, []);
+ byCol.get(a.column_id).push(a);
+ }
+ return byCol;
+ }, [columns, apps]);
+
+ const handleMove = async (cardId, newColumnId) => {
+ try {
+ // optimistic update
+ setApps((prev) => prev.map((a) => (a.id === cardId ? { ...a, column_id: Number(newColumnId) } : a)));
+ await moveApplication(cardId, Number(newColumnId));
+ } catch (e) {
+ setError("Failed to move card");
+ }
+ };
+
+ const startEdit = (card) => {
+ setEditing((prev) => ({
+ ...prev,
+ [card.id]: {
+ title: card.title || "",
+ company: card.company || "",
+ description: card.description || "",
+ status: card.status || "",
+ tags: card.tags || [],
+ column_id: card.column_id,
+ board_id: card.board_id,
+ },
+ }));
+ };
+
+ const cancelEdit = (id) => {
+ setEditing((prev) => {
+ const n = { ...prev };
+ delete n[id];
+ return n;
+ });
+ };
+
+ const saveEdit = async (id) => {
+ const payload = editing[id];
+ try {
+ const updated = await updateApplication(id, payload);
+ setApps((prev) => prev.map((a) => (a.id === id ? updated : a)));
+ cancelEdit(id);
+ } catch (e) {
+ setError("Failed to update card");
+ }
+ };
+
+ const openDetails = (card) => {
+ setSelected(card);
+ setAi({ loading: false, summary: "", tags: [], steps: [] });
+ setTab("details");
+ setResume({ jd: card.description || "", profile: "", markdown: "" });
+ };
+ const closeDetails = () => {
+ setSelected(null);
+ setAi({ loading: false, summary: "", tags: [], steps: [] });
+ };
+
+ const runSummarize = async () => {
+ try {
+ setAi((s) => ({ ...s, loading: true, summary: "" }));
+ const { summary } = await aiSummarizeBoard(BOARD_ID);
+ setAi((s) => ({ ...s, loading: false, summary }));
+ } catch (e) {
+ setAi((s) => ({ ...s, loading: false }));
+ setError("AI summarize failed");
+ }
+ };
+ const runTag = async () => {
+ if (!selected) return;
+ try {
+ setAi((s) => ({ ...s, loading: true }));
+ const { tags } = await aiTagApplication(selected.id, 5);
+ setAi((s) => ({ ...s, loading: false, tags }));
+ // also update the card with tags
+ const updated = await updateApplication(selected.id, { ...selected, tags });
+ setApps((prev) => prev.map((a) => (a.id === selected.id ? updated : a)));
+ setSelected(updated);
+ } catch (e) {
+ setAi((s) => ({ ...s, loading: false }));
+ setError("AI tag failed");
+ }
+ };
+ const runNextSteps = async () => {
+ if (!selected) return;
+ try {
+ setAi((s) => ({ ...s, loading: true }));
+ const { steps } = await aiNextSteps(selected.id);
+ setAi((s) => ({ ...s, loading: false, steps }));
+ } catch (e) {
+ setAi((s) => ({ ...s, loading: false }));
+ setError("AI next steps failed");
+ }
+ };
+
+ const saveSelected = async () => {
+ if (!selected) return;
+ try {
+ const payload = {
+ title: selected.title || "",
+ company: selected.company || "",
+ description: selected.description || "",
+ status: selected.status || "",
+ tags: selected.tags || [],
+ column_id: selected.column_id,
+ };
+ const updated = await updateApplication(selected.id, payload);
+ setApps((prev) => prev.map((a) => (a.id === selected.id ? updated : a)));
+ setSelected(updated);
+ } catch (e) {
+ setError("Failed to save details");
+ }
+ };
+
+  if (loading) return <div className="kanban__loading">Loading board...</div>;
+  if (error) return <div className="kanban__error">{error}</div>;
+
+ return (
+
+
+
Kanban
+
+
+ {ai.loading && (
+
+ )}
+
+ {columns.map((col) => (
+
+
+
{col.name}
+
+
+ {(grouped.get(col.id) || []).map((card) => (
+
+ ))}
+
+
+ ))}
+
+ {selected && (
+
+
e.stopPropagation()}>
+
+
Application #{selected.id}
+
+
+
+
+
+
+
+ {tab === "details" && (<>
+
+
+ setSelected({ ...selected, title: e.target.value })} />
+
+
+
+ setSelected({ ...selected, company: e.target.value })} />
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ {(selected.tags || []).map((t, i) => ({t}))}
+
+
+
+
+
+
+
+
+ {ai.loading &&
AI working...
}
+ {ai.tags.length > 0 && (
+
+
Suggested Tags
+
+ {ai.tags.map((t, i) => ({t}))}
+
+
+ )}
+ {ai.steps.length > 0 && (
+
+
Next Steps
+
+ {ai.steps.map((s, i) => (- {s}
))}
+
+
+ )}
+ {ai.summary && (
+
+
Board Summary
+
{ai.summary}
+
+ )}
+ >)}
+
+ {tab === "resume" && (
+ <>
+
+
+
+
+
+
+
+
+
+
+
+
+
+ {resume.markdown ? (
+
+ {stripCodeFences(resume.markdown)}
+
+ ) : (
+
No content yet
+ )}
+
+
+
+
+
+
+
+
+
+ {ai.notice &&
{ai.notice}
}
+ >
+ )}
+
+
+
+ )}
+
+
+ );
+}
+
diff --git a/frontend/src/pages/ResumePage.js b/frontend/src/pages/ResumePage.js
new file mode 100644
index 0000000..5d85af5
--- /dev/null
+++ b/frontend/src/pages/ResumePage.js
@@ -0,0 +1,11 @@
+import React from "react";
+import ResumeForm from "../components/ResumeForm";
+
+export default function ResumePage() {
+ return (
+    <div className="App">
+      <h1>ATS Resume Generator</h1>
+      <ResumeForm />
+    </div>
+
+ );
+}
diff --git a/frontend/src/styles/Kanban.css b/frontend/src/styles/Kanban.css
new file mode 100644
index 0000000..3cbfdd1
--- /dev/null
+++ b/frontend/src/styles/Kanban.css
@@ -0,0 +1,243 @@
+.kanban {
+ display: grid;
+ grid-template-columns: repeat(auto-fill, minmax(320px, 1fr));
+ gap: 20px;
+ align-items: start;
+ padding: 20px;
+}
+/* Page header */
+.kanban__page-header {
+ display: flex;
+ justify-content: space-between;
+ align-items: center;
+ padding: 12px 20px;
+ background: var(--background);
+ border-bottom: 1px solid var(--border);
+ position: sticky;
+ top: 0;
+ z-index: 40;
+}
+@media (max-width: 1200px) {
+ .kanban {
+ grid-template-columns: repeat(auto-fill, minmax(280px, 1fr));
+ }
+}
+@media (max-width: 768px) {
+ .kanban {
+ grid-template-columns: repeat(auto-fill, minmax(260px, 1fr));
+ gap: 14px;
+ padding: 14px;
+ }
+}
+.kanban__column {
+ background: var(--background);
+ border: 1px solid var(--border);
+ border-radius: var(--radius);
+ padding: 8px;
+ color: var(--foreground);
+}
+.kanban__column-header h3 {
+ margin: 6px 8px 12px;
+ font-size: 14px;
+ color: var(--foreground);
+ letter-spacing: 0.2px;
+}
+.kanban__cards {
+ display: flex;
+ flex-direction: column;
+ gap: 10px;
+}
+.kanban__card {
+ background: var(--card);
+ border: 1px solid var(--border);
+ border-radius: var(--radius);
+ padding: 12px 14px;
+ box-shadow: 0 1px 2px rgba(16, 24, 40, 0.04), 0 2px 8px rgba(16, 24, 40, 0.06);
+}
+.kanban__card-title {
+ color: var(--card-foreground);
+ font-weight: 600;
+}
+.kanban__input, .kanban__textarea {
+ width: 100%;
+ background: #fff;
+ border: 1px solid var(--border);
+ border-radius: 6px;
+ padding: 8px 10px;
+ font-size: 13px;
+ color: var(--foreground);
+ margin: 6px 0;
+}
+.kanban__card-sub {
+ color: var(--muted-foreground);
+ font-size: 12px;
+ margin-bottom: 6px;
+}
+.kanban__tags {
+ display: flex;
+ flex-wrap: wrap;
+ gap: 6px;
+ margin: 8px 0 6px;
+}
+.kanban__tag {
+ background: var(--primary);
+ color: #0b2330;
+ padding: 2px 8px;
+ border-radius: 999px;
+ font-size: 11px;
+ font-weight: 600;
+}
+.kanban__desc {
+ font-size: 12px;
+ color: var(--muted-foreground);
+ margin: 6px 0 10px;
+}
+.kanban__card-actions {
+ display: flex;
+ gap: 8px;
+ flex-wrap: wrap;
+}
+.btn {
+ border: 1px solid var(--border);
+ background: #ffffff;
+ color: var(--foreground);
+ padding: 6px 10px;
+ border-radius: 6px;
+ font-size: 12px;
+ cursor: pointer;
+}
+.btn:disabled { opacity: 0.6; cursor: not-allowed; }
+.btn.btn--ghost { background: #fff; }
+.btn--ghost:hover {
+ background: rgba(126, 196, 207, 0.12);
+ border-color: var(--primary);
+}
+.kanban__loading, .kanban__error {
+ color: var(--foreground);
+ padding: 12px;
+}
+
+/* Modal */
+.modal__backdrop {
+ position: fixed;
+ inset: 0;
+ background: rgba(15, 23, 42, 0.4);
+ display: flex;
+ align-items: center;
+ justify-content: center;
+ padding: 20px;
+ z-index: 50;
+}
+.modal {
+ width: 100%;
+ max-width: 900px;
+ background: var(--card);
+ border: 1px solid var(--border);
+ border-radius: var(--radius);
+ box-shadow: 0 10px 30px rgba(16, 24, 40, 0.18);
+ padding: 16px;
+ max-height: 85vh;
+ overflow: auto;
+}
+.modal__header {
+ display: flex;
+ align-items: center;
+ justify-content: space-between;
+ margin-bottom: 12px;
+}
+.modal__tabs {
+ display: flex;
+ gap: 8px;
+ margin: 0 0 12px 0;
+}
+.tab {
+ padding: 6px 10px;
+ border: 1px solid var(--border);
+ background: var(--card);
+ color: var(--muted-foreground);
+ border-radius: calc(var(--radius) - 4px);
+ cursor: pointer;
+}
+.tab.active {
+ color: var(--foreground);
+ border-color: var(--primary);
+ box-shadow: 0 0 0 2px rgba(126, 196, 207, 0.18) inset;
+}
+.modal__body {
+ display: flex;
+ flex-direction: column;
+ gap: 12px;
+}
+.kanban__input-group label {
+ font-size: 12px;
+ color: var(--muted-foreground);
+ display: block;
+ margin-bottom: 4px;
+}
+
+/* Resume editor */
+.resume__editor {
+ display: grid;
+ grid-template-columns: 1fr 1fr;
+ gap: 12px;
+}
+@media (max-width: 900px) {
+ .resume__editor {
+ grid-template-columns: 1fr;
+ }
+}
+.resume__pane label {
+ font-size: 12px;
+ color: var(--muted-foreground);
+ display: block;
+ margin-bottom: 4px;
+}
+.resume__preview {
+ border: 1px solid var(--border);
+ border-radius: var(--radius);
+ background: var(--card);
+ padding: 12px;
+ min-height: 260px;
+ overflow: auto;
+}
+.resume__preview-body {
+ margin: 0;
+ color: var(--foreground);
+}
+.kanban__textarea {
+ font-family: ui-monospace, SFMono-Regular, Menlo, monospace;
+ line-height: 1.4;
+}
+.markdown-body h1, .markdown-body h2, .markdown-body h3 { margin: 8px 0; }
+.markdown-body p { margin: 6px 0 10px; line-height: 1.6; }
+.markdown-body ul, .markdown-body ol { padding-left: 18px; margin: 6px 0; }
+.markdown-body li { margin: 2px 0; }
+.markdown-body code { background: #f6f8fa; padding: 1px 4px; border-radius: 4px; font-family: ui-monospace, SFMono-Regular, Menlo, monospace; }
+.markdown-body pre { background: #f6f8fa; padding: 8px; border-radius: 6px; overflow: auto; }
+.resume__preview-empty {
+ color: var(--muted-foreground);
+}
+
+/* Top progress bar shown during AI actions */
+.progress {
+ position: sticky;
+ top: 0;
+ left: 0;
+ right: 0;
+ height: 3px;
+ background: transparent;
+ z-index: 60;
+}
+.progress__bar {
+ width: 35%;
+ height: 100%;
+ background: var(--primary);
+ border-radius: 2px;
+ box-shadow: 0 0 12px rgba(68, 184, 201, 0.45);
+ animation: progress-slide 1.15s ease-in-out infinite;
+}
+@keyframes progress-slide {
+ 0% { transform: translateX(-120%); opacity: .6; }
+ 50% { transform: translateX(40%); opacity: 1; }
+ 100% { transform: translateX(220%); opacity: .6; }
+}
diff --git a/frontend/src/styles/Navbar.css b/frontend/src/styles/Navbar.css
new file mode 100644
index 0000000..915e0cb
--- /dev/null
+++ b/frontend/src/styles/Navbar.css
@@ -0,0 +1,33 @@
+.navbar {
+ display: flex;
+ align-items: center;
+ justify-content: space-between;
+ padding: 0.75rem 1.25rem;
+ background: #ffffff;
+ color: var(--foreground);
+ border-bottom: 1px solid var(--border);
+}
+.navbar a {
+ color: var(--muted-foreground);
+ text-decoration: none;
+ transition: color .15s ease;
+}
+.navbar__brand a {
+ font-weight: 600;
+ font-size: 18px;
+ color: var(--foreground);
+}
+.navbar__links {
+ list-style: none;
+ display: flex;
+ gap: 1.25rem;
+ margin: 0;
+ padding: 0;
+}
+.navbar__links .active {
+ border-bottom: 2px solid var(--primary);
+ color: var(--foreground);
+}
+.navbar__links a:hover {
+ color: var(--foreground);
+}
diff --git a/frontend/src/styles/theme.css b/frontend/src/styles/theme.css
new file mode 100644
index 0000000..7ce1f59
--- /dev/null
+++ b/frontend/src/styles/theme.css
@@ -0,0 +1,16 @@
+:root {
+ --background: #f0f0f0; /* Light gray */
+ --foreground: #111827; /* Tailwind gray-900-ish */
+ --card: #ffffff;
+ --card-foreground: #111827;
+ --border: #e5e7eb; /* gray-200 */
+ --muted-foreground: #6b7280; /* gray-500 */
+ --primary: #7ec4cf; /* Soft Blue */
+ --accent: #faf884; /* Light Yellow */
+ --radius: 8px;
+}
+
+body {
+ background: var(--background);
+ color: var(--foreground);
+}
diff --git a/kanban_api/Dockerfile b/kanban_api/Dockerfile
new file mode 100644
index 0000000..ca356fd
--- /dev/null
+++ b/kanban_api/Dockerfile
@@ -0,0 +1,19 @@
+FROM python:3.11-slim
+
+WORKDIR /app
+
+# System deps for psycopg2 and Pandoc (used for resume export)
+RUN apt-get update && apt-get install -y --no-install-recommends \
+ build-essential \
+ libpq-dev \
+ pandoc \
+ && rm -rf /var/lib/apt/lists/*
+
+COPY requirements.txt ./
+RUN pip install --no-cache-dir -r requirements.txt
+
+COPY app ./app
+
+EXPOSE 8000
+
+CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
diff --git a/kanban_api/app/__init__.py b/kanban_api/app/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/kanban_api/app/config.py b/kanban_api/app/config.py
new file mode 100644
index 0000000..f45814a
--- /dev/null
+++ b/kanban_api/app/config.py
@@ -0,0 +1,17 @@
+from pydantic import BaseModel
+import os
+
+class Settings(BaseModel):
+ CORS_ORIGIN: str = os.getenv("CORS_ORIGIN", "http://localhost:8080")
+ DATABASE_URL: str = os.getenv(
+ "DATABASE_URL",
+ "postgresql+psycopg2://appuser:apppass@postgres:5432/app_db",
+ )
+ # AI settings
+ AI_PROVIDER: str = os.getenv("AI_PROVIDER", "ollama") # ollama | openai
+ MODEL_NAME: str = os.getenv("MODEL_NAME", "gemma3:1b")
+ OLLAMA_BASE_URL: str = os.getenv("OLLAMA_BASE_URL", "http://host.docker.internal:11434")
+ OPENAI_BASE_URL: str = os.getenv("OPENAI_BASE_URL", "") # e.g., https://api.openai.com/v1 or a compatible endpoint
+ OPENAI_API_KEY: str = os.getenv("OPENAI_API_KEY", "")
+
+settings = Settings()
diff --git a/kanban_api/app/db.py b/kanban_api/app/db.py
new file mode 100644
index 0000000..c550a3c
--- /dev/null
+++ b/kanban_api/app/db.py
@@ -0,0 +1,14 @@
+from sqlalchemy import create_engine
+from sqlalchemy.orm import sessionmaker
+from .config import settings
+
+engine = create_engine(settings.DATABASE_URL, pool_pre_ping=True)
+SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
+
+# Dependency for FastAPI routes (generator compatible with Depends)
+def get_db():
+ db = SessionLocal()
+ try:
+ yield db
+ finally:
+ db.close()
diff --git a/kanban_api/app/main.py b/kanban_api/app/main.py
new file mode 100644
index 0000000..deecb88
--- /dev/null
+++ b/kanban_api/app/main.py
@@ -0,0 +1,65 @@
+from fastapi import FastAPI
+from fastapi.middleware.cors import CORSMiddleware
+from .config import settings
+from .models import Base, Board, Column
+from .db import engine, SessionLocal
+from .routes_kanban import router as kanban_router
+from .routes_resumes import router as resumes_router
+from .routes_ai import router as ai_router
+import time
+from sqlalchemy.exc import OperationalError
+
+app = FastAPI(title="Kanban API", version="0.1.0")
+
+app.add_middleware(
+ CORSMiddleware,
+ allow_origins=[settings.CORS_ORIGIN],
+ allow_credentials=True,
+ allow_methods=["*"],
+ allow_headers=["*"],
+)
+
+@app.on_event("startup")
+def on_startup():
+ """Create tables automatically in development.
+ In production, prefer Alembic migrations.
+ """
+ # Wait for Postgres to be ready (simple retry loop)
+ retries = 10
+ for attempt in range(retries):
+ try:
+ Base.metadata.create_all(bind=engine)
+ # Development seed: ensure one board with three default columns exists
+ db = SessionLocal()
+ try:
+ has_board = db.query(Board).first()
+ if not has_board:
+ board = Board(name="Default Board")
+ db.add(board)
+ db.flush()
+ cols = [
+ Column(board_id=board.id, name="To Do", position=0),
+ Column(board_id=board.id, name="Doing", position=1),
+ Column(board_id=board.id, name="Done", position=2),
+ ]
+ db.add_all(cols)
+ db.commit()
+ finally:
+ db.close()
+ break
+ except OperationalError:
+ time.sleep(2)
+ continue
+
+app.include_router(kanban_router)
+app.include_router(resumes_router)
+app.include_router(ai_router)
+
+@app.get("/health")
+def health():
+ return {
+ "status": "ok",
+ "model": settings.MODEL_NAME,
+ "ollama_base_url": settings.OLLAMA_BASE_URL,
+ "db": settings.DATABASE_URL.split("@")[-1],
+ }
diff --git a/kanban_api/app/models.py b/kanban_api/app/models.py
new file mode 100644
index 0000000..5486c03
--- /dev/null
+++ b/kanban_api/app/models.py
@@ -0,0 +1,76 @@
+"""
+SQLAlchemy ORM models for the Kanban domain and Resume records.
+PostgreSQL is the database backend. Migrations will be managed by Alembic later.
+"""
+from datetime import datetime
+from typing import Optional
+
+from sqlalchemy import Integer, String, DateTime, ForeignKey, Text, JSON
+from sqlalchemy import Column as SAColumn
+from sqlalchemy.dialects.postgresql import ARRAY
+from sqlalchemy.orm import relationship, declarative_base
+
+Base = declarative_base()
+
+
+class Board(Base):
+ __tablename__ = "boards"
+
+ id = SAColumn(Integer, primary_key=True, index=True)
+ name = SAColumn(String(255), nullable=False)
+ created_at = SAColumn(DateTime, default=datetime.utcnow, nullable=False)
+
+ columns = relationship("Column", back_populates="board", cascade="all, delete-orphan")
+ applications = relationship("Application", back_populates="board", cascade="all, delete-orphan")
+
+
+class Column(Base):
+ __tablename__ = "columns"
+
+ id = SAColumn(Integer, primary_key=True, index=True)
+ board_id = SAColumn(Integer, ForeignKey("boards.id", ondelete="CASCADE"), nullable=False, index=True)
+ name = SAColumn(String(255), nullable=False)
+ position = SAColumn(Integer, nullable=False, default=0)
+
+ board = relationship("Board", back_populates="columns")
+ applications = relationship("Application", back_populates="column", cascade="all, delete-orphan")
+
+
+class Application(Base):
+ __tablename__ = "applications"
+
+ id = SAColumn(Integer, primary_key=True, index=True)
+ board_id = SAColumn(Integer, ForeignKey("boards.id", ondelete="CASCADE"), nullable=False, index=True)
+ column_id = SAColumn(Integer, ForeignKey("columns.id", ondelete="SET NULL"), nullable=True, index=True)
+
+ title = SAColumn(String(255), nullable=False)
+ company = SAColumn(String(255), nullable=True)
+ description = SAColumn(Text, nullable=True)
+ status = SAColumn(String(50), nullable=True)
+ tags = SAColumn(ARRAY(String), nullable=True)
+
+ created_at = SAColumn(DateTime, default=datetime.utcnow, nullable=False)
+ updated_at = SAColumn(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False)
+
+ board = relationship("Board", back_populates="applications")
+ column = relationship("Column", back_populates="applications")
+ resumes = relationship("Resume", back_populates="application", cascade="all, delete-orphan")
+
+
+class Resume(Base):
+ __tablename__ = "resumes"
+
+ id = SAColumn(Integer, primary_key=True, index=True)
+ application_id = SAColumn(Integer, ForeignKey("applications.id", ondelete="SET NULL"), nullable=True, index=True)
+
+ job_description = SAColumn(Text, nullable=True)
+ input_profile = SAColumn(Text, nullable=True)
+ markdown = SAColumn(Text, nullable=True)
+ docx_path = SAColumn(String(1024), nullable=True)
+
+ model = SAColumn(String(100), nullable=True)
+ params = SAColumn(JSON, nullable=True)
+
+ created_at = SAColumn(DateTime, default=datetime.utcnow, nullable=False)
+
+ application = relationship("Application", back_populates="resumes")
diff --git a/kanban_api/app/routes_ai.py b/kanban_api/app/routes_ai.py
new file mode 100644
index 0000000..e75562e
--- /dev/null
+++ b/kanban_api/app/routes_ai.py
@@ -0,0 +1,200 @@
+"""
+AI endpoints powered by LangChain.
+Uses ChatOllama (Ollama) or ChatOpenAI (OpenAI-compatible) depending on settings.AI_PROVIDER,
+with the model name and base URL taken from settings.
+"""
+from typing import List, Optional
+
+from fastapi import APIRouter, Depends, HTTPException
+from pydantic import BaseModel
+
+from .config import settings
+from .db import SessionLocal
+from .models import Application, Board, Column
+
+# LangChain providers
+try:
+ from langchain_ollama import ChatOllama # Ollama provider
+except Exception: # pragma: no cover
+ ChatOllama = None # type: ignore
+
+try:
+ from langchain_openai import ChatOpenAI # OpenAI-compatible provider
+except Exception: # pragma: no cover
+ ChatOpenAI = None # type: ignore
+
+router = APIRouter(prefix="/ai", tags=["ai"])
+
+
+def get_llm():
+ provider = settings.AI_PROVIDER.lower()
+ if provider == "ollama":
+ if ChatOllama is None:
+ raise HTTPException(status_code=500, detail="ChatOllama is not available. Check dependencies.")
+ return ChatOllama(model=settings.MODEL_NAME, base_url=settings.OLLAMA_BASE_URL, temperature=0.2)
+ elif provider == "openai":
+ if ChatOpenAI is None:
+ raise HTTPException(status_code=500, detail="ChatOpenAI is not available. Check dependencies.")
+ if not settings.OPENAI_BASE_URL or not settings.OPENAI_API_KEY:
+ raise HTTPException(status_code=500, detail="OPENAI_BASE_URL and OPENAI_API_KEY must be set for OpenAI provider")
+ # ChatOpenAI accepts base_url for compatible providers (e.g., local OpenAI-compatible gateways)
+ return ChatOpenAI(model=settings.MODEL_NAME, base_url=settings.OPENAI_BASE_URL, api_key=settings.OPENAI_API_KEY, temperature=0.2)
+ else:
+ raise HTTPException(status_code=400, detail=f"Unsupported AI_PROVIDER: {settings.AI_PROVIDER}")
+
+
+class SummarizeBoardRequest(BaseModel):
+ board_id: int
+ focus: Optional[str] = None # optional extra instruction
+
+
+class SummarizeBoardResponse(BaseModel):
+ summary: str
+
+
+@router.post("/summarize-board", response_model=SummarizeBoardResponse)
+def summarize_board(body: SummarizeBoardRequest):
+ db = SessionLocal()
+ try:
+ board = db.query(Board).filter(Board.id == body.board_id).first()
+ if not board:
+ raise HTTPException(status_code=404, detail="Board not found")
+ columns = db.query(Column).filter(Column.board_id == board.id).order_by(Column.position).all()
+ apps = (
+ db.query(Application)
+ .filter(Application.board_id == board.id)
+ .order_by(Application.created_at.desc())
+ .all()
+ )
+ # Build a compact context for the LLM
+ col_lines = [f"- {c.position}. {c.name}" for c in columns]
+ app_lines = [
+ f"β’ [{a.id}] {a.title} @ {a.company or '-'} | status={a.status or '-'} | column_id={a.column_id}"
+ for a in apps
+ ]
+ focus_text = f"\nFocus: {body.focus}" if body.focus else ""
+ prompt = (
+ "You are a helpful assistant for a job application Kanban board.\n"
+ "Summarize the current pipeline, risks, and immediate priorities succinctly.\n"
+ "Board: {board}\nColumns:\n{columns}\nApplications:\n{applications}\n" + focus_text + "\n"
+ "Return a short, actionable summary."
+ ).format(
+ board=board.name,
+ columns="\n".join(col_lines) or "(none)",
+ applications="\n".join(app_lines) or "(none)",
+ )
+ llm = get_llm()
+ msg = llm.invoke(prompt)
+ text = getattr(msg, "content", str(msg))
+ return SummarizeBoardResponse(summary=text)
+ finally:
+ db.close()
+
+
+# -------- Generate Resume (Markdown) --------
+class GenerateResumeRequest(BaseModel):
+ application_id: int
+ job_description: str
+ profile: Optional[str] = None
+
+
+class GenerateResumeResponse(BaseModel):
+ markdown: str
+
+
+@router.post("/generate-resume", response_model=GenerateResumeResponse)
+def generate_resume(body: GenerateResumeRequest):
+ db = SessionLocal()
+ try:
+ app = db.query(Application).filter(Application.id == body.application_id).first()
+ if not app:
+ raise HTTPException(status_code=404, detail="Application not found")
+ profile_text = f"\nCandidate profile:\n{body.profile}" if body.profile else ""
+ prompt = (
+ "You are a resume writer. Create a concise, tailored resume in GitHub-flavored Markdown "
+ "for the following job application and job description. Focus on impact, keywords, and quantifiable results.\n"
+ "Return only valid Markdown.\n\n"
+ "Application:\nTitle: {title}\nCompany: {company}\nDescription: {desc}\n"
+ "Job Description:\n{jd}\n"
+ + profile_text +
+ "\nFormat: start with a short summary, then skills, experience (bullets), education."
+ ).format(
+ title=app.title,
+ company=app.company or "-",
+ desc=app.description or "-",
+ jd=body.job_description,
+ )
+ llm = get_llm()
+ msg = llm.invoke(prompt)
+ text = getattr(msg, "content", str(msg))
+ return GenerateResumeResponse(markdown=text)
+ finally:
+ db.close()
+
+
+class TagApplicationRequest(BaseModel):
+ application_id: int
+ max_tags: int = 5
+
+
+class TagApplicationResponse(BaseModel):
+ tags: List[str]
+
+
+@router.post("/tag-application", response_model=TagApplicationResponse)
+def tag_application(body: TagApplicationRequest):
+ db = SessionLocal()
+ try:
+ app = db.query(Application).filter(Application.id == body.application_id).first()
+ if not app:
+ raise HTTPException(status_code=404, detail="Application not found")
+ desc = app.description or ""
+ prompt = (
+ "Extract up to {k} concise tags for the following job application.\n"
+ "Title: {title}\nCompany: {company}\nDescription: {desc}\n"
+ "Return as a comma-separated list."
+ ).format(k=body.max_tags, title=app.title, company=app.company or "-", desc=desc)
+ llm = get_llm()
+ msg = llm.invoke(prompt)
+ text = getattr(msg, "content", str(msg))
+ # simple parse of comma-separated tags
+ tags = [t.strip() for t in text.replace("\n", " ").split(",") if t.strip()]
+ return TagApplicationResponse(tags=tags[: body.max_tags])
+ finally:
+ db.close()
+
+
+class NextStepsRequest(BaseModel):
+ application_id: int
+
+
+class NextStepsResponse(BaseModel):
+ steps: List[str]
+
+
+@router.post("/next-steps", response_model=NextStepsResponse)
+def next_steps(body: NextStepsRequest):
+ db = SessionLocal()
+ try:
+ app = db.query(Application).filter(Application.id == body.application_id).first()
+ if not app:
+ raise HTTPException(status_code=404, detail="Application not found")
+ prompt = (
+ "Given this application, propose 3 concrete next steps.\n"
+ "Title: {title}\nCompany: {company}\nStatus: {status}\nDescription: {desc}\n"
+ "Return as a numbered list."
+ ).format(
+ title=app.title,
+ company=app.company or "-",
+ status=app.status or "-",
+ desc=app.description or "-",
+ )
+ llm = get_llm()
+ msg = llm.invoke(prompt)
+ text = getattr(msg, "content", str(msg))
+ # parse numbered list fallback
+ lines = [l.strip(" -•\t") for l in text.splitlines() if l.strip()]
+ lines = [l.split(". ", 1)[-1] if l[:1].isdigit() else l for l in lines]
+ steps = [l for l in lines if l]
+ return NextStepsResponse(steps=steps[:3])
+ finally:
+ db.close()
diff --git a/kanban_api/app/routes_kanban.py b/kanban_api/app/routes_kanban.py
new file mode 100644
index 0000000..1ccd08e
--- /dev/null
+++ b/kanban_api/app/routes_kanban.py
@@ -0,0 +1,127 @@
+"""
+Kanban CRUD routes for Boards, Columns, and Applications (cards).
+Minimal implementation for development; errors are simplified.
+"""
+from typing import List, Optional
+from fastapi import APIRouter, Depends, HTTPException
+from sqlalchemy.orm import Session
+
+from .db import get_db
+from . import models
+from .schemas import (
+ BoardCreate,
+ BoardRead,
+ ColumnCreate,
+ ColumnRead,
+ ApplicationCreate,
+ ApplicationRead,
+ ApplicationMove,
+)
+
+router = APIRouter(prefix="/kanban", tags=["kanban"])
+
+
+# Boards
+@router.post("/boards", response_model=BoardRead)
+def create_board(payload: BoardCreate, db: Session = Depends(get_db)):
+ board = models.Board(name=payload.name)
+ db.add(board)
+ db.commit()
+ db.refresh(board)
+ return board
+
+
+@router.get("/boards", response_model=List[BoardRead])
+def list_boards(db: Session = Depends(get_db)):
+ return db.query(models.Board).order_by(models.Board.id).all()
+
+
+# Columns
+@router.post("/boards/{board_id}/columns", response_model=ColumnRead)
+def create_column(board_id: int, payload: ColumnCreate, db: Session = Depends(get_db)):
+ board = db.get(models.Board, board_id)
+ if not board:
+ raise HTTPException(status_code=404, detail="Board not found")
+ col = models.Column(board_id=board_id, name=payload.name, position=payload.position)
+ db.add(col)
+ db.commit()
+ db.refresh(col)
+ return col
+
+
+@router.get("/boards/{board_id}/columns", response_model=List[ColumnRead])
+def list_columns(board_id: int, db: Session = Depends(get_db)):
+ return (
+ db.query(models.Column)
+ .filter(models.Column.board_id == board_id)
+ .order_by(models.Column.position)
+ .all()
+ )
+
+
+# Applications (Kanban cards)
+@router.post("/boards/{board_id}/applications", response_model=ApplicationRead)
+def create_application(board_id: int, payload: ApplicationCreate, db: Session = Depends(get_db)):
+ board = db.get(models.Board, board_id)
+ if not board:
+ raise HTTPException(status_code=404, detail="Board not found")
+ app = models.Application(
+ board_id=board_id,
+ column_id=payload.column_id,
+ title=payload.title,
+ company=payload.company,
+ description=payload.description,
+ status=payload.status,
+ tags=payload.tags,
+ )
+ db.add(app)
+ db.commit()
+ db.refresh(app)
+ return app
+
+
+@router.get("/boards/{board_id}/applications", response_model=List[ApplicationRead])
+def list_applications(board_id: int, db: Session = Depends(get_db)):
+ return (
+ db.query(models.Application)
+ .filter(models.Application.board_id == board_id)
+ .order_by(models.Application.created_at.desc())
+ .all()
+ )
+
+
+@router.put("/applications/{application_id}", response_model=ApplicationRead)
+def update_application(application_id: int, payload: ApplicationCreate, db: Session = Depends(get_db)):
+ app = db.get(models.Application, application_id)
+ if not app:
+ raise HTTPException(status_code=404, detail="Application not found")
+ app.title = payload.title
+ app.company = payload.company
+ app.description = payload.description
+ app.status = payload.status
+ app.tags = payload.tags
+ app.column_id = payload.column_id
+ db.commit()
+ db.refresh(app)
+ return app
+
+
+@router.post("/applications/{application_id}/move", response_model=ApplicationRead)
+def move_application(application_id: int, payload: ApplicationMove, db: Session = Depends(get_db)):
+ app = db.get(models.Application, application_id)
+ if not app:
+ raise HTTPException(status_code=404, detail="Application not found")
+ app.column_id = payload.column_id
+ db.commit()
+ db.refresh(app)
+ return app
+
+
+@router.delete("/applications/{application_id}")
+def delete_application(application_id: int, db: Session = Depends(get_db)):
+ app = db.get(models.Application, application_id)
+ if not app:
+ raise HTTPException(status_code=404, detail="Application not found")
+ db.delete(app)
+ db.commit()
+ return {"ok": True}
diff --git a/kanban_api/app/routes_resumes.py b/kanban_api/app/routes_resumes.py
new file mode 100644
index 0000000..6608648
--- /dev/null
+++ b/kanban_api/app/routes_resumes.py
@@ -0,0 +1,110 @@
+"""
+Resume routes: create and list resumes, and associate them to applications.
+All comments/docstrings in English as requested.
+"""
+from typing import List, Optional
+from fastapi import APIRouter, Depends, HTTPException, Query
+from sqlalchemy.orm import Session
+from starlette.background import BackgroundTask
+from starlette.responses import FileResponse
+import tempfile
+import subprocess
+import shutil
+import os
+
+from .db import get_db
+from . import models
+from .schemas import ResumeCreate, ResumeRead
+
+router = APIRouter(prefix="/resumes", tags=["resumes"])
+
+
+@router.post("", response_model=ResumeRead)
+def create_resume(payload: ResumeCreate, db: Session = Depends(get_db)):
+ """Create a Resume record and optionally associate it with an application.
+
+ This endpoint is intended to be called by the frontend after the Node backend
+ finishes generating a resume (markdown/docx).
+ """
+ if payload.application_id is not None:
+ app = db.get(models.Application, payload.application_id)
+ if not app:
+ raise HTTPException(status_code=404, detail="Application not found")
+
+ resume = models.Resume(
+ application_id=payload.application_id,
+ job_description=payload.job_description,
+ input_profile=payload.input_profile,
+ markdown=payload.markdown,
+ docx_path=payload.docx_path,
+ model=payload.model,
+ params=payload.params,
+ )
+ db.add(resume)
+ db.commit()
+ db.refresh(resume)
+ return resume
+
+
+@router.get("/applications/{application_id}", response_model=List[ResumeRead])
+def list_resumes_for_application(application_id: int, db: Session = Depends(get_db)):
+ """List all resumes associated with a given application (kanban card)."""
+ app = db.get(models.Application, application_id)
+ if not app:
+ raise HTTPException(status_code=404, detail="Application not found")
+
+ return (
+ db.query(models.Resume)
+ .filter(models.Resume.application_id == application_id)
+ .order_by(models.Resume.created_at.desc())
+ .all()
+ )
+
+
+@router.get("/{resume_id}/export")
+def export_resume(resume_id: int, format: Optional[str] = Query("pdf", pattern="^(pdf|docx)$"), db: Session = Depends(get_db)):
+ """Export a resume to PDF or DOCX via Pandoc and return the file.
+
+    Requires Pandoc on the container PATH; PDF output additionally needs a PDF
+    engine (e.g. a LaTeX distribution) that Pandoc can invoke.
+ """
+ resume = db.get(models.Resume, resume_id)
+ if not resume:
+ raise HTTPException(status_code=404, detail="Resume not found")
+ if not resume.markdown:
+ raise HTTPException(status_code=400, detail="Resume has no markdown content to export")
+
+    # Write the markdown to a temporary directory. The directory is removed by a
+    # background task after the response has been sent; a `with TemporaryDirectory()`
+    # block would delete the file before FileResponse streams it.
+    tmpdir = tempfile.mkdtemp(prefix="resume-export-")
+    md_path = os.path.join(tmpdir, "resume.md")
+    out_ext = ".pdf" if format == "pdf" else ".docx"
+    out_path = os.path.join(tmpdir, f"resume-{resume_id}{out_ext}")
+    with open(md_path, "w", encoding="utf-8") as f:
+        f.write(resume.markdown)
+
+    try:
+        # Basic pandoc call; extend with templates or metadata as needed
+        subprocess.run(["pandoc", md_path, "-o", out_path], check=True)
+    except FileNotFoundError:
+        shutil.rmtree(tmpdir, ignore_errors=True)
+        raise HTTPException(status_code=500, detail="Pandoc is not installed in the backend container")
+    except subprocess.CalledProcessError as e:
+        shutil.rmtree(tmpdir, ignore_errors=True)
+        raise HTTPException(status_code=500, detail=f"Pandoc failed: {e}")
+
+    media_type = (
+        "application/pdf"
+        if format == "pdf"
+        else "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
+    )
+    filename = f"resume-{resume_id}{out_ext}"
+    # Clean up the temp directory only after the file has been streamed to the client.
+    return FileResponse(
+        out_path,
+        media_type=media_type,
+        filename=filename,
+        background=BackgroundTask(shutil.rmtree, tmpdir, ignore_errors=True),
+    )
+
+
+@router.get("/applications/{application_id}/export")
+def export_latest_for_application(application_id: int, format: Optional[str] = Query("pdf", pattern="^(pdf|docx)$"), db: Session = Depends(get_db)):
+ """Export the latest resume for an application."""
+ app = db.get(models.Application, application_id)
+ if not app:
+ raise HTTPException(status_code=404, detail="Application not found")
+ resume = (
+ db.query(models.Resume)
+ .filter(models.Resume.application_id == application_id)
+ .order_by(models.Resume.created_at.desc())
+ .first()
+ )
+ if not resume:
+ raise HTTPException(status_code=404, detail="No resumes found for this application")
+ return export_resume(resume.id, format, db)
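The save-then-export flow can also be exercised from a short script. This sketch assumes the API at http://localhost:8000, an existing card with id 1, and Pandoc (plus a PDF engine) inside the `kanban_api` container:

```python
"""Sketch: persist a markdown resume for card 1 and download its PDF export."""
import json
import urllib.request

BASE = "http://localhost:8000"

# Persist a resume version linked to application 1.
payload = {"application_id": 1, "markdown": "# Jane Doe\n\nBackend engineer with 5 years of Python."}
req = urllib.request.Request(
    f"{BASE}/resumes",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    resume = json.loads(resp.read())

# Download the exported file produced by the Pandoc endpoint.
with urllib.request.urlopen(f"{BASE}/resumes/{resume['id']}/export?format=pdf") as resp:
    with open("resume.pdf", "wb") as out:
        out.write(resp.read())
```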
diff --git a/kanban_api/app/schemas.py b/kanban_api/app/schemas.py
new file mode 100644
index 0000000..f03e2e0
--- /dev/null
+++ b/kanban_api/app/schemas.py
@@ -0,0 +1,94 @@
+"""
+Pydantic schemas for request/response validation.
+All comments and docstrings are in English by user request.
+"""
+from datetime import datetime
+from typing import List, Optional
+from pydantic import BaseModel, Field
+
+
+# Board
+class BoardCreate(BaseModel):
+ name: str = Field(..., max_length=255)
+
+
+class BoardRead(BaseModel):
+ id: int
+ name: str
+ created_at: datetime
+
+ class Config:
+ from_attributes = True
+
+
+# Column
+class ColumnCreate(BaseModel):
+ name: str
+ position: int = 0
+
+
+class ColumnRead(BaseModel):
+ id: int
+ board_id: int
+ name: str
+ position: int
+
+ class Config:
+ from_attributes = True
+
+
+# Application (Kanban card)
+class ApplicationCreate(BaseModel):
+ title: str
+ company: Optional[str] = None
+ description: Optional[str] = None
+ status: Optional[str] = None
+ tags: Optional[List[str]] = None
+ column_id: Optional[int] = None
+
+
+class ApplicationRead(BaseModel):
+ id: int
+ board_id: int
+ column_id: Optional[int]
+ title: str
+ company: Optional[str]
+ description: Optional[str]
+ status: Optional[str]
+ tags: Optional[List[str]]
+ created_at: datetime
+ updated_at: datetime
+
+ class Config:
+ from_attributes = True
+
+
+class ApplicationMove(BaseModel):
+ column_id: Optional[int] = None
+ # position handling could be added later
+
+
+# Resume
+class ResumeCreate(BaseModel):
+ application_id: Optional[int] = None
+ job_description: Optional[str] = None
+ input_profile: Optional[str] = None
+ markdown: Optional[str] = None
+ docx_path: Optional[str] = None
+ model: Optional[str] = None
+ params: Optional[dict] = None
+
+
+class ResumeRead(BaseModel):
+ id: int
+ application_id: Optional[int]
+ job_description: Optional[str]
+ input_profile: Optional[str]
+ markdown: Optional[str]
+ docx_path: Optional[str]
+ model: Optional[str]
+ params: Optional[dict]
+ created_at: datetime
+
+ class Config:
+ from_attributes = True
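The `from_attributes = True` config is what lets FastAPI build these read models directly from SQLAlchemy rows when a route declares them as `response_model`. A self-contained sketch of that round trip (the schema is copied inline and the ORM row is faked for illustration):

```python
from datetime import datetime, timezone
from pydantic import BaseModel


class BoardRead(BaseModel):
    id: int
    name: str
    created_at: datetime

    class Config:
        from_attributes = True


class FakeBoardRow:
    """Stand-in for a SQLAlchemy Board instance (attribute access only)."""
    id = 1
    name = "Default Board"
    created_at = datetime.now(timezone.utc)


# model_validate() reads plain attributes because from_attributes=True;
# this is the Pydantic v2 replacement for the old .from_orm().
board = BoardRead.model_validate(FakeBoardRow())
print(board.model_dump_json())
```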
diff --git a/kanban_api/requirements.txt b/kanban_api/requirements.txt
new file mode 100644
index 0000000..4e1e664
--- /dev/null
+++ b/kanban_api/requirements.txt
@@ -0,0 +1,10 @@
+fastapi==0.112.2
+uvicorn[standard]==0.30.6
+SQLAlchemy==2.0.34
+psycopg2-binary==2.9.9
+pydantic==2.9.2
+python-dotenv==1.0.1
+alembic==1.13.2
+langchain==0.3.10
+langchain-ollama==0.2.0
+langchain-openai==0.2.4
diff --git a/scripts/connectivity_test.sh b/scripts/connectivity_test.sh
new file mode 100755
index 0000000..161229f
--- /dev/null
+++ b/scripts/connectivity_test.sh
@@ -0,0 +1,50 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+echo "== Connectivity Test: $(date) =="
+
+print_section() {
+ echo
+ echo "# $1"
+}
+
+jq_safe() {
+ if command -v jq >/dev/null 2>&1; then
+ jq .
+ else
+ cat
+ fi
+}
+
+print_section "Docker Services"
+docker compose ps || true
+
+print_section "Kanban API Health"
+curl -sS http://localhost:8000/health | jq_safe || echo "Health check failed"
+
+print_section "Boards"
+curl -sS http://localhost:8000/kanban/boards | jq_safe || true
+
+print_section "Columns (board 1)"
+curl -sS http://localhost:8000/kanban/boards/1/columns | jq_safe || true
+
+print_section "Applications (board 1)"
+curl -sS http://localhost:8000/kanban/boards/1/applications | jq_safe || true
+
+print_section "Ollama Tags (host)"
+curl -sS http://localhost:11434/api/tags | jq_safe || echo "Ollama not reachable"
+
+print_section "AI: Summarize Board"
+curl -sS -X POST http://localhost:8000/ai/summarize-board \
+ -H 'Content-Type: application/json' -d '{"board_id":1}' | jq_safe || true
+
+print_section "AI: Tag Application (id=1)"
+curl -sS -X POST http://localhost:8000/ai/tag-application \
+ -H 'Content-Type: application/json' -d '{"application_id":1, "max_tags":5}' | jq_safe || true
+
+print_section "AI: Next Steps (id=1)"
+curl -sS -X POST http://localhost:8000/ai/next-steps \
+ -H 'Content-Type: application/json' -d '{"application_id":1}' | jq_safe || true
+
+echo
+echo "== Connectivity Test Completed =="