diff --git a/CHANGELOG.md b/CHANGELOG.md new file mode 100644 index 0000000..5869c1a --- /dev/null +++ b/CHANGELOG.md @@ -0,0 +1,70 @@ +# Changelog + +## 2025-08-23 + +### Backend and Infra +- Added new FastAPI service under `kanban_api/` with SQLAlchemy and Pydantic. + - `kanban_api/app/main.py`: CORS, startup DB retry, dev seed (Default Board with To Do/Doing/Done), and router includes. + - `kanban_api/app/models.py`: ORM models `Board`, `Column`, `Application`, `Resume`. Resolved naming collision by aliasing SQLAlchemy `Column` to `SAColumn`. + - `kanban_api/app/schemas.py`: Pydantic models for serialization/validation. + - `kanban_api/app/routes_kanban.py`: CRUD endpoints for boards, columns, and applications. + - `kanban_api/app/routes_resumes.py`: Create/list resumes and link to `application_id`. + - `kanban_api/app/routes_ai.py`: AI endpoints using LangChain `ChatOllama` (model `gemma3`): + - `POST /ai/summarize-board` + - `POST /ai/tag-application` + - `POST /ai/next-steps` + - `kanban_api/app/config.py`, `kanban_api/app/db.py`: settings and DB session. + - `kanban_api/Dockerfile`: production-ready Uvicorn container. +- Docker Compose updates in `docker-compose.yaml`: + - Services: `postgres`, `mlflow`, `kanban_api`, `backend` (Node), `frontend` (CRA). + - Postgres healthcheck and `POSTGRES_DB=app_db`. `kanban_api` waits for Postgres health. + - MLflow switched to local image built from `docker/mlflow/Dockerfile` with `psycopg2-binary`, exposed on host `5002`. +- Postgres init script at `docker/postgres/init.sql`: creates `appuser`, databases `app_db` and `mlflow`. + +### Kanban UI and Resume Integration +- Frontend `KanbanPage` moved header outside grid and added `.kanban__page-header` styles for parity with original board. +- Modal UX improved: wider modal, internal scrolling, better resume editor grid. +- Markdown preview uses `react-markdown` with improved typography and spacing. 
+- Fixed code-fence issue by stripping leading/trailing ``` from AI output to avoid code-block previews. +- Added Save-to-Card workflow: + - `POST /resumes` persists markdown linked to `application_id`. + - Save action shows success notice and counts saved versions per card. +- Added Export via Pandoc: + - `GET /resumes/{resume_id}/export?format=pdf|docx` + - `GET /resumes/applications/{application_id}/export?format=pdf|docx` (latest resume) + - Frontend buttons "Export PDF/DOCX" in Resume tab, downloading blobs. + +### Developer Notes +- Frontend env: `REACT_APP_API_BASE` should point to `http://localhost:8000` when running via docker-compose. +- If styles seem off, hard refresh (Cmd+Shift+R) to invalidate cached CSS. + +### How to run +1. Stop previous stack (if any): + ```bash + docker compose down + ``` +2. Start services: + ```bash + docker compose up -d --build + ``` +3. Verify: + - Kanban API health: http://localhost:8000/health + - Boards list: http://localhost:8000/kanban/boards + - Columns of board 1: http://localhost:8000/kanban/boards/1/columns + - MLflow UI: http://localhost:5002 +4. Example AI requests: + ```bash + curl -s -X POST http://localhost:8000/ai/summarize-board \ + -H 'Content-Type: application/json' -d '{"board_id":1}' + + curl -s -X POST http://localhost:8000/ai/tag-application \ + -H 'Content-Type: application/json' -d '{"application_id":1, "max_tags":5}' + + curl -s -X POST http://localhost:8000/ai/next-steps \ + -H 'Content-Type: application/json' -d '{"application_id":1}' + ``` + +### Notes +- Default model: `gemma3` via `OLLAMA_BASE_URL`. +- Dev seed creates a "Default Board" with three columns on first run. +- Next steps: unify frontend (single CRA) with routes `/kanban` and `/resume`, port original Kanban styles, add DnD and CRUD wiring, wire resume generation to persist in Postgres.
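The code-fence fix noted in this changelog can be sketched as follows (a minimal Python version of the stripping logic, for illustration only; the shipped frontend implements the same idea in JavaScript inside `KanbanPage`):

```python
import re

def strip_code_fences(md: str) -> str:
    """Drop a single leading ```lang fence and a trailing ``` fence, if present."""
    if not md:
        return ""
    md = re.sub(r"^```[a-zA-Z]*\n?", "", md)  # leading fence with optional language tag
    md = re.sub(r"\n?```\s*$", "", md)        # trailing fence
    return md.strip()
```

Plain markdown without fences passes through unchanged, so the preview never renders an AI response as a single code block.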
diff --git a/README.md b/README.md index dba48cf..0769a88 100644 --- a/README.md +++ b/README.md @@ -1,42 +1,181 @@ # 🧠 Resume Builder App -This application generates ATS-friendly resumes based on job descriptions and unstructured input data. It is composed of a `frontend` and `backend` service, orchestrated using Docker Compose. +This project includes: + +- A classic Resume Generator (`backend` + `frontend`). +- A new Kanban API (`kanban_api/`) with AI endpoints using LangChain + Ollama. +- Postgres database and MLflow service. + +All services are orchestrated using Docker Compose. ## 📦 Requirements - [Docker](https://www.docker.com/) - [Docker Compose](https://docs.docker.com/compose/) +- Optional (for local AI): [Ollama](https://ollama.com/) installed on the host + +## 🚀 Run the Stack (Docker Compose) + +Start all services (Postgres, MLflow, Kanban API, backend, frontend): + +```bash +docker compose up -d --build +``` + +- Frontend (CRA): http://localhost:8080 +- Node Backend: http://localhost:5001 +- Kanban API (FastAPI): http://localhost:8000 +- MLflow UI: http://localhost:5002 + +Check health of the Kanban API: -## 🚀 Running the App +```bash +curl -s http://localhost:8000/health +``` -To start the application, run: +Basic Kanban endpoints: ```bash -docker-compose up --build +curl -s http://localhost:8000/kanban/boards +curl -s http://localhost:8000/kanban/boards/1/columns +curl -s http://localhost:8000/kanban/boards/1/applications ``` - • The frontend will be available at: http://localhost:8080 - • The backend API will be available at: http://localhost:5001 ## 🌐 Backend Environment Variables -CORS_ORIGIN Allowed origin for frontend requests -LLM_URL URL to the LLM API (e.g., Ollama instance) -MODEL_NAME Model to use for inference -PORT Port the backend service listens on +- `backend/` (Node): + - `PORT`: port to listen on (default 5001) + - `CORS_ORIGIN`: allowed origin for frontend requests + - `LLM_URL`: URL to the LLM API (e.g., Ollama
instance) + - `MODEL_NAME`: model name + +- `kanban_api/` (FastAPI): + - `DATABASE_URL`: `postgresql+psycopg2://appuser:apppass@postgres:5432/app_db` + - `CORS_ORIGIN`: `http://localhost:8080` + - `AI_PROVIDER`: `ollama` or `openai` + - `MODEL_NAME`: `gemma3:1b` (default) + - `OLLAMA_BASE_URL`: `http://host.docker.internal:11434` + - `OPENAI_BASE_URL`: OpenAI-compatible base URL (e.g. `https://api.openai.com/v1` or a gateway) + - `OPENAI_API_KEY`: API key when using the OpenAI-compatible provider ## 📂 Output Directory All generated resumes and related files are saved in the local ./output directory, which is mounted into the backend container. -## 📌 Kanban-Board (New Version — Under Development) +## 🧾 Kanban: Save Resume to Card & Export + +You can generate, edit, save, and export resumes directly from the Kanban modal (Details → Resume tab). -A new version of the application is being developed inside the kanban-board folder. +### From the UI -This new app is a standalone service that includes: +1. Open a card → Details → Resume tab. +2. Paste the Job Description and optionally your Profile, then click "AI: Generate Resume". +3. Edit the Markdown as needed and click "Save to Card". + - A notice will show the total number of saved versions linked to this card. +4. Click "Export PDF" or "Export DOCX" to download via Pandoc. - • Resume generation (currently does not export yet) - • A Kanban board to track your job applications - • AI-powered actionables to help you move each application forward +### API Endpoints (FastAPI) + +- Create/save resume linked to a card: + +```bash +curl -s -X POST http://localhost:8000/resumes \ + -H 'Content-Type: application/json' \ + -d '{ + "application_id": 1, + "job_description": "...", + "input_profile": "...", + "markdown": "# My Resume..."
+ }' +``` + +- List resumes for a card: + +```bash +curl -s http://localhost:8000/resumes/applications/1 +``` + +- Export latest resume for a card (PDF or DOCX): + +```bash +curl -L -o resume.pdf "http://localhost:8000/resumes/applications/1/export?format=pdf" +curl -L -o resume.docx "http://localhost:8000/resumes/applications/1/export?format=docx" +``` + +Pandoc is installed in the `kanban_api` container (see `kanban_api/Dockerfile`). + +## 🤖 AI (Ollama) Setup (Local Host) + +The Kanban AI endpoints use Ollama via `OLLAMA_BASE_URL`. To run locally on the host: + +1) Start the Ollama server (host): + +```bash +ollama serve +``` + +2) In a separate terminal, pull the model tag used by this repo (smallest): + +```bash +ollama pull gemma3:1b +``` + +3) Verify Ollama is up and reachable: + +```bash +curl -s http://localhost:11434/api/tags +``` + +4) Test AI endpoints (kanban_api): + +```bash +curl -s -X POST http://localhost:8000/ai/summarize-board \ + -H 'Content-Type: application/json' -d '{"board_id":1}' + +curl -s -X POST http://localhost:8000/ai/tag-application \ + -H 'Content-Type: application/json' -d '{"application_id":1, "max_tags":5}' + +curl -s -X POST http://localhost:8000/ai/next-steps \ + -H 'Content-Type: application/json' -d '{"application_id":1}' +``` -To run or contribute to this new version, please refer to the documentation in kanban-board/README.md. +Note: `kanban_api` includes `extra_hosts: host.docker.internal:host-gateway` so the container can reach the host Ollama. + +## 🔌 OpenAI-compatible Provider Configuration + +Both the Node `backend/` and the Python `kanban_api/` can be configured to use OpenAI-compatible APIs. + +- Backend (Node): + - Select the provider via `LLM` env: `ollamaService` (default) or `openaiService`.
+ - For Ollama (raw API): + - `LLM=ollamaService` + - `LLM_URL=http://host.docker.internal:11434/api/generate` + - `MODEL_NAME=gemma3:1b` + - For OpenAI-compatible (Chat Completions): + - `LLM=openaiService` + - `LLM_URL=https://api.openai.com/v1/chat/completions` (or a compatible gateway) + - `OPENAI_API_KEY=...` + - `MODEL_NAME=gpt-4o-mini` (or a compatible model on your provider) + +- Kanban API (FastAPI): + - Select the provider via `AI_PROVIDER=ollama|openai`. + - For Ollama: + - `AI_PROVIDER=ollama` + - `OLLAMA_BASE_URL=http://host.docker.internal:11434` + - `MODEL_NAME=gemma3:1b` + - For OpenAI-compatible: + - `AI_PROVIDER=openai` + - `OPENAI_BASE_URL=https://api.openai.com/v1` + - `OPENAI_API_KEY=...` + - `MODEL_NAME=gpt-4o-mini` (or a compatible model on your provider) + +## 📌 Kanban-Board (New Frontend — Under Development) + +We are unifying the frontend into a single CRA app with routes `/kanban` and `/resume`, porting the exact Kanban styles. + +Current status: + +- Resume generation works via the classic Node backend. +- Kanban API is live with CRUD and AI endpoints. +- Frontend unification in progress. diff --git a/backend/services/openaiService.js b/backend/services/openaiService.js index d7560c3..2b3da91 100644 --- a/backend/services/openaiService.js +++ b/backend/services/openaiService.js @@ -127,6 +127,11 @@ async function callLLM(prompt) { } }, } + }, { + headers: { + "Content-Type": "application/json", + ...(process.env.OPENAI_API_KEY ?
{ "Authorization": `Bearer ${process.env.OPENAI_API_KEY}` } : {}) + } }); logger.info("📡 OpenAI API Raw Response:", response.data); @@ -145,3 +150,4 @@ async function callLLM(prompt) { } module.exports = { callLLM }; + diff --git a/docker-compose.yaml b/docker-compose.yaml index 8f7c6d1..2df624e 100644 --- a/docker-compose.yaml +++ b/docker-compose.yaml @@ -17,4 +17,53 @@ services: - MODEL_NAME=gemma3:1b - PORT=5001 ports: - - 5001:5001 \ No newline at end of file + - 5001:5001 + + postgres: + image: postgres:16 + container_name: postgres + environment: + POSTGRES_USER: appuser + POSTGRES_PASSWORD: apppass + POSTGRES_DB: app_db + ports: + - 5432:5432 + volumes: + - pgdata:/var/lib/postgresql/data + - ./docker/postgres/init.sql:/docker-entrypoint-initdb.d/init.sql:ro + healthcheck: + test: ["CMD-SHELL", "pg_isready -U appuser -d app_db"] + interval: 3s + timeout: 3s + retries: 10 + + mlflow: + build: ./docker/mlflow + container_name: mlflow + ports: + - 5002:5000 + depends_on: + - postgres + volumes: + - mlflow_artifacts:/mlflow-artifacts + + kanban_api: + build: ./kanban_api + container_name: kanban_api + environment: + CORS_ORIGIN: http://localhost:8080 + DATABASE_URL: postgresql+psycopg2://appuser:apppass@postgres:5432/app_db + AI_PROVIDER: ollama + MODEL_NAME: gemma3:1b + OLLAMA_BASE_URL: http://host.docker.internal:11434 + ports: + - 8000:8000 + depends_on: + postgres: + condition: service_healthy + extra_hosts: + - "host.docker.internal:host-gateway" + +volumes: + pgdata: + mlflow_artifacts: \ No newline at end of file diff --git a/docker/mlflow/Dockerfile b/docker/mlflow/Dockerfile new file mode 100644 index 0000000..1a8b4a4 --- /dev/null +++ b/docker/mlflow/Dockerfile @@ -0,0 +1,16 @@ +FROM python:3.11-slim + +WORKDIR /app + +# System dependencies for psycopg2 +RUN apt-get update && apt-get install -y --no-install-recommends \ + build-essential \ + libpq-dev \ + && rm -rf /var/lib/apt/lists/* + +# Install MLflow and Postgres driver +RUN pip install
--no-cache-dir mlflow psycopg2-binary + +EXPOSE 5000 + +CMD ["mlflow", "server", "--host", "0.0.0.0", "--port", "5000", "--backend-store-uri", "postgresql+psycopg2://appuser:apppass@postgres:5432/mlflow", "--artifacts-destination", "/mlflow-artifacts"] diff --git a/docker/postgres/init.sql b/docker/postgres/init.sql new file mode 100644 index 0000000..9ae4455 --- /dev/null +++ b/docker/postgres/init.sql @@ -0,0 +1,17 @@ +-- Create application user and databases +DO +$$ +BEGIN + IF NOT EXISTS ( + SELECT FROM pg_catalog.pg_roles WHERE rolname = 'appuser') THEN + CREATE ROLE appuser LOGIN PASSWORD 'apppass'; + END IF; +END +$$; + +-- Create databases if missing (CREATE DATABASE cannot run inside a DO block or transaction, so use psql's \gexec) +SELECT 'CREATE DATABASE app_db OWNER appuser' +WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'app_db')\gexec + +SELECT 'CREATE DATABASE mlflow OWNER appuser' +WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = 'mlflow')\gexec diff --git a/docs/PR-feature-kanban-api-ollama-mlflow.md b/docs/PR-feature-kanban-api-ollama-mlflow.md new file mode 100644 index 0000000..90cf545 --- /dev/null +++ b/docs/PR-feature-kanban-api-ollama-mlflow.md @@ -0,0 +1,89 @@ +# PR: Kanban API + Ollama + MLflow integration, AI provider support, and docs + +## Summary +This PR introduces a FastAPI-based Kanban backend (`kanban_api/`) with Postgres, initial data seeding, and AI endpoints powered by LangChain. It adds provider-selectable AI (Ollama or OpenAI-compatible), aligns Docker Compose services, and documents end-to-end setup, including a connectivity test script. + +## Key changes +- FastAPI service `kanban_api/` with SQLAlchemy 2.0 models and Pydantic schemas. +- AI endpoints using LangChain with provider switch: + - `AI_PROVIDER=ollama|openai` + - Default model set to `gemma3:1b`. +- Docker Compose improvements: + - Postgres healthcheck + dependency gating. + - `extra_hosts` so API can reach host Ollama via `host.docker.internal`.
+- README updated with detailed run instructions and provider configuration. +- Connectivity test script `scripts/connectivity_test.sh` to validate services and AI endpoints. + +## Environment variables +- `kanban_api/` (FastAPI): + - `DATABASE_URL=postgresql+psycopg2://appuser:apppass@postgres:5432/app_db` + - `CORS_ORIGIN=http://localhost:8080` + - `AI_PROVIDER=ollama|openai` + - `MODEL_NAME=gemma3:1b` + - `OLLAMA_BASE_URL=http://host.docker.internal:11434` (for Ollama) + - `OPENAI_BASE_URL`, `OPENAI_API_KEY` (for OpenAI-compatible) +- `backend/` (Node): + - `LLM=ollamaService|openaiService` + - `LLM_URL` to either `http://host.docker.internal:11434/api/generate` or `https://api.openai.com/v1/chat/completions` + - `MODEL_NAME` (defaults to `gemma3:1b` for Ollama), `OPENAI_API_KEY` when using OpenAI-compatible + +## Data contracts (Pydantic schemas) +Defined in `kanban_api/app/schemas.py`. + +- BoardRead: + - `{ id: int, name: str, created_at: datetime }` +- ColumnRead: + - `{ id: int, board_id: int, name: str, position: int }` +- ApplicationRead: + - `{ id: int, board_id: int, column_id: int, title: str, company: str, description: str|None, status: str|None, tags: List[str], created_at: datetime, updated_at: datetime }` +- ResumeRead: + - `{ id: int, application_id: int, content: str, created_at: datetime }` + +AI contracts in `kanban_api/app/routes_ai.py`: +- SummarizeBoardRequest `{ board_id: int, focus?: str }` -> SummarizeBoardResponse `{ summary: str }` +- TagApplicationRequest `{ application_id: int, max_tags?: int }` -> TagApplicationResponse `{ tags: List[str] }` +- NextStepsRequest `{ application_id: int }` -> NextStepsResponse `{ steps: List[str] }` + +## API routes +- Kanban (`kanban_api/app/routes_kanban.py`): + - `GET /kanban/boards` -> `List[BoardRead]` + - `GET /kanban/boards/{board_id}/columns` -> `List[ColumnRead]` + - `GET /kanban/boards/{board_id}/applications` -> `List[ApplicationRead]` + - `POST /kanban/boards/{board_id}/applications` -> 
`ApplicationRead` +- AI (`kanban_api/app/routes_ai.py`): + - `POST /ai/summarize-board` -> `SummarizeBoardResponse` + - `POST /ai/tag-application` -> `TagApplicationResponse` + - `POST /ai/next-steps` -> `NextStepsResponse` +- Resumes (`kanban_api/app/routes_resumes.py`): + - `POST /resumes` -> `ResumeRead` + - `GET /resumes/applications/{application_id}` -> `List[ResumeRead]` + +## How to run +```bash +docker compose up -d --build +# Start Ollama on host and pull model +ollama serve & +ollama pull gemma3:1b +# Validate connectivity +./scripts/connectivity_test.sh +``` + +## Tests performed +- CRUD tested for boards/columns/applications. +- AI endpoints tested against Ollama `gemma3:1b` via host mapping. +- `scripts/connectivity_test.sh` consolidates health checks and AI calls. + +## Frontend unification (WIP) +- Plan to unify into a single CRA app with `react-router-dom` and shared navbar. +- Routes: `/kanban` (drag-and-drop board with CRUD) and `/resume` (resume builder), with consistent Kanban styling. +- Integrate AI actions (summarize, tag, next steps) on application cards. + +## Next steps +- Add Alembic migrations and a formal seed script. +- Implement frontend unification pages and shared styles; wire to FastAPI. +- Optional: Add Ollama as a Compose service and point `AI_PROVIDER=ollama` with `OLLAMA_BASE_URL=http://ollama:11434`. +- Configure MLflow tracking in `llm-experiments` to use Postgres backend. + +## Notes +- `kanban_api` includes retry and DB health gating to ensure reliable startup. +- OpenAI-compatible support added via `langchain-openai` and `ChatOpenAI`, gated by envs.
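The `AI_PROVIDER` switch described in this PR can be sketched as a small settings resolver. The names below (`ChatSettings`, `resolve_chat_settings`) are hypothetical illustrations, not the actual module API; the real service feeds these values into LangChain's `ChatOllama`/`ChatOpenAI` constructors:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChatSettings:
    provider: str
    model: str
    base_url: str
    api_key: Optional[str] = None

def resolve_chat_settings(env: dict) -> ChatSettings:
    """Map environment-style config to chat-model settings (AI_PROVIDER=ollama|openai)."""
    provider = env.get("AI_PROVIDER", "ollama")
    model = env.get("MODEL_NAME", "gemma3:1b")
    if provider == "ollama":
        return ChatSettings(provider, model,
                            env.get("OLLAMA_BASE_URL", "http://host.docker.internal:11434"))
    if provider == "openai":
        api_key = env.get("OPENAI_API_KEY")
        if not api_key:
            # Fail fast at startup rather than surfacing opaque 401s per request
            raise ValueError("OPENAI_API_KEY is required when AI_PROVIDER=openai")
        return ChatSettings(provider, model,
                            env.get("OPENAI_BASE_URL", "https://api.openai.com/v1"), api_key)
    raise ValueError(f"Unsupported AI_PROVIDER: {provider!r}")
```

The defaults mirror the Compose environment above (`ollama`, `gemma3:1b`, `host.docker.internal:11434`), so an empty environment yields a working local configuration.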
diff --git a/frontend/package.json b/frontend/package.json index b13366e..aa337fa 100644 --- a/frontend/package.json +++ b/frontend/package.json @@ -14,7 +14,8 @@ "react": "^18.2.0", "react-dom": "^18.2.0", "react-markdown": "^9.0.2", - "react-scripts": "^5.0.1" + "react-scripts": "^5.0.1", + "react-router-dom": "^6.26.2" }, "engines": { "node": ">=16.0.0" diff --git a/frontend/src/App.js b/frontend/src/App.js index 746b27d..aa1c869 100644 --- a/frontend/src/App.js +++ b/frontend/src/App.js @@ -1,13 +1,21 @@ import React from "react"; -import ResumeForm from "./components/ResumeForm"; +import { Routes, Route, Navigate } from "react-router-dom"; +import Navbar from "./components/Navbar"; +import ResumePage from "./pages/ResumePage"; +import KanbanPage from "./pages/KanbanPage"; import "./styles/ResumeForm.css"; function App() { return ( -
-

ATS Resume Generator

- -
+ <> + + + } /> + } /> + } /> + } /> + + ); } diff --git a/frontend/src/api/kanban.js b/frontend/src/api/kanban.js new file mode 100644 index 0000000..d3180ef --- /dev/null +++ b/frontend/src/api/kanban.js @@ -0,0 +1,92 @@ +import axios from "axios"; + +const API_BASE = process.env.REACT_APP_API_BASE || "http://localhost:8000"; + +export const getBoards = async () => { + const { data } = await axios.get(`${API_BASE}/kanban/boards`); + return data; +}; + +export const getColumns = async (boardId) => { + const { data } = await axios.get(`${API_BASE}/kanban/boards/${boardId}/columns`); + return data; +}; + +export const getApplications = async (boardId) => { + const { data } = await axios.get(`${API_BASE}/kanban/boards/${boardId}/applications`); + return data; +}; + +export const createApplication = async (boardId, payload) => { + const { data } = await axios.post(`${API_BASE}/kanban/boards/${boardId}/applications`, payload); + return data; +}; + +export const updateApplication = async (applicationId, payload) => { + const { data } = await axios.put(`${API_BASE}/kanban/applications/${applicationId}`, payload); + return data; +}; + +export const moveApplication = async (applicationId, columnId) => { + const { data } = await axios.post(`${API_BASE}/kanban/applications/${applicationId}/move`, { column_id: columnId }); + return data; +}; + +// AI endpoints +export const aiSummarizeBoard = async (boardId, focus) => { + const { data } = await axios.post(`${API_BASE}/ai/summarize-board`, { board_id: boardId, focus }); + return data; // { summary } +}; + +export const aiTagApplication = async (applicationId, max_tags = 5) => { + const { data } = await axios.post(`${API_BASE}/ai/tag-application`, { application_id: applicationId, max_tags }); + return data; // { tags } +}; + +export const aiNextSteps = async (applicationId) => { + const { data } = await axios.post(`${API_BASE}/ai/next-steps`, { application_id: applicationId }); + return data; // { steps } +}; + +// Resume 
endpoints +export const aiGenerateResume = async ({ applicationId, jobDescription, profile }) => { + const { data } = await axios.post(`${API_BASE}/ai/generate-resume`, { + application_id: applicationId, + job_description: jobDescription, + profile, + }); + return data; // { markdown } +}; + +export const createResume = async ({ applicationId, jobDescription, inputProfile, markdown, model, params }) => { + const { data } = await axios.post(`${API_BASE}/resumes`, { + application_id: applicationId, + job_description: jobDescription, + input_profile: inputProfile, + markdown, + model, + params, + }); + return data; // ResumeRead +}; + +export const listResumesForApplication = async (applicationId) => { + const { data } = await axios.get(`${API_BASE}/resumes/applications/${applicationId}`); + return data; // ResumeRead[] +}; + +export const exportLatestResume = async (applicationId, format = "pdf") => { + const response = await axios.get( + `${API_BASE}/resumes/applications/${applicationId}/export`, + { params: { format }, responseType: "blob" } + ); + return response; +}; + +export const exportResumeById = async (resumeId, format = "pdf") => { + const response = await axios.get( + `${API_BASE}/resumes/${resumeId}/export`, + { params: { format }, responseType: "blob" } + ); + return response; +}; diff --git a/frontend/src/components/Navbar.js b/frontend/src/components/Navbar.js new file mode 100644 index 0000000..d1b3ea9 --- /dev/null +++ b/frontend/src/components/Navbar.js @@ -0,0 +1,25 @@ +import React from "react"; +import { Link, NavLink } from "react-router-dom"; +import "../styles/Navbar.css"; + +export default function Navbar() { + return ( + + ); +} diff --git a/frontend/src/index.js b/frontend/src/index.js index c17ee95..61f27e6 100644 --- a/frontend/src/index.js +++ b/frontend/src/index.js @@ -1,10 +1,14 @@ import React from "react"; import ReactDOM from "react-dom"; import App from "./App"; +import { BrowserRouter } from "react-router-dom"; +import 
"./styles/theme.css"; ReactDOM.render( - + + + , document.getElementById("root") ); \ No newline at end of file diff --git a/frontend/src/pages/KanbanPage.js b/frontend/src/pages/KanbanPage.js new file mode 100644 index 0000000..162b8e9 --- /dev/null +++ b/frontend/src/pages/KanbanPage.js @@ -0,0 +1,453 @@ +import React, { useEffect, useMemo, useState } from "react"; +import ReactMarkdown from "react-markdown"; +import { getColumns, getApplications, moveApplication, updateApplication, aiSummarizeBoard, aiTagApplication, aiNextSteps, aiGenerateResume, createResume, createApplication, exportLatestResume, listResumesForApplication } from "../api/kanban"; +import "../styles/Kanban.css"; + +export default function KanbanPage() { + const BOARD_ID = 1; + const [columns, setColumns] = useState([]); + const [apps, setApps] = useState([]); + const [loading, setLoading] = useState(true); + const [error, setError] = useState(null); + const [editing, setEditing] = useState({}); // { [id]: { title, company, description } } + const [selected, setSelected] = useState(null); // card for modal + const [ai, setAi] = useState({ loading: false, summary: "", tags: [], steps: [], notice: "" }); + const [tab, setTab] = useState("details"); // details | resume + const [resume, setResume] = useState({ jd: "", profile: "", markdown: "" }); + const stripCodeFences = (md = "") => { + if (!md) return ""; + // remove leading ```lang and trailing ``` if present + const startStripped = md.replace(/^```[a-zA-Z]*\n?/, ""); + const endStripped = startStripped.replace(/\n?```\s*$/, ""); + return endStripped.trim(); + }; + + useEffect(() => { + let mounted = true; + async function load() { + try { + setLoading(true); + const [cols, applications] = await Promise.all([ + getColumns(BOARD_ID), + getApplications(BOARD_ID), + ]); + if (!mounted) return; + setColumns(cols); + setApps(applications); + } catch (e) { + if (!mounted) return; + setError("Failed to load Kanban data"); + } finally { + if (mounted) 
setLoading(false); + } + } + load(); + return () => { + mounted = false; + }; + }, []); + + const addApplication = async () => { + try { + const defaultColId = columns[0]?.id || null; + const payload = { + title: "New Application", + company: "", + description: "", + status: "Applied", + tags: [], + column_id: defaultColId, + }; + const created = await createApplication(BOARD_ID, payload); + setApps((prev) => [created, ...prev]); + } catch (e) { + setError("Failed to add application"); + } + }; + + const generateResume = async () => { + if (!selected || !resume.jd) return; + try { + setAi((s) => ({ ...s, loading: true })); + const { markdown } = await aiGenerateResume({ applicationId: selected.id, jobDescription: resume.jd, profile: resume.profile }); + setResume((r) => ({ ...r, markdown: stripCodeFences(markdown) })); + } catch (e) { + setError("AI resume generation failed"); + } finally { + setAi((s) => ({ ...s, loading: false })); + } + }; + + const saveResume = async () => { + if (!selected || !resume.markdown) return; + try { + setAi((s) => ({ ...s, loading: true })); + await createResume({ applicationId: selected.id, jobDescription: resume.jd, inputProfile: resume.profile, markdown: stripCodeFences(resume.markdown) }); + // optional: fetch count to reflect it was saved + try { + const list = await listResumesForApplication(selected.id); + setAi((s) => ({ ...s, notice: `Saved. 
Total resumes: ${list.length}` })); + } catch {} + } catch (e) { + setError("Failed to save resume"); + } finally { + setAi((s) => ({ ...s, loading: false })); + } + }; + + const downloadBlob = (blob, filename) => { + const url = window.URL.createObjectURL(blob); + const a = document.createElement('a'); + a.href = url; + a.download = filename; + document.body.appendChild(a); + a.click(); + a.remove(); + window.URL.revokeObjectURL(url); + }; + + const exportLatest = async (format) => { + if (!selected) return; + try { + setAi((s) => ({ ...s, loading: true })); + const response = await exportLatestResume(selected.id, format); + const filename = response.headers['content-disposition']?.split('filename=')[1]?.replace(/"/g, '') || `resume.${format}`; + downloadBlob(response.data, filename); + setAi((s) => ({ ...s, notice: `Exported ${format.toUpperCase()}` })); + } catch (e) { + setError(`Failed to export ${format.toUpperCase()}`); + } finally { + setAi((s) => ({ ...s, loading: false })); + } + }; + + const grouped = useMemo(() => { + const byCol = new Map(); + for (const c of columns) byCol.set(c.id, []); + for (const a of apps) { + if (!byCol.has(a.column_id)) byCol.set(a.column_id, []); + byCol.get(a.column_id).push(a); + } + return byCol; + }, [columns, apps]); + + const handleMove = async (cardId, newColumnId) => { + try { + // optimistic update + setApps((prev) => prev.map((a) => (a.id === cardId ? 
{ ...a, column_id: Number(newColumnId) } : a))); + await moveApplication(cardId, Number(newColumnId)); + } catch (e) { + setError("Failed to move card"); + } + }; + + const startEdit = (card) => { + setEditing((prev) => ({ + ...prev, + [card.id]: { + title: card.title || "", + company: card.company || "", + description: card.description || "", + status: card.status || "", + tags: card.tags || [], + column_id: card.column_id, + board_id: card.board_id, + }, + })); + }; + + const cancelEdit = (id) => { + setEditing((prev) => { + const n = { ...prev }; + delete n[id]; + return n; + }); + }; + + const saveEdit = async (id) => { + const payload = editing[id]; + try { + const updated = await updateApplication(id, payload); + setApps((prev) => prev.map((a) => (a.id === id ? updated : a))); + cancelEdit(id); + } catch (e) { + setError("Failed to update card"); + } + }; + + const openDetails = (card) => { + setSelected(card); + setAi({ loading: false, summary: "", tags: [], steps: [] }); + setTab("details"); + setResume({ jd: card.description || "", profile: "", markdown: "" }); + }; + const closeDetails = () => { + setSelected(null); + setAi({ loading: false, summary: "", tags: [], steps: [] }); + }; + + const runSummarize = async () => { + try { + setAi((s) => ({ ...s, loading: true, summary: "" })); + const { summary } = await aiSummarizeBoard(BOARD_ID); + setAi((s) => ({ ...s, loading: false, summary })); + } catch (e) { + setAi((s) => ({ ...s, loading: false })); + setError("AI summarize failed"); + } + }; + const runTag = async () => { + if (!selected) return; + try { + setAi((s) => ({ ...s, loading: true })); + const { tags } = await aiTagApplication(selected.id, 5); + setAi((s) => ({ ...s, loading: false, tags })); + // also update the card with tags + const updated = await updateApplication(selected.id, { ...selected, tags }); + setApps((prev) => prev.map((a) => (a.id === selected.id ? 
updated : a))); + setSelected(updated); + } catch (e) { + setAi((s) => ({ ...s, loading: false })); + setError("AI tag failed"); + } + }; + const runNextSteps = async () => { + if (!selected) return; + try { + setAi((s) => ({ ...s, loading: true })); + const { steps } = await aiNextSteps(selected.id); + setAi((s) => ({ ...s, loading: false, steps })); + } catch (e) { + setAi((s) => ({ ...s, loading: false })); + setError("AI next steps failed"); + } + }; + + const saveSelected = async () => { + if (!selected) return; + try { + const payload = { + title: selected.title || "", + company: selected.company || "", + description: selected.description || "", + status: selected.status || "", + tags: selected.tags || [], + column_id: selected.column_id, + }; + const updated = await updateApplication(selected.id, payload); + setApps((prev) => prev.map((a) => (a.id === selected.id ? updated : a))); + setSelected(updated); + } catch (e) { + setError("Failed to save details"); + } + }; + + if (loading) return
Loading board...
; + if (error) return
{error}
; + + return ( +
+
+

Kanban

+ +
+ {ai.loading && ( +
+
+
+ )} +
+ {columns.map((col) => ( +
+
+

{col.name}

+
+
+ {(grouped.get(col.id) || []).map((card) => ( +
+ {editing[card.id] ? ( + <> + setEditing((prev) => ({ ...prev, [card.id]: { ...prev[card.id], title: e.target.value } }))} + placeholder="Title" + /> + setEditing((prev) => ({ ...prev, [card.id]: { ...prev[card.id], company: e.target.value } }))} + placeholder="Company" + /> +