This is a local AI-powered interview practice tool that helps you improve your behavioral interview skills. It uses Ollama for local LLM inference and Whisper for speech-to-text transcription, allowing you to speak your answers and get instant feedback from an AI hiring manager.
- 🎤 Voice-based answers using Whisper for transcription
- 🧑‍💼 Realistic behavioral interview questions tailored to any profession you enter
- 💬 AI feedback focused on clarity, structure, and communication style
- 💾 Local storage of interviews — all interview data (video files, transcripts, and feedback) is stored locally in the `uploads/interviews/` directory. Each interview is organized in its own subfolder, and you can manage (view or delete) them directly from the My Interviews page in the app.
- ⚙️ Customizable model selection
- 🔒 Runs fully locally (no cloud or API key required)
Make sure you have the following installed before running the app:
| Tool | Description | Install Command / Notes | Approx. Space |
|---|---|---|---|
| Java 17+ | Required to run the Spring Boot app | Download Java | ~300 MB |
| Ollama | Local LLM runtime | Install Ollama | ~1.5 GB |
| AI model | Check out the available models at the Ollama library | Example: `ollama pull phi3` | Depends on the model |
| Python 3 + pip | Required for Whisper | Install Python | ~500 MB |
| FFmpeg | Required by Whisper for audio processing | macOS: `brew install ffmpeg`, Ubuntu: `sudo apt install ffmpeg` | ~200 MB |
| Whisper | Speech-to-text transcription | `pip install -U openai-whisper` | ~1.5 GB |
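Before running the app, you can sanity-check that the tools above are on your `PATH` with a short shell snippet (a convenience sketch, not part of the repo; on some systems `pip3` or `whisper` may be installed under a different name):

```shell
# Check each prerequisite and report whether it is installed.
for tool in java ollama python3 pip3 ffmpeg whisper; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok:      $tool"
  else
    echo "MISSING: $tool"
  fi
done
```

Anything reported as MISSING can be installed with the commands from the table above.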
💡 Recommended Setup For the best experience, use a Mac with Apple Silicon (M1/M2/M3) or a machine with a dedicated GPU. These devices handle local LLM inference and Whisper audio processing much faster and more efficiently.
The app is also optimized for Google Chrome, which provides the most reliable camera, microphone, and recording support compared to other browsers.
- Clone this repository:

  ```bash
  git clone https://github.com/alanquintero/myInterviewBot
  cd myInterviewBot
  ```

- Configure the AI model in `application.properties` (`llama3.1:8b` is the default model):

  ```properties
  ai-model=llama3.1:8b
  ```

- Run the app with Spring Boot:

  ```bash
  mvn spring-boot:run
  ```

- Open the app: visit http://localhost:8080 in your browser.
| Issue | Possible Cause | Solution |
|---|---|---|
| 🎧 No mic input / camera access | Browser permissions or wrong device selected | Check browser settings → allow mic and camera, and ensure the correct devices are selected |
| ❌ Cannot install Whisper | Python version is not compatible with Whisper | Check the supported Python versions on the official openai-whisper page |
| ❌ “Model not found” error | You haven’t pulled the model | For example, run `ollama pull phi3` |
| 🐍 `pip` command not found | pip is not installed | Ubuntu: `sudo apt install python3-pip`, macOS: `brew install python3` |
| 🐢 Slow transcription | Whisper base model is large | Try smaller Whisper models (like tiny or base) (not yet available in the app) |
Developers can configure the AI model in `application.properties`:

```properties
ai-model=llama3.1:8b
```

You can replace `llama3.1:8b` with another Ollama model, such as `mistral`, `phi3`, or any other model available locally.

Or, when running the app, you can override any property on the command line:

```bash
mvn spring-boot:run -Dspring-boot.run.arguments="--ai.model=phi3"
```

Example of running a JAR while overriding properties:

```bash
java -jar myinterviewbot.jar \
  --ai.provider=ollama \
  --ai.model=phi3:latest
```

Example of running the app on a different port:

```bash
mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=9090"
```

Picking the right model depends on what you care about most — speed, accuracy, or resource usage. Here’s a quick guide:
| 💻 Laptop Type | 🧠 Recommended Model | 📦 Approx. Size | ✅ Benefits | 🔴 Drawbacks |
|---|---|---|---|---|
| 8–12 GB RAM | Phi-3 Mini (3.8B) | ~2 GB | 🟢 Extremely fast 🟢 Small download 🟢 Great for quick Q&A and light tasks | 🔴 Limited reasoning depth 🔴 Not ideal for long conversations |
| 8–12 GB RAM | Mistral 7B | ~4 GB | 🟢 Smart and efficient 🟢 Handles follow-ups better than Phi-3 🟢 Good general-purpose model | 🔴 Slightly robotic tone 🔴 Less consistent on complex logic |
| 16–18 GB RAM | Llama 3.1 (8B) | ~5 GB | 🟢 Excellent reasoning 🟢 Natural, human-like answers 🟢 Great for behavioral interview simulation | 🔴 Slightly slower startup 🔴 Requires quantized version for best speed |
| 16–18 GB RAM | Gemma 2 (9B) | ~6 GB | 🟢 Balanced quality and speed 🟢 Friendly conversational tone 🟢 Efficient on Apple Silicon | 🔴 Can occasionally repeat or overexplain |
| 24+ GB RAM / M3 Pro–Max | Llama 3.1 (13B) | ~8–9 GB | 🟢 High-quality, detailed reasoning 🟢 Handles multi-turn interviews beautifully 🟢 Very consistent and coherent | 🔴 Slower on smaller laptops 🔴 Heavy model |
| Server / Multi-GPU Setup | Llama 3.1 (70B) | ~40–45 GB | 🟢 Near GPT-4 quality 🟢 Exceptional reasoning and memory 🟢 Ideal for research or production AI agents | 🔴 Requires 64 GB+ RAM or GPU cluster 🔴 Very slow download / load |
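The table above can be condensed into a rough rule of thumb. The snippet below is illustrative only: the thresholds and model tags mirror the table, but what actually fits also depends on quantization and on what else is running, so verify the exact tags in the Ollama library before pulling.

```shell
ram_gb=16  # set this to your machine's RAM in GB
if   [ "$ram_gb" -ge 64 ]; then model="llama3.1:70b"
elif [ "$ram_gb" -ge 24 ]; then model="llama3.1:13b"
elif [ "$ram_gb" -ge 16 ]; then model="llama3.1:8b"
else                            model="phi3"
fi
echo "Suggested model: $model"
```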
Check available models at the Ollama library.

Example of getting a model:

```bash
ollama pull llama3.1:8b
```

Show all the models that are currently installed on your machine:

```bash
ollama list
```

- Enter a Profession and click Generate Question to receive a tailored interview prompt.

  - Alternatively, enter your own question or choose one from the most common interview questions.

- Instantly get a Transcript and AI-powered Feedback and Evaluation on your performance.


- Visit the My Interviews page anytime to view all your past practice sessions.

- Settings: You can customize the app from the Settings page.
- System Checks: The app automatically checks your system hardware (camera, microphone, and processing capability).

- If your device doesn’t meet the minimum requirements, you’ll see a system alert warning that performance might be limited.

- 🧭 Conversation history — track progress over time
- 💻 Custom Questions — enter the question you want to practice
- 🎯 Scoring system — rate clarity, confidence, relevance, etc.
- 📄 Resume Interview Mode — tailor questions based on uploaded resume
- 💻 Technical Interview Mode — technical questions
- 🤝 Mock Interview Mode — a session to practice common questions asked in job interviews
- 📊 Progress Analytics — graphs for improvement over time
- 🤖 Add more AI providers (e.g., OpenAI GPT-4, Claude, Gemini)
- 🗣️ Text-to-Speech for AI questions and feedback
- 🤖 Add support for smaller Whisper models (e.g., tiny, base)
- 🎨 Improved UI/UX — modern dashboard, light/dark mode, analytics
- ⚙️ Settings fully customizable from the app — change AI provider, change AI model, enable or disable text-to-speech, change recording time, only audio interview, etc.
- 🧠 Feedback memory — personalized tips based on past sessions
- 💬 AI interviewer personalities (strict, friendly, technical)
- 🗂️ Integration with Google Drive or Notion for saving feedback
- Backend: Java 17, Spring Boot
- Frontend: HTML, JavaScript, CSS, Bootstrap
- AI: Ollama, Whisper (speech-to-text), FFmpeg (audio)
- Build Tool: Maven
Contributions are welcome! If you’d like to help add features, fix bugs, or improve documentation:
- Fork the repo
- Create a feature branch
- Submit a pull request
MIT License — free to use, modify, and distribute.




