# AI Code Reviewer

Instant code analysis powered by GPT-4o-mini - get professional feedback on your code in seconds.

**Live Demo:** https://ai-code-reviewer-pro.vercel.app
## Table of Contents

- Overview
- Features
- Quick Start
- Getting Your OpenAI API Key
- Local Development
- Deployment
- Security
- Example
- Tech Stack
- Troubleshooting
## Overview

AI Code Reviewer is a web application that provides instant, AI-powered code analysis. Simply paste your code and get professional feedback on bugs, performance, security, and best practices.

**Try it now:** https://ai-code-reviewer-pro.vercel.app
### The Problem

- Manual code reviews take time
- Junior developers need instant feedback
- Senior developers need quick sanity checks
- Learning proper patterns is hard without guidance
### The Solution

AI-powered code analysis that reviews your code instantly, identifying:

- **Bugs** - Logic errors and potential issues
- **Performance** - Optimization opportunities
- **Security** - Vulnerability scanning
- **Best Practices** - Code quality improvements
## Features

- **Instant Analysis** - Get feedback in 5-10 seconds
- **Specific Suggestions** - Actionable improvements, not generic advice
- **Multi-Language** - Supports JavaScript, Python, TypeScript, and more
- **No Signup Required** - Paste code and go
- **Secure** - API key protected server-side with rate limiting
- **Bot Protection** - Automatic detection and prevention of abuse
## Quick Start

### Prerequisites

- Node.js 18+ installed (Download)
- OpenAI API key (Get one here)
- npm or yarn package manager
### Installation

1. Clone the repository:

   ```shell
   git clone https://github.com/fastians/ai-code-reviewer.git
   cd ai-code-reviewer
   ```

2. Install dependencies:

   ```shell
   npm install
   ```

3. Set up environment variables:

   ```shell
   # Create .env file
   echo "OPENAI_API_KEY=your_openai_api_key_here" > .env
   ```

   > **Important:** Never commit `.env` to git! It's already in `.gitignore`.

4. Start the development server:

   ```shell
   npm run dev
   ```

5. Open your browser and navigate to http://localhost:5173.

That's it! You're ready to review code.
## Getting Your OpenAI API Key

1. Sign up or log in to the OpenAI Platform
2. Navigate to API Keys
3. Create a new secret key
4. Copy the key (you won't see it again!)
5. Add it to your `.env` file for local development, or to Vercel environment variables for production

> **Tip:** Start with a small amount of credits to test. OpenAI charges per API call.
## Local Development

The app runs in two parts:

- **Frontend (Vite)** - http://localhost:5173
- **API Server (Express)** - http://localhost:3001
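During development, the frontend calls the Express server over HTTP. As a rough sketch of that request (the `/api/review` path and the `{ code }` payload shape are assumptions based on the project layout, not taken from the source):

```javascript
// Hypothetical sketch of the frontend -> API call. The endpoint path and
// payload shape are assumptions, not the project's documented contract.
function buildReviewRequest(code) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ code }),
  };
}

// Usage in the browser might look like:
// const res = await fetch("http://localhost:3001/api/review", buildReviewRequest(source));
// const data = await res.json();
```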
Both start automatically with:

```shell
npm run dev
```

### Available Scripts

| Command | Description |
|---|---|
| `npm run dev` | Start both frontend and API server (recommended) |
| `npm run dev:client` | Start only frontend (Vite) |
| `npm run dev:server` | Start only API server (Express) |
| `npm run build` | Build for production |
| `npm run preview` | Preview production build |
| `npm run lint` | Run ESLint |
### Project Structure

```
ai-code-reviewer/
├── api/
│   ├── shared.js     # Shared logic (rate limiting, bot detection, OpenAI calls)
│   └── review.ts     # Serverless function (Vercel) - uses shared.js
├── src/
│   ├── App.tsx       # Main React component
│   └── main.tsx      # Entry point
├── public/           # Static assets
├── server.js         # Local Express server (dev only) - uses api/shared.js
├── vercel.json       # Vercel configuration
└── package.json      # Dependencies
```

> **Note:** The code logic is shared between `api/review.ts` (production) and `server.js` (local dev) via `api/shared.js`. This ensures consistent behavior and eliminates code duplication.
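The shared-module pattern described above can be sketched as one core function with two thin wrappers. All names below are illustrative, not the project's actual exports:

```javascript
// Sketch of the shared-module pattern; function names are illustrative.

// api/shared.js equivalent: the single source of truth for review logic.
function handleReview(code) {
  if (typeof code !== "string" || code.length === 0) {
    return { status: 400, body: { error: "No code provided" } };
  }
  // ...rate limiting, bot detection, and the OpenAI call would go here...
  return { status: 200, body: { review: "..." } };
}

// server.js equivalent: thin Express-style wrapper for local dev.
function expressHandler(req, res) {
  const result = handleReview(req.body.code);
  res.status(result.status).json(result.body);
}

// api/review.ts equivalent: thin serverless wrapper for Vercel,
// delegating to the same core function.
function vercelHandler(req, res) {
  const result = handleReview(req.body.code);
  res.status(result.status).json(result.body);
}
```

Because both wrappers delegate to the same function, local behavior and production behavior cannot drift apart.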
## Deployment

Your Vite + React setup works perfectly on Vercel. The `api/review.ts` file automatically becomes a serverless function.
### Option 1: Vercel Dashboard

1. Push your code to GitHub:

   ```shell
   git add .
   git commit -m "Ready for deployment"
   git push origin main
   ```

2. Connect to Vercel:
   - Go to vercel.com and sign in
   - Click "Add New Project"
   - Import your GitHub repository
   - Vercel will auto-detect Vite

3. Set the environment variable:
   - Go to Project Settings → Environment Variables
   - Add `OPENAI_API_KEY` with your OpenAI API key
   - Select all environments (Production, Preview, Development)
   - Click Save

4. Deploy:
   - Click "Deploy"
   - Wait for the build to complete (~2-3 minutes)
   - Your app is live!
   - Production URL: https://ai-code-reviewer-pro.vercel.app
### Option 2: Vercel CLI

1. Install the Vercel CLI:

   ```shell
   npm install -g vercel
   ```

2. Log in:

   ```shell
   vercel login
   ```

3. Deploy a preview:

   ```shell
   vercel
   ```

4. Set the environment variable:

   ```shell
   vercel env add OPENAI_API_KEY
   ```

   Enter your key when prompted and select all environments.

5. Deploy to production:

   ```shell
   vercel --prod
   ```
**Need more details?** See DEPLOYMENT.md for a comprehensive deployment guide.
## Security

### Security Features

- **API Key Protection** - Stored server-side only, never exposed to the frontend
- **Rate Limiting** - Prevents abuse and excessive API usage
- **Bot Detection** - Automatic detection with stricter limits
- **Input Validation** - Maximum 10,000 characters per request
- **Error Handling** - Secure error messages without exposing sensitive data
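The input-validation rule (the 10,000-character cap) can be sketched as a simple server-side check. The constant matches the documented limit, but the function itself is illustrative, not the project's actual code:

```javascript
// Illustrative server-side input validation. MAX_CODE_LENGTH matches the
// documented 10,000-character limit; the function shape is an assumption.
const MAX_CODE_LENGTH = 10000;

function validateCodeInput(code) {
  if (typeof code !== "string" || code.trim().length === 0) {
    return { ok: false, error: "Code is required" };
  }
  if (code.length > MAX_CODE_LENGTH) {
    return { ok: false, error: "Code exceeds " + MAX_CODE_LENGTH + " characters" };
  }
  return { ok: true };
}
```

Validating on the server (not just in the React UI) matters because anyone can call the API directly, bypassing client-side checks.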
### Rate Limits

| Limit Type | Limit | Reset |
|---|---|---|
| Normal Users | 5 requests/24h | 24 hours |
| Detected Bots | 2 requests/24h | 24 hours |
| Code Length | 10,000 characters | Per request |
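A per-IP limiter with a 24-hour window could be sketched as below. The numbers match the documented limits (5/24h for normal users, 2/24h for bots), but the implementation is illustrative; the real logic lives in `api/shared.js`:

```javascript
// Illustrative fixed-window rate limiter keyed by IP address.
// Limits (5 for users, 2 for bots) follow the documented table.
const WINDOW_MS = 24 * 60 * 60 * 1000; // 24 hours
const requests = new Map(); // ip -> { count, windowStart }

function checkRateLimit(ip, isBot, now = Date.now()) {
  const limit = isBot ? 2 : 5;
  const entry = requests.get(ip);
  // New IP, or the previous 24h window has expired: start a fresh window.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    requests.set(ip, { count: 1, windowStart: now });
    return { allowed: true, remaining: limit - 1 };
  }
  if (entry.count >= limit) {
    return { allowed: false, remaining: 0 };
  }
  entry.count += 1;
  return { allowed: true, remaining: limit - entry.count };
}
```

Note that an in-memory map resets whenever a serverless instance recycles; a production limiter would typically back this with a shared store.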
### Bot Detection

The system automatically detects bots based on:

- User agent patterns (`curl`, `wget`, `python`, `selenium`, etc.)
- Missing browser headers (`Accept`, `Accept-Language`)
- Suspicious request patterns

Detected bots receive stricter rate limits (2 requests per 24 hours vs. 5 for normal users).
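Those heuristics can be sketched as a simple header check. The patterns below are examples drawn from the list above; the project's actual rules live in `api/shared.js`:

```javascript
// Illustrative bot heuristic based on the signals listed above.
const BOT_UA_PATTERNS = /curl|wget|python|selenium/i;

function looksLikeBot(headers) {
  const ua = headers["user-agent"] || "";
  // No user agent, or one matching a known tool, is treated as a bot.
  if (!ua || BOT_UA_PATTERNS.test(ua)) return true;
  // Real browsers send Accept and Accept-Language; scripts often omit them.
  if (!headers["accept"] || !headers["accept-language"]) return true;
  return false;
}
```

Header heuristics are easy to spoof, which is why they are paired with rate limiting rather than used as the only defense.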
**Security details:** See SECURITY.md for comprehensive security documentation.
## Example

**Input:**

```javascript
const express = require("express");
const app = express();
const port = process.env.PORT || 3000;

let users = [];

app.use(express.json());

app.get("/users", (req, res) => {
  res.send(users);
});

app.post("/users", (req, res) => {
  const u = req.body;
  if (!u.name) {
    res.send("no name");
    return;
  }
  u.id = Math.floor(Math.random() * 10000);
  users.push(u);
  res.send(u);
});

app.get("/user/:id", (req, res) => {
  const id = req.params.id;
  const found = users.filter((u) => u.id == id)[0];
  if (!found) {
    res.send("not found");
  } else {
    res.send(found);
  }
});

app.listen(port, () => {
  console.log("server started on port " + port);
});
```

**Sample review output:**

- **User ID Collision** - Using `Math.random()` for ID generation can lead to collisions. Consider using UUIDs or incremental IDs.

  ```javascript
  const { v4: uuidv4 } = require('uuid');
  u.id = uuidv4();
  ```

- **Filter Method Inefficiency** - Using `filter` followed by accessing the first element is inefficient. Use `find()` instead.

  ```javascript
  const found = users.find(u => u.id == id);
  ```

- **Array Storage** - Consider using a database for scalable storage
- **JSON Response** - Use `res.json()` instead of `res.send()`
- **Input Validation** - Validate user input using libraries like `express-validator`
- **HTTP Status Codes** - Return appropriate status codes (400, 404, etc.)
- **Error Handling** - Create middleware for consistent error handling
- **Modularization** - Break out route handlers into separate modules
- **Documentation** - Add JSDoc comments for better code documentation
## Tech Stack

| Category | Technology |
|---|---|
| Frontend | React 19, TypeScript, Tailwind CSS |
| Build Tool | Vite |
| Backend | Vercel Serverless Functions |
| AI | OpenAI GPT-4o-mini |
| Deployment | Vercel |
| Icons | Lucide React |
## Troubleshooting

### Code review not working locally

**Problem:** Getting errors when trying to review code locally.

**Solutions:**

- Make sure both servers are running (`npm run dev`)
- Check that `OPENAI_API_KEY` is set in `.env`
- Verify the API server is running on port 3001
- Check the browser console for errors
### Rate limit exceeded

**Problem:** Getting a "Rate limit exceeded" error.

**Solutions:**

- Wait 24 hours for the limit to reset
- Each IP address has its own limit
- Bots get stricter limits automatically
- Check if you're making too many requests
### Deployment fails

**Problem:** Deployment fails with build errors.

**Solutions:**

- Check the build logs in the Vercel dashboard
- Ensure all dependencies are in `package.json`
- Verify TypeScript compiles without errors
- Check that `OPENAI_API_KEY` is set in environment variables
### API key not found

**Problem:** "API key not found" error.

**Solutions:**

- Verify `OPENAI_API_KEY` is set in Vercel environment variables
- Make sure you selected all environments (Production, Preview, Development)
- Redeploy after adding environment variables
- Check the variable name spelling (it's case-sensitive)
### Still stuck?

- Check SECURITY.md for security details
- Check DEPLOYMENT.md for deployment help
- Open an issue on GitHub
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

## License

This project is open source and available under the MIT License.

---

Made with ❤️ for developers who want instant code feedback
