This repository contains the complete DevOps implementation for the BestCity real estate investment platform. It includes Docker containerization, Infrastructure as Code (Terraform), logging & monitoring setup, and AWS automation scripts.
┌────────────────────────────────────────────────────────┐
│                  Docker Compose Stack                  │
├────────────────────────────────────────────────────────┤
│                                                        │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  │
│  │   BestCity   │  │   MongoDB    │  │   Fluentd    │  │
│  │ Application  │  │   Database   │  │   Logging    │  │
│  │ (Port 3099)  │  │ (Port 27017) │  │ (Port 24224) │  │
│  └──────────────┘  └──────────────┘  └──────────────┘  │
│                                                        │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐  │
│  │  Prometheus  │  │   Grafana    │  │     Node     │  │
│  │  Monitoring  │  │  Dashboards  │  │   Exporter   │  │
│  │ (Port 9090)  │  │ (Port 3000)  │  │ (Port 9100)  │  │
│  └──────────────┘  └──────────────┘  └──────────────┘  │
│                                                        │
└────────────────────────────────────────────────────────┘
bc-ops/
├── Dockerfile # Multi-stage Docker build for BestCity app
├── docker-compose.yml # Complete stack orchestration
├── .env.example # Environment variables template
│
├── terraform/ # Infrastructure as Code
│ ├── main.tf # Main Terraform configuration
│ ├── variables.tf # Variable definitions
│ └── terraform.tfvars.example # Example values
│
├── fluentd/ # Logging configuration
│ ├── Dockerfile # Custom Fluentd image
│ └── conf/
│ └── fluent.conf # Log aggregation rules
│
├── prometheus/ # Monitoring configuration
│ └── prometheus.yml # Metrics collection config
│
├── grafana/ # Visualization setup
│ └── provisioning/
│ └── datasources/
│ └── datasource.yml # Prometheus datasource
│
└── scripts/ # Automation scripts
├── setup.sh # Local setup script
├── deploy.sh # EC2 deployment script
├── run-local.sh # Quick local run
└── aws-cli-tasks.sh # AWS operations script
- Docker 20.10+
- Docker Compose 2.0+
- Node.js 18+ (for local development)
- AWS CLI 2.x (for cloud deployment)
- Terraform 1.0+ (for infrastructure provisioning)
1. Clone the repository and navigate to bc-ops:
   cd bc-ops
2. Setup environment:
   cp .env.example .env  # Edit .env with your configuration
3. Run the setup script:
   ./scripts/setup.sh
   This will:
   - Build Docker images
   - Start all services
   - Run health checks
   - Display service URLs
4. Access the application:
   - Application: http://localhost:3099
   - Prometheus: http://localhost:9090
   - Grafana: http://localhost:3000 (admin/admin)
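To confirm everything is actually serving, a quick check from the command line; the Prometheus /-/healthy and Grafana /api/health paths are those tools' standard health endpoints, and the app path matches the health check used later in this guide:

# Application health endpoint
curl -f http://localhost:3099/api/health
# Prometheus health endpoint
curl -f http://localhost:9090/-/healthy
# Grafana health endpoint
curl -f http://localhost:3000/api/health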
./scripts/run-local.sh
This runs the app in development mode with hot reload.
- Multi-stage build for optimized image size
- Non-root user for security
- Health checks for container monitoring
- Tini init system for proper signal handling
- Production-ready with minimal attack surface
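A quick way to spot-check the non-root user and the health check on a running container. The container name bestcity-app is an assumption here (mirroring the bestcity-mongodb naming used later); adjust it to match your compose naming:

# Should print a non-zero UID (i.e. not root)
docker exec bestcity-app id -u
# Should print "healthy" once the container's health check passes
docker inspect --format '{{.State.Health.Status}}' bestcity-app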
- app - BestCity application (React + Node.js)
- mongodb - Database with persistence
- fluentd - Centralized logging
- prometheus - Metrics collection
- grafana - Metrics visualization
- node-exporter - System metrics
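To confirm which services the compose file defines and their current state:

# List services defined in docker-compose.yml
docker-compose config --services
# Show container status, health, and published ports
docker-compose ps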
# Start services
docker-compose up -d
# View logs
docker-compose logs -f [service-name]
# Stop services
docker-compose down
# Stop and remove volumes
docker-compose down -v
# Rebuild specific service
docker-compose up -d --build app
# Scale service (if applicable)
docker-compose up -d --scale app=3
1. Navigate to terraform directory:
   cd terraform
2. Configure your variables:
   cp terraform.tfvars.example terraform.tfvars  # Edit terraform.tfvars with your values
3. Initialize Terraform:
   terraform init
4. Plan the infrastructure:
   terraform plan
5. Apply the configuration:
   terraform apply
   This creates:
   - VPC with public subnet
   - Internet Gateway
   - Security Groups
   - EC2 instance (t3.medium)
   - Elastic IP
   - IAM roles and policies
6. Get outputs:
   terraform output
After provisioning infrastructure:
# Set environment variables
export EC2_HOST=$(terraform output -raw instance_public_ip)
export SSH_KEY=/path/to/your/key.pem
# Run deployment script
./scripts/deploy.sh
If you prefer manual deployment:
1. SSH to EC2:
   ssh -i your-key.pem ec2-user@<EC2_IP>
2. Clone the repository:
   git clone <your-repo-url>
   cd bc-ops
3. Configure environment:
   cp .env.example .env
   nano .env  # Update with your values
4. Start services:
   docker-compose up -d
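As a sanity check from your workstation, reusing the EC2_HOST and SSH_KEY variables set above. This assumes the security group exposes port 3099 and that the compose file lives in ~/bc-ops on the instance; adjust paths to your layout:

# Hit the app's health endpoint on the instance
curl -f http://$EC2_HOST:3099/api/health
# Confirm all containers are running on the instance
ssh -i "$SSH_KEY" ec2-user@"$EC2_HOST" 'cd bc-ops && docker-compose ps'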
Access Prometheus at http://<host>:9090
Available metrics:
- System metrics (CPU, Memory, Disk, Network)
- Container metrics
- Application metrics (if implemented)
Useful queries:
# CPU usage
rate(node_cpu_seconds_total[5m])
# Memory usage
node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes
# Container CPU
rate(container_cpu_usage_seconds_total[5m])
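The same queries can be run from a shell through Prometheus's standard HTTP API instead of the web UI:

# Check which scrape targets are up
curl -s 'http://localhost:9090/api/v1/query?query=up'
# URL-encode more complex expressions, e.g. the CPU usage rate above
curl -s -G 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=rate(node_cpu_seconds_total[5m])'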
Access Grafana at http://<host>:3000
- Default credentials: admin/admin
- Prometheus datasource is pre-configured
- Import community dashboards for Node Exporter
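To verify the provisioned datasource without opening the UI, using Grafana's standard HTTP API and the default credentials noted above:

# The pre-provisioned Prometheus datasource should appear in the list
curl -s -u admin:admin http://localhost:3000/api/datasources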
Logs are collected from all Docker containers and:
- Stored in the /fluentd/log directory
- Tagged by service
- Formatted as JSON
- Rotated daily with compression
View logs:
docker-compose logs -f fluentd
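To inspect the aggregated files directly inside the Fluentd container; the exact file names depend on fluent.conf, so the glob below is an assumption:

# List log files written by Fluentd
docker-compose exec fluentd ls -lh /fluentd/log
# Tail the most recent entries (file names vary with the configured tags)
docker-compose exec fluentd sh -c 'tail -n 20 /fluentd/log/*.log'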
The aws-cli-tasks.sh script provides interactive AWS operations:
./scripts/aws-cli-tasks.sh
Features:
- S3 bucket management
- File upload/download
- Application backups
- EC2 instance management
- CloudWatch alarms
- Secrets Manager integration
Example: Create backup
# Set environment
export AWS_REGION=us-east-1
export PROJECT_NAME=bestcity
export ENVIRONMENT=dev
# Run script
./scripts/aws-cli-tasks.sh
# Select option 5 for application backup
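For reference, a roughly equivalent backup done by hand with plain AWS CLI commands; the bucket name and file list are illustrative, not necessarily what the script does:

# Create a backup bucket (names must be globally unique)
aws s3 mb s3://bestcity-dev-backups --region $AWS_REGION
# Archive configuration and upload the archive
tar czf backup-$(date +%F).tar.gz .env docker-compose.yml
aws s3 cp backup-$(date +%F).tar.gz s3://bestcity-dev-backups/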
Security measures in place:
- ✅ Non-root user in containers
- ✅ Read-only root filesystem (where applicable)
- ✅ No privileged containers
- ✅ Health checks enabled
- ✅ Resource limits set
- ✅ Security groups with minimal ports
- ✅ IAM roles with least privilege
- ✅ Encrypted EBS volumes
- ✅ VPC isolation
- ✅ SSH key-based authentication
- ✅ Environment variables for secrets
- ✅ HTTPS ready (add certificate)
- ✅ MongoDB authentication enabled
- ✅ CORS configured
- ✅ Input validation
Recommended before production:
- Change default passwords (Grafana, MongoDB)
- Add SSL/TLS certificates
- Restrict SSH access to specific IPs
- Enable AWS CloudTrail
- Set up automated backups
- Configure proper secret management
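For the secret-management item, one option is AWS Secrets Manager via the CLI; the secret name below is illustrative:

# Store the JWT secret instead of keeping it in .env
aws secretsmanager create-secret \
  --name bestcity/dev/jwt-secret \
  --secret-string "$(openssl rand -base64 32)"
# Retrieve it at deploy time
aws secretsmanager get-secret-value \
  --secret-id bestcity/dev/jwt-secret \
  --query SecretString --output text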
Key environment variables (see .env.example):
# Application
NODE_ENV=production
PORT=3099
# Database
MONGO_URI=mongodb://mongodb:27017/bestcity
# Cloudinary (required)
CLOUDINARY_NAME=your_cloudinary_name
CLOUDINARY_API_KEY=your_api_key
CLOUDINARY_API_SECRET=your_api_secret
# JWT
JWT_SECRET=your_secret_key
JWT_EXPIRE=7d
# Email
SENDGRID_API_KEY=your_sendgrid_key
Build and test the image locally:
docker build -f Dockerfile -t bestcity:test ../demo-version
docker run -p 3099:3099 bestcity:test
curl http://localhost:3099/api/health
# Install Apache Bench
sudo apt-get install apache2-utils # Ubuntu/Debian
brew install httpd  # macOS (provides ab; ab also ships preinstalled on macOS)
# Run load test
ab -n 1000 -c 10 http://localhost:3099/
Terraform-managed resources:
- aws_vpc.main - Virtual Private Cloud
- aws_subnet.public - Public subnet
- aws_internet_gateway.main - Internet gateway
- aws_security_group.app_sg - Security group
- aws_instance.app_server - EC2 instance
- aws_eip.app_eip - Elastic IP
- aws_iam_role.ec2_role - IAM role
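After terraform apply, each of these can be inspected from the state:

# List everything Terraform manages
terraform state list
# Show the attributes of a single resource, e.g. the EC2 instance
terraform state show aws_instance.app_server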
# Check Docker resource usage
docker stats
# Cleanup Docker system
docker system prune -a
# Backup MongoDB
docker exec bestcity-mongodb mongodump --out=/backup
# Restore MongoDB
docker exec bestcity-mongodb mongorestore /backup
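The dump above stays inside the container; to keep a copy on the host as well (path mirrors the --out value used above):

# Copy the dump from the container to the host
docker cp bestcity-mongodb:/backup ./mongo-backup-$(date +%F)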
# View Terraform state
terraform show
# Destroy infrastructure
terraform destroy
If the application won't start:
1. Check logs:
   docker-compose logs app
2. Verify environment variables:
   docker-compose config
3. Check MongoDB connection:
   docker-compose exec mongodb mongosh --eval "db.stats()"
# Check what's using the port
lsof -i :3099
# Change ports in docker-compose.yml if needed
For Terraform issues:
# Validate configuration
terraform validate
# Check AWS credentials
aws sts get-caller-identity
# Enable debug logging
export TF_LOG=DEBUG
terraform apply
This is a test project demonstrating DevOps practices. Key areas covered:
- ✅ Containerization - Multi-stage Docker build
- ✅ Orchestration - Docker Compose with multiple services
- ✅ Infrastructure as Code - Terraform for AWS
- ✅ Monitoring - Prometheus + Grafana stack
- ✅ Logging - Centralized with Fluentd
- ✅ Automation - Shell scripts for common tasks
- ✅ AWS Integration - CLI operations and cloud deployment
- ✅ Security - Best practices implemented
- ✅ Documentation - Comprehensive guides
This is a test project for DevOps evaluation purposes.
For issues or questions related to this DevOps implementation, please:
- Check the troubleshooting section
- Review the logs
- Verify your configuration matches the examples
Built with ❤️ for DevOps Excellence