Git based compose deployment should detect if a database is added #3177
andrasbacsai started this conversation in Improvement Requests
It should be detected, and a backup view should be created to set up scheduled backups.

Replies: 1 comment
I stumbled on this issue on my first use of Coolify. I implemented a temporary backup solution until this is implemented within Coolify. Sharing my workaround below in case it's useful for anyone else. Any use is at your own risk.

backup/Dockerfile
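The Dockerfile contents are not reproduced above; the following is only a rough sketch of what such an image could look like, assuming a Debian-based Postgres image with the AWS CLI, jq, and cron installed, and assuming the container's environment is written to /etc/environment so the cron job can read it:

# Hypothetical sketch only, not the original Dockerfile from this setup.
FROM postgres:16.3

# pg_dump ships with the base image; add the upload and scheduling tools.
RUN apt-get update \
    && apt-get install -y --no-install-recommends awscli jq cron \
    && rm -rf /var/lib/apt/lists/*

# The crontab drives the periodic backup; backup.sh itself is bind-mounted
# at /backup by docker-compose, so it does not need to be copied in.
COPY crontab /etc/cron.d/backup
RUN chmod 0644 /etc/cron.d/backup

# Expose the container's environment variables to cron jobs (on Debian, cron
# reads /etc/environment), then run cron in the foreground.
CMD ["bash", "-c", "printenv > /etc/environment && exec cron -f"]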
backup/backup.sh

#!/bin/bash
# Fail on first error
set -e
# Define webhook URLs
SUCCESS_WEBHOOK_URL=$DISCORD_BACKUP_SUCCESS_WEBHOOK_URL
FAILURE_WEBHOOK_URL=$DISCORD_BACKUP_FAILURE_WEBHOOK_URL
# Function to send a message to Discord
send_discord_message() {
    local webhook_url=$1
    local message=$2
    # Properly escape the message content as a JSON payload
    local payload
    payload=$(jq -n --arg content "$message" '{content: $content}')
    curl -H "Content-Type: application/json" -X POST -d "$payload" "$webhook_url"
}
# Function to handle errors
error_handler() {
    local error_message="$BACKUP_IDENTIFIER: Backup failed at $(date +\%Y-\%m-\%d_\%H:\%M:\%S)"
    echo "$error_message"
    send_discord_message "$FAILURE_WEBHOOK_URL" "$error_message"
    exit 1
}
# Trap any error and call the error_handler
trap 'error_handler' ERR
# Create the backup
backup_file="/backup/$(date +\%Y-\%m-\%d-\%H-\%M-\%S).dump"
PGPASSWORD="$POSTGRES_PASSWORD" pg_dump -h db -U "$POSTGRES_USER" -Fc "$POSTGRES_DB" > "$backup_file"
echo "Backup done at $(date +\%Y-\%m-\%d_\%H:\%M:\%S)"
# Upload the backup to S3
s3_path="s3://$S3_BUCKET_NAME/backup/$BACKUP_IDENTIFIER/$(basename "$backup_file")"
aws s3 cp "$backup_file" "$s3_path" --endpoint-url "$S3_ENDPOINT_URL"
echo "Backup uploaded to S3 at $(date +\%Y-\%m-\%d_\%H:\%M:\%S)"
# Construct the URL of the uploaded file
file_url="$S3_ENDPOINT_URL/$S3_BUCKET_NAME/backup/$BACKUP_IDENTIFIER/$(basename $backup_file)"
# Remove old backups locally
ls -1 /backup/*.dump | head -n -2 | xargs rm -f
# Send success message to Discord
success_message="$BACKUP_IDENTIFIER: Backup succeeded at $(date +\%Y-\%m-\%d_\%H:\%M:\%S).\nFile URL: $file_url"
echo -e "$success_message"
send_discord_message "$SUCCESS_WEBHOOK_URL" "$success_message"
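Because pg_dump is run with -Fc, each file is a custom-format archive. Restoring one would look roughly like the following; the file name is illustrative and the commands assume the same environment variables as backup.sh:

# Inspect what a dump contains (hypothetical file name)
pg_restore --list /backup/2024-06-01-03-00-00.dump

# Restore it into the running database container
PGPASSWORD="$POSTGRES_PASSWORD" pg_restore -h db -U "$POSTGRES_USER" -d "$POSTGRES_DB" --clean --if-exists /backup/2024-06-01-03-00-00.dump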
backup/crontab
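The actual crontab contents are not reproduced here either. A minimal sketch, assuming a daily run at 03:00 and that backup.sh is mounted at /backup as in the compose file below:

# Hypothetical example only, not the original crontab from this setup.
# Files in /etc/cron.d need a user field and must end with a newline.
0 3 * * * root /backup/backup.sh >> /var/log/backup.log 2>&1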
docker-compose.yaml

version: "3.8"
services:
  app-database:
    container_name: app-database
    image: postgres:16.3
    restart: always
    volumes:
      - database-data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    healthcheck:
      test:
        - CMD-SHELL
        - "pg_isready -U $${POSTGRES_USER} -d $${POSTGRES_DB}"
      interval: 5s
      timeout: 20s
      retries: 10
  app-backup:
    container_name: app-backup
    restart: always
    build:
      context: ./backup
      dockerfile: Dockerfile
    depends_on:
      app-database:
        condition: service_healthy
    volumes:
      - ./backup:/backup
    links:
      - app-database:db
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      S3_BUCKET_NAME: ${S3_BUCKET_NAME}
      S3_ENDPOINT_URL: ${S3_ENDPOINT_URL}
      DISCORD_BACKUP_SUCCESS_WEBHOOK_URL: ${DISCORD_BACKUP_SUCCESS_WEBHOOK_URL}
      DISCORD_BACKUP_FAILURE_WEBHOOK_URL: ${DISCORD_BACKUP_FAILURE_WEBHOOK_URL}
      BACKUP_IDENTIFIER: ${BACKUP_IDENTIFIER} # some identifier to segment backups in S3

volumes:
  database-data:
  backup:
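For reference, a hypothetical .env (or set of Coolify environment variables) that this setup expects; all values here are placeholders:

POSTGRES_USER=app
POSTGRES_PASSWORD=change-me
POSTGRES_DB=app
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
S3_BUCKET_NAME=my-backup-bucket
S3_ENDPOINT_URL=https://s3.example.com
DISCORD_BACKUP_SUCCESS_WEBHOOK_URL=https://discord.com/api/webhooks/...
DISCORD_BACKUP_FAILURE_WEBHOOK_URL=https://discord.com/api/webhooks/...
BACKUP_IDENTIFIER=my-app-production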