Automated Database Backups with S3 Upload and Notifications

Managing database backups is critical for any production system, but setting up automated backup pipelines can be complex and time-consuming. This use case demonstrates how to use RapidForge's periodic tasks to create a complete backup automation system that:

  • Creates a compressed database dump on a schedule
  • Uploads the backup to Amazon S3
  • Sends a Slack notification on success or failure
  • Deletes old backups according to a retention policy

All of this is accomplished with a single bash script running in RapidForge's periodic task.

Prerequisites

Before you begin, make sure you have:

  • A RapidForge block where you can create periodic tasks
  • A PostgreSQL or MySQL database reachable from the task environment
  • An S3 bucket and AWS credentials with permission to write to it
  • pg_dump or mysqldump, the aws CLI, gzip, and curl available to the task
  • (Optional) A Slack incoming webhook URL for notifications

Step 1: Set Up Environment Variables

Before creating the periodic task, configure your credentials and settings in RapidForge:

  1. Navigate to your block's Settings section
  2. Add the following environment variables:
    • DATABASE_URL - Your database connection string (e.g., postgresql://user:pass@host:5432/dbname)
    • S3_BUCKET - Your S3 bucket name (e.g., my-db-backups)
    • AWS_ACCESS_KEY_ID - Your AWS access key
    • AWS_SECRET_ACCESS_KEY - Your AWS secret key
    • AWS_DEFAULT_REGION - Your AWS region (e.g., us-east-1)
    • SLACK_WEBHOOK - Your Slack webhook URL (optional)
    • BACKUP_RETENTION_DAYS - Number of days to keep backups (e.g., 30)

Tip: Use RapidForge's credential storage for sensitive values like database passwords and AWS keys. They will be accessible in your scripts with the CRED_ prefix.
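
For example, a credential saved under a hypothetical name DB_PASSWORD would be exposed to your script as CRED_DB_PASSWORD. A minimal sketch (the injected variable is simulated here, and all names and hosts are made up):

```shell
# Hypothetical: RapidForge injects stored credentials with a CRED_ prefix.
# We simulate the injected value here so the sketch is self-contained.
CRED_DB_PASSWORD="s3cret"

# Compose a connection string from the injected credential
DATABASE_URL="postgresql://backup_user:${CRED_DB_PASSWORD}@db.internal:5432/appdb"
echo "$DATABASE_URL"
```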

Step 2: Create the Periodic Task

  1. Go to your block
  2. Click Create Periodic Task
  3. Set the schedule (e.g., 0 2 * * * for daily at 2 AM)
  4. Add the following bash script:

For PostgreSQL:

#!/bin/bash
# Exit on any error, including a failure on the left side of a pipeline
# (plain `set -e` would not catch a failed pg_dump piped into gzip)
set -eo pipefail

# Generate backup filename with timestamp
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="backup_${TIMESTAMP}.sql.gz"
LOCAL_PATH="/tmp/${BACKUP_FILE}"

# Create compressed database backup
echo "Creating backup: ${BACKUP_FILE}"
pg_dump "$DATABASE_URL" | gzip > "$LOCAL_PATH"

BACKUP_SIZE=$(du -h "$LOCAL_PATH" | cut -f1)
echo "Backup created successfully: ${BACKUP_SIZE}"

# Upload to S3
echo "Uploading to S3..."
aws s3 cp "$LOCAL_PATH" "s3://${S3_BUCKET}/${BACKUP_FILE}" --storage-class STANDARD_IA

echo "Upload successful"

# Send success notification
if [ -n "$SLACK_WEBHOOK" ]; then
    curl -X POST -H 'Content-type: application/json' \
      --data "{\"text\":\"✅ Database backup completed successfully\\nFile: ${BACKUP_FILE}\\nSize: ${BACKUP_SIZE}\"}" \
      "$SLACK_WEBHOOK"
fi

# Clean up local file
rm "$LOCAL_PATH"

# Delete old backups based on retention policy
if [ -n "$BACKUP_RETENTION_DAYS" ]; then
    echo "Cleaning up backups older than ${BACKUP_RETENTION_DAYS} days..."
    CUTOFF_DATE=$(date -d "${BACKUP_RETENTION_DAYS} days ago" +%Y%m%d 2>/dev/null || date -v-${BACKUP_RETENTION_DAYS}d +%Y%m%d)

    aws s3 ls "s3://${S3_BUCKET}/" | grep "backup_" | while read -r line; do
        FILE_NAME=$(echo "$line" | awk '{print $4}')
        # Take the date from the filename itself, not from the whole ls line:
        # the size column can also be an 8-digit number and would match first
        FILE_DATE=$(echo "$FILE_NAME" | grep -oE "[0-9]{8}" | head -1)

        if [ -n "$FILE_DATE" ] && [ "$FILE_DATE" -lt "$CUTOFF_DATE" ]; then
            echo "Deleting old backup: ${FILE_NAME}"
            aws s3 rm "s3://${S3_BUCKET}/${FILE_NAME}"
        fi
    done
fi

echo "Backup automation completed"
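
A backup is only useful if it restores. Before relying on the pipeline, it is worth verifying the archive's integrity and rehearsing a restore. A minimal sketch of the integrity check (the dump file here is simulated; the commented restore command assumes the filename pattern produced by the script above):

```shell
# Simulate a dump and verify the gzip archive is intact with `gzip -t`
printf 'SELECT 1;\n' > /tmp/demo.sql
gzip -c /tmp/demo.sql > /tmp/demo.sql.gz
gzip -t /tmp/demo.sql.gz && echo "archive OK"

# A real restore would then look like:
#   gunzip -c backup_YYYYMMDD_HHMMSS.sql.gz | psql "$DATABASE_URL"
```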

For MySQL:

#!/bin/bash
# Exit on any error, including a failure on the left side of a pipeline
# (plain `set -e` would not catch a failed mysqldump piped into gzip)
set -eo pipefail

# Parse MySQL connection details from DATABASE_URL
# Format: mysql://user:password@host:port/database
DB_USER=$(echo "$DATABASE_URL" | sed -n 's/.*:\/\/\([^:]*\):.*/\1/p')
DB_PASS=$(echo "$DATABASE_URL" | sed -n 's/.*:\/\/[^:]*:\([^@]*\)@.*/\1/p')
DB_HOST=$(echo "$DATABASE_URL" | sed -n 's/.*@\([^:]*\):.*/\1/p')
DB_PORT=$(echo "$DATABASE_URL" | sed -n 's/.*:\([0-9]*\)\/.*/\1/p')
DB_NAME=$(echo "$DATABASE_URL" | sed -n 's/.*\/\(.*\)/\1/p')

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="backup_${TIMESTAMP}.sql.gz"
LOCAL_PATH="/tmp/${BACKUP_FILE}"

# Create compressed database backup
echo "Creating MySQL backup: ${BACKUP_FILE}"
# Pass the password via MYSQL_PWD so it is not visible in the process list
MYSQL_PWD="$DB_PASS" mysqldump -h"$DB_HOST" -P"$DB_PORT" -u"$DB_USER" "$DB_NAME" | gzip > "$LOCAL_PATH"

BACKUP_SIZE=$(du -h "$LOCAL_PATH" | cut -f1)
echo "Backup created successfully: ${BACKUP_SIZE}"

# Upload to S3
echo "Uploading to S3..."
aws s3 cp "$LOCAL_PATH" "s3://${S3_BUCKET}/${BACKUP_FILE}" --storage-class STANDARD_IA

echo "Upload successful"

# Send success notification
if [ -n "$SLACK_WEBHOOK" ]; then
    curl -X POST -H 'Content-type: application/json' \
      --data "{\"text\":\"✅ Database backup completed successfully\\nFile: ${BACKUP_FILE}\\nSize: ${BACKUP_SIZE}\"}" \
      "$SLACK_WEBHOOK"
fi

# Clean up local file
rm "$LOCAL_PATH"

echo "Backup automation completed"
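
The sed expressions above can be sanity-checked against a sample connection string before pointing the task at a real database (all values here are made up):

```shell
# Sample connection string in the documented mysql://user:password@host:port/database format
DATABASE_URL="mysql://alice:s3cret@db.example.com:3306/appdb"

# Same sed extractions as in the backup script
DB_USER=$(echo "$DATABASE_URL" | sed -n 's/.*:\/\/\([^:]*\):.*/\1/p')
DB_HOST=$(echo "$DATABASE_URL" | sed -n 's/.*@\([^:]*\):.*/\1/p')
DB_PORT=$(echo "$DATABASE_URL" | sed -n 's/.*:\([0-9]*\)\/.*/\1/p')
DB_NAME=$(echo "$DATABASE_URL" | sed -n 's/.*\/\(.*\)/\1/p')

echo "$DB_USER $DB_HOST $DB_PORT $DB_NAME"
```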

Step 3: Configure Failure Notifications

RapidForge allows you to set up an "On Fail" script that automatically runs when a periodic task or webhook fails. This is perfect for sending failure notifications without cluttering your main script with error handling.

  1. In your periodic task settings, find the On Fail section
  2. Enable the on-fail handler
  3. Add the following script:

Slack Notification:

#!/bin/bash

# RapidForge automatically provides these environment variables on failure:
# - FAILURE_EXIT_CODE: The exit code from the failed task
# - FAILURE_OUTPUT: Standard output from the failed task
# - FAILURE_ERROR: Standard error from the failed task
# - TASK_ID: The ID of the failed task

if [ -n "$SLACK_WEBHOOK" ]; then
    ERROR_MSG="${FAILURE_ERROR:-$FAILURE_OUTPUT}"

    curl -X POST -H 'Content-type: application/json' \
      --data "{\"text\":\"❌ Database Backup Failed\\n*Task ID:* ${TASK_ID}\\n*Exit Code:* ${FAILURE_EXIT_CODE}\\n\\n\`\`\`${ERROR_MSG}\`\`\`\"}" \
      "$SLACK_WEBHOOK"
fi
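
One caveat: FAILURE_ERROR can contain double quotes or newlines that would break the hand-built JSON string above. If jq happens to be available in the task environment, it can build the payload safely; a sketch with a simulated error message:

```shell
# Simulated error output containing characters that break naive JSON quoting
ERROR_MSG='pg_dump: error: connection to server failed
detail: "timeout"'

# jq escapes quotes and newlines for us, producing valid JSON
PAYLOAD=$(jq -n --arg msg "$ERROR_MSG" '{text: ("❌ Database Backup Failed\n" + $msg)}')
echo "$PAYLOAD"

# The notification would then be sent with:
#   curl -X POST -H 'Content-type: application/json' --data "$PAYLOAD" "$SLACK_WEBHOOK"
```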

Step 4: Test the Backup

After creating the periodic task:

  1. Check the Events tab to view logs and verify the backup was created successfully
  2. Verify the backup file appears in your S3 bucket
  3. Check your Slack channel for the notification

Step 5: Monitoring and Maintenance

RapidForge provides several ways to monitor your backup automation:

  • The Events tab shows logs and exit status for every run
  • The On Fail script alerts you in Slack whenever a run fails
  • The success notification confirms each backup's filename and size

Encryption

Add GPG encryption before uploading to S3:

# Encrypt the backup (gpg >= 2.1 needs --batch and --pinentry-mode loopback
# to accept a passphrase non-interactively)
gpg --batch --yes --pinentry-mode loopback --symmetric --cipher-algo AES256 \
    --passphrase "$BACKUP_PASSWORD" "$LOCAL_PATH"
ENCRYPTED_FILE="${LOCAL_PATH}.gpg"

# Upload encrypted file
aws s3 cp "$ENCRYPTED_FILE" "s3://${S3_BUCKET}/${BACKUP_FILE}.gpg"
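
Restoring an encrypted backup means decrypting it with the same passphrase. A round-trip sketch, assuming gpg >= 2.1 (the passphrase is hardcoded here only for the demo; in practice it comes from credential storage):

```shell
BACKUP_PASSWORD="demo-pass"          # demo only; use credential storage in practice
echo "SELECT 1;" > /tmp/demo.sql

# Encrypt, mirroring the snippet above
gpg --batch --yes --pinentry-mode loopback --passphrase "$BACKUP_PASSWORD" \
    --symmetric --cipher-algo AES256 -o /tmp/demo.sql.gpg /tmp/demo.sql

# Decrypt, e.g. after downloading the .gpg file back from S3
gpg --batch --yes --pinentry-mode loopback --passphrase "$BACKUP_PASSWORD" \
    -o /tmp/demo.restored.sql -d /tmp/demo.sql.gpg

diff /tmp/demo.sql /tmp/demo.restored.sql && echo "round trip OK"
```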

This approach gives you enterprise-grade backup automation with minimal setup and maximum flexibility, all managed through RapidForge's intuitive interface.