Set Up Server Backup Rotation on Ubuntu VPS: Complete Automation Guide

By Raman Kumar


Updated on May 14, 2026


Why Server Backup Rotation Matters for VPS Users

Your VPS generates backups daily, but without proper rotation, you'll quickly run out of disk space. Server backup rotation automatically manages your backup files. It keeps recent backups while removing older ones according to your retention policy.

Most hosting customers learn this lesson the hard way. Your backup script works perfectly for weeks. Then suddenly your VPS runs out of space because you've accumulated hundreds of backup files. Without rotation, a 50GB database can fill a 500GB disk in just 10 days of daily backups.

This tutorial walks you through building an automated rotation system on Ubuntu VPS. You'll create retention policies, set up cleanup scripts, and integrate everything with cron for hands-free operation.

Understanding Backup Retention Policies

Before writing scripts, define your retention requirements. Most businesses follow a 3-2-1 backup strategy: 3 copies of data, 2 different storage types, 1 offsite copy.

Common rotation patterns include:

  • Daily: Keep 7 days of daily backups
  • Weekly: Keep 4 weeks of weekly backups
  • Monthly: Keep 12 months of monthly backups
  • Yearly: Keep 3-7 years of yearly backups

For most Hostperl VPS customers, a simple daily rotation works well. Keep the last 7 daily backups, 4 weekly backups, and 6 monthly backups. This provides good recovery options without excessive storage use.
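As a rough sanity check before committing disk space, the policy above can be costed with a little shell arithmetic. The 5GB per-backup figure below is a hypothetical example; substitute your own compressed backup size:

```shell
#!/bin/bash
# Hypothetical compressed size of one full backup, in GB (example value)
BACKUP_SIZE_GB=5

# Retention policy from above: 7 daily + 4 weekly + 6 monthly
DAILY_KEEP=7
WEEKLY_KEEP=4
MONTHLY_KEEP=6

TOTAL_COPIES=$((DAILY_KEEP + WEEKLY_KEEP + MONTHLY_KEEP))
TOTAL_GB=$((TOTAL_COPIES * BACKUP_SIZE_GB))

echo "Copies retained: $TOTAL_COPIES"      # 17
echo "Estimated storage: ${TOTAL_GB}GB"    # 85GB
```

Seventeen retained copies at 5GB each means roughly 85GB of backup storage, before compression savings from incremental strategies.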

Creating the Backup Directory Structure

Organize your backups with a clear directory structure. This makes rotation scripts simpler and recovery faster.

sudo mkdir -p /backup/{daily,weekly,monthly}
sudo mkdir -p /backup/logs

Create a configuration file for your rotation settings:

sudo nano /backup/backup-config.conf

Add these configuration options:

# Backup Configuration
BACKUP_ROOT="/backup"
SOURCE_DIRS="/var/www /etc /home"
DATABASES="wordpress_db ecommerce_db"
MYSQL_USER="backup_user"
MYSQL_PASS="secure_password"

# Retention Policy
DAILY_KEEP=7
WEEKLY_KEEP=4
MONTHLY_KEEP=6

# Storage Limits
MAX_BACKUP_SIZE_GB=100
WARN_THRESHOLD_PERCENT=80

Set appropriate permissions:

sudo chmod 600 /backup/backup-config.conf
sudo chown root:root /backup/backup-config.conf

Building the File System Backup Script

Create the main backup script that handles both backup creation and rotation:

sudo nano /backup/rotate-backup.sh

Add this comprehensive backup and rotation script:

#!/bin/bash

# Load configuration
source /backup/backup-config.conf

# Set up logging
LOG_FILE="$BACKUP_ROOT/logs/backup-$(date +%Y%m%d).log"
exec > >(tee -a "$LOG_FILE") 2>&1

echo "[$(date)] Starting backup rotation process"

# Function to create file system backup
create_filesystem_backup() {
    local backup_type=$1
    local backup_dir="$BACKUP_ROOT/$backup_type"
    local timestamp=$(date +"%Y%m%d_%H%M%S")
    local backup_name="filesystem_${backup_type}_${timestamp}.tar.gz"
    
    echo "[$(date)] Creating $backup_type filesystem backup: $backup_name"
    
    # Create compressed backup
    tar -czf "$backup_dir/$backup_name" \
        --exclude="/backup" \
        --exclude="/tmp" \
        --exclude="/var/tmp" \
        --exclude="/proc" \
        --exclude="/sys" \
        $SOURCE_DIRS 2>/dev/null
    
    if [ $? -eq 0 ]; then
        echo "[$(date)] Filesystem backup completed successfully"
        return 0
    else
        echo "[$(date)] ERROR: Filesystem backup failed"
        return 1
    fi
}

# Function to create database backups
create_database_backup() {
    local backup_type=$1
    local backup_dir="$BACKUP_ROOT/$backup_type"
    local timestamp=$(date +"%Y%m%d_%H%M%S")
    
    for db in $DATABASES; do
        local backup_name="database_${db}_${backup_type}_${timestamp}.sql.gz"
        echo "[$(date)] Creating $backup_type database backup: $backup_name"
        
        mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASS" \
            --single-transaction \
            --routines \
            --triggers \
            "$db" | gzip > "$backup_dir/$backup_name"
        
        if [ $? -eq 0 ]; then
            echo "[$(date)] Database backup for $db completed"
        else
            echo "[$(date)] ERROR: Database backup for $db failed"
        fi
    done
}

# Function to rotate backups
rotate_backups() {
    local backup_type=$1
    local keep_count=$2
    local backup_dir="$BACKUP_ROOT/$backup_type"
    
    echo "[$(date)] Rotating $backup_type backups (keeping $keep_count)"
    
    # Count current backups
    local current_count=$(ls -1 "$backup_dir"/*.tar.gz "$backup_dir"/*.sql.gz 2>/dev/null | wc -l)
    
    if [ $current_count -gt $keep_count ]; then
        local remove_count=$((current_count - keep_count))
        echo "[$(date)] Removing $remove_count old $backup_type backups"
        
        # Remove oldest backups
        ls -1t "$backup_dir"/*.tar.gz "$backup_dir"/*.sql.gz 2>/dev/null | \
            tail -n $remove_count | \
            xargs rm -f
        
        echo "[$(date)] Cleanup completed for $backup_type backups"
    else
        echo "[$(date)] No cleanup needed for $backup_type backups ($current_count <= $keep_count)"
    fi
}


Make the script executable:

sudo chmod +x /backup/rotate-backup.sh
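Before scheduling anything, it's worth exercising the count-based rotation logic in isolation. The sketch below reproduces the same `ls -1t | tail | xargs rm` pipeline against disposable fixture files in a temporary directory, so none of your real backups are touched:

```shell
#!/bin/bash
# Create five fake daily archives with distinct, increasing timestamps
demo_dir=$(mktemp -d)
for i in 1 2 3 4 5; do
    touch -d "2024-01-0$i 02:00" \
        "$demo_dir/filesystem_daily_2024010${i}_020000.tar.gz"
done

# Same idea as rotate_backups: list newest first, delete everything
# past the first $keep entries
keep=3
ls -1t "$demo_dir"/*.tar.gz | tail -n +$((keep + 1)) | xargs -r rm -f

ls -1 "$demo_dir" | wc -l    # prints 3
rm -rf "$demo_dir"
```

The two oldest fixtures are removed and three remain, mirroring what the real script does with `$DAILY_KEEP`.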

Implementing Smart Server Backup Rotation Logic

Add the main execution logic to your rotation script. This determines when to create weekly and monthly backups based on the current date:

sudo nano /backup/rotate-backup.sh

Append this logic to the end of your script:

# Determine backup type based on date
DAY_OF_WEEK=$(date +%u)  # 1=Monday, 7=Sunday
DAY_OF_MONTH=$(date +%d)

# Always create daily backup
create_filesystem_backup "daily"
create_database_backup "daily"
rotate_backups "daily" $DAILY_KEEP

# Create weekly backup on Sundays
if [ $DAY_OF_WEEK -eq 7 ]; then
    echo "[$(date)] Creating weekly backup (Sunday)"
    create_filesystem_backup "weekly"
    create_database_backup "weekly"
    rotate_backups "weekly" $WEEKLY_KEEP
fi

# Create monthly backup on first day of month
if [ "$DAY_OF_MONTH" = "01" ]; then    # %d is zero-padded, so compare as a string
    echo "[$(date)] Creating monthly backup (1st of month)"
    create_filesystem_backup "monthly"
    create_database_backup "monthly"
    rotate_backups "monthly" $MONTHLY_KEEP
fi

# Check storage usage
check_storage_usage() {
    local usage_percent=$(df "$BACKUP_ROOT" | awk 'NR==2 {print $5}' | sed 's/%//')
    echo "[$(date)] Backup directory usage: ${usage_percent}%"
    
    if [ $usage_percent -gt $WARN_THRESHOLD_PERCENT ]; then
        echo "[$(date)] WARNING: Backup storage usage exceeds ${WARN_THRESHOLD_PERCENT}%"
        # Send email notification (optional)
        echo "High backup storage usage: ${usage_percent}%" | \
            mail -s "Backup Storage Warning - $(hostname)" admin@yourdomain.com
    fi
}

check_storage_usage

echo "[$(date)] Backup rotation script completed"
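The `check_storage_usage` function relies on an awk/sed pipeline to pull the usage percentage out of `df`; you can verify that extraction against a canned sample line before trusting it in production:

```shell
#!/bin/bash
# Canned two-line df output, matching the format the function parses
sample='Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/vda1       52403200 22009344  30393856  42% /'

# Same pipeline as check_storage_usage: take row 2, field 5, strip the %
usage=$(echo "$sample" | awk 'NR==2 {print $5}' | sed 's/%//')
echo "$usage"    # prints 42
```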

Setting Up Automated Execution with Cron

Configure cron to run your backup rotation automatically. Edit the root crontab:

sudo crontab -e

Add this line to run backups daily at 2 AM:

0 2 * * * /backup/rotate-backup.sh >/dev/null 2>&1

For production servers with specific maintenance windows, you might prefer:

# Run backups during low-traffic hours
0 3 * * * /backup/rotate-backup.sh

# Run additional backup verification on weekends
0 4 * * 0 /backup/verify-backups.sh

Verify your cron configuration:

sudo crontab -l
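One failure mode cron cannot prevent on its own is overlap: if a backup run takes longer than expected, the next scheduled run starts on top of it. A common safeguard, assuming util-linux's `flock` is available (it is on stock Ubuntu), is to serialize runs with a lock file. The demonstration below simulates an overlapping run with a background process:

```shell
#!/bin/bash
lock=$(mktemp)

# Simulate a long-running backup holding the lock in the background
flock "$lock" sleep 2 &
sleep 0.5

# A second, non-blocking attempt fails while the first still runs
if ! flock -n "$lock" true; then
    echo "overlap detected: skipping this run"
fi

wait
rm -f "$lock"
```

In the crontab, the same idea becomes `0 2 * * * flock -n /var/lock/rotate-backup.lock /backup/rotate-backup.sh`, which silently skips a run if the previous one is still going.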

Adding Backup Verification and Health Checks

Create a verification script to ensure your backups are valid and complete:

sudo nano /backup/verify-backups.sh

Add this verification script:
#!/bin/bash

source /backup/backup-config.conf

LOG_FILE="$BACKUP_ROOT/logs/verify-$(date +%Y%m%d).log"
exec > >(tee -a "$LOG_FILE") 2>&1

echo "[$(date)] Starting backup verification"

# Function to verify tar.gz files
verify_archives() {
    local backup_dir=$1
    local backup_type=$2
    local error_count=0
    
    echo "[$(date)] Verifying $backup_type archives in $backup_dir"
    
    for archive in "$backup_dir"/*.tar.gz; do
        if [ -f "$archive" ]; then
            echo "Checking: $(basename "$archive")"
            if ! tar -tzf "$archive" >/dev/null 2>&1; then
                echo "ERROR: Corrupted archive - $archive"
                error_count=$((error_count + 1))
            fi
        fi
    done
    
    return $error_count
}

# Function to verify database dumps
verify_databases() {
    local backup_dir=$1
    local backup_type=$2
    local error_count=0
    
    echo "[$(date)] Verifying $backup_type database dumps in $backup_dir"
    
    for dump in "$backup_dir"/*.sql.gz; do
        if [ -f "$dump" ]; then
            echo "Checking: $(basename "$dump")"
            if ! gzip -t "$dump" 2>/dev/null; then
                echo "ERROR: Corrupted database dump - $dump"
                error_count=$((error_count + 1))
            fi
        fi
    done
    
    return $error_count
}

# Verify all backup types
total_errors=0
for backup_type in daily weekly monthly; do
    if [ -d "$BACKUP_ROOT/$backup_type" ]; then
        verify_archives "$BACKUP_ROOT/$backup_type" "$backup_type"
        total_errors=$((total_errors + $?))
        
        verify_databases "$BACKUP_ROOT/$backup_type" "$backup_type"
        total_errors=$((total_errors + $?))
    fi
done

echo "[$(date)] Verification completed with $total_errors errors"

if [ $total_errors -gt 0 ]; then
    echo "Backup verification found $total_errors errors" | \
        mail -s "Backup Verification Errors - $(hostname)" admin@yourdomain.com
fi

Make the verification script executable:

sudo chmod +x /backup/verify-backups.sh
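To convince yourself the integrity checks actually catch damage, run them against one good file and one deliberately broken one:

```shell
#!/bin/bash
tmp=$(mktemp -d)

# A real gzip stream passes gzip -t
echo "SELECT 1;" | gzip > "$tmp/good.sql.gz"
gzip -t "$tmp/good.sql.gz" && echo "good.sql.gz: OK"

# Plain text with a .gz name fails the same check
echo "not gzip data" > "$tmp/bad.sql.gz"
gzip -t "$tmp/bad.sql.gz" 2>/dev/null || echo "bad.sql.gz: corrupted"

rm -rf "$tmp"
```

The same pattern applies to `tar -tzf` for the filesystem archives.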

Monitoring and Troubleshooting Common Issues

Create a monitoring script to track backup health and send alerts:

sudo nano /backup/monitor-backups.sh

Add this monitoring script:
#!/bin/bash

source /backup/backup-config.conf

# Check if backups are running
check_backup_freshness() {
    local latest_daily=$(find "$BACKUP_ROOT/daily" -name "*.tar.gz" -mtime -1 | wc -l)
    
    if [ $latest_daily -eq 0 ]; then
        echo "ERROR: No daily backups created in the last 24 hours"
        return 1
    fi
    
    echo "INFO: Found $latest_daily recent daily backups"
    return 0
}

# Check backup sizes for consistency
check_backup_sizes() {
    local daily_dir="$BACKUP_ROOT/daily"
    local avg_size=0
    local count=0
    
    # Calculate average size of recent backups
    for backup in $(ls -1t "$daily_dir"/*.tar.gz 2>/dev/null | head -5); do
        local size=$(stat -c%s "$backup")    # GNU stat syntax (Ubuntu)
        avg_size=$((avg_size + size))
        count=$((count + 1))
    done
    
    if [ $count -gt 0 ]; then
        avg_size=$((avg_size / count))
        local latest_size=$(stat -c%s "$(ls -1t "$daily_dir"/*.tar.gz | head -1)")
        
        # Alert if latest backup is 50% smaller than average
        local threshold=$((avg_size / 2))
        if [ $latest_size -lt $threshold ]; then
            echo "WARNING: Latest backup size ($latest_size bytes) significantly smaller than average ($avg_size bytes)"
            return 1
        fi
    fi
    
    return 0
}

check_backup_freshness
check_backup_sizes

Setting Up Remote Backup Storage

For complete protection, sync your local backups to remote storage. This script uploads daily backups to a remote server:

sudo nano /backup/sync-remote.sh

Add this remote sync script:
#!/bin/bash

source /backup/backup-config.conf

REMOTE_USER="backup"
REMOTE_HOST="backup.yourdomain.com"
REMOTE_PATH="/remote-backups/$(hostname)"

echo "[$(date)] Starting remote backup sync"

# Sync daily backups to remote storage
rsync -avz --delete-after \
    --include="*.tar.gz" \
    --include="*.sql.gz" \
    --exclude="*" \
    "$BACKUP_ROOT/daily/" \
    "$REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH/daily/"

# Sync weekly backups (keep longer); the trailing --exclude="*" is what
# makes the include patterns effective, so only archive files transfer
rsync -avz \
    --include="*.tar.gz" \
    --include="*.sql.gz" \
    --exclude="*" \
    "$BACKUP_ROOT/weekly/" \
    "$REMOTE_USER@$REMOTE_HOST:$REMOTE_PATH/weekly/"

echo "[$(date)] Remote sync completed"
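Rsync's include/exclude filter chain is easy to get subtly wrong, so it pays to rehearse it between two local temporary directories before pointing it at a remote host. The trailing `--exclude="*"` is what makes the include patterns take effect:

```shell
#!/bin/bash
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/site.tar.gz" "$src/db.sql.gz" "$src/notes.txt"

# Only files matching the archive patterns survive the filter
rsync -a --include="*.tar.gz" --include="*.sql.gz" --exclude="*" \
    "$src/" "$dst/"

ls -1 "$dst"    # db.sql.gz and site.tar.gz, but no notes.txt
rm -rf "$src" "$dst"
```

For the real transfer, set up SSH key authentication for the backup user first so the cron job can run non-interactively.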

Ready to implement professional backup rotation on your server? Hostperl VPS hosting provides the reliable infrastructure and root access you need for custom backup solutions. Our New Zealand-based support team can help you optimize your backup strategy and ensure your rotation scripts run properly.

Frequently Asked Questions

How much storage space should I allocate for backup rotation?

Plan for 3-5x your data size for a complete rotation cycle. A 20GB website with daily, weekly, and monthly retention typically needs 60-100GB of backup storage. Monitor your actual usage and adjust retention policies accordingly.

What happens if the backup script fails during rotation?

The script creates new backups before removing old ones. You never lose existing backups due to rotation failures. Failed backups generate log entries and email alerts. Always verify your backups are working before depending on automated rotation.

Can I run backup rotation on shared hosting?

Shared hosting typically doesn't provide the root access or cron flexibility needed for custom backup rotation. Most shared hosting providers include their own backup systems. For full control over backup rotation, you need VPS or dedicated hosting.

How do I test backup restoration before disaster strikes?

Regularly test restoration by extracting backup files to a temporary directory and verifying the contents. For databases, restore to a test database and run basic queries. Document your restoration procedures and practice them quarterly.
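A minimal restore rehearsal can be scripted end to end with scratch data; the same shape works against a real archive if you swap in an actual backup path:

```shell
#!/bin/bash
tmp=$(mktemp -d)

# Build a tiny stand-in for a real backup
mkdir -p "$tmp/source/site"
echo "hello" > "$tmp/source/site/index.html"
tar -czf "$tmp/backup.tar.gz" -C "$tmp/source" site

# Restore into a scratch directory and verify the contents
mkdir "$tmp/restore"
tar -xzf "$tmp/backup.tar.gz" -C "$tmp/restore"
cat "$tmp/restore/site/index.html"    # prints hello

rm -rf "$tmp"
```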

Should I encrypt backups during rotation?

Yes, especially for sensitive data or remote storage. Add GPG encryption to your backup script: gpg --cipher-algo AES256 --compress-algo 1 --symmetric --output backup.tar.gz.gpg backup.tar.gz. Store encryption keys securely and separately from backups.
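A symmetric round trip is worth rehearsing so you know decryption works before you need it. The sketch below uses a throwaway passphrase inline purely for demonstration; in a real script, read it from a root-only file instead. Note that with GnuPG 2.x, unattended passphrase handling requires `--batch` together with `--pinentry-mode loopback`:

```shell
#!/bin/bash
tmp=$(mktemp -d)
echo "backup payload" > "$tmp/backup.tar.gz"

# Encrypt (passphrase shown inline only for demonstration)
gpg --batch --yes --pinentry-mode loopback --passphrase "example-pass" \
    --cipher-algo AES256 --symmetric \
    --output "$tmp/backup.tar.gz.gpg" "$tmp/backup.tar.gz"

# Decrypt and confirm the round trip is lossless
gpg --batch --yes --pinentry-mode loopback --passphrase "example-pass" \
    --decrypt --output "$tmp/restored.tar.gz" "$tmp/backup.tar.gz.gpg" 2>/dev/null

cmp "$tmp/backup.tar.gz" "$tmp/restored.tar.gz" && echo "round-trip OK"
rm -rf "$tmp"
```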