
Comprehensive Database Backup Solutions for Linux


Database backups require special attention because they hold structured data that is often the core of business operations. Unlike regular file backups, a database backup must capture a transactionally consistent view of the data: copying a running database's files with ordinary file tools is likely to produce an internally inconsistent, unrestorable copy.

DATABASE BACKUP IMPORTANCE
+------------------------------------------------------------------+
| |
| ┌─────────────────────────────────────────────────────────┐ │
| │ DATABASE BACKUP CHALLENGES │ │
| │ │ │
| │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
| │ │ CONSISTENCY │ │ VOLUME │ │ COMPLEXITY │ │ │
| │ │ │ │ │ │ │ │ │
| │ │ Must ensure │ │ Large data │ │ Different │ │ │
| │ │ ACID │ │ sets need │ │ database │ │ │
| │ │ properties │ │ efficient │ │ systems │ │ │
| │ │ during │ │ methods │ │ require │ │ │
| │ │ backup │ │ │ │ different │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │
| │ │ │
| │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
| │ │ RECOVERY │ │ POINT-IN- │ │ STORAGE │ │ │
| │ │ TIME │ │ TIME │ │ MANAGEMENT│ │ │
| │ │ │ │ │ │ │ │ │
| │ │ Critical │ │ Need exact │ │ Balancing │ │ │
| │ │ business │ │ recovery │ │ retention, │ │ │
| │ │ downtime │ │ point │ │ cost, and │ │ │
| │ │ impacts │ │ options │ │ safety │ │ │
| │ └─────────────┘ └─────────────┘ └─────────────┘ │ │
| │ │ │
| └─────────────────────────────────────────────────────────┘ │
| |
+------------------------------------------------------------------+
DATABASE BACKUP TYPES
+------------------------------------------------------------------+
| |
| ┌──────────────────────────────────────────────────────────────┐ │
| │ FULL BACKUP │ │
| │ │ │
| │ ┌──────────────────────────────────────────────────────┐ │ │
| │ │ Database: [████████████████████████] 100% Complete │ │ │
| │ │ │ │ │
| │ │ Pros: Complete restore, simple process │ │ │
| │ │ Cons: Large storage, long backup time │ │ │
| │ └──────────────────────────────────────────────────────┘ │ │
| └──────────────────────────────────────────────────────────────┘ │
| |
| ┌──────────────────────────────────────────────────────────────┐ │
| │ INCREMENTAL BACKUP │ │
| │ │ │
| │ Day 1: [████████████] Full backup │ │
| │ Day 2: [███.........] Changes since Day 1 │ │
| │ Day 3: [█..........] Changes since Day 2 │ │
| │ Day 4: [███.........] Changes since Day 3 │ │
| │ │ │
| │ Note: Each backup holds only the changes since the previous │ │
| │ backup, so restoring requires the full backup plus │ │
| │ every incremental in order │ │
| └──────────────────────────────────────────────────────────────┘ │
| |
| ┌──────────────────────────────────────────────────────────────┐ │
| │ LOG/TRANSACTION BACKUP │ │
| │ │ │
| │ ┌────────────┐ ┌────────────┐ ┌────────────┐ │ │
| │ │ WAL/ │ │ WAL/ │ │ WAL/ │ │ │
| │ │ REDO │ │ REDO │ │ REDO │ │ │
| │ │ Log 001 │ │ Log 002 │ │ Log 003 │ │ │
| │ └────────────┘ └────────────┘ └────────────┘ │ │
| │ │ │
| │ Enables point-in-time recovery (PITR) │ │
| └──────────────────────────────────────────────────────────────┘ │
| |
+------------------------------------------------------------------+
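Restoring an incremental chain means applying the newest full backup first, then every incremental taken after it, in order. A minimal sketch of assembling that order, assuming a hypothetical naming scheme of full_YYYYMMDD.sql and incr_YYYYMMDD.sql files in one directory:

```shell
# List the files to restore, in order: the newest full backup, then every
# incremental taken after it (lexicographic sort works for YYYYMMDD stamps).
restore_order() {
    local dir="$1"
    local full
    full=$(ls "$dir"/full_*.sql | sort | tail -n 1)
    echo "$full"
    local full_stamp="${full##*/full_}"
    for incr in $(ls "$dir"/incr_*.sql 2>/dev/null | sort); do
        # keep only incrementals stamped after the chosen full backup
        if [[ "${incr##*/incr_}" > "$full_stamp" ]]; then
            echo "$incr"
        fi
    done
}
```

Feeding the resulting list to the database's restore tool in that order reproduces the state as of the last incremental.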

MySQL/MariaDB Backups with mysqldump
# =============================================================================
# BASIC MYSQLDUMP OPERATIONS
# =============================================================================
# Backup single database
mysqldump -u root -p mydatabase > mydatabase.sql
# Backup multiple databases
mysqldump -u root -p --databases db1 db2 db3 > multi_db.sql
# Backup all databases
mysqldump -u root -p --all-databases > all_databases.sql
# Backup with compression
mysqldump -u root -p mydatabase | gzip > mydatabase.sql.gz
# Backup specific tables
mysqldump -u root -p mydatabase table1 table2 table3 > tables.sql
# =============================================================================
# ADVANCED OPTIONS
# =============================================================================
# Complete backup with all options
mysqldump -u root -p \
--routines \
--triggers \
--events \
--master-data=2 \
--single-transaction \
--flush-logs \
mydatabase > full_backup.sql
# Where:
# --routines : Include stored procedures and functions
# --triggers : Include triggers
# --events : Include scheduled events
# --master-data : Include CHANGE MASTER TO statement for replication (renamed --source-data in MySQL 8.0.26+)
# --single-transaction : Consistent backup for InnoDB
# --flush-logs : Rotate MySQL logs
# =============================================================================
# ROW-LEVEL OPTIONS
# =============================================================================
# Export only rows matching condition
mysqldump -u root -p mydatabase users --where="created_at > '2024-01-01'" > recent_users.sql
# Export specific rows to CSV
mysqldump -u root -p mydatabase users --where="id > 1000" --tab=/tmp --fields-terminated-by=','
# =============================================================================
# STRUCTURE ONLY (NO DATA)
# =============================================================================
# Backup schema only
mysqldump -u root -p --no-data mydatabase > schema.sql
# Backup data only (no schema)
mysqldump -u root -p --no-create-info mydatabase > data.sql
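A dump is only useful if it actually ran to completion; mysqldump writes a "-- Dump completed" marker as its final line, which makes a cheap sanity check possible. A minimal sketch for verifying a gzipped dump (the file path is whatever your backup job produced):

```shell
# Return 0 only if the gzip stream is intact AND the dump ran to completion.
verify_dump() {
    local file="$1"
    # gzip -t checks archive integrity without writing anything to disk
    gzip -t "$file" || return 1
    # mysqldump appends "-- Dump completed on <timestamp>" as its last line
    gunzip -c "$file" | tail -n 5 | grep -q "Dump completed"
}
```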
#!/bin/bash
#===============================================================================
# MySQL/MariaDB Automated Backup Script
#===============================================================================
set -euo pipefail
# Configuration
MYSQL_USER="backup"
MYSQL_PASSWORD="backup_password"
MYSQL_HOST="localhost"
BACKUP_DIR="/backup/mysql"
RETENTION_DAYS=30
LOG_FILE="/var/log/mysql_backup.log"
# Databases to backup
DATABASES=("app_db" "users_db" "orders_db")
# Create backup directory
mkdir -p "$BACKUP_DIR"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Backup single database
backup_database() {
local db_name="$1"
local timestamp
timestamp=$(date +%Y%m%d_%H%M%S)
local backup_file="${BACKUP_DIR}/${db_name}_${timestamp}.sql.gz"
log "Backing up database: $db_name"
mysqldump \
-h "$MYSQL_HOST" \
-u "$MYSQL_USER" \
-p"$MYSQL_PASSWORD" \
--routines \
--triggers \
--events \
--single-transaction \
--flush-logs \
"$db_name" | gzip > "$backup_file"
# Verify backup
if [ -s "$backup_file" ]; then
local size
size=$(du -h "$backup_file" | cut -f1)
log "Backup completed: $backup_file ($size)"
# Create symlink to latest
ln -sf "$backup_file" "${BACKUP_DIR}/${db_name}_latest.sql.gz"
return 0
else
log "ERROR: Backup file is empty: $backup_file"
return 1
fi
}
# Backup all databases
backup_all_databases() {
local timestamp
timestamp=$(date +%Y%m%d_%H%M%S)
local backup_file="${BACKUP_DIR}/all_databases_${timestamp}.sql.gz"
log "Backing up all databases"
mysqldump \
-h "$MYSQL_HOST" \
-u "$MYSQL_USER" \
-p"$MYSQL_PASSWORD" \
--all-databases \
--routines \
--triggers \
--events \
--single-transaction \
--flush-logs | gzip > "$backup_file"
log "All databases backup completed: $backup_file"
}
# Cleanup old backups
cleanup() {
log "Cleaning up backups older than $RETENTION_DAYS days"
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
log "Cleanup completed"
}
# Main function
main() {
local start_time
start_time=$(date +%s)
log "========================================="
log "Starting MySQL backup process"
log "========================================="
# Backup each database
for db in "${DATABASES[@]}"; do
backup_database "$db" || log "WARNING: Failed to backup $db"
done
# Backup all databases
backup_all_databases
# Cleanup
cleanup
local end_time
end_time=$(date +%s)
local duration=$((end_time - start_time))
log "========================================="
log "Backup completed in ${duration} seconds"
log "========================================="
}
main "$@"
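A script like the one above is typically driven by cron. A sketch of a system crontab entry (the install path is hypothetical):

```shell
# /etc/crontab entry: run the backup nightly at 02:00 as root
# (fields: minute hour day-of-month month day-of-week user command)
0 2 * * * root /usr/local/bin/mysql_backup.sh
```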

MySQL Binary Log Backup (Point-in-Time Recovery)

# =============================================================================
# ENABLE BINARY LOGGING
# =============================================================================
# Add to my.cnf [mysqld] section
[mysqld]
log-bin=mysql-bin
binlog_format=ROW
expire_logs_days=7
max_binlog_size=100M
# Restart MySQL
sudo systemctl restart mysql
# =============================================================================
# FIND CURRENT BINLOG POSITION
# =============================================================================
mysql -u root -p -e "SHOW MASTER STATUS\G"
# Output:
# *************************** 1. row ***************************
# File: mysql-bin.000001
# Position: 107
# Binlog_Do_DB: test
# Binlog_Ignore_DB: mysql
# =============================================================================
# BACKUP BINARY LOGS
# =============================================================================
# Copy binary logs to backup location
mysqlbinlog --read-from-remote-server \
--user=backup \
--password=backup_password \
--host=localhost \
mysql-bin.000001 > /backup/mysql-bin.000001
# Or backup using mysqlbinlog with all files
mysqlbinlog mysql-bin.0* | gzip > /backup/binlogs.sql.gz
# =============================================================================
# RESTORE FROM BINARY LOGS
# =============================================================================
# Restore from full backup first
gunzip < backup.sql.gz | mysql -u root -p mydatabase
# Apply binary logs to point in time
mysqlbinlog --stop-datetime="2024-01-15 10:30:00" \
mysql-bin.000001 mysql-bin.000002 | mysql -u root -p mydatabase

PostgreSQL Backups with pg_dump
# =============================================================================
# BASIC PGDUMP OPERATIONS
# =============================================================================
# Backup single database
pg_dump -U postgres mydatabase > mydatabase.sql
# Backup with compression
pg_dump -U postgres mydatabase | gzip > mydatabase.sql.gz
# Backup in custom format (recommended for large databases)
pg_dump -U postgres -Fc mydatabase > mydatabase.dump
# Backup in directory format (parallel backup)
pg_dump -U postgres -Fd mydatabase -f mydatabase_dumpdir
# Backup all databases
pg_dumpall -U postgres > all_databases.sql
# Backup only roles
pg_dumpall -U postgres --roles-only > roles.sql
# Backup only tablespaces
pg_dumpall -U postgres --tablespaces-only > tablespaces.sql
# =============================================================================
# ADVANCED OPTIONS
# =============================================================================
# Backup with schema and data
pg_dump -U postgres -Fc mydatabase > full_backup.dump
# Schema only
pg_dump -U postgres --schema-only mydatabase > schema.sql
# Data only
pg_dump -U postgres --data-only mydatabase > data.sql
# Exclude table
pg_dump -U postgres --exclude-table=logs mydatabase > no_logs.sql
# Backup specific schema
pg_dump -U postgres -n myschema mydatabase > myschema.sql
# Parallel backup (faster for large databases)
pg_dump -U postgres -Fd mydatabase -j 4 -f mydatabase_parallel
# =============================================================================
# RESTORE OPERATIONS
# =============================================================================
# Restore from plain SQL
psql -U postgres mydatabase < mydatabase.sql
# Restore from compressed
gunzip < mydatabase.sql.gz | psql -U postgres mydatabase
# Restore from custom format
pg_restore -U postgres -d mydatabase mydatabase.dump
# Restore in parallel (for directory format)
pg_restore -U postgres -d mydatabase -j 4 mydatabase_parallel
#!/bin/bash
#===============================================================================
# PostgreSQL Automated Backup Script
#===============================================================================
set -euo pipefail
# Configuration
PGUSER="postgres"
BACKUP_DIR="/backup/postgres"
RETENTION_DAYS=30
LOG_FILE="/var/log/postgres_backup.log"
# Database to backup
DATABASE="appdb"
mkdir -p "$BACKUP_DIR"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Backup database in custom format
backup_database() {
local timestamp
timestamp=$(date +%Y%m%d_%H%M%S)
local backup_file="${BACKUP_DIR}/${DATABASE}_${timestamp}.dump"
log "Backing up database: $DATABASE"
pg_dump -U "$PGUSER" -Fc "$DATABASE" -f "$backup_file"
# Verify backup
if [ -s "$backup_file" ]; then
local size
size=$(du -h "$backup_file" | cut -f1)
log "Backup completed: $backup_file ($size)"
# Create symlink to latest
ln -sf "$backup_file" "${BACKUP_DIR}/${DATABASE}_latest.dump"
return 0
else
log "ERROR: Backup failed"
return 1
fi
}
# Take a base backup for PITR (combine with continuous WAL archiving)
backup_wal() {
local timestamp
timestamp=$(date +%Y%m%d_%H%M%S)
local wal_dir="${BACKUP_DIR}/wal/${timestamp}"
mkdir -p "$wal_dir"
# pg_basebackup: tar format (-Ft), gzipped (-z), show progress (-P)
pg_basebackup -U "$PGUSER" -D "$wal_dir" -Ft -z -P
log "Base backup completed: $wal_dir"
}
# Cleanup old backups
cleanup() {
log "Cleaning up backups older than $RETENTION_DAYS days"
find "$BACKUP_DIR" -name "*.dump" -mtime +$RETENTION_DAYS -delete
# Guard: under set -e, find on a missing directory would abort the script
if [ -d "$BACKUP_DIR/wal" ]; then
find "$BACKUP_DIR/wal" -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} +
fi
log "Cleanup completed"
}
main() {
log "Starting PostgreSQL backup"
backup_database
cleanup
log "PostgreSQL backup completed"
}
main "$@"
POSTGRESQL PITR ARCHITECTURE
+------------------------------------------------------------------+
| |
| BACKUP PROCESS |
| ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
| │ pg_base │ │ WAL files │ │ Archive │ │
| │ backup │ │ continue │ │ Storage │ │
| │ │───→│ writing │───→│ │ │
| │ (base backup│ │ │ │ │ │
│ │ + WAL) │ │ (transaction│ │ │ │
| └─────────────┘ │ logs) │ └─────────────┘ │
| └─────────────┘ |
| |
| RECOVERY PROCESS │
| ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
| │ Stop │ │ Copy base │ │ Configure │ │
| │ PostgreSQL│───→│ backup │───→│ recovery │ │
│ │ │ │ │ │ target │ │
| └─────────────┘ └─────────────┘ └─────────────┘ │
| │ |
| ▼ |
| ┌─────────────┐ │
| │ Start │ │
| │ PostgreSQL │ │
| │ Auto │ │
| │ recovers │ │
| └─────────────┘ |
| |
+------------------------------------------------------------------+
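The archiving and recovery steps in the diagram map to a handful of postgresql.conf settings. A sketch, assuming the archive directory used by the script above and PostgreSQL 12 or later (paths and target time are illustrative):

```shell
# -- Backup side: archive every completed WAL segment --
archive_mode = on
archive_command = 'cp %p /backup/postgres/wal/%f'   # %p = segment path, %f = file name

# -- Recovery side: restore the base backup into the data directory, then set --
restore_command = 'cp /backup/postgres/wal/%f %p'
recovery_target_time = '2024-01-15 10:30:00'
# Finally, create an empty recovery.signal file in the data directory and
# start PostgreSQL; it replays archived WAL up to the target time.
```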

MongoDB Backups with mongodump
# =============================================================================
# BASIC MONGODUMP OPERATIONS
# =============================================================================
# Backup single database
mongodump --uri="mongodb://localhost:27017/mydatabase" --out=/backup/
# Backup with authentication
mongodump \
--uri="mongodb://user:password@localhost:27017/mydatabase" \
--out=/backup/
# Backup specific collection
mongodump \
--uri="mongodb://localhost:27017/mydatabase" \
--collection=users \
--out=/backup/
# Backup with query filter
mongodump \
--uri="mongodb://localhost:27017/mydatabase" \
--collection=users \
--query='{"created_at": {"$gt": {"$date": "2024-01-01T00:00:00Z"}}}' \
--out=/backup/
# =============================================================================
# ADVANCED OPTIONS
# =============================================================================
# Backup with oplog (replica sets only; yields a consistent point-in-time snapshot)
mongodump \
--uri="mongodb://localhost:27017" \
--oplog \
--out=/backup/$(date +%Y%m%d)
# Exclude specific collections
mongodump \
--uri="mongodb://localhost:27017/mydatabase" \
--excludeCollection=logs \
--excludeCollection=audit \
--out=/backup/
# Parallel backup (improves speed)
mongodump \
--uri="mongodb://localhost:27017/mydatabase" \
--numParallelCollections=4 \
--out=/backup/
# Archive format
mongodump \
--uri="mongodb://localhost:27017/mydatabase" \
--archive=/backup/mydatabase.archive \
--gzip
# =============================================================================
# MONGORESTORE OPERATIONS
# =============================================================================
# Restore database
mongorestore --uri="mongodb://localhost:27017" /backup/mydatabase/
# Restore with drop (remove existing data first)
mongorestore \
--uri="mongodb://localhost:27017" \
--drop \
/backup/mydatabase/
# Restore to specific database
mongorestore \
--uri="mongodb://localhost:27017" \
--nsInclude="mydatabase.*" \
--nsTo="newdatabase.*" \
/backup/mydatabase/
# Restore from archive
mongorestore \
--uri="mongodb://localhost:27017" \
--archive=/backup/mydatabase.archive \
--gzip
#!/bin/bash
#===============================================================================
# MongoDB Automated Backup Script
#===============================================================================
set -euo pipefail
# Configuration
MONGO_URI="mongodb://localhost:27017"
BACKUP_DIR="/backup/mongodb"
RETENTION_DAYS=30
LOG_FILE="/var/log/mongodb_backup.log"
mkdir -p "$BACKUP_DIR"
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
backup_database() {
local timestamp
timestamp=$(date +%Y%m%d_%H%M%S)
local backup_path="${BACKUP_DIR}/mongodb_${timestamp}"
log "Starting MongoDB backup"
# Backup all databases with oplog (for point-in-time recovery)
mongodump \
--uri="$MONGO_URI" \
--oplog \
--out="$backup_path"
# Compress backup
tar -czf "${backup_path}.tar.gz" -C "$BACKUP_DIR" "mongodb_${timestamp}"
rm -rf "$backup_path"
# Verify
if [ -s "${backup_path}.tar.gz" ]; then
local size
size=$(du -h "${backup_path}.tar.gz" | cut -f1)
log "Backup completed: ${backup_path}.tar.gz ($size)"
ln -sf "${backup_path}.tar.gz" "${BACKUP_DIR}/mongodb_latest.tar.gz"
else
log "ERROR: Backup failed"
return 1
fi
}
cleanup() {
log "Cleaning up old backups"
find "$BACKUP_DIR" -name "*.tar.gz" -mtime +$RETENTION_DAYS -delete
}
main() {
log "Starting MongoDB backup process"
backup_database
cleanup
log "MongoDB backup completed"
}
main "$@"
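Since the script compresses each dump into a tarball, a quick integrity check before relying on it is cheap. A minimal sketch:

```shell
# Return 0 only if the tarball can be fully listed (no corruption or truncation)
verify_archive() {
    tar -tzf "$1" > /dev/null 2>&1
}
```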

Redis Backup Strategies
# =============================================================================
# REDIS RDB SNAPSHOTS
# =============================================================================
# Trigger manual save (blocking)
redis-cli SAVE
# Background save (non-blocking)
redis-cli BGSAVE
# Check last save status
redis-cli LASTSAVE
# Configure automatic saves in redis.conf
# save 900 1 # Snapshot if at least 1 key changed within 900 seconds
# save 300 10 # Snapshot if at least 10 keys changed within 300 seconds
# save 60 10000 # Snapshot if at least 10000 keys changed within 60 seconds
# =============================================================================
# REDIS AOF (APPEND ONLY FILE)
# =============================================================================
# Enable AOF in redis.conf
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
# Rewrite AOF manually
redis-cli BGREWRITEAOF
# Check AOF status
redis-cli INFO persistence
# =============================================================================
# BACKUP REDIS FILES
# =============================================================================
# Redis backup directory
REDIS_DATA_DIR="/var/lib/redis"
BACKUP_DIR="/backup/redis"
# Create backup
backup_redis() {
local timestamp lastsave
timestamp=$(date +%Y%m%d_%H%M%S)
# Create backup directory
mkdir -p "$BACKUP_DIR"
# Record the last save time, then trigger a background save
lastsave=$(redis-cli LASTSAVE)
redis-cli BGSAVE
# Wait until LASTSAVE advances, i.e. the background save has finished
while [ "$(redis-cli LASTSAVE)" = "$lastsave" ]; do
sleep 1
done
# Copy RDB file
cp "$REDIS_DATA_DIR/dump.rdb" "$BACKUP_DIR/dump_${timestamp}.rdb"
# Copy AOF if enabled
if [ -f "$REDIS_DATA_DIR/appendonly.aof" ]; then
cp "$REDIS_DATA_DIR/appendonly.aof" "$BACKUP_DIR/appendonly_${timestamp}.aof"
fi
echo "Redis backup completed: $timestamp"
}

DATABASE BACKUP BEST PRACTICES CHECKLIST
+------------------------------------------------------------------+
| |
| ┌─────────────────────────────────────────────────────────────┐ │
| │ MYSQL/MARIADB │ │
| ├─────────────────────────────────────────────────────────────┤ │
| │ □ Use --single-transaction for InnoDB │ │
| │ □ Enable binary logging for point-in-time recovery │ │
| │ □ Backup with --routines and --triggers │ │
| │ □ Test restore procedures regularly │ │
| │ □ Schedule regular automated backups │ │
| │ □ Store backups offsite │ │
| └─────────────────────────────────────────────────────────────┘ │
| |
| ┌─────────────────────────────────────────────────────────────┐ │
| │ POSTGRESQL │ │
| ├─────────────────────────────────────────────────────────────┤ │
| │ □ Use pg_dump custom format (-Fc) for large databases │ │
│ │ □ Implement WAL archiving for PITR │ │
| │ □ Use pg_basebackup for full cluster backup │ │
| │ □ Configure retention policies │ │
| │ □ Test point-in-time recovery │ │
| └─────────────────────────────────────────────────────────────┘ │
| |
| ┌─────────────────────────────────────────────────────────────┐ │
| │ MONGODB │ │
| ├─────────────────────────────────────────────────────────────┤ │
| │ □ Use --oplog for replica sets │ │
│ │ □ Enable journaling │ │
| │ □ Consider mongodump vs filesystem snapshots │ │
| │ □ Test restore with mongorestore --drop │ │
| └─────────────────────────────────────────────────────────────┘ │
| |
| ┌─────────────────────────────────────────────────────────────┐ │
| │ GENERAL │ │
| ├─────────────────────────────────────────────────────────────┤ │
| │ □ Automate all backup processes │ │
| │ □ Monitor backup success/failure │ │
| │ □ Test restore quarterly │ │
| │ □ Document recovery procedures │ │
| │ □ Implement offsite backup │ │
| │ □ Encrypt sensitive backups │ │
| └─────────────────────────────────────────────────────────────┘ │
| |
+------------------------------------------------------------------+
DATABASE BACKUP STRATEGY
+------------------------------------------------------------------+
|                                                                  |
|  START                                                           |
|    │                                                             |
|    ▼                                                             |
|  ┌─────────────────────────────┐                                 |
|  │       Database Type?        │                                 |
|  └─────────────────────────────┘                                 |
|     │           │           │           │                        |
|  MySQL/     PostgreSQL   MongoDB      Redis                      |
|  MariaDB        │           │           │                        |
|     │           │           ▼           ▼                        |
|     │           │      ┌─────────┐ ┌─────────┐                   |
|     │           │      │mongodump│ │RDB + AOF│                   |
|     │           │      │ +oplog  │ │ copies  │                   |
|     │           │      └─────────┘ └─────────┘                   |
|     ▼           ▼                                                |
|  Data Size?  Data Size?                                          |
|   │     │     │     │                                            |
| <10GB Large <10GB Large                                          |
|   │     │     │     │                                            |
|   ▼     ▼     ▼     ▼                                            |
| ┌─────────┐ ┌──────────┐ ┌─────────┐ ┌─────────────┐             |
| │mysqldump│ │xtrabackup│ │ pg_dump │ │pg_basebackup│             |
| │ +binlog │ │ +binlog  │ │  +WAL   │ │    +WAL     │             |
| └─────────┘ └──────────┘ └─────────┘ └─────────────┘             |
|                                                                  |
+------------------------------------------------------------------+

End of Chapter 44: Database Backup Strategies