A practical guide to running a personal WordPress site on Google Cloud’s Always Free tier using Docker Compose, automatic SSL, security hardening, email alerts, and daily backups to cloud storage — all at zero cost.
What you’ll end up with:
- WordPress + MySQL 8.0 + nginx/SSL running in Docker on a free GCP VM
- Automatic HTTPS via Let’s Encrypt (auto-renewing)
- Security headers, localhost-only port bindings, and a hardened nginx config
- Daily incremental backups to a free GCS bucket (~6 months of history)
- Email alerts when backups fail
- Automated weekly updates for WordPress, Docker images, and OS packages
- SSH locked down via Tailscale — port 22 closed to the public internet
- ~$0/month, indefinitely
What this costs: Nothing. GCP’s Always Free tier covers the VM, and the GCS bucket stays free as long as you keep it under 5 GB (backups for a small site are well under that).
Table of Contents
- Prerequisites
- Create the GCP VM
- Lock down SSH with Tailscale
- Tune Ubuntu for 1 GB RAM
- Point your domain at the VM with Cloudflare
- Install Docker
- Create the site directory
- Write the config files
- Start the stack
- Finish WordPress setup
- Set up email alerts
- Set up backups
- Set up auto-updates
- Maintenance
- How the memory budget works
- Troubleshooting
1. Prerequisites
- A Google account (for GCP and Google Cloud Storage)
- A domain name you control (e.g. from Namecheap, Google Domains, Cloudflare, etc.)
- A Tailscale account — free at tailscale.com (up to 100 devices)
- A Gmail account with an app password (for email alerts)
- Basic comfort with a Linux terminal
2. Create the GCP VM
GCP’s Always Free tier includes one e2-micro instance per month in certain US regions, forever — no credit card charges as long as you stay within the limits.
Create the instance
- Go to console.cloud.google.com → Compute Engine → VM instances → Create instance
- Set:
  - Name: anything (e.g. `wordpress-server`)
  - Region: `us-east1`, `us-west1`, or `us-central1` (required for Always Free)
  - Machine type: `e2-micro` (2 vCPU, 1 GB RAM)
  - Boot disk: Ubuntu 22.04 LTS, 30 GB standard persistent disk
- Under Firewall, check both Allow HTTP traffic and Allow HTTPS traffic
- Click Create
The 30 GB disk is within the Always Free 30 GB/month standard disk limit. Do not use SSD — only standard persistent disk is free.
Note your external IP
Once created, copy the External IP from the VM list. You’ll need it for DNS.
Open SSH for initial setup
Click SSH in the GCP console to open a browser terminal. You’ll use this for the next step (installing Tailscale), after which you can close public SSH permanently.
3. Lock down SSH with Tailscale
Port 22 open to the internet is the single most-scanned port on any public server. Tailscale gives you a private encrypted network between your devices — SSH stays open on that network and closed to everyone else.
Install Tailscale on the VM
In the GCP browser SSH terminal:
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
tailscale up prints an authentication URL. Open it in your browser, log in with your Tailscale account, and the VM will appear in your Tailscale admin console.
Get the VM’s Tailscale IP (always in the 100.x.x.x range):
tailscale ip -4
Install Tailscale on your local machine
Download from tailscale.com/download for macOS, Windows, or Linux and sign in with the same account. Both devices will now be on your private Tailscale network.
Test SSH over Tailscale before closing the public port
From your local machine:
ssh YOUR_USERNAME@100.x.x.x  # use the Tailscale IP from above
Confirm this works before the next step.
Close port 22 in GCP’s firewall
In the GCP console:
- Go to VPC network → Firewall
- Find the rule named `default-allow-ssh` (allows TCP port 22 from `0.0.0.0/0`)
- Click it → Delete (or Disable if you want to re-enable it later)
Port 22 is now closed to the public internet. Your Tailscale SSH connection still works because Tailscale operates over UDP 41641 (or falls back to HTTPS), not port 22.
GCP console SSH still works. The browser SSH button in the GCP console uses IAP (Identity-Aware Proxy), which tunnels through Google’s internal network — it does not go through the public firewall rule you just removed.
Optional: restrict sshd to the Tailscale interface
For maximum hardening, tell sshd to only listen on the Tailscale interface:
# Find the Tailscale interface name (usually tailscale0)
ip link show | grep tailscale
# Add to /etc/ssh/sshd_config:
echo "ListenAddress 100.x.x.x" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart ssh
Replace 100.x.x.x with your actual Tailscale IP. After this, ssh to the public IP will simply time out even if the firewall rule were ever re-enabled.
4. Tune Ubuntu for 1 GB RAM
A fresh Ubuntu 22.04 install on GCP is not optimised for a memory-constrained server. These steps reclaim memory and prevent a few common failure modes before you install anything.
Create a swap file
The e2-micro has 1 GB of RAM. Without swap, a single memory spike from Docker pulling a new image or WordPress processing a large upload can OOM-kill a container. A swap file is cheap insurance.
# Check whether swap already exists
swapon --show
# If nothing is listed, create a 4 GB swap file
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it permanent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Reduce swappiness
The default vm.swappiness of 60 tells the kernel to move pages to swap aggressively. On a server where you want RAM for active processes, 10 is better — use swap as a last resort, not a first choice:
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-tuning.conf
sudo sysctl -p /etc/sysctl.d/99-tuning.conf
# Verify
cat /proc/sys/vm/swappiness # should print 10
Cap the systemd journal size
By default journald keeps logs until the disk is 10% full. On a 30 GB boot disk that means up to 3 GB of logs — a significant chunk of your free space. Cap it:
sudo mkdir -p /etc/systemd/journald.conf.d
sudo tee /etc/systemd/journald.conf.d/size.conf > /dev/null <<'EOF'
[Journal]
SystemMaxUse=200M
RuntimeMaxUse=50M
EOF
sudo systemctl restart systemd-journald
# Vacuum logs that are already over the new limit
sudo journalctl --vacuum-size=200M
Disable services you don't need
multipathd — manages multipath block devices (SANs, iSCSI). Irrelevant on a GCP VM:
sudo systemctl disable --now multipathd
rsyslog — a second logging daemon that duplicates what journald already does. Ubuntu installs both by default:
sudo systemctl disable --now rsyslog
LXD — a full VM and container manager installed via snap. Not needed when you're using Docker:
sudo snap remove lxd
Removing LXD also removes the core18 and core20 snap bases it depended on, freeing several hundred MB of disk space.
Set up UFW firewall
UFW provides a local firewall layer in addition to GCP's network firewall — defense in depth:
sudo ufw allow 22/tcp # SSH (Tailscale access)
sudo ufw allow 80/tcp # HTTP (Let's Encrypt + redirect)
sudo ufw allow 443/tcp # HTTPS
sudo ufw allow 41641/udp # Tailscale (WireGuard)
sudo ufw enable
Verify memory after tuning
free -h
sudo docker stats --no-stream # run after Docker is installed
At idle you should see roughly 200 MB used by the OS, leaving ~750 MB for your containers.
5. Point your domain at the VM with Cloudflare
Cloudflare is the recommended DNS provider for this setup. The free tier gives you DNS, DDoS protection, and a CDN — all at no cost. You just point your domain's nameservers at Cloudflare instead of your registrar.
Add your site to Cloudflare
- Sign up at cloudflare.com and click Add a site
- Enter your domain and select the Free plan
- Cloudflare will scan your existing DNS records. Review and continue.
- Cloudflare gives you two nameservers, e.g. `arlo.ns.cloudflare.com` and `june.ns.cloudflare.com`
- Log into your domain registrar and replace its nameservers with the two Cloudflare ones
- Click Done in Cloudflare — propagation takes a few minutes to a few hours
Add DNS records
In the Cloudflare dashboard under DNS → Records, add:
Type: A Name: @ IPv4: YOUR_VM_IP
Type: A Name: www IPv4: YOUR_VM_IP
Proxy toggle: orange cloud vs grey cloud
Each DNS record has a Proxy status toggle — the orange cloud icon:
| Setting | What it means |
|---|---|
| Proxied (orange cloud) | Traffic flows through Cloudflare's CDN. Hides your VM's real IP, adds DDoS protection and caching. Recommended. |
| DNS only (grey cloud) | Cloudflare is just DNS. Traffic goes directly to your VM. |
Use proxied (orange cloud). It's one of the main reasons to use Cloudflare.
Let's Encrypt HTTP-01 validation works fine through the Cloudflare proxy — swag handles it automatically.
Set Cloudflare SSL/TLS mode to Full
This is a required step when using the orange cloud proxy.
In Cloudflare: SSL/TLS → Overview → select Full (not Flexible, not Full Strict).
- Flexible — Cloudflare encrypts to visitors but sends plain HTTP to your server. Broken with swag.
- Full — Cloudflare encrypts both legs. Works correctly with swag's Let's Encrypt cert.
- Full (Strict) — Same as Full but validates the origin cert against a CA. Also works, but requires the Let's Encrypt cert to be valid before you can turn this on.
Verify DNS before starting the stack
Let's Encrypt needs the domain to resolve before it can issue a certificate:
dig +short yourdomain.com
# Should return a Cloudflare IP (not your VM IP — that's expected when proxied)
# Test that port 80 reaches your server through Cloudflare
curl -I http://yourdomain.com
6. Install Docker
SSH into your VM and run:
# Install dependencies
sudo apt update && sudo apt install -y ca-certificates curl gnupg
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Add the Docker apt repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine and Compose plugin
sudo apt update && sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Add your user to the docker group (needed for cron scripts)
sudo usermod -aG docker $USER
# Verify
sudo docker compose version
Important: always use docker compose (with a space, V2). The old hyphenated docker-compose command is a separate, deprecated Python tool that is no longer maintained.
Important: adding your user to the docker group lets cron-scheduled scripts (backup, auto-update) access Docker without sudo. Log out and back in for the group change to take effect.
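The group change is a common stumbling block: your running shell keeps its old group list until you log out. A quick, read-only check after logging back in:

```shell
# Prints whether the current session's group list includes "docker"
if id -nG | grep -qw docker; then
  echo "docker group active"
else
  echo "not active yet - log out and back in"
fi
```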
7. Create the site directory
mkdir -p ~/mysite/{www,config}
mkdir -p ~/mysqldump
cd ~/mysite
Your layout will be:
~/mysite/
├── docker-compose.yml
├── wordpress.Dockerfile # Custom WordPress image with cron
├── default.conf # nginx config with security headers
├── custom.ini # PHP overrides
├── mpm_prefork.conf # Apache worker tuning
├── mysql.cnf # MySQL 8.0 low-memory tuning
├── .env # Secrets (gitignored)
├── config/ # swag runtime data (SSL certs, nginx state)
└── www/ # WordPress files
~/mysqldump/
└── database.sql # daily MySQL dump (written by backup script)
8. Write the config files
Create each file below. Replace yourdomain.com and you@gmail.com throughout.
.env (secrets)
Keep passwords out of your compose file. Generate strong random passwords:
cat > .env <<'EOF'
MYSQL_ROOT_PASSWORD=CHANGE_ME_USE_openssl_rand_base64_16
MYSQL_PASSWORD=CHANGE_ME_USE_openssl_rand_base64_16
MYSQL_DATABASE=wp_db
MYSQL_USER=wp_db_user
EOF
chmod 600 .env
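Alternatively, you can write the file with the secrets already filled in by leaving the heredoc delimiter unquoted, so the shell expands the openssl substitutions as it writes (a sketch; same keys and values as above):

```shell
# EOF is unquoted here, so $(openssl rand ...) expands while the file is written
cat > .env <<EOF
MYSQL_ROOT_PASSWORD=$(openssl rand -base64 16)
MYSQL_PASSWORD=$(openssl rand -base64 16)
MYSQL_DATABASE=wp_db
MYSQL_USER=wp_db_user
EOF
chmod 600 .env
```

Verify with `grep CHANGE_ME .env` (should print nothing) before moving on.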
docker-compose.yml
This defines three containers: swag (nginx + SSL), wordpress (Apache + PHP), and db (MySQL 8.0). All have memory limits to stay within the 1 GB budget. Ports are bound to 127.0.0.1 to prevent Docker from bypassing your firewall.
services:
swag:
image: linuxserver/swag
container_name: swag
restart: always
depends_on:
- wordpress
volumes:
- ./config:/config
- ./default.conf:/config/nginx/site-confs/default.conf
environment:
- EMAIL=you@gmail.com
- URL=yourdomain.com
- VALIDATION=http
- TZ=America/Los_Angeles
- PUID=1000
- PGID=1000
ports:
- "443:443"
- "80:80"
mem_limit: 128m
wordpress:
build:
context: .
dockerfile: wordpress.Dockerfile
container_name: wordpress
hostname: wordpress
depends_on:
- db
restart: always
ports:
- "127.0.0.1:8080:80"
volumes:
- ./www/:/var/www/html/
- ./custom.ini:/usr/local/etc/php/conf.d/custom.ini
- ./mpm_prefork.conf:/etc/apache2/mods-available/mpm_prefork.conf
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: ${MYSQL_USER}
WORDPRESS_DB_PASSWORD: ${MYSQL_PASSWORD}
WORDPRESS_DB_NAME: ${MYSQL_DATABASE}
WORDPRESS_CONFIG_EXTRA: |
define( 'WP_MEMORY_LIMIT', '96M' );
mem_limit: 300m
db:
image: mysql:8.0
container_name: db
volumes:
- /var/lib/mysql:/var/lib/mysql
- ./mysql.cnf:/etc/mysql/conf.d/lowmem.cnf
restart: always
ports:
- "127.0.0.1:3306:3306"
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
mem_limit: 128m
Why `127.0.0.1` on ports? Docker manipulates iptables directly, bypassing UFW. Without the `127.0.0.1:` prefix, ports 3306 and 8080 would be publicly accessible even if UFW blocks them. Binding to localhost ensures only local connections and the swag reverse proxy can reach these services.
Why `/var/lib/mysql` on the host? Mounting the MySQL data directory directly on the host (rather than in a named Docker volume) means the database survives container removal and is easy to access for backups via `mysqldump`.
wordpress.Dockerfile
A custom Dockerfile that pre-installs cron into the WordPress image. This avoids running apt-get install on every container restart, which is slow and fragile:
FROM wordpress:latest
RUN apt-get update && \
apt-get install -y --no-install-recommends cron && \
rm -rf /var/lib/apt/lists/*
RUN echo '* * * * * www-data php /var/www/html/index.php -- wpaicg_builder=yes' > /etc/cron.d/wp-cron && \
chmod 0644 /etc/cron.d/wp-cron
CMD ["sh", "-c", "cron && docker-entrypoint.sh apache2-foreground"]
default.conf (nginx with security headers)
Redirects HTTP → HTTPS, proxies everything to the WordPress container, and adds a full set of security headers:
server {
listen 80;
listen [::]:80;
server_name _;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
root /var/www/html/example;
index index.html index.htm index.php;
server_name _;
include /config/nginx/proxy-confs/*.subfolder.conf;
include /config/nginx/ssl.conf;
client_max_body_size 64m;
# Security headers
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
proxy_hide_header X-Powered-By;
location / {
try_files $uri $uri/ /index.php?$args @app;
}
location @app {
proxy_pass http://wordpress;
proxy_set_header Host $host;
proxy_redirect off;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Real-IP $remote_addr;
}
}
The security headers do the following:
| Header | What it does |
|---|---|
| `Strict-Transport-Security` | Forces browsers to always use HTTPS for your domain |
| `X-Content-Type-Options` | Prevents browsers from MIME-type sniffing (blocks attacks that disguise scripts as images) |
| `X-Frame-Options` | Prevents your site from being embedded in iframes on other sites (clickjacking protection) |
| `Referrer-Policy` | Controls how much URL info is sent when users click links to other sites |
| `Permissions-Policy` | Explicitly disables access to camera, microphone, and geolocation APIs |
| `proxy_hide_header X-Powered-By` | Strips the PHP version header — less info for attackers |
custom.ini (PHP overrides)
file_uploads = On
memory_limit = 128M
upload_max_filesize = 64M
max_input_vars = 3000
post_max_size = 64M
max_execution_time = 600
mpm_prefork.conf (Apache worker limits)
Apache's default settings spawn up to 150 workers. On a 1 GB VM each PHP worker uses ~30 MB, so the default would consume all available memory. Cap it at 4:
<IfModule mpm_prefork_module>
StartServers 2
MinSpareServers 1
MaxSpareServers 3
MaxRequestWorkers 4
MaxConnectionsPerChild 200
</IfModule>
For a personal or low-traffic site, 4 concurrent requests is plenty. If you start seeing 503 errors under load, raise MaxRequestWorkers and bump the mem_limit in docker-compose.yml.
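If you do raise the caps, a rough formula keeps the two numbers consistent. The overhead and per-worker figures below are ballpark assumptions, not measurements; check yours with `sudo docker stats`:

```shell
# MaxRequestWorkers ~ (mem_limit - base Apache/PHP overhead) / RSS per worker
LIMIT_MB=300        # mem_limit from docker-compose.yml
BASE_MB=150         # assumed non-worker overhead
PER_WORKER_MB=30    # assumed RSS per PHP worker
MAX_WORKERS=$(( (LIMIT_MB - BASE_MB) / PER_WORKER_MB ))
echo "$MAX_WORKERS"
```

With the stock 300 MB limit this comes out to 5, right next to the configured cap of 4, which is why raising MaxRequestWorkers without also raising mem_limit buys very little.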
mysql.cnf (MySQL 8.0 memory tuning)
MySQL's defaults assume a dedicated server with gigabytes of RAM. These settings bring it down to ~40 MB idle on a shared 1 GB host:
[mysqld]
# InnoDB — default is 128M, 32M is plenty for a small WordPress site
innodb_buffer_pool_size = 32M
innodb_log_file_size = 8M
innodb_flush_method = O_DIRECT
# Disabling performance_schema saves ~400MB at default settings
performance_schema = OFF
# Reduce caches from their large defaults
table_open_cache = 64
thread_cache_size = 4
thread_stack = 256K
# Per-connection buffers (only allocated when in use)
read_buffer_size = 128K
read_rnd_buffer_size = 128K
sort_buffer_size = 256K
join_buffer_size = 256K
Note: `thread_stack` must be at least 256K for MySQL 8.0 (128K worked in 5.7 but causes stack overflows in 8.0). The `query_cache` settings from MySQL 5.7 have been removed entirely — MySQL 8.0 dropped the query cache feature.
9. Start the stack
cd ~/mysite
sudo docker compose up -d
Watch the logs to make sure everything comes up:
sudo docker compose logs -f
On first start, swag will:
- Request a Let's Encrypt certificate for your domain (takes ~30 seconds)
- Start nginx with HTTPS enabled
You should see mysqld: ready for connections in the db logs and then the WordPress container come up. Once swag shows Server ready, visit https://yourdomain.com.
If Let's Encrypt fails, check that your DNS A record has propagated and that ports 80/443 are open in GCP's firewall.
10. Finish WordPress setup
Navigate to https://yourdomain.com — you'll see the WordPress installation wizard.
- Choose your language
- Enter site title, admin username, password, and email
- Click Install WordPress
You're done. Your site is live at https://yourdomain.com with a valid SSL certificate that renews automatically.
11. Set up email alerts
Get notified when backups fail or updates have issues. We'll use msmtp — a lightweight SMTP client that sends through Gmail.
Create a Gmail app password
- Go to myaccount.google.com/apppasswords
- Select Mail and Other (custom name), enter "Server alerts"
- Copy the 16-character app password
Install and configure msmtp
sudo DEBIAN_FRONTEND=noninteractive apt install -y msmtp msmtp-mta
Create ~/.msmtprc:
defaults
auth on
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
logfile ~/.msmtp.log
account gmail
host smtp.gmail.com
port 587
from you@gmail.com
user you@gmail.com
password YOUR_APP_PASSWORD_HERE
account default : gmail
chmod 600 ~/.msmtprc
Test it:
echo "Test alert from $(hostname)" | msmtp you@gmail.com
The msmtp-mta package creates a sendmail symlink, so the backup and auto-update scripts can use sendmail to send alerts without any additional configuration.
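Concretely, the scripts hand sendmail a minimal hand-built message: header lines, a blank separator line, then the body. Printed here rather than piped to sendmail, so nothing is actually sent (the address is an example):

```shell
NOTIFY_EMAIL="you@gmail.com"   # example address
# Subject and To headers, blank separator line, then the body
printf "Subject: Backup FAILED\nTo: %s\n\n%s\n" "$NOTIFY_EMAIL" "mysqldump failed"
```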
12. Set up backups
This backup system dumps MySQL daily and backs up all site files to a Google Cloud Storage bucket using duplicity (incremental backups, ~6 months of history).
Create a GCS bucket
In the GCP console:
- Go to Cloud Storage → Create bucket
- Name it something unique (e.g. `mysite-backup-abc123`)
- Region: same region as your VM
- Storage class: Standard
- Leave public access blocked
The first 5 GB of GCS storage is free per month. A WordPress backup set is typically well under that.
Install backup tools
sudo apt install -y duplicity python3-pip mysql-client gcsfuse
mysql-client provides the mysqldump binary the backup script runs on the host. gcsfuse is not in Ubuntu's default repositories; if apt can't find it, add Google's gcsfuse apt repository first (see the gcsfuse install docs) and rerun the command.
gcsfuse lets you mount the GCS bucket as a local filesystem so duplicity can write to it. Authentication uses the VM's service account automatically, with no key file needed. One caveat: the VM's access scopes must allow Storage read/write; the GCE default is read-only, and scopes can only be changed while the VM is stopped.
Create directories
mkdir -p ~/backup-bucket # GCS mount point
mkdir -p ~/mysqldump
Write the backup script
Save this as ~/backup and make it executable (chmod +x ~/backup). Replace YOUR_BUCKET_NAME and you@gmail.com:
#!/bin/bash
set -uo pipefail
#
# backup — daily backup for mysite
# Dumps MySQL, backs up ~/mysite to GCS via duplicity.
# Run as your regular user (NOT sudo) — gcsfuse needs user permissions.
# Schedule: 0 0 * * * ~/backup > ~/backup.log 2>&1
#
NOTIFY_EMAIL="you@gmail.com"
WD="$HOME"
BUCKET="YOUR_BUCKET_NAME"
BACKUP_DEST="file://$WD/backup-bucket/backups"
# Load database credentials from .env
source "$WD/mysite/.env"
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"; }
fail() {
log "ERROR: $*"
local msg="Backup failed on $(hostname) at $(date): $*"
if command -v sendmail &>/dev/null; then
printf "Subject: Backup FAILED\nTo: %s\n\n%s\n" "$NOTIFY_EMAIL" "$msg" \
| sendmail "$NOTIFY_EMAIL"
fi
mount | grep -q "$WD/backup-bucket" && fusermount -u "$WD/backup-bucket"
exit 1
}
log "Backup started"
# 1. Mount GCS bucket
if mount | grep -q "$WD/backup-bucket"; then
log "Already mounted."
else
gcsfuse "$BUCKET" "$WD/backup-bucket" || fail "gcsfuse mount failed"
fi
# 2. Dump MySQL
log "Dumping MySQL..."
mysqldump --opt -h localhost --protocol=tcp \
--user="root" --password="$MYSQL_ROOT_PASSWORD" \
"$MYSQL_DATABASE" > "$WD/mysqldump/database.sql" \
|| fail "mysqldump failed"
log "Dump size: $(du -sh $WD/mysqldump/database.sql | cut -f1)"
# 3. Clean up any incomplete previous backup
duplicity cleanup --force --no-encryption "$BACKUP_DEST" \
|| log "WARNING: cleanup returned non-zero (may be harmless)"
# 4. Incremental backup (full if last full is >1 month old)
log "Running duplicity backup..."
duplicity --progress --no-encryption \
--exclude "$WD/backup-bucket" \
--full-if-older-than 1M \
"$WD" "$BACKUP_DEST" \
|| fail "duplicity failed"
# 5. Prune old chains — keep 6 fulls (~6 months)
duplicity remove-all-but-n-full 6 --force --no-encryption "$BACKUP_DEST" \
|| log "WARNING: prune returned non-zero (may be harmless)"
# 6. Unmount
fusermount -u "$WD/backup-bucket" || log "WARNING: unmount failed"
log "Backup completed successfully"
Key improvements over a basic backup script:
- `set -uo pipefail` — catches unset variables and pipe failures
- Credentials loaded from `.env` — no passwords hardcoded in the script
- Email alerts on failure via `sendmail` (provided by msmtp)
- 6-month retention keeps backup size manageable (~11 GB for a typical WordPress site)
Important: Run the backup script as your regular user, not with sudo. Running as root causes gcsfuse permission issues because the mount point ownership changes.
Test it manually before scheduling:
~/backup
cat ~/backup.log
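Because the script always logs a fixed final line on success, the last run can be graded with a quick check (assumes the log path from the cron schedule):

```shell
# "Backup completed successfully" is the script's final log line on success
if tail -n 1 ~/backup.log 2>/dev/null | grep -q "Backup completed successfully"; then
  echo "last run: OK"
else
  echo "last run: FAILED - inspect ~/backup.log"
fi
```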
13. Set up auto-updates
Automate WordPress, Docker, and OS updates with a weekly script. Save this as ~/auto-update and make it executable:
#!/bin/bash
set -uo pipefail
#
# auto-update — weekly WordPress, Docker, and OS updates
# Run as your regular user (NOT sudo).
# Schedule: 0 2 * * 0 ~/auto-update > ~/auto-update.log 2>&1
#
NOTIFY_EMAIL="you@gmail.com"
SITE_DIR="$HOME/mysite"
COMPOSE="docker compose -f $SITE_DIR/docker-compose.yml"
SITE_URL="https://yourdomain.com"
HAD_ERRORS=false
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"; }
warn() { log "WARNING: $*"; HAD_ERRORS=true; }
# Load database credentials from .env
source "$SITE_DIR/.env"
wpcli() {
docker run --rm \
-v "$SITE_DIR/www":/var/www/html \
--network mysite_default \
-e WORDPRESS_DB_HOST=db \
-e WORDPRESS_DB_USER="$MYSQL_USER" \
-e WORDPRESS_DB_PASSWORD="$MYSQL_PASSWORD" \
-e WORDPRESS_DB_NAME="$MYSQL_DATABASE" \
wordpress:cli wp --allow-root "$@"
}
# Pre-flight: verify Docker access
docker info &>/dev/null || { log "ERROR: Cannot access Docker. Is $USER in the docker group?"; exit 1; }
# Verify site is up before starting
curl -sf "$SITE_URL" > /dev/null || { log "ERROR: Site is down before updates — aborting"; exit 1; }
log "Auto-update started"
# 1. WordPress updates via WP-CLI
log "Updating WordPress core..."
output=$(wpcli core update 2>&1) && log "$output" || warn "Core update issue: $output"
log "Updating plugins..."
output=$(wpcli plugin update --all 2>&1) && log "$output" || warn "Plugin update issue: $output"
log "Updating themes..."
output=$(wpcli theme update --all 2>&1) && log "$output" || warn "Theme update issue: $output"
# 2. Health check after WordPress updates
if ! curl -sf "$SITE_URL" > /dev/null; then
warn "Site is DOWN after WordPress updates — skipping Docker/OS updates"
# Send alert and exit
if command -v sendmail &>/dev/null; then
printf "Subject: Auto-update FAILED — site down\nTo: %s\n\n%s\n" \
"$NOTIFY_EMAIL" "Site went down after WordPress updates. Check immediately." \
| sendmail "$NOTIFY_EMAIL"
fi
exit 1
fi
# 3. Docker image updates
log "Pulling Docker images..."
$COMPOSE pull 2>&1 | while read -r line; do log "$line"; done
log "Recreating containers..."
$COMPOSE down && $COMPOSE up -d --build
sleep 15
# 4. OS updates
log "Running apt upgrade..."
sudo apt update -qq && sudo apt upgrade -y -qq 2>&1 | while read -r line; do log "$line"; done
# 5. Clean up
log "Pruning unused Docker images..."
docker image prune -f 2>&1 | while read -r line; do log "$line"; done
# 6. Send summary email
SUBJECT="Auto-update complete"
$HAD_ERRORS && SUBJECT="Auto-update complete (WARNINGS)"
if command -v sendmail &>/dev/null; then
{
printf "Subject: %s\nTo: %s\n\n" "$SUBJECT" "$NOTIFY_EMAIL"
cat ~/auto-update.log
} | sendmail "$NOTIFY_EMAIL"
fi
log "Auto-update finished"
Schedule both scripts with cron
crontab -e
Add:
# Daily backup at midnight
0 0 * * * /home/YOUR_USERNAME/backup > /home/YOUR_USERNAME/backup.log 2>&1
# Weekly auto-update: Sundays at 2am
0 2 * * 0 /home/YOUR_USERNAME/auto-update > /home/YOUR_USERNAME/auto-update.log 2>&1
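Typing the absolute paths by hand is error-prone; you can also generate the two entries from `$HOME` and review them before installing (a sketch; the final `crontab` command is left commented out so you can inspect the file first):

```shell
# Build the crontab entries with your real home directory substituted in
CRON_FILE=$(mktemp)
cat > "$CRON_FILE" <<EOF
0 0 * * * $HOME/backup > $HOME/backup.log 2>&1
0 2 * * 0 $HOME/auto-update > $HOME/auto-update.log 2>&1
EOF
cat "$CRON_FILE"
# crontab "$CRON_FILE"   # uncomment to install without opening an editor
```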
Important: Both scripts must run as your regular user (not root). Your user must be in the docker group (`sudo usermod -aG docker $USER`). For the `sudo apt` step in the auto-update script to work unattended from cron, also grant passwordless sudo for apt, e.g. a sudoers entry such as `YOUR_USERNAME ALL=(ALL) NOPASSWD: /usr/bin/apt`.
14. Maintenance
Check container status and memory
sudo docker compose -f ~/mysite/docker-compose.yml ps
sudo docker stats --no-stream
View logs
sudo docker compose -f ~/mysite/docker-compose.yml logs --tail=50 db
sudo docker compose -f ~/mysite/docker-compose.yml logs --tail=50 wordpress
sudo docker compose -f ~/mysite/docker-compose.yml logs --tail=50 swag
Restart a container
sudo docker compose -f ~/mysite/docker-compose.yml restart wordpress
Update WordPress manually (plugins, themes, core)
Easiest via the admin dashboard at https://yourdomain.com/wp-admin.
For command-line updates using WP-CLI (runs as a temporary container — nothing to install):
wpcli() {
sudo docker run --rm \
-v ~/mysite/www:/var/www/html \
--network mysite_default \
-e WORDPRESS_DB_HOST=db \
-e WORDPRESS_DB_USER=wp_db_user \
-e WORDPRESS_DB_PASSWORD=YOUR_APP_DB_PASSWORD \
-e WORDPRESS_DB_NAME=wp_db \
wordpress:cli wp --allow-root "$@"
}
wpcli core version
wpcli core update
wpcli plugin update --all
wpcli theme update --all
Update Docker images manually
# Pull latest images and rebuild
sudo docker compose -f ~/mysite/docker-compose.yml pull
sudo docker compose -f ~/mysite/docker-compose.yml down
sudo docker compose -f ~/mysite/docker-compose.yml up -d --build
# Clean up old images
sudo docker image prune -f
Restore from backup
To restore the database from the latest backup:
# Mount bucket
gcsfuse YOUR_BUCKET_NAME ~/backup-bucket
# Extract the database dump
duplicity restore \
--no-encryption \
--file-to-restore mysqldump/database.sql \
file://$HOME/backup-bucket/backups \
/tmp/database.sql
# Import it
mysql -h localhost --protocol=tcp -u root -pYOUR_ROOT_DB_PASSWORD wp_db < /tmp/database.sql
To restore WordPress files or restore from a specific date, swap in the appropriate --file-to-restore path and add --time "2025-06-01".
SSL certificates
Let's Encrypt certificates renew automatically inside the swag container. No action needed. To check renewal status:
sudo docker logs swag 2>&1 | grep -i "cert\|renew\|expire" | tail -10
OS updates
sudo apt update && sudo apt upgrade -y
# Check if a reboot is required
[ -f /var/run/reboot-required ] && echo "REBOOT NEEDED" || echo "No reboot needed"
# If a kernel update was installed — containers restart automatically via restart: always
sudo reboot
15. How the memory budget works
The e2-micro has 1 GB RAM. Here's how it's allocated at idle:
| Component | Idle usage | Hard limit |
|---|---|---|
| OS + Docker daemon | ~200 MB | — |
| swag (nginx) | ~42 MB | 128 MB |
| wordpress (Apache + PHP) | ~142 MB | 300 MB |
| db (MySQL 8.0) | ~58 MB | 128 MB |
| Total | ~442 MB | ~756 MB |
This leaves ~250 MB of headroom before the 1 GB ceiling. Under load, Apache spawns up to 4 PHP workers (capped by mpm_prefork.conf), each using ~30 MB — so peak WordPress memory is around 250 MB, still under the 300 MB container limit.
If a container ever exceeds its mem_limit, the kernel OOM-kills it and Docker immediately restarts it (restart: always). Check with sudo docker stats --no-stream if the site feels slow.
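The table's totals are plain sums, counting the uncapped OS at its idle figure (numbers copied from the table above):

```shell
# Idle usage (MB) per the table: OS, swag, wordpress, db
OS=200; SWAG=42; WP=142; DB=58
IDLE=$(( OS + SWAG + WP + DB ))
# Ceiling: OS idle plus the three container mem_limits
CEILING=$(( OS + 128 + 300 + 128 ))
echo "idle=${IDLE}MB ceiling=${CEILING}MB"
```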
The three biggest memory levers, in order of impact:
- `performance_schema = OFF` in `mysql.cnf` — saves up to 400 MB
- `MaxRequestWorkers 4` in `mpm_prefork.conf` — caps Apache worker count
- `innodb_buffer_pool_size = 32M` in `mysql.cnf` — cuts InnoDB cache from 128 MB
16. Troubleshooting
"Error establishing a database connection"
# Check if the db container is running
sudo docker compose -f ~/mysite/docker-compose.yml ps
# If it's missing, start it
sudo docker compose -f ~/mysite/docker-compose.yml up -d db
# Check db logs for errors
sudo docker compose -f ~/mysite/docker-compose.yml logs --tail=30 db
# Test connection directly
mysql -h 127.0.0.1 -P 3306 -u root -pYOUR_ROOT_DB_PASSWORD -e "SELECT 1;"
Let's Encrypt certificate fails on first start
- Verify DNS has propagated: `dig +short yourdomain.com` should return a Cloudflare IP
- Verify GCP firewall has port 80 open (required for HTTP-01 challenge)
- Check swag logs: `sudo docker compose logs swag`
Site is slow or returning 503
# Check memory — is any container near its limit?
sudo docker stats --no-stream
# Check Apache worker count inside the wordpress container
sudo docker exec wordpress apache2ctl status
If memory is tight, the safest fix is to upgrade the VM to e2-small (2 GB RAM, ~$17/month) rather than loosening the memory caps.
Site returns 500 after a plugin update
# Check for PHP errors
sudo docker logs wordpress 2>&1 | grep "PHP Fatal" | tail -10
# Disable the offending plugin by renaming its directory
sudo mv ~/mysite/www/wp-content/plugins/PLUGIN_NAME{,.disabled}
# Restart WordPress
sudo docker compose -f ~/mysite/docker-compose.yml restart wordpress
Backup script fails with permission errors
Most likely you ran it with sudo. The backup script must run as your regular user — gcsfuse mounts with user-level permissions that break when root takes over:
# Wrong
sudo ~/backup
# Right
~/backup
Docker bypasses your firewall
Docker manipulates iptables directly, bypassing UFW. If you expose a port in docker-compose.yml without the 127.0.0.1: prefix, it's publicly accessible regardless of UFW rules. Always bind non-public ports to localhost:
# Bad — publicly accessible even with UFW blocking 3306
ports:
- "3306:3306"
# Good — only accessible from the host
ports:
- "127.0.0.1:3306:3306"
