A practical guide to running a personal WordPress site on Google Cloud’s Always Free tier
using Docker Compose, automatic SSL, and daily backups to cloud storage — all at zero cost.
What you’ll end up with:
– WordPress + MySQL + nginx/SSL running in Docker on a free GCP VM
– Automatic HTTPS via Let’s Encrypt (auto-renewing)
– Daily incremental backups to a free GCS bucket (~1 year of history)
– SSH locked down via Tailscale — port 22 closed to the public internet
– ~$0/month, indefinitely
What this costs: Nothing. GCP’s Always Free tier covers the VM, and the GCS bucket
stays free as long as you keep it under 5 GB (backups for a small site are well under that).
Table of Contents
- Prerequisites
- Create the GCP VM
- Lock down SSH with Tailscale
- Tune Ubuntu for 1 GB RAM
- Point your domain at the VM with Cloudflare
- Install Docker
- Create the site directory
- Write the config files
- Start the stack
- Finish WordPress setup
- Set up backups
- Schedule the backup
- Maintenance
- How the memory budget works
- Troubleshooting
1. Prerequisites
- A Google account (for GCP and Google Cloud Storage)
- A domain name you control (e.g. from Namecheap, Google Domains, Cloudflare, etc.)
- A Tailscale account — free at tailscale.com (up to 100 devices)
- Basic comfort with a Linux terminal
2. Create the GCP VM
GCP’s Always Free tier includes one e2-micro instance per month in certain US regions,
forever — no credit card charges as long as you stay within the limits.
Create the instance
- Go to console.cloud.google.com → Compute Engine → VM instances → Create instance
- Set:
- Name: anything (e.g. wordpress-server)
- Region: us-east1, us-west1, or us-central1 (required for Always Free)
- Machine type: e2-micro (2 vCPU, 1 GB RAM)
- Boot disk: Ubuntu 22.04 LTS, 30 GB standard persistent disk
- Under Firewall, check both Allow HTTP traffic and Allow HTTPS traffic
- Click Create
The 30 GB disk is within the Always Free 30 GB/month standard disk limit.
Do not use SSD — only standard persistent disk is free.
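If you prefer the CLI over the console, the equivalent gcloud command looks roughly like this (a sketch: wordpress-server and us-east1-b are example values, and it assumes the gcloud CLI is installed and authenticated):

```shell
gcloud compute instances create wordpress-server \
  --zone=us-east1-b \
  --machine-type=e2-micro \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud \
  --boot-disk-size=30GB \
  --boot-disk-type=pd-standard \
  --tags=http-server,https-server
```

The http-server and https-server tags attach GCP's default allow-HTTP/HTTPS firewall rules, matching the two checkboxes above.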
Note your external IP
Once created, copy the External IP from the VM list. You’ll need it for DNS.
Open SSH for initial setup
Click SSH in the GCP console to open a browser terminal. You’ll use this for the next
step (installing Tailscale), after which you can close public SSH permanently.
3. Lock down SSH with Tailscale
Port 22 open to the internet is the single most-scanned port on any public server. Tailscale
gives you a private encrypted network between your devices — SSH stays open on that network
and closed to everyone else.
Install Tailscale on the VM
In the GCP browser SSH terminal:
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
tailscale up prints an authentication URL. Open it in your browser, log in with your
Tailscale account, and the VM will appear in your Tailscale admin console.
Get the VM’s Tailscale IP (always in the 100.x.x.x range):
tailscale ip -4
Install Tailscale on your local machine
Download from tailscale.com/download for macOS, Windows,
or Linux and sign in with the same account. Both devices will now be on your private
Tailscale network.
Test SSH over Tailscale before closing the public port
From your local machine:
ssh [email protected] # use the Tailscale IP from above
Confirm this works before the next step.
Close port 22 in GCP’s firewall
In the GCP console:
- Go to VPC network → Firewall
- Find the rule named default-allow-ssh (allows TCP port 22 from 0.0.0.0/0)
- Click it → Delete (or Disable if you want to re-enable it later)
Port 22 is now closed to the public internet. Your Tailscale SSH connection still works
because Tailscale operates over UDP 41641 (or falls back to HTTPS), not port 22.
GCP console SSH still works. The browser SSH button in the GCP console uses IAP
(Identity-Aware Proxy), which tunnels through Google’s internal network — it does not
go through the public firewall rule you just removed.
Optional: restrict sshd to the Tailscale interface
For maximum hardening, tell sshd to only listen on the Tailscale interface:
# Find the Tailscale interface name (usually tailscale0)
ip link show | grep tailscale
# Add to /etc/ssh/sshd_config:
echo "ListenAddress 100.x.x.x" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart ssh
Replace 100.x.x.x with your actual Tailscale IP. After this, SSH to the public IP
would simply time out even if the firewall rule were ever re-enabled.
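One caveat with a fixed ListenAddress: at boot, sshd may start before Tailscale has brought its interface up and fail to bind. A hedged fix is a systemd drop-in that orders ssh after tailscaled (unit and path names as shipped on Ubuntu 22.04):

```
# /etc/systemd/system/ssh.service.d/after-tailscale.conf
[Unit]
After=tailscaled.service
Wants=tailscaled.service
```

Run sudo systemctl daemon-reload afterwards. If sshd still races the address assignment, setting net.ipv4.ip_nonlocal_bind=1 via sysctl lets it bind to an address that is not up yet.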
4. Tune Ubuntu for 1 GB RAM
A fresh Ubuntu 22.04 install on GCP is not optimised for a memory-constrained server.
These steps reclaim memory and prevent a few common failure modes before you install anything.
Create a swap file
The e2-micro has 1 GB of RAM. Without swap, a single memory spike from Docker pulling a
new image or WordPress processing a large upload can OOM-kill a container. A swap file
is cheap insurance.
# Check whether swap already exists
swapon --show
# If nothing is listed, create a 4 GB swap file
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it permanent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
Reduce swappiness
The default vm.swappiness of 60 tells the kernel to move pages to swap aggressively.
On a server where you want RAM for active processes, 10 is better — use swap as a last
resort, not a first choice:
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-tuning.conf
sudo sysctl -p /etc/sysctl.d/99-tuning.conf
# Verify
cat /proc/sys/vm/swappiness # should print 10
Cap the systemd journal size
By default journald keeps logs until they occupy 10% of the filesystem. On a 30 GB boot
disk that means up to 3 GB of logs — a significant chunk of your free space. Cap it:
sudo mkdir -p /etc/systemd/journald.conf.d
sudo tee /etc/systemd/journald.conf.d/size.conf > /dev/null <<'EOF'
[Journal]
SystemMaxUse=200M
RuntimeMaxUse=50M
EOF
sudo systemctl restart systemd-journald
# Vacuum logs that are already over the new limit
sudo journalctl --vacuum-size=200M
Disable services you don’t need
multipathd — manages multipath block devices (SANs, iSCSI). Irrelevant on a GCP VM:
sudo systemctl disable --now multipathd
rsyslog — a second logging daemon that duplicates what journald already does.
Ubuntu installs both by default:
sudo systemctl disable --now rsyslog
LXD — a full VM and container manager installed via snap. Not needed when you’re
using Docker:
sudo snap remove lxd
Removing LXD also removes the core18 and core20 snap bases it depended on,
freeing several hundred MB of disk space.
Verify memory after tuning
free -h
sudo docker stats --no-stream # run after Docker is installed
At idle you should see roughly 200 MB used by the OS, leaving ~750 MB for your containers.
5. Point your domain at the VM with Cloudflare
Cloudflare is the recommended DNS provider for this setup. The
free tier gives you DNS, DDoS protection, and a CDN — all at no cost. You just point your
domain’s nameservers at Cloudflare instead of your registrar.
Add your site to Cloudflare
- Sign up at cloudflare.com and click Add a site
- Enter your domain and select the Free plan
- Cloudflare will scan your existing DNS records. Review and continue.
- Cloudflare gives you two nameservers, e.g. arlo.ns.cloudflare.com and june.ns.cloudflare.com
- Log into your domain registrar and replace its nameservers with the two Cloudflare ones
- Click Done in Cloudflare — propagation takes a few minutes to a few hours
Add DNS records
In the Cloudflare dashboard under DNS → Records, add:
Type: A Name: @ IPv4: YOUR_VM_IP
Type: A Name: www IPv4: YOUR_VM_IP
Proxy toggle: orange cloud vs grey cloud
Each DNS record has a Proxy status toggle — the orange cloud icon:
| Setting | What it means |
|---|---|
| Proxied (orange cloud) | Traffic flows through Cloudflare’s CDN. Hides your VM’s real IP, adds DDoS protection and caching. Recommended. |
| DNS only (grey cloud) | Cloudflare is just DNS. Traffic goes directly to your VM. |
Use proxied (orange cloud). It’s one of the main reasons to use Cloudflare.
Let’s Encrypt HTTP-01 validation works fine through the Cloudflare proxy — swag
handles it automatically.
Set Cloudflare SSL/TLS mode to Full
This is a required step when using the orange cloud proxy.
In Cloudflare: SSL/TLS → Overview → select Full (not Flexible, not Full Strict).
- Flexible — Cloudflare encrypts to visitors but sends plain HTTP to your server. Broken with swag.
- Full — Cloudflare encrypts both legs. Works correctly with swag’s Let’s Encrypt cert.
- Full (Strict) — Same as Full but validates the origin cert against a CA. Also works, but
requires the Let’s Encrypt cert to be valid before you can turn this on.
Verify DNS before starting the stack
Let’s Encrypt needs the domain to resolve before it can issue a certificate:
dig +short yourdomain.com
# Should return a Cloudflare IP (not your VM IP — that's expected when proxied)
# Test that port 80 reaches your server through Cloudflare
curl -I http://yourdomain.com
6. Install Docker
SSH into your VM and run:
# Install dependencies
sudo apt update && sudo apt install -y ca-certificates curl gnupg
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Add the Docker apt repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine and Compose plugin
sudo apt update && sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Verify
sudo docker compose version
# Docker Compose version v2.x.x
Important: always use docker compose (with a space, V2). The old hyphenated docker-compose
command is Compose V1, a separate Python tool that reached end of life in 2023 and no
longer receives updates.
7. Create the site directory
mkdir -p ~/mysite/{www,config}
mkdir -p ~/mysqldump
cd ~/mysite
Your layout will be:
~/mysite/
├── docker-compose.yml
├── default.conf # nginx config
├── custom.ini # PHP overrides
├── mpm_prefork.conf # Apache worker tuning
├── mysql.cnf # MySQL low-memory tuning
├── config/ # swag runtime data (SSL certs, nginx state)
└── www/ # WordPress files
~/mysqldump/
└── database.sql # daily MySQL dump (written by backup script)
8. Write the config files
Create each file below. Replace yourdomain.com and the placeholder email throughout.
docker-compose.yml
This defines three containers: swag (nginx + SSL), wordpress (Apache + PHP),
and db (MySQL). All have memory limits to stay within the 1 GB budget.
Generate strong random passwords before you start — use openssl rand -base64 16 twice.
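For example (openssl ships with Ubuntu; each call produces an independent 24-character password):

```shell
echo "MYSQL_ROOT_PASSWORD: $(openssl rand -base64 16)"
echo "MYSQL_PASSWORD:      $(openssl rand -base64 16)"
```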
services:
swag:
image: linuxserver/swag
container_name: swag
restart: always
depends_on:
- wordpress
volumes:
- ./config:/config
- ./default.conf:/config/nginx/site-confs/default.conf
environment:
- EMAIL=you@yourdomain.com
- URL=yourdomain.com
- VALIDATION=http
- TZ=America/Los_Angeles
- PUID=1000
- PGID=1000
ports:
- "443:443"
- "80:80"
mem_limit: 128m
wordpress:
image: wordpress:latest
container_name: wordpress
hostname: wordpress
depends_on:
- db
restart: always
ports:
- "8080:80"
volumes:
- ./www/:/var/www/html/
- ./custom.ini:/usr/local/etc/php/conf.d/custom.ini
- ./mpm_prefork.conf:/etc/apache2/mods-available/mpm_prefork.conf
environment:
WORDPRESS_DB_HOST: db
WORDPRESS_DB_USER: wp_db_user
WORDPRESS_DB_PASSWORD: YOUR_APP_DB_PASSWORD
WORDPRESS_DB_NAME: wp_db
WORDPRESS_CONFIG_EXTRA: |
define( 'WP_MEMORY_LIMIT', '96M' );
mem_limit: 300m
db:
image: mysql:5.7
container_name: db
volumes:
- /var/lib/mysql:/var/lib/mysql
- ./mysql.cnf:/etc/mysql/conf.d/lowmem.cnf
restart: always
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: YOUR_ROOT_DB_PASSWORD
MYSQL_DATABASE: wp_db
MYSQL_USER: wp_db_user
MYSQL_PASSWORD: YOUR_APP_DB_PASSWORD
mem_limit: 128m
Why /var/lib/mysql on the host? Mounting the MySQL data directory directly on the
host (rather than in a named Docker volume) means the database survives container removal
and is easy to access for backups via mysqldump. The 3306:3306 port mapping is what lets
mysqldump on the host reach the container over TCP; GCP's default-deny firewall keeps that
port closed to the internet.
default.conf (nginx)
Redirects HTTP → HTTPS and proxies everything to the WordPress container:
server {
listen 80;
listen [::]:80;
server_name _;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
root /var/www/html/example;
index index.html index.htm index.php;
server_name _;
include /config/nginx/proxy-confs/*.subfolder.conf;
include /config/nginx/ssl.conf;
client_max_body_size 0;
location / {
try_files $uri $uri/ /index.php?$args @app;
}
location @app {
proxy_pass http://wordpress;
proxy_set_header Host $host;
proxy_redirect off;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Real-IP $remote_addr;
}
}
custom.ini (PHP overrides)
file_uploads = On
memory_limit = 128M
upload_max_filesize = 64M
max_input_vars = 3000
post_max_size = 64M
max_execution_time = 600
mpm_prefork.conf (Apache worker limits)
Apache’s default settings spawn up to 150 workers. On a 1 GB VM each PHP worker uses
~30 MB, so the default would consume all available memory. Cap it at 4:
<IfModule mpm_prefork_module>
StartServers 2
MinSpareServers 1
MaxSpareServers 3
MaxRequestWorkers 4
MaxConnectionsPerChild 200
</IfModule>
For a personal or low-traffic site, 4 concurrent requests is plenty. If you start seeing
503 errors under load, raise MaxRequestWorkers and bump the mem_limit in docker-compose.yml.
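As a rough sizing rule, divide the container's mem_limit by the per-worker footprint (the ~30 MB figure is an estimate, not a measurement):

```shell
mem_limit=300   # MB, wordpress mem_limit in docker-compose.yml
worker_rss=30   # MB, approximate resident size of one PHP worker
echo $((mem_limit / worker_rss))   # 10, so MaxRequestWorkers 4 leaves headroom
```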
mysql.cnf (MySQL memory tuning)
MySQL’s defaults assume a dedicated server with gigabytes of RAM. These settings bring it
down to ~40 MB idle on a shared 1 GB host:
[mysqld]
# InnoDB — default is 128M, 32M is plenty for a small WordPress site
innodb_buffer_pool_size = 32M
innodb_log_file_size = 8M
innodb_flush_method = O_DIRECT
# Disabling performance_schema saves ~400MB at default settings
performance_schema = OFF
# Reduce caches from their large defaults
table_open_cache = 64
thread_cache_size = 4
thread_stack = 128K
# Query cache is deprecated in MySQL 5.7 — keep it off
query_cache_size = 0
query_cache_type = 0
# Per-connection buffers (only allocated when in use)
read_buffer_size = 128K
read_rnd_buffer_size = 128K
sort_buffer_size = 256K
join_buffer_size = 256K
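After the stack is up (next section), you can confirm these overrides actually took effect. A sketch, assuming the db container name from the compose file:

```shell
# Should report innodb_buffer_pool_size = 33554432 (32M) and performance_schema = OFF
sudo docker exec db mysql -uroot -pYOUR_ROOT_DB_PASSWORD \
  -e "SHOW VARIABLES WHERE Variable_name IN ('innodb_buffer_pool_size','performance_schema');"
```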
9. Start the stack
cd ~/mysite
sudo docker compose up -d
Watch the logs to make sure everything comes up:
sudo docker compose logs -f
On first start, swag will:
1. Request a Let’s Encrypt certificate for your domain (takes ~30 seconds)
2. Start nginx with HTTPS enabled
You should see mysqld: ready for connections in the db logs and then the WordPress
container come up. Once swag shows Server ready, visit https://yourdomain.com.
If Let’s Encrypt fails, check that your DNS A record has propagated and that ports 80/443
are open in GCP’s firewall.
10. Finish WordPress setup
Navigate to https://yourdomain.com — you’ll see the WordPress installation wizard.
- Choose your language
- Enter site title, admin username, password, and email
- Click Install WordPress
You’re done. Your site is live at https://yourdomain.com with a valid SSL certificate
that renews automatically.
11. Set up backups
This backup system dumps MySQL daily and backs up all site files to a Google Cloud Storage
bucket using duplicity (incremental backups, ~1 year of history).
Create a GCS bucket
In the GCP console:
- Go to Cloud Storage → Create bucket
- Name it something unique (e.g. mysite-backup-abc123)
- Region: same region as your VM
- Storage class: Standard
- Leave public access blocked
The first 5 GB of GCS storage is free per month. A WordPress backup set is typically
well under that.
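The console steps above have a one-line CLI equivalent (bucket name and region are examples; assumes gsutil is authenticated):

```shell
gsutil mb -l us-east1 -c standard gs://mysite-backup-abc123/
```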
Install backup tools
sudo apt install -y duplicity python3-pip gcsfuse mysql-client
mysql-client provides the mysqldump binary the backup script calls. gcsfuse lets you mount
the GCS bucket as a local filesystem so duplicity can write to it; if apt can't find the
package, add Google's apt repository first (see cloud.google.com/storage/docs/gcsfuse).
Authentication uses the VM's service account automatically — no key file needed, provided
the VM's access scopes allow Storage read/write (Identity and API access in the VM
settings; the default compute scope is storage read-only).
Create directories
mkdir -p ~/backup-bucket # GCS mount point
mkdir -p ~/mysqldump
Write the backup script
Save this as ~/backup and make it executable (chmod +x ~/backup).
Replace YOUR_BUCKET_NAME, YOUR_ROOT_DB_PASSWORD, and the NOTIFY_EMAIL address:
#!/bin/bash
#
# backup — daily backup for mysite
# Dumps MySQL, backs up ~/mysite to GCS via duplicity.
# Schedule: 0 0 * * * ~/backup > ~/backup.log 2>&1
#
NOTIFY_EMAIL="you@yourdomain.com"
WD="$HOME"
BUCKET="YOUR_BUCKET_NAME"
MYSQL_DB="wp_db"
MYSQL_USER="root"
MYSQL_PASS="YOUR_ROOT_DB_PASSWORD"
BACKUP_DEST="file://$WD/backup-bucket/backups"
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"; }
fail() {
log "ERROR: $*"
local msg="Backup failed on $(hostname) at $(date): $*"
if command -v sendmail &>/dev/null; then
printf "Subject: Backup FAILED\nTo: %s\n\n%s\n" "$NOTIFY_EMAIL" "$msg" \
| sendmail "$NOTIFY_EMAIL"
fi
mount | grep -q "$WD/backup-bucket" && fusermount -u "$WD/backup-bucket"
exit 1
}
log "Backup started"
# 1. Mount GCS bucket
if mount | grep -q "$WD/backup-bucket"; then
log "Already mounted."
else
gcsfuse "$BUCKET" "$WD/backup-bucket" || fail "gcsfuse mount failed"
fi
# 2. Dump MySQL
log "Dumping MySQL..."
mysqldump --opt -h localhost --protocol=tcp \
--user="$MYSQL_USER" --password="$MYSQL_PASS" \
"$MYSQL_DB" > "$WD/mysqldump/database.sql" \
|| fail "mysqldump failed"
log "Dump size: $(du -sh $WD/mysqldump/database.sql | cut -f1)"
# 3. Clean up any incomplete previous backup
duplicity cleanup --force --no-encryption "$BACKUP_DEST" \
|| log "WARNING: cleanup returned non-zero (may be harmless)"
# 4. Incremental backup (full if last full is >1 month old)
log "Running duplicity backup..."
duplicity --progress --no-encryption \
--exclude "$WD/backup-bucket" \
--full-if-older-than 1M \
"$WD" "$BACKUP_DEST" \
|| fail "duplicity failed"
# 5. Prune old chains — keep 12 fulls (~1 year)
duplicity remove-all-but-n-full 12 --force --no-encryption "$BACKUP_DEST" \
|| log "WARNING: prune returned non-zero (may be harmless)"
# 6. Unmount
fusermount -u "$WD/backup-bucket" || log "WARNING: unmount failed"
log "Backup completed successfully"
Test it manually before scheduling. Run it as your normal user rather than with sudo,
since cron will run it unprivileged and the script relies on $HOME:
~/backup > ~/backup.log 2>&1
cat ~/backup.log
12. Schedule the backup
crontab -e
Add:
# Daily backup at midnight
0 0 * * * /home/YOUR_USERNAME/backup > /home/YOUR_USERNAME/backup.log 2>&1
13. Maintenance
Check container status and memory
sudo docker compose -f ~/mysite/docker-compose.yml ps
sudo docker stats --no-stream
View logs
sudo docker compose -f ~/mysite/docker-compose.yml logs --tail=50 db
sudo docker compose -f ~/mysite/docker-compose.yml logs --tail=50 wordpress
sudo docker compose -f ~/mysite/docker-compose.yml logs --tail=50 swag
Restart a container
sudo docker compose -f ~/mysite/docker-compose.yml restart wordpress
Update WordPress (plugins, themes, core)
Easiest via the admin dashboard at https://yourdomain.com/wp-admin.
For command-line updates using WP-CLI (runs as a temporary container — nothing to install):
wpcli() {
sudo docker run --rm \
-v ~/mysite/www:/var/www/html \
--network mysite_default \
-e WORDPRESS_DB_HOST=db \
-e WORDPRESS_DB_USER=wp_db_user \
-e WORDPRESS_DB_PASSWORD=YOUR_APP_DB_PASSWORD \
-e WORDPRESS_DB_NAME=wp_db \
wordpress:cli wp --allow-root "$@"
}
wpcli core update
wpcli plugin update --all
wpcli theme update --all
Update Docker images
# Pull latest images
sudo docker compose -f ~/mysite/docker-compose.yml pull
# Recreate containers (~10 seconds downtime)
sudo docker compose -f ~/mysite/docker-compose.yml down
sudo docker compose -f ~/mysite/docker-compose.yml up -d
# Clean up old images
sudo docker image prune -f
Restore from backup
To restore the database from the latest backup:
# Mount bucket
gcsfuse YOUR_BUCKET_NAME ~/backup-bucket
# Extract the database dump
duplicity restore \
--no-encryption \
--file-to-restore mysqldump/database.sql \
file://$HOME/backup-bucket/backups \
/tmp/database.sql
# Import it (use 127.0.0.1, not localhost — localhost would make the client try the Unix socket)
mysql -h 127.0.0.1 -P 3306 -u root -pYOUR_ROOT_DB_PASSWORD wp_db < /tmp/database.sql
To restore WordPress files or restore from a specific date, swap in the appropriate
--file-to-restore path and add --time "2025-06-01".
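To see which restore points exist before picking a --time, duplicity can list the backup chains (same destination URL as the backup script, with the bucket mounted):

```shell
duplicity collection-status --no-encryption file://$HOME/backup-bucket/backups
```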
SSL certificates
Let’s Encrypt certificates renew automatically inside the swag container. No action needed.
To check renewal status:
sudo docker logs swag 2>&1 | grep -i "cert\|renew\|expire" | tail -10
OS updates
sudo apt update && sudo apt upgrade -y
# Check if a reboot is required
[ -f /var/run/reboot-required ] && echo "REBOOT NEEDED" || echo "No reboot needed"
# If a kernel update was installed — containers restart automatically via restart: always
sudo reboot
14. How the memory budget works
The e2-micro has 1 GB RAM. Here’s how it’s allocated at idle:
| Component | Idle usage | Hard limit |
|---|---|---|
| OS + Docker daemon | ~200 MB | — |
| swag (nginx) | ~70 MB | 128 MB |
| wordpress (Apache + PHP) | ~120 MB | 300 MB |
| db (MySQL) | ~40 MB | 128 MB |
| Total | ~430 MB | 556 MB (containers) |
The container limits sum to 556 MB; add ~200 MB for the OS and the worst case is ~756 MB,
which leaves ~270 MB of headroom below the 1 GB ceiling. Under load, Apache spawns up to
4 PHP workers (capped by mpm_prefork.conf), each using ~30 MB, so peak WordPress
memory is around 250 MB, still under the 300 MB container limit.
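The worst-case figure is quick to sanity-check in the shell:

```shell
# Sum the container hard limits from docker-compose.yml, plus the OS baseline
swag=128; wordpress=300; db=128; os=200
containers=$((swag + wordpress + db))
echo "container limits: ${containers} MB"          # 556
echo "worst case:       $((containers + os)) MB"   # 756, of 1024 available
```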
If a container ever exceeds its mem_limit, the kernel OOM-kills it and Docker immediately
restarts it (restart: always). Check with sudo docker stats --no-stream if the site
feels slow.
The three biggest memory levers, in order of impact:
- performance_schema = OFF in mysql.cnf — saves up to 400 MB
- MaxRequestWorkers 4 in mpm_prefork.conf — caps Apache worker count
- innodb_buffer_pool_size = 32M in mysql.cnf — cuts InnoDB cache from the 128 MB default
Troubleshooting
“Error establishing a database connection”
# Check if the db container is running
sudo docker compose -f ~/mysite/docker-compose.yml ps
# If it's missing, start it
sudo docker compose -f ~/mysite/docker-compose.yml up -d db
# Check db logs for errors
sudo docker compose -f ~/mysite/docker-compose.yml logs --tail=30 db
# Test connection directly
mysql -h 127.0.0.1 -P 3306 -u root -pYOUR_ROOT_DB_PASSWORD -e "SELECT 1;"
Let’s Encrypt certificate fails on first start
- Verify DNS has propagated: dig +short yourdomain.com should return an IP (a Cloudflare IP when proxied, which is expected)
- Verify the GCP firewall has port 80 open (required for the HTTP-01 challenge)
- Check swag logs:
sudo docker compose logs swag
Site is slow or returning 503
# Check memory — is any container near its limit?
sudo docker stats --no-stream
# Count Apache worker processes in the wordpress container
# (apache2ctl status needs mod_status enabled, so count processes instead)
sudo docker top wordpress | grep -c apache2
If memory is tight, the safest fix is to upgrade the VM to e2-small (2 GB RAM,
~$17/month) rather than loosening the memory caps.
