Building a Home File Server with Ubuntu, RAID, Docker, and Tailscale

Building a headless Ubuntu file server doesn’t require fancy hardware or expensive NAS appliances. In this guide, I’ll walk through how I turned an old Sandy Bridge desktop into a reliable home file server running RAID 1, LVM, and a full suite of self-hosted media services — all accessible from anywhere via Tailscale VPN.

The Hardware

This build starts with modest hardware — a retired desktop with an Intel Sandy Bridge quad-core CPU, no discrete GPU, and a mix of drives:

| Component  | Detail                               |
|------------|--------------------------------------|
| CPU        | Intel Sandy Bridge, 4 cores, AES-NI  |
| RAM        | 8GB DDR3                             |
| Boot Drive | 112G WD SSD                          |
| Disk 1     | 932G Toshiba HDD                     |
| Disk 2     | 1.82T Seagate HDD                    |
| Disk 3     | 932G Western Digital HDD             |
| GPU        | Intel HD 2000/3000 (integrated only) |

No discrete GPU means no hardware transcoding — but that’s fine. We’ll configure everything for direct play and disable ML features that need GPU acceleration.

Storage Architecture

The storage layout uses two strategies: RAID 1 for active data that needs redundancy, and LVM for bulk archive storage.

┌─────────────────────────────────────────────────────────┐
│                      /mnt/raid (923G)                   │
│                     RAID 1 — "mirror"                   │
│   ┌─────────────────────┐  ┌─────────────────────┐     │
│   │  sda2 (923G)        │  │  sdc3 (923G)        │     │
│   │  Toshiba DT01ACA100 │  │  WD WD10EZEX        │     │
│   └─────────────────────┘  └─────────────────────┘     │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│                   /mnt/storage (1.82T)                  │
│                  LVM — VG "lvm" / LV "storage"          │
│   ┌─────────────────────────────────────────────┐       │
│   │  sdb (1.82T) — Seagate ST2000VM003          │       │
│   └─────────────────────────────────────────────┘       │
└─────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────┐
│                    sdd — Boot Drive                     │
│                  112G WD SSD (ext4 + EFI)               │
└─────────────────────────────────────────────────────────┘

RAID 1 mirrors two drives so active data (documents, photos, music, videos, projects) survives a single drive failure. LVM on the 1.82T Seagate holds archives and backups — data that either exists elsewhere or can be re-downloaded.

The key principle: irreplaceable data goes on RAID, regenerable data goes on LVM. This guides every storage decision on the server.

Setting up the RAID array was straightforward since it was a pre-existing array from 2019:

sudo mdadm --assemble /dev/md0 /dev/sda2 /dev/sdc3
sudo mount /dev/md0 /mnt/raid

The LVM volume just needed activation:

sudo lvchange -ay lvm/storage
sudo mount /dev/lvm/storage /mnt/storage

Both volumes auto-mount via /etc/fstab with the nofail flag, so the system boots even if a data disk fails.
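For reference, the relevant /etc/fstab entries look something like this (device paths match the commands above; options are a sketch of my setup):

```
# /etc/fstab — nofail lets the system boot even if a data disk is missing
/dev/md0          /mnt/raid     ext4  defaults,noatime,nofail  0  2
/dev/lvm/storage  /mnt/storage  ext4  defaults,noatime,nofail  0  2
```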

File Sharing: NFS + Samba

The server exports both volumes over NFS v4.2 (for Linux clients) and SMB3 (for Windows and macOS).

NFS v4.2

NFSv4.2 is the cleanest option for Linux-to-Linux file sharing. The server uses a pseudo-root at /srv/nfs with bind mounts:

# /etc/exports
/srv/nfs        10.0.6.0/24(rw,sync,fsid=0,crossmnt,no_subtree_check)
/srv/nfs/raid   10.0.6.0/24(rw,sync,no_subtree_check)
/srv/nfs/storage 10.0.6.0/24(rw,sync,no_subtree_check)

Clients mount with a single line:

sudo mount -t nfs4 -o vers=4.2,noatime,nodiratime disks.local:/ /mnt/disks

Samba / SMB3

For Windows and macOS clients, Samba serves the same volumes with SMB3 encryption and macOS Time Machine compatibility via the fruit VFS module. A dedicated family share gives household members access to shared music and documents.
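A minimal smb.conf sketch of those pieces — share names and paths are from my layout, so adjust to yours:

```
[global]
    server min protocol = SMB3
    smb encrypt = required
    # fruit VFS module for macOS compatibility
    vfs objects = catia fruit streams_xattr
    fruit:metadata = stream

[family]
    path = /mnt/raid/family
    valid users = @family
    read only = no

[timemachine]
    path = /mnt/raid/timemachine
    fruit:time machine = yes
```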

Docker Media Stack

The heart of this build is a Docker Compose stack of self-hosted services running behind a Caddy reverse proxy:

| Service     | Purpose                                 | URL               |
|-------------|-----------------------------------------|-------------------|
| Jellyfin    | Video streaming                         | jellyfin.jphe.in  |
| Navidrome   | Music streaming (Subsonic API)          | navidrome.jphe.in |
| Immich      | Photo management & phone backup         | immich.jphe.in    |
| Syncthing   | Bidirectional file sync                 | syncthing.jphe.in |
| Nextcloud   | Calendar, contacts & webmail            | nextcloud.jphe.in |
| Outline     | Wiki & knowledge base                   | outline.jphe.in   |
| Vaultwarden | Password manager (Bitwarden-compatible) | vault.jphe.in     |

Jellyfin — Direct Play Only

Without a discrete GPU, Jellyfin is configured for direct play only — no transcoding. This means clients need to support the media formats natively. In practice, most modern clients (phones, smart TVs, web browsers) handle H.264/H.265 just fine.

If a file won’t play on a particular client, I pre-transcode it on my workstation with ffmpeg before adding it to the library.
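The pre-transcode step is a one-liner. This sketch re-encodes to H.264 video and AAC audio, which nearly every client can direct-play — tune the CRF and preset to taste:

```
# Re-encode for direct play; lower CRF = higher quality / larger file
ffmpeg -i input.mkv -c:v libx264 -crf 20 -preset slow -c:a aac -b:a 192k output.mp4
```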

Navidrome — Lightweight Music Streaming

Navidrome is impressively lightweight — about 30MB of RAM to serve an entire music library. It speaks the Subsonic API, so any Subsonic-compatible client works: Symfonium on Android, play:Sub on iOS, or Sublime Music on Linux.

Immich — Self-Hosted Google Photos

Immich provides a Google Photos-like experience for photo management and mobile backup. Since we have no GPU, the machine learning features (face detection, CLIP search) are disabled — but the core photo browsing, timeline, and mobile auto-upload work perfectly.

Immich Storage Layout

Getting Immich’s storage right took some iteration. The key insight: phone uploads are irreplaceable originals and need RAID redundancy, but thumbnails and transcoded videos are regenerable and can live on cheaper storage.

The final layout uses Docker bind mounts to split data across volumes:

volumes:
  # Base upload directory on LVM (thumbnails, transcodes, backups)
  - /mnt/storage/immich-data:/usr/src/app/upload
  # Phone uploads overlaid onto RAID for redundancy
  - /mnt/raid/immich-upload:/usr/src/app/upload/upload
  - /mnt/raid/immich-library:/usr/src/app/upload/library
  # External photo libraries (read-only)
  - /mnt/raid/photos:/usr/src/app/external-raid:ro

Docker mounts bind volumes from the shortest destination path to the longest, so the more specific upload/ and library/ mounts overlay onto the base LVM mount. This gives us:

| Data Type                 | Location              | Why                                      |
|---------------------------|-----------------------|------------------------------------------|
| Phone uploads (originals) | /mnt/raid (RAID 1)    | Irreplaceable, needs redundancy          |
| Thumbnails                | /mnt/storage (LVM)    | Regenerable from originals               |
| Encoded video             | /mnt/storage (LVM)    | Regenerable from originals               |
| DB backups                | /mnt/storage (LVM)    | Separate backup chain handles this       |
| Postgres database         | /mnt/raid (RAID 1)    | Bind mount at /mnt/raid/docker/immich-db |

Lesson learned the hard way: Docker named volumes live under /var/lib/docker/volumes/ on the root filesystem — typically a small boot SSD. If Immich starts processing thousands of photos, thumbnails and transcoded videos can eat 20+ GB fast. Always use bind mounts to explicit paths on your data drives.

Fixing Bad EXIF Dates

Old photos from cameras with dead batteries or no real-time clock often have EXIF dates like January 1, 1980 or December 26, 1970. Immich sorts by EXIF date, so these photos end up in the wrong decade.

For JPEGs, exiftool can fix the dates on disk:

exiftool -overwrite_original -DateTimeOriginal="2018:04:01 12:00:00" photo.JPG

For formats that don’t support EXIF writes (like AVI), fix the dates directly in Immich’s Postgres database:

docker exec immich-db psql -U postgres -d immich -c \
  "UPDATE asset_exif SET \"dateTimeOriginal\" = '2023-05-01 12:00:00'
   WHERE \"assetId\" IN (SELECT id FROM asset WHERE \"originalPath\" LIKE '%folder-name%');"

Nextcloud — Calendar, Contacts & Webmail

Nextcloud fills the last major gap in the self-hosted stack: CalDAV for calendar sync and CardDAV for contacts, plus a built-in webmail client that connects to existing email accounts via IMAP/SMTP.

The architecture mirrors Immich — the Nextcloud Apache container backed by Postgres 16 and Redis, all on RAID bind mounts. CalDAV/CardDAV client autodiscovery requires /.well-known/caldav and /.well-known/carddav redirects in Caddy, per RFC 6764:

nextcloud.jphe.in {
    redir /.well-known/carddav /remote.php/dav 301
    redir /.well-known/caldav /remote.php/dav 301
    reverse_proxy nextcloud:80
}

With these redirects, phone and desktop calendar/contacts clients only need the hostname — no manual URL entry required. One critical environment variable: OVERWRITEPROTOCOL=https tells Nextcloud it’s behind a TLS-terminating proxy, so it generates correct https:// links instead of http://.
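In compose terms, the proxy-related settings look roughly like this (hostnames and container names are from my setup):

```
  nextcloud:
    image: nextcloud:apache
    environment:
      - OVERWRITEPROTOCOL=https        # generate https:// links behind the proxy
      - OVERWRITEHOST=nextcloud.jphe.in
      - TRUSTED_PROXIES=caddy          # accept forwarded headers from Caddy
```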

Outline — Team Wiki

Outline provides a clean, fast wiki for shared documentation and knowledge management. It’s the self-hosted alternative to Notion — Markdown-native, real-time collaborative editing, and a powerful search.

Rather than adding yet another authentication system, Outline uses Nextcloud as an OIDC provider. Anyone with a Nextcloud account can log into Outline — no separate credentials needed. This is configured by registering an OAuth2 client in Nextcloud’s admin settings and pointing Outline’s OIDC environment variables at Nextcloud’s authorization endpoints.
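On the Outline side this is just environment variables pointing at Nextcloud's OAuth2 app. The endpoint paths below are from my instance and may differ across Nextcloud versions — verify them against yours:

```
OIDC_CLIENT_ID=<from Nextcloud admin settings>
OIDC_CLIENT_SECRET=<from Nextcloud admin settings>
OIDC_AUTH_URI=https://nextcloud.jphe.in/apps/oauth2/authorize
OIDC_TOKEN_URI=https://nextcloud.jphe.in/apps/oauth2/api/v1/token
OIDC_USERINFO_URI=https://nextcloud.jphe.in/ocs/v2.php/cloud/user?format=json
OIDC_DISPLAY_NAME=Nextcloud
```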

File uploads (images, attachments) are stored directly on the RAID filesystem rather than requiring an S3-compatible object store like MinIO. For a personal/family wiki, local storage is simpler and perfectly adequate — Outline supports this with FILE_STORAGE=local.

Vaultwarden — Self-Hosted Password Manager

Vaultwarden is a lightweight, Rust-based implementation of the Bitwarden API. It runs on minimal resources and is fully compatible with all Bitwarden clients — browser extensions, mobile apps, and the bw CLI tool.

The database lives on RAID for redundancy, and automated daily backups provide an additional safety net (more on that below).

Syncthing — Project Sync

Syncthing keeps my ~/Projects directory synchronized between my workstation and the server’s RAID array. It’s bidirectional, peer-to-peer, and encrypted — no cloud service involved.

HTTPS with Caddy + Cloudflare DNS

Every service gets its own subdomain with automatic Let’s Encrypt certificates. Caddy handles this with the Cloudflare DNS-01 challenge — no ports 80/443 need to be exposed to the internet.

One gotcha: the upstream caddy-dns/cloudflare module rejects Cloudflare’s newer cfut_-prefixed API tokens. I used a patched fork that fixes this.

The Caddyfile is clean — each service is just a few lines:

jellyfin.jphe.in {
    reverse_proxy jellyfin:8096
}

Caddy Timeout Tuning for Large Uploads

If you’re uploading large videos through Immich (or any service behind Caddy), the default proxy timeouts will kill the connection mid-upload. The fix is to increase both the request body size limit and the proxy transport timeouts:

immich.jphe.in {
    request_body {
        max_size 20GB
    }
    reverse_proxy immich:2283 {
        transport http {
            read_timeout 600s
            write_timeout 600s
        }
    }
}

Caddy supports hot-reloading — caddy reload applies config changes without dropping existing connections.

Remote Access with Tailscale

All the *.jphe.in DNS records point to the server’s Tailscale IP (100.x.x.x), not a public IP. This means:

  • From home (LAN): The OpenWrt router intercepts *.jphe.in queries and resolves them to the local IP (10.0.6.120)
  • From anywhere else: DNS resolves to the Tailscale IP, and traffic flows through the Tailscale mesh

No port forwarding. No dynamic DNS. No exposing services to the internet. Just install Tailscale on your devices and everything works.

Server Tuning

A few targeted optimizations make a noticeable difference on this hardware:

Kernel / sysctl

| Setting                          | Value | Why                                       |
|----------------------------------|-------|-------------------------------------------|
| vm.swappiness                    | 10    | Prefer file cache over swap               |
| vm.dirty_ratio                   | 40    | Allow large write batches before flushing |
| vm.vfs_cache_pressure            | 50    | Keep directory/inode caches longer        |
| net.ipv4.tcp_congestion_control  | bbr   | Google's BBR for faster throughput        |
| net.core.rmem_max / wmem_max     | 16MB  | Large TCP buffers for big file transfers  |

Storage I/O

HDDs get the mq-deadline scheduler with 2MB readahead and write caching enabled. The SSD boot drive uses none (passthrough). Head parking is disabled on the HDDs (hdparm -B254) to avoid the click-of-death wear pattern.
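Those settings translate to a handful of commands — device names here are from this box, and a udev rule or systemd unit is needed to make them persist across reboots:

```
# HDDs: mq-deadline scheduler, 2MB readahead, write cache on, no head parking
echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler
echo 2048 | sudo tee /sys/block/sda/queue/read_ahead_kb
sudo hdparm -W1 -B254 /dev/sda

# SSD boot drive: passthrough scheduler
echo none | sudo tee /sys/block/sdd/queue/scheduler
```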

SSH

SSH is tuned to use hardware-accelerated AES-GCM ciphers (the Sandy Bridge CPU has AES-NI), and reverse DNS lookups are disabled for faster connections.
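The corresponding sshd_config lines — a sketch of what those two tweaks look like on disk:

```
# /etc/ssh/sshd_config.d/tuning.conf
# AES-GCM is hardware-accelerated via AES-NI on Sandy Bridge
Ciphers aes128-gcm@openssh.com,aes256-gcm@openssh.com
# Skip reverse DNS lookups on incoming connections
UseDNS no
```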

Terminal compatibility tip: If you use Kitty terminal, remote servers won’t have the xterm-kitty terminfo entry. Tools like top, htop, and nano will break. Fix it by installing the terminfo on the server:

infocmp -x xterm-kitty | ssh disks "mkdir -p ~/.terminfo && tic -x -o ~/.terminfo /dev/stdin"

Automated Backups

The backup strategy has two layers: database dumps for application state, and BorgBackup for file-level deduplication.

Database Backups (Daily at 2:30 AM)

Every service with a database gets a daily dump to /mnt/raid/backups/databases/:

| Service     | DB Type  | Method                             |
|-------------|----------|------------------------------------|
| Immich      | Postgres | pg_dump (consistent snapshot)      |
| Nextcloud   | Postgres | pg_dump (consistent snapshot)      |
| Outline     | Postgres | pg_dump (consistent snapshot)      |
| Vaultwarden | SQLite   | sqlite3 .backup (safe while running) |
| Jellyfin    | SQLite   | sqlite3 .backup                    |
| Navidrome   | SQLite   | sqlite3 .backup                    |

The dumps land on RAID so they’re included in the Borg backup that follows. Old dumps are pruned after 7 days.
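The pruning step is a simple find — the path is my layout, and the directory check makes it a safe no-op on a fresh install:

```shell
# Delete database dumps older than 7 days (path is from my setup)
DUMP_DIR="/mnt/raid/backups/databases"
if [ -d "$DUMP_DIR" ]; then
    find "$DUMP_DIR" -type f -mtime +7 -delete
fi
```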

Why not just back up the database files directly? Postgres stores data across multiple files that must be consistent with each other. Copying them while Postgres is running can give you a corrupted snapshot. pg_dump produces a clean SQL dump regardless of database activity. SQLite has the same issue — the .backup command ensures a consistent copy.
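The dump commands themselves, sketched for one service of each type — container names and the Vaultwarden path are from my compose files:

```
# Postgres: a consistent SQL dump regardless of write activity
docker exec immich-db pg_dump -U postgres immich \
    | gzip > /mnt/raid/backups/databases/immich-$(date +%F).sql.gz

# SQLite: .backup takes a consistent copy while the service is running
sqlite3 /mnt/raid/docker/vaultwarden/db.sqlite3 \
    ".backup '/mnt/raid/backups/databases/vaultwarden-$(date +%F).sqlite3'"
```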

BorgBackup (Daily at 3:00 AM)

BorgBackup runs 30 minutes after the database dumps, deduplicating critical data from RAID to the storage volume:

# Backs up: documents, keys, projects, database dumps, photos
# Retention: 7 daily, 4 weekly, 6 monthly
# Compression: zstd level 3
# Destination: /mnt/storage/backups/borg-raid

Borg’s deduplication means incremental backups are fast and space-efficient — only changed blocks are stored. The entire backup history compresses to around 320MB of deduplicated storage.
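The nightly job boils down to a create plus a prune — the repository and retention mirror the comments above, while the source paths are examples from my layout:

```
export BORG_REPO=/mnt/storage/backups/borg-raid

# Archive name is date-stamped; zstd level 3 matches the config above
borg create --compression zstd,3 --stats \
    ::raid-{now:%Y-%m-%d} \
    /mnt/raid/documents /mnt/raid/projects /mnt/raid/backups/databases

borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```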

The Backup Chain

2:30 AM  Database dumps → /mnt/raid/backups/databases/
3:00 AM  Borg backup    → /mnt/raid/* → /mnt/storage/backups/borg-raid/
6:00 AM  Disk health    → SMART checks, logged to /var/log/disk-health.log

This means the storage volume holds a complete, deduplicated history of everything on RAID — including consistent database snapshots.

Cloud Backup with rclone

Local backups protect against drive failure, but not against fire, theft, or disaster. For geographic redundancy, the server syncs critical data to cloud storage using rclone.

The Setup

rclone is configured with three cloud remotes:

| Remote          | Provider       | Storage |
|-----------------|----------------|---------|
| box-personal    | Box (personal) | 100GB   |
| box-techempower | Box (work)     | 100GB   |
| gdrive          | Google Drive   | 25GB    |

That’s 225GB of cloud storage across three accounts — enough for database dumps, documents, phone uploads, music, and a chunk of the photo library.
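The sync jobs are plain rclone sync invocations — the remote names match the table above, and the destination folder names are my own choice:

```
# Mirror critical directories to cloud storage; sync is one-way (local → remote)
rclone sync /mnt/raid/backups/databases box-personal:server/databases --progress
rclone sync /mnt/raid/documents box-personal:server/documents --progress
```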

Google Drive as a FUSE Mount

Google Drive is mounted as a regular filesystem at /mnt/gdrive using rclone’s FUSE mount capability, managed by a systemd service:

[Service]
Type=notify
User=jp
ExecStart=/usr/bin/rclone mount gdrive: /mnt/gdrive \
    --vfs-cache-mode full \
    --vfs-cache-max-size 2G \
    --dir-cache-time 72h \
    --allow-other

The --vfs-cache-mode full setting caches files locally before serving them, so reads and writes behave like a normal filesystem. The --allow-other flag (plus user_allow_other in /etc/fuse.conf) lets other system users and services like Samba access the mount.

OAuth on a Headless Server

Box and Google Drive both use OAuth2 for authentication, which normally requires a web browser. Since the server is headless, the trick is to run rclone config on a workstation that has a browser, complete the OAuth flow there, then copy ~/.config/rclone/rclone.conf to the server. The tokens persist in the config file and rclone handles refresh automatically.
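In practice that's two commands — `disks` is this server's SSH alias, as used elsewhere in this post:

```
# On the workstation (which has a browser for the OAuth flow)
rclone config

# Then ship the resulting tokens to the headless server
scp ~/.config/rclone/rclone.conf jp@disks:~/.config/rclone/rclone.conf
```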

Disk Health Monitoring

smartd runs weekly short tests and monthly long tests on all drives. A custom health check script runs daily at 6am and logs to /var/log/disk-health.log. On a server with aging drives, early warning of failure is critical.
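The smartd schedule lives in /etc/smartd.conf. A sketch of the directive — the -s regex here means a short test Saturdays at 3am and a long test on the 1st of each month at 4am; adjust the schedule and device list to your drives:

```
# /etc/smartd.conf — weekly short test, monthly long test, report all attributes
/dev/sda -a -o on -S on -s (S/../../6/03|L/../01/./04)
/dev/sdb -a -o on -S on -s (S/../../6/03|L/../01/./04)
/dev/sdc -a -o on -S on -s (S/../../6/03|L/../01/./04)
```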

Lessons Learned

A few things I ran into during the build:

  1. Old EFI partitions cause boot confusion. The BIOS was trying to boot a 2019-era GRUB installation from one of the data drives instead of the actual boot SSD. Clearing the old EFI entries fixed it.
  2. Always run e2fsck before operating on corrupt inodes. I tried to clean up files on a filesystem with corrupt inodes and triggered a kernel panic — VFS spinlock deadlock. The fix was to unmount and run e2fsck -y first.
  3. Docker volume permissions matter. Syncthing’s config volume was created as root but the container runs as UID 1000. Switching to a bind mount with explicit user mapping solved it.
  4. RAID cleanup freed massive space. The RAID array was 99% full. Moving backups and archives to the storage volume brought it down to 38% — from 917G used to 320G.
  5. No GPU doesn’t mean no media server. Direct play works great for most use cases. The GPU upgrade path is documented for when I eventually add one.
  6. Docker named volumes default to the root filesystem. Immich’s thumbnails and transcoded videos filled 23GB on the boot SSD before I noticed. Always use bind mounts for data-heavy containers, pointing to your bulk storage.
  7. Symlinks don’t work inside Docker volumes. A symlink on the host points to a path as seen inside the container. If the target path isn’t mounted in the container, the symlink leads nowhere. Use Docker bind mounts instead.
  8. Back up your databases, not just your files. BorgBackup captures files, but copying a live Postgres data directory gives you a potentially corrupt snapshot. I lost an entire Immich database (all photo metadata, albums, users) because there was no pg_dump backup. Now every database gets a daily dump before Borg runs.
  9. Separate irreplaceable from regenerable data. Phone photo uploads need RAID redundancy. Thumbnails and transcoded videos can be regenerated from originals and belong on cheaper storage. This distinction should drive your volume layout from day one.
  10. Caddy needs timeout tuning for large uploads. The default proxy timeouts are fine for web browsing but kill multi-gigabyte video uploads. Configure read_timeout and write_timeout explicitly for services that handle large files.
  11. Use rsync, not mv, across filesystems. mv between different filesystems is a silent copy+delete with no progress indicator and no resume on failure. rsync -ah --progress gives you visibility and idempotent restarts.

The Final Stack

Here’s what the complete setup looks like:

| Layer          | Technology                                            |
|----------------|-------------------------------------------------------|
| OS             | Ubuntu 24.04 (headless)                               |
| Storage        | mdadm RAID 1 + LVM                                    |
| File Sharing   | NFS v4.2 + Samba/SMB3                                 |
| Media          | Jellyfin + Navidrome + Immich                         |
| PIM            | Nextcloud (CalDAV/CardDAV/Mail)                       |
| Knowledge Base | Outline (Nextcloud OIDC auth)                         |
| Passwords      | Vaultwarden (Bitwarden-compatible)                    |
| Sync           | Syncthing                                             |
| Reverse Proxy  | Caddy + Let's Encrypt (DNS-01)                        |
| Remote Access  | Tailscale mesh VPN                                    |
| Backups        | Daily DB dumps + BorgBackup + rclone cloud sync       |
| Cloud Storage  | rclone → Box (2×100GB) + Google Drive (25GB FUSE mount) |
| Monitoring     | smartd + daily health checks                          |

Total cost: effectively $0 in new hardware. The server is quiet, draws minimal power, and serves media to every device in the house — plus anywhere I have Tailscale installed.

If you have an old desktop gathering dust, it’s more than capable of running this entire stack. The only thing I’d eventually add is a discrete GPU for Jellyfin transcoding and Immich ML — but honestly, direct play has been perfectly fine.
