Building a headless Ubuntu file server doesn’t require fancy hardware or expensive NAS appliances. In this guide, I’ll walk through how I turned an old Sandy Bridge desktop into a reliable home file server running RAID 1, LVM, and a full suite of self-hosted media services — all accessible from anywhere via Tailscale VPN.
## The Hardware
This build starts with modest hardware — a retired desktop with an Intel Sandy Bridge quad-core CPU, no discrete GPU, and a mix of drives:
| Component | Detail |
|---|---|
| CPU | Intel Sandy Bridge, 4 cores, AES-NI |
| Boot Drive | 112G WD SSD |
| Disk 1 | 932G Toshiba HDD |
| Disk 2 | 1.82T Seagate HDD |
| Disk 3 | 932G Western Digital HDD |
| GPU | Intel HD 2000/3000 (integrated only) |
No discrete GPU means no hardware transcoding — but that’s fine. We’ll configure everything for direct play and disable ML features that need GPU acceleration.
## Storage Architecture
The storage layout uses two strategies: RAID 1 for active data that needs redundancy, and LVM for bulk archive storage.
```
┌─────────────────────────────────────────────────────────┐
│                    /mnt/raid (923G)                     │
│                    RAID 1 — "mirror"                    │
│   ┌─────────────────────┐   ┌─────────────────────┐     │
│   │ sda2 (923G)         │   │ sdc3 (923G)         │     │
│   │ Toshiba DT01ACA100  │   │ WD WD10EZEX         │     │
│   └─────────────────────┘   └─────────────────────┘     │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│                  /mnt/storage (1.82T)                   │
│              LVM — VG "lvm" / LV "storage"              │
│   ┌─────────────────────────────────────────────┐       │
│   │ sdb (1.82T) — Seagate ST2000VM003           │       │
│   └─────────────────────────────────────────────┘       │
└─────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┐
│                    sdd — Boot Drive                     │
│                112G WD SSD (ext4 + EFI)                 │
└─────────────────────────────────────────────────────────┘
```

RAID 1 mirrors two drives so active data (documents, photos, music, videos, projects) survives a single drive failure. LVM on the 1.82T Seagate holds archives and backups — data that either exists elsewhere or can be re-downloaded.
Setting up the RAID array was straightforward since it was a pre-existing array from 2019:
```shell
sudo mdadm --assemble /dev/md0 /dev/sda2 /dev/sdc3
sudo mount /dev/md0 /mnt/raid
```

The LVM volume just needed activation:
```shell
sudo lvchange -ay lvm/storage
sudo mount /dev/lvm/storage /mnt/storage
```

Both volumes auto-mount via `/etc/fstab` with the `nofail` flag, so the system boots even if a data disk fails.
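The corresponding `/etc/fstab` entries look roughly like this (the device paths match the volumes above; the ext4 filesystem type is an assumption):

```shell
# /etc/fstab (excerpt) — nofail keeps boot from hanging on a missing data disk
/dev/md0          /mnt/raid     ext4  defaults,nofail  0  2
/dev/lvm/storage  /mnt/storage  ext4  defaults,nofail  0  2
```

Referencing filesystems by UUID (shown by `blkid`) is slightly more robust than raw device paths if drive enumeration order ever changes.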
## File Sharing: NFS + Samba
The server exports both volumes over NFS v4.2 (for Linux clients) and SMB3 (for Windows and macOS).
### NFS v4.2
NFSv4.2 is the cleanest option for Linux-to-Linux file sharing. The server uses a pseudo-root at /srv/nfs with bind mounts:
```shell
# /etc/exports
/srv/nfs         10.0.6.0/24(rw,sync,fsid=0,crossmnt,no_subtree_check)
/srv/nfs/raid    10.0.6.0/24(rw,sync,no_subtree_check)
/srv/nfs/storage 10.0.6.0/24(rw,sync,no_subtree_check)
```

Clients mount with a single line:

```shell
sudo mount -t nfs4 -o vers=4.2,noatime,nodiratime disks.local:/ /mnt/disks
```

### Samba / SMB3
For Windows and macOS clients, Samba serves the same volumes with SMB3 encryption and macOS Time Machine compatibility via the fruit VFS module. A dedicated family share gives household members access to shared music and documents.
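A minimal `smb.conf` along these lines covers the SMB3 encryption and `fruit` setup described above (the share path, share name, and group name here are illustrative, not the server's actual config):

```shell
# /etc/samba/smb.conf (excerpt)
[global]
   server min protocol = SMB3
   smb encrypt = required
   # macOS compatibility: Finder metadata and Time Machine support
   vfs objects = catia fruit streams_xattr
   fruit:metadata = stream
   fruit:model = MacSamba

[family]
   path = /mnt/raid/family
   read only = no
   valid users = @family
```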
## Docker Media Stack
The heart of this build is a Docker Compose stack running four self-hosted services behind a Caddy reverse proxy:
| Service | Purpose | URL |
|---|---|---|
| Jellyfin | Video streaming | jellyfin.jphe.in |
| Navidrome | Music streaming (Subsonic API) | navidrome.jphe.in |
| Immich | Photo management & backup | immich.jphe.in |
| Syncthing | Bidirectional file sync | syncthing.jphe.in |
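Wiring these together in Compose looks roughly like this — a heavily abridged sketch, where the image tags and volume paths are assumptions:

```yaml
# docker-compose.yml (abridged sketch)
services:
  caddy:
    image: caddy:2            # in practice, a custom build with the DNS module
    ports: ["80:80", "443:443"]
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /mnt/raid/videos:/media:ro
  navidrome:
    image: deluan/navidrome
    volumes:
      - /mnt/raid/music:/music:ro
```

All services share the default Compose network, which is what lets Caddy reach upstreams by container name (e.g. `jellyfin:8096`).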
### Jellyfin — Direct Play Only
Without a discrete GPU, Jellyfin is configured for direct play only — no transcoding. This means clients need to support the media formats natively. In practice, most modern clients (phones, smart TVs, web browsers) handle H.264/H.265 just fine.
If a file won’t play on a particular client, I pre-transcode it on my workstation with ffmpeg before adding it to the library.
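The pre-transcode step is a one-liner; something like this (filenames and quality settings are illustrative) converts to broadly compatible H.264/AAC:

```shell
# H.264 video + AAC audio in MP4: direct-plays on most clients
# -movflags +faststart moves the index so playback can begin before
# the whole file has downloaded
ffmpeg -i input.mkv -c:v libx264 -preset slow -crf 20 \
       -c:a aac -b:a 192k -movflags +faststart output.mp4
```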
### Navidrome — Lightweight Music Streaming
Navidrome is impressively lightweight — about 30MB of RAM to serve an entire music library. It speaks the Subsonic API, so any Subsonic-compatible client works: Symfonium on Android, play:Sub on iOS, or Sublime Music on Linux.
### Immich — Self-Hosted Google Photos
Immich provides a Google Photos-like experience for photo management and mobile backup. Since we have no GPU, the machine learning features (face detection, CLIP search) are disabled — but the core photo browsing, timeline, and mobile auto-upload work perfectly.
### Syncthing — Project Sync
Syncthing keeps my ~/Projects directory synchronized between my workstation and the server’s RAID array. It’s bidirectional, peer-to-peer, and encrypted — no cloud service involved.
## HTTPS with Caddy + Cloudflare DNS
Every service gets its own subdomain with automatic Let’s Encrypt certificates. Caddy handles this with the Cloudflare DNS-01 challenge — no ports 80/443 need to be exposed to the internet.
One gotcha: the upstream caddy-dns/cloudflare module rejects Cloudflare’s newer cfut_-prefixed API tokens. I used a patched fork that fixes this.
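Building Caddy with a replacement module is straightforward with `xcaddy`; a two-stage Dockerfile along these lines works (the fork path below is a placeholder — substitute the patched repo actually in use):

```dockerfile
FROM caddy:builder AS builder
# xcaddy's module=replacement syntax swaps a fork in for the upstream module
RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare=github.com/example/cloudflare-fork

FROM caddy:latest
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```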
The Caddyfile is clean — each service is just a few lines:
```shell
jellyfin.jphe.in {
    reverse_proxy jellyfin:8096
    tls {
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
}
```

## Remote Access with Tailscale
All the `*.jphe.in` DNS records point to the server’s Tailscale IP (100.x.x.x), not a public IP. This means:
- From home (LAN): the OpenWrt router intercepts `*.jphe.in` queries and resolves them to the local IP (10.0.6.120)
- From anywhere else: DNS resolves to the Tailscale IP, and traffic flows through the Tailscale mesh
No port forwarding. No dynamic DNS. No exposing services to the internet. Just install Tailscale on your devices and everything works.
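On the OpenWrt side, the split-horizon trick is a single dnsmasq directive (the domain and address match this setup; adjust for yours):

```shell
# dnsmasq config on the router — answer all *.jphe.in queries locally
# instead of forwarding them to upstream DNS
address=/jphe.in/10.0.6.120
```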
## Server Tuning
A few targeted optimizations make a noticeable difference on this hardware:
### Kernel / sysctl
| Setting | Value | Why |
|---|---|---|
| vm.swappiness | 10 | Prefer file cache over swap |
| vm.dirty_ratio | 40 | Allow large write batches before flushing |
| vm.vfs_cache_pressure | 50 | Keep directory/inode caches longer |
| net.ipv4.tcp_congestion_control | bbr | Google’s BBR for faster throughput |
| net.core.rmem_max / wmem_max | 16MB | Large TCP buffers for big file transfers |
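Dropped into a file under `/etc/sysctl.d/` (the filename is arbitrary), the table above translates to:

```shell
# /etc/sysctl.d/99-fileserver.conf — apply with: sudo sysctl --system
vm.swappiness = 10
vm.dirty_ratio = 40
vm.vfs_cache_pressure = 50
net.ipv4.tcp_congestion_control = bbr
net.core.rmem_max = 16777216   # 16MB
net.core.wmem_max = 16777216   # 16MB
```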
### Storage I/O
HDDs get the `mq-deadline` scheduler with 2MB readahead and write caching enabled. The SSD boot drive uses `none` (passthrough). Head parking is disabled on the HDDs (`hdparm -B254`) to avoid the click-of-death wear pattern.
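As one-off commands these settings look like the following (persist them with a udev rule or boot script; the device names match this box's layout):

```shell
# HDDs: mq-deadline scheduler, 2MB readahead, write cache on, no head parking
echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler
echo 2048        | sudo tee /sys/block/sda/queue/read_ahead_kb
sudo hdparm -W1 -B254 /dev/sda   # -W1: write cache on, -B254: max APM, no parking

# SSD boot drive: no scheduler, let the drive reorder requests itself
echo none | sudo tee /sys/block/sdd/queue/scheduler
```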
### SSH
SSH is tuned to use hardware-accelerated AES-GCM ciphers (the Sandy Bridge CPU has AES-NI), and reverse DNS lookups are disabled for faster connections.
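In `sshd_config` that amounts to two lines (restart `sshd` after editing):

```shell
# /etc/ssh/sshd_config (excerpt)
# AES-GCM runs in hardware via AES-NI on Sandy Bridge
Ciphers aes128-gcm@openssh.com,aes256-gcm@openssh.com
# Skip reverse DNS lookups on incoming connections
UseDNS no
```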
## Automated Backups with BorgBackup
BorgBackup runs daily at 3am, deduplicating critical data from RAID to the storage volume:
```shell
# Backs up: documents, keys, projects
# Retention: 7 daily, 4 weekly, 6 monthly
# Compression: zstd level 3
```

Borg’s deduplication means incremental backups are fast and space-efficient — only changed blocks are stored.
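The nightly job boils down to a create-then-prune pair; here is a sketch under the retention policy above (the repo path and source directories are illustrative):

```shell
#!/bin/sh
# Nightly Borg run — deduplicated, zstd level-3 compression
REPO=/mnt/storage/backups/borg

borg create --compression zstd,3 --stats \
    "$REPO::{hostname}-{now:%Y-%m-%d}" \
    /mnt/raid/documents /mnt/raid/keys /mnt/raid/projects

# Enforce retention: 7 daily, 4 weekly, 6 monthly
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$REPO"
```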
## Disk Health Monitoring
`smartd` runs weekly short tests and monthly long tests on all drives. A custom health check script runs daily at 6am and logs to `/var/log/disk-health.log`. On a server with aging drives, early warning of failure is critical.
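The smartd schedule fits in a single `/etc/smartd.conf` directive; the exact test times below are assumptions, but the regex format is standard:

```shell
# /etc/smartd.conf
# -a: monitor all attributes   -o on: offline data collection   -S on: attribute autosave
# S/../../6/03 = short self-test Saturdays at 03:00
# L/../01/./04 = long self-test on the 1st of each month at 04:00
DEVICESCAN -a -o on -S on -s (S/../../6/03|L/../01/./04) -m root
```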
## Lessons Learned
A few things I ran into during the build:
- Old EFI partitions cause boot confusion. The BIOS was trying to boot a 2019-era GRUB installation from one of the data drives instead of the actual boot SSD. Clearing the old EFI entries fixed it.
- Always run `e2fsck` before operating on corrupt inodes. I tried to clean up files on a filesystem with corrupt inodes and triggered a kernel panic — VFS spinlock deadlock. The fix was to unmount and run `e2fsck -y` first.
- Docker volume permissions matter. Syncthing’s config volume was created as root but the container runs as UID 1000. Switching to a bind mount with explicit user mapping solved it.
- RAID cleanup freed massive space. The RAID array was 99% full. Moving backups and archives to the storage volume brought it down to 38% — from 917G used to 320G.
- No GPU doesn’t mean no media server. Direct play works great for most use cases. The GPU upgrade path is documented for when I eventually add one.
## The Final Stack
Here’s what the complete setup looks like:
| Layer | Technology |
|---|---|
| OS | Ubuntu 24.04 (headless) |
| Storage | mdadm RAID 1 + LVM |
| File Sharing | NFS v4.2 + Samba/SMB3 |
| Media | Jellyfin + Navidrome + Immich |
| Sync | Syncthing |
| Reverse Proxy | Caddy + Let’s Encrypt (DNS-01) |
| Remote Access | Tailscale mesh VPN |
| Backups | BorgBackup (daily, deduplicated) |
| Monitoring | smartd + lm-sensors |
Total cost: effectively $0 in new hardware. The server is quiet, draws minimal power, and serves media to every device in the house — plus anywhere I have Tailscale installed.
If you have an old desktop gathering dust, it’s more than capable of running this entire stack. The only thing I’d eventually add is a discrete GPU for Jellyfin transcoding and Immich ML — but honestly, direct play has been perfectly fine.
