Stop Babying Your Rsync Scripts
Managing more than two servers usually leads to a specific kind of stress: keeping configuration files, logs, or user uploads in sync. I once handled a setup with a primary node in Singapore and a backup in Frankfurt. Every time a user uploaded a 5MB profile image, I had to ensure it mirrored to the other side of the world immediately. My first solution was a basic rsync script triggered by a cron job every five minutes.
That five-minute window was a disaster. In high-traffic bursts, files were missing when the secondary node took over, resulting in 404 errors for images and annoyed users. Beyond the lag, I had to manually rotate SSH keys every time we scaled the cluster. It wasn’t a system; it was a ticking time bomb.
Why Your Current Sync Strategy Is Probably Brittle
Manual methods fail for three specific reasons:
- The Latency Gap: Cron jobs aren’t real-time. Even a 60-second interval leaves a massive window for data inconsistency during a failover.
- The Single Point of Failure: Using a ‘master’ server creates a bottleneck. If that central node goes offline, your entire synchronization pipeline freezes.
- Firewall Fatigue: Running rsync or scp across data centers requires opening specific ports and managing messy known_hosts files. It is a maintenance burden that doesn’t scale.
Consumer cloud services like Dropbox or Google Drive don’t fit the bill for production servers. They get expensive once you pass the 1TB mark, and they force you to hand your data to a third party. As engineers, we need a tool that lives inside our own perimeter.
The Contenders: Which Tool Actually Works?
I put several popular options through the wringer before choosing a winner:
1. rsync + lsyncd
lsyncd watches for filesystem events and triggers rsync. It solves the cron lag, but it remains a one-way street. Trying to set up bi-directional sync with rsync is a recipe for ‘split-brain’ scenarios where your data gets deleted by mistake.
2. Nextcloud
Nextcloud is fantastic for sharing office documents. However, for syncing /var/www/uploads/ across servers, it is bloated. It demands a full web stack and a database, which can easily eat 512MB of RAM on a small VPS just to idle.
3. Resilio Sync
Formerly BitTorrent Sync, this tool is fast and handles NAT traversal like a pro. The catch? It’s closed-source. I prefer tools I can audit, especially when they handle my sensitive configuration files.
4. Syncthing
Syncthing is an open-source, peer-to-peer powerhouse. There is no central server to crash. Because it’s written in Go, it is lightweight and handles thousands of small files without breaking a sweat. Everything is encrypted via TLS, so your data stays private even over the public internet.
The Workflow: Deploying Syncthing on Linux
After managing 15+ VPS instances over the last few years, I’ve refined this deployment. We’ll use Ubuntu/Debian for this example, but the logic holds for any modern distro.
Step 1: Get the Official Repository
Don’t use the version in the default Ubuntu repos; it’s often months out of date. Grab the latest stable build directly.
# Add the release PGP keys:
sudo mkdir -p /etc/apt/keyrings
sudo curl -L -o /etc/apt/keyrings/syncthing-archive-keyring.gpg https://syncthing.net/release-key.gpg
# Add the "stable" channel to your APT sources:
echo "deb [signed-by=/etc/apt/keyrings/syncthing-archive-keyring.gpg] https://apt.syncthing.net/ syncthing stable" | sudo tee /etc/apt/sources.list.d/syncthing.list
# Update and install
sudo apt update && sudo apt install syncthing
Step 2: Automate with Systemd
Syncthing should start the moment your server boots. We’ll run it under a standard user account—never as root—to keep the blast radius small if there’s ever a security vulnerability.
# Replace 'webadmin' with your Linux user
sudo systemctl enable syncthing@webadmin.service
sudo systemctl start syncthing@webadmin.service
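On a small VPS it can also be worth capping the service with a systemd drop-in, so an initial folder scan can’t starve your web stack. An optional sketch; the limits here are illustrative, not tuned values:

```
# /etc/systemd/system/syncthing@webadmin.service.d/limits.conf
[Service]
MemoryMax=512M
CPUQuota=50%
```

Run sudo systemctl daemon-reload and restart the service after adding it.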
Step 3: Secure Remote Access
The GUI listens on port 8384 but only on localhost. Opening this port to the public web is a massive risk. Use an SSH tunnel to access it securely from your laptop.
# Run this on your local machine
ssh -L 9999:127.0.0.1:8384 webadmin@your-server-ip
Navigate to http://localhost:9999 in your browser. Your first task is to set a strong password in Settings > GUI. Don’t skip this.
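If you open that tunnel often, a Host entry in your SSH config saves the retyping (a sketch; the alias ‘primary’ is a placeholder):

```
# ~/.ssh/config on your laptop
Host primary
    HostName your-server-ip
    User webadmin
    LocalForward 9999 127.0.0.1:8384
```

After that, ssh -N primary brings the GUI up on http://localhost:9999.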
Step 4: Creating the Mesh
Every Syncthing instance has a unique Device ID. Find yours under Actions > Show ID. Exchange these IDs between Server A and Server B to pair them. Syncthing will punch through NAT and find the fastest route between them, often reaching speeds of 50-100MB/s on a gigabit backbone.
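If you’d rather script the pairing than click through the GUI, the Device ID is also exposed by the local REST API as the myID field (a sketch; YOUR_KEY stands in for the API key shown under Settings > GUI):

```shell
# Print this node's Device ID via the local REST API
curl -s -H "X-API-Key: YOUR_KEY" http://127.0.0.1:8384/rest/system/status \
  | grep -o '"myID": *"[^"]*"'
```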
Step 5: Firewall Rules
Your nodes need to talk. Open the sync port (TCP, plus UDP for QUIC) and the local discovery port in UFW:
sudo ufw allow 22000/tcp
sudo ufw allow 22000/udp
sudo ufw allow 21027/udp
Hardening for Production
Basic setups are for labs. In production, use a .stignore file to skip temp files like .DS_Store or node_modules. This saves bandwidth and prevents sync conflicts.
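A starting point for that .stignore, covering the patterns above (the (?d) prefix lets Syncthing delete an ignored file when its parent directory needs to be removed):

```
// .stignore lives in the root of the synced folder
(?d).DS_Store
(?d)node_modules
*.tmp
*.swp
```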
Enable “Staggered Versioning” in the folder settings. I keep deleted files for 30 days. This saved me once when a rogue cleanup script deleted 200GB of media; I just pulled everything back from the .stversions folder on the peer node. Syncthing isn’t just a sync tool; it’s a safety net for your data architecture.
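Restoring is just copying out of the version store. Versioned files carry a ~YYYYMMDD-HHMMSS tag before the extension, so a recovery looks roughly like this (the paths and filename are illustrative):

```shell
# Copy a versioned file back into place, dropping the timestamp tag
cp -a .stversions/images/logo~20240131-120000.png images/logo.png
```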

