Why You Shouldn’t Trust a Single Disk
Running a production server on a single drive is like driving a car without a spare tire. It works fine until it doesn't. Hardware failure isn't a mere possibility; it is an eventual certainty.
I remember a specific incident where a single 500GB SSD failed on a database server, leading to eight hours of frantic data recovery and a very unhappy client. Beyond the risk of total data loss, a single disk is a massive performance bottleneck. When your database and logs compete for the same physical I/O, latency often spikes from 2ms to over 100ms, dragging the entire system down.
On my Ubuntu 22.04 environment, switching to a multi-disk array cut my backup windows by 40%. By spreading the load across several disks using mdadm, you gain two things: peace of mind through redundancy and a significant boost in throughput. This guide walks through how to set that up.
Choosing the Right RAID Level
RAID (Redundant Array of Independent Disks) groups physical drives into one logical volume. Linux implements software RAID through mdadm, a robust utility that handles everything from array creation to failure recovery. Before you start, you must pick a strategy based on your hardware:
- RAID 0 (Striping): This splits data across disks. It is fast, roughly doubling sequential read/write throughput with two disks, but offers zero protection. If one drive dies, everything is gone.
- RAID 1 (Mirroring): This creates an exact copy of your data on two or more disks. It is the gold standard for OS partitions. If one disk fails, your uptime remains unaffected.
- RAID 5 (Striping with Parity): You need at least three disks. It balances storage capacity and safety by using one disk’s worth of space for parity data. You can lose one drive without losing data.
- RAID 6 (Double Parity): This requires at least four disks. It is similar to RAID 5 but can survive two simultaneous drive failures. This is ideal for large arrays using 8TB+ drives where rebuild times are long.
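The capacity trade-offs above reduce to simple arithmetic. Here is a quick sketch; the disk count and size are illustrative values, not from a real system:

```shell
#!/bin/sh
# Usable capacity per RAID level for N identical disks of SIZE terabytes.
# N=4 and SIZE=2 are illustrative placeholders.
N=4
SIZE=2

echo "RAID 0: $(( N * SIZE )) TB usable, survives 0 failures"
echo "RAID 1: $SIZE TB usable, survives $(( N - 1 )) failures"
echo "RAID 5: $(( (N - 1) * SIZE )) TB usable, survives 1 failure"
echo "RAID 6: $(( (N - 2) * SIZE )) TB usable, survives 2 failures"
```

With four 2TB disks this prints 8TB for RAID 0, 2TB for RAID 1, 6TB for RAID 5, and 4TB for RAID 6, which makes the cost of each extra layer of protection obvious.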
Getting Your Environment Ready
You need to identify your raw disks before building the array. I always use lsblk to double-check the drive names and sizes to avoid accidental wipes.
Installing the mdadm Tool
Most minimal Linux installs don’t include the RAID management utilities. Use your package manager to grab them:
# For Ubuntu, Debian, or Linux Mint
sudo apt update && sudo apt install mdadm -y
# For RHEL, AlmaLinux, or Fedora
sudo dnf install mdadm -y
Identifying Your Disks
For this example, I have four 20GB virtual disks: /dev/sdb, /dev/sdc, /dev/sdd, and /dev/sde. Verify your own setup with this command:
lsblk
Ensure these disks are empty. The next commands will overwrite any existing partition tables and data.
Building a RAID 1 Array (Mirroring)
RAID 1 is perfect for high-availability needs, like keeping a /var/lib/mysql directory online. To create a RAID 1 array named /dev/md0 using two disks:
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm may warn you (for example, that the metadata layout is unsuitable for a boot device) and ask for confirmation. Type y and hit Enter. You can watch the disks sync in real time by checking the virtual mdstat file:
watch cat /proc/mdstat
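If you want the sync progress as a bare number for a script or monitoring check, you can extract it from /proc/mdstat. The sketch below runs against a captured sample so it works anywhere; the recovery-line format shown is typical of current kernels, but verify it against your own /proc/mdstat:

```shell
#!/bin/sh
# Pull the resync/recovery percentage out of mdstat-style output.
# The heredoc is a captured sample; on a real system, replace it
# with: cat /proc/mdstat
cat <<'EOF' | grep -o '[0-9][0-9.]*%'
md0 : active raid1 sdc[1] sdb[0]
      20954112 blocks super 1.2 [2/2] [UU]
      [=>..................]  recovery =  8.5% (1790720/20954112) finish=1.7min speed=178000K/sec
EOF
# prints: 8.5%
```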
Building a RAID 5 Array (Capacity + Protection)
RAID 5 is the preferred choice for file servers. It provides the most usable space while still offering a safety net. If you have three 1TB drives, a RAID 5 setup gives you 2TB of usable storage.
To build it using three disks (pick a free device name; if /dev/md0 from the previous section still exists, use /dev/md1 instead):
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
Formatting and Persistence
An array isn’t useful until it has a filesystem and a mount point. I recommend ext4 for most Linux server workloads; its journaling makes it resilient to unexpected power loss.
1. Create the Filesystem
sudo mkfs.ext4 -F /dev/md0
2. Mount the New Storage
sudo mkdir -p /mnt/raid_storage
sudo mount /dev/md0 /mnt/raid_storage
3. Make the Array Permanent
Linux needs a configuration file to reassemble the array under the same name after a reboot. Without it, the array may come back with a foreign name like /dev/md127 or fail to start entirely. On Debian-based systems the file lives at /etc/mdadm/mdadm.conf; on RHEL-based systems it is /etc/mdadm.conf.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# Update the initial RAM disk to include the RAID config
sudo update-initramfs -u
# On RHEL-based systems, regenerate the initramfs with dracut instead
sudo dracut -f
4. Set Up Auto-mount
Find the UUID of your new array:
sudo blkid /dev/md0
Copy that UUID into your /etc/fstab file so it mounts automatically at boot. Adding the nofail option keeps the system bootable even if the array is missing:
UUID=your-unique-uuid-here /mnt/raid_storage ext4 defaults,nofail 0 0
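To avoid transcription typos, you can also build the fstab line programmatically. In this sketch the UUID is a placeholder; on a real system you would substitute the output of sudo blkid -s UUID -o value /dev/md0:

```shell
#!/bin/sh
# Build the fstab entry from the array's UUID.
# Real system: UUID=$(sudo blkid -s UUID -o value /dev/md0)
# The value below is a placeholder for illustration only.
UUID="0a1b2c3d-1111-2222-3333-444455556666"
printf 'UUID=%s /mnt/raid_storage ext4 defaults 0 0\n' "$UUID"
# Append to fstab with: ... | sudo tee -a /etc/fstab
```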
How to Handle a Disk Failure
RAID is only effective if you know how to fix it when things go sideways. If /dev/sdb fails, the array becomes “degraded.” The server stays online, but you are now running without a safety net. To fix it, first mark the drive as failed and pull it from the array:
sudo mdadm /dev/md0 --fail /dev/sdb
sudo mdadm /dev/md0 --remove /dev/sdb
Once you plug in a new drive (let’s call it /dev/sdf), add it back to the group:
sudo mdadm /dev/md0 --add /dev/sdf
The OS will immediately start rebuilding the data onto the new disk in the background. Your server remains responsive during this process, though you may notice reduced disk performance until the sync finishes; you can track progress in /proc/mdstat as before.
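Catching a degraded array early matters more than fixing it fast. In /proc/mdstat, the status string shows one character per member disk, with U for up and _ for failed, so [UU] is healthy and [_UU] is degraded. A minimal check, again run here against a captured sample (swap the heredoc for cat /proc/mdstat in practice):

```shell
#!/bin/sh
# Alert if any md array reports a failed member (an '_' in its [UU..] string).
# The heredoc stands in for /proc/mdstat; the second array is degraded.
mdstat() {
cat <<'EOF'
md0 : active raid1 sdc[1] sdb[0]
      20954112 blocks super 1.2 [2/2] [UU]
md1 : active raid5 sdd[2] sdc[1] sdb[0](F)
      41908224 blocks super 1.2 [3/2] [_UU]
EOF
}

if mdstat | grep -q '\[U*_'; then
    echo "DEGRADED: at least one array has a failed disk"
else
    echo "OK: all arrays healthy"
fi
```

Dropped into cron or a monitoring agent, a check like this turns a silent single-disk failure into an alert instead of a surprise during the next failure.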
The Bottom Line
Using mdadm is a straightforward way to harden your infrastructure. It turns cheap, commodity disks into a reliable storage pool. However, never forget the golden rule: RAID is not a backup. It protects you from hardware failure, but it won’t save you from a rm -rf / command or a ransomware attack. Always maintain an off-site backup of your data.
If you are new to this, try these steps in a virtual machine first. Breaking and fixing a virtual RAID array is the best way to build the confidence you’ll need when a real drive fails in production.

