XFS Mastery: Resizing, Backing Up, and Tuning Linux’s Powerhouse File System

Linux tutorial - IT technology blog

Context & Why XFS is My Go-To for Scalable Storage

When I first started managing Linux servers, Ext4 was the undisputed king. However, as my workloads shifted toward multi-terabyte datasets and high-concurrency environments, I moved to XFS. Originally built by Silicon Graphics for IRIX, XFS is now the standard for RHEL, CentOS, and AlmaLinux. It excels at handling massive files and high-parallelism I/O, making it a beast for database servers and 4K media storage.

Managing over 15 production VPS instances since 2023 has taught me one hard truth: file system operations are high-stakes. A single typo can trigger hours of recovery. XFS mitigates this with robust online management tools. You can perform critical maintenance while the partition is mounted and serving traffic—a total lifesaver when 100% uptime is the goal.

One caveat before we dive in: XFS is built for growth, not dieting. You can expand an XFS partition in seconds, but you cannot shrink it. If your infrastructure plan requires reducing partition sizes later, you are better off sticking with Ext4 or using LVM with a very conservative allocation strategy.
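Since the shrink limitation is the one that bites people, here is the LVM pattern I mean, sketched with hypothetical volume and mount names: allocate small, then grow on demand.

```shell
# Conservative LVM allocation sketch (vg_data/lv_app and /mnt/app are
# hypothetical). Start small and grow later -- XFS can never give space back.
PLAN='
lvcreate -L 50G -n lv_app vg_data     # allocate far less than the disk
mkfs.xfs /dev/vg_data/lv_app          # format once
lvextend -L +50G /dev/vg_data/lv_app  # later: enlarge the volume...
xfs_growfs /mnt/app                   # ...then grow XFS online
'
printf '%s' "$PLAN"
```

The point of the sketch: growth is a two-step dance (block device first, file system second), and the initial allocation is the only sizing decision you can't walk back.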

Installation of XFS Utilities

Most modern distros include basic XFS support, but specialized tools like xfsdump are often missing. I usually check my xfsprogs version immediately; newer versions (5.x+) include significant optimizations for NVMe drives and high-speed SSDs.

For RHEL-based systems (CentOS, Fedora, AlmaLinux, Rocky Linux):

sudo dnf install xfsprogs xfsdump

For Debian-based systems (Ubuntu, Debian):

sudo apt update && sudo apt install xfsprogs xfsdump

Once these are installed, you have the full toolkit. I’ve found that having these ready prevents frantic searching when a disk hits 99% capacity at 3 AM.
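To confirm which xfsprogs generation you ended up with, the tools print their version with -V. A quick sketch of pulling out the major number (the sample string is illustrative; on a real host capture the output of `mkfs.xfs -V` instead):

```shell
# mkfs.xfs -V prints something like "mkfs.xfs version 5.19.0".
# Sample string is illustrative; on a real host use:
#   VERSION=$(mkfs.xfs -V)
VERSION='mkfs.xfs version 5.19.0'
MAJOR=$(printf '%s\n' "$VERSION" | sed -n 's/.*version \([0-9]*\)\..*/\1/p')
if [ "$MAJOR" -ge 5 ]; then
    echo "xfsprogs $MAJOR.x: modern SSD/NVMe-era optimizations available"
fi
```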

Configuration and Management Tools

Managing XFS goes beyond mkfs. You need to identify, resize, and safeguard your data. Here are the three pillars of my daily workflow.

1. Customizing with xfs_admin

Predictable identification is vital. Relying on device names like /dev/sdb1 is a recipe for disaster because they can shift if you swap hardware. I always use persistent labels or UUIDs. The xfs_admin tool lets you tweak these without wiping your data.

To label an unmounted partition for easier tracking:

# Unmount the target first
sudo umount /mnt/data

# Assign a clear label
sudo xfs_admin -L "PROD_BACKUPS" /dev/sdb1

You can also refresh a UUID—useful if you’ve used dd to clone a drive and need to resolve a conflict on the same OS:

sudo xfs_admin -U $(uuidgen) /dev/sdb1   # capital -U sets; lowercase -u only prints the current UUID

Keep in mind that xfs_admin is picky. It generally requires the file system to be offline. Attempting this on a live mount will simply return a “device busy” error.
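Once a partition is labeled, /etc/fstab can reference it instead of a device path, so the mount survives hardware reshuffles. A sketch of the entry (the mount point and the noatime option are my usual defaults, not requirements):

```
LABEL=PROD_BACKUPS  /mnt/data  xfs  defaults,noatime  0 0
```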

2. Instant Expansion with xfs_growfs

This is my most-used command. Picture this: your logs are ballooning, and your disk is at 98%. In the cloud, you can expand a virtual volume in one click, but your OS won’t see that space automatically. XFS handles this beautifully without requiring a reboot or unmount.

# First, expand your underlying partition or LVM volume
# Then, grow the XFS file system using its mount point
sudo xfs_growfs /mnt/data

Unlike Ext4’s resize2fs, xfs_growfs targets the mount point, not the device path. It’s a common stumbling block for newcomers. Run df -h immediately after, and you’ll see your new capacity reflected instantly.
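If you want to script the 3 AM part, the go/no-go decision is easy to automate. A minimal sketch; the 90% threshold and the mount point are my own assumptions:

```shell
# Decide whether a mount needs growing from its df use percentage.
# The 90% threshold is a personal rule of thumb, not an XFS default.
needs_growth() {
    pct=${1%\%}          # strip the trailing % that df prints
    [ "$pct" -ge 90 ]
}

# On a real host: USE=$(df --output=pcent /mnt/data | tail -1 | tr -d ' ')
USE='98%'
if needs_growth "$USE"; then
    echo "time to enlarge the volume, then run: xfs_growfs /mnt/data"
fi
```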

3. Faster Backups with xfsdump

While rsync is great for individual files, xfsdump operates at the inode level. In my experience it's often close to 2x faster for large volumes because it reads the disk structure directly. It captures everything: file attributes, extended metadata, and quota limits.

To run a full Level 0 backup:

sudo xfsdump -l 0 -f /backup/db_full.dump -L "Backup_2026_05_04" -M "Primary_Storage" /mnt/data

The flags are straightforward:

  • -l 0: Performs a full backup. (1-9 are for incrementals).
  • -f: Defines your destination file.
  • -L and -M: Internal labels for your backup logs.

To bring that data back, use the restore counterpart:

sudo xfsrestore -f /backup/db_full.dump /mnt/restore_point
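Level-0 dumps pair naturally with incrementals through the week. The sequence below is a sketch (file names and labels are illustrative); the part that matters is restore order, which must replay the full dump first and then each incremental on top:

```shell
# Weekly rotation sketch: xfsrestore must replay level 0 first,
# then incrementals in ascending level order.
PLAN='
xfsdump -l 1 -L "Incr_Mon" -M "Primary_Storage" -f /backup/db_incr1.dump /mnt/data
xfsrestore -f /backup/db_full.dump  /mnt/restore_point   # full first
xfsrestore -f /backup/db_incr1.dump /mnt/restore_point   # then the incremental
'
printf '%s' "$PLAN"
```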

Verification & Health Checks

Proactive monitoring prevents emergencies. XFS includes built-in tools to peek under the hood.

Checking Geometry with xfs_info

If you need to verify block sizes or the number of allocation groups (agcount), xfs_info is the way to go. I use this to ensure performance is optimized for the underlying hardware.

xfs_info /mnt/data

Multiple allocation groups are the secret to XFS speed. They allow different processes to write to separate parts of the disk simultaneously, effectively eliminating the locking bottlenecks common in older file systems.
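The agcount value is easy to pull out programmatically if you monitor many hosts. A sketch against a sample xfs_info line (the geometry shown is illustrative):

```shell
# xfs_info's first line looks roughly like:
#   meta-data=/dev/sdb1 isize=512 agcount=4, agsize=655360 blks
# On a real host: LINE=$(xfs_info /mnt/data | head -1)
LINE='meta-data=/dev/sdb1 isize=512 agcount=4, agsize=655360 blks'
AGCOUNT=$(printf '%s\n' "$LINE" | sed -n 's/.*agcount=\([0-9]*\).*/\1/p')
echo "allocation groups: $AGCOUNT"
```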

Managing Fragmentation

XFS is excellent at avoiding fragmentation, but it can still happen on disks that have been near 100% capacity for months. Check your status with the XFS debugger:

sudo xfs_db -c frag -r /dev/sdb1

The -r flag ensures the check is read-only and safe for live systems. If you see fragmentation above 15-20%, consider running xfs_fsr to reorganize files in the background.
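xfs_db reports the result as a single line, which makes the threshold check easy to script. A sketch; both the sample line and the 15% cutoff are my assumptions:

```shell
# xfs_db -c frag -r output looks like:
#   actual 423, ideal 401, fragmentation factor 5.20%
# On a real host: LINE=$(sudo xfs_db -c frag -r /dev/sdb1)
LINE='actual 423, ideal 401, fragmentation factor 5.20%'
FACTOR=$(printf '%s\n' "$LINE" | sed -n 's/.*factor \([0-9.]*\)%.*/\1/p')
if [ "${FACTOR%.*}" -ge 15 ]; then
    echo "fragmented (${FACTOR}%): schedule xfs_fsr"
else
    echo "healthy at ${FACTOR}%"
fi
```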

The Break-Glass Tool: xfs_repair

If a power failure corrupts your journal, xfs_repair is your last line of defense. Don't reach for the generic fsck wrapper: fsck.xfs is deliberately a no-op on XFS, so xfs_repair is the only tool that actually does the work.

# WARNING: Never run this on a mounted drive!
sudo umount /mnt/data
sudo xfs_repair /dev/sdb1
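Before committing to a real repair, I like a read-only pass first; the -n flag makes xfs_repair report problems without writing anything. A guarded sketch (/dev/sdb1 is an example path):

```shell
# Dry run first: -n makes xfs_repair report-only (writes nothing).
# Guarded so the sketch is safe to paste; /dev/sdb1 is an example path.
DEV=/dev/sdb1
if [ -b "$DEV" ]; then
    sudo xfs_repair -n "$DEV"     # read the damage report before repairing
else
    echo "no $DEV on this host; skipping dry run"
fi
```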

Usually, the XFS journal replays itself during a mount and fixes minor hiccups automatically. In three years of heavy use, I’ve only needed xfs_repair twice. It’s remarkably resilient. Just remember the golden rule: XFS grows, it backs up fast, but it never shrinks.
