LVM on Linux: Flexible Disk Management Without the Headaches

Linux tutorial - IT technology blog

The Disk Space Problem That Keeps Coming Back

You provision a VPS, allocate 20GB to /var, and six months later your logs and databases have filled it completely. The traditional fix? Boot into rescue mode, resize partitions with fdisk, pray nothing breaks. I ran through that cycle twice in my first year of server management — and across 10+ servers over three years, it never stopped being annoying.

What I eventually learned: disk partitioning decisions made at setup time will always be wrong eventually. You either over-allocate and waste space, or under-allocate and scramble later. LVM (Logical Volume Manager) exists to break that cycle. It separates the physical disk layout from the logical partitions your OS sees, and that gap is what gives you real flexibility after the fact.

Core Concepts: How LVM Actually Works

LVM adds a thin abstraction layer between physical disks and mount points. Three terms cover the whole model:

Physical Volume (PV)

A PV is a raw disk or partition handed to LVM. It could be /dev/sdb, /dev/sdc1, or even a RAID array. LVM writes its own metadata to the PV so it can track what’s available.

Volume Group (VG)

Think of a VG as a storage pool. You add multiple PVs into a VG, and LVM treats the combined space as one large, flexible bucket. Need more space in the pool later? Add another disk — no reboot, no data migration, no drama.

Logical Volume (LV)

An LV is what your OS actually mounts. It’s carved out of the VG pool. Need 10GB for /var? Create a 10GB LV, format it, mount it. Need 15GB next month? Extend it with one command. The filesystem sits on top of the LV and doesn’t care about the physical disks underneath.

Visually, the stack looks like this:

Physical Disks → Physical Volumes → Volume Group → Logical Volumes → Filesystems
/dev/sdb       → PV               → pool_vg      → lv_var (10G)   → ext4 at /var
/dev/sdc       → PV               →              → lv_home (20G)  → xfs at /home
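Each LV also shows up under two equivalent device paths, and tooling output uses them interchangeably: /dev/<vg>/<lv> and /dev/mapper/<vg>-<lv>. In the mapper form, any hyphen inside a VG or LV name is doubled so the separator stays unambiguous. A small sketch of that naming rule (mapper_name is an illustrative helper, not an LVM command):

```shell
# Derive the /dev/mapper path for a given VG/LV pair.
# device-mapper doubles hyphens that occur inside the names themselves,
# so the single "-" between VG and LV stays unambiguous.
mapper_name() {
  vg=$(printf '%s' "$1" | sed 's/-/--/g')
  lv=$(printf '%s' "$2" | sed 's/-/--/g')
  printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

mapper_name data_vg lv_var    # → /dev/mapper/data_vg-lv_var
mapper_name my-vg my-lv       # → /dev/mapper/my--vg-my--lv
```

Both paths point at the same device node, so either works in /etc/fstab and mount commands.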

Hands-On: Setting Up LVM from Scratch

Step 1: Install LVM Tools

On Debian/Ubuntu:

sudo apt update && sudo apt install lvm2 -y

On RHEL/AlmaLinux/Rocky:

sudo dnf install lvm2 -y

Step 2: Create Physical Volumes

Assuming you’ve added two new disks (/dev/sdb and /dev/sdc):

sudo pvcreate /dev/sdb /dev/sdc
sudo pvs  # verify

The pvs output will show both disks, their sizes, and which VG they belong to (blank for now).

Step 3: Create a Volume Group

sudo vgcreate data_vg /dev/sdb /dev/sdc
sudo vgs  # verify

Done. data_vg now holds the combined space of both disks as one pool.
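One detail worth knowing before carving out LVs: a VG allocates space in fixed-size physical extents (PEs), 4 MiB by default (vgdisplay shows the actual value), and every LV is a whole number of extents. A quick sketch of the rounding arithmetic, with extents_for as a hypothetical helper:

```shell
# How many physical extents a request consumes, rounding up to whole
# extents. $1 = requested size in MiB, $2 = PE size in MiB.
extents_for() {
  awk -v s="$1" -v pe="$2" 'BEGIN { print int((s + pe - 1) / pe) }'
}

extents_for 15360 4    # a 15G LV at the default 4 MiB PE size → 3840
extents_for 10 4       # a 10 MiB request still occupies 3 extents → 3
```

This is why lvs sometimes reports a size slightly larger than what you asked for: requests get rounded up to the extent boundary.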

Step 4: Create Logical Volumes

# Create a 15GB LV for /var
sudo lvcreate -L 15G -n lv_var data_vg

# Create a 25GB LV for /home
sudo lvcreate -L 25G -n lv_home data_vg

# Use remaining free space for a data volume
sudo lvcreate -l 100%FREE -n lv_data data_vg

sudo lvs  # verify all three

Step 5: Format and Mount

sudo mkfs.ext4 /dev/data_vg/lv_var
sudo mkfs.ext4 /dev/data_vg/lv_home
sudo mkfs.xfs /dev/data_vg/lv_data

sudo mkdir -p /var/lvm_var /home/lvm_home /data
sudo mount /dev/data_vg/lv_var /var/lvm_var
sudo mount /dev/data_vg/lv_home /home/lvm_home
sudo mount /dev/data_vg/lv_data /data

For persistent mounts across reboots, add these lines to /etc/fstab:

/dev/data_vg/lv_var   /var/lvm_var   ext4  defaults  0 2
/dev/data_vg/lv_home  /home/lvm_home ext4  defaults  0 2
/dev/data_vg/lv_data  /data          xfs   defaults  0 2

The Operations That Actually Save You

Extending a Logical Volume (Live)

This operation is the whole reason to use LVM. Extend lv_var by 5GB while it's mounted and in use:

# Extend the LV
sudo lvextend -L +5G /dev/data_vg/lv_var

# Resize the filesystem to use the new space (ext4)
sudo resize2fs /dev/data_vg/lv_var

# For XFS (can only grow, not shrink)
sudo xfs_growfs /var/lvm_var

No downtime. No reboot. The filesystem is live the entire time.
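Worth knowing: lvextend's -r (--resizefs) flag runs the matching filesystem grow tool (resize2fs or xfs_growfs) for you, collapsing the two-step dance into one command. A sketch with a DRYRUN guard that just prints the command so you can review it before running for real (the run wrapper is an illustrative helper):

```shell
# DRYRUN=1 prints what would be executed instead of executing it.
DRYRUN=1
run() { if [ "$DRYRUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

# -r tells lvextend to grow the filesystem too, in the correct order.
run sudo lvextend -r -L +5G /dev/data_vg/lv_var
```

Set DRYRUN=0 (or drop the wrapper entirely) once you're happy with what it's about to do.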

Adding a New Disk to an Existing VG

Your pool is running low and you’ve attached a new disk (/dev/sdd):

sudo pvcreate /dev/sdd
sudo vgextend data_vg /dev/sdd
sudo vgs  # confirm new total size

That space is immediately available for LV extension. On a production server I timed this once — start to finish, under 30 seconds.
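Rather than waiting for a pool to run dry, you can alert on VG free space. The live numbers come from something like `sudo vgs --units m --noheadings --nosuffix -o vg_size,vg_free data_vg` (those vgs fields are real); the threshold logic itself can be sketched with a hypothetical helper:

```shell
# Percentage of a VG still free. $1 = total MB, $2 = free MB,
# as vgs -o vg_size,vg_free would report them.
vg_free_pct() {
  awk -v t="$1" -v f="$2" 'BEGIN { printf "%d", (f / t) * 100 }'
}

# Example with made-up numbers: a 40 GiB pool with 4 GiB free.
pct=$(vg_free_pct 40960 4096)
echo "free: ${pct}%"
if [ "$pct" -lt 15 ]; then
  echo "data_vg is low on space: time for pvcreate + vgextend"
fi
```

Drop something like this into a cron job and you get the new disk attached before anything actually fills up.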

Shrinking a Logical Volume

Shrinking is riskier and needs more care. Always unmount first, always back up your data. XFS cannot shrink at all — this is ext4 only.

# Unmount first
sudo umount /var/lvm_var

# Check filesystem
sudo e2fsck -f /dev/data_vg/lv_var

# Shrink filesystem BEFORE shrinking LV
sudo resize2fs /dev/data_vg/lv_var 10G

# Then shrink the LV
sudo lvreduce -L 10G /dev/data_vg/lv_var

# Remount
sudo mount /dev/data_vg/lv_var /var/lvm_var

Hard rule: shrink the filesystem to the target size (or smaller) first, then shrink the LV. Reverse that order and you corrupt data. (Newer lvreduce versions accept -r/--resizefs to perform both steps in the correct order for you.) Either way, run through this on a throwaway VM before touching production. I got the order wrong on a dev server at 2am once, which was painful enough.
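The ordering rule can be encoded as a guard you run before ever typing lvreduce. safe_to_reduce is a hypothetical helper: it takes the current filesystem size and the target LV size (both in MB) and only approves the reduce once the filesystem already fits:

```shell
# Refuse an lvreduce unless the filesystem has been shrunk to fit.
# $1 = current filesystem size in MB, $2 = target LV size in MB.
safe_to_reduce() {
  fs_mb=$1; target_lv_mb=$2
  if [ "$fs_mb" -le "$target_lv_mb" ]; then
    echo "ok: filesystem (${fs_mb}M) fits in target LV (${target_lv_mb}M)"
  else
    echo "ABORT: shrink the filesystem below ${target_lv_mb}M first"
    return 1
  fi
}

safe_to_reduce 10240 10240            # fs already resized to 10G: approved
safe_to_reduce 15360 10240 || true    # fs still 15G: refuses
```

A two-line function like this is cheap insurance against the one LVM mistake that actually destroys data.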

LVM Snapshots for Backup

Snapshots capture a point-in-time copy of an LV. Most useful right before a risky operation — a major package upgrade, a config overhaul, anything you might want to roll back:

# Create a 2GB snapshot of lv_var
sudo lvcreate -L 2G -s -n lv_var_snap /dev/data_vg/lv_var

# Mount the snapshot read-only (create the mount point first)
sudo mkdir -p /mnt/snapshot
sudo mount -o ro /dev/data_vg/lv_var_snap /mnt/snapshot

# Remove snapshot when done
sudo umount /mnt/snapshot
sudo lvremove /dev/data_vg/lv_var_snap

Snapshots use copy-on-write — only the blocks that change after creation get written to the snapshot volume, not a full copy. Keep them short-lived. Once a snapshot overflows its allocated space, LVM marks it invalid and disables it automatically. You lose the recovery point and the allocated space goes to waste. If the source LV is actively written to, allocate at least 15–20% of its size for the snapshot.
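A snapshot's fill level is visible through the snap_percent field (e.g. `sudo lvs --noheadings -o snap_percent data_vg/lv_var_snap`), which makes a watchdog easy to cron. A sketch of the threshold logic, with snap_check as a hypothetical helper fed that percentage:

```shell
# Warn before a snapshot overflows and gets invalidated.
# $1 = snapshot fill percentage as reported by lvs -o snap_percent.
snap_check() {
  awk -v p="$1" -v limit=80 'BEGIN {
    if (p >= limit) print "WARN: snapshot " p "% full -- extend or remove it"
    else            print "snapshot at " p "%"
  }'
}

snap_check 35
snap_check 92
```

If the warning fires, either lvextend the snapshot volume or merge/remove it before it hits 100%.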

Useful Inspection Commands

Commands worth running regularly when checking on an LVM setup:

  • pvdisplay — detailed info on physical volumes
  • vgdisplay — VG size, free space, PE count
  • lvdisplay — LV paths, sizes, snapshot status
  • lsblk — clean tree view of all block devices including LVs
  • df -h — filesystem usage (mounts)

# Quick summary of everything LVM
sudo pvs && sudo vgs && sudo lvs

LVM in Practice: What Actually Matters

LVM solves the core problem with traditional disk partitioning: you no longer have to get it right at setup time. Start with a reasonable allocation, then resize as actual usage patterns emerge — without any production downtime.

Three commands are worth committing to memory: lvextend + resize2fs for live resize, pvcreate + vgextend for adding a disk to the pool, and lvcreate -s for snapshotting before risky changes. Those three cover 90% of real-world LVM work.

New server? Set it up with LVM from day one, even if you’re certain you’ll never resize. The setup cost is near-zero. The upside, when you eventually need it, is enormous.
