Mastering Chroot on Linux: System Rescue and Recovery from a Live USB

Linux tutorial - IT technology blog

Quick Start: Get Into Chroot in Under 5 Minutes

Your server won’t boot. It’s 2 AM. You have a Live USB and a deadline. Here’s the fastest path to a working chroot environment.

Boot from your Live USB (Ubuntu, Debian, Arch — doesn’t matter much), open a terminal, then run:

# Find your root partition
lsblk

# Mount the broken system's root partition
mount /dev/sda2 /mnt

# Mount essential virtual filesystems
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
mount --bind /run /mnt/run

# Jump into the broken system
chroot /mnt /bin/bash

That’s it. You’re now operating inside the broken system’s filesystem as root. Reset passwords, reinstall the bootloader, fix broken packages — anything you’d normally do on a live system is now on the table.

One gotcha before moving on: LVM setups or systems with the root partition on a non-obvious disk may look different in lsblk output. When in doubt, cross-check with fdisk -l or blkid to confirm which device you actually want to mount.
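On LVM systems, the logical volumes may not even be activated yet under the Live environment, so they won't show up in lsblk at all until you bring the volume group online. A quick sketch, assuming the lvm2 tools are present on the Live USB:

```shell
# Activate any LVM volume groups found on the attached disks
vgscan          # scan block devices for volume-group metadata
vgchange -ay    # activate every VG so its LVs get /dev/mapper nodes

# The root LV now appears alongside regular partitions
lsblk -f
```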

What Chroot Actually Does Under the Hood

Chroot — short for “change root” — is a Unix system call, wrapped by a command of the same name, that remaps the root directory for the calling process and everything it spawns. A process inside a chroot sees /mnt as /. Nothing above that mount point is visible.

For rescue work, that’s exactly what you want. Package managers, bootloader tools, and config files all behave as if the broken system is running normally — because from their perspective, it is.
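You can watch the remapping directly: chroot takes any command, not just an interactive shell, and that command sees the mounted system's root as /. For example:

```shell
# Runs ls inside the chroot; the listing shows the broken
# system's root directory, not the Live USB's
chroot /mnt /bin/ls /
```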

Why the Virtual Filesystem Mounts Are Non-Negotiable

Those --bind mounts for /dev, /proc, /sys, and /run aren’t decoration. Skip one and things break in ways that look completely unrelated to the missing mount:

  • /dev — no device nodes; grub-install fails silently without it
  • /proc — process info unavailable; most system tools either crash or return garbage
  • /sys — kernel interface gone; hardware-related operations stop working
  • /run — runtime data missing; anything touching systemd will misbehave

Miss one of these and you’ll burn 20 minutes chasing cryptic errors before you realize the real cause.
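A small pre-flight check catches a forgotten bind mount before it costs you those 20 minutes. This is a sketch, not part of any standard tool: the check_binds name is made up here, and it relies on the mountpoint utility from util-linux.

```shell
# Report which of the four virtual filesystems are NOT yet
# bind-mounted under the given root (e.g. /mnt)
check_binds() {
  local root="$1" missing=""
  for fs in dev proc sys run; do
    # mountpoint -q succeeds only if the path is an active mount point
    mountpoint -q "$root/$fs" || missing="$missing $fs"
  done
  missing="${missing# }"    # trim the leading space
  echo "$missing"
}

# Example: warn before chrooting
missing=$(check_binds /mnt)
[ -z "$missing" ] || echo "Missing bind mounts: $missing"
```

An empty result means all four mounts are in place and it's safe to chroot.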

Handling EFI and Separate Boot Partitions

Most modern systems use UEFI with a dedicated EFI System Partition (ESP). Fixing the bootloader requires mounting it too:

# Find the EFI partition (usually FAT32, around 512MB)
lsblk -f | grep -i vfat

# If /boot is a separate partition, mount it first
mount /dev/sda3 /mnt/boot

# Then mount the ESP inside it (on systems without a separate
# /boot, just mount the ESP directly at /mnt/boot/efi)
mount /dev/sda1 /mnt/boot/efi

Always sanity-check the layout against the broken system’s own fstab before trusting what the device names suggest:

cat /mnt/etc/fstab
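If the fstab is long, you can pull out just the mount point and device columns. The awk one-liner below skips comments and blank lines; the sample file and its UUIDs are made-up values for illustration, so on a real rescue you would point the awk command at /mnt/etc/fstab instead.

```shell
# Build a sample fstab (hypothetical UUIDs) to demonstrate the extraction
cat > /tmp/sample-fstab <<'EOF'
# /etc/fstab: static file system information
UUID=0a1b2c3d-0000-0000-0000-000000000000 /         ext4 errors=remount-ro 0 1
UUID=AB12-CD34                            /boot/efi vfat umask=0077        0 1
EOF

# Print "mountpoint device" for every real entry, skipping comments
awk '!/^ *#/ && NF >= 2 {print $2, $1}' /tmp/sample-fstab
```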

Real Rescue Scenarios with Working Commands

Scenario 1: Reset a Forgotten Root Password

The most common reason anyone pulls out a Live USB. Once you’re in chroot, it’s two commands:

passwd root
# or for a specific user:
passwd username

No single-user mode gymnastics. No kernel boot parameters to juggle. Just done.

Scenario 2: Reinstall GRUB After a Borked Update

This bit me on a production Ubuntu 22.04 server — a kernel update collided with a power cut mid-install. The system came up to a GRUB rescue prompt with no obvious way forward. Two commands from inside chroot fixed it completely:

# Inside chroot (BIOS/MBR systems)
grub-install /dev/sda
update-grub

UEFI systems need a slightly different invocation:

grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
update-grub

Working from the GRUB rescue prompt directly is painful. Its environment is stripped-down; half the commands you’d reach for aren’t there. Chroot gives you the full system toolset, which makes the difference between a 10-minute fix and an hour of guesswork.
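On UEFI systems it's worth confirming that the firmware actually registered the new boot entry. efibootmgr reads the entries straight from NVRAM; it needs the /sys bind mount from earlier to reach efivars, and the entry name depends on the --bootloader-id you passed.

```shell
# List UEFI boot entries; the bootloader-id used with
# grub-install (e.g. "ubuntu") should appear in the output
efibootmgr -v
```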

Scenario 3: Fix a Broken Package Manager

An interrupted apt upgrade or dpkg run leaves the package database half-configured. From inside chroot:

# Resume interrupted installations
dpkg --configure -a

# Force reinstall a specific broken package
apt-get install --reinstall linux-image-generic

# If apt itself is the problem
dpkg --audit
apt-get -f install

RPM-based systems (RHEL, AlmaLinux, Fedora) have their own equivalents:

rpm --rebuilddb
dnf reinstall systemd

Scenario 4: Recover from a Bad /etc/fstab Edit

A single wrong character in /etc/fstab can drop a system into emergency mode at boot. The culprit is usually a mistyped UUID. From chroot, it’s a straightforward file edit:

nano /etc/fstab
# or
vim /etc/fstab

Get the correct UUIDs from blkid — it lists every block device and its UUID so you can copy-paste without guessing:

blkid

Scenario 5: Unlock a LUKS Encrypted Volume Before Chroot

Full-disk encryption adds one step before you can mount anything:

# Open the LUKS container
cryptsetup open /dev/sda2 cryptroot

# Mount the decrypted volume
mount /dev/mapper/cryptroot /mnt

# If LVM sits on top of LUKS
vgscan
vgchange -ay
mount /dev/mapper/ubuntu--vg-ubuntu--lv /mnt
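When you finish with an encrypted system, tear things down in reverse order, or cryptsetup will refuse to close the container because the device is still busy. A sketch, assuming the same cryptroot name used above:

```shell
# Reverse of the setup: unmount, deactivate LVM, then close LUKS
umount /mnt
vgchange -an    # only needed if LVM sat on top of LUKS
cryptsetup close cryptroot
```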

Practical Tips That Actually Save You at 2 AM

Keep a Rescue Script Handy

Typing those mount commands from memory under pressure is how typos happen. Save this script to a USB stick or your password manager’s notes section — it handles mounting and cleanup automatically:

#!/bin/bash
# rescue-chroot.sh — mount and enter chroot
set -e

ROOT_PART="${1:-/dev/sda2}"
MOUNT_POINT="/mnt"

cleanup() {
  # Unmount in reverse order; runs even if chroot exits with an error
  for fs in run sys proc dev; do
    umount "$MOUNT_POINT/$fs" 2>/dev/null || true
  done
  umount "$MOUNT_POINT" 2>/dev/null || true
  echo "Unmounted cleanly."
}

mount "$ROOT_PART" "$MOUNT_POINT"
trap cleanup EXIT

for fs in dev proc sys run; do
  mount --bind "/$fs" "$MOUNT_POINT/$fs"
done

echo "Entering chroot on $ROOT_PART..."
chroot "$MOUNT_POINT" /bin/bash

Usage: bash rescue-chroot.sh /dev/sda2

Always Unmount Cleanly

When you’re done, exit the chroot shell and unmount in reverse order:

exit

umount /mnt/run
umount /mnt/sys
umount /mnt/proc
umount /mnt/dev
umount /mnt

Skipping this leaves filesystem journals dirty. Best case: extra fsck time on next boot. Worst case: actual filesystem corruption. Neither is a fun outcome at 3 AM.
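Newer util-linux versions also offer a recursive unmount that replaces the whole sequence with one command, handling the bind mounts and any nested mounts like /mnt/boot/efi in the right order:

```shell
# Recursively unmount /mnt and everything mounted beneath it
umount -R /mnt
```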

DNS Won’t Work Inside Chroot — Here’s the Fix

If you need to download packages or run updates from inside chroot, name resolution is broken by default. One command fixes it:

cp /etc/resolv.conf /mnt/etc/resolv.conf

This copies the Live environment’s DNS config into the chroot so apt and dnf can reach the internet normally.
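One caveat, depending on the distro: on systemd-resolved systems the target's /etc/resolv.conf is often a symlink into /run, which dangles inside the chroot and can make the plain cp fail. A small sketch that handles both cases (the fix_resolv name is made up for this example):

```shell
# Replace the target's resolv.conf with a real copy of the Live
# environment's DNS config, even if the original is a dangling symlink
fix_resolv() {
  local target="$1"                  # root of the mounted system, e.g. /mnt
  rm -f "$target/etc/resolv.conf"    # removes a stale symlink or file
  cp /etc/resolv.conf "$target/etc/resolv.conf"
}

# Usage on a real rescue: fix_resolv /mnt
```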

Exec Format Error: The Architecture Mismatch Problem

Getting chroot: failed to run command '/bin/bash': Exec format error? Your Live USB and the target system have different CPU architectures. Common scenario: trying to chroot into a Raspberry Pi or ARM server image from an x86_64 laptop. The fix requires QEMU user-mode emulation:

# On the Live system, not inside the chroot
apt-get install qemu-user-static
cp /usr/bin/qemu-aarch64-static /mnt/usr/bin/
chroot /mnt /bin/bash

This comes up more often than you’d expect once you start managing ARM hardware alongside regular x86 servers.
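If the Exec format error persists after installing QEMU, check whether the kernel's binfmt_misc handler for the target architecture is registered. This path assumes an aarch64 target and that binfmt_misc is mounted, which qemu-user-static normally sets up on install:

```shell
# An "enabled" first line means ARM64 binaries get routed to QEMU
cat /proc/sys/fs/binfmt_misc/qemu-aarch64
```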

Verify You’re Actually Inside the Right System

A quick sanity check after entering chroot costs 10 seconds and prevents a lot of confusion:

# Should show the broken system's hostname, not the Live USB's
hostname

# Should show the broken system's OS release
cat /etc/os-release

# Should list the broken system's installed packages
dpkg -l | head -20

If any of these show Live USB data instead of the broken system’s, the mount went wrong. Go back and confirm you targeted the correct partition — it’s almost always a case of /dev/sda2 vs /dev/sdb2 when booting from USB.
