Configuring iSCSI Storage on Linux: A Practical Guide to Target and Initiator Setup

Linux tutorial - IT technology blog

Storage performance hits a wall sooner than you think. Whether you’re fine-tuning a 3-node HomeLab or overseeing 50+ production servers, local disks eventually run out of steam. While NFS or Samba shares work for basic file storage, they struggle when you need raw block-level access for Proxmox virtual disks or high-transaction MySQL databases. Enter iSCSI.

I’ve handled data migrations for years, and iSCSI is my reliable pick for high-performance network storage because it treats remote disks like local hardware. Think of it as a virtual 50-meter extension cord for your hard drive. In this guide, we’ll set up an iSCSI Target (the storage provider) and an iSCSI Initiator (the storage consumer) using standard Linux tools.

The Architecture: Why Block Storage?

Standard NAS setups use file-level sharing. This means the server manages the filesystem (like EXT4 or ZFS). With iSCSI, the server—the Target—simply hands over raw blocks of data. The client—the Initiator—is the one that formats and mounts the drive.

This separation is powerful. If you’re running a Kubernetes cluster, your nodes need block storage they can format and manage themselves. Since iSCSI encapsulates SCSI commands in TCP/IP packets, you can run this traffic over existing 1GbE or 10GbE Ethernet infrastructure without buying expensive Fibre Channel hardware.

Installation: Preparing the Environment

We’ll use two machines for this setup. I recommend Ubuntu 22.04 or AlmaLinux 9. One acts as the Target (Storage Server) and the other as the Initiator (Client).

1. On the Target Server

We’ll use targetcli-fb. It’s a specialized shell that manages the Linux kernel’s LIO subsystem.

# Ubuntu/Debian
sudo apt update && sudo apt install targetcli-fb -y
sudo systemctl enable --now rtslib-fb-targetctl   # restores the LIO config at boot

# RHEL/AlmaLinux
sudo dnf install targetcli -y
sudo systemctl enable --now target

2. On the Initiator Client

The client needs the discovery daemon and login utilities.

# Ubuntu/Debian
sudo apt install open-iscsi -y

# RHEL/AlmaLinux
sudo dnf install iscsi-initiator-utils -y

Configuring the iSCSI Target (The Server)

We need to define what we’re sharing. While you can share a raw file, using LVM (Logical Volume Management) is much smarter. It allows you to expand a 100GB volume to 200GB in seconds without unmounting. Open the configuration console:

sudo targetcli
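As an aside, if you’d rather export an LVM volume than a raw disk, you’d prepare it before opening targetcli. A sketch, assuming /dev/sdb is an empty disk; the vg_iscsi and lv_disk1 names are purely illustrative:

```shell
# Register the disk as an LVM physical volume
sudo pvcreate /dev/sdb
# Create a volume group on it
sudo vgcreate vg_iscsi /dev/sdb
# Carve out a 100GB logical volume to use as the backstore
sudo lvcreate -L 100G -n lv_disk1 vg_iscsi

# Later, growing it online is a one-liner:
sudo lvextend -L 200G /dev/vg_iscsi/lv_disk1
```

In Step 1 below, you’d then point the backstore at /dev/vg_iscsi/lv_disk1 instead of /dev/sdb. Note that after growing the LV, the client still needs a session rescan (`sudo iscsiadm -m session --rescan`) and a filesystem resize before it sees the extra space.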

Step 1: Create a Backstore

The backstore is the actual physical or logical storage device. Here, we’ll map a 20GB disk at /dev/sdb (a whole disk, not a partition); adjust the device path to match your hardware.

cd /backstores/block
create name=iscsi_disk1 dev=/dev/sdb

Step 2: Define the IQN

Every iSCSI device requires a unique iSCSI Qualified Name (IQN). It follows a specific format: iqn.yyyy-mm.reversed-domain:unique-id, where the date marks when the naming authority registered its domain.

cd /iscsi
create iqn.2024-05.com.itfromzero:storage01

Step 3: Map the LUN

A Logical Unit Number (LUN) links your backstore to the IQN you just created.

cd iqn.2024-05.com.itfromzero:storage01/tpg1/luns
create /backstores/block/iscsi_disk1

Step 4: Secure with ACLs

Don’t skip this step; the ACL is your primary defense. You must whitelist the specific Initiator allowed to connect. Grab the client’s IQN from /etc/iscsi/initiatorname.iscsi on the client machine first.

cd ../acls
create iqn.2024-05.com.itfromzero:client01
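On the client, that IQN lives in a one-line file shipped by the open-iscsi package:

```shell
# On the Initiator: print the client's IQN to whitelist in the ACL
sudo cat /etc/iscsi/initiatorname.iscsi
```

The output is a single InitiatorName= line; copy the value after the equals sign into the create command above.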

Step 5: Bind the Portal

The portal tells the server which IP to listen on. Use 0.0.0.0 to listen on all interfaces, typically on port 3260. Note that recent targetcli versions create a default 0.0.0.0:3260 portal automatically when you create the IQN; if the command complains that the portal already exists, you can skip this step.

cd ../portals
create 0.0.0.0 3260
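If a firewall is running on the Target, port 3260/tcp must be open or discovery will silently time out. A sketch for the two distro families used in this guide:

```shell
# RHEL/AlmaLinux (firewalld)
sudo firewall-cmd --permanent --add-port=3260/tcp
sudo firewall-cmd --reload

# Ubuntu/Debian (ufw, if enabled)
sudo ufw allow 3260/tcp
```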

Save your work and exit:

saveconfig
exit

Connecting the Initiator (The Client)

Now, jump back to your client. First, ensure your IQN matches the one you whitelisted in the Target’s ACL. Edit /etc/iscsi/initiatorname.iscsi:

InitiatorName=iqn.2024-05.com.itfromzero:client01

Restart the services to refresh the configuration (on RHEL/AlmaLinux, there is no open-iscsi unit; restart iscsid alone):

sudo systemctl restart iscsid open-iscsi

Discovery and Login

Query the Target for the storage it exports. Replace 192.168.1.100 with your server’s actual IP.

sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.100
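A successful discovery prints one line per exported target. The exact output depends on your portal IP and IQN, but with the values from this guide it looks roughly like this:

```shell
$ sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.100
192.168.1.100:3260,1 iqn.2024-05.com.itfromzero:storage01
```

If the command hangs instead, check the Target’s firewall and the ACL you created earlier.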

Once the discovery succeeds, log in to the specific target:

sudo iscsiadm -m node -T iqn.2024-05.com.itfromzero:storage01 -p 192.168.1.100 --login
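Whether the session is restored automatically after a client reboot depends on the node.startup setting, and distro defaults differ (Ubuntu’s open-iscsi tends to default to automatic, RHEL’s to manual). To set it explicitly for this target:

```shell
# Mark this target for automatic login at boot
sudo iscsiadm -m node -T iqn.2024-05.com.itfromzero:storage01 \
  -p 192.168.1.100 --op update -n node.startup -v automatic
```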

Check lsblk. You’ll see a brand-new drive, likely /dev/sdc. It’s ready for partitioning, formatting, and mounting.
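From here the new device behaves like any local disk. A minimal sketch, assuming it appeared as /dev/sdc (always confirm with lsblk first, since device names depend on your existing disks):

```shell
# Partition, format, and mount the new iSCSI disk
sudo parted -s /dev/sdc mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdc1
sudo mkdir -p /mnt/iscsi-storage
sudo mount /dev/sdc1 /mnt/iscsi-storage
```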

Verification & Persistent Mounting

I’ve managed dozens of Linux VPS instances over the years and learned one lesson the hard way: never assume a connection is stable; verify it. Run this to check your session details:

sudo iscsiadm -m session -P 3

Watch for errors. Network “blips” on a busy 1GbE line can drop iSCSI disks or force them into read-only mode. If you see latency spikes, consider enabling Jumbo Frames (MTU 9000).
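Jumbo Frames only help if every hop agrees: both NICs and every switch in between must accept MTU 9000, or you’ll trade latency spikes for silent packet drops. A sketch for testing it; eth1 is an assumption, substitute your storage interface:

```shell
# Temporarily raise the MTU on the storage NIC (not persistent across reboots)
sudo ip link set dev eth1 mtu 9000

# Verify the path passes 9000-byte frames without fragmenting:
# 8972 = 9000 minus 28 bytes of IP + ICMP headers
ping -M do -s 8972 192.168.1.100
```

Make the change persistent via your distro’s network configuration (netplan, NetworkManager, etc.) only after the ping test succeeds.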

The Reboot Test

To keep your mount alive after a reboot, add the _netdev option to the entry in /etc/fstab. This prevents the system from trying to mount the drive before the network is fully initialized.

UUID=550e8400-e29b-41d4-a716-446655440000 /mnt/iscsi-storage ext4 defaults,_netdev 0 0
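The UUID above is a placeholder; read yours from the filesystem and dry-run the entry before trusting a reboot. Assuming the ext4 partition is /dev/sdc1:

```shell
# Print the filesystem UUID to paste into /etc/fstab
sudo blkid /dev/sdc1

# Mount everything in fstab that isn't mounted yet -- errors here
# are far cheaper than a server stuck at boot
sudo mount -a
```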

iSCSI is incredibly robust if your network is up to the task. For production environments, I always isolate iSCSI traffic on its own VLAN or a dedicated physical switch. This prevents a sudden backup job from tanking your database performance. Happy building!
