Beyond the “Network is Slow” Support Ticket
We’ve all been there. A ticket arrives with the vague complaint that “the network is slow,” and you’re left staring at a blank terminal. It’s a frustrating starting point. Is it a bad Ethernet cable, a saturated 10G interface, or a misconfigured BGP route? To fix it, you need hard data, not just a hunch.
When I troubleshoot performance, I focus on three metrics: Throughput, Latency, and Jitter. Throughput measures the actual data moving through the pipe. Latency tracks the time a packet takes to travel from point A to B. Jitter measures how much that latency fluctuates. Mastering these metrics is a survival skill for any admin. Without them, you are flying blind during an outage.
Defining the Metrics That Matter
It is common to see engineers confuse bandwidth with throughput. Think of bandwidth as the number of lanes on a highway. Throughput is how many cars actually pass a point per second. On a 1Gbps link, you’ll rarely see 1000Mbps of data transfer. Due to TCP overhead and headers, a “clean” 1Gbps link usually tops out at roughly 940Mbps.
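The 940Mbps figure isn't magic; it falls out of per-frame overhead. Here's a back-of-the-envelope sketch in Python, assuming standard Ethernet framing and a TCP header with the timestamps option (common on Linux):

```python
# Why a "clean" 1 Gbps link tops out near 940 Mbps of TCP goodput.
LINK_BPS = 1_000_000_000   # 1 Gbps line rate
MTU = 1500                 # standard Ethernet MTU

# Per-frame overhead on the wire (bytes):
eth_overhead = 8 + 14 + 4 + 12   # preamble, Ethernet header, FCS, inter-frame gap
ip_header = 20
tcp_header = 20 + 12             # base header + timestamps option

payload = MTU - ip_header - tcp_header   # 1448 bytes of actual data per frame
wire_bytes = MTU + eth_overhead          # 1538 bytes consumed on the wire
goodput = LINK_BPS * payload / wire_bytes
print(f"Theoretical TCP goodput: {goodput / 1e6:.0f} Mbps")  # ≈ 941 Mbps
```

Anything below that ballpark on an idle link suggests a real problem rather than protocol tax.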
Latency is your Round Trip Time (RTT). While high bandwidth is great for downloads, low latency is critical for database queries or VoIP. A 500ms delay might be fine for an email, but it will make an SSH session feel like it’s underwater.
Jitter is the variation in that delay. If your first packet takes 20ms and the next takes 150ms, your jitter is high. This is the primary enemy of real-time traffic. High jitter usually points to network congestion or buffers filling up sporadically on a router.
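To make that concrete, here's a toy calculation over hypothetical RTT samples, measuring jitter as the mean absolute delta between consecutive packets (a simplification of the smoothed estimator real tools use):

```python
# Hypothetical RTT samples in milliseconds; one congestion spike included.
rtts = [20.1, 21.0, 19.8, 150.3, 20.5, 22.0]

mean_rtt = sum(rtts) / len(rtts)
# Jitter: average absolute difference between back-to-back packets.
deltas = [abs(b - a) for a, b in zip(rtts, rtts[1:])]
jitter = sum(deltas) / len(deltas)
print(f"Mean RTT: {mean_rtt:.1f} ms, jitter: {jitter:.1f} ms")
```

Note how a single 150ms spike barely moves the mean RTT but sends the jitter figure through the roof; that's why averages alone hide real-time problems.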
Measuring Throughput with iperf3
iperf3 is the standard tool for measuring maximum achievable throughput. It uses a client-server model, so you need it installed on two machines: one to act as the listener and one to generate the traffic.
Setting Up the Server
On your target machine—perhaps a server in your data center—start iperf3 in server mode. It listens on port 5201 by default.
# Install iperf3
sudo apt update && sudo apt install iperf3 -y
# Start the server
iperf3 -s
Ensure your firewall allows traffic on port 5201. If you need to keep the test running for multiple sessions, I recommend running it inside a tmux or screen session.
Running the Client Test
Now, go to your local machine and point it at the server’s IP. Replace 1.2.3.4 with your actual server address.
# Standard 10-second TCP test
iperf3 -c 1.2.3.4
A single TCP stream often fails to saturate a 10Gbps link, usually because of a CPU bottleneck on a single core or an undersized TCP window.
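The window problem comes down to the bandwidth-delay product: one stream must keep an entire BDP's worth of data in flight to fill the pipe. A quick sketch, assuming an illustrative 20ms RTT:

```python
# Bandwidth-delay product: the in-flight data needed to keep a pipe full.
link_bps = 10_000_000_000   # 10 Gbps link
rtt_s = 0.020               # 20 ms round trip (illustrative)

bdp_bytes = link_bps / 8 * rtt_s   # bytes that must be "on the wire" at once
print(f"Required window: {bdp_bytes / 1024 / 1024:.1f} MiB in flight")
```

If the kernel's socket buffers can't grow to roughly that size, a lone stream stalls well short of line rate, which is exactly why spreading the load across parallel streams helps.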
Pro Tip: Parallel Streams
To really push the hardware, I always run multiple parallel streams. This mimics real-world conditions where dozens of users are hitting the network at once.
# Run 10 parallel streams
iperf3 -c 1.2.3.4 -P 10
Testing UDP for Jitter and Packet Loss
TCP hides network flaws by retransmitting lost packets. To see the raw health of your connection, use UDP. You must specify a target bandwidth because UDP doesn’t have built-in congestion control.
# Test with 100Mbps UDP traffic
iperf3 -c 1.2.3.4 -u -b 100M
Check the output for packet loss. If you see more than 0.5% loss on a local wired network, you likely have a failing cable or a duplex mismatch.
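A quick helper makes that threshold check mechanical; the sent/lost counts below are hypothetical, standing in for the datagram totals an iperf3 UDP run reports:

```python
# Sanity-check UDP loss figures against the 0.5% wired-LAN rule of thumb.
def loss_pct(lost: int, total: int) -> float:
    """Packet loss as a percentage of datagrams sent."""
    return 100.0 * lost / total

sent, lost = 85_000, 612   # hypothetical counters from a UDP test
pct = loss_pct(lost, sent)
verdict = "investigate cabling/duplex" if pct > 0.5 else "acceptable"
print(f"Loss: {pct:.2f}% -> {verdict}")
```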
Path Analysis with mtr
If iperf3 tells you the “how much,” mtr (My Traceroute) tells you the “where.” It combines ping and traceroute into a live, updating dashboard.
Why mtr is Superior
Standard traceroute sends only three packets per hop and quits. It’s easy to miss intermittent spikes that way. mtr keeps running, allowing you to see trends. If hop 3 shows 20% loss but hops 4 through 10 show 0%, ignore hop 3. That router is likely just rate-limiting ICMP traffic. However, if loss starts at hop 3 and continues to the destination, you’ve found your bottleneck.
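That decision rule is simple enough to encode. Here's a toy classifier over per-hop loss percentages (the sample inputs are made up, mimicking an mtr loss column):

```python
# Toy classifier for per-hop loss patterns, following the rule above:
# isolated mid-path loss is noise; loss that persists to the end is real.
def diagnose(loss_by_hop: list[float]) -> str:
    if loss_by_hop[-1] == 0:
        return "destination clean; mid-path loss is likely ICMP rate limiting"
    first_bad = next(i for i, loss in enumerate(loss_by_hop) if loss > 0)
    return f"loss persists to destination; suspect hop {first_bad + 1}"

print(diagnose([0, 0, 20.0, 0, 0, 0]))     # spike at hop 3 only -> ignore
print(diagnose([0, 0, 20.0, 18.5, 21.0]))  # loss continues -> hop 3 is the culprit
```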
Running a Live Scan
Install the ncurses version for the best interactive experience.
sudo apt install mtr -y
# Run mtr against a target
mtr google.com
Generating Evidence Reports
When I need to demonstrate a path issue to an ISP, I use report mode. This runs a fixed number of cycles and generates a clean table.
# Generate a report after 100 packets
mtr -rw -c 100 google.com
Focus on the StDev (Standard Deviation) column. A high StDev indicates your latency is jumping all over the place, which is a sign of an unstable path.
Lessons from the Data Center Floor
Always test in both directions. Internet routing is often asymmetrical. Your traffic might take a direct path to the server but hop through three different countries on the way back. Use the -R flag in iperf3 to test the reverse direction.
# Test download speed (server to client)
iperf3 -c 1.2.3.4 -R
Watch your CPU usage. If you’re testing a 10G link on a cheap cloud instance, you might hit 100% CPU before you hit the network limit. If one CPU core is pegged, your throughput numbers aren’t accurate.
Finally, consider the MTU (Maximum Transmission Unit). If you’re using a VPN or GRE tunnel, large 1500-byte packets might get fragmented. This causes throughput to fall off a cliff. Use the -M flag in iperf3 to lower the MSS (Maximum Segment Size) and see if performance stabilizes.
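The arithmetic behind that MSS adjustment is straightforward. Assuming a GRE-over-IPv4 tunnel (24 bytes of encapsulation: an outer IP header plus a 4-byte GRE header), the usable segment shrinks like this:

```python
# How tunnel encapsulation shrinks the usable MSS (GRE-over-IPv4 example).
ETH_MTU = 1500
IP_HDR, TCP_HDR = 20, 20
GRE_OVERHEAD = 20 + 4   # outer IP header + basic GRE header

plain_mss = ETH_MTU - IP_HDR - TCP_HDR                 # 1460 on a normal path
tunnel_mss = ETH_MTU - GRE_OVERHEAD - IP_HDR - TCP_HDR # 1436 inside the tunnel
print(plain_mss, tunnel_mss)
```

If clamping iperf3's MSS to the tunnel value restores throughput, fragmentation was your culprit.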
Wrapping Up
Network troubleshooting shouldn’t be a guessing game. With iperf3, you can quantify exactly how much data your pipes can move. With mtr, you can pinpoint the exact router causing the delay. These two tools turn vague complaints into actionable data. Next time someone says the network is slow, you’ll have the report ready to show them exactly why.

