Quick Start: Capture Your First Packets in 5 Minutes
If you’ve ever had to debug a flaky microservice at 2 AM with zero access to Wireshark, you already know why tcpdump matters. It’s the first tool I reach for when something weird is happening on the network — and once you learn a handful of commands, you’ll stop thinking about the syntax and just use it.
First, check if tcpdump is installed:
tcpdump --version
If not, install it:
# Debian/Ubuntu
sudo apt install tcpdump
# RHEL/CentOS/Rocky
sudo dnf install tcpdump
Now capture live traffic on your primary interface:
sudo tcpdump -i eth0
Packets start flying by immediately. Hit Ctrl+C to stop. That’s it — you’re already sniffing traffic. Raw output isn’t very useful on its own, so let’s make it actionable.
The command I use most day-to-day caps the count and saves to file:
sudo tcpdump -i eth0 -c 100 -w capture.pcap
This captures exactly 100 packets and saves them to a .pcap file you can analyze later — even in Wireshark if you need richer protocol dissection.
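One quick sanity check on any capture file, assuming a classic libpcap format: the file begins with the 4-byte magic number 0xa1b2c3d4, stored on disk as d4 c3 b2 a1 on little-endian machines. You can verify a file without opening it in tcpdump (the bytes below are written by hand to stand in for a real capture):

```shell
# Write the little-endian pcap magic (0xd4 0xc3 0xb2 0xa1, octal escapes
# for portability) to a stand-in file; a real tcpdump -w capture starts
# with the same 4 bytes on x86
printf '\324\303\262\241' > /tmp/magic-check.bin

# Dump the first 4 bytes as hex
head -c 4 /tmp/magic-check.bin | od -An -tx1
# prints: d4 c3 b2 a1
```

If those four bytes don't match, the file was truncated or isn't a pcap at all.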
Deep Dive: Understanding Filters and Output
Choosing the Right Interface
List all available interfaces before you start capturing:
sudo tcpdump -D
# or
ip link show
On a server running Docker, you’ll see docker0, eth0, lo, and a handful of veth pairs. Pick the wrong one and you’ll miss your traffic entirely.
BPF Filters: The Real Power
tcpdump uses Berkeley Packet Filter (BPF) syntax. Get comfortable with it and you can isolate exactly the traffic you care about. These are the filters I actually reach for in production:
Filter by host:
sudo tcpdump -i eth0 host 192.168.1.50
Filter by port:
sudo tcpdump -i eth0 port 443
sudo tcpdump -i eth0 port 80 or port 443
Filter by protocol:
sudo tcpdump -i eth0 tcp
sudo tcpdump -i eth0 udp
sudo tcpdump -i eth0 icmp
Combine filters (src/dst + port):
# Traffic FROM a specific host on port 3306 (MySQL)
sudo tcpdump -i eth0 src host 10.0.0.5 and port 3306
# Traffic TO your app server on port 8080
sudo tcpdump -i eth0 dst host 10.0.0.10 and port 8080
Cut the noise — drop SSH so you don’t flood your own terminal:
sudo tcpdump -i eth0 not port 22
Reading Output
A typical tcpdump line looks like this:
14:32:01.123456 IP 10.0.0.5.54321 > 10.0.0.10.80: Flags [S], seq 1234567890, win 65535, length 0
14:32:01.123456 — timestamp with microsecond precision
IP 10.0.0.5.54321 > 10.0.0.10.80 — source IP:port → destination IP:port
Flags [S] — TCP flags: S=SYN, A=ACK, F=FIN, R=RST, P=PSH
length 0 — payload size (0 for pure control packets like SYN)
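When you need to script against this output, the default format splits cleanly on whitespace. A small awk sketch over the sample line above (field positions assume the default IPv4 TCP layout; other protocols shift them):

```shell
line='14:32:01.123456 IP 10.0.0.5.54321 > 10.0.0.10.80: Flags [S], seq 1234567890, win 65535, length 0'

# $1 = timestamp, $3 = src ip.port, $5 = dst ip.port (strip its trailing colon)
echo "$line" | awk '{gsub(/:$/, "", $5); print "src=" $3, "dst=" $5, "at", $1}'
# prints: src=10.0.0.5.54321 dst=10.0.0.10.80 at 14:32:01.123456
```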
Add -n to skip DNS resolution (faster output, less clutter) and -nn to also skip port name resolution:
sudo tcpdump -i eth0 -nn port 443
Advanced Usage: Real Production Scenarios
Debugging a Slow API
When an API endpoint feels sluggish but you can’t tell if it’s the network or the app, capture TCP handshakes to measure connection latency directly:
# Capture only SYN and SYN-ACK packets to measure connection setup time
sudo tcpdump -i eth0 -nn 'tcp[tcpflags] & tcp-syn != 0 and port 8080'
Long gaps between SYN and SYN-ACK mean the server is struggling to accept connections. SYN-ACK comes back fast but the response drags? The bottleneck is in your application, not the network.
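You can turn that eyeballing into a number by subtracting the SYN and SYN-ACK timestamps. A rough awk sketch over two canned capture lines (a live run would pipe tcpdump in directly; the math assumes both packets fall within the same day and ignores midnight rollover):

```shell
printf '%s\n' \
  '14:32:01.100000 IP 10.0.0.5.54321 > 10.0.0.10.8080: Flags [S], length 0' \
  '14:32:01.100850 IP 10.0.0.10.8080 > 10.0.0.5.54321: Flags [S.], length 0' |
awk '{split($1, t, ":"); ts = t[1]*3600 + t[2]*60 + t[3]}  # HH:MM:SS.micros -> seconds
     /Flags \[S\],/  {syn = ts}                            # bare SYN
     /Flags \[S\.\]/ {printf "%.3f ms handshake\n", (ts - syn) * 1000}'
# prints: 0.850 ms handshake
```

Sub-millisecond is normal on a LAN; tens of milliseconds or worse during load points at a saturated accept queue.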
Watching DNS Resolution in Real Time
sudo tcpdump -i eth0 -nn udp port 53
This one has saved me more than once. I caught a misconfigured container firing ~8,000 DNS requests per second at an internal resolver — nearly bringing it to its knees. Without live packet inspection, that kind of problem hides behind vague latency metrics and timeouts.
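To put a number on a flood like that, bucket queries by second. The same idea as a sketch over canned lines (live, you'd pipe the tcpdump command above straight into the awk):

```shell
printf '%s\n' \
  '14:32:01.000101 IP 10.0.0.7.41000 > 10.0.0.2.53: 1+ A? api.internal. (30)' \
  '14:32:01.500200 IP 10.0.0.7.41001 > 10.0.0.2.53: 2+ A? api.internal. (30)' \
  '14:32:02.100300 IP 10.0.0.7.41002 > 10.0.0.2.53: 3+ A? api.internal. (30)' |
awk -F. '{count[$1]++} END {for (s in count) print s, count[s]}' | sort
# prints one "HH:MM:SS count" pair per second:
# 14:32:01 2
# 14:32:02 1
```

Anything in the thousands per second from a single source is your misbehaving client.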
Capturing HTTP Headers (Unencrypted Traffic)
# -l line-buffers stdout (needed when piping), -A prints packet payload in ASCII
sudo tcpdump -i eth0 -l -A -nn port 80 | grep -E 'GET|POST|Host:|Content-Type:'
This only works for plain HTTP — for HTTPS you’d need mitmproxy or SSL key logging. That said, for internal services, legacy APIs, or container-to-container traffic on a private network, it’s extremely handy.
Saving and Reading Captures
# Save capture
sudo tcpdump -i eth0 -w /tmp/debug-$(date +%Y%m%d-%H%M%S).pcap not port 22
# Read it back later (no root needed for reading)
tcpdump -r /tmp/debug-20260401-143200.pcap
# Read with filters applied to saved file
tcpdump -r /tmp/debug-20260401-143200.pcap port 5432
Timestamping filenames is a habit I picked up after losing track of five different capture files during a late-night incident. Small thing, but it saves real confusion when you’re already stressed.
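The habit fits in a two-line helper. capfile here is a hypothetical name, not a standard tool, just a sketch of the pattern:

```shell
# Hypothetical helper: emit a fresh timestamped capture path on every call
capfile() { echo "/tmp/debug-$(date +%Y%m%d-%H%M%S).pcap"; }

capfile
# e.g. /tmp/debug-20260401-143200.pcap

# Usage: sudo tcpdump -i eth0 -w "$(capfile)" not port 22
```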
Rotating Captures for Long-Running Analysis
# Ring buffer: rotate at 100 MB, keep only the 10 most recent files
sudo tcpdump -i eth0 -w /tmp/capture.pcap -C 100 -W 10 not port 22
This is my go-to for leaving tcpdump running overnight on an intermittent issue. With -C and -W together, tcpdump overwrites the oldest file once it has ten, so you get a rolling ~1 GB window of recent traffic without filling up the disk. Time-based rotation with -G looks similar, but combined with -W it exits after hitting the file limit instead of wrapping — not what you want overnight.
Practical Tips from the Trenches
Tip 1: Always Exclude SSH When Capturing Interactively
Forget this and your terminal floods with your own SSH traffic — useless noise that buries the packets you actually care about:
sudo tcpdump -i eth0 not port 22 and not port 2222
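If you're not sure which port your own session came in on, $SSH_CONNECTION carries it: the format is "client_ip client_port server_ip server_port". A sketch that derives the exclusion from it (a sample value stands in below, since the variable only exists inside a real SSH session):

```shell
# Sample value; inside a real session sshd sets this for you
SSH_CONNECTION='203.0.113.9 52814 10.0.0.10 2222'

my_port=$(echo "$SSH_CONNECTION" | awk '{print $4}')   # server-side port
echo "not port $my_port"
# prints: not port 2222

# Then: sudo tcpdump -i eth0 not port "$my_port"
```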
Tip 2: Use -v, -vv, -vvv for More Detail
# -v: TTL, total length, IP options
# -vv: more NFS/SMB/DNS detail
# -vvv: full decode where possible
sudo tcpdump -i eth0 -vv icmp
Tip 3: Capture on the Docker Bridge for Container Traffic
# Find your Docker bridge interface
ip link show | grep docker
# Capture container-to-container traffic
sudo tcpdump -i docker0 -nn not port 22
This lets you inspect inter-container traffic without touching the host’s external interface — exactly what you need when debugging a misbehaving Docker Compose stack.
Tip 4: Pipe tcpdump into Shell Tools for Quick Pattern Matching
# Watch for connection resets in real time
sudo tcpdump -i eth0 -nn 'tcp[tcpflags] & tcp-rst != 0' 2>/dev/null
# Count packets per source IP (pipe to sort)
sudo tcpdump -i eth0 -nn -c 1000 2>/dev/null | \
awk '{print $3}' | cut -d'.' -f1-4 | sort | uniq -c | sort -rn | head -20
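That counting pipeline is easy to sanity-check offline before pointing it at live traffic. The same awk/cut/sort chain over canned lines:

```shell
printf '%s\n' \
  '14:32:01.000001 IP 10.0.0.5.54321 > 10.0.0.10.80: Flags [S], length 0' \
  '14:32:01.000002 IP 10.0.0.5.54322 > 10.0.0.10.80: Flags [S], length 0' \
  '14:32:01.000003 IP 10.0.0.9.41000 > 10.0.0.10.80: Flags [S], length 0' |
awk '{print $3}' | cut -d'.' -f1-4 | sort | uniq -c | sort -rn
# 10.0.0.5 shows up twice, 10.0.0.9 once, sorted by count
```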
Tip 5: Pair with tshark for Scripted Analysis
tshark is the CLI version of Wireshark — it reads .pcap files with much richer protocol dissection. Capture with tcpdump, analyze with tshark:
# Capture
sudo tcpdump -i eth0 -w capture.pcap port 5432
# Analyze PostgreSQL traffic with tshark
tshark -r capture.pcap -Y 'pgsql' -T fields -e frame.time -e ip.src -e ip.dst
I’ve used this combo to trace slow PostgreSQL queries across a network hop. The pcap made it immediately clear which queries were hammering the connection pool — something the application logs gave zero clues about.
Quick Reference Cheatsheet
tcpdump -i eth0 — capture on interface
tcpdump -i any — capture on ALL interfaces
tcpdump -nn — no DNS/port name resolution
tcpdump -c 100 — stop after 100 packets
tcpdump -w file.pcap — write to file
tcpdump -r file.pcap — read from file
tcpdump -A — show ASCII payload
tcpdump -X — show hex + ASCII payload
tcpdump -e — show Ethernet/MAC headers
tcpdump host X.X.X.X — filter by host
tcpdump port 443 — filter by port
tcpdump net 10.0.0.0/24 — filter by subnet
tcpdump rewards practice. Start on dev or staging — low pressure, no stakes. Get comfortable reading the output. When a real incident hits at 3 AM, you’ll reach for it automatically and have answers in minutes, not hours.

