Context & Why: Two Protocols Solving Different Problems
Every packet your application sends travels over either TCP or UDP. The choice — and whether it was intentional — shapes how your service handles load, flaky connections, and dropped packets. When you’re chasing a latency spike at 2am, knowing which protocol is at fault (and how to verify it on the wire) gets you to the root cause faster than any dashboard.
Both TCP and UDP live at Layer 4 (Transport Layer) of the OSI model, riding on top of IP. Their design goals, though, point in opposite directions.
TCP — Built for Reliability
TCP (Transmission Control Protocol) establishes a connection before any data flows, using a 3-way handshake:
Client → Server: SYN
Server → Client: SYN-ACK
Client → Server: ACK
(connection established, data starts flowing)
From that point, TCP guarantees packets arrive, arrive in order, and that the sender knows about any loss. Retransmission, flow control, and congestion control are all built in. That reliability has a real cost: TCP headers run 20–60 bytes, and every new connection spends a full round trip on the handshake before the client can send its first byte of application data. SSH, HTTP/HTTPS, PostgreSQL, and MySQL all accept that overhead because data integrity isn't optional.
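The handshake cost is easy to observe from Python by timing connect() against a local listener. This is an illustrative sketch (measure_handshake is a made-up helper name, not a real API); on loopback the number is tiny, but over a WAN it approaches a full RTT:

```python
import socket
import threading
import time

def measure_handshake():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # port 0: let the kernel pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    threading.Thread(target=srv.accept, daemon=True).start()
    t0 = time.perf_counter()
    # create_connection() returns only after SYN / SYN-ACK / ACK completes
    conn = socket.create_connection(("127.0.0.1", port))
    elapsed = time.perf_counter() - t0
    conn.close()
    srv.close()
    return elapsed

print(f"TCP connect took {measure_handshake() * 1000:.3f} ms")
```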
UDP — Built for Speed
UDP (User Datagram Protocol) skips all of that. No handshake, no acknowledgment, no retransmit. You stuff your payload into a datagram, fire it off, and move on. If it gets lost, the protocol doesn't care; that's your application's problem to handle (or ignore).
The payoff: UDP’s header is a flat 8 bytes, and there’s zero connection-setup overhead. DNS, VoIP, video streaming, online gaming, NTP — they use UDP because a dropped packet is far less painful than a 200ms retransmission stall freezing the user’s experience.
Side-by-side, here’s what actually matters in practice:
- Connection: TCP requires a handshake; UDP is connectionless
- Reliability: TCP retransmits lost packets; UDP doesn’t
- Order: TCP delivers in sequence; UDP can arrive out of order
- Speed: UDP has less overhead and lower latency
- Use case: TCP for data integrity; UDP for real-time performance
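The 8-byte header claim is concrete enough to verify by hand: a UDP header is exactly four 16-bit big-endian fields (source port, destination port, length, checksum). A quick sketch with Python's struct module, using arbitrary example ports:

```python
import struct

# UDP header: four 16-bit big-endian fields -- source port, destination
# port, total length (header + payload), and checksum. That's all of it.
payload = b"hello"
src_port, dst_port = 40000, 9001    # arbitrary example values
length = 8 + len(payload)           # header itself is always exactly 8 bytes
checksum = 0                        # 0 means "no checksum" over IPv4
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(len(header))   # 8
```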
Installation: Setting Up Your Testing Toolkit
Before experimenting hands-on, you need a few tools. On Debian/Ubuntu:
sudo apt update
sudo apt install -y netcat-openbsd tcpdump iproute2 python3
On RHEL/CentOS/Fedora:
sudo dnf install -y nmap-ncat tcpdump iproute python3
Verify everything is ready:
nc -h          # netcat-openbsd has no --version flag; ncat (RHEL) accepts --version
tcpdump --version
ss --version
netcat (nc) is your primary testing tool — it opens TCP or UDP connections straight from the shell with a single flag difference. tcpdump captures live traffic so you can see the actual protocol headers on the wire. ss (socket statistics) replaced netstat years ago and is noticeably faster on busy systems.
Configuration: Working with TCP and UDP in Practice
Testing TCP with Netcat
Open a TCP listener on port 9000 in one terminal:
nc -l 9000     # netcat-openbsd / ncat syntax; traditional GNU netcat wants -l -p 9000
Connect from another terminal (or another machine):
nc 127.0.0.1 9000
Type anything and it appears on the listener side. Behind the scenes, TCP negotiated the connection first. Kill the client with Ctrl+C and the server gets an EOF. That's TCP's graceful close: each side sends a FIN and acknowledges the other's, four segments in total.
Testing UDP with Netcat
Add the -u flag for UDP:
# Listener
nc -u -l 9001
# Sender (different terminal)
echo "hello via udp" | nc -u 127.0.0.1 9001
No connection setup. The packet either arrives or it doesn’t. Kill the listener and keep sending from the client — you won’t get an error. UDP has no way of knowing the other end is gone.
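You can see that asymmetry from Python too. On Linux, calling connect() on a UDP socket sends nothing, but it does let the kernel report an ICMP port-unreachable from a closed port back to you as ECONNREFUSED. A sketch, assuming port 9999 is closed on your machine:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("127.0.0.1", 9999))   # no packet sent; just pins the peer address
s.send(b"anyone there?")         # closed port -> kernel receives ICMP port unreachable
s.settimeout(1.0)
try:
    s.recv(1024)                 # the queued ICMP error surfaces here
    outcome = "reply"            # something was actually listening on 9999
except ConnectionRefusedError:
    outcome = "refused"          # Linux behavior for connected UDP sockets
except socket.timeout:
    outcome = "timeout"          # ICMP filtered, or a different network stack
s.close()
print(outcome)
```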
Socket Programming: TCP vs UDP in Python
At the API level, the difference is literally one constant. A minimal TCP server:
import socket
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:  # SOCK_STREAM = TCP
    s.bind(('0.0.0.0', 9000))
    s.listen(1)
    conn, addr = s.accept()  # blocks until a client connects
    with conn:
        data = conn.recv(1024)
        print(f"Received: {data.decode()}")
        conn.sendall(b"ACK")
The UDP equivalent — no accept(), no connection state:
import socket
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:  # SOCK_DGRAM = UDP
    s.bind(('0.0.0.0', 9001))
    data, addr = s.recvfrom(1024)  # no handshake, just wait for datagrams
    print(f"From {addr}: {data.decode()}")
    s.sendto(b"got it", addr)
SOCK_STREAM vs SOCK_DGRAM — that’s the whole decision at the API level. Everything else (reliability, ordering, retransmission) follows automatically from that one flag.
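To watch both round trips side by side, here's a self-contained loopback sketch (the helper names are mine, and ports are kernel-assigned to avoid collisions). Note how the client side differs only in socket type and in send/recv vs sendto/recvfrom:

```python
import socket
import threading

def tcp_echo_once():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    def serve():
        conn, _ = srv.accept()          # handshake completes here
        with conn:
            conn.sendall(conn.recv(1024))
        srv.close()
    threading.Thread(target=serve, daemon=True).start()
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"ping")
        return c.recv(1024)

def udp_echo_once():
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))
    port = srv.getsockname()[1]
    def serve():
        data, addr = srv.recvfrom(1024)  # no accept(), no connection state
        srv.sendto(data, addr)
        srv.close()
    threading.Thread(target=serve, daemon=True).start()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as c:
        c.settimeout(2.0)
        c.sendto(b"ping", ("127.0.0.1", port))
        return c.recvfrom(1024)[0]

print(tcp_echo_once(), udp_echo_once())
```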
Choosing the Right Protocol for Your Application
Simple rule: if losing a single packet breaks the user experience or corrupts data, use TCP. If a lost packet is a minor glitch — or gets replaced by the next update a fraction of a second later — UDP wins.
Concrete cases:
- REST API / database queries: TCP — you need the full response, in order
- DNS lookup: UDP — typical queries fit in one small datagram and resolve in <50ms; resolvers fall back to TCP when a response comes back truncated (TC bit set), i.e. larger than the classic 512-byte UDP limit or the EDNS0 buffer size the client advertised (commonly 1232–4096 bytes)
- Video call: UDP — at 30fps, a dropped frame is barely noticeable; a 500ms TCP retransmit stall freezes the call entirely
- File transfer (SCP, rsync): TCP — every byte must arrive intact
- Game state sync: UDP — position updates are superseded by the next packet anyway
- Log shipping (syslog): UDP is common; losing an occasional log line is usually acceptable
Verification & Monitoring: Watching TCP and UDP in Action
Listing Active Connections with ss
See all TCP connections currently on your system:
ss -tnp
UDP sockets (UDP has no handshake; unconnected sockets show UNCONN, while a UDP socket that has called connect() shows ESTAB):
ss -unp
Filter to a specific port:
ss -tnp sport = :443
ss -unp dport = :53
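On a busy box, a per-state tally is often more useful than scrolling the full socket list. A sketch (the awk field layout assumes column 1 of `ss -tan` is the state, which may shift slightly between iproute2 versions):

```shell
# Tally TCP sockets by state; NR>1 skips the header line.
ss -tan | awk 'NR>1 {count[$1]++} END {for (s in count) print s, count[s]}'
```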
Capturing Traffic with tcpdump
Capture TCP traffic on port 9000:
sudo tcpdump -i lo -v tcp port 9000
You’ll see the 3-way handshake clearly — SYN, SYN-ACK, ACK — before any data. With UDP:
sudo tcpdump -i lo -v udp port 9001
The first packet is data. No setup, no negotiation. That’s the core difference made visible at the wire level.
Checking TCP State with netstat (Legacy)
If ss isn’t available:
netstat -tnp | grep ESTABLISHED
netstat -unp
Monitoring TCP Retransmissions
High retransmission rates in production typically mean network congestion or packet loss somewhere in the path. Check with:
ss -s
# or
nstat -az TcpRetransSegs
Climbing retransmissions on a latency-sensitive service are worth chasing. Either track the congestion to its source, or ask whether that traffic pattern actually needs TCP at all.
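If you want the raw counters programmatically, /proc/net/snmp pairs a header line of field names with a line of values. A small Linux-only sketch (tcp_retrans_counters is a made-up helper name):

```python
def tcp_retrans_counters(path="/proc/net/snmp"):
    """Return (RetransSegs, OutSegs) from the kernel's TCP counters."""
    with open(path) as f:
        # The Tcp: section is two lines: field names, then values.
        tcp_lines = [line.split() for line in f if line.startswith("Tcp:")]
    header, values = tcp_lines[0], tcp_lines[1]
    stats = dict(zip(header[1:], (int(v) for v in values[1:])))
    return stats["RetransSegs"], stats["OutSegs"]

retrans, sent = tcp_retrans_counters()
print(f"{retrans} retransmitted of {sent} segments "
      f"({100 * retrans / max(sent, 1):.3f}%)")
```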
TCP and UDP have been part of the BSD socket API since 4.2BSD in 1983 — over 40 years, and still the foundation every network stack builds on. Engineers who can reach for the right one without hesitation, and verify their assumptions with tcpdump rather than guessing, consistently debug production issues faster and make sharper calls when reliability and latency are pulling in opposite directions.

