Why Your Network Needs a Proxy — and Why Squid
A few years back, I was managing a small office network where staff kept hitting the same large software update servers repeatedly, each machine downloading identical files independently. Bandwidth bills climbed. Response times degraded. And there was zero visibility into what was actually being accessed. That’s exactly the problem a forward proxy solves — and Squid is what fixed it for me.
Squid is an open-source caching and forwarding HTTP proxy that’s been around since the mid-90s. It sits between your clients and the internet, intercepting outbound requests. When two machines request the same resource, Squid serves the cached copy to the second machine instead of hitting the remote server again. Beyond caching, it enforces access policies: block categories of sites, restrict access by IP range, limit traffic by time of day, and log everything for audit purposes.
This guide covers a complete Squid setup on Ubuntu/Debian — installation, ACL configuration, caching tuning, and monitoring. Everything you need to actually run this in production.
Installation
Installing Squid on Ubuntu/Debian
Squid is in the default repositories, so installation is straightforward:
sudo apt update
sudo apt install squid -y
For RHEL/CentOS/Rocky Linux:
sudo dnf install squid -y
After installation, enable and start the service:
sudo systemctl enable squid
sudo systemctl start squid
sudo systemctl status squid
By default, Squid listens on port 3128. Verify it’s bound correctly:
ss -tlnp | grep 3128
You should see Squid listening on 0.0.0.0:3128 or :::3128.
Firewall Setup
Allow clients to reach the proxy port:
# UFW (Ubuntu)
sudo ufw allow 3128/tcp
# firewalld (RHEL/Rocky)
sudo firewall-cmd --permanent --add-port=3128/tcp
sudo firewall-cmd --reload
Configuration
The main configuration file is /etc/squid/squid.conf. It’s large and well-commented, but you only need to touch a handful of directives for a functional setup. Back it up before making any changes:
sudo cp /etc/squid/squid.conf /etc/squid/squid.conf.bak
Defining Who Can Use the Proxy (ACLs)
Squid’s access control is built around ACLs — named conditions you define, then reference in allow/deny rules. Open the config:
sudo nano /etc/squid/squid.conf
Find the ACL section and add your network range. If your LAN is 192.168.1.0/24:
# Define your local network
acl localnet src 192.168.1.0/24
# Allow that network
http_access allow localnet
# Deny everything else (this should already exist at the bottom)
http_access deny all
Order matters here. Squid processes http_access rules top to bottom and stops at the first match. Keep your deny all at the end.
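As a quick illustration of first-match processing, consider this hypothetical fragment (blocked_sites stands in for any deny ACL you define) — in the first arrangement the deny line is dead code for LAN clients:

```
# WRONG: LAN requests match "allow localnet" first, so the deny never fires
http_access allow localnet
http_access deny blocked_sites
http_access deny all

# RIGHT: specific denies before the broad allow
http_access deny blocked_sites
http_access allow localnet
http_access deny all
```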
Content Filtering: Blocking Domains and Categories
Create a dstdomain ACL to block specific domains:
# Social media domains to restrict
acl social_media dstdomain .facebook.com .twitter.com .instagram.com .tiktok.com
# Define work hours
acl work_hours time MTWHF 08:00-18:00
# Block social media during work hours
http_access deny social_media work_hours
# Allow everything else from local network
http_access allow localnet
Bigger blocklists — hundreds or thousands of domains — belong in an external file:
# /etc/squid/blocklist.txt — one domain per line
.gambling.com
.malware-domain.net
.phishing-example.org
Then reference it in squid.conf:
acl blocklist dstdomain "/etc/squid/blocklist.txt"
http_access deny blocklist
URL pattern matching catches things domain lists miss. Note that url_regex is evaluated against every request URL, so keep the pattern list short:
acl bad_words url_regex -i torrent piracy crack keygen
http_access deny bad_words
Caching Configuration for Bandwidth Optimization
This is where the bandwidth savings actually show up. In my experience, proper cache tuning cuts outbound bandwidth by 20–40% in environments where software downloads, package updates, or media assets are repeatedly fetched.
Key cache directives to tune:
# Cache storage: <type> <path> <size in MB> <L1 subdirs> <L2 subdirs>
# 10 GB on disk, with Squid's default 16x256 directory layout
cache_dir ufs /var/spool/squid 10240 16 256
# Memory cache (RAM) — set to ~25% of available RAM
cache_mem 512 MB
# Maximum object size to cache (don't cache huge files)
maximum_object_size 100 MB
# Minimum object size (0 = no lower bound; raise it to skip tiny objects)
minimum_object_size 0 KB
# How long to keep objects without explicit expiry headers
# Fields: refresh_pattern <regex> <min minutes> <percent of object age> <max minutes>
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
After modifying cache settings, stop Squid, initialize the new cache directories, then start it again:
sudo systemctl stop squid
sudo squid -z
sudo systemctl start squid
Authentication (Optional but Recommended)
IP-based access control is a good start. Add basic authentication when you need per-user tracking or want individual accountability in your logs:
# Install apache2-utils for htpasswd
sudo apt install apache2-utils -y
# Create a password file
sudo htpasswd -c /etc/squid/passwd john
sudo htpasswd /etc/squid/passwd jane
Then configure Squid to use it:
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic realm Squid Proxy
auth_param basic credentialsttl 2 hours
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
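Multiple ACLs on a single http_access line are ANDed together, so you can require both a trusted source network and valid credentials. A minimal sketch, assuming the localnet ACL defined earlier:

```
acl localnet src 192.168.1.0/24
acl authenticated proxy_auth REQUIRED

# Both must match: request comes from the LAN AND the user authenticated
http_access allow localnet authenticated
http_access deny all
```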
Logging Configuration
Squid logs to /var/log/squid/access.log by default. Each line contains timestamp, response time, client IP, result code, bytes transferred, HTTP method, URL, and more. The format is configurable:
# Use Squid's default native log format (named "squid")
access_log /var/log/squid/access.log squid
# Or disable logging for performance-critical setups
# access_log none
Verification and Monitoring
Testing the Proxy
From a client machine, set the proxy environment variables and test with curl:
# Set proxy for this shell session
export http_proxy="http://192.168.1.100:3128"
export https_proxy="http://192.168.1.100:3128"
# Test connectivity through the proxy
curl -v http://example.com
# Check your outgoing IP (should be the proxy server's IP)
curl http://ifconfig.me
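Keep in mind that http_proxy/https_proxy only affect the current shell and programs that honor those variables. apt has its own proxy setting — a sketch, assuming the proxy runs at 192.168.1.100:

```
# /etc/apt/apt.conf.d/95proxy
Acquire::http::Proxy "http://192.168.1.100:3128/";
Acquire::https::Proxy "http://192.168.1.100:3128/";
```

With this in place, every apt run on the client flows through Squid, which is exactly the traffic the cache handles best.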
Check that blocking works as expected:
curl -v http://facebook.com
# Should return: 403 Forbidden (Access Denied)
Checking Cache Hit Rate
The most telling metric is your cache hit ratio. Squid exposes real-time statistics via the cache manager interface. On Ubuntu/Debian the squidclient tool ships in its own package (sudo apt install squidclient):
# Check cache statistics from the proxy server itself
squidclient -h 127.0.0.1 -p 3128 mgr:info
# Or filter for hit-related counters specifically
squidclient -h 127.0.0.1 -p 3128 mgr:counters | grep hit
In a typical corporate environment, expect hit rates of 20–50% once the cache warms up. The single biggest lever is refresh_pattern tuning. On a network heavy with apt/yum updates, adding specific patterns for .deb, .rpm, and Packages.gz files can push your hit rate from 15% to 40%+ on its own.
Tailing the Access Log
Watch traffic in real time:
sudo tail -f /var/log/squid/access.log
Each line looks like this:
1712550000.123 245 192.168.1.42 TCP_MISS/200 15234 GET http://example.com/ - DIRECT/93.184.216.34 text/html
TCP_MISS means the object wasn’t cached and was fetched from the origin server. TCP_HIT means it was served from cache. Watch the proportion of TCP_HIT lines climb as the cache fills up.
Log Analysis with awk
Quick breakdown of top requested domains:
sudo awk '{print $7}' /var/log/squid/access.log | \
grep -oP '(?<=://)([^/]+)' | \
sort | uniq -c | sort -rn | head -20
Top bandwidth consumers by IP:
sudo awk '{print $3, $5}' /var/log/squid/access.log | \
awk '{arr[$1]+=$2} END {for (i in arr) print arr[i], i}' | \
sort -rn | head -10
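The cache manager isn’t the only way to get a hit ratio — you can compute one straight from the access log. Field 4 holds the result code (TCP_HIT/200, TCP_MISS/200, and so on), and matching on HIT also catches variants like TCP_MEM_HIT:

```shell
# Percentage of requests answered from cache, computed from the access log
awk '$4 ~ /HIT/ {hit++} {total++} END {if (total) printf "%.1f%% hit ratio (%d/%d)\n", 100*hit/total, hit, total}' /var/log/squid/access.log
```

Run it periodically after config changes to see whether your refresh_pattern tuning is actually paying off.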
Reloading Configuration Without Restart
After editing squid.conf, always validate syntax before applying:
# Validate config syntax
sudo squid -k parse
# Reload without dropping connections
sudo squid -k reconfigure
# Or use systemctl
sudo systemctl reload squid
Common Gotchas
- HTTPS transparent interception requires SSL bump and a certificate authority setup — a significantly more involved process than plain HTTP proxying, with legal and privacy implications in many jurisdictions. Think carefully about whether you actually need it before going down that path.
- Cache directory permissions: Squid runs as the proxy user. If you point cache_dir at a custom path, run chown proxy:proxy /your/cache/path before starting Squid, or it will silently fail to write cache.
- MTU mismatches: Intermittent client connection failures are sometimes an MTU problem. Check with ip link show eth0 and adjust with ip link set eth0 mtu 1400 if needed.
- refresh_pattern for package managers: Caching apt/yum repos? Add specific patterns for .deb, .rpm, and Packages.gz files with longer TTLs. The default patterns won’t cache these aggressively enough.
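Expanding on that last point, here is a sketch of package-friendly refresh_pattern lines, placed above the catch-all patterns in squid.conf. The 129600-minute (90-day) TTL is a common choice for package caching setups, not a Squid default:

```
# Package files are versioned and never change in place — cache aggressively
refresh_pattern -i \.deb$ 129600 100% 129600 refresh-ims
refresh_pattern -i \.rpm$ 129600 100% 129600 refresh-ims
# Repo index files change often — revalidate them quickly
refresh_pattern -i (Packages|Packages\.gz|Release|Sources)$ 0 20% 60 refresh-ims
```

The refresh-ims option forces Squid to revalidate with the origin on If-Modified-Since requests, so clients still see fresh indexes while the large artifacts stay cached.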

