The 2:15 AM Routing Loop
2:15 AM: My phone is screaming with PagerDuty alerts. The internal dev team can’t reach the staging environment, and every request is hitting a connection timeout. After five minutes of digging, I find the culprit: internal servers are trying to reach the staging portal via its public IP address. The traffic leaves the local network, hits the ISP, and tries to hairpin back through the firewall. The firewall, refusing to hairpin the traffic back in through the interface it just left, promptly drops the packets.
This architectural headache is common. Imagine a service like portal.example.com. A customer in London should see the public IP, such as 203.0.113.10. However, an employee in your office should get the local IP, like 192.168.1.10. Forcing internal traffic out to the public internet just to bring it back in is inefficient. It adds 40-60ms of unnecessary latency and creates a single point of failure at your edge router.
Split-Horizon DNS (or Split-View DNS) solves this. By using BIND9 on Linux, you can configure your DNS server to detect a query’s origin and provide a tailored response. In my experience, implementing this reduces firewall CPU load by up to 30% in high-traffic environments and keeps internal traffic local.
Installing BIND9 on Linux
Most production DNS servers run on Debian or Ubuntu due to their stable security patches. These steps are standard, but they ensure your environment is ready for custom logic.
# Update your package list
sudo apt update
# Install BIND9 and common utilities
sudo apt install bind9 bind9utils bind9-doc -y
# Ensure it starts on boot
sudo systemctl enable bind9
sudo systemctl start bind9
Run systemctl status bind9 to confirm the service is active. Never skip this step; you need a clean baseline before modifying configuration files.
Configuration: Implementing Views
BIND9 uses “Views” to partition the DNS namespace. There is one strict rule: once you define a single view, every zone must be defined inside a view. You cannot mix and match global zones with viewed zones; named will refuse to load the configuration.
1. Defining Access Control Lists (ACLs)
Start by identifying who counts as “internal.” Open /etc/bind/named.conf.options. I prefer defining ACLs here to keep the setup centralized. We will include the loopback address and your local CIDR blocks.
acl "trusted" {
127.0.0.0/8;
192.168.1.0/24;
10.0.0.0/8;
};
options {
directory "/var/cache/bind";
recursion yes;
allow-query { any; };
dnssec-validation auto;
listen-on-v6 { any; };
};
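ACL elements are also evaluated in order, with the first match winning, so you can carve exceptions out of a trusted range with a ! negation. As a sketch, assuming a hypothetical guest subnet of 192.168.1.128/25 that should resolve like the public internet:

```
acl "trusted" {
    !192.168.1.128/25;   // guest Wi-Fi: fall through to the external view
    127.0.0.0/8;
    192.168.1.0/24;
    10.0.0.0/8;
};
```

The negated entry must come before the broader 192.168.1.0/24 line, or it will never be reached.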
2. Setting up the Views
Edit /etc/bind/named.conf.local to define the Internal and External views. BIND processes these from top to bottom. The first view that matches the client’s source IP wins.
# Internal View
view "internal" {
match-clients { "trusted"; };
recursion yes;
zone "example.com" {
type master;
file "/etc/bind/zones/db.example.com.internal";
};
include "/etc/bind/named.conf.default-zones";
};
# External View
view "external" {
match-clients { any; };
recursion no;
zone "example.com" {
type master;
file "/etc/bind/zones/db.example.com.external";
};
};
Internal users get recursion yes, turning the server into a full DNS resolver for the web. For the external view, we disable recursion. This prevents your server from being weaponized in DNS amplification attacks.
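One Debian/Ubuntu gotcha before moving on: the stock top-level /etc/bind/named.conf also includes named.conf.default-zones globally. With views in play, that global include violates the all-zones-in-views rule and named will refuse to start, which is exactly why the “internal” view re-includes the file itself. Comment out the global line:

```
// /etc/bind/named.conf
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
// include "/etc/bind/named.conf.default-zones";  // now loaded inside the "internal" view
```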
3. Creating the Zone Files
You need two distinct versions of the zone file for example.com. Create a dedicated directory to keep things organized:
sudo mkdir /etc/bind/zones
The Internal Zone File
In /etc/bind/zones/db.example.com.internal, map your hostnames to private IPs. The low TTL (600 seconds) is deliberate: internal records tend to change more often, and a short TTL keeps stale answers out of client caches.
$TTL 600
@ IN SOA ns1.example.com. admin.example.com. (
2023101001 ; Serial (YYYYMMDDNN)
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns1.example.com.
ns1 IN A 192.168.1.5
portal IN A 192.168.1.10
web IN A 192.168.1.11
The External Zone File
In /etc/bind/zones/db.example.com.external, use your public-facing WAN IPs.
$TTL 86400
@ IN SOA ns1.example.com. admin.example.com. (
2023101001 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
@ IN NS ns1.example.com.
ns1 IN A 203.0.113.5
portal IN A 203.0.113.10
web IN A 203.0.113.11
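Both files carry independent serial numbers, and every edit must bump the serial or secondaries and resolvers will keep serving the stale copy. A small helper, sketched here (bump_serial is a hypothetical function, not a BIND tool), automates the YYYYMMDDNN convention:

```shell
#!/bin/sh
# Emit the next zone serial in YYYYMMDDNN form.
# A same-day edit increments the two-digit revision; a new day resets it to 01.
bump_serial() {
    old="$1"
    today="$(date +%Y%m%d)"
    if [ "${old%??}" = "$today" ]; then
        rev="${old#????????}"          # keep only the NN revision
        printf '%s%02d\n' "$today" "$(( ${rev#0} + 1 ))"
    else
        printf '%s01\n' "$today"
    fi
}

bump_serial 2023101001   # an old serial rolls over to today's revision 01
```

Pair it with sed in a deploy script to rewrite the Serial line before running named-checkzone.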
Verification & Monitoring
DNS is fragile. One missing semicolon can take down your entire network’s resolution. Always validate your syntax before restarting the service.
# Check BIND configuration syntax
sudo named-checkconf
# Check specific zone files
sudo named-checkzone example.com /etc/bind/zones/db.example.com.internal
sudo named-checkzone example.com /etc/bind/zones/db.example.com.external
If the output is clean, restart BIND: sudo systemctl restart bind9.
Testing the Logic
Use dig to confirm the views are working. If you run this from the DNS server itself (a trusted IP), you should see the internal 192.168.1.10 address.
dig @localhost portal.example.com
To simulate an external request, run dig from a remote VPS or a mobile hotspot. You should see the public 203.0.113.10 address instead. If the results are swapped, re-check your ACL ranges and ensure the “internal” view is listed before “external” in your configuration file.
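If you want to sanity-check which view a given client should land in without firing real queries, the matching logic is simple enough to mirror in a few lines of shell. This is a hypothetical sketch (select_view, in_net, and ip_to_int are illustrations, not BIND tooling) of the first-match-wins walk over the trusted ACL:

```shell
#!/bin/bash
# Convert a dotted-quad IPv4 address to an integer for mask arithmetic.
ip_to_int() {
    local a b c d
    IFS=. read -r a b c d <<< "$1"
    echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# in_net ADDRESS NETWORK/PREFIX: succeeds when ADDRESS falls inside the block.
in_net() {
    local addr net mask
    addr=$(ip_to_int "$1")
    net=$(ip_to_int "${2%/*}")
    mask=$(( (0xFFFFFFFF << (32 - ${2#*/})) & 0xFFFFFFFF ))
    [ $(( addr & mask )) -eq $(( net & mask )) ]
}

# Mirror the view order from named.conf.local: the first match wins.
select_view() {
    local block
    for block in 127.0.0.0/8 192.168.1.0/24 10.0.0.0/8; do
        in_net "$1" "$block" && { echo internal; return; }
    done
    echo external   # view "external" { match-clients { any; }; }
}

select_view 192.168.1.42    # -> internal
select_view 198.51.100.7    # -> external
```

Walking the list in declaration order is exactly why swapping the views in named.conf.local silently breaks the split: an internal client would hit the catch-all first.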
Monitoring Logs
The system journal is the best place to debug view assignments. BIND does not log individual queries by default, so first enable query logging with sudo rndc querylog on, then watch in real time:
journalctl -u bind9 -f
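Journal output only shows queries while query logging is switched on, and the rndc querylog toggle does not survive a restart. For a persistent query log file, BIND’s logging stanza can write to disk directly; a sketch (the path and rotation values are assumptions, and /var/log/named must exist and be writable by the bind user) placed at the top level of /etc/bind/named.conf.options:

```
logging {
    channel query_log {
        file "/var/log/named/query.log" versions 3 size 10m;
        severity info;
        print-time yes;
    };
    category queries { query_log; };
};
```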
Look for lines containing view internal: query. This confirms BIND is correctly identifying your local users.

Split-Horizon DNS does more than just fix routing; it adds a layer of security by hiding your internal IP scheme from the public. It is a clean, server-side fix for a problem that usually requires messy NAT rules or manual hosts file edits.

