Cloudflare Tunnel: Securely Expose Internal Services Without Opening Firewall Ports

Networking tutorial - IT technology blog

The 2 AM Pager Call: Exposing Internal Apps Securely

It’s 2 AM. The pager just went off, jarring me awake. The dev team just pushed a critical new internal monitoring dashboard—a Grafana instance, for example. Now, management demands immediate external access for a demo with a key partner. "Just open a port for it," the urgent message reads. My heart sinks. I’ve been down this road before, and it rarely ends well.

Opening firewall ports directly to internal services is a massive security risk. It’s a gaping hole just waiting for trouble. But setting up a full VPN client for a quick demo? That’s overkill and a pain for non-technical users. We need something quick, robust, and most importantly, secure. Something that doesn’t involve poking holes in our carefully crafted network perimeter.

Root Cause Analysis: Why Traditional Methods Fall Short

Why is the "just open a port" mentality so dangerous? And why do other common solutions fall short in this specific scenario?

Direct Port Exposure: A Welcome Mat for Attackers

  • Vulnerability: Any service listening on an open port is directly exposed to the entire internet. Every bot, scanner, and malicious actor can hit it directly, with no WAF or DDoS protection standing in the way: your service faces attacks completely on its own. In Q4 2023 alone, Cloudflare mitigated a record-breaking 13.7 million HTTP DDoS attacks, highlighting the constant threat facing exposed services.
  • Firewall Management Nightmare: You’ll constantly update firewall rules, trying to ensure only specific IPs can access your service. The alternative? Opening it wide to 0.0.0.0/0, a far worse option. This becomes a never-ending game of whack-a-mole, where a single misconfiguration can have severe consequences.
  • Certificate Management: If you’re using HTTPS, you’re responsible for managing certificates on an internal server. This might involve exposing it to the internet for ACME challenges or manually copying files around. Either way, it adds significant complexity and potential for errors, often requiring renewals every 90 days.
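That renewal treadmill is easy to demonstrate. A minimal sketch (the /tmp paths and the myapp.internal CN are made up for illustration) that mints a 90-day self-signed certificate and prints the expiry date you would have to track and renew against:

```shell
# Mint a throwaway self-signed certificate as a stand-in for a real one
# (real ACME certs from e.g. Let's Encrypt are also 90-day)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 90 -subj "/CN=myapp.internal"

# Print the notAfter date: the deadline you must renew before
openssl x509 -in /tmp/demo.crt -noout -enddate
```

Multiply that deadline by every internal service you expose directly, and the operational overhead adds up quickly.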

VPN: Secure, But Not Always Practical

VPNs excel at providing secure access to your entire internal network for trusted users. However, they often present significant overhead when you simply need to share a single web application:

  • Client Software Requirement: Every user needs to install and configure VPN client software. This can be a hurdle for external, non-technical users or for quick, temporary access.
  • User Management Complexity: Managing VPN user accounts, permissions, and ensuring proper revocation can be cumbersome, especially for external collaborators.
  • Overkill for Single Services: For just one web application or an SSH session, a full VPN tunnel is often an unnecessary layer of complexity and friction.

Comparing Solutions: Finding the Right Tool for the Job

When you’re staring down the barrel of a 2 AM request to expose an internal service, what are your options?

1. The "Just Open a Port" (DMZ/Port Forwarding) Approach

  • Pros: Conceptually simple to explain.
  • Cons: Catastrophic security risks. Direct exposure to attacks. No built-in DDoS protection, WAF, or identity management. Requires a static public IP or dynamic DNS. Trust me, you never want to go down this road for anything beyond a quick, throwaway test. It’s a recipe for disaster.

2. VPN Access

  • Pros: Highly secure for internal users accessing multiple internal resources. Encrypted traffic.
  • Cons: Requires client software installation and configuration on every user’s device. User management can be complex. Not suitable for public-facing web applications or simple sharing. High friction for external, non-technical users.

3. Self-Hosted Reverse Proxy (Nginx/HAProxy on a Public Server)

  • Pros: Consolidates traffic, can handle TLS termination, offers some basic load balancing. You control everything.
  • Cons: This still requires a public server with open ports (80/443). You’re solely responsible for its security, patching, DDoS protection, and certificate management. It’s another server to manage, another potential point of failure. Essentially, you’re building your own mini-Cloudflare, which entails significant work and responsibility.

4. Cloudflare Tunnel

  • Pros: No inbound firewall ports need to be open. This is the game changer. The cloudflared daemon establishes outbound-only connections to Cloudflare’s edge network, meaning your internal services are never directly exposed to the internet.
  • Enhanced Security: It integrates directly with Cloudflare’s powerful security features, including WAF, DDoS protection, and bot management. Your traffic benefits from Cloudflare’s global network and threat intelligence, protecting against millions of daily cyber threats.
  • Managed TLS: Cloudflare handles TLS certificates for your public hostname, simplifying certificate management. This means you avoid the hassle of renewing certificates every few months.
  • Identity-Aware Access (Cloudflare Access): Define who can reach your services using existing identity providers like Google, Okta, or GitHub. This adds an extra layer of authentication and authorization, all without complex VPN setups.
  • Reliable & Scalable: It utilizes Cloudflare’s global network, ensuring high availability and performance for your services.
  • Cons: Dependency on Cloudflare. Some advanced features are part of paid plans. Initial setup might feel a bit different if you’re used to traditional networking, but it’s well worth the learning curve.

The Best Approach: Cloudflare Tunnel for Secure Internal Service Exposure

To securely expose internal services without opening inbound firewall ports, Cloudflare Tunnel is the premier approach in most modern scenarios. It solves the core problem: enabling external access without compromising your network perimeter. When dealing with production systems, particularly at 2 AM, stability and security are non-negotiable.

Look, I’ve personally applied this Cloudflare Tunnel approach in production environments for critical internal dashboards, developer tools, and even SSH access. The results have been consistently stable. It just works, and it gives me peace of mind knowing those services aren’t directly exposed to the internet. It’s like having a secure, private highway from Cloudflare’s edge directly to your internal service, bypassing all the nasty traffic.

Practical Implementation: Setting Up Your First Cloudflare Tunnel

Let’s walk through setting up a Cloudflare Tunnel to expose an internal web service running on localhost:8080 and an SSH server on localhost:22. You’ll need a Cloudflare account and a domain added to Cloudflare.
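One quick pre-flight check before involving Cloudflare at all: make sure the service really is listening locally (8080 is just this walkthrough’s example port; ss ships with iproute2 on modern Linux):

```shell
# Is anything listening on the port we plan to publish?
# grep prints the matching socket line if so
ss -ltn | grep ':8080 ' && echo "service is listening on 8080" \
  || echo "nothing listening on 8080"
```

If nothing is listening, fix that first; the tunnel can only forward to a service that actually answers.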

Step 1: Install cloudflared

First, install the cloudflared daemon on the machine hosting your internal service. This machine will initiate the outbound connection to Cloudflare’s network.

For Debian/Ubuntu systems:


# Add Cloudflare's cloudflared package repository and signing key
sudo mkdir -p --mode=0755 /usr/share/keyrings
curl -fsSL https://pkg.cloudflare.com/cloudflare-main.gpg | sudo tee /usr/share/keyrings/cloudflare-main.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/cloudflare-main.gpg] https://pkg.cloudflare.com/cloudflared $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/cloudflared.list > /dev/null

# Update package list and install cloudflared
sudo apt update && sudo apt install cloudflared

For other operating systems, check the official Cloudflare documentation.

Step 2: Authenticate cloudflared

Next, authenticate cloudflared with your Cloudflare account. This command will open a browser window for you to log in and select your domain.


cloudflared login

After successful authentication, cloudflared will download a certificate file (cert.pem) to your default configuration directory, typically ~/.cloudflared/. This file is crucial for creating and managing tunnels.

Step 3: Create a Tunnel

Now, let’s create a tunnel. Give it a descriptive name.


cloudflared tunnel create my-internal-app-tunnel

This command will output a Tunnel ID (UUID) and tell you where it saved the credentials file (e.g., ~/.cloudflared/<YOUR_TUNNEL_UUID>.json). Keep that Tunnel ID and credentials file safe! You’ll need them for configuration.

Step 4: Configure the Tunnel

Create a configuration file for your tunnel. I usually put mine in ~/.cloudflared/config.yml. Remember to replace <YOUR_TUNNEL_UUID> with your actual Tunnel ID and yourdomain.com with your domain, and point credentials-file at wherever the JSON file actually lives (for root, that’s /root/.cloudflared/).


tunnel: <YOUR_TUNNEL_UUID>
credentials-file: /root/.cloudflared/<YOUR_TUNNEL_UUID>.json

ingress:
  # Rule for a web application
  - hostname: myapp.yourdomain.com
    service: http://localhost:8080
    # Optional: have cloudflared require a valid Cloudflare Access token
    # before forwarding the request (the Access policy itself is defined
    # in the Zero Trust dashboard; highly recommended for production)
    # originRequest:
    #   access:
    #     required: true

  # Rule for SSH access
  - hostname: ssh.yourdomain.com
    service: ssh://localhost:22
    # Optional: the same Access token enforcement, highly recommended
    # for SSH in production
    # originRequest:
    #   access:
    #     required: true

  # Catch-all rule for unmatched requests (important for security)
  - service: http_status:404

In this configuration:

  • tunnel: Your unique Tunnel ID.
  • credentials-file: Path to the JSON credentials file generated earlier.
  • ingress: A list of rules defining how requests to specific hostnames should be routed to your internal services.
  • http_status:404: This is a crucial catch-all rule. Any request that doesn’t match a defined hostname will receive a 404, preventing accidental exposure.
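Rule order matters: cloudflared evaluates ingress rules top to bottom and refuses to start unless the catch-all is the last entry. The real checks are cloudflared tunnel ingress validate (lint the file) and cloudflared tunnel ingress rule https://myapp.yourdomain.com (show which rule a given URL would match). As an offline stand-in, this sketch writes a sample config to /tmp and verifies the last rule:

```shell
# Write a minimal sample config and confirm the catch-all rule is last
# (cloudflared tunnel ingress validate is the proper check; this grep is
# only an offline approximation for illustration)
cat > /tmp/sample-config.yml <<'EOF'
tunnel: TUNNEL_UUID
ingress:
  - hostname: myapp.yourdomain.com
    service: http://localhost:8080
  - service: http_status:404
EOF

tail -n 1 /tmp/sample-config.yml | grep -q 'http_status:404' \
  && echo "catch-all rule is last"
```

Running the real validate command against your actual config before installing the service saves a failed restart later.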

Step 5: Route DNS Traffic to Your Tunnel

Next, you need to tell Cloudflare to send traffic for your chosen hostnames through your tunnel. This creates proxied CNAME records in your Cloudflare DNS pointing at your tunnel’s address, <YOUR_TUNNEL_UUID>.cfargotunnel.com.


cloudflared tunnel route dns my-internal-app-tunnel myapp.yourdomain.com
cloudflared tunnel route dns my-internal-app-tunnel ssh.yourdomain.com

You can verify these records in your Cloudflare dashboard under the DNS section.

Step 6: Run the Tunnel as a System Service

To ensure your tunnel starts automatically and runs reliably, install it as a systemd service.


sudo cloudflared --config ~/.cloudflared/config.yml service install

# Start the service
sudo systemctl start cloudflared

# Enable the service to start on boot
sudo systemctl enable cloudflared

# Check the service status
sudo systemctl status cloudflared

You should see output indicating the service is active and running. If it isn’t, sudo journalctl -u cloudflared will show why; a mistyped config path or an unreadable credentials file are the usual culprits.

Step 7: Verify Access

Now, try accessing myapp.yourdomain.com from a browser outside your network. You should see your internal web application. SSH works a little differently: Cloudflare’s edge carries the session inside HTTPS, so the client machine also needs cloudflared installed to wrap the connection. A one-off connection looks like this:


ssh -o ProxyCommand="cloudflared access ssh --hostname %h" user@ssh.yourdomain.com

If you configured Cloudflare Access (highly recommended for production), you’ll be prompted to authenticate via your chosen identity provider before gaining access.
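Connecting over the tunnel needs cloudflared on the client side too, acting as an SSH ProxyCommand. A minimal ~/.ssh/config sketch, assuming cloudflared lives at /usr/local/bin/cloudflared and your login is user (adjust both to your setup):

```
Host ssh.yourdomain.com
  User user
  ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h
```

With that in place, a plain ssh ssh.yourdomain.com connects through the tunnel transparently.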

Wrapping Up: Peace of Mind at 2 AM

By implementing Cloudflare Tunnel, you effectively solve that 2 AM problem without compromising security. Your internal services become externally accessible, yet not a single inbound port on your firewall needs to be open. All traffic flows through Cloudflare’s secure network, benefiting from its advanced security features. This reliable approach ensures a stable and secure way to expose necessary services, giving you peace of mind that your network perimeter remains intact.

No more frantic firewall rule changes, no more direct exposure, and no more complicated VPN setups for simple web access. Just a secure, reliable tunnel, exactly what you need when you’re troubleshooting in the dead of night.
