Quick Start: Exposing Your First Kubernetes App (5-minute setup)
Getting your applications accessible from outside your Kubernetes cluster can feel like navigating a maze. Services provide internal load balancing, but what if you need more? What if you need advanced routing, SSL termination, or a single entry point for multiple applications?
That’s where the Ingress Nginx Controller comes in: a specialized load balancer for Kubernetes that leverages Nginx to manage external access to your cluster’s services. Think of it as the traffic cop for your cluster, directing external requests to the correct internal services.
Let’s get a basic setup running to see it in action.
Step 1: Deploy the Nginx Ingress Controller
First, you need to install the Ingress Nginx Controller itself. The recommended way is to use Helm, which simplifies deployment significantly.
# Add the ingress-nginx Helm repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# Install the Ingress Nginx Controller
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace
This command deploys the Nginx Ingress Controller into its own ingress-nginx namespace. It might take a minute or two for the pods to start up and the LoadBalancer service to get an external IP.
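If you want to script this step rather than polling by hand, you can block until the controller reports ready (the label selector below matches the Helm chart's defaults):

```shell
# Wait until the controller Pod reports Ready (gives up after 120s)
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```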
Step 2: Verify the Controller Deployment
You can check if the controller is running and has an external IP:
kubectl get services -n ingress-nginx
Look for the ingress-nginx-controller service. It should have an EXTERNAL-IP. This is the IP address you’ll use to access your applications.
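Rather than copying the IP by hand, you can extract it with a jsonpath query and reuse it in later commands. (On clusters whose load balancer hands out a hostname rather than an IP, query `.hostname` instead of `.ip`.)

```shell
# Capture the controller's external IP into a shell variable
INGRESS_IP=$(kubectl get service ingress-nginx-controller \
  -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$INGRESS_IP"
```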
Step 3: Deploy a Sample Application
For this example, let’s deploy a simple Nginx web server application. Create a file named nginx-app.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
Apply this to your cluster:
kubectl apply -f nginx-app.yaml
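A quick sanity check before moving on: the Deployment should report two ready replicas, and the Service's endpoints should list two Pod IPs. If the Endpoints list is empty, the Service selector doesn't match your Pod labels.

```shell
# Both replicas should show READY 1/1
kubectl get pods -l app=nginx
# The Endpoints list should contain two Pod IP:port pairs
kubectl get endpoints nginx-service
```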
Step 4: Create an Ingress Resource
Now, let’s tell the Ingress Nginx Controller how to route traffic to our nginx-service. Create a file named nginx-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx  # This links to the Nginx Ingress Controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
Apply this Ingress resource:
kubectl apply -f nginx-ingress.yaml
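You can confirm the controller has picked the resource up; once it has been processed, the ADDRESS column is populated with the controller's external IP:

```shell
# ADDRESS should show the controller's external IP once processed
kubectl get ingress nginx-ingress
# Rule-by-rule detail, plus any events or errors
kubectl describe ingress nginx-ingress
```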
Step 5: Access Your Application
Grab the EXTERNAL-IP of your ingress-nginx-controller service again:
kubectl get services -n ingress-nginx
Open your web browser and navigate to http://<EXTERNAL-IP>. You should see the default Nginx welcome page!
That’s it for the quick start. You’ve successfully exposed an application in Kubernetes using the Ingress Nginx Controller. This setup is crucial for anyone working with Kubernetes. It forms the backbone of how most production applications are accessed.
Deep Dive: Understanding Kubernetes Ingress and the Nginx Controller
Now that you’ve got something working, let’s break down what’s happening. What exactly is Ingress, and why is the Nginx Controller so popular?
The Problem: Exposing Services
In Kubernetes, Pods are ephemeral and have internal IP addresses. Services provide a stable IP and DNS name for a set of Pods. However, a Service with type ClusterIP is only accessible from within the cluster. For external access, you typically have options like NodePort or LoadBalancer services:
- NodePort: Exposes a Service on a static port on each Node’s IP. Simple but consumes ports and isn’t ideal for production.
- LoadBalancer: Provisions an external cloud load balancer for your Service. Great for simple exposure but can be costly, and you get one LoadBalancer per Service, leading to management overhead for many applications.
The Solution: Kubernetes Ingress
Ingress is an API object that manages external access to the services in a cluster, typically HTTP. It provides HTTP and HTTPS routing to services based on request host or path. With Ingress, you can consolidate routing rules for multiple services behind a single external IP address provided by an Ingress Controller.
Key benefits of Ingress:
- Centralized Routing: Manage all your external routes in one place.
- Single Entry Point: Use one external IP/LoadBalancer for many applications.
- Layer 7 Features: Path-based routing (e.g., /api vs. /blog) and host-based routing (e.g., app.example.com vs. admin.example.com).
- SSL/TLS Termination: Handle HTTPS traffic at the edge of your cluster, simplifying application code.
- Load Balancing: Distribute traffic across your backend services.
The Nginx Ingress Controller: How It Works
An Ingress Controller is a specialized piece of software that fulfills the Ingress resource. The Nginx Ingress Controller is one of the most widely used because it leverages the robust and high-performance Nginx web server.
- The Ingress Nginx Controller runs as a Pod (or multiple Pods for high availability) within your cluster.
- It watches the Kubernetes API for new or updated Ingress resources.
- When an Ingress resource is created or changed, the controller takes that configuration and generates a corresponding Nginx configuration file.
- It then reloads Nginx with the new configuration, without dropping connections.
- External traffic hits the LoadBalancer Service (which is usually what the Ingress Nginx Controller exposes).
- The LoadBalancer forwards traffic to the Ingress Nginx Controller Pods.
- Nginx, based on its generated configuration, routes the request to the correct internal Kubernetes Service.
In my experience, understanding this interaction between Ingress resources and the Nginx Ingress Controller is a truly vital skill for anyone working with Kubernetes. This bridge between your internal cluster network and the external world makes your applications accessible and manageable.
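You can observe the result of this reconciliation loop yourself by dumping the Nginx configuration the controller generated from your Ingress resources (the label selector is an assumption based on the Helm chart's defaults):

```shell
# Find a controller Pod, then print the start of the rendered nginx.conf
POD=$(kubectl get pods -n ingress-nginx \
  -l app.kubernetes.io/component=controller \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n ingress-nginx "$POD" -- cat /etc/nginx/nginx.conf | head -n 60
```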
Advanced Usage: Beyond Basic Routing
The Ingress Nginx Controller offers a lot more than just simple path-based routing. Let’s look at some common advanced patterns.
Host-Based Routing
You can route traffic based on the hostname in the request header. This is common for serving multiple applications from the same Ingress Controller using different domain names.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multiple-hosts-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
    - host: admin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin-service
                port:
                  number: 80
For this to work, you’ll need to configure your DNS to point both app.example.com and admin.example.com to the EXTERNAL-IP of your Ingress Nginx Controller.
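While DNS is still propagating, or purely for testing, you can spoof the hostname with curl's Host header instead of touching DNS at all:

```shell
# Substitute your controller's EXTERNAL-IP; each request hits a different backend
curl -H "Host: app.example.com" http://<EXTERNAL-IP>/
curl -H "Host: admin.example.com" http://<EXTERNAL-IP>/
```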
SSL/TLS Termination with Cert-Manager
Handling HTTPS is critical. The Ingress Nginx Controller can terminate SSL/TLS for you. While you can manually create Kubernetes Secrets for your TLS certificates, a common and highly recommended approach is to use Cert-Manager to automate the process with Let’s Encrypt.
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.11.0 \
--set installCRDs=true
Then, create a ClusterIssuer for Let’s Encrypt (e.g., letsencrypt-prod.yaml):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [email protected]  # <-- CHANGE THIS
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
Apply it: kubectl apply -f letsencrypt-prod.yaml.
Finally, modify your Ingress resource to request a certificate:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-tls-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.example.com  # <-- CHANGE THIS
      secretName: myapp-tls-secret
  rules:
    - host: myapp.example.com  # <-- CHANGE THIS
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
Cert-Manager will now automatically provision and renew your SSL certificate. The secretName will be created automatically.
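You can watch issuance progress; the READY column flips to True once the ACME HTTP-01 challenge completes, which typically takes a minute or two:

```shell
# List certificates in the namespace; READY should become True
kubectl get certificate
# Inspect issuance progress and any ACME challenge errors
kubectl describe certificate <certificate-name>
```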
Basic Authentication
You can protect a path with HTTP Basic Authentication directly via Ingress annotations. First, generate an htpasswd file (you can use an online generator or htpasswd -nb user password):
# Example credentials: admin / password123
# ingress-nginx expects the secret's key to be named "auth",
# containing an htpasswd-style "user:hash" entry
PASSWORD_HASH="admin:$(openssl passwd -apr1 'password123')"
kubectl create secret generic basic-auth-secret --from-literal=auth="$PASSWORD_HASH"
Then, update your Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-auth-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth-secret
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /admin
            pathType: Prefix
            backend:
              service:
                name: admin-service  # Assuming you have an admin service
                port:
                  number: 80
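A quick way to confirm the protection works (assuming the admin-service referenced above exists): an unauthenticated request should be rejected, while valid credentials pass through to the backend.

```shell
# Expect HTTP 401 Unauthorized without credentials
curl -i http://<EXTERNAL-IP>/admin
# Expect the backend's normal response with credentials
curl -i -u admin:password123 http://<EXTERNAL-IP>/admin
```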
Custom Nginx Configuration with Annotations
The Nginx Ingress Controller supports many Nginx-specific annotations to fine-tune its behavior. These allow you to inject custom Nginx directives without modifying the controller’s underlying configuration directly.
For example, to set a custom client body size limit:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"
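A few other frequently used annotations, for illustration (the values here are arbitrary; the full list lives in the ingress-nginx documentation):

```yaml
metadata:
  annotations:
    # Give slow backends more time before Nginx returns a 504
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
    # Redirect all HTTP traffic to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # Basic rate limiting: requests per second per client IP
    nginx.ingress.kubernetes.io/limit-rps: "10"
```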
Practical Tips for Managing Ingress Nginx Controller
Monitoring and Logging
Always enable monitoring for your Ingress Nginx Controller. Prometheus and Grafana are excellent tools for this. The controller exposes metrics that can give you insights into request rates, error rates, and response times. Check the controller’s logs (kubectl logs -n ingress-nginx <controller-pod-name>) for routing issues or errors.
Troubleshooting Common Issues
- 404 Not Found:
  - Check that your Ingress resource is correctly applied (kubectl get ingress).
  - Ensure ingressClassName: nginx is set correctly.
  - Verify the backend service and port names in your Ingress match actual Service resources.
  - Check whether the Ingress Nginx Controller logs show any parsing errors for your Ingress definition.
- 503 Service Unavailable:
  - This usually means Nginx can’t reach your backend service.
  - Check that your Service is pointing to healthy Pods (kubectl get pods -l app=<your-app-label> and kubectl describe service <your-service-name>).
- SSL/TLS Errors:
  - If using Cert-Manager, check its logs and events for the Certificate and CertificateRequest resources (kubectl describe certificate <cert-name>).
  - Ensure your DNS A/CNAME record points to the Ingress Controller’s external IP.
  - Verify your ClusterIssuer configuration.
Using a Specific IngressClass
If you run multiple Ingress Controllers in your cluster (e.g., Nginx, Traefik, GKE Ingress), you’ll define an IngressClass resource for each. Your Ingress resources then specify which controller should handle them using the ingressClassName field.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
This is important for preventing conflicts and ensuring your Ingress resources are picked up by the correct controller.
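If one controller should also handle Ingress resources that don't set ingressClassName at all, you can mark its class as the cluster default:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
  annotations:
    # Ingresses without an explicit ingressClassName fall back to this class
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
```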
Security Considerations
- Network Policies: Restrict what can talk to your Ingress Controller and what your controller can talk to internally using Kubernetes Network Policies.
- Least Privilege: Ensure the Ingress Controller’s ServiceAccount has only the necessary permissions.
- Regular Updates: Keep your Ingress Nginx Controller updated to benefit from security fixes and new features.
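As a sketch of the Network Policies point, the policy below (namespace and label names assumed to match the Helm chart defaults, and it takes effect only if your CNI enforces NetworkPolicy) restricts inbound traffic to the controller Pods to ports 80 and 443:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-nginx-allow-http
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: controller
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 80
          protocol: TCP
        - port: 443
          protocol: TCP
```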
Using the Nginx Ingress Controller gives you fine-grained control over application exposure. This improves security, manageability, and scalability within your Kubernetes environment.

