Kubernetes Preview Environments: How We Solved the ‘Who Broke Staging?’ Crisis with Argo CD


The Staging Server Bottleneck

A few years ago, my team shared a single staging server. It felt like a crowded kitchen with only one stove. One engineer would be testing a new login flow while another tried to debug a checkout error. They inevitably stepped on each other’s toes, leading to those frantic “Who broke staging?” Slack messages at 4:00 PM. It was messy, slowed down our 15-person team, and turned QA into a game of whack-a-mole.

High-velocity teams need a better approach: Ephemeral Environments (often called Preview Environments). Think of it this way: every Pull Request (PR) triggers its own isolated, short-lived version of your app on Kubernetes. When the PR is merged or closed, the environment vanishes. Reviewers get a live, shareable URL to verify changes in a bubble before any code touches the main branch.

We moved this setup to production last year, and it slashed our manual deployment overhead by roughly 12 hours a week. By using Argo CD and its ApplicationSet controller, we replaced a manual nightmare with a fully automated system that just works.

The Engine: Argo CD ApplicationSet

Before ApplicationSets, we had to manually define an Argo CD “Application” for every branch. It didn’t scale. The ApplicationSet controller fixed this by using a template to generate multiple Applications based on external triggers like GitHub PRs.

The Pull Request Generator is the star here. It watches your repository for open PRs. It then grabs metadata—like the PR number and commit SHA—to fill out your deployment template. This is the heart of the ephemeral workflow.
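
In isolation, the generator section looks like this; the owner and repo values are placeholders, and the full manifest appears in Step 2:

# Fragment of an ApplicationSet spec: just the Pull Request generator.
generators:
  - pullRequest:
      github:
        owner: my-organization        # GitHub org or user to watch
        repo: my-cool-app             # repository whose open PRs trigger previews
        tokenRef:                     # PAT Secret (created in Step 1)
          secretName: github-token
          key: token
      requeueAfterSeconds: 60         # GitHub poll interval
# Each open PR exposes template parameters such as {{number}}, {{branch}},
# {{head_sha}}, and {{head_short_sha}}.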

Why this stack?

  • True Isolation: Every PR lives in its own dedicated Namespace.
  • Self-Healing: If a pod crashes in a preview environment, Argo CD restores it automatically.
  • Full Visibility: The Argo CD dashboard shows exactly which PRs have active environments at a glance.

Hands-on: Setting Up Your Pipeline

Prerequisites are straightforward. You need a Kubernetes cluster with Argo CD, a Git repository, and a wildcard DNS record (like *.preview.mycompany.com) pointing to your Ingress Controller.
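
A quick sanity check helps before wiring anything up. The commands below assume Argo CD lives in the standard argocd namespace and that your Ingress Controller is ingress-nginx; adjust the namespace if you run Traefik or something else:

# Confirm Argo CD's components are running.
kubectl get pods -n argocd

# Find the external IP your wildcard DNS record should point at.
kubectl get svc -n ingress-nginx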

Step 1: GitHub Authentication

Argo CD needs permission to watch your repo. Create a Personal Access Token (PAT) with repo scope and save it as a Secret in your argocd namespace.

kubectl create secret generic github-token \
  --from-literal=token=YOUR_GITHUB_TOKEN_HERE \
  -n argocd
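
It’s worth confirming the Secret landed where the ApplicationSet controller will look for it:

# The Secret should appear in the argocd namespace with a single "token" key.
kubectl get secret github-token -n argocd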

Step 2: The ApplicationSet Manifest

The ApplicationSet resource tells Argo CD how to handle new PRs. Focus on how the generators and template sections work together.

apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: my-app-previews
  namespace: argocd
spec:
  generators:
    - pullRequest:
        github:
          owner: my-organization
          repo: my-cool-app
          tokenRef:                     # the Secret from Step 1
            secretName: github-token
            key: token
        requeueAfterSeconds: 60         # poll GitHub every 60 seconds
  template:
    metadata:
      name: 'preview-pr-{{number}}'     # one Application per open PR
    spec:
      project: default
      source:
        repoURL: 'https://github.com/my-organization/my-cool-app.git'
        targetRevision: '{{head_sha}}'  # deploy the PR's latest commit
        path: charts/my-app
        helm:
          parameters:
            - name: "ingress.host"
              value: "pr-{{number}}.preview.mycompany.com"
            - name: "image.tag"
              value: "{{head_sha}}"
      destination:
        server: https://kubernetes.default.svc
        namespace: 'preview-pr-{{number}}'  # isolated namespace per PR
      syncPolicy:
        automated:
          prune: true                   # delete resources removed from Git
          selfHeal: true                # revert manual drift in the cluster
        syncOptions:
          - CreateNamespace=true        # create the namespace on first sync

Here’s what’s happening: Argo CD generates a unique URL like pr-123.preview.mycompany.com. The {{number}} and {{head_sha}} variables are injected automatically. When you push a new commit, Argo CD spots the new SHA on its next poll (within about 60 seconds, per requeueAfterSeconds) and updates the environment.
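
Because every generated Application is named preview-pr-<number>, checking what’s currently live is a one-liner:

# List the Applications the ApplicationSet has generated for open PRs.
kubectl get applications -n argocd | grep preview-pr-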

Step 3: Solving the DNS Puzzle

DNS is often the sticking point. You can’t manually create a record for every single PR. The fix? A Wildcard DNS Record. Point *.preview.mycompany.com to your Nginx or Traefik Ingress Controller’s external IP.

When the ApplicationSet spins up a new Ingress, the controller recognizes the host and routes traffic to the new pods. It’s hands-off and reliable.
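
For illustration, here’s roughly the Ingress a chart might render for PR 123. The resource names and the nginx ingress class are assumptions about the chart, but the host comes straight from the ingress.host Helm parameter set earlier:

# Hypothetical chart output for PR 123; only the host is dictated by our setup.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: preview-pr-123
spec:
  ingressClassName: nginx
  rules:
    - host: pr-123.preview.mycompany.com   # from the ingress.host parameter
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app               # assumed Service name from the chart
                port:
                  number: 80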

Resource Management

Resources aren’t free. Running an environment for every PR can get expensive if left unchecked. Fortunately, the ApplicationSet controller deletes the generated Application as soon as its PR is merged or closed, and the deletion cascades, wiping the associated Pods and Services. (The prune: true setting covers the other direction: resources you delete from Git are removed on the next sync.)

Still, you should set boundaries. If a busy team opens 20 PRs at once, you don’t want your nodes to run out of memory. I recommend using ResourceQuotas to cap preview namespaces at around 512Mi of RAM and 0.5 CPU cores.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: preview-quota
spec:
  hard:
    requests.cpu: "500m"      # the 0.5 CPU cores recommended above
    requests.memory: "512Mi"  # the 512Mi RAM cap recommended above
    limits.cpu: "1000m"
    limits.memory: "1Gi"
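
Note that ResourceQuota is a namespaced object, so each preview namespace needs its own copy. The cleanest approach is to ship it inside the chart at charts/my-app so every PR picks it up automatically; while experimenting, you can apply it by hand (the filename and pr-123 are just examples):

# One-off: apply the quota above to a single preview namespace.
kubectl apply -f preview-quota.yaml -n preview-pr-123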

The New Workflow

Once this is live, your day looks different:

  1. A dev pushes a branch and opens a PR.
  2. Argo CD spots the PR within 60 seconds.
  3. A fresh preview-pr-X namespace appears.
  4. The app deploys via Helm.
  5. A bot comments on the PR with the preview link (see the sketch after this list).
  6. Stakeholders test the live feature and leave feedback.
  7. The PR is merged, and Argo CD cleans up the mess.
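
Step 5 is the one piece this stack doesn’t handle for you. Here’s a minimal sketch of that bot using GitHub Actions and the gh CLI; the workflow name and trigger are illustrative, and the URL reuses the wildcard domain from Step 3:

# Hypothetical workflow: comment the preview URL when a PR opens.
name: preview-link
on:
  pull_request:
    types: [opened]
permissions:
  pull-requests: write
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - name: Post preview URL
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          gh pr comment ${{ github.event.pull_request.number }} \
            --repo ${{ github.repository }} \
            --body "Preview: https://pr-${{ github.event.pull_request.number }}.preview.mycompany.com"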

Wrapping Up

Switching to ephemeral environments is one of the biggest wins for any DevOps pipeline. It kills the staging bottleneck and catches bugs before they reach production. While wildcard DNS and ApplicationSets take a bit of initial effort, the speed and confidence they give your team are worth every minute of setup. Give it a try—your developers will thank you.
