Local Kubernetes: Why You Should Swap Minikube for kind or k3d

DevOps tutorial - IT technology blog

The Cost of Cloud-First Development

Testing Helm charts or Kubernetes manifests directly in EKS or GKE is an expensive way to catch a syntax error. I have watched teams burn through hundreds of dollars in cloud credits just waiting for a cloud provider to provision a LoadBalancer, only to realize a ConfigMap had a typo. The feedback loop is simply too slow. You need a local environment that mimics production without the heavy lifting of traditional virtual machines.

Minikube used to be the only game in town, but its reliance on bulky VMs makes it feel sluggish on modern laptops. This is why kind (Kubernetes in Docker) and k3d (k3s in Docker) have taken over. By running Kubernetes nodes as Docker containers, they start up in seconds rather than minutes. In my experience, mastering these tools is the fastest way to shorten your development cycle and ensure your code works before it ever hits a git push.

Quick Start: A Cluster in Under 60 Seconds

As long as you have Docker running, you can spin up a fully functional cluster before your coffee gets cold. Here is the breakdown for both tools.

Setting up kind

# Install kind on macOS via Homebrew
brew install kind

# Create your first cluster
kind create cluster --name dev-cluster

# Confirm the nodes are ready
kubectl get nodes

Setting up k3d

# Install k3d via script
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# Spin up a cluster with one server, two agents, and the default load balancer
k3d cluster create labs --agents 2

# Verify the setup
kubectl get nodes
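If you prefer declarative setups, the same cluster can be described in a config file instead of CLI flags. This is a sketch against the k3d v5 config schema (the file name is arbitrary; the schema version may differ between k3d releases):

```yaml
# k3d-labs.yaml — declarative equivalent of the command above
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: labs
agents: 2
```

Apply it with k3d cluster create --config k3d-labs.yaml.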

The Showdown: kind vs. k3d

Choosing the right tool depends on your specific goals. While they look similar on the surface, their internal logic differs significantly.

1. Philosophy and Accuracy

kind focuses on being the most “authentic” experience. It uses kubeadm to bootstrap nodes and runs the standard upstream Kubernetes distribution. If you are testing low-level cluster internals or want an environment that closely mirrors a vanilla kubeadm install, kind is your best bet.
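Because kind runs stock upstream builds, you can also pin the exact Kubernetes release per node by setting an image in the cluster config. A minimal sketch; the tag below is only an example, so check the kind release notes for node images matching your kind version:

```yaml
# kind-pinned.yaml — pin the node image to a specific upstream release
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.29.2
```

Launch it with kind create cluster --config kind-pinned.yaml, then confirm the server version with kubectl version.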

k3d acts as a wrapper for k3s, the lightweight distribution from Rancher. In smaller setups it swaps out heavy components like etcd for SQLite and strips out in-tree cloud providers and legacy storage drivers. This makes k3d incredibly resource-efficient: a k3d cluster typically idles at around 500MB of RAM, whereas kind often demands 1GB or more just to stay stable.

2. Networking and Ingress Simplified

Exposing services in k3d is remarkably straightforward because every cluster ships with a built-in load balancer proxy, and k3s bundles Traefik as a default Ingress controller. You can map ports directly from your host to the cluster during creation. With kind, you usually have to install an Ingress controller such as ingress-nginx or Contour yourself and declare extra port mappings in the cluster's YAML config.
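For kind, that extra port mapping lives in the cluster config. A minimal sketch that forwards host port 8080 to port 80 on the node, which is where an Ingress controller (installed separately, and possibly needing further node configuration depending on the controller) would listen:

```yaml
# kind-ingress.yaml — forward host traffic to the control-plane node
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 8080
    protocol: TCP
```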

3. Performance Metrics at a Glance

  • Startup Time: k3d is the speed king, often reaching a “Ready” state in 20-30 seconds. kind usually takes 60-90 seconds.
  • Binary Size: k3s is a single binary of roughly 50MB. kind itself is a small binary, but each node image it pulls weighs in at several hundred megabytes.
  • Use Case: Use kind for upstream compatibility testing; use k3d for rapid application development.

Advanced Setup: Multi-node and Custom Configs

Production environments rarely consist of a single node. To test pod anti-affinity or taints and tolerations, you need to simulate a multi-node architecture. You can define these complex setups using simple YAML files.

Configuring Multi-node in kind

Save this as kind-config.yaml to create a cluster with one control plane and three workers:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker

Launch it with: kind create cluster --config kind-config.yaml
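With three workers available, you can exercise taints and tolerations locally. A sketch assuming kind's default node naming (kind-worker, kind-worker2, ...) and a hypothetical dedicated=batch taint:

```yaml
# batch-pod.yaml — tolerates a hypothetical "dedicated=batch" taint
# First taint a worker: kubectl taint nodes kind-worker dedicated=batch:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker
spec:
  tolerations:
  - key: dedicated
    operator: Equal
    value: batch
    effect: NoSchedule
  containers:
  - name: app
    image: nginx:alpine
```

Pods without the toleration will stay off the tainted node, which you can confirm with kubectl get pods -o wide.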

Port Mapping in k3d

If you need to access a web service on localhost:8080, k3d lets you map that port instantly during the creation phase:

k3d cluster create web-labs -p "8080:80@loadbalancer" --agents 2

This command routes traffic from your laptop’s port 8080 to the K3s LoadBalancer. Any Ingress you deploy will now be reachable via your browser immediately.
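To see that routing end to end, deploy a trivial Ingress behind the mapping. A sketch assuming a Service named web already exists on port 80; k3s's bundled Traefik picks the Ingress up without any extra controller installation:

```yaml
# web-ingress.yaml — route all paths to the hypothetical "web" Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
```

Once applied, curl http://localhost:8080/ should reach the service from your host.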

CI/CD Integration: Real Tests, No Mocks

Running a real Kubernetes cluster inside your CI runner is a massive upgrade for reliability. Instead of mocking APIs, you can run integration tests against a live environment that exists only for the duration of the job.

GitHub Actions Example

I use this specific workflow to validate Helm charts before merging. It catches issues like missing ImagePullSecrets or incorrect service selectors before they reach staging.

name: K8s Integration Test
on: [push]
jobs:
  integration:
    runs-on: ubuntu-latest
    steps:
    - name: Checkout code
      uses: actions/checkout@v4

    - name: Set up kind
      uses: engineerd/setup-kind@v0.5.0
      with:
        version: "v0.19.0"

    - name: Cluster Sanity Check
      run: kubectl cluster-info
        
    - name: Deploy and Wait
      run: |
        kubectl apply -f ./k8s/deployment.yaml
        kubectl wait --for=condition=available --timeout=90s deployment/api-server
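If you would rather run k3d in CI, a community-maintained action exists for that as well. A sketch of the setup step using AbsaOSS/k3d-action; input names may differ between releases, so check the action's README:

```yaml
    - name: Set up k3d
      uses: AbsaOSS/k3d-action@v2
      with:
        cluster-name: ci
        args: --agents 1
```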

Final Verdict

For daily coding, I recommend k3d. Its speed and built-in load balancing remove the friction of local development. However, I often use kind within CI/CD pipelines. Since kind is the official tool for testing Kubernetes itself, it provides a rigorous standard for automated validation.

Just remember to tidy up your workspace. It is easy to forget you have three clusters running in the background, quietly consuming your CPU cycles. Run kind delete cluster or k3d cluster delete --all once you finish your session to keep your machine running smoothly.
