Kubernetes Basics: Pod, Service, and Deployment Explained with Real Examples

DevOps tutorial - IT technology blog

The Problem: Your Docker App Works Locally, But Production Is Chaos

You’ve containerized your app with Docker, it runs perfectly on your laptop — but the moment you need to run it at scale, things fall apart. One container dies and nothing restarts it. Traffic spikes hit a single instance. Rolling updates knock the service offline. You need something to manage containers for you.

That’s the problem Kubernetes was built to solve. Most tutorials respond by throwing a wall of YAML at you and calling it a day. This one won’t. I’ll explain the three core building blocks — Pod, Service, and Deployment — and show you exactly how they connect.

Core Concepts

Pod: The Smallest Unit

A Pod is the smallest deployable unit in Kubernetes. Think of it as a lightweight wrapper around one or more containers that share the same network namespace and storage volumes. In practice, most real-world Pods run a single container.

Here’s what most people miss early on: Pods are ephemeral. Kubernetes doesn’t heal a broken Pod in place; a controller discards it and creates a replacement (a bare Pod with no controller just stays dead). Never rely on a Pod’s IP address or local storage for anything persistent.

A minimal Pod manifest looks like this:

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
  labels:
    app: my-app
spec:
  containers:
    - name: my-app
      image: nginx:1.25
      ports:
        - containerPort: 80

Apply it with:

kubectl apply -f pod.yaml
kubectl get pods
kubectl describe pod my-app-pod

You’ll see the Pod running — but this alone gets you nothing useful in production. No auto-restart, no load balancing, no rolling updates.
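Still, you can talk to the bare Pod directly for a quick sanity check. A minimal sketch using port-forwarding (assumes the my-app-pod manifest above has been applied):

```shell
# Forward local port 8080 to port 80 inside the Pod
kubectl port-forward pod/my-app-pod 8080:80

# In a second terminal, hit the Pod directly;
# nginx should answer with its welcome page
curl http://localhost:8080
```

Stop the port-forward with Ctrl+C; it is a debugging tool, not a way to expose traffic in production.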

Deployment: Managing Pods at Scale

A Deployment is what you actually use in production. It wraps your Pods in a controller that handles three things:

  • Keeping a desired number of replicas running at all times
  • Rolling updates with zero downtime
  • Rollbacks when something goes wrong

Under the hood, a Deployment creates a ReplicaSet, which does the actual work of creating and monitoring Pods. You don’t interact with ReplicaSets directly — the Deployment manages them for you.

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "250m"
              memory: "256Mi"

The selector.matchLabels field is the critical piece: it’s how the Deployment knows which Pods it owns. The labels in the Pod template must match it exactly, or the API server rejects the manifest with a "selector does not match template labels" error.

Apply and check:

kubectl apply -f deployment.yaml
kubectl get deployments
kubectl get pods  # You'll see 3 pods with auto-generated names
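You can see the ReplicaSet layer mentioned above for yourself. A quick sketch (replace the placeholder with one of your actual Pod names):

```shell
# The Deployment created a ReplicaSet behind the scenes
kubectl get replicasets

# Each Pod's ownerReferences field points back at that ReplicaSet
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}'
# prints: ReplicaSet
```

This ownership chain (Deployment → ReplicaSet → Pods) is also why deleting a Pod by hand doesn’t reduce the replica count: the ReplicaSet notices and reconciles.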

Service: Stable Network Access to Your Pods

Pod IPs are temporary. Every time a Pod restarts, it gets a new address. A Service fixes this with a stable virtual IP that automatically routes traffic to healthy Pods matching a label selector.

Three Service types cover most scenarios:

  • ClusterIP — internal access only (default, used for service-to-service communication)
  • NodePort — exposes the service on a static port on every node (useful for testing)
  • LoadBalancer — provisions a cloud load balancer (AWS ELB, GCP LB, etc.)

# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

The selector matches the same label (app: my-app) used in the Deployment’s Pod template. Traffic routing isn’t based on Pod names or IPs — just labels. That’s what makes the system resilient to Pod restarts and rescheduling.
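You can watch this label-based routing update live. A sketch, assuming the my-app-deployment and my-app-service from the examples above are both applied:

```shell
# The Service's endpoint list is rebuilt automatically from matching Pods
kubectl get endpoints my-app-service

# Scale the Deployment and check again: new Pod IPs appear
kubectl scale deployment/my-app-deployment --replicas=5
kubectl get endpoints my-app-service
```

No Service configuration changed between those two commands; the endpoints follow the labels.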

Hands-On Practice: Deploy a Real App End to End

Prerequisites

For local testing, install Minikube and kubectl:

# Start a local cluster
minikube start

# Verify it's running
kubectl cluster-info
kubectl get nodes

Deploy the Full Stack

You can put all three resources in one file, separated by ---:

# app-stack.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25-alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

Apply and verify:

kubectl apply -f app-stack.yaml

# Check everything is up
kubectl get pods,svc

# On Minikube, get the access URL
minikube service web-service --url
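To confirm the whole chain works end to end, curl the URL Minikube hands back. A small sketch:

```shell
# Capture the NodePort URL and request the page;
# you should see the start of the nginx welcome HTML
URL=$(minikube service web-service --url)
curl -s "$URL" | head -n 4
```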

Test Self-Healing

Delete one of the running Pods and watch the Deployment spin up a replacement immediately:

# Get a pod name
kubectl get pods

# Delete it
kubectl delete pod web-deployment-xxxxxxxxx-xxxxx

# Watch Kubernetes recreate it
kubectl get pods -w

Rolling Update with Zero Downtime

Update the container image — Kubernetes handles the rest:

kubectl set image deployment/web-deployment web=nginx:1.26-alpine

# Watch the rollout
kubectl rollout status deployment/web-deployment

# If something's wrong, roll back
kubectl rollout undo deployment/web-deployment

# Check rollout history
kubectl rollout history deployment/web-deployment

In production, this setup is genuinely solid. With 2+ replicas and maxUnavailable: 0 in your rollout strategy, Kubernetes won’t kill old Pods until new ones pass readiness checks. Zero downtime — even under live traffic.
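The maxUnavailable: 0 setting mentioned above lives under spec.strategy in the Deployment, and it only delivers zero downtime if Pods also declare a readiness probe, since that is what "Ready" means to the controller. A sketch of the relevant fragment (field names are the real API; the probe path is an assumption for nginx):

```yaml
# Fragment of a Deployment spec
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never remove an old Pod before its replacement is Ready
      maxSurge: 1         # allow one extra Pod to exist during the rollout
  template:
    spec:
      containers:
        - name: web
          image: nginx:1.26-alpine
          readinessProbe:
            httpGet:
              path: /
              port: 80
```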

How Labels Tie Everything Together

Labels are the glue in Kubernetes. Here’s how to verify the connections:

# See which pods a service is targeting
kubectl describe service web-service
# Look for: Selector: app=web
# and:      Endpoints: (list of pod IPs)

# Query pods by label directly
kubectl get pods -l app=web

If traffic isn’t routing correctly, nine times out of ten it’s a label mismatch between the Service selector and the Pod labels. Check this first, before anything else.
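A quick way to run that check is to put the selector and the actual Pod labels side by side:

```shell
# What the Service is looking for
kubectl get service web-service -o jsonpath='{.spec.selector}'

# What the Pods actually carry
kubectl get pods --show-labels
```

If the selector’s key/value pair doesn’t appear in a Pod’s label column, that Pod gets no traffic.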

What to Build On Next

These three resources are your foundation. Here’s where to go from here:

  • ConfigMap and Secret — inject configuration and credentials without rebuilding images
  • Ingress — route external HTTP/HTTPS traffic to multiple Services via one load balancer
  • PersistentVolume — give stateful apps durable storage that survives Pod restarts
  • Horizontal Pod Autoscaler (HPA) — scale replicas automatically based on CPU or custom metrics
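As a taste of the HPA, here is a sketch using the web-deployment from this tutorial. Two assumptions: metrics-server is running (on Minikube, enable it with the addon), and the Deployment declares CPU requests, which the stack example above does not, so add a resources.requests block first or the CPU percentage has nothing to measure against:

```shell
# Requires metrics-server: minikube addons enable metrics-server
kubectl autoscale deployment/web-deployment --min=2 --max=10 --cpu-percent=70

# Watch the autoscaler's current vs. target utilization
kubectl get hpa
```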

Every advanced Kubernetes object follows the same pattern. StatefulSets, DaemonSets, Jobs — they all use Pods tagged with labels, a controller managing their lifecycle, and Services routing traffic via selectors. The vocabulary expands, but the logic stays the same.

Run a Deployment locally. Delete a Pod and watch Kubernetes bring it back. Trigger a rolling update and monitor it live. That hands-on experience is what makes the advanced concepts actually stick.
