The Problem Every Developer Hits Eventually
You clone a colleague’s project, run it, and immediately get a wall of errors. “Works on my machine” — four words that have derailed more projects than I can count. The root cause is almost always environment differences: wrong Python version, missing system library, different OS paths, conflicting dependencies.
Docker fixes this by changing what you ship. Instead of handing someone your code and hoping their machine cooperates, you hand them the entire environment — OS libraries, runtime, dependencies, config — bundled into a portable unit called a container. After three years of running Docker in production across a dozen projects, I'd call it one of the most useful tools I've picked up.
Quick Start — Get Docker Running in 5 Minutes
Let’s get Docker installed and verify it works before diving into theory.
Install Docker on Linux (Ubuntu/Debian)
# Remove old versions if any
sudo apt-get remove docker docker-engine docker.io containerd runc
# Install dependencies
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Add Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
Run Docker without sudo (optional but recommended)
sudo usermod -aG docker $USER
# newgrp applies the group in this shell only; log out and back in for other sessions
newgrp docker
Verify the installation
docker --version
# Docker version 26.x.x, build ...
docker run hello-world
See a friendly message from Docker? You’re set. That hello-world command just pulled an image from Docker Hub, spun up a container, printed output, and exited cleanly. Pull → run → exit. That’s the entire container lifecycle in one command.
How Docker Actually Works
Containers vs Virtual Machines
Lots of people coming from a VM background assume containers are just smaller VMs. They’re not — the difference goes deeper than size.
- VMs virtualize hardware. Each VM runs its own full OS kernel — spinning one up takes 30–60 seconds and can eat 1–2GB of RAM before your app even starts.
- Containers share the host OS kernel. They isolate processes using Linux namespaces and cgroups. Startup takes under a second; memory overhead is typically a few MB.
A useful mental model: a VM is like running a separate computer inside your computer. A container is more like a process that thinks it has the machine to itself.
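You can see a slice of this isolation for yourself on any Linux host. Every process appears as a numbered directory under /proc, and a container's separate PID namespace hides everything but its own processes (a rough illustration, not Docker-specific):

```shell
# On the host, /proc lists every process on the machine --
# typically hundreds of numbered entries:
ls -d /proc/[0-9]* | wc -l

# Inside a container, the same command sees only the container's own
# processes, because it runs in its own PID namespace:
#   docker run --rm alpine sh -c 'ls -d /proc/[0-9]* | wc -l'
# (usually prints 1 or 2)
```

Same kernel, same /proc mechanism — just a different, much smaller view of it.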
Core Docker Concepts
- Image — A read-only blueprint. Like a class in OOP.
- Container — A running instance of an image. Like an object created from that class.
- Dockerfile — A recipe for building an image, line by line.
- Docker Hub — Public registry where images live. Think npm, but for containers.
- Volume — Persistent storage that survives container restarts.
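The class/object analogy above is worth internalizing, because it explains why one image can back many containers. A toy sketch in Python (illustrative only — this is not how Docker is implemented):

```python
class Image:
    """Read-only blueprint: a name plus the command it starts with."""
    def __init__(self, name, cmd):
        self.name = name
        self.cmd = cmd

    def run(self):
        # `docker run` = instantiate a container from an image
        return Container(self)

class Container:
    """A running instance with its own mutable state."""
    def __init__(self, image):
        self.image = image
        self.status = "running"

# One image, many independent containers -- just like one class, many objects
nginx = Image("nginx:latest", "nginx -g 'daemon off;'")
c1, c2 = nginx.run(), nginx.run()
print(c1.image.name, c2.status)   # nginx:latest running
```

Deleting `c1` does nothing to `c2` or to the image itself — which is exactly how `docker rm` relates to `docker rmi`.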
Essential Docker Commands
# Pull an image from Docker Hub
docker pull nginx:latest
# List downloaded images
docker images
# Run a container (detached, port mapped)
docker run -d -p 8080:80 --name my-nginx nginx
# Check running containers
docker ps
# Check all containers (including stopped)
docker ps -a
# View container logs
docker logs my-nginx
# Open a shell inside a running container (alpine-based images ship sh, not bash)
docker exec -it my-nginx bash
# Stop and remove a container
docker stop my-nginx
docker rm my-nginx
# Remove an image
docker rmi nginx:latest
The -p 8080:80 flag maps port 80 inside the container to port 8080 on your host. Hit http://localhost:8080 in a browser and nginx is live — no installation needed beyond that one docker run.
Building Your Own Docker Image
Pulling pre-built images only gets you so far. Packaging your own app is where things get interesting.
Write a Dockerfile for a Python App
Here’s a minimal Flask app to work with:
# app.py
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello():
return 'Hello from Docker!'
if __name__ == '__main__':
app.run(host='0.0.0.0', port=5000)
# requirements.txt
flask==3.0.3
Create a Dockerfile in the same directory:
# Use official Python slim image as base
FROM python:3.12-slim
# Set working directory inside container
WORKDIR /app
# Copy dependency file first (layer caching optimization)
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY app.py .
# Expose the port the app listens on
EXPOSE 5000
# Command to run when container starts
CMD ["python", "app.py"]
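One detail worth knowing here: CMD is only a default, and anything passed after the image name in `docker run` replaces it wholesale. If you want the executable fixed and only its arguments swappable, the usual pattern combines ENTRYPOINT with CMD — a sketch of the same image written that way:

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 5000
# ENTRYPOINT fixes the executable; CMD supplies default arguments
# that e.g. `docker run my-flask-app:1.0 other.py` could override.
ENTRYPOINT ["python"]
CMD ["app.py"]
```

Either form works for this app; the split matters once an image is meant to behave like a command-line tool.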
Build and Run Your Image
# Build the image (. means use current directory as build context)
docker build -t my-flask-app:1.0 .
# Run it
docker run -d -p 5000:5000 --name flask-demo my-flask-app:1.0
# Test it
curl http://localhost:5000
# Hello from Docker!
Using Docker Volumes for Persistent Data
Containers are ephemeral — anything written inside disappears when you remove the container. Databases and stateful services need volumes to survive that:
# Run PostgreSQL with a named volume
docker run -d \
--name postgres-db \
-e POSTGRES_PASSWORD=mysecretpassword \
-e POSTGRES_DB=myapp \
-v pgdata:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:16
# List volumes
docker volume ls
# Inspect volume location on host
docker volume inspect pgdata
The pgdata volume sticks around even after docker rm. Recreate the container with the same -v pgdata:... flag and your data comes back untouched.
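You can verify that claim end to end, assuming the postgres-db container from above is running (psql ships inside the official postgres image):

```shell
# Write some state into the database
docker exec postgres-db psql -U postgres -d myapp -c "CREATE TABLE t (x int);"

# Destroy the container -- but not the volume
docker stop postgres-db && docker rm postgres-db

# Recreate it with the same volume
docker run -d --name postgres-db \
  -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_DB=myapp \
  -v pgdata:/var/lib/postgresql/data -p 5432:5432 postgres:16

# Give postgres a few seconds to start, then list tables -- t is still there
sleep 5
docker exec postgres-db psql -U postgres -d myapp -c "\dt"
```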
Practical Tips from Real-World Usage
1. Layer Caching — Build Images Faster
Docker caches each instruction layer. Copy requirements.txt before copying app code — when only your code changes, Docker reuses the cached dependency layer and skips reinstalling packages entirely. On a project with 40+ dependencies, this cuts build time from ~90 seconds down to under 5.
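For contrast, here is the cache-hostile ordering you want to avoid — the kind of Dockerfile fragment that reinstalls everything on each build:

```dockerfile
# Anti-pattern: COPY . . changes whenever any file changes,
# so the pip install layer below is rebuilt on every code edit.
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
```

The Dockerfile earlier in this article does it the right way around: requirements first, install, then the code.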
2. Use .dockerignore
A .dockerignore file keeps junk out of the build context:
__pycache__
*.pyc
.env
.git
venv/
*.log
Smaller build context means faster builds. More importantly, it stops you from accidentally baking local secrets or config into the image.
3. Never Store Secrets in Images
Hardcoding API keys or passwords in a Dockerfile is a common and dangerous mistake. Pass them at runtime instead:
docker run -e DATABASE_URL=postgres://user:pass@host/db my-app
Or point to a .env file that lives outside version control:
docker run --env-file .env my-app
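On the application side, the code then reads its configuration from the environment rather than from anything baked into the image. A minimal sketch (variable names are illustrative):

```python
import os

def load_config():
    """Read settings injected via -e or --env-file at `docker run` time."""
    db_url = os.environ.get("DATABASE_URL")
    if db_url is None:
        # Fail fast at startup -- better than silently hitting the wrong database
        raise RuntimeError("DATABASE_URL is not set")
    return {
        "database_url": db_url,
        "debug": os.environ.get("DEBUG", "0") == "1",
    }
```

The same image now runs unchanged in dev, staging, and production — only the injected environment differs.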
4. Multi-stage Builds for Smaller Images
For compiled languages like Go or Java, multi-stage builds let you compile in one container and ship just the binary in another:
# Build stage
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
RUN go build -o server .
# Final stage — only the binary, no Go toolchain
FROM debian:bookworm-slim
COPY --from=builder /app/server /usr/local/bin/server
CMD ["server"]
This alone can shrink a Go image from ~850MB (full toolchain included) down to 12–20MB. Worth the extra four lines every time.
5. Inspect a Container When Something Goes Wrong
# Detailed container info (network, mounts, env vars)
docker inspect my-container
# Live resource usage
docker stats
# Start a fresh container from the image with a shell instead of its
# normal entrypoint (handy when the app crashes on startup)
docker run -it --entrypoint bash my-image
Where to Go From Here
Get comfortable with single containers first, then tackle Docker Compose. It lets you define an entire stack — web app, PostgreSQL, Redis, message queue — in one docker-compose.yml and start everything with a single docker compose up. No more juggling five terminal tabs running five separate docker run commands.
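As a preview, a compose file for the Flask app and Postgres from this article might look like this (a sketch — service names and settings are illustrative):

```yaml
services:
  web:
    build: .                # uses the Dockerfile from earlier
    ports:
      - "5000:5000"
    env_file: .env          # secrets stay out of the image
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: mysecretpassword
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

One `docker compose up` replaces both `docker run` commands, and `docker compose down` tears the whole stack back down.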
After that, Kubernetes handles running containers at scale across multiple machines. Don’t rush there, though. Docker basics alone will make you noticeably more productive within a week of daily use — and that’s a solid foundation to build on.

