The Day Everything Broke Before Stand-Up
Picture this: your team just onboarded a new developer. They clone the repo, follow the README, and thirty minutes later they’re still fighting with port conflicts, missing environment variables, and a database that won’t start in the right order. Stand-up is in five minutes. Every team hits this eventually.
That was us, about two years ago. We had a Node.js API, a PostgreSQL database, a Redis cache, and an Nginx reverse proxy — four containers, all managed by a wall of individual docker run commands stuffed into a shell script nobody wanted to touch.
The containers weren’t the problem. Orchestration was. Getting them to start in the right order, talk to each other on the right network, and stay consistent between local dev and staging — that’s where everything fell apart.
Why Individual docker run Commands Break Down
When you manage containers one by one, you’re essentially solving the same problem with duct tape every time:
- Startup order: Your API container crashes because the database isn’t ready yet.
- Networking: Containers can’t find each other unless you manually create and attach networks.
- Environment drift: Dev uses different ports and credentials than staging, and nobody wrote it down.
- Teardown: Cleaning up means hunting down container IDs, volumes, and networks by hand.
You could write a bash script. Some teams do. They look clean for about a week — then someone adds a service, and suddenly there are conditionals for checking if a network already exists, error handling for partial failures, flags nobody remembers. Scripts rot.
docker run is designed for single containers. Multi-container apps need a different tool.
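For context, the shell-script approach the file above replaced looks roughly like this. This is a sketch, not our actual script; the image name my-api:latest and the network name appnet are illustrative:

```shell
# Manual orchestration: every step here is something Compose later automates.
# Create the shared network only if it doesn't already exist.
docker network inspect appnet >/dev/null 2>&1 || docker network create appnet

# Start Postgres first and hope it is ready before the API needs it.
docker run -d --name db --network appnet \
  -e POSTGRES_USER=app -e POSTGRES_PASSWORD=secret -e POSTGRES_DB=appdb \
  -v pgdata:/var/lib/postgresql/data postgres:16-alpine

# Crude readiness polling, the duct tape that Compose healthchecks replace.
until docker exec db pg_isready -U app -d appdb >/dev/null 2>&1; do sleep 1; done

# Finally the API, wired to the database by hand (my-api:latest is a placeholder).
docker run -d --name api --network appnet -p 3000:3000 \
  -e DATABASE_URL=postgres://app:secret@db:5432/appdb my-api:latest
```

Every service you add multiplies this: another network check, another readiness loop, another cleanup path.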
What Docker Compose Actually Is (And Isn’t)
Docker Compose lets you define your entire multi-container application in a single YAML file, then manage everything with simple commands. Networking, startup order, volume mounts, environment variables — all declarative, all in one place.
It’s not a replacement for Kubernetes. Running hundreds of services across multiple nodes with auto-scaling requirements? You need something heavier. But for local development, small production deployments, and CI environments, Compose hits exactly the right level of complexity.
Since Docker Desktop v3.3, Compose V2 is bundled directly with Docker. You invoke it as docker compose (no hyphen), though the older docker-compose CLI still works if you have it installed separately.
Building a Real-World compose.yaml
Here’s the setup we use for a typical web app stack: an API service, a PostgreSQL database, and a Redis cache. This is close to what we run in production, with credentials simplified for the example.
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/appdb
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: appdb
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d appdb"]
      interval: 10s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    volumes:
      - redisdata:/data

volumes:
  pgdata:
  redisdata:
Three decisions in this file are worth understanding before you copy it blindly.
The depends_on + healthcheck Combination
depends_on: db alone only waits for the container to start — not for PostgreSQL to actually be ready to accept connections. Add a healthcheck and set condition: service_healthy, and the API container won’t launch until Postgres passes its health probe. We’ve burned hours on startup race conditions. This combination eliminates them.
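You can watch the health transition yourself. With the file above, docker compose ps shows each service's state (the db entry reads "healthy" once the probe passes), and docker inspect exposes the raw probe status:

```shell
# List services with their current state and health.
docker compose ps

# Print only the health status of the db container.
docker inspect --format '{{.State.Health.Status}}' $(docker compose ps -q db)
```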
Named Volumes vs. Bind Mounts
Named volumes (pgdata, redisdata) are managed by Docker and persist between container restarts. Databases should always use named volumes. Bind mounts — mapping a local directory like ./data:/var/lib/postgresql/data — are useful when you need direct file access during development, but they cause permission headaches on Linux and don’t belong near a production database.
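For contrast, a development-only bind mount would look like this. The host path ./pgdata is illustrative, and this is a sketch of the pattern, not something we run:

```yaml
services:
  db:
    volumes:
      # Bind mount: the host directory ./pgdata holds the data files directly.
      # Convenient for inspection, but prone to UID/permission mismatches on Linux.
      - ./pgdata:/var/lib/postgresql/data
```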
Service Discovery via DNS
The API’s DATABASE_URL uses db as the hostname. Compose automatically creates a network for your project and registers each service by its name — so containers find each other by service name, with no manual network configuration required.
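You can confirm the DNS wiring from inside a running container. Assuming the api image ships a shell with getent (both Alpine and Debian-based images do), this resolves the db service name to its container IP:

```shell
# Resolve the service name "db" from inside the api container.
docker compose exec api getent hosts db
```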
Day-to-Day Commands You’ll Actually Use
# Start everything in detached mode
docker compose up -d
# Start and rebuild images (after code changes)
docker compose up -d --build
# Check what's running
docker compose ps
# Follow logs for a specific service
docker compose logs -f api
# Run a one-off command in a service container
docker compose exec db psql -U app -d appdb
# Stop everything (keeps volumes)
docker compose down
# Nuclear option: remove containers, networks, AND volumes
docker compose down -v
That last one — down -v — deletes your database data. Useful when you want a completely clean slate during development; dangerous if you run it in the wrong environment without thinking.
Managing Environment-Specific Configuration
Environment differences are one of the most common sources of friction between dev, staging, and production. The cleanest approach is combining a base compose.yaml with override files:
# Development (uses compose.yaml + compose.override.yaml automatically)
docker compose up -d
# Staging or production (explicit file selection)
docker compose -f compose.yaml -f compose.prod.yaml up -d
Your compose.override.yaml for development might enable hot-reloading and expose debug ports:
services:
  api:
    volumes:
      - ./api/src:/app/src
    environment:
      NODE_ENV: development
      DEBUG: "*"
The production override keeps things locked down — no volume mounts, no debug flags, resource limits set.
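A compose.prod.yaml in that spirit might look like the following. This is a sketch; the values are illustrative, not copied from our setup:

```yaml
# compose.prod.yaml (sketch; values are placeholders)
services:
  api:
    environment:
      NODE_ENV: production
    restart: unless-stopped
  db:
    restart: unless-stopped
  cache:
    restart: unless-stopped
```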
Secrets get their own file. Drop a .env in the same directory as your compose.yaml and Compose picks it up automatically:
# .env
POSTGRES_PASSWORD=actualstrongpassword
API_SECRET_KEY=yoursecretkey
# compose.yaml
services:
  db:
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
Commit a .env.example with placeholder values to the repo, add .env to .gitignore. That solves the “where are the credentials” problem without leaking them into version control.
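The committed template mirrors the real file key for key, with obvious placeholders:

```
# .env.example (committed to the repo)
POSTGRES_PASSWORD=changeme
API_SECRET_KEY=changeme

# .gitignore
.env
```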
What We Actually Run in Production
We’ve run this exact setup in production for 18 months across three projects. The deployment pipeline is simple: push to main, SSH into the server, pull the latest image, run docker compose up -d. No Kubernetes overhead, no cloud-native complexity.
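That whole deploy step fits in a few lines. This is a sketch of the pipeline described above; the host, user, and path are placeholders:

```shell
# deploy.sh (sketch; deploy@prod-server and /srv/app are placeholders)
ssh deploy@prod-server <<'EOF'
cd /srv/app
docker compose pull   # fetch the latest images
docker compose up -d  # recreate only services whose images changed
docker compose ps     # quick sanity check
EOF
```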
Two to five engineers, traffic in the hundreds of thousands of monthly requests — this setup handles it fine. Three additions make the difference in production stability:
- restart: unless-stopped on every service — containers restart automatically after crashes or server reboots.
- Resource limits via deploy.resources.limits — prevents a runaway process from starving other containers.
- Log drivers — Docker's default json-file driver stores logs with no size cap. Set max-size and max-file, or your disk will fill up eventually.
services:
  api:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    deploy:
      resources:
        limits:
          memory: 512m
          cpus: '0.5'
The Verdict: When Compose Is the Right Call
Running this across multiple projects for over a year, here’s where Compose earns its keep:
- Local development environments — get new team members running in minutes, not hours.
- Small to medium production deployments on a single host or a small cluster.
- CI pipelines — spin up a full stack for integration tests, then tear it down cleanly.
- Staging environments that need to mirror production without the cost of full orchestration.
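The CI case collapses to three commands. The --wait flag in Compose V2 blocks until healthchecked services report healthy, which is what makes the test step safe to run immediately; the test command itself is a placeholder:

```shell
# Bring the stack up and block until healthchecks pass.
docker compose up -d --wait

# Run integration tests against the live stack (placeholder command).
docker compose exec api npm test

# Tear down containers, networks, AND volumes for a clean next run.
docker compose down -v
```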
When you outgrow it, the skills transfer directly to Kubernetes. The concepts — services, networks, volumes, health checks — are the same. Compose is where you learn them without the complexity tax.
The new developer who took thirty minutes to get set up? After we added compose.yaml to the repo, onboarding dropped to under five minutes. One command, everything running, ready to work. That’s the actual value.

