It’s 2 AM and Your App Is Down
You just pushed a new version. Your teammate deployed it directly on the VPS — no containers, just a bare node server.js running in a tmux session. Now the process is dead, port 80 is occupied by something else, and the SSL cert expired three days ago without anyone noticing. Sound familiar?
I’ve been there. Multiple times. After switching to Docker + Nginx + Certbot across four production services, those 2 AM incidents stopped. Here’s exactly how I set it up.
Root Cause: Why Bare-Metal Deploys Break
It’s worth understanding what goes wrong with the “just run it” approach before jumping into the fix:
- Process management is manual — if the process crashes, nothing restarts it.
- Port conflicts — two services fighting over port 80 or 3000 with no isolation.
- Dependency hell — Node 16 on the server, Node 20 in your code. Things break silently.
- SSL is an afterthought — most teams add HTTPS only after getting burned by a browser warning in front of a client.
Docker handles isolation. Nginx handles routing and SSL termination. Certbot automates certificate renewal. Each tool has one job. Together, they cover the failures that bite you at the worst possible time.
Comparing Your Options
Three common deployment strategies, ranked honestly:
Option 1: Bare-metal (no containers)
- Fast to start, painful to maintain
- No rollback story, dependency conflicts accumulate over time
- Fine for hobby projects, dangerous for anything real
Option 2: Docker alone (no Nginx)
- Solves isolation, but you still need to expose port 80/443 directly
- No centralized SSL management, harder to host multiple apps on one VPS
Option 3: Docker + Nginx + Certbot (this guide)
- Nginx acts as a reverse proxy — your app only listens on an internal port
- Certbot handles Let’s Encrypt SSL with auto-renewal
- Clean separation: add a new app by adding a new Nginx config block, no infra changes
- Running this stack across four services for over 14 months, I’ve had zero cert renewals to think about and zero port conflicts between apps
Option 3 is what we’re building.
Prerequisites
- A VPS running Ubuntu 22.04 (any Debian-based distro works)
- A domain name pointing to your VPS IP (A record set)
- SSH access with sudo privileges
- Your app has a Dockerfile (we’ll add one if not)
Step 1: Install Docker on the VPS
# Update and install dependencies
sudo apt update && sudo apt install -y ca-certificates curl gnupg
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Add Docker repo
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt update && sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Allow your user to run Docker without sudo
sudo usermod -aG docker $USER
newgrp docker
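A quick sanity check confirms the install and the group change took effect (hello-world is Docker’s official test image):

```shell
docker --version             # client is installed
docker compose version       # compose plugin is present
docker run --rm hello-world  # daemon is running and your user can talk to it without sudo
```

If the last command fails with a permission error, log out and back in so the group membership fully applies.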
Step 2: Containerize Your App
No Dockerfile yet? Here’s a minimal one for Node.js. Adapt it for Python, Go, or whatever you’re running.
# Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
Build and test locally before touching the VPS:
docker build -t myapp .
docker run -p 3000:3000 myapp
Hit http://localhost:3000 and confirm it works. Don’t skip this — debugging on the VPS with no local context is miserable.
Step 3: Set Up Docker Compose
On the VPS, create a project directory and add a docker-compose.yml. This keeps your container config reproducible and version-controlled.
mkdir -p /opt/myapp && cd /opt/myapp
# docker-compose.yml
services:
app:
image: myapp:latest
restart: always
ports:
- "127.0.0.1:3000:3000"
environment:
- NODE_ENV=production
      - DATABASE_URL=postgres://user:pass@db:5432/mydb # assumes a separate "db" service or external host
networks:
default:
name: web
The ports binding is important: 127.0.0.1:3000:3000 exposes the container only to localhost. Nginx (running as a system service) reaches it there. Port 3000 never faces the public internet directly.
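You can verify the binding from the VPS once the container is up (ss ships with Ubuntu):

```shell
ss -tlnp | grep 3000
# A listener on 127.0.0.1:3000 means localhost-only;
# 0.0.0.0:3000 would mean the port is exposed to the internet
```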
restart: always is what kills the 2 AM crash-and-no-one-notices problem. Docker restarts the container automatically if it dies — no tmux sessions required.
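It’s worth proving the restart policy works before you rely on it. Killing the container simulates a crash — note the container name follows Compose’s directory-service-index pattern, so yours may differ:

```shell
docker kill myapp-app-1  # simulate a hard crash
sleep 5
docker compose ps        # the container should already be back,
                         # showing a fresh "Up X seconds" status
```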
Step 4: Install and Configure Nginx
sudo apt install -y nginx
sudo systemctl enable nginx
Create a site config for your domain:
sudo nano /etc/nginx/sites-available/myapp
server {
listen 80;
server_name yourdomain.com www.yourdomain.com;
location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
# Enable the site
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
Your app should now be reachable over HTTP. Confirm with curl http://yourdomain.com before moving to SSL. Don’t skip this check — SSL troubleshooting on top of a broken HTTP config is a bad time.
Step 5: Add Free SSL with Certbot
sudo apt install -y certbot python3-certbot-nginx
sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com
Certbot will:
- Verify domain ownership via HTTP challenge
- Obtain a Let’s Encrypt certificate
- Automatically modify your Nginx config to add HTTPS and redirect HTTP → HTTPS
Test the auto-renewal timer — this single step prevents the expired-cert incident:
sudo certbot renew --dry-run
Certbot installs a systemd timer that runs renewal checks twice daily. Certificates are renewed automatically once they’re within 30 days of expiry. You don’t need to touch it again.
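Two commands let you see that machinery for yourself — the timer schedule and each certificate’s expiry date:

```shell
systemctl list-timers certbot.timer  # next scheduled renewal check
sudo certbot certificates            # issued certs with their expiry dates
```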
Step 6: Deploy Your App Container
Transfer your image to the VPS. Quickest path without a registry:
# On your local machine
docker save myapp:latest | gzip > myapp.tar.gz
scp myapp.tar.gz user@your-vps-ip:/opt/myapp/
# On the VPS
cd /opt/myapp
docker load < myapp.tar.gz
docker compose up -d
For CI/CD pipelines, push to Docker Hub or a private registry and pull on the VPS instead — cleaner and faster for repeated deploys:
docker pull youruser/myapp:latest
docker compose up -d --pull always
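For repeated deploys it helps to wrap the pull-and-restart dance in one script. A sketch — deploy.sh is a hypothetical name, and the path assumes the layout from Step 3:

```shell
#!/usr/bin/env bash
# deploy.sh — redeploy the app from the registry in one shot
set -euo pipefail

cd /opt/myapp
docker compose pull    # fetch the newest image from the registry
docker compose up -d   # recreate only containers whose image changed
docker image prune -f  # clear out superseded image layers
docker compose ps      # confirm the stack came back healthy
```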
Step 7: Verify Everything Is Running
# Check container status
docker compose ps
# Follow live logs
docker compose logs -f app
# Check Nginx status
sudo systemctl status nginx
# Confirm SSL is working
curl -I https://yourdomain.com
You should see HTTP/2 200 with no certificate errors. Seeing a redirect loop? Check that Nginx passes X-Forwarded-Proto correctly and that your app isn’t issuing its own HTTPS redirect on top.
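If you do hit a redirect loop, the fix on the app side is to trust the forwarded header rather than the raw connection, which is always plain HTTP behind the proxy. A minimal sketch in plain Node — isHttps and shouldRedirectToHttps are hypothetical helper names:

```javascript
// Behind Nginx, the connection to Node is always plain HTTP.
// The client's original scheme arrives in the X-Forwarded-Proto header.
function isHttps(req) {
  const proto = req.headers['x-forwarded-proto'] || '';
  // Chained proxies may send a comma-separated list; the first entry
  // is the scheme the client actually used
  return proto.split(',')[0].trim() === 'https';
}

// Redirect only when the original request really was plain HTTP
function shouldRedirectToHttps(req) {
  return !isHttps(req);
}
```

With Express, the equivalent is app.set('trust proxy', 1), which makes req.secure and req.protocol honor the same header.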
Adding a Second App to the Same VPS
Here’s where having this infrastructure earns its keep. Need another service on the same box? New Nginx config pointing to a different port, another Docker Compose stack, Certbot for the new domain. No port conflicts, no shared dependencies, no drama.
sudo nano /etc/nginx/sites-available/anotherapp
# Point proxy_pass to http://localhost:4000
sudo certbot --nginx -d anotherapp.com
docker compose -f /opt/anotherapp/docker-compose.yml up -d
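The new site’s server block is the same shape as the first — only server_name and the upstream port change (anotherapp.com and port 4000 are placeholders):

```nginx
server {
    listen 80;
    server_name anotherapp.com;

    location / {
        proxy_pass http://localhost:4000;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Run certbot --nginx against the new domain afterwards and it layers HTTPS onto this block the same way it did in Step 5.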
The Stack That Actually Holds Up
Docker for isolation, Nginx for routing, Certbot for SSL. That combination handles 90% of what keeps people up at night. Each piece has one job and does it well.
When something does break, the logs are in two obvious places: docker compose logs and /var/log/nginx/. Rolling back means docker compose down && docker compose up -d with the previous image tag. Start with this stack, keep it lean, and add complexity only when you have a specific reason — not because it seemed like a good idea at 11 PM before a launch.

