Why Most Ubuntu Servers End Up Needing Nginx
When I first set up a Node.js app on a fresh Ubuntu 22.04 VPS with 4GB RAM, I ran it directly on port 3000 and exposed it to the internet. It worked — until it didn’t. The moment I needed HTTPS, wanted to host a second app on the same server, or needed to serve static files efficiently, running Node alone became a liability.
Here’s the root cause. Application runtimes like Node, Python (Gunicorn), or PHP-FPM are excellent at executing code. But none of them are built to handle raw HTTP edge cases: SSL termination, connection queuing under load, gzip, caching, or routing traffic across multiple backends. That’s the gap Nginx fills.
On my production server, switching to this setup dropped average response time from ~90ms to ~22ms under load. The Node process stopped burning CPU on static asset requests and could focus purely on application logic.
Installation
On Ubuntu 22.04 or Debian-based systems, Nginx is in the default APT repository:
sudo apt update
sudo apt install nginx -y
Once installed, start and enable it so it survives reboots:
sudo systemctl start nginx
sudo systemctl enable nginx
Allow HTTP and HTTPS through UFW if you have it active:
sudo ufw allow 'Nginx Full'
Open your server’s IP in a browser — you should see the Nginx welcome page. That confirms the web server is running.
Understanding the Configuration Layout
Before writing any config, understand where Nginx keeps things:
- /etc/nginx/nginx.conf — main config; includes everything in conf.d/ and sites-enabled/
- /etc/nginx/sites-available/ — where you store site configs (inactive until symlinked)
- /etc/nginx/sites-enabled/ — symlinks to active configs
- /var/www/html/ — default web root
- /var/log/nginx/ — access and error logs
The pattern is: write a config in sites-available, then symlink it to sites-enabled to activate it. This lets you disable a site without deleting its config.
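The enable/disable cycle is easy to rehearse safely before touching the real server. A sketch using throwaway /tmp directories as stand-ins for the real Nginx paths:

```shell
# Hypothetical stand-ins for /etc/nginx/sites-available and sites-enabled
mkdir -p /tmp/sites-available /tmp/sites-enabled
echo 'server { listen 80; }' > /tmp/sites-available/mysite.com

# Enable: symlink the config into sites-enabled
ln -s /tmp/sites-available/mysite.com /tmp/sites-enabled/mysite.com

# Disable: remove only the symlink — the real config survives
rm /tmp/sites-enabled/mysite.com
ls /tmp/sites-available/
```

The same two commands (`ln -s` to enable, `rm` on the symlink to disable) are all you ever need against the real `/etc/nginx` paths.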
Configuration: Nginx as a Static Web Server
Start with the simplest case — serving static HTML files for a domain.
Create a web root and a test page:
sudo mkdir -p /var/www/mysite.com/html
echo '<h1>Hello from Nginx</h1>' | sudo tee /var/www/mysite.com/html/index.html
Write a server block config:
sudo nano /etc/nginx/sites-available/mysite.com
Paste this configuration:
server {
listen 80;
server_name mysite.com www.mysite.com;
root /var/www/mysite.com/html;
index index.html;
location / {
try_files $uri $uri/ =404;
}
access_log /var/log/nginx/mysite.access.log;
error_log /var/log/nginx/mysite.error.log;
}
Enable it and test:
sudo ln -s /etc/nginx/sites-available/mysite.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
The nginx -t step is critical — always run it before reloading. It catches syntax errors before they take down your server.
Configuration: Nginx as a Reverse Proxy
The reverse proxy setup is what most people actually come to Nginx for. Say you have a Node.js app running on port 3000 locally. You want http://myapp.com to route all traffic to it.
Drop a new config file in place:
sudo nano /etc/nginx/sites-available/myapp.com
server {
listen 80;
server_name myapp.com www.myapp.com;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_read_timeout 60s;
}
access_log /var/log/nginx/myapp.access.log;
error_log /var/log/nginx/myapp.error.log;
}
Don’t skip the proxy_set_header lines. Without them, your backend sees every request coming from 127.0.0.1 instead of the real client IP — which breaks rate limiting, logging, and geolocation in one shot.
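Concretely, $proxy_add_x_forwarded_for appends one IP per proxy hop, so the left-most entry in the header is the original client — that's the address your app should log. A quick illustration with made-up addresses:

```shell
# A request that passed through one upstream proxy; left-most IP = real client
echo 'X-Forwarded-For: 203.0.113.7, 10.0.0.2' \
  | sed 's/X-Forwarded-For: //' | awk -F', ' '{print $1}'
# prints 203.0.113.7
```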
Enable it:
sudo ln -s /etc/nginx/sites-available/myapp.com /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
Proxying to Multiple Backends
If you run two services on the same server — say an API on port 3000 and a frontend on port 5000 — you can route them by path under a single domain:
server {
listen 80;
server_name platform.com;
location /api/ {
proxy_pass http://127.0.0.1:3000/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
location / {
proxy_pass http://127.0.0.1:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
}
For prefix locations like these, Nginx picks the longest matching prefix regardless of their order in the file. Requests to /api/users go to port 3000; everything else goes to port 5000.
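One subtlety worth knowing: the trailing slash on proxy_pass changes the path the backend receives. A sketch of the two behaviors (these blocks are alternatives — pick whichever matches what your backend's router expects):

```nginx
location /api/ {
    # With a trailing slash, the matched /api/ prefix is replaced:
    # GET /api/users  ->  backend sees GET /users
    proxy_pass http://127.0.0.1:3000/;
}

location /api/ {
    # Without it, the full URI is passed through unchanged:
    # GET /api/users  ->  backend sees GET /api/users
    proxy_pass http://127.0.0.1:3000;
}
```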
Adding SSL with Let’s Encrypt
Plain HTTP has no place in production. Browsers show “Not secure” warnings, and Google treats HTTPS as a ranking signal. The fix is free — install Certbot and get a certificate:
sudo apt install certbot python3-certbot-nginx -y
sudo certbot --nginx -d myapp.com -d www.myapp.com
Certbot modifies your Nginx config automatically, adding the listen 443 ssl block and redirecting HTTP to HTTPS. Renewal is handled by a systemd timer — verify it with:
sudo certbot renew --dry-run
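After Certbot runs, the server block ends up looking roughly like this (certificate paths follow Let’s Encrypt’s standard layout; treat this as a sketch, not Certbot’s exact output):

```nginx
server {
    listen 443 ssl;
    server_name myapp.com www.myapp.com;

    ssl_certificate /etc/letsencrypt/live/myapp.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # ...existing proxy_set_header lines unchanged...
    }
}

server {
    listen 80;
    server_name myapp.com www.myapp.com;
    return 301 https://$host$request_uri;
}
```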
Verification and Monitoring
After any config change, these are the checks I run every time.
Syntax Check and Reload
sudo nginx -t
sudo systemctl reload nginx
Never use restart in production unless necessary — reload applies the new config with zero downtime by gracefully finishing existing connections first.
Service Status
sudo systemctl status nginx
Look for active (running). If it shows failed, the error message usually tells you exactly which config line caused it.
Live Request Monitoring
Watch real-time traffic to a specific site:
sudo tail -f /var/log/nginx/myapp.access.log
Or watch the error log, with favicon noise filtered out:
sudo tail -f /var/log/nginx/myapp.error.log | grep -v 'favicon'
Test Proxy Headers
Use curl to verify the reverse proxy is forwarding correctly and returning the right status code:
curl -I http://myapp.com
curl -v http://myapp.com/api/health
Check the X-Forwarded-For header in your app’s logs to confirm the real client IP is coming through.
Nginx Status Page (Optional)
For basic connection metrics, enable the stub status module:
location /nginx_status {
stub_status;
allow 127.0.0.1;
deny all;
}
Then query it locally:
curl http://127.0.0.1/nginx_status
This outputs active connections, cumulative accepted/handled connection and request totals, and reading/writing/waiting counts — useful for quick capacity checks without installing a full monitoring stack.
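The output format is stable enough to parse in scripts. A sketch that pulls the active-connection count out of a sample of the stub_status output (the numbers here are made up):

```shell
# Sample stub_status output piped through awk to extract the active count
printf 'Active connections: 3 \nserver accepts handled requests\n 100 100 250 \nReading: 0 Writing: 1 Waiting: 2 \n' \
  | awk '/Active connections/ {print $3}'
# prints 3
```

Swap the printf for `curl -s http://127.0.0.1/nginx_status` on a server where the status page is enabled.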
Common Pitfalls
- 502 Bad Gateway — your backend isn’t running, or it’s on a different port than what you specified in proxy_pass. Check with ss -tlnp | grep 3000.
- 413 Request Entity Too Large — add client_max_body_size 50M; in your server block if users upload files.
- Redirect loops with SSL — if your backend detects HTTPS via X-Forwarded-Proto and redirects, make sure you’re setting that header. Without it, you get an infinite redirect loop.
- Forgetting to disable the default site — the default config catches all unmatched traffic. Remove it with sudo rm /etc/nginx/sites-enabled/default.
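Two of the fixes above live in the server block itself. A combined sketch (the port and size are this article’s examples, not requirements):

```nginx
server {
    listen 80;
    server_name myapp.com;

    # 413 fix: raise the upload limit (Nginx's default is 1M)
    client_max_body_size 50M;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # Redirect-loop fix: tell the backend which scheme the client used
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```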

