Which PostgreSQL Setup Should You Actually Use?
Starting with PostgreSQL means making a choice before you write your first query. Three main paths: install it locally on your OS, spin it up in Docker, or use a managed cloud service like Amazon RDS or Supabase. The wrong choice won’t break anything permanently — but it will create friction while you’re still learning the ropes.
Here’s how the three approaches break down:
- Native installation — Install PostgreSQL directly on your machine (Linux, macOS, Windows). Best for learning and local development.
- Docker — Run PostgreSQL in a container. Ideal for team projects where everyone needs identical environments.
- Managed cloud service — AWS RDS, Google Cloud SQL, Supabase, etc. The right call for production when you’d rather not manage the server yourself.
Pros and Cons of Each Approach
Native Installation
- Pros: Simple, direct access, zero overhead, easiest to learn with.
- Cons: Can conflict with other installed versions, harder to replicate across machines, manual upgrades.
Docker
- Pros: Isolated environment, easy to reset, reproducible across dev machines, version pinning.
- Cons: Requires Docker knowledge, slight overhead, data persistence needs explicit volume setup.
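If you go the Docker route later, a typical single-command setup looks like this. The container name, password, volume name, and image tag are all placeholders to adjust:

```bash
# Run PostgreSQL 16 in a container, with a named volume so data
# survives container restarts (the "explicit volume setup" above)
docker run -d --name myapp-db \
  -e POSTGRES_PASSWORD=securepassword \
  -e POSTGRES_DB=myapp \
  -p 5432:5432 \
  -v myapp_pgdata:/var/lib/postgresql/data \
  postgres:16
```

The -v flag is the explicit volume setup mentioned in the cons list; without it, your data disappears when the container is removed.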
Managed Cloud Service
- Pros: Automated backups, high availability, no infrastructure management.
- Cons: Costs money (AWS RDS starts around $15–25/month for the smallest instance), network latency for local development, less control over configuration.
Recommended Setup for Learning
New to PostgreSQL? Go with native installation on Ubuntu/Debian or Docker — in that order. Native wins for beginners because you’re talking to the database directly, without a container abstraction layer getting in the way.
I’ve run native PostgreSQL on Ubuntu in production for over three years. Stable, predictable, no surprises. Installing it directly teaches you how the database actually works — where the data files live (/var/lib/postgresql/), how the service starts, where the logs go — before you start wrapping it behind containers or cloud layers.
Once you’re solid on the fundamentals, Docker becomes the better choice for real team projects.
Implementation Guide
Step 1: Install PostgreSQL on Ubuntu/Debian
Open your terminal and run these commands:
# Update package list
sudo apt update
# Install PostgreSQL
sudo apt install -y postgresql postgresql-contrib
# Verify it's running
sudo systemctl status postgresql
You should see active (running) in the output. PostgreSQL also creates a default system user called postgres automatically during installation.
Step 2: Access the PostgreSQL Shell
Switch to the postgres system user, then open the interactive shell:
# Switch to postgres user
sudo -i -u postgres
# Open psql shell
psql
You’re now inside the psql prompt:
postgres=#
Four meta-commands you’ll reach for constantly:
- \l — list all databases
- \c dbname — connect to a database
- \dt — list tables in the current database
- \q — quit
Step 3: Create a Database and User
The postgres superuser is fine for quick testing, but every real project should have its own dedicated database and user — even locally:
-- Create a new database
CREATE DATABASE myapp;
-- Create a new user with a password
CREATE USER appuser WITH PASSWORD 'securepassword';
-- Grant all privileges on the database to that user
GRANT ALL PRIVILEGES ON DATABASE myapp TO appuser;
Then connect to it:
\c myapp
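One version-specific gotcha: since PostgreSQL 15, ordinary users no longer get CREATE privilege on the public schema by default, so appuser may not be able to create tables yet even with the database-level grant above. If you hit a permission error in the next step, grant schema access too:

```sql
-- Needed on PostgreSQL 15+; run while connected to myapp as postgres
GRANT ALL ON SCHEMA public TO appuser;
```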
Step 4: Create Your First Table
A simple users table to practice with:
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) NOT NULL UNIQUE,
    email VARCHAR(100) NOT NULL UNIQUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
What each part does:
- SERIAL — auto-incrementing integer. PostgreSQL generates the ID for you.
- PRIMARY KEY — uniquely identifies each row.
- NOT NULL — the column cannot be empty.
- DEFAULT CURRENT_TIMESTAMP — automatically fills in the current time on insert.
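SERIAL is the classic approach and works fine. Since PostgreSQL 10 there is also the SQL-standard identity syntax, which newer documentation tends to prefer. An equivalent sketch of the same table (an alternative, not something to run alongside the version above):

```sql
CREATE TABLE users (
    id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    username VARCHAR(50) NOT NULL UNIQUE,
    email VARCHAR(100) NOT NULL UNIQUE,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
```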
Step 5: Basic CRUD Operations
CRUD — Create, Read, Update, Delete. Four operations. You’ll use them in every project you ever build.
INSERT — Add Data
INSERT INTO users (username, email) VALUES
('alice', 'alice@example.com'),
('bob', 'bob@example.com'),
('charlie', 'charlie@example.com');
SELECT — Read Data
-- Get all users
SELECT * FROM users;
-- Get specific columns
SELECT username, email FROM users;
-- Filter with WHERE
SELECT * FROM users WHERE username = 'alice';
-- Sort results
SELECT * FROM users ORDER BY created_at DESC;
-- Limit results
SELECT * FROM users LIMIT 10;
UPDATE — Modify Data
-- Update a specific user's email
UPDATE users
SET email = 'alice.new@example.com'
WHERE username = 'alice';
-- Always use WHERE with UPDATE, otherwise you'll update every row
DELETE — Remove Data
-- Delete a specific user
DELETE FROM users WHERE username = 'charlie';
-- Same rule: always use WHERE, or you'll wipe the entire table
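A PostgreSQL nicety worth learning alongside UPDATE and DELETE: the RETURNING clause, which reports the affected rows in the same statement, so you can confirm what actually changed:

```sql
-- Works on DELETE, UPDATE, and INSERT alike
DELETE FROM users WHERE username = 'charlie' RETURNING id, email;
```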
Step 6: Connect from Python
In practice, you’ll rarely type SQL directly into psql. Your app code does it. Here’s how, using Python’s psycopg2 library:
pip install psycopg2-binary
import psycopg2
# Connect to PostgreSQL
conn = psycopg2.connect(
    host="localhost",
    database="myapp",
    user="appuser",
    password="securepassword"
)
cursor = conn.cursor()

# Insert a new user
cursor.execute(
    "INSERT INTO users (username, email) VALUES (%s, %s)",
    ("david", "david@example.com")
)
conn.commit()

# Fetch all users
cursor.execute("SELECT * FROM users")
rows = cursor.fetchall()
for row in rows:
    print(row)

# Always close the connection
cursor.close()
conn.close()
See those %s placeholders? Never swap them for Python string formatting. The %s syntax lets psycopg2 handle escaping — that’s what blocks SQL injection. Build queries by string concatenation instead, and one malicious input can wipe your entire database.
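To see concretely why, here is a tiny standalone demonstration (no database needed). The hostile input is invented for illustration:

```python
# A "username" crafted by an attacker (illustrative)
malicious = "x'; DROP TABLE users; --"

# Unsafe: string formatting splices the input straight into the SQL
unsafe = f"SELECT * FROM users WHERE username = '{malicious}'"
print(unsafe)
# The quote inside the input closes the string early, so
# DROP TABLE users becomes a second statement in the query text.

# Safe: pass the value separately and let the driver quote it:
# cursor.execute("SELECT * FROM users WHERE username = %s", (malicious,))
```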
Step 7: A Few Practical Tips
- Use transactions: Group related operations with BEGIN/COMMIT. If one step fails, none of them apply — critical for anything involving money or linked records.
- Index columns you filter by: CREATE INDEX idx_users_email ON users(email); on a table with 1 million rows, a query filtering by email can drop from ~800ms to under 5ms with a proper index.
- Read the query plan: Prefix any query with EXPLAIN ANALYZE to see exactly how PostgreSQL executes it. Invaluable for diagnosing slow queries before they hit production.
- Back up regularly: pg_dump myapp > backup.sql exports your database; psql myapp < backup.sql restores it. Simple, and it works.
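The transaction tip is worth a concrete sketch. The accounts table here is hypothetical, but the shape applies to any multi-step change:

```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;  -- or ROLLBACK; to undo both steps if anything went wrong
```

Either both updates land or neither does; no reader ever sees the money in flight.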
Where to Go Next
The basics above will get you surprisingly far. When they start feeling limiting, focus on JOINs (pulling related data from multiple tables in a single query), deeper indexing strategies, and transaction management.
After that: connection pooling with pgBouncer and replication for production setups. The official PostgreSQL docs at postgresql.org/docs are genuinely excellent — detailed, accurate, well-organized. Bookmark them from day one.

