The Database Overkill Problem
You’re building a small internal tool — a task tracker, a local config store, maybe a lightweight data pipeline. Out of habit, you reach for PostgreSQL or MySQL. Suddenly you’re spinning up a Docker container, creating users, configuring a connection pool, writing schema migrations… for something only three people will ever touch.
Wrong tool for the job. The issue isn’t missing features — it’s hauling server-grade infrastructure into a problem that doesn’t need it. SQLite exists for exactly these cases.
I’ve used MySQL, PostgreSQL, and MongoDB across different projects. Each has real strengths. But something kept catching me off guard early on: SQLite was often the right answer, and I kept skipping past it anyway.
Architecture: SQLite vs. Server-Based Databases
To use SQLite well, you need to understand how fundamentally different it is from MySQL or Postgres — not just in scale, but in design.
Server-Based Databases (MySQL, PostgreSQL)
- Require a running daemon process
- Use client-server communication (TCP or Unix socket)
- Designed for concurrent multi-user access
- Need separate installation, user management, and backup configuration
- Ideal for web apps with many simultaneous connections
SQLite
- Zero server — the database engine is a library linked directly into your app
- Entire database lives in a single .db file
- Access is per-process (with limited concurrent writes)
- Ships with Python, included in Android/iOS SDKs, embedded inside browsers
- Ideal for single-user apps, embedded systems, prototypes, local data storage
SQLite is not a “mini MySQL.” It’s a completely different class of tool. Comparing them head-to-head misses the point — they solve different problems.
Pros and Cons
Why SQLite works well
- No setup required — Python includes sqlite3 in the standard library. Nothing to install.
- Portable — the database is just a file. Copy it, move it, or commit it to version control if it’s small enough.
- Fast for reads — on small, read-heavy datasets, SQLite often beats Postgres because there’s no network round-trip overhead. A local query skips the TCP stack entirely.
- Extraordinarily reliable — SQLite’s test suite has over 92 million lines of test code. It runs in aircraft flight systems, medical devices, and billions of mobile apps. The reliability bar is genuinely high.
- ACID compliant — full transaction support: commits, rollbacks, savepoints, and foreign keys. FK enforcement is off by default but trivial to enable per connection.
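As a quick illustration of that transaction support, here’s a minimal sketch (the accounts table is invented for the example): an exception mid-transaction triggers a rollback, and the uncommitted change simply disappears.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)")
conn.execute("INSERT INTO accounts (balance) VALUES (100)")
conn.commit()

try:
    # This UPDATE opens a transaction...
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
    # ...then something goes wrong before we commit
    raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    conn.rollback()  # the uncommitted UPDATE is undone

balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(balance)  # 100 — the rollback discarded the change
```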
Where it falls short
- Concurrent writes — SQLite uses database-level write locking. Multiple writers queue up or throw “database is locked” errors under any real write load.
- No access control — anyone with file access can read the database. For multi-tenant apps, that’s a non-starter.
- Limited ALTER TABLE — older versions made schema changes painful. SQLite 3.35 (March 2021) improved this with proper DROP COLUMN support, but it still trails Postgres in schema flexibility.
- Not built for distributed systems — replication, clustering, automatic failover — none of that is SQLite’s territory.
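Before giving up on SQLite over the write-locking issue, two standard mitigations are worth knowing: a busy timeout (writers wait for the lock instead of failing instantly) and WAL journal mode (readers and a single writer can proceed concurrently). A sketch, with the file name being illustrative:

```python
import sqlite3

# timeout=30 makes this connection wait up to 30 seconds for a lock
# before raising "database is locked" (Python's default is 5 seconds)
conn = sqlite3.connect("myapp.db", timeout=30)

# WAL mode: readers no longer block the writer, and vice versa.
# The setting is persistent — it sticks to the database file.
conn.execute("PRAGMA journal_mode = WAL")

mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # "wal"
```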
When SQLite Is the Right Call
Here’s where it consistently earns its place:
- CLI tools that need persistent storage
- Desktop apps (Electron, PyQt, etc.)
- Local development environments (swap for Postgres in production)
- Data pipelines and ETL scripts storing intermediate results
- Test suites — fast, isolated, throwaway databases
- Prototyping before you know what database you actually need
A setup I reach for constantly: Django apps run against SQLite during development. Migrations finish in under a second, tests are fully isolated, and there’s no Docker dependency on the local machine. Once the schema stabilizes and we need real concurrency, a single config change flips to Postgres in staging and production. Django handles the rest.
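In Django terms, that swap is roughly the following sketch of settings.py — the database name, credentials, and the DJANGO_ENV variable are all placeholders, not a prescription:

```python
# settings.py (sketch) — names and env-var convention are placeholders
import os

if os.environ.get("DJANGO_ENV") == "production":
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "myapp",
            "USER": "myapp",
            "PASSWORD": os.environ["DB_PASSWORD"],
            "HOST": "localhost",
            "PORT": "5432",
        }
    }
else:
    # Development: a single file on disk, no server, no Docker
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": "db.sqlite3",
        }
    }
```

All query code goes through the ORM, so nothing else changes between the two branches.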
Implementation Guide
Basic usage in Python (no dependencies)
Python’s sqlite3 module is all you need:
import sqlite3
# Opens existing DB or creates a new one
conn = sqlite3.connect("myapp.db")
cursor = conn.cursor()
# Create table
cursor.execute("""
CREATE TABLE IF NOT EXISTS tasks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT NOT NULL,
done INTEGER DEFAULT 0,
created_at TEXT DEFAULT (datetime('now'))
)
""")
conn.commit()
# Insert
cursor.execute("INSERT INTO tasks (title) VALUES (?)", ("Write SQLite tutorial",))
conn.commit()
# Query
cursor.execute("SELECT * FROM tasks WHERE done = 0")
rows = cursor.fetchall()
for row in rows:
    print(row)
conn.close()
Notice the ? placeholder. Never use f-strings to build SQL queries — parameterized queries block SQL injection even in local tools. Good habits don’t cost anything.
Enable foreign key enforcement
Foreign key support is built into SQLite, but disabled by default for backward compatibility. Enable it at the start of each connection:
conn = sqlite3.connect("myapp.db")
conn.execute("PRAGMA foreign_keys = ON")
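To confirm the pragma is doing its job, here’s a small sketch (the projects/tasks schema is invented for the example): with enforcement on, inserting a child row that references a missing parent raises sqlite3.IntegrityError.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE projects (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE tasks (
        id INTEGER PRIMARY KEY,
        title TEXT NOT NULL,
        project_id INTEGER REFERENCES projects(id)
    )
""")

try:
    # No project with id 999 exists — with the pragma on, this is rejected
    conn.execute("INSERT INTO tasks (title, project_id) VALUES (?, ?)", ("orphan", 999))
    caught = False
except sqlite3.IntegrityError:
    caught = True

print(caught)  # True — the orphan insert was rejected
```

Without the pragma, the same insert would silently succeed, which is exactly the kind of quiet data corruption you want to rule out.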
Use the context manager
SQLite connections work as Python context managers. Successful block? Auto-commit. Exception? Auto-rollback. One caveat: the with block manages transactions only; it does not close the connection, so call conn.close() yourself when you’re done:
with sqlite3.connect("myapp.db") as conn:
    conn.execute("INSERT INTO tasks (title) VALUES (?)", ("Deploy to prod",))
    # auto-commit on success, auto-rollback on exception
Dict-style row access with row_factory
Rows come back as tuples by default — workable, but not great to read. Set row_factory to access columns by name instead:
conn = sqlite3.connect("myapp.db")
conn.row_factory = sqlite3.Row
cursor = conn.cursor()
cursor.execute("SELECT * FROM tasks")
rows = cursor.fetchall()
for row in rows:
    print(row["title"], row["done"])  # column name access
Inspecting databases from the CLI
Got the sqlite3 CLI installed? You can poke around any database without writing a line of Python:
# Open a database
sqlite3 myapp.db
# Inside the SQLite shell:
.tables # list all tables
.schema tasks # show CREATE TABLE for tasks
SELECT * FROM tasks LIMIT 10;
.quit
# Install on Ubuntu/Debian
sudo apt install sqlite3
# macOS ships with it already
sqlite3 --version
SQLite with SQLAlchemy
SQLAlchemy is a widely used Python SQL toolkit and ORM. One of its practical advantages: switching between SQLite and Postgres is a single connection string change, no code rewrite:
# SQLite (local/dev)
DATABASE_URL = "sqlite:///./myapp.db"
# PostgreSQL (production)
DATABASE_URL = "postgresql://user:pass@localhost/myapp"
# In SQLAlchemy
from sqlalchemy import create_engine
engine = create_engine(DATABASE_URL)
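A slightly fuller sketch of the round trip with the ORM (the Task model is invented for the example; session.query is the long-stable 1.x-style API and still works in 2.x):

```python
from sqlalchemy import Boolean, Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Task(Base):
    __tablename__ = "tasks"
    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)
    done = Column(Boolean, default=False)

# Swapping to Postgres means changing only this URL
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Task(title="Write SQLite tutorial"))
    session.commit()
    open_tasks = [t.title for t in session.query(Task).filter_by(done=False)]

print(open_tasks)  # ['Write SQLite tutorial']
```

The model, queries, and session handling are identical against either backend; only the engine URL differs.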
In-memory SQLite for tests
Pass ":memory:" as the database name and SQLite creates the entire database in RAM. Nothing written to disk, nothing to clean up:
import sqlite3
def get_test_db():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    return conn

def test_insert_user():
    conn = get_test_db()
    conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
    row = conn.execute("SELECT name FROM users").fetchone()
    assert row[0] == "alice"
Every test run starts clean. No teardown, no leftover state, no flaky failures from shared database rows.
When to Move On from SQLite
The cracks start showing when:
- Multiple processes need to write to the database simultaneously
- Write load is high enough to cause frequent lock contention
- You need row-level access control
- You’re deploying to a container environment where the filesystem isn’t persistent
PostgreSQL is the natural upgrade path. If you built on SQLAlchemy or Django ORM from the start, the migration is mostly one config value. The query code usually stays the same.
For everything else — local tools, scripts, prototypes, embedded apps, test suites — SQLite is the right default. It keeps you from building infrastructure a project doesn’t need.

