The Authentication Mess That Breaks Teams
You start with one app. Users register, log in, done. Then a second service spins up — a separate admin dashboard, an internal API, maybe a customer-facing portal. Each one gets its own user table, its own login form, its own password reset flow.
Six months later, the support inbox fills up with “I forgot which password I used for X.” Developers duplicate auth code across every service. A security audit flags that three of your apps store passwords with MD5. One team uses JWT, another uses sessions, another… just doesn’t expire tokens at all.
This isn’t a discipline problem. It’s an architecture problem.
Root Cause: Auth Logic Scattered Across Services
When each service owns its own authentication, you end up with:
- Multiple user databases that drift out of sync
- No central place to enforce password policies or MFA
- Users forced to log in separately to each app
- No audit trail of who accessed what, when
- Painful revocation — disabling one account means touching every system
Authentication is infrastructure. You wouldn’t give every microservice its own dedicated database server. Giving each one its own auth stack is the same mistake, just harder to notice until something goes wrong.
Three Options, One Clear Winner
Most teams end up choosing between these approaches. Here’s the honest tradeoff on each:
Option 1: Roll Your Own SSO
Building a shared auth service in-house gives you full control. It also takes months. OAuth 2.0 and OpenID Connect have dozens of edge cases — token introspection, refresh flows, PKCE — and someone has to maintain whatever you build, forever. Most teams that try this ship a working v1 and then quietly stop improving it.
Option 2: Auth-as-a-Service (Auth0, Okta, Cognito)
Cloud-hosted providers handle everything out of the box: MFA, social login, SAML, SCIM provisioning. The tradeoffs are real, though. Auth0 can run $1,000+/month at 10,000 MAU. Cognito has rough edges around custom token claims. Your user data lives on someone else’s servers. For regulated industries or privacy-conscious teams, that last point alone is a dealbreaker.
Option 3: Self-Hosted Keycloak
Keycloak is Red Hat’s open-source identity platform. OAuth 2.0, OpenID Connect, SAML 2.0, social login, MFA, user federation (LDAP/Active Directory), fine-grained authorization — all included, no licensing fees. You run it on your own infrastructure. Your data doesn’t leave.
For teams already managing their own servers, Keycloak on Docker is the most practical path. You pay the setup cost once.
Deploying Keycloak on Docker
What follows gets you a production-capable Keycloak instance running via Docker Compose, backed by PostgreSQL.
1. Create the Docker Compose File
```bash
mkdir keycloak && cd keycloak
nano docker-compose.yml
```
```yaml
version: '3.8'

services:
  postgres:
    image: postgres:16
    container_name: keycloak_db
    restart: unless-stopped
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    networks:
      - keycloak_net

  keycloak:
    image: quay.io/keycloak/keycloak:24.0
    container_name: keycloak
    restart: unless-stopped
    command: start
    environment:
      KC_DB: postgres
      KC_DB_URL: jdbc:postgresql://postgres:5432/keycloak
      KC_DB_USERNAME: keycloak
      KC_DB_PASSWORD: ${DB_PASSWORD}
      KC_HOSTNAME: auth.yourdomain.com
      KC_PROXY_HEADERS: xforwarded
      KC_HTTP_ENABLED: "true"
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: ${ADMIN_PASSWORD}
    ports:
      # Bind to loopback only; the reverse proxy handles public traffic
      - "127.0.0.1:8080:8080"
    depends_on:
      - postgres
    networks:
      - keycloak_net

volumes:
  postgres_data:

networks:
  keycloak_net:
```
One note on `KC_PROXY_HEADERS: xforwarded`: this replaces the older `KC_PROXY: edge` setting, which is deprecated as of Keycloak 24. Combined with `KC_HTTP_ENABLED: "true"`, it tells Keycloak to trust the `X-Forwarded-*` headers from your reverse proxy — so it knows the real client IP and protocol.
2. Set Up Environment Variables
Never hardcode passwords in the Compose file. Create a `.env` file alongside it:
```bash
nano .env
```

```
DB_PASSWORD=your_strong_db_password_here
ADMIN_PASSWORD=your_strong_admin_password_here
```
For generating those passwords, I use the generator at toolcraft.app/en/tools/security/password-generator — it runs entirely in the browser with no server-side calls. That matters when you’re generating credentials for identity infrastructure.
3. Start Keycloak
```bash
docker compose up -d
docker compose logs -f keycloak
```

Watch for a line like `Keycloak 24.0.x on JVM (powered by Quarkus) started`. First boot takes 30–60 seconds — Keycloak runs database migrations on startup.
4. Create Your First Realm
A realm is an isolated namespace. Users, clients, and roles in one realm have no visibility into another. Think of it as a tenant boundary — or an org-level container.
- Open `http://localhost:8080` (or your domain)
- Log in with your admin credentials
- Hover over master in the top-left → click Create realm
- Name it something like `myapp` → click Create
Keep your actual applications out of the master realm. Master is reserved for Keycloak administration only.
5. Register an Application as a Client
Any application that offloads authentication to Keycloak is called a client.
- In your realm, go to Clients → Create client
- Set Client type to `OpenID Connect`
- Set Client ID to your app name (e.g., `webapp`)
- Enable Client authentication if this is a backend service (confidential client)
- Set Valid redirect URIs: e.g., `https://yourapp.com/callback` (an exact path is safer than a broad wildcard like `https://yourapp.com/*`)
- Save → open the Credentials tab and copy the client secret
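The client you just registered is what drives the standard authorization code flow: your app redirects the browser to Keycloak's authorize endpoint, then exchanges the returned code for tokens. A minimal sketch of building that redirect URL — the realm name, client ID, and callback URI below are the example values from this guide, so substitute your own:

```python
from urllib.parse import urlencode

# Example values from this guide -- substitute your own.
BASE = "https://auth.yourdomain.com"
REALM = "myapp"

def authorization_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Build the OIDC authorization-code-flow URL for a Keycloak realm."""
    params = urlencode({
        "response_type": "code",   # authorization code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid",         # request an ID token alongside the code
        "state": state,            # CSRF protection: verify this on the callback
    })
    return f"{BASE}/realms/{REALM}/protocol/openid-connect/auth?{params}"

url = authorization_url("webapp", "https://yourapp.com/callback", "xyz123")
print(url)
```

In practice an OIDC client library builds this URL (and handles PKCE) for you; the point is that everything it needs is the realm URL plus what you configured in the steps above.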
6. Create a Test User
```bash
# Alternatively, use the admin UI: Users → Add user
# Note: ${ADMIN_PASSWORD} must be set in your shell here -- Compose's .env
# file is not automatically exported to your host environment.
docker exec -it keycloak /opt/keycloak/bin/kcadm.sh \
  create users \
  -r myapp \
  -s username=testuser \
  -s enabled=true \
  --no-config \
  --server http://localhost:8080 \
  --realm master \
  --user admin \
  --password "${ADMIN_PASSWORD}"
```
Then assign a password: Users → testuser → Credentials → Set password.
7. Test the Auth Flow
Keycloak exposes a standard OIDC discovery document — every compliant library reads this automatically:
```bash
curl https://auth.yourdomain.com/realms/myapp/.well-known/openid-configuration
```
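What a library pulls out of that document can be sketched with a trimmed sample. The field names below are standard OIDC discovery metadata and the URLs follow Keycloak's `/realms/<realm>/` layout, but the values here are illustrative — fetch the real document from the `.well-known` URL above:

```python
import json

# A trimmed, illustrative sample of a Keycloak discovery document.
discovery_json = """{
  "issuer": "https://auth.yourdomain.com/realms/myapp",
  "authorization_endpoint": "https://auth.yourdomain.com/realms/myapp/protocol/openid-connect/auth",
  "token_endpoint": "https://auth.yourdomain.com/realms/myapp/protocol/openid-connect/token",
  "jwks_uri": "https://auth.yourdomain.com/realms/myapp/protocol/openid-connect/certs"
}"""

doc = json.loads(discovery_json)
# A client library reads these fields to know where to send users to log in,
# where to exchange codes for tokens, and where to fetch the signing keys.
token_endpoint = doc["token_endpoint"]
jwks_uri = doc["jwks_uri"]
print(token_endpoint)
```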
To request a token directly (useful for API testing):
```bash
# Requires "Direct access grants" to be enabled on the client
# (Capability config) -- fine for testing, avoid it in production flows.
curl -X POST \
  https://auth.yourdomain.com/realms/myapp/protocol/openid-connect/token \
  -d 'grant_type=password' \
  -d 'client_id=webapp' \
  -d 'client_secret=YOUR_CLIENT_SECRET' \
  -d 'username=testuser' \
  -d 'password=testpassword'
```
The response includes an `access_token` (a signed JWT), a `refresh_token`, and expiry durations (`expires_in` and `refresh_expires_in`, in seconds). Backend services validate the access token against Keycloak's public keys — no database lookup needed, no round trip to Keycloak on every request.
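That stateless check is worth seeing concretely. Keycloak signs access tokens with RS256 by default, so real validation means verifying the signature against the realm's public keys (fetched from the `jwks_uri`) plus the expiry and audience claims — a library job. To keep this sketch self-contained with only the standard library, it uses HS256 (a shared secret) instead of RS256; the shape of the validation is the same:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(claims: dict, secret: bytes) -> str:
    """Mint a JWT the way an issuer would (HS256 for illustration only)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_hs256(token: str, secret: bytes) -> dict:
    """Stateless validation: check signature and expiry, no DB lookup."""
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

secret = b"demo-shared-secret"  # with RS256 this is Keycloak's keypair instead
token = sign_hs256({"sub": "testuser", "exp": time.time() + 300}, secret)
claims = verify_hs256(token, secret)
print(claims["sub"])  # -> testuser
```

Because the signature and expiry travel inside the token, any service holding the realm's public key can accept or reject requests on its own — which is exactly why no per-request call to Keycloak is needed.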
Putting Nginx in Front
The Compose file sets KC_PROXY_HEADERS: xforwarded, which tells Keycloak it sits behind a reverse proxy. Your Nginx config:
```nginx
server {
    listen 443 ssl;
    server_name auth.yourdomain.com;

    ssl_certificate     /etc/letsencrypt/live/auth.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/auth.yourdomain.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
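One common snag worth knowing about, though it is an optional tweak rather than part of the stock config: Keycloak's admin console and some OIDC responses carry large headers, and nginx's default proxy buffers can reject them with 502 errors ("upstream sent too big header" in the error log). If you hit that, raising the buffers inside the `location` block usually resolves it:

```nginx
# Only needed if nginx logs "upstream sent too big header" errors.
proxy_buffer_size        128k;
proxy_buffers            4 256k;
proxy_busy_buffers_size  256k;
```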
Enabling MFA
This is where centralization actually pays off. Adding MFA across every app in a realm takes about 30 seconds:
- Go to your realm → Authentication → Policies
- Under OTP Policy, configure the TOTP algorithm and window
- In Flows, edit the Browser flow to make OTP required
Done. Every application connected to this realm now enforces MFA on login. Zero changes to application code.
What Actually Changes Day-to-Day
Once this is running, here’s the concrete difference:
- Single sign-on: Log in once, access every app in the realm — no re-authentication between services
- Instant revocation: Disable an account in Keycloak and that user loses access to every connected application immediately, not eventually
- Consistent security policies: Password complexity, MFA requirements, session timeouts — all configured once, enforced everywhere
- Audit trail: Every login, logout, and token exchange is logged in one place
- Social login: Add Google or GitHub login to any app without touching application code
The setup is an afternoon of work, at most. It pays back quickly — fewer support tickets, no duplicated auth logic across services, and security policies you can actually audit. If you’re running more than two applications that share a user base, centralized IAM stops being optional. It’s just what the infrastructure should look like.
