The Growing Pains of Image Management
When you’re first building an app, Docker image management is usually the last thing on your mind. Teams typically start by pushing images to public repositories or sharing a single Docker Hub account. This works when you have two developers and three microservices. But things break fast as you scale. Once you’re juggling 50 microservices across Dev, Staging, and Production, that casual approach becomes a massive security risk.
Last year, my team hit a wall. We were constantly throttled by Docker Hub’s 200-pulls-per-6-hours rate limit, which brought our CI/CD pipeline to a standstill. Even worse, we were flying blind. We had no idea if our base images contained critical vulnerabilities. Pulling proprietary code from a public cloud registry also didn’t sit right with our compliance officers. We needed a secure, high-performance vault for our containers that lived inside our own network.
We migrated to Harbor, an open-source registry that treats security as a first-class citizen. After six months of pushing over 100 builds a day in a high-traffic production environment, the results speak for themselves. It is stable, fast, and gives us total ownership of our data.
Why Harbor Wins
Cloud-native solutions like AWS ECR or Google Artifact Registry are fine for some. However, Harbor offers a level of control that cloud providers can’t match. It isn’t just a storage bucket; it is a full-featured management platform.
Granular Access Control
Harbor organizes images into projects, which makes managing permissions simple. I configured our CI/CD bots with push/pull rights, while developers only get pull access to production. For development projects, they have full control. It hooks directly into Keycloak via OIDC, so we don’t have to manage a separate user database.
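For CI bots specifically, Harbor's robot accounts are the right tool: project-scoped credentials with only the permissions you grant, independent of any human user. A sketch of how a pipeline might authenticate (the account name `robot$core+ci-builder` and the hostname are placeholders for your own setup):

```shell
# Log in with a project-scoped robot account instead of a personal login.
# The name and secret come from the project's "Robot Accounts" tab in Harbor;
# keep the secret in your CI system's secret store, never in the repo.
echo "$ROBOT_SECRET" | docker login harbor.example.com \
  -u 'robot$core+ci-builder' --password-stdin
```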
Automated Security with Trivy
Security shouldn’t be manual. Harbor uses the Trivy scanner to pick apart every layer of your images for known CVEs. We configured a hard rule: if an image has even one ‘Critical’ vulnerability, Harbor blocks any attempt to pull it. This ensures that images with known exploitable flaws don’t make it into our Kubernetes clusters.
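Scan results are also queryable programmatically, which is handy for dashboards or pipeline gates. A hedged sketch against Harbor's v2 REST API (hostname, project, repository, and credentials are placeholders; verify the path against the Swagger UI your release ships):

```shell
# Fetch the vulnerability scan overview for one artifact:
# GET /api/v2.0/projects/{project}/repositories/{repo}/artifacts/{reference}
curl -s -u "admin:$HARBOR_PASSWORD" \
  "https://harbor.example.com/api/v2.0/projects/core/repositories/api/artifacts/latest?with_scan_overview=true"
```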
Smart Retention and Syncing
Storage costs add up fast. To keep our disks from filling up, I set up a policy to auto-delete development images older than 14 days. We keep production tags forever. If you run multiple data centers, Harbor can also replicate images between them to ensure low-latency pulls for your regional clusters.
Hands-on Implementation
If you have wrestled with Docker Compose before, you will find Harbor’s setup familiar. For production, skip the small instances. Use a dedicated VM with at least 8GB of RAM and a fast SSD for the image layers.
1. Prepare the Environment
Check that your server has Docker and Docker Compose ready to go. Don’t try to skimp on resources; Harbor runs several sidecar containers for logging and database management.
# Check your versions
docker --version
docker compose version   # Compose v2; older setups may still use `docker-compose --version`
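If you script your provisioning, it pays to fail fast when a host is too old. A minimal sketch of a version gate (the minimum version here is an assumption; check the requirements for your Harbor release):

```shell
# version_ge A B: succeeds when dot-separated version A >= B
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# In a real script, parse this from `docker --version` on the host.
docker_ver="24.0.7"

if version_ge "$docker_ver" "20.10.0"; then
  echo "Docker $docker_ver OK"
else
  echo "Docker $docker_ver too old" >&2
fi
```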
2. Fetch and Configure
Download the latest stable release. I currently use v2.10.0 for its stability and improved UI.
wget https://github.com/goharbor/harbor/releases/download/v2.10.0/harbor-online-installer-v2.10.0.tgz
tar xvzf harbor-online-installer-v2.10.0.tgz
cd harbor
Your main task is editing harbor.yml. You must set the hostname correctly, or your Docker clients will fail to connect. Don’t even think about running this over HTTP: modern Docker versions will reject the connection unless you explicitly list the host under insecure-registries, which you should never do in production.
cp harbor.yml.tmpl harbor.yml
vi harbor.yml
Ensure your certificates are in place. Using Let’s Encrypt is the easiest way to handle this for a production-ready domain.
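If you go the Let's Encrypt route, certbot's standalone mode is the quickest option on a fresh VM (the domain is a placeholder; port 80 must be free while the HTTP-01 challenge runs):

```shell
# Issue a certificate for the registry hostname.
# certbot binds port 80 for the challenge, so stop anything listening there first.
sudo certbot certonly --standalone -d harbor.example.com
```

Certificates land under /etc/letsencrypt/live/, which is where the harbor.yml paths below point.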
# Essential harbor.yml settings
hostname: harbor.example.com
https:
  port: 443
  certificate: /etc/letsencrypt/live/harbor.example.com/fullchain.pem
  private_key: /etc/letsencrypt/live/harbor.example.com/privkey.pem
harbor_admin_password: UseAStrongPasswordHere
3. The Big Launch
Run the installer and always include Trivy for scanning. Note that older guides also pass --with-notary for image signing, but Notary support was removed in Harbor v2.6; recent releases rely on Cosign-based signing instead.
sudo ./install.sh --with-trivy
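Before pointing CI at the new registry, it's worth confirming that every component actually came up. A quick sanity check (the health endpoint path assumes the v2 API; the hostname is a placeholder):

```shell
# All Harbor containers should report a "healthy" status
docker compose ps

# Harbor's aggregate health endpoint reports per-component status as JSON
curl -s https://harbor.example.com/api/v2.0/health
```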
4. Wiring Harbor into Your Pipeline
Once Harbor is live, you need to automate the flow. In our GitLab CI setup, we use a service account rather than individual developer logins. This keeps our credentials safe and our logs clean.
Here is a snippet for a standard build-and-push job:
build_and_push:
  stage: build
  script:
    - echo "$REGISTRY_PASSWORD" | docker login $REGISTRY_URL -u "$REGISTRY_USER" --password-stdin
    - docker build -t $REGISTRY_URL/core/api:$CI_COMMIT_SHORT_SHA .
    - docker push $REGISTRY_URL/core/api:$CI_COMMIT_SHORT_SHA
    - docker tag $REGISTRY_URL/core/api:$CI_COMMIT_SHORT_SHA $REGISTRY_URL/core/api:latest
    - docker push $REGISTRY_URL/core/api:latest
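On the Kubernetes side, your clusters also need credentials to pull from the private registry. One common approach is a standard docker-registry pull secret (the secret name, namespace, and robot account below are placeholders; as above, prefer a robot account over a human login):

```shell
# Create a pull secret; pods reference it via imagePullSecrets in their spec
kubectl create secret docker-registry harbor-pull \
  --docker-server=harbor.example.com \
  --docker-username='robot$core+puller' \
  --docker-password="$ROBOT_SECRET" \
  -n production
```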
5. The ‘Scan on Push’ Safety Net
Enable “Scan on Push” in your project settings immediately. This forces Harbor to scan the image the second it arrives. If Trivy finds a critical vulnerability, the security team gets an alert, and we stop the deployment before it ever touches a server. It’s an automated gatekeeper that never sleeps.
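If you provision projects with automation rather than through the UI, the same toggle is exposed as project metadata. A hedged sketch against the v2 project-metadata endpoint (the exact path and payload shape may differ between releases; double-check against your instance's Swagger UI):

```shell
# Turn on scan-on-push for the 'core' project via project metadata
curl -s -X PUT -u "admin:$HARBOR_PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"auto_scan": "true"}' \
  "https://harbor.example.com/api/v2.0/projects/core/metadatas/auto_scan"
```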
Final Thoughts
Moving to Harbor was the best infrastructure move we made this year. It killed our dependency on external rate limits and gave us a professional-grade security posture. While managing certificates and storage takes some effort, the peace of mind is worth every second of configuration. If you are managing more than 10 microservices, stop gambling with public registries. A private registry is the foundation of a mature DevOps culture.

