Speed Up Your CI/CD Pipeline with Remote Caching and Docker BuildKit

Your Pipeline Is Slow — And It’s Costing You More Than You Think

You push a small config change. Maybe you updated an environment variable or fixed a typo in a Dockerfile comment. Then you wait. Five minutes. Eight minutes. Sometimes ten. Every engineer on the team hits that same wall, multiple times a day.

I’ve been there. On one project I joined, the CI pipeline was rebuilding the entire Docker image from scratch on every single commit — downloading all npm dependencies, recompiling native modules, everything. Nobody had set up caching because “it worked, so we left it.” Once we added remote caching with BuildKit, average build times dropped from 9 minutes to under 90 seconds. That’s the kind of change that makes developers actually enjoy pushing code again.

This guide walks you through exactly how to do that — from understanding why builds are slow in the first place to wiring up a working remote cache in your pipeline.

Core Concepts: What’s Actually Happening During a Docker Build

Layer Caching and Why It Breaks in CI

Docker builds work in layers. Each instruction in your Dockerfile (RUN, COPY, ADD) creates a new layer. Docker caches these layers on disk — so if nothing changed in that layer or any layer before it, Docker skips rerunning the instruction and uses the cached result instead.
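
You can see this locally: build the same image twice in a row. With BuildKit enabled, the second run finishes almost instantly and prints CACHED for every step.

# First run executes every instruction; an immediate rebuild serves each step from cache
docker build -t demo .
docker build -t demo .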

The catch: most CI runners are ephemeral. Every time GitHub Actions or GitLab CI spins up a runner, it starts with a clean slate. No local cache. Every build is a cold build, and Docker redoes everything from scratch.

What Docker BuildKit Brings to the Table

BuildKit is Docker’s next-generation build engine, available since Docker 18.09. It has several advantages over the classic build engine:

  • Parallel execution of independent build stages
  • Better cache management and more granular cache control
  • Support for exporting and importing cache from remote storage
  • Secrets and SSH forwarding without baking credentials into layers
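
That last point deserves a quick illustration: a secret is mounted only for the duration of a single RUN instruction and never ends up in a layer. A minimal sketch (the npmrc id and file paths are illustrative):

# In the Dockerfile: the secret exists only while this RUN executes
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci

# At build time, pass the secret in from a local file
docker buildx build --secret id=npmrc,src=$HOME/.npmrc -t my-app .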

The feature we care about here is cache export/import — specifically, pushing build cache to a registry and pulling it back on the next run.

Remote Caching: The Missing Piece

Remote caching means storing your Docker build cache somewhere persistent — usually a container registry like Docker Hub, Amazon ECR, or GitHub Container Registry (GHCR). When a CI runner starts a new build, it fetches that cache before building. Unchanged layers get skipped entirely.

BuildKit supports several cache backends:

  • registry — Stores cache as OCI image layers in a container registry (most common)
  • local — Stores cache on the local filesystem (useful for self-hosted runners with persistent volumes; see the sketch after this list)
  • s3 — Stores cache in an S3-compatible bucket (via BuildKit’s external cache drivers)
  • gha — Native GitHub Actions cache (no registry needed, tight integration)
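
For instance, a self-hosted runner with a persistent volume can use the local backend with nothing more than a directory path (the /var/cache/buildkit path here is illustrative):

# Reuse cache from a persistent directory and write the updated cache back
docker buildx build \
  --cache-from type=local,src=/var/cache/buildkit \
  --cache-to type=local,dest=/var/cache/buildkit,mode=max \
  -t my-app .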

Hands-On Practice: Wiring It All Up

Step 1 — Enable BuildKit

First, make sure BuildKit is enabled. Docker 23.0+ turns it on by default. For older versions, set the environment variable:

export DOCKER_BUILDKIT=1

Or set it permanently in your Docker daemon config (/etc/docker/daemon.json):

{
  "features": {
    "buildkit": true
  }
}
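
After editing the daemon config, restart Docker so the setting takes effect (systemd shown; adjust for your init system):

sudo systemctl restart docker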

Step 2 — Write a Cache-Friendly Dockerfile

Before touching remote caching, get your Dockerfile structure right. The rule is simple: put things that change less frequently earlier in the file.

Here’s a typical Node.js app with bad layer ordering:

# Bad: copies source code before installing dependencies
FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm install
CMD ["node", "index.js"]

Every time you change a single source file, the COPY . . layer is invalidated. That means npm install runs again from scratch — every single time.

The fix: separate dependency installation from your source copy.

# Good: dependencies cached separately from source code
FROM node:20-alpine
WORKDIR /app

# Copy package files first
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Source code changes here, but npm ci is already cached
COPY . .
CMD ["node", "index.js"]

Now npm ci only reruns when package.json or package-lock.json changes. A source file edit doesn’t touch the dependency layer at all.
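
A related tip: COPY . . is invalidated whenever anything in the build context changes, so keep files Docker doesn’t need out of the context with a .dockerignore. A typical starting point (adjust the entries for your project):

# .dockerignore: keep these out of the build context
node_modules
.git
dist
.env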

Step 3 — Build with Cache Export

Use BuildKit’s --cache-to and --cache-from flags to push and pull cache from a registry. Replace your-registry/your-image with your actual image path:

# Build and export cache to registry
docker buildx build \
  --cache-from type=registry,ref=your-registry/your-image:cache \
  --cache-to type=registry,ref=your-registry/your-image:cache,mode=max \
  -t your-registry/your-image:latest \
  --push \
  .

Two flags to understand:

  • --cache-from — pulls existing cache from the registry before building
  • --cache-to mode=max — exports all intermediate layers to the cache, not just the final image. More data stored, but maximum reuse on the next build.
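
One prerequisite worth flagging: the default docker driver can’t export cache to a registry, so create a builder that uses the docker-container driver first (the builder name is arbitrary):

# Registry cache export requires the docker-container driver
docker buildx create --name ci-builder --driver docker-container --use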

Step 4 — GitHub Actions Integration (Full Example)

Here’s a complete GitHub Actions workflow using GHCR as the cache backend:

name: Build and Push

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
          cache-from: type=registry,ref=ghcr.io/${{ github.repository }}:buildcache
          cache-to: type=registry,ref=ghcr.io/${{ github.repository }}:buildcache,mode=max

The first build is always cold — no cache exists yet, so it runs the full time. From the second build onward, unchanged layers come straight from the registry. You’ll see the difference immediately.
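
If you build more than one branch, consider scoping the cache ref per branch so feature branches don’t overwrite main’s cache. A sketch of that pattern (the buildcache-* tag naming is just a convention):

          cache-from: |
            type=registry,ref=ghcr.io/${{ github.repository }}:buildcache-${{ github.ref_name }}
            type=registry,ref=ghcr.io/${{ github.repository }}:buildcache-main
          cache-to: type=registry,ref=ghcr.io/${{ github.repository }}:buildcache-${{ github.ref_name }},mode=max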

Step 5 — Alternative: GitHub Actions Cache Backend

Don’t want to manage a registry for cache storage? GitHub Actions has a native cache backend built in. No registry credentials needed for the cache layer:

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max

The gha backend uses GitHub’s built-in cache storage — 10 GB per repo. Setup is simpler, but there are two trade-offs: cache gets evicted after 7 days of inactivity, and it can be slower than registry-based caching for large images.
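
If several branches or workflows share the repository cache, the gha backend also accepts a scope parameter to keep their entries separate (scoping by branch name here is just one option):

          cache-from: type=gha,scope=${{ github.ref_name }}
          cache-to: type=gha,mode=max,scope=${{ github.ref_name }}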

Step 6 — Multi-Stage Builds for Even Faster Results

Combine remote caching with multi-stage builds: smaller final images, maximum cache reuse.

# Stage 1: Install dependencies (cached aggressively)
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Stage 2: Build the application
FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

# Stage 3: Production image (runtime artifacts only)
FROM node:20-alpine AS runner
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]

With mode=max, BuildKit caches all three stages separately. Change your source code but not package.json? The deps stage is pulled from cache and npm ci never runs.

Verifying the Cache Is Working

Watch the build output. Cached layers look like this:

#5 CACHED
#6 CACHED
#7 [builder 3/4] RUN npm run build  0.3s  # Only this stage ran

Steps marked CACHED were pulled from the remote cache and skipped entirely. That’s where your time goes from 9 minutes to 90 seconds.
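
If the condensed output makes this hard to read, force the full log. A quick way to count cache hits (assuming a Unix shell; buildx writes progress to stderr):

# Plain progress output goes to stderr; grep it for cache hits
docker buildx build --progress=plain . 2>&1 | grep -c CACHED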

Real Numbers: What to Actually Expect

Results vary based on image size and how often dependencies change. But across a handful of Node.js and Python projects I’ve worked on, the pattern is consistent:

  • Cold build (no cache): 5–12 minutes for a typical Node.js or Python app
  • Warm build (source-only change): 30–90 seconds
  • Warm build (dependency change): 2–4 minutes (only the dependency stage rebuilds)

The biggest gains come from projects with heavy dependency installation — anything where npm install, pip install -r requirements.txt, or go mod download alone takes more than 60 seconds. That’s the step remote caching eliminates on most commits.
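
A complementary trick for those heavy steps is a BuildKit cache mount, which persists the package manager’s download cache across builds even when the layer cache misses. Note that cache mounts live on the builder itself, not in the exported registry cache, so they mostly pay off on persistent or self-hosted builders. A sketch for npm (/root/.npm is npm’s default cache location when building as root):

# Persist npm's download cache across builds on this builder
RUN --mount=type=cache,target=/root/.npm \
    npm ci --omit=dev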

Ship It: Your Five-Step Checklist

Slow CI pipelines don’t just waste time — they break focus, delay feedback, and quietly erode the habit of pushing small, frequent commits. Remote caching with Docker BuildKit is one of the highest-ROI changes you can make to a pipeline, and it takes surprisingly little code.

Here’s the checklist:

  1. Restructure your Dockerfile so dependencies are installed before source code is copied
  2. Enable Docker Buildx in your CI environment
  3. Add cache-from and cache-to flags pointing to a registry or GHA cache
  4. Use mode=max to cache all intermediate layers
  5. Consider multi-stage builds to keep the final image small

Once this is in place, most commits build in under two minutes. The ones that take longer are the ones where something heavy actually changed — which is exactly how it should work.
