The Frustration of YAML Hell
I once spent an entire weekend migrating a mid-sized microservice from GitHub Actions to GitLab CI. It should have been simple. Instead, I lost two full days translating proprietary syntax, chasing phantom environment variables, and waiting for runners to fail. It was the classic ‘commit, push, wait, fail’ loop that every DevOps engineer hates.
Traditional CI/CD tools treat your pipeline as static configuration. You write shell commands as strings in a YAML file that only the cloud runner can interpret. Testing that logic locally is nearly impossible without tools like act, which often fail to replicate the production environment accurately. Dagger fixes this by shifting the logic from configuration to code.
Dagger is a programmable engine that lets you write pipelines in languages you already know, like Python, Go, or TypeScript. It treats your CI infrastructure as actual software. This means you can run the exact same pipeline on your MacBook Pro as you do in a high-powered cloud environment.
Quick Start: Your First Dagger Pipeline in 5 Minutes
Let’s build a functional pipeline. We will use Python because it is readable and standard in the industry. Before starting, ensure you have Docker installed, as Dagger uses it to execute containers.
1. Install the Dagger CLI
# For macOS users with Homebrew
brew install dagger/tap/dagger
# Or use the install script
curl -L https://dl.dagger.io/dagger/install.sh | sh
sudo mv ./bin/dagger /usr/local/bin/dagger
# Confirm it works
dagger version
2. Set Up Your Environment
Create a fresh directory and initialize a virtual environment to keep your dependencies clean.
mkdir dagger-demo && cd dagger-demo
python3 -m venv .venv
source .venv/bin/activate
pip install dagger-io
3. Write the Pipeline Logic
Create a file named main.py. This script defines a container that pulls Node.js, installs dependencies, and executes tests.
import sys

import anyio
import dagger


async def main():
    # Connect to the Dagger engine; progress logs stream to stderr
    async with dagger.Connection(dagger.Config(log_output=sys.stderr)) as client:
        # Load the local source code
        src = client.host().directory(".")

        # Build the execution environment
        container = (
            client.container()
            .from_("node:18-alpine")
            .with_directory("/src", src)
            .with_workdir("/src")
            .with_exec(["npm", "install"])
            .with_exec(["npm", "test"])
        )

        # Run the pipeline and capture output
        result = await container.stdout()
        print(result)


if __name__ == "__main__":
    anyio.run(main)
4. Execute Locally
dagger run python main.py
Dagger pulls the image and runs your steps. If the tests pass here, they will pass in GitHub Actions. No more guessing.
How Dagger Works Under the Hood
Dagger does not just run scripts in order. It builds a Directed Acyclic Graph (DAG) of your operations. When you define a step with with_exec, Dagger adds it to the graph rather than running it immediately. Execution only happens when you request a result, like stdout() or publish().
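To make the lazy-evaluation idea concrete, here is a conceptual sketch in plain Python. The LazyPipeline class is an illustration invented for this article, not Dagger's API: it only records steps when with_exec is called and does the "work" when a result is requested, mirroring how Dagger extends the DAG and defers execution until stdout().

```python
class LazyPipeline:
    """Illustrative stand-in for a Dagger container: records steps, runs late."""

    def __init__(self, steps=None):
        self.steps = steps or []

    def with_exec(self, args):
        # Returns a NEW pipeline node with the step appended; nothing runs here
        return LazyPipeline(self.steps + [args])

    def stdout(self):
        # Requesting a result is what triggers "execution"
        return "\n".join("ran: " + " ".join(step) for step in self.steps)


p = LazyPipeline().with_exec(["npm", "install"]).with_exec(["npm", "test"])
# At this point nothing has executed; p merely describes two queued steps
print(p.stdout())  # ran: npm install\nran: npm test
```

The payoff of this design is that the engine sees the whole graph before running anything, so it can cache, parallelize, and skip steps whose inputs have not changed.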
The Power of the Dagger Engine
A persistent engine, usually running as a Docker container, handles the heavy lifting. It manages execution and intelligent caching. Unlike standard runners, which typically start each job from a blank slate, the engine reuses cached layers across runs. On a recent React project, Dagger’s layer caching reduced our npm install time from 90 seconds to just 3 seconds on subsequent runs.
Modular Containers
Think of your pipeline as a collection of functions. You can pass containers between functions to create modular workflows. One function might handle security linting, while another handles the production build.
def run_lint(container: dagger.Container) -> dagger.Container:
    return container.with_exec(["npm", "run", "lint"])


def build_binary(container: dagger.Container) -> dagger.Container:
    return container.with_exec(["npm", "run", "build"])
One Pipeline, Every CI Provider
Dagger turns your GitHub or GitLab configuration into a simple shim. Your YAML files will likely shrink by 80% or more because they only need to trigger the Dagger script.
Example: GitHub Actions
Your .github/workflows/ci.yml becomes a boilerplate trigger:
name: CI
on: [push]
jobs:
  dagger:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dagger/dagger-for-github@v5
        with:
          verb: run
          args: python main.py
Example: GitLab CI
The .gitlab-ci.yml looks nearly identical in logic:
run-dagger:
  image: docker:latest
  services: [docker:dind]
  script:
    - curl -L https://dl.dagger.io/dagger/install.sh | sh
    - ./bin/dagger run python main.py
If you decide to switch to CircleCI or Jenkins next month, you won’t have to rewrite your tests. You only need to call that same Python script.
Strategies for Better Pipelines
1. Secure Secret Handling
Avoid hardcoding credentials at all costs. Dagger includes built-in secret management that prevents sensitive data from leaking into build logs or image layers.
# Wrap the env var in a Secret so the value never appears in logs or layers
gh_token = client.set_secret("github-token", os.environ["GITHUB_TOKEN"])
container = container.with_secret_variable("GH_TOKEN", gh_token)
2. Use Persistent Cache Volumes
Speed is everything in CI. Use Dagger’s cache volumes to persist directories like node_modules, .m2, or ~/.cache/pip across different runs. This can shave minutes off every build.
cache = client.cache_volume("node-deps")
container = container.with_mounted_cache("/src/node_modules", cache)
3. Treat CI Code Like Production Code
Since you are using Python, use its full power. Organize your pipeline with classes, split logic into modules, and even write unit tests for your CI logic. I have seen YAML files grow into 2,000-line unreadable monsters. Dagger allows you to keep things clean using standard design patterns.
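As a sketch of what that looks like in practice (the helper name commands_for is invented for illustration), pipeline logic that assembles command lists can be unit-tested with plain pytest, no engine or Docker required:

```python
# Hypothetical helper: builds the npm command sequence for a given stage.
def commands_for(stage: str) -> list[list[str]]:
    base = [["npm", "ci"]]
    if stage == "lint":
        return base + [["npm", "run", "lint"]]
    if stage == "test":
        return base + [["npm", "test"]]
    raise ValueError(f"unknown stage: {stage}")


# Ordinary unit tests for CI logic -- runnable with pytest or directly
def test_lint_stage_installs_first():
    assert commands_for("lint")[0] == ["npm", "ci"]


def test_test_stage_runs_npm_test():
    assert commands_for("test")[-1] == ["npm", "test"]


if __name__ == "__main__":
    test_lint_stage_installs_first()
    test_test_stage_runs_npm_test()
    print("all checks passed")
```

Catching a wrong flag in a unit test takes milliseconds; catching it in a cloud runner takes a commit, a push, and a coffee break.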
Moving to Dagger requires a mental shift, especially if you have spent years writing YAML. However, the ability to debug a CI failure locally in seconds is a massive productivity win. You can finally stop using your git history as a testing ground for CI syntax.