Tanvir Kour is a passionate technical blogger and open source enthusiast. She is a graduate in Computer Science and Engineering and has 4 years of experience in providing IT solutions. She is well-versed in Linux, Docker, and cloud-native applications. You can connect with her on Twitter: https://x.com/tanvirkour

Dagger: Develop your CI/CD pipelines as code

4 min read

Traditional CI/CD tools are usually rigid, with predefined workflows that limit customization, and they require a significant amount of time to adapt to the requirements of a given project. Because of this rigidity, the tools used to run the workflow can differ from those used in the development environment, leading to inefficiencies and bottlenecks in the software development process.

To tackle this issue, the Dagger project was created. With over 10K GitHub stars, Dagger is designed to be programmable, allowing you to define exactly how your pipelines and workflows should work: no matter how complex or unique your project is, your workflow can be tailored to fit its needs.

Beyond pipeline configuration, Dagger allows you to automate almost all tasks within a CI/CD pipeline, like code reviews, automated testing, deployment, and post-deployment monitoring. This level of automation can significantly reduce manual effort, minimize errors, and increase overall productivity.

How The Dagger Project Works

Dagger Architecture


The Dagger architecture is made up of several key components:

  1. Dagger CLI: This is the command-line interface that developers use to interact with Dagger. It allows you to call Dagger functions, chain them together into a pipeline, and even write your own Dagger functions.
  2. Dagger Functions: These are the building blocks of a Dagger pipeline. A Dagger function is a piece of code that performs a specific task. You can write your own Dagger functions to meet the unique needs of your project.
  3. Dagger Modules: A module is a package of Dagger functions. Once you have written a Dagger function, you can package it into a module for reuse across different projects.
  4. Container Runtime: Dagger requires a container runtime to execute its functions. This can be Docker, Podman, nerdctl, or other Docker-like systems.
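The relationship between functions and modules can be pictured with a small, purely illustrative Python sketch (no Dagger APIs involved): a module is just a named collection of functions that can be packaged once and reused across projects.

```python
# Purely illustrative (no Dagger APIs): a module groups reusable functions.
def lint(src: str) -> str:
    return f"linted {src}"

def build(src: str) -> str:
    return f"built {src}"

# "Packaging" the functions into a module makes them reusable elsewhere.
ci_module = {"lint": lint, "build": build}

# Any project can now load the module and call its functions by name.
print(ci_module["build"]("my-app"))  # built my-app
```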

The way Dagger works


As shown in the diagram above, when you run a Dagger command with the Dagger CLI (dagger call my-module), the following happens:

  • Connection: The CLI connects to a running Dagger engine. If there isn’t one already running, it starts one.
  • Session Creation: A new session is opened with the engine. This session has its own instance of a GraphQL server.
  • Module Loading: The specified Dagger module is loaded into the session. If the module code isn’t already cached, it’s pulled from the source.
  • Module Parsing: The module is parsed and prepared for execution.
  • Function Execution: When the module’s functions are called, the engine executes them within a container. This container can leverage resources like vCPUs, memory, and vGPUs based on the module’s requirements.
  • API Calls: Modules can call core APIs (e.g., running containers, working with files) or APIs from other modules they depend on.
  • Result Return: The results of the module’s execution are returned to the CLI.
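The steps above can be modeled with a toy sketch in plain Python. This is purely illustrative: names like ToyEngine and ToySession are invented for this example and are not part of Dagger, but the flow (open a session, load a module from cache or source, execute a function, return the result) mirrors the list above.

```python
# Toy model of the CLI -> engine -> session -> module flow (illustrative only).
class ToyEngine:
    def __init__(self):
        self.module_cache = {}

    def open_session(self):
        # Each CLI invocation gets its own session with the engine.
        return ToySession(self)

class ToySession:
    def __init__(self, engine):
        self.engine = engine

    def load_module(self, name, source):
        # Pull module code from its source only if it isn't already cached.
        if name not in self.engine.module_cache:
            self.engine.module_cache[name] = source()
        return self.engine.module_cache[name]

    def call(self, module, function_name, *args):
        # Execute the requested function and return the result to the caller.
        return module[function_name](*args)

engine = ToyEngine()
session = engine.open_session()
module = session.load_module("hello", lambda: {"hello": lambda: "Hello, world!"})
print(session.call(module, "hello"))  # Hello, world!
```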

Getting Started With Dagger

Installing Dagger

To install Dagger, use the following command:

curl -L https://dl.dagger.io/dagger/install.sh | sh

Check that the installation was successful by running dagger version.

Dagger Functions

Dagger Functions are written in regular programming languages and run inside containers. Each function is a unit of work with defined inputs and outputs, and you can call and chain these functions to compose workflows.
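That chaining model can be illustrated with ordinary Python functions (a hypothetical sketch, not the Dagger SDK): each function takes typed inputs and returns an output that the next call consumes.

```python
# Hypothetical sketch (not the Dagger SDK): functions as chainable units
# with explicit inputs and outputs.
def with_source(container: dict, path: str) -> dict:
    # Input: a base container description plus a source path.
    return {**container, "source": path}

def build(container: dict) -> dict:
    # The output of one function becomes the input of the next.
    return {**container, "artifact": f"built:{container['source']}"}

def publish(container: dict) -> str:
    return f"pushed {container['artifact']}"

result = publish(build(with_source({"image": "alpine"}, "./src")))
print(result)  # pushed built:./src
```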

Calling Functions

You can call a Dagger function as in the example below:

dagger -m github.com/shykes/daggerverse/hello@v0.1.2 call hello

This should display the following:

Hello, world!

Calling Module Functions

You can call a module's functions using the dagger call command. For example, to call a function named test:

dagger call test

You can also build and save the result to a specific path using the -o or --output flag:

dagger call build -o ./bin/myapp


Using Dagger vs Traditional Methods

To see the difference, let's compare a traditional tool (Jenkins) with Dagger, a programmable CI/CD engine that brings several advantages over traditional methods.

The following example is a Jenkins pipeline that automates the build, test, and deployment processes for a Java web application. The pipeline consists of three stages: build, test, and deploy.

pipeline {
    agent any
    environment {
        // Define environment variables here
        TOMCAT_HOME = '/usr/local/tomcat'
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
                sh 'mvn clean install' // Builds the project using Maven
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
                sh 'mvn test' // Runs unit tests using Maven
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying...'
                sh '''
                    cp target/my-app.war $TOMCAT_HOME/webapps/
                    $TOMCAT_HOME/bin/shutdown.sh
                    $TOMCAT_HOME/bin/startup.sh
                ''' // Deploys the application to a Tomcat server
            }
        }
    }
}

In the Jenkinsfile above:

  • agent any: tells Jenkins to run the pipeline on any available agent.
  • stages: contains all the stages that will be executed in the pipeline.
  • Each stage has a series of steps that define the tasks to be executed.
  • mvn clean install: cleans the target directory, compiles the code, runs any unit tests, and packages the compiled code into a WAR file.

For comparison, here is a pipeline for a Node.js application written in Python:

import paramiko
from dagger import dsl
from dagger.runtime.local import invoke


def ssh_exec(host, user, command):
    # Open an SSH connection to a remote host and run a single command.
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user)
    ssh.exec_command(command)
    ssh.close()


@dsl.task()
def build():
    print('Building...')
    return dsl.Shell('npm install && npm run build')


@dsl.task()
def test():
    print('Testing...')
    return dsl.Shell('npm run test')


@dsl.task()
def deploy_dev():
    print('Deploying to dev server...')
    ssh_exec('devserver.com', 'output_user', 'rm -rf /var/www/myapp/* && exit')
    return dsl.Shell('scp -r build/ output_user@devserver.com:/var/www/myapp')


@dsl.task()
def deploy_prod():
    print('Deploying to prod server...')
    ssh_exec('prodserver.com', 'output_user', 'rm -rf /var/www/myapp/* && exit')
    return dsl.Shell('scp -r build/ output_user@prodserver.com:/var/www/myapp')


@dsl.Pipeline()
def pipeline(branch: str):
    build_result = build()
    test_result = test(after=build_result)
    if branch == 'develop':
        deploy_dev(after=test_result)
    elif branch == 'master':
        deploy_prod(after=test_result)


# Invoke the pipeline with the right branch
invoke(pipeline, params={"branch": "develop"})

This Python script shows how to use Dagger for automating continuous integration and deployment (CI/CD) workflows, offering a more programmatic and flexible approach than a Jenkinsfile. Here are the key features of this script:

1. Tasks and Pipelines

  • The script defines several tasks using the @dsl.task() decorator. Each task corresponds to a specific CI/CD step:
    • build(): Installs dependencies and builds the project using npm.
    • test(): Executes unit tests using npm.
    • deploy_dev(): Deploys the application to the development server.
    • deploy_prod(): Deploys the application to the production server.

2. SSH Execution

  • The ssh_exec() function establishes an SSH connection to a remote server and executes a command.
  • It is used within the deploy_dev() and deploy_prod() tasks to clean the existing deployment directory before copying the new build.

3. Pipeline Definition

  • The @dsl.Pipeline() decorator defines the overall CI/CD pipeline.
  • The pipeline takes a branch parameter to determine the deployment target (development or production).
  • The tasks are executed sequentially based on their dependencies (after relationships).

4. Invocation

  • The script invokes the pipeline with the appropriate branch (e.g., develop or master).
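The after= relationships above form a small dependency graph, and the execution order they induce can be reproduced with Python's standard library. This sketch uses graphlib and is independent of Dagger; the task names simply mirror the pipeline above for the develop branch.

```python
from graphlib import TopologicalSorter

# Dependency graph mirroring the pipeline above for the 'develop' branch:
# each task maps to the set of tasks it must run after.
deps = {
    "build": set(),
    "test": {"build"},
    "deploy_dev": {"test"},
}

# static_order() yields the tasks in a valid execution order.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['build', 'test', 'deploy_dev']
```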

Conclusion

Dagger can be particularly useful in cases like these:

  • When you are dealing with complex CI/CD workflows that involve multiple stages and dependencies, Dagger’s programmable nature allows you to define these workflows in a more intuitive and manageable way.
  • If you prefer running builds locally for faster feedback, Dagger allows you to define and run your build processes on your local machine, ensuring they work before pushing to the CI/CD pipeline.
  • If your team uses multiple programming languages and you want to standardize your CI/CD processes across these languages, Dagger’s cross-language scripting engine allows you to write functions in any language.

The Dagger project is a significant innovation in continuous integration and continuous deployment (CI/CD). This guide has shown that it is a versatile tool that can handle a wide range of tasks and workflows. You can learn more about the project in its official documentation, or even contribute to the project on GitHub.

Have Queries? Join https://launchpass.com/collabnix
