
Streamlining CI/CD Pipelines with Docker and Jenkins


Hey there, fellow engineers and tech enthusiasts! I’m thrilled to share one of my favorite strategies for modern software delivery: combining Docker and Jenkins to supercharge your CI/CD pipelines.

Throughout my career as a Software/DevOps Engineer and a mentee of Docker Captain Ajeet Raina, I’ve discovered that these two tools can significantly streamline releases, minimize environment-related issues, and empower teams to deploy faster with confidence.

In this post, I’ll guide you through:

  • Understanding Docker and Jenkins
  • The synergy between Docker and Jenkins
  • Building and maintaining efficient CI/CD pipelines

My aim is to help you feel at ease when automating your workflows. Let’s dive in!


Understanding Continuous Integration and Continuous Delivery

Continuous Integration (CI) and Continuous Delivery (CD) are fundamental practices in modern software development. If you’re new to these concepts, here’s a quick overview:

  • Continuous Integration (CI): Developers frequently commit their code to a shared repository, triggering automated builds and tests. This practice helps prevent conflicts and ensures that defects are identified early.
  • Continuous Delivery (CD): Building on CI, organizations can automate the release process, enabling shorter release cycles, fewer surprises, and the ability to roll back changes swiftly if necessary.

Leveraging CI/CD can dramatically enhance your team’s velocity and quality. Once you experience the benefits of reliable, streamlined pipelines, you won’t want to go back.


Why Combine Docker and Jenkins for CI/CD?

Docker allows you to containerize your applications, ensuring consistent environments across development, testing, and production. Jenkins, on the other hand, automates tasks such as building, testing, and deploying your code. Think of Jenkins as the tireless “assembly line worker,” while Docker provides identical “containers” to maintain consistency throughout your project’s lifecycle.

Here’s why integrating these tools is so powerful:

  • Consistent Environments: Docker containers ensure uniformity from a developer’s laptop to production. This consistency reduces errors and eliminates the dreaded “it works on my machine” problem.
  • Fast Deployments and Rollbacks: Docker images are lightweight, allowing you to deploy or revert changes quickly—ideal for rapid delivery cycles where minimal downtime is essential.
  • Scalability: Whether you need to run 1,000 tests in parallel or support multiple teams working on microservices, Docker can spin up multiple containers as needed, with Jenkins orchestrating everything through pipelines.

For a DevOps enthusiast like me, the synergy between Jenkins and Docker is a dream come true.


Setting Up Your CI/CD Pipeline with Docker and Jenkins

Before you get started, here are the essentials you’ll need:

  • Docker Desktop (or a Docker server environment) installed and running. You can download Docker for various operating systems from the official Docker website.
  • Jenkins installed, preferably using the jenkins/jenkins:lts image from Docker Hub instead of the deprecated library/jenkins image. You can find it on the Jenkins Docker Hub page.
  • Proper permissions to execute Docker commands and manage Docker images on your system.
  • A GitHub or similar code repository to store your Jenkins pipeline configuration (optional but recommended).

Pro Tip: For production setups, consider using a container orchestration platform like Kubernetes. This simplifies scaling Jenkins, updating Jenkins, and managing additional Docker servers for heavier workloads.
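If you want to try things out locally first, a quick way to stand up Jenkins is to run the official LTS image in a container. This is a minimal sketch; the port mappings and volume name are illustrative:

# Run the Jenkins LTS image; the UI is on 8080, inbound agents use 50000
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

The named volume keeps your Jenkins home directory intact across container restarts.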


Building a Robust CI/CD Pipeline with Docker and Jenkins

Once your environment is set up, it’s time to create your first Jenkins-Docker pipeline. Below, I’ll walk you through the common steps for a typical pipeline—feel free to adjust them to suit your stack.

1. Install Necessary Jenkins Plugins

Jenkins offers a multitude of plugins. Let’s start with a few that make configuring Jenkins with Docker easier:

  • Docker Pipeline (docker-workflow): Adds first-class support for building and running Docker containers from pipeline scripts.
  • Docker (docker-plugin): Lets Jenkins provision build agents as Docker containers.
  • Docker Build and Publish (docker-build-publish): Builds an image from a Dockerfile and pushes it to a registry.

How to Install Plugins:

  1. Navigate to Manage Jenkins > Manage Plugins in Jenkins.
  2. Click the Available tab and search for the plugins listed above.
  3. Install the plugins and restart Jenkins if prompted.

Code Example (Plugin Installation via CLI):


# Install plugins using the Jenkins CLI (plugin IDs as published on plugins.jenkins.io)
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker-workflow
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker-plugin
java -jar jenkins-cli.jar -s http://<jenkins-server>:8080/ install-plugin docker-build-publish

Pro Tip (Advanced Approach): If you aim for a fully infrastructure-as-code setup, consider using the Jenkins Configuration as Code (JCasC) plugin. With JCasC, you can declare all your Jenkins settings—including plugins, credentials, and pipeline definitions—in a YAML file. This approach ensures your Jenkins configuration is version-controlled and reproducible, making it effortless to spin up fresh Jenkins instances or apply consistent settings across multiple environments. It’s especially beneficial for large teams managing Jenkins at scale.
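To give you a feel for JCasC, here is a minimal sketch of a jenkins.yaml. The system message, executor count, and credential entry are illustrative assumptions; the credential ID simply matches the one referenced in the pipeline example later in this post:

# jenkins.yaml -- a minimal JCasC sketch; all values are illustrative
jenkins:
  systemMessage: "Jenkins configured as code"
  numExecutors: 2
credentials:
  system:
    domainCredentials:
      - credentials:
          - usernamePassword:
              scope: GLOBAL
              id: dockerhub-credentials
              username: your-dockerhub-user
              password: "${DOCKERHUB_TOKEN}"

Point the CASC_JENKINS_CONFIG environment variable at this file and Jenkins applies it on startup.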


2. Set Up Your Jenkins Pipeline

Next, you’ll define your pipeline. A Jenkins “pipeline” job uses a Jenkinsfile (stored in your code repository) to specify the steps, stages, and environment requirements.

Example Jenkinsfile:


pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/your-org/your-repo.git'
            }
        }
        stage('Build') {
            steps {
                script {
                    dockerImage = docker.build("your-org/your-app:${env.BUILD_NUMBER}")
                }
            }
        }
        stage('Test') {
            steps {
sh "docker run --rm your-org/your-app:${env.BUILD_NUMBER} ./run-tests.sh"
            }
        }
        stage('Push') {
            steps {
                script {
                    docker.withRegistry('https://index.docker.io/v1/', 'dockerhub-credentials') {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}

Pipeline Breakdown:

  • Checkout: Pulls your repository.
  • Build: Creates a Docker image (your-org/your-app) tagged with the build number.
  • Test: Runs your test suite inside a fresh container, ensuring consistent environments for every test run.
  • Push: Pushes the image to your Docker registry (e.g., Docker Hub) if the tests pass.

Reference: Jenkins Pipeline Documentation

3. Configure Jenkins for Automated Builds

With your pipeline defined, you’ll want Jenkins to execute it automatically:

  • Webhook Triggers: Configure your source control (e.g., GitHub) to send a webhook whenever code is pushed. Jenkins will initiate a build immediately.
  • Poll SCM: Jenkins periodically checks your repository for new commits and starts a build if it detects changes.

Which Trigger Method to Choose?

  • Webhook: Ideal for near real-time builds. As soon as you push to your repository, Jenkins is notified, and a new build starts almost instantly. This method is typically more efficient since Jenkins doesn’t need to continuously poll your repository. However, it requires that your source control system and network environment support webhooks.
  • Poll SCM: Useful if your environment can’t support incoming webhooks—for example, if you’re behind a corporate firewall or your repository isn’t configured for outbound hooks. Jenkins routinely checks for new commits based on a schedule you define (e.g., every five minutes), which can introduce a slight delay and extra overhead but may simplify setup in restricted environments.

Personal Experience: I prefer webhook triggers because they keep everything as close to real-time as possible. Polling works fine if webhooks aren’t feasible, but you might experience a slight delay between code pushes and build starts. Additionally, frequent polling can generate extra network traffic.
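For completeness, here is how a polling schedule looks in a declarative Jenkinsfile. The five-minute cron expression is just an example, and the H token spreads polling across the interval to avoid load spikes:

pipeline {
    agent any
    triggers {
        // Poll the repository roughly every five minutes
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
    }
}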

4. Build, Test, and Deploy with Docker Containers

Now comes the exciting part—automating the entire cycle from build to deploy:

  • Build Docker Image: After pulling the code, Jenkins invokes docker.build to create a new image.
  • Run Tests: Automated or acceptance tests run inside a container spawned from that image, ensuring consistency.
  • Push to Registry: If tests pass, Jenkins pushes the tagged image to your Docker registry—this could be Docker Hub or a private registry.
  • Deploy: Optionally, Jenkins can deploy the image to a remote server or a container orchestrator like Kubernetes.

This streamlined approach ensures every step—build, test, deploy—occurs within one cohesive pipeline, eliminating the “where did that step go?” scenarios.
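To make that deploy step concrete, here is a hypothetical stage you could append to the earlier Jenkinsfile. The container name, port mapping, and single-host strategy are assumptions for illustration, not a prescription:

stage('Deploy') {
    steps {
        // Replace the running container with the freshly built image
        sh """
            docker pull your-org/your-app:${env.BUILD_NUMBER}
            docker stop my-app || true
            docker rm my-app || true
            docker run -d --name my-app -p 80:8080 your-org/your-app:${env.BUILD_NUMBER}
        """
    }
}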

5. Optimize and Maintain Your Pipeline

Once your pipeline is operational, here are some maintenance tips and enhancements to keep everything running smoothly:

  • Clean Up Images: Regularly clean up Docker images to reclaim space and reduce clutter (see the cleanup sketch after this list).
  • Security Updates: Stay updated with the latest versions of Docker, Jenkins, and any plugins. Applying patches promptly helps protect your CI/CD environment from vulnerabilities.
  • Resource Monitoring: Ensure Jenkins nodes have adequate memory, CPU, and disk space for builds. Overloaded nodes can slow down your pipeline and cause intermittent failures.
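A simple cleanup pass can be scheduled on your build nodes. The one-week retention window here is an assumption; tune it to your build cadence:

# Remove stopped containers, then images unused for more than a week (168 hours)
docker container prune -f
docker image prune -af --filter "until=168h"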

Pro Tip: In large projects, consider separating your build agents from your Jenkins controller by running them in ephemeral Docker containers (also known as Jenkins agents). If an agent goes down or becomes stale, you can quickly spin up a fresh one—ensuring a clean, consistent environment for every build and reducing the load on your main Jenkins server.

Why Use Declarative Pipelines for CI/CD?

Although Jenkins supports multiple pipeline syntaxes, Declarative Pipelines stand out for their clarity and resource-efficient design. Here’s why:

  • Simplified, Opinionated Syntax: Everything is encapsulated within a single pipeline { ... } block, minimizing “scripting sprawl.” It’s ideal for teams seeking a quick path to best practices without delving deeply into Groovy specifics.
  • Easier Resource Allocation: By specifying an agent at either the pipeline level or within each stage, you can offload heavyweight tasks (builds, tests) onto separate worker nodes or Docker containers. This approach helps prevent your main Jenkins controller from becoming overloaded.
  • Parallelization and Matrix Builds: If you need to run multiple test suites or support various OS/browser combinations, Declarative Pipelines make it straightforward to define parallel stages or set up a matrix build. This is incredibly useful for microservices or large test suites requiring different environments in parallel.
  • Built-in “Escape Hatch”: Need advanced Groovy features? Simply drop into a script block. This allows you to access Scripted Pipeline capabilities for niche cases while still enjoying Declarative’s streamlined structure most of the time (see the short example after this list).
  • Cleaner Parameterization: Want to let users choose which tests to run or which Docker image to use? The parameters directive makes your pipeline more flexible. A single Jenkinsfile can handle multiple scenarios—like unit vs. integration testing—without duplicating stages.
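As a quick illustration of that escape hatch, here is a script block inside an otherwise declarative stage; the service names are made up:

stage('Build Services') {
    steps {
        script {
            // Full Groovy is available inside the script block
            def services = ['api', 'web', 'worker']
            services.each { svc ->
                echo "Building ${svc}..."
            }
        }
    }
}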

Declarative Pipeline Examples

Below are sample pipelines illustrating how declarative syntax can simplify resource allocation and keep your Jenkins controller healthy.

Example 1: Basic Declarative Pipeline


pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing...'
            }
        }
    }
}

Explanation:

  • Runs on any available Jenkins agent (worker).
  • Uses two stages in a simple sequence.

Example 2: Stage-Level Agents for Resource Isolation


pipeline {
    agent none  // Avoid using a global agent at the pipeline level
    stages {
        stage('Build') {
            agent { docker 'maven:3.9.3-eclipse-temurin-17' }
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Test') {
            agent { docker 'openjdk:17-jdk' }
            steps {
                sh 'java -jar target/my-app-tests.jar'
            }
        }
    }
}

Explanation:

  • Each stage runs in its own container, preventing any single node from being overwhelmed.
  • agent none at the top ensures no global agent is allocated unnecessarily.

Example 3: Parallelizing Test Stages


pipeline {
    agent none
    stages {
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    agent { label 'linux-node' }
                    steps {
                        sh './run-unit-tests.sh'
                    }
                }
                stage('Integration Tests') {
                    agent { label 'linux-node' }
                    steps {
                        sh './run-integration-tests.sh'
                    }
                }
            }
        }
    }
}

Explanation:

  • Splits tests into two parallel stages.
  • Each stage can run on a different node or container, speeding up feedback loops.

Example 4: Parameterized Pipeline


pipeline {
    agent any
    parameters {
        choice(name: 'TEST_TYPE', choices: ['unit', 'integration', 'all'], description: 'Which test suite to run?')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Building...'
            }
        }
        stage('Test') {
            when {
                expression { return params.TEST_TYPE == 'unit' || params.TEST_TYPE == 'all' }
            }
            steps {
                echo 'Running unit tests...'
            }
        }
        stage('Integration') {
            when {
                expression { return params.TEST_TYPE == 'integration' || params.TEST_TYPE == 'all' }
            }
            steps {
                echo 'Running integration tests...'
            }
        }
    }
}

Explanation:

  • Allows you to choose which tests to run (unit, integration, or both).
  • Executes relevant stages based on the selected parameter, saving resources.

Example 5: Matrix Builds


pipeline {
    agent none
    stages {
        stage('Build and Test Matrix') {
            matrix {
                agent {
                    label "${PLATFORM}-docker"
                }
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'linux', 'windows'
                    }
                    axis {
                        name 'BROWSER'
                        values 'chrome', 'firefox'
                    }
                }
                stages {
                    stage('Build') {
                        steps {
                            echo "Build on ${PLATFORM} with ${BROWSER}"
                        }
                    }
                    stage('Test') {
                        steps {
                            echo "Test on ${PLATFORM} with ${BROWSER}"
                        }
                    }
                }
            }
        }
    }
}

Explanation:

  • Defines a matrix of PLATFORM x BROWSER, running each combination in parallel.
  • Perfect for testing multiple OS/browser combinations without duplicating pipeline logic.


Using Declarative Pipelines helps ensure your CI/CD setup is easier to maintain, scalable, and secure. By properly configuring agents—whether Docker-based or label-based—you can distribute workloads across multiple worker nodes, minimize resource contention, and keep your Jenkins controller running smoothly.


Best Practices for CI/CD with Docker and Jenkins

Ready to enhance your setup? Here are some tried-and-true practices I’ve developed:

  • Leverage Docker’s Layer Caching: Optimize your Dockerfiles so that stable (less frequently changing) layers appear early. This significantly reduces build times (see the Dockerfile sketch after this list).
  • Run Tests in Parallel: Jenkins can run multiple containers for different services or microservices, allowing you to test them concurrently. Declarative Pipelines make it easy to define parallel stages, each on its own agent.
  • Shift Left on Security: Integrate security checks early in the pipeline. Tools like Docker Scout can scan images for vulnerabilities, while Jenkins plugins can enforce compliance policies. Don’t wait until production to discover issues.
  • Optimize Resource Allocation: Properly configure CPU and memory limits for Jenkins and Docker containers to avoid resource hogging. If you’re scaling Jenkins, distribute builds across multiple worker nodes or ephemeral agents for maximum efficiency.
  • Configuration Management: Store Jenkins jobs, pipeline definitions, and plugin configurations in source control. Tools like Jenkins Configuration as Code simplify versioning and replicating your setup across multiple Docker servers.
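To make the layer-caching advice concrete, here is a sketch of a Dockerfile ordered so that dependency layers are reused between builds; a Node.js app is assumed purely for illustration:

FROM node:20-alpine
WORKDIR /app
# Dependency manifests change rarely, so this layer is usually served from cache
COPY package.json package-lock.json ./
RUN npm ci
# Source changes often; only the layers from here down are rebuilt on a typical commit
COPY . .
CMD ["npm", "start"]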

With these strategies—and a solid foundation in Declarative Pipelines—you’ll have a lean, high-performance CI/CD pipeline that’s easier to maintain and evolve.


Troubleshooting Docker and Jenkins Pipelines

Even the best systems encounter issues occasionally. Here are a few common challenges I’ve faced (and overcome):

  • Handling Environment Variability: Ensure Docker and Jenkins versions are synchronized across different nodes. If you’re using multiple Jenkins nodes, standardize Docker versions to prevent random build failures.
  • Troubleshooting Build Failures: Use docker logs -f <container-id> to inspect what happened inside a container. Often, logs reveal missing dependencies or misconfigured environment variables.
  • Networking Challenges: If your containers need to communicate—especially across multiple hosts—properly configure Docker networks or use an orchestration platform. Refer to Docker’s networking documentation and the Jenkins diagnosing issues guide for more troubleshooting tips.
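For instance, a user-defined bridge network lets a Jenkins container and a build-time dependency resolve each other by name. The network and container names below are illustrative:

# Create a shared network and attach both containers to it
docker network create ci-net
docker network connect ci-net jenkins
docker run -d --name test-db --network ci-net postgres:16

Containers on ci-net can then reach the database at the hostname test-db.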

Conclusion

Pairing Docker and Jenkins offers a nimble, robust approach to CI/CD. Docker ensures consistent environments and rapid rollouts, while Jenkins automates essential tasks like building, testing, and deploying your code to production. When these two tools work in harmony, you can expect shorter release cycles, fewer integration headaches, and more time to focus on developing awesome features.

A well-maintained pipeline also means your team can respond quickly to user feedback and confidently roll out updates—key ingredients for any successful software project. Additionally, there are numerous tools and best practices available to keep your applications secure.

I hope this guide helps you build and maintain a high-performance CI/CD pipeline that your team will love. If you have questions or need assistance, feel free to reach out on the community forums, join the conversation on Slack, or open a ticket on GitHub issues. You’ll find plenty of Docker and Jenkins enthusiasts eager to help.

Thanks for reading! If you found this article helpful, be sure to check out our other CI/CD-related posts on Collabnix. Feel free to share this article and leave your thoughts in the comment section below.

Have Queries? Join https://launchpass.com/collabnix

Adesoji Alu brings a proven ability to apply machine learning (ML) and data science techniques to solve real-world problems. He has experience working with a variety of cloud platforms, including AWS, Azure, and Google Cloud Platform, and strong skills in software engineering, data science, and machine learning. He is passionate about using technology to make a positive impact on the world.