
Leveraging Compose Profiles for Dev, Prod, Test, and Staging Environments


In the world of containerization, managing applications across different environments can be a daunting task. Each environment, whether it’s development, production, testing, or staging, often requires its own set of configurations. Traditionally, developers would create separate Compose files for each environment, resulting in duplication of code and potential configuration drift.

However, with the introduction of Compose Profiles in Docker Compose V2, you can now streamline and simplify the management of these environments within a single docker-compose.yml file. Let’s explore how Compose Profiles make this possible and why it’s such a game-changer.

The Traditional Approach

Before diving into Compose Profiles, let’s understand the challenges of managing multiple environments using the traditional approach of separate Compose files.

Duplication of Code

Creating separate Compose files for each environment often leads to code duplication. Maintaining identical services across different files can be error-prone and time-consuming.

For example, imagine you have a simple application with three services: frontend, backend, and database. You need separate Compose files for development (dev), testing (test), and production (prod) environments. The core configuration for each service (image name, ports, etc.) will likely be identical across all environments, so you end up copying and pasting these definitions into each Compose file (dev.yml, test.yml, prod.yml). While some settings differ (e.g., database connection details), the overall structure for defining those differences is largely the same. This repetition can lead to inconsistencies if changes made to one file aren’t reflected in the others.
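To make the duplication concrete, here is a sketch of what two of those files might look like (the image name and ports are hypothetical, chosen only for illustration):

# dev.yml
services:
  frontend:
    image: myapp/frontend:latest
    ports:
      - "3000:3000"

# prod.yml: the same definition repeated, with only the port mapping changed
services:
  frontend:
    image: myapp/frontend:latest
    ports:
      - "80:3000"

Every change to the frontend definition now has to be made twice.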

With duplicate code, there’s a higher chance of introducing errors when updating service configurations. Missing updates in one file can cause inconsistencies and unexpected behavior in that environment.

Configuration Drift

Configuration drift refers to the unintended divergence of configurations between environments over time. In the context of Compose files, it happens when you modify a service definition in one environment’s file (dev.yml, test.yml, etc.) but forget to update the corresponding definitions in other files.

As you make changes to one environment’s Compose file, you must remember to replicate them across all the other environment files. Without a central configuration source or automated deployment pipelines, doing so is cumbersome and error-prone, and configuration drift creeps in, leading to inconsistencies and hard-to-diagnose issues.

Complexity

Managing multiple Compose files increases the complexity of your project. It becomes challenging to keep track of which file corresponds to which environment, especially as the project grows.

Enter Compose Profiles

Compose Profiles offer a more elegant solution to managing different environments within a single docker-compose.yml file. With Compose Profiles, you can define named groups of services that should be activated together. Here’s how it works:

services:
  frontend:
    image: frontend
    profiles: ["dev", "prod"]

  backend:
    image: backend
    profiles: ["dev", "prod", "test", "staging"]

  database:
    image: mysql
    profiles: ["prod"]

  testing-tools:
    image: testing-tools
    profiles: ["test", "staging"]

In this example, services like frontend and backend are associated with multiple profiles, indicating that they are relevant in various environments. On the other hand, services like database and testing-tools are specific to certain environments.
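For instance, with the file above, starting the test environment would bring up only the backend and testing-tools services, since they are the only ones tagged with the test profile:

$ docker compose --profile test up

The frontend and database services stay stopped because neither carries that profile.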

Leveraging Docker Compose Profiles for Selective CPU and GPU Scheduling

Now let’s talk about applications with varying resource requirements, where managing CPU and GPU allocation across different environments quickly becomes tedious. This is where Docker Compose profiles come in handy.

Traditionally, developers might create separate Compose files for different environments (development, testing, production), each defining resource constraints such as CPU and GPU allocation for its services. With Compose Profiles, the resource settings stay on the service definitions in a single file, and profiles decide which of those services are started in a given environment.

Scheduling Services on CPU or GPU

Now, let’s see how profiles facilitate scheduling services on CPU or GPU:

  • CPU-Intensive Services: Define a profile (e.g., “cpu-bound”) and associate CPU-intensive services with it. Set their CPU allocation on the service itself using cpu_shares or cpus; the profile only controls whether those services are started.
  • GPU-Accelerated Services: Create a separate profile (e.g., “gpu-enabled”) and associate services requiring GPU acceleration with it. Reserve GPUs for those services with a device reservation under deploy.resources (capabilities: [gpu]).
  • Selective Activation: When deploying to environments with or without GPUs, activate the appropriate profile, as shown in the example below. This ensures efficient resource utilization.

Example: docker-compose.yml with Profiles

Here’s a simplified example demonstrating profiles alongside per-service CPU and GPU settings. Note that the profiles attribute only decides whether a service is started; the resource options live on the service definition itself:

services:
  # CPU-bound service, started only when the "cpu-bound" (or "all") profile is active
  frontend:
    image: my-frontend-image
    profiles: ["all", "cpu-bound"]
    cpu_shares: 1024  # Allocate a larger share of CPU time

  # GPU-accelerated service, started only when the "gpu-enabled" (or "all") profile is active
  backend:
    image: my-backend-image
    profiles: ["all", "gpu-enabled"]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all  # Reserve all available GPUs
              capabilities: [gpu]
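With that file, you pick the profile that matches the host. For instance, assuming an NVIDIA runtime is available on the GPU host, a typical way to activate each variant would be:

$ docker compose --profile gpu-enabled up -d   # GPU host: starts backend with GPUs reserved
$ docker compose --profile cpu-bound up -d     # CPU-only host: starts only the frontend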

Simplifying Environment Management

Now, let’s see how Compose Profiles simplify environment management:

Single Compose File

You can maintain a single docker-compose.yml file that covers all your environments. With no more multiple files to juggle, you reduce both complexity and the risk of configuration drift.

Selective Activation

When you need to spin up an environment, you can activate the corresponding profiles. For example, to set up the development environment:

$ docker compose --profile dev up

This command starts only the services associated with the dev profile (plus any services that don’t declare a profiles attribute at all, which are always enabled), leaving the rest inactive.
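You can also enable several profiles at once, either by repeating the --profile flag or by setting the COMPOSE_PROFILES environment variable. For example, to bring up the dev and test services together:

$ docker compose --profile dev --profile test up
$ COMPOSE_PROFILES=dev,test docker compose up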

Environment Isolation

Compose Profiles ensure that services not relevant to a specific environment remain dormant. This isolation prevents unintended service interactions and optimizes resource utilization.

CI/CD Integration

Integrating Compose Profiles into your CI/CD pipelines is seamless. You can define which profiles to activate at different stages of the pipeline, ensuring that the appropriate services are running during testing and deployment.
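As a rough sketch, here is how profile activation might be wired into a GitHub Actions workflow (the job layout, file path, and test script are illustrative, not taken from this article):

# .github/workflows/ci.yml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Start test services
        run: docker compose --profile test up -d
      - name: Run the test suite
        run: docker compose exec -T backend ./run-tests.sh   # hypothetical test entrypoint
      - name: Tear down
        if: always()
        run: docker compose --profile test down

  deploy:
    runs-on: ubuntu-latest
    needs: test
    steps:
      - uses: actions/checkout@v4
      - name: Start production services
        run: docker compose --profile prod up -d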

Conclusion

Docker Compose Profiles offer an elegant solution to a long-standing challenge in containerization: managing different environments efficiently. By grouping services into profiles and selectively activating them, you can maintain a single docker-compose.yml file for all your environments, simplifying your development, testing, and production workflows.

So, the next time you find yourself managing multiple environments for your containerized applications, consider harnessing the power of Compose Profiles. It’s a versatile feature that not only saves you time and effort but also ensures consistency and reliability across your various environments.

With Compose Profiles, you can truly embrace the flexibility and scalability of containerization without the headaches of managing separate Compose files.


Ajeet Singh Raina is a former Docker Captain, Community Leader, and Distinguished Arm Ambassador. He is the founder of the Collabnix blogging site and has authored more than 700 blogs on Docker, Kubernetes, and cloud-native technology. He runs a community Slack of 9,800+ members and a Discord server of close to 2,600 members. You can follow him on Twitter (@ajeetsraina).
