Top 5 Most Exciting Dockercon EU 2018 Announcements

Estimated Reading Time: 10 minutes

Last week I attended Dockercon 2018 EU, which took place at the Centre de Convencions Internacional de Barcelona (CCIB) in Barcelona, Spain. With over 3,000 attendees from around the globe, 52 breakout sessions, 11 Community Theatres, 12 workshops, over 100 total sessions, exciting Hallway Tracks & Hands-on Labs/Trainings, paid trainings, a women’s networking event, DockerPals and more, Dockercon allowed developers, sysadmins, Product Managers & industry evangelists to come together and share their wealth of experience around container technology. This time I was lucky enough to get the chance to emcee the Docker for Developers Track for the first time. Not only that, I conducted a Hallway Track for the OpenUSM project & the DockerLabs community contribution effort. Around 20-30 participants showed interest in learning more about this system management, monitoring & log analytics tool.

This Dockercon we had the Docker Captains Summit for the first time, where an entire day was dedicated to Captains. On Dec 3 (10:00 AM till 3:00 PM), we got the chance to interact with Docker staff and put forward all our queries around Docker’s future roadmap. It was amazing to meet all the young Captains who joined us this year, as well as to get familiar with what they have been contributing to during the initial introductory rounds.

This Dockercon brought a couple of exciting announcements. Three of the new features were targeted at the Docker Community Edition, while two were for Docker Enterprise customers. Here’s a rundown of what I think are the 5 most exciting announcements made last week –

#1. Announcement of Cloud Native Application Bundles (CNAB)

Microsoft and Docker captured a great deal of attention with the announcement of CNAB – Cloud Native Application Bundles.

What is CNAB? 

Cloud Native Application Bundles (CNAB) is a standard packaging format for multi-component distributed applications. It allows packages to target different runtimes and architectures. It empowers application distributors to package applications for deployment on a wide variety of cloud platforms, cloud providers, and cloud services. It also provides the capabilities necessary for delivering multi-container applications in disconnected environments.

Is it a platform-specific tool?

CNAB is not a platform-specific tool. While it uses containers for encapsulating installation logic, it remains un-opinionated about the cloud environment it runs in. CNAB developers can bundle applications targeting environments spanning IaaS (like OpenStack or Azure), container orchestrators (like Kubernetes or Nomad), container runtimes (like local Docker or ACI), and cloud platform services (like object storage or Database as a Service). CNAB can also be used for packaging other distributed applications, such as IoT or edge computing. In a nutshell, CNAB is a package format specification that describes a technology for bundling, installing, and managing distributed applications that are, by design, cloud-agnostic.

Why do we need CNAB?

The current distributed computing landscape involves a combination of executable units and supporting API-based services. Executable units include Virtual Machines (VMs), Containers (e.g. Docker and OCI) and Functions-as-a-Service (FaaS), as well as higher-level PaaS services. Along with these executable units, many managed cloud services (from load balancers to databases) are provisioned and interconnected via REST (and similar network-accessible) APIs. The overall goal of CNAB is to provide a packaging format that can enable application providers and developers with a way of installing a multi-component application into a distributed computing environment, supporting all of the above types.


Is it open source? Tell me more about the CNAB format.

It is an open source, cloud-agnostic specification for packaging and running distributed applications. It is a nascent specification that offers a way to repackage distributed computing apps.

The CNAB format is a packaging format for a broad range of distributed applications. It specifies a pairing of a bundle definition (bundle.json) to define the app, and an invocation image to install the app.

The bundle definition is a single file that contains the following information:

  • Information about the bundle, such as name, bundle version, description, and keywords
  • Information about locating and running the invocation image (the installer program)
  • A list of user-overridable parameters that this package recognizes
  • The list of executable images that this bundle will install
  • A list of credential paths or environment variables that this bundle requires to execute
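
To make that concrete, here is a minimal, hypothetical bundle.json sketch. The field names follow the draft CNAB spec as published at the time, but the spec was still evolving, and every image name, parameter and credential below is invented for illustration:

```json
{
  "name": "helloworld",
  "version": "0.1.0",
  "description": "An illustrative CNAB bundle",
  "keywords": ["hello", "demo"],
  "invocationImages": [
    { "imageType": "docker", "image": "example/helloworld-installer:0.1.0" }
  ],
  "images": [
    { "name": "web", "image": "example/helloworld-web:0.1.0" }
  ],
  "parameters": {
    "port": { "type": "int", "defaultValue": 8080 }
  },
  "credentials": {
    "kubeconfig": { "path": "/home/.kube/config" }
  }
}
```

The invocation image referenced here is the "installer program" mentioned above: a container that knows how to install, upgrade and uninstall the bundle.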

What does Docker plan to do with CNAB?

This project was incubated by Microsoft and Docker a year back. The first implementation of the spec is an experimental utility called Docker App, which Docker officially rolled out at this Dockercon and which is expected to be integrated with Docker Enterprise in the near future. Microsoft and Docker plan to publicly donate CNAB to an open source foundation, which is expected to happen early next year.

If you have no patience, head over to the Docker App CNAB examples recently posted by Gareth Rushgrove, a Docker employee, accessible via https://github.com/garethr/docker-app-cnab-examples

This repository contains some basic examples of using docker-app, in particular showing some of the CNAB integration details. Check it out –

#2. Support for Docker Compose on Kubernetes

On the 2nd day of Dockercon, Docker Inc. open sourced the Compose on Kubernetes project. Docker Enterprise Edition already had this capability, starting with Compose file version 3.3: the same docker-compose.yml file used for Swarm deployment can also specify Kubernetes workloads whenever a stack is deployed.

What benefit does this bring to Community Developers?

By making it open source, Docker, Inc has paved the way for infinite possibilities around a simplified way of deploying Kubernetes applications. Docker Swarm gained popularity because of its simplified approach to application deployment using a docker-compose.yml file. Now community developers can use the same YAML file to deploy their K8s applications.

Imagine you are using Docker Desktop on your MacBook. Docker Desktop provides the capability of running both Swarm & Kubernetes. Say your context is set to a GKE cluster running on Google Cloud Platform. You just deployed your app using docker-compose.yml on your local MacBook. Now you want to deploy it the same way, but this time on your GKE cluster. Just use the docker stack deploy command to deploy it to the GKE cluster. Interesting, isn’t it?
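
To picture that workflow, here is a minimal, illustrative docker-compose.yml (service name, image and ports are placeholders). The point is that this very same file drives docker stack deploy whether the current context points at a local Swarm or at a Kubernetes cluster with the Compose controller installed:

```yaml
version: "3.6"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 2
    ports:
      - "8080:80"
```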

What does the Compose on Kubernetes architecture look like?

Compose on Kubernetes is made up of server-side and client-side components. This architecture was chosen so that the entire life cycle of a stack can be managed. The following image is a high-level diagram of the architecture:

Compose on Kubernetes architecture

If you’re interested in learning further, I suggest you visit this link.

How can I test it now?

First we need to install the Compose on Kubernetes controller into your Kubernetes cluster (which could be GKE/AKS). You can download the latest binary (as of 12/13/2018) via https://github.com/docker/compose-on-kubernetes/releases/tag/v0.4.16 .

This controller uses the standard Kubernetes extension points to introduce the `Stack` to the Kubernetes API. You can use any Kubernetes cluster you like, but if you don’t already have one available then remember that Docker Desktop comes with Kubernetes and the Compose controller built-in, and enabling it is as simple as ticking a box in the settings.
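
Once the controller is installed, a deployed stack surfaces as a first-class Kubernetes object that you can inspect with kubectl. A rough sketch of such a Stack resource follows; the API group matches the controller of that era, but the exact schema may differ and the stack content is invented:

```yaml
apiVersion: compose.docker.com/v1beta2
kind: Stack
metadata:
  name: hellostack
spec:
  services:
  - name: web
    image: nginx:alpine
    replicas: 2
```

Under the hood, the controller reconciles each Stack into ordinary Kubernetes objects such as Deployments and Services.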

Check out the latest doc which shows how to make it work with AKS here.

#3. Introducing Docker Desktop Enterprise

The 3rd big announcement was the introduction of Docker Desktop Enterprise. With this, Docker Inc. made a new addition to its desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. Docker Desktop is the simplest way to get started with container-based development on both Windows 10 and macOS, with a set of features now available for the enterprise.

Desktop Comparison Table

How will Docker Desktop Enterprise be different from Docker Desktop Community Edition?

Good question. Docker Desktop has Docker Engine and Kubernetes built in, and with the addition of swappable version packs you can now synchronize your desktop development environment with the same Docker API and Kubernetes versions that are used in production with Docker Enterprise. You get the assurance that your application will not break due to incompatible API calls, and if you have multiple downstream environments running different versions of the APIs, you can quickly change your desktop configuration with the click of a button.

Not only this, with Docker Desktop Enterprise you get access to the Application Designer, a new workflow that provides production-ready application and service templates that let you get coding quickly, with the reassurance that your application meets architectural standards.

source ~ Docker, Inc


For those who are interested in Docker Desktop Enterprise – please note that it is expected to be available for preview in January, with General Availability slated for 1H 2019.

#4. From Zero to Docker in Seconds with the “docker assemble” CLI

This time, the Docker team announced a very interesting docker subcommand, aptly named “assemble”, to the public. Ann Rahma and Gareth Rushgrove from Docker, Inc. introduced assemble, a new command that generates optimized images from non-dockerized apps. It will get you from source to an optimized Docker image in seconds.

Here are a few interesting facts about the docker assemble utility:

  • Docker assemble has the capability to build an image without a Dockerfile; it is all about auto-detecting the code framework.
  • It generates Docker images (and a lot more) from your code with a single command and zero effort, which means no Dockerfile is needed for your app as long as you have a config file (a .pom file, for instance).
  • It can analyze your applications, dependencies, and caches, and give you a sweet Docker image without having to author your own Dockerfiles.
  • It is built on top of BuildKit; it will auto-detect the framework, versions, etc. from a config file (such as a .pom file), automatically add dependencies to the image label, optimize the image size and push.
  • Docker assemble can also figure out what ports need to be published and what healthchecks are relevant.
  • docker assemble builds the app without configuration files and without a Dockerfile; just a Git repository is enough to deploy.

Is it an open source project?

It is an enterprise feature for now and is not part of the community version. It is available for a couple of languages and frameworks (like Java, as demonstrated on the Dockercon stage).

How is it different from buildpack?

Reading through its feature list, docker assemble might look very similar to buildpacks, as it overlaps with some of what they do. But the huge benefit of assemble is that it produces more than just an image (also ports, healthchecks, volume mounts, etc.), and it is integrated into the enterprise toolchain. docker assemble is sort of an enterprise-grade buildpack to help with digitalization.

Keep an eye on my next blog post to get more detail around the fancy docker assemble command.

#5. Docker-app & CNAB together for the first time

On the 2nd day of Dockercon, Docker confirmed that they are the first to implement CNAB for containerized applications and will be expanding it across the Docker platform to support new application development, deployment and lifecycle management. Initially, CNAB support will be released as part of the docker-app experimental tool for building, packaging and managing cloud-native applications. With this, Docker now lets you package CNAB bundles as Docker images, so you can distribute and share them through Docker registry tools including Docker Hub and Docker Trusted Registry. Additionally, Docker will enable organizations to deploy and manage CNAB-based applications in Docker Enterprise in the upcoming months.

Can I test the preview binaries of docker-app which comes with CNAB support?

Yes, you can find some preview binaries of docker-app with CNAB support here. The latest release of Docker App is one such tool that implements the current CNAB spec. It can be used both to build CNAB bundles for Compose (which can then be used with any other CNAB client), and to install, upgrade and uninstall any other CNAB bundle.

In case you have no patience, head over to this recently added example of how to deploy a Helm chart.

You can visit https://github.com/docker/app/releases/tag/cnab-dockercon-preview for accessing preview build.

I hope you found this blog helpful. In my next blog series, I will deep dive into each of these announcements in terms of implementation and enablement.

6 Things You Should Know before Dockercon 2018 EU

Estimated Reading Time: 7 minutes

 
At Dockercon 2018 this year, you can expect 2,500+ participants, 52 breakout sessions, 11 Community Theatres, 12 workshops, over 100 total sessions, exciting Hallway Tracks & Hands-on Labs/Trainings, paid trainings, a women’s networking event, the DockerPals Meetup, the chance to meet Docker community experts – Docker Captains & Community Leaders – and the Ecosystem Expo… and only 3 days to accomplish it all. It’s easy to get overwhelmed, but you need to attend with the right information so that you walk out triumphant.
 

 

 
Coming Dec 3-5, 2018, I will be attending my 3rd Dockercon conference, slated to happen in the beautiful city of Barcelona, home to the largest football stadium in all of Europe! Based on my past experience, I am here to share the inside scoop on where to go and when, what to watch out for, must-see sessions, who to meet, and much more.
 

#1. UnFold & Unlock with Dockercon AB – “Dockercon 2018 Agenda Builder”

Once you get your Dockercon ID from the Registration & Info Desk, just turn it around to unfold and unlock the Dockercon agenda for the next 3 days. Very simple, isn’t it? The Dockercon Agenda Builder is right in your hand.

 
 
 
Wait, you want to access it beforehand? Sure. You can find out when and where everything is happening at Dockercon with this simple Agenda Builder.
 
If you’re a CI-CD enthusiast like me, you should use the filters under the Agenda Builder to choose CI-CD keywords. You should then be able to easily find out which sessions (breakout, Community Theatre, General or Workshops) are scheduled to happen on each of the 3 days.
 
 
I personally find Session Tracks very useful. This time in Barcelona, I will try my best to attend most of the tracks listed below –
 
– Using Docker for Developers
– Using Docker for IT Infrastructure & Ops
– Customer Stories
– Black Belt
– Docker Technology
– Ecosystem
– Community Theatre
 
 
 

#2. Don’t Miss Out on HTP – Dockercon 2018 Hallway Track Platform

Trust me, Dockercon is full of once-in-a-lifetime opportunities. If you’re looking for a place that offers a great way to network and get to know your fellow Docker enthusiasts, the answer is the Hallway Tracks.

Hallway Track is an easy way to connect and learn from each other through conversations about things you care about. Share your knowledge and book 1-on-1 and group conversations with other participants on the Hallway Track Platform.

Recently I submitted a track around the self-initiated DockerLabs community program. You can join me directly via https://hallwaytrack.dockercon.com/topics/29045/

 

Did you know that you can enroll yourself for a Hallway Track right away, even before Dockercon? If interested, head over to https://hallwaytrack.dockercon.com/

#3. Get Your Hands Dirty with HOL – Dockercon 2018 Hands-on Labs (HOL)

A Dockercon conference pass includes access to Docker’s Hands-on Labs. These self-paced tutorials allow you to learn at your own pace anytime during the conference, featuring a wide range of topics for developers and sysadmins working with Windows and Linux environments. Docker staff will be available to answer questions and help you along.

What I love about the Docker HOL is that you don’t need pre-registration; just stop by during the available hours, Monday through Wednesday. All you need to do is carry your laptop to the lab sessions.

Further Information: https://europe-2018.dockercon.com/hands-on-labs/

 

#4. Deepen Your Container Knowledge with DW – “Dockercon 2018 Workshops”

The pre-conference Docker workshops are an amazing opportunity for you to become better acquainted with Docker and take a deep dive into the Docker platform’s features, services and uses before the start of the conference. These two-hour workshops provide technical deep dives, practical guidance and hands-on tutorials. Reserving a space requires just a simple step – RSVP with your Agenda Builder. Please note that this is included under the Full Conference Pass.

Below is the list of workshops you might be interested in attending on Monday, Tuesday & Wednesday (Dec 3 – Dec 5, 2018):

 

252727 – Workshop: Migrating .NET Applications to Docker Containers

252740 – Workshop: Docker Application Package

252734 – Workshop: Container Networking for Swarm and Kubernetes in Docker Enterprise

252737 – Workshop: Container Storage Concepts and How to Use Them

252728 – Workshop: Security Best Practices for Kubernetes

252733 – Workshop: Building a Secure, Automated Software Supply Chain

252720 – Workshop: Using Istio

262804 – Workshop: Container 101 – Getting Up and Running with Docker Containers

252731 – Workshop: Container Monitoring and Logging

252738 – Workshop: Migrating Java Applications to Docker Containers

261088 – Workshop: Container Troubleshooting with Sysdig

262792 – Workshop: Swarm Orchestration – Features and Workflows

Further Information: https://europe-2018.dockercon.com/hands-on-labs/

 

#5. Meet Your Favourite Captains & Community Leaders via Dockercon Pals

DockerPals is an excellent opportunity to meet Docker Captains and Community Leaders who are open to engaging with container enthusiasts of all skill levels, specialities and backgrounds. By participating in Docker Pals you will be introduced to other conference attendees and connected with a DockerCon veteran, giving you a built-in network before you arrive in Barcelona.

If you’re new to Dockercon, you can sign up as a Docker Pal. Docker Pals are matched with 4-5 other conference attendees and one Guide who knows their way around DockerCon. Pals might be newer to DockerCon, or solo attendees who want a group of new friends. Guides help Pals figure out which sessions and activities to attend and are a familiar face at the after-conference events. Both Guides and Pals benefit from making new connections in the Docker Community. You can sign up for Docker Guide under this link.

 

#6. Don’t Miss Out on the Black Belt Sessions

Are you a code- and demo-heavy contributor? If yes, then you are going to love these sessions.

Attendees of this track will learn from technical deep dives that haven’t been presented anywhere else by members of the Docker team and from the Docker community. These sessions are code and demo-heavy and light on the slides. One way to achieve a deep understanding of complex distributed systems is to isolate the various components of that system, as well as those that interact with it, and examine all of them relentlessly. This is what is discussed under the Black Belt track! It features deeply technical talks covering not only container technology but also related projects.

 

Looking for tips on attending the conference?

Earlier this year, I presented a talk entitled “5 Tips for Making the Most out of an International Conference”, which you might find useful. Do let me know your feedback, if any.

If you still have queries around Dockercon, I suggest you join us all on the Docker Slack. Search for the #dc_pals Slack channel to get connected to the DockerPals program.

To join Docker Slack, you are requested to go through this link first. 

 

A First Look at Docker Application Package “docker-app”

Estimated Reading Time: 10 minutes 

Did you know? There are more than 300,000 Docker Compose files on GitHub.

Docker Compose is a tool for defining and running multi-container Docker applications. It is an amazing developer tool for creating the development environment for your application stack. It allows you to define each component of your application following a clear and simple syntax in YAML files. It works in all environments: production, staging, development, testing, as well as CI workflows. Though Compose files make it easy to describe a set of related services, a couple of problems have emerged in the past. One of the major concerns has been around deploying the application to multiple environments with small configuration differences.

Consider a scenario where you have separate development, test, and production environments for your web application. In the development environment, your team might be spending time building up the web application (say, WordPress), developing WP plugins and templates, debugging issues, etc. When you are in development you’ll probably want to check your code changes in real time. The usual way to do this is to mount a volume with your source code in the container that has the runtime of your application. For production this works differently. Before you host your web application in the production environment, you’ll want to turn off debug mode and host it under the right port so as to test your application’s usability and accessibility. In production you have a cluster with multiple nodes, and in most cases the volume is local to the node where your container (or service) is running, so you cannot mount the source code without complex machinery involving code synchronization, signals, etc. In a nutshell, this might require multiple Docker Compose files for each environment, and as your number of services increases, it becomes more cumbersome to manage those pieces of Compose files. Hence, we need a tool that can ease the way Compose files are shared across different environments.
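
Before docker-app, a common workaround was Compose’s own multi-file merging: keep one base Compose file plus a small override file per environment, and combine them at deploy time. A minimal sketch under assumed file contents (the service definition is illustrative):

```shell
# Base definition shared by every environment
cat > docker-compose.yml <<'EOF'
version: "3.6"
services:
  wordpress:
    image: wordpress
EOF

# Production-only overrides: standard port, debug off
cat > docker-compose.prod.yml <<'EOF'
version: "3.6"
services:
  wordpress:
    ports:
      - "80:80"
    environment:
      WORDPRESS_DEBUG: "false"
EOF

# Merge at deploy time (needs a running Docker engine, so shown commented out):
# docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
```

This works, but every new environment adds another file to keep in sync, which is exactly the pain point docker-app targets.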

To solve this problem, Docker, Inc recently announced a new tool called “docker-app” (Application Packages) which makes Compose files more reusable and shareable. This tool not only makes the Compose file shareable but also provides a simplified approach to sharing a multi-service application (not just a Docker image) directly on Dockerhub.

 

 

In this blog post, I will showcase how the docker-app tool makes it easier to use Docker Compose for sharing and collaboration, and then push the result directly to DockerHub. Let us get started –

Prerequisite:

  • Click on the icon near the instance to choose 3 Managers & 2 Worker Nodes

Deploy 5 Node Swarm Mode Cluster

$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
juld0kwbajyn11gx3bon9bsct *   manager1            Ready               Active              Leader              18.03.1-ce
uu675q2209xotom4vys0el5jw     manager2            Ready               Active              Reachable           18.03.1-ce
05jewa2brfkvgzklpvlze01rr     manager3            Ready               Active              Reachable           18.03.1-ce
n3frm1rv4gn93his3511llm6r     worker1             Ready               Active                                  18.03.1-ce
50vsx5nvwx5rbkxob2ua1c6dr     worker2             Ready               Active                                  18.03.1-ce

Cloning the Repository

$ git clone https://github.com/ajeetraina/app
Cloning into 'app'...
remote: Counting objects: 14147, done.
remote: Total 14147 (delta 0), reused 0 (delta 0), pack-reused 14147
Receiving objects: 100% (14147/14147), 17.32 MiB | 18.43 MiB/s, done.
Resolving deltas: 100% (5152/5152), done.

Installing docker-app

wget https://github.com/docker/app/releases/download/v0.3.0/docker-app-linux.tar.gz
tar xf docker-app-linux.tar.gz
cp docker-app-linux /usr/local/bin/docker-app

OR

$ ./install.sh
Connecting to github.com (192.30.253.112:443)
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (52.216.227.152:443)
docker-app-linux.tar 100% |**************************************************************|  8780k  0:00:00 ETA
[manager1] (local) root@192.168.0.13 ~/app
$ 

Verify docker-app version

$ docker-app version
Version:      v0.3.0
Git commit:   fba6a09
Built:        Fri Jun 29 13:09:30 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

The docker-app tool comes with various options as shown below:

$ docker-app
Build and deploy Docker applications.

Usage:
  docker-app [command]

Available Commands:
  deploy      Deploy or update an application
  helm        Generate a Helm chart
  help        Help about any command
  init        Start building a Docker application
  inspect     Shows metadata and settings for a given application
  ls          List applications.
  merge       Merge the application as a single file multi-document YAML
  push        Push the application to a registry
  render      Render the Compose file for the application
  save        Save the application as an image to the docker daemon(in preparation for push)
  split       Split a single-file application into multiple files
  version     Print version information

Flags:
      --debug   Enable debug mode
  -h, --help    help for docker-app

Use "docker-app [command] --help" for more information about a command.
[manager1] (local) root@192.168.0.48 ~/app

WordPress Application under Dev & Prod Environments

If you browse to app/examples/wordpress directory under GitHub Repo, you will see that there is a folder called wordpress.dockerapp that contains three YAML documents:

  • metadata
  • the Compose file
  • settings for your application

Okay, fine! But how were those files created?

The docker-app tool comes with an “init” option which initializes an application with the above 3 YAML documents; the directory structure can be initialized with the below command:

docker-app init --single-file wordpress
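
With --single-file, the three YAML documents live in one file separated by --- markers. A rough sketch of what a generated wordpress.dockerapp skeleton might look like (the maintainer details are placeholders and the exact field set depends on your docker-app version):

```yaml
# metadata
version: 0.1.0
name: wordpress
description: ""
maintainers:
  - name: user
    email: user@example.com
---
# the Compose file describing the services
version: "3.6"
services: {}
---
# default settings (overridable per environment)
```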

I have already created a directory structure for my environments, and you can find a few examples under this directory.

Listing the WordPress Application package related files/directories

$ ls
README.md            install-wp           with-secrets.yml
devel                prod                 wordpress.dockerapp

As you can see above, I have created a folder for each of my environments – dev and prod. Under these directories, I have created prod-settings.yml and dev-settings.yml. You can view their content via this link.
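
For instance, the dev-side settings boil down to a tiny YAML file along these lines (values chosen to match my dev setup: debug on, non-standard port):

```yaml
debug: true
wordpress:
  port: 8082
```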

WordPress Application Package for the Dev Environment

I can pass the “-f <YAML>” parameter to the docker-app tool to render the settings for the respective environment seamlessly, as shown below:

$ docker-app render wordpress -f devel/dev-settings.yml
version: "3.6"
services:
  mysql:
    deploy:
      mode: replicated
      replicas: 1
      endpoint_mode: dnsrr
    environment:
      MYSQL_DATABASE: wordpressdata
      MYSQL_PASSWORD: wordpress
      MYSQL_ROOT_PASSWORD: wordpress101
      MYSQL_USER: wordpress
    image: mysql:5.6
    networks:
      overlay: null
    volumes:
    - type: volume
      source: db_data
      target: /var/lib/mysql
  wordpress:
    depends_on:
    - mysql
    deploy:
      mode: replicated
      replicas: 1
      endpoint_mode: vip
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_NAME: wordpressdata
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DEBUG: "true"
    image: wordpress
    networks:
      overlay: null
    ports:
    - mode: ingress
      target: 80
      published: 8082
      protocol: tcp
networks:
  overlay: {}
volumes:
  db_data:
    name: db_data

WordPress Application Package for the Prod Environment

Under the Prod environment, I have the following content in prod/prod-settings.yml:

debug: false
wordpress:
  port: 80

For the production environment, it is obvious that I want my application to be exposed on the standard port 80. Post rendering, you should be able to see port 80 exposed, as shown in the snippet below:

 image: wordpress
    networks:
      overlay: null
    ports:
    - mode: ingress
      target: 80
      published: 80
      protocol: tcp
networks:
  overlay: {}
volumes:
  db_data:
    name: db_data

 

Inspect the WordPress App

$ docker-app inspect wordpress
wordpress 1.0.0
Maintained by: ajeetraina <ajeetraina@gmail.com>

Welcome to Collabnix

Setting                       Default
-------                       -------
debug                         true
mysql.database                wordpressdata
mysql.image.version           5.6
mysql.rootpass                wordpress101
mysql.scale.endpoint_mode     dnsrr
mysql.scale.mode              replicated
mysql.scale.replicas          1
mysql.user.name               wordpress
mysql.user.password           wordpress
volumes.db_data.name          db_data
wordpress.port                8081
wordpress.scale.endpoint_mode vip
wordpress.scale.mode          replicated
wordpress.scale.replicas      1
[manager1] (local) root@192.168.0.13 ~/app/examples/wordpress
$

Deploying the WordPress App

$ docker-app deploy wordpress
Creating network wordpress_overlay
Creating service wordpress_mysql
Creating service wordpress_wordpress

Switching to the Dev Environment

If I want to switch back to the dev environment, all I need to do is pass the dev-specific YAML file using the “-f” parameter, as shown below:

$docker-app deploy wordpress -f devel/dev-settings.yml


Switching to the Prod Environment

$docker-app deploy wordpress -f prod/prod-settings.yml


[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app deploy -f devel/dev-settings.yml
Updating service wordpress_wordpress (id: l95b4s6xi7q5mg7vj26lhzslb)
Updating service wordpress_mysql (id: lhr4h2uaer861zz1b04pst5sh)
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app deploy -f prod/prod-settings.yml
Updating service wordpress_wordpress (id: l95b4s6xi7q5mg7vj26lhzslb)
Updating service wordpress_mysql (id: lhr4h2uaer861zz1b04pst5sh)
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Pushing Application Package to Dockerhub

So I have my application ready to be pushed to Dockerhub. Yes, you heard it right: application packages, NOT Docker images.

Let me first authenticate myself before I push it to Dockerhub registry:

$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to
 https://hub.docker.com to create one.
Username: ajeetraina
Password:
Login Succeeded
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Saving this Application Package as Docker Image

The docker-app CLI is feature-rich and allows you to save the entire application as a Docker image. Let’s try it out –

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app save wordpress
Saved application as image: wordpress.dockerapp:1.0.0
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Listing out the images

$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
wordpress.dockerapp   1.0.0               c1ec4d18c16c        47 seconds ago      1.62kB
mysql                 5.6                 97fdbdd65c6a        3 days ago          256MB
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Listing out the services

$ docker stack services wordpress
ID                  NAME                  MODE                REPLICAS            IMAGE               PORTS
l95b4s6xi7q5        wordpress_wordpress   replicated          1/1                 wordpress:latest    *:80->80
/tcp
lhr4h2uaer86        wordpress_mysql       replicated          1/1                 mysql:5.6
[manager1] (local) root@192.168.0.48 ~/docker101/play-with-docker/visualizer

Using the docker-app ls command to list the application packages

The ‘ls’ command was recently introduced in v0.3.0. Let us try it out –

$ docker-app ls
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
wordpress.dockerapp   1.0.1               299fb78857cb        About a minute ago   1.62kB
wordpress.dockerapp   1.0.0               c1ec4d18c16c        16 minutes ago       1.62kB

Pushing it to Docker Hub

$ docker-app push --namespace ajeetraina --tag 1.0.1
The push refers to repository [docker.io/ajeetraina/wordpress.dockerapp]
51cfe2cfc2a8: Pushed
1.0.1: digest: sha256:14145fc6e743f09f92177a372b4a4851796ab6b8dc8fe49a0882fc5b5c1be4f9 size: 524

Say you built the WordPress application package and pushed it to Docker Hub. Now one of your colleagues wants to pull it onto their development system and deploy it in their environment.

Pulling it from Docker Hub

$ docker pull ajeetraina/wordpress.dockerapp:1.0.1
1.0.1: Pulling from ajeetraina/wordpress.dockerapp
a59931d48895: Pull complete
Digest: sha256:14145fc6e743f09f92177a372b4a4851796ab6b8dc8fe49a0882fc5b5c1be4f9
Status: Downloaded newer image for ajeetraina/wordpress.dockerapp:1.0.1
[manager3] (local) root@192.168.0.24 ~/app
$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
ajeetraina/wordpress.dockerapp   1.0.1               299fb78857cb        8 minutes ago       1.62kB
[manager3] (local) root@192.168.0.24 ~/app
$

Deploying the Application

$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
ajeetraina/wordpress.dockerapp   1.0.1               299fb78857cb        9 minutes ago       1.62kB
[manager3] (local) root@192.168.0.24 ~/app
$ docker-app deploy ajeetraina/wordpress
Creating network wordpress_overlay
Creating service wordpress_mysql
Creating service wordpress_wordpress
[manager3] (local) root@192.168.0.24 ~/app
$

Using docker-app merge option

The Docker team introduced the docker-app merge option in the new 0.3.0 release. It combines the application’s metadata, Compose file and settings into a single file, as shown below.

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app merge -o mywordpress
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ ls
README.md            install-wp           prod                 wordpress.dockerapp
devel                mywordpress          with-secrets.yml
$ cat mywordpress
version: 1.0.1
name: wordpress
description: "Welcome to Collabnix"
maintainers:
  - name: ajeetraina
    email: ajeetraina@gmail.com
targets:
  swarm: true
  kubernetes: true

--
version: "3.6"

services:

  mysql:
    image: mysql:${mysql.image.version}
    environment:
      MYSQL_ROOT_PASSWORD: ${mysql.rootpass}
      MYSQL_DATABASE: ${mysql.database}
      MYSQL_USER: ${mysql.user.name}
      MYSQL_PASSWORD: ${mysql.user.password}
    volumes:
       - source: db_data
         target: /var/lib/mysql
         type: volume
    networks:
       - overlay
    deploy:
      mode: ${mysql.scale.mode}
      replicas: ${mysql.scale.replicas}
      endpoint_mode: ${mysql.scale.endpoint_mode}

  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_USER: ${mysql.user.name}
      WORDPRESS_DB_PASSWORD: ${mysql.user.password}
      WORDPRESS_DB_NAME: ${mysql.database}
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DEBUG: ${debug}
    ports:
      - "${wordpress.port}:80"
    networks:
      - overlay
    deploy:
      mode: ${wordpress.scale.mode}
      replicas: ${wordpress.scale.replicas}
      endpoint_mode: ${wordpress.scale.endpoint_mode}
    depends_on:
      - mysql

volumes:
  db_data:
     name: ${volumes.db_data.name}

networks:
  overlay:

--
debug: true
mysql:
  image:
    version: 5.6
  rootpass: wordpress101
  database: wordpressdata
  user:
    name: wordpress
    password: wordpress
  scale:
    endpoint_mode: dnsrr
    mode: replicated
    replicas: 1
wordpress:
  scale:
    mode: replicated
    replicas: 1
    endpoint_mode: vip
  port: 8081
volumes:
  db_data:
    name: db_data

docker-app comes with a few other helpful commands as well, in particular the ability to create Helm Charts from your Docker Applications. This can be useful if you’re adopting Kubernetes and standardising on Helm to manage the lifecycle of your application components, but want to maintain the simplicity of Compose when writing your applications. This also makes it easy to run the same applications locally just using Docker, if you don’t want to be running a full Kubernetes cluster.

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter at @ajeetsraina.

If you want to keep track of the latest Docker-related information, follow me at https://www.linkedin.com/in/ajeetsraina/.

Top 5 Most Exciting Dockercon 2018 Announcements

Estimated Reading Time: 8 minutes 

 

Yet another amazing Dockercon!

I attended Dockercon 2018 last week, which took place in San Francisco – Northern California’s most geographically blessed city – at the largest convention and exhibition complex, Moscone Center. With over 5000+ attendees from around the globe, 100+ sponsors, hallway tracks, workshops & hands-on labs, Dockercon allowed developers, sysadmins, product managers & industry evangelists to come together and share their wealth of experience around container technology. This time I was lucky enough to get the chance to visit Docker HQ on Townsend Street for the first time. It was an emotional as well as a proud feeling to be part of such a vibrant community home.

 

 

This Dockercon, there were a couple of exciting announcements. Three of the new features were targeted at Docker EE, while two were for Docker Desktop. Here’s a rundown of what I think are the 5 most exciting announcements made last week:

 

In this blog post, I will go through each one of the announcements in detail.

1. Federated Application Management in Docker Enterprise Edition

 

With an estimated 85% of today’s enterprise IT organizations employing a multi-cloud strategy, it has become more critical that customers have a ‘single pane of glass’ for managing their entire application portfolio. Most enterprise organisations have a hybrid and multi-cloud strategy. Containers have helped make applications portable, but let us accept the fact that even though containers are portable today, the management of containers is still a nightmare. The reasons being –

  • Each Cloud is managed under a separate operational model, duplicating efforts
  • Different security and access policies across each platform
  • Content is hard to distribute and track
  • Poor Infrastructure utilisation still remains
  • Emergence of Cloud-hosted K8s is exacerbating the challenges with managing containerised applications across multiple Clouds

This time Docker introduced new application management capabilities for Docker Enterprise Edition that will allow organisations to federate applications across Docker Enterprise Edition environments deployed on-premises and in the cloud, as well as across cloud-hosted Kubernetes. This includes Azure Kubernetes Service (AKS), AWS Elastic Container Service for Kubernetes (EKS), and Google Kubernetes Engine (GKE). The federated application management feature will automate the management and security of container applications on premises and across Kubernetes-based cloud services. It will provide a single management platform to enterprises so that they can centrally control and secure the software supply chain for all their containerized applications.

With this announcement, undoubtedly Docker Enterprise Edition is the only enterprise-ready container platform that can deliver federated application management with a secure supply chain. Not only does Docker give you your choice of Linux distribution or Windows Server, the choice of running in a virtual machine or on bare metal, running traditional or microservices applications with either Swarm or Kubernetes orchestration, it also gives you the flexibility to choose the right cloud for your needs.

 

 

Below is the list of use cases that are driving the need for federated management of containerised applications –

 

If you want to read more about it, please refer to this official blog.

 

2. Kubernetes Support for Windows Server Container in Docker Enterprise Edition

The partnership between Docker and Microsoft is not new. They have been working together since 2014 to bring containers to Windows and .NET applications. This DockerCon, Docker & Microsoft both shared the next step in this partnership with the preview and demonstration of Kubernetes support on Windows Server with Docker Enterprise Edition.

With this announcement, Docker is the only platform to support production-grade Linux and Windows containers, as well as dual orchestration options with Swarm and Kubernetes.

There has been a rapid rise of Windows containers as organizations recognize the benefits of containerisation and want to apply them across their entire application portfolio and not just their Linux-based applications.

Docker and Microsoft brought container technology into Windows Server 2016, ensuring consistency for the same Docker Compose file and CLI commands across both Linux and Windows. Windows Server ships with a Docker Enterprise Edition engine, meaning all Windows containers today are based on Docker. Recognizing that most enterprise organizations have both Windows and Linux applications in their environment, we followed that up in 2017 with the ability to manage mixed Windows and Linux clusters in the same Docker Enterprise Edition environment, enabling support for hybrid applications and driving higher efficiencies and lower overhead for organizations. Using Swarm orchestration, operations teams could support different application teams with secure isolation between them, while also allowing Windows and Linux containers to communicate over a common overlay network.

If you want to know further details, refer to this official blog.

3. Docker Desktop Template-Based Workflows for Enterprise Developers

Dockercon 2018 was NOT just for Enterprise customers, but also for developers. Talking about the new capabilities for Docker Desktop: it is getting new template-based workflows which will enable developers to build new containerized applications without having to learn Docker commands or write Dockerfiles. These template-based workflows will also help development teams share their own practices within the organisation.

On the 1st day of Dockercon, the Docker team previewed an upcoming Docker Desktop feature that will make it easier than ever to design your own container-based applications. For a certain set of developers, the current iteration of Docker Desktop has everything one might need to containerize an application, but it does require an understanding of the Dockerfile and Compose file specifications in order to get started, and of the Docker CLI to build and run your applications.

In the upcoming Docker Desktop release, you can expect the below features –

  • You will see a new option – “Design New Application” – in the Preferences pane UI.
  • It will be a 100% graphical tool.
  • This tool is a gift for anyone who doesn’t want to write Dockerfiles or Docker Compose files.
  • Once a user clicks the button to start the “Custom application” workflow, they will be presented with a list of services which they can add to the application.
  • Each selected service will eventually become a container in the final application, but Docker Desktop will take care of creating the Dockerfiles and Compose files in later steps.
  • Under this beta release, one can currently do some basic customization of each service, like changing versions, port numbers, and a few other options depending on the service selected.
  • When all the services are selected and one is ready to proceed, supply the application a name, specify where to store the generated files, and then hit the “Assemble” button.
  • The assemble step creates the Dockerfiles for each service, the Compose file used to start the entire application and, for most services, some basic code stubs, giving one enough to start the application.

 

 

If you’re interested in getting early access to the new app design feature in Docker Desktop then please sign up at beta.docker.com.

4. Making Compose Easier to Use with Application Packages

Soon after Dockercon, one of the most promising tools announced for developers was Docker Application Packages (docker-app). The “docker-app” is an experimental utility that helps make Compose files more reusable and shareable.

What problem does application packages solve?

Compose files do a great job of describing a set of related services. Not only are Compose files easy to write, they are generally easy to read as well. However, a couple of problems often emerge:

  1. You have several environments where you want to deploy the application, with small configuration differences
  2. You have lots of similar applications

Fundamentally, Compose files are not easy to share between concerns. Docker Application Packages aim to solve these problems and make Compose more useful for development and production.
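The settings-file approach docker-app takes is essentially templating: placeholders like ${mysql.image.version} in the Compose section are filled in from a nested settings map. A toy illustration of that substitution in Python (illustrative only, not docker-app’s actual code):

```python
import re

# Toy sketch of the ${section.key} substitution docker-app performs when
# rendering a Compose file from its settings section (hypothetical helper).

def render(template: str, settings: dict) -> str:
    """Replace ${dotted.path} placeholders with values from a nested dict."""
    def lookup(match):
        value = settings
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", lookup, template)

settings = {
    "mysql": {"image": {"version": "5.6"}},
    "wordpress": {"port": 8081},
}
print(render("image: mysql:${mysql.image.version}", settings))
# image: mysql:5.6
print(render('- "${wordpress.port}:80"', settings))
# - "8081:80"
```

Swapping the settings dict (dev vs. prod) re-renders the same Compose template for a different environment, which is exactly problem 1 above.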

In my next blog post, I will talk more about this tool. If you want to try it yourself, head over to https://github.com/docker/app

5. Upcoming Support for Serverless Platform under Docker EE

Recently, the Function-as-a-Service (FaaS) programming paradigm has gained a lot of traction in the cloud community. At first, only large cloud providers such as AWS Lambda, Google Cloud Functions or Azure Functions provided such services with a pay-per-invocation model, but since then interest has increased among developers and enterprises in building their own solutions on an open source model.

This Dockercon, Docker identified at least 9 different frameworks, out of which the following six – OpenFaaS, nuclio, Gestalt, Riff, Fn and OpenWhisk – were already confirmed to be supported under the upcoming Docker Enterprise Edition. Docker, Inc. started an open source repository to document how to install all these frameworks on Kubernetes on Docker EE, with the goal of providing a benchmark of these frameworks: the docker serverless benchmark GitHub repository. Pull requests are welcome to document how to install other serverless frameworks on Docker EE.

 

Did you find this blog helpful? I am really excited about the upcoming Docker days and feel that these upcoming features will really excite the community. If you have any questions, join me this July 7th at the Docker Bangalore Meetup Group, Nutanix Office, where I am going to go deeper into the Dockercon 2018 announcements. See you there!

 

When Kubernetes Meet Docker Swarm for the First time under Docker for Mac 17.12 Release

Estimated Reading Time: 7 minutes

Docker for Mac 17.12 GA is the first release which includes both orchestrators – Docker Swarm & Kubernetes – under the same Docker platform. As of 1/7/2018, experimental Kubernetes has been released in the Edge channel (still not available in the D4M stable release). Experimental Kubernetes is still not available for the Docker for Windows & Linux platforms. It is slated to be available for Docker for Windows next month (mid-February) and then for Linux by March or April.

Now you might ask why Docker Inc. is making this announcement. What is the fun of having two orchestrators under the same roof? To answer this, let me go back a little into the past and see how the Docker platform looked:

                                                                                                                             ~ Source – Docker Inc.

 

The Docker platform is like a stack with various layers. The base layer is called containerd, an industry-standard core container runtime with an emphasis on simplicity, robustness and portability. Based on the Docker Engine’s core container runtime, it is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc. containerd is designed to be embedded into a larger system rather than used directly by developers or end-users. It basically includes a daemon exposing a gRPC API over a local UNIX socket. The API is a low-level one, designed for higher layers to wrap and extend. It also includes a barebones CLI (ctr) designed specifically for development and debugging purposes. It uses runC to run containers according to the OCI specification. The code can be found on GitHub, along with the contribution guidelines. Let us accept that over the last few years there has been a lot of iteration around this layer, but Docker Inc. has now finalised it into a robust, popular and widely accepted container runtime.

On top of containerd there is an orchestration layer, rightly called Docker Swarm. Docker Swarm ties together all of your individual machines which run the container runtime. It allows you to deploy an application not onto a single machine at a time but into a whole system, thereby making your application distributed.

To take advantage of these layers, as a developer you need tools & an environment which can build & package your application to leverage your environment; hence Docker Inc. provides Community Edition products like Docker for Mac, Docker for Windows etc. If you are considering moving your application to production, Docker Enterprise Edition is the right choice.

If the stack looks really impressive, why again the change in architecture?

The reason is – Not everybody uses Swarm.

~ Source – Docker Inc.

Before Swarm & Kubernetes integration – if you are a developer and you are using Docker, the workflow looks something like what is shown below. A developer typically uses Docker for Mac or Docker for Windows. Using the familiar docker build and docker-compose build tooling, you build your environment and ensure that it gets deployed across a single-node cluster, OR use docker stack deploy to deploy it across multiple cluster nodes.

~ Source – Docker Inc.

 

If your production runs on Swarm, then you can test locally on Swarm, as it is already built into the Docker platform. But if your production environment runs on Kubernetes, then surely there is a lot of work to be done, like translating Compose files etc. using 3rd-party open source tools and negotiating with their offerings. Though it is possible today, it is still not as smooth as the Swarm Mode CLI.

With the newer Docker platform, you can seamlessly use both Swarm and Kubernetes. Interestingly, you use the same familiar tools like docker stack ls, docker stack deploy, docker ps and docker stack ps to display both Swarm and Kubernetes containers. Isn’t it cool? You don’t need to learn new tools to play around with a Kubernetes cluster.

~ Source – Docker Inc.

 

The new Docker platform includes both Kubernetes and Docker Swarm side by side and at the same level, as shown below. Please note that it is a real Kubernetes sitting next to Docker Swarm and NOT A FORK OR WRAPPER.

                                                                                                 ~ Source – Docker Inc.

Still not convinced why this announcement?

 

                                                                                                  ~ Source – Docker Inc.

How does the Swarm CLI build a Kubernetes cluster side by side?

Docker analyses the Compose file input format and converts it to pods, along with creating replica sets as per the instruction set. With the newer Docker for Mac 17.12 release, a new stack command has been added as a first-class citizen to the Kubernetes CLI.

 

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get stacks -o wide
NAME      AGE
webapp    1h

 

 

Important Points –

 

  • Future releases of the Docker platform will include both orchestration options – Kubernetes and Swarm
  • The Swarm CLI will be used for cluster management, while for orchestration you have a choice of Kubernetes & Swarm
  • The full Kubernetes API is exposed in the stack, hence support for the overall Kubernetes ecosystem is possible.
  • Docker Stack Deploy will be able to target either Swarm or Kubernetes.
  • Kubernetes is recommended for the production environment
  • Running both Swarm & Kubernetes together is not recommended for the production environment.
  • AND by now, you must be convinced – “SWARM MODE CLI is NOT GOING ANYWHERE”

Let us test drive the latest Docker for Mac 17.12 beta release and see how the Swarm CLI can be used to bring up both Swarm and Kubernetes clusters seamlessly.

  • Ensure that you have the Docker for Mac 17.12 Edge release running on your Mac system. If you still don’t see the 17.12-kube_beta client version, I suggest you go through my last blog post.

A First Look at Kubernetes Integrated Docker For Mac Platform

 

 

Please note that Kubernetes/kubectl comes by default with the Docker for Mac 17.12 beta release. YOU DON’T NEED TO INSTALL KUBERNETES. A single-node cluster is already set up for you by default.

As we have Kubernetes & Swarm orchestration already present, let us head over to building NGINX services as a demonstration on this single-node cluster.

Writing a Docker Compose File for NGINX containers

Let us write a Docker Compose file for the nginx image and deploy 3 containers of that image. This is how my docker-compose.yml looks:
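The Compose file itself appeared only as an image in the original post; a minimal reconstruction consistent with the stack output shown later in this walkthrough (service nginx, 3 replicas, ports 82:80 and 444:443) would be something like:

```yaml
# Reconstructed from the stack output in this post; the original file
# was shown as an image, so treat this as an approximation.
version: "3.3"
services:
  nginx:
    image: nginx
    deploy:
      replicas: 3
    ports:
      - "82:80"
      - "444:443"
```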

 

Deploying the Application Stack using docker stack deploy

Ajeets-Air:mynginx ajeetraina$ DOCKER_ORCHESTRATOR=kubernetes docker stack deploy --compose-file docker-compose.yml webapp
Stack webapp was created
Waiting for the stack to be stable and running…
-- Service nginx has one container running
Stack webapp is stable and running

 

Verifying the NGINX replica sets through the below command:

 

As shown above, there are 3 replicas of the same NGINX image running as containers.

Verifying the cluster using the kubectl CLI, displaying the stack information:

Ajeets-MacBook-Air:mynginx ajeetraina$ kubectl get stack -o wide
NAME      AGE
webapp    8h

As you can see, kubectl and stack deploy display the same cluster information.

Verifying the cluster using the kubectl CLI, displaying the YAML file:

You can verify that Docker analyses the docker-compose.yml input file format and converts it to pods, along with creating replica sets as per the instruction set, using the YAML output format below.
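The YAML output was shown as an image in the original post. Assuming the v1beta1 compose API that shipped with these early builds (an assumption; exact field names may differ), `kubectl get stack webapp -o yaml` returns roughly a Stack object embedding the Compose definition:

```yaml
# Sketch only: based on the compose.docker.com/v1beta1 Stack API assumed
# to ship with Docker for Mac 17.12; exact fields may differ.
apiVersion: compose.docker.com/v1beta1
kind: Stack
metadata:
  name: webapp
  namespace: default
spec:
  composeFile: |
    version: "3.3"
    services:
      nginx:
        image: nginx
        deploy:
          replicas: 3
        ports:
          - "82:80"
          - "444:443"
```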

 

 

We can use the same old stack deploy CLI to verify the cluster information

 

Managing Docker Stack

Ajeets-MacBook-Air:mynginx ajeetraina$ docker stack services webapp
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
20e31598-e4c        nginx               replicated          3/3                 nginx               *:82->80/tcp,*:444->443/tcp

 

It’s time to verify if the NGINX webpage comes up well:

 

Hence, we saw that the NGINX service is running on the Kubernetes & Swarm cluster side by side.

Cleaning up

Run the below Swarm CLI related commands to clean up the NGINX service:

docker stack ls
docker stack rm webapp
kubectl get pods

Output:

Want to see this in action?

https://asciinema.org/a/8lBZqBI3PWenBj6mSPzUd6i9Y

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

Dockercon 2017 is surely gonna be EPIC | Top Sessions You Can’t Miss This Year..

Estimated Reading Time: 4 minutes

Are you still thinking whether or not to attend Dockercon 2017? Still finding it difficult to convince yourself or your boss/manager to allow you to attend this conference? Then trust me, you have come to the right place. For the next 30 minutes, I will talk about the great sessions which you can’t afford to miss this year.


Dockercon 2017 is just 1 month away. Heavily power-packed with 3 keynotes (including Solomon Hykes’ impressive talk), 7 tracks, 60+ breakout sessions, workshops, Ask the Experts, Birds-of-a-Feather, Hands-on Labs, an ecosystem expo and lots more, this year DockerCon 2017 brings an impressive three-day event schedule to Austin, capital of the U.S. state of Texas. Featuring topics, content & workshops covering all aspects of Docker and its ecosystem, Dockercon has always given a chance to meet and talk to like-minded professionals, get familiar with the latest offerings, upcoming Docker releases & roadmap, and best practices and solutions for building Docker-based applications. Equally, it has always provided an opportunity for community users to share what and how they are using Docker on their premises and in the cloud.


                     April 17-21 2017 | Austin, TX | DockerCon 2017

Dockercon 2017 is primarily targeted at developers, DevOps, ops, system administrators, product managers and IT executives. Whether you are an Enablement Solution Architect for DevOps and containers OR a Technical Solution Architect; whether you are part of an IoT development team OR an AWS/Azure DevOps engineer; whether you are a Principal Product Engineer OR a Product Marketing Manager, Dockercon is the place to be. Still wondering how this conference would help your organization adopt containers and improve your offerings in terms of containerized applications for your customers? I have categorized the list of topics based on the target audience. Hope it will help you gather data points to convince yourself and your boss.

As a developer, you are a core part of your organization, busy developing new versions of your flagship software meant to run on various platforms. You are responsible for development leveraging the target containerized platform’s capabilities, and for adapting and maintaining release artifacts to deliver a compelling experience for your users. The below list of sessions might help you develop better containerized software –


As a Product Manager, you are effectively the CEO of your product and responsible for the strategy, roadmap, and feature definition for that product or product line. You love to focus on the problems, not on the solutions. You are gifted at getting prospects and customers to express their true needs. The below list of sessions might interest you:


 

As a system administrator, you are the person responsible for the uptime, performance, resources, security, configuration, and reliable operation of systems running Docker applications. The below sessions might interest you to manage your Dockerized environment in a better way –


 

As a Solution Architect, you are always busy with the definition and implementation of reference architectures, capturing business capabilities and transforming them into services leveraged across the platform, and, not to miss out, designing infrastructure for critical applications and business processes in a cost-effective manner. The below list might interest you to shape your containerized solutions in a better way:


Don’t you think attending Dockercon is going to be a great investment for you and your career? If yes, then what are you waiting for? The Docker team has something really cool to get you started –

DockerCon Agenda Builder – Browse and Search Your Session

Register for Dockercon 2017

Dockercon Speakers at a glance

For more information, visit http://2017.dockercon.com/about/