Join our Discord Server
Ajeet Singh Raina is a former Docker Captain, Community Leader and Arm Ambassador. He is the founder of the Collabnix blogging site and has authored more than 570 blogs on Docker, Kubernetes and cloud-native technology. He runs a community Slack of 8,900+ members and a Discord server of close to 2,200+ members. You can follow him on Twitter (@ajeetsraina).

Integrating AWS and Docker Cloud 1.0 ~ A Multicloud Application Delivery Service


Docker’s acquisition of Tutum, a cross-cloud container management service, has really paid off. Two weeks back, Docker announced “Docker Cloud 1.0” – a new service by Docker that implements all the features previously offered by Tutum, plus integration with the Docker Hub Registry service and the common Docker ID credentials. It is basically a SaaS platform that allows you to build, deploy and manage Docker containers in a variety of clouds. Docker Cloud is where developers and IT ops meet to build, ship and run any application, anywhere. Docker Cloud enables you to:

– Deploy and scale any application to your cloud in seconds, with just a few clicks
– Continuously deliver your code with integrated and automated build, test, and deployment workflows
– Get visibility into all your containers across your entire infrastructure
– Access the service programmatically via a RESTful API or a developer-friendly CLI tool

Docker Cloud provides a single toolset for working with containers on multiple clouds. Docker Cloud currently offers:

– a REST API
– a WebSocket Stream API

Docker Cloud Rest API:

The Docker Cloud REST API is reachable through the following hostname:
All requests should be sent to this endpoint using Basic authentication, with your Docker Cloud username and your API key as the password, as shown below:

GET /api/app/v1/service/ HTTP/1.1
Authorization: Basic dXNlcm5hbWU6YXBpa2V5
Accept: application/json
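The Authorization header in the request above is standard HTTP Basic auth: the base64 encoding of `username:apikey`. A minimal Python sketch of how that header is built (the credentials here are the placeholder values from the example, not real ones):

```python
import base64

def basic_auth_header(username: str, apikey: str) -> str:
    # HTTP Basic auth: base64-encode "username:apikey" and prefix with "Basic "
    token = base64.b64encode(f"{username}:{apikey}".encode("ascii")).decode("ascii")
    return f"Basic {token}"

# The placeholder credentials from the example request above
print(basic_auth_header("username", "apikey"))
# -> Basic dXNlcm5hbWU6YXBpa2V5
```

Sending this header with every request is all the authentication the API needs; there is no separate login step.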

Docker Websocket Stream API:

The Docker Cloud Stream API is reachable over a secure WebSocket (wss://) connection.
The Stream API requires the same authentication mechanism as the REST API, as shown:

GET /api/audit/v1/events HTTP/1.1
Authorization: Basic dXNlcm5hbWU6YXBpa2V5
Connection: Upgrade
Upgrade: websocket

Please refer to the Docker Cloud documentation to read more about the Docker Cloud APIs, API roles and authentication details.

My first experience with Docker Cloud was full of excitement. As I work in the Enterprise Solution Group, I firmly believe that enterprises rarely deal with only a single cloud at a time, and delivering applications across multiple clouds has been the domain of several specialized tools. Docker Cloud stands out because it enables moving applications between clouds and, believe me, it’s a matter of a few clicks.

This article talks about how to get started with Docker Cloud. I will show how to link Docker Cloud to Amazon Web Services and deploy an application, as well as Bring Your Own Node (“BYON”). This is a step-by-step guide to ease your understanding and deployment.

  1. Login to Docker Cloud.


2. Once you log in to Docker Cloud, you are welcomed with 5 major steps:

  • Linking to your Cloud Provider
  • Deploying a Node
  • Building a Service
  • Creating a Stack (from Tutum)
  • Managing Repositories.




3. Let’s follow each section one by one. The first section helps you link to your favorite cloud provider.


As I already have an Amazon AWS account, I am going to choose AWS and click on credentials. This option registers your AWS account credentials in your Docker Cloud account so you can deploy node clusters and nodes using Docker Cloud’s dashboard, API or CLI. In this section, we will also see that AWS security credentials are required so that Docker Cloud can interact with AWS on your behalf to create and manage your nodes (EC2 instances).


4. Click on “Add Credentials” and a new window will open asking for your AWS credentials.


To get the Access Key ID, one has to go back to the AWS account and create a user and access keys in AWS IAM.

Let’s create a new service user called dockercloud-user in AWS IAM. To do so, access the IAM panel in the AWS console.


If you go back to the Docker Cloud window and try to supply the Access Key ID for Amazon at this point, it will still throw a warning about wrong credentials. There is still one step left to get it working. Before Docker Cloud can use the new user you just created, you need to give it privileges so that it can provision EC2 resources on your behalf. Go to the AWS IAM panel and click Policies > Create Policy:
Click on “Create Your Own Policy”
I want to limit Docker Cloud to a specific EC2 region, hence I am going to use the following policy, swapping the example us-west-2 (US West) for my desired region:


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*",
        "iam:ListInstanceProfiles"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:Region": "us-west-2"
        }
      }
    }
  ]
}

Click on Validate Policy. Once clicked, the policy details show up.

Click on Create Policy.

Click on Users.
Time to attach this policy to the dockercloud-user service user.
Once you create the new dockercloud-user service user, have its credentials, and set the custom policy that allows Docker Cloud to use it, go back to Docker Cloud to add the service user’s credentials.
We are ready to deploy our first node.
Let’s create a node under zone us-west-2 as per the policy, with a t1.micro instance type, 15 GB disk, 1 CPU and 1 GB RAM, as shown:
Deployment of the node might take around 5-10 minutes.
As I have opted for the Amazon Free Tier, I need to restrict my node to us-west-2. I can alter the policy as shown:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*",
        "iam:ListInstanceProfiles"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:Region": "us-west-2"
        }
      }
    }
  ]
}
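Before pasting a policy like this into IAM, it helps to confirm that it is valid JSON and scoped to the region you expect. A quick local sanity check in Python (just a sketch; it is no substitute for the IAM console’s Validate Policy button):

```python
import json

# The region-restricted policy from above, as a string
policy = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["ec2:*", "iam:ListInstanceProfiles"],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {"StringEquals": {"ec2:Region": "us-west-2"}}
    }
  ]
}
"""

doc = json.loads(policy)   # raises json.JSONDecodeError if the JSON is malformed
stmt = doc["Statement"][0]
print(stmt["Condition"]["StringEquals"]["ec2:Region"])
# -> us-west-2
```

A stray curly quote or missing comma is the most common reason a hand-edited policy fails validation, and `json.loads` catches both immediately.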

If you go to your AWS page, you will find that a new instance is being initialized:
Once the instance gets deployed, you will see the screen as shown below:
The Container Host then gets deployed.

Our first node is complete. Let’s move to the Services section.
I was interested in deploying my own repositories, so I chose My Repositories. When I choose it, it automatically fetches all my container images present on Docker Hub.
I want to test-drive the Nginx image and set up an Nginx server on the cloud (as shown below):
This completes deploying a node with an Nginx container.

Next section is quite interesting – Creating a stack.

A stack is a collection of services that make up an application in a specific environment. A stack file is a file in YAML format that defines one or more services, similar to a docker-compose.yml file but with a few extensions. The default name for this file is docker-cloud.yml.
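As an illustration, a minimal docker-cloud.yml for an Nginx service like the one deployed earlier might look like the sketch below. The target_num_containers and autorestart keys are examples of Docker Cloud’s extensions over the Compose format; treat the exact key names as an assumption to verify against the stack file reference:

```yaml
web:
  image: nginx:latest          # image pulled from Docker Hub
  ports:
    - "80:80"                  # publish the container port on the node
  target_num_containers: 2     # Docker Cloud extension: desired container count
  autorestart: ALWAYS          # Docker Cloud extension: restart policy
```

Everything else (volumes, links, environment variables) follows the familiar Compose conventions.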

To learn more, you can refer to the Docker Cloud stack file documentation.

Let’s see how to create a stack. Click on the next section, and the Create a Stack window gets displayed:
This page contains a number of pre-built stack files which you can import. I picked “quickstart python” for this example.
As shown above, this brings an enormous opportunity for Docker Cloud to get applications quickly built and run.
Multicloud application delivery is a booming market, and I believe Docker picked the right time to get into it. Integrating Docker Cloud with cloud service providers like Microsoft Azure, AWS, Packet, Digital Ocean, SoftLayer etc. is sure to gain momentum and make application migration easy.


