Running LinuxKit on AWS Platform made easy

With around 2,800+ GitHub stars, 54 contributors (28 external) and 50+ commits per week since the DockerCon launch, LinuxKit has really gained a lot of momentum among community users. LinuxKit today supports multiple platforms – AWS, Hyper-V, Azure, macOS, Google Cloud Platform, Packet.net, VMware Fusion, QEMU & local hypervisors. Installation of LinuxKit on macOS has been simplified using Homebrew: just 2 simple brew commands and moby is ready to build your LinuxKit OS image.


Soon after DockerCon 2017, I wrote a blog post on how to get started with LinuxKit for Google Cloud Platform. Since then I have been closely keeping an eye on the latest features, enablements & releases of LinuxKit. Under this blog post, I bring a simplified approach to get a LinuxKit OS instance running on top of the Amazon Web Services (AWS) platform.

Here we go..

Steps:

  1. Install AWS CLI on macOS (using Homebrew)
  2. Install LinuxKit & the Moby tool (using Homebrew)
  3. Configure an AWS S3 bucket
  4. Build a RAW image with the Moby tool
  5. Configure the VM Import service role
  6. Upload the aws.raw image to the remote AWS S3 bucket using LinuxKit
  7. Run the LinuxKit OS as an EC2 instance

Installing AWS CLI on macOS
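If you don't have the AWS CLI yet, a minimal way to get it (assuming Homebrew is already installed and that you have an access key and secret key handy) looks like this:

brew install awscli     # install the AWS command line interface
aws configure           # interactively store your access key, secret key and default region
aws --version           # verify the installation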

 


Setting the AWS_REGION environment variable as this is used by the AWS Go SDK:

export AWS_REGION=ap-south-1

Installing LinuxKit & Moby tool:

brew tap linuxkit/linuxkit
brew install --HEAD moby
brew install --HEAD linuxkit

 


Creating/Configuring AWS S3 bucket:

Open up the AWS Management Console and click on S3 under AWS Services to create a new bucket; I named mine linuxkit-images.
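If you prefer the command line over the console, the same bucket can be created with the AWS CLI; the bucket name linuxkit-images and the ap-south-1 region are simply the values used elsewhere in this post:

# create the S3 bucket that will hold the RAW image, then confirm it exists
aws s3 mb s3://linuxkit-images --region ap-south-1
aws s3 ls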


Building AWS RAW Image using Moby:
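As a rough sketch of this step (treat the exact flag and the aws.yml file name as assumptions, since the moby tool's output flag has been renamed across releases; aws.yml stands for whichever LinuxKit YAML definition you are building from):

# build a RAW disk image from the LinuxKit YAML definition;
# mid-2017 builds of the moby tool used -output, newer ones use -format
moby build -output raw aws.yml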


This builds aws.raw, which we need to push to the AWS S3 bucket using the below command:

linuxkit push aws -bucket linuxkit-images  -timeout 1200 aws.raw

This will throw the below error:

“…The service role <vmimport> does not exist or does not have sufficient permissions for the service to continue. status code: 400, request id: 0ce661fb-e9b4-40b8-af07-9da6a6fc3c94..”

Follow the next section to get it fixed..

Configuring VM Import Service Role 

VM Import requires a role to perform certain operations in your account, such as downloading disk images from an Amazon S3 bucket. You must create a role named vmimport with a trust relationship policy document that allows VM Import to assume the role, and you must attach an IAM policy to the role. I used a script to set up everything in a single shot:
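A sketch of such a script, following the standard AWS VM Import documentation (the bucket name linuxkit-images is the one used in this post; the local JSON file names are placeholders of my own):

cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:Externalid": "vmimport" }
      }
    }
  ]
}
EOF

cat > role-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::linuxkit-images", "arn:aws:s3:::linuxkit-images/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*"],
      "Resource": "*"
    }
  ]
}
EOF

# create the vmimport role with the trust relationship, then attach the inline policy
aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json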

Upload the aws.raw Image to remote AWS S3 bucket using LinuxKit

It’s time to push the RAW Image to S3 bucket:

linuxkit push aws -bucket linuxkit-images -timeout 1200 aws.raw
Created AMI: ami-0a81fe65

Creating an instance

linuxkit run aws aws
Created instance i-02b28f9f8eee1dcf2
Instance i-02b28f9f8eee1dcf2 is running
 

Open up your AWS Management console and you will soon see the new instance coming up.


Here you go... the AWS EC2 instance running the LinuxKit OS is up and running.

Did you find this blog helpful? Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking for contribution/discussion, join me on the Docker Community Slack channel.

To know more about what's happening with the AWS project activities, click on this link.


Docker 1.12.1 Swarm Mode & Persistent Storage with DellEMC RexRay on AWS Platform

“Does Docker Engine 1.12 have storage discovery similar to the Service Discovery and Orchestration features? What is the volume/persistent storage story in 1.12 Swarm Mode? Which will be the best solution for my DB persistence?” – these are a few common questions which I faced in almost every online meetup, blog and Docker webinar. The Docker 1.12 release was totally focused on the orchestration feature, but there have been subtle changes in regards to volumes, volume drivers and storage. Under this blog post, I will be answering the following queries:


  1. What’s new in Docker 1.12 Persistent Storage?
  2. How does Persistent storage work in case of new service API?
  3. How to deploy a simple Web Application for Docker 1.12 using RexRay?

In case you're new to Docker storage, persistent storage refers to storage volumes, usually associated with stateful applications such as databases. In layman's terms, these are places to store data that lives outside the life cycle of the container. A long-lived service like a database needs persistent storage which exists outside the container space and has a life span longer than the container which uses it.

Docker offers a basic persistent storage solution for containers in the form of Docker Data Volumes. There has been a tremendous amount of focus on OverlayFS, which is a modern union filesystem similar to AUFS. It has a simpler design, is potentially faster and has been in the mainline Linux kernel since version 3.18. It is rapidly gaining popularity in the Docker community and is seen by many as a natural successor to AUFS. If interested, you can refer to this to learn more about Overlay2. Let us accept the fact that persistent storage is still an active area of development for Docker. Under Docker 1.12.1, there have been a number of improvements around volumes, which can be tracked here.

Let us accept another truth – Docker enthusiasts who are looking to run Docker in production still count on ecosystem partners like DellEMC (RexRay), ClusterHQ (Flocker), Portworx, CoreOS and Nutanix to simplify persistent storage in one way or another. DellEMC RexRay and Flocker are the two most popular persistent storage solutions and have been appreciated by a large crowd of Docker users. Out of curiosity, I decided to start by looking at RexRay and seeing how it works with Docker 1.12 Swarm Mode.

What is RexRay?


RexRay is an open source storage orchestration engine which delivers persistent storage access for container runtimes such as Docker, and provides an easy interface for enabling advanced storage functionality across common storage, virtualization and cloud platforms. It implements the back-end for a Docker volume driver, providing persistent storage to containers backed by a long list of storage providers. It is essentially a distributed toolset to manage storage from multiple platforms. REX-Ray locally advertises consistent methods to create, remove, map and copy volumes, abstracted from whichever storage platform is serving the operating system.

RexRay (prior to 0.4.0) is available as a standalone process, while starting with the 0.4.0 version it works as a distributed client-server model. The client performs a level of abstraction of local host processes (requests for volume attachment, discovery, formatting and mounting of devices) while the server provides the necessary abstraction of the control plane for multiple storage platforms.

Let us try installing RexRay for Docker 1.12 Swarm Mode and see how it achieves persistent storage for us. I will be using a two-node Swarm Mode cluster under Amazon AWS.


Want to set up RexRay in 1 minute?

Yes, you surely can. Run RexRay inside a container:

docker run -d \
   -e AWS_ACCESS_KEY_ID=<access-key> \
   -e AWS_SECRET_ACCESS_KEY=<secret-access-key> \
   -v /run/docker/plugins:/run/docker/plugins \
   -v /var/run/rexray:/var/run/rexray \
   -v /dev:/dev \
   -v /var/run:/var/run \
   joskfg/docker-rexray

The official way is simple too. RexRay is written in Go, so there are typically no dependencies that must be installed alongside its single binary file. The manual method can be extremely simple through tools like curl; installing RexRay is just a simple one-liner:

curl -sSL https://dl.bintray.com/emccode/rexray/install | sh


RexRay 0.5.0 is the latest release and setting it up is a matter of a few seconds. We can check the RexRay version information through the below command:
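For example, straight from the shell:

sudo rexray version     # prints the installed RexRay version details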


The RexRay CLI is feature-rich and there are various options available to play around with its storage volume management capabilities.


One of the compelling features of RexRay is that it can be run as an interactive CLI to perform volume management, plus it can be run as a service to support Docker and other platforms that can communicate through HTTP/JSON. For example, one can create a config.yml file as shown below:

root@ip-172-31-31-235:~# cat /etc/rexray/config.yml
rexray:
  logLevel: warn
  libstorage:
    service: <>
  osDrivers:
  - linux
  volumeDrivers:
  - docker
  storageDrivers:
  - ec2
aws:
  accesskey: <aws-access-key>
  secretkey: <aws-secret-access-key>

Initializing RexRay:
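Initializing RexRay here just means starting its service so that Docker can reach the volume driver; the same command shows up again later in this post:

sudo rexray service start    # start the RexRay service using /etc/rexray/config.yml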


To retrieve information about the storage volumes, one can issue the below command:
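The exact subcommand has shifted between RexRay releases, so treat this as a sketch; on the 0.3.x–0.5.x series the following listed the volumes visible to the configured storage driver:

sudo rexray volume get    # list volumes known to the configured driver (EC2/EBS here)
# on newer RexRay releases the equivalent is: rexray volume ls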


How does Persistent storage work in case of new service API?

Docker 1.12 comes with 3 new APIs – service, swarm and node. I found users complaining about why the -v option has been dropped in the newer Docker 1.12 Swarm Mode. The reason: we are not just talking about one single host which runs the Docker container; here we are talking about an orchestration feature which spans across hundreds of cluster nodes. The -v flag was dropped because services don't orchestrate volumes. It is important to note that under Docker 1.12.1 we specifically use the term mount, because that is what services do. You can have a service that has a template for creating a volume that the service then mounts in for you, but it does not itself handle volumes, so it's intentionally left out for now. The syntax looks like the one shown below for an NFS mount:


$ sudo docker service create \
    --mount type=volume,volume-opt=o=addr=<source-host, which can be the master node>,volume-opt=device=:<NFS directory>,volume-opt=type=nfs,source=<volume name>,target=/<insideContainer> \
    --replicas 3 \
    --name <service-name> dockerimage <command>

Want to see how Docker Swarm Mode 1.12 & persistent storage work with NFS? Check out my recent blog post:

http://collabnix.com/archives/2001

 

How to deploy a simple Web Application for Docker 1.12 using RexRay?

Now this is an interesting topic and I was just curious to implement this. I already had RexRay 0.3.0 installed on my AWS instances and hence wanted to quickly try it and see how Swarm Mode handles the persistent storage functionality:

I assume you have RexRay up and running in your environment. If not, setting up RexRay 0.3.0 is a matter of a few seconds. The DellEMC {code} team did a great job providing a "REX-Ray Configuration Generator" through this link to create the config.yml file for your RexRay configuration. In case you want to keep it simple, you can export the below variables too:

$ export REXRAY_STORAGEDRIVERS=ec2

$ export AWS_ACCESSKEY=access_key

$ export AWS_SECRETKEY=secret_key

$ rexray service start

Done. Next, let's create a RexRay volume which the Docker container is going to use:
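A sketch of that step, assuming the 0.3.x CLI flags (--volumename and --size) and the 7 GB size referenced below:

# create a 7 GB EBS-backed volume named collabray, then mount it on this host
sudo rexray volume create --volumename=collabray --size=7
sudo rexray volume mount --volumename=collabray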


I have created a RexRay volume called collabray of a certain size (for demonstration purposes). You also need to mount the volume for Docker to be able to use it, as shown above.

Now, run the below docker volume create utility:
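A minimal sketch of that command, using the rexray volume driver and the same name and size:

docker volume create --driver=rexray --name=collabray --opt=size=7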


You can check that the above command created an EBS volume of 7 GiB with the name "collabray".


You can easily verify that /dev/xvdh gets mounted on /var/lib/rexray/volumes/collabray automatically.


Let’s verify that docker volume detects the particular volume:
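For instance:

docker volume ls                  # collabray should show up with the rexray driver
docker volume inspect collabray   # shows the mountpoint under /var/lib/rexray/volumes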


Great! Now comes the most important part of the whole blog. We will be creating a WordPress application as a service where the MySQL DB is mounted on the RexRay volume, so as to make the DB a persistent storage application. The command is shown below:
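A sketch of that command; the overlay network collabnet and the password collab123 are placeholders of my own, while the volume name, mount target and rexray driver come from the description below:

# an overlay network so that the frontend service can later resolve this one by name
docker network create -d overlay collabnet

docker service create --name wordpressdb1 --replicas 1 \
  --network collabnet \
  --env MYSQL_ROOT_PASSWORD=collab123 \
  --env MYSQL_DATABASE=wordpress \
  --mount type=volume,source=collabray,target=/var/lib/mysql,volume-driver=rexray \
  mysql:5.6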


In the above example, we are using the --mount option with the source "collabray" as a RexRay volume, targeting /var/lib/mysql as the MySQL data directory, using the volume driver called "rexray".


We can use the docker inspect command on the particular service to verify the RexRay volume being used by the container running the service:
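For example:

docker service inspect wordpressdb1 | grep -A 12 '"Mounts"'   # the collabray mount on /var/lib/mysql appears in the service spec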


Let’s check if it dumps the DB related files under /var/lib/rexray/volumes/collabray/data or not.


Wow! The MySQL database files are present under the mounted location, which will be our persistent store for the cluster, and RexRay handles that for us very safely.

Next, run the WordPress frontend service, which is going to discover the backend wordpressdb1 using its unqualified domain name, as shown below:
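Again as a sketch; the network and password match the placeholders used for the database service above, and the WORDPRESS_* variables are the standard ones from the official wordpress image:

docker service create --name wordpressapp --replicas 1 \
  --network collabnet \
  --publish 80:80 \
  --env WORDPRESS_DB_HOST=wordpressdb1 \
  --env WORDPRESS_DB_PASSWORD=collab123 \
  wordpress:latest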


As shown in the example above, we are not using a storage volume for wordpressapp, but you might want to back up /var/www/html in a different volume created using RexRay.


Finally, I have the WordPress application ready to be installed in Swarm Mode using a RexRay volume. You can quickly view it using the lynx text-mode browser.


RexRay provides an efficient snapshot capability too. First, let us find out the volume ID for "collabray" through the below command:
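On the 0.3.x CLI this is roughly:

sudo rexray volume get --volumename=collabray   # the VolumeID field is what the snapshot command needs

With the VolumeID in hand, the snapshot is created as follows: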


# rexray snapshot create \
    --description="My DB First Snapshot" \
    --snapshotname="Collabnix WordPress" \
    --volumeid=vol-06b67054cd80fe9cb


You can verify the snapshot by looking at the AWS Management dashboard too.


In a future post, I will cover more on libstorage and the newer RexRay 0.5.0 implementation with ScaleIO for Docker 1.12 Swarm Mode.


Integrating AWS and Docker Cloud 1.0 ~ A Multicloud Application Delivery Service

Docker's acquisition of Tutum, a cross-cloud container management service, has really paid off. Two weeks back, Docker announced "Docker Cloud 1.0" – a new service by Docker that implements all features previously offered by Tutum, plus integration with the Docker Hub Registry service and the common Docker ID credentials. It is basically a SaaS platform that allows you to build, deploy and manage Docker containers in a variety of clouds. Docker Cloud is where developers and IT ops meet to build, ship and run any application, anywhere. Docker Cloud enables you to:

– Deploy and scale any application to your cloud in seconds, with just a few clicks
– Continuously deliver your code with integrated and automated build, test, and deployment workflows
– Get visibility across all your containers across your entire infrastructure
– Access the service programmatically via a RESTful API or a developer friendly CLI tool

Docker Cloud provides a single toolset for working with containers on multiple clouds. Docker Cloud currently offers:

– an HTTP REST API and
– a WebSocket Stream API

Docker Cloud REST API:

The Docker Cloud REST API is reachable through the following hostname: https://cloud.docker.com/
All requests should be sent to this endpoint using Basic authentication, with your API key as the password, as shown below:

GET /api/app/v1/service/ HTTP/1.1
Host: cloud.docker.com
Authorization: Basic dXNlcm5hbWU6YXBpa2V5
Accept: application/json
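For instance, the same request can be made with curl, where username and apikey are placeholders for your Docker ID and API key:

curl -s -u "username:apikey" \
     -H "Accept: application/json" \
     https://cloud.docker.com/api/app/v1/service/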

Docker WebSocket Stream API:

The Docker Cloud Stream API is reachable through the following hostname: wss://ws.cloud.docker.com/
The Stream API requires the same authentication mechanism as the REST API, as shown:

GET /api/audit/v1/events HTTP/1.1
Host: ws.cloud.docker.com
Authorization: Basic dXNlcm5hbWU6YXBpa2V5
Connection: Upgrade
Upgrade: websocket

Please refer to https://docs.docker.com/apidocs/docker-cloud/#actions to read more about Docker Cloud APIs, API roles and authentication details.

My first experience with Docker Cloud was full of excitement. As I work in the Enterprise Solution Group, I firmly believe that enterprises rarely deal with only a single cloud at a time, and delivering applications across multiple clouds has been the domain of several specialized tools. Docker Cloud is one good option which enables moving applications between clouds, and believe me, it's a matter of a few clicks.

This article talks about how to get started with Docker Cloud. I will show how to link Docker Cloud to Amazon Web Services and to Bring Your Own Node ("BYON") in order to deploy an application. This is a step-by-step guide to ease your understanding and deployment.

  1. Login to https://cloud.docker.com


2. Once you log in to the Docker Cloud window, you are welcomed with 5 major steps:

  • Linking to your Cloud Provider
  • Deploying a Node
  • Building a Service
  • Creating a Stack (stackfiles.io from Tutum)
  • Managing Repositories.

 

 


3. Let's follow each section one by one. The first section helps you link to your favorite cloud provider.


As I already have an Amazon AWS account, I am going to choose AWS and click on credentials. This option helps you register your AWS account credentials in your Docker Cloud account to deploy node clusters and nodes using Docker Cloud's dashboard, API or CLI. Under this section, we will also see that AWS Security Credentials are required so that Docker Cloud can interact with AWS on your behalf to create and manage your nodes (EC2 instances).


4. Click on "Add Credentials" and a new window will open asking for your AWS credentials.


To get the Access Key ID, one has to go back to the AWS account and create a user and services in AWS IAM.

Let’s create a new service user called dockercloud-user in AWS IAM. To access the IAM panel in AWS go to https://console.aws.amazon.com/iam/#users
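If you prefer the AWS CLI over the console, the equivalent user creation looks like this (the user name matches the one used throughout this section):

aws iam create-user --user-name dockercloud-user
aws iam create-access-key --user-name dockercloud-user   # returns the AccessKeyId and SecretAccessKey to paste into Docker Cloud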


If you go back to the Docker Cloud window and try to supply the Access ID for Amazon, it will still throw a warning about wrong credentials. There is still one step left to get it working. Before Docker Cloud can use the new user you just created, you need to give it privileges so it can provision EC2 resources on your behalf. Go to the AWS IAM panel and click Policies > Create Policy: https://console.aws.amazon.com/iam/#policies as shown below:


Click on “Create Your Own Policy”


I want to limit Docker Cloud to a specific EC2 region, hence I am going to use the following policy instead, changing the example us-west-2 (US West) region to my desired region:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*",
        "iam:ListInstanceProfiles"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:Region": "us-west-2"
        }
      }
    }
  ]
}

Click on Validate Policy. Once clicked, it displays the validation result.

Click on Create Policy.

 


Click on Users.


Time to attach this policy to the dockercloud-user.
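As a command-line alternative to the console steps, the same policy can be attached inline (the policy name and the local file name are placeholders of my own; the JSON is the region-restricted policy shown earlier):

aws iam put-user-policy \
    --user-name dockercloud-user \
    --policy-name dockercloud-ec2-us-west-2 \
    --policy-document file://dockercloud-policy.json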


Once you create the new dockercloud-user service user, have its credentials, and set the custom policy that allows Docker Cloud to use it, go back to Docker Cloud to add the service user’s credentials.


We are ready to deploy our first node.


Let's create a node in region us-west-2 as per the policy: t1.micro, 15 GB disk, 1 CPU, 1 GB RAM, as shown:


Deployment of the node might take around 5-10 minutes.


As I have opted for the Amazon Free Tier, I need to restrict my node to us-west-2. I can alter the policy as shown:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*",
        "iam:ListInstanceProfiles"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:Region": "us-west-2"
        }
      }
    }
  ]
}

If you go to your AWS page, you will find that a new instance is being initialized:


Once the instance gets deployed, you will see the screen as shown below:


The Container Host gets deployed as shown below:


Our first node is complete. Let's move to the services section:


I was interested in deploying my own repositories, so I chose My Repositories. When I choose it, it automatically fetches all my images present on Docker Hub.

 


I want to test drive the Nginx image and set up an Nginx server in the cloud (as shown below):

 


This completes deploying a node with an Nginx container.

The next section is quite interesting – creating a stack.

A stack is a collection of services that make up an application in a specific environment. A stack file is a file in YAML format that defines one or more services, similar to a docker-compose.yml file but with a few extensions. The default name for this file is docker-cloud.yml.
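As a sketch, the quickstart-python stack I pick later in this section boils down to a docker-cloud.yml along these lines (the image names are taken from Docker's quickstart example, so treat them as an assumption):

cat > docker-cloud.yml <<'EOF'
web:
  image: dockercloud/quickstart-python   # simple Python web app that counts hits in Redis
  links:
    - redis
  ports:
    - "80:80"
redis:
  image: redis
EOF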

To learn more about Stackfiles.io, you can refer https://docs.docker.com/docker-cloud/feature-reference/stacks/

Let’s see how to create stack. Click on next section, Create a Stack window gets displayed:


Stackfiles.io contains a number of pre-built stack files which you can import into this page. I picked "quickstart-python" for this example.


As shown above, stackfiles.io brings an enormous opportunity for Docker Cloud to get applications quickly built and running.


Multicloud Application Delivery Service is a booming market and I believe Docker picked the right time to get into it. Integrating Docker Cloud with cloud service providers like Microsoft Azure, AWS, Packet, Digital Ocean, SoftLayer etc. is sure to gain momentum and make application migration easy.

 

 
