Building a Secure VM based on LinuxKit on Microsoft Azure Platform

The LinuxKit GitHub repository recently crossed 3,000 stars, has been forked 300+ times, and has added 60+ contributors. Just 5 months old, the project has already gained a lot of momentum across the Docker community. Built to enable the community to create secure, immutable, and minimal Linux distributions, LinuxKit is mature enough to support a number of cloud platforms like Azure, AWS, Google Cloud Platform, VMware, Packet.net and many more.

 

In my recent blogs, I showcased how to get LinuxKit OS built for Google Cloud Platform, Amazon Web Services and VirtualBox. ICYMI, I recently published a few videos on LinuxKit too. Check them out.

 

In this blog post, I will walk through how to build a secure and portable VM based on a LinuxKit image on the Microsoft Azure platform.

Prerequisites:

I will be leveraging macOS Sierra running Docker 17.06.1-ce-rc1-mac20. I tested it on Ubuntu 16.04 LTS too, running on one of my Azure VMs, and it went fine. Prior knowledge of Microsoft Azure / Azure CLI 2.0 will be required to configure the Service Principal so that the VHD image gets uploaded to Azure smoothly.

 

Step-1: Cloning the latest LinuxKit repository

Clone the LinuxKit repository using the below command:

git clone https://github.com/linuxkit/linuxkit

 

Step-2: Building the Moby & LinuxKit tools

cd linuxkit
make

 

Step-3: Copying the tools into the right PATH

cp -rf bin/moby /usr/local/bin/
cp -rf bin/linuxkit /usr/local/bin/

 

Step-4: Preparing Azure CLI tool

curl -L https://aka.ms/InstallAzureCli | bash

 

Step-5: Run the below command to restart your shell

exec -l $SHELL

 

Step-6: Building LinuxKit OS for Azure Platform

cd linuxkit/examples/
moby build -output vhd azure.yml

This will build a VHD image which now has to be pushed to the Azure platform.

In order to push the VHD image to Azure, you need to authenticate LinuxKit with your Azure subscription, hence you will need to set up the following environment variables:

   export AZURE_SUBSCRIPTION_ID=43b263f8-XXXX-XXXX-XXXX-XXXXXXXX
   export AZURE_TENANT_ID=633df679-XXXX-XXXX-XXXX-XXXXXXXX
   export AZURE_CLIENT_ID=c7e4631a-XXXX-XXXX-XXXX-XXXXXXXX
   export AZURE_CLIENT_SECRET=XXXXXXXXXXXXXXXXXXXXXXXX=

Alternatively, the easy way to get all the above details is through the below command:

az login
To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code XXXXXX to authenticate.

The above command lists out the Subscription ID and Tenant ID, which can then be exported as shown above.
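If you'd rather script that step, the JSON printed by az login can be parsed directly. A minimal sketch follows; the subscription values below are redacted samples, not real IDs:

```shell
# Sample (redacted) JSON as printed by `az login`; the id/tenantId fields
# are what AZURE_SUBSCRIPTION_ID / AZURE_TENANT_ID need.
az_output='[
  {
    "id": "43b263f8-0000-0000-0000-000000000000",
    "tenantId": "633df679-0000-0000-0000-000000000000",
    "name": "My Subscription",
    "state": "Enabled"
  }
]'

# Pull the two fields out with sed; on a live setup you could instead use
#   az account show --query id -o tsv
subscription_id=$(printf '%s\n' "$az_output" | sed -n 's/.*"id": "\([^"]*\)".*/\1/p')
tenant_id=$(printf '%s\n' "$az_output" | sed -n 's/.*"tenantId": "\([^"]*\)".*/\1/p')

export AZURE_SUBSCRIPTION_ID="$subscription_id"
export AZURE_TENANT_ID="$tenant_id"
echo "$AZURE_SUBSCRIPTION_ID"
```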

Next, follow this link to create an Azure Active Directory application and service principal that can access resources. If you want to stick to the CLI rather than the UI, the service principal can be created directly from the Azure CLI.
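As a sketch of the CLI route (the service principal name linuxkit-sp and the placeholder values are just examples), the output of az ad sp create-for-rbac maps onto the environment variables like this:

```shell
# Create a service principal scoped to your subscription (name is arbitrary)
az ad sp create-for-rbac --name linuxkit-sp

# The JSON it prints maps onto the variables LinuxKit expects:
#   appId    -> AZURE_CLIENT_ID
#   password -> AZURE_CLIENT_SECRET
#   tenant   -> AZURE_TENANT_ID
export AZURE_CLIENT_ID="<appId from the output>"
export AZURE_CLIENT_SECRET="<password from the output>"
export AZURE_TENANT_ID="<tenant from the output>"
export AZURE_SUBSCRIPTION_ID="$(az account show --query id -o tsv)"
```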

Step-7: Pushing the VHD image to Azure Platform

linuxkit run azure --resourceGroupName mylinuxkit --accountName mylinuxkitstore -location eastasia azure.vhd
Creating resource group in eastasia
Creating storage account in eastasia, resource group mylinuxkit

The command will stream the upload progress and end with the below messages:

 Completed: 100% [     68.00 MB] RemainingTime: 00h:00m:00s Throughput: 0 Mb/sec
Creating virtual network in resource group mylinuxkitresource, in eastasia
Creating subnet linuxkitsubnet468 in resource group mylinuxkitresource, within virtual network linuxkitvirtualnetwork702
Creating public IP Address in resource group mylinuxkitresource, with name publicip159
Started deployment of virtual machine linuxkitvm941 in resource group mylinuxkitresource
Creating virtual machine in resource group mylinuxkitresource, with name linuxkitvm941, in location eastasia
NOTE: Since you created a minimal VM without the Azure Linux Agent,
the portal will notify you that the deployment failed. After around 50 seconds try connecting to the VM

ssh -i path-to-key root@publicip159.eastasia.cloudapp.azure.com

 

By this time, you should be able to see the LinuxKit VM coming up on the Azure Portal.

Wait for another 2-3 minutes before SSHing into this Azure instance, and it's all set to be up and running smoothly.

Known Issues:

  • Since the image currently does not contain the Azure Linux Agent, the Azure Portal will report the creation as failed.
  • One important thing to note is the way the VHD is uploaded: it is done via a Docker container based on Azure VHD Utils, mainly because that tool manages fast and efficient uploads by leveraging parallelism.
  • There is work in progress to specify what ports to open on the VM (more specifically on a network security group)
  • The metadata package does not yet support the Azure metadata.

 

Did you find this blog helpful? Feel free to share your experience. Get in touch @ajeetsraina

If you are looking for contribution/discussion, join me at the Docker Community Slack Channel.


The 20-minutes Docker 1.12 Swarm Mode demonstration on Azure Platform

2016 has been a great year for Docker Inc. With the announcement of the Docker 1.12 release at the last DockerCon, a new generation of Docker clustering & distributed systems was born. With an optional "Swarm Mode" feature rightly integrated into the core Docker Engine, native management of a cluster of Docker Engines, orchestration, decentralized design, service and application deployment, scaling, rolling updates, desired state reconciliation, multi-host networking, service discovery and routing mesh implementation – all of these features work flawlessly. With the recent Engine 1.12.5 release, all of these features have matured enough to make it production ready.

In this blog post, I will spend another 20 minutes going quickly through a complete A-Z tutorial around Swarm Mode, covering important features like Orchestration, Scaling, Routing Mesh, Overlay Networking, Rolling Updates etc.

  • Preparing Your Azure Environment
  • Setting up Cluster Nodes
  • Setting up Master Node
  • Setting up Worker Nodes
  • Creating Your First Service
  • Inspecting the Service
  • Scaling service for the first time
  • Creating the Nginx Service
  • Verifying the Nginx Page
  • Stopping all the services in a single command
  • Building WordPress Application using CLI
  • Building WordPress Application using Docker-Compose
  • Demonstrating CloudYuga RSVP Application
  • Scaling the CloudYuga RSVP Application
  • Demonstrating Rolling Updates
  • Docker 1.12 Scheduling | Restricting Service to specific nodes

 

Preparing Your Azure Environment:

1. Login to the Azure Portal.
2. Create a minimum of 5 swarm nodes (1 master & 4 worker nodes) – [we can definitely automate this]
3. While creating the Virtual Machines, select "Docker on Ubuntu Server" (it contains Docker 1.12.3 and Ubuntu 16.04)
4. Select Password rather than Public Key for quick access using PuTTY [for demonstration]
5. Select the same default Resource Group for all the nodes so they can communicate with each other

 

Setting Up Cluster Nodes:


 

Setting Up Master Node:

ajeetraina@Master1:~$ sudo docker swarm init --advertise-addr 10.1.0.4
Swarm initialized: current node (dj89eg83mymr0asfru2n9wpu7) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join \
--token SWMTKN-1-511d99ju7ae74xs0kxs9x4sco8t7awfoh99i0vwrhhwgjt11wi-d8y0tplji3z449ojrfgrrtgyc \
10.1.0.4:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

 

Setting up Worker Node:

Adding worker nodes is quite easy. Just run the above command on each worker node, one by one, to connect it to the manager. This can also be automated (I will touch on it later if needed).
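The automation mentioned above can be sketched with a small loop. The hostnames node1..node4 are hypothetical, and on a live manager you would fetch the token with sudo docker swarm join-token worker -q rather than hard-coding it:

```shell
# Manager address and the join token printed when the swarm was initialized
manager_addr="10.1.0.4:2377"
token="SWMTKN-1-511d99ju7ae74xs0kxs9x4sco8t7awfoh99i0vwrhhwgjt11wi-d8y0tplji3z449ojrfgrrtgyc"

# Build the join command once...
join_cmd="sudo docker swarm join --token $token $manager_addr"

# ...and replay it on each worker over SSH (echo shows what would run;
# drop the leading echo to actually execute it)
for w in node1 node2 node3 node4; do
    echo ssh "$w" "$join_cmd"
done
```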

Tip: In case you lose your current session on the Manager and want to know the token that will allow you to connect to the cluster, run the below command on the manager node:

ajeetraina@Master1:~$ sudo docker swarm join-token worker
To add a worker to this swarm, run the following command:

docker swarm join \
--token SWMTKN-1-511d99ju7ae74xs0kxs9x4sco8t7awfoh99i0vwrhhwgjt11wi-d8y0tplji3z449ojrfgrrtgyc \
10.1.0.4:2377

Run the above command on all the nodes one by one.

 

Listing the Swarm Cluster:

ajeetraina@Master1:~$ sudo docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
49fk9jibezh2yvtjuh5wlx3td    Node2     Ready   Active
aos67yarmj5cj8k5i4g9l3k6g    Node1     Ready   Active
dj89eg83mymr0asfru2n9wpu7 *  Master1   Ready   Active        Leader
euo8no80mr7ocu5uulk4a6fto    Node4     Ready   Active

 

Verifying whether a node is a worker node:

Run the below command on the worker node:

$sudo docker info

Swarm: active
NodeID: euo8no80mr7ocu5uulk4a6fto
Is Manager: false
Node Address: 10.1.0.7
Runtimes: runc

The “Is Manager: false” entry specifies that this node is a worker node.
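That check can be scripted too. Here is a small simulation against the docker info excerpt shown above; on a live node you would pipe sudo docker info instead of the sample text:

```shell
# The "docker info" excerpt shown above, embedded as sample data
info='Swarm: active
NodeID: euo8no80mr7ocu5uulk4a6fto
Is Manager: false
Node Address: 10.1.0.7'

# Pull out the manager flag; "false" means this node is a worker
is_manager=$(printf '%s\n' "$info" | awk -F': ' '/Is Manager/ {print $2}')
echo "$is_manager"
```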

Creating our First Service:

ajeetraina@Master1:~$ sudo docker service create alpine ping 8.8.8.8
2ncblsn85ft2sgeh5frsj0n8g
ajeetraina@Master1:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                  PORTS               NAMES
cc80ce569a94        alpine:latest       "ping 8.8.8.8"      2 seconds ago       Up Less than a second                       nauseous_stonebraker.1.6gcqr8d9brbf9lowgqpb4q6uo
ajeetraina@Master1:~$

 

Querying the service:

Syntax: sudo docker service ps <service-id>

Example:

ajeetraina@Master1:~$ sudo docker service ps 2ncb
ID                         NAME                    IMAGE   NODE     DESIRED STATE  CURRENT STATE           ERROR
6gcqr8d9brbf9lowgqpb4q6uo  nauseous_stonebraker.1  alpine  Master1  Running        Running 54 seconds ago

Alternative Method:

ajeetraina@Master1:~$ sudo docker service ps nauseous_stonebraker
ID                         NAME                    IMAGE   NODE     DESIRED STATE  CURRENT STATE               ERROR
6gcqr8d9brbf9lowgqpb4q6uo  nauseous_stonebraker.1  alpine  Master1  Running        Running about a minute ago
ajeetraina@Master1:~$

 

Inspecting the Service:

ajeetraina@Master1:~$ sudo docker service inspect 2ncb
[
    {
        "ID": "2ncblsn85ft2sgeh5frsj0n8g",
        "Version": {
            "Index": 27
        },
        "CreatedAt": "2016-11-16T10:59:10.462901856Z",
        "UpdatedAt": "2016-11-16T10:59:10.462901856Z",
        "Spec": {
            "Name": "nauseous_stonebraker",
            "TaskTemplate": {
                "ContainerSpec": {
                    "Image": "alpine",
                    "Args": [
                        "ping",
                        "8.8.8.8"
                    ]
                },
                "Resources": {
                    "Limits": {},
                    "Reservations": {}
                },
                "RestartPolicy": {
                    "Condition": "any",
                    "MaxAttempts": 0
                },
                "Placement": {}
            },
            "Mode": {
                "Replicated": {
                    "Replicas": 1
                }
            },
            "UpdateConfig": {
                "Parallelism": 1,
                "FailureAction": "pause"
            },
            "EndpointSpec": {
                "Mode": "vip"
            }
        },
        "Endpoint": {
            "Spec": {}
        },
        "UpdateStatus": {
            "StartedAt": "0001-01-01T00:00:00Z",
            "CompletedAt": "0001-01-01T00:00:00Z"
        }
    }
]
ajeetraina@Master1:~$

 

Scaling the Service:

ajeetraina@Master1:~$ sudo docker service ls
ID            NAME                  REPLICAS  IMAGE   COMMAND
2ncblsn85ft2  nauseous_stonebraker  1/1       alpine  ping 8.8.8.8
ajeetraina@Master1:~$ sudo docker service scale nauseous_stonebraker=4
nauseous_stonebraker scaled to 4
ajeetraina@Master1:~$ sudo docker service ls
ID            NAME                  REPLICAS  IMAGE   COMMAND
2ncblsn85ft2  nauseous_stonebraker  4/4       alpine  ping 8.8.8.8
ajeetraina@Master1:~$

 

Creating a Nginx Service:

ajeetraina@Master1:~$ sudo docker service create --name web --publish 80:80 --replicas 4 nginx
9xm0tdkt83z395bqfhjgwge3t
ajeetraina@Master1:~$ sudo docker service ls
ID            NAME                  REPLICAS  IMAGE   COMMAND
2ncblsn85ft2  nauseous_stonebraker  4/4       alpine  ping 8.8.8.8
9xm0tdkt83z3  web                   0/4       nginx
ajeetraina@Master1:~$ sudo docker service ls
ID            NAME                  REPLICAS  IMAGE   COMMAND
2ncblsn85ft2  nauseous_stonebraker  4/4       alpine  ping 8.8.8.8
9xm0tdkt83z3  web                   0/4       nginx
ajeetraina@Master1:~$ sudo docker service ls
ID            NAME                  REPLICAS  IMAGE   COMMAND
2ncblsn85ft2  nauseous_stonebraker  4/4       alpine  ping 8.8.8.8
9xm0tdkt83z3  web                   4/4       nginx

 

Verifying the Nginx Web Page:

ajeetraina@Master1:~$ sudo curl http://localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
ajeetraina@Master1:~$

 

Stopping all the swarm mode services in a single shot

$sudo docker service rm $(sudo docker service ls -q)
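A note of caution if you build that ID list with awk: a plain awk '{print $1}' would sweep up the header row of docker service ls as well. A local simulation on a captured listing (sample data from earlier in this post) shows the fix:

```shell
# Sample "docker service ls" output captured earlier in this post
listing='ID            NAME                  REPLICAS  IMAGE   COMMAND
2ncblsn85ft2  nauseous_stonebraker  4/4       alpine  ping 8.8.8.8
9xm0tdkt83z3  web                   4/4       nginx'

# Skipping the header row (NR>1) keeps only real service IDs; on a live
# host, `docker service ls -q` achieves the same thing directly
ids=$(printf '%s\n' "$listing" | awk 'NR>1 {print $1}')
echo "$ids"
```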

Building WordPress Application using CLI

Create an overlay network:

$sudo docker network create --driver overlay collabnet

Run the backend(wordpressdb) service:

$sudo docker service create --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress --network collabnet --replicas 1 --name wordpressdb mysql:latest

Run the frontend(wordpressapp) service:

$sudo docker service create --env WORDPRESS_DB_HOST=wordpressdb --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 4 --name wordpressapp --publish 80:80/tcp wordpress:latest

Inspecting the Virtual IP Address:

$docker service inspect --format {{.Endpoint.VirtualIPs}} wordpressdb
$docker service inspect --format {{.Endpoint.VirtualIPs}} wordpressapp

Checking whether the WordPress application is working:

$curl http://localhost

Building WordPress Application using Docker-Compose:

Create a file called docker-compose.yml under some directory on your Linux system with the below entry:

version: '2'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: wordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_PASSWORD: wordpress
volumes:
    db_data:

Execute the below command to bring up the application:

$sudo docker-compose up -d

Verifying the running containers:

$sudo docker-compose ps

Listing the services defined in the Compose file:

$sudo docker-compose config --services

 

Testing CloudYuga RSVP Application:

[Credits: http://www.cloudyuga.guru]

$docker network create --driver overlay rsvpnet
$docker service create --name mongodb -e MONGODB_DATABASE=rsvpdata --network rsvpnet mongo:3.3
$docker service create --name rsvp -e MONGODB_HOST=mongodb --publish 5000 --network rsvpnet teamcloudyuga/rsvpapp

Verifying it opens up in a web browser. Since no host port was specified with --publish, swarm published the service on a random port (30000 in this case):

$ curl http://localhost:30000
<!doctype html>
<html>
<title>RSVP App!</title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="/static/bootstrap.min.css">
<link rel="icon" href="https://raw.githubusercontent.com/cloudyuga/rsvpapp/master/static/cloudyuga.png" type="image/png" sizes="16x16">
<script type="text/javascript" src="/static/jquery.min.js"></script>
<script type="text/javascript" src="/static/bootstrap.min.js"></script>
<body>
<div class="jumbotron">
<div class="container">
<div align="center">
<h2><a href="">CloudYuga<img src="https://raw.githubusercontent.com/cloudyuga/rsvpapp/master/static/cloudyuga.png"/>Garage RSVP!</a></h2>
<h3><font color="maroon"> Serving from Host: 75658e4fd141 </font>
<font size="8" >

Deleting the CloudYuga RSVP service:

$ docker service rm rsvp

 

Create the CloudYuga RSVP service with custom names:

$ docker service create --name rsvp -e MONGODB_HOST=mongodb -e TEXT1="Docker Meetup" -e TEXT2="Bangalore" --publish 5000 --network rsvpnet teamcloudyuga/rsvpapp

 

Scale the CloudYuga RSVP service:

$ docker service scale rsvp=5

 

Demonstrating the rolling update

$docker service update --image teamcloudyuga/rsvpapp:v1 --update-delay 10s rsvp

Keep refreshing the RSVP frontend and watch for changes: "Name" should change to "Your Name".

Demonstrating DAB and Docker Compose

$ mkdir cloudyuga
$ cd cloudyuga/
:~/cloudyuga$ docker-compose bundle -o cloudyuga.dab
WARNING: Unsupported top level key 'networks' - ignoring
Wrote bundle to cloudyuga.dab
:~/cloudyuga$ vi docker-compose.yml
:~/cloudyuga$ ls
cloudyuga.dab  docker-compose.yml

cat cloudyuga.dab
{
    "Services": {
        "mongodb": {
            "Env": [
                "MONGODB_DATABASE=rsvpdata"
            ],
            "Image": "mongo@sha256:08a90c3d7c40aca81f234f0b2aaeed0254054b1c6705087b10da1c1901d07b5d",
            "Networks": [
                "rsvpnet"
            ],
            "Ports": [
                {
                    "Port": 27017,
                    "Protocol": "tcp"
                }
            ]
        },
        "web": {
            "Env": [
                "MONGODB_HOST=mongodb"
            ],
            "Image": "teamcloudyuga/rsvpapp@sha256:df59278f544affcf12cb1798d59bd42a185a220ccc9040c323ceb7f48d030a75",
            "Networks": [
                "rsvpnet"
            ],
            "Ports": [
                {
                    "Port": 5000,
                    "Protocol": "tcp"
                }
            ]
        }
    },
    "Version": "0.1"
}
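Since a DAB file is plain JSON, it can also be inspected with ordinary shell tools. A small sketch, using a trimmed-down copy of the bundle above as sample data; on a real host you would read the file itself:

```shell
# Trimmed copy of the cloudyuga.dab content shown above, kept to the
# port-related fields for illustration
dab='{
    "Services": {
        "mongodb": { "Ports": [ { "Port": 27017, "Protocol": "tcp" } ] },
        "web":     { "Ports": [ { "Port": 5000, "Protocol": "tcp" } ] }
    },
    "Version": "0.1"
}'

# List every published port declared in the bundle
ports=$(printf '%s\n' "$dab" | sed -n 's/.*"Port": \([0-9][0-9]*\).*/\1/p')
echo "$ports"
```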

Scaling the Services:

$docker service ls
ID            NAME               REPLICAS  IMAGE                                                                                          COMMAND
66kcl850fkkh  cloudyuga_web      1/1       teamcloudyuga/rsvpapp@sha256:df59278f544affcf12cb1798d59bd42a185a220ccc9040c323ceb7f48d030a75
aesw4vcj1s11  cloudyuga_mongodb  1/1       mongo@sha256:08a90c3d7c40aca81f234f0b2aaeed0254054b1c6705087b10da1c1901d07b5d
aztab8c3r22c  rsvp               5/5       teamcloudyuga/rsvpapp:v1
f4olzfoomu76  mongodb            1/1       mongo:3.3

$docker service scale rsvp=5

Restricting a service to node-1:

$sudo docker node update --label-add type=ubuntu node-1

master==>sudo docker service create --name mycloud --replicas 5 --network collabnet --constraint 'node.labels.type == ubuntu' dockercloud/hello-world
0elchvwja6y0k01mbft832fp6
master==>sudo docker service ls
ID            NAME               REPLICAS  IMAGE                                                                                          COMMAND
0elchvwja6y0  mycloud            0/5       dockercloud/hello-world
66kcl850fkkh  cloudyuga_web      3/3       teamcloudyuga/rsvpapp@sha256:df59278f544affcf12cb1798d59bd42a185a220ccc9040c323ceb7f48d030a75
aesw4vcj1s11  cloudyuga_mongodb  1/1       mongo@sha256:08a90c3d7c40aca81f234f0b2aaeed0254054b1c6705087b10da1c1901d07b5d
aztab8c3r22c  rsvp               5/5       teamcloudyuga/rsvpapp:v1
f4olzfoomu76  mongodb            1/1       mongo:3.3

master==>sudo docker service ps mycloud
ID                         NAME       IMAGE                    NODE    DESIRED STATE  CURRENT STATE          ERROR
a5t3rkhsuegf6mab24keahg1y  mycloud.1  dockercloud/hello-world  node-1  Running        Running 5 seconds ago
54dfeuy2ohncan1sje2db9jty  mycloud.2  dockercloud/hello-world  node-1  Running        Running 3 seconds ago
072u1dxodv29j6tikck8pll91  mycloud.3  dockercloud/hello-world  node-1  Running        Running 4 seconds ago
enmv8xo3flzsra5numhiln7d3  mycloud.4  dockercloud/hello-world  node-1  Running        Running 4 seconds ago
14af770jbwipbgfb5pgwr08bo  mycloud.5  dockercloud/hello-world  node-1  Running        Running 4 seconds ago
master==>

 
