Docker 17.06 Swarm Mode: Now with built-in MacVLAN & Node-Local Networks support

Docker 17.06.0-ce-RC5 was announced five days back and is available for testing. It brings numerous new features & enablements under this upcoming release. A few of my favourites include support for secrets on Windows, the ability to specify a secret location within the container, a --format option for the docker system df command, support for placement preferences in docker stack deploy, monitored resource type metadata for the GCP logging driver, and build & engine info Prometheus metrics, to list a few. But one of the most notable and awaited features is support for swarm-mode services with node-local networks such as macvlan, ipvlan, bridge and host.

Under the upcoming 17.06 release, Docker provides support for local scope networks in Swarm. This includes any local scope network driver. Some examples of these are bridge, host, and macvlan, though any local scope network driver, built-in or plug-in, will work with Swarm. Previously only swarm scope networks like overlay were supported. This is great news for all Docker networking enthusiasts.

A Brief Intro to MacVLAN:


In case you're new to it, the MACVLAN driver provides direct access between containers and the physical network. It also allows containers to receive routable IP addresses that are on the subnet of the physical network.

MACVLAN offers a number of unique features and capabilities. It has positive performance implications by virtue of its very simple and lightweight architecture. Its use cases include very low-latency applications and networking designs that require containers to be on the same subnet as, and use IPs from, the external host network. The macvlan driver uses the concept of a parent interface. This interface can be a physical interface such as eth0, a sub-interface for 802.1q VLAN tagging like eth0.10 (.10 representing VLAN 10), or even a bonded host adaptor which bundles two Ethernet interfaces into a single logical interface.

To test-drive MacVLAN under Swarm Mode, I will leverage my existing 3-node Swarm Mode cluster on a VMware ESXi system. I have tested it on bare metal systems and VirtualBox and it works equally well.

[Updated: 9/27/2017 – I have added docker-stack.yml at the end of this guide to show you how to build services out of docker-compose.yml file. DO NOT FORGET TO CHECK IT OUT]

Installing Docker 17.06 on all the Nodes:

curl -fsSL https://test.docker.com > install-docker.sh
sh install-docker.sh

 

Verifying the latest Docker version:

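The screenshot in the original post simply showed the installed version; you can check it on each node with:

sudo docker version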

 

Setting up 2 Node Swarm Mode Cluster:

 

 
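The setup commands were embedded as a gist in the original post; as a quick sketch, the cluster is formed the usual way (substitute the IP of your manager node and the join token printed by swarm init):

manager1==>sudo docker swarm init --advertise-addr <manager-ip>
worker1==>sudo docker swarm join --token <worker-token> <manager-ip>:2377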

Attention VirtualBox users: if you are using VirtualBox, the MACVLAN driver requires the network and interfaces to be in promiscuous mode.

A local network config is created on each host. The config holds host-specific information, such as the subnet allocated for this host’s containers. --ip-range is used to specify a pool of IP addresses that is a subset of IPs from the subnet. This is one method of IPAM to guarantee unique IP allocations.

Manager:

manager1==>sudo docker network create --config-only --subnet 100.98.26.0/24 -o parent=ens160.60 --ip-range 100.98.26.100/24 collabnet

 

Worker-1:

worker1==>sudo docker network create --config-only --subnet 100.98.26.0/24 -o parent=ens160.60 --ip-range 100.98.26.100/24 collabnet

 

 

Instantiating the macvlan network globally

Manager:

manager1==>sudo docker network create -d macvlan --scope swarm --config-from collabnet swarm-macvlan

 

Deploying a service to the swarm-macvlan network:

Let us go ahead and deploy a WordPress application. We will create 2 services – wordpressapp and wordpressdb1 – and attach them to the "swarm-macvlan" network as shown below:

Creating Backend Service:

docker service create --replicas 1 --name wordpressdb1 --network swarm-macvlan --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql

Let us verify if MacVLAN network scope holds this container:

 
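The verification was shown as a screenshot; the same check can be done from the manager node with something like:

sudo docker service ps wordpressdb1
sudo docker network inspect swarm-macvlan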

Creating Frontend Service

Next, it’s time to create wordpress application i.e. wordpressapp

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network swarm-macvlan --replicas 4 --name wordpressapp --publish 80:80/tcp wordpress:latest

Verify if both the services are up and running:

 
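Again, the output was shown as a screenshot; a quick check from the manager node:

sudo docker service ls
sudo docker service ps wordpressapp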

Verifying that all the containers on the master node pick up the desired IP addresses from the subnet:

 
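The screenshot showed each container holding an address from the 100.98.26.0/24 subnet; one quick way to check this on the node is:

docker ps -q | xargs docker inspect --format '{{.Name}} => {{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'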

Docker Compose File showcasing MacVLAN Configuration
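The docker-stack.yml itself was embedded as a gist in the original post. Here is a minimal sketch of what it likely looked like, inferred from the stack output further below (service names db and wordpress, images mysql:5.7 and wordpress:latest, and the pre-created swarm-macvlan network declared as external):

version: "3.3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: collab123
      MYSQL_DATABASE: wordpress
    networks:
      - swarm-macvlan
  wordpress:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_PASSWORD: collab123
    ports:
      - "80:80"
    networks:
      - swarm-macvlan
networks:
  swarm-macvlan:
    external: true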

Ensure that you run the below commands to set up the MacVLAN configuration for your services before you execute the docker stack deploy CLI shown further down:

root@ubuntu-1610:~# docker network create --config-only --subnet 100.98.26.0/24 --gateway 100.98.26.1 -o parent=ens160.60 --ip-range 100.98.26.120/24 collabnet
da2912d762cbf5f5ea412e6e4d69352a3285f720e23740529af9e533c7168729
 
root@ubuntu-1610:~# docker network create -d macvlan --scope swarm --config-from collabnet swarm-macvlan
jp76lts6hbbheqlbbhggumujd

 

Verify that the network inspect output shows the correct information:

root@ubuntu-1610:~/docker101/play-with-docker/wordpress/example1# docker network inspect swarm-macvlan
[
    {
        "Name": "swarm-macvlan",
        "Id": "jp76lts6hbbheqlbbhggumujd",
        "Created": "2017-09-27T02:12:00.827562388-04:00",
        "Scope": "swarm",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "100.98.26.0/24",
                    "IPRange": "100.98.26.120/24",
                    "Gateway": "100.98.26.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": "collabnet"
        },
        "ConfigOnly": false,
        "Containers": {
            "3c3f1ec48225ef18e8879f3ebea37c2d0c1b139df131b87adf05dc4d0f4d8e3f": {
                "Name": "myapp2_wordpress.1.nd2m62alxmpo2lyn079x0w9yv",
                "EndpointID": "a15e96456870590588b3a2764da02b7f69a4e63c061dda2798abb7edfc5e5060",
                "MacAddress": "02:42:64:62:1a:02",
                "IPv4Address": "100.98.26.2/24",
                "IPv6Address": ""
            },
            "d47d9ebc94b1aa378e73bb58b32707643eb7f1fff836ab0d290c8b4f024cee73": {
                "Name": "myapp2_db.1.cxz3y1cg1m6urdwo1ixc4zin7",
                "EndpointID": "201163c233fe385aa9bd8b84c9d6a263b18e42893176271c585df4772b0a2f8b",
                "MacAddress": "02:42:64:62:1a:03",
                "IPv4Address": "100.98.26.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "parent": "ens160"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "ubuntu-1610-1633ea48e392",
                "IP": "100.98.26.60"
            }
        ]
    }
]

Docker Stack Deploy CLI:

docker stack deploy -c docker-stack.yml myapp2
Ignoring unsupported options: restart
Creating service myapp2_db
Creating service myapp2_wordpress

Verifying if the services are up and running:

root@ubuntu-1610:~/# docker stack ls
NAME SERVICES
myapp2 2
root@ubuntu-1610:~/# docker stack ps myapp2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
nd2m62alxmpo myapp2_wordpress.1 wordpress:latest ubuntu-1610 Running Running 15 minutes ago
cxz3y1cg1m6u myapp2_db.1 mysql:5.7 ubuntu-1610 Running Running 15 minutes ago

Looking for Docker Compose file for Single Node?

 
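That compose file was also embedded in the original post. As a rough sketch, on a single (non-swarm) node the same setup can be expressed with a macvlan network declared directly in the compose file (the parent interface and subnet below are assumptions reusing the values from earlier in this guide):

version: "2"
services:
  wordpressdb1:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: collab123
      MYSQL_DATABASE: wordpress
    networks:
      - collabnet
  wordpressapp:
    image: wordpress:latest
    environment:
      WORDPRESS_DB_HOST: wordpressdb1
      WORDPRESS_DB_PASSWORD: collab123
    ports:
      - "80:80"
    networks:
      - collabnet
networks:
  collabnet:
    driver: macvlan
    driver_opts:
      parent: ens160.60
    ipam:
      config:
        - subnet: 100.98.26.0/24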

Cool.. I am going to leverage this for my Apache JMeter setup so that I can push load from different IPs using Docker containers.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more about what's new in the upcoming Docker 17.06 CE release by clicking on this link.


Running LinuxKit on AWS Platform made easy

With around 2800+ GitHub stars, 54 contributors (28 external), and 50+ commits per week since the DockerCon launch, LinuxKit has really gained a lot of momentum among community users. LinuxKit today supports multiple platforms – AWS, Hyper-V, Azure, macOS, Google Cloud Platform, Packet.net, VMware Fusion, QEMU & local hypervisors. Installation of LinuxKit on macOS has been simplified using Homebrew: just 2 simple brew commands and moby is ready to build your LinuxKit OS image.


Soon after DockerCon 2017, I wrote a blog post on how to get started with LinuxKit for Google Cloud Platform. Since then I have been keeping a close eye on the latest features, enablements & releases of LinuxKit. Under this blog post, I present a simplified approach to get a LinuxKit OS instance running on top of the Amazon Web Services (AWS) platform.

Here we go..

Steps:

  1. Install AWS CLI on macOS(Using Homebrew)
  2. Installing LinuxKit & Moby Tool(Using Homebrew)
  3. Configuring AWS S3 bucket
  4. Building a RAW image with Moby tool
  5. Configuring VM Import Service Role
  6. Upload the aws.raw Image to remote AWS S3 bucket using LinuxKit
  7. Run the LinuxKit OS as EC2 Instance 

Installing AWS CLI on macOS

 

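The screenshots walked through the Homebrew-based install and credential setup, roughly:

brew install awscli
aws configure    # enter your Access Key ID, Secret Access Key, default region and output format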

 

Setting the AWS_REGION environment variable as this is used by the AWS Go SDK:

export AWS_REGION=ap-south-1

Installing LinuxKit & Moby tool:

brew tap linuxkit/linuxkit
brew install --HEAD moby
brew install --HEAD linuxkit

 


 

Creating/Configuring AWS S3 bucket:

Open up AWS Management console and click on S3 under AWS Services. It will open up the below page:

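If you prefer the CLI over the console, the bucket (named linuxkit-images here to match the commands used later in this post) can also be created with:

aws s3 mb s3://linuxkit-images --region ap-south-1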

 

Building AWS RAW Image using Moby:

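The screenshots show the moby tool building the image from the AWS example YAML shipped with the LinuxKit repository; the command looks roughly like this (the example path and available flags may differ slightly between moby versions):

moby build examples/aws.yml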

 

This builds up aws.raw which we need to push to AWS S3 bucket using the below command:

linuxkit push aws -bucket linuxkit-images  -timeout 1200 aws.raw

This will throw the below error:

"…The service role <vmimport> does not exist or does not have sufficient permissions for the service to continue. status code: 400, request id: 0ce661fb-e9b4-40b8-af07-9da6a6fc3c94…"

Follow the next section to get it fixed..

Configuring VM Import Service Role 

VM Import requires a role to perform certain operations in your account, such as downloading disk images from an Amazon S3 bucket. You must create a role named vmimport with a trust relationship policy document that allows VM Import to assume the role, and you must attach an IAM policy to the role. I used this script to setup everything in a single shot:
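The script itself is not reproduced here; the sketch below follows the standard AWS VM Import prerequisites (the role name vmimport is mandatory, while the bucket name linuxkit-images is an assumption matching the one used in this post – adjust it for your setup):

cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:Externalid": "vmimport" } }
    }
  ]
}
EOF

cat > role-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "s3:GetBucketLocation", "s3:GetObject", "s3:ListBucket" ],
      "Resource": [ "arn:aws:s3:::linuxkit-images", "arn:aws:s3:::linuxkit-images/*" ]
    },
    {
      "Effect": "Allow",
      "Action": [ "ec2:ModifySnapshotAttribute", "ec2:CopySnapshot", "ec2:RegisterImage", "ec2:Describe*" ],
      "Resource": "*"
    }
  ]
}
EOF

# Create the vmimport role with the trust relationship and attach the inline policy
aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json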

Upload the aws.raw Image to remote AWS S3 bucket using LinuxKit

It’s time to push the RAW Image to S3 bucket:

linuxkit push aws -bucket linuxkit-images -timeout 1200 aws.raw
Created AMI: ami-0a81fe65

Creating an instance

linuxkit run aws aws
Created instance i-02b28f9f8eee1dcf2
Instance i-02b28f9f8eee1dcf2 is running
 

Open up your AWS Management console and you will soon see new instance coming up.


 

Here you go.. The AWS EC2 instance running LinuxKit OS is up and running..

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more about what's happening with AWS project activities by clicking on this link.


Why Infrakit & LinuxKit are better together for Building Immutable Infrastructure?

Yet Another Problem Statement(YAPS)…

Let us accept the fact – "Managing Docker on different infrastructure is still difficult and not portable". While working on Docker for Mac, AWS, GCP & Azure, the Docker team realized the need for a standard way to create and manage infrastructure state that was portable across any type of infrastructure, from different cloud providers to on-prem. One serious challenge is that each vendor has differentiated IP invested in how they handle certain aspects of their cloud infrastructure. It is not enough to just provision n number of servers; what IT ops teams need is a simple and consistent way to declare the number of servers, what size they should be, and what sort of base software configuration is required. Also, in the case of server failures (especially unplanned ones), that sudden change needs to be reconciled against the desired state to ensure that any required servers are re-provisioned with the necessary configuration. The Docker team introduced and open sourced "InfraKit" last year to solve these problems and to provide the ability to create a self-healing infrastructure for distributed systems.


InfraKit is basically a toolkit for infrastructure orchestration. With an emphasis on immutable infrastructure, it breaks down infrastructure automation and management processes into small, pluggable components. These components work together to actively ensure the infrastructure state matches the user’s specifications. InfraKit therefore provides infrastructure support for higher-level container orchestration systems and can make your infrastructure self-managing and self-healing.

Why the Integration of LinuxKit with Infrakit now??

LinuxKit is gaining momentum as a toolkit for building custom minimal, immutable Linux distributions. Integration of InfraKit with LinuxKit helps users build and deploy custom OS images to a variety of targets – from a single VM instance on the Mac (via xhyve/HyperKit, no VirtualBox) to a cluster of them, as well as booting a remote ARM host on Packet.net from the local laptop via an ngrok tunnel.

Under this blog post, I will show you how InfraKit and LinuxKit work together to build immutable infrastructure. I will test-drive these toolkits on my macOS Sierra 10.12.3 system.

Installing Homebrew

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Installing WGET

brew install wget

Installing Infrakit:

sudo wget -qO- https://docker.github.io/infrakit/install | sh

Copy the file to your path:

sudo cp ./infrakit /usr/local/bin/

InfraKit is now cross-compiled and installed on your system. Let us see what playbooks are available as of now.


 

Adding INFRAKIT_HOME variable:

bash-3.2$ source ./.bash_profile

bash-3.2$ echo $INFRAKIT_HOME

/Users/ajeetraina/.infrakit

bash-3.2$
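If the variable is not already defined in your profile, a line like the following (path assumed) can be appended before sourcing it:

echo 'export INFRAKIT_HOME=~/.infrakit' >> ~/.bash_profile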

Adding LinuxKit Playbook

By default, there is no playbook available. Let us try to install the LinuxKit playbook and see how InfraKit integrates with LinuxKit.

Create a directory called testproj. 

mkdir testproj
cd testproj

Add the LinuxKit Playbook:

 infrakit playbook add linuxkit https://docker.github.io/infrakit/playbooks/linuxkit/index.yml

Verifying the LinuxKit Playbook

sudo infrakit playbook ls
PLAYBOOK                      URL                           
linuxkit                      https://docker.github.io/infrakit/playbooks/linuxkit/index.yml


Cool.

Run the below command to see what options are available under LinuxKit playbook:

infrakit playbook linuxkit


 

Installing HyperKit Plugin on your Mac

infrakit playbook linuxkit install-hyperkit

 


This command installs the HyperKit plugin and adds it under the /usr/local/bin directory automatically.

Under the testproj directory, you will now see infrakit-instance-hyperkit gets created:

testproj$ ls
docker4mac infrakit-instance-hyperkit

Installing Moby Build Tool  to build custom OS images

infrakit playbook linuxkit install-moby


Once it gets completed, you will see the options to build OS images:


Now the software has been installed. The Playbook has command to start everything.

Starting the LinuxKit 

sudo infrakit playbook linuxkit start


Ensure that you type “yes” for ‘Start HYPERKIT plugin’ while running the above command.

By now, the Infrakit command line interface will show additional options:

infrakit -h
Infrakit command line interface
Usage:
infrakit [command]
Available Commands:
event Access event exposed by infrakit plugins
event-time Access plugin event-time which implements Metadata/0.1.0
event-time/time Access plugin event-time/time which implements Metadata/0.1.0
event-time/timer Access plugin event-time/timer which implements Event/0.1.0
flavor-vanilla Access plugin flavor-vanilla which implements Flavor/0.1.0
group Access plugin group which implements Group/0.1.0,Manager/0.1.0,Metadata/0.1.0,Updatable/0.1.0
group-stateless Access plugin group-stateless which implements Group/0.1.0,Metadata/0.1.0
instance-hyperkit Access plugin instance-hyperkit which implements Instance/0.5.0,Metadata/0.1.0
manager Access the manager
metadata Access metadata exposed by infrakit plugins
playbook Manage playbooks
plugin Manage plugins
remote Manage remotes
template Render an infrakit template at given url. If url is ‘-‘, read from stdin
util Utilities
version Print build version information
x Experimental features

As you see above, CLI is contextual. It basically discovers the hyperkit plugin running and generates a new command for you to access it

 

In case hyperkit plugin is not turning up, you can kill the old hyperkit instance process and re-start it.


Verify if the hyperkit plugin is up and running


In case you want to run HyperKit, there is recommended command for it.

bash-3.2$ infrakit playbook linuxkit run-hyperkit
Start HYPERKIT plugin? [no]: yes
Starting HYPERKIT plugin.  This must be running on the Mac as a daemon and not as a container
This plugin is listening at localhost:24865

By now, we have everything ready for our LinuxKit SSH playbook

LinuxKit SSHD Example:

Now let me show you how to build a LinuxKit image containing just a simple sshd. The file `sshd.yml` defines the components inside the image. Instead of a standard LinuxKit image yml, it is actually an InfraKit template that is rendered before the moby tool is invoked to build the actual OS image.

sudo infrakit playbook linuxkit demo-sshd

This will show up the detailed information on its usage on your terminal:

 


Let us first build the “SSHD” YAML file using the below command:

sudo infrakit playbook linuxkit demo-sshd build-image
 
 
Verifying the SSH outputs:
 
Looking into sshd.yml content:
 
The below content shows us that the SSH service has been created correctly:
 
The command `build-image` will collect user input such as the public key location and use that to generate the final input to `moby`.
Open up a new terminal to watch our first Hyperkit instance:
watch -d infrakit instance-hyperkit describe

Running the SSH Instance

Using the `hyperkit` subcommand (which does not require billing accounts / signup with providers), you can create a single instance or a cluster of instances after you run `build-image`.

The command `… hyperkit run-instance` will use hyperkit plugin to create a single guest vm that boots from the image you built with `build-image`.

infrakit playbook linuxkit demo-sshd hyperkit run-instance

This command brings up the LinuxKitOS instance. 

Now you can run the below command to get a shell from which you can SSH into the instance:

docker run --rm -ti -v ~/.ssh:/root/.ssh  infrakit/ssh /bin/sh

[ A Special Thanks to David Chung, Docker Team for assisting me understand LinuxKit Playbook thoroughly.]

In the future blog post, I will show you how LinuxKit + Infrakit + GCP works together.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more about what's happening with InfraKit project activities by clicking on this link.


A Quick Look at LinuxKit Packaging System

One of the most compelling features of LinuxKit is that everything is replaceable and customisable. You can now build your own customised Linux operating system and run it across various platforms like bare metal hardware, virtual machines, cloud instances etc. As of today, LinuxKit supports various platforms like OSX/Hypervisor, Google Cloud Platform, Packet.net, QEMU/KVM & VMware ESXi vCenter Server. Efforts have already started to enable this solution on AWS, Azure, Bluemix etc. For Raspberry Pi aspirants, there is already a feature request raised for enabling LinuxKit on the ARM64 platform.


LinuxKit is built with containers, to run containers. It involves easy tooling and easy iteration. Under LinuxKit, runc and containerd are used to run containers; containers under LinuxKit are managed by runc and containerd and not by Docker. Everything apart from the init process and runc/containerd runs in a container. Under this blog post, I will take you through the packaging system under LinuxKit. This might help you build your own customised OS for your infrastructure requirements.


Building Packages Under LinuxKit:

Packages under LinuxKit are just container images stored on DockerHub or in private registries. They are stored under Dockerhub under LinuxKit org.


Base packages for LinuxKit are stored under the pkg/ directory of the LinuxKit repository. To view them, you can clone the LinuxKit GitHub repository and look under the linuxkit/pkg directory as shown below:

git clone https://github.com/linuxkit/linuxkit
cd linuxkit/pkg/

 


 

There are different ways of building LinuxKit base packages. You can either use DockerHub images directly or build your own wrapper around DockerHub images in the form of a shell script. You have the flexibility to use customised base images or a custom package with just 1 binary.

It is important to note that the packaging system under LinuxKit makes extensive use of multi-stage builds to keep packages smaller. To demonstrate, let us pick one base package – OpenNTPD – and see what it contains.


Ajeets-MacBook-Air:openntpd ajeetraina$ pwd;ls
/Users/ajeetraina/linuxkit/pkg/openntpd
Dockerfile Makefile etc

As shown above, OpenNTPD contains a Dockerfile, Makefile & etc directory to hold NTP configurational changes.

Dockerfile:


There are multiple FROM lines in this Dockerfile, which indicates that the package is built using a multi-stage build. Let us look at it closely, line by line:

FROM linuxkit/alpine:630ee558e4869672fae230c78364e367b8ea67a9 AS mirror

In the first stage, we pull the Alpine base image which is part of the LinuxKit project. LinuxKit pins specific versions of all its packages, which is why a specific hash is hard-coded here.

RUN mkdir -p /out/etc/apk && cp -r /etc/apk/* /out/etc/apk/

This creates a directory where it is going to install the  contents of the packages.

RUN apk add --no-cache --initdb -p /out \
    alpine-baselayout \
    busybox \
    musl \
    openntpd \
    && true

It uses the Alpine Package Manager (apk) to install the standard base layout for the root filesystem, busybox for the shell environment, musl for the C library used by all of these components, and finally the OpenNTPD packages.

At the end of the first stage, a container image based on the Alpine base image has been built, holding in a directory the required files and packages – a root filesystem just big enough to run the OpenNTPD container.

RUN rm -rf /out/etc/apk /out/lib/apk /out/var/cache

This removes temporary/unwanted files from the /out directory so as to keep the container image smaller.

Under the 2nd stage, the content of the Dockerfile looks like this:

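The screenshot is not reproduced here, but the second stage can be reconstructed from the docker build output shown later in this post (the exact file in the repository may differ slightly):

FROM scratch
ENTRYPOINT []
CMD []
WORKDIR /
COPY --from=mirror /out/ /
COPY etc/ /etc/
CMD ["/usr/sbin/ntpd", "-d", "-s"]
LABEL org.mobyproject.config='{"capabilities": ["CAP_SYS_TIME", "CAP_SYS_NICE", "CAP_SYS_CHROOT", "CAP_SETUID", "CAP_SETGID"]}'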

 

This stage starts with an empty (scratch) container image. We copy the contents of the first stage (i.e. the /out directory) into our root filesystem. Then the OpenNTPD daemon configuration & scripts are added from the external etc/ directory into the container image. Next, we specify the command so that whenever the container starts, the OpenNTPD daemon is started. This gives us a minimal package which runs the OpenNTPD service.

The last line is very important and has recently been introduced under the LinuxKit project. If you remember, while building the YAML file for LinuxKit we used to specify a CAPABILITIES section. For this OpenNTPD service we won't require it now, because the LABEL section added during the build process carries that metadata – bind mounts so that /etc is mounted, the network namespace to use, and capabilities such as CAP_NET_BIND_SERVICE which help in binding the container to the interface.


File: Makefile


It contains the docker build command. The --squash parameter is passed to squash the layers of the image into a single layer in the 2nd stage to make it smaller, and --network=none indicates that while building, the container has no access to the network. Other essential entries include the HASH value, which provides the content hash of all the files in the directory; this ensures that only the required files are used to build up the container image.

Let us try to run make command and see if this builds up right container image for us.

Ajeets-MacBook-Air:openntpd ajeetraina$ sudo make
Password:
docker build --squash --no-cache --network=none -t linuxkit/openntpd:45deeb05f736162d941c9bf494983f655ab80aa5 .
Sending build context to Docker daemon  5.632kB
Step 1/12 : FROM linuxkit/alpine:630ee558e4869672fae230c78364e367b8ea67a9 AS mirror
 ---> 72ff54f67634
Step 2/12 : RUN mkdir -p /out/etc/apk && cp -r /etc/apk/* /out/etc/apk/
 ---> Running in 175f2b917c9a
 ---> 08b114fc941c
Removing intermediate container 175f2b917c9a
Step 3/12 : RUN apk add --no-cache --initdb -p /out     alpine-baselayout     busybox     musl     openntpd     && true
 ---> Running in 3109ff2301fa
(1/7) Installing musl (1.1.16-r9)
(2/7) Installing busybox (1.26.2-r4)
Executing busybox-1.26.2-r4.post-install
(3/7) Installing alpine-baselayout (3.0.4-r0)
Executing alpine-baselayout-3.0.4-r0.pre-install
Executing alpine-baselayout-3.0.4-r0.post-install
(4/7) Installing libressl2.5-libcrypto (2.5.4-r0)
(5/7) Installing libressl2.5-libssl (2.5.4-r0)
(6/7) Installing libressl2.5-libtls (2.5.4-r0)
(7/7) Installing openntpd (6.0_p1-r3)
Executing openntpd-6.0_p1-r3.pre-install
Executing busybox-1.26.2-r4.trigger
OK: 3 MiB in 7 packages
 ---> 2336220414ec
Removing intermediate container 3109ff2301fa
Step 4/12 : RUN rm -rf /out/etc/apk /out/lib/apk /out/var/cache
 ---> Running in c39c2d86c898
 ---> 9248b40c1e31
Removing intermediate container c39c2d86c898
Step 5/12 : FROM scratch
 ---> 
Step 6/12 : ENTRYPOINT
 ---> Running in a1475ce25417
 ---> 89a4eefb0f2e
Removing intermediate container a1475ce25417
Step 7/12 : CMD
 ---> Running in 3299eb03002b
 ---> e0761d845e86
Removing intermediate container 3299eb03002b
Step 8/12 : WORKDIR /
 ---> dc50915b1354
Removing intermediate container 37292d5db9f6
Step 9/12 : COPY --from=mirror /out/ /
 ---> 60c358443d49
Removing intermediate container 4c746169a103
Step 10/12 : COPY etc/ /etc/
 ---> 049744fe228f
Removing intermediate container f9f21176ca4d
Step 11/12 : CMD /usr/sbin/ntpd -d -s
 ---> Running in 94c8a776666d
 ---> 56d618a1fa76
Removing intermediate container 94c8a776666d
Step 12/12 : LABEL org.mobyproject.config '{"capabilities": ["CAP_SYS_TIME", "CAP_SYS_NICE", "CAP_SYS_CHROOT", "CAP_SETUID", "CAP_SETGID"]}'
 ---> Running in c1514342c0fb
 ---> 46987e8c78bc
Removing intermediate container c1514342c0fb
Successfully built c1dbda899687
Successfully tagged linuxkit/openntpd:45deeb05f736162d941c9bf494983f655ab80aa5
DOCKER_CONTENT_TRUST=1 docker pull linuxkit/openntpd:45deeb05f736162d941c9bf494983f655ab80aa5 || \
DOCKER_CONTENT_TRUST=1 docker push linuxkit/openntpd:45deeb05f736162d941c9bf494983f655ab80aa5
Pull (1 of 1): linuxkit/openntpd:45deeb05f736162d941c9bf494983f655ab80aa5@sha256:26e88ffd48262f4a03ed678d2edee35b807b4d7fe561a3aa6577eef325e317c4
sha256:26e88ffd48262f4a03ed678d2edee35b807b4d7fe561a3aa6577eef325e317c4: Pulling from linuxkit/openntpd
Digest: sha256:26e88ffd48262f4a03ed678d2edee35b807b4d7fe561a3aa6577eef325e317c4
Status: Image is up to date for linuxkit/openntpd@sha256:26e88ffd48262f4a03ed678d2edee35b807b4d7fe561a3aa6577eef325e317c4
Tagging linuxkit/openntpd@sha256:26e88ffd48262f4a03ed678d2edee35b807b4d7fe561a3aa6577eef325e317c4 as linuxkit/openntpd:45deeb05f736162d941c9bf494983f655ab80aa5

Let us verify if it built required image or not:


The image is just 3.62MB, which is quite small.

Now, you should be able to use this OpenNTPD base image under your YAML file so that moby tool can build up LinuxKit OS image:

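The YAML was shown as a screenshot in the original post; the relevant services entry looks roughly like this, using the image tag built above:

services:
  - name: ntpd
    image: linuxkit/openntpd:45deeb05f736162d941c9bf494983f655ab80aa5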

The above YAML content is picked up from the docker.yml example file, which contains an ntpd service container created from the OpenNTPD base image.

In the future blog post, I will showcase how to build your own customised Kernel for LinuxKit.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more about the latest LinuxKit project activities by clicking on this link.


Test Drive Multitenant Feature with Oracle 12C Enterprise Edition Docker Store Image

At DockerCon last month, Oracle released its flagship databases, middleware and developer tools into the Docker Store marketplace via the Docker Certification Program. What does it mean to developers like me? It means that you can now pull OFFICIAL images of Oracle products in Docker and quickly start developing, testing and deploying modern enterprise applications. Isn't it cool?

The Docker Certification Program (DCP) framework is gaining a lot of attention among partners; it is a single platform allowing them to integrate and certify their technology for the Docker EE commercial platform. As of today, Oracle has published Oracle Instant Client, Oracle Java 8 SE (Server JRE), Oracle Coherence, Oracle Database Enterprise Edition & Oracle Linux. You can directly view the solutions offered under this link.


 

Under this post, I will demonstrate how to get started with Oracle Database Enterprise Edition 12.1.0.2 in very simplified way:

Step-1:  Logging / Registering to Docker Store

You will need a Docker Hub account to log in to Docker Store and pull the Oracle Database EE Docker image as shown below:

sudo  docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don’t have a Docker ID, head over to https://hub.docker.com to create one.
Username (ajeetraina): ajeetraina
Password:
Login Succeeded
[root@pxemaster ~]#

 

Step-2: Pulling Oracle Database Enterprise Docker Image

sudo docker pull store/oracle/database-enterprise:12.1.0.2

Let us verify once the image has been pulled:


It is 5.27 GB in size and hence might take a long time to pull depending on your network connectivity. I have pulled other Oracle products too, as shown above.

Step-3:  Preparing Environment File

To create a database container, we need to use an environment file like the one below to pass configuration parameters into the container:

##------------------------------------------------------------------
## Specify the basic DB parameters
##------------------------------------------------------------------

## db sid (name)
## default : ORCL
## cannot be longer than 8 characters

DB_SID=OraDoc

## db passwd
## default : Oracle

DB_PASSWD=MyPasswd123

## db domain
## default : localdomain

DB_DOMAIN=my.domain.com

## db bundle
## default : basic
## valid : basic / high / extreme
## (high and extreme are only available for enterprise edition)

DB_BUNDLE=basic

## end

The above is just an example file. You can make changes accordingly. I have changed DB_DOMAIN, DB_SID and DB_PASSWD according to my infrastructure. Save the file by name “env”.

 

Step-4: Running the Oracle Database Enterprise Container

sudo docker run -d --env-file env -p 1527:1521 -p 5507:5500 -it --name oracleDB --shm-size="12g" store/oracle/database-enterprise:12.1.0.2

where,

env is the path to the environment file you created using the above example.
1527 is the port on the host machine mapped to the container's port 1521 (listener port).
5507 is the port on the host machine mapped to the container's port 5500 (HTTP service port).
oracleDB is the name of the container you want to create.
12g is the shared memory size for the container; the minimum requirement is 4GB (--shm-size="4g").
store/oracle/database-enterprise:12.1.0.2 is the image that you use to create the container.

Step-5:  Verify that the oracleDB container is running or not:

sudo docker ps


As shown above, the container ID starting with d88 indicates Oracle Database container running.

“…We are not yet done. ..”

You can enter into the container and verify that Oracle Linux Server is the base image for this container.


 

Step-6:  Running the required script to start Oracle Database

The database setup and startup are executed by running “/bin/bash /home/oracle/setup/dockerInit.sh“, which is the default CMD instruction in the images. 

docker exec -it <container_name> /bin/bash

 
[root@localhost ~]# docker exec -it d88 /bin/bash /home/oracle/setup/dockerInit.sh
User check : root.
Start up Oracle Database
Last login: Fri May 26 07:20:07 UTC 2017
Fri May 26 07:21:13 UTC 2017
start database
start listener
The database is ready for use .
Fri May 26 07:18:51 UTC 2017
User check : root.
Setup Oracle Database
Fri May 26 07:21:13 UTC 2017
User check : root.
Start up Oracle Database…

This will take about 5 to 8 minutes to be up and running. You can check logs which are kept under /home/oracle/setup/log location.

You can verify log file placed under “/home/oracle/setup/log/setupDB.log“. If “Done ! The database is ready for use .” is shown, the database setup was successful.

Let us switch to the oracle user and try testing the DB connectivity using SQL*Plus:

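The session was shown as a screenshot; as a sketch, the connection test looks roughly like this (the container name is the one used earlier in this post):

docker exec -it oracleDB bash
su - oracle
sqlplus / as sysdba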

 

Multitenant: Connecting to Container Databases (CDB) & Pluggable Databases (PDB) 

If you are new to Oracle 12c, the multitenant option introduced in Oracle Database 12c allows a single container database (CDB) to host multiple separate pluggable databases (PDB). I couldn't wait to test this out & see how to connect to container databases (CDB) and pluggable databases (PDB).

By default, PDB1 gets automatically created during the database installation. You can check that through the below command:

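For instance, from a SQL*Plus session:

SQL> show pdbs
SQL> select name, open_mode from v$pdbs;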

The V$SERVICES view can be used to display the available services from the database, as shown below:
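For example:

SQL> select name from v$services;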

Let us see how to connect to one of the container database.

Before you connect to the container database, let us setup a user first:

SQL> create user raina identified by raina;
User created.

 

SQL> grant dba to raina;
Grant succeeded.

 

SQL> grant create session to raina;
Grant succeeded.

 

SQL> grant connect to raina;
Grant succeeded.

 

SQL> conn raina/raina@pdb1
Connected.
SQL>

Follow the below commands to connect to PDB1 using the above user account:

SQL> ALTER SESSION SET container = pdb1;


SQL> SHOW CON_NAME


SQL> conn raina/raina@pdb1

 


 

Hence, we are now connected to one of the pluggable databases running inside the Oracle Database Enterprise Edition container.

In a future post, we will look at how Oracle Instant Client connects to the Oracle Database Enterprise Docker container.

 

 

 


Topology Aware Scheduling under Docker v17.05.0 Swarm Mode Cluster

 

The Docker 17.05.0 final release went public exactly 2 weeks back. This community release was the first release as part of the new Moby project. With this release, numerous interesting features like multi-stage build support in the builder, using build-time ARG in FROM, and DEB packaging for Ubuntu 17.04 (Zesty Zapus) have been included. With this latest release, the Docker team also brought numerous new features and improvements in terms of Swarm Mode – for example synchronous service commands, automatic service rollback on failure, improvements to the raft transport package, service logs formatting etc.


 

Placement Preference under Swarm Mode:

One of the prominent new features introduced under 17.05.0-CE Swarm Mode is placement preferences. The placement preference feature allows you to divide tasks evenly over different categories of nodes. It allows you to balance tasks between multiple datacenters or availability zones. One can use a placement preference to spread out tasks to multiple datacenters and make the service more resilient in the face of a localized outage. You can use additional placement preferences to further divide tasks over groups of nodes. Under this blog, we will set up a 5-node Swarm Mode cluster on the play-with-docker platform and see how to balance services over multiple racks within each datacenter. (Note – this is not a real-world scenario but is based on the assumption that the nodes are placed in 3 different racks.)

Assumption:  There are 3 datacenter Racks holding respective nodes as shown:

{Rack-1=> Node1, Node2 and Node3},

{Rack-2=> Node4}  &

{Rack-3=> Node5}

 

Creating Swarm Master Node:

Open up Docker Playground to build up Swarm Cluster.

docker swarm init --advertise-addr 10.0.116.3


 

Adding Worker Nodes to Swarm Cluster

docker swarm join --token <token-id> 10.0.116.3:2377


Create 3 more instances and add those nodes as worker nodes. This should build up 5 node Swarm Mode cluster.


 

Setting up Visualizer Tool 

To showcase this demo, I will leverage a fancy popular Visualizer tool.

 

git clone https://github.com/ajeetraina/docker101
cd docker101/play-with-docker/visualizer

 

All you need is to execute docker-compose command to bring up visualizer container:

 

docker-compose up -d

 


 

Click on port “8080” which gets displayed on top centre of this page and it should display a fancy visualiser depicting Swarm Mode cluster nodes.


Creating an Overlay Network:

$ docker network create -d overlay collabnet


 

Let us first try to create the services with no placement preference and no node labels.

Setting up WordPress DB service:

docker service create --replicas 10 --name wordpressdb1 --network collabnet --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest


When you run the above command, the swarm will spread the containers evenly node-by-node. Hence, you will see 2-containers per node as shown below:


Setting up WordPress Web Application:

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --network collabnet --replicas 3 --name wordpressapp --publish 80:80/tcp wordpress:latest


Visualizer:


As per the visualizer, you might end up with an uneven distribution of services. For example, Rack-1 holding node-1, node-2 and node-3 looks to have an almost equal distribution of services, while Rack-2, which holds node4, lacks the WordPress frontend application.

Here Comes Placement Preference for a rescue…

Under the latest release, Docker team has introduced a new feature called “Placement Preference Scheduling”. Let us spend some time to understand what it actually means. You can set up the service to divide tasks evenly over different categories of nodes. One example of where this can be useful is to balance tasks over a set of datacenters or availability zones. 

This uses --placement-pref with a spread strategy (currently the only supported strategy) to spread tasks evenly over the values of the datacenter node label. In this example, we assume that every node has a datacenter node label attached to it. If there are three different values of this label among nodes in the swarm, one third of the tasks will be placed on the nodes associated with each value. This is true even if there are more nodes with one value than another. For example, consider the following set of nodes:

  • Three nodes with node.labels.datacenter=india
  • One node with node.labels.datacenter=uk
  • One node with node.labels.datacenter=us

Considering the last example, since we are spreading over the values of the datacenter label and the service has 5 replicas, at least 1 replica should be available in each datacenter. There are three nodes associated with the value "india", so each one will get one of the three replicas reserved for that value. There is 1 node with the value "uk", and it will receive the replica reserved for that value. Finally, "us" has a single node that will likewise get at least 1 replica reserved for it.

To understand more clearly, let us assign node labels to Rack nodes as shown below:

Rack-1 : 

Node-1

docker node update --label-add datacenter=india node1
docker node update --label-add datacenter=india node2
docker node update --label-add datacenter=india node3

Rack-2

docker node update --label-add datacenter=uk node4

Rack-3

docker node update --label-add datacenter=us node5


 

Removing both the services:

docker service rm wordpressdb1 wordpressapp

Let us now pass placement preference parameter to the docker service command:

docker service create --replicas 10 --name wordpressdb1 --network collabnet --placement-pref "spread=node.labels.datacenter" --env MYSQL_ROOT_PASSWORD=collab123 --env MYSQL_DATABASE=wordpress mysql:latest


 

Visualizer:


Rack-1(node1+node2+node3) has 4 copies, Rack-2(node4) has 3 copies and Rack-3(node5) has 3 copies.

Let us run WordPress Application service likewise:

docker service create --env WORDPRESS_DB_HOST=wordpressdb1 --env WORDPRESS_DB_PASSWORD=collab123 --placement-pref "spread=node.labels.datacenter" --network collabnet --replicas 3 --name wordpressapp --publish 80:80/tcp wordpress:latest


Visualizer: As shown below, we have used placement preference feature to ensure that the service containers get distributed across the swarm cluster on both the racks.


 

As shown above, --placement-pref ensures that the tasks are spread evenly over the values of the datacenter node label. Currently spread is the only supported strategy. Both engine labels and node labels are supported by placement preferences.

Please Note: If you want to try this feature with Docker compose , you will need Compose v3.3 which is slated to arrive under 17.06 release.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Know more about the latest Docker releases by clicking on this link.

 


When Moby Meet Kubernetes for the first time

Moby has turned out to be an open playground for collaborators. It has become a popular collaborative project for the container ecosystem to assemble container-based systems. A tremendous amount of effort has been put into containerizing applications, but what about the platform which runs those containers? Shouldn't that be containerized too? Moby is the answer. With a library of over 80+ components for all vital aspects of a container system – OS, container runtime, orchestration, infrastructure management, networking, storage, security, build, image distribution, etc. – Moby can help you package your own components as containers. The Moby Project enables customers to plug and play their favorite technology components to create their own custom platform. Interestingly, all Moby components are containers, so creating new components is as easy as building a new OCI-compatible container.

While the Moby project provides you with a command-line tool called "moby" to assemble components, LinuxKit is a valuable toolkit which allows you to build secure, portable and lean operating systems for containers. It provides a container-based approach to building a customized Linux subsystem for each type of container. It is based on containerd and has its own Linux kernel, system daemon and system services.


I attended DockerCon 2017 in Austin, TX last month, and one of the coolest weekend projects showcased by the Docker team was running Kubernetes on a Mac using Moby and LinuxKit. In case you're completely new to Kubernetes, it is an open-source system for automating deployment, scaling and management of containerized applications. It was originally designed by Google and donated to the Cloud Native Computing Foundation. It provides a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts". It supports a range of container tools, including Docker.


 

 


 

The main benefits of LinuxKit for Kubernetes include reliable deployment, a lower security footprint, and easy customization when building your own desired base image. Under this blog post, I am going to demonstrate how one can easily create minimal and immutable Kubernetes OS images with LinuxKit.

Pre-requisite:  

  1. Install the latest Edge Release of Docker for Mac and Engine through this link.
  2. Please note that if you are using Stable Release of Docker for Mac, you won’t be able to setup Multi-node Kubernetes cluster as the stable release lack Multi-host functionality of VPNKit. Do refer this known issue. The support for multi-host networking was introduced in the latest Edge release.

 


  • Ensure that Docker for Mac Edge Release gets displayed once you have installed it properly.

 


 

Clone the LinuxKit Repository as shown:

git clone https://github.com/linuxkit/linuxkit

 

Build the Moby and LinuxKit tool first using the below commands:

 

cd linuxkit
make
cp -rf bin/moby /usr/local/bin/
cp -rf bin/linuxkit /usr/local/bin/

 

Change directory to kubernetes project:

 

cd linuxkit/projects/kubernetes

 

You will find the below list of files and directories:


 

Let us first look at the kube-master.yml file. Everything under LinuxKit is just a YAML file. This file starts with a section defining the kernel configuration; the init section just lists images that are used for the init system and are unpacked directly into the root filesystem; the onboot section indicates that these containers are run to completion sequentially, using runc, before anything else is started. As shown below, under the services section there is a kubelet service defined which uses the errordeveloper/mobykube:master image to build up the Kubernetes node.

Edit kube-master.yml and add your public SSH key to files section. You can generate the SSH key using ssh-keygen command.

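If you don't already have a key pair, something like this generates one and prints the public key to paste into the files section:

ssh-keygen -t rsa -b 4096
cat ~/.ssh/id_rsa.pub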

 

Once you have added your public SSH key, go ahead and build OS images using the below command:

 

sudo make build-vm-images

 

The above command provides you with the below output:

 


 

Few of the important files includes:

kube-node-kernel  kube-node-initrd.img  kube-node-cmdline

 

Under the same directory, you will find a file called “boot-master.sh” which will help us in setting up the master node.

 


 

Boot Kubernetes master OS image using hyperkit on macOS:

 

./boot-master.sh

This will display the following output:


Just wait for few seconds and you will see LinuxKit OS coming up as shown:


 

It’s easy to retrieve the IP address of the master node:

 


 

Verify the kubelet process:


Now it’s time to execute the script to manually initialize master with kubeadm:

/ # runc exec kubelet kubeadm-init.sh

 


 

Copy / Save  the below command  and keep it handy. We are going to need it soon.

kubeadm join --token a5365b.45e88229a1548bf2 192.168.65.2:6443

 

Hence, your Kubernetes master is up and ready.

You can verify the cluster node:


That was easy to set up, wasn't it? Let us create a 3-node cluster directly from the macOS terminal. Open up 3 new separate terminals to start 3 nodes and run the below commands:

 

 ajeetraina$cd linuxkit/projects/kubernetes/
 ajeetraina$ sudo ./boot-node.sh 1 --token a5365b.45e88229a1548bf2 192.168.65.2:6443
 ajeetraina$ sudo ./boot-node.sh 2 --token a5365b.45e88229a1548bf2 192.168.65.2:6443
 ajeetraina$ sudo ./boot-node.sh 3 --token a5365b.45e88229a1548bf2 192.168.65.2:6443

Open up the master node terminal and verify if all the 3 nodes gets added:

 

/ # kubectl get nodes
NAME                STATUS    AGE       VERSION
moby-025000000003   Ready     18m       v1.6.1
moby-025000000004   Ready     13m       v1.6.1
moby-025000000004   Ready     15m       v1.6.1
 moby-025000000004   Ready     14m       v1.6.1

 


Moby makes it really simple to set up a Kubernetes cluster. Under this demonstration, it created a bridge network inside VPNKit, and hosts are added to it as they use the same VPNKit socket.

Thanks to Justin Cormack @ LinuxKit maintainer for the valuable insight regarding the multi-host networking functionality.


 

Did you find this blog helpful? Are you planning to explore Moby for Kubernetes? Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Track The Moby Project here.


Demystifying the Relationship Between Moby & Docker

Why has https://github.com/docker/docker been redirected to moby/moby? Why did Docker create the Moby project? Has Docker been renamed to Moby? What does it actually mean when we say Moby is going to be the upstream project? – I have been reading a lot of such queries on Twitter and in open forums, even after dozens of blogs are available to clarify the confusion. Still confused about the relationship between Moby and Docker? Under this blog, I will try to clarify the relationship between "the Moby project" & "the Docker product" through a popular existing analogy – Fedora, RHEL & CentOS.

 


                   ** the one difference in the analogy is that Docker CE precedes Docker EE, whereas RHEL came before CentOS.

Fedora is an operating system based on Linux kernel and GNU programs. It is purely a community driven project and sponsored by Red Hat. With an estimation of over 1.2 million users,  it is actually a playground for new functionality as it primarily  focus on quick releases of new features. Red Hat Enterprise Linux branches its releases from versions of Fedora. The key reason of birth of Fedora Linux was that Fedora’s repository development would be collaborative with the global volunteer community. Fedora Linux was eventually absorbed into the Fedora Project, carrying with it this collaborative approach.

RHEL (Red Hat Enterprise Linux) is a Linux distribution developed and run by Red Hat and targeted toward the commercial market. RHEL is a downstream product based on Fedora.

CentOS  is very close to being RHEL without the branding and support. It is a spinoff of RHEL. It is based on the same code base.  Again, it is a community driven and a downstream product.

Let us consider this analogy and try to understand the relationship between Moby, Docker Community Edition & Docker Enterprise Edition.

 ” Moby is a project & Docker is a product”

 


A Birth of Moby Project – Why?

In the last 2 years, Docker experienced exponential growth. With around 6 billion Docker image pulls (compared to 100 million two years back) and thousands of contributors being added month after month, the Docker project saw tremendous community engagement. With such an exponentially growing community, the boundaries between community and product started to look "blurry", as the community was confused about whether they were contributing to the product or the project. To bring clarity to this debate, Docker finally decided to break its monolithic model into smaller open source components (including containerd, libnetwork, swarmkit and LinuxKit), and hence the birth of the Moby project.

As an analogy, think of Moby as the community-driven Fedora project. Moby is an open-source project created by Docker to advance the software containerization movement. It is an upstream project & a perfect place for all container enthusiasts to experiment and exchange ideas. As Solomon rightly said, "Docker uses the Moby Project as an open R&D lab".

 

 


 

Think of Docker Community Edition (CE) as the CentOS of the analogy. As its name suggests, it will again be community-driven, free to use and distribute. Docker CE, as a product, is ideal for developers and small teams looking to get started with Docker and experimenting with container-based apps. Docker CE is integrated and optimized for the infrastructure, so you can maintain a native app experience while getting started with Docker. Build the first container, share it with team members and automate the dev pipeline, all with Docker Community Edition.

Docker Enterprise Edition(EE) would be a good analogy to RHEL. It is based on Docker CE and hence, a downstream product. It is officially driven by Docker Inc.

Moby Vs Docker – Q/A

Que:1 > Why docker/docker renamed to moby/moby?

Docker is transitioning all of its open source collaborations to the Moby project going forward and hence it is getting redirected. Docker the product will be assembled from components that are packaged by the Moby project. As the Docker Engine continues to be split up into more components the Moby project will also be the home for those components until a more appropriate location is found.

Que:2 > What is Docker – a project or a product?

Docker is, and will remain, an open source product that lets you build, ship and run containers. It is staying exactly the same from a user’s perspective. Users can download Docker from the docker.com website.

Que:3 > To  what set of users Moby is NOT recommended?

Moby is NOT recommended for:

  • application developers looking for an easy way to run their applications in containers (use Docker CE instead);
  • enterprise IT and development teams looking for a ready-to-use, commercially supported container platform (use Docker EE instead);
  • anyone curious about containers and looking for an easy way to learn (use the docker.com website instead).

Que:4 > To what set of users Moby is recommended?

Moby is recommended for anyone who wants to assemble a container-based system. This includes hackers, system engineers, infrastructure providers, container enthusiasts, open source developers etc.

Que:5 > Will Moby be community-driven going forward?

Yes, of course. Just like other open source projects, the Moby Project will always be a community-run project. Docker Inc. might plan to donate it to the Linux Foundation eventually (similar to what they did with containerd, which was donated to the CNCF).

Que:6 > What is Moby project made up of?

All Moby components are containers, so creating new components is as easy as building a new OCI-compatible container. However, at the core of Moby is a framework to assemble specialized container systems. It provides:

  • A library of containerized components for all vital aspects of a container system: OS, container runtime, orchestration, infrastructure management, networking, storage, security, build, image distribution, etc.
  • Tools to assemble the components into runnable artifacts for a variety of platforms and architectures: bare metal (both x86 and Arm); executables for Linux, Mac and Windows; VM images for popular cloud and virtualization providers.
  • A set of reference assemblies which can be used as-is, modified, or used as inspiration to create your own.

Que:7 > Does Moby uses containerd?

Yes, you are right. Moby uses containerd as the default container runtime.

Que:8 > What does Moby tool do? How is it related to LinuxKit?

The Moby project provides a command-line tool called moby which assembles components. Currently it assembles bootable OS images, but soon it will also be used by Docker for assembling Docker out of components, many of which will be independent projects.

If Moby is used to build ISO images, LinuxKit takes a charge of pushing it and running on diversified platforms.

Did you find this blog helpful? Are you planning to explore Moby? Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution, join me at Docker Community Slack Channel.

Track The Moby Project @ https://github.com/moby/moby

 


Test-Drive LinuxKit OS on Oracle VirtualBox running on macOS Sierra

In my last blog post, I showed how to get started with LinuxKit for Google Cloud Platform. LinuxKit is a secure, lean and portable Linux subsystem for the container movement, and it is being built for the various platforms Docker runs on – for example Azure, VMware ESXi, AWS and bare metal systems. It went open source just 10 days back, and the project has already received over 2,300 stars in barely a week – an impressive figure. If you are completely new to LinuxKit, please refer to my last blog post to understand its basic concepts.

linuxsubsystem

 

Screen Shot 2017-05-01 at 2.46.43 PM

 

Under this blog post, I will show you how a LinuxKit OS image can be run under VirtualBox on macOS Sierra. Though the entire set of steps can be automated (this automation script works well for Ubuntu and can be tweaked for macOS), I have laid them out here in a simpler, step-by-step way.

Pre-requisites:

Ensure that the following software is installed on your macOS machine:

  • Go packages from https://golang.org/dl/
  • make (installed via xcode-select --install)
  • Docker Version 17.03.1-ce-mac5 (16048), stable channel
  • VirtualBox installed on macOS Sierra
  • git binary, if not installed
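
Before proceeding, you can quickly confirm these tools are available from the terminal (a minimal sanity check – the versions reported on your machine will differ):

$ go version
$ make --version
$ git --version
$ docker version
$ VBoxManage --version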

 

With all of the above packages installed and Docker up and running, let us begin by cloning the LinuxKit repository. Open up the terminal and type the below command:

 

Ajeets-MacBook-Air:~ ajeetraina$ sudo git clone https://github.com/linuxkit/linuxkit

Cloning into ‘linuxkit’…

remote: Counting objects: 20097, done.

remote: Compressing objects: 100% (14/14), done.

remote: Total 20097 (delta 1), reused 0 (delta 0), pack-reused 20082

Receiving objects: 100% (20097/20097), 15.76 MiB | 441.00 KiB/s, done.

Resolving deltas: 100% (12066/12066), done.

Ajeets-MacBook-Air:~ ajeetraina$

 

Building the Image:

 

$cd linuxkit

$ sudo make

 

This builds two essential tools for us – moby and linuxkit.

Next, we need to copy these tools to our PATH as shown below:

 

$ sudo cp bin/* /usr/local/bin

 

Now it’s time to use Moby to build the image:

$ sudo moby build examples/vmware.yml

 

Screen Shot 2017-05-01 at 3.07.17 PM

 

As a result, moby builds a vmware.vmdk file. To get it into VirtualBox, there are various options. If you are completely new to VirtualBox, the simplest is to import the VMDK file directly, as shown below:

Importing the VMDK into VirtualBox directly:

 

Screen Shot 2017-05-01 at 3.19.12 PM

 

Type your preferred name under VM name (I named it LinuxKitOS) and select vmware.vmdk from your macOS filesystem to import it into VirtualBox. Click on “Create”.

Next, you can start the VM by clicking on the “Start” button.

 

Screen Shot 2017-05-01 at 3.19.35 PM

It takes hardly a few seconds for the LinuxKit OS to come up. Once it is up, you can use the runc command to see the list of services that are running, as sketched below.
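
For instance, the following is a minimal sketch of what you can type on the LinuxKit console (the exact list of services depends on the YAML file the image was built from, and <service> is a placeholder for one of the names that runc reports):

$ runc list                    # list the system services running as containers
$ runc exec -t <service> sh    # attach a shell to a particular service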

VirtualBox_LinuxkitOS_01_05_2017_14_06_59

 

The other option is to convert the .VMDK into .VDI format and register the VM using the VBoxManage command, as shown below:

$ VBoxManage clonehd vmware.vmdk vmware.vdi --format VDI

 

Screen Shot 2017-05-01 at 3.37.38 PM

The above command converts the VMDK into VDI format, which is now ready to be attached to a VirtualBox VM.
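
If you prefer to stay on the command line, the converted disk can then be attached to a new VM entirely with VBoxManage – a sketch only, where the VM name LinuxKitOS and the memory size are arbitrary choices:

$ VBoxManage createvm --name LinuxKitOS --register
$ VBoxManage modifyvm LinuxKitOS --memory 1024
$ VBoxManage storagectl LinuxKitOS --name SATA --add sata
$ VBoxManage storageattach LinuxKitOS --storagectl SATA --port 0 --device 0 --type hdd --medium vmware.vdi
$ VBoxManage startvm LinuxKitOS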

Though LinuxKit ships with QEMU tooling, if you still want to use qemu-img yourself (a two-step process) to extract only the virtual disk rather than the complete VM, this is a way to do it, as shown below:

 $brew install qemu --env=std --use-gcc

Run the below commands to convert the VMDK to VDI using qemu-img and VBoxManage:

Ajeets-MacBook-Air$ sudo qemu-img convert vmware.vmdk swap.bin -p

Password:

    (100.00/100%)

Ajeets-MacBook-Air:examples ajeetraina$ sudo VBoxManage convertfromraw swap.bin output.vdi --format VDI

Converting from raw image file=”swap.bin” to file=”output.vdi”…

Creating dynamic image with size 1073741824 bytes (1024MB)…

 

Still, I would love to see LinuxKit provide a direct way of importing images into VirtualBox, similar to the way it pushes them to Google Cloud Platform. LinuxKit support for AWS and Azure is in progress. In a future blog post, I will try to cover how LinuxKit works on other platforms like Azure and AWS.

Did you find this blog helpful? Are you planning to explore LinuxKit? Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution, join me at Docker Community Slack Channel.


LinuxKit 101: Getting Started with LinuxKit for Google Cloud Platform

 

“…LinuxKit? A New Beast?

     What problem does it solve for us?..”

 

01

In case you missed DockerCon 2017 and have no idea what LinuxKit is all about, you have arrived at the right place. For the next 30 minutes, I will be talking about an open source container toolkit which Docker Inc. has recently released to the public, and I will help you get started with it in an easy and precise way.

What is LinuxKit?

LinuxKit sits alongside Docker’s other open-source container toolkits such as InfraKit and VPNkit. It is a container-native toolkit that allows organizations to build their own containerized operating systems that are secure, lean, modular and portable. Essentially, it is more of a developer kit than an end-user product. The project is completely open source and is hosted on GitHub under an Apache 2 licence.

What problem does it solve?

Last year Docker Inc. started shipping Docker for Mac, Docker for Windows, Docker for Azure and Docker for GCP, which brought a Docker-native experience to these various platforms. One common problem the community faced was that there was no standard Linux OS running across all those platforms. Cloud platforms in particular do not ship with a standard Linux, which raised concerns around portability, security and incompatibility. This led Docker Inc. to bundle its own Linux into the Docker platform so that it runs uniformly in all of these places.

Talking about portability, Docker Inc. has always focused on products that run anywhere. Hence, they worked with partners like HP, Intel, ARM and Microsoft to ensure that the LinuxKit toolkit runs flawlessly on desktops, servers and clouds, on ARM and x86, in virtual environments and on bare metal. LinuxKit was built with the intention of providing optimized tooling for portability, so that a new architecture or a new system can be accommodated easily.

What does LinuxKit hold?

LinuxKit includes the tooling for building custom Linux subsystems that include exactly the components the runtime platform requires. All system services are containers that can be replaced, and everything that is not required can be removed. The toolkit works with Docker’s containerd, and all components can be substituted with ones that match specific needs. You can optimize LinuxKit images for specific hardware platforms and host operating systems, with just the drivers and other dependencies you need and nothing more, rather than using a full-fat generic base. The toolkit basically tries to help you create your own slimline containerized operating system as painlessly as possible. The size of a LinuxKit image is in the tens of MBs (around 35-50 MB).

100

Shown above is a YAML file which specifies a kernel and base init system, plus a set of containers that are built into the generated image and started at boot time. It also specifies what formats to output (shown in the last section), such as bootable ISOs and images for various platforms. Interestingly, system services are sandboxed in containers, with only the privileges they need. The configuration is designed for the container use case. The whole system is built to be used as immutable infrastructure, so it can be built and tested in your CI pipeline and deployed, and new versions are redeployed when you wish to upgrade. To learn more about the YAML specification, check this out.
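
If the screenshot is hard to read, here is a simplified sketch of what such a specification looks like. The image names, tags (abbreviated as <hash>) and output format values are illustrative and have changed between LinuxKit releases – the gcpwithdocker.yml file in the repository is the authoritative example:

kernel:
  image: "linuxkit/kernel:4.9.x"
  cmdline: "console=ttyS0"
init:
  - linuxkit/init:<hash>
  - linuxkit/runc:<hash>
  - linuxkit/containerd:<hash>
onboot:
  - name: dhcpcd
    image: "linuxkit/dhcpcd:<hash>"
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  - name: getty
    image: "linuxkit/getty:<hash>"
outputs:
  - format: gcp-img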

What tools does LinuxKit use?

There are two basic tools which LinuxKit uses – linuxkit and moby.

2

In short, the moby tool converts the YAML specification into one or more bootable images.

Let us get started with LinuxKit to understand how it builds customized images that run uniformly across various platforms. For this blog post, I have chosen Google Cloud Platform. We will build a LinuxKit-based customized image locally on my MacBook Air and push it to Google Cloud Platform to run as a VM instance. I will be using my forked linuxkit repository, which also runs a Docker container (for example, Portainer) inside the VM instance.

Steps:

  1. Install LinuxKit & Moby tool on macOS
  2. Building a LinuxKit ISO Image with Moby 
  3. Create a bucket under Google Cloud Platform
  4. Upload the LinuxKit ISO image to a GCP bucket using LinuxKit tool
  5. Initiate the GCP instance from the LinuxKit ISO image placed under GCP bucket
  6. Verifying Docker running inside LinuxKitOS 
  7. Running Portainer as Docker container

 

Pre-requisite:

– Install the Google Cloud SDK on your macOS system through this link. You will need to authenticate with your Google account using the below command:

$gcloud auth login
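
Optionally, you can also point the SDK at your project up front so that later commands do not need a project flag. A small sketch – synthetic-diode-161714 is the project ID used later in this walkthrough, so replace it with your own:

$gcloud config set project synthetic-diode-161714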

– Ensure that build-essential tools like make are working

– Ensure that Go packages are installed on macOS.

Steps:

  1. Clone the repository:

 

sudo git clone https://github.com/ajeetraina/linuxkit

4

2.  Change directory to linuxkit and run make which builds “moby” and “linuxkit” for us

cd linuxkit && sudo make

 

3.  Verify that these tools are built and placed under the bin/ directory:

cd bin/
ls
moby         linuxkit

4.  Copy these tools into system PATH:

 
sudo cp bin/* /usr/local/bin/

5. Use moby tool to build the customized ISO image:

 

cd examples/
sudo moby build gcpwithdocker.yml

 

5

 

[Update: 6/21/2017 – With the latest release of LinuxKit, the outputs section is no longer allowed inside the YAML file. This means that whenever you use the moby build command to build an image, you must specify -output gcp to build an image in a format that GCP will understand. For example:

moby build -output gcp examples/gcpwithdocker.yml

This will create a local gcpwithdocker.img.tar.gz compressed image file.]

 

6.  Create a storage bucket “mygcp” under your Google Cloud Platform project:

7
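
If you prefer the command line over the web console, the bucket can also be created with the gsutil tool that ships with the Google Cloud SDK (bucket names are globally unique, so pick your own name):

$gsutil mb gs://mygcp/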

7. Run the linuxkit push command to push the image to GCP:

 

sudo linuxkit push gcp -project synthetic-diode-161714 -bucket mygcp gcpwithdocker.img.tar.gz

 

8

[Note: “synthetic-diode-161714” is my GCP project name and “mygcp” is the bucket name which I created in the earlier step. Please adjust these for your environment.]

Please note that you might need to enable the Google Cloud API using this link in case you encounter an “unable to connect GCP” error.

8.  You can now run the image you created, and it will show up as a VM instance on Google Cloud Platform:
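
The command below is a sketch of how this is done with the linuxkit tool. It follows the same flag pattern as the push command above, and the exact flags and image name handling may differ between LinuxKit releases:

$sudo linuxkit run gcp -project synthetic-diode-161714 gcpwithdocker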

9

10

This will boot a LinuxKit OS, which you can verify below:

15

You can also verify that this brings up a VM instance on the GCP platform:

010

9. You can use the runc command to list all the services which were defined under the gcpwithdocker.yml file:

11

10. As shown above, one of the services I am interested in is called “docker”. You can use the below command to enter the docker service:

 

runc exec -t docker sh

Wow! It is running the latest Docker 17.04.0-ce version.

11.  Let us try to run the Portainer application and check whether it works well.
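
One common way to launch it is shown below – a sketch only; at the time of writing the image was published as portainer/portainer, and mounting the Docker socket lets Portainer manage the local engine:

docker run -d -p 9000:9000 --name portainer -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer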

12

You can verify the IP address by running ifconfig for that specific container, which in my case is 35.187.162.100:

14

Now this is what I call “the coolest stuff on earth”. LinuxKit allows you to build your own secure, modular, portable, lean-and-mean containerized OS, and that too in just minutes. I am currently exploring LinuxKit as a bare metal OS and will share my findings in an upcoming blog post.

Did you find this blog helpful? Are you planning to explore LinuxKit? Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution, join me at Docker Community Slack Channel.
