Getting Started with OpenUSM on Docker for Windows Platform

Estimated Reading Time: 6 minutes

 

OpenUSM is a modern approach to DellEMC server management, insight log analytics and machine learning, integrated with a monitoring & logging pipeline using Docker containers & Redfish. It is a 100% container-based, platform-agnostic solution which can be run from a laptop, server or cloud, and works seamlessly on any Linux or Windows platform with Docker Engine running on top. It follows a "Container-Per-Server" (CPS) model: for each server-management task, there are Python scripts which, when executed, build and run Docker containers, use the Redfish API to communicate directly with Dell iDRAC, collect iDRAC/LC logs and push them to the ELK (Elasticsearch, Logstash & Kibana) stack for further log analytics and machine learning. OpenUSM is currently hosted at https://github.com/openusm/openusm

OpenUSM today supports both the Linux and Windows platforms. It has already been validated on Linux distributions like Debian, Ubuntu and CentOS. You can find extensive documentation here.

Under this blog post, I will showcase how to get started with OpenUSM on Docker for Windows Platform.

Tested Platform:

  • Microsoft Windows 10 Enterprise
  • X64 based PC

Pre-requisite:

  • Installing Python 2.7
  • Installing WinSyslog
  • Installing Docker for Windows
  • Configuring Docker for Windows

Installing Python 2.7

To install Python 2.7, the simplest way is to use Chocolatey. Chocolatey is a software management automation tool for Windows. It works with over 20 installer technologies, but it can also manage things you would normally xcopy-deploy (like runtime binaries and zip files), as well as registry settings, files and configurations, or any combination of these.

Run the below command to install Chocolatey on your Windows 10 laptop:

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

If you face any issue, refer to this link.

Now install Python 2.7 using choco as shown below (the plain python package pulls in Python 3, so use the python2 package here):

choco install python2

You can verify if python is installed or not using the below command:

PS C:\> C:\Python27\python.exe -V
Python 2.7.15

Installing WinSyslog

For OpenUSM to work, a syslog server is required. You can install any syslog server available on the internet; for this demo, I will use WinSyslog, which is very easy to set up. Once you install it, it will open up the window shown below:

Click on the Options section under the File menu and you will see the below window:

As shown above, you will need to enter your local laptop's IP address and port for OpenUSM to fetch logs. You can test it by clicking on "Send"; you should see "Syslog Messages send successfully to 192.168.1.6" if all goes well.
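If you would rather verify the listener from the command line, here is a minimal PowerShell sketch (assuming WinSyslog is listening on UDP port 514 at 192.168.1.6 – substitute your own IP and port):

# Send a single test message to the syslog listener over UDP
$udp = New-Object System.Net.Sockets.UdpClient
$msg = [System.Text.Encoding]::ASCII.GetBytes("<14>OpenUSM syslog connectivity test")
$udp.Send($msg, $msg.Length, "192.168.1.6", 514) | Out-Null
$udp.Close()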

Installing Docker for Windows:

Open https://docs.docker.com/docker-for-windows/install/ and click on “Install from Docker Store” to open up the below page to download Docker for Windows CE Edition.

Docker CE for Windows is Docker designed to run on Windows 10. It is a native Windows application that provides an easy-to-use development environment for building, shipping, and running dockerized apps. Docker CE for Windows uses Windows-native Hyper-V virtualization and networking, and is the fastest and most reliable way to develop Docker apps on Windows. It supports running both Linux and Windows Docker containers. You can install either the Stable or the Edge release from the link below.

 

Double-click the Docker for Windows Installer to run the installer. When the installation finishes, Docker starts automatically. The whale icon in the notification area indicates that Docker is running and accessible from a terminal.

You can verify the Docker version either by visiting "About Docker" in the top menu, or by opening a command-line terminal like PowerShell and trying out the below Docker command to check the version:
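PS C:\> docker version

docker version prints both the client and the server (engine) details, which is also a quick way to confirm that the daemon is reachable from your terminal.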

Configuring Docker for Windows for OpenUSM

We need to perform a few configuration changes to Docker for Windows before we proceed with setting up the ELK stack. First, we need to enable shared drives for the ELK stack to work. Docker for Windows provides a simplified approach to enabling this feature: click on the whale icon > Shared Drives > select the local "C:" drive, which will be made available to the Docker containers that run the ELK stack.

Once you select it and click on "Apply", Docker will restart, as well as Kubernetes (if enabled earlier). This should be good enough for OpenUSM to work smoothly.

Cloning the OpenUSM Repository

git clone https://github.com/openusm/openusm

cd openusm/logging/

Setting up ELK Stack

Docker for Windows is a development platform and comes with docker-compose installed by default. All you need to do is run the below command to bring up the ELK stack. Awesome, isn't it?

docker-compose up -d

You can verify if ELK has come up or not by running the below command as shown:
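If you have not changed the compose project defaults, something like this should list the Elasticsearch, Logstash and Kibana containers in the Up state (container names depend on the compose file):

PS C:\Users\Ajeet_Raina\openusm\logging> docker-compose ps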

Open up http://127.0.0.1:5601 to access Kibana UI as shown below:

Sending DellEMC HW Sensor Logs to ELK Stack

It's time for action now. I assume you are connected to your lab infrastructure over VPN in order to access the DellEMC PowerEdge servers. It takes just a few seconds to send sensor logs (fan, temperature etc.) of a DellEMC PowerEdge server sitting in your datacenter to the ELK stack using the below Python script.

 

PS C:\Users\Ajeet_Raina\openusm\logging> C:\Python27\python.exe .\sensorexporter.py -i <idrac-ip> -ei 127.0.0.1 -eu elastic -ep <password>

This script uses Redfish to talk to the remote DellEMC PowerEdge server, fetches the logs, sends them to the syslog server we configured earlier, pushes them to Logstash and Elasticsearch, and gets them displayed via the Kibana UI – all in just a few seconds. Isn't it cool?

Visualizing the logs under Kibana UI

When you open the Kibana UI for the first time, the index patterns might not come up. Click on "Index Patterns" under the Management tab on the left-hand side, then click on "Create Index Pattern". Search for Fan* and temp*. By now, you should be able to see temperature and fan-speed logs under the Discover tab.

 

Click on “Discover” tab to see the overall logs fetched directly from iDRAC IPs.

 

Click on the "Visualize" tab to add a filter. In the example below, I have chosen iDRAC IP, and Minimum and Maximum Reading:

Click on "Dashboard" to add specific filters for fan speed, choose your type of visualization (I selected the "Pie Chart" option) and select the metrics to display, as shown below:

 

In my next blog post, I will talk about Elastic’s Machine Learning “anomaly score” and how the various scores presented in the dashboards relate to the “unusualness” of individual occurrences within the data set of fan speed and temperature as a whole. Stay tuned !

2 Minutes to Docker MacVLAN Networking – A Beginners Guide

Estimated Reading Time: 3 minutes

 

Scenario: Say you have built Docker applications (legacy in nature, like network traffic monitoring, system management etc.) which are expected to be directly connected to the underlying physical network. In this type of situation, you can use the macvlan network driver to assign a MAC address to each container's virtual network interface, making it appear to be a physical network interface directly connected to the physical network.

Last year, I wrote a blog post on "How does MacVLAN work under Docker Swarm?" for those users who want to assign underlying physical network addresses to Docker containers running various Swarm services. Do check it out.

Docker 17.06 Swarm Mode: Now with built-in MacVLAN & Node-Local Networks support

In case you're completely new to Docker networking: when Docker is installed, a default bridge network named docker0 is created. Each new Docker container is automatically attached to this network. Besides docker0, two other networks get created automatically by Docker: host (no isolation between host and containers on this network; to the outside world they are on the same network) and none (attached containers run on a container-specific network stack).

Assume you have a clean Docker Host system with just 3 networks available – bridge, host and null

root@ubuntu:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
871f1f745cc4        bridge              bridge              local
113bf063604d        host                host                local
2c510f91a22d        none                null                local
root@ubuntu:~#

My Network Configuration is quite simple. It has eth0 and eth1 interface. I will just use eth0.

root@ubuntu:~# ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:7d:83:13:8e
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth0      Link encap:Ethernet  HWaddr fe:05:ce:a1:2d:5d
          inet addr:100.98.26.43  Bcast:100.98.26.255  Mask:255.255.255.0
          inet6 addr: fe80::fc05:ceff:fea1:2d5d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:923700 errors:0 dropped:367 overruns:0 frame:0
          TX packets:56656 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:150640769 (150.6 MB)  TX bytes:5125449 (5.1 MB)
          Interrupt:31 Memory:ac000000-ac7fffff

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:45 errors:0 dropped:0 overruns:0 frame:0
          TX packets:45 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3816 (3.8 KB)  TX bytes:3816 (3.8 KB)

Creating MacVLAN network on top of eth0.

docker network create -d macvlan --subnet=100.98.26.0/24 --gateway=100.98.26.1 -o parent=eth0 pub_net

Verifying MacVLAN network

root@ubuntu:~# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
871f1f745cc4        bridge              bridge              local
113bf063604d        host                host                local
2c510f91a22d        none                null                local
bed75b16aab8        pub_net             macvlan             local
root@ubuntu:~#

Let us create a sample Docker container and assign it a static IP (ensure that it is from the free pool):

root@ubuntu:~# docker  run --net=pub_net --ip=100.98.26.47 -itd alpine /bin/sh
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
ff3a5c916c92: Pull complete
Digest: sha256:e1871801d30885a610511c867de0d6baca7ed4e6a2573d506bbec7fd3b03873f
Status: Downloaded newer image for alpine:latest
493a9566c31c15b1a19855f44ef914e7979b46defde55ac6ee9d7db6c9b620e0

Important Point: When using macvlan, you cannot ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host’s eth0, it will not work. That traffic is explicitly filtered by the kernel modules themselves to offer additional provider isolation and security.
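To see this isolation in action, try pinging the host's eth0 address (100.98.26.43) from the alpine container we started above – expect 100% packet loss:

root@ubuntu:~# docker exec -it 493a9566c31c ping -c 2 100.98.26.43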

Enabling Container to Host Communication

It’s simple. Just run the below command:

Example: ip link add mac0 link $PARENTDEV type macvlan mode bridge

So, in our case, it will be:

ip link add mac0 link eth0 type macvlan mode bridge
ip addr add 100.98.26.38/24 dev mac0
ifconfig mac0 up

Let us try creating a container and pinging it:

root@ubuntu:~# docker run --net=pub_net -d --ip=100.98.26.53 -p 81:80 nginx
10146a39d7d8839b670fc5666950c0e265037105e61b0382575466cc62d34824
root@ubuntu:~# ping 100.98.26.53
PING 100.98.26.53 (100.98.26.53) 56(84) bytes of data.
64 bytes from 100.98.26.53: icmp_seq=1 ttl=64 time=1.00 ms
64 bytes from 100.98.26.53: icmp_seq=2 ttl=64 time=0.501 ms

Wow ! It just worked.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

Test Drive Your First Istio Deployment using Play with Kubernetes Platform

Estimated Reading Time: 3 minutes

If you're a developer and have been spending a lot of time developing apps recently, you already understand a whole new set of challenges related to microservices architecture. Although there has been a shift from bloated monolithic apps to small, focused microservices to speed up implementation and improve resiliency, the fact is that developers now have to worry about the challenges of integrating services in distributed systems, which includes accountability for service discovery, load balancing, registration, fault tolerance, monitoring, routing, compliance and security.

Let us understand, in detail, the challenges which microservices bring to developers and operators. Consider a simple 1st-generation service mesh scenario. As shown below, Service (A) talks to Service (B). Instead of talking directly, the request gets routed through Nginx. Nginx looks up the route in Consul (a service discovery tool) and automatically retries on HTTP 502s.

 

But as the number of microservices grows, the below-listed challenges arise for both developers and the operations team:

  • How to enable these growing number of microservices to talk to each other?
  • How to enable these growing number of microservices to load-balance?
  • How to enable these growing number of microservices to provide role-based routing?
  • How to implement outgoing traffic on these microservices and test canary deployment?
  • How to manage complexity around these growing pieces of microservices?
  • How can operator implement fine-grained control of traffic behavior with rich-routing rules?
  • How shall one implement Traffic encryption, service-to-service authentication and strong identity assertions?

 

In a nutshell, although you could put service discovery and retry logic into the application or networking middleware, the fact is that service discovery is tricky to get right.

Enter Istio’s Service Mesh

"Service Mesh" is one of the hottest buzzwords of 2018. As its name suggests, it is a configurable infrastructure layer for a microservices app. It describes the network of microservices that make up such applications and the interactions between them. It makes communication between service instances flexible, reliable, and fast. The mesh provides service discovery, load balancing, encryption, authentication and authorization, support for the circuit-breaker pattern, and other capabilities.

Istio is a completely open source service mesh that layers transparently onto existing distributed applications. Istio v1.0 was announced last month and is ready for production. It is written completely in Go and is actually a platform, including APIs that let it integrate into any logging, telemetry or policy system, while adding very little overhead to your system. It is hosted on GitHub. Istio's diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

Read the full story at Knowledgehut.

What’s New in Docker Enterprise Edition 18.03 Engine Release?

Estimated Reading Time: 6 minutes

 

A New Docker Enterprise Engine 18.03.0-ee-1 has been released. This is a stand-alone engine release intended for EE Basic customers. Docker EE Basic includes the Docker EE Engine and does not include UCP or DTR.

In case you’re new, Docker is available in two editions:

  • Community Edition (CE)
  • Enterprise Edition (EE)

Docker Community Edition (CE) is ideal for individual developers and small teams looking to get started with Docker and experiment with container-based apps. Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship, and run business-critical applications in production at scale. Below is the list of capabilities of the various Docker editions:

[Please note – since UCP and DTR cannot be run on the 18.03 EE Engine, it is intended for EE Basic customers. It is not intended, qualified, or supported for use in an EE Standard or EE Advanced environment. If you're deploying UCP or DTR, use Docker EE Engine 17.06.]

In my recent blog, I talked about "Docker EE 2.0 – Under the Hood", where I covered the 3 major components of EE 2.0 which together enable a full software supply chain, from image creation to secure image storage to secure image deployment. Under this blog post, I am going to talk about the new features included in the newly introduced Docker EE 18.03 Engine.

 

Let us talk about each of these features in detail.

Containerd 1.1 Merged under 18.03 EE Engine

containerd is an industry-standard core container runtime. It is based on the Docker Engine’s core container runtime to benefit from its maturity and existing contributors. It provides a daemon for managing running containers. It is available today as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments.

1.1 is the second major release of containerd, with added support for CRI, the Kubernetes Container Runtime Interface. CRI is a new plugin which allows connecting the containerd daemon directly to a Kubernetes kubelet, to be used as the container runtime. The CRI GRPC interface listens on the same socket as the containerd GRPC interface and runs in the same process.

containerd 1.1, announced in April, implements the Kubernetes Container Runtime Interface (CRI), so it can be used directly by Kubernetes as well as Docker Engine. It implements namespaces so that clients from different container systems (e.g. Docker Engine and DC/OS) can leverage a single containerd instance on one system while being logically separated. This release allows you to plug in any OCI-compliant runtime, such as Kata Containers or gVisor, and includes many performance improvements, such as significantly better pod-start latency and lower CPU/memory usage of the CRI plugin. For additional performance benefits, it replaces graphdrivers with more efficient snapshotters, which BuildKit leverages for faster builds.
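A quick way to confirm which containerd build your engine is bundled with is plain docker info, which reports the containerd commit on Linux hosts:

$ docker info | grep -i 'containerd version'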

 

 

The new containerd 1.1 includes –

  • CRI plugin
  • ZFS, AUFS, and native snapshotter
  • Improvements to the ctr tool
  • Better support for multiple platforms
  • Cross namespace content sharing
  • Better mount cleanup
  • Support for disabling plugins
  • TCP debug address for remote debugging
  • Update to Go 1.10
  • Improvements to the garbage collector

FIPS 140-2 Compliant Engine

FIPS stands for Federal Information Processing Standard (FIPS) Publication. It is a U.S. government computer security standard used to approve cryptographic modules. It is important to note that the Docker EE cryptography libraries are at the "In-process (Co-ordination)" phase of the FIPS 140-2 Level 1 Cryptographic Module Validation Program. Validating the Docker platform against widely accepted standards and best practices is a critical aspect of product development, as this enables companies and agencies across all industries to adopt Docker containers. FIPS is a notable standard which validates and approves the use of various security encryption modules within a software system.

For further detail: https://blog.docker.com/2017/06/docker-ee-is-now-in-process-for-fips-140-2/

 

 

Source: https://csrc.nist.gov/Projects/Cryptographic-Module-Validation-Program/Modules-In-Process/Modules-In-Process-List

Support for Windows Server 1709 (RS3) / 1803 (RS4)

Docker EE 18.03 Engine added support for Microsoft Windows Server 1709 and Windows Server 1803. With this release, there have been subtle improvements in density and performance via smaller image sizes than Server 2016, and a single deployment architecture for Windows and Linux has been introduced via parity with ingress networking and VIP service discovery. Reduced operational requirements via relaxed image compatibility is another notable feature of this release.

Below is the list of customer pain points which Docker, Inc. has focused on for the last couple of years:

 

Refer https://docs.docker.com/ee/engine/release-notes/#runtime for more information.

The --chown support in COPY/ADD commands in Docker EE

With the Docker EE 18.03 release, there is support for --chown with COPY and ADD in Dockerfiles. This improves security by enabling developer-specified users vs. root for run-time file operations, and provides parity with functionality added in Docker CE. At this point in time, this has been introduced for Linux OS only.
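Here is a minimal Dockerfile sketch of the flag in action (the app user/group and paths are illustrative):

# Create a non-root user, then copy sources owned by it instead of root
FROM ubuntu:16.04
RUN groupadd -r app && useradd -r -g app app
COPY --chown=app:app src/ /home/app/src/
USER app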

Support for Compose on Kubernetes CLI

Under the Docker EE 18.03 release, you can now use the same Compose files to deploy apps to Kubernetes on Enterprise Edition. This feature is not available under the Desktop or Community Edition releases.

You should be able to pass the --orchestrator parameter to specify the orchestrator.
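A sketch of what that looks like (assuming a Compose file named docker-compose.yml in the current directory):

$ docker stack deploy --orchestrator=kubernetes --compose-file docker-compose.yml myapp
$ docker stack deploy --orchestrator=swarm --compose-file docker-compose.yml myapp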

 

This means it should be easy to move applications between Kubernetes and Swarm, and it simplifies application configuration. Under this release, one can create native Kubernetes Stack objects and interact with Stacks via the Kubernetes API. This release also improves the UX and moves the feature out of the experimental phase; the functionality is now available in the main, supported Docker CLI.

Easy setup of Docker Content Trust

Docker Content Trust was first introduced in Docker Engine 1.8 and Docker CS Engine 1.9.0, and is available in Docker EE. It allows image operations with a remote Docker registry to enforce client-side signing and verification of image tags. It enables digital signatures for data sent to, and received from, remote Docker registries. These signatures allow Docker client-side interfaces to verify the integrity and publisher of specific image tags.

Docker Content Trust works with Docker Hub as well as Docker Trusted Registry (Notary integration is experimental in version 1.4 of Docker Trusted Registry). It provides strong cryptographic guarantees over what code and what versions of software are being run in your infrastructure. Docker Content Trust integrates The Update Framework (TUF) into Docker using Notary, an open source tool that provides trust over any content.

Below is the list of CLI options available under the current release –
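As a minimal sketch of the basic workflow (the image name here is illustrative), enabling content trust is a single environment variable, after which pushes sign tags and pulls verify them:

$ export DOCKER_CONTENT_TRUST=1
$ docker push ajeetraina/myimage:1.0     # prompts for signing keys and signs the tag
$ docker pull ajeetraina/myimage:1.0     # fails unless the tag carries a valid signature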

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me at twitter @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

5 Minutes to Run Your First Docker Container on Google Cloud Platform using Terraform

Estimated Reading Time: 11 minutes

 

It’s 2018 ~ Define Your whole Cloud infrastructure via Terraform

I still remember those days (back in 2006–07) when I started my career as an IT consultant in a telecom R&D centre, where I used to administer Subversion & CVS repositories running on diversified Linux platforms. I consider it a dark age, where fear of downtime, fear of accidental misconfiguration and slow networks impacted the overall development, testing & go-to-market process.

Thanks to today's DevOps era, we now have a better way to do things: Infrastructure-as-Code (IaC). The goal of DevOps is to perform software delivery more efficiently, and we need tools to make this delivery quick and efficient; this is where tools like Terraform help companies with infrastructure as code and automation.

Terraform is an open source tool that allows you to define infrastructure for a variety of cloud providers (e.g. AWS, Azure, Google Cloud, DigitalOcean, etc.) using a simple, declarative programming language, and to deploy and manage that infrastructure using a few CLI commands. Terraform is a tool to Build, Change and Version Control your Infrastructure.

Building Infrastructure includes:

  • Talking to Multiple Cloud/Infrastructure Provider
  • Ensuring Creation & Consistency
  • Express in an API-agnostic DSL

Change Infrastructure includes:

  • Apply Incremental Changes
  • Destroy when needed
  • Preview Changes
  • Scale Easily

Version Control includes:

  • HCL (HashiCorp Configuration Language)
  • State file (don't store it in your GitHub repo)

Wait a sec... I have been using Ansible & Puppet. How is Terraform different from these CM tools?

You might have used technologies like Ansible, Chef, or Puppet to automate and provision software. Terraform starts from the same principle, infrastructure as code, but focuses on the automation of the infrastructure itself. Your whole cloud infrastructure (instances, volumes, networking, IPs) can easily be defined in Terraform.

Chef, Puppet & Ansible are "configuration management" tools, whereas Terraform is actually an orchestration tool, designed to provision the servers themselves. Tools like Chef, Puppet & Ansible typically default to a mutable-infrastructure paradigm: if you tell Puppet to install a new version of Docker, it runs the software update on your existing servers and the changes happen in place. If you're using an orchestration tool such as Terraform to deploy machine images created by Docker or Packer, then every "change" is actually a deployment of a new server (just like every "change" to a variable in functional programming actually returns a new variable). I recommend reading this if you have spare time to deep-dive into use cases around Terraform.

Under this blog post, I will show you how to run your first Docker Web container on Google Cloud Platform using Terraform in just 5 minutes. I will be running the below command under macOS High Sierra v10.13.3.

Installing Terraform on macOS

Installing Terraform on macOS is super easy – you are just one brew command away.

[Captains-Bay]🚩 >  brew install terraform
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 4 taps (jenkins-x/jx, homebrew/cask-versions, homebrew/core, homebrew/cask).
==> Updated Formulae
agda                globus-toolkit      pango               shibboleth-sp
bacula-fd           hadolint            pdftoedn            subversion
chronograf          jenkins-x/jx/jx     pdftoipe            tkdiff
conan               json-fortran        php@5.6             triton
convox              jsonnet             php@7.0             vdirsyncer
diff-pdf            kitchen-sync        php@7.1             xapian
fdroidserver        libdill             picard-tools        xml-tooling-c
fn                  lmod                poppler
git-annex           pandoc              rust

Warning: terraform 0.11.7 is already installed and up-to-date
To reinstall 0.11.7, run `brew reinstall terraform`

Clone the Repository

I have put all the required files for Terraform in my docker101 repository. Just clone it and you are ready to go:

[Captains-Bay]🚩 >  git clone https://github.com/ajeetraina/docker101
Cloning into 'docker101'...
remote: Counting objects: 5637, done.
remote: Compressing objects: 100% (155/155), done.
remote: Total 5637 (delta 117), reused 114 (delta 57), pack-reused 5422
Receiving objects: 100% (5637/5637), 17.67 MiB | 432.00 KiB/s, done.
Resolving deltas: 100% (1821/1821), done.

Change directory to Terraform-GCP location

As I am planning to write dozens of articles around Terraform as IaC, I have arranged them under the automation/terraform/<platform> folder. You can keep an eye on this repository to learn more from my exploration:

[Captains-Bay]🚩 >  pwd
/Users/ajeetraina/docker101/automation/terraform/googlecloud/building-first-instance
[Captains-Bay]🚩 >  tree
.
├── README.md
├── compute.tf
├── first-docker-container
│   ├── README.md
│   ├── main.tf
│   ├── output.tf
│   ├── terraform-provider-google
│   └── variables.tf
├── google-compute-firewall.tf
├── provider.tf
└── terraform-account.json

2 directories, 9 files
[Captains-Bay]🚩 >

Let us spend some time understanding the essential concepts of Terraform before we move ahead.

A Quick Look at Terraform Module

Modules in the Terraform ecosystem are a way to organize the code to be more reusable, to avoid code duplication & to improve the code organisation and its readability. By using modules, you will save time because you write the code once, test it and reuse it many times with different parameters.

The below main.tf is the main configuration file for Terraform. It starts with the definition of a provider, which is responsible for understanding API interactions and exposing resources. Providers generally are IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS services (e.g. Terraform Enterprise, DNSimple, CloudFlare). The Google Cloud provider is used to interact with Google Cloud services and needs to be configured with the proper credentials before it can be used. We are targeting Google Cloud as our provider, hence the definition looks like this:

provider "google" {
  region      = "${var.region}"
  project     = "${var.project_name}"
  credentials = "${file("${var.credentials_file_path}")}"
}


resource "google_compute_instance" "docker" {
  count = 1

  name         = "tf-docker-${count.index}"
  machine_type = "f1-micro"
  zone         = "${var.region_zone}"
  tags         = ["docker-node"]

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-1404-trusty-v20160602"
    }
  }

  network_interface {
    network = "default"

    access_config {
      # Ephemeral
    }
  }

  metadata {
    ssh-keys = "root:${file("${var.public_key_path}")}"
  }


  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      user        = "root"
      private_key = "${file("${var.private_key_path}")}"
      agent       = false
    }

    inline = [
      "sudo curl -sSL https://get.docker.com/ | sh",
      "sudo usermod -aG docker `echo $USER`",
      "sudo docker run -d -p 80:80 nginx"
    ]
  }

  service_account {
    scopes = ["https://www.googleapis.com/auth/compute.readonly"]
  }
}

resource "google_compute_firewall" "default" {
  name    = "tf-www-firewall"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["80"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["docker-node"]
}

If you have used Google Cloud in the past, you will surely find it easy to understand. If not, I suggest you get your hands dirty by creating your first Google Cloud instance and running your first Docker container.

File: variables.tf

Input variables serve as parameters for a Terraform module. When used in the root module of a configuration, variables can be set from CLI arguments and environment variables. Below is the variables file for our GCP instance, which specifies the region, project ID, credentials file, and private and public SSH keys.

[Captains-Bay]🚩 >  cat variables.tf
variable "region" {
  default = "us-central1"
}

variable "region_zone" {
  default = "us-central1-f"
}

variable "project_name" {
  description = "The ID of the Google Cloud project"
}

variable "credentials_file_path" {
  description = "Path to the JSON file used to describe your account credentials"
  default     = "~/.gcloud/Terraform.json"
}

variable "public_key_path" {
  description = "Path to file containing public key"
  default     = "~/.ssh/gcloud_id_rsa.pub"
}

variable "private_key_path" {
  description = "Path to file containing private key"
  default     = "~/.ssh/gcloud_id_rsa"
}
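Any of these defaults can be overridden at plan/apply time with the -var flag (the project ID below is illustrative):

[Captains-Bay]🚩 >  terraform plan -var="project_name=my-gcp-project" -var="region_zone=us-central1-a"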

Generating Key

[Captains-Bay]🚩 >  ssh-keygen -f ~/.ssh/gcloud_id_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/ajeetraina/.ssh/gcloud_id_rsa.
Your public key has been saved in /Users/ajeetraina/.ssh/gcloud_id_rsa.pub.
The key fingerprint is:
SHA256:ebJlwBe0MCFQv49bLHtfXZ2havVf89sulIYE2PDHXBI ajeetraina@Ajeets-MacBook-Air.local
The key's randomart image is:
+---[RSA 2048]----+
|    .oo =*o E..  |
|       +.+o= o   |
|        + +.+  . |
|         = .. . +|
|        S +. + oo|
|         X  + =..|
|        + +o o.oo|
|         =o  .. *|
|        o. ..  +*|
+----[SHA256]-----+
[Captains-Bay]🚩 >

Download the credential File from Google Cloud Console

You need to download a credentials file that contains your service account private key in JSON format. You can download your existing Google Cloud service account file from the Google Cloud Console, or you can create a new one from the same page.

Ensure that you create an empty directory .gcloud under your home directory and place this JSON file in it.

[Captains-Bay]🚩 >  mkdir ~/.gcloud
[Captains-Bay]🚩 >  cd ~/.gcloud/
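The variables.tf above defaults credentials_file_path to ~/.gcloud/Terraform.json, so either rename your downloaded key to match (the download filename below is hypothetical) or override the variable to point at your file:

[Captains-Bay]🚩 >  mv ~/Downloads/my-service-account.json ~/.gcloud/Terraform.json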

Initializing Terraform

The terraform init command is used to initialize a working directory containing Terraform configuration files. This is the first command that should be run after writing a new Terraform configuration or cloning an existing one from version control. By default, terraform init assumes that the working directory already contains a configuration and will attempt to initialize that configuration. You can run this command multiple times; it is safe!

terraform init
[Captains-Bay]🚩 >  terraform plan
var.project_name
  The ID of the Google Cloud project

  Enter a value: i-guru-209217

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

google_compute_instance.www: Refreshing state... (ID: tf-www-0)
google_compute_firewall.default: Refreshing state... (ID: tf-docker-firewall)

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + google_compute_firewall.default
      id:                                                  <computed>
      allow.#:                                             "1"
      allow.272637744.ports.#:                             "1"
      allow.272637744.ports.0:                             "80"
      allow.272637744.protocol:                            "tcp"
      destination_ranges.#:                                <computed>
      direction:                                           <computed>
      name:                                                "tf-www-firewall"
      network:                                             "default"
      priority:                                            "1000"
      project:                                             <computed>
      self_link:                                           <computed>
      source_ranges.#:                                     "1"
      source_ranges.1080289494:                            "0.0.0.0/0"
      target_tags.#:                                       "1"
      target_tags.1090984259:                              "docker-node"

  + google_compute_instance.docker
      id:                                                  <computed>
      boot_disk.#:                                         "1"
      boot_disk.0.auto_delete:                             "true"
      boot_disk.0.device_name:                             <computed>
      boot_disk.0.disk_encryption_key_sha256:              <computed>
      boot_disk.0.initialize_params.#:                     "1"
      boot_disk.0.initialize_params.0.image:               "ubuntu-os-cloud/ubuntu-1404-trusty-v20160602"
      boot_disk.0.initialize_params.0.size:                <computed>
      boot_disk.0.initialize_params.0.type:                <computed>
      can_ip_forward:                                      "false"
      cpu_platform:                                        <computed>
      create_timeout:                                      "4"
      deletion_protection:                                 "false"
      guest_accelerator.#:                                 <computed>
      instance_id:                                         <computed>
      label_fingerprint:                                   <computed>
      machine_type:                                        "f1-micro"
      metadata.%:                                          "1"
      metadata.ssh-keys:                                   "root:ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDW4qyWPIaZg0fu5QMMgVRc96Nv1C2ft2k+cy6bkf0fz5WjZTDWaGRlvkdt7eZqFd5I7C+9frYfwUpBMAJ+lu2nK2xKxTjPUC/PGuhgIVz+AzJX1Rz1RxaOr//xMDvlYDvoQesRO/EMqb31uYPTY/WZVz8k+joj7OMQHkDwZo/Al5a8uSmkHQ6sPQ2mPusT7p7bFfe9M/xQxVBeWtvfAXtXTFRhGecLPByQQ3RogDMO5TvUh3/tURt54OmQNnqzRf36o9Nh69jxhSpbMrRr3ViWZADcyNnD0eECec+1d/3JzbZqoMmUhm5Jpiua+iEPYOj8WbvrU6j4GCuhth0HWSuP ajeetraina@Ajeets-MacBook-Air.local\n"
      metadata_fingerprint:                                <computed>
      name:                                                "tf-docker-0"
      network_interface.#:                                 "1"
      network_interface.0.access_config.#:                 "1"
      network_interface.0.access_config.0.assigned_nat_ip: <computed>
      network_interface.0.access_config.0.nat_ip:          <computed>
      network_interface.0.access_config.0.network_tier:    <computed>
      network_interface.0.address:                         <computed>
      network_interface.0.name:                            <computed>
      network_interface.0.network:                         "default"
      network_interface.0.network_ip:                      <computed>
      network_interface.0.subnetwork_project:              <computed>
      project:                                             <computed>
      scheduling.#:                                        <computed>
      self_link:                                           <computed>
      service_account.#:                                   "1"
      service_account.0.email:                             <computed>
      service_account.0.scopes.#:                          "1"
      service_account.0.scopes.2862113455:                 "https://www.googleapis.com/auth/compute.readonly"
      tags.#:                                              "1"
      tags.1090984259:                                     "docker-node"
      tags_fingerprint:                                    <computed>
      zone:                                                "us-central1-f"


Plan: 2 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

Verifying if Docker is installed on GCP instance
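To run the checks below, first SSH into the new instance – a sketch using the key pair generated earlier and the ephemeral NAT IP that terraform show reports (the remote-exec provisioner above injects the key for root):

[Captains-Bay]🚩 >  ssh -i ~/.ssh/gcloud_id_rsa root@35.226.155.224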

@tf-docker-0:~$ sudo docker version
Client:
 Version:           18.06.0-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        0ffa825
 Built:             Wed Jul 18 19:10:22 2018
 OS/Arch:           linux/amd64
 Experimental:      false
Server:
 Engine:
  Version:          18.06.0-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       0ffa825
  Built:            Wed Jul 18 19:08:26 2018
  OS/Arch:          linux/amd64
  Experimental:     false
@tf-docker-0:~$ 

[Captains-Bay]🚩 >  terraform show
google_compute_firewall.default:
  id = tf-www-firewall
  allow.# = 1
  allow.272637744.ports.# = 1
  allow.272637744.ports.0 = 80
  allow.272637744.protocol = tcp
  deny.# = 0
  description =
  destination_ranges.# = 0
  direction = INGRESS
  disabled = false
  name = tf-www-firewall
  network = https://www.googleapis.com/compute/v1/projects/i-guru-209217/global/networks/default
  priority = 1000
  project = i-guru-209217
  self_link = https://www.googleapis.com/compute/v1/projects/i-guru-209217/global/firewalls/tf-www-firewall
  source_ranges.# = 1
  source_ranges.1080289494 = 0.0.0.0/0
  source_service_accounts.# = 0
  source_tags.# = 0
  target_service_accounts.# = 0
  target_tags.# = 1
  target_tags.1090984259 = docker-node
google_compute_instance.docker:
  id = tf-docker-0
  attached_disk.# = 0
  boot_disk.# = 1
  boot_disk.0.auto_delete = true
  boot_disk.0.device_name = persistent-disk-0
  boot_disk.0.disk_encryption_key_raw =
  boot_disk.0.disk_encryption_key_sha256 =
  boot_disk.0.initialize_params.# = 1
  boot_disk.0.initialize_params.0.image = https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1404-trusty-v20160602
  boot_disk.0.initialize_params.0.size = 10
  boot_disk.0.initialize_params.0.type = pd-standard
  boot_disk.0.source = https://www.googleapis.com/compute/v1/projects/i-guru-209217/zones/us-central1-f/disks/tf-docker-0
  can_ip_forward = false
  cpu_platform = Intel Ivy Bridge
  create_timeout = 4
  deletion_protection = false
  guest_accelerator.# = 0
  instance_id = 5050855407468093023
  label_fingerprint = 42WmSpB8rSM=
  labels.% = 0
  machine_type = f1-micro
  metadata.% = 1
  metadata.ssh-keys = root:ssh-rsa XXXX

  metadata_fingerprint = CXEyE8jgfhM=
  metadata_startup_script =
  min_cpu_platform =
  name = tf-docker-0
  network_interface.# = 1
  network_interface.0.access_config.# = 1
  network_interface.0.access_config.0.assigned_nat_ip = 35.226.155.224
  network_interface.0.access_config.0.nat_ip = 35.226.155.224
  network_interface.0.access_config.0.network_tier = PREMIUM
  network_interface.0.access_config.0.public_ptr_domain_name =
  network_interface.0.address = 10.128.0.2
  network_interface.0.alias_ip_range.# = 0
  network_interface.0.name = nic0
  network_interface.0.network = https://www.googleapis.com/compute/v1/projects/i-guru-209217/global/networks/default
  network_interface.0.network_ip = 10.128.0.2
  network_interface.0.subnetwork = https://www.googleapis.com/compute/v1/projects/i-guru-209217/regions/us-central1/subnetworks/default
  network_interface.0.subnetwork_project = i-guru-209217
  project = i-guru-209217
  scheduling.# = 1
  scheduling.0.automatic_restart = false
  scheduling.0.on_host_maintenance = MIGRATE
  scheduling.0.preemptible = false
  scratch_disk.# = 0
  self_link = https://www.googleapis.com/compute/v1/projects/i-guru-209217/zones/us-central1-f/instances/tf-docker-0
  service_account.# = 1
  service_account.0.email = 737359258701-compute@developer.gserviceaccount.com
  service_account.0.scopes.# = 1
  service_account.0.scopes.2862113455 = https://www.googleapis.com/auth/compute.readonly
  tags.# = 1
  tags.1090984259 = docker-node
  tags_fingerprint = KMHM74J1xug=
  zone = us-central1-f

Verifying if the Nginx container is running

Run the below command directly on the Google Cloud instance:

inc@tf-docker-0:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
a6df1767bb64        nginx               "nginx -g 'daemon of…"   3 minutes ago       Up 3 minutes        0.0.0.0:80->80/tcp   elastic_pare
[Captains-Bay]🚩 >  curl 35.226.155.224
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

In my future blogs, I will show how to setup Docker Swarm Mode and Kubernetes cluster using Terraform. Stay tuned !

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me at twitter @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

A First Look at Docker Application Package “docker-app”

Estimated Reading Time: 10 minutes

 

Did you know? There are more than 300,000 Docker Compose files on GitHub.

Docker Compose is a tool for defining and running multi-container Docker applications. It is an amazing developer tool for creating the development environment for your application stack. It allows you to define each component of your application following a clear and simple syntax in YAML files, and it works in all environments: production, staging, development, testing, as well as CI workflows. Though Compose files make it easy to describe a set of related services, a couple of problems have emerged in the past. One of the major concerns has been around deploying the same application to multiple environments with small configuration differences.

Consider a scenario where you have separate development, test, and production environments for your web application. Under the development environment, your team might be spending time building up the web application (say, WordPress), developing WP plugins and templates, debugging issues etc. When you are in development you'll probably want to check your code changes in real time. The usual way to do this is mounting a volume with your source code in the container that has the runtime of your application. But for production this works differently: before you host your web application in the production environment, you might want to turn off debug mode and host it under the right port so as to test your application's usability and accessibility. In production you have a cluster with multiple nodes, and in most cases the volume is local to the node where your container (or service) is running, so you cannot mount the source code without complex machinery involving code synchronization, signals, etc. In a nutshell, this might require multiple Docker Compose files, one per environment, and as the number of services grows it becomes more cumbersome to manage those pieces of Compose files. Hence, we need a tool which lets Compose files be shared across different environments seamlessly.

To solve this problem, Docker, Inc recently announced a new tool called "docker-app" (Application Packages) which makes Compose files more reusable and shareable. This tool not only makes Compose files shareable but also provides a simplified approach to sharing a multi-service application (not just a Docker image) directly on Dockerhub.

 

 

Under this blog post, I will showcase how the docker-app tool makes it easier to use Docker Compose for sharing and collaboration, and then push the result directly to Dockerhub. Let us get started –

Prerequisite:

  • Click on the icon near the instance to choose the 3 Managers & 2 Worker Nodes template

Deploy 5 Node Swarm Mode Cluster

$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
juld0kwbajyn11gx3bon9bsct *   manager1            Ready               Active              Leader              18.03.1-ce
uu675q2209xotom4vys0el5jw     manager2            Ready               Active              Reachable           18.03.1-ce
05jewa2brfkvgzklpvlze01rr     manager3            Ready               Active              Reachable           18.03.1-ce
n3frm1rv4gn93his3511llm6r     worker1             Ready               Active                                  18.03.1-ce
50vsx5nvwx5rbkxob2ua1c6dr     worker2             Ready               Active                                  18.03.1-ce

Cloning the Repository

$ git clone https://github.com/ajeetraina/app
Cloning into 'app'...
remote: Counting objects: 14147, done.
remote: Total 14147 (delta 0), reused 0 (delta 0), pack-reused 14147
Receiving objects: 100% (14147/14147), 17.32 MiB | 18.43 MiB/s, done.
Resolving deltas: 100% (5152/5152), done.

Installing docker-app

wget https://github.com/docker/app/releases/download/v0.3.0/docker-app-linux.tar.gz
tar xf docker-app-linux.tar.gz
cp docker-app-linux /usr/local/bin/docker-app

OR

$ ./install.sh
Connecting to github.com (192.30.253.112:443)
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (52.216.227.152:443)
docker-app-linux.tar 100% |**************************************************************|  8780k  0:00:00 ETA
[manager1] (local) root@192.168.0.13 ~/app
$ 

Verify docker-app version

$ docker-app version
Version:      v0.3.0
Git commit:   fba6a09
Built:        Fri Jun 29 13:09:30 2018
OS/Arch:      linux/amd64
Experimental: off
Renderers:    none

The docker-app tool comes with various options as shown below:

$ docker-app
Build and deploy Docker applications.

Usage:
  docker-app [command]

Available Commands:
  deploy      Deploy or update an application
  helm        Generate a Helm chart
  help        Help about any command
  init        Start building a Docker application
  inspect     Shows metadata and settings for a given application
  ls          List applications.
  merge       Merge the application as a single file multi-document YAML
  push        Push the application to a registry
  render      Render the Compose file for the application
  save        Save the application as an image to the docker daemon(in preparation for push)
  split       Split a single-file application into multiple files
  version     Print version information

Flags:
      --debug   Enable debug mode
  -h, --help    help for docker-app

Use "docker-app [command] --help" for more information about a command.
[manager1] (local) root@192.168.0.48 ~/app

WordPress Application under dev & Prod environment

If you browse to the app/examples/wordpress directory under the GitHub repo, you will see a folder called wordpress.dockerapp that contains three YAML documents:

  • metadata
  • the Compose file
  • settings for your application

Okay, fine! But how were those files created?

The docker-app tool comes with an "init" option which initializes an application with the above 3 YAML files; the directory structure can be created with the below command:

docker-app init --single-file wordpress

I have already created the directory structure for my environments, and you can find a few examples under this directory.

Listing the WordPress Application package related files/directories

$ ls
README.md            install-wp           with-secrets.yml
devel                prod                 wordpress.dockerapp

As you see above, I have created a folder for each environment – devel and prod. Under these directories, I have created prod-settings.yml and dev-settings.yml. You can view the content via this link.

WordPress Application Package for Dev Environ

I can pass the "-f" <YAML> parameter to the docker-app tool to render the respective environment settings seamlessly, as shown below:

$ docker-app render wordpress -f devel/dev-settings.yml
version: "3.6"
services:
  mysql:
    deploy:
      mode: replicated
      replicas: 1
      endpoint_mode: dnsrr
    environment:
      MYSQL_DATABASE: wordpressdata
      MYSQL_PASSWORD: wordpress
      MYSQL_ROOT_PASSWORD: wordpress101
      MYSQL_USER: wordpress
    image: mysql:5.6
    networks:
      overlay: null
    volumes:
    - type: volume
      source: db_data
      target: /var/lib/mysql
  wordpress:
    depends_on:
    - mysql
    deploy:
      mode: replicated
      replicas: 1
      endpoint_mode: vip
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_NAME: wordpressdata
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DEBUG: "true"
    image: wordpress
    networks:
      overlay: null
    ports:
    - mode: ingress
      target: 80
      published: 8082
      protocol: tcp
networks:
  overlay: {}
volumes:
  db_data:
    name: db_data

WordPress Application Package for Prod

Under the prod environment, I have the following content in prod/prod-settings.yml:

debug: false
wordpress:
  port: 80

For the production environment, I obviously want my application exposed on the standard port 80. Post-rendering, you should see port 80 exposed, as shown in the snippet below:

 image: wordpress
    networks:
      overlay: null
    ports:
    - mode: ingress
      target: 80
      published: 80
      protocol: tcp
networks:
  overlay: {}
volumes:
  db_data:
    name: db_data

 

Inspect the WordPress App

$ docker-app inspect wordpress
wordpress 1.0.0
Maintained by: ajeetraina <ajeetraina@gmail.com>

Welcome to Collabnix

Setting                       Default
-------                       -------
debug                         true
mysql.database                wordpressdata
mysql.image.version           5.6
mysql.rootpass                wordpress101
mysql.scale.endpoint_mode     dnsrr
mysql.scale.mode              replicated
mysql.scale.replicas          1
mysql.user.name               wordpress
mysql.user.password           wordpress
volumes.db_data.name          db_data
wordpress.port                8081
wordpress.scale.endpoint_mode vip
wordpress.scale.mode          replicated
wordpress.scale.replicas      1
[manager1] (local) root@192.168.0.13 ~/app/examples/wordpress
$

Deploying the WordPress App

$ docker-app deploy wordpress
Creating network wordpress_overlay
Creating service wordpress_mysql
Creating service wordpress_wordpress

Switching to Dev Environ

If I want to switch back to the dev environment, all I need to do is pass the dev-specific YAML file using the "-f" parameter, as shown below:

$docker-app deploy wordpress -f devel/dev-settings.yml


Switching to Prod Environ

$docker-app deploy wordpress -f prod/prod-settings.yml


[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app deploy -f devel/dev-settings.yml
Updating service wordpress_wordpress (id: l95b4s6xi7q5mg7vj26lhzslb)
Updating service wordpress_mysql (id: lhr4h2uaer861zz1b04pst5sh)
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app deploy -f prod/prod-settings.yml
Updating service wordpress_wordpress (id: l95b4s6xi7q5mg7vj26lhzslb)
Updating service wordpress_mysql (id: lhr4h2uaer861zz1b04pst5sh)
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Pushing Application Package to Dockerhub

So I have my application ready to be pushed to Dockerhub. Yes, you heard it right: application packages, NOT Docker images.

Let me first authenticate myself before I push it to Dockerhub registry:

$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to
 https://hub.docker.com to create one.
Username: ajeetraina
Password:
Login Succeeded
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Saving this Application Package as Docker Image

The docker-app CLI is feature-rich and allows you to save the entire application as a Docker image. Let's try it out –

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app save wordpress
Saved application as image: wordpress.dockerapp:1.0.0
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Listing out the images

$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
wordpress.dockerapp   1.0.0               c1ec4d18c16c        47 seconds ago      1.62kB
mysql                 5.6                 97fdbdd65c6a        3 days ago          256MB
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$

Listing out the services

$ docker stack services wordpress
ID                  NAME                  MODE                REPLICAS            IMAGE               PORTS
l95b4s6xi7q5        wordpress_wordpress   replicated          1/1                 wordpress:latest    *:80->80/tcp
lhr4h2uaer86        wordpress_mysql       replicated          1/1                 mysql:5.6
[manager1] (local) root@192.168.0.48 ~/docker101/play-with-docker/visualizer

Using docker-app ls command to list out the application packages

The ‘ls’ command has been recently introduced under v0.3.0. Let us try it once –

$ docker-app ls
REPOSITORY            TAG                 IMAGE ID            CREATED              SIZE
wordpress.dockerapp   1.0.1               299fb78857cb        About a minute ago   1.62kB
wordpress.dockerapp   1.0.0               c1ec4d18c16c        16 minutes ago       1.62kB

Pushing it to Dockerhub

$ docker-app push --namespace ajeetraina --tag 1.0.1
The push refers to repository [docker.io/ajeetraina/wordpress.dockerapp]
51cfe2cfc2a8: Pushed
1.0.1: digest: sha256:14145fc6e743f09f92177a372b4a4851796ab6b8dc8fe49a0882fc5b5c1be4f9 size: 524

Say you built the WordPress application package and pushed it to Dockerhub. Now one of your colleagues wants to pull it onto his development system and deploy it in his environment.

Pulling it from Dockerhub

$ docker pull ajeetraina/wordpress.dockerapp:1.0.1
1.0.1: Pulling from ajeetraina/wordpress.dockerapp
a59931d48895: Pull complete
Digest: sha256:14145fc6e743f09f92177a372b4a4851796ab6b8dc8fe49a0882fc5b5c1be4f9
Status: Downloaded newer image for ajeetraina/wordpress.dockerapp:1.0.1
[manager3] (local) root@192.168.0.24 ~/app
$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
ajeetraina/wordpress.dockerapp   1.0.1               299fb78857cb        8 minutes ago       1.62kB
[manager3] (local) root@192.168.0.24 ~/app
$

Deploying the Application

$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
ajeetraina/wordpress.dockerapp   1.0.1               299fb78857cb        9 minutes ago       1.62kB
[manager3] (local) root@192.168.0.24 ~/app
$ docker-app deploy ajeetraina/wordpress
Creating network wordpress_overlay
Creating service wordpress_mysql
Creating service wordpress_wordpress
[manager3] (local) root@192.168.0.24 ~/app
$

Using the docker-app merge option

The Docker team introduced the docker-app merge option in the new 0.3.0 release. It merges the three files of an application package – metadata, Compose file and settings – into a single file:

[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ docker-app merge -o mywordpress
[manager1] (local) root@192.168.0.48 ~/app/examples/wordpress
$ ls
README.md            install-wp           prod                 wordpress.dockerapp
devel                mywordpress          with-secrets.yml
$ cat mywordpress
version: 1.0.1
name: wordpress
description: "Welcome to Collabnix"
maintainers:
  - name: ajeetraina
    email: ajeetraina@gmail.com
targets:
  swarm: true
  kubernetes: true

---
version: "3.6"

services:

  mysql:
    image: mysql:${mysql.image.version}
    environment:
      MYSQL_ROOT_PASSWORD: ${mysql.rootpass}
      MYSQL_DATABASE: ${mysql.database}
      MYSQL_USER: ${mysql.user.name}
      MYSQL_PASSWORD: ${mysql.user.password}
    volumes:
       - source: db_data
         target: /var/lib/mysql
         type: volume
    networks:
       - overlay
    deploy:
      mode: ${mysql.scale.mode}
      replicas: ${mysql.scale.replicas}
      endpoint_mode: ${mysql.scale.endpoint_mode}

  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_USER: ${mysql.user.name}
      WORDPRESS_DB_PASSWORD: ${mysql.user.password}
      WORDPRESS_DB_NAME: ${mysql.database}
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DEBUG: ${debug}
    ports:
      - "${wordpress.port}:80"
    networks:
      - overlay
    deploy:
      mode: ${wordpress.scale.mode}
      replicas: ${wordpress.scale.replicas}
      endpoint_mode: ${wordpress.scale.endpoint_mode}
    depends_on:
      - mysql

volumes:
  db_data:
     name: ${volumes.db_data.name}

networks:
  overlay:

---
debug: true
mysql:
  image:
    version: 5.6
  rootpass: wordpress101
  database: wordpressdata
  user:
    name: wordpress
    password: wordpress
  scale:
    endpoint_mode: dnsrr
    mode: replicated
    replicas: 1
wordpress:
  scale:
    mode: replicated
    replicas: 1
    endpoint_mode: vip
  port: 8081
volumes:
  db_data:
    name: db_data

docker-app comes with a few other helpful commands as well, in particular the ability to create Helm Charts from your Docker Applications. This can be useful if you’re adopting Kubernetes and standardising on Helm to manage the lifecycle of your application components, but want to maintain the simplicity of Compose when writing your applications. This also makes it easy to run the same applications locally just using Docker, if you don’t want to be running a full Kubernetes cluster.
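For instance, generating a Helm chart from the WordPress package above should be a one-liner. A sketch based on the helm subcommand shipped with docker-app (the exact flags and output layout may differ between releases):

$ docker-app helm wordpress

This renders the package into a chart directory next to it, which you can then install with the regular helm CLI.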

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me at twitter @ajeetsraina.

If you want to keep track of latest Docker related information, follow me at https://www.linkedin.com/in/ajeetsraina/.

Top 5 Most Exciting Dockercon 2018 Announcements

Estimated Reading Time: 8 minutes

 

 

Yet another amazing Dockercon!

I attended Dockercon 2018 last week, which took place in the most geographically blessed city of Northern California – San Francisco – and in the largest convention and exhibition complex, Moscone Center. With 5000+ attendees from around the globe, 100+ sponsors, Hallway Tracks, workshops & hands-on labs, Dockercon allowed developers, sysadmins, Product Managers & industry evangelists to come together and share their wealth of experience around container technology. This time I was lucky enough to get a chance to visit Docker HQ on Townsend Street for the first time. It was an emotional as well as proud feeling to be part of such a vibrant community home.

 

 

This Dockercon, there were a couple of exciting announcements. Three of the new features were targeted at Docker EE, while two were for Docker Desktop. Here’s a rundown of what I think are the 5 most exciting announcements made last week:

 

Under this blog post, I will go through each one of the announcements in detail.

1. Federated Application Management in Docker Enterprise Edition

 

With an estimated 85% of today’s enterprise IT organizations employing a multi-cloud strategy, it has become more critical that customers have a ‘single pane of glass’ for managing their entire application portfolio. Most enterprise organisations have a hybrid and multi-cloud strategy. Containers have helped make applications portable, but let us accept the fact that even though containers are portable today, the management of containers is still a nightmare. The reasons being –

  • Each Cloud is managed under a separate operational model, duplicating efforts
  • Different security and access policies across each platform
  • Content is hard to distribute and track
  • Infrastructure utilisation remains poor
  • Emergence of Cloud-hosted K8s is exacerbating the challenges with managing containerised applications across multiple Clouds

This time Docker introduced new application management capabilities for Docker Enterprise Edition that will allow organisations to federate applications across Docker Enterprise Edition environments deployed on-premises and in the cloud, as well as across cloud-hosted Kubernetes. This includes Azure Kubernetes Service (AKS), AWS Elastic Container Service for Kubernetes (EKS), and Google Kubernetes Engine (GKE). The federated application management feature will automate the management and security of container applications on premises and across Kubernetes-based cloud services. It will provide a single management platform to enterprises so that they can centrally control and secure the software supply chain for all their containerized applications.

With this announcement, undoubtedly Docker Enterprise Edition is the only enterprise-ready container platform that can deliver federated application management with a secure supply chain. Not only does Docker give you your choice of Linux distribution or Windows Server, the choice of running in a virtual machine or on bare metal, running traditional or microservices applications with either Swarm or Kubernetes orchestration, it also gives you the flexibility to choose the right cloud for your needs.

 

 

Below is the list of use cases that are driving the need for federated management of containerised applications –

 

If you want to read more about it, please refer to this official blog.

 

2. Kubernetes Support for Windows Server Container in Docker Enterprise Edition

The partnership between Docker and Microsoft is not new. They have been working together since 2014 to bring containers to Windows and .NET applications. This DockerCon, Docker & Microsoft both shared the next step in this partnership with the preview and demonstration of Kubernetes support on Windows Server with Docker Enterprise Edition.

With this announcement, Docker is the only platform to support production-grade Linux and Windows containers, as well as dual orchestration options with Swarm and Kubernetes.

There has been a rapid rise of Windows containers as organizations recognize the benefits of containerisation and want to apply them across their entire application portfolio and not just their Linux-based applications.

Docker and Microsoft brought container technology into Windows Server 2016, ensuring consistency for the same Docker Compose file and CLI commands across both Linux and Windows. Windows Server ships with a Docker Enterprise Edition engine, meaning all Windows containers today are based on Docker. Recognizing that most enterprise organizations have both Windows and Linux applications in their environment, we followed that up in 2017 with the ability to manage mixed Windows and Linux clusters in the same Docker Enterprise Edition environment, enabling support for hybrid applications and driving higher efficiencies and lower overhead for organizations. Using Swarm orchestration, operations teams could support different application teams with secure isolation between them, while also allowing Windows and Linux containers to communicate over a common overlay network.

If you want to know further details, refer to this official blog.

3. Docker Desktop Template-Based Workflows for Enterprise Developers

Dockercon 2018 was NOT just for Enterprise customers, but also for Developers. Talking about the new capabilities for Docker Desktop, it is getting new template-based workflows which will enable developers to build new containerized applications without having to learn Docker commands or write Dockerfiles. These template-based workflows will also help development teams share their own practices within the organisation.

On the 1st day of Dockercon, the Docker team previewed an upcoming Docker Desktop feature that will make it easier than ever to design your own container-based applications. For a certain set of developers, the current iteration of Docker Desktop has everything one might need to containerize an application, but it does require an understanding of the Dockerfile and Compose file specifications in order to get started, and of the Docker CLI to build and run your applications.

In the upcoming Docker Desktop release, you can expect the below features –

  • You will see a new option – “Design New Application” – under the Preference Pane UI.
  • It will be a 100% graphical tool/feature.
  • This tool is a gift for anyone who doesn’t want to write Dockerfiles or Docker Compose files.
  • Once a user clicks the button to start the “Custom application” workflow, they will be presented with a list of services which they can add to the application.
  • Each selected service will eventually become a container in the final application, but Docker Desktop will take care of creating the Dockerfiles and Compose files in later steps.
  • Under this beta release, one can currently do some basic customization to each service, like changing versions, port numbers, and a few other options depending on the service selected.
  • When all the services are selected and you are ready to proceed, give the application a name, specify where to store the files that will be generated, and then hit the “Assemble” button.
  • The assemble step creates the Dockerfiles for each service, the Compose file used to start the entire application, and, for most services, some basic code stubs, giving you enough to start the application.

 

 

If you’re interested in getting early access to the new app design feature in Docker Desktop, please sign up at beta.docker.com.

4. Making Compose Easier to Use with Application Packages

Soon after Dockercon, one of the most promising tools announced for Developers was Docker Application Packages (docker-app). The “docker-app” is an experimental utility to help make Compose files more reusable and sharable.

What problem do application packages solve?

Compose files do a great job of describing a set of related services. Not only are Compose files easy to write, they are generally easy to read as well. However, a couple of problems often emerge:

  1. You have several environments where you want to deploy the application, with small configuration differences
  2. You have lots of similar applications

Fundamentally, Compose files are not easy to share and reuse across these situations. Docker Application Packages aim to solve these problems and make Compose more useful for development and production.
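As a quick illustration of how docker-app tackles the first problem: the render subcommand substitutes settings into the Compose template and prints a plain Compose file, and individual values can be overridden per environment straight from the command line. A sketch, assuming the -s/--set override flag that docker-app provides for settings:

$ docker-app render wordpress -f devel/dev-settings.yml -s wordpress.port=8082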

In my next blog post, I will talk more about this tool. If you want to try your hands on it, head over to https://github.com/docker/app

5. Upcoming Support for Serverless Platform under Docker EE

Recently, the Function as a Service (FaaS) programming paradigm has gained a lot of traction in the cloud community. At first, only large cloud providers offered such services – AWS Lambda, Google Cloud Functions or Azure Functions – with a pay-per-invocation model, but since then interest has increased among developers and enterprises who want to build their own solutions on an open source model.

This Dockercon, Docker identified at least 9 different frameworks, out of which the following six – OpenFaaS, nuclio, Gestalt, Riff, Fn and OpenWhisk – were already confirmed to be supported under the upcoming Docker Enterprise Edition. Docker, Inc. started an open source repository to document how to install all these frameworks on Kubernetes on Docker EE, with the goal of providing a benchmark of these frameworks: the docker serverless benchmark GitHub repository. Pull Requests are welcome to document how to install other serverless frameworks on Docker EE.

 

Did you find this blog helpful? I am really excited about the upcoming Docker days and feel that these upcoming features will really excite the community. If you have any questions, join me this July 7th at the Docker Bangalore Meetup, Nutanix Office, where I am going to go deeper into the Dockercon 2018 announcements. See you there!

 

Kubernetes Application Deployment Made Easy using Helm on Docker for Mac 18.05.0 CE

Estimated Reading Time: 10 minutes

 

Docker for Mac 18.05.0 CE went GA last month. With this release, you can now select your orchestrator directly from the UI in the “Kubernetes” pane, which allows “docker stack” commands to deploy to Swarm clusters even if Kubernetes is enabled in Docker for Mac. This is the first time this feature has been introduced under any Desktop Edition. To try it out, ensure that you are using the Edge Release of Docker for Mac 18.05.0 CE. Once you update your Docker for Mac, you can find this new feature by opening the Preference Pane UI and then selecting Kubernetes, as shown below:

 

 

 

Whenever you select your choice of orchestrator, Docker for Mac updates the ~/.docker/config.json file in the background, as shown below:
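For instance, with Kubernetes selected as the orchestrator, the file should carry an entry along these lines (a sketch; your config.json will typically hold other keys too, such as auths):

[Captains-Bay]🚩 >  cat ~/.docker/config.json
{
  "stackOrchestrator" : "kubernetes"
}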

 

Docker for Mac is used every day by hundreds of thousands of developers to build, test and debug containerized apps in their local and dev environments. Developers building both docker-compose and Swarm-based apps, and apps destined for deployment on Kubernetes, now get a simple-to-use development system that takes optimal advantage of their laptop or workstation. All container tasks – build, run and push – run on the same Docker instance with a shared set of images, volumes and containers. And with the current release, it is way simpler to install, so you can have Docker containers running on your Mac in just a few minutes.

Check out the curated list of blogs around Docker for Mac

 

Docker for Mac is built with LinuxKit. How to access the LinuxKit VM
Top 5 Exclusive Features of Docker for Mac That you can’t afford to ignore
5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0
Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0
2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI
Docker For Mac 1.13.0 brings support for macOS Sierra, now runs ARM & AARCH64 based Docker containers
Docker for Mac 18.03.0 now comes with NFS Volume Sharing Support for Kubernetes

Docker for Mac provides the docker stack command to deploy your application to both Swarm and Kubernetes. This becomes very useful for Docker Swarm users, as they can use the same Swarm CLI to bring up applications on Kubernetes. But here is an extra bonus – Docker for Mac now works flawlessly with the Helm Package Manager.

Why Yet another Package Manager?

Let’s accept the fact that Kubernetes can become very complex with all the objects you need to handle ― such as ConfigMaps, services, pods, Persistent Volumes ― in addition to the number of releases you need to manage. These can be managed with Kubernetes Helm, which offers a simple way to package everything into one simple application and advertises what you can configure.

In case you are completely new to it – Helm is an open source project that enables developers to create packages of containerized apps to make installation much simpler. Helm is the package manager for Kubernetes, and it’s the best way to find, share, and deploy software to k8s. The project was initially created by Deis and has since been donated to the Cloud Native Computing Foundation (CNCF).

Users can install Helm with one click or configure it to suit their organization’s needs. For example, if you want to package and release version 1.0, making only certain parts configurable, this can be done with Helm. Then with version 2.0, additional parts can be made configurable.

Up until now, it was a sub-project of Kubernetes, the popular container orchestration tool, but as of today it is a stand-alone project.

 

Helm is built on three important concepts:

  • Charts – a bundle of information necessary to create an instance of a Kubernetes application
  • Config – contains configuration information that can be merged into a packaged chart to create a releasable object
  • Release – a running instance of a chart, combined with a specific config

Architecture of Helm:

Architecturally it’s built on two major components:

 

Helm Client, a command line tool with the following responsibilities:

  • Interacting with the Tiller server
  • Sending charts to be installed
  • Upgrading or uninstalling existing releases
  • Managing repositories

Tiller Server, an in-cluster server with the following responsibilities:

  • Interacting with the Helm client
  • Interfacing with the Kubernetes API server
  • Combining a chart and configuration to build a release
  • Installing charts and tracking the release
  • Upgrading and uninstalling charts

Both the Helm client and Tiller are written in Go and use gRPC to interact with each other. Tiller (the server part running inside Kubernetes) provides a gRPC server to connect with the client, and it uses the k8s client library to communicate with Kubernetes. It does not require its own database, as the information is stored within Kubernetes as ConfigMaps.
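You can verify this yourself once Tiller is up: every release revision is stored as a ConfigMap in the kube-system namespace. A quick check, assuming the OWNER=TILLER label that Helm v2 puts on its release ConfigMaps:

[Captains-Bay]🚩 >  kubectl get configmaps --namespace kube-system -l "OWNER=TILLER"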

 

Installing Helm

Pre-requisites:

  • Docker for Mac 18.05.0 CE – Edge Release
  • Enable Kubernetes under Preference Pane UI

To install Helm, you just need a one-liner on your macOS:

[Captains-Bay]🚩 >  brew install kubernetes-helm
[Captains-Bay]🚩 >  helm
The Kubernetes package manager

To begin working with Helm, run the 'helm init' command:

	$ helm init

This will install Tiller to your running Kubernetes cluster.
It will also set up any necessary local configuration.

Common actions from this point include:

- helm search:    search for charts
- helm fetch:     download a chart to your local directory to view
- helm install:   upload the chart to Kubernetes
- helm list:      list releases of charts

Environment:
  $HELM_HOME          set an alternative location for Helm files. By default, these are stored in ~/.helm
  $HELM_HOST          set an alternative Tiller host. The format is host:port
  $HELM_NO_PLUGINS    disable plugins. Set HELM_NO_PLUGINS=1 to disable plugins.
  $TILLER_NAMESPACE   set an alternative Tiller namespace (default "kube-system")
  $KUBECONFIG         set an alternative Kubernetes configuration file (default "~/.kube/config")

Usage:
  helm [command]

Available Commands:
  completion  Generate autocompletions script for the specified shell (bash or zsh)
  create      create a new chart with the given name
  delete      given a release name, delete the release from Kubernetes
  dependency  manage a chart's dependencies
  fetch       download a chart from a repository and (optionally) unpack it in local directory
  get         download a named release
  history     fetch release history
  home        displays the location of HELM_HOME
  init        initialize Helm on both client and server
  inspect     inspect a chart
  install     install a chart archive
  lint        examines a chart for possible issues
  list        list releases
  package     package a chart directory into a chart archive
  plugin      add, list, or remove Helm plugins
  repo        add, list, remove, update, and index chart repositories
  reset       uninstalls Tiller from a cluster
  rollback    roll back a release to a previous revision
  search      search for a keyword in charts
  serve       start a local http web server
  status      displays the status of the named release
  template    locally render templates
  test        test a release
  upgrade     upgrade a release
  verify      verify that a chart at the given path has been signed and is valid
  version     print the client/server version information

Flags:
      --debug                           enable verbose output
  -h, --help                            help for helm
      --home string                     location of your Helm config. Overrides $HELM_HOME (default "/Users/ajeetraina/.helm")
      --host string                     address of Tiller. Overrides $HELM_HOST
      --kube-context string             name of the kubeconfig context to use
      --tiller-connection-timeout int   the duration (in seconds) Helm will wait to establish a connection to tiller (default 300)
      --tiller-namespace string         namespace of Tiller (default "kube-system")

Use "helm [command] --help" for more information about a command.

Verify the Helm version. If the server and client versions don’t match, you need to upgrade Tiller to deploy applications seamlessly (as shown below):

[Captains-Bay]🚩 >  helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
[Captains-Bay]🚩 >  helm init --upgrade
$HELM_HOME has been configured at /Users/ajeetraina/.helm.

Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
[Captains-Bay]🚩 >  helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
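If you want to double-check that the upgrade actually rolled out, look for the tiller-deploy pod in the kube-system namespace (assuming the default labels that helm init applies):

[Captains-Bay]🚩 >  kubectl get pods --namespace kube-system -l name=tiller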

Installing WordPress Application using Helm

Say you want to install a WordPress application using Helm. First you need to update the repository, and then you can search for the application using the helm search command, as shown below:

[Captains-Bay]🚩 > helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

[Captains-Bay]🚩 >  helm search wordpress
NAME            	CHART VERSION	APP VERSION	DESCRIPTION
stable/wordpress	1.0.2        	4.9.4      	Web publishing platform for building blogs and ...

Just a one-liner command and your WordPress application is up and running:

[Captains-Bay]🚩 >  helm install stable/wordpress --name mywp
NAME:   mywp
LAST DEPLOYED: Sat Jun  2 07:19:25 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
mywp-mariadb    1        1        1           0          0s
mywp-wordpress  1        1        1           0          0s

==> v1/Pod(related)
NAME                             READY  STATUS             RESTARTS  AGE
mywp-mariadb-b689ddf74-mlprh     0/1    Init:0/1           0         0s
mywp-wordpress-774555bd4b-hcdc2  0/1    ContainerCreating  0         0s

==> v1/Secret
NAME            TYPE    DATA  AGE
mywp-mariadb    Opaque  2     1s
mywp-wordpress  Opaque  2     1s

==> v1/ConfigMap
NAME                DATA  AGE
mywp-mariadb        1     1s
mywp-mariadb-tests  1     1s

==> v1/PersistentVolumeClaim
NAME            STATUS   VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
mywp-mariadb    Bound    pvc-2e1f1122-6607-11e8-8d79-025000000001  8Gi       RWO           hostpath      1s
mywp-wordpress  Pending  hostpath                                  1s

==> v1/Service
NAME            TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
mywp-mariadb    ClusterIP     10.98.65.236    <none>       3306/TCP                    0s
mywp-wordpress  LoadBalancer  10.109.204.199  localhost    80:31016/TCP,443:30638/TCP  0s


NOTES:
1. Get the WordPress URL:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace default -w mywp-wordpress'

  export SERVICE_IP=$(kubectl get svc --namespace default mywp-wordpress -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP/admin

2. Login with the following credentials to see your blog

  echo Username: user
  echo Password: $(kubectl get secret --namespace default mywp-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)

[Captains-Bay]🚩 >

Checking the Status of WordPress Application

[Captains-Bay]🚩 >  kubectl get svc --namespace default -w mywp-wordpress
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
mywp-wordpress   LoadBalancer   10.109.204.199   localhost     80:31016/TCP,443:30638/TCP   47s

That’s it. Just browse to WordPress using your localhost IP:
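Since the mywp-wordpress service above is exposed as a LoadBalancer with EXTERNAL-IP localhost, opening the blog is a single command:

[Captains-Bay]🚩 >  open http://localhost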

 

Cleaning up WordPress

[Captains-Bay] > helm delete mywp
release "mywp" deleted

Installing Prometheus Stack using Helm

[Captains-Bay]🚩 >  helm install stable/prometheus
NAME:   hasty-ladybug
LAST DEPLOYED: Sun Jun  3 09:00:30 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/PersistentVolumeClaim
NAME                                   STATUS  VOLUME                                    CAPACITY  ACCESS MODES  STORAGECLASS  AGE
hasty-ladybug-prometheus-alertmanager  Bound   pvc-7732a80b-66de-11e8-b7d4-025000000001  2Gi       RWO           hostpath      2s
hasty-ladybug-prometheus-server        Bound   pvc-773541f3-66de-11e8-b7d4-025000000001  8Gi       RWO           hostpath      2s

==> v1/ServiceAccount
NAME                                         SECRETS  AGE
hasty-ladybug-prometheus-alertmanager        1        2s
hasty-ladybug-prometheus-kube-state-metrics  1        2s
hasty-ladybug-prometheus-node-exporter       1        2s
hasty-ladybug-prometheus-pushgateway         1        2s
hasty-ladybug-prometheus-server              1        2s

==> v1beta1/ClusterRoleBinding
NAME                                         AGE
hasty-ladybug-prometheus-kube-state-metrics  2s
hasty-ladybug-prometheus-server              2s

==> v1beta1/DaemonSet
NAME                                    DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
hasty-ladybug-prometheus-node-exporter  1        1        0      1           0          <none>         2s

==> v1/Pod(related)
NAME                                                          READY  STATUS             RESTARTS  AGE
hasty-ladybug-prometheus-node-exporter-9ggqj                  0/1    ContainerCreating  0         2s
hasty-ladybug-prometheus-alertmanager-5c67b8b874-4xxtj        0/2    ContainerCreating  0         2s
hasty-ladybug-prometheus-kube-state-metrics-5cbcd4d86c-788p4  0/1    ContainerCreating  0         2s
hasty-ladybug-prometheus-pushgateway-c45b7fd6f-2wwzm          0/1    Pending            0         2s
hasty-ladybug-prometheus-server-799d6c7c75-jps8k              0/2    Init:0/1           0         2s

==> v1/ConfigMap
NAME                                   DATA  AGE
hasty-ladybug-prometheus-alertmanager  1     2s
hasty-ladybug-prometheus-server        3     2s

==> v1beta1/ClusterRole
NAME                                         AGE
hasty-ladybug-prometheus-kube-state-metrics  2s
hasty-ladybug-prometheus-server              2s

==> v1/Service
NAME                                         TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE
hasty-ladybug-prometheus-alertmanager        ClusterIP  10.96.193.91   <none>       80/TCP    2s
hasty-ladybug-prometheus-kube-state-metrics  ClusterIP  None           <none>       80/TCP    2s
hasty-ladybug-prometheus-node-exporter       ClusterIP  None           <none>       9100/TCP  2s
hasty-ladybug-prometheus-pushgateway         ClusterIP  10.97.92.108   <none>       9091/TCP  2s
hasty-ladybug-prometheus-server              ClusterIP  10.96.118.138  <none>       80/TCP    2s

==> v1beta1/Deployment
NAME                                         DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
hasty-ladybug-prometheus-alertmanager        1        1        1           0          2s
hasty-ladybug-prometheus-kube-state-metrics  1        1        1           0          2s
hasty-ladybug-prometheus-pushgateway         1        1        1           0          2s
hasty-ladybug-prometheus-server              1        1        1           0          2s


NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-server.default.svc.cluster.local


Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9090


The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-alertmanager.default.svc.cluster.local


Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9093


The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
hasty-ladybug-prometheus-pushgateway.default.svc.cluster.local


Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
[Captains-Bay]🚩 >  helm ls
NAME         	REVISION	UPDATED                 	STATUS  	CHART           	NAMESPACE
hasty-ladybug	1       	Sun Jun  3 09:00:30 2018	DEPLOYED	prometheus-6.7.0	default
mywp         	1       	Sat Jun  2 07:19:25 2018	DEPLOYED	wordpress-1.0.2 	default

Accessing the Prometheus UI in a Web Browser

export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090

Now you can access Prometheus:

open http://localhost:9090

 

Cleaning Up

[Captains-Bay]🚩 >  helm delete hasty-ladybug
release "hasty-ladybug" deleted

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me at twitter @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

How to Get the Most Out of Dockercon 2018 – (June 12 – 15th, SF)

Estimated Reading Time: 8 minutes

 

In a recent survey of 191 top Fortune 1000 executives, 69% of them said that conferences present a wealth of networking opportunities, which is clearly a driving force behind their decision making. What this means is that a conference is NOT only about attending sessions but also about –

  • Gaining visibility
  • Networking
  • Building strong relationships
  • Connecting with speakers
  • Connecting with your customers
  • Meeting like-minded people

Hence, attending a conference might be one of the best things you can do for your career. It is the time when you learn about new industry trends, gain new skills and make all kinds of new connections.

Dockercon 2018 is just a month away. It is happening this year at Moscone Center – one of the largest convention & exhibition complexes in San Francisco, California – from June 13-15, 2018. The event is expected to welcome 6,000+ developers, sysadmins, architects, VPs of Apps & other IT leaders to get hands-on with the latest innovations in the container ecosystem. This is where all general DockerCon activities such as keynotes, breakouts, networking, meals, etc. will take place.

 

Use code “CaptainAjeet” to get 10% off your Dockercon 2018 registration.

New to Dockercon?

DockerCon is an event where the Docker community comes to learn, belong, and collaborate. Attendees are a mix of beginner, intermediate and advanced users who are all looking to level up their skills and go home inspired. With 2 full days of training, more than 100 sessions, 100+ sponsors, free workshops & hands-on labs, and the wealth of experience brought by each attendee, DockerCon is the place to be if you’re looking to learn Docker in 2018.

I assume that you have been able to convince your boss to send you to Dockercon. Awesome! If you are still planning for it, here is a bonus – the Docker team has prepared this booklet for you.

Starting with booking your conference – the cost of many conferences increases substantially in the run-up to the event, so make sure you book as soon as possible to take advantage of early-bird registration rates. Booking in advance keeps your travel and accommodation costs lower too, so whether you’re funding yourself or spending from a grant, it’s always good to save a little cash along the way (which you can hopefully put towards your next conference!). But in case you missed it, here is the latest price list for the various training sessions you might be interested in investing in –

Please Note: Your full conference pass includes talks, workshops, keynotes, breakfast/lunch, unlimited coffee & access to the party. The training & certification packages are additional purchases & not included in the full conference pass. In case you’re new, please note that as a DockerCon attendee, you’ll have the opportunity to appear for and earn the ‘Docker Certified Associate’ designation, with the digital certificate and verification to prove it! You should be able to schedule your exam during the below timeframe:

 

Under this blog post, I have come up with a list of important suggestions on how to make the most out of this amazing event.

#1 – Let’s begin with Housekeeping stuff…

Based on my past experience, I suggest keeping your phone and laptop chargers with you. You’re going to spend a huge part of your day on your devices – don’t get caught with dead batteries. If you are travelling from India, be aware that a normal laptop charger plug mightn’t work for you; you might need to carry a US-compliant plug adapter for your laptop charger. During my first visit to the Docker Distributed Systems Summit in Berlin (Germany), I had a hard time looking for the right plug point for my laptop charger.

Secondly, ensure that you pack enough business cards. Make sure you have some on hand and a stash in your luggage – you never know how many people you’re going to meet. Thirdly, if you are a presenter or conducting a workshop at the conference, ensure that you bring the materials you need for demos. By no means should you spend the conference pitching to people who don’t want to be pitched. However, if one of those pre-set prospect meetings turns into a real sales opportunity, it’ll be more efficient – and impressive – if you can provide a walkthrough on the spot.

 

 

 

#2 – Prioritize Your Time

At any conference, your time will compete with many activities. How can you maximize your time while you’re there? Here are some things to consider before you arrive. Are there many tracks going on at once? Will you have access to video recordings after the conference? Will there be a lounge or hack area outside the sessions?

The one session you should not miss is the keynote. All conferences have a keynote; it dictates the rest of the conference with high-profile speakers or major announcements. The keynote is also the single common session for the entire conference, so you can use it to break the ice while networking. Below is a snapshot of the Dockercon schedule you can go through to plan and prioritize your time.

 

 

Conferences are broken down into “tracks” – these are different sets of simultaneous panels, talks or workshops that can be set up using a variety of criteria. Some of those criteria can be based on subject matter (e.g. Technology track, Business track, Legal track, etc.). Other times, they can be set up based on format (e.g. Talks, Panels, Workshops, etc.). Here is a list of sessions I would particularly recommend –

  • Kubernetes extensibility by Tim Hockin and Eric Tune (Google)
  • Accelerating Development Velocity of Production ML Systems with Docker by Kinnary Jangla (Pinterest)
  • Digital Transformation with Docker, Cloud and DevOps: How JCPenney Handles Black Friday and 100K Deployments Per Year by Sanjoy Mukherjee, (JCPenney)
  • Don’t have a Meltdown! Practical Steps for Defending your Apps by Liz Rice (Aqua) and Justin Cormack (Docker)
  • Creating Effective Docker Images by Abby Fuller (AWS)
  • App Transformation with Docker: 5 Patterns for Success by Elton Stoneman (Docker)

#3 – Break Out of Your Comfort Zone

A conference is the time to meet new people, but it’s also a time to build on the relationships you already have. You might be interacting with your counterparts from other parts of the world, hence this is the time when you can strengthen your bonding. If you know of people you want to reconnect with, you can reach out a few weeks before the conference to set up a time to meet for coffee or a meal while you’re at the event.

 

 

DockerCon is all about learning new things and connecting with the right people. One such innovative platform, introduced during Dockercon 2017, is called the Hallway Track. The Docker Hallway Track helps you find like-minded people to meet one-on-one and share knowledge in a structured way, so you get tangible results from networking. It consists of one-on-one or group conversations, based on topics of interest, that you schedule with other attendees during DockerCon. Hallway Track’s recommendation algorithm curates an individualized selection of Hallway Track topics for each participant, based on their behavior and interests. Don’t miss out on this chance to meet and share knowledge with community members and practitioners at the conference.

 

 

 

 

Dockercon is a great place to meet Docker Captains. Docker Captains are technology experts and leaders in their communities who are passionate about sharing their Docker knowledge with others. Don’t miss out on this chance to say “Hi” to them and learn about their contributions to the community.

#4 – Don’t Miss out the Evenings

Conferences don’t stop after the final session of the day. There will be many opportunities to network, including dinners and other events. Be sure to go to as many of these as possible and use them to meet new people. These are the perfect networking events and can help you relax after a long conference day.

#5 – Don’t Forget to Follow-Up

The insights you gained at the conference are likely to be useful for your team, so make sure to set aside time to pass on what you learned. Whether it’s leading an in-person session or writing an email or post to document the most valuable information, proactively sharing information will help your colleagues do better work while establishing you as a leader on your team. There’s no better place than a conference to take stock of the state of your industry and your profession. Make the most of your time, and have fun!


 

 

Under the Hood: Demystifying Docker For Mac CE Edition

Estimated Reading Time: 6 minutes

 

Docker is a full development platform for creating containerized apps, and Docker for Mac is the most efficient way to start and run Docker on your MacBook. It runs on a LinuxKit VM and NOT on VirtualBox or VMware Fusion. It embeds a hypervisor (based on xhyve), a Linux distribution which runs on LinuxKit and filesystem & network sharing that is much more Mac native. It is a Mac native application, that you install in /Applications. At installation time, it creates symlinks in /usr/local/bin for docker & docker-compose and others, to the commands in the application bundle, in /Applications/Docker.app/Contents/Resources/bin.

One of the most amazing features of Docker for Mac is that you can simply “drag & drop” the Mac application into /Applications, run the Docker CLI, and it just works flawlessly. The way its filesystem sharing maps macOS volumes seamlessly into Linux containers, remapping macOS UIDs into Linux ones, is one of the most anticipated features.

A Few Notable Features of Docker for Mac:

  • Docker for Mac runs in a LinuxKit VM.
  • Docker for Mac uses HyperKit instead of Virtual Box. Hyperkit is a lightweight macOS virtualization solution built on top of Hypervisor.framework in macOS 10.10 Yosemite and higher.
  • Docker for Mac does not use docker-machine to provision its VM. The Docker Engine API is exposed on a socket available to the Mac host at /var/run/docker.sock. This is the default location Docker and Docker Compose clients use to connect to the Docker daemon, so you can use docker and docker-compose CLI commands on your Mac (see the example just after this list).
  • When you install Docker for Mac, machines created with Docker Machine are not affected.
  • There is no docker0 bridge on macOS. Because of the way networking is implemented in Docker for Mac, you cannot see a docker0 interface on the host. This interface is actually within the virtual machine.
  • Docker for Mac now has multi-architecture support. It provides binfmt_misc multi-architecture support, so you can run containers for different Linux architectures, such as arm, mips, ppc64le, and even s390x.
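Since the Engine API is exposed on /var/run/docker.sock, you can even talk to it without the docker CLI. A quick sanity check from the Mac host, assuming a curl build with Unix socket support (7.40+):

$ curl --unix-socket /var/run/docker.sock http://localhost/version

This returns the same version information as docker version, straight from the daemon running inside the LinuxKit VM.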

Top 5 Exclusive Features of Docker For Mac That You Can’t Afford to Ignore

Under this blog, I will deep dive into the Docker for Mac architecture and show how to access the service containers running on top of the LinuxKit VM.

At the base of the architecture, we have a hypervisor called HyperKit, which is derived from xhyve. The xhyve hypervisor is a port of bhyve to OS X. It is built on top of Hypervisor.framework in OS X 10.10 Yosemite and higher, runs entirely in userspace, and has no other dependencies. HyperKit is basically a toolkit for embedding hypervisor capabilities in your application. It includes a complete hypervisor optimized for lightweight virtual machines and container deployment, and it is designed to be interfaced with higher-level components such as VPNKit and DataKit.

Sitting next to HyperKit is the filesystem sharing solution. osxfs is a new shared file system solution, exclusive to Docker for Mac, that provides a close-to-native user experience for bind mounting macOS file system trees into Docker containers. To this end, osxfs features a number of unique capabilities as well as differences from a classical Linux file system. On macOS Sierra and lower, the default file system is HFS+; on macOS High Sierra, it is APFS. With the recent release, NFS volume sharing has been enabled for both Swarm & Kubernetes.

There is one more important component sitting next to HyperKit, rightly called VPNKit. VPNKit, a part of HyperKit, attempts to work nicely with VPN software by intercepting the VM traffic at the Ethernet level, parsing and understanding protocols like NTP, DNS, UDP and TCP, and doing the “right thing” with respect to the host’s VPN configuration. VPNKit operates by reconstructing Ethernet traffic from the VM and translating it into the relevant socket API calls on macOS. This allows the host application to generate traffic without requiring low-level Ethernet bridging support.

On top of these open source components, we have the LinuxKit VM, which runs containerd and the service containers, including the Docker Engine that runs your own containers. The LinuxKit VM is built from a YAML file. The docker-for-mac.yml contains an example use of the open source components of Docker for Mac; the example has support for controlling dockerd from the host via vsudd and port forwarding with VPNKit, and it requires HyperKit, VPNKit and a Docker client on the host to run.

Sitting next to the Docker CE service container, we have the kubelet binaries running inside the LinuxKit VM. If you are new to K8s, the kubelet is an agent that runs on each node in the cluster and makes sure that containers are running in a pod. It basically takes a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes. On top of the kubelet, we have the Kubernetes services running. We can run either a Swarm cluster or a Kubernetes cluster, and we can use the same Compose YAML file to bring up both clusters side by side.

Peeping into LinuxKit VM

Curious about the VM and what Docker for Mac CE Edition actually looks like under the hood?

Below is the list of commands which you can leverage to get into the LinuxKit VM and see the Kubernetes services up and running. Here you go –

How to enter the LinuxKit VM?

Open the macOS terminal and run the below command to enter the LinuxKit VM:

$screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
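(To detach from this screen session later without shutting down the VM console, press Ctrl-A followed by D; screen will keep the session alive in the background.)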

Listing out the service containers:

Earlier, ctr tasks ls used to list the service containers running inside the LinuxKit VM, but in the recent release the namespace concept has been introduced, so you might need to run the below command to list out the service containers:

$ ctr -n services.linuxkit tasks ls
TASK                    PID     STATUS
acpid                   854     RUNNING
diagnose                898     RUNNING
docker-ce               936     RUNNING
host-timesync-daemon    984     RUNNING
ntpd                    1025    RUNNING
trim-after-delete       1106    RUNNING
vpnkit-forwarder        1157    RUNNING
vsudd                   1198    RUNNING

How to display the containerd version?

Under Docker for Mac 18.05 RC1, containerd version 1.0.1 is available as shown below:

linuxkit-025000000001:~# ctr version
Client:
  Version:  v1.0.1
  Revision: 9b55aab90508bd389d7654c4baf173a981477d55

Server:
  Version:  v1.0.1
  Revision: 9b55aab90508bd389d7654c4baf173a981477d55
linuxkit-025000000001:~#

How do I enter the docker-ce service container using containerd?

ctr -n services.linuxkit tasks exec -t --exec-id 936 docker-ce sh
/ # docker version
Client:
 Version:      18.05.0-ce-rc1
 API version:  1.37
 Go version:   go1.9.5
 Git commit:   33f00ce
 Built:        Thu Apr 26 00:58:14 2018
 OS/Arch:      linux/amd64
 Experimental: false
 Orchestrator: swarm

Server:
 Engine:
  Version:      18.05.0-ce-rc1
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.10.1
  Git commit:   33f00ce
  Built:        Thu Apr 26 01:06:49 2018
  OS/Arch:      linux/amd64
  Experimental: true
/ #

How to verify the Kubernetes single-node cluster?

/ # kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-23T09:38:59Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
/ # kubectl get nodes
NAME                 STATUS    ROLES     AGE       VERSION
docker-for-desktop   Ready     master    26d       v1.9.6
/ #

 

Interested in reading further? Check out my curated list of blog posts –

Docker for Mac is built with LinuxKit. How to access the LinuxKit VM
Top 5 Exclusive Features of Docker for Mac That you can’t afford to ignore
5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0
Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0
2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI
Docker For Mac 1.13.0 brings support for macOS Sierra, now runs ARM & AARCH64 based Docker containers
Docker for Mac 18.03.0 now comes with NFS Volume Sharing Support for Kubernetes

 

Did you find this blog helpful?  Feel free to share your experience. Get in touch with me at twitter @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.