Introducing OpenUSM – Simplifying Server Management & Insight Log Analytics using Docker containers

It's 2018! Let Containers Manage Your Datacenter.

Containers are changing the dynamics of the modern data center. They are a growing technology drawing widespread attention across enterprise IT. One of the primary reasons for the rise and adoption of containers is that they allow developers to move faster. Compared to VMs, which take minutes to stand up, containers take milliseconds or even microseconds. As organizations prioritize shipping new products and features faster to keep up in the software-eaten world, developers are favoring technology that allows them to scale applications and deploy resources much faster than traditional VMs on public and private clouds can support.

Docker containers bring a variety of cost and performance benefits. They make it possible to run multiple applications on the same server or OS without a hypervisor, which eliminates the drag of the hypervisor on system resources and gives your workloads a lighter footprint – the container footprint itself is practically zero, because a container is simply a boundary of permissions and resources within Linux. Containers fire up and decommission very rapidly compared to virtual machines – a perfect fit for the ephemeral nature of today's short-lived workloads, which are often tied to real-world events.

Docker is an open platform that makes applications and workloads more portable and distributed in an effective, standardized way. Combining Docker containers and microservices with software-defined infrastructure makes the datacenter more agile and allows quicker resource reallocation. Hence, this architecture works well to improve datacenter operations.

Most DellEMC server management software offerings, as well as the entire software-defined infrastructure, are built upon a standard RESTful architecture called Redfish. Redfish is a next-generation systems management standard that uses a data model representation inside a hypermedia RESTful interface. The data model is defined in terms of a standard, machine-readable schema, with the payload of the messages expressed in JSON and the protocol using OData v4. Since it is a hypermedia API, Redfish is capable of representing a variety of implementations through a consistent interface. It has mechanisms for discovering and managing data center resources, handling events, and managing long-lived tasks. It is easy to implement, easy to consume, and offers scalability advantages over older technologies. In short, Redfish is a RESTful interface over HTTPS, in JSON format, based on OData v4, usable by clients, scripts, and browser-based GUIs.
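To give a feel for what a Redfish call looks like, here is a minimal sketch using curl against an iDRAC. The IP, credentials and the Dell-specific system resource path below are placeholders/assumptions; the Redfish service root is always /redfish/v1:

# Query the Redfish service root (the root document needs no authentication)
curl -k https://<idrac-ip>/redfish/v1/
# Fetch the embedded system resource as JSON (replace credentials with your own)
curl -k -u root:<password> https://<idrac-ip>/redfish/v1/Systems/System.Embedded.1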

 

What is OpenUSM?

 

 

OpenUSM stands for Open Universal Systems Manager. It is a multi-tool product, like a Swiss army knife: a suite of open source tools and scripts that purely uses containers and related tooling to perform server management tasks, monitoring and insight log analytics. It is a 100% container-based solution that heavily uses Docker to build microservices for system management tasks such as BIOS token changes and firmware updates. It is a completely out-of-band system management solution based purely on the Redfish API. It is also platform agnostic (it can be run from a laptop, a server or the cloud) and works on any Linux or Windows platform with Docker Engine running on top of it.

OpenUSM is a project created three months ago by Ajeet Singh Raina and Avinash Bendigeri of DellEMC.

 

Value Proposition:

OpenUSM follows an easy deployment model. It uses developer tools like Docker Compose and the CLI to bring up microservices that carry out system management tasks flawlessly. It uses modern tools and technologies and integrates well with near real-time search analytics tools like the ELK stack. It enables sensor log analytics and visualization for operations teams using the popular Grafana tool. It can scale both vertically and horizontally. As it is completely open source, you are free to build and customize it based on your needs, and its components and functionality are plug-and-play.

Technology Overview of OpenUSM:

OpenUSM is an integrated container solution which brings three basic capabilities – system management, monitoring and insight log analytics. The system management stack sits in the middle of the architecture and uses Python, Redfish, Django, Docker & Docker Compose to enable system management tasks. On the left-hand side of the architecture, we have the monitoring stack, which uses Prometheus, Grafana, Alertmanager and Pushgateway to retrieve GPU and sensor metrics. On the right-hand side, it consists of the open source versions of Elasticsearch, Logstash & Kibana for sensor, Lifecycle Controller & SEL logs.

Understanding OpenUSM System Management WorkFlow

OpenUSM uses a "Container-Per-Server (CPS)" model. For each server management task, there is a script which, when executed, builds and runs a Docker container against each server platform. It purely uses the Redfish API to communicate directly with Dell iDRAC, collects iDRAC/LC logs and pushes them to a syslog server. Logstash collects the logs from the syslog server and pushes them to Elasticsearch, where they finally get visualized through Kibana. OpenUSM uses the Prometheus stack for monitoring system components, such as GPU/CPU monitoring using nvidia-docker & node exporter.
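To illustrate the Container-Per-Server idea, here is a rough, hypothetical shell sketch; the image name and flags are placeholders, not the actual OpenUSM implementation, which drives this from Python:

# Spin up one management container per iDRAC IP listed in ips.txt (illustrative only)
while read idrac_ip; do
  docker run -d --name "usm-${idrac_ip}" <your-usm-image> --idrac "${idrac_ip}"
done < ips.txt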

 

In this blog post, I am going to demonstrate how OpenUSM simplifies overall system management tasks with the help of Docker containers & Redfish. For this demonstration, I will leverage an Ubuntu 17.10 VM running on my ESXi 6.0 system. The code should work on any Linux or Windows platform too.

The source code for this project is open to the public and is available at https://github.com/openusm/openusm

 

Cloning the Repository

git clone https://github.com/openusm/openusm

Bootstrapping Docker

If you have Docker already installed on your system, you can skip to the next step. If not, run the below command to install Docker & Docker Compose on your system.

sh bootstrap.sh install_docker

Based on your network connectivity, this step will take 1-2 minutes to complete.

Bootstrapping ELK

OpenUSM is a 100% containerized solution, and hence we will be running ELK inside Docker containers. To keep it simple, we designed a docker-compose file which can get you started in a matter of seconds.

Execute the same bootstrap file to bring up ELK stack as shown below:

sh bootstrap.sh provision_elk

Just wait for 30-40 seconds to get the ELK stack up and running.

Verifying the ELK services

Run the below command to check if ELK services are up and running:

sh bootstrap.sh logs
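As an additional sanity check, you can query Elasticsearch directly, assuming it is published on its default port 9200 on the same host:

# Cluster health should report "green" or "yellow" once Elasticsearch is ready
curl -s http://localhost:9200/_cluster/health?pretty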

Pushing DellEMC iDRAC Logs to ELK Stack

In this section, I am going to demonstrate how OpenUSM makes it easy to push logs from DellEMC system management tasks to a centralized ELK stack and get them visualized via Kibana. Let us pick a simple "BIOS token change" operation and apply it across a multitude of DellEMC servers.

To keep it simple, we designed a script named "bios-token.py" which is placed under the root of the OpenUSM Git repository. Let us first look at the various parameters which can be supplied to the bios-token.py script –

python bios-token.py --help
usage: bios-token.py [-h] [--verbose] [-i IDRAC] [-n NFS] [-s SHARE]
                     [-c CONFIG] [-f IPS]

Welcome to Universal Systems Manager Bios Token Change

optional arguments:
  -h, --help            show this help message and exit
  --verbose             Turn on verbose logging
  -i IDRAC, --idrac IDRAC
                        iDRAC IP of the Host
  -n NFS, --nfs NFS     NFS server IP address
  -s SHARE, --share SHARE
                        NFS Share folder
  -c CONFIG, --config CONFIG
                        XML File to be imported
  -f IPS, --ips IPS     IP files to be updated

As shown above, the script can target both a single Dell server and a multitude of DellEMC servers supplied via a plain text file. (We are currently looking at an auto-discovery feature to automate this further.) One needs to provide the NFS server IP, the share name and the BIOS token configuration file as arguments to execute it successfully. Once this script is invoked, it creates one Docker container per DellEMC server, collects iDRAC logs from each of the servers and pushes them to a syslog server which runs inside a Docker container. Logstash collects the logs from syslog and dumps them into Elasticsearch, where they get visualized in the Kibana UI.

bios-token.py -f ips.txt -s /var/nfsshare -c biosconfig.xml -n <NFS server-IP>

where,

/var/nfsshare => NFS share

ips.txt => list of DellEMC iDRAC IPs

biosconfig.xml => XML defining the BIOS token entries

By now, you should be able to see iDRAC logs visualized in the Kibana UI. We can perform plenty of customization in the Kibana UI to display the logs on a per-server basis.

 

Insight Log Analytics (LC, SEL & Sensor Logs)

Unpacking OpenUSM further, this marks an interesting use case and a robust capability around insight log analytics. DellEMC servers generate a variety of logs, like system event logs (SEL), RAID controller logs & Lifecycle Controller (LC) logs. When a system event occurs on a managed system, it is recorded in the System Event Log (SEL). The SEL page displays a system health indicator, a time stamp, and a description for each event logged. The same SEL entry is also available in the Lifecycle Controller (LC) log.

Consider a use case where a datacenter administrator wants to collect LC logs for the last one year: it definitely requires a robust and modern tool to collect such a huge amount of data and perform analysis on top of it.

We recently designed a script which brings such log analytics capability using Docker, Redfish & ELK. You can find the "lclogexporter.py" script under the root of the GitHub repository:

python lclogexporter.py -f ips.txt -ei <ip-address of ELK> -eu elastic -ep changeme

 

This script requires the Elasticsearch IP address, credentials & a list of iDRAC IPs to get all iDRAC LC logs pushed to the ELK stack. Whenever you execute this script, it first builds a Docker image called "openusm-analytics" and then runs this container, which automatically pushes all LC, SEL and sensor logs to the ELK stack.
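Once the container has run, a quick way to confirm that the logs landed in Elasticsearch is to list the indices; this assumes the same credentials used above, and the exact index names depend on the Logstash configuration:

curl -s -u elastic:changeme "http://<ip-address of ELK>:9200/_cat/indices?v"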

Below is the Kibana UI visualizing a pie chart for Dell Lifecycle Controller logs collected over the last one-year timeframe.

 

Insight Log Metrics (Sensor Logs) using Grafana

 

Did you find OpenUSM interesting? OpenUSM is just a 3-month-old project and we are looking for contributors across the globe. If you think the project looks cool, come and join us to make it more robust. We welcome your participation.


Under the Hood: Demystifying Docker Enterprise Edition 2.0 Architecture

The Only Kubernetes Solution for Multi-Linux, Multi-OS and Multi-Cloud Deployments is right here…

Docker Enterprise Edition (EE) 2.0 Final GA release is available. One of the most promising features announced with this release is Kubernetes integration as an optional orchestration solution, running side-by-side with Docker Swarm. Not only this, the release also includes Swarm Layer 7 routing improvements, registry image mirroring, Kubernetes integration with Docker Trusted Registry & Kubernetes integration with Docker EE access controls. With this new release, organizations will be able to deploy applications with either Swarm or fully-conformant Kubernetes while maintaining a consistent developer-to-IT workflow.

Docker EE is more than just a container orchestration solution; it is a full lifecycle management solution for the modernization of traditional applications and microservices across a broad set of infrastructure platforms. It is a Containers-as-a-Service (CaaS) platform for IT that manages and secures diverse applications across disparate infrastructure, both on-premises and in the cloud. Docker EE provides an integrated, tested and certified platform for apps running on enterprise Linux or Windows operating systems and cloud providers. It is tightly integrated with the underlying infrastructure to provide a native, easy-to-install experience and an optimized Docker environment.

Docker EE 2.0 GA consists of 3 major components which together enable a full software supply chain, from image creation, to secure image storage, to secure image deployment.

  • Universal Control Plane 3.0.0 (application and cluster management) – Deploys applications from images by managing orchestrators like Kubernetes and Swarm. UCP is designed for high availability (HA): you can join multiple UCP manager nodes to the cluster, and if one manager node fails, another takes its place automatically without impact to the cluster.
  • Docker Trusted Registry 2.5.0 – The production-grade image storage solution from Docker.
  • EE Engine 17.06.2 – The commercially supported Docker engine for creating images and running them in Docker containers.

Tell me more about Kubernetes Support..

Kubernetes in Docker EE fully supports all Docker EE features, including role-based access control, LDAP/AD integration, scanning, signing enforcement, and security policies.

Kubernetes features on Docker EE include:

  • Kubernetes orchestration full feature set
  • CNCF Certified Kubernetes conformance
  • Kubernetes app deployment by using web UI or CLI
  • Compose stack deployment for Swarm and Kubernetes apps
  • Role-based access control for Kubernetes workloads
  • Pod-based autoscaling, to increase and decrease pod count based on CPU usage (see the example after this list)
  • Blue-Green deployments, for load balancing to different app versions
  • Ingress Controllers with Kubernetes L7 routing
  • Interoperability between Swarm and Kubernetes workloads for networking and storage.
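For example, pod-based autoscaling on the Kubernetes side can be driven with the standard kubectl autoscale command; the deployment name below is just an example:

# Scale the webapp deployment between 2 and 10 replicas, targeting 50% CPU usage
kubectl autoscale deployment webapp --cpu-percent=50 --min=2 --max=10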

But wait.. What type of workload shall I run on K8s and what on Swarm?

One of the questions raised multiple times in the last couple of Docker Meetups was – good that we now have K8s and Swarm as multiple orchestrators under a single Enterprise Engine, but what type of workload shall I run on Kubernetes and what on Docker Swarm?

 

As shown above, even though there is a big overlap in terms of microservices, Docker Inc. recommends specific types of workloads for both Swarm and Kubernetes.
Kubernetes works really well for large-scale workloads. It is designed to address some of the difficulties that are inherent in managing large-scale containerized environments: for example, blue-green deployments on a cloud platform, highly resilient infrastructure with zero-downtime deployment capabilities, automatic rollback, scaling, and self-healing of containers (which consists of auto-placement, auto-restart, auto-replication, and scaling of containers on the basis of CPU usage). Hence, it can be used in a variety of scenarios, from simple ones like running WordPress instances on Kubernetes, to scaling Jenkins machines, to secure deployments with thousands of nodes.

Docker Swarm, on the other hand, does provide most of the functionality which Kubernetes provides today, with notable features such as easy and fast installation and configuration, quick container deployment and scaling even in very large clusters, high availability through container replication and service redundancy, automated internal load balancing, process scheduling, automatically configured TLS authentication and container networking, service discovery, routing mesh, etc. Docker Swarm is best suited for the MTA program – modernizing your traditional legacy application (say, a .Net app running on Windows Server 2016) by containerizing it on Docker Enterprise Edition. By doing this, you reduce the total resource requirements to run your application, which increases operational efficiency and allows you to consolidate your infrastructure.

In this blog post, I will dive deep into the Docker EE architecture, covering its individual components, how they work together and how the platform manages containers.

Let us begin with its architectural diagram –

 

At the base of the architecture, we have the Docker Enterprise Engine. All the nodes in the cluster run Docker Enterprise Edition as the base engine. Currently the stable release is 17.06 EE. It is a lightweight and powerful containerization technology combined with a workflow for building and containerizing your applications.

Sitting on top of the Docker Engine is what we call the "Enterprise Edition Stack", where all of these components run as containers (shown below in the picture), except Swarm Mode. Swarm Mode is integrated into the Docker Engine and you can turn it on/off based on your choice. Swarm Mode is enabled when Universal Control Plane (UCP) is up & running.

 

UCP depicting ucp-etcd, ucp-hyperkube & ucp-agent containers

In case you're new to UCP, Docker Universal Control Plane is a solution designed to deploy, manage and scale Dockerized applications and infrastructure. As a Docker-native solution, full support for the Docker API ensures a seamless transition of your app from development to test to production – no code changes required – along with support for the Docker tool-set and compatibility with the Docker ecosystem.

 

 

From infrastructure clustering and container scheduling with the embedded Docker Swarm Mode and Kubernetes, to multi-container application definition with Docker Compose and image management with Trusted Registry, Universal Control Plane simplifies the process of managing infrastructure, deploying and managing applications, and monitoring their activity, with a mix of graphical dashboards and a Docker command-line-driven user interface.

There are three orchestrators sitting on top of the Docker Enterprise Engine – Docker Swarm Mode, Classic Swarm & Kubernetes. Both Classic Swarm and Kubernetes share the same underlying etcd instance, which is highly available depending on the setup you have in your cluster.

Docker EE provides access to the full API sets of three popular orchestrators:

  • Kubernetes: Full YAML object support
  • SwarmKit: Service-centric, Compose file version 3
  • “Classic” Swarm: Container-centric, Compose file version 2

Docker EE proxies the underlying API of each orchestrator, giving you access to all of the capabilities of each orchestrator, along with the benefits of Docker EE, like role-based access control and Docker Content Trust.

To manage the lifecycle of the orchestrators, we have components called the agent and the reconciler, which manage both the promotion and demotion flows as well as certificate renewal and rotation, and deal with patching and upgrading.

The agent is the core component of UCP and is a globally-scheduled service. When you install UCP on a node, or join a node to a swarm that's being managed by UCP, the agent service starts running on that node. Once this service is running, it deploys containers with other UCP components, and it ensures they keep running. The UCP components that are deployed on a node depend on whether the node is a manager or a worker. In a nutshell, it monitors the node and ensures the right UCP services are running.
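If you want to see this for yourself, a quick check from a UCP manager node is to inspect the globally-scheduled agent service (the service name here is taken from the ucp-agent containers shown in the picture above):

# List the ucp-agent service and see where its tasks are running
docker service ls --filter name=ucp-agent
docker service ps ucp-agent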

Another important component is the reconciler. When the UCP agent detects that the node is not running the right UCP components, it starts the reconciler container to converge the node to its desired state. It is expected for the UCP reconciler container to remain in an exited state when the node is healthy.

In order to integrate with K8s and support authentication, the Docker team built an OpenID Connect provider. In case you're completely new to OIDC, OpenID Connect is a simple identity layer on top of the OAuth 2.0 protocol. It allows clients to verify the identity of the end-user based on the authentication performed by an authorization server, as well as to obtain basic profile information about the end-user in an interoperable and REST-like manner.

As part of the Swarm Mode design, there is a central certificate authority that manages all the certificates and the cryptographic node identities of the cluster. All components communicate over mutual TLS, using certificates issued by that certificate authority. Docker EE uses an unmodified version of Kubernetes.

Sitting at the top of the stack, there is a GUI which uses the Swarm APIs and Kubernetes APIs. Right next to the GUI, we have Docker Trusted Registry, which is an enterprise-ready on-premises service for storing, distributing and securing images. It gives enterprises the ability to ensure secure collaboration between developers and sysadmins to build, ship and run applications. Finally, we have Docker & Kubernetes CLI capability integrated. Please note that this uses the unmodified Kubernetes API.

Try Free Hosted Trial

If you're interested in a fully conformant Kubernetes environment that is ready for the enterprise, sign up for a free hosted trial at https://trial.docker.com/

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter –  @ajeetsraina


Docker’s Birthday Celebration in Bangalore – The Fifth Kind

"As you grow up, make sure you have more networking opportunities than chances, and a more collaborative approach than just acquaintance." Reason ~ "Sharing is Caring".

Docker’s Birthday Celebration is not just about cakes, food and party. It’s actually a global tradition that is near and dear to our heart because it gives each one of us an opportunity to express our gratitude to our community of contributors, customers, partners and users. The goal of this global celebration is to welcome those new to Docker and bring them up to speed through hands-on labs with the help of more advanced users who act as mentors on-site.

 

This year, celebrations all over the world took place March 19-25, 2018. Yesterday we celebrated Docker's 5th birthday in Bangalore at the Altimetrik office, Electronic City. This time I was lucky enough to get a chance to lead this event.

I started planning for this event in the first week of March. I began coordinating with the Community Marketing Coordinator at Docker Inc. and ordered Docker mentor T-shirts, birthday stickers & Docker-crafted balloons. It took just one week to get them shipped to the Bangalore address. Quite quick, isn't it? A special thanks to Lisa McNikol for helping me with all of this. Next, Neependra Khare, founder of CloudYuga & Docker Bangalore Meetup organizer, initiated a thread looking for mentors for the event. Around 10+ community members turned up to mentor the participants for the lab sessions. I started coordinating with them on lab preparation. A BIG SHOUT-OUT to each and every mentor (names listed towards the end), as I found them quite interested and very quick in completing the testing of the lab sessions. A few of the issues reported were fixed and handled smoothly.

Next, it was time to work with the organizers at Altimetrik India. A special thanks to Sonali Baluni, Altimetrik employee, for the multiple interactions over phone and email while planning this event. Over 30+ balloons, a 5 kg Docker birthday cake, goodies for organizers and participants, and smooth WiFi connectivity & backups were planned for the day.

Over 200+ participants signed up, while 50+ turned up for this event. This was expected, as we had just completed a joint Meetup with 8 vibrant groups the previous weekend at the Walmart office. I reached the venue at around 8:30 AM and was stoked to find a wonderful ambience in place. I worked with the organizers to arrange Docker T-shirts and balloons for the participants, and distributed mentor T-shirts so that they could get ready for the celebration.

I started the Birthday celebration with an attention grabber –

 

I talked about the agenda for the day, which included more networking, mentorship & lab practice, as shown –

Next, I asked each of the Docker mentors to introduce themselves to the group. Mentors are a group of experts whose goal is to help the participants with the lab sessions.

 

Most of the participants followed the instructions on their own, and wherever needed, mentors helped the participants. There were around 5 lab sessions categorised under Docker 101, MTA with Java and .Net applications, Kubernetes, and hybrid-OS applications. You can find them at https://github.com/dockersamples/docker-fifth-birthday

 

At 11:30 sharp, we did the cake cutting. We gave the youngest member in the group the honour of doing it.

This time I focused on networking and spent a considerable amount of time interacting with the participants. I met new community members and tried to understand how they are using Docker. A few of the interesting questions from the community were –

  1. What is Serverless?
  2. How to get started with OpenFaas?
  3. How to become Docker Mentor?
  4. Why we switched to Kubernetes? Issues around Docker EE UCP.
  5. How to containerise a 20-year-old monolithic legacy application inside my organization?
  6. What is RexRay & its architecture?
  7. When can we expect K8s under Docker CE on Linux Platform?
  8. A Query around AWX – An Alternative to RedHat’s Ansible Tower?
  9. When shall I use Kubernetes & when to use Swarm?

I am planning to come up with blog posts around most of these queries in the near future.

Thank You –

  • Mentors (Suraj Narwade, Yogesh Kad, Mamta Jha, Shanthi Kumar, Jibin George, Shankar, Kumara K, Vishan Ghule, Y Ramesh Naik, Savitha Ashture, Saurav Kumar, Swapnasagar Pradhan) – For effective mentorship
  • Mano Marks, Elton & Marcos Nills – For the robust PWD Platform for Lab workshop
  • Sonali Baluni & Team, Organizer ( Altimetrik India)
  • Lisa McNikol – Community Marketing Co-ordinator, Docker Inc.
  • Jenny Burcio – Sr. Program Manager, Developer Relations and Advocacy, Docker Inc.
  • All Participants

What I liked most about this Meetup is the energy and curiosity that the participants have. It was a blast discussing with, answering and questioning all the keen minds that make up this community.

DockerCon 2018 is happening this June 12-15, and I plan to attend. I would be excited to meet more members of the Docker community at DockerCon in June, and I hope to bring home interesting knowledge and tips & tricks for the larger Docker Bangalore Meetup to gain from.

Thanks to all of you who participated and made this event a blast!


5 Minutes to Bootstrap Kubernetes Cluster on GKE using Docker for Mac 18.03.0

The Docker 18.03.0 CE release is now available on the Docker for Mac platform. Docker for Mac 18.03.0 CE ships with Docker Compose 1.20.1, Kubernetes v1.9.2, Docker Machine 0.14.0 & Notary 0.6.0. A few of the promising features included in this release are listed below –

  • Changing VM Swap size under settings
  • Linux Kernel 4.9.87
  • Support of NFS Volume sharing under Kubernetes.
  • Revert the default disk format to qcow2 for users running macOS 10.13 (High Sierra).
  • DNS name `host.docker.internal` used for host resolution from containers.
  • Improvement over Kubernetes Load balanced services (No longer marked as `Pending`)
  • Fixed hostPath mounts in Kubernetes.
  • Fix support for AUFS.
  • Fix synchronisation between CLI `docker login` and GUI login.
  • Updated Compose on Kubernetes to v0.3.0. Existing Kubernetes stacks will be removed during migration and need to be re-deployed on the cluster… and many more

In my last blog, I talked about context switching and showcased how one can switch the context from docker-for-desktop to Minikube under the Docker for Mac platform. A context element in a kubeconfig file is used to group access parameters under a convenient name. Each context has three parameters: cluster, namespace, and user. By default, the kubectl command-line tool uses parameters from the current context to communicate with the cluster. In the .kube/config file, you can see the list of contexts specified, as shown below –

- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURERENDQWZTZ0F3SUJBZ0lSQUpwcmVPY..V0gKZ0hVaVl6dGR…
    server: https://35.201.215.156
  name: gke_spheric-temple-187614_asia-east1-a_k8s-lab1
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOOd2..LQo=
    server: https://localhost:6443
  name: kubernetes
- cluster:
    certificate-authority: /Users/ajeetraina/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
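To switch between the contexts listed above from the command line, the usual kubectl commands apply (a quick sketch; the context names must match the ones in your own kubeconfig):

kubectl config get-contexts
kubectl config use-context docker-for-desktop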

In this blog, I will showcase how you can bootstrap a Kubernetes cluster on the GKE platform using the context-switching functionality under the Docker for Mac platform.

Pre-requisite:

  • Install/Upgrade Docker for Mac 18.03 CE Edition
  • Install google-cloud-sdk
  • Enable Google Cloud Engine API
  • Authenticate Your Google Cloud using gcloud auth

Installing Docker for Mac 18.03 CE Edition

Installing Google Cloud SDK on your macOS

  • Make sure that Python 2.7 is installed on your system:

Ajeets-MacBook-Air:~ ajeetraina$ python -V
Python 2.7.10
  • Download the below package based on your system.
Platform: macOS 64-bit (x86_64)
Package: google-cloud-sdk-195.0.0-darwin-x86_64.tar.gz (15.0 MB)
SHA256 Checksum: 56d72895dfc6c4208ca6599292aff629e357ad517e6979203a68a3a8ca5f6cc8

Platform: macOS 32-bit (x86)
Package: google-cloud-sdk-195.0.0-darwin-x86.tar.gz (15.0 MB)
SHA256 Checksum: e389ec98b65a0dbfc3f2c2637b9e3a375913b39d50e668fecb07cd04474fc080
  • Extract the archive to any location on your file system.
./google-cloud-sdk/install.sh
  • Restart your terminal for the changes to take effect.

Initializing the SDK

gcloud init

In your browser, log in to your Google user account when prompted and click Allow to grant permission to access Google Cloud Platform resources.

Enabling Kubernetes Engine API

You need to enable the Kubernetes Engine API to bootstrap a K8s cluster on Google Cloud Platform. To do so, open the Kubernetes Engine API page in the GCP Console and enable it for your project.
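If you prefer the CLI, the API can typically also be enabled with gcloud once the SDK is initialised (assuming your project is already set):

gcloud services enable container.googleapis.com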

Authenticate Your Google Cloud

Next, you need to authenticate your Google Cloud using gcloud auth:

gcloud auth login

Done. We are all set to bootstrap K8s cluster…

Creating GKE Cluster Node

Ajeets-MacBook-Air:~ ajeetraina$ gcloud container clusters create k8s-lab1 --disk-size 10 --zone asia-east1-a --machine-type n1-standard-2 --num-nodes 3 --scopes compute-rw
WARNING: The behavior of --scopes will change in a future gcloud release: service-control and service-management scopes will no longer be added to what is specified in --scopes. To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
Creating cluster k8s-lab1...done.
Created [https://container.googleapis.com/v1/projects/spheric-temple-187614/zones/asia-east1-a/clusters/k8s-lab1].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/asia-east1-a/k8s-lab1?project=spheric-temple-187614
kubeconfig entry generated for k8s-lab1.
NAME      LOCATION      MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
k8s-lab1  asia-east1-a  1.7.11-gke.1    35.201.215.156  n1-standard-2  1.7.11-gke.1  3          RUNNING

Viewing it on Docker for Mac UI

Click on the whale icon at the top right of Docker for Mac, and by now you should be able to see the new context appear.

 

Listing the Nodes

 

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get nodes
NAME                                      STATUS    ROLES     AGE       VERSION
gke-k8s-lab1-default-pool-042d2598-591g   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-c633   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-q603   Ready     <none>    7m        v1.7.11-gke.1

Viewing it directly under GCP Platform

 

 

Connecting to Your GKE Cluster

There are 2 ways to do this:

Method-1: Click on “Connection” button to see how to connect to K8s-lab1.

 

Method-2:

You can connect to your cluster via command-line or using a dashboard.

Ajeets-MacBook-Air:~ ajeetraina$gcloud container clusters get-credentials k8s-lab1 --zone asia-east1-a --project captain-199803
Fetching cluster endpoint and auth data.
kubeconfig entry generated for k8s-lab1. 


Listing the Nodes under Google Cloud Platform

 

Deploy Nginx on GKE Cluster

Let us see how to deploy Nginx on a remote GKE cluster using Docker for Mac. This requires two commands: deploy and expose.

Step 1: Deploy nginx

$ kubectl run nginx --image=nginx --replicas=3

deployment "nginx" created

This will create a deployment that spins up 3 pods; each pod runs the nginx container.

Step 2: Verify that the pods are running.

You can see the status of deployment by running:

kubectl get pods -owide
NAME                    READY     STATUS    RESTARTS   AGE       IP          NODE
nginx-7c87f569d-glczj   1/1       Running   0          8s        10.12.2.6   gke-k8s-lab1-default-pool-b2aaa29b-w904
nginx-7c87f569d-pll76   1/1       Running   0          8s        10.12.0.8   gke-k8s-lab1-default-pool-b2aaa29b-2gzh
nginx-7c87f569d-sf8z9   1/1       Running   0          8s        10.12.1.8   gke-k8s-lab1-default-pool-b2aaa29b-qpc7

You can see that each nginx pod is now running on a different node (virtual machine).

Once all pods have the Running status, you can then expose the nginx cluster as an external service.

Step 3: Expose the nginx cluster as an external service.

$ kubectl expose deployment nginx --port=80 --target-port=80 \
--type=LoadBalancer

service "nginx" exposed

This command will create a network load balancer to load balance traffic to the three nginx instances.

Step 4: Find the network load balancer address:

kubectl get service nginx
NAME      TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx     LoadBalancer   10.15.247.8   <pending>     80:30253/TCP   12s

It may take several minutes to see the value of EXTERNAL_IP. If you don’t see it the first time with the above command, retry every minute or so until the value of EXTERNAL_IP is displayed.
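Alternatively, you can let kubectl keep watching the service until the load balancer IP is assigned (press Ctrl+C to stop):

kubectl get service nginx --watch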

You can then visit http://EXTERNAL_IP/ to see the server being served through network load balancing.

GKE provides an amazing platform to view workloads & load balancers, as shown below:

GKE also provides a UI for displaying the load balancer:

In my upcoming blog post, I will showcase how context switching can help you in switching your project between Dev, QA & Production environment flawlessly.

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter –  @ajeetsraina

If you are looking out for contribution, join me at Docker Community Slack Channel.


Test-Drive Continuous Integration Pipeline using Docker, Jenkins & GitHub under $0

Are you new to CI/CD?

Continuous Integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. CI doesn’t get rid of bugs, but it does make them dramatically easier to find and remove.

CI/CD merges development with testing, allowing developers to build code collaboratively, submit it to the master branch, and have it checked for issues. This allows developers to not only build their code, but also test their code in any environment type and as often as possible to catch bugs early in the application development lifecycle. Since Docker can integrate with tools like Jenkins and GitHub, developers can submit code to GitHub, test the code and automatically trigger a build using Jenkins; once the image is built, it can be added to Docker registries. This streamlines the process and saves time on build and setup, all while allowing developers to run tests in parallel and automate them so that they can continue to work on other projects while tests are being run.

A typical CI workflow looks like this:

1. Developer pushes a commit to GitHub
2. GitHub uses a webhook to notify Jenkins of the update
3. Jenkins pulls the GitHub repository, including the Dockerfile describing the image, as well as the application and test code.
4. Jenkins builds a Docker image on the Jenkins slave node
5. Jenkins instantiates the Docker container on the slave node, and executes the appropriate tests
6. If the tests are successful the image is then pushed up to Dockerhub or Docker Trusted Registry.

In this series of blog posts, I will help you get started with Continuous Integration with Jenkins, Docker & GitHub under $0. I will be using the Play with Docker platform, which is free for the general public.

Let’s get started..

Go to your browser and type –  https://labs.play-with-docker.com/

 

Open up a “New Instance” on the left hand side of the screen.

 

 

Cloning the Repository

Let us start with a simple HTML webpage application. I have a sample GitHub repository already created for you, which contains a Dockerfile and Jenkinsfile under the root of the repository, as shown below:

 

 

$ git clone https://github.com/ajeetraina/webpage

 

What is Jenkinsfile & Jenkins Pipeline?

A Jenkinsfile is a text file that contains the definition of a Jenkins Pipeline and is checked into source control.

Jenkins Pipeline (or simply "Pipeline") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins. It provides an extensible set of tools for modeling simple-to-complex delivery pipelines "as code". The definition of a Jenkins Pipeline is typically written into a text file (called a Jenkinsfile) which in turn is checked into a project's source control repository.
I have created a sample Jenkinsfile which clones the repository, builds the Docker image, tests it and then pushes it to Dockerhub automatically using Jenkins:
node {
    def app

    stage('Clone repository') {
        /* Let's make sure we have the repository cloned to our workspace */

        checkout scm
    }

    stage('Build image') {
        /* This builds the actual image; synonymous to
         * docker build on the command line */

        app = docker.build("ajeetraina/webpage")
    }

    stage('Test image') {
        /* Ideally, we would run a test framework against our image.
         * Just an example */

        app.inside {
            sh 'echo "Tests passed"'
        }
    }

    stage('Push image') {
        /* Finally, we'll push the image with two tags:
         * First, the incremental build number from Jenkins
         * Second, the 'latest' tag.
         * Pushing multiple tags is cheap, as all the layers are reused. */
        docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
            app.push("${env.BUILD_NUMBER}")
            app.push("latest")
        }
    }
}

In the above Jenkinsfile, there are various stages, like cloning the SCM, building, testing and pushing the Docker image. You will need to change it to your respective GitHub repo if you want to build your own Docker image.

Let us quickly set up Jenkins on the PWD platform – believe me, it's just a matter of a single docker-compose command. Before we proceed, ensure that the below permission is granted so as to get the Docker plugin to work flawlessly –

$chmod 666 /var/run/docker.sock

It’s time to clone the repository:

$ git clone https://github.com/ajeetraina/jenkins
Cloning into ‘jenkins’…
remote: Counting objects: 18, done.
remote: Compressing objects: 100% (16/16), done.
remote: Total 18 (delta 2), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (18/18), done.
[node1] (local) root@192.168.0.53 ~/webpage
$ cd jenkins
[node1] (local) root@192.168.0.53 ~/webpage/jenkins
$docker-compose up

Copy the Jenkins administrator password starting with "5ee571.." as shown in the above screenshot.

Click on the 8080 port which appears at the top of the screen; this will redirect you to the Jenkins page.

Once you add the administrator password which we copied earlier, it will ask you to install plugins. Choose "Install Suggested Plugins" on the left-hand side as shown below:

Go back to the Jenkins dashboard and click "New Item". Supply any name (i.e. demo-webpage) and select "Pipeline" as the type of the project.

This will show a page with Build Triggers as one of the options. Select "Pipeline script from SCM" and choose Git under SCM. This will allow you to enter your GitHub URL, which in my case is https://github.com/ajeetraina/webpage. This will automatically add Jenkinsfile under Script Path. Click on "Apply" and then Save.

Click on the "Build Now" option on the left side of the Jenkins screen. This traverses through the build pipeline, i.e. cloning the repository, building the Docker image, testing it and pushing it to your Dockerhub account as shown below:

 

 

 

 

Once you click on “Build Now” you will see that it initiates the build pipeline.

 

Jenkins Pipeline Stages View:

 

Click on “Console Output” to show detailed build process as shown below:

 

A Quick View of Stage View:

 

 

By now, we have a new Docker image pushed to Dockerhub automatically. Let's go ahead and verify it by bringing up the webpage container in a separate instance window as shown below:

 

 

Here you see… a static HTML page running on port 80. This was built as part of the Jenkins pipeline.

In order to trigger this pipeline every 15 minutes, you can go back to the Jenkins page for the specific job and then choose "Build Periodically" under the Configure page as shown below:
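For reference, the schedule string you would typically enter in the "Build Periodically" field to run the job every 15 minutes uses Jenkins cron syntax ("H" spreads the load across the period):

H/15 * * * *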

 

In a future blog post, I will show you how CI-CD works with Docker Swarm Mode and Kubernetes.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 


Context Switching Made Easy under Kubernetes powered Docker for Mac 18.02.0

Say Bye to Kubectx !

I have been a great fan of kubectx and kubectl, which have been a fast way to switch between clusters and namespaces, until I came across Docker for Mac 18.02. With the newer Docker for Mac 18.02 RC build, it is just a matter of a "toggle". Life has become too easy to switch between dev, QA & production environments.

[Updated (2-Feb-2018): The Docker for Mac 18.02.0 CE RC2 build now comes with Kubernetes v1.9.2 for the first time. I upgraded my macOS High Sierra machine to the RC2 build today and it just works flawlessly. Check it out]

 

New to Kubernetes Namespace Vs Context ?

Generally, software development teams partition their development pipelines into discrete units. These units take various forms in a discrete layout –

          Dev  >> Testing|QA >> Staging >> Production

The resulting layouts are ideally suited to Kubernetes Namespaces. Each environment or stage in the pipeline becomes a unique namespace.

In Kubernetes terminology, Namespaces are the way to partition a single Kubernetes cluster into multiple virtual clusters. Namespaces are a logical partitioning capability that enable one Kubernetes cluster to be used by multiple users, teams of users, or a single user with multiple applications without concern for undesired interaction. Each user, team of users, or application may exist within its Namespace, isolated from every other user of the cluster and operating as if it were the sole user of the cluster.

A major benefit of applying namespaces to the development cycle is that the naming of software components (e.g. micro-services/endpoints) can be maintained without collision across the different environments. This is due to the isolation of the Kubernetes namespaces. The fact that each namespace is logically discrete allows the development teams to work within an isolated “development” namespace.

Say, you have two clusters, one for development work and one for scratch work. In the development cluster, your frontend developers work in a namespace called frontend, and your storage developers work in a namespace called storage. In your scratch cluster, developers work in the default namespace, or they create auxiliary namespaces as they see fit. Access to the development cluster requires authentication by certificate. Access to the scratch cluster requires authentication by username and password.

Shown below is an example file, config-demo, with this content:

apiVersion: v1
kind: Config
preferences: {}

clusters:
- cluster:
  name: development
- cluster:
  name: scratch

users:
- name: developer
- name: experimenter

contexts:
- context:
  name: dev-frontend
- context:
  name: dev-storage
- context:
  name: exp-scratch

As shown above, a configuration file describes clusters, users, and contexts. Your config-demo file has the framework to describe two clusters, two users, and three contexts.
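A quick sketch of how the three contexts above could be wired up from the command line, following the pattern in the upstream Kubernetes documentation (cluster and user details are assumed to be filled in separately):

kubectl config --kubeconfig=config-demo set-context dev-frontend --cluster=development --namespace=frontend --user=developer
kubectl config --kubeconfig=config-demo set-context dev-storage --cluster=development --namespace=storage --user=developer
kubectl config --kubeconfig=config-demo set-context exp-scratch --cluster=scratch --namespace=default --user=experimenter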

In this blog post, I will showcase how to create 3 different contexts – Google Cloud, Docker for Desktop & Minikube – and then show how easy it is to toggle between them under the Docker for Mac platform. Let's get started –

Pre-requisite:

  • Docker For Mac 18.02 RC2 build

 

  • Enable Kubernetes under Preference Pane

Installing Minikube

Minikube requires that VT-x/AMD-v virtualization is enabled in BIOS. To check that this is enabled on OSX / macOS run:
sysctl -a | grep machdep.cpu.features | grep VMX

Installing Minikube via brew

Ajeets-MacBook-Air:~ ajeetraina$ brew update && brew install kubectl && brew cask install minikube

Starting Minikube

Ajeets-MacBook-Air:~ ajeetraina$ minikube start
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
Downloading Minikube ISO
 142.22 MB / 142.22 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 162.41 MB / 162.41 MB [============================================] 100.00% 0s
 0 B / 65 B [----------------------------------------------------------]   0.00%
 65 B / 65 B [======================================================] 100.00% 0sSetting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.

Viewing Kubernetes context selection pane

By now, you should see the Minikube context appear.

Verifying it using CLI

Ajeets-MacBook-Air:~ ajeetraina$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER                      AUTHINFO             NAMESPACE
         docker-for-desktop            docker-for-desktop-cluster   docker-for-desktop
         gce                                                        cluster-admin
         kubernetes-admin@kubernetes   kubernetes                   kubernetes-admin
*         minikube                      minikube                     minikube

Viewing Minikube Dashboard

You can just type the below command to bring up the Minikube dashboard in a second.

minikube dashboard

Initializing Docker Swarm

Ajeets-MacBook-Air:testenviron ajeetraina$ docker swarm init
Swarm initialized: current node (zfxiqqjpjmwbvhm1ahjwio3s7) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-4vnxn6cbq4gtsjjvaluucncc8m71aexe11dhbm40aoxfqnr7s3-bevjmv2qpklluuhm6ufrfoas2 192.168.65.3:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Ajeets-MacBook-Air:testenviron ajeetraina$ docker node ls
ID                            HOSTNAME                STATUS              AVAILABILITY        MANAGER STATUS
zfxiqqjpjmwbvhm1ahjwio3s7 *   linuxkit-025000000001   Ready               Active              Leader
Ajeets-MacBook-Air:testenviron ajeetraina$ docker stack deploy -c docker-compose.yml myapp3
Creating network myapp3_default
Creating service myapp3_db1
Creating service myapp3_web1
Ajeets-MacBook-Air:testenviron ajeetraina$ docker stack ls
NAME                SERVICES
myapp3              2

Switching the context from Minikube to docker-for-desktop

Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl config get-contexts
CURRENT   NAME                          CLUSTER                      AUTHINFO             NAMESPACE
          docker-for-desktop            docker-for-desktop-cluster   docker-for-desktop
          gce                                                        cluster-admin
          kubernetes-admin@kubernetes   kubernetes                   kubernetes-admin
*         minikube                      minikube                     minikube

Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl config use-context docker-for-desktop
Switched to context "docker-for-desktop".

Verifying through Pane UI

Open up the whale icon under D4M and see if the context switched successfully.

Enabling Kubernetes

Go to whale icon > Click on Preference > Click on Kubernetes > Enable Kubernetes > Show Systems Containers

It will take a few minutes to get Kubernetes up and running. Expect it to take longer if you are enabling Kubernetes for the first time, depending on your internet speed.
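Once it is up, a quick sanity check from the terminal (assuming the docker-for-desktop context is selected):

kubectl config use-context docker-for-desktop
kubectl get nodes
kubectl get pods -n kube-system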

Clone the Repository

$git clone https://github.com/ajeetraina/docker101

Change to the right location

$cd docker101/play-with-kubernetes/examples/stack-deploy-on-mac/

Example-1 : Demonstrating a Simple Web Application

Building the Web Application Stack

$docker stack deploy -c docker-stack1.yml myapp1

Verifying the Stack

$docker stack ls

Verifying using Kubectl

$kubectl get pods

Verifying if the web application is accessible

$curl localhost:8083

Cleaning up the Stack

$docker stack rm myapp1

Example:2 – Demonstrating ReplicaSet

Building the Web Application Stack

$docker stack deploy -c docker-stack2.yml myapp2

Verifying the Stack

$docker stack ls

Verifying using Kubectl

$kubectl get pods
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get stacks
NAME      AGE
myapp2    22m
Ajeets-MacBook-Air:testenviron ajeetraina$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
db1-d977d5f48-l6v9d     1/1       Running   0          22m
db1-d977d5f48-mpd25     1/1       Running   0          22m
web1-6886bb478f-s7mvz   1/1       Running   0          22m
web1-6886bb478f-wh824   1/1       Running   0          22m

Adding Context for Google Cloud

Pre-requisites:

  • Install google-cloud-sdk on macOS
  • Enable Google Cloud Engine API
  • Authenticate Your Google Cloud using gcloud auth

Creating GKE Cluster Node

Ajeets-MacBook-Air:~ ajeetraina$ gcloud container clusters create k8s-lab1 --disk-size 10 --zone asia-east1-a --machine-type n1-standard-2 --num-nodes 3 --scopes compute-rw
WARNING: The behavior of --scopes will change in a future gcloud release: service-control and service-management scopes will no longer be added to what is specified in --scopes. To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
WARNING: Starting in Kubernetes v1.10, new clusters will no longer get compute-rw and storage-ro scopes added to what is specified in --scopes (though the latter will remain included in the default --scopes). To use these scopes, add them explicitly to --scopes. To use the new behavior, set container/new_scopes_behavior property (gcloud config set container/new_scopes_behavior true).
Creating cluster k8s-lab1...done.
Created [https://container.googleapis.com/v1/projects/spheric-temple-187614/zones/asia-east1-a/clusters/k8s-lab1].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/asia-east1-a/k8s-lab1?project=spheric-temple-187614
kubeconfig entry generated for k8s-lab1.
NAME      LOCATION      MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
k8s-lab1  asia-east1-a  1.7.11-gke.1    35.201.215.156  n1-standard-2  1.7.11-gke.1  3          RUNNING

Verify it on Google Cloud

Cluster

Master version: 1.7.11-gke.1
Endpoint: 35.201.215.156
Client certificate: Enabled
Kubernetes alpha features: Disabled
Total size: 3
Master zone: ...

Connecting to Your GKE Cluster

You can connect to your cluster via command-line or using a dashboard too.

Ajeets-MacBook-Air:~ ajeetraina$ gcloud container clusters get-credentials k8s-lab1 --zone asia-east1-a --project spheric-temple-187614

Fetching cluster endpoint and auth data. kubeconfig entry generated for k8s-lab1.

Listing the Nodes

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get nodes
NAME                                      STATUS    ROLES     AGE       VERSION
gke-k8s-lab1-default-pool-042d2598-591g   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-c633   Ready     <none>    7m        v1.7.11-gke.1
gke-k8s-lab1-default-pool-042d2598-q603   Ready     <none>    7m        v1.7.11-gke.1

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.


2-minutes to Kubernetes Cluster on Docker for Mac 18.01 using Swarm CLI

Docker for Mac 18.01.0 CE is available for the general public. It holds the experimental Kubernetes release running on Linux Kernel 4.9.75, Docker Compose 1.18.0 and Docker Machine 0.13.0. It is available only under the Edge release. Please note that this feature is still NOT available under the Stable release branch. This release brought major fixes around insecure registries, the VPNKit port, DNS timeout issues and many more, which you can refer to under the Release Notes section.

[Updated – 29/01/2018 – Docker Inc. introduced Kubernetes context selector UI in the recent Docker for Mac 18.02 RC1 release. If you have Minikube already running on the same system, you can switch the context in between Minikube & docker for Mac flawlessly. Refer this for more information]

 

 

In my previous blog, I talked about how to build a Kubernetes cluster in 3 minutes using the kubectl tool, which comes by default with this release. But what if you are a die-hard fan of the Docker Swarm CLI like me? Here is the good news – you can now use the Swarm CLI to bring up a Kubernetes cluster. In this post, I will show you how the Swarm CLI can be used to bring up a Kubernetes cluster in just 2 minutes.

 

Pre-requisite:

  • Docker for Mac 18.01.0 CE Edge Release
  • Enable Kubernetes under Preference > Kubernetes Tab
  • Select Checkbox under Show System Container

A Quick 2-minutes ASCIINEMA video:

Here is a 2-minute video which shows how to get started, from zero to an NGINX web server setup. It starts with 0 pods, 0 external services and 0 deployments in Kubernetes terminology. In this video, we use the familiar docker stack CLI to bring up the K8s cluster and then clean it up in no time.
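If you would rather follow along without the video, the flow boils down to the familiar stack commands (a sketch; the compose file and stack name here are placeholders for the ones used in the recording):

docker stack deploy -c docker-compose.yml webapp
kubectl get pods
docker stack rm webapp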

Liked the video? You can refer this link for detailed instructions and further examples.

As I dig deeper into the Kubernetes architecture, the links below might be useful for anyone who wants to learn Kubernetes concepts in detail.

Getting Started with Kubernetes Concepts & Architecture

Building Kubernetes Dashboard on Docker for Mac in 1 min

Demystifying Kubernetes Namespace

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 

 

 


When Kubernetes Meet Docker Swarm for the First time under Docker for Mac 17.12 Release

Docker for Mac 17.12 GA is the first release which includes both orchestrators – Docker Swarm & Kubernetes – under the same Docker platform. As of 1/7/2018, experimental Kubernetes has been released under the Edge release (still not available under the D4M Stable release). Experimental Kubernetes is still not available for the Docker for Windows & Linux platforms. It is slated to be available for Docker for Windows next month (mid-February) and then for Linux by March or April.

Now you might ask why Docker Inc. is making this announcement. What is the fun of having 2 orchestrators under the same roof? To answer this, let me go a little back in time and see how the Docker platform looked:

                                                                                                                             ~ Source – Docker Inc.

 

The Docker platform is like a stack with various layers. The base layer is called containerd. Containerd is an industry-standard core container runtime with an emphasis on simplicity, robustness and portability. Based on the Docker Engine's core container runtime, it is available as a daemon for Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments, etc. Containerd is designed to be embedded into a larger system, rather than being used directly by developers or end-users. It basically includes a daemon exposing a gRPC API over a local UNIX socket. The API is a low-level one, designed for higher layers to wrap and extend. It also includes a barebone CLI (ctr) designed specifically for development and debugging purposes. It uses runC to run containers according to the OCI specification. The code can be found on GitHub, and here are the contribution guidelines. Let us accept the fact that over the last few years there has been a lot of iteration around this layer, but now Docker Inc. has finalised it into a robust, popular and widely accepted container runtime.

On top of containerd, there is an orchestration layer rightly called Docker Swarm. Docker Swarm ties together all of your individual machines which run the container runtime. It allows you to deploy an application not onto a single machine at a time but into a whole system, thereby making your application distributed.

To take advantage of these layers, as a developer you need tools & an environment which can build & package your application so it takes advantage of your environment; hence Docker Inc. provides Community Edition products like Docker for Mac, Docker for Windows etc. If you are considering moving your application to production, Docker Enterprise Edition is the right choice.

If the stack looks really impressive, why again the change in architecture?

The reason is – Not everybody uses Swarm.

~ Source – Docker Inc.

Before Swarm & Kubernetes integration – if you are a developer and you are using Docker, the workflow looks something like what is shown below. A developer typically uses Docker for Mac or Docker for Windows. Using the familiar docker build and docker-compose build tools, you build your environment and ensure that it gets deployed across a single-node cluster, OR use docker stack deploy to deploy it across multiple cluster nodes.

~ Source – Docker Inc.

 

If your production runs on Swarm, then you can test it locally on Swarm as it is already built into the Docker platform. But if your production environment runs on Kubernetes, then surely there is a lot of work to be done, like translating Compose files etc. using 3rd-party open source tools and negotiating with their offerings. Though it is possible today, it is still not as smooth as the Swarm Mode CLI.

With the newer Docker platform, you can use both Swarm and Kubernetes seamlessly. Interestingly, you use the same familiar tools like docker stack ls, docker stack deploy, docker ps and docker stack ps to display Swarm and Kubernetes containers. Isn't it cool? You don't need to learn new tools to play around with a Kubernetes cluster.

~ Source – Docker Inc.

 

The new Docker platform includes both Kubernetes and Docker Swarm side by side and at the same level, as shown below. Please note that it is a real Kubernetes sitting next to Docker Swarm and NOT A FORK OR WRAPPER.

~ Source – Docker Inc.

Still not convinced why this announcement?

 

~ Source – Docker Inc.

How does the Swarm CLI build a Kubernetes cluster side by side?

Docker analyses the Compose file format and converts it into Pods, along with creating ReplicaSets as per the instruction set. With the newer Docker for Mac 17.12 release, a new stack command has been added as a first-class citizen to the Kubernetes CLI.

 

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get stacks -o wide
NAME      AGE
webapp    1h

 

 

Important Points –

 

  • Future releases of the Docker platform will include both orchestration options – Kubernetes and Swarm
  • The Swarm CLI will be used for cluster management, while for orchestration you have a choice of Kubernetes & Swarm
  • The full Kubernetes API is exposed in the stack, hence support for the overall Kubernetes ecosystem is possible
  • docker stack deploy will be able to target either Swarm or Kubernetes
  • Kubernetes is recommended for the production environment
  • Running both Swarm & Kubernetes together is not recommended for the production environment
  • AND by now, you must be convinced – “SWARM MODE CLI is NOT GOING ANYWHERE”

Let us test drive the latest Docker for Mac 17.12 beta release and see how the Swarm CLI can be used to bring up both Swarm and Kubernetes clusters seamlessly.

  • Ensure that you have the Docker for Mac 17.12 Edge Release running on your Mac system. If you still don’t see the 17.12-kube_beta client version, I suggest you go through my last blog post.

A First Look at Kubernetes Integrated Docker For Mac Platform

 

 

Please note that Kubernetes/kubectl comes by default with the Docker for Mac 17.12 Beta release. YOU DON'T NEED TO INSTALL KUBERNETES. A single-node cluster is already set up for you by default.

As we have both Kubernetes & Swarm orchestration already present, let us head over and build NGINX services as a demonstration on this single-node cluster.

Writing a Docker Compose File for NGINX containers

Let us write a Docker Compose file for the nginx image and deploy 3 containers of that image. This is what my docker-compose.yml looks like:
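A minimal sketch of such a Compose file, assuming the three replicas and the 82→80 / 444→443 port mappings that show up in the stack output further below:

version: "3.3"
services:
  nginx:
    image: nginx
    deploy:
      replicas: 3
    ports:
      - "82:80"
      - "444:443"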

 

Deploying Application Stack using docker stack deploy 

Ajeets-Air:mynginx ajeetraina$ DOCKER_ORCHESTRATOR=kubernetes docker stack deploy --compose-file docker-compose.yml webapp
Stack webapp was created
Waiting for the stack to be stable and running…
-- Service nginx has one container running
Stack webapp is stable and running

 

Verifying the NGINX replica sets through the below command:
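Since the stack lands on Kubernetes, the standard kubectl CLI can list the ReplicaSet and its Pods:

kubectl get replicasets
kubectl get pods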

 

As shown above, there are 3 replicas of the same NGINX image running as containers.

Verify the cluster using Kubectl CLI displaying the stack information:

Ajeets-MacBook-Air:mynginx ajeetraina$ kubectl get stack -o wide
NAME      AGE
webapp    8h

As you can see, kubectl and stack deploy display the same cluster information.

Verifying the cluster using the kubectl CLI, displaying the YAML output:

You can verify that Docker analyses the docker-compose.yml input file format and converts it into Pods, along with creating ReplicaSets as per the instruction set, using the below YAML output format.
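Assuming the stacks resource installed by the Docker for Mac Kubernetes integration (the same one queried with kubectl get stacks earlier), the raw YAML can be dumped with:

kubectl get stack webapp -o yaml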

 

 

We can use the same old Docker stack CLI to verify the cluster information:
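For example, the familiar stack commands work as-is:

docker stack ls
docker stack ps webapp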

 

Managing Docker Stack

Ajeets-MacBook-Air:mynginx ajeetraina$ docker stack services webapp
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
20e31598-e4c        nginx               replicated          3/3                 nginx               *:82->80/tcp,*:444->443/tcp

 

It’s time to verify if the NGINX webpage comes up well:
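Given the 82→80 mapping shown above, a quick check from the terminal (or a browser pointed at http://localhost:82) would be:

curl -I http://localhost:82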

 

Hence, we saw the NGINX service running on Kubernetes & Swarm side by side.

Cleaning up

Run the below Swarm CLI commands to clean up the NGINX service, then verify with kubectl:

docker stack ls
docker stack rm webapp
kubectl get pods

Output:

Want to see this in action?

https://asciinema.org/a/8lBZqBI3PWenBj6mSPzUd6i9Y

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina.

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 


Top 10 Reasons why LinuxKit is better than the traditional OS distribution

 

“LinuxKit is NOT designed with an intention to replace any traditional OS like Alpine, Ubuntu, Red Hat etc. It is an open-source toolbox for building fine-tuned Linux-based operating systems that run in containers and are hence lean, portable & secure out of the box.”

~ Collabnix

 

How is LinuxKit different from other available GNU/Linux operating systems? How is it different from HashiCorp Packer & cloud instances? Why should I consider LinuxKit? I came across these queries in various community forums and discussions, and thought of coming up with a write-up which talks about the factors that differentiate LinuxKit from other available operating systems, tools & utilities.

In case you’re new to it, LinuxKit is a toolkit for building minimal Linux distributions that are lean, secure and portable. LinuxKit is built with the intention of building a distribution that has just enough Linux to accomplish a task, and doing it in a way that is more secure. It uses Moby to build the distribution images and the LinuxKit tool to run them on a local hypervisor, cloud or bare metal environment.

In this blog post, I will go through 10 good reasons why LinuxKit is better than other available GNU/open source distributions:

1. Smaller in size (< 200MB) compared to the GB-sized traditional OS

The idea behind LinuxKit is that you start with a minimal Linux kernel — the base distro is only 35MB — and add literally only what you need. Once you have that, you can build your application on it, and run it wherever you need to. 

~ source

LinuxKit is not a full host operating system. It primarily has two jobs to accomplish: first, to run containerd containers, and secondly, to be secure. LinuxKit provides the ability to build an extremely small distribution (~50MB) stripped of everything but what is required to run containers. The init system and all other userland system components (e.g. dhcp, sshd) are containers themselves and as such can be swapped out, or others plugged in. The whole OS is immutable (unless data volumes are configured), and will run anywhere one finds silicon or a hypervisor: Windows, macOS, IoT devices, the Cloud etc. The system does not contain extraneous packages or drivers by default. Because LinuxKit is customizable, it is up to individual operators to include any additional bits they may require.

Let us try building a LinuxKit ISO out of the minimal.yml (shown below), which is also available under the LinuxKit repository, and verify its size.
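The file follows the Moby YAML format: a kernel, an init section, onboot containers and services. A minimal sketch along the lines of the example shipped in the LinuxKit repository (image tags vary by release, so treat the <tag> placeholders as assumptions):

kernel:
  image: linuxkit/kernel:<tag>
  cmdline: "console=tty0 console=ttyS0 console=ttyAMA0"
init:
  - linuxkit/init:<tag>
  - linuxkit/runc:<tag>
  - linuxkit/containerd:<tag>
onboot:
  # one-shot container that configures networking via DHCP at boot
  - name: dhcpcd
    image: linuxkit/dhcpcd:<tag>
    command: ["/sbin/dhcpcd", "--nobackground", "-f", "/dhcpcd.conf", "-1"]
services:
  # long-running service providing a console login
  - name: getty
    image: linuxkit/getty:<tag>
    env:
      - INSECURE=true
trust:
  org:
    - linuxkit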

git clone https://github.com/linuxkit/linuxkit
cd linuxkit && make
cp -rf bin/linuxkit /usr/local/bin/

Building LinuxKit ISO

Ajeets-MacBook-Air:examples ajeetraina$  linuxkit build -format iso-bios -name kitiso minimal.yml

This will build a minimal LinuxKit ISO.

Let us verify its size as shown below:

Ajeets-MacBook-Air:examples ajeetraina$ du -sh * | grep linuxkit.iso 
133M linuxkit.iso
Ajeets-MacBook-Air:examples ajeetraina$ file linuxkit.iso 
linuxkit.iso: DOS/MBR boot sector; partition 1 : ID=0x17, active, start-CHS (0x0,0,1), end-CHS (0x84,63,32), startsector 0, 272384 sectors
Ajeets-MacBook-Air:examples ajeetraina$

 

A traditional OS, on the other hand, is a complete operating system and includes hundreds or thousands of application programs.

2. Minimal Provisioning/Boot Time (1.5 – 3 minutes)

LinuxKit takes hardly 1.5 to 3 minutes to get up and running on a local hypervisor, cloud or bare metal system. This is better than the 3 to 5 minutes of provisioning time of a traditional OS distribution. LinuxKit is an entirely immutable system, coming in at around 50MB with nothing extraneous to what is needed to run containers. The root filesystem is read-only, making it stateless and tamper-proof. LinuxKit runs almost everything – onboot processes and continuous services – in a container, which makes the boot time quite fast. Even the init phase – where the OS image is booted – is configured by copying files from OCI images.

While booting up the minimal LinuxKit ISO, it took just 0.034515 sec for containerd to get booted up (shown below).

 

3. Build-time Package Updates(LinuxKit) Vs Run-time Package Updates(OS)

LinuxKit uses the Moby tool, which assembles a set of containerized components into an image. It uses a YAML configuration which specifies the components used to build up an image. All components are downloaded at build time to create an image. The image is self-contained and immutable, so it can be tested reliably for continuous delivery. Most importantly, the build itself takes a matter of seconds and is eminently reproducible, making it an ideal candidate to pass through a CI system. This is definitely an advantage over a traditional GNU operating system.

4. Built-in Security(LinuxKit) Vs Base Security(Traditional OS)

LinuxKit allows users to create very secure Linux subsystems because it is designed around containers. All of the processes, including system daemons, run in containers, enabling users to assemble a Linux subsystem with only the needed services. As a result, systems created with LinuxKit have a smaller attack surface than general purpose systems.

LinuxKit is architected to be secure by default. LinuxKit’s build process leverages Alpine Linux’s hardened userspace tools such as Musl libc, and compiler options that include -fstack-protector and position-independent executable output. 

Most importantly, LinuxKit uses modern kernels, and updates frequently following new releases.  LinuxKit tracks new kernel releases very closely, and also follows best practice settings for the kernel configuration from the Kernel Self Protection Project and elsewhere.

The core system components included in the LinuxKit userspace are written in type-safe languages, such as Rust, Go and OCaml, and run with maximum privilege separation and isolation. LinuxKit’s build process heavily leverages Docker images for packaging. Of note, all intermediate build images are referenced by digest to ensure reproducibility across LinuxKit builds. Tags are mutable, and thus subject to override (intentionally or maliciously) – referencing by digest mitigates classes of registry poisoning attacks in LinuxKit’s buildchain. Certain images, such as the kernel image, are usually signed by LinuxKit maintainers using Docker Content Trust, which guarantees authenticity, integrity, and freshness of the image.

If you compare it with the traditional OS, many kernel bugs usually lurk in the codebase for years. Therefore, it is imperative to not only patch the kernel to fix individual vulnerabilities but also benefit from the upstream security measures designed to prevent classes of kernel bugs.

5. Container & Cloud Native at the same time

It is important to note that LinuxKit is built with containers, for running containers. LinuxKit today is supported on local hypervisors, cloud and bare metal systems. The same minimal, container-native LinuxKit ISO image runs flawlessly on cloud platforms like Google Cloud Platform, Amazon Web Services & Microsoft Azure.

If we talk about the traditional OSes available, the distribution is usually customized by cloud vendors to fit into their cloud, and is quite different from the one available for bare metal systems – for example, an Amazon Machine Image (AMI) or preemptible Google Cloud instances.

6. Batteries included but removable/swappable

LinuxKit is built on the philosophy of “batteries included but swappable”. Everything is replaceable and customizable under LinuxKit – that’s one of its unique features. The YAML format specifies the components used to build an ISO image. This is made possible via the Moby tool, which assembles a set of containerized components into an image. Let us look at a couple of YAML files which show how this philosophy really works:

  • Minimal YAML file to build LinuxKit OS
  • YAML file which builds LinuxKit OS with SSH Enabled

If you compare the SSH-enabled YAML file with the minimal YAML, just an sshd service container has been added (see the sketch below), and hence a new LinuxKit ISO can be booted up. You can find various other YAML files under this link.
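A sketch of that addition – the extra service stanza appended to the minimal YAML (again, the image tag is a placeholder):

services:
  # long-running SSH daemon, added on top of the minimal image
  - name: sshd
    image: linuxkit/sshd:<tag>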

7. Immutable in Production

Lots of companies – small or very large – have used immutability as a core concept of their infrastructure. Immutable infrastructure basically consists of immutable components which are replaced for every deployment, rather than being updated. These components are usually started from a common image that is built once per deployment and can be tested and validated. The common image can be built through automation, but doesn’t have to be. It is important to note that immutability is independent of any tool or workflow for building the images. Its best use case is in a cloud or virtualized environment.

Why is it important?

Immutable components as part of your infrastructure are a way to reduce inconsistency in your infrastructure and improve the trust in your deployment process.

Toolkits like LinuxKit have made building immutable components very easy. Together with existing cloud infrastructure, it is a powerful concept to help you build better and safer infrastructure.

8. Designed to be managed by external tooling

LinuxKit today can be managed flawlessly with popular tooling like InfraKit, Terraform or CloudFormation. It uses external tooling such as InfraKit or CloudFormation templates to manage the update process externally to LinuxKit, including doing rolling cluster upgrades to make sure distributed applications stay up and responsive.

Updates may preserve the state disk used by applications if needed, either on the same physical node, or by reattaching a virtual cloud volume to a new node.

Soon after Dockercon 2017 Austin, I wrote a blog post on how Infrakit & LinuxKit work together for building immutable infrastructure.

Why Infrakit & LinuxKit are better together for Building Immutable Infrastructure?

If you want to know how LinuxKit & Terraform can work together, you should look at this link too.

 

9. Enough to bootstrap distributed applications

LinuxKit is designed for building and running clustered applications, including but not limited to container orchestration such as Docker or Kubernetes. It is obvious that in a production environment most users will run a cluster of instances, usually running distributed applications such as etcd, Docker, Kubernetes or distributed databases. LinuxKit provides examples of how to run these effectively, largely using tooling like InfraKit, although other machine orchestration systems can equally well be used, for example Terraform or VMware.

LinuxKit is gaining momentum as a toolkit for building custom minimal, immutable Linux distributions. Integration of InfraKit with LinuxKit helps users build and deploy custom OS images to a variety of targets – from a single VM instance on the Mac (via xhyve/hyperkit, no VirtualBox) to a cluster of them, as well as booting a remote ARM host on Packet.net from the local laptop via an ngrok tunnel.

 

10. Eliminates cloud-provider-specific base image variance

If we talk about the cloud, one serious challenge is that each vendor has differentiated IP invested in how they handle certain aspects of their cloud infrastructure. It is not enough to just provision n number of servers; what IT ops teams need is a simple and consistent way to declare the number of servers, what size they should be, and what sort of base software configuration is required. LinuxKit solves this base software configuration problem by allowing users to build a stripped-down version of a traditional OS based on their own requirements.

As said earlier, LinuxKit is not here to replace Alpine. LinuxKit uses the minimalist Alpine Linux distribution by default as the foundation of its official container images, and will continue to do so in future. Alpine is seen as an ideal all-purpose generic base for running heavy-lifting software like Redis and Nginx within containers. You can also switch Alpine for Debian/Ubuntu/CentOS if you wish.

Interested to learn more about LinuxKit? Head over to this series of articles.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.


A First Look at Kubernetes Integrated Docker For Mac Platform

Docker support for Kubernetes is now in private beta. As a Docker Captain, I was able to be part of the first group to get my hands on the update and can show you what to expect. Kubernetes is ONLY available to those users who are part of the private beta for Docker for Mac 17.12. To access beta builds, you must be signed in within Docker for Mac using your Docker ID. Please remember that this won’t be available to those users who missed registering for the Docker Beta Program.

[Updated: 1/7/2018] – Docker For Mac 17.12 GA is available and is the first release which includes both the orchestrators – Docker Swarm & Kubernetes – under the same Docker platform. As of 1/7/2018, Experimental Kubernetes has been released under the Edge Release (it is still not available under the D4M Stable Release). Experimental Kubernetes is still not available for the Docker for Windows & Linux platforms. It is slated to be available for Docker for Windows next month (mid-February) and then for Linux by March or April.

 

 

 

 

What’s new in Docker 17.12 CE Final Release?

The fresh new Docker 17.12 Community Edition Final Release includes a standalone Kubernetes server & client, plus Docker CLI integration. The Kubernetes server runs locally within your Docker instance and is a single-node cluster. It is specifically meant for development & testing purposes only.

Docker 17.12 CE includes the newer Docker Compose 1.18.0-rc2 release, along with the below new features & further improvements:

New Feature:

  • VM disk size can be changed in settings (see docker/for-mac#1037).

Bug fixes and minor changes

  • Avoid VM reboot when changing host proxy settings.
  • Don’t break HTTP traffic between containers by forwarding them via the external proxy (docker/for-mac#981)
  • Filesharing settings are now stored in settings.json
  • Daemon restart button has been moved to settings / Reset Tab
  • Display various component versions in About box
  • Better VM state handling & error messages in case of VM crashes

Important Information:

  • The beta features are being released in a controlled manner: not all users who signed up for the beta will be able to access the features right away. You must be signed in within Docker for Mac using your Docker ID to access the beta builds.

  • The Kubernetes features are only accessible on macOS for now; Windows will follow at a later date.

  • Because this feature is still in beta, it can only be accessed using the latest Docker for Mac release, more precisely on the Edge channel. 

How to get this Beta Release?

You need to install the Docker 17.12 Edge Release (it is NOT available under the Stable Release yet). Once you install the latest Beta release, all you need to do is log in with your Docker ID: select whale menu -> Sign in / Create Docker ID from the menu bar.

 

 

To enable Kubernetes support and install a standalone instance of Kubernetes running as a Docker container, select Enable Kubernetes and click the Apply and restart button.

 

Once Kubernetes support is enabled, you are able to deploy your workloads, in parallel, on Kubernetes, Swarm, and as standalone containers. Enabling or disabling the Kubernetes server is not going to affect other workloads.

This new beta includes Docker CLI integration for Kubernetes too along with a standalone Kubernetes server & client and this can be verified as shown below. 

 

 

By default, Kubernetes containers are hidden from commands like docker service ls, because managing them manually is not supported. To make them visible, select Show system containers (advanced) and click Apply and restart.

 

This means that now you can use the same old favourite docker ps command for displaying Kubernetes internal containers. Isn’t it cool?

 

How to get Kubectl working?

Kubectl comes out of the box by default with this beta release, hence you don’t need to install it. Kubectl is a command line interface for running commands against Kubernetes clusters. It holds a lot of functional similarity with the docker CLI, like docker run, docker attach, docker logs and so on. kubectl controls the Kubernetes cluster manager.

The Docker for Mac Kubernetes integration provides the Kubernetes CLI command at /usr/local/bin/kubectl. By default, kubectl might not work as expected, as we still need to choose the correct context.

 

Follow the below steps to get started with kubectl:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl config get-contexts

Let us pick up the docker-for-desktop context:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl config use-context docker-for-desktop
Switched to context "docker-for-desktop".

Now kubectl should work fine and display a single-node K8s cluster as shown below:
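With the docker-for-desktop context selected above, the single node can be listed with:

kubectl get nodes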

The above output shows us that we were able to connect to our Kubernetes cluster and display the status of our master node. In case you are new to Kubernetes terminology – a Kubernetes Node is a physical or virtual machine used to host application containers.  

Verifying the kubectl version:

Ajeets-MacBook-Air:~ ajeetraina$ sudo kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:48:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.2", GitCommit:"bdaeafa71f6c7c04636251031f93464384d54963", GitTreeState:"clean", BuildDate:"2017-10-24T19:38:10Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Verifying the Kubernetes Containers
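With Show system containers enabled earlier, the plain docker CLI lists the Kubernetes system containers – they show up with a k8s_ name prefix, so they can be narrowed down like this (a sketch):

docker ps --filter name=k8s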

Displaying the Kubernetes Cluster Information

Ajeets-MacBook-Air:~ ajeetraina$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443
KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'

 

Test Drive WordPress Application on a single Node Kubernetes Cluster

With our Kubernetes cluster ready, let us now start deploying application containers. I have picked my all-time favourite WordPress application.

Let us test and deploy WordPress application on top of this single node Kubernetes cluster. We will follow the below steps:

  1. Creating a Persistent Volume
  2. Creating a Secret
  3. Deploying MySQL
  4. Deploying WordPress

Creating a Persistent Volume

We will first create two Persistent Volumes from the local-volumes.yaml file shown below:
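A sketch of such a file, assuming two simple hostPath-backed volumes (the names, sizes and paths are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # directory on the node backing this volume
    path: /tmp/data/pv-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-2
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/data/pv-2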

Copy the code into a file called local-volumes.yaml and then run the below command:

kubectl create -f local-volumes.yaml

You can verify the details below:
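The Persistent Volumes can be listed with:

kubectl get pv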

Creating a Secret

Ajeets-MacBook-Air:~ ajeetraina$ kubectl create secret generic mysql-pass --from-literal=password=collab123
secret "mysql-pass" created

Verifying the secrets:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get secrets
NAME                  TYPE                                  DATA      AGE
default-token-fwflq   kubernetes.io/service-account-token   3         1h
mysql-pass            Opaque                                1         15s
Ajeets-MacBook-Air:~ ajeetraina$

Deploying MySQL

The below YAML describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password from the Secret.
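A minimal sketch of such a manifest, consistent with the objects created in the output below (the headless wordpress-mysql Service, the mysql-pv-claim PVC and the MySQL Deployment); the image tag and storage size are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  # headless service: WordPress reaches MySQL by name inside the cluster
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            # database password comes from the Secret created earlier
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            # the PersistentVolume is mounted at /var/lib/mysql
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim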

Create a file called "mysql-deployment.yaml" and copy the above content. Once you save the file, run the below command to bring up the MySQL container:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl create -f mysql-deployment.yaml
service "wordpress-mysql" created
persistentvolumeclaim "mysql-pv-claim" created

Verifying the MySQL Pod:

Ajeets-MacBook-Air:~ ajeetraina$ kubectl get pods
NAME                               READY     STATUS              RESTARTS   AGE
wordpress-mysql-7b4ffb6fb4-gfk8n   0/1       ContainerCreating   0          34s

 

 

Deploying WordPress

The below YAML describes a single-instance WordPress Deployment and Service. It uses a PVC for persistent storage & a Secret for the password, as shown in the content. Did you notice the type: NodePort entry? This setting exposes WordPress to traffic from outside of the cluster.
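Again, a minimal sketch consistent with the objects created below (the wordpress Service with type: NodePort, the wp-pv-claim PVC and the WordPress Deployment); the image tag and storage size are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  # NodePort exposes WordPress to traffic from outside the cluster
  type: NodePort
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - name: wordpress
          image: wordpress:4.8-apache
          env:
            # points WordPress at the MySQL service defined earlier
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-pass
                  key: password
          ports:
            - containerPort: 80
              name: wordpress
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: wp-pv-claim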

Ajeets-MacBook-Air:~ ajeetraina$ kubectl create -f wordpress-deployment.yaml
service "wordpress" created
persistentvolumeclaim "wp-pv-claim" created
deployment "wordpress" created

Let us verify the pods using kubectl get pods command:

 

 

Verify that the Services are up and running by running the following command:
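For example:

kubectl get services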

 

 

You can fetch overall status of the pods and running containers using the below command:
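A sketch of how to check both the Kubernetes Pods and the underlying Docker containers:

kubectl get pods -o wide
docker ps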

 

By now, you should be able to visit http://localhost and see WordPress page as shown below:

 

 

While you access the WordPress page, you can see the logs under the WordPress container as shown below:

 

Hence, this is how your WordPress page can be customized flawlessly.

 

In my next blog post, I will deep dive into how Docker Swarm and Kubernetes are going to work together under the same cluster environment.

Did you find this blog helpful?  Feel free to share your experience. Get in touch @ajeetsraina

If you are looking out for contribution/discussion, join me at Docker Community Slack Channel.

 
