Installing Helm to deploy Kubernetes Applications on Docker Enterprise 2.0 Made Easy

Estimated Reading Time: 10 minutes

 

 

Let’s talk about RBAC under Docker EE 2.0…

Kubernetes RBAC (Role-Based Access Control) is a fundamental part of Kubernetes security best practices, along with rolling out TLS certificates / PKI authentication for connecting to the Kubernetes API server and between its components. Kubernetes RBAC is essentially an authorization and access control specification where you define the actions (GET, UPDATE, DELETE, etc.) that Kubernetes subjects (i.e. human users, software, kubelets) are allowed to perform over Kubernetes entities (i.e. pods, secrets, nodes).
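As a tiny illustration (the user, namespace and role names here are hypothetical, and under Docker EE these permissions are ultimately enforced through UCP grants rather than raw kubectl), granting a user read-only access to Pods with plain Kubernetes RBAC looks roughly like this:

# Hypothetical example: allow user "alice" to read Pods in the "web" namespace.
# The Role defines which verbs are allowed on which resources; the RoleBinding assigns it to a subject.
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods --namespace=web
kubectl create rolebinding alice-reads-pods --role=pod-reader --user=alice --namespace=web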

 

 

 

Early this year, Docker EE 2.0 introduced an enhanced RBAC solution for the first time, providing flexible and granular access controls across multiple teams and users. Docker EE leverages the Kubernetes webhook authentication model, which enables the validation of all requests by an outside source. In Docker EE that outside source is the control plane’s RBAC controller, eNZi. Each Kubernetes request, whether issued via the CLI or the GUI, is validated against Docker EE’s authN/authZ database, and then accepted or rejected as appropriate.

 

With Docker EE 2.0, UCP now includes an upstream distribution of Kubernetes. From a security point of view this is the best of both worlds. Out of the box Docker EE 2.0 provides user authentication and RBAC on top of Kubernetes. To ensure the Kubernetes orchestrator follows all the security best practices UCP utilizes TLS for the Kubernetes API port. When combined with UCP’s auth model, this allows for the same client bundle to talk to the Swarm or Kubernetes API.

Early this year, I wrote a blog post that deep-dives into the Docker EE 2.0 architecture. Check it out if you are new to the Docker Enterprise product and want to get into its nitty-gritty.

Under the Hood: Demystifying Docker Enterprise Edition 2.0 Architecture

In this blog post, I will show you an insanely easy way to deploy the Helm package manager on a 3-node Kubernetes cluster running Docker Enterprise 2.0.

Tested Infrastructure

Platform: Google Cloud Platform
OS Instance: Ubuntu 18.04
Machine Type: n1-standard-4 (4 vCPUs, 15 GB memory)
No. of Nodes: 3

I assume that you have 3 Ubuntu 18.04 instances with the above minimal configuration. Make sure all the hosts you want to manage with Docker EE have a minimum of:

  • Docker Enterprise Edition 17.06.2-ee-8 or higher (the value of n in the -ee-<n> suffix must be 8 or higher)
  • Linux kernel version 3.10 or higher
  • 4.00 GB of RAM
  • 3.00 GB of available disk space

 


Cloning the Repository

Login to the first Ubuntu 18.04 OS and run the below command to clone the repository.

git clone https://github.com/ajeetraina/docker101
cd docker101/docker-ee/ubuntu

 

Installing Docker EE

The best way to try Docker Enterprise Edition for yourself is to get the 30-day trial available at the Docker Store. Once you get your trial license, you can install Docker EE on your Linux servers.

As soon as you get the 1-month trial version of Docker EE, you will be provided with a URL. Copy the section of the URL starting with sub-:

https://storebits.docker.com/ee/ubuntu/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Exporting URL

export eeid=sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx

Installing Docker EE

I have built a script for you to get Docker EE 2.0 up and running flawlessly. Just one command and Docker EE is all set.

sh bootstrap.sh provision_dockeree
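If you are curious what this step boils down to, it is essentially the standard Docker EE apt-repository setup for Ubuntu; a rough sketch (the stable-17.06 channel and the docker-ee package name are assumptions based on the Docker EE documentation, not an excerpt of the script):

# Rough manual equivalent of "provision_dockeree", assuming $eeid is exported as above
export DOCKER_EE_URL="https://storebits.docker.com/ee/ubuntu/${eeid}"
# Add Docker's GPG key and the EE repository tied to your subscription
curl -fsSL "${DOCKER_EE_URL}/ubuntu/gpg" | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] ${DOCKER_EE_URL}/ubuntu $(lsb_release -cs) stable-17.06"
# Install the Docker EE engine
sudo apt-get update && sudo apt-get install -y docker-ee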

Setting up UCP

Run the below command to launch the container that will set up UCP for you.

sudo sh bootstrap.sh provision_ucp
openusm@master01:~/test/docker101/docker-ee/ubuntu$ sudo sh bootstrap.sh provision_ucp
Unable to find image 'docker/ucp:3.0.5' locally
3.0.5: Pulling from docker/ucp
ff3a5c916c92: Pull complete 
a52011fa0ead: Pull complete 
87e35eb74a08: Pull complete 
Digest: sha256:c8a609209183561de7779e5d32abc5fd9125944c67e8daf262dcbb9f2b1e44ff
Status: Downloaded newer image for docker/ucp:3.0.5
INFO[0000] Your engine version 17.06.2-ee-16, build 9ef4f0a (4.15.0-1021-gcp) is compatible with UCP 3.0.5 (f588f8a) 
Admin Username: collabnix
Admin Password: 
Confirm Admin Password: 
INFO[0043] Pulling required images... (this may take a while) 
INFO[0043] Pulling docker/ucp-auth:3.0.5                
INFO[0049] Pulling docker/ucp-hyperkube:3.0.5           
INFO[0064] Pulling docker/ucp-etcd:3.0.5                
INFO[0070] Pulling docker/ucp-interlock-proxy:3.0.5     
INFO[0086] Pulling docker/ucp-agent:3.0.5               
INFO[0092] Pulling docker/ucp-kube-compose:3.0.5        
INFO[0097] Pulling docker/ucp-dsinfo:3.0.5              
INFO[0104] Pulling docker/ucp-cfssl:3.0.5               
INFO[0107] Pulling docker/ucp-kube-dns-sidecar:3.0.5    
INFO[0112] Pulling docker/ucp-interlock:3.0.5           
INFO[0115] Pulling docker/ucp-kube-dns:3.0.5            
INFO[0120] Pulling docker/ucp-controller:3.0.5          
INFO[0128] Pulling docker/ucp-pause:3.0.5               
INFO[0132] Pulling docker/ucp-calico-kube-controllers:3.0.5 
INFO[0136] Pulling docker/ucp-auth-store:3.0.5          
INFO[0142] Pulling docker/ucp-calico-cni:3.0.5          
INFO[0149] Pulling docker/ucp-calico-node:3.0.5         
INFO[0158] Pulling docker/ucp-kube-dns-dnsmasq-nanny:3.0.5 
INFO[0163] Pulling docker/ucp-compose:3.0.5             
INFO[0167] Pulling docker/ucp-swarm:3.0.5               
INFO[0173] Pulling docker/ucp-metrics:3.0.5             
INFO[0179] Pulling docker/ucp-interlock-extension:3.0.5 
WARN[0183] None of the hostnames we'll be using in the UCP certificates [master01 127.0.0.1 172.17.0.1 10.140.0.2] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases 

You may enter additional aliases (SANs) now or press enter to proceed with the above list.
Additional aliases: 
INFO[0000] Initializing a new swarm at 10.140.0.2 
INFO[0009] Installing UCP with host address 10.140.0.2 - If this is incorrect, please specify an alternative address with the '--host-address' flag 
INFO[0009] Deploying UCP Service...                     
INFO[0068] Installation completed on master01 (node slsvy00m1khejbo5itmupk034) 
INFO[0068] UCP Instance ID: omz7lso0zpeyzk17gxubvz72r   
INFO[0068] UCP Server SSL: SHA-256 Fingerprint=24:9B:51:4E:E2:F1:CD:1B:DE:E0:86:0F:DC:E7:29:B5:1E:0E:6B:0C:BF:24:CC:27:85:91:35:A1:6A:39:37:C6 
INFO[0068] Login to UCP at https://10.140.0.2:443       
INFO[0068] Username: collabnix                          
INFO[0068] Password: (your admin password) 
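For reference, the provision_ucp step is essentially a wrapper around the standard UCP install container; a rough manual equivalent (flags assumed from the UCP 3.0 documentation rather than read from the script) would be:

# Sketch of the underlying UCP install (replace <node-ip> with the host address of this node)
sudo docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.0.5 install --host-address <node-ip> --interactive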

Logging in to Docker EE

By now, you should be able to log in to the Docker EE UI using a browser. Upload the license and you should see the UCP console.

Installing Kubectl

Undoubtedly, kubectl has been the favourite command-line tool of K8s users. It works great, but it can be painful because you use it to manually run a command for each resource in your Kubernetes application. This is prone to error, because we might forget to deploy one resource, or introduce a typo when writing our kubectl commands. As we add more parts to our application, the probability of these problems occurring increases.

But still, here’s a bonus: execute the below script if you really want to use kubectl on the Docker EE platform.

sudo sh bootstrap.sh install_kubectl
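In case you want to do it by hand instead, the script is roughly the standard apt-based kubectl installation (repository and package names follow the upstream Kubernetes docs and are assumptions, not an excerpt of the script):

# Rough manual equivalent of "install_kubectl"
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubectl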

Verify Kubectl Version

@master01:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.11", GitCommit:"1df6a8381669a6c753f79cb31ca2e3d57ee7c8a3", GitTreeState:"clean", BuildDate:"2018-04-05T17:24:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.11-docker-8d637ae", GitCommit:"8d637aedf46b9c21dde723e29c645b9f27106fa5", GitTreeState:"clean", BuildDate:"2018-04-26T16:51:21Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Verifying Docker Version

Here you go: Docker EE 2.0 is all set up, with Swarm & Kubernetes running side by side.

 docker version
Client: Docker Enterprise Edition (EE) 2.0
 Version:      17.06.2-ee-16
 API version:  1.30
 Go version:   go1.8.7
 Git commit:   9ef4f0a
 Built:        Thu Jul 26 16:41:28 2018
 OS/Arch:      linux/amd64
Server: Docker Enterprise Edition (EE) 2.0
 Engine:
  Version:      17.06.2-ee-16
  API version:  1.30 (minimum version 1.12)
  Go version:   go1.8.7
  Git commit:   9ef4f0a
  Built:        Thu Jul 26 16:40:18 2018
  OS/Arch:      linux/amd64
  Experimental: false
 Universal Control Plane:
  Version:       3.0.5
  ApiVersion:                   1.30
  Arch:                         amd64
  BuildTime:                    Thu Aug 30 17:47:03 UTC 2018
  GitCommit:                    f588f8a
  GoVersion:                    go1.9.4
  MinApiVersion:                1.20
  Os:                           linux
 Kubernetes:
  Version:      1.8+
  buildDate:                   2018-04-26T16:51:21Z
  compiler:                    gc
  gitCommit:                   8d637aedf46b9c21dde723e29c645b9f27106fa5
  gitTreeState:                clean
  gitVersion:                  v1.8.11-docker-8d637ae
  goVersion:                   go1.8.3
  major:                       1
  minor:                       8+
  platform:                    linux/amd64
 Calico:
  Version:          v3.0.8
  cni:                             v2.0.6
  kube-controllers:                v2.0.5
  node:                            v3.0.8

Verifying the Kubernetes Nodes

@master01:~/test/docker101/docker-ee/ubuntu$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
master01   Ready     master    20m       v1.8.11-docker-8d637ae

Adding Worker Nodes

To add worker nodes, go to the Add Nodes section under the Docker Enterprise UI and click on “Add a Node”. It will display a command which needs to be executed on each worker node; running it there is all it takes to build a multi-node Docker EE Swarm & Kubernetes cluster, as sketched below.
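The join command displayed by UCP will look roughly like the following (the token and manager address are placeholders; always use the exact command shown in your UI):

# Run on each worker node
docker swarm join --token SWMTKN-1-<token> <ucp-manager-ip>:2377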

m@master01:~$ sudo docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
av668en5dinpin5jpi6ro0yfs     worker01            Ready               Active              
k4grcnyl6vbf0z17bh67cz9l5     worker02            Ready               Active              
slsvy00m1khejbo5itmupk034 *   master01            Ready               Active              Leader

@master01:~$ kubectl get nodes
NAME       STATUS     ROLES     AGE       VERSION
master01   Ready      master    1h        v1.8.11-docker-8d637ae
worker01   NotReady   <none>    28s       v1.8.11-docker-8d637ae
worker02   Ready      <none>    3m        v1.8.11-docker-8d637ae
openusm@master01:~$ 

Downloading Client Bundle

In order to manage services using the Docker CLI, one needs to install the client bundle, and Docker Inc. provides an easy way to do so. I put it under the script “install-client-bundle”. All you need is to supply the correct username and password plus the UCP URL to make it work.
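If you prefer to fetch the bundle by hand, the UCP documentation describes an API-based flow along these lines (the UCP URL and password are placeholders, and jq plus unzip are assumed to be installed):

# Sketch: download and load a UCP client bundle via the API
UCP_URL=https://<ucp-host>
AUTHTOKEN=$(curl -sk -d '{"username":"collabnix","password":"<password>"}' "${UCP_URL}/auth/login" | jq -r .auth_token)
curl -sk -H "Authorization: Bearer ${AUTHTOKEN}" "${UCP_URL}/api/clientbundle" -o bundle.zip
unzip bundle.zip
# Point docker and kubectl at UCP using the bundle's environment script
eval "$(<env.sh)"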

 

Installing Helm

openusm@master01:~$ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > install-helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7230  100  7230    0     0  17173      0 --:--:-- --:--:-- --:--:-- 17132
openusm@master01:~$ chmod u+x install-helm.sh
openusm@master01:~$ ./install-helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
openusm@master01:~$ helm init
Creating /home/openusm/.helm 
Creating /home/openusm/.helm/repository 
Creating /home/openusm/.helm/repository/cache 
Creating /home/openusm/.helm/repository/local 
Creating /home/openusm/.helm/plugins 
Creating /home/openusm/.helm/starters 
Creating /home/openusm/.helm/cache/archive 
Creating /home/openusm/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /home/openusm/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
openusm@master01:~$

Verifying Helm Version

helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}

Installing MySQL using Helm

helm install --name mysql stable/mysql
Error: release mysql failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default": access denied

You will encounter the above error message, and it is expected. It is caused by Tiller not having the correct service account / permissions to install applications into the cluster.

Creating Service Account for Tiller

To create a service account for tiller and apply the correct grants, follow these steps:

Create the tiller service account:

openusm@master01:~$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount "tiller" created

 

In the UCP UI, navigate to the User Management > Grants menu and click the Create button. In the Subject area, choose Service Account, select the kube-system Namespace and the tiller Service Account, then hit Next:

 

In the Role area, select Full Control. Then hit Next. In the Resource area, select Namespaces, flip the switch to enable Apply grant to all existing and new namespaces, then hit Create:

 

 

Verify the final grant. It should look similar to what is shown below:

 

At this stage, if you have tiller installed in the cluster already, you will need to patch the tiller deployment to use the tiller service account just created:

$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
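For reference, on a vanilla (non-UCP) Kubernetes cluster the same permissions are normally granted with a ClusterRoleBinding, as in the upstream Helm documentation; under Docker EE the UCP grant above is what actually enforces access, so treat this only as the generic equivalent:

# Generic Kubernetes equivalent of the UCP grant (not needed when using UCP grants)
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller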

So before you can use Helm with a Kubernetes cluster, you need to install Tiller on it. It’s as easy as running the below command:

openusm@master01:~$ helm init --service-account tiller
$HELM_HOME has been configured at /home/openusm/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

The above command installs Tiller (the Helm server-side component) onto your Kubernetes cluster and sets up local configuration in $HELM_HOME (default ~/.helm/). As with the rest of the Helm commands, ‘helm init’ discovers Kubernetes clusters by reading $KUBECONFIG (default ‘~/.kube/config’) and using the default context.
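You can confirm that the Tiller deployment is now using the tiller service account with something along these lines:

# Should print "tiller" once the patch / helm init has been applied
kubectl -n kube-system get deploy tiller-deploy -o jsonpath='{.spec.template.spec.serviceAccountName}'; echo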

We are good to go ahead and install a chart via helm install:

openusm@master01:~$ helm install --name mysql stable/mysql
NAME:   mysql
LAST DEPLOYED: Thu Oct 11 13:37:52 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Pod(related)
NAME                    READY  STATUS   RESTARTS  AGE
mysql-694696f6d5-w4j5n  0/1    Pending  0         1s
==> v1/Secret
NAME   AGE
mysql  1s
==> v1/ConfigMap
mysql-test  1s
==> v1/PersistentVolumeClaim
mysql  1s
==> v1/Service
mysql  1s
==> v1beta1/Deployment
mysql  1s
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysql.default.svc.cluster.local
To get your root password run:
    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
To connect to your database:
1. Run an Ubuntu pod that you can use as a client:
    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
2. Install the mysql client:
    $ apt-get update && apt-get install mysql-client -y
3. Connect using the mysql cli, then provide your password:
    $ mysql -h mysql -p
To connect to your database directly from outside the K8s cluster:
    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306
    # Execute the following command to route the connection:
    kubectl port-forward svc/mysql 3306
    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
    

In my next blog post, I will showcase how to build an Istio mesh on Docker Enterprise 2.0. Stay tuned!

If you’re interested in a fully conformant Kubernetes environment that is ready for the enterprise, sign up for the trial at https://trial.docker.com/

Did you find this blog helpful? Feel free to share your experience. Get in touch with me on Twitter –  @ajeetsraina

If you are looking to contribute, join me on the Docker Community Slack channel.

Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster

Estimated Reading Time: 8 minutes

 

Istio is a completely open source service mesh that layers transparently onto existing distributed applications. Istio v1.0 was announced last month and is ready for production. It is written entirely in Go, and it is a fully grown platform which provides APIs that let it integrate into any logging, telemetry or policy system while adding only a very tiny overhead to your system. It is hosted on GitHub. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

Istio is composed of these components:

  • Envoy – Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.

    Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.

  • Mixer – Central component that is leveraged by the proxies and microservices to enforce policies such as authorization, rate limits, quotas, authentication, request tracing and telemetry collection.
  • Pilot – A component responsible for configuring the proxies at runtime.
  • Citadel – A centralized component responsible for certificate issuance and rotation.
  • Node Agent – A per-node component responsible for certificate issuance and rotation.
  • Galley – Central component for validating, ingesting, aggregating, transforming and distributing config within Istio.

What benefits does Istio bring?

Figure: The sidecar intercepts all the network traffic

  • Istio lets you connect, secure, control, and observe services.
  • It helps to reduce the complexity of service deployments and eases the strain on your development teams.
  • It provides developers and DevOps fine-grained visibility and control over traffic without requiring any changes to application code.
  • It provides CIOs with the necessary tools needed to help enforce security and compliance requirements across the enterprise.
  • It provides behavioral insights & operational control over the service mesh as a whole.
  • Istio makes it easy to create a network of deployed services with automatic load balancing for HTTP, gRPC, WebSocket & TCP traffic.
  • It provides fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
  • It enables a pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  • Istio provides automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • It provides secure service-to-service communication in a cluster with strong identity-based authentication and authorization.

Istio currently supports Kubernetes. In this blog post, I will showcase how one can bring up Istio on the Play with Kubernetes platform.

Open https://labs.play-with-k8s.com/ to access Kubernetes Playground.
Click on the Login button to authenticate with Docker Hub or GitHub ID.

Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on “Add New Instance” on the left to build your first Kubernetes Cluster node. It automatically names it as “node1”. Each instance has Docker Community Edition (CE) and Kubeadm already pre-installed. This node will be treated as the master node for our cluster.

Bootstrapping the Master Node

You can bootstrap the Kubernetes cluster by initializing the master (node1) node with the below script. Copy the script content into a bootstrap.sh file and make it executable using the “chmod +x bootstrap.sh” command.
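The original post embeds the script itself; a minimal sketch of what such a bootstrap typically looks like on Play with Kubernetes (kubeadm init plus Weave Net as the pod network is an assumption based on the playground's built-in init script):

#!/bin/bash
# Initialize the control plane, advertising this node's own IP
kubeadm init --apiserver-advertise-address "$(hostname -i)"
# Make kubectl work for the current user
mkdir -p "$HOME/.kube"
cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
# Install a pod network add-on (Weave Net)
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"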

When you execute this script, as part of the initialization kubeadm writes the several configuration files needed, sets up RBAC and deploys the Kubernetes control-plane components (like kube-apiserver, kube-dns, kube-proxy, etcd, etc.). Control-plane components are deployed as Docker containers.

Copy the kubeadm join command printed at the end of the initialization and save it for the next step. This command will be used to join other nodes to your cluster.

Adding Worker Nodes

Click on “Add New Instance” to add a new worker node, then run the kubeadm join command you saved earlier on it.
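The saved command looks roughly like this (token, hash and address are placeholders; use the exact line printed by kubeadm init):

# Run on each worker node
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>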

Checking the Cluster Status
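The screenshots in the original show the output of the usual status checks, for example:

# Run on the master (node1)
kubectl get nodes -o wide
kubectl get componentstatuses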

 

 

Verifying the running Pods
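Again, the original shows a screenshot here; the command behind it is simply:

kubectl get pods --all-namespaces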


Installing Istio 1.0.0

Istio is deployed in a separate Kubernetes namespace, istio-system. We will verify it later. For now, copy the below content into a file called install_istio.sh and save it. Make it executable and run it to install Istio and the related tools.

#!/bin/bash
# Download and extract the Istio release
curl -L https://git.io/getLatestIstio | sh -
cd istio-1.0.0
# Put istioctl on the PATH
export PATH=$PWD/bin:$PATH
# Install Istio's CRDs and the demo profile
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
kubectl apply -f install/kubernetes/istio-demo.yaml
# Verify the Istio services and pods
kubectl get svc -n istio-system
kubectl get pods -n istio-system

You should see the screen flooded with output like the below.

 

 

As shown above, this enables Prometheus, ServiceGraph, Jaeger, Grafana, and Zipkin by default.

Please note: while executing this script, it might end up with the below error message:

unable to recognize "install/kubernetes/istio-demo.yaml": no matches for admissionregistration.k8s.io/, Kind=MutatingWebhookConfiguration

This error message is expected.

As soon as the command completes, you should see a long list of ports displayed at the top center of the page.

Verifying the Services
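The original shows a screenshot here; the underlying check is the same as the last two lines of the install script:

kubectl get svc -n istio-system
kubectl get pods -n istio-system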


Exposing the Services

To expose the Prometheus, Grafana & Servicegraph services, you will need to delete the existing ClusterIP services and re-create them as NodePort services, so that you can access them via the ports displayed at the top of the instance page (as shown below).
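A sketch of how that re-creation could look for Grafana and Prometheus (the container ports 3000 and 9090, and the app=<name> selectors created by kubectl, are assumptions based on the istio-demo defaults; the NodePorts match the ones referenced below):

# Re-create Grafana and Prometheus as NodePort services (sketch)
kubectl -n istio-system delete svc grafana prometheus
kubectl -n istio-system create service nodeport grafana --tcp=3000 --node-port=30004
kubectl -n istio-system create service nodeport prometheus --tcp=9090 --node-port=30003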

You should be able to access Grafana page by clicking on “30004” port and Prometheus page by clicking on “30003”.

You can check Prometheus metrics by selecting the necessary option as shown below:

Under Grafana Page, you can add “Data Source” for Prometheus and ensure that the dashboard is up and running:

Congratulations! You have installed Istio on a Kubernetes cluster. The below-listed services have been installed on the K8s playground:

  • Istio Controllers and related RBAC rules
  • Istio Custom Resource Definitions
  • Prometheus and Grafana for Monitoring
  • Jaeger for Distributed Tracing
  • Istio Sidecar Injector (we’ll take a look at it in the next section)

Installing Istioctl

Istioctl is the configuration command-line utility of Istio. It helps you create, list, modify and delete configuration resources in the Istio system.
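istioctl ships inside the bin/ directory of the Istio release downloaded earlier, so if it is not already on your PATH (the install script above exports it), you can set it up and sanity-check it like this:

cd istio-1.0.0
export PATH=$PWD/bin:$PATH
istioctl version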

Deploying the Sample BookInfo Application

Now that Istio is installed and verified, you can deploy one of the sample applications provided with the installation: BookInfo. This is a simple mock bookstore application made up of four services that provide a web product page, book details, reviews (with several versions of the review service), and ratings, all managed using Istio.

Deploying BookInfo Services

 

[node1 istio-1.0.0]$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service "details" created
deployment "details-v1" created
service "ratings" created
deployment "ratings-v1" created
service "reviews" created
deployment "reviews-v1" created
deployment "reviews-v2" created
deployment "reviews-v3" created
service "productpage" created
deployment "productpage-v1" created

[node1 istio-1.0.0]$ istioctl create -f samples/bookinfo/networking/bookinfo-gateway.yaml
Created config gateway/default/bookinfo-gateway at revision 13436
Created config virtual-service/default/bookinfo at revision 13438

 

 

Verifying BookInfo Application

[node1 istio-1.0.0]$ kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.97.29.111     <none>        9080/TCP   1m
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    1h
productpage   ClusterIP   10.106.144.171   <none>        9080/TCP   1m
ratings       ClusterIP   10.111.164.221   <none>        9080/TCP   1m
reviews       ClusterIP   10.99.195.21     <none>        9080/TCP   1m

[node1 istio-1.0.0]$ curl 10.106.144.171:9080
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">

<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css">

</head>
<body>

<p><h3>Hello! This is a simple bookstore application consisting of three services as shown below</h3></p>

<table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><td>details</td></tr><tr><th>name</th><td>http://details:9080</td></tr><tr><th>children</th><td><table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><th>name</th><th>children</th></tr><tr><td>details</td><td>http://details:9080</td><td></td></tr><tr><td>reviews</td><td>http://reviews:9080</td><td><table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><th>name</th><th>children</th></tr><tr><td>ratings</td><td>http://ratings:9080</td><td></td></tr></table></td></tr></table></td></tr></table>

<p><h4>Click on one of the links below to auto generate a request to the backend as a real user or a tester
</h4></p>
<p><a href="/productpage?u=normal">Normal user</a></p>
<p><a href="/productpage?u=test">Test user</a></p>

<!-- Latest compiled and minified JavaScript -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>

<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>

</body>
</html>

 

Accessing it via Web URL

[node1 istio-1.0.0]$ kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.97.29.111     <none>        9080/TCP   2m
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    1h
productpage   ClusterIP   10.106.144.171   <none>        9080/TCP   2m
ratings       ClusterIP   10.111.164.221   <none>        9080/TCP   2m
reviews       ClusterIP   10.99.195.21     <none>        9080/TCP   2m

[node1 istio-1.0.0]$ kubectl delete svc productpage
service "productpage" deleted

[node1 istio-1.0.0]$ kubectl create service nodeport productpage --tcp=9080 --node-port=30010
service "productpage" created

 

You should now be able to access the BookInfo sample as shown below:

In my next blog post, I will showcase how to bring up the Kubernetes Dashboard on the Play with Kubernetes platform.

 

Kubernetes Hands-on Lab #2 – Running Our First Nginx Cluster

Estimated Reading Time: 2 minutes

 

 

Nginx (pronounced “engine-x”) is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server). The nginx project started with a strong focus on high concurrency, high performance and low memory usage. It is licensed under the 2-clause BSD-like license and it runs on Linux, BSD variants, Mac OS X, Solaris, AIX, HP-UX, as well as on other *nix flavors. It also has a proof of concept port for Microsoft Windows.

In my last blog post, I showcased how to build a 5-node Kubernetes cluster. In this blog post, we will see how to run our first Nginx application on this cluster environment.

Verifying 5-Node K8s Cluster

[node1 ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    1h        v1.10.2
node2     Ready     <none>    1h        v1.10.2
node3     Ready     <none>    1h        v1.10.2
node4     Ready     <none>    1h        v1.10.2
node5     Ready     <none>    14m       v1.10.2
[node1 ~]$

Running Nginx with 4 Replicas

kubectl run nginx --image=nginx:latest --replicas=4

Verifying K8s Pods Up and Running

[node1 ~]$ kubectl get po
NAME                     READY     STATUS    RESTARTS   AGE
nginx-5db977d67c-6sdfd   1/1       Running   0          2m
nginx-5db977d67c-jfq9h   1/1       Running   0          2m
nginx-5db977d67c-vs925   1/1       Running   0          2m
nginx-5db977d67c-z5r45   1/1       Running   0          2m
[node1 ~]$

Watch the pods

kubectl get pods -w

Expose the Nginx HTTP port:


kubectl expose deploy/nginx --port 80

Testing the Nginx Service


IP=$(kubectl get svc nginx -o go-template --template '{{ .spec.clusterIP }}')

Send a few requests:

[node1 ~]$ curl $IP:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[node1 ~]$

In my next blog post, I will showcase how to build an Istio application on the Play with Kubernetes platform.
