Kubernetes Hands-on Lab #4 – Deploy Prometheus Stack using Helm on Play with Kubernetes Platform

Estimated Reading Time: 8 minutes

 

Let’s talk about Kubernetes Deployment Challenges…

As monolithic systems become too large to deal with, many enterprises are drawn to breaking them down into a microservices architecture. Whenever we move from a monolithic to a microservices architecture, the application consists of multiple components (services) talking to each other. Each component has its own resources and can be scaled individually. With Kubernetes, this can become very complex given all the objects you need to handle, such as ConfigMaps, Services, Pods, and Persistent Volumes, in addition to the number of releases you need to manage. The following challenges are common:

1. Manage, edit, and update multiple K8s configurations
2. Deploy multiple K8s configurations as a SINGLE application
3. Share and reuse K8s configurations and applications
4. Parameterize and support multiple environments
5. Manage application releases: rollout, rollback, diff, history
6. Define the deployment lifecycle (control operations to be run in different phases)
7. Validate release state after deployment
 
These challenges can be addressed with Kubernetes Helm, which offers a simple way to package everything into one application and advertise what you can configure.
 

Helm is a deployment management tool (and NOT JUST a package manager) for Kubernetes. It does the heavy lifting of repeatable deployments, management of dependencies (reuse and share), management of multiple configurations, and update, rollback, and testing of application deployments (releases).

Under this blog post, we will test drive Helm on top of Play with Kubernetes Platform. Let’s get started.

Open https://labs.play-with-k8s.com/ to access Kubernetes Playground.

Click on the Login button to authenticate with Docker Hub or GitHub ID.

 

Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on “Add New Instance” on the left to build your first Kubernetes Cluster node. It automatically names it as “node1”. Each instance has Docker Community Edition (CE) and Kubeadm already pre-installed. This node will be treated as the master node for our cluster.

Bootstrapping the Master Node

 

You can bootstrap the Kubernetes cluster by initializing the master (node1) node with the script below. Copy the script content into a bootstrap.sh file and make it executable using the "chmod +x bootstrap.sh" command.
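A minimal sketch of such a bootstrap script is shown here. It assumes the standard Play with Kubernetes workflow of running kubeadm init and then installing the Weave Net pod network; adjust it to whatever the playground banner suggests for your session.

#!/bin/bash
# Initialize the control plane, advertising this node's own IP
kubeadm init --apiserver-advertise-address $(hostname -i)

# Install a pod network add-on (Weave Net, as suggested by the PWK banner)
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"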

When you execute this script, as part of initialization, kubeadm writes several configuration files, sets up RBAC, and deploys the Kubernetes control plane components (kube-apiserver, kube-dns, kube-proxy, etcd, etc.). The control plane components are deployed as Docker containers.

Copy the kubeadm join command printed at the end of the output and save it for the next step. This command will be used to join the other nodes to your cluster.

Adding Worker Nodes

Click on "Add New Instance" to add a new worker node, then run the saved kubeadm join command on it.
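For reference, the join command has roughly this shape; the IP, token, and hash below are placeholders, so use the exact command printed by kubeadm init on node1.

kubeadm join <node1-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>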

Checking the Cluster Status

[node1 ~]$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
node1     Ready      master    18m       v1.11.3
node2     Ready      <none>    4m        v1.11.3
node3     Ready      <none>    39s       v1.11.3
node4     NotReady   <none>    22s       v1.11.3
node5     NotReady   <none>    4s        v1.11.3
[node1 ~]$
[node1 ]$ kubectl get po
No resources found.
[node1 ]$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1h
[node1]$

 

Verifying the running Pods


 

[node1 ~]$ kubectl get nodes -o json |
>       jq ".items[] | {name:.metadata.name} + .status.capacity"

{
  "name": "node1",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node2",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node3",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node4",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
{
  "name": "node5",
  "cpu": "8",
  "ephemeral-storage": "10Gi",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "32929612Ki",
  "pods": "110"
}
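The capacity output above describes the nodes. To verify the control plane pods themselves (deployed by kubeadm into the kube-system namespace), a quick listing across all namespaces works:

kubectl get pods --all-namespaces -o wide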

Installing OpenSSL

[node1 ~]$ yum install -y openssl

Installing Helm

$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
[node1 ~]$ sh get_helm.sh
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.11.0-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
get_helm.sh: line 177: which: command not found
Run 'helm init' to configure helm.
[node1 ~]$ helm init
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming
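Before installing any chart, it is worth confirming that Tiller came up inside the cluster. A quick check, using the default Helm v2 label on the tiller-deploy pod (adjust if yours differs), looks like this:

kubectl get pods -n kube-system -l app=helm
helm version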

Installing Prometheus 

Let us try to install Prometheus Stack on top of 5-Node K8s cluster using Helm.

First, you can search for the application stack using the helm search <packagename> option.

[node1 ~]$ helm search prometheus
NAME                                    CHART VERSION   APP VERSION     DESCRIPTION
stable/prometheus                       7.3.4           2.4.3           Prometheus is a monitoring system and time series database.
stable/prometheus-adapter               v0.2.0          v0.2.1          A Helm chart for k8s prometheus adapter
stable/prometheus-blackbox-exporter     0.1.3           0.12.0          Prometheus Blackbox Exporter
stable/prometheus-cloudwatch-exporter   0.2.1           0.5.0           A Helm chart for prometheus cloudwatch-exporter
stable/prometheus-couchdb-exporter      0.1.0           1.0             A Helm chart to export the metrics from couchdb in Promet...
stable/prometheus-mysql-exporter        0.2.1           v0.11.0         A Helm chart for prometheus mysql exporter with cloudsqlp...
stable/prometheus-node-exporter         0.5.0           0.16.0          A Helm chart for prometheus node-exporter
stable/prometheus-operator              0.1.7           0.24.0          Provides easy monitoring definitions for Kubernetes servi...
stable/prometheus-postgres-exporter     0.5.0           0.4.6           A Helm chart for prometheus postgres-exporter
stable/prometheus-pushgateway           0.1.3           0.6.0           A Helm chart for prometheus pushgateway
stable/prometheus-rabbitmq-exporter     0.1.4           v0.28.0         Rabbitmq metrics exporter for prometheus
stable/prometheus-redis-exporter        0.3.2           0.21.1          Prometheus exporter for Redis metrics
stable/prometheus-to-sd                 0.1.1           0.2.2           Scrape metrics stored in prometheus format and push them ...
stable/elasticsearch-exporter           0.4.0           1.0.2           Elasticsearch stats exporter for Prometheus
stable/karma                            1.1.2           v0.14           A Helm chart for Karma - an UI for Prometheus Alertmanager
stable/stackdriver-exporter             0.0.4           0.5.1           Stackdriver exporter for Prometheus
stable/weave-cloud                      0.3.0           1.1.0           Weave Cloud is a add-on to Kubernetes which provides Cont...
stable/kube-state-metrics               0.9.0           1.4.0           Install kube-state-metrics to generate and expose cluster...
stable/mariadb                          5.2.2           10.1.36         Fast, reliable, scalable, and easy to use open-source rel...
[node1 ~]$

Update the Repo

[node1 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈

Installing Prometheus

$ helm install stable/prometheus

Error: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:default" cannot get namespaces in the namespace "default"

How to fix?

To fix this issue, create a service account for Tiller, grant it the cluster-admin role, and re-initialize Helm with that service account:

kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
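If you want to double-check that the fix took effect, the following commands (using the names created above) should show the service account, the cluster role binding, and Tiller now running under that account:

kubectl get serviceaccount tiller -n kube-system
kubectl get clusterrolebinding tiller
kubectl get deploy tiller-deploy -n kube-system -o jsonpath='{.spec.template.spec.serviceAccountName}'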

Listing Helm Releases

[node1 ~]$ helm list
NAME            REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
excited-elk     1               Sun Oct 28 10:00:02 2018        DEPLOYED        prometheus-7.3.4        2.4.3           default
[node1 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
[node1 ~]$ helm install stable/prometheus
NAME:   excited-elk
LAST DEPLOYED: Sun Oct 28 10:00:02 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/DaemonSet
NAME                                  AGE
excited-elk-prometheus-node-exporter  1s

==> v1/Pod(related)

NAME                                                        READY  STATUS             RESTARTS  AGE
excited-elk-prometheus-node-exporter-7bjqc                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-gbcd7                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-tk56q                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-node-exporter-tkk9b                  0/1    ContainerCreating  0         1s
excited-elk-prometheus-alertmanager-68f4f57c97-wrfjz        0/2    Pending            0         1s
excited-elk-prometheus-kube-state-metrics-858d44dfdc-vt4wj  0/1    ContainerCreating  0         1s
excited-elk-prometheus-pushgateway-58bfd54d6d-m4n69         0/1    ContainerCreating  0         1s
excited-elk-prometheus-server-5958586794-b97xn              0/2    Pending            0         1s

==> v1/ConfigMap

NAME                                 AGE
excited-elk-prometheus-alertmanager  1s
excited-elk-prometheus-server        1s

==> v1/ServiceAccount
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-node-exporter       1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s

==> v1beta1/ClusterRole
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-server              1s

==> v1beta1/Deployment
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s

==> v1/PersistentVolumeClaim
excited-elk-prometheus-alertmanager  1s
excited-elk-prometheus-server        1s

==> v1beta1/ClusterRoleBinding
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-server              1s

==> v1/Service
excited-elk-prometheus-alertmanager        1s
excited-elk-prometheus-kube-state-metrics  1s
excited-elk-prometheus-node-exporter       1s
excited-elk-prometheus-pushgateway         1s
excited-elk-prometheus-server              1s


NOTES:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
excited-elk-prometheus-server.default.svc.cluster.local


Get the Prometheus server URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9090


The Prometheus alertmanager can be accessed via port 80 on the following DNS name from within your cluster:
excited-elk-prometheus-alertmanager.default.svc.cluster.local


Get the Alertmanager URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=alertmanager" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9093


The Prometheus PushGateway can be accessed via port 9091 on the following DNS name from within your cluster:
excited-elk-prometheus-pushgateway.default.svc.cluster.local


Get the PushGateway URL by running these commands in the same shell:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=pushgateway" -o jsonpath="{.items[0].metadata.name}")
  kubectl --namespace default port-forward $POD_NAME 9091

For more information on running Prometheus, visit:
https://prometheus.io/
[node1 ~]$ kubectl get all
NAME                                                             READY     STATUS    RESTARTS   AGE
pod/excited-elk-prometheus-alertmanager-68f4f57c97-wrfjz         0/2       Pending   0          3m
pod/excited-elk-prometheus-kube-state-metrics-858d44dfdc-vt4wj   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-7bjqc                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-gbcd7                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-tk56q                   1/1       Running   0          3m
pod/excited-elk-prometheus-node-exporter-tkk9b                   1/1       Running   0          3m
pod/excited-elk-prometheus-pushgateway-58bfd54d6d-m4n69          1/1       Running   0          3m
pod/excited-elk-prometheus-server-5958586794-b97xn               0/2       Pending   0          3m

NAME                                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/excited-elk-prometheus-alertmanager         ClusterIP   10.106.159.46   <none>        80/TCP     3m
service/excited-elk-prometheus-kube-state-metrics   ClusterIP   None            <none>        80/TCP     3m
service/excited-elk-prometheus-node-exporter        ClusterIP   None            <none>        9100/TCP   3m
service/excited-elk-prometheus-pushgateway          ClusterIP   10.106.88.15    <none>        9091/TCP   3m
service/excited-elk-prometheus-server               ClusterIP   10.107.15.64    <none>        80/TCP     3m
service/kubernetes                                  ClusterIP   10.96.0.1       <none>        443/TCP    37m

NAME                                                  DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/excited-elk-prometheus-node-exporter   4         4         4         4            4           <none>          3m

NAME                                                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/excited-elk-prometheus-alertmanager         1         1         1            0           3m
deployment.apps/excited-elk-prometheus-kube-state-metrics   1         1         1            1           3m
deployment.apps/excited-elk-prometheus-pushgateway          1         1         1            1           3m
deployment.apps/excited-elk-prometheus-server               1         1         1            0           3m

NAME                                                                   DESIRED   CURRENT   READY     AGE
replicaset.apps/excited-elk-prometheus-alertmanager-68f4f57c97         1         1         0         3m
replicaset.apps/excited-elk-prometheus-kube-state-metrics-858d44dfdc   1         1         1         3m
replicaset.apps/excited-elk-prometheus-pushgateway-58bfd54d6d          1         1         1         3m
replicaset.apps/excited-elk-prometheus-server-5958586794               1         1         0         3m
[node1 ~]$
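The prometheus-server service is created as ClusterIP, so it is not directly reachable from outside the cluster. Besides the port-forward commands shown in the chart notes, one simple way to reach the UI on the playground is to switch the service type to NodePort and then click the assigned port at the top of the page. A minimal sketch, assuming the release name excited-elk from the output above:

kubectl patch svc excited-elk-prometheus-server -p '{"spec": {"type": "NodePort"}}'
kubectl get svc excited-elk-prometheus-server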

Wait for a few minutes, then access the Prometheus UI at http://<external-ip>:9090 (using the port-forward commands from the chart notes above, or the NodePort exposed in the previous step). In the upcoming blog series, I will bring more interesting stuff around Helm on the Play with Kubernetes playground.


Kubernetes Hands-on Lab #3 – Deploy Istio Mesh on K8s Cluster

Estimated Reading Time: 8 minutes

 

Istio is a completely open source service mesh that layers transparently onto existing distributed applications. Istio v1.0 was announced last month and is ready for production. It is written entirely in Go and is a fully grown platform that provides APIs to integrate with any logging, telemetry, or policy system, while adding only a very tiny overhead to your system. It is hosted on GitHub under this link. Istio's diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

Istio is composed of these components:

  • Envoy – Sidecar proxies per microservice to handle ingress/egress traffic between services in the cluster and from a service to external services. The proxies form a secure microservice mesh providing a rich set of functions like discovery, rich layer-7 routing, circuit breakers, policy enforcement and telemetry recording/reporting functions.

    Note: The service mesh is not an overlay network. It simplifies and enhances how microservices in an application talk to each other over the network provided by the underlying platform.

  • Mixer – Central component that is leveraged by the proxies and microservices to enforce policies such as authorization, rate limits, quotas, authentication, request tracing and telemetry collection.
  • Pilot – A component responsible for configuring the proxies at runtime.
  • Citadel – A centralized component responsible for certificate issuance and rotation.
  • Node Agent – A per-node component responsible for certificate issuance and rotation.
  • Galley – Central component for validating, ingesting, aggregating, transforming and distributing config within Istio.

What benefits does Istio bring?

Figure: The sidecar intercepts all the network traffic

  • Istio lets you connect, secure, control, and observe services.
  • It helps to reduce the complexity of service deployments and eases the strain on your development teams.
  • It provides developers and DevOps fine-grained visibility and control over traffic without requiring any changes to application code.
  • It provides CIOs with the necessary tools needed to help enforce security and compliance requirements across the enterprise.
  • It provides behavioral insights & operational control over the service mesh as a whole.
  • Istio makes it easy to create a network of deployed services with automatic Load Balancing for HTTP, gRPC, Web Socket & TCP Traffic.
  • It provides fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
  • It enables a pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  • Istio provides automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • It provides secure service-to-service communication in a cluster with strong identity-based authentication and authorization.

Istio currently supports Kubernetes. In this blog post, I will showcase how one can bring up Istio on the Play with Kubernetes Platform.

Open https://labs.play-with-k8s.com/ to access Kubernetes Playground.
Click on the Login button to authenticate with Docker Hub or GitHub ID.

Once you start the session, you will have your own lab environment.

Adding First Kubernetes Node

Click on “Add New Instance” on the left to build your first Kubernetes Cluster node. It automatically names it as “node1”. Each instance has Docker Community Edition (CE) and Kubeadm already pre-installed. This node will be treated as the master node for our cluster.

Bootstrapping the Master Node

You can bootstrap the Kubernetes cluster by initializing the master (node1) node with the same bootstrap.sh script shown in the previous lab. Copy the script content into a bootstrap.sh file and make it executable using the "chmod +x bootstrap.sh" command.

When you execute this script, as part of initialization, kubeadm writes several configuration files, sets up RBAC, and deploys the Kubernetes control plane components (kube-apiserver, kube-dns, kube-proxy, etcd, etc.). The control plane components are deployed as Docker containers.

Copy the kubeadm join command printed at the end of the output and save it for the next step. This command will be used to join the other nodes to your cluster.

Adding Worker Nodes

Click on "Add New Instance" to add a new worker node, then run the saved kubeadm join command on it.

Checking the Cluster Status

 

 

Verifying the running Pods


Installing Istio 1.0.0

Istio is deployed in a separate Kubernetes namespace, istio-system; we will verify this later. For now, copy the content below into a file called install_istio.sh, make it executable, and run it to install Istio and the related tools.

#!/bin/bash
# Download the latest Istio release (this lab targets 1.0.0)
curl -L https://git.io/getLatestIstio | sh -
cd istio-1.0.0
# Put istioctl on the PATH
export PATH=$PWD/bin:$PATH
# Install the Istio CRDs first, then the demo profile
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
kubectl apply -f install/kubernetes/istio-demo.yaml
# Verify the istio-system components
kubectl get svc -n istio-system
kubectl get pods -n istio-system
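Then make the script executable and run it, as described above:

chmod +x install_istio.sh
./install_istio.sh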

You should see the screen flooded with output.

 

 

As shown above, it enables Prometheus, ServiceGraph, Jaeger, Grafana, and Zipkin by default.

Please note: while executing this script, it might end with the below error message.

unable to recognize "install/kubernetes/istio-demo.yaml": no matches for admissionregistration.k8s.io/, Kind=MutatingWebhookConfiguration

This error message is expected.

As soon as the command gets executed completely, you should be able to see a long list of ports which gets displayed at the top center of the page.

Verifying the Services


Exposing the Services

To expose the Prometheus, Grafana & Servicegraph services, you will need to delete the existing services and recreate them with type NodePort instead of ClusterIP, so that they can be accessed via the ports displayed at the top of the instance page (as shown below).
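A minimal sketch of this for Grafana and Prometheus is shown here; the node ports 30004 and 30003 match the ones referenced below, and the service ports (3000 for Grafana, 9090 for Prometheus) are the istio-demo defaults, so adjust them if your deployment differs.

kubectl -n istio-system delete svc grafana
kubectl -n istio-system create service nodeport grafana --tcp=3000 --node-port=30004
kubectl -n istio-system delete svc prometheus
kubectl -n istio-system create service nodeport prometheus --tcp=9090 --node-port=30003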

You should be able to access the Grafana page by clicking on the "30004" port and the Prometheus page by clicking on "30003".

You can check Prometheus metrics by selecting the necessary option as shown below:

Under Grafana Page, you can add “Data Source” for Prometheus and ensure that the dashboard is up and running:

Congratulations! You have installed Istio on Kubernetes cluster. Below listed services have been installed on K8s playground:

  • Istio Controllers and related RBAC rules
  • Istio Custom Resource Definitions
  • Prometheus and Grafana for Monitoring
  • Jaeger for Distributed Tracing
  • Istio Sidecar Injector (we’ll take a look in the next section)

Installing Istioctl

Istioctl is the configuration command-line utility of Istio. It helps to create, list, modify, and delete configuration resources in the Istio system.
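There is no separate installer on this setup: istioctl ships in the bin/ directory of the Istio release downloaded earlier, which install_istio.sh already added to the PATH. If you opened a new shell, re-export the PATH and confirm the binary works (run from inside the istio-1.0.0 directory):

export PATH=$PWD/bin:$PATH
istioctl version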

Deploying the Sample BookInfo Application

Now that Istio is installed and verified, you can deploy one of the sample applications provided with the installation: BookInfo. This is a simple mock bookstore application made up of four services that provide a web product page, book details, reviews (with several versions of the review service), and ratings, all managed using Istio.

Deploying BookInfo Services

 

[node1 istio-1.0.0]$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service "details" created
deployment "details-v1" created
service "ratings" created
deployment "ratings-v1" created
service "reviews" created
deployment "reviews-v1" created
deployment "reviews-v2" created
deployment "reviews-v3" created
service "productpage" created
deployment "productpage-v1" created

[node1 istio-1.0.0]$ istioctl create -f samples/bookinfo/networking/bookinfo-gateway.yaml
Created config gateway/default/bookinfo-gateway at revision 13436
Created config virtual-service/default/bookinfo at revision 13438

 

 

Verifying BookInfo Application

[node1 istio-1.0.0]$ kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.97.29.111     <none>        9080/TCP   1m
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    1h
productpage   ClusterIP   10.106.144.171   <none>        9080/TCP   1m
ratings       ClusterIP   10.111.164.221   <none>        9080/TCP   1m
reviews       ClusterIP   10.99.195.21     <none>        9080/TCP   1m

[node1 istio-1.0.0]$ curl 10.106.144.171:9080
<!DOCTYPE html>
<html>
<head>
<title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">

<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap.min.css">

<!-- Optional theme -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/css/bootstrap-theme.min.css">

</head>
<body>

<p><h3>Hello! This is a simple bookstore application consisting of three services as shown below</h3></p>

<table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><td>details</td></tr><tr><th>name</th><td>http://details:9080</td></tr><tr><th>children</th><td><table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><th>name</th><th>children</th></tr><tr><td>details</td><td>http://details:9080</td><td></td></tr><tr><td>reviews</td><td>http://reviews:9080</td><td><table class="table table-condensed table-bordered table-hover"><tr><th>endpoint</th><th>name</th><th>children</th></tr><tr><td>ratings</td><td>http://ratings:9080</td><td></td></tr></table></td></tr></table></td></tr></table>

<p><h4>Click on one of the links below to auto generate a request to the backend as a real user or a tester
</h4></p>
<p><a href="/productpage?u=normal">Normal user</a></p>
<p><a href="/productpage?u=test">Test user</a></p>

<!-- Latest compiled and minified JavaScript -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>

<!-- Latest compiled and minified JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.5/js/bootstrap.min.js"></script>

</body>
</html>

 

Accessing it via Web URL

[node1 istio-1.0.0]$ kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.97.29.111     <none>        9080/TCP   2m
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    1h
productpage   ClusterIP   10.106.144.171   <none>        9080/TCP   2m
ratings       ClusterIP   10.111.164.221   <none>        9080/TCP   2m
reviews       ClusterIP   10.99.195.21     <none>        9080/TCP   2m
[node1 istio-1.0.0]$ kubectl delete svc productpage
service "productpage" deleted
[node1 istio-1.0.0]$ kubectl create service nodeport productpage --tcp=9080 --node-port=30010
service "productpage" created

 

You should now be able to access the BookInfo sample application as shown below:
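On the playground, clicking the "30010" port link at the top of the page opens the product page. From the command line, a quick sanity check against the NodePort (run from any cluster node) might look like this:

curl -s http://localhost:30010/productpage | grep -o "<title>.*</title>"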

In my next blog post, I will showcase how to bring up Kubernetes Dashboard under Play with Kubernetes Platform.

 

Kubernetes Hands-on Lab #2 – Running Our First Nginx Cluster

Estimated Reading Time: 2 minutes

 

 

Nginx (pronounced “engine-x”) is an open source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server). The nginx project started with a strong focus on high concurrency, high performance and low memory usage. It is licensed under the 2-clause BSD-like license and it runs on Linux, BSD variants, Mac OS X, Solaris, AIX, HP-UX, as well as on other *nix flavors. It also has a proof of concept port for Microsoft Windows.

In my last blog post, I showcased how to build 5-Node Kubernetes cluster. Under this blog post, we will see how to build our first Nginx application on this cluster environment.

Verifying 5-Node K8s Cluster

[node1 ~]$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node1     Ready     master    1h        v1.10.2
node2     Ready     <none>    1h        v1.10.2
node3     Ready     <none>    1h        v1.10.2
node4     Ready     <none>    1h        v1.10.2
node5     Ready     <none>    14m       v1.10.2
[node1 ~]$

Running Nginx with 4 Replicas

kubectl run nginx --image=nginx:latest --replicas=4

Verifying K8s Pods Up and Running

[node1 ~]$ kubectl get po
NAME                     READY     STATUS    RESTARTS   AGE
nginx-5db977d67c-6sdfd   1/1       Running   0          2m
nginx-5db977d67c-jfq9h   1/1       Running   0          2m
nginx-5db977d67c-vs925   1/1       Running   0          2m
nginx-5db977d67c-z5r45   1/1       Running   0          2m
[node1 ~]$

Watch the pods

kubectl get pods -w

Expose the NGINX deployment on port 80:


kubectl expose deploy/nginx --port 80

Testing the Nginx Service


IP=$(kubectl get svc nginx -o go-template --template '{{ .spec.clusterIP }}')

Send a few requests:

[node1 ~]$ curl $IP:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[node1 ~]$

In my next blog post, I will showcase how to build Istio Application on Play with Kubernetes Platform.
