How to set up and run Redis in a Docker Container

Redis stands for REmote DIctionary Server. It is an open source, fast NoSQL database written in ANSI C and optimized for speed. Redis is an in-memory database, which means that all data in Redis is stored in RAM, delivering the fastest possible access times for both read and write requests.

Redis is designed and implemented for performance. It compiles into extremely efficient machine code and requires little overhead. It uses a (mostly) single-threaded event loop model that optimally utilizes the CPU core it is running on. The data structures used internally by Redis are implemented for maximum performance, and the majority of data operations require constant time and space.

Redis is based on the key-value model: data is stored in and fetched from Redis by key. Key-based access allows for extremely efficient access times, and this model maps naturally to caching, with Redis providing the customary GET and SET semantics for interacting with the data. Redis also supports multi-key operations: several of its commands operate on multiple keys. Multi-key operations provide better overall performance than performing the operations one after the other, because they require substantially less communication and administration.
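
For instance, MSET and MGET are the multi-key counterparts of SET and GET; the key names and values below are only illustrative:

mset title "Redis" type "cache"
mget title type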

Redis is designed to be a remote, network-attached server. It uses a lightweight TCP protocol that has client implementations in every conceivable programming language. Redis is basically an open source, in-memory data structure store that can be used as a cache, primary database and message broker. It is a multi-model database that supports search, graph, real-time streaming, analytics and many other use cases beyond those of a simple data store. With over 52,000 GitHub stars, 20,000 forks and over 500 contributors, Redis is quite popular among developers. Redis gives developers building applications the tools and information they need in a very efficient manner. Today Redis can be deployed on-premises, across clouds and hybrid environments, as well as on edge devices.

Here’s a quickstart guide to get Redis running in a Docker container:

Ensure that Docker is installed

docker -v

Create a dedicated Docker network

docker network create -d bridge redisnet

Run Redis container

docker run -d -p 6379:6379 --name myredis --network redisnet redis

Install redis-cli

brew install redis-cli

Enter the redis-cli shell

redis-cli

Accessing the keys

  • Create a generic key with set a1 100 and read it back with get a1
set a1 100
get a1

Importing user keys

Let us import a user database (about 6,000 keys). This dataset contains users stored as Redis Hashes.

The user hashes contain the following fields:

user:id : The key of the hash.
first_name : First Name.
last_name : Last name.
email : email address.
gender : Gender (male/female).
ip_address : IP address.
country : Country Name.
country_code : Country Code.
city : City of the user.
longitude : Longitude of the user.
latitude : Latitude of the user.
last_login : EPOC time of the last login.
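
As a rough illustration of the shape of one record (the values here are made up, not taken from the dataset), a user hash can be written and read back like this:

hset user:1001 first_name "Ada" last_name "Lovelace" email "ada@example.com" country "United Kingdom"
hgetall user:1001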

Cloning the repository

git clone https://github.com/redis-developer/redis-datasets
cd redis-datasets/user-database

Importing the user database:

redis-cli -h localhost -p 6379 < ./import_users.redis

Open a new terminal and run the MONITOR command through redis-cli to watch commands as the server processes them

redis-cli -h localhost -p 6379 monitor

Flushing the database (run from inside redis-cli)

flushdb

Cleaning up the container

docker stop myredis
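
If you also want to remove the stopped container and the dedicated network created earlier:

docker rm myredis
docker network rm redisnet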

Top 25 Active Chatrooms for DevOps Engineers like You

As tech stacks become more and more complex and business moves at a fast pace, DevOps teams worldwide are turning to communication tools like Slack and Discord to stay productive. The Collabnix community has come together to list the top active Slack and Discord channels for DevOps engineers like you.

Curated List of Workspaces/Channels (in alphabetical order)

Community | Slack | Discord
All Day DevOps | https://alldaydevops.slack.com (official) | N/A
Azure Stack Blog | https://azurestackblog.slack.com (official) | N/A
CNCF | https://slack.cncf.io/ (official) | N/A
Collabnix | https://launchpass.com/collabnix (Official) | N/A
Devtron | N/A | https://discord.com/invite/jsRG5qx2gp (Official)
Docker | https://dockercommunity.slack.com (Official) | https://discord.gg/CVBzBtdY (Unofficial)
Devlio DevChat | https://devolio-devchat.slack.com (Official) | N/A
DevOps Chat | https://devopschat.slack.com (Official) | N/A
DevOps Engineers | https://devopsengineers.slack.com (Official) | N/A
Datadog | https://chat.datadoghq.com/ (Official) | N/A
Grafana | https://slack.grafana.com/ (official) | N/A
HangOps | https://hangops.slack.com (Official) | N/A
Kubernetes | https://slack.k8s.io/ (official) | N/A
KubeDaily | N/A | https://discord.gg/rEvr7vq (official)
OpenFaas | https://tinyurl.com/openfaas (official) | N/A
Prometheus | https://slack.cncf.io/ | N/A
Portainer | https://tinyurl.com/portainer (Official) | https://discord.com/invite/j8fVken (Official)
Redis | N/A | https://discord.gg/redis (Official)
Rancher | https://slack.rancher.io/ (official) | N/A
Shipa | https://tinyurl.com/shipaio (official) | N/A
SweetOps | https://sweetops.slack.com (official) | N/A
Thanos | #thanos-dev (CNCF) (official) | N/A
Bret Fisher’s Vital DevOps | N/A | https://discord.gg/CXvdcE66vw
The Rawkode Academy | N/A | https://discord.gg/f4FBMcH8
Layer5 | https://layer5io.slack.com (official) | N/A

Unable to see your favorite chatroom? No worries. Just raise a PR and get it listed.

How to build a Sample Album Viewer application using Windows containers

This tutorial walks you through building and running the sample Album Viewer application with Windows containers. The Album Viewer app is an ASP.NET Core application, maintained by Microsoft MVP Rick Strahl. There is a fork at dockersamples/dotnet-album-viewer which uses Docker Windows containers.

Docker isn’t just for new apps built with .NET Core. You can run full .NET Framework apps in Docker Windows containers, with production support in Docker Enterprise. Check out the labs for Modernizing .NET apps with Docker.

Using Docker Compose on Windows

Docker Compose is a great way to develop distributed applications, where all the components run in their own containers. In this lab you’ll use Docker Compose to run a MySQL-compatible database in a container, as the data store for an ASP.NET Core web application running in another container.

Docker Compose is installed with Docker Desktop on Windows 10. If you’ve installed the Docker Engine as a Windows Service instead, you can download the compose command line using PowerShell:

Invoke-WebRequest https://github.com/docker/compose/releases/download/1.23.2/docker-compose-Windows-x86_64.exe -UseBasicParsing -OutFile $env:ProgramFiles\docker\docker-compose.exe

To run the sample application in multiple Docker Windows containers, start by cloning the GitHub dockersamples/dotnet-album-viewer repository:

git clone https://github.com/dockersamples/dotnet-album-viewer.git

The Dockerfile for the application uses Docker multi-stage builds, where the app is compiled inside a container and then packaged into a Docker image. That means you don’t need .NET Core installed on your machine to build and run the app from source:

cd dotnet-album-viewer
docker-compose build

You’ll see a lot of output here. Docker will pull the .NET Core images if you don’t already have them, then it will run dotnet restore and dotnet build inside a container. You will see the usual NuGet and MSBuild output – you don’t need to have the .NET Core SDK installed, because it is part of the Docker image.

When the build completes, run the app with:

docker-compose up -d

Docker starts a database container using TiDB, which is a modern distributed database system compatible with MySQL. When the database is running it starts the application container. The database and application containers are in the same Docker network, so they can reach each other.

The container for the web application publishes port 80 on the host, so you can browse to http://localhost and check out the working site.
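
To double-check that both the database and the application containers are up, you can list the Compose services (run from the project directory):

docker-compose ps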

Organizing Distributed Solutions with Docker Compose

Take a closer look at the docker-compose.yml file. There are two services defined, which are the different components of the app that will run in Docker containers. The first is the MySQL-compatible database:

  db:
    image: dockersamples/tidb:nanoserver-1809
    ports:
      - "3306:4000"

The second service is the ASP.NET Core web application, which uses the custom image you built at the start of the lab:

  app:
    image: dockersamples/dotnet-album-viewer
    build:
      context: .
      dockerfile: docker/app/Dockerfile
    ports:
      - "80:80"
    environment:
      - "Data:Provider=MySQL"
      - "Data:ConnectionString=Server=db;Port=4000;Database=AlbumViewer;User=root;SslMode=None"      
    depends_on:
      - db

The build details capture the path to the Dockerfile. The environment variables are used to configure the app – they override the settings in appsettings.json. This configuration uses MySQL rather than the default SQLite database, and sets the connection string to use the TiDB database container.

The database container has a built-in user called root with no password, and this is the account used by the web application in the connection string.

The app definition also captures the dependency on the database server, and publishes port 80 so any traffic coming into the host gets directed by Docker into the container.

Packaging ASP.NET Core apps in Docker

How can you compile and run this app without .NET Core installed? Docker compiles and runs the app using containers. The tasks are in the Dockerfile, which captures the app dependencies so the only pre-requisite you need is Docker. The first stage in the Dockerfile publishes the app:

FROM microsoft/dotnet:2.1-sdk-nanoserver-1809 AS builder

WORKDIR /album-viewer
COPY AlbumViewerNetCore.sln .
COPY src/AlbumViewerNetCore/AlbumViewerNetCore.csproj src/AlbumViewerNetCore/AlbumViewerNetCore.csproj
COPY src/AlbumViewerBusiness/AlbumViewerBusiness.csproj src/AlbumViewerBusiness/AlbumViewerBusiness.csproj
COPY src/Westwind.Utilities/Westwind.Utilities.csproj src/Westwind.Utilities/Westwind.Utilities.csproj
RUN dotnet restore src/AlbumViewerNetCore/AlbumViewerNetCore.csproj

COPY src src
RUN dotnet publish .\src\AlbumViewerNetCore\AlbumViewerNetCore.csproj

This uses Microsoft’s .NET Core Docker image as the base in the FROM instruction. It uses a specific version of the image, with the .NET Core 2.1 SDK installed, running on the 1809 release of Microsoft Nano Server. Then the COPY instructions copy the project files and solution files into the image, and the RUN instruction executes dotnet restore to restore packages.

Docker caches parts of the image as it builds them, and this Dockerfile separates out the restore part to take advantage of that. Unless the solution or project files change, Docker will re-use the image layer with the dependencies already restored, saving time on the dotnet restore operation.

After the restore, the rest of the source code is copied into the image and Docker runs dotnet publish to compile and publish the app.

The final stage in the Dockerfile packages the published application:

# app image
FROM microsoft/dotnet:2.1-aspnetcore-runtime-nanoserver-1809

WORKDIR /album-viewer
COPY --from=builder /album-viewer/src/AlbumViewerNetCore/bin/Debug/netcoreapp2.0/publish/ .
CMD ["dotnet", "AlbumViewerNetCore.dll"]

This uses a different variant of the dotnet base image, which is optimized for running ASP.NET Core apps. It has the .NET Core runtime, but not the SDK, and the ASP.NET core packages are already installed. The COPY instruction copies the published .NET Core app from the previous stage in the Dockerfile (called builder), and the CMD instruction tells Docker how to start the app.

The Dockerfile syntax is simple. You only need to learn a handful of instructions to build production-grade Docker images. Inside the Dockerfile you can use PowerShell to deploy MSIs, update Windows Registry settings, set file permissions and do anything else you need.

Next Steps

This lab walked you through building and running a simple .NET Core web application using Docker Windows containers. Take a look at some more Windows container labs to see how your existing apps can be moved into Docker.

This post was originally published under DockerLabs.

How to efficiently scale Terraform infrastructure?

Terraform infrastructure gets more complex with every new deployment. The code helps you manage the infrastructure uniformly, but as the organization grows, the code itself becomes hard to maintain and scale appropriately. In this article, I will show how you can grow and manage a Terraform workflow at scale.

Scalability has various dimensions. You may want to ship code quickly, your priority could be to pivot resources, or you might be looking to tighten security. There are multiple approaches to handling scalability effectively, but we will analyze the most easily accessible solutions:

  1. Run Terraform locally
  2. Integrate into Homegrown Automation
  3. Open Source Solutions
  4. Managed Solutions

#1 Run Terraform Locally

You can download the Terraform binary directly to the machine on which you are developing code. Having Terraform on the same machine makes it quick to deploy resources. Direct access to the target provider also makes it easier to perform operations such as state import, move and remove, compared to setups without direct access. You can use all the features and tools, e.g. remote state and state locking, with the team as you scale Terraform from a local machine. There are various tools for different tasks; for example, if you want to keep code quality intact without setting up continuous integration, you can use pre-commit-terraform. Though you can use all the features of Terraform right from your machine, this approach comes with several drawbacks.

Running Terraform locally means each action is performed manually by a developer, which can cause issues due to human error. It also means wasting time applying changes, as many people will apply their changes to the code simultaneously, and it is difficult to apply another person’s changes until the codebase is updated with the previous person’s changes. A bottleneck will form in code testing. Besides delays, this approach opens up several security vulnerabilities. Every person on the team needs access to the provider, which could compromise the environment. Because Terraform has to be allowed to create, change and destroy resources, it is difficult to restrict the developers’ permissions in this model.

And with how Terraform state works, every person needs access to the Terraform state to work with the codebase. This allows team members to run a terraform state pull to access all the secrets if they wish, even if the data is in Vault. Running Terraform locally could be a good option if you are a one-person team and need to provision resources quickly.
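
For reference, a minimal sketch of the local workflow described above (the plan file name is just an example):

terraform init                      # download providers and configure the backend
terraform plan -out=tfplan          # review the proposed changes
terraform apply tfplan              # apply the reviewed plan
terraform state pull > state.json   # anyone with state access can dump it, secrets included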

#2 Integrate into Homegrown Automation

If you already have an in-house CI/CD system running, you can scale the entire structure by integrating Terraform into it. This way, there is no need to give privileged access to developers: they can have read-only permissions while the required access is given to the execution layer. In this model you can also track all the changes, as the system logs everything in the CI/CD pipeline, and you can access the records at any time, in real time.

Other pipeline processes, such as linting, coding standards, compliance checks and unit tests, can be configured and surfaced as pull request status checks. Though secure, this approach also limits collaboration. Developers cannot execute concurrent pull requests due to state locking: when one pull request triggers the pipeline, the subsequent run fails because the state is locked by the run that holds the write permissions. A simple solution is to configure queued runs, but not all CI/CD products support queuing.

Moreover, building and managing an entire pipeline takes more resources. Several processes are essential to an efficient workflow (a rough sketch of a few of them follows this list):

  • Planning on pull requests
  • Linting
  • Unit tests
  • Compliance checks
  • Applying once merged
  • Periodical drift detection
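
A minimal sketch of what a few of these steps could look like as plain shell commands in a pipeline (the choice of tflint is an assumption, any linter would do):

terraform fmt -check -recursive     # linting and coding standards
terraform validate                  # basic configuration checks
tflint                              # additional linting, assuming it is installed
terraform plan -out=tfplan          # planning on pull requests
terraform apply tfplan              # applying once merged
terraform plan -detailed-exitcode   # periodic drift detection; exit code 2 means drift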

It would be an additional load on the team to maintain all of this in-house with all the configurations. Having an in-house automation solution looks tempting, but soon you will face various problems.

Nevertheless, it is a good option if you are highly concerned about security. Everything stays in your control, and you can employ multiple security arrangements.

#3 Open Source Solutions

One reason for Terraform’s popularity is its wide open-source ecosystem. While scaling, you can utilize these open source tools to add extra features to your Terraform infrastructure.

Several popular tools are:

  • Terraformer: A CLI tool to create Terraform files from the existing infrastructure. Let’s say your Infrastructure is working excellently, and you want to save a Terraform file for the current state. Terraformer is the tool for you.
  • Terratest: It’s a Go library to write automated tests for the Infrastructure code. Packed with multiple helper functions and patterns for everyday infrastructure testing tasks, you can use it with Terraform to quickly write tests.
  • Terragrunt: Terragrunt is a thin wrapper for Terraform that provides extra tools for keeping your Terraform configurations DRY, working with multiple Terraform modules, and managing remote state.
  • Atlantis: Atlantis enables you to automate your Terraform via pull requests.
  • Terrascan: Scan compliance and security in your IAC to prevent any violation risk before provisioning cloud-native infrastructure. It offers flexibility to run locally or integrate with your CICD.
  • Driftctl: As the code becomes complex, tracking any change in infrastructure state becomes critical, and that’s where Driftctl comes in. It detects, tracks and alerts on infrastructure drift.
  • Terraform Switcher: Quickly download and switch between Terraform versions from the command line.
  • Infracost: Cloud costs can go out of hand quickly if they are not appropriately managed. Infracost is a handy tool for DevOps and SRE to see the cloud cost estimation and compare different options upfront.
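
As a rough illustration of how a few of these tools are invoked (flags and paths are only examples taken from the projects’ documentation):

terrascan scan                    # scan the IaC in the current directory for policy violations
driftctl scan                     # compare real infrastructure against the Terraform state
tfswitch                          # interactively pick and install a Terraform version
infracost breakdown --path .      # estimate the monthly cost of the configuration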

There are over 5,000 open source Terraform projects covering a variety of requirements, be it linting, managing environments, security tooling, test automation, managing cost and so on. If you go completely open source, there is community support, forums and, most importantly, tools to help you scale your infrastructure efficiently. However, you may need quick support in case of emergencies or errors, which may not be possible with open source alone. That’s where managed solution providers come in, with their fantastic support and intuitive platforms.

#4 Managed Solutions

A managed solution often comes with a specialized management platform with multiple tools that fill the collaboration, security, speed and reliability gaps. With the stability and powerful features IaC platforms provide, maintaining the nuances of Terraform infrastructure becomes relatively easy.

The two most popular platforms are:

  • Terraform Cloud
  • Spacelift

Terraform Cloud

Terraform Cloud is provided by HashiCorp – the company behind Terraform itself. With the freemium pricing approach, you can choose the plan your organization needs.

First of all, Terraform Cloud solves the state file issue. You can run Terraform on your local machine, but it saves and retrieves the state file from Terraform Cloud. Basically, Terraform Cloud manages the state file for you.

Due to this, state files can have more security, and the team can collaborate much better. In addition, you can grant access to selected users and have granular control over who accesses the state file and more.

Also, it keeps all the versions of the state file and keeps track of all the changes applied.

Having state files secure and safe is critical to efficiently scale Terraform infrastructure, as the entire team will look up to it to manage the workflow. As the team and workload grow, collaboration becomes critical for the sustainable development of any infrastructure.

With a cloud SaaS solution like Terraform Cloud, with its centralized resources, help, history of changes, controlled access and support, the team can focus on delivering the code.

Spacelift

Spacelift is a highly flexible platform that integrates with Terraform and other Infrastructure as Code tools. It uses open source technologies to enable flexibility and customization in Infrastructure management.

For example, it uses Open Policy Agent, an open-source policy engine that other products integrating with Terraform (Kubernetes, Kafka, etc.) also incorporate. Using a familiar tool means the focus can be on enforcing compliance instead of learning a new syntax.

One fantastic feature Spacelift has that Terraform Cloud lacks is the ability to schedule periodic drift detection on any stack. As the infrastructure grows, detecting whether a stack’s actual configuration differs, or has drifted, from its expected configuration becomes necessary to prevent deviations. Of course, you can use Driftctl to track drift, but the capability is built into Spacelift. And because Terraform Cloud manages the state file, you have to access the provider first to import something directly from the state file; with Spacelift, you do not need additional permissions to run any command.

Spacelift is similar to Terraform Cloud but has slightly more features and tools to manage Terraform efficiently.

Which one to pick?

You can efficiently scale Terraform with:

  • Running Terraform locally: if your product demands quick changes and you have a one-person or small team. As the team grows, collaboration becomes difficult.
  • Homegrown automation: highly effective if you want total control and access over Terraform and your infrastructure, but setup and maintenance require resources.
  • Open source solutions: to bring the flexibility and configurability of open source tooling connected with Terraform. However, updates, upgrades, security and errors have to be managed appropriately.
  • Managed solutions: for organizations that want a reliable, plug-and-play Terraform solution and support.

Conclusion

While Terraform makes it easier to manage infrastructure, there are also methods to manage growing Terraform requirements effectively.

In this article, I suggested four methods. However, which one you choose depends on your organization’s requirements. So, analyze your technical requirements and the focus of your scalability needs, and pick the suitable method. If there is any question or doubt, please leave it in the comment section.

Author:

Jacob Schulz

DevOps Community Manager at Spacelift

spacelift.io

Jacob is a DevOps Engineer based in Berlin currently working as DevOps Community Manager at Spacelift. He has worked with cloud and DevOps technologies for the last four years. He is passionate about DevOps, cloud industry and community building. In his free time he enjoys hiking, cycling, and biography books.

What are Kubernetes ReplicaSets? – KubeLabs Glossary

How can you ensure that 3 Pod instances are always available and running at any point in time?

ReplicaSets are Kubernetes controllers that are used to maintain the number and running state of pods. They use labels to select the pods they should manage. A pod must be labeled with a label matching the ReplicaSet’s selector, and it must not already be owned by another controller, for the ReplicaSet to acquire it. Pods can be isolated from a ReplicaSet by simply changing their labels so that they no longer match the ReplicaSet’s selector. ReplicaSets can be deleted with or without deleting their dependent pods.

You can easily control the number of replicas (pods) the ReplicaSet should maintain through the command line or by directly editing the ReplicaSet configuration on the fly. You can also configure the ReplicaSet to autoscale based on the amount of CPU load the node is experiencing. You may have read about ReplicationControllers in older Kubernetes documentation, articles or books. ReplicaSets are the successors of ReplicationControllers and are recommended instead, as they provide more features.

A Kubernetes pod serves as a deployment unit for the cluster. It may contain one or more containers. However, containers (and accordingly, pods) are short-lived entities. A container hosting a PHP application, for example, may experience an unhandled code exception causing the process to fail, effectively crashing the container. Of course, the perfect solution for such a case is to refactor the code to properly handle exceptions. But until that happens, we need to keep the application running and the business going. In other words, we need to restart the pod whenever it fails, while in parallel developers monitor, investigate and fix any errors that make it crash. At some point, a new version of the pod is deployed, monitored and maintained. It’s an ongoing process that is part of the DevOps practice. Another requirement is to keep a predefined number of pods running: if more pods are up, the additional ones are terminated; similarly, if one or more pods fail, new pods are started until the desired count is reached.

A Kubernetes ReplicaSet resource was designed to address both of these requirements. It creates and maintains a specific number of similar pods (replicas). In this lab, we’ll discuss how to define a ReplicaSet and the different options that can be used to fine-tune it.

How Does ReplicaSet Manage Pods?

  • In order for a ReplicaSet to work, it needs to know which pods it will manage so that it can restart the failing ones or kill the unneeded ones.
  • It also needs to know how to create new pods from scratch in case it needs to spawn new ones.
  • A ReplicaSet uses labels to match the pods that it will manage. It also needs to check whether the target pod is already managed by another controller (like a Deployment or another ReplicaSet). So, for example, if we need our ReplicaSet to manage all pods with the label role=webserver, the controller will search for any pod with that label. It will also examine the ownerReferences field of the pod’s metadata to determine whether or not the pod is already owned by another controller. If it isn’t, the ReplicaSet will start controlling it. Subsequently, the ownerReferences field of the target pods will be updated to reflect the new owner’s data.

To be able to create new pods if necessary, the ReplicaSet definition includes a template part containing the definition for new pods.

Creating Your First ReplicaSet

git clone https://github.com/collabnix/dockerlabs
cd dockerlabs/kubernetes/workshop/replicaset101
kubectl apply -f nginx_replicaset.yaml
kubectl get rs
NAME   DESIRED   CURRENT   READY   AGE
web    4         4         4       2m

A Peep into the ReplicaSet definition file

Let’s examine the definition file that was used to create our ReplicaSet:

  • The apiVersion for this object is currently apps/v1
  • The kind of this object is ReplicaSet
  • In the metadata part, we define the name by which we can refer to this ReplicaSet. We also define a number of labels through which we can identify it.
  • The spec part is mandatory in the ReplicaSet object. It defines:
    • The number of replicas this controller should maintain. It defaults to 1 if not specified.
    • The selection criteria by which the ReplicaSet will choose its pods. Be careful not to use a label that is already in use by another controller. Otherwise, another ReplicaSet may acquire the pod(s) first. Also notice that the labels defined in the pod template (spec.template.metadata.labels) cannot be different from those defined in the matchLabels part (spec.selector.matchLabels).
    • The pod template is used to create (or recreate) new pods. It has its own metadata, and spec where the containers are specified. You can refer to our article for more information about pods.
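
For reference, a hand-written equivalent of the definition described above would look roughly like this (the file name and label values are illustrative; the exact file in the repository may differ slightly):

cat > my-replicaset.yaml <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web
  labels:
    role: web
spec:
  replicas: 4
  selector:
    matchLabels:
      role: web
  template:
    metadata:
      labels:
        role: web
    spec:
      containers:
      - name: nginx
        image: nginx
EOF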

Is Our ReplicaSet the Owner of Those Pods?

OK, so we do have four pods running, and our ReplicaSet reports that it is controlling four pods. In a busier environment, you may want to verify that a particular pod is actually managed by this ReplicaSet and not by another controller. By simply querying the pod, you can get this info:

kubectl get pods web-6n9cj -o yaml | grep -A 5 owner

The first part of the command will get all the pod information, which may be too verbose. Using grep with the -A flag (it takes a number and prints that number of lines after the match) will get us the required information as in the example:

ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: web

Removing a Pod From a ReplicaSet

You can remove (not delete) a pod that is managed by a ReplicaSet by simply changing its label. Let’s isolate one of the pods created in our previous example:

kubectl edit pods web-44cjb

Then, once the YAML file is opened, change the pod label to role=isolated or anything different from role=web. In a few moments, run kubectl get pods. You will notice that we have five pods now. That’s because the ReplicaSet dutifully created a new pod to reach the desired number of four pods. The isolated one is still running, but it is no longer managed by the ReplicaSet.
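
The same relabeling can also be done non-interactively in one command (the pod name is the one from this example):

kubectl label pods web-44cjb role=isolated --overwrite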

Scaling the Replicas to 5

[node1 replicaset101]$ kubectl scale --replicas=5 -f nginx_replicaset.yaml

Scaling and Autoscaling ReplicaSets

You can easily change the number of pods a particular ReplicaSet manages in one of two ways:

  • Edit the controller’s configuration by using kubectl edit rs ReplicaSet_name and change the replicas count up or down as you desire.
  • Use kubectl directly. For example, kubectl scale --replicas=2 rs/web. Here, I’m scaling down the ReplicaSet used in the article’s example to manage two pods instead of four. The ReplicaSet will get rid of two pods to maintain the desired count. If you followed the previous section, you may find that the number of running pods is three instead of two, as we isolated one of the pods so it is no longer managed by our ReplicaSet.
kubectl autoscale rs web --max=5

This will use the Horizontal Pod Autoscaler (HPA) with the ReplicaSet to increase the number of pods when the CPU load gets higher, but it will not exceed five pods. When the load decreases, it cannot have fewer than the number of pods specified before (two in our example).
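
You can inspect the autoscaler this command creates; it takes the same name as the ReplicaSet (web in our example):

kubectl get hpa web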

Best Practices

The recommended practice is to always use the ReplicaSet’s template for creating and managing pods. However, because of the way ReplicaSets work, if you create a bare pod (not owned by any controller) with a label that matches the ReplicaSet selector, the controller will automatically adopt it. This has a number of undesirable consequences. Let’s have a quick lab to demonstrate them.

Deploy a pod by using a definition file like the following:

apiVersion: v1
kind: Pod
metadata:
  name: orphan
  labels:
    role: web
spec:
  containers:
  - name: orphan
    image: httpd

It looks a lot like the other pods, but it is using Apache (httpd) instead of Nginx for an image. Using kubectl, we can apply this definition like:

kubectl apply -f orphan.yaml

Give it a few moments for the image to be pulled and the container to spawn, then run kubectl get pods. You should see output that looks like the following:

NAME        READY   STATUS        RESTARTS   AGE
orphan      0/1     Terminating   0          1m
web-6n9cj   1/1     Running       0          25m
web-7kqbm   1/1     Running       0          25m
web-9src7   1/1     Running       0          25m
web-fvxzf   1/1     Running       0          25m

The pod is being terminated by the ReplicaSet because, by adopting it, the controller has more pods than it was configured to handle. So, it is killing the excess one.

Another scenario where the ReplicaSet won’t terminate the bare pod is when the pod gets created before the ReplicaSet does. To demonstrate this case, let’s destroy our ReplicaSet:

kubectl delete -f nginx_replicaset.yaml

Now, let’s create it again (our orphan pod is still running):

kubectl apply -f nginx_replicaset.yaml

Let’s have a look at our pods status by running kubectl get pods. The output should resemble the following:

orphan      1/1     Running   0          29s
web-44cjb   1/1     Running   0          12s
web-hcr9j   1/1     Running   0          12s
web-kc4r9   1/1     Running   0          12s

The situation now is that we have three pods running Nginx and one pod running Apache (the httpd image). As far as the ReplicaSet is concerned, it is handling four pods (the desired number), and their labels match its selector. But what happens if the Apache pod goes down?

Let’s do just that:

kubectl delete pods orphan

Now, let’s see how the ReplicaSet responded to this event:

kubectl get pods

The output should be something like:

NAME        READY   STATUS              RESTARTS   AGE
web-44cjb   1/1     Running             0          24s
web-5kjwx   0/1     ContainerCreating   0          3s
web-hcr9j   1/1     Running             0          24s
web-kc4r9   1/1     Running             0          24s

The ReplicaSet is doing what it was programmed to do: creating a new pod to reach the desired state using the template that was added in its definition. Obviously, it is creating a new Nginx container instead of the Apache one that was deleted.

So, although the ReplicaSet is supposed to maintain the state of the pods it manages, it failed to respawn the Apache web server. It replaced it with an Nginx one.

The bottom line: you should never create a pod with a label that matches the selector of a controller unless its template matches the pod definition. The more-encouraged procedure is to always use a controller like a ReplicaSet or, even better, a Deployment to create and maintain your pods.

Deleting a ReplicaSet

kubectl delete rs ReplicaSet_name

Alternatively, you can also use the file that was used to create the resource (and possibly, other resource definitions as well) to delete all the resources defined in the file as follows:

kubectl delete -f definition_file.yaml

The above commands will delete the ReplicaSet and all the pods that it manages. But sometimes you may want to delete just the ReplicaSet resource, keeping the pods unowned (orphaned). Maybe you want to manually delete the pods and you don’t want the ReplicaSet to restart them. This can be done using the following command:

kubectl delete rs ReplicaSet_name --cascade=false

If you run kubectl get rs now you should see that there are no ReplicaSets there. Yet if you run kubectl get pods, you should see all the pods that were managed by the destroyed ReplicaSet still running.

The only way to get those pods managed by a ReplicaSet again is to create this ReplicaSet with the same selector and pod template as the previous one. If you need a different pod template, you should consider using a Deployment instead, which will handle replacing pods in a controlled way.

Top 6 Docker Security Scanning Practices

When it comes to running containers and using Kubernetes, it’s important to make security just as much of a priority as development. The DevOps approach to development is what brings security and development teams together to create code that’s effective and secure.

Managing container vulnerabilities can be tricky due to how many elements are involved. This can cause delays to delivery dates for applications. However, implementing DevOps to securely create code can alleviate many of these vulnerabilities along the way.

As a result, developers can work more productively on code that works effectively while also being secure. This post covers some of the best Docker security scanning practices to consider during development to keep containers as secure as possible.

Inline Scanning


Inline image scanning can be implemented through your CI/CD pipeline easily and efficiently. Developers can easily manage their privacy, as only the data specifically sent to the scanning tool is ever scanned.


Inline image scanning helps developers discover whether credentials have been included within images by accident. When developers know about these mistakes, they can keep the credentials from getting into the hands of attackers and prevent further damage.

Image Scanning

It’s good practice for developers to properly scan container images before executing them. This ensures that any security risks within the images can be found and fixed before the images are run.

Once developers have tested their code and finished building it, they can push the images to a repository for staging. This allows them to use tools that scan for vulnerabilities and produce reports detailing the severity of each security risk. This is fantastic for allowing developers to prioritize the security risks in order of severity: they can work on the most severe vulnerabilities first and work their way systematically down the list.


If numerous issues are found when checking the results of image scanning, developers may decide to put the project on hold, depending on the severity of the issues discovered.


These tools can be integrated with automated systems that make them much easier and more efficient to use. Developers can run image scanning tools and be notified of issues that need fixing. It’s an effective way to prevent vulnerabilities from becoming a bigger problem, as they’re sorted out before reaching the next stage of development.
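
As a rough illustration (the image name is a placeholder; docker scan is the Snyk-backed scanner bundled with Docker Desktop, and Trivy is a popular open source alternative):

docker scan myapp:latest     # scan a locally built image for known vulnerabilities
trivy image myapp:latest     # or scan the same image with Trivy, if installed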


Preventing Vulnerable Images From Deployment

Blocking vulnerable images in CI/CD pipelines sometimes isn’t enough: they can still make their way into production. As a result, it’s a good idea to have Kubernetes check images before they’re scheduled to be executed. This enables developers to stop images with vulnerabilities, or images that haven’t been scanned at all, from being deployed. Kubernetes admission controllers are a feature within Kubernetes that helps developers define exactly what is permitted to run within a cluster.

As a result, any action that tries to run in a cluster outside the rules you’ve defined is flagged. Admission controllers can stop vulnerabilities from going any further, as long as proper authentication is in place. OPA (Open Policy Agent) is an open-source policy engine that helps automate these decision-making processes: it lets developers write policies for their Kubernetes cluster using information taken directly from the cluster. This can be a more effective way to ensure that vulnerabilities within images are caught more precisely and that developers have more control over what gets approved and what doesn’t.

Registry Image Scanning

It’s good practice for developers to use registries along with image scanning. This helps them scan images before they’re pulled and included in production. As a result, developers already know that any images pulled from their registries have been through vulnerability scans. This makes the whole process of running images securely more efficient.

Scanning 3rd-Party Libraries

Developers often include 3rd-party libraries within their code because it’s an incredibly effective way to finish and deploy projects. However, organizations must be aware that using 3rd-party components can come with a higher risk of vulnerabilities.

Using scanning tools is a must for 3rd-party libraries. They provide information about vulnerabilities within these components, enabling developers to either fix the security risks or find other components to use instead.

Scanning for Errors in Dockerfiles

It’s common for developers to come across misconfigurations within their Dockerfiles, and there are several ways to approach finding them. One of the most common misconfigurations is running the application as a privileged (root) user, which grants it more access to host resources than it needs. Dockerfiles may also contain mistaken commands that leave files more exposed to vulnerabilities than intended. Developers may also want to consider running containers as a non-root user and defining an explicit entrypoint to improve their security posture.

In addition to this, developers should check whether insecure ports have been left open inside containers. Insecure ports that are left open can provide attackers with an entry point into the rest of your system.
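
A quick way to catch many such Dockerfile issues is a dedicated linter; for example, hadolint can be run against a Dockerfile without installing anything locally:

docker run --rm -i hadolint/hadolint < Dockerfile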

Conclusion


Scanning images is becoming a standard part of the development process. It combines the efforts of developers and security teams to help organizations create applications that are secure during every stage.


As a result, developers have an easier time working systematically to discover vulnerabilities and prioritize them in terms of severity. Image scanning is also something that should be integrated throughout the entire project as a continuous process that developers use as checkpoints.


When Docker scanning practices are used correctly, they can save organizations the time and hassle of having to go back and fix security risks later. Developers can work more productively to deliver applications faster and more securely.

Hopefully, the information in this post has provided you with more insight into what some of the best Docker scanning methods involve.

Shipa Vs OpenShift: A Comparative Guide

With the advent of a popular PaaS like Red Hat OpenShift, developers are able to focus on delivering value via their product, not on building IT infrastructure. Red Hat OpenShift is a multifaceted container application platform from Red Hat for the development, deployment and management of applications. OpenShift is a layer on top of Docker and Kubernetes that makes it accessible and easy for developers to create applications, and a platform that is a dream for operators to deploy containers on, for both development and production workloads. Today OpenShift is heavily used by enterprise IT, but let us agree that customizing and building a complex platform and services adds no specific value to your organization, and complexity is intrinsically a killer in and of itself, an exponential risk to your chances of success.

Today you need to enable your developers to self-service through a continuous operation platform, empowering them to deploy and operate their services with minimal Ops intervention. Enabling developers to self-service means treating Ops as a product team. The infrastructure automation, deployment automation, configuration management, logging, monitoring and production tools are all products, and it’s these products that allow teams to fully own their services. This leads to empowerment: products enable ownership. We move away from Ops as masters of production responsible for everything and push that responsibility onto dev teams. They are the experts for their services and are best equipped to deal with problems that arise, while we provide the tools they need to diagnose and resolve those problems on their own.

Continuous Operation for Cloud Native Applications 

Shipa is reinventing how cloud-native applications are delivered across heterogeneous clusters and cloud environments. Its landing pad approach allows platform and DevOps teams to build security, compliance and operational guardrails directly into their infrastructure while enabling developers to focus on what matters: delivering software and changes that add value to the organization. With Shipa, teams can focus on application delivery and governance rather than infrastructure. Shipa implements a clear separation of concerns between developers and platform engineering teams, improving developer experience, governance, monitoring and control. Shipa provides native services to applications that are available right at deployment, such as different databases, queuing, canary-based deployment, deployment rollback and more, allowing developers to focus on application code delivery while platform teams support development speed and experience rather than spending time and effort installing and maintaining different infrastructure components.

Now you might be wondering why the world needs another offering such as Shipa, and how it’s different. In this comparative blog post, I will take a deeper look at how Shipa compares to OpenShift.

Key motivation to become Cloud-Native 

Before we deep dive into the comparative world, let us first try to chart out top capabilities to consider while evaluating an enterprise Kubernetes Platform to become Cloud Native: 

– Developer Productivity 

– A Cluster Agnostic Platform 

– Application Portability 

– Resiliency 

– Multi-Cloud Support 

– Scalability 

– Out of the box Monitoring 

– Out of the box Security

– Seamless 3rd Party Integration 

– OpenAPI & Extension to Edge devices 

– Business Agility 

– Cost Saving 

– Automated Routing & Observability 

Let’s deep dive into each of these capabilities and see how Shipa fits into Cloud-Native world: 

Cluster Agnostic Platform 

The world is moving to cloud-native architectures with multi-vendor, open-cloud hybrid systems. Compared to OpenShift, which locks you into OpenShift-only clusters, Shipa is purely a cluster-agnostic platform. This means that users can attach any Kubernetes cluster to Shipa pools, such as GKE, on-premises clusters, AKS and so on. You can connect Shipa’s landing pads (also known as Pools) to multiple clusters across multiple clouds, such as GKE, AKS, EKS and OKE, to centrally configure policies at the pool level, monitor applications, monitor security, perform deployment rollbacks and more, helping your team deliver on the governance and control goals required by your organization.

Users can install Shipa on any existing Kubernetes cluster (1.10.x or newer). Shipa leverages Helm charts for the install and supports both Helm 2 and Helm 3. Fully automated deployment and an easy UI-driven wizard get Kubernetes clusters running in a few minutes, and you can manage clusters with one-click UI-based upgrades and troubleshooting.

Application Portability 

OpenShift generally offers multi-cluster management for OpenShift clusters only. By comparison, Shipa offers multi-cluster management through Pools. It handles the application lifecycle across multiple clusters and clouds, no matter whether it’s GKE, EKS or AKS, and irrespective of Kubernetes version differences, be it 1.14 or 1.16.

Shipa makes the underlying cluster and infrastructure irrelevant. It means that users can move apps between different clusters flawlessly. It is responsible for moving the app and creating the required application objects, which can help in scenarios such as high-availability, cluster upgrades, multi-cloud and others. 

Shipa uses the concept of a landing pad, called a Pool, to bring application context to your application delivery and management workflow. Pools are where your CI/CD pipeline delivers the application code and where the process starts with Shipa. Once code is received, Shipa pools are responsible for creating application objects, running security scans, attaching policies, producing application metrics and more. A Shipa pool can host nodes from multiple cloud providers and will distribute your application units/containers across multiple clouds and nodes.

Enhanced Out-of-box Monitoring 

In Shipa, monitoring comes out-of-the-box, with the option to also export it to 3rd party tools, such as New Relic and Datadog. Since application objects are created, deployed and monitored automatically, now developers can focus solely on delivering application code rather than worrying about monitoring tools. 

Enhanced Out-of-the-Box Security 

OpenShift requires you to install additional tools to have security in place, and it can still be somewhat complex for the operations team. To harden OpenShift, you need to be aware of sophisticated tools like SELinux, stateful and stateless inspection firewalls, process/network/storage separation, OAuth to authenticate using an identity provider such as LDAP, GitHub or Google, and much more.

Under Shipa, security comes out of the box. Shipa allows security configuration at the pool level, which makes it flexible, especially in scenarios with multi-tenancy and/or services deployed across different pools with different requirements, and because it is native to the pool, no additional tools or setup are necessary.

OpenAPI & Extensible to Edge Devices 

Shipa is capable of using Docker nodes, not only Kubernetes (Shipa nodes), so you can leverage cloud instances such as EC2 as well as extend it to Linux-based edge devices. Shipa’s APIs are open and documented, allowing platform and DevOps teams to integrate Shipa with additional external platforms and to leverage Shipa as the center of their automation strategy. In addition, Shipa has the concept of plugins, which allows DevOps and platform teams to build further automation inside Shipa that can be used at different stages of the application operation lifecycle.

Say “No” to YAML 

Shipa is worth a look if you want a platform that allows you to think about and operate applications rather than infrastructure when delivering services, across any infrastructure. Shipa abstracts away the need for creating and maintaining object YAML files, not only for applications but across the board, such as with volume management. Many of the components on OpenShift still require you to deal with Kubernetes object YAML files as well as custom scripts for actions such as certificate generation and encryption for apps. With governance, guardrails and automated object creation and management in place for applications, developers can focus on writing code that delivers value and services to the organization, not worrying about the infrastructure that runs them.

Since application objects are created, deployed and monitored automatically, developers can focus solely on delivering application code rather than learning Kubernetes-related requirements. And as Shipa integrates into your existing CI/CD workflow, developers do not need to change or learn additional tools, nor create any YAML or volume-related files, improving developer experience and application delivery speed.

Automated Routing & Observability 

Shipa’s DNS capability ensures that users can reach your application from the moment it launches, with no extra effort required, while capabilities such as canary deployments and deployment-based rollback allow platform teams to keep applications available. At application deployment time, Shipa provides automated monitoring and application-related metrics, helping you better understand the status of your applications; these metrics can also be exported to 3rd-party monitoring platforms.

Support to Persistent Application 

Shipa supports connection to all major CSI providers, allowing Platform Engineering teams to make volumes available across different clusters and CSI providers at the pool level, so Developers can just attach these available volumes to their application, without the need to learn or create any yaml or volume related files, improving Developer experience and application delivery speed. 

Conclusion 

In a nutshell, Shipa has readdressed the traditional PaaS model compared to Red Hat OpenShift and added extra value to enterprises’ cloud-native success, especially around the key objectives listed below:

● Even though OpenShift helps a bit on the developer side, it still holds an object/infrastructure context rather than an application context, so you can expect your platform, DevOps and development teams to still be tied to infrastructure and objects rather than the app only.

● With OpenShift you end up locked into an OpenShift-only platform, which becomes expensive in the long run. Additionally, the platform doesn’t allow you to use the different clusters, versions and so on that might be best for your application requirements.

● Shipa leverages and creates the application and its objects inside the cluster, so if in the future you decide to move away from Shipa, your apps will continue working (you will just have to take on the burden of managing everything and every object manually).

● OpenShift upgrades are painful and take time, and if not planned right they can impact your environment and applications.

Hence, Shipa provides native services to applications that are available right at deployment, such as different databases, queuing, canary-based deployment, deployment rollback, out-of-the-box monitoring and security, and more, allowing developers to focus on application code delivery while platform teams support development speed and experience rather than spending time and effort installing and maintaining different infrastructure components.

Top 5 Docker Myths and Facts That You Should be Aware of

Today, every fast-growing business enterprise has to deploy new features of their app rapidly if they really want to survive in this competitive market. Developing apps today requires so much more than writing code. For developers, there is a vast array of complex tooling and a duplicate set of commands and tasks to go from local desktop to cloud-native development. It takes hours and possibly days for the development team to decide on the right cloud environment to meet their requirements and to have that environment successfully set up. Docker simplifies and accelerates your workflow, while giving developers the freedom to innovate with their choice of tools, application stacks, and deployment environments for each project.

With over 396 billion all-time Docker Hub pulls, 16.2 million Docker Desktop downloads and 9 million Docker accounts, Docker is still the most popular container platform among developers. If you search for "Docker" on GitHub, you will find over 20 million code results, 690K repositories and over 14,000 discussions around Docker. It shows how Docker is being used by millions of developers to build, share and run any app, anywhere. As per the latest Stack Overflow 2021 survey, Docker is still the #1 most wanted and #2 most loved developer tool, and it helps millions of developers build, share and run any app, anywhere, on-prem or in the cloud.

Today, all major cloud providers support the Docker platform. For example, AWS and Docker have collaborated to create a simplified developer experience that enables you to deploy and manage containers on Amazon ECS directly using Docker tools. Amazon ECS uses Docker images in task definitions to launch containers as part of tasks in your clusters. This year, Docker announced that all of the Docker Official Images are now available on AWS ECR Public.

The Docker Azure Integration enables developers to use native Docker commands to run applications in Azure Container Instances (ACI) when building cloud-native applications. The new experience provides a tight integration between Docker Desktop and Microsoft Azure allowing developers to quickly run applications using the Docker CLI or VS Code extension, to switch seamlessly from local development to cloud deployment. Nevertheless, technologies and tools available from Docker and its open source project, Moby, have been leveraged by all major data center vendors and cloud providers. Many of these providers are leveraging Docker for their container-native IaaS offerings. Additionally, the leading open source serverless frameworks utilize Docker container technology.

Undoubtedly, Docker today is the de facto standard for most developers when packaging their apps, but as the container market continues to evolve and diversify in terms of standards and implementations, there is growing confusion among enterprise developers about choosing the right container platform for their environment. Fortunately, I am here to help, debunking the top 5 of these modern myths. This blog aims to clear up some commonly held misconceptions about Docker’s capabilities. The truth, as they say, shall set you free and ‘whalified’.

Myth – 1: Docker doesn’t support rootless containers

This myth says that the Docker daemon requires root privileges and hence admins can’t launch containers as a non-privileged user. 

Fact: Rootless mode was introduced in Docker Engine v19.03 as an experimental feature. Rootless mode graduated from experimental mode in Docker Engine v20.10. This means that Docker today can also be run as a non-root user. Rootless containers have a huge advantage over rootful containers since (you guessed it) they do not run under the root account. The benefit of this is that if an attacker is able to capture and escape a container, this attacker is still a normal user on the host. Containers that are started by a user cannot have more privileges or capabilities than the user itself.
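
A minimal sketch of trying rootless mode on a Linux host (this assumes Docker Engine 20.10+ and the rootless extras package are installed; see the link below for the full prerequisites):

dockerd-rootless-setuptool.sh install   # set up a rootless daemon for the current user
docker context use rootless             # point the docker CLI at the rootless daemon
docker run -d -p 8080:80 nginx          # containers now run without root on the host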

Learn more – https://docs.docker.com/engine/security/rootless/

Myth – 2: Docker doesn’t support daemonless architecture. 

Let us understand this myth. It says that when working with Docker, you have to use the Docker CLI, which communicates with a background daemon (the Docker daemon). The main logic resides in the daemon, which builds images and executes containers. This daemon runs with root privileges which presents a security challenge when providing root privileges to users. It also means that an improperly configured Docker container could potentially access the host filesystem without restriction. As Docker depends on a daemon running in the background, whenever a problem arises with the daemon, container management comes to a halt. This point of failure therefore becomes a potential problem.

Fact: By default, when the Docker daemon terminates, it shuts down running containers. You can configure the daemon so that containers remain running if the daemon becomes unavailable. This functionality is called live restore. The live restore option helps reduce container downtime due to daemon crashes, planned outages, or upgrades. To  enable the live restore setting to keep containers alive when the daemon becomes unavailable, you can add the configuration to the daemon configuration file:

On Linux, this defaults to /etc/docker/daemon.json.  On Docker Desktop for Mac or Docker Desktop for Windows, select the Docker icon from the task bar, then click Preferences -> Docker Engine 

Use the following JSON to enable live-restore.

{
  "live-restore": true
}
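On Linux, once daemon.json has been updated, you can ask the daemon to reload its configuration in place (without stopping your running containers) by sending it a SIGHUP signal:

sudo kill -SIGHUP $(pidof dockerd)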

Learn more: https://docs.docker.com/config/containers/live-restore/ 

Myth – 3: Docker doesn’t support Container Image signing

This myth states that Docker is not secure. Docker images can’t be trusted as they are not signed. Docker doesn’t validate your images and doesn’t have capability to track the source from where the Docker images are being pulled.

Fact: Docker Content Trust has been available since v1.8, which introduced the ability to verify the authenticity, integrity, and publication date of Docker images made available on the Docker Hub registry. Docker Content Trust (DCT) provides the ability to use digital signatures for data sent to and received from remote Docker registries. These signatures allow client-side or runtime verification of the integrity and publisher of specific image tags.

Within the Docker CLI we can sign and push a container image with the ‘docker trust’ command syntax. This is built on top of the Notary feature set. A prerequisite for signing an image is a Docker registry with a Notary server attached (such as Docker Hub).

docker trust

Usage:  docker trust COMMAND

Manage trust on Docker images

Management Commands:
  key         Manage keys for signing Docker images
  signer      Manage entities who can sign Docker images

Commands:
  inspect     Return low-level information about keys and signatures
  revoke      Remove trust for an image
  sign        Sign an image

Run 'docker trust COMMAND --help' for more information on a command.
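As a rough, illustrative example (the repository name below is a placeholder, and you must be logged in to a registry with Notary support, such as Docker Hub), signing and verifying a tag looks like this:

# Enable content trust for this shell session
export DOCKER_CONTENT_TRUST=1

# Sign and push the tag; you are prompted to create signing keys the first time
docker trust sign <your-namespace>/hello-app:1.0

# Verify the signatures attached to the tag
docker trust inspect --pretty <your-namespace>/hello-app:1.0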


Learn more – https://docs.docker.com/engine/security/trust/

Myth – 4: Docker is becoming paid and not free software anymore

This myth states that Docker is not free software anymore. Docker has completely monetized the software and hence one needs to pay for the subscription if they want to use it.

Fact: Docker Engine and all upstream open source Docker and Moby projects are still free. Docker Desktop is free to download and install for your personal use. If you’re running a small business with fewer than 250 employees and less than $10 million in annual revenue, Docker Desktop is still free. If you are a student or an instructor, whether in an academic or professional environment, it is still free to download and install. If you are working on an open source, non-commercial project hosted on GitHub that abides by the Open Source Initiative definition, you can use Docker Desktop for free. All you need to do is fill up the form and apply here.

For your open source project namespace on Docker Hub, Docker offers unlimited pulls and unlimited egress to any and all users, with no egress restrictions applying to any Docker users pulling images from that namespace. In addition, if your open source project uses Autobuild capabilities, you can continue using them for free. You are also free to continue to use Docker Desktop via the Docker Personal subscription. 

Myth – 5: Docker doesn’t support Kubernetes

This myth states that Docker is incapable of running Kubernetes Pods. A Pod represents a single instance of a running process in your cluster. Pods contain one or more containers, such as Docker containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod’s resources.

Fact: Docker Desktop does allow you to run Kubernetes Pods. If you have Docker Desktop installed on your Mac or Windows system, you can enable Kubernetes from the Dashboard UI and then deploy Pods over it. You can even use the native Docker Compose tool to bring up Kubernetes resources seamlessly.
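As a quick sanity check (assuming you have already enabled Kubernetes from the Docker Desktop settings), you can switch to the bundled context and launch a Pod straight away:

kubectl config use-context docker-desktop
kubectl run nginx --image=nginx --port=80
kubectl get pods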

Learn more – https://docs.docker.com/desktop/kubernetes/ 

Conclusion:

Docker today is still heavily used by millions of developers to build, share, and run any app, anywhere, almost every day. It enables developers to accelerate their productivity and spend more time on delivering value that’s core to their business. If you are looking for a mature, stable and enterprise-ready container desktop platform, Docker Desktop is the right choice for you and your organization.


Here at the Collabnix Community Slack, we’re happy to chat about Docker and how it is being adopted by millions of developers. If interested, leave your comments below.

How to control DJI Tello Mini-Drone using Python

If you want to take your drone programming skills to the next level, the DJI Tello is the right product to buy. Tello is a $99 mini-drone that can be programmed using Python, Scratch and Swift. It is rightly called an indoor quadcopter that comes equipped with an HD camera, giving you a bird’s-eye view of the world. It’s remarkably easy to fly: start flying by simply tossing Tello into the air, and slide on your mobile screen to perform 8D flips and cool aerial stunts. It’s quite lightweight, with dimensions of 3.8 x 3.6 x 1.6 inches and a weight of only 2.8 ounces. One of its most amazing features is its VR headset compatibility: you can fly it with a breathtaking first-person view.

Tello is a small quadcopter that features a Vision Positioning System and an onboard camera. Tello is a great choice if you want to learn AI analytics at the Edge. Imagine you’re building an application that captures streaming video from the drone and sends it to AI computers like the Jetson Xavier or Nano for AI analytics, storing the time-series data in Redis running over the cloud and visualizing it in Grafana. There is an ample amount of learning opportunity in these affordable drones for researchers, engineers and students.

Notable Features of Tello Drone

  • DJI Tello has an excellent flight time of 13 minutes. (enough for your indoor testing!)
  • It comes with a 5MP camera. It can shoot 720p videos, and has digital image stabilization!
  • Approximately 80 g (Propellers and Battery Included) in weight.
  • This small drone has a maximum flight distance of 100 meters and you can fly it in various flight modes using your smartphone or via the Bluetooth controller!
  • It has two antennas that make video transmission extra stable and a high-capacity battery that offers impressively long flight times.
  • It comes with a Micro USB Charging Port
  • Equipped with a high-quality image processor, Tello shoots incredible photos and videos. Even if you don’t know how to fly, you can record pro-level videos with EZ Shots and share them on social media from your smartphone.
Photo Quality Setting in Tello App
  • Tello comes with sensors that help it detect obstacles and assist during landing
DJI Tello Calibration Settings
  • Tello’s height limit can be hacked. Check this out: http://protello.com/tello-hacking-height-limit/
  • DJI Tello has a brushed motor, which is cheaper than the brushless motor, but it’s also less efficient. (Sadly, brushed motors are known to burn out sometimes due to low quality or poor implementation. They’re also susceptible to rough impacts).
  • For a non-GPS drone, it is very stable. Video quality is quite decent and landing is also accurate. Fly it during calm or no-wind conditions, otherwise it’ll sway away with the wind. It is good for indoors as well, and for clicking some good selfies.
  • DJI Tello is controlled using an application on an iOS or Android mobile phone. It is also possible to control via Bluetooth joystick connected via application.

Getting Started

Hardware Required:

Unboxing DJI Tello Drone
  • DJI Tello Drone (Buy)
  • Charging Cable
  • Battery (Buy)

DJI Tello comes with a detachable Battery of 1.1Ah/3.8V. Insert the 26g battery into the aircraft and charge the battery by connecting the Micro USB port on the aircraft to a charger.

Ways of controlling Your DJI Tello

There are two ways you can control your DJI Tello. The first is using your mobile device; you will need to download the Tello or Tello EDU app first. You can also control your Tello via Python or Scratch programming. In this blog, we will see how to control Tello using Python.

Pre-requisite:

  • Linux system (desktop or Edge device)
  • Python3
  • Tello Mobile app

Press the “Power” button of Tello once. Once it starts blinking, open up the Tello mobile app to discover the Tello drone. Open settings and configure the Wi-Fi settings, such as the network name and password. Connect your laptop to the Tello Wi-Fi network, then follow the steps below to connect via a Python script.

Install using pip

pip install djitellopy

For Linux distributions with both python2 and python3 (e.g. Debian, Ubuntu, …) you need to run

pip3 install djitellopy

API Reference

See djitellopy.readthedocs.io for a full reference of all classes and methods available.

Step 1. Connect, TakeOff, Move and Land

The Python script below allows you to connect to the drone, take off, make a few movements (move left, rotate, move forward) and then land smoothly.

from djitellopy import Tello

tello = Tello()

tello.connect()
tello.takeoff()

tello.move_left(100)
tello.rotate_counter_clockwise(90)
tello.move_forward(100)

tello.land()

Step 2. Take a Picture

import cv2
from djitellopy import Tello

tello = Tello()
tello.connect()

tello.streamon()
frame_read = tello.get_frame_read()

tello.takeoff()
cv2.imwrite("picture.png", frame_read.frame)

tello.land()

Step 3. Recording a Video


# source https://github.com/damiafuentes/DJITelloPy
import time, cv2
from threading import Thread
from djitellopy import Tello

tello = Tello()

tello.connect()

keepRecording = True
tello.streamon()
frame_read = tello.get_frame_read()

def videoRecorder():
    # create a VideoWriter object, recording to ./video.avi
   
    height, width, _ = frame_read.frame.shape
    video = cv2.VideoWriter('video.avi', cv2.VideoWriter_fourcc(*'XVID'), 30, (width, height))

    while keepRecording:
        video.write(frame_read.frame)
        time.sleep(1 / 30)

    video.release()

# we need to run the recorder in a separate thread, otherwise blocking operations
#  would prevent frames from getting added to the video
recorder = Thread(target=videoRecorder)
recorder.start()

tello.takeoff()
tello.move_up(100)
tello.rotate_counter_clockwise(360)
tello.land()

keepRecording = False
recorder.join()

Step 4. Control the drone using Keyboard

# source https://github.com/damiafuentes/DJITelloPy
from djitellopy import Tello
import cv2, math, time

tello = Tello()
tello.connect()

tello.streamon()
frame_read = tello.get_frame_read()

tello.takeoff()

while True:
    # In reality you want to display frames in a separate thread. Otherwise
    #  they will freeze while the drone moves.
   
    img = frame_read.frame
    cv2.imshow("drone", img)

    key = cv2.waitKey(1) & 0xff
    if key == 27: # ESC
        break
    elif key == ord('w'):
        tello.move_forward(30)
    elif key == ord('s'):
        tello.move_back(30)
    elif key == ord('a'):
        tello.move_left(30)
    elif key == ord('d'):
        tello.move_right(30)
    elif key == ord('e'):
        tello.rotate_clockwise(30)
    elif key == ord('q'):
        tello.rotate_counter_clockwise(30)
    elif key == ord('r'):
        tello.move_up(30)
    elif key == ord('f'):
        tello.move_down(30)

tello.land()

In my next blog post, I will showcase how to implement object detection and analytics using Deep Learning, DJI Tello, Jetson Nano and DeepStream.


5 Minutes to AI Data Pipeline


According to a recent Gartner report, “By the end of 2024, 75% of organizations will shift from piloting to operationalizing artificial intelligence (AI), driving a 5x increase in streaming data analytics infrastructure.” The report, Top 10 Trends in Data and Analytics, 2020, further states, “Getting AI into production requires IT leaders to complement DataOps and ModelOps with infrastructures that enable end-users to embed trained models into streaming-data infrastructures to deliver continuous near-real-time predictions.”

What’s driving this tremendous shift to AI? The accelerated growth in the size and complexity of distributed data needed to optimize real-time decision making.

Realizing AI’s value, organizations are applying it to business problems by adopting intelligent applications and augmented analytics and exploring composite AI techniques. AI is finding its way into all aspects of applications, from AI-driven recommendations to autonomous vehicles, from virtual assistants, chatbots, and predictive analytics to products that adapt to the needs and preferences of users. 

Surprisingly, the hardest part of AI is not artificial intelligence itself, but dealing with AI data. The accelerated growth of data captured from the sensors in the internet of things (IoT) solutions and the growth of machine learning (ML) capabilities are yielding unparalleled opportunities for organizations to drive business value and create competitive advantage. That’s why ingesting data from many sources and deriving actionable insights or intelligence from it have become a prime objective of AI-enabled applications. In this blog post, we will discuss the AI data pipeline and the challenges of getting AI into real-world production.

From 30,000 feet up, a great AI-enabled application might look simple. But a closer look reveals a complex data pipeline. To understand what’s really happening, let’s break down the various stages of the AI data pipeline.

The first stage is data ingestion. Data ingestion is all about identifying and gathering the raw data from multiple sources, including the IoT, business processes, and so on. 

The gathered data is typically unstructured and not necessarily in the correct form to be processed, so you also need a data-preparation stage. This is where the pre-processing of data—filtering, construction, and selection—takes place. Data segregation also happens at this stage, as subsets of data are split in order to train the model, test it, and validate how it performs against the new data.

Next comes the model training phase. This includes incremental training of conventional neural network models, which generates trained models that are deployed by the model serving layer to deliver inferences or predictions. The training phase is iterative. Once a trained model is generated, it must be tested for inference accuracy and then re-trained to improve that accuracy. 

In a nutshell, the fundamental building blocks of artificial intelligence comprise everything from ingest through several stages of data classification, transformation, analytics, machine learning, and deep learning model training, and then retraining through inference to yield increasingly accurate insights.


AI pipeline characterization and performance requirements

The AI data pipeline has varying characteristics and performance requirements. The data can be characterized by variety, volume, and disparity. The ingest phase must support fast, large-scale processing of the incoming data, while data quality is the primary focus of the data preparation phase. Both the training and inference phases are sensitive to model quality, data-access latency, response time, throughput, and data-caching capabilities of the AI solution.

In my next blog, I will showcase how the AI data pipeline can be used to build an object detection and analytics platform using edge devices and cloud-native applications.


Getting Started with Terraform in 5 Minutes

Terraform is an infrastructure as code (IaC) tool that allows you to build, change, and version infrastructure safely and efficiently. This includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform is able to determine what changed and create incremental execution plans which can be applied.
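To make the plan/apply workflow concrete, here is a minimal, self-contained sketch. It only manages a local file via the hashicorp/local provider, so it needs no cloud credentials; the file name and content here are arbitrary examples, not part of any real setup:

cat > main.tf <<'EOF'
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"
    }
  }
}

resource "local_file" "hello" {
  filename = "${path.module}/hello.txt"
  content  = "Hello from Terraform!"
}
EOF

terraform init    # download the provider plugins
terraform plan    # show the execution plan
terraform apply   # create hello.txt after confirmation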



Following are the key features of Terraform:

  1. Infrastructure as Code: As discussed earlier, infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.
  2. Execution Plans: Terraform has a “planning” step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.
  3. Resource Graph: Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.
  4. Change Automation: Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

Installing Terraform on MacOS

Homebrew is a free and open-source package management system for Mac OS X. Install the Terraform formula from the terminal.

$ brew install terraform

NOTE: Homebrew and the Terraform formula are NOT directly maintained by HashiCorp. The latest version of Terraform is always available by manual installation.

Verify the latest release

terraform version
Terraform v1.0.11
on darwin_amd64

Verify the installation

Verify that the installation worked by opening a new terminal session and listing Terraform’s available subcommands.

$ terraform -help
Usage: terraform [-version] [-help] <command> [args]

The available commands for execution are listed below. The most common, useful commands are shown first, followed by less common or more advanced commands. If you’re just getting started with Terraform, stick with the common commands. For the other commands, please read the help and docs before usage. ##…

Add any subcommand to terraform -help to learn more about what it does and available options.

$ terraform -help plan

Troubleshoot

If you get an error that terraform could not be found, your PATH environment variable was not set up properly. Please go back and ensure that your PATH variable contains the directory where Terraform was installed.

Enable tab completion

If you use either bash or zsh you can enable tab completion for Terraform commands. To enable autocomplete, run the following command and then restart your shell.

$ terraform -install-autocomplete

Installing Terraform on Linux

A binary distribution is available for all environments. Let’s grab the latest version of it for Linux.

$ wget https://releases.hashicorp.com/terraform/1.0.11/terraform_1.0.11_linux_amd64.zip

Then unzip the archive:

$ unzip terraform_1.0.11_linux_amd64.zip

Check the executable permission on the binary; if it’s not executable, make it executable using the command below:

$ chmod +x terraform

Finally, make sure that terraform is available in PATH by moving the binary into the /usr/local/bin directory:

$ sudo mv terraform /usr/local/bin

Now you are ready to run terraform commands. Open up a new terminal and run the terraform command:

$ terraform

Verify the installation

Verify that the installation worked by opening a new terminal session and listing Terraform’s available subcommands.

$ terraform -help
Usage: terraform [-version] [-help] <command> [args]

The available commands for execution are listed below. The most common, useful commands are shown first, followed by less common or more advanced commands. If you’re just getting started with Terraform, stick with the common commands. For the other commands, please read the help and docs before usage. ##…

Add any subcommand to terraform -help to learn more about what it does and available options.

$ terraform -help plan

Troubleshoot

If you get an error that terraform could not be found, your PATH environment variable was not set up properly. Please go back and ensure that your PATH variable contains the directory where Terraform was installed.

Enable tab completion

If you use either bash or zsh you can enable tab completion for Terraform commands. To enable autocomplete, run the following command and then restart your shell.

$ terraform -install-autocomplete

Installing Terraform on Windows

A binary distribution is available for all environments. Let’s grab the latest version of it for Windows.

Open up a PowerShell session on your Windows machine, cd to the D drive and create a Terraform directory:

PS C:\Users\user>D:

Download the zip archive from the URL below:

PS D:\> curl.exe -O https://releases.hashicorp.com/terraform/0.12.26/terraform_0.12.26_windows_amd64.zip

Then unzip this archive and rename the directory to terraform. You will see a single binary file named terraform; add its path to the environment variables.

PS D:\Terraform> Expand-Archive terraform_0.12.26_windows_amd64.zip
PS D:\> Rename-Item -path .\terraform_0.12.26_windows_amd64\ .\terraform

To set up the environment variable, add the terraform path to the Path variable in the system environment settings.

And you are done. Now open up a terminal and run the terraform command:

PS D:\terraform> terraform

Verify the installation

Verify that the installation worked by opening a new powershell or cmd session and listing Terraform’s available subcommands.

PS D:\terraform> terraform -help
Usage: terraform [-version] [-help] <command> [args]

The available commands for execution are listed below. The most common, useful commands are shown first, followed by less common or more advanced commands. If you’re just getting started with Terraform, stick with the common commands. For the other commands, please read the help and docs before usage. ##…

Add any subcommand to terraform -help to learn more about what it does and available options.

PS D:\terraform> terraform -help plan

Troubleshoot

If you get an error that terraform could not be found, your PATH environment variable was not set up properly. Please go back and ensure that your Path variable contains the directory where Terraform was installed.

How to set up Terraform for Google Cloud

Pre-requisite

In order to avoid explicitly referencing the GCP service key in our configuration, we are going to use the GOOGLE_APPLICATION_CREDENTIALS environment variable, which points to our GCP service key.

Installing Terraform ~> 0.12.26

If you have a service key at your disposal

export GOOGLE_APPLICATION_CREDENTIALS={path to your service key file}

If you have not created a service account and a service key, then follow the steps below.

Install gcloud cli

The gcloud CLI is a part of the Google Cloud SDK. You must download and install the SDK on your system and initialize it before you can use the gcloud command-line tool.

Note: You can follow the install script given in the Google Cloud SDK documentation.

Google Cloud SDK Quickstart script for CentOS

sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
[google-cloud-sdk]
name=Google Cloud SDK
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOM

Installing Google cloud SDK

yum install google-cloud-sdk

Once the SDK is installed, run gcloud init to initialize the SDK,

gcloud init

Run the following scripts

export PROJECT_ID={Name of your GCP Project}

export GOOGLE_APPLICATION_CREDENTIALS=~/.config/gcloud/${PROJECT_ID}-terraform-admin.json

gcloud iam service-accounts create terraform --display-name "Terraform admin account"

gcloud projects add-iam-policy-binding ${PROJECT_ID} --member serviceAccount:terraform@${PROJECT_ID}.iam.gserviceaccount.com   --role roles/owner

gcloud services enable compute.googleapis.com

gcloud services enable iam.googleapis.com

gcloud iam service-accounts keys create ${GOOGLE_APPLICATION_CREDENTIALS} --iam-account terraform@${PROJECT_ID}.iam.gserviceaccount.com

Note – you would need to export the GOOGLE_APPLICATION_CREDENTIALS every time you work with terraform when interacting with your configurations.
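Once the credentials are exported, a minimal provider configuration can pick them up automatically. The sketch below is illustrative only; the project variable, region and network name are placeholders I have chosen, not values from the original setup:

cat > main.tf <<'EOF'
variable "project_id" {}

provider "google" {
  project = var.project_id
  region  = "us-central1"
}

resource "google_compute_network" "demo" {
  name                    = "terraform-demo-network"
  auto_create_subnetworks = false
}
EOF

terraform init
terraform plan -var "project_id=${PROJECT_ID}"
terraform apply -var "project_id=${PROJECT_ID}"

A terraform destroy with the same variable removes the network again once you are done experimenting.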


Top 5 Effective Discord Bot For Your Server in 2021

Discord is one of the fastest-growing voice, video and text communication services, used by millions of people around the world. It is the easiest way to talk, chat, hang out, and stay close with your friends and communities. It is available in over 28 languages and primarily aimed at gamers. Discord was started to solve a big problem: how to communicate with friends around the world while playing games online. Recently, we have started seeing mainstream acceptance as well, with companies using Discord in the same way they’d use Slack and Microsoft Teams.

Why is Discord so popular?

Discord is an innovative platform. Even though it gained momentum from within the gaming community in its early days, it has since undergone considerable growth and transformation to establish itself as a platform of choice for community groups. It has been designed to meet the needs of new-age social media applications. It comes with artificial intelligence-powered bots that allow users to perform numerous actions, from welcoming new members and moderating communications between members to paraphrasing content. One of the most exciting features that I have used bots for is fetching regular feeds from newsletters and blog sites. I think that’s a super cool way to keep yourself up to date with the latest technology news.

Let us look at the top 5 bots that are available online and can be used by anyone who wants to leverage this innovative technology platform:

1. MEE6

MEE6 is a Discord role bot that allows users to self-assign roles by using discord reactions. This discord role generator will automatically update permissions for users in discord. 

10 commands MEE6 can process:

1. !mute – Command for muting a user from the discord server.

2. !tempban – Command for restricting a user from the discord server for a temporary duration.

3. !tempmute – Command for temporarily muting a user from the discord server.

4. !server-info – Command for getting information concerning the existing server.

5. !role-info – Command for extracting the information related to a specific role.

6. !user-info – Command for getting information about a user.

7. !slowmode – Command for disabling or enabling the slow-mode feature in a channel.

8. !warn – Command for issuing a warning to a particular user.

9. !infractions – Lists a member’s infractions.

10. /Nick – To change nicknames.

To add this bot in your discord-

Follow this link- https://mee6.xyz (Official MEE6 website)

2. Groovy bot

An incredibly easy to use music bot for Discord that doesn’t skip on features. Supports YouTube, Spotify, Apple Music and more. Invite Groovy today and start listening to your favorite songs.

10 commands Groovy can process

1. /play (song name or link) – Loads your input and adds it to the queue; if there is no playing track, then it will start playing.

2. /queue – Displays the current song queue.

3. /skip – Skips to the next song.

4. /back – Skips to the previous song.

5. /clear – Removes all tracks from the queue.

6. /jump – Skips to the specified track.

7. /lyrics – Displays lyrics for the currently playing track.

8. /disconnect – Disconnects the bot from your voice channel and clears the queue.

9. /autoplay – Toggles AutoPlay, which will automatically queue the best song to play next by looking at your listening history.

10. /nightcore – Toggles nightcore mode.

If you have any questions or want to add this bot in your server 

Follow this link- https://groovy.bot (official Groovy bot website)

3. GiveawayBot

Discord bots can help you create a better community. If you want to show love to your community, this is the best bot available for you. By using a bunch of simple commands, it allows you to start a giveaway, promote it, pick winners and end it without any manual effort. It is a perfect bot for modern digital marketers who are always looking for new exciting ways to engage with their audiences.

5 Commands that GiveawayBot can process:

  1. !gcreate – create a giveaway
  2. !gstart <time>[Winners][prize] – starts  a giveaway
  3. !gend – ends the most recent giveaway in the current channel
  4. !glist – lists all the current-running giveaways on the server
  5. !greroll – picks a new winner from the latest giveaway.

4. Octave

Are you a music lover? If yes, then this bot is for you. Octave is a highly popular music bot for Discord. It allows you to play songs available on YouTube, Soundcloud or your voice-channels. You can create a playlist and add your favourite songs. You can perform pause, replay, queue songs  and even ask Octave to display the lyrics of the song. Interesting, isn’t it?

  1. play [url|YT search] — Join and play music in a channel.
  2. pause — Pause or resume the music player.
  3. stop, leave — Stop and clear the music player.
  4. skip — Skip the current music track if you’re the requester.
  5. remove (first|last|all|start..end|#) — Remove a song from the queue.
  6. shuffle — Shuffle the music queue.
  7. restart — Restart the current song.
  8. repeat (song, playlist, none) — Set if the music player should repeat.
  9. volume [loudness %] — Set the volume of the music player
  10. youtube, yt (query…) — Search and see YouTube results.
  11. soundcloud, sc (query…) — Search and see SoundCloud results.

Learn more: https://bot.to/bot/octave/

5.Dank Memer

Dank memer is a feature-rich Discord bot with the original twist of being sarcastic and memey. A MASSIVE currency system, tons of memes, and much more!

Dank Memer is a bot that brings some great perks to your Discord server!

  • Increased server engagement
  • A selling point while advertising your server (Tens of millions of users know and use Dank Memer!)
  • A partnership program available to servers who actively use Dank Memer, where we bring our users to you!

It might make you wonder, what’s the big deal with Dank Memer? Why do so many users love it?

  • Global currency system
    • Stealing/Bank Robbing
    • Gambling
    • Unique items
    • Pets
    • And much more!
  • Memes!
    • We provide TONS of easy ways to generate memes just by using the bot’s image category.
    • The bot collects the top hottest memes on Reddit daily.
    • Auto-posting meme webhooks for premium servers
  • Games, Animal pictures, and so much more!

If you want to check it out then follow this link-https://dankmemer.lol

Don’t wait to see what the hype is all about; get on the train now!


Author:

Gurviraj Singh is a 13 years old from India. He is a 6th standard student currently studying at Christ Academy, Bangalore. He is a technology enthusiast and has set up his own Minecraft platform using Docker over Cloud platform. He is fond of reading fiction stories. He is reachable at https://twitter.com/MinecraftWreck1

What is Kubernetes Scheduler and why do you need it? – KubeLabs Glossary

If you are keen to understand why Kubernetes Pods are placed onto a particular cluster node, then you have come to the right place. This detailed guide talks about the Kubernetes scheduler and how it works. It also covers concepts like node affinity, taints and tolerations.

In Kubernetes, scheduling refers to making sure that Pods are matched to Nodes so that the kubelet can run them. The Kubernetes Scheduler is a core component of Kubernetes: after a user or a controller creates a Pod, the Kubernetes Scheduler, monitoring the object store for unassigned Pods, will assign the Pod to a Node. Then the kubelet, monitoring the object store for assigned Pods, will execute the Pod.

A scheduler watches for newly created Pods that have no Node assigned. For every Pod that the scheduler discovers, the scheduler becomes responsible for finding the best Node for that Pod to run on.

How does the Kubernetes Scheduler work?

The Kubernetes scheduler is in charge of scheduling pods onto nodes. Basically it works like this:

  1. You create a pod
  2. The scheduler notices that the new pod you created doesn’t have a node assigned to it
  3. The scheduler assigns a node to the pod

K8s scheduler is not responsible for actually running the pod – that’s the kubelet’s job. So it basically just needs to make sure every pod has a node assigned to it. Kubernetes in general has this idea of a “controller”. A controller’s job is to:

  • look at the state of the system
  • notice ways in which the actual state does not match the desired state (like “this pod needs to be assigned a node”)
  • change the system to move the actual state towards the desired state (in the scheduler’s case, assign a node to the pod)
  • repeat

The scheduler is a kind of controller. There are lots of different controllers and they all have different jobs and operate independently.

How does Kubernetes select the right node?

Enter node affinity. Node affinity allows you to tell Kubernetes to schedule pods only to specific subsets of nodes. The initial node affinity mechanism in early versions of Kubernetes was the nodeSelector field in the pod specification. The node had to include all the labels specified in that field to be eligible to become the target for the pod.

nodeSelector steps:

git clone https://github.com/collabnix/dockerlabs
cd dockerlabs/kubernetes/workshop/Scheduler101/
kubectl label nodes node2 mynode=worker-1
kubectl apply -f pod-nginx.yaml
  • We have labelled the node with a node name; in this case, node2 has been given the mynode=worker-1 label.
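The pod-nginx.yaml manifest itself isn’t reproduced in this post; based on the label applied above and the describe output below, a minimal equivalent with a nodeSelector would look roughly like this:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    mynode: worker-1
EOF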

Viewing Your Pods

kubectl get pods --output=wide
[node1 Scheduler101]$ kubectl describe po nginx
Name:               nginx
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node2/192.168.0.17
Start Time:         Mon, 30 Dec 2019 16:40:53 +0000
Labels:             env=test
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"env":"test"},"name":"nginx","namespace":"default"},"spec":{"contai...
Status:             Pending
IP:
Containers:
  nginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qpgxq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-qpgxq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qpgxq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  mynode=worker-1
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  7s    default-scheduler  Successfully assigned default/nginx to node2
  Normal  Pulling    3s    kubelet, node2     Pulling image "nginx"
[node1 Scheduler101]$

  • You can see Node-Selectors: mynode=worker-1 in the output above.

Deleting the Pod

kubectl delete -f pod-nginx.yaml
pod "nginx" deleted

Please note:

  • Node affinity is conceptually similar to nodeSelector – it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.
  • There are currently two types of node affinity:
    1. requiredDuringSchedulingIgnoredDuringExecution (required during scheduling, ignored during execution; also known as “hard” requirements)
    2. preferredDuringSchedulingIgnoredDuringExecution (preferred during scheduling, ignored during execution; also known as “soft” preferences)

Show me a demo…

Let’s jump into a quick demonstration by cloning the repository and labelling the nodes

git clone https://github.com/collabnix/dockerlabs
cd dockerlabs/kubernetes/workshop/Scheduler101/
kubectl label nodes node2 mynode=worker-1
kubectl label nodes node3 mynode=worker-3
kubectl apply -f pod-with-node-affinity.yaml
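As with the earlier example, the contents of pod-with-node-affinity.yaml aren’t shown here; a minimal manifest consistent with the Pod landing on node3 (labelled mynode=worker-3) in the output below might look like this:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: mynode
            operator: In
            values:
            - worker-3
  containers:
  - name: nginx
    image: nginx
EOF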

Viewing Your Pods


kubectl get pods --output=wide
NAME                 READY   STATUS    RESTARTS   AGE     IP          NODE          NOMINATED NODE   READINESS GATES
with-node-affinity   1/1     Running   0          9m46s   10.44.0.1   kube-slave1   <none>           <none>

[node1 Scheduler101]$ kubectl describe po
Name:               with-node-affinity
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node3/192.168.0.16
Start Time:         Mon, 30 Dec 2019 19:28:33 +0000
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"with-node-affinity","namespace":"default"},"spec":{"affinity":{"nodeA...
Status:             Pending
IP:
Containers:
  nginx:
    Container ID:
    Image:          nginx
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qpgxq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-qpgxq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qpgxq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  26s   default-scheduler  Successfully assigned default/with-node-affinity to node3
  Normal  Pulling    22s   kubelet, node3     Pulling image "nginx"
  Normal  Pulled     20s   kubelet, node3     Successfully pulled image "nginx"
  Normal  Created    2s    kubelet, node3     Created container nginx
  Normal  Started    0s    kubelet, node3     Started container nginx

Cleaning up

Finally you can clean up the resources you created in your cluster:

kubectl delete -f pod-with-node-affinity.yaml

What is Anti-Node Affinity?

  • Some scenarios require that you don’t use one or more nodes except for particular pods. Think of the nodes that host your monitoring application.
  • Those nodes typically don’t have many resources, due to the nature of their role. Thus, if pods other than the monitoring app are scheduled to those nodes, they hurt monitoring and also degrade the application they are hosting.
  • In such a case, you need to use node anti-affinity to keep pods away from a set of nodes.

Show me a demo..

Let us jump into an anti-affinity demonstration by cloning the repository and running the below commands:

git clone https://github.com/collabnix/dockerlabs
cd dockerlabs/kubernetes/workshop/Scheduler101/
kubectl label nodes node2 mynode=worker-1
kubectl label nodes node3 mynode=worker-3
kubectl apply -f pod-anti-node-affinity.yaml

Viewing Your Pods

[node1 Scheduler101]$ kubectl get pods --output=wide
NAME    READY   STATUS    RESTARTS   AGE     IP          NODE    NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          2m37s   10.44.0.1   node2   <none>           <none>

Get nodes label detail

[node1 Scheduler101]$ kubectl get nodes --show-labels | grep mynode
node2   Ready    <none>   166m   v1.14.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux,mynode=worker-1,role=dev
node3   Ready    <none>   165m   v1.14.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node3,kubernetes.io/os=linux,mynode=worker-3

Get pod describe

[node1 Scheduler101]$ kubectl describe pods nginx
Name:               nginx
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node2/192.168.0.17
Start Time:         Mon, 30 Dec 2019 19:02:46 +0000
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"affinity":{"nodeAffinity":{"re...
Status:             Running
IP:                 10.44.0.1
Containers:
  nginx:
    Container ID:   docker://2bdc20d79c360e1cd857eeb9bbb9424c726b2133e78f25bf4587e0befe3fbcc7
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:b2d89d0a210398b4d1120b3e3a7672c16a4ba09c2c4a0395f18b9f7999b768f2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Mon, 30 Dec 2019 19:03:07 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qpgxq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-qpgxq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qpgxq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  60s   default-scheduler  Successfully assigned default/nginx to node2
  Normal  Pulling    56s   kubelet, node2     Pulling image "nginx"
  Normal  Pulled     54s   kubelet, node2     Successfully pulled image "nginx"
  Normal  Created    40s   kubelet, node2     Created container nginx
  Normal  Started    39s   kubelet, node2     Started container nginx

Adding another key to the matchExpressions with the operator NotIn will avoid scheduling the nginx pods on any node labelled worker-1.
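For illustration only (this is a sketch, not the repository’s pod-anti-node-affinity.yaml), a required rule using the NotIn operator that keeps a Pod off any node carrying mynode=worker-1 would look like this:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-anti-affinity-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: mynode
            operator: NotIn
            values:
            - worker-1
  containers:
  - name: nginx
    image: nginx
EOF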

Cleaning up

Finally you can clean up the resources you created in your cluster:

kubectl delete -f pod-anti-node-affinity.yaml

In our next blog post, we will learn about node taints and tolerations in detail.


How to build and run a Python app in a container – Docker Python Tutorial

In May 2021, over 80,000 developers participated in the Stack Overflow Annual Developer Survey, where Python traded places with SQL to become the third most popular language. Docker is a containerization tool used for spinning up isolated, reproducible application environments, and it is a popular development tool for Python developers. The tutorials and articles here will teach you how to integrate Docker into your development workflow and use it to deploy applications locally and to the cloud.

If you’re a Python developer and want to get started with Docker, I highly recommend you first start with the basics of Docker. Docker Labs is one of the most popular resources, developed by Collabnix community members. You can complete the Docker101 track before you start with the instructions below.

Getting Started

We will be leveraging Docker Desktop for most of the tutorials and examples below. Follow the steps below to install Docker Desktop on your local laptop.

Installing Docker Desktop

Let us quickly install Docker Desktop. You can refer to this link to download Docker Desktop for Mac. Once you install Docker Desktop, go to the Preferences tab as shown in the image to make any changes needed based on your system resources (optional).

Installing Python

You can use Homebrew to install Python in your system

brew install python

Create the app.py file with the following content:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run(host='0.0.0.0')

# Why you should run it at 0.0.0.0
# https://stackoverflow.com/questions/30323224/deploying-a-minimal-flask-app-in-docker-server-connection-issues

Now that we have our server, let’s set about writing our Dockerfile and constructing the container where our newly born Python application will live.
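The Dockerfile below installs dependencies from a requirements.txt, which isn’t shown in this post; for this minimal app it only needs Flask, so create the file alongside app.py:

echo "flask" > requirements.txt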

Create a Dockerfile with the following content:

FROM python:3.8-alpine
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]

Now that we have defined everything we need for our Python application to run in our Dockerfile we can now build an image using this file. In order to do that, we’ll need to run the following command:

$ docker build -t my-python-app .
Sending build context to Docker daemon   5.12kB
Step 1/6 : FROM python:3.8-alpine
 ---> d4953956cf1e
Step 2/6 : RUN mkdir /app
 ---> Using cache
 ---> be346f9ff24f
Step 3/6 : ADD . /app
 ---> eb420da7413c
Step 4/6 : WORKDIR /app
 ---> Running in d623a88e4a00
Removing intermediate container d623a88e4a00
 ---> ffc439c5bec5
Removing intermediate container 15805f4f7685
 ---> 31828faf8ae4
Step 5/6 : CMD ["python", "app.py"]
 ---> Running in 9d54463b7e84
Removing intermediate container 9d54463b7e84
 ---> 3f9244a1a240
Successfully built 3f9244a1a240
Successfully tagged my-python-app:latest

We can now verify that our image exists on our machine by typing docker images:

$ docker images
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
my-python-app                             latest              3f9244a1a240        2 minutes ago       355MB


In order to run this newly created image, we can use the docker run command and pass in the ports we want to map to and the image we wish to run.

$ docker run -p 8000:5000 -it my-python-app

  • -p 8000:5000 – This maps port 5000, which the Flask app listens on inside the container, to http://localhost:8000 on our local machine.
  • -it – This flag specifies that we want to run this image in interactive mode with a tty for this container process.
  • my-python-app – This is the name of the image that we want to run in a container.

Awesome, if we open up http://localhost:8000 within our browser, we should see that our application is successfully responding with "Hello World!".

Running our Container In the Background

You’ll notice that if we ctrl-c this within the terminal, it will kill the container. If we want to have it run permanently in the background, you can replace -it with -d to run this container in detached mode.

In order to view the list of containers running in the background you can use docker ps which should output something like this:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
70fcc9195865        my-python-app    "python app.py"           5 seconds ago       Up 3 seconds        0.0.0.0:8000->5000/tcp   silly_swirles


Dockerize Multi-Container Python App Using Compose

Learn how to containerize Python Application Using Docker Compose

The app fetches the quote of the day from a public API hosted at http://quotes.rest and caches the result in Redis. For subsequent API calls, the app returns the result from the Redis cache instead of fetching it from the public API.

Create the following file structure:

python-docker-compose
   ↳ app.py


from flask import Flask
from datetime import datetime
import requests
import redis
import os
from dotenv import load_dotenv
import json

load_dotenv()  # take environment variables from .env.


app = Flask("app")

def get_quote_from_api():
	API_URL = "http://quotes.rest/qod.json"
	resp = requests.get(API_URL)
	if resp.status_code == 200:
		try:
			quote_resp = resp.json()["contents"]["quotes"][0]["quote"]
			return quote_resp
		except (KeyError, IndexError) as e:
			print (e)
			return None
	else:
		return None


@app.route("/")
def index():
	return "Welcome! Please hit the `/qod` API to get the quote of the day."


@app.route("/qod")
def quote_of_the_day():
	# get today's date as a string; it doubles as the cache key
	date = datetime.now().strftime("%Y-%m-%d")
	quote = redis_client.get(date)
	if quote:
		# cache hit: Redis returns bytes, so decode back to a string
		quote = quote.decode("utf-8")
	else:
		# cache miss: fetch from the public API and cache it for the day
		quote = get_quote_from_api()
		if quote:
			redis_client.set(date, quote)
	return "Quote of the day: " + (quote or "unavailable")




if __name__ == '__main__':
	# Connect to redis client
	redis_host = os.environ.get("REDIS_HOST", "localhost")
	redis_port = os.environ.get("REDIS_PORT", 6379)
	redis_password = os.environ.get("REDIS_PASSWORD", None)
	redis_client = redis.StrictRedis(host=redis_host, port=redis_port, password=redis_password)

	# Run the app
	app.run(port=8080, host="0.0.0.0")



Run a Python application

git clone https://github.com/docker-community-leaders/dockercommunity/
cd /content/en/examples/Python/python-docker-compose
pip install -r requirements.txt
python app.py

Note that running the app directly like this expects a Redis server to be reachable on localhost:6379; the docker-compose setup below takes care of that for you. On a different terminal, run:

$ curl http://localhost:8080/qod
The free soul is rare, but you know it when you see it - basically because you feel good, very good, when you are near or with them.

Dockerize Python application

File: /Python/python-docker-compose/Dockerfile

# Dockerfile References: https://docs.docker.com/engine/reference/builder/

# Start from python:3.8-alpine base image
FROM python:3.8-alpine

# The latest alpine images don't have some tools like (`git` and `bash`).
# Adding git, bash and openssh to the image
RUN apk update && apk upgrade && \
    apk add --no-cache bash git openssh

# Make dir app
RUN mkdir /app
WORKDIR /app
COPY requirements.txt requirements.txt

RUN pip install -r requirements.txt

# Copy the source from the current directory to the Working Directory inside the container
COPY . .

# Expose port 8080 to the outside world
EXPOSE 8080

# Run the executable
CMD ["python", "app.py"]

Application services via Docker Compose

Our application consists of two services –

  • App service that contains the API to display the “quote of the day”.
  • Redis which is used by the app to cache the “quote of the day”.

Let’s define both the services in a docker-compose.yml file

File: /Python/python-docker-compose/docker-compose.yml 

# Docker Compose file Reference (https://docs.docker.com/compose/compose-file/)

version: '3'

# Define services
services:

  # App Service
  app:
    # Configuration for building the docker image for the service
    build:
      context: . # Use an image built from the specified dockerfile in the current directory.
      dockerfile: Dockerfile
    ports:
      - "8080:8080" # Forward the exposed port 8080 on the container to port 8080 on the host machine
    restart: unless-stopped
    depends_on: 
      - redis # This service depends on redis. Start that first.
    environment: # Pass environment variables to the service
      REDIS_HOST: redis
      REDIS_PORT: 6379    
    networks: # Networks to join (Services on the same network can communicate with each other using their name)
      - backend

  # Redis Service   
  redis:
    image: "redis:alpine" # Use a public Redis image to build the redis service    
    restart: unless-stopped
    networks:
      - backend

networks:
  backend:    

Running the application with docker compose

$ docker-compose up
Starting python-docker-compose_redis_1 ... done
Starting python-docker-compose_app_1   ... done
Attaching to python-docker-compose_redis_1, python-docker-compose_app_1
redis_1  | 1:C 02 Feb 2019 12:32:45.791 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1  | 1:C 02 Feb 2019 12:32:45.791 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1  | 1:C 02 Feb 2019 12:32:45.791 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1  | 1:M 02 Feb 2019 12:32:45.792 * Running mode=standalone, port=6379.
redis_1  | 1:M 02 Feb 2019 12:32:45.792 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1  | 1:M 02 Feb 2019 12:32:45.792 # Server initialized
redis_1  | 1:M 02 Feb 2019 12:32:45.792 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1  | 1:M 02 Feb 2019 12:32:45.793 * DB loaded from disk: 0.000 seconds
redis_1  | 1:M 02 Feb 2019 12:32:45.793 * Ready to accept connections
app_1    | 2019/02/02 12:32:46 Starting Server

The docker-compose up command starts all the services defined in the docker-compose.yml file. You can interact with the python service using curl –

$ curl http://localhost:8080/qod
A show of confidence raises the bar

Stopping all the services with docker compose

$ docker-compose down

5 Best Redis Tutorials and Free Resources for all Levels

If you’re a developer looking for Redis-related resources, then you have finally arrived at the right place. With more than 51,000 GitHub stars, 20,000 forks, and 528 contributors, Redis is an incredibly popular open source project supported by a vibrant community. But developers don’t just use Redis, they love it. Stack Overflow’s 2021 Developer Survey has ranked Redis as the Most Loved Database for 5 years running!

Source ~ https://insights.stackoverflow.com/survey/2021

Let us quickly jump into the top 5 tutorials and resource sites that you might find useful, whether you are a beginner, intermediate or experienced developer.


#1 Redis Developer Hub

Level: Beginners, Intermediate

Source – https://developer.redis.com

With over 150+ tutorials, Redis Developer Hub is basically a journey carved out for developers like YOU. The journey is defined in terms of “Create > Develop > Explore” . The “Create” section allows you to get started with Redis, creating Redis database locally as well as over Cloud in the form of DBaaS. The “Develop” section allows you to build your app using Redis clients. Finally, the “Explore” section helps you to explore your Redis database using robust tools like RedisInsight, Redis Data Source for Grafana, RIOT and many more…

Redis Developer Hub includes several interesting tutorials around RedisInsight. If you’re looking for an intuitive and efficient GUI for Redis that allows you to interact with your databases and manage your data, with built-in support for the most popular Redis modules, RedisInsight is the right tool for you. It’s freeware supported by Redis Inc.


#2 Redis Launchpad

Level: Intermediate, Advanced

Source – https://launchpad.redis.com

Redis Launchpad is the central place to find example apps that are built on Redis. Redis Launchpad is basically a hub of over 75+ sample applications built on Redis. It provides developers and architects an easy, tangible way to find and visualize numerous sample apps that use Redis as a real-time data platform and primary database in one central location. Here you can dive into high-quality sample apps that show different architectures, data modeling, data storage, and commands, allowing you to start building fast apps faster.

The Redis Launchpad site is pretty neat. The various apps are categorized by language (JavaScript, Java, Python, .NET, etc.), cater to different industry verticals (financial services, gaming, retail, etc.), use different Redis modules (RedisJSON, RediSearch, RedisAI, RedisTimeSeries, etc.), and showcase varying capabilities. If you’re really interested in how Redis can be used as a primary database, check out cool apps like Feature Creep, Dronification of Crop Insurer, Zindagi, Sahay, Code Red and many more.


#3 Redis University

Level: Beginner, Intermediate

Source – https://university.redis.com

Redis University is an online destination for Redis users and enthusiasts to learn from peers and database experts. It is free to sign up and prospective students need only provide their first and last name, email address and affiliated organization. The university-style structure creates a rhythm to help students stay engaged and invested in their work, while also creating a feedback loop for Redis. Online courses run four to six weeks and include video tutorials, quizzes, demos and assignments with firm due dates. Students who complete the course with grades of 65 percent or higher will receive certificates of completion and online badges for their LinkedIn profiles. The initial course is designed as an introduction for developers at the beginning of their journey, but subsequent courses will appeal to a wider audience and include more administration-oriented classes for those in the more advanced stages.


Redis University also gives users of the Redis open source platform the opportunity to explore for free the advanced functionality of the Redis Enterprise solution, while sharing information and discussing applications and use cases with their peers. Redis University will launch two courses this year and run each of them twice, focusing initially on developers.


#4 Redis For Dummies e-Book

Level: Beginner

Whether you are a developer interested in learning Redis or a manager thinking about implementing Redis in your organization, Redis for Dummies can help you get started. Redis for Dummies’ readers are managers and database developers interested in improving the performance of e-commerce, search, internet-of-things, and other data-centric applications.

In the latest edition of the Redis for Dummies e-book, you’ll:

  • See real-world examples of Redis-powered applications
  • Learn about coding with Redis clients in languages like Python, Java, and Node.js
  • Explore Redis clustering and high availability
  • Discover Redis data structures and modules like RediSearch, RedisJSON, RedisGraph, and RedisTimeSeries
  • Visualize, interact with, and manage your databases using RedisInsight

#5 Redis Core Project Repository

Level: Beginner, Intermediate and Advanced

If you’re a developer keen to learn as well as contribute to Redis, then nothing compares to the core Redis project repository. It’s a great resource to dive deep into the nitty-gritty of Redis and its development branches on GitHub.

Here is a list of selected starting points that you might find interesting:


A Bonus – Redis Community Resources

Level: All

See below for all of the community resources and learning programs that Redis offers to the community:

Great place to collaborate with fellow Redis users both virtually and in-person.

Provides an opportunity to learn from like-minded developers, share what you have learned with the broader community, and grow a deeper understanding of Redis by teaching others.

A Redis-centric auditory experience for YOU.

A unique take on the Redis world, RedisPods features monthly interviews and community updates for our developer and DevOps friends about the many ways our guests use Redis. (Formerly known as the Redis Stars Podcast.)

Got questions? Have something to share?

The Redis Discord server is a friendly community of Redis users from all around the world. Here you can ask general questions, get answers, and share your knowledge. If you have a more involved question, or just prefer forums to chat, then check out the Redis user forum.

Further References:

Running Automated Tasks with a CronJob over Kubernetes running on Docker Desktop 4.1.1

Docker Desktop 4.1.1 was released last week. This latest version supports Kubernetes 1.21.5, which was made available recently by the Kubernetes community. A couple of new features were announced with this version of Kubernetes. For example, CronJobs graduated to stable for the first time; Immutable Secrets and ConfigMaps graduated (Secrets and ConfigMaps are mutable by default, which is beneficial for pods that can consume changes, but mutating Secrets and ConfigMaps can also cause problems if a bad configuration is pushed for pods that use them); and IPv4/IPv6 dual-stack support was introduced (dual-stack support in Kubernetes means that pods, services, and nodes can get both IPv4 and IPv6 addresses; in Kubernetes 1.21, dual-stack networking graduated from alpha to beta and is now enabled by default). You can find the list of all the new features introduced under this link.

Docker Desktop is FREE for personal use

Yes, you read that correctly. Docker Desktop remains free for small businesses (fewer than 250 employees AND less than $10 million in annual revenue), personal use, education, and non-commercial open source projects. The existing Docker Free subscription has been renamed Docker Personal. There are no changes to Docker Engine or any other upstream open source Docker or Moby project.


CronJob graduated to Stable!

CronJobs were promoted to general availability in Kubernetes v1.21 for the first time. Older Kubernetes versions do not support the batch/v1 CronJob API. CronJobs are meant for performing regular scheduled actions such as backups, report generation, and so on. Each of those tasks should be configured to recur indefinitely (for example: once a day / week / month); you can define the point in time within that interval when the job should start.

To test-drive the CronJob feature of Kubernetes, let us quickly leverage Docker Desktop. You can refer to this link to download Docker Desktop for Mac. Once you install Docker Desktop, go to the Preferences tab as shown in the image.

Select the “Kubernetes” tab and check the “Enable Kubernetes” box.

Verifying Kubectl version:

kubectl version --short
Client Version: v1.21.5
Server Version: v1.21.5

Checking the Kubernetes Component Status


kubectl get --raw '/healthz?verbose'
[+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
healthz check passed

Test driving CronJobs

CronJobs (previously ScheduledJobs), used primarily for performing regular scheduled actions such as backups, report generation, and so on, had been a beta feature since Kubernetes 1.8! With 1.21, we finally get to see this widely used API graduate to stable. A CronJob creates Jobs on a repeating schedule. One CronJob object is like one line of a crontab (cron table) file: it runs a Job periodically on a given schedule, written in Cron format. In addition, the CronJob schedule supports timezone handling; you can specify the timezone by adding “CRON_TZ=” at the beginning of the CronJob schedule, and it is recommended to always set CRON_TZ.
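
For example, a schedule field that uses the CRON_TZ prefix mentioned above might look like the snippet below (the timezone name is only illustrative):

spec:
  schedule: "CRON_TZ=Asia/Kolkata */5 * * * *"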

Most often, I use Cronitor’s quick and simple editor for cron schedule expressions.

Let us try to write a CronJob manifest that prints the current time and a hello message every minute:

Create a file called crontest.yaml and add the below content:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: crontesting
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kube cluster
          restartPolicy: OnFailure

Running the CronJob

kubectl create -f crontest.yaml 
cronjob.batch/crontesting created

Fetching the status

kubectl get cronjob crontesting
NAME          SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
crontesting   */1 * * * *   False     0        <none>          28s

You can further fetch the status over Docker Desktop Dashboard UI too as shown below:

As you can see from the results of the command, the cron job has not scheduled or run any jobs yet. Watch for the job to be created in around one minute:

Watching the Job

kubectl get jobs --watch
NAME                   COMPLETIONS   DURATION   AGE
crontesting-27250869   1/1           1s         43s
crontesting-27250870   0/1                      0s
crontesting-27250870   0/1           0s         0s
crontesting-27250870   1/1           1s         1s

Now you’ve seen one running job scheduled by the “crontesting” cron job. You can stop watching the job and view the cron job again to see that it scheduled the job:

kubectl get cronjob crontesting
NAME          SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
crontesting   */1 * * * *   False     0        8s              3m4s

You should see that the cron job crontesting successfully scheduled a job at the time specified in LAST SCHEDULE. There are currently 0 active jobs, meaning that the job has completed or failed.
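
If you want to inspect what the job actually printed, one way (a minimal sketch; the generated job name below is taken from the output above and yours will differ) is to find the pod that the Job created via its job-name label and read its logs:

kubectl get pods --selector=job-name=crontesting-27250870
kubectl logs <pod-name>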

Deleting a Cron Job

When you don’t need a cron job any more, delete it with kubectl delete cronjob <cronjob name>:

kubectl delete cronjob crontesting

Also Read:

How to setup GPS Module with Raspberry Pi and perform Google Map Geo-Location Tracking in Real-Time

NEO-6M GPS Module with EPROM is a complete GPS module that uses the latest technology to give the best possible positioning information and includes a larger built-in 25 x 25mm active GPS antenna with a UART TTL socket. A battery is also included so that you can obtain a GPS lock faster. This is an updated GPS module that can be used with ardupilot mega v2. This GPS module gives the best possible position information, allowing for better performance with your Ardupilot or other Multirotor control platform.

The GPS module has serial TTL output; it has four pins: TX, RX, VCC, and GND. You can download the u-center software for configuring the GPS, changing the settings and much more. It is really good software (see link below).

Table of Contents

  1. Intent
  2. Hardware
  3. Software
  4. Connect GPS Module to Raspberry Pi
  5. Plotting the GPS Values over Google Maps
  6. Stream Data Over PubNub

Intent

This tutorial shows you detailed instructions on how to connect a GPS module to the Raspberry Pi (works for Arduino too), fetch the latitude and longitude values, and plot them over Google Maps using PubNub in real time.

Hardware

  • Raspberry Pi/Arduino
  • NEO-6M GPS Module with EPROM

Software

  • Flash the Raspberry Pi SD card with the OS using Etcher

Connect the GPS module to the Raspberry Pi.

There are only 4 wires (F to F), so it’s a simple connection.

  • Neo-6M → RPi
  • VCC to Pin 1, which is 3.3v
  • TX to Pin 10, which is RX (GPIO15)
  • RX to Pin 8, which is TX (GPIO14)
  • GND to Pin 6, which is GND

Turn Off the Serial Console

By default, the Raspberry Pi uses the UART as a serial console. We need to turn off that functionality so that we can use the UART for our own application. Open a terminal session on the Raspberry Pi.

Step 1. Backup the file cmdline.txt

sudo cp /boot/cmdline.txt /boot/cmdline_backup.txt 

Step 2. Edit cmdline.txt and remove the serial interface

sudo nano /boot/cmdline.txt

Step 3. Delete console=ttyAMA0,115200

Once you delete it, save the file by pressing Ctrl X, Y, and Enter.
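
For reference, before the edit a typical cmdline.txt on an older Raspberry Pi OS image might look roughly like the line below (yours will differ); the console=ttyAMA0,115200 entry is the part you delete:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait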

Step 4. Edit /etc/inittab

sudo nano /etc/inittab 

Step 5. Find ttyAMA0

You can find ttyAMA0 by pressing Ctrl W and typing ttyAMA0 on the search line

Press Home > insert a # symbol to comment out that line and Ctrl X, Y, Enter to save.

sudo reboot

Step 6. Test the GPS

Open a terminal session and type

sudo apt-get install gpsd gpsd-clients

Step 7. Start the serial port:

stty -F /dev/ttyAMA0 9600

Now start GPSD:

sudo gpsd /dev/ttyAMA0 -F /var/run/gpsd.sock

Step 8. Final Results

cgps -s

Fetching the Values

Clone the repository

git clone https://github.com/collabnix/cherrybot
cd cherrybot/pubnub/

Fetching the GPS values

python3 gps.py
Latitude=12.9814865and Longitude=77.6683425
Latitude=12.9814848333and Longitude=77.6683436667
Latitude=12.9814841667and Longitude=77.6683451667
Latitude=12.9814818333and Longitude=77.6683461667
Latitude=12.9814853333and Longitude=77.6683491667
Latitude=12.9814783333and Longitude=77.6683485
Latitude=12.9814701667and Longitude=77.6683466667
Latitude=12.981464and Longitude=77.668345
Latitude=12.9814586667and Longitude=77.6683438333
Latitude=12.9814525and Longitude=77.6683428333
Latitude=12.9814458333and Longitude=77.6683421667
Latitude=12.9814395and Longitude=77.6683421667
Latitude=12.9814331667and Longitude=77.668342
Latitude=12.981428and Longitude=77.6683425
Latitude=12.981423and Longitude=77.6683428333
Latitude=12.9814185and Longitude=77.6683431667
Latitude=12.9814146667and Longitude=77.6683436667
Latitude=12.9814095and Longitude=77.6683443333
Latitude=12.9814056667and Longitude=77.6683456667
Latitude=12.981401and Longitude=77.668346
Latitude=12.9813966667and Longitude=77.66834

Plotting the GPS Values over Google Maps

Stream Data Over PubNub

If you haven’t already done so, sign up for a free PubNub account before you begin this step.

Change directory

Change directory into the examples directory containing the gps_simpletest.py file and install the PubNub Python SDK.

pip3 install pubnub

Import PubNub Package

import pubnub
from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub
from pubnub.callbacks import SubscribeCallback
from pubnub.enums import PNOperationType, PNStatusCategory

Configure a PubNub instance with your publish/subscribe Keys

pnconfig = PNConfiguration()
pnconfig.subscribe_key = "YOUR SUBSCRIBE KEY"
pnconfig.publish_key = "YOUR PUBLISH KEY"
pnconfig.ssl = False
pubnub = PubNub(pnconfig)

Then to publish, place a publishing callback somewhere near the beginning of your code. You can write whatever you want for the callback, but we’ll leave it blank as we don’t really need it for now.

def publish_callback(result, status):
    pass
    # Handle PNPublishResult and PNStatus

Here is where you decide what data you want to publish. Since we are building just a simple GPS tracking device, we’re just going to be dealing with the latitude and longitude coordinates.

When you want to publish multiple variables in one JSON, you must create a dictionary like so:

dictionary = {"DATA 1 NAME": gps.DATA1, "DATA 2 NAME": gps.DATA2}

So in our case we would write:

dictionary = {"latitude": gps.latitude, "longitude": gps.longitude}

And then to publish that data, you would format the dictionary like this:

pubnub.publish().channel("CHANNEL").message(dictionary).pn_async(publish_callback)

It is best to place the dictionary and publishing lines within an “if gps.DATA is not None” check to avoid any program failures, as in the sketch below.
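
Putting those pieces together, a minimal publishing loop might look like the following sketch (the gps object, the channel name and the sleep interval are placeholders for whatever your own script uses):

import time

def publish_callback(result, status):
    pass  # Handle PNPublishResult and PNStatus if needed

while True:
    # Only publish when the GPS module actually has a fix
    if gps.latitude is not None and gps.longitude is not None:
        dictionary = {"latitude": gps.latitude, "longitude": gps.longitude}
        pubnub.publish().channel("CHANNEL").message(dictionary).pn_async(publish_callback)
    time.sleep(1)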

Visualize your GPS Data with Google Maps

It’s time to visualize our GPS data in a way that humans can understand.

We’re just going to create a small HTML page that will grab GPS data from our PubNub channel and graph the data with a geolocation API.

Google Maps API

The Google Maps API is a universal tool that is not only one of the cheaper APIs for a greater amount of API calls but also has a rich and expansive toolset for developers. The GPS data is not only more accurate than most other APIs, but also has extensive tools such as “ETA” that uses Google’s geographical terrain data.

So if you ever want to build a serious GPS tracking app with PubNub, Google Maps is the way to go.


You’ll first need to get a Google Maps API Key

Once that’s done, create an .html file and copy-paste the code below (explanation of the code is below as well).

<!DOCTYPE html>
<html>
  <head>
    <title>Simple Map</title>
    <meta name="viewport" content="initial-scale=1.0">
    <meta charset="utf-8">
    <style>
      /* Always set the map height explicitly to define the size of the div
       * element that contains the map. */
      #map {
        height: 100%;
      }
      /* Optional: Makes the sample page fill the window. */
      html, body {
        height: 100%;
        margin: 0;
        padding: 0;
      }
    </style>
    <script src="https://cdn.pubnub.com/sdk/javascript/pubnub.4.23.0.js"></script>
  </head>
  <body>
    <div id="map"></div>
    <script>
  // the smooth zoom function
  function smoothZoom (map, max, cnt) {
      if (cnt >= max) {
          return;
      }
      else {
          z = google.maps.event.addListener(map, 'zoom_changed', function(event){
              google.maps.event.removeListener(z);
              smoothZoom(map, max, cnt + 1);
          });
          setTimeout(function(){map.setZoom(cnt)}, 80); // 80ms is what I found to work well on my system -- it might not work well on all systems
      }
  } 
    var pubnub = new PubNub({
    subscribeKey: "YOUR SUBSCRIBE KEY",
    ssl: true
  });  
  var longitude = 30.5;
  var latitude = 50.5;
  pubnub.addListener({
      message: function(m) {
          // handle message
          var channelName = m.channel; // The channel for which the message belongs
          var channelGroup = m.subscription; // The channel group or wildcard subscription match (if exists)
          var pubTT = m.timetoken; // Publish timetoken
          var msg = m.message; // The Payload
          longitude = msg.longitude;
          latitude = msg.latitude;
          var publisher = m.publisher; //The Publisher
    var myLatlng = new google.maps.LatLng(latitude, longitude);
    var marker = new google.maps.Marker({
        position: myLatlng,
        title:"PubNub GPS"
    });
    // To add the marker to the map, call setMap();
    map.setCenter(marker.position);
    smoothZoom(map, 14, map.getZoom());
    marker.setMap(map);
      },
      presence: function(p) {
          // handle presence
          var action = p.action; // Can be join, leave, state-change or timeout
          var channelName = p.channel; // The channel for which the message belongs
          var occupancy = p.occupancy; // No. of users connected with the channel
          var state = p.state; // User State
          var channelGroup = p.subscription; //  The channel group or wildcard subscription match (if exists)
          var publishTime = p.timestamp; // Publish timetoken
          var timetoken = p.timetoken;  // Current timetoken
          var uuid = p.uuid; // UUIDs of users who are connected with the channel
      },
      status: function(s) {
          var affectedChannelGroups = s.affectedChannelGroups;
          var affectedChannels = s.affectedChannels;
          var category = s.category;
          var operation = s.operation;
      }
  });
  pubnub.subscribe({
      channels: ['ch1'],
  });
      var map;
      function initMap() {
        map = new google.maps.Map(document.getElementById('map'), {
          center: {lat: latitude, lng: longitude},
          zoom: 8
        });
      }
    </script>
    <script src="https://maps.googleapis.com/maps/api/js?key=AIzaSyBLuWQHjBa9SMVVDyyqxqTpR2ZwnxwcbGE&callback=initMap"
    async defer></script>
  </body>
</html>

This part of the code is responsible for rendering our map on the HTML page.

<style>
  /* Always set the map height explicitly to define the size of the div
   * element that contains the map. */
  #map {
    height: 100%;
  }
  /* Optional: Makes the sample page fill the window. */
  html, body {
    height: 100%;
    margin: 0;
    padding: 0;
  }
</style>

Just a little below it, we enter a div id tag to tell where we want the map to render:

<div id="map"></div>

Here we simply import the PubNub JS SDK to enable PubNub data streaming for our GPS data:

<script src="https://cdn.pubnub.com/sdk/javascript/pubnub.4.23.0.js"></script>

We must also import the Google Maps API with this script tag:

<script src="https://maps.googleapis.com/maps/api/js?key=YOURAPIKEY&callback=initMap"async defer></script>

NOTE: The rest of the code is encapsulated within one script tag, so don’t be alarmed if we jump around in explaining this final part of the code.

In order to stream our data, instantiate a PubNub instance:

var pubnub = new PubNub({
    subscribeKey: "YOUR SUBSCRIBE KEY",
    ssl: true
  });

Then we instantiate a PubNub listener with the following code.

pubnub.addListener({
      message: function(m) {
          // handle message
          var channelName = m.channel; // The channel for which the message belongs
          var channelGroup = m.subscription; // The channel group or wildcard subscription match (if exists)
          var pubTT = m.timetoken; // Publish timetoken
          var publisher = m.publisher; //The Publisher
          
          var msg = m.message; // The Payload
          //extract and save the longitude and latitude data from your incoming PubNub message
          longitude = msg.longitude;
          latitude = msg.latitude;
          
        //Create a new Google Maps instance with updated GPS coordinates
      var myLatlng = new google.maps.LatLng(latitude, longitude);
      //Create a marker instance with the coordinates
      var marker = new google.maps.Marker({
          position: myLatlng,
          title:"PubNub GPS"
      });
      
      //center the map with the maker position
      map.setCenter(marker.position);
      //Optional: create a zooming animation when the gps changes coordinates
      smoothZoom(map, 14, map.getZoom());
      // To add the marker to the map, call setMap();
      marker.setMap(map);
      },
      presence: function(p) {
          // handle presence
          var action = p.action; // Can be join, leave, state-change or timeout
          var channelName = p.channel; // The channel for which the message belongs
          var occupancy = p.occupancy; // No. of users connected with the channel
          var state = p.state; // User State
          var channelGroup = p.subscription; //  The channel group or wildcard subscription match (if exists)
          var publishTime = p.timestamp; // Publish timetoken
          var timetoken = p.timetoken;  // Current timetoken
          var uuid = p.uuid; // UUIDs of users who are connected with the channel
      },
      status: function(s) {
          var affectedChannelGroups = s.affectedChannelGroups;
          var affectedChannels = s.affectedChannels;
          var category = s.category;
          var operation = s.operation;
      }
  });

In order to avoid syntax errors, place a subscriber instance right below the listener.

pubnub.subscribe({
      channels: ['YOUR CHANNEL NAME'],
  });

As you can see, we open up incoming messages with the following line of code.

var msg = m.message; // The Payload

And then extract the variables we desire based on the sent JSON.

longitude = msg.longitude;
latitude = msg.latitude;

We then format the data variables in accordance with a Google Maps object.

var myLatlng = new google.maps.LatLng(latitude, longitude);

To set a Google marker on our GPS coordinates we create a Google Maps marker object.

var marker = new google.maps.Marker({
          position: myLatlng,
          title:"Title of Marker"
      });

Then add the marker to your Google Maps object by calling setMap().

marker.setMap(map);

Of course, it would be nice to center our map on the marker so we can actually see it, so we center the map on the marker’s position.

map.setCenter(marker.position);

This is optional, but if you want to add a smooth zooming animation every time you locate a marker, call a smoothZoom function like so.

smoothZoom(map, 14, map.getZoom());

And implement the smoothZoom function somewhere.

function smoothZoom (map, max, cnt) {
      if (cnt >= max) {
          return;
      }
      else {
          z = google.maps.event.addListener(map, 'zoom_changed', function(event){
              google.maps.event.removeListener(z);
              smoothZoom(map, max, cnt + 1);
          });
          setTimeout(function(){map.setZoom(cnt)}, 80); // 80ms is what I found to work well on my system -- it might not work well on all systems
      }
  } 

Lastly we’ll need to initialize the map so we write:

var map;
     function initMap() {
       map = new google.maps.Map(document.getElementById('map'), {
         center: {lat: latitude, lng: longitude},
         zoom: 8
       });
     }

And set the initial values of your latitude and longitude variables to wherever you want.

var longitude = 30.5;
var latitude = 50.5;

And that’s it!

Fetch the values over Google Maps

open frontend.html

Top Kubernetes Tools You Need for 2021 – Devtron

Thanks to Collabnix community members Abhinav Dubey, Ashutosh Kale and Vinodkumar Mandalapu for all the collaboration and contribution towards this blog post series.

What’s the biggest benefit you’ve seen for your business or team from adopting Kubernetes? What are the primary reasons your organization is using Kubernetes? Which areas of the Kubernetes tech stack need to mature the most to make it easier to deploy cloud-native apps? Portworx by Pure Storage commissioned a new survey of enterprise users to assess the state of Kubernetes, to find out how its adoption and usage evolved in the last 12 months, and to see what the future may hold. They also explored how the pandemic impacted IT users’ attitudes toward their jobs. Interestingly, around 68% of the respondents said they increased their usage of Kubernetes as a result of the pandemic, primarily to accelerate their deployment of new applications and increase their use of automation. Reducing IT costs was also a significant factor, and more than a quarter of respondents said they expect to reduce costs by 30% or more annually as a result of using Kubernetes.

In our first blog, we talked about the rising pain of Enterprise businesses and discussed how Popeye solves that problem. In our second blog, we looked at popular tools like K3d and Portainer in detail. In this blog post, we will discuss Devtron – an open source software delivery workflow for Kubernetes written in Go.

GitHub - devtron-labs/devtron: Software Delivery Workflow For Kubernetes

Devtron is a free-to-use open source platform providing a ‘seamless,’ ‘implementation agnostic uniform interface’ across the Kubernetes life cycle, integrated with widely used open source and commercial tools. It runs a self-serve model with a slick user experience. It’s a no-code solution, written in Go, for all your deployments over Kubernetes, and it helps you monitor various CI/CD metrics like build logs, deployment failures, lead time, deployment size and others through an interactive dashboard.

Why Devtron?

There are plenty of tools in the ocean of DevOps which serve various use-cases, such as Prometheus for monitoring, Jenkins for continuous integration, Argo CD for continuous delivery, and Clair or Trivy for security, among others. But the major challenge is that these tools are completely isolated and don’t interact with each other, so it becomes very difficult for the DevOps team to manage different tools simultaneously. This lack of integration was a pain point and a gap that needed to be filled.

https://collabnix.com/top-10-kubernetes-tool-you-need-for-2021-part-2/

Devtron integrates with existing open-source systems like Argo CD, Argo Workflow, Clair, Hibernator, Grafana, Prometheus, Casbin, and many others, and adds capabilities on top of them to enable self-serve for developers and DevOps. It supports multi-cluster deployments: you can connect multiple clusters to it and deploy applications across them. Devtron is an application-first way of looking at Kubernetes, meaning deep integrations with existing open-source and commercial software to quickly onboard state-of-the-art systems. They call it ‘The AppOps approach.’

Features

No code self-serve DevOps platform

  • Workflow which understands the domain of Kubernetes, testing, CD, SecOps so that you don’t have to write scripts
  • Reusable and composable pipelines so that workflows are easy to construct and visualize

Multi-cloud/Multi-cluster deployment

  • Devtron gives you the ability to deploy your applications to multiple clusters/clouds from the same dashboard

Built-in SecOps tools and integration

  • UI driven hierarchical security policy (global, cluster, environment and application) for efficient policy management
  • Integration with Clair for vulnerability scanning

UI-enabled Application debugging dashboard

  • Application centric view for K8s components
  • Built-in monitoring for CPU, RAM, HTTP status codes and latency
  • Advanced logging with grep and json search
  • Access all manifests securely, e.g. with secret obfuscation
  • Auto issue identification

Enterprise-grade access control and compliances

  • Easy to control roles and permissions for users; you can also club users with similar roles and provide the required permissions through the slick user interface.

Automated GitOps-based deployment using Argo CD

  • Automated git repository and application manifest management
  • Reduces complexity (configuration, access control) in adopting GitOps practices
  • GitOps backed by Postgres for easier analysis

Getting Started

System Requirements

  • 2+ CPU cores
  • 4GB+ of free memory
  • 20GB+ free disk space

Installation

Now that we have understood Devtron, its need, system requirements and features, we are ready to move ahead with its installation. Installing Devtron is quite straightforward with a few commands. Devtron can be installed using helm3 (recommended), helm2 or kubectl.

For this demonstration we will be using helm3, as recommended by the official documentation. If you don’t have helm3 installed on your system, please refer to the docs for installation.

[NOTE: The only prerequisite to install Devtron is a running Kubernetes cluster.]

Let’s move ahead with the installation using helm3. Please follow the commands below for successful installation.

First, we need to add the Devtron repo to Helm’s known repo list. Execute the following command to add it:

$ helm repo add devtron https://helm.devtron.ai

After adding the Helm repo, execute the below command to install Devtron. It will initiate the Devtron operator, which spins up all the Devtron micro-services one by one in about 15-20 minutes.

$ helm install devtron devtron/devtron-operator --create-namespace --namespace devtroncd

To check the status of the installation, please execute the following command. If the installation is still in progress, it will print Downloaded. And when the installation is complete, it prints Applied.

$ kubectl -n devtroncd get installers installer-devtron -o jsonpath='{.status.sync.status}'

After the successful installation, we need to access the Devtron dashboard. If you are installing Devtron on a cloud provider, run the following command. It will give you the address of the LoadBalancer created to access the service.

$ kubectl get svc -n devtroncd devtron-service -o jsonpath='{.status.loadBalancer.ingress}'

But if you are installing Devtron on your local system, change the service type from LoadBalancer to NodePort in devtron-service and then access the dashboard at localhost:NodePort; one way to do this is sketched below.
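
A minimal sketch using kubectl patch (adjust to taste):

kubectl -n devtroncd patch svc devtron-service -p '{"spec": {"type": "NodePort"}}'
kubectl -n devtroncd get svc devtron-service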

You can install Devtron over a minikube as well as a k3s cluster and access the dashboard easily on your local system.

Now, after accessing the dashboard, we need to find the credentials to log in to the platform. The admin username is admin, and for the password, run the command below.

$ kubectl -n devtroncd get secret devtron-secret -o jsonpath='{.data.ACD_PASSWORD}' | base64 -d

Tada! We have successfully installed Devtron on our system and are ready to deploy our applications over Kubernetes without any hassle.

Application Deployment

After the successful installation of Devtron, we are ready to begin with our first deployment. Before creating an application and its deployment, we need to make sure Global Configuration is configured properly. After configuring it, please follow along with this tutorial to create your first application and deploy it over Kubernetes using Devtron.


For more information about the tool, please refer to the GitHub link.

Top Kubernetes Tools You Need for 2021 – K3d and Portainer

Thanks to Collabnix community members Abhinav Dubey, Ashutosh Kale and Vinodkumar Mandalapu for all the collaboration and contribution towards this blog post series.

In the last blog post, we discussed the rising pain of Enterprise businesses and the popular tool “Popeye – A Kubernetes Cluster Sanitizer”. In this blog, we will cover two of the most widely used Kubernetes tools – Portainer and K3d.



K3d – k3s in Docker

K3d, as the name itself suggests (k3s-in-docker), is a wrapper around k3s – lightweight Kubernetes – that runs it in Docker. It provides a seamless experience for k3s cluster management with some straightforward commands. K3d is efficient enough to create and manage single-node as well as High Availability k3s clusters with just a few commands.

 

In this blog, let’s see how easily we can spin up a k3s cluster in Docker within seconds and start using it for development on your machine.

Prerequisites

To install and run k3d, you must have Docker and a Linux shell. If you are using Windows or macOS, Docker Desktop is preferred, and for the Linux shell on Windows you can use WSL2. For Linux operating systems, the Docker CLI is the preferred solution.

In this demonstration I will be using macOS for setting up the k3s cluster in Docker with k3d and managing it.

Installation

K3d is platform agnostic and can be installed on Windows, macOS and Linux.

In this demonstration, we will be using macOS and will install k3d using the installation script. Please follow along with the commands for a successful installation.

The below command will install k3d on your system using the installation script.

wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash

For installing k3d on other platforms, please check out the official documentation.

After the successful installation, you are ready to create your cluster using k3d and run k3s in Docker within seconds.

To verify the installation, please run the following command –

k3d version

If everything works fine, the output will look similar to the following image.

Getting Started

As we have successfully installed k3d on our local-machine, now it’s time to get our hands dirty! 

Step 1: Create a k3d Cluster

k3d cluster create k3d-demo-cluster

Step 2: Switch context to the newly created cluster.

kubectl config use-context k3d-k3d-demo-cluster

Step 3: Check the nodes running in the k3d cluster

k3d node list

You’ll get the list of available nodes running in the cluster.

Step 4: Firing kubectl commands

About kubectl – the Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters.

The below command will list the nodes available in our cluster:

kubectl get nodes -o wide

Now, as you can observe, the cluster is up and running and we can play around with it; you can create and deploy your applications over the cluster.

Deleting Cluster

K3d is known for its rapid creation and deletion of clusters. After the work is done, you can easily delete your cluster with the following command.

 k3d cluster delete k3d-demo-cluster

You can also create a k3d High Availability cluster and add as many nodes as you want within seconds; a minimal sketch is shown below.
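
For example, something like the commands below would create a multi-server cluster and then attach an extra agent node (the names are illustrative; check k3d --help for the exact flags supported by your k3d version):

k3d cluster create ha-demo --servers 3 --agents 2
k3d node create extra-agent --cluster ha-demo --role agent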

To learn more about k3d, please visit their Github Repository.


Portainer

Portainer is an open-source lightweight management graphical user interface that allows you to easily manage your Docker or Kubernetes environments. Portainer enables centralized configuration, management and security of Kubernetes and Docker environments, allowing you to deliver ‘Containers-as-a-Service’ to your users quickly, easily and securely.

You just can’t miss the list of the top 7 new features introduced in Portainer CE 2.0.

Top 8 critical factors that differentiate Portainer from other existing UI tools like Rancher

Learn how to deploy Portainer on Kubernetes Cluster in 5 Minutes

Portainer makes it easy for Platform Managers to centrally configure, manage and secure complex containerized environments, regardless of where they are hosted. It gives end users – typically developers – the ability to deploy, manage and troubleshoot containerized apps, and it supports an API that allows Portainer to integrate with industry-standard CI/CD tools.

Portainer is available on Windows, Linux, and Mac. It works with Kubernetes, Docker, and Docker Swarm. A few of its essential features include:

  • Application Deployment
  • Observability and Monitoring
  • Governance and Security
  • Platform Management

Application Deployment

At its heart, Portainer helps developers deploy cloud-native applications into containers simply, quickly and securely. Portainer has its own simplified GUI, which makes it easy for users to get started. For advanced users, Portainer incorporates an API that allows it to connect to CI/CD tools or third-party dashboards/deployment tools.

For users who prefer to deploy applications manually using Portainer’s simplified UI, Portainer offers 4 deployment options:

  1. A step-by-step Application Deployment Form, which includes a number of input validators that help to reduce errors
  2. The option to use existing Compose or Kubernetes manifest files for code-based deployment
  3. Use Helm charts against Kubernetes endpoints
  4. Use our “click to deploy” Application Templates.

Portainer’s Application Deployment Form is by far the easiest and quickest way to get your application up and running. You don’t need to know how to write complex deployment code for Docker or Kubernetes, nor any need to know how best to deploy your application atop any orchestrator. You simply need to be able to answer some natural language questions about your application and Portainer will determine the best way to deploy it.

Observability:

To do their jobs properly, developers need to know how their apps are behaving inside their containers. This capability is captured under the category of ‘observability’. To monitor container-based apps properly you need to have direct and deep visibility into the underlying container platform. Containers can crash and be rescheduled in seconds, often meaning failures could go unnoticed by end users, but this doesn’t mean there isn’t a problem.

Through its close integration with the underlying container platforms, Portainer is able to help users not only identify issues in the application deployment but also identify issues in the container platform itself and provide a live visualization of what’s running where and how.

Governance and Security

Orchestration platforms like Kubernetes are insecure by default, which is a problem for any organization looking to deploy K8s at any scale.

Portainer helps Platform Engineers secure their environments by allowing them to control who can do what, logging who does what and providing the ability to backup and restore the Portainer configuration database. RBAC and oAuth are cornerstones of the Governance framework. At its core, Portainer is a powerful policy and governance platform and an essential element in the IT stack.

Platform Management

The ability to set up, manage and configure a containerized environment is central to the Platform Engineer (or SRE) role.

Portainer Business’ platform management functionality allows engineers to both configure the orchestrator and then set up configuration ‘rules’ which define what users of the platform (typically developers) can and can’t do inside the environment

Getting Started

In this blog, I will show you how to set up Portainer on a Linux environment. In this case, I am using Amazon Linux 2.

Prerequisite:

  1. Amazon linux 2 Instance
  2. Docker 

Setup:

Create a Docker volume using the below command; the volume lives outside the scope of any container.

docker volume create portainer_data 

Portainer itself is provided as a Docker image; all you have to do is start the container:

docker run -d -p 9000:9000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce

Now you can look at the running Docker container with the below command:

 docker ps

The above command starts the server at port 9000. You can access the admin page via http://localhost:9000. You will be prompted to set up the admin account to sign into the portal.

In this case, Amazon Linux 2 is used, so you will access the portal at <public-ip>:9000. Soon you will be able to see the dashboard as shown below.

Now enter a password of your choice, confirm it, and click on Create user. After creating the user, we have four options to select from (options might differ based on the version available):

  1. Local: Manage the Docker environment where Portainer is running.
  2. Remote: Connect Portainer to a remote Docker environment using the Docker API over TCP.
  3. Agent: Connect directly to a Portainer agent running inside a Swarm cluster.
  4. Azure: Connect to Microsoft Azure to manage Azure Container Instances (ACI).

In this case, for simplicity, we will be using Local. Select the first option, Local, and click on Connect. Once connected, you can see the Home page as below.

Now click on Local above; it will give you the full dashboard for that endpoint.

You can even explore various options under Settings like Extensions, Users, Endpoints and Registries.

Based on the version you use, the endpoints might differ. To check the available options for endpoints, click on Endpoints and then click on Add Endpoint.

Now you have different options like Docker, Kubernetes and Edge Agent, which you can configure accordingly; these endpoint options will always vary based on the latest version available at that point in time.

Downloading images and creating containers

To get started with creating images and containers, navigate to Images and choose an image of your choice to pull. If you have added your Docker Hub account under Registries, it will be used to download the appropriate image.

Now you can opt to start the container from the command line or from within Portainer itself.

Now run a SonarQube container on the server, for example with a command like the one below.
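
A minimal, illustrative example (the host port 9001 is chosen only so it doesn’t clash with Portainer already listening on 9000; Docker will auto-generate a container name such as the vibrant_wilson seen later):

docker run -d -p 9001:9000 sonarqube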

Once the container is running, access the portal to view the status of the newly created container (in this example, SonarQube).

You can now click on the new container, in this case vibrant_wilson, which is the SonarQube container, to see more details.

Click on Stats and Logs to see the respective details.

From the Portainer console

The console simplifies creating a container from within Portainer itself. Navigate to Containers > Add container to pull the image from the registry and enable the necessary advanced settings to create the container.

Click on Containers and then click on Add container as shown below.

Now you can see the nginx container started automatically once it is deployed.

You can even verify the same in your Linux instance.

While there are more features than what meets the eye, this is all you need to get started with Portainer.

You can have a look at the below blog from our Docker Captain Ajeet Raina on how to set up Portainer using Helm charts and additional features.

In the next blog post, we will discuss Devtron – an open source alternative to Heroku. Stay tuned!

Top Kubernetes Tools You Need for 2021 – Popeye

Thanks to Collabnix community members Abhinav Dubey and Ashutosh Kale for all the collaboration and contribution towards this blog post series.

Kubernetes and cloud native technologies have continued to gain momentum. As per the latest CNCF survey report, Kubernetes use in production has increased to 83%, up from 78% last year. Use of containers in production is the norm. Kubernetes simplifies the work of developers and operators, increasing agility and accelerating software delivery. While Kubernetes has been popular with developers for a number of years, it’s now moving steadily into production environments and well on its way to entering the IT mainstream.

The Rising Pain for Enterprise Businesses

As enterprises accelerate digital transformation and embrace the Kubernetes ecosystem, some enterprise businesses are experiencing growing pains due to a lack of expertise, complex deployments and challenges in integrating new and existing systems and deployments. In the latest State of Kubernetes 2021 survey conducted by VMware, almost 96% of survey respondents reported difficulties selecting a Kubernetes distribution. Lack of internal experience and expertise remains the biggest challenge when making the choice (55%), but it has dropped 14% since last year, suggesting rapid improvement. Other notable challenges included: hard to hire needed expertise (37%), Kubernetes/cloud native speed of change (32%), and too many solutions to choose from (30%). Most of these challenges are likely to take care of themselves as more people gain familiarity and the ecosystem continues to mature.

In this blog, we will target the major pain of choosing the right tool for Kubernetes. We picked the most popular and effective tools based on reviews, votes, and social media comments, and listed them below:

#1 Popeye – A Kubernetes Cluster Sanitizer


Popeye is a utility that scans live Kubernetes clusters and reports potential issues with deployed resources and configurations. It sanitizes your cluster based on what’s deployed and not what’s sitting on disk. By scanning your cluster, it detects misconfigurations and helps you to ensure that best practices are in place, thus preventing future headaches. It aims at reducing the cognitive overload one faces when operating a Kubernetes cluster in the wild. Furthermore, if your cluster employs a metric-server, it reports potential resources over/under allocations and attempts to warn you should your cluster run out of capacity.

How is Popeye different from other existing tools?

Popeye is a read-only tool; it does not alter any of your Kubernetes resources in any way! Popeye scans your cluster for best practices and potential issues. Currently, Popeye only looks at nodes, namespaces, pods and services. More will come soon! We are hoping Kubernetes friends will pitch in to make Popeye even better. The aim of the sanitizers is to pick up on misconfigurations, i.e. things like port mismatches, dead or unused resources, metrics utilization, probes, container images, RBAC rules, naked resources, etc…

Installation

Popeye works best with Kubernetes 1.13+. You can containerize Popeye and run it directly in your Kubernetes clusters as a one-off or a CronJob (see the containerized sketch below). It is available on Linux, OSX and Windows platforms. Binaries for Linux, Windows and Mac are available as tarballs on the release page.
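
If you prefer the containerized route, one common approach (a sketch assuming the derailed/popeye image from Docker Hub and a kubeconfig in the default location) is:

docker run --rm -it -v "$HOME/.kube:/root/.kube" derailed/popeye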

OSX

$ brew install derailed/popeye/popeye

Linux or Windows

Building from source: Popeye was built using Go 1.12+. In order to build Popeye from source, you must:

  1. Clone the repository
  2. Add the following command in your go.mod file
replace (
  github.com/derailed/popeye => MY_POPEYE_CLONED_GIT_REPO
)
  3. Build and run the executable
go run main.go

  • Cloning outside of GOPATH

git clone https://github.com/derailed/popeye
cd popeye
# Build and install
go install
# Run
popeye

Checking the version:

$ popeye version
 ___     ___ _____   _____                       K          .-'-.     
| _ \___| _ \ __\ \ / / __|                       8     __|      `\  
|  _/ _ \  _/ _| \ V /| _|                         s   `-,-`--._   `\
|_| \___/_| |___| |_| |___|                       []  .->'  a     `|-'
  Biffs`em and Buffs`em!                            `=/ (__/_       /  
                                                      \_,    `    _)  
                                                         `----;  |     
Version:   0.9.7
Commit:    4f12a172495e2acb7a621b29cffa924f1cd72580
Date:      2021-07-20T14:57:08Z
Logs:      /var/folders/7k/2jz4csrs4ss65_x0slwbl1540000gn/T/popeye.log

Popeye a cluster

$ popeye

GENERAL [DOCKER-DESKTOP]
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Connectivity...................................................✅
  · MetricServer...................................................💥


CLUSTER (1 SCANNED)                            💥 0 😱 0 🔊 0 ✅ 1 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · Version.........................................................✅
    ✅ [POP-406] K8s version OK.


CLUSTERROLES (60 SCANNED)                    💥 0 😱 0 🔊 15 ✅ 45 100٪
┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅┅
  · admin...........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · cluster-admin............................................... ....✅
  · edit.............................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · kubeadm:getnodes.................................................✅
  · system:aggregate-to-admin.......................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:aggregate-to-edit........................................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:aggregate-to-view...............................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:auth-delegator..................................🔊
    🔊 [POP-400] Used? Unable to locate resource reference.
  · system:basic-user.......................................✅
 

If you have just enabled Kubernetes under Docker Desktop with no Pods in operation, then you might end up with the below score:

In order for Popeye to do his work, the signed-in user must have enough RBAC oomph to get/list the resources mentioned above.

Sample Popeye RBAC Rules:

---
# Popeye ServiceAccount.
apiVersion: v1
kind:       ServiceAccount
metadata:
  name:      popeye
  namespace: popeye

---
# Popeye needs get/list access on the following Kubernetes resources.
apiVersion: rbac.authorization.k8s.io/v1
kind:       ClusterRole
metadata:
  name: popeye
rules:
- apiGroups: [""]
  resources:
   - configmaps
   - deployments
   - endpoints
   - horizontalpodautoscalers
   - namespaces
   - nodes
   - persistentvolumes
   - persistentvolumeclaims
   - pods
   - secrets
   - serviceaccounts
   - services
   - statefulsets
  verbs:     ["get", "list"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources:
  - clusterroles
  - clusterrolebindings
  - roles
  - rolebindings
  verbs:     ["get", "list"]
- apiGroups: ["metrics.k8s.io"]
  resources:
  - pods
  - nodes
  verbs:     ["get", "list"]

---
# Binds Popeye to this ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind:       ClusterRoleBinding
metadata:
  name: popeye
subjects:
- kind:     ServiceAccount
  name:     popeye
  namespace: popeye
roleRef:
  kind:     ClusterRole
  name:     popeye
  apiGroup: rbac.authorization.k8s.io

Learn more about Popeye through this GitHub link.

In the next blog post, we will discuss K3d – a lightweight Kubernetes that runs in Docker – and Portainer. Stay tuned!

https://collabnix.com/top-10-kubernetes-tool-you-need-for-2021-part-2/

References:

Getting Started with Docker and AI workloads on NVIDIA Jetson AGX Xavier Developer Platform

If you’re an IoT Edge developer looking to build and deploy production-grade, end-to-end AI robotics applications, then check out the highly powerful and robust NVIDIA Jetson AGX Xavier developer platform. The NVIDIA® Jetson AGX Xavier™ Developer Kit provides a full-featured development platform designed for IoT Edge developers to easily create and deploy end-to-end AI robotics applications. This development platform is supported by NVIDIA JetPack and DeepStream SDKs, as well as the CUDA®, cuDNN, and TensorRT software libraries.

AGX loosely stands for “Autonomous Machines Accelerator Technology”. The developer kit provides you with all the necessary tools you need to get started right away. And because it’s powered by the new NVIDIA Xavier processor, you now have more than 20x the performance and 10x the energy efficiency of its predecessor, the NVIDIA Jetson TX2.

At just 100 x 87 mm, Jetson AGX Xavier offers big workstation performance at 1/10 the size of a workstation. This makes it ideal for autonomous machines like delivery and logistics robots, factory systems, and large industrial UAVs. NVIDIA® Jetson™ brings accelerated AI performance to the Edge in a power-efficient and compact form factor. Together with NVIDIA JetPack™ SDK, these Jetson modules open the door for you to develop and deploy innovative products across all industries.

Top 5 Compelling Features of AGX Xavier Kit

  • Unlike the NVIDIA Jetson Nano 2GB/4GB, NVIDIA Jetson AGX Xavier comes with inbuilt 32GB eMMC 5.1 storage. That means you really don’t need to buy a separate SD card to install/run the operating system, saving you time getting started with the developer kit.
  • Compared to NVIDIA Jetson Nano, the new Jetson AGX Xavier module makes AI-powered autonomous machines possible, running in as little as 10W and delivering up to 32 TOPs.
  • AGX Xavier supports up to 6 cameras (36 via virtual channels). Cool, isn’t it?
  • AGX Xavier comes with inbuilt 32 GB 256-bit LPDDR4x memory at 136.5 GB/s, powerful enough to run applications like DeepStream.
  • Check out production-ready products based on Jetson AGX Xavier available from Jetson ecosystem partners.

A Bonus..

Jetson AGX Xavier module with thermal solution:

  • Reference carrier board
  • 65W power supply with AC cord
  • Type C to Type A cable (USB 3.1 Gen2)
  • Type C to Type A adapter (USB 3.1 Gen 1)

Comparing Jetson Nano Vs Jetson AGX Xavier

Features | Jetson Nano | Jetson AGX Xavier
GPU | 128-core Maxwell @ 921 MHz | 512-core Volta @ 1.37 GHz
Memory | 4 GB LPDDR4, 25.6 GB/s | 16 GB 256-bit LPDDR4, 137 GB/s
Storage | MicroSD | 32 GB eMMC 5.1
USB | (4x) USB 3.0 + Micro-USB 2.0 | (3x) USB 3.1 + (4x) USB 2.0
Power | 5W / 10W | 10W / 15W / 30W
PCI-Express lanes | 4 lanes PCIe Gen 2 | 16 lanes PCIe Gen 4
CPU (ARM) | 4-core ARM A57 @ 1.43 GHz | 8-core ARM Carmel v.8.2 @ 2.26 GHz
Tensor cores | – | 64
Video encoding | 1x 4K30 (H.265), 2x 1080p60 (H.265) | 4x 4K60 (H.265), 16x 1080p60 (H.265), 32x 1080p30 (H.265)

Getting Started

If you’re in India, I recommend you buy it from an authorized dealer and not directly from Amazon, Inc., which is selling it at a higher price. I recommend buying it from here. Thanks to ARM, Inc for delivering this powerful kit as part of the ARM Innovator Influencer programme.

Installing Docker

By default, the latest version of Docker is shipped with the development platform. You can verify it by running the below command:

xavier@xavier-desktop:~$ sudo docker version
[sudo] password for xavier: 
Client: Docker Engine - Community
 Version:           20.10.8
 API version:       1.41
 Go version:        go1.16.6
 Git commit:        3967b7d
 Built:             Fri Jul 30 19:54:37 2021
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:46 2021
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
xavier@xavier-desktop:~$ 

Identify the Jetson board

Clone the repository

git clone https://github.com/jetsonhacks/jetsonUtilities

Execute the Python Script:

python3 jetsonInfo.py 
NVIDIA Jetson AGX Xavier [16GB]
 L4T 32.3.1 [ JetPack 4.3 ]
   Ubuntu 18.04.3 LTS
   Kernel Version: 4.9.140-tegra
 CUDA NOT_INSTALLED
   CUDA Architecture: 7.2
 OpenCV version: NOT_INSTALLED
   OpenCV Cuda: NO
 CUDNN: NOT_INSTALLED
 TensorRT: NOT_INSTALLED
 Vision Works: NOT_INSTALLED
 VPI: NOT_INSTALLED
 Vulcan: 1.1.70
xavier@xavier-desktop:~/jetsonUtilities$ 

Installing Jtop

Lucky you! I created a Docker image for the Jetson Nano a few weeks back that you can leverage on the Xavier developer kit too. Check this out:

docker run --rm -it --gpus all -v /run/jtop.sock:/run/jtop.sock ajeetraina/jetson-stats-nano jtop

If you are new to Docker and want to keep it simple, no worries: just install the Python module and you are good to go.

sudo -H pip install -U jetson-stats
Collecting jetson-stats
  Downloading https://files.pythonhosted.org/packages/70/57/ce1aec95dd442d94c3bd47fcda77d16a3cf55850fa073ce8c3d6d162ae0b/jetson-stats-3.1.1.tar.gz (85kB)
    100% |████████████████████████████████| 92kB 623kB/s 
Building wheels for collected packages: jetson-stats
  Running setup.py bdist_wheel for jetson-stats ... done
  Stored in directory: /root/.cache/pip/wheels/5e/b0/97/f0f8222e76879bf04b6e8c248154e3bb970e0a2aa6d12388f9
Successfully built jetson-stats
Installing collected packages: jetson-stats
Successfully installed jetson-stats-3.1.1
xavier@xavier-desktop:~/jetsonUtilities$ 

Don’t be surprised if you encounter the message below. Reboot your system and re-run the command:

$jtop
I can't access jetson_stats.service.
Please logout or reboot this board.
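In practice, that just means rebooting the board and launching jtop again once it comes back up:

sudo reboot
jtop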

Using Jtop to see the GPU and CPU details


Displaying Xavier Information


Displaying the Xavier Release Info

xavier@xavier-desktop:~$ jetson_release -v
 - NVIDIA Jetson AGX Xavier [16GB]
   * Jetpack 4.3 [L4T 32.3.1]
   * NV Power Mode: MODE_15W - Type: 2
   * jetson_stats.service: active
 - Board info:
   * Type: AGX Xavier [16GB]
   * SOC Family: tegra194 - ID:25
   * Module: P2888-0001 - Board: P2822-0000
   * Code Name: galen
   * CUDA GPU architecture (ARCH_BIN): 7.2
   * Serial Number: 1420921055981
 - Libraries:
   * CUDA: NOT_INSTALLED
   * cuDNN: NOT_INSTALLED
   * TensorRT: NOT_INSTALLED
   * Visionworks: NOT_INSTALLED
   * OpenCV: NOT_INSTALLED compiled CUDA: NO
   * VPI: NOT_INSTALLED
   * Vulkan: 1.1.70
 - jetson-stats:
   * Version 3.1.1
   * Works on Python 2.7.17
xavier@xavier-desktop:~$ 

Displaying Jetson variables

export | grep JETSON
declare -x JETSON_BOARD="P2822-0000"
declare -x JETSON_BOARDIDS=""
declare -x JETSON_CHIP_ID="25"
declare -x JETSON_CODENAME="galen"
declare -x JETSON_CUDA="NOT_INSTALLED"
declare -x JETSON_CUDA_ARCH_BIN="7.2"
declare -x JETSON_CUDNN="NOT_INSTALLED"
declare -x JETSON_JETPACK="4.3"
declare -x JETSON_L4T="32.3.1"
declare -x JETSON_L4T_RELEASE="32"
declare -x JETSON_L4T_REVISION="3.1"
declare -x JETSON_MACHINE="NVIDIA Jetson AGX Xavier [16GB]"
declare -x JETSON_MODULE="P2888-0001"
declare -x JETSON_OPENCV="NOT_INSTALLED"
declare -x JETSON_OPENCV_CUDA="NO"
declare -x JETSON_SERIAL_NUMBER="1420921055981"
declare -x JETSON_SOC="tegra194"
declare -x JETSON_TENSORRT="NOT_INSTALLED"
declare -x JETSON_TYPE="AGX Xavier [16GB]"
declare -x JETSON_VISIONWORKS="NOT_INSTALLED"
declare -x JETSON_VPI="NOT_INSTALLED"
declare -x JETSON_VULKAN_INFO="1.1.70"
xavier@xavier-desktop:~$ 
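These variables can be handy in your own shell scripts. A trivial illustration using the values shown above:

echo "Running on ${JETSON_MACHINE} with JetPack ${JETSON_JETPACK} (L4T ${JETSON_L4T})"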

Installing nvidia-docker

sudo apt install nvidia-docker2

Install nvidia-container-runtime package:

sudo apt install nvidia-container-runtime

Update docker daemon

sudo vim /etc/docker/daemon.json

Ensure that /etc/docker/daemon.json contains the path to nvidia-container-runtime:

{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
sudo pkill -SIGHUP dockerd
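
The pkill -SIGHUP command above asks dockerd to reload its configuration. As a quick sanity check (a generic Docker command, nothing specific to this setup), you can confirm that the nvidia runtime has been registered:

sudo docker info | grep -i runtime

The output should list nvidia among the available runtimes.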

Running the DeepStream Container

DeepStream Overview

DeepStream is a streaming analytic toolkit to build AI-powered applications. It takes the streaming data as input – from USB/CSI camera, video from file or streams over RTSP, and uses AI and computer vision to generate insights from pixels for better understanding of the environment. DeepStream SDK can be the foundation layer for a number of video analytic solutions like understanding traffic and pedestrians in smart city, health and safety monitoring in hospitals, self-checkout and analytics in retail, detecting component defects at a manufacturing facility and others. 

  • DeepStream 5.1 provides Docker containers for both dGPU and Jetson platforms.
  • These containers provide a convenient, out-of-the-box way to deploy DeepStream applications by packaging all associated dependencies within the container.
  • The associated Docker images are hosted on the NVIDIA container registry in the NGC web portal at https://ngc.nvidia.com.
  • They use the nvidia-docker package, which enables access to the required GPU resources from containers.
  • DeepStream builds on top of several NVIDIA libraries from the CUDA-X stack such as CUDA, TensorRT, Triton Inference server and multimedia libraries.
  • TensorRT accelerates the AI inference on NVIDIA GPU. DeepStream abstracts these libraries in DeepStream plugins, making it easy for developers to build video analytic pipelines without having to learn all the individual libraries.

Please Note:

The dGPU container is called deepstream and the Jetson container is called deepstream-l4t.

  • Unlike the container in DeepStream 3.0, the dGPU DeepStream 5.1 container supports DeepStream application development within the container.
  • It contains the same build tools and development libraries as the DeepStream 5.1 SDK.
  • In a typical scenario, you build, execute and debug a DeepStream application within the DeepStream container.
  • Once your application is ready, you can use the DeepStream 5.1 container as a base image to create your own Docker container holding your application files (binaries, libraries, models, configuration files, etc.).

The above section describes the features supported by the DeepStream Docker container for the dGPU and Jetson platforms.

To run the container:

Allow external applications to connect to the host’s X display:

xhost +
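
When you are finished testing, you can re-enable X access control (standard X11 behaviour, not specific to DeepStream):

xhost -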

Running DeepStream Docker container

DeepStream applications can be deployed in containers using NVIDIA container Runtime. The containers are available on NGC, NVIDIA GPU cloud registry.

sudo docker run -it --rm --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples
sudo docker ps
CONTAINER ID   IMAGE                                             COMMAND       CREATED          STATUS         PORTS     NAMES
ad38d8f4612d   nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples   "/bin/bash"   10 seconds ago   Up 9 seconds             romantic_hopper
xavier@xavier-desktop:~$ 

Enter the DeepStream container and explore the sample app structure:

root@xavier-desktop:/opt/nvidia/deepstream/deepstream-5.1# tree -L 2
.
|-- LICENSE.txt
|-- LicenseAgreement.pdf
|-- README
|-- bin
|   |-- deepstream-app
|   |-- deepstream-appsrc-test
|   |-- deepstream-audio
|   |-- deepstream-dewarper-app
|   |-- deepstream-gst-metadata-app
|   |-- deepstream-image-decode-app
|   |-- deepstream-image-meta-test
|   |-- deepstream-infer-tensor-meta-app
|   |-- deepstream-mrcnn-app
|   |-- deepstream-nvdsanalytics-test
|   |-- deepstream-nvof-app
|   |-- deepstream-opencv-test
|   |-- deepstream-perf-demo
|   |-- deepstream-segmentation-app
|   |-- deepstream-test1-app
|   |-- deepstream-test2-app
|   |-- deepstream-test3-app
|   |-- deepstream-test4-app
|   |-- deepstream-test5-app
|   |-- deepstream-testsr-app
|   |-- deepstream-transfer-learning-app
|   `-- deepstream-user-metadata-app
|-- doc
|   `-- nvidia-tegra
|-- install.sh
|-- lib
|   |-- gst-plugins
|   |-- libiothub_client.so
|   |-- libiothub_client.so.1 -> libiothub_client.so
|   |-- libnvbufsurface.so -> /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurface.so
|   |-- libnvbufsurftransform.so -> /usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so
|   |-- libnvds_amqp_proto.so
|   |-- libnvds_audiotransform.so
|   |-- libnvds_azure_edge_proto.so
|   |-- libnvds_azure_proto.so
|   |-- libnvds_batch_jpegenc.so
|   |-- libnvds_csvparser.so
|   |-- libnvds_dewarper.so
|   |-- libnvds_dsanalytics.so
|   |-- libnvds_infer.so
|   |-- libnvds_infer_custom_parser_audio.so
|   |-- libnvds_infer_server.so
|   |-- libnvds_infercustomparser.so
|   |-- libnvds_inferutils.so
|   |-- libnvds_kafka_proto.so
|   |-- libnvds_logger.so
|   |-- libnvds_meta.so
|   |-- libnvds_mot_iou.so
|   |-- libnvds_mot_klt.so
|   |-- libnvds_msgbroker.so
|   |-- libnvds_msgconv.so -> libnvds_msgconv.so.1.0.0
|   |-- libnvds_msgconv.so.1.0.0
|   |-- libnvds_msgconv_audio.so -> libnvds_msgconv_audio.so.1.0.0
|   |-- libnvds_msgconv_audio.so.1.0.0
|   |-- libnvds_nvdcf.so
|   |-- libnvds_nvtxhelper.so
|   |-- libnvds_opticalflow_dgpu.so
|   |-- libnvds_opticalflow_jetson.so
|   |-- libnvds_osd.so
|   |-- libnvds_redis_proto.so
|   |-- libnvds_utils.so
|   |-- libnvdsgst_helper.so
|   |-- libnvdsgst_inferbase.so
|   |-- libnvdsgst_meta.so
|   |-- libnvdsgst_smartrecord.so
|   |-- libnvdsgst_tensor.so
|   |-- libtritonserver.so
|   |-- pyds.so
|   |-- setup.py
|   `-- triton_backends
|-- samples
|   |-- configs
|   |-- models
|   |-- prepare_classification_test_video.sh
|   |-- prepare_ds_trtis_model_repo.sh
|   |-- streams
|   `-- trtis_model_repo
|-- sources
|   |-- SONYCAudioClassifier
|   |-- apps
|   |-- gst-plugins
|   |-- includes
|   |-- libs
|   |-- objectDetector_FasterRCNN
|   |-- objectDetector_SSD
|   |-- objectDetector_Yolo
|   `-- tools
|-- uninstall.sh
`-- version

Did you know?

DeepStream applications can be orchestrated on the edge using Kubernetes on GPU. Sample Helm chart to deploy DeepStream application is available on NGC.

What’s Next?

In my next blog post, we will deep dive into the DeepStream sample applications and see how to implement a face-mask detection system using NVIDIA DeepStream.

Further Readings

Hack into the DJI Robomaster S1 Battle Bot in 5 Minutes

When I saw the RoboMaster S1 for the first time, I was completely stoked and impressed by its sturdy look. It’s a powerful, hefty machine weighing around 7 pounds that feels like a heavy tank. The DJI RoboMaster S1 is undoubtedly one of the most complex battling robots that you can build yourself and control with your smartphone. You can install a special app to play games, drive around, and even shoot small water-based pellets at targets or other RoboMasters.

Everything about the S1 is extremely sturdy and well-built. It’s mostly heavy-duty plastic. The DJI RoboMaster S1 arrives completely disassembled requiring you to build it from the ground up using the 46 customizable components that make up the unit. It displays AI technology in six cool ways, including: line recognition, vision marker recognition, people recognition, clap recognition, gesture recognition, and S1 recognition. Line recognition allows users to program the S1 to cruise along a line placed on the ground. AI technology lets the S1 recognize gestures, sounds, and even other S1 robots. Working with the RoboMaster S1 opens the doorway to AI learning, giving you a practical introduction to the technologies of tomorrow.

In my previous blog post, I showed how to assemble the Robomaster S1 for the first time, and how to connect to it via the Python and Scratch programming languages. In this blog post, I will show you how to hack into the Robomaster S1 bot in a few easy steps. Let’s get started.

What do you need?

Clone the repository

git clone https://github.com/collabnix/robomaster

  • Unzip the Android SDK Platform‐Tools somewhere in your system
  • Use the Intelligent Controller Micro USB Port and connect the S1 to your computer.
  • Start the Robomaster S1 application. Go to the Lab, create a new Python application and paste the following code:
def root_me(module):
    __import__ = rm_log.__dict__['__builtins__']['__import__']
    return __import__(module, globals(), locals(), [], 0)

builtins = root_me('builtins')
subprocess = root_me('subprocess')
proc = subprocess.Popen('/system/bin/adb_en.sh', shell=True,
                        executable='/system/bin/sh',
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
  • Ensure that you run the Code within the S1 Lab. If you followed the steps correctly there should be no compilation errors. The Console will show: Execution Complete
  • Ensure that you don’t close the S1 Application! Open an Explorer window and go to the directory which holds the earlier extracted Android Platform Tools. Open a PowerShell in this directory (Shift + Right‐Click)
  • It’s time to run the ADB command to list the devices:
.\adb.exe devices

You should see your S1 listed as an attached device.

  • Execute:
.\adb.exe shell

DJI Specific Commands

dji
dji_amt_board       dji_derivekey       dji_monitor         dji_verify
dji_blackbox        dji_hdvt_uav        dji_net.sh          dji_vision
dji_camera          dji_log_control.sh  dji_network
dji_chkotp          dji_mb_ctrl         dji_sw_uav
dji_cpuburn         dji_mb_parser       dji_sys

Checking IP address

 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
8: rndis0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 0a:f8:f6:bb:55:64 brd ff:ff:ff:ff:ff:ff
    inet 192.168.42.2/24 brd 192.168.42.255 scope global rndis0
       valid_lft forever preferred_lft forever
9: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 60:60:1f:cd:95:f7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.1/24 brd 192.168.2.255 scope global wlan0
       valid_lft forever preferred_lft forever

Checking the hunter.py file

./hunter.py
vision_ctrl.enable_detection(rm_define.vision_detection_marker)  

Checking Memory Stats

127|root@xw607_dz_ap0002_v4:/system/bin # cat /proc/meminfo
MemTotal:         271708 kB
MemFree:           59076 kB
Buffers:           18700 kB
Cached:            94776 kB
SwapCached:            0 kB
Active:           117648 kB
Inactive:          58020 kB
Active(anon):      62724 kB
Inactive(anon):      136 kB
Active(file):      54924 kB
Inactive(file):    57884 kB
Unevictable:         500 kB
Mlocked:               0 kB
HighTotal:             0 kB
HighFree:              0 kB
LowTotal:         271708 kB
LowFree:           59076 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                36 kB
Writeback:             0 kB
AnonPages:         62696 kB
Mapped:            12308 kB
Shmem:               176 kB
Slab:              12712 kB
SReclaimable:       6248 kB
SUnreclaim:         6464 kB
KernelStack:        2152 kB
PageTables:         1300 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      135852 kB
Committed_AS:     341612 kB
VmallocTotal:     745472 kB
VmallocUsed:      153220 kB
VmallocChunk:     432132 kB

Top Command

root@xw607_dz_ap0002_v4:/system/bin # top



User 8%, System 13%, IOW 0%, IRQ 0%
User 126 + Nice 0 + Sys 203 + Idle 1138 + IOW 2 + IRQ 0 + SIRQ 1 = 1470

  PID PR CPU% S  #THR     VSS     RSS PCY UID      Name
14020  1   3% S    28 146468K   8588K  fg root     /system/bin/dji_camera
  247  3   3% S    24 213128K  14876K  fg root     /system/bin/dji_vision
  483  1   2% S     8 112412K  11072K unk root     /data/python_files/bin/python
  233  1   2% S    22  44460K   5232K  fg root     /system/bin/dji_hdvt_uav
  239  0   1% S    15  31368K   4464K  fg root     /system/bin/dji_sw_uav
  237  0   0% S    13  24208K   4092K  fg root     /system/bin/dji_network
   41  0   0% S     1      0K      0K  fg root     kworker/0:1
  245  1   0% S     6  31904K  20492K  fg root     /system/bin/dji_blackbox
   69  1   0% S     1      0K      0K  fg root     mmcqd/0
  243  1   0% S    27  50832K   9300K  fg root     /system/bin/dji_sys

References

Building a Real-Time Crowd Face Mask Detection System using Docker on NVIDIA Jetson Nano

Did you know? Around 94% of AI adopters are using or plan to use containers within one year. Containers are revolutionizing a variety of workloads across the enterprise IT space, and AI adopters seem keen on using this technology to improve their AI workloads. AI adopters are intrigued by the benefits of scalability and speed that containers can bring to their AI deployments. In one survey, 28% cite increased scalability as a benefit, and 27% say containers can decrease time to deployment. Another 26% even indicate containers will help lower costs. It seems that AI adopters are still figuring out some of the other ways containers can benefit their AI deployments. For example, only 19% cite the benefit of increased portability, which will likely only grow in importance as AI infrastructure becomes more hybrid in nature.

The power of modern AI is now available for makers, learners, and embedded developers everywhere. Thanks to NVIDIA for introducing a $99 NVIDIA® Jetson Nano Developer Kit. It is a small, powerful computer that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing. All in an easy-to-use platform that runs in as little as 5 watts. It’s simpler than ever to get started! Just insert a microSD card with the system image, boot the developer kit, and begin using the same NVIDIA JetPack SDK used across the entire NVIDIA Jetson™ family of products.

MaskCam is an open source project hosted on GitHub. It is a prototype reference design for a Jetson Nano-based smart camera system. MaskCam can be run on a Jetson Nano Developer Kit, or on a Jetson Nano module (SOM). It measures crowd face mask usage in real time, with all AI computation performed at the edge. It detects and tracks people in its field of view and determines whether they are wearing a mask via an object detection, tracking, and voting algorithm. It uploads statistics (not videos) to the cloud, where a web GUI can be used to monitor face mask compliance in the field of view. It saves interesting video snippets to local disk (e.g., a sudden influx of lots of people not wearing masks) and can optionally stream video via RTSP.

Please Note: Maskcam was designed to use the Raspberry Pi High Quality Camera but will also work with pretty much any USB webcam that is supported on Linux.

Technology

  • Written in Python
  • Runs perfectly well with JetPack 4.4.1 or 4.5
  • Edge AI processing is handled by NVIDIA’s DeepStream video analytics framework, YOLOv4-tiny, and Tryolabs’ Norfair tracker.
  • Reports statistics to and receives commands from the cloud using MQTT and a web-based GUI.
  • The software is containerized; for evaluation, it can be easily installed on a Jetson Nano Dev Kit using Docker with just a couple of commands.
  • For production, MaskCam can run under balenaOS, which makes it easy to manage and deploy multiple devices.

In this tutorial, you will learn how to implement a COVID-19 crowd face mask detector with Jetson Nano & Docker in 5 minutes.

Table of Contents

  1. Intent
  2. Hardware
  3. Software
  4. Preparing Your Jetson Nano

Hardware

  • A Jetson Nano Dev Kit running JetPack 4.4.1 or 4.5
  • An external DC 5 volt, 4 amp power supply connected through the Dev Kit’s barrel jack connector (J25). (See these instructions on how to enable barrel jack power.) This software makes full use of the GPU, so it will not run with USB power.
  • A USB webcam attached to your Nano

  • A 5V 4Ampere Charger
  • 64GB SD card
  • Another computer with a program that can display RTSP streams — we suggest VLC or QuickTime.

Software

Preparing Your Jetson Nano

1. Flashing the Jetson SD Card Image

  • Unzip the SD card image
  • Insert SD card into your system.
  • Bring up the Etcher tool and select the target SD card to which you want to flash the image (a command-line alternative is sketched below).
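
If you prefer the command line over Etcher, a rough Linux equivalent looks like the sketch below. The zip and image file names here are illustrative, and /dev/sdX must be replaced with your actual SD card device (double-check it with lsblk first, because dd will overwrite whatever device you point it at):

unzip jetson-nano-sd-card-image.zip        # illustrative file name
lsblk                                      # identify your SD card device, e.g. /dev/sdX
sudo dd if=sd-blob.img of=/dev/sdX bs=1M status=progress conv=fsync

Once the Nano boots from the flashed card, you can confirm the board model with lshw, as shown below: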
sudo lshw -C system
pico2                       
    description: Computer
    product: NVIDIA Jetson Nano Developer Kit
    serial: 1422919082257
    width: 64 bits
    capabilities: smp cp15_barrier setend swp

CUDA Compiler and Libraries

ajeetraina@ajeetraina-desktop:~/meetup$ nvcc --version
-bash: nvcc: command not found
ajeetraina@ajeetraina-desktop:~/meetup$ export PATH=${PATH}:/usr/local/cuda/bin
ajeetraina@ajeetraina-desktop:~/meetup$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64
ajeetraina@ajeetraina-desktop:~/meetup$ source ~/.bashrc
ajeetraina@ajeetraina-desktop:~/meetup$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_21:14:42_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
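
Note that the two export commands above only affect the current shell session. To make them persistent (a minimal sketch, assuming the default CUDA install location shown above), append them to your ~/.bashrc:

echo 'export PATH=${PATH}:/usr/local/cuda/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64' >> ~/.bashrc
source ~/.bashrc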

DeviceQuery

$ pwd

/usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
ajeetraina@ajeetraina-desktop:/usr/local/cuda/samples/1_Utilities/deviceQuery$ sudo make
/usr/local/cuda-10.2/bin/nvcc -ccbin g++ -I../../common/inc  -m64    -gencode arch=compute_30,code=sm_30 -gencode arch=compute_32,code=sm_32 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_72,code=sm_72 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o deviceQuery.o -c deviceQuery.cpp
/usr/local/cuda-10.2/bin/nvcc -ccbin g++   -m64      -gencode arch=compute_30,code=sm_30 -gencode arch=compute_32,code=sm_32 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_72,code=sm_72 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75 -o deviceQuery deviceQuery.o
mkdir -p ../../bin/aarch64/linux/release
cp deviceQuery ../../bin/aarch64/linux/release
ajeetraina@ajeetraina-desktop:/usr/local/cuda/samples/1_Utilities/deviceQuery$ ls
Makefile  NsightEclipse.xml  deviceQuery  deviceQuery.cpp  deviceQuery.o  readme.txt
ajeetraina@ajeetraina-desktop:/usr/local/cuda/samples/1_Utilities/deviceQuery$ ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X1"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    5.3
  Total amount of global memory:                 3956 MBytes (4148387840 bytes)
  ( 1) Multiprocessors, (128) CUDA Cores/MP:     128 CUDA Cores
  GPU Max Clock rate:                            922 MHz (0.92 GHz)
  Memory Clock rate:                             13 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS

2. Verifying if it is shipped with Docker Binaries

ajeetraina@ajeetraina-desktop:~$ sudo docker version
[sudo] password for ajeetraina: 
Client:
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        369ce74a3c
 Built:             Fri Feb 28 23:47:53 2020
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       369ce74a3c
  Built:            Wed Feb 19 01:06:16 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu1~18.04.2
  GitCommit:        
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:        
 docker-init:
  Version:          0.18.0
  GitCommit:       

3. Checking Docker runtime

Starting with JetPack 4.2, NVIDIA has introduced a container runtime with Docker integration. This custom runtime enables Docker containers to access the underlying GPUs available in the Jetson family.

pico@pico1:/tmp/docker-build$ sudo nvidia-docker version
NVIDIA Docker: 2.0.3
Client:
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        369ce74a3c
 Built:             Fri Feb 28 23:47:53 2020
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       369ce74a3c
  Built:            Wed Feb 19 01:06:16 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu1~18.04.2
  GitCommit:        
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:        
 docker-init:
  Version:          0.18.0
  GitCommit:

4. Configuring Docker Daemon

Open the daemon.json ( /etc/docker/daemon.json)


{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

5. Restarting the Docker daemon

systemctl restart docker

6. Install nvidia-container-runtime

sudo apt install nvidia-container-runtime

7. Run the Face Mask detection

Run the below docker command to implement the face mask detection system. The MaskCam container should start running the maskcam_run.py script, using the USB camera as the default input device (/dev/video0). It will produce various status output messages (and error messages, if it encounters problems). If there are errors, the process will automatically end after several seconds. Check the Troubleshooting section for tips on resolving errors.

Otherwise, after 30 seconds or so, it should continually generate status messages (such as Processed 100 frames...). Leave it running (don’t press Ctrl+C, but be aware that the device will start heating up) and continue to the next section to visualize the video!

sudo docker run --runtime nvidia --privileged --rm -it --env MASKCAM_DEVICE_ADDRESS=<your-jetson-ip> -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta

 

Viewing the Live Video Stream

If you scroll through the logs and don’t see any errors, you should find a message like:

Streaming at rtsp://IP_Address:8554/maskcam

where IP_Address is the address that you provided in MASKCAM_DEVICE_ADDRESS previously. If you didn’t provide an address, you’ll see some unknown address label there, but the streaming will still work.

Next, you just need to put the URL into your RTSP streaming viewer (such as VLC) on another computer. If all goes well, you should be rewarded with streaming video from your Nano, with green boxes around faces wearing masks and red boxes around faces not wearing masks.
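
If you prefer launching the viewer from a terminal, you can pass the stream URL straight to VLC (assuming VLC is installed on the viewing machine; replace <your-jetson-ip> with the address you set in MASKCAM_DEVICE_ADDRESS):

vlc rtsp://<your-jetson-ip>:8554/maskcam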

References

Getting Started with Shipa

Are you frustrated with how much time it takes to create, deploy and manage an application on Kubernetes? Wouldn’t it be nice if you could focus on writing and delivering code instead of worrying about Kubernetes objects? Welcome to Shipa.


Shipa is designed to make it simple for developers to run their code on Kubernetes without having to know Kubernetes and for platform engineers to enforce controls and policies. With Shipa’s developer-centric portal, DevOps can eliminate the need for custom Terraform scripts, Helm charts, and YAML files so developers can get started with Kubernetes immediately. At the same time, Platform teams maintain full control over configurations, which reduces the security, cost, and compliance risks from configuration errors.

Shipa is an Application Management Framework. It abstracts the underlying cloud and Kubernetes infrastructure through a developer portal so developers can focus on application deployment and management rather than infrastructure-related objects and manifests. In contrast, platform engineering teams focus on implementing controls and policies for developers’ applications and services using Shipa’s developer portal.


In this blog tutorial, we will walk through the feature-rich Shipa CLI and see how developers can deploy applications directly to the cloud without having to care about the underlying Kubernetes objects. Below is the list of items we will be covering:

  • Installing Shipa CLI on your desktop
  • Adding Shipa instance as a target on your CLI
  • Creating user for login
  • Listing the existing applications
  • Creating & Removing the application
  • Deploying the application
  • Checking the available Platforms
  • Creating & Managing Pool
  • Creating an app and selecting Pool
  • Listing the certificates
  • Checking Logs 
  • Connecting external Kubernetes Cluster to your Shipa Pool
  • Adding Shipa Node in AWS
  • Security Management
  • Create and deploy sample application from CI-CD tool

Installing Shipa CLI tool

In order to use and operate Shipa, you will need to download the Shipa CLI for your operating system (currently available for MacOS, Linux and Windows). Follow the below steps:

MacOS:   https://storage.googleapis.com/shipa-cli/shipa_darwin_amd64
Linux:   https://storage.googleapis.com/shipa-cli/shipa_linux_amd64
Windows: https://storage.googleapis.com/shipa-cli/shipa_windows_amd64.exe

Run the below command in order to setup Shipa CLI tool on your Mac system:

chmod +x shipa_darwin_amd64 && mv -v shipa_darwin_amd64 /usr/local/bin/shipa
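
On a Linux desktop, the equivalent setup using the shipa_linux_amd64 binary listed above would look like this:

chmod +x shipa_linux_amd64 && sudo mv -v shipa_linux_amd64 /usr/local/bin/shipa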

Add your Shipa instance as a target on your CLI

Targets are used to manage the addresses of the remote Shipa servers. Each target is identified by a label and an HTTP/HTTPS address. Shipa’s client requires at least one target to connect to; there is no default target. A user may have multiple targets, but only one will be used at a time.

[Captains-Bay]? >  shipa version
shipa version 1.0.1

[Captains-Bay]? >  shipa target-add default http://34.105.46.12:8080 -s
New target default -> http://34.105.46.12:8080 added to target list and defined as the current target
[Captains-Bay]? >

Creating a user

After configuring the shipa target, we will proceed further and create a user.

[Captains-Bay]? >  shipa user-create admin@shipa.io
Password:
Confirm:
Error: you're not authenticated or your session has expired.
Calling the "login" command...Email: admin@shipa.io
Password:
Password:
Confirm:
Error: this email is already registered

Successfully logged in!

Once you create the admin user, you should be able to log in to the remote Shipa platform.

[Captains-Bay]? >  shipa login
Email: admin@shipa.io
Password:
Successfully logged in!
[Captains-Bay]? >

Shipa requires users to be a member of at least one team in order to create an application or a service instance. Let us first check the list of teams by running the below command:

[Captains-Bay]? >  shipa team-list
+--------+------------------+------+
| Team   | Permissions      | Tags |
+--------+------------------+------+
| admin  | app              |      |
|        | team             |      |
|        | service          |      |
|        | service-instance |      |
|        | cluster          |      |
|        | volume           |      |
|        | volume-plan      |      |
|        | webhook          |      |
+--------+------------------+------+
| system | app              |      |
|        | team             |      |
|        | service          |      |
|        | service-instance |      |
|        | cluster          |      |
|        | volume           |      |
|        | volume-plan      |      |
|        | webhook          |      |
+--------+------------------+------+
[Captains-Bay]? >

Add an SSH key

Next, we need to send a public key to the git server used by Shipa. Run the below command to accomplish this.

[Captains-Bay]? >  shipa key-add my-rsa-key ~/.ssh/id_rsa.pub
Key "my-rsa-key" successfully added!
[Captains-Bay]? >

Listing the applications

Shipa comes with a capability to list all applications that a user has access to. Application access is controlled by teams. If a user’s team has access to an application, then this user also has access to it. Run the below command to list out all the applications:

[Captains-Bay]? >  shipa app-list
+------------------------------+-----------+--------------------------------------------------------+
| Application                  | Units     | Address                                                |
+------------------------------+-----------+--------------------------------------------------------+
| aks-app1                     | 1 started | http://aks-app1.34.82.73.71.nip.io                     |
+------------------------------+-----------+--------------------------------------------------------+
| dashboard                    | 1 started | http://dashboard.34.82.73.71.nip.io                    |
+------------------------------+-----------+--------------------------------------------------------+
| eks-app1                     | 1 started | http://eks-app1.34.82.73.71.nip.io                     |
+------------------------------+-----------+--------------------------------------------------------+
| gke-app1                     | 1 started | http://gke-app1.34.82.73.71.nip.io                     |
+------------------------------+-----------+--------------------------------------------------------+
| gke-app2                     | 1 started | http://gke-app2.34.82.73.71.nip.io                     |
+------------------------------+-----------+--------------------------------------------------------+
| longhorn-app                 | 1 started | http://longhorn-app.34.82.73.71.nip.io                 |
+------------------------------+-----------+--------------------------------------------------------+
| postgres-service-service-app | 1 started | http://postgres-service-service-app.34.82.73.71.nip.io |
+------------------------------+-----------+--------------------------------------------------------+
[Captains-Bay]? >
As shown above, there are multiple applications hosted on various cloud platforms like AWS EKS, GKE, etc.

Application Information

The below command shows information about a specific application: its state, platform, git repository, and more. Users need to be a member of a team that has access to the application to be able to see information about it.

[Captains-Bay]? >  shipa app-info -a dashboard
Application: dashboard
Description:
Tags:
Dependency Files:
Repository: git@34.105.46.12:dashboard.git
Platform: static
Teams: admin
Address: http://dashboard.34.82.73.71.nip.io
Owner: admin@shipa.io
Team owner: admin
Deploys: 1
Pool: theonepool
Quota: 1/4 units
Routing settings:
   1 version => 1 weight

Units [web]: 1
+---------+----------------------------------+---------+---------------+------+
| Version | Unit                             | Status  | Host          | Port |
+---------+----------------------------------+---------+---------------+------+
| 1       | dashboard-web-1-5d58db8779-ztgcs | started | 34.105.121.67 | 8888 |
+---------+----------------------------------+---------+---------------+------+
App Plan:
+---------------+--------+------+-----------+---------+
| Name          | Memory | Swap | Cpu Share | Default |
+---------------+--------+------+-----------+---------+
| autogenerated | 0      | 0    | 100       | false   |
+---------------+--------+------+-----------+---------+
Routers:
+---------+---------+------+------------------------------+--------+
| Name    | Type    | Opts | Address                      | Status |
+---------+---------+------+------------------------------+--------+
| traefik | traefik |      | dashboard.34.82.73.71.nip.io |        |
+---------+---------+------+------------------------------+--------+

Checking the available Platforms

A platform is a well-defined pack with installed dependencies for a language or framework that a group of applications will need. A platform can also be a container template (Docker image).

Platforms are easily extendable and managed by Shipa. Every application runs on top of a platform.

You can list out the platforms by running the below CLI:

[Captains-Bay]? >  shipa platform-list
- go
- java
- nodejs
- php
- python
- static

Listing the app

Verifying Logs

[Captains-Bay]? >  shipa app-log --app collabnix

Removing app

If the application is bound to any service instance, all binds will be removed before the application gets deleted; check the service-unbind command in the Shipa documentation for this. In our case, we can go ahead and use the app-remove option to remove an app smoothly.

[Captains-Bay]? >  shipa app-remove --app collabnix
Are you sure you want to remove app "collabnix"? (y/n) y
---- Removing application "collabnix"...
Failed to remove router backend from database: not found
---- Done removing application. Some errors occurred during removal.
running autoscale checks
finished autoscale checks
[Captains-Bay]? >

Creating an app and selecting a specific pool

Let’s create a new application called collabnix and assign it to a team called admin along with existing pool called gke-longhorn.


[Captains-Bay]? >  shipa app-create collabnix python --team admin --pool gke-longhorn
App "collabnix" has been created!
Use app-info to check the status of the app and its units.
Your repository for "collabnix" project is "git@34.105.46.12:collabnix.git"
[Captains-Bay]? >


Deploying an application

Currently, Shipa supports 4 application deployment options:

  • CI/CD
  • Git
  • app-deploy
  • Docker image
[Captains-Bay]? >  shipa app-deploy . -a collabnix
context args: [.]
Uploading files (0.02MB)... 100.00% Processing ok
 ---> collabnix-v1-build - Successfully assigned shipa-gke-longhorn/collabnix-v1-build to gke-lhcl-default-pool-c6caa3b2-rc9k [default-scheduler]
https://files.pythonhosted.org/packages/98/13/a1d703ec396ade42c1d33df0e1cb691a28b7c08
/

...
 ---> Sending image to repository (34.105.46.12:5000/shipa/app-collabnix:v1)
The push refers to repository [34.105.46.12:5000/shipa/app-collabnix]
b428a7ad5d5f: Pushed
...

OK
running autoscale checks
finished autoscale checks

Listing the Deployments

You can use the app-deploy-list command to list information about deploys for an application, including the available images.

[Captains-Bay]? >  shipa app-deploy-list  -a collabnix
+--------+----------------------------------------------+------------+----------------+-----------------------------+-------+
| Active | Image (Rollback)                             | Origin     | User           | Date (Duration)             | Error |
+--------+----------------------------------------------+------------+----------------+-----------------------------+-------+
| *      | 34.105.46.12:5000/shipa/app-collabnix:v1 (*) | app-deploy | admin@shipa.io | 12 Jun 20 00:13 IST (01:55) |       |
+--------+----------------------------------------------+------------+----------------+-----------------------------+-------+
[Captains-Bay]? >

Listing the certificates

You can run the below command to list an application TLS certificates.


[Captains-Bay]? >  shipa certificate-list -a collabnix
+---------+------------------------------+---------+--------+---------+
| Router  | CName                        | Expires | Issuer | Subject |
+---------+------------------------------+---------+--------+---------+
| traefik | collabnix.34.82.73.71.nip.io | -       | -      | -       |
+---------+------------------------------+---------+--------+---------+

Checking logs

[Captains-Bay]? >  shipa app-log -a collabnix
2020-06-12 00:13:49 +0530 [shipa][api]:   ---> collabnix-web-1-5c667c4fc5-d6v7j - Started container collabnix-web-1 [kubelet, gke-lhcl-default-pool-c6caa3b2-rc9k]
2020-06-12 00:13:49 +0530 [shipa][api]:  ---> 1 of 1 new units ready
...
[Captains-Bay]? >

Using Git

For Shipa, a platform is provisioner dependent. The command below creates a new application using the given name and platform. Once it is completed, it shows the Git repository URL for the application.


[Captains-Bay]? >  shipa app-create collabnix1 python --team admin --pool gke-longhorn
App "collabnix1" has been created!
Use app-info to check the status of the app and its units.
Your repository for "collabnix1" project is "git@34.105.46.12:collabnix1.git"
[Captains-Bay]? >


git push git@34.105.46.12:collabnix1.git  master

…
remote:  ---> Running a security scan
remote:  ---> Found 0 vulnerability(ies)

...
remote: HEAD is now at a0bb216... Added 
remote: .shipa-ci.yml not found and post-receive hook is disabled
To 34.105.46.12:collabnix1.git
 * [new branch]      master -> master
[Captains-Bay]? >

Go to the Shipa Dashboard, click on “Application”, and pick up the endpoint http://collabnix1.34.82.73.71.nip.io/; it will display an error when you access it via the browser.

How to fix it?

If you go into the file blog/settings.py, there is a line called ALLOWED_HOSTS.

Inside that line, you have an entry like xxxx.nip.io; just replace that entry with collabnix1.34.82.73.71.nip.io (the CNAME Shipa gave to your app) and save the file.

With that saved, run the usual git add ., git commit, and git push.

Once deployment is complete, your blog application should be available on: collabnix1.34.82.73.71.nip.io/admin


Accessing Shell

[Captains-Bay]? >  shipa app-shell -a collabnix1
ubuntu@collabnix1-web-1-f96f7bf9-sh4cz:/home/application/current$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
ubuntu@collabnix1-web-1-f96f7bf9-sh4cz:/home/application/current$

Listing the Cluster

Shipa allows registering existing clusters from external provisioners in the platform. Currently, Kubernetes is the only supported external cluster provisioner.

On Shipa, clusters are directly attached to a pool.

[Captains-Bay]? >  shipa cluster-list
+--------------+-------------+--------------------------------------------------------------------------+-------------+---------+--------------+-------+-------+
| Name         | Provisioner | Addresses                                                                | Custom Data | Default | Pools        | Teams | Error |
+--------------+-------------+--------------------------------------------------------------------------+-------------+---------+--------------+-------+-------+
| aks          | kubernetes  | https://aks-ajeet-raina-dns-afc18577.hcp.eastus.azmk8s.io:443            |             | false   | aks          |       |       |
| eks          | kubernetes  | https://D7CB020B4656D5E5BFCC096C529A3BD7.gr7.us-east-1.eks.amazonaws.com |             | false   | eks          |       |       |
| gke          | kubernetes  | https://35.238.48.234                                                    |             | false   | gke          |       |       |
| gke-longhorn | kubernetes  | https://35.205.250.127                                                   |             | false   | gke-longhorn |       |       |
| theonepool   | kubernetes  | 10.64.0.1:443                                                            |             | false   | theonepool   |       |       |
+--------------+-------------+--------------------------------------------------------------------------+-------------+---------+--------------+-------+-------+
[Captains-Bay]? >

[Captains-Bay]? >  shipa app-list -n collabnix1
+-------------+-----------+--------------------------------------+
| Application | Units     | Address                              |
+-------------+-----------+--------------------------------------+
| collabnix1  | 1 started | http://collabnix1.34.82.73.71.nip.io |
+-------------+-----------+--------------------------------------+
[Captains-Bay]? >

Security Management

The command below lists all security scans for a specific image

[Captains-Bay]? >  shipa app-security list -a collabnix1
1. [Deployment] scan at 13 Jun 2020 11:30, 0 vulnerability(es), 0 ignored
[Captains-Bay]? >

Creating a new database and binding your application

Let us try to create a new instance of PostgreSQL and bind it to the collabnix1 app.
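
The exact commands for this are not captured in this walkthrough. As a rough sketch only (the instance name collabnix-db is hypothetical, and the command names and flags are assumed from the service-unbind command mentioned earlier and the postgres-service entry used in the pool configuration; check the Shipa documentation for the exact syntax), the flow would look something like:

shipa service-instance-add postgres-service collabnix-db
shipa service-bind postgres-service collabnix-db -a collabnix1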

Persistent Volume

You can list the existing volume plans via volume-plan-list CLI:

[Captains-Bay]? >  shipa volume-plan-list
Error: you're not authenticated or your session has expired.
Calling the "login" command...
Email: admin@shipa.io
Password:
+----------+---------------+-------+
| Name     | Storage Class | Teams |
+----------+---------------+-------+
| longhorn | longhorn      | []    |
+----------+---------------+-------+
Successfully logged in!
[Captains-Bay]? >

Listing the Volume

[Captains-Bay]? >  shipa volume-list
+---------+----------+--------------+-------+--------------------+------+------------------------------+
| Name    | Plan     | Pool         | Team  | Plan Storage Class | Opts | Binds                        |
+---------+----------+--------------+-------+--------------------+------+------------------------------+
| lh-vol1 | longhorn | gke-longhorn | admin | longhorn           |      | longhorn-app:/mnt/lh-vol1:rw |
+---------+----------+--------------+-------+--------------------+------+------------------------------+
[Captains-Bay]? >

Creating Volume

[Captains-Bay]? >  shipa volume-create collabvol longhorn -p gke-longhorn -t admin --am ReadWriteOnce --capacity=1Gi
Volume successfully created.
[Captains-Bay]? >

Run the below CLI to verify the volumes as shown below:

[Captains-Bay]? >  shipa volume-list
+-----------+----------+--------------+-------+--------------------+------+------------------------------+
| Name      | Plan     | Pool         | Team  | Plan Storage Class | Opts | Binds                        |
+-----------+----------+--------------+-------+--------------------+------+------------------------------+
| collabvol | longhorn | gke-longhorn | admin | longhorn           |      |                              |
| lh-vol1   | longhorn | gke-longhorn | admin | longhorn           |      | longhorn-app:/mnt/lh-vol1:rw |
+-----------+----------+--------------+-------+--------------------+------+------------------------------+
[Captains-Bay]? >

As you see above, it is not bound to anything. So, let’s go ahead and try to bind it using the below command:

[Captains-Bay]? >  shipa volume-bind collabvol /mnt/collabvol -a collabnix1
---- restart the app "collabnix1" ----
---- Updating units [web] ----
 ---> 1 of 1 new units created
 ---> 0 of 1 new units ready
 ---> 1 old units pending termination
  ---> collabnix1-web-3-7d847f5646-n4sh7 - pod has unbound immediate PersistentVolumeClaims (repeated 3 times) [default-scheduler]
  ---> collabnix1-web-3-7d847f5646-n4sh7 - Container image "34.105.46.12:5000/shipa/app-collabnix1:v3" already present on machine [kubelet, gke-lhcl-default-pool-c6caa3b2-rc9k]
  ---> collabnix1-web-3-7d847f5646-n4sh7 - Created container collabnix1-web-3 [kubelet, gke-lhcl-default-pool-c6caa3b2-rc9k]
  ---> collabnix1-web-3-7d847f5646-n4sh7 - Started container collabnix1-web-3 [kubelet, gke-lhcl-default-pool-c6caa3b2-rc9k]
 ---> 1 of 1 new units ready
  ---> collabnix1-web-3-6f6c6d6f58-wl8sn - Stopping container collabnix1-web-3 [kubelet, gke-lhcl-default-pool-c6caa3b2-rc9k]
 ---> Done updating units
Volume successfully bound.
[Captains-Bay]? >

Verifying the mount point

shipa app-shell -a collabnix1
ubuntu@collabnix1-web-3-7d847f5646-n4sh7:/home/application/current$ mount | grep collab
/dev/longhorn/pvc-3c7afeca-af35-11ea-9f87-42010a8401eb on /mnt/collabvol type ext4 (rw,relatime,data=ordered)
ubuntu@collabnix1-web-3-7d847f5646-n4sh7:/home/application/current$

Pool Management

Pools on Shipa can host two types of provisioners:

  • Kubernetes: we can then tie the pool to any K8s cluster
  • Shipa nodes: through Shipa, you can also create nodes on EC2, GCP and Azure using IaaS and attach them to a pool. Shipa nodes are basically Docker nodes that you can use to deploy applications, enforce security and so on, exactly as you would with K8s nodes/clusters.

When you create a pool and you don’t specify a provisioner, it automatically selects Shipa node  (described as “default” when you do a shipa pool-list). See below:

[Captains-Bay]? >  shipa pool-list
+--------------+---------+-------------+---------------+---------+
| Pool         | Kind    | Provisioner | Teams         | Routers |
+--------------+---------+-------------+---------------+---------+
| aks          |         | kubernetes  | admin         | traefik |
| collabpool   |         | default     | admin, system | traefik |
| eks          |         | kubernetes  | admin         | traefik |
| gke          |         | kubernetes  | admin         | traefik |
| gke-longhorn |         | kubernetes  | admin         | traefik |
| theonepool   | default | kubernetes  |               | traefik |
+--------------+---------+-------------+---------------+---------+
[Captains-Bay]? >

Note: right now, pools on Shipa can only host one type of provisioner. It has to be either Shipa or Kubernetes.

[Captains-Bay]? >  shipa pool-add collabpool
Pool successfully registered.
[Captains-Bay]? >  shipa pool-list
+--------------+---------+-------------+---------------+---------+
| Pool         | Kind    | Provisioner | Teams         | Routers |
+--------------+---------+-------------+---------------+---------+
| aks          |         | kubernetes  | admin         | traefik |
| collabpool   |         | default     | admin, system | traefik |
| eks          |         | kubernetes  | admin         | traefik |
| gke          |         | kubernetes  | admin         | traefik |
| gke-longhorn |         | kubernetes  | admin         | traefik |
| theonepool   | default | kubernetes  |               | traefik |
+--------------+---------+-------------+---------------+---------+

Let us go ahead and update the attributes for a specific pool as shown below:

[Captains-Bay]? >  shipa pool-update collabpool --plan k8s
Pool successfully updated.
[Captains-Bay]? >  shipa pool-list
+--------------+---------+-------------+---------------+---------+
| Pool         | Kind    | Provisioner | Teams         | Routers |
+--------------+---------+-------------+---------------+---------+
| aks          |         | kubernetes  | admin         | traefik |
| collabpool   |         | default     | admin, system | traefik |
| eks          |         | kubernetes  | admin         | traefik |
| gke          |         | kubernetes  | admin         | traefik |
| gke-longhorn |         | kubernetes  | admin         | traefik |
| theonepool   | default | kubernetes  |               | traefik |
+--------------+---------+-------------+---------------+---------+
[Captains-Bay]? >

Attached is a sample YAML file that you can use to update collabpool or create new pools. You can apply it with the command: shipa pool-update collabpool collabpool-config.yaml

ShipaPool: collabpool
Resources:
   General:
      Setup:
         Force: true
         Default: false
         Public: false
         Provisioner: kubernetes
      Plan:
         Name: k8s
      AppQuota:
         Limit: 8
      Security:
         Disable-Scan: false
         Scan-Platform-Layers: true
         Ignore-Components:
            - systemd
            - bash
            - glibc
            - tar
         Ignore-CVES:
            - CVE-2019-18276
            - CVE-2016-2781
            - CVE-2019-9169
      Access:
         Append:
            - admin
      Services:
         Append:
            - postgres-service

[Captains-Bay]? >  shipa pool-update collabpool collabpool-config.yaml
Pool successfully updated.
[Captains-Bay]? >

Please note that:

  • Once you create a new pool, in order to be able to deploy apps to it, you have to assign that pool to a cluster (unless you will add a new cluster later and use that new pool).
  • One cluster can host multiple pools, so you can assign multiple pools you create to a single cluster. For each pool, Shipa creates a different namespace inside the K8s cluster, so workloads will be isolated
  • You can also adjust the security scan configuration by adding or removing exceptions, disabling scans for the platform layers, disabling scanning entirely, and so on. You can always make changes and use the shipa pool-update command for the changes to be applied.

You can update an existing cluster to assign it to your new pool with the following command: shipa cluster-update gke-longhorn --addr https://35.205.250.127 --pool gke-longhorn --pool collabpool

[Captains-Bay]? >  shipa cluster-update gke-longhorn --addr https://35.205.250.127  --pool gke-longhorn --pool collabpool
Cluster successfully updated.
[Captains-Bay]? >

After that, your new pool is ready to receive apps, so you can use it when creating and deploying apps.

Exporting Pool Configuration

[Captains-Bay]? >  shipa pool-config-export collabpool -o mypoolconfig
[Captains-Bay]? >  cat mypoolconfig
ShipaPool: collabpool
Resources:
  General:
    Setup:
      Default: false
      Public: true
      Provisioner: ""
      Force: false
      KubeNamespace: ""
    Plan:
      Name: k8s
    Security:
      Disable-Scan: false
      Scan-Platform-Layers: false
      Ignore-Components: []
      Ignore-CVES: []
    Access:
      Append:
      - admin
      - system
      Blacklist: []
    Services:
      Append:
      - postgres-service
      Blacklist: []
    Volumes: []
    AppQuota:
      Limit: unlimited
  ShipaNode:
    Drivers: []
    AutoScale: null
[Captains-Bay]? >

Adding Shipa node in AWS

Below is a sample pool configuration that you can use to create Shipa-provisioned pools. After the pool is created, Shipa can provision and manage nodes for it (see the notes after the YAML):

ShipaPool: shipa-pool
Resources:
   General:
      Setup:
         Force: true
         Default: false
         Public: false 
         Provisioner: shipa
      Plan:
         Name: k8s
      AppQuota:
         Limit: 5
      Security:
         Disable-Scan: false
         Scan-Platform-Layers: false
   ShipaNode:
      Drivers:
         - amazonec2
      AutoScale:
         MaxContainer: 10
         MaxMemory: 0
         ScaleDown: 1.33
         Rebalance: true

Please note:

  • A Shipa pool can host nodes from multiple cloud providers and will distribute your application units/containers across multiple clouds/nodes
  • You can edit the Drivers field in the Shipa config file to control which cloud providers the pool can host nodes from. You can specify one or more; accepted values are amazonec2, google, and azure

Security Management

[Captains-Bay]? >  shipa app-security-list  -a collabnix1
1. [Deployment] scan at 13 Jun 2020 11:30, 0 vulnerability(es), 0 ignored
2. [Deployment] scan at 13 Jun 2020 15:14, 0 vulnerability(es), 0 ignored
3. [Deployment] scan at 13 Jun 2020 15:17, 0 vulnerability(es), 0 ignored
..
11. [    Manual] scan at 17 Jun 2020 09:13, 0 vulnerability(es), 0 ignored

Auto Scalability

Using its native cloud provider integrations, Shipa manages the nodes it creates, performing self-healing, auto-scaling, and more. Let us look at the auto-scaling feature. The command below runs the node auto-scale checks once. Auto-scaling checks may trigger the addition, removal, or rebalancing of nodes, as long as those nodes were created through an IaaS provider registered in Shipa.

[Captains-Bay]? >  shipa node-autoscale-run
Are you sure you want to run auto scaling checks? (y/n) y
finished autoscale checks
[Captains-Bay]? >

Next, let us list the current configuration of Shipa's autoscaler, including the set of rules and the current metadata filter.

[Captains-Bay]? >  shipa node-autoscale-info
Rules:
+------+---------------------+------------------+------------------+--------------------+---------+| Pool | Max container count | Max memory ratio | Scale down ratio | Rebalance on scale | Enabled |+------+---------------------+------------------+------------------+--------------------+---------+|      | 0                   | 0.0000           | 1.3330           | true               | true    |+------+---------------------+------------------+------------------+--------------------+---------+
[Captains-Bay]? >

Conclusion

If you are looking for a platform that lets you think in terms of applications rather than infrastructure when delivering services, across any infrastructure, then Shipa is a perfect fit.

References:

Visualizing Time Series Data directly over IoT Edge device using Dockerized RedisTimeSeries & Grafana

Application developers look to Redis and RedisTimeSeries to work with real-time Internet of Things (IoT) sensor data. RedisTimeSeries is a Redis module that enhances your experience managing time-series data with Redis. It simplifies the use of Redis for time-series use cases such as IoT data, stock prices, and telemetry. With RedisTimeSeries, you can ingest and query millions of samples and events at the speed of Redis. Advanced tooling such as downsampling and aggregation ensures a small memory footprint without impacting performance. Use a variety of queries for visualization and monitoring with built-in connectors to popular monitoring tools like Grafana, Prometheus, and Telegraf.

Introducing Redis Data Source for Grafana

The Redis Data Source for Grafana is a plug-in that allows users to connect to the Redis database and build dashboards in Grafana to easily monitor Redis and application data. It provides an out-of-the-box predefined dashboard but also lets you build customized dashboards tuned to your specific needs.

I published a blog for Redis Labs where I showcased how to fetch sensor data and push it to Redis Enterprise Cloud. In this tutorial, I will show how you can use a Python script to fetch IoT sensor data, push it to a dockerized RedisTimeSeries instance, and then plot it over Grafana ~ all using Docker containers running on the IoT device.

Let’s get started –

Hardware requirements:

Software requirements:

You can run RedisTimeSeries directly on an IoT Edge device. Follow the steps below to build the RedisTimeSeries Docker image on a Jetson Nano:

Verifying Docker version

SSH into the Jetson device (70.167.220.160 in this setup) and verify the Docker version:

pico@pico1:~$ docker version
Client: Docker Engine - Community
 Version:           20.10.3
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        48d30b5
 Built:             Fri Jan 29 14:33:34 2021
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr  9 22:43:42 2021
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 nvidia:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Verifying if Sensor is detected

 i2cdetect -r -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- --

Building Docker Image for RedisTimeSeries for Jetson Nano

git clone --recursive https://github.com/RedisTimeSeries/RedisTimeSeries.git
cd RedisTimeSeries
docker build -t ajeetraina/redistimeseries-jetson . -f Dockerfile.jetson.edge

Running RedisTimeSeries

docker run -dit -p 6379:6379 ajeetraina/redistimeseries-jetson

Verifying if RedisTimeSeries Module is loaded

redis-cli
127.0.0.1:6379> info modules
# Modules
module:name=timeseries,ver=999999,api=1,filters=0,usedby=[],using=[],options=[]
127.0.0.1:6379>
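Before wiring Grafana to it, you can optionally run a quick sanity check from redis-cli to confirm that the module accepts time-series commands. This is a minimal sketch assuming the container is reachable on localhost:6379; ts:test is just a throwaway key used for illustration:

# Create a test series, push one auto-timestamped sample, read it back, then clean up
redis-cli TS.CREATE ts:test
redis-cli TS.ADD ts:test '*' 27.5
redis-cli TS.GET ts:test
redis-cli DEL ts:test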

Clone the repository

$ git clone https://github.com/redis-developer/redis-datasets
$ cd redis-datasets/redistimeseries/realtime-sensor-jetson

Running Sensorload Script

sudo python3 sensorloader2.py --host localhost --port 6379

Running Grafana on Jetson Nano

docker run -d -e "GF_INSTALL_PLUGINS=redis-app" -p 3000:3000 grafana/grafana

There you go..

Point your browser to http://<IP_ADDRESS>:3000. Use “admin” as the username and password to log in to the Grafana dashboard.

Click the Data Sources option on the left side of the Grafana dashboard to add a data source.

Under the Add data source option, search for Redis and the Redis data source will appear as shown below:

Supply the name, Redis Enterprise Cloud database endpoint, and password, then click Save & Test.

Click Dashboards to import Redis and Redis Streaming. Click Import for both these options.

Click on Redis to see a fancy Grafana dashboard that shows the Redis database information:

Finally, let’s create a sensor dashboard that shows temperature, pressure, and humidity. To start with temperature, first click + in the left navigation pane. Under the Create option, select Dashboard and click the Add new panel button.

A new window will open showing the Query section. Select SensorT from the drop-down menu and choose RedisTimeSeries as the type.

Choose TS.GET as the command.

Type ts:temperature as the key.

Click Run followed by Save, as shown below:

Now you can save the dashboard by your preferred name:

Click Save. This will open up a sensor dashboard. You can click on Panel Title and select Edit.

Type Temperature and choose Gauge under Visualization.

Click Apply and you should be able to see the temperature dashboard as shown here:

Follow the same process for pressure (ts:pressure) and humidity (ts:humidity), and add them to the dashboard. You should now see the complete dashboard readings for temperature, humidity, and pressure. Looks amazing, doesn’t it?
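If the panels stay empty, it helps to confirm from the CLI that the sensor loader is actually writing samples. A minimal check, assuming the key names used above (ts:temperature, ts:pressure, ts:humidity) and Redis listening locally on port 6379:

# Latest sample of each series
redis-cli TS.GET ts:temperature
redis-cli TS.GET ts:pressure
redis-cli TS.GET ts:humidity
# Full temperature history, averaged per minute
redis-cli TS.RANGE ts:temperature - + AGGREGATION avg 60000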

References

A First Look at Dev Environments Feature under Docker Desktop 3.5.0

Starting with Docker Desktop 3.5.0, Docker introduced the Dev Environments feature for the first time. Dev Environments are the foundation of Docker’s new collaborative team development experience. They provide a simple way for developers to share work-in-progress code, to connect code into a container so it can be edited and rebuilt, and to do all of this within a Compose project, using one container as your development environment while the others keep running. Docker keeps your development container backed up and up to date, and provides simple tools to share it with your team so anyone can look at what you’re working on in one click.

It is currently offered as a Preview and shouldn’t be used in production environments. VS Code is the only supported IDE as of now, but you can expect a bunch of other IDE vendors in the near future.

To access Dev Environments, from the Docker menu, select Dashboard > Dev Environments.

Why does your team require Dev Environments?

Imagine your team is working on a Git project that has a development, experiment, and production branch. Your team uses a development branch to build the next version, an experimental branch to try exploring the new ideas, and the production branch that requires maintenance. Switching between these branches on a single machine could be complex and tedious at the same time, particularly if the dependencies are different. The isolation of using a different container for each branch may help.

With Dev Environments, developers can now easily set up repeatable and reproducible development environments by keeping the environment details versioned in their SCM along with their code. Once a developer is working in a Development Environment, they can share their work-in-progress code and dependencies in one click via the Docker Hub. They can then switch between their developer environments or their teammates’ environments, moving between branches to look at work-in-progress changes without moving off their current Git branch. This makes reviewing PRs as simple as opening a new environment.

Dev Environments use tools built into code editors that allow Docker to access code mounted into a container rather than on the developer’s localhost. This isolates the tools, files, and running services on the developer’s machine, allowing multiple versions of them to exist side by side and also improving file system performance. Docker has built this experience on top of Compose rather than adding another manifest for developers to worry about or look after.

How is Dev Environments different from GitPod and GitHub Codespaces?

Good question! If you have ever used GitPod or GitHub Codespaces, you might wonder how Dev Environments differs from these existing tools. If you look closely at GitPod, it is an open-source Kubernetes application for automated, ready-to-code development environments that blends into your existing workflow. It works very similarly: it enables you to describe your dev environment as code and start instant, fresh development environments for each new task directly from your browser. Tightly integrated with GitLab, GitHub, and Bitbucket, Gitpod automatically and continuously prebuilds dev environments for all your branches. As a result, team members can instantly start coding with fresh, ephemeral, and fully-compiled dev environments, whether you are building a new feature, fixing a bug, or doing a code review. GitHub Codespaces, on the other hand, is a configurable online development environment, hosted by GitHub and powered by Visual Studio Code.

The major difference between Dev Environments and GitPod/Codespaces lies in where they run: GitPod and Codespaces run in the cloud, and collaboration is based on working on the same code repository, whereas Docker Dev Environments run locally and enable sharing of work in progress as a complete working piece.

Getting Started

Step 1. Download and install Docker Desktop 3.5.0 or higher:

Step 2. Install the tools and extensions

Step 3. Start a single container Dev Environment

Go to Dev Environments > Click on Single container dev environment. We will be using an example to try a single container sample of Docker Dev Environments.

In the above example, the names lucid_napier and vigilant_lalande are randomly generated. You’ll most likely see different names when you create your Dev Environment.

Hover over the container and click Open in VS Code to start working in VS Code as usual. You can also open a terminal in VS Code, and use Git to push or pull code to your repository, or switch between branches and work as you would normally.

You can launch the application by running the command “make run” in your VS Code terminal.

Step 4. Access the HTTP application

This opens an HTTP server on port 8080. Open http://localhost:8080 in your browser to see the running application.

❯ curl http://localhost:8080

          ##         .
    ## ## ##        ==
 ## ## ## ## ##    ===
/"""""""""""""""""\___/ ===
{                       /  ===-
\______ O           __/
 \    \         __/
  \____\_______/


Hello from Docker!

Step 5. Sharing Your Dev Environments

If you’re not a member of the Docker Team Plan, then this is the time to upgrade from the free plan to the Team Plan.

If you are a member of the Docker Team plan, you can now share your Dev Environment with your team. When you are ready to share your environment, just click the Share button and specify the Docker Hub namespace where you’d like to push your Dev Environment to.

This creates a Docker image of your dev environment, uploads it to the Docker Hub namespace you have specified in the previous step, and provides a tiny URL that you can use to share your work with your team members. Your team members just need to add this URL in the Create field and then click Create. Your Dev Environment now starts in the exact same state as you shared it! Using this shared Dev Environment, your team members can access the code, any dependencies, and the current Git branch you are working on. They can also review your changes and provide feedback even before you create a pull request!

In the next post, we will use Dev Environments to collaborate on any Docker Compose-based projects. Stay tuned!

References:

How to assemble DJI Robomaster S1 for the first time?

The RoboMaster S1 is an educational robot that provides users with an in-depth understanding of science, math, physics, programming, and more through captivating gameplay modes and intelligent features.

Top 10 Features

  • Support for Python and Scratch programming languages
  • 46 Programmable Components – all in DIY mode
  • 6 Programmable AI Modules
  • Low-latency HD FPV
  • Scratch & Python Coding
  • 4WD Omnidirectional Movement
  • Intelligent Sensing Armor
  • Multiple Exciting Battle Modes
  • Innovative Hands-On Learning
  • Two shooting methods: gel beads and infrared beams.
  • Capability to capture photos and record 1080p videos; without a microSD card, it supports only 720p.

Stimuli that the S1 recognizes

  • Clapping Recognition: the S1 can recognize two or three consecutive claps and be programmed to execute custom responses.
  • Gesture Recognition: the S1 can detect human gestures such as hand or arm signals and be programmed to execute custom responses.
  • S1 Robot Recognition: the S1 can detect other RoboMaster S1 units.
  • Vision Marker Recognition: the S1 can identify 44 kinds of official Vision Markers, which are comprised primarily of numbers, letters, and special characters. All of the files for these Vision Markers can be downloaded at insert web address.
  • Line Recognition: the S1 can detect and follow blue, red, and green tracks with a width of approximately 15-25 mm.

How does it work?

  • The RoboMaster S1 can be operated using a computer or a smart device via the touchscreen and gamepad. When using the gamepad with a touchscreen device, the robot can also be operated using an external mouse, which can be connected through a dedicated USB port
  • Users can connect to the RoboMaster S1 via Wi-Fi or a router. When connecting via Wi-Fi, your mobile device or computer connects to the Wi-Fi of the S1. Connection via router provides broader signal coverage, which allows multiple control methods for robots to operate simultaneously on the same network.
  • Flat surfaces such as wood, carpet, tile, and concrete are optimal for operating the S1. Users should avoid surfaces that are too smooth as the S1 wheels may have problems gaining enough traction for precise control. Surfaces with fine particles like sand or dirt should be avoided.

Table of Contents

  1. Getting Started
  2. Items Check and Assembly
  3. Assembling the Mecanum Wheels
  4. Attaching the Gimbal to the Chassis
  5. Mounting the Gel Bead Container and Intelligent Battery

Getting Started

To be able to program the RoboMaster S1 in Scratch or Python, you must run the RoboMaster S1 app and then connect the RoboMaster S1 to it, either via a wireless mobile device or on your computer via Wi-Fi.

  • Installing Robomaster Python Module on MacOS
conda create --name dji python=3.7
conda activate dji
pip install robomaster
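Before assembling the robot, you can quickly confirm that the SDK installed correctly inside the conda environment; no robot connection is needed for this check:

# Verify the robomaster package is installed and importable
conda activate dji
pip show robomaster
python3 -c "import robomaster; print('robomaster SDK imported OK')"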

Before we start interacting with the RoboMaster S1 through the script, the first and foremost step is to assemble the RoboMaster S1. It generally takes 45 minutes to 1 hour to assemble the RoboMaster S1 completely. Let’s get started:

20 Steps to assemble Robomaster

1. Items Check

robo

2. Soak the gel beads in water

image

image

3. Connect the battery to the charger

image

4. Assemble the Mecanum Wheels

image

5. Prepare the screw bar and grease

image

6. Mount the screwdriver bit to the handle

image

7. Apply grease to the bottom of the shaft holes

image

8. Assemble the Mecanum Wheels

image

9. Screw in the five T2 screws using the screwdriver’s H1.5 end

image

10. Finish assembling all 4 Mecanum wheels

image
image

11. Attaching the Gimbal to the chassis

image

12. Test that the battery eject button is functioning properly

image
image

13. Align the Motion Controller with a buckle and place it inside

image

14. Secure the 4 Hit Detectors to their 4 respective armour plates

image
image
image

15. Connect the cable to the Chassis left armour’s hit detector

image

16. Align all three 3508I Brushless Motors and ESCs with the motor mounting plate

image

17. Connect the motor cable to the motion controller’s orange port

image

18. Getting the foundation strong

image

Mounting the Gel Bead Container and Intelligent Battery

image
image
image
image

19. Getting Gel Beads ready

image
image

20. Get Ready!

image

Congratulations! You have assembled DJI Robomaster S1 successfully.

Installing Robomaster S1 Mac App

Next, you will need to install the Robomaster S1 app on your Macbook. Download it via this link. RoboMaster for Windows requires Windows7 64bit or above. RoboMaster for Mac requires macOS 10.13 or above.

The RoboMaster platform is intuitive and engaging allowing ease of connection to the RoboMaster S1 and block-based programming language for all activities.

Copy the code below and execute it through the RoboMaster S1 app. Place multiple Vision Markers at some distance. The S1 should then be able to identify all 44 kinds of official Vision Markers, which consist primarily of numbers, letters, and special characters.

pid_x = PIDCtrl()
pid_y = PIDCtrl()
pid_Pitch = PIDCtrl()
pid_Yaw = PIDCtrl()
variable_X = 0
variable_Y = 0
variable_Post = 0
list_MarkerList = RmList()
# Track a detected Vision Marker with the gimbal using PID control and
# fire a single gel bead once the marker is centered in the frame.
def start():
    global variable_X
    global variable_Y
    global variable_Post
    global list_MarkerList
    global pid_x
    global pid_y
    global pid_Pitch
    global pid_Yaw
    robot_ctrl.set_mode(rm_define.robot_mode_free)
    vision_ctrl.enable_detection(rm_define.vision_detection_marker)
    pid_Yaw.set_ctrl_params(115,0,5)
    pid_Pitch.set_ctrl_params(85,0,3)
    while True:
        list_MarkerList=RmList(vision_ctrl.get_marker_detection_info())
        if list_MarkerList[1] == 1:
            variable_X = list_MarkerList[3]
            variable_Y = list_MarkerList[4]
            pid_Yaw.set_error(variable_X - 0.5)
            pid_Pitch.set_error(0.5 - variable_Y)
            gimbal_ctrl.rotate_with_speed(pid_Yaw.get_output(),pid_Pitch.get_output())
            time.sleep(0.05)
            variable_Post = 0.01
            if abs(variable_X - 0.5) <= variable_Post and abs(0.5 - variable_Y) <= variable_Post:
                gun_ctrl.set_fire_count(1)
                gun_ctrl.fire_once()
                time.sleep(11)
        else:
            gimbal_ctrl.rotate_with_speed(0,0)

References:

Building Your First Jetson Container

The NVIDIA Jetson Nano 2GB Developer Kit is the ideal platform for teaching, learning, and developing AI and robotics applications. It uses the same proven NVIDIA JetPack Software Development Kit (SDK) used in breakthrough AI-based products. The new developer kit is unique in its ability to utilize the entire NVIDIA CUDA-X™ accelerated computing software stack including TensorRT for fast and efficient AI inference — all in a small form factor and at a significantly lower price. The Jetson Nano 2GB Developer Kit is priced at $59 and will be available for purchase starting end-October. 

Under this blog post, I will cover the below details:

  • Installing Docker
  • Installing Docker Compose
  • Testing GPU support
  • Running JTOP Docker container
  • Compiling CUDA drivers and libraries
  • Running deviceQuery on Docker with GPU support
  • Running deviceQuery on Containerd with GPU support
  • Running deviceQuery on the K3s cluster

Hardware

  • Jetson Nano
  • A Camera Module
  • A 5V 4Ampere Charger
  • 64GB SD card

Software

Preparing Your Jetson Nano

1. Flashing the Jetson SD Card Image

  • Unzip the SD card image
  • Insert SD card into your system.
  • Bring up Etcher tool and select the target SD card to which you want to flash the image.
My Image

2. Verifying if it is shipped with Docker Binaries

The Jetson Nano SD card image comes with Docker pre-installed (Docker 20.10.2 in the image used here).

ajeetraina@ajeetraina-desktop:~$ sudo docker version

Client:
 Version:           20.10.2
 API version:       1.41
 Go version:        go1.13.8
 Git commit:        20.10.2-0ubuntu1~18.04.2
 Built:             Tue Mar 30 21:35:54 2021
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.2
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.8
  Git commit:       20.10.2-0ubuntu1~18.04.2
  Built:            Mon Mar 29 19:27:41 2021
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.4-0ubuntu1~18.04.2
  GitCommit:        
 runc:
  Version:          spec: 1.0.2-dev
  GitCommit:        
 docker-init:
  Version:          0.19.0
  GitCommit:        
pico@pico1:~$ 

 

Installing nvidia-docker

sudo apt install nvidia-docker2

Install nvidia-container-runtime package:

sudo apt install nvidia-container-runtime

Update docker daemon

sudo vim /etc/docker/daemon.json

Ensure that /etc/docker/daemon.json contains the path to nvidia-container-runtime:

{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

Reload the Docker daemon so it picks up the updated configuration:

sudo pkill -SIGHUP dockerd
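To confirm that Docker picked up the nvidia runtime, you can check the runtimes reported by the daemon; the exact output format varies by Docker version, but nvidia should appear in the list:

# List the container runtimes known to the Docker daemon
docker info | grep -i runtimes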

3. Installing Docker Compose on NVIDIA Jetson Nano

The Jetson Nano doesn’t come with Docker Compose installed by default. You will need to install it first:

export DOCKER_COMPOSE_VERSION=1.27.4
sudo apt-get install libhdf5-dev
sudo apt-get install libssl-dev
sudo pip3 install docker-compose=="${DOCKER_COMPOSE_VERSION}"
apt install python3
apt install python3-pip
pip install docker-compose
docker-compose version
docker-compose version 1.26.2, build unknown
docker-py version: 4.3.1
CPython version: 3.6.9
OpenSSL version: OpenSSL 1.1.1  11 Sep 2018

4. Identify the Jetson board

pico@pico1:~$ git clone https://github.com/jetsonhacks/jetsonUtilities
Cloning into 'jetsonUtilities'...
remote: Enumerating objects: 123, done.
remote: Counting objects: 100% (39/39), done.
remote: Compressing objects: 100% (30/30), done.
remote: Total 123 (delta 15), reused 23 (delta 8), pack-reused 84
Receiving objects: 100% (123/123), 32.87 KiB | 5.48 MiB/s, done.
Resolving deltas: 100% (49/49), done.
pico@pico1:~$ cd jetson
-bash: cd: jetson: No such file or directory
pico@pico1:~$ cd jetsonUtilities/
pico@pico1:~/jetsonUtilities$ ls
LICENSE  README.md  jetsonInfo.py  scripts

pico@pico1:~/jetsonUtilities$ python3 jetsonInfo.py 
NVIDIA Jetson Nano (Developer Kit Version)
 L4T 32.4.4 [ JetPack 4.4.1 ]
   Ubuntu 18.04.5 LTS
   Kernel Version: 4.9.140-tegra
 CUDA 10.2.89
   CUDA Architecture: 5.3
 OpenCV version: 4.1.1
   OpenCV Cuda: NO
 CUDNN: 8.0.0.180
 TensorRT: 7.1.3.0
 Vision Works: 1.6.0.501
 VPI: 4.4.1-b50
 Vulcan: 1.2.70

5. Running Jtop in a Docker Container

In the latest release, JTOP is recommended instead of NVIDIA-SMI.

sudo docker run --rm -it --gpus all \
                   -v /run/jtop.sock:/run/jtop.sock ajeetraina/jetson-stats-nano jtop

Use the “tab” key to switch to different GPUs and CPUs.

6. CUDA Compilers and Libraries

ajeetraina@ajeetraina-desktop:~/meetup$ nvcc --version
-bash: nvcc: command not found
ajeetraina@ajeetraina-desktop:~/meetup$ export PATH=${PATH}:/usr/local/cuda/bin
ajeetraina@ajeetraina-desktop:~/meetup$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64
ajeetraina@ajeetraina-desktop:~/meetup$ source ~/.bashrc
ajeetraina@ajeetraina-desktop:~/meetup$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_21:14:42_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
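Note that the exports above apply only to the current shell session. To make the CUDA paths permanent, append them to ~/.bashrc, for example:

# Persist the CUDA paths for future shell sessions
echo 'export PATH=${PATH}:/usr/local/cuda/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64' >> ~/.bashrc
source ~/.bashrc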

7. Testing GPU Support

We’ll use the deviceQuery NVIDIA test application (included in L4T) to check that we can access the GPU in the cluster. First, we’ll create a Docker image with the appropriate software, run it directly as Docker, then run it using containerd ctr and finally on the Kubernetes cluster itself.

8. Running deviceQuery on Docker with GPU support

Create a directory

mkdir test
cd test

Copy the sample files

Copy the demos where deviceQuery is located to the working directory where the Docker image will be created:

cp -R /usr/local/cuda/samples .

Create a Dockerfile

FROM nvcr.io/nvidia/l4t-base:r32.5.0
RUN apt-get update && apt-get install -y --no-install-recommends make g++
COPY ./samples /tmp/samples
WORKDIR /tmp/samples/1_Utilities/deviceQuery
RUN make clean && make
CMD ["./deviceQuery"]
sudo docker build -t ajeetraina/jetson_devicequery . -f Dockerfile
pico@pico2:~/test$ sudo docker run --rm --runtime nvidia ajeetraina/jetson_devicequery:latest
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X1"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    5.3
  Total amount of global memory:                 3963 MBytes (4155383808 bytes)
  ( 1) Multiprocessors, (128) CUDA Cores/MP:     128 CUDA Cores
  GPU Max Clock rate:                            922 MHz (0.92 GHz)
  Memory Clock rate:                             13 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS

9. Running deviceQuery on containerd with GPU support

Since K3s uses containerd as its runtime by default, we will use the ctr command line to test and deploy the deviceQuery image we pushed on containerd with this script:

#!/bin/bash
IMAGE=ajeetraina/jetson_devicequery:latest
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
ctr i pull docker.io/${IMAGE}
ctr run --rm --gpus 0 --tty docker.io/${IMAGE} deviceQuery

10. Execute the script

sudo sh usectr.sh
sudo sh usectr.sh 
docker.io/ajeetraina/jetson_devicequery:latest:                                   resolved       |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:dfeaad4046f78871d3852e5d5fb8fa848038c57c34c6554c6c97a00ba120d550: done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:4438ebff930fb27930d802553e13457783ca8a597e917c030aea07f8ff6645c0:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:b1cdeb9e69c95684d703cf96688ed2b333a235d5b33f0843663ff15f62576bd4:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:bf60857fb4964a3e3ce57a900bbe47cd1683587d6c89ecbce4af63f98df600aa:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:0aac5305d11a81f47ed76d9663a8d80d2963b61c643acfce0515f0be56f5e301:    done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:37987db6d6570035e25e713f41e665a6d471d25056bb56b4310ed1cb1d79a100:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:f0f57d03cad8f8d69b1addf90907b031ccb253b5a9fc5a11db83c51aa311cbfb:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:08c23323368d4fde5347276d543c500e1ff9b712024ca3f85172018e9440d8b0:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:04da93b342eb651d6b94c74a934a3290697573a907fa0a06067b538095601745:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:f84ceb6e8887e9b3b454813459ee97c2b9730869dbd37d4cca4051958b7a5a36:    done           |++++++++++++++++++++++++++++++++++++++| 

elapsed: 81.4s                                                                    total:  305.5  (3.8 MiB/s)                                       
unpacking linux/arm64/v8 sha256:dfeaad4046f78871d3852e5d5fb8fa848038c57c34c6554c6c97a00ba120d550...

done

./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA Tegra X1"
  CUDA Driver Version / Runtime Version          10.2 / 10.2
  CUDA Capability Major/Minor version number:    5.3
  Total amount of global memory:                 3963 MBytes (4155383808 bytes)
  ( 1) Multiprocessors, (128) CUDA Cores/MP:     128 CUDA Cores
  GPU Max Clock rate:                            922 MHz (0.92 GHz)
  Memory Clock rate:                             13 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 32768
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            Yes
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Compute Preemption:            No
  Supports Cooperative Kernel Launch:            No
  Supports MultiDevice Co-op Kernel Launch:      No
  Device PCI Domain ID / Bus ID / location ID:   0 / 0 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.2, CUDA Runtime Version = 10.2, NumDevs = 1
Result = PASS


11. Running deviceQuery on the K3s cluster

pico@pico2:~/test$ cat pod_deviceQuery.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: devicequery
spec:
  containers:
    - name: nvidia
      image: ajeetraina/jetson_devicequery:latest

      command: [ "./deviceQuery" ]
pico@pico2:~/test$
sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl apply -f ./pod_deviceQuery.yaml
pod/devicequery created
pico@pico2:~/test$ sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl describe pod devicequery
Name:         devicequery
Namespace:    default
Priority:     0
Node:         pico4/192.168.1.163
Start Time:   Sun, 13 Jun 2021 09:16:44 -0700
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  nvidia:
    Container ID:  
    Image:         ajeetraina/jetson_devicequery:latest
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      ./deviceQuery
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mcrmv (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-mcrmv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  78s   default-scheduler  Successfully assigned default/devicequery to pico4
  Normal  Pulling    77s   kubelet            Pulling image "ajeetraina/jetson_devicequery:latest"
pico@pico2:~/test$
cat pod_deviceQuery_jetson4.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: devicequery
spec:
  nodeName: pico4
  containers:
    - name: nvidia
      image: ajeetraina/jetson_devicequery:latest
      command: [ "./deviceQuery" ]
pico@pico2:~/test$ 
pico@pico2:~/test$ sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl describe pod devicequery
Name:         devicequery
Namespace:    default
Priority:     0
Node:         pico4/192.168.1.163
Start Time:   Sun, 13 Jun 2021 09:16:44 -0700
Labels:       <none>
Annotations:  <none>
Status:       Running
IP:           10.42.1.3
IPs:
  IP:  10.42.1.3
Containers:
  nvidia:
    Container ID:  containerd://fd502d6bfa55e2f80b2d50bc262e6d6543fd8d09e9708bb78ecec0b2e09621c3
    Image:         ajeetraina/jetson_devicequery:latest
    Image ID:      docker.io/ajeetraina/jetson_devicequery@sha256:dfeaad4046f78871d3852e5d5fb8fa848038c57c34c6554c6c97a00ba120d550
    Port:          <none>
    Host Port:     <none>
    Command:
      ./deviceQuery
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sun, 13 Jun 2021 09:21:50 -0700
      Finished:     Sun, 13 Jun 2021 09:21:50 -0700
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mcrmv (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-mcrmv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  7m51s                  default-scheduler  Successfully assigned default/devicequery to pico4
  Normal   Pulled     5m45s                  kubelet            Successfully pulled image "ajeetraina/jetson_devicequery:latest" in 2m5.699757621s
  Normal   Pulled     5m43s                  kubelet            Successfully pulled image "ajeetraina/jetson_devicequery:latest" in 1.000839703s
  Normal   Pulled     5m29s                  kubelet            Successfully pulled image "ajeetraina/jetson_devicequery:latest" in 967.072951ms
  Normal   Pulled     4m59s                  kubelet            Successfully pulled image "ajeetraina/jetson_devicequery:latest" in 1.025604394s
  Normal   Created    4m59s (x4 over 5m45s)  kubelet            Created container nvidia
  Normal   Started    4m59s (x4 over 5m45s)  kubelet            Started container nvidia
  Warning  BackOff    4m20s (x8 over 5m42s)  kubelet            Back-off restarting failed container
  Normal   Pulling    2m47s (x6 over 7m51s)  kubelet            Pulling image "ajeetraina/jetson_devicequery:latest"
pico@pico2:~/test$ sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl apply -f ./pod_deviceQuery_jetson4.yaml
pod/devicequery configured
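Once the pod is scheduled on pico4 and the container has run, you can pull the deviceQuery output straight from the pod logs, using the same kubeconfig as above. Since deviceQuery exits after printing its report, the pod may cycle through Completed or CrashLoopBackOff states, but the logs still show the result:

# Fetch the deviceQuery report from the pod
sudo KUBECONFIG=/etc/rancher/k3s/k3s.yaml kubectl logs devicequery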

In my next blog, we will see how to deploy the Jetson software stack for DeepStream.

References

What You Should Expect From Collabnix Joint Meetup with JFrog & Docker Bangalore Event?

This June 19th, 2021, the Collabnix community is coming together with JFrog & Docker Bangalore Meetup Group to conduct a meetup event for DevOps practitioners. This will be a 100% virtual event where DevOps mindset, skills, and tools converge. This will be a half-day event that brings together leaders in DevOps and the Cloud-native world to collaborate and learn from each other. We encourage people with technology and business backgrounds to attend, learn and share experiences.

Who can attend?

Primary Audience:

  • DevOps 
  • Practitioners: Solution Architects, SRE, Application Developers, Managers
  • Open Source Maintainers: Includes Projects Leads and Contributors
  • Leaders: Directors, CxOs, Presidents, VPs

We opened CFP last month accepting 30-minute talk proposals from interested speakers. As our conference is fully virtual, we cordially invited all to make a submission (or submissions). Our core audience consisted of DevOps, application developers, open source maintainers, software architects, Product Managers, open-source experts, IoT and application security professionals.

We welcomed submissions on perennially popular technical topics related to DevOps technologies listed below:

  • Continuous integration and continuous delivery
  • DevSecOps
  • Open Source Technology
  • Release Orchestration
  • Application Development
  • Automation
  • Monitoring and alerting
  • Microservices
  • Software Delivery Management
  • Cloud-Native
  • Monitoring and Observing Containers in Production
  • Managing Deployment Configuration (IaC)
  • Container Security Best Practices
  • Using Containers in Test Environments
  • DevOps Tools(Docker, Chef, Puppet, PowerShell, Kubernetes, GitHub, Ansible, SaltStack, Capistrano, Jenkins)
  • DevOps/Cloud Infra (Virtualization, Containerization, Orchestration, Microservices, Cloud Computing (AWS, Azure, Google Cloud, OpenStack).
  • DevOps Real World Experience – technology adoption examples, real-life implementation scenarios, best practices, and insights from real companies. 
  • Culture and Processes
  • Infrastructure as code

Our Speakers

Yes, It’s gonna be LIVE!

This is purely a live, online meetup event. We will be using the StreamYard platform for all our speakers to present their talk while the participants will get a chance to ask queries and watch the sessions via YouTube and other social media platform(will be announced shortly)

Goodies and Swags

Thanks to JFrog Team for being our event sponsors. In addition to the great speakers and sessions lined up, the active participants will get a chance to win T-shirts, swags, stickers, and surprise gifts. So what are you waiting for? Check out the below links to RSVP for this upcoming event.

See you there !

Reserve Your Spot

Join Our Community

Collabnix Tech Videos

Kubectl for Docker Beginners

Kubectl is a command-line interface for running commands against Kubernetes clusters. kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the $KUBECONFIG environment variable or by setting the --kubeconfig flag.
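For example, to point kubectl at an alternate kubeconfig file (the path below is just an illustration), you can use either the environment variable or the per-command flag:

# Use an alternate kubeconfig for the whole shell session
export KUBECONFIG=$HOME/.kube/dev-cluster-config
kubectl config view --minify

# Or pass it for a single command
kubectl --kubeconfig=$HOME/.kube/dev-cluster-config get nodes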

If you’re a Docker beginner and want to switch to Kubernetes, then this guide will be super useful for you. Most of the Docker-related commands have been tested on the Play with Docker platform. Play with Docker (PWD in short) is a Docker playground that allows users to run Docker commands in a matter of seconds. It gives the experience of having a free Alpine Linux Virtual Machine in the browser, where you can build and run Docker containers and even create clusters in Docker Swarm Mode. Under the hood Docker-in-Docker (DinD) is used to give the effect of multiple VMs/PCs. In addition to the playground, PWD also includes a training site composed of a large set of Docker labs and quizzes from beginner to advanced level available at training.play-with-docker.com.

For Kubernetes, we will leverage Play with the Kubernetes platform. Play with Kubernetes(PWK in short) is a lab site provided by Docker. Play with Kubernetes is a playground that allows users to run K8s clusters in a matter of seconds. It gives the experience of having a free Alpine Linux Virtual Machine in the browser. Under the hood Docker-in-Docker (DinD) is used to give the effect of multiple VMs/PCs.

Prerequisites:

Setting up PWD environment
  • Create an account with DockerHub
  • Open PWD Platform on your browser
  • Click on Add New Instance on the left side of the screen to bring up Alpine OS instance on the right side
  • Verify if Docker is installed
Setting up PWK environment

Example: Running Nginx Service

PWD:

docker run -d --restart=always -e DOMAIN=cluster --name nginx-app -p 80:80 nginx
[node4 ~]$ docker exec -it 9dc env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=9dc0816328dc
TERM=xterm
DOMAIN=cluster
NGINX_VERSION=1.17.6
NJS_VERSION=0.3.7
PKG_RELEASE=1~buster
HOME=/root

PWK:

Start the pod running nginx

kubectl run --image=nginx nginx-app --port=80 --env="DOMAIN=cluster"

Expose the port through a service

kubectl expose deployment nginx-app --port=80 --name=nginx-http
[node1 replicaset101]$ kubectl get po,svc,deploy
NAME                             READY   STATUS    RESTARTS   AGE
pod/nginx-app-7c58988fb9-sckpd   1/1     Running   0          3m12s
pod/portainer-8586dccbb5-x66vk   1/1     Running   1          49m

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        53m
service/nginx-http   ClusterIP      10.108.29.166   <none>        80/TCP         2m11s
service/portainer    LoadBalancer   10.98.58.121    <pending>     80:32001/TCP   49m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/nginx-app   1/1     1            1           3m15s
deployment.extensions/portainer   1/1     1            1           49m
[node1 replicaset101]$ curl 10.108.29.166:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Example: Listing Containers Vs Pods

PWD

docker ps -a

PWK

kubectl get po

Example: Attach to a process that is already running in a container

PWD:

docker ps
docker attach <containerid>

PWK:

kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
nginx-app-5jyvm   1/1       Running   0          10m
kubectl attach -it nginx-app-5jyvm
...

Example: Execute a command in a container

PWD

docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
55c103fa1296        nginx               "nginx -g 'daemon of…"   6 minutes ago       Up 6 minutes        0.0.0.0:80->80/tcp   nginx-app
docker exec 55c103fa1296 cat /etc/hostname
55c103fa1296

PWK

kubectl get po
NAME              READY     STATUS    RESTARTS   AGE
nginx-app-5jyvm   1/1       Running   0          10m
kubectl exec nginx-app-5jyvm -- cat /etc/hostname
nginx-app-5jyvm

Example: To use interactive commands.

PWD

docker exec -ti 55c103fa1296 /bin/sh
# exit

PWK

kubectl exec -ti nginx-app-5jyvm -- /bin/sh      
# exit

Example: To follow stdout/stderr of a process that is running

PWD

docker logs -f a9e
192.168.9.1 - - [14/Jul/2015:01:04:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"
192.168.9.1 - - [14/Jul/2015:01:04:03 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.35.0" "-"

PWK

kubectl logs -f nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
kubectl logs --previous nginx-app-zibvs
10.240.63.110 - - [14/Jul/2015:01:09:01 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
10.240.63.110 - - [14/Jul/2015:01:09:02 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.26.0" "-"
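Example: Cleaning up

One more mapping that comes up often is cleanup. A rough equivalence, assuming the nginx-app container/deployment and nginx-http service created earlier:

PWD

docker stop nginx-app
docker rm nginx-app

PWK

kubectl delete deployment nginx-app
kubectl delete service nginx-http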

References:

Running RedisInsight Docker container in a rootless mode

On a typical installation, the Docker daemon manages all the containers and controls every aspect of the container lifecycle. Older versions of Docker required that the daemon be started by a user with root privileges, which meant giving users full access to a machine in order to control and configure Docker. As a result, this exposed potential security risks. Rootless mode allows running the Docker daemon and containers as a non-root user to mitigate potential vulnerabilities in the daemon and the container runtime.

Rootless mode does not require root privileges even during the installation of the Docker daemon, as long as the prerequisites are met. Rootless mode was introduced in Docker Engine v19.03 as an experimental feature. Rootless mode graduated from experimental in Docker Engine v20.10.

How does it work?

Rootless mode executes the Docker daemon and containers inside a user namespace. This is very similar to userns-remap mode, except that with userns-remap mode, the daemon itself is running with root privileges, whereas in rootless mode, both the daemon and the container are running without root privileges.

Rootless mode does not use binaries with SETUID bits or file capabilities, except newuidmap and newgidmap, which are needed to allow multiple UIDs/GIDs to be used in the user namespace.

New to RedisInsight?

A full-featured pure desktop GUI client, RedisInsight is an intuitive and efficient GUI for Redis, allowing you to interact with your databases and manage your data—with built-in support for most popular Redis modules. It’s 100% free Redis GUI tool to analyse the memory, profile the performance of your database, and guide you toward better Redis usage. It is available for Windows, macOS, and Linux and is fully compatible with Redis Enterprise. It works with any cloud provider as long as you run it on a host with network access to your cloud-based Redis server. RedisInsight makes it easy to discover cloud databases and configure connection details with a single click. It allows you to automatically add Redis Enterprise Software and Redis Enterprise Cloud databases too.

Starting with the v1.6 release, the RedisInsight Docker container runs rootless, in line with best practices for containers. Let us see how to run the RedisInsight Docker container in rootless mode.

Install Docker

$ sudo curl -sSL https://get.docker.com/ | sh

Ensure that you have newuidmap and newgidmap CLI installed on your host system. These commands are provided by the uidmap package on most distros.

Running Docker as a non-privileged user

To run Docker as a non-privileged user, consider setting up the Docker daemon in rootless mode for your user:

dockerd-rootless-setuptool.sh install

Visit https://docs.docker.com/go/rootless/ to learn about rootless mode. To run the Docker daemon as a fully privileged service, but granting non-root users access, refer to https://docs.docker.com/go/daemon-access/ WARNING: Access to the remote API on a privileged Docker daemon is equivalent to root access on the host. Refer to the ‘Docker daemon attack surface’ documentation for details: https://docs.docker.com/go/attack-surface/

$ sudo apt install uidmap

If you installed Docker 20.10 or later with RPM/DEB packages, you should have dockerd-rootless-setuptool.sh in /usr/bin. Run dockerd-rootless-setuptool.sh install as a non-root user to set up the daemon:

$ dockerd-rootless-setuptool.sh install
May 31 05:06:06 ubuntu-rootless dockerd-rootless.sh[4298]: time="2021-05-31T05:06:06.409523458Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 31 05:06:06 ubuntu-rootless dockerd-rootless.sh[4298]: time="2021-05-31T05:06:06.409747732Z" level=info msg="Loading containers: start."
May 31 05:06:06 ubuntu-rootless dockerd-rootless.sh[4298]: time="2021-05-31T05:06:06.491803304Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Da
emon option --bip can be used to set a preferred IP address"
May 31 05:06:06 ubuntu-rootless dockerd-rootless.sh[4298]: time="2021-05-31T05:06:06.545120353Z" level=info msg="Loading containers: done."
May 31 05:06:06 ubuntu-rootless dockerd-rootless.sh[4298]: time="2021-05-31T05:06:06.556912719Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performanc
e for building images: running in a user namespace" storage-driver=overlay2
May 31 05:06:06 ubuntu-rootless dockerd-rootless.sh[4298]: time="2021-05-31T05:06:06.557189864Z" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
May 31 05:06:06 ubuntu-rootless dockerd-rootless.sh[4298]: time="2021-05-31T05:06:06.557347334Z" level=info msg="Daemon has completed initialization"
May 31 05:06:06 ubuntu-rootless dockerd-rootless.sh[4298]: time="2021-05-31T05:06:06.590839318Z" level=info msg="API listen on /run/user/1003/docker.sock"
+ DOCKER_HOST=unix:///run/user/1003/docker.sock /usr/bin/docker version
Client: Docker Engine - Community
 Version:           20.10.6
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        370c289
 Built:             Fri Apr  9 22:48:16 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr  9 22:46:27 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
+ systemctl --user enable docker.service
Created symlink /home/ajeet_raina/.config/systemd/user/default.target.wants/docker.service → /home/ajeet_raina/.config/systemd/user/docker.service.
[INFO] Installed docker.service successfully.
[INFO] To control docker.service, run: `systemctl --user (start|stop|restart) docker.service`
[INFO] To run docker.service on system startup, run: `sudo loginctl enable-linger ajeet_raina`
[INFO] Creating CLI context "rootless"
Successfully created context "rootless"
[INFO] Make sure the following environment variables are set (or add them to ~/.bashrc):
export PATH=/usr/bin:$PATH
export DOCKER_HOST=unix:///run/user/1003/docker.sock

If dockerd-rootless-setuptool.sh is not present, you may need to install the docker-ce-rootless-extras package manually.
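On Debian/Ubuntu systems with the Docker apt repository already configured, that typically looks like the following (adjust the package manager for your distribution):

# Install the rootless extras for Docker CE (Debian/Ubuntu)
sudo apt-get install -y docker-ce-rootless-extras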

Make sure the following environment variables are set (or add them to ~/.bashrc):

export PATH=/usr/bin:$PATH
export DOCKER_HOST=unix:///run/user/1003/docker.sock

The systemd unit file is installed as ~/.config/systemd/user/docker.service.

Use systemctl --user to manage the lifecycle of the daemon:

$ systemctl --user start docker
$ systemctl --user enable docker

To specify the CLI context using docker context:

docker context use rootless
rootless
Current context is now "rootless"
Warning: DOCKER_HOST environment variable overrides the active context. To use "rootless", either set the global --context flag, or unset DOCKER_HOST environment variable.

Running RedisInsight in a Docker container

ajeet_raina@ubuntu-rootless:~$ docker run -d -v redisinsight:/db -p 8001:8001 redislabs/redisinsight:latest
Unable to find image 'redislabs/redisinsight:latest' locally
latest: Pulling from redislabs/redisinsight
bd8f6a7501cc: Pull complete 
44718e6d535d: Pull complete 
efe9738af0cb: Pull complete 
f37aabde37b8: Pull complete 
3923d444ed05: Pull complete 
a389cd00f6ac: Pull complete 
635fef62bb79: Pull complete 
d620e4e17484: Pull complete 
e2ee94785e13: Pull complete 
48b3e278075c: Pull complete 
100ed91c31ae: Pull complete 
55c329231ae6: Pull complete 
96d8432c61ad: Pull complete 
1ed83d76beb2: Pull complete 
b9f7ffeff2f8: Pull complete 
Digest: sha256:fd4bff16761308521952e802e1ac1fcafb0d78088c508cf3762754aa954c7009
Status: Downloaded newer image for redislabs/redisinsight:latest
e3e60f1a06066af7d990464788d64b2e7e837dddd00fbc2a473aafd5ec51a0c4
ajeet_raina@ubuntu-rootless:~$ docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED              STATUS              PORTS                                       NAMES
e3e60f1a0606   redislabs/redisinsight:latest   "bash ./docker-entry…"   About a minute ago   Up About a minute   0.0.0.0:8001->8001/tcp, :::8001->8001/tcp   musing_pike
ajeet_raina@ubuntu-rootless:~$ 

What’s Next?

References:

Docker Compose now shipped with Docker CLI by default

Last year, Dockercon attracted 78,000 registrants and 21,000 conversations across 193 countries. This year, it was an even bigger event, attracting over 90,000 attendees. With tons of speakers, tracks, and community rooms, it was a busy one-day event with exciting announcements, talks, and community room sessions. This year, Docker Compose 2.0 was announced for the first time as a first-class citizen of the Docker CLI. It was a long-awaited feature, and the Docker community was happy to see it shipped with Docker Desktop for Mac and Windows by default.

In this blog post, I will share the overall experience around the newer docker compose CLI.

#1 You don’t need to install Docker Compose as a separate package

Yes, you heard it right. Docker Desktop for Mac and for Windows version 3.2.1 and above includes the new Compose command along with the Docker CLI. Therefore, Windows and Mac users do not need to install the Compose CLI separately.

docker compose version
Docker Compose version 2.0.0-beta.1

#2 Full Compatibility with older Docker Compose CLI

The Docker CLI now supports the compose command, including most of the docker-compose features and flags, without the need for a separate tool. Simply replace the dash (-) in docker-compose with a space to switch over to docker compose. You can also use the two interchangeably, so you are not locked in to the new compose command and, if needed, you can still fall back to docker-compose.

$ docker-compose up -d
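
The equivalent command with the new CLI is identical apart from the space (a quick usage example):

$ docker compose up -d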

#3 You can now specify the name of the project flawlessly

The newer Compose v2 comes with CLI option to specify the project name as shown below:

 % docker compose --project-name --help

Usage:  docker compose [OPTIONS] COMMAND

Docker Compose

Options:
      --ansi string                Control when to print ANSI control characters ("never"|"always"|"auto") (default "auto")
      --env-file string            Specify an alternate environment file.
  -f, --file stringArray           Compose configuration files
      --profile stringArray        Specify a profile to enable
      --project-directory string   Specify an alternate working directory
                                   (default: the path of the Compose file)
  -p, --project-name string        Project name

Let us try out a simple WordPress example. Below is a sample WordPress Docker Compose file.

version: "3.6"

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "80:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_PASSWORD: wordpress

volumes:
    db_data:

Let us use the newer Docker Compose CLI and try to run the WordPress app using a new project name.

 % docker compose --project-name demo up -d
[+] Running 2/2
 ⠿ Container demo_db_1         Running                                                                                                  0.0s
 ⠿ Container demo_wordpress_1  Running

Verify the services:

% docker compose ls
NAME                STATUS
demo                running(2)

Please note: In earlier releases of Docker Compose, you had to run the Compose CLI from the specific directory where the compose file resided. With the newer release, you don't need to worry about that.
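
For example, once the project has been started with an explicit name, you can query it from any directory by passing that name (a usage sketch, assuming the demo project from above is still running):

% docker compose -p demo ps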

#4 The docker compose "ls" vs "ps"

With the newer Docker Compose CLI, there are two distinct subcommands – "ls" and "ps". The "ls" subcommand lists running Compose projects, whereas "ps" lists containers.

ajeetraina@Ajeets-MacBook-Pro wordpress % docker compose up -d
[+] Running 3/3
 ⠿ Network wordpress_default        Created                                                                                             3.9s
 ⠿ Container wordpress_db_1         Started                                                                                             2.2s
 ⠿ Container wordpress_wordpress_1  Started                                                                                             5.8s
ajeetraina@Ajeets-MacBook-Pro wordpress % docker compose ps
NAME                    SERVICE             STATUS              PORTS
wordpress_db_1          db                  running             3306/tcp, 33060/tcp
wordpress_wordpress_1   wordpress           running             0.0.0.0:80->80/tcp, :::80->80/tcp
ajeetraina@Ajeets-MacBook-Pro wordpress % docker compose ls
NAME                STATUS
wordpress           running(2)

#5 Checking the logs

docker compose logs -f wordpress
wordpress_1  | WordPress not found in /var/www/html - copying now...
wordpress_1  | Complete! WordPress has been successfully copied to /var/www/html
wordpress_1  | No 'wp-config.php' found in /var/www/html, but 'WORDPRESS_...' variables supplied; copying 'wp-config-docker.php' (WORDPRESS_DB_HOST WORDPRESS_DB_PASSWORD)
wordpress_1  | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.21.0.3. Set the 'ServerName' directive globally to suppress this message
wordpress_1  | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.21.0.3. Set the 'ServerName' directive globally to suppress this message
wordpress_1  | [Fri May 28 23:49:11.020711 2021] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/7.4.19 configured -- resuming normal operations
wordpress_1  | [Fri May 28 23:49:11.020767 2021] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'

#6 A new Compose convert CLI

The new "compose convert" command converts the compose file to the platform's canonical format. It is an alias of the same config command that has long been available in the docker-compose binary.

docker compose convert --help

Usage:  docker compose convert SERVICES

Converts the compose file to platform's canonical format

Aliases:
  convert, config

Options:
      --format string           Format the output. Values: [yaml | json] (default "yaml")
      --hash string             Print the service config hash, one per line.
      --no-interpolate          Don't interpolate environment variables.
      --profiles                Print the profile names, one per line.
  -q, --quiet                   Only validate the configuration, don't print anything.
      --resolve-image-digests   Pin image tags to digests.
      --services                Print the service names, one per line.
      --volumes                 Print the volume names, one per line.
ajeetraina@Ajeets-MacBook-Pro wordpress % docker compose convert --services
wordpress
db
ajeetraina@Ajeets-MacBook-Pro wordpress % docker compose config --services
db
wordpress
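
Based on the --format flag shown in the help output above, you can also emit the fully resolved configuration as JSON (a quick usage sketch):

docker compose convert --format json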

References

Delivering Container-based Apps to IoT Edge devices | Dockercon 2021

Did you know? Dockercon 2021 was attended by 80,000 participants on the first day. It was an amazing experience hosting the "Docker for IoT Edge devices" community room sessions for the first time at Dockercon 2021. In total, there were 6 speakers who delivered talks about IoT, ARM, 5G, Docker & Edge. I kicked off the day with a talk around "Delivering Container-based apps to IoT Edge devices". I discussed how Docker Buildx can be used to build ARM-based Docker images on your MacBook and push them to Docker Hub so that they can be deployed on IoT Edge devices like the Jetson Nano.

During the session, I discussed Docker access to NVIDIA Jetson Nano GPU devices using Docker Compose. This was one of the feature requests I raised two years back that was finally introduced early this year.

I spent a considerable amount of time on "Docker Buildx" – a CLI plugin that extends the docker command with full support for the features provided by the Moby BuildKit builder toolkit. It provides the same user experience as docker build, with many new features like creating scoped builder instances and building against multiple nodes concurrently. Please note that Docker Buildx is included in Docker Desktop and in the Docker Linux packages when installed using the DEB or RPM packages.

Next, I showcased a short demo around managing sensor data using Docker + NVIDIA Jetson Nano + BME680 and plotted temperature, pressure, and humidity values over Grafana.

One of the most appreciated topics was the "Crowd Mask detection system". If you have an NVIDIA Jetson Nano lying around, just connect a USB camera to your board and try the command below; you are surely going to love it.

sudo docker run --runtime nvidia --privileged --rm -it --env MASKCAM_DEVICE_ADDRESS=<your-jetson-ip> -p 1883:1883 -p 8080:8080 -p 8554:8554 maskcam/maskcam-beta

References:

What are Kubernetes Pods? | KubeLabs Glossary

Kubernetes (commonly referred to as K8s) is an orchestration engine for container technologies such as Docker and rkt that has been taking over the DevOps scene over the last couple of years. It is already available on Azure and Google Cloud as a managed service.

Kubernetes can speed up the development process by making deployments and updates (rolling updates) easy and automated, and by managing our apps and services with almost zero downtime. It also provides self-healing: Kubernetes can detect and restart services when a process crashes inside a container. Kubernetes was originally developed by Google; it has been open source since its launch and is managed by a large community of contributors.

Quick Overview:

  • Kubernetes pods are the foundational unit for all higher Kubernetes objects.
  • A pod hosts one or more containers.
  • It can be created using either a command or a YAML/JSON file.
  • Use kubectl to create pods, view the running ones, modify their configuration, or terminate them. Kubernetes will attempt to restart a failing pod by default.
  • If the pod fails to start indefinitely, we can use the kubectl describe command to know what went wrong.

Why does Kubernetes use a Pod as the smallest deployable unit, and not a single container?

While it would seem simpler to just deploy a single container directly, there are good reasons to add a layer of abstraction represented by the Pod. A container is an existing entity, which refers to a specific thing. That specific thing might be a Docker container, but it might also be a rkt container, or a VM managed by Virtlet. Each of these has different requirements.

What’s more, to manage a container, Kubernetes needs additional information, such as a restart policy, which defines what to do with a container when it terminates, or a liveness probe, which defines an action to detect if a process in a container is still alive from the application’s perspective, such as a web server responding to HTTP requests.

Instead of overloading the existing “thing” with additional properties, Kubernetes architects have decided to use a new entity, the Pod, that logically contains (wraps) one or more containers that should be managed as a single entity.

Why does Kubernetes allow more than one container in a Pod?

Containers in a Pod run on a “logical host”; they use the same network namespace (in other words, the same IP address and port space), and the same IPC namespace. They can also use shared volumes. These properties make it possible for these containers to efficiently communicate, ensuring data locality. Also, Pods enable you to manage several tightly coupled application containers as a single unit.

So if an application needs several containers running on the same host, why not just make a single container with everything you need? Well, first, you're likely to violate the "one process per container" principle. This is important because with multiple processes in the same container, it is harder to troubleshoot the container: logs from different processes get mixed together, and it is harder to manage the process lifecycle – for example, taking care of "zombie" processes when their parent process dies. Second, using several containers for an application is simpler, more transparent, and enables decoupling of software dependencies. Also, more granular containers can be reused between teams.

Pre-requisite:

Steps

git clone https://github.com/collabnix/kubelabs
cd kubelabs/pods101
kubectl apply -f pods01.yaml
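
For reference, pods01.yaml defines a single-container Pod named webserver running nginx:latest on port 80. The definition below is reconstructed from the kubectl describe and JSON output further down in this section, so the actual file in the repository may contain additional fields:

apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - name: webserver
    image: nginx:latest
    ports:
    - containerPort: 80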

Viewing Your Pods

kubectl get pods

Which Node Is This Pod Running On?

kubectl get pods -o wide
$ kubectl describe po webserver
Name:               webserver
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               gke-standard-cluster-1-default-pool-78257330-5hs8/10.128.0.3
Start Time:         Thu, 28 Nov 2019 13:02:19 +0530
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"webserver","namespace":"default"},"spec":{"containers":[{"image":"ngi...
                    kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container webserver
Status:             Running
IP:                 10.8.0.3
Containers:
  webserver:
    Container ID:   docker://ff06c3e6877724ec706485374936ac6163aff10822246a40093eb82b9113189c
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:189cce606b29fb2a33ebc2fcecfa8e33b0b99740da4737133cdbcee92f3aba0a
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 28 Nov 2019 13:02:25 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mpxxg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-mpxxg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mpxxg
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age    From                                                        Message
  ----    ------     ----   ----                                                        -------
  Normal  Scheduled  2m54s  default-scheduler                                           Successfully assigned default/webserver to gke-standard-cluster-1-default-pool-78257330-5hs8
  Normal  Pulling    2m53s  kubelet, gke-standard-cluster-1-default-pool-78257330-5hs8  pulling image "nginx:latest"
  Normal  Pulled     2m50s  kubelet, gke-standard-cluster-1-default-pool-78257330-5hs8  Successfully pulled image "nginx:latest"
  Normal  Created    2m48s  kubelet, gke-standard-cluster-1-default-pool-78257330-5hs8  Created container
  Normal  Started    2m48s  kubelet, gke-standard-cluster-1-default-pool-78257330-5hs8  Started container

Output in JSON

$ kubectl get pods -o json
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"name\":\"webserver\",\"namespace\":\"default\"},\"spec\":{\"con
tainers\":[{\"image\":\"nginx:latest\",\"name\":\"webserver\",\"ports\":[{\"containerPort\":80}]}]}}\n",
                    "kubernetes.io/limit-ranger": "LimitRanger plugin set: cpu request for container webserver"
                },
                "creationTimestamp": "2019-11-28T08:48:28Z",
                "name": "webserver",
                "namespace": "default",
                "resourceVersion": "20080",
                "selfLink": "/api/v1/namespaces/default/pods/webserver",
                "uid": "d8e0b56b-11bb-11ea-a1bf-42010a800006"
            },
            "spec": {
                "containers": [
                    {
                        "image": "nginx:latest",
                        "imagePullPolicy": "Always",
                        "name": "webserver",
                        "ports": [
                            {
                                "containerPort": 80,
                                "protocol": "TCP"
                            }
                        ],
                        "resources": {
                            "requests": {
                                "cpu": "100m"
                            }
                        },
                        "terminationMessagePath": "/dev/termination-log",
                        "terminationMessagePolicy": "File",
             

Executing Commands Against Pods

$ kubectl exec -it webserver -- /bin/bash
root@webserver:/#
root@webserver:/# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Please exit from the shell (/bin/bash) session.

root@webserver:/# exit

Deleting the Pod

$ kubectl delete -f pods01.yaml
pod "webserver" deleted

$ kubectl get po -o wide
No resources found.

Get logs of Pod

$ kubectl logs webserver

/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
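
To stream the logs continuously instead of printing them once, you can add the -f (follow) flag (a quick usage example):

kubectl logs -f webserver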

Adding a 2nd container to a Pod

In the microservices architecture, each module should live in its own space and communicate with other modules following a set of rules. But, sometimes we need to deviate a little from this principle. Suppose you have an Nginx web server running and we need to analyze its web logs in real-time. The logs we need to parse are obtained from GET requests to the web server. The developers created a log watcher application that will do this job and they built a container for it. In typical conditions, you’d have a pod for Nginx and another for the log watcher. However, we need to eliminate any network latency so that the watcher can analyze logs the moment they are available. A solution for this is to place both containers on the same pod.

Having both containers on the same pod allows them to communicate through the loopback interface (ifconfig lo) as if they were two processes running on the same host. They also share the same storage volume.

Let us see how a pod can host more than one container. Let's take a look at the pods02.yaml file. It contains the following lines:

apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
  - name: webserver
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: webwatcher
    image: afakharany/watcher:latest

Run the following command:

$ kubectl apply -f pods02.yaml
$ kubectl get po -o wide
NAME        READY   STATUS              RESTARTS   AGE   IP       NODE                                                NOMINATED NODE   READINESS GATES
webserver   0/2     ContainerCreating   0          13s   <none>   gke-standard-cluster-1-default-pool-78257330-5hs8   <none>           <none>
$ kubectl get po,svc,deploy
NAME            READY   STATUS    RESTARTS   AGE
pod/webserver   2/2     Running   0          3m6s
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.12.0.1    <none>        443/TCP   107m
$ kubectl get po -o wide
NAME        READY   STATUS    RESTARTS   AGE     IP         NODE                                                NOMINATED NODE   READINESS GATES
webserver   2/2     Running   0          3m37s   10.8.0.5   gke-standard-cluster-1-default-pool-78257330-5hs8   <none>           <none>

How to verify 2 containers are running inside a Pod?

$ kubectl describe po
Containers:
  webserver:
    Container ID:   docker://0564fcb88f7c329610e7da24cba9de6555c0183814cf517e55d2816c6539b829
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:36b77d8bb27ffca25c7f6f53cadd059aca2747d46fb6ef34064e31727325784e
    Port:           80/TCP
    State:          Running
      Started:      Wed, 08 Jan 2020 13:21:57 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xhgmm (ro)
  webwatcher:
    Container ID:   docker://4cebbb220f7f9695f4d6492509e58152ba661f3ab8f4b5d0a7adec6c61bdde26
    Image:          afakharany/watcher:latest
    Image ID:       docker-pullable://afakharany/watcher@sha256:43d1b12bb4ce6e549e85447678a28a8e7b9d4fc398938a6f3e57d2908a9b7d80
    Port:           <none>
    State:          Running
      Started:      Wed, 08 Jan 2020 13:22:26 +0530
    Ready:          True
    Restart Count:  0
    Requests:

Since we have two containers in a pod, we will need to use the -c option with kubectl when we need to address a specific container. For example:

$ kubectl exec -it webserver -c webwatcher -- /bin/bash

root@webserver:/# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.8.0.5        webserver

Please exit from the shell (/bin/bash) session.

root@webserver:/# exit
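
The same -c flag applies to other commands as well; for example, to fetch the logs of just the webwatcher container (a usage sketch):

kubectl logs webserver -c webwatcher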

Cleaning up

kubectl delete -f pods02.yaml

Example of Multi-Container Pod

Let’s talk about communication between containers in a Pod. Having multiple containers in a single Pod makes it relatively straightforward for them to communicate with each other. They can do this using several different methods.

Use Cases for Multi-Container Pods

The primary purpose of a multi-container Pod is to support co-located, co-managed helper processes for a primary application. There are some general patterns for using helper processes in Pods:

Sidecar containers help the main container. Some examples include log or data change watchers, monitoring adapters, and so on. A log watcher, for example, can be built once by a different team and reused across different applications. Another example of a sidecar container is a file or data loader that generates data for the main container.

Proxies, bridges, and adapters connect the main container with the external world. For example, Apache HTTP server or nginx can serve static files. It can also act as a reverse proxy to a web application in the main container to log and limit HTTP requests. Another example is a helper container that re-routes requests from the main container to the external world. This makes it possible for the main container to connect to the localhost to access, for example, an external database, but without any service discovery.

Shared volumes in a Kubernetes Pod

In Kubernetes, you can use a shared Kubernetes Volume as a simple and efficient way to share data between containers in a Pod. For most cases, it is sufficient to use a directory on the host that is shared with all containers within a Pod.

Kubernetes Volumes enables data to survive container restarts, but these volumes have the same lifetime as the Pod. That means that the volume (and the data it holds) exists exactly as long as that Pod exists. If that Pod is deleted for any reason, even if an identical replacement is created, the shared Volume is also destroyed and created anew.

A standard use case for a multi-container Pod with a shared Volume is when one container writes logs or other files to the shared directory, and the other container reads from the shared directory. For example, we can create a Pod like so (pods03.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: mc1
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: 1st
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: 2nd
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
          date >> /html/index.html;
          sleep 1;
        done

In this file (pods03.yaml) a volume named html has been defined. Its type is emptyDir, which means that the volume is first created when a Pod is assigned to a node, and exists as long as that Pod is running on that node. As the name says, it is initially empty. The 1st container runs an nginx server and has the shared volume mounted to the directory /usr/share/nginx/html. The 2nd container uses the Debian image and has the shared volume mounted to the directory /html. Every second, the 2nd container adds the current date and time into the index.html file, which is located in the shared volume. When the user makes an HTTP request to the Pod, the Nginx server reads this file and returns it to the user in response to the request.

kubectl apply -f pods03.yaml
[Captains-Bay]🚩 >  kubectl get po,svc
NAME      READY     STATUS    RESTARTS   AGE
po/mc1    2/2       Running   0          11s

NAME             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.15.240.1   <none>        443/TCP   1h
[Captains-Bay]🚩 >  kubectl describe po mc1
Name:         mc1
Namespace:    default
Node:         gke-k8s-lab1-default-pool-fd9ef5ad-pc18/10.140.0.16
Start Time:   Wed, 08 Jan 2020 14:29:08 +0530
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"mc1","namespace":"default"},"spec":{"containers":[{"image":"nginx","name":"1st","v...
              kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container 1st; cpu request for container 2nd
Status:       Running
IP:           10.12.2.6
Containers:
  1st:
    Container ID:   docker://b08eb646f90f981cd36c605bf8fead3ca62178c7863598fd4558cb026ed067dd
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:36b77d8bb27ffca25c7f6f53cadd059aca2747d46fb6ef34064e31727325784e
    Port:           <none>
    State:          Running
      Started:      Wed, 08 Jan 2020 14:29:09 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /usr/share/nginx/html from html (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xhgmm (ro)
  2nd:
    Container ID:  docker://63180b4128d477810d6062342f4b8e499de684ffd69ad245c29118e1661eafcb
    Image:         debian
    Image ID:      docker-pullable://debian@sha256:c99ed5d068d4f7ff36c7a6f31810defebecca3a92267fefbe0e0cf2d9639115a
    Port:          <none>
    Command:
      /bin/sh
      -c
    Args:
      while true; do date >> /html/index.html; sleep 1; done
    State:          Running
      Started:      Wed, 08 Jan 2020 14:29:14 +0530
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /html from html (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-xhgmm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  html:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:  
  default-token-xhgmm:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-xhgmm
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                                              Message
  ----    ------     ----  ----                                              -------
  Normal  Scheduled  18s   default-scheduler                                 Successfully assigned default/mc1 to gke-k8s-lab1-default-pool-fd9ef5ad-pc18
  Normal  Pulling    17s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  pulling image "nginx"
  Normal  Pulled     17s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Successfully pulled image "nginx"
  Normal  Created    17s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Created container
  Normal  Started    17s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Started container
  Normal  Pulling    17s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  pulling image "debian"
  Normal  Pulled     13s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Successfully pulled image "debian"
  Normal  Created    12s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Created container
  Normal  Started    12s   kubelet, gke-k8s-lab1-default-pool-fd9ef5ad-pc18  Started container
$ kubectl exec mc1 -c 1st -- /bin/cat /usr/share/nginx/html/index.html
...
Wed Jan  8 08:59:14 UTC 2020
Wed Jan  8 08:59:15 UTC 2020
Wed Jan  8 08:59:16 UTC 2020
 
$ kubectl exec mc1 -c 2nd -- /bin/cat /html/index.html
...
Wed Jan  8 08:59:14 UTC 2020
Wed Jan  8 08:59:15 UTC 2020
Wed Jan  8 08:59:16 UTC 2020

Cleaning Up

kubectl delete -f pods03.yaml

References:

  • https://kubelabs.collabnix.com
  • https://kubetools.collabnix.com

Running a Web Browser in a Docker container

Are you still looking for a solution that allows you to open multiple web browsers in Docker containers at the same time? Most people use Docker as a standard unit of software that packages up code and all its dependencies so that the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings.

Today, all major cloud providers and leading open source serverless frameworks use Docker, and many are leveraging Docker for their container-native IaaS offerings. Docker makes development efficient and predictable. Docker takes away repetitive, mundane configuration tasks and is used throughout the development lifecycle for fast, easy, and portable application development – desktop and cloud. Docker’s comprehensive end-to-end platform includes UIs, CLIs, APIs, and security that are engineered to work together across the entire application delivery lifecycle.

Let us agree that browser testing can be a pain for even the most seasoned testers, and it's particularly challenging for newbies. In today's world, an application behaves differently in different browsers, at different resolutions, and sometimes on different operating systems. Running these web browsers in Docker containers can give new testers a running start and expedite their testing effort.

In this blog, I will show you how to run the Firefox web browser in a Docker container. This has been tested on Docker Desktop for Mac.

Pre-requisite

  • Install Docker Desktop
  • Enable File Sharing under Docker Desktop > Preferences as shown below:

The GUI of the application is accessed through a modern web browser (no installation or configuration needed on the client side) or via any VNC client.

Launch the Firefox docker container with the following command:

 % docker run -d \    
    --name=firefox \
    -p 5800:5800 \
    -v /Users/ajeetraina/datas:/config:rw \
    --shm-size 2g \
    jlesage/firefox
30220d1c22fa13ce24eb2ddaad7de88f67eefb8ce9388701aa5666795561b8eb

Open http://localhost:5800 and access Firefox over the browser.

You can check what's happening in the backend by running the command below:

% docker logs -f 3022
The VNC desktop is:      30220d1c22fa:0
PORT=5900

******************************************************************************
Have you tried the x11vnc '-ncache' VNC client-side pixel caching feature yet?

The scheme stores pixel data offscreen on the VNC viewer side for faster
retrieval.  It should work with any VNC viewer.  Try it by running:

    x11vnc -ncache 10 ...

One can also add -ncache_cr for smooth 'copyrect' window motion.
More info: http://www.karlrunge.com/x11vnc/faq.html#faq-client-caching

Mozilla Firefox 84.0.2
01/05/2021 06:22:38 Got connection from client 127.0.0.1
01/05/2021 06:22:38   other clients:
01/05/2021 06:22:38 Got 'ws' WebSockets handshake
01/05/2021 06:22:38 Got protocol: binary
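
The log above also shows a VNC server listening on port 5900 inside the container. If you prefer a native VNC client over the web UI, you can publish that port as well (a sketch, not part of the original walkthrough):

 % docker run -d \
    --name=firefox \
    -p 5800:5800 \
    -p 5900:5900 \
    -v /Users/ajeetraina/datas:/config:rw \
    --shm-size 2g \
    jlesage/firefox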

Click on "Dashboard" to see what percentage of resources this Docker container is consuming:

Click on the console icon next to the stats to open the browser.

You can also click the terminal icon next to the browser option to enter the Firefox container shell:

ajeetraina@Ajeets-MacBook-Pro ~ % docker exec -it 30220d1c22fa13ce24eb2ddaad7de88f67eefb8ce9388701aa5666795561b8eb /bin/sh
/tmp # free -m
              total        used        free      shared  buff/cache   available
Mem:           1987        1365          82         218         538         359
Swap:          1023         475         548
/tmp # cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.12.6
PRETTY_NAME="Alpine Linux v3.12"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
/tmp # 

Further References

Running Minecraft in Rootless Mode under Docker 20.10.6

Rootless mode was introduced in Docker Engine v19.03 as an experimental feature for the first time. Rootless mode graduated from experimental in Docker Engine v20.10.

Rootless mode allows running the Docker daemon and containers as a non-root user to mitigate potential vulnerabilities in the daemon and the container runtime. The rootless mode does not require root privileges even during the installation of the Docker daemon, as long as the prerequisites are met.


How it works

Rootless mode executes the Docker daemon and containers inside a user namespace. This is very similar to userns-remap mode, except that with userns-remap mode, the daemon itself is running with root privileges, whereas in rootless mode, both the daemon and the container are running without root privileges.

Rootless mode does not use binaries with SETUID bits or file capabilities, except newuidmap and newgidmap, which are needed to allow multiple UIDs/GIDs to be used in the user namespace.

Pre-requisite

  • Google Cloud Platform
  • Ubuntu 20.04 LTS
sudo curl -sSL https://get.docker.com/ | sh
# Executing docker install script, commit: 7cae5f8b0decc17d6571f9f52eb840fbc13b2737
+ sudo -E sh -c apt-get update -qq >/dev/null
+ sudo -E sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
+ sudo -E sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | apt-key add -qq - >/dev/null
Warning: apt-key output should not be parsed (stdout is not a terminal)
+ sudo -E sh -c echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" > /etc/apt/sources.list.d/docker.list
+ sudo -E sh -c apt-get update -qq >/dev/null
+ [ -n  ]
+ sudo -E sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/null
+ [ -n 1 ]
+ sudo -E sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce-rootless-extras >/dev/null
+ sudo -E sh -c docker version
Client: Docker Engine - Community
 Version:           20.10.6
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        370c289
 Built:             Fri Apr  9 22:47:17 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr  9 22:45:28 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
================================================================================
To run Docker as a non-privileged user, consider setting up the
Docker daemon in rootless mode for your user:
    dockerd-rootless-setuptool.sh install
Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.
To run the Docker daemon as a fully privileged service, but granting non-root
users access, refer to https://docs.docker.com/go/daemon-access/
WARNING: Access to the remote API on a privileged Docker daemon is equivalent to root access on the host. Refer to the 'Docker daemon attack surface'
         documentation for details: https://docs.docker.com/go/attack-surface/

Ensure that you have newuidmap and newgidmap CLI installed on your host system. These commands are provided by the uidmap package on most distros.

sudo apt install uidmap
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  libnuma1
Use 'sudo apt autoremove' to remove it.
The following NEW packages will be installed:
  uidmap
0 upgraded, 1 newly installed, 0 to remove and 1 not upgraded.
Need to get 26.3 kB of archives.
After this operation, 171 kB of additional disk space will be used.
Get:1 http://asia-south1.gce.archive.ubuntu.com/ubuntu focal-updates/universe amd64 uidmap amd64 1:4.8.1-1ubuntu5.20.04 [26.3 kB]
Fetched 26.3 kB in 1s (33.7 kB/s) 
Selecting previously unselected package uidmap.
(Reading database ... 63405 files and directories currently installed.)
Preparing to unpack .../uidmap_1%3a4.8.1-1ubuntu5.20.04_amd64.deb ...
Unpacking uidmap (1:4.8.1-1ubuntu5.20.04) ...
Setting up uidmap (1:4.8.1-1ubuntu5.20.04) ...
Processing triggers for man-db (2.9.1-1) ...

If you installed Docker 20.10 or later with RPM/DEB packages, you should have dockerd-rootless-setuptool.sh in /usr/bin.

Run dockerd-rootless-setuptool.sh install as a non-root user to set up the daemon:

dockerd-rootless-setuptool.sh install
[INFO] Creating /home/docker_captain_india/.config/systemd/user/docker.service
[INFO] starting systemd service docker.service
+ systemctl --user start docker.service
+ sleep 3
+ systemctl --user --no-pager --full status docker.service
● docker.service - Docker Application Container Engine (Rootless)
     Loaded: loaded (/home/docker_captain_india/.config/systemd/user/docker.service; disabled; vendor preset: enabled)
     Active: active (running) since Sat 2021-04-17 07:17:32 UTC; 3s ago
       Docs: https://docs.docker.com/go/rootless/
   Main PID: 4360 (rootlesskit)
     CGroup: /user.slice/user-1001.slice/user@1001.service/docker.service
             ├─4360 rootlesskit --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run --propagation=rslave /usr/bin/dockerd
-rootless.sh
             ├─4376 /proc/self/exe --net=slirp4netns --mtu=65520 --slirp4netns-sandbox=auto --slirp4netns-seccomp=auto --disable-host-loopback --port-driver=builtin --copy-up=/etc --copy-up=/run --propagation=rslave /usr/bin/dock
erd-rootless.sh
             ├─4391 slirp4netns --mtu 65520 -r 3 --disable-host-loopback --enable-sandbox --enable-seccomp 4376 tap0
             ├─4399 dockerd
             └─4414 containerd --config /run/user/1001/docker/containerd/containerd.toml --log-level info
Apr 17 07:17:32 minecraft dockerd-rootless.sh[4399]: time="2021-04-17T07:17:32.518269518Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Apr 17 07:17:32 minecraft dockerd-rootless.sh[4399]: time="2021-04-17T07:17:32.518280114Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Apr 17 07:17:32 minecraft dockerd-rootless.sh[4399]: time="2021-04-17T07:17:32.518289157Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Apr 17 07:17:32 minecraft dockerd-rootless.sh[4399]: time="2021-04-17T07:17:32.518523007Z" level=info msg="Loading containers: start."
Apr 17 07:17:32 minecraft dockerd-rootless.sh[4399]: time="2021-04-17T07:17:32.820716510Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred 
IP address"
Apr 17 07:17:33 minecraft dockerd-rootless.sh[4399]: time="2021-04-17T07:17:33.100263067Z" level=info msg="Loading containers: done."
Apr 17 07:17:33 minecraft dockerd-rootless.sh[4399]: time="2021-04-17T07:17:33.115802490Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: running in a user namespac
e" storage-driver=overlay2
Apr 17 07:17:33 minecraft dockerd-rootless.sh[4399]: time="2021-04-17T07:17:33.116190065Z" level=info msg="Docker daemon" commit=8728dd2 graphdriver(s)=overlay2 version=20.10.6
Apr 17 07:17:33 minecraft dockerd-rootless.sh[4399]: time="2021-04-17T07:17:33.116364504Z" level=info msg="Daemon has completed initialization"
Apr 17 07:17:33 minecraft dockerd-rootless.sh[4399]: time="2021-04-17T07:17:33.163000086Z" level=info msg="API listen on /run/user/1001/docker.sock"
+ DOCKER_HOST=unix:///run/user/1001/docker.sock /usr/bin/docker version
Client: Docker Engine - Community
 Version:           20.10.6
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        370c289
 Built:             Fri Apr  9 22:47:17 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.6
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       8728dd2
  Built:            Fri Apr  9 22:45:28 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.4
  GitCommit:        05f951a3781f4f2c1911b05e61c160e9c30eaa8e
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
+ systemctl --user enable docker.service
Created symlink /home/docker_captain_india/.config/systemd/user/default.target.wants/docker.service → /home/docker_captain_india/.config/systemd/user/docker.service.
[INFO] Installed docker.service successfully.
[INFO] To control docker.service, run: `systemctl --user (start|stop|restart) docker.service`
[INFO] To run docker.service on system startup, run: `sudo loginctl enable-linger docker_captain_india`
[INFO] Creating CLI context "rootless"
Successfully created context "rootless"
[INFO] Make sure the following environment variables are set (or add them to ~/.bashrc):
export PATH=/usr/bin:$PATH
export DOCKER_HOST=unix:///run/user/1001/docker.sock

If dockerd-rootless-setuptool.sh is not present, you may need to install the docker-ce-rootless-extras package manually, e.g.,

sudo apt-get install -y docker-ce-rootless-extras
Reading package lists... Done
Building dependency tree       
Reading state information... Done
docker-ce-rootless-extras is already the newest version (5:20.10.6~3-0~ubuntu-focal).
The following package was automatically installed and is no longer required:
  libnuma1
Use 'sudo apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

The systemd unit file is installed as ~/.config/systemd/user/docker.service.

Use systemctl --user to manage the lifecycle of the daemon:

$ systemctl --user start docker
$ systemctl --user enable docker

If you try to run an Nginx container, you might still not be able to run it as a normal user.

docker_captain_india@minecraft:~$ docker run -d -p 8080:80 nginx
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create: dial unix /var/run/docker.sock: connect: permission
 denied.
See 'docker run --help'.

You need to specify either the socket path or the CLI context explicitly.

To specify the socket path using $DOCKER_HOST:

export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock

Now you can successfully run the Nginx container without sudo or the root user:

docker run -d -p 8080:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
f7ec5a41d630: Pull complete 
aa1efa14b3bf: Pull complete 
b78b95af9b17: Pull complete 
c7d6bca2b8dc: Pull complete 
cf16cd8e71e0: Pull complete 
0241c68333ef: Pull complete 
Digest: sha256:75a55d33ecc73c2a242450a9f1cc858499d468f077ea942867e662c247b5e412
Status: Downloaded newer image for nginx:latest
795702f7965de6141ade74add2ed3c7d3881ae79b15924d5e5a55154ad2a6732
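
You can quickly verify that Nginx is reachable on the published port (a usage sketch):

curl -I http://localhost:8080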

To specify the CLI context using docker context:

docker context use rootless
rootless
Current context is now "rootless"
Warning: DOCKER_HOST environment variable overrides the active context. To use "rootless", either set the global --context flag, or unset DOCKER_HOST environment variable.

Let us run Minecraft in rootless mode:

docker run -d -p 25565:25565 -e EULA=true -e ONLINE_MODE=false -e DIFFICULTY=hard -e OPS=collabnix  -e MAX_PLAYERS=50 -e MOTD="welcome to Raina World" -v /tmp/minecraft_data:/data --name mc itzg/minecraft-server
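
To confirm that the Minecraft server has come up, you can follow the container logs (a quick usage example):

docker logs -f mc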

References:

Running RedisInsight using Docker Compose

RedisInsight is an intuitive and efficient GUI for Redis, allowing you to interact with your databases and manage your data, with built-in support for the most popular Redis modules. It runs on Windows, macOS and Linux, and can even be run as a Docker container or in Kubernetes Pods.

Last week, I came across a StackOverflow question about RedisInsight in Docker Compose and decided to write a blog post detailing the solution.



Here's a quick 5-minute guide to help you get RedisInsight up and running using a Compose file.

Step 1. Get Docker Desktop Ready

If you are on macOS, ensure that you install Docker Desktop on your system. Go to the Docker Desktop download page and choose "Download for Mac" for a quick installation.

Once you complete the installation, you should be able to verify by clicking “About Docker Desktop” on the whale icon.

You can verify it via CLI too:

 docker version
Client: Docker Engine - Community
 Cloud integration: 1.0.9
 Version:           20.10.5
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        55c4c88
 Built:             Tue Mar  2 20:13:00 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.5
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       363e9a8
  Built:            Tue Mar  2 20:15:47 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
ajeetraina@Ajeets-MacBook-Pro community % 

Step 2. Enable the File Sharing

Click the whale icon on the top bar, then select "Preferences" under Docker Desktop > select "Resources" > "File Sharing" and add the folder that you want to share. Please change the directory structure as per your environment.

For this demonstration, I have added the /data/redisinsight directory that I am planning to use in the Docker Compose file.

Step 3. Create a Docker Compose file

We will be creating a compose file with 2 services – redis and redisinsight. Add the below content to the Docker Compose file and save it.

version: '3'
services:
  redis:
    image: redislabs/redismod
    ports:
      - 6379:6379
  redisinsight:
    image: redislabs/redisinsight:latest
    ports:
      - '8001:8001'
    volumes:
      - /Users/ajeetraina/data/redisinsight:/db

Ensure that you don't have any Redis instance already running on port 6379. If there is one, change the host port in the compose file to some other value – for example, map 6380:6379 so that Redis is exposed on port 6380 instead.

Step 4. Execute the Compose CLI


docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.

Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.

To deploy your application across the swarm, use `docker stack deploy`.

Creating network "red_default" with the default driver
Creating red_redisinsight_1 ... done
Creating red_redis_1        ... done

Step 5. Verify that the RedisInsight container is running

docker-compose ps
          Name                        Command               State           Ports         
------------------------------------------------------------------------------------------
pinegraph_redis_1          redis-server --loadmodule  ...   Up      0.0.0.0:6379->6379/tcp
pinegraph_redisinsight_1   bash ./docker-entry.sh pyt ...   Up      0.0.0.0:8001->8001/tcp

Step 6. Accessing RedisInsight

Open http://<host-IP>:8001 to access the RedisInsight UI.

If you use "localhost" while adding the Redis database, there is a chance that you will face the below error message.

Instead, use your host IP address as shown below:

ifconfig en0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether 3c:22:fb:86:c1:b1 
	inet6 fe80::1cf6:84b:4aca:36ad%en0 prefixlen 64 secured scopeid 0x6 
	inet6 2401:4900:16e5:3dcc:184d:e209:e4ae:2ddc prefixlen 64 autoconf secured 
	inet6 2401:4900:16e5:3dcc:5c08:6a0d:bff2:b87f prefixlen 64 autoconf temporary 
	inet 192.168.43.81 netmask 0xffffff00 broadcast 192.168.43.255
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active

You should be able to connect to the Redis database this time.

Further References:

Getting Started with BME680 Sensor on NVIDIA Jetson Nano

The BME680 is a digital 4-in-1 sensor with gas, humidity, pressure, and temperature measurement based on proven sensing principles. The state-of-the-art BME680 breakout lets you measure temperature, pressure, humidity, and indoor air quality, and is Raspberry Pi and Arduino-compatible!

The sensor module is housed in an extremely compact metal-lid LGA package with a footprint of only 3.0 × 3.0 mm² with a maximum height of 1.00 mm (0.93 ± 0.07 mm). Its small dimensions and its low power consumption enable the integration in battery-powered or frequency-coupled devices, such as handsets or wearables.

Typical applications

You can use this breakout to monitor every aspect of your indoor environment. Its gas resistance readings will react to changes in volatile organic compounds and can be combined with humidity readings to give a measure of indoor air quality. Here is a list of typical applications:


  • Indoor air quality
  • Home automation and control
  • Internet of things
  • Weather forecast
  • GPS enhancement (e.g. time-to-first-fix improvement, dead reckoning, slope detection)
  • Indoor navigation (change of floor detection, elevator detection)
  • Outdoor navigation, leisure and sports applications
  • Vertical velocity indication (rise/sink speed)

Features

  • Measuring temperature, pressure, humidity, air quality sensor
  • I2C interface, with address select via ADDR solder bridge (0x76 or 0x77)
  • 3.3V or 5V compatible
  • Reverse polarity protection
  • Raspberry Pi-compatible pinout (pins 1, 3, 5, 7, 9)
  • Compatible with all models of Raspberry Pi, and Arduino
  • Python library
  • Datasheet

Let us see how we can get started with BME680 on the NVIDIA Jetson Nano board:

Pre-requisite:

  • BME680 sensor
  • Jetson Nano

Installing software

  • Use the i2cdetect command to detect the sensor
pico@pico1:~$ i2cdetect -r -y 1
     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
00:          -- -- -- -- -- -- -- -- -- -- -- -- -- 
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
70: -- -- -- -- -- -- 76 --  
  • Install python modules
sudo pip3 install bme680
sudo pip3 install smbus

Cloning this repository

git clone https://github.com/collabnix/ioetplanet
cd ioetplanet/nvidia/jetsonnano/sensors

Running the Script

 sudo python3 read-sensor.py 
read-all.py - Displays temperature, pressure, humidity, and gas.

Press Ctrl+C to exit!


Calibration data:
par_gh1: -28
par_gh2: -11838
par_gh3: 18
par_h1: 708
par_h2: 1023
par_h3: 0
par_h4: 45
par_h5: 20
par_h6: 120
par_h7: -100
par_p1: 35680
par_p10: 30
par_p2: -10322
par_p3: 88
par_p4: 7420
par_p5: -83
par_p6: 30
par_p7: 22
par_p8: -214
par_p9: -3593
par_t1: 26242
par_t2: 26396
par_t3: 3
range_sw_err: 14
res_heat_range: 1
res_heat_val: 40
t_fine: 148732


Initial reading:
gas_index: 0
gas_resistance: 12561247.871044775
heat_stable: False
humidity: 16.973
meas_index: 0
pressure: 965.64
status: 32
temperature: 29.05


Polling:
29.05 C,965.65 hPa,16.98 %RH
29.06 C,965.63 hPa,16.97 %RH,3110.845596017859 Ohms
29.09 C,965.63 hPa,16.96 %RH,3862.703345199333 Ohms
29.13 C,965.61 hPa,16.93 %RH,4644.9963354335405 Ohms
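
For reference, here is a minimal sketch of what such a polling script can look like, using the bme680 Python library installed above. It assumes the sensor sits at the primary I2C address 0x76 (as the i2cdetect output shows); the repository's read-sensor.py may differ:

import time
import bme680

# The sensor was detected at 0x76, which is the library's primary I2C address
sensor = bme680.BME680(bme680.I2C_ADDR_PRIMARY)

# Oversampling settings trade noise against measurement time
sensor.set_humidity_oversample(bme680.OS_2X)
sensor.set_pressure_oversample(bme680.OS_4X)
sensor.set_temperature_oversample(bme680.OS_8X)
sensor.set_filter(bme680.FILTER_SIZE_3)

while True:
    if sensor.get_sensor_data():
        print('{0:.2f} C, {1:.2f} hPa, {2:.2f} %RH'.format(
            sensor.data.temperature, sensor.data.pressure, sensor.data.humidity))
    time.sleep(1)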

How to run a NodeJS Application inside a Docker container on Raspberry Pi

This is a complete guide, starting from all the required installations to actually dockerizing and running a Node.js application. But first of all, why should we even dockerize our application?

  1. You can launch an entire development environment on any computer supporting Docker, which means you don't have to install libraries, dependencies, download packages, etc.
  2. Collaboration is really easy because of Docker. The environment of the application remains consistent across the whole workflow. This implies that the app runs precisely the same way for the developer, tester, and client, whether on a development, staging, or production server.

Now that we have a reason, let’s start with docker!

Install Node.js & Npm on your Pi

Run the following command in a terminal to find out which CPU architecture (and hence which Node.js build) you require.

uname -m

My required architecture is 'armv7'.

Download Node.js to your system.

Go to https://nodejs.org/en/download/ and copy the link for your required version.

Use wget in the terminal to download your version of Node.

wget https://nodejs.org/dist/v12.18.0/node-v12.18.0-linux-armv7l.tar.xz

Extracting the freshly downloaded archive

You will generally find your file in the 'Downloads' folder under the name 'node-v12.18.0-linux-armv7l.tar.xz' (12.18.0 is the version of Node I downloaded; your version could differ).

Now to extract the files, use tar

tar -xf node-v12.18.0-linux-armv7l.tar.xz

Make sure you change the command according to your package’s version

Now, go to the extracted folder:

cd node-v12.18.0-linux-armv7l/

Checking if node and npm have been properly installed

node -v
npm -v

If properly installed, these commands would return the versions of node and npm.

Create node app

To create the node.js app, first, we need a new directory where all our required files would reside.

mkdir docker-nodeapp
cd docker-nodeapp

Initialize Your NodeJS project with a package.json

This file will hold the dependencies of the app. Use the following command and create a package.json by pressing Enter at all the prompts.

npm init

Adding the Express Framework as the first dependency:

npm install express --save

This will add Express under dependencies in package.json and pull it into a node_modules folder.

Now create an empty ‘app.js’ file; together with package.json and node_modules, these are the three items you will have in the docker-nodeapp directory.

Open package.json and you will see that, under dependencies, express is listed with the installed version.

Let’s create our node application. Open app.js and copy the following code into the app.js file.

var express = require('express')
var app = express()

app.get('/', function (req, res) {
  res.send('Hello World!')
})

app.listen(8081, function () {
  console.log('app listening on port 8081!')
})

This is a simple node application with an HTTP server that will serve our Hello World website.

Now let’s run the app using node.

In a terminal, go to the docker-nodeapp directory and run app.js:

node app.js

You will see a log like this in your terminal: app listening on port 8081!

GREAT! You have deployed your node app.

You can view the app running in your browser at (http://localhost:8081/)

You will see your Hello World website is deployed!

Hello World Website running on your localhost

Install Docker on your Pi

To install Docker on your Raspberry Pi, make sure that SSH is enabled and that your OS is updated and upgraded.

Note: As opposed to most other Linux distributions, Raspberry Pi OS is based on the ARM architecture. Hence, not all Docker images will work on your Raspberry Pi.

To install Docker, run the following commands in the terminal:


curl -fsSL https://get.docker.com -o get-docker.sh


sudo sh get-docker.sh

Once installed, check the Docker version to make sure everything is properly installed:


docker -v

You will see your Docker version displayed.

REBOOT AFTER CHECKING THE VERSION

Let’s run the Docker hello-world image provided by Docker.


docker run hello-world

You should see something like this.

Great!

You just created a Docker hello-world container. This is simply a ‘hello world’ program running within a Docker container.

Create your Dockerfile

First of all, you will need to create an empty Dockerfile in the docker-nodeapp directory:


touch Dockerfile

Open the newly created file in your code editor and add the following content:


# Use the official Node.js 12 image as the base
FROM node:12
# Set the working directory inside the container
WORKDIR /app
# Copy package.json first so dependency installation can be cached
COPY package.json /app
# Install the app's dependencies
RUN npm install
# Copy the rest of the application source
COPY . .
# Document the port the app listens on
EXPOSE 8081
# Start the application
CMD node app.js

Create a .dockerignore file in the same directory as your Dockerfile and put the following lines in it:


node_modules
npm-debug.log

This will prevent your local modules and debug logs from being copied onto your Docker image and possibly overwriting modules installed within your image.

Build your Docker image

Build your Docker image using the ‘docker build’ command.


docker build -t docker-nodeapp /full/path/to

or if you are in your project directory, you just need to use a dot


docker build -t docker-nodeapp .

The -t flag lets you tag your image so it’s easier to find later using the docker images command

Now, to see your image listed by Docker, run docker images in the terminal:


docker images

Run the Docker image


docker run -p 8080:8081 -d docker-nodeapp

Running your image with -d runs the container in detached mode, leaving the container running in the background. The -p flag redirects a public port to a private port inside the container. Run the image you previously built.
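
If you prefer driving Docker from code, the same run can be expressed with the Docker SDK for Python (docker-py). This is not part of the tutorial's toolchain, just a small sketch that illustrates the same port mapping:

import docker

# Equivalent of `docker run -p 8080:8081 -d docker-nodeapp` via the Docker SDK for Python.
client = docker.from_env()

# ports maps the container's private port (8081) to the public host port (8080).
container = client.containers.run('docker-nodeapp', detach=True, ports={'8081/tcp': 8080})
print(container.short_id)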

Now you can go to http://localhost:8080/ and see your node app running within a Docker container. You will notice that the port has changed from 8081 to 8080: 8081 is the private port inside the container, and 8080 is the public port it is mapped to on the host.

There you go! You have successfully dockerized your application.

OSCONF 2021: Save the date

https://osconf2021.collabnix.com

OSCONF 2021 is just around the corner and the free registration is still open for all DevOps enthusiasts who want to remain abreast of the Cloud-Native technologies. In case you’re completely new to OSCONF, it is a platform built to connect & bring all leading Meetup communities, Cloud-native experts, evangelists & DevOps together under one roof. It is a non-profit community event that allows you to interact with community leaders, ambassadors & open community contributors.

With over 25,000 views on YouTube, 1000+ subscribers in just 4 months, and over 50+ communities, the Collabnix community will be conducting the 6th OSCONF this April 10th. Last year, we conducted community conferences for leading cities like Bangalore, Pune, Hyderabad, Kochi & Jaipur, and they were well-received.

Submit Your Talk

https://papercall.io

CFP for OSCONF 2021 is open. You are invited to submit your session and workshop ideas for this online virtual Conference. To keep it simple, we have the below topics which might help you in shaping your talk. Please note that the last date for submission of the talk is 31st March 2021.

Topics:

  • What’s New in Docker?
  • What’s New in Kubernetes Releases?
  • Kubernetes Best Practices
  • Docker Developer workflows (CI/CD)
  • Kubernetes Tools and Technologies
  • Serverless
  • Use Case: Setting up your local dev environment: Tips and Tricks
  • Use Case: How to onboard new developers – best practices
  • Use Case: Containerizing your microservices
  • Use Case: Containerizing legacy apps
  • Use Case: Using Docker to deploy machine learning models
  • Unique use cases and cool apps
  • Technical deep dives
  • Community Stories Open Source
  • Cloud-Native Applications
  • Automation
  • Artificial Intelligence(AI)
  • Machine Learning(ML)
  • Deep Learning(DL)
  • IoT

Check back in the coming days for more OSCONF 2021 details or visit the below links to know about our community partners, volunteers, and dates.

A Closer Look at AI Data Pipeline


According to a recent Gartner report, “By the end of 2024, 75% of organizations will shift from piloting to operationalizing artificial intelligence (AI), driving a 5x increase in streaming data analytics infrastructure.” The report, Top 10 Trends in Data and Analytics, 2020, further states, “Getting AI into production requires IT leaders to complement DataOps and ModelOps with infrastructures that enable end-users to embed trained models into streaming-data infrastructures to deliver continuous near-real-time predictions.”

What’s driving this tremendous shift to AI? The accelerated growth in the size and complexity of distributed data needed to optimize real-time decision making.

Realizing AI’s value, organizations are applying it to business problems by adopting intelligent applications and augmented analytics and exploring composite AI techniques. AI is finding its way into all aspects of applications, from AI-driven recommendations to autonomous vehicles, from virtual assistants, chatbots, and predictive analytics to products that adapt to the needs and preferences of users. 

Surprisingly, the hardest part of AI is not artificial intelligence itself, but dealing with AI data. The accelerated growth of data captured from the sensors in the internet of things (IoT) solutions and the growth of machine learning (ML) capabilities are yielding unparalleled opportunities for organizations to drive business value and create competitive advantage. That’s why ingesting data from many sources and deriving actionable insights or intelligence from it have become a prime objective of AI-enabled applications. In this blog post, we will discuss the AI data pipeline and the challenges of getting AI into real-world production.

From 30,000 feet up, a great AI-enabled application might look simple. But a closer look reveals a complex data pipeline. To understand what’s really happening, let’s break down the various stages of the AI data pipeline.

The first stage is data ingestion. Data ingestion is all about identifying and gathering the raw data from multiple sources, including the IoT, business processes, and so on. 

The gathered data is typically unstructured and not necessarily in the correct form to be processed, so you also need a data-preparation stage. This is where the pre-processing of data—filtering, construction, and selection—takes place. Data segregation also happens at this stage, as subsets of data are split in order to train the model, test it, and validate how it performs against the new data.

Next comes the model training phase. This includes incremental training of conventional neural network models, which generates trained models that are deployed by the model serving layer to deliver inferences or predictions. The training phase is iterative. Once a trained model is generated, it must be tested for inference accuracy and then re-trained to improve that accuracy. 

In a nutshell, the fundamental building block of artificial intelligence comprises everything from ingest through several stages of data classification, transformation, analytics, machine learning, and deep learning model training, and then retraining through inference to yield increasingly accurate insights.
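
To make these stages concrete, here is a deliberately tiny sketch of the same flow (ingest, prepare and segregate, train, infer) using scikit-learn; real pipelines operate on streaming data and far larger models, but the shape of the flow is the same.

# A minimal, illustrative AI data pipeline using scikit-learn
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# 1. Ingestion: gather the raw data (here, a bundled sample dataset).
X, y = load_iris(return_X_y=True)

# 2. Preparation / segregation: split the data into training and test subsets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 3. Training: fit a model on the training subset.
model = RandomForestClassifier().fit(X_train, y_train)

# 4. Inference / validation: score the model; retrain if accuracy is not good enough.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))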


AI pipeline characterization and performance requirements

The AI data pipeline has varying characteristics and performance requirements. The data can be characterized by variety, volume, and disparity. The ingest phase must support fast, large-scale processing of the incoming data, while data quality is the primary focus of the data preparation phase. Both the training and inference phases are sensitive to model quality, data-access latency, response time, throughput, and data-caching capabilities of the AI solution.

In my next blog, I will showcase how the AI data pipeline is used to build an object detection and analytics platform using edge devices and a cloud-native application.

References:

How I built the first ARM-based Docker Image on Pinebook using buildx tool?

The Pinebook Pro is a Linux and *BSD ARM laptop from PINE64. It is built to be a compelling alternative to mid-ranged Chromebooks that people convert into Linux laptops. It features an IPS 1080p 14″ LCD panel, a premium magnesium alloy shell, high capacity eMMC storage, a 10,000 mAh capacity battery, and the modularity that only an open-source project can deliver.

The Pinebook Pro comes with Manjaro Linux pre-installed. Manjaro is, effectively, Arch Linux but with a set of reasonably sane defaults; “real” Arch might be considered more of a framework upon which to hang a distro, rather than a complete distribution itself. If you’re not into Manjaro, that’s fine; the Pine64 project offers a wide selection of additional, downloadable user-installable Pinebook Pro images including Debian, Fedora, NetBSD, Chromium OS, and more.

Many different Operating Systems (OS) are freely available from the open-source community and partner projects. These include various flavors of Linux (Ubuntu, Debian, Manjaro, etc.) and *BSD. Under ‘Pinebook Pro Software Release/OS Image Download Section’ you will find a complete list of currently supported Operating System images that work with the Pinebook as well as other related software.

Let us verify the architecture of the Pinebook Pro Linux system via the lscpu command:

$ sudo lscpu
Architecture:                    aarch64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
CPU(s):                          6
On-line CPU(s) list:             0-5
Thread(s) per core:              1
Core(s) per socket:              3
Socket(s):                       2
Vendor ID:                       ARM
Model:                           4
Model name:                      Cortex-A53
Stepping:                        r0p4
CPU max MHz:                     1800.0000
CPU min MHz:                     408.0000
BogoMIPS:                        48.00
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Vulnerable
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid

Installing Docker on Manjaro-ARM OS running on Pine64

Docker doesn’t come pre-installed on the Pinebook Pro laptop, but you can install it flawlessly by following the steps below:

ssh ajeetraina@192.168.1.8
ajeetraina@192.168.1.8's password: 
Welcome to Manjaro-ARM
~~Website: https://manjaro.org
~~Forum:   https://forum.manjaro.org/c/manjaro-arm
~~IRC:     #manjaro-arm on irc.freenode.net
~~Matrix:  #manjaro-arm-public:matrix.org
Last login: Mon Oct  5 09:17:42 2020

Keeping Manjaro ARM Repository up-to-date

 sudo pacman -Syu

Installing Docker 20.10.3

sudo pacman -S docker

Running Docker Service

sudo systemctl start docker.service
sudo systemctl enable docker.service

Verifying Docker



[ajeetraina@pine64 ~]$ sudo docker version
[sudo] password for ajeetraina: 
Client:
 Version:           20.10.3
 API version:       1.40
 Go version:        go1.15.8
 Git commit:        48d30b5b32
 Built:             Sun Feb 21 21:20:17 2021
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          19.03.12-ce
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.14.5
  Git commit:       48a66213fe
  Built:            Sat Jul 18 02:39:40 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          v1.4.1.m
  GitCommit:        c623d1b36f09f8ef6536a057bd658b3aa8632828.m
 runc:
  Version:          1.0.0-rc93
  GitCommit:        12644e614e25b05da6fd08a38ffa0cfe1903fdec
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Installing Docker Compose

sudo pacman -S docker-compose
[ajeetraina@pine64 ~]$ sudo pacman -S docker-compose
warning: docker-compose-1.28.4-1 is up to date -- reinstalling
resolving dependencies...
looking for conflicting packages...

Packages (1) docker-compose-1.28.4-1

Total Installed Size:  1.08 MiB
Net Upgrade Size:      0.00 MiB

:: Proceed with installation? [Y/n] Y
(1/1) checking keys in keyring                                                                           [##############################################################] 100%
(1/1) checking package integrity                                                                         [##############################################################] 100%
(1/1) loading package files                                                                              [##############################################################] 100%
(1/1) checking for file conflicts                                                                        [##############################################################] 100%
(1/1) checking available disk space                                                                      [##############################################################] 100%
:: Processing package changes...
(1/1) reinstalling docker-compose                                                                        [##############################################################] 100%
:: Running post-transaction hooks...
(1/1) Arming ConditionNeedsUpdate...
[ajeetraina@pine64 ~]$ sudo docker-compose version
docker-compose version 1.28.4, build unknown
docker-py version: 4.4.4
CPython version: 3.9.2
OpenSSL version: OpenSSL 1.1.1j  16 Feb 2021

Running ARM-based Docker Images on Pinebook (64bit)

sudo docker run --rm arm64v8/alpine uname -a
Unable to find image 'arm64v8/alpine:latest' locally
latest: Pulling from arm64v8/alpine
069a56d6d07f: Pull complete 
Digest: sha256:c45a1db6e04b73aad9e06b08f2de11ce8e92d894141b2e801615fa7a8f68314a
Status: Downloaded newer image for arm64v8/alpine:latest
Linux 216b7a576112 5.7.19-1-MANJARO-ARM #1 SMP Wed Sep 2 20:43:09 +03 2020 aarch64 Linux
[ajeetraina@pine64 test]$ 

Running ARM-based Docker Images on Pinebook (32bit)

 sudo docker run --rm arm32v7/alpine uname -a
Unable to find image 'arm32v7/alpine:latest' locally
latest: Pulling from arm32v7/alpine
f55b840e27d3: Pull complete 
Digest: sha256:db5f021b29ec8fcf605f00d2aac06345756b0ffbdea0e7994044fe9172619a0a
Status: Downloaded newer image for arm32v7/alpine:latest
Linux 8c9b1e93f0d9 5.7.19-1-MANJARO-ARM #1 SMP Wed Sep 2 20:43:09 +03 2020 aarch64 Linux
[ajeetraina@pine64 test]$ 

Building Docker Image using buildx

Docker Buildx is a CLI plugin that extends the docker command with the full support of the features provided by Moby BuildKit builder toolkit. It provides the same user experience as docker build with many new features like creating scoped builder instances and building against multiple nodes concurrently.

Docker Buildx is included in Docker 19.03+. BuildKit is designed to work well for building for multiple platforms and not only for the architecture and operating system that the user invoking the build happens to run. When you invoke a build, you can set the --platform flag to specify the target platform for the build output, (for example, linux/amd64, linux/arm64, darwin/amd64).

[ajeetraina@pine64 ~]$ sudo docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker                  
  default default         running linux/arm64, linux/arm/v7, linux/arm/v6
[ajeetraina@pine64 ~]$ 

We are currently using the default builder, which is basically the old builder.  Let’s create a new builder, which gives us access to some new multi-arch features.

[ajeetraina@pine64 ~]$ sudo docker buildx create --name mybuilder
mybuilder

To switch between different builders use docker buildx use <name>. After running this command, the build commands will automatically use this builder.

[ajeetraina@pine64 ~]$ sudo docker buildx use mybuilder
[ajeetraina@pine64 ~]$ sudo docker buildx ls
NAME/NODE    DRIVER/ENDPOINT             STATUS   PLATFORMS
mybuilder *  docker-container                     
  mybuilder0 unix:///var/run/docker.sock inactive 
default      docker                               
  default    default                     running  linux/arm64, linux/arm/v7, linux/arm/v6
[ajeetraina@pine64 ~]$ 

Here I created a new builder instance with the name mybuilder and switched to it. Next, let’s inspect it; note that --bootstrap isn’t strictly required, it just starts the build container immediately. After that we will test the workflow, making sure we can build, push, and run multi-arch images.

[ajeetraina@pine64 ~]$ sudo docker buildx inspect --bootstrap
[+] Building 70.8s (1/1) FINISHED                                                                                                                                             
 => [internal] booting buildkit                                                                                                                                         70.8s
 => => pulling image moby/buildkit:buildx-stable-1                                                                                                                      67.7s
 => => creating container buildx_buildkit_mybuilder0                                                                                                                     3.1s
Name:   mybuilder
Driver: docker-container

Nodes:
Name:      mybuilder0
Endpoint:  unix:///var/run/docker.sock
Status:    running
Platforms: linux/arm64, linux/arm/v7, linux/arm/v6
[ajeetraina@pine64 ~]$

Let’s create a simple example Dockerfile, build a couple of image variants, and push them to Hub.

mkdir test && cd test && cat <<EOF > Dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y curl
WORKDIR /src
COPY . .
EOF

Finally, we are all set to build our first ARM-based Docker image using buildx tool.


[ajeetraina@pine64 test]$ sudo docker buildx build --platform linux/arm64 -t ajeetraina/pinedemo:latest --push .
[+] Building 67.5s (7/7) FINISHED                                                                                                                                             
 => [internal] load build definition from Dockerfile                                                                                                                     0.1s
 => => transferring dockerfile: 31B                                                                                                                                      0.0s
 => [internal] load .dockerignore                                                                                                                                        0.0s
 => => transferring context: 2B                                                                                                                                          0.0s
 => [internal] load metadata for docker.io/library/ubuntu:latest                                                                                                         1.2s
 => [1/2] FROM docker.io/library/ubuntu@sha256:703218c0465075f4425e58fac086e09e1de5c340b12976ab9eb8ad26615c3715                                                          0.1s
 => => resolve docker.io/library/ubuntu@sha256:703218c0465075f4425e58fac086e09e1de5c340b12976ab9eb8ad26615c3715                                                          0.0s
 => CACHED [2/2] RUN apt-get -y update && apt-get install -y curl                                                                                                        0.0s
 => exporting to image                                                                                                                                                  65.9s
 => => exporting layers                                                                                                                                                 11.3s
 => => exporting manifest sha256:2790ff98adad1108fdeb51ff19cca417add8d800036e6fa5d270756eec8ab716                                                                        0.0s
 => => exporting config sha256:703a952f9c0b9447cc0a6ddcd3e4669b009c6367238c9ae6bd5609174dd45cb0                                                                          0.0s
 => => pushing layers                                                                                                                                                   53.8s
 => => pushing manifest for docker.io/ajeetraina/pinedemo:latest                                                                                                         0.7s
 => [auth] ajeetraina/pinedemo:pull,push token for registry-1.docker.io                                                                                                  0.0s
[ajeetraina@pine64 test]$

Congratulations! You just created your first ARM-based Docker Image over Pinebook Pro Linux laptop and pushed it to DockerHub.

Portainer Vs Rancher

According to Gartner “By 2024, low-code application development will be responsible for more than 65% of application development activity.” Low-code development is a new approach to building apps at a faster pace, allowing applications to be deployed through a simple drag-and-drop, point-and-click interface. Many businesses are combining DevOps approaches with low-code (or no-code) app development platforms because these solutions give them the tools to build applications faster and more efficiently.

Low-code platforms allow non-technical users to work on and improve the speed at which applications can be built and deployed. Instead of the traditional app development team being solely responsible for a growing number of application requests, business users are now able to support digital transformation efforts by building and modifying their own processes, without writing complex lines of code. Low code therefore offers speedy, iterative delivery of new business applications, so you can build great apps and innovate faster.

The Rise of No-Code Platform for Kubernetes


The adoption of Kubernetes is massively growing for cloud-native applications. These applications require a high degree of infrastructure automation and specialized operations skills, which are not commonly found in enterprise IT organizations. The number of applications being developed and deployed on Kubernetes platforms soon may accelerate as business users take advantage of a no-code platform to create applications. Most of the applications based on containers thus far have been created by professional developers. However, as the number of no-code and low-code platforms being made on Kubernetes increases, the rate at which containerized applications are being developed and deployed also will increase. The typical no-code platform relies solely on a visual interface through which applications are built by dragging and dropping applications, while a low-code platform makes it easier to build applications by automating various coding processes. Both approaches enable individuals with some coding or business expertise to participate more directly in the application development process.

With the advent of popular container management software like Rancher, you can manage and deploy Kubernetes clusters. Rancher is primarily a Kubernetes-as-a-Service (KaaS) offering, in that it’s designed to help deploy and manage Kubernetes clusters. It includes both a web-based GUI and a command-line interface that enable you to create and scale not just clusters, but also Kubernetes objects such as pods and deployments. Even though Rancher is a popular container management platform, it comes with its own demerits.

  • Rancher is not easy for beginners to use. One needs to be familiar with Kubernetes technology to understand how to operate the breadth of services it ships with. Hence, there is a need for guidelines (manuals) during the initial stage of learning the platform.
  • Under heavy workloads, degradation in application performance can be expected
  • Sometimes it is too slow or crashes or has to re-register many times if usage becomes heavy. 
  • There is a need to follow the documentation to get a complete understanding of the Rancher product.
  • Rancher does obfuscate a lot of the things you need to know about running a k8s cluster that eventually you have to learn.
  • Rancher is not flexible in terms of pricing and terms of the contract

Today most organizations face challenges in determining the right set of tools to scale pilot projects into production deployments, given the steep learning curve & lack of a DevOps culture. It is important for business leaders and DevOps architects to choose the right tool which can simplify the build, management, and deployment of Kubernetes environments at a much faster pace. They need a tool that simplifies the management of the entire cycle of a Kubernetes cluster and Portainer is a perfect solution for it. Portainer is not a direct replacement of Rancher, it adds value in its own right. Portainer is designed to minimize complexities in operating workloads and provide a developer-centric operating model for cloud-native applications.

Since its 2.0 release back in August 2020, Portainer has already crossed 57,000 installations and has been deployed by 105,000+ unique users. Impressively, more than 50% of the overall page views are just for Kubernetes Management. With over 2.1 billion+ DockerHub pulls and approximately 500,000+ users per month, Portainer has already gained massive popularity in the last 3 years as a lightweight management UI that allows you to easily manage both your Docker and Kubernetes environments. But now with the latest 2.0 release, the much-awaited support for Kubernetes has finally arrived.

Before we dive into a comparative world, let us first chart out the top 10 critical factors that differentiate Portainer from other existing UI tools:

  • Easy to use dashboard
  • No need to write YAML
  • Deploy apps in seconds not minutes
  • Inspect apps, volumes, configurations in a few clicks
  • Focus on apps, not infrastructure
  • Create small or large clusters and assign resources to individuals
  • Add additional clusters(endpoints) in seconds
  • Visually monitor how memory and CPU is used
  • Monitor events and the applications running in each node
  • Convert docker-compose format file to YAML compatible with k8s.

Easy to use dashboard

Portainer comes with super simple UX. It makes operating container platforms easy. It provides a simple, click-to-configure interface that removes all of the unnecessary complexity and negates the need for users to learn complex syntax. Portainer users can now deploy and manage notoriously complicated applications on a Kubernetes platform, quickly and easily. Portainer created an intuitive UI experience that abstracts away all of the confusing Kubernetes lingo and provides you with easy to follow steps to deploy your application.

No need to write YAML 

If you look at the Rancher Kubernetes dashboard, most of the components on Rancher still require you to deal with Kubernetes object YAML files as well as custom scripts for actions such as certificate generation and encryption for apps. With Portainer, users no longer need to know how to write YAML or understand the Kubernetes CLI or API. There is no need to know how to write Kubernetes manifests, no need to learn helm, no need for kubectl commands; Portainer does it for you. We believe it’s a game-changer and we hope you do, too.

Deploy apps in seconds not minutes

Even though Rancher supplies the entire software stack needed to manage containers in production, it takes a couple of minutes to deploy it completely. With Portainer, it reduces to seconds. If you can use Docker on your laptop, you are now able to deploy even the most complex multi-tier applications, with data persistence, resource reservations, placement constraints, auto-scaling, load balancing; all of that Kubernetes awesomeness, just by following Portainer’s super simple UX. 

Inspect apps, volumes, configurations in a few clicks

One of the serious challenges with the Rancher dashboard is the difficulty of tracking what’s going on with your application. With Portainer, it becomes easy to understand what is going on with your application if it’s not doing what you expected. This is a really valuable feature if you have deployed your app outside of Portainer and are more vulnerable to deployment errors. You can see if your deployment is being impacted by unexpected placement constraints, see if there are image pull issues, see if there are storage issues, see if there are network issues. You name it, we visualize it.

Conclusion

Comparatively, Portainer is a much more powerful open-source toolset that allows you to easily build and manage containers not only in Docker & Docker Swarm but also in Kubernetes and Azure ACI. It perfectly fits with your multi-cluster, hybrid, and multi-cloud container orchestration strategy.

Getting Started with NVIDIA Jetson Nano From Scratch

The NVIDIA Jetson Nano 2GB Developer Kit is the ideal platform for teaching, learning, and developing AI and robotics applications. It uses the same proven NVIDIA JetPack Software Development Kit (SDK) used in breakthrough AI-based products. The new developer kit is unique in its ability to utilise the entire NVIDIA CUDA-X™ accelerated computing software stack including TensorRT for fast and efficient AI inference — all in a small form factor and at a significantly lower price. The Jetson Nano 2GB Developer Kit is priced at $59 and will be available for purchase starting end-October. 

Don’t miss: Object Detection with Yolo Made Simple using Docker on NVIDIA Jetson Nano

In this blog post, I will show you how to get started with the NVIDIA Jetson Nano from scratch.

Hardware

  • Jetson Nano
  • A Camera Module
  • A 5V 4Ampere Charger
  • 64GB SD card

Software

Preparing Your Jetson Nano

1. Flashing the Jetson SD Card Image

  • Unzip the SD card image
  • Insert SD card into your system.
  • Bring up Etcher tool and select the target SD card to which you want to flash the image.

2. Verifying that it ships with the Docker binaries

ajeetraina@ajeetraina-desktop:~$ sudo docker version

3. Checking Docker runtime

Starting with JetPack 4.2, NVIDIA has introduced a container runtime with Docker integration. This custom runtime enables Docker containers to access the underlying GPUs available in the Jetson family.

pico@pico1:/tmp/docker-build$ sudo nvidia-docker version
NVIDIA Docker: 2.0.3
Client:
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        369ce74a3c
 Built:             Fri Feb 28 23:47:53 2020
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       369ce74a3c
  Built:            Wed Feb 19 01:06:16 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu1~18.04.2
  GitCommit:        
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:        
 docker-init:
  Version:          0.18.0
  GitCommit:

Installing Docker Compose on NVIDIA Jetson Nano

Jetson Nano doesn’t come with Docker Compose installed by default. You will need to install it first:

export DOCKER_COMPOSE_VERSION=1.27.4
sudo apt-get install libhdf5-dev
sudo apt-get install libssl-dev
sudo pip3 install docker-compose=="${DOCKER_COMPOSE_VERSION}"
apt install python3
apt install python3-pip
pip install docker-compose
docker-compose version
docker-compose version 1.26.2, build unknown
docker-py version: 4.3.1
CPython version: 3.6.9
OpenSSL version: OpenSSL 1.1.1  11 Sep 2018

Next, add default runtime for NVIDIA:

Edit /etc/docker/daemon.json

{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },

    "default-runtime": "nvidia",
    "node-generic-resources": [ "NVIDIA-GPU=0" ]
}

Restart the Docker Daemon

systemctl restart docker

Running Redis on Multi-Node Kubernetes Cluster in 5 Minutes

Redis is a very popular open-source project with more than 47,200 GitHub stars, 18,700 forks, and 440+ contributors. Stack Overflow’s annual Developer Survey has ranked Redis as the Most Loved Database platform for four years running! Redis fits very well into the DevOps model due to its ease of deployment, rigorous unit and functionality testing of core and supplementary Redis technology, and ease of automation through tools such as Docker, Ansible, and Puppet.

In Datadog’s 2020 Container Report, Redis was the most popular container image in Kubernetes StatefulSets. If you’re interested in test-driving Redis on a multi-node Kubernetes cluster, this guide might be useful for you.

Prerequisite:

Ensure that Kubernetes (at least a 2-node cluster) is installed on your system.

Verify Kubernetes components:

$ kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

Configuring Redis using a ConfigMap:

You can follow the steps below to configure a Redis cache using data stored in a ConfigMap.

[node1 kubelabs]$ curl -OL https://k8s.io/examples/pods/config/redis-config
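
The downloaded redis-config file holds the Redis settings that will eventually be applied inside the pod. Based on the values we verify later with CONFIG GET, it contains settings along these lines:

maxmemory 2mb
maxmemory-policy allkeys-lru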

First create a kustomization.yaml containing a ConfigMap from the redis-config file:

[node1 kubelabs]$ cat <<EOF >./kustomization.yaml
> configMapGenerator:
> - name: example-redis-config
>   files:
>   - redis-config
> EOF

Next, save the following Pod manifest to a file and add it as a resource in kustomization.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis:5.0.4
    command:
      - redis-server
      - "/redis-master/redis.conf"
    env:
    - name: MASTER
      value: "true"
    ports:
    - containerPort: 6379
    resources:
      limits:
        cpu: "0.1"
    volumeMounts:
    - mountPath: /redis-master-data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
    - name: data
      emptyDir: {}
    - name: config
      configMap:
        name: example-redis-config
        items:
        - key: redis-config
          path: redis.conf

Apply the kustomization directory to create both the ConfigMap and Pod objects:

[node1 kubelabs]$ kubectl apply -k .
configmap/example-redis-config-dgh9dg555m created
pod/redis created

Examine the created objects by running:

kubectl get -k .
NAME                                        DATA   AGE
configmap/example-redis-config-dgh9dg555m   1      33s

NAME        READY   STATUS    RESTARTS   AGE
pod/redis   1/1     Running   0          33s

In the example, the config volume is mounted at /redis-master. It uses path to add the redis-config key to a file named redis.conf. The file path for the redis config, therefore, is /redis-master/redis.conf. This is where the image will look for the config file for the redis master.
Use kubectl exec to enter the pod and run the redis-cli tool to verify that the configuration was correctly applied:

[node1 kubelabs]$ kubectl exec -it redis -- redis-cli
127.0.0.1:6379> CONFIG GET maxmemory
1) "maxmemory"
2) "2097152"
127.0.0.1:6379> CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "allkeys-lru"
127.0.0.1:6379>

Delete the created pod:

$ kubectl delete pod redis
pod "redis" deleted

Further Reference:

2 Minutes to “Nuke” Your AWS Cloud Resources

Are you seriously looking for a tool that can save on your bills while being a FREE tier user? If you’re one of those new AWS account holders who wants to ensure that all the resources get deleted once you finish your testing work, then you should look at “Cloud Nuke”.

Cloud-Nuke is a tool for cleaning up your cloud accounts by nuking (deleting) all resources within them. It’s an open-source tool hosted on GitHub.

Why is Cloud Nuke so cool?

With Cloud Nuke, you can get the following work completed in just 5 minutes:

  • Deleting all Auto scaling groups in an AWS account
  • Deleting all Elastic Load Balancers (Classic and V2) in an AWS account
  • Deleting all EBS Volumes in an AWS account
  • Deleting all unprotected EC2 instances in an AWS account
  • Deleting all AMIs in an AWS account
  • Deleting all Snapshots in an AWS account
  • Deleting all Elastic IPs in an AWS account
  • Deleting all Launch Configurations in an AWS account
  • Deleting all ECS services in an AWS account
  • Deleting all ECS clusters in an AWS account
  • Deleting all EKS clusters in an AWS account
  • Deleting all RDS DB instances in an AWS account
  • Deleting all Lambda Functions in an AWS account
  • Deleting all S3 buckets in an AWS account – except for buckets tagged with Key=cloud-nuke-excluded Value=true
  • Deleting all default VPCs in an AWS account
  • Revoking the default rules in the un-deletable default security group of a VPC

Without any further delay, let us test drive this tool.

Before using this tool, I had a bunch of EC2 instances up and running for my testing work. These instances were lying around and I really didn’t need them anymore.

Pre-requisite

  • Ensure that your AWS credentials are configured (via aws configure).

Download

  • Linux System
wget https://github.com/gruntwork-io/cloud-nuke/releases/download/v0.1.24/cloud-nuke_linux_amd64

If you’re on macOS, ensure that you pick up the Darwin binary:

wget https://github.com/gruntwork-io/cloud-nuke/releases/download/v0.1.24/cloud-nuke_darwin_amd64

Move the binary to a folder on your PATH.

mv cloud-nuke_darwin_amd64 /usr/local/bin/cloud-nuke

Add execute permissions to the binary.

chmod u+x /usr/local/bin/cloud-nuke

Test that it installed correctly:

cloud-nuke --help

That’s it. Run the below CLI to clean up the overall AWS resources.

sudo cloud-nuke aws
INFO[2020-12-23T11:33:42Z] The following resource types will be nuked:  
INFO[2020-12-23T11:33:42Z] - ami                                        
INFO[2020-12-23T11:33:42Z] - asg                                        
INFO[2020-12-23T11:33:42Z] - ebs                                        
INFO[2020-12-23T11:33:42Z] - ec2                                        
INFO[2020-12-23T11:33:42Z] - ecscluster                                 
INFO[2020-12-23T11:33:42Z] - ecsserv                                    
INFO[2020-12-23T11:33:42Z] - eip                                        
INFO[2020-12-23T11:33:42Z] - ekscluster                                 
INFO[2020-12-23T11:33:42Z] - elb                                        
INFO[2020-12-23T11:33:42Z] - elbv2                                      
INFO[2020-12-23T11:33:42Z] - lambda                                     
INFO[2020-12-23T11:33:42Z] - lc                                         
INFO[2020-12-23T11:33:42Z] - rds                                        
INFO[2020-12-23T11:33:42Z] - s3                                         
INFO[2020-12-23T11:33:42Z] - snap                                       
INFO[2020-12-23T11:33:43Z] Retrieving active AWS resources in [eu-north-1, ap-south-1, eu-west-3, eu-west-2, eu-west-1, ap-northeast-2, ap-northeast-1, sa-east-1, ca-central-1, ap-southeast-1, ap-southeast-2, eu-central-1, us-east-1, us-east-2, us-west-1, us-west-2] 
INFO[2020-12-23T11:33:43Z] Checking region [1/16]: eu-north-1  

It will ask you to type “nuke” and there you go…

THE NEXT STEPS ARE DESTRUCTIVE AND COMPLETELY IRREVERSIBLE, PROCEED WITH CAUTION!!!

Are you sure you want to nuke all listed resources? Enter 'nuke' to confirm (or exit with ^C): nuke
INFO[2020-12-24T11:47:21Z] Terminating 1 resources in batches           
INFO[2020-12-24T11:47:21Z] Deleting all Elastic Load Balancers in region us-east-1 
INFO[2020-12-24T11:47:21Z] Deleted ELB: af6820e0fc547433a8b8cdc84c636d4a 
INFO[2020-12-24T11:47:21Z] [OK] 1 Elastic Load Balancer(s) deleted in us-east-1 
INFO[2020-12-24T11:47:21Z] Terminating 2 resources in batches           
INFO[2020-12-24T11:47:21Z] Deleting all V2 Elastic Load Balancers in region us-east-1 
INFO[2020-12-24T11:47:21Z] Deleted ELBv2: arn:aws:elasticloadbalancing:us-east-1:567085410233:loadbalancer/net/test-s9jfl-ext/0ecfc28fd2202161 
INFO[2020-12-24T11:47:21Z] Deleted ELBv2: arn:aws:elasticloadbalancing:us-east-1:567085410233:loadbalancer/net/test-s9jfl-int/eb345a2c6cb26c1a 
INFO[2020-12-24T11:47:21Z] [OK] 2 V2 Elastic Load Balancer(s) deleted in us-east-1 
INFO[2020-12-24T11:47:21Z] Terminating 9 resources in batches           
INFO[2020-12-24T11:47:21Z] Terminating all EC2 instances in region us-east-1 
Connection to ec2-3-238-203-58.compute-1.amazonaws.com closed by remote host.
Connection to ec2-3-238-203-58.compute-1.amazonaws.com closed.

By now, I could see all of these resources getting terminated. Super cool, isn’t it?

A New DockerHub CLI Tool under Docker Desktop 3.0.0


Docker Desktop is the preferred choice for millions of developers building containerized applications. With the latest Docker Desktop Community 3.0.0 release, a new Hub CLI tool was introduced, aptly called hub-tool. The new Hub CLI tool lets you explore, inspect, and manage your content on Docker Hub, as well as work with your teams and manage your account. In a nutshell, hub-tool is Docker’s official command-line tool that makes it easier to work with Docker Hub.

Capabilities of Hub-tool

  • Manages Your Docker Hub Account
  • Manages Your Docker Hub Org
  • Manages Your Personal Access Tokens
  • Manages Your Docker Hub Repositories
  • Manages Your Docker Hub Tags

Here’s a quick glimpse of Docker’s Hub-tool:

Verifying Hub-tool version

% hub-tool version
Version: v0.2.0
Git commit: 0edf43ac9091e7cac892cbc4cbc6efbafb665aa4

Logging in to DockerHub

% hub-tool login
Username: ajeetraina
Password:
Login Succeeded

Hub-tool Manpages

% hub-tool --help
A tool to manage your Docker Hub images

Usage:
  hub-tool
  hub-tool [command]

Available Commands:
  account     Manage your account
  help        Help about any command
  login       Login to the Hub
  logout      Logout of the Hub
  org         Manage organizations
  repo        Manage repositories
  tag         Manage tags
  token       Manage Personal Access Tokens
  version     Version information about this tool

Flags:
  -h, --help      help for hub-tool
      --verbose   Print logs
      --version   Display the version of this tool

Use "hub-tool [command] --help" for more information about a command.

Managing Your Account

ajeetraina@Ajeets-MacBook-Pro ~ % hub-tool account info
Username: ajeetraina
Full name: Ajeet Singh Raina
Company: Dell
Location: India
Joined: 6 years ago
Plan: free
Limits:
Seats: 1
Private repositories: 5
Parallel builds: 5
Collaborators: unlimited
Teams: unlimited

hub-tool account rate-limiting
Unlimited

Managing repositories

hub-tool repo ls | wc -l
102

Managing organization

hub-tool org ls
NAMESPACE NAME MY ROLE TEAMS MEMBERS
collabnix Owner 1 1

Managing Personal Access Token


ajeetraina@Ajeets-MacBook-Pro ~ % hub-tool token
Manage Personal Access Tokens

Usage:
  hub-tool token [flags]
  hub-tool token [command]

Available Commands:
  activate     Activate a Personal Access Token
  create       Create a Personal Access Token
  deactivate   Deactivate a Personal Access Token
  inspect      Inspect a Personal Access Token
  ls           List all the Personal Access Tokens
  rm           Delete a Personal Access Token

Flags:
  -h, --help   help for token

Global Flags:
  --verbose   Print logs

ajeetraina@Ajeets-MacBook-Pro ~ % hub-tool token ls
DESCRIPTION UUID LAST USED CREATED ACTIVE
token for ECS f7XXXXXXXX4f5d 5 months ago 5 months true

References:

Running RedisAI on NVIDIA Jetson Nano for the first time

The hardest part of AI is not artificial intelligence itself, but dealing with AI data. The accelerated growth of data captured from the sensors in the internet of things (IoT) solutions and the growth of machine learning (ML) capabilities are yielding unparalleled opportunities for organizations to drive business value and create competitive advantage. That’s why ingesting data from many sources and deriving actionable insights or intelligence from it have become a prime objective of AI-enabled applications.

Recently, a new class of AI databases has emerged. An AI database is built with the sole purpose of speeding ML model training and model serving. Such databases help businesses optimize AI learning and training. They help you wrangle the volume, velocity, and complex data governance and management challenges associated with training ML and DL models to save time and optimize resources.

Redis is just such an AI database. The RedisAI module, which is seamlessly plugged into Redis,  is a scalable platform that addresses the unique requirements for both AI training and AI inference in one server. It provides a complete software platform that allows data scientists to easily deploy and manage AI solutions for enterprise applications. The platform combines popular open-source deep learning frameworks (PyTorch, ONNXRuntime, and TensorFlow), software libraries, and Redis modules like RedisGears, RedisTimeSeries, and more. With RedisAI, AI application developers no longer have to worry about tuning databases for performance. Requiring no added infrastructure, RedisAI lets you run your inference engine where the data lives, decreasing latency.

Adding Tensor data structure to Redis

RedisAI lets you add a tensor data structure to Redis. A tensor is an n-dimensional array and is the standard representation for data in deep learning and machine learning workloads. Like any data in Redis, RedisAI’s tensors are identified by key names, are used as inputs and outputs in the execution of models and scripts, and run well on CPUs as well as on GPUs.

The RedisAI model data structure represents a machine learning/deep learning (ML/DL) model stored in the database. RedisAI supports DL/ML identifiers and their respective backend libraries like Tensorflow, PyTorch, and ONNX. RedisAI can actually execute models from multiple frameworks as part of a single pipeline. If your tech stack is on some other language and you don’t want to introduce Python into it, as long as the language of your choice has a Redis client (very likely), you can deploy your model into RedisAI and use your Redis client to control the execution with little to no overhead. 
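
As a simple illustration, here is a minimal sketch using the redis-py client to drive the same AI.* commands that appear in the redis-cli test script later in this post; it assumes a model has already been stored under the key m:

import redis

# Connect to a Redis server that has the RedisAI module loaded.
r = redis.Redis(host='localhost', port=6379)

# Store two 2x2 input tensors, run the model stored under key 'm',
# and read the output tensor back -- all through ordinary Redis commands.
r.execute_command('AI.TENSORSET', 'a', 'FLOAT', 2, 2, 'VALUES', 2, 3, 2, 3)
r.execute_command('AI.TENSORSET', 'b', 'FLOAT', 2, 2, 'VALUES', 2, 3, 2, 3)
r.execute_command('AI.MODELRUN', 'm', 'INPUTS', 'a', 'b', 'OUTPUTS', 'c')
print(r.execute_command('AI.TENSORGET', 'c', 'VALUES'))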

How does RedisAI work?

RedisAI bundles together best-of-breed technologies for delivering stable and performant computation graph serving. Every DL/ML framework ships with a runtime for executing the models developed with it, and the common practice for serving these is building a simple server around them.

RedisAI aims to be that server, saving you the need to install the backend you’re using and to develop a server for it. By itself that does not justify RedisAI’s existence, so there’s more to it. Because RedisAI is implemented as a Redis module, it automatically benefits from the server’s capabilities: Redis’ native data types, its robust ecosystem of clients, high availability, persistence, clustering, and Enterprise support.

Because Redis is an in-memory data structure server, RedisAI uses it for storing all of its data. The main data type supported by RedisAI is the Tensor, which is the standard representation of data in the DL/ML domain. Because tensors are stored in the memory space of the Redis server, they are readily accessible to any of RedisAI’s backend libraries at minimal latency.

The locality of data, which is tensor data in adjacency to DL/ML models backends, allows RedisAI to provide optimal performance when serving models. It also makes it a perfect choice for deploying DL/ML models in production and allowing them to be used by any application.

Furthermore, RedisAI is also an optimal testbed for models as it allows the parallel execution of multiple computation graphs and, in future versions, assessing their respective performance in real-time.

RedisAI provides the following data structures:

  • Tensor: Represents an n-dimensional array of values
  • Model: Represents a computation graph by one of the supported DL/ML framework backends
  • Script: Represents a TorchScript program

DL/ML backends

RedisAI supports DL/ML identifiers and their respective backend libraries, including:

  • TF: The TensorFlow backend
  • TFLITE: The TensorFlow Lite backend
  • TORCH: The PyTorch backend
  • ONNX: ONNXRuntime backend

A complete list of supported backends is in the release notes for each version.

Backend libraries are dynamically loaded as needed, but can also be loaded during booting or at runtime. Refer to these pages for more information on loading backends:

AI.CONFIG command
Backend configuration

Please Note:

  • The RedisAI module doesn’t train your models – for that you need a tool like TensorFlow or PyTorch (two open-source projects for machine learning).
  • Where RedisAI comes in is at the application layer, when it’s time to apply logic to the data (inferencing) and then serve it to the user.
  • You typically train your AI model somewhere in the cloud; once you want to do serving or inferencing, Redis is the right database for that.

RedisAI delivers up to 10x more inferencing

By putting the AI serving engine inside Redis, RedisAI delivers up to 10x more inferences than other AI serving platforms, at a much lower latency, delivering up to 9x more throughput than other AI-model serving platforms. These performance improvements can drive dramatically better business outcomes for many leading AI-driven applications—including fraud detection, transaction scoring, ad serving, recommendation engines, image recognition, autonomous vehicles, and game monetization.

Why RedisAI on Jetson Nano?

Jetson Nano Developer Kit is purely an AI computer. Delivering 572 GFLOPS of computing performance, this is a small, powerful $99 computer that lets you run modern AI workloads and is highly power-efficient, consuming as little as 5 watts. It can run a wide variety of advanced networks, including the full native versions of popular ML frameworks like TensorFlow, PyTorch, Caffe/Caffe2, Keras, MXNet, and others and it comes with a GPU card inbuilt.

Pre-requisite

  • Jetson Nano Board
  • Power Adapter(5V 4A)
  • WiFi Module
  • Redis 6.x

RedisAI requires Redis 6.x. You might have to compile a newer version of Redis to get it up and running.

Installing Redis 6.x

Follow the below steps:

$ wget http://download.redis.io/releases/redis-6.0.8.tar.gz
$ tar xzf redis-6.0.8.tar.gz
$ cd redis-6.0.8
$ make
$ sudo cp src/redis-server /usr/local/bin/
$ sudo cp src/redis-cli /usr/local/bin/

In case you need to stop a Redis service that is already running, use the steps below:

redis-cli
127.0.0.1> shutdown

Steps

Clone the repository

git clone --recursive https://github.com/RedisAI/RedisAI
cd RedisAI

Download all the below scripts and place them under the RedisAI directory.

Building RedisAI

Upload the below code to your Jetson board:

sudo apt update
sudo apt install -y git build-essential ninja-build cmake python3-pip python3-cffi redis unzip wget

git clone https://github.com/RedisAI/RedisAI.git

cd RedisAI

mkdir build

WITH_PT=0 WITH_TF=0 WITH_TFLITE=0 WITH_ORT=0 bash get_deps.sh

mv deps/linux-arm64v8-cpu deps/linux-x64-cpu

mkdir deps/linux-x64-cpu/libtorch

cd deps/linux-x64-cpu/libtorch


wget https://nvidia.box.com/shared/static/3ibazbiwtkl181n95n9em3wtrca7tdzp.whl -O torch-1.5.0-cp36-cp36m-linux_aarch64.whl
sudo apt install -y libopenblas-base

unzip torch-1.5.0-cp36-cp36m-linux_aarch64.whl
mv torch/* .

cd -

cd build

cmake -DBUILD_TF=OFF -DBUILD_TFLITE=OFF -DBUILD_TORCH=ON -DBUILD_ORT=OFF -DCMAKE_BUILD_TYPE=Release ../

make -j4 && make install
sh install.sh

It will take some time depending on your internet speed.

Running RedisAI with PyTorch

Upload the below code to your Jetson board:

# Put this script inside the RedisAI main folder.
# Run with 'bash run_redisai_torch.sh'.
# Before running, check that the script is executable: 'chmod 755 run_redisai_torch.sh'

redis-server --loadmodule install-cpu/redisai.so

Before you execute the below script, provide sufficient permission to redisai.so

/RedisAI/install-cpu$ sudo chmod 755 redisai.so 
 sudo sh run.sh 
14438:C 20 Sep 2020 14:06:50.321 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
14438:C 20 Sep 2020 14:06:50.321 # Redis version=6.0.6, bits=64, commit=00000000, modified=0, pid=14438, just started
14438:C 20 Sep 2020 14:06:50.321 # Configuration loaded
14438:M 20 Sep 2020 14:06:50.323 * Increased maximum number of open files to 10032 (it was originally set to 1024).
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 6.0.6 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 14438
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

14438:M 20 Sep 2020 14:06:50.325 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
14438:M 20 Sep 2020 14:06:50.325 # Server initialized
14438:M 20 Sep 2020 14:06:50.325 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
14438:M 20 Sep 2020 14:06:50.326 * <ai> Redis version found by RedisAI: 6.0.6 - oss
14438:M 20 Sep 2020 14:06:50.326 * <ai> RedisAI version 999999, git_sha=unknown
14438:M 20 Sep 2020 14:06:50.326 * Module 'ai' loaded from install-cpu/redisai.so
14438:M 20 Sep 2020 14:06:50.327 * Loading RDB produced by version 6.0.6
14438:M 20 Sep 2020 14:06:50.327 * RDB age 151321 seconds
14438:M 20 Sep 2020 14:06:50.327 * RDB memory usage when created 0.80 Mb
14438:M 20 Sep 2020 14:06:50.327 * DB loaded from disk: 0.000 seconds
14438:M 20 Sep 2020 14:06:50.327 * Ready to accept connections

Testing RedisAI

Upload the below code to your Jetson board:

# Put this script inside the RedisAI main folder.
# Run only after you have started a RedisAI server (see run_redisai_torch.sh).

set -x

redis-cli -x AI.MODELSET m TORCH GPU < ./test/test_data/pt-minimal.pt

redis-cli AI.TENSORSET a FLOAT 2 2 VALUES 2 3 2 3
redis-cli AI.TENSORSET b FLOAT 2 2 VALUES 2 3 2 3

redis-cli AI.MODELRUN m INPUTS a b OUTPUTS c

redis-cli AI.TENSORGET c VALUES
sh test.sh 

+ redis-cli -x AI.MODELSET m TORCH GPU
+ redis-cli AI.TENSORSET a FLOAT 2 2 VALUES 2 3 2 3
OK
+ redis-cli AI.TENSORSET b FLOAT 2 2 VALUES 2 3 2 3
OK
....
....
ajeetraina@ajeetraina-desktop:~/redisaiscript$ 

Using RedisAI tensors

A tensor is an n-dimensional array and is the standard representation for data in DL/ML workloads. RedisAI adds to Redis a Tensor data structure that implements the tensor type. Like any datum in Redis, RedisAI’s Tensors are identified by key names.

Creating new RedisAI tensors is done with the AI.TENSORSET command. For example, consider the tensor:

tensorA

We can create the RedisAI Tensor with the key name ‘tA’ with the following command:

AI.TENSORSET tA FLOAT 2 VALUES 2 3

Copy the command to your CLI and hit the Enter key on your keyboard to execute it. It should look as follows:

$ redis-cli
127.0.0.1:6379> AI.TENSORSET tA FLOAT 2 VALUES 2 3
OK

The reply ‘OK’ means that the operation was successful. We’ve called the AI.TENSORSET command to set the key named ‘tA’ with the tensor’s data, but the name could have been any string value. The FLOAT argument specifies the type of values that the tensor stores, and in this case a single-precision floating-point. After the type argument comes the tensor’s shape as a list of its dimensions, or just a single dimension of 2.

The VALUES argument tells RedisAI that the tensor’s data will be given as a sequence of numeric values and in this case the numbers 2 and 3. This is useful for development purposes and creating small tensors, however for practical purposes the AI.TENSORSET command also supports importing data in binary format.
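As a minimal sketch of the binary path (the file name tensor_data.bin is just a placeholder for a file containing raw float32 bytes), you could feed the blob through redis-cli the same way the model file is loaded later in this post:

redis-cli -x AI.TENSORSET tB FLOAT 2 2 BLOB < tensor_data.bin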

The Redis key ‘tA’ now stores a RedisAI Tensor. We can verify that using standard Redis commands such as EXISTS and TYPE:

127.0.0.1:6379> EXISTS tA
(integer) 1
127.0.0.1:6379> TYPE tA
AI_TENSOR

Using AI.TENSORSET with the same key name, as long as it already stores a RedisAI Tensor, will overwrite the existing data with the new. To delete a RedisAI tensor, use the Redis DEL command.
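For example (the values 5 and 6 below are arbitrary, just to show the overwrite):

127.0.0.1:6379> AI.TENSORSET tA FLOAT 2 VALUES 5 6
OK
127.0.0.1:6379> DEL tA
(integer) 1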

RedisAI Tensors are used as inputs and outputs in the execution of models and scripts. For reading the data from a RedisAI Tensor value there is the AI.TENSORGET command:

127.0.0.1:6379> AI.TENSORGET tA VALUES
1) FLOAT
2) 1) (integer) 2
3) 1) "2"
   2) "3"

Reference:


A First Look at PineBook Pro – A 14” ARM Linux Laptop For Just $200


If you’re a FOSS enthusiast looking for a powerful little ARM laptop, the PineBook Pro is what you need.

The Pinebook Pro is a Linux and *BSD ARM laptop from PINE64. It is built to be a compelling alternative to mid-ranged Chromebooks that people convert into Linux laptops. It features an IPS 1080p 14″ LCD panel, a premium magnesium alloy shell, high capacity eMMC storage, a 10,000 mAh capacity battery, and the modularity that only an open-source project can deliver.

The Pinebook Pro is equipped with 4GB LPDDR4 system memory, high capacity eMMC flash storage, and 128Mb SPI boot Flash. The I/O includes 1x micro SD card reader (bootable), 1x USB 2.0, 1x USB 3.0, 1x USB type C Host with DP 1.2 and power-in, PCIe 4x for an NVMe SSD drive (requires an optional adapter), and UART (via the headphone jack by setting an internal switch).

Join Pine64 discord NOW!

PineBook Pro is not an x86 device—it’s a big-little heterogeneous ARM architecture, with two Cortex A72 cores and four Cortex A53 cores. In 2020, this sharply limits the operating system selection—you’re not going to buy a Pinebook Pro and slap Windows on it after you get it.

Supported Operating Systems

If you’re not into Manjaro, that’s fine—the Pine64 project offers a wide selection of additional, downloadable user-installable Pinebook Pro images including Debian, Fedora, NetBSD, Chromium OS, and more.

Many different Operating Systems (OS) are freely available from the open-source community and partner projects. These include various flavors of Linux (Ubuntu, Debian, Manjaro, etc.) and *BSD. Under ‘Pinebook Pro Software Release/OS Image Download Section’ you will find a complete list of currently supported Operating System images that work with the Pinebook as well as other related software.

Default Manjaro KDE Desktop Quick Start

When you first get your Pinebook Pro and boot it up for the first time, it’ll come with Manjaro using the KDE desktop. On first boot, it will ask for certain information such as your timezone location, keyboard layout, username, password, and hostname. Most of these should be self-explanatory. Note that the hostname it asks for should be thought of as the “codename” of your machine, and if you don’t know what it’s about, you can make something up (use a single word, all lower case, no punctuation; e.g. “pbpro”).

After you’re on the desktop, be sure to update it as soon as possible and reboot after updates are finished installing. If nothing appears when you click on the Networking icon in your system tray to connect to your Wi-Fi, ensure the Wi-Fi privacy switch is not disabled.

Comes with Manjaro Linux Pre-Installed

The Pinebook Pro comes with Manjaro Linux pre-installed. Manjaro is, effectively, Arch Linux but with a set of reasonably sane defaults—”real” Arch might be considered more of a framework upon which to hang a distro, rather than a complete distribution itself.

Installing Docker on Manjaro-ARM OS running on Pine64

Docker doesn’t come with the Pinebook Pro laptop by default, but you can install it flawlessly by following the steps below:

ssh ajeetraina@192.168.1.8
ajeetraina@192.168.1.8's password: 
Welcome to Manjaro-ARM
~~Website: https://manjaro.org
~~Forum:   https://forum.manjaro.org/c/manjaro-arm
~~IRC:     #manjaro-arm on irc.freenode.net
~~Matrix:  #manjaro-arm-public:matrix.org
Last login: Mon Oct  5 09:17:42 2020

Keeping Manjaro ARM Repository up-to-date

 sudo pacman -Syu

Installing Docker 19.03.12

sudo pacman -S docker

Initialising Docker

You might have to reboot your system for Docker to initialize properly.

Running Docker Service

systemctl start docker.service
systemctl enable docker.service

Verifying Docker


[ajeetraina@pine64 ~]$ sudo docker version
[sudo] password for ajeetraina: 
Client:
Version:           19.03.12-ce
API version:       1.40
Go version:        go1.14.5
Git commit:        48a66213fe
Built:             Sat Jul 18 02:40:17 2020
OS/Arch:           linux/arm64
Experimental:      false

Server:
Engine:
 Version:          19.03.12-ce
 API version:      1.40 (minimum version 1.12)
 Go version:       go1.14.5
 Git commit:       48a66213fe
 Built:            Sat Jul 18 02:39:40 2020
 OS/Arch:          linux/arm64
 Experimental:     false
containerd:
 Version:          v1.4.1.m
 GitCommit:        c623d1b36f09f8ef6536a057bd658b3aa8632828.m
runc:
 Version:          1.0.0-rc92
 GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
docker-init:
 Version:          0.18.0
 GitCommit:        fec3683

Installing Docker Compose

sudo pacman -S docker-compose
[ajeetraina@pine64 ~]$ sudo docker-compose version
docker-compose version 1.27.3, build unknown
docker-py version: 4.3.1
CPython version: 3.8.5
OpenSSL version: OpenSSL 1.1.1g  21 Apr 2020

Installing K3s

[ajeetraina@pine64 ~]$ curl -sfL https://get.k3s.io | sh -
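Once the install script finishes, you can verify that the single-node cluster is up; the node should show as Ready:

[ajeetraina@pine64 ~]$ sudo k3s kubectl get nodes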

Installing Portainer

[ajeetraina@pine64 ~]$ sudo  curl -LO https://raw.githubusercontent.com/portainer/portainer-k8s/master/portainer-nodeport.yaml
[ajeetraina@pine64 ~]$ sudo kubectl apply -f portainer-nodeport.yaml
namespace/portainer created
serviceaccount/portainer-sa-clusteradmin created
clusterrolebinding.rbac.authorization.k8s.io/portainer-crb-clusteradmin created
service/portainer created
deployment.apps/portainer created
[ajeetraina@pine64 ~]$ sudo kubectl get svc -n portainer


NAME        TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                         AGE
portainer   NodePort   10.43.9.186   <none>        9000:30777/TCP,8000:30776/TCP   114s
pico@pico1:~$ sudo k3s kubectl describe  po -n portainer
Name:         portainer-5fbd6bb5d8-dxgp4
Namespace:    portainer
Priority:     0
Node:         pico2/192.168.1.161
Start Time:   Tue, 10 Nov 2020 22:37:55 -0700
Labels:       app=app-portainer
              pod-template-hash=5fbd6bb5d8
Annotations:  <none>
Status:       Running
IP:           10.42.1.3
IPs:
  IP:           10.42.1.3
Controlled By:  ReplicaSet/portainer-5fbd6bb5d8
Containers:
  portainer:
    Container ID:   containerd://70d7a96eaaa5aaf338194ceaaf858d3e2ce2ed74390e17cbceaef9cefdccc092
    Image:          portainerci/portainer:develop
    Image ID:       docker.io/portainerci/portainer@sha256:31ce431595a4e8223e07e992a5d9d2412c05191355723d56d542c08ff64c971f
    Port:           9000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 10 Nov 2020 22:38:07 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from portainer-sa-clusteradmin-token-g9qmz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  portainer-sa-clusteradmin-token-g9qmz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  portainer-sa-clusteradmin-token-g9qmz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  4m40s  default-scheduler  Successfully assigned portainer/portainer-5fbd6bb5d8-dxgp4 to pico2
  Normal  Pulling    4m39s  kubelet            Pulling image "portainerci/portainer:develop"
  Normal  Pulled     4m28s  kubelet            Successfully pulled image "portainerci/portainer:develop" in 10.98939761s
  Normal  Created    4m28s  kubelet            Created container portainer
  Normal  Started    4m28s  kubelet            Started container portainer

References:

Introducing 2GB NVIDIA Jetson Nano: An Affordable Yet Powerful $59 AI Computer

Today at the GPU Technology Conference (GTC) 2020, NVIDIA announced the new 2GB Jetson Nano for the first time. Last year, during the March timeframe, NVIDIA introduced the $99 Jetson Nano Developer Kit, which came with 4GB of 64-bit LPDDR4 memory. The new NVIDIA Jetson Nano 2GB Developer Kit, priced at $59, makes it even more affordable for students, educators, and enthusiasts to learn AI and robotics.

Pre-order Now: https://nvda.ws/30v5w3M

The NVIDIA Jetson Nano 2GB Developer Kit is the ideal platform for teaching, learning, and developing AI and robotics applications. It uses the same proven NVIDIA JetPack Software Development Kit (SDK) used in breakthrough AI-based products. The new developer kit is unique in its ability to utilise the entire NVIDIA CUDA-X™ accelerated computing software stack including TensorRT for fast and efficient AI inference — all in a small form factor and at a significantly lower price. The Jetson Nano 2GB Developer Kit is priced at $59 and will be available for purchase starting end-October.  


JetPack SDK & Libraries for AI development

With Jetson Nano 2GB, the JetPack SDK comes pre-loaded with all the necessary libraries one would require to build AI applications. For example, OpenCV and VisionWorks for computer vision and image processing, CUDA, cuDNN, and TensorRT to accelerate AI inferencing, libraries for camera and sensor processing, and much more. Getting your Jetson Nano 2GB up and running with the JetPack SDK takes only a few steps.

Popular Frameworks shipped as Docker containers

Jetson Nano 2GB enables you to learn and develop using the framework of your choice by supporting all popular frameworks including TensorFlow, PyTorch, and MXNet. Development containers for TensorFlow and PyTorch are hosted on NVIDIA NGC, which provides a quick one-step method to get your framework environment up and running. The familiar Jupyter Notebook learning environment is also available on Jetson Nano 2GB using the machine learning container hosted on NGC.
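As a rough sketch of that one-step flow (the image tag below is an assumption; pick the l4t-ml tag on NGC that matches your JetPack release), pulling and starting the NGC machine learning container looks roughly like this:

sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-ml:r32.4.4-py3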

Jetson Nano 2GB Developer kit is capable of running neural network training on-device. Of course, training a neural network from scratch requires a lot of compute resources and is not practical to run on Jetson Nano 2GB. But, one can easily run Transfer-Learning training jobs locally on the Jetson platform. The Jetson Deep Learning Institute (DLI) course illustrates the simplicity of the process by teaching you to modify a trained network through transfer-learning. For robotics learners, the JetBot Robotics Kit illustrates the process to train a neural network that can be used by the robot for collision avoidance and following road markings.

80,000+ Community members and still counting…

The Jetson community consists of 80,000+ active members, including AI enthusiasts, researchers, developers, students, and hobbyists who are bonded by their passion for AI and the Jetson platform. Jetson community members create cool projects, applications, and demos that are shared with the community and, best of all, they share the code and instructions for anyone to try, learn, and enhance these projects.

Check out Jetson community projects built for Jetson Nano 2GB Developer Kit


Specifications

Let’s look at the specifications of the latest NVIDIA Jetson Nano 2GB Developer kit.

Is $59 Jetson Nano a Raspberry Pi Killer?

Last year, the Raspberry Pi 4 was announced at $35 with 4K support and up to 4GB of RAM (four times that of any previous Pi), dual-band Wi-Fi, twice the number of HDMI outputs, and two USB 3 ports. Below is the comparison chart which NVIDIA published comparing the Jetson Nano 2GB Developer Kit with the Raspberry Pi 4 and the Google Coral development board.

Check out Jetson community projects built for Jetson Nano 2GB Developer Kit in the below video:

Compared to Raspberry Pi 4 and other development kits available at similar price points, the Jetson Nano 2GB not only supports all the popular AI frameworks and networks, but also delivers orders of magnitude higher AI performance. 

The chart below shows the AI inferencing performance of Jetson Nano 2GB on popular DNN models for image classification, object detection, pose estimation, segmentation, and others. The benchmark was run with FP16 precision using JetPack 4.4.

The world of AI computing is changing fast. Researchers are constantly inventing new neural network architectures that deliver better accuracy and performance for certain tasks. The wide variety of AI models and frameworks in use is evident from the various projects found in the Jetson community projects portal, some of which are highlighted in the Community Projects section of this document. The new 2GB Jetson Nano is surely an affordable device for students to learn and create AI projects; it should therefore be flexible enough to run a diverse set of AI models and also deliver the performance required to create meaningful interactive AI experiences.

References:

Running Docker Compose on NVIDIA Jetson Nano in 5 Minutes

Starting with v4.2.1, NVIDIA JetPack includes a beta version of NVIDIA Container Runtime with Docker integration for the Jetson platform. This is essential because it enables users to run GPU accelerated Deep Learning and HPC containers on Jetson devices. This sounds good but to build, deploy and scale microservices, you will require Docker Compose. Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

Running Docker Compose on Jetson Nano can be a daunting task. You just can’t use the usual curl command to get it installed. In this blog post, I will show you how to get Docker and Docker Compose installed on your NVIDIA Jetson Nano board in 5 minutes.

Hardware

  • Jetson Nano
  • A Camera Module
  • A 5V 4Ampere Charger
  • 64GB SD card

Software

Preparing Your Jetson Nano: Flashing the Jetson SD Card Image

  • Unzip the SD card image
  • Insert SD card into your system.
  • Bring up Etcher tool and select the target SD card to which you want to flash the image.

Verifying if it is shipped with Docker Binaries


ajeetraina@ajeetraina-desktop:~$ sudo docker version
[sudo] password for ajeetraina: 
Client:
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        369ce74a3c
 Built:             Fri Feb 28 23:47:53 2020
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       369ce74a3c
  Built:            Wed Feb 19 01:06:16 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu1~18.04.2
  GitCommit:        
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:        
 docker-init:
  Version:          0.18.0
  GitCommit:       

Checking Docker runtime

pico@pico1:/tmp/docker-build$ sudo nvidia-docker version
NVIDIA Docker: 2.0.3
Client:
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        369ce74a3c
 Built:             Fri Feb 28 23:47:53 2020
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       369ce74a3c
  Built:            Wed Feb 19 01:06:16 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu1~18.04.2
  GitCommit:        
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:        
 docker-init:
  Version:          0.18.0
  GitCommit:

Installing Docker Compose on Jetson Nano

Jetson Nano doesn’t come with Docker Compose installed by default. You will first need to install the prerequisite packages in the order shown below:

export DOCKER_COMPOSE_VERSION=1.27.4
sudo apt-get install libhdf5-dev
sudo apt-get install libssl-dev
sudo apt install python3
sudo apt install python3-pip
sudo pip3 install docker-compose=="${DOCKER_COMPOSE_VERSION}"

Please note that Docker Compose requires Python 3.x. I would suggest removing Python 2.7 or lower, if present, for a smooth installation of the packages.
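You can quickly confirm which Python 3 and pip releases you have before proceeding:

python3 --version
pip3 --version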

Verify Docker Compose installation

docker-compose version
docker-compose version 1.26.2, build unknown
docker-py version: 4.3.1
CPython version: 3.6.9
OpenSSL version: OpenSSL 1.1.1  11 Sep 2018

Let’s run a Minecraft server using Docker Compose. First, we will create a Docker Compose file as shown below:

pico@pico1:~$ cat docker-compose.yml 
version: '3.7'
services:
 minecraft:
   image: itzg/minecraft-server:multiarch
   ports:
     - "25590:25565"
   environment:
     EULA: "TRUE"
   deploy:
     resources:
       limits:
         memory: 1.5G

Running the Minecraft Server

sudo docker-compose up
WARNING: Some services (minecraft) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
Creating network "pico_default" with the default driver
Creating pico_minecraft_1 ... done
Attaching to pico_minecraft_1
minecraft_1  | [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 2 1000 1000 4096 Aug  9 18:11 /data'
minecraft_1  | [init] Resolved version given LATEST into 1.16.3
minecraft_1  | [init] Resolving type given VANILLA
minecraft_1  | [init] Downloading minecraft_server.1.16.3.jar ...
minecraft_1  | [init] Creating server.properties in /data/server.properties
minecraft_1  | [init] Setting server-name to 'Dedicated Server' in /data/server.properties
minecraft_1  | [init] Skip setting server-ip
minecraft_1  | [init] Setting server-port to '25565' in /data/server.properties
....

In a future blog post, I will show you how to use Docker Compose with the NVIDIA Jetson Nano GPU for the first time.

5 Minutes to Kubernetes Architecture

Kubernetes (a.k.a K8s) is an open-source container-orchestration system which manages the containerised applications and takes care of the automated deployment, storage, scaling, scheduling, load balancing, updates(rolling-updates), self-healing, batch-execution and monitoring of containers across clusters of hosts.

Kubernetes was originally developed by Google and later donated to the Cloud Native Computing Foundation (CNCF).

Why Kubernetes?

There are multiple Container-Orchestration systems available today but Kubernetes has become more popular as it is cost efficient and provides a lot of options to customize deployments and has support for many different vendors. It is supported on all major public cloud service providers like GCP, Azure, AWS, Oracle Cloud, Digital Ocean etc. 

Kubernetes Architecture


Kubernetes follows the master/slave architecture. So, we have the master nodes and the worker nodes. The master nodes manage the worker nodes and together they form a cluster. A cluster is a set of machines called nodes. A Kubernetes cluster has at least one master node and one worker node. However, there can be multiple clusters too. 

Kubernetes Master Node/ Control Plane


Kubernetes Master Node/Control Plane is the controlling unit of the cluster which manages the cluster, monitors the Nodes and Pods in the cluster, and when a node fails, it moves the workload of the failed node to another working node.

The various components of the Kubernetes Master Node:

API Server

The API Server is responsible for all communications (JSON over HTTP API). The Users, management devices, and Command line interfaces talk to the API Server to interact with the Kubernetes cluster. kubectl is the CLI tool used to interact with the Kubernetes API.
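For example, each of these everyday kubectl commands is simply a request sent to the API Server:

kubectl get nodes
kubectl get pods --all-namespaces
kubectl describe pod <pod-name>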

Scheduler 

The Scheduler schedules Pods across multiple nodes based on the information it receives from etcd, via the API Server.

Controller Manager

The Controller Manager is a component on the Master Node  that runs the Controllers. It  runs the watch-loops continuously to drive the actual cluster state towards the desired cluster state. It runs the Node/Replication/Endpoints/Service account and token Controllers and in case of the Cloud Platforms, it runs the Node/Route/Service/Volume Controllers.

etcd

etcd is the open-source persistent, lightweight, distributed key-value database developed by CoreOS, which communicates only with the API Server. etcd can be configured externally or inside the Master Node.

Worker Node 

A Worker Node can have one or more Pods, and a Pod can have one or more Containers, and a Cluster can have multiple Worker Nodes as well as Master nodes. Node components (Kube-proxy, kubelet, Container runtime) run on every Worker Node, maintaining the running Pods and providing the Kubernetes run-time environment.

The various components of the Kubernetes Worker Node:

kubelet

kubelet is an agent running on each Worker Node which monitors the state of a Pod (based on the specifications from PodSpecs), and if not in the desired state, the Pod re-deploys to the same node or other healthy nodes.

Kube-proxy

The Kube-proxy is an implementation of a network proxy (exposes services to the outside world) and a load-balancer (acts as a daemon, which watches the API server on the Master Node for the addition and removal of services and endpoints).

Container runtime/ Docker

Kubernetes does not have the capability to directly handle containers, so it requires a Container runtime. Kubernetes supports several container runtimes, such as Docker, Containerd, Cri-o etc. 

Add-ons 

Add-ons extend the functionality of Kubernetes. Some of the important add-ons are:

DNS – Cluster DNS is a DNS server required to assign DNS records to Kubernetes objects and resources.

Dashboard – A general purpose web-based user interface for cluster management.

Monitoring – Continuous and efficient monitoring of workload performance by recording cluster-level container metrics in a central database. 

Logging – Saving cluster-level container logs in a central database.
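Most add-ons run as ordinary workloads in the kube-system namespace, so you can list them on any cluster with a single command:

kubectl get pods -n kube-system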

Get started with Kubernetes today

  • Lab 01 – 5-Node Kubernetes Cluster in 5 Minutes
  • Lab 02 – Kubernetes Cluster on AWS using Kops 
  • Lab 03 – Installing Portainer to Monitor Kubernetes 
  • Lab 04 – Deploy Your First Nginx Pod over Kubernetes Cluster
  • Lab 05 – A Quick Look at Kubernetes ReplicaSet101 
  • Lab 06 – A Quick Look at Kubernetes Deployment 101
  • Lab 07 – A Quick Look at Kubernetes Scheduler 101
  • Lab 08 – A Quick Look at Kubernetes DaemonSet101
  • Lab 09 – A Quick Look at Kubernetes RBAC 101
  • Lab 10 – Setting up GKE using Docker Desktop
  • Lab 11 – Installing WordPress App on Kubernetes using Helm
  • Lab 12 – KubeZilla

5 Reasons why you should attend ARM Dev Summit 2020

Arm DevSummit | IoT For All

Register Here

With 100+ sessions, 200+ speakers & tons of sponsors, Arm DevSummit is going to kick off next month. It will take place virtually from October 6-8, 2020. Arm DevSummit is Arm’s new flagship annual conference, bringing the best of Arm TechCon with an expanded focus as a technical conference for both the software developer and hardware designer communities. It is a replacement for the long-running Arm TechCon (15+ years) with a new focus on software developers.

Arm DevSummit 2020 will solely be a virtual conference in 2020. It is a newly announced technology conference presented by Arm and its ecosystem that will take place exclusively online next month virtually. The event includes keynotes, technical sessions, BoFs, panels, tech talks, and comes with much more exciting agenda. Program highlights include immersive keynotes from Arm, Netflix, Volkswagen, Microsoft, and more , online networking with industry peers, technical sessions and workshops where software developers and hardware designers can advance their skills to the next level. The program will cover topics from best practices for cloud-native and mobile app development, tackle industry challenges for autonomous technology to deep dives in machine learning, IoT, infrastructure as well as explore the latest advancements in chip design. It is a completely new event designed for software and hardware engineers focused on the latest advancements in mobile, HPC, autonomous technology, AI, ML, IoT and chip design.

What’s cool about this conference?

Arm DevSummit 2020 virtual conference brings engineers and developers from both software and hardware into one place to learn, exchange knowledge, discuss real-world use cases and solutions and get hands-on with expert-led, deep-dive trainings and workshops. Arm DevSummit is where software and hardware meet. It’s the place that engineers and developers connect and collaborate on the latest applications and the semiconductor solutions that enable them.

During this technical conference, engineers and developers will advance their skills to the next level with best practices for cloud native and mobile development, tackle industry challenges for autonomous technology, machine learning, IoT and infrastructure, all the way to exploring the latest advancements in chip design.

Arm and its ecosystem provide deep insight into foundational hardware, enabling software development teams to build, deploy and manage the world’s best performing, richest and most impactful next-generation experiences through artificial intelligence, automotive, internet of things (IoT), security and 5G.

This year’s event includes eight technical tracks covering all aspects of Arm and Arm ecosystem solutions.

  • AI in the Real World: From Development to Deployment
  • Building the IoT: Efficient, Secure and Transformative Software Development
  • Chip Design Methodology
  • Cloud Native Developer Experience
  • Creating the Next Generation of Interactive Experiences
  • Infrastructure of Modern Computing
  • Tech for Global Goals: The World’s Largest To-Do List
  • The Journey to Autonomous

This year I’m excited about RedisAI & Redis Streams talk from Redis Labs. Andre Srinivasan, Global Solution Architect will be talking about how combination of ARM , Redis Streams & Redis AI work for Low Latency Inferencing at the Edge. As I am currently working on Redis AI on NVIDIA Jetson Nano board for my Pico 2.0 project, I believe that this talk is going to be super useful for me.

Attend this talk

I am also excited to attend “Cloud Native Development with Docker deployed on Arm” talk by Justin Cormack, Docker Inc. & Marc Meunier, ARM.

Attend this talk

Arm DevSummit is a new kind of hands-on and minds-on technology event. It’s built around the notion of connections; connecting global communities of software developers and hardware engineers across all fields in one forum so everyone learns and collaborates, helping to create an even more successful future for everyone. As promised, here are top 5 reasons why you shouldn’t miss out ARM DevSummit:

100% Virtual Event

This year’s ARM DevSummit will be a completely virtual event which includes keynotes, technical sessions, BoFs, panels, tech talks and much more.

There will be 8 conference tracks where you get the opportunity to join expert-led educational courses covering the entire technology stack. The Arm DevSummit conference agenda is filled with ways for software and hardware engineers to learn, connect and develop. Join technical sessions across 8 conference tracks, deep-dive workshops, panel sessions, expert office hours, ecosystem talks and more.

Keynotes from ARM, Netflix, Volkswagen, Microsoft, and more..

This year’s Arm DevSummit will hold 3 days of keynotes where you will get the opportunity to hear from Arm and industry visionaries about the trends underpinning the future of technology. You will get to hear from Arm CEO Simon Segars as he shares his vision of how technology is playing a key role in shaping the world’s response to the virus. Below is the list of promising keynotes which you just can’t miss:

  • A New Infrastructure for a New Era by Senior Vice President and General Manager of Infrastructure Chris Bergey
  • How Software Influences the Arm Architecture by Arm’s Chief Architect Richard Grisenthwaite and VP Open Source Software Mark Hambleton
  • The Future is Being Developed on Arm by Arm Intellectual Property Group President Rene Haas
  • The Software Side of Arm by Mark Hambleton, Vice President, Open Source Software, Arm
  • Microsoft: Building for an Arm Ecosystem by Arun Kishan, Technical Fellow, Microsoft

Customised Agenda Builder

To facilitate the best experience for all participants, limited spots will be available for workshops. Be sure to sign up for workshops before they sell out. This time, a customised agenda builder has been introduced. All you need to do is register or log in to begin building your custom schedule. You can use filters to find sessions of interest and favourite the ones you would like to attend. You can then use “Add to Schedule” to enrol in a session and it will be added to your calendar. When logged in, you can visualise your scheduled sessions in the calendar.

Networking & Community Forums

This time, you will get to see a Rec Room where you will be allowed to take a break, play games, share, connect and have some fun. You can earn rewards by testing your knowledge, earn badges and share prizes with non-profits. You will get the chance to share ideas, ask questions and engage with your peers and future collaborators.

Workshops on ML, DL, Cloud Native Developer Experience & Security

This year at ARM DevSummit, there will be tons of deep-dive workshops which will provide hands-on experience in areas such as implementing machine learning and protecting against security vulnerabilities. You will get the chance to learn how to overcome design challenges in these interactive and in-depth sessions. Topics like Cloud Native Networking and Service Mesh, Secure IoT Lifecycle Management, Autoware – Open Source Software for Autonomous Vehicles, and Live Treasure Hunt With an Arm-based Robot – Misty are a few of the workshops which might interest you.

To facilitate the best experience for all participants, limited spots are available for workshops. Each Workshop Plus session is an add-on to registration, $79 each. Several of the Workshop Plus sessions include development kits. Click Here to learn more about the workshop.

References:

Running Minecraft Server on NVIDIA Jetson Nano using Docker

With over 126 million monthly users, 200 million games sold & 40 million MAU, Minecraft still remains one of the biggest games on the planet. It is an incredibly popular game that was first created in 2009. Microsoft acquired Minecraft in a $2.5-billion deal back in 2014, and since then over 30 million copies of Minecraft have been sold for PC and Mac. There is a general perception that its player base skews more towards children. The simple graphics and LEGO-like design feed into this perception – but according to Microsoft, it’s simply not the case, and the average age of a Minecraft player is older than you might think (24 years old).

Minecraft is educational

Yes, you heard it right. Minecraft is an educational game. It is unique in that it’s an unlimited world where kids can create literally anything they can imagine, but within the constraint that everything is made up of blocks that must fit within the 3D grid of the game. One primary reason Minecraft is good for kids is the promotion of creativity, problem-solving, self-direction, and collaboration—all of which stand out as the less-tangible, non-academic benefits Minecraft provides. It is these life skills that will give kids the boost needed when they eventually work their way towards succeeding in college and future careers.

Why Minecraft inside Docker container?

  • It’s fun.
  • It allows my kid to build up his own Minecraft Server in just 2 minutes
  • No deep knowledge of Docker is required (for my kid). Just one single command and everything is up and running
  • Can be run locally or hosted over the Cloud
  • Can be run on IoT boards like Jetson Nano & Raspberry Pi
  • Highly customizable

In this article, I will show you how you can set up Minecraft running inside Jetson Nano using Docker in just 2 minutes.

Hardware

  • NVIDIA Jetson Nano
  • SD Card
  • 5V 4A Power Adapter
  • HDMI Cable
  • 16GB/64GB SD card

Software

1. Preparing Your Jetson board

  • Unzip the SD card image
  • Insert SD card into your system.
  • Bring up Etcher tool and select the target SD card to which you want to flash the image.

2. Verifying if it is shipped with Docker Binaries

ajeetraina@ajeetraina-desktop:~$ sudo docker version
[sudo] password for ajeetraina: 
Client:
 Version:           19.03.6
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        369ce74a3c
 Built:             Fri Feb 28 23:47:53 2020
 OS/Arch:           linux/arm64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.6
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       369ce74a3c
  Built:            Wed Feb 19 01:06:16 2020
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu1~18.04.2
  GitCommit:        
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:        
 docker-init:
  Version:          0.18.0
  GitCommit:    

3. Running the Minecraft Server using Docker

sudo docker run -d -p 25565:25565 -e EULA=true -e ONLINE_MODE=false -e DIFFICULTY=hard -e OPS=collabnix -e MAX_PLAYERS=50 -e MOTD="welcome to Collabnix" -v /tmp/minecraft_data:/data --name mc itzg/minecraft-server:multiarch

where,

  • itzg/minecraft-server:multiarch is the right Docker image for ARM
  • /tmp/minecraft_data:/data is for persistence
  • MAX_PLAYERS is the maximum number of players you are allowing to participate
  • OPS is to add more “op” (aka administrator) users to your Minecraft server
  • DIFFICULTY is for the difficulty level (default: easy)
  • MOTD is for the message of the day

Almost done!

Open up your Minecraft client, look for the Server Name (which will be your hostname) and use 25565 as the port number (optional).
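Before connecting, you can watch the server finish initializing by tailing the container logs (mc is the container name we set with --name in the docker run command above):

sudo docker logs -f mc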

Key Takeaways:

  • You will need a dedicated Minecraft Docker image (multiarch) to make it work on IoT devices like the Raspberry Pi and Jetson boards
  • Minecraft is CPU intensive, hence even passing the --gpus all option won’t help in improving the performance

Using Docker Compose

If you want to use Docker Compose, you will need to install it first, as the Jetson Nano SD card image doesn’t come with it by default:

export DOCKER_COMPOSE_VERSION=1.27.4
sudo apt-get install libhdf5-dev
sudo apt-get install libssl-dev
sudo apt install python3
sudo apt install python3-pip
sudo pip3 install docker-compose=="${DOCKER_COMPOSE_VERSION}"

Create a file called docker-compose.yml and add the below content:

version: '3.7'
services:
 minecraft:
   image: itzg/minecraft-server:multiarch
   ports:
     - "25565:25565"
   environment:
     EULA: "TRUE"
   deploy:
     resources:
       limits:
         memory: 1.5G

Now you should be able to run docker-compose as shown below:


sudo docker-compose up
WARNING: Some services (minecraft) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
Creating network "pico_default" with the default driver
Creating pico_minecraft_1 ... done
Attaching to pico_minecraft_1
minecraft_1  | [init] Running as uid=1000 gid=1000 with /data as 'drwxrwxr-x 2 1000 1000 4096 Aug  9 18:11 /data'
minecraft_1  | [init] Resolved version given LATEST into 1.16.3
minecraft_1  | [init] Resolving type given VANILLA
minecraft_1  | [init] Downloading minecraft_server.1.16.3.jar ...
minecraft_1  | [init] Creating server.properties in /data/server.properties
minecraft_1  | [init] Setting server-name to 'Dedicated Server' in /data/server.properties
minecraft_1  | [init] Skip setting server-ip
minecraft_1  | [init] Setting server-port to '25565' in /data/server.properties
....

A First Look at Portainer 2.0 CE – Now with Kubernetes Support

Managing Your Kubernetes Environment with few “mouse” clicks

Irrespective of the immaturity of the container ecosystem and the lack of best practices, the adoption of Kubernetes is growing massively for legacy modernization and cloud-native applications. Cloud-native applications require a high degree of infrastructure automation and specialized operations skills, which are not commonly found in enterprise IT organizations. Kubernetes has emerged as the de facto standard for container orchestration, with a vibrant community and support from most of the leading commercial vendors. While Kubernetes solves a lot of problems with deploying containers, the platform itself is more complicated to understand and manage. Today, most organizations face challenges in determining the right set of tools to scale pilot projects into production deployments, given the steep learning curve and lack of a DevOps culture. It is important for business leaders and DevOps architects to choose the right tool which can simplify the build, management and deployment of Kubernetes environments at a much faster pace.

Introducing Portainer 2.0 Community Edition

With 2 billion+ DockerHub pulls and approximately 500,000+ users per month, Portainer has already gained massive popularity in the last 3 years as a lightweight management UI which allows you to easily manage your different Docker environments. But now, with the latest 2.0 release, the much-awaited support for Kubernetes has finally arrived.

If you’re looking for a simple but robust management UI which can help you to build, manage and deploy containers in your Kubernetes environment quickly and easily, Portainer 2.0 CE is the right tool for you. No more CLI, no more YAML manifests, no more mistakes; just simple, fast Kubernetes configuration in a graphical UI, built on a trusted open source platform. You can use it to deploy apps in seconds not minutes, troubleshoot complex configurations and facilitate a seamless migration from Docker to Kubernetes without having to learn Kubernetes at all. Portainer CE lets you leave complex CLI commands behind, to focus on delivering outstanding software. As a GUI-based tool, Portainer CE flattens the learning curve to get your Docker & Kubernetes environments up and running quickly. Once operational, Portainer CE allows you to reliably and quickly create, operate and troubleshoot your Docker & Kubernetes environments.


Below are the list of top 8 critical factors which differentiate Portainer from other existing UI tools:

  • Focus on apps, not infrastructure
  • Deploy apps in seconds not minutes
  • Create small or large clusters and assign resources to individuals
  • Add additional clusters(endpoints) in seconds
  • Visually monitor how memory and CPU is used
  • Monitor events and the applications running in each node
  • Convert docker-compose format file to YAML compatible with k8s.
  • Inspect apps, volumes, configurations in a few clicks

What’s New in Portainer 2.0 CE Edition?

Portainer CE 2.0 includes a staggering number of enhancements (~150), as well as a few breaking changes. It is an entirely new image published as portainer/portainer-ce, with the latest tag pointing to the latest release. Below is the list of the top 7 new features introduced in the CE 2.0 release:

Support for Kubernetes

Support for Kubernetes-enabled endpoints has been introduced in the Portainer CE 2.0 release for the first time. This means you can manage the deployment of applications atop Kubernetes clusters from within Portainer, using the familiar Portainer UX. If you are new to YAML and haven’t written any Kubernetes manifests in the past, don’t panic. Portainer makes deploying apps and troubleshooting problems so simple, anyone can do it.

Introduction of Application Template

With the newer 2.0 release, the list of application templates is published, maintained and updated by Portainer, and users can’t change it. Admins can choose to unsubscribe from the list and instead provide their own centrally managed list, but it would be “consume only” from a user perspective.

In addition, a feature called “custom templates” has been introduced. Now Portainer users can create their own bespoke templates. Unlike the previous capability, the new custom templates rely solely upon Stack/Compose files, so when you add a new template, you add it by pasting in (or uploading) a compose file and then annotating some detail around the file. Through Portainer access control, users can choose to publish their custom templates for themselves, for their team, or for all users within their organization.

Built-in oAUTH Authentication support

With the new 2.0 release, basic oAUTH authentication has been introduced. This means you can now configure Portainer to authenticate users from an oAUTH source. This looks to be a very technical implementation, so you need to be competent with oAUTH before attempting it. The Portainer 2.0 Business Edition is expected to retain ‘click to configure’ simplicity for the most common oAUTH providers like Azure AD, GitHub, Google.

Support for Azure ACI 

Portainer 2.0 CE brings support for Azure ACI. You can now reliably deploy applications in an Azure ACI instance from within Portainer. These containers are all stateless and internet facing. It takes mere seconds to deploy any container in ACI with Portainer, which is damn cool.

Edge Compute Features

Regarding Edge Compute, the Portainer team has relocated the former “Host Jobs” functionality into “Edge Jobs” and reconfigured the logic that underpins it so that it only functions when used against Edge Agent enabled endpoints. This means that the “Edge Compute” specific features now include: the ability to group edge endpoints, the ability to deploy stacks against groups of endpoints, and the ability to run cron jobs against edge endpoints.

Custom Session Control 

Portainer 2.0 CE now comes with added support for the Admin to set a custom “session timeout”. This setting defines how often users are forced to re-authenticate with their Portainer session; the default remains at 8 hours, but it can now be changed to up to 1 year.

Adding Your Own Custom Logo

With the new 2.0 release, you have the flexibility to add your own custom logo under “Settings” in just 2 minutes. This means that you can add your own company’s logo right in the top right corner of the UI.

I was lucky enough to get early access to Portainer 2.0 Community Edition. Thanks to Neil Cresswell, CEO at Portainer.io, for all the great work in leading this robust platform. In this blog post, I will test drive Portainer 2.0 CE for the first time. Most of the CLI commands are run on macOS, but they should work on any Linux distro. We will take a look at two popular applications, Minecraft and WordPress, and see how easy they are to set up without dropping to the CLI console. Let us get started:

Prerequisite:

  • Macbook Pro
  • Docker Desktop 2.3.5.0 or any existing  Kubernetes cluster(either single node or multi-node based on your preference)

Download Docker Desktop from this link if you haven’t done so already. Once installed, click on the “whale icon” at the top right of the screen. Choose “About Docker Desktop” to see the Docker version installed (as shown below).

Close the window. Now click on “Preferences” to select the Kubernetes option. Choose the first check box which shows “Kubernetes” and click on “Apply & Restart”.

Installing Portainer using Helm

% brew install helm
% helm version
version.BuildInfo{Version:"v3.3.0", GitCommit:"8a4aeec08d67a7b84472007529e8097ec3742105", GitTreeState:"dirty", GoVersion:"go1.14.6"}

Adding a Chart Repository for Portainer 

% helm repo add portainer https://portainer.github.io/k8s/
"portainer" has been added to your repositories


Updating Helm Chart Repository

% helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "portainer" chart repository
Update Complete. ⎈ Happy Helming!⎈

Installing Portainer

% helm install -n portainer portainer portainer/portainer
NAME: portainer
LAST DEPLOYED: Sun Aug 30 10:36:44 2020
NAMESPACE: portainer
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace portainer -o jsonpath="{.spec.ports[0].nodePort}" services portainer)
  export NODE_IP=$(kubectl get nodes --namespace portainer -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
ajeetraina@Ajeets-MacBook-Pro ~ %


Listing the Portainer Helm Release


% helm list -n portainer
NAME     	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART          	APP VERSION
portainer	portainer	1       	2020-08-30 10:36:44.244921 +0530 IST	deployed	portainer-1.0.1	2.0.0      

Get the application URL


ajeetraina@Ajeets-MacBook-Pro ~ % export NODE_PORT=$(kubectl get --namespace portainer -o jsonpath="{.spec.ports[0].nodePort}" services portainer)
ajeetraina@Ajeets-MacBook-Pro ~ % export NODE_IP=$(kubectl get nodes --namespace portainer -o jsonpath="{.items[0].status.addresses[0].address}")
ajeetraina@Ajeets-MacBook-Pro ~ % echo http://$NODE_IP:$NODE_PORT
http://192.168.65.3:30777
ajeetraina@Ajeets-MacBook-Pro ~ %

Open http://localhost:30777 in your browser. Create a new password for the admin user and you will see a dashboard as shown below:

Click on the local cluster as shown below to open up resource pools, applications, volumes  and configurations.

It will show the endpoint summary and list of resources as shown below:

Click on “Cluster”  on the left side of the dashboard.

Let’s try to deploy a Minecraft application using Portainer UI. We will be referring to a popular Docker Image of Minecraft.

Click on “Add Application”.

Ensure that you add environmental variable EULA=TRUE under configuration section.

Click “Deploy” to deploy Minecraft running as a container image as shown below:

Click “Deploy Application”.

Verifying the Minecraft Server is up and running with 1 replica.

As you can see above, Minecraft server is running as Kubernetes Pods.

Using traditional YAML approach

Let us try to use the “Advanced Deployment” option and deploy a WordPress application using the Portainer UI.

Once you click on the “Advanced deployment” option, you will find a web editor where you can paste your YAML file.

Select “Kubernetes” to add manifest file format.

You can directly pick up NGINX YAML from this link

You can verify the right port as shown below:

By now, you should be able to access Nginx server at port 80.

Deploying WordPress using the traditional YAML way

Creating Secrets YAML file


apiVersion: v1
kind: Secret
metadata:
 name: mysql-pass
type: Opaque
data:
 password: YWRtaW4=
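Note that the password field in a Secret must be base64-encoded; the value YWRtaW4= above is simply the string admin encoded. You can generate your own value like this:

echo -n 'admin' | base64
YWRtaW4=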

The YAML file content can be directly pasted into UI:

Verifying mysql-pass Pod

Next, copy the below YAML files and paste them into the web editor one by one:

  1. wordpress-mysql YAML with Statefulset
  2. WordPress YAML with Statefulset

Once you deploy both of the above applications, you can verify the functionality of the application below:

As we have configured StatefulSets, you can verify the PVC by clicking on the “Volumes” section.

Checking the logs

Finally, the WordPress application is up and running.

Cleaning up WordPress  

In order to clean up WordPress application, all you need is just a single click and all the Pods get removed.

Conclusion

Portainer Community Edition 2.0 is a powerful, open source toolset that allows you to easily build and manage containers not only in Docker & Docker Swarm but also in Kubernetes and Azure ACI. This new release simplifies container management and is used by software engineers to speed up software deployments, troubleshoot problems and simplify migrations.

References:

Portainer CE for Kubernetes

Portainer CE for Swarm

Portainer CE for Edge

Portainer CE for ACI

Deploy your AWS EKS cluster with Terraform in 5 Minutes

Amazon Elastic Kubernetes Service (a.k.a Amazon EKS) is a fully managed service that helps make it easier to run Kubernetes on AWS. Through EKS, organisations can run Kubernetes without installing and operating a Kubernetes control plane or worker nodes. Simply put, EKS is a managed containers-as-a-service (CaaS) that drastically simplifies Kubernetes deployment on AWS.

Why is EKS the best place to run Kubernetes?

EKS is the best place to run Kubernetes for several reasons. First, you can choose to run your EKS clusters using AWS Fargate, which is serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design. Second, EKS is deeply integrated with services such as Amazon CloudWatch, Auto Scaling Groups, AWS Identity and Access Management (IAM), and Amazon Virtual Private Cloud (VPC), providing you a seamless experience to monitor, scale, and load-balance your applications. Third, EKS integrates with AWS App Mesh and provides a Kubernetes native experience to consume service mesh features and bring rich observability, traffic controls and security features to applications. Additionally, EKS provides a scalable and highly-available control plane that runs across multiple availability zones to eliminate a single point of failure.

Top 4 reasons why you should consider EKS

  • EKS runs the Kubernetes management infrastructure across multiple AWS Availability Zones, automatically detects and replaces unhealthy control plane nodes, and provides on-demand, zero downtime upgrades and patching.
  • EKS supports AWS Fargate to provide serverless compute for containers. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.
  • EKS automatically applies the latest security patches to your cluster control plane. AWS also works closely with the community to ensure critical security issues are addressed before new releases and patches are deployed to existing clusters.
  • With Amazon EKS, you can take advantage of all the performance, scale, reliability, and availability of the AWS platform, as well as integrations with AWS networking and security services, such as Application Load Balancers for load distribution, Identity Access Manager (IAM) for role based access control, and Virtual Private Cloud (VPC) for pod networking.

The purpose of this tutorial is to create an EKS cluster with Terraform.

Pre-requisite:

  • MacOS
  • Get an AWS free trial account
  • Install Terraform v0.12.26
brew install terraform

If you’re running Terraform 0.11, I would suggest upgrading it to 0.12 as soon as possible.

  • Install AWSCLI 2.0.17
brew install awscli
  • Install AWS IAM Authenticator
brew install aws-iam-authenticator
  • Install WGET
brew install wget
  • Install Kubectl
brew install kubernetes-cli
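Once the prerequisites above are installed, a quick sanity check never hurts; each of these should print a version string:

terraform version
aws --version
kubectl version --client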

Setting up AWS IAM users for Terraform

The first thing to set up is your Terraform. We will create an AWS IAM users for Terraform.

In your AWS console, go to the IAM section and create a user named “SudoAccess”. Then add your user to a group named “SudoAccessGroup”. Attach the following rights to this group:

  • AdministratorAccess
  • AmazonEKSClusterPolicy

After these steps, AWS will provide you with a Secret Access Key and Access Key ID. Save them carefully, because this is the only time AWS gives them to you.

In your own console, create a ~/.aws/credentials file and put your credentials in it:

[default]
aws_access_key_id=***********
aws_secret_access_key=****************************

Creating Config file


cat config
[default]
region=us-east-2

Cloning the Repository

git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster

You can explore this repository by changing directories or navigating in your UI.

$ cd learn-terraform-provision-eks-cluster

In here, you will find six files used to provision a VPC, security groups and an EKS cluster. The final product should be similar to this:

  • vpc.tf provisions a VPC, subnets and availability zones using the AWS VPC Module. A new VPC is created for this guide so it doesn’t impact your existing cloud environment and resources.
  • security-groups.tf provisions the security groups used by the EKS cluster.
  • eks-cluster.tf provisions all the resources (AutoScaling Groups, etc…) required to set up an EKS cluster in the private subnets and bastion servers to access the cluster using the AWS EKS Module.
  • On line 14, the AutoScaling group configuration contains three nodes.
  • outputs.tf defines the output configuration.
  • versions.tf sets the Terraform version to at least 0.12. It also sets versions for the providers used in this sample.

Initialize Terraform workspace

[Captains-Bay]? >  terraform init
Initializing modules...
Downloading terraform-aws-modules/eks/aws 12.0.0 for eks...
- eks in .terraform/modules/eks/terraform-aws-eks-12.0.0
- eks.node_groups in .terraform/modules/eks/terraform-aws-eks-12.0.0/modules/node_groups
Downloading terraform-aws-modules/vpc/aws 2.6.0 for vpc...
- vpc in .terraform/modules/vpc/terraform-aws-vpc-2.6.0

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "random" (hashicorp/random) 2.2.1...
- Downloading plugin for provider "local" (hashicorp/local) 1.4.0...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
- Downloading plugin for provider "kubernetes" (hashicorp/kubernetes) 1.11.3...
- Downloading plugin for provider "template" (hashicorp/template) 2.1.2...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.64.0...

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
[Captains-Bay]? >
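
The plan and apply steps are not shown in the capture above. The summary below assumes terraform apply was run from the repository directory and the plan was confirmed with yes; provisioning usually takes around ten minutes:

terraform plan
terraform apply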
Apply complete! Resources: 51 added, 0 changed, 0 destroyed.

Outputs:

cluster_endpoint = https://83AEAE7D9A99A68DFA4162E18F4AD470.gr7.us-east-2.eks.amazonaws.com
cluster_name = training-eks-9Vir2IUu
cluster_security_group_id = sg-000e8af737c088047
kubectl_config = apiVersion: v1
preferences: {}
kind: Config

clusters:
- cluster:
    server: https://83AEAE7D9A99A68DFA4162E18F4AD470.gr7.us-east-2.eks.amazonaws.com
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXpNVEpNWkFneVVBS1hma1pQV2d4OXBWdWFOMHkzeE02ZTdTaUtYNFpTNmhFQzcyK1hrK29Na2tsSlFlQ0J3TT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
  name: eks_training-eks-9Vir2IUu

contexts:
- context:
    cluster: eks_training-eks-9Vir2IUu
    user: eks_training-eks-9Vir2IUu
  name: eks_training-eks-9Vir2IUu

current-context: eks_training-eks-9Vir2IUu

users:
- name: eks_training-eks-9Vir2IUu
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "training-eks-9Vir2IUu"
region = us-east-2

Configure kubectl

Now that you’ve provisioned your EKS cluster, you need to configure kubectl. Customize the following command with your cluster name and region, the values from Terraform’s output. It will get the access credentials for your cluster and automatically configure kubectl.

aws eks --region us-east-2 update-kubeconfig --name training-eks-9Vir2IUu
Added new context arn:aws:eks:us-east-2:125346028423:cluster/training-eks-9Vir2IUu to /Users/ajeetraina/.kube/
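
You can quickly verify that kubectl is now talking to the new cluster (node names and versions will differ in your environment):

kubectl cluster-info
kubectl get nodes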

Troubleshooting:

If you face the below error message while running terraform validate (or terraform init):

[Captains-Bay]? >  terraform validate

Error: Error parsing /Users/ajeetraina/.aws/learn-terraform-provision-eks-cluster/eks-cluster.tf: At 3:18: Unknown token: 3:18 IDENT local.cluster_name

To fix it, you need to update your Terraform version by running:

brew upgrade terraform

Have questions? Join me at https://launchpass.com/collabnix and ask your queries under #issues channel.

Reference:

From 1 to 5000: A Close Look at Collabnix Slack Journey

“Individually We Specialise, Together We Transform”

Slack has grown to be an excellent tool for communities – both large and small, open and pay-to-access. Collabnix is a Slack workspace introduced to build a community around DevOps Engineers. Today it holds 5,300+ community members who are highly enthusiastic and ready to contribute towards Docker, Kubernetes, automation tools and Cloud Native technologies.

Community is Strength

We firmly believe that “Community is Strength”. We believe in the power of open source software. That’s the reason we participate in, contribute to, and support the open source container community so strongly. I still remember the evening of October 2018 when I introduced the Collabnix Slack workspace to a group of DevOps enthusiasts for the first time. Within a week’s time, the community crossed 100+ members, which was quite impressive and exciting at the same time.

The Birth of DockerLabs

DockerLabs was the first GitHub repository which the community started building in 2018. Today you can find 500+ tutorials for beginners, intermediate and advanced users. With over 87,000+ views, this GitHub repository has already gained 1,200 stars and around 600 forks.

Docker Hands-on Labs

Millions of DevOps engineers have leveraged DockerLabs to get started with Docker. Hats off to the group of Collabnix Slack contributors for their great ideas and efforts in structuring this amazing repository.

The Rise of KubeLabs

Last year, we started building KubeLabs – an ultimate Kubernetes resource for beginners. Today we have 120+ tutorials around Kubernetes 101 – Pods, ReplicaSets, Services, Networking, Storage etc. We ran multiple LIVE workshops for the Docker Bangalore community early this year. In one of the workshops, around 550+ attendees joined and the overall feedback was stunning.

A Peep into Kubetools

Kubetools was introduced for the first time in early 2020. It was built with a purpose: it is used internally by the Collabnix Slack community to target the most popular tools and techniques around Kubernetes and to come up with best practices around those tools. Soon after its launch, the community started planning webinars for the most voted tools.

We’re still growing..

Yes, you heard it right. We are growing massively. Just look at the analytics graph below and you can see the potential of our Slack workspace. On average, around 30-40 members are joining almost every day, which is a really overwhelming figure.

Did you know? We have 120+ channels in our community Slack. Channels like #devops-job-seeker, #devops-interview-preparation, #general, #osconf, #kubedaily and #terraform are a few of the most popular ones.

More than 200,000 messages have been exchanged to date. On average, more than 4,000 messages are sent every month.

Thank You…

Credits to the active members who have been contributing tirelessly to our GitHub repositories. If you’re one of them and reading this blog, a BIG THANK YOU.

Check out our community initiatives:

Getting Started with Jenkins X in 5 Minutes

Jenkins works perfectly well as a stand-alone open source tool, but with the shift to Cloud native and Kubernetes, it invites challenges in terms of management and operation. In the recent past, Jenkins X has emerged as a way to improve and automate continuous delivery pipelines to Kubernetes and cloud native environments. Jenkins X is a CI/CD solution for modern cloud applications on Kubernetes. It helps us create great pipelines for our projects and implement full CI and CD. Jenkins X provides automated CI/CD for your modern cloud applications on Kubernetes and enables developers to quickly establish continuous delivery best practices.

Container orchestration tools such as Kubernetes allow DevOps teams to manage these containers, leading to faster deployments and shorter time to market, while also satisfying heightened customer expectations and remaining competitive. 

Compelling Features of Jenkins X



Jenkins X is not just a CI/CD tool to run your builds and deployments; it is an attempt to automate the whole development process end to end for containerised applications based on Docker and Kubernetes. It is, of course, open source, as are all the best applications. Jenkins X builds upon the DevOps model of loosely-coupled architectures. It is designed to support deployment of large numbers of distributed microservices in a repeatable and manageable fashion, across multiple teams.

Today, developers shouldn’t spend time figuring out how to package software as Docker images. They shouldn’t need to create the Kubernetes YAML to run their application on Kubernetes, set up Preview environments, or even learn how to implement CI/CD pipelines with declarative pipeline-as-code Jenkinsfiles. They should focus on their code and, hence, on delivering value! Jenkins X is a project which rethinks how developers should interact with CI/CD in the cloud, with a focus on making development teams productive through automation, tooling and DevOps best practices. What’s cool about Jenkins X is that, as a developer, you can type one command – jx create or jx import – and get your source code, Git repository and application created, automatically built and deployed to Kubernetes on each Pull Request or git push, with full CI/CD complete with Environments and Promotion via GitOps!



Jenkins X is designed to make it simple for developers to work to DevOps principles and best practices

Rather than requiring deep knowledge of the internals of Jenkins X Pipelines, Jenkins X will default to awesome pipelines for your projects that implement full CI and CD. Below are a few compelling features of Jenkins X.

  • Automated CI-CD Pipeline
  • Environment Promotion using GitOps
  • Preview Environments

Jenkins X Architecture

At the heart of the system is Kubernetes. Kubernetes hosts all services deployed by JX, including administrative ones (Jenkins, Chartmuseum, Monocular etc.). Let us talk about each of the components which fall under this architecture:

(Architecture diagram source: https://jenkins-x.io/about/#architecture)

  • Git: Git stores all the code and configurations, including environment setup. Serves as a source of truth for everything. Jenkins X manages remote Git repositories and follows the GitOps principles.
  • Helm: Deployment of the services (or applications) is coordinated via Helm. Helm’s Charts allow sharing of application templates and makes versioning easy. Helm also takes care of upgrade and rollback cases, which makes it quite useful.
  • Chartmuseum: It is basically a Helm Charts repository which helps manage charts via a REST API.
  • Monocular: It is a web-based UI for the Helm Charts repository.
  • Nexus: Acts as a dependency cache for Node.js and Java applications to dramatically improve build times. After an initial build of a Spring Boot application, the build time is reduced from 12 minutes to 4. We have not done so yet, but we intend to demonstrate swapping this with Artifactory soon.
  • Docker Registry: An in-cluster Docker registry where our pipelines push application images.

Introducing JX CLI

jx is a command line tool for working with Jenkins X. It does all the magic of bringing the building blocks together and providing an entry point for system management and orchestration. It is written in Go. The JX CLI is used by end users to manage resources (apps, environments, URLs etc.) as well as in Jenkins pipelines created by JX. A few of the notable commands include:

Command          Purpose
$ jx install     Install JX on a Kubernetes cluster
$ jx create      Creates JX resources and associated services like Kubernetes Namespaces, Pods, Services etc.
$ jx boot        Boots up Jenkins X in a Kubernetes cluster using GitOps and a Jenkins X Pipeline
$ jx import      Imports a project's code into JX
$ jx preview     Creates a temporary Preview Environment for an application
$ jx promote     Promotes an app's version to a specific environment

Under this blog post, we will set up Jenkins X on Google Cloud Platform in 5 Minutes.

Prerequisites:

Jenkins X requires the following supported services installed & configured properly prior to installation:

  • You will require GitHub as a Git provider and a GitHub user account.
  • You need to create a GitHub organization. You will also need to create a GitHub bot account/username (in my case, I created it with the name “collabnix-bot” on GitHub).
  • A publicly accessible DockerHub account for creating and managing Docker images.
  • A local desktop machine (Linux/Mac) with the jx program.
  • The Kubernetes command-line tool, which can be installed locally using the jx install dependencies command.
  • A Google Cloud Platform (GCP) account with the ability to provision Kubernetes resources / create Kubernetes clusters, with the required APIs enabled as shown below:

Setting up GitHub User & Organization

First of all, let us create a GitHub organisation which will have two members: a GitHub user account, eg ajeetraina, and a GitHub ‘Pipeline’ bot account, eg collabnix-bot.

Creating GitHub Organization:

  • Click the “+” at the top right of GitHub’s top navigation bar or go to the create an organization page. Choose the free ‘Team for Open Source’ plan for your GitHub organisation. Name your organisation anything you like, eg jenkins-x-testproject.
  • Invite your GitHub user account, e.g. ajeetraina, to the organisation.
  • This GitHub user account will create and manage development repositories.

Creating GitHub Pipeline bot Account

  • Create a GitHub Pipeline bot account. This Pipeline bot will automate pull request notifications and create preview environments for quick validation and acceptance for code merging. 
  • Please note that your Pipeline bot should be created as a member of your GitHub organisation, e.g. jenkins-x-testproject.
  • I would suggest you create a new account that will be only for your bot. 
  • The bot account which you create must have a token created in your organization that authenticates the bot and allows it to perform various tasks on the repositories within your organization. You might need to generate a Git token with the correct permissions for your Pipeline Bot from GitHub’s personal access token settings page.

Installing JX 
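
The installation command itself is not shown here; on macOS, jx can typically be installed via Homebrew using the Jenkins X tap (this assumes the tap is available; adjust for your platform):

brew install jenkins-x/jx/jx

Once installed, verify the version: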

jx version --short
Version 2.1.78+cjxd.11


Installing JX Dependencies

jx install dependencies -d kubectl
Installing Kubectl

Creating Kubernetes Cluster using JX

Before you run the command below, ensure that you have a valid GCP account. Then run the `jx create cluster gke` CLI, which creates a cluster on Google Kubernetes Engine (GKE) and lets you initialise it with a name.

[Captains-Bay]? >  jx create cluster gke --skip-installation -n mytestproject


This command will take 10-15 minutes and go through the below process:

  • The program opens a web browser and you will be asked to choose the email address associated with your GCP account. 
  • It will allow the Google Cloud SDK access to your account. After confirming, you can close the browser page.
  • Back at the command-line, the jx create cluster program prompts you to choose your Google Cloud Project from the available list.


? Google Cloud Project: famous-hull-276807
Updated property [core/project].
? Configured cluster name: mytestproject
? Defaulting to cluster type: Zonal
? Google Cloud Zone: us-east1-b
? Defaulting to machine type: n1-standard-2
? Defaulting to minimum number of nodes: 3
? Defaulting to maximum number of nodes: 5
? Defaulting use of preemptible VMs: No
? Defaulting access to Google Cloud Storage / Google Container Registry: Yes
? Defaulting enabling Cloud Build, Container Registry & Container Analysis API's: Yes
? Defaulting enabling Kaniko for building container images: No
Creating cluster...
Creating cluster...
WARNING: Currently VPC-native is not the default mode during cluster creation. In the future, this will become the default mode and can be disabled using `--no-enable-ip-alias` flag. Use `--[no-]enable-ip-alias` flag to suppress this warning.
WARNING: Newly created clusters and node-pools will have node auto-upgrade enabled by default. This can be disabled using the `--no-enable-autoupgrade` flag.
WARNING: Starting with version 1.18, clusters will have shielded GKE nodes by default.
WARNING: Your Pod address range (`--cluster-ipv4-cidr`) can accommodate at most 1008 node(s).
This will enable the autorepair feature for nodes. Please see https://cloud.google.com/kubernetes-engine/docs/node-auto-repair for more information on node autorepairs.
Creating cluster mytestproject in us-east1-b...
................................done.
Created [https://container.googleapis.com/v1/projects/famous-hull-276807/zones/us-east1-b/clusters/mytestproject].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-east1-b/mytestproject?project=famous-hull-276807
kubeconfig entry generated for mytestproject.
NAME           LOCATION    MASTER_VERSION  MASTER_IP     MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
mytestproject  us-east1-b  1.14.10-gke.36  34.73.89.190  n1-standard-2  1.14.10-gke.36  3          RUNNING
Initialising cluster ...
gke cluster created. Skipping Jenkins X installation.
Fetching cluster endpoint and auth data.
kubeconfig entry generated for mytestproject.
Context "gke_famous-hull-276807_us-east1-b_mytestproject" modified.

You can visit the GCP console and click on “Workloads” to see all the Jenkins X specific workloads up and running.

Verifying Kubernetes Cluster

kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
gke-mytestproject-default-pool-c87818e4-3vbd   Ready    <none>   50s   v1.14.10-gke.36
gke-mytestproject-default-pool-c87818e4-551t   Ready    <none>   50s   v1.14.10-gke.36
gke-mytestproject-default-pool-c87818e4-qg4b   Ready    <none>   50s   v1.14.10-gke.36

Clone the Jenkins X Boot configuration Repository 

Under this step, we will clone the Jenkins X Boot configuration repository. We will then open the jx-requirements.yml file of the newly cloned repo, eg jenkins-x-boot-config/jx-requirements.yml. This file specifies the requirements of your installation.

[Captains-Bay]? >  git clone https://github.com/jenkins-x/jenkins-x-boot-config
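
Before booting, it is worth reviewing the requirements file you just cloned; the commands below simply print it so you can adjust the cluster, project and environment settings to match your own GKE setup:

cd jenkins-x-boot-config
cat jx-requirements.yml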

Running JX Boot CLI

The ‘jx boot’ utility command allows you to boot up Jenkins X in a Kubernetes cluster using GitOps and a Jenkins X Pipeline:

jx boot

Then you will be asked a series of questions to ensure Jenkins X is installed properly on your cluster:

  • Git Owner name for environment repositories: type in the organisation you created, eg jenkins-x-testproject.
  • Comma-separated git provider usernames of approvers for the development environment repository: type in the name of the GitHub account that is a member of the organisation you created, eg ajeetraina.
  • You may receive a warning that TLS is not enabled, so your webhooks will be called using HTTP, and you will be asked for confirmation to continue. If you type ‘No’, the jx boot process will end with the error “cannot continue because TLS is not enabled”. If you type ‘Yes’, then the namespace jx will be created in your cluster and Jenkins X booted in that namespace.
  • You may be asked if you wish to upgrade jx. It is recommended you say ‘Yes’ and then re-run jx boot.
  • There will be information logged on enabling storage on GKE. You do not need to enable storage for this walkthrough tutorial.
  • Jenkins X Admin Username: type in a username or press return to keep the default username of admin.
  • Jenkins X Admin Password: type one in.
  • Pipeline bot Git username: type in the name of the Pipeline Bot you created, eg collabnix-bot.
  • Pipeline bot Git email address: type in the email address you used when setting up your Pipeline Bot.
  • Pipeline bot Git token: type in the token generated and saved previously.
  • Do you want to configure an external Docker Registry?: ‘No’ is sufficient for this tutorial. But if you say ‘Yes’, you might need to provide Docker Registry details (i.e. DockerHub username and password).

Then you will see confirmation on the state of your installation process, such as an “Installation is currently looking: GOOD” message. In the organisation you created, eg jenkins-x-testproject, there should now be 3 additional repositories for the dev, staging, and production environments, which map to the dev, staging, and production namespaces in your cluster.

verifying the Jenkins X installation in namespace jx
verifying pods
Checking pod statuses
POD                                                STATUS
jenkins-x-chartmuseum-d87cbb789-h4kzz              Running
jenkins-x-controllerbuild-847f4f4b79-t6g64         Running
jenkins-x-controllerrole-75d8d87d98-p5p2l          Running
jenkins-x-heapster-54bdffbc79-fwlds                Running
jenkins-x-nexus-b69b7745b-xnzrl                    Running
jx-vault-mytestproject-0                           Running
jx-vault-mytestproject-configurer-5547f59f5b-d2vjk Running
lighthouse-foghorn-dd76f6664-w9ql4                 Running
lighthouse-keeper-5f89b8b978-chkdp                 Running
lighthouse-webhooks-5cdf4b9f65-6mchq               Running
lighthouse-webhooks-5cdf4b9f65-nqjvw               Running
tekton-pipelines-controller-88c7cd9d5-4hr7z        Running
vault-operator-75d5446bb7-mnj6h                    Running
Verifying the git config
Verifying username collabnix-bot at git server github at https://github.com
Found 1 organisations in git server https://github.com: jenkins-x-testproject
Validated pipeline user collabnix-bot on git server https://github.com

Congratulations! Jenkins X should be installed on your Kubernetes cluster by now.

Getting Jenkins X Environment

jx get environments
NAME       LABEL       KIND        PROMOTE NAMESPACE     ORDER CLUSTER SOURCE                                                                            REF    PR
dev        Development Development Never   jx            0             https://github.com/jenkins-x-testproject/environment-mytestproject-dev.git        master
staging    Staging     Permanent   Auto    jx-staging    100           https://github.com/jenkins-x-testproject/environment-mytestproject-staging.git    master
production Production  Permanent   Manual  jx-production 200           https://github.com/jenkins-x-testproject/environment-mytestproject-production.git master
[Captains-Bay]? >

Bringing up GUI

To use the GUI, you must install and configure it for your CloudBees Jenkins X Distribution environment. Ensure that you have the proper namespace set by running jx ns from the command line to switch to the jx namespace:

jx ns jx
Using namespace 'jx' from context named 'gke_famous-hull-276807_us-east1-b_mytestproject' on server 'https://34.73.89.190'.


From a command-line, install the UI application by using the jx add app command. Please Note: If you have enabled GitOps mode in your CloudBees Jenkins X Distribution cluster, the jx add app command updates your development (dev) environment repository and automatically merges the UI app source code changes to your dev environment.


[Captains-Bay]? >  jx add app jx-app-ui --version 0.1.211
WARNING: No secrets found on "helm/repos" due: reading the secret "helm/repos" from vault: no secret "helm/repos" not found in vault
Read credentials for http://chartmuseum.jenkins-x.io from vault helm/repos
Preparing questions to configure jx-app-ui. If this is the first time you have installed the app, this may take a couple of minutes.
Questions prepared.
Checking if TLS is enabled in the cluster
Created Pull Request: https://github.com/jenkins-x-testproject/environment-mytestproject-dev/pull/1
Added app via Pull Request https://github.com/jenkins-x-testproject/environment-mytestproject-dev/pull/1
[Captains-Bay]? >  kubectl get -n jx ingress jenkins-x-jxui

The ‘jx ui’ CLI opens the CloudBees Jenkins X UI app for Kubernetes for visualising CI/CD and your environments. Before running it, ensure that you are in the “jx” namespace (set earlier with jx ns).


[Captains-Bay]? >  jx ui -p 8080
UI not configured to run with TLS - The UI will open in read-only mode with port-forwarding only for the current user
Waiting for the UI to be ready on http://localhost:8080...
.
.
Jenkins X UI: http://localhost:8080
Opening the UI in the browser...

Opening up Jenkins X UI

Let’s go ahead and test drive a new app from a Quickstart and import the generated code into Git and Jenkins for CI/CD.

Search for Python-http and click “Continue”.

You have to enter the GitHub access token, the organization name and the name of the new repository which you want to create. Click “Finish” once you have entered the details.

If you didn’t opt for the GUI, you can perform the quickstart via the CLI too, as shown below:

jx create quickstart
? select the quickstart you wish to create python-http
Using Git provider github.com at https://github.com
? Git user name? collabnix-bot
? Who should be the owner of the repository? jenkins-x-testproject
? Enter the new repository name:  tp-alpha
Creating repository jenkins-x-testproject/tp-alpha
Generated quickstart at /Users/ajeetraina/july1/tp-alpha
Created project at /Users/ajeetraina/july1/tp-alpha
The directory /Users/ajeetraina/july1/tp-alpha is not yet using git
? Would you like to initialise git now? Yes
? Commit message:  Initial import

Git repository created
performing pack detection in folder /Users/ajeetraina/july1/tp-alpha
--> Draft detected Python (95.544554%)
selected pack: /Users/ajeetraina/.jx/draft/packs/github.com/jenkins-x-buildpacks/jenkins-x-kubernetes/packs/python
replacing placeholders in directory /Users/ajeetraina/july1/tp-alpha
app name: tp-alpha, git server: github.com, org: jenkins-x-testproject, Docker registry org: famous-hull-276807
skipping directory "/Users/ajeetraina/july1/tp-alpha/.git"
Draft pack python added
? Would you like to define a different preview namespace? Yes
? Enter the name for the preview namespace:  jx-previews
Pushed Git repository to https://github.com/jenkins-x-testproject/tp-alpha
Creating GitHub webhook for jenkins-x-testproject/tp-alpha for url http://hook-jx.35.237.205.64.nip.io/hook
Created Pull Request: https://github.com/jenkins-x-testproject/environment-mytestproject-dev/pull/5
Added label updatebot to Pull Request https://github.com/jenkins-x-testproject/environment-mytestproject-dev/pull/5
created pull request https://github.com/jenkins-x-testproject/environment-mytestproject-dev/pull/5 on the development git repository https://github.com/jenkins-x-testproject/environment-mytestproject-dev.git
regenerated Prow configuration
PipelineActivity for jenkins-x-testproject-tp-alpha-master-1
upserted PipelineResource meta-jenkins-x-testproject-tp-a-w9sxr for the git repository https://github.com/jenkins-x-testproject/tp-alpha.git
upserted Task meta-jenkins-x-testproject-tp-a-w9sxr-meta-pipeline-1
upserted Pipeline meta-jenkins-x-testproject-tp-a-w9sxr-1
created PipelineRun meta-jenkins-x-testproject-tp-a-w9sxr-1
created PipelineStructure meta-jenkins-x-testproject-tp-a-w9sxr-1

Watch pipeline activity via:    jx get activity -f tp-alpha -w
Browse the pipeline log via:    jx get build logs jenkins-x-testproject/tp-alpha/master
You can list the pipelines via: jx get pipelines
When the pipeline is complete:  jx get applications

For more help on available commands see: https://jenkins-x.io/developing/browsing/

Note that your first pipeline may take a few minutes to start while the necessary images get downloaded!

Conclusion

If you’re looking for a tool which can help you achieve CI-CD without the effort of assembling things together yourself, Jenkins X is the right tool for you. It doesn’t aim to replace Jenkins but builds on it with best-of-breed open source tools. You do not directly install Jenkins to use Jenkins X; Jenkins is embedded for you as a pipeline engine as part of the installation for a team. It is a complete CI/CD process, with a Jenkins pipeline that builds and packages project code for deployment to Kubernetes, and access to pipelines for promoting projects to staging and production environments.

Jenkins X Cloud Native CI/CD with TestProject

Building microservices isn’t merely breaking up a monolithic software application into a collection of smaller services. When shifting to microservices, there are best practices developers must adopt. It becomes equally important to learn microservices development best practices, optimizing which languages to use, using an opinionated stack and pre-configured pipeline, and testing apps using continuous deployment. Hence, it’s also about automation and the method for developing software.

When too many services interact with each other through APIs, complexity is introduced, and complexity is generally considered to be the primary foe of good software. In a microservices architecture, an individual service concerns itself only with minimal responsibilities. If a single service develops a problem, it’s much easier to rewrite that single service than to rewrite and merge in a fix to the entire monolith. Hence, one needs to keep up with an ever-faster release cycle compared to the old monolithic architecture. To allow our DevOps teams to deploy and test their microservices directly over the cloud, multiple times per day, a robust continuous integration and delivery tool is required.

Today, DevOps is a set of practices which mainly focuses on reducing the time between committing a change to a system and the change being placed into normal production, while ensuring better quality. Teams should be able to deploy multiple times per day, compared to the industry average that falls between once per week and once per month. The lead time for code to migrate from ‘code committed’ to ‘code in production’ should be less than one hour, and the change failure rate should be less than 15%, compared to an average of between 31-45%. In a nutshell, MTTR (mean time to repair) from a failure is expected to be less than one hour.

Jenkins is one such continuous integration and continuous delivery tool which can be used to automate building, testing, and deploying software. It is an open-source automation server that lets you flexibly orchestrate your build, test, and deployment pipelines. It has extensive community support and a heavily Java-based codebase, making it portable to all major platforms. It has a rich ecosystem of more than 1,000 plugins (there’s even a Jenkins plugin for TestProject too). Jenkins works well with all popular Source Control Management systems like Git, SVN, Mercurial and CVS, and with popular build tools like Ant, Maven and Grunt. Jenkins plugins provide support for technologies like Docker and Kubernetes, which enable the creation and deployment of cloud-based microservice environments, both for testing as well as production deployments.

Jenkins works perfectly well as a stand-alone open source tool, but with the shift to Cloud native and Kubernetes, it invites challenges in terms of management and operation. Recently, Jenkins X has emerged as a way to improve and automate continuous delivery pipelines to Kubernetes and cloud native environments. To achieve speed and agility, it is important to automate all the testing processes and configure them to run automatically over the Kubernetes cluster. To enhance the automation testing capabilities independently and within DevOps, TestProject comes to the rescue.

New to Kubernetes? Check out the KubeLabs series to get started with it.

In this tutorial, we will discuss what Jenkins X is and review its architecture and benefits. We will also demonstrate how to set up Jenkins X on Google Cloud Platform and then integrate it with TestProject in a seamless manner.

Table of Contents

  1. Why Jenkins X?
  2. Compelling Features of Jenkins X 
  3. Jenkins X Architecture
  4. Introducing JX CLI
  5. Jenkins X Prerequisites
  6. Installing JX
  7. Creating Kubernetes Cluster using JX
  8. Verifying Kubernetes Cluster
  9. Clone the Jenkins X Boot Configuration Repository
  10. Running JX Boot CLI
  11. Getting Jenkins X Environment
  12. Bringing up Jenkins X GUI
  13. Jenkins X Integration with TestProject
  14. Conclusion

Why Jenkins X?

Read the article tutorial at TestProject.io

The Rise of Shipa – A Continuous Operation Platform

Cloud native microservices have undergone an exciting architectural evolution. Four years back, the industry was busy talking about the rise of microservice architecture, where the focus was on modularizing the application by splitting it into smaller standalone services that can be built, deployed, scaled, and even maintained independently of other existing services. Splitting a monolith into much smaller, independent services has many advantages in speed and agility, but it comes with challenges as well, such as the risk of ending up with a very fragmented system where developers need to spend a lot of time and effort gluing together services and tools, and where there is a lack of common patterns and platforms that makes working across projects viable. Other challenges include an increase in operational overhead for support and maintenance as each service has its own language and requirements, complexity in monitoring and security requiring new levels of automation and tooling, and new requirements for service discovery, messaging, caching and fault tolerance that can strain a system and possibly lead to performance issues if not handled properly.

Today most of the talk around microservices, however, goes straight to the technology: continuous integration and deployment, containers, orchestrators, and so on. The concept of cloud-native microservices stems from the evolution of container architectures. Containers are an enabling technology for microservices, which is why microservices are often delivered in one or more containers. Since containers are isolated environments, they can be used to deploy microservices quickly and securely, regardless of the coding language used to create each microservice. But the use of Docker containerization and microservices introduced new challenges in the development process of organizations, and therefore a solid strategy to keep those many containers and microservices running on production systems becomes a survival factor. These challenges created new demands on DevOps tooling, so one needs to define new processes for DevOps activities and find answers to questions like: which tool to use for development, which tool for CI-CD, management and operations, how to manage errors in containers while running in production, how to change a piece of software in production with minimal downtime, and how to scale and monitor the apps. Kubernetes introduced a new standard for container orchestration. It transformed the entire DevOps ecosystem, which is ultimately transforming businesses. By abstracting away management complexities, Kubernetes unlocked the potential of containers in a great way.


As DevOps and Platform Engineering teams scale microservices and Kubernetes across clouds and clusters, organisations realize their budget, resources and time shift away from delivering applications and updates towards managing objects, dealing with a lack of integrations, developer productivity challenges, building Kubernetes customisations and more. As organisations scale Kubernetes, the level of complexity scales with it, presenting DevOps and Platform Engineering teams with issues that are driven by the lack of application context. With its increasing popularity, Kubernetes is on the way to becoming the new standard for many complex software applications. Whether it is a managerial decision or a technical necessity to use Kubernetes, developers cannot simply neglect container technologies; they need to interact with them when these technologies are used to run production workloads.

The Rising Pain for Developers

One of the most frequently discussed topics these days, when we talk about Kubernetes and microservices, is developer experience and how platform and DevOps engineers can help with it. Developers don’t want to spend time on creating and maintaining Kubernetes objects, and when they have to, application delivery speed gets impacted. The chances of applications with misconfigured objects being deployed in the cluster increase, and it creates more burden on the Platform and DevOps engineering teams to support developers in creating, deploying and maintaining Kubernetes objects. Most of the time, the Platform Engineering team is tasked with creating another platform layer, which becomes cumbersome, expensive, hard to maintain and scale, and opens up opportunities for failure.

Hence, it becomes important for any enterprise IT to rethink the essential points below:

  • How much time does your developer spend to deploy and operate your application?
  • How much time does it take for your developer to scale clusters?
  • How much time does your developer spend creating Kubernetes objects/YAML files?
  • What tool do they use for monitoring security?
  • How frequently do they have to write a myriad of Ansible and Terraform scripts?
  • Do they get sufficient time to focus on cloud infrastructure APIs?

Developers don’t want to (and shouldn’t) learn and spend time developing for Kubernetes; doing so impacts the developer experience and the speed of delivering what actually adds value to the organisation, which is the applications. It also opens up opportunities for misconfigured or incorrectly created objects, a poor understanding of how apps should be deployed, and so on.

Introducing Shipa – Landing Pad for Cloud-Native Applications

Shipa is a platform that aims to make it easier for developers to run their code in production. Rightly called a Landing Pad for Cloud Native Applications, it is a Continuous Operation platform whose goal is to completely abstract the underlying infrastructure (both Cloud and Kubernetes infrastructure), while allowing users to focus solely on the deployment of their applications. With Shipa performing all the infrastructure layer abstraction and proper placement of the applications, users don’t need to think about servers at all. With Shipa, users are able to:

  • Write apps in the programming language of their choice
  • Back it with built-in global resources (called services) such as SQL and NoSQL databases, and many others.
  • Manage applications using Shipa’s command-line tool
  • Deploy code using both Git and CI/CD systems
  • Shipa takes care of where in your cluster to run the applications and the services they use. Users can then focus on making their applications awesome and going to market faster.

How does Shipa work?

Shipa uses the concept of a landing pad, called Pools, to bring application context to your application delivery and management workflow. Pools are where your CI/CD pipeline delivers the application code and where the process starts with Shipa. Once code is received, Shipa pools are responsible for creating application objects, running security scans, attaching policies, producing application metrics and more. The Shipa API has a scheduler algorithm that distributes applications intelligently across a cluster of nodes.

Top 5 reasons why you should consider Shipa

Shipa is reinventing how cloud-native applications are delivered across heterogeneous clusters and cloud environments. With its Landing Pad concept, Shipa helps Platform and DevOps teams finally deliver a strong developer experience, and it speeds up application delivery as organisations scale their use of microservices and Kubernetes.

By leveraging Shipa’s Pool Management feature, the platform and engineering team can hook Shipa pools directly into the existing CI-CD pipeline and deploy a single K8s object, as the pool is responsible for creating all the Kubernetes objects required by the application in the bastion cluster. With this approach, if there is any issue with the application deployment or a security vulnerability is found during deployment, Shipa will automatically detect it and roll back any objects, leaving your cluster in a good state.

Here are the top 5 reasons why you might want to consider Shipa:

  • Shipa is the only platform that allows you to think about and operate applications rather than infrastructure when delivering services, across any infrastructure.
  • With Shipa, teams can focus on application delivery and governance rather than infrastructure.
  • Shipa is backed by established venture capital firms with extensive experience investing in the space. The leadership team is made up of serial entrepreneurs with extensive experience in the cloud native space.
  • Shipa allows the platform and engineering team to hook Shipa pools directly into the existing CI-CD pipeline and deploy a single K8s object, as the pool is responsible for creating all the Kubernetes objects required by the application in the bastion cluster.
  • The easy-to-use Shipa CLI tool is powerful enough for developers to manage their apps.

Under this blog tutorial, we will walk through the feature-rich Shipa CLI and see how developers can deploy applications directly over the cloud without even caring about the underlying Kubernetes objects. Below is the list of items which we will be covering:

  • Installing Shipa CLI on your desktop
  • Adding Shipa instance as a target on your CLI
  • Creating user for login
  • Listing the existing applications
  • Creating & Removing the application
  • Deploying the application
  • Checking the available Platforms
  • Creating & Managing Pool
  • Creating an app and selecting Pool
  • Listing the certificates
  • Checking Logs 
  • Connecting external Kubernetes Cluster to your Shipa Pool
  • Adding Shipa Node in AWS
  • Security Management
  • Create and deploy sample application from CI-CD tool

Installing Shipa CLI tool

In order to use and operate Shipa, you will need to download the Shipa CLI for your operating system (currently available for MacOS, Linux and Windows). Follow the steps below:

MacOS:    https://storage.googleapis.com/shipa-cli/shipa_darwin_amd64
Linux:    https://storage.googleapis.com/shipa-cli/shipa_linux_amd64
Windows:  https://storage.googleapis.com/shipa-cli/shipa_windows_amd64.exe
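
For example, on macOS the binary from the table above can be downloaded with curl (any equivalent download tool works just as well):

curl -LO https://storage.googleapis.com/shipa-cli/shipa_darwin_amd64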

Run the below command in order to set up the Shipa CLI tool on your Mac system:

chmod +x shipa_darwin_amd64 && mv -v shipa_darwin_amd64 /usr/local/bin/shipa

Add your Shipa instance as a target on your CLI

Targets are used to manage the addresses of the remote Shipa servers. Each target is identified by a label and an HTTP/HTTPS address. Shipa’s client requires at least one target to connect to; there is no default target. A user may have multiple targets, but only one will be used at a time.

[Captains-Bay]? >  shipa version
shipa version 1.0.1

[Captains-Bay]? >  shipa target-add default http://34.105.46.12:8080 -s
New target default -> http://34.105.46.12:8080 added to target list and defined as the current target
[Captains-Bay]? >

Creating a user

After configuring the shipa target, we will proceed further and create a user.

[Captains-Bay]? >  shipa user-create admin@shipa.io
Password:
Confirm:
Error: you're not authenticated or your session has expired.
Calling the "login" command...Email: admin@shipa.io
Password:
Password:
Confirm:
Error: this email is already registered

Successfully logged in!

Once you create the admin user, you should be able to log in to the remote Shipa platform.

[Captains-Bay]? >  shipa login
Email: admin@shipa.io
Password:
Successfully logged in!
[Captains-Bay]? >

Shipa requires users to be a member of at least one team in order to create an application or a service instance. Let us first check the list of teams by running the below CLI:

[Captains-Bay]? >  shipa team-list
+--------+------------------+------+
| Team   | Permissions      | Tags |
+--------+------------------+------+
| admin  | app              |      |
|        | team             |      |
|        | service          |      |
|        | service-instance |      |
|        | cluster          |      |
|        | volume           |      |
|        | volume-plan      |      |
|        | webhook          |      |
+--------+------------------+------+
| system | app              |      |
|        | team             |      |
|        | service          |      |
|        | service-instance |      |
|        | cluster          |      |
|        | volume           |      |
|        | volume-plan      |      |
|        | webhook          |      |
+--------+------------------+------+
[Captains-Bay]? >

Add an SSH key

Next, we need to send a public key to the git server used by Shipa. Run the below command to accomplish this.

[Captains-Bay]? >  shipa key-add my-rsa-key ~/.ssh/id_rsa.pub
Key "my-rsa-key" successfully added!
[Captains-Bay]? >

Listing the application

Shipa comes with the capability to list all applications that a user has access to. Application access is controlled by teams: if a user’s team has access to an application, then the user also has access to it. Run the below command to list all the applications:

[Captains-Bay]? >  shipa app-list
+------------------------------+-----------+--------------------------------------------------------+
| Application                  | Units     | Address                                                |
+------------------------------+-----------+--------------------------------------------------------+
| aks-app1                     | 1 started | http://aks-app1.34.82.73.71.nip.io                     |
+------------------------------+-----------+--------------------------------------------------------+
| dashboard                    | 1 started | http://dashboard.34.82.73.71.nip.io                    |
+------------------------------+-----------+--------------------------------------------------------+
| eks-app1                     | 1 started | http://eks-app1.34.82.73.71.nip.io                     |
+------------------------------+-----------+--------------------------------------------------------+
| gke-app1                     | 1 started | http://gke-app1.34.82.73.71.nip.io                     |
+------------------------------+-----------+--------------------------------------------------------+
| gke-app2                     | 1 started | http://gke-app2.34.82.73.71.nip.io                     |
+------------------------------+-----------+--------------------------------------------------------+
| longhorn-app                 | 1 started | http://longhorn-app.34.82.73.71.nip.io                 |
+------------------------------+-----------+--------------------------------------------------------+
| postgres-service-service-app | 1 started | http://postgres-service-service-app.34.82.73.71.nip.io |
+------------------------------+-----------+--------------------------------------------------------+
[Captains-Bay]? >

As shown above, there are multiple applications hosted on various cloud platforms like AWS EKS, GKE etc.

Application Information

The below command shows information about a specific application: its state, platform, Git repository, and more. Users need to be a member of a team that has access to the application in order to see information about it.

[Captains-Bay]? >  shipa app-info -a dashboard
Application: dashboard
Description:
Tags:
Dependency Files:
Repository: git@34.105.46.12:dashboard.git
Platform: static
Teams: admin
Address: http://dashboard.34.82.73.71.nip.io
Owner: admin@shipa.io
Team owner: admin
Deploys: 1
Pool: theonepool
Quota: 1/4 units
Routing settings:
   1 version => 1 weight

Units [web]: 1
+---------+----------------------------------+---------+---------------+------+
| Version | Unit                             | Status  | Host          | Port |
+---------+----------------------------------+---------+---------------+------+
| 1       | dashboard-web-1-5d58db8779-ztgcs | started | 34.105.121.67 | 8888 |
+---------+----------------------------------+---------+---------------+------+
App Plan:
+---------------+--------+------+-----------+---------+
| Name          | Memory | Swap | Cpu Share | Default |
+---------------+--------+------+-----------+---------+
| autogenerated | 0      | 0    | 100       | false   |
+---------------+--------+------+-----------+---------+
Routers:
+---------+---------+------+------------------------------+--------+
| Name    | Type    | Opts | Address                      | Status |
+---------+---------+------+------------------------------+--------+
| traefik | traefik |      | dashboard.34.82.73.71.nip.io |        |
+---------+---------+------+------------------------------+--------+

Checking the available Platforms

A platform is a well-defined pack with installed dependencies for a language or framework that a group of applications will need. A platform can also be a container template (Docker image).

Platforms are easily extendable and managed by Shipa. Every application runs on top of a platform.

You can list out the platforms by running the below CLI:

[Captains-Bay]? >  shipa platform-list
- go
- java
- nodejs
- php
- python
- static


Verifying Logs

[Captains-Bay]? >  shipa app-log --app collabnix

Removing app

If the application is bound to any service instance, all binds will be removed before the application gets deleted. Do check the service-unbind command for this in the Shipa documentation. In our case, we can go ahead and use the app-remove option to remove an app smoothly.

[Captains-Bay]? >  shipa app-remove --app collabnix
Are you sure you want to remove app "collabnix"? (y/n) y
---- Removing application "collabnix"...
Failed to remove router backend from database: not found
---- Done removing application. Some errors occurred during removal.
running autoscale checks
finished autoscale checks
[Captains-Bay]? >

Creating an app and selecting a specific pool

Let’s create a new application called collabnix and assign it to a team called admin along with an existing pool called gke-longhorn.


[Captains-Bay]? >  shipa app-create collabnix python --team admin --pool gke-longhorn
App "collabnix" has been created!
Use app-info to check the status of the app and its units.
Your repository for "collabnix" project is "git@34.105.46.12:collabnix.git"
[Captains-Bay]? >


Deploying an application

Currently, Shipa supports 4 application deployment options:

  • CI/CD
  • Git
  • app-deploy
  • Docker image
[Captains-Bay]? >  shipa app-deploy . -a collabnix
context args: [.]
Uploading files (0.02MB)... 100.00% Processing ok
 ---> collabnix-v1-build - Successfully assigned shipa-gke-longhorn/collabnix-v1-build to gke-lhcl-default-pool-c6caa3b2-rc9k [default-scheduler]
https://files.pythonhosted.org/packages/98/13/a1d703ec396ade42c1d33df0e1cb691a28b7c08
/

...
 ---> Sending image to repository (34.105.46.12:5000/shipa/app-collabnix:v1)
The push refers to repository [34.105.46.12:5000/shipa/app-collabnix]
b428a7ad5d5f: Pushed
...

OK
running autoscale checks
finished autoscale checks

Listing the Deployments

You can use the app-deploy-list option to list information about deploys for an application, including the available images.

[Captains-Bay]? >  shipa app-deploy-list  -a collabnix
+--------+----------------------------------------------+------------+----------------+-----------------------------+-------+
| Active | Image (Rollback)                             | Origin     | User           | Date (Duration)             | Error |
+--------+----------------------------------------------+------------+----------------+-----------------------------+-------+
| *      | 34.105.46.12:5000/shipa/app-collabnix:v1 (*) | app-deploy | admin@shipa.io | 12 Jun 20 00:13 IST (01:55) |       |
+--------+----------------------------------------------+------------+----------------+-----------------------------+-------+
[Captains-Bay]? >

Listing the certificates

You can run the below command to list an application’s TLS certificates.


[Captains-Bay]? >  shipa certificate-list -a collabnix
+---------+------------------------------+---------+--------+---------+
| Router  | CName                        | Expires | Issuer | Subject |
+---------+------------------------------+---------+--------+---------+
| traefik | collabnix.34.82.73.71.nip.io | -       | -      | -       |
+---------+------------------------------+---------+--------+---------+

Checking logs

[Captains-Bay]? >  shipa app-log -a collabnix
2020-06-12 00:13:49 +0530 [shipa][api]:   ---> collabnix-web-1-5c667c4fc5-d6v7j - Started container collabnix-web-1 [kubelet, gke-lhcl-default-pool-c6caa3b2-rc9k]
2020-06-12 00:13:49 +0530 [shipa][api]:  ---> 1 of 1 new units ready
...
[Captains-Bay]? >

Using Git

For Shipa, a platform is provisioner dependent. The command below creates a new application using the given name and platform. Once it completes, it shows the Git repository URL for the app.


[Captains-Bay]? >  shipa app-create collabnix1 python --team admin --pool gke-longhorn
App "collabnix1" has been created!
Use app-info to check the status of the app and its units.
Your repository for "collabnix1" project is "git@34.105.46.12:collabnix1.git"
[Captains-Bay]? >


git push git@34.105.46.12:collabnix1.git  master

…
remote:  ---> Running a security scan
remote:  ---> Found 0 vulnerability(ies)

...
remote: HEAD is now at a0bb216... Added 
remote: .shipa-ci.yml not found and post-receive hook is disabled
To 34.105.46.12:collabnix1.git
 * [new branch]      master -> master
[Captains-Bay]? >

Go to the Shipa Dashboard > click on “Application” > pick up the endpoint http://collabnix1.34.82.73.71.nip.io/ and it will display the below error when accessed via the browser:

How to fix it?

If you go into the file blog/settings.py, there is a line called ALLOWED_HOSTS.

Inside that line, you have an entry like xxxx.nip.io; just replace that entry with collabnix1.34.82.73.71.nip.io (the cname Shipa gave to your app) and save the file.

With that saved, run the usual git add ., git commit and git push, as shown below.
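
Concretely, that amounts to something like the following (the commit message here is just an example):

git add .
git commit -m "Update ALLOWED_HOSTS with the Shipa cname"
git push git@34.105.46.12:collabnix1.git master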

Once deployment is complete, your blog application should be available on: collabnix1.34.82.73.71.nip.io/admin


Accessing Shell

[Captains-Bay]? >  shipa app-shell -a collabnix1
ubuntu@collabnix1-web-1-f96f7bf9-sh4cz:/home/application/current$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.3 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.3 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
ubuntu@collabnix1-web-1-f96f7bf9-sh4cz:/home/application/current$

Listing the Cluster

Shipa clusters allow registering existing clusters of external provisioners in the platform. Currently, Kubernetes is the only supported external cluster provisioner.

On Shipa, clusters are directly attached to a pool.

[Captains-Bay]? >  shipa cluster-list
+--------------+-------------+--------------------------------------------------------------------------+-------------+---------+--------------+-------+-------+
| Name         | Provisioner | Addresses                                                                | Custom Data | Default | Pools        | Teams | Error |
+--------------+-------------+--------------------------------------------------------------------------+-------------+---------+--------------+-------+-------+
| aks          | kubernetes  | https://aks-ajeet-raina-dns-afc18577.hcp.eastus.azmk8s.io:443            |             | false   | aks          |       |       |
| eks          | kubernetes  | https://D7CB020B4656D5E5BFCC096C529A3BD7.gr7.us-east-1.eks.amazonaws.com |             | false   | eks          |       |       |
| gke          | kubernetes  | https://35.238.48.234                                                    |             | false   | gke          |       |       |
| gke-longhorn | kubernetes  | https://35.205.250.127                                                   |             | false   | gke-longhorn |       |       |
| theonepool   | kubernetes  | 10.64.0.1:443                                                            |             | false   | theonepool   |       |       |
+--------------+-------------+--------------------------------------------------------------------------+-------------+---------+--------------+-------+-------+
[Captains-Bay]? >

[Captains-Bay]? >  shipa app-list -n collabnix1
+-------------+-----------+--------------------------------------+
| Application | Units     | Address                              |
+-------------+-----------+--------------------------------------+
| collabnix1  | 1 started | http://collabnix1.34.82.73.71.nip.io |
+-------------+-----------+--------------------------------------+
[Captains-Bay]? >

Security Management

The command below lists all security scans for a specific application:

[Captains-Bay]? >  shipa app-security list -a collabnix1
1. [Deployment] scan at 13 Jun 2020 11:30, 0 vulnerability(es), 0 ignored
[Captains-Bay]? >

Creating a new database and binding your application

Let us try to create a new instance of PostgreSQL and bind it to the collabnix1 app.
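
The exact commands are not reproduced in this post; as a rough, hedged sketch only (command names follow Shipa's tsuru-derived CLI and may differ between versions, and the instance and plan names below are made up), the flow would look something like this, assuming a postgres-service service is available to your pool:

# Create a PostgreSQL service instance (instance and plan names are illustrative)
shipa service-instance-add postgres-service collabdb standard-plan

# Bind the instance to the collabnix1 app so the connection details are
# injected into the app environment
shipa service-instance-bind postgres-service collabdb -a collabnix1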

Persistent Volume

You can list the existing volume plans via volume-plan-list CLI:

[Captains-Bay]? >  shipa volume-plan-list
Error: you're not authenticated or your session has expired.
Calling the "login" command...
Email: admin@shipa.io
Password:
+----------+---------------+-------+
| Name     | Storage Class | Teams |
+----------+---------------+-------+
| longhorn | longhorn      | []    |
+----------+---------------+-------+
Successfully logged in!
[Captains-Bay]? >

Listing the Volume

[Captains-Bay]? >  shipa volume-list
+---------+----------+--------------+-------+--------------------+------+------------------------------+
| Name    | Plan     | Pool         | Team  | Plan Storage Class | Opts | Binds                        |
+---------+----------+--------------+-------+--------------------+------+------------------------------+
| lh-vol1 | longhorn | gke-longhorn | admin | longhorn           |      | longhorn-app:/mnt/lh-vol1:rw |
+---------+----------+--------------+-------+--------------------+------+------------------------------+
[Captains-Bay]? >

Creating Volume

[Captains-Bay]? >  shipa volume-create collabvol longhorn -p gke-longhorn -t admin --am ReadWriteOnce --capacity=1Gi
Volume successfully created.
[Captains-Bay]? >

Run the CLI below to verify the newly created volume:

[Captains-Bay]? >  shipa volume-list
+-----------+----------+--------------+-------+--------------------+------+------------------------------+
| Name      | Plan     | Pool         | Team  | Plan Storage Class | Opts | Binds                        |
+-----------+----------+--------------+-------+--------------------+------+------------------------------+
| collabvol | longhorn | gke-longhorn | admin | longhorn           |      |                              |
| lh-vol1   | longhorn | gke-longhorn | admin | longhorn           |      | longhorn-app:/mnt/lh-vol1:rw |
+-----------+----------+--------------+-------+--------------------+------+------------------------------+
[Captains-Bay]? >

As you can see above, the new volume is not bound to anything yet. Let's go ahead and bind it to the app using the command below:

[Captains-Bay]? >  shipa volume-bind collabvol /mnt/collabvol -a collabnix1
---- restart the app "collabnix1" ----
---- Updating units [web] ----
 ---> 1 of 1 new units created
 ---> 0 of 1 new units ready
 ---> 1 old units pending termination
  ---> collabnix1-web-3-7d847f5646-n4sh7 - pod has unbound immediate PersistentVolumeClaims (repeated 3 times) [default-scheduler]
  ---> collabnix1-web-3-7d847f5646-n4sh7 - Container image "34.105.46.12:5000/shipa/app-collabnix1:v3" already present on machine [kubelet, gke-lhcl-default-pool-c6caa3b2-rc9k]
  ---> collabnix1-web-3-7d847f5646-n4sh7 - Created container collabnix1-web-3 [kubelet, gke-lhcl-default-pool-c6caa3b2-rc9k]
  ---> collabnix1-web-3-7d847f5646-n4sh7 - Started container collabnix1-web-3 [kubelet, gke-lhcl-default-pool-c6caa3b2-rc9k]
 ---> 1 of 1 new units ready
  ---> collabnix1-web-3-6f6c6d6f58-wl8sn - Stopping container collabnix1-web-3 [kubelet, gke-lhcl-default-pool-c6caa3b2-rc9k]
 ---> Done updating units
Volume successfully bound.
[Captains-Bay]? >

Verifying the mount point

shipa app-shell -a collabnix1
ubuntu@collabnix1-web-3-7d847f5646-n4sh7:/home/application/current$ mount | grep collab
/dev/longhorn/pvc-3c7afeca-af35-11ea-9f87-42010a8401eb on /mnt/collabvol type ext4 (rw,relatime,data=ordered)
ubuntu@collabnix1-web-3-7d847f5646-n4sh7:/home/application/current$

Pool Management

Pools on Shipa can host two types of provisioners:

  • Kubernetes: the pool can then be tied to any K8s cluster
  • Shipa nodes: through Shipa, you can also create nodes on EC2, GCP and Azure using IaaS integrations and attach them to a pool. Shipa nodes are basically Docker nodes that you can use to deploy applications, enforce security and so on, exactly as you would with K8s nodes/clusters

When you create a pool and don't specify a provisioner, it automatically selects the Shipa node provisioner (shown as “default” when you run shipa pool-list). See below:

[Captains-Bay]? >  shipa pool-list
+--------------+---------+-------------+---------------+---------+
| Pool         | Kind    | Provisioner | Teams         | Routers |
+--------------+---------+-------------+---------------+---------+
| aks          |         | kubernetes  | admin         | traefik |
| collabpool   |         | default     | admin, system | traefik |
| eks          |         | kubernetes  | admin         | traefik |
| gke          |         | kubernetes  | admin         | traefik |
| gke-longhorn |         | kubernetes  | admin         | traefik |
| theonepool   | default | kubernetes  |               | traefik |
+--------------+---------+-------------+---------------+---------+
[Captains-Bay]? >

Note: right now, pools on Shipa can only host one type of provisioner; it has to be either shipa or kubernetes.

[Captains-Bay]? >  shipa pool-add collabpool
Pool successfully registered.
[Captains-Bay]? >  shipa pool-list
+--------------+---------+-------------+---------------+---------+
| Pool         | Kind    | Provisioner | Teams         | Routers |
+--------------+---------+-------------+---------------+---------+
| aks          |         | kubernetes  | admin         | traefik |
| collabpool   |         | default     | admin, system | traefik |
| eks          |         | kubernetes  | admin         | traefik |
| gke          |         | kubernetes  | admin         | traefik |
| gke-longhorn |         | kubernetes  | admin         | traefik |
| theonepool   | default | kubernetes  |               | traefik |
+--------------+---------+-------------+---------------+---------+

Let us go ahead and update attributes for a specific pool as shown below:

[Captains-Bay]? >  shipa pool-update collabpool --plan k8s
Pool successfully updated.
[Captains-Bay]? >  shipa pool-list
+--------------+---------+-------------+---------------+---------+
| Pool         | Kind    | Provisioner | Teams         | Routers |
+--------------+---------+-------------+---------------+---------+
| aks          |         | kubernetes  | admin         | traefik |
| collabpool   |         | default     | admin, system | traefik |
| eks          |         | kubernetes  | admin         | traefik |
| gke          |         | kubernetes  | admin         | traefik |
| gke-longhorn |         | kubernetes  | admin         | traefik |
| theonepool   | default | kubernetes  |               | traefik |
+--------------+---------+-------------+---------------+---------+
[Captains-Bay]? >

Below is a sample YAML file which you can use to update the collabpool (or create new pools). Apply it with the command: shipa pool-update collabpool collabpool-config.yaml

ShipaPool: collabpool
Resources:
   General:
      Setup:
         Force: true
         Default: false
         Public: false
         Provisioner: kubernetes
      Plan:
         Name: k8s
      AppQuota:
         Limit: 8
      Security:
         Disable-Scan: false
         Scan-Platform-Layers: true
         Ignore-Components:
            - systemd
            - bash
            - glibc
            - tar
         Ignore-CVES:
            - CVE-2019-18276
            - CVE-2016-2781
            - CVE-2019-9169
      Access:
         Append:
            - admin
      Services:
         Append:
            - postgres-service

[Captains-Bay]? >  shipa pool-update collabpool collabpool-config.yaml
Pool successfully updated.
[Captains-Bay]? >

Please note that:

  • Once you create a new pool, in order to be able to deploy apps to it, you have to assign that pool to a cluster (unless you plan to add a new cluster later and use the new pool there).
  • One cluster can host multiple pools, so you can assign multiple pools you create to a single cluster. For each pool, Shipa creates a different namespace inside the K8s cluster, so workloads will be isolated
  • You can also adjust the security scan configuration by adding or removing exceptions, disabling scans for the platform layers, disabling scanning entirely, and so on. You can always make changes and run the shipa pool-update command for them to be applied

You can update an existing cluster to assign it to your new pool with the following command: shipa cluster-update gke-longhorn --addr https://35.205.250.127 --pool gke-longhorn --pool collabpool

[Captains-Bay]? >  shipa cluster-update gke-longhorn --addr https://35.205.250.127  --pool gke-longhorn --pool collabpool
Cluster successfully updated.
[Captains-Bay]? >

After that, your new pool is ready to receive apps, so you can use it when creating and deploying apps.

Exporting Pool Configuration

[Captains-Bay]? >  shipa pool-config-export collabpool -o mypoolconfig
[Captains-Bay]? >  cat mypoolconfig
ShipaPool: collabpool
Resources:
  General:
    Setup:
      Default: false
      Public: true
      Provisioner: ""
      Force: false
      KubeNamespace: ""
    Plan:
      Name: k8s
    Security:
      Disable-Scan: false
      Scan-Platform-Layers: false
      Ignore-Components: []
      Ignore-CVES: []
    Access:
      Append:
      - admin
      - system
      Blacklist: []
    Services:
      Append:
      - postgres-service
      Blacklist: []
    Volumes: []
    AppQuota:
      Limit: unlimited
  ShipaNode:
    Drivers: []
    AutoScale: null
[Captains-Bay]? >

Adding Shipa node in AWS

Below is a sample of a pool configuration that you can use to create Shipa-provisioned pools:

ShipaPool: shipa-pool
Resources:
   General:
      Setup:
         Force: true
         Default: false
         Public: false 
         Provisioner: shipa
      Plan:
         Name: k8s
      AppQuota:
         Limit: 5
      Security:
         Disable-Scan: false
         Scan-Platform-Layers: false
   ShipaNode:
      Drivers:
         - amazonec2
      AutoScale:
         MaxContainer: 10
         MaxMemory: 0
         ScaleDown: 1.33
         Rebalance: true

Please note:

  • A Shipa pool can host nodes from multiple cloud providers and will distribute your application units/containers across multiple clouds/nodes
  • You can adjust the Drivers field in the Shipa pool configuration to control which cloud providers the pool can host nodes from; you can list one or several. Accepted values are amazonec2, google and azure (see the sketch below)
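
As a hedged illustration only (the pool name and file name are made up, and the exact set of supported keys may vary by Shipa version), a pool spanning two cloud providers could be described and applied with the same workflow used for collabpool earlier:

# Hypothetical pool configuration spanning two cloud providers
cat > multicloud-pool-config.yaml <<'EOF'
ShipaPool: multicloud-pool
Resources:
   General:
      Setup:
         Force: true
         Default: false
         Public: false
         Provisioner: shipa
      AppQuota:
         Limit: 5
   ShipaNode:
      Drivers:
         - amazonec2
         - google
      AutoScale:
         MaxContainer: 10
         MaxMemory: 0
         ScaleDown: 1.33
         Rebalance: true
EOF

# Register the pool, then apply the configuration
shipa pool-add multicloud-pool
shipa pool-update multicloud-pool multicloud-pool-config.yaml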

Security Management

[Captains-Bay]? >  shipa app-security-list  -a collabnix1
1. [Deployment] scan at 13 Jun 2020 11:30, 0 vulnerability(es), 0 ignored
2. [Deployment] scan at 13 Jun 2020 15:14, 0 vulnerability(es), 0 ignored
3. [Deployment] scan at 13 Jun 2020 15:17, 0 vulnerability(es), 0 ignored
..
11. [    Manual] scan at 17 Jun 2020 09:13, 0 vulnerability(es), 0 ignored

Auto Scaling

Using Shipa's native cloud provider integration, Shipa manages the nodes it creates, performing self-healing, auto scaling and more. Let us look at the auto scaling feature. The command below runs node auto scale checks once. Auto scaling checks may trigger the addition, removal or rebalancing of nodes, as long as these nodes were created using an IaaS provider registered in Shipa.

[Captains-Bay]? >  shipa node-autoscale-run
Are you sure you want to run auto scaling checks? (y/n) y
finished autoscale checks
[Captains-Bay]? >

Next, let us list the current configuration for Shipa's autoscale, including the set of rules and the current metadata filter.

[Captains-Bay]? >  shipa node-autoscale-info
Rules:
+------+---------------------+------------------+------------------+--------------------+---------+
| Pool | Max container count | Max memory ratio | Scale down ratio | Rebalance on scale | Enabled |
+------+---------------------+------------------+------------------+--------------------+---------+
|      | 0                   | 0.0000           | 1.3330           | true               | true    |
+------+---------------------+------------------+------------------+--------------------+---------+
[Captains-Bay]? >

Conclusion

If you are looking for a platform that allows you to think about and operate applications rather than infrastructure when delivering services, across any infrastructure, then Shipa is a perfect solution for you.

References:

Monitoring Multi-Node K3s Cluster running on IoT using Datadog – Part 1

The rapid adoption of cloud-based solutions in the IT industry is acting as the key driver for the growth of the internet of things (IoT) market. The top 3 reasons why small & medium enterprises are rapidly adopting IoT solutions are cost efficiency, productivity gains and operational enhancements in their business. IoT encompasses a set of advanced equipment (sensors and meters), network connectivity architecture, smart devices and software that help to exchange information between machines and devices.

Why do we need IoT Monitoring?

If you talk about the state of monitoring IT systems, which include servers and services, there has been tremendous improvement. Monitoring tools and practices in the cloud-native world of microservices and Kubernetes are excellent at monitoring based on time-series metric data. But these tools aren’t designed specifically for monitoring IoT devices or physical processes.

IoT devices produce many types of information, including telemetry, metadata, state, and commands and responses. Telemetry data from devices can be used in short operational timeframes or for longer-term analytics and model building. If you're looking to bridge the gap between devices and business by collecting and analyzing diverse IoT data at web scale across connected devices, customers and applications, IoT monitoring is of utmost importance.

IoT monitoring helps you analyze dynamic systems and process billions of events and alerts. It helps accelerate IoT development with immediate insight into performance across modern platforms, including Node.js, Docker containers and RESTful APIs. Not only that, it also helps you bridge performance gaps by optimizing performance across multiple applications, APIs, networks and protocols.

Why Datadog?

Datadog is a monitoring service for cloud-scale applications, providing monitoring of servers, databases, tools, and services through a SaaS-based data analytics platform. It supports over 400 integrations, notably with AWS, Microsoft Azure, Google Cloud Platform and Red Hat OpenShift, to name a few. 804 companies reportedly use Datadog in their tech stacks, including Airbnb, Facebook, and Spotify, which is quite an impressive figure.

Datadog helps developers and operations teams see their full infrastructure – cloud, servers, apps, services, metrics, and more – all in one place. This includes real-time interactive dashboards that can be customised to a team's specific needs, full-text search capabilities for metrics and events, sharing and discussion tools so teams can collaborate using the insights they surface, targeted alerts for critical issues, and API access to accommodate unique infrastructures.

In this post, we will go through a series of tutorials that show how to set up a Raspberry Pi from scratch, test drive the Datadog agent on the Pi nodes using a Docker container and then run a K3s cluster. I have divided this tutorial into 2 parts. In Part 1 we will cover how to install the datadog-agent Docker container on the Raspberry Pi nodes and retrieve Pi system metrics onto the Datadog dashboard.

  • Step #1: Preparing Raspberry Pi Cluster nodes
  • Step #2: Installing Docker 19.03 on all Pi Nodes
  • Step #3: Setting up Datadog Account
  • Step #4: Installing Your First Datadog Monitoring Agent on Pi nodes using Docker container
  • Step #5: Installing K3s Cluster on Pi nodes
  • Step #6: Viewing Datadog dashboard

Step #1: Preparing Raspberry Pi Cluster nodes

Prerequisite:

  • Macbook/Windows
  • Raspberry Pi 3/4 nodes
  • WGET software installed
  • Raspberry Pi Imager

Installing Raspberry Pi OS

Raspberry Pi OS (previously called Raspbian) is an official operating system for all models of the Raspberry Pi. We will be using Raspberry Pi Imager for an easy way to install Raspberry Pi OS on top of Raspberry Pi:

Visit https://www.raspberrypi.org/downloads/raspberry-pi-os/ to download Raspberry Pi OS.

In case you are in a hurry, just run the command below and you should be good to go:

wget https://downloads.raspberrypi.org/raspios_full_armhf_latest

Using Raspberry Pi Imager

Next, we will be installing Raspberry Pi Imager. You can download via https://www.raspberrypi.org/blog/raspberry-pi-imager-imaging-utility/

All you need to do is choose the right operating system and SD card, and it should be able to flash OS on your SD card.

Click “Write” and it’s time to grab a coffee.

Once the write is successful, you can remove the SD card from card reader and then insert it into Raspberry Pi SD card slot.

SSH to Raspberry Pi nodes

$ ssh pi@192.168.1.7
$ ssh pi@192.168.1.4
pi@raspberrypi:~ $ uname -arn
Linux raspberrypi 4.19.118-v7+ #1311 SMP Mon Apr 27 14:21:24 BST 2020 armv7l GNU/Linux
pi@raspberrypi:~ $

Step #2: Installing Docker 19.03 on each Pi node

sudo curl -sSL https://get.docker.com/ | sh
pi@raspi2:~ $ docker version
Client: Docker Engine - Community 
Version:           19.03.4 
API version:       1.40 
Go version:        go1.12.10 
Git commit:        9013bf5 
Built:             Fri Oct 18 16:03:00 2019 
OS/Arch:           linux/arm 
Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:29:22 2020
  OS/Arch:          linux/arm
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec368
pi@raspi2:~ $

Running Nginx Docker container

pi@raspi2:~ $ docker run -d -p 80:80 nginx
pi@raspi2:~ $ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
d7055f45bf23        nginx               "/docker-entrypoint.…"   2 minutes ago       Up About a minute   0.0.0.0:80->80/tcp   silly_maxwell
pi@raspi2:~ $

Step #3: Creating Datadog Account

Datadog provides a 14-day trial period for end users, and you can register for the monitoring service by visiting the official website https://www.datadoghq.com/

Datadog supports 400+ integrations and you can see the list of vendors as shown below. You can go ahead and choose your stack. As we are planning to run the Datadog agent inside a Docker container, I will go ahead and choose Docker for now.

Datadog provides step by step instructions for almost all OS distributions as shown below:

Open a terminal and paste the below command:

DOCKER_CONTENT_TRUST=1 sudo docker run -d --name dd-agent -v /var/run/docker.sock:/var/run/docker.sock:ro -v /proc/:/host/proc/:ro -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro -e DD_API_KEY=8411XXXXXXXXXXXXba3 achntrl/datadog-agent

sudo docker ps
CONTAINER ID        IMAGE                   COMMAND             CREATED             STATUS                             PORTS                NAMES
c43b639522fc        achntrl/datadog-agent   "/init"             37 seconds ago      Up 32 seconds (health: starting)   8125/udp, 8126/tcp   dd-agent

This runs a signed Docker container which embeds the Datadog Agent to monitor your Pi host. The Docker integration is enabled by default, as well as Autodiscovery in auto config mode.
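
To quickly confirm that the agent container came up and is shipping data, you can tail its logs; this is a generic Docker check and works regardless of the agent image used:

sudo docker logs -f dd-agent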

You can open up Datadog Dashboard and verify if Raspberry Pi node is detected or not.

As shown below, the Raspberry Pi node appears on the right-hand side of the dashboard.

Click on the node and you will see the detailed information as shown below.

Datadog provides fancy metrics for CPU, memory, load average, processor usage etc.

Running the Datadog agent on all the nodes

You need to repeat the command below on the rest of the Pi nodes:

DOCKER_CONTENT_TRUST=1 sudo docker run -d --name dd-agent -v /var/run/docker.sock:/var/run/docker.sock:ro -v /proc/:/host/proc/:ro -v /sys/fs/cgroup/:/host/sys/fs/cgroup:ro -e DD_API_KEY=8411XXXXXXXXXXXXba3 achntrl/datadog-agent

sudo docker ps
CONTAINER ID        IMAGE                   COMMAND             CREATED             STATUS                             PORTS                NAMES
c43b639522fc        achntrl/datadog-agent   "/init"             37 seconds ago      Up 32 seconds (health: starting)   8125/udp, 8126/tcp   dd-agent

Visualising both the Pi nodes on Datadog Dashboard

You will see system metrics details around both the nodes as shown below:

Visualising both the Pi nodes

Hence, we are able to run the Datadog agent inside Docker containers and get the metrics visualised on the Datadog dashboard. In Part 2, we will see how to install a K3s cluster on the two Pi nodes and then run the Datadog agent using Helm.

The Ultimate Docker Tutorial for Automation Testing

If you’re looking for a $0, cloud-based SaaS test automation development framework designed for your agile team, TestProject is the right choice for you. TestProject is a community-powered end-to-end automation platform for your web, mobile (Android & iOS) apps as well as API testing. It is an automation platform that aims to create a powerful and collaborative environment for the entire test automation community, without any barriers, and completely free of cost. It is built on top of industry standard open source tools (Selenium & Appium), supports all major operating systems, and ensures quality with speed using advanced built-in recording capabilities, addons, reports and analytics dashboards; alternatively, you can develop coded tests using TestProject’s powerful SDK for Java/C#!

100% Free Automation Platform

TestProject is purely a FREE automation platform built on open source tools, with features testers and developers love. It has a cloud-based user interface. It comes with free addons provided directly by the community. TestProject uses best in breed solutions like Selenium and Appium in their framework—with the added benefit of numerous functionalities you don’t have to create from scratch. Selenium and Appium are used in the background. TestProject is like a wrapper on top of Selenium and Appium. 

In this 5-part tutorial series, we are going to focus on Docker: what is Docker, how do we set up a Docker environment, and how can we benefit from TestProject Agents in Docker containers? And more! Let’s get started with this ultimate Docker tutorial to accelerate your automation testing!

Read rest of the tutorial at TestProject.io website

How to migrate AWS ElastiCache data to Redis with Zero downtime

Are you looking for a tool that can help you migrate ElastiCache data to Redis Open Source or Redis Enterprise, and that too without any downtime? Then you are in the right place. Before I go ahead and recommend any tool, it is equally important to understand why you would want to migrate from Amazon ElastiCache to Redis Enterprise. Now, that’s a great question! I went through several blogs and Stack Overflow comments, and one of the common reasons that comes up is Amazon ElastiCache’s lack of multi-model database support for modern applications. Redis Enterprise supports multiple data models and structures, so you can iterate applications quickly without worrying about schemas or indexes.

Introducing RIOT

RIOT is an open source data import/export tool for Redis. It is used to bulk load/unload data from files (CSV, JSON, XML) and relational databases (JDBC), replicate data between Redis databases, or generate random datasets.

RIOT was developed by Julien Ruaux, a Solution Architect currently working at RedisLabs. I was lucky enough to get the chance to work with him and present this tool to a wider audience inside RedisLabs.

RIOT is like a Swiss-army knife. RIOT can also be used for the below purposes:

  • Import CSV into RediSearch
  • Export CSV
  • Import CSV into Geo
  • Importing JSON
  • Exporting JSON
  • Exporting compressed JSON
  • Import from a database
  • Export to a database
  • Creating Random data in Redis DB
  • Live Replication of database

RIOT reads records from a source (file, database, Redis, generator) and writes them to a target (file, database, Redis). It can import/export local or remote files in CSV, fixed-width, or JSON format with optional GZIP compression.

Under this blog post, we will see how we can migrate AWS ElastiCache database to Redis Enterprise without any downtime.

The below topics outline the process for migrating your DB from AWS Elasticache to Redis Enterprise: 

  • Preparing Elasticache (Source)
  • Preparing Redis Enterprise (Target) 
  • Begin the Migration Process 
  • Verifying the Data Migration Progress 
  • Completing the Data Migration 

Create an EC2 instance

Create an EC2 instance on AWS Cloud. Ensure that this new instance falls under the same security group as well as the same VPC for accessibility. 

SSH to this new EC2 instance from my laptop as shown below: 

ssh -i "migration.pem" ubuntu@ec2-18-219-64-32.us-east-2.compute.amazonaws.com

where migration.pem is the key pair used to connect to the EC2 instance.

ubuntu@ip-172-31-46-31:~$ sudo redis-cli -h ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com -p 6379 
ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379> info 
# Server 
redis_version:5.0.6 
redis_git_sha1:0 
redis_git_dirty:0 
redis_build_id:0 
redis_mode:cluster 
os:Amazon ElastiCache 
arch_bits:64 
multiplexing_api:epoll 
..  
# Cluster 
cluster_enabled:1 
  
# Keyspace 

Setting up RIOT Tool 

We will leverage the same EC2 instance running Ubuntu 16.04 to set up the RIOT tool.

Log in to the Ubuntu system and install the software below.

Prerequisite: 

  • Installing JAVA 

It is recommended to install at least OpenJDK 11 on this Ubuntu OS.

sudo add-apt-repository ppa:openjdk-r/ppa \
  && sudo apt-get update -q \
  && sudo apt install -y openjdk-11-jdk
  • Installing RIOT 
wget https://github.com/Redislabs-Solution-Architects/riot/releases/download/v1.8.11/riot-1.8.11.zip
unzip riot-1.8.11.zip

Run the command below to generate hashes in the keyspace test2:<index> with fields field1 and field of 100 and 1,000 bytes respectively:

$ ./riot --cluster -s ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379 gen --sleep 1 --data field1=100 field=1000 hmset --keyspace test2 --keys index
  • -s refers to the server endpoint
  • --cluster connects to a Redis cluster
  • gen indicates data generation
  • hmset sets the specified fields to their respective values in the hash stored at the key
  • --sleep 1 delays for a specified amount of time between operations, in our case 1 second
  • test2 is the keyspace

Let the above command run without any interruption. 

As soon as you run the above command, you can verify the keyspace entries with the CLI below:
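
The CLI referred to here is the INFO command; for example, pointing redis-cli at the same ElastiCache endpoint used earlier and asking only for the keyspace section:

sudo redis-cli -h ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com -p 6379 info keyspace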

# Keyspace 
db0:keys=279763,expires=0,avg_ttl=0 

If you want to use a Docker container for RIOT, head over to https://github.com/ajeetraina/riot/blob/master/Dockerfile, which can be built using:

$git clone https://github.com/ajeetraina/riot
$cd riot
$docker build -t ajeetraina/riot .

Preparing Redis Enterprise as a target DB 

In order to test the migration, we need to set up the target database, hence we will be installing Redis Enterprise. I have set it up on Ubuntu 16.04 LTS running on Google Cloud Platform.

$ wget https://s3.amazonaws.com/redis-enterprise-software-downloads/5.4.14/redislabs-5.4.14-19-xenial-amd64.tar  

$sudo tar xvf redislabs-5.4.14-19-xenial-amd64.tar  

$sudo chmod +x install.sh  
$sudo ./install.sh  

Access the Redis Enterprise at https://<public-ip>:8443/ 

Enter cluster name of your choice, in my case it’s ajeetmigtest. 

Under “create database” page, go ahead and specify memory limit as per your infrastructure and supply Redis password.

Once you save the configuration, you can verify all the entries as shown below:

Please save the public endpoint (shown above) for future reference.

Joining the rest of the nodes

By now, you should be able to see memory allocation close to 66 GB.

Ensure that the below command is up and running: 

ubuntu@ip-172-31-41-56:~/riot-1.8.11/bin$ ./riot --cluster -s ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379 gen --sleep 1 --data field1=100 field=1000 hmset --keyspace test2 --keys index

Begin the Replication Process 

Run the below command to begin the replication from source to target database: 

ubuntu@ip-172-31-41-56:~/riot-1.8.11/bin$ sudo ./riot -s 35.185.1.55:12000 replicate --cluster -s ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379 

In the above CLI, 35.185.1.55 is the public IP of the GCP instance where Redis Enterprise is running, while ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379 is the ElastiCache endpoint.

Verifying the Data Replication Progress 

ajeet-migtest-1shard.8mys2u.clustercfg.use2.cache.amazonaws.com:6379> keys *
..
..
1084029) "test2:2181214"
1084030) "test2:375848"
(36.02s)

You can verify the total number of keys in both AWS ElastiCache and Redis Enterprise with the above command using redis-cli.

References:

Building Your First Certified Kubernetes Cluster On-Premises, Part 2: – iSCSI Support

In my first post, I discussed how to build your first certified on-premises Kubernetes cluster using Docker Enterprise 3.0. In this post, I will explain how to build and configure Kubernetes external storage provisioner for iSCSI storage to work with DKS.

According to a 2018 IDC survey on containers, 85% of organizations that have adopted containers use them for production apps, and 83% use containers on multiple public clouds (3.7 clouds on average). Interestingly, 55% of them run containers on-premises, compared to 45% that run containers in the public cloud. This shows that containers are now even more common in the on-premises world than they are in the cloud.

There are several reasons why on-premises remains the primary focus of Enterprise IT:

  • Direct access to the underlying OS’s hardware.
  • Full control over the host OS and container environment.
  • Ability to use certain hardware features, such as storage or processor-specific operations.
  • Legacy applications that still run directly on hardware or can’t easily be moved to the cloud.

Organizations that run Kubernetes on-premises need to connect it with local storage. In this post, I’ll explain how to use iSCSI storage with Kubernetes.

iSCSI Support for Kubernetes

Some of you are steeped in storage and dream in iSCSI, NFS and block. If that’s you, skip this intro section. If you’re just learning about storage, I’ve included a helpful primer on iSCSI in Kubernetes below.

iSCSI (Internet Small Computer System Interface) is an industry standard that allows SCSI block I/O protocol to be sent over the network using a TCP/IP-based protocol for establishing and managing connections between IP-based storage devices, hosts and clients. iSCSI SAN solutions (often called IP SANs) consist of iSCSI initiators (software driver or adapter) in the application servers, connected to iSCSI arrays via standard Gigabit Ethernet switches, routers and cables. 

One of the major advantages of using iSCSI technology is the minimal investment. iSCSI allows businesses to deploy and scale storage without changing their existing network infrastructure. iSCSI creates IP-based SANs, which allows organizations to capitalize on their existing IP infrastructure by delivering block-based storage across an IP network. Organizations do not have to invest in a new storage-only infrastructure, such as FC, which can be costly. iSCSI also allows companies to capitalize on existing in-house IT and networking skill sets to build IP-based SANs.

New to Kubernetes? Check out Musical TechSeries of KubeLabs.


Docker Enterprise 3.0, Kubernetes and iSCSI

Docker Enterprise 3.0 brings iSCSI support for Kubernetes for the first time. iSCSI support in Docker UCP enables Kubernetes workloads to consume persistent storage from iSCSI targets. As of today, iSCSI support in Docker Enterprise is enabled for Linux workloads, but not yet enabled for Windows.


Configuring iSCSI in Kubernetes via UCP

You’ll need to cover these prerequisites to get started with iSCSI for Kubernetes:

  • Configuring an iSCSI target 
  • Setting up UCP nodes as iSCSI initiators
  • Setting up external provisioner & Kubernetes objects
  • Configuring UCP

To demonstrate iSCSI support for Kubernetes under Docker Enterprise 3.0, I will be using Dell iSCSI SAN storage.

Configuring an iSCSI Target

An iSCSI target refers to a server that shares storage and receives iSCSI commands from an initiator. An iSCSI target can run on dedicated/stand-alone hardware, or can be configured on a hyper-converged system to run alongside container workloads on UCP nodes. To provide access to the storage device, each target is configured with one or more logical unit numbers (LUNs). Note that the steps here are specific to DellEMC iSCSI storage. They will vary based on the storage platform you’re using, but the workflow should be similar for any iSCSI device.

Step 1 – Configuring the iSCSI Storage Array

First, we configure the DellEMC iSCSI SAN storage as an iSCSI target. Before we use the Storage Manager software to manage storage arrays, we need to use the DellEMC Modular Disk Configuration Utility (MDCU) to configure iSCSI on each host connected to the storage array. Run the MDCU utility on a Windows system and click on the “Auto Discover” option to discover the storage array automatically.

You should now be able to discover your storage arrays successfully.

Next, Launch Dell MD Storage Manager. If this is the first storage array to be set up, the Add New Storage Array window appears. Choose Automatic and click OK. It may take several minutes for the discovery process to complete. Closing the discovery status window before the discovery process completes will cancel the discovery process. After discovery is complete, a confirmation screen appears. Click Close to close the screen.

When discovery is complete, the name of the first storage array found appears under the Summary tab in MD Storage Manager. Click the Initial Setup Tasks option to see links to the remaining post- installation tasks.

Step 2 – Configuring the iSCSI ports on the Storage Array

To configure the iSCSI ports on the storage array, open MD Storage Manager and click on iSCSI tab under Storage Array section to configure iSCSI Host Ports.

Step 3 – Perform Target Discovery from the iSCSI Initiator

The iSCSI server must be accessible to all UCP worker nodes, so that workloads using iSCSI volumes can be run anywhere on the cluster. If your UCP worker node runs Ubuntu, you can just install open-iscsi using apt as shown below:

$ sudo apt install open-iscsi


If you are running CentOS or RHEL, use the command below:

sudo yum install -y iscsi-initiator-utils
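
Once the initiator utilities are installed, you can optionally confirm that each UCP node can reach the iSCSI target by running a sendtargets discovery against it; the target IP and port below are placeholders for your own array:

sudo iscsiadm -m discovery -t sendtargets -p <target-ip>:3260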


Set up UCP nodes as iSCSI initiators

Open up /etc/iscsi/initiatorname.iscsi  on each of UCP nodes and configure initiator names for each node as shown below:

$sudo cat /etc/iscsi/initiatorname.iscsi
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator.  The InitiatorName must be unique
## for each iSCSI initiator.  Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:343822ee8898

Alternatively, you can directly use the below command to add the entries. The iqn must be in the following format: iqn.YYYY-MM.reverse.domain.name:OptionalIdentifier.

 sudo sh -c 'echo "InitiatorName=iqn.<1993-08.org.debian>:<uniqueID>" > /etc/iscsi/initiatorname.iscsi'


Restart the iSCSI service

 sudo systemctl restart iscsid


Step 4 – Configuring Host Access

To specify which host system will access virtual disks on the storage array, we need to follow the steps below. 

Launch MD Storage Manager, click on “Configure” option to set up Host Access. You will need to enter the Hostname and select Host Type. From the Drop down menu, select the appropriate Host Type. Click on “Next” to see the known iSCSI Initiators. Click Next to enter the iSCSI initiator Name to complete this process.

This completes the configuration of the iSCSI target system. 

Configuring UCP Nodes as iSCSI Initiators

In order to configure UCP nodes as iSCSI initiators, we will need to make changes in the UCP configuration file. There are two ways to configure UCP – either through the web interface or by importing and exporting the UCP config as a TOML file. We will be using the latter option.

For this demonstration, we will be using the config-toml API to export the current settings and write them to a file. I assume you have a 2-node Docker Enterprise 3.0 cluster with Kubernetes already configured. Open up the UCP master terminal and run the commands below:

Get an authtoken

$ AUTHTOKEN=$(curl --silent --insecure --data '{"username":"ajeetraina","password":"XXXXX"}' https://100.98.26.115/auth/login | jq --raw-output .auth_token)


Download the config file

curl --silent --insecure -X GET "https://100.98.26.115/api/ucp/config-toml" -H  "accept: application/toml" -H  "Authorization: Bearer $AUTHTOKEN" > ucp-config.toml


Editing the ucp-config.toml file

We only need to focus on 3 entries in the ucp-config.toml file for iSCSI configuration:

  • storage-iscsi = true – enables iSCSI-based Persistent Volumes in Kubernetes.
  • iscsiadm-path = <path> – specifies the absolute path of the iscsiadm binary on the host. Default value is “/usr/sbin/iscsiadm”.
  • iscsidb-path = <path> – specifies the path of the iSCSI database on the host. Default value is “/etc/iscsi”.
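
As a quick sanity check after editing (a sketch only; the keys live in whichever section of the exported file already contains them), you can confirm that all three entries are present:

# Expected values per the list above:
#   storage-iscsi = true
#   iscsiadm-path = "/usr/sbin/iscsiadm"
#   iscsidb-path  = "/etc/iscsi"
grep -E 'storage-iscsi|iscsiadm-path|iscsidb-path' ucp-config.toml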

Once these changes are made, it's time to import the updated configuration, which can be achieved with the command below:

Upload the Config file

curl --insecure -X PUT -H  "accept: application/toml" -H "Authorization: Bearer $AUTHTOKEN" --upload-file 'ucp-config.toml' https://100.98.26.115/api/ucp/config-toml

{"message":"Your UCP config has been set. It may take a moment for some config changes to propagate throughout the cluster."}

This completes the configuration to implement the UCP nodes as iSCSI initiators.

Setting up External Provisioner

An external provisioner is a piece of software running out of process from Kubernetes that is responsible for creating and deleting Persistent Volumes. It allows a cluster developer to order storage with a pre-defined type and configuration. External provisioners monitor the Kubernetes API server for PV claims and create PVs accordingly. This is different from in-tree dynamic provisioners that run as part of the Kubernetes controller manager. Check out the Github repository which has a library for writing external provisioners.

Kubernetes External Provisioner for Dell ISCSI SAN

I will leverage the Kubernetes external provisioner for Dell iSCSI SAN, forked from the nmaupu repository on GitHub, and will demonstrate how to configure it as an external storage provisioner. Dell-provisioner is a Kubernetes external provisioner. It creates and deletes volumes and associates LUNs using the Dell smcli command-line tool on a remote SAN whenever a PersistentVolumeClaim appears on the cluster.

Building the Storage Provisioner

Under this section, we will see how to build our own external Dell provisioner and then use it with Kubernetes resources.

Before you begin

I assume that you have followed the blog so far and already have the DellEMC iSCSI SAN configured, up and running. Ensure that both hosts – the UCP master and the worker node – are mapped to the target storage.

  • Install Ubuntu 18.04  either on VM or on bare metal system
  • Install make
sudo apt-get install make
  • Install glide
sudo add-apt-repository ppa:masterminds/glide && sudo apt-get update
sudo apt-get install glide


  • Install go
sudo wget https://dl.google.com/go/go1.13.1.linux-amd64.tar.gz
sudo tar xvf go1.13.1.linux-amd64.tar.gz
sudo mv go /home/dell/

Completing the iSCSI Setup

Congrats! You’re done with the pre-requisites. Now we can move on to completing the iSCSI setup.

Cloning the Repository

sudo git clone https://github.com/collabnix/dell-provisioner

Building the dell-provisioner

sudo make vendor && make

Once this command is successful, the binary gets copied into:

bin/dell-provisioner-linux-amd64

Provisioning the Dell Provisioner

You will require a Docker image with Dell smcli available to be able to use dell-provisioner. The Dockerfile is available under https://github.com/collabnix/dell-provisioner/blob/master/Dockerfile

Building Dell Provisioner Docker Image

Run the below command to build a SMcli based Docker Image:

$docker build -t ajeetraina/dellsmcli-docker .

Testing the Dell Provisioner Docker Image

$ sudo docker run -itd -v /tmp:/tmp ajeetraina/dellsmcli-docker 100.98.26.154 -p <password> -c "show storageArray profile;" -o /tmp/storageprofile.txt
$ cat /tmp/storageprofile.txt
PROFILE FOR STORAGE ARRAY: mykube (Sun Oct 20 04:44:13 UTC 2019)
SUMMARY
------------------------------
Number of RAID controller modules:              2   
High performance tier RAID controller modules:  Disabled  
Number of disk groups:                          1   
RAID 6:                                         Enabled
…...

The above command shows that the Dell Provisioner Docker image is working and able to fetch storage system information flawlessly.

Dynamic Provisioning

In order to manage storage, there are 2 Kubernetes API resources – PersistentVolume (PV) and PersistentVolumeClaim (PVC).  PV is a piece of storage in the cluster dynamically provisioned using Storage Classes (SC).

In order to dynamically provision persistent storage, you need to define the type and configuration of the storage. SC is a medium which abstracts the underlying storage platform so that you don’t have to know all the details (supported sizes, IOPs, etc.) to provision persistent storage in a cluster. A PVC is the request to provision persistent storage with a specific type and configuration.

As shown above, we will first create a storage class which determines the type of storage that is provisioned and the allowed ranges for sizes. Then we will create a PVC that specifies the storage type, storage class, size in gigabytes and so on. Creating a PVC in a cluster automatically triggers the storage plug-in for the requested type of storage to provision storage with the given specification. The PVC and PV are automatically connected to each other, and the status of the PVC and the PV changes to Bound. We can then use the PVC to mount persistent storage into your app.

YAML Spec for using Pod to deploy Dell Provisioner

Before you deploy the Dell provisioner, RBAC needs to be configured. RBAC is a method of restricting access based on the individual roles of users within an enterprise.

Copy the full content from here and paste it into the Object YAML section under the UCP UI > Kubernetes option. Click on “Create” to bring up the RBAC objects.

Next, you need to use the YAML file under the root directory of the GitHub repository, which allows us to deploy dell-provisioner. The content of the YAML file looks roughly like this:
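
The manifest itself lives in the repository and is not reproduced here; as a hedged sketch only, reconstructed from the Pod details shown further below (name dell-provisioner, namespace kube-system, image ajeetraina/dellsmcli-docker), it might resemble the following. The real file likely adds arguments and credentials for the SAN:

cat > dell-provisioner.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dell-provisioner
  namespace: kube-system
spec:
  containers:
  - name: dell-provisioner
    image: ajeetraina/dellsmcli-docker
    # Arguments such as the SAN management IP, credentials and provisioner
    # name are expected here; refer to the repository's YAML for exact values.
EOF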

Click on “Create” to bring up dell-provisioner Pod. Alternatively, you can execute the below command in the UCP terminal to bring up the storage provisioner.

$kubectl apply -f dell-provisioner.yaml

This will deploy Dell Provisioner and give the Kubernetes cluster access to the Dell iSCSI storage system.

Next, we can verify if Pod is running successfully:

cse@node1-linux:~/dell-provisioner$ kubectl describe po dell-provisioner -n kube-system
Name:               dell-provisioner
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               node2-linux.dell.com/100.98.26.116
Start Time:         Sun, 20 Oct 2019 01:28:23 -0400
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:      

  {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"dell-provisioner","namespace":"kube-system"},"spec":{"containers":[{"...                   
 kubernetes.io/psp: privileged
Status:             Running
IP:                 192.168.215.5
Containers:  
  dell-provisioner:    
    Container ID:   
docker://d503b94bfb6c41be2155226cef91cdc7092a31b1b1eb7784928868dd09213cbd    
   Image:          ajeetraina/dellsmcli-docker    
   Image ID:       
docker-pullable://ajeetraina/dellsmcli-docker@sha256:e0eadfc725b3a411b4eb76d1035eeb96bdde6268c4c71a9222b80145aa59a24e    
   Port:           <none>    
   Host Port:      <none>   

Specifying the Storage Class 

You can use the sc.yaml file under the root of the GitHub repository to bring up the storage class. We first need a storage class that tells Kubernetes which provisioner to use and how to use it.

Go to “Kubernetes” section under UCP UI and click on “+Create”. Copy the YAML from this location.

Click on “Create” to bring up a storageClass “dell-provisioner-sc” as shown below:
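
The sc.yaml content is not reproduced in this post; a minimal sketch consistent with the output shown below (class name dell-provisioner-sc, provisioner ajeetraina/dell-provisioner) might look like this, with any SAN-specific parameters omitted:

cat > sc.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dell-provisioner-sc
provisioner: ajeetraina/dell-provisioner
# parameters: SAN-specific options go here; see the repository's sc.yaml
EOF
kubectl apply -f sc.yaml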

You can also run the CLI command below to view storage class from the master node: 

$kubectl get sc -n kube-system
NAME                            PROVISIONER                  AGE
dell-provisioner-sc       ajeetraina/dell-provisioner   2d16h


YAML Spec for Persistent Volume Claim

Click on “Create” to bring up the PersistentVolumeClaim “dell-provisioner-test-pvc” as shown below:
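
The claim YAML is not reproduced here either; a minimal sketch matching the claim name and storage class shown below might look like the following (the requested size is illustrative):

cat > pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dell-provisioner-test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: dell-provisioner-sc
  resources:
    requests:
      storage: 5Gi
EOF
kubectl apply -f pvc.yaml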

You can also use the ‘kubectl get’ CLI on your UCP node to check the status of the PVC as shown below:

$ kubectl get pvc
NAME                        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
dell-provisioner-test-pvc   BOUND                                        dell-provisioner-sc   2d16h

YAML Spec for leveraging the claim on a Testing Pod 

In order to validate that your application Pod can leverage the PVC as well as the PV, use a YAML file like the one below:
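
The original YAML is not included in this post; as a hedged stand-in, a minimal test Pod that mounts the claim created above could look like this (the image and mount path are illustrative):

cat > test-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dell-provisioner-test-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: dell-provisioner-test-pvc
EOF
kubectl apply -f test-pod.yaml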

By applying the above YAML, the underlying volume should get created and thereby get associated with your pod. 

Coming up Next..

So far, I have walked you through deploying an on-premises certified Kubernetes cluster and demonstrated iSCSI support for Kubernetes under Docker Enterprise. We saw that it is possible to provision your Kubernetes cluster with persistent storage using iSCSI. In the next blog post in this series, I will show how to implement Kubernetes cluster ingress on-premises using Docker Enterprise: how to install cluster ingress on a UCP cluster, deploy a sample application with Ingress rules, and much more.

Additional References: