Top 6 Docker Security Scanning Practices

When running containers and Kubernetes, security needs to be just as much of a priority as development. A DevOps approach brings security and development teams together to produce code that is both effective and secure.

Managing container vulnerabilities can be tricky because so many moving parts are involved, and chasing them down late can delay application delivery dates. Building security into the DevOps workflow, however, catches many of these vulnerabilities along the way.

As a result, developers can work more productively on code that works effectively while also being secure. This post covers some of the best Docker security scanning practices to consider during development to keep containers as secure as possible.

Inline Scanning


Inline image scanning can be implemented in your CI/CD pipeline easily and efficiently. Because the scan runs inside the pipeline itself, image contents never leave your environment; only the scan results are sent to the tool you're using, which makes it much easier to manage privacy.


Inline image scanning also helps developers discover whether credentials have accidentally been baked into images. Once developers know where these mistakes are, they can keep those credentials out of attackers' hands and prevent further damage.
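As a rough sketch of what inline scanning can look like in practice (the image name and pipeline step are hypothetical, and Trivy is just one of several scanners that support this pattern), a CI job might build the image and scan it in place, failing the build on serious findings:

# Build the image inside the pipeline
docker build -t myapp:ci .

# Scan it in place; a non-zero exit code fails the CI job on HIGH/CRITICAL findings
trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:ci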

Image Scanning

It’s good practice for developers to scan container images before running them. This ensures that any security risks within the images can be found and fixed before the images are executed.

Once developers have finished building and testing their code, they can push the resulting images to a staging repository. There, scanning tools can check for vulnerabilities and produce reports that detail the severity of each security risk. This allows developers to prioritize the risks in order of severity: they can work on the most severe vulnerabilities first and move systematically down the list.
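For example (the image name is a placeholder, and Grype is just one scanner that produces this kind of severity-ranked report), a scan against a staged image might look like this:

# Produce a vulnerability report for a staged image, listing the severity of each finding
grype myapp:staging

# Optionally fail the stage when findings at or above a given severity are present
grype myapp:staging --fail-on high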


If numerous issues are found in the image scanning results, developers may decide to put the build on hold, depending on the severity of the issues discovered.


These tools can be paired with automation that makes them easier and more efficient to use. Developers can run image scanning tools and be notified of issues that need fixing. It's an effective way to stop vulnerabilities from becoming a bigger problem, since they're resolved before reaching the next stage of development.


Preventing Vulnerable Images From Deployment

Scanning images in the CI/CD pipeline sometimes isn't enough: vulnerable images can still make their way into production. As a result, it's a good idea to have Kubernetes scan images before they're scheduled for execution, so that images with vulnerabilities, or images that haven't been scanned at all, can be stopped from being deployed. Kubernetes admission controllers are a built-in mechanism that lets developers customize exactly what is permitted to run within a cluster.

As a result, anything that tries to run in the cluster outside the rules you've defined gets flagged. Admission controllers can stop vulnerabilities from going any further, as long as the proper authentication is in place. OPA (Open Policy Agent) is a policy engine that helps automate these decisions: it lets developers make admission decisions inside their Kubernetes cluster using information taken directly from the cluster. It can be a more precise way to catch vulnerable images, and it gives developers more control over what gets approved and what doesn't.
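As a minimal sketch of wiring an admission controller into a cluster (the Helm repository URL, release name, and namespace follow the OPA Gatekeeper project's documented defaults, but verify them against the current docs), Gatekeeper can be installed and checked like this:

# Install OPA Gatekeeper, which registers itself as a validating admission webhook
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm repo update
helm install gatekeeper gatekeeper/gatekeeper --namespace gatekeeper-system --create-namespace

# Confirm the admission webhook is registered with the cluster
kubectl get validatingwebhookconfigurations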

Registry Image Scanning

It's good practice for developers to use registries together with image scanning. This lets them scan images before the images are pulled into production. As a result, developers know that any image pulled from their registries has already been checked for vulnerabilities, which makes the whole process of running images securely more efficient.
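As one concrete example (the repository name and tag are placeholders), Amazon ECR can be told to scan every image on push, and the findings can be pulled back with the AWS CLI:

# Enable scan-on-push for a repository
aws ecr put-image-scanning-configuration --repository-name my-app --image-scanning-configuration scanOnPush=true

# Fetch the vulnerability findings for a pushed tag
aws ecr describe-image-scan-findings --repository-name my-app --image-id imageTag=1.0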

Scanning 3rd-Party Libraries

Developers often include 3rd-party libraries in their code because it's an effective way to finish and ship projects faster. However, organizations must be aware that 3rd-party components can carry a higher risk of vulnerabilities.

Scanning tools are a must for 3rd-party libraries. They report the vulnerabilities found in these components, so developers can either fix the security risks or swap in different components instead.
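As a rough sketch (tool choice and flag names vary between versions; Trivy's filesystem mode and the language-specific auditors shown here are common options), dependency scanning can run straight against the project directory:

# Scan lock files and manifests in the working tree for vulnerable dependencies
trivy fs --scanners vuln .

# Language-specific audits are another option, for example:
npm audit        # Node.js projects
pip-audit        # Python projects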

Scanning for Errors in Dockerfiles

It's common for developers to come across misconfigurations within their Dockerfiles, and there are several ways to find them. One of the most important checks is whether the application runs as a privileged (root) user, because that grants an attacker far more access to resources than the application needs. Mistaken commands can also copy private files into the image and leave them exposed. Developers should also consider running the entrypoint as a dedicated, non-root user for better security.

In addition to this, developers should check whether insecure ports have been exposed by their containers. Insecure ports that are left open can provide attackers with an entry point into the rest of your system.
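A simple way to catch many of these Dockerfile mistakes automatically (hadolint is one widely used linter; the file name shown is the conventional default) is to lint the Dockerfile as part of the build:

# Lint the Dockerfile for common misconfigurations (missing USER, exposed ports, ADD vs COPY, etc.)
hadolint Dockerfile

# Or run the linter from its container image without installing anything locally
docker run --rm -i hadolint/hadolint < Dockerfile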

Conclusion


Scanning images is becoming a standard part of the development process. It combines the efforts of developers and security teams to help organizations create applications that are secure during every stage.


As a result, developers have an easier time working systematically to discover vulnerabilities and prioritize them in terms of severity. Image scanning is also something that should be integrated throughout the entire project as a continuous process that developers use as checkpoints.


When Docker scanning practices are used correctly, they can save organizations the time and hassle of having to go back and fix security risks later. Developers can work more productively to deliver applications faster and more securely.

Hopefully, the information in this post has provided you with more insight into what some of the best Docker scanning methods involve.

Shipa Vs OpenShift: A Comparative Guide

With the advent of a popular PaaS like Red Hat OpenShift, developers are able to focus on delivering value through their product instead of building IT infrastructure. Red Hat OpenShift is a multifaceted container application platform from Red Hat for the development, deployment, and management of applications. OpenShift is a layer on top of Docker and Kubernetes that makes it accessible and easy for developers to create applications, and a platform that operators can comfortably use to deploy containers for both development and production workloads. Today OpenShift is heavily used by enterprise IT, but let's agree that customizing and building a complex platform and services adds no specific value to your organization; complexity is intrinsically a killer in and of itself, and an exponential risk to your chances of success.

Today you need to enable your developers to self-service through a continuous operation platform, empowering them to deploy and operate their services with minimal Ops intervention. Enabling developers to self-service means treating Ops as a product team. The infrastructure automation, deployment automation, configuration management, logging, monitoring, and production tools are all products, and it's these products that allow teams to fully own their services. This leads to empowerment: products enable ownership. We move away from Ops as masters of production responsible for everything and push that responsibility onto dev teams. They are the experts for their services and are best equipped to deal with problems that arise, while we provide the tools they need to diagnose and resolve those problems on their own.

Continuous Operation for Cloud Native Applications 

Shipa is reinventing how cloud-native applications are delivered across heterogeneous clusters and cloud environments. Our landing pad approach allows Platform and DevOps teams to build security, compliance, and operational guardrails directly into their infrastructure while enabling Developers to focus on what matters: delivering software and changes that add value to the organization. With Shipa, teams can focus on application delivery and governance rather than infrastructure. Shipa implements a clear separation of concerns between Developers and Platform Engineering teams, improving Developer experience, governance, monitoring, and control. Shipa provides native services to applications that are available right at deployment, such as different databases, queuing, canary-based deployment, deployment rollback, and more, allowing Developers to focus on delivering application code while platform teams support development speed and experience rather than spending time and effort installing and maintaining different infrastructure components.

Now you might be wondering why the world needs another offering like Shipa, and how it's different. In this comparative blog post, I will take a deeper look at how Shipa compares to OpenShift.

Key motivation to become Cloud-Native 

Before we dive into the comparison, let us first chart out the top capabilities to consider when evaluating an enterprise Kubernetes platform for becoming cloud-native:

– Developer Productivity 

– A Cluster Agnostic Platform 

– Application Portability 

– Resiliency 

– Multi-Cloud Support 

– Scalability 

– Out of the box Monitoring 

– Out of the box Security

– Seamless 3rd Party Integration 

– OpenAPI & Extension to Edge devices 

– Business Agility 

– Cost Saving 

– Automated Routing & Observability 

Let’s deep dive into each of these capabilities and see how Shipa fits into Cloud-Native world: 

Cluster Agnostic Platform 

The world is moving to cloud-native architectures with multi-vendor, open-cloud hybrid systems. Unlike OpenShift, which locks you into OpenShift clusters only, Shipa is purely a cluster-agnostic platform. Users can attach any Kubernetes cluster to Shipa pools, whether GKE, AKS, on-premises, and so on. You can connect Shipa's landing pads (also known as Pools) to multiple clusters across multiple clouds, such as GKE, AKS, EKS, and OKE, to centrally configure policies at the pool level, monitor applications and security, perform deployment rollbacks, and more, helping your team deliver on the governance and control goals required by your organization.

Users can install Shipa on any existing Kubernetes cluster (1.10.x or newer). Shipa uses Helm charts for the install and supports both Helm 2 and Helm 3. A fully automated deployment and an easy UI-driven wizard get Kubernetes clusters running in a few minutes, and you can then manage clusters with one-click UI-based upgrades and troubleshooting.
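As a hypothetical sketch of that Helm-based install (the chart repository URL, chart name, and namespace here are assumptions, so check Shipa's installation documentation for the exact values), the flow looks roughly like this:

# Add Shipa's Helm chart repository (URL is an assumption; see Shipa's docs)
helm repo add shipa-charts https://shipa-charts.storage.googleapis.com
helm repo update

# Install Shipa into its own namespace on an existing Kubernetes cluster
helm install shipa shipa-charts/shipa --namespace shipa-system --create-namespace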

Application Portability 

OpenShift generally offers multi-cluster management for OpenShift clusters only. Shipa, by contrast, offers multi-cluster management through Pools. It handles the application lifecycle across clusters and clouds, whether GKE, EKS, or AKS, and regardless of Kubernetes version differences, be it 1.14 or 1.16.

Shipa makes the underlying cluster and infrastructure irrelevant. Users can move apps between different clusters seamlessly: Shipa takes care of moving the app and creating the required application objects, which helps in scenarios such as high availability, cluster upgrades, multi-cloud, and others.

Shipa uses the concept of a landing pad, called a Pool, to bring application context to your application delivery and management workflow. Pools are where your CI/CD pipeline delivers the application code and where the process starts with Shipa. Once code is received, Shipa pools are responsible for creating application objects, running security scans, attaching policies, producing application metrics, and more. A Shipa pool can host nodes from multiple cloud providers and will distribute your application units/containers across multiple clouds and nodes.

Enhanced Out-of-box Monitoring 

In Shipa, monitoring comes out-of-the-box, with the option to also export it to 3rd party tools, such as New Relic and Datadog. Since application objects are created, deployed and monitored automatically, now developers can focus solely on delivering application code rather than worrying about monitoring tools. 

Enhanced Out-of-the-Box Security 

OpenShift requires you to install additional tools to have security in place, and it can still be somewhat complex for the operations team. To harden OpenShift, you need to be familiar with tools and concepts like SELinux, stateful and stateless inspection firewalls, process/network/storage separation, OAuth authentication with an identity provider such as LDAP, GitHub, or Google, and much more.

With Shipa, security comes out of the box. Shipa allows security configuration at the pool level, which makes it flexible, especially in multi-tenancy scenarios or when services with different requirements are deployed across different pools. Because it's native to the pool, no additional tools or setup are necessary.

OpenAPI & Extensible to Edge Devices 

Shipa can use plain Docker nodes, not only Kubernetes (Shipa) nodes, so you can leverage cloud instances such as EC2 as well as extend to Linux-based edge devices. Shipa's APIs are open and documented, allowing Platform and DevOps teams to integrate Shipa with external platforms and to use Shipa as the center of their automation strategy. In addition, Shipa has the concept of Plugins, which lets DevOps and Platform teams build further automation inside Shipa that can be used at different stages of the application operation lifecycle.

Say “No” to YAML 

Shipa is for you if you are looking for a platform that lets you think about and operate applications rather than infrastructure when delivering services, across any infrastructure. Shipa abstracts away the need to create and maintain object YAML files, not only for applications but also for things like volume management. Many components on OpenShift still require you to deal with Kubernetes object YAML files, as well as custom scripts for actions such as certificate generation and encryption for apps. With governance, guardrails, and automated object creation and management in place for applications, developers can focus on writing code that delivers value and services to the organization, not on the infrastructure that runs it.

Since application objects are created, deployed, and monitored automatically, Developers can focus solely on delivering application code rather than learning Kubernetes-specific requirements. And because Shipa integrates into your existing CI/CD workflow, Developers do not need to change or learn additional tools, improving Developer experience and application delivery speed.

Automated Routing & Observability 

Shipa's DNS capability ensures that users can reach your application from the moment it launches, with no extra effort required, while capabilities such as canary deployments and deployment-based rollback allow Platform teams to keep applications available. At application deployment time, Shipa provides automated monitoring and application-related metrics, helping you better understand the status of your applications; these metrics can also be exported to 3rd-party monitoring platforms.

Support for Persistent Applications 

Shipa supports connections to all major CSI providers, allowing Platform Engineering teams to make volumes available across different clusters and CSI providers at the pool level. Developers can simply attach these available volumes to their applications without needing to learn or create any YAML or volume-related files, improving Developer experience and application delivery speed.

Conclusion 

In a nutshell, Shipa has readdressed the traditional PaaS model compared to Red Hat OpenShift and adds extra value to an enterprise's cloud-native success, especially against the key objectives listed below:

● Even though OpenShift helps a bit on the developer side, it still operates in an object-oriented, infrastructure context rather than an application context, so you can expect your platform, DevOps, and development teams to remain tied to infrastructure and objects rather than to the app only.

● With OpenShift you end up locked into an OpenShift-only platform, which becomes expensive in the long run. Additionally, the platform doesn't allow you to use the different clusters, versions, and other options that might be best for your application requirements.

● Shipa creates the application and its objects inside the cluster itself, so if you later decide to move away from Shipa, your apps will keep working (you will just have to take on the burden of managing everything and every object manually).

● OpenShift upgrades are painful, take time, and if not planned right, can impact your environment and applications.

Hence, Shipa provides native services to applications that are available right at deployment, such as different databases, queuing, canary-based deployment, deployment rollback, out-of-the-box monitoring and security, and more, allowing Developers to focus on delivering application code while platform teams support development speed and experience rather than spending time and effort installing and maintaining different infrastructure components.

Top 5 Docker Myths and Facts That You Should be Aware of

 

Today, every fast-growing business must rapidly deploy new features of its app if it wants to survive in this competitive market. Developing apps today requires much more than writing code. Developers face a vast array of complex tooling and duplicated sets of commands and tasks to go from the local desktop to cloud-native development. It can take hours, possibly days, for a development team to decide on the right cloud environment to meet their requirements and to get that environment successfully set up. Docker simplifies and accelerates your workflow while giving developers the freedom to innovate with their choice of tools, application stacks, and deployment environments for each project.

With over 396 billion all-time Docker Hub pulls, 16.2 million Docker Desktop downloads, and 9 million Docker accounts, Docker is still the most popular container platform among developers. If you search "Docker" on GitHub, you will find over 20 million code results, 690K repositories, and over 14,000 discussions around Docker. It shows how Docker is being used by millions of developers to build, share, and run any app, anywhere. As per the latest Stack Overflow 2021 survey, Docker is still the #1 most wanted and #2 most loved developer tool, helping millions of developers build, share, and run any app, anywhere – on-prem or in the cloud. 

Today, all major cloud providers support the Docker platform. For example, AWS and Docker have collaborated to create a simplified developer experience that lets you deploy and manage containers on Amazon ECS directly using Docker tools. Amazon ECS uses Docker images in task definitions to launch containers as part of tasks in your clusters. This year, Docker announced that all of the Docker Official Images are now available on AWS ECR Public.

The Docker Azure Integration enables developers to use native Docker commands to run applications in Azure Container Instances (ACI) when building cloud-native applications. The new experience provides tight integration between Docker Desktop and Microsoft Azure, allowing developers to quickly run applications using the Docker CLI or VS Code extension and to switch seamlessly from local development to cloud deployment. Beyond that, technologies and tools available from Docker and its open source project, Moby, have been leveraged by all major data center vendors and cloud providers, many of which use Docker for their container-native IaaS offerings. Additionally, the leading open source serverless frameworks utilize Docker container technology.

Undoubtedly, Docker today is the de facto standard most developers use to package their apps, but as the container market continues to evolve and diversify in terms of standards and implementations, there is growing confusion among enterprise developers about choosing the right container platform for their environment. Fortunately, I am here to help by debunking the top 5 of these modern myths. This blog aims to clear up some commonly held misconceptions about Docker's capabilities. The truth, as they say, shall set you free and ‘whalified’.

Myth – 1: Docker doesn’t support rootless containers

This myth says that the Docker daemon requires root privileges and hence admins can’t launch containers as a non-privileged user. 

Fact: Rootless mode was introduced in Docker Engine v19.03 as an experimental feature and graduated from experimental in Docker Engine v20.10. This means Docker can now also be run as a non-root user. Rootless containers have a huge advantage over rootful containers since (you guessed it) they do not run under the root account. The benefit is that if an attacker manages to compromise and escape a container, that attacker is still a normal user on the host. Containers started by a user cannot have more privileges or capabilities than the user itself.
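As a brief sketch based on the rootless documentation linked below (the UID in the socket path is just an example and depends on your user), setting up and using the rootless daemon looks roughly like this:

# Set up a rootless Docker daemon for the current non-root user
dockerd-rootless-setuptool.sh install

# Point the CLI at the rootless daemon's socket (1000 is an example UID)
export DOCKER_HOST=unix:///run/user/1000/docker.sock

# Containers now run without root privileges on the host
docker run -d -p 8080:80 nginx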

Learn more – https://docs.docker.com/engine/security/rootless/

Myth – 2: Docker doesn’t support daemonless architecture. 

Let us understand this myth. It says that when working with Docker you have to use the Docker CLI, which communicates with a background daemon (the Docker daemon). The main logic resides in the daemon, which builds images and executes containers. This daemon runs with root privileges, which presents a security challenge when granting root privileges to users. It also means that an improperly configured Docker container could potentially access the host filesystem without restriction. And because Docker depends on a daemon running in the background, whenever a problem arises with the daemon, container management comes to a halt. This single point of failure therefore becomes a potential problem.

Fact: By default, when the Docker daemon terminates, it shuts down running containers. However, you can configure the daemon so that containers keep running if the daemon becomes unavailable. This functionality is called live restore, and it helps reduce container downtime due to daemon crashes, planned outages, or upgrades. To enable the live restore setting and keep containers alive when the daemon becomes unavailable, add the configuration to the daemon configuration file:

On Linux, this defaults to /etc/docker/daemon.json. On Docker Desktop for Mac or Docker Desktop for Windows, select the Docker icon from the task bar, then click Preferences -> Docker Engine.

Use the following JSON to enable live-restore.

{

"live-restore": true

}
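With that setting in place, the daemon configuration can be reloaded without restarting running containers, for example:

# On systemd hosts, reload the daemon so it picks up the new configuration
sudo systemctl reload docker

# Alternatively, send SIGHUP to the dockerd process directly
sudo kill -SIGHUP $(pidof dockerd)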

Learn more: https://docs.docker.com/config/containers/live-restore/ 

Myth – 3: Docker doesn’t support Container Image signing

This myth states that Docker is not secure: Docker images can't be trusted because they are not signed, and Docker doesn't validate your images or have any capability to track the source from which images are pulled.

Fact: Docker Content Trust has been available since v1.8, which introduced the ability to verify the authenticity, integrity, and publication date of Docker images made available on the Docker Hub Registry. Docker Content Trust (DCT) provides the ability to use digital signatures for data sent to and received from remote Docker registries. These signatures allow client-side or runtime verification of the integrity and publisher of specific image tags. 

Within the Docker CLI, we can sign and push a container image with the ‘docker trust’ command syntax. This is built on top of the Notary feature set. A prerequisite for signing an image is a Docker registry with a Notary server attached (such as Docker Hub).

docker trust

Usage:  docker trust COMMAND

Manage trust on Docker images

Management Commands:
  key         Manage keys for signing Docker images
  signer      Manage entities who can sign Docker images

Commands:
  inspect     Return low-level information about keys and signatures
  revoke      Remove trust for an image
  sign        Sign an image

Run 'docker trust COMMAND --help' for more information on a command.
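As a quick sketch (the repository and tag below are hypothetical placeholders), signing a tag and enforcing trust on the client side looks like this:

# Opt this shell into Docker Content Trust
export DOCKER_CONTENT_TRUST=1

# Sign and push a tag (repository/tag are placeholders)
docker trust sign example.com/myorg/myapp:1.0

# Inspect the signatures and signers attached to the tag
docker trust inspect --pretty example.com/myorg/myapp:1.0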


Learn more – https://docs.docker.com/engine/security/trust/

Myth – 4: Docker is becoming paid and not free software anymore

This myth states that Docker is not free software anymore. Docker has completely monetized the software and hence one needs to pay for the subscription if they want to use it.

Fact: Docker Engine and all upstream open source Docker and Moby projects are still free. Docker Desktop remains free to download and install for personal use. If you're running a small business with fewer than 250 employees and less than $10 million in annual revenue, Docker Desktop is still free. Whether you are a student or an instructor, in an academic or professional environment, it is still free to download and install. If you are working on a non-commercial open source project hosted on GitHub that abides by the Open Source Initiative definition, you can use Docker Desktop for free; all you need to do is fill out the form and apply.

For your open source project namespace on Docker Hub, Docker offers unlimited pulls and unlimited egress to any and all users, with no egress restrictions applying to any Docker users pulling images from that namespace. In addition, if your open source project uses Autobuild capabilities, you can continue using them for free. You are also free to continue to use Docker Desktop via the Docker Personal subscription. 

Myth – 5: Docker doesn’t support Kubernetes

This myth states that Docker is incapable of running Kubernetes Pods. A Pod represents a single instance of a running process in your cluster. Pods contain one or more containers, such as Docker containers. When a Pod runs multiple containers, the containers are managed as a single entity and share the Pod's resources.

Fact: Docker Desktop does allow you to run Kubernetes Pods. If you have Docker Desktop installed on your Mac or Windows system, you can enable Kubernetes from the Dashboard UI and then deploy Pods on it. You can even use the native Docker Compose tooling to bring up Kubernetes resources seamlessly.
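For instance, once Kubernetes is enabled in Docker Desktop, the cluster shows up as a kubectl context named docker-desktop and Pods can be scheduled on it directly (the nginx image here is just an example):

# Switch kubectl to the Docker Desktop cluster
kubectl config use-context docker-desktop

# Run a Pod and confirm it is scheduled
kubectl run nginx --image=nginx
kubectl get pods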

Learn more – https://docs.docker.com/desktop/kubernetes/ 

Conclusion:

Docker today is still heavily used by millions of developers to build, share, and run any app, anywhere, almost every day. It enables developers to accelerate their productivity and spend more time delivering value that's core to their business. If you are looking for a mature, stable, and enterprise-ready container desktop platform, Docker Desktop is the right choice for you and your organization.

References:

Here at the Collabnix Community Slack, we're happy to chat about Docker and how it is being adopted by millions in the developer community. If interested, leave your comments below.

How to control DJI Tello Mini-Drone using Python

If you want to take your drone programming skills to the next level, the DJI Tello is the right product to buy. Tello is a $99 mini-drone that can be programmed using Python, Scratch, and Swift. It is rightly called an indoor quadcopter and comes equipped with an HD camera, giving you a bird's-eye view of the world. It's remarkably easy to fly: start flying by simply tossing Tello into the air, then slide on your mobile screen to perform 8D flips and cool aerial stunts. It's quite lightweight, with dimensions of 3.8 x 3.6 x 1.6 inches and a weight of only 2.8 ounces. One of its amazing features is VR headset compatibility, so you can fly it with a breathtaking first-person view.

Tello is a small quadcopter that features a Vision Positioning System and an onboard camera. Tello is a great choice if you want to learn AI analytics at the edge. Imagine building an application that captures video streaming from the drone and sends it to AI computers like a Jetson Xavier or Nano for analytics, storing the time-series data in Redis running in the cloud and visualizing it in Grafana. There's plenty of learning opportunity in these affordable drones for researchers, engineers, and students.

Notable Features of Tello Drone

  • DJI Tello has an excellent flight time of 13 minutes. (enough for your indoor testing!)
  • It comes with a 5MP camera. It can shoot 720p videos, and has digital image stabilization!
  • Approximately 80 g (Propellers and Battery Included) in weight.
  • This small drone has a maximum flight distance of 100 meters and you can fly it in various flight modes using your smartphone or via the Bluetooth controller!
  • It has two antennas that make video transmission extra stable and a high-capacity battery that offers impressively long flight times.
  • It comes with a Micro USB Charging Port
  • Equipped with a high-quality image processor, Tello shoots incredible photos and videos. Even if you don’t know how to fly, you can record pro-level videos with EZ Shots and share them on social media from your smartphone.
  • Tello comes with sensors that help it detect obstacles and assist during landing
  • Tello's height limit can be hacked. Check this out: http://protello.com/tello-hacking-height-limit/
  • DJI Tello has a brushed motor, which is cheaper than a brushless motor but also less efficient. (Sadly, brushed motors are known to burn out sometimes due to low quality or poor implementation. They're also susceptible to rough impacts.)
  • Despite being a non-GPS drone, it is very stable. Video quality is quite decent and landing is accurate. Fly it in calm or no-wind conditions, otherwise it'll sway away with the wind. It's good for indoor flying and for clicking some nice selfies.
  • DJI Tello is controlled using an application on an iOS or Android mobile phone. It is also possible to control it via a Bluetooth joystick connected through the application.

Getting Started

Hardware Required:

  • DJI Tello Drone (Buy)
  • Charging Cable
  • Battery (Buy)

DJI Tello comes with a detachable 1.1Ah/3.8V battery. Insert the 26g battery into the aircraft and charge it by connecting the Micro USB port on the aircraft to a charger.

Ways of controlling Your DJI Tello

There are two ways to control your DJI Tello. The first is using your mobile device; you will need to download the Tello or Tello EDU app first. You can also control your Tello via Python or Scratch programming. In this blog, we will see how to control Tello using Python.

Pre-requisite:

  • Linux System( Desktop or Edge device)
  • Python3
  • Tello Mobile app

Press the “Power” button on the Tello once. Once it starts blinking, open the Tello mobile app to discover the drone, then open settings and configure the WiFi username and password. Connect your laptop to the Tello WiFi network and follow the steps below to connect via a Python script.

Install using pip

pip install djitellopy

For Linux distributions that ship with both python2 and python3 (e.g. Debian, Ubuntu, …), you need to run

pip3 install djitellopy

API Reference

See djitellopy.readthedocs.io for a full reference of all classes and methods available.

Step 1. Connect, TakeOff, Move and Land

The Python script below connects to the drone, takes off, makes a few movements, and then lands smoothly.

from djitellopy import Tello

tello = Tello()

# establish the connection to the drone and take off
tello.connect()
tello.takeoff()

# distances are in centimetres, rotations in degrees
tello.move_left(100)
tello.rotate_counter_clockwise(90)
tello.move_forward(100)

tello.land()

Step 2. Take a Picture

import cv2
from djitellopy import Tello

tello = Tello()
tello.connect()

# start the video stream and grab a background frame reader
tello.streamon()
frame_read = tello.get_frame_read()

# take off, write the current camera frame to disk, then land
tello.takeoff()
cv2.imwrite("picture.png", frame_read.frame)

tello.land()

Step 3. Recording a Video


# source https://github.com/damiafuentes/DJITelloPy
import time, cv2
from threading import Thread
from djitellopy import Tello

tello = Tello()

tello.connect()

keepRecording = True
tello.streamon()
frame_read = tello.get_frame_read()

def videoRecorder():
    # create a VideoWriter object, recording to ./video.avi
   
    height, width, _ = frame_read.frame.shape
    video = cv2.VideoWriter('video.avi', cv2.VideoWriter_fourcc(*'XVID'), 30, (width, height))

    while keepRecording:
        video.write(frame_read.frame)
        time.sleep(1 / 30)

    video.release()

# we need to run the recorder in a separate thread, otherwise blocking operations
#  would prevent frames from getting added to the video
recorder = Thread(target=videoRecorder)
recorder.start()

tello.takeoff()
tello.move_up(100)
tello.rotate_counter_clockwise(360)
tello.land()

keepRecording = False
recorder.join()

Step 4. Control the drone using Keyboard

# source https://github.com/damiafuentes/DJITelloPy
from djitellopy import Tello
import cv2, math, time

tello = Tello()
tello.connect()

tello.streamon()
frame_read = tello.get_frame_read()

tello.takeoff()

while True:
    # In reality you want to display frames in a separate thread. Otherwise
    #  they will freeze while the drone moves.
   
    img = frame_read.frame
    cv2.imshow("drone", img)

    key = cv2.waitKey(1) & 0xff
    if key == 27: # ESC
        break
    elif key == ord('w'):
        tello.move_forward(30)
    elif key == ord('s'):
        tello.move_back(30)
    elif key == ord('a'):
        tello.move_left(30)
    elif key == ord('d'):
        tello.move_right(30)
    elif key == ord('e'):
        tello.rotate_clockwise(30)
    elif key == ord('q'):
        tello.rotate_counter_clockwise(30)
    elif key == ord('r'):
        tello.move_up(30)
    elif key == ord('f'):
        tello.move_down(30)

tello.land()

In my next blog post, I will showcase how to implement object detection and analytics using Deep Learning, DJI Tello, Jetson Nano and DeepStream.
