Docker Desktop is the easiest way to run Kubernetes on your local machine – it gives you a fully certified Kubernetes cluster and manages all the components for you.
In this tutorial you’ll learn how to set up Kubernetes on Docker Desktop and run a simple demo app. You’ll gain experience working with Kubernetes and see how its app definition syntax compares to Docker Compose.
1. Install Docker Desktop
2. Enable Kubernetes
Kubernetes itself runs in containers. When you deploy a Kubernetes cluster you first install Docker (or another container runtime like containerd) and then use tools like kubeadm, which start all the Kubernetes components in containers. Docker Desktop does all that for you.
Make sure you have Docker Desktop running – in the taskbar on Windows and the menu bar on the Mac you’ll see Docker’s whale logo. Click the whale and select Settings:
The Docker Desktop menu
A new screen opens with all of Docker Desktop’s configuration options. Click on Kubernetes and check the Enable Kubernetes checkbox:
Enabling Kubernetes in Docker Desktop
That’s it! Docker Desktop will download all the Kubernetes images in the background and get everything started up. When it’s ready you’ll see two green lights at the bottom of the Settings screen: Docker running and Kubernetes running.
The star in the screenshot shows the Reset Kubernetes Cluster button, which is one of the reasons why Docker Desktop is the best of the local Kubernetes options. Click that and it will reset your cluster back to a fresh install of Kubernetes.
3. Verify your Kubernetes cluster
If you’ve worked with Docker before, you’re used to managing containers with the docker and docker-compose command lines. Kubernetes uses a different tool called kubectl to manage apps – Docker Desktop installs kubectl for you too.
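A quick way to verify the cluster is to ask kubectl for the node list. On Docker Desktop you should see a single node – it is usually named docker-desktop, though the role, age, and version on your machine will differ from the illustrative output below:

kubectl get nodes

NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   2m    v1.27.x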
4. Test driving Kubeview
Kubeview is a Kubernetes cluster visualization tool that offers a graphical representation of your cluster’s resources, including pods, deployments, services, and more. It provides a visual interface to explore and navigate through the various components of your cluster, helping you gain insights into resource allocation, dependencies, and performance.
i. Using Helm
Assuming you have already installed Git and Helm on your laptop, follow the steps below:
git clone https://github.com/benc-uk/kubeview
cd kubeview/charts/
helm install kubeview kubeview
Testing it locally
kubectl port-forward svc/kubeview -n default 80:80
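Binding local port 80 can require elevated privileges, and the port may already be in use. A simple alternative – the high local port here is arbitrary, just a suggestion – is to forward to an unprivileged port and browse to that instead:

kubectl port-forward svc/kubeview -n default 8000:80

Then open http://localhost:8000 in your browser.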
ii. Using Kubectl
To start using Kubeview, you need to install it in your Kubernetes cluster. Follow these steps:
Step 1: Deploy the Kubeview application
Kubeview can be deployed as a Kubernetes application using a YAML manifest file. Simply apply the manifest to your cluster using the following command:
kubectl apply -f https://raw.githubusercontent.com/benc-uk/kubeview/main/deploy/kubeview.yaml
Step 2: Verify the installation
After applying the manifest, check that Kubeview is up and running by executing the following command:
kubectl get pods -n kubeview
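If the deployment succeeded you should see the Kubeview pod in a Running state. The pod name suffix below is illustrative – yours will differ:

NAME                        READY   STATUS    RESTARTS   AGE
kubeview-7d98f6c9d4-abcde   1/1     Running   0          45s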
Exploring Your Cluster with Kubeview
Once Kubeview is installed, you can access its web-based interface to explore your Kubernetes cluster visually. Here’s how to get started:
Step 1: Access the Kubeview dashboard
Forward a local port to the Kubeview service by executing the following command:
kubectl --namespace kubeview port-forward service/kubeview 8080:80
This command sets up a port forward to access Kubeview on your local machine.
Step 2: Open the Kubeview dashboard
Open a web browser and visit http://localhost:8080 to access the Kubeview dashboard. You will be greeted with an intuitive interface that displays an overview of your cluster’s resources.
Step 3: Navigate through your cluster
Explore your cluster by clicking on different resources in the Kubeview interface. You can drill down into pods, deployments, services, and other components to visualize their relationships and connections. Kubeview provides clear visual representations, such as node graphs and dependency diagrams, to help you understand the structure of your cluster.
Step 4: Analyze resource details
Clicking on individual resources reveals detailed information about them, including resource usage, health status, and labels. You can also view logs and events associated with each resource, allowing for efficient troubleshooting and monitoring.
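If you want to cross-check what Kubeview shows against the command line, the same details are available from kubectl. The resource and namespace names below are placeholders:

kubectl describe pod <pod-name> -n <namespace>   # detailed status, labels and recent events
kubectl logs <pod-name> -n <namespace>           # container logs
kubectl get events -n <namespace>                # events for the whole namespace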
Leveraging Kubeview’s Advanced Features
Kubeview offers additional advanced features that enhance your cluster exploration and monitoring experience. These include:
- Search and filter capabilities: Use Kubeview’s search and filter functionalities to quickly locate specific resources or filter them based on labels, namespaces, or other criteria.
- Real-time updates: Kubeview automatically updates the visual representation of your cluster as changes occur, providing you with near real-time insights into the state of your resources.
- Support for multiple clusters: If you manage multiple Kubernetes clusters, Kubeview can handle them all. You can easily switch between clusters within the Kubeview interface.
Visualizing a K8s cluster using kubectl-ai
Using Kubernetes via AI
Getting Started
- Docker Desktop
- Install kubectl-ai
brew tap sozercan/kubectl-ai https://github.com/sozercan/kubectl-ai
brew install kubectl-ai
- Get an OpenAI API key via https://platform.openai.com/account/api-keys
kubectl-ai requires an OpenAI API key or an Azure OpenAI Service API key and endpoint, and a valid Kubernetes configuration.
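kubectl-ai picks the key up from the environment. For the OpenAI API this is typically the OPENAI_API_KEY variable (check the project README for the current variable names and for the Azure OpenAI Service equivalents):

export OPENAI_API_KEY=<your-api-key>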
Creating your first Nginx Pod
kubectl ai "create an nginx pod"
✨ Attempting to apply the following manifest:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
+ Reprompt
▸ Apply
Don't Apply
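If you choose Apply, the manifest is applied against your current kubectl context, so you can verify the result the usual way:

kubectl get pod nginx-pod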
Deployment
Select “Reprompt” and type “make this into deployment”
Reprompt: make this into deployment
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
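As before, Apply creates the resource in your current context; a quick check that the Deployment and its pod are up:

kubectl get deployment nginx-deployment
kubectl get pods -l app=nginx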
ReplicaSet
Reprompt: Scale to 3 replicas
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
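After applying the scaled manifest you should see three nginx pods (the name suffixes will differ on your cluster):

kubectl get pods -l app=nginx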
kubectl ai "Create Nginx Pod running on port 82 with 3 replicasets labeled web"
Services
kubectl ai "create a service for the nginx deployment with load balancer that uses nginx selector"
✨ Attempting to apply the following manifest:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
✔ Apply
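On Docker Desktop, LoadBalancer services are published on localhost, so – assuming nothing else is already listening on port 80 – you can check the service and hit nginx directly:

kubectl get service nginx-service
curl http://localhost:80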
Here’s the final view in Kubeview:
Conclusion
If you’re new to Kubernetes and looking for a hassle-free setup, Docker Desktop is your gateway to a single-node Kubernetes cluster. With Docker Desktop you can get started quickly and effortlessly install essential tools like Kubeview and kubectl-ai, further simplifying your Kubernetes experience.