
How to Install a Kubernetes Cluster on CentOS 8

Kubernetes is an open-source platform that automates the deployment, scaling, and management of containerized applications. If you’re looking to harness its benefits on CentOS 8, follow this comprehensive guide to install a Kubernetes cluster step by step.

What is Kubernetes?

Kubernetes is an open-source platform designed to manage containerized workloads and services. It automates various aspects of deploying, scaling, and managing containerized applications across diverse environments, including physical, virtual, and cloud-based infrastructures.

Prerequisites

Before embarking on the installation process, ensure you have the following prerequisites in place:

  • One or more CentOS 8 servers, each with a minimum of 2 GB RAM and 2 CPUs.
  • Root (or sudo) access to each server.
  • A container runtime such as Docker or containerd installed on each server.

Step 1: Update the System

Keep your system up to date by running:

sudo yum update

This command ensures that all packages on your CentOS 8 servers are current. Run Steps 1 through 3 on every node, the master as well as the workers.
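kubeadm’s preflight checks also expect bridged traffic to pass through iptables and IP forwarding to be enabled. The steps below assume this is already the case; here is a minimal sketch of those kernel settings for a stock CentOS 8 install:

# Load the br_netfilter module now and on every boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
sudo modprobe br_netfilter

# Let iptables see bridged traffic and enable IP forwarding
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system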

Step 2: Install Kubernetes and its Dependencies

Install the necessary Kubernetes components using the following command:

sudo yum install -y kubelet kubeadm kubectl

This installs kubelet (the node agent), kubeadm (the cluster bootstrap tool), and kubectl (the command-line client), the essential components for a Kubernetes cluster.
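Note that these packages are not in the default CentOS 8 repositories, so the command above assumes a Kubernetes yum repository has already been configured and that the kubelet service is enabled afterward. A minimal sketch, assuming the community package repository at pkgs.k8s.io and a v1.30 package stream (check the Kubernetes documentation for the current repo definition):

# Add the Kubernetes package repository (URL and version are assumptions; adjust as needed)
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
EOF

# Make sure kubelet starts on boot
sudo systemctl enable --now kubelet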

Step 3: Disable SELinux and Swap

Disable SELinux and swap on your server with the following commands:

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

These commands put SELinux into permissive mode and turn swap off permanently. kubelet does not run with swap enabled by default, and SELinux in enforcing mode can interfere with Kubernetes components.
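On a default CentOS 8 install, firewalld may also block the ports the cluster components use, which this guide does not cover. A hedged sketch for the master node, assuming firewalld is running (worker nodes mainly need 10250/tcp and the Flannel VXLAN port 8472/udp):

sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
sudo firewall-cmd --permanent --add-port=8472/udp        # Flannel VXLAN
sudo firewall-cmd --reload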

Step 4: Initialize the Cluster

Initialize the Kubernetes cluster on the master node, passing a pod network CIDR that matches the Flannel add-on installed in Step 6:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

This command bootstraps the control plane and, at the end of its output, prints a kubeadm join command. Save it, as you will need it in Step 7.

Step 5: Set Up Your User

Set up your user to access the Kubernetes cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

These commands copy the cluster’s admin kubeconfig into your home directory and change its ownership so that kubectl works as your regular user.
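If you are working directly as root, an alternative from the standard kubeadm workflow is to point kubectl at the admin kubeconfig instead:

export KUBECONFIG=/etc/kubernetes/admin.conf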

Step 6: Install a Pod Network

Install a pod network for communication between pods:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

This command installs the Flannel pod network, which by default uses the 10.244.0.0/16 CIDR specified in Step 4.
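It can take a minute or two for the Flannel pods to start. Watching all namespaces avoids guessing which namespace your Flannel version uses:

kubectl get pods --all-namespaces -w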

Step 7: Join Nodes to the Cluster

On worker nodes, join them to the cluster by running the command generated during master node initialization:

sudo kubeadm join <master_node_ip>:<port> --token <token> --discovery-token-ca-cert-hash <hash>

Replace placeholders with your master node’s IP, port, token, and hash.
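If the join command printed by kubeadm init was lost, it can be regenerated at any time on the master node:

sudo kubeadm token create --print-join-command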

Step 8: Verify the Cluster

On the master node, verify that the Kubernetes cluster is running:

kubectl get nodes

This command displays a list of all nodes in the cluster.
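Nodes usually report NotReady until the pod network is fully up. Checking the system pods alongside the nodes helps confirm that the cluster has settled:

kubectl get nodes -o wide
kubectl get pods -n kube-system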

Additional Steps for Advanced Configuration

Enhance your Kubernetes cluster with advanced configurations:

  • Set Up Persistent Storage: Enable persistent storage for applications by configuring storage classes and persistent volume claims.
  • Configure Load Balancing: Improve performance and scalability by setting up load balancing for multiple worker nodes.
  • Enable RBAC: Enhance security by enabling Role-Based Access Control (RBAC) to control user access to the Kubernetes API; a minimal example follows this list.
  • Install Monitoring and Logging Tools: Monitor your cluster’s health and performance by installing tools like Prometheus and Grafana.
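As a starting point for the RBAC item above, here is a minimal, hypothetical example that grants a placeholder user read-only access to pods in the default namespace (the user name is an assumption; adapt it to your own setup):

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                      # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF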

Conclusion

Congratulations! You’ve successfully installed a Kubernetes cluster on CentOS 8. Kubernetes brings unparalleled efficiency to managing applications and infrastructure. Explore additional configurations and troubleshooting steps to fully harness the power of Kubernetes for your containerized workloads. Your cluster is now ready to deploy and scale containerized applications seamlessly.
