Karan Singh
Karan is a highly experienced DevOps Engineer with over 13 years in the IT industry. Throughout his career, he has developed a deep understanding of DevOps principles, including continuous integration and deployment, automated testing, and infrastructure as code.

Introducing Karpenter – An Open-Source High-Performance Kubernetes Cluster Autoscaler

10 min read

Kubernetes has become the de facto standard for managing containerized applications at scale. However, one common challenge is efficiently scaling the cluster to meet the demands of your workloads. This is where Karpenter comes in. Karpenter is a powerful tool that automates the provisioning of nodes in response to unschedulable pods. In this blog, we’ll explore how to set up and use Karpenter with Kubernetes, and we’ll provide code snippets to guide you through the process.

Table of Contents:

  1. What is Karpenter?
  2. Benefits of Using Karpenter
  3. Prerequisites
  4. Setting up a Kubernetes Cluster
    4.1 Installing Required Utilities
    4.2 Setting Environment Variables
    4.3 Creating a Kubernetes Cluster with eksctl
    4.4 Deploying Karpenter with Helm
  5. Using Karpenter
    5.1 Creating a Provisioner
    5.2 Scaling Pods and Node Provisioning
  6. Monitoring Karpenter with Grafana and Prometheus
  7. Cleaning Up
  8. Use-Case
  9. Conclusion

What is Karpenter?

Karpenter is an open-source project that extends Kubernetes by automatically provisioning nodes to accommodate unschedulable pods. It observes events within the cluster and interacts with the underlying cloud provider to provision new nodes when needed.

Benefits of Using Karpenter

  • Automatic node provisioning: Karpenter eliminates the need for manual scaling of your Kubernetes cluster by automatically provisioning nodes based on pod demands.
  • Efficient resource utilization: Karpenter optimizes resource allocation by dynamically scaling the cluster to match workload requirements, avoiding overprovisioning or underutilization.
  • Seamless integration: Karpenter integrates seamlessly with existing Kubernetes workflows and leverages standard Kubernetes APIs and controllers.
  • Support for spot instances: Karpenter supports spot instances on cloud providers like AWS, allowing you to take advantage of cost savings while maintaining high availability.
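For instance, spot capacity is requested declaratively through a Provisioner requirement. The fragment below is for illustration only; the full resource is shown in the Creating a Provisioner section.

```yaml
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]
```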

Prerequisites

Before getting started with Karpenter, ensure that you have the following prerequisites:

  • AWS CLI
  • kubectl
  • eksctl
  • Helm
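As a quick sanity check (a minimal sketch), you can confirm each required CLI is on your PATH before continuing:

```shell
# Report which of the required tools are installed
for tool in aws kubectl eksctl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```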

Setting up a Kubernetes Cluster

To use Karpenter, you need a Kubernetes cluster running on a supported cloud provider; currently, only EKS on AWS is supported. In this guide, we'll set up a cluster using AWS Elastic Kubernetes Service (EKS).

Installing Required Utilities

Install the necessary utilities, including AWS CLI, kubectl, eksctl, and Helm.

Setting Environment Variables

Set environment variables for the Karpenter version, cluster name, AWS region, AWS account ID, and a temporary file:
export KARPENTER_VERSION=v0.27.5
export CLUSTER_NAME="${USER}-karpenter-demo"
export AWS_DEFAULT_REGION="us-west-2"
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
export TEMPOUT=$(mktemp)

Note: If you open a new shell while following this procedure, you will need to set some or all of these environment variables again. Verify them with:

echo $KARPENTER_VERSION $CLUSTER_NAME $AWS_DEFAULT_REGION $AWS_ACCOUNT_ID $TEMPOUT
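The echo above prints the current values; a slightly stricter sketch is the helper below, which flags any required variable that is empty (a common situation after opening a new shell):

```shell
# check_env VAR...  -> prints "ok" if all are set, otherwise lists the unset ones
check_env() {
  missing=""
  for var in "$@"; do
    eval "val=\${$var:-}"
    [ -n "$val" ] || missing="$missing $var"
  done
  if [ -n "$missing" ]; then
    echo "Unset variables:$missing"
    return 1
  fi
  echo "ok"
}
check_env KARPENTER_VERSION CLUSTER_NAME AWS_DEFAULT_REGION AWS_ACCOUNT_ID TEMPOUT || true
```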

Creating a Kubernetes Cluster with eksctl

Use eksctl to create a Kubernetes cluster on AWS EKS. This step sets up the infrastructure, service accounts, and IAM roles required by Karpenter.

  • Use eksctl with CloudFormation:
    • eksctl simplifies the process of creating Kubernetes clusters on AWS EKS. By using eksctl with CloudFormation, you can easily set up the necessary infrastructure for your EKS cluster.
  • Create Kubernetes Service Account and IAM Role:
    • To enable Karpenter to launch instances, you need to create a Kubernetes service account and an AWS IAM Role. These two entities will be associated using IRSA (IAM Roles for Service Accounts).
  • Add Karpenter Node Role to aws-auth Configmap:
    • In order to allow nodes to connect to the cluster, you need to add the Karpenter node role to the aws-auth configmap. This step ensures that nodes can join and participate in the cluster.
  • Choose Node Group Configuration:
    • By default, the configuration uses AWS EKS managed node groups for the kube-system and karpenter namespaces. If you prefer to use Fargate for both namespaces instead, you can modify the configuration accordingly.
  • Set KARPENTER_IAM_ROLE_ARN:
    • To leverage spot instances with Karpenter, you need to set the KARPENTER_IAM_ROLE_ARN variable. This variable specifies the IAM role that will be used for launching spot instances.
# Deploying CloudFormation Stack for Karpenter
curl -fsSL "https://karpenter.sh/${KARPENTER_VERSION}/getting-started/getting-started-with-karpenter/cloudformation.yaml" > "$TEMPOUT"
aws cloudformation deploy \
  --stack-name "Karpenter-${CLUSTER_NAME}" \
  --template-file "${TEMPOUT}" \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=${CLUSTER_NAME}"
# Creating EKS Cluster with eksctl
cat <<EOF | eksctl create cluster -f -
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ${AWS_DEFAULT_REGION}
  version: "1.24"
  tags:
    karpenter.sh/discovery: ${CLUSTER_NAME}
iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: karpenter
      namespace: karpenter
    roleName: ${CLUSTER_NAME}-karpenter
    attachPolicyARNs:
    - arn:aws:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}
    roleOnly: true
iamIdentityMappings:
- arn: "arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}"
  username: system:node:{{EC2PrivateDNSName}}
  groups:
  - system:bootstrappers
  - system:nodes
managedNodeGroups:
- instanceType: m5.large
  amiFamily: AmazonLinux2
  name: ${CLUSTER_NAME}-ng
  desiredCapacity: 2
  minSize: 1
  maxSize: 10
## Optionally run on fargate
# fargateProfiles:
# - name: karpenter
#   selectors:
#   - namespace: karpenter
EOF
# Exporting Cluster Endpoint and Karpenter IAM Role ARN
export CLUSTER_ENDPOINT="$(aws eks describe-cluster --name ${CLUSTER_NAME} --query "cluster.endpoint" --output text)"
export KARPENTER_IAM_ROLE_ARN="arn:aws:iam::${AWS_ACCOUNT_ID}:role/${CLUSTER_NAME}-karpenter"
echo $CLUSTER_ENDPOINT $KARPENTER_IAM_ROLE_ARN
  • Create Role for Spot Instances:
    • Next, create an IAM role that grants the necessary permissions for Karpenter to utilize spot instances effectively. This role ensures smooth integration and utilization of spot instances within the cluster.
aws iam create-service-linked-role --aws-service-name spot.amazonaws.com || true
# If the role has already been successfully created, you will see:
# An error occurred (InvalidInput) when calling the CreateServiceLinkedRole operation: Service role name AWSServiceRoleForEC2Spot has been taken in this account, please try a different suffix.
  • Install Karpenter with Helm:
# Log out of Docker to perform an unauthenticated pull against the public ECR
docker logout public.ecr.aws
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version ${KARPENTER_VERSION} --namespace karpenter --create-namespace \
  --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"=${KARPENTER_IAM_ROLE_ARN} \
  --set settings.aws.clusterName=${CLUSTER_NAME} \
  --set settings.aws.defaultInstanceProfile=KarpenterNodeInstanceProfile-${CLUSTER_NAME} \
  --set settings.aws.interruptionQueueName=${CLUSTER_NAME} \
  --set controller.resources.requests.cpu=1 \
  --set controller.resources.requests.memory=1Gi \
  --set controller.resources.limits.cpu=1 \
  --set controller.resources.limits.memory=1Gi \
  --wait

Using Karpenter

Now that Karpenter is set up, let’s explore how to use it to automatically provision nodes and scale pods.

  • Creating a Provisioner:
    • Create a provisioner that defines the capacity and constraints for node provisioning. This step ensures that Karpenter provisions nodes based on the resource requirements of the pods.
cat <<EOF | kubectl apply -f -
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]
  limits:
    resources:
      cpu: 1000
  providerRef:
    name: default
  ttlSecondsAfterEmpty: 30
---
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  subnetSelector:
    karpenter.sh/discovery: ${CLUSTER_NAME}
  securityGroupSelector:
    karpenter.sh/discovery: ${CLUSTER_NAME}
EOF
  • Scaling Pods and Node Provisioning:
    • With the provisioner in place, you can now scale pods and let Karpenter automatically provision nodes as needed. You can specify resource requests and limits for pods, and Karpenter will ensure that the required nodes are provisioned to accommodate them.

Scale up deployment

This deployment uses the pause image and starts with zero replicas.

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      containers:
        - name: inflate
          image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
          resources:
            requests:
              cpu: 1
EOF
kubectl scale deployment inflate --replicas 5
kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter -c controller

Monitoring Karpenter with Grafana and Prometheus

To monitor the performance and resource utilization of Karpenter, you can set up Grafana and Prometheus. These tools provide insights into the cluster’s resource usage, node provisioning, and pod scheduling.

helm repo add grafana-charts https://grafana.github.io/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create namespace monitoring
curl -fsSL "https://karpenter.sh/${KARPENTER_VERSION}/getting-started/getting-started-with-karpenter/prometheus-values.yaml" | tee prometheus-values.yaml
helm install --namespace monitoring prometheus prometheus-community/prometheus --values prometheus-values.yaml
curl -fsSL "https://karpenter.sh/${KARPENTER_VERSION}/getting-started/getting-started-with-karpenter/grafana-values.yaml" | tee grafana-values.yaml
helm install --namespace monitoring grafana grafana-charts/grafana --values grafana-values.yaml

Cleaning Up

When you're done experimenting with Karpenter, clean up your resources to avoid unnecessary costs. This involves deleting the Kubernetes cluster, the Karpenter deployment, and any associated resources.

Delete Karpenter nodes manually

If you delete a node with kubectl, Karpenter gracefully cordons, drains, and shuts down the corresponding instance:

kubectl delete node $NODE_NAME
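Beyond removing individual nodes, a full teardown also uninstalls the Helm release, deletes the CloudFormation stack, and deletes the cluster itself. The sketch below assumes the stack and cluster names used earlier in this guide; the run helper is illustrative, echoing each command and tolerating failures so a partial cleanup doesn't abort:

```shell
# Full teardown of the resources created in this guide.
# run() prints each command before executing it and reports failures
# (missing CLI, already-deleted resource) instead of aborting; the
# 2>/dev/null keeps the sketch quiet -- drop it to see real error messages.
run() {
  echo "+ $*"
  "$@" 2>/dev/null || echo "  (skipped: command failed or resource already gone)"
}

run helm uninstall karpenter --namespace karpenter
run aws cloudformation delete-stack --stack-name "Karpenter-${CLUSTER_NAME}"
run eksctl delete cluster --name "${CLUSTER_NAME}"
```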

Use-Case

Here’s an example of a web application and database use-case with Karpenter on Kubernetes. In this scenario, we will deploy a simple web application using a microservice architecture, along with a backend database. Karpenter will be used to provision and scale the required pods and nodes in the Kubernetes cluster.

Define the Application Components

  1. Let’s start by defining the application components. We’ll have a frontend web service, a backend API service, and a database service.
# frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

# frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: myapp/frontend:latest
          ports:
            - containerPort: 80

# api-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

# api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: myapp/api:latest
          ports:
            - containerPort: 80

# database-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: database-service
spec:
  selector:
    app: database
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432

# database-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database-statefulset
spec:
  replicas: 1
  serviceName: database-service
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
        - name: database
          image: myapp/database:latest
          ports:
            - containerPort: 5432

Provision and Scale Pods and Nodes using Karpenter

  1. Now, let’s deploy Karpenter to provision and scale the pods and nodes dynamically based on the workload.
# Karpenter was installed earlier with the OCI Helm chart (see 4.4 Deploying Karpenter with Helm);
# if it is not installed yet, repeat that helm upgrade --install command first.
# Create a Provisioner for the frontend workload
kubectl apply -f - <<EOF
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: frontend-provisioner
spec:
  # Nodes created by this provisioner carry this label; the frontend
  # Deployment selects them with a matching nodeSelector.
  labels:
    workload: frontend
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]
  providerRef:
    name: default
  ttlSecondsAfterEmpty: 30
EOF
# Note: pod resource requests (e.g. cpu: 200m, memory: 256Mi) belong on the
# Deployment's pod template, not on the Provisioner.
# Create a Provisioner for the API workload
kubectl apply -f - <<EOF
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: api-provisioner
spec:
  labels:
    workload: api
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot"]
  providerRef:
    name: default
  ttlSecondsAfterEmpty: 30
EOF

Monitor Karpenter with Grafana and Prometheus

  1. To monitor the provisioning and scaling activities performed by Karpenter, we can use Grafana and Prometheus.
# Install Prometheus and Grafana using Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack
# Forward Grafana port to localhost
kubectl port-forward service/prometheus-grafana 3000:80
# Access Grafana dashboard at http://localhost:3000
# Configure Prometheus as a datasource in Grafana

Clean Up

  1. To clean up the resources created, delete the Kubernetes objects and uninstall Karpenter and the monitoring components:
# Delete the application components
kubectl delete -f frontend-service.yaml
kubectl delete -f frontend-deployment.yaml
kubectl delete -f api-service.yaml
kubectl delete -f api-deployment.yaml
kubectl delete -f database-service.yaml
kubectl delete -f database-statefulset.yaml
# Uninstall Karpenter using Helm
helm uninstall karpenter
# Uninstall Prometheus and Grafana using Helm
helm uninstall prometheus

Conclusion

Karpenter is a powerful tool that simplifies the scaling of Kubernetes clusters by automating node provisioning. It optimizes resource utilization and seamlessly integrates with existing Kubernetes workflows. By following the steps outlined in this blog, you can harness the power of Karpenter to efficiently manage your workloads and ensure high availability in your Kubernetes environment.

Have Queries? Join https://launchpass.com/collabnix
