Tanvir Kour is a passionate technical blogger and open source enthusiast. She is a graduate in Computer Science and Engineering and has 4 years of experience providing IT solutions. She is well-versed in Linux, Docker, and cloud-native applications. You can connect with her on Twitter: https://x.com/tanvirkour

How to Run Load Tests in AWS EKS


Running load tests in AWS Elastic Kubernetes Service (EKS) is a powerful way to validate your application's performance and scalability. By leveraging tools like Locust for load generation and Kubernetes features such as the Horizontal Pod Autoscaler (HPA) and Cluster Autoscaler, you can test your application's ability to handle real-world traffic scenarios effectively.

This blog will guide you through setting up and running load tests on AWS EKS, from creating the infrastructure to monitoring the test results.

Why Load Testing in AWS EKS?

  • Scalability: EKS integrates seamlessly with AWS autoscaling capabilities, allowing dynamic resource adjustments.
  • Flexibility: Kubernetes allows you to run and manage load-testing tools like Locust in isolated namespaces.
  • Observability: AWS offers robust monitoring solutions, and tools like Prometheus and Grafana enhance visibility into system performance during load tests.

Prerequisites

Before starting, ensure the following:

  • AWS CLI: Installed and configured.
  • Terraform: Installed for infrastructure creation.
  • Basic knowledge of Kubernetes, including Deployments, HPA, and Cluster Autoscaler.
  • Familiarity with load-testing tools like Locust.

Step 1: Set Up AWS EKS

Create EKS Clusters

  1. Use Terraform to create two EKS clusters:
    • One for the application under test.
    • Another for the load-testing tools like Locust.

  2. Initialize and apply the Terraform configuration:

    terraform init
    terraform apply

  3. Attach IAM policies to the worker nodes to enable scaling:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:DescribeScalingActivities",
            "autoscaling:SetDesiredCapacity"
          ],
          "Resource": ["*"]
        }
      ]
    }
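The Terraform configuration itself is not shown above. As a rough sketch only (the module, region, cluster name, and node sizes below are illustrative assumptions, not the original setup), an EKS cluster built with the community terraform-aws-modules/eks module might look like this; the second cluster for the load generators can be declared the same way:

```terraform
provider "aws" {
  region = "us-east-1" # assumed region
}

# Cluster for the application under test (hypothetical names and sizes).
module "app_cluster" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "app-under-test"
  cluster_version = "1.29"

  vpc_id     = var.vpc_id
  subnet_ids = var.subnet_ids

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 2
      max_size       = 6 # headroom for the Cluster Autoscaler to scale into
      desired_size   = 2
    }
  }
}
```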
      

Step 2: Deploy the Application

Deploy a sample Node.js application with the following Kubernetes Deployment and Service manifest:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: node-app
      labels:
        app: node-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: node-app
      template:
        metadata:
          labels:
            app: node-app
        spec:
          containers:
          - name: node-app
            image: my-node-app:latest
            ports:
            - containerPort: 3000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: node-app
    spec:
      selector:
        app: node-app
      ports:
      - protocol: TCP
        port: 80
        targetPort: 3000
      type: LoadBalancer

Apply the manifest:

    kubectl apply -f node-app-deployment.yaml
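The introduction mentions the Horizontal Pod Autoscaler, which is what lets the application scale under load. A minimal HPA for this Deployment could look like the sketch below (the 50% CPU target and replica bounds are assumptions; it also requires metrics-server in the cluster, and the containers in the Deployment would need CPU resource requests for utilization to be computed):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: node-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: node-app
  minReplicas: 3
  maxReplicas: 10      # assumed upper bound for the test
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out above 50% average CPU
```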
            

Step 3: Deploy Locust for Load Testing

Configure Locust in a master-worker architecture, where the master coordinates the test and serves the web UI while workers generate the load. Here is an example Deployment for the Locust master:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: locust-master
      labels:
        app: locust
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: locust
          role: master
      template:
        metadata:
          labels:
            app: locust
            role: master
        spec:
          containers:
          - name: locust-master
            image: locustio/locust
            # --master runs this pod as the coordinator for Locust workers
            args: ["-f", "/mnt/locust/locustfile.py", "--master"]
            ports:
            - containerPort: 8089   # Locust web UI
            volumeMounts:
            - name: locust-scripts
              mountPath: /mnt/locust
          volumes:
          - name: locust-scripts
            configMap:
              name: locust-scripts   # ConfigMap holding locustfile.py
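The master Deployment expects a locustfile at /mnt/locust/locustfile.py. One way to provide it is to mount it from a ConfigMap, as in the sketch below (the ConfigMap name, user class, wait times, and the "/" endpoint are illustrative assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: locust-scripts   # hypothetical name, referenced by the master's volume
data:
  locustfile.py: |
    from locust import HttpUser, task, between

    class NodeAppUser(HttpUser):
        # each simulated user waits 1-3 seconds between requests
        wait_time = between(1, 3)

        @task
        def index(self):
            # assumes the node-app Service responds on /
            self.client.get("/")
```

Workers would run the same locustio/locust image with `--worker --master-host locust-master` and scale out simply by increasing their Deployment's replica count; the master's web UI is reachable on port 8089.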
            

Step 4: Monitor the Load Test

Install Prometheus and Grafana for observability:

    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm repo update
    kubectl create namespace monitoring
    helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring

Retrieve the Grafana admin password:

    kubectl get secret --namespace monitoring prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
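To reach the Grafana UI locally, you can port-forward to the Grafana service (the service name below follows the kube-prometheus-stack default for a release named "prometheus"; adjust it if your release name differs):

```shell
# Forward local port 3000 to the Grafana service in the monitoring namespace
kubectl port-forward --namespace monitoring svc/prometheus-grafana 3000:80
```

Then browse to http://localhost:3000 and log in as admin with the password retrieved above.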
            

Conclusion

Following this guide, you can set up and run load tests in AWS EKS, leveraging Kubernetes' scaling capabilities to handle real-world traffic effectively.

Pro Tip: Don't forget to clean up resources after testing:

    terraform destroy

Have Queries? Join https://launchpass.com/collabnix

