Have you ever encountered a situation where your Service’s external IP address stubbornly stays in “pending” mode? We’ve all been there. In this post, we’ll delve into this common Minikube issue and explore two effective solutions to get that external IP up and running.
Understanding the LoadBalancer Service
In Kubernetes, the LoadBalancer service type shines when you need to expose network applications to the external world. It’s ideal for scenarios where you want a dedicated IP address assigned to each service.
On public cloud platforms like AWS or Azure, the LoadBalancer service seamlessly provisions a network load balancer in the cloud. Minikube, however, has no cloud provider behind it; it can only simulate a load balancer (for example, with the minikube tunnel command), and by default the service’s external IP gets stuck on “pending.”
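For reference, here is roughly the Service manifest that the kubectl expose command used later in this post generates (a sketch; the name, namespace, selector, and port match the Redis example we build below):

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: lbservice
spec:
  type: LoadBalancer
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379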
Setting Up the Example
Before diving into solutions, let’s set up a simple example using namespaces and deployments:
Create a Namespace:
kubectl create ns lbservice
Deploy a Redis Pod:
kubectl create deploy redis --image=redis:alpine -n lbservice
Verify the Pod:
kubectl get pods -n lbservice
This creates a namespace (lbservice) and deploys a Redis pod within it. We’ll use this example to showcase exposing the Redis server externally.
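If you prefer declarative manifests over imperative commands, a roughly equivalent setup looks like this (a sketch; the app: redis label is what kubectl create deploy generates by default):

apiVersion: v1
kind: Namespace
metadata:
  name: lbservice
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: lbservice
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:alpine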
Fixing the Pending External IP
Now, let’s tackle that pesky “pending” status! Here are two methods to assign an external IP address to the LoadBalancer service in Minikube:
Method 1: Using Minikube Tunnel
A LoadBalancer service only receives an external IP when the cluster can provision an external load balancer. Minikube doesn’t ship a built-in implementation, but it can simulate one by creating network routes on your host. Here’s how:
Create a LoadBalancer Service:
kubectl expose deploy redis --port 6379 --type LoadBalancer -n lbservice
Check the External IP:
kubectl get service -n lbservice
You’ll see the “EXTERNAL-IP” column stuck at “<pending>”, as in the sample output below. Don’t worry, we’ll fix that!
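Illustrative output (your cluster IP, node port, and age will differ):

NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
redis   LoadBalancer   10.98.123.45   <pending>     6379:31234/TCP   30s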
Establish the Minikube Tunnel:
minikube tunnel
Run this command in a separate terminal and leave it running. It creates network routes from your host to the cluster and acts as a load balancer emulator, allocating an external IP to LoadBalancer services. It may prompt for your password, since creating routes requires elevated privileges.
Verify the IP Again:
kubectl get service -n lbservice
Now you should see a newly allocated external IP address. That’s your gateway to the Redis server!
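With the tunnel running, the output should look roughly like this (illustrative values; with minikube tunnel the external IP is typically the same as the service’s cluster IP):

NAME    TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)          AGE
redis   LoadBalancer   10.98.123.45   10.98.123.45   6379:31234/TCP   2m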
Access the Redis Server (Optional):
redis-cli -h <EXTERNAL_IP> PING
Replace <EXTERNAL_IP> with the actual IP you obtained. If everything’s configured correctly, you’ll receive a “PONG” response, indicating successful communication.
Method 2: Using MetalLB Addon
MetalLB is a load balancer implementation for bare-metal Kubernetes clusters, and it offers another way to assign external IPs. Minikube ships an addon specifically for MetalLB, allowing easy configuration. Here’s how to use it:
Enable the MetalLB Addon:
minikube addons enable metallb
Verify Addon Status:
minikube addons list
This ensures the addon is up and running.
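Since the full addon list is fairly long, you can also filter it down to just this entry:

minikube addons list | grep metallb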
Configure MetalLB (Optional):
MetalLB assigns external IPs from a pool of addresses that you configure. To set or customize this range, find the Minikube node’s IP address using minikube ip (and pick a range on the same subnet), then run:
minikube addons configure metallb
Specify the desired IP address range during the configuration process.
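For example, if minikube ip returns 192.168.49.2 (an illustrative value), you might enter a start IP of 192.168.49.100 and an end IP of 192.168.49.110. Under the hood, the addon stores this range in a MetalLB ConfigMap that looks roughly like this (a sketch of the addon’s layer-2 configuration, using the same illustrative addresses):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config
  namespace: metallb-system
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.49.100-192.168.49.110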
Create a LoadBalancer Service:
kubectl expose deploy redis --port 6379 --type LoadBalancer -n lbservice
Check the External IP:
kubectl get service -n lbservice
The “EXTERNAL-IP” column should now display an IP address within the configured range.
Access the Redis Server (Optional):
Use the service’s external IP address obtained in the previous step to connect to the Redis server using redis-cli, as shown below.
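The command mirrors the one from Method 1 (replace <EXTERNAL_IP> with the MetalLB-assigned address; a “PONG” reply means the connection works):

redis-cli -h <EXTERNAL_IP> PING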
Cleaning Up
Method 1 Cleanup (Tunnel):
In the terminal window where minikube tunnel is running, press Ctrl + C to terminate the tunnel process and remove the network routes.
Delete the service:
kubectl delete svc redis -n lbservice
Method 2 Cleanup (MetalLB):
Important Note: The provided cleanup steps involve deleting the entire namespace, which is suitable for testing environments only. In production, avoid bulk deletions and target specific resources.
kubectl delete ns lbservice
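If you no longer need MetalLB, you can also disable the addon:

minikube addons disable metallb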
Conclusion
This blog post explored two effective solutions to the “pending” external IP issue with Minikube’s LoadBalancer services: the minikube tunnel command and the MetalLB addon. Remember, the best approach depends on your specific needs and environment.
Feel free to experiment and choose the method that best suits your Kubernetes journey! And as always, if you have any questions or comments, don’t hesitate to reach out to us. Happy Deployments!