
How to Fix “Pods stuck in Terminating status” Error


Kubernetes, a powerful container orchestration platform, has revolutionized the deployment and management of containerized applications. However, as with any technology, it comes with its own set of challenges. One common issue that Kubernetes administrators encounter is pods getting stuck in the “Terminating” status. In this blog post, we’ll explore the causes of this problem and provide step-by-step solutions to get your pods unstuck and your cluster back on track.


Understanding the Terminating Status

In Kubernetes, pods enter the “Terminating” status when they are being shut down, for example when a pod is deleted, a deployment is scaled down, or a pod’s configuration is updated and the pod is replaced. While this process is usually smooth and automatic, pods can sometimes get stuck in this state, holding up their replacements (most visibly with StatefulSets) and potentially causing disruptions to your applications.
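As a quick check, you can confirm how long Kubernetes will wait for a pod before force-killing it (the default grace period is 30 seconds) and watch its state while it shuts down; <pod-name> is a placeholder:

kubectl get pod <pod-name> -o jsonpath='{.spec.terminationGracePeriodSeconds}'
kubectl get pods --watch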

Common Causes of Stuck Pods

Unresponsive Applications

If an application within a pod does not respond to the termination signal, Kubernetes may be unable to gracefully shut down the pod.

Pod Dependencies

Pods with interdependencies may get stuck if one pod cannot finish shutting down until another service it depends on responds, preventing a clean termination.

Network or Storage Issues

If the pod has network or storage operations pending, it might not shut down properly.

Finalizers

Kubernetes uses finalizers to clean up resources before a pod is deleted. If a finalizer is stuck, it can block the pod’s termination.
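You can check whether a stuck pod still carries finalizers with a command along these lines; an empty result means no finalizers are blocking deletion:

kubectl get pod <pod-name> -o jsonpath='{.metadata.finalizers}'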

Step-by-Step Solutions

Solution 1: Graceful Termination of Applications

Identify the problematic pod using

kubectl get pods

Check the pod’s logs using

kubectl logs <pod-name>

to understand if the application is responding to termination signals.
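Besides the logs, the pod’s events often reveal why termination is hanging, for example a failing preStop hook or a volume that cannot detach:

kubectl describe pod <pod-name>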

If the application isn’t shutting down gracefully, update the application (or its pod spec, for example with a preStop hook) so that it handles termination signals properly. Alternatively, you can force-terminate the pod using

kubectl delete pod <pod-name> --force

Solution 2: Handling Dependencies

If your pods have interdependencies, consider updating your application to handle temporary unavailability of dependent services.

Use Kubernetes readiness and liveness probes to ensure that your application is ready to accept traffic and respond appropriately.
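A minimal sketch of such probes in a container spec is shown below; the /ready and /healthz endpoints, port 8080, and the timing values are placeholders for whatever your application actually exposes:

containers:
- name: my-app
  image: my-app:latest
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20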

Solution 3: Network and Storage Considerations

Check if the pod has any active network connections or pending storage operations. Address these issues to allow the pod to shut down gracefully.
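A couple of read-only commands can help you spot what is still pending; listing VolumeAttachment objects only applies if your cluster uses CSI volumes:

kubectl get pvc -n <namespace>
kubectl get volumeattachments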

Update your application’s code to handle network or storage-related issues, ensuring that resources are released upon termination.

Solution 4: Handling Finalizers

If the pod is stuck due to a finalizer, you can manually remove the finalizer using

kubectl edit pod <pod-name>

Locate the finalizers section and remove the problematic finalizer. Save and exit the editor.

The pod should now terminate normally.

In case the ‘--force’ option fails to work and you still see the pod, try the command below (replace <namespace> with the pod’s namespace):

kubectl -n <namespace> patch pod <pod-name> -p '{"metadata":{"finalizers":null}}'

Alternatively, delete the finalizers block from the resource’s YAML (pod, deployment, daemonset, etc.):

"finalizers": [
  "foregroundDeletion"
]
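The same patch shown above works for other resource kinds as well; for example, clearing finalizers on a deployment (substitute your own resource name and namespace):

kubectl -n <namespace> patch deployment <deployment-name> -p '{"metadata":{"finalizers":null}}'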

Solution 5: Forcibly Deleting Stuck Pods

If none of the above solutions work, and the stuck pods are causing disruptions, you can forcibly delete them using:

kubectl delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>

To clean up several stuck pods at once, you can loop over every pod in the Terminating state:

for p in $(kubectl get pods | grep Terminating | awk '{print $1}'); do kubectl delete pod $p --grace-period=0 --force; done

This deletes all pods in Terminating status in the default namespace.

This is not recommended unless absolutely necessary, as it can lead to data loss or corruption.

The following script does the same across all namespaces:

kubectl get pods --all-namespaces | grep Terminating | while read line; do
  pod_name=$(echo $line | awk '{print $2}')
  name_space=$(echo $line | awk '{print $1}')
  kubectl delete pods $pod_name -n $name_space --grace-period=0 --force
done

Preventive Measures

  • Graceful Shutdown Handling: Always ensure that your application code handles termination signals gracefully; a short pod-spec sketch follows this list. This will help prevent pods from getting stuck during the shutdown process.
  • Readiness and Liveness Probes: Implement readiness and liveness probes to ensure your application is responsive and ready to handle traffic. This will help avoid situations where pods become unresponsive during termination.
  • Dependency Management: If your application relies on other services, design it to handle temporary unavailability and reconnections.
  • Resource Management: Monitor your pods and their resource utilization. Address any network or storage-related issues to ensure smooth termination.
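As a minimal sketch of the first point above, a pod spec can give the application extra time to shut down and run a preStop hook before the termination signal is sent; the 60-second grace period and the sleep command are illustrative values, not recommendations:

spec:
  terminationGracePeriodSeconds: 60
  containers:
  - name: my-app
    image: my-app:latest
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]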

Conclusion

Stuck pods in the “Terminating” status can be a frustrating challenge to address, but with the right troubleshooting techniques, you can quickly diagnose and fix the issue. By understanding the common causes and following the step-by-step solutions provided in this guide, you’ll be able to keep your Kubernetes cluster healthy and ensure the seamless operation of your containerized applications. Remember that preventative measures, such as handling termination signals gracefully and implementing readiness/liveness probes, can significantly reduce the likelihood of encountering this issue in the first place. With these best practices in mind, you can navigate the complexities of Kubernetes with confidence.

Have Queries? Join https://launchpass.com/collabnix
