I have observed a scenario where a Kubernetes cluster worker node was not accepting any deployments. My cluster was a three-node cluster that had been working fine for a long time. After a reboot, one of the worker nodes stopped accepting deployments. On investigation, I found that the node status was showing “Ready,SchedulingDisabled”.
The explanation for this behavior is as follows. Kubernetes provides a feature to cordon or drain a node in the cluster, typically to decommission it or to perform maintenance on it. Cordoning marks the node as unschedulable, so no new pods are scheduled to it; draining additionally evicts the pods already running on it. More details can be found in the official documentation.
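To illustrate how a node typically ends up in this state, here is a minimal sketch of the cordon/drain commands. The node name node-1 is hypothetical, and the sample output is illustrative; your node names, ages, and versions will differ.

```shell
# Mark the node unschedulable; new pods will not be scheduled to it.
kubectl cordon node-1

# Or drain it: cordon the node AND evict its running pods.
# --ignore-daemonsets is usually required because DaemonSet pods
# cannot be evicted; --delete-emptydir-data permits evicting pods
# that use emptyDir volumes (their local data is lost).
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# The node now reports the status described above (illustrative output):
kubectl get nodes
# NAME     STATUS                     ROLES    AGE   VERSION
# node-1   Ready,SchedulingDisabled   <none>   90d   v1.28.2
```

A reboot performed as part of a maintenance procedure often starts with exactly such a drain, which would explain finding the node in this state afterwards.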
To resume pod scheduling on the node, we just need to issue the following command.
kubectl uncordon [node-name]
Here, node-name is the hostname of the node that is in the disabled state. After issuing the command, verify that every node in the cluster reports a Ready status using the command below.
kubectl get nodes
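Putting the two steps together, the recovery looks like this. Again, node-1 is a hypothetical node name and the output shown is only illustrative.

```shell
# Re-enable scheduling on the cordoned/drained node.
kubectl uncordon node-1

# Verify: the SchedulingDisabled condition should be gone.
kubectl get nodes
# NAME     STATUS   ROLES    AGE   VERSION
# node-1   Ready    <none>   90d   v1.28.2
```

Note that uncordon only re-enables scheduling; pods evicted by an earlier drain are not moved back automatically, but new workloads (and replacement pods created by Deployments or ReplicaSets) can land on the node again.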
I hope this tip helps you. If you have any queries, feel free to ask your question in the comments section.
It can also result from the worker node running out of memory.
In that case it can’t accept any deployments until you increase the memory and run the command “kubectl uncordon node-name”.