kubectl uncordon
Mark a node as schedulable again, allowing the scheduler to place new pods on it. Used after maintenance or troubleshooting.
kubectl uncordon NODE_NAME [flags]
Common Flags
| Flag | Short | Description |
|---|---|---|
| --dry-run | — | Preview the action without executing it; must be 'none', 'server', or 'client' |
| --selector | -l | Selector (label query) to filter nodes |
Examples
Uncordon a specific node
kubectl uncordon worker-node-1
Uncordon all previously cordoned nodes
kubectl get nodes -o name | xargs -I{} kubectl uncordon {}
Uncordon nodes by label
kubectl uncordon -l node-pool=web
Verify a node is schedulable after uncordoning
kubectl uncordon worker-node-1 && kubectl get node worker-node-1
When to Use kubectl uncordon
kubectl uncordon reverses a cordon operation, marking a node as schedulable again. It is the final step in the node maintenance workflow: cordon, drain, maintain, uncordon.
Basic Usage
kubectl uncordon worker-node-1
# node/worker-node-1 uncordoned
kubectl get node worker-node-1
# NAME STATUS ROLES AGE VERSION
# worker-node-1 Ready <none> 45d v1.28.5
The SchedulingDisabled status is removed, and the node is ready to accept new pods.
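A quick way to confirm the flip is the STATUS column of `kubectl get node`: `SchedulingDisabled` disappears after a successful uncordon. The check can be sketched offline against illustrative output lines (the node name and column values here are hypothetical, not captured from a real cluster):

```shell
# Illustrative `kubectl get node` lines before and after uncordoning
before='worker-node-1   Ready,SchedulingDisabled   <none>   45d   v1.28.5'
after='worker-node-1   Ready                      <none>   45d   v1.28.5'

is_schedulable() {
  # A node is schedulable when its STATUS column lacks SchedulingDisabled
  case "$1" in
    *SchedulingDisabled*) return 1 ;;
    *) return 0 ;;
  esac
}

is_schedulable "$before" || echo "before: unschedulable"
is_schedulable "$after" && echo "after: schedulable"
```

The same pattern works in a watch loop or a CI gate, since it only inspects text and exits nonzero while the node is still cordoned.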
Complete Maintenance Workflow
Here is the full lifecycle of a node maintenance operation:
# 1. Cordon: stop new pods
kubectl cordon worker-node-1
# 2. Drain: evict existing pods
kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data
# 3. Perform maintenance
ssh worker-node-1 "sudo apt-get update && sudo apt-get upgrade -y && sudo reboot"
# 4. Wait for node to come back
kubectl wait --for=condition=Ready node/worker-node-1 --timeout=300s
# 5. Uncordon: allow scheduling again
kubectl uncordon worker-node-1
Pods Do Not Automatically Rebalance
A critical point for interviews: uncordoning a node does not trigger pod migrations. If you drained 50 pods off the node, those pods are now running on other nodes. After uncordoning:
- New pod creations (scale-ups, new deployments) may land on this node.
- Existing pods remain where the scheduler originally placed them.
To force rebalancing, you can use the descheduler project or manually trigger redeployments:
# Force a rollout restart to redistribute pods
kubectl rollout restart deployment/my-app -n production
Uncordoning Multiple Nodes
After bulk maintenance:
# Uncordon all nodes that are currently SchedulingDisabled
kubectl get nodes --no-headers | grep SchedulingDisabled | awk '{print $1}' | xargs kubectl uncordon
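The filtering stages of that pipeline (grep for the status, awk for the name column) can be verified without a cluster by feeding them sample `kubectl get nodes --no-headers` output; the node names below are made up for illustration:

```shell
# Sample `kubectl get nodes --no-headers` output (hypothetical node names)
sample='worker-node-1   Ready,SchedulingDisabled   <none>   45d   v1.28.5
worker-node-2   Ready                      <none>   45d   v1.28.5
worker-node-3   Ready,SchedulingDisabled   <none>   45d   v1.28.5'

# Same filter as the pipeline above: keep cordoned nodes, print column 1
cordoned=$(printf '%s\n' "$sample" | grep SchedulingDisabled | awk '{print $1}')
echo "$cordoned"
```

Only worker-node-1 and worker-node-3 survive the filter, which is exactly the set that `xargs kubectl uncordon` would then receive.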
# Uncordon by label
kubectl uncordon -l maintenance-window=batch-1
# Verify all nodes are schedulable
kubectl get nodes
Pre-Uncordon Checks
Before uncordoning, verify the node is healthy:
# Check node readiness
kubectl get node worker-node-1 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
# Should output: True
# Check node resource availability
kubectl describe node worker-node-1 | grep -A 5 "Conditions"
# Inspect the kubelet's reported status message for the Ready condition
kubectl get node worker-node-1 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
Uncordoning an unhealthy node can lead to pods being scheduled there and immediately failing.
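The Ready check can also be pulled out of `kubectl describe node` output with awk; here is a sketch against a sample Conditions block (the values shown are illustrative, not from a live node):

```shell
# Illustrative Conditions block from `kubectl describe node`
conditions='Conditions:
  Type             Status
  MemoryPressure   False
  DiskPressure     False
  PIDPressure      False
  Ready            True'

# Extract the Ready row's Status column as a pre-uncordon gate
ready=$(printf '%s\n' "$conditions" | awk '$1 == "Ready" {print $2}')
echo "$ready"
```

Gating on this value (uncordon only when it prints True) prevents returning an unhealthy node to the scheduling pool.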
Automation and Monitoring
In production environments, automate the uncordon step with health checks:
#!/bin/bash
NODE=$1
# Wait for node readiness
until kubectl get node "$NODE" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True; do
  echo "Waiting for $NODE to be Ready..."
  sleep 10
done
# Uncordon
kubectl uncordon "$NODE"
echo "$NODE is now schedulable."
# Verify the node is no longer marked unschedulable
STATUS=$(kubectl get node "$NODE" -o jsonpath='{.spec.unschedulable}')
if [ "$STATUS" != "true" ]; then
  echo "Successfully uncordoned $NODE"
else
  echo "ERROR: $NODE is still unschedulable"
  exit 1
fi
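The script's final gate keys off `.spec.unschedulable`, which the API server clears (the jsonpath prints nothing) after a successful uncordon. That branch logic can be exercised without a cluster by passing the two possible values directly (the helper name is hypothetical):

```shell
check_uncordoned() {
  # Mirrors the script's test: anything other than "true" counts as schedulable
  STATUS=$1
  if [ "$STATUS" != "true" ]; then
    echo "uncordoned"
  else
    echo "still unschedulable"
  fi
}

check_uncordoned ""      # jsonpath prints nothing once the field is cleared
check_uncordoned "true"  # node still cordoned
```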
Common Interview Scenario
A frequent interview question combines cordon, drain, and uncordon:
"How do you perform a kernel upgrade on a worker node without downtime?"
The answer involves:
kubectl cordon <node>to stop new pods.kubectl drain <node> --ignore-daemonsetsto evict pods gracefully (PodDisruptionBudgets are respected).- Perform the upgrade and reboot.
- Wait for the node to rejoin the cluster.
kubectl uncordon <node>to restore scheduling.
If the application has multiple replicas spread across nodes and proper PodDisruptionBudgets, this process is zero-downtime.
Relationship to Cluster Autoscaler
Cluster autoscalers interact with cordon/uncordon behavior. When a node is cordoned, the autoscaler may:
- Scale up additional nodes to compensate for reduced capacity.
- Not scale down the cordoned node (it still has running DaemonSet pods).
After uncordoning, the autoscaler may scale down other nodes if there is now excess capacity, gradually moving workloads back to the returned node.
Common Mistakes
- Expecting pods to automatically move back to the uncordoned node — they stay on their current nodes unless evicted or rescheduled.
- Forgetting to uncordon after maintenance, silently reducing cluster capacity.
- Not checking that the node is Ready before uncordoning — an unhealthy node should not accept workloads.