kubectl taint
Update the taints on one or more nodes. Taints work with tolerations to control which pods can be scheduled on specific nodes.
kubectl taint nodes NODE_NAME KEY=VALUE:EFFECT [flags]
Common Flags
| Flag | Short | Description |
|---|---|---|
| --all | — | Apply taint to all nodes |
| --overwrite | — | Overwrite a taint with the same key and effect if it already exists |
| --selector | -l | Selector (label query) to filter nodes |
| --dry-run | — | Preview the change without applying it; must be 'none', 'server', or 'client' |
Examples
Add a NoSchedule taint to a node
kubectl taint nodes worker-1 dedicated=gpu:NoSchedule
Remove a taint from a node (note the trailing minus)
kubectl taint nodes worker-1 dedicated=gpu:NoSchedule-
Add a NoExecute taint (evicts non-tolerating pods)
kubectl taint nodes worker-1 maintenance=true:NoExecute
Taint all nodes
kubectl taint nodes --all env=production:NoSchedule
Remove a taint by key from a node
kubectl taint nodes worker-1 dedicated-
When to Use kubectl taint
kubectl taint adds or removes taints on nodes. Taints repel pods — only pods with a matching toleration can schedule on a tainted node. This mechanism is used for dedicated hardware, node isolation, and controlled maintenance.
Taint Effects
Kubernetes supports three taint effects:
| Effect | Behavior |
|--------|----------|
| NoSchedule | New pods without toleration cannot be scheduled; existing pods are unaffected |
| PreferNoSchedule | Scheduler tries to avoid the node but will use it if necessary |
| NoExecute | Existing non-tolerating pods are evicted; new ones cannot schedule |
Adding Taints
# Dedicate nodes for GPU workloads
kubectl taint nodes gpu-node-1 dedicated=gpu:NoSchedule
# Mark nodes for a specific team
kubectl taint nodes team-a-node team=backend:NoSchedule
# Soft preference — avoid but don't prevent
kubectl taint nodes spot-node-1 spot=true:PreferNoSchedule
# Maintenance — evict pods immediately
kubectl taint nodes worker-1 maintenance=true:NoExecute
Removing Taints
Append a minus sign (-) to remove a taint:
# Remove a specific taint
kubectl taint nodes gpu-node-1 dedicated=gpu:NoSchedule-
# Remove all taints with a key (any effect)
kubectl taint nodes worker-1 maintenance-
Pod Tolerations
For a pod to run on a tainted node, it needs a matching toleration in its spec:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: cuda
    image: nvidia/cuda:12.0-base
Toleration operators:
- Equal: Key, value, and effect must all match.
- Exists: Only the key (and optionally effect) must match; value is ignored.
# Tolerate any taint with key "dedicated"
tolerations:
- key: "dedicated"
  operator: "Exists"
# Tolerate everything (the "master toleration")
tolerations:
- operator: "Exists"
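The matching rule behind these operators can be sketched as a small shell function (a simplified illustration of the scheduler's logic, not kubectl code — it ignores corner cases like the built-in NoExecute tolerations added by the admission controller):

```shell
# tolerates TAINT_KEY TAINT_VALUE TAINT_EFFECT TOL_KEY TOL_OP TOL_VALUE TOL_EFFECT
# Prints "yes" if the toleration matches the taint, "no" otherwise.
tolerates() {
  t_key=$1; t_val=$2; t_eff=$3; p_key=$4; p_op=$5; p_val=$6; p_eff=$7
  # An empty toleration effect matches any taint effect; otherwise they must agree.
  [ -z "$p_eff" ] || [ "$p_eff" = "$t_eff" ] || { echo no; return; }
  case "$p_op" in
    Exists)
      # Exists ignores the value; an empty key tolerates every taint.
      [ -z "$p_key" ] || [ "$p_key" = "$t_key" ] && echo yes || echo no ;;
    Equal|"")
      # Equal (the default when operator is omitted) requires key and value to match.
      [ "$p_key" = "$t_key" ] && [ "$p_val" = "$t_val" ] && echo yes || echo no ;;
    *) echo no ;;
  esac
}

tolerates dedicated gpu NoSchedule dedicated Equal gpu NoSchedule   # yes
tolerates dedicated gpu NoSchedule dedicated Exists "" ""           # yes
tolerates dedicated gpu NoSchedule dedicated Equal ssd NoSchedule   # no
```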
Common Taint Patterns
Dedicated Nodes
Reserve nodes for specific workloads:
# Taint GPU nodes
kubectl taint nodes -l node-type=gpu dedicated=gpu:NoSchedule
# Label them too (for nodeSelector/affinity)
kubectl label nodes -l node-type=gpu gpu=true
Pods must have both the toleration (to bypass the taint) and a nodeSelector (to target the node):
spec:
  nodeSelector:
    gpu: "true"
  tolerations:
  - key: "dedicated"
    value: "gpu"
    effect: "NoSchedule"
Control-Plane Taints
Master/control-plane nodes have built-in taints:
# View control-plane taints
kubectl describe node control-plane | grep Taints
# Taints: node-role.kubernetes.io/control-plane:NoSchedule
These prevent workloads from running on control-plane nodes. To schedule on them (e.g., in a single-node cluster):
# Remove control-plane taint
kubectl taint nodes control-plane node-role.kubernetes.io/control-plane:NoSchedule-
NoExecute with Toleration Seconds
The NoExecute effect can evict existing pods. Tolerations can specify how long to endure:
tolerations:
- key: "maintenance"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"
  tolerationSeconds: 3600  # Stay for 1 hour, then leave
This allows for graceful migration — pods have time to finish current work before being evicted.
Verifying Taints
# View taints on a specific node
kubectl describe node worker-1 | grep -A 5 Taints
# List taints per node using custom columns
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
# Check for specific taints across all nodes
kubectl get nodes -o json | jq '.items[] | {name: .metadata.name, taints: .spec.taints}'
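The jq query above can be extended to answer a common question: which nodes carry a given taint key? The sketch below runs against a hypothetical sample of `kubectl get nodes -o json` output (node names and taints are made up for illustration); in practice, pipe the live kubectl output into the same jq filter:

```shell
# Hypothetical sample of `kubectl get nodes -o json` output.
sample='{"items":[
  {"metadata":{"name":"worker-1"},"spec":{"taints":[{"key":"dedicated","value":"gpu","effect":"NoSchedule"}]}},
  {"metadata":{"name":"worker-2"},"spec":{}}
]}'
# Print the names of nodes whose taint list contains the key "dedicated".
# `.spec.taints // []` handles untainted nodes, where .spec.taints is absent.
echo "$sample" | jq -r '.items[]
  | select(.spec.taints // [] | any(.key == "dedicated"))
  | .metadata.name'
# → worker-1
```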
Taints vs. Cordon
| Aspect | Cordon | Taint |
|--------|--------|-------|
| Scope | All pods | Selective (toleration-based) |
| Existing pods | Unaffected | NoExecute evicts them |
| Flexibility | Binary (on/off) | Key/value with three effects |
| Use case | Temporary maintenance | Permanent node dedication |
Use cordon for quick maintenance windows. Use taints for long-term node specialization or isolation policies.
Common Mistakes
- Applying NoExecute without realizing it immediately evicts existing pods that lack matching tolerations.
- Forgetting the trailing minus (-) when removing a taint, which causes kubectl to try adding a new taint instead.
- Not adding tolerations to DaemonSet pods, causing them to be evicted or unschedulable on tainted nodes.
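To avoid the DaemonSet pitfall above, give the pod template a broad toleration. A minimal sketch (the name, labels, and image are illustrative, not from any real manifest):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent        # hypothetical agent name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      tolerations:
      - operator: "Exists"   # tolerate every taint, as many node agents do
      containers:
      - name: agent
        image: busybox:1.36
        command: ["sleep", "infinity"]
```

A blanket `operator: "Exists"` toleration keeps the agent running even through NoExecute maintenance taints; narrow it to specific keys if the agent should respect some taints.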