kubectl rollout undo
Roll back to a previous revision of a deployment, DaemonSet, or StatefulSet.
kubectl rollout undo [TYPE] [NAME] [flags]
Common Flags
| Flag | Short | Description |
|---|---|---|
| --to-revision | — | The revision to roll back to. Default is 0 (previous revision) |
| --namespace | -n | Namespace of the resource |
| --dry-run | — | One of none, server, or client. Use server or client to preview the rollback without executing it |
Examples
Roll back to the previous revision
kubectl rollout undo deployment/my-app
Roll back to a specific revision
kubectl rollout undo deployment/my-app --to-revision=3
Preview the rollback without executing
kubectl rollout undo deployment/my-app --dry-run=server
Roll back a DaemonSet
kubectl rollout undo daemonset/fluentd -n kube-system
Roll back and watch the result
kubectl rollout undo deployment/my-app && kubectl rollout status deployment/my-app
When to Use kubectl rollout undo
kubectl rollout undo is your emergency brake for failed deployments. When a new version causes errors, performance degradation, or outages, rollback restores the previous working configuration. It is one of the most critical commands to know for production operations.
Quick Rollback
Roll back to the immediately previous revision:
# Roll back to the previous version
kubectl rollout undo deployment/my-app
# Verify the rollback completed
kubectl rollout status deployment/my-app
This is the fastest way to recover from a bad deployment. The previous ReplicaSet is scaled up while the current one is scaled down.
Targeted Rollback
When you need to go further back than the previous revision, specify the target:
# View available revisions
kubectl rollout history deployment/my-app
# Check what revision 3 contains
kubectl rollout history deployment/my-app --revision=3
# Roll back to revision 3
kubectl rollout undo deployment/my-app --to-revision=3
# Verify
kubectl rollout status deployment/my-app
Always inspect the target revision before rolling back to confirm it contains the expected configuration.
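The inspection step can be scripted. The sketch below extracts the container image from a revision's detail view; the `detail` text is assumed sample output standing in for the real `--revision=3` command, and the image name is illustrative:

```shell
# Sketch: pull the container image out of a revision's detail view.
# `detail` is assumed sample output standing in for:
#   kubectl rollout history deployment/my-app --revision=3
detail='deployment.apps/my-app with revision #3
Pod Template:
  Labels:       app=my-app
  Containers:
   app:
    Image:      myapp:v1.2
    Port:       8080/TCP'

# Grab the value of the Image: field.
image=$(printf '%s\n' "$detail" | awk '/Image:/ {print $2}')
echo "revision 3 runs image: $image"
```

Piping the same `awk` over live `kubectl rollout history --revision=N` output gives a quick pre-rollback sanity check.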
Incident Response Workflow
During a production incident, follow this systematic approach:
# Step 1: Confirm the issue is deployment-related
kubectl rollout status deployment/my-app
kubectl describe deployment my-app | grep -A 10 "Conditions"
# Step 2: Review recent changes
kubectl rollout history deployment/my-app
# Step 3: Identify the last known good revision
kubectl rollout history deployment/my-app --revision=2
# Step 4: Execute the rollback
kubectl rollout undo deployment/my-app --to-revision=2
# Step 5: Monitor the rollback
kubectl rollout status deployment/my-app --timeout=120s
# Step 6: Verify the application is healthy
kubectl get pods -l app=my-app
kubectl logs -l app=my-app --tail=20
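Step 3 can also be automated. This sketch finds the revision that a plain `rollout undo` (the default `--to-revision=0`) would target, i.e. the highest revision below the current one; the `history` text is assumed sample output, not live cluster data:

```shell
# Sketch: find the revision that a plain `rollout undo` would target.
# `history` is assumed sample output standing in for:
#   kubectl rollout history deployment/my-app
history='REVISION  CHANGE-CAUSE
1         Initial deployment
2         Updated to v1.1
3         Updated to v1.2
4         Updated to v1.3 (broken)'

# Skip the header row, take the revision column, sort numerically,
# and pick the second-highest number: the "previous" revision.
previous=$(printf '%s\n' "$history" | awk 'NR>1 {print $1}' | sort -n | tail -n 2 | head -n 1)
echo "previous revision: $previous"
```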
Preview Before Rolling Back
Use dry-run to see what the rollback would produce without executing it:
# Preview the rollback result
kubectl rollout undo deployment/my-app --dry-run=server -o yaml
# Compare with current state
kubectl get deployment my-app -o yaml > current.yaml
kubectl rollout undo deployment/my-app --dry-run=server -o yaml > rollback.yaml
diff current.yaml rollback.yaml
How Rollback Works
When you run rollout undo, Kubernetes:
- Retrieves the pod template from the target revision's ReplicaSet.
- Updates the Deployment's pod template to match that revision.
- Creates a new revision number for this change (rollbacks are recorded as new revisions).
- Triggers a standard rolling update to the restored template.
# Before rollback: revision 4 is current
kubectl rollout history deployment/my-app
# REVISION CHANGE-CAUSE
# 1 Initial deployment
# 2 Updated to v1.1
# 3 Updated to v1.2
# 4 Updated to v1.3 (broken)
# After rollback to revision 3
kubectl rollout undo deployment/my-app --to-revision=3
kubectl rollout history deployment/my-app
# REVISION CHANGE-CAUSE
# 1 Initial deployment
# 2 Updated to v1.1
# 4 Updated to v1.3 (broken)
# 5 Updated to v1.2 (rollback from revision 3, now renumbered as 5)
Notice that the rolled-back revision (3) is consumed and appears as a new revision (5).
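The bookkeeping above can be stated as a rule: the target revision's number is consumed, and the restored template is recorded under max(revisions) + 1. A pure-shell sketch of that rule, needing no cluster:

```shell
# Sketch of revision bookkeeping for `rollout undo --to-revision=3`:
# the target number disappears and the restored template gets max + 1.
revisions="1 2 3 4"   # revisions present before the rollback
target=3              # revision being rolled back to

# The new revision number is one past the current maximum.
max=0
for r in $revisions; do
  if [ "$r" -gt "$max" ]; then max=$r; fi
done
new=$((max + 1))

# History afterwards: every revision except the target, plus the new one.
after=""
for r in $revisions; do
  if [ "$r" -ne "$target" ]; then after="$after $r"; fi
done
after="$after $new"
echo "history after rollback:$after"
```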
Automated Rollback
Integrate rollback into CI/CD pipelines for automatic recovery:
# Deploy and auto-rollback on failure
kubectl set image deployment/my-app app=myapp:${TAG}
if ! kubectl rollout status deployment/my-app --timeout=300s; then
  echo "Deployment failed, initiating rollback"
  kubectl rollout undo deployment/my-app
  if ! kubectl rollout status deployment/my-app --timeout=120s; then
    echo "CRITICAL: Rollback also failed"
    exit 2
  fi
  echo "Rollback successful"
  exit 1
fi
echo "Deployment successful"
Rolling Back DaemonSets and StatefulSets
Rollback works the same way for DaemonSets and StatefulSets:
# DaemonSet rollback
kubectl rollout undo daemonset/node-exporter -n monitoring
# StatefulSet rollback (pods are updated in reverse ordinal order)
kubectl rollout undo statefulset/postgresql -n databases
kubectl rollout status statefulset/postgresql -n databases
StatefulSet rollbacks update pods one at a time in reverse ordinal order (from highest to lowest), respecting the ordered update guarantee.
Best Practices
- Always check rollout history before rolling back to choose the correct revision.
- Monitor the rollback with rollout status to confirm completion.
- After a rollback, investigate the root cause of the failed deployment before attempting to re-deploy.
- Document rollback incidents for post-mortems.
- In CI/CD pipelines, always include automatic rollback logic with appropriate timeouts.
- Keep a sufficient revisionHistoryLimit so known good revisions remain available as rollback targets.
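The revisionHistoryLimit setting lives on the Deployment spec. A minimal manifest fragment (the value 15 is illustrative; pick a limit that covers your deployment frequency):

```yaml
# Deployment spec fragment: keep 15 old ReplicaSets as rollback targets.
# The default is 10; revisions older than the limit cannot be restored.
spec:
  revisionHistoryLimit: 15
```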
Common Mistakes
- Rolling back without first checking rollout history to understand which revision to target, potentially rolling back to another broken version.
- Assuming rollback restores the YAML manifest — it only restores the pod template. Annotations and other metadata may differ.
- Not monitoring the rollback with kubectl rollout status to confirm it completed successfully.