How Do You Roll Back a Kubernetes Deployment?
Use kubectl rollout undo deployment/<name> to revert to the previous revision. Kubernetes re-activates the old ReplicaSet and scales down the current one using the configured rollout strategy.
Detailed Answer
When a deployment goes wrong -- a misconfigured environment variable, a buggy image, a crash loop -- you need to revert fast. Kubernetes makes this straightforward because every Deployment update is tracked as a revision backed by a ReplicaSet.
Understanding Revision History
Every time you change a Deployment's Pod template, Kubernetes creates a new ReplicaSet and increments the revision number. The old ReplicaSets are kept (within the revisionHistoryLimit) so you can roll back.
# View revision history
kubectl rollout history deployment/web-app
Output:
REVISION  CHANGE-CAUSE
1         kubectl apply --filename=deployment.yaml
2         kubectl set image deployment/web-app nginx=nginx:1.26
3         kubectl set image deployment/web-app nginx=nginx:broken
The CHANGE-CAUSE is populated from the kubernetes.io/change-cause annotation. You can set it explicitly:
kubectl annotate deployment/web-app kubernetes.io/change-cause="Deploy v1.26 hotfix" --overwrite
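The revision bookkeeping and revisionHistoryLimit pruning can be illustrated with a small toy model. This is a sketch only: apply_template is a made-up helper, not a Kubernetes API, and real pruning is done by the Deployment controller.

```python
def apply_template(history, template, limit=10):
    """Toy model of revision tracking: each Pod-template change appends a
    new numbered revision; revisions beyond `limit` (the analogue of
    revisionHistoryLimit) are pruned oldest-first."""
    next_rev = history[-1][0] + 1 if history else 1
    history.append((next_rev, template))
    # Keep at most `limit` old revisions plus the active one.
    while len(history) > limit + 1:
        history.pop(0)

history = []
for image in ("nginx:1.25", "nginx:1.26", "nginx:broken"):
    apply_template(history, image)
print([rev for rev, _ in history])  # -> [1, 2, 3]
```

With a low limit, the oldest revisions drop out of the history, which is exactly why an aggressive revisionHistoryLimit shrinks your rollback options.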
Rolling Back to the Previous Revision
# Undo the last rollout
kubectl rollout undo deployment/web-app
This single command tells Kubernetes to:
- Identify the previous ReplicaSet revision.
- Scale up that ReplicaSet.
- Scale down the current (broken) ReplicaSet.
- Follow the same rolling update strategy (maxSurge/maxUnavailable).
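The Deployment controller interleaves these scale-up and scale-down steps subject to maxSurge and maxUnavailable. A toy simulation (illustrative only, not actual controller code, and assuming every new Pod passes readiness) shows the sequence for 4 replicas with maxSurge: 1, maxUnavailable: 0:

```python
def rolling_update(replicas, max_surge, max_unavailable):
    """Toy step-by-step simulation of a RollingUpdate rollout/rollback:
    returns (action, new_pods, old_pods) after each scaling step."""
    old, new = replicas, 0
    steps = []
    while old > 0 or new < replicas:
        # Scale up: total Pods may exceed `replicas` by at most maxSurge.
        grow = min(replicas + max_surge - (old + new), replicas - new)
        if grow > 0:
            new += grow
            steps.append(("scale up new", new, old))
        # Scale down: assuming the new Pods become ready, old Pods can go
        # as long as ready Pods stay >= replicas - maxUnavailable.
        floor = max(0, replicas - max_unavailable - new)
        if old > floor:
            old = floor
            steps.append(("scale down old", new, old))
    return steps

for action, n, o in rolling_update(replicas=4, max_surge=1, max_unavailable=0):
    print(f"{action}: new={n} old={o}")
```

With maxUnavailable: 0, the total Pod count oscillates between 4 and 5 and never dips below the desired replica count, which is why a rollback under this strategy causes no downtime.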
Rolling Back to a Specific Revision
If the previous revision is also problematic, target an exact revision:
# Inspect what a specific revision looks like
kubectl rollout history deployment/web-app --revision=1
# Roll back to revision 1
kubectl rollout undo deployment/web-app --to-revision=1
The --revision flag on rollout history shows the full Pod template for that revision so you can verify before rolling back.
A Practical Example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  annotations:
    kubernetes.io/change-cause: "Initial release v1.0"
spec:
  replicas: 4
  revisionHistoryLimit: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: web-app:1.0
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
Now simulate a bad release:
# Deploy a broken image
kubectl set image deployment/web-app web-app=web-app:broken
# Watch the rollout stall -- new Pods fail readiness checks
kubectl rollout status deployment/web-app
# Waiting for deployment "web-app" rollout to finish: 1 out of 4 new replicas have been updated...
# Roll back immediately
kubectl rollout undo deployment/web-app
# Confirm the rollback succeeded
kubectl rollout status deployment/web-app
# deployment "web-app" successfully rolled out
Because maxUnavailable: 0 was set, the broken release never took down any healthy Pods. The old Pods continued serving traffic while the single new Pod kept failing its readiness probe.
How a Rollback Actually Works
A rollback is not a special operation. It is a regular rollout that happens to use an older Pod template:
- Kubernetes copies the Pod template from the target ReplicaSet.
- It updates the Deployment's .spec.template with that older template.
- This triggers a new rollout, creating a new revision number.
- The old ReplicaSet is scaled up and the current one is scaled down.
This means after rolling back from revision 3 to revision 2, your history looks like:
REVISION  CHANGE-CAUSE
1         Initial release v1.0
3         kubectl set image deployment/web-app web-app=web-app:broken
4         kubectl rollout undo deployment/web-app --to-revision=2
Revision 2 disappears because its template was moved to revision 4.
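This renumbering is easy to model. The sketch below reproduces the history above; set_image and rollout_undo are illustrative stand-ins for the kubectl commands, not real APIs.

```python
def set_image(history, template):
    """Toy model: each Pod-template change becomes a new revision."""
    rev = max(history) + 1 if history else 1
    history[rev] = template
    return rev

def rollout_undo(history, to_revision):
    """A rollback copies the old template into a NEW revision and drops
    the source revision's number -- the template is moved, not copied."""
    template = history.pop(to_revision)
    return set_image(history, template)

history = {}
set_image(history, "web-app:1.0")      # revision 1
set_image(history, "web-app:1.1")      # revision 2
set_image(history, "web-app:broken")   # revision 3
rollout_undo(history, to_revision=2)   # revision 2 becomes revision 4
print(sorted(history))                 # -> [1, 3, 4]
```

Rolling back to revision 2 removes it from the history and re-creates its template as revision 4, matching the output above.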
Automating Rollback with progressDeadlineSeconds
You can configure Kubernetes to mark a Deployment as failed if it does not make progress within a timeout:
spec:
  progressDeadlineSeconds: 300  # 5 minutes
When the deadline is exceeded, the Deployment's condition changes to Progressing=False with reason ProgressDeadlineExceeded. While Kubernetes does not automatically roll back, this condition can be monitored by external tools (CI/CD pipelines, Argo Rollouts, Flagger) to trigger automatic rollbacks.
# Check Deployment conditions
kubectl get deployment web-app -o jsonpath='{.status.conditions[?(@.type=="Progressing")].reason}'
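External tooling typically watches the same condition programmatically. A minimal Python sketch, assuming the status dictionary has the shape returned by kubectl get deployment -o json (should_rollback is a hypothetical helper, not part of any client library):

```python
def should_rollback(deployment_status):
    """Sketch of the check an external controller (CI pipeline, operator)
    might run: trigger a rollback once the progress deadline is exceeded."""
    for cond in deployment_status.get("conditions", []):
        if (cond["type"] == "Progressing"
                and cond["status"] == "False"
                and cond.get("reason") == "ProgressDeadlineExceeded"):
            return True
    return False

# Shapes mirror `.status.conditions` from `kubectl get deployment -o json`.
stalled = {"conditions": [{"type": "Progressing", "status": "False",
                           "reason": "ProgressDeadlineExceeded"}]}
healthy = {"conditions": [{"type": "Progressing", "status": "True",
                           "reason": "NewReplicaSetAvailable"}]}
print(should_rollback(stalled), should_rollback(healthy))  # -> True False
```

On a True result, the tool would issue the equivalent of kubectl rollout undo against the Deployment.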
Best Practices
- Set revisionHistoryLimit to a reasonable number (e.g., 10). Too high wastes resources; too low limits rollback options.
- Always annotate change-cause so the revision history is meaningful.
- Use readiness probes so broken Pods never receive traffic and rollouts stall visibly.
- Set maxUnavailable: 0 for critical services to prevent any downtime during a bad release.
- Practice rollbacks in staging before you need them in production.
Summary
Kubernetes Deployments track every update as a numbered revision backed by a ReplicaSet. The kubectl rollout undo command lets you revert to any previous revision quickly and safely. Combined with readiness probes and a sensible revisionHistoryLimit, rollbacks become a routine, low-risk operation rather than a panic-inducing one.
Why Interviewers Ask This
Rolling back a broken deployment is one of the most common production operations. Interviewers want to see if you can recover quickly from a bad release without scrambling.
Key Takeaways
- kubectl rollout undo is the primary rollback mechanism.
- Each rollback creates a new revision entry in the history.
- revisionHistoryLimit controls how many old ReplicaSets are kept for rollback.