What Is the Difference Between Recreate and RollingUpdate Deployment Strategies?
The Recreate strategy kills all existing Pods before creating new ones, causing downtime but avoiding version conflicts. RollingUpdate replaces Pods incrementally with zero downtime but briefly runs two versions simultaneously.
Detailed Answer
Kubernetes Deployments support two built-in strategies: RollingUpdate (the default) and Recreate. They represent fundamentally different approaches to replacing Pods during an update.
Recreate Strategy
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  replicas: 3
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      containers:
        - name: legacy-app
          image: legacy-app:2.0
          ports:
            - containerPort: 8080
```
What happens during an update:
- All 3 existing Pods are terminated simultaneously.
- Kubernetes waits for all old Pods to be fully terminated.
- 3 new Pods are created with the updated template.
- New Pods go through scheduling, image pulling, and readiness checks.
- Service resumes once new Pods are Ready.
Timeline:
```
Old Pods: [████████████] ← All terminated
          Gap            ← DOWNTIME
New Pods: [████████████] ← All created
```
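The gap can be made concrete with a small simulation (an illustrative sketch, not Kubernetes code; readiness is simplified to one Pod becoming Ready per step):

```python
# Illustrative sketch of the Recreate sequence: all old Pods terminate
# before any new Pod starts, so the Ready-Pod count passes through zero.

def recreate_rollout(replicas: int) -> list[int]:
    """Return the number of Ready Pods after each step of a Recreate update."""
    history = [replicas]      # before the update: all old Pods Ready
    history.append(0)         # all old Pods terminated at once
    for ready in range(1, replicas + 1):
        history.append(ready)  # new Pods become Ready one by one
    return history

history = recreate_rollout(3)
print(history)       # [3, 0, 1, 2, 3]
print(min(history))  # 0 -> a window with no Ready Pods: downtime
```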
RollingUpdate Strategy
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: web-app:2.0
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
```
What happens during an update:
- A new ReplicaSet is created.
- 1 new Pod is created (respecting maxSurge: 1).
- When the new Pod is Ready, 1 old Pod is terminated (respecting maxUnavailable: 1).
- This cycle repeats until all old Pods are replaced.
Timeline:
```
Old Pods: [████████] [██████] [████] [──]
New Pods: [██] [████] [██████] [████████]
          ↑ always at least 2 Pods available
```
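The cycle can be sketched as a simulation (a simplified model of the surge/terminate rules, not the real controller logic in kube-controller-manager):

```python
# Simplified RollingUpdate model: total Pods never exceed
# replicas + maxSurge, and Ready Pods never drop below
# replicas - maxUnavailable. (Kubernetes rejects both values being 0.)

def rolling_update(replicas: int, max_surge: int, max_unavailable: int) -> list[int]:
    """Return Ready-Pod counts (old + new) after each scale event."""
    old, new = replicas, 0
    history = [old + new]
    while old > 0 or new < replicas:
        # Surge: create new Pods, capped by the total-Pod ceiling.
        create = min(replicas - new, replicas + max_surge - (old + new))
        new += create
        history.append(old + new)
        # Terminate old Pods, but keep Ready count at or above the floor.
        floor = replicas - max_unavailable
        kill = min(old, max(0, old + new - floor))
        old -= kill
        history.append(old + new)
    return history

history = rolling_update(3, max_surge=1, max_unavailable=1)
print(history)       # [3, 4, 2, 4, 3]
print(min(history))  # 2 -> never below replicas - maxUnavailable
print(max(history))  # 4 -> never above replicas + maxSurge
```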
Direct Comparison
| Aspect | Recreate | RollingUpdate |
|---|---|---|
| Downtime | Yes, during Pod replacement | No (when configured correctly) |
| Version coexistence | Never -- only one version runs | Yes -- both versions run briefly |
| Speed | Faster for small deployments | Depends on maxSurge/maxUnavailable |
| Resource overhead | None -- old Pods are gone before new ones start | Temporary -- extra Pods from maxSurge |
| Rollback behavior | kubectl rollout undo works, but incurs another downtime window | kubectl rollout undo rolls Pods back incrementally without downtime |
| Configuration | No additional parameters | maxSurge and maxUnavailable |
When to Use Recreate
Database migrations with breaking schema changes:
If version 2.0 requires a database schema that is incompatible with version 1.0, running both simultaneously would cause errors.
Single-writer applications:
Applications that acquire a lock or write to a shared resource and cannot tolerate two instances running at the same time.
GPU or specialized hardware:
When Pods require exclusive access to hardware resources that cannot be shared during the transition period.
Development environments:
Where downtime is acceptable and you want simpler, faster deployments.
```yaml
# Common pattern: Recreate with a pre-stop hook for cleanup
spec:
  strategy:
    type: Recreate
  template:
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: legacy-app
          image: legacy-app:2.0
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "cleanup.sh"]
```
When to Use RollingUpdate
Any user-facing service where downtime is unacceptable. This includes:
- Web applications and APIs
- Microservices
- Background workers (where brief version overlap is acceptable)
The safest configuration for zero downtime:
```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0  # Never remove an old Pod until a new one is Ready
  template:
    spec:
      containers:
        - name: web-app
          image: web-app:2.0
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 3
```
Setting maxUnavailable: 0 ensures no capacity is lost during the rollout. Combined with a readiness probe, no traffic is sent to a Pod until it is fully operational.
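Both fields also accept percentages of the replica count (the default is 25% for each); Kubernetes rounds maxSurge up and maxUnavailable down when converting to absolute Pod counts. A quick sketch of that arithmetic:

```python
import math

def resolve_rollout_params(replicas: int, max_surge: str, max_unavailable: str):
    """Convert percentage-style maxSurge/maxUnavailable into absolute
    Pod counts the way Kubernetes does: surge rounds up, unavailable
    rounds down."""
    def pct(value: str) -> float:
        return int(value.rstrip("%")) / 100
    surge = math.ceil(replicas * pct(max_surge))
    unavailable = math.floor(replicas * pct(max_unavailable))
    return surge, unavailable

# With 10 replicas and the default 25%/25%:
print(resolve_rollout_params(10, "25%", "25%"))  # (3, 2)
# Up to 13 Pods may exist at once; at least 8 stay available.
```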
Handling Version Compatibility in RollingUpdate
Because both old and new Pods run simultaneously during a RollingUpdate, your application must handle:
- API compatibility: Old and new versions must coexist behind the same Service.
- Database compatibility: Schema changes must be backward-compatible (add columns, do not rename or remove them during the transition).
- Message format compatibility: If services communicate via queues, both versions must understand the message format.
A common approach is the expand-contract pattern:
- Expand: Deploy v2.0 that reads both old and new formats, writes in the new format.
- Contract: Once all instances are on v2.0, deploy v2.1 that removes old format support.
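The expand phase can be sketched in application code (the record shapes and field names `full_name`, `first`, `last` are hypothetical examples, not from any real schema):

```python
# Hypothetical record migration: v1 stored {"full_name": ...}, v2 stores
# {"first": ..., "last": ...}. During the expand phase, v2.0 reads both
# formats but writes only the new one.

def read_user(record: dict) -> tuple[str, str]:
    """Accept both the old and new record formats (expand phase)."""
    if "full_name" in record:                   # old v1 format
        first, _, last = record["full_name"].partition(" ")
        return first, last
    return record["first"], record["last"]      # new v2 format

def write_user(first: str, last: str) -> dict:
    """Always write the new format, per the expand step above."""
    return {"first": first, "last": last}

print(read_user({"full_name": "Ada Lovelace"}))         # ('Ada', 'Lovelace')
print(read_user({"first": "Ada", "last": "Lovelace"}))  # ('Ada', 'Lovelace')
```

Once every instance runs this version, the v2.1 "contract" release can delete the `full_name` branch.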
Summary
The Recreate strategy is simple and eliminates version coexistence, but it causes downtime. The RollingUpdate strategy provides zero-downtime deployments at the cost of briefly running two versions simultaneously. Most production workloads should use RollingUpdate with maxUnavailable: 0 and readiness probes. Reserve Recreate for specialized cases where running two versions would cause correctness problems.
Why Interviewers Ask This
Interviewers ask this to see if you understand the tradeoffs between downtime and version consistency. The answer reveals whether you have thought about real deployment scenarios beyond the defaults.
Key Takeaways
- RollingUpdate is the default and preferred strategy for zero-downtime deployments.
- Recreate causes downtime but guarantees only one version runs at a time.
- Choose Recreate when version coexistence would cause data or API conflicts.