kubectl rollout restart
Trigger a rolling restart of a deployment, DaemonSet, or StatefulSet without changing the pod template.
kubectl rollout restart [TYPE] [NAME] [flags]
Common Flags
| Flag | Short | Description |
|---|---|---|
| --namespace | -n | Namespace of the resource |
| --selector | -l | Label selector to filter resources to restart |
| --filename | -f | Filename, directory, or URL to files identifying the resource to restart |
Examples
Restart a deployment
kubectl rollout restart deployment/my-app
Restart all deployments in a namespace
kubectl rollout restart deployment -n staging
Restart a DaemonSet
kubectl rollout restart daemonset/fluentd -n kube-system
Restart a StatefulSet
kubectl rollout restart statefulset/postgresql
Restart deployments matching a label
kubectl rollout restart deployment -l app=my-app
Restart and monitor
kubectl rollout restart deployment/my-app && kubectl rollout status deployment/my-app
When to Use kubectl rollout restart
kubectl rollout restart triggers a rolling replacement of all pods in a workload without changing the pod template. It is the clean way to restart pods, replacing the old approach of deleting pods manually or making dummy changes to trigger a rollout.
How It Works
The restart command adds a kubectl.kubernetes.io/restartedAt annotation with the current timestamp to the pod template. Since the pod template changed (even though only the annotation differs), Kubernetes triggers a rolling update:
# Restart a deployment
kubectl rollout restart deployment/my-app
# The annotation is added to the pod template
kubectl get deployment my-app -o jsonpath='{.spec.template.metadata.annotations}'
# {"kubectl.kubernetes.io/restartedAt":"2024-01-15T10:30:00Z"}
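The restart is equivalent to patching that annotation yourself. A minimal sketch of building the same patch by hand (the deployment name my-app and the commented-out kubectl patch invocation are illustrative):

```shell
# Build the restartedAt patch that rollout restart applies internally
ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
patch="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$ts\"}}}}}"
echo "$patch"

# Applying the patch would trigger the same rolling update:
# kubectl patch deployment my-app -p "$patch"
```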
The rolling update follows the deployment's configured strategy (maxSurge and maxUnavailable), ensuring availability during the restart.
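Percentage values for maxSurge and maxUnavailable are resolved against the replica count: maxSurge rounds up, maxUnavailable rounds down. A sketch of that arithmetic, using the 25% defaults and an assumed replica count of 10:

```shell
replicas=10
max_surge_pct=25        # default maxSurge: 25%
max_unavailable_pct=25  # default maxUnavailable: 25%

# maxSurge resolves to ceil(replicas * pct / 100)
surge=$(( (replicas * max_surge_pct + 99) / 100 ))
# maxUnavailable resolves to floor(replicas * pct / 100)
unavailable=$(( replicas * max_unavailable_pct / 100 ))

echo "surge=$surge unavailable=$unavailable"
# surge=3 unavailable=2: during the restart there can be at most 13
# pods in total, and at least 8 must remain available.
```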
Common Use Cases
Picking up ConfigMap or Secret changes: When you update a ConfigMap or Secret, existing pods do not automatically reload the new values. A rollout restart forces new pods that mount the updated data:
# Update a ConfigMap
kubectl create configmap app-config --from-file=config.yaml -o yaml --dry-run=client | kubectl apply -f -
# Restart to pick up the new config
kubectl rollout restart deployment/my-app
kubectl rollout status deployment/my-app
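An alternative to remembering to restart is stamping the pod template with a hash of the config, so any content change produces a changed template, and therefore a rollout, at apply time (the same idea as Helm's checksum/config annotation). A sketch; the file name, annotation key, and commented-out patch command are illustrative:

```shell
# Write a stand-in config file; in practice this is your real config.yaml
printf 'log_level: debug\n' > config.yaml

# Short content hash of the config
config_hash=$(sha256sum config.yaml | cut -c1-16)
echo "checksum/config=$config_hash"

# The hash would then go on the pod template, so a changed file means a
# changed template and therefore a rollout:
# kubectl patch deployment my-app -p \
#   "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/config\":\"$config_hash\"}}}}}"
```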
Refreshing TLS certificates: When certificates stored in Secrets are rotated, restart the workload to load the new certificates:
# After certificate rotation
kubectl rollout restart deployment/web-server
Clearing application state: When pods accumulate in-memory state that needs to be reset:
# Restart to clear caches and connections
kubectl rollout restart deployment/api-server
Recovering from issues: When pods are in a degraded state but not failing health checks:
# Force fresh pods
kubectl rollout restart deployment/problematic-app
kubectl rollout status deployment/problematic-app
Restarting Multiple Workloads
Restart all deployments or filter by label:
# Restart all deployments in a namespace
kubectl rollout restart deployment -n staging
# Restart specific deployments by label
kubectl rollout restart deployment -l tier=frontend
# Restart DaemonSets
kubectl rollout restart daemonset -n kube-system
Monitoring the Restart
Always verify the restart completes successfully:
# Restart and watch
kubectl rollout restart deployment/my-app
kubectl rollout status deployment/my-app --timeout=300s
# Check that old pods are replaced
kubectl get pods -l app=my-app -o wide
# Verify pod ages (all should be recent)
kubectl get pods -l app=my-app -o custom-columns=NAME:.metadata.name,AGE:.metadata.creationTimestamp
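Because creationTimestamps are RFC 3339 UTC strings, they compare correctly as plain strings, which makes a stale-pod check easy to script. A sketch with sample timestamps (in practice they would come from the kubectl command above):

```shell
restarted_at="2024-01-15T10:30:00Z"   # when the restart was issued
pod_timestamps="2024-01-15T10:31:02Z
2024-01-15T10:31:15Z"                 # sample pod creationTimestamps

stale=0
for ts in $pod_timestamps; do
  # RFC 3339 UTC timestamps sort lexically, so string < is a time compare
  if [[ "$ts" < "$restarted_at" ]]; then
    echo "stale pod created at $ts"
    stale=1
  fi
done
[ "$stale" -eq 0 ] && echo "all pods are newer than the restart"
```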
Rolling Restart vs Deleting Pods
Using rollout restart is better than deleting pods for several reasons:
# BAD: Deleting pods causes uncontrolled restarts
kubectl delete pods -l app=my-app
# All matching pods terminate at once; the Deployment recreates them,
# but there can be a window with no ready pods, i.e. an outage
# GOOD: Rolling restart maintains availability
kubectl rollout restart deployment/my-app
# Pods are replaced one at a time following the update strategy
Rolling restart respects maxUnavailable and maxSurge settings, maintains the deployment history, and can be rolled back if the new pods have issues.
Rollback After Restart
If a restart causes issues (for example, the application fails to start due to a bad ConfigMap), you can undo it:
# If the restart causes issues
kubectl rollout undo deployment/my-app
# This reverts to the pod template from before the restart
# (without the restartedAt annotation)
Single-Replica Considerations
For deployments with a single replica, the restart behavior depends on the update strategy:
# With the defaults (maxSurge: 25%, maxUnavailable: 25%), the percentages
# resolve to maxSurge=1, maxUnavailable=0 for one replica, so the new
# pod starts before the old one stops
# With an explicit maxUnavailable: 1, the old pod stops first: brief downtime
# Check the strategy
kubectl get deployment my-app -o jsonpath='{.spec.strategy}'
For zero-downtime restarts of single-replica deployments, set maxSurge: 1 and maxUnavailable: 0 explicitly in the rolling update strategy, and give the pod a readiness probe so traffic only shifts once the replacement is ready.
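Concretely, the strategy stanza might look like this (a sketch; the deployment name is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # start the replacement pod first
      maxUnavailable: 0  # never take the only pod down early
```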
Best Practices
Use rollout restart instead of deleting pods for controlled restarts, and always monitor the restart with rollout status. To pick up ConfigMap or Secret changes automatically, consider a tool such as stakater's Reloader, which watches for changes and triggers restarts for you. For critical services, ensure you have multiple replicas and appropriate rolling update settings before restarting.
Common Mistakes
- Expecting ConfigMap or Secret changes to take effect without a restart. Pods do not automatically reload mounted ConfigMaps or Secrets unless the application watches for file changes.
- Restarting a single-replica deployment whose strategy allows maxUnavailable of 1 or more (or uses the Recreate strategy) without realizing there will be a brief period with no ready pods.
- Using kubectl delete pod to restart workloads instead of rollout restart; deletion does not give a controlled rolling update.