kubectl delete

Delete resources by filenames, stdin, resource type and name, or by label selector.

kubectl delete ([-f FILENAME] | TYPE [NAME | -l label | --all]) [flags]

Common Flags

Flag            Short   Description
--filename      -f      Filename, directory, or URL to files containing resources to delete
--selector      -l      Filter resources to delete by label selector
--all                   Delete all resources of the specified type in the namespace
--grace-period          Period of time in seconds given to the resource to terminate gracefully (default 30)
--force                 Immediately remove resources from the API and bypass graceful deletion
--cascade               Must be "background", "foreground", or "orphan"; defaults to "background"

Examples

Delete a pod by name

kubectl delete pod my-pod

Delete resources defined in a file

kubectl delete -f deployment.yaml

Delete all pods with a specific label

kubectl delete pods -l app=nginx

Force delete a stuck pod

kubectl delete pod my-pod --grace-period=0 --force

Delete all pods in a namespace

kubectl delete pods --all -n staging

Delete a deployment without deleting its pods

kubectl delete deployment my-app --cascade=orphan

Delete a namespace and all its resources

kubectl delete namespace staging

When to Use kubectl delete

kubectl delete removes resources from your cluster. It supports multiple selection methods: by name, by file, by label selector, or all resources of a type. Understanding how deletion interacts with controllers and finalizers is key to avoiding surprises.

Graceful Deletion

By default, Kubernetes gives pods 30 seconds to shut down gracefully. During this period, the pod receives a SIGTERM signal and has time to finish in-flight requests, close connections, and clean up:

# Delete with default 30-second grace period
kubectl delete pod my-pod

# Delete with a custom grace period
kubectl delete pod my-pod --grace-period=60

# Delete immediately (skip graceful shutdown)
kubectl delete pod my-pod --grace-period=0 --force

The graceful shutdown period is defined by terminationGracePeriodSeconds in the pod spec. The --grace-period flag overrides this value. Force deletion should be reserved for pods truly stuck in Terminating state, such as when the node is unreachable.
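The spec-level default can be sketched in a minimal pod manifest (the name, image, and 60-second value here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  # The kubelet waits this long after sending SIGTERM before sending SIGKILL.
  # kubectl delete --grace-period overrides this value for a single deletion.
  terminationGracePeriodSeconds: 60
  containers:
  - name: app
    image: nginx:1.27
```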

Cascade Deletion

When you delete a resource that owns other resources, Kubernetes follows a cascade policy. A Deployment owns ReplicaSets, which own Pods. By default, deleting a Deployment removes everything:

# Default: delete deployment and all its pods (background cascade)
kubectl delete deployment my-app

# Foreground cascade: parent waits until all children are deleted
kubectl delete deployment my-app --cascade=foreground

# Orphan: delete only the deployment, leave pods running
kubectl delete deployment my-app --cascade=orphan

Orphan cascade is useful during migrations. You can delete the old Deployment, modify the orphaned ReplicaSet, and then create a new Deployment to adopt the running pods.
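A migration of this kind might look like the following sketch (my-app, the app= label, and deployment.yaml are hypothetical; these commands assume a live cluster):

```shell
# 1. Delete the Deployment but leave its ReplicaSet and pods running
kubectl delete deployment my-app --cascade=orphan

# 2. Verify the orphaned pods are still up
kubectl get pods -l app=my-app

# 3. Recreate the Deployment; if the pod template and selector are
#    unchanged, it adopts the existing ReplicaSet instead of rolling
#    out new pods
kubectl apply -f deployment.yaml
```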

Deleting by Label Selector

Label selectors let you delete groups of related resources in a single command:

# Delete all resources with a label
kubectl delete pods,services -l app=old-version

# Delete using set-based selectors
kubectl delete pods -l 'environment in (staging, testing)'

# Delete pods not matching a label
kubectl delete pods -l 'app!=production-critical'

This is particularly useful during cleanup operations or when removing an entire application stack.

Deleting from Files

If you created resources from a YAML file, you can delete them the same way:

# Delete everything defined in the file
kubectl delete -f deployment.yaml

# Delete everything in a directory
kubectl delete -f ./manifests/

# Delete recursively
kubectl delete -f ./environments/staging/ -R

This approach ensures you delete exactly the resources you created, without accidentally removing others.

Handling Stuck Resources

Resources can get stuck in Terminating state due to finalizers or unreachable nodes:

# Check what finalizers are blocking deletion
kubectl get pod stuck-pod -o jsonpath='{.metadata.finalizers}'

# Remove finalizers to unblock deletion (use with caution)
kubectl patch pod stuck-pod -p '{"metadata":{"finalizers":null}}'

# Force delete when node is unreachable
kubectl delete pod stuck-pod --grace-period=0 --force

Finalizers are hooks that must complete before a resource is removed. Common finalizers include volume cleanup and external resource deprovisioning. Removing finalizers manually can leave orphaned external resources.
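A finalizer is just a string entry under metadata. For example, PersistentVolumeClaims carry a built-in protection finalizer; a claim's manifest (name here is illustrative) includes something like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
  finalizers:
  # Blocks deletion while a pod is still using the claim
  - kubernetes.io/pvc-protection
```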

Deleting Namespaces

Deleting a namespace removes everything inside it. This is a powerful cleanup mechanism but can be destructive:

# Delete a namespace and all its contents
kubectl delete namespace staging

# Check what is inside before deleting
kubectl get all -n staging
kubectl get configmaps,secrets,pvc -n staging

Namespace deletion can also get stuck if resources inside have finalizers that cannot complete. Check for stuck resources with kubectl get all -n <namespace> and address finalizers on individual resources.
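One caveat: kubectl get all covers only a subset of built-in resource types. A fuller sweep enumerates every namespaced type first and queries each one (a sketch, assuming a live cluster; replace staging with your namespace):

```shell
# List every namespaced resource type, then query each in the namespace
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --ignore-not-found -n staging
```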

Bulk Deletion

For large-scale cleanup operations:

# Delete all pods in a namespace
kubectl delete pods --all -n staging

# Delete all completed jobs
kubectl delete jobs --field-selector status.successful=1

# Delete all failed pods (evicted pods are recorded with phase Failed)
kubectl get pods --field-selector status.phase=Failed -o name | xargs kubectl delete

Safety Practices

Always double-check the namespace and label selector before running delete commands. Use --dry-run=client to preview what would be affected. In production environments, prefer deleting from files so that you can easily recreate resources. Consider using kubectl get with the same selector first to verify which resources match.
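The preview workflow described above can be sketched as follows (the app=nginx selector and staging namespace are illustrative, and the commands assume a live cluster):

```shell
# See which pods match the selector without touching anything
kubectl get pods -l app=nginx -n staging

# Preview the deletion on the client side; nothing is removed
kubectl delete pods -l app=nginx -n staging --dry-run=client

# Only then run the real deletion
kubectl delete pods -l app=nginx -n staging
```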

Interview Questions About This Command

What happens when you delete a pod managed by a Deployment?
The ReplicaSet controller detects the pod count is below the desired replicas and creates a new pod to replace it. To actually remove the workload, delete the Deployment itself.
How do you force delete a pod stuck in Terminating state?
Use kubectl delete pod <name> --grace-period=0 --force. This removes the pod from the API server immediately. The kubelet will eventually clean up the container.
What does --cascade=orphan do?
It deletes the parent resource (like a Deployment) but leaves the child resources (like Pods and ReplicaSets) running. This is useful when you want to adopt resources under a new controller.

Common Mistakes

  • Deleting a pod that is managed by a Deployment or ReplicaSet and expecting it to stay deleted. The controller will recreate it.
  • Using --force --grace-period=0 routinely instead of investigating why pods are stuck in Terminating state.
  • Deleting a namespace without realizing it removes everything inside it, including secrets, configmaps, and persistent volume claims.

Related Commands