Kubernetes ImagePullBackOff
Causes and Fixes
ImagePullBackOff means Kubernetes tried to pull a container image and failed, and is now waiting with exponential backoff before retrying. This typically indicates the image does not exist, the tag is wrong, or the cluster lacks credentials to access a private registry.
Symptoms
- Pod status shows ImagePullBackOff in kubectl get pods output
- kubectl describe pod shows 'Failed to pull image' events
- Pod stays in Pending or ImagePullBackOff state and never starts
- Events may show 'repository does not exist' or '401 Unauthorized'
Common Causes
- The image name or tag is misspelled, or the tag was never pushed
- The registry is private and the pod has no (or stale) imagePullSecrets
- The node cannot reach the registry (DNS, firewall, or proxy issues)
- The registry is rate-limiting pulls (common with Docker Hub)
- The image reference is malformed (e.g. it includes a scheme or the wrong port)
Step-by-Step Troubleshooting
1. Identify the Failing Pod and Image
Start by listing pods and spotting the ones stuck in ImagePullBackOff.
kubectl get pods -A | grep -i imagepull
Then describe the pod to see exactly which image failed and why.
kubectl describe pod <pod-name> -n <namespace>
Look at the Events section at the bottom. You will see messages like:
Failed to pull image "myregistry.io/app:v2.1": rpc error: code = NotFound
Back-off pulling image "myregistry.io/app:v2.1"
The error message tells you whether the problem is authentication, a missing image, or a network issue.
2. Verify the Image Exists
Check that the image and tag actually exist in the registry. You can do this from your local machine or a node with network access.
# Using docker
docker pull myregistry.io/app:v2.1
# Using crane (no Docker daemon needed)
crane digest myregistry.io/app:v2.1
# Using skopeo
skopeo inspect docker://myregistry.io/app:v2.1
If the image does not exist, you have found your problem. Push the image or correct the tag in your deployment manifest.
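If the pull fails with a not-found error, listing the tags that actually exist is often faster than guessing. A quick sketch using crane or skopeo (both assume network access to the registry; `myregistry.io/app` is a placeholder):

```shell
# List every tag published for the repository
crane ls myregistry.io/app

# Same check with skopeo; add --creds user:pass for a private registry
skopeo list-tags docker://myregistry.io/app
```

Comparing this list against the tag in your manifest usually reveals a typo or a tag that was never pushed.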
3. Check Image Pull Secrets
If the image is in a private registry, ensure the pod references the correct imagePullSecret.
# List secrets in the namespace
kubectl get secrets -n <namespace>
# Check the pod spec for imagePullSecrets
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.imagePullSecrets}'
# Inspect the secret contents (base64 encoded)
kubectl get secret <secret-name> -n <namespace> -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
If the secret is missing or contains stale credentials, recreate it:
kubectl create secret docker-registry regcred \
  --docker-server=myregistry.io \
  --docker-username=myuser \
  --docker-password=mypassword \
  -n <namespace>
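If you already have a working `docker login` on your machine, you can build the secret from your existing Docker config instead of typing credentials inline:

```shell
# Reuse local Docker credentials; assumes you have run `docker login myregistry.io`
kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=$HOME/.docker/config.json \
  --type=kubernetes.io/dockerconfigjson \
  -n <namespace>
```

This avoids leaving the password in your shell history, though note it copies every registry credential in that config file into the secret.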
Then ensure your pod spec or ServiceAccount references it:
spec:
  imagePullSecrets:
  - name: regcred
  containers:
  - name: app
    image: myregistry.io/app:v2.1
Alternatively, attach the secret to the default ServiceAccount so all pods in the namespace use it automatically:
kubectl patch serviceaccount default -n <namespace> \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
4. Test Network Connectivity from the Node
If the image exists and credentials are correct, the node may not be able to reach the registry.
# Run a debug pod on the same node
kubectl debug node/<node-name> -it --image=busybox -- sh
# Test DNS resolution
nslookup myregistry.io
# Test connectivity
wget -q --spider https://myregistry.io/v2/
Common network issues include:
- Corporate firewalls blocking outbound HTTPS
- DNS misconfiguration on worker nodes
- Misconfigured proxy settings (HTTP_PROXY/NO_PROXY) on the container runtime; note that pod-level NetworkPolicies do not affect image pulls, which the kubelet performs over the node's own network
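You can also reproduce the pull exactly as the kubelet performs it by running the container runtime's CLI on the node itself (via SSH or the debug session above; crictl ships with most kubeadm-based installs):

```shell
# From a shell on the affected node: pull through the container runtime,
# which is the same code path the kubelet uses
crictl pull myregistry.io/app:v2.1

# Check recent runtime-level pull errors (assuming containerd is the runtime)
journalctl -u containerd --since "10 min ago" | grep -i pull
```

If `crictl pull` succeeds but the pod still fails, the problem is likely credentials in the pod spec rather than node networking.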
5. Check for Rate Limiting
Docker Hub enforces pull rate limits (at the time of writing, 100 pulls per six hours per IP for anonymous users and 200 for free authenticated accounts; check Docker's documentation for current values). Check whether you are being rate-limited (requires jq):
# The bare /v2/ endpoint does not return rate-limit headers; request a pull
# token for Docker's rate-limit preview repo and read the manifest headers
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl -s --head -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit
If rate limiting is the issue, authenticate your pulls or set up a registry mirror:
# Create a Docker Hub secret
kubectl create secret docker-registry dockerhub-cred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<username> \
  --docker-password=<token>
6. Inspect the Image Reference Format
Make sure the image reference is well-formed. Common mistakes include:
- Missing the registry prefix for private images (Kubernetes defaults to Docker Hub)
- Using a wrong port number for the registry
- Including a scheme like https:// in the image name (this is invalid)
# Correct formats:
# docker.io/library/nginx:1.25
# myregistry.io:5000/team/app:v1.0
# ghcr.io/org/app@sha256:abc123...
# View the exact image the pod is trying to pull
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].image}'
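The scheme and whitespace mistakes above can be caught before deploying with a small shell check. This is a sketch: `check_image_ref` is a hypothetical helper for CI pipelines, not a kubectl feature.

```shell
# Flag the most common malformed image references before they reach the cluster
check_image_ref() {
  ref="$1"
  case "$ref" in
    http://*|https://*) echo "INVALID: image references must not include a scheme" ;;
    *" "*)              echo "INVALID: image references must not contain spaces" ;;
    *)                  echo "OK: $ref" ;;
  esac
}

check_image_ref "https://myregistry.io/app:v2.1"   # prints the scheme error
check_image_ref "ghcr.io/org/app:v1.0"             # prints OK: ghcr.io/org/app:v1.0
```

A fuller validator would also check the tag and digest grammar, but even this catches the errors listed above.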
7. Fix and Redeploy
Once you identify the root cause, update your deployment:
# Fix the image reference
kubectl set image deployment/<deploy-name> <container-name>=myregistry.io/app:v2.1-fixed
# Or edit the deployment directly
kubectl edit deployment <deploy-name>
# Force a re-pull if you pushed a fix to the same tag
kubectl rollout restart deployment/<deploy-name>
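Note that a rollout restart only forces a fresh pull if the pod's imagePullPolicy allows it. A minimal fragment (assuming your container is named `app`):

```yaml
# Always re-pull on pod start; the default for non-:latest tags is IfNotPresent,
# which can keep serving a stale cached image after you re-push the same tag
containers:
- name: app
  image: myregistry.io/app:v2.1
  imagePullPolicy: Always
```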
8. Verify the Fix
# Watch the pod status
kubectl get pods -w
# Confirm the image was pulled successfully
kubectl describe pod <pod-name> | grep -A5 "Events"
You should see a 'Successfully pulled image' event, and the pod should transition to Running.
Quick Reference: ImagePullBackOff vs ErrImagePull
| State | Meaning |
|-------|---------|
| ErrImagePull | A pull attempt just failed (shown immediately after the failure) |
| ImagePullBackOff | Kubernetes is waiting before retrying (backoff increases up to 5 minutes) |
Both point to the same underlying issue. ImagePullBackOff simply means Kubernetes has already tried and failed at least once and is in a retry loop.
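To see which state each pod is currently in across the cluster, you can print the container waiting reasons directly (a one-liner sketch using jsonpath):

```shell
# Print namespace, pod, and waiting reason for every container, then filter
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}' \
  | grep -E 'ErrImagePull|ImagePullBackOff'
```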
How to Explain This in an Interview
I would explain that ImagePullBackOff is the backoff state that follows an initial ErrImagePull: Kubernetes retries the pull with exponential backoff, capped at five minutes. I would walk through the debugging steps in order: checking the image name, verifying the tag exists, ensuring imagePullSecrets are configured, and testing network connectivity from the node. I'd also mention that running a private registry mirror and pinning images to digests are production best practices.
Prevention
- Pin images to digests instead of mutable tags like latest
- Use imagePullSecrets on ServiceAccounts for private registries
- Set up a pull-through registry cache to avoid rate limits
- Validate image references in CI before deploying
- Use admission controllers to enforce image policies
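Digest pinning from the first bullet looks like this in a pod spec (the digest shown is a placeholder):

```yaml
# Immutable reference: the digest identifies exactly one image build, so a
# re-pushed or deleted tag can never change what gets deployed
containers:
- name: app
  image: myregistry.io/app@sha256:4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
```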