Kubernetes RunContainerError
Causes and Fixes
RunContainerError means the container runtime successfully created the container but failed when trying to start it. This typically indicates problems with the container's entrypoint, command, working directory, or user configuration that prevent the process from launching.
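A minimal reproduction helps make the failure concrete. The pod below (image version and script path are illustrative) asks the runtime to start a file that does not exist in the image, which on most runtimes surfaces exactly this way:

```yaml
# Sketch: a command pointing at a nonexistent path never starts,
# and the pod reports RunContainerError
apiVersion: v1
kind: Pod
metadata:
  name: runcontainererror-demo
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["/does-not-exist.sh"]
```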
Symptoms
- Pod status shows RunContainerError in kubectl get pods output
- Container stays in Waiting state with reason RunContainerError
- kubectl describe pod shows runtime errors about failed execution
- No application logs are generated because the process never started
Common Causes
- The entrypoint script or binary does not exist at the expected path in the image
- The entrypoint file is not executable by the user the container runs as
- command or args in the pod spec point at the wrong path
- workingDir is set to a directory that does not exist in the image
- The binary exists but has missing shared library dependencies
- A script's shebang references an interpreter that is not present in the image
Step-by-Step Troubleshooting
1. Check the Error Details
kubectl describe pod <pod-name> -n <namespace>
Look for the specific error message in the container state and events:
State: Waiting
Reason: RunContainerError
Message: failed to start container: exec: "/app/start.sh": stat /app/start.sh: no such file or directory
This message directly tells you what failed during the container start attempt.
2. Verify the Command and Args
Check what command the container is trying to run.
# Check the pod spec's command and args
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].command}'
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].args}'
# Check the image's default entrypoint
docker inspect <image> --format='{{.Config.Entrypoint}} {{.Config.Cmd}}'
Remember that in Kubernetes:
- command overrides the Docker ENTRYPOINT
- args overrides the Docker CMD
- If only args is set, it is passed to the image's ENTRYPOINT
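As a sketch, the mapping looks like this in a pod spec (image name, paths, and flags are hypothetical):

```yaml
# command replaces the image ENTRYPOINT; args replaces the image CMD
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-demo
spec:
  containers:
    - name: app
      image: myapp:1.0                 # assumed image
      command: ["/app/start.sh"]       # overrides ENTRYPOINT
      args: ["--port", "8080"]         # overrides CMD
```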
3. Inspect the Image Contents
Verify the entrypoint binary exists and is executable inside the image.
# Run the image with an alternative entrypoint to explore it
kubectl run debug-image --image=<image> --restart=Never --command -- sleep 3600
kubectl exec -it debug-image -- sh
# Inside the container:
ls -la /app/start.sh
which <binary-name>
file /app/start.sh
If the image uses a distroless base, you may need to use an ephemeral debug container:
kubectl debug <pod-name> -it --image=busybox --target=<container-name> -- sh
# Check if the binary exists
ls -la /proc/1/root/app/start.sh
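One subtlety worth ruling out while inspecting the image: a script can exist and be executable yet still fail with "no such file or directory" if its shebang names an interpreter that is missing from the image (a classic example is /bin/bash on an alpine base). A local demo with a hypothetical path:

```shell
# The script exists and is executable...
cat > /tmp/demo.sh <<'EOF'
#!/no/such/interpreter
echo started
EOF
chmod +x /tmp/demo.sh

# ...but exec still fails with ENOENT because the interpreter is missing
/tmp/demo.sh 2>/dev/null || echo "exec failed: interpreter missing"
```

So when the error says the file is missing but `ls` shows it, check the first line of the script next.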
4. Check the Working Directory
If the pod spec sets workingDir, verify it exists in the image.
# Check the pod's working directory
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].workingDir}'
# Verify the directory exists in the image (attach so the output is shown,
# and --rm cleans up the one-shot pod)
kubectl run debug-image --rm -it --image=<image> --restart=Never --command -- ls -la /path/to/workdir
Fix by either creating the directory in the Dockerfile or correcting the workingDir in the pod spec.
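On the image side, WORKDIR is usually the simplest fix, since it both creates the directory if it does not exist and sets it as the default working directory. A Dockerfile fragment (paths assumed):

```dockerfile
# Creates /app if absent and makes it the default working directory
WORKDIR /app
COPY start.sh /app/start.sh
```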
5. Check File Permissions
The entrypoint must be executable by the user the container runs as.
# Check what user the container runs as
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].securityContext.runAsUser}'
kubectl get pod <pod-name> -o jsonpath='{.spec.securityContext.runAsUser}'
# Check file permissions in the image (attach so the output is shown)
kubectl run debug-image --rm -it --image=<image> --restart=Never --command -- ls -la /app/start.sh
Fix permissions in the Dockerfile:
RUN chmod +x /app/start.sh
Or change the security context to use a user that has execute permissions.
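The permission failure mode can be demonstrated locally without a cluster (script path is hypothetical): a file that exists but lacks the execute bit cannot be started, and adding the bit is exactly what the Dockerfile fix does.

```shell
# A script that is readable but not executable fails to start
printf '#!/bin/sh\necho started\n' > /tmp/start.sh
chmod 644 /tmp/start.sh
/tmp/start.sh 2>/dev/null || echo "permission denied"

# The equivalent of the Dockerfile's chmod +x, applied locally
chmod +x /tmp/start.sh
/tmp/start.sh
```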
6. Check for Missing Libraries
If the binary exists but cannot execute, it may have unresolved library dependencies.
# Check library dependencies (attach so the output is shown)
kubectl run debug-image --rm -it --image=<image> --restart=Never --command -- ldd /app/binary
# If ldd is not available, check with a debug container
kubectl debug <pod-name> -it --image=ubuntu --target=<container-name> -- bash
apt-get update && apt-get install -y file
file /proc/1/root/app/binary   # also catches architecture mismatches (exec format error)
ldd /proc/1/root/app/binary
If libraries are missing, update the Dockerfile to include them or use a base image that provides them.
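Another way to sidestep shared-library problems entirely is a statically linked binary. A multi-stage Dockerfile sketch, assuming a Go application (names and paths hypothetical):

```dockerfile
# Build a statically linked binary so the runtime image needs no libraries
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# scratch contains no libc at all; only a fully static binary can run here
FROM scratch
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```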
7. Test Locally
Reproduce the issue locally to speed up debugging.
# Try running with the same command/args as the pod spec
docker run --rm <image> <command> <args>
# Try running with the pod's security context
docker run --rm --user 1000:1000 <image> <command> <args>
# Try with the same working directory as the pod spec
docker run --rm -w /app/nonexistent <image> <command>
8. Fix the Pod Spec
Apply the appropriate fix based on your findings:
# Fix the command
kubectl patch deployment <deploy-name> --type=json \
-p='[{"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["/app/correct-start.sh"]}]'
# Remove command override to use the image default
kubectl patch deployment <deploy-name> --type=json \
-p='[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]'
# Fix working directory
kubectl patch deployment <deploy-name> --type=json \
-p='[{"op": "replace", "path": "/spec/template/spec/containers/0/workingDir", "value": "/app"}]'
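The same fixes can be expressed declaratively (all names, image tags, and paths below are hypothetical), which is usually preferable to live patching because the manifest stays the source of truth:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                          # assumed deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.2.3                    # assumed image
          command: ["/app/correct-start.sh"]    # corrected entrypoint
          workingDir: /app                      # directory that exists in the image
```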
9. Verify the Fix
kubectl get pods -w
kubectl describe pod <pod-name> | grep -A3 "State:"
kubectl logs <pod-name>
The container should start successfully and begin producing logs.
How to Explain This in an Interview
I would explain that RunContainerError sits between CreateContainerError and CrashLoopBackOff in the container lifecycle. The container was created but the runtime could not start the process. This is distinct from CrashLoopBackOff where the process starts and then exits. I would check the command and args fields, the working directory, and whether the entrypoint binary is executable within the container. Debugging often requires inspecting the image directly.
Prevention
- Test container images locally before deploying to Kubernetes
- Use multi-stage builds to ensure all dependencies are in the final image
- Validate command and args in CI with docker run --entrypoint
- Avoid overriding entrypoint unless necessary
- Use image scanning to verify binary dependencies
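The CI validation above can be sketched as a pipeline job (GitHub Actions syntax, image tag and entrypoint path assumed) that fails the build before a broken image ever reaches the cluster:

```yaml
jobs:
  entrypoint-smoke-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myapp:ci .
      # Fail fast if the entrypoint is missing or not executable in the image
      - run: docker run --rm --entrypoint /bin/sh myapp:ci -c 'test -x /app/start.sh'
```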