Kubernetes ContainerCannotRun

Causes and Fixes

ContainerCannotRun indicates that the container runtime created the container but could not execute its entrypoint: the exec call itself failed before any application code ran. This is typically caused by an invalid entrypoint binary, an incompatible executable format, or permission problems that prevent the container's main process from launching.

Symptoms

  • Pod status shows ContainerCannotRun as the termination reason
  • Container exits immediately with no useful application logs
  • kubectl describe pod shows 'ContainerCannotRun' in the last state
  • Exit code is often 126 (permission denied) or 127 (not found)

Common Causes

1. Entrypoint binary not found
The ENTRYPOINT or command specified does not exist in the container image. The binary path may be wrong, or the image may be missing layers.
2. Binary format incompatible
The binary was compiled for a different architecture (e.g., an amd64 binary on an arm64 node) or OS. Use multi-arch builds.
3. Permission denied on binary
The entrypoint file exists but is not marked as executable, or the container user does not have execute permission.
4. Script missing shebang line
A shell script used as the entrypoint does not have a #!/bin/sh or #!/bin/bash shebang line, so the kernel cannot determine how to execute it.
5. Corrupt or incomplete image
The image was partially pushed or a layer is corrupt. Re-pull or re-push the image.

Step-by-Step Troubleshooting

1. Check the Container State and Exit Code

kubectl describe pod <pod-name> -n <namespace>

Look at the container state:

Last State:     Terminated
  Reason:       ContainerCannotRun
  Message:      OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/app/run": stat /app/run: no such file or directory
  Exit Code:    127

The Message field tells you exactly what went wrong.

2. Interpret the Exit Code

kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'

| Exit Code | Meaning | Likely Cause |
|-----------|---------|--------------|
| 126 | Cannot execute | File exists but is not executable, or is the wrong format |
| 127 | Not found | The binary or script does not exist at the specified path |
| 1 | Exec format error | Wrong architecture or missing shebang |

Exact codes vary by runtime and shell, so treat the table as a heuristic; the Message field from kubectl describe is more reliable than the code alone.
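The two most common codes can be reproduced locally without a cluster; a quick sketch using plain sh (the /tmp paths are illustrative):

```shell
# 127: the requested binary does not exist
sh -c '/no/such/binary'
echo "exit=$?"              # exit=127

# 126: the file exists but is not executable (no chmod +x)
printf '#!/bin/sh\necho hi\n' > /tmp/noexec.sh
sh -c '/tmp/noexec.sh'
echo "exit=$?"              # exit=126
```

Seeing the same codes in a pod's lastState points at the same two failure modes inside the image.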

3. Verify the Entrypoint Exists

Inspect the image to check the entrypoint binary.

# Check the image's entrypoint
docker inspect <image> --format='ENTRYPOINT={{.Config.Entrypoint}} CMD={{.Config.Cmd}}'

# Check the pod spec override
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].command}'
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].args}'

# Run the image with a shell to check the filesystem
docker run --rm -it --entrypoint sh <image>
ls -la /app/run
file /app/run

4. Check for Architecture Mismatches

If the binary exists but cannot execute, it may be built for the wrong architecture.

# Check the node architecture
kubectl get node <node-name> -o jsonpath='{.status.nodeInfo.architecture}'

# Check the binary architecture inside the image
docker run --rm --entrypoint sh <image> -c "file /app/binary"

Output like ELF 64-bit LSB executable, x86-64 on an arm64 node confirms an architecture mismatch.
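If the file utility is not present in the image (common with distroless or scratch bases), the ELF header bytes can be read directly; a sketch, assuming a Linux binary and using /bin/ls as a stand-in for your entrypoint:

```shell
# Dump the first 20 bytes of the ELF header
head -c 20 /bin/ls | od -An -tx1
# Bytes 1-4:  7f 45 4c 46  -> the ELF magic number
# Byte 5:     01 = 32-bit, 02 = 64-bit
# Bytes 19-20 (e_machine, little-endian): 3e 00 = x86-64, b7 00 = aarch64
```

Comparing the e_machine bytes against the node architecture from the previous command confirms or rules out a mismatch.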

Fix by building multi-arch images:

docker buildx build --platform linux/amd64,linux/arm64 -t myapp:v1 --push .

5. Check File Permissions

# Check permissions on the entrypoint
docker run --rm --entrypoint sh <image> -c "ls -la /app/run && stat /app/run"

If the file is not executable:

# Fix in Dockerfile
COPY run /app/run
RUN chmod +x /app/run

# Or, with BuildKit, set the mode at copy time:
# COPY --chmod=0755 run /app/run

If the container runs as a non-root user, ensure that user has execute permission:

# Check what user the image runs as
docker inspect <image> --format='{{.Config.User}}'

# Test with the same user locally
docker run --rm --user 1000:1000 <image>
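Execute permission is per-user, so a file that is executable by root can still fail with 126 for a non-root container user. A local sketch of reading the mode bits to check the "others" execute bit (/tmp/app is an illustrative path):

```shell
# Create a file that only the owner and group may execute
printf '#!/bin/sh\necho ok\n' > /tmp/app
chmod 750 /tmp/app

# The last character of the mode string is the "others" execute bit
mode=$(ls -l /tmp/app | cut -c1-10)
echo "$mode"                  # -rwxr-x---
case "$mode" in
  *x) echo "world-executable" ;;
  *)  echo "NOT world-executable: a non-root container user may hit exit 126" ;;
esac
```

chmod 755 (or running the container as the file's owner or group) resolves this case.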

6. Check Script Shebang Lines

If the entrypoint is a script, ensure it has a proper shebang line.

# Check the first line of the script
docker run --rm --entrypoint sh <image> -c "head -1 /app/run.sh"

The script must start with a valid interpreter path:

#!/bin/sh
# or
#!/bin/bash

In minimal images (alpine, distroless), /bin/bash may not exist. Use /bin/sh instead.
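A related pitfall is a shebang saved with Windows (CRLF) line endings: the kernel treats the trailing carriage return as part of the interpreter path, so the interpreter appears "not found" even though the line looks correct. A local repro sketch (/tmp/crlf.sh is illustrative):

```shell
# Script whose shebang line ends in \r, as saved by a Windows editor
printf '#!/bin/sh\r\necho hello\r\n' > /tmp/crlf.sh
chmod +x /tmp/crlf.sh
/tmp/crlf.sh                # fails: bad interpreter "/bin/sh\r"
echo "exit=$?"

# Fix: strip the carriage returns
sed -i 's/\r$//' /tmp/crlf.sh
/tmp/crlf.sh                # prints hello
```

In a Dockerfile, running dos2unix (or the sed above) on copied scripts prevents this class of failure.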

7. Check for Corrupt Images

If the image layers are corrupt, the binary may be missing or incomplete.

# Re-pull the image
docker pull <image>

# Verify image integrity
crane validate --remote <image>

# Check image manifest
crane manifest <image> | jq .

If the image is corrupt, re-build and re-push it, then restart the workload so fresh pods pull it again. Note that a rollout restart only re-pulls when imagePullPolicy is Always or the tag has changed; pushing under a new tag (or pinning by digest) is more reliable:

# Force re-pull by restarting the deployment
kubectl rollout restart deployment/<deploy-name>

8. Debug with a Different Entrypoint

Run the image with a sleep command to inspect the filesystem interactively. This assumes the image contains sleep and a shell; for shell-less images such as distroless, attach an ephemeral debug container with kubectl debug instead.

# Override the entrypoint to get a shell
kubectl run debug --image=<image> --restart=Never --command -- sleep 3600
kubectl exec -it debug -- sh

# Inside the container, manually run the entrypoint
/app/run
# Observe the error output

# Check shared library dependencies
ldd /app/binary
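Missing shared libraries produce a particularly confusing failure: exec reports "no such file or directory" for a binary that clearly exists, because the file that is actually missing is the ELF interpreter (dynamic loader) or a library it needs. A sketch on a glibc system, using /bin/ls as a stand-in for your binary:

```shell
# List the loader and libraries the binary needs
ldd /bin/ls
# The loader line looks like:
#   /lib64/ld-linux-x86-64.so.2 (0x...)
# If that path does not exist in the image (e.g. a glibc-linked binary in
# an Alpine/musl or scratch image), exec fails as if the binary itself
# were missing.
```

Static linking, or building against the base image's libc, avoids this mismatch.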

9. Fix and Verify

Apply the fix based on your findings:

# Fix image and redeploy
kubectl set image deployment/<deploy-name> <container>=<fixed-image>

# Or fix command override
kubectl patch deployment <deploy-name> --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["/bin/sh", "-c", "/app/run.sh"]}]'

# Watch the pod
kubectl get pods -w

# Verify it is running
kubectl logs <pod-name>

The container should start successfully and produce application logs.

How to Explain This in an Interview

I would explain that ContainerCannotRun differs from CrashLoopBackOff in that the process never actually runs — the exec system call itself fails. This is a lower-level failure than an application crash. I would debug by checking the exit code (126 for permission denied, 127 for not found), verifying the entrypoint exists and is executable in the image, and checking for architecture mismatches. This is often seen when switching to distroless images or scratch-based images without including all required dependencies.

Prevention

  • Test images locally with docker run before deploying
  • Build multi-arch images for mixed-architecture clusters
  • Ensure entrypoint scripts have proper shebang lines and execute permissions
  • Use image scanning in CI to detect missing dependencies
  • Verify image integrity with digest pinning
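The first prevention point can be automated in CI; a minimal sketch of a helper that turns the two telltale exit codes into explicit failures (the function name smoke and the image tag in the example are hypothetical):

```shell
# Run a command and translate the "container cannot run" exit codes
# into explicit CI failures.
smoke() {
  "$@"
  status=$?
  case "$status" in
    126) echo "SMOKE FAIL: entrypoint found but not executable (exit 126)"; return 1 ;;
    127) echo "SMOKE FAIL: entrypoint not found (exit 127)"; return 1 ;;
    *)   return "$status" ;;
  esac
}

# Typical use in a pipeline:
# smoke docker run --rm myapp:v1 --help
```

A passing smoke run before push catches most ContainerCannotRun causes without ever touching the cluster.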

Related Errors