Kubernetes Exit Code 126

Causes and Fixes

Exit code 126 means 'command cannot execute' — the file specified as the entrypoint or command exists but cannot be executed. This typically happens when a binary lacks execute permissions, is built for an incompatible architecture or OS, or is a script whose shebang points to an interpreter that is missing or unusable in the container.
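The failure is easy to reproduce locally without Kubernetes; the script path below is only an example:

```shell
# Create a script without the execute bit (hypothetical path /tmp/demo.sh)
printf '#!/bin/sh\necho hello\n' > /tmp/demo.sh
chmod 644 /tmp/demo.sh      # rw-r--r-- : readable, but not executable

# Invoking it directly fails with "Permission denied"
/tmp/demo.sh || echo "exit status: $?"   # prints "exit status: 126"
```

POSIX shells reserve 126 for exactly this case: the command was found but could not be executed (127 is reserved for 'command not found').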

Symptoms

  • Pod shows CrashLoopBackOff or Error with exit code 126
  • kubectl describe pod shows exit code 126 in terminated state
  • Container logs may be empty or show 'permission denied'
  • The container exits immediately after creation

Common Causes

1. File not executable
The entrypoint file exists but does not have the executable bit set. This often happens when files are copied without preserving permissions in the Dockerfile.

2. Wrong binary format
The file is an ELF binary built for a different architecture (e.g., amd64 binary on arm64 node) or a different OS (e.g., macOS binary in a Linux container).

3. Script with wrong interpreter
The script has a shebang line pointing to an interpreter that does not exist in the container (e.g., #!/bin/bash in an Alpine image that only has /bin/sh).

4. Binary is a directory
The path specified as the command points to a directory instead of a file.

5. Security context prevents execution
The container's security context (runAsUser, readOnlyRootFilesystem, or seccomp profile) prevents executing the file.

Step-by-Step Troubleshooting

1. Confirm the Exit Code

kubectl describe pod <pod-name>

Look for:

Last State:     Terminated
  Reason:       ContainerCannotRun
  Exit Code:    126
  Message:      OCI runtime create failed: ... permission denied

2. Identify the Command Being Executed

# Check the pod spec command and args
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].command}'
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].args}'

# Check the image's default entrypoint
docker inspect <image> --format='ENTRYPOINT={{.Config.Entrypoint}} CMD={{.Config.Cmd}}'

3. Check File Permissions

Run the image with a shell to inspect the file.

# Override entrypoint to get a shell
kubectl run debug --image=<image> --restart=Never --command -- sleep 3600
kubectl exec -it debug -- sh

# Check permissions
ls -la /app/start.sh
# Output: -rw-r--r-- 1 root root 1234 Jan 1 00:00 /app/start.sh
# Missing 'x' bit = not executable

# Check the file type
# Check the file type
file /app/start.sh
# Output: /app/start.sh: ASCII text
# A text script needs the execute bit, or must be run through an interpreter

4. Fix Missing Execute Permission

Option A: Fix in the Dockerfile

COPY start.sh /app/start.sh
RUN chmod +x /app/start.sh

# Or with BuildKit (preferred)
COPY --chmod=755 start.sh /app/start.sh

Option B: Use a shell to execute the script

Instead of running the script directly, invoke it through a shell:

spec:
  containers:
    - name: app
      image: myapp:v1
      command: ["/bin/sh"]
      args: ["-c", "/app/start.sh"]

This works because /bin/sh is already executable and reads the script as input.
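You can verify this behavior locally; the file name is illustrative:

```shell
# A readable but non-executable script (hypothetical path)
printf '#!/bin/sh\necho started\n' > /tmp/start.sh
chmod 644 /tmp/start.sh     # no execute bit

# Direct execution would fail with 126, but passing the path to sh works,
# because sh (which is executable) reads the file as its input
sh /tmp/start.sh            # prints "started"
```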

5. Check for Architecture Mismatch

# Check the node architecture
kubectl get node <node-name> -o jsonpath='{.status.nodeInfo.architecture}'

# Check the binary architecture
kubectl exec -it debug -- file /app/binary
# Output: /app/binary: ELF 64-bit LSB executable, x86-64
# If the node is arm64, this binary cannot run
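If the file utility is not installed in the container, the architecture can be read straight from the ELF header. This is a sketch assuming a Linux/ELF binary; /bin/ls is used only as a sample path:

```shell
# The e_machine field lives at byte offset 18 of an ELF header (2 bytes,
# little-endian): 3e00 = x86-64, b700 = aarch64
arch_hex=$(od -An -tx1 -j18 -N2 /bin/ls | tr -d ' ')
case "$arch_hex" in
  3e00) echo "x86-64"  ;;
  b700) echo "aarch64" ;;
  *)    echo "other: $arch_hex" ;;
esac
```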

Fix by building multi-arch images:

docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 -t myapp:v1 --push .

6. Check the Shebang Line

For shell scripts, verify the interpreter exists.

kubectl exec -it debug -- head -1 /app/start.sh
# Output: #!/bin/bash

kubectl exec -it debug -- which bash
# If bash is not found (common in Alpine), the script cannot execute

Fix by using /bin/sh in the shebang:

#!/bin/sh
# Use POSIX-compatible shell syntax

Or install bash in the Dockerfile:

RUN apk add --no-cache bash
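The missing-interpreter failure is also reproducible outside the cluster; the interpreter path and script name are hypothetical:

```shell
# Script whose shebang points at an interpreter that does not exist
printf '#!/usr/bin/nosuchshell\necho hi\n' > /tmp/noshell.sh
chmod 755 /tmp/noshell.sh

# The execute bit is set, but the kernel cannot find the interpreter:
# bash reports "bad interpreter" and exits 126 (some shells report 127)
/tmp/noshell.sh || echo "exit status: $?"
```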

7. Check Security Context

The container's security context may prevent execution.

# Check security context
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].securityContext}' | jq .
kubectl get pod <pod-name> -o jsonpath='{.spec.securityContext}' | jq .

Issues to look for:

  • readOnlyRootFilesystem: true with the binary on the root filesystem
  • runAsUser pointing to a user that does not have execute permission on the file
  • A restrictive seccomp profile blocking exec syscalls
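When the security context is the culprit, a common pattern is to keep the root filesystem read-only while providing writable scratch space, and to make sure the runtime user can read and execute the entrypoint. A sketch; all names and values are illustrative:

```yaml
spec:
  securityContext:
    runAsUser: 1000          # this UID must have read+execute permission on the entrypoint
    fsGroup: 1000
  containers:
    - name: app
      image: myapp:v1
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: tmp
          mountPath: /tmp    # writable scratch space on an otherwise read-only filesystem
  volumes:
    - name: tmp
      emptyDir: {}
```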

8. Check for Windows Line Endings

If the script was created on Windows, it may have CRLF line endings that break the shebang.

kubectl exec -it debug -- cat -A /app/start.sh | head -1
# If you see ^M at the end: #!/bin/sh^M$
# The ^M (carriage return) makes the shebang invalid

Fix in the Dockerfile:

RUN sed -i 's/\r$//' /app/start.sh

Or use a .gitattributes file to enforce LF line endings:

*.sh text eol=lf
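The CRLF problem and its fix can be demonstrated locally (the file path is illustrative):

```shell
# Script saved with Windows line endings: the shebang becomes "#!/bin/sh\r",
# and "/bin/sh\r" is not a real interpreter path
printf '#!/bin/sh\r\necho hi\r\n' > /tmp/crlf.sh
chmod 755 /tmp/crlf.sh
/tmp/crlf.sh || echo "fails: the trailing CR makes the interpreter path invalid"

# Strip the carriage returns, as in the Dockerfile fix above (GNU sed)
sed -i 's/\r$//' /tmp/crlf.sh
/tmp/crlf.sh                # now prints "hi"
```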

9. Apply the Fix and Verify

# Rebuild and push the fixed image
docker build -t myapp:v2 .
docker push myapp:v2

# Update the deployment
kubectl set image deployment/<deploy-name> <container>=myapp:v2

# Watch the pod
kubectl get pods -w

# Verify it is running
kubectl logs <pod-name>

The container should start successfully without exit code 126.

How to Explain This in an Interview

I would explain that exit code 126 is a shell convention meaning 'command found but not executable,' in contrast to exit code 127 which means 'command not found.' I would check permissions with ls -la, verify the file format with the file command, and check the shebang line for scripts. The most common fix is adding chmod +x in the Dockerfile. In multi-arch clusters, I would also check for architecture mismatches by comparing the binary format with the node architecture.

Prevention

  • Add chmod +x for all executable files in Dockerfiles
  • Use COPY --chmod=755 in Dockerfiles (BuildKit required)
  • Build multi-arch images for mixed-architecture clusters
  • Use /bin/sh instead of /bin/bash for Alpine-based images
  • Test images locally before deploying

Related Errors