Kubernetes Exit Code 127
Causes and Fixes
Exit code 127 means 'command not found' — the shell could not locate the binary or script specified as the container's entrypoint or command. This typically indicates a typo in the command, a missing binary in the container image, or an incorrect PATH configuration.
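You can reproduce the code locally without Kubernetes: POSIX shells return 127 whenever the named command cannot be found on PATH.

```shell
# 'no-such-binary' is a deliberately nonexistent command.
sh -c 'no-such-binary'
echo $?   # prints 127
```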
Symptoms
- Pod shows CrashLoopBackOff or Error with exit code 127
- kubectl describe pod shows exit code 127 in terminated state
- Container logs show 'not found' or 'command not found' messages
- Container exits immediately after starting
Common Causes
- A typo in the command or args in the pod spec
- The binary was never copied into the image (e.g. a missed COPY in a multi-stage Dockerfile)
- The binary is referenced by bare name but is not on the container's PATH
- A dynamic-linking mismatch (glibc binary on a musl/Alpine base) that surfaces as "not found"
Step-by-Step Troubleshooting
1. Confirm the Exit Code
kubectl describe pod <pod-name>
Look for:
Last State: Terminated
Reason: Error
Exit Code: 127
2. Check the Container Logs
kubectl logs <pod-name> --previous
Common error messages:
/bin/sh: /app/server: not found
exec: "myapp": executable file not found in $PATH
sh: myapp: command not found
3. Identify What Command Is Being Run
# Check the pod spec
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].command}'
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].args}'
# Check the image default
docker inspect <image> --format='ENTRYPOINT={{.Config.Entrypoint}} CMD={{.Config.Cmd}}'
4. Verify the Binary Exists in the Image
# Run the image with a shell
kubectl run debug --image=<image> --restart=Never --command -- sleep 3600
kubectl exec -it debug -- sh
# Check if the binary exists
which myapp
ls -la /app/server
find / -name "myapp" 2>/dev/null
# Check the PATH
echo $PATH
# Clean up
kubectl delete pod debug
If the image is distroless and has no shell, use an ephemeral container:
kubectl debug <pod-name> -it --image=busybox --target=<container-name> -- sh
ls -la /proc/1/root/app/
find /proc/1/root/ -name "server" 2>/dev/null
5. Check for Multi-Stage Build Issues
A common cause is forgetting to copy the binary in a multi-stage Dockerfile.
# Stage 1: Build
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN go build -o /app/server ./cmd/server
# Stage 2: Runtime
FROM alpine:3.19
# BUG: Forgot to COPY the binary!
# COPY --from=builder /app/server /app/server # This line is missing
CMD ["/app/server"]
Fix:
FROM alpine:3.19
COPY --from=builder /app/server /app/server
RUN chmod +x /app/server
CMD ["/app/server"]
6. Check for Dynamic Linking Issues
Even if the binary exists, it may report "not found" if its dynamic linker is missing. This is extremely common when copying binaries from glibc-based images to Alpine (musl-based).
kubectl exec -it debug -- ldd /app/server
Output like:
Error loading shared library libc.so.6: No such file or directory
indicates a glibc vs musl mismatch.
Fixes:
- Build with CGO_ENABLED=0 for static linking (Go)
- Use the same base image family for build and runtime
- Install glibc compatibility: apk add libc6-compat (Alpine)
# Go: Static build
RUN CGO_ENABLED=0 GOOS=linux go build -o /app/server ./cmd/server
7. Use Absolute Paths
If the binary is not in PATH, use its absolute path.
spec:
  containers:
  - name: app
    image: myapp:v1
    command: ["/usr/local/bin/myapp"]  # Absolute path
    # Instead of: command: ["myapp"]   # Relies on PATH
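A minimal local sketch of why this matters (the `/tmp/demo-bin/myapp` script is hypothetical): the same executable fails with 127 when invoked by bare name outside PATH, but runs via its absolute path.

```shell
# Create a tiny executable script in a directory that is NOT on PATH
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho hello\n' > /tmp/demo-bin/myapp
chmod +x /tmp/demo-bin/myapp

sh -c 'myapp'; echo "bare name exit: $?"              # 127: not on PATH
sh -c '/tmp/demo-bin/myapp'; echo "absolute exit: $?" # 0: found directly
```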
8. Check for Typos
Compare the command in the pod spec with what is actually in the image.
# What the pod spec says
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].command}'
# Output: ["/app/srver"]
# What the image has
kubectl exec -it debug -- ls /app/
# Output: server config.yaml
# The typo: "srver" vs "server"
9. Fix the Issue
# Fix the command in the deployment
kubectl set image deployment/<deploy-name> <container>=<fixed-image>
# Or fix the command override
kubectl patch deployment <deploy-name> --type=json \
-p='[{"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["/app/server"]}]'
# Watch the pod start
kubectl get pods -w
# Verify
kubectl logs <pod-name>
10. Verify the Fix
# Confirm the container is running
kubectl get pod <pod-name>
# Check no more exit code 127
kubectl describe pod <pod-name> | grep "Exit Code"
# Verify the application is working
kubectl logs <pod-name> --tail=20
The container should start and remain running without exit code 127.
How to Explain This in an Interview
I would explain that exit code 127 means the shell could not find the command, which is different from 126 where the command is found but not executable. My debugging approach would be to check the exact command being run, verify the binary exists in the image by running it with a shell override, check the PATH environment variable, and review the Dockerfile for multi-stage build copy issues. This is a common error when switching to minimal base images like distroless or scratch.
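The 126 vs 127 distinction is easy to demonstrate locally (the `/tmp/notexec` file is a throwaway example):

```shell
# 127 = command not found
sh -c 'no-such-cmd'; echo $?

# 126 = command found but not executable (no exec bit)
printf 'echo hi\n' > /tmp/notexec
sh -c '/tmp/notexec'; echo $?
```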
Prevention
- Use absolute paths for entrypoints in Dockerfiles and pod specs
- Verify the final image contains all required binaries before pushing
- Add a smoke test stage in CI that runs the container briefly
- Use docker run --entrypoint sh <image> -c 'which <binary>' in CI
- Document required binaries for each container image
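The CI smoke-test idea above can be sketched as a pipeline step. This is a hypothetical GitHub Actions fragment; the image tag `myapp:ci` and the binary path `/app/server` are placeholders, and the `--entrypoint sh` check assumes the image ships a shell.

```yaml
# Hypothetical CI step: build the image, then assert the entrypoint
# binary exists and is executable before the image is pushed.
- name: Smoke-test image
  run: |
    docker build -t myapp:ci .
    # Fails the job if /app/server is missing or lacks the exec bit
    docker run --rm --entrypoint sh myapp:ci -c 'test -x /app/server'
```

Catching a missing or non-executable binary here surfaces the problem as a failed build rather than a CrashLoopBackOff with exit code 127 in the cluster.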