Kubernetes CreateContainerError

Causes and Fixes

CreateContainerError indicates the container runtime failed to create the container. Unlike CreateContainerConfigError, which is a configuration issue, this error means the runtime itself encountered a problem during container creation, such as a failed volume mount, a device plugin issue, or a runtime bug.

Symptoms

  • Pod status shows CreateContainerError in kubectl get pods output
  • Container stays in Waiting state and never starts
  • kubectl describe pod shows container runtime errors in events
  • Events may mention volume mount failures or device allocation issues

Common Causes

1. Volume mount failure
   A volume cannot be mounted into the container: the host path does not exist, the NFS share is unreachable, or a CSI driver failed.
2. Device plugin not available
   The container requests a device (e.g., a GPU) via resource limits, but the device plugin has not allocated one. Check the device plugin pods.
3. SELinux or AppArmor conflicts
   Security policies on the node prevent the container runtime from creating the container with the specified security context.
4. Container runtime issue
   containerd or CRI-O has a bug or is in a bad state. Restarting the runtime may help.
5. Read-only filesystem conflict
   The container spec sets readOnlyRootFilesystem: true, but the application tries to write to paths not covered by writable volume mounts.
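As a quick triage step, every affected container and its runtime error message can be pulled in one pass. This is a sketch using jq over standard pod status fields:

```shell
# List every container stuck in CreateContainerError across all namespaces,
# together with the runtime's error message
kubectl get pods -A -o json | jq -r '
  .items[]
  | . as $pod
  | .status.containerStatuses[]?
  | select(.state.waiting.reason == "CreateContainerError")
  | "\($pod.metadata.namespace)/\($pod.metadata.name): \(.state.waiting.message)"'
```

The output pairs each pod with the message that the troubleshooting steps below dig into.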

Step-by-Step Troubleshooting

1. Get the Error Details

Start with the pod description to see the exact error message.

kubectl describe pod <pod-name> -n <namespace>

Look at the container state and events:

State:          Waiting
  Reason:       CreateContainerError
  Message:      container create failed: ...

The error message after "container create failed:" identifies the specific runtime failure.
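The same message can also be extracted directly with a jsonpath query instead of scanning the full describe output (the pod name and namespace below are placeholder values):

```shell
pod="my-pod"    # substitute your pod name
ns="default"    # substitute your namespace

# Print only the waiting reason and the runtime's error message
kubectl get pod "$pod" -n "$ns" \
  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}{.status.containerStatuses[0].state.waiting.message}{"\n"}'
```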

2. Check Volume Mounts

Volume mount failures are among the most common causes of CreateContainerError.

# List the pod's volumes
kubectl get pod <pod-name> -o jsonpath='{.spec.volumes}' | jq .

# List the container's volume mounts
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].volumeMounts}' | jq .

# Check PVC status if using persistent volumes
kubectl get pvc -n <namespace>
kubectl describe pvc <pvc-name> -n <namespace>

For NFS or CSI volumes, check that the backing storage is accessible:

# Debug from the node (mounting may require a privileged debug profile,
# e.g. --profile=sysadmin on recent kubectl versions)
kubectl debug node/<node-name> -it --profile=sysadmin --image=busybox -- sh

# Test the NFS mount from inside the debug pod
mkdir -p /mnt/test
mount -t nfs <nfs-server>:<path> /mnt/test

# Check CSI driver pods
kubectl get pods -n kube-system | grep csi
kubectl logs -n kube-system <csi-driver-pod>
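For hostPath volumes, it is also worth confirming that the path actually exists on the node the pod is scheduled on. In a node debug pod the host filesystem is mounted under /host; the node name and path below are hypothetical examples:

```shell
node="worker-1"       # substitute the node the pod is scheduled on
hostpath="/data/app"  # substitute the hostPath from the pod spec

# The node debug pod mounts the host root filesystem at /host
kubectl debug "node/$node" -it --image=busybox -- ls -ld "/host$hostpath"
```

A "No such file or directory" result here explains the mount failure directly.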

3. Check Device Plugin Issues

If the container requests devices like GPUs, verify the device plugin is running.

# Check device plugin pods
kubectl get pods -n kube-system | grep -i device-plugin

# Check node capacity for the device
kubectl describe node <node-name> | grep -A10 "Allocated resources"

# Check if the device resource is available
kubectl get node <node-name> -o jsonpath='{.status.allocatable}' | jq .

If the device plugin is not running or reporting zero devices, restart it:

kubectl delete pod -n kube-system <device-plugin-pod>
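After the plugin restarts, per-node device counts can be checked at a glance with a custom-columns query. The resource name nvidia.com/gpu below is an assumption for the NVIDIA plugin; substitute your plugin's resource name:

```shell
# Dots in the resource name must be backslash-escaped in the custom-columns path
kubectl get nodes -o custom-columns='NODE:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
```

Nodes showing "<none>" or 0 are still not advertising the device.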

4. Check Security Policy Conflicts

SELinux or AppArmor may block container creation.

# Check the pod's security context
kubectl get pod <pod-name> -o jsonpath='{.spec.securityContext}' | jq .
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[0].securityContext}' | jq .

# Check SELinux status on the node (chroot into /host, since the tools
# live on the host, not in the debug image)
kubectl debug node/<node-name> -it --image=ubuntu -- chroot /host getenforce

# Check for SELinux denials
kubectl debug node/<node-name> -it --image=ubuntu -- chroot /host bash -c "ausearch -m avc -ts recent 2>/dev/null || grep 'avc:' /var/log/audit/audit.log | tail -20"

5. Check Container Runtime Logs

If the pod events do not give enough detail, check the runtime logs on the node.

# Find the node
kubectl get pod <pod-name> -o jsonpath='{.spec.nodeName}'

# Check kubelet logs (chroot into /host so journalctl reads the host's journal)
kubectl debug node/<node-name> -it --image=ubuntu -- chroot /host bash -c "journalctl -u kubelet --since '10 minutes ago' | grep -iE 'error|create'"

# Check containerd logs
kubectl debug node/<node-name> -it --image=ubuntu -- chroot /host bash -c "journalctl -u containerd --since '10 minutes ago' | grep -iE 'error|create'"
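When the journal is noisy, crictl can query the runtime state directly. Run it against the host from a node debug pod via chroot /host (assuming crictl is installed on the node, which is typical for kubeadm-style clusters):

```shell
node="worker-1"   # substitute the affected node

# Containers that were created but never started
kubectl debug "node/$node" -it --image=ubuntu -- chroot /host crictl ps -a --state created

# Sandbox (pod) level view on the node
kubectl debug "node/$node" -it --image=ubuntu -- chroot /host crictl pods
```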

6. Handle Read-Only Root Filesystem Issues

If using readOnlyRootFilesystem: true, ensure all writable paths have volume mounts.

spec:
  containers:
    - name: app
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: var-run
          mountPath: /var/run
  volumes:
    - name: tmp
      emptyDir: {}
    - name: var-run
      emptyDir: {}
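Once the pod starts, a quick write test inside the container confirms the mounted paths are actually writable (the pod name below is a placeholder):

```shell
pod="my-app-pod"   # substitute your pod name

# Touch a file in each path the application writes to
kubectl exec "$pod" -- sh -c 'touch /tmp/.writetest /var/run/.writetest && echo writable'
```

If this fails with "Read-only file system", another write path still needs a volume mount.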

7. Restart the Container Runtime (Last Resort)

If the runtime is in a bad state, restarting it on the affected node may help.

# Disruptive: every container on the node will be restarted
kubectl debug node/<node-name> -it --image=ubuntu -- chroot /host systemctl restart containerd

Only do this after confirming the issue is runtime-level and not a configuration problem.

8. Verify the Fix

# Delete the stuck pod to force recreation
kubectl delete pod <pod-name> -n <namespace>

# Watch the new pod
kubectl get pods -n <namespace> -w

# Confirm the container started
kubectl describe pod <pod-name> -n <namespace> | grep -A3 "State:"

The pod should transition to Running without CreateContainerError.
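In a script or CI check, the transition can be gated with kubectl wait instead of watching manually (pod name and namespace are placeholders):

```shell
pod="my-pod"    # substitute your pod name
ns="default"    # substitute your namespace

# Exit non-zero if the pod does not become Ready within two minutes
kubectl wait --for=condition=Ready "pod/$pod" -n "$ns" --timeout=120s
```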

How to Explain This in an Interview

I would explain that CreateContainerError happens at the container runtime level: after the kubelet has prepared the configuration, during the actual container creation call to containerd or CRI-O. This is less common than CreateContainerConfigError and often requires checking runtime logs on the node. I would walk through checking kubelet logs, container runtime logs, and the specific error message in pod events to narrow down the cause.

Prevention

  • Test volume mounts and CSI drivers in staging before production
  • Ensure device plugins are healthy before deploying GPU workloads
  • Validate security contexts against the node's SELinux/AppArmor policies
  • Monitor container runtime health with node-problem-detector
  • Add writable emptyDir volumes for paths the application writes to when using readOnlyRootFilesystem

Related Errors