Kubernetes CreateContainerConfigError

Causes and Fixes

CreateContainerConfigError occurs when the kubelet cannot configure a container before starting it. This typically means a referenced ConfigMap, Secret, or ServiceAccount does not exist, or a volume mount is misconfigured. The container never starts because its configuration is invalid.
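As a minimal illustration (pod and ConfigMap names are hypothetical), a pod that references a ConfigMap which has not been created will sit in this state:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx:1.25
      envFrom:
        - configMapRef:
            # If app-config does not exist in this namespace, the kubelet
            # reports CreateContainerConfigError on every sync attempt
            name: app-config
```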

Symptoms

  • Pod status shows CreateContainerConfigError in kubectl get pods output
  • Container stays in Waiting state with reason CreateContainerConfigError
  • kubectl describe pod shows warnings about missing ConfigMaps or Secrets
  • Pod does not crash-loop — it simply cannot start
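In kubectl get pods output this looks like the following (pod name and age are illustrative):

```
NAME        READY   STATUS                       RESTARTS   AGE
web-7d4f9   0/1     CreateContainerConfigError   0          2m
```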

Common Causes

1. Referenced ConfigMap does not exist
   The pod spec references a ConfigMap (via envFrom, env valueFrom, or a volume) that has not been created in the namespace.
2. Referenced Secret does not exist
   The pod spec references a Secret that is missing. This includes Secrets used for environment variables, volumes, or TLS.
3. Key not found in ConfigMap or Secret
   The pod references a specific key from a ConfigMap or Secret, but that key does not exist in the resource.
4. ServiceAccount does not exist
   The pod spec references a ServiceAccount that has not been created in the namespace.
5. Invalid SecurityContext configuration
   The security context specifies values that the container runtime cannot apply, such as an invalid user ID or capability.
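A common instance of cause 5 is combining runAsNonRoot with an image whose default user is root (a sketch, not tied to any specific image):

```yaml
securityContext:
  # The kubelet rejects this at config time if the image runs as UID 0,
  # with an event like:
  #   "container has runAsNonRoot and image will run as root"
  runAsNonRoot: true
```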

Step-by-Step Troubleshooting

1. Identify the Missing Resource

The pod events will tell you exactly which resource is missing.

kubectl describe pod <pod-name> -n <namespace>

Look for messages in the Events section like:

Warning  Failed  configmap "app-config" not found
Warning  Failed  secret "db-credentials" not found
Warning  Failed  couldn't find key "DATABASE_URL" in ConfigMap "app-config"

2. Check Which ConfigMaps and Secrets the Pod References

Extract all external resource references from the pod spec.

# Check environment variable references
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[*].env[*].valueFrom}' | jq .

# Check envFrom references
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[*].envFrom}' | jq .

# Check volume references
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.volumes}' | jq .

3. Verify ConfigMaps Exist

# List ConfigMaps in the namespace
kubectl get configmaps -n <namespace>

# Check if the specific ConfigMap exists and has the expected keys
kubectl describe configmap <configmap-name> -n <namespace>

# View the ConfigMap data
kubectl get configmap <configmap-name> -n <namespace> -o yaml

If the ConfigMap is missing, create it:

# From literal values
kubectl create configmap app-config \
  --from-literal=DATABASE_HOST=postgres \
  --from-literal=DATABASE_PORT=5432 \
  -n <namespace>

# From a file
kubectl create configmap app-config \
  --from-file=config.yaml=./config.yaml \
  -n <namespace>

4. Verify Secrets Exist

# List Secrets in the namespace
kubectl get secrets -n <namespace>

# Check if the specific Secret exists
kubectl describe secret <secret-name> -n <namespace>

# Check the Secret has the expected keys (without revealing values)
kubectl get secret <secret-name> -n <namespace> -o jsonpath='{.data}' | jq 'keys'

If the Secret is missing, create it:

# From literal values
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password=secret123 \
  -n <namespace>

# From a file
kubectl create secret generic tls-cert \
  --from-file=tls.crt=./cert.pem \
  --from-file=tls.key=./key.pem \
  -n <namespace>

5. Check for Missing Keys

If the ConfigMap or Secret exists but a specific key is missing, the event message will contain "couldn't find key", as in the examples in step 1.

# Check available keys in a ConfigMap
kubectl get configmap <name> -n <namespace> -o jsonpath='{.data}' | jq 'keys'

# Check available keys in a Secret
kubectl get secret <name> -n <namespace> -o jsonpath='{.data}' | jq 'keys'

Add the missing key:

# Patch a ConfigMap to add a key
kubectl patch configmap <name> -n <namespace> \
  --type merge -p '{"data":{"MISSING_KEY":"value"}}'

# Patch a Secret to add a key (value must be base64)
kubectl patch secret <name> -n <namespace> \
  --type merge -p '{"data":{"MISSING_KEY":"dmFsdWU="}}'
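The base64 value for a Secret patch can be produced on the command line. The -n flag matters: without it, echo appends a newline that gets encoded into the stored value.

```shell
# Encode a value for the Secret's data field (-n omits the trailing newline)
echo -n 'value' | base64          # → dmFsdWU=

# Verify by decoding it back
echo -n 'dmFsdWU=' | base64 --decode   # → value
```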

6. Verify ServiceAccount

If the pod references a non-default ServiceAccount, ensure it exists.

# Check which ServiceAccount the pod uses (empty output means "default")
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.serviceAccountName}'

# List ServiceAccounts in the namespace
kubectl get serviceaccounts -n <namespace>

# Create the ServiceAccount if missing
kubectl create serviceaccount <sa-name> -n <namespace>

7. Use Optional References for Resilience

If a ConfigMap or Secret is not critical for the container to start, mark it as optional:

env:
  - name: OPTIONAL_SETTING
    valueFrom:
      configMapKeyRef:
        name: optional-config
        key: setting
        optional: true
envFrom:
  - configMapRef:
      name: optional-config
      optional: true
volumes:
  - name: optional-vol
    configMap:
      name: optional-config
      optional: true

With optional: true, the pod starts even if the ConfigMap or Secret does not exist: an optional environment variable is simply left unset, and an optional volume is mounted empty.

8. Prevent with Deployment Ordering

Use Helm hooks or ArgoCD sync waves to ensure ConfigMaps and Secrets are created before the pods that reference them.

# Helm hook example
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-5"
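The equivalent ordering in ArgoCD uses sync waves; resources in a lower (more negative) wave are applied before those in wave 0, so the ConfigMap lands before the workloads that reference it (the wave value here is just an example):

```yaml
# ArgoCD sync-wave example
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  annotations:
    argocd.argoproj.io/sync-wave: "-5"
```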

9. Verify the Fix

After creating the missing resource, the kubelet's periodic sync will retry the container configuration, so the pod should recover without intervention.

# Watch the pod
kubectl get pod <pod-name> -n <namespace> -w

# If the pod does not recover, delete it to force a fresh attempt
# (only do this for pods managed by a Deployment or other controller
# that will recreate them)
kubectl delete pod <pod-name> -n <namespace>

# Confirm the container is running
kubectl describe pod <pod-name> -n <namespace> | grep -A3 "State:"

The pod should transition from CreateContainerConfigError to ContainerCreating to Running.

How to Explain This in an Interview

I would explain that CreateContainerConfigError is a pre-start failure — the kubelet assembles the container configuration before creating the container process, and this error means that assembly failed. Unlike CrashLoopBackOff where the container starts and then crashes, this error means the container never starts at all. The fix is straightforward: identify the missing resource from the pod events and create it. I would also mention that using optional references for non-critical ConfigMaps and Secrets can make pods more resilient.

Prevention

  • Deploy ConfigMaps and Secrets before deploying pods that reference them
  • Use Helm or Kustomize to bundle dependent resources together
  • Mark non-critical ConfigMap and Secret references as optional
  • Use admission webhooks to validate that referenced resources exist
  • Include ConfigMap and Secret creation in CI/CD pipelines

Related Errors