Kubernetes FailedMount

Causes and Fixes

FailedMount is a warning event indicating that a volume could not be mounted into a pod's container, after any required attach step has completed. This occurs at the kubelet level during the volume setup phase and prevents the container from starting. Common causes include missing Secrets or ConfigMaps, filesystem errors, wrong mount options, and permission issues.

Symptoms

  • Pod events show 'FailedMount' or 'MountVolume.SetUp failed'
  • Pod stuck in ContainerCreating state for extended periods
  • Events show 'Unable to attach or mount volumes' with a timeout
  • Container logs are empty because the container never started
  • Volume-related warnings in kubelet logs on the node
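
To surface these events quickly, filter on the event reason; reason=FailedMount is a supported field selector for events:

kubectl get events --all-namespaces --field-selector reason=FailedMount --sort-by=.lastTimestamp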

Common Causes

1. Secret or ConfigMap does not exist
The pod references a Secret or ConfigMap as a volume, but the referenced resource does not exist in the namespace. This is one of the most common FailedMount causes.

2. Filesystem corruption on the volume
The underlying volume's filesystem is corrupted and cannot be mounted. This can happen after ungraceful node shutdowns or forced volume detachments.

3. Wrong fsType or mount options
The PersistentVolume specifies a filesystem type or mount options that are incompatible with the volume or not supported by the node's kernel (a PV sketch showing these fields follows this list).

4. Permission denied on NFS mount
For NFS volumes, the NFS server is rejecting the mount due to export permissions, network restrictions, or authentication requirements.

5. Volume attach timeout
The volume was not attached to the node within the expected time (default 2 minutes), causing the mount operation to fail before it can begin.

6. Subpath does not exist
A volumeMount specifies a subPath that does not exist on the volume, and the kubelet cannot create it due to permissions or volume type restrictions.
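
For reference on cause 3, the fsType and mount options live on the PersistentVolume spec. A minimal CSI-backed PV sketch, with the driver, volume handle, and names as placeholders that will differ in your cluster:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                  # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: ebs.csi.aws.com         # example driver; yours may differ
    volumeHandle: <volume-id>       # placeholder
    fsType: ext4                    # must be supported by the node's kernel
  mountOptions:
    - noatime                       # unsupported options fail with "exit status 32"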

Step-by-Step Troubleshooting

FailedMount errors mean the kubelet on the node cannot mount a volume into the pod's filesystem. This can involve Secrets, ConfigMaps, PersistentVolumeClaims, or other volume types. This guide diagnoses the specific mount failure and walks through resolution.

1. Check Pod Events for the Specific Error

The pod events contain the exact mount failure reason.

kubectl describe pod <pod-name>

Look at the Events section. Common messages include:

  • MountVolume.SetUp failed for volume "my-secret": secret "my-secret" not found
  • MountVolume.SetUp failed for volume "my-pvc": mount failed: exit status 32
  • Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[data]: timed out waiting for the condition
  • MountVolume.SetUp failed for volume "nfs-data": mount failed: mount.nfs: access denied
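
Exit status 32 is mount(8)'s generic mount-failure code; the kubelet log (step 8) usually records the underlying error. To pull only the warning events for a specific pod:

kubectl get events --field-selector involvedObject.name=<pod-name>,type=Warning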

2. Identify the Failing Volume

Determine which volume is causing the failure.

# List all volumes in the pod spec
kubectl get pod <pod-name> -o jsonpath='{range .spec.volumes[*]}{.name}{"\n"}{end}'

# Get detailed volume configuration
kubectl get pod <pod-name> -o yaml | grep -A10 "volumes:"

3. Fix Missing Secrets or ConfigMaps

If the error references a missing Secret or ConfigMap, verify it exists.

# Check if the Secret exists
kubectl get secret <secret-name> -n <namespace>

# Check if the ConfigMap exists
kubectl get configmap <configmap-name> -n <namespace>

If the resource does not exist, create it.

# Create a missing Secret (it must live in the pod's namespace; volumes cannot reference other namespaces)
kubectl create secret generic <secret-name> -n <namespace> --from-literal=key=value

# Create a missing ConfigMap
kubectl create configmap <configmap-name> -n <namespace> --from-literal=key=value
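
To review the manifest before creating the resource, a client-side dry run prints it without applying anything:

kubectl create secret generic <secret-name> -n <namespace> --from-literal=key=value --dry-run=client -o yaml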

Alternatively, make the volume optional so the pod can start without it.

volumes:
- name: optional-secret
  secret:
    secretName: my-secret
    optional: true

4. Check PVC-Backed Volume Issues

For PVC volumes, verify the PVC is bound and the PV is available.

# Check PVC status
kubectl get pvc <pvc-name>

# If bound, check the PV
kubectl get pv <pv-name>

# Check the volume's filesystem type (shown here for CSI-provisioned volumes)
kubectl get pv <pv-name> -o jsonpath='{.spec.csi.fsType}'

If the PVC is pending, see the pvc-pending troubleshooting guide. If it is bound but mounting fails, the issue is likely at the filesystem level.
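
For attach timeouts (cause 5), you can also inspect the VolumeAttachment objects that record CSI attach state. The attachment names are generated, so search by PV:

kubectl get volumeattachments | grep <pv-name>
kubectl describe volumeattachment <attachment-name>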

5. Debug Filesystem Issues

If the volume is attached but mounting fails with filesystem errors, check the node.

# Debug the node (the host filesystem is mounted at /host; chroot /host if tools or device nodes are missing)
kubectl debug node/<node-name> -it --image=ubuntu -- bash

# Find the volume device
lsblk

# Check filesystem on the device
fsck -n /dev/<device>

# Check if the device has a filesystem
blkid /dev/<device>

# Try mounting manually to see the error
mkdir -p /mnt/test
mount /dev/<device> /mnt/test

If the filesystem is corrupted, run fsck in repair mode (without -n) while the volume is unmounted everywhere. If the volume has no filesystem at all (a brand-new volume), it needs to be formatted, though CSI drivers normally format a blank volume automatically when fsType is set, so manual formatting is rarely required.

# Format a new volume (CAUTION: destroys data)
mkfs.ext4 /dev/<device>
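
Mount failures can also originate in the CSI driver's node plugin rather than the volume itself, so it is worth confirming the driver pods on the affected node are healthy. The namespace and pod names vary by driver; kube-system is shown only as a common example:

# Check CSI node plugin pods on the affected node
kubectl get pods -n kube-system -o wide | grep -i csi

# Inspect the node plugin's logs (pod name depends on the driver)
kubectl logs -n kube-system <csi-node-pod> --all-containers --tail=100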

6. Debug NFS Mount Failures

NFS volumes fail to mount for several reasons.

# From the node or a debug pod, test NFS connectivity
kubectl debug node/<node-name> -it --image=ubuntu -- bash

# Check if NFS server is reachable
ping <nfs-server>

# Check NFS port (2049)
nc -zv <nfs-server> 2049

# Try mounting manually
mkdir -p /mnt/test
mount -t nfs <nfs-server>:<export-path> /mnt/test

# Check NFS client utilities are installed
which mount.nfs
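
# If the mount is denied, list the exports the server offers (showmount ships
# with the NFS client utilities; the export must allow this node's IP)
showmount -e <nfs-server>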

Common NFS issues:

  • NFS client packages not installed on the node
  • NFS server firewall blocking connections
  • Export permissions not allowing the node's IP
  • NFS version mismatch
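
If a version mismatch is suspected, force a specific protocol version on a manual test mount; 4.1 below is only an example version:

mount -t nfs -o nfsvers=4.1 <nfs-server>:<export-path> /mnt/test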

7. Check SubPath Issues

If the volume mount uses subPath, verify the path exists.

# Check subPath configuration
kubectl get pod <pod-name> -o yaml | grep -B2 -A2 subPath

SubPath issues include:

  • The specified subdirectory does not exist on the volume (a workaround sketch follows below)
  • Permission issues preventing subdirectory creation
  • Using subPath with Secrets/ConfigMaps that are updated (subPath mounts are not updated when the source changes)

# Verify the path exists on the volume by mounting without subPath first
# Then check the contents
kubectl exec <pod-name> -- ls -la /full/mount/path/
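
If the missing subdirectory is the problem, one common workaround is an init container that creates it before the main container mounts the subPath. A minimal sketch; the volume name, image, and paths are placeholders:

spec:
  initContainers:
  - name: init-subpath
    image: busybox
    command: ["mkdir", "-p", "/data/<sub-path>"]
    volumeMounts:
    - name: data              # the same volume the main container mounts with subPath
      mountPath: /data        # mounted without subPath so the directory can be created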

8. Check Kubelet Logs

The kubelet on the node has detailed mount logs.

# On the node or via debug pod
journalctl -u kubelet --no-pager --since "30 minutes ago" | grep -i "mount\|volume"

# Look for specific mount errors
journalctl -u kubelet --no-pager --since "30 minutes ago" | grep -i "FailedMount\|MountVolume"

The kubelet logs often contain more detail than the pod events, including the exact mount command that failed and the error output.

9. Restart the Pod

After resolving the underlying issue, delete the stuck pod so a new one is created.

kubectl delete pod <pod-name>

# If managed by a Deployment, watch for the replacement
kubectl get pods -w | grep <deployment-name>

# If the pod is a standalone pod, recreate it
kubectl apply -f pod.yaml
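
For Deployment-managed pods, a rollout restart replaces every pod without deleting them by hand:

kubectl rollout restart deployment/<deployment-name>
kubectl rollout status deployment/<deployment-name>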

10. Verify the Volume Is Mounted

Confirm the new pod starts successfully with the volume mounted.

# Check pod is running
kubectl get pod <pod-name>

# Verify the volume mount
kubectl exec <pod-name> -- df -h
kubectl exec <pod-name> -- mount | grep <mount-path>

# Verify data is accessible
kubectl exec <pod-name> -- ls -la <mount-path>

# Check for any mount-related warnings in events
kubectl describe pod <pod-name> | grep -i mount

A successfully mounted volume will appear in the df and mount output, and files will be accessible at the mount path. If the pod continues to fail, check if there are multiple failing volumes and address each one individually.

How to Explain This in an Interview

I would explain that FailedMount is separate from FailedAttachVolume — attachment happens at the cloud/storage level (connecting the block device to the node), while mounting happens at the OS level (mounting the filesystem into the container's mount namespace). I'd discuss the kubelet's volume manager workflow: wait for attachment, format the volume if needed (MountDevice), set up the mount point (SetUp), and bind-mount into the container. I'd cover the different volume types (Secrets, ConfigMaps, PVCs, NFS) and how each can fail at the mount stage. For debugging, I'd check pod events, kubelet logs on the node, and verify the volume content and filesystem health.

Prevention

  • Verify Secrets and ConfigMaps exist before deploying pods that reference them
  • Use optional flags on volume sources when the volume may not always exist
  • Test mount configurations in non-production environments
  • Monitor kubelet volume manager metrics for mount failures (a PromQL sketch follows this list)
  • Implement proper shutdown procedures to prevent filesystem corruption
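
For the metrics bullet, the kubelet exposes storage operation metrics that can drive an alert. Metric and label names have shifted across Kubernetes versions, so treat this PromQL as a starting point and assume kubelet metrics are being scraped:

# Alert when volume operations on any node are failing
rate(storage_operation_duration_seconds_count{status!="success"}[5m]) > 0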

Related Errors