How Do NFS Volumes Work in Kubernetes?

intermediate · storage · devops · sre · backend developer · CKA
TL;DR

NFS (Network File System) volumes provide shared, network-attached storage that supports ReadWriteMany access mode. Pods on any node can simultaneously read and write to the same NFS volume, making it suitable for shared data use cases.

Detailed Answer

NFS (Network File System) is a distributed file system protocol that allows multiple nodes to access the same storage simultaneously. In Kubernetes, NFS volumes are commonly used when multiple Pods need to read and write the same data, a capability that typical block storage (which only supports ReadWriteOnce) does not provide.

Static NFS Volume

The simplest approach is manually creating a PV pointing to an NFS export:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.1.100
    path: /exports/shared-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
  namespace: production
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""  # empty string disables dynamic provisioning so the claim binds to the static PV
  resources:
    requests:
      storage: 100Gi

Multiple Pods can mount this PVC simultaneously:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          volumeMounts:
            - name: shared
              mountPath: /usr/share/nginx/html
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
      volumes:
        - name: shared
          persistentVolumeClaim:
            claimName: shared-data

All 5 replicas can read and write to the same NFS directory simultaneously. Note that NFS does not coordinate concurrent writes to the same file; applications must handle that themselves (e.g. with file locking).

Dynamic Provisioning with NFS CSI Driver

The NFS CSI driver automates PV creation by creating subdirectories on an NFS share:

# Install NFS CSI driver
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
  --namespace kube-system

Then create a StorageClass that points at the NFS server:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.1.100
  share: /exports/dynamic
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1
  - hard
  - noatime

Now PVCs using this StorageClass are automatically provisioned:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-csi
  resources:
    requests:
      storage: 10Gi

NFS Mount Options

| Option | Purpose |
|--------|---------|
| nfsvers=4.1 | Use NFSv4.1 for better performance and security |
| hard | Retry NFS requests indefinitely (vs. soft, which gives up after a timeout) |
| noatime | Skip updating access time on reads (improves performance) |
| rsize=1048576 | Read buffer size in bytes (1 MiB) |
| wsize=1048576 | Write buffer size in bytes (1 MiB) |
| timeo=600 | Timeout before retransmission, in tenths of a second (60 s) |
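Mount options can also be attached to a statically created PV through spec.mountOptions; they are applied when kubelet mounts the volume. A minimal sketch, reusing the same illustrative server address and path as the nfs-pv example above:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-tuned
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:      # passed to the NFS mount on the node
    - nfsvers=4.1
    - hard
    - noatime
  nfs:
    server: 10.0.1.100
    path: /exports/shared-data
```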

Common Use Cases for NFS

| Use Case | Why NFS |
|----------|---------|
| Shared content/assets | Multiple web servers serve the same files |
| ML training data | Multiple training Pods read the same dataset |
| Legacy application shared state | Applications that use file-based coordination |
| Build artifacts | CI/CD Pods share build outputs |
| Log aggregation | Write logs to a central location |

NFS Performance Considerations

NFS adds a network round-trip to every I/O operation. Benchmark before using NFS for performance-sensitive workloads:

# Quick write test inside a Pod
kubectl exec test-pod -- dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1000 oflag=direct
# Compare with local disk:
kubectl exec test-pod -- dd if=/dev/zero of=/tmp/testfile bs=1M count=1000 oflag=direct
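The test-pod referenced in these commands is not defined in this article; a throwaway Pod like the following would work (the name and image are illustrative, and the claim is the shared-data PVC defined earlier — a GNU coreutils image is used so dd supports oflag=direct):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: bench
      image: debian:12-slim          # GNU dd; slim app images may lack oflag support
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: nfs
          mountPath: /mnt/nfs
  volumes:
    - name: nfs
      persistentVolumeClaim:
        claimName: shared-data       # the PVC from the static example above
```

Delete the Pod after benchmarking; it exists only to run the dd commands.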

Performance optimization:

  1. Use NFSv4.1 or later (supports session trunking and parallel NFS)
  2. Increase rsize/wsize to 1MB for large file transfers
  3. Use noatime to eliminate metadata updates on reads
  4. Place NFS server on fast network (10GbE or better)
  5. Use SSD-backed NFS for write-intensive workloads
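Applied to the StorageClass shown earlier, recommendations 1–3 might look like the sketch below. The 1 MiB buffer sizes are starting points to benchmark against your workload, not universal defaults:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi-tuned
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.1.100
  share: /exports/dynamic
reclaimPolicy: Delete
mountOptions:
  - nfsvers=4.1     # recommendation 1: NFSv4.1 or later
  - rsize=1048576   # recommendation 2: 1 MiB read buffer
  - wsize=1048576   # recommendation 2: 1 MiB write buffer
  - noatime         # recommendation 3: skip access-time updates
  - hard
```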

Security Considerations

# NFS mounts use node-level credentials, not Pod ServiceAccounts.
# Ensure the NFS export is configured appropriately, for example:
# /exports/shared-data  10.244.0.0/16(rw,sync,no_subtree_check,no_root_squash)

  • no_root_squash: Allows root in containers to write as root (required for some workloads)
  • root_squash (default): Maps root to nobody, which is more secure but may cause permission errors
  • securityContext.fsGroup is not applied to in-tree NFS volumes; with the NFS CSI driver it depends on the driver's fsGroupPolicy. Matching runAsUser/runAsGroup to the export's ownership is the more reliable approach:

spec:
  securityContext:
    fsGroup: 1000       # honored only if the volume driver supports ownership management
  containers:
    - name: app
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000

Troubleshooting NFS Issues

# Check if the Pod can reach the NFS server
# (showmount requires nfs-utils/nfs-common in the container image)
kubectl exec test-pod -- showmount -e 10.0.1.100

# Check mount inside Pod
kubectl exec test-pod -- mount | grep nfs

# Common errors:
# "mount.nfs: access denied by server" → Check NFS exports
# "mount.nfs: Connection timed out" → Check network/firewall
# "Stale file handle" → NFS export was changed; remount required
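Slim application images usually lack showmount and other NFS tooling, so a dedicated debug Pod can make these checks easier. A sketch with illustrative names, installing the client tools at startup and mounting the shared-data PVC from earlier:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-debug
spec:
  containers:
    - name: debug
      image: ubuntu:24.04
      # install NFS client tooling on startup, then idle for interactive use
      command: ["bash", "-c", "apt-get update && apt-get install -y nfs-common && sleep infinity"]
      volumeMounts:
        - name: nfs
          mountPath: /mnt/nfs
  volumes:
    - name: nfs
      persistentVolumeClaim:
        claimName: shared-data
```

Also remember that mount failures surface in Pod events (kubectl describe pod) and kubelet logs on the node, not in container logs, since kubelet performs the mount.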

Alternatives to NFS for RWX

| Solution | Advantages over NFS |
|----------|---------------------|
| CephFS | Distributed, no single point of failure |
| GlusterFS | Distributed, better write performance |
| AWS EFS | Managed NFS, no server to maintain |
| Azure Files | Managed SMB/NFS |
| Longhorn | Cloud-native, replicated storage |

Why Interviewers Ask This

NFS is one of the most commonly used storage backends in on-premises Kubernetes clusters. Understanding its configuration, limitations, and alternatives shows practical infrastructure experience.

Common Follow-Up Questions

What access modes does NFS support?
NFS supports ReadWriteOnce (RWO), ReadOnlyMany (ROX), and ReadWriteMany (RWX). RWX is its most important capability.
What are the performance limitations of NFS in Kubernetes?
NFS adds network latency to every I/O operation. It can become a bottleneck for write-heavy workloads. Consider local volumes or block storage for latency-sensitive applications.
How do you dynamically provision NFS volumes?
Use the NFS CSI driver (csi-driver-nfs) or the NFS subdir external provisioner. Both create subdirectories on an NFS share for each PVC.

Key Takeaways

  • NFS is the simplest way to provide ReadWriteMany (RWX) storage in Kubernetes.
  • Use the NFS CSI driver for dynamic provisioning instead of manually creating PVs.
  • NFS performance depends on network speed — it is not suitable for latency-sensitive databases.
