How Do PersistentVolumeClaims Work in Kubernetes?
A PersistentVolumeClaim (PVC) is a namespaced request for storage. It specifies size, access mode, and optionally a StorageClass. Kubernetes binds the PVC to a matching PersistentVolume, and Pods mount the PVC to access the underlying storage.
Detailed Answer
PVC as a Storage Request
A PersistentVolumeClaim is the developer's way of requesting storage without knowing the underlying infrastructure. The PVC specifies what kind of storage is needed, and Kubernetes finds or creates a suitable PersistentVolume.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard
```

```shell
kubectl apply -f pvc.yaml
kubectl get pvc
# NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS
# app-data   Bound    pvc-a1b2c3d4-e5f6-7890-abcd-ef1234567890   10Gi       RWO            standard
```
The Binding Process
When a PVC is created, the PV controller searches for a matching PV based on:
- Access modes: The PV must support the requested mode(s).
- Capacity: The PV must have at least the requested storage size.
- StorageClass: Must match exactly (empty string matches PVs with no class).
- Label selector: If the PVC specifies selector.matchLabels, only matching PVs are considered.
- Volume name: If spec.volumeName is set, only that specific PV is considered.
If no existing PV matches and a StorageClass is specified, dynamic provisioning creates a new PV automatically.
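For static provisioning, an administrator pre-creates a PV that satisfies these criteria. A minimal sketch of a PV the app-data PVC above could bind to — the NFS backend, server address, and export path here are placeholder assumptions; any supported volume type works:

```yaml
# Hypothetical pre-provisioned PV matching the app-data PVC:
# same StorageClass (standard), at least 10Gi, supports ReadWriteOnce.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-app-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  nfs:
    server: nfs.example.com   # assumed server address
    path: /exports/app-data   # assumed export path
```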
Mounting a PVC in a Pod
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - name: nginx
      image: nginx:1.25
      volumeMounts:
        - name: web-data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: web-data
      persistentVolumeClaim:
        claimName: app-data
```
The Pod references the PVC by name. Kubernetes ensures the underlying PV is mounted at the specified mountPath inside the container.
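One way to confirm the mount from inside the running container (assuming the web-server Pod above is up):

```shell
# Show the filesystem mounted at the nginx web root
kubectl exec web-server -- df -h /usr/share/nginx/html
```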
PVC in StatefulSets
StatefulSets use volumeClaimTemplates to automatically create a PVC for each replica:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  replicas: 3
  serviceName: postgres
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: pgdata
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: pgdata
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: fast-ssd
```
This creates PVCs named pgdata-postgres-0, pgdata-postgres-1, and pgdata-postgres-2. Each Pod gets its own dedicated volume, and the PVCs persist even if the StatefulSet is scaled down.
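You can list the per-replica claims directly by name, following the template-name-statefulset-name-ordinal pattern described above:

```shell
# One PVC per replica; these survive scale-down and Pod rescheduling
kubectl get pvc pgdata-postgres-0 pgdata-postgres-1 pgdata-postgres-2
```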
Resizing a PVC
If the StorageClass supports expansion:
```yaml
# The StorageClass must have this:
allowVolumeExpansion: true
```

```shell
# Edit the PVC to request more storage
kubectl patch pvc app-data -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Check resize status
kubectl get pvc app-data -o yaml | grep -A 5 conditions
```
For filesystem-based volumes, the resize may require the Pod to be restarted. For block volumes, online expansion is often supported.
PVC Status and Troubleshooting
```shell
# Check PVC status across all namespaces
kubectl get pvc -A

# See why a PVC is stuck in Pending
kubectl describe pvc app-data

# Common events when Pending:
#   "waiting for a volume to be created, either by external provisioner or manually"
#   "no persistent volumes available for this claim"
```
Common reasons a PVC stays Pending:
- No PV with sufficient capacity exists (static provisioning)
- The StorageClass does not exist or the provisioner is not running
- Access mode mismatch between PVC and available PVs
- The cluster has reached storage quota limits
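A quick way to check the first few causes (the CSI provisioner's namespace varies by installation, so the last command is an assumption):

```shell
# Does the StorageClass exist, and which provisioner backs it?
kubectl get storageclass

# Are there unbound PVs that could satisfy the claim?
kubectl get pv

# Is the provisioner running? (namespace depends on your installation)
kubectl get pods -n kube-system | grep -i csi
```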
Binding Modes
The volumeBindingMode on the StorageClass controls when PVC binding happens:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer  # Bind only when a Pod uses it
```
- Immediate (default): The PVC is bound as soon as it is created.
- WaitForFirstConsumer: Binding is delayed until a Pod using the PVC is scheduled. This is essential for topology-aware storage like local volumes and zone-specific cloud disks.
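The no-provisioner class above never provisions dynamically, so matching PVs must be created by hand. A sketch of a local PV that would pair with it — the disk path and node name are assumptions, but nodeAffinity itself is mandatory for local volumes:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-ssd-pv-0
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd0       # assumed device mount point
  nodeAffinity:                 # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1        # assumed node name
```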
Protecting Against Accidental Deletion
Kubernetes adds a kubernetes.io/pvc-protection finalizer to PVCs in use. If you try to delete a PVC that a Pod is actively using, the deletion is deferred until the Pod releases it:
```shell
# PVC shows Terminating but is not deleted yet
kubectl delete pvc app-data
kubectl get pvc app-data
# STATUS: Terminating (still bound to a running Pod)

# Once the Pod is deleted, the PVC deletion completes
```
This protection prevents accidental data loss from deleting a PVC while workloads are still using it.
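The finalizer implementing this protection is visible directly on the object:

```shell
# Shows kubernetes.io/pvc-protection while the PVC exists
kubectl get pvc app-data -o jsonpath='{.metadata.finalizers}'
```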
Why Interviewers Ask This
Interviewers want to see that you understand the consumer side of Kubernetes storage and can correctly wire up Pods to persistent storage.
Key Takeaways
- PVCs are namespaced; PVs are cluster-scoped
- PVC-to-PV binding considers size, access mode, and StorageClass
- Pods reference PVCs in their volume spec, not PVs directly