PersistentVolume (PV) vs PersistentVolumeClaim (PVC)
Key Differences in Kubernetes
A PersistentVolume is a piece of cluster storage provisioned by an admin or dynamically by a StorageClass — it represents the actual storage resource. A PersistentVolumeClaim is a request for storage by a user — it specifies how much storage and what access mode is needed. PVs are the supply; PVCs are the demand. Pods consume storage by referencing PVCs, not PVs directly.
Side-by-Side Comparison
| Dimension | PersistentVolume (PV) | PersistentVolumeClaim (PVC) |
|---|---|---|
| Role | The actual storage resource (supply) | A request for storage (demand) |
| Scope | Cluster-scoped — not tied to any namespace | Namespace-scoped — exists in a specific namespace |
| Created By | Cluster admin or dynamically by a StorageClass | Application developer or user |
| Lifecycle | Exists independently of any Pod | Exists independently of any Pod; bound to a PV when matched |
| Reclaim Policy | Retain, Delete, or Recycle — controls what happens after PVC is deleted | No reclaim policy — this is a PV concern |
| Binding | Bound to a PVC that matches capacity and access modes | Bound to a PV that satisfies the request |
| Storage Details | Specifies backend (NFS, EBS, GCE PD, etc.), capacity, and access modes | Specifies desired capacity and access modes — backend is abstracted away |
Detailed Breakdown
The Storage Model
Kubernetes storage follows a provider-consumer model:
Admin provisions → PV (storage resource)
Developer requests → PVC (storage claim)
Kubernetes binds → PV ↔ PVC
Pod mounts → PVC reference
Static Provisioning
In static provisioning, an admin creates PVs manually:
```yaml
# PV — created by the cluster admin
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-volume-01
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.100
    path: /exports/data01
```
```yaml
# PVC — created by the developer
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  namespace: production
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
```
Kubernetes matches the PVC to a PV that has at least 20Gi of capacity and supports ReadWriteMany. The PV `nfs-volume-01` (50Gi, RWX) satisfies the request, so the two are bound. Note that binding is one-to-one and exclusive: the PVC claims the entire 50Gi volume even though it requested only 20Gi.
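When deterministic binding matters more than matching, a claim can name its target PV directly with `volumeName`, which skips the matching logic entirely. A minimal sketch, assuming the `nfs-volume-01` PV above; the claim name is illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-pinned        # illustrative name
  namespace: production
spec:
  volumeName: nfs-volume-01    # bind to this PV only; no matching
  storageClassName: ""         # empty string disables dynamic provisioning
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
```

The empty `storageClassName` matters: leaving the field unset would let the cluster's default StorageClass provision a new volume instead of binding to the pre-created one.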
Dynamic Provisioning
Dynamic provisioning eliminates the need for admins to pre-create PVs. A StorageClass tells Kubernetes how to provision storage on demand:
```yaml
# StorageClass — defines how to provision
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "5000"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
```yaml
# PVC — references the StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
  namespace: production
spec:
  storageClassName: fast-ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```
When this PVC is created, the `fast-ssd` StorageClass provisions a 100Gi gp3 EBS volume on demand and creates a corresponding PV. Note that because this class sets `volumeBindingMode: WaitForFirstConsumer`, provisioning and binding are deferred until a Pod that uses the PVC is scheduled; with `Immediate` mode they would happen as soon as the PVC is created.
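The provisioner-created PV looks much like a hand-written one, with the binding recorded in `claimRef`. A sketch of what such an object might contain — the generated name and `volumeHandle` are illustrative placeholders, not values from a real cluster:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-generated-example          # real names look like pvc-<uid>
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete  # inherited from the StorageClass
  storageClassName: fast-ssd
  claimRef:                              # back-reference recording the binding
    namespace: production
    name: db-data
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-example            # placeholder EBS volume ID
```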
Mounting in a Pod
Pods always reference PVCs, never PVs directly:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: database
spec:
  containers:
    - name: postgres
      image: postgres:16
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data
```
This abstraction means the Pod manifest is portable. It does not care whether the PV is backed by NFS, EBS, GCE PD, or local disk — it just references the PVC name.
Access Modes
Both PVs and PVCs specify access modes, and they must be compatible for binding:
| Mode | Abbreviation | Meaning |
|------|--------------|---------|
| ReadWriteOnce | RWO | Single node read-write |
| ReadOnlyMany | ROX | Multiple nodes read-only |
| ReadWriteMany | RWX | Multiple nodes read-write |
| ReadWriteOncePod | RWOP | Single Pod read-write (Kubernetes 1.27+) |
Not all storage backends support all modes. EBS volumes are RWO only. NFS supports RWX. The access mode must match between PV and PVC for binding to succeed.
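For workloads that need exclusive access even when multiple Pods could land on the same node, a claim can request ReadWriteOncePod. A minimal sketch — the claim name and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exclusive-data       # illustrative name
spec:
  accessModes:
    - ReadWriteOncePod       # only one Pod in the whole cluster may mount this volume
  resources:
    requests:
      storage: 10Gi
```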
Reclaim Policy
The reclaim policy on a PV determines what happens when the PVC is deleted:
```yaml
spec:
  persistentVolumeReclaimPolicy: Retain   # keep the PV and data
```
- Retain — the PV and its data are preserved. An admin must manually clean up and reclaim the PV. Best for critical data.
- Delete — the PV and the underlying storage resource (EBS volume, GCE PD) are deleted. Default for dynamically provisioned volumes.
- Recycle — deprecated. Was equivalent to `rm -rf /volume/*`.
StatefulSet volumeClaimTemplates
StatefulSets create PVCs automatically for each Pod using volumeClaimTemplates:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql        # headless Service governing the StatefulSet (required)
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        storageClassName: fast-ssd
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 50Gi
```
This creates PVCs named `data-mysql-0`, `data-mysql-1`, `data-mysql-2` — one per Pod. Each PVC is bound to its own PV, giving each database instance dedicated storage that persists across Pod rescheduling.
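Since Kubernetes 1.27 (enabled by default), a StatefulSet can also declare what happens to these PVCs when it is deleted or scaled down, via `persistentVolumeClaimRetentionPolicy`. A sketch — the default for both fields is `Retain`:

```yaml
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete   # remove the PVCs when the StatefulSet is deleted
    whenScaled: Retain    # keep PVCs for replicas removed by scale-down
```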
PV and PVC Lifecycle States
PVs go through these phases:
- Available — not yet bound to a PVC
- Bound — bound to a PVC
- Released — PVC deleted, but PV not yet reclaimed
- Failed — automatic reclamation failed
PVCs go through:
- Pending — no matching PV found yet
- Bound — bound to a PV
```shell
# Check PV and PVC status
kubectl get pv
kubectl get pvc -n production
```
Volume Binding Mode
The volumeBindingMode on a StorageClass controls when the PV is provisioned:
```yaml
volumeBindingMode: Immediate              # provision as soon as the PVC is created
volumeBindingMode: WaitForFirstConsumer   # wait until a Pod using the PVC is scheduled
```
WaitForFirstConsumer is important for topology-aware storage. It ensures the PV is created in the same availability zone as the node where the Pod is scheduled, avoiding cross-zone mounting issues.
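Topology can also be constrained explicitly on the StorageClass with `allowedTopologies`. A sketch restricting provisioning to a single zone — the class name and zone value are assumptions:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd-zonal     # illustrative name
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values:
          - us-east-1a     # assumed zone; volumes are only provisioned here
```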
Expanding Volumes
If a StorageClass allows expansion, you can grow a PVC by editing its size:
```yaml
spec:
  resources:
    requests:
      storage: 200Gi   # increased from 100Gi
```
The StorageClass must have allowVolumeExpansion: true. Shrinking is not supported — you can only increase the size.
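Expansion is enabled on the StorageClass itself, not the PVC. A sketch extending the `fast-ssd` class from earlier with the required flag:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
allowVolumeExpansion: true    # permits PVCs of this class to be resized upward
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```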
Use PersistentVolume (PV) when...
- You're a cluster admin pre-provisioning storage for teams
- You need to use a specific storage backend not covered by a StorageClass
- You want to control the reclaim policy for critical data
- You're setting up static provisioning for specialized storage
Use PersistentVolumeClaim (PVC) when...
- You're a developer who needs persistent storage for your application
- You want to request storage without knowing the backend details
- You're defining storage requirements in a Deployment or StatefulSet
- You want portable manifests that work across different clusters
- You're using dynamic provisioning with a StorageClass
Model Interview Answer
“PersistentVolumes and PersistentVolumeClaims separate storage provisioning from consumption. A PV is the actual storage — it's cluster-scoped and describes the capacity, access modes, and backend (NFS, AWS EBS, etc.). A PVC is a namespace-scoped request for storage — it specifies how much space and what access mode the application needs. When a PVC is created, Kubernetes finds a matching PV and binds them together. The Pod then references the PVC to mount the volume. This separation means developers don't need to know storage infrastructure details — they just request what they need. In practice, most teams use dynamic provisioning where a StorageClass automatically creates PVs when PVCs are created.”