How Do StatefulSet Volume Claim Templates Work?
volumeClaimTemplates in a StatefulSet automatically create a unique PersistentVolumeClaim for each Pod. When a Pod is rescheduled, it reattaches to its original PVC, preserving data. By default, PVCs are not deleted when Pods or StatefulSets are removed.
Detailed Answer
volumeClaimTemplates is a field unique to StatefulSets that defines a PersistentVolumeClaim (PVC) template. For each Pod replica, Kubernetes automatically creates a dedicated PVC from this template, binding each Pod to its own persistent storage.
How It Works
When a StatefulSet with replicas: 3 is created, Kubernetes creates three Pods and three PVCs:
| Pod | PVC |
|---|---|
| postgres-0 | data-postgres-0 |
| postgres-1 | data-postgres-1 |
| postgres-2 | data-postgres-2 |
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: "postgres-headless"
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
          resources:
            requests:
              cpu: "500m"
              memory: "512Mi"
            limits:
              cpu: "2"
              memory: "2Gi"
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "fast-ssd"
        resources:
          requests:
            storage: 20Gi
```
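The manifest above references serviceName: "postgres-headless", which must exist as a headless Service (clusterIP: None) so each Pod gets a stable DNS name like postgres-0.postgres-headless. A minimal sketch of that Service, assuming the name and labels from the example above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-headless
spec:
  clusterIP: None          # headless: per-Pod DNS records instead of a load-balanced VIP
  selector:
    app: postgres
  ports:
    - name: postgres
      port: 5432
```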
PVC Naming Convention
The PVC name follows the pattern:
```
<volumeClaimTemplate-name>-<statefulset-name>-<ordinal>
```
This deterministic naming is what allows Kubernetes to reattach the correct PVC when a Pod is rescheduled.
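Because the pattern is pure string construction, the expected PVC names can be derived offline. This sketch mirrors the naming for the postgres example above:

```shell
# Derive the PVC names for volumeClaimTemplate "data",
# StatefulSet "postgres", with 3 replicas (ordinals 0..2)
template="data"
statefulset="postgres"
replicas=3
for i in $(seq 0 $((replicas - 1))); do
  echo "${template}-${statefulset}-${i}"
done
# prints data-postgres-0, data-postgres-1, data-postgres-2
```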
Lifecycle: PVCs Outlive Pods
This is a critical concept. When you:
- Delete a Pod: The PVC remains. When the StatefulSet controller recreates the Pod, it reattaches the same PVC.
- Scale down: The PVC remains. If you scale back up, the Pod reattaches to the existing PVC with all its data intact.
- Delete the StatefulSet: The PVCs remain by default. You must delete them manually with kubectl delete pvc.
- Delete the PVC: The underlying PersistentVolume follows the reclaim policy of its StorageClass (Delete or Retain).
```shell
# List PVCs for a StatefulSet
kubectl get pvc -l app=postgres

# Manually clean up PVCs after deleting a StatefulSet
kubectl delete pvc data-postgres-0 data-postgres-1 data-postgres-2
```
Multiple Volume Claim Templates
A StatefulSet can define multiple volumeClaimTemplates for applications that need separate volumes:
```yaml
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "fast-ssd"
      resources:
        requests:
          storage: 50Gi
  - metadata:
      name: wal
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "ultra-fast-ssd"
      resources:
        requests:
          storage: 10Gi
```
This creates two PVCs per Pod: data-postgres-0 and wal-postgres-0 for the first replica, and so on.
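Each template must also be mounted by name in the Pod template. A sketch of the corresponding container volumeMounts — the wal mount path is illustrative, chosen to sit alongside the data directory used earlier:

```yaml
volumeMounts:
  - name: data                          # bound to PVC data-<pod-name>
    mountPath: /var/lib/postgresql/data
  - name: wal                           # bound to PVC wal-<pod-name>; path is an assumption
    mountPath: /var/lib/postgresql/wal
```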
Volume Expansion
You cannot modify a volumeClaimTemplate after creation. However, if the StorageClass has allowVolumeExpansion: true, you can resize individual PVCs:
```shell
kubectl patch pvc data-postgres-0 -p '{"spec":{"resources":{"requests":{"storage":"50Gi"}}}}'
```
The Pod may need to be restarted for the filesystem to expand, depending on the CSI driver.
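Expansion only succeeds if the StorageClass opts in. A sketch of such a class, assuming the fast-ssd name from the example and the AWS EBS CSI driver as provisioner (swap in your platform's driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com   # example CSI driver; cluster-specific
allowVolumeExpansion: true     # required for PVC resize patches to succeed
parameters:
  type: gp3
```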
Persistent Volume Reclaim Policy
When a PVC is deleted, what happens to the underlying storage depends on the StorageClass reclaimPolicy:
- Delete (default for most cloud providers): The volume is deleted along with the PVC
- Retain: The volume is preserved but becomes unavailable for new claims until manually released
For production databases, consider using Retain to prevent accidental data loss.
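The reclaim policy is stamped onto each PersistentVolume from the StorageClass at provisioning time. A sketch of a Retain-policy class for such databases, again assuming the AWS EBS CSI driver as an example provisioner:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd-retain
provisioner: ebs.csi.aws.com   # example CSI driver; cluster-specific
reclaimPolicy: Retain          # PV and underlying disk survive PVC deletion
parameters:
  type: gp3
```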
Why Interviewers Ask This
Interviewers ask this to confirm you understand how Kubernetes provides persistent, per-Pod storage — a fundamental requirement for running databases and other stateful workloads.
Key Takeaways
- Each StatefulSet Pod gets its own PVC, unlike Deployment Pods, which either share a volume or have no persistent storage at all.
- PVCs outlive Pods and even the StatefulSet itself — you must delete them manually.
- Volume claim templates support StorageClass selection for dynamic provisioning.