What Is Pod Affinity in Kubernetes?
Pod affinity and anti-affinity rules let you control which Pods are co-located in the same topology domain (such as a node or zone) and which Pods are kept apart, using label selectors and topology keys.
Detailed Answer
Pod affinity and pod anti-affinity give you fine-grained control over where the Kubernetes scheduler places Pods relative to other Pods. Unlike node affinity, which targets node labels, pod affinity targets the labels of already-running Pods.
Why Pod Affinity Matters
In distributed systems, latency between components can make or break performance. Pod affinity lets you co-locate tightly coupled services — for example, placing a web frontend Pod on the same node as its caching layer. Conversely, pod anti-affinity ensures replicas of the same Deployment spread across nodes or zones, so a single failure does not take out all replicas.
Pod Affinity Example
This manifest ensures the web Pod is scheduled on a node that already runs a Pod with the label app: cache:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
  labels:
    app: web
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - cache
        topologyKey: kubernetes.io/hostname
  containers:
  - name: nginx
    image: nginx:1.27
    resources:
      requests:
        cpu: "100m"
        memory: "128Mi"
```
The topologyKey: kubernetes.io/hostname means "same node." If you changed it to topology.kubernetes.io/zone, the rule would mean "same availability zone."
Pod Anti-Affinity Example
To spread replicas of a Deployment across nodes:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - web
            topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: nginx:1.27
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
```
This guarantees that no two web Pods land on the same node. If the cluster has fewer schedulable nodes than replicas, the excess Pods remain Pending.
Hard vs. Soft Rules
| Type | Behavior |
|------|----------|
| requiredDuringSchedulingIgnoredDuringExecution | Hard constraint — Pod stays pending if unsatisfied |
| preferredDuringSchedulingIgnoredDuringExecution | Soft constraint — scheduler tries but compromises if needed |
In production, use required anti-affinity for critical HA guarantees and preferred affinity for performance optimization, so Pods are not stuck pending when the cluster is under pressure.
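As a sketch of the soft form, the Deployment's hard anti-affinity rule above can be rewritten as a preferred rule. The field names follow the standard Pod spec; the weight value of 100 is an arbitrary choice within the allowed 1–100 range:

```yaml
# Soft anti-affinity: the scheduler scores nodes to prefer spreading,
# but will co-locate replicas rather than leave them Pending.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100              # 1-100; higher means stronger preference
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web
        topologyKey: kubernetes.io/hostname
```

Note the extra nesting: preferred rules wrap the term in podAffinityTerm and add a weight, unlike required rules, which list the term directly.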
Understanding topologyKey
The topologyKey field references a node label that defines the topology domain:
- kubernetes.io/hostname — per-node granularity
- topology.kubernetes.io/zone — per-availability-zone
- topology.kubernetes.io/region — per-region
- Custom labels like rack or building for on-premises clusters
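For a custom domain, the nodes must actually carry the label you name as the topologyKey. As a hypothetical on-prem example (the node names and rack values are illustrative), label the nodes and then reference the label in the rule:

```yaml
# Assumes nodes were labeled beforehand, e.g.:
#   kubectl label node worker-1 rack=rack-a
#   kubectl label node worker-2 rack=rack-b
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchLabels:
        app: web
    topologyKey: rack   # at most one matching Pod per rack
```

Pods land on nodes in distinct rack domains; nodes missing the rack label do not satisfy the rule.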
Performance Considerations
Pod affinity rules are more expensive for the scheduler than node affinity because it must evaluate the labels of running Pods in the target namespaces, not just static node labels. By default, only the incoming Pod's own namespace is searched. In large clusters (hundreds of nodes, thousands of Pods), heavy use of pod affinity can still slow scheduling, so use namespaceSelector or namespaces to keep the scope of label evaluation as narrow as possible.
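One way to scope the search is the namespaceSelector field (stable since Kubernetes 1.24). In this sketch, the team: platform namespace label is an assumption for illustration:

```yaml
podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchLabels:
        app: cache
    # Only Pods in namespaces carrying this label are evaluated.
    # Omitting both namespaces and namespaceSelector restricts the
    # search to the incoming Pod's own namespace.
    namespaceSelector:
      matchLabels:
        team: platform
    topologyKey: kubernetes.io/hostname
```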
Common Pitfalls
- Forgetting topologyKey: The field is required. Omitting it causes a validation error.
- Symmetric anti-affinity deadlocks: If service A requires affinity to service B and vice versa, neither can schedule first. Use preferred for one side.
- Over-constraining: Using required anti-affinity with hostname topology on a cluster with fewer nodes than replicas guarantees Pending Pods.
Verifying Affinity Rules
Use kubectl describe pod <name> to inspect events when a Pod is stuck pending. The scheduler emits messages like didn't match pod affinity rules or didn't match pod anti-affinity rules.
```shell
kubectl get pods -o wide
kubectl describe pod web-server | grep -A 5 Events
```
Why Interviewers Ask This
Interviewers want to know if you can design workloads that optimize performance through co-location or improve resilience by spreading Pods across failure domains.
Key Takeaways
- Pod affinity co-locates Pods with matching labels on the same topology domain for performance or data locality.
- Pod anti-affinity spreads Pods apart to improve high availability across failure domains.
- Always pair affinity rules with a topologyKey to define the scope of co-location or separation.