Node affinity is an advanced scheduling mechanism that attracts Pods to nodes based on label expressions. It comes in two forms: requiredDuringSchedulingIgnoredDuringExecution (a hard requirement the scheduler must satisfy) and preferredDuringSchedulingIgnoredDuringExecution (a soft preference the scheduler weighs). It is a more expressive alternative to nodeSelector, supporting operators such as In, NotIn, Exists, and Gt rather than exact label matches only.
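A minimal sketch showing both forms in one Pod spec. The label key disktype, the zone value, and the Pod/container names are illustrative, not standard:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo            # illustrative name
spec:
  affinity:
    nodeAffinity:
      # Hard rule: only schedule onto nodes labeled disktype=ssd
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype        # illustrative node label
            operator: In
            values: ["ssd"]
      # Soft rule: prefer a specific zone, but schedule elsewhere if needed
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50               # 1-100; higher weights score matching nodes higher
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["us-east-1a"]   # illustrative zone
  containers:
  - name: app
    image: nginx
```

If no node carries disktype=ssd, the Pod stays Pending; the zone preference only influences scoring among the nodes that pass the hard rule.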
Scheduling Interview Questions
Why Scheduling Matters in Interviews
Scheduling is where abstract Kubernetes manifests meet physical infrastructure. Interviewers focus on scheduling to determine whether candidates can reason about workload placement beyond simply relying on defaults.
Common interview scenarios include: "You have GPU nodes that should only run ML workloads — how do you set this up?" (taints and tolerations), "Ensure replicas of a critical service are spread across availability zones" (topology spread constraints or pod anti-affinity), and "Co-locate a cache with the application that uses it" (pod affinity).
Candidates should understand the difference between required (hard) and preferred (soft) constraints, and when each is appropriate. A required constraint guarantees placement rules but risks Pods remaining Pending if no node matches. A preferred constraint is best-effort: the scheduler favors matching nodes but still places the Pod when none qualify, which makes it more resilient under resource pressure.
Advanced interviews may cover the scheduler's plugin architecture, custom schedulers, priority classes and preemption behavior, and how resource requests interact with scheduling decisions. Demonstrating that you can design placement strategies that balance reliability, performance, and cost signals operational maturity.
All Questions
Taints are applied to nodes to repel Pods that do not tolerate them. Tolerations are applied to Pods to allow scheduling onto tainted nodes. Note that a toleration only permits placement, it does not require it, so dedicating nodes to specific workloads (GPU nodes, dedicated tenant nodes, control plane nodes) typically pairs taints with node affinity or a nodeSelector on those workloads.
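A sketch of the GPU-node scenario. The node name, taint key/value, and image are illustrative:

```yaml
# First taint the node so untolerating Pods are repelled, e.g.:
#   kubectl taint nodes gpu-node-1 gpu=true:NoSchedule
# Then give the ML Pod a matching toleration:
apiVersion: v1
kind: Pod
metadata:
  name: ml-training              # illustrative name
spec:
  tolerations:
  - key: gpu                     # must match the taint key
    operator: Equal
    value: "true"                # must match the taint value
    effect: NoSchedule           # must match the taint effect
  containers:
  - name: trainer
    image: ml-trainer:latest     # placeholder image
```

Effects escalate: NoSchedule blocks new placement, PreferNoSchedule is advisory, and NoExecute additionally evicts already-running Pods that lack a toleration.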
Pod affinity schedules Pods near other Pods that match a label selector, while pod anti-affinity ensures Pods are spread apart. Both operate within a topology domain (node, zone, rack) identified by a topologyKey, and both support required (hard) and preferred (soft) rules. Anti-affinity is commonly used to spread replicas across failure domains.
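A sketch of hard anti-affinity spreading Deployment replicas across zones. The Deployment name and app label are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: no two Pods matching app=web in the same zone
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web
            topologyKey: topology.kubernetes.io/zone
      containers:
      - name: web
        image: nginx
```

Because the rule is required, a fourth replica would stay Pending in a three-zone cluster; switching to preferredDuringSchedulingIgnoredDuringExecution trades that guarantee for schedulability.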
Topology spread constraints control how Pods are distributed across topology domains (zones, nodes, racks). Unlike pod anti-affinity, which is binary (a Pod either can or cannot co-locate with matching Pods), topology spread uses maxSkew to bound how unevenly matching Pods may be distributed across domains. This enables fine-grained, even workload distribution for high availability.
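A sketch of a spread constraint with maxSkew. The Pod name and app label are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo              # illustrative name
  labels:
    app: web
spec:
  topologySpreadConstraints:
  - maxSkew: 1                   # max allowed difference in matching Pod counts between zones
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule   # hard; ScheduleAnyway makes this a soft preference
    labelSelector:
      matchLabels:
        app: web                 # which Pods count toward the skew
  containers:
  - name: web
    image: nginx
```

With maxSkew: 1 across three zones, six replicas land 2/2/2 rather than, say, 4/1/1, which anti-affinity alone cannot express.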