How Do You Target Specific Nodes with a DaemonSet?
You can restrict a DaemonSet to specific nodes using nodeSelector for simple label matching or nodeAffinity for complex rules. This lets you run specialized agents only on nodes that need them, such as GPU monitoring on GPU nodes or storage daemons on storage nodes.
Detailed Answer
By default, a DaemonSet runs on every node. In real-world clusters, you often need DaemonSet Pods only on certain node types — GPU monitoring on GPU nodes, NVMe utilities on storage nodes, or specialized logging on edge nodes.
Using nodeSelector
The simplest approach is nodeSelector, which matches nodes by label:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gpu-monitor
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: gpu-monitor
  template:
    metadata:
      labels:
        app: gpu-monitor
    spec:
      nodeSelector:
        accelerator: nvidia-gpu
      containers:
        - name: gpu-monitor
          image: nvidia/dcgm-exporter:3.3.0
          ports:
            - containerPort: 9400
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"
```
This DaemonSet runs only on nodes labeled accelerator=nvidia-gpu; nodes without this label get no Pod.
Labeling Nodes
```shell
# Add a label to target a node
kubectl label node worker-03 accelerator=nvidia-gpu
# The DaemonSet Pod is automatically created on worker-03

# Remove the label
kubectl label node worker-03 accelerator-
# The DaemonSet Pod is automatically removed from worker-03
```
Using nodeAffinity
For more complex targeting, use nodeAffinity:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: storage-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: storage-agent
  template:
    metadata:
      labels:
        app: storage-agent
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node.kubernetes.io/instance-type
                    operator: In
                    values:
                      - i3.xlarge
                      - i3.2xlarge
                      - i3.4xlarge
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                      - us-east-1a
                      - us-east-1b
      containers:
        - name: storage-agent
          image: mycompany/storage-agent:v2.1
          resources:
            requests:
              cpu: "200m"
              memory: "256Mi"
            limits:
              cpu: "1"
              memory: "512Mi"
```
Because matchExpressions within a single term are ANDed, this targets nodes that are both NVMe-backed instance types (i3 family) and in the listed availability zones.
nodeAffinity Operators
| Operator | Meaning | Example |
|---|---|---|
| In | Label value is in the set | instance-type In [m5.large, m5.xlarge] |
| NotIn | Label value is not in the set | instance-type NotIn [t2.micro] |
| Exists | Label key exists (any value) | gpu Exists |
| DoesNotExist | Label key does not exist | spot DoesNotExist |
| Gt | Label value is greater than | disk-count Gt 4 |
| Lt | Label value is less than | memory-gb Lt 32 |
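These operators also work with the preferred (soft) form of nodeAffinity, which weights nodes rather than excluding them. A sketch of a Pod template fragment combining a soft preference with the Gt operator (disk-count is a hypothetical custom label you would apply yourself; Gt and Lt parse the label value as an integer, and values must be quoted strings):

```yaml
# Sketch: prefer, without requiring, nodes whose custom
# "disk-count" label is greater than 4.
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 50            # 1-100; higher = stronger preference
          preference:
            matchExpressions:
              - key: disk-count # hypothetical custom node label
                operator: Gt
                values:
                  - "4"
```

Unlike the required form, a soft preference never blocks scheduling; nodes that fail the expression are simply scored lower.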
Combining nodeSelector with Tolerations
Specialized nodes are often tainted to repel regular workloads. Your DaemonSet needs both a selector and a toleration:
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gpu-driver
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: gpu-driver
  template:
    metadata:
      labels:
        app: gpu-driver
    spec:
      nodeSelector:
        accelerator: nvidia-gpu
      tolerations:
        - key: "nvidia.com/gpu"
          operator: "Exists"
          effect: "NoSchedule"
      containers:
        - name: driver
          image: nvidia/driver:535
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: "100m"
              memory: "256Mi"
```
Without the toleration, the DaemonSet Pod would be repelled by the taint even though the nodeSelector matches.
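For reference, a toleration like the one above matches a taint of this shape on the Node object (the node name and the value "present" are illustrative; with operator: Exists the toleration ignores the taint's value entirely):

```yaml
# Illustrative Node taint that the Exists toleration matches.
apiVersion: v1
kind: Node
metadata:
  name: worker-03        # example node name
spec:
  taints:
    - key: nvidia.com/gpu
      value: present     # arbitrary; Exists ignores values
      effect: NoSchedule
```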
Dynamic Node Targeting
One powerful pattern is using labels to dynamically control DaemonSet coverage:
```shell
# Roll out a new agent to 10% of nodes by labeling them
kubectl label node worker-01 agent-rollout=canary
kubectl label node worker-02 agent-rollout=canary
# The DaemonSet targets only canary nodes via
#   nodeSelector:
#     agent-rollout: canary

# After verification, label all nodes
kubectl label nodes --all agent-rollout=canary
```
This gives you canary-style rollout control for node-level infrastructure.
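In the DaemonSet manifest itself, the canary gate is just a nodeSelector on the rollout label. A minimal Pod template fragment, using the same label key and value as the commands above:

```yaml
# Pod template fragment: the DaemonSet schedules only onto
# nodes labeled agent-rollout=canary.
spec:
  template:
    spec:
      nodeSelector:
        agent-rollout: canary
```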
Why Interviewers Ask This
Interviewers ask this to test whether you know how to efficiently manage infrastructure agents in heterogeneous clusters with different node types.
Common Follow-Up Questions
Key Takeaways
- nodeSelector provides simple label-based node targeting for DaemonSets.
- nodeAffinity offers more expressive rules including preferred (soft) constraints.
- Changing node labels dynamically adds or removes DaemonSet Pods in real time.