What Is a DaemonSet?

TL;DR

A DaemonSet ensures that a copy of a specific Pod runs on every node (or a selected subset of nodes) in the cluster. When nodes are added, the DaemonSet automatically schedules a Pod on them. When nodes are removed, the Pod is garbage collected.

Detailed Answer

A DaemonSet is a Kubernetes controller that ensures a copy of a Pod runs on every node in the cluster (or a targeted subset). Unlike Deployments that specify a fixed replica count, DaemonSets automatically scale with your cluster — add a node and a Pod appears; remove a node and the Pod is cleaned up.

How a DaemonSet Works

When you create a DaemonSet, Kubernetes:

  1. Finds all eligible nodes in the cluster
  2. Schedules exactly one Pod on each node
  3. Watches for new nodes being added and creates Pods on them
  4. Detects node removal and garbage collects the corresponding Pods
A typical manifest, here running the Fluentd log collector on every node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: containers
              mountPath: /var/lib/docker/containers
              readOnly: true
          resources:
            requests:
              cpu: "100m"
              memory: "200Mi"
            limits:
              cpu: "500m"
              memory: "500Mi"
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: containers
          hostPath:
            path: /var/lib/docker/containers

DaemonSet Scheduling

DaemonSets use the default Kubernetes scheduler (since Kubernetes 1.12). For each eligible node, the controller creates a Pod and injects a node affinity term targeting that node by name; the default scheduler then binds the Pod, honoring taints and tolerations along the way (the controller also adds standard tolerations such as node.kubernetes.io/not-ready). Before 1.12, the controller set spec.nodeName directly, bypassing the scheduler entirely.
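Conceptually, each Pod the controller hands to the scheduler carries an affinity term like the following. This is a sketch for illustration: the controller generates it automatically, and you never write it yourself (the node name shown is hypothetical):

```yaml
# Injected by the DaemonSet controller into each Pod it creates.
# The matchFields term pins the Pod to one specific node by name,
# so the default scheduler can only bind it there.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchFields:
            - key: metadata.name
              operator: In
              values:
                - node-1   # hypothetical node name
```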

Common DaemonSet Workloads

| Use Case | Example |
|---|---|
| Log collection | Fluentd, Fluent Bit, Filebeat |
| Monitoring | Prometheus node-exporter, Datadog agent |
| Networking | Calico node, Cilium agent, kube-proxy |
| Storage | CSI node drivers, GlusterFS |
| Security | Falco, Sysdig |

Targeting Specific Nodes

You can restrict a DaemonSet to run only on certain nodes using nodeSelector:

spec:
  template:
    spec:
      nodeSelector:
        disk: ssd
      containers:
        - name: cache-warmer
          image: myapp/cache-warmer:v1

Or use nodeAffinity for more flexible rules:

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/gpu
                    operator: Exists

DaemonSet vs Static Pods

Static Pods are managed directly by the kubelet on a node, not by the API server. DaemonSets are managed by the control plane and offer:

  • Centralized management via kubectl
  • Rolling update support
  • Status reporting and health monitoring
  • Label-based node selection
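Rolling updates are configured through spec.updateStrategy on the DaemonSet. A minimal sketch:

```yaml
# Sketch: replace DaemonSet Pods one node at a time.
spec:
  updateStrategy:
    type: RollingUpdate      # default; the alternative is OnDelete
    rollingUpdate:
      maxUnavailable: 1      # at most one node's Pod down at a time
```

With OnDelete, updated Pods are only created after you manually delete the old ones.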

Static Pods are primarily used for bootstrapping control plane components (etcd, kube-apiserver) on kubeadm clusters, since they can run before the API server and the DaemonSet controller exist.
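For contrast, a static Pod is just a plain Pod manifest dropped into the kubelet's manifest directory (/etc/kubernetes/manifests by default on kubeadm clusters); the kubelet runs it with no controller involved. An illustrative sketch, not the manifest kubeadm actually generates:

```yaml
# /etc/kubernetes/manifests/etcd.yaml (illustrative sketch)
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: etcd
      image: registry.k8s.io/etcd:3.5.9-0  # version is an assumption
```

The API server shows such Pods as read-only "mirror Pods"; editing or deleting them via kubectl has no effect, because only the file on disk matters.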

Why Interviewers Ask This

Interviewers ask this to verify you understand how Kubernetes runs node-level infrastructure like log collectors, monitoring agents, and network plugins that must be present on every node.

Common Follow-Up Questions

How is a DaemonSet different from a Deployment with many replicas?
A Deployment creates N replicas distributed across nodes by the scheduler. A DaemonSet guarantees exactly one Pod per eligible node, scaling automatically with the cluster.
Can you prevent a DaemonSet from running on certain nodes?
Yes, use nodeSelector or node affinity in the DaemonSet spec to target specific nodes, or use taints on nodes you want to exclude.
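The inverse also works: to let a DaemonSet run on tainted nodes (for example, control-plane nodes), add a matching toleration to the Pod template, a pattern most CNI and monitoring agents use. A sketch:

```yaml
spec:
  template:
    spec:
      tolerations:
        # Allow scheduling onto control-plane nodes despite their taint.
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
```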
What happens to DaemonSet Pods during a node drain?
By default, kubectl drain refuses to proceed if DaemonSet-managed Pods are on the node; passing --ignore-daemonsets makes drain skip them rather than evict them. Because the controller adds a toleration for the node.kubernetes.io/unschedulable taint, DaemonSet Pods keep running even on a cordoned node.

Key Takeaways

• DaemonSets guarantee exactly one Pod per eligible node, making them ideal for node-level agents and infrastructure.
  • Pods are automatically added to new nodes and removed from deleted nodes.
  • Common uses include log collection (Fluentd), monitoring (Prometheus node-exporter), and networking (CNI plugins).
