DaemonSet vs Deployment

Key Differences in Kubernetes

DaemonSets ensure exactly one Pod runs on every (or selected) node in the cluster, making them ideal for node-level agents like log collectors and monitoring daemons. Deployments manage a specified number of identical Pod replicas distributed across the cluster, making them the standard choice for stateless application workloads.

Side-by-Side Comparison

Dimension            | DaemonSet                                                            | Deployment
Scheduling Model     | One Pod per node — automatically schedules on every qualifying node  | N replicas distributed across nodes by the scheduler
Scaling              | Scales automatically as nodes are added or removed                   | Scales manually or via HPA by changing the replica count
Pod Placement        | Guarantees exactly one Pod per node (or per matching node)           | Pods may land on the same node or be spread with topology constraints
Rolling Updates      | Updates one Pod per node at a time (maxUnavailable controls pace)    | Creates a new ReplicaSet and gradually shifts replicas to it
Use Case             | Node-level infrastructure: log shippers, monitoring agents, network plugins | Application workloads: web servers, APIs, microservices
Node Affinity        | nodeSelector or affinity limits which nodes run the DaemonSet Pod    | nodeSelector or affinity guides scheduling but doesn't guarantee per-node coverage
Replica Count        | No replica count field — determined by the number of matching nodes  | Explicit replicas field in the spec
ReplicaSet Ownership | Manages Pods directly via its controller, no intermediate ReplicaSet | Creates and manages ReplicaSets, which in turn own the Pods

Detailed Breakdown

Scheduling Model

The fundamental difference between a DaemonSet and a Deployment comes down to how Pods get scheduled. A DaemonSet guarantees exactly one Pod per node. A Deployment asks for N replicas and lets the scheduler decide where they land.

# DaemonSet — one Pod per node, no replica count
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log

# Deployment — explicit replica count
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: api
          image: my-api:1.2.0
          ports:
            - containerPort: 8080

When a new node is added to the cluster, the DaemonSet controller notices and immediately creates a Pod on it. When a node is removed, the Pod is garbage collected. Deployments are oblivious to node changes — the scheduler simply has more or fewer targets for the existing replicas.

Node Selection

DaemonSets commonly use nodeSelector or affinity to target a subset of nodes. For example, you might only want GPU monitoring on GPU nodes:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gpu-monitor
spec:
  selector:
    matchLabels:
      app: gpu-monitor
  template:
    metadata:
      labels:
        app: gpu-monitor
    spec:
      nodeSelector:
        hardware: gpu
      tolerations:
        - key: nvidia.com/gpu
          operator: Exists
          effect: NoSchedule
      containers:
        - name: monitor
          image: gpu-monitor:latest

Deployments can also use nodeSelector and affinity, but the intent is different. A Deployment guides replicas toward preferred nodes. A DaemonSet defines which nodes must have exactly one Pod.
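To make that contrast concrete, here is an illustrative Deployment fragment (reusing the web-api names from above) that expresses a node preference plus a spread constraint. The scheduler will favor nodes labeled hardware: gpu but may still place replicas elsewhere, and the spread only distributes the N replicas — it never creates one Pod per node:

```yaml
# Illustrative: affinity here is a preference, not a per-node guarantee
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      affinity:
        nodeAffinity:
          # "preferred" = soft: the scheduler favors these nodes but can fall back
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: hardware
                    operator: In
                    values: ["gpu"]
      # Spread the 3 replicas across hostnames where possible
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web-api
      containers:
        - name: api
          image: my-api:1.2.0
```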

Rolling Updates

Both resources support rolling updates, but the mechanics differ.

A Deployment creates a new ReplicaSet with the updated Pod template and scales it up while scaling the old ReplicaSet down. The maxSurge and maxUnavailable fields control the pace.
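For comparison with the DaemonSet strategy shown below, the equivalent knobs on a Deployment live under spec.strategy. The values here are illustrative (both fields default to 25%):

```yaml
# Deployment rolling update strategy — zero-downtime variant
apiVersion: apps/v1
kind: Deployment
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow 1 extra Pod above the desired count during rollout
      maxUnavailable: 0    # never drop below the desired replica count
```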

A DaemonSet updates Pods directly on each node. The maxUnavailable field (default 1) determines how many nodes can be without a running DaemonSet Pod at once. There is also a maxSurge option (beta in Kubernetes 1.22, GA in 1.25) that allows a new Pod to start on a node before the old one terminates.

# DaemonSet rolling update strategy
apiVersion: apps/v1
kind: DaemonSet
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 0

Tolerations and System Workloads

DaemonSets are often used for system-level components that must run on control-plane nodes or nodes with specific taints. By default, the DaemonSet controller automatically adds tolerations for node.kubernetes.io/unschedulable and several node-condition taints (not-ready, unreachable, disk pressure, memory pressure). Note that the control-plane taint (node-role.kubernetes.io/control-plane) is not among these — system DaemonSets like kube-proxy and CNI plugins can schedule on control-plane nodes because their manifests carry an explicit toleration for it, often a blanket operator: Exists toleration.

Deployments do not get these automatic tolerations. If you want a Deployment Pod to run on a tainted node, you must add the toleration explicitly.
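A minimal sketch of such an explicit opt-in, added to a Deployment's Pod template:

```yaml
# Illustrative fragment: a Deployment must tolerate taints explicitly
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
```

Without this toleration, the scheduler will simply never consider the tainted nodes for the Deployment's replicas.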

Host Access

DaemonSets frequently need to access the underlying node — reading /var/log, accessing the container runtime socket, or using host networking:

spec:
  template:
    spec:
      hostNetwork: true
      hostPID: true
      containers:
        - name: node-exporter
          image: prom/node-exporter:latest
          ports:
            - containerPort: 9100
              hostPort: 9100

Deployments rarely need this level of host access. If you find yourself mounting hostPath volumes in a Deployment, consider whether a DaemonSet is a better fit.

Scaling Behavior

DaemonSets scale with the cluster. If you go from 10 nodes to 50 nodes, you go from 10 DaemonSet Pods to 50 automatically — no HPA, no manual intervention.

Deployments scale independently of cluster size. You can have 3 replicas on a 100-node cluster or 100 replicas on a 3-node cluster (if resources allow). This makes Deployments the right tool for application workloads where replica count is driven by traffic, not infrastructure.
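A traffic-driven setup for the web-api Deployment above might look like this HorizontalPodAutoscaler (the target utilization and replica bounds are illustrative):

```yaml
# Illustrative HPA: scales web-api on CPU utilization, not cluster size
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```

There is no DaemonSet equivalent of this resource — a DaemonSet's "replica count" is always the number of matching nodes.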

When They Overlap

In some cases, the choice is not obvious. A metrics collector could be either a DaemonSet (one per node collecting node metrics) or a Deployment (centralized collector scraping endpoints). The question to ask is: does the workload have a one-to-one relationship with nodes? If yes, use a DaemonSet. If the workload is node-agnostic and driven by request volume, use a Deployment.

Use DaemonSet when...

  • You need a Pod on every node — log collection, metrics, or CNI plugins
  • The workload must scale with cluster size automatically
  • You're deploying node-level infrastructure like Fluentd, Datadog, or kube-proxy
  • You need to mount host paths or access node hardware

Use Deployment when...

  • You want a specific number of replicas regardless of node count
  • You're running a stateless application like a web server or API
  • You need advanced rollout strategies like blue-green via multiple Deployments
  • You want Horizontal Pod Autoscaler (HPA) to manage replica count

Model Interview Answer

A DaemonSet ensures exactly one Pod runs on every node in the cluster — when a new node joins, the DaemonSet controller automatically schedules a Pod on it. This makes DaemonSets perfect for node-level infrastructure like Fluentd for log collection or a monitoring agent. A Deployment, on the other hand, maintains a specified number of replicas distributed by the scheduler across available nodes. Deployments are the standard choice for stateless application workloads because they support declarative updates, rollbacks, and horizontal scaling via HPA.

Related Comparisons