containerd vs Docker
Key Differences in Kubernetes
containerd is a lightweight, industry-standard container runtime that Kubernetes uses directly via the CRI to manage container lifecycle. Docker is a full container platform that includes containerd internally plus additional tooling for building images, managing volumes, and developer experience. Since Kubernetes 1.24, Docker is no longer supported as a runtime — containerd (or CRI-O) is used directly.
Side-by-Side Comparison
| Dimension | containerd | Docker |
|---|---|---|
| Scope | Focused container runtime — manages container lifecycle only | Full platform — runtime, image building, CLI, networking, volumes |
| CRI Support | Native CRI plugin — speaks CRI directly to the kubelet | Required dockershim adapter (removed in Kubernetes 1.24) |
| Architecture | Single daemon: containerd → runc → container | Multiple layers: dockerd → containerd → runc → container |
| Resource Overhead | Lower memory and CPU usage — fewer moving parts | Higher overhead — extra Docker daemon layer |
| Image Building | No built-in image build — use Docker, BuildKit, or Buildah separately | Built-in docker build command with Dockerfile support |
| CLI | ctr (low-level) and nerdctl (Docker-compatible) CLI tools | docker CLI — feature-rich and widely known |
| Kubernetes Status | Recommended runtime for Kubernetes — default in most distributions | Removed as a Kubernetes runtime since v1.24 |
Detailed Breakdown
Architecture Comparison
The key difference is the number of layers between the kubelet and the actual container:
With Docker (pre-1.24):
kubelet → dockershim → Docker daemon (dockerd) → containerd → runc → container
With containerd (current):
kubelet → CRI plugin → containerd → runc → container
Docker was never designed as a Kubernetes runtime — it was designed for developers running containers on their laptops. The kubelet needed a translation layer (dockershim) to convert CRI calls into Docker API calls. Docker then delegated to containerd, which did the actual work.
Removing Docker eliminates two unnecessary layers (dockershim and dockerd), reducing latency, resource usage, and potential failure points.
The Dockershim Removal Timeline
Kubernetes 1.20: dockershim deprecated (warning logs)
Kubernetes 1.24: dockershim removed from kubelet
After 1.24, a node configured to use Docker Engine as its runtime cannot start the kubelet unless the third-party cri-dockerd adapter is installed. In practice, migrate to containerd or CRI-O.
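You can see where each node stands with `kubectl get nodes -o wide`, whose CONTAINER-RUNTIME column reports strings like `containerd://1.7.13` or `docker://20.10.21`. A minimal sketch classifying such a string (the version numbers are hypothetical examples):

```shell
# Classify the runtime string kubectl reports per node in
# .status.nodeInfo.containerRuntimeVersion
classify() {
  case "$1" in
    containerd://*) echo "ok: containerd" ;;
    cri-o://*)      echo "ok: CRI-O" ;;
    docker://*)     echo "migrate: dockershim removed in 1.24" ;;
    *)              echo "unknown runtime" ;;
  esac
}

classify "containerd://1.7.13"   # ok: containerd
classify "docker://20.10.21"     # migrate: dockershim removed in 1.24
```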
Configuring containerd for Kubernetes
containerd is configured via /etc/containerd/config.toml:
```toml
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"

  [plugins."io.containerd.grpc.v1.cri".containerd]
    default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true

  [plugins."io.containerd.grpc.v1.cri".registry]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://registry-1.docker.io"]
```
The kubelet connects to containerd's CRI socket:
```yaml
# kubelet configuration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```
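The endpoint setting pairs with the cgroup driver: `SystemdCgroup = true` on the containerd side must match `cgroupDriver: systemd` on the kubelet side, or Pods will misbehave under memory pressure. A minimal sketch of the relevant KubeletConfiguration fields on recent kubelets (the path and the other fields depend on how the kubelet is installed):

```yaml
# /var/lib/kubelet/config.yaml (fragment)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
cgroupDriver: systemd   # must match SystemdCgroup = true in containerd's config.toml
```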
Image Compatibility
Container images are standardized by the Open Container Initiative (OCI). Images built with Docker, Buildah, kaniko, or any OCI-compliant tool work identically with containerd:
```shell
# Build with Docker (on a developer machine or CI)
docker build -t registry.example.com/my-app:1.0.0 .
docker push registry.example.com/my-app:1.0.0

# containerd on the Kubernetes node pulls and runs it
# The image format is identical — OCI standard
```
Your Dockerfiles, multi-stage builds, and image layers work exactly the same. The change is purely at the runtime level on Kubernetes nodes.
CLI Tools
Docker provides the familiar docker CLI. containerd provides ctr (low-level) and the community-maintained nerdctl (Docker-compatible):
```shell
# Docker CLI (developer machine)
docker ps
docker images
docker run -it ubuntu bash

# containerd ctr (low-level, on Kubernetes nodes; Kubernetes-managed
# containers live in the k8s.io namespace)
ctr -n k8s.io containers list
ctr -n k8s.io images list

# nerdctl (Docker-compatible CLI for containerd)
nerdctl ps
nerdctl images
nerdctl run -it ubuntu bash
```
On Kubernetes nodes, you rarely interact with the container runtime directly. You use kubectl to manage Pods, and the kubelet handles all communication with containerd. The CLI tools are mainly useful for debugging node-level issues:
```shell
# Debug: list containers and Pods managed by Kubernetes
crictl ps
crictl pods
crictl images
crictl logs <container-id>
```
crictl is the standard CRI debugging tool that works with any CRI-compatible runtime.
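crictl reads its endpoint from `/etc/crictl.yaml`, which saves repeating `--runtime-endpoint` on every invocation. A typical configuration for containerd:

```yaml
# /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
```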
Performance Impact
Benchmarks generally show containerd outperforming Docker as a Kubernetes runtime; treat the figures below as indicative rather than guaranteed:
- Pod startup latency: ~10-15% faster without the Docker daemon layer
- Memory usage: containerd uses ~50-70MB vs Docker's ~100-150MB per node
- CPU overhead: Lower syscall overhead with fewer daemon layers
- Image pull: Slightly faster due to direct CRI integration
For clusters with hundreds of nodes, these savings compound significantly.
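The compounding effect is simple arithmetic; the 500-node count and the midpoint figures below are illustrative assumptions, not measurements:

```shell
# Back-of-envelope: per-node memory saved by dropping the Docker daemon
# layer, multiplied across a hypothetical 500-node cluster.
nodes=500
docker_mb=125        # midpoint of the ~100-150MB figure above
containerd_mb=60     # midpoint of the ~50-70MB figure above
saved_mb=$(( nodes * (docker_mb - containerd_mb) ))
echo "${saved_mb} MB (~$(( saved_mb / 1024 )) GB) saved cluster-wide"
```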
Migration from Docker to containerd
Most managed Kubernetes services (GKE, EKS, AKS) completed this migration automatically. For self-managed clusters:
```shell
# 1. Drain the node
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# 2. Stop the kubelet and Docker
systemctl stop kubelet
systemctl stop docker

# 3. Install and configure containerd
apt-get install containerd
containerd config default > /etc/containerd/config.toml
# Edit config.toml to set SystemdCgroup = true

# 4. Update the kubelet configuration
# Point containerRuntimeEndpoint at the containerd socket

# 5. Start containerd and the kubelet
systemctl start containerd
systemctl start kubelet

# 6. Uncordon the node
kubectl uncordon node-1
```
CRI-O — The Alternative
containerd is not the only CRI runtime. CRI-O is another option, purpose-built for Kubernetes:
- containerd: General-purpose runtime, also used outside Kubernetes
- CRI-O: Purpose-built for Kubernetes, nothing else
Both are production-grade. containerd is more widely adopted because it was Docker's runtime and has broader tooling support. CRI-O is used by default in OpenShift.
Building Images Without Docker
Since Docker is no longer on Kubernetes nodes, image building in CI/CD has adapted:
```shell
# BuildKit (standalone, from the Docker team); needs both the context
# and the Dockerfile directory as --local inputs
buildctl build --frontend=dockerfile.v0 \
  --local context=. --local dockerfile=. \
  --output type=image,name=registry.example.com/my-app:1.0.0,push=true

# kaniko (runs in a container, no daemon needed; in Kubernetes CI it
# typically runs as a Pod). Shown here via docker run for illustration,
# with the build context mounted into the container:
docker run -v "$PWD":/workspace gcr.io/kaniko-project/executor:latest \
  --dockerfile=/workspace/Dockerfile --context=dir:///workspace \
  --destination=registry.example.com/my-app:1.0.0

# Buildah (daemonless, OCI-native; newer versions also accept `buildah build`)
buildah bud -t my-app:1.0.0 .
buildah push my-app:1.0.0 registry.example.com/my-app:1.0.0
```
These tools build OCI images without requiring a Docker daemon, making them suitable for CI/CD pipelines running inside Kubernetes.
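As one example of in-cluster building, kaniko is usually run as a Pod. The sketch below uses a placeholder destination and a hypothetical git context URL; real setups also mount registry credentials (e.g. a docker-config Secret):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=Dockerfile
    - --context=git://github.com/example/my-app.git
    - --destination=registry.example.com/my-app:1.0.0
```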
Use containerd when...
- You're running a Kubernetes cluster (this is the standard runtime)
- You want minimal overhead and attack surface on cluster nodes
- You're building a production cluster and need a CRI-native runtime
- You want the runtime used by GKE, EKS, AKS, and most managed Kubernetes offerings
Use Docker when...
- You're building container images on developer machines
- You need a local development environment with docker run and docker compose
- You want a familiar CLI for container management outside of Kubernetes
- You're running containers on a single host without Kubernetes
Model Interview Answer
“containerd is the core container runtime that Kubernetes uses to manage containers. It handles pulling images, creating containers, and managing their lifecycle via the Container Runtime Interface (CRI). Docker is a full container platform that actually contains containerd inside it, plus additional tooling for image building, networking, and developer UX. Kubernetes 1.24 removed Docker support because the kubelet had to use a shim layer (dockershim) to talk to Docker, which then delegated to containerd anyway — an unnecessary indirection. Now the kubelet talks directly to containerd via CRI. Importantly, Docker images are OCI-compliant, so images built with Docker work perfectly with containerd — only the runtime layer changed.”