Kubernetes vs Docker
Key Differences
Docker builds and runs containers on a single machine. Kubernetes orchestrates containers across a cluster of machines, handling scheduling, scaling, self-healing, and service discovery. They are complementary — Docker (or containerd) creates the containers, and Kubernetes manages where and how they run at scale.
Side-by-Side Comparison
| Dimension | Kubernetes | Docker |
|---|---|---|
| Primary Role | Container orchestration platform — manages containers across a cluster | Container runtime and tooling — builds images and runs containers |
| Scope | Multi-node cluster — distributes workloads across machines | Single machine — runs containers on one host |
| Scaling | Automatic horizontal scaling with HPA and cluster autoscaler | Manual — you run more containers yourself |
| Self-Healing | Restarts failed containers, replaces Pods, reschedules on node failure | Restart policies only — no rescheduling to another host |
| Networking | Cluster-wide networking with Services, Ingress, DNS, and network policies | Docker bridge network, port mapping on single host |
| Service Discovery | Built-in DNS-based service discovery (CoreDNS) | Basic container linking or Docker Compose service names |
| Storage | PersistentVolumes, StorageClasses, dynamic provisioning across nodes | Docker volumes on the local host |
| Configuration | Declarative YAML manifests applied to the API server | docker run commands or Docker Compose YAML |
| Production Readiness | Designed for production — rolling updates, RBAC, resource limits, audit logs | Designed for development and single-host deployments |
Detailed Breakdown
Different Layers of the Stack
Docker and Kubernetes operate at different layers:
Application Layer: Your code packaged as container images
Orchestration Layer: Kubernetes — scheduling, scaling, networking
Runtime Layer: containerd / CRI-O — running containers
Image Layer: Docker / Buildah — building container images
OS Layer: Linux kernel — namespaces, cgroups
Docker spans the runtime and image layers. Kubernetes operates at the orchestration layer and delegates container runtime to containerd or CRI-O via the Container Runtime Interface (CRI).
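The value of CRI is that the kubelet programs against an interface rather than against any particular runtime. The following toy Python sketch illustrates that decoupling; the class and method names are illustrative, not the real CRI gRPC API:

```python
from abc import ABC, abstractmethod

# Toy model of the Container Runtime Interface (CRI): the kubelet
# depends on an abstract runtime, so containerd and CRI-O are
# interchangeable. All names here are illustrative, not the real API.
class ContainerRuntime(ABC):
    @abstractmethod
    def run_container(self, image: str) -> str: ...

class Containerd(ContainerRuntime):
    def run_container(self, image: str) -> str:
        return f"containerd started {image}"

class CRIO(ContainerRuntime):
    def run_container(self, image: str) -> str:
        return f"cri-o started {image}"

class Kubelet:
    def __init__(self, runtime: ContainerRuntime):
        self.runtime = runtime  # any CRI-compliant runtime plugs in here

    def start_pod_container(self, image: str) -> str:
        return self.runtime.run_container(image)

print(Kubelet(Containerd()).start_pod_container("nginx:latest"))
```

Swapping `Containerd()` for `CRIO()` changes nothing in the `Kubelet` code, which is exactly the point of the interface.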
Running a Container: Docker vs Kubernetes
With Docker, you run a container directly:
```shell
docker run -d -p 8080:80 --name web nginx:latest
```
This starts one container on the current machine. If the machine goes down, the container is gone.
With Kubernetes, you describe the desired state:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
```
Kubernetes schedules 3 replicas across available nodes, creates a load-balanced Service, and if a Pod fails or a node goes down, it automatically reschedules the Pod on a healthy node.
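The self-healing described above comes from a control loop that continually compares desired state with observed state and acts on the difference. A minimal Python sketch of that idea (heavily simplified; real controllers watch the API server and manage Pod objects):

```python
import itertools

# Toy reconciliation loop: the "controller" keeps the number of
# running pods equal to the desired replica count, creating
# replacements whenever one disappears. Names are illustrative.
def reconcile(running: set, desired: int, _ids=itertools.count(1)) -> set:
    running = set(running)
    while len(running) < desired:
        running.add(f"web-{next(_ids)}")  # schedule a replacement pod
    return running

pods = reconcile(set(), 3)   # initial rollout: three replicas
pods.pop()                   # a pod (or its node) dies
pods = reconcile(pods, 3)    # the loop restores the desired count
print(len(pods))             # 3
```

Kubernetes applies this same reconcile-toward-desired-state pattern to Deployments, Services, and nearly every other resource.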
Docker Compose vs Kubernetes
Docker Compose is closer to Kubernetes in concept — it defines multi-container applications:
```yaml
# docker-compose.yml
version: "3.8"
services:
  web:
    image: my-app:1.0.0
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```
This is great for local development. But Docker Compose runs everything on a single host, has no auto-scaling, no rolling updates, no self-healing across machines, and no built-in service mesh or network policies.
The Kubernetes equivalent provides all of those features but requires more configuration — Deployments, Services, ConfigMaps, Secrets, PVCs, and potentially Ingress resources.
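To make the extra configuration concrete, here is one hedged sketch of how the Compose `db` service above might map onto Kubernetes objects. The object names, storage size, and single-replica Deployment are illustrative choices, not the only possible translation:

```yaml
# Illustrative Kubernetes equivalent of the Compose "db" service.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
stringData:
  POSTGRES_PASSWORD: secret   # Compose inlined this; Kubernetes separates it
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi            # size is an assumption
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          envFrom:
            - secretRef:
                name: db-credentials
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: db-data
```

One Compose service becomes three objects, which is the configuration overhead the text refers to, but each object can now be secured, scheduled, and provisioned independently across the cluster.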
The Docker Deprecation in Kubernetes
In Kubernetes 1.24, dockershim (the component that let the kubelet use Docker Engine as a container runtime) was removed. This caused confusion but was less dramatic than it sounded:
- Docker internally used containerd as its actual container runtime
- Kubernetes talked to Docker, which talked to containerd — an unnecessary layer
- Kubernetes now talks to containerd directly via CRI
- Container images built with Docker still work perfectly — OCI images are the standard
```
Before: kubelet → dockershim → Docker → containerd → container
After:  kubelet → CRI → containerd → container
```
The change removed a middleman. Your Dockerfiles, Docker build process, and container images are completely unaffected.
When to Use Each
Use Docker for:
- Building container images (`docker build`, `docker push`)
- Local development and testing
- Running quick one-off containers
- Learning container concepts
Use Kubernetes for:
- Running production workloads at scale
- Multi-service architectures
- Workloads that need self-healing and auto-scaling
- Teams that need RBAC, namespaces, and resource quotas
Development Workflow
In practice, most teams use both:
- Developers write a `Dockerfile` and use Docker to build and test locally
- CI/CD pipelines build the image and push it to a registry
- Kubernetes pulls the image from the registry and runs it in production
```shell
# Developer workflow
docker build -t my-app:1.2.0 .
docker run -p 8080:80 my-app:1.2.0                        # local testing
docker tag my-app:1.2.0 registry.example.com/my-app:1.2.0 # qualify with registry
docker push registry.example.com/my-app:1.2.0

# Production deployment
kubectl set image deployment/my-app app=registry.example.com/my-app:1.2.0
```
Local Kubernetes Options
For local development with Kubernetes, several tools bridge the gap:
- minikube — runs a single-node Kubernetes cluster in a VM or container
- kind (Kubernetes in Docker) — runs Kubernetes nodes as Docker containers
- k3d — runs lightweight k3s clusters in Docker containers
- Docker Desktop — includes a built-in single-node Kubernetes cluster
These tools use Docker as the underlying infrastructure to run Kubernetes itself, which is another way Docker and Kubernetes complement each other.
Networking Differences
Docker networking is simple — bridge network on a single host with port mapping:
```shell
docker run -p 8080:80 nginx   # maps host:8080 → container:80
```
Kubernetes networking is a flat network model:
- Every Pod gets its own IP address
- Pods can communicate across nodes without NAT
- Services provide stable virtual IPs and DNS names
- Network Policies control traffic flow between Pods
- Ingress manages external HTTP routing
This cluster-wide networking is a fundamental capability that Docker alone does not provide.
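As one concrete example of the network-policy point above, a NetworkPolicy can restrict which Pods may reach a database. This fragment is a sketch; the `app=web` / `app=db` labels and the Postgres port are assumptions:

```yaml
# Illustrative NetworkPolicy: only Pods labeled app=web may connect
# to Pods labeled app=db on TCP port 5432.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web
      ports:
        - protocol: TCP
          port: 5432
```

Nothing comparable exists in single-host Docker networking, where isolation is limited to separate bridge networks.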
Use Kubernetes when...
- You're running production workloads that need high availability
- You need automatic scaling, self-healing, and rolling updates
- Your application spans multiple services across multiple machines
- You need service discovery, load balancing, and network policies
- You want declarative infrastructure as code
Use Docker when...
- You're building and testing container images locally
- You need a simple local development environment
- You're running a single-host application with Docker Compose
- You want a quick container runtime without cluster overhead
Model Interview Answer
“Docker and Kubernetes solve different problems and work at different levels. Docker is a container runtime — it builds container images and runs containers on a single machine. Kubernetes is a container orchestration platform — it takes containers and distributes them across a cluster of machines, providing scheduling, scaling, self-healing, service discovery, and declarative configuration. In modern Kubernetes clusters, Docker as a runtime has been replaced by containerd (which was actually the core runtime inside Docker). Kubernetes tells containerd to start and stop containers, while managing the higher-level concerns like placement, networking, and lifecycle.”