Pod vs Container
Key Differences in Kubernetes
A container is a single isolated process running from a container image. A Pod is a Kubernetes abstraction that wraps one or more containers, giving them a shared network namespace, shared storage volumes, and a common lifecycle. Kubernetes schedules Pods, not containers — the Pod is the smallest deployable unit.
Side-by-Side Comparison
| Dimension | Pod | Container |
|---|---|---|
| Abstraction Level | Kubernetes scheduling unit — groups one or more containers | Single process running in an isolated namespace |
| Networking | Has its own IP address; all containers in the Pod share it | Shares the Pod's network namespace — uses localhost to reach other containers |
| Storage | Defines volumes that containers can mount | Mounts volumes defined by the Pod spec |
| Lifecycle | Created, scheduled, and terminated as a unit | Starts and stops within the Pod's lifecycle |
| Scheduling | Scheduled to a node by the Kubernetes scheduler | Not scheduled independently — always part of a Pod |
| IP Address | Gets a unique IP from the Pod CIDR | No individual IP — uses the Pod's IP |
| Runtime | A Kubernetes concept — not a runtime construct | Runs via a container runtime (containerd, CRI-O) |
| Scaling | Scaled by creating more Pods (via Deployments, etc.) | Not scaled independently — scaling means adding more Pods |
Detailed Breakdown
The Pod as a Wrapper
A Pod is not a process — it is a specification that tells Kubernetes how to run one or more containers together. The Pod provides the shared environment; the containers provide the running processes.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: app
    image: my-app:1.0.0
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: log-shipper
    image: fluentd:latest
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}
```
In this example, the Pod contains two containers. Both mount the `shared-data` volume — the app writes logs to `/data` and the log shipper reads from the same path. Both containers share the same IP address and can reach each other on `localhost`.
Shared Network Namespace
Every Pod gets a unique IP address, and all containers within the Pod share it. If the app container listens on port 8080 and a sidecar listens on port 9090, both are reachable on the Pod's IP. Because they share one network namespace, two containers in the same Pod cannot bind the same port.
```yaml
spec:
  containers:
  - name: app
    image: my-app:1.0.0
    ports:
    - containerPort: 8080
  - name: metrics
    image: metrics-exporter:1.0.0
    ports:
    - containerPort: 9090
```
The metrics container can scrape the app at `localhost:8080`, and other Pods can reach both containers at `<pod-ip>:8080` and `<pod-ip>:9090`.
This is fundamentally different from Docker Compose, where each container gets its own IP and containers communicate over a bridge network using service names.
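To make the contrast concrete, here is a minimal Docker Compose sketch of the same two-container setup. The image names mirror the Pod example above, and the `SCRAPE_TARGET` environment variable is a hypothetical way the exporter might be configured — the point is the addressing, not the specific variable:

```yaml
# docker-compose.yml (sketch): each service runs as its OWN container
# with its own IP on the Compose bridge network. Containers address each
# other by service name, not localhost.
services:
  app:
    image: my-app:1.0.0
  metrics:
    image: metrics-exporter:1.0.0
    environment:
      # Must use the service name "app"; "localhost" here would
      # resolve to the metrics container itself.
      SCRAPE_TARGET: http://app:8080/metrics
```

In a Pod, the same exporter would target `http://localhost:8080/metrics`, because both containers share one network namespace.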
One Container Per Pod (The Common Case)
Despite the ability to run multiple containers, the most common pattern is one container per Pod. You then use a Deployment to manage multiple Pod replicas:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
      - name: api
        image: my-api:2.0.0
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
```
Each replica is a Pod with one container. You scale by adding more Pods (not more containers in the same Pod).
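Because scaling always means more Pods, it can also be automated. A sketch of a HorizontalPodAutoscaler targeting the Deployment above (assuming a metrics source such as metrics-server is installed in the cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out when average CPU exceeds 80% of requests
```

The autoscaler adds or removes whole Pods; the container count inside each Pod never changes.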
Multi-Container Patterns
When you do use multiple containers in a Pod, it follows established patterns:
Sidecar Pattern — a helper container that enhances the main container:
```yaml
spec:
  containers:
  - name: app
    image: my-app:1.0.0
  - name: envoy-proxy
    image: envoyproxy/envoy:latest
```
Ambassador Pattern — a proxy that simplifies access to external services:
```yaml
spec:
  containers:
  - name: app
    image: my-app:1.0.0
  - name: cloud-sql-proxy
    image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:latest
```
Adapter Pattern — a container that transforms or normalizes output:
```yaml
spec:
  containers:
  - name: app
    image: legacy-app:1.0.0
  - name: log-adapter
    image: log-format-adapter:1.0.0
```
Init Containers
Pods can also define init containers that run before the main containers start:
```yaml
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    command: ['sh', '-c', 'until nc -z db-service 5432; do sleep 2; done']
  containers:
  - name: app
    image: my-app:1.0.0
```
Init containers run to completion sequentially. Only after all init containers succeed do the regular containers start. This is a Pod-level feature — individual containers cannot define their own initialization dependencies.
Lifecycle and Restarts
When a container crashes, the Pod does not die. The kubelet restarts the failed container within the same Pod, using the restartPolicy (default: Always). The Pod maintains its identity, IP, and volumes.
When a Pod is terminated (or the node fails), all containers in the Pod are terminated together. The controller (Deployment, StatefulSet, etc.) creates a new Pod, which may be scheduled on a different node with a different IP.
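The shared lifecycle is visible in the manifest: the termination grace period is set once for the whole Pod, while shutdown hooks are per container. A minimal sketch (the 5-second `preStop` sleep is an arbitrary illustration of a connection-drain delay, not a recommended value):

```yaml
spec:
  terminationGracePeriodSeconds: 30   # Pod-wide: all containers must exit within this window
  containers:
  - name: app
    image: my-app:1.0.0
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]   # per-container hook, runs before SIGTERM
```

If a container is still running when the grace period expires, the kubelet kills it with SIGKILL — again, a decision made at the Pod level.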
```yaml
spec:
  restartPolicy: Always   # Always | OnFailure | Never
```
Resource Management
Resource requests and limits are set per container, not per Pod. The Pod's total resource consumption is the sum of its containers:
```yaml
spec:
  containers:
  - name: app
    resources:
      requests:
        cpu: 200m
        memory: 256Mi
  - name: sidecar
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
# Total Pod request: 250m CPU, 320Mi memory
```
The scheduler uses the total Pod resource request to find a node with enough capacity.
The Container Runtime Interface
Kubernetes does not run containers directly. It delegates to a container runtime (containerd, CRI-O) via the Container Runtime Interface (CRI). The kubelet tells the runtime to create the Pod's sandbox (network namespace) and then start each container within it.
The Pod concept is Kubernetes-specific. If you run docker run directly, there is no Pod — you just have a container. The Pod abstraction adds scheduling, health checking, volume sharing, and lifecycle management on top of the raw container primitive.
Use Pod when...
- You want to deploy one or more tightly coupled containers as a single unit
- Containers need to share network (localhost) and storage volumes
- You need sidecar patterns like log shippers, proxies, or adapters
- You want Kubernetes to manage scheduling, health checks, and lifecycle
Use Container when...
- You're defining the application process that runs inside a Pod
- You need to specify the image, command, ports, and resource limits
- You're adding a sidecar container to an existing Pod spec
- You're running containers outside Kubernetes (Docker, Podman)
Model Interview Answer
“A container is a single isolated process running a specific image. A Pod is a Kubernetes abstraction that wraps one or more containers into a co-scheduled, co-located group. All containers in a Pod share the same network namespace — they have the same IP address and can communicate over localhost. They also share storage volumes and have a common lifecycle. Kubernetes never schedules a bare container; it always schedules Pods. The most common pattern is one container per Pod, but multi-container Pods are used for sidecars like log collectors, service mesh proxies, or init containers that prepare the environment before the main container starts.”