What Is the Kubernetes Pod Networking Model?
Kubernetes requires that every Pod gets its own IP address, and all Pods can communicate with each other without NAT. This flat networking model simplifies service discovery and application design.
Detailed Answer
The Core Networking Contract
Kubernetes defines a simple but strict networking model built on three fundamental rules:
- Every Pod gets its own unique IP address.
- All Pods can communicate with every other Pod without NAT (Network Address Translation).
- Agents on a node (e.g., kubelet, system daemons) can communicate with all Pods on that node.
This is often called the "flat network" model because, from a Pod's perspective, the network looks like one large, flat Layer 3 space where every participant is directly reachable.
Why a Flat Network Matters
In traditional container environments such as plain Docker, containers use port mapping on the host. This leads to port conflicts, complicates service discovery, and makes applications harder to reason about. Kubernetes eliminates these problems by giving every Pod a first-class IP.
Because every Pod has its own IP, applications running inside containers can use standard ports (e.g., 80 for HTTP, 5432 for PostgreSQL) without worrying about collisions. This makes it straightforward to migrate existing applications into Kubernetes.
How It Works in Practice
The Kubernetes networking model is an interface specification, not an implementation. The actual network plumbing is handled by a CNI (Container Network Interface) plugin such as Calico, Cilium, Flannel, or Weave Net.
When a new Pod is scheduled onto a node, the following sequence occurs:
- The kubelet calls the container runtime to create the Pod sandbox.
- The container runtime invokes the configured CNI plugin.
- The CNI plugin allocates an IP from the node's Pod CIDR and configures the network namespace.
- Routes or overlay tunnels are set up so other nodes can reach the Pod.
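You can see the plugin side of this sequence directly on a node. The paths below are the conventional CNI defaults (some distributions place them elsewhere), so treat this as a sketch of where to look rather than a universal layout:

```shell
# On a node: list the installed CNI plugin binaries.
ls /opt/cni/bin/

# Show the network configuration the container runtime hands to the plugin.
cat /etc/cni/net.d/*.conf*
```

The configuration file names the plugin to invoke and usually includes the node's Pod subnet, which is where the allocated IPs in step 3 come from.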
You can inspect Pod IPs with:
kubectl get pods -o wide
Output:
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE
nginx-abc   1/1     Running   0          5m    10.244.1.12   node-1
redis-xyz   1/1     Running   0          3m    10.244.2.8    node-2
Pod CIDR and Node Allocation
The cluster-wide Pod CIDR (for example 10.244.0.0/16) is typically defined at cluster creation time. Each node receives a smaller subnet from this range:
# Example: kube-controller-manager flags
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
spec:
  containers:
  - command:
    - kube-controller-manager
    - --cluster-cidr=10.244.0.0/16
    - --allocate-node-cidrs=true
    - --node-cidr-mask-size=24
With a /24 mask per node, each node can host up to 254 Pods, and the /16 cluster CIDR supports up to 256 nodes.
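The arithmetic behind those numbers can be checked directly: a /24 subnet leaves 8 host bits (minus the network and broadcast addresses), and splitting a /16 into /24s leaves 8 subnet bits:

```shell
# Usable host addresses per /24 node subnet: 2^(32-24) minus network and broadcast.
pods_per_node=$(( (1 << (32 - 24)) - 2 ))

# Number of /24 node subnets that fit in the /16 cluster CIDR: 2^(24-16).
max_nodes=$(( 1 << (24 - 16) ))

echo "pods per node: $pods_per_node"
echo "max nodes:     $max_nodes"
```

To see the actual allocation on a running cluster, `kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'` prints each node's assigned Pod subnet.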
Containers Within a Pod
All containers inside the same Pod share the same network namespace. This means they share:
- The same IP address
- The same loopback interface (localhost)
- The same port space
Containers in a Pod communicate via localhost:
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-example
spec:
  containers:
  - name: app
    image: myapp:latest
    ports:
    - containerPort: 8080
  - name: sidecar
    image: log-collector:latest
    env:
    - name: APP_URL
      value: "http://localhost:8080"
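A quick way to confirm the shared namespace is to exec into each container and observe the same IP, or reach one container's port from the other via localhost. The Pod and container names below match the illustrative manifest above, and the commands assume the container images include `wget` and `hostname`:

```shell
# From the sidecar container, reach the app container over localhost.
kubectl exec sidecar-example -c sidecar -- wget -qO- http://localhost:8080

# Both containers report the same Pod IP, since they share one network namespace.
kubectl exec sidecar-example -c app -- hostname -i
kubectl exec sidecar-example -c sidecar -- hostname -i
```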
Verifying Network Connectivity
To test Pod-to-Pod communication, you can run a temporary debug Pod:
# Start a debug pod
kubectl run nettest --image=busybox --rm -it --restart=Never -- sh
# Inside the pod, ping another pod by IP
ping 10.244.1.12
# Or use wget to test HTTP against the nginx pod
wget -qO- http://10.244.1.12
Common Misconceptions
A frequent mistake is assuming Pods communicate through the Kubernetes Service abstraction only. While Services provide stable endpoints and load balancing, the underlying Pod-to-Pod connectivity works independently. Services are built on top of the flat Pod network, not the other way around.
Another misconception is that Pod IPs are stable. They are not: when a Pod is deleted and recreated, it gets a new IP. This is why Services and DNS exist, providing stable names in front of ephemeral Pod IPs.
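A minimal sketch of that layering: put a Service in front of a workload and resolve its stable DNS name, which keeps working as the backing Pods (and their IPs) come and go. The Deployment name `web` here is illustrative:

```shell
# Expose a Deployment behind a Service; the Service name stays stable
# even as the Pod IPs behind it churn.
kubectl expose deployment web --port=80 --target-port=8080

# Cluster DNS resolves the stable name regardless of which Pods back it.
kubectl run dnstest --image=busybox --rm -it --restart=Never -- \
  nslookup web.default.svc.cluster.local
```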
Summary
The Kubernetes Pod networking model keeps things simple: every Pod gets an IP, and every Pod can reach every other Pod. CNI plugins implement this contract using techniques like VXLAN overlays, BGP peering, or native cloud-provider routing. Understanding this model is foundational to troubleshooting connectivity issues and designing network policies.
Why Interviewers Ask This
Interviewers ask this to verify you understand the foundational networking contract that all Kubernetes components and CNI plugins must satisfy.
Key Takeaways
- Every Pod gets a unique, routable IP address
- Pods communicate across nodes without NAT
- The CNI plugin is responsible for implementing the networking model