How Does Pod-to-Pod Communication Work in Kubernetes?
Pod-to-Pod communication happens at the IP layer. Pods on the same node communicate through a virtual bridge, while Pods on different nodes use overlay networks or direct routing configured by the CNI plugin.
Detailed Answer
Same-Node Communication
When two Pods run on the same node, they communicate through a virtual Ethernet bridge. Here is how the plumbing works:
- Each Pod gets a veth pair (virtual Ethernet pair). One end lives in the Pod's network namespace; the other is attached to a bridge on the host (commonly cni0 or cbr0).
- When Pod A sends a packet to Pod B on the same node, the packet exits Pod A's veth interface, enters the host bridge, and is forwarded to Pod B's veth interface.
- No encapsulation or routing is needed because both Pods sit on the same Layer 2 segment.
Pod A (10.244.1.5)           Pod B (10.244.1.8)
        |                            |
     veth-a                       veth-b
        |                            |
        +---- cni0 bridge (host) ----+
You can inspect this on a node:
# List network interfaces on a node
ip link show
# View the bridge and connected interfaces
bridge link show
# Show routes for Pod CIDR
ip route show | grep 10.244
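The same plumbing can be reproduced by hand with network namespaces standing in for Pods. This is an illustrative sketch only (it requires root on a Linux host, and the namespace, bridge, and interface names are made up), not what a CNI plugin literally runs:

```shell
# Two network namespaces play the role of two Pods
ip netns add podA
ip netns add podB

# A bridge plays the role of cni0
ip link add demo0 type bridge
ip link set demo0 up

# veth pair for podA: one end inside the namespace, one on the bridge
ip link add vethA type veth peer name vethA-br
ip link set vethA netns podA
ip link set vethA-br master demo0
ip link set vethA-br up
ip netns exec podA ip addr add 10.244.1.5/24 dev vethA
ip netns exec podA ip link set vethA up

# Same for podB
ip link add vethB type veth peer name vethB-br
ip link set vethB netns podB
ip link set vethB-br master demo0
ip link set vethB-br up
ip netns exec podB ip addr add 10.244.1.8/24 dev vethB
ip netns exec podB ip link set vethB up

# "Pod A" can now reach "Pod B" across the bridge, Layer 2 only
ip netns exec podA ping -c 1 10.244.1.8
```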
Cross-Node Communication
When Pods are on different nodes, the CNI plugin must get the packet from one node to another. There are two primary approaches:
Overlay Networks (VXLAN, Geneve)
The packet from Pod A is encapsulated in an outer UDP packet by the CNI plugin on Node 1, sent across the physical network to Node 2, and decapsulated before delivery to Pod B.
Pod A (10.244.1.5) on Node 1
-> veth -> cni0 bridge -> VXLAN encapsulation
-> Physical network (UDP port 4789)
-> Node 2 -> VXLAN decapsulation -> cni0 bridge -> veth
-> Pod B (10.244.2.8) on Node 2
Flannel and Weave Net use this approach by default. It works on any underlying network but adds encapsulation overhead (about 50 bytes per packet).
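The ~50-byte figure is just the sum of the extra headers VXLAN wraps around each inner frame, which is also why overlay CNIs lower the Pod interface MTU. A quick back-of-the-envelope check (the header sizes are fixed by the protocols; the 1500-byte MTU is an assumption about the physical network):

```shell
# VXLAN overhead per packet:
# outer Ethernet (14) + outer IPv4 (20) + outer UDP (8) + VXLAN header (8)
OVERHEAD=$((14 + 20 + 8 + 8))
echo "encapsulation overhead: ${OVERHEAD} bytes"   # 50

# Effective Pod MTU on a 1500-byte physical network
echo "pod MTU: $((1500 - OVERHEAD)) bytes"         # 1450
```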
Direct Routing (BGP)
Calico in BGP mode advertises Pod CIDR routes directly to the network. Each node announces its Pod subnet, and the physical network infrastructure routes packets natively.
Pod A (10.244.1.5) on Node 1
-> veth -> routing table
-> Physical network (native IP routing)
-> Node 2 -> routing table -> veth
-> Pod B (10.244.2.8) on Node 2
This avoids encapsulation overhead but requires the network infrastructure to support BGP or the nodes to be on the same Layer 2 network.
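On a Calico node running BGP you can see these per-node Pod CIDR routes directly in the kernel routing table. An illustrative check (the example route and next-hop values depend entirely on your cluster):

```shell
# Routes learned via BGP carry proto "bird" (Calico's BGP daemon);
# they point straight at the peer node's IP, with no tunnel device in the path
ip route show proto bird
# e.g. 10.244.2.0/24 via 192.168.1.12 dev eth0

# Check BGP peering status between nodes
calicoctl node status
```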
A Practical Example
Deploy two Pods and test connectivity:
apiVersion: v1
kind: Pod
metadata:
  name: client
  labels:
    app: client
spec:
  nodeName: node-1
  containers:
  - name: client
    image: busybox:1.36
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  nodeName: node-2
  containers:
  - name: server
    image: nginx:1.25
    ports:
    - containerPort: 80
# Get the server Pod IP
SERVER_IP=$(kubectl get pod server -o jsonpath='{.status.podIP}')
# Test connectivity from client to server
kubectl exec client -- wget -qO- http://$SERVER_IP
# Test with ping
kubectl exec client -- ping -c 3 $SERVER_IP
The Role of NetworkPolicies
By default, all Pods can communicate freely. NetworkPolicies act as firewalls to restrict traffic:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-client
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: client
    ports:
    - protocol: TCP
      port: 80
This policy allows only Pods labeled app: client to reach the server on port 80. All other ingress traffic to the server is denied.
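You can verify the policy empirically: the labeled client still gets through, while an unlabeled Pod is dropped. A sketch, assuming the client and server Pods from the earlier example plus a throwaway tester Pod:

```shell
SERVER_IP=$(kubectl get pod server -o jsonpath='{.status.podIP}')

# Allowed: "client" matches the podSelector in the ingress rule
kubectl exec client -- wget -qO- -T 5 http://$SERVER_IP

# Denied: a Pod without the app=client label no longer matches,
# so the request times out instead of connecting
kubectl run tester --image=busybox:1.36 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/tester
kubectl exec tester -- wget -qO- -T 5 http://$SERVER_IP || echo "blocked as expected"
```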
Containers Within the Same Pod
Containers in the same Pod share a network namespace, so they reach each other over localhost (the loopback interface). This is fundamentally different from Pod-to-Pod communication: no veth, bridge, or routing is involved at all.
# Container A can reach Container B on localhost
curl http://localhost:8080
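A minimal sketch of such a Pod (the names and the polling loop are illustrative): the sidecar reaches nginx on localhost:80 because both containers share one network namespace.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox:1.36
    # Reaches the web container over the shared loopback interface
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]
```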
Debugging Connectivity Issues
# Check Pod IPs and node placement
kubectl get pods -o wide
# Exec into a Pod and test DNS + connectivity
kubectl exec -it client -- sh
ping <pod-ip>
nslookup <service-name>
wget -qO- http://<service-name>
# Check node routing tables
ip route show
# Verify the CNI plugin Pods are healthy (label varies by plugin; Calico shown)
kubectl get pods -n kube-system -l k8s-app=calico-node
Common causes of Pod-to-Pod communication failure include misconfigured CNI plugins, NetworkPolicies blocking traffic, nodes not having the correct routes, and firewall rules on the underlying infrastructure blocking overlay traffic (e.g., UDP port 4789 for VXLAN).
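When overlay traffic is the suspect, the fastest check is confirming that encapsulated packets actually leave one node and arrive at the other. An illustrative sequence to run on the nodes themselves (interface names vary by plugin; flannel.1 is Flannel's default VXLAN device):

```shell
# On the source node: watch for VXLAN-encapsulated packets on the wire
tcpdump -ni eth0 udp port 4789

# Inspect the VXLAN device the CNI plugin created
ip -d link show flannel.1

# If packets leave node 1 but never appear on node 2,
# look for host firewall rules dropping UDP 4789
iptables -L -n | grep 4789
```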
Why Interviewers Ask This
Interviewers ask this to gauge your understanding of how traffic flows between workloads and whether you can diagnose connectivity problems.
Key Takeaways
- Same-node Pods communicate through a virtual bridge without leaving the host
- Cross-node Pods rely on the CNI plugin for routing or encapsulation
- NetworkPolicies are the standard Kubernetes-native way to restrict Pod-to-Pod traffic, but they are only enforced when the CNI plugin supports them