Kubernetes Networking Interview Questions

Kubernetes requires that every Pod gets its own IP address and that all Pods can communicate with each other without NAT. This flat networking model simplifies service discovery and application design.
Why Kubernetes Networking Matters in Interviews
Networking is arguably the most challenging domain in Kubernetes, and interviewers know it. Questions here reveal whether you understand the infrastructure that makes everything else work, from Pod communication to Service routing to external access.
Be prepared to explain the Kubernetes networking model and its three fundamental requirements: Pod-to-Pod, Pod-to-Service, and external-to-Service communication. Interviewers will ask you to compare CNI plugins, explain when to use Calico versus Cilium versus Flannel, and describe how kube-proxy translates Service definitions into actual packet-forwarding rules. You should also understand overlay networking versus direct routing, how to troubleshoot DNS resolution failures with CoreDNS, and what role a service mesh plays in production environments. Candidates who can trace a packet from one Pod to another across nodes, explaining each network hop, demonstrate the deep operational knowledge that distinguishes senior engineers.
All Questions
Pod-to-Pod communication happens at the IP layer. Pods on the same node communicate through a virtual bridge, while Pods on different nodes use overlay networks or direct routing configured by the CNI plugin.
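One way to internalize this is to trace the path yourself. The commands below are a sketch assuming a multi-node cluster with a bridge-based CNI; the Pod names, IPs, and the bridge name (cni0) are illustrative and vary by plugin.

```shell
# List Pods with their IPs and the nodes they run on (names are examples)
kubectl get pods -o wide

# Ping one Pod from another to confirm direct IP reachability (no NAT)
kubectl exec pod-a -- ping -c 3 10.244.1.15

# On a node, inspect the virtual bridge that same-node Pods attach to
# (often cni0 for Flannel and other bridge-based plugins)
ip addr show cni0

# Cross-node traffic follows routes installed by the CNI plugin
ip route | grep 10.244
```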
Kubernetes requires three distinct network segments: a Pod network (Pod CIDR) where every Pod gets a unique IP, a Service network (Service CIDR) for virtual ClusterIPs, and a Node network for host-to-host communication. All three must be non-overlapping.
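With kubeadm, for example, the Pod and Service ranges are set at cluster creation time. A minimal configuration sketch (the CIDR values here are common defaults, not requirements):

```yaml
# kubeadm ClusterConfiguration fragment (illustrative values)
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16       # Pod CIDR: one unique IP per Pod
  serviceSubnet: 10.96.0.0/12    # Service CIDR: virtual ClusterIPs
# The Node network (e.g. 192.168.0.0/24) is the host network itself and
# must not overlap with either range above.
```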
CNI plugins implement the Kubernetes networking model. Popular choices include Calico (BGP-based routing, supports NetworkPolicy), Cilium (eBPF-based, advanced observability and security), Flannel (simple overlay, no NetworkPolicy enforcement), and Weave Net (mesh overlay, now largely unmaintained). The right choice depends on scale, security, and feature requirements.
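NetworkPolicy support is a key differentiator between plugins. Below is a minimal example of the kind of policy that Calico or Cilium can enforce; the names, labels, and port are illustrative.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
spec:
  podSelector:
    matchLabels:
      app: api              # policy applies to Pods labeled app=api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend # only frontend Pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Note that a plugin without NetworkPolicy support, such as plain Flannel, will accept this object but silently not enforce it.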
CoreDNS is the default DNS server in Kubernetes. It runs as a Deployment in the kube-system namespace and resolves Service names, Pod records, and external domains. With the default ClusterFirst dnsPolicy, every Pod's /etc/resolv.conf points to the ClusterIP of the kube-dns Service, which fronts the CoreDNS Pods.
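A quick way to verify the resolution chain end to end, assuming a running Pod with nslookup available (the Pod name dns-test is a placeholder):

```shell
# Confirm the Pod's resolver points at the cluster DNS Service
kubectl exec dns-test -- cat /etc/resolv.conf

# Resolve a Service by its fully qualified cluster DNS name
kubectl exec dns-test -- nslookup kubernetes.default.svc.cluster.local

# The nameserver shown above should match this Service's ClusterIP
kubectl -n kube-system get svc kube-dns
```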
kube-proxy runs on every node and implements Kubernetes Service routing. It historically offered three modes: iptables (the long-standing default), IPVS (better performance at scale), and userspace (legacy, removed in Kubernetes 1.26); recent releases also add an nftables mode. It watches the API server for Service and EndpointSlice changes and programs forwarding rules accordingly.
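You can confirm which mode kube-proxy is actually running in. This sketch assumes a kubeadm-style cluster, where the configuration lives in a ConfigMap named kube-proxy; log wording varies by version.

```shell
# Read the configured proxy mode (an empty value means the iptables default)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -w mode

# Or check the startup logs of a kube-proxy Pod for the chosen proxier
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50 | grep -i proxier
```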
iptables mode uses sequential rule chains with O(n) lookup time, while IPVS mode uses kernel-level hash tables with O(1) lookup. IPVS supports multiple load-balancing algorithms and handles thousands of Services efficiently, making it the better choice for large production clusters.
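Switching to IPVS is a configuration change rather than a code change. A sketch of the relevant KubeProxyConfiguration fields (the scheduler choice is illustrative):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs        # use kernel IPVS hash tables instead of iptables chains
ipvs:
  scheduler: rr   # load-balancing algorithm: rr, lc, sh, among others
```

Nodes must have the IPVS kernel modules (ip_vs and related) loaded, or kube-proxy falls back to iptables mode.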
Kubernetes network troubleshooting follows a systematic approach: verify DNS resolution, test Pod-to-Pod connectivity, check Service endpoints, inspect NetworkPolicies, and examine CNI plugin and kube-proxy health. Tools like kubectl exec, nslookup, curl, and tcpdump are essential.
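The steps above map to a concrete command sequence. A sketch, with Pod, Service, and IP values as placeholders:

```shell
# 1. DNS resolution from inside a Pod
kubectl exec my-pod -- nslookup my-service.my-namespace.svc.cluster.local

# 2. Pod-to-Pod connectivity by IP (bypasses DNS and Service routing)
kubectl exec my-pod -- curl -s --max-time 3 http://10.244.2.7:8080/

# 3. Does the Service actually have ready endpoints?
kubectl get endpointslices -l kubernetes.io/service-name=my-service

# 4. Are any NetworkPolicies selecting the Pods involved?
kubectl get networkpolicy -A

# 5. CNI plugin and kube-proxy health on the nodes
kubectl -n kube-system get pods -o wide | grep -E 'proxy|calico|cilium|flannel'
```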