What Are the Kubernetes Cluster Networking Requirements?

Level: intermediate | Tags: networking, devops, sre, CKA
TL;DR

Kubernetes requires three distinct network segments: a Pod network (Pod CIDR) where every Pod gets a unique IP, a Service network (Service CIDR) for virtual ClusterIPs, and a Node network for host-to-host communication. All three must be non-overlapping.

Detailed Answer

The Three Network Segments

Every Kubernetes cluster operates across three distinct IP address spaces. Understanding and properly configuring these is essential for a functioning cluster.

1. Node Network

The node network is the physical or virtual network where Kubernetes nodes (machines) reside. This is typically managed by your infrastructure team, cloud provider, or VPC configuration. Nodes must be able to reach each other over this network.

Example: 192.168.0.0/24 for a small on-premises cluster.

2. Pod Network (Pod CIDR)

The Pod network provides IP addresses for all Pods in the cluster. It is configured at cluster initialization and implemented by the CNI plugin.

Example: 10.244.0.0/16 gives 65,536 addresses across the cluster.

```bash
# Set the Pod CIDR during cluster init with kubeadm
kubeadm init --pod-network-cidr=10.244.0.0/16

# Verify the Pod CIDR (works when kube-controller-manager runs with --cluster-cidr)
kubectl cluster-info dump | grep -m 1 cluster-cidr
```

3. Service Network (Service CIDR)

The Service network provides virtual IPs (ClusterIPs) for Kubernetes Services. These IPs are never assigned to a Pod or to a node's network interface; kube-proxy implements them on every node as iptables rules or IPVS virtual servers (IPVS mode binds them to a local dummy interface).

Example: 10.96.0.0/12 (the default, providing ~1 million addresses).

```bash
# Set the Service CIDR during cluster init
kubeadm init --service-cidr=10.96.0.0/16

# Show the ClusterIP of the "kubernetes" Service -- always the first
# usable IP of the Service CIDR (this prints one IP, not the range)
kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'
```
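That ClusterIP is predictable: Kubernetes reserves the first usable address of the Service CIDR (network base + 1) for the `kubernetes` Service. A minimal bash sketch of the derivation, using the default `10.96.0.0/12` as an example value:

```bash
#!/usr/bin/env bash
# Derive the ClusterIP of the "kubernetes" Service from the Service CIDR:
# it is the first usable host address, i.e. the network base + 1.
service_cidr="10.96.0.0/12"   # example value; substitute your cluster's CIDR

base="${service_cidr%/*}"                       # 10.96.0.0
IFS=. read -r a b c d <<< "$base"
ip_int=$(( (a << 24) | (b << 16) | (c << 8) | d ))
first=$(( ip_int + 1 ))                         # first usable host IP
printf '%d.%d.%d.%d\n' \
  $(( (first >> 24) & 255 )) $(( (first >> 16) & 255 )) \
  $(( (first >> 8) & 255 )) $(( first & 255 ))
# → 10.96.0.1
```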

CIDR Planning

All three networks must be non-overlapping. Here is a typical allocation:

```yaml
# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"       # Pod CIDR
  serviceSubnet: "10.96.0.0/16"    # Service CIDR
  dnsDomain: "cluster.local"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "192.168.1.10"  # Node network
```

Sizing the Pod CIDR:

| Cluster Size | Pod CIDR | Node Mask | Max Nodes | Pods per Node |
|---|---|---|---|---|
| Small | /20 | /24 | 16 | 254 |
| Medium | /16 | /24 | 256 | 254 |
| Large | /12 | /24 | 4,096 | 254 |
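The table's figures follow from simple mask arithmetic: max nodes = 2^(node mask − Pod CIDR mask), and Pods per node = 2^(32 − node mask) − 2 (network and broadcast addresses excluded). A quick bash sketch using the "Medium" row's values:

```bash
#!/usr/bin/env bash
# Mask arithmetic behind the sizing table (values from the "Medium" row).
pod_cidr_bits=16   # e.g. 10.244.0.0/16
node_mask_bits=24  # each node receives a /24 slice of the Pod CIDR

max_nodes=$(( 2 ** (node_mask_bits - pod_cidr_bits) ))   # 2^8  = 256
pods_per_node=$(( 2 ** (32 - node_mask_bits) - 2 ))      # 2^8 - 2 = 254
echo "max nodes: $max_nodes, pods per node: $pods_per_node"
```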

Required Port Access

Control Plane Nodes

| Port | Component | Purpose |
|---|---|---|
| 6443 | kube-apiserver | Kubernetes API |
| 2379-2380 | etcd | Client and peer communication |
| 10250 | kubelet | kubelet API |
| 10259 | kube-scheduler | Scheduler metrics |
| 10257 | kube-controller-manager | Controller metrics |

Worker Nodes

| Port | Component | Purpose |
|---|---|---|
| 10250 | kubelet | kubelet API |
| 10256 | kube-proxy | Health check |
| 30000-32767 | NodePort Services | External access |

CNI-Specific Ports

| Port | Protocol | CNI | Purpose |
|---|---|---|---|
| 4789 | UDP | Flannel/Calico VXLAN | VXLAN overlay |
| 8472 | UDP | Flannel (default) | VXLAN overlay |
| 179 | TCP | Calico BGP | BGP peering |
| 4240 | TCP | Cilium | Health checks |
| 8091 | TCP | Cilium | Hubble |
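As an illustration, opening the ports above might look like the following `ufw` rules. This is a sketch assuming an Ubuntu host with ufw; adapt it to your firewall, and in production restrict sources to your node network rather than allowing all origins:

```bash
# Control-plane node (sketch; restrict sources to your node CIDR in production)
sudo ufw allow 6443/tcp        # kube-apiserver
sudo ufw allow 2379:2380/tcp   # etcd client and peer
sudo ufw allow 10250/tcp       # kubelet API

# Worker node
sudo ufw allow 10250/tcp         # kubelet API
sudo ufw allow 30000:32767/tcp   # NodePort range
sudo ufw allow 8472/udp          # VXLAN overlay (Flannel default)
```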

Connectivity Validation

After setting up a cluster, verify that all networking requirements are met:

```bash
# 1. Verify node-to-node connectivity
kubectl get nodes -o wide
# Ensure all nodes are Ready

# 2. Verify Pod networking (Pod-to-Pod reachability)
kubectl run test1 --image=busybox --restart=Never -- sleep 3600
kubectl run test2 --image=busybox --restart=Never -- sleep 3600
TEST2_IP=$(kubectl get pod test2 -o jsonpath='{.status.podIP}')
kubectl exec test1 -- ping -c 3 $TEST2_IP

# 3. Verify Service networking
kubectl create service clusterip test-svc --tcp=80:80
kubectl exec test1 -- nslookup test-svc.default.svc.cluster.local

# 4. Check that cluster DNS is working
kubectl exec test1 -- nslookup kubernetes.default

# Cleanup
kubectl delete pod test1 test2
kubectl delete svc test-svc
```

Cloud Provider Considerations

In managed Kubernetes (EKS, GKE, AKS), much of the network planning is handled by the provider. However, you still need to be aware of:

  • VPC CIDR conflicts: Your Pod CIDR must not overlap with other VPC subnets or peered networks.
  • Secondary CIDR ranges: GKE and EKS can use secondary CIDR ranges for Pods, keeping them separate from node IPs.
  • Security groups / NSGs: Ensure the necessary ports are allowed between node groups and the control plane.

```bash
# AWS EKS: check the VPC CNI configuration
kubectl get daemonset aws-node -n kube-system -o yaml | grep WARM_IP_TARGET
```

Common Mistakes

The most frequent networking errors during cluster setup:

  • Overlapping CIDRs between the Pod, Service, and Node networks
  • CNI-specific ports (VXLAN, BGP) left closed in firewall rules
  • A Pod CIDR too small for expected cluster growth
  • VPN or peered network address ranges that conflict with the cluster CIDRs
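Overlaps are easy to catch before running `kubeadm init` with a little mask arithmetic: two CIDRs overlap exactly when their network addresses agree on the shorter of the two prefixes. A minimal bash sketch, using the example CIDRs from earlier in this answer:

```bash
#!/usr/bin/env bash
# Flag overlapping CIDRs among the three cluster networks.
ip_to_int() {           # dotted quad -> 32-bit integer
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

overlaps() {            # exit 0 if the two CIDR arguments overlap
  local n1="${1%/*}" m1="${1#*/}" n2="${2%/*}" m2="${2#*/}"
  local i1 i2 mask
  i1=$(ip_to_int "$n1"); i2=$(ip_to_int "$n2")
  (( mask = m1 < m2 ? m1 : m2 ))                  # compare at the shorter prefix
  (( (i1 >> (32 - mask)) == (i2 >> (32 - mask)) ))
}

pod="10.244.0.0/16"; svc="10.96.0.0/16"; node="192.168.0.0/24"
for pair in "$pod $svc" "$pod $node" "$svc $node"; do
  set -- $pair
  if overlaps "$1" "$2"; then echo "OVERLAP: $1 and $2"; fi
done
```

With these three example ranges the loop prints nothing, which is the desired result.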

Why Interviewers Ask This

Interviewers want to know if you can plan and architect the network for a production Kubernetes cluster, including CIDR allocation and connectivity prerequisites.

Common Follow-Up Questions

What happens if the Pod CIDR overlaps with the node network?
Routing conflicts occur. Pods may become unreachable or traffic may be misrouted to physical hosts instead of Pods.
How do you size the Pod CIDR for a large cluster?
Calculate based on max nodes times max Pods per node. A /16 with /24 per node supports 256 nodes with 254 Pods each.
What ports must be open between nodes?
API server (6443), etcd (2379-2380), kubelet (10250), NodePort range (30000-32767), and CNI-specific ports like VXLAN (4789).

Key Takeaways

  • Pod, Service, and Node networks must use non-overlapping CIDRs
  • Specific ports must be open between control plane and worker nodes
  • The CNI plugin implements the Pod network; Kubernetes handles the Service network