What is CNI (Container Network Interface) and how does it work in Kubernetes?

intermediate | architecture, devops, sre, cloud architect, CKA
TL;DR

CNI is a specification and set of libraries for configuring network interfaces in Linux containers. In Kubernetes, the container runtime (acting on the kubelet's behalf via CRI) invokes a CNI plugin when creating or destroying pods to assign IP addresses, set up routes, and connect pods to the cluster network. Popular CNI plugins include Calico, Cilium, Flannel, and Weave Net.

Detailed Answer

The Container Network Interface (CNI) is a CNCF specification that defines how network plugins should configure networking for containers. In Kubernetes, CNI is the standard mechanism by which pods get their network connectivity, including IP address assignment, routing between pods, and integration with the broader network.

Kubernetes Networking Model

Before diving into CNI, it is important to understand the networking requirements Kubernetes imposes:

  1. Every pod receives a unique IP address
  2. Pods on the same node can communicate with each other directly
  3. Pods on different nodes can communicate without NAT
  4. The IP a pod sees for itself is the same IP that other pods see

CNI plugins implement these requirements using various techniques: overlay networks (VXLAN, Geneve), direct routing (BGP), or cloud-native networking (AWS VPC CNI, Azure CNI).
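To make the per-node addressing concrete, here is a small Python sketch assuming a 10.244.0.0/16 cluster pod CIDR (a common Flannel default). With --allocate-node-cidrs enabled, kube-controller-manager carves the cluster CIDR into one subnet per node, and many CNI plugins then allocate pod IPs from that node's slice:

```python
import ipaddress

# Assumed cluster pod CIDR (10.244.0.0/16 is a common Flannel default).
# kube-controller-manager assigns each node one slice of it; the CNI
# plugin on that node hands out pod IPs from the slice.
cluster_cidr = ipaddress.ip_network("10.244.0.0/16")
node_subnets = list(cluster_cidr.subnets(new_prefix=24))

for node, subnet in zip(["node-a", "node-b", "node-c"], node_subnets):
    print(f"{node}: pod IPs from {subnet}")
# node-a: pod IPs from 10.244.0.0/24
# node-b: pod IPs from 10.244.1.0/24
# node-c: pod IPs from 10.244.2.0/24
```

Because each node owns a distinct range, pod IPs are unique cluster-wide and inter-node routing needs no NAT, satisfying the requirements above.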

How CNI Works

When the kubelet creates a pod, the sequence is:

1. kubelet asks the CRI runtime (containerd, CRI-O) to create the pod sandbox
2. CRI runtime creates the pod's network namespace
3. CRI runtime invokes the CNI plugin with the ADD command
4. CNI plugin:
   a. Creates a veth pair (virtual Ethernet)
   b. Places one end in the pod's network namespace
   c. Attaches the other end to the node's network (bridge or direct routing)
   d. Assigns an IP address to the pod interface
   e. Sets up routes so the pod can reach other pods and services
   f. Returns the result, including the pod IP, to the runtime
5. kubelet reads the pod IP from the sandbox status and reports it to the API server

When a pod is deleted, the runtime calls the CNI plugin with the DEL command to release the pod's IP and tear down its interfaces.
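The contract between the runtime and a plugin is deliberately simple: the command and pod identity arrive in environment variables (CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME, ...), the network config as JSON on stdin, and the result leaves as JSON on stdout. A minimal Python sketch of that protocol (a stub with a hard-coded IP, not a working plugin):

```python
#!/usr/bin/env python3
"""Minimal sketch of the CNI plugin protocol -- a stub, not a real plugin.

The runtime passes the command and pod identity in environment variables,
the network config as JSON on stdin, and expects a JSON result on stdout."""
import json
import os
import sys

def handle(command: str, config: dict) -> dict:
    if command == "ADD":
        # A real plugin would ask its IPAM plugin for an address and
        # plumb a veth into CNI_NETNS; this stub hard-codes a result.
        return {
            "cniVersion": config.get("cniVersion", "1.0.0"),
            "interfaces": [{"name": os.environ.get("CNI_IFNAME", "eth0")}],
            "ips": [{"address": "10.244.1.7/24", "gateway": "10.244.1.1"}],
        }
    if command == "DEL":
        return {}  # DEL must be idempotent and returns no result body
    raise ValueError(f"unsupported CNI_COMMAND: {command}")

if __name__ == "__main__" and "CNI_COMMAND" in os.environ:
    print(json.dumps(handle(os.environ["CNI_COMMAND"], json.load(sys.stdin))))
```

Real plugins follow exactly this executable-plus-stdin shape, which is why dropping a binary into /opt/cni/bin/ is all it takes to install one.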

CNI Configuration

CNI plugins are configured through JSON files in /etc/cni/net.d/ and binaries in /opt/cni/bin/:

# View installed CNI plugins
ls /opt/cni/bin/
# bandwidth  bridge  calico  calico-ipam  dhcp  flannel  host-local
# ipvlan  loopback  macvlan  portmap  ptp  sbr  static  tuning  vlan

# View CNI configuration
cat /etc/cni/net.d/10-calico.conflist

A typical CNI configuration (conflist format):

{
  "name": "k8s-pod-network",
  "cniVersion": "1.0.0",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "mtu": 0,
      "nodename_file_optional": false,
      "log_level": "Info",
      "log_file_path": "/var/log/calico/cni/cni.log",
      "ipam": {
        "type": "calico-ipam",
        "assign_ipv4": "true",
        "assign_ipv6": "false"
      },
      "policy": {
        "type": "k8s"
      }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": {
        "portMappings": true
      }
    },
    {
      "type": "bandwidth",
      "capabilities": {
        "bandwidth": true
      }
    }
  ]
}
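The plugins array is a chain: the runtime executes each plugin first-to-last for ADD (each receiving the previous result) and last-to-first for DEL. A short Python sketch, using a trimmed-down copy of the config above, makes the ordering explicit:

```python
import json

# Trimmed-down copy of the conflist above, for illustration.
conflist = json.loads("""
{
  "name": "k8s-pod-network",
  "cniVersion": "1.0.0",
  "plugins": [
    {"type": "calico", "ipam": {"type": "calico-ipam"}},
    {"type": "portmap", "capabilities": {"portMappings": true}},
    {"type": "bandwidth", "capabilities": {"bandwidth": true}}
  ]
}
""")

# The runtime runs the chain in order for ADD, in reverse for DEL.
chain = [p["type"] for p in conflist["plugins"]]
print("ADD order:", " -> ".join(chain))
print("DEL order:", " -> ".join(reversed(chain)))
```

This is why meta-plugins like portmap and bandwidth can layer port mappings and traffic shaping on top of whichever main plugin set up the interface.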

Popular CNI Plugins

Flannel -- Simple overlay network using VXLAN. Good for getting started, but it does not enforce Kubernetes NetworkPolicy natively:

# Install Flannel
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Check Flannel pods
kubectl get pods -n kube-flannel

Calico -- Production-grade networking with network policy enforcement. Supports BGP for direct routing (no overlay overhead) or VXLAN overlay:

# Install Calico
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/tigera-operator.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/custom-resources.yaml

# Check Calico components
kubectl get pods -n calico-system

# View Calico node status
kubectl exec -n calico-system calico-node-xxxxx -- calico-node -bird-ready

Cilium -- eBPF-based networking with advanced features including transparent encryption, L7 network policies, and service mesh capabilities:

# Install Cilium using CLI
cilium install

# Check Cilium status
cilium status
kubectl get pods -n kube-system -l k8s-app=cilium

Network Policies with CNI

CNI plugins that support network policies allow you to control traffic flow between pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
    ports:
    - protocol: TCP
      port: 8080

Troubleshooting CNI Issues

# Check CNI plugin pods are running (namespace depends on the install method)
kubectl get pods -n calico-system -l k8s-app=calico-node
# or
kubectl get pods -n kube-system -l k8s-app=cilium

# Check CNI logs on a node
journalctl -u kubelet | grep -i cni
cat /var/log/calico/cni/cni.log

# Verify pod IP allocation
kubectl get pods -o wide

# Test pod-to-pod connectivity
kubectl exec pod-a -- ping -c 3 <pod-b-ip>
kubectl exec pod-a -- wget -qO- http://<pod-b-ip>:8080

# Check routes on a node
ip route show
# Look for pod CIDR routes pointing to the correct interface

# Verify the CNI binary and config exist
ls /opt/cni/bin/
ls /etc/cni/net.d/

IP Address Management (IPAM)

CNI delegates IP allocation to an IPAM plugin. Common IPAM approaches:

  • host-local -- Allocates from a per-node subnet range
  • calico-ipam -- Uses Calico's IP pool management with block allocation
  • whereabouts -- Cluster-wide IPAM for environments needing cross-node IP range management
  • AWS VPC CNI -- Allocates real VPC IPs to pods for native cloud networking
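As a toy illustration of the host-local approach, the sketch below (a hypothetical class, not the real plugin, which persists its state on disk) allocates addresses from a per-node pod CIDR with Python's ipaddress module, reserving the first host address as the gateway:

```python
import ipaddress

class HostLocalIPAM:
    """Toy allocator mimicking host-local's behavior: hand out free
    addresses from a per-node pod CIDR. Hypothetical class for
    illustration -- the real plugin persists allocations on disk."""

    def __init__(self, cidr: str):
        self.subnet = ipaddress.ip_network(cidr)
        self.gateway = next(self.subnet.hosts())  # reserve first host as gateway
        self.allocated = set()

    def allocate(self) -> str:
        # hosts() already excludes the network and broadcast addresses
        for ip in self.subnet.hosts():
            if ip != self.gateway and ip not in self.allocated:
                self.allocated.add(ip)
                return f"{ip}/{self.subnet.prefixlen}"
        raise RuntimeError("pod CIDR exhausted")

    def release(self, addr: str) -> None:
        self.allocated.discard(ipaddress.ip_interface(addr).ip)

ipam = HostLocalIPAM("10.244.1.0/24")
print(ipam.allocate())  # -> 10.244.1.2/24 (first free host after the gateway)
```

Releasing an address on DEL and re-allocating it later is exactly why a leaked allocation (a DEL that never ran) can exhaust a node's pod CIDR.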

Why Interviewers Ask This

Understanding CNI reveals whether a candidate grasps how pod networking is implemented at a fundamental level. Interviewers evaluate knowledge of the networking model, plugin selection trade-offs, and the ability to troubleshoot pod connectivity issues.

Common Follow-Up Questions

What are the Kubernetes networking requirements that CNI plugins must satisfy?
Every pod gets a unique IP, pods can communicate with any other pod without NAT, agents on a node can communicate with all pods on that node, and the network the pod sees itself on is the same network others see it on.
How would you choose between Calico, Cilium, and Flannel?
Flannel is simple overlay networking for basic needs. Calico provides network policies with BGP routing for performance. Cilium uses eBPF for advanced networking, observability, and security, with strong performance at scale.
How do you troubleshoot pod-to-pod networking issues?
Check CNI plugin pods are healthy, verify IP allocation, test connectivity with kubectl exec, inspect routes and iptables on nodes, and check CNI logs in /var/log/ or journalctl.

Key Takeaways

  • CNI provides a standard interface so Kubernetes works with many different network implementations
  • The container runtime invokes CNI plugins (ADD and DEL commands) during pod creation and deletion to manage network configuration
  • CNI plugins are responsible for IP address management (IPAM), routing, and optionally network policy enforcement