How Does IPv6 Dual-Stack Work in Kubernetes?

advanced | networking | sre | platform engineer | CKA
TL;DR

Kubernetes dual-stack networking assigns both IPv4 and IPv6 addresses to Pods and Services. It has been GA since Kubernetes 1.23 and requires dual-stack support from the CNI plugin, node network, and cloud provider.

Detailed Answer

Dual-stack networking allows Kubernetes to assign both IPv4 and IPv6 addresses to Pods and Services simultaneously. This feature became GA in Kubernetes 1.23 and is essential for organizations transitioning to IPv6 or operating in IPv6-only environments.

Enabling Dual-Stack

Dual-stack requires configuration at multiple layers:

1. API Server and Controller Manager

# kube-apiserver flags
--service-cluster-ip-range=10.96.0.0/16,fd00::/108

# kube-controller-manager flags
--cluster-cidr=10.244.0.0/16,fd00:10:244::/48
--service-cluster-ip-range=10.96.0.0/16,fd00::/108
--node-cidr-mask-size-ipv4=24
--node-cidr-mask-size-ipv6=64

2. kubelet

# kubelet flags (or config)
--node-ip=192.168.1.10,fd00::10

3. kube-proxy

# kube-proxy config
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16,fd00:10:244::/48"

Pod Dual-Stack Addresses

When dual-stack is enabled, every Pod receives both addresses:

kubectl get pod my-app -o jsonpath='{.status.podIPs}'
# [{"ip":"10.244.1.5"},{"ip":"fd00:10:244:1::5"}]

The Pod can communicate over either protocol. Applications listening on 0.0.0.0 accept only IPv4 connections; those listening on :: accept both IPv4 and IPv6 on most systems (where the IPV6_V6ONLY socket option defaults to off).
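The family of each entry in podIPs can be checked programmatically; a minimal sketch in Python's standard ipaddress module (the sample JSON mirrors the kubectl output above and is illustrative, not read from a live cluster):

```python
import ipaddress
import json

# Sample output of: kubectl get pod my-app -o jsonpath='{.status.podIPs}'
pod_ips_json = '[{"ip":"10.244.1.5"},{"ip":"fd00:10:244:1::5"}]'

def classify(pod_ips_json: str) -> dict:
    """Map each Pod IP to its address family (IPv4 or IPv6)."""
    entries = json.loads(pod_ips_json)
    return {
        e["ip"]: f"IPv{ipaddress.ip_address(e['ip']).version}"
        for e in entries
    }

print(classify(pod_ips_json))
# {'10.244.1.5': 'IPv4', 'fd00:10:244:1::5': 'IPv6'}
```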

Service IP Family Configuration

Services have three ipFamilyPolicy options:

SingleStack (Default)

apiVersion: v1
kind: Service
metadata:
  name: api-v4-only
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080

PreferDualStack

apiVersion: v1
kind: Service
metadata:
  name: api-prefer-dual
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080

The Service gets both IPv4 and IPv6 ClusterIPs if the cluster supports dual-stack. If not, it falls back to single-stack.

RequireDualStack

apiVersion: v1
kind: Service
metadata:
  name: api-require-dual
spec:
  ipFamilyPolicy: RequireDualStack
  ipFamilies:
    - IPv6
    - IPv4
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080

The order of ipFamilies determines which address is primary. Here, IPv6 is primary.
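Concretely, spec.clusterIP always holds the first entry of spec.clusterIPs, and the order of clusterIPs follows ipFamilies. A small Python sketch of that invariant (the spec dict is a hypothetical example matching the manifest above, not fetched from a cluster):

```python
import ipaddress

# Hypothetical Service spec: IPv6 listed first, so the
# IPv6 ClusterIP is primary.
spec = {
    "ipFamilies": ["IPv6", "IPv4"],
    "clusterIPs": ["fd00::2d0c", "10.96.45.12"],
}

def primary_cluster_ip(spec: dict) -> str:
    """Return the primary ClusterIP, verifying family ordering matches."""
    for family, ip in zip(spec["ipFamilies"], spec["clusterIPs"]):
        version = ipaddress.ip_address(ip).version
        assert family == f"IPv{version}", f"{ip} is not {family}"
    return spec["clusterIPs"][0]  # primary = first entry

print(primary_cluster_ip(spec))  # fd00::2d0c
```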

Verifying Dual-Stack

# Check node addresses
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}: {.status.addresses}{"\n"}{end}'

# Check Service ClusterIPs
kubectl get service api-server -o jsonpath='{.spec.clusterIPs}'
# ["10.96.45.12","fd00::2d0c"]

# Check Pod addresses
kubectl get pods -o wide
# Shows primary IP; use podIPs for both

# Verify DNS returns both A and AAAA records
kubectl exec debug -- dig api-server.default.svc.cluster.local A
kubectl exec debug -- dig api-server.default.svc.cluster.local AAAA

LoadBalancer Services with Dual-Stack

Cloud load balancers need to support dual-stack. On AWS:

apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: "dualstack"
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080

CNI Plugin Support

| CNI | Dual-Stack Support | Notes |
|-----|-------------------|-------|
| Calico | Yes (GA) | Requires BGP or VXLAN dual-stack config |
| Cilium | Yes (GA) | Full dual-stack with eBPF |
| Flannel | Partial | Limited to VXLAN mode |
| AWS VPC CNI | Yes | Custom networking required |
| Azure CNI | Yes | Overlay mode recommended |

Calico Dual-Stack Configuration

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4
spec:
  cidr: 10.244.0.0/16
  natOutgoing: true
  encapsulation: VXLAN
---
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv6
spec:
  cidr: fd00:10:244::/48
  natOutgoing: true
  encapsulation: VXLAN

Common Challenges

  1. Application readiness: Many applications default to IPv4. Ensure they bind to :: (dual-stack) or have separate listeners.
  2. NetworkPolicy: Policies apply per-IP-family. A policy blocking IPv4 does not automatically block IPv6 and vice versa.
  3. External DNS: DNS records need both A (IPv4) and AAAA (IPv6) entries for dual-stack Services.
  4. Monitoring: Ensure your monitoring stack collects metrics for both IP families.
  5. Retrofit complexity: Adding dual-stack to an existing single-stack cluster requires reconfiguring every component. Plan for dual-stack at cluster creation.
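Challenge 2 in practice: a NetworkPolicy that restricts traffic by ipBlock must list CIDRs of both families, or one family slips through the rule entirely. A hedged sketch (names, labels, and CIDRs are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-cluster-cidrs-only
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Both Pod CIDRs must be listed explicitly; an
        # IPv4-only ipBlock would not match IPv6 traffic.
        - ipBlock:
            cidr: 10.244.0.0/16
        - ipBlock:
            cidr: fd00:10:244::/48
```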

IPv6-Only Clusters

For environments with no IPv4 at all:

--service-cluster-ip-range=fd00::/108
--cluster-cidr=fd00:10:244::/48

All Pods and Services get only IPv6 addresses. This is common in environments facing IPv4 exhaustion, such as large-scale IoT or telco platforms.
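In such a cluster every Service is effectively single-stack IPv6; the same intent can be expressed explicitly on a dual-stack cluster (a sketch mirroring the earlier SingleStack example; name and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-v6-only
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6
  selector:
    app: api
  ports:
    - port: 80
      targetPort: 8080
```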

Why Interviewers Ask This

IPv6 adoption is accelerating, especially in environments with IPv4 address exhaustion. This question tests your readiness to operate clusters in modern network environments.

Common Follow-Up Questions

Can a Service be IPv6-only?
Yes — set spec.ipFamilies to [IPv6] and spec.ipFamilyPolicy to SingleStack. The Service gets only an IPv6 ClusterIP.
What CNI plugins support dual-stack?
Calico, Cilium, Flannel (with limitations), and most cloud provider CNIs support dual-stack. Check your CNI documentation for specific configuration.
Do all Pods get both IPv4 and IPv6 addresses in dual-stack mode?
Yes — when dual-stack is enabled at the cluster level, every Pod receives both an IPv4 and an IPv6 address.

Key Takeaways

  • Dual-stack assigns both IPv4 and IPv6 addresses to every Pod.
  • Services can be configured as SingleStack (IPv4 or IPv6 only) or PreferDualStack/RequireDualStack.
  • Enable dual-stack at cluster creation — retrofitting an existing single-stack cluster is complex.
