How Do Readiness Probes Work in Kubernetes?

Level: beginner · Tags: pods, devops, sre, CKA, CKAD
TL;DR

A readiness probe tells Kubernetes whether a container is ready to accept traffic. If the probe fails, the Pod is removed from Service endpoints but is not restarted. This prevents traffic from being routed to Pods that are still initializing or temporarily unable to serve requests.

Detailed Answer

A readiness probe is a periodic check that determines whether a container is prepared to handle incoming requests. Unlike liveness probes, readiness probes do not kill or restart the container. Instead, when a readiness probe fails, Kubernetes removes the Pod from the endpoints of any Service that targets it.

Why Readiness Probes Matter

Without readiness probes, a Pod is added to Service endpoints as soon as its containers start. This can route traffic to Pods that:

  • Have not finished loading configuration or warming caches
  • Are performing a database migration
  • Are experiencing a transient resource crunch
  • Are in the middle of a graceful shutdown

Readiness probes prevent these scenarios by gating traffic based on actual application readiness.

Configuration Example

apiVersion: v1
kind: Pod
metadata:
  name: web-app
  labels:
    app: web
spec:
  containers:
    - name: app
      image: myapp/server:2.1
      ports:
        - containerPort: 8080
      readinessProbe:
        httpGet:
          path: /ready
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5
        timeoutSeconds: 2
        failureThreshold: 3
        successThreshold: 1
      resources:
        requests:
          cpu: "250m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
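httpGet is the most common probe mechanism, but readiness probes also support tcpSocket, exec, and (on newer Kubernetes versions, beta since 1.24) grpc checks. A sketch of each alternative, with hypothetical ports and a hypothetical marker file:

```yaml
# tcpSocket: ready once the port accepts TCP connections
readinessProbe:
  tcpSocket:
    port: 5432
  periodSeconds: 5

# exec: ready when the command exits with status 0
readinessProbe:
  exec:
    command: ["cat", "/tmp/ready"]  # hypothetical readiness marker file
  periodSeconds: 5

# grpc: calls the standard gRPC health-checking service
readinessProbe:
  grpc:
    port: 9090
  periodSeconds: 5
```

tcpSocket is a weak signal (the port can accept connections before the app is ready), so prefer httpGet or grpc when the application exposes a health endpoint.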

How Readiness Probes Affect Services

When a Pod's readiness probe fails:

  1. The kubelet reports the Pod's Ready condition as False.
  2. The EndpointSlice controller removes the Pod's IP from the Service's EndpointSlices (and the legacy Endpoints object).
  3. kube-proxy (or an equivalent dataplane, such as an eBPF-based replacement like Cilium) updates iptables/IPVS rules so traffic no longer routes to the Pod.
  4. When the probe passes again, the Pod's IP is re-added and traffic resumes.

# Check Pod readiness
kubectl get pod web-app -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

# See which Pods are in the Service endpoints
kubectl get endpoints my-service
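On current clusters, endpoint data lives in EndpointSlice objects, which you can inspect directly; `my-service` is a placeholder Service name:

```shell
# List the EndpointSlices backing the Service
kubectl get endpointslices -l kubernetes.io/service-name=my-service

# Show each endpoint address with its ready condition
kubectl get endpointslices -l kubernetes.io/service-name=my-service \
  -o jsonpath='{range .items[*].endpoints[*]}{.addresses[0]}{" ready="}{.conditions.ready}{"\n"}{end}'
```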

Readiness Probes During Rolling Updates

Readiness probes play a critical role during Deployment rolling updates:

  1. Kubernetes creates a new Pod with the updated container image.
  2. The new Pod starts and begins running its readiness probe.
  3. Only after the readiness probe passes does Kubernetes consider the new Pod available.
  4. Once the new Pod is ready, Kubernetes terminates an old Pod.
  5. This process repeats until all Pods are updated.

Without readiness probes, Kubernetes has no way to know if the new Pod can actually serve traffic before it terminates old Pods, potentially causing downtime.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: myapp/server:2.2
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5

Setting maxUnavailable: 0 combined with a readiness probe ensures that at least 3 ready Pods are serving traffic at all times during the update: Kubernetes surges to a fourth Pod, waits for it to pass its readiness probe, and only then terminates an old one.
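You can watch readiness gating the rollout in real time, using the Deployment and label from the example above:

```shell
# Blocks until all new Pods pass their readiness probes (or the rollout fails)
kubectl rollout status deployment/web-app

# Watch Pods flip from 0/1 to 1/1 READY as their probes pass
kubectl get pods -l app=web --watch
```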

Readiness vs. Liveness: When to Use Each

| Aspect | Liveness Probe | Readiness Probe |
|--------|----------------|-----------------|
| Purpose | Is the container still alive? | Can the container serve traffic? |
| On failure | Container is killed and restarted | Pod removed from Service endpoints |
| External deps | Never check | Can check when appropriate |
| Use case | Detect deadlocks, hung processes | Gate traffic during startup, overload |

Checking External Dependencies

Unlike liveness probes, readiness probes can check external dependencies in certain scenarios. For example, if your application cannot serve meaningful responses without a database connection, failing the readiness probe when the database is down prevents users from seeing errors.

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  # The /ready endpoint checks:
  # 1. Database connection pool has available connections
  # 2. Required cache is populated
  # 3. Feature flag service is reachable

However, be cautious: if all Pods fail their readiness probes simultaneously, the Service will have no endpoints and no traffic will be served. Consider having the readiness check degrade gracefully rather than failing hard on optional dependencies.
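One way to implement that graceful degradation is a /ready handler that fails hard only on required dependencies and merely reports unhealthy optional ones. A minimal sketch in Python; the dependency names and the check results are hypothetical:

```python
import json

def evaluate_readiness(checks):
    """Map {dependency: (is_healthy, is_required)} to an HTTP status and JSON body.

    Only an unhealthy *required* dependency fails readiness (503).
    Unhealthy optional dependencies are reported but still return 200,
    so the Pod keeps receiving traffic in a degraded mode instead of
    all replicas dropping out of the Service at once.
    """
    failed = [name for name, (ok, required) in checks.items() if required and not ok]
    degraded = [name for name, (ok, required) in checks.items() if not required and not ok]
    status = 503 if failed else 200
    body = json.dumps({"ready": not failed, "failed": failed, "degraded": degraded})
    return status, body

# Database down: required, so the probe fails and the Pod is pulled from endpoints
print(evaluate_readiness({"database": (False, True), "feature-flags": (True, False)})[0])
# Only the optional flag service down: stay ready, report the degradation
print(evaluate_readiness({"database": (True, True), "feature-flags": (False, False)})[0])
```

In a real service this function would sit behind the /ready HTTP route, with each check pinging its dependency (connection pool, cache, flag service) with a short timeout.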

Pod Readiness Gates

Beyond container-level probes, Kubernetes supports Pod readiness gates -- custom conditions that must be true for the Pod to be considered ready. External controllers can set these conditions.

spec:
  readinessGates:
    - conditionType: "custom.example.com/data-loaded"

An external controller sets the condition through the Pod's status subresource. From the command line (the --subresource flag requires kubectl v1.24 or later; a real controller would use the API directly):

kubectl patch pod web-app --subresource=status --type=json -p='[
  {"op": "add", "path": "/status/conditions/-",
   "value": {"type": "custom.example.com/data-loaded", "status": "True"}}
]'

This is useful when readiness depends on something outside the container, such as data synchronization from an external system.

Best Practices

  1. Always define readiness probes for Pods behind a Service -- this is critical for safe rolling updates.
  2. Make the ready endpoint check meaningful dependencies that the app genuinely needs to serve requests.
  3. Use different endpoints for liveness and readiness -- /healthz for liveness (local checks only), /ready for readiness (can include dependency checks).
  4. Keep the probe fast -- under 200ms response time to avoid probe timeout issues under load.
  5. Set successThreshold appropriately -- the default of 1 is fine for most cases, but increase it if you need the container to prove stability before receiving traffic again.
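Practices 3 and 5 in YAML form; the /healthz and /ready paths are conventions, not Kubernetes requirements:

```yaml
containers:
  - name: app
    image: myapp/server:2.1
    livenessProbe:
      httpGet:
        path: /healthz  # local-only check: process responsive, no dependency calls
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /ready    # may include dependency checks (DB pool, cache)
        port: 8080
      periodSeconds: 5
      successThreshold: 2  # require two consecutive passes before re-admitting traffic
```

Note that successThreshold greater than 1 is only valid for readiness probes; liveness probes must use 1.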

Why Interviewers Ask This

Interviewers ask this to see if you understand how Kubernetes manages traffic routing. Readiness probes are critical for zero-downtime deployments and graceful handling of transient failures.

Common Follow-Up Questions

What happens to in-flight requests when a readiness probe fails?
The Pod is removed from the Service's Endpoints list, so new requests stop arriving. Existing TCP connections may still complete depending on the load balancer's draining behavior.
Can a Pod be live but not ready?
Yes. A Pod can pass its liveness probe (it's running fine) but fail its readiness probe (it can't serve traffic yet, for example during a cache warm-up).
How do readiness probes affect rolling updates?
During a rolling update, Kubernetes waits for new Pods to pass their readiness probe before terminating old Pods. Without readiness probes, traffic could be sent to unready Pods.

Key Takeaways

  • Readiness probes control whether a Pod receives traffic from Services -- they never trigger restarts.
  • They are essential for zero-downtime deployments and rolling updates.
  • Unlike liveness probes, readiness probes CAN check external dependencies when appropriate.

Related Questions