Kubernetes Service Not Reachable

Causes and Fixes

Service Not Reachable errors occur when clients cannot connect to a Kubernetes Service, even though the backing pods may be running. This can manifest as connection timeouts, connection refused, or empty responses when trying to access a Service by its ClusterIP, DNS name, or NodePort.

Symptoms

  • Connection timeout when accessing a Service by ClusterIP or DNS name
  • curl or wget to the Service endpoint hangs or returns connection refused
  • Application logs show inability to connect to dependent services
  • Service exists but has no endpoints
  • Intermittent connectivity failures to the Service

Common Causes

1. Label selector mismatch: The Service's selector does not match the labels on the target pods, so no endpoints are created. This is the most common cause of unreachable Services.
2. Pods not ready: The backing pods exist but are failing readiness probes, so they are removed from the Service's endpoints list and receive no traffic.
3. Wrong port configuration: The Service's targetPort does not match the port the container is actually listening on, so connections reach the pod but nothing is listening.
4. kube-proxy not functioning: kube-proxy is not running or is misconfigured on the node, so the iptables/IPVS rules that implement Service routing are missing or incorrect.
5. NetworkPolicy blocking traffic: A NetworkPolicy is blocking ingress traffic to the pods on the Service's port, preventing connections even though the endpoints are correct.
6. Service type mismatch: The Service is a ClusterIP type but is being accessed from outside the cluster, or the NodePort range is blocked by firewall rules.
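Causes 1 and 3 both come down to two alignment points between the Service and the workload manifests. A minimal sketch, with illustrative names (`myapp`, port 8080):

```yaml
# Hypothetical example: the Service selector must match the pod template
# labels, and targetPort must match the containerPort the app listens on.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp          # must match the pod template labels below
  ports:
  - port: 80            # port clients connect to
    targetPort: 8080    # must match the containerPort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp      # these are the labels the Service selector matches
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        ports:
        - containerPort: 8080
```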

Step-by-Step Troubleshooting

When a Kubernetes Service is not reachable, the problem can exist at multiple layers: DNS resolution, endpoint selection, kube-proxy routing, network policies, or the pods themselves. This guide works through each layer systematically.

1. Verify the Service Exists and Check Its Configuration

Start by confirming the Service is properly configured.

kubectl get service <service-name> -o wide

kubectl describe service <service-name>

Check the following in the output:

  • Type: ClusterIP, NodePort, LoadBalancer, or ExternalName
  • ClusterIP: Should be assigned (not None unless it is a headless service)
  • Port: The port clients connect to
  • TargetPort: The port on the pod that receives traffic
  • Selector: The labels used to find backing pods
  • Endpoints: Listed at the bottom of describe output

2. Check if Endpoints Exist

The most critical check is whether the Service has endpoints.

kubectl get endpoints <service-name>

If the ENDPOINTS column is empty or shows <none>, no pods match the Service's selector, or matching pods are not ready. This is the most common cause of unreachable services.

# For EndpointSlices (Kubernetes 1.21+)
kubectl get endpointslice -l kubernetes.io/service-name=<service-name>

3. Verify Label Selectors Match

Compare the Service's selector with the pod labels.

# Get the Service selector
kubectl get service <service-name> -o jsonpath='{.spec.selector}'

# List pods that match the selector
kubectl get pods -l <key>=<value> -o wide

# Example: if selector is app=myapp
kubectl get pods -l app=myapp -o wide

If no pods are returned, the labels do not match. Check the actual pod labels.

# Show labels on pods
kubectl get pods --show-labels

# Compare with the service selector
kubectl get service <service-name> -o jsonpath='{.spec.selector}' | jq .

Common mismatches include typos in label keys or values, using app.kubernetes.io/name versus app, or labels being set at the Deployment level but not propagated to the pod template.
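The last pitfall is easy to hit because a Deployment carries two sets of labels. A sketch of the mistake (label names illustrative):

```yaml
# Fragment of a Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myapp          # labels here are on the Deployment object only;
                        # they are NOT applied to the pods it creates
spec:
  template:
    metadata:
      labels:
        app: myapp      # only these labels appear on the pods, so this
                        # is what the Service selector must match
```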

4. Check Pod Readiness

Even if labels match, pods must pass readiness probes to be included in endpoints.

# Show each pod's Ready condition and IP (note: custom-columns does not
# support JSONPath filter expressions, but -o jsonpath does)
kubectl get pods -l <selector> -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\t"}{.status.podIP}{"\n"}{end}'

If pods show READY: False, they are excluded from endpoints. Check why readiness is failing.

kubectl describe pod <pod-name> | grep -A10 "Readiness"
kubectl logs <pod-name> --tail=50
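If the pod has no readiness probe at all, it is marked ready as soon as its containers start, which can route traffic to an application that is still initializing. A minimal illustrative probe (the `/healthz` path and port 8080 are assumptions; use whatever health endpoint your application exposes):

```yaml
# Fragment of a container spec in the pod template:
readinessProbe:
  httpGet:
    path: /healthz      # assumed health endpoint
    port: 8080          # must be a port the container listens on
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3
```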

5. Verify Port Configuration

Confirm the Service's targetPort matches what the container is listening on.

# Check the Service port mapping
kubectl get service <service-name> -o jsonpath='{range .spec.ports[*]}Port: {.port} -> TargetPort: {.targetPort} ({.protocol}){"\n"}{end}'

# Check what port the container is configured to listen on
kubectl get pod <pod-name> -o jsonpath='{range .spec.containers[*].ports[*]}{.containerPort} ({.protocol}){"\n"}{end}'

# Test direct connectivity to the pod on the target port
kubectl exec <debug-pod> -- curl -s --connect-timeout 5 http://<pod-ip>:<target-port>/

If direct pod connectivity on the target port works but the Service does not, the issue is in the Service routing layer. If direct connectivity also fails, the application is not listening on the expected port.
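One way to make this mismatch harder to introduce is to reference the container port by name rather than by number, so the Service and pod spec cannot silently drift apart. A sketch (the name `http` is arbitrary):

```yaml
# In the pod template's container spec:
ports:
- name: http
  containerPort: 8080

# In the Service spec:
ports:
- port: 80
  targetPort: http   # resolved against the container port named "http"
```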

6. Test DNS Resolution

Verify the Service name resolves to the correct ClusterIP.

# busybox:1.28 is commonly used here because nslookup in newer busybox
# releases returns misleading results against CoreDNS
kubectl run dns-test --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec dns-test -- nslookup <service-name>
kubectl exec dns-test -- nslookup <service-name>.<namespace>.svc.cluster.local

The resolved IP should match the Service's ClusterIP. If DNS fails, see the dns-resolution-failure troubleshooting guide.

7. Check kube-proxy

kube-proxy programs the network rules that route Service traffic to pod endpoints.

# Check kube-proxy is running on all nodes
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide

# Check kube-proxy logs for errors
kubectl logs -n kube-system <kube-proxy-pod> --tail=50

# On a node, verify iptables rules exist for the service (iptables mode)
# From a debug pod on the node:
kubectl debug node/<node-name> -it --image=ubuntu -- bash
# Inside the debug pod (the ubuntu image does not ship iptables; install it first):
apt-get update && apt-get install -y iptables
iptables -t nat -L KUBE-SERVICES -n | grep <service-clusterip>

If kube-proxy is not running or has errors, Service routing will not work. Restart kube-proxy if needed.

kubectl rollout restart daemonset kube-proxy -n kube-system

8. Check NetworkPolicies

NetworkPolicies can block traffic to pods even when everything else is correct.

# List NetworkPolicies in the namespace
kubectl get networkpolicy -n <namespace>

# Describe each policy
kubectl describe networkpolicy -n <namespace>

Check if any policy applies to the target pods (via podSelector) and whether it allows ingress on the service's target port. If a default-deny ingress policy exists, you must explicitly allow traffic.

# Example: allow traffic on port 8080 to pods with label app=myapp
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-service-traffic
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - ports:
    - port: 8080
      protocol: TCP
EOF

9. Test End-to-End Connectivity

Use a test pod to verify connectivity at each layer.

# Test Service ClusterIP
kubectl exec dns-test -- wget -qO- --timeout=5 http://<service-clusterip>:<port>/

# Test Service DNS name
kubectl exec dns-test -- wget -qO- --timeout=5 http://<service-name>:<port>/

# Test direct pod IP
kubectl exec dns-test -- wget -qO- --timeout=5 http://<pod-ip>:<target-port>/

# Clean up
kubectl delete pod dns-test

If direct pod IP works but ClusterIP does not, the issue is with kube-proxy or iptables rules. If both work from inside the cluster but not from outside, check the Service type and any external access configuration.

10. Verify Resolution

After fixing the issue, confirm the Service is fully reachable.

# Verify endpoints are populated
kubectl get endpoints <service-name>

# Test connectivity
kubectl run verify-test --image=busybox --restart=Never --rm -it -- wget -qO- --timeout=5 http://<service-name>.<namespace>.svc.cluster.local:<port>/

# Verify the application is receiving traffic
kubectl logs <pod-name> --tail=20

The Service is working correctly when endpoints are populated, DNS resolves to the ClusterIP, and test connections receive responses from the application.

How to Explain This in an Interview

I would explain the Service abstraction and how it works under the hood: the Service controller watches for pods matching the selector and populates the Endpoints object, kube-proxy programs iptables or IPVS rules to route traffic from the Service ClusterIP to pod IPs, and CoreDNS resolves the Service name to the ClusterIP. I'd walk through debugging at each layer — checking labels and selectors, verifying endpoints, testing direct pod connectivity, and inspecting iptables rules. I'd also distinguish between ClusterIP, NodePort, LoadBalancer, and ExternalName types and their reachability scopes.

Prevention

  • Use kubectl describe service to verify endpoints exist after creating a Service
  • Ensure pod labels match service selectors exactly
  • Always configure readiness probes so only healthy pods receive traffic
  • Test service connectivity as part of deployment pipelines
  • Document port conventions and enforce them through admission policies

Related Errors