Kubernetes Endpoints Not Found
Causes and Fixes
Endpoints Not Found means a Kubernetes Service has no backing endpoints, so traffic sent to the Service has nowhere to go. The Endpoints object associated with the Service is either missing or empty, which causes connection failures for any client trying to reach the Service.
Symptoms
- kubectl get endpoints shows <none> for the Service
- Service returns connection refused or connection reset
- Ingress controllers report 502/503 errors for backends
- kubectl describe service shows no Endpoints listed
- Application errors referencing 'no endpoints available' for the target service
Step-by-Step Troubleshooting
Empty endpoints are the most common reason for service connectivity failures in Kubernetes. When a Service has no endpoints, every connection attempt fails. This guide walks through identifying why endpoints are missing and how to restore them.
1. Check the Endpoints Object
Start by inspecting the Endpoints associated with the Service.
kubectl get endpoints <service-name>
# For more detail
kubectl describe endpoints <service-name>
# Also check EndpointSlices (stable since Kubernetes 1.21)
kubectl get endpointslice -l kubernetes.io/service-name=<service-name> -o yaml
If the Endpoints object shows no addresses under the subsets field, or only has entries under notReadyAddresses, the Service has no healthy backends.
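For comparison, a healthy EndpointSlice looks roughly like this (the generated name suffix, IP address, and port below are placeholders, not values from your cluster):

```yaml
# Sketch of a healthy EndpointSlice; the name suffix, IP, and port
# are placeholder values.
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: <service-name>-abc12
  labels:
    kubernetes.io/service-name: <service-name>
addressType: IPv4
endpoints:
  - addresses:
      - 10.244.1.17
    conditions:
      ready: true
ports:
  - name: http
    port: 8080
    protocol: TCP
```

If your EndpointSlices have no entries under endpoints, or every entry has ready: false, the steps below will narrow down why.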
2. Examine the Service Selector
Get the Service's label selector and understand what pods it expects to find.
kubectl get service <service-name> -o jsonpath='{.spec.selector}' | jq .
This returns the key-value pairs that pods must have as labels to be included. Note these carefully — even a single character difference will prevent matching.
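As a reference for what must line up, here is a minimal sketch of the selector-to-label chain; the app: web label and the ports are assumed example values:

```yaml
# The Service selector (top) must exactly match the pod template
# labels (bottom); "app: web" is an assumed example value.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web        # must match the pod labels below, byte for byte
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web    # a typo here (e.g. "app: Web") leaves the Service with no endpoints
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 8080
```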
3. Find Pods That Should Match
Search for pods with the expected labels.
# Use the selector from the previous step
kubectl get pods -l <key1>=<value1>,<key2>=<value2> -o wide
# If no pods are found, list all pods and their labels
kubectl get pods --show-labels
# Check if pods exist in a different namespace
kubectl get pods --all-namespaces -l <key>=<value>
If no pods match the selector, you have found the root cause. Either the pods have wrong labels or the Service has the wrong selector.
4. Fix Label Mismatches
If labels do not match, decide whether to fix the pod labels or the Service selector.
# Option A: Fix the pod labels (via Deployment template)
kubectl patch deployment <deployment-name> -p '{"spec":{"template":{"metadata":{"labels":{"app":"correct-value"}}}}}'
# Option B: Fix the Service selector
kubectl patch service <service-name> -p '{"spec":{"selector":{"app":"correct-value"}}}'
# Verify endpoints appear
kubectl get endpoints <service-name>
If you are modifying a Deployment's pod template labels, be aware that this creates a new ReplicaSet and triggers a rolling update.
5. Check Pod Readiness
If pods match the selector but endpoints are still empty, the pods are likely not ready.
# Check pod readiness status (READY here reflects only the first container)
kubectl get pods -l <selector> -o custom-columns=NAME:.metadata.name,READY:.status.containerStatuses[0].ready,PHASE:.status.phase
# For pods that are not ready, check why
kubectl describe pod <not-ready-pod> | grep -A15 "Conditions:"
# Check readiness probe events
kubectl describe pod <not-ready-pod> | grep -i "readiness"
Pods that fail readiness probes are placed in the Endpoints object's notReadyAddresses list rather than the active addresses list, so they will not receive traffic.
# See NotReadyAddresses
kubectl get endpoints <service-name> -o yaml | grep -A5 notReadyAddresses
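An Endpoints object whose pods are all failing readiness checks looks roughly like this (the IP, port, and pod reference are placeholders):

```yaml
# Endpoints object with no ready backends; the pods matched the
# selector but have not passed their readiness probes.
apiVersion: v1
kind: Endpoints
metadata:
  name: <service-name>
subsets:
  - notReadyAddresses:
      - ip: 10.244.2.31
        targetRef:
          kind: Pod
          name: <pod-name>
    ports:
      - port: 8080
```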
To include not-ready pods in the endpoints (for debugging purposes only), set the publishNotReadyAddresses field on the Service spec. The older service.alpha.kubernetes.io/tolerate-unready-endpoints annotation is deprecated in favor of this field.
# For debugging only - do NOT use in production
kubectl patch service <service-name> -p '{"spec":{"publishNotReadyAddresses":true}}'
6. Verify the Deployment Is Scaled Up
The simplest explanation may be that the workload is scaled to zero.
kubectl get deployment <deployment-name>
# If replicas is 0, scale up
kubectl scale deployment <deployment-name> --replicas=3
# Wait for pods to be ready
kubectl rollout status deployment/<deployment-name>
# Verify endpoints
kubectl get endpoints <service-name>
7. Check for Namespace Issues
Services only select pods in the same namespace. If your pods are in a different namespace, the Service will have no endpoints.
# Check the Service namespace
kubectl get service <service-name> -o jsonpath='{.metadata.namespace}'
# Check which namespace the pods are in
kubectl get pods --all-namespaces -l <selector>
If the Service and pods are in different namespaces, there are three options: move the Service into the pods' namespace, create an ExternalName Service that forwards by DNS to a Service in the other namespace, or create a selector-less Service with a manually managed Endpoints object pointing at the pod IPs.
# Create a manual Endpoints object for a cross-namespace Service.
# The Endpoints name must match the Service name, and the Service
# must be defined WITHOUT a selector, or the endpoints controller
# will overwrite this object.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Endpoints
metadata:
  name: <service-name>
  namespace: <service-namespace>
subsets:
  - addresses:
      - ip: <pod-ip-in-other-namespace>
    ports:
      - port: <target-port>
EOF
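Alternatively, an ExternalName Service can forward by DNS name to a Service in the other namespace, which avoids hard-coding pod IPs that change on rescheduling (all names below are placeholders):

```yaml
# ExternalName Service: DNS-level alias to a Service in another
# namespace; assumes a Service already exists in <other-namespace>.
apiVersion: v1
kind: Service
metadata:
  name: <service-name>
  namespace: <service-namespace>
spec:
  type: ExternalName
  externalName: <target-service>.<other-namespace>.svc.cluster.local
```

Note that ExternalName works at the DNS layer only, so no ports or endpoints are involved on the aliasing Service itself.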
8. Check the Endpoints Controller
In rare cases, the endpoints controller in kube-controller-manager may not be functioning correctly.
# Check kube-controller-manager health
kubectl get pods -n kube-system | grep controller-manager
# Check controller manager logs
kubectl logs -n kube-system <controller-manager-pod> --tail=100 | grep -i endpoint
Also check if there is excessive API server load that might delay endpoint updates.
# Check API server health
kubectl get --raw /healthz
kubectl get --raw /readyz
9. Use a Headless Service for Debugging
If you want to bypass the Endpoints abstraction and connect directly to pods, create a headless Service temporarily.
# A headless service returns pod IPs directly via DNS
kubectl expose deployment <deployment-name> --name=<service-name>-headless --cluster-ip=None --port=<port>
# DNS lookup will return individual pod IPs
kubectl run dns-test --image=busybox --restart=Never --rm -it -- nslookup <service-name>-headless
# Clean up
kubectl delete service <service-name>-headless
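The kubectl expose command above corresponds roughly to this manifest; the selector and port are assumptions to adapt to your Deployment:

```yaml
# Declarative sketch of the temporary headless Service; the
# selector value and port are assumed examples.
apiVersion: v1
kind: Service
metadata:
  name: <service-name>-headless
spec:
  clusterIP: None          # headless: DNS returns pod IPs, no virtual IP
  selector:
    app: <label-value>     # must match the Deployment's pod labels
  ports:
    - port: 8080           # match your container port
```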
10. Verify Resolution
After fixing the issue, confirm endpoints are populated and traffic flows.
# Verify endpoints
kubectl get endpoints <service-name>
# Test connectivity
kubectl run ep-test --image=busybox --restart=Never --rm -it -- wget -qO- --timeout=5 http://<service-name>:<port>/
# Monitor endpoints for stability
kubectl get endpoints <service-name> -w
Endpoints should list all ready pod IPs and their ports. If the list keeps changing (endpoints appearing and disappearing), pods are flapping between ready and not-ready states, which requires further investigation into readiness probe configuration and application health.
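If endpoints are flapping, start with the probe configuration. A typical HTTP readiness probe looks like the sketch below; the path, port, and timing values are assumptions to tune for the application, not defaults:

```yaml
# Container-level readiness probe sketch; /healthz, port 8080, and
# the timings are assumed values.
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5   # give the app time to start before probing
  periodSeconds: 10        # probe interval
  failureThreshold: 3      # consecutive failures before marking NotReady
```

A probe that is stricter than the application's actual health (for example, one that depends on a flaky downstream service) will cause exactly the ready/not-ready oscillation described above.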
How to Explain This in an Interview
I would explain that Endpoints are a core primitive in Kubernetes networking. When you create a Service with a selector, the Endpoints controller automatically creates and maintains an Endpoints object (and EndpointSlices in newer versions) that maps the Service to the IPs and ports of matching ready pods. I'd walk through the lifecycle: pod becomes ready, endpoint controller detects the label match, adds the pod IP to the Endpoints object, and kube-proxy updates routing rules. I'd discuss the difference between Services with selectors and without (manual endpoints), and explain how EndpointSlices improve scalability. Debugging involves verifying the selector-to-label chain and pod readiness.
Prevention
- Always verify endpoints exist after creating or updating a Service
- Use consistent labeling conventions across the organization
- Implement readiness probes that accurately reflect pod health
- Add endpoint monitoring and alerting to catch missing endpoints early
- Use Helm or Kustomize to keep Service selectors and pod labels in sync