How Do You Create a Kubernetes Service Without a Selector?
A Service without a selector does not automatically discover Pods. Instead, you manually create Endpoints or EndpointSlice objects to point the Service at specific IP addresses. This pattern lets you proxy traffic to external services, Services in other namespaces, or non-Kubernetes systems while still getting Kubernetes DNS and kube-proxy routing.
What Is a Service Without a Selector?
Normally, a Kubernetes Service includes a selector that matches Pod labels, and the Endpoints controller automatically populates the list of backend IPs. When you omit the selector field, Kubernetes creates the Service but does not create any Endpoints. You then manually define where the Service should route traffic.
This gives you the full power of a Kubernetes Service (ClusterIP, DNS name, kube-proxy routing, port remapping) while pointing at arbitrary IP addresses that may or may not be Pods.
Creating the Service and Endpoints
Using Legacy Endpoints
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-database
  namespace: production
spec:
  type: ClusterIP
  ports:
  - name: postgres
    port: 5432
    targetPort: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-database   # MUST match the Service name
  namespace: production
subsets:
- addresses:
  - ip: 10.0.5.100
  - ip: 10.0.5.101
  ports:
  - name: postgres
    port: 5432
```
Critical rule: the Endpoints object name must exactly match the Service name in the same namespace.
Using EndpointSlices (Modern Approach)
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-database
  namespace: production
spec:
  type: ClusterIP
  ports:
  - name: postgres
    port: 5432
    targetPort: 5432
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: external-database-manual
  namespace: production
  labels:
    kubernetes.io/service-name: external-database
addressType: IPv4
endpoints:
- addresses:
  - "10.0.5.100"
  conditions:
    ready: true
- addresses:
  - "10.0.5.101"
  conditions:
    ready: true
ports:
- name: postgres
  port: 5432
  protocol: TCP
```
For EndpointSlices, the link is through the kubernetes.io/service-name label, not the object name.
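Because the link is label-based, a single Service can be backed by more than one EndpointSlice. As a sketch (the extra backend IP here is made up), a second slice can be added later without touching the first:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: external-database-manual-2   # any name works; only the label matters
  namespace: production
  labels:
    kubernetes.io/service-name: external-database
addressType: IPv4
endpoints:
- addresses:
  - "10.0.5.102"                     # hypothetical additional backend
  conditions:
    ready: true
ports:
- name: postgres
  port: 5432
  protocol: TCP
```

You can list every slice backing the Service with `kubectl get endpointslices -n production -l kubernetes.io/service-name=external-database`.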
How It Works
After applying, the Service behaves identically to a regular ClusterIP Service:
```
kubectl get svc external-database -n production
NAME                TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
external-database   ClusterIP   10.96.78.200   <none>        5432/TCP   10s

kubectl get endpoints external-database -n production
NAME                ENDPOINTS                         AGE
external-database   10.0.5.100:5432,10.0.5.101:5432   10s
```
Pods in the cluster can now connect to external-database:5432, and kube-proxy routes traffic to 10.0.5.100 or 10.0.5.101.
```
┌──────────────┐
│ Application  │
│     Pod      │
└──────┬───────┘
       │ external-database:5432
       │ (DNS -> ClusterIP 10.96.78.200)
       ▼
┌──────────────┐
│  kube-proxy  │
│  (iptables)  │
└──────┬───────┘
       │ DNAT to 10.0.5.100:5432 or 10.0.5.101:5432
       ▼
┌──────────────┐
│ External DB  │
│ (not in K8s) │
└──────────────┘
```
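To confirm the wiring end to end, a throwaway Pod can resolve the Service name from inside the cluster. A minimal sketch (the Pod name and image are arbitrary choices):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-check            # hypothetical one-shot debug Pod
  namespace: production
spec:
  restartPolicy: Never
  containers:
  - name: dns-check
    image: busybox:1.36
    # Resolve the selector-less Service's DNS name from inside the cluster.
    command: ["nslookup", "external-database.production.svc.cluster.local"]
```

Apply it, then read the result with `kubectl logs dns-check -n production`. The answer should be the Service's ClusterIP, not the external IPs; kube-proxy performs the DNAT to the real backends after DNS resolution.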
Use Cases
1. External Database
The most common use case is pointing a Service at a database that lives outside the cluster:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql
subsets:
- addresses:
  - ip: 192.168.1.50
  ports:
  - port: 3306
```
Application Pods connect to mysql:3306 as if the database were running in the cluster.
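Consuming applications need no special configuration; the Service name works as an ordinary hostname. A hedged sketch of a consuming Deployment (the image and env var names are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # hypothetical application image
        env:
        - name: DB_HOST              # hypothetical env var read by the app
          value: "mysql"             # the selector-less Service name
        - name: DB_PORT
          value: "3306"
```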
2. Gradual Migration Into the Cluster
During a migration, you might start with a selector-less Service pointing to the external system, then add a selector when the in-cluster replacement is ready:
Before migration:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: user-api
spec:
  ports:
  - port: 443
    targetPort: 443
  # No selector -- manual endpoints
```
After migration:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: user-api
spec:
  selector:
    app: user-api
  ports:
  - port: 443
    targetPort: 8443
```
The Service name and port stay the same, so no application changes are needed.
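In practice the cutover can be a single patch rather than a full re-apply. A sketch of a merge patch, assuming the manifests above:

```yaml
# user-api-patch.yaml -- adds the selector and retargets the port in one step
spec:
  selector:
    app: user-api
  ports:
  - port: 443
    targetPort: 8443
```

Apply it with `kubectl patch service user-api --type merge --patch-file user-api-patch.yaml`. Once the selector is present, the endpoint controller takes over, and any manually created Endpoints object for user-api will be overwritten with the selected Pod IPs.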
3. Multi-Cluster Routing
In a multi-cluster setup, a Service in cluster A can point to Pod IPs in cluster B (assuming network connectivity):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: remote-service
spec:
  ports:
  - port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: remote-service
subsets:
- addresses:
  - ip: 172.16.10.5    # Pod in cluster B
  - ip: 172.16.10.12   # Pod in cluster B
  ports:
  - port: 80
```
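For multi-cluster backends, the EndpointSlice form can additionally record each endpoint's zone, which tooling and topology-aware routing can take into account. A hedged sketch (the zone names are assumptions):

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: remote-service-manual
  labels:
    kubernetes.io/service-name: remote-service
addressType: IPv4
endpoints:
- addresses:
  - "172.16.10.5"
  conditions:
    ready: true
  zone: us-east-1a     # hypothetical zone of the remote Pod
- addresses:
  - "172.16.10.12"
  conditions:
    ready: true
  zone: us-east-1b
ports:
- port: 80
  protocol: TCP
```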
4. Port Remapping
Unlike ExternalName, selector-less Services support port remapping. The Service port can differ from the endpoint port:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-api
spec:
  ports:
  - port: 80           # Clients connect on port 80
    targetPort: 9080   # Forwarded to port 9080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-api
subsets:
- addresses:
  - ip: 10.0.1.200
  ports:
  - port: 9080         # Must match targetPort
```
Selector-less Service vs. ExternalName
| Feature | Selector-less Service | ExternalName |
|---|---|---|
| Target type | IP addresses | Hostnames only |
| ClusterIP | Yes | No |
| kube-proxy routing | Yes | No (DNS only) |
| Port remapping | Yes | No |
| Health checks | No (manual) | No |
| Network policies | Apply normally | Do not apply |
Automating Endpoint Updates
Since Kubernetes does not manage endpoints for selector-less Services, you need automation if the external IPs change:
```shell
#!/bin/bash
# Refresh the manual Endpoints whenever the external host's DNS records change.
EXTERNAL_HOST="db.example.com"
# Restrict to A records so CNAME chains don't leak hostnames into the IP list.
IPS=$(dig +short A "$EXTERNAL_HOST" | head -5)

# Generate the Endpoints YAML and apply it
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Endpoints
metadata:
  name: external-database
  namespace: production
subsets:
- addresses:
$(for ip in $IPS; do echo "  - ip: $ip"; done)
  ports:
  - port: 5432
EOF
```
This could be run as a CronJob in the cluster.
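A minimal CronJob sketch for doing so (the image and ServiceAccount names are assumptions; the ServiceAccount needs RBAC permission to update Endpoints in the production namespace, and the image must bundle kubectl, dig, and the script):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sync-external-db-endpoints
  namespace: production
spec:
  schedule: "*/5 * * * *"                     # re-resolve every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: endpoint-syncer # hypothetical SA allowed to update Endpoints
          restartPolicy: OnFailure
          containers:
          - name: sync
            image: example.com/endpoint-sync:1.0   # hypothetical image with kubectl + dig
            command: ["/bin/bash", "/scripts/sync-endpoints.sh"]
```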
Common Mistakes
- Name mismatch -- The Endpoints object name must exactly match the Service name.
- Namespace mismatch -- Both must be in the same namespace.
- Missing port names -- If the Service has named ports, the Endpoints ports must use the same names.
- Using Pod IPs that are not routable -- Ensure the external IPs are reachable from cluster nodes.
Summary
Services without selectors let you leverage Kubernetes networking features (DNS, ClusterIP, kube-proxy load balancing) for targets that are not Kubernetes Pods. You manually manage Endpoints or EndpointSlice objects to define where traffic goes. This pattern is essential for integrating external systems, executing migrations, and bridging multi-cluster environments.
Why Interviewers Ask This
Interviewers ask this to test understanding of how Services work beyond auto-discovery. It reveals whether candidates can integrate Kubernetes with external systems and understand the Endpoints mechanism.
Key Takeaways
- Omitting the selector field creates a Service that does not auto-discover Pods.
- You must manually create matching Endpoints or EndpointSlice objects.
- This pattern is ideal for proxying to external databases, legacy systems, or cross-namespace Services.