Ingress vs Service
Key Differences in Kubernetes
A Service provides stable network access to a set of Pods within the cluster using L4 (TCP/UDP) load balancing. An Ingress operates at L7 (HTTP/HTTPS) and routes external traffic to Services based on hostnames and URL paths. Services are required for any Pod networking; Ingress adds HTTP-aware routing on top.
Side-by-Side Comparison
| Dimension | Ingress | Service |
|---|---|---|
| OSI Layer | Layer 7 — HTTP/HTTPS routing with host and path rules | Layer 4 — TCP/UDP load balancing |
| Traffic Direction | External traffic into the cluster | Internal cluster traffic (ClusterIP) or external (NodePort/LoadBalancer) |
| TLS Termination | Built-in TLS termination with certificate configuration | No TLS termination (handled by the application or a sidecar) |
| Path-Based Routing | Routes /api to one Service and /web to another | No path-based routing — sends all traffic to backend Pods |
| Host-Based Routing | Routes api.example.com and web.example.com to different Services | No host-based routing |
| Dependency | Requires an Ingress controller (NGINX, Traefik, etc.) and backend Services | Works natively — no extra components needed |
| Cost (Cloud) | Single load balancer shared across many Services | LoadBalancer type provisions one cloud LB per Service |
| Protocol Support | HTTP and HTTPS (some controllers support TCP/gRPC) | TCP, UDP, SCTP — any protocol |
Detailed Breakdown
How They Work Together
Ingress does not replace Services — it sits in front of them. The typical flow is:
Client → Ingress (L7 routing) → Service (L4 LB) → Pod
Every Ingress rule points to a backend Service. You always need both.
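Assuming an Ingress and its backend Service already exist (the resource names below are illustrative), the chain can be inspected with kubectl:

```shell
# List the Ingress and note which backend Services its rules reference
kubectl describe ingress app-ingress

# Confirm the referenced Service exists and has ready endpoints (Pod IPs)
kubectl get service web-api
kubectl get endpoints web-api
```

If `kubectl get endpoints` shows no addresses, the Service selector matches no ready Pods and the Ingress will return 502/503 regardless of its routing rules.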
Service Definition
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-api
spec:
  selector:
    app: web-api
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
```
This creates a cluster-internal IP that load-balances traffic across all Pods with the app: web-api label. Other Pods in the cluster can reach it at web-api.default.svc.cluster.local.
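One quick way to verify the in-cluster DNS name is a throwaway client Pod (the image choice here is just a common convenience, not a requirement):

```shell
# Launch a temporary Pod and curl the Service by its cluster DNS name;
# --rm deletes the Pod when the command exits
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://web-api.default.svc.cluster.local/
```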
Ingress Definition
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: tls-secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /v1
            pathType: Prefix
            backend:
              service:
                name: api-v1
                port:
                  number: 80
          - path: /v2
            pathType: Prefix
            backend:
              service:
                name: api-v2
                port:
                  number: 80
    - host: dashboard.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dashboard
                port:
                  number: 80
```
This single Ingress resource routes traffic from two different hostnames to three different Services, all behind one load balancer. Without Ingress, you would need three separate LoadBalancer Services — three cloud load balancers, three external IPs, three costs.
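The routing can be exercised from outside the cluster by overriding the Host header, which lets you test before DNS is set up. Here `<lb-ip>` is a placeholder for the Ingress controller's external IP:

```shell
# Same IP, different Host headers -> different backend Services
curl -H "Host: api.example.com" http://<lb-ip>/v1
curl -H "Host: api.example.com" http://<lb-ip>/v2
curl -H "Host: dashboard.example.com" http://<lb-ip>/
```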
The Ingress Controller Requirement
An Ingress resource by itself does nothing. You need an Ingress controller — a Pod running a reverse proxy that watches Ingress resources and configures itself accordingly.
Popular Ingress controllers:
- NGINX Ingress Controller — the most widely used, based on NGINX
- Traefik — automatic configuration, built-in Let's Encrypt support
- HAProxy Ingress — high-performance, battle-tested proxy
- AWS Load Balancer Controller (formerly AWS ALB Ingress Controller) — provisions AWS Application Load Balancers
- Istio Gateway — advanced traffic management with service mesh
Installing an Ingress controller is typically done with Helm or a manifest:
```shell
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```
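After installation, you can confirm the controller is running and find its external address (Service and namespace names below match the Helm chart's defaults):

```shell
# The controller Pod should reach Running status
kubectl get pods -n ingress-nginx

# Its LoadBalancer Service carries the cluster's single external IP
kubectl get service -n ingress-nginx ingress-nginx-controller
```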
TLS Termination
Ingress handles TLS termination natively. You create a Secret with the certificate and reference it:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-cert>
  tls.key: <base64-key>
```
The Ingress controller decrypts HTTPS traffic and forwards plain HTTP to the backend Services. This centralizes certificate management instead of configuring TLS in every application.
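In practice the Secret is usually created with kubectl rather than by hand-encoding base64 (the certificate and key file names here are assumptions):

```shell
# Reads the PEM files and base64-encodes them into the Secret automatically
kubectl create secret tls tls-secret \
  --cert=server.crt --key=server.key
```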
Combined with cert-manager, you can automate certificate provisioning from Let's Encrypt:
```yaml
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
```
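The annotation refers to a ClusterIssuer, which must exist as a separate resource. A minimal sketch of one using the ACME HTTP-01 solver — the email address is a placeholder, and field details may vary by cert-manager version:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com  # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key  # stores the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx  # solve challenges via the NGINX Ingress controller
```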
Gateway API — The Evolution of Ingress
The Gateway API is a newer Kubernetes standard designed to replace Ingress with a more expressive, role-oriented model. It separates infrastructure concerns (GatewayClass, Gateway) from routing concerns (HTTPRoute, TCPRoute):
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-route
spec:
  parentRefs:
    - name: my-gateway
  hostnames:
    - "api.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v1
      backendRefs:
        - name: api-v1
          port: 80
```
Gateway API supports traffic splitting, header-based routing, and cross-namespace references — features that Ingress handles only through controller-specific annotations. While Ingress remains widely used, Gateway API is the future direction.
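As one illustration, the traffic splitting mentioned above is expressed directly through backend weights. A sketch of an HTTPRoute rule fragment, reusing the Services from the earlier example:

```yaml
rules:
  - backendRefs:
      - name: api-v1
        port: 80
        weight: 90  # roughly 90% of requests
      - name: api-v2
        port: 80
        weight: 10  # roughly 10% sent to the canary
```

With Ingress, the same canary rollout would require controller-specific annotations such as those offered by NGINX.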
Cost Implications
In cloud environments, each Service of type LoadBalancer provisions an external load balancer (e.g., an AWS NLB or GCP LB). At ~$20/month each, exposing 20 Services directly costs ~$400/month.
An Ingress controller uses a single LoadBalancer Service and routes traffic to many backend Services internally. This consolidation can significantly reduce cloud costs while adding HTTP-aware routing capabilities.
When to Use Each
Use Services alone when you have non-HTTP workloads (databases, gRPC services, message queues) or simple internal communication. Use Ingress when you need to expose HTTP services externally with routing rules, TLS termination, or cost-efficient load balancer sharing. In most production clusters, you will use both.
Use Ingress when...
- You need to route external HTTP traffic based on hostname or URL path
- You want TLS termination managed by the cluster
- You want one external load balancer to serve multiple backend Services
- You need features like rate limiting, authentication, or URL rewriting
Use Service when...
- You need internal Pod-to-Pod communication within the cluster
- You're exposing a non-HTTP service like a database or message queue
- You need simple external access via NodePort or LoadBalancer
- You're providing a stable endpoint for a set of Pods
- You need headless service discovery for StatefulSets
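The headless case in the last bullet drops the virtual IP entirely, so cluster DNS resolves to the individual Pod IPs — which is what StatefulSets need for per-replica addressing. A minimal sketch for a hypothetical database:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None  # headless: no virtual IP, DNS returns Pod IPs directly
  selector:
    app: db
  ports:
    - port: 5432
```

Each replica then gets a stable DNS name of the form `db-0.db.default.svc.cluster.local`.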
Model Interview Answer
“A Service is the fundamental building block for Pod networking in Kubernetes — it provides a stable IP and DNS name that routes traffic to a set of Pods. Services operate at L4 (TCP/UDP). An Ingress is a layer on top that provides L7 HTTP routing. It routes external requests to backend Services based on hostnames and URL paths, handles TLS termination, and can consolidate many Services behind a single external load balancer. You always need a Service — Ingress is optional and depends on an Ingress controller like NGINX or Traefik to function.”