How Does a NodePort Service Work in Kubernetes?
A NodePort Service exposes a Service on a static port on every node's IP address. External clients can reach the Service by sending traffic to any node's IP on that port, and kube-proxy forwards it to the correct backend Pods.
What Is a NodePort Service?
A NodePort Service extends the default ClusterIP Service by opening a specific port on every node in the cluster. Any traffic that arrives at <NodeIP>:<NodePort> is forwarded by kube-proxy to the Service and then to one of the backend Pods. This is the simplest way to make a Kubernetes Service reachable from outside the cluster without a cloud load balancer.
Creating a NodePort Service
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: NodePort
  selector:
    app: web-frontend
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 3000
    nodePort: 30080
The three port fields serve different purposes:
| Field | Value | Description |
|---|---|---|
| port | 80 | The port on the ClusterIP (internal access) |
| targetPort | 3000 | The port on the container |
| nodePort | 30080 | The port opened on every node (external access) |
If you omit nodePort, Kubernetes automatically allocates an unused port from the default range of 30000-32767.
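When you let Kubernetes allocate the port, you can read the assigned value back with jsonpath. This is a sketch that assumes the web-frontend Service from the example above has been applied:

```shell
# Print the NodePort allocated to the "http" port entry
kubectl get svc web-frontend \
  -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
```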
kubectl apply -f web-frontend-svc.yaml
kubectl get svc web-frontend
NAME           TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
web-frontend   NodePort   10.96.55.120   <none>        80:30080/TCP   8s
How Traffic Flows
External traffic to a NodePort Service follows this path:
External Client
      │
      │ http://<NodeIP>:30080
      ▼
┌──────────────┐
│ Node (any)   │
│ port 30080   │
└──────┬───────┘
       │ kube-proxy rule matches port 30080
       │ (same Service chain as ClusterIP 10.96.55.120:80)
       ▼
┌──────────────┐
│ iptables /   │
│ IPVS rules   │
└──────┬───────┘
       │ DNAT to Pod IP
       ▼
┌──────────────┐
│ Backend Pod  │
│ :3000        │
└──────────────┘
Key detail: the traffic can land on any node, even one that is not running a target Pod. In that case kube-proxy DNATs the packet to a Pod IP on another node, and the cluster network carries it there (the packet is also SNATed so the reply returns through the same node).
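You can inspect these rules directly on a node. With kube-proxy in iptables mode, NodePort matches live in the KUBE-NODEPORTS chain of the NAT table; in IPVS mode they appear as virtual servers. A sketch (run on a node; exact rule output varies by kube-proxy version):

```shell
# iptables mode: show the rule matching our NodePort
sudo iptables -t nat -L KUBE-NODEPORTS -n | grep 30080

# IPVS mode: list virtual servers and their backend Pods instead
sudo ipvsadm -Ln | grep -A 3 30080
```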
NodePort with an Existing Deployment
Here is a complete example with a Deployment and NodePort Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.27
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: NodePort
  selector:
    app: web-frontend
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31000
After applying, you can access the service from any machine that can reach a node:
# Get a node IP
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# Access the service
curl http://$NODE_IP:31000
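Because the port is open on every node, a useful sanity check is to loop over all node IPs and confirm each one answers. A sketch, assuming the nodes' InternalIPs are reachable from the machine running it:

```shell
# Hit the NodePort on every node's InternalIP and print the HTTP status
for ip in $(kubectl get nodes \
  -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
  echo -n "$ip: "
  curl -s -o /dev/null -w "%{http_code}\n" "http://$ip:31000"
done
```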
Changing the Default Port Range
The kube-apiserver flag --service-node-port-range controls the allowed NodePort range. In a kubeadm cluster, this is set in the API server manifest:
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=20000-40000
    # ... other flags
Changing this range requires restarting the API server; in a kubeadm cluster the kubelet does this automatically when the static Pod manifest changes.
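Once the API server Pod has restarted, you can confirm the flag took effect by reading its command line. A sketch; the label and namespace assume a standard kubeadm installation:

```shell
# Show the configured NodePort range on the running API server
kubectl -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' \
  | tr ',' '\n' | grep service-node-port-range
```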
Security Considerations
NodePort opens a port on every node in the cluster, which has security implications:
- Firewall rules -- You must configure your network firewall or cloud security groups to allow traffic on the NodePort range, but only for the specific ports you intend to expose.
- No TLS termination -- NodePort does not handle TLS. You need to terminate TLS in the application, a sidecar, or use an Ingress controller in front.
- Node exposure -- All node IPs become entry points. Consider whether you want to expose worker node IPs directly.
# Example: allow only port 31000 in a security group (AWS CLI)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 31000 \
  --cidr 203.0.113.0/24
externalTrafficPolicy and NodePort
By default, externalTrafficPolicy is set to Cluster, meaning kube-proxy can forward traffic to any Pod on any node. This adds an extra network hop when the Pod is on a different node than the one that received the traffic.
Setting externalTrafficPolicy: Local restricts traffic to Pods running on the node that received the request:
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    app: web-frontend
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31000
Trade-offs:
| Policy | Pros | Cons |
|---|---|---|
| Cluster | Even load distribution | Extra hop, client IP is SNATed |
| Local | Preserves client IP, no extra hop | Uneven distribution if Pods are not spread evenly |
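You can also flip the policy on a live Service without recreating it, for example with a strategic merge patch. A sketch; the Service name matches the example above:

```shell
# Switch an existing Service to Local (preserves client IP)
kubectl patch svc web-frontend \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'

# Verify the change
kubectl get svc web-frontend \
  -o jsonpath='{.spec.externalTrafficPolicy}'
```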
When to Use NodePort
- On-premises clusters without a cloud load balancer integration
- Development and testing when you need quick external access
- Behind an external load balancer that you manage yourself (the LB sends traffic to <NodeIP>:<NodePort>)
For production workloads exposed to the internet, prefer a LoadBalancer Service or an Ingress controller, which provide TLS termination, path-based routing, and health checks.
Summary
NodePort is a straightforward mechanism for exposing a Service externally by opening a port on every cluster node. It builds on top of ClusterIP, adding external reachability at the cost of exposing node IPs and using high-numbered ports. It is a solid choice for on-premises clusters and development environments, but production internet-facing traffic typically calls for a LoadBalancer or Ingress.
Why Interviewers Ask This
Interviewers ask this to see if candidates understand how Kubernetes exposes services externally without a cloud load balancer, which is common in on-premises environments and during development.
Key Takeaways
- NodePort opens a static port (30000-32767) on every node, making the Service externally reachable.
- A NodePort Service automatically creates a ClusterIP, so it is also accessible internally.
- NodePort is commonly used in on-premises clusters or for quick development access.