kubectl top

Display resource usage (CPU/memory) for pods and nodes. Requires the Metrics Server to be installed in the cluster.

kubectl top [pods|nodes] [NAME] [flags]

Common Flags

Flag               Short   Description
--containers               Show metrics for individual containers within pods
--selector         -l      Filter by label selector
--sort-by                  Sort output by cpu or memory
--namespace        -n      Namespace to query
--all-namespaces   -A      Show pod metrics across all namespaces
--no-headers               Do not show column headers

Examples

View resource usage for all nodes

kubectl top nodes

View pod resource usage in current namespace

kubectl top pods

View pod metrics across all namespaces sorted by CPU

kubectl top pods -A --sort-by=cpu

View per-container metrics

kubectl top pods --containers

View metrics for pods with a specific label

kubectl top pods -l app=nginx

View metrics sorted by memory

kubectl top pods --sort-by=memory

View metrics for a specific pod

kubectl top pod my-pod

When to Use kubectl top

kubectl top provides a quick view of CPU and memory consumption for pods and nodes. It is the Kubernetes equivalent of the Linux top command and is the fastest way to identify resource-hungry workloads or nodes approaching capacity limits.

Prerequisites

kubectl top requires the Metrics Server to be running in the cluster:

# Check if Metrics Server is installed
kubectl get deployment metrics-server -n kube-system

# Install Metrics Server if missing
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify metrics are available
kubectl top nodes

If you see an error like "Metrics API not available," the Metrics Server is not installed or not functioning correctly.

Node Metrics

View resource usage across cluster nodes:

# View all node metrics
kubectl top nodes

# Example output:
# NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
# worker-1   450m         22%    3200Mi          40%
# worker-2   800m         40%    5100Mi          63%
# worker-3   250m         12%    2800Mi          35%

The percentages are relative to the node's total allocatable resources. A node consistently above 80% CPU or memory should trigger scaling actions.
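
The percentages can be reproduced by hand from the node's allocatable resources. A minimal sketch with hypothetical numbers (450m of CPU in use on a node whose allocatable CPU is 2 cores, i.e. 2000m, matching worker-1 above):

```shell
usage_m=450    # CPU(cores) column from `kubectl top nodes`, in millicores (sample value)
alloc_m=2000   # from: kubectl get node <name> -o jsonpath='{.status.allocatable.cpu}' (sample value)

# usage ÷ allocatable, as an integer percentage
echo "$(( usage_m * 100 / alloc_m ))%"   # prints 22%
```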

Pod Metrics

View resource consumption for pods:

# All pods in current namespace
kubectl top pods

# All pods across all namespaces
kubectl top pods -A

# Sort by CPU usage to find hot spots
kubectl top pods --sort-by=cpu

# Sort by memory to find memory-hungry pods
kubectl top pods --sort-by=memory -A

# Filter by label
kubectl top pods -l app=nginx

# Example output:
# NAME                     CPU(cores)   MEMORY(bytes)
# my-app-7d6f8b4c5-x2k9m   150m         256Mi
# my-app-7d6f8b4c5-a1b2c   120m         230Mi

Container-Level Metrics

For multi-container pods, view per-container breakdown:

# Show per-container metrics
kubectl top pods --containers

# Example output:
# POD                      NAME          CPU(cores)   MEMORY(bytes)
# my-app-7d6f8b4c5-x2k9m   app           120m         200Mi
# my-app-7d6f8b4c5-x2k9m   istio-proxy   30m          56Mi

# Filter for specific pods with container breakdown
kubectl top pods --containers -l app=nginx

This helps identify whether the main container or a sidecar is consuming unexpected resources.
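
One way to see which container dominates across all replicas is to aggregate the per-container output by container name. A sketch using awk over sample data; the `kubectl_top_containers` function is a hypothetical stand-in for `kubectl top pods --containers --no-headers`:

```shell
# Stand-in replaying sample `kubectl top pods --containers --no-headers` output
kubectl_top_containers() {
cat <<'EOF'
my-app-7d6f8b4c5-x2k9m   app           120m   200Mi
my-app-7d6f8b4c5-x2k9m   istio-proxy   30m    56Mi
my-app-7d6f8b4c5-a1b2c   app           110m   190Mi
my-app-7d6f8b4c5-a1b2c   istio-proxy   28m    54Mi
EOF
}

# Sum CPU per container name across replicas ($3+0 strips the "m" suffix)
kubectl_top_containers | awk '{cpu[$2] += $3+0} END {for (c in cpu) print c, cpu[c] "m"}' | sort
# prints:
# app 230m
# istio-proxy 58m
```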

Identifying Resource Issues

Use top metrics to diagnose common problems:

# Find pods close to their memory limits
kubectl top pods -A --sort-by=memory

# Compare with configured limits (first container of each pod only)
kubectl get pods -o custom-columns=\
NAME:.metadata.name,\
MEM_REQUEST:.spec.containers[0].resources.requests.memory,\
MEM_LIMIT:.spec.containers[0].resources.limits.memory

# Find nodes under pressure
kubectl top nodes
kubectl describe node <node-name> | grep -A 5 "Conditions"
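
The pressure check can also be done mechanically by flagging any `*Pressure` condition that is True. A sketch; the `describe_node` function here just replays a sample Conditions block in place of `kubectl describe node <node-name>`:

```shell
# Sample Conditions block standing in for real `kubectl describe node` output
describe_node() {
cat <<'EOF'
Conditions:
  Type             Status
  MemoryPressure   False
  DiskPressure     True
  PIDPressure      False
  Ready            True
EOF
}

# Print any pressure condition currently True
describe_node | awk '$1 ~ /Pressure$/ && $2 == "True" {print "PRESSURE:", $1}'
# prints: PRESSURE: DiskPressure
```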

Scripting with top

Combine top with other tools for automated analysis:

# Find pods using more than 500Mi memory ($3+0 coerces "256Mi" to 256 for a numeric compare)
kubectl top pods --no-headers | awk '$3+0 > 500 {print $1, $3}'

# Alert on nodes above 80% CPU ($3+0 strips the "%" suffix)
kubectl top nodes --no-headers | awk '$3+0 > 80 {print "HIGH CPU:", $1, $3}'

# Export metrics for reporting
kubectl top pods -A --no-headers --sort-by=memory > pod-metrics.txt
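
kubectl top normally reports memory in Mi, which is why the numeric comparisons above can simply strip the suffix. When values might arrive in other units, a small normalizing helper is safer (`to_mi` is a hypothetical name, not a kubectl feature):

```shell
# Normalize a memory quantity to Mi before numeric comparison (hypothetical helper)
to_mi() {
  case "$1" in
    *Gi) echo $(( ${1%Gi} * 1024 )) ;;  # gibibytes -> mebibytes
    *Mi) echo "${1%Mi}" ;;              # already Mi, strip suffix
    *)   echo "$1" ;;                   # pass anything else through unchanged
  esac
}

to_mi 2Gi     # prints 2048
to_mi 512Mi   # prints 512
```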

Comparison with Resource Requests and Limits

Understanding the relationship between actual usage (shown by top) and configured requests/limits is key to right-sizing workloads:

# Get actual usage
kubectl top pods

# Get configured requests and limits (first container of each pod only)
kubectl get pods -o custom-columns=\
"NAME:.metadata.name,\
CPU_REQ:.spec.containers[0].resources.requests.cpu,\
CPU_LIM:.spec.containers[0].resources.limits.cpu,\
MEM_REQ:.spec.containers[0].resources.requests.memory,\
MEM_LIM:.spec.containers[0].resources.limits.memory"

If actual usage is consistently far below requests, you are over-provisioning and wasting cluster resources. If usage frequently approaches limits, the workload needs more resources or the limits need adjusting.
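
That comparison can be reduced to a single utilization figure per pod. A sketch with sample values standing in for live output (256Mi in use against a 512Mi request; in practice the two numbers come from the commands above):

```shell
name="my-app"
use="256Mi"   # actual usage, from `kubectl top pods` (sample value)
req="512Mi"   # memory request, from the custom-columns query (sample value)

# +0 strips the Mi suffix so awk compares numerically
awk -v name="$name" -v use="$use" -v req="$req" \
  'BEGIN {printf "%s uses %d%% of its memory request\n", name, (use+0)*100/(req+0)}'
# prints: my-app uses 50% of its memory request
```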

Limitations

kubectl top shows point-in-time metrics with a delay (typically 15-30 seconds). It does not show historical data, trends, or percentile distributions. For production monitoring, use Prometheus with Grafana or your cloud provider's monitoring solution. kubectl top is best for quick spot checks and interactive debugging.

Interview Questions About This Command

What is required for kubectl top to work?
The Metrics Server must be installed and running in the cluster. It collects CPU and memory metrics from kubelets and makes them available through the Metrics API.
How do you identify pods that are consuming excessive memory?
Use kubectl top pods --sort-by=memory -A to see all pods sorted by memory usage across all namespaces. Compare actual usage with resource requests and limits.
What is the difference between kubectl top and monitoring tools like Prometheus?
kubectl top shows real-time point-in-time metrics only. Prometheus stores historical data, supports alerting, and provides rich querying. kubectl top is for quick checks; Prometheus is for production monitoring.

Common Mistakes

  • Trying to use kubectl top without Metrics Server installed, which produces an error about metrics not being available.
  • Interpreting kubectl top values as limits instead of current usage. The numbers show actual consumption, not configured limits.
  • Relying on kubectl top for capacity planning when it only shows current usage. Use Prometheus or similar tools for historical trends.

Related Commands