kubectl events

Display events in the cluster. Events provide a timeline of what happened to resources — scheduling, pulling images, errors, and more.

kubectl events [flags]

Common Flags

| Flag | Short | Description |
|------|-------|-------------|
| --for | | Filter events for a specific resource (e.g., pod/my-pod) |
| --types | | Filter by event type (Normal or Warning) |
| --watch | -w | Watch for new events in real-time |
| --all-namespaces | -A | Show events across all namespaces |
| --output | -o | Output format: json or yaml |

Examples

List events in the current namespace

kubectl events

Watch events in real-time

kubectl events -w

Show only warning events

kubectl events --types=Warning

Show events for a specific pod

kubectl events --for=pod/my-pod

Show events across all namespaces

kubectl events -A

When to Use kubectl events

kubectl events provides a chronological view of what is happening in your cluster. Events record state changes — pods being scheduled, containers starting, images being pulled, errors occurring. They are the first place to look when debugging issues.

kubectl events vs kubectl get events

Kubernetes 1.26+ includes kubectl events as a dedicated command with better defaults:

# New dedicated command (sorted by time, better formatting)
kubectl events

# Legacy approach (still works)
kubectl get events --sort-by=.lastTimestamp

The kubectl events command provides chronological ordering by default and cleaner output.

Watching Events in Real-Time

For live debugging, watch mode is invaluable:

# Watch all events
kubectl events -w

# Watch only warnings
kubectl events -w --types=Warning

# Watch events in a specific namespace
kubectl events -w -n production

This is especially useful during deployments — you can watch events stream in as pods are created, scheduled, and started.

Filtering Events

By Resource

# Events for a specific pod
kubectl events --for=pod/my-pod

# Events for a deployment
kubectl events --for=deployment/my-app

# Events for a node
kubectl events --for=node/worker-1

By Type

# Only warnings (problems)
kubectl events --types=Warning

# Only normal events
kubectl events --types=Normal

Using the Legacy Approach with Field Selectors

# Events for a specific resource
kubectl get events --field-selector=involvedObject.name=my-pod

# Events by reason
kubectl get events --field-selector=reason=FailedScheduling

# Warning events sorted by time
kubectl get events --field-selector=type=Warning --sort-by=.lastTimestamp

Common Event Types and Meanings

Pod Events

| Reason | Type | Meaning |
|--------|------|---------|
| Scheduled | Normal | Pod assigned to a node |
| Pulled | Normal | Container image pulled |
| Created | Normal | Container created |
| Started | Normal | Container started |
| Killing | Normal | Container being terminated |
| Failed | Warning | Container failed to start |
| BackOff | Warning | Container restarting after failure |
| FailedScheduling | Warning | No suitable node found |
| Unhealthy | Warning | Readiness/liveness probe failed |
| OOMKilling | Warning | Container exceeded memory limit |
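When several pods misbehave at once, a quick tally of which warning reasons are firing helps prioritize. A small helper, sketched here assuming jq is installed, reads an events snapshot on stdin:

```shell
# Tally Warning events by reason, most frequent first.
# Usage: kubectl get events -A -o json | warning_reasons
warning_reasons() {
  jq -r '.items[] | select(.type == "Warning") | .reason' \
    | sort | uniq -c | sort -rn
}
```

Because it reads stdin, the same function works on a live query or on a saved snapshot file.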

Node Events

| Reason | Type | Meaning |
|--------|------|---------|
| NodeReady | Normal | Node is healthy |
| NodeNotReady | Warning | Node is unhealthy |
| NodeHasDiskPressure | Warning | Disk space running low |
| NodeHasMemoryPressure | Warning | Memory running low |
| EvictionThresholdMet | Warning | Eviction threshold reached |
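To see which nodes currently have warnings against them, the event list can be reduced to a node list. A sketch assuming jq, using the core/v1 event fields that kubectl get events returns:

```shell
# List node names that currently have Warning events.
# Usage: kubectl get events -A -o json | nodes_with_warnings
nodes_with_warnings() {
  jq -r '.items[]
         | select(.type == "Warning" and .involvedObject.kind == "Node")
         | .involvedObject.name' \
    | sort -u
}
```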

Debugging with Events

Pod Not Starting

# Check events for the specific pod
kubectl events --for=pod/my-pod

# Common issues visible in events:
# - FailedScheduling: insufficient resources, taint/toleration mismatch
# - Failed: image pull error, CrashLoopBackOff
# - Unhealthy: probe failures
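The checks above can be wrapped in a small triage helper that prints a pod's warnings and exits non-zero when any exist. A sketch: pod_warnings is a hypothetical name, jq is assumed, and the field names assume the core/v1 event schema (the events.k8s.io/v1 API calls these fields note and regarding):

```shell
# Print Warning events for a pod; exit non-zero if any were found.
# Usage: pod_warnings <pod> [namespace]
pod_warnings() {
  pod=${1:?usage: pod_warnings <pod> [namespace]}
  ns=${2:-default}
  out=$(kubectl events -n "$ns" --for="pod/$pod" --types=Warning -o json)
  # core/v1 event fields: .reason, .message
  printf '%s\n' "$out" | jq -r '.items[] | "\(.reason): \(.message)"'
  [ "$(printf '%s' "$out" | jq '.items | length')" -eq 0 ]
}
```

The non-zero exit status makes it usable as a gate in CI or deployment scripts.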

Deployment Rollout Issues

# Watch events during a rollout
kubectl events -w -n production &
kubectl rollout restart deployment/my-app -n production

# Look for:
# - New pods being created
# - Old pods being terminated
# - Any warnings about failed scheduling or image pulls

Node Issues

# Events for a specific node
kubectl events --for=node/worker-1

# All node-related warnings
kubectl get events --field-selector=involvedObject.kind=Node,type=Warning

Events in kubectl describe

Events are also shown at the bottom of kubectl describe output:

kubectl describe pod my-pod
# ...
# Events:
#   Type    Reason     Age   From               Message
#   ----    ------     ----  ----               -------
#   Normal  Scheduled  5m    default-scheduler  Successfully assigned default/my-pod to worker-1
#   Normal  Pulled     5m    kubelet            Container image "nginx:1.25" already present
#   Normal  Created    5m    kubelet            Created container nginx
#   Normal  Started    5m    kubelet            Started container nginx

Event Persistence and Archival

Events are ephemeral — they expire after 1 hour by default. For long-term storage:

# Stream events to a log aggregator
kubectl events -w -A -o json | your-log-shipper

# Periodic snapshot
kubectl get events -A -o json > "/tmp/events-$(date +%Y%m%d-%H%M).json"
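Periodic snapshots accumulate quickly, so a cron-friendly wrapper can cap retention. A sketch: snapshot_events is a hypothetical name, and the 7-day window is an arbitrary choice:

```shell
# Snapshot all cluster events into a directory, pruning snapshots
# older than 7 days. Intended to run from cron.
snapshot_events() {
  dir=${1:?usage: snapshot_events <dir>}
  mkdir -p "$dir"
  kubectl get events -A -o json > "$dir/events-$(date +%Y%m%d-%H%M).json"
  find "$dir" -name 'events-*.json' -mtime +7 -delete
}
```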

For production environments, consider tools like:

  • Event Exporter: Exports events to external systems (Elasticsearch, Stackdriver)
  • kube-eventer: Sinks events to various storage backends
  • Kubernetes Audit Logs: For security-relevant events

Scripting with Events

#!/bin/bash
# Alert on warning events as they stream in.
# kubectl prints each event as a pretty-printed JSON object, so compact
# each one to a single line with jq before reading it in the loop.
# (core/v1 event fields; events.k8s.io/v1 names these .note and .regarding.name)
kubectl events -w --types=Warning -o json | jq --unbuffered -c '.' | while read -r event; do
  reason=$(echo "$event" | jq -r '.reason')
  message=$(echo "$event" | jq -r '.message')
  object=$(echo "$event" | jq -r '.involvedObject.name')
  echo "WARNING: $reason on $object — $message"
  # Send to alerting system
done

Events are the observability foundation of Kubernetes. Combined with logs and metrics, they provide a complete picture of cluster behavior and are essential for both real-time debugging and post-incident analysis.

Interview Questions About This Command

How do you view events in a Kubernetes cluster?
kubectl events lists cluster events. You can also use kubectl get events --sort-by=.lastTimestamp or check events in kubectl describe output.
How long do events persist in Kubernetes?
Events have a default TTL of 1 hour in etcd. After that, they are garbage-collected. Adjust with --event-ttl on the API server.
What is the difference between Normal and Warning events?
Normal events indicate expected operations (scheduling, pulling images). Warning events indicate problems (failed scheduling, back-off restarts, OOM kills).

Common Mistakes

  • Relying on events for long-term audit trails — events expire after 1 hour by default.
  • Not using --types=Warning to focus on actual problems when debugging.
  • Forgetting that events are namespaced — use -A or -n to see events in the right namespace.

Related Commands