Kubernetes 401 Unauthorized

Causes and Fixes

A 401 Unauthorized error in Kubernetes means the API server could not authenticate the request. Unlike 403 Forbidden where the identity is known but lacks permissions, 401 means the identity itself could not be verified — the credentials are missing, invalid, or expired.

Symptoms

  • kubectl returns 'error: You must be logged in to the server (Unauthorized)'
  • API calls return HTTP 401 status code
  • Service account tokens are rejected by the API server
  • Previously working kubeconfig contexts suddenly stop authenticating
  • Webhook or operator logs show authentication failures to the API server

Common Causes

1. Expired client certificate
The user's client certificate has expired. Certificates issued by kubeadm are valid for one year by default and must be renewed.
2. Invalid or expired token
The bearer token (service account token, OIDC token, or bootstrap token) has expired or been revoked. Kubernetes 1.22+ uses time-limited projected service account tokens.
3. Wrong kubeconfig context
The active kubeconfig context points to a cluster with different credentials, points to an old cluster that no longer exists, or has been corrupted.
4. Service account deleted
The service account associated with a running pod was deleted, invalidating the pod's mounted token. The API server rejects requests with tokens from deleted service accounts.
5. OIDC provider misconfiguration
The OIDC identity provider is unreachable, the client ID is wrong, or the OIDC token has expired and cannot be refreshed.
6. API server authentication configuration changed
The API server's authentication flags (token auth file, OIDC issuer URL, client CA) were changed, invalidating previously valid credentials.

Step-by-Step Troubleshooting

401 Unauthorized means the API server cannot verify who you are — an authentication failure, not an authorization failure. The following steps identify which credential is failing and how to fix it.

1. Check the Current kubeconfig Context

Start by verifying which context and credentials kubectl is using.

# Show the current context
kubectl config current-context

# Show the full context details
kubectl config view --minify

# List all available contexts
kubectl config get-contexts

Verify the server URL, user, and cluster are correct. If you have multiple kubeconfig files, check which one is active.

# Check the KUBECONFIG environment variable
echo $KUBECONFIG

# If not set, the default is ~/.kube/config
ls -la ~/.kube/config

2. Test Basic API Server Connectivity

Verify you can reach the API server and check what authentication method is being used.

# Test raw API access
kubectl cluster-info

# Try with verbose output to see authentication details
kubectl get pods -v=6 2>&1 | head -20

# Test API server directly
curl -k https://<api-server>:6443/healthz

If curl returns 200 for /healthz but kubectl fails with 401, the issue is with your credentials, not server availability.
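In a script, the two cases can be separated by capturing only the HTTP status code; a small sketch (the helper name is illustrative): curl reports 000 when the request never completed, while 401 means the server answered but rejected the request.

```shell
# Print only the HTTP status code for a URL. "000" means the server was
# unreachable; "401" means it is up but rejected the credentials.
http_status() {
  curl -sk -o /dev/null -w '%{http_code}' "$1"
}

# Example: http_status https://<api-server>:6443/api
```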

3. Check Client Certificate Expiration

If your kubeconfig uses client certificates, check if they have expired.

# Extract the client certificate from kubeconfig
kubectl config view --minify --raw -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d | openssl x509 -text -noout | grep -A2 "Validity"

# Or if using a file reference
kubectl config view --minify -o jsonpath='{.users[0].user.client-certificate}'
openssl x509 -in <cert-file> -text -noout | grep -A2 "Validity"

If the certificate's "Not After" date is in the past, it has expired and needs to be renewed.

# For kubeadm clusters, renew certificates
kubeadm certs renew all

# Check certificate expiration for all kubeadm certs
kubeadm certs check-expiration
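Beyond eyeballing the Validity block, the remaining lifetime can be computed in a script, which is handy for alerting; a minimal sketch that assumes GNU date (the function name is illustrative):

```shell
# Print the whole number of days until a PEM certificate expires.
# Assumes GNU date's -d flag to parse openssl's enddate format.
cert_days_left() {
  local not_after
  not_after=$(openssl x509 -in "$1" -noout -enddate | cut -d= -f2)
  echo $(( ( $(date -d "$not_after" +%s) - $(date +%s) ) / 86400 ))
}

# Example: cert_days_left /etc/kubernetes/pki/apiserver.crt
```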

4. Check Service Account Token

If a pod's service account is getting 401 errors, check the token.

# Check which service account the pod uses
kubectl get pod <pod-name> -o jsonpath='{.spec.serviceAccountName}'

# Verify the service account exists
kubectl get serviceaccount <sa-name> -n <namespace>

# Check the token mount inside the pod
kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token

# Test the token directly
TOKEN=$(kubectl exec <pod-name> -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl exec <pod-name> -- curl -sk -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api/v1/namespaces

In Kubernetes 1.22+, projected service account tokens are time-limited (default 1 hour) and automatically rotated by the kubelet. If the pod has been running for a long time and the token refresh failed, restart the pod.
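To see a token's actual expiry, you can decode its exp claim locally; a minimal sketch that assumes python3 is available (the function name is made up for illustration):

```shell
# Print the `exp` claim (Unix epoch seconds) of a JWT. The payload is the
# second dot-separated, base64url-encoded segment; JWTs strip base64
# padding, so it must be restored before decoding.
sa_token_exp() {
  printf '%s' "$1" | cut -d. -f2 | python3 -c '
import base64, json, sys
seg = sys.stdin.read().strip()
seg += "=" * (-len(seg) % 4)  # restore stripped padding
print(json.loads(base64.urlsafe_b64decode(seg))["exp"])
'
}

# Compare against the current time:
# [ "$(sa_token_exp "$TOKEN")" -gt "$(date +%s)" ] && echo "still valid"
```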

5. Verify the Service Account Still Exists

If the service account was deleted, the token is invalidated immediately. Bound tokens also embed the service account's UID, so recreating a service account with the same name does not revive the old token — the pod must be restarted to mount a fresh one.

# Check if the service account exists
kubectl get serviceaccount <sa-name> -n <namespace>

# If it was deleted, recreate it
kubectl create serviceaccount <sa-name> -n <namespace>

# The pod needs to be restarted to get a new token
kubectl delete pod <pod-name>

6. Check OIDC Configuration

If using OIDC authentication, verify the provider and token.

# Check if the OIDC token is expired
kubectl config view --minify -o jsonpath='{.users[0].user.auth-provider.config}' 2>/dev/null

# If using a kubeconfig with exec-based auth
kubectl config view --minify -o jsonpath='{.users[0].user.exec}' 2>/dev/null

# Try refreshing the token manually
# (specific to your OIDC provider — e.g., dex, Keycloak, Azure AD)

For OIDC issues, verify:

  • The OIDC provider is reachable from the API server
  • The client ID matches the API server's --oidc-client-id flag
  • The token has not expired
  • The issuer URL matches the API server's --oidc-issuer-url flag
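For reference, a kubeconfig user entry wired up for OIDC via exec-based auth typically looks like the following. This is a sketch using the kubelogin plugin as one common example; the issuer URL and client ID are placeholders:

```yaml
users:
- name: oidc-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: kubectl
      args:
        - oidc-login
        - get-token
        - --oidc-issuer-url=https://issuer.example.com
        - --oidc-client-id=<client-id>
```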

7. Check API Server Authentication Configuration

If credentials were previously working, the API server configuration may have changed. Note that the commands below themselves require working credentials — if kubectl is failing entirely, use an admin kubeconfig (e.g. /etc/kubernetes/admin.conf) or read the static pod logs directly on the control-plane node.

# Check API server flags (for kubeadm clusters)
kubectl get pod kube-apiserver-<master-node> -n kube-system -o yaml | grep -E "oidc|token|cert|auth"

# Check API server logs for authentication errors
kubectl logs -n kube-system kube-apiserver-<master-node> --tail=100 | grep -i "auth\|401\|unauthorized"

8. Generate New Credentials

If certificates are expired or tokens are invalid, generate new ones.

# For certificate-based auth (kubeadm)
kubeadm kubeconfig user --client-name=<username> --org=<group> > new-kubeconfig.yaml

# For service account tokens, create a new long-lived token (not recommended for production)
kubectl create token <sa-name> -n <namespace> --duration=8760h

# For a quick test, get an admin kubeconfig (kubeadm clusters)
cat /etc/kubernetes/admin.conf
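A token issued with kubectl create token can be dropped into a minimal standalone kubeconfig; a sketch in which the names, server URL, and CA data are all placeholders:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://<api-server>:6443
    certificate-authority-data: <base64-encoded-ca-cert>
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: my-sa
current-context: my-context
users:
- name: my-sa
  user:
    token: <token-from-kubectl-create-token>
```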

9. Switch to a Working Context

If you have multiple kubeconfig contexts, switch to one that works.

# List contexts
kubectl config get-contexts

# Switch context
kubectl config use-context <working-context>

# Test
kubectl get pods

If no context works, you may need to obtain new credentials from the cluster administrator or re-bootstrap access from the control plane.

10. Verify Authentication Is Working

Confirm the 401 error is resolved.

# Test authentication
kubectl auth whoami  # Kubernetes 1.27+

# Or check your identity
kubectl get pods
kubectl cluster-info

# For service accounts, test from inside the pod
kubectl exec <pod-name> -- curl -sk -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc/api/v1/namespaces/default/pods

If kubectl auth whoami returns your identity and commands succeed, authentication is restored. If you received a new certificate, note its expiration date and set up monitoring to alert before it expires again.

How to Explain This in an Interview

I would explain Kubernetes authentication as a pluggable system: the API server supports multiple authentication methods including client certificates (X.509), bearer tokens (service accounts, OIDC, webhook), and authentication proxies. Each request is evaluated by the configured authenticators in order. 401 means none of them could validate the credentials. I'd discuss the difference between authentication (who are you?) and authorization (what can you do?), how service account tokens work (projected volumes with automatic rotation in 1.22+), and how kubeconfig files store credentials. I'd walk through debugging by checking the kubeconfig, testing token validity, and reviewing API server logs for authentication failures.

Prevention

  • Monitor certificate expiration dates and set up alerts before expiry
  • Use OIDC or external identity providers with automatic token refresh
  • Implement certificate rotation automation
  • Regularly validate kubeconfig files in CI/CD pipelines
  • Use short-lived tokens with automatic refresh rather than static tokens
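The certificate-monitoring bullet above can be sketched as a cron-able script; this assumes certificates live under a directory such as /etc/kubernetes/pki on a kubeadm control plane, and the function name and threshold are illustrative:

```shell
# Warn about every PEM certificate in a directory that expires within
# the given number of days. openssl's -checkend takes a window in
# seconds and returns non-zero if the cert expires inside it.
warn_expiring_certs() {
  local dir=$1 days=${2:-30} crt
  for crt in "$dir"/*.crt; do
    if ! openssl x509 -in "$crt" -noout -checkend $(( days * 86400 )) >/dev/null; then
      echo "WARNING: $crt expires within $days days"
    fi
  done
}

# Example (kubeadm layout): warn_expiring_certs /etc/kubernetes/pki 30
```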

Related Errors