What happens when you run kubectl apply from start to finish?
Running kubectl apply triggers a chain of events: kubectl validates and sends the manifest to the API server, which authenticates, authorizes, and runs admission controllers, then persists to etcd. Controllers detect the new object and create dependent resources, the scheduler assigns pods to nodes, and kubelets start containers via the container runtime.
Detailed Answer
Understanding what happens when you run kubectl apply -f deployment.yaml from start to finish reveals the entire Kubernetes architecture in action. Here is the complete lifecycle, step by step.
Step 1: Client-Side Processing (kubectl)
# Run with high verbosity to see the full API interaction
kubectl apply -f deployment.yaml -v=8
kubectl performs several operations before contacting the API server:
- Reads the kubeconfig (default ~/.kube/config) to determine the API server endpoint, credentials, and default namespace.
- Parses the YAML/JSON manifest and validates it against the OpenAPI schema fetched from the API server (client-side validation).
- Determines the API endpoint from the resource's apiVersion and kind. For example, apps/v1 Deployment maps to /apis/apps/v1/namespaces/{ns}/deployments.
- Performs a three-way merge (for apply specifically): compares the new manifest, the kubectl.kubernetes.io/last-applied-configuration annotation on the live object, and the live object itself to compute a strategic merge patch.
- Sends the HTTP request (PATCH for existing resources, POST for new ones) to the API server with appropriate headers and TLS credentials.
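The three-way merge at the heart of apply can be illustrated with a toy sketch. This is not the real strategic merge patch (which understands nested objects and list merge keys); it only shows the core idea for a flat map of fields: a field removed since the last apply is deleted, and a field whose desired value differs from the live object is patched.

```python
def three_way_patch(last_applied, live, new):
    """Toy three-way merge over a flat dict of fields (illustrative only)."""
    patch = {}
    # A field present in last-applied but missing from the new manifest
    # was removed by the user, so the patch deletes it (null value).
    for key in last_applied:
        if key not in new:
            patch[key] = None
    # A field whose desired value differs from the live object is updated.
    for key, value in new.items():
        if live.get(key) != value:
            patch[key] = value
    return patch

# Removing "replicas" from the manifest produces a deletion entry
print(three_way_patch(
    last_applied={"image": "nginx:1.27", "replicas": 3},
    live={"image": "nginx:1.27", "replicas": 3},
    new={"image": "nginx:1.28"},
))  # {'replicas': None, 'image': 'nginx:1.28'}
```

The live object matters because a field set by a controller (not by the last apply) is left alone rather than deleted.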
Step 2: API Server Processing
The API server receives the request and processes it through its pipeline:
Authentication -- The server validates the identity of the caller using the configured authentication chain (client certificate, bearer token, OIDC token, etc.). If authentication fails, a 401 Unauthorized response is returned.
Authorization -- RBAC (or another configured authorizer) checks whether the authenticated user has permission to perform the requested action on the specified resource. Failure returns 403 Forbidden.
Mutating Admission -- Mutating admission webhooks and built-in mutating controllers can modify the object. Common examples include injecting sidecar containers (Istio), setting default resource limits (LimitRanger), or adding labels.
Schema Validation -- The API server validates the object against its schema, checking field types, required fields, and structural integrity.
Validating Admission -- Validating admission webhooks can accept or reject the request. Examples include OPA/Gatekeeper policies enforcing naming conventions or disallowing privileged containers.
Persistence to etcd -- The validated object is serialized (typically as protobuf) and written to etcd. The API server returns a success response to kubectl.
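The pipeline above can be modeled as a single function. This is a deliberately minimal sketch (real admission involves webhook configurations, ordering rules, and reinvocation policies), but it shows the sequence and the failure codes at each stage:

```python
def handle_write(user, verb, resource, obj,
                 authenticate, authorize, mutators, validators, store):
    """Simplified model of the API server's write pipeline (not real code)."""
    identity = authenticate(user)
    if identity is None:
        return 401                        # authentication failed
    if not authorize(identity, verb, resource):
        return 403                        # RBAC denied
    for mutate in mutators:               # mutating admission may rewrite obj
        obj = mutate(obj)
    for validate in validators:           # validating admission may reject it
        if not validate(obj):
            return 400
    store[(resource, obj["name"])] = obj  # persist (stand-in for etcd)
    return 201

store = {}
code = handle_write(
    user="alice", verb="create", resource="deployments",
    obj={"name": "my-app"},
    authenticate=lambda u: u,                    # everyone "authenticates"
    authorize=lambda i, v, r: i == "alice",      # toy RBAC rule
    mutators=[lambda o: {**o, "labels": {"team": "web"}}],  # label injection
    validators=[lambda o: "labels" in o],
    store=store,
)
print(code)  # 201
```

Note how the validators run after the mutators: a policy check always sees the object as it will be persisted, including webhook-injected fields.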
Step 3: Controller Reconciliation
Once the Deployment object exists in etcd, the controller manager's watch connections pick up the event:
Deployment Controller
-> Detects new Deployment
-> Creates a ReplicaSet with matching pod template and replica count
ReplicaSet Controller
-> Detects new ReplicaSet with desired replicas > 0
-> Creates Pod objects (spec.nodeName is empty)
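Both controllers follow the same level-triggered pattern: compare desired state against observed state and act on the difference. A minimal sketch of the ReplicaSet side (hypothetical names, not the real controller code):

```python
def reconcile_replicaset(desired, pods):
    """Return the actions one reconcile pass would take (sketch).

    desired: replica count from the ReplicaSet spec
    pods:    names of pods currently observed for this ReplicaSet
    """
    diff = desired - len(pods)
    if diff > 0:
        return ["create_pod"] * diff                              # scale up
    if diff < 0:
        return [("delete_pod", name) for name in pods[:(-diff)]]  # scale down
    return []                                                     # converged

print(reconcile_replicaset(3, []))          # ['create_pod', 'create_pod', 'create_pod']
print(reconcile_replicaset(1, ["a", "b"]))  # [('delete_pod', 'a')]
```

Because the loop compares state rather than reacting to individual events, a missed event is harmless: the next pass still converges.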
Step 4: Scheduling
The kube-scheduler watches for pods with no spec.nodeName:
Scheduler
-> Filters nodes (resource availability, taints, affinity)
-> Scores remaining nodes (balanced resources, image locality)
-> Selects the best node
-> Updates pod's spec.nodeName via the API server
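The filter/score split can be sketched in a few lines. Real scheduling runs many plugins (taints, affinity, volume topology, image locality); this toy version filters on a single CPU request and scores on remaining headroom:

```python
def schedule(pod_cpu_m, nodes):
    """Pick a node name for a pod requesting pod_cpu_m millicores (sketch)."""
    # Filter: drop nodes that cannot fit the request
    feasible = [n for n in nodes if n["free_cpu_m"] >= pod_cpu_m]
    if not feasible:
        return None  # no feasible node: the pod stays Pending
    # Score: prefer the node with the most free CPU remaining
    return max(feasible, key=lambda n: n["free_cpu_m"])["name"]

nodes = [{"name": "node-a", "free_cpu_m": 500},
         {"name": "node-b", "free_cpu_m": 2000}]
print(schedule(1000, nodes))  # node-b
print(schedule(4000, nodes))  # None -> pod stays Pending
```

The None case is exactly the "Pod stuck in Pending" failure mode: every node was filtered out, and the scheduler records a FailedScheduling event explaining why.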
Step 5: kubelet Execution
The kubelet on the assigned node receives the pod update through its watch:
kubelet
-> Waits for the CSI driver to attach and mount any persistent volumes
-> Calls CRI RuntimeService.RunPodSandbox (creates network namespace)
-> CNI plugin configures pod networking (assigns IP, sets up routes)
-> For each init container (sequentially):
   -> CRI ImageService.PullImage (if not cached)
   -> CRI RuntimeService.CreateContainer + StartContainer
   -> Wait for completion
-> For each regular container:
   -> CRI ImageService.PullImage (if not cached)
   -> CRI RuntimeService.CreateContainer + StartContainer
-> Starts health probes (startup, then liveness and readiness)
-> Reports pod status back to API server
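The ordering rules above (init containers strictly one at a time, app containers started without waiting for each other) can be made visible with a fake runtime that records calls. RuntimeStub and sync_pod below are stand-ins for illustration, not the real CRI API:

```python
class RuntimeStub:
    """Records calls in order so the sequencing is visible (not real CRI)."""
    def __init__(self):
        self.log = []
    def call(self, op, target):
        self.log.append(f"{op}({target})")

def sync_pod(pod, rt):
    rt.call("RunPodSandbox", pod["name"])    # sandbox + CNI setup
    for c in pod["init_containers"]:         # sequential: wait for each one
        rt.call("PullImage", c)
        rt.call("StartContainer", c)
        rt.call("WaitForCompletion", c)
    for c in pod["containers"]:              # started without waiting
        rt.call("PullImage", c)
        rt.call("StartContainer", c)

rt = RuntimeStub()
sync_pod({"name": "my-app", "init_containers": ["init-db"],
          "containers": ["app", "sidecar"]}, rt)
print(rt.log)
```

Running it shows WaitForCompletion(init-db) appearing before any app container starts, which is why a hanging init container leaves the pod stuck in Init.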
Step 6: Service Integration
Once the pod's readiness probe passes:
EndpointSlice Controller
-> Detects pod is Ready and matches a Service selector
-> Adds pod IP to the EndpointSlice for that Service
kube-proxy (on every node)
-> Detects EndpointSlice change
-> Updates iptables/IPVS rules to include the new pod
-> Traffic to the Service ClusterIP now reaches this pod
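Endpoint membership is just label selection plus readiness. A sketch of what the EndpointSlice controller computes on each sync (simplified: real slices also carry ports, topology, and conditions):

```python
def endpoint_ips(selector, pods):
    """Ready pods whose labels satisfy the Service selector (sketch)."""
    return [p["ip"] for p in pods
            if p["ready"] and selector.items() <= p["labels"].items()]

pods = [
    {"ip": "10.0.1.5", "ready": True,  "labels": {"app": "my-app"}},
    {"ip": "10.0.1.6", "ready": False, "labels": {"app": "my-app"}},  # not Ready
    {"ip": "10.0.2.9", "ready": True,  "labels": {"app": "other"}},   # no match
]
print(endpoint_ips({"app": "my-app"}, pods))  # ['10.0.1.5']
```

This is why a failing readiness probe silently removes a pod from rotation: the pod still exists, but its IP drops out of the computed endpoint set.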
Observing the Full Flow
# Apply with verbose logging
kubectl apply -f deployment.yaml -v=6
# Watch events in real-time
kubectl get events -w --field-selector involvedObject.kind=Deployment
# Watch pods being created
kubectl get pods -w -l app=my-app
# View the full event timeline after creation
kubectl describe deployment my-app
kubectl describe replicaset my-app-<hash>
kubectl describe pod my-app-<hash>-<hash>
A typical event timeline looks like:
0s Deployment created (API server)
0s ReplicaSet created (Deployment controller)
0s Pod created (ReplicaSet controller)
0s Successfully assigned pod to node (Scheduler)
1s Pulling image "nginx:1.27" (kubelet)
3s Successfully pulled image (kubelet)
3s Created container (kubelet)
3s Started container (kubelet)
5s Readiness probe succeeded (kubelet)
5s Endpoints updated (EndpointSlice controller)
Failure Scenarios
Understanding this flow helps diagnose where failures occur:
| Symptom | Likely Stage |
|---------|-------------|
| "Forbidden" error from kubectl | Authorization (RBAC) |
| Object rejected with policy error | Admission webhooks |
| Pod stuck in Pending | Scheduling (resources, taints) |
| Pod stuck in ContainerCreating | Image pull or volume mount |
| Pod in CrashLoopBackOff | Container startup failure |
| Service not routing traffic | Readiness probe or endpoints |
Why Interviewers Ask This
This is a classic deep-dive question that tests end-to-end understanding of the Kubernetes architecture. A strong answer demonstrates knowledge of every control plane component, the data flow between them, and the event-driven reconciliation model.
Key Takeaways
- The request flows through kubectl, API server (auth, admission, persistence), controllers, scheduler, and kubelet
- Each stage is loosely coupled through the API server acting as a central hub
- The entire system is event-driven: components watch for changes and react asynchronously