What is the Container Runtime Interface (CRI) in Kubernetes?

intermediate | architecture · devops · sre · cloud architect · CKA
TL;DR

CRI is a plugin interface that allows the kubelet to communicate with different container runtimes without being tightly coupled to any one implementation. It defines a gRPC-based API for image management and container lifecycle operations, with containerd and CRI-O being the two primary implementations.

Detailed Answer

The Container Runtime Interface (CRI) is a plugin interface defined by Kubernetes that standardizes how the kubelet interacts with container runtimes. Before CRI existed, the kubelet had hard-coded integrations with Docker, which made supporting alternative runtimes difficult. CRI was introduced in Kubernetes 1.5 to solve this by defining a clear contract between Kubernetes and any container runtime.

CRI Architecture

CRI defines a gRPC-based API with two main services:

RuntimeService handles container and pod sandbox lifecycle:

  • RunPodSandbox / StopPodSandbox / RemovePodSandbox -- Manage the pod-level sandbox (network namespace, etc.)
  • CreateContainer / StartContainer / StopContainer / RemoveContainer -- Manage individual containers within a pod
  • ListContainers / ContainerStatus -- Query container information
  • ExecSync / Exec / Attach / PortForward -- Interactive operations

ImageService handles container image operations:

  • PullImage -- Download an image from a registry
  • ListImages -- List images on the node
  • ImageStatus -- Get details about a specific image
  • RemoveImage -- Delete an image from local storage
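
Putting the two services together, the kubelet drives roughly this sequence when starting a pod: create the sandbox, pull the image, then create and start each container. The following is a toy Python sketch of that call order, not the real kubelet or the real gRPC API; the method names mirror the CRI RPCs listed above, everything else is illustrative.

```python
class FakeRuntimeService:
    """Stand-in for the CRI RuntimeService (a gRPC service in reality)."""
    def __init__(self, log):
        self.log = log

    def run_pod_sandbox(self, pod_name):
        # Creates the pod-level sandbox (network namespace, etc.)
        self.log.append(f"RunPodSandbox({pod_name})")
        return f"sandbox-{pod_name}"

    def create_container(self, sandbox_id, image):
        self.log.append(f"CreateContainer({sandbox_id}, {image})")
        return f"ctr-{image}"

    def start_container(self, container_id):
        self.log.append(f"StartContainer({container_id})")


class FakeImageService:
    """Stand-in for the CRI ImageService."""
    def __init__(self, log):
        self.log = log

    def pull_image(self, image):
        self.log.append(f"PullImage({image})")


def start_pod(runtime, images, pod_name, image):
    """Rough order the kubelet follows: sandbox first, then image, then container."""
    sandbox_id = runtime.run_pod_sandbox(pod_name)
    images.pull_image(image)
    ctr_id = runtime.create_container(sandbox_id, image)
    runtime.start_container(ctr_id)
    return sandbox_id, ctr_id


log = []
start_pod(FakeRuntimeService(log), FakeImageService(log), "web", "nginx:1.27")
print(log)
```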

The Runtime Stack

Kubernetes uses a layered runtime architecture:

kubelet
  |
  | CRI (gRPC)
  v
High-level runtime (containerd / CRI-O)
  |
  | OCI Runtime Spec
  v
Low-level runtime (runc / crun / kata-containers / gVisor)
  |
  | Linux kernel
  v
Namespaces, cgroups, seccomp, etc.

The high-level runtime implements the CRI interface, manages image storage, and supervises container processes. The low-level runtime (also called an OCI runtime) handles the actual creation of the container using Linux kernel primitives.
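
The "OCI Runtime Spec" boundary in the diagram is concrete: the high-level runtime prepares a bundle directory containing a config.json and a root filesystem, then asks the low-level runtime to run it. A heavily abbreviated sketch of such a config.json (real files contain many more fields, and the exact contents are generated by the high-level runtime):

```json
{
  "ociVersion": "1.1.0",
  "process": { "args": ["nginx", "-g", "daemon off;"], "cwd": "/" },
  "root": { "path": "rootfs" },
  "linux": {
    "namespaces": [
      { "type": "pid" }, { "type": "network" },
      { "type": "mount" }, { "type": "uts" }, { "type": "ipc" }
    ],
    "resources": { "memory": { "limit": 134217728 } }
  }
}
```

runc (or crun) reads this file and translates it into the kernel primitives shown at the bottom of the stack.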

containerd

containerd is the most widely used CRI runtime. It was originally extracted from Docker and is now a CNCF graduated project:

# Check which runtime the kubelet is using
kubectl get node worker-1 -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
# containerd://1.7.15

# Interact with containerd directly using crictl
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
crictl images
crictl pods

# Check containerd configuration
cat /etc/containerd/config.toml
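
The part of that file most relevant to Kubernetes is the CRI plugin section, which maps handler names to OCI runtimes. An abbreviated excerpt (version 2 schema; exact keys vary between containerd releases):

```toml
# /etc/containerd/config.toml (abbreviated)
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "runc"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```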

CRI-O

CRI-O is a lightweight runtime built specifically for Kubernetes. It implements only what CRI requires:

# On a CRI-O node
kubectl get node worker-1 -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
# cri-o://1.30.0

# CRI-O configuration
cat /etc/crio/crio.conf

crictl: The CRI Debugging Tool

crictl is the standard CLI for interacting with any CRI-compliant runtime. It is indispensable for node-level debugging:

# List running pods
crictl pods

# List running containers
crictl ps

# List all containers including stopped ones
crictl ps -a

# Inspect a specific container
crictl inspect <container-id>

# View container logs
crictl logs <container-id>

# Pull an image
crictl pull nginx:1.27

# List images
crictl images

# Execute a command in a running container
crictl exec -it <container-id> /bin/sh

# Get runtime info
crictl info
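
Rather than passing --runtime-endpoint on every invocation, crictl can read its defaults from a config file:

```yaml
# /etc/crictl.yaml -- default endpoints for crictl
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
```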

Configuring the kubelet for CRI

The kubelet is configured to use a specific CRI endpoint:

# In /var/lib/kubelet/config.yaml
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

# Or for CRI-O:
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock

Runtime Classes

Kubernetes supports multiple runtimes on the same cluster through RuntimeClass, allowing workloads to select different runtime sandboxes:

# Define a RuntimeClass for gVisor (runsc)
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
scheduling:
  nodeSelector:
    runtime: gvisor

---
# Use the RuntimeClass in a pod
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx:1.27

This is useful for running untrusted workloads in stronger isolation (gVisor, Kata Containers) while keeping standard workloads on runc for performance.
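
The handler value must match a runtime registered with the CRI runtime on the node. With containerd and gVisor, for example, that registration looks roughly like this (assuming gVisor's containerd shim is installed on the node):

```toml
# /etc/containerd/config.toml -- map the "runsc" handler to gVisor's shim
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
  runtime_type = "io.containerd.runsc.v1"
```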

Post-Docker Transition

Kubernetes removed built-in Docker support (the dockershim) in version 1.24. This did not break container images: images built with Docker are OCI-compliant and run on any CRI runtime. The transition primarily affected:

  • Node setup and troubleshooting workflows (replacing docker ps with crictl ps)
  • Container log paths and socket paths
  • Build pipelines that relied on Docker-in-Docker patterns

Most managed Kubernetes services completed this migration transparently for their users.

Why Interviewers Ask This

This question tests whether a candidate understands the abstraction layer between Kubernetes and the container runtime. It reveals awareness of the post-Docker ecosystem, runtime choices, and the architectural decision to keep Kubernetes runtime-agnostic.

Common Follow-Up Questions

Why was Docker support removed from Kubernetes?
Docker Engine never implemented CRI natively. Kubernetes maintained a shim called dockershim that translated CRI calls into Docker API calls, which added complexity and maintenance burden. Since Docker itself runs on containerd underneath, most clusters simply switched to using containerd directly.
What is the difference between containerd and CRI-O?
Both are CRI-compliant runtimes. containerd is a CNCF graduated project with broader scope (used by Docker too). CRI-O is purpose-built specifically for Kubernetes, implementing only what CRI requires, resulting in a smaller footprint.
What is a low-level vs high-level container runtime?
High-level runtimes (containerd, CRI-O) handle image management and container supervision and expose the CRI. Low-level OCI runtimes create the container itself: runc and crun use Linux kernel primitives (namespaces, cgroups), while sandboxed runtimes such as Kata Containers and gVisor trade some performance for stronger isolation (a lightweight VM or a user-space kernel, respectively).

Key Takeaways

  • CRI decouples Kubernetes from any specific container runtime through a standardized gRPC interface
  • containerd and CRI-O are the two production-grade CRI implementations
  • The CRI spec covers two services: RuntimeService for container lifecycle and ImageService for image operations