How Do You Use RBAC to Achieve Namespace Isolation?

advanced | rbac, devops, sre, cloud architect, CKA, CKS
TL;DR

RBAC achieves namespace isolation by granting teams permissions only within their designated namespaces using namespace-scoped RoleBindings, avoiding ClusterRoleBindings, locking down the default ServiceAccount, and preventing cross-namespace resource access. Combined with NetworkPolicies and ResourceQuotas, this creates a multi-tenant boundary.

Detailed Answer

In a multi-tenant Kubernetes cluster, each team or application typically owns one or more namespaces. RBAC is the primary mechanism for ensuring teams can only access their own namespaces. Here is how to design that isolation.

The Isolation Model

The goal is simple: Team Alpha can manage resources in team-alpha-* namespaces but cannot see, modify, or even discover resources in team-beta-* namespaces.

RBAC achieves this because of one fundamental property: without an explicit grant, all access is denied.

Step 1: Create Namespaces per Team

apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha-prod
  labels:
    team: alpha
    environment: production
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha-staging
  labels:
    team: alpha
    environment: staging
---
apiVersion: v1
kind: Namespace
metadata:
  name: team-beta-prod
  labels:
    team: beta
    environment: production

Step 2: Create Namespace-Scoped RBAC

Grant each team permissions only in their namespaces:

# Reusable ClusterRole defining team member permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: namespace-admin
rules:
  - apiGroups: ["", "apps", "batch", "networking.k8s.io"]
    resources: ["pods", "deployments", "services", "configmaps",
                "jobs", "cronjobs", "ingresses", "replicasets",
                "statefulsets", "daemonsets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods/log", "pods/portforward", "pods/exec"]
    verbs: ["get", "create"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get", "list", "create"]
---
# Bind to Team Alpha's production namespace ONLY
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-admin
  namespace: team-alpha-prod
subjects:
  - kind: Group
    name: team-alpha
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-admin
  apiGroup: rbac.authorization.k8s.io
---
# Bind to Team Alpha's staging namespace ONLY
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-admin
  namespace: team-alpha-staging
subjects:
  - kind: Group
    name: team-alpha
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-admin
  apiGroup: rbac.authorization.k8s.io

Team Alpha now has full management within their namespaces but zero visibility into team-beta-prod.
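The same grant can also be expressed without a ClusterRole. The sketch below shows the equivalent per-namespace form; the ClusterRole + RoleBinding pattern above is usually preferable because the rules are defined once and reused, but a plain Role is the right tool when permissions genuinely differ between namespaces. The rules here are abbreviated for brevity:

```yaml
# Equivalent namespaced form: the rules live inside the namespace itself.
# Rules abbreviated; a full Role would mirror the namespace-admin ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: namespace-admin
  namespace: team-alpha-prod
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-admin
  namespace: team-alpha-prod
subjects:
  - kind: Group
    name: team-alpha
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role                # references the namespaced Role above
  name: namespace-admin
  apiGroup: rbac.authorization.k8s.io
```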

Step 3: Lock Down the Default ServiceAccount

In every namespace, disable automounting of the default ServiceAccount's token so Pods don't receive API credentials they never asked for:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: team-alpha-prod
automountServiceAccountToken: false
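Workloads that genuinely need API access should then use a dedicated ServiceAccount and opt back in explicitly at the Pod level, where the setting overrides the ServiceAccount-level default. A sketch, with hypothetical names (deploy-bot and the image):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deploy-bot                    # hypothetical ServiceAccount
  namespace: team-alpha-prod
automountServiceAccountToken: false   # keep the SA-level default off
---
apiVersion: v1
kind: Pod
metadata:
  name: deploy-bot-pod
  namespace: team-alpha-prod
spec:
  serviceAccountName: deploy-bot
  automountServiceAccountToken: true  # Pod-level setting overrides the SA
  containers:
    - name: main
      image: registry.example.com/deploy-bot:latest  # hypothetical image
```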

Step 4: Restrict Namespace Creation

Teams should not be able to create namespaces (which would let them bypass isolation):

# Verify: can team-alpha members create namespaces?
kubectl auth can-i create namespaces \
  --as=alice --as-group=team-alpha
# no (good — only cluster admins should create namespaces)
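Deny-by-default already enforces this: simply never grant create on namespaces to team groups. If teams do need read access to their own Namespace objects (for tooling, for example), one option is a narrowly scoped ClusterRole using resourceNames; note that resourceNames can restrict get but cannot scope list or watch. Names below are illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: team-alpha-namespace-viewer
rules:
  - apiGroups: [""]
    resources: ["namespaces"]
    # get only; resourceNames cannot restrict list/watch
    resourceNames: ["team-alpha-prod", "team-alpha-staging"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: team-alpha-namespace-viewer
subjects:
  - kind: Group
    name: team-alpha
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: team-alpha-namespace-viewer
  apiGroup: rbac.authorization.k8s.io
```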

Step 5: Prevent Cross-Namespace Escalation

Watch for these common escalation vectors:

ClusterRoleBindings that leak access:

# Audit: find ClusterRoleBindings that grant access to non-admin groups
kubectl get clusterrolebindings -o json | \
  jq -r '.items[] |
    .metadata.name as $crb |
    .subjects[]? |
    select(.kind == "Group" and (.name | startswith("system:") | not)) |
    $crb + ": " + .kind + "/" + .name'

ServiceAccounts with cross-namespace bindings:

# Find ServiceAccounts bound in namespaces they don't belong to
kubectl get rolebindings --all-namespaces -o json | \
  jq -r '.items[] |
    .metadata.namespace as $ns |
    (.subjects[]? | select(.kind == "ServiceAccount" and .namespace != $ns)) |
    "SA \(.namespace)/\(.name) bound in namespace \($ns)"'

Complete Namespace Provisioning Template

Automate namespace creation to ensure consistent isolation. Here is a template that sets up RBAC, quotas, and policies together:

# namespace.yaml — apply for each new team namespace
apiVersion: v1
kind: Namespace
metadata:
  name: ${TEAM}-${ENV}
  labels:
    team: ${TEAM}
    environment: ${ENV}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: ${TEAM}-${ENV}
automountServiceAccountToken: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ${TEAM}-members
  namespace: ${TEAM}-${ENV}
subjects:
  - kind: Group
    name: ${TEAM}
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ${TEAM}-quota
  namespace: ${TEAM}-${ENV}
spec:
  hard:
    requests.cpu: "10"
    requests.memory: "20Gi"
    limits.cpu: "20"
    limits.memory: "40Gi"
    pods: "50"
    services: "20"
    secrets: "50"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: ${TEAM}-${ENV}
spec:
  limits:
    - default:
        cpu: "500m"
        memory: "512Mi"
      defaultRequest:
        cpu: "100m"
        memory: "128Mi"
      type: Container

RBAC Is Only One Layer

RBAC controls who can interact with the Kubernetes API. For full namespace isolation, you need additional layers:

| Control | Purpose |
|---|---|
| RBAC | API authorization — who can manage what |
| NetworkPolicy | Network isolation — which Pods can talk to which |
| ResourceQuota | Resource limits — prevent one team from consuming all capacity |
| LimitRange | Default limits — ensure every container has resource bounds |
| PodSecurity | Runtime constraints — prevent privileged Pods, host mounts, etc. |
| OPA/Kyverno | Custom policies — enforce naming, labels, image sources, etc. |

# Example NetworkPolicy for namespace isolation
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-namespace
  namespace: team-alpha-prod
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}  # Only allow traffic from same namespace
  egress:
    - to:
        - podSelector: {}  # Only allow traffic to same namespace
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53  # Allow DNS
        - protocol: TCP
          port: 53  # DNS also uses TCP for large responses

Verification

After setting up namespace isolation, validate from each team's perspective:

# Team Alpha can manage their namespace
kubectl auth can-i create deployments \
  --as=alice --as-group=team-alpha \
  -n team-alpha-prod
# yes

# Team Alpha cannot access Team Beta's namespace
kubectl auth can-i get pods \
  --as=alice --as-group=team-alpha \
  -n team-beta-prod
# no

# Team Alpha cannot create namespaces
kubectl auth can-i create namespaces \
  --as=alice --as-group=team-alpha
# no

# Team Alpha cannot list all namespaces (optional strictness)
kubectl auth can-i list namespaces \
  --as=alice --as-group=team-alpha
# no

Why Interviewers Ask This

Multi-tenancy is a core challenge in Kubernetes. Interviewers ask this to evaluate whether you can design a secure isolation strategy using RBAC as the authorization layer alongside other controls.

Common Follow-Up Questions

Is RBAC alone sufficient for namespace isolation?
No. RBAC controls API access but not network traffic or resource consumption. You also need NetworkPolicies for network isolation and ResourceQuotas/LimitRanges for resource boundaries.
How do you prevent a team from reading Secrets in another namespace?
Only grant Secrets access via namespace-scoped RoleBindings. Never create ClusterRoleBindings that grant Secrets access. The team will have zero visibility into other namespaces by default.
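Where even in-namespace Secrets access should be narrower, resourceNames can pin a Role to specific Secrets. A sketch with hypothetical names:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: registry-creds-reader         # hypothetical
  namespace: team-alpha-prod
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["registry-creds"] # hypothetical Secret name
    verbs: ["get"]
```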
Can a Pod in one namespace communicate with the API to access resources in another namespace?
Only if its ServiceAccount has a ClusterRoleBinding or a RoleBinding in the target namespace. Without explicit cross-namespace bindings, the API server will deny the request.
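This is exactly the pattern the Step 5 audit looks for. A cross-namespace binding looks like the sketch below (names are hypothetical; view is the built-in read-only ClusterRole):

```yaml
# A RoleBinding in team-beta-prod whose subject is a ServiceAccount from
# team-alpha-prod: this grants Team Alpha's workload read access inside
# Team Beta's namespace, so such bindings must be deliberate and audited.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cross-namespace-grant         # illustrative
  namespace: team-beta-prod
subjects:
  - kind: ServiceAccount
    name: pipeline-sa                 # hypothetical SA in Team Alpha
    namespace: team-alpha-prod
roleRef:
  kind: ClusterRole
  name: view                          # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```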

Key Takeaways

  • Namespace-scoped RoleBindings are the foundation of RBAC-based isolation.
  • RBAC must be combined with NetworkPolicies and ResourceQuotas for complete isolation.
  • Avoid ClusterRoleBindings unless the subject genuinely needs cluster-wide access.
  • Automate namespace provisioning to ensure consistent RBAC setup.

Related Questions