
Role-Based Access Control (RBAC) is an efficient and secure way to implement authorization in Kubernetes. Other authorization modes such as Node Authorization and Attribute-Based Access Control (ABAC) exist, but RBAC is the most widely used, and many prominent Helm charts prefer it.
In this blog, you can follow along to create three namespaces in addition to the 'default' namespace and allow each namespace's default service account to read the pods of any of these namespaces from any pod in the same cluster. Instead of a service account, a user or group can also be bound to the roles created for each namespace.
Prerequisites:
- Linux or Windows host as your playground/lab
- 8 GB RAM & 20 GB HD
- Kubernetes cluster (Minikube) with kubectl and Helm
- Docker, container images and VirtualBox
- Your precious time to give it a try!
Steps:
- Create Namespaces
- Create Pods in the respective Namespaces
- Create Roles
- Create RoleBindings
- Add Service Accounts to have read access across all namespaces
- Try and Test until it works!
Let’s kick off implementing RBAC in Kubernetes.
1. Create Namespaces
Before creating namespaces, install kubectl and Minikube, start a single-node Minikube cluster, and then install Helm. Please click the respective links and follow the steps to install each tool before proceeding further. A typical start command is sketched below; once everything is set up, check the Minikube status.
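(A minimal sketch; the VirtualBox driver flag is only an assumption based on the prerequisites above, so adjust it to your hypervisor.)
# Start a single-node Minikube cluster
minikube start --vm-driver=virtualbox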
debian:ben# minikube status
⚠️ minikube 1.3.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.3.1
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 10.0.2.15
debian:ben# kubectl create namespace alpha
debian:ben# kubectl create namespace beta
debian:ben# kubectl create namespace cuda
debian:ben# kubectl get namespace
NAME STATUS AGE
alpha Active 15d
backup Active 3d
beta Active 15d
cuda Active 15d
default Active 16d
kube-node-lease Active 15d
kube-public Active 16d
kube-system Active 16d
debian:ben#
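If you prefer the declarative route, the same namespaces can also be created from a manifest; a minimal sketch (the file name namespaces.yml is just an example, apply it with kubectl apply -f namespaces.yml):
apiVersion: v1
kind: Namespace
metadata:
  name: alpha
---
apiVersion: v1
kind: Namespace
metadata:
  name: beta
---
apiVersion: v1
kind: Namespace
metadata:
  name: cuda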
2. Create Pods in the respective Namespaces
I’ve used an Alpine Linux container image available in my Docker Hub repo. In addition, a PostgreSQL pod is added to each of the namespaces created above using Helm charts. The manifest file alpine_pod.yml and the commands are as follows:
apiVersion: v1
kind: Pod
metadata:
  name: alpine-client1
spec:
  containers:
  - name: alpine-client1
    image: babvin/alpine_client
    imagePullPolicy: IfNotPresent
# Create pods in Default namespace
kubectl apply -f alpine_pod.yml
helm install stable/postgresql
# Create pods in Alpha namespace
kubectl apply -f alpine_pod.yml -n alpha
helm install stable/postgresql --namespace alpha
# Create pods in Beta namespace
kubectl apply -f alpine_pod.yml -n beta
helm install stable/postgresql --namespace beta
# Create pods in Cuda namespace
kubectl apply -f alpine_pod.yml -n cuda
helm install stable/postgresql --namespace cuda
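The helm install commands above let Helm 2 pick random release names (which is where a pod like eating-clam-postgresql-0 comes from). If you want predictable pod names such as alpha-postgresql-0, each release can be named explicitly with the Helm 2 --name flag; a sketch:
# Name each release after its namespace so the pods become <namespace>-postgresql-0
helm install --name alpha stable/postgresql --namespace alpha
helm install --name beta stable/postgresql --namespace beta
helm install --name cuda stable/postgresql --namespace cuda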
# List pods created using above commands
debian:alpine# kubectl get pods -A -o name
pod/alpha-postgresql-0
pod/alpine-client1
pod/beta-postgresql-0
pod/alpine-client1
pod/cuda-postgresql-0
pod/alpine-client
pod/alpine-client1
pod/batch-every-twelve-hours-1567083420-2dwbm
pod/batch-every-twelve-hours-1567083480-b48jt
pod/batch-every-twelve-hours-1567083540-x7vfn
pod/eating-clam-postgresql-0
pod/coredns-5c98db65d4-dsh54
pod/coredns-5c98db65d4-zn4c4
pod/etcd-minikube
pod/heapster-l6p59
pod/influxdb-grafana-m9vvx
pod/kube-addon-manager-minikube
pod/kube-apiserver-minikube
pod/kube-controller-manager-minikube
pod/kube-proxy-snvc7
pod/kube-scheduler-minikube
pod/kubernetes-dashboard-7b8ddcb5d6-8gz9g
pod/logviewer-8664c4bdcd-w5vh5
pod/storage-provisioner
pod/tiller-deploy-75f6c87b87-6pcnb
debian:alpine#
3. Create Roles
RBAC uses the rbac.authorization.k8s.io API group to drive authorization decisions, allowing admins to dynamically configure policies through the Kubernetes API. RBAC is built around the following five elements to implement security best practices effectively:
- Roles: Define permissions on Kubernetes resources such as pods, services, namespaces, etc., within a single namespace
- RoleBindings: Bind a role to the users, groups and service accounts defined under subjects
- Subjects: The users, groups and service accounts that are bound to roles via RoleBindings
- ClusterRoles: Grant access within a particular namespace or across all namespaces, depending on how they are bound (see the sketch after this list)
- ClusterRoleBindings: Similar to RoleBindings, but grant cluster-wide access to the subjects
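The walkthrough below sticks to Roles and RoleBindings, but for reference here is what the cluster-scoped counterparts look like. This is a minimal read-only sketch, not part of the lab, and the names example-clusterrole and example-clusterrolebinding are just placeholders:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-clusterrole    # cluster-scoped, so no namespace field
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-clusterrolebinding
subjects:
- kind: Group
  name: system:serviceaccounts:alpha    # all service accounts in the alpha namespace
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: example-clusterrole
  apiGroup: rbac.authorization.k8s.io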
Here is the YAML file that creates a role in each of the namespaces we created above. The Kubernetes resources namespaces, pods, services and jobs are granted read-only access (note that jobs actually live in the batch API group, so add "batch" to apiGroups if you need them matched).
debian:alpine# cat example.yml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: example-role
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods", "services", "jobs"]
  verbs: ["get", "watch", "list"]
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: alpha
  name: example-role
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods", "services", "jobs"]
  verbs: ["get", "watch", "list"]
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: beta
  name: example-role
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods", "services", "jobs"]
  verbs: ["get", "watch", "list"]
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: cuda
  name: example-role
rules:
- apiGroups: [""]
  resources: ["namespaces", "pods", "services", "jobs"]
  verbs: ["get", "watch", "list"]
debian:alpine#
4. Create RoleBindings
Use the YAML file below to create RoleBindings for the above roles. The key part to observe is that each binding's subjects section lists the per-namespace service account groups (system:serviceaccounts:&lt;namespace&gt;) of all four namespaces, which is what allows the criss-cross access between namespaces. If there is a better way to do this, please let me know…
debian:alpine# cat ex.binding.yml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-rolebinding
  namespace: alpha
subjects:
- kind: Group
  name: system:serviceaccounts:alpha
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:beta
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:cuda
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:default
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-rolebinding
  namespace: beta
subjects:
- kind: Group
  name: system:serviceaccounts:beta
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:alpha
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:cuda
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:default
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-rolebinding
  namespace: cuda
subjects:
- kind: Group
  name: system:serviceaccounts:beta
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:alpha
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:cuda
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:default
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: example-rolebinding
  namespace: default
subjects:
- kind: Group
  name: system:serviceaccounts:beta
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:alpha
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:cuda
  apiGroup: rbac.authorization.k8s.io
- kind: Group
  name: system:serviceaccounts:default
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: example-role
  apiGroup: rbac.authorization.k8s.io
debian:alpine#
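As mentioned at the start, users or groups can be bound instead of (or alongside) service accounts. A subject entry for a user would look like the snippet below; the user name jane is purely hypothetical and would have to exist in your authentication setup:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io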
5. Add Service Accounts to have read access across all namespaces
# Create roles
kubectl apply -f example.yml
# Create rolebindings
kubectl apply -f ex.binding.yml
# Output of the role and rolebinding creation
debian:alpine# kubectl get roles -A
NAMESPACE NAME AGE
alpha example-role 3h31m
beta example-role 148m
cuda example-role 148m
default example-role 11h
default pod-reader 11h
kube-public kubeadm:bootstrap-signer-clusterinfo 16d
kube-public system:controller:bootstrap-signer 16d
kube-system extension-apiserver-authentication-reader 16d
kube-system kube-proxy 16d
kube-system kubeadm:kubelet-config-1.15 16d
kube-system kubeadm:nodes-kubeadm-config 16d
kube-system system::leader-locking-kube-controller-manager 16d
kube-system system::leader-locking-kube-scheduler 16d
kube-system system:controller:bootstrap-signer 16d
kube-system system:controller:cloud-provider 16d
kube-system system:controller:token-cleaner 16d
debian:alpine# kubectl get rolebindings -A
NAMESPACE NAME AGE
alpha example-rolebinding 3h30m
beta example-rolebinding 159m
cuda example-rolebinding 159m
default example-rolebinding 11h
default test-foo 11h
kube-public kubeadm:bootstrap-signer-clusterinfo 16d
kube-public system:controller:bootstrap-signer 16d
kube-system kube-proxy 16d
kube-system kubeadm:kubelet-config-1.15 16d
kube-system kubeadm:nodes-kubeadm-config 16d
kube-system system::extension-apiserver-authentication-reader 16d
kube-system system::leader-locking-kube-controller-manager 16d
kube-system system::leader-locking-kube-scheduler 16d
kube-system system:controller:bootstrap-signer 16d
kube-system system:controller:cloud-provider 16d
kube-system system:controller:token-cleaner 16d
debian:alpine#
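Before jumping into a pod, you can sanity-check the bindings from the workstation with kubectl auth can-i, impersonating a namespace's default service account (service accounts follow the system:serviceaccount:&lt;namespace&gt;:&lt;name&gt; naming convention):
# Should print "yes": the cuda default service account may list pods in the default namespace
kubectl auth can-i list pods --namespace default --as system:serviceaccount:cuda:default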
6. Try and Test until it works!
One way to confirm whether the service accounts can read pods in the different namespaces is to curl the Kubernetes API from inside a pod. Here is the curl command. The bearer token, which is auto-mounted into the pod at creation time, is what authenticates the request. Please note the command below is a single line; execute it without line breaks.
curl -ik -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
# Output:
kubectl -n cuda exec -it alpine-client1 -- bash
bash-5.0# printenv
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_PORT=443
CUDA_POSTGRESQL_PORT_5432_TCP_PROTO=tcp
HOSTNAME=alpine-client1
CUDA_POSTGRESQL_PORT_5432_TCP_PORT=5432
PWD=/client
CUDA_POSTGRESQL_PORT_5432_TCP_ADDR=10.109.178.227
HOME=/root
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
CUDA_POSTGRESQL_SERVICE_HOST=10.109.178.227
CUDA_POSTGRESQL_PORT=tcp://10.109.178.227:5432
CUDA_POSTGRESQL_PORT_5432_TCP=tcp://10.109.178.227:5432
CUDA_POSTGRESQL_SERVICE_PORT_POSTGRESQL=5432
TERM=xterm
SHLVL=1
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
CUDA_POSTGRESQL_SERVICE_PORT=5432
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
_=/bin/printenv
bash-5.0# curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc.cluster.local/api/v1/namespaces/default/pods
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces/default/pods",
    "resourceVersion": "344123"
  },
  "items": [
    {
      "metadata": {
        "name": "alpine-client",
        "namespace": "default",
        "selfLink": "/api/v1/namespaces/default/pods/alpine-client",
        "uid": "a4b3ffe0-06a8-4344-b284-bfea61b57278",
        "resourceVersion": "296784",
        "creationTimestamp": "2019-08-29T07:35:49Z",
        "annotations": {
          "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"name\":\"alpine-client\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"babvin/alpine_client\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"alpine-client\"}]}}\n"
        }
      },
      "spec": {
        "volumes": [
          {
            "name": "default-token-sctd7",
            "secret": {
              "secretName": "default-token-sctd7",
              "defaultMode": 420
            }
          }
        ],
        "containers": [
          {
            "name": "alpine-client",
            "image": "babvin/alpine_client",
            "resources": {
            },
::: Output truncated :::
In the output above, I’ve logged into the Alpine pod in the cuda namespace, but the curl request targeted the ‘default’ namespace, and the JSON response lists all the details of the default namespace’s pods.
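A quick negative test confirms it is really the RoleBinding doing the work: curling a namespace we did not bind, for example kube-system, from the same pod should return a 403 Forbidden instead of a pod list.
# Expect 403 Forbidden: no RoleBinding grants this service account access to kube-system
curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc.cluster.local/api/v1/namespaces/kube-system/pods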
Hurray! We now have a fully working, RBAC-controlled Kubernetes lab with multiple namespaces. Feel free to try adding users and groups, and let me know if you get stuck…
Thanks for stopping by my blog. Your feedback is most important to me, so please share it by commenting below. You can also follow me on Twitter (@babvin) or email me at vb@vinaybabu.in
References
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/
https://stackoverflow.com/questions/42642170/how-to-run-kubectl-commands-inside-a-container
Image Credits: https://www.innovativesys.com