Existing Cluster Install: Using Custom RBAC roles

A question from this week about using custom / bring-your-own (BYO) RBAC roles when installing App Manager (KOTS) into an existing Kubernetes cluster, specifically OpenShift in this case.

Our OpenShift cluster has a built-in role named admin that we’d like to use instead of the role created by kubectl kots install --use-minimal-rbac or oc kots install --use-minimal-rbac. Because the admin role’s permissions are slightly narrower than the KOTS-created RBAC role, the installation fails when a user with the admin role attempts it. Is there a way to have the App Manager workloads use this admin role instead of creating a new one with " * * * " permissions on the namespace / project?

Solution

Credit goes to @bco for this solution. Thanks Barry!

Disclaimer - This is an advanced topic, and a deep dive into the nuances of Kubernetes RBAC is beyond the scope of this post. The viability of this workaround depends on the admin role in this example having permissions equivalent, or very nearly equivalent, to " * * * " on every object in the target namespace, similar to the default role created when --use-minimal-rbac is used. If the target role has insufficient permissions, the result may be a completely broken installation. App Manager needs these permissions to create, update, and destroy all managed objects in the target namespace.

Manually Creating RBAC objects

We’ll pre-create the RBAC objects (a ServiceAccount and a RoleBinding), then use --use-minimal-rbac along with --skip-rbac-check and --ensure-rbac=false to perform the install.

# rbac.yaml - change namespace and roleRef as desired
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    kots.io/kotsadm: "true"
  name: kotsadm
  namespace: my-app
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    kots.io/kotsadm: "true"
  name: kotsadm-rolebinding
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: kotsadm
  namespace: my-app
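Before installing, it can be worth confirming that the binding actually grants what KOTS needs. One way is to impersonate the new ServiceAccount with kubectl auth can-i. This is just a sketch — the namespace my-app matches the example above, and the spot-checked resources are illustrative, not an exhaustive list of what KOTS requires:

```shell
# Impersonate the kotsadm ServiceAccount and spot-check a few permissions
# the installer relies on. Any "no" here means the bound role (admin in
# this example) is missing a permission KOTS needs.
NS=my-app
SA="system:serviceaccount:${NS}:kotsadm"

kubectl auth can-i create deployments -n "${NS}" --as="${SA}"
kubectl auth can-i create secrets     -n "${NS}" --as="${SA}"
kubectl auth can-i create pods/exec   -n "${NS}" --as="${SA}"

# Or list everything the ServiceAccount is allowed to do in the namespace:
kubectl auth can-i --list -n "${NS}" --as="${SA}"
```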

Once these are created, run the kubectl kots install or oc kots install command with the appropriate flags. The example below is for an installation into an air-gapped OpenShift cluster using a private registry at private.registry.host.

oc kots install my-app \
  --namespace my-app \
  --license-file ./license.yaml \
  --airgap-bundle /path/to/application.airgap \
  --kotsadm-namespace my-app \
  --kotsadm-registry private.registry.host \
  --registry-username rw-username \
  --registry-password rw-password \
  --use-minimal-rbac \
  --skip-rbac-check \
  --ensure-rbac=false
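After the install completes, you can verify that the Admin Console workloads are actually running under the pre-created ServiceAccount rather than a freshly created one. The deployment name kotsadm below is an assumption based on what KOTS normally creates; check your namespace if it differs:

```shell
# Confirm the kotsadm deployment uses the pre-created ServiceAccount
# (should print "kotsadm")
oc -n my-app get deployment kotsadm \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'

# Confirm the installer did not create its own Role in the namespace
oc -n my-app get roles
```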

Supporting question/answer

Q: What if I can neither use the admin role nor grant * * * permissions on the target namespace? Are there custom RBAC roles with specific permissions that can be used?

A: Here are the RBAC objects with the specific required permissions (usage instructions below):

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    kots.io/backup: velero
    kots.io/kotsadm: "true"
  name: kotsadm
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    kots.io/backup: velero
    kots.io/kotsadm: "true"
  name: kotsadm-role
rules:
  - apiGroups: [""]
    resources: ["configmaps", "persistentvolumeclaims", "pods", "secrets", "services", "limitranges"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["daemonsets", "deployments", "statefulsets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["batch"]
    resources: ["jobs", "cronjobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["networking.k8s.io", "extensions"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["namespaces", "endpoints", "serviceaccounts"]
    verbs: ["get"]
  - apiGroups: ["authorization.k8s.io"]
    resources: ["selfsubjectaccessreviews", "selfsubjectrulesreviews"]
    verbs: ["create"]
  - apiGroups: ["rbac.authorization.k8s.io"]
    resources: ["roles", "rolebindings"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["pods/log", "pods/exec"]
    verbs: ["get", "list", "watch", "create"]
  - apiGroups: ["batch"]
    resources: ["jobs/status"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    kots.io/backup: velero
    kots.io/kotsadm: "true"
  name: kotsadm-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kotsadm-role
subjects:
- kind: ServiceAccount
  name: kotsadm
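If you want to double-check a custom role like this against a live cluster before installing, you can iterate over resources and verbs with kubectl auth can-i while impersonating the ServiceAccount. A sketch, assuming the role above has been applied to namespace my-app; adjust the namespace and trim the resource list to match your role:

```shell
# Spot-check the kotsadm-role grants by impersonating the ServiceAccount.
NS=my-app            # substitute your target namespace
SA="system:serviceaccount:${NS}:kotsadm"

for resource in configmaps secrets services deployments statefulsets jobs; do
  for verb in get list watch create update patch delete; do
    result=$(kubectl auth can-i "${verb}" "${resource}" -n "${NS}" --as="${SA}")
    echo "${verb} ${resource}: ${result}"
  done
done
```

Any "no" in the output points at a rule that needs to be added before KOTS will work in that namespace.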

Now, create those objects manually by saving that content to a file (say rbac.yaml) and running:

kubectl apply -f rbac.yaml -n <namespace>

After that, you’ll need to pass the --ensure-rbac=false and --skip-rbac-check flags to the install command so that KOTS doesn’t try to re-create them, like so:

kubectl kots install <app-slug> -n <namespace> --ensure-rbac=false --skip-rbac-check ...

You will also need to pass those flags to the upgrade command when upgrading KOTS later, like so:

kubectl kots admin-console upgrade -n <namespace> --ensure-rbac=false --skip-rbac-check ...

Hope that helps.
