Introduction
I spin up clusters with kubeadm. Once a cluster is up and running, it can be enticing to just copy the /etc/kubernetes/admin.conf credentials down to your local ~/.kube/config file and crack on. However, as you can guess, that’s not best practice, especially when working in teams.
In order for audit logs to contain more specific details about which user performed which actions, and for general access control purposes, it helps a lot if users authenticate to clusters using their own credentials, not using shared credentials.
In this post, I will cover how to configure the cluster to authenticate users with TLS client certificates. It is assumed that you have openssl available locally, and administrative access to your K8S cluster.
NOTE: If you have an OIDC service available (such as Keycloak), it may be more appropriate to configure access via that instead.
Kubernetes auth
To authenticate as an administrator to a K8S cluster, you generate a CSR (certificate signing request) and have it signed by the cluster’s certificate authority via the Kubernetes CSR API. The resulting client certificate allows you to authenticate to the cluster through the Kubernetes API (i.e. with kubectl and other client tools).
For more info, see the upstream docs.
Create a user certificate
In summary, to create a key and CSR (the CN field becomes the Kubernetes username, and the O field becomes a group membership)…
openssl genrsa -out rossg.key 2048
openssl req -new -key rossg.key -out rossg.csr -subj "/CN=rossg/O=admin"
cat >rossg-csr.yaml <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: rossg
spec:
  request: $(cat rossg.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
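Before submitting the CSR, it is worth sanity-checking the subject you encoded, since CN and O directly determine the username and group the cluster will see. A quick sketch (it regenerates the key and CSR from the steps above so it runs standalone):

```shell
# Re-create the key and CSR exactly as in the earlier steps
openssl genrsa -out rossg.key 2048
openssl req -new -key rossg.key -out rossg.csr -subj "/CN=rossg/O=admin"

# Verify the request's self-signature and print its subject,
# which should show CN rossg and O admin
openssl req -in rossg.csr -noout -verify -subject
```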
Get user certificate signed by cluster
Copy that YAML file up to your home folder on the cluster’s master node, and apply it as the admin user…
ssh your-cluster-master-node-1
...
sudo -s
...
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f rossg-csr.yaml
kubectl certificate approve rossg
...
Add the user to any roles/groups as applicable; note that the --group=admin below matches the O field from the CSR subject.
kubectl create clusterrolebinding your-cluster-admin \
--clusterrole=cluster-admin \
--group=admin
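If you prefer to keep RBAC under version control, the imperative command above should correspond to a declarative manifest along these lines (names taken from the example above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: your-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: admin
```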
Prepare the signed certificate and cluster CA certificate for download…
kubectl get csr rossg -o jsonpath='{.status.certificate}' | base64 -d > rossg.crt
kubectl -n default get configmap kube-root-ca.crt -o jsonpath="{.data['ca\.crt']}" > k8s-api.crt
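It can also be worth confirming that the certificate the cluster returned actually pairs with your private key (i.e. the right CSR was signed). One way is to compare public-key digests. The sketch below self-signs a stand-in certificate so it runs standalone; against a real cluster, skip the first two commands and use the rossg.crt you just extracted:

```shell
# Stand-in key and self-signed cert so this snippet is self-contained;
# in practice, use your existing rossg.key and the cluster-signed rossg.crt
openssl genrsa -out rossg.key 2048
openssl req -new -x509 -key rossg.key -out rossg.crt -subj "/CN=rossg/O=admin" -days 1

# The two digests should be identical if cert and key belong together
openssl x509 -in rossg.crt -noout -pubkey | openssl sha256
openssl pkey -in rossg.key -pubout | openssl sha256
```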
Configure signed certificate into local client configuration
Next, retrieve the signed certificate and the cluster CA certificate to your laptop and use them to configure a profile in your local user’s ~/.kube/config (or Windows equivalent).
scp your-cluster-node-1:rossg.crt .
scp your-cluster-node-1:k8s-api.crt .
Create a user in the local K8S client configuration file. If you are working with many clusters, it’s a good idea to use a unique username per cluster, as each cluster will have issued a different client certificate.
kubectl config set users.rossg-your-cluster.client-certificate-data \
"$(cat rossg.crt | base64 -w0)"
kubectl config set users.rossg-your-cluster.client-key-data \
"$(cat rossg.key | base64 -w0)"
Now create a cluster profile, and select it as the current context…
kubectl config set-context your-cluster \
--cluster=your-cluster \
--user=rossg-your-cluster
kubectl config use-context your-cluster
At this point we need to set the API server endpoint details for the newly created cluster profile, otherwise it will assume 127.0.0.1.
kubectl config set-cluster your-cluster \
  --server=https://10.96.4.1:6443
kubectl config set clusters.your-cluster.certificate-authority-data \
"$(cat k8s-api.crt | base64 -w0)"
Test it with…
kubectl cluster-info
kubectl get nodes
kubectl get pods --all-namespaces
Keep those creds safe!!!