If you are the administrator of an ACK (Alibaba Cloud Container Service for Kubernetes) cluster, you probably need to create and authorize RAM sub-accounts for ordinary developer roles on a regular basis. When multiple developers need the same ACK cluster permissions, creating and authorizing a sub-account for each one of them quickly becomes repetitive and tedious.
Based on the ack-ram-authenticator project, this article demonstrates how to configure an ACK cluster to authenticate users with a RAM Role.
0. Step Overview
(1) In the RAM console, create the sub-accounts kubernetes-dev, dev01, dev02, ..., devN and the RAM Role KubernetesDev, and grant permissions to the sub-account kubernetes-dev
(2) Deploy and run the ack-ram-authenticator server in the ACK cluster
(3) Configure the ACK cluster API server to use the ack-ram-authenticator server
(4) Configure kubectl to use the authentication token provided by ack-ram-authenticator
1. Create Sub-Accounts and a RAM Role in the RAM Console
1.1 Create the sub-accounts kubernetes-dev, dev01, dev02, ..., devN

Grant the AliyunSTSAssumeRoleAccess policy to each of dev01, dev02, ..., devN:
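If you prefer least privilege over the broad system policy, a custom RAM policy scoped to just the one role can be attached instead. A minimal sketch, assuming the standard RAM policy document format (fill in the account uid placeholder):

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "acs:ram::<your main account uid>:role/KubernetesDev"
    }
  ]
}
```

With this policy, the developer sub-accounts can assume only KubernetesDev rather than any role in the account.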
1.2 Grant developer permissions to the sub-account kubernetes-dev in the ACK cluster
Follow the prompts to complete the authorization:
1.3 Create the RAM Role KubernetesDev
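The role's trust policy must allow identities in your main account to assume it. A minimal sketch, assuming the standard RAM trust-policy format (replace the account uid placeholder):

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "RAM": [
          "acs:ram::<your main account uid>:root"
        ]
      }
    }
  ]
}
```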
2. Deploy and Run the ack-ram-authenticator Server
$ git clone https://github.com/haoshuwei/ack-ram-authenticator
Based on the example.yaml file in the project, the ConfigMap configuration is:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: kube-system
  name: ack-ram-authenticator
  labels:
    k8s-app: ack-ram-authenticator
data:
  config.yaml: |
    # a unique-per-cluster identifier to prevent replay attacks
    # (good choices are a random token or a domain name that will be unique to your cluster)
    clusterID: <your cluster id>
    server:
      # each mapRoles entry maps a RAM role to a username and set of groups
      # Each username and group can optionally contain template parameters:
      # 1) "{{AccountID}}" is the 16 digit RAM ID.
      # 2) "{{SessionName}}" is the role session name.
      mapRoles:
      # statically map acs:ram::<your main account uid>:role/KubernetesDev to the kubernetes-dev sub-account
      - roleARN: acs:ram::<your main account uid>:role/KubernetesDev
        username: 2377xxxx # <your subaccount kubernetes-dev uid>
$ kubectl apply -f ack-ram-authenticator-cm.yaml
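The "{{SessionName}}" template parameter mentioned in the ConfigMap comments can give each developer a distinct username while still sharing a single role. A hedged sketch, assuming the `username`/`groups` schema matches the upstream aws-iam-authenticator project this tool is derived from:

```yaml
mapRoles:
- roleARN: acs:ram::<your main account uid>:role/KubernetesDev
  # every assumer of the role gets a username derived from their role session name
  username: dev:{{SessionName}}
  groups:
  - kubernetes-dev-group
```

Distinct usernames make audit logs attributable to individual developers even though they all assume the same role.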
Deploy the DaemonSet:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: ack-ram-authenticator
  labels:
    k8s-app: ack-ram-authenticator
spec:
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      labels:
        k8s-app: ack-ram-authenticator
    spec:
      # run on the host network (don't depend on CNI)
      hostNetwork: true
      # run on each master node
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - key: CriticalAddonsOnly
        operator: Exists
      containers:
      - name: ack-ram-authenticator
        image: registry.cn-hangzhou.aliyuncs.com/acs/ack-ram-authenticator:v1.0.1
        imagePullPolicy: Always
        args:
        - server
        - --config=/etc/ack-ram-authenticator/config.yaml
        - --state-dir=/var/ack-ram-authenticator
        - --generate-kubeconfig=/etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
          limits:
            memory: 20Mi
            cpu: 100m
        volumeMounts:
        - name: config
          mountPath: /etc/ack-ram-authenticator/
        - name: state
          mountPath: /var/ack-ram-authenticator/
        - name: output
          mountPath: /etc/kubernetes/ack-ram-authenticator/
      volumes:
      - name: config
        configMap:
          name: ack-ram-authenticator
      - name: output
        hostPath:
          path: /etc/kubernetes/ack-ram-authenticator/
      - name: state
        hostPath:
          path: /var/ack-ram-authenticator/
$ kubectl apply -f ack-ram-authenticator-ds.yaml
Check that ack-ram-authenticator is running properly on the 3 master nodes of the ACK cluster:
$ kubectl -n kube-system get po|grep ram
ack-ram-authenticator-7m92f 1/1 Running 0 42s
ack-ram-authenticator-fqhn8 1/1 Running 0 42s
ack-ram-authenticator-xrxbs 1/1 Running 0 42s
3. Configure the ACK Cluster API Server to Use the ack-ram-authenticator Server
The Kubernetes API server integrates with ACK RAM Authenticator through a token authentication webhook. When the ACK RAM Authenticator server runs, it generates a webhook configuration file and stores it on the host filesystem, so we need to point the API server at this file:
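Concretely, for each incoming request the API server POSTs a TokenReview object to the webhook, and the authenticator replies with the authenticated identity. The exchange looks roughly like this (object shape per the Kubernetes authentication.k8s.io webhook API; the field values are illustrative):

```yaml
# request from kube-apiserver to the webhook
apiVersion: authentication.k8s.io/v1beta1
kind: TokenReview
spec:
  token: "<bearer token sent by kubectl>"
---
# response from ack-ram-authenticator
apiVersion: authentication.k8s.io/v1beta1
kind: TokenReview
status:
  authenticated: true
  user:
    username: "2377xxxx"   # the username configured in mapRoles
```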
Modify the API server configuration /etc/kubernetes/manifests/kube-apiserver.yaml on each of the 3 masters, adding the following fields:
spec.containers.command:
  - --authentication-token-webhook-config-file=/etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml
spec.containers.volumeMounts:
  - mountPath: /etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml
    name: ack-ram-authenticator
    readOnly: true
spec.volumes:
  - hostPath:
      path: /etc/kubernetes/ack-ram-authenticator/kubeconfig.yaml
      type: FileOrCreate
    name: ack-ram-authenticator
Restart the kubelet for the changes to take effect:
$ systemctl restart kubelet.service
4. Configure kubectl to Use the Authentication Token Provided by ack-ram-authenticator
Prepare the kubeconfig file for developer-role users by modifying the kubeconfig of the kubernetes-dev sub-account (obtained from the console) as follows:
apiVersion: v1
clusters:
- cluster:
    server: https://xxx:6443
    certificate-authority-data: xxx
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: "2377xxx"
  name: 2377xxx-xxx
current-context: 2377xxx-xxx
kind: Config
preferences: {}
# the users section below is the modified part; its name must match the
# user referenced by the context above
users:
- name: "2377xxx"
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: ack-ram-authenticator
      args:
      - "token"
      - "-i"
      - "<your cluster id>"
      - "-r"
      - "acs:ram::xxxxxx:role/KubernetesDev"
This kubeconfig file can now be shared with developer-role users for download. Before using the shared kubeconfig, each developer must install the ack-ram-authenticator client binary in their own environment:
Download and install the ack-ram-authenticator client binary (download link):
$ go get -u -v github.com/AliyunContainerService/ack-ram-authenticator/cmd/ack-ram-authenticator
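To sanity-check the installation, you can run the plugin by hand: kubectl expects it to print an ExecCredential object on stdout. The shape follows the client.authentication.k8s.io/v1alpha1 credential-plugin API declared in the kubeconfig above; the token value here is a placeholder:

```json
{
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "kind": "ExecCredential",
  "status": {
    "token": "<opaque bearer token>"
  }
}
```

kubectl sends `status.token` as the Bearer token on every API request, which is what the webhook on the API server side then verifies.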
Each of the dev01, dev02, ..., devN sub-accounts uses its own AccessKey, configured in the file ~/.acs/credentials:
{
  "AcsAccessKeyId": "xxxxxx",
  "AcsAccessKeySecret": "xxxxxx"
}
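The token itself is opaque to kubectl, and its exact format is an implementation detail of ack-ram-authenticator. In the upstream aws-iam-authenticator design this project derives from, the client uses its credentials to presign an STS GetCallerIdentity URL and emits it base64url-encoded behind a version prefix. Purely as an illustration of that general pattern (the "k8s-ack-v1." prefix, the URL, and the ClusterId parameter here are hypothetical, not the real ACK wire format):

```python
import base64
from urllib.parse import urlencode

# Hypothetical sketch of the presigned-URL token pattern used by
# aws-iam-authenticator; NOT the actual ack-ram-authenticator format.
def make_token(cluster_id: str, presign_params: dict) -> str:
    # the server side would later call this URL to verify the caller's identity
    url = "https://sts.aliyuncs.com/?" + urlencode(
        # binding the cluster id into the signed URL prevents replaying the
        # token against a different cluster (cf. clusterID in the ConfigMap)
        {**presign_params, "ClusterId": cluster_id}
    )
    # version prefix + unpadded base64url of the URL
    return "k8s-ack-v1." + base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")

print(make_token("c1234", {"Action": "GetCallerIdentity"}))
```

The server decodes the URL, checks the cluster id, calls STS to resolve the caller, and maps the resulting RAM role to a username via mapRoles.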
Now, when dev01, dev02, ..., devN access cluster resources using the shared kubeconfig, they are all uniformly mapped to the permissions of kubernetes-dev (see the ACK documentation for the access permissions of the different roles).
Verify:
$ kubectl get po
NAME READY STATUS RESTARTS AGE
busybox-c5bd49fb9-n26zj 1/1 Running 0 3d3h
nginx-5966f7d8c5-rtzb6 1/1 Running 0 3d2h
$ kubectl get no
Error from server (Forbidden): nodes is forbidden: User "237753164652952730" cannot list resource "nodes" in API group "" at the cluster scope
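The Forbidden error above is ordinary Kubernetes RBAC at work: the mapped username only carries the ACK developer role's permissions, which do not include cluster-scoped node listing. If the cluster admin wanted to additionally allow that for the mapped user, standard RBAC objects would do it. A sketch (the ClusterRole and binding names are arbitrary):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dev-node-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: "237753164652952730"   # the username the authenticator maps the role to
```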