
Notes on Workgroup Isolation and Multi-Cluster Switching in a K8s Cluster Environment

Author: 山河已無恙i

Foreword

  • Some notes on cluster management in K8s
  • The post covers configuration related to cluster environment isolation:
  • A demo of multi-namespace isolation plus user authentication and authorization within a single cluster
  • A demo of unified multi-cluster management and switching between clusters
  • Corrections are welcome where my understanding falls short
  • How to read: requires familiarity with K8s clusters, RBAC authorization, and CA-based certificate authentication

The Buddha told Subhuti: "All appearances are illusory. If you see that all appearances are not appearances, you see the Tathagata." --- The Diamond Sutra

When a team shares a single cluster, the different workgroups need to be isolated from each other inside that cluster. Following the principle of least privilege, and to prevent the groups from accidentally interfering with each other, you can create separate users, assign them separate namespaces, and grant them different permissions. A namespace isolates most API objects, and the isolation is enforced by restricting each user's access to its own namespace. Separate namespaces and separate users within the same cluster are then distinguished by contexts (runtime environments).

  • Testers, for example, may only need get and create permissions in their own namespace: enough to confirm that services are running, to run tests, or to launch a few test-script Jobs.
  • Developers need create, delete, deletecollection, get, list, patch, update, watch and similar permissions in their own namespace for continuous integration, deployment, and testing.
  • Operations needs full permissions across all namespaces: it is responsible for keeping the whole cluster running healthily and for provisioning cluster-scoped resources such as StorageClasses. (A sketch of how such per-group permissions map onto RBAC objects follows this list.)
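
A minimal sketch of how such per-group permissions could be expressed with RBAC objects. The namespace, role, and binding names below are placeholders chosen for illustration, not names taken from the cluster configured later in this post:

# Hypothetical Role for the test group: read and create Pods in its own namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-minimal
  namespace: team-test            # placeholder namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "create"]
---
# Bind the Role to the test group's user
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-minimal-binding
  namespace: team-test
subjects:
- kind: User
  name: test                      # the user name carried in the client certificate CN
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: test-minimal
  apiGroup: rbac.authorization.k8s.io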

Normally, if you run a K8s dashboard tool, it will already offer fairly complete features for this. Today I want to share how to do it with kubectl alone, considering two scenarios:

  • The first is sharing a single cluster, using namespaces and users to isolate workgroups within the shared cluster environment.
  • The second is multiple clusters, where the goal is unified management: a shared console and a single client that can switch between clusters.

Single-Cluster Isolation with Multiple Namespaces

Assume the team has three workgroups: dev, prod, and test. Each gets its own namespace, user, and context (runtime environment) in the cluster. The end result looks like this:

┌──[[email protected]]-[~/.kube/ca]
└─$kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
          ctx-dev                       kubernetes   dev                liruilong-dev
          ctx-prod                      kubernetes   prod               liruilong-prod
          ctx-test                      kubernetes   test               liruilong-test
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin   awx
┌──[[email protected]]-[~/.kube/ca]
└─$
           

The current context is kubernetes-admin@kubernetes. It is the default created by kubeadm for the cluster administrator: a context and a superuser that can bypass RBAC authorization.

Before starting, we need some basic information about the current cluster. View the cluster info:

┌──[[email protected]]-[~/.kube]
└─$kubectl config view  -o json | jq .clusters
[
  {
    "name": "kubernetes",
    "cluster": {
      "server": "https://192.168.26.81:6443",
      "certificate-authority-data": "DATA+OMITTED"
    }
  }
]
           

Create a namespace for each workgroup

┌──[[email protected]]-[~/.kube]
└─$kubectl create ns liruilong-dev
namespace/liruilong-dev created
┌──[[email protected]]-[~/.kube]
└─$kubectl create ns liruilong-prod
namespace/liruilong-prod created
┌──[[email protected]]-[~/.kube]
└─$kubectl create ns liruilong-test
namespace/liruilong-test created
┌──[[email protected]]-[~/.kube]
└─$
           

Check the namespaces that were created

┌──[[email protected]]-[~/.kube]
└─$kubectl get ns -o wide --show-labels  |  grep liruilong
liruilong-dev                Active   4m21s   kubernetes.io/metadata.name=liruilong-dev
liruilong-prod               Active   2m15s   kubernetes.io/metadata.name=liruilong-prod
liruilong-test               Active   119s    kubernetes.io/metadata.name=liruilong-test
           

Next, define a Context (runtime environment) for each workgroup. Each context belongs to a namespace within the cluster and specifies the user that owns it.

Create a context for the given cluster, namespace, and user:

┌──[[email protected]]-[~/.kube]
└─$kubectl config set-context ctx-dev --namespace=liruilong-dev --cluster=kubernetes  --user=dev
Context "ctx-dev" created.
┌──[[email protected]]-[~/.kube]
└─$kubectl config set-context ctx-prod --namespace=liruilong-prod --cluster=kubernetes  --user=prod
Context "ctx-prod" created.
┌──[[email protected]]-[~/.kube]
└─$kubectl config set-context ctx-test --namespace=liruilong-test --cluster=kubernetes  --user=test
Context "ctx-test" created.
┌──[[email protected]]-[~/.kube]
└─$
           

Confirm the configuration that was created

┌──[[email protected]]-[~/.kube]
└─$kubectl config view  -o json | jq .contexts
[
  {
    "name": "ctx-dev",
    "context": {
      "cluster": "kubernetes",
      "user": "dev",
      "namespace": "liruilong-dev"
    }
  },
  {
    "name": "ctx-prod",
    "context": {
      "cluster": "kubernetes",
      "user": "prod",
      "namespace": "liruilong-prod"
    }
  },
  {
    "name": "ctx-test",
    "context": {
      "cluster": "kubernetes",
      "user": "test",
      "namespace": "liruilong-test"
    }
  },
  {
    "name": "kubernetes-admin@kubernetes",
    "context": {
      "cluster": "kubernetes",
      "user": "kubernetes-admin",
      "namespace": "awx"
    }
  }
]
           

With the environments separated, authentication and authorization still need to be set up. The users defined above are new, and for an ordinary user to authenticate and call the API, the user must first hold a certificate issued by the Kubernetes cluster and then present that certificate to the Kubernetes API. So we need to add credentials for these specific users and configure their permissions.

K8s supports several authentication and authorization mechanisms. Authentication can be HTTP token based, or kubeconfig certificate based (mutual TLS signed by the cluster CA). Authorization is usually handled with RBAC, though ABAC, webhooks, and other modes also exist.

With token-based authentication, if it has not been configured before, you have to modify the kube-apiserver startup arguments and restart the API server. If it has been configured (the API server was started with the --token-auth-file=SOMEFILE option), bearer tokens are read from that file. Currently these tokens last indefinitely, and the token list cannot be changed without restarting the API server, so here we use kubeconfig file authentication instead.
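
For reference, the static token file read by --token-auth-file is a plain CSV with at least three columns: token, user name, and user UID, optionally followed by group names. The path and values below are made-up placeholders, not credentials from this cluster:

# e.g. /etc/kubernetes/pki/tokens.csv (hypothetical path)
31ada4fd-adec-460c-809a-9e56ceb75269,dev,1001,"dev-group"
a8b9c0d1-e5f6-47a8-9b0c-d1e2f3a4b5c6,test,1002,"test-group,qa"

The API server would be started with --token-auth-file pointing at that file, and clients would authenticate by sending the header Authorization: Bearer <token>.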

Authenticating with a kubeconfig certificate

About kubeconfig certificate authentication: K8s uses kubeconfig files to organize information about clusters, users, namespaces, and authentication mechanisms. The kubectl command-line tool uses a kubeconfig file to find the information it needs to select a cluster and talk to its API server.

To achieve namespace isolation, we create a brand-new kubeconfig file rather than adding the user's information to the existing kubeconfig file.

The existing kubeconfig file contains the credentials of the current cluster administrator. During cluster creation, when kubeadm signs the certificate in admin.conf, it sets Subject: O=system:masters, CN=kubernetes-admin. system:masters is an exceptional superuser group that bypasses the authorization layer (for example RBAC). It is strongly recommended never to share the admin.conf file with anyone.

Of course, if the cluster was installed with kubeadm, you can also export a kubeconfig certificate file for an ordinary user directly with a kubeadm command. Let's look at both approaches.

By default, kubectl looks for a file named config in the $HOME/.kube directory. A kubeconfig file contains the following parts:

  • Cluster information: the cluster CA certificate and the cluster address
  • Context information: all contexts and the current context
  • User information: the user's client certificate and private key

Generating the kubeconfig file

Below we use the dev workgroup as a demo: create a certificate and generate a kubeconfig file from it. The certificates API in K8s supports automated provisioning of X.509 credentials; it provides a programmatic interface for Kubernetes API clients to request and obtain X.509 certificates from a certificate authority (CA).

Generate a 2048-bit private key with openssl

┌──[[email protected]]-[~/.kube/ca]
└─$openssl genrsa -out dev-ca.key 2048
Generating RSA private key, 2048 bit long modulus
....................................................................................................+++...........................................................+++
e is 65537 (0x10001)
           

Generate a certificate signing request (CSR) for the user name dev, with the user belonging to the group 2022. Setting the CN and O attributes of the CSR matters: CN is the user name and O is the group the user belongs to.

┌──[[email protected]]-[~/.kube/ca]
└─$openssl req -new -key dev-ca.key -out dev-ca.csr -subj "/CN=dev/O=2022"
┌──[[email protected]]-[~/.kube/ca]
└─$ls
dev-ca.csr  dev-ca.key
           

So far we have the dev user's private key and a certificate signing request (CSR) file. Next, create a CertificateSigningRequest object and submit it to the Kubernetes cluster with kubectl.

┌──[[email protected]]-[~/.kube/ca]
└─$cat dev-csr.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: dev-ca
spec:
  signerName: kubernetes.io/kube-apiserver-client
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1lqQ0NBVW9DQVFBd0hURU1NQW9HQTFVRUF3d0RaR1YyTVEwd0N3WURWUVFLREFReU1ESXlNSUlCSWpBTgpCZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUEzVHBUTEM1cjFCTEtYNHBkQXA1UHlReUN0VlNYCmxQZ1BqWTJ5TFZDckFkVGh0ZEVtMHcvclA4VlJrc3BucmhSQnhsQUNIa2FPcWNnSGFieUo0bHlWOGQ5RUIxcFMKanMxaThSc2VOUm5rd1NxN3FzRnpmR01FTXU5SjVjZUpaWnQxWWU1ZEhaZG5EbjFpK0VEdDJ0Z0VTcjRMTGlNdgpoVTNBTWtBbC82dTBLZCtZZ2tWOXlES0JMaEhoTUlZUHlUb0pTV215K0VJZkdDNzcyZXNmeER1Y1Q5SCtTTmN1CkdMQ3hvUUcvT2VVbEFVaFdqRkJ0L29MUWI2NU4yTTd3Ky9SVkN1YVRzZXZIZkQ5MXZNczE3dU5vN25mVXQ1a2MKd2E2czNKUm81UnYxQm9vRlVTRGtxa0NwMW5MejNOamszby9zdUdmY1QxTEpDRnJQVStLL3d5Z1JJUUlEQVFBQgpvQUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQU1sTllNb21qbnBhZzVBMkQzTnRTUlA4UmRWajY0akEzZ2VICkFjZFFDR0x4VnRvZXNkUUMxV1RGaXZFTGZIdnFZVzdpekFNamNMUVh2U1g2MXFNSzg3ejVNcllEL0g5d1lrNk4Ka2VYNUJRamZxN2xCV0ZSU0VkTzd5WDFESHhTSGVRSEsxTU9QNTU1dWErT2haYldldFlwVmZEVmxReVpEME9LMwptTFhac1dnWnk0N2syek5jdmlWYkl0Rm9nT2Y2ZGhQenU0UHFhWXVuTzNNUmVJT2JCZGVINzMxVDhuQUFKZldRCmswMGJLYlRPeEphSkVSUktMcGVUS2k1dit5a09oNjBYTC9vK1k5TVd4T2EySUErS3JxcmgxYmhjWmxha1JiTnoKODZ4VkJLTDBDc3d1RW9abWZSSUJZejlBR0RxWlJEVUdTUkhOaUlzNTNCbnl1MlBuU3VzPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K
  usages:
  - client auth
           

A few things to note:

  • the usages field must be 'client auth'
  • expirationSeconds can be set longer (for example 864000 is ten days) or shorter (for example 3600 is one hour); it is not set here, so the default is used
  • the request field is the base64-encoded content of the CSR file. To get that value, run cat dev-ca.csr | base64 | tr -d "\n":
┌──[[email protected]]-[~/.kube/ca]
└─$cat dev-ca.csr | base64  | tr -d '\n'
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ1lqQ0NBVW9DQVFBd0hURU1NQW9HQTFVRUF3d0RaR1YyTVEwd0N3WURWUVFLREFReU1ESXlNSUlCSWpBTgpCZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUEzVHBUTEM1cjFCTEtYNHBkQXA1UHlReUN0VlNYCmxQZ1BqWTJ5TFZDckFkVGh0ZEVtMHcvclA4VlJrc3BucmhSQnhsQUNIa2FPcWNnSGFieUo0bHlWOGQ5RUIxcFMKanMxaThSc2VOUm5rd1NxN3FzRnpmR01FTXU5SjVjZUpaWnQxWWU1ZEhaZG5EbjFpK0VEdDJ0Z0VTcjRMTGlNdgpoVTNBTWtBbC82dTBLZCtZZ2tWOXlES0JMaEhoTUlZUHlUb0pTV215K0VJZkdDNzcyZXNmeER1Y1Q5SCtTTmN1CkdMQ3hvUUcvT2VVbEFVaFdqRkJ0L29MUWI2NU4yTTd3Ky9SVkN1YVRzZXZIZkQ5MXZNczE3dU5vN25mVXQ1a2MKd2E2czNKUm81UnYxQm9vRlVTRGtxa0NwMW5MejNOamszby9zdUdmY1QxTEpDRnJQVStLL3d5Z1JJUUlEQVFBQgpvQUF3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQU1sTllNb21qbnBhZzVBMkQzTnRTUlA4UmRWajY0akEzZ2VICkFjZFFDR0x4VnRvZXNkUUMxV1RGaXZFTGZIdnFZVzdpekFNamNMUVh2U1g2MXFNSzg3ejVNcllEL0g5d1lrNk4Ka2VYNUJRamZxN2xCV0ZSU0VkTzd5WDFESHhTSGVRSEsxTU9QNTU1dWErT2haYldldFlwVmZEVmxReVpEME9LMwptTFhac1dnWnk0N2syek5jdmlWYkl0Rm9nT2Y2ZGhQenU0UHFhWXVuTzNNUmVJT2JCZGVINzMxVDhuQUFKZldRCmswMGJLYlRPeEphSkVSUktMcGVUS2k1dit5a09oNjBYTC9vK1k5TVd4T2EySUErS3JxcmgxYmhjWmxha1JiTnoKODZ4VkJLTDBDc3d1RW9abWZSSUJZejlBR0RxWlJEVUdTUkhOaUlzNTNCbnl1MlBuU3VzPQotLS0tLUVORCBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0K
           
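
Instead of pasting the base64 string into the manifest by hand, the CSR object can also be generated and submitted in one step with a heredoc; a sketch using the same file names as above:

cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: dev-ca
spec:
  signerName: kubernetes.io/kube-apiserver-client
  # embed the base64-encoded CSR content directly
  request: $(cat dev-ca.csr | base64 | tr -d '\n')
  usages:
  - client auth
EOF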

Approve the certificate signing request: create the CSR object with kubectl and then approve it.

┌──[[email protected]]-[~/.kube/ca]
└─$kubectl apply  -f dev-csr.yaml
certificatesigningrequest.certificates.k8s.io/dev-ca created
┌──[[email protected]]-[~/.kube/ca]
└─$kubectl  get csr
NAME     AGE   SIGNERNAME                            REQUESTOR          REQUESTEDDURATION   CONDITION
dev-ca   4s    kubernetes.io/kube-apiserver-client   kubernetes-admin   <none>              Pending
           

Approve the CSR:

┌──[[email protected]]-[~/.kube/ca]
└─$kubectl certificate approve  dev-ca
certificatesigningrequest.certificates.k8s.io/dev-ca approved
┌──[[email protected]]-[~/.kube/ca]
└─$kubectl  get csr/dev-ca
NAME     AGE     SIGNERNAME                            REQUESTOR          REQUESTEDDURATION   CONDITION
dev-ca   3m32s   kubernetes.io/kube-apiserver-client   kubernetes-admin   <none>              Approved,Issued
           

Retrieve the certificate from the CSR. The certificate content is base64 encoded and stored in the status.certificate field; export the issued certificate from the CertificateSigningRequest:

┌──[[email protected]]-[~/.kube/ca]
└─$kubectl  get csr dev-ca  -o  jsonpath='{.status.certificate}'| base64 -d > dev-ca.crt
           

At this point we have the user's .crt certificate and the user's .key private key.

┌──[[email protected]]-[~/.kube/ca]
└─$ll
總用量 16
-rw-r--r-- 1 root root 1103 12月 11 16:32 dev-ca.crt
-rw-r--r-- 1 root root  903 12月 11 16:14 dev-ca.csr
-rw-r--r-- 1 root root 1675 12月 11 13:30 dev-ca.key
-rw-r--r-- 1 root root 1390 12月 11 16:25 dev-csr.yaml
           
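
Before building the kubeconfig, it can be worth confirming that the issued certificate carries the expected subject and validity period; a quick check with openssl:

# print the subject (expect CN=dev, O=2022) and the validity window
openssl x509 -in dev-ca.crt -noout -subject -dates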

As mentioned earlier, a kubeconfig file has three parts: cluster information, context information, and user information. We now build each part in turn.

Copy the current cluster's CA certificate into the working directory; building the new kubeconfig file requires the cluster CA certificate.

┌──[[email protected]]-[~/.kube/ca]
└─$cp  /etc/kubernetes/pki/ca.crt .
           

--kubeconfig=dev-config names the new kubeconfig file, and --certificate-authority=ca.crt points at the cluster CA certificate

┌──[[email protected]]-[~/.kube/ca]
└─$kubectl config --kubeconfig=dev-config set-cluster kubernetes   --server=https://192.168.26.81:6443 --certificate-authority=ca.crt --embed-certs=true
Cluster "kubernetes" set.
           

--embed-certs=true embeds the certificate data directly in the kubeconfig file. Next, add the context for the workgroup namespace and user configured earlier:

┌──[[email protected]]-[~/.kube/ca]
└─$kubectl config --kubeconfig=dev-config set-context ctx-dev --namespace=liruilong-dev --cluster=kubernetes  --user=dev
Context "ctx-dev" created.
           

With the context added, only the user information is left. Here set-credentials dev names the user, --client-key=dev-ca.key specifies the user's private key, and --client-certificate=dev-ca.crt specifies the user's client certificate.

┌──[[email protected]]-[~/.kube/ca]
└─$kubectl config --kubeconfig=dev-config set-credentials dev --client-certificate=dev-ca.crt --client-key=dev-ca.key  --embed-certs=true
User "dev" set.
           

That completes the authentication setup: the kubeconfig file for the dev user has been created.

┌──[[email protected]]-[~/.kube/ca]
└─$cat dev-config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXhNakUyTURBME1sb1hEVE14TVRJeE1ERTJNREEwTWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTkdkCisrWnhFRDJRQlR2Rm5ycDRLNFBrd2lsYXUrNjdXNTVobVdwc09KSHF6ckVoWUREY3l4ZTU2Z1VJVDFCUTFwbU0KcGFrM0V4L0JZRStPeHY4ZmxtellGbzRObDZXQjl4VXovTW5HQi96dHZsTGpaVEVHZy9SVlNIZTJweCs2MUlSMQo2Mkh2OEpJbkNDUFhXN0pmR3VXNDdKTXFUNTUrZUNuR00vMCtGdnI2QUJnT2YwNjBSSFFuaVlzeGtpSVJmcjExClVmcnlPK0RFTGJmWjFWeDhnbi9tcGZEZ044cFgrVk9FNFdHSDVLejMyNDJtWGJnL3A0emd3N2NSalpSWUtnVlUKK2VNeVIyK3pwaTBhWW95L2hLYmg4RGRUZ3FZeERDMzR6NHFoQ3RGQnVia1hmb3Ftc3FGNXpQUm1ZS051RUgzVAo2c1FNSFl4emZXRkZvSGQ2Y0JNQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHRGNLU3V1VjVNNXlaTkJHUDEvNmg3TFk3K2VNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRVE0SUJhM0hBTFB4OUVGWnoyZQpoSXZkcmw1U0xlanppMzkraTdheC8xb01SUGZacElwTzZ2dWlVdHExVTQ2V0RscTd4TlFhbVVQSFJSY1RrZHZhCkxkUzM5Y1UrVzk5K3lDdXdqL1ZrdzdZUkpIY0p1WCtxT1NTcGVzb3lrOU16NmZxNytJUU9lcVRTbGpWWDJDS2sKUFZxd3FVUFNNbHFNOURMa0JmNzZXYVlyWUxCc01EdzNRZ3N1VTdMWmg5bE5TYVduSzFoR0JKTnRndjAxdS9MWAo0TnhKY3pFbzBOZGF1OEJSdUlMZHR1dTFDdEFhT21CQ2ZjeTBoZHkzVTdnQXh5blR6YU1zSFFTamIza0JDMkY5CkpWSnJNN1FULytoMStsOFhJQ3ZLVzlNM1FlR0diYm13Z1lLYnMvekswWmc1TE5sLzFJVThaTUpPREhTVVBlckQKU09ZPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.26.81:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: liruilong-dev
    user: dev
  name: ctx-dev
current-context: ""
kind: Config
preferences: {}
users:
- name: dev
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURBakNDQWVxZ0F3SUJBZ0lRWVlWTHZPWkthTXEyUHV5akg4d0VvakFOQmdrcWhraUc5dzBCQVFzRkFEQVYKTVJNd0VRWURWUVFERXdwcmRXSmxjbTVsZEdWek1CNFhEVEl5TVRJeE1UQTRNakV4TjFvWERUSXpNVEl4TVRBNApNakV4TjFvd0hURU5NQXNHQTFVRUNoTUVNakF5TWpFTU1Bb0dBMVVFQXhNRFpHVjJNSUlCSWpBTkJna3Foa2lHCjl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUEzVHBUTEM1cjFCTEtYNHBkQXA1UHlReUN0VlNYbFBnUGpZMnkKTFZDckFkVGh0ZEVtMHcvclA4VlJrc3BucmhSQnhsQUNIa2FPcWNnSGFieUo0bHlWOGQ5RUIxcFNqczFpOFJzZQpOUm5rd1NxN3FzRnpmR01FTXU5SjVjZUpaWnQxWWU1ZEhaZG5EbjFpK0VEdDJ0Z0VTcjRMTGlNdmhVM0FNa0FsCi82dTBLZCtZZ2tWOXlES0JMaEhoTUlZUHlUb0pTV215K0VJZkdDNzcyZXNmeER1Y1Q5SCtTTmN1R0xDeG9RRy8KT2VVbEFVaFdqRkJ0L29MUWI2NU4yTTd3Ky9SVkN1YVRzZXZIZkQ5MXZNczE3dU5vN25mVXQ1a2N3YTZzM0pSbwo1UnYxQm9vRlVTRGtxa0NwMW5MejNOamszby9zdUdmY1QxTEpDRnJQVStLL3d5Z1JJUUlEQVFBQm8wWXdSREFUCkJnTlZIU1VFRERBS0JnZ3JCZ0VGQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTUI4R0ExVWRJd1FZTUJhQUZHRGMKS1N1dVY1TTV5Wk5CR1AxLzZoN0xZNytlTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFEQkhHeE81VTcydGpQdQplN2NIYlhYaVVLNVhoSlF2TjlGQWdRaG00ZVdaNVRkNjJyRE9kQ3BwQ2JENDRLYkNabXFjRGR2d0krSWQyTmtxCnNtQ3R0Nmw4UW15WUhuQTl5ajc2TFQySzFJMm9CcVlpYzdIRlROTzhtN2lsVTBRYTZreWJJSUVxeGdxb3d2V0EKcVpOL3BHcUVWbTdxaEhkNW0yMFQrNjNYZ3FoR1JVaGcyayt3SnJBd3VHUy9wWGlObG1yVldDT2E3enhFWSs2dgpVVU81YS9EbjFZaXZjUExKRzRqdUY1VTdkWmJMS1FMMnkxTndUbEpDZTdRQkhxQzBzSnhuNTNDWC82blhIbW9OCm9mWThBMEVkZFd0WVNXT2FxSGVScVRpRG5XT2pIUk8vbmJVbTlwR3VLQmhhMmxqQUR4dmh4K1VlWnFselBZN3YKSHlWeXltU2gKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBM1RwVExDNXIxQkxLWDRwZEFwNVB5UXlDdFZTWGxQZ1BqWTJ5TFZDckFkVGh0ZEVtCjB3L3JQOFZSa3NwbnJoUkJ4bEFDSGthT3FjZ0hhYnlKNGx5VjhkOUVCMXBTanMxaThSc2VOUm5rd1NxN3FzRnoKZkdNRU11OUo1Y2VKWlp0MVllNWRIWmRuRG4xaStFRHQydGdFU3I0TExpTXZoVTNBTWtBbC82dTBLZCtZZ2tWOQp5REtCTGhIaE1JWVB5VG9KU1dteStFSWZHQzc3MmVzZnhEdWNUOUgrU05jdUdMQ3hvUUcvT2VVbEFVaFdqRkJ0Ci9vTFFiNjVOMk03dysvUlZDdWFUc2V2SGZEOTF2TXMxN3VObzduZlV0NWtjd2E2czNKUm81UnYxQm9vRlVTRGsKcWtDcDFuTHozTmprM28vc3VHZmNUMUxKQ0ZyUFUrSy93eWdSSVFJREFRQUJBb0lCQVFDbURNdzNBbFR2Sm5kKwpCTlhSVEZDb29GcFBqc0lFRDdsa3ozRm9yLzdiYmhWSXFrZFE3c2J0NDhaWnZ0RFppZHpnNUZiaXNLVU9iTlNiCm1lZUkzMk93MjVzdFJhOW4vbU9BZzVGRjNEeW1mTlBGMUZSQmpmU3Q0b3YrQzZwbWVLdy9xSEY5NzVGci85TlUKY1MvWExvTHlNdmtqVlVlcTcvUU9BN1pCMUhoellEcHpHUnJlcmdOTVU1K1lDRUo0Y0ZvN3pzSFh1ejhpWU5Nawp0K0k0by9BWEFtUktlOFBaWFZRWVdMR1lpdlBWcGU1bVFKUTZXKzdFTHlzMnAzYlRCVVJqN3JLSTBaT0dvcmpZCjNBTlRqQlRKY282eHAySTk5VXVQNHlCOWh6RG9RWGNublhURzdZcVZxZjZyL29vbXhZS3VVSUpZT1VaL21OOVUKZXF5SnpRdU5Bb0dCQVBrQzNmVzVWNDBGdjRYT29mS2VmcnNXVERESUEyMWhsVE9tSVJlNmpndjg2MnAwdElZVAo0L29WdjB0K3ZQeGE4Rkl5d0ZUUDJicFJQbUE2MlBnOVl6TUpuYytrRXY3cGFlT0p0MjNoUUxMM2lKTXFqNVBOCnp0SzZhSkI2K2w0RTVrVzZjRjZpVE5ROEpwZzRvUTgrNWlBZUJXbjhoM01ERjlKZDZMMzRYVmt6QW9HQkFPTnYKMWZSV2V6cE1TZEtPSktkakRzUlprYlczeXdxM3NFaEZEU2ZSTU5OYXNGcHhTcWpJZUx2OWRON21aL25adHpwSApzbXc4aWcyV3JrSENjRk5BM29zUkFzbWxXekdmNFVIWlNqVEFKSk1qK0ZEYSt1NjVhYTF4ckJ5R0Z3Q1RtWnB2CmRsZitJQktab21NTlpOcnpmK011Q0RQR2hhczFkZ2xhL1h1NnVUUmJBb0dBZEFQMzhmSm1iaGZOZ2NRaUErNEEKVVo0ejVVNXErbDFLcklPc1MyZnBvb0EyRnFWRkxtcTUvdHgvQWVlTW1XNnRKVDdzQ1JmRjgxN0Mxd2JUNitSKwpBVnRyb1VCcWNVWEN4ZloxOWNYSzVSY2JGS1h4dXdWYVpTZmdhK0JBSWVuYWQ0WkRzSE9ocEFoYVd2V1haSWtECm90Y1o0cVY3WGdTRTVzaEdGYXhQb2EwQ2dZQkFKa290dWJyV0xiQmcwRERzZVpjdnNLZlZubnFKa2xnSmVsaUUKazQ5Mi9jeGlKalJOdVFXODJIZC9hM09HV0c5QzQvZ2lhVXp6R2o0YVZES0VlUGFNT1FjVlF5dWVxcDdKaVBWUwpQYVBUVU1ENFpWdUR2QTVmbW9GV0prZ1VwSTBkcnpTdEN3T1cyM2lmQWFjaHpxNlNzR2dsMm1mWGE2UFliYTZ6Cm1HNG1vd0tCZ0VjS3U0WDQ3RGhLTmk4L1poZi9pank5UWsyYVZGR01jMVczVzhUWjdxY2R3aVMwK2VvSmtmTVoKRUFFVW1xME9IUWdmVGxHYnJobEpJZzlpRXIxbXFYMUUvaWFGU3N0Q0RYc3RhTG9FY3FwckhtcDFiZnplbUlWNgpLSGxiRTRqdG4vM1dCckNyUFdpeFVOUEZHYjMvUVRwZkJQNCtFa0F1WTVzemk3d1kxeUFuCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
           

Copy the generated file to any client machine that has kubectl installed and test it there. Before testing, the kubeconfig needs one more change: set the current context by changing current-context: "" to current-context: "ctx-dev".
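
Editing the file by hand works; the same change can also be made with kubectl itself, roughly like this:

# set current-context in dev-config without opening an editor
kubectl config --kubeconfig=dev-config use-context ctx-dev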

┌──[[email protected]]-[~/.kube/ca]
└─$ll
總用量 28
-rw-r--r-- 1 root root 1099 12月 11 17:12 ca.crt
-rw-r--r-- 1 root root 1103 12月 11 16:32 dev-ca.crt
-rw-r--r-- 1 root root  903 12月 11 16:14 dev-ca.csr
-rw-r--r-- 1 root root 1675 12月 11 13:30 dev-ca.key
-rw------- 1 root root 5535 12月 11 17:12 dev-config
-rw-r--r-- 1 root root 1390 12月 11 16:25 dev-csr.yaml
┌──[[email protected]]-[~/.kube/ca]
└─$scp dev-config  [email protected]:~
[email protected]'s password:
dev-config
           

Authentication for the dev user is now done, but authorization is still missing. With credentials alone you can reach the cluster, but you cannot see anything in it.
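
If the new kubeconfig is tried before any RoleBinding exists, requests are expected to be rejected with a Forbidden error, along these lines (output abridged; the exact wording may differ):

kubectl --kubeconfig=dev-config --context=ctx-dev get pods
# Error from server (Forbidden): pods is forbidden: User "dev" cannot list resource "pods" ...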

Generating the certificate file with kubeadm

You can generate a new kubeconfig file for an ordinary user with kubeadm kubeconfig user. This requires defining a ClusterConfiguration resource object; note that this is a kubeadm object, not a Kubernetes API object.

Define the resource object

┌──[[email protected]]-[/etc/kubernetes]
└─$cat example.yaml
# example.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# kubernetes 将作為 kubeconfig 中叢集名稱
clusterName: "kubernetes"
# this address (an IP or DNS name) will be used as the server address in the kubeconfig file
controlPlaneEndpoint: "192.168.26.81:6443"
# the cluster CA key and CA certificate are loaded from this local directory
certificatesDir: "/etc/kubernetes/pki"
           

The cluster information this resource object needs can be viewed like this:

┌──[[email protected]]-[/etc/kubernetes]
└─$kubectl get cm kubeadm-config -n kube-system -o=jsonpath="{.data.ClusterConfiguration}"
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.22.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
           

Now generate the certificate with kubeadm kubeconfig user. Here we generate a kubeconfig certificate file for the prod user mentioned above, valid for 10000 hours.

┌──[[email protected]]-[/etc/kubernetes]
└─$kubeadm kubeconfig user --config example.yaml --org 2022 --client-name prod --validity-period 10000h > prod-config
I1215 20:56:40.505079  127804 version.go:255] remote version is much newer: v1.26.0; falling back to: stable-1.22
W1215 20:56:43.639674  127804 kubeconfig.go:88] WARNING: the specified certificate validity period 10000h0m0s is longer than the default duration 8760h0m0s, this may increase security risks.
┌──[[email protected]]-[/etc/kubernetes]
└─$ls
admin.conf  controller-manager.conf  example.yaml  kubelet.conf  manifests  pki  prod-config  scheduler.conf
┌──[[email protected]]-[/etc/kubernetes]
└─$cat prod-config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1USXhNakUyTURBME1sb1hEVE14TVRJeE1ERTJNREEwTWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTkdkCisrWnhFRDJRQlR2Rm5ycDRLNFBrd2lsYXUrNjdXNTVobVdwc09KSHF6ckVoWUREY3l4ZTU2Z1VJVDFCUTFwbU0KcGFrM0V4L0JZRStPeHY4ZmxtellGbzRObDZXQjl4VXovTW5HQi96dHZsTGpaVEVHZy9SVlNIZTJweCs2MUlSMQo2Mkh2OEpJbkNDUFhXN0pmR3VXNDdKTXFUNTUrZUNuR00vMCtGdnI2QUJnT2YwNjBSSFFuaVlzeGtpSVJmcjExClVmcnlPK0RFTGJmWjFWeDhnbi9tcGZEZ044cFgrVk9FNFdHSDVLejMyNDJtWGJnL3A0emd3N2NSalpSWUtnVlUKK2VNeVIyK3pwaTBhWW95L2hLYmg4RGRUZ3FZeERDMzR6NHFoQ3RGQnVia1hmb3Ftc3FGNXpQUm1ZS051RUgzVAo2c1FNSFl4emZXRkZvSGQ2Y0JNQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZHRGNLU3V1VjVNNXlaTkJHUDEvNmg3TFk3K2VNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRVE0SUJhM0hBTFB4OUVGWnoyZQpoSXZkcmw1U0xlanppMzkraTdheC8xb01SUGZacElwTzZ2dWlVdHExVTQ2V0RscTd4TlFhbVVQSFJSY1RrZHZhCkxkUzM5Y1UrVzk5K3lDdXdqL1ZrdzdZUkpIY0p1WCtxT1NTcGVzb3lrOU16NmZxNytJUU9lcVRTbGpWWDJDS2sKUFZxd3FVUFNNbHFNOURMa0JmNzZXYVlyWUxCc01EdzNRZ3N1VTdMWmg5bE5TYVduSzFoR0JKTnRndjAxdS9MWAo0TnhKY3pFbzBOZGF1OEJSdUlMZHR1dTFDdEFhT21CQ2ZjeTBoZHkzVTdnQXh5blR6YU1zSFFTamIza0JDMkY5CkpWSnJNN1FULytoMStsOFhJQ3ZLVzlNM1FlR0diYm13Z1lLYnMvekswWmc1TE5sLzFJVThaTUpPREhTVVBlckQKU09ZPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.26.81:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: prod
  name: prod@kubernetes
current-context: prod@kubernetes
kind: Config
preferences: {}
users:
- name: prod
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURDekNDQWZPZ0F3SUJBZ0lJUmxBQ2U4SG1kSXd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeU1USXhOakF3TkRKYUZ3MHlOREF5TURVd05EVTJORE5hTUI0eApEVEFMQmdOVkJBb1RCREl3TWpJeERUQUxCZ05WQkFNVEJIQnliMlF3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURhaWlIOXZqNnVpdG16VTFOd3VRdGVkN3I5aUhEMzZLNEFtaVJDUTBNaGRCWlYKZlkxU1JUTFhqV1VibXBNWjF3c1hDT3IrUnhFVHhkdlhIcmNXSEh1cHBMWlE1ZVRFWVQrUitkZFdhUlFoRnpPdQpEM1l1YWdkdjNUdTdXT1pnS1R2ckIzeWxWRWptQ1M0RTRsQXZidmJ4MkJ1OHk5bDVHQ3ZQSW9wUCtjbklNRGZmCjMyMkhtOXZkVE1uRjFZUUZKSEJqbnMrOUI0TnUxdjc2WUdwbElUbFpIQ01tTjZ2aDYzbHVxL01lajdYaEFwM24KRTM0dUNQRFRQSlFlU3p5QUltc2t0ZnRGV3NJdEpTSUhyS1kyY0RBVjltVXI0VFBhY2FGZDFrcjNSTTkvQjBnWApGTEhUdWYrL2tyd3BaM1FDd0luaHRDSGc5UkZ2MjNOSkRENHNNMVZIQWdNQkFBR2pWakJVTUE0R0ExVWREd0VCCi93UUVBd0lGb0RBVEJnTlZIU1VFRERBS0JnZ3JCZ0VGQlFjREFqQU1CZ05WSFJNQkFmOEVBakFBTUI4R0ExVWQKSXdRWU1CYUFGR0RjS1N1dVY1TTV5Wk5CR1AxLzZoN0xZNytlTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBWAp0bmpWVTZhbEJzeTNoQ08xMGVnNkQzRElqODJMRGJSSmZQeU8waldoUGYveXU2VFNwS0VJUFVTQndvcWFFREhEClEraUlEZEdIRmZoa3FLRDlzOENNQ2tMZHF5S2ZDajQ4TjhvdXg5aUJXWXZVR0YzdDFyWWVCaDdCTmhrOTZPMnoKYW9MZm9GNnFqUkYyYzZIMitORjJmWFJMdlA2V0EwK1lXS3FXY2lickYvcEhYT2FidTBMN0NGR0VQSmJyUVZNcQpyeGUwTytWWkg5ZXIrQmZrelZOYmg5djJoNU5qaU1ZSHhSMkdkc01tUDNGT28vWTBUbzFlMzhVbGFndzh3Ty82Ci8zR1RtT01aUzJhZUNxazdyUmlic0ljKys5NVNwRTRqTlFpSmNPc0tFbDcrUXhsdHRaMlpib0ZheDZDaHJ4bEUKWnc4MmpvWlhxaVViaXhpbEgrUEIKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMm9vaC9iNCtyb3JaczFOVGNMa0xYbmU2L1lodzkraXVBSm9rUWtORElYUVdWWDJOClVrVXkxNDFsRzVxVEdkY0xGd2pxL2tjUkU4WGIxeDYzRmh4N3FhUzJVT1hreEdFL2tmblhWbWtVSVJjenJnOTIKTG1vSGI5MDd1MWptWUNrNzZ3ZDhwVlJJNWdrdUJPSlFMMjcyOGRnYnZNdlplUmdyenlLS1Qvbkp5REEzMzk5dApoNXZiM1V6SnhkV0VCU1J3WTU3UHZRZURidGIrK21CcVpTRTVXUndqSmplcjRldDVicXZ6SG8rMTRRS2Q1eE4rCkxnancwenlVSGtzOGdDSnJKTFg3UlZyQ0xTVWlCNnltTm5Bd0ZmWmxLK0V6Mm5HaFhkWks5MFRQZndkSUZ4U3gKMDduL3Y1SzhLV2QwQXNDSjRiUWg0UFVSYjl0elNRdytMRE5WUndJREFRQUJBb0lCQVFDZnYwMXRrRDE5bFIzaAp5YzA2bnVsQ21yN2pTWE5hcElsZEEwL3g1LzBRWFMxZVBMS3JLczRwWnNBNzExZ2tFVitYN1BycCtNVHc4VGJzCkh4V3lZZ3U3VEIzQk1PdHk2YXR3WjNNVFJTaGpyL1FsRGtSVFZVb3VhVWVhZ1RlVm4wNmZWUSsyUXRBdTV4THUKbXdnR1JGVGJJQi9XZUNSMk1rY0QyTG5HRUUrQnR2TVRRQzY2emQyYTI4Wm9PYlA0OWd0cmptNEcxRnhpcUJMWAowTWliMSt1VWhiNk1DMVZnVjZvak9YeXlRNkk2aXJ4Mmh5eDNSNS9qK0xMZ3lEb0lXUHFNUDFkNXpjN3BCSU81CmJGK1BLd0VVdXF5TGNYZDJ4OHJaclJPdGd1cjBOMWpnTWMvcXZaOFVpYWZFTkdzdURIQ1JkTE5BdHZPb2MxOTIKUC9XMEtNZHhBb0dCQVBzNmxsYmEwVWdqNy9NTXVzbVpmQkQ3UURoWVU4L3NqVmpTTUVVak5aVE9aVFFtVFRoNQpMSEZqRjE2VHdHb3FpcGU0VU5ndGhMcER2LzlUUUxEQzVHMVgrMkY1ODJZYTFKZmdZVFYwNkhsdGZJVWpMT0YvCk5NZ1hmejlBNVBsbGFpb3VyN010My9TSDJad3R2b0xGZjFiaFVFdTNOb3d2L0NMM3FlbDJnUmk5QW9HQkFONncKbmtwS2NOejVGZDNKbmJ4WE84NVkyaGE2dHJLcmZHVjhwM1dvTFRRYklRWU56M0pFYjFUYUxIT1B4eGlCN3lTSwpFNzJaQ2VYa2hsTGlyOXQwZHdRQXJwRm95d24vUUVJc1FOS0tERDBHSkxzZVVJeFRHUVBqbVFOWFF5TThzRmFGCnZvdWxlQzJ2TDJjbkNqYWtYQ1pwZnVlTFVpVnk5dFp0aWMrbUJwQlRBb0dBWDVmZVpyUWlXQW5jbnFYa1dSdCsKMnROUGoyRUVteVJPY0ZLaUxWeUZZZGJiS1dtOWpsU0ZOYXZYMDVQeTdqSzd3NWxOb2NSSU1idmZ6WjUzQ2d0TwpjZEM5aFV5cThkb1p0S1NiT0lVQWhGdkZ1cjgwcjZVQWgzWnhZN2NrcVVVT2pYaHdRSVNmSitPZFNORWJJWlZXCnE4OVdCMGx5aHdzbkxJTUNjeVExWVIwQ2dZQUpIcnFjMkVlZkJTUjhITkcwOE8ybUdjVjB3TmpTb0d0THpMc2UKK25BL2ZnendMb2ljYVdrVjFJbVZnZ0hwWXdqa09qTnN4R08vWW9pTnhITG5UZkhCM0RWS0J6eXBnQ2FsanlKbwpmUGJiV1BFUUtNR3J2WXQ4dVVsKzlZZnVYWUhyU1Rid2lTcE8xS25nVTV6N2QrZStPdnZUaDhVcGUzZlllRXY0CmtSZ2J1UUtCZ0FwcitUNU0wcGt5K1NQbVE0NHdTb3hTQkd4Mmp3aDArU3gyWE9iTUJjVVN0VFhKUE1oOUFjUDYKMXcwMDNBUVJuSjFRcy8vVnBid21iSWFENS9nQW02V1pJdVhjTjB3WWh0ejJWV2sxRW9kdjhpQmkzK0hmRXlVawo4L2tmdFRLSXNsZzRNbDgzR3pCcGhycFF1ODVsN2ZoUUhCQmZFT3k3NTNnZHZXcFVxNzJ2Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
           

The context and namespace it produces are not what we want, so this kubeconfig file needs to be modified; one way to do so is sketched below. Once modified, point kubectl at the file and view the configuration to confirm the changes.
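
A sketch of making those adjustments without hand-editing the file, reusing the naming scheme from the dev workgroup, via a few kubectl config calls against the new file:

# add a context carrying the prod namespace and make it current
kubectl config --kubeconfig=prod-config set-context ctx-prod \
  --cluster=kubernetes --namespace=liruilong-prod --user=prod
kubectl config --kubeconfig=prod-config use-context ctx-prod
# drop the context generated by kubeadm
kubectl config --kubeconfig=prod-config delete-context prod@kubernetes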

┌──[[email protected]]-[/etc/kubernetes]
└─$kubectl --kubeconfig=prod-config  config  view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.26.81:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: liruilong-prod
    user: prod
  name: ctx-prod
current-context: ctx-prod
kind: Config
preferences: {}
users:
- name: prod
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
           

Because no authorization has been set up yet, the user has no permissions at all and cannot view anything:

┌──[[email protected]]-[/etc/kubernetes]
└─$kubectl --kubeconfig=prod-config  get nodes
Error from server (Forbidden): nodes is forbidden: User "prod" cannot list resource "nodes" in API group "" at the cluster scope
           

Authorization with RBAC

I won't go into RBAC authorization in depth here; interested readers can check the official documentation or my earlier posts. For now we simply pick an existing cluster role, admin, to bind. First inspect the role's permissions:

┌──[[email protected]]-[~/.kube/ca]
└─$kubectl describe clusterrole admin
Name:         admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                       Non-Resource URLs  Resource Names  Verbs
  ---------                                       -----------------  --------------  -----
  rolebindings.rbac.authorization.k8s.io          []                 []              [create delete deletecollection get list patch update watch]
  roles.rbac.authorization.k8s.io                 []                 []              [create delete deletecollection get list patch update watch]
  configmaps                                      []                 []              [create delete deletecollection patch update get list watch]
  events                                          []                 []              [create delete deletecollection patch update get list watch]
  persistentvolumeclaims                          []                 []              [create delete deletecollection patch update get list watch]
  pods                                            []                 []              [create delete deletecollection patch update get list watch]
  ..................
  ..............
           

Bind the cluster role admin to the user dev, restricting its effect to the liruilong-dev namespace only.

┌──[[email protected]]-[~/.kube/ca]
└─$kubectl create rolebinding dev-admin  --clusterrole=admin --user=dev -n liruilong-dev
rolebinding.rbac.authorization.k8s.io/dev-admin created
           

For the prod user, whose kubeconfig file was generated with kubeadm, we instead create a dedicated role and bind it.

┌──[[email protected]]-[/etc/kubernetes]
└─$kubectl create  role prod-role --verb=get,list --resource=pod -n liruilong-prod
role.rbac.authorization.k8s.io/prod-role created
┌──[[email protected]]-[/etc/kubernetes]
└─$kubectl get -n liruilong-prod role prod-role
NAME        CREATED AT
prod-role   2022-12-15T14:17:57Z
┌──[[email protected]]-[/etc/kubernetes]
└─$kubectl -n liruilong-prod describe role prod-role
Name:         prod-role
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  pods       []                 []              [get list]
┌──[[email protected]]-[/etc/kubernetes]
└─$
┌──[[email protected]]-[/etc/kubernetes]
└─$kubectl create rolebinding  prod-bind --role=prod-role  --user=prod -n liruilong-prod
rolebinding.rbac.authorization.k8s.io/prod-bind created
┌──[[email protected]]-[/etc/kubernetes]
└─$kubectl get rolebindings.rbac.authorization.k8s.io  -n liruilong-prod
NAME        ROLE             AGE
prod-bind   Role/prod-role   68s
           

Client-side testing

Testing with the dev user. For convenience, export an environment variable pointing at the kubeconfig file. As a rule, unless you are the cluster administrator, it is better not to connect to the K8s cluster directly from a cluster node; instead, copy the kubeconfig file to a client machine with kubectl installed and operate the cluster from there.

┌──[[email protected]]-[~]
└─$export KUBECONFIG=/root/dev-config
           

The exported variable applies to every kubectl invocation in the current shell session. Test get and create permissions in the user's own namespace:

┌──[[email protected]]-[~]
└─$kubectl get pods
No resources found in liruilong-dev namespace.
┌──[[email protected]]-[~]
└─$kubectl run web  --image nginx
pod/web created
┌──[[email protected]]-[~]
└─$kubectl get pods -owide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE                          NOMINATED NODE   READINESS GATES
web    1/1     Running   0          19s   10.244.217.15   vms155.liruilongs.github.io   <none>           <none>
┌──[[email protected]]-[~]
└─$
           

Verify that access to other namespaces is denied

┌──[[email protected]]-[~]
└─$ kubectl get pods -A
Error from server (Forbidden): pods is forbidden: User "dev" cannot list resource "pods" in API group "" at the cluster scope
┌──[[email protected]]-[~]
└─$
┌──[[email protected]]-[~]
└─$ kubectl get deployments.apps  -n awx
Error from server (Forbidden): deployments.apps is forbidden: User "dev" cannot list resource "deployments" in API group "apps" in the namespace "awx"
           
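
Permissions can also be checked without sending real requests by using kubectl auth can-i; for example, the dev user could run:

kubectl auth can-i create pods                  # expected: yes (own namespace)
kubectl auth can-i list pods --namespace awx    # expected: no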

Viewing the context information, only the user's own context is visible

┌──[[email protected]]-[~]
└─$ kubectl config get-contexts
CURRENT   NAME      CLUSTER      AUTHINFO   NAMESPACE
*         ctx-dev   kubernetes   dev        liruilong-dev
           

Testing with the prod user. This time the kubeconfig is copied straight into the default location ~/.kube/config. Note that the KUBECONFIG environment variable takes precedence over the default ~/.kube/config file, so the default file is only used when that variable is unset.

┌──[[email protected]]-[/etc/kubernetes]
└─$scp  prod-config   [email protected]:~/.kube/config
[email protected]'s password:
prod-config                                                                 100% 5562     4.3MB/s   00:00
┌──[[email protected]]-[/etc/kubernetes]
└─$ssh [email protected]
[email protected]'s password:
Last login: Sun Dec 11 17:03:02 2022 from 192.168.26.1
┌──[[email protected]]-[~/.kube]
└─$ kubectl config get-contexts
CURRENT   NAME       CLUSTER      AUTHINFO   NAMESPACE
*         ctx-prod   kubernetes   prod       liruilong-prod

           

Permission tests

┌──[[email protected]]-[~]
└─$ kubectl get pods
No resources found in liruilong-prod namespace.
┌──[[email protected]]-[~]
└─$ kubectl get pods -A
Error from server (Forbidden): pods is forbidden: User "prod" cannot list resource "pods" in API group "" at the cluster scope
┌──[[email protected]]-[~]
└─$ kubectl get deployments.apps
Error from server (Forbidden): deployments.apps is forbidden: User "prod" cannot list resource "deployments" in API group "apps" in the namespace "liruilong-prod"
┌──[[email protected]]-[~]
└─$
           

Unified Multi-Cluster Management

Multiple clusters are isolated from each other by nature, so each workgroup can simply be assigned its own cluster. What remains is how to manage all of them from a single console.

Assume we have a cluster like this for development, referred to here as cluster A:

┌──[[email protected]]-[~/.kube]
└─$kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready    control-plane,master   23h   v1.21.1
vms82.liruilongs.github.io   Ready    <none>                 23h   v1.21.1
vms83.liruilongs.github.io   Ready    <none>                 23h   v1.21.1
           

Now create a second cluster for testing. For this demo it has one master node and one worker node, and is referred to as cluster B:

[root@vms91 ~]# kubectl get nodes
NAME                         STATUS   ROLES                  AGE    VERSION
vms91.liruilongs.github.io   Ready    control-plane,master   139m   v1.21.1
vms92.liruilongs.github.io   Ready    <none>                 131m   v1.21.1
           

View the configuration of the newly created cluster B

[root@vms91 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.26.91:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
[root@vms91 ~]#
           

To manage multiple clusters from one console and switch between them, we need a new kubeconfig file that merges the configuration files of clusters A and B into one. Each section of the kubeconfig has to be merged separately.

Assume the master of cluster A serves as the console. Back up cluster A's kubeconfig file, then modify it.

┌──[[email protected]]-[~/.kube]
└─$pwd;ls
/root/.kube
cache  config
           

When merging kubeconfig files, merge the cluster entries first, then the contexts, and finally the user entries. After merging, entries within each section must not share the same name, so rename them as needed. The merged file looks roughly like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0.........0tCg==
    server: https://192.168.26.81:6443
  name: cluster1
- cluster:
    certificate-authority-data: LS0.........0tCg==
    server: https://192.168.26.91:6443
  name: cluster2
contexts:
- context:
    cluster: cluster1
    namespace: kube-public
    user: kubernetes-admin1
  name: context1
- context:
    cluster: cluster2
    namespace: kube-system
    user: kubernetes-admin2
  name: context2
current-context: context2
kind: Config
preferences: {}
users:
- name: kubernetes-admin1
  user:
    client-certificate-data: LS0.......0tCg==
    client-key-data: LS0......LQo=
- name: kubernetes-admin2
  user:
    client-certificate-data: LS0.......0tCg==
    client-key-data: LS0......0tCg==
           
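
As a shortcut, kubectl can do most of the merging itself: list both files in the KUBECONFIG variable and flatten the result. Renaming the clusters, users, and contexts so they do not collide is still up to you, and has to be done before merging when both files use the same default names (as kubeadm does). A sketch, assuming cluster B's admin.conf has been copied over as ~/.kube/config-b and the names already renamed:

# merge the current config with cluster B's config into a single file
KUBECONFIG=~/.kube/config:~/.kube/config-b \
  kubectl config view --flatten > ~/.kube/merged-config
# use the merged file from now on
export KUBECONFIG=~/.kube/merged-config
kubectl config get-contexts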

Switching between clusters

Once merged, the single kubeconfig file lets you switch between clusters with kubectl config use-context context2; under the hood this is just a context switch.
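
As a side note, you do not have to switch the current context at all: kubectl also accepts a per-command --context flag, which is handy for quick checks:

# query cluster B without changing the current context
kubectl --context=context2 get nodes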

View the current contexts

┌──[[email protected]]-[~/.kube]
└─$kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO            NAMESPACE
*         context1   cluster1   kubernetes-admin1   kube-public
          context2   cluster2   kubernetes-admin2   kube-system
           

View the current cluster's nodes; as expected, this is cluster A

┌──[[email protected]]-[~/.kube]
└─$kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready    control-plane,master   23h   v1.21.1
vms82.liruilongs.github.io   Ready    <none>                 23h   v1.21.1
vms83.liruilongs.github.io   Ready    <none>                 23h   v1.21.1
           

Switch the context to switch clusters, then view the contexts again

┌──[[email protected]]-[~/.kube]
└─$kubectl config use-context  context2
Switched to context "context2".
┌──[[email protected]]-[~/.kube]
└─$kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO            NAMESPACE
          context1   cluster1   kubernetes-admin1   kube-public
*         context2   cluster2   kubernetes-admin2   kube-system
           

View cluster B's nodes

┌──[[email protected]]-[~/.kube]
└─$kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
vms91.liruilongs.github.io   Ready    control-plane,master   8h    v1.21.1
vms92.liruilongs.github.io   Ready    <none>                 8h    v1.21.1
┌──[[email protected]]-[~/.kube]
└─$
           

References

https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/certificate-signing-requests/

https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/

https://kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-kubeconfig/
