How to quickly build a multi-node k8s cluster using the resources of a single host

Author: 運維技術幫
Kind (Kubernetes in Docker) uses one container to simulate one node: each "node" runs as a Docker container, so a k8s cluster with multiple nodes can be built from multiple containers. Inside each node, containerd and kubelet run under systemd, while etcd, kube-apiserver, kube-scheduler, kube-controller-manager, and kube-proxy run as containers. This environment is well suited to quickly deploying a multi-node k8s cluster for verifying and testing k8s features.

Environment Overview

# Component versions
CentOS 7.9.2009 (kernel 5.4.188-1.el7.elrepo)
Docker Engine - Community: v20.10.9
Kubernetes Version : V1.25.2

# Network planning
node network: 172.18.0.0/16
pod  network: 10.16.0.0/16
service network: 10.15.0.0/16
test host IP: 192.168.31.19/24

# Kind-related images
kindest/node:v1.25.2                 # Docker image that runs nested containers, systemd, and the Kubernetes components
kindest/haproxy:v20220607-9a4d8d2a   # load-balances access to kube-apiserver

Base Environment Setup

Installing kubectl

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install the kubectl command-line tool
yum makecache fast
yum install -y kubectl

# Enable kubectl command auto-completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
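# Sanity check: query the client version only (no cluster exists yet)
kubectl version --client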

Installing Docker

# yum-config-manager is provided by yum-utils
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-20.10.9-3.el7 -y
systemctl enable docker && systemctl start docker

# bash-completion provides _get_comp_words_by_ref, which kubectl completion relies on
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
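# Sanity check: the daemon should respond
docker version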

Custom Kind Configuration

This custom configuration starts 6 containers, simulating 3 k8s control-plane nodes and 3 k8s worker nodes.

cat > kind-config.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  kubeProxyMode: "ipvs"         # kube-proxy mode
  # podSubnet: "10.16.0.0/16"
  # serviceSubnet: "10.15.0.0/16"
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta3
  kind: InitConfiguration
  metadata:
    name: config
  nodeRegistration:
    kubeletExtraArgs:
      pod-infra-container-image: registry.aliyuncs.com/google_containers/pause:3.7    # specify the pause image version
- |
  apiVersion: kubeadm.k8s.io/v1beta3
  kind: ClusterConfiguration
  metadata:
    name: config
  imageRepository: registry.aliyuncs.com/google_containers    # specify the image repository (mirror)
  kubernetesVersion: "1.25.2"
  networking:
    serviceSubnet: 10.15.0.0/16
    podSubnet: 10.16.0.0/16
    dnsDomain: cluster.local
nodes:
- role: control-plane
  # Via a kubeadm patch, set kubeletExtraArgs to label this node so the ingress controller can be pinned to it
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 30443
    hostPort: 31443
    listenAddress: "0.0.0.0"
    protocol: TCP
  - containerPort: 80
    hostPort: 80
  - containerPort: 443
    hostPort: 443
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF

extraPortMappings maps a port on the kind node (containerPort) to a port on the local host (hostPort). Here containerPort is the port that a NodePort-type Service in k8s exposes on the kind node. The traffic path is: 31443 (host) --> 30443 (kind node) --> 443 (Service port) --> 8443 (pod port).
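Once the cluster below is created, the host-side mappings can be verified with docker port; a quick check, using the first control-plane node's container name from this setup (extra mappings such as the API server port may also appear):

docker port demo-control-plane
# > 80/tcp -> 0.0.0.0:80
# > 443/tcp -> 0.0.0.0:443
# > 30443/tcp -> 0.0.0.0:31443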

Creating the k8s Cluster with Kind

# Install from the prebuilt binary
wget https://github.com/kubernetes-sigs/kind/releases/download/v0.16.0/kind-linux-amd64
mv kind-linux-amd64 /usr/local/bin/kind
chmod +x /usr/local/bin/kind

# Create the cluster
kind create cluster -n demo --config kind-config.yaml
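# Sanity check after creation: kind writes a context named kind-demo into the default kubeconfig
kubectl cluster-info --context kind-demo
kind get nodes -n demo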
# List / delete clusters
kind get clusters
kind delete cluster -n demo

Deploying the Cluster Dashboard

Provides a web UI for managing the k8s cluster.

Changing the Service Type

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Edit recommended.yaml to set the Service type to NodePort and map Service port 443 to node port 30443; the access path is then: 30443 --> 443 --> 8443

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort        # added line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443   # added line
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml

Adding an Administrator Account

To protect your cluster data, the Dashboard is deployed with a minimal RBAC configuration by default. A token for that default account can be created with:

kubectl -n kubernetes-dashboard create token kubernetes-dashboard

Create a service account with full cluster privileges for accessing the Kubernetes Dashboard:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
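Save the manifest and apply it (admin-user.yaml is an assumed file name):

kubectl apply -f admin-user.yaml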

Configuration explained:

By creating the service account admin-user and binding it to the cluster role cluster-admin, the account gains every permission that role carries. Inspect the permissions of cluster-admin with kubectl describe clusterroles.rbac.authorization.k8s.io cluster-admin. Then run kubectl -n kubernetes-dashboard create token admin-user to obtain the token for admin-user, used for token authentication when accessing the Dashboard in a browser.
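Putting it together, a login flow might look like this; host port 31443 comes from the extraPortMappings defined earlier:

kubectl -n kubernetes-dashboard create token admin-user
# Open https://192.168.31.19:31443 in a browser (accept the self-signed certificate warning),
# choose Token, and paste the printed token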

Deploying the Ingress-Nginx Controller

Image version: registry.k8s.io/ingress-nginx/controller:v1.4.0

# The Ingress-Nginx controller; its Service type defaults to NodePort
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml

# Edit usage.yaml: add spec.rules.host: demo01.it123.me to the Ingress resource, then point that hostname at the test host IP (192.168.31.19), as shown below
wget https://kind.sigs.k8s.io/examples/ingress/usage.yaml
kubectl apply -f usage.yaml
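# Hypothetical hosts entry: name resolution for the test URL is assumed to go through /etc/hosts on the test host
echo "192.168.31.19 demo01.it123.me" >> /etc/hosts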

# Access test
curl http://demo01.it123.me/foo
# > foo
curl http://demo01.it123.me/bar
# > bar

Verifying and Testing the k8s Cluster

1. Verify that the k8s cluster environment works correctly

# Enter the control-plane node demo-control-plane
docker exec -it demo-control-plane bash

kubectl cluster-info
# > Kubernetes control plane is running at https://demo-external-load-balancer:6443
# > CoreDNS is running at https://demo-external-load-balancer:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
# > To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

kubectl get node -o wide
# > NAME                  STATUS   ROLES           AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                CONTAINER-RUNTIME
# > demo-control-plane    Ready    control-plane   118m   v1.25.2   172.18.0.3    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8
# > demo-control-plane2   Ready    control-plane   117m   v1.25.2   172.18.0.4    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8
# > demo-control-plane3   Ready    control-plane   116m   v1.25.2   172.18.0.6    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8
# > demo-worker           Ready    <none>          115m   v1.25.2   172.18.0.5    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8
# > demo-worker2          Ready    <none>          115m   v1.25.2   172.18.0.7    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8
# > demo-worker3          Ready    <none>          115m   v1.25.2   172.18.0.2    <none>        Ubuntu 22.04.1 LTS   5.4.188-1.el7.elrepo.x86_64   containerd://1.6.8

# List pods from all namespaces scheduled on a given node
kubectl get pod -A -o wide --field-selector spec.nodeName='demo-control-plane'

# View the full configuration of a given pod
kubectl -n kube-system get pod kube-apiserver-demo-control-plane3 -o yaml

# List all containers running on the current node
crictl ps -a
# > CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
# > 72543f8c33d29       d921cee849482       16 hours ago        Running             kindnet-cni               1                   43e0043ff3f43       kindnet-629vz
# > b2e84393b76ee       d681a4ce3c509       17 hours ago        Running             controller                0                   0fcf6e2c1b623       ingress-nginx-controller-7d68cdddd8-7kvdh
# > cb08c0aa55734       ca0ea1ee3cfd3       17 hours ago        Running             kube-scheduler            1                   ea1562867409f       kube-scheduler-demo-control-plane
# > 63e6b0a3307a8       dbfceb93c69b6       17 hours ago        Running             kube-controller-manager   1                   9fecff5941a02       kube-controller-manager-demo-control-plane
# > c1c5a2fd825b1       5185b96f0becf       17 hours ago        Running             coredns                   0                   4a074814568df       coredns-c676cc86f-vv6bd
# > b5eb1a5d4bd4f       5185b96f0becf       17 hours ago        Running             coredns                   0                   14bb62cd7ec40       coredns-c676cc86f-r6fg2
# > 02330ab9295b9       4c1e997385b8f       17 hours ago        Running             local-path-provisioner    0                   d34f2aacb74ac       local-path-provisioner-684f458cdd-zkv44
# > 97ca308797460       1c7d8c51823b5       17 hours ago        Running             kube-proxy                0                   8625facdc9af7       kube-proxy-z8wh6
# > d12e75fb3911f       a8a176a5d5d69       17 hours ago        Running             etcd                      0                   679d1dfd21a64       etcd-demo-control-plane
# > 93e3ad34a0d4e       97801f8394908       17 hours ago        Running             kube-apiserver            0                   eabda9c0df282       kube-apiserver-demo-control-plane

Note: all kubectl commands can also be run from the test host to interact with the cluster, whereas commands such as ctr and crictl must be run inside a cluster node.

2. With multiple control-plane nodes, kind deploys an haproxy component by default to load-balance kube-apiserver

# Test host port 46097 maps to the haproxy frontend bind port (6443); the frontend's backend is the kube-apiserver on each of the three control-plane nodes
docker ps |grep haproxy
# > ca94fa820cdf   kindest/haproxy:v20220607-9a4d8d2a   "haproxy -sf 7 -W -d…"   17 hours ago   Up 17 hours   127.0.0.1:46097->6443/tcp    demo-external-load-balancer

# Key parts of the haproxy config at /usr/local/etc/haproxy/haproxy.cfg
frontend control-plane
  bind *:6443
  default_backend kube-apiservers

backend kube-apiservers
  option httpchk GET /healthz
  server demo-control-plane demo-control-plane:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server demo-control-plane2 demo-control-plane2:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
  server demo-control-plane3 demo-control-plane3:6443 check check-ssl verify none resolvers docker resolve-prefer ipv4
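The full configuration can also be read from the test host; a quick check, using the load-balancer container name from the docker ps output above:

docker exec demo-external-load-balancer cat /usr/local/etc/haproxy/haproxy.cfg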

Network Analysis

kind ships with a simple network implementation, kindnetd, which builds the k8s cluster network from standard CNI plugins and simple netlink routes.

# Run inside a kind node
route -n
# > Kernel IP routing table
# > Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
# > 0.0.0.0         172.18.0.1      0.0.0.0         UG    0      0        0 eth0
# > 10.16.0.2       0.0.0.0         255.255.255.255 UH    0      0        0 vethb86d2467
# > 10.16.0.3       0.0.0.0         255.255.255.255 UH    0      0        0 vethd5a7e000
# > 10.16.0.4       0.0.0.0         255.255.255.255 UH    0      0        0 veth5da47f46
# > 10.16.0.5       0.0.0.0         255.255.255.255 UH    0      0        0 vethe5f5947e
# > 10.16.1.0       172.18.0.3      255.255.255.0   UG    0      0        0 eth0
# > 10.16.2.0       172.18.0.6      255.255.255.0   UG    0      0        0 eth0
# > 10.16.3.0       172.18.0.8      255.255.255.0   UG    0      0        0 eth0
# > 10.16.4.0       172.18.0.4      255.255.255.0   UG    0      0        0 eth0
# > 10.16.5.0       172.18.0.7      255.255.255.0   UG    0      0        0 eth0
# > 172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

From the routing table of the current kind node: this node's pod subnet is 10.16.0.0/24, the pod subnet of node 172.18.0.3 is 10.16.1.0/24, and the remaining nodes follow the same pattern in the table above.
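The same routes can be inspected for any other node from the test host, for example (node names from the kubectl get node output above):

docker exec demo-worker route -n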

Q&A

If pressing Tab for kubectl completion prints bash: _get_comp_words_by_ref: command not found, fix it as follows:

# Inside a kind node (the node image is Ubuntu-based); on the CentOS test host, install via yum as shown earlier
apt install -y bash-completion
source /usr/share/bash-completion/bash_completion

Important note: given the author's limited time and knowledge, this article will inevitably contain errors and omissions; corrections and feedback from readers and industry experts are very welcome!
