
kubeadm實作k8s高可用叢集環境部署與配置

High-Availability Architecture

The high availability of a Kubernetes cluster is in practice the high availability of its core components. Here an active-standby model is used; the architecture is as follows:

Overview of the active-standby HA architecture:

| Core component | HA mode | HA mechanism |
|---|---|---|
| apiserver | active-standby | keepalived |
| controller-manager | active-standby | leader election |
| scheduler | active-standby | leader election |
| etcd | cluster | kubeadm |
  • apiserver: made highly available through keepalived; when the active node fails, keepalived moves the VIP to another node.
  • controller-manager: Kubernetes elects a leader internally (controlled by the --leader-elect flag, default true); only one controller-manager instance is active in the cluster at any moment.
  • scheduler: Kubernetes elects a leader internally (controlled by the --leader-elect flag, default true); only one scheduler instance is active in the cluster at any moment.
  • etcd: kubeadm builds the etcd cluster automatically; deploy an odd number of nodes, so that a 3-node cluster tolerates the loss of at most one machine.
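The odd-size recommendation follows from etcd's quorum rule: writes need floor(n/2)+1 live members, so an n-member cluster survives n minus quorum failures. A quick sketch of the arithmetic:

```shell
# etcd stays writable only while a quorum of floor(n/2)+1 members is alive,
# so an n-member cluster tolerates n - (n/2 + 1) member failures.
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  echo "members=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
```

Going from 3 to 4 members raises the quorum to 3 without raising fault tolerance, which is why even cluster sizes add risk without benefit.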

Deployment Environment

k8s Versions

| kubelet version | kubeadm version | kubectl version |
|---|---|---|
| v1.15.1 | v1.15.1 | v1.15.1 |

Host Configuration

| CentOS version | Kernel | docker version | flannel version | keepalived version |
|---|---|---|---|---|
| 7.8.2003 | 4.4.223 | 19.03.9 | v0.11.0 | v1.3.5 |

Host List

| Hostname | IP | Specs | Notes |
|---|---|---|---|
| master01 | 192.168.213.181 | 4U4G | control plane |
| master02 | 192.168.213.182 | 4U4G | control plane |
| master03 | 192.168.213.183 | 4U4G | control plane |
| node01 | 192.168.213.191 | 2U2G | node |
| node02 | 192.168.213.192 | 2U2G | node |
| VIP | 192.168.213.200 | 4U4G | floats across the control plane nodes |

Private Registry

| Hostname | IP | Specs | Notes |
|---|---|---|---|
| docker-registry | 192.168.213.129 | 2U1G | reg.zhao.com |

Other Preparation

System initialization, docker installation, and k8s (kubelet, kubeadm, and kubectl) installation are omitted here.

  • kubelet: runs on every node in the cluster and starts Pods and containers
  • kubeadm: initializes and bootstraps the cluster
  • kubectl: communicates with the cluster to deploy and manage applications, inspect resources, and create, delete, and update components

Start kubelet and enable it at boot

systemctl enable kubelet && systemctl start kubelet

Installing keepalived

Install on all master nodes.

Install keepalived

[root@master01 ~]# yum -y install keepalived
           

keepalived Configuration

master01

[root@master01 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master01
}
vrrp_instance VI_1 {
    state MASTER 
    interface ens160
    virtual_router_id 50
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.213.200
    }
}
           

master02

[root@master02 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master02
}
vrrp_instance VI_1 {
    state BACKUP 
    interface ens160
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.213.200
    }
}
           

master03

[root@master03 ~]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master03
}
vrrp_instance VI_1 {
    state BACKUP 
    interface ens160
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.213.200
    }
}
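
The three configurations above move the VIP only when an entire node goes down; if kube-apiserver alone crashes, the VIP stays on a broken node. A common hardening step, not part of the original setup (the script path and weight here are illustrative), is a vrrp_script health check that lowers the node's priority when the local apiserver stops answering:

```
vrrp_script check_apiserver {
    # any non-zero exit marks this node unhealthy (path is illustrative)
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -80
}
vrrp_instance VI_1 {
    ...
    track_script {
        check_apiserver
    }
}
```

The referenced script can be as simple as `curl -sfk --max-time 3 https://127.0.0.1:6443/healthz -o /dev/null`; with weight -80, a failing MASTER (priority 150) drops below both BACKUPs (90 and 80) and the VIP moves.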
           

Start keepalived and enable it at boot

[root@master01 ~]# systemctl start keepalived
[root@master01 ~]# systemctl enable keepalived
           

Check the VIP

kubeadm實作k8s高可用叢集環境部署與配置

Configuring the master Nodes

Initialize the master01 node

master01 initialization

#the configuration file used for initialization
[root@master01 ~]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
apiServer:
  certSANs:		##list the hostname and IP of every kube-apiserver node, plus the VIP
  - master01
  - master02
  - master03
  - node01
  - node02
  - 192.168.213.181
  - 192.168.213.182
  - 192.168.213.183
  - 192.168.213.191
  - 192.168.213.192
  - 192.168.213.200
controlPlaneEndpoint: "192.168.213.200:6443"
networking:
  podSubnet: "10.244.0.0/16"
[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml|tee kubeadm-init.log
           

Record the kubeadm join command from the output; it is needed later to join the standby master nodes and the worker nodes to the cluster.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.213.200:6443 --token ebx4uz.9y3twsnoj9yoscoo \
    --discovery-token-ca-cert-hash sha256:1bc280548259dd8f1ac53d75e918a8ec99c234b13f4fe18a71435bbbe8cb26f3
           

Load environment variables

[root@master01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
           

Install the flannel network

[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
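
After the apply, flannel runs as a DaemonSet; a quick check (the app=flannel label matches this manifest version) is:

```shell
# All flannel pods should reach Running, one per node.
kubectl -n kube-system get pods -l app=flannel -o wide
```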
           

Joining the Standby master Nodes to the Cluster

Configure passwordless SSH login

Configure passwordless login from master01 to master02 and master03.

#generate a key pair
[root@master01 ~]# ssh-keygen -t rsa
#copy the public key to master02 and master03
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.213.182
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.213.183
#test passwordless login
[root@master01 ~]# ssh master02
[root@master01 ~]# ssh 192.168.213.183
           

Distribute certificates from master01

Run the script cert-main-master.sh on master01 to distribute the certificates to master02 and master03.

[root@master01 ~]# cat cert-main-master.sh
USER=root # customizable
CONTROL_PLANE_IPS="192.168.213.182 192.168.213.183"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
[root@master01 ~]# ./cert-main-master.sh
           

Move the certificates into place on the standby master nodes

Run the script cert-other-master.sh on master02 and master03 to move the certificates into the expected directories.

[root@master02 ~]# cat cert-other-master.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Quote this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@master02 ~]# ./cert-other-master.sh 
           

Join the standby master nodes to the cluster

Run the join command on master02 and master03. Because these nodes join as control-plane members, kubeadm v1.15 needs the --control-plane flag appended to the join command recorded earlier:

kubeadm join 192.168.213.200:6443 --token ebx4uz.9y3twsnoj9yoscoo \
    --discovery-token-ca-cert-hash sha256:1bc280548259dd8f1ac53d75e918a8ec99c234b13f4fe18a71435bbbe8cb26f3 \
    --control-plane
           

Load environment variables on the standby master nodes

This step makes kubectl commands usable on the standby master nodes as well.

scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source .bash_profile
           

Joining the Worker Nodes to the Cluster

Join

On each worker node, run the join command produced when the master was initialized.

kubeadm join 192.168.213.200:6443 --token ebx4uz.9y3twsnoj9yoscoo \
    --discovery-token-ca-cert-hash sha256:1bc280548259dd8f1ac53d75e918a8ec99c234b13f4fe18a71435bbbe8cb26f3
           

Check the cluster nodes

[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get pod -o wide -n kube-system 
           

All control plane nodes are in Ready state and all system components are running normally.

Connecting to the Private Registry

Private registry setup is omitted; run the following steps on every node.

Modify daemon.json

[root@master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.213.181 master01
192.168.213.182 master02
192.168.213.183 master03
192.168.213.191 node01
192.168.213.192 node02
192.168.213.129 reg.zhao.com
[root@master01 ~]# cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://sopn42m9.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
        "log-opts": {
            "max-size": "100m"
        },
    "insecure-registries": ["https://reg.zhao.com"]
}
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
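
Whether the daemon picked up the setting can be verified after the restart:

```shell
# reg.zhao.com should be listed under "Insecure Registries".
docker info | grep -A 3 "Insecure Registries"
```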
           

Create an authentication secret

Create a docker-registry authentication secret with kubectl.

[root@master01 ~]# kubectl create secret docker-registry myregistrykey --docker-server=https://reg.zhao.com --docker-username=admin --docker-password=Harbor12345 --docker-email=""
secret/myregistrykey created
[root@master02 ~]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-6mrjd   kubernetes.io/service-account-token   3      18h
myregistrykey         kubernetes.io/dockerconfigjson        1      19s
           

When creating a Pod, reference myregistrykey through imagePullSecrets:

imagePullSecrets:
  - name: myregistrykey
           

Testing Cluster Functionality

Test the private registry

[root@master02 ~]# cat test_sc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
    - name: foo
      image: reg.zhao.com/zhao/myapp:v1.0
#  imagePullSecrets:
#    - name: myregistrykey
           

After uncommenting those two lines so the secret is applied, the image can be pulled.

Test cluster high availability

Test master node high availability

Find the node holding the apiserver VIP with ip, and find the scheduler and controller-manager leaders through their leader-election records:

[root@master01 ~]# ip a|grep ens160
[root@master01 ~]# kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
[root@master01 ~]# kubectl get endpoints kube-controller-manager -n kube-system -o yaml |grep holderIdentity
           
| Component | Node |
|---|---|
| apiserver | master01 |
| controller-manager | master01 |
| scheduler | master01 |

Shut down master01 to simulate an outage; master01's status becomes NotReady.

[root@master01 ~]# init 0
           

The VIP floated to master02, and controller-manager and scheduler migrated as well:

| Component | Node |
|---|---|
| apiserver | master02 |
| controller-manager | master03 |
| scheduler | master02 |

Test worker node high availability

In certain scenarios, such as a node becoming NotReady or running out of resources, Kubernetes pod eviction moves pods to other nodes.

kube-controller-manager periodically checks node status. Whenever a node has been NotReady for longer than pod-eviction-timeout, it evicts all pods on that node to other nodes; the actual eviction speed also depends on the eviction-rate settings, cluster size, and so on. The two most commonly used parameters are:

pod-eviction-timeout: once a node has been NotReady for longer than this, eviction starts; default 5 min

node-eviction-rate: the rate at which failed nodes are drained; default 0.1 nodes/second, i.e. pods are deleted from at most one node every 10 seconds
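
Both are kube-controller-manager flags; in a kubeadm cluster they can be set in its static pod manifest at /etc/kubernetes/manifests/kube-controller-manager.yaml, which kubelet re-reads automatically after editing. The values below are illustrative, not from the original setup:

```
spec:
  containers:
  - command:
    - kube-controller-manager
    - --pod-eviction-timeout=1m0s   # start evicting after 1 min NotReady (default 5m0s)
    - --node-eviction-rate=0.5      # drain up to 0.5 nodes per second (default 0.1)
```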

Create pods with a Deployment that maintains 3 replicas.

[root@master02 ~]# cat myapp_deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: stable
  template:
    metadata:
      labels:
        app: myapp
        release: stable
        env: test
    spec:
      containers:
      - name: myapp
        image: library/nginx
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
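
The manifest can then be applied and pod placement inspected:

```shell
kubectl apply -f myapp_deploy.yaml
# The NODE column shows where each replica landed.
kubectl get pods -l app=myapp -o wide
```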
           

The pods are spread across node01 and node02.

kubeadm實作k8s高可用叢集環境部署與配置

Shut down node02 to simulate an outage; node02's status becomes NotReady.

Once the node has been NotReady for longer than the configured timeout, its pods are evicted to the Ready nodes and the Deployment keeps 3 replicas running.

Troubleshooting

master node initialization fails

If initialization fails, run kubeadm reset and then initialize again.

[root@master01 ~]# kubeadm reset
#non-root users must additionally run rm -rf $HOME/.kube/config
           

flannel manifest download fails

Option 1: download the kube-flannel.yml file directly, then run apply against the local copy.

Option 2: configure name resolution.

Look up the server IP at https://site.ip138.com

echo "151.101.76.133 raw.githubusercontent.com" >>/etc/hosts

節點狀态NotReady

Run on the affected node:

journalctl -f -u kubelet

The kubelet log shows:

Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

This error means the network plugin is not ready. Run the following on the node

docker images|grep flannel

to check whether the flannel image has been pulled; this can take quite a long time.

If the flannel image still has not been pulled after a long time, resolve it as follows.

Use docker save on a master node to export the flannel image as a tar archive (or download the image from the official releases page https://github.com/coreos/flannel/releases and copy it to the host, making sure the versions match), then load it on the node with docker load:

[root@master02 ~]# docker save -o my_flannel.tar quay.io/coreos/flannel:v0.11.0-amd64
[root@master02 ~]# scp my_flannel.tar node01:/root
[root@node01 ~]# docker load < my_flannel.tar
           

unexpected token `$'do\r''

Running a shell script fails with: syntax error near unexpected token `$'do\r''

Cause: Windows and Linux line endings are incompatible.

Fix: convert the Windows CRLF line endings to Unix LF.

With notepad++: open the file, then Edit -> EOL Conversion -> Unix (LF) format, and save.
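
Without notepad++, the same conversion can be done on the Linux side; this sketch builds a small CRLF-damaged script purely for illustration and strips the carriage returns in place:

```shell
# A script saved with Windows CRLF endings reproduces the error.
printf 'for i in 1 2; do\r\necho $i\r\ndone\r\n' > demo.sh
sed -i 's/\r$//' demo.sh    # delete the trailing CR on every line
bash demo.sh                # now runs cleanly and prints 1 and 2
```

dos2unix demo.sh achieves the same result where that tool is installed.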
