
Deploying a Highly Available Kubernetes Cluster with kubeadm

Deployment Architecture

Deployment Architecture Overview

  • The core of a highly available Kubernetes deployment is a highly available master plane; kubectl, other clients, and the nodes reach the masters through a load balancer to achieve high availability.

Detailed Deployment Architecture

  • Kubernetes component overview
kube-apiserver: the core of the cluster; it is the cluster's API endpoint and the hub through which all components communicate, and it enforces cluster security controls.
etcd: the cluster's data store, holding the cluster configuration and state. It is critical: if its data is lost the cluster cannot be recovered, so the first step of an HA deployment is an HA etcd cluster.
kube-scheduler: the scheduling center for the cluster's Pods. With a default kubeadm installation the --leader-elect flag is already set to true, which guarantees that only one kube-scheduler in the master cluster is active at a time.
kube-controller-manager: the cluster state manager. When the actual state diverges from the desired state, kube-controller-manager works to bring the cluster back to the desired state; for example, when a pod dies it creates a new pod to restore the replica count expected by the corresponding ReplicaSet. With a default kubeadm installation the --leader-elect flag is already set to true, which guarantees that only one kube-controller-manager in the master cluster is active at a time.
kubelet: the Kubernetes node agent, responsible for talking to the Docker engine on each node.
kube-proxy: one instance per node, responsible for forwarding traffic from service VIPs to the endpoint pods, currently implemented mainly by programming iptables rules.
  • Load balancing
keepalived provides a virtual IP address that floats across k8s-master1, k8s-master2, and k8s-master3.
nginx load balances across the apiservers of k8s-master1, k8s-master2, and k8s-master3. External kubectl clients and the nodes can then reach the master cluster's apiservers through the keepalived virtual IP (192.168.60.80) and the nginx port (8443); a minimal client-side sketch follows.
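  • For example, once the cluster is up, an external kubectl client can reach the apiservers through the virtual IP and the nginx port. This is only a sketch: it assumes the cluster's admin.conf has been copied to the client machine, and the --server flag overrides the single-master address recorded in that file.
$ kubectl --kubeconfig=./admin.conf --server=https://192.168.60.80:8443 get nodes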

Host List

Hostname        IP address           Description            Components
k8s-master1     192.168.60.71        master node 1          keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster
k8s-master2     192.168.60.72        master node 2          keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster
k8s-master3     192.168.60.73        master node 3          keepalived, nginx, etcd, kubelet, kube-apiserver, kube-scheduler, kube-proxy, kube-dashboard, heapster
                192.168.60.80        keepalived virtual IP
k8s-node1 ~ 8   192.168.60.81 ~ 88   8 worker nodes         kubelet, kube-proxy

Pre-installation Preparation

Version Information

  • Linux version: CentOS 7.3.1611
cat /etc/redhat-release 
CentOS Linux release 7.3.1611 (Core) 
           
  • Docker version: 1.12.6
$ docker version
Client:
 Version: 1.12.6
 API version: 1.24
 Go version: go1.6.4
 Git commit: 78d1802
 Built: Tue Jan 10 20:20:01 2017
 OS/Arch: linux/amd64

Server:
 Version: 1.12.6
 API version: 1.24
 Go version: go1.6.4
 Git commit: 78d1802
 Built: Tue Jan 10 20:20:01 2017
 OS/Arch: linux/amd64
           
  • kubeadm version: v1.6.4
$ kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
           
  • kubelet version: v1.6.4
$ kubelet --version
Kubernetes v1.6.4
           

Required Docker Images

  • Inside mainland China you can use the DaoCloud accelerator to download the images, then use docker save and docker load to move the locally downloaded images onto the machines that make up the Kubernetes cluster. The DaoCloud accelerator is available at:
https://www.daocloud.io/mirror#accelerator-doc
  • On the local macOS machine, pull the required docker images
$ docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kube-proxy-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.6.4
$ docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1
$ docker pull quay.io/coreos/flannel:v0.7.1-amd64
$ docker pull gcr.io/google_containers/heapster-amd64:v1.3.0
$ docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
$ docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
$ docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
$ docker pull gcr.io/google_containers/etcd-amd64:3.0.17
$ docker pull gcr.io/google_containers/heapster-grafana-amd64:v4.0.2
$ docker pull gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1
$ docker pull nginx:latest
$ docker pull gcr.io/google_containers/pause-amd64:3.0
  • On the local macOS machine, clone the code and change into its directory
$ git clone https://github.com/cookeem/kubeadm-ha
$ cd kubeadm-ha
  • On the local macOS machine, save the docker images to files
$ mkdir -p docker-images
$ docker save -o docker-images/kube-apiserver-amd64 gcr.io/google_containers/kube-apiserver-amd64:v1.6.4
$ docker save -o docker-images/kube-proxy-amd64 gcr.io/google_containers/kube-proxy-amd64:v1.6.4
$ docker save -o docker-images/kube-controller-manager-amd64 gcr.io/google_containers/kube-controller-manager-amd64:v1.6.4
$ docker save -o docker-images/kube-scheduler-amd64 gcr.io/google_containers/kube-scheduler-amd64:v1.6.4
$ docker save -o docker-images/kubernetes-dashboard-amd64 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1
$ docker save -o docker-images/flannel quay.io/coreos/flannel:v0.7.1-amd64
$ docker save -o docker-images/heapster-amd64 gcr.io/google_containers/heapster-amd64:v1.3.0
$ docker save -o docker-images/k8s-dns-sidecar-amd64 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
$ docker save -o docker-images/k8s-dns-kube-dns-amd64 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
$ docker save -o docker-images/k8s-dns-dnsmasq-nanny-amd64 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
$ docker save -o docker-images/etcd-amd64 gcr.io/google_containers/etcd-amd64:3.0.17
$ docker save -o docker-images/heapster-grafana-amd64 gcr.io/google_containers/heapster-grafana-amd64:v4.0.2
$ docker save -o docker-images/heapster-influxdb-amd64 gcr.io/google_containers/heapster-influxdb-amd64:v1.1.1
$ docker save -o docker-images/pause-amd64 gcr.io/google_containers/pause-amd64:3.0
$ docker save -o docker-images/nginx nginx:latest
  • On the local macOS machine, copy the code and docker images to every node
$ scp -r * root@k8s-master1:/root/kubeadm-ha
$ scp -r * root@k8s-master2:/root/kubeadm-ha
$ scp -r * root@k8s-master3:/root/kubeadm-ha
$ scp -r * root@k8s-node1:/root/kubeadm-ha
$ scp -r * root@k8s-node2:/root/kubeadm-ha
$ scp -r * root@k8s-node3:/root/kubeadm-ha
$ scp -r * root@k8s-node4:/root/kubeadm-ha
$ scp -r * root@k8s-node5:/root/kubeadm-ha
$ scp -r * root@k8s-node6:/root/kubeadm-ha
$ scp -r * root@k8s-node7:/root/kubeadm-ha
$ scp -r * root@k8s-node8:/root/kubeadm-ha

System Settings

  • All of the following steps are performed as the root user on every Kubernetes node
  • On all Kubernetes nodes, add the Kubernetes yum repository
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
 https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
           
  • On all Kubernetes nodes, update the system
$ yum update -y            
  • On all Kubernetes nodes, disable the firewall
$ systemctl disable firewalld && systemctl stop firewalld && systemctl status firewalld            
  • On all Kubernetes nodes, set SELinux to permissive mode (see the note after this snippet for applying the change without a reboot)
$ vi /etc/selinux/config
SELINUX=permissive
           
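  • The change in /etc/selinux/config only takes effect after a reboot; to switch the running system to permissive mode right away as well, you can additionally run:
$ setenforce 0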
  • On all Kubernetes nodes, set the iptables bridge parameters, otherwise kubeadm init will report an error (a command to apply them immediately follows the snippet)
$ vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
           
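  • These kernel parameters are normally picked up during the reboot below; to apply them without waiting, you can run the following once the bridge kernel module is loaded (it is loaded automatically once docker is installed and has created the docker0 bridge):
$ sysctl --system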
  • Reboot all Kubernetes nodes
$ reboot            

Kubernetes Installation

Installing the Kubernetes Services

  • On all Kubernetes nodes, verify the SELinux mode; it must be permissive, otherwise Kubernetes will fail in various ways at startup
$ getenforce
Permissive
           
  • On all Kubernetes nodes, install and start Kubernetes (see the version-pinning note below)
$ yum install -y docker kubelet kubeadm kubernetes-cni
$ systemctl enable docker && systemctl start docker
$ systemctl enable kubelet && systemctl start kubelet
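  • A note on versions: the repository always installs the latest packages, which may be newer than the v1.6.4 used throughout this article. If you want to pin the versions, something along these lines should work (the exact version strings depend on what the repository ships):
$ yum install -y kubelet-1.6.4 kubeadm-1.6.4 kubectl-1.6.4 kubernetes-cni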

Importing the Docker Images

  • On all Kubernetes nodes, load the docker images
$ docker load -i /root/kubeadm-ha/docker-images/etcd-amd64
$ docker load -i /root/kubeadm-ha/docker-images/flannel
$ docker load -i /root/kubeadm-ha/docker-images/heapster-amd64
$ docker load -i /root/kubeadm-ha/docker-images/heapster-grafana-amd64
$ docker load -i /root/kubeadm-ha/docker-images/heapster-influxdb-amd64
$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-dnsmasq-nanny-amd64
$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-kube-dns-amd64
$ docker load -i /root/kubeadm-ha/docker-images/k8s-dns-sidecar-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-apiserver-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-controller-manager-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-proxy-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kubernetes-dashboard-amd64
$ docker load -i /root/kubeadm-ha/docker-images/kube-scheduler-amd64
$ docker load -i /root/kubeadm-ha/docker-images/pause-amd64
$ docker load -i /root/kubeadm-ha/docker-images/nginx
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/google_containers/kube-apiserver-amd64 v1.6.4 4e3810a19a64 5 weeks ago 150.6 MB
gcr.io/google_containers/kube-proxy-amd64 v1.6.4 e073a55c288b 5 weeks ago 109.2 MB
gcr.io/google_containers/kube-controller-manager-amd64 v1.6.4 0ea16a85ac34 5 weeks ago 132.8 MB
gcr.io/google_containers/kube-scheduler-amd64 v1.6.4 1fab9be555e1 5 weeks ago 76.75 MB
gcr.io/google_containers/kubernetes-dashboard-amd64 v1.6.1 71dfe833ce74 6 weeks ago 134.4 MB
quay.io/coreos/flannel v0.7.1-amd64 cd4ae0be5e1b 10 weeks ago 77.76 MB
gcr.io/google_containers/heapster-amd64 v1.3.0 f9d33bedfed3 3 months ago 68.11 MB
gcr.io/google_containers/k8s-dns-sidecar-amd64 1.14.1 fc5e302d8309 4 months ago 44.52 MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64 1.14.1 f8363dbf447b 4 months ago 52.36 MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 1.14.1 1091847716ec 4 months ago 44.84 MB
gcr.io/google_containers/etcd-amd64 3.0.17 243830dae7dd 4 months ago 168.9 MB
gcr.io/google_containers/heapster-grafana-amd64 v4.0.2 a1956d2a1a16 5 months ago 131.5 MB
gcr.io/google_containers/heapster-influxdb-amd64 v1.1.1 d3fccbedd180 5 months ago 11.59 MB
nginx latest 01f818af747d 6 months ago 181.6 MB
gcr.io/google_containers/pause-amd64 3.0 99e59f495ffa 14 months ago 746.9 kB
           

Initializing the First Master

Deploying the Standalone etcd Cluster

  • On the k8s-master1 node, start the etcd cluster member as a docker container
$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
--restart always \
-v /etc/ssl/certs:/etc/ssl/certs \
-v /var/lib/etcd-cluster:/var/lib/etcd \
-p 4001:4001 \
-p 2380:2380 \
-p 2379:2379 \
--name etcd \
gcr.io/google_containers/etcd-amd64:3.0.17 \
etcd --name=etcd0 \
--advertise-client-urls=http://192.168.60.71:2379,http://192.168.60.71:4001 \
--listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
--initial-advertise-peer-urls=http://192.168.60.71:2380 \
--listen-peer-urls=http://0.0.0.0:2380 \
--initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
--initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \
--initial-cluster-state=new \
--auto-tls \
--peer-auto-tls \
--data-dir=/var/lib/etcd
  • On the k8s-master2 node, start the etcd cluster member as a docker container
$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
--restart always \
-v /etc/ssl/certs:/etc/ssl/certs \
-v /var/lib/etcd-cluster:/var/lib/etcd \
-p 4001:4001 \
-p 2380:2380 \
-p 2379:2379 \
--name etcd \
gcr.io/google_containers/etcd-amd64:3.0.17 \
etcd --name=etcd1 \
--advertise-client-urls=http://192.168.60.72:2379,http://192.168.60.72:4001 \
--listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
--initial-advertise-peer-urls=http://192.168.60.72:2380 \
--listen-peer-urls=http://0.0.0.0:2380 \
--initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
--initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \
--initial-cluster-state=new \
--auto-tls \
--peer-auto-tls \
--data-dir=/var/lib/etcd
  • On the k8s-master3 node, start the etcd cluster member as a docker container
$ docker stop etcd && docker rm etcd
$ rm -rf /var/lib/etcd-cluster
$ mkdir -p /var/lib/etcd-cluster
$ docker run -d \
--restart always \
-v /etc/ssl/certs:/etc/ssl/certs \
-v /var/lib/etcd-cluster:/var/lib/etcd \
-p 4001:4001 \
-p 2380:2380 \
-p 2379:2379 \
--name etcd \
gcr.io/google_containers/etcd-amd64:3.0.17 \
etcd --name=etcd2 \
--advertise-client-urls=http://192.168.60.73:2379,http://192.168.60.73:4001 \
--listen-client-urls=http://0.0.0.0:2379,http://0.0.0.0:4001 \
--initial-advertise-peer-urls=http://192.168.60.73:2380 \
--listen-peer-urls=http://0.0.0.0:2380 \
--initial-cluster-token=9477af68bbee1b9ae037d6fd9e7efefd \
--initial-cluster=etcd0=http://192.168.60.71:2380,etcd1=http://192.168.60.72:2380,etcd2=http://192.168.60.73:2380 \
--initial-cluster-state=new \
--auto-tls \
--peer-auto-tls \
--data-dir=/var/lib/etcd
  • On k8s-master1, k8s-master2, and k8s-master3, check that etcd started correctly (an optional quorum check follows this session)
$ docker exec -ti etcd ash

$ etcdctl member list
1a32c2d3f1abcad0: name=etcd2 peerURLs=http://192.168.60.73:2380 clientURLs=http://192.168.60.73:2379,http://192.168.60.73:4001 isLeader=false
1da4f4e8b839cb79: name=etcd1 peerURLs=http://192.168.60.72:2380 clientURLs=http://192.168.60.72:2379,http://192.168.60.72:4001 isLeader=false
4238bcb92d7f2617: name=etcd0 peerURLs=http://192.168.60.71:2380 clientURLs=http://192.168.60.71:2379,http://192.168.60.71:4001 isLeader=true

$ etcdctl cluster-health
member 1a32c2d3f1abcad0 is healthy: got healthy result from http://192.168.60.73:2379
member 1da4f4e8b839cb79 is healthy: got healthy result from http://192.168.60.72:2379
member 4238bcb92d7f2617 is healthy: got healthy result from http://192.168.60.71:2379
cluster is healthy

$ exit
           
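  • Optionally, you can also write and read a throwaway key through any member to confirm that the cluster has quorum; etcdctl in this image defaults to the etcd v2 API, so set/get/rm are available:
$ docker exec -ti etcd ash
$ etcdctl set /ha-test ok
$ etcdctl get /ha-test
$ etcdctl rm /ha-test
$ exit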

kubeadm Initialization

  • On k8s-master1, edit the kubeadm-init.yaml file and set the etcd.endpoints ${HOST_IP} entries to the IP addresses of k8s-master1, k8s-master2, and k8s-master3
$ vi /root/kubeadm-ha/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.6.4
networking:
 podSubnet: 10.244.0.0/16
etcd:
 endpoints:
 - http://192.168.60.71:2379
 - http://192.168.60.72:2379
 - http://192.168.60.73:2379
           
  • When initializing the cluster with kubeadm, the startup may hang at the line below; this is usually caused by the kubelet cgroup-driver setting not matching docker's
  • [apiclient] Created API client, waiting for the control plane to become ready
  • Inspect the logs with journalctl -t kubelet -S '2017-06-08'; the following error appears
  • error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd"
  • Change KUBELET_CGROUP_ARGS=--cgroup-driver=systemd to KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs, then reload systemd and restart kubelet as shown after the snippet
$ vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
           
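  • After editing the drop-in file, reload systemd and restart kubelet so that the new cgroup driver takes effect:
$ systemctl daemon-reload && systemctl restart kubelet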
  • On k8s-master1, use kubeadm to initialize the Kubernetes cluster against the external etcd cluster
$ kubeadm init --config=/root/kubeadm-ha/kubeadm-init.yaml            
  • On k8s-master1, set the KUBECONFIG environment variable so that kubectl can talk to the apiserver (a quick connectivity check follows)
$ vi ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf

$ source ~/.bashrc            
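  • Optionally, confirm that kubectl can now reach the apiserver; both commands below are standard kubectl sub-commands, and the exact output depends on your environment:
$ kubectl get componentstatuses
$ kubectl get nodes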

Installing the flannel Network Component

  • On k8s-master1, install the flannel pod network add-on. A network add-on must be installed, otherwise the kube-dns pod will stay in the ContainerCreating state forever
$ kubectl create -f /root/kubeadm-ha/kube-flannel
clusterrole "flannel" created
clusterrolebinding "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
           
  • On k8s-master1, verify that kube-dns starts successfully; after roughly 3 minutes all pods should be in the Running state
$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system kube-apiserver-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1
kube-system kube-controller-manager-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1
kube-system kube-dns-3913472980-k9mt6 3/3 Running 0 4m 10.244.0.104 k8s-master1
kube-system kube-flannel-ds-3hhjd 2/2 Running 0 1m 192.168.60.71 k8s-master1
kube-system kube-proxy-rzq3t 1/1 Running 0 4m 192.168.60.71 k8s-master1
kube-system kube-scheduler-k8s-master1 1/1 Running 0 3m 192.168.60.71 k8s-master1
           

Installing the Dashboard Component

  • On k8s-master1, install the dashboard component
$ kubectl create -f /root/kubeadm-ha/kube-dashboard/
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
           
  • On k8s-master1, start kubectl proxy, binding it to 0.0.0.0
$ kubectl proxy --address='0.0.0.0' &            
  • On the local macOS machine, open the dashboard URL to verify that the dashboard started successfully (the note after the URL shows how to confirm the port)
http://k8s-master1:30000            
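  • Port 30000 is presumably the NodePort exposed by the kubernetes-dashboard service created above; if your copy of the manifests uses a different port, you can look it up with:
$ kubectl get svc -n kube-system kubernetes-dashboard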

Installing the Heapster Component

  • On k8s-master1, allow pods to be scheduled on the masters, otherwise heapster cannot be deployed
$ kubectl taint nodes --all node-role.kubernetes.io/master-
node "k8s-master1" tainted
           
  • On k8s-master1, install the heapster component for performance monitoring
$ kubectl create -f /root/kubeadm-ha/kube-heapster            
  • On k8s-master1, restart the docker and kubelet services so that heapster data shows up in the dashboard
$ systemctl restart docker kubelet            
  • On k8s-master1, check the pod status
$ kubectl get all --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
kube-system heapster-783524908-kn6jd 1/1 Running 1 9m 10.244.0.111 k8s-master1
kube-system kube-apiserver-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1
kube-system kube-controller-manager-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1
kube-system kube-dns-3913472980-k9mt6 3/3 Running 3 16m 10.244.0.110 k8s-master1
kube-system kube-flannel-ds-3hhjd 2/2 Running 3 13m 192.168.60.71 k8s-master1
kube-system kube-proxy-rzq3t 1/1 Running 1 16m 192.168.60.71 k8s-master1
kube-system kube-scheduler-k8s-master1 1/1 Running 1 15m 192.168.60.71 k8s-master1
kube-system kubernetes-dashboard-2039414953-d46vw 1/1 Running 1 11m 10.244.0.109 k8s-master1
kube-system monitoring-grafana-3975459543-8l94z 1/1 Running 1 9m 10.244.0.112 k8s-master1
kube-system monitoring-influxdb-3480804314-72ltf 1/1 Running 1 9m 10.244.0.113 k8s-master1
           
  • On the local macOS machine, open the dashboard URL again to verify that heapster started successfully and that CPU and memory metrics are shown for the Pods
http://k8s-master1:30000            
  • At this point the first master has been installed successfully, with flannel, dashboard, and heapster deployed

Master Cluster High Availability Setup

Copying the Configuration

  • On k8s-master1, copy /etc/kubernetes/ to k8s-master2 and k8s-master3
scp -r /etc/kubernetes/ k8s-master2:/etc/
scp -r /etc/kubernetes/ k8s-master3:/etc/            
  • On k8s-master2 and k8s-master3, restart the kubelet service and check that its status is active (running)
$ systemctl daemon-reload && systemctl restart kubelet

$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
 Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
 Drop-In: /etc/systemd/system/kubelet.service.d
 └─10-kubeadm.conf
 Active: active (running) since Tue 2017-06-27 16:24:22 CST; 1 day 17h ago
 Docs: http://kubernetes.io/docs/
 Main PID: 2780 (kubelet)
 Memory: 92.9M
 CGroup: /system.slice/kubelet.service
 ├─2780 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-...
 └─2811 journalctl -k -f
           
  • On k8s-master2 and k8s-master3, set the KUBECONFIG environment variable so that kubectl can talk to the apiserver
$ vi ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf

$ source ~/.bashrc            
  • On k8s-master2 and k8s-master3, check the node status; the new nodes have already joined the cluster
$ kubectl get nodes -o wide
NAME STATUS AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION
k8s-master1 Ready 26m v1.6.4 <none> CentOS Linux 7 (Core) 3.10.0-514.6.1.el7.x86_64
k8s-master2 Ready 2m v1.6.4 <none> CentOS Linux 7 (Core) 3.10.0-514.21.1.el7.x86_64
k8s-master3 Ready 2m v1.6.4 <none> CentOS Linux 7 (Core) 3.10.0-514.21.1.el7.x86_64
           
  • On k8s-master2 and k8s-master3, edit kube-apiserver.yaml and change ${HOST_IP} to the local host's IP address
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
 - --advertise-address=${HOST_IP}
           
  • On k8s-master2 and k8s-master3, edit kubelet.conf and change ${HOST_IP} to the local host's IP address
$ vi /etc/kubernetes/kubelet.conf
server: https://${HOST_IP}:6443
           
  • On k8s-master2 and k8s-master3, restart the services
$ systemctl daemon-reload && systemctl restart docker kubelet            

Recreating the Certificates

  • After kubelet.conf is modified on k8s-master2 and k8s-master3, the kubelet service exits abnormally, because the certificate and key it references do not match the local IP address; the apiserver certificate and key have to be regenerated. Inspecting the signing information of apiserver.crt shows that the IP addresses and DNS names are bound to k8s-master1, so they must be changed accordingly.
$ openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt
Certificate:
 Data:
 Version: 3 (0x2)
 Serial Number: 9486057293403496063 (0x83a53ed95c519e7f)
 Signature Algorithm: sha1WithRSAEncryption
 Issuer: CN=kubernetes
 Validity
 Not Before: Jun 22 16:22:44 2017 GMT
 Not After : Jun 22 16:22:44 2018 GMT
 Subject: CN=kube-apiserver,
 Subject Public Key Info:
 Public Key Algorithm: rsaEncryption
 Public-Key: (2048 bit)
 Modulus: d0:10:4a:3b:c4:62:5d:ae:f8:f1:16:48:b3:77:6b: 53:4b
 Exponent: 65537 (0x10001)
 X509v3 extensions:
 X509v3 Subject Alternative Name: DNS:k8s-master1, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.60.71
 Signature Algorithm: sha1WithRSAEncryption
 dd:68:16:f9:11:be:c3:3c:be:89:9f:14:60:6b:e0:47:c7:91:
 9e:78:ab:ce            
  • On k8s-master1, k8s-master2, and k8s-master3, use ca.key and ca.crt to create new apiserver.crt and apiserver.key files
$ mkdir -p /etc/kubernetes/pki-local 
$ cd /etc/kubernetes/pki-local            
  • On k8s-master1, k8s-master2, and k8s-master3, generate a 2048-bit key pair
$ openssl genrsa -out apiserver.key 2048            
  • On k8s-master1, k8s-master2, and k8s-master3, generate the certificate signing request
$ openssl req -new -key apiserver.key -subj "/CN=kube-apiserver," -out apiserver.csr            
  • On k8s-master1, k8s-master2, and k8s-master3, edit the apiserver.ext file; replace ${HOST_NAME} with the local hostname, ${HOST_IP} with the local IP address, and ${VIRTUAL_IP} with the keepalived virtual IP (192.168.60.80)
$ vi apiserver.ext
subjectAltName = DNS:${HOST_NAME},DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP:10.96.0.1, IP:${HOST_IP}, IP:${VIRTUAL_IP}
           
  • On k8s-master1, k8s-master2, and k8s-master3, sign the request with ca.key and ca.crt
$ openssl x509 -req -in apiserver.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out apiserver.crt -days 365 -extfile /etc/kubernetes/pki-local/apiserver.ext
           
  • On k8s-master1, k8s-master2, and k8s-master3, inspect the newly generated certificate:
$ openssl x509 -noout -text -in apiserver.crt
Certificate:
 Data:
 Version: 3 (0x2)
 Serial Number: 9486057293403496063 (0x83a53ed95c519e7f)
 Signature Algorithm: sha1WithRSAEncryption
 Issuer: CN=kubernetes
 Validity
 Not Before: Jun 22 16:22:44 2017 GMT
 Not After : Jun 22 16:22:44 2018 GMT
 Subject: CN=kube-apiserver,
 Subject Public Key Info:
 Public Key Algorithm: rsaEncryption
 Public-Key: (2048 bit)
 Modulus: d0:10:4a:3b:c4:62:5d:ae:f8:f1:16:48:b3:77:6b: 53:4b
 Exponent: 65537 (0x10001)
 X509v3 extensions:
 X509v3 Subject Alternative Name: DNS:k8s-master3, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, IP Address:10.96.0.1, IP Address:192.168.60.73, IP Address:192.168.60.80
 Signature Algorithm: sha1WithRSAEncryption
 dd:68:16:f9:11:be:c3:3c:be:89:9f:14:60:6b:e0:47:c7:91:
 9e:78:ab:ce            
  • On k8s-master1, k8s-master2, and k8s-master3, copy apiserver.crt and apiserver.key into the /etc/kubernetes/pki directory
$ cp apiserver.crt apiserver.key /etc/kubernetes/pki/            

Updating the Configuration

  • On k8s-master2 and k8s-master3, edit admin.conf and change ${HOST_IP} to the local IP address
$ vi /etc/kubernetes/admin.conf
 server: https://${HOST_IP}:6443
           
  • On k8s-master2 and k8s-master3, edit controller-manager.conf and change ${HOST_IP} to the local IP address
$ vi /etc/kubernetes/controller-manager.conf
 server: https://${HOST_IP}:6443
           
  • On k8s-master2 and k8s-master3, edit scheduler.conf and change ${HOST_IP} to the local IP address
$ vi /etc/kubernetes/scheduler.conf
 server: https://${HOST_IP}:6443
           
  • On k8s-master1, k8s-master2, and k8s-master3, restart all the services
$ systemctl daemon-reload && systemctl restart docker kubelet            

Verifying the High Availability Installation

  • On any one of k8s-master1, k8s-master2, or k8s-master3, check the running services; apiserver, controller-manager, kube-scheduler, kube-proxy, and flannel are now running successfully on all three masters
$ kubectl get pod --all-namespaces -o wide | grep k8s-master2
kube-system kube-apiserver-k8s-master2 1/1 Running 1 55s 192.168.60.72 k8s-master2
kube-system kube-controller-manager-k8s-master2 1/1 Running 2 18m 192.168.60.72 k8s-master2
kube-system kube-flannel-ds-t8gkh 2/2 Running 4 18m 192.168.60.72 k8s-master2
kube-system kube-proxy-bpgqw 1/1 Running 1 18m 192.168.60.72 k8s-master2
kube-system kube-scheduler-k8s-master2 1/1 Running 2 18m 192.168.60.72 k8s-master2

$ kubectl get pod --all-namespaces -o wide | grep k8s-master3
kube-system kube-apiserver-k8s-master3 1/1 Running 1 1m 192.168.60.73 k8s-master3
kube-system kube-controller-manager-k8s-master3 1/1 Running 2 18m 192.168.60.73 k8s-master3
kube-system kube-flannel-ds-tmqmx 2/2 Running 4 18m 192.168.60.73 k8s-master3
kube-system kube-proxy-4stg3 1/1 Running 1 18m 192.168.60.73 k8s-master3
kube-system kube-scheduler-k8s-master3 1/1 Running 2 18m 192.168.60.73 k8s-master3
           
  • On any one of k8s-master1, k8s-master2, or k8s-master3, use kubectl logs to check the leader election results of the controller-managers and schedulers; only one instance of each should hold the lease, which shows that leader election is working
$ kubectl logs -n kube-system kube-controller-manager-k8s-master1
$ kubectl logs -n kube-system kube-controller-manager-k8s-master2
$ kubectl logs -n kube-system kube-controller-manager-k8s-master3
$ kubectl logs -n kube-system kube-scheduler-k8s-master1
$ kubectl logs -n kube-system kube-scheduler-k8s-master2
$ kubectl logs -n kube-system kube-scheduler-k8s-master3
  • On any one of k8s-master1, k8s-master2, or k8s-master3, check the deployments
$ kubectl get deploy --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kube-system heapster 1 1 1 1 41m
kube-system kube-dns 1 1 1 1 48m
kube-system kubernetes-dashboard 1 1 1 1 43m
kube-system monitoring-grafana 1 1 1 1 41m
kube-system monitoring-influxdb 1 1 1 1 41m            
  • On any one of k8s-master1, k8s-master2, or k8s-master3, scale kube-dns, kubernetes-dashboard, heapster, monitoring-grafana, and monitoring-influxdb up to replicas=3 so that every master node runs a copy
$ kubectl scale --replicas=3 -n kube-system deployment/kube-dns
$ kubectl get pods --all-namespaces -o wide | grep kube-dns
$ kubectl scale --replicas=3 -n kube-system deployment/kubernetes-dashboard
$ kubectl get pods --all-namespaces -o wide | grep kubernetes-dashboard
$ kubectl scale --replicas=3 -n kube-system deployment/heapster
$ kubectl get pods --all-namespaces -o wide | grep heapster
$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-grafana
$ kubectl get pods --all-namespaces -o wide | grep monitoring-grafana
$ kubectl scale --replicas=3 -n kube-system deployment/monitoring-influxdb
$ kubectl get pods --all-namespaces -o wide | grep monitoring-influxdb

Installing and Configuring keepalived

  • On k8s-master1, k8s-master2, and k8s-master3, install keepalived
$ yum install -y keepalived 
$ systemctl enable keepalived && systemctl restart keepalived            
  • On k8s-master1, k8s-master2, and k8s-master3, back up the keepalived configuration file
$ mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak            
  • On k8s-master1, k8s-master2, and k8s-master3, create the apiserver health-check script; when the apiserver check fails it stops the keepalived service so that the virtual IP address fails over to another node (a way to test the script by hand follows the listing)
$ vi /etc/keepalived/check_apiserver.sh
#!/bin/bash
err=0
for k in $( seq 1 10 )
do
 check_code=$(ps -ef|grep kube-apiserver | wc -l)
 if [ "$check_code" = "1" ]; then
 err=$(expr $err + 1)
 sleep 5
 continue
 else
 err=0
 break
 fi
done

if [ "$err" != "0" ]; then
 echo "systemctl stop keepalived"
 /usr/bin/systemctl stop keepalived
 exit 1
else
 exit 0
fi

$ chmod a+x /etc/keepalived/check_apiserver.sh
           
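  • You can exercise the script by hand before relying on it: with kube-apiserver running it should return immediately with exit code 0, and with the apiserver stopped it retries for roughly 50 seconds, stops keepalived, and exits 1:
$ bash /etc/keepalived/check_apiserver.sh; echo $?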
  • On k8s-master1, k8s-master2, and k8s-master3, find the name of the network interface
$ ip a | grep 192.168.60            
  • On k8s-master1, k8s-master2, and k8s-master3, configure keepalived. The parameters are as follows:
  • state ${STATE}: MASTER or BACKUP; only one node may be MASTER
  • interface ${INTERFACE_NAME}: the name of the local interface to bind to (found with the ip a command above)
  • mcast_src_ip ${HOST_IP}: the local IP address
  • priority ${PRIORITY}: the priority, for example 102, 101, 100; the higher the priority, the more likely the node is elected MASTER, and no two nodes may use the same priority
  • ${VIRTUAL_IP}: the virtual IP address, set here to 192.168.60.80
$ vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
 router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
 script "/etc/keepalived/check_apiserver.sh"
 interval 2
 weight -5
 fall 3 
 rise 2
}
vrrp_instance VI_1 {
 state ${STATE}
 interface ${INTERFACE_NAME}
 mcast_src_ip ${HOST_IP}
 virtual_router_id 51
 priority ${PRIORITY}
 advert_int 2
 authentication {
 auth_type PASS
 auth_pass 4be37dc3b4c90194d1600c483e10ad1d
 }
 virtual_ipaddress {
 ${VIRTUAL_IP}
 }
 track_script {
 chk_apiserver
 }
}
           
  • On k8s-master1, k8s-master2, and k8s-master3, restart the keepalived service and check that the virtual IP address answers (the command shown after the ping tells you which node holds it)
$ systemctl restart keepalived
$ ping 192.168.60.80
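  • To see which node currently holds the virtual IP, check whether the address is bound to a local interface:
$ ip a | grep 192.168.60.80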

Configuring nginx Load Balancing

  • On k8s-master1, k8s-master2, and k8s-master3, edit the nginx-default.conf settings and replace each ${HOST_IP} with the addresses of k8s-master1, k8s-master2, and k8s-master3. nginx listens on port 8443 and load balances the traffic across the apiservers on port 6443
$ vi /root/kubeadm-ha/nginx-default.conf
stream {
 upstream apiserver {
 server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s;
 server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s;
 server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s;
 }

 server {
 listen 8443;
 proxy_connect_timeout 1s;
 proxy_timeout 3s;
 proxy_pass apiserver;
 }
}
           
  • On k8s-master1, k8s-master2, and k8s-master3, start the nginx container
$ docker run -d -p 8443:8443 \
--name nginx-lb \
--restart always \
-v /root/kubeadm-ha/nginx-default.conf:/etc/nginx/nginx.conf \
nginx
           
  • On k8s-master1, k8s-master2, and k8s-master3, check that the keepalived virtual IP address forwards to an apiserver
$ curl -L 192.168.60.80:8443 | wc -l
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    14    0    14    0     0  18324      0 --:--:-- --:--:-- --:--:-- 14000
1
           
  • Be sure to restart keepalived once the apiserver has recovered, otherwise keepalived stays stopped (the check script stops it on failure)
$ systemctl restart keepalived            
  • On k8s-master1, k8s-master2, and k8s-master3, inspect the keepalived log; the following output shows which host currently holds the virtual IP address
$ systemctl status keepalived -l
VRRP_Instance(VI_1) Sending gratuitous ARPs on ens160 for 192.168.60.80
           

Configuring kube-proxy

  • On k8s-master1, configure kube-proxy to use the keepalived virtual IP address, so that the kube-proxy instances on all nodes do not lose their apiserver connection when k8s-master1 fails. First list the configmaps
$ kubectl get -n kube-system configmap
NAME DATA AGE
extension-apiserver-authentication 6 4h
kube-flannel-cfg 2 4h
kube-proxy 1 4h
           
  • On k8s-master1, edit configmap/kube-proxy and point the server setting at the keepalived virtual IP address
$ kubectl edit -n kube-system configmap/kube-proxy
 server: https://192.168.60.80:8443
           
  • On k8s-master1, review the configmap/kube-proxy settings
$ kubectl get -n kube-system configmap/kube-proxy -o yaml            
  • On k8s-master1, delete all kube-proxy pods so that they are recreated with the new configuration; list them first (a one-liner for deleting them in bulk is sketched below)
$ kubectl get pods --all-namespaces -o wide | grep proxy
           
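  • A minimal sketch for deleting all kube-proxy pods in one go, assuming they are named kube-proxy-* as in the listings above; the kube-proxy DaemonSet recreates them and the new pods pick up the updated configmap:
$ kubectl get pods -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl delete pod -n kube-system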
  • On k8s-master1, k8s-master2, and k8s-master3, restart the docker, kubelet, and keepalived services
$ systemctl restart docker kubelet keepalived            

Verifying Master Cluster High Availability

  • On k8s-master1, check the pod status of every master node. Each master should be running heapster, kube-apiserver, kube-controller-manager, kube-dns, kube-flannel, kube-proxy, kube-scheduler, kubernetes-dashboard, monitoring-grafana, and monitoring-influxdb, and all pods should be in the Running state
$ kubectl get pods --all-namespaces -o wide | grep k8s-master1
$ kubectl get pods --all-namespaces -o wide | grep k8s-master2
$ kubectl get pods --all-namespaces -o wide | grep k8s-master3

Joining Worker Nodes to the HA Cluster

Joining the Cluster with kubeadm

  • On k8s-master1, mark all master nodes as unschedulable so that workloads are not deployed on them (a shorter alternative follows these commands)
$ kubectl patch node k8s-master1 -p '{"spec":{"unschedulable":true}}' 
$ kubectl patch node k8s-master2 -p '{"spec":{"unschedulable":true}}' 
$ kubectl patch node k8s-master3 -p '{"spec":{"unschedulable":true}}'            
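  • The same effect can be achieved with kubectl's built-in cordon command, which sets the same unschedulable flag:
$ kubectl cordon k8s-master1
$ kubectl cordon k8s-master2
$ kubectl cordon k8s-master3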
  • On k8s-master1, look up the cluster's token
$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION
xxxxxx.yyyyyy <forever> <never> authentication,signing The default bootstrap token generated by 'kubeadm init'            
  • On k8s-node1 through k8s-node8, join the cluster; ${TOKEN} is the token shown on k8s-master1 and ${VIRTUAL_IP} is the keepalived virtual IP address 192.168.60.80
$ kubeadm join --token ${TOKEN} ${VIRTUAL_IP}:8443            

Deploying an Application to Verify the Cluster

  • On k8s-node1 through k8s-node8, check the kubelet status; active (running) means the kubelet service started correctly
$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
 Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
 Drop-In: /etc/systemd/system/kubelet.service.d
 └─10-kubeadm.conf
 Active: active (running) since Tue 2017-06-27 16:23:43 CST; 1 day 18h ago
 Docs: http://kubernetes.io/docs/
 Main PID: 1146 (kubelet)
 Memory: 204.9M
 CGroup: /system.slice/kubelet.service
 ├─ 1146 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require...
 ├─ 2553 journalctl -k -f
 ├─ 4988 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl...
 └─14720 /usr/sbin/glusterfs --log-level=ERROR --log-file=/var/lib/kubelet/pl...
           
  • On k8s-master1, check the node status; all k8s-node nodes have joined successfully
$ kubectl get nodes -o wide
NAME STATUS AGE VERSION
k8s-master1 Ready,SchedulingDisabled 5h v1.6.4
k8s-master2 Ready,SchedulingDisabled 4h v1.6.4
k8s-master3 Ready,SchedulingDisabled 4h v1.6.4
k8s-node1 Ready 6m v1.6.4
k8s-node2 Ready 4m v1.6.4
k8s-node3 Ready 4m v1.6.4
k8s-node4 Ready 3m v1.6.4
k8s-node5 Ready 3m v1.6.4
k8s-node6 Ready 3m v1.6.4
k8s-node7 Ready 3m v1.6.4
k8s-node8 Ready 3m v1.6.4
           
  • On k8s-master1, deploy an nginx service as a test; it gets scheduled onto k8s-node5
$ kubectl run nginx --image=nginx --port=80
deployment "nginx" created

$ kubectl get pod -o wide -l=run=nginx
NAME READY STATUS RESTARTS AGE IP NODE
nginx-2662403697-pbmwt 1/1 Running 0 5m 10.244.7.6 k8s-node5
           
  • On k8s-master1, expose the nginx service so it is reachable from outside
$ kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service "nginx" exposed

$ kubectl get svc -l=run=nginx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx 10.105.151.69 <nodes> 80:31639/TCP 43s

$ curl k8s-master2:31639
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
 body {
  width: 35em;
  margin: 0 auto;
  font-family: Tahoma, Verdana, Arial, sans-serif;
 }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
  • At this point, the highly available Kubernetes cluster has been deployed successfully

This article was republished from OSChina.