K8s HA Installation with kubeadm (Final Version)
1 Host inventory
Hostname | CentOS version | IP            | Docker version
master01 | 7.6.1810       | 192.168.59.11 | 18.09.9
master02 | 7.6.1810       | 192.168.59.12 | 18.09.9
master03 | 7.6.1810       | 192.168.59.13 | 18.09.9
work01   | 7.6.1810       | 192.168.59.21 | 18.09.9
work02   | 7.6.1810       | 192.168.59.22 | 18.09.9
VIP      | -              | 192.168.59.10 | -
There are 5 machines in total: 3 masters and 2 workers.
2 Kubernetes version
kubelet version | kubeadm version | kubectl version
v1.16.4         | v1.16.4         | v1.16.4 (optional on workers)
3 HA architecture
![](https://img.laitimes.com/img/9ZDMuAjOiMmIsIjOiQnIsIyZuBnLmJTZ5ImYkFjMjhTMtQ2MlFWL0YTY00SY1cDOtgjZ3YTMycTYtIjN2cDO5kzM5MDM2EzLchTM1YDO3IzLcdmbw9CXwIDMy8CXw8CXlVXc1l3Lc12bj5yayFGbu5ibkN2Lc9CX6MHc0RHaiojIsJye.png)
The cluster is built with kubeadm; the apiserver is fronted by a VIP and the kubelets connect to that VIP, which provides the high availability.
4 HA design
Core component     | HA mode        | Implemented by
apiserver          | active/standby | keepalived
controller-manager | active/standby | leader election
scheduler          | active/standby | leader election
etcd               | cluster        | kubeadm
- apiserver: made highly available through keepalived; when a node fails, keepalived moves the VIP to another node.
- controller-manager: Kubernetes elects a leader internally (controlled by the --leader-elect flag, true by default); only one controller-manager instance is active in the cluster at any moment.
- scheduler: Kubernetes elects a leader internally (controlled by the --leader-elect flag, true by default); only one scheduler instance is active in the cluster at any moment.
- etcd: kubeadm automatically builds the etcd cluster; deploy an odd number of nodes - a 3-node cluster tolerates at most one machine failure.
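The "3 nodes tolerate one failure" claim follows from etcd's quorum rule: a cluster of n members stays available only while floor(n/2)+1 members are alive, so it tolerates floor((n-1)/2) failures. A quick shell sketch of the arithmetic:

```shell
# Quorum math for etcd cluster sizes: n members tolerate (n-1)/2 failures
for n in 1 3 5 7; do
  echo "$n members: quorum $(( n/2 + 1 )), tolerates $(( (n-1)/2 )) failure(s)"
done
```

This also shows why even sizes buy nothing: 4 members still tolerate only 1 failure, same as 3.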
5 Installation prerequisites
Run the following on all machines: master01, master02, master03, work01, work02.
5.1 Set the hostnames
Run the matching command on each node:
hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname work01
hostnamectl set-hostname work02
systemctl restart systemd-hostnamed
5.2 Update the hosts file
cat >> /etc/hosts << EOF
192.168.59.11 master01
192.168.59.12 master02
192.168.59.13 master03
192.168.59.21 work01
192.168.59.22 work02
EOF
5.3 Configure DNS
cat >> /etc/resolv.conf << EOF
nameserver 114.114.114.114
nameserver 8.8.8.8
EOF
5.4 Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Take effect immediately without a reboot
setenforce 0
5.5 Disable the firewall
# Check the firewall status
systemctl status firewalld.service
# If the status shows it running, it needs to be stopped
Active: active (running)
# Stop the firewall
systemctl stop firewalld
# Check the status again to confirm
systemctl status firewalld.service
# Disable it at boot so it does not start again
systemctl disable firewalld
5.6 Reset iptables to empty rules
# Install iptables-services
yum -y install iptables-services
# Start the iptables service
systemctl start iptables
# Enable it at boot
systemctl enable iptables
# Flush all rules and save
iptables -F
service iptables save
5.7 Disable swap
sed -i.bak '/swap/s/^/#/' /etc/fstab
swapoff -a
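To preview what the sed line does before touching the real /etc/fstab, you can run it against a throwaway copy (the /tmp path below is purely illustrative):

```shell
# Demo on a scratch file, not the real /etc/fstab
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Comments out every line containing "swap"; keeps a backup as fstab.demo.bak
sed -i.bak '/swap/s/^/#/' /tmp/fstab.demo
grep swap /tmp/fstab.demo   # -> #/dev/mapper/centos-swap swap swap defaults 0 0
```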
5.8 Load kernel modules
The flannel network used by k8s requires the kernel's br_netfilter module.
# Check whether the module is loaded
lsmod | grep br_netfilter
# Load it now
modprobe br_netfilter
# Load it persistently across reboots
cat > /etc/rc.sysinit << 'EOF'
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
  [ -x $file ] && $file
done
EOF
cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
chmod 755 /etc/sysconfig/modules/br_netfilter.modules
5.9 Set kernel parameters
cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
# Apply the settings
sysctl --system
5.10 Install common packages
yum -y install epel-release vim net-tools git gcc gcc-c++ glibc htop atop iftop iotop nethogs lrzsz telnet ipvsadm ipset conntrack libnl-devel libnfnetlink-devel openssl openssl-devel ntpdate ntp jq iptables curl sysstat libseccomp wget
5.11 Sync the time
yum install ntpdate -y && /usr/sbin/ntpdate -u ntp1.aliyun.com
5.12 Stop unneeded services
systemctl stop postfix && systemctl disable postfix
5.13 Configure the k8s yum repo
5.13.1 Add the Alibaba Cloud yum repo
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- [] the string in square brackets is the repository id; it must be unique and identifies the repo
- name: repository name, user-defined
- baseurl: repository URL
- enabled: whether the repo is enabled; 1 (the default) means enabled
- gpgcheck: whether to verify the signatures of packages from this repo; 1 means verify
- repo_gpgcheck: whether to verify the repo metadata (the package list); 1 means verify
- gpgkey=URL: location of the public key file for signature checking; required when gpgcheck is 1, unnecessary when gpgcheck is 0
5.13.2 Refresh the cache
yum clean all && yum -y makecache
5.14 Passwordless SSH
Configure passwordless SSH on master01 for convenient access to master02 and master03.
[root@master01 ~]#ssh-keygen -t rsa
[root@master01 ~]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@master02
[root@master01 ~]#ssh-copy-id -i /root/.ssh/id_rsa.pub root@master03
6 Install Docker
Install on master01, master02, master03, work01, and work02.
6.1 Install dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
6.2 Add the Docker repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
6.3 List available docker-ce versions
yum list docker-ce --showduplicates | sort -r
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Available Packages
docker-ce.x86_64 3:19.03.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.13-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.12-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.11-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.10-3.el7 docker-ce-stable
docker-ce.x86_64 3:19.03.0-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.9-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.8-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.7-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.6-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.5-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.4-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.3-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.2-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.1-3.el7 docker-ce-stable
docker-ce.x86_64 3:18.09.0-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.3.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.2.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.1.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.06.0.ce-3.el7 docker-ce-stable
docker-ce.x86_64 18.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 18.03.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.12.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.09.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.06.0.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.3.ce-1.el7 docker-ce-stable
docker-ce.x86_64 17.03.2.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.1.ce-1.el7.centos docker-ce-stable
docker-ce.x86_64 17.03.0.ce-1.el7.centos docker-ce-stable
6.4 Install a specific Docker version
yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y
6.5 Start Docker
systemctl start docker
systemctl enable docker
7 Install command completion
Install on all machines.
yum -y install bash-completion
source /etc/profile.d/bash_completion.sh
8 Registry mirror acceleration
Pulls from the default Docker registry are very slow, so use Alibaba Cloud's mirror accelerator as the registry mirror.
Log in at https://cr.console.aliyun.com to get your own accelerator address (register an Alibaba Cloud account first if needed), and substitute it for the registry-mirrors address below.
8.1 Configure the accelerator
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://lss3ndia.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
8.2 Restart Docker
systemctl daemon-reload && systemctl restart docker
8.3 Verify
docker --version
Docker version 18.09.9, build 039a7df9ba
9 Install Keepalived
Install on master01, master02, and master03:
yum -y install keepalived
9.1 Configure keepalived
On master01:
[root@master01 ~]#more /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master01
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.59.10
    }
}
On master02, only the following lines differ from master01:
[root@master02 ~]#more /etc/keepalived/keepalived.conf
router_id master02
state BACKUP
priority 90
On master03, only the following lines differ from master01:
[root@master03 ~]#more /etc/keepalived/keepalived.conf
router_id master03
priority 80
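Since the three node configs differ only in router_id, state, and priority, they can be rendered from one template. A sketch that writes the variants to /tmp (an illustrative path; on each node the file belongs at /etc/keepalived/keepalived.conf):

```shell
# Render the three per-node keepalived configs from a single template
for spec in "master01 MASTER 100" "master02 BACKUP 90" "master03 BACKUP 80"; do
  set -- $spec   # $1=router_id, $2=state, $3=priority
  cat > "/tmp/keepalived-$1.conf" << EOF
! Configuration File for keepalived
global_defs {
    router_id $1
}
vrrp_instance VI_1 {
    state $2
    interface ens32
    virtual_router_id 50
    priority $3
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.59.10
    }
}
EOF
done
```

The unquoted heredoc delimiter lets $1/$2/$3 expand inside the template while the rest of the config is copied literally.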
9.2 Start the service
service keepalived start
systemctl enable keepalived
10 Install Kubernetes
Install on all nodes.
10.1 Check the available versions
This guide installs kubelet v1.16.4, which supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09.
yum list kubelet --showduplicates | sort -r
10.2 Install a specific version
Install version 1.16.4:
yum install -y kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4
10.3 Package descriptions
- kubelet: runs on every node in the cluster; starts Pods, containers, and other objects
- kubeadm: initializes and bootstraps the cluster
- kubectl: the command-line client for talking to the cluster; used to deploy and manage applications, inspect resources, and create, delete, and update components
10.4 Enable and start kubelet
systemctl enable kubelet && systemctl start kubelet
10.5 kubectl command completion
echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile
11 Download the images
The default k8s images are hosted on Google servers that are unreachable from here, so pull them from a mirror shared on the internet.
Update (Oct 28): all of these images have also been packed into k8s_1.16.4.tar.gz; loading that archive works as well.
On all master and work nodes:
docker pull registry.cn-hangzhou.aliyuncs.com/loong576/kube-apiserver:v1.16.4
docker pull registry.cn-hangzhou.aliyuncs.com/loong576/kube-controller-manager:v1.16.4
docker pull registry.cn-hangzhou.aliyuncs.com/loong576/kube-scheduler:v1.16.4
docker pull registry.cn-hangzhou.aliyuncs.com/loong576/kube-proxy:v1.16.4
docker pull registry.cn-hangzhou.aliyuncs.com/loong576/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/loong576/etcd:3.3.15-0
docker pull registry.cn-hangzhou.aliyuncs.com/loong576/coredns:1.6.2
11.1 Check the downloaded images
[root@master01 ~]#docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest bf756fb1ae65 9 months ago 13.3kB
registry.cn-hangzhou.aliyuncs.com/loong576/kube-apiserver v1.16.4 3722a80984a0 10 months ago 217MB
registry.cn-hangzhou.aliyuncs.com/loong576/kube-controller-manager v1.16.4 fb4cca6b4e4c 10 months ago 163MB
registry.cn-hangzhou.aliyuncs.com/loong576/kube-scheduler v1.16.4 2984964036c8 10 months ago 87.3MB
registry.cn-hangzhou.aliyuncs.com/loong576/kube-proxy v1.16.4 091df896d78f 10 months ago 86.1MB
registry.cn-hangzhou.aliyuncs.com/loong576/etcd 3.3.15-0 b2756210eeab 13 months ago 247MB
registry.cn-hangzhou.aliyuncs.com/loong576/coredns 1.6.2 bf261d157914 14 months ago 44.1MB
registry.cn-hangzhou.aliyuncs.com/loong576/pause 3.1 da86e6ba6ca1 2 years ago 742kB
11.2 Retag the images
kubeadm looks these images up by name during initialization; without renaming them, it would still try to pull from the default Google registry.
docker tag registry.cn-hangzhou.aliyuncs.com/loong576/kube-apiserver:v1.16.4 k8s.gcr.io/kube-apiserver:v1.16.4
docker tag registry.cn-hangzhou.aliyuncs.com/loong576/kube-controller-manager:v1.16.4 k8s.gcr.io/kube-controller-manager:v1.16.4
docker tag registry.cn-hangzhou.aliyuncs.com/loong576/kube-scheduler:v1.16.4 k8s.gcr.io/kube-scheduler:v1.16.4
docker tag registry.cn-hangzhou.aliyuncs.com/loong576/kube-proxy:v1.16.4 k8s.gcr.io/kube-proxy:v1.16.4
docker tag registry.cn-hangzhou.aliyuncs.com/loong576/etcd:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0
docker tag registry.cn-hangzhou.aliyuncs.com/loong576/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
docker tag registry.cn-hangzhou.aliyuncs.com/loong576/pause:3.1 k8s.gcr.io/pause:3.1
# Confirm the following images exist
[root@master01 ~]#docker images | grep k8s
k8s.gcr.io/kube-apiserver v1.16.4 3722a80984a0 10 months ago 217MB
k8s.gcr.io/kube-controller-manager v1.16.4 fb4cca6b4e4c 10 months ago 163MB
k8s.gcr.io/kube-scheduler v1.16.4 2984964036c8 10 months ago 87.3MB
k8s.gcr.io/kube-proxy v1.16.4 091df896d78f 10 months ago 86.1MB
k8s.gcr.io/etcd 3.3.15-0 b2756210eeab 13 months ago 247MB
k8s.gcr.io/coredns 1.6.2 bf261d157914 14 months ago 44.1MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 2 years ago 742kB
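The seven docker tag commands above all follow one pattern, so they can also be generated with a loop. A sketch that only prints the commands (remove the echo to actually run them):

```shell
# Print the docker tag commands mapping the mirror images to k8s.gcr.io names
src=registry.cn-hangzhou.aliyuncs.com/loong576
for img in kube-apiserver:v1.16.4 kube-controller-manager:v1.16.4 \
           kube-scheduler:v1.16.4 kube-proxy:v1.16.4 \
           etcd:3.3.15-0 coredns:1.6.2 pause:3.1; do
  echo docker tag "$src/$img" "k8s.gcr.io/$img"
done
```

This comes in handy again later when the same retagging must be repeated on the other masters and workers.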
12 Initialize the master
Run this step on master01 only.
12.1 Create the kubeadm config file
# certSANs: list every apiserver hostname and IP, plus any IPs that may be added later
# controlPlaneEndpoint: the VIP address
# podSubnet: "10.244.0.0/16", the Pod network CIDR used by flannel later
[root@master01 ~]#more kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
apiServer:
certSANs:
- master01
- master02
- master03
- work01
- work02
- work03
- 192.168.59.10
- 192.168.59.11
- 192.168.59.12
- 192.168.59.13
- 192.168.59.21
- 192.168.59.22
controlPlaneEndpoint: "192.168.59.10:6443"
networking:
podSubnet: "10.244.0.0/16"
12.2 Initialize
# Initialize, and tee the output into k8s_install.log so the join tokens can be looked up later when adding nodes
[root@master01 ~]#kubeadm init --config=kubeadm-config.yaml | tee k8s_install.log
12.3 Post-initialization steps
[root@master01 ~]#mkdir -p $HOME/.kube
[root@master01 ~]#sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]#sudo chown $(id -u):$(id -g) $HOME/.kube/config
12.4 Load the environment variable
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
13 Install flannel
Run on master01 only.
[root@master01 ~]#kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
After flannel is installed, the cluster (still with a single apiserver node) is already functional; you can check it:
[root@master01 ~]#kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5644d7b6d9-lkq9x 1/1 Running 0 4m59s
coredns-5644d7b6d9-rjtl8 1/1 Running 0 4m59s
etcd-master01 1/1 Running 0 4m3s
kube-apiserver-master01 1/1 Running 0 3m59s
kube-controller-manager-master01 1/1 Running 0 3m49s
kube-flannel-ds-amd64-mcccp 1/1 Running 0 57s
kube-proxy-l48f6 1/1 Running 0 4m59s
kube-scheduler-master01 1/1 Running 0 3m51s
14 Add the other master nodes
On master02 and master03:
14.1 Copy the certificates
The certificates must be copied to the other nodes before they can join the cluster.
cd /etc/kubernetes/pki/
# Copy the 6 certificate files to each of the other masters
scp ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key root@master02:/root/
scp ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key root@master03:/root/
# Copy the two etcd certificate files
# First create a target directory on each destination host
[root@master02 ~]#mkdir etcd
[root@master03 ~]#mkdir etcd
[root@master01 /etc/kubernetes/pki]#scp /etc/kubernetes/pki/etcd/ca.* root@master02:/root/etcd
[root@master01 /etc/kubernetes/pki]#scp /etc/kubernetes/pki/etcd/ca.* root@master03:/root/etcd
14.2 Move the certificates
On master02 and master03, move the copied certificates into their proper directories.
# Create the certificate directories
[root@master02 ~]#mkdir -p /etc/kubernetes/pki/etcd
[root@master03 ~]#mkdir -p /etc/kubernetes/pki/etcd
# Move the certificates into place
[root@master03 ~]#mv ca.* sa.* front-* /etc/kubernetes/pki/
[root@master03 ~]#mv etcd/ca.* /etc/kubernetes/pki/etcd/
[root@master02 ~]#mv ca.* sa.* front-* /etc/kubernetes/pki/
[root@master02 ~]#mv etcd/ca.* /etc/kubernetes/pki/etcd/
14.3 Join master02 to the cluster
# Look up the join commands in the log file generated during master01's initialization
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 192.168.59.10:6443 --token 1l0i97.wf409vm48u2qlb77 \
--discovery-token-ca-cert-hash sha256:db8729d532f23e25c63df17f7cfaca073778f4ea5d4e0c88a9392b61b4a0eba0 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.59.10:6443 --token 1l0i97.wf409vm48u2qlb77 \
--discovery-token-ca-cert-hash sha256:db8729d532f23e25c63df17f7cfaca073778f4ea5d4e0c88a9392b61b4a0eba0
Now run the join.
# Before joining, retag the Docker images on this node exactly as in section 11.2
[root@master02 ~]#docker images | grep k8s
k8s.gcr.io/pause
kubeadm join 192.168.59.10:6443 --token 1l0i97.wf409vm48u2qlb77 --discovery-token-ca-cert-hash sha256:db8729d532f23e25c63df17f7cfaca073778f4ea5d4e0c88a9392b61b4a0eba0 --control-plane
14.4 Join master03 to the cluster
Repeat the same steps as for master02.
14.5 After the two masters have joined
On master02 and master03:
To start administering your cluster from this node, you need to run the following as a regular user:
# Run the following
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
On master01:
# Copy the admin.conf kubeconfig from master01 to master02 and master03
[root@master01 ~]#scp /etc/kubernetes/admin.conf master02:/etc/kubernetes/
[root@master01 ~]#scp /etc/kubernetes/admin.conf master03:/etc/kubernetes/
# This lets master02 and master03 run kubectl commands
15 Verify the cluster
# On master01
[root@master01 ~]#kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 41m v1.16.4
master02 Ready master 4m53s v1.16.4
master03 Ready master 4m53s v1.16.4
# On master02
[root@master02 ~]#kubectl get nodes
# On master03
[root@master03 ~]#kubectl get nodes
[root@master01 ~]#kubectl get po -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-5644d7b6d9-lkq9x 1/1 Running 0 41m 10.244.0.2 master01 <none> <none>
coredns-5644d7b6d9-rjtl8 1/1 Running 0 41m 10.244.0.3 master01 <none> <none>
etcd-master01 1/1 Running 0 40m 192.168.59.11 master01 <none> <none>
etcd-master02 1/1 Running 0 5m36s 192.168.59.12 master02 <none> <none>
etcd-master03 1/1 Running 0 5m21s 192.168.59.13 master03 <none> <none>
kube-apiserver-master01 1/1 Running 0 40m 192.168.59.11 master01 <none> <none>
kube-apiserver-master02 1/1 Running 0 5m36s 192.168.59.12 master02 <none> <none>
kube-apiserver-master03 1/1 Running 0 4m19s 192.168.59.13 master03 <none> <none>
kube-controller-manager-master01 1/1 Running 1 40m 192.168.59.11 master01 <none> <none>
kube-controller-manager-master02 1/1 Running 0 5m36s 192.168.59.12 master02 <none> <none>
kube-controller-manager-master03 1/1 Running 0 4m30s 192.168.59.13 master03 <none> <none>
kube-flannel-ds-amd64-6v67w 1/1 Running 0 5m29s 192.168.59.13 master03 <none> <none>
kube-flannel-ds-amd64-9c75g 1/1 Running 0 5m37s 192.168.59.12 master02 <none> <none>
kube-flannel-ds-amd64-mcccp 1/1 Running 0 37m 192.168.59.11 master01 <none> <none>
kube-proxy-4mxlf 1/1 Running 0 5m29s 192.168.59.13 master03 <none> <none>
kube-proxy-hrlsn 1/1 Running 0 5m37s 192.168.59.12 master02 <none> <none>
kube-proxy-l48f6 1/1 Running 0 41m 192.168.59.11 master01 <none> <none>
kube-scheduler-master01 1/1 Running 1 40m 192.168.59.11 master01 <none> <none>
kube-scheduler-master02 1/1 Running 0 5m36s 192.168.59.12 master02 <none> <none>
kube-scheduler-master03 1/1 Running 0 4m33s 192.168.59.13 master03 <none> <none>
You can see 3 etcd, 3 apiserver, 3 scheduler, 3 controller-manager, 3 flannel, and 3 kube-proxy Pods, plus 2 DNS Pods.
16 Add the work nodes
16.1 Look up the join command
The worker join command is in the k8s_install.log saved during initialization. If the token has expired, a fresh join command can be generated on a master with:
kubeadm token create --print-join-command
16.2 Join work01
kubeadm join 192.168.59.10:6443 --token 1l0i97.wf409vm48u2qlb77 --discovery-token-ca-cert-hash sha256:db8729d532f23e25c63df17f7cfaca073778f4ea5d4e0c88a9392b61b4a0eba0
16.3 Join work02
Run the same join command on work02.
17 Check the cluster
The flannel and kube-proxy Pods initially failed to start; the logs showed image pull failures. The previously downloaded images had not been retagged on the workers. After renaming them on work01 and work02, the Pods came up.
18 Install the dashboard
The dashboard is installed from work01 here.
18.1 Download the yaml file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml
18.2 Edit the yaml file
# The default image location is unreachable, so switch to the Alibaba Cloud mirror
sed -i 's/kubernetesui/registry.cn-hangzhou.aliyuncs.com\/loong576/g' recommended.yaml
# Expose the dashboard on NodePort 30001
sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
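To preview the effect of the second sed before editing recommended.yaml itself, it can be run against a minimal service fragment (the /tmp file is purely illustrative):

```shell
# Demo: the sed appends nodePort and type lines after the targetPort line
cat > /tmp/svc.demo << 'EOF'
spec:
  ports:
    - port: 443
      targetPort: 8443
EOF
sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' /tmp/svc.demo
cat /tmp/svc.demo
```

In GNU sed's append text, each escaped space preserves indentation and \n starts a new line, yielding a correctly indented nodePort under ports and type under spec.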
cat >> recommended.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF
18.3 Deploy the dashboard
[root@work01 ~]#kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
18.3 檢視狀态
[root@work01 ~]#kubectl get all -n kubernetes-dashboard
18.5 Get the login token
kubectl describe secrets -n kubernetes-dashboard dashboard-admin
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IldBNTktTWUxVUtTMm1BaWNzeEE2eFZWcGEtMjlZMDlfLUt4WmJpWEMtYlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZDU4NmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzY0NzRjNWItNWUzYy00MDFhLWI2NzktYWVlMmRlYjg2MzQ3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.CkJVRmU_41JAqHa-jzl8m-Jzh7qr4Ct-jXY-LUzAg0ilR48wVQHUl48D1j-eHYU5_POdSgsoEJwGD77gy8AeDgjiF6BHbknRci5z-XA3x1WmMkziTkjUjRC2hsvi81zGSDRCgfP4iNotggg361yXbhokjwq82W6jWPSUskOvttpVAN7px3hc34bjvMJTWXaoAtWem29BGoi-FjQUF2nOJD5JqoKO7k5LNwgylMWqcsMeNDU9aJQSWy3axZP7BEUnhPiCMi94MjNeqYXDrREZO0GWvJVvGF8V8lj_w0TdTDwp8zJoZqe6N2IdWeRNc834949EACAHKrizXQO0QdccZg
19 故障模拟
19.1 關閉master01裝置
#先檢視排程器在哪個節點
[root@master01 ~]kubectl get endpoints kube-scheduler -n kube-system
#關閉master01
[root@master01 ~]#init 0
#在檢視
[root@master02 ~]#kubectl get endpoints kube-scheduler -n kube-system -o yaml |grep holderIdentity
control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master02_a802b7d7-27df-46fb-8422-3c08115fb76f","leaseDurationSeconds":15,"acquireTime":"2020-10-27T03:59:37Z","renewTime":"2020-10-27T04:01:19Z","leaderTransitions":2}'
# Check the cluster nodes
[root@master02 ~]#kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 NotReady master 88m v1.16.4
master02 Ready master 51m v1.16.4
master03 Ready master 51m v1.16.4
work01 Ready <none> 42m v1.16.4
work02 Ready <none> 31m v1.16.4
# master01 is now NotReady
# The VIP has floated over to master02
20 Create Pods
# Create the nginx.yaml file
apiVersion: apps/v1         # the manifest follows the apps/v1 Kubernetes API
kind: Deployment            # resource type to create
metadata:                   # metadata for this resource
  name: nginx-master        # Deployment name
spec:                       # Deployment spec
  selector:
    matchLabels:
      app: nginx
  replicas: 3               # number of replicas
  template:                 # Pod template
    metadata:               # Pod metadata
      labels:               # labels
        app: nginx          # label key app with value nginx
    spec:                   # Pod spec
      containers:
      - name: nginx         # container name
        image: nginx:latest # image used to create the container
kubectl apply -f nginx.yaml
kubectl get po -o wide