
Installing and Deploying k8s with kubeadm (1)

2 K8s Installation and Deployment


2.1 Installation Methods

2.1.1 Deployment Tools

Install with a batch deployment tool (Ansible / SaltStack), manually from binaries, with kubeadm, or via apt-get/yum; the components then run as daemons on the host, started from service scripts much like Nginx.

Binary deployment: the best compatibility. It behaves like a service started directly on the host, and that service can use many features of the host kernel.

kubeadm deployment: the components start as containers, so the host ends up running many containers (an api-server container, a controller-manager container, and so on). The runtime environment is therefore constrained: only the commands present inside the containers are available, and many host kernel features cannot be used.

2.1.2 kubeadm

Note: kubeadm deployments of k8s are intended for test environments only, not production; production Kubernetes clusters are deployed from binaries.

https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

https://v1-18.docs.kubernetes.io/zh/docs/setup/independent/create-cluster-kubeadm/
           

kubeadm, the deployment tool provided by the k8s project, automates installation: install docker and the other components on the master and node machines, run the initialization, and both the control-plane services and the services on the nodes run as pods.

2.1.3 Installation Notes

Things to note:

  • Disable swap
swapoff -a
           

If swap is not disabled, kubeadm fails:

[root@k8s-node1 ~]# kubeadm join 172.18.8.111:6443 --token e1zb26.ujwhegxxi452w53m \
> --discovery-token-ca-cert-hash sha256:3fe215f2b665c40659a06ab1a874e44b64bf784fe78a3d7f2cbb792f26b34ada
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-node1 ~]#
           
  • Disable SELinux
# First disable it temporarily (takes effect immediately, no reboot required)
setenforce 0
# Then disable it permanently so the system does not come back up in enforcing mode after a reboot
vim /etc/sysconfig/selinux
SELINUX=disabled
           
  • Stop iptables and firewalld
systemctl stop iptables 
systemctl stop firewalld
systemctl disable firewalld
           
  • Tune kernel parameters and resource limits (a persistence sketch follows this list)
# Packets forwarded by a layer-2 bridge are matched against the host's iptables FORWARD rules
# Note: if these keys cannot be found, search again after docker is installed: sysctl -a | grep bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
           
  • Enable IP forwarding (needed on CentOS; Ubuntu enables it by default)
cat /etc/sysctl.conf
net.ipv4.ip_forward = 1
sysctl -p
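
Several of the commands above (swapoff -a, setenforce 0, the manual sysctl keys) only last until the next reboot. A minimal consolidated sketch, an assumption not taken from the original, that makes the swap, bridge, and forwarding settings persistent; note the br_netfilter module must be loaded before the bridge sysctls exist:

# Permanently disable swap by commenting out its /etc/fstab entry
swapoff -a
sed -ri 's/^([^#].*\sswap\s.*)$/# \1/' /etc/fstab

# Load br_netfilter now and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/k8s.conf

# Persist the bridge and forwarding parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system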
           

2.2 Deployment Process

Component planning and version selection

# CRI runtime selection
https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/

# CNI selection
https://kubernetes.io/zh/docs/concepts/cluster-administration/addons/
           

2.2.1 Steps

  1. Prepare the base environment
  2. Deploy keepalived and haproxy as a highly available reverse proxy so the control-plane API has a highly available entry point (Harbor is deployed separately as the registry)
  3. Install the specified versions of kubeadm, kubelet, kubectl, and docker on all master nodes
  4. Install the specified versions of kubeadm, kubelet, and docker on all node machines; kubectl is optional on nodes (for security it is not recommended), depending on whether kubectl-based cluster and pod management is needed there
  5. Run kubeadm init on a master node
  6. Verify the master node status
  7. Join each node to the k8s masters with kubeadm join (requires a token generated on the master)
  8. Verify the node status
  9. Create pods and test network communication
  10. Deploy the web Dashboard
  11. Walk through a k8s cluster upgrade

At the time of writing the latest official release is 1.21.3. Because a version upgrade will be demonstrated later, installing 1.21.x would leave nothing to upgrade to, so this walkthrough installs 1.20.x instead.


2.2.2 Base Environment Preparation


Server environment

Start from a minimal base installation. On CentOS, disable the firewall, SELinux, and swap; update the package sources, synchronize time, install common tools, then reboot and verify the base configuration. CentOS 7.5 or later is recommended; for Ubuntu, use 18.04 or a later stable release.
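
For the time synchronization mentioned above, a minimal sketch (assuming chrony; any NTP client works):

apt -y install chrony
systemctl enable --now chrony
# Verify that time sources are reachable
chronyc sources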

Role            IP address                      Software version  OS version    Recommended production spec
K8s-master1     172.18.8.9                      kubeadm-1.20      Ubuntu 18.04  8 CPU + 16G memory + 100G disk
K8s-master2     172.18.8.19                     kubeadm-1.20      Ubuntu 18.04  8 CPU + 16G memory + 100G disk
K8s-master3     172.18.8.29                     kubeadm-1.20      Ubuntu 18.04  8 CPU + 16G memory + 100G disk
K8s-node1       172.18.8.39                     -                 Ubuntu 18.04  96 CPU + 386G memory + SSD + 10 GbE
K8s-node2       172.18.8.49                     -                 Ubuntu 18.04  96 CPU + 386G memory + SSD + 10 GbE
K8s-node3       172.18.8.59                     -                 Ubuntu 18.04  96 CPU + 386G memory + SSD + 10 GbE
K8s-node4       172.18.8.69                     -                 Ubuntu 18.04  96 CPU + 386G memory + SSD + 10 GbE
Harbor1         172.18.8.214 (VIP 172.18.8.9)   -                 Ubuntu 18.04  8 CPU + 16G memory + 2T disk
Harbor2         172.18.8.215 (VIP 172.18.8.9)   -                 Ubuntu 18.04  8 CPU + 16G memory + 2T disk
Ha+keepalived1  172.18.8.79                     -                 Ubuntu 18.04  8 CPU + 16G memory + 100G disk
Ha+keepalived2  172.18.8.89                     -                 Ubuntu 18.04  8 CPU + 16G memory + 100G disk
172.18.8.9  hostnamectl set-hostname K8s-master1
172.18.8.19 hostnamectl set-hostname K8s-master2
172.18.8.29 hostnamectl set-hostname K8s-master3
172.18.8.39 hostnamectl set-hostname K8s-node1
172.18.8.49 hostnamectl set-hostname K8s-node2
172.18.8.59 hostnamectl set-hostname K8s-node3
172.18.8.69 hostnamectl set-hostname K8s-node4
172.18.8.79 hostnamectl set-hostname ha_keepalive1
172.18.8.89 hostnamectl set-hostname ha_keepalive2
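
Since the lab has no internal DNS, one option (an assumption, not part of the original) is to distribute the name mappings via /etc/hosts on every machine:

cat >> /etc/hosts << EOF
172.18.8.9  k8s-master1
172.18.8.19 k8s-master2
172.18.8.29 k8s-master3
172.18.8.39 k8s-node1
172.18.8.49 k8s-node2
172.18.8.59 k8s-node3
172.18.8.69 k8s-node4
172.18.8.79 ha-keepalive1
172.18.8.89 ha-keepalive2
EOF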
           

2.3 Highly Available Reverse Proxy

Build a highly available reverse proxy for the k8s apiserver based on keepalived and haproxy.

2.3.1 Keepalived Installation and Configuration

Install and configure Keepalived and test failover of the VIP.

Install and configure Keepalived on node 1:

[root@ha_keepalive1 ~]# apt -y install keepalived
[root@ha_keepalive1 ~]# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
[root@ha_keepalive1 ~]# grep -Ev "#|^$" /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.18.8.111/24 dev eth0 label eth0:1   # haproxy:bind 172.18.8.111:6443
        172.18.8.222/24 dev eth0 label eth0:2   # haproxy:bind 172.18.8.222:80
    }
}
[root@ha_keepalive1 ~]# systemctl enable --now keepalived
           

Install and configure Keepalived on node 2:

[root@ha_keepalive2 ~]# apt -y install keepalived
[root@ha_keepalive2 ~]# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
[root@ha_keepalive2 ~]# grep -Ev "#|^$" /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
     acassen
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state BACKUP          # Set to BACKUP on the standby node
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51  # Must match the MASTER, otherwise split-brain occurs
    priority 80           # Priority must be lower than the MASTER's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111  # Must match the MASTER, otherwise split-brain occurs
    }
    virtual_ipaddress {
        172.18.8.111/24 dev eth0 label eth0:1   # haproxy:bind 172.18.8.111:6443
        172.18.8.222/24 dev eth0 label eth0:2   # haproxy:bind 172.18.8.222:80
    }
}
[root@ha_keepalive2 ~]# systemctl enable --now keepalived
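
A quick failover check (a sketch; not shown in the original):

# On ha_keepalive1 (MASTER): both labelled VIPs should be bound
ip addr show eth0 | grep -E '172\.18\.8\.(111|222)'

# Stop keepalived on ha_keepalive1 and confirm the VIPs move to ha_keepalive2
systemctl stop keepalived     # on ha_keepalive1
ip addr show eth0             # on ha_keepalive2: the VIPs should now appear here
systemctl start keepalived    # on ha_keepalive1: higher priority preempts, VIPs move back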
           

2.3.2 Haproxy Installation and Configuration

Install and configure haproxy on node 1:

[root@ha_keepalive1 ~]# apt -y install haproxy
[root@ha_keepalive1 ~]# grep -Ev "#|^$" /etc/haproxy/haproxy.cfg
global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
	stats timeout 30s
	user haproxy
	group haproxy
	daemon
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private
	ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
	ssl-default-bind-options no-sslv3
	
defaults
	log	global
	mode	http
	option	httplog
	option	dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http
	
listen stats
  mode http
  bind 0.0.0.0:9999
  stats enable
  log global
  stats uri /haproxy-status
  stats auth haadmin:123456
  
listen k8s-6443
  bind 172.18.8.111:6443
  mode tcp
  balance roundrobin
  server 172.18.8.9 172.18.8.9:6443 check inter 3s fall 3 rise 5     # k8s-master1
  server 172.18.8.19 172.18.8.19:6443 check inter 3s fall 3 rise 5   # k8s-master2
  server 172.18.8.29 172.18.8.29:6443 check inter 3s fall 3 rise 5   # k8s-master3
  
listen nginx-80
  bind 172.18.8.222:80
  mode tcp
  balance roundrobin
  server 172.18.8.39 172.18.8.39:30004 check inter 3s fall 3 rise 5  # k8s-node1
  server 172.18.8.49 172.18.8.49:30004 check inter 3s fall 3 rise 5  # k8s-node2
  server 172.18.8.59 172.18.8.59:30004 check inter 3s fall 3 rise 5  # k8s-node3
  server 172.18.8.69 172.18.8.69:30004 check inter 3s fall 3 rise 5  # k8s-node4
[root@ha_keepalive1 ~]# systemctl enable --now haproxy
# Without the kernel parameter below, haproxy cannot bind a VIP that is not local, so its ports will not appear
[root@ha_keepalive1 ~]# vim /etc/sysctl.conf
[root@ha_keepalive1 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
[root@ha_keepalive1 ~]# systemctl restart haproxy.service
           

Install and configure haproxy on node 2:

[root@ha_keepalive2 ~]# apt -y install haproxy
[root@ha_keepalive2 ~]# grep -Ev "#|^$" /etc/haproxy/haproxy.cfg
global
	log /dev/log	local0
	log /dev/log	local1 notice
	chroot /var/lib/haproxy
	stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
	stats timeout 30s
	user haproxy
	group haproxy
	daemon
	ca-base /etc/ssl/certs
	crt-base /etc/ssl/private
	ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
	ssl-default-bind-options no-sslv3
	
defaults
	log	global
	mode	http
	option	httplog
	option	dontlognull
    timeout connect 5000
    timeout client  50000
    timeout server  50000
	errorfile 400 /etc/haproxy/errors/400.http
	errorfile 403 /etc/haproxy/errors/403.http
	errorfile 408 /etc/haproxy/errors/408.http
	errorfile 500 /etc/haproxy/errors/500.http
	errorfile 502 /etc/haproxy/errors/502.http
	errorfile 503 /etc/haproxy/errors/503.http
	errorfile 504 /etc/haproxy/errors/504.http
	
listen stats
  mode http
  bind 0.0.0.0:9999
  stats enable
  log global
  stats uri /haproxy-status
  stats auth haadmin:123456
  
listen k8s-6443
  bind 172.18.8.111:6443
  mode tcp
  balance roundrobin
  server 172.18.8.9 172.18.8.9:6443 check inter 3s fall 3 rise 5     # k8s-master1
  server 172.18.8.19 172.18.8.19:6443 check inter 3s fall 3 rise 5   # k8s-master2
  server 172.18.8.29 172.18.8.29:6443 check inter 3s fall 3 rise 5   # k8s-master3
  
listen nginx-80
  bind 172.18.8.222:80
  mode tcp
  balance roundrobin
  server 172.18.8.39 172.18.8.39:30004 check inter 3s fall 3 rise 5  # k8s-node1
  server 172.18.8.49 172.18.8.49:30004 check inter 3s fall 3 rise 5  # k8s-node2
  server 172.18.8.59 172.18.8.59:30004 check inter 3s fall 3 rise 5  # k8s-node3
  server 172.18.8.69 172.18.8.69:30004 check inter 3s fall 3 rise 5  # k8s-node4
[root@ha_keepalive2 ~]# systemctl enable --now haproxy
# Without the kernel parameter below, haproxy cannot bind a VIP that is not local, so its ports will not appear
[root@ha_keepalive2 ~]# vim /etc/sysctl.conf
[root@ha_keepalive2 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
[root@ha_keepalive2 ~]# systemctl restart haproxy.service
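
A small verification sketch (an assumption) that haproxy came up once ip_nonlocal_bind is in place, and that the stats page answers with the credentials from the listen stats block:

# The three frontends should be listening
ss -tnlp | grep -E ':(6443|80|9999)'

# Query the stats page
curl -u haadmin:123456 http://172.18.8.111:9999/haproxy-status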
           

2.4 Installing Harbor

See the separate Harbor installation guide.

2.5 Installing kubeadm and Other Components

Install kubeadm, kubelet, kubectl, docker, and related components on the master and node machines; the load balancers do not need them.

2.5.1 Version Selection

Install docker on every master and node:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.17.md#v11711


Update the latest validated version of Docker to 19.03 (#84476, @neolit123)
           

2.5.2 Installing docker

# Remove old docker versions
sudo apt-get -y remove docker docker-engine docker.io containerd runc

# Install required system tools
sudo apt-get update
sudo apt -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common

# Install the GPG key
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

# Add the repository
sudo add-apt-repository \
   "deb [arch=amd64] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

# Refresh the package index
apt-get update

# List installable docker versions
apt-cache madison docker-ce docker-ce-cli

# Install docker 19.03.x
apt install -y docker-ce=5:19.03.15~3-0~ubuntu-bionic docker-ce-cli=5:19.03.15~3-0~ubuntu-bionic

# Start docker and enable it at boot
systemctl enable --now docker

# Verify the docker version
docker version

# Show detailed docker information
docker info
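
The preflight warning seen earlier recommends the systemd cgroup driver over cgroupfs. A sketch (an assumption; the original keeps the default) that switches Docker to systemd; if you adopt it, merge the exec-opts key into the daemon.json written later in 2.6.8. kubeadm detects Docker's cgroup driver and configures kubelet to match, so this can be done before initialization:

cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
# Should now report: Cgroup Driver: systemd
docker info | grep -i cgroup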
           

2.5.3 Install kubelet, kubeadm, and kubectl on All Nodes

Configure the Alibaba Cloud repository on all master and node machines and install the components; installing kubectl on worker nodes is optional.

Configure the Alibaba Cloud Kubernetes mirror (provides the kubelet, kubeadm, and kubectl packages).

The Tsinghua kubernetes mirror works as well (the echo below uses it).

apt-get update && apt -y install apt-transport-https

curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

echo "deb https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list

# Install kubeadm, kubelet, and kubectl
apt update
apt-cache madison kubeadm 
apt-cache madison kubelet 
apt-cache madison kubectl
apt -y install kubelet=1.20.5-00 kubeadm=1.20.5-00 kubectl=1.20.5-00

# Verify the version
[root@k8s-master1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master1 ~]#
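
Optionally (not in the original), pin the three packages so a routine apt upgrade cannot move the cluster to an unplanned version; the pin is only released deliberately during the upgrade walked through later:

apt-mark hold kubelet kubeadm kubectl
# Release the pin when deliberately upgrading:
# apt-mark unhold kubelet kubeadm kubectl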
           

2.5.4 Verify the kubelet Service on the Master

Starting kubelet at this point fails. That is expected: kubeadm has not generated kubelet's configuration yet, so the service crash-loops until kubeadm init or kubeadm join runs.

[root@k8s-master1 ~]# systemctl start kubelet
[root@k8s-master1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Sat 2021-07-24 03:39:38 UTC; 759ms ago
     Docs: https://kubernetes.io/docs/home/
  Process: 12191 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $K
 Main PID: 12191 (code=exited, status=255)
[root@k8s-master1 ~]# vim /var/log/syslog
           

2.6 Run kubeadm init on a Master Node

Run the cluster initialization on any one of the three masters; it only needs to be run once.

2.6.1 Using the kubeadm Command

# Official documentation
https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/
           

View the help:

[root@k8s-master1 ~]# kubeadm --help


    ┌──────────────────────────────────────────────────────────┐
    │ KUBEADM                                                  │
    │ Easily bootstrap a secure Kubernetes cluster             │
    │                                                          │
    │ Please give us feedback at:                              │
    │ https://github.com/kubernetes/kubeadm/issues             │
    └──────────────────────────────────────────────────────────┘

Example usage:

    Create a two-machine cluster with one control-plane node
    (which controls the cluster), and one worker node
    (where your workloads, like Pods and Deployments run).

    ┌──────────────────────────────────────────────────────────┐
    │ On the first machine:                                    │
    ├──────────────────────────────────────────────────────────┤
    │ control-plane# kubeadm init                              │
    └──────────────────────────────────────────────────────────┘

    ┌──────────────────────────────────────────────────────────┐
    │ On the second machine:                                   │
    ├──────────────────────────────────────────────────────────┤
    │ worker# kubeadm join <arguments-returned-from-init>      │
    └──────────────────────────────────────────────────────────┘

    You can then repeat the second step on as many other machines as you like.

Usage:
  kubeadm [command]

Available Commands:
  alpha       Kubeadm experimental sub-commands
  certs       Commands related to handling kubernetes certificates
  completion  Output shell completion code for the specified shell (bash or zsh)
  config      Manage configuration for a kubeadm cluster persisted in a ConfigMap in the cluster
  help        Help about any command
  init        Run this command in order to set up the Kubernetes control plane
  join        Run this on any machine you wish to join an existing cluster
  reset       Performs a best effort revert of changes made to this host by 'kubeadm init' or 'kubeadm join'
  token       Manage bootstrap tokens
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     Print the version of kubeadm

Flags:
      --add-dir-header           If true, adds the file directory to the header of the log messages
  -h, --help                     help for kubeadm
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --one-output               If true, only write logs to their native severity level (vs also writing to each lower severity level
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity

Use "kubeadm [command] --help" for more information about a command.
[root@k8s-master1 ~]#
           
Subcommand  Description
alpha       Experimental kubeadm sub-commands
completion  bash command completion; requires the bash-completion package
config      Manage the kubeadm cluster configuration persisted in a ConfigMap in the cluster, e.g. kubeadm config print init-defaults
help        Help about any command
init        Initialize a Kubernetes control plane
join        Join this node to an existing k8s cluster
reset       Revert any changes made to this host by kubeadm init or kubeadm join
token       Manage bootstrap tokens
upgrade     Upgrade the k8s version
version     Print version information

bash command completion (kubeadm completion)

mkdir -p /data/scripts
kubeadm completion bash > /data/scripts/kubeadm_completion.sh
source /data/scripts/kubeadm_completion.sh
vim /etc/profile
  source /data/scripts/kubeadm_completion.sh
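
kubectl completion can be wired up the same way (kubectl provides the same completion subcommand):

kubectl completion bash > /data/scripts/kubectl_completion.sh
source /data/scripts/kubectl_completion.sh
echo "source /data/scripts/kubectl_completion.sh" >> /etc/profile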
           

2.6.2 kubeadm init Options

https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/
           
Option                                   Description
--apiserver-advertise-address string     Local IP the K8s API Server will listen on
--apiserver-bind-port int32              Port the API server binds to; default 6443
--apiserver-cert-extra-sans stringSlice  Optional extra Subject Alternative Names (IP addresses and DNS names) for the API Server serving certificate
--cert-dir string                        Path where certificates are saved and stored; default "/etc/kubernetes/pki"
--certificate-key string                 Key used to encrypt the control-plane certificates in the kubeadm-certs Secret
--config string                          Path to a kubeadm configuration file
--control-plane-endpoint string          A stable IP address or DNS name for the control plane, i.e. a long-lived, highly available VIP or domain; k8s multi-master HA is built on this option
--cri-socket string                      Path to the CRI (Container Runtime Interface) socket to connect to; if empty, kubeadm tries to auto-detect it; set it only when multiple CRIs are installed or the socket is non-standard
--dry-run                                Do not apply any changes; only print what would be done (a test run)
--experimental-patches string            Path to patches that kustomize applies to the static pod manifests
--feature-gates string                   Key=value pairs describing feature gates; options: IPv6DualStack=true|false (ALPHA - default=false)
--ignore-preflight-errors stringSlice    Errors to ignore during preflight checks, e.g. swap; 'all' ignores every check
--image-repository string                Image registry to pull from; default "k8s.gcr.io"
--kubernetes-version string              Kubernetes version to install; default "stable-1"
--node-name string                       Name for this node
--pod-network-cidr string                IP range usable by the pod network; when set, the control plane automatically allocates CIDRs to each node
--service-cidr string                    IP range for services; default "10.96.0.0/12"
--service-dns-domain string              Internal k8s domain; default "cluster.local"; the cluster DNS (kube-dns/coredns) resolves records generated under it
--skip-certificate-key-print             Do not print the key used to encrypt the control-plane certificates
--skip-phases stringSlice                List of phases to skip
--skip-token-print                       Skip printing the default bootstrap token generated by 'kubeadm init'
--token string                           Specify the token
--token-ttl duration                     Token lifetime; default 24h; '0' means the token never expires
--upload-certs                           Upload the control-plane certificates to the kubeadm-certs Secret

Global options

Option                     Description
--add-dir-header           If true, add the file directory to the header of log messages
--log-file string          If non-empty, use this log file
--log-file-max-size uint   Maximum log file size in megabytes; default 1800; 0 means no limit
--rootfs string            Absolute path to the 'real' host root filesystem
--skip-headers             If true, avoid header prefixes in log messages
--skip-log-headers         If true, avoid headers when opening log files

2.6.3 Verify the Current kubeadm Version

[root@k8s-master1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f",
GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master1 ~]#
           

2.6.4 Prepare the Images

[root@k8s-master1 ~]# kubeadm config images list --kubernetes-version v1.20.5
k8s.gcr.io/kube-apiserver:v1.20.5
k8s.gcr.io/kube-controller-manager:v1.20.5
k8s.gcr.io/kube-scheduler:v1.20.5
k8s.gcr.io/kube-proxy:v1.20.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
[root@k8s-master1 ~]#
           

2.6.5 Pre-download the Images on the Master Nodes

Downloading the images on the masters in advance reduces waiting time. By default the images come from Google's registry, which is unreachable from mainland China, but the same images can be pulled from the Alibaba Cloud registry first; this avoids deployment failures caused by image pull problems later.

[root@k8s-master1 ~]# cat images_download.sh
#!/bin/bash
#
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
[root@k8s-master1 ~]# bash images_download.sh
           

2.6.6 Verify the Local Images

[root@k8s-master1 ~]# docker images|grep aliyuncs
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.20.5    5384b1650507   4 months ago    118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.20.5    d7e24aeb3b10   4 months ago    122MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.20.5    6f0c3da8c99e   4 months ago    116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.20.5    8d13f1db8bfb   4 months ago    47.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   11 months ago   253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   13 months ago   45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   17 months ago   683kB
[root@k8s-master1 ~]#
           

2.6.7 将外網下載下傳的鏡像上傳到公司的 Harbor 上

我們可以将這些從外網下載下傳的鏡像,push 到公司内部的 habor 上

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5 harbor.tech.com/baseimages/k8s/kube-proxy:v1.20.5
docker push harbor.tech.com/baseimages/k8s/kube-proxy:v1.20.5

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5 harbor.tech.com/baseimages/k8s/kube-controller-manager:v1.20.5
docker push harbor.tech.com/baseimages/k8s/kube-controller-manager:v1.20.5

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5 harbor.tech.com/baseimages/k8s/kube-scheduler:v1.20.5
docker push harbor.tech.com/baseimages/k8s/kube-scheduler:v1.20.5

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5 harbor.tech.com/baseimages/k8s/kube-apiserver:v1.20.5
docker push harbor.tech.com/baseimages/k8s/kube-apiserver:v1.20.5

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 harbor.tech.com/baseimages/k8s/etcd:3.4.13-0
docker push harbor.tech.com/baseimages/k8s/etcd:3.4.13-0

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 harbor.tech.com/baseimages/k8s/coredns:1.7.0
docker push harbor.tech.com/baseimages/k8s/coredns:1.7.0

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 harbor.tech.com/baseimages/k8s/pause:3.2
docker push harbor.tech.com/baseimages/k8s/pause:3.2

           

2.6.8 Pull the Images from the Company Harbor

Run the following on every master node:

# Add the harbor name resolution
echo "172.18.8.9 harbor.tech.com" >> /etc/hosts

# Write the image pull script
cat > images_download.sh << EOF
#!/bin/bash
#
docker pull harbor.tech.com/baseimages/k8s/kube-proxy:v1.20.5
docker pull harbor.tech.com/baseimages/k8s/kube-controller-manager:v1.20.5
docker pull harbor.tech.com/baseimages/k8s/kube-scheduler:v1.20.5
docker pull harbor.tech.com/baseimages/k8s/kube-apiserver:v1.20.5
docker pull harbor.tech.com/baseimages/k8s/etcd:3.4.13-0
docker pull harbor.tech.com/baseimages/k8s/coredns:1.7.0
docker pull harbor.tech.com/baseimages/k8s/pause:3.2
EOF


# Trust the private Harbor (registry mirror plus insecure registries)
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors":["https://0cde955d3600f3000fe5c004160e0320.mirror.swr.myhuaweicloud.com"],
"insecure-registries": ["172.18.8.215","172.18.8.214","172.18.8.9","harbor.tech.com"]
}
EOF
# Restart docker
systemctl restart docker

# Pull the images from the private Harbor
bash images_download.sh
           

2.8 Highly Available Master Initialization

keepalived provides the highly available VIP and haproxy reverse-proxies kube-apiserver, forwarding management requests to the multiple k8s masters so that the management entry point is highly available.


2.8.1 Initializing the Highly Available Masters from the Command Line (the method used in this example)

Run this on Master1 (172.18.8.9), since apiserver-advertise-address=172.18.8.9; to run it on Master2 or Master3 instead, change that address to 172.18.8.19 or 172.18.8.29.

kubeadm init \
--apiserver-advertise-address=172.18.8.9 \
--control-plane-endpoint=172.18.8.111 \
--apiserver-bind-port=6443 \
--kubernetes-version=v1.20.5 \
--pod-network-cidr=10.100.0.0/16 \
--service-cidr=10.200.0.0/16 \
--service-dns-domain=song.local \
--image-repository=harbor.tech.com/baseimages/k8s \
--ignore-preflight-errors=swap
           

The initialization output ends as follows:
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.18.8.111:6443 --token 1grr4q.3v09zfh9pl6q9546 \
    --discovery-token-ca-cert-hash sha256:3fe215f2b665c40659a06ab1a874e44b64bf784fe78a3d7f2cbb792f26b34ada \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.18.8.111:6443 --token 1grr4q.3v09zfh9pl6q9546 \
    --discovery-token-ca-cert-hash sha256:3fe215f2b665c40659a06ab1a874e44b64bf784fe78a3d7f2cbb792f26b34ada
           

The default lifetime of --token 1grr4q.3v09zfh9pl6q9546 is 24 hours. After it expires it can no longer be used to add master or node machines to the cluster, and a new token must be generated:

# List valid tokens
[root@k8s-master1 ~]# kubeadm token list

# Create a token; use the new token (e1zb26.ujwhegxxi452w53m) in place of the expired token (1grr4q.3v09zfh9pl6q9546)
[root@k8s-master1 ~]# kubeadm token create
e1zb26.ujwhegxxi452w53m

# List valid tokens again; the remaining lifetime is shown
[root@k8s-master1 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION   EXTRA GROUPS
e1zb26.ujwhegxxi452w53m   23h         2021-07-28T01:47:40Z   authentication,signing   <none>        system:bootstrappers:kubeadm:default-node-token
[root@k8s-master1 ~]#
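
A convenient shortcut: kubeadm can mint a fresh token and print the complete worker join command in one step:

kubeadm token create --print-join-command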
           

2.8.2 Initializing the Highly Available Masters from a File (not used in this example)

# Print the default init configuration
kubeadm config print init-defaults 
# Dump the default configuration to a file
kubeadm config print init-defaults > kubeadm-init.yaml
# Contents of the modified init file
[root@k8s-master1 ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.18.8.9
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.18.8.111:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.5
networking:
  dnsDomain: song.local
  podSubnet: 10.100.0.0/16
  serviceSubnet: 10.200.0.0/16
scheduler: {}

# Initialize the k8s master from the file
[root@k8s-master1 ~]# kubeadm init --config kubeadm-init.yaml
           

2.9 Configure the kube-config File and Network Component

Whether the environment was initialized from the command line or from a file, and whether it is a single machine or a cluster, the kube-config file and a network component still need to be configured.

2.9.1 The kube-config File

The kube-config file contains the kube-apiserver address and the credentials used to authenticate against it.

[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master1 ~]# kubectl get node
NAME          STATUS     ROLES                  AGE   VERSION
k8s-master1   NotReady   control-plane,master   69m   v1.20.5
[root@k8s-master1 ~]#
           

Deploy the flannel network component

https://github.com/coreos/flannel

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

[root@k8s-master1 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
           

Modify kube-flannel.yml in three places: the Network value in net-conf.json (set to the --pod-network-cidr) and the two flannel image references.

[root@k8s-master1 ~]# cat kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.100.0.0/16",   # 改為自定義的 pod 網關
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: harbor.tech.com/baseimages/flannel:v0.14.0  # Best pulled in advance and pushed to the company Harbor for faster downloads
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: harbor.tech.com/baseimages/flannel:v0.14.0  # Best pulled in advance and pushed to the company Harbor for faster downloads
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
[root@k8s-master1 ~]#
           

Apply the kube-flannel.yml file

[root@k8s-master1 ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@k8s-master1 ~]#
           

檢查 master 節點狀态

# 狀态從 NotReady 程式設計 Ready
[[email protected] ~]# kubectl get node
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   Ready    control-plane,master   85m   v1.20.5
[root@k8s-master1 ~]#
           

k8s credentials

# kubectl does not work on k8s-master2 by default
root@k8s-master2:~# kubectl get node
The connection to the server localhost:8080 was refused - did you specify the right host or port?


# 在 k8s-master1 将認證資訊傳到 k8s-master2 上
[email protected]:~# scp -r /root/.kube 172.18.8.19:/root

# Verify on k8s-master2
root@k8s-master2:~# ll /root/.kube/
total 20
drwxr-xr-x 3 root root 4096 Jul 26 01:34 ./
drwx------ 6 root root 4096 Jul 26 01:34 ../
drwxr-x--- 4 root root 4096 Jul 26 01:34 cache/
-rw------- 1 root root 5568 Jul 26 01:34 config
root@k8s-master2:~# kubectl get node
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   Ready    control-plane,master   42h   v1.20.5
k8s-master2   Ready    control-plane,master   39h   v1.20.5
k8s-master3   Ready    control-plane,master   39h   v1.20.5
k8s-node1     Ready    <none>                 39h   v1.20.5
k8s-node2     Ready    <none>                 39h   v1.20.5
k8s-node3     Ready    <none>                 39h   v1.20.5
k8s-node4     Ready    <none>                 39h   v1.20.5
root@k8s-master2:~#
           

2.9.2 Generate a Certificate Key on the Current Master for Adding New Control-Plane Nodes

[root@k8s-master1 ~]# kubeadm init phase upload-certs --upload-certs
I0724 08:34:59.116907    7356 version.go:254] remote version is much newer: v1.21.3; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
06caaf47534774d1a9459b47771c3f2792536e5aa1a59bfa8a29bc5b2a5e4a42
[root@k8s-master1 ~]#
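
Note that, per the kubeadm documentation, the kubeadm-certs Secret and its decryption key expire after two hours; if the additional masters are not joined promptly, rerun the upload to obtain a fresh certificate key:

kubeadm init phase upload-certs --upload-certs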
           

2.10 Adding Nodes to the k8s Cluster

将其他的 master 節點及 node 節點分别添加到 k8s 叢集中

2.10.1 Master Node 2

Run the following on another master that already has docker, kubeadm, and kubelet installed:

# Note: --token 1grr4q.3v09zfh9pl6q9546 is valid for 24 hours; generate a new token once it expires
kubeadm join 172.18.8.111:6443 --token 1grr4q.3v09zfh9pl6q9546 \
    --discovery-token-ca-cert-hash sha256:3fe215f2b665c40659a06ab1a874e44b64bf784fe78a3d7f2cbb792f26b34ada \
    --control-plane --certificate-key 06caaf47534774d1a9459b47771c3f2792536e5aa1a59bfa8a29bc5b2a5e4a42
           

2.10.2 Master Node 3

# Note: --token 1grr4q.3v09zfh9pl6q9546 is valid for 24 hours; generate a new token once it expires
kubeadm join 172.18.8.111:6443 --token 1grr4q.3v09zfh9pl6q9546 \
    --discovery-token-ca-cert-hash sha256:3fe215f2b665c40659a06ab1a874e44b64bf784fe78a3d7f2cbb792f26b34ada \
    --control-plane --certificate-key 06caaf47534774d1a9459b47771c3f2792536e5aa1a59bfa8a29bc5b2a5e4a42
           

檢查 master 節點狀态

[root@k8s-master1 ~]# kubectl get node
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   164m    v1.20.5
k8s-master2   Ready    control-plane,master   9m23s   v1.20.5
k8s-master3   Ready    control-plane,master   2m42s   v1.20.5
[root@k8s-master1 ~]#
           

2.10.3 Adding the node Nodes

Every node that joins the cluster needs docker, kubeadm, and kubelet, so repeat those installation steps on each node: configure the apt repository, configure the docker mirror, install the packages, and start the kubelet service.

The join command is the one printed by kubeadm init on the master.

# Note: --token 1grr4q.3v09zfh9pl6q9546 is valid for 24 hours; once expired, a new token must be generated
[root@k8s-node1 ~]# kubeadm join 172.18.8.111:6443 --token 1grr4q.3v09zfh9pl6q9546 \
>     --discovery-token-ca-cert-hash sha256:3fe215f2b665c40659a06ab1a874e44b64bf784fe78a3d7f2cbb792f26b34ada

[root@k8s-node2 ~]# kubeadm join 172.18.8.111:6443 --token 1grr4q.3v09zfh9pl6q9546 \
>     --discovery-token-ca-cert-hash sha256:3fe215f2b665c40659a06ab1a874e44b64bf784fe78a3d7f2cbb792f26b34ada

[root@k8s-node3 ~]# kubeadm join 172.18.8.111:6443 --token 1grr4q.3v09zfh9pl6q9546 \
>     --discovery-token-ca-cert-hash sha256:3fe215f2b665c40659a06ab1a874e44b64bf784fe78a3d7f2cbb792f26b34ada

[root@k8s-node4 ~]# kubeadm join 172.18.8.111:6443 --token 1grr4q.3v09zfh9pl6q9546 \
>     --discovery-token-ca-cert-hash sha256:3fe215f2b665c40659a06ab1a874e44b64bf784fe78a3d7f2cbb792f26b34ada
           

2.10.4 驗證目前 node 狀态

Each node joins the masters automatically, pulls the images, and starts flannel; eventually the master shows every node as Ready.

[root@k8s-master1 ~]# kubectl get node
NAME          STATUS   ROLES                  AGE     VERSION
k8s-master1   Ready    control-plane,master   3h31m   v1.20.5
k8s-master2   Ready    control-plane,master   57m     v1.20.5
k8s-master3   Ready    control-plane,master   50m     v1.20.5
k8s-node1     Ready    <none>                 8m38s   v1.20.5
k8s-node2     Ready    <none>                 8m33s   v1.20.5
k8s-node3     Ready    <none>                 8m30s   v1.20.5
k8s-node4     Ready    <none>                 8m27s   v1.20.5
[root@k8s-master1 ~]#
           

2.10.5 驗證目前證書狀态

[root@k8s-master1 ~]# kubectl get csr
NAME        AGE     SIGNERNAME                                    REQUESTOR                 CONDITION
csr-2cr92   9m35s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:1grr4q   Approved,Issued
csr-5l6gf   58m     kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:1grr4q   Approved,Issued
csr-7hnct   9m40s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:1grr4q   Approved,Issued
csr-b9pbb   51m     kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:1grr4q   Approved,Issued
csr-pxs4q   9m29s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:1grr4q   Approved,Issued
csr-vqhtx   9m32s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:1grr4q   Approved,Issued
[root@k8s-master1 ~]#
           

2.10.6 Create Pods and Test the Internal Network

Create test containers and verify that they can communicate over the network.

Note: on a single-master cluster, the master taint must be removed so pods may run on the master:

kubectl taint nodes --all node-role.kubernetes.io/master-

[root@k8s-master1 ~]# kubectl run net-test1 --image=alpine sleep 360000
pod/net-test1 created
[root@k8s-master1 ~]# kubectl run net-test2 --image=alpine sleep 360000
pod/net-test2 created
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME        READY   STATUS              RESTARTS   AGE   IP       NODE        NOMINATED NODE   READINESS GATES
net-test1   0/1     ContainerCreating   0          42s   <none>   k8s-node2   <none>           <none>
net-test2   0/1     ContainerCreating   0          34s   <none>   k8s-node1   <none>           <none>

# 狀态都為 Running 表示建立成功
[[email protected] ~]# kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          72s   10.100.4.2   k8s-node2   <none>           <none>
net-test2   1/1     Running   0          64s   10.100.3.2   k8s-node1   <none>           <none>
[root@k8s-master1 ~]#
           

Verify communication between the containers (a sketch of the commands follows below)

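A sketch of the commands behind the screenshot that appeared here, using the pod IPs from the kubectl get pod -o wide output above:

kubectl exec -it net-test1 -- sh
/ # ping -c 2 10.100.3.2    # the IP of net-test2 on k8s-node1
/ # exit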

2.10.7 Verify External Network Access

# docker exec can only enter containers on the local machine
# kubectl exec can enter containers on any node in the cluster
[root@k8s-master1 ~]# kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping www.baidu.com -c5
PING www.baidu.com (110.242.68.4): 56 data bytes
64 bytes from 110.242.68.4: seq=0 ttl=52 time=38.544 ms
64 bytes from 110.242.68.4: seq=1 ttl=52 time=37.649 ms
64 bytes from 110.242.68.4: seq=2 ttl=52 time=45.911 ms
64 bytes from 110.242.68.4: seq=3 ttl=52 time=37.187 ms
64 bytes from 110.242.68.4: seq=4 ttl=52 time=35.279 ms

--- www.baidu.com ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 35.279/38.914/45.911 ms
/ #
           
