
Binary deployment of a highly available Kubernetes cluster

Author: 技術怪圈

For single-node deployment with kubeadm, see: Kubeadm single-node deployment of Kubernetes v1.23

This deployment uses the open-source kubeasz project from GitHub and installs Kubernetes from binaries. Supported systems: CentOS/RedHat 7, Debian 9/10, Ubuntu 16.04/18.04/20.04.

  • Note 1: Make sure all nodes use the same timezone and have synchronized clocks. If your environment does not provide NTP time synchronization, consider the [chrony] option covered later in the hosts file.
  • Note 2: Start from a clean system; do not reuse an environment where kubeadm or another k8s distribution has been installed.
  • Note 3: It is recommended to upgrade the OS to a recent stable kernel; see the kernel upgrade document in the kubeasz docs.
  • Note 4: Set up passwordless SSH between all nodes.

Architecture diagram


1. Cluster system environment

root@k8s-master:~# cat /etc/issue
Ubuntu 20.04.4 LTS \n \l

- docker: 20.10.9
- k8s: v1.23.1

2. IP and role planning

Below is the IP and host plan for this VM cluster. Resources are limited, so some nodes double up on roles. With enough resources, it is best to run an odd number of master nodes (three or more) and three etcd nodes, ideally one VM per service. The environment for this article is as follows:

IP            HostName             Role             VIP
172.31.7.2    k8s-master.host.com  master/etcd/HA1  172.31.7.188
172.31.7.3    k8s-node1.host.com   node
172.31.7.4    k8s-node2.host.com   node
172.31.7.5    k8s-node3.host.com   node
172.31.7.252  harbor1.host.com     harbor/HA2

3. System initialization and global variables

3.1 Set hostnames (details omitted here)

~# hostnamectl set-hostname k8s-master.host.com  # on the other nodes, set their respective hostnames

3.2 Modify the IP configuration on Ubuntu (one node as an example)

root@k8s-master:~# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      dhcp4: no
      addresses: [172.31.7.2/16]  # ip
      gateway4: 172.31.7.254
      nameservers:
        addresses: [114.114.114.114]  # dns
  version: 2
  renderer: networkd

# apply the new network configuration
netplan apply
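
If you are editing the file over SSH, netplan's trial mode is a safer way to apply the change, since it rolls back automatically unless you confirm within the timeout:

# applies the config, then auto-reverts unless confirmed
netplan try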

3.3 Set the system timezone and clock synchronization

timedatectl set-timezone Asia/Shanghai

root@k8s-master:/etc/default# cat /etc/default/locale
LANG=en_US.UTF-8
LC_TIME=en_DK.UTF-8

root@k8s-master:~# cat /var/spool/cron/crontabs/root
*/5 * * * * ntpdate time1.aliyun.com &> /dev/null && hwclock -w           

3.4 Kernel parameter tuning

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.ipv4.ip_forward=1
vm.max_map_count=262144
kernel.pid_max=4194303
fs.file-max=1000000
net.ipv4.tcp_max_tw_buckets=6000
net.netfilter.nf_conntrack_max=2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

modprobe ip_conntrack
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf

reboot
# take a snapshot of each node
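
Optionally, before rebooting, a quick sanity check confirms the modules are loaded and the parameters took effect:

# verify the modules and the bridge/forwarding settings
lsmod | grep -E 'conntrack|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward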

3.5 Passwordless SSH setup

root@k8s-master:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:MQhTEGzxTr9rnP412bdROTZVgW6ZfnUiU4b6b6VIzok [email protected]
The key's randomart image is:
+---[RSA 3072]----+
|   .*=.      ...o|
|    o+ .    ..o .|
|   .  + o  ..oo .|
|     o . o. o=. =|
|      . S  .oo *+|
|         .  o+..=|
|       ... =++o+.|
|        +.E.=.+.o|
|       oo..  . . |
+----[SHA256]-----+

  # $IPs is the list of all node addresses, including this host; answer yes and enter the root password when prompted
root@k8s-master:~# ssh-copy-id $IPs
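
Since ssh-copy-id pushes the key to one host at a time, a minimal loop over the node IPs from the plan does the job (you are prompted for each node's root password):

for ip in 172.31.7.2 172.31.7.3 172.31.7.4 172.31.7.5 172.31.7.252; do
    ssh-copy-id root@${ip}
done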

4. Deploy the highly available load balancer

On the k8s-master and harbor nodes:

# k8s-master.host.com and harbor1.host.com

# install keepalived and haproxy
apt install keepalived haproxy -y
cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf

# keepalived config on the primary node
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from [email protected]
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER  # use BACKUP on the other node
    interface ens33  # must match this host's NIC name
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51  # must be unique per virtual router, and identical on all keepalived nodes of the same virtual router
    priority 100  # set to 80 on the other node
    advert_int 1
    authentication {
        auth_type PASS  # pre-shared key auth; must match on all keepalived nodes of the same virtual router
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.7.188 dev ens33 label ens33:0
        172.31.7.189 dev ens33 label ens33:1
        172.31.7.190 dev ens33 label ens33:2
    }
}

#複制keepalived配置檔案到另一台節點,按上面要求修改配置
cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
#重新開機keepalived
systemctl restart keepalived.service && systemctl enable keepalived           

The VIP is now active.
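
To confirm, check that the VIPs are bound to the interface (as the labels ens33:0 through ens33:2):

ip addr show ens33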

Edit the haproxy configuration file

# add a listen section
# cat /etc/haproxy/haproxy.cfg
listen k8s-cluster-01-6443
        bind 172.31.7.188:6443   # listen on port 6443 of the VIP
        mode tcp                 # tcp mode
        # if there are multiple masters, add them all here
        server k8s-master.host.com 172.31.7.2:6443 check inter 3s fall 3 rise 1
        #server k8s-master2.host.com 172.31.7.X:6443 check inter 3s fall 3 rise 1

# restart haproxy
root@k8s-master:~# systemctl restart haproxy.service
root@k8s-master:~# systemctl enable haproxy.service

# copy the haproxy config to the other ha node
scp  /etc/haproxy/haproxy.cfg  172.31.7.252:/etc/haproxy/

# add the following kernel parameter on both ha nodes (e.g. in /etc/sysctl.conf), then apply it with sysctl -p
net.ipv4.ip_nonlocal_bind = 1  # lets haproxy start and bind even when the VIP is not present on this node

Check the listening IP and port.
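
A minimal check that haproxy is now listening on the VIP's 6443 port:

ss -tnlp | grep 6443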

5. Deploy the local harbor registry

Please refer to: Harbor deployment (Docker registry series)

6. Deploy Kubernetes

6.1 Install ansible on the deploy node (deployed directly on the master node here)

apt install ansible -y

# create a python symlink on every node
root@k8s-master:~# ln -s /usr/bin/python3.8 /usr/bin/python
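
A minimal sketch to create the same symlink on the remaining nodes over SSH (node IPs from the plan; relies on the passwordless SSH configured in 3.5):

for ip in 172.31.7.3 172.31.7.4 172.31.7.5; do
    ssh root@${ip} 'ln -s /usr/bin/python3.8 /usr/bin/python'
done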

6.2 Download the project source, binaries, and offline images

Download the ezdown tool script; check the official kubeasz documentation for which release matches your target Kubernetes version.

# example: using kubeasz release 3.2.0
export release=3.2.0
wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod +x ./ezdown
# download with the tool script
root@k8s-master1:~# ./ezdown --help
./ezdown: illegal option -- -
Usage: ezdown [options] [args]
  option: -{DdekSz}
    -C         stop&clean all local containers
    -D         download all into "/etc/kubeasz"
    -P         download system packages for offline installing
    -R         download Registry(harbor) offline installer
    -S         start kubeasz in a container
    -d <ver>   set docker-ce version, default "19.03.15"
    -e <ver>   set kubeasz-ext-bin version, default "1.0.0"
    -k <ver>   set kubeasz-k8s-bin version, default "v1.23.1"
    -m <str>   set docker registry mirrors, default "CN"(used in Mainland,China)
    -p <ver>   set kubeasz-sys-pkg version, default "0.4.2"
    -z <ver>   set kubeasz version, default "3.2.0"

# ./ezdown -D downloads everything into the /etc/kubeasz directory
root@k8s-master:~# ./ezdown -D
......
......
60775238382e: Pull complete
528677575c0b: Pull complete
Digest: sha256:f741e403b3ca161e784163de3ebde9190905fdbf7dfaa463620ab8f16c0f6423
Status: Downloaded newer image for easzlab/nfs-subdir-external-provisioner:v4.0.2
docker.io/easzlab/nfs-subdir-external-provisioner:v4.0.2
3.2.0: Pulling from easzlab/kubeasz
Digest: sha256:55910c9a401c32792fa4392347697b5768fcc1fd5a346ee099e48f5ec056a135
Status: Image is up to date for easzlab/kubeasz:3.2.0
docker.io/easzlab/kubeasz:3.2.0
2022-04-14 14:14:57 INFO Action successed: download_all

# after the script succeeds, all files (kubeasz code, binaries, offline images) are organized under /etc/kubeasz
root@k8s-master:~# ll /etc/kubeasz/
total 120
drwxrwxr-x  11 root root  4096 Apr 14 13:32 ./
drwxr-xr-x 101 root root  4096 Apr 14 13:08 ../
-rw-rw-r--   1 root root   301 Jan  5 20:19 .gitignore
-rw-rw-r--   1 root root  6137 Jan  5 20:19 README.md
-rw-rw-r--   1 root root 20304 Jan  5 20:19 ansible.cfg
drwxr-xr-x   3 root root  4096 Apr 14 13:32 bin/
drwxrwxr-x   8 root root  4096 Jan  5 20:28 docs/
drwxr-xr-x   2 root root  4096 Apr 14 14:14 down/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 example/
-rwxrwxr-x   1 root root 24716 Jan  5 20:19 ezctl*
-rwxrwxr-x   1 root root 15350 Jan  5 20:19 ezdown*
drwxrwxr-x  10 root root  4096 Jan  5 20:28 manifests/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 pics/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 playbooks/
drwxrwxr-x  22 root root  4096 Jan  5 20:28 roles/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 tools/           

6.3 Create a cluster configuration instance

root@k8s-master:/etc/kubeasz# ./ezctl new k8s-cluster-01
2022-04-14 14:34:07 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-cluster-01
2022-04-14 14:34:07 DEBUG set versions
2022-04-14 14:34:07 DEBUG cluster k8s-cluster-01: files successfully created.
2022-04-14 14:34:07 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-cluster-01/hosts'
2022-04-14 14:34:07 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-cluster-01/config.yml'

root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# vim hosts
root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# vim config.yml           

Then, following the prompts, configure /etc/kubeasz/clusters/k8s-cluster-01/hosts and /etc/kubeasz/clusters/k8s-cluster-01/config.yml: adjust the hosts file to match the node plan above along with the main cluster-level options; other cluster component settings can be changed in config.yml.

6.4 Edit the ansible hosts file

root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# cat hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
172.31.7.2

# master node(s)
[kube_master]
172.31.7.2

# work node(s)
[kube_node]
172.31.7.3
172.31.7.4
172.31.7.5

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]  # choose 'false' here; I have prepared my own harbor registry
#172.31.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]  # change to your own VIP address and port
172.31.1.6 LB_ROLE=backup EX_APISERVER_VIP=172.31.7.188 EX_APISERVER_PORT=6443
172.31.1.7 LB_ROLE=master EX_APISERVER_VIP=172.31.7.188 EX_APISERVER_PORT=6443

# [optional] ntp server for the cluster
[chrony]  # time sync is not needed either; it was handled during environment preparation
#172.31.1.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"


# using docker as the container runtime here
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# calico as the network plugin
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# service network CIDR
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"

# pod network CIDR
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"

# NodePort range, needed later when exposing services
# NodePort Range
NODE_PORT_RANGE="30000-65525"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/local/bin"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-cluster-01"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"           

6.5 Edit the ansible config file

root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# cat config.yml
############################
# prepare
############################
# optional: install system packages offline (offline|online)
INSTALL_SOURCE: "online"

# optional: OS security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false

# NTP servers [important: time must stay in sync across the cluster]
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"

# network segments allowed to sync time internally, e.g. "10.0.0.0/8"; the default allows all
local_network: "0.0.0.0/0"

############################
# role:deploy
############################
# CA certificate validity period
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"

# k8s version
K8S_VER: "1.23.1"

############################
# role:etcd
############################
# a separate wal directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable container registry mirrors
ENABLE_MIRROR_REGISTRY: true

# the pause image can be self-hosted; I keep it in my local harbor registry
# [containerd] sandbox (pause) image
SANDBOX_IMAGE: "harbor.host.com/base/pause:3.6"

# [containerd] persistent container storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker] enable the remote RESTful API
ENABLE_REMOTE_API: false

# [docker] trusted insecure (HTTP) registries
INSECURE_REG: '["127.0.0.1/8"]'


############################
# role:kube-master
############################
# master-node certificate hosts; multiple IPs and domains may be added (e.g. a public IP and domain)
MASTER_CERT_HOSTS:
  - "172.31.7.188"
  - "10.1.1.1"
  - "k8s.test.io"
  #- "www.test.com"

# pod subnet mask length per node (determines how many pod IPs each node can allocate)
# if flannel runs with --kube-subnet-mgr, it reads this value to assign each node's pod subnet
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# max pods per node
MAX_PODS: 110

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"

# upstream k8s advises against enabling system-reserved casually, unless long-term monitoring
# has given you a clear picture of the system's resource usage; reservations should also grow
# with system uptime, see templates/kubelet-config.yaml.j2. The defaults assume a 4c/8g VM with
# a minimal OS install; increase them on high-performance physical machines. Also, apiserver and
# friends briefly consume a lot of resources during installation, so reserve at least 1g of memory.
SYS_RESERVED_ENABLED: "no"

# haproxy balance mode
BALANCE_ALG: "roundrobin"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] backend: "host-gw", "vxlan", etc.
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.15.1"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"

# [flannel] offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"

# ------------------------------------------- calico
# [calico] setting CALICO_IPV4POOL_IPIP: "off" can improve network performance; see docs/setup/calico.md for the constraints
CALICO_IPV4POOL_IPIP: "Always"

# [calico] host IP used by calico-node; bgp peering is established over this address; set it manually or use auto-detection
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico] network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico] versions supported for upgrade: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.19.3"

# [calico] calico major version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# [calico] offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"

# ------------------------------------------- cilium
# [cilium] number of etcd nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1

# [cilium] image version
cilium_ver: "v1.4.1"

# [cilium] offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"

# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane; defaults to the first master node
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"

# ------------------------------------------- kube-router
# [kube-router] public clouds impose restrictions and generally need ipinip always on; in your own environment you can set "subnet"
OVERLAY_TYPE: "full"

# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: "true"

# [kube-router] image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"

# [kube-router] offline image tarballs
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"


############################
# role:cluster-addon
############################
# auto-install coredns
dns_install: "no"
corednsVer: "1.8.6"
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.21.1"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# auto-install metrics server
metricsserver_install: "no"
metricsVer: "v0.5.2"

# auto-install dashboard
dashboard_install: "no"
dashboardVer: "v2.4.0"
dashboardMetricsScraperVer: "v1.0.7"

# auto-install ingress
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "10.3.0"

# auto-install prometheus
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"

# auto-install nfs-provisioner
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

############################
# role:harbor
############################
# harbor version (full version string)
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true           

6.6 Deploy the Kubernetes cluster

6.6.1 Environment initialization

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 --help
Usage: ezctl setup <cluster> <step>
available steps:
    01  prepare            to prepare CA/certs & kubeconfig & other system settings
    02  etcd               to setup the etcd cluster
    03  container-runtime  to setup the container runtime(docker or containerd)
    04  kube-master        to setup the master nodes
    05  kube-node          to setup the worker nodes
    06  network            to setup the network plugin
    07  cluster-addon      to setup other useful plugins
    90  all                to run 01~07 all at once
    10  ex-lb              to install external loadbalance for accessing k8s from outside
    11  harbor             to install a new harbor server or to integrate with an existed one

examples: ./ezctl setup test-k8s 01  (or ./ezctl setup test-k8s prepare)
          ./ezctl setup test-k8s 02  (or ./ezctl setup test-k8s etcd)
          ./ezctl setup test-k8s all
          ./ezctl setup test-k8s 04 -t restart_master
          
root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 01           

6.6.2 Deploy the etcd cluster

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 02

#====================== Verification ==========================
root@k8s-master:/etc/kubeasz# export NODE_IPS="172.31.7.2"
root@k8s-master:/etc/kubeasz# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done

# expected result:
https://172.31.7.2:2379 is healthy: successfully committed proposal: took = 4.734365ms           
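
The same certificate flags also work for other etcdctl subcommands; for example, listing cluster members (useful once etcd has more than one node):

ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://172.31.7.2:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem member list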

6.6.3 Deploy Docker

For manual installation, refer to the Docker-CE installation article.

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 03

#====================== Verification ==========================
root@k8s-master:/etc/kubeasz# docker info
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.9
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
......
........

 Live Restore Enabled: true
 Product License: Community Engine

           

6.6.4 Deploy the master nodes

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 04

#====================== Verification ==========================
root@k8s-master:/etc/kubeasz# kubectl get nodes
NAME         STATUS                     ROLES    AGE   VERSION
172.31.7.2   Ready,SchedulingDisabled   master   41s   v1.23.1           
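
The master shows SchedulingDisabled because kubeasz cordons master nodes by default, which keeps ordinary workloads off the control plane.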

6.6.5 Deploy the worker nodes

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 05

#====================== Verification ==========================
root@k8s-master:/etc/kubeasz# kubectl get nodes
NAME         STATUS                     ROLES    AGE     VERSION
172.31.7.2   Ready,SchedulingDisabled   master   2m25s   v1.23.1
172.31.7.3   Ready                      node     18s     v1.23.1
172.31.7.4   Ready                      node     18s     v1.23.1
172.31.7.5   Ready                      node     18s     v1.23.1

# on a worker node
root@k8s-node1:~# cat /etc/kube-lb/conf/kube-lb.conf
user root;
worker_processes 1;

error_log  /etc/kube-lb/logs/error.log warn;

events {
    worker_connections  3000;
}

stream {
    upstream backend {
        server 172.31.7.2:6443    max_fails=2 fail_timeout=3s;
    }

    server {
        listen 127.0.0.1:6443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}           
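
This kube-lb (an nginx stream proxy) runs on every node so that local components reach the apiserver via 127.0.0.1:6443; with more masters, each one gets a server line in the upstream block. As a rough check from a node, an unauthenticated request should still receive an HTTP response from the apiserver (typically 401/403, depending on its anonymous-auth settings):

curl -sk https://127.0.0.1:6443/healthz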

6.6.6 Deploy the calico network service

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 06

#====================== Verification ==========================
root@k8s-master:/etc/kubeasz# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-754966f84c-fmtkl   1/1     Running   0          13m
kube-system   calico-node-gbgnn                          1/1     Running   0          13m
kube-system   calico-node-n6scc                          1/1     Running   0          13m
kube-system   calico-node-tdw75                          1/1     Running   0          13m
kube-system   calico-node-vzw96                          1/1     Running   0          13m


root@k8s-master:/etc/kubeasz# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 172.31.7.3   | node-to-node mesh | up    | 05:01:55 | Established |
| 172.31.7.4   | node-to-node mesh | up    | 05:03:48 | Established |
| 172.31.7.5   | node-to-node mesh | up    | 05:04:31 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.           

Verify the network

# create several pods
kubectl run net-test1 --image=harbor.host.com/base/centos-base:202211162129 sleep 360000
kubectl run net-test2 --image=harbor.host.com/base/centos-base:202211162129 sleep 360000
kubectl run net-test3 --image=harbor.host.com/base/centos-base:202211162129 sleep 360000

root@k8s-master:/etc/kubeasz# kubectl get pods -owide
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          3m32s   10.200.12.1      172.31.7.5   <none>           <none>
net-test2   1/1     Running   0          3m32s   10.200.228.129   172.31.7.3   <none>           <none>
net-test3   1/1     Running   0          3m29s   10.200.111.129   172.31.7.4   <none>           <none>
root@k8s-master:/etc/kubeasz# kubectl exec -it -n default net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test1 /]# ping 10.200.111.129
PING 10.200.111.129 (10.200.111.129) 56(84) bytes of data.
64 bytes from 10.200.111.129: icmp_seq=1 ttl=62 time=0.402 ms
64 bytes from 10.200.111.129: icmp_seq=2 ttl=62 time=0.452 ms
64 bytes from 10.200.111.129: icmp_seq=3 ttl=62 time=0.457 ms
^C
--- 10.200.111.129 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2025ms
rtt min/avg/max/mdev = 0.402/0.437/0.457/0.024 ms
[root@net-test1 /]# ping 10.200.228.129
PING 10.200.228.129 (10.200.228.129) 56(84) bytes of data.
64 bytes from 10.200.228.129: icmp_seq=1 ttl=62 time=0.463 ms
64 bytes from 10.200.228.129: icmp_seq=2 ttl=62 time=0.977 ms
64 bytes from 10.200.228.129: icmp_seq=3 ttl=62 time=0.705 ms
64 bytes from 10.200.228.129: icmp_seq=4 ttl=62 time=0.483 ms           

Pod-to-pod network connectivity is verified.
