
Binary Deployment of a Highly Available Kubernetes Cluster

Author: 技术怪圈

For a single-node kubeadm deployment, see: Kubeadm single-node deployment of Kubernetes v1.23

This deployment uses kubeasz, an open-source project on GitHub, and installs Kubernetes from binaries. Supported operating systems are CentOS/RedHat 7, Debian 9/10, and Ubuntu 16.04/18.04/20.04.

  • Note 1: Make sure all nodes share the same time zone and have their clocks synchronized (a quick check is shown below). If your environment has no NTP time source, consider enabling the optional [chrony] role in the hosts file later on.
  • Note 2: Start from a clean system; do not reuse machines that previously ran kubeadm or another Kubernetes distribution.
  • Note 3: It is recommended to upgrade the OS to a recent stable kernel; read the kernel upgrade guide in the kubeasz documentation.
  • Note 4: Set up passwordless SSH between all nodes.
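
A quick way to verify Note 1 on each node (not part of the original write-up) is timedatectl, which reports the configured time zone and whether the clock is NTP-synchronized:

timedatectl    # check "Time zone" and "System clock synchronized: yes" on every node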

Architecture diagram


1. Cluster system environment

root@k8s-master:~# cat /etc/issue
Ubuntu 20.04.4 LTS \n \l

- docker: 20.10.9
- k8s: v1.23.1

2. IP and role planning

Below is the IP and role plan for the virtual machines used in this installation. Because resources are limited, some nodes carry multiple roles. With sufficient resources, use an odd number of master nodes (three or more) and three etcd nodes, ideally with one VM per service. The environment used in this article is as follows:

IP            HostName              Role              VIP
172.31.7.2    k8s-master.host.com   master/etcd/HA1   172.31.7.188
172.31.7.3    k8s-node1.host.com    node
172.31.7.4    k8s-node2.host.com    node
172.31.7.5    k8s-node3.host.com    node
172.31.7.252  harbor1.host.com      harbor/HA2
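
These host names (and the registry name harbor.host.com used later for the local harbor) must resolve on every node. If no internal DNS is available, a minimal sketch of matching /etc/hosts entries based on the plan above (the harbor.host.com alias is an assumption, not from the original):

cat >> /etc/hosts <<EOF
172.31.7.2    k8s-master.host.com
172.31.7.3    k8s-node1.host.com
172.31.7.4    k8s-node2.host.com
172.31.7.5    k8s-node3.host.com
172.31.7.252  harbor1.host.com harbor.host.com
EOF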

3. System initialization and global variables

3.1 Set the hostname (abridged here)

~# hostnamectl set-hostname k8s-master.host.com   # set the corresponding hostname on each of the other nodes

3.2 Configure the IP address on Ubuntu (one node shown as an example)

root@k8s-master:~# cat /etc/netplan/00-installer-config.yaml
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens33:
      dhcp4: no
      addresses: [172.31.7.2/16]        # ip
      gateway4: 172.31.7.254
      nameservers:
        addresses: [114.114.114.114]    # dns
  version: 2
  renderer: networkd

# apply the changes and restart networking after editing
netplan apply

3.3 Set the system time zone and clock synchronization

timedatectl set-timezone Asia/Shanghai

root@k8s-master:/etc/default# cat /etc/default/locale
LANG=en_US.UTF-8
LC_TIME=en_DK.UTF-8

root@k8s-master:~# cat /var/spool/cron/crontabs/root
*/5 * * * * ntpdate time1.aliyun.com &> /dev/null && hwclock -w           
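
The cron job above relies on ntpdate, which is not installed by default on Ubuntu 20.04; if it is missing, install it first (an extra step not shown in the original):

apt install ntpdate -y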

3.4 Kernel parameter tuning

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.ipv4.ip_forward=1
vm.max_map_count=262144
kernel.pid_max=4194303
fs.file-max=1000000
net.ipv4.tcp_max_tw_buckets=6000
net.netfilter.nf_conntrack_max=2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

modprobe ip_conntrack
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf

reboot
# take a snapshot of each node
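
Modules loaded with modprobe do not persist across the reboot on their own, so the bridge-related sysctl settings above may fail to apply afterwards. A minimal sketch (not in the original) that makes the modules load at boot:

cat > /etc/modules-load.d/kubernetes.conf <<EOF
ip_conntrack
br_netfilter
EOF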

3.5 Passwordless SSH setup

root@k8s-master:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:MQhTEGzxTr9rnP412bdROTZVgW6ZfnUiU4b6b6VIzok root@k8s-master.host.com
The key's randomart image is:
+---[RSA 3072]----+
|   .*=.      ...o|
|    o+ .    ..o .|
|   .  + o  ..oo .|
|     o . o. o=. =|
|      . S  .oo *+|
|         .  o+..=|
|       ... =++o+.|
|        +.E.=.+.o|
|       oo..  . . |
+----[SHA256]-----+

  # $IPs stands for all node IPs, including this node; answer yes and enter the root password when prompted
root@k8s-master:~# ssh-copy-id $IPs            
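
For reference, a minimal loop over the node IPs from the plan in section 2 (answer the prompts as they appear):

for ip in 172.31.7.2 172.31.7.3 172.31.7.4 172.31.7.5 172.31.7.252; do
  ssh-copy-id root@${ip}
done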

4. Deploy the highly available load balancer

On the k8s-master and harbor nodes

# on k8s-master.host.com and harbor.host.com

# install keepalived and haproxy
#apt install keepalived haproxy -y
cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf

# keepalived configuration on the MASTER node
#cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER               # use BACKUP on the other node
    interface ens33            # must match this host's NIC name
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51       # must be unique per virtual router and identical on all keepalived nodes in the same virtual router
    priority 100               # set to 80 on the other node
    advert_int 1
    authentication {
        auth_type PASS         # pre-shared key authentication; must match on all keepalived nodes in the same virtual router
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.7.188 dev ens33 label ens33:0
        172.31.7.189 dev ens33 label ens33:1
        172.31.7.190 dev ens33 label ens33:2

    }
}

# on the other HA node, copy the sample config as well and adjust it according to the notes above
cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
# restart keepalived and enable it at boot
systemctl restart keepalived.service && systemctl enable keepalived

The VIP is now active.
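
The original screenshot showed the VIPs bound on the MASTER node; a quick check (command not from the original):

ip addr show ens33    # 172.31.7.188 should appear under the ens33:0 label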

Edit the haproxy configuration file

# add a listen section
#cat /etc/haproxy/haproxy.cfg
listen k8s-cluster-01-6443
        bind 172.31.7.188:6443   # listen on port 6443 of the VIP
        mode tcp                 # tcp mode
        # if there are several masters, add all of them here
        server k8s-master.host.com 172.31.7.2:6443 check inter 3s fall 3 rise 1
        #server k8s-master2.host.com 172.31.7.X:6443 check inter 3s fall 3 rise 1

# restart haproxy
root@k8s-master:~# systemctl restart haproxy.service
root@k8s-master:~# systemctl enable haproxy.service

# copy the haproxy config to the other HA node
scp  /etc/haproxy/haproxy.cfg  172.31.7.252:/etc/haproxy/

# add the following kernel parameter on both HA nodes
net.ipv4.ip_nonlocal_bind = 1   # allows haproxy to bind to the VIP even when this node does not currently hold it
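
A minimal sketch of applying that parameter, assuming the same sysctl file as in section 3.4:

echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf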

Check the listening IP and port.
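
The original screenshot showed the haproxy listener; it can be reproduced with something like:

ss -tnlp | grep 6443    # haproxy should be listening on 172.31.7.188:6443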

5. Deploy the local harbor registry

See the separate article on deploying a harbor Docker registry.

6. Deploy Kubernetes

6.1 Install ansible on the deployment node (installed directly on the master node here)

apt install ansible -y

#create a python symlink on every node (a loop sketch follows below)
root@k8s-master:~# ln -s /usr/bin/python3.8 /usr/bin/python           
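
Ansible needs python on every managed node, so the same symlink has to exist on the worker nodes as well. A minimal sketch over SSH, with the IP list taken from section 2:

for ip in 172.31.7.3 172.31.7.4 172.31.7.5; do
  ssh root@${ip} "ln -s /usr/bin/python3.8 /usr/bin/python"
done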

6.2 Download the project source, binaries, and offline images

Download the ezdown tool script; see the official kubeasz documentation for which kubeasz release corresponds to your Kubernetes version.

# this example uses kubeasz version 3.2.0
export release=3.2.0
wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
chmod +x ./ezdown
# download with the tool script
root@k8s-master1:~# ./ezdown --help
./ezdown: illegal option -- -
Usage: ezdown [options] [args]
  option: -{DdekSz}
    -C         stop&clean all local containers
    -D         download all into "/etc/kubeasz"
    -P         download system packages for offline installing
    -R         download Registry(harbor) offline installer
    -S         start kubeasz in a container
    -d <ver>   set docker-ce version, default "19.03.15"
    -e <ver>   set kubeasz-ext-bin version, default "1.0.0"
    -k <ver>   set kubeasz-k8s-bin version, default "v1.23.1"
    -m <str>   set docker registry mirrors, default "CN"(used in Mainland,China)
    -p <ver>   set kubeasz-sys-pkg version, default "0.4.2"
    -z <ver>   set kubeasz version, default "3.2.0"

#./ezdown -D downloads everything into /etc/kubeasz automatically
root@k8s-master:~# ./ezdown -D
......
......
60775238382e: Pull complete
528677575c0b: Pull complete
Digest: sha256:f741e403b3ca161e784163de3ebde9190905fdbf7dfaa463620ab8f16c0f6423
Status: Downloaded newer image for easzlab/nfs-subdir-external-provisioner:v4.0.2
docker.io/easzlab/nfs-subdir-external-provisioner:v4.0.2
3.2.0: Pulling from easzlab/kubeasz
Digest: sha256:55910c9a401c32792fa4392347697b5768fcc1fd5a346ee099e48f5ec056a135
Status: Image is up to date for easzlab/kubeasz:3.2.0
docker.io/easzlab/kubeasz:3.2.0
2022-04-14 14:14:57 INFO Action successed: download_all

#once the script finishes successfully, all files (kubeasz code, binaries, offline images) are placed under /etc/kubeasz
root@k8s-master:~# ll /etc/kubeasz/
total 120
drwxrwxr-x  11 root root  4096 Apr 14 13:32 ./
drwxr-xr-x 101 root root  4096 Apr 14 13:08 ../
-rw-rw-r--   1 root root   301 Jan  5 20:19 .gitignore
-rw-rw-r--   1 root root  6137 Jan  5 20:19 README.md
-rw-rw-r--   1 root root 20304 Jan  5 20:19 ansible.cfg
drwxr-xr-x   3 root root  4096 Apr 14 13:32 bin/
drwxrwxr-x   8 root root  4096 Jan  5 20:28 docs/
drwxr-xr-x   2 root root  4096 Apr 14 14:14 down/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 example/
-rwxrwxr-x   1 root root 24716 Jan  5 20:19 ezctl*
-rwxrwxr-x   1 root root 15350 Jan  5 20:19 ezdown*
drwxrwxr-x  10 root root  4096 Jan  5 20:28 manifests/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 pics/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 playbooks/
drwxrwxr-x  22 root root  4096 Jan  5 20:28 roles/
drwxrwxr-x   2 root root  4096 Jan  5 20:28 tools/           

6.3 Create a cluster configuration instance

root@k8s-master:/etc/kubeasz# ./ezctl new k8s-cluster-01
2022-04-14 14:34:07 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-cluster-01
2022-04-14 14:34:07 DEBUG set versions
2022-04-14 14:34:07 DEBUG cluster k8s-cluster-01: files successfully created.
2022-04-14 14:34:07 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-cluster-01/hosts'
2022-04-14 14:34:07 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-cluster-01/config.yml'

root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# vim hosts
root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# vim config.yml           

Then, following the prompts, configure /etc/kubeasz/clusters/k8s-cluster-01/hosts and /etc/kubeasz/clusters/k8s-cluster-01/config.yml: adjust the hosts file and the main cluster-level options according to the node plan above; options for the other cluster components can be changed in config.yml.

6.4 Edit the ansible hosts file

root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# cat hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
172.31.7.2

# master node(s)
[kube_master]
172.31.7.2

# work node(s)
[kube_node]
172.31.7.3
172.31.7.4
172.31.7.5

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
# disabled here because I already prepared my own harbor registry
[harbor]
#172.31.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
# change these to your own VIP address and port
[ex_lb]
172.31.1.6 LB_ROLE=backup EX_APISERVER_VIP=172.31.7.188 EX_APISERVER_PORT=6443
172.31.1.7 LB_ROLE=master EX_APISERVER_VIP=172.31.7.188 EX_APISERVER_PORT=6443

# [optional] ntp server for the cluster
# not needed either; time synchronization was already handled during environment preparation
[chrony]
#172.31.1.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"


#docker is used as the container runtime here
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

#calico is used as the network plugin
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

#service network CIDR
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.100.0.0/16"

#pod network CIDR
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.200.0.0/16"

#NodePort range, used later when exposing services
# NodePort Range
NODE_PORT_RANGE="30000-65525"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local"

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/usr/local/bin"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"

# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-cluster-01"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"           

6.5 Edit the ansible config file

root@k8s-master:/etc/kubeasz/clusters/k8s-cluster-01# cat config.yml
############################
# prepare
############################
# optionally install system packages offline (offline|online)
INSTALL_SOURCE: "online"

# optional OS security hardening, see github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false

# NTP servers [important: clocks must be synchronized across the cluster]
ntp_servers:
  - "ntp1.aliyun.com"
  - "time1.cloud.tencent.com"
  - "0.cn.pool.ntp.org"

# network segments allowed to sync time from the cluster, e.g. "10.0.0.0/8"; all allowed by default
local_network: "0.0.0.0/0"

############################
# role:deploy
############################
#CA certificate validity
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"

# kubeconfig parameters
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"

# k8s version
K8S_VER: "1.23.1"

############################
# role:etcd
############################
# a separate wal directory avoids disk I/O contention and improves performance
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""


############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.] enable container registry mirrors
ENABLE_MIRROR_REGISTRY: true

#the pause image can be prepared yourself; I host it in my local harbor registry (see the sketch after this file)
# [containerd] sandbox (pause) image
SANDBOX_IMAGE: "harbor.host.com/base/pause:3.6"

# [containerd] container storage directory
CONTAINERD_STORAGE_DIR: "/var/lib/containerd"

# ------------------------------------------- docker
# [docker] container storage directory
DOCKER_STORAGE_DIR: "/var/lib/docker"

# [docker] enable the remote RESTful API
ENABLE_REMOTE_API: false

# [docker] trusted insecure (HTTP) registries
INSECURE_REG: '["127.0.0.1/8"]'


############################
# role:kube-master
############################
# master certificate hosts; multiple IPs and domains can be added (e.g. a public IP and domain)
MASTER_CERT_HOSTS:
  - "172.31.7.188"
  - "10.1.1.1"
  - "k8s.test.io"
  #- "www.test.com"

# pod CIDR mask length per node (determines how many pod IPs a node can allocate)
# if flannel runs with --kube-subnet-mgr, it reads this setting to assign each node its pod subnet
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24


############################
# role:kube-node
############################
# kubelet root directory
KUBELET_ROOT_DIR: "/var/lib/kubelet"

# maximum number of pods per node
MAX_PODS: 110

# resources reserved for kube components (kubelet, kube-proxy, dockerd, etc.)
# see templates/kubelet-config.yaml.j2 for the values
KUBE_RESERVED_ENABLED: "no"

# upstream k8s does not recommend enabling system-reserved lightly, unless long-term monitoring gives you a clear picture of the system's resource usage;
# the reservation should also grow as the system runs longer; see templates/kubelet-config.yaml.j2 for the values
# the system reservation assumes a 4c/8g VM with a minimal set of system services; increase it on high-performance physical machines
# in addition, apiserver and other components briefly use a lot of resources during cluster installation, so reserve at least 1 GB of memory
SYS_RESERVED_ENABLED: "no"

# haproxy balance mode
BALANCE_ALG: "roundrobin"


############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel] flannel backend, e.g. "host-gw", "vxlan"
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false

# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.15.1"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"

# [flannel] offline image tarball
flannel_offline: "flannel_{{ flannelVer }}.tar"

# ------------------------------------------- calico
# [calico] setting CALICO_IPV4POOL_IPIP="off" can improve network performance; see docs/setup/calico.md for the prerequisites
CALICO_IPV4POOL_IPIP: "Always"

# [calico] host IP used by calico-node; BGP peering is established over this address; it can be set manually or auto-detected
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"

# [calico] calico network backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"

# [calico] supported calico versions: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.19.3"

# [calico] calico major.minor version
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"

# [calico] offline image tarball
calico_offline: "calico_{{ calico_ver }}.tar"

# ------------------------------------------- cilium
# [cilium] number of etcd nodes created by CILIUM_ETCD_OPERATOR: 1,3,5,7...
ETCD_CLUSTER_SIZE: 1

# [cilium] image version
cilium_ver: "v1.4.1"

# [cilium] offline image tarball
cilium_offline: "cilium_{{ cilium_ver }}.tar"

# ------------------------------------------- kube-ovn
# [kube-ovn] node for the OVN DB and OVN Control Plane, defaults to the first master node
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"

# [kube-ovn] offline image tarball
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"

# ------------------------------------------- kube-router
# [kube-router] public clouds usually impose restrictions that require ipinip to stay enabled; on your own infrastructure this can be set to "subnet"
OVERLAY_TYPE: "full"

# [kube-router] NetworkPolicy support switch
FIREWALL_ENABLE: "true"

# [kube-router] kube-router image version
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"

# [kube-router] kube-router offline image tarballs
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"


############################
# role:cluster-addon
############################
# install coredns automatically
dns_install: "no"
corednsVer: "1.8.6"
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.21.1"
# local dns cache address
LOCAL_DNS_CACHE: "169.254.20.10"

# install metrics-server automatically
metricsserver_install: "no"
metricsVer: "v0.5.2"

# install dashboard automatically
dashboard_install: "no"
dashboardVer: "v2.4.0"
dashboardMetricsScraperVer: "v1.0.7"

# install ingress automatically
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "10.3.0"

# install prometheus automatically
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"

# install nfs-provisioner automatically
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"

############################
# role:harbor
############################
# harbor version, full version number
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8443

# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true

# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true           
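
As noted in the SANDBOX_IMAGE comment above, the pause image is hosted in the local harbor registry. A minimal sketch of preparing it (the upstream image source and the "base" project path are assumptions, not from the original):

docker pull k8s.gcr.io/pause:3.6
docker tag k8s.gcr.io/pause:3.6 harbor.host.com/base/pause:3.6
docker login harbor.host.com
docker push harbor.host.com/base/pause:3.6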

6.6 Deploy the Kubernetes cluster

6.6.1 Environment initialization

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 --help
Usage: ezctl setup <cluster> <step>
available steps:
    01  prepare            to prepare CA/certs & kubeconfig & other system settings
    02  etcd               to setup the etcd cluster
    03  container-runtime  to setup the container runtime(docker or containerd)
    04  kube-master        to setup the master nodes
    05  kube-node          to setup the worker nodes
    06  network            to setup the network plugin
    07  cluster-addon      to setup other useful plugins
    90  all                to run 01~07 all at once
    10  ex-lb              to install external loadbalance for accessing k8s from outside
    11  harbor             to install a new harbor server or to integrate with an existed one

examples: ./ezctl setup test-k8s 01  (or ./ezctl setup test-k8s prepare)
          ./ezctl setup test-k8s 02  (or ./ezctl setup test-k8s etcd)
          ./ezctl setup test-k8s all
          ./ezctl setup test-k8s 04 -t restart_master
          
root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 01           

6.6.2 Deploy the etcd cluster

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 02

#====================== verification ==========================
root@k8s-master:/etc/kubeasz# export NODE_IPS="172.31.7.2"
root@k8s-master:/etc/kubeasz# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done

#expected result:
https://172.31.7.2:2379 is healthy: successfully committed proposal: took = 4.734365ms           

6.6.3 Deploy Docker

For manual installation, see the separate Docker CE installation article.

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 03

#====================== verification ==========================
root@k8s-master:/etc/kubeasz# docker info
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.9
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
......
........

 Live Restore Enabled: true
 Product License: Community Engine

           

6.6.4 Deploy the master node

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 04

#====================== verification ==========================
root@k8s-master:/etc/kubeasz# kubectl get nodes
NAME         STATUS                     ROLES    AGE   VERSION
172.31.7.2   Ready,SchedulingDisabled   master   41s   v1.23.1           

6.6.5 Deploy the worker nodes

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 05

#====================== verification ==========================
root@k8s-master:/etc/kubeasz# kubectl get nodes
NAME         STATUS                     ROLES    AGE     VERSION
172.31.7.2   Ready,SchedulingDisabled   master   2m25s   v1.23.1
172.31.7.3   Ready                      node     18s     v1.23.1
172.31.7.4   Ready                      node     18s     v1.23.1
172.31.7.5   Ready                      node     18s     v1.23.1

#on a worker node
root@k8s-node1:~# cat /etc/kube-lb/conf/kube-lb.conf
user root;
worker_processes 1;

error_log  /etc/kube-lb/logs/error.log warn;

events {
    worker_connections  3000;
}

stream {
    upstream backend {
        server 172.31.7.2:6443    max_fails=2 fail_timeout=3s;
    }

    server {
        listen 127.0.0.1:6443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}           

6.6.6 Deploy the calico network plugin

root@k8s-master:/etc/kubeasz# ./ezctl setup k8s-cluster-01 06

#====================== verification ==========================
root@k8s-master:/etc/kubeasz# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-754966f84c-fmtkl   1/1     Running   0          13m
kube-system   calico-node-gbgnn                          1/1     Running   0          13m
kube-system   calico-node-n6scc                          1/1     Running   0          13m
kube-system   calico-node-tdw75                          1/1     Running   0          13m
kube-system   calico-node-vzw96                          1/1     Running   0          13m


root@k8s-master:/etc/kubeasz# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 172.31.7.3   | node-to-node mesh | up    | 05:01:55 | Established |
| 172.31.7.4   | node-to-node mesh | up    | 05:03:48 | Established |
| 172.31.7.5   | node-to-node mesh | up    | 05:04:31 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.           

Verify the network

#create several test pods
kubectl run net-test1 --image=harbor.host.com/base/centos-base:202211162129 sleep 360000
kubectl run net-test2 --image=harbor.host.com/base/centos-base:202211162129 sleep 360000
kubectl run net-test3 --image=harbor.host.com/base/centos-base:202211162129 sleep 360000

root@k8s-master:/etc/kubeasz# kubectl get pods -owide
NAME        READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          3m32s   10.200.12.1      172.31.7.5   <none>           <none>
net-test2   1/1     Running   0          3m32s   10.200.228.129   172.31.7.3   <none>           <none>
net-test3   1/1     Running   0          3m29s   10.200.111.129   172.31.7.4   <none>           <none>
root@k8s-master:/etc/kubeasz# kubectl exec -it -n default net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test1 /]# ping 10.200.111.129
PING 10.200.111.129 (10.200.111.129) 56(84) bytes of data.
64 bytes from 10.200.111.129: icmp_seq=1 ttl=62 time=0.402 ms
64 bytes from 10.200.111.129: icmp_seq=2 ttl=62 time=0.452 ms
64 bytes from 10.200.111.129: icmp_seq=3 ttl=62 time=0.457 ms
^C
--- 10.200.111.129 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2025ms
rtt min/avg/max/mdev = 0.402/0.437/0.457/0.024 ms
[root@net-test1 /]# ping 10.200.228.129
PING 10.200.228.129 (10.200.228.129) 56(84) bytes of data.
64 bytes from 10.200.228.129: icmp_seq=1 ttl=62 time=0.463 ms
64 bytes from 10.200.228.129: icmp_seq=2 ttl=62 time=0.977 ms
64 bytes from 10.200.228.129: icmp_seq=3 ttl=62 time=0.705 ms
64 bytes from 10.200.228.129: icmp_seq=4 ttl=62 time=0.483 ms           

Pod-to-pod network connectivity verified.
