
Installing and Deploying Kubernetes with kubeadm

I. Network Topology

(Figure: network topology of the deployment — the VIP 172.168.32.248 in front of the haproxy+keepalived pair, three masters, three worker nodes, and the harbor/ansible/nfs host.)

II. System Configuration

OS System:
[root@localhost ~]# cat /etc/redhat-release 
CentOS Linux release 7.8.2003 (Core)
Kernel version:
[root@localhost ~]# uname -r
5.4.109-1.el7.elrepo.x86_64
k8s-master-VIP:172.168.32.248
haproxy+keepalived-master:172.168.32.208
haproxy+keepalived-slave:172.168.32.209
#etcd01:172.168.32.211
#etcd02:172.168.32.212
#etcd03:172.168.32.213
k8s-master01:172.168.32.201 
k8s-master02:172.168.32.202
k8s-master03:172.168.32.203
k8s-node01:172.168.32.204
k8s-node02:172.168.32.205
k8s-node03:172.168.32.206
harbor+ansible+nfs:172.168.32.41
Cluster access domain:
172.168.32.248 www.ywx.net
Harbor domain:
172.168.32.41 harbor.ywx.net
      

III. Upgrading the CentOS Kernel

# Import the ELRepo GPG key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install the ELRepo repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Load the elrepo-kernel repository metadata
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
# List the available kernel packages
yum --disablerepo=\* --enablerepo=elrepo-kernel list kernel*
# Install the long-term support (LTS) kernel
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt.x86_64
# Remove the old kernel tools packages
yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y
# Install the new kernel tools package
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt-tools.x86_64
# Check the boot menu entries
 [root@localhost tmp]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg 
CentOS Linux (5.4.109-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-327.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-d7e33d2d499040e5ab09a6182a7175d9) 7 (Core)
# Menu entries are numbered from 0 and the new kernel is inserted at the top (index 0, with the old 3.10.0 kernel at index 1), so select entry 0.
grub2-set-default 0
# Reboot and verify
reboot
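
After the reboot, a quick check confirms that the new kernel is both the saved GRUB default and the kernel actually running (a small verification step, not part of the original procedure):

# Show the saved default boot entry
grub2-editenv list
# Confirm the running kernel is the newly installed 5.4.x
uname -r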
      

Troubleshooting

Error: Package: kernel-lt-tools-5.4.109-1.el7.elrepo.x86_64 (elrepo-kernel)
           Requires: libpci.so.3(LIBPCI_3.3)(64bit)
Error: Package: kernel-lt-tools-5.4.109-1.el7.elrepo.x86_64 (elrepo-kernel)
           Requires: libpci.so.3(LIBPCI_3.5)(64bit)
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
Fix:
yum install -y pciutils-libs
      

IV. Installing keepalived + haproxy

1. Deploying keepalived

Install keepalived on 172.168.32.208 and 172.168.32.209:

yum install -y keepalived
      

keepalived configuration on haproxy+keepalived-master (172.168.32.208):

! Configuration File for keepalived
global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 172.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 3
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.168.32.248 dev eth0 label eth0:1
    }
}
      

keepalived configuration on haproxy+keepalived-slave (172.168.32.209):

! Configuration File for keepalived
global_defs {
   notification_email {
     [email protected]
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 172.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.168.32.248 dev eth0 label eth0:1
    }
}
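
The two configurations above fail over only when keepalived itself stops; if haproxy dies while keepalived keeps running, the VIP stays where it is. A hedged sketch of a haproxy health check that could be added to keepalived.conf on both nodes (the check command and the weight are illustrative, not part of the original setup; weight -30 is chosen so that the MASTER's priority 100 drops below the BACKUP's 80 when the check fails):

vrrp_script chk_haproxy {
    script "/usr/bin/killall -0 haproxy"
    interval 2
    weight -30
}
vrrp_instance VI_1 {
    # ...existing settings...
    track_script {
        chk_haproxy
    }
}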
      

Start keepalived and enable it at boot:

systemctl start keepalived
systemctl enable keepalived
      

Verifying keepalived

keepalived master:172.168.32.208

[root@haproxy01 ~]# ip a|grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 172.168.32.208/16 brd 172.168.255.255 scope global eth0
    inet 172.168.32.248/32 scope global eth0:1
      

keepalived slave:172.168.32.209

[root@haproxy02 ~]# ip a|grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 172.168.32.209/16 brd 172.168.255.255 scope global eth0
      

Stop keepalived on the master and verify that the VIP fails over to the slave.

keepalived master:172.168.32.208

[root@haproxy01 ~]# systemctl stop keepalived.service 
[root@haproxy01 ~]# ip a |grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 172.168.32.208/16 brd 172.168.255.255 scope global eth0
      

keepalived slave:172.168.32.209

[root@haproxy02 ~]# ip a|grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 172.168.32.209/16 brd 172.168.255.255 scope global eth0
    inet 172.168.32.248/32 scope global eth0:1
      

The VIP failover succeeded.

2. Fixing the VIP not responding to ping

Apply this fix on both haproxy01 (172.168.32.208) and haproxy02 (172.168.32.209).

The yum-installed keepalived automatically generates a firewall rule; it can be deleted, or prevented from being created in the first place.
[root@haproxy02 ~]# iptables -vnL
Chain INPUT (policy ACCEPT 46171 packets, 3069K bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set keepalived dst
#The keepalived installed from yum adds an iptables rule to the INPUT chain that blocks access to the VIP. (The rule comes from vrrp_strict being enabled; commenting out vrrp_strict in keepalived.conf and restarting keepalived is an alternative fix.)
      

Modify the iptables rule

[root@haproxy02 ~]# iptables-save > /tmp/iptables.txt
[root@haproxy02 ~]# vim /tmp/iptables.txt
# Generated by iptables-save v1.4.21 on Tue Apr  6 02:57:11 2021
*filter
:INPUT ACCEPT [47171:3118464]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [46521:2350054]
-A INPUT -m set --match-set keepalived dst -j DROP
COMMIT
# Completed on Tue Apr  6 02:57:11 2021
#Change "-A INPUT -m set --match-set keepalived dst -j DROP" to "-A INPUT -m set --match-set keepalived dst -j ACCEPT"
#Re-import the iptables rules
[root@haproxy02 ~]# iptables-restore /tmp/iptables.txt 
[root@haproxy02 ~]# iptables -vnL
Chain INPUT (policy ACCEPT 115 packets, 5732 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set keepalived dst
The VIP can now be pinged.
      

Load the iptables rules automatically at boot

[root@haproxy02 ~]#vim /etc/rc.d/rc.local
/usr/sbin/iptables-restore  /tmp/iptables.txt
[root@haproxy02 ~]#chmod +x /etc/rc.d/rc.local
      

3. Installing haproxy

Install haproxy on 172.168.32.208 and 172.168.32.209:

yum install -y haproxy
      

Add the following to the haproxy.cfg configuration file. Note that both listeners bind to the VIP, so the standby node needs net.ipv4.ip_nonlocal_bind = 1 (for example: echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf && sysctl -p) for haproxy to start while it does not hold the VIP.

listen stats
    mode http
    bind 172.168.32.248:9999 
    stats enable
    log global
    stats uri         /haproxy-status
    stats auth        haadmin:123456
listen k8s_api_nodes_6443
    bind 172.168.32.248:6443
    mode tcp
    #balance leastconn
   server 172.168.32.201 172.168.32.201:6443 check inter 2000 fall 3 rise 5
   server 172.168.32.202 172.168.32.202:6443 check inter 2000 fall 3 rise 5
   server 172.168.32.203 172.168.32.203:6443 check inter 2000 fall 3 rise 5
      

Start haproxy:

systemctl start haproxy
systemctl enable haproxy
      

Verification:

The stats page can be reached at http://www.ywx.net:9999/haproxy-status.
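
Besides the stats page, the TCP forwarding on the VIP can be spot-checked from the command line (a quick sketch; the 6443 check only returns something meaningful once the masters are initialized in section VII):

# Stats page, with the credentials from the listen stats section above
curl -u haadmin:123456 http://172.168.32.248:9999/haproxy-status
# After kubeadm init, the VIP should answer on 6443 with the API server's version info
curl -k https://172.168.32.248:6443/version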

V. Installing Harbor with HTTPS

1. Software versions

harbor 172.168.32.41

Harbor version:
harbor-offline-installer-v2.1.2
docker:
19.03.9-ce
OS
[root@harbor ~]# cat /etc/redhat-release 
CentOS Linux release 7.8.2003 (Core)
[root@harbor ~]# uname -r
3.10.0-1127.el7.x86_64
      

2. Installing docker

Docker installation script

# Quote the heredoc delimiter so that $ver is not expanded while the script file is being written out
cat > /tmp/docker_install.sh << 'EOF'
#! /bin/bash
ver=19.03.9
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum install -y docker-ce-$ver docker-ce-cli-$ver
systemctl start docker
systemctl enable docker
EOF
      

Install docker

bash /tmp/docker_install.sh
      

Install Docker Compose

Method 1:
# Download the docker-compose-Linux-x86_64 binary from https://github.com/docker/compose/releases
mv docker-compose-Linux-x86_64 /usr/bin/docker-compose
chmod +x /usr/bin/docker-compose
Method 2:
yum install -y docker-compose
      

3. Creating the certificate files

mkdir /certs
cd /certs
# Generate the private key
openssl genrsa -out /certs/harbor-ca.key
# Issue a self-signed certificate
openssl req -x509 -new -nodes -key /certs/harbor-ca.key -subj "/CN=harbor.ywx.net" -days 7120 -out /certs/harbor-ca.crt
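
Because this certificate is self-signed, docker on the cluster hosts will not trust the registry by default. A hedged sketch of distributing the CA so that the nodes can pull from harbor.ywx.net over HTTPS (docker looks for a ca.crt under /etc/docker/certs.d/<registry-host>/; the loop assumes the passwordless SSH set up in section VI, otherwise copy the file manually):

# Run on the harbor host 172.168.32.41, pushing the CA to every master and node
for node in 172.168.32.20{1..6}; do
  ssh ${node} "mkdir -p /etc/docker/certs.d/harbor.ywx.net"
  scp /certs/harbor-ca.crt ${node}:/etc/docker/certs.d/harbor.ywx.net/ca.crt
done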
      

4. Installing Harbor with HTTPS

mkdir /apps
cd /apps
# Upload harbor-offline-installer-v2.1.2.tgz to /apps
tar -xf harbor-offline-installer-v2.1.2.tgz
# Edit the Harbor configuration file
cd harbor 
cp harbor.yml.tmpl harbor.yml
# Configuration file contents
vim harbor.yml
# Configuration file of Harbor
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 172.168.32.41
# http related config
http:
  port: 80
# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /certs/harbor-ca.crt
  private_key: /certs/harbor-ca.key
# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files on dir
#   dir: /etc/harbor/tls/internal
# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433
# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: 123456
......
# Install Harbor
./install.sh
      

Harbor can now be reached at https://172.168.32.41 or via the domain https://harbor.ywx.net.
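
A quick push test verifies the registry end to end (a sketch; it assumes a docker host that resolves harbor.ywx.net, trusts the self-signed CA as shown earlier, and uses the admin password from harbor.yml; the library project exists in Harbor by default):

docker login harbor.ywx.net -u admin -p 123456
docker pull alpine:3.12
docker tag alpine:3.12 harbor.ywx.net/library/alpine:3.12
docker push harbor.ywx.net/library/alpine:3.12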

VI. Kubernetes Cluster System Initialization

Perform these steps on all Kubernetes master and node hosts.

1. Installing ansible

Install ansible on 172.168.32.41:

yum install -y ansible
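
The later steps in this document drive the hosts with plain scp/ssh loops, but since ansible is installed here, a minimal inventory for the same machines might look like this (a sketch; the group names are arbitrary, and the ad-hoc ping only works once the SSH keys from subsection 4 below are distributed):

cat > /etc/ansible/hosts << 'EOF'
[k8s_master]
172.168.32.201
172.168.32.202
172.168.32.203
[k8s_node]
172.168.32.204
172.168.32.205
172.168.32.206
EOF
# Quick connectivity check
ansible all -m ping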
      

2. System initialization and kernel parameter tuning

System initialization

yum install  vim iotop bc gcc gcc-c++ glibc glibc-devel pcre \
pcre-devel openssl  openssl-devel zip unzip zlib-devel  net-tools \
lrzsz tree ntpdate telnet lsof tcpdump wget libevent libevent-devel \
bc  systemd-devel bash-completion traceroute -y
ntpdate time1.aliyun.com
\cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo "*/5 * * * * ntpdate time1.aliyun.com &> /dev/null && hwclock -w" >> /var/spool/cron/root
systemctl stop firewalld
systemctl disable firewalld
systemctl stop NetworkManager
systemctl disable NetworkManager
      

Kernel parameter tuning

cat > /etc/modules-load.d/ipvs.conf <<EOF
# Load IPVS at boot
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Note: on kernels 4.19 and later (including the 5.4 kernel installed above), nf_conntrack_ipv4 has been merged into nf_conntrack, so load nf_conntrack instead.
systemctl enable --now systemd-modules-load.service
# Confirm the kernel modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack
# Install ipset and ipvsadm
yum install -y ipset ipvsadm
# Configure kernel parameters; the net.bridge.* keys require the br_netfilter module to be loaded
modprobe br_netfilter
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
      

3. Configuring sysctl.conf

vim /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
# Controls source route verification
 net.ipv4.conf.default.rp_filter = 1
 net.ipv4.ip_nonlocal_bind = 1
 net.ipv4.ip_forward = 1
 # Do not accept source routing
 net.ipv4.conf.default.accept_source_route = 0
 # Controls the System Request debugging functionality of the kernel
 kernel.sysrq = 0
 # Controls whether core dumps will append the PID to the core filename.
 # Useful for debugging multi-threaded applications.
 kernel.core_uses_pid = 1
 # Controls the use of TCP syncookies
 net.ipv4.tcp_syncookies = 1
 # Disable netfilter on bridges.
 net.bridge.bridge-nf-call-ip6tables = 1
 net.bridge.bridge-nf-call-iptables = 1
 net.bridge.bridge-nf-call-arptables = 0
 # Controls the default maximum size of a message queue
 kernel.msgmnb = 65536
 # Controls the maximum size of a message, in bytes
 kernel.msgmax = 65536
 # Controls the maximum shared segment size, in bytes
 kernel.shmmax = 68719476736
 # Controls the maximum number of shared memory segments, in pages
 kernel.shmall = 4294967296
 # TCP kernel paramater
 net.ipv4.tcp_mem = 786432 1048576 1572864
 net.ipv4.tcp_rmem = 4096 87380 4194304
 net.ipv4.tcp_wmem = 4096 16384 4194304
 net.ipv4.tcp_window_scaling = 1
 net.ipv4.tcp_sack = 1
 # socket buffer
 net.core.wmem_default = 8388608
 net.core.rmem_default = 8388608
 net.core.rmem_max = 16777216
 net.core.wmem_max = 16777216
 net.core.netdev_max_backlog = 262144
 net.core.somaxconn = 20480
 net.core.optmem_max = 81720
 # TCP conn
 net.ipv4.tcp_max_syn_backlog = 262144
 net.ipv4.tcp_syn_retries = 3
 net.ipv4.tcp_retries1 = 3
 net.ipv4.tcp_retries2 = 15
 # tcp conn reuse
 net.ipv4.tcp_timestamps = 0
 net.ipv4.tcp_tw_reuse = 0
 net.ipv4.tcp_tw_recycle = 0
 net.ipv4.tcp_fin_timeout = 1
 net.ipv4.tcp_max_tw_buckets = 20000
 net.ipv4.tcp_max_orphans = 3276800
 net.ipv4.tcp_synack_retries = 1
 net.ipv4.tcp_syncookies = 1
 # keepalive conn
 net.ipv4.tcp_keepalive_time = 300
 net.ipv4.tcp_keepalive_intvl = 30
 net.ipv4.tcp_keepalive_probes = 3
 net.ipv4.ip_local_port_range = 10001 65000
 # swap
 vm.overcommit_memory = 0
 vm.swappiness = 10
 #net.ipv4.conf.eth1.rp_filter = 0
 #net.ipv4.conf.lo.arp_ignore = 1
 #net.ipv4.conf.lo.arp_announce = 2
 #net.ipv4.conf.all.arp_ignore = 1
 #net.ipv4.conf.all.arp_announce = 2
      

sysctl.conf distribution script

#!/bin/bash
# Target host list
IP="
172.168.32.201
172.168.32.202
172.168.32.203
172.168.32.204
172.168.32.205
172.168.32.206
172.168.32.211
172.168.32.212
172.168.32.213"
for node in ${IP};do
#sshpass -p 123456 ssh-copy-id ${node}  -o StrictHostKeyChecking=no &> /dev/null
 scp -r /apps/sysctl.conf ${node}:/etc/
 ssh ${node} "/usr/sbin/sysctl --system"
  if [ $? -eq 0 ];then
    echo "${node} sysctl.conf copy succeeded" 
  else
    echo "${node} sysctl.conf copy failed" 
  fi
done
      

4. Configuring passwordless SSH from the ansible host 172.168.32.41 to every Kubernetes cluster host

Key distribution script

cat scp.sh
#!/bin/bash
# Target host list
IP="
172.168.32.201
172.168.32.202
172.168.32.203
172.168.32.204
172.168.32.205
172.168.32.206
172.168.32.211
172.168.32.212
172.168.32.213
"
for node in ${IP};do
sshpass -p 123456 ssh-copy-id ${node}  -o StrictHostKeyChecking=no &> /dev/null
  if [ $? -eq 0 ];then
    echo "${node} SSH key copy succeeded" 
  else
    echo "${node} SSH key copy failed" 
  fi
done
      

Configure passwordless login

[root@harbor tmp]# ssh-keygen
[root@harbor tmp]# bash scp.sh 
172.168.32.201 SSH key copy succeeded
172.168.32.202 SSH key copy succeeded
172.168.32.203 SSH key copy succeeded
172.168.32.204 SSH key copy succeeded
172.168.32.205 SSH key copy succeeded
172.168.32.206 SSH key copy succeeded
172.168.32.211 SSH key copy succeeded
172.168.32.212 SSH key copy succeeded
172.168.32.213 SSH key copy succeeded
      

5. Distributing the hosts file

Run this on 172.168.32.41.

Copy the hosts file to every node in the cluster.

vim /etc/hosts
172.168.32.201 k8s-master01
172.168.32.202 k8s-master02
172.168.32.203 k8s-master03
172.168.32.204 k8s-node01
172.168.32.205 k8s-node02
172.168.32.206 k8s-node03
#172.168.32.211 etcd01
#172.168.32.212 etcd02
#172.168.32.213 etcd03
172.168.32.41 harbor.ywx.net
172.168.32.248 www.ywx.net
      

hosts distribution script

#!/bin/bash
# Target host list
IP="
172.168.32.201
172.168.32.202
172.168.32.203
172.168.32.204
172.168.32.205
172.168.32.206
172.168.32.211
172.168.32.212
172.168.32.213"
for node in ${IP};do
scp -r /etc/hosts root@${node}:/etc/hosts
  if [ $? -eq 0 ];then
    echo "${node} hosts copy succeeded" 
  else
    echo "${node} hosts copy failed" 
  fi
done
      

6. Cluster time synchronization

vim time_tongbu.sh
#!/bin/bash
# Target host list
IP="
172.168.32.201
172.168.32.202
172.168.32.203
172.168.32.204
172.168.32.205
172.168.32.206
172.168.32.211
172.168.32.212
172.168.32.213"
for node in ${IP};do
ssh ${node} "/usr/sbin/ntpdate time1.aliyun.com &> /dev/null && hwclock -w"
  if [ $? -eq 0 ];then
     echo "${node}--->time sync success!!!"
  else
     echo "${node}--->time sync failed!!!"
  fi
done
      

Synchronize the cluster time

[root@harbor apps]# bash time_tongbu.sh
172.168.32.201--->time sync success!!!
172.168.32.202--->time sync success!!!
172.168.32.203--->time sync success!!!
172.168.32.204--->time sync success!!!
172.168.32.205--->time sync success!!!
172.168.32.206--->time sync success!!!
172.168.32.211--->time sync success!!!
172.168.32.212--->time sync success!!!
172.168.32.213--->time sync success!!!
      

Time synchronization test script

#!/bin/bash
#目标主機清單
IP="
172.168.32.201
172.168.32.202
172.168.32.203
172.168.32.204
172.168.32.205
172.168.32.206
172.168.32.211
172.168.32.212
172.168.32.213"
for node in ${IP};do
echo "------------"
echo ${node} 
ssh ${node} 'echo "$(hostname) - $(/usr/bin/date)"'
done
      

Verify that the cluster time is in sync

[root@harbor apps]# bash date.sh 
------------
172.168.32.201
-Sat May 22 06:12:56 CST 2021
------------
172.168.32.202
-Sat May 22 06:12:56 CST 2021
------------
172.168.32.203
-Sat May 22 06:12:56 CST 2021
------------
172.168.32.204
-Sat May 22 06:12:57 CST 2021
------------
172.168.32.205
-Sat May 22 06:12:57 CST 2021
------------
172.168.32.206
-Sat May 22 06:12:57 CST 2021
------------
172.168.32.211
-Sat May 22 06:12:57 CST 2021
------------
172.168.32.212
-Sat May 22 06:12:57 CST 2021
------------
172.168.32.213
-Sat May 22 06:12:57 CST 2021
      

7. Disabling swap

vim swapoff.sh
#!/bin/bash
# Target host list
IP="
172.168.32.201
172.168.32.202
172.168.32.203
172.168.32.204
172.168.32.205
172.168.32.206
172.168.32.211
172.168.32.212
172.168.32.213"
for node in ${IP};do
#sshpass -p 123456 ssh-copy-id ${node}  -o StrictHostKeyChecking=no &> /dev/null
 ssh ${node} "swapoff -a && sed -i '/swap/s@UUID@#UUID@g' /etc/fstab"
  if [ $? -eq 0 ];then
    echo "${node} swap disabled successfully" 
  else
    echo "${node} failed to disable swap" 
  fi
done
      

Run the swap-off script

[root@harbor apps]# bash swapoff.sh 
172.168.32.201 swap disabled successfully
172.168.32.202 swap disabled successfully
172.168.32.203 swap disabled successfully
172.168.32.204 swap disabled successfully
172.168.32.205 swap disabled successfully
172.168.32.206 swap disabled successfully
172.168.32.211 swap disabled successfully
172.168.32.212 swap disabled successfully
172.168.32.213 swap disabled successfully
      

VII. Deploying Kubernetes v1.20 with kubeadm

Run the following on all master and node hosts.

1. Installing docker 19.03.8

docker_install.sh

#! /bin/bash
ver=19.03.8
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum install -y docker-ce-$ver docker-ce-cli-$ver
systemctl start docker
systemctl enable docker
      

docker_scp.sh

#!/bin/bash
# Target host list
IP="
172.168.32.201
172.168.32.202
172.168.32.203
172.168.32.204
172.168.32.205
172.168.32.206
172.168.32.211
172.168.32.212
172.168.32.213"
for node in ${IP};do
#sshpass -p 123456 ssh-copy-id ${node}  -o StrictHostKeyChecking=no &> /dev/null
 scp -r /apps/docker_install.sh ${node}:/tmp/
 ssh ${node} "/usr/bin/bash /tmp/docker_install.sh"
  if [ $? -eq 0 ];then
    echo "${node}----> docker install succeeded" 
  else
    echo "${node}----> docker install failed" 
  fi
done
      

On 172.168.32.41, use the script to install docker on all Kubernetes master and node hosts:

[root@harbor apps]#bash docker_scp.sh
      

Configure docker: Aliyun registry mirror and the systemd cgroup driver

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
}
EOF
 systemctl daemon-reload 
 systemctl restart docker
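
After restarting docker, the cgroup driver and the registry mirror can be verified (a small check, not part of the original write-up):

docker info | grep -i 'cgroup driver'
docker info | grep -iA1 'registry mirrors'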
      

2. Adding the Aliyun Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache
      

3. Installing kubeadm, kubelet, and kubectl

Install version v1.20.0:

yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
 systemctl enable kubelet
 #Do not start kubelet yet; kubeadm will start it later.
      

4. Deploying the Kubernetes masters

4.1 Kubernetes initialization

kubeadm init \
--apiserver-advertise-address=172.168.32.201 \
--control-plane-endpoint=172.168.32.248 \
--apiserver-bind-port=6443 \
--kubernetes-version=v1.20.0 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/16 \
--service-dns-domain=cluster.local \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--upload-certs \
--ignore-preflight-errors=swap
      
--apiserver-advertise-address=172.168.32.201 # the local IP address the API server listens on / advertises
--control-plane-endpoint=172.168.32.248 # a stable IP address or DNS name for the control plane, i.e. a long-lived, highly available VIP or domain; kubeadm multi-master HA is built on this parameter
--apiserver-bind-port=6443 # the port the API server binds to, 6443 by default
--kubernetes-version=v1.20.0 # the Kubernetes version to install, normally matching "kubeadm version"
--pod-network-cidr=10.244.0.0/16 # the pod IP address range
--service-cidr=10.96.0.0/16 # the service IP address range
--service-dns-domain=cluster.local # the internal cluster domain, cluster.local by default; the DNS service (kube-dns/coredns) resolves records under this domain
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers # the registry to pull control-plane images from, k8s.gcr.io by default
--upload-certs # upload the control-plane certificates so that additional master nodes can join
--ignore-preflight-errors=swap # ignore specific preflight errors, e.g. swap; "all" ignores everything
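
The same options can also be expressed as a kubeadm configuration file, which is easier to keep under version control (a hedged sketch of the equivalent settings using the kubeadm.k8s.io/v1beta2 API accepted by kubeadm v1.20; --upload-certs and --ignore-preflight-errors remain command-line flags):

cat > /root/kubeadm-init.yaml << 'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.168.32.201
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
controlPlaneEndpoint: 172.168.32.248:6443
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/16
  dnsDomain: cluster.local
EOF
kubeadm init --config /root/kubeadm-init.yaml --upload-certs --ignore-preflight-errors=swap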
      

Run the init on only one master node; here it is run on k8s-master01 (172.168.32.201).

[root@k8s-master01 ~]# kubeadm init \
--apiserver-advertise-address=172.168.32.201 \
--control-plane-endpoint=172.168.32.248 \
--apiserver-bind-port=6443 \
--kubernetes-version=v1.20.0 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/16 \
--service-dns-domain=cluster.local \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--upload-certs \
--ignore-preflight-errors=swap
......
Your Kubernetes control-plane has initialized successfully!
#Steps to start using the cluster
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
  export KUBECONFIG=/etc/kubernetes/admin.conf

#Deploy the cluster network
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
#Join additional master nodes
You can now join any number of the control-plane node running the following command on each as root:
  kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
    --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b \
    --control-plane --certificate-key a552bdb7c4844682faeff86f6a4eaedd28c5ca52769cc9178e56b5bc245e9fc7
#If the token or certificate key expires, new ones can be created later (see section IX)
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
#Join worker nodes
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
    --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b
      

4.2 Configuring the cluster client environment

[root@k8s-master01 ~]#  mkdir -p $HOME/.kube
[root@k8s-master01 ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES                  AGE    VERSION
k8s-master01   NotReady   control-plane,master   5m6s   v1.20.0
#The NotReady status is because the cluster network plugin has not been deployed yet
      

4.3 Deploying the Calico network plugin

Note: deploy only one network plugin; Calico is recommended.

Calico is a pure layer-3 data center networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.

On every compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) for data forwarding, and each vRouter propagates the routes of the workloads running on it to the rest of the Calico network over BGP.

In addition, the Calico project implements Kubernetes network policy, providing ACL functionality.

https://docs.projectcalico.org/getting-started/kubernetes/quickstart

Download calico.yaml

mkdir /apps
cd /apps
wget https://docs.projectcalico.org/manifests/calico.yaml
      

After downloading, edit the pod network definition (CALICO_IPV4POOL_CIDR) in the file so that it matches the range passed to kubeadm init.

CALICO_IPV4POOL_CIDR must match the --pod-network-cidr=10.244.0.0/16 used when the master was initialized; change
 # - name: CALICO_IPV4POOL_CIDR
 #   value: "192.168.0.0/16"
 to
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
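
The edit can also be scripted; a sketch (the comment markers and indentation below match the calico.yaml layout at the time of writing, so check the result with grep afterwards):

sed -i -e 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@' \
       -e 's@#   value: "192.168.0.0/16"@  value: "10.244.0.0/16"@' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml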
      

Apply calico.yaml

[root@k8s-master01 apps]# kubectl apply -f calico.yaml 
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
[root@k8s-master01 apps]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7f4f5bf95d-c9gn8   1/1     Running   0          5m43s
calico-node-882fl                          1/1     Running   0          5m43s
coredns-54d67798b7-5dc67                   1/1     Running   0          22m
coredns-54d67798b7-kxdgz                   1/1     Running   0          22m
etcd-k8s-master01                          1/1     Running   0          22m
kube-apiserver-k8s-master01                1/1     Running   0          22m
kube-controller-manager-k8s-master01       1/1     Running   0          22m
kube-proxy-5g22z                           1/1     Running   0          22m
kube-scheduler-k8s-master01                1/1     Running   0          22m
#Calico has been deployed
[root@k8s-master01 apps]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   23m   v1.20.0
#With the Calico network deployed, the node status changes to Ready
      

5. Adding the other master nodes

Run the following command on the other master nodes, 172.168.32.202 and 172.168.32.203:

kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
    --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b \
    --control-plane --certificate-key a552bdb7c4844682faeff86f6a4eaedd28c5ca52769cc9178e56b5bc245e9fc7
      

k8s-master02 172.168.32.202

To start administering your cluster from this node, you need to run the following as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
      

k8s-master03 172.168.32.203

......
To start administering your cluster from this node, you need to run the following as a regular user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
      

Configure the cluster client environment on k8s-master02 and k8s-master03:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
      

The master nodes have been added:

[root@k8s-master01 apps]# kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
k8s-master01   Ready    control-plane,master   37m     v1.20.0
k8s-master02   Ready    control-plane,master   2m34s   v1.20.0
k8s-master03   Ready    control-plane,master   3m29s   v1.20.0
      

6. Adding the worker nodes

Run the following command on all worker nodes: 172.168.32.204, 172.168.32.205, and 172.168.32.206.

kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
    --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b
      

k8s-node01 172.168.32.204

[root@k8s-node01 ~]# kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
>     --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
      

k8s-node02 172.168.32.205

[root@k8s-node02 ~]# kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
>     --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
      

k8s-node03 172.168.32.206

[root@k8s-node03 ~]# kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
>     --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b 
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
      

On a master node, check the cluster status with "kubectl get nodes":

[root@k8s-master01 apps]# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
k8s-master01   Ready    control-plane,master   52m   v1.20.0
k8s-master02   Ready    control-plane,master   17m   v1.20.0
k8s-master03   Ready    control-plane,master   18m   v1.20.0
k8s-node01     Ready    <none>                 10m   v1.20.0
k8s-node02     Ready    <none>                 10m   v1.20.0
k8s-node03     Ready    <none>                 10m   v1.20.0
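
The IPVS kernel modules and ipvsadm were prepared in section VI, but kubeadm configures kube-proxy in iptables mode by default. If IPVS mode is wanted, a hedged sketch of switching it now that the cluster is up (edit the kube-proxy ConfigMap, then recreate the kube-proxy pods):

# Set mode: "ipvs" in the config.conf section of the ConfigMap
kubectl -n kube-system edit configmap kube-proxy
# Recreate the kube-proxy pods so they pick up the change
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
# Verify that IPVS rules are being programmed
ipvsadm -Ln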
      

VIII. Deploying the Dashboard

Download the dashboard manifest from https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml and manually change the Service to expose NodePort 30001.

dashboard.yaml manifest

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.3
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
      

Apply dashboard.yaml

[root@k8s-master01 apps]# kubectl apply -f dashboard.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
      

The dashboard Service and Pods have been created in the kubernetes-dashboard namespace:

[root@k8s-master01 apps]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.96.110.28   <none>        8000/TCP        3m11s
kubernetes-dashboard        NodePort    10.96.56.144   <none>        443:30001/TCP   3m11s
[root@k8s-master01 apps]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-79c5968bdc-tlzbx   1/1     Running   0          2m44s
kubernetes-dashboard-9f9799597-l6p4v         1/1     Running   0          2m44s
      

Test by browsing to https://nodeIP:30001.
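
The manifest above does not create an account with enough privileges to browse the whole cluster, and the login page asks for a token. A commonly used sketch for creating one (the admin-user ServiceAccount and its cluster-admin binding are additions, not part of the manifest above):

kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:admin-user
# Print the bearer token to paste into the dashboard login page
kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')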


IX. Common Kubernetes Issues

1. cgroupfs vs. systemd

Fix 1:

On all master and node hosts:

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload && systemctl restart kubelet
      

Fix 2:

On all master and node hosts, modify the docker configuration file:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
}
EOF
      

2. The certificate key for adding a master node has expired

#Regenerate the control-plane certificate key used for adding a new master node
kubeadm init phase upload-certs --upload-certs
I0509 18:08:59.985444    7521 version.go:251] remote version is much newer: v1.21.0; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
f22f123c33009cd57d10b6eebc3d78d7a204cc22eff795298c43c5dd1b452e74
On the new master node, run the join command:
kubeadm join 192.168.32.248:6443 --token edba93.h7pb7iygmvn2kgrm \
    --discovery-token-ca-cert-hash sha256:647a8ed047a042d258a0ef79faeb5f458e01e2b5281d553bf5c524d32c65c106 \
    --control-plane --certificate-key f22f123c33009cd57d10b6eebc3d78d7a204cc22eff795298c43c5dd1b452e74
      

3. The node join token was not recorded or has expired

#If the join command printed by kubeadm init was not recorded, it can be regenerated with: kubeadm token create --print-join-command

 #Generate a new token on a master
 kubeadm token create --print-join-command
kubeadm join 172.168.32.248:6443 --token x9dqr4.r7n0spisz9scdfbj     --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b 
On the worker node, run the join command:
kubeadm join 192.168.32.248:6443 --token edba93.h7pb7iygmvn2kgrm \
    --discovery-token-ca-cert-hash sha256:647a8ed047a042d258a0ef79faeb5f458e01e2b5281d553bf5c524d32c65c106
      

4. Cluster time is out of sync

Synchronize the time on the cluster servers.

5. Recreating a token to join a node

1. Regenerate a token

[root@k8s-master ~]# kubeadm token create
kk0ee6.nhvz5p85avmzyof3
[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
kk0ee6.nhvz5p85avmzyof3   23h       2020-02-05T11:02:44+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
      

2. Get the sha256 hash of the CA certificate

[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
9db128fe4c68b0e65c19bb49226fc717e64e790d23816c5615ad6e21fbe92020
      

3. Join the new node k8s-node4

[root@k8s-node4 ~]# kubeadm join --token kk0ee6.nhvz5p85avmzyof3 --discovery-token-ca-cert-hash sha256:9db128fe4c68b0e65c19bb49226fc717e64e790d23816c5615ad6e21fbe92020  192.168.31.35:6443
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.