
Quickly Deploy a Minimal Kubernetes v1.22.1 Cluster with kubeadm

Environment:

master 192.168.1.18

node1 192.168.1.19

node2 192.168.1.20

CentOS 7.5

Docker 19.03.13

2+ CPU cores, 2+ GB RAM per host

Kubernetes architecture diagram:

(figure: Kubernetes architecture)

Preparation:

1. Edit /etc/hostname to change the hostname on each of the three hosts (optional)

[root@localhost ~]# cat > /etc/hostname << EOF
> k8s-master
> EOF

[root@localhost ~]# cat /etc/hostname
k8s-master

[root@localhost ~]# hostname k8s-master

[root@localhost ~]# cat > /etc/hostname << EOF
> k8s-node1
> EOF

[root@localhost ~]# hostname k8s-ndoe1

(Note: "k8s-ndoe1" above is a typo for "k8s-node1" in the original session; the misspelled name reappears in later kubectl output.)

[root@localhost ~]# Log out (Ctrl+D closes the current session; the new hostname is shown on reconnect)
           

2. Add hostname-to-IP mappings to /etc/hosts

This applies to all three hosts: master, node1, and node2.

[root@k8s-master ~]# cat >> /etc/hosts << EOF
> 192.168.1.18 k8s-master
> 192.168.1.19 k8s-node1
> 192.168.1.20 k8s-node2
> EOF

[root@k8s-master ~]# cat /etc/hosts
...
192.168.1.18 k8s-master
192.168.1.19 k8s-node1
192.168.1.20 k8s-node2

[root@k8s-master ~]# for i in {19,20}        // push /etc/hosts to the two node hosts
> do
> scp /etc/hosts root@192.168.1.$i:/etc/
> done
           

Verify:

[root@k8s-node1 ~]# cat /etc/hosts
...
192.168.1.18 k8s-master
192.168.1.19 k8s-node1
192.168.1.20 k8s-node2

[root@k8s-node2 ~]# cat /etc/hosts
...
192.168.1.18 k8s-master
192.168.1.19 k8s-node1
192.168.1.20 k8s-node2

[root@k8s-master ~]# ping -c 2 k8s-node1
PING k8s-node1 (192.168.1.19) 56(84) bytes of data.
64 bytes from k8s-node1 (192.168.1.19): icmp_seq=1 ttl=64 time=0.180 ms
...

[root@k8s-master ~]# ping -c 2 k8s-node2
PING k8s-node2 (192.168.1.20) 56(84) bytes of data.
64 bytes from k8s-node2 (192.168.1.20): icmp_seq=1 ttl=64 time=0.166 ms
...
           

3. Flush iptables rules and permanently disable the firewall and SELinux

Run this on all three hosts: master, node1, and node2.

[root@k8s-master ~]# iptables -F

[root@k8s-master ~]# systemctl stop firewalld

[root@k8s-master ~]# systemctl disable firewalld

[root@k8s-master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

[root@k8s-master ~]# setenforce 0
setenforce: SELinux is disabled

[root@k8s-master ~]# getenforce
Disabled
           

4. Synchronize the system time

If the system clocks differ, node hosts may fail to join the cluster. (Run on all three hosts.)

[root@k8s-master ~]# yum -y install ntp

[root@k8s-master ~]# ntpdate cn.pool.ntp.org
           

5. Disable the swap partition

Comment out the swap line in /etc/fstab; Kubernetes does not support running with swap enabled.

[root@k8s-master ~]# swapoff -a   // temporary: swap stays off until reboot

[root@k8s-master ~]# vim /etc/fstab    // permanent: comment out the swap entry (the last line here)
...
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

[root@k8s-master ~]# free -h | grep Swap   // verify swap is off (all zeros means success)
Swap:            0B          0B          0B
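
If you prefer not to edit the file by hand, a one-line sed achieves the same permanent change (an equivalent alternative, not one of the original steps):

[root@k8s-master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab   // prefix every line mentioning swap with '#'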
           

6.将橋接的IPv4流量傳遞到iptables的鍊

[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> vm.swappiness = 0
> EOF

[root@k8s-master ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.ipv4.ip_forward = 1
vm.swappiness = 0
* Applying /etc/sysctl.conf ...
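
Note that the applied output above echoes only net.ipv4.ip_forward and vm.swappiness: the net.bridge.* keys exist only while the br_netfilter kernel module is loaded. If they fail to apply, load the module first and rerun sysctl (an added step, not from the original transcript):

[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# sysctl --system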
           

7. Ensure each node has a unique hostname, MAC address, and product_uuid

- Use ip link or ifconfig -a to get each network interface's MAC address

- Use sudo cat /sys/class/dmi/id/product_uuid to check the product_uuid

-master-
[root@k8s-master ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:39:01:df brd ff:ff:ff:ff:ff:ff

[root@k8s-master ~]# cat /sys/class/dmi/id/product_uuid
76574D56-B953-9DBB-B691-256C8B3901DF

-node1-
[root@k8s-node1 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:11:c3:a5 brd ff:ff:ff:ff:ff:ff

[root@k8s-node1 ~]# cat /sys/class/dmi/id/product_uuid
84FD4D56-EC2A-420E-194E-D8827711C3A5

-node2-
[root@k8s-node2 ~]# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:78:d8:be brd ff:ff:ff:ff:ff:ff

[root@k8s-node2 ~]# cat /sys/class/dmi/id/product_uuid
AC234D56-C031-0FF5-14B5-56608678D8BE
           

Install Docker

Run this on all three hosts: master, node1, and node2.

How Docker relates to Kubernetes:

(figure: how Docker fits into Kubernetes)

A detailed Docker installation guide: https://blog.csdn.net/qq_44895681/article/details/105540702

[root@master/node1/2 ~]# cd /etc/yum.repos.d/

[root@master/node1/2 ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo   // add the Aliyun Docker yum repo
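
The transcript jumps from adding the repo to starting Docker; the install step belongs here. Assuming the 19.03.13 packages reported by docker version below, it would look like this (the exact version strings are an assumption):

[root@master/node1/2 ~]# yum -y install docker-ce-19.03.13 docker-ce-cli-19.03.13 containerd.io   // install Docker pinned to 19.03.13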


[root@master/node1/2 ~]# systemctl start docker

[root@master/node1/2 ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.13
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        4484c46d9d
 Built:             Wed Sep 16 17:03:45 2020
 OS/Arch:           linux/amd64
 Experimental:      false
 ...
           

Install the kubeadm Tooling

 The idea behind kubeadm is simple: containerize most of the control-plane components and run them as static Pods, which greatly simplifies cluster configuration and certificate handling; the goal is to deploy a production-usable Kubernetes cluster as simply as possible. A kubeadm deployment actually installs three components: kubeadm, kubelet, and kubectl.

  • kubeadm: the command that bootstraps the cluster
  • kubelet: the agent that runs workloads on every node
  • kubectl: the command-line management tool

Run this on all three hosts: master, node1, and node2.

1. Add the Aliyun Kubernetes yum repo

[root@k8s-master ~]# cat >> /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes Repo
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> enabled=1
> EOF

[root@k8s-master ~]# for i in {19,20}         // copy the Aliyun Kubernetes repo to the other nodes
> do
> scp /etc/yum.repos.d/kubernetes.repo root@192.168.1.$i:/etc/yum.repos.d/
> done
           

2. Install the kubeadm, kubelet, and kubectl packages

This walkthrough deploys Kubernetes 1.22.1.

[root@k8s-master ~]# yum list | grep kubeadm   // search for kubeadm-related packages
kubeadm.x86_64                              1.22.1-0                   kubernetes

[root@k8s-master ~]# yum -y install kubelet-1.22.1-0 kubeadm-1.22.1-0 kubectl-1.22.1-0  // install pinned versions
...
Installed:
  kubeadm.x86_64 0:1.22.1-0                    kubectl.x86_64 0:1.22.1-0                    kubelet.x86_64 0:1.22.1-0

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-7.el7         cri-tools.x86_64 0:1.13.0-0                  kubernetes-cni.x86_64 0:0.8.7-0
  libnetfilter_cthelper.x86_64 0:1.0.0-11.el7  libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7  libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
  socat.x86_64 0:1.7.3.2-2.el7

Complete!
           

3. Enable the services at boot

  There is no need to start kubelet manually at this point; kubeadm init will start it during initialization.

[root@k8s-master ~]# systemctl start docker && systemctl enable docker

[root@k8s-master ~]# systemctl enable kubelet
           

Initialize the Kubernetes Master

Run this on the master host only.

 When running the init command below, it is best to pass --kubernetes-version=v<version> explicitly to avoid version-mismatch errors later.

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.22.1  --apiserver-advertise-address=192.168.1.18  --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --image-repository registry.aliyuncs.com/google_containers
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
...
...
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!  // initialization succeeded

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.18:6443 --token 57y0v6.x6vl5kvcp9lqp6sj \
        --discovery-token-ca-cert-hash sha256:4e50e411707160bf753ad1490a4e495d402f11290cfe240a268fff7efda328fb     
           

 Once the "successfully" message above appears, initialization is complete; continue with the commands the output tells you to run.

 Save the complete kubeadm join command from the output, token included: the node hosts will need it to join the cluster later.

[root@k8s-master ~]# mkdir -p $HOME/.kube

[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
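
As a quick sanity check (an added step, not in the original transcript), kubectl should now be able to reach the API server:

[root@k8s-master ~]# kubectl cluster-info   // prints the control-plane and CoreDNS endpoints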
           

Note:

 During initialization you may hit the following error:

error: Get "http://localhost:10248/healthz":

...
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
           

Fix:

[root@k8s-master ~]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://hx983jf6.mirror.aliyuncs.com"],
"graph": "/mnt/data",
"exec-opts": ["native.cgroupdriver=systemd"]   // add this line so Docker's cgroup driver matches the kubelet's (systemd)
}
[root@k8s-master ~]# systemctl restart docker

[root@k8s-master ~]# kubeadm reset -f

// after clearing the kubeadm state, rerun the init command
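
To confirm the cgroup driver change took effect (a verification step added here), check docker info:

[root@k8s-master ~]# docker info | grep -i cgroup   // should show: Cgroup Driver: systemd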
           

 If the same error appears and the method above does not resolve it, try the alternative approach below; it solved the problem for me on one occasion.

  Kubernetes init error #1: https://blog.csdn.net/qq_44895681/article/details/10741…

  Kubernetes v1.22.1 deployment issue #2: https://blog.csdn.net/qq_44895681/article/details/119947343?spm=1001.2014.3001.5501

Join the Two Node Hosts to the Cluster

Use the full kubeadm join command printed at the end of master initialization to join the node hosts. If more than 24 hours have passed since then, a new token must be generated first.

1. Generate a new token on the master (if needed)

The default token is valid for 24 hours; once it expires, nodes need a fresh token to join.

[root@k8s-master ~]# kubeadm token create
c4jjui.bpppj490ggpnmi3u

[root@k8s-master ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
c4jjui.bpppj490ggpnmi3u   22h       2020-07-21T14:37:12+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
           

Get the sha256 hash of the CA certificate:

[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt|openssl rsa -pubin -outform der 2>/dev/null|openssl dgst -sha256 -hex|awk '{print $NF}'
c1df6d1ad77fbc0cbdf2bb3dccd5d87eac41b936a5f3fb944f2c14b79af4de55
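
Alternatively (not shown in the original transcript), kubeadm can generate a fresh token and print the complete join command, hash included, in one step:

[root@k8s-master ~]# kubeadm token create --print-join-command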
           

2. Join both node hosts

kubeadm join 192.168.1.18:6443 --token 57y0v6.x6vl5kvcp9lqp6sj \
        --discovery-token-ca-cert-hash sha256:4e50e411707160bf753ad1490a4e495d402f11290cfe240a268fff7efda328fb
           

--Note--:

  While joining, a node may hit the same error seen during initialization:

error: Get "http://localhost:10248/healthz":

Apply the same fix described in the initialization section above.

[root@k8s-node1 ~]# kubeadm join 192.168.1.18:6443 --token 57y0v6.x6vl5kvcp9lqp6sj \
>         --discovery-token-ca-cert-hash sha256:4e50e411707160bf753ad1490a4e495d402f11290cfe240a268fff7efda328fb
[preflight] Running pre-flight checks
        [WARNING Hostname]: hostname "k8s-ndoe1" could not be reached
        [WARNING Hostname]: hostname "k8s-ndoe1": lookup k8s-ndoe1 on 192.168.1.1:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


[root@k8s-node2 ~]# kubeadm join 192.168.1.18:6443 --token 57y0v6.x6vl5kvcp9lqp6sj \
>         --discovery-token-ca-cert-hash sha256:4e50e411707160bf753ad1490a4e495d402f11290cfe240a268fff7efda328fb
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
           

3. Check cluster status from the master

[root@k8s-master ~]# kubectl get nodes     // check the status of the master and the other nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   3m53s   v1.22.1
k8s-ndoe1    NotReady   <none>                 119s    v1.22.1
k8s-node2    NotReady   <none>                 111s    v1.22.1
           

 All cluster nodes show NotReady; they become Ready only after the Flannel network plugin is installed.

[root@k8s-master ~]# kubectl get pod -n kube-system   // list pods in the kube-system namespace
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f6cbbb7b8-4j4x2             0/1     Pending   0          10m
coredns-7f6cbbb7b8-86j9t             0/1     Pending   0          10m
etcd-k8s-master                      1/1     Running   1          10m
kube-apiserver-k8s-master            1/1     Running   1          10m
kube-controller-manager-k8s-master   1/1     Running   1          10m
kube-proxy-5jjqg                     1/1     Running   0          10m
kube-proxy-fdq25                     1/1     Running   0          8m51s
kube-proxy-lmntm                     1/1     Running   0          8m43s
kube-scheduler-k8s-master            1/1     Running   1          10m
-- Flannel is not installed yet at this point; the CoreDNS pods stay Pending until it is --
           

How Flannel Works

 Flannel is a network-fabric solution that the CoreOS team designed for Kubernetes. In short, it gives every container created on any node in the cluster a cluster-wide unique virtual IP address. With the default Docker configuration, each node's Docker daemon hands out container IPs independently: containers on the same node can reach each other, but containers on different nodes cannot communicate.

 Flannel re-plans IP address usage for every node in the cluster so that containers on different nodes get non-overlapping addresses that "belong to the same flat network", and can then talk to each other directly over those internal IPs.

 Flannel stores its configuration and subnet assignments in etcd. On startup, the flanneld daemon retrieves the configuration and the list of subnets in use, picks an available subnet, and tries to register it; etcd also records each host's IP. flanneld watches /coreos.com/network/subnets in etcd and maintains a routing table from what it sees. For performance, Flannel optimizes the universal TAP/TUN device path and proxies IP fragmentation between TUN and UDP.

(figure: Flannel packet flow between nodes)

The figure illustrates how Flannel works:

· A packet leaving the source container is forwarded by the host's docker0 virtual bridge to the flannel0 virtual interface, a point-to-point device with the flanneld daemon listening on the other end.

· Flannel maintains, via etcd, a routing table that records each node host's subnet.

· The source host's flanneld wraps the original payload in UDP and, following its routing table, delivers it to the destination node's flanneld. On arrival the packet is unwrapped, enters the destination's flannel0 interface, is forwarded to the destination host's docker0 bridge, and is finally routed to the target container exactly like local container traffic.
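
These moving parts can be inspected on a live node. The commands below are an added aside; note that with the VXLAN backend configured later in this guide, the interface is named flannel.1 rather than flannel0:

[root@k8s-master ~]# cat /run/flannel/subnet.env    // the subnet flanneld leased for this host
[root@k8s-master ~]# ip -d link show flannel.1      // the VXLAN device flannel created
[root@k8s-master ~]# ip route | grep flannel.1      // per-node subnet routes maintained by flanneld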

Install the Flannel Network Plugin

 The kube-flannel.yaml below can be copied and used as-is. Note that "Network": "10.244.0.0/16" in net-conf.json matches the --pod-network-cidr passed to kubeadm init.

[root@k8s-master ~]# cat kube-flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
           
[root@k8s-master ~]# kubectl apply -f kube-flannel.yaml       // apply the manifest (installs the flannel plugin)
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

-- The deprecation warning above can be ignored; the flannel image will now be pulled (Init status means the pull is in progress) --


[root@k8s-master ~]# kubectl get pod -n kube-system     // list pods in the kube-system namespace
NAMESPACE     NAME                                 READY   STATUS     RESTARTS   AGE
kube-system   coredns-7f6cbbb7b8-4j4x2             0/1     Pending    0          111m
kube-system   coredns-7f6cbbb7b8-86j9t             0/1     Pending    0          111m
kube-system   etcd-k8s-master                      1/1     Running    1          111m
kube-system   kube-apiserver-k8s-master            1/1     Running    1          111m
kube-system   kube-controller-manager-k8s-master   1/1     Running    1          111m
kube-system   kube-flannel-ds-amd64-clh6j          0/1     Init:0/1   0          2m45s
kube-system   kube-flannel-ds-amd64-ljs2t          0/1     Init:0/1   0          2m45s
kube-system   kube-flannel-ds-amd64-mw748          0/1     Init:0/1   0          2m45s
kube-system   kube-proxy-5jjqg                     1/1     Running    0          111m
kube-system   kube-proxy-fdq25                     1/1     Running    0          110m
kube-system   kube-proxy-lmntm                     1/1     Running    0          109m
kube-system   kube-scheduler-k8s-master            1/1     Running    1          111m
           

 Pulling the image can be slow; you can pull it manually or obtain it from elsewhere. (You can also docker save the flannel image on a node that pulled it successfully, then docker load it on the nodes where the pull failed.)

 Example:

   - Export: docker save -o my_ubuntu.tar runoob/ubuntu:v3

   - Import: docker load < my_ubuntu.tar
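
Applied to the flannel image this manifest uses, the export/import would look like this (the file name and host assignments are illustrative):

[root@k8s-node1 ~]# docker save -o flannel_v0.14.0.tar quay.io/coreos/flannel:v0.14.0   // on a node that has the image
[root@k8s-node1 ~]# scp flannel_v0.14.0.tar root@192.168.1.20:~                         // copy it to the node missing it
[root@k8s-node2 ~]# docker load < flannel_v0.14.0.tar                                   // import it there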

[root@k8s-master ~]# kubectl describe pod kube-flannel-ds-amd64-clh6j -n kube-system      // show details for this pod in kube-system
Name:         kube-flannel-ds-amd64-clh6j   // the pod's name
Namespace:    kube-system         // its namespace
Priority:     0
Node:         k8s-ndoe1/192.168.1.19   // the node it is scheduled on
Start Time:   Fri, 27 Aug 2021 20:37:44 +0800
Labels:       app=flannel
              controller-revision-hash=76ccd4ff4f
              pod-template-generation=1
              tier=node
Annotations:  <none>
Status:       Pending
IP:           192.168.1.19
...
...
Node-Selectors:              <none>
Tolerations:                 :NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  6m56s  default-scheduler  Successfully assigned kube-system/kube-flannel-ds-amd64-clh6j to k8s-ndoe1
  Normal  Pulling    6m55s  kubelet            Pulling image "quay.io/coreos/flannel:v0.11.0-amd64"
--> The events above show that the k8s-node1 node is pulling the flannel image.
           

Check the Flannel plugin status:

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS        AGE
coredns-7f6cbbb7b8-4j4x2             1/1     Running   2 (2d11h ago)   10d
coredns-7f6cbbb7b8-86j9t             1/1     Running   2 (2d11h ago)   10d
etcd-k8s-master                      1/1     Running   2 (2d11h ago)   10d
kube-apiserver-k8s-master            1/1     Running   2 (2d11h ago)   10d
kube-controller-manager-k8s-master   1/1     Running   3 (2d11h ago)   6d1h
kube-flannel-ds-kngqd                1/1     Running   1 (14h ago)     10d
kube-flannel-ds-lzdpk                1/1     Running   0               10d
kube-flannel-ds-r7f4v                1/1     Running   2 (14h ago)     10d
kube-proxy-5jjqg                     1/1     Running   1 (2d11h ago)   10d
kube-proxy-fdq25                     1/1     Running   0               10d
kube-proxy-lmntm                     1/1     Running   1 (14h ago)     10d
kube-scheduler-k8s-master            1/1     Running   3 (2d11h ago)   6d1h
           

As shown above, once the flannel plugin is installed on every node, all pods in the kube-system namespace are in the Running state.

Check node status again:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   10d   v1.22.1
k8s-ndoe1    Ready    <none>                 10d   v1.22.1
k8s-node2    Ready    <none>                 10d   v1.22.1
           

All nodes are now Ready; you can start deploying your own services on the cluster!
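
As a quick smoke test (an illustrative example, not part of the original walkthrough), a throwaway nginx Deployment exercises scheduling, networking, and kube-proxy end to end:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx                 // schedule an nginx pod
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort     // publish it via a NodePort service
[root@k8s-master ~]# kubectl get pods,svc -o wide    // note the NodePort, then curl any node IP on that port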

Note:

 Clusters deployed with kubeadm may report unhealthy status for the kube-scheduler and kube-controller-manager components; the article below covers the fix.

 kube-scheduler and kube-controller-manager status issue on a kubeadm cluster: https://blog.csdn.net/qq_44895681/article/details/120075750?spm=1001.2014.3001.5501


 I recently started a WeChat official account where I also share ops knowledge; a follow would be much appreciated, thank you all.

【原創公衆号】:非著名運維 【福利】:公衆号回複 “資料” 送運維自學資料大禮包哦!

