
Setting up a k8s cluster with kubeadm on CentOS 7 (CRI-O + Calico) (k8s v1.21.0)

Table of Contents

  • Setting up a k8s cluster with kubeadm on CentOS 7 (CRI-O + Calico) (k8s v1.21.0)
    • Environment
    • Notes and caveats
      • 1. Version compatibility
      • 2. Image issues
    • Installation steps
      • Requirements
      • Environment preparation
      • Initial configuration (on every machine)
      • Install CRI-O from source (on every machine)
        • Install dependencies
        • Install CRI-O (install Go)
        • Install conmon
        • Setup CNI networking
        • Configure CRI-O
        • (Optional) Validate registries in registries.conf
        • Starting CRI-O
        • Install crictl
      • Deploy Kubernetes with kubeadm
        • Add the Kubernetes yum repository
        • Install kubeadm, kubelet and kubectl
        • Deploy the Kubernetes master [master node]
        • Join Kubernetes nodes [worker nodes]
      • Deploy the CNI network plugin
      • Test the Kubernetes cluster
    • Troubleshooting
      • 0. kubeadm init config file
      • 1. CRI-O build error: go: command not found
      • 2. CRI-O build error: cannot find package "." in:
      • 3. Error: cc: command not found
      • 3. kubelet startup error: Flag --cgroup-driver has been deprecated
      • 4. Error: kubeadm init cannot pull images
        • Fix
      • 5. Error: Error initializing source docker://k8s.gcr.io/pause:3.5:
      • 6. Error: Error getting node" err="node \"node\" not found"
      • 7. Error: "Error getting node" err="node \"cnio-master\" not found"
      • 8. Error: [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
        • Fix
      • 9. Error:
      • 10. Error: kubectl get cs reports errors
      • 11. Error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
      • 12. Error: Unable to connect to the server: x509: certificate signed by unknown authority
      • 13. Error: [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
      • 14. Error: with the calico plugin installed, coredns stays in ContainerCreating
      • 15. Other places where images are configured, recorded here for future lookup
      • 16. Error: coredns stuck in ContainerCreating: error adding seccomp rule for syscall socket: requested action matches default action of filter\""
    • Quick background
      • seccomp
      • make -j
    • References

Setting up a k8s cluster with kubeadm on CentOS 7 (CRI-O + Calico) (k8s v1.21.0)

Environment

Operating system: CentOS 7

cri-o: 1.21

go: 1.17

kubelet-1.21.2 kubeadm-1.21.2 kubectl-1.21.2

kubernetes: 1.21.0

runc: 1.0.2

All files used in this walkthrough have been uploaded; download them if you need them.

(One thing still puzzles me: is this actually Calico networking in the end?)

Notes and caveats

1. Version compatibility

CRI-O was chosen as the container runtime based on the official k8s and cri-o documentation.

Per the compatibility matrix below, we are installing k8s 1.21, so cri-o 1.21.x is the matching release.


Building cri-o 1.21 requires Go 1.16 or newer.

2. Image issues

kubeadm's imageRepository setting controls where the images needed at cluster init are pulled from. The default is k8s.gcr.io, which most people in China change to registry.aliyuncs.com/google_containers.

  • With docker as the container runtime, you can pull the images from Aliyun and simply retag them back to the k8s.gcr.io prefix.
  • With containerd as the container runtime, you can likewise retag with ctr.
  • cri-o cannot do this, so every image must share the same prefix. But Aliyun's mirror is incomplete; some images are only available elsewhere, which leaves you with two prefixes, and kubeadm init then fails. The fix: create your own registry, push every required image to it, and point everything at that single prefix.
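The workaround in the last bullet can be sketched as a small dry-run script. It only prints the docker commands (ningan123 is the hypothetical personal Docker Hub repo used later in this post; substitute your own):

```shell
#!/bin/sh
# Dry-run sketch: print the commands that copy each required image from the
# Aliyun mirror into one private repo, so everything shares a single prefix.
mirror_cmds() {
  src=registry.aliyuncs.com/google_containers
  dst=docker.io/ningan123   # replace with your own repository
  for img in kube-apiserver:v1.21.0 kube-proxy:v1.21.0 pause:3.4.1; do
    echo "docker pull $src/$img"
    echo "docker tag  $src/$img $dst/$img"
    echo "docker push $dst/$img"
  done
}
mirror_cmds
```

Remove the echo to actually execute the commands; coredns is the one exception and has to be pulled from docker.io/coredns/coredns instead (see the troubleshooting section below).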

Installation steps

Requirements

Before starting, the machines used for the Kubernetes cluster need to satisfy the following:

  • One or more machines running CentOS 7.x x86_64
  • 2 GB RAM or more, 2 CPUs or more, 30 GB disk or more [note: the master needs at least two cores]
  • Internet access to pull images; if the servers are offline, download the images beforehand and import them on each node
  • Swap disabled

Environment preparation

Role    IP
master  192.168.56.153
node1   192.168.56.154
node2   192.168.56.155

Initial configuration (on every machine)

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
# permanently (takes effect after reboot)
sed -i 's/enforcing/disabled/' /etc/selinux/config  
# for the current session
setenforce 0  

# Disable swap
# for the current session
swapoff -a 
# permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Set the hostname per the plan [on the master]
hostnamectl set-hostname crio-master
# [on node1]
hostnamectl set-hostname crio-node1
# [on node2]
hostnamectl set-hostname crio-node2

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.56.153 crio-master
192.168.56.154 crio-node1
192.168.56.155 crio-node2
EOF


# Time sync
yum install ntpdate -y
ntpdate time.windows.com

# Kernel parameters
cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.swappiness=0
EOF
 
modprobe overlay
modprobe br_netfilter

# Apply the settings
sysctl --system 

# Install ipvsadm and load the ipvs module at boot
yum install -y ipvsadm
   
# quote EOF so $? is not expanded while writing the file
cat > /etc/sysconfig/modules/ipvs.modules << 'EOF'
/sbin/modinfo -F filename ip_vs > /dev/null 2>&1
if [ $? -eq 0 ];then
 /sbin/modprobe ip_vs
fi
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
           

Install CRI-O from source (on every machine)

Install dependencies

yum install -y \
  containers-common \
  device-mapper-devel \
  git \
  glib2-devel \
  glibc-devel \
  glibc-static \
  go \
  gpgme-devel \
  libassuan-devel \
  libgpg-error-devel \
  libseccomp-devel \
  libselinux-devel \
  pkgconfig \
  make \
  runc
           
[root@crio-master ~]# runc -v
runc version spec: 1.0.1-dev
           

Install CRI-O (install Go first)

Go must be installed first, and it has to be new enough, otherwise the build (make) fails.

Reportedly Go 1.16+ is required; 1.17 is used here.

# Install Go
wget https://dl.google.com/go/go1.17.linux-amd64.tar.gz
tar -xzf go1.17.linux-amd64.tar.gz -C /usr/local
ln -s /usr/local/go/bin/* /usr/bin/   # create symlinks: ln -s <target> <link>
go version
           
[root@crio-master ~]# go version
go version go1.17 linux/amd64
           
# Install cri-o
git clone -b release-1.21 https://github.com/cri-o/cri-o.git
cd cri-o 
make   # or make -j8: run up to 8 jobs in parallel to make better use of the CPU
sudo make install
           
[root@crio-master ~]# crio -v
crio version 1.21.2
Version:       1.21.2
GitCommit:     unknown
GitTreeState:  unknown
BuildDate:     2021-09-03T07:28:21Z
GoVersion:     go1.17
Compiler:      gc
Platform:      linux/amd64
Linkmode:      dynamic

           

The default crio config file is /etc/crio/crio.conf; a default one can be generated with crio config --default > /etc/crio/crio.conf.

crio config --default > /etc/crio/crio.conf
           

Install conmon

git clone https://github.com/containers/conmon
cd conmon
make
sudo make install
           

Setup CNI networking

https://github.com/cri-o/cri-o/blob/master/contrib/cni/README.md

# If wget can't reach it, download locally and upload to the server
# (fetch the raw file, not the GitHub HTML page)
wget https://raw.githubusercontent.com/cri-o/cri-o/master/contrib/cni/10-crio-bridge.conf
mkdir -p /etc/cni/net.d   # create the directory if it doesn't exist
cp 10-crio-bridge.conf /etc/cni/net.d

git clone https://github.com/containernetworking/plugins
cd plugins
# git checkout v0.8.7

./build_linux.sh
           
sudo mkdir -p /opt/cni/bin
sudo cp bin/* /opt/cni/bin/
           

Configure CRI-O

On a first install, generate and install the configuration files:

[root@crio-master cri-o]# make install.config

install -Z -d /usr/local/share/containers/oci/hooks.d
install -Z -d /etc/crio/crio.conf.d
install -Z -D -m 644 crio.conf /etc/crio/crio.conf
install -Z -D -m 644 crio-umount.conf /usr/local/share/oci-umount/oci-umount.d/crio-umount.conf
install -Z -D -m 644 crictl.yaml /etc
[root@crio-master cri-o]#

           

(Optional) Validate registries in registries.conf

Edit

/etc/containers/registries.conf

and verify that the registries option has valid values in it. For example:

[registries.search]
registries = ['registry.access.redhat.com', 'registry.fedoraproject.org', 'quay.io', 'docker.io']

[registries.insecure]
registries = []

[registries.block]
registries = []
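For what it's worth, newer versions of containers-common replace this layout with TOML tables. If your registries.conf is already in the v2 format, the equivalent would look roughly like the following sketch (the location value is an example only):

```toml
unqualified-search-registries = ["registry.access.redhat.com", "registry.fedoraproject.org", "quay.io", "docker.io"]

# an insecure (plain-HTTP or self-signed) registry is declared as its own table
[[registry]]
location = "my-registry.example.com:5000"
insecure = true
```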
           

Starting CRI-O

make install.systemd

[root@crio-master cri-o]# make install.systemd
install -Z -D -m 644 contrib/systemd/crio.service /usr/local/lib/systemd/system/crio.service
ln -sf crio.service /usr/local/lib/systemd/system/cri-o.service
install -Z -D -m 644 contrib/systemd/crio-shutdown.service /usr/local/lib/systemd/system/crio-shutdown.service
install -Z -D -m 644 contrib/systemd/crio-wipe.service /usr/local/lib/systemd/system/crio-wipe.service
[root@crio-master cri-o]#

           
sudo systemctl daemon-reload
sudo systemctl enable crio
sudo systemctl start crio
systemctl status crio
crio --version

           

Install crictl

Official install instructions: https://github.com/kubernetes-sigs/cri-tools/

VERSION="v1.21.0"
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
rm -f crictl-$VERSION-linux-amd64.tar.gz


crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
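Passing --runtime-endpoint on every invocation gets tedious; crictl also reads /etc/crictl.yaml, so you can persist the endpoint there (a minimal example):

```yaml
# /etc/crictl.yaml -- point crictl at the CRI-O socket permanently
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
timeout: 10
```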
           

Deploy Kubernetes with kubeadm

Add the Kubernetes yum repository

Next, configure a yum repository for the k8s packages:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
           

Install kubeadm, kubelet and kubectl

You need the following packages on every machine:

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: runs on every node in the cluster and starts pods and containers.
  • kubectl: the command-line tool for talking to the cluster.

kubeadm does not install or manage kubelet or kubectl for you, so you must make sure their versions match the control plane that kubeadm installs. Otherwise you risk version skew, which can lead to unexpected bugs and problems.

Since releases move quickly, pin the versions here:

# Install kubelet, kubeadm and kubectl at a pinned version
# yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
#yum install -y kubelet kubeadm kubectl
#yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
yum install -y kubelet-1.21.2 kubeadm-1.21.2 kubectl-1.21.2

#### This step matters: point the kubelet at the CRI-O socket
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint='unix:///var/run/crio/crio.sock'
EOF

# Enable at boot and start now
systemctl daemon-reload && systemctl enable kubelet --now

#systemctl status kubelet   # at this point the status will be "activating"

# Check logs with: journalctl -xe
           
[root@crio-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:57:56Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
[root@crio-master ~]#
[root@crio-master ~]#
[root@crio-master ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@crio-master ~]#

           

Deploy the Kubernetes master [master node]

Generate a kubeadm config file:

[root@crio-master ~]# kubeadm config print init-defaults > kubeadm-config.yaml
W0903 15:50:51.208437   16483 kubelet.go:210] cannot automatically set CgroupDriver when starting the Kubelet: cannot execute 'docker info -f {{.CgroupDriver}}': executable file not found in $PATH

Despite this warning, kubeadm-config.yaml is still generated.
           
# cat kubeadm-config.yaml  # as initially generated
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

           
# after editing
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.153
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
imageRepository: docker.io/ningan123
networking:
  podSubnet: 10.85.0.0/16
           

Note that podSubnet here must match the subnet in /etc/cni/net.d/10-crio-bridge.conf.

controlPlaneEndpoint must be added when you want a highly available control plane; ideally it should be a virtual IP proxying the masters.
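If you do build an HA control plane, the endpoint goes into the ClusterConfiguration section; a sketch with a hypothetical VIP:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
# 192.168.56.200 is a hypothetical virtual IP fronting the apiservers
controlPlaneEndpoint: "192.168.56.200:6443"
```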

Use kubeadm config images list to see which images kubeadm needs.

criSocket must be set to CRI-O's socket file.

Change imageRepository to your own registry. If you leave it unchanged, try it anyway; the error that follows will make the reason clear!

As noted earlier, the default imageRepository is k8s.gcr.io and is usually switched to registry.aliyuncs.com/google_containers. But cri-o cannot retag images, and Aliyun's mirror lacks coredns:v1.8.0, so part of the images would come from a different prefix and kubeadm init would fail. Push all required images to your own registry and use that single prefix instead.

Before initializing the cluster, you can pre-pull the required images on every node with:

kubeadm config images pull --config kubeadm-config.yaml

[root@crio-master ~]# kubeadm config images list
I0903 15:57:47.465274   16788 version.go:254] remote version is much newer: v1.22.1; falling back to: stable-1.21
k8s.gcr.io/kube-apiserver:v1.21.4
k8s.gcr.io/kube-controller-manager:v1.21.4
k8s.gcr.io/kube-scheduler:v1.21.4
k8s.gcr.io/kube-proxy:v1.21.4
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

As you can see, the images are pulled from k8s.gcr.io by default.
           
[root@crio-master ~]# kubeadm config images list --config kubeadm-config.yaml
docker.io/ningan123/kube-apiserver:v1.21.0
docker.io/ningan123/kube-controller-manager:v1.21.0
docker.io/ningan123/kube-scheduler:v1.21.0
docker.io/ningan123/kube-proxy:v1.21.0
docker.io/ningan123/pause:3.4.1
docker.io/ningan123/etcd:3.4.13-0
docker.io/ningan123/coredns:v1.8.0

Because the config file sets the repository to docker.io/ningan123, the listed images now come from our own repository.
           

You can push the images on this list to your own registry first; the exact steps are under the "kubeadm init cannot pull images" error below.

Then edit the settings in /etc/crio/crio.conf. (Remember to change it on every node, or you'll hit a problem you can puzzle over for a very long time; I debugged it for ages before the fix suddenly hit me.)

The default crio config file is /etc/crio/crio.conf; it can be generated with:

crio config --default > /etc/crio/crio.conf
           
# from the original blog post
insecure_registries = ["registry.prod.bbdops.com"]
pause_image = "registry.prod.bbdops.com/common/pause:3.2"

# my configuration: point these at your own registry
insecure_registries = ["docker.io"]
pause_image = "docker.io/ningan123/pause:3.5"
           
# remember to restart after editing
[root@crio-master ~]# systemctl daemon-reload
[root@crio-master ~]# systemctl restart crio
[root@crio-master ~]# systemctl status crio
           

Deploy:

kubeadm init --config kubeadm-config.yaml --upload-certs

Pulling the images takes a while, so be patient; pre-pulling them on each node speeds this up.

On success, kubeadm prints the command for joining nodes to the cluster:

kubeadm join 192.168.56.142:6443 --token fvfg1j.ri0yl3hi80d3bv6o \
        --discovery-token-ca-cert-hash sha256:b7d05f887a406c28da9eebb5b519207d4839373295454dd93b225da015d6b6fe
           

Use the kubectl tool [on the master]

Run the following to configure a context for the kubectl client tool; the file /etc/kubernetes/admin.conf carries full cluster-admin rights.

# If you ran init before, remove the old config first: rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
           

Once that's done, use the following commands to look at the running nodes and pods:

kubectl get nodes
kubectl get pods --all-namespaces -o wide
kubectl describe pod coredns-7d89967ddf-4qjvl -n kube-system
kubectl describe pod coredns-7d89967ddf-6md2p -n kube-system
           

Oddly, the node is already Ready. Perhaps because the CNI network config was set up earlier??


If coredns stays stuck in ContainerCreating, inspect its pod:

kubectl describe pod <pod-name> -n kube-system
           

If you see the error: error adding seccomp rule for syscall socket: requested action matches default action of filter""

then upgrade runc, as described in the fix further down.

Check the cluster component status (if anything shows unhealthy, see the troubleshooting section below):

kubectl get cs
           

Four namespaces are created by default.


All k8s system components live in the kube-system namespace.

Note: the etcd, kube-apiserver, kube-controller-manager and kube-scheduler components are deployed as static pods; their manifests live in /etc/kubernetes/manifests on the host, and the kubelet automatically loads this directory and starts the pods.

ls -l /etc/kubernetes/manifests/
           

coredns is deployed as a Deployment, while kube-proxy runs as a DaemonSet:


Join Kubernetes nodes [worker nodes]

Next, on node1 and node2, run the following to add them to the cluster.

Run the kubeadm join command that kubeadm init printed:

# Note: run this only after the master init has finished; the token and hash differ for everyone, so copy the command your own init printed!
kubeadm join 192.168.56.142:6443 --token fvfg1j.ri0yl3hi80d3bv6o \
        --discovery-token-ca-cert-hash sha256:b7d05f887a406c28da9eebb5b519207d4839373295454dd93b225da015d6b6fe
           

If it complains that something already exists, run kubeadm reset first:

kubeadm reset
kubeadm join 192.168.56.142:6443 --token fvfg1j.ri0yl3hi80d3bv6o \
        --discovery-token-ca-cert-hash sha256:b7d05f887a406c28da9eebb5b519207d4839373295454dd93b225da015d6b6fe
           

The default token is valid for 24 hours; once it expires, it can no longer be used and you need to create a new one:

kubeadm token create --print-join-command
           

After both nodes have joined, check the result from the master:

kubectl get node
           

Deploy the CNI network plugin

Nothing to deploy; the nodes are already Ready. Puzzling!

Probably because the CNI config was set up earlier.

Test the Kubernetes cluster

K8s is container technology: it can pull images over the network and start them as containers.

Create a pod in the Kubernetes cluster and verify it runs normally:

# Create an nginx deployment [pulls the nginx image over the network]
kubectl create deployment nginx --image=nginx
# Check the status
kubectl get pod -o wide
           

Once the pod shows Running, it has started successfully.


Next, expose the port so the service can be reached from outside:

# Expose the port
kubectl expose deployment nginx --port=80 --type=NodePort
# Check the exposed port
kubectl get pod,svc
           

You can see that port 80 has been exposed on a NodePort (30529 here).


From a browser on the host machine, visit the following address (use the NodePort shown by your own kubectl get svc; mine differed between runs):

http://192.168.56.154:30318/
           

nginx is up and running.


At this point we have built a working single-master k8s cluster.

Troubleshooting

0. kubeadm init config file

# Used for the second deployment attempt
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.153
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: docker.io/ningan123
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

           

1. CRI-O build error: go: command not found

Install the Go toolchain (see the Go install steps above).


2. CRI-O build error: cannot find package "." in:

/root/cri-o/vendor/archive/tar
           
[root@crio-master cri-o]# make
go build -trimpath  -ldflags '-s -w -X github.com/cri-o/cri-o/internal/pkg/criocli.DefaultsPath="" -X github.com/cri-o/cri-o/internal/version.buildDate='2021-08-26T07:31:09Z' -X github.com/cri-o/cri-o/internal/version.gitCommit=87d9f169dad21a9049f25e47c3c8f78635bea340 -X github.com/cri-o/cri-o/internal/version.gitTreeState=clean ' -tags "containers_image_ostree_stub  exclude_graphdriver_btrfs btrfs_noversion   containers_image_openpgp seccomp selinux " -o bin/crio github.com/cri-o/cri-o/cmd/crio
vendor/github.com/containers/storage/pkg/archive/archive.go:4:2: cannot find package "." in:
        /root/cri-o/vendor/archive/tar
vendor/github.com/containers/storage/pkg/archive/archive.go:5:2: cannot find package "." in:
        /root/cri-o/vendor/bufio
vendor/github.com/containers/storage/layers.go:4:2: cannot find package "." in:
        /root/cri-o/vendor/bytes
vendor/github.com/containers/storage/pkg/archive/archive.go:7:2: cannot find package "." in:
        /root/cri-o/vendor/compress/bzip2
vendor/golang.org/x/crypto/openpgp/packet/compressed.go:9:2: cannot find package "." in:
        /root/cri-o/vendor/compress/flate
vendor/golang.org/x/net/http2/transport.go:12:2: cannot find package "." in:
        /root/cri-o/vendor/compress/gzip
vendor/golang.org/x/crypto/openpgp/packet/compressed.go:10:2: cannot find package "." in:
        /root/cri-o/vendor/compress/zlib
vendor/github.com/vbauerster/mpb/v7/progress.go:5:2: cannot find package "." in:
        /root/cri-o/vendor/container/heap
vendor/github.com/containers/storage/drivers/copy/copy_linux.go:14:2: cannot find package "." in:
        /root/cri-o/vendor/container/list
cmd/crio/main.go:4:2: cannot find package "." in:
        /root/cri-o/vendor/context
vendor/github.com/opencontainers/go-digest/algorithm.go:19:2: cannot find package "." in:
        /root/cri-o/vendor/crypto


           

Fix:

The Go version was too old: it was 1.15.14, and 1.16+ is required.

References:

the cri-o issue tracker on GitHub

[Installing, configuring, upgrading and removing Go on Linux] (studygolang.com)

[root@crio-master cri-o]# make -j8
go build  -ldflags '-s -w -X github.com/cri-o/cri-o/internal/pkg/criocli.DefaultsPath="" -X github.com/cri-o/cri-o/internal/version.buildDate='2021-08-26T08:17:55Z' -X github.com/cri-o/cri-o/internal/version.gitCommit=cd588bfd7a710a7d5043a1791dcc9516646df9b3 -X github.com/cri-o/cri-o/internal/version.gitTreeState=clean ' -tags "containers_image_ostree_stub  exclude_graphdriver_btrfs btrfs_noversion    seccomp selinux" -o bin/crio github.com/cri-o/cri-o/cmd/crio
go build  -ldflags '-s -w -X github.com/cri-o/cri-o/internal/pkg/criocli.DefaultsPath="" -X github.com/cri-o/cri-o/internal/version.buildDate='2021-08-26T08:17:55Z' -X github.com/cri-o/cri-o/internal/version.gitCommit=cd588bfd7a710a7d5043a1791dcc9516646df9b3 -X github.com/cri-o/cri-o/internal/version.gitTreeState=clean ' -tags "containers_image_ostree_stub  exclude_graphdriver_btrfs btrfs_noversion    seccomp selinux" -o bin/crio-status github.com/cri-o/cri-o/cmd/crio-status
make -C pinns
make[1]: Entering directory '/root/cri-o/pinns'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/root/cri-o/pinns'
(/root/cri-o/build/bin/go-md2man -in docs/crio-status.8.md -out docs/crio-status.8.tmp && touch docs/crio-status.8.tmp && mv docs/crio-status.8.tmp docs/crio-status.8) || \
        (/root/cri-o/build/bin/go-md2man -in docs/crio-status.8.md -out docs/crio-status.8.tmp && touch docs/crio-status.8.tmp && mv docs/crio-status.8.tmp docs/crio-status.8)
(/root/cri-o/build/bin/go-md2man -in docs/crio.conf.5.md -out docs/crio.conf.5.tmp && touch docs/crio.conf.5.tmp && mv docs/crio.conf.5.tmp docs/crio.conf.5) || \
        (/root/cri-o/build/bin/go-md2man -in docs/crio.conf.5.md -out docs/crio.conf.5.tmp && touch docs/crio.conf.5.tmp && mv docs/crio.conf.5.tmp docs/crio.conf.5)
(/root/cri-o/build/bin/go-md2man -in docs/crio.conf.d.5.md -out docs/crio.conf.d.5.tmp && touch docs/crio.conf.d.5.tmp && mv docs/crio.conf.d.5.tmp docs/crio.conf.d.5) || \
        (/root/cri-o/build/bin/go-md2man -in docs/crio.conf.d.5.md -out docs/crio.conf.d.5.tmp && touch docs/crio.conf.d.5.tmp && mv docs/crio.conf.d.5.tmp docs/crio.conf.d.5)
(/root/cri-o/build/bin/go-md2man -in docs/crio.8.md -out docs/crio.8.tmp && touch docs/crio.8.tmp && mv docs/crio.8.tmp docs/crio.8) || \
        (/root/cri-o/build/bin/go-md2man -in docs/crio.8.md -out docs/crio.8.tmp && touch docs/crio.8.tmp && mv docs/crio.8.tmp docs/crio.8)
./bin/crio -d "" --config=""  config > crio.conf
INFO[2021-08-26 16:20:36.694730863+08:00] Starting CRI-O, version: 1.21.2, git: cd588bfd7a710a7d5043a1791dcc9516646df9b3(clean)
INFO Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL
[root@crio-master cri-o]#

           

The build passes. Nice!

That really took a lot of debugging.

3. Error: cc: command not found

[root@crio-master cri-o]# make -j8
mkdir -p "/root/cri-o/_output/src/github.com/cri-o"
make -C pinns
make[1]: Entering directory '/root/cri-o/pinns'
cc -std=c99 -Os -Wall -Werror -Wextra -static -O3 -o src/sysctl.o -c src/sysctl.c
make[1]: cc: command not found
make[1]: *** [src/sysctl.o] Error 127
make[1]: Leaving directory '/root/cri-o/pinns'
ln -s "/root/cri-o" "/root/cri-o/_output/src/github.com/cri-o/cri-o"
make: *** [bin/pinns] Error 2
make: *** Waiting for unfinished jobs....
touch "/root/cri-o/_output/.gopathok"

           
# Installing gcc fixes it
yum install -y gcc
           

3. kubelet startup error: Flag --cgroup-driver has been deprecated

Aug 26 17:46:41 cnio-master kubelet[29519]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/task
Aug 26 17:46:41 cnio-master kubelet[29519]: Flag --runtime-request-timeout has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io
Aug 26 17:46:41 cnio-master kubelet[29519]: E0826 17:46:41.446034   29519 server.go:204] "Failed to load kubelet config file" err="failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to
Aug 26 17:46:41 cnio-master systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
           

Some of the flags are deprecated.

# Update the config and restart
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint='unix:///var/run/crio/crio.sock'
EOF

# Enable at boot and start now
systemctl daemon-reload && systemctl enable kubelet --now
           

4. Error: kubeadm init cannot pull images

[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns:v1.8.0:

# Error message
[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns:v1.8.0: output: time="2021-08-26T19:09:02+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = error reading manifest v1.8.0 in registry.aliyuncs.com/google_containers/coredns: manifest unknown: manifest unknown"

           

The Aliyun registry does not have:

coredns:v1.8.0 (needed by 1.21)

coredns:v1.8.4 (needed by 1.22)

So switch to your own registry address and upload the needed images to it.


How do you know which images to upload? Run the init; its errors tell you what's missing:

kubeadm init --config kubeadm-config.yaml --upload-certs
           

After switching to your own registry:

kubeadm config images list
kubeadm config images list --config kubeadm-config.yaml
           

Fix

I created a repository of my own on Docker Hub, named ningan123.

# On another host with docker installed (call it AAA)
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0

docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
#docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.0    # not present in this registry
#docker pull networkman/coredns:v1.8.0
#docker pull docker.io/coredns/coredns:v1.8.0   # there is no v1.8.0 tag, only 1.8.0
docker pull docker.io/coredns/coredns:1.8.0
           
# On AAA
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0 ningan123/kube-apiserver:v1.21.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0 ningan123/kube-controller-manager:v1.21.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0 ningan123/kube-scheduler:v1.21.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0 ningan123/kube-proxy:v1.21.0

docker tag registry.aliyuncs.com/google_containers/pause:3.4.1 ningan123/pause:3.4.1
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0 ningan123/etcd:3.4.13-0
docker tag docker.io/coredns/coredns:1.8.0 ningan123/coredns:v1.8.0
           
# On AAA
docker push ningan123/kube-apiserver:v1.21.0
docker push ningan123/kube-controller-manager:v1.21.0 
docker push ningan123/kube-scheduler:v1.21.0
docker push ningan123/kube-proxy:v1.21.0

docker push ningan123/pause:3.4.1
docker push ningan123/etcd:3.4.13-0
docker push ningan123/coredns:v1.8.0

           
# On the master, pull the images    (the username and password don't seem to be required)

crictl pull docker.io/ningan123/kube-apiserver:v1.21.0 --creds ningan123:awy199525
crictl pull docker.io/ningan123/kube-controller-manager:v1.21.0  --creds ningan123:awy199525
crictl pull docker.io/ningan123/kube-scheduler:v1.21.0 --creds ningan123:awy199525
crictl pull docker.io/ningan123/kube-proxy:v1.21.0 --creds ningan123:awy199525

crictl pull docker.io/ningan123/pause:3.4.1 --creds ningan123:awy199525
crictl pull docker.io/ningan123/etcd:3.4.13-0 --creds ningan123:awy199525
crictl pull docker.io/ningan123/coredns:v1.8.0 --creds ningan123:awy199525
           

5.錯誤:Error initializing source docker://k8s.gcr.io/pause:3.5:

Error initializing source docker://k8s.gcr.io/pause:3.5: error pinging docker registry k8s.gcr.io: Get "https://k8s.gcr.io/v2/": dial tcp 64.233.188.82:443: connect: connection refused"

Edit the settings in /etc/crio/crio.conf. (Remember to change it on every node, or you'll hit a problem you can puzzle over for a very long time; I debugged it for ages before the fix suddenly hit me.)

# from the original blog post
insecure_registries = ["registry.prod.bbdops.com"]
pause_image = "registry.prod.bbdops.com/common/pause:3.2"

# my configuration: point these at your own registry
insecure_registries = ["docker.io"]
pause_image = "docker.io/ningan123/pause:3.5"
           
# remember to restart after editing
[root@crio-master ~]# systemctl daemon-reload
[root@crio-master ~]# systemctl restart crio
[root@crio-master ~]# systemctl status crio
           

6. Error: "Error getting node" err="node \"node\" not found"


将node注釋掉


7. Error: "Error getting node" err="node \"cnio-master\" not found"

Add the hostname-to-IP mapping to /etc/hosts.

8. Error: [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

[preflight] If you know what you are doing, you can make a check non-fatal with

--ignore-preflight-errors=...

Solution

[root@host ~]# cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
> net.bridge.bridge-nf-call-iptables  = 1
> net.ipv4.ip_forward                 = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> vm.swappiness=0
> EOF
# vm.swappiness was missing from the earlier version of this file

[root@host ~]# modprobe br_netfilter
[root@host ~]# sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
           

9. Error: kubeadm init with a custom config fails with "invalid configuration for GroupVersionKind /, Kind=: kind and apiVersion is mandatory"

[root@cnio-master ~]# kubeadm reset && kubeadm init --config kubeadm-config.yaml --upload-certs
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "cnio-master" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
invalid configuration for GroupVersionKind /, Kind=: kind and apiVersion is mandatory information that must be specified
To see the stack trace of this error execute with --v=5 or higher
[root@cnio-master ~]#

           

Cause: my own edits to the kubeadm config file were incorrect.

# 1. The broken config file; only minor edits on top of the file kubeadm generates
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.145
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  #name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
#imageRepository: registry.aliyuncs.com/google_containers
imageRepository: docker.io/ningan123
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.85.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler: {}

           
# 2. The same file with everything related to bootstrapTokens removed; this one initializes successfully
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.145
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: docker.io/ningan123
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.85.0.0/16
  serviceSubnet: 10.96.0.0/16
scheduler: {}

           
# 3. A config in the style of someone else's write-up; this also initializes successfully
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.145
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.21.0
imageRepository: docker.io/ningan123
networking:
  podSubnet: 10.85.0.0/16

           

10. Error: kubectl get cs reports errors

[root@cnio-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}

           

controller-manager and scheduler are Unhealthy. Edit the static pod manifests kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests/, delete the

- --port=0

line from the command options in both files, then restart kubelet. Checking again, everything is healthy.

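The edit can also be scripted with sed. The sketch below demonstrates the deletion on a scratch copy of a manifest fragment; on a real control-plane node you would point the same command at the two files under /etc/kubernetes/manifests/ and then restart kubelet:

```shell
# Demonstrate deleting the "- --port=0" line with sed on a scratch file.
# On a real master, substitute /etc/kubernetes/manifests/kube-scheduler.yaml
# and kube-controller-manager.yaml, then `systemctl restart kubelet`.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
    - --leader-elect=true
    - --port=0
EOF
sed -i '/- --port=0/d' "$tmp"
cat "$tmp"
```

Because the manifests are static pod definitions, kubelet re-creates both pods as soon as it restarts.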

11. Error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Method 1: as with the previous error, check the cluster status; part of the control plane is probably not running.

Method 2: reboot the machine. Error 8 then shows up again; fix it as follows. (I debugged this for a whole morning and in the end it was a reboot that solved it. What can I even say?)

modprobe overlay
modprobe br_netfilter

# apply the settings
sysctl --system 
           

Then run kubeadm join again and it succeeds.

12. Error: Unable to connect to the server: x509: certificate signed by unknown authority

[root@cnio-master ~]# kubectl get node

Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

This usually means $HOME/.kube/config still carries credentials from a previous cluster; after re-running kubeadm init, copy the fresh /etc/kubernetes/admin.conf over it.

13. Error: [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

The error:
ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
Solution:
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

Note that the echo only lasts until reboot; the /etc/sysctl.d/99-kubernetes-cri.conf file from error 8 is the persistent fix.
(Source: CSDN blogger 永遠在路上啊, https://blog.csdn.net/qq_34988341/article/details/109501589)
           

14. Error: after installing the calico network plugin, coredns stays in ContainerCreating

# Look up the failure reason
kubectl describe pod kube-proxy-rrxg5 -n kube-system
           
# The failure message:
Warning  FailedCreatePodSandBox  <invalid> (x56 over <invalid>)  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = error creating pod sandbox with name "k8s_kube-proxy-rrxg5_kube-system_17893532-5a8f-45ba-9b3b-7ad08264f68a_0": Error initializing source docker://k8s.gcr.io/pause:3.5: error pinging docker registry k8s.gcr.io: Get "https://k8s.gcr.io/v2/": dial tcp 64.233.188.82:443: connect: connection refused
           

It's k8s.gcr.io/pause:3.5 yet again.

I debugged this bug for a very long time and tried every fix I could find; nothing worked. I had all but given up and spent a few days on other things, until one morning on the way to work it suddenly clicked: I knew where the problem was.

The problem: I had only changed the image settings in crio.conf on the master, never in the nodes' configuration. That is why, of the three kube-proxy pods, the master's was fine while the nodes' were broken.

Solution: make the same change on the node machines.

If a node has no crio.conf file, generate one with:

crio config --default > /etc/crio/crio.conf
           

Then edit these settings in

/etc/crio/crio.conf

:

# From the original blog post
insecure_registries = ["registry.prod.bbdops.com"]
pause_image = "registry.prod.bbdops.com/common/pause:3.2"

# My configuration: point these at your own image registry
insecure_registries = ["docker.io"]
pause_image = "docker.io/ningan123/pause:3.5"
           
# Remember to restart crio after editing
[root@node ~]# systemctl daemon-reload
[root@node ~]# systemctl restart crio
[root@node ~]# systemctl status crio
           

15. Other places where image registries are configured (recorded here for future lookup)

We configure mirror registries for docker.io and k8s.gcr.io. Note: the mirror for k8s.gcr.io matters. Even though we can pass

--image-repository registry.aliyuncs.com/google_containers

to tell

kubeadm init

to pull from the Aliyun mirror, I found that pods are still created with the k8s.gcr.io/pause image, so a mirror registry is configured for k8s.gcr.io here to head off that error.

cat > /etc/containers/registries.conf <<EOF
unqualified-search-registries = ["docker.io", "quay.io"]

[[registry]]
  location = "docker.io"
  insecure = false
  blocked = false
  mirror-by-digest-only = false
  prefix = ""

  [[registry.mirror]]
    location = "docker.mirrors.ustc.edu.cn"
    insecure = false

[[registry]]
  location = "k8s.gcr.io"
  insecure = false
  blocked = false
  mirror-by-digest-only = false
  prefix = ""

  [[registry.mirror]]
    location = "docker.io/ningan123"
    insecure = false
EOF
           


Note: as mentioned in the "install the container runtime" section, even when we pass

--image-repository

to

kubeadm init

the pause image is still pulled from k8s.gcr.io and causes failures. We solved this by giving k8s.gcr.io a mirror registry in the container runtime, but there is one more option: configure kubelet itself to use a different pause image.

# /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.5
           

16. Error: coredns stuck in ContainerCreating; "error adding seccomp rule for syscall socket: requested action matches default action of filter"

Events:
  Type     Reason                  Age   From               Message
  ----     ------                  ----  ----               -------
  Normal   Scheduled               29s   default-scheduler  Successfully assigned default/nginx-6799fc88d8-vrrz8 to crio-node2
  Warning  FailedCreatePodSandBox  26s   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = container create failed: time="2021-09-03T10:40:10+08:00" level=error msg="container_linux.go:349: starting container process caused \"error adding seccomp rule for syscall socket: requested action matches default action of filter\""
container_linux.go:349: starting container process caused "error adding seccomp rule for syscall socket: requested action matches default action of filter"
  Warning  FailedCreatePodSandBox  12s  kubelet  Failed to create pod sandbox: rpc error: code = Unknown desc = container create failed: time="2021-09-03T10:40:24+08:00" level=error msg="container_linux.go:349: starting container process caused \"error adding seccomp rule for syscall socket: requested action matches default action of filter\""
container_linux.go:349: starting container process caused "error adding seccomp rule for syscall socket: requested action matches default action of filter"

           

Reports online said the runc version was too old, so I upgraded it:

# Before the upgrade
[root@node ~]# runc -v
runc version spec: 1.0.1-dev

           
# After the upgrade

[root@node ~]# runc -v
runc version 1.0.2
commit: v1.0.2-0-g52b36a2
spec: 1.0.2-dev
go: go1.17
libseccomp: 2.3.1

           
git clone https://github.com/opencontainers/runc -b v1.0.2
cd runc
make
make install

[root@node runc-1.0.2]# make install
fatal: Not a git repository (or any of the parent directories): .git
fatal: Not a git repository (or any of the parent directories): .git
install -D -m0755 runc /usr/local/sbin/runc


# After a successful install, runc -v still reports the old version
[root@node ~]# runc -v
runc version spec: 1.0.1-dev

# Locate runc: there is another copy elsewhere
[root@node ~]# whereis runc
runc: /usr/bin/runc /usr/local/sbin/runc /usr/share/man/man8/runc.8.gz

# Back up the old binary first
[root@node ~]# mv /usr/bin/runc /usr/bin/runc_backup


# Fix: the install location and the lookup location differ, so put a copy
# of the new binary where it is actually resolved from
[root@node runc]# cp /usr/local/sbin/runc /usr/bin/runc



# Problem solved!
[root@node ~]# runc -v
runc version 1.0.2
spec: 1.0.2-dev
go: go1.17
libseccomp: 2.3.1

           

## Tidbits

### seccomp

What is seccomp?

seccomp (secure computing mode) is a security mechanism supported by the Linux kernel since version 2.6.23.

On Linux, a large number of system calls are exposed directly to user-space programs. Not all of them are needed, and unsafe code that abuses system calls is a security threat to the system. With seccomp we can restrict which system calls a program may use, which shrinks the system's attack surface and puts the program into a "secure" state.

(Source: https://blog.csdn.net/mashimiao/article/details/73607485)
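You can see whether a given process is confined by seccomp from its /proc status; a quick illustrative check on the current shell:

```shell
# Print the seccomp mode of the current shell process.
# 0 = seccomp disabled, 1 = strict mode, 2 = filter mode (what runc/cri-o apply).
grep '^Seccomp' /proc/self/status
```

Containers started by runc with a seccomp profile will show mode 2 here, while an ordinary shell usually shows 0.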

### make -j

Since I/O is not the bottleneck, CPU becomes the main factor in compile speed.

Passing -j with an argument makes the project build in parallel. On a dual-core machine, for example, make -j4 lets make run up to four compile jobs at once, which uses the CPU much more effectively.

Testing with the kernel again:

make: 40m16s

make -j4: 23m16s

make -j8: 22m59s

So on a multi-core CPU, a moderate amount of parallelism clearly speeds up the build, but too many jobs do not help; about twice the number of CPU cores is a good rule of thumb.
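The twice-the-cores rule can be computed instead of hard-coded (a small sketch; nproc is part of GNU coreutils):

```shell
# Derive the parallel job count from the CPU core count (cores x 2)
jobs=$(( $(nproc) * 2 ))
echo "make -j${jobs}"
```

On a 4-core machine this would suggest make -j8, matching the rule of thumb above.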

## References

- cri-o official install guide: https://github.com/cri-o/cri-o/blob/master/install.md
- Building a highly available Kubernetes cluster with CRI-O and kubeadm (crio 1.19, k8s 1.18.4): https://xujiyou.work/%E4%BA%91%E5%8E%9F%E7%94%9F/CRI-O/%E4%BD%BF%E7%94%A8CRI-O%E5%92%8CKubeadm%E6%90%AD%E5%BB%BA%E9%AB%98%E5%8F%AF%E7%94%A8%20Kubernetes%20%E9%9B%86%E7%BE%A4.html
- Setting up a k8s cri-o test environment: https://hanamichi.wiki/posts/k8s-ciro/
- Deploying Kubernetes 1.22 with kubeadm (k8s 1.22): https://blog.frognew.com/2021/08/kubeadm-install-kubernetes-1.22.html#24-%E9%83%A8%E7%BD%B2pod-network%E7%BB%84%E4%BB%B6calico
- Setting up a single-node k8s test cluster with kubeadm: https://segmentfault.com/a/1190000022907793
- Fixing image problems when deploying a Kubernetes cluster with kubeadm: https://www.cnblogs.com/nb-blog/p/10636733.html

