
Installing a k8s Cluster with kubeadm

Table of Contents

  • Environment
    • Three CentOS machines
    • Versions
  • Preparation
    • Disable all firewalls
    • Disable SELinux
    • Disable swap
    • Add host entries
    • Install related components
      • Docker
      • Install `kubelet`, `kubeadm`, `kubectl`
  • Master node configuration
    • Initialize the k8s cluster
    • Install a Pod network
  • Add worker nodes
  • Tear down the cluster
  • Command reference

Environment

Three CentOS machines

Hostname   IP             Role
kubeadm1   10.211.55.23   Master
kubeadm2   10.211.55.24   Worker
kubeadm3   10.211.55.25   Worker

Versions

  • OS: CentOS Linux release 7.5.1804 (Core)
  • Docker: 18.06.1-ce
  • Kubernetes: 1.13.2
  • kubectl: 1.13.2
  • kubelet: 1.13.2
  • kubeadm: 1.13.2

Preparation

Perform the following operations on all nodes.

Disable all firewalls

[root@kubeadm1 ~]# systemctl disable firewalld.service
[root@kubeadm1 ~]# systemctl stop firewalld.service
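
If you would rather keep firewalld running, an alternative (a hedged sketch; the exact port list depends on your setup and network add-on) is to open only the ports Kubernetes and flannel need instead of disabling the firewall:

# Master node: API server, etcd, kubelet/scheduler/controller-manager
[root@kubeadm1 ~]# firewall-cmd --permanent --add-port=6443/tcp
[root@kubeadm1 ~]# firewall-cmd --permanent --add-port=2379-2380/tcp
[root@kubeadm1 ~]# firewall-cmd --permanent --add-port=10250-10252/tcp
# flannel VXLAN traffic (all nodes)
[root@kubeadm1 ~]# firewall-cmd --permanent --add-port=8472/udp
[root@kubeadm1 ~]# firewall-cmd --reload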
           

Disable SELinux

# Temporarily disable
[root@kubeadm1 ~]# setenforce 0

# Permanently disable
[root@kubeadm1 ~]# vim /etc/selinux/config
SELINUX=disabled
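
If you prefer a non-interactive edit, a minimal sketch (assuming the file still contains the default SELINUX=enforcing line):

[root@kubeadm1 ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config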
           

Disable swap

[root@kubeadm1 ~]# swapoff -a
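
Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, a hedged sketch is to also comment out the swap entry in /etc/fstab:

# Comment out any swap lines so swap stays disabled after a reboot
[root@kubeadm1 ~]# sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab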
           

Add host entries

[root@kubeadm1 ~]# cat <<EOF > /etc/hosts
> 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
>
> 10.211.55.23 kubeadm1
> 10.211.55.24 kubeadm2
> 10.211.55.25 kubeadm3
> EOF
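
A quick sanity check (an optional extra, not in the original steps) to confirm each hostname resolves and the nodes can reach one another:

[root@kubeadm1 ~]# for h in kubeadm1 kubeadm2 kubeadm3; do ping -c 1 $h > /dev/null && echo "$h OK"; done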
           

Install related components

Docker

For installing Docker, see the separate Docker installation guide.

Install `kubelet`, `kubeadm`, `kubectl`

[root@kubeadm1 ~]# yum install -y kubelet kubeadm kubectl
[root@kubeadm1 ~]# systemctl enable kubelet && systemctl start kubelet
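
This article targets 1.13.2, but a bare yum install pulls whatever version is latest in the repo. To pin the exact versions used here (a hedged sketch, assuming the repo still carries these packages):

[root@kubeadm1 ~]# yum install -y kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2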
           

If the installation fails, try switching the yum repository:

[root@kubeadm1 ~]# cat << EOF >> /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes Repo
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
> EOF
           

Configure kubectl:

[root@kubeadm1 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
[root@kubeadm1 ~]# source /etc/profile
[root@kubeadm1 ~]# echo $KUBECONFIG
           

Master node configuration

Initialize the k8s cluster

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.2
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.2
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.2
docker pull mirrorgooglecontainers/kube-proxy:v1.13.2
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull coredns/coredns:1.2.6

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.2 k8s.gcr.io/kube-apiserver:v1.13.2
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.2 k8s.gcr.io/kube-controller-manager:v1.13.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.2 k8s.gcr.io/kube-scheduler:v1.13.2
docker tag mirrorgooglecontainers/kube-proxy:v1.13.2 k8s.gcr.io/kube-proxy:v1.13.2
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.2
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.2
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.2
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.2
docker rmi mirrorgooglecontainers/pause-amd64:3.1
docker rmi mirrorgooglecontainers/etcd-amd64:3.2.24
docker rmi coredns/coredns:1.2.6
           
During initialization, Kubernetes pulls its images from k8s.gcr.io, which fails from inside mainland China due to network restrictions. Copies of these images exist on hub.docker.com, so we pull them from there first and then re-tag them with the k8s.gcr.io names, as shown above.
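
To confirm exactly which images a given kubeadm version expects (a hedged aside; this subcommand exists from kubeadm 1.11 on), you can ask kubeadm itself:

[root@kubeadm1 ~]# kubeadm config images list --kubernetes-version v1.13.2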

Initialize Kubernetes on the master node.

kubeadm init --kubernetes-version=v1.13.2 --apiserver-advertise-address 10.211.55.23 --pod-network-cidr=10.244.0.0/16
           
  • --kubernetes-version: specifies the Kubernetes version.
  • --apiserver-advertise-address: specifies which of the master's network interfaces to use for communication; if omitted, kubeadm automatically picks the interface that has the default gateway.
  • --pod-network-cidr: specifies the Pod network range. The value depends on the network solution in use; this article uses the classic flannel solution.
[root@kubeadm1 ~]# kubeadm init --kubernetes-version=v1.13.2 --apiserver-advertise-address 10.211.55.23 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
	[WARNING HTTPProxy]: Connection to "https://10.211.55.23" uses proxy "http://127.0.0.1:8118". If that is not intended, adjust your proxy settings
	[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
	[WARNING HTTPProxyCIDR]: connection to "10.244.0.0/16" uses proxy "http://127.0.0.1:8118". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm1 localhost] and IPs [10.211.55.23 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm1 localhost] and IPs [10.211.55.23 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.211.55.23]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

*************************** (output omitted) ***************************
           

Looking at the first few lines, there are several proxy warnings. They appear because I run a proxy on this CentOS machine to get around the network restrictions. Left unhandled, this would interfere with installing the Pod network in the next step (most setups will not hit this problem, so the fix is omitted here).

Re-initialize:

[root@kubeadm1 ~]# kubeadm init --kubernetes-version=v1.13.2 --apiserver-advertise-address 10.211.55.23 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm1 localhost] and IPs [10.211.55.23 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm1 localhost] and IPs [10.211.55.23 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.211.55.23]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.004917 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm1" as an annotation
[mark-control-plane] Marking the node kubeadm1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadm1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9lam9f.dehj4ggp4q3zzgh7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.211.55.23:6443 --token o96ndy.cm77x257291hha2c --discovery-token-ca-cert-hash sha256:40ec5eaf1c50c03e646b73866458e7b5714d36979c049656c4fb294e314893c3
           

Following the instructions printed above, run the commands below; this step makes it convenient for kubectl to connect to the cluster:

[root@kubeadm1 ~]# mkdir -p $HOME/.kube
[root@kubeadm1 ~]# sudo cp -rp /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kubeadm1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
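
Optionally (an extra step, not in the original instructions), enable kubectl shell completion to make the commands that follow easier to type:

[root@kubeadm1 ~]# yum install -y bash-completion
[root@kubeadm1 ~]# echo 'source <(kubectl completion bash)' >> ~/.bashrc
[root@kubeadm1 ~]# source ~/.bashrc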
           

Once that is done, check the Pods. Some will have STATUS Pending; ignore them for now:

[root@kubeadm1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE     IP             NODE       NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-4m84h           0/1     Pending   0          10m     <none>         <none>     <none>           <none>
kube-system   coredns-86c58d9df4-z48tf           0/1     Pending   0          10m     <none>         <none>     <none>           <none>
kube-system   etcd-kubeadm1                      1/1     Running   0          9m3s    10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-apiserver-kubeadm1            1/1     Running   0          9m4s    10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-controller-manager-kubeadm1   1/1     Running   1          9m6s    10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-proxy-7p59r                   1/1     Running   0          10m     10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-scheduler-kubeadm1            1/1     Running   1          9m25s   10.211.55.23   kubeadm1   <none>           <none>
           

Check the node information:

[root@kubeadm1 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
kubeadm1   Ready    master   11m   v1.13.2
           
  • Master node initialization complete.
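
As the init log shows, kubeadm taints the master with node-role.kubernetes.io/master:NoSchedule, so ordinary Pods will not be scheduled onto it. For a single-node test cluster, a hedged option is to remove that taint:

[root@kubeadm1 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-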

Install a Pod network

A Pod network is a prerequisite for Pods to communicate with each other. Kubernetes supports many network solutions; here we use the classic flannel.

  • Set kernel parameters so that packets forwarded across Layer 2 bridges are also filtered by iptables FORWARD rules.
[root@kubeadm1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> vm.swappiness=0
> EOF

[root@kubeadm1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
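
On some CentOS 7 systems the net.bridge.* keys do not exist until the br_netfilter kernel module is loaded. A hedged sketch to load it now and on every boot:

[root@kubeadm1 ~]# modprobe br_netfilter
[root@kubeadm1 ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf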
           
  • Run the following commands:
[root@kubeadm1 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@kubeadm1 ~]# kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
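
The flannel DaemonSet takes a moment to roll out on each node. One way to watch it converge (a hedged sketch; the app=flannel label is assumed from the kube-flannel.yml manifest):

[root@kubeadm1 ~]# kubectl -n kube-system get pods -l app=flannel -w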
           

Checking the Pods again, the ones that were Pending have all become Running:

[root@kubeadm1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-4m84h           1/1     Running   0          11m   10.244.0.5     kubeadm1   <none>           <none>
kube-system   coredns-86c58d9df4-z48tf           1/1     Running   0          11m   10.244.0.4     kubeadm1   <none>           <none>
kube-system   etcd-kubeadm1                      1/1     Running   0          10m   10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-apiserver-kubeadm1            1/1     Running   1          10m   10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-controller-manager-kubeadm1   1/1     Running   2          10m   10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-flannel-ds-amd64-mb9sp        1/1     Running   0          93s   10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-proxy-7p59r                   1/1     Running   0          11m   10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-scheduler-kubeadm1            1/1     Running   2          11m   10.211.55.23   kubeadm1   <none>           <none>
           

Add worker nodes

On each worker CentOS node, run the following command to join it to the master:

[root@kubeadm2 ~]# kubeadm join 10.211.55.23:6443 --token o96ndy.cm77x257291hha2c --discovery-token-ca-cert-hash sha256:40ec5eaf1c50c03e646b73866458e7b5714d36979c049656c4fb294e314893c3
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "10.211.55.23:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.211.55.23:6443"
[discovery] Requesting info from "https://10.211.55.23:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.211.55.23:6443"
[discovery] Successfully established connection with API Server "10.211.55.23:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "kubeadm2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
           
The join command above is printed at the end of the log when the master node is initialized.

  If you see errors about files already existing, a previous join was probably run on this node and never cleaned up. Run kubeadm reset to clean the node first, then join again.
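
The bootstrap token printed by kubeadm init is only valid for 24 hours by default. If it has expired, a hedged sketch to generate a fresh join command on the master:

[root@kubeadm1 ~]# kubeadm token create --print-join-command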

Back on the master, check the Pod and node information (the new node has not finished starting yet):

[root@kubeadm1 ~]# kubectl get pods -n kube-system -o wide
NAME                               READY   STATUS     RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
coredns-86c58d9df4-cxhwd           1/1     Running    0          23m   10.244.0.4     kubeadm1   <none>           <none>
coredns-86c58d9df4-dnfdt           1/1     Running    0          23m   10.244.0.3     kubeadm1   <none>           <none>
etcd-kubeadm1                      1/1     Running    0          22m   10.211.55.23   kubeadm1   <none>           <none>
kube-apiserver-kubeadm1            1/1     Running    0          22m   10.211.55.23   kubeadm1   <none>           <none>
kube-controller-manager-kubeadm1   1/1     Running    0          22m   10.211.55.23   kubeadm1   <none>           <none>
kube-flannel-ds-amd64-4plbl        1/1     Running    0          23m   10.211.55.23   kubeadm1   <none>           <none>
kube-flannel-ds-amd64-vlj7s        0/1     Init:0/1   0          19s   10.211.55.24   kubeadm2   <none>           <none>
kube-proxy-cjhkb                   1/1     Running    0          23m   10.211.55.23   kubeadm1   <none>           <none>
kube-proxy-jrpv7                   1/1     Running    0          19s   10.211.55.24   kubeadm2   <none>           <none>
kube-scheduler-kubeadm1            1/1     Running    0          22m   10.211.55.23   kubeadm1   <none>           <none>

[root@kubeadm1 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE   VERSION
kubeadm1   Ready      master   23m   v1.13.2
kubeadm2   NotReady   <none>   15s   v1.13.2
           

Wait a moment and check again: the Pod status is now Running and the node status is Ready:

[root@kubeadm1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-cxhwd           1/1     Running   0          50m   10.244.0.4     kubeadm1   <none>           <none>
kube-system   coredns-86c58d9df4-dnfdt           1/1     Running   0          50m   10.244.0.3     kubeadm1   <none>           <none>
kube-system   etcd-kubeadm1                      1/1     Running   0          49m   10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-apiserver-kubeadm1            1/1     Running   0          49m   10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-controller-manager-kubeadm1   1/1     Running   0          49m   10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-flannel-ds-amd64-4plbl        1/1     Running   0          50m   10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-flannel-ds-amd64-rp79t        1/1     Running   0          44s   10.211.55.24   kubeadm2   <none>           <none>
kube-system   kube-proxy-cjhkb                   1/1     Running   0          50m   10.211.55.23   kubeadm1   <none>           <none>
kube-system   kube-proxy-gv52t                   1/1     Running   0          44s   10.211.55.24   kubeadm2   <none>           <none>
kube-system   kube-scheduler-kubeadm1            1/1     Running   0          49m   10.211.55.23   kubeadm1   <none>           <none>
[root@kubeadm1 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
kubeadm1   Ready    master   51m    v1.13.2
kubeadm2   Ready    <none>   113s   v1.13.2
           

Tear down the cluster

First remove the node (run on the master):

[root@kubeadm1 ~]# kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
[root@kubeadm1 ~]# kubectl delete node <node name>
           

Clean up the worker node:

[root@kubeadm2 ~]# kubeadm reset
[root@kubeadm2 ~]# rm -rf /etc/kubernetes/
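
kubeadm reset does not flush iptables or IPVS rules (its own output says as much). A hedged cleanup sketch if you want the node fully clean:

[root@kubeadm2 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X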
           

Command reference

Command / Purpose

kubeadm init --kubernetes-version=v1.13.2 --apiserver-advertise-address 10.211.55.23 --pod-network-cidr=10.244.0.0/16

Initialize the k8s cluster.

kubectl get pods --all-namespaces -o wide

List Pod information. --all-namespaces means all namespaces; you can also use -n kube-system to target a specific namespace.

kubectl get nodes

List node information.

kubectl apply -f kube-flannel.yml

Install the Pod network.

kubeadm join 10.211.55.23:6443 --token o96ndy.cm77x257291hha2c --discovery-token-ca-cert-hash sha256:40ec5eaf1c50c03e646b73866458e7b5714d36979c049656c4fb294e314893c3

Join a worker node to the cluster.

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets

Drain a node.

kubectl delete node <node name>

Delete a node.

kubeadm reset

Reset node state (undo kubeadm init/join).

kubectl describe pod <POD NAME> -n kube-system

Show detailed Pod deployment information; used for troubleshooting.

kubectl logs <POD NAME> -n kube-system

Show Pod logs; used for troubleshooting.
