
Building a Kubernetes Cluster + Istio Service Mesh from Scratch (2): Building the Kubernetes Cluster (win10 + virtualbox6.0 + centos7.6.1810 + docker18.09.8 + kubernetes1.15.1 + istio1.2.3)


References for this article:

https://www.jianshu.com/p/e43f5e848da1

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

https://www.jianshu.com/p/1aebf568b786

https://blog.csdn.net/donglynn/article/details/47784393

https://blog.csdn.net/MC_CodeGirl/article/details/79998656

https://blog.csdn.net/andriy_dangli/article/details/85062983

https://docs.projectcalico.org/v3.8/getting-started/kubernetes/installation/calico

https://www.jianshu.com/p/70efa1b853f5

https://blog.csdn.net/weixin_44723434/article/details/94583457

https://preliminary.istio.io/zh/docs/setup/kubernetes/download/

https://www.cnblogs.com/rickie/p/istio.html

https://blog.csdn.net/lwplvx/article/details/79192182

https://blog.csdn.net/qq_36402372/article/details/82991098

https://www.cnblogs.com/assion/p/11326088.html

http://www.lampnick.com/php/823

https://blog.csdn.net/ccagy/article/details/83059349

https://www.jianshu.com/p/789bc867feaa

https://www.jianshu.com/p/dde56c521078

This series has three parts. Part 1 covers creating the virtual machines and installing Docker, Kubernetes, and the rest of the base infrastructure; Part 2 builds a three-node Kubernetes cluster on top of that; Part 3 deploys the Istio service mesh on the cluster.

This article draws heavily on the excellent work of other authors (listed at the top). Starting from zero, I built up the Istio service mesh step by step, and every step is documented in detail here. I chose to rebuild everything from scratch for two reasons: first, many tutorials on CSDN, Jianshu and other platforms are now (as of 2019-08-14) quite dated and no longer apply, so simply following them leads into a lot of pitfalls; second, there is no substitute for hands-on practice.

Since I am also just starting to learn the Istio service mesh, my knowledge is limited and there will inevitably be shortcomings; please bear with me.

1 Preface

The three virtual machines were already created in the previous part.

The IP addresses of the three nodes are:

Node2:192.168.56.101
Node3:192.168.56.102
master:192.168.56.103
           

Each of them is already connected via Xshell. Now we can start building the Kubernetes cluster.

2 Building the cluster, step 1: initialization

Run the following on the master node (k8s-master):

kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.56.103
           

1. The option --pod-network-cidr=192.168.0.0/16 means the cluster will use the Calico network; Calico's subnet range has to be specified up front.

2. The option --kubernetes-version=v1.15.1 pins the Kubernetes version.

3. The option --apiserver-advertise-address sets the interface IP the API server advertises on. Make sure to bind it to the enp0s8 interface mentioned earlier, otherwise the enp0s3 interface is used by default.

4. If kubeadm init fails or is forcibly terminated, run kubeadm reset first to reset the node before running init again (a sketch follows below).
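
A minimal sketch of the reset-then-retry sequence mentioned in point 4, run as root on the master (the init flags are the same as above):

kubeadm reset -f    # wipe the state left behind by the failed init; -f skips the confirmation prompt
kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.56.103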

Then another problem showed up, as follows:

[root@k8s_master centos_master]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.56.103
name: Invalid value: "k8s_master": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
           

2.1 Fixing the hostname

After some checking, the cause is that the hostname must not contain an underscore (_). So I had to go back and rename all three hosts following the earlier steps (a hostnamectl sketch follows the list below) and then reboot the VMs. The new names are:

k8s-master:192.168.56.103
k8s-node2:192.168.56.101
k8s-node3:192.168.56.102
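
One way to apply the rename is hostnamectl (a sketch; the exact steps from part 1 of this series may differ, and a reboot is still needed afterwards):

# run the matching command on each VM as root
hostnamectl set-hostname k8s-master   # on 192.168.56.103
hostnamectl set-hostname k8s-node2    # on 192.168.56.101
hostnamectl set-hostname k8s-node3    # on 192.168.56.102
# also update /etc/hosts if the old names appear there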
           

2.2 Changing the cgroup driver

Try again, and it fails with another error...

[root@k8s-master centos_master]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.56.103
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
           

The first warning appears because the recommended driver is not the one we installed earlier. Run

docker info

to check Docker's current driver, as shown below.

(screenshot: docker info output showing the current cgroup driver)
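
If you only want the relevant line rather than the full docker info dump, a grep sketch:

docker info 2>/dev/null | grep -i "cgroup driver"    # at this point it should still report cgroupfs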

Previously, both Docker's cgroup driver and kubelet's cgroup driver were cgroupfs; the two must stay in sync, so here we change both of them to systemd.

2.2.1 Change Docker's cgroup driver

Edit or create /etc/docker/daemon.json and add the following content:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
           

Then restart Docker (or simply reboot the VM).

I started editing the file and hit an error.

(screenshot: vim warning about an existing swap file)

This happens because when vim opens a file it creates a .swp swap file to protect the data. If the editor is closed normally the swap file is removed; if it is closed abnormally the file is left behind, and vim then warns that a .swp file exists.

Go into the directory and list the hidden files:

ls -a

then delete the leftover .swp file:

rm .daemon.json.swp

After that, editing the file no longer triggers the warning.

It still failed, though, this time with an error saying that daemon.json is a directory and cannot be edited. After a long search it turned out I had created it the wrong way: I had used

mkdir

to create it, but mkdir creates a directory, not a file! The correct way is to create it with vi, i.e.

vi /etc/docker/daemon.json

and write the content into it directly. This step cost a lot of time, but it also deepened my understanding of Linux.
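
For reference, a non-interactive way to recover from the same mistake is to remove the directory that mkdir created and write the file in one step (a sketch, run as root):

rm -rf /etc/docker/daemon.json        # delete the directory created by mistake
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF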

Once the file has been handled correctly, restart Docker:

systemctl daemon-reload
systemctl restart docker
           

Check again: the change took effect.

(screenshot: docker info now reporting the systemd cgroup driver)

2.2.2 Change kubelet's cgroup driver

Most tutorials recommend the following:

(screenshot: the configuration path recommended by most other tutorials)

In practice, however, that location turned out to be empty. It eventually became clear that, probably because our version is the newer 1.15.1, the path should instead be:

/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
           

The file looks like this:

[root@k8s-master centos_master]# vi /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
~                                                                                                                      
~                                                                                                                      
~                                       
           

Then, at the end of the line

Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"

append

--cgroup-driver=systemd

so that it becomes

Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd"

Then restart kubelet:

systemctl daemon-reload
systemctl restart kubelet
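
If you would rather script the edit than open vi, a sed sketch that appends the flag to that Environment line (the file path matches the one above; run it as root, then repeat the daemon-reload and restart shown above):

sed -i 's|--kubeconfig=/etc/kubernetes/kubelet.conf"|--kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd"|' /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
grep cgroup-driver /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf    # verify the flag is now present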
           

2.2.3 Make the same cgroup driver changes on the other two VMs
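
A sketch of pushing both changes to the other two VMs over SSH, assuming root SSH access from the master and that the hostnames resolve (for example via /etc/hosts); otherwise simply repeat sections 2.2.1 and 2.2.2 by hand on each node:

for node in k8s-node2 k8s-node3; do
  scp /etc/docker/daemon.json ${node}:/etc/docker/daemon.json
  scp /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf ${node}:/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
  ssh ${node} 'systemctl daemon-reload && systemctl restart docker kubelet'
done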

2.3 Upgrading the VM specs

The first of the earlier errors is gone; now for the next one.

[root@k8s-master centos_master]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.56.103
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
           

This is because the VM is under-provisioned. Looking back at the VM configuration:

(screenshot: VirtualBox settings showing the VM has only 1 CPU)

Sure enough, it has only one CPU, so the VM has to be shut down and its configuration upgraded.

(screenshots: VirtualBox System settings, raising the processor count to 2)

I wondered whether giving each VM three CPUs would be even better, but the host's total of 8 CPUs made it clear that was not realistic...

Finally, do the same for the other two VMs, giving each one 2 CPUs (a VBoxManage sketch follows below).
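
If you prefer the command line to the VirtualBox GUI, the same change can be made from the Windows host with VBoxManage (a sketch; the VM names here are assumptions, substitute whatever you called your VMs, and each VM must be powered off first):

VBoxManage modifyvm "k8s-master" --cpus 2
VBoxManage modifyvm "k8s-node2" --cpus 2
VBoxManage modifyvm "k8s-node3" --cpus 2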

2.4 Initializing the cluster again

[root@k8s-master centos_master]# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.15.1 --apiserver-advertise-address=192.168.56.103
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.103]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.103 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.103 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.503142 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4lomz9.l7dq7yewuiuo7j6r
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.103:6443 --token 4lomz9.l7dq7yewuiuo7j6r \
    --discovery-token-ca-cert-hash sha256:a7c45b56471cfa88e897007b74618bae0c16bf4179d98710622bc64e3f0f6ed4 

           

Success!

2.5 Interpreting the output

As you can see, the output confirms that the cluster initialized successfully and tells us to run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
           

It also reminds us that we still need to deploy a pod network:

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/

and shows how other nodes can join the cluster:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.103:6443 --token 4lomz9.l7dq7yewuiuo7j6r --discovery-token-ca-cert-hash sha256:a7c45b56471cfa88e897007b74618bae0c16bf4179d98710622bc64e3f0f6ed4

3 Building the cluster, step 2: configuring kubectl

First, run the commands from the init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
           

然後檢視一下pod狀态,果然沒有建立網絡的話,dns相關元件是堵塞狀态。

[root@k8s-master centos_master]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-dnb85             0/1     Pending   0          60m
coredns-5c98db65d4-jhdsl             0/1     Pending   0          60m
etcd-k8s-master                      1/1     Running   0          59m
kube-apiserver-k8s-master            1/1     Running   0          59m
kube-controller-manager-k8s-master   1/1     Running   0          59m
kube-proxy-78k2m                     1/1     Running   0          60m
kube-scheduler-k8s-master            1/1     Running   0          59m
           

4 Building the cluster, step 3: setting up the network

Setting up the network also meant wading through a lot of outdated tutorials and misleading explanations, so to keep things clear I will simply list the steps that work.

Following the official documentation:

https://docs.projectcalico.org/v3.8/getting-started/kubernetes/

(1) This step was already done during initialization, so skip it.

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
           

(2) This step is also already done; skip it.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
           

(3) Run the following command:

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
           

You should then see output like this:

configmap "calico-config" created
customresourcedefinition.apiextensions.k8s.io "felixconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ipamblocks.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "blockaffinities.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ipamhandles.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "bgppeers.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "bgpconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ippools.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "hostendpoints.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "clusterinformations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "globalnetworkpolicies.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "globalnetworksets.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "networksets.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "networkpolicies.crd.projectcalico.org" created
clusterrole.rbac.authorization.k8s.io "calico-kube-controllers" created
clusterrolebinding.rbac.authorization.k8s.io "calico-kube-controllers" created
clusterrole.rbac.authorization.k8s.io "calico-node" created
clusterrolebinding.rbac.authorization.k8s.io "calico-node" created
daemonset.extensions "calico-node" created
serviceaccount "calico-node" created
deployment.extensions "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created
           

(4) Next, run the following command to watch the pods:

watch kubectl get pods --all-namespaces
           

直到每一個pod的狀态都變成running。

In the screenshot below, the arrow marks the status column; once everything in that column has turned to Running,

(screenshot: watch output with some pods not yet Running)

the output ends up looking like this:

(screenshot: watch output with every pod Running)
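
Instead of eyeballing the watch output, you can also block until Calico reports ready; a sketch using the resource names from the apply output above:

kubectl -n kube-system rollout status daemonset/calico-node
kubectl -n kube-system rollout status deployment/calico-kube-controllers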

By default Kubernetes does not schedule pods onto the master, which wastes the master's resources, so we allow the master node to be scheduled as well:

[root@k8s-master centos_master]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-master untainted
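
If you later decide you do not want regular workloads on the master after all, the taint can be put back; a sketch:

kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule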
           

Use the following command to confirm that the cluster now has one node:

kubectl get nodes -o wide
           

which returns:

NAME         STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-master   Ready    master   2d22h   v1.15.1   192.168.56.103   <none>        CentOS Linux 7 (Core)   3.10.0-957.21.3.el7.x86_64   docker://18.9.8
           

Congratulations! We now have a single-host Kubernetes cluster with Calico.

5 Building the cluster, step 4: joining the other nodes

On the other two nodes, k8s-node2 and k8s-node3, join the cluster by running the kubeadm join command generated on the master:

kubeadm join 192.168.56.103:6443 --token 4lomz9.l7dq7yewuiuo7j6r --discovery-token-ca-cert-hash sha256:a7c45b56471cfa88e897007b74618bae0c16bf4179d98710622bc64e3f0f6ed4
           

This fails:

[root@k8s-node2 centos_master]# kubeadm join 192.168.56.103:6443 --token 4lomz9.l7dq7yewuiuo7j6r --discovery-token-ca-cert-hash sha256:a7c45b56471cfa88e897007b74618bae0c16bf4179d98710622bc64e3f0f6ed4
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s
           

Cause: tokens are valid for 24 hours by default, and the master's token has expired.

Fix: create a new token.

(1) On the master node, create a new token:

[root@k8s-master centos_master]# kubeadm token create
amrjle.m6zuzmlcfim6ntdk
           

(2) On the master node, get the SHA-256 hash of the CA certificate:

[root@k8s-master centos_master]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
a7c45b56471cfa88e897007b74618bae0c16bf4179d98710622bc64e3f0f6ed4
           

(3) Assemble the new kubeadm join command for adding nodes:

kubeadm join 192.168.56.103:6443 --token amrjle.m6zuzmlcfim6ntdk --discovery-token-ca-cert-hash sha256:a7c45b56471cfa88e897007b74618bae0c16bf4179d98710622bc64e3f0f6ed4
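
As a shortcut, kubeadm can generate a fresh token and print the complete join command in a single step, which avoids assembling it by hand; a sketch (run on the master):

kubeadm token create --print-join-command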
           

(4) Join the node again, using node2 as the example:

[root@k8s-node2 centos_master]# kubeadm join 192.168.56.103:6443 --token amrjle.m6zuzmlcfim6ntdk --discovery-token-ca-cert-hash sha256:a7c45b56471cfa88e897007b74618bae0c16bf4179d98710622bc64e3f0f6ed4
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
           

Success! node3 is joined the same way.

6 Cluster build complete

将所有節點加入叢集後,在master節點(主節點)上,運作

kubectl get nodes

檢視叢集狀态如下

[root@k8s-master centos_master]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   2d23h   v1.15.1
k8s-node2    Ready    <none>   3m38s   v1.15.1
k8s-node3    Ready    <none>   3m28s   v1.15.1
           

另外,可檢視所有pod狀态,運作

kubectl get pods -n kube-system

[root@k8s-master centos_master]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7bd78b474d-4vgnh   1/1     Running   0          41h
calico-node-p59hr                          1/1     Running   0          41h
calico-node-rdcqs                          1/1     Running   0          4m46s
calico-node-sc79x                          1/1     Running   0          4m56s
coredns-5c98db65d4-dnb85                   1/1     Running   0          2d23h
coredns-5c98db65d4-jhdsl                   1/1     Running   0          2d23h
etcd-k8s-master                            1/1     Running   2          2d23h
kube-apiserver-k8s-master                  1/1     Running   2          2d23h
kube-controller-manager-k8s-master         1/1     Running   2          2d23h
kube-proxy-78k2m                           1/1     Running   2          2d23h
kube-proxy-n9ggl                           1/1     Running   0          4m46s
kube-proxy-zvglw                           1/1     Running   0          4m56s
kube-scheduler-k8s-master                  1/1     Running   2          2d23h
           

As shown, all pods are Running, which means the cluster is healthy.

Note:

Kubernetes also ships an official dashboard, but since we mostly work from the command line and the next step is to set up Istio, I will not cover it here; there are plenty of tutorials on installing it. The end result looks like this:

(screenshot: the Kubernetes dashboard)

7 Summary

At this point the Kubernetes cluster is fully up and running! In the next part we will build the Istio service mesh on top of it.
