
Kubernetes 1.5.1 Cluster Installation and Deployment Guide, Part 3: Cluster Configuration

Before reading this part, please first read the earlier installments:

Kubernetes 1.5.1 Cluster Installation and Deployment Guide: Basic Components Installation

Kubernetes 1.5.1 Cluster Installation and Deployment Guide: Basic Environment Preparation

Part 3: Cluster Configuration

(1) Master configuration

1. Cluster initialization

rm -r -f /etc/kubernetes/* /var/lib/kubelet/* /var/lib/etcd/*

kubeadm init --api-advertise-addresses=192.168.128.115 --pod-network-cidr 10.245.0.0/16 --use-kubernetes-version v1.5.1

Note: 192.168.128.115 above is the address of my master. This command must not be run twice in a row; if you need to run it again, execute kubeadm reset first.

The output looks like this:

[root@kube ~]# kubeadm init --api-advertise-addresses=192.168.128.115  --pod-network-cidr=10.245.0.0/16

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.

[preflight] Running pre-flight checks

[init] Using Kubernetes version: v1.5.1

[tokens] Generated token: "211c65.e7a44742440e1fad"

[certificates] Generated Certificate Authority key and certificate.

[certificates] Generated API Server key and certificate

[certificates] Generated Service Account signing keys

[certificates] Created keys and certificates in "/etc/kubernetes/pki"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"

[apiclient] Created API client, waiting for the control plane to become ready

[apiclient] All control plane components are healthy after 23.373017 seconds

[apiclient] Waiting for at least one node to register and become ready

Note: if the process hangs at this step for a long time, the Docker images the platform needs have not been pulled yet; see the Basic Components Installation part.

[apiclient] First node is ready after 6.017237 seconds

[apiclient] Creating a test deployment

[apiclient] Test deployment succeeded

[token-discovery] Created the kube-discovery deployment, waiting for it to become ready

[token-discovery] kube-discovery is ready after 3.504919 seconds

[addons] Created essential addon: kube-proxy

[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully! // cluster initialization succeeded

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=211c65.e7a44742440e1fad 192.168.128.115 // IMPORTANT: copy and save this line now, or you will regret it later.
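The init output warns you to keep the join line. One way to make sure the token is not lost is to drop the command into a small script immediately; a minimal sketch (the /tmp/kubeadm-join.sh path is my own choice, and the token is the one from the output above):

```shell
# Save the join command printed by kubeadm init so the token is not lost.
# The file path is arbitrary; copy the file somewhere persistent as well.
echo 'kubeadm join --token=211c65.e7a44742440e1fad 192.168.128.115' > /tmp/kubeadm-join.sh
chmod +x /tmp/kubeadm-join.sh
cat /tmp/kubeadm-join.sh
```

Later, each node only needs to run this script to join the cluster.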

(2) Join each node to the Kubernetes cluster

Run the following command on every node:

kubeadm join --token=211c65.e7a44742440e1fad 192.168.128.115

The output is as follows:

[root@kube ~]# kubeadm join --token=211c65.e7a44742440e1fad 192.168.128.115

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.

[preflight] Running pre-flight checks

[tokens] Validating provided token

[discovery] Created cluster info discovery client, requesting info from http://192.168.128.115:9898/cluster-info/v1/?token-id=60a95a

[discovery] Cluster info object received, verifying signature using given token

[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.128.115:6443]

[bootstrap] Trying to connect to endpoint https://192.168.128.115:6443

[bootstrap] Detected server version: v1.5.1

[bootstrap] Successfully established connection with endpoint https://192.168.128.115:6443 

[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request

[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:k8s-node1 | CA: false
Not before: 2016-12-23 07:06:00 +0000 UTC Not after: 2017-12-23 07:06:00 +0000 UTC

[csr] Generating kubelet configuration

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Check the result on the master:

[root@kube ~]# kubectl get node

NAME          STATUS         AGE

kube.master   Ready,master   12d

kube.node1    Ready          12d

kube.node2    Ready          12d
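If you script this check, you can count the Ready entries instead of eyeballing the table; a sketch with the sample output above inlined (on a real cluster, pipe `kubectl get node` into the same filter):

```shell
# Count nodes reporting Ready by parsing `kubectl get node`-style output.
# The sample below is inlined so the snippet runs without a cluster.
nodes='NAME          STATUS         AGE
kube.master   Ready,master   12d
kube.node1    Ready          12d
kube.node2    Ready          12d'
ready=$(printf '%s\n' "$nodes" | tail -n +2 | grep -c 'Ready')
echo "Ready nodes: $ready"
```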

(3) On the master, remove the dedicated taint (so pods can be scheduled)

[root@kube ~]# kubectl taint nodes --all dedicated-

taint key="dedicated" and effect="" not found.

(4) On the master, deploy the Weave network to enable cross-host container communication

The official command is: kubectl create -f https://git.io/weave-kube

Because of network restrictions that command usually fails here, so do this instead:

[root@kube ~]# wget https://git.io/weave-kube -O weave-kube.yaml  // download the manifest

[root@kube ~]# kubectl create -f weave-kube.yaml  // create the Weave network

[root@kube ~]# kubectl get pods -o wide -n kube-system  // check that the network pods start

(5) Configure the dashboard on the master

1. Download the yaml file

[root@kube ~]# wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml -O kubernetes-dashboard.yaml

2. Edit the yaml file

[root@kube ~]# vi kubernetes-dashboard.yaml

Change imagePullPolicy: Always to imagePullPolicy: IfNotPresent
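If you would rather not open vi, the same one-line change can be made with sed; a sketch (a sample file is created under /tmp only so the snippet is self-contained; in practice, run the sed line against the kubernetes-dashboard.yaml you just downloaded):

```shell
# Create a one-line sample standing in for the downloaded manifest.
printf '        imagePullPolicy: Always\n' > /tmp/kubernetes-dashboard.yaml
# Flip the pull policy so the locally pre-pulled image is used.
sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/' /tmp/kubernetes-dashboard.yaml
cat /tmp/kubernetes-dashboard.yaml
```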

3. Deploy the dashboard

[root@kube ~]# kubectl create -f kubernetes-dashboard.yaml

deployment "kubernetes-dashboard" created

service "kubernetes-dashboard" created

4. Check that the dashboard pod is running

[root@kube ~]# kubectl get pod --namespace=kube-system

NAME                      READY     STATUS    RESTARTS   AGE

dummy-2088944543-pwdw2          1/1       Running   0          3h

etcd-kube.master              1/1       Running   0          3h

kube-apiserver-kube.master        1/1       Running   0          3h

kube-controller-manager-kube.master  1/1       Running   0          3h

kube-discovery-982812725-rj6te     1/1       Running   0          3h

kube-dns-2247936740-9g51a        3/3       Running   1          3h

kube-proxy-amd64-i1shn          1/1       Running   0          3h

kube-proxy-amd64-l3qrg          1/1       Running   0          2h

kube-proxy-amd64-yi1it           1/1       Running   0          3h

kube-scheduler-kube.master         1/1       Running   0          3h

kubernetes-dashboard-3000474083-6kwqs 1/1       Running   0          15s

weave-net-f89j7                2/2       Running   0          32m

weave-net-q0h18                 2/2       Running   0          32m

weave-net-xrfry                 2/2       Running   0          32m

Note: if the kubernetes-dashboard pod keeps restarting, restarting the whole cluster fixed it for me (start the nodes first, the master last). I don't know why; explanations from experts are welcome.

5. Find the kubernetes-dashboard service's external access port

[root@kube ~]# kubectl describe svc kubernetes-dashboard --namespace=kube-system

Name:			kubernetes-dashboard

Namespace:		kube-system

Labels:			app=kubernetes-dashboard

Selector:		app=kubernetes-dashboard

Type:			NodePort

IP:			10.13.114.76

Port:			<unset>	80/TCP

NodePort:		<unset>	30435/TCP // external access port

Endpoints:		10.38.0.2:9090

Session Affinity:	None

No events.

At this point you can reach kubernetes-dashboard at NodeIP:NodePort.
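Building the access URL from the describe output can be scripted; a sketch with the NodePort line inlined as sample data (on the master you would substitute the real output of `kubectl describe svc kubernetes-dashboard --namespace=kube-system`):

```shell
# Extract the NodePort number from a `kubectl describe svc` NodePort line.
nodeport_line='NodePort:		<unset>	30435/TCP'
port=$(printf '%s\n' "$nodeport_line" | grep -oE '[0-9]+/TCP' | cut -d/ -f1)
# Print the dashboard URL using the master address from this guide.
echo "Dashboard: http://192.168.128.115:${port}"
```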

(6) Deploy the third-party open-source monitoring stack Heapster on the master

1. Download the configuration files and upload them to the master

Download the heapster-master package from GitHub, or use the attached influxdb.rar.

In the heapster-master\deploy\kube-config\influxdb directory, locate these six files:

grafana-deployment.yaml

grafana-service.yaml

influxdb-deployment.yaml

influxdb-service.yaml

heapster-deployment.yaml

heapster-service.yaml

2. Create the deployments and services

kubectl create -f grafana-deployment.yaml -f grafana-service.yaml -f influxdb-deployment.yaml -f  influxdb-service.yaml -f heapster-deployment.yaml -f  heapster-service.yaml

3. Check pod status

[root@kube ~]# kubectl get pods -o wide -n kube-system

NAME                                    READY     STATUS    RESTARTS   AGE       IP              NODE

dummy-2088944543-8dql8                  1/1       Running   1          12d       192.168.128.115   kube.master

etcd-kube.master                        1/1       Running   1          12d       192.168.128.115   kube.master

heapster-3901806196-hsv2s               1/1       Running   1          12d       10.46.0.4       kube.node2

kube-apiserver-kube.master              1/1       Running   1          12d       192.168.128.115   kube.master

kube-controller-manager-kube.master     1/1       Running   1          12d       192.168.128.115   kube.master

kube-discovery-1769846148-j8nwk         1/1       Running   1          12d       192.168.128.115   kube.master

kube-dns-2924299975-vdp8s               4/4       Running   4          12d       10.40.0.2       kube.master

kube-proxy-5mkkz                        1/1       Running   1          12d       192.168.128.115   kube.master

kube-proxy-8ggq0                        1/1       Running   1          12d       192.168.128.117   kube.node2

kube-proxy-tdd7m                        1/1       Running   2          12d       192.168.128.116   kube.node1

kube-scheduler-kube.master              1/1       Running   1          12d       192.168.128.115   kube.master

kubernetes-dashboard-3000605155-gr6ll   1/1       Running   0          4d        10.46.0.12      kube.node2

monitoring-grafana-810108360-2nfb7      1/1       Running   1          12d       10.46.0.3       kube.node2

monitoring-influxdb-3065341217-tzhfj    1/1       Running   0          4d        10.46.0.13      kube.node2

weave-net-98jjb                         2/2       Running   5          12d       192.168.128.116   kube.node1

weave-net-h15r5                         2/2       Running   2          12d       192.168.128.115   kube.master

weave-net-rcr6x                         2/2       Running   2          12d       192.168.128.117   kube.node2

4. Find the external service ports

Check the monitoring-grafana service port:

[root@kube ~ heapster]# kubectl get svc --namespace=kube-system

NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE

heapster               10.98.45.1      <none>        80/TCP          1h

kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   2h

kubernetes-dashboard   10.108.45.66    <nodes>       80:32155/TCP    1h

monitoring-grafana     10.97.110.225   <nodes>       80:30687/TCP    1h

monitoring-influxdb    10.96.175.67    <none>        8086/TCP        1h
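The Grafana NodePort can be pulled out of the `kubectl get svc` listing the same way; a sketch with the monitoring-grafana line above inlined as sample data (pipe the real command output instead on the master):

```shell
# Extract the NodePort (the number after the colon in 80:30687/TCP).
svc_line='monitoring-grafana     10.97.110.225   <nodes>       80:30687/TCP    1h'
port=$(printf '%s\n' "$svc_line" | sed -E 's|.*[0-9]+:([0-9]+)/TCP.*|\1|')
echo "Grafana: http://<node-ip>:${port}"
```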
