
Deploying a Kubernetes Cluster with kubeadm [v1.18.0]

0. Base Environment

IP Address   Hostname      Role
10.0.0.63 k8s-master1 master1
10.0.0.65 k8s-node1 node1
10.0.0.66 k8s-node2 node2

1. Overview

kubeadm is the tool released by the official community for quickly deploying a Kubernetes cluster.

The resulting environment is suitable for learning and experimenting with Kubernetes software and features.

2. Installation Requirements

3 clean CentOS virtual machines, version 7.x or later
Machine specs: at least 2 CPU cores and 4 GB RAM, x3
Network connectivity between the servers
Swap disabled

3. Learning Goal

Learn how to install a cluster with kubeadm, as a convenient base for studying Kubernetes.

4. Environment Preparation

# 1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# 2. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config 
setenforce 0

# 3. Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent


# 4. Host planning (/etc/hosts)
cat > /etc/hosts << EOF
10.0.0.63 k8s-master1
#10.0.0.64 k8s-master2
10.0.0.65 k8s-node1
10.0.0.66 k8s-node2
EOF

#5. Hostname configuration (run the matching command on each node):
 hostnamectl set-hostname  k8s-master1
 bash

#6. Time synchronization
yum install -y ntpdate
ntpdate time.windows.com

# Pass bridged traffic to iptables (bridge netfilter)
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

#7. Periodic time sync via cron
echo '*/5 * * * * /usr/sbin/ntpdate -u ntp.api.bz' >>/var/spool/cron/root
systemctl restart crond.service
crontab -l
# Everything above can be copy-pasted and run as-is, except the hostname step, which must be adjusted per node
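
A quick sanity check before moving on (my own addition, not part of the original steps; standard CentOS tooling) confirms that each preparation step actually took effect:

getenforce                                   # expect Permissive or Disabled
free -m | grep -i swap                       # the Swap line should show 0 total and 0 used
modprobe br_netfilter                        # load the bridge module if the sysctl key below is missing
sysctl net.bridge.bridge-nf-call-iptables    # expect: net.bridge.bridge-nf-call-iptables = 1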

5. Install Docker [all nodes]

# Add yum repositories
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/repo/epel-7.repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum clean all
yum install -y bash-completion.noarch

# Install a specific version
yum -y install docker-ce-18.09.9-3.el7

# Alternatively, list the available versions and pick one
yum list docker-ce --showduplicates | sort -r

# Start and enable Docker
systemctl enable docker
systemctl start docker
systemctl status docker      

6. Configure the Docker cgroup driver [all nodes]

rm -f /etc/docker/*
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
systemctl enable docker.service
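
Before continuing, it is worth confirming that Docker actually picked up the systemd cgroup driver (a quick check, not from the original notes):

docker info | grep -i 'cgroup driver'    # should print: Cgroup Driver: systemd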
  
Pull the flannel image:
  docker pull lizhenliang/flannel:v0.11.0-amd64      

7. Registry mirror acceleration [all nodes]

curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
systemctl restart docker

# Too many mirror sources can cause errors. If something goes wrong, remove one of the .bak repo files and try again.
# Keep: curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io

Alternatively, use the Aliyun accelerator; just add the accelerator address from the Aliyun console:
https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors      

8. Kubernetes repository configuration [all nodes]

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF      

9. Install kubeadm, kubelet, and kubectl [all nodes]

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet      
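
Optionally verify that the expected 1.18.0 packages were installed (kubelet keeps restarting until kubeadm init/join runs, so only check versions here):

kubeadm version -o short          # expect v1.18.0
kubectl version --client --short  # expect v1.18.0
kubelet --version                 # expect Kubernetes v1.18.0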

10. Deploy the Kubernetes master [master 10.0.0.63]

kubeadm init \
  --apiserver-advertise-address=10.0.0.63 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
  
# After a successful init, configure kubeconfig [master]:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config      
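
A quick way to confirm the control plane answers (my addition; the nodes will still show NotReady until the flannel step below is done):

kubectl cluster-info              # the API server should answer at https://10.0.0.63:6443
kubectl get pods -n kube-system   # control-plane pods; coredns stays Pending until the network plugin is installed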

The init output provides the join command and token:

kubeadm join 10.0.0.63:6443 --token 2cdgi6.79j20fhly6xpgfud \
    --discovery-token-ca-cert-hash sha256:3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4

Save the token; it will be used later.

Note:

W0507 00:43:52.681429    3118 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

10.1 Error handling

Error 1: the Docker cgroup driver must be changed to systemd. Add to /etc/docker/daemon.json:   "exec-opts": ["native.cgroupdriver=systemd"]

Error 2: [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

This error means the CPU allocation is too small; change the VM to at least 2 cores and 4 GB RAM.


Error 3: joining the cluster fails with:

W0507 01:19:49.406337   26642 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master2 yum.repos.d]# kubeadm join 10.0.0.63:6443 --token q8bfij.fipmsxdgv8sgcyq4 \
>     --discovery-token-ca-cert-hash sha256:26fc15b6e52385074810fdbbd53d1ba23269b39ca2e3ec3bac9376ed807b595c
>     --discovery-token-ca-cert-hash sha256:26fc15b6e52385074810fdbbd53d1ba23269b39ca2e3ec3bac9376ed807b595c
W0507 01:20:26.246981   26853 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


Solution:
Run kubeadm reset on the node, then join again.

10.2 Configure the kubectl command-line tool [master]

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Get node information
# kubectl get nodes

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
k8s-master1   NotReady   master   2m59s   v1.18.0
k8s-node1     NotReady   <none>   86s     v1.18.0
k8s-node2     NotReady   <none>   85s     v1.18.0

# The status of the other hosts is visible, which shows the cluster is assembled. The other host, k8s-master2, is not joined here; it is reserved for a future multi-master setup.

10.3 Install the network plugin [master]

[Run on the master only] Upload kube-flannel.yaml and apply it:
kubectl apply -f kube-flannel.yaml
kubectl get pods -n kube-system

Download address:
https://www.chenleilei.net/soft/k8s/kube-flannel.yaml
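
For reference, the Pod network defined in kube-flannel.yaml must match the --pod-network-cidr given to kubeadm init. The upstream flannel manifest usually carries a net-conf.json fragment like the one below; confirm it against the file you actually downloaded:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }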

[All pods must be Running; otherwise something is wrong.]
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-5dq4s              1/1     Running   0          13m
coredns-7ff77c879f-v68pc              1/1     Running   0          13m
etcd-k8s-master1                      1/1     Running   0          13m
kube-apiserver-k8s-master1            1/1     Running   0          13m
kube-controller-manager-k8s-master1   1/1     Running   0          13m
kube-flannel-ds-amd64-2ktxw           1/1     Running   0          3m45s
kube-flannel-ds-amd64-fd2cb           1/1     Running   0          3m45s
kube-flannel-ds-amd64-hb2zr           1/1     Running   0          3m45s
kube-proxy-4vt8f                      1/1     Running   0          13m
kube-proxy-5nv5t                      1/1     Running   0          12m
kube-proxy-9fgzh                      1/1     Running   0          12m
kube-scheduler-k8s-master1            1/1     Running   0          13m


[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    master   14m   v1.18.0
k8s-node1     Ready    <none>   12m   v1.18.0
k8s-node2     Ready    <none>   12m   v1.18.0      

11. Join node1 and node2 to the master

Cluster join steps for node1 and node2

Run the following command on each node that should join:
kubeadm join 10.0.0.63:6443 --token fs0uwh.7yuiawec8tov5igh \
    --discovery-token-ca-cert-hash sha256:471442895b5fb77174103553dc13a4b4681203fbff638e055ce244639342701d
    
# This command was printed during master initialization. Note that the CNI network plugin must be configured first.
# After a successful join, verify from the master node:
[root@k8s-master1 docker]#  kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    master   14m   v1.18.0
k8s-node1     Ready    <none>   12m   v1.18.0
k8s-node2     Ready    <none>   12m   v1.18.0      

12. Token creation and lookup

By default a token is valid for 24 hours and can no longer be used after it expires. To create a new token, run the following on the master node:

kubeadm token create
kubeadm token list
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Result:
3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4


Joining the cluster with the new token:
kubeadm join 10.0.0.63:6443 --discovery-token nuja6n.o3jrhsffiqs9swnu --discovery-token-ca-cert-hash sha256:3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4
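
A simpler alternative is to let kubeadm print a complete join command together with a fresh token in one step (standard kubeadm option):

kubeadm token create --print-join-command
# example output (token and hash will differ):
# kubeadm join 10.0.0.63:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>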

13. Install the Dashboard UI

wget https://www.chenleilei.net/soft/k8s/dashboard.yaml
kubectl apply -f dashboard.yaml
[root@k8s-master1 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.1.94.43     <none>        8000/TCP        7m58s
kubernetes-dashboard        NodePort    10.1.187.162   <none>        443:30001/TCP   7m58s      

13.1 Access test

Port 30001 on any node in the cluster (10.0.0.63, 10.0.0.65, 10.0.0.66) serves the Dashboard page.
[Screenshot: Dashboard login page]

13.2 Get the Dashboard token, i.e. create a service account and bind it to the default cluster-admin cluster role

# kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')

Paste the copied token into the token field on the login page shown above and sign in with the token option.

14. Verify that the cluster works correctly

There are three aspects to verifying cluster health:
1. Applications can be deployed normally
2. The cluster network works
3. In-cluster DNS resolution works

14.1 Verify application deployment and log queries

# Create an nginx deployment
kubectl create deployment  k8s-status-checke --image=nginx
# Expose port 80
kubectl expose deployment k8s-status-checke --port=80  --target-port=80 --type=NodePort

# Delete this deployment
kubectl delete deployment k8s-status-checke

# Query the logs:
[root@k8s-master1 ~]# kubectl logs -f nginx-f89759699-m5k5z      
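
To reach the exposed service from outside, look up the NodePort assigned by the expose command (run this before the delete step above); the port shown below is only a placeholder:

kubectl get svc k8s-status-checke
# NAME                TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
# k8s-status-checke   NodePort   10.1.x.x     <none>        80:3xxxx/TCP   1m
curl -I http://10.0.0.65:3xxxx    # replace 3xxxx with the NodePort shown above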

14.2 Verify the cluster network

1. Get a pod IP
[root@k8s-master1 ~]# kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
pod/nginx   1/1    Running   0          25h   10.244.2.18   k8s-node2   <none>      <none>

2. Ping this pod IP from any node
[root@k8s-node1 ~]# ping 10.244.2.18
PING 10.244.2.18 (10.244.2.18) 56(84) bytes of data.
64 bytes from 10.244.2.18: icmp_seq=1 ttl=63 time=2.63 ms
64 bytes from 10.244.2.18: icmp_seq=2 ttl=63 time=0.515 ms

3. Access the pod over HTTP
[root@k8s-master1 ~]# curl -I 10.244.2.18
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Sun, 10 May 2020 13:19:02 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes

4. Check the logs
[root@k8s-master1 ~]# kubectl logs -f nginx
10.244.1.0 - - [10/May/2020:13:14:25 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36" "-"      

14.3 Verify in-cluster DNS resolution

Check DNS:
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-5dq4s              1/1     Running   1          4d   # DNS occasionally has problems
coredns-7ff77c879f-v68pc              1/1     Running   1          4d   # DNS occasionally has problems
etcd-k8s-master1                      1/1     Running   4          4d
kube-apiserver-k8s-master1            1/1     Running   3          4d
kube-controller-manager-k8s-master1   1/1     Running   3          4d
kube-flannel-ds-amd64-2ktxw           1/1     Running   1          4d
kube-flannel-ds-amd64-fd2cb           1/1     Running   1          4d
kube-flannel-ds-amd64-hb2zr           1/1     Running   4          4d
kube-proxy-4vt8f                      1/1     Running   4          4d
kube-proxy-5nv5t                      1/1     Running   2          4d
kube-proxy-9fgzh                      1/1     Running   2          4d
kube-scheduler-k8s-master1            1/1     Running   4          4d

# If DNS has problems, fix it as follows:
1. Export the yaml
kubectl get deploy coredns -n kube-system -o yaml >coredns.yaml
2. Delete coredns
kubectl delete -f coredns.yaml

Check:
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
etcd-k8s-master1                      1/1     Running   4          4d
kube-apiserver-k8s-master1            1/1     Running   3          4d
kube-controller-manager-k8s-master1   1/1     Running   3          4d
kube-flannel-ds-amd64-2ktxw           1/1     Running   1          4d
kube-flannel-ds-amd64-fd2cb           1/1     Running   1          4d
kube-flannel-ds-amd64-hb2zr           1/1     Running   4          4d
kube-proxy-4vt8f                      1/1     Running   4          4d
kube-proxy-5nv5t                      1/1     Running   2          4d
kube-proxy-9fgzh                      1/1     Running   2          4d
kube-scheduler-k8s-master1            1/1     Running   4          4d

coredns has been deleted

3. Recreate coredns
kubectl apply -f coredns.yaml
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-5mmjg              1/1     Running   0          13s
coredns-7ff77c879f-t74th              1/1     Running   0          13s
etcd-k8s-master1                      1/1     Running   4          4d
kube-apiserver-k8s-master1            1/1     Running   3          4d
kube-controller-manager-k8s-master1   1/1     Running   3          4d
kube-flannel-ds-amd64-2ktxw           1/1     Running   1          4d
kube-flannel-ds-amd64-fd2cb           1/1     Running   1          4d
kube-flannel-ds-amd64-hb2zr           1/1     Running   4          4d
kube-proxy-4vt8f                      1/1     Running   4          4d
kube-proxy-5nv5t                      1/1     Running   2          4d
kube-proxy-9fgzh                      1/1     Running   2          4d
kube-scheduler-k8s-master1            1/1     Running   4          4d
Recheck the logs:
coredns-7ff77c879f-5mmjg:
[root@k8s-master1 ~]# kubectl logs coredns-7ff77c879f-5mmjg -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

coredns-7ff77c879f-t74th:
[root@k8s-master1 ~]# kubectl  logs coredns-7ff77c879f-t74th -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b


# Create a container in k8s to verify DNS
[root@k8s-master1 ~]# kubectl run -it --rm --image=busybox:1.28.4 sh
/ # nslookup kubernetes
Server:    10.1.0.10
Address 1: 10.1.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.1.0.1 kubernetes.default.svc.cluster.local
# nslookup resolving the kubernetes service shows that DNS is working correctly

15. Handling cluster certificate issues [solution for kubeadm deployments]

1. Delete the default secret and create a new one from the self-signed certificates
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
kubectl create secret generic kubernetes-dashboard-certs \
--from-file=/etc/kubernetes/pki/apiserver.key --from-file=/etc/kubernetes/pki/apiserver.crt -n kubernetes-dashboard

For binary (non-kubeadm) deployments, change the certificate paths here to wherever you stored them at the time.
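
An optional check that the new secret is in place before rebuilding the dashboard:

kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard
kubectl describe secret kubernetes-dashboard-certs -n kubernetes-dashboard   # data should list apiserver.crt and apiserver.key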

2. After configuring the certificates, modify the dashboard yaml and redeploy the dashboard
wget https://www.chenleilei.net/soft/k8s/recommended.yaml
vim recommended.yaml
Find: kind: Deployment, then search again for args; you will see these two lines:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard

Change it to [insert the two certificate lines in between]:
- --auto-generate-certificates
- --tls-key-file=apiserver.key
- --tls-cert-file=apiserver.crt
- --namespace=kubernetes-dashboard

[An already-modified copy can be used directly: wget https://www.chenleilei.net/soft/k8s/dashboard.yaml]

3. After the changes, re-apply recommended.yaml
kubectl apply -f recommended.yaml

Applying it triggers a rolling update. Re-open the browser and the certificate now displays correctly, without the insecurity warning.
[root@k8s-master1 ~]# kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-694557449d-r9h5r   1/1     Running   0          2d1h
kubernetes-dashboard-5d8766c7cc-trdsv        1/1     Running   0          93s   <--- rolling update

4. Check the new access port:
 kubectl get svc -n kubernetes-dashboard
 [root@k8s-master1 ~]#  kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.1.187.60    <none>        8000/TCP        6m34s
kubernetes-dashboard        NodePort    10.1.242.240   <none>        443:31761/TCP   6m34s


5. Open it in Chrome; the page now loads
   #1. Note: if you forget the login token, regenerate it:
   # kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
   # kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
  
   
   #2. You can also look up the previous token and use it to log in:
    kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')      

15.1 Browser screenshots before and after the certificate change:

Before the change: [screenshot]

After the change: [screenshot]

15.2 Troubleshooting:

15.2.1 Issue 1: error when a k8s node joins:

Error on node join:
W0315 22:16:20.123204    5795 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Fix:
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
After setting it, join again:
kubeadm join 10.0.0.63:6443 --token 0dr1pw.ejybkufnjpalb8k6     --discovery-token-ca-cert-hash sha256:ca1aa9cb753a26d0185e3df410cad09d8ec4af4d7432d127f503f41bc2b14f2a
The token here is generated on the kubeadm master.
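
Note that the echo above only fixes the value until the next reboot; the persistent form is the same sysctl file already used in step 4 of the environment preparation:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system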

15.2.2 Issue 2: the web page cannot be reached:

Rebuild the dashboard
# Delete:
kubectl delete -f dashboard.yaml

# Recreate after deletion:
kubectl create -f dashboard.yaml
# Create the account:
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
# Look up the login token:
kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')

Re-open the page and log in.


Use the following command to see which node each pod was scheduled to:
[root@k8s-master1 ~]# kubectl get pods -n kubernetes-dashboard -o wide
NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE  READINESS GATES
dashboard-metrics-scraper-694557449d-vnrvt   1/1     Running   0          8m56s   10.244.1.13  k8s-node1   <none>         <none>
kubernetes-dashboard-85fc8fbf64-t4cdw        1/1     Running   0          3m8s    10.244.2.18  k8s-node2   <none>         <none>      

15.2.3 Issue 3: dashboard deployment fails

This may be a network problem; switch to a different network (a VPN, for example) and redeploy.
1. Alternatively, copy the content below, save it as dashboard.yaml, delete the original dashboard, and redeploy
2. Or download it from my personal server: wget https://www.chenleilei.net/soft/k8s/recommended.yaml

3. Check whether the status shows problems, for example whether the image was pulled successfully
   kubectl get pods -n kubernetes-dashboard -o wide      
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta8
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --tls-key-file=apiserver.key
            - --tls-cert-file=apiserver.crt
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "beta.kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}      

16. Deploy an nginx in k8s

[root@k8s-master1 ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service/nginx exposed
[root@k8s-master1 ~]# kubectl get pod,svc
NAME                        READY   STATUS             RESTARTS   AGE
pod/nginx-f89759699-dnfmg   0/1     ImagePullBackOff   0          3m41s

ImagePullBackOff error:
Check the k8s events:  kubectl describe pod nginx-f89759699-dnfmg
Result:
  Normal   Pulling    3m27s (x4 over 7m45s)  kubelet, k8s-node2  Pulling image "nginx"
  Warning  Failed     2m55s (x2 over 6m6s)   kubelet, k8s-node2  Failed to pull image "nginx": rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/library/nginx/manifests/sha256:cccef6d6bdea671c394956e24b0d0c44cd82dbe83f543a47fdc790fadea48422: net/http: TLS handshake timeout

The error shows that Docker failed to download the image; a different registry mirror is needed.
[root@k8s-master1 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Pull nginx with Docker on one of the nodes:
This reveals the error:
[root@k8s-node1 ~]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
54fec2fa59d0: Pulling fs layer 
4ede6f09aefe: Pulling fs layer 
f9dc69acb465: Pulling fs layer 
Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout  # the mirror had not been changed

After fixing the mirror:
[root@k8s-master1 ~]# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
54fec2fa59d0: Pull complete 
4ede6f09aefe: Pull complete 
f9dc69acb465: Pull complete 
Digest: sha256:86ae264c3f4acb99b2dee4d0098c40cb8c46dcf9e1148f05d3a51c4df6758c12
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest


Run it again:
kubectl delete pod,svc nginx
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort


This is the troubleshooting process for a k8s image-pull failure:
1. The nginx deployment in k8s fails; check with: kubectl get pod,svc
2. Check the k8s events:  Failed to pull image "nginx": rpc error: code = Unknown desc = Get https://registry-
...net/http: TLS handshake timeout [this failure shows the registry mirror was never changed]
3. Change the Docker registry mirror to Aliyun's, then restart Docker
cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker.service 
4. Run docker pull again to download an nginx image; it now succeeds
5. Remove the nginx image Docker downloaded: docker image rm -f [image name]
6. Delete the failed nginx deployment in k8s:   kubectl delete deployment nginx
7. Recreate the deployment: kubectl create deployment nginx --image=nginx
8. Expose the application again: kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort

17. Expose the application

1. Create the deployment
kubectl create deployment nginx --image=nginx

2. Expose the application
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort      
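
After the expose, read the randomly assigned NodePort from the Service and test it on any node (output shown only as a shape; the actual port comes from the 30000-32767 range):

kubectl get svc nginx
# NAME    TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
# nginx   NodePort   10.1.x.x     <none>        80:3xxxx/TCP   30s
curl -I http://10.0.0.65:3xxxx    # replace 3xxxx with the port shown above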

18. Optimization: kubectl shell completion

yum install -y bash-completion
source <(kubectl completion bash)
source /usr/share/bash-completion/bash_completion      
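
To make completion persist across new shells, append it to the shell profile (the approach documented by kubectl itself):

echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc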

19. Issues covered in this lesson:

I. Handling token expiry:
   After 24 hours a previously created token expires and can no longer be used, which also blocks the token login to the cluster dashboard page; in that case a new token must be generated.
   Commands to generate one (run on the master):
   kubeadm token create
   kubeadm token list
   openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
   
  Query the CA certificate hash:
   openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4
    
  Then join the new server with the new token:
   kubeadm join 10.0.0.63:6443 --token 0dr1pw.ejybkufnjpalb8k6     --discovery-token-ca-cert-hash sha256:3d847b858ed649244b4110d4d60ffd57f43856f42ca9c22e12ca33946673ccb4
   
   
 II. Getting the dashboard login token
 kubectl describe secrets -n kubernetes-dashboard $(kubectl -n kubernetes-dashboard get secret | awk '/dashboard-admin/{print $1}')
 
 
 III. Troubleshooting process for a k8s image-pull failure
 1. The nginx deployment in k8s fails; check with: kubectl get pod,svc
 2. Check the k8s events:  Failed to pull image "nginx": rpc error: code = Unknown desc = Get https://registry-
...net/http: TLS handshake timeout [this failure shows the registry mirror was never changed]
 3. Change the Docker registry mirror to Aliyun's, then restart Docker
cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ajvcw8qn.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker.service 
 4. Run docker pull again to download an nginx image; it now succeeds
 5. Remove the nginx image Docker downloaded: docker image rm -f [image name]
 6. Delete the failed nginx deployment in k8s:   kubectl delete deployment nginx
 7. Recreate the deployment: kubectl create deployment nginx --image=nginx
 8. Expose the application again in k8s: kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort

20. YAML attachment [save with a .yaml extension]
