Contents
I. Deploying the Dashboard
1. Deploy
2. Apply and check the pods
3. Create a Dashboard admin user
4. Grant the admin user privileges
5. Get the token and log in to the page
II. Deploying Metrics Server
1. Deploying Heapster
2. Configuring the metrics service
3. Adding the metrics-server certificate
III. Deploying Kuboard
1. kuboard.yaml
2. metrics-server.yaml
I. Deploying the Dashboard
Version 1.10.1: does not support Kubernetes 1.15+; the UI redirects to a 404 page.
Version 2.0.0-rc5: pods were not discovered after deployment, and trying a local image did not help; it turned out extra settings such as runAsRoot had to be added to the config.
Version 2.0.0-beta4: works for now; compatibility still to be confirmed.
Today the UI stopped loading; a full reinstall did not help, and the cause turned out to be a system-wide proxy that had been left on.
# GitHub releases
https://github.com/kubernetes/dashboard/releases
1. Deploy
[[email protected] dashboard]# cd /opt/kubernetes/dashboard
[[email protected] dashboard]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml   # downloading this now seems to require a proxy
[[email protected] dashboard]# mv recommended.yaml k8s-dashboard.yaml
[[email protected] dashboard]# vim k8s-dashboard.yaml   # change the Dashboard Service to NodePort:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
2. Apply and check the pods
[[email protected] dashboard]# kubectl apply -f k8s-dashboard.yaml
[[email protected] dashboard]# kubectl get pod --all-namespaces
kubernetes-dashboard dashboard-metrics-scraper-566cddb686-zlfrk 1/1 Running 0 2m38s
kubernetes-dashboard kubernetes-dashboard-7b5bf5d559-bbsdj 1/1 Running 0 2m44s
[[email protected] dashboard]# kubectl get all --all-namespaces
3. Create a Dashboard admin user
[[email protected] dashboard]# cat dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
[[email protected] dashboard]# kubectl create -f dashboard-admin.yaml
[[email protected] dashboard]# kubectl describe sa dashboard-admin -n kubernetes-dashboard
4. Grant the admin user privileges
[[email protected] dashboard]# cat dashboard-admin-bind-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kubernetes-dashboard
[[email protected] dashboard]# kubectl create -f dashboard-admin-bind-cluster-role.yaml
kubectl describe clusterrolebinding dashboard-admin-bind-cluster-role
5. Get the token and log in to the page
[[email protected] dashboard]# kubectl describe secret -n kubernetes-dashboard $(kubectl get secrets -n kubernetes-dashboard | awk '/dashboard-admin/{print $1}' )
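The `awk '/dashboard-admin/{print $1}'` filter above just picks out the secret whose name contains `dashboard-admin` from the secret listing. A small offline sketch of that step, run against a hypothetical `kubectl get secrets` output (the token-name suffix is made up):

```shell
# Hypothetical output of `kubectl get secrets -n kubernetes-dashboard`
sample='NAME                          TYPE                                  DATA   AGE
dashboard-admin-token-x7k2p   kubernetes.io/service-account-token   3      5m
kubernetes-dashboard-certs    Opaque                                0      5m'

# Same filter as the command above: keep lines mentioning dashboard-admin,
# print the first column (the secret name)
secret=$(printf '%s\n' "$sample" | awk '/dashboard-admin/{print $1}')
echo "$secret"   # dashboard-admin-token-x7k2p
```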
# At this point the page opens fine in Firefox; Chrome complains about the self-signed certificate, which I have not bothered to sort out
https://<NodeIP>:30000
II. Deploying Metrics Server
At this point, in the Overview → Pods list in the left-hand menu of the Dashboard, the CPU Usage (cores) and Memory Usage (bytes) columns are empty. Early Kubernetes versions relied on Heapster for full performance data collection and monitoring; starting with version 1.8, performance data is exposed through the standardized Metrics API, and from version 1.10 onward Heapster was replaced by Metrics Server. In short, to complete the page you also need to deploy the metrics-server service.
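Once metrics-server is up, the same data the Dashboard reads is available directly via `kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"`. The sketch below parses a hypothetical, trimmed response body of that call with plain shell (field extraction only; in practice `kubectl top` or `jq` is the sane choice):

```shell
# Hypothetical (trimmed) JSON returned by the Metrics API once metrics-server works
resp='{"kind":"NodeMetricsList","items":[{"metadata":{"name":"192.168.192.129"},"usage":{"cpu":"77m","memory":"531Mi"}}]}'

# Crude, dependency-free extraction of the first node's CPU usage
cpu=$(printf '%s' "$resp" | grep -o '"cpu":"[^"]*"' | head -1 | cut -d'"' -f4)
echo "$cpu"   # 77m
```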
1. Deploying Heapster
Heapster was replaced by metrics-server after Kubernetes 1.8. If you still want to try deploying it, the steps are:
wget https://github.com/kubernetes/heapster/archive/v1.5.4.tar.gz
tar -zxf v1.5.4.tar.gz
cd heapster-1.5.4/
kubectl create -f deploy/kube-config/rbac/heapster-rbac.yaml   ## grant RBAC permissions
kubectl create -f deploy/kube-config/standalone/heapster-controller.yaml
2. Configuring the metrics service
The default metrics-server image is hosted where it can only be pulled through a proxy, so before creating anything, pre-load the image on node1 and node2, or simply pull from a domestic mirror. ## docker pull bluersw/metrics-server-amd64:v0.3.6 ##
Clone the Metrics Server GitHub repository:
[[email protected] dashboard]# yum install git -y
[[email protected] dashboard]# git clone https://github.com/kubernetes-sigs/metrics-server.git
[[email protected] dashboard]# vim metrics-server/deploy/kubernetes/metrics-server-deployment.yaml
[[email protected] kubernetes]# cat metrics-server-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        ########## swap in a domestic mirror image, or pre-pull on the nodes ##########
        image: htcfive/metrics-server-amd64:v0.3.6
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          ########## change to false and comment out runAsUser, otherwise the pod fails to start ##########
          runAsNonRoot: false
          # runAsUser: 1000
        ########## image pulled as described above; add these command parameters ##########
        imagePullPolicy: IfNotPresent
        command:
          - /metrics-server
          - --metric-resolution=30s
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        beta.kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
3. Adding the metrics-server certificate
After deploying metrics-server, running kubectl top node fails with:
Error from server (Forbidden): nodes.metrics.k8s.io is forbidden:
User "system:anonymous" cannot list nodes.metrics.k8s.io at the cluster scope.
Cause: kube-apiserver permissions for the aggregated API are not set up.
Fix: generate a metrics-server certificate on the master node and configure it in kube-apiserver.
3.1. Generate the metrics-server certificate
Generate the certificate:
cat > metrics-server-csr.json <<EOF
{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server
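The CN of the certificate generated here must be exactly `aggregator`, because it has to match `--requestheader-allowed-names` configured below. Any PEM certificate's subject can be checked with openssl; the sketch below demonstrates the inspection command on a throwaway self-signed certificate with the same CN, since the cfssl output is cluster-specific:

```shell
# Generate a throwaway self-signed cert with CN=aggregator (demo only)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=aggregator" -keyout /tmp/demo-key.pem -out /tmp/demo.pem 2>/dev/null

# Print the subject; the CN must read "aggregator"
openssl x509 -in /tmp/demo.pem -noout -subject
# For the real file: openssl x509 -in metrics-server.pem -noout -subject
```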
3.2. Add configuration to kube-apiserver
[[email protected] ssl]# cat /opt/kubernetes/cfg/kube-apiserver
--requestheader-allowed-names=aggregator \
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--proxy-client-cert-file=/opt/kubernetes/ssl/metrics-server.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/metrics-server-key.pem \
--enable-aggregator-routing=true \
--advertise-address: the IP the apiserver advertises to the cluster (the backend endpoint IP of the kubernetes Service);
--default-*-toleration-seconds: thresholds for tolerating node problems;
--max-*-requests-inflight: maximum in-flight request limits;
--etcd-*: certificates for accessing etcd, and the etcd server addresses;
--experimental-encryption-provider-config: config for encrypting Secrets stored in etcd;
--bind-address: the IP that https listens on; must not be 127.0.0.1, or the secure port 6443 is unreachable from outside;
--secure-port: the https listening port;
--insecure-port=0: disables the insecure http port (8080);
--tls-*-file: the certificate, private key and CA files the apiserver uses;
--audit-*: audit policy and audit log parameters;
--client-ca-file: verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
--enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
--requestheader-*: aggregator-layer parameters of kube-apiserver, needed by the proxy client and HPA;
--requestheader-client-ca-file: CA used to sign the certificates given by --proxy-client-cert-file and --proxy-client-key-file; used when the metrics aggregator is enabled;
--requestheader-allowed-names: must not be empty; a comma-separated list of CN names from the --proxy-client-cert-file certificate, set here to "aggregator";
--service-account-key-file: public key file for verifying ServiceAccount tokens; pairs with the private key given to kube-controller-manager via --service-account-private-key-file;
--runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
--authorization-mode=Node,RBAC and --anonymous-auth=false: enable the Node and RBAC authorization modes and reject unauthenticated requests;
--enable-admission-plugins: enables plugins that are off by default;
--allow-privileged: allows running privileged containers;
--apiserver-count=3: the number of apiserver instances;
--event-ttl: how long events are retained;
--kubelet-*: if set, kubelet APIs are accessed over https; RBAC rules must be defined for the user the certificate maps to (the kubernetes.pem certificate above maps to user "kubernetes"), otherwise kubelet API calls fail as unauthorized;
--proxy-client-*: the certificate the apiserver uses to reach metrics-server;
--service-cluster-ip-range: the Service cluster IP range;
--service-node-port-range: the NodePort port range;
The --requestheader-allowed-names value of kube-apiserver must match the CN field of the metrics certificate, otherwise later requests for metrics are rejected with a permissions error.
If kube-proxy is not running on the kube-apiserver machines, also add the --enable-aggregator-routing=true parameter.
3.3. kube-controller-manager configuration
Add the following parameter to kube-controller-manager:
--horizontal-pod-autoscaler-use-rest-clients=true
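That flag makes the HPA controller read resource usage through the Metrics API that metrics-server now serves. For reference, a minimal HPA manifest that depends on these metrics (the names `demo-hpa` and `demo-app` are illustrative, not from this cluster):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa          # illustrative name
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app        # illustrative target Deployment
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 60
```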
3.4. Deploy and verify
kubectl create -f metrics-server/deploy/kubernetes/
Wait a moment, then kubectl top nodes shows each node's CPU and memory usage:
[[email protected] ssl]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
192.168.192.129 77m 3% 531Mi 30%
192.168.192.130 56m 2% 287Mi 16%
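The columns are CPU in millicores, percent of allocatable CPU, memory, and percent of allocatable memory. Sorting the output by load is sometimes handy; the sketch below runs the same sort against a pasted copy of the data rows above:

```shell
# The `kubectl top nodes` data rows from above, as a fixture
top='192.168.192.129   77m   3%   531Mi   30%
192.168.192.130   56m   2%   287Mi   16%'

# Numeric sort on the CPU column, highest first; against a live cluster:
#   kubectl top nodes | tail -n +2 | sort -k2 -rn
busiest=$(printf '%s\n' "$top" | sort -k2 -rn | awk 'NR==1{print $1}')
echo "$busiest"   # 192.168.192.129
```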
III. Deploying Kuboard
Kuboard is another rather handy web UI. The official Dashboard is a bit bare-bones, so this one is worth a try.
cd /kubernetes
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f https://addons.kuboard.cn/metrics-server/0.3.7/metrics-server.yaml
# Check the running state:
kubectl get pods -l k8s.kuboard.cn/name=kuboard -n kube-system
# Get the login token:
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)
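The nested command works because a ServiceAccount token is stored base64-encoded in the secret's .data.token field; the go-template prints the encoded value and base64 -d recovers the raw JWT. The decoding step in isolation (the token value is made up):

```shell
# A made-up token, encoded the way it sits in the Secret's .data.token field
encoded=$(printf '%s' 'eyJhbGciOiJSUzI1NiJ9.demo-payload.demo-sig' | base64 | tr -d '\n')

# What the pipeline above does with it
token=$(printf '%s' "$encoded" | base64 -d)
echo "$token"   # eyJhbGciOiJSUzI1NiJ9.demo-payload.demo-sig
```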
# Access Kuboard:
http://<NodeIP>:32567
1. kuboard.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuboard
  namespace: kube-system
  annotations:
    k8s.kuboard.cn/displayName: kuboard
    k8s.kuboard.cn/ingress: "true"
    k8s.kuboard.cn/service: NodePort
    k8s.kuboard.cn/workload: kuboard
  labels:
    k8s.kuboard.cn/layer: monitor
    k8s.kuboard.cn/name: kuboard
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s.kuboard.cn/layer: monitor
      k8s.kuboard.cn/name: kuboard
  template:
    metadata:
      labels:
        k8s.kuboard.cn/layer: monitor
        k8s.kuboard.cn/name: kuboard
    spec:
      containers:
        - name: kuboard
          image: eipwork/kuboard:latest
          imagePullPolicy: Always
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
          operator: Exists
---
apiVersion: v1
kind: Service
metadata:
  name: kuboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 32567
  selector:
    k8s.kuboard.cn/layer: monitor
    k8s.kuboard.cn/name: kuboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kuboard-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kuboard-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kuboard-user
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kuboard-viewer
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kuboard-viewer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: kuboard-viewer
    namespace: kube-system
# ---
# apiVersion: extensions/v1beta1
# kind: Ingress
# metadata:
#   name: kuboard
#   namespace: kube-system
#   annotations:
#     k8s.kuboard.cn/displayName: kuboard
#     k8s.kuboard.cn/workload: kuboard
#     nginx.org/websocket-services: "kuboard"
#     nginx.com/sticky-cookie-services: "serviceName=kuboard srv_id expires=1h path=/"
# spec:
#   rules:
#     - host: kuboard.yourdomain.com
#       http:
#         paths:
#           - path: /
#             backend:
#               serviceName: kuboard
#               servicePort: http
2. metrics-server.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
    port: 443
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      hostNetwork: true
      containers:
        - name: metrics-server
          image: eipwork/metrics-server:v0.3.7
          # command:
          #   - /metrics-server
          #   - --kubelet-insecure-tls
          #   - --kubelet-preferred-address-types=InternalIP
          args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-insecure-tls=true
            - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS
          ports:
            - name: main-port
              containerPort: 4443
              protocol: TCP
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          imagePullPolicy: Always
          volumeMounts:
            - name: tmp-dir
              mountPath: /tmp
      nodeSelector:
        beta.kubernetes.io/os: linux
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
    - port: 443
      protocol: TCP
      targetPort: 4443