Project address: GitHub - utkuozdemir/nvidia_gpu_exporter: Nvidia GPU exporter for prometheus using nvidia-smi binary
With the nvidia_gpu_exporter project on GitHub, GPU monitoring in Grafana can be implemented. However, the utkuozdemir/nvidia_gpu_exporter:0.3.0 image provided by the repository only works on Ubuntu; on CentOS, the logs report that GPU information cannot be retrieved, so the exporter cannot be scraped by the Prometheus instance in Kubernetes. The approach used here is to download the nvidia_gpu_exporter binary to the CentOS host and run it as a system service, which exposes the GPU metrics over HTTP. Then, in Kubernetes, an Endpoints object, a Service, and a ServiceMonitor are created so that Prometheus collects the GPU metrics, which are finally visualized in Grafana. The detailed steps follow:
1. Create the nvidia_gpu_exporter service on the CentOS system
Install nvidia_gpu_exporter
# VERSION=0.3.0
# wget https://github.com/utkuozdemir/nvidia_gpu_exporter/releases/download/v${VERSION}/nvidia_gpu_exporter_${VERSION}_linux_x86_64.tar.gz
# tar -xvzf nvidia_gpu_exporter_${VERSION}_linux_x86_64.tar.gz
# mv nvidia_gpu_exporter /usr/local/bin
# ./nvidia_gpu_exporter
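Before wiring the exporter into systemd, the endpoint can be checked directly from the shell. A quick sketch, assuming the exporter is running locally on its default port 9835 (the port is confirmed by the service logs further below):

```shell
# Fetch the metrics page and show a few GPU metrics; nvidia_smi_ is the
# exporter's metric name prefix.
curl -s http://localhost:9835/metrics | grep '^nvidia_smi_' | head
```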
At this point, the GPU server's gpu-metrics can be viewed in a browser, as shown below
The GPU information is visible
Create the nvidia_gpu_exporter systemd service
# vim /etc/systemd/system/nvidia_gpu_exporter.service
[Unit]
Description=Nvidia GPU Exporter
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
User=nvidia_gpu_exporter
Group=nvidia_gpu_exporter
ExecStart=/usr/local/bin/nvidia_gpu_exporter
SyslogIdentifier=nvidia_gpu_exporter
Restart=always
RestartSec=1
NoNewPrivileges=yes
ProtectHome=yes
ProtectSystem=strict
ProtectControlGroups=true
ProtectKernelModules=true
ProtectKernelTunables=yes
ProtectHostname=yes
ProtectKernelLogs=yes
ProtectProc=invisible
[Install]
WantedBy=multi-user.target
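The unit above runs the exporter as user and group nvidia_gpu_exporter, but this account is never created in the original post. A minimal sketch of that prerequisite (the account name is taken from the unit file; useradd creates a matching group by default on most distributions):

```shell
# Create a locked-down system account for the service: no home directory,
# and a nologin shell to prevent interactive use.
useradd --system --no-create-home --shell /sbin/nologin nvidia_gpu_exporter
```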
# systemctl daemon-reload
# systemctl enable nvidia_gpu_exporter
# systemctl start nvidia_gpu_exporter.service
# systemctl status nvidia_gpu_exporter.service
● nvidia_gpu_exporter.service - Nvidia GPU Exporter
Loaded: loaded (/etc/systemd/system/nvidia_gpu_exporter.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2022-05-13 17:36:03 CST; 5s ago
Main PID: 80178 (nvidia_gpu_expo)
Tasks: 6
Memory: 5.6M
CGroup: /system.slice/nvidia_gpu_exporter.service
└─80178 /usr/local/bin/nvidia_gpu_exporter
May 13 17:36:03 k8s-gpu4 systemd[1]: Started Nvidia GPU Exporter.
May 13 17:36:04 k8s-gpu4 nvidia_gpu_exporter[80178]: ts=2022-05-13T09:36:04.005Z caller=main.go:68 level=info msg="Listening on add...=:9835
May 13 17:36:04 k8s-gpu4 nvidia_gpu_exporter[80178]: ts=2022-05-13T09:36:04.006Z caller=tls_config.go:195 level=info msg="TLS is di...=false
Hint: Some lines were ellipsized, use -l to show in full.
-----------------------------------
(Source: "Using nvidia_gpu_exporter with Prometheus and Grafana to monitor GPU performance" by boxrice, 51CTO blog: https://blog.51cto.com/u_11229048/5291377)
The service started successfully; verify via the web page
2. Create the Endpoints, Service, and ServiceMonitor in Kubernetes
Create the Endpoints
# cat gpu-exporter-endpoint.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: nvidia-gpu-exporter
  namespace: monitoring
subsets:
- addresses:
  - ip: 10.1.12.17
  ports:
  - name: http
    port: 9835
    protocol: TCP
The IP above is the GPU server's address. For multiple GPU servers, keep adding entries under addresses, e.g.:
  - ip: *.*.*.*
  - ip: *.*.*.*
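For concreteness, a hypothetical subsets block for two GPU servers (the second IP is a placeholder, not from the original post):

```yaml
subsets:
- addresses:
  - ip: 10.1.12.17   # first GPU server (from the post)
  - ip: 10.1.12.18   # second GPU server (placeholder)
  ports:
  - name: http
    port: 9835
    protocol: TCP
```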
# kubectl create -f gpu-exporter-endpoint.yaml
endpoints/nvidia-gpu-exporter created
# kubectl get endpoints -n monitoring nvidia-gpu-exporter
NAME ENDPOINTS AGE
nvidia-gpu-exporter 10.1.12.17:9835 39s
# kubectl describe endpoints -n monitoring nvidia-gpu-exporter
Name: nvidia-gpu-exporter
Namespace: monitoring
Labels: <none>
Annotations: <none>
Subsets:
Addresses: 10.1.12.17
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
http 9835 TCP
Events: <none>
Create the Service
# cat gpu-exporter-svc.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nvidia-gpu-exporter
  name: nvidia-gpu-exporter
  namespace: monitoring
spec:
  ports:
  - name: http
    protocol: TCP
    port: 9835
    targetPort: http
  type: ClusterIP
# kubectl delete -f gpu-exporter-svc.yaml
service "nvidia-gpu-exporter" deleted
# kubectl create -f gpu-exporter-svc.yaml
service/nvidia-gpu-exporter created
# kubectl get svc -n monitoring nvidia-gpu-exporter
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nvidia-gpu-exporter ClusterIP 10.10.75.226 <none> 9835/TCP 12s
# kubectl describe svc -n monitoring nvidia-gpu-exporter
Name: nvidia-gpu-exporter
Namespace: monitoring
Labels: app=nvidia-gpu-exporter
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.10.235.70
Port: http 9835/TCP
TargetPort: http/TCP
Endpoints: 10.1.12.17:9835
Session Affinity: None
Events: <none>
The Endpoints shown above must be the IP and port of the Endpoints object created earlier. Because the Service defines no selector, Kubernetes does not populate its Endpoints automatically; the manually created Endpoints object is associated with the Service by having the same name and namespace, so those must match exactly.
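To confirm the Service actually reaches the exporter on the GPU host, a quick in-cluster check can be run. A sketch; the throwaway pod name and the curlimages/curl image are illustrative choices, not from the original post:

```shell
# Launch a temporary pod in the monitoring namespace and curl the Service.
kubectl run -n monitoring curl-check --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://nvidia-gpu-exporter:9835/metrics | head
```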
Create the ServiceMonitor
# cat gpu-exporter-serviceMonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: nvidia-gpu-exporter
  name: nvidia-gpu-exporter
  namespace: monitoring
spec:
  endpoints:
  - interval: 30s
    port: http
  jobLabel: app
  selector:
    matchLabels:
      app: nvidia-gpu-exporter
# kubectl create -f gpu-exporter-serviceMonitor.yaml
servicemonitor.monitoring.coreos.com/nvidia-gpu-exporter created
# kubectl get servicemonitors.monitoring.coreos.com -n monitoring nvidia-gpu-exporter
NAME AGE
nvidia-gpu-exporter 12s
# kubectl describe servicemonitors.monitoring.coreos.com -n monitoring nvidia-gpu-exporter
Name: nvidia-gpu-exporter
Namespace: monitoring
Labels: app=nvidia-gpu-exporter
Annotations: <none>
API Version: monitoring.coreos.com/v1
Kind: ServiceMonitor
Metadata:
Creation Timestamp: 2022-05-13T09:50:35Z
Generation: 1
Managed Fields:
API Version: monitoring.coreos.com/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.:
f:app:
f:spec:
.:
f:endpoints:
f:jobLabel:
f:selector:
.:
f:matchLabels:
.:
f:app:
Manager: kubectl-create
Operation: Update
Time: 2022-05-13T09:50:35Z
Resource Version: 14080381
Self Link: /apis/monitoring.coreos.com/v1/namespaces/monitoring/servicemonitors/nvidia-gpu-exporter
UID: 7fdb365b-8bcd-4fc2-9772-9ad7de6155bf
Spec:
Endpoints:
Interval: 30s
Port: http
Job Label: app
Selector:
Match Labels:
App: nvidia-gpu-exporter
Events: <none>
Verify in the Prometheus UI
Check that nvidia_gpu_exporter appears under Targets in the Prometheus UI
Search for nvidia on the Graph page
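If the target does not appear under Targets, the Prometheus Operator may be ignoring the ServiceMonitor: the Prometheus custom resource's serviceMonitorSelector must match the ServiceMonitor's labels. A hedged check (resource names depend on your prometheus-operator install):

```shell
# Show which ServiceMonitor labels the Prometheus CR selects.
kubectl get prometheus -n monitoring \
  -o jsonpath='{.items[*].spec.serviceMonitorSelector}'
```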
The search results show that this GPU server has two RTX 3090 GPUs
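A couple of example queries to try on the Graph page, assuming the exporter's nvidia_smi_ metric prefix (exact metric names depend on the exporter version and the nvidia-smi fields it queries; verify them against the /metrics page):

```promql
# GPU temperature per card
nvidia_smi_temperature_gpu
# GPU utilization as a percentage
nvidia_smi_utilization_gpu_ratio * 100
```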
3. Create a GPU monitoring dashboard in Grafana
Import the JSON dashboard file provided by the author into Grafana
The final result is shown below; if you need the JSON file, leave a comment to request it