k8s monitoring in practice: Grafana dashboards and Alertmanager notifications
Table of Contents
- 1 Using Grafana for dashboards
  - 1.1 Deploying Grafana
    - 1.1.1 Prepare the image
    - 1.1.2 Prepare the RBAC resource manifest
    - 1.1.3 Prepare the Deployment resource manifest
    - 1.1.4 Prepare the Service resource manifest
    - 1.1.5 Prepare the Ingress resource manifest
    - 1.1.6 DNS resolution
    - 1.1.7 Apply the resource manifests
  - 1.2 Using Grafana
    - 1.2.1 Verify access in the browser
    - 1.2.2 Install plugins inside the container
    - 1.2.3 Configure the data source
    - 1.2.4 Add the K8S cluster information
    - 1.2.5 View K8S cluster data and charts
- 2 Configuring alert notifications
  - 2.1 Deploying the Alertmanager
    - 2.1.1 Prepare the Docker image
    - 2.1.2 Prepare the ConfigMap resource manifest
    - 2.1.3 Prepare the Deployment resource manifest
    - 2.1.4 Prepare the Service resource manifest
    - 2.1.5 Apply the resource manifests
  - 2.2 Alerting in K8S
    - 2.2.1 Create the base alerting rules file
    - 2.2.2 Update the Prometheus configuration
    - 2.2.3 Test the alerting
1 Using Grafana for dashboards
Although Prometheus's built-in dashboard claims to offer all kinds of charts, it is really too crude; the professional tool Grafana is normally used for visualization instead.
1.1 Deploying Grafana
1.1.1 Prepare the image
References: the official Grafana Docker Hub page, the official Grafana GitHub repository, and the Grafana website.
docker pull grafana/grafana:5.4.2
docker tag 6f18ddf9e552 harbor.zq.com/infra/grafana:v5.4.2
docker push harbor.zq.com/infra/grafana:v5.4.2
Prepare the directory on the host that serves http://k8s-yaml.zq.com:
mkdir /data/k8s-yaml/grafana
cd /data/k8s-yaml/grafana
1.1.2 Prepare the RBAC resource manifest
cat >rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
rules:
- apiGroups:
  - "*"
  resources:
  - namespaces
  - deployments
  - pods
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
  name: grafana
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: grafana
subjects:
- kind: User
  name: k8s-node
EOF
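Once the binding has been applied (step 1.1.7 below), you can sanity-check what it grants; a minimal sketch, assuming kubectl is pointed at this cluster:
# impersonate the k8s-node user that the ClusterRoleBinding targets;
# both should answer "yes" for the verbs/resources listed in the ClusterRole
kubectl auth can-i list pods --as k8s-node
kubectl auth can-i watch deployments --as k8s-node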
1.1.3 Prepare the Deployment resource manifest
cat >dp.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: grafana
    name: grafana
  name: grafana
  namespace: infra
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 7
  selector:
    matchLabels:
      name: grafana
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: grafana
        name: grafana
    spec:
      containers:
      - name: grafana
        image: harbor.zq.com/infra/grafana:v5.4.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: data
      imagePullSecrets:
      - name: harbor
      securityContext:
        runAsUser: 0
      volumes:
      - nfs:
          server: hdss7-200
          path: /data/nfs-volume/grafana
        name: data
EOF
Create the Grafana data directory on the NFS server (hdss7-200):
mkdir /data/nfs-volume/grafana
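The Deployment above mounts this directory over NFS, so it must live under an exported path; a quick check, assuming the export of /data/nfs-volume was set up earlier in this series:
# from any client host: list the exports offered by hdss7-200
showmount -e hdss7-200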
1.1.4 Prepare the Service resource manifest
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: infra
spec:
  ports:
  - port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
EOF
1.1.5 Prepare the Ingress resource manifest
cat >ingress.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana
  namespace: infra
spec:
  rules:
  - host: grafana.zq.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
EOF
1.1.6 DNS resolution
On the bind server, add an A record for grafana to the zq.com zone file (remember to increment the zone serial), then restart named:
vi /var/named/zq.com.zone
grafana A 10.4.7.10
systemctl restart named
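To confirm the record resolves (the name server address 10.4.7.11 below is an assumption; substitute your bind server's IP):
# should print 10.4.7.10
dig -t A grafana.zq.com @10.4.7.11 +short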
1.1.7 Apply the resource manifests
kubectl apply -f http://k8s-yaml.zq.com/grafana/rbac.yaml
kubectl apply -f http://k8s-yaml.zq.com/grafana/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/grafana/svc.yaml
kubectl apply -f http://k8s-yaml.zq.com/grafana/ingress.yaml
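Check that the Grafana pod is running before moving on:
kubectl -n infra get pods -o wide | grep grafana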
1.2 Using Grafana
1.2.1 Verify access in the browser
Visit http://grafana.zq.com; the default username/password is admin/admin.
If the page loads, the installation succeeded.
After logging in, change the admin password right away (here it is set to admin123).
1.2.2 Install plugins inside the container
Once Grafana is confirmed up, exec into the Grafana container and install the following plugins (the pod name will differ in your environment):
kubectl -n infra exec -it grafana-d6588db94-xr4s6 /bin/bash
# run the following commands inside the container
grafana-cli plugins install grafana-kubernetes-app
grafana-cli plugins install grafana-clock-panel
grafana-cli plugins install grafana-piechart-panel
grafana-cli plugins install briangann-gauge-panel
grafana-cli plugins install natel-discrete-panel
1.2.3 Configure the data source
Add the data source: click, in order, the gear icon on the left --> Add data source --> Prometheus, and point the URL at your Prometheus address (http://prometheus.zq.com in this setup).
After adding the data source, restart Grafana by deleting the pod and letting the Deployment recreate it (again, your pod name will differ):
kubectl -n infra delete pod grafana-7dd95b4c8d-nj5cx
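The plugins survive this restart because grafana-cli installs them under /var/lib/grafana/plugins, which sits on the NFS-backed volume; you can confirm on the NFS server:
# on hdss7-200: the five plugins installed above should be listed
ls /data/nfs-volume/grafana/plugins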
1.2.4 Add the K8S cluster information
Enable the Kubernetes plugin: click the gear icon on the left --> Plugins --> Kubernetes --> Enable.
Create the cluster: click the K8S icon on the left --> New Cluster.
After adding the cluster, wait a few minutes; until data has been scraped you will see "http forbidden" errors. This is normal and clears up on its own, usually within 2-5 minutes.
1.2.5 View K8S cluster data and charts
Click Cluster Dashboard to browse the cluster's data and charts.
2 Configuring alert notifications
2.1 Deploying the Alertmanager
2.1.1 Prepare the Docker image
docker pull docker.io/prom/alertmanager:v0.14.0
docker tag 23744b2d645c harbor.zq.com/infra/alertmanager:v0.14.0
docker push harbor.zq.com/infra/alertmanager:v0.14.0
mkdir /data/k8s-yaml/alertmanager
cd /data/k8s-yaml/alertmanager
2.1.2 Prepare the ConfigMap resource manifest
cat >cm.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-config
  namespace: infra
data:
  config.yml: |-
    global:
      # how long to wait with no further firing before declaring an alert resolved
      resolve_timeout: 5m
      # outgoing mail settings
      smtp_smarthost: 'smtp.163.com:25'
      smtp_from: '[email protected]'
      smtp_auth_username: '[email protected]'
      smtp_auth_password: 'xxxxxx'
      smtp_require_tls: false
    templates:
    - '/etc/alertmanager/*.tmpl'
    # the root route that every incoming alert enters; defines the dispatch policy
    route:
      # labels used to regroup incoming alerts; e.g. alerts that share
      # cluster=A and alertname=LatencyHigh are aggregated into one group
      group_by: ['alertname', 'cluster']
      # when a new alert group is created, wait at least group_wait before the
      # first notification, so multiple alerts for the same group can be
      # collected and fired together
      group_wait: 30s
      # after the first notification, wait group_interval before sending a
      # notification about new alerts added to the group
      group_interval: 5m
      # if an alert has already been sent successfully, wait repeat_interval
      # before re-sending it
      repeat_interval: 5m
      # default receiver: alerts that match no route are sent here
      receiver: default
    receivers:
    - name: 'default'
      email_configs:
      - to: '[email protected]'
        send_resolved: true
        html: '{{ template "email.to.html" . }}'
        headers: { Subject: " {{ .CommonLabels.instance }} {{ .CommonAnnotations.summary }}" }
  email.tmpl: |
    {{ define "email.to.html" }}
    {{- if gt (len .Alerts.Firing) 0 -}}
    {{ range .Alerts }}
    Alerting program: prometheus_alert <br>
    Severity: {{ .Labels.severity }} <br>
    Alert type: {{ .Labels.alertname }} <br>
    Affected host: {{ .Labels.instance }} <br>
    Summary: {{ .Annotations.summary }} <br>
    Fired at: {{ .StartsAt.Format "2006-01-02 15:04:05" }} <br>
    {{ end }}{{ end -}}
    {{- if gt (len .Alerts.Resolved) 0 -}}
    {{ range .Alerts }}
    Alerting program: prometheus_alert <br>
    Severity: {{ .Labels.severity }} <br>
    Alert type: {{ .Labels.alertname }} <br>
    Affected host: {{ .Labels.instance }} <br>
    Summary: {{ .Annotations.summary }} <br>
    Fired at: {{ .StartsAt.Format "2006-01-02 15:04:05" }} <br>
    Resolved at: {{ .EndsAt.Format "2006-01-02 15:04:05" }} <br>
    {{ end }}{{ end -}}
    {{- end }}
EOF
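The embedded configuration can be validated before applying it; a hedged sketch using amtool, which ships in the alertmanager image (copy just the config.yml block, without the ConfigMap wrapper, into /tmp/am-config.yml first; the file path is illustrative):
# run amtool from the image we just pushed to validate the config
docker run --rm -v /tmp/am-config.yml:/tmp/am-config.yml \
    --entrypoint amtool \
    harbor.zq.com/infra/alertmanager:v0.14.0 check-config /tmp/am-config.yml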
2.1.3 Prepare the Deployment resource manifest
cat >dp.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: alertmanager
  namespace: infra
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      labels:
        app: alertmanager
    spec:
      containers:
      - name: alertmanager
        image: harbor.zq.com/infra/alertmanager:v0.14.0
        args:
        - "--config.file=/etc/alertmanager/config.yml"
        - "--storage.path=/alertmanager"
        ports:
        - name: alertmanager
          containerPort: 9093
        volumeMounts:
        - name: alertmanager-cm
          mountPath: /etc/alertmanager
      volumes:
      - name: alertmanager-cm
        configMap:
          name: alertmanager-config
      imagePullSecrets:
      - name: harbor
EOF
2.1.4 Prepare the Service resource manifest
cat >svc.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: infra
spec:
  selector:
    app: alertmanager
  ports:
  - port: 80
    targetPort: 9093
EOF
Note that the Service listens on port 80 and forwards to the container's 9093; this is what lets Prometheus later address it simply as "alertmanager" with no port.
2.1.5 Apply the resource manifests
kubectl apply -f http://k8s-yaml.zq.com/alertmanager/cm.yaml
kubectl apply -f http://k8s-yaml.zq.com/alertmanager/dp.yaml
kubectl apply -f http://k8s-yaml.zq.com/alertmanager/svc.yaml
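Before wiring Prometheus to it, confirm the Alertmanager pod and Service are up:
kubectl -n infra get pods,svc | grep alertmanager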
2.2 Alerting in K8S
2.2.1 Create the base alerting rules file
Create the rules file on the NFS server (hdss7-200):
cat >/data/nfs-volume/prometheus/etc/rules.yml <<'EOF'
groups:
- name: hostStatsAlert
  rules:
  - alert: hostCpuUsageAlert
    expr: sum(avg without (cpu)(irate(node_cpu{mode!='idle'}[5m]))) by (instance) > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} CPU usage above 85% (current value: {{ $value }}%)"
  - alert: hostMemUsageAlert
    expr: (node_memory_MemTotal - node_memory_MemAvailable)/node_memory_MemTotal > 0.85
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "{{ $labels.instance }} MEM usage above 85% (current value: {{ $value }}%)"
  - alert: OutOfInodes
    expr: node_filesystem_free{fstype="overlay",mountpoint="/"} / node_filesystem_size{fstype="overlay",mountpoint="/"} * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Out of inodes (instance {{ $labels.instance }})"
      description: "Disk is almost running out of available inodes (< 10% left) (current value: {{ $value }})"
  - alert: OutOfDiskSpace
    expr: node_filesystem_free{fstype="overlay",mountpoint="/rootfs"} / node_filesystem_size{fstype="overlay",mountpoint="/rootfs"} * 100 < 10
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Out of disk space (instance {{ $labels.instance }})"
      description: "Disk is almost full (< 10% left) (current value: {{ $value }})"
  - alert: UnusualNetworkThroughputIn
    expr: sum by (instance) (irate(node_network_receive_bytes[2m])) / 1024 / 1024 > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual network throughput in (instance {{ $labels.instance }})"
      description: "Host network interfaces are probably receiving too much data (> 100 MB/s) (current value: {{ $value }})"
  - alert: UnusualNetworkThroughputOut
    expr: sum by (instance) (irate(node_network_transmit_bytes[2m])) / 1024 / 1024 > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual network throughput out (instance {{ $labels.instance }})"
      description: "Host network interfaces are probably sending too much data (> 100 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskReadRate
    expr: sum by (instance) (irate(node_disk_bytes_read[2m])) / 1024 / 1024 > 50
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk read rate (instance {{ $labels.instance }})"
      description: "Disk is probably reading too much data (> 50 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskWriteRate
    expr: sum by (instance) (irate(node_disk_bytes_written[2m])) / 1024 / 1024 > 50
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk write rate (instance {{ $labels.instance }})"
      description: "Disk is probably writing too much data (> 50 MB/s) (current value: {{ $value }})"
  - alert: UnusualDiskReadLatency
    expr: rate(node_disk_read_time_ms[1m]) / rate(node_disk_reads_completed[1m]) > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk read latency (instance {{ $labels.instance }})"
      description: "Disk latency is growing (read operations > 100ms) (current value: {{ $value }})"
  - alert: UnusualDiskWriteLatency
    expr: rate(node_disk_write_time_ms[1m]) / rate(node_disk_writes_completed[1m]) > 100
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Unusual disk write latency (instance {{ $labels.instance }})"
      description: "Disk latency is growing (write operations > 100ms) (current value: {{ $value }})"
- name: http_status
  rules:
  - alert: ProbeFailed
    expr: probe_success == 0
    for: 1m
    labels:
      severity: error
    annotations:
      summary: "Probe failed (instance {{ $labels.instance }})"
      description: "Probe failed (current value: {{ $value }})"
  - alert: StatusCode
    expr: probe_http_status_code <= 199 or probe_http_status_code >= 400
    for: 1m
    labels:
      severity: error
    annotations:
      summary: "Status Code (instance {{ $labels.instance }})"
      description: "HTTP status code is not 200-399 (current value: {{ $value }})"
  - alert: SslCertificateWillExpireSoon
    expr: probe_ssl_earliest_cert_expiry - time() < 86400 * 30
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "SSL certificate will expire soon (instance {{ $labels.instance }})"
      description: "SSL certificate expires within 30 days (current value: {{ $value }})"
  - alert: SslCertificateHasExpired
    expr: probe_ssl_earliest_cert_expiry - time() <= 0
    for: 5m
    labels:
      severity: error
    annotations:
      summary: "SSL certificate has expired (instance {{ $labels.instance }})"
      description: "SSL certificate has expired already (current value: {{ $value }})"
  - alert: BlackboxSlowPing
    expr: probe_icmp_duration_seconds > 2
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Blackbox slow ping (instance {{ $labels.instance }})"
      description: "Blackbox ping took more than 2s (current value: {{ $value }})"
  - alert: BlackboxSlowRequests
    expr: probe_http_duration_seconds > 2
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Blackbox slow requests (instance {{ $labels.instance }})"
      description: "Blackbox request took more than 2s (current value: {{ $value }})"
  - alert: PodCpuUsagePercent
    expr: sum(sum(label_replace(irate(container_cpu_usage_seconds_total[1m]),"pod","$1","container_label_io_kubernetes_pod_name", "(.*)"))by(pod) / on(pod) group_right kube_pod_container_resource_limits_cpu_cores *100 )by(container,namespace,node,pod,severity) > 80
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Pod cpu usage percent has exceeded 80% (current value: {{ $value }}%)"
EOF
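Before wiring the rules into Prometheus, you can try any expression against the HTTP query API; a hedged example using the CPU rule's expression, assuming Prometheus is reachable at prometheus.zq.com (the address used for the reload below):
# POST the PromQL expression to the query endpoint; the response lists
# per-instance CPU usage, so you can see how close each host is to 0.85
curl -s 'http://prometheus.zq.com/api/v1/query' \
  --data-urlencode 'query=sum(avg without (cpu)(irate(node_cpu{mode!="idle"}[5m]))) by (instance)'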
2.2.2 Update the Prometheus configuration
Append the following to the Prometheus configuration file. Note that /data/etc/rules.yml is the path inside the Prometheus container, which mounts the NFS directory /data/nfs-volume/prometheus at /data (as set up when Prometheus was deployed earlier in this series):
cat >>/data/nfs-volume/prometheus/etc/prometheus.yml <<'EOF'
alerting:
  alertmanagers:
  - static_configs:
    - targets: ["alertmanager"]
rule_files:
- "/data/etc/rules.yml"
EOF
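The bare target "alertmanager" works because Prometheus runs in the same infra namespace, so cluster DNS resolves the Service name on port 80. A quick reachability check from inside the Prometheus pod (the pod name is illustrative, and this assumes the busybox-based image's wget is available):
# query Alertmanager's v1 status endpoint through the Service
kubectl -n infra exec <prometheus-pod-name> -- wget -qO- http://alertmanager/api/v1/status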
Reload the configuration:
curl -X POST http://prometheus.zq.com/-/reload
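If the reload worked, the rules appear on the Alerts page of the Prometheus UI; they can also be listed via the API (available on Prometheus 2.2 and later):
# should return both rule groups, hostStatsAlert and http_status
curl -s http://prometheus.zq.com/api/v1/rules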
2.2.3 Test the alerting
The rules above are our alerting rules. To test them, stop the dubbo-demo-service in the test namespace.
The blackbox exporter then reports the failure, and the corresponding item on the Prometheus Alerts page turns yellow (pending).
Once the item turns red (firing), the alert email is sent.
If you need to customize the alerting rules and alert content, study PromQL and edit these configuration files yourself.