
Deploying EFK on Kubernetes 1.17.2 with Ceph

Introduction

Application and system logs can tell us what is happening inside a cluster, and they are very useful for debugging problems and monitoring cluster health. Most applications produce some form of log output; traditional applications usually write it to local log files. Containerized applications are even simpler: they just write to stdout and stderr, and by default the container runtime captures that output into a JSON file on the host, where it can be read with docker logs or kubectl logs.
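For instance (a quick sketch; the pod and container names here are hypothetical), the recent output of a container can be inspected like this:

# kubectl logs my-app-pod -c my-container --tail=100   ## last 100 lines of one container
# kubectl logs -f my-app-pod                           ## follow the log stream live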

The most popular log-collection solution for Kubernetes is the Elasticsearch, Fluentd and Kibana (EFK) stack, which is also the approach currently recommended upstream.

Elasticsearch is a real-time, distributed, scalable search engine that supports full-text and structured search. It is typically used to index and search large volumes of log data, but it can also be used to search many other kinds of documents. Elasticsearch is usually deployed together with Kibana.

Kibana is a powerful data-visualization dashboard for Elasticsearch that lets you browse Elasticsearch log data through a web interface.

Fluentd is a popular open-source data collector. We will install Fluentd on the Kubernetes cluster nodes, where it picks up the container log files, filters and transforms the log data, and then delivers it to the Elasticsearch cluster to be indexed and stored.

Topology

[Figure: EFK deployment topology]

PS: my physical machines have limited resources, and the cluster also has to host myweb, prometheus, jenkins and the like, so I only deploy EFK here. Under normal circumstances this stack is more than adequate.

The plan: configure and start a scalable Elasticsearch cluster, then create a Kibana application in the Kubernetes cluster, and finally run Fluentd through a DaemonSet so that one Pod runs on every Kubernetes worker node.

檢查叢集狀态

Ceph cluster

# ceph -s
  cluster:
    id:     ed4d59da-c861-4da0-bbe2-8dfdea5be796
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
 
  services:
    mon: 3 daemons, quorum bs-k8s-harbor,bs-k8s-gitlab,bs-k8s-ceph
    mgr: bs-k8s-ceph(active), standbys: bs-k8s-harbor, bs-k8s-gitlab
    osd: 6 osds: 6 up, 6 in
 
  data:
    pools:   1 pools, 128 pgs
    objects: 92  objects, 285 MiB
    usage:   6.7 GiB used, 107 GiB / 114 GiB avail
    pgs:     128 active+clean

Cause: the application has not been enabled on one of the pools.
Fix:
# ceph osd lspools
6 webapp
# ceph osd pool application enable webapp rbd
enabled application 'rbd' on pool 'webapp'
# ceph -s
......
    health: HEALTH_OK
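To confirm the warning is really gone and to see which application a pool has enabled, two standard Ceph commands can be used (output illustrative):

# ceph health detail                     ## should no longer mention the pool
# ceph osd pool application get webapp
{
    "rbd": {}
}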
           

Kubernetes cluster

# kubectl get pods --all-namespaces 
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6cf5b744d7-rxt86     1/1     Running   0          47h
kube-system   calico-node-25dlc                            1/1     Running   2          2d4h
kube-system   calico-node-49q4n                            1/1     Running   2          2d4h
kube-system   calico-node-4gmcp                            1/1     Running   1          2d4h
kube-system   calico-node-gt4bt                            1/1     Running   1          2d4h
kube-system   calico-node-svcdj                            1/1     Running   1          2d4h
kube-system   calico-node-tkrqt                            1/1     Running   1          2d4h
kube-system   coredns-76b74f549-dkjxd                      1/1     Running   0          47h
kube-system   dashboard-metrics-scraper-64c8c7d847-dqbx2   1/1     Running   0          46h
kube-system   kubernetes-dashboard-85c79db674-bnvlk        1/1     Running   0          46h
kube-system   metrics-server-6694c7dd66-hsbzb              1/1     Running   0          47h
kube-system   traefik-ingress-controller-m8jf9             1/1     Running   0          47h
kube-system   traefik-ingress-controller-r7cgl             1/1     Running   0          47h
myweb         rbd-provisioner-9cf46c856-b9pm9              1/1     Running   1          7h2m
myweb         wordpress-6677ff7bd-sc45d                    1/1     Running   0          6h13m
myweb         wordpress-mysql-6d7bd496b4-62dps             1/1     Running   0          5h51m
# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%  
20.0.0.201   563m         14%    1321Mi          103%    
20.0.0.202   359m         19%    1288Mi          100%    
20.0.0.203   338m         18%    1272Mi          99%     
20.0.0.204   546m         14%    954Mi           13%     
20.0.0.205   516m         13%    539Mi           23%     
20.0.0.206   375m         9%     1123Mi          87%  
           

Create the namespace

這裡我準備将所有efk放入assembly名稱空間下。 assembly:元件

# vim namespace.yaml 

[root@bs-k8s-master01 efk]# pwd
/data/k8s/efk
[root@bs-k8s-master01 efk]# cat namespace.yaml 
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   namespace.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Namespace
metadata:
  name: assembly
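The namespace manifest still needs to be applied; a minimal sketch (expected output shown for orientation):

# kubectl apply -f namespace.yaml
namespace/assembly created
# kubectl get ns assembly
NAME       STATUS   AGE
assembly   Active   10s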
           

建立動态RBD StorageClass

Create the assembly pool

On bs-k8s-ceph:
# ceph osd pool create assembly 128
pool 'assembly' created
# ceph auth get-or-create client.assembly mon 'allow r' osd 'allow class-read, allow rwx pool=assembly' -o ceph.client.assemply.keyring
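To double-check the capabilities that were just granted, the keyring can be read back (standard Ceph command; key value shortened here):

# ceph auth get client.assembly
[client.assembly]
        key = AQ...
        caps mon = "allow r"
        caps osd = "allow class-read, allow rwx pool=assembly"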
           

Create the StorageClass

On bs-k8s-master01:
# ceph auth get-key client.assembly | base64
QVFBWjIzRmVDa0RnSGhBQWQ0TXJWK2YxVThGTUkrMjlva1JZYlE9PQ==
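The key for the admin Secret below is obtained the same way; presumably the equivalent command was run for the admin user:

# ceph auth get-key client.admin | base64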
# cat ceph-efk-secret.yaml 
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   ceph-efk-secret.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: assembly 
data:
  key: QVFBaUptcGU0R3RDREJBQWhhM1E3NnowWG5YYUl1VVI2MmRQVFE9PQ==
type: kubernetes.io/rbd
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-assembly-secret
  namespace: assembly 
data:
  key: QVFBWjIzRmVDa0RnSGhBQWQ0TXJWK2YxVThGTUkrMjlva1JZYlE9PQ==
type: kubernetes.io/rbd
# kubectl apply -f ceph-efk-secret.yaml
secret/ceph-admin-secret created
secret/ceph-assembly-secret created
# cat ceph-efk-storageclass.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   ceph-efk-storageclass.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-efk
  namespace: assembly
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ceph.com/rbd
reclaimPolicy: Retain
parameters:
  monitors: 20.0.0.205:6789,20.0.0.206:6789,20.0.0.207:6789
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: assembly
  pool: assembly
  fsType: xfs
  userId: assembly
  userSecretName: ceph-assembly-secret
  imageFormat: "2"
  imageFeatures: "layering"
# kubectl apply -f ceph-efk-storageclass.yaml
storageclass.storage.k8s.io/ceph-efk created
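A quick sanity check that the StorageClass is registered with the right provisioner (sketch; output illustrative):

# kubectl get sc ceph-efk
NAME       PROVISIONER    AGE
ceph-efk   ceph.com/rbd   15s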

Integrating Ceph RBD with Kubernetes requires a third-party provisioner plugin:
# cat external-storage-rbd-provisioner.yaml 
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   external-storage-rbd-provisioner.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: assembly
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: assembly
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: assembly
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: assembly
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: assembly

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: assembly
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "harbor.linux.com/rbd/rbd-provisioner:latest"
        imagePullPolicy: IfNotPresent
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      imagePullSecrets: 
        - name: k8s-harbor-login
      serviceAccount: rbd-provisioner
      nodeSelector:             ## node selector: schedule only on nodes carrying this label
        rbd: "true"
# kubectl apply -f external-storage-rbd-provisioner.yaml
serviceaccount/rbd-provisioner created
clusterrole.rbac.authorization.k8s.io/rbd-provisioner unchanged
clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner configured
role.rbac.authorization.k8s.io/rbd-provisioner created
rolebinding.rbac.authorization.k8s.io/rbd-provisioner created
deployment.apps/rbd-provisioner created
# kubectl get pods -n assembly
NAME                              READY   STATUS    RESTARTS   AGE
rbd-provisioner-9cf46c856-6qzll   1/1     Running   0          71s
           

Deploy Elasticsearch

Create elasticsearch-svc.yaml
This defines a Service named elasticsearch with the selector app=elasticsearch. When we later associate the Elasticsearch StatefulSet with this Service, the Service returns DNS A records for the Elasticsearch Pods labeled app=elasticsearch. Setting clusterIP: None makes it a headless service. Finally, we define ports 9200 and 9300, used for the REST API and for inter-node communication respectively.
# cat elasticsearch-svc.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   elasticsearch-svc.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: assembly
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
# kubectl apply -f elasticsearch-svc.yaml
service/elasticsearch created
The Pods now have a headless service and a stable domain name, elasticsearch.assembly.svc.cluster.local. Next we create the actual Elasticsearch Pods through a StatefulSet.
A Kubernetes StatefulSet assigns Pods a stable identity and persistent storage. Elasticsearch needs stable storage so that its data survives rescheduling and restarts, which is why the Pods are managed by a StatefulSet.

建立動态pv
# cat elasticsearch-pvc.yaml 
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-18
#FileName:                   elasticsearch-pvc.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-elasticsearch
  namespace: assembly
  labels:
    app: elasticsearch
spec:
  storageClassName: ceph-efk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
# kubectl apply -f elasticsearch-pvc.yaml
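Once applied, the rbd-provisioner should bind the claim within a few seconds; a quick check (illustrative output, volume name elided):

# kubectl get pvc -n assembly
NAME                 STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-elasticsearch   Bound    pvc-...   10Gi       RWO            ceph-efk       10s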
# cat elasticsearch-statefulset.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   elasticsearch-statefulset.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: assembly
spec:
  serviceName: elasticsearch
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      imagePullSecrets: 
        - name: k8s-harbor-login
      containers:
      - name: elasticsearch
        image: harbor.linux.com/efk/elasticsearch-oss:6.4.3
        resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
          - name: cluster.name
            value: k8s-logs
          - name: node.name
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
         # - name: discovery.zen.ping.unicast.hosts
         #   value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
         # - name: discovery.zen.minimum_master_nodes
         #   value: "2"
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: ceph-elasticsearch 
      nodeSelector:             ## node selector: schedule only on nodes carrying this label
        elasticsearch: "true"  
Label the node
# kubectl label nodes 20.0.0.204 elasticsearch=true
node/20.0.0.204 labeled
# kubectl apply -f elasticsearch-statefulset.yaml
# kubectl get pods -n assembly
NAME                              READY   STATUS    RESTARTS   AGE
es-cluster-0                      1/1     Running   0          2m15s
rbd-provisioner-9cf46c856-6qzll   1/1     Running   0          37m

Once the Pod is deployed, we can check whether the Elasticsearch cluster is working properly by issuing a request against its REST API. Use the following command to forward local port 9200 to the corresponding port of the Elasticsearch node (es-cluster-0):
# kubectl port-forward es-cluster-0 9200:9200 --namespace=assembly
Forwarding from 127.0.0.1:9200 -> 9200
#  curl http://localhost:9200/_cluster/state?pretty
{
  "cluster_name" : "k8s-logs",
  "compressed_size_in_bytes" : 234,
  "cluster_uuid" : "PopKT5FLROqyBYlRvvr7kw",
  "version" : 2,
  "state_uuid" : "ubOKSevGRVe4iR5JXODjDA",
  "master_node" : "vub5ot69Thu8igd4qeiZBg",
  "blocks" : { },
  "nodes" : {
    "vub5ot69Thu8igd4qeiZBg" : {
      "name" : "es-cluster-0",
      "ephemeral_id" : "9JjNmdOyRomyYsHAO1IQ5Q",
      "transport_address" : "172.20.46.85:9300",
      "attributes" : { }
    }
  },
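The cluster state output above is truncated; a more compact check is the _cluster/health endpoint (with the port-forward still running). On a single node with no indices yet, green or yellow both mean the node is serving:

# curl http://localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "k8s-logs",
  "status" : "green",
  ...
}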
           

Deploy Kibana

The Elasticsearch cluster is up, so next we deploy the Kibana service. Create a file named kibana.yaml.

# cat kibana.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   kibana.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: assembly
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  type: NodePort
  selector:
    app: kibana

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: assembly
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      imagePullSecrets: 
        - name: k8s-harbor-login
      containers:
      - name: kibana
        image: harbor.linux.com/efk/kibana-oss:6.4.3
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
      nodeSelector:             ## node selector: schedule only on nodes carrying this label
        kibana: "true"  
Label the node
# kubectl label nodes 20.0.0.204 kibana=true
node/20.0.0.204 labeled
# kubectl apply -f kibana.yaml
service/kibana created
deployment.apps/kibana created
# kubectl get pods -n assembly
NAME                              READY   STATUS    RESTARTS   AGE
es-cluster-0                      1/1     Running   0          8m4s
kibana-598987f498-k8ff9           1/1     Running   0          70s
rbd-provisioner-9cf46c856-6qzll   1/1     Running   0          43m
This manifest defines two resources, a Service and a Deployment. For convenience during testing the Service type is set to NodePort. The Kibana Pod configuration is straightforward; the only thing to note is the ELASTICSEARCH_URL environment variable, which sets the endpoint and port of the Elasticsearch cluster. We can simply use Kubernetes DNS here: the endpoint corresponds to the Service named elasticsearch, and since that is a headless service, the domain resolves to the list of Elasticsearch Pod IP addresses.
# kubectl get svc --namespace=assembly
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None            <none>        9200/TCP,9300/TCP   50m
kibana          NodePort    10.68.123.234   <none>        5601:22693/TCP      2m22s
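Since the Service is of type NodePort, Kibana is in fact already reachable on any node IP at the allocated port (22693 in the output above) even without a proxy, e.g.:

# curl -I http://20.0.0.204:22693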
           

Proxy Kibana

Here I route Kibana through the Traefik proxy.

# cat kibana-ingreeroute.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   kibana-ingreeroute.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: kibana
  namespace: assembly
spec:
  entryPoints:
    - web
  routes:
  - match: Host(`kibana.linux.com`)
    kind: Rule
    services:
    - name: kibana
      port: 5601
# kubectl apply -f kibana-ingreeroute.yaml
ingressroute.traefik.containo.us/kibana created
           
[Screenshot: the kibana route visible in the Traefik dashboard]

Traefik is proxying the route successfully; resolve the hostname via the local machine's hosts file.
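A sketch of the hosts entry; <node-ip> stands for the IP of a node where the Traefik web entrypoint listens:

# echo "<node-ip> kibana.linux.com" >> /etc/hosts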

[Screenshot: Kibana web UI reachable at kibana.linux.com]

Web access works!

Deploy Fluentd

Fluentd is an efficient log aggregator, written in Ruby, that scales well. For most enterprises Fluentd is efficient enough and relatively light on resources. Fluent Bit is an even lighter alternative that consumes fewer resources, but its plugin ecosystem is smaller than Fluentd's. Overall Fluentd is more mature and more widely used, so it is the log collector used here.

How it works

Fluentd scrapes log data from a given set of sources, processes it (converting it into a structured data format), and then forwards it to other services such as Elasticsearch, object storage, and so on. Fluentd supports more than 300 log storage and analysis services, so it is very flexible in this respect. The main steps are:

1. Fluentd fetches data from multiple log sources;

2. structures and tags the data;

3. then sends the data to the matching target services according to those tags.

Fluentd topology

[Figure: Fluentd data collection and forwarding topology]

Configuration

Fluentd is told how to collect and process data through a configuration file.

Log source configuration

For example, to collect all the container logs on the Kubernetes nodes, we need a log source configuration like the following:

<source>
  @id fluentd-containers.log
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag raw.kubernetes.*
  format json
  read_from_head true
</source>
           

Some of the parameters in the configuration above:

  • id: a unique identifier for this log source, which can be used to further filter and route the structured log data.
  • type: a built-in Fluentd input plugin. tail means Fluentd keeps reading new data via tail from the position it last read; another option is http, which collects data through GET requests.
  • path: a parameter specific to the tail type; it tells Fluentd to collect all logs under /var/log/containers, the directory Docker uses on Kubernetes nodes to store the stdout log data of running containers.
  • pos_file: a checkpoint; if the Fluentd process restarts, it uses the position recorded in this file to resume log collection.
  • tag: a custom string used to match the log source against destinations or filters; Fluentd routes log data by matching source and destination tags.

Routing configuration

That was the log source configuration; next, let's look at how the log data is forwarded to Elasticsearch:

<match **>
  @id elasticsearch
  @type elasticsearch
  @log_level info
  include_tag_key true
  type_name fluentd
  host "#{ENV['OUTPUT_HOST']}"
  port "#{ENV['OUTPUT_PORT']}"
  logstash_format true
  <buffer>
    @type file
    path /var/log/fluentd-buffers/kubernetes.system.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 2
    flush_interval 5s
    retry_forever
    retry_max_interval 30
    chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
    queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
    overflow_action block
  </buffer>
</match>
           
  • match: identifies a destination; it is followed by a pattern matching the tags of log sources. Here we want to capture all logs and send them to Elasticsearch, so it is set to **.
  • id: a unique identifier for the destination.
  • type: the identifier of the output plugin; we are writing to Elasticsearch, so this is set to elasticsearch, one of Fluentd's built-in plugins.
  • log_level: the log level to capture; info means logs of that level and above (INFO, WARNING, ERROR) are routed to Elasticsearch.
  • host/port: the address of Elasticsearch; authentication information could also be configured here, but our Elasticsearch requires none, so host and port are enough.
  • logstash_format: Elasticsearch builds inverted indexes over the log data for searching; with logstash_format set to true, Fluentd forwards structured log data in logstash format.
  • buffer: lets Fluentd buffer data while the destination is unavailable, for example when the network fails or Elasticsearch is down; the buffer configuration also helps to reduce disk I/O.
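Because logstash_format is enabled, Fluentd writes to daily indices named logstash-YYYY.MM.DD. Once the DaemonSet below is running, this can be confirmed with the standard _cat API (with the earlier port-forward to es-cluster-0 still active; output illustrative):

# curl http://localhost:9200/_cat/indices?v
health status index               ...
yellow open   logstash-2020.03.18 ...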

To collect the logs of the whole Kubernetes cluster, we deploy Fluentd directly with a DaemonSet controller so that it can gather logs from every Kubernetes node, guaranteeing that exactly one Fluentd container is always running on each node in the cluster.

First, specify the Fluentd configuration through a ConfigMap object by creating the fluentd-configmap.yaml file.

# cat fluentd-configmap.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   fluentd-configmap.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-config
  namespace: assembly
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>
  containers.input.conf: |-
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      localtime
      tag raw.kubernetes.*
      format json
      read_from_head true
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
  system.input.conf: |-
    # Logs from systemd-journal for interesting services.
    <source>
      @id journald-docker
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
      </storage>
      read_from_head true
      tag docker
    </source>
    <source>
      @id journald-kubelet
      @type systemd
      filters [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
      </storage>
      read_from_head true
      tag kubelet
    </source>
  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @type forward
    </source>
  output.conf: |-
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      host elasticsearch
      port 9200
      logstash_format true
      request_timeout    30s
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>
# kubectl apply -f fluentd-configmap.yaml
configmap/fluentd-config created
           

The configuration above collects the Docker container log directory as well as the logs of the docker and kubelet units; after processing, the collected data is sent to the elasticsearch:9200 service.

Then create a file named fluentd-daemonset.yaml.

# cat fluentd-daemonset.yaml
##########################################################################
#Author:                     zisefeizhu
#QQ:                         2********0
#Date:                       2020-03-13
#FileName:                   fluentd-daemonset.yaml
#URL:                        https://www.cnblogs.com/zisefeizhu/
#Description:                The test script
#Copyright (C):              2020 All rights reserved
###########################################################################
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: assembly
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: assembly
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: assembly
  labels:
    k8s-app: fluentd-es
    version: v2.0.4
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.0.4
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v2.0.4
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: fluentd-es
      imagePullSecrets: 
        - name: k8s-harbor-login
      containers:
      - name: fluentd-es
        image: harbor.linux.com/efk/fluentd-elasticsearch:v2.0.4
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /data/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-config
      nodeSelector:             ## node selector: schedule only on nodes carrying this label
        fluentd: "true" 
Label the nodes
# kubectl apply -f fluentd-daemonset.yaml
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es created
# kubectl label nodes 20.0.0.204 fluentd=true
node/20.0.0.204 labeled
# kubectl label nodes 20.0.0.205 fluentd=true
node/20.0.0.205 labeled
# kubectl label nodes 20.0.0.206 fluentd=true
node/20.0.0.206 labeled
# kubectl get pods -n assembly -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
es-cluster-0                      1/1     Running   0          30m     172.20.46.85    20.0.0.204   <none>           <none>
fluentd-es-5fgt7                  1/1     Running   0          5m36s   172.20.46.87    20.0.0.204   <none>           <none>
fluentd-es-l22nj                  1/1     Running   0          5m22s   172.20.145.9    20.0.0.205   <none>           <none>
fluentd-es-pnqk8                  1/1     Running   0          5m18s   172.20.208.29   20.0.0.206   <none>           <none>
kibana-598987f498-k8ff9           1/1     Running   0          23m     172.20.46.86    20.0.0.204   <none>           <none>
rbd-provisioner-9cf46c856-6qzll   1/1     Running   0          65m     172.20.46.84    20.0.0.204   <none>           <none>
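To confirm logs are flowing, the tail of a collector's own log should show connections to Elasticsearch rather than errors (pod name taken from the listing above):

# kubectl logs -n assembly fluentd-es-5fgt7 --tail=20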
           
[Screenshot: Kibana index pattern creation page]

The Fluentd configuration we used ships logs in logstash format, so here we only need to type logstash-* into the text box to match all the log data in Elasticsearch. Then click Next step, which leads to the following page:

[Screenshot: Kibana index pattern, step 2 - configuring the time filter field]

On this page, configure which field Kibana uses to filter log data by time: select the @timestamp field from the dropdown list, then click Create index pattern. Once the pattern is created, click Discover in the left-hand navigation menu, and you will see histograms and the most recently collected log data.

[Screenshots: Kibana Discover page with the histogram and recent log entries]

This completes the EFK deployment.

Enable the pool application

As with the webapp pool at the start, the rbd application must be enabled on the new assembly pool so that Ceph returns to HEALTH_OK:

# ceph osd pool application enable assembly rbd
enabled application 'rbd' on pool 'assembly'
           

Hands-on work is like climbing a mountain: each step, a higher sky.