
Kubernetes Common Controllers: DaemonSet

1. Introduction

A DaemonSet ensures that a copy of a Pod runs on every Node. When a Node is added to the cluster, the Pod is started on the new Node as well; when the DaemonSet is deleted, the Pods it created are cleaned up. DaemonSets are commonly used to deploy cluster-wide applications such as log collection and monitoring agents.

Common use cases include:

1. Running a cluster storage daemon on every node, such as ceph or glusterd;

2. Running a log collection daemon on every node, such as logstash or fluentd;

3. Running a monitoring daemon on every node, such as Prometheus Node Exporter, collectd, New Relic agent, or Ganglia gmond.
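For the monitoring case, a minimal DaemonSet manifest might look like the sketch below (the name, namespace, labels, and image tag are illustrative assumptions, not taken from this article):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter        # illustrative name
  namespace: monitoring      # illustrative namespace
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.3.1   # illustrative image tag
        ports:
        - containerPort: 9100              # node-exporter's default metrics port
```

Because the controller targets every Node, no replica count is specified: one Pod per Node is implied.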

2. Scheduling Policy

Normally, which node a Pod lands on is decided by the Kubernetes scheduler. For Pods created by a DaemonSet, however, the target node is already determined when the Pod is created, so the scheduler is bypassed. Consequently:

  • the DaemonSet controller ignores the unschedulable field of a Node;
  • the DaemonSet controller can create Pods even when the scheduler has not been started.

(Note: since Kubernetes 1.12, DaemonSet Pods are scheduled by the default scheduler via node affinity; the behavior described here applies to earlier versions.)

However, you can direct the Pods to specific Nodes using:

  • nodeSelector: schedule only onto Nodes whose labels match;
  • nodeAffinity: a more expressive Node selector that supports, for example, set-based operators;
  • podAffinity: schedule onto Nodes that already run Pods matching the given conditions.

2.1 nodeSelector

The idea is to label the Nodes the Pods should run on. For example, to run only on nodes with SSD disks, give those nodes a label:

kubectl label nodes node-01 disktype=ssd           


Then define nodeSelector as disktype=ssd in the DaemonSet's Pod template:

spec:
  nodeSelector:
    disktype: ssd           

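For context, this nodeSelector fragment sits inside the Pod template of the DaemonSet. A hedged sketch of the full placement (the DaemonSet name and labels are illustrative; the image reuses the pause image from the affinity examples in this article):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor              # illustrative name
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disktype: ssd            # Pods run only on nodes labeled disktype=ssd
      containers:
      - name: main
        image: gcr.io/google_containers/pause:2.0
```

Nodes without the disktype=ssd label simply get no Pod from this DaemonSet.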

2.2 nodeAffinity

nodeAffinity currently supports two forms: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution, expressing a hard requirement and a soft preference respectively. For example, the manifest below schedules the Pod onto a Node carrying the label kubernetes.io/e2e-az-name with value e2e-az1 or e2e-az2, and prefers Nodes that additionally carry the label another-node-label-key=another-node-label-value.

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1
            - e2e-az2
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
  containers:
  - name: with-node-affinity
    image: gcr.io/google_containers/pause:2.0           


2.3 podAffinity

podAffinity selects Nodes based on the labels of Pods already running on them, scheduling only onto Nodes whose Pods satisfy the given conditions; both podAffinity and podAntiAffinity are supported. The semantics can be confusing, so consider the example below:

  • the Pod may be scheduled onto a Node if that Node's Zone contains at least one running Pod labeled security=S1;
  • the Pod is not scheduled onto a Node that itself runs at least one Pod labeled security=S2.

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S2
          topologyKey: kubernetes.io/hostname
  containers:
  - name: with-pod-affinity
    image: gcr.io/google_containers/pause:2.0           


3. Updates

DaemonSet supports rolling updates, defined via the updateStrategy field, which can be inspected with kubectl explain ds.spec.updateStrategy:

[root@master ~]# kubectl explain ds.spec.updateStrategy
KIND:     DaemonSet
VERSION:  extensions/v1beta1

RESOURCE: updateStrategy <Object>

DESCRIPTION:
     An update strategy to replace existing DaemonSet pods with new pods.

FIELDS:
   rollingUpdate    <Object>
     Rolling update config params. Present only if type = "RollingUpdate".

   type    <string>
     Type of daemon set update. Can be "RollingUpdate" or "OnDelete". Default is
     OnDelete.           


Its rollingUpdate field has only maxUnavailable and no maxSurge, because a DaemonSet runs at most one Pod per node.

A DaemonSet supports two update strategies:

  • RollingUpdate: replace Pods in a rolling fashion;
  • OnDelete: update a Pod only after the old one is manually deleted. This is the default in the legacy extensions/v1beta1 API shown above; in apps/v1 the default is RollingUpdate.
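Putting this together, a rolling update that allows at most one Pod to be unavailable at a time could be declared as in the sketch below (assuming the apps/v1 API; only the strategy fields matter here, the rest mirrors the filebeat example in the next section):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
spec:
  updateStrategy:
    type: RollingUpdate          # replace Pods node by node
    rollingUpdate:
      maxUnavailable: 1          # at most one DaemonSet Pod down during the update
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
```

A rollout is triggered by changing the Pod template (for example the image tag); kubectl rollout status ds/filebeat-ds then reports its progress.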

4. Example

The example below defines a log-collecting DaemonSet that runs filebeat to collect logs and ship them to redis:

# vim filebeat-ds.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: cachedb
  template:
    metadata:
      labels:
        app: redis
        role: cachedb
    spec:
      containers:
      - name: redis
        image: redis:5.0.5-alpine
        ports:
        - name: redis
          containerPort: 6379

---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: redis
    role: cachedb
  ports:
  - port: 6379

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat-ds
  namespace: default
spec:
  selector:
    matchLabels:
      app: filebeat
      role: logstorage
  template:
    metadata:
      labels:
        app: filebeat
        role: logstorage
    spec:
      containers:
      - name: filebeat
        image: ikubernetes/filebeat:5.6.5-alpine
        env:
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local           


Create the resources from this YAML file:

# kubectl apply -f filebeat-ds.yaml           


然後檢視svc,pod的狀态

[root@master daemonset]# kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.68.0.1      <none>        443/TCP          4d6h
redis           ClusterIP   10.68.213.73   <none>        6379/TCP         5s

[root@master daemonset]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
filebeat-ds-pgmzt        1/1     Running   0          2m50s
filebeat-ds-wx44z        1/1     Running   0          2m50s
filebeat-ds-zjv68        1/1     Running   0          2m50s
redis-85c7ccb675-ks4rc   1/1     Running   0          4m2s           


Next, exec into one of the filebeat containers and generate some test data:

# kubectl exec -it filebeat-ds-pgmzt -- /bin/bash
# cd /var/log/containers/
# echo "123" > a.log           


Then exec into the redis container to check the data:

# kubectl exec -it redis-85c7ccb675-ks4rc -- /bin/sh
/data # redis-cli -h redis.default.svc.cluster.local -p 6379
redis.default.svc.cluster.local:6379> KEYS *
1) "filebeat"
redis.default.svc.cluster.local:6379>           


As shown above, Redis is receiving the records shipped by filebeat.