
K8S+DevOps架構師實戰課 | Kubernetes Integration with Distributed Storage

Author: 熱愛程式設計的通信人

Video source: Bilibili, 《Docker&k8s教程天花闆,絕對是B站講的最好的,這一套學會k8s搞定Docker 全部核心知識都在這裡》

These are the instructor's course contents and my lab notes, organized as I study and shared with everyone. Content will be removed immediately upon any copyright complaint. Thanks for your support!

Summary post: K8S+DevOps架構師實戰課 | 彙總_熱愛程式設計的通信人的部落格-CSDN部落格

Quick Introduction to PV and PVC

The point of storage in k8s is to ensure that data survives Pod rebuilds. The simplest data-persistence options are listed below:

  • emptyDir
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: webserver
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  - image: k8s.gcr.io/test-redis
    name: redis
    volumeMounts:
    - mountPath: /data
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}

Data on a volume shared by the containers of a Pod lives only as long as the Pod: when the Pod is destroyed the data is lost, but when a container inside the Pod is automatically restarted the data survives.
  • hostPath
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pod
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory           

hostPath is usually used together with nodeSelector, so the Pod is always scheduled onto the node that actually holds the hostPath data.
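
A minimal sketch of combining hostPath with nodeSelector (the hostname value k8s-slave1 is an assumption; substitute the node that holds /data):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-slave1   # assumed node name; pins the Pod to the node with /data
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pod
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data
      type: Directory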

  • NFS storage
...
  volumes:
  - name: redisdata             # volume name
    nfs:                        # use an NFS network storage volume
      server: 192.168.31.241    # NFS server address
      path: /data/redis         # directory shared by the NFS server
      readOnly: false           # whether the volume is read-only
...           

Volumes come in many types (see https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes ), each backed by a different storage implementation. To hide the details of the storage backend and make storage usage in Pods simpler and more uniform, k8s introduces two resource types: PV and PVC.

A PersistentVolume (PV) is an abstraction over the underlying storage. It is tied to the implementation of the underlying shared-storage technology; backends such as Ceph, GlusterFS and NFS are integrated through a plugin mechanism. For example, a PV backed by NFS storage:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity: 
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/k8s
    server: 172.21.51.55           
  • capacity: the storage capacity. Currently only the amount of storage space can be set (storage=1Gi here); metrics such as IOPS and throughput may become configurable in the future.
  • accessModes: the access mode, describing how a user application may access the storage resource. The possible modes are:
    • ReadWriteOnce (RWO): read-write, but the volume can only be mounted by a single node
    • ReadOnlyMany (ROX): read-only, can be mounted by multiple nodes
    • ReadWriteMany (RWX): read-write, can be mounted by multiple nodes
  • persistentVolumeReclaimPolicy: the reclaim policy of the PV. Currently only the NFS and HostPath types support reclaim policies:
    • Retain: keep the data; an administrator must clean it up manually
    • Recycle: wipe the data in the PV, equivalent to running rm -rf /thevolume/*
    • Delete: the backend storage behind the PV deletes the volume; this is typical of cloud providers' storage services such as AWS EBS
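
The reclaim policy of an existing PV can also be changed with kubectl patch; a sketch using the nfs-pv object defined above (Recycle is chosen purely for illustration):

$ kubectl patch pv nfs-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'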

Because a PV connects directly to the underlying storage, it provides storage resources to Pods in the same way that cluster Nodes provide compute resources (CPU and memory). A PV is therefore not a namespaced resource; it is available at the cluster level. For a Pod to use a PV, a PVC must be created and mounted into the Pod.
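
This scoping difference can be confirmed from kubectl itself; a quick check (output columns omitted):

$ kubectl api-resources --namespaced=false | grep persistentvolumes        # PV is cluster-scoped
$ kubectl api-resources --namespaced=true  | grep persistentvolumeclaims   # PVC is namespaced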

PVC stands for PersistentVolumeClaim. A PVC is a user's claim for storage; once created it can be bound one-to-one to a PV. Users who actually consume storage do not need to care about the underlying storage implementation; they simply use the PVC.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi           

The Pod then uses it as follows:

...
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:                        # mount this path in the container onto the PVC (NFS) directory
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:              # reference the PVC by name
          claimName: pvc-nfs
...           

PV and PVC in Practice: Managing NFS Volumes

Environment preparation

NFS server: 172.21.51.55

$ yum -y install nfs-utils rpcbind

# shared directory
$ mkdir -p /data/k8s && chmod 755 /data/k8s

$ echo '/data/k8s  *(insecure,rw,sync,no_root_squash)'>>/etc/exports

$ systemctl enable rpcbind && systemctl start rpcbind
$ systemctl enable nfs && systemctl start nfs           

Clients: the slave nodes of the k8s cluster

$ yum -y install nfs-utils rpcbind
$ mkdir /nfsdata
$ mount -t nfs 172.21.51.55:/data/k8s /nfsdata           
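
To verify the export and the mount from a client node, the following checks can be used (a sketch; exact output depends on your environment):

$ showmount -e 172.21.51.55      # the export list should contain /data/k8s
$ df -h /nfsdata                 # the mount point should show 172.21.51.55:/data/k8s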

PV and PVC demo

$ cat pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity: 
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/k8s/nginx
    server: 172.21.51.55

$ kubectl create -f pv-nfs.yaml

$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS  
nfs-pv   1Gi        RWX            Retain           Available

During its lifecycle, a PV can be in one of four phases:

  • Available: the PV is free and not yet bound to any PVC
  • Bound: the PV has been bound to a PVC
  • Released: the PVC was deleted, but the resource has not yet been reclaimed by the cluster
  • Failed: automatic reclamation of the PV failed
$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

$ kubectl create -f pvc.yaml
$ kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs   Bound    nfs-pv   1Gi        RWX                           3s
$ kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             
nfs-pv   1Gi        RWX            Retain           Bound    default/pvc-nfs

# For binding to succeed, the access modes must match, the storage size requested by the PVC must be no larger than the PV's capacity, and the storageClassName fields of the PV and PVC must be identical.

# The PersistentVolumeController continuously loops over every PVC to check whether it is already Bound. If not, it walks through all available PVs and tries to bind one to the unbound PVC; in this way Kubernetes guarantees that every PVC a user submits is bound as soon as a suitable PV appears. "Binding" a PV to a PVC simply means writing the PV object's name into the PVC object's spec.volumeName field.

# check the NFS data directory
$ ls /nfsdata           
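
Since binding is recorded in spec.volumeName, the bound PV can be read straight off the PVC; a small check (the expected output is nfs-pv, matching the kubectl get pvc output above):

$ kubectl get pvc pvc-nfs -o jsonpath='{.spec.volumeName}'
nfs-pv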

Create a Pod that mounts the PVC

$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-pvc
spec:
  replicas: 1
  selector:        # Pod selector for this Deployment
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:                        # mount this path in the container onto the PVC (NFS) directory
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:              # reference the PVC by name
          claimName: pvc-nfs
          
          
$ kubectl create -f deployment.yaml

# check the /usr/share/nginx/html directory inside the container
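
A hedged verification sketch: write a test page through NFS on the server and read it back from the container. The directory /data/k8s/nginx is the path declared in pv-nfs.yaml and must exist on the server; <pod-name> is the Pod created by the nfs-pvc Deployment:

# on the NFS server
$ mkdir -p /data/k8s/nginx && echo 'hello nfs' > /data/k8s/nginx/index.html

# from a machine with kubectl access
$ kubectl get pods -l app=nginx
$ kubectl exec <pod-name> -- cat /usr/share/nginx/html/index.html
hello nfs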

storageClass實作動态挂載

建立pv及pvc過程是手動,且pv與pvc一一對應,手動建立很繁瑣。是以,通過storageClass + provisioner的方式來實作通過PVC自動建立并綁定PV。


Deployment manifests: https://github.com/kubernetes-retired/external-storage

provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: luffy.com/nfs
            - name: NFS_SERVER
              value: 172.21.51.55
            - name: NFS_PATH  
              value: /data/k8s
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.21.51.55
            path: /data/k8s           

rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
  namespace: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
  namespace: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
  namespace: nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io           

storage-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # mark this as the default StorageClass
  name: nfs
provisioner: luffy.com/nfs
parameters:
  archiveOnDelete: "true"           
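
After applying it, the default marker can be checked with kubectl (a sketch; remaining output columns abbreviated):

$ kubectl create -f storage-class.yaml
$ kubectl get storageclass
NAME            PROVISIONER     ...
nfs (default)   luffy.com/nfs   ...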

pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: nfs           
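
A sketch of exercising the dynamic path end to end (the generated PV name and NFS sub-directory contain a random UID, shown here as placeholders):

$ kubectl create -f pvc.yaml
$ kubectl get pvc test-claim          # STATUS should become Bound once the provisioner reacts
$ kubectl get pv                      # a PV named pvc-<uid> is created automatically
$ ls /data/k8s                        # on the NFS server: a directory like default-test-claim-pvc-<uid> appears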

對接Ceph存儲實踐

ceph的安裝及使用參考 http://docs.ceph.org.cn/start/intro/

單點快速安裝: https://blog.csdn.net/h106140873/article/details/90201379

# CephFS needs two pools, one for data and one for metadata
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_meta 128
ceph osd lspools

# create a CephFS filesystem
ceph fs new cephfs cephfs_meta cephfs_data

# list the filesystems
ceph fs ls

# ceph auth list    # locate the key of client.admin
client.admin
        key: AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==
# mount CephFS on a client
$ mount -t ceph 172.21.51.55:6789:/ /mnt/cephfs -o name=admin,secret=AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==           

storageClass實作動态挂載

建立pv及pvc過程是手動,且pv與pvc一一對應,手動建立很繁瑣。是以,通過storageClass + provisioner的方式來實作通過PVC自動建立并綁定PV。


For example, for CephFS a StorageClass of the following kind can be created:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
    monitors: 172.21.51.55:6789
    adminId: admin
    adminSecretName: ceph-admin-secret
    adminSecretNamespace: "kube-system"
    claimRoot: /volumes/kubernetes           

NFS, ceph-rbd and CephFS each have a corresponding provisioner.

Deploy the cephfs-provisioner

$ cat external-storage-cephfs-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["create", "get", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
  - kind: ServiceAccount
    name: cephfs-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io



---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        imagePullPolicy: IfNotPresent
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
        - "-disable-ceph-namespace-isolation=true"
      serviceAccount: cephfs-provisioner           

On the Ceph monitor node, look up the key of the admin account:

$ ceph auth list
$ ceph auth get-key client.admin
AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==           

Create the secret

$ echo -n AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==|base64
QVFCUFRzdGdjMDc4TkJBQTc4RDEvS0FCZ2xJWkhLaDcrRzJYOHc9PQ==
$ cat ceph-admin-secret.yaml
apiVersion: v1
data:
  key: QVFCUFRzdGdjMDc4TkJBQTc4RDEvS0FCZ2xJWkhLaDcrRzJYOHc9PQ==
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: Opaque           
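
Equivalently, the same Secret can be created directly with kubectl, which base64-encodes the value for you (a sketch using the key retrieved above):

$ kubectl -n kube-system create secret generic ceph-admin-secret \
    --type=Opaque \
    --from-literal=key=AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==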

Create the StorageClass

$ cat cephfs-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
    monitors: 172.21.51.55:6789
    adminId: admin
    adminSecretName: ceph-admin-secret
    adminSecretNamespace: "kube-system"
    claimRoot: /volumes/kubernetes           
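
Apply the three manifests and check that the provisioner Pod is running (a sketch; the file names are the ones shown above):

$ kubectl create -f external-storage-cephfs-provisioner.yaml
$ kubectl create -f ceph-admin-secret.yaml
$ kubectl create -f cephfs-storage-class.yaml
$ kubectl -n kube-system get pods -l app=cephfs-provisioner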

動态pvc驗證及實作分析

使用流程: 建立pvc,指定storageclass和存儲大小,即可實作動态存儲。

建立pvc測試自動生成pv

$ cat cephfs-pvc-test.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-claim
spec:
  accessModes:     
    - ReadWriteOnce
  storageClassName: dynamic-cephfs
  resources:
    requests:
      storage: 2Gi

$ kubectl create -f cephfs-pvc-test.yaml

$ kubectl get pv
pvc-2abe427e-7568-442d-939f-2c273695c3db   2Gi        RWO            Delete           Bound      default/cephfs-claim   dynamic-cephfs            1s           

Create a Pod that uses the PVC to mount the CephFS data volume

$ cat test-pvc-cephfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    name: nginx-pod
spec:
  containers:
  - name: nginx-pod
    image: nginx:alpine
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: cephfs
      mountPath: /usr/share/nginx/html
  volumes:
  - name: cephfs
    persistentVolumeClaim:
      claimName: cephfs-claim
      
$ kubectl create -f test-pvc-cephfs.yaml           
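
A quick verification sketch once the Pod is up (writing through the Pod and reading the file back; the paths are the ones declared above):

$ kubectl get pod nginx-pod
$ kubectl exec nginx-pod -- sh -c 'echo hello cephfs > /usr/share/nginx/html/index.html'
$ kubectl exec nginx-pod -- cat /usr/share/nginx/html/index.html
hello cephfs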

What we call container persistence should really be understood as persistence of volumes on the host: because a Pod can be destroyed and rebuilt at any time, data can only be persisted through host volumes, which are then mounted into the Pod.

For the host-side volume to survive Pod migration across nodes, the data is usually kept in distributed storage, with the host mounting the remote backend (NFS, Ceph, OSS); then even if the Pod drifts to another node the data is unaffected.

The mount path of a Pod's volume on a k8s node usually has the form:

/var/lib/kubelet/pods/<Pod的ID>/volumes/kubernetes.io~<Volume類型>/<Volume名字>           

Check nginx-pod's mounted volume:

$ df -TH
/var/lib/kubelet/pods/61ba43c5-d2e9-4274-ac8c-008854e4fa8e/volumes/kubernetes.io~cephfs/pvc-2abe427e-7568-442d-939f-2c273695c3db/

$ findmnt /var/lib/kubelet/pods/61ba43c5-d2e9-4274-ac8c-008854e4fa8e/volumes/kubernetes.io~cephfs/pvc-2abe427e-7568-442d-939f-2c273695c3db/

172.21.51.55:6789:/volumes/kubernetes/kubernetes/kubernetes-dynamic-pvc-ffe3d84d-c433-11ea-b347-6acc3cf3c15f           
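
The same CephFS path is recorded on the dynamically created PV object, so the mount can be mapped back to the PV spec; a sketch using the PV name from the output above:

$ kubectl get pv pvc-2abe427e-7568-442d-939f-2c273695c3db -o jsonpath='{.spec.cephfs.path}'
/volumes/kubernetes/kubernetes/kubernetes-dynamic-pvc-ffe3d84d-c433-11ea-b347-6acc3cf3c15f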
