Dynamically Provisioning Kubernetes Backend Storage Volumes with NFS

Contents

  • Dynamically Provisioning Kubernetes Backend Storage Volumes with NFS
    • Installing NFS
    • Using NFS as a storage backend in Kubernetes

Dynamically Provisioning Kubernetes Backend Storage Volumes with NFS

This article shows how to use nfs-client-provisioner with an NFS server as the persistent-storage backend for Kubernetes, dynamically provisioning PVs.

Installing NFS

I set up a single NFS server here, which is fairly simple. The other nodes only need nfs-utils installed.

yum -y install nfs-utils rpcbind
find /etc/ -name '*rpcbind.socket*'
vim /etc/systemd/system/sockets.target.wants/rpcbind.socket  # path found by the command above
# Check that the file matches the following:
[Unit]
Description=RPCbind Server Activation Socket
[Socket]
ListenStream=/var/run/rpcbind.sock
# RPC netconfig can't handle ipv6/ipv4 dual sockets
BindIPv6Only=ipv6-only
ListenStream=0.0.0.0:111
ListenDatagram=0.0.0.0:111
#ListenStream=[::]:111
#ListenDatagram=[::]:111
[Install]
WantedBy=sockets.target
           

Enable the services at boot and start them:

systemctl enable rpcbind.service && systemctl start rpcbind.service
systemctl enable nfs.service && systemctl start nfs.service
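
A quick sanity check that both services actually came up (assuming a systemd host, as in the unit file above):

```shell
# Both should print "active"
systemctl is-active rpcbind nfs
# The NFS program should be registered with rpcbind on port 2049
rpcinfo -p localhost | grep -w nfs
```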
           

Configure the shared directory:

# Create the shared directory (any path you like)
mkdir -p /usr/share/k8s
# Set directory permissions as needed; note that directories need the
# execute bit to be traversable, so use 777 rather than 666 here
chmod -R 777 /usr/share/k8s
# Edit the export settings
vi /etc/exports
/usr/share/k8s *(insecure,rw,no_root_squash)
systemctl restart nfs
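
Instead of restarting the whole NFS service after every edit to /etc/exports, the export table can be reloaded and inspected in place. A minimal sketch, run on the NFS server (paths match the example above):

```shell
# Re-export everything in /etc/exports without restarting nfsd
exportfs -ra
# Verify the export is active and shows the expected options (rw, no_root_squash, ...)
exportfs -v
```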
           

Test that the NFS service works:

Pick another host for the test. That host also needs nfs-utils; if it is not installed, run:

# Install nfs-utils for testing
yum -y install nfs-utils rpcbind
# List the shares on the NFS host
showmount -e 192.168.161.180
Export list for 192.168.161.180:
/usr/share/k8s *

# Try mounting (NFS export path first, then the local mount point)
mount -t nfs 192.168.161.180:/usr/share/k8s /usr/share/k8s

# Check whether the mount succeeded
df -Th
           

Using NFS as a storage backend in Kubernetes

1. Configure RBAC:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
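
Assuming the manifest above is saved as nfs-rbac.yaml (a filename chosen here for illustration), it can be applied and verified with:

```shell
kubectl apply -f nfs-rbac.yaml
# Confirm the ServiceAccount and its binding exist
kubectl get serviceaccount nfs-provisioner
kubectl get clusterrolebinding run-nfs-provisioner
```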
           

2. StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: stateful-nfs
provisioner: zy-test                  # must match the PROVISIONER_NAME env value in the nfs-client-provisioner Deployment
reclaimPolicy: Retain                 # reclaim policy Retain (manual release)
           

3. PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: stateful-nfs              # must match the StorageClass name above
  accessModes:
    - ReadWriteMany                           # access mode RWX
  resources:
    requests:
      storage: 100Mi
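
Note that this PVC will sit in Pending until the provisioner from step 4 is running, since the provisioner named zy-test is what actually creates the backing PV. Assuming the manifest is saved as test-pvc.yaml (filename chosen for illustration), apply and watch it:

```shell
kubectl apply -f test-pvc.yaml
# -w streams status changes; expect Pending -> Bound once the provisioner is up
kubectl get pvc test-pvc -w
```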
           

4. The nfs-client-provisioner Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1                              # one replica
  strategy:
    type: Recreate                         # recreate strategy
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-provisioner            # the ServiceAccount created in the RBAC YAML above
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner     # image to use
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes              # mount path inside the container
          env:
            - name: PROVISIONER_NAME           # provisioner name; must match the StorageClass provisioner field
              value: zy-test
            - name: NFS_SERVER                 # IP address of the NFS server
              value: 192.168.161.180
            - name: NFS_PATH                   # exported directory on the NFS server
              value: /usr/share/k8s
      volumes:                                 # NFS server IP and path mounted into the container
        - name: nfs-client-root
          nfs:
            server: 192.168.161.180
            path: /usr/share/k8s
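
Assuming the Deployment above is saved as nfs-provisioner-deploy.yaml (filename chosen for illustration), apply it and watch the provisioner's logs, which record each provisioning event:

```shell
kubectl apply -f nfs-provisioner-deploy.yaml
kubectl get pods -l app=nfs-client-provisioner
# Follow the provisioner as it reacts to PVCs using the stateful-nfs StorageClass
kubectl logs -l app=nfs-client-provisioner -f
```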
           

Check the result: if a PV has been created and the PVC is Bound (the provisioner pod must be Running, of course), the setup works:

# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pvc-6d85fe39-b6b4-4c29-ade8-4aff4ce7fb4e   100Mi      RWX            Delete           Bound    default/test-pvc   stateful-nfs            26m
# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-6d85fe39-b6b4-4c29-ade8-4aff4ce7fb4e   100Mi      RWX            stateful-nfs   61m
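
On the NFS server side, nfs-client-provisioner creates one subdirectory of the export per provisioned volume, named ${namespace}-${pvcName}-${pvName}. A sketch of the directory to expect for the PVC above:

```shell
# Construct the directory name the provisioner uses for this PVC
namespace=default
pvc_name=test-pvc
pv_name=pvc-6d85fe39-b6b4-4c29-ade8-4aff4ce7fb4e
dir="${namespace}-${pvc_name}-${pv_name}"
echo "$dir"
# On the NFS server, `ls /usr/share/k8s` should list this directory
```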

           

