
K8s Series --- [Installing the NFS file system (to give k8s the ability to dynamically create PVs)]


1.1 Install nfs-server

# Run this command on every machine (including the master).
yum install -y nfs-utils

The /nfs/data directory below can be customized; it is the shared directory through which the nodes sync PV data with the master.

# On the master, run the following command (paste it directly, or put it into a shell script and run that).
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports


# On the master, create the shared directory.
mkdir -p /nfs/data


# On the master, enable and start rpcbind and the nfs-server service.
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Reload the export configuration so it takes effect.
exportfs -r


# Check that the export configuration is in effect.
exportfs

Verification: after running the command above, if the output contains /nfs/data <world>, the export has been set up successfully.
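For reference, the output of exportfs on the master should look roughly like this (spacing may differ):

# Expected output of exportfs after the steps above:
/nfs/data       <world>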

1.2 Configure nfs-client (optional)

This step is mainly used to sync the /nfs/data data on the nodes to the master: each node mounts the master's /nfs/data export. Copy all of the commands below and run them on every node.

# Run on every node; replace the IP below with your own master's IP.
showmount -e 192.168.26.180

mkdir -p /nfs/data

# Run on every node; replace the IP below with your own master's IP.
mount -t nfs 192.168.26.180:/nfs/data /nfs/data
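The mount above does not survive a reboot. If you want it to persist, you can also add an /etc/fstab entry on every node; this is an optional sketch that reuses the master IP from this article (192.168.26.180), so change it to your own NFS server:

# Optional: make the NFS mount persistent across reboots (run on every node).
# The IP is the master/NFS-server address used in this article; replace it with your own.
echo "192.168.26.180:/nfs/data  /nfs/data  nfs  defaults  0 0" >> /etc/fstab
mount -a    # remounts everything in /etc/fstab, which also verifies the new entry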

1.3 Configure the default storage class for dynamic PV creation

Replace the two IPs below with the IP of your own NFS server. Since I am using the master as the NFS server here, you can simply use the master's IP.

On the NFS server, create the file sc.yaml (vi sc.yaml) and paste the code below into it.

If you run kubectl get sc or kubectl get storageclass on the master at this point, both still report "No resources found".

Then apply the file on the master: kubectl apply -f sc.yaml

## Creates a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether the PV's contents should be archived (backed up) when the PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.26.180 ## set this to your own NFS server address; here the master is used as the NFS server
            - name: NFS_PATH
              value: /nfs/data  ## the directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.26.180 ## set this to your own NFS server address; here the master is used as the NFS server
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io      
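Once kubectl apply -f sc.yaml succeeds, kubectl get sc should list the new storage class and mark it as the default. The output will look roughly like the following (the AGE column and spacing will differ):

# Approximate output of kubectl get sc after applying sc.yaml:
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  5s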

1.4 Verification

  • Run kubectl get pod -A and check that the nfs-client-provisioner-322342c323 pod is Running.
  • Creating and binding a PVC: save the manifest below as pvc.yaml and apply it with the commands that follow.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  # Use kubectl get sc to look up the name of the default storage class (usually nfs-storage).
  # This field can also be omitted, in which case the default storage class is used automatically.
  storageClassName: nfs-storage
vi pvc.yaml

kubectl apply -f pvc.yaml

# The PVC should be in the Bound state.
kubectl get pvc

# You will find that a matching PV has been created automatically.
kubectl get pv
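To confirm end to end that the dynamically provisioned volume is actually usable, you can mount the PVC from a throwaway pod. The manifest below is only a minimal sketch: the pod name nginx-pvc-test and the nginx image are my own choices for illustration, not part of the original steps.

# Minimal test pod (hypothetical name nginx-pvc-test) that mounts the PVC created above.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pvc-test
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html   # files written here end up in the NFS share
  volumes:
    - name: html
      persistentVolumeClaim:
        claimName: nginx-pvc                 # the PVC defined in pvc.yaml above

After applying this pod, a subdirectory for the claim should appear under /nfs/data on the NFS server, which shows that the provisioner is creating backing directories for dynamically provisioned PVs.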
