
Kubernetes Basic Self-Study Series | PV explanation

Author: A communicator who loves programming

Video source: Bilibili course "Learning Kubernetes to Mastery at the End of 2021 - a Happy Appetizer for 2022"

While studying, I organized the instructor's course content and my test notes to share with everyone. If anything infringes, it will be removed. Thank you for your support!

Attached: the summary blog for the Kubernetes Basic Self-Study Series by _COCOgsta on the CSDN blog

Concepts

PersistentVolume (PV)

A PV is a piece of storage in the cluster that has been provisioned by an administrator; just as a node is a cluster resource, a PV is a cluster resource. A PV is a volume plugin like Volume, but it has a lifecycle independent of any individual Pod that uses it. This API object captures the details of the storage implementation, be it NFS, iSCSI, or a cloud-provider-specific storage system

PersistentVolumeClaim (PVC)

A PVC is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request specific sizes and access modes (for example, they can be mounted once in read/write mode or many times in read-only mode)
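
As a minimal illustration of such a claim (the name, size, and storage class below are illustrative values, not taken from the course material):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim                # illustrative name
spec:
  accessModes:
    - ReadOnlyMany             # request a read-only, multi-node mount
  resources:
    requests:
      storage: 2Gi             # request a specific size, analogous to CPU/memory requests on a Pod
  storageClassName: slow       # only PVs of this class are considered for binding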

Static PV

The cluster administrator creates a number of PVs. They carry the details of the real storage that is available for use by cluster users. They exist in the Kubernetes API and are available for consumption

Dynamic PV

When none of the static PVs the administrator created match a user's PersistentVolumeClaim, the cluster may try to dynamically provision a volume for the PVC. This provisioning is based on StorageClasses: the PVC must request a storage class, and the administrator must have created and configured that class for dynamic provisioning to work. A claim that requests the class "" effectively disables dynamic provisioning for itself

To enable dynamic provisioning based on storage classes, the cluster administrator needs to enable the DefaultStorageClass admission controller on the API server. This can be done, for example, by making sure DefaultStorageClass appears in the comma-separated, ordered list of values passed to the API server's --admission-control flag
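
For reference, a storage class definition looks roughly like the sketch below; the class name and provisioner are illustrative assumptions (pick a provisioner your cluster actually supports), not part of the course material:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast                          # illustrative class name; PVCs request it via storageClassName
provisioner: kubernetes.io/gce-pd     # example in-tree provisioner (GCE persistent disks)
parameters:
  type: pd-ssd                        # provisioner-specific parameter
reclaimPolicy: Delete                 # what happens to dynamically provisioned PVs when released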

Binding

A control loop in the master watches for new PVCs, finds a matching PV if one is available, and binds them together. If a PV was dynamically provisioned for a new PVC, the loop always binds that PV to the PVC. Otherwise, the user always gets at least the storage requested, but the volume's capacity may exceed the request. Once a PV and PVC are bound, the binding is exclusive, regardless of how it was made; PVC-to-PV binding is a one-to-one mapping
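
The binding is recorded on the PV itself: once bound, the PV carries a claimRef that points at exactly one claim, which is what makes the mapping one-to-one. A sketch with illustrative names (not taken from the course):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv                   # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/demo-pv           # illustrative backend
  claimRef:                       # filled in by the binder once the PV is bound
    namespace: default
    name: myclaim                 # the single claim this volume belongs to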

Persistent volume claim protection

The purpose of PVC protection is to ensure that a PVC in active use by a Pod is not removed from the system, since removing it could result in data loss

A PVC is considered in active use by a Pod when the Pod's status is Pending and the Pod has been assigned to a node, or when the Pod is in the Running state

When the PVC protection alpha feature is enabled, deleting a PVC that a Pod is still using does not remove it immediately; removal is postponed until the PVC is no longer used by any Pod
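
Concretely, the deferral is implemented with a finalizer on the claim: a protected, in-use PVC carries metadata roughly like the sketch below (an illustration, not an object from the course), and the object is only removed once the finalizer is cleared:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  finalizers:
    - kubernetes.io/pvc-protection              # blocks deletion while a Pod still uses the claim
  deletionTimestamp: "2022-01-01T00:00:00Z"     # set when the user deletes the PVC; actual removal waits on the finalizer
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi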

Persistent volume types

PersistentVolume types are implemented as plug-ins. Kubernetes currently supports the following plugin types:

  • GCEPersistentDisk AWSElasticBlockStore AzureFile AzureDisk FC (Fibre Channel)
  • FlexVolume Flocker NFS iSCSI RBD (Ceph Block Device) CephFS
  • Cinder (OpenStack block storage) Glusterfs VsphereVolume Quobyte Volumes
  • HostPath VMware Photon Portworx Volumes ScaleIO Volumes StorageOS

Persistent volume demo code

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
           

PV access mode

A PersistentVolume can be mounted on a host in any way supported by the resource provider. As shown in the table below, providers have different capabilities, and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read/write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing its specific capabilities

  • ReadWriteOnce --- The volume can be mounted in read/write mode by a single node
  • ReadOnlyMany --- The volume can be mounted in read-only mode by multiple nodes
  • ReadWriteMany --- The volume can be mounted by multiple nodes in read/write mode

At the command line, the access mode is abbreviated as:

  • RWO - ReadWriteOnce
  • ROX - ReadOnlyMany
  • RWX - ReadWriteMany

A volume can only be mounted using one access mode at a time, even if it supports many access modes. For example, a GCEPersistentDisk can be mounted by a single node in ReadWriteOnce mode, or by multiple nodes in ReadOnlyMany mode, but not simultaneously

Volume plugin          ReadWriteOnce   ReadOnlyMany   ReadWriteMany
AWSElasticBlockStore   ✓               -               -
AzureFile              ✓               ✓               ✓
AzureDisk              ✓               -               -
CephFS                 ✓               ✓               ✓
Cinder                 ✓               -               -
FC                     ✓               ✓               -
FlexVolume             ✓               ✓               -
Flocker                ✓               -               -
GCEPersistentDisk      ✓               ✓               -
Glusterfs              ✓               ✓               ✓
HostPath               ✓               -               -
iSCSI                  ✓               ✓               -
PhotonPersistentDisk   ✓               -               -
Quobyte                ✓               ✓               ✓
NFS                    ✓               ✓               ✓
RBD                    ✓               ✓               -
VsphereVolume          ✓               -               - (works when Pods are collocated)
PortworxVolume         ✓               -               ✓
ScaleIO                ✓               ✓               -
StorageOS              ✓               -               -

Reclamation policy

  • Retain --- manual reclamation
  • Recycle --- basic scrub (rm -rf /thevolume/*)
  • Delete --- the associated storage asset, such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume, is deleted

Currently, only NFS and HostPath support the Recycle policy. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support the Delete policy

State

A volume can be in one of the following states:

  • Available --- a free resource that has not yet been bound to any claim
  • Bound --- the volume has been bound to a claim
  • Released --- the claim has been deleted, but the resource has not yet been reclaimed by the cluster
  • Failed --- the volume's automatic reclamation failed

The command line (kubectl get pv) shows the name of the PVC bound to each PV

Persistence Demo Description - NFS

I. Install the NFS server

yum install -y nfs-common nfs-utils rpcbind   # install the NFS server packages and rpcbind
mkdir /nfsdata
chmod 666 /nfsdata
chown nfsnobody /nfsdata
cat /etc/exports                              # the exports file should contain entries like:
    /nfs/1 *(rw,no_root_squash,no_all_squash,sync)
    ... (10 entries in total)
systemctl start rpcbind
systemctl enable rpcbind
systemctl start nfs
systemctl enable nfs
           

II. Deploy the PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs/1
    server: 192.168.898.23   # NFS server address from the notes; replace with a valid address for your environment
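
The StatefulSet in step III has replicas: 3, so its volumeClaimTemplates will create three PVCs, each of which needs its own PV of class nfs to bind. A sketch of the additional PVs, assuming the exports /nfs/2 and /nfs/3 also exist on the NFS server (they are not shown in the course snippet above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs/2                 # assumed additional export
    server: 192.168.898.23       # same NFS server as above
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv3
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs/3                 # assumed additional export
    server: 192.168.898.23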
           

III. Create a service and use a PVC

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: wangyanglinux/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs"
      resources:
        requests:
          storage: 1Gi  
           

About StatefulSet

  • Pod names follow the pattern $(statefulset name)-$(ordinal); in the example above: web-0, web-1, web-2
  • StatefulSet creates a DNS domain name for each Pod replica in the form $(podname).(headless service name). Services therefore communicate with Pods by domain name rather than Pod IP: when the node hosting a Pod fails, the Pod drifts to another node and its IP changes, but its domain name does not
  • StatefulSet uses the headless Service to control the Pods' domain names; the Service's FQDN is $(service name).$(namespace).svc.cluster.local, where "cluster.local" is the cluster's domain name
  • According to volumeClaimTemplates, a PVC is created for each Pod. PVC names follow the pattern $(volumeClaimTemplates.name)-$(pod name); with volumeMounts.name=www and Pod names web-[0-2] above, the PVCs created are www-web-0, www-web-1, and www-web-2 (see the sketch after this list)
  • Deleting a Pod does not delete its PVC; manually deleting a PVC automatically releases the PV
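
For reference, the claim generated for the first replica looks roughly like this (a sketch derived from the volumeClaimTemplates above, not an object captured from the course):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-web-0                  # $(volumeClaimTemplates.name)-$(pod name)
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: "nfs"
  resources:
    requests:
      storage: 1Gi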

StatefulSet start and stop order:

  • Ordered deployment: When deploying a StatefulSet with multiple Pod replicas, the Pods are created sequentially (from 0 to N-1), and each Pod is created only after all preceding Pods are Running and Ready.
  • Ordered deletion: When Pods are deleted, they are terminated in reverse order, from N-1 to 0.
  • Ordered scaling: When scaling Pods, as during deployment, all preceding Pods must be Running and Ready.

StatefulSet usage scenarios:

  • Stable persistent storage: a Pod can still access the same persistent data after rescheduling, implemented with PVCs.
  • Stable network identity: the PodName and HostName do not change after the Pod is rescheduled.
  • Ordered deployment and ordered scaling, implemented with init containers.
  • Ordered scale-down.