Kubernetes CKA Certification Ops Engineer Notes - Kubernetes Storage
- 1. Why are volumes needed?
- 2. Volume overview
- 3. Ephemeral, node-local, and network volumes
  - 3.1 Ephemeral volume: emptyDir
  - 3.2 Node-local volume: hostPath
  - 3.3 Network volume: NFS
- 4. Persistent volume overview
  - 4.1 PV and PVC workflow
- 5. PV lifecycle
  - 5.1 Static PV provisioning
  - 5.2 Dynamic PV provisioning
  - 5.3 Case study: an application using a persistent volume for data
- 6. Stateful application deployment: the StatefulSet controller
- 7. Application configuration storage: ConfigMap
- 8. Sensitive data storage: Secret
1. Why are volumes needed?
Container deployments typically involve three kinds of data:
- Initial data needed at startup, such as configuration files
- Temporary data generated at runtime that must be shared among multiple containers
- Persistent data generated at runtime, such as MySQL's data directory
2. Volume overview
- A Volume in Kubernetes provides the ability to mount external storage into a container
- A Pod must declare both a volume source (spec.volumes) and a mount point (spec.containers[].volumeMounts) before it can use a Volume
Commonly used volume types:
- Local (hostPath, emptyDir)
- Network (NFS, Ceph, GlusterFS)
- Public cloud (AWS EBS)
- Kubernetes resources (ConfigMap, Secret)
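The two pieces described above — a volume source under spec.volumes and a mount point under volumeMounts — always appear together and are linked by name. A minimal sketch (pod, container, and volume names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo        # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache          # must match a volume name declared below
      mountPath: /cache    # where the volume appears inside the container
  volumes:
  - name: cache            # the volume source; emptyDir is the simplest case
    emptyDir: {}
```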
3. Ephemeral, node-local, and network volumes
3.1 Ephemeral volume: emptyDir
An emptyDir volume is an ephemeral volume whose lifecycle is tied to the Pod: when the Pod is deleted, the volume is deleted with it.
Use case: sharing data between containers in a Pod
Example: sharing data between two containers in a Pod
[[email protected] ~]# vi pod-v.yaml
[[email protected] ~]# cat pod-v.yaml
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: write
image: centos
command: ["bash","-c","for i in {1..100};do echo $i >> /data/hello;sleep 1;done"]
volumeMounts:
- name: data
mountPath: /data
- name: read
image: centos
command: ["bash","-c","tail -f /data/hello"]
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
emptyDir: {}
[[email protected] ~]# kubectl apply -f pod-v.yaml
[[email protected] ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dns 1/1 Running 2 3d23h 10.244.36.74 k8s-node1 <none> <none>
my-pod 2/2 Running 2 4m32s 10.244.36.86 k8s-node1 <none> <none>
[[email protected] ~]# kubectl logs my-pod
error: a container name must be specified for pod my-pod, choose one of: [write read]
[[email protected] ~]# kubectl logs my-pod read
7
8
9
10
...
# By default, kubectl exec enters the first container (write)
[[email protected] ~]# kubectl exec -it my-pod bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulting container name to write.
Use 'kubectl describe pod/my-pod -n default' to see all of the containers in this pod.
[[email protected] /]# cd /data/
[[email protected] data]# ls
hello
[[email protected] data]# cat hello
1
2
3
4
5
6
...
[[email protected] data]# tail -f hello
70
71
72
73
74
75
76
...
# Enter the read container
[[email protected] data]# exit
exit
command terminated with exit code 130
[[email protected] ~]# kubectl exec -it my-pod -c read bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[[email protected] /]# cd /data/
[[email protected] data]# tail -f hello
91
92
93
94
95
96
97
....
# The volume can also be inspected directly on the node where the Pod is scheduled
[[email protected] ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dns 1/1 Running 2 3d23h 10.244.36.74 k8s-node1 <none> <none>
my-pod 1/2 CrashLoopBackOff 5 16m 10.244.36.86 k8s-node1 <none> <none>
# On that node:
[[email protected] ~]# ls /var/lib/kubelet/
config.yaml device-plugins pki plugins_registry pods
cpu_manager_state kubeadm-flags.env plugins pod-resources
[[email protected] ~]# ls /var/lib/kubelet/pods/
075b0581-87cd-4196-a03c-007ea51584a0 a5f0ec32-818c-44af-8c4f-5b30b36f596e
1ab11649-6fbe-4538-8608-fb62399d2b8b ab151aff-ac4b-4895-a3dd-4cc8df0c09be
2e1362f8-f67c-44c0-a45f-db99c9026fa4 b9d0512c-4696-468f-9469-d31b54e8cb64
36bcdc4c-d7ac-4429-89c6-a035c6dda3c6 cb250494-02f8-404e-b2cb-8430fe62e752
5a5a4820-fb2c-48be-bbdb-94b044266f03 ffad9162-22f4-48fa-84e0-5d11cb3fd3f9
9677d5e0-6531-4351-86c5-e84e243f7e2f
[[email protected] ~]# docker ps|grep my-pod
cfa549def6de centos "bash -c 'for i in {…" 9 seconds ago Up 8 seconds k8s_write_my-pod_default_ab151aff-ac4b-4895-a3dd-4cc8df0c09be_6
037a43215f94 centos "bash -c 'tail -f /d…" 16 minutes ago Up 16 minutes k8s_read_my-pod_default_ab151aff-ac4b-4895-a3dd-4cc8df0c09be_0
47f12c464fba registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 17 minutes ago Up 17 minutes k8s_POD_my-pod_default_ab151aff-ac4b-4895-a3dd-4cc8df0c09be_0
[[email protected] ~]# cd /var/lib/kubelet/pods/ab151aff-ac4b-4895-a3dd-4cc8df0c09be
[[email protected] ab151aff-ac4b-4895-a3dd-4cc8df0c09be]# ls
containers etc-hosts plugins volumes
[[email protected] ab151aff-ac4b-4895-a3dd-4cc8df0c09be]# cd volumes/
[[email protected] volumes]# ls
kubernetes.io~empty-dir kubernetes.io~secret
[[email protected] volumes]# cd kubernetes.io~empty-dir/
[[email protected] kubernetes.io~empty-dir]# ls
data
[[email protected] kubernetes.io~empty-dir]# cd data/
[[email protected] data]# ls
hello
[[email protected] data]# tail -f hello
91
92
93
94
95
96
97
98
99
100
3.2 Node-local volume: hostPath
A hostPath volume mounts a file or directory from the node's filesystem (the node the Pod runs on) into a container in the Pod.
Use case: containers in a Pod need access to files on the host
Example: mount the host's /tmp directory to /data in the container
[[email protected] ~]# vi pod-h.yaml
[[email protected] ~]# cat pod-h.yaml
apiVersion: v1
kind: Pod
metadata:
name: my-pod2
spec:
containers:
- name: busybox
image: busybox
args:
- /bin/sh
- -c
- sleep 36000
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
hostPath:
path: /tmp
type: Directory
[[email protected] ~]# kubectl apply -f pod-h.yaml
pod/my-pod2 created
[[email protected] ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dns 1/1 Running 2 4d6h 10.244.36.74 k8s-node1 <none> <none>
my-pod 1/2 CrashLoopBackOff 63 7h 10.244.36.86 k8s-node1 <none> <none>
my-pod2 1/1 Running 0 22s 10.244.169.169 k8s-node2 <none> <none>
[[email protected] ~]# kubectl exec -it my-pod2 -- sh
/ # cd /data/
/data # ls
systemd-private-593535db8a3349798ff611aa4e524832-chronyd.service-MNp5mA
systemd-private-701d7a68e2e346698089d492b13e2539-chronyd.service-ppsSzU
systemd-private-f5834299f78540beaa308f4ad24940d7-chronyd.service-uEjUIn
vmware-root_1091-4021784385
vmware-root_1094-2697139482
vmware-root_1113-4013723357
/data # touch a.txt
/data # ls
a.txt
systemd-private-593535db8a3349798ff611aa4e524832-chronyd.service-MNp5mA
systemd-private-701d7a68e2e346698089d492b13e2539-chronyd.service-ppsSzU
systemd-private-f5834299f78540beaa308f4ad24940d7-chronyd.service-uEjUIn
vmware-root_1091-4021784385
vmware-root_1094-2697139482
vmware-root_1113-4013723357
/data #
[[email protected] ~]# cd /tmp/
[[email protected] tmp]# ls
systemd-private-593535db8a3349798ff611aa4e524832-chronyd.service-MNp5mA
systemd-private-701d7a68e2e346698089d492b13e2539-chronyd.service-ppsSzU
systemd-private-f5834299f78540beaa308f4ad24940d7-chronyd.service-uEjUIn
vmware-root_1091-4021784385
vmware-root_1094-2697139482
vmware-root_1113-4013723357
[[email protected] tmp]# ls
a.txt
systemd-private-593535db8a3349798ff611aa4e524832-chronyd.service-MNp5mA
systemd-private-701d7a68e2e346698089d492b13e2539-chronyd.service-ppsSzU
systemd-private-f5834299f78540beaa308f4ad24940d7-chronyd.service-uEjUIn
vmware-root_1091-4021784385
vmware-root_1094-2697139482
vmware-root_1113-4013723357
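The type: Directory used above requires the path to already exist on the node, or the Pod fails to start. Kubernetes also supports other hostPath types; for example, DirectoryOrCreate creates the directory on the node if it is missing. A sketch of just the volume definition (the path is illustrative):

```yaml
  volumes:
  - name: data
    hostPath:
      path: /opt/app-data        # illustrative host path
      type: DirectoryOrCreate    # created on the node if it does not already exist
```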
3.3 Network volume: NFS
An NFS volume provides NFS mount support, automatically mounting an NFS export into the Pod.
NFS is a mainstream file-sharing server.
# yum install nfs-utils
# vi /etc/exports
/ifs/kubernetes *(rw,no_root_squash)
# mkdir -p /ifs/kubernetes
# systemctl start nfs
# systemctl enable nfs
Note: the nfs-utils package must be installed on every Node
[[email protected] tmp]# yum install nfs-utils
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
...
Complete!
[[email protected] tmp]# cd
[[email protected] ~]# vi /etc/exports
[[email protected] ~]# mkdir -p /ifs/kubernetes
[[email protected] ~]# systemctl start nfs
[[email protected] ~]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[[email protected] ~]# mount -t nfs 10.0.0.63:/ifs/kubernetes /mnt
[[email protected] ~]# cd /mnt/
[[email protected] mnt]# ls
[[email protected] mnt]#
[[email protected] mnt]# cd /ifs/kubernetes/
[[email protected] kubernetes]# ls
[[email protected] kubernetes]# touch a.txt
[[email protected] kubernetes]# ls
a.txt
[[email protected] mnt]# ls
a.txt
[[email protected] ~]# vi dep-n.yaml
[[email protected] ~]# cat dep-n.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumes:
- name: www
nfs:
server: 10.0.0.63
path: /ifs/kubernetes
[[email protected] ~]# kubectl apply -f dep-n.yaml
deployment.apps/nginx-deployment created
[[email protected] ~]# kubectl expose deployment nginx-deployment --port=80 --target-port=80 --type=NodePort
service/nginx-deployment exposed
[[email protected] ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29d
my-dep NodePort 10.111.199.51 <none> 80:31734/TCP 25d
my-service NodePort 10.100.228.0 <none> 80:32433/TCP 19d
nginx-deployment NodePort 10.102.245.67 <none> 80:31176/TCP 13s
web NodePort 10.96.132.243 <none> 80:31340/TCP 28d
web666 NodePort 10.106.85.63 <none> 80:30008/TCP 5d6h
[[email protected] ~]# kubectl get ep
NAME ENDPOINTS AGE
kubernetes 10.0.0.61:6443 29d
my-dep <none> 25d
my-service 10.244.169.164:80,10.244.169.167:80 19d
nginx-deployment 10.244.169.164:80,10.244.169.167:80 37s
web 10.244.169.158:8080 28d
web666 10.244.169.158:80 5d6h
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 0 26m
nginx-deployment-577f9758bc-8jffx 1/1 Running 0 3m12s
nginx-deployment-577f9758bc-fh4c5 1/1 Running 0 3m12s
nginx-deployment-577f9758bc-mxkdp 1/1 Running 0 3m12s
[[email protected] ~]# curl 10.102.245.67
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>
[[email protected] ~]# kubectl exec -it nginx-deployment-577f9758bc-8jffx bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[email protected]:/# cd /usr/share/nginx/html/
[email protected]:/usr/share/nginx/html# ls
a.txt
[[email protected] kubernetes]# vi index.html
[[email protected] kubernetes]# cat index.html
<h1>hello nginx!</h1>
[email protected]:/usr/share/nginx/html# ls
a.txt index.html
4. Persistent volume overview
- PersistentVolume (PV): an abstraction over creating and consuming storage, so that storage is managed as a cluster resource
- PersistentVolumeClaim (PVC): lets users request storage without caring about the underlying Volume implementation details
4.1 PV and PVC workflow
Container application:
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-pvc
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx2
template:
metadata:
labels:
app: nginx2
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumes:
- name: www
persistentVolumeClaim:
claimName: my-pvc
Volume claim template:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
Volume definition:
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
nfs:
path: /ifs/kubernetes
server: 10.0.0.63
[[email protected] ~]# vi dep-pvc.yaml
[[email protected] ~]# cat dep-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy-pvc
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx2
template:
metadata:
labels:
app: nginx2
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumes:
- name: www
persistentVolumeClaim:
claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
[[email protected] ~]# kubectl apply -f dep-pvc.yaml
deployment.apps/deploy-pvc created
persistentvolumeclaim/my-pvc created
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-pvc-76846b5956-2g67t 0/1 Pending 0 7s
deploy-pvc-76846b5956-kxlmr 0/1 Pending 0 7s
deploy-pvc-76846b5956-sc2k6 0/1 Pending 0 7s
dns 0/1 Error 2 4d6h
my-pod 0/2 Error 65 7h44m
my-pod2 1/1 Running 0 44m
nginx-deployment-577f9758bc-8jffx 1/1 Running 0 21m
nginx-deployment-577f9758bc-fh4c5 0/1 ContainerCreating 0 21m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 0 21m
[[email protected] ~]# kubectl describe pod deploy-pvc-76846b5956-2g67t
Name: deploy-pvc-76846b5956-2g67t
Namespace: default
Priority: 0
Node: <none>
Labels: app=nginx2
pod-template-hash=76846b5956
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/deploy-pvc-76846b5956
Containers:
nginx:
Image: nginx:1.14.2
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/usr/share/nginx/html from www (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8grtj (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
www:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: my-pvc
ReadOnly: false
default-token-8grtj:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8grtj
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 27s (x2 over 28s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
[[email protected] ~]# vi pv.yaml
[[email protected] ~]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: my-pv
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
nfs:
path: /ifs/kubernetes
server: 10.0.0.63
[[email protected] ~]# kubectl apply -f pv.yaml
persistentvolume/my-pv created
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-pvc-76846b5956-2g67t 0/1 Pending 0 3m1s
deploy-pvc-76846b5956-kxlmr 0/1 Pending 0 3m1s
deploy-pvc-76846b5956-sc2k6 0/1 Pending 0 3m1s
dns 0/1 Error 2 4d7h
my-pod 0/2 Error 65 7h47m
my-pod2 1/1 Running 0 47m
nginx-deployment-577f9758bc-8jffx 1/1 Running 0 23m
nginx-deployment-577f9758bc-fh4c5 0/1 ContainerCreating 0 23m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 0 23m
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deploy-pvc-76846b5956-2g67t 0/1 ContainerCreating 0 3m6s
deploy-pvc-76846b5956-kxlmr 0/1 ContainerCreating 0 3m6s
deploy-pvc-76846b5956-sc2k6 0/1 ContainerCreating 0 3m6s
dns 0/1 Error 2 4d7h
my-pod 0/2 Error 65 7h47m
my-pod2 1/1 Running 0 47m
nginx-deployment-577f9758bc-8jffx 1/1 Running 0 23m
nginx-deployment-577f9758bc-fh4c5 0/1 ContainerCreating 0 23m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 0 23m
[[email protected] ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc Bound my-pv 5Gi RWX 3m23s
[[email protected] ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Bound default/my-pvc 32s
[[email protected] ~]# kubectl expose deployment deploy-pvc --port=80 --target-port=80 --type=NodePort
service/deploy-pvc exposed
[[email protected] ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
deploy-pvc NodePort 10.98.3.1 <none> 80:31756/TCP 17s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29d
my-dep NodePort 10.111.199.51 <none> 80:31734/TCP 26d
my-service NodePort 10.100.228.0 <none> 80:32433/TCP 19d
nginx-deployment NodePort 10.102.245.67 <none> 80:31176/TCP 26m
web NodePort 10.96.132.243 <none> 80:31340/TCP 28d
web666 NodePort 10.106.85.63 <none> 80:30008/TCP 5d6h
5. PV lifecycle
ACCESS MODES:
Access modes describe how a user application may access the storage resource. A PV supports the following modes:
- ReadWriteOnce (RWO): read-write, but mountable by only a single node
- ReadOnlyMany (ROX): read-only, mountable by many nodes
- ReadWriteMany (RWX): read-write, mountable by many nodes
RECLAIM POLICY:
PVs currently support three reclaim policies:
- Retain: keep the data; an administrator must clean it up manually
- Recycle: scrub the data on the PV, equivalent to running rm -rf /ifs/kubernetes/* (deprecated in favor of dynamic provisioning)
- Delete: delete the backing storage together with the PV
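The policy is set per PV with spec.persistentVolumeReclaimPolicy; statically created PVs default to Retain, which is why the transcripts in this document show RECLAIM POLICY Retain. A sketch based on the my-pv definition used earlier:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  persistentVolumeReclaimPolicy: Retain   # Retain | Recycle | Delete
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /ifs/kubernetes
    server: 10.0.0.63
```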
STATUS:
During its lifecycle a PV can be in one of four phases:
- Available: free, not yet bound to any PVC
- Bound: bound to a PVC
- Released: its PVC was deleted, but the resource has not yet been reclaimed by the cluster
- Failed: automatic reclamation of the PV failed
5.1 Static PV provisioning
The usage shown so far is called static provisioning: a Kubernetes operator creates a pool of PVs up front for developers to consume.
[[email protected] ~]# kubectl delete -f dep-pvc.yaml
deployment.apps "deploy-pvc" deleted
persistentvolumeclaim "my-pvc" deleted
[[email protected] ~]# cp pv.yaml test-pv.yaml
[[email protected] ~]# vi test-pv.yaml
[[email protected] ~]# cat test-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0001
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
nfs:
path: /ifs/kubernetes/pv0001
server: 10.0.0.63
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0002
spec:
capacity:
storage: 15Gi
accessModes:
- ReadWriteMany
nfs:
path: /ifs/kubernetes/pv0002
server: 10.0.0.63
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0003
spec:
capacity:
storage: 25Gi
accessModes:
- ReadWriteMany
nfs:
path: /ifs/kubernetes/pv0003
server: 10.0.0.63
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0004
spec:
capacity:
storage: 30Gi
accessModes:
- ReadWriteMany
nfs:
path: /ifs/kubernetes/pv0004
server: 10.0.0.63
---
[[email protected] ~]# kubectl apply -f test-pv.yaml
persistentvolume/pv0001 created
persistentvolume/pv0002 created
persistentvolume/pv0003 created
persistentvolume/pv0004 created
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 0/1 Error 2 4h40m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 4h17m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 3h19m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 4h17m
pod-taint 0/1 Completed 4 6d3h
test-76846b5956-2b458 1/1 Running 0 19m
test-76846b5956-6lkb7 0/1 ContainerCreating 0 19m
test-76846b5956-8ns4z 1/1 Running 0 19m
web-96d5df5c8-6czpv 0/1 Completed 3 4d12h
web-96d5df5c8-6f4ww 0/1 Error 1 3h19m
web-96d5df5c8-6hc68 1/1 Running 2 3h19m
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 4h45m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 4h21m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 3h23m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 4h21m
pod-taint 1/1 Running 5 6d3h
test-76846b5956-2b458 1/1 Running 0 23m
test-76846b5956-6lkb7 1/1 Running 0 23m
test-76846b5956-8ns4z 1/1 Running 0 23m
web-96d5df5c8-6czpv 1/1 Running 4 4d12h
web-96d5df5c8-6f4ww 1/1 Running 2 3h23m
web-96d5df5c8-6hc68 1/1 Running 2 3h23m
[[email protected] ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 3h45m
pv0001 5Gi RWX Retain Bound default/my-pvc 2m29s
pv0002 15Gi RWX Retain Available 2m29s
pv0003 25Gi RWX Retain Available 2m29s
pv0004 30Gi RWX Retain Available 2m29s
[[email protected] ~]# kubectl exec -it test-76846b5956-2b458 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[email protected]:/# cd /usr/share/nginx/html/
[email protected]:/usr/share/nginx/html# ls
[email protected]:/usr/share/nginx/html# touch a.txt
[email protected]:/usr/share/nginx/html# ls
a.txt
[[email protected] kubernetes]# ll pv0001/
total 0
-rw-r--r-- 1 root root 0 Dec 21 09:56 a.txt
[[email protected] kubernetes]# ll pv0002/
total 0
[[email protected] kubernetes]# ll pv0003/
total 0
[[email protected] kubernetes]# ll pv0004/
total 0
[[email protected] kubernetes]#
[[email protected] ~]# vi test-pvc.yaml
[[email protected] ~]# cat test-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: test2
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx2
template:
metadata:
labels:
app: nginx2
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumes:
- name: www
persistentVolumeClaim:
claimName: my-pvc2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc2
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 26Gi
[[email protected] ~]# kubectl apply -f test-pvc.yaml
deployment.apps/test2 created
persistentvolumeclaim/my-pvc2 created
[[email protected] ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 4h5m
pv0001 5Gi RWX Retain Bound default/my-pvc 22m
pv0002 15Gi RWX Retain Available 22m
pv0003 25Gi RWX Retain Available 22m
pv0004 30Gi RWX Retain Bound default/my-pvc2 22m
1. What are the matching criteria between a PVC and a PV?
- Storage capacity
- Access modes
2. Does the capacity field limit actual usable storage?
- No. Capacity is only used for matching; the actual usable space depends on the network storage backend (NFS, Ceph)
- Whether capacity can be enforced automatically depends on whether the storage technology supports it (Kubernetes is gradually adding such enforcement for some backends)
3. Capacity matching policy
- The PV whose capacity is closest to the request (while still satisfying it) is chosen
- If no PV satisfies the request, the Pod stays Pending, waiting for an allocatable PV
4. The PV-PVC relationship
- One-to-one, like a monogamous marriage
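For example, given the static PVs created above (pv0001 5Gi bound, pv0002 15Gi, pv0003 25Gi available, pv0004 30Gi bound), a hypothetical claim for 10Gi RWX would bind pv0002 — the smallest available PV that satisfies both the capacity request and the access mode:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc            # hypothetical claim, for illustration only
spec:
  accessModes:
  - ReadWriteMany           # must be offered by the PV
  resources:
    requests:
      storage: 10Gi         # pv0002 (15Gi) is the closest available fit
```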
5.2 Dynamic PV provisioning
The obvious drawback of static PV provisioning is its maintenance cost.
Kubernetes therefore also supports dynamic PV provisioning, implemented with the StorageClass object.
(Figure: NFS-based dynamic PV provisioning workflow)
Deploy the NFS provisioner plugin that creates PVs automatically:
git clone https://github.com/kubernetes-incubator/external-storage
cd nfs-client/deploy
kubectl apply -f rbac.yaml # grant access to the apiserver
kubectl apply -f deployment.yaml # deploy the plugin; edit the NFS server address and export path inside first
kubectl apply -f class.yaml # create the storage class
kubectl get sc # list storage classes
5.3 Case study: an application using a persistent volume for data
[[email protected] ~]# unzip nfs-client.zip
Archive: nfs-client.zip
creating: nfs-client/
inflating: nfs-client/class.yaml
inflating: nfs-client/deployment.yaml
inflating: nfs-client/rbac.yaml
[[email protected] ~]# cd nfs-client/
[[email protected] nfs-client]# ls
class.yaml deployment.yaml rbac.yaml
[[email protected] nfs-client]#
[[email protected] nfs-client]# ls
class.yaml deployment.yaml rbac.yaml
[[email protected] nfs-client]# vi deployment.yaml
[[email protected] nfs-client]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: nfs-client-provisioner
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: lizhenliang/nfs-client-provisioner:latest
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: fuseim.pri/ifs
- name: NFS_SERVER
value: 10.0.0.63
- name: NFS_PATH
value: /ifs/kubernetes
volumes:
- name: nfs-client-root
nfs:
server: 10.0.0.63
path: /ifs/kubernetes
[[email protected] nfs-client]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
archiveOnDelete: "true"
[[email protected] nfs-client]# kubectl apply -f .
storageclass.storage.k8s.io/managed-nfs-storage created
serviceaccount/nfs-client-provisioner created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner unchanged
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[[email protected] nfs-client]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 5h18m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 3m22s
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 4h54m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 3h56m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 4h54m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 56m
test-76846b5956-6lkb7 1/1 Running 0 56m
test-76846b5956-8ns4z 1/1 Running 0 56m
test2-78c4694588-87b9r 1/1 Running 0 26m
[[email protected] nfs-client]# cd
[[email protected] ~]# cp test-pvc.yaml test-pvc2.yaml
[[email protected] ~]# vi test-pvc2.yaml
[[email protected] ~]# cat test-pvc2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: auto-pv
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx2
template:
metadata:
labels:
app: nginx2
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumes:
- name: www
persistentVolumeClaim:
claimName: my-pvc3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc3
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 40Gi
[[email protected] ~]# kubectl apply -f test-pvc2.yaml
deployment.apps/auto-pv created
persistentvolumeclaim/my-pvc3 created
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
auto-pv-6969ddf4bc-4c8xx 0/1 Pending 0 8s
my-pod2 1/1 Running 3 5h21m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 7m1s
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 4h58m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 4h58m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 60m
test-76846b5956-6lkb7 1/1 Running 0 60m
test-76846b5956-8ns4z 1/1 Running 0 60m
test2-78c4694588-87b9r 1/1 Running 0 29m
[[email protected] ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc Bound pv0001 5Gi RWX 60m
my-pvc2 Bound pv0004 30Gi RWX 29m
my-pvc3 Pending 17s
[[email protected] ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 4h35m
pv0001 5Gi RWX Retain Bound default/my-pvc 52m
pv0002 15Gi RWX Retain Available 52m
pv0003 25Gi RWX Retain Available 52m
pv0004 30Gi RWX Retain Bound default/my-pvc2 52m
[[email protected] ~]# cat test-pvc2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: auto-pv
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx2
template:
metadata:
labels:
app: nginx2
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumes:
- name: www
persistentVolumeClaim:
claimName: my-pvc3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc3
spec:
storageClassName: "managed-nfs-storage"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 40Gi
[[email protected] ~]# kubectl apply -f test-pvc2.yaml
deployment.apps/auto-pv unchanged
The PersistentVolumeClaim "my-pvc3" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
... // 2 identical fields
Resources: core.ResourceRequirements{Requests: core.ResourceList{s"storage": {i: resource.int64Amount{value: 42949672960}, Format: "BinarySI"}}},
VolumeName: "",
- StorageClassName: &"managed-nfs-storage",
+ StorageClassName: nil,
VolumeMode: &"Filesystem",
DataSource: nil,
}
[[email protected] ~]# kubectl delete -f test-pvc2.yaml
deployment.apps "auto-pv" deleted
persistentvolumeclaim "my-pvc3" deleted
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 5h26m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 11m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 5h3m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h5m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 5h3m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 65m
test-76846b5956-6lkb7 1/1 Running 0 65m
test-76846b5956-8ns4z 1/1 Running 0 65m
test2-78c4694588-87b9r 1/1 Running 0 34m
[[email protected] ~]# kubectl apply -f test-pvc2.yaml
deployment.apps/auto-pv created
persistentvolumeclaim/my-pvc3 created
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
auto-pv-6969ddf4bc-69ndt 1/1 Running 0 16s
my-pod2 1/1 Running 3 5h26m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 12m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 5h3m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h5m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 5h3m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 65m
test-76846b5956-6lkb7 1/1 Running 0 65m
test-76846b5956-8ns4z 1/1 Running 0 65m
test2-78c4694588-87b9r 1/1 Running 0 34m
[[email protected] ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 4h40m
pv0001 5Gi RWX Retain Bound default/my-pvc 57m
pv0002 15Gi RWX Retain Available 57m
pv0003 25Gi RWX Retain Available 57m
pv0004 30Gi RWX Retain Bound default/my-pvc2 57m
pvc-911835a4-7ef4-421c-b8d3-1594a93f94c8 40Gi RWX Delete Bound default/my-pvc3 managed-nfs-storage 32s
[[email protected] kubernetes]# ls
a.txt index.html pv0002 pv0004
default-my-pvc3-pvc-911835a4-7ef4-421c-b8d3-1594a93f94c8 pv0001 pv0003
[[email protected] ~]# kubectl delete -f test-pvc2.yaml
deployment.apps "auto-pv" deleted
persistentvolumeclaim "my-pvc3" deleted
[[email protected] ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 4h41m
pv0001 5Gi RWX Retain Bound default/my-pvc 58m
pv0002 15Gi RWX Retain Available 58m
pv0003 25Gi RWX Retain Available 58m
pv0004 30Gi RWX Retain Bound default/my-pvc2 58m
[[email protected] kubernetes]# ls
archived-default-my-pvc3-pvc-911835a4-7ef4-421c-b8d3-1594a93f94c8 pv0001 pv0004
a.txt pv0002
index.html pv0003
[[email protected] ~]# cat nfs-client/class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
archiveOnDelete: "true"
Specify the storage class name when creating the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-pvc3
spec:
storageClassName: "managed-nfs-storage"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 40Gi
apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- name: test-pod
image: busybox
command:
- "/bin/sh"
args:
- "-c"
- "touch /mnt/SUCCESS && exit 0 || exit 1"
volumeMounts:
- name: nfs-pvc
mountPath: "/mnt"
restartPolicy: "Never"
volumes:
- name: nfs-pvc
persistentVolumeClaim:
claimName: test-claim
6. Stateful application deployment: the StatefulSet controller
StatefulSet:
- Deploys stateful applications
- Gives each Pod an independent lifecycle, preserving startup order and uniqueness
  1. Stable, unique network identifiers and persistent storage
  2. Ordered, graceful deployment, scaling, deletion, and termination
  3. Ordered rolling updates
Use cases: distributed applications, database clusters

Stable network identity:
Pod network identity is maintained with a Headless Service (identical to a regular Service except that spec.clusterIP is set to None).
The StatefulSet adds a serviceName: "nginx" field to tell the controller to use this Headless Service.
DNS name: <statefulsetName-index>.<service-name>.<namespace-name>.svc.cluster.local
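A quick way to check such a name is to resolve it from inside the cluster. A throwaway Pod (name and namespace are assumptions) resolving the first replica of the web/nginx StatefulSet used in this section:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test            # throwaway pod, illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dns-test
    image: busybox:1.28     # 1.28's nslookup handles cluster DNS reliably
    command: ["nslookup", "web-0.nginx.default.svc.cluster.local"]
```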
Stable storage:
A StatefulSet's volumes are created from a volumeClaimTemplates (volume claim template) section: the StatefulSet creates a numbered PVC for each Pod, and each PVC in turn binds a PersistentVolume.
How a StatefulSet differs from a Deployment: its Pods have an identity!
The three elements of identity:
- DNS name
- Hostname
- Storage (PVC)
[[email protected] ~]# cat statefulset.yaml
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx
serviceName: "nginx"
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-nfs-storage"
resources:
requests:
storage: 1Gi
[[email protected] ~]# kubectl apply -f statefulset.yaml
service/nginx unchanged
statefulset.apps/web created
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 5h49m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 35m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 5h26m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h28m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 5h26m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 88m
test-76846b5956-6lkb7 1/1 Running 0 88m
test-76846b5956-8ns4z 1/1 Running 0 88m
test2-78c4694588-87b9r 1/1 Running 0 57m
web-0 0/1 ContainerCreating 0 10s
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 5h50m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 35m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 5h26m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h28m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 5h26m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 88m
test-76846b5956-6lkb7 1/1 Running 0 88m
test-76846b5956-8ns4z 1/1 Running 0 88m
test2-78c4694588-87b9r 1/1 Running 0 58m
web-0 1/1 Running 0 29s
web-1 0/1 ContainerCreating 0 9s
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
my-pod2 1/1 Running 3 5h50m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 35m
nginx-deployment-577f9758bc-8jffx 1/1 Running 4 5h27m
nginx-deployment-577f9758bc-gr766 1/1 Running 4 4h29m
nginx-deployment-577f9758bc-mxkdp 1/1 Running 4 5h27m
pod-taint 1/1 Running 5 6d4h
test-76846b5956-2b458 1/1 Running 0 89m
test-76846b5956-6lkb7 1/1 Running 0 89m
test-76846b5956-8ns4z 1/1 Running 0 89m
test2-78c4694588-87b9r 1/1 Running 0 58m
web-0 1/1 Running 0 49s
web-1 1/1 Running 0 29s
web-2 0/1 ContainerCreating 0 9s
[[email protected] ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 5h5m
pv0001 5Gi RWX Retain Bound default/my-pvc 82m
pv0002 15Gi RWX Retain Available 82m
pv0003 25Gi RWX Retain Available 82m
pv0004 30Gi RWX Retain Bound default/my-pvc2 82m
pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb 1Gi RWO Delete Bound default/www-web-1 managed-nfs-storage 2m39s
pvc-ca56c0cc-11d8-41a8-9306-2334ab942e1d 1Gi RWO Delete Bound default/www-web-2 managed-nfs-storage 2m19s
pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59 1Gi RWO Delete Bound default/www-web-0 managed-nfs-storage 2m59s
[[email protected] ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-pvc Bound pv0001 5Gi RWX 91m
my-pvc2 Bound pv0004 30Gi RWX 60m
www-web-0 Bound pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59 1Gi RWO managed-nfs-storage 3m2s
www-web-1 Bound pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb 1Gi RWO managed-nfs-storage 2m42s
www-web-2 Bound pvc-ca56c0cc-11d8-41a8-9306-2334ab942e1d 1Gi RWO managed-nfs-storage 2m22s
[[email protected] ~]# kubectl describe pod web-0
Name: web-0
Namespace: default
Priority: 0
Node: k8s-node1/10.0.0.62
Start Time: Tue, 21 Dec 2021 10:58:24 +0800
Labels: app=nginx
controller-revision-hash=web-67bb74dc
statefulset.kubernetes.io/pod-name=web-0
Annotations: cni.projectcalico.org/podIP: 10.244.36.109/32
cni.projectcalico.org/podIPs: 10.244.36.109/32
Status: Running
IP: 10.244.36.109
IPs:
IP: 10.244.36.109
Controlled By: StatefulSet/web
Containers:
nginx:
Container ID: docker://678e95dad7ae4dcea2c14178d1afd1e7021962dbbbbbe05059a3b49f7f10c3c6
Image: nginx
Image ID: docker-pullable://[email protected]:9522864dd661dcadfd9958f9e0de192a1fdda2c162a35668ab6ac42b465f0603
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 21 Dec 2021 10:58:41 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/nginx/html from www (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-8grtj (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
www:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: www-web-0
ReadOnly: false
default-token-8grtj:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-8grtj
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 5m29s (x2 over 5m29s) default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 5m26s default-scheduler Successfully assigned default/web-0 to k8s-node1
Normal Pulling 5m25s kubelet, k8s-node1 Pulling image "nginx"
Normal Pulled 5m10s kubelet, k8s-node1 Successfully pulled image "nginx" in 15.568844274s
Normal Created 5m10s kubelet, k8s-node1 Created container nginx
Normal Started 5m9s kubelet, k8s-node1 Started container nginx
[[email protected] ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
my-pv 5Gi RWX Retain Released default/my-pvc 5h9m
pv0001 5Gi RWX Retain Bound default/my-pvc 87m
pv0002 15Gi RWX Retain Available 87m
pv0003 25Gi RWX Retain Available 87m
pv0004 30Gi RWX Retain Bound default/my-pvc2 87m
pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb 1Gi RWO Delete Bound default/www-web-1 managed-nfs-storage 7m4s
pvc-ca56c0cc-11d8-41a8-9306-2334ab942e1d 1Gi RWO Delete Bound default/www-web-2 managed-nfs-storage 6m44s
pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59 1Gi RWO Delete Bound default/www-web-0 managed-nfs-storage 7m24s
[[email protected] kubernetes]# ls
archived-default-my-pvc3-pvc-911835a4-7ef4-421c-b8d3-1594a93f94c8 default-www-web-1-pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb pv0001 pv0004
a.txt default-www-web-2-pvc-ca56c0cc-11d8-41a8-9306-2334ab942e1d pv0002
default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59 index.html
[[email protected] kubernetes]# cd default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59/
[[email protected] default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59]# ls
[[email protected] default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59]# vi index.html
[[email protected] default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59]# cd ../default-www-web-1-pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb/
[[email protected] default-www-web-1-pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb]# vi index.html
[[email protected] default-www-web-1-pvc-6dc705f5-608e-4d5a-bacd-f4f640efe5fb]#
[[email protected] ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-pod2 1/1 Running 3 6h1m 10.244.169.164 k8s-node2 <none> <none>
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 46m 10.244.36.107 k8s-node1 <none> <none>
web-0 1/1 Running 0 11m 10.244.36.109 k8s-node1 <none> <none>
web-1 1/1 Running 0 11m 10.244.36.106 k8s-node1 <none> <none>
web-2 1/1 Running 0 11m 10.244.36.105 k8s-node1 <none> <none>
[[email protected] ~]# curl 10.244.36.109
00000
[[email protected] ~]# curl 10.244.36.106
11111
[[email protected] ~]# kubectl run -it --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # ls
bin dev etc home proc root sys tmp usr var
/ # nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: nginx
Address 1: 10.244.36.109 web-0.nginx.default.svc.cluster.local
Address 2: 10.244.36.106 10-244-36-106.my-service.default.svc.cluster.local
Address 3: 10.244.36.105 web-2.nginx.default.svc.cluster.local
/ # nslookup web
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web
Address 1: 10.96.132.243 web.default.svc.cluster.local
Mapping: web-0 -> www-web-0 (PVC) -> pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59 (PV) -> default-www-web-0-pvc-ebf2a8c9-57b4-46cf-b917-a5894970fc59 (directory on the NFS server)
7. Application configuration storage: ConfigMap
After a ConfigMap is created, its data is actually stored in etcd; Pods then reference that data when they are created.
Application scenario: application configuration
A Pod can consume ConfigMap data in two ways:
- injected as environment variables
- mounted as a data volume
[[email protected] ~]# vi configmap.yaml
[[email protected] ~]# cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-demo
data:
  abc: "123"
  cde: "456"
  redis.properties: |
    port: 6379
    host: 10.0.0.63
[[email protected] ~]# kubectl apply -f configmap.yaml
configmap/configmap-demo created
[[email protected] ~]# kubectl get configmap
NAME DATA AGE
configmap-demo 3 17s
[[email protected] ~]# kubectl get cm
NAME DATA AGE
configmap-demo 3 24s
[[email protected] ~]# vi configmap-pod.yaml
[[email protected] ~]# cat configmap-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
  - name: demo
    image: nginx
    env:
    - name: NAME
      value: "adu"
    - name: ABCD
      valueFrom:
        configMapKeyRef:
          name: configmap-demo
          key: abc
    - name: CDEF
      valueFrom:
        configMapKeyRef:
          name: configmap-demo
          key: cde
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  - name: config
    configMap:
      name: configmap-demo
      items:
      - key: "redis.properties"
        path: "redis.properties"
[[email protected] ~]# kubectl apply -f configmap-pod.yaml
pod/configmap-demo-pod created
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
configmap-demo-pod 0/1 ContainerCreating 0 7s
my-pod2 1/1 Running 3 7h3m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 108m
pod-taint 1/1 Running 5 6d6h
sh 1/1 Running 1 59m
test-76846b5956-gftn9 1/1 Running 0 55m
test-76846b5956-r7s9k 1/1 Running 0 54m
test-76846b5956-trpbn 1/1 Running 0 55m
test2-78c4694588-87b9r 1/1 Running 0 131m
web-0 1/1 Running 0 73m
web-1 1/1 Running 0 73m
web-2 1/1 Running 0 72m
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
configmap-demo-pod 1/1 Running 0 66s
my-pod2 1/1 Running 3 7h4m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 109m
pod-taint 1/1 Running 5 6d6h
sh 1/1 Running 1 59m
test-76846b5956-gftn9 1/1 Running 0 56m
test-76846b5956-r7s9k 1/1 Running 0 55m
test-76846b5956-trpbn 1/1 Running 0 56m
test2-78c4694588-87b9r 1/1 Running 0 132m
web-0 1/1 Running 0 74m
web-1 1/1 Running 0 74m
web-2 1/1 Running 0 73m
[[email protected] ~]# kubectl exec -it configmap-demo-pod -- bash
[email protected]:/# echo $NAME
adu
[email protected]:/# echo $ABCD
123
[email protected]:/# echo $CDEF
456
[email protected]:/# echo "echo \$NAME" > a.sh
[email protected]:/# bash a.sh
adu
[email protected]:/# cd /config/
[email protected]:/config# ls
redis.properties
[email protected]:/config# cat redis.properties
port: 6379
host: 10.0.0.63
[email protected]:/config# pwd
/config
[email protected]:/config#
Enter the pod to verify that the variables were injected and the volume was mounted:
# echo $ABCD
# ls /config
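Besides referencing keys one at a time with configMapKeyRef, every key of a ConfigMap can be imported as environment variables in one shot with envFrom; a sketch reusing the configmap-demo above (the pod name is hypothetical):

```yaml
# Hypothetical variant: each data key of configmap-demo becomes an env var
apiVersion: v1
kind: Pod
metadata:
  name: configmap-envfrom-pod
spec:
  containers:
  - name: demo
    image: nginx
    envFrom:
    - configMapRef:
        name: configmap-demo
```

Note that a key like redis.properties is not a valid environment variable name and would be skipped, so envFrom suits flat key-value ConfigMaps best.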
8. Sensitive data storage: Secret
Similar to ConfigMap, except that Secret is intended for sensitive data; all values are base64-encoded (an encoding, not encryption).
Application scenario: credentials
kubectl create secret supports three data types:
- docker-registry: stores image registry authentication info
- generic: created from a file, a directory, or a literal value, e.g. usernames and passwords
- tls: stores certificates, e.g. HTTPS certificates
[[email protected] ~]# kubectl create secret --help
Create a secret using specified subcommand.
Available Commands:
docker-registry Create a secret for use with a Docker registry
generic Create a secret from a local file, directory or literal value
tls Create a TLS secret
Usage:
kubectl create secret [flags] [options]
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all
commands).
Pods consume Secret data the same way as ConfigMap data.
Base64-encode the username and password:
[[email protected] ~]# echo -n 'admin' | base64
YWRtaW4=
[[email protected] ~]# echo -n '1f2d1e2e67df' | base64
MWYyZDFlMmU2N2Rm
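The encoding round-trips with base64 -d, which is handy for checking what a Secret actually contains; the -n flag matters because a stray newline would otherwise become part of the credential:

```shell
# Encode without a trailing newline, then decode to verify the round trip
echo -n 'admin' | base64        # YWRtaW4=
echo 'YWRtaW4=' | base64 -d     # admin
```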
[[email protected] ~]# vi secret.yaml
[[email protected] ~]# cat secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-user-pass
type: Opaque
data:
  username: YWRtaW4=
  password: MWYyZDFlMmU2N2Rm
[[email protected] ~]# kubectl apply -f secret.yaml
secret/db-user-pass created
[[email protected] ~]# kubectl get secret
NAME TYPE DATA AGE
db-user-pass Opaque 2 12s
default-token-8grtj kubernetes.io/service-account-token 3 29d
nfs-client-provisioner-token-s26sh kubernetes.io/service-account-token 3 151m
[[email protected] ~]# vi secret-pod.yaml
[[email protected] ~]# cat secret-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo-pod
spec:
  containers:
  - name: demo
    image: nginx
    env:
    - name: USER
      valueFrom:
        secretKeyRef:
          name: db-user-pass
          key: username
    - name: PASS
      valueFrom:
        secretKeyRef:
          name: db-user-pass
          key: password
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  - name: config
    secret:
      secretName: db-user-pass
      items:
      - key: username
        path: my-username
[[email protected] ~]# kubectl apply -f secret-pod.yaml
pod/secret-demo-pod created
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
configmap-demo-pod 1/1 Running 0 56m
my-pod2 1/1 Running 3 7h59m
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 165m
pod-taint 1/1 Running 5 6d6h
secret-demo-pod 0/1 ContainerCreating 0 10s
sh 1/1 Running 1 115m
test-76846b5956-gftn9 1/1 Running 0 111m
test-76846b5956-r7s9k 1/1 Running 0 111m
test-76846b5956-trpbn 1/1 Running 0 112m
test2-78c4694588-87b9r 1/1 Running 0 3h7m
web-0 1/1 Running 0 130m
web-1 1/1 Running 0 129m
web-2 1/1 Running 0 129m
[[email protected] ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
configmap-demo-pod 1/1 Running 0 57m
my-pod2 1/1 Running 3 8h
nfs-client-provisioner-58d675cd5-dx7n4 1/1 Running 0 165m
pod-taint 1/1 Running 5 6d6h
secret-demo-pod 1/1 Running 0 24s
sh 1/1 Running 1 116m
test-76846b5956-gftn9 1/1 Running 0 112m
test-76846b5956-r7s9k 1/1 Running 0 111m
test-76846b5956-trpbn 1/1 Running 0 112m
test2-78c4694588-87b9r 1/1 Running 0 3h8m
web-0 1/1 Running 0 130m
web-1 1/1 Running 0 130m
web-2 1/1 Running 0 129m
[[email protected] ~]# kubectl exec -it secret-demo-pod -- bash
[email protected]:/# ls
bin config docker-entrypoint.d etc lib media opt root sbin sys usr
boot dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
[email protected]:/# cd config/
[email protected]:/config# ls
my-username
[email protected]:/config# cat my-username
[email protected]:/config#
Homework:
1. Create a secret and two pods: pod1 mounts the secret at /secret; pod2 references the secret via an environment variable named ABC
- secret name: my-secret
- pod1 name: pod-volume-secret
- pod2 name: pod-env-secret
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  password: MWYyZDFlMmU2N2Rm
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-volume-secret
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: foo
      mountPath: "/secret"
  volumes:
  - name: foo
    secret:
      secretName: my-secret
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-env-secret
spec:
  containers:
  - name: nginx
    image: nginx
    env:
    - name: ABC
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: password
2. Create a PV, then create a pod that uses it
- capacity: 5Gi
- access mode: ReadWriteOnce
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: "managed-nfs-storage"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data
      mountPath: "/mnt"
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-claim
3. Create a pod with a data volume mounted; persistent volumes are not allowed
- volume source: emptyDir or hostPath
- mount path: /data
apiVersion: v1
kind: Pod
metadata:
  name: no-persistent-redis
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: cache
      mountPath: /data
  volumes:
  - name: cache
    emptyDir: {}
4、将pv按照名稱、容量排序,并儲存到/opt/pv檔案
kubectl get pv --sort-by=.metadata.name > /opt/pv
kubectl get pv --sort-by=.spec.capacity.storage > /opt/pv