1. mkdir schedule creates a working directory for these tests. vim pod-demo.yaml edits the manifest. cat pod-demo.yaml shows its contents (the key part is the nodeSelector field).
[root@master manifests]# mkdir schedule
[root@master manifests]# cd schedule
[root@master schedule]# cp ../pod-demo.yaml .
[root@master schedule]# vim pod-demo.yaml
[root@master schedule]# cat pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    example.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  nodeSelector:
    disktype: ssd
2. kubectl apply -f pod-demo.yaml creates the pod. kubectl get pods -o wide shows which node it was scheduled to. kubectl get nodes --show-labels | grep ssd filters the node list by label to confirm the placement.
[root@master schedule]# kubectl apply -f pod-demo.yaml
pod/pod-demo created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
pod-demo 1/1 Running 0 3m 10.244.1.3 node1.example.com
[root@master schedule]# kubectl get nodes --show-labels | grep ssd
node1.example.com Ready <none> 5d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=node1.example.com
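The nodeSelector check above boils down to a label-subset test: a node is eligible only if every key/value pair in the pod's nodeSelector appears in the node's labels. A minimal Python sketch of that predicate, using the node labels from the transcript (the real logic lives inside kube-scheduler):

```python
def matches_node_selector(node_labels, node_selector):
    """A node qualifies only if it carries every selector key/value pair."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

# Labels as shown by `kubectl get nodes --show-labels` above.
node1 = {"kubernetes.io/hostname": "node1.example.com", "disktype": "ssd"}
node2 = {"kubernetes.io/hostname": "node2.example.com", "disktype": "harddisk"}

selector = {"disktype": "ssd"}
print(matches_node_selector(node1, selector))  # True  -> pod lands on node1
print(matches_node_selector(node2, selector))  # False -> node2 is filtered out
```

The same check explains step 3 below: change the selector to a label no node carries, and every node fails the test, leaving the pod Pending.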
3. kubectl delete -f pod-demo.yaml removes the pod. vim pod-demo.yaml edits the manifest, and cat pod-demo.yaml | grep disktype confirms the disktype value was changed. kubectl get nodes --show-labels | grep harddisk shows that no node carries the new label. kubectl apply -f pod-demo.yaml recreates the pod, and kubectl get pods -o wide shows it stuck in Pending. kubectl describe pods pod-demo | grep -i events -A4 inspects the events and reveals the scheduling failure.
[root@master schedule]# kubectl delete -f pod-demo.yaml
pod "pod-demo" deleted
[root@master schedule]# vim pod-demo.yaml
[root@master schedule]# cat pod-demo.yaml | grep disktype
    disktype: harddisk
[root@master schedule]# kubectl get nodes --show-labels | grep harddisk
[root@master schedule]# kubectl apply -f pod-demo.yaml
pod/pod-demo created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
pod-demo 0/1 Pending 0 11s <none> <none>
[root@master schedule]# kubectl describe pods pod-demo | grep -i events -A4
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 1s (x16 over 48s) default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.
4. kubectl label nodes node2.example.com disktype=harddisk labels the node by hand. kubectl get pods -o wide shows the pod is now scheduled successfully. kubectl delete -f pod-demo.yaml cleans up.
[root@master schedule]# kubectl label nodes node2.example.com disktype=harddisk
node/node2.example.com labeled
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
pod-demo 1/1 Running 0 3m 10.244.2.3 node2.example.com
[root@master schedule]# kubectl delete -f pod-demo.yaml
pod "pod-demo" deleted
5. vim pod-nodeaffinity-demo.yaml edits the manifest. cat pod-nodeaffinity-demo.yaml shows it (note that requiredDuringSchedulingIgnoredDuringExecution is a hard requirement). kubectl apply -f pod-nodeaffinity-demo.yaml creates the pod. kubectl get pods -o wide shows it Pending because no node matches.
[root@master schedule]# cp pod-demo.yaml pod-nodeaffinity-demo.yaml
[root@master schedule]# vim pod-nodeaffinity-demo.yaml
[root@master schedule]# cat pod-nodeaffinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
[root@master schedule]# kubectl apply -f pod-nodeaffinity-demo.yaml
pod/pod-node-affinity-demo created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
pod-node-affinity-demo 0/1 Pending 0 18s <none> <none>
6. vim pod-nodeaffinity-demo-2.yaml edits the manifest. cat pod-nodeaffinity-demo-2.yaml shows it (note that preferredDuringSchedulingIgnoredDuringExecution is only a soft preference). kubectl apply -f pod-nodeaffinity-demo-2.yaml creates the pod. kubectl get pods -o wide shows it can be scheduled anyway.
[root@master schedule]# cp pod-nodeaffinity-demo.yaml pod-nodeaffinity-demo-2.yaml
[root@master schedule]# vim pod-nodeaffinity-demo-2.yaml
[root@master schedule]# cat pod-nodeaffinity-demo-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-node-affinity-demo-2
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: zone
            operator: In
            values:
            - foo
            - bar
        weight: 60
[root@master schedule]# kubectl apply -f pod-nodeaffinity-demo-2.yaml
pod/pod-node-affinity-demo-2 created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
pod-node-affinity-demo 0/1 Pending 0 7m <none> <none>
pod-node-affinity-demo-2 1/1 Running 0 11s 10.244.2.4 node2.example.com
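Both affinity flavors evaluate the same matchExpressions; the difference is what happens when nothing matches. With required, the pod stays Pending; with preferred, the expression only influences ranking and the scheduler falls back to any node. A toy Python sketch of the In operator and this required/preferred split (a simplification: the real evaluator also supports NotIn, Exists, Gt, Lt, and combines weights across terms):

```python
def match_in(node_labels, key, values):
    """The In operator: the node's label for `key` must be one of `values`."""
    return node_labels.get(key) in values

def schedule(nodes, key, values, required):
    """Toy scheduler: filter on the expression if required, else prefer it."""
    matching = [n for n, labels in nodes.items() if match_in(labels, key, values)]
    if matching:
        return matching[0]
    # required: no match means Pending; preferred: fall back to any node
    return None if required else next(iter(nodes))

# As in the transcript, no node carries zone=foo or zone=bar.
nodes = {"node1.example.com": {"disktype": "ssd"},
         "node2.example.com": {"disktype": "harddisk"}}
print(schedule(nodes, "zone", ["foo", "bar"], required=True))   # None -> Pending
print(schedule(nodes, "zone", ["foo", "bar"], required=False))  # falls back to a node
```

This mirrors the output above: the required demo pod stays Pending while the preferred demo pod runs.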
7. vim pod-required-affinity-demo.yaml edits the manifest. cat pod-required-affinity-demo.yaml shows it (note the affinity block: pod-second must run in the same kubernetes.io/hostname domain as a pod labeled app=myapp). kubectl apply -f pod-required-affinity-demo.yaml creates both pods. kubectl get pods -o wide shows them on the same node. kubectl describe pods pod-second | grep -i event -A6 shows the scheduling events. kubectl delete -f pod-required-affinity-demo.yaml cleans up.
[root@master schedule]# cp pod-demo.yaml pod-required-affinity-demo.yaml
[root@master schedule]# vim pod-required-affinity-demo.yaml
[root@master schedule]# cat pod-required-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname
[root@master schedule]# kubectl apply -f pod-required-affinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
pod-first 1/1 Running 0 1m 10.244.2.9 node2.example.com
pod-second 1/1 Running 0 1m 10.244.2.10 node2.example.com
[root@master schedule]# kubectl describe pods pod-second | grep -i event -A6
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m default-scheduler Successfully assigned default/pod-second to node2.example.com
Normal Pulled 3m kubelet, node2.example.com Container image "busybox:latest" already present on machine
Normal Created 3m kubelet, node2.example.com Created container
Normal Started 3m kubelet, node2.example.com Started container
[root@master schedule]# kubectl delete -f pod-required-affinity-demo.yaml
pod "pod-first" deleted
pod "pod-second" deleted
8. vim pod-required-anti-affinity-demo.yaml edits the manifest. cat pod-required-anti-affinity-demo.yaml shows it (this time podAntiAffinity is used). kubectl apply -f pod-required-anti-affinity-demo.yaml creates the pods (the two pods now land on different nodes). kubectl get pods -o wide confirms the placement.
[root@master schedule]# cp pod-required-affinity-demo.yaml pod-required-anti-affinity-demo.yaml
[root@master schedule]# vim pod-required-anti-affinity-demo.yaml
[root@master schedule]# cat pod-required-anti-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: kubernetes.io/hostname
[root@master schedule]# kubectl apply -f pod-required-anti-affinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
pod-first 1/1 Running 0 17s 10.244.2.11 node2.example.com
pod-second 1/1 Running 0 17s 10.244.1.9 node1.example.com
[root@master schedule]# kubectl delete -f pod-required-anti-affinity-demo.yaml
pod "pod-first" deleted
pod "pod-second" deleted
9. kubectl get nodes --show-labels lists the node labels. kubectl label nodes node1.example.com zone=foo and kubectl label nodes node2.example.com zone=foo put both nodes in the same zone. vim pod-required-anti-affinity-demo.yaml edits the manifest, and cat pod-required-anti-affinity-demo.yaml | grep topologyKey shows the topologyKey is now zone. kubectl apply -f pod-required-anti-affinity-demo.yaml creates the pods. kubectl get pods -o wide shows the second pod Pending: with both nodes in one zone, the anti-affinity rule leaves no eligible node. kubectl delete -f pod-required-anti-affinity-demo.yaml cleans up.
[root@master schedule]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master.example.com Ready master 6d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master.example.com,node-role.kubernetes.io/master=
node1.example.com Ready <none> 6d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=node1.example.com
node2.example.com Ready <none> 6d v1.11.1 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=harddisk,kubernetes.io/hostname=node2.example.com
[root@master schedule]# kubectl label nodes node1.example.com zone=foo
node/node1.example.com labeled
[root@master schedule]# kubectl label nodes node2.example.com zone=foo
node/node2.example.com labeled
[root@master schedule]# vim pod-required-anti-affinity-demo.yaml
[root@master schedule]# cat pod-required-anti-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-first
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-second
  labels:
    app: backend
    tier: db
spec:
  containers:
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sh","-c","sleep 3600"]
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app, operator: In, values: ["myapp"]}
        topologyKey: zone
[root@master schedule]# cat pod-required-anti-affinity-demo.yaml | grep topologyKey
        topologyKey: zone
[root@master schedule]# kubectl apply -f pod-required-anti-affinity-demo.yaml
pod/pod-first created
pod/pod-second created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
pod-first 1/1 Running 0 6m 10.244.2.12 node2.example.com
pod-second 0/1 Pending 0 6m <none> <none>
[root@master schedule]# kubectl delete -f pod-required-anti-affinity-demo.yaml
pod "pod-first" deleted
pod "pod-second" deleted
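topologyKey decides the granularity of "apart": nodes that share the same value for that label form one topology domain, and required anti-affinity forbids co-scheduling within a domain. A Python sketch (helper name invented for illustration) of why kubernetes.io/hostname spread the pods in step 8 but zone=foo on both nodes strands pod-second here:

```python
def allowed_nodes_anti(nodes, peer_node, topology_key):
    """Nodes whose topology domain differs from the node running the peer pod."""
    peer_domain = nodes[peer_node].get(topology_key)
    return [n for n, labels in nodes.items()
            if labels.get(topology_key) != peer_domain]

nodes = {
    "node1.example.com": {"kubernetes.io/hostname": "node1.example.com", "zone": "foo"},
    "node2.example.com": {"kubernetes.io/hostname": "node2.example.com", "zone": "foo"},
}
# pod-first (app=myapp) runs on node2; pod-second must avoid its domain.
print(allowed_nodes_anti(nodes, "node2.example.com", "kubernetes.io/hostname"))
# ['node1.example.com']  -> the pods split across the two nodes
print(allowed_nodes_anti(nodes, "node2.example.com", "zone"))
# []                     -> both nodes share zone=foo, so pod-second stays Pending
```

With hostname every node is its own domain, so there is always "somewhere else"; with zone, the two workers collapse into one domain and the hard rule cannot be satisfied.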
10. kubectl describe node master.example.com | grep -i taints shows the master's taints. kubectl get pods -n kube-system -o wide | grep api locates the apiserver pod. kubectl describe pods kube-apiserver-master.example.com -n kube-system | grep -i tolerations shows its tolerations (Tolerations: :NoExecute tolerates every taint). kubectl get pods -n kube-system -o wide | grep proxy locates the kube-proxy pods. kubectl describe pods kube-proxy-r4j2h -n kube-system | grep -i tolerations -A8 shows their tolerations. kubectl get nodes master.example.com -o yaml | grep -i taints -A2 dumps the master's taint in YAML form.
[root@master schedule]# kubectl describe node master.example.com | grep -i taints
Taints: node-role.kubernetes.io/master:NoSchedule
[root@master schedule]# kubectl get pods -n kube-system -o wide | grep api
kube-apiserver-master.example.com 1/1 Running 8 6d 172.20.0.128 master.example.com
[root@master schedule]# kubectl describe pods kube-apiserver-master.example.com -n kube-system | grep -i tolerations
Tolerations: :NoExecute
[root@master schedule]# kubectl get pods -n kube-system -o wide | grep proxy
kube-proxy-56hs9 1/1 Running 6 6d 172.20.0.129 node1.example.com
kube-proxy-r4j2h 1/1 Running 11 6d 172.20.0.128 master.example.com
kube-proxy-t985x 1/1 Running 10 6d 172.20.0.130 node2.example.com
[root@master schedule]# kubectl describe pods kube-proxy-r4j2h -n kube-system | grep -i tolerations -A8
Tolerations:
  CriticalAddonsOnly
  node.kubernetes.io/disk-pressure:NoSchedule
  node.kubernetes.io/memory-pressure:NoSchedule
  node.kubernetes.io/not-ready:NoExecute
  node.kubernetes.io/unreachable:NoExecute
Events: <none>
[root@master schedule]# kubectl get nodes master.example.com -o yaml | grep -i taints -A2
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
11. kubectl taint node node1.example.com node-type=production:NoSchedule taints the node (NoSchedule only blocks new scheduling). vim deploy-demo.yaml edits the manifest. cat deploy-demo.yaml shows it (note: no tolerations yet). kubectl apply -f deploy-demo.yaml creates the Deployment. kubectl get pods -o wide shows all pods running on the other node. kubectl taint node node2.example.com node-type=dev:NoExecute taints node2 (NoExecute also evicts running pods). kubectl get pods -o wide shows the pods Pending, since no node will now accept them.
[root@master schedule]# kubectl taint node node1.example.com node-type=production:NoSchedule
node/node1.example.com tainted
[root@master schedule]# cp ../deploy-demo.yaml .
[root@master schedule]# vim deploy-demo.yaml
[root@master schedule]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
[root@master schedule]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy created
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-67f6f6b4dc-dtdgn 1/1 Running 0 9s 10.244.2.14 node2.example.com
myapp-deploy-67f6f6b4dc-fbrr7 1/1 Running 0 9s 10.244.2.13 node2.example.com
myapp-deploy-67f6f6b4dc-r6ccw 1/1 Running 0 9s 10.244.2.15 node2.example.com
[root@master schedule]# kubectl taint node node2.example.com node-type=dev:NoExecute
node/node2.example.com tainted
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-67f6f6b4dc-2vnxl 0/1 Pending 0 27s <none> <none>
myapp-deploy-67f6f6b4dc-7mvtz 0/1 Pending 0 27s <none> <none>
myapp-deploy-67f6f6b4dc-ffrrq 0/1 Pending 0 27s <none> <none>
12. vim deploy-demo.yaml edits the manifest. cat deploy-demo.yaml shows the added tolerations block. kubectl apply -f deploy-demo.yaml updates the Deployment. kubectl get pods -o wide shows the pods still Pending: the toleration names effect NoExecute, which does not match node1's NoSchedule taint, so node1 still rejects the pods.
[root@master schedule]# vim deploy-demo.yaml
[root@master schedule]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Equal"
        value: "production"
        effect: "NoExecute"
        tolerationSeconds: 3600
[root@master schedule]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy configured
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-67f6f6b4dc-2vnxl 0/1 Pending 0 6m <none> <none>
myapp-deploy-67f6f6b4dc-7mvtz 0/1 Pending 0 6m <none> <none>
myapp-deploy-67f6f6b4dc-ffrrq 0/1 Pending 0 6m <none> <none>
myapp-deploy-77fb48ff96-xsh5s 0/1 Pending 0 9s <none> <none>
13. vim deploy-demo.yaml edits the manifest. cat deploy-demo.yaml shows the toleration's effect changed to NoSchedule. kubectl apply -f deploy-demo.yaml re-applies the Deployment. kubectl get pods -o wide shows the pods Running, but only on node1, because node2's NoExecute taint is still not tolerated.
[root@master schedule]# vim deploy-demo.yaml
[root@master schedule]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Equal"
        value: "production"
        effect: "NoSchedule"
[root@master schedule]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy configured
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-65cc47f858-hwsjd 1/1 Running 0 4m 10.244.1.10 node1.example.com
myapp-deploy-65cc47f858-nmdr9 1/1 Running 0 4m 10.244.1.11 node1.example.com
myapp-deploy-65cc47f858-rbtc7 1/1 Running 0 4m 10.244.1.12 node1.example.com
14. vim deploy-demo.yaml edits the manifest. cat deploy-demo.yaml shows the toleration changed to an existence check: operator Exists with an empty value and effect tolerates any node-type taint regardless of its value or effect. kubectl apply -f deploy-demo.yaml re-applies the Deployment. kubectl get pods -o wide shows the pods now spread across both nodes.
[root@master schedule]# vim deploy-demo.yaml
[root@master schedule]# cat deploy-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
      tolerations:
      - key: "node-type"
        operator: "Exists"
        value: ""
        effect: ""
[root@master schedule]# kubectl apply -f deploy-demo.yaml
deployment.apps/myapp-deploy configured
[root@master schedule]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
myapp-deploy-5d9c6985f5-g4cwf 1/1 Running 0 18s 10.244.1.13 node1.example.com
myapp-deploy-5d9c6985f5-jwn56 1/1 Running 0 17s 10.244.2.17 node2.example.com
myapp-deploy-5d9c6985f5-qm7g2 1/1 Running 0 20s 10.244.2.16 node2.example.com
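Steps 11-14 all come down to one matching rule: a node accepts a pod only if every taint on the node is covered by some toleration, where Equal requires key, value, and effect to line up, Exists ignores the value, and an empty effect matches any effect. A simplified Python sketch of that rule, replaying the transcript's toleration variants (it treats scheduling and eviction as one check; in Kubernetes the scheduler enforces NoSchedule and the kubelet enforces NoExecute eviction):

```python
def tolerates(taint, tol):
    """Does a single toleration cover a single taint?"""
    if tol.get("key") and tol["key"] != taint["key"]:
        return False
    if tol.get("effect") and tol["effect"] != taint["effect"]:
        return False  # an empty effect tolerates any effect
    if tol.get("operator", "Equal") == "Equal" and tol.get("value") != taint["value"]:
        return False  # Exists skips the value comparison
    return True

def schedulable(taints, tolerations):
    """A node accepts the pod only if every taint is tolerated."""
    return all(any(tolerates(t, tol) for tol in tolerations) for t in taints)

node1 = [{"key": "node-type", "value": "production", "effect": "NoSchedule"}]
node2 = [{"key": "node-type", "value": "dev", "effect": "NoExecute"}]

no_tol   = []                                                                          # step 11
wrong_fx = [{"key": "node-type", "operator": "Equal", "value": "production", "effect": "NoExecute"}]   # step 12
right_fx = [{"key": "node-type", "operator": "Equal", "value": "production", "effect": "NoSchedule"}]  # step 13
exists   = [{"key": "node-type", "operator": "Exists", "value": "", "effect": ""}]     # step 14

print(schedulable(node1, no_tol), schedulable(node2, no_tol))      # False False -> all Pending
print(schedulable(node1, wrong_fx))                                # False -> still Pending
print(schedulable(node1, right_fx), schedulable(node2, right_fx))  # True False -> node1 only
print(schedulable(node1, exists), schedulable(node2, exists))      # True True  -> both nodes
```

The four print lines reproduce the four outcomes above: no toleration leaves every pod Pending, a toleration with the wrong effect still fails, the NoSchedule toleration opens up node1 only, and the Exists catch-all admits both nodes.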