1. Create a storage pool for Kubernetes on Ceph
# ceph osd pool create k8s 128
2. Create the k8s user
# ceph auth get-or-create client.k8s mon 'allow r' osd 'allow rwx pool=k8s' -o ceph.client.k8s.keyring
3. Base64-encode the k8s user's key
This is the key Kubernetes uses to access Ceph; it will be stored in a Kubernetes Secret.
# grep key ceph.client.k8s.keyring | awk '{printf "%s", $NF}' | base64
VBGFaeN3OWJYdUZPSHhBQTNrU2E2QlUyaEF5UUV0SnNPRHdXeRT8PQ==
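The encoding step can be sanity-checked locally: decoding the Secret value must reproduce the raw key byte-for-byte. A minimal sketch (the key below is a made-up placeholder, not a real Ceph key):

```shell
# Hypothetical key - substitute the "key = ..." value from ceph.client.k8s.keyring
RAW_KEY='AQBTestOnlyPlaceholderKeyForIllustration=='
# Encode exactly as the step above does: no trailing newline before base64
ENCODED=$(printf '%s' "$RAW_KEY" | base64)
# Round-trip: Kubernetes base64-decodes the Secret, so this must match the raw key
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
[ "$DECODED" = "$RAW_KEY" ] && echo "base64 round-trip OK"
```

If the key were long enough for base64 to wrap its output, add `-w 0` (GNU coreutils) so the Secret value stays on one line.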
4. Create the Secret for accessing Ceph in Kubernetes
# echo 'apiVersion: v1
kind: Secret
metadata:
  name: ceph-k8s-secret
type: "kubernetes.io/rbd"
data:
  key: VBGFaeN3OWJYdUZPSHhBQTNrU2E2QlUyaEF5UUV0SnNPRHdXeRT8PQ==
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  key: VBGFaeN3OWJYdUZPSHhBQTNrU2E2QlUyaEF5UUV0SnNPRHdXeRT8PQ==' | kubectl create -f -
5. Copy the Ceph keyring to the Kubernetes worker nodes
When a Pod is created, kubelet invokes the rbd command to detect and map the rbd image backing the PVC, so the rbd binary and the Ceph keyring must be present on every kubelet node; otherwise kubelet may report all sorts of Ceph-related errors during Pod creation.
If kubelet runs natively on the worker node, it is enough to install the ceph-common package and copy the keyring to /etc/ceph/; if kubelet runs inside a container, both steps must be performed inside that container.
In our environment kubelet runs in an rkt container whose official image already includes the Ceph client, so we only need to make the keyring visible inside the container.
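The two prerequisites can be checked with a short script before touching kubelet. This is a minimal sketch assuming the default paths used in this article; it only reports status and changes nothing:

```shell
# Verify the prerequisites kubelet needs to map rbd images on this node.
KEYRING=/etc/ceph/ceph.client.k8s.keyring

# The rbd binary comes from the ceph-common package
if command -v rbd >/dev/null 2>&1; then
  RBD_STATUS="present"
else
  RBD_STATUS="missing (install ceph-common)"
fi

# The keyring must be readable at the path the kubelet environment expects
if [ -f "$KEYRING" ]; then
  KEYRING_STATUS="present"
else
  KEYRING_STATUS="missing (copy ceph.client.k8s.keyring to /etc/ceph/)"
fi

echo "rbd client: $RBD_STATUS"
echo "keyring:    $KEYRING_STATUS"
```

If kubelet runs in a container, run the same checks inside that container rather than on the host.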
We manage kubelet with systemd, starting it as a service that runs an rkt container. Edit /etc/systemd/system/kubelet.service and add a volume/mount pair for the keyring (then apply the change with systemctl daemon-reload followed by systemctl restart kubelet):
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=load-images.service
Requires=docker.service
[Service]
EnvironmentFile=/etc/cluster-envs
Environment=KUBELET_IMAGE_TAG=v1.7.10
Environment="RKT_RUN_ARGS= \
--volume ceph-keyring,kind=host,source=/etc/ceph/ceph.client.k8s.keyring \
--mount volume=ceph-keyring,target=/etc/ceph/ceph.client.k8s.keyring \
--volume modprobe,kind=host,source=/usr/sbin/modprobe \
--mount volume=modprobe,target=/usr/sbin/modprobe \
--volume lib-modules,kind=host,source=/lib/modules \
--mount volume=lib-modules,target=/lib/modules"
ExecStartPre=/usr/bin/mkdir -p /etc/ceph
ExecStart=/opt/bin/kubelet-wrapper \
--address=0.0.0.0 \
--allow-privileged=true \
--cluster-dns=192.168.192.10 \
--cluster-domain=cluster.local \
--cloud-provider='' \
--port=10250 \
--lock-file=/var/run/lock/kubelet.lock \
--exit-on-lock-contention \
--node-labels=worker=true \
--pod-manifest-path=/etc/kubernetes/manifests \
--kubeconfig=/etc/kubernetes/kubeconfig.yaml \
--require-kubeconfig=true \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--logtostderr=true
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
6. Create the ceph-rbd StorageClass in Kubernetes
# echo 'apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.32.24.11:6789,10.32.24.12:6789,10.32.24.13:6789
  adminId: k8s
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: k8s
  userId: k8s
  userSecretName: ceph-k8s-secret' | kubectl create -f -
7. Make ceph-rbd the default StorageClass
# kubectl patch storageclass ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Note that a cluster may have only one default StorageClass; marking several as default at the same time is equivalent to having none. In the StorageClass list, the default class carries the (default) marker:
# kubectl get storageclass
NAME                 TYPE
ceph-rbd (default)   kubernetes.io/rbd
ceph-sas             kubernetes.io/rbd
ceph-ssd             kubernetes.io/rbd
8. Create a PersistentVolumeClaim
# echo 'apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-test-vol1-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 10Gi' | kubectl create -f -
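Multi-line echo commands are easy to break with quoting (curly quotes picked up from a rendered page will not terminate the shell string). As an alternative sketch, a quoted heredoc produces the same manifest without quoting pitfalls and lets you inspect it before handing it to kubectl; the temp path is arbitrary:

```shell
# Write the PVC manifest from this step to a temp file
cat > /tmp/nginx-test-vol1-claim.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-test-vol1-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-rbd
  resources:
    requests:
      storage: 10Gi
EOF

# Minimal checks: the claim targets the ceph-rbd class and asks for 10Gi
grep -q 'storageClassName: ceph-rbd' /tmp/nginx-test-vol1-claim.yaml && echo "class OK"
grep -q 'storage: 10Gi' /tmp/nginx-test-vol1-claim.yaml && echo "size OK"
```

When the file looks right, submit it with kubectl create -f /tmp/nginx-test-vol1-claim.yaml.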
Because a default StorageClass has been set, storageClassName could in fact be omitted here.
9. Create a Pod that uses the PVC
# echo 'apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
    - name: nginx
      image: nginx:latest
      volumeMounts:
        - name: nginx-test-vol1
          mountPath: /data/
          readOnly: false
  volumes:
    - name: nginx-test-vol1
      persistentVolumeClaim:
        claimName: nginx-test-vol1-claim' | kubectl create -f -
10. Check the container status
kubectl get pvc should now show the claim as Bound, and inside the container the rbd image is mounted at /data:
# kubectl exec nginx-test -it -- /bin/bash
[root@nginx-test ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/rbd0 50G 52M 47G 1% /data