
k8s cannot mount Ceph block storage

I followed Tony Bai's blog post to mount Ceph storage in k8s, but found that the pod's status stayed stuck at

ContainerCreating

1. Environment

  • Tony Bai deployed both k8s and Ceph on the same two virtual machines
  • In my environment, the k8s cluster and the Ceph storage cluster run on separate machines

For deploying the Ceph storage cluster itself, you can follow Tony Bai's post or one of the many tutorials online; here I only record the problems I ran into while mounting Ceph block storage from k8s.
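One prerequisite worth stating explicitly: the RBD image referenced by the PV must already exist on the Ceph cluster before the pod can mount it. A minimal sketch, run on a Ceph admin node, using the pool and image names from ceph-pv.yaml (the feature-disable step is an assumption: older kernel rbd clients only support the "layering" feature, and mapping fails on newer-feature images):

```shell
# Create the 1GiB image referenced by the PV (pool "rbd", image "ceph-image").
rbd create ceph-image --size 1024 --pool rbd

# If `rbd map` later fails with "image uses unsupported features", disabling
# the post-layering features is a common workaround on older kernels.
rbd feature disable rbd/ceph-image exclusive-lock object-map fast-diff deep-flatten

# Confirm the image exists and inspect its features.
rbd info rbd/ceph-image
```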

二、配置檔案

# ceph-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFEUGpCVlpnRWphREJBQUtMWFd5SVFsMzRaQ2JYMitFQW1wK2c9PQo=
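The `key` value in the Secret is the base64-encoded Ceph admin key. One way to produce it (assuming the default keyring path on a Ceph node; adjust if yours differs):

```shell
# Pull the admin key out of the keyring and base64-encode it for the Secret.
# The output should match the `key:` value in ceph-secret.yaml.
grep 'key' /etc/ceph/ceph.client.admin.keyring | awk '{print $3}' | base64
```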

##########################################
# ceph-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - :
    pool: rbd
    image: ceph-image
    keyring: /etc/ceph/ceph.client.admin.keyring
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

##########################################
# ceph-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

##########################################
# ceph-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: ceph-busybox1
    image: :/duni/busybox:latest
    command: ["sleep", "600000"]
    volumeMounts:
    - name: ceph-vol1
      mountPath: /usr/share/busybox
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim
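With the four files above in place, they are created in dependency order: the Secret before the PV that references it, the PVC before the pod that claims it (filenames as used in this post):

```shell
kubectl create -f ceph-secret.yaml
kubectl create -f ceph-pv.yaml
kubectl create -f ceph-pvc.yaml
kubectl create -f ceph-pod.yaml
```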
           

3. Finding the cause of the mount failure

Check the status of the objects:

$ kubectl get pv,pvc,pods
NAME         CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                REASON    AGE
pv/ceph-pv   1Gi        RWO           Recycle         Bound     default/ceph-claim             11s

NAME             STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
pvc/ceph-claim   Bound     ceph-pv   1Gi        RWO           10s

NAME                   READY     STATUS              RESTARTS   AGE
po/ceph-pod1           0/1       ContainerCreating   0          11s
           

The status of ceph-pod1 stays stuck at

ContainerCreating

Check the pod's events:

$ kubectl describe po/ceph-pod1

Events:
  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason      Message
  --------- --------    -----   ----            -------------   --------    ------      -------
  m        m         {default-scheduler }            Normal      Scheduled   Successfully assigned ceph-pod1 to duni-node1
  s        s         {kubelet duni-node1}            Warning     FailedMount Unable to mount volumes for pod "ceph-pod1_default(6656394a-37b6-11e7-b652-000c2932f92e)": timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]
  s        s         {kubelet duni-node1}            Warning     FailedSync  Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]
           

Then, on the k8s node where ceph-pod1 was scheduled, check the kubelet log:

$ journalctl -u  kubelet -f 

May  :: duni-node1 kubelet[]: I0513 ::     operation_executor.go:] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/e38290de-33a7-11e7-b652-000c2932f92e-default-token-91w6v" (spec.Name: "default-token-91w6v") pod "e38290de-33a7-11e7-b652-000c2932f92e" (UID: "e38290de-33a7-11e7-b652-000c2932f92e").
May  :: duni-node1 kubelet[]: E0513 ::     kubelet.go:] Unable to mount volumes for pod "ceph-pod1_default(ef4e99c4-37aa-11e7-b652-000c2932f92e)": timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]; skipping pod
May  :: duni-node1 kubelet[]: E0513 ::     pod_workers.go:] Error syncing pod ef4e99c4-37aa-11e7-b652-000c2932f92e, skipping: timeout expired waiting for volumes to attach/mount for pod "ceph-pod1"/"default". list of unattached/unmounted volumes=[ceph-vol1]
May  :: duni-node1 kubelet[]: I0513 ::     reconciler.go:] MountVolume operation started for volume "kubernetes.io/secret/ddee5d45-3490-11e7-b652-000c2932f92e-default-token-91w6v" (spec.Name: "default-token-91w6v") to pod "ddee5d45-3490-11e7-b652-000c2932f92e" (UID: "ddee5d45-3490-11e7-b652-000c2932f92e"). Volume is already mounted to pod, but remount was requested.
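The timeout message itself doesn't name a root cause. The in-tree RBD volume plugin works by shelling out to the `rbd` binary on the node, so a quick way to narrow it down is to attempt the mapping by hand on that node (a sketch; assumes the monitor address and admin credentials come from the node's ceph.conf and keyring):

```shell
# On duni-node1: check whether the binary kubelet needs even exists.
which rbd || echo "rbd not found - install ceph-common"

# If it exists, try mapping the PV's image by hand to reproduce the error.
rbd map ceph-image --pool rbd --id admin
```

In my case `rbd` was simply not installed on the node, which is exactly what a bare mount timeout in the events looks like.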

4. Solution

Install ceph-common on the k8s node machines:

yum install ceph-common
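Since any node in the cluster may be chosen by the scheduler, it's worth installing on every node and verifying that the `rbd` CLI kubelet invokes is actually on PATH (assumes CentOS/RHEL nodes with a Ceph yum repo configured):

```shell
# Run on each k8s node:
yum install -y ceph-common

# Verify the binary kubelet shells out to:
rbd --version
```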

Then delete

ceph-pod1

and re-create it; after a short wait its status becomes

Running
