Lab environment:
Mount ceph10.iso on the physical host:
[root@host ~]# mkdir /var/ftp/ceph
[root@host ~]# mount ceph10.iso /var/ftp/ceph/
Client VM: client 192.168.4.10/24
Physical host IP: 192.168.4.254/24
Storage cluster VMs:
node1 192.168.4.11/24
node2 192.168.4.12/24
node3 192.168.4.13/24
Steps:
1. Configure passwordless SSH:
[root@node1 ~]# ssh-keygen -f /root/.ssh/id_rsa -N ''
[root@node1 ~]# for i in 10 11 12 13; do ssh-copy-id 192.168.4.$i; done
2. Edit /etc/hosts and sync it to all VMs
[root@node1 ~]# cat /etc/hosts
… …
192.168.4.10 client
192.168.4.11 node1
192.168.4.12 node2
192.168.4.13 node3
[root@node1 ~]# for i in client node1 node2 node3; do scp /etc/hosts $i:/etc/; done
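Since the four entries share one address prefix, the /etc/hosts fragment above can be generated rather than typed by hand; a minimal sketch (writing to an illustrative /tmp path for review, not to /etc/hosts directly):

```shell
# Sketch: build the four host entries from the shared 192.168.4 prefix.
# /tmp/hosts.frag is an illustrative path; append its contents to /etc/hosts yourself.
prefix=192.168.4
{
  echo "$prefix.10 client"
  for n in 1 2 3; do
    echo "$prefix.1$n node$n"
  done
} > /tmp/hosts.frag
cat /tmp/hosts.frag
```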
3. All nodes need the packages, so configure a YUM repository and sync it to all VMs
[root@node1 ~]# vim /etc/yum.repos.d/ceph.repo
[mon]
name=mon
baseurl=ftp://192.168.4.254/ceph/MON
gpgcheck=0
[osd]
name=osd
baseurl=ftp://192.168.4.254/ceph/OSD
gpgcheck=0
[tools]
name=tools
baseurl=ftp://192.168.4.254/ceph/Tools
gpgcheck=0
[root@node1 ~]# yum repolist    # verify the repos and their package counts
[root@node1 ~]# for i in client node1 node2 node3; do scp /etc/yum.repos.d/ceph.repo $i:/etc/yum.repos.d/; done
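The same three-section repo file can also be written in one step with a heredoc; a sketch (targeting an illustrative /tmp path here so it can be reviewed before copying to /etc/yum.repos.d/):

```shell
# Sketch: generate ceph.repo via a heredoc (illustrative /tmp path).
repo=/tmp/ceph.repo
cat > "$repo" <<'EOF'
[mon]
name=mon
baseurl=ftp://192.168.4.254/ceph/MON
gpgcheck=0

[osd]
name=osd
baseurl=ftp://192.168.4.254/ceph/OSD
gpgcheck=0

[tools]
name=tools
baseurl=ftp://192.168.4.254/ceph/Tools
gpgcheck=0
EOF
grep -c '^baseurl' "$repo"   # → 3
```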
4. Configure NTP so that all VMs sync their clocks to the physical host's NTP server
[root@node1 ~]# vim /etc/chrony.conf
… …
server 192.168.4.254 iburst
[root@node1 ~]# for i in client node1 node2 node3; do scp /etc/chrony.conf $i:/etc/; ssh $i "systemctl restart chronyd"; done
5. On the physical host, prepare three 20 GB disks for each storage cluster VM (they can be added directly through the graphical tool).
6. Install the deployment tool ceph-deploy
1). [root@node1 ~]# yum -y install ceph-deploy    # install the ceph-deploy tool
2). [root@node1 ~]# mkdir ceph-cluster    # create a working directory
[root@node1 ~]# cd ceph-cluster/    # change into it
7. Deploy the ceph cluster
1). Create the cluster: this generates the ceph configuration file in the ceph-cluster directory; ceph.conf defines which hosts act as monitors.
[root@node1 ceph-cluster]# ceph-deploy new node1 node2 node3
2). Install the ceph packages on all nodes
[root@node1 ceph-cluster]# for i in node1 node2 node3; do ssh $i "yum -y install ceph-mon ceph-osd ceph-mds ceph-radosgw"; done
3). Initialize the monitor service on all nodes (this also starts the monitors)
[root@node1 ceph-cluster]# ceph-deploy mon create-initial
8. Create the OSDs
1). Check the disk status
[root@node1 ~]# lsblk    # confirm the three 20 GB disks are present [vdb, vdc, vdd]
2). On every storage cluster VM, split disk vdb into two partitions (vdb1 and vdb2) to serve as the storage servers' journal/cache disks
[root@node1 ceph-cluster]# for i in node1 node2 node3; do ssh $i "parted /dev/vdb mklabel gpt"; ssh $i "parted /dev/vdb mkpart primary 1 50%"; ssh $i "parted /dev/vdb mkpart primary 50% 100%"; done
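Because mklabel/mkpart are destructive, it can help to print the full command list before executing it; a dry-run sketch of the partitioning loop above:

```shell
# Dry run: list every parted invocation the loop above will perform.
plan=$(for host in node1 node2 node3; do
  for step in "mklabel gpt" "mkpart primary 1 50%" "mkpart primary 50% 100%"; do
    echo "ssh $host \"parted /dev/vdb $step\""
  done
done)
echo "$plan"   # 9 commands: 3 hosts x 3 parted steps
```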
3). The default permissions on the new partitions do not let the ceph daemons read and write them, so adjust the ownership on every storage cluster VM.
[root@node1 ceph-cluster]# vim /etc/udev/rules.d/70-vdb.rules
ENV{DEVNAME}=="/dev/vdb1",OWNER="ceph",GROUP="ceph"
ENV{DEVNAME}=="/dev/vdb2",OWNER="ceph",GROUP="ceph"    # the udev rule makes the change persistent
[root@node1 ceph-cluster]# for i in node1 node2 node3; do ssh $i "chown ceph:ceph /dev/vdb1 /dev/vdb2"; scp /etc/udev/rules.d/70-vdb.rules $i:/etc/udev/rules.d/; done
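The two udev lines are easy to get wrong: the match operator must be `==`, not the assignment `=`. A sketch that writes the rules to a temp file (the /tmp path is illustrative) and checks the operator:

```shell
# Sketch: generate the udev rules and confirm both lines use the '==' match operator.
rules=/tmp/70-vdb.rules
cat > "$rules" <<'EOF'
ENV{DEVNAME}=="/dev/vdb1",OWNER="ceph",GROUP="ceph"
ENV{DEVNAME}=="/dev/vdb2",OWNER="ceph",GROUP="ceph"
EOF
grep -c '=="' "$rules"   # → 2
```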
4). Zap (wipe) the data disks (run from node1 only)
[root@node1 ceph-cluster]# ceph-deploy disk zap node1:vdc node1:vdd
[root@node1 ceph-cluster]# ceph-deploy disk zap node2:vdc node2:vdd
[root@node1 ceph-cluster]# ceph-deploy disk zap node3:vdc node3:vdd
5). Create the OSD storage (run from node1 only)
[root@node1 ceph-cluster]# ceph-deploy osd create node1:vdc:/dev/vdb1 node1:vdd:/dev/vdb2
[root@node1 ceph-cluster]# ceph-deploy osd create node2:vdc:/dev/vdb1 node2:vdd:/dev/vdb2
[root@node1 ceph-cluster]# ceph-deploy osd create node3:vdc:/dev/vdb1 node3:vdd:/dev/vdb2
6). Verify
[root@node1 ~]# ceph -s
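A quick arithmetic check for the `ceph -s` output: each node contributes two OSDs (data on vdc and vdd, journals on vdb1/vdb2), so the cluster should report 6 OSDs up and in; sketch:

```shell
# Expected OSD count: 3 storage nodes x 2 data disks (vdc, vdd) each.
nodes=3
osds_per_node=2
total=$((nodes * osds_per_node))
echo "expect $total osds up/in from 'ceph -s'"
```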
9. Create ceph block storage
1). Create images
[root@node1 ~]# ceph osd lspools    # list the storage pools
[root@node1 ~]# rbd create demo-image --image-feature layering --size 10G    # create the demo-image image
[root@node1 ~]# rbd create image --image-feature layering --size 10G    # create the image image
# --image-feature selects which features the new image has; layering enables copy-on-write (COW) cloning.
[root@node1 ~]# rbd list    # list the existing images
[root@node1 ~]# rbd info demo-image    # show details of a single image
1.1). Online resize: shrink
[root@node1 ~]# rbd resize --size 7G image --allow-shrink
1.2). Online resize: grow
[root@node1 ~]# rbd resize --size 15G image
10. Client access via KRBD
# the client needs the ceph-common package
# copy the config file (otherwise the client cannot find the cluster)
# copy the admin keyring (otherwise the client has no permission to connect)
[root@client ~]# yum -y install ceph-common
[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.conf /etc/ceph/
[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
[root@client ~]# rbd map image
[root@client ~]# lsblk
[root@client ~]# rbd showmapped    # list the mapped images
11. Format and mount the device on the client
[root@client ~]# mkfs.xfs /dev/rbd0
[root@client ~]# mount /dev/rbd0 /mnt/
[root@client ~]# echo "test" > /mnt/test.txt    # create a test file on the mount point
12. Create an image snapshot
1). [root@node1 ~]# rbd snap create image --snap image-snap1    # snapshot the image image; the snapshot is named image-snap1
2). [root@node1 ~]# rbd snap ls image    # list the image's snapshots
3). [root@client ~]# rm -rf /mnt/test.txt    # delete the test file the client wrote
[root@client ~]# umount /mnt    # the image must be unmounted before rolling back
4). Restore the data from the snapshot
[root@node1 ~]# rbd snap rollback image --snap image-snap1
[root@client ~]# mount /dev/rbd0 /mnt/    # the client re-mounts the partition
[root@client ~]# ls /mnt    # the test file "accidentally" deleted earlier has been restored
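When the lab is finished, tear-down has to happen in order: unmount before unmapping, and unmap before removing the snapshot and image. A hedged dry-run sketch of that order, echoed rather than executed:

```shell
# Dry run: tear-down order for the client-side RBD setup above.
teardown=$(printf '%s\n' \
  "umount /mnt" \
  "rbd unmap /dev/rbd0" \
  "rbd snap rm image --snap image-snap1" \
  "rbd rm image")
echo "$teardown"
```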