
Building a Ceph Cluster on Virtual Machines (Three VMs)

1. Add each node's IP address and hostname to /etc/hosts on every node:

echo 192.168.63.141 admin-node >> /etc/hosts
echo 192.168.63.142 ceph-node1 >> /etc/hosts
....

Copy the file to the other hosts: scp /etc/hosts [email protected]:/etc/
On the admin-node host, run ssh-keygen, then copy the key to the other nodes: ssh-copy-id [email protected]
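A minimal sketch of the same distribution as a loop, assuming the three hostnames above and that the root account is used on every node (the actual account in the original is elided):

# Run on admin-node; assumes root SSH access to the other nodes.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa        # generate a key pair non-interactively
for host in ceph-node1 ceph-node2; do
    scp /etc/hosts root@${host}:/etc/hosts      # sync the hosts file
    ssh-copy-id root@${host}                    # enable passwordless SSH
done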

2. Open the ports required by the Ceph monitor, OSD, and MDS in the operating system firewall. (Run the following commands on every machine.)

Start firewalld: service firewalld start
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-all

Example:

[[email protected] ~]# service firewalld start
Redirecting to /bin/systemctl start firewalld.service
[[email protected] ~]# firewall-cmd --zone=public --add-port=6789/tcp --permanent
success
[[email protected] ~]# firewall-cmd --zone=public --add-port=6800-7100/tcp --permanent
success
[[email protected] ~]# firewall-cmd --reload
success
[[email protected] ~]# firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: e777no16736
  sources:
  services: dhcpv6-client ssh
  ports: 6789/tcp 6800-7100/tcp
  masquerade: no
  forward-ports:
  icmp-blocks:
  rich rules:

[[email protected] ~]#

3. Disable SELinux on all machines:

[[email protected] ~]# setenforce 0
[[email protected] ~]# sed -i 's/SELINUX.*=.*enforcing/SELINUX=disabled/g' /etc/selinux/config

Verify that /etc/selinux/config now contains SELINUX=disabled.

4. Install and configure the NTP service on all machines: yum -y install ntp ntpdate

[[email protected] ~]# ntpdate pool.ntp.org
 3 Sep 10:13:10 ntpdate[13011]: adjust time server 202.118.1.81 offset -0.003634 sec
[[email protected] ~]# systemctl restart ntpdate.service
[[email protected] ~]# systemctl restart ntpd.service
[[email protected] ~]# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[[email protected] ~]# systemctl enable ntpdate.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpdate.service to /usr/lib/systemd/system/ntpdate.service.

5. On all Ceph nodes, add the Ceph Giant repository and update yum:

[[email protected] ~]# rpm -Uhv http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm
Retrieving http://ceph.com/rpm-giant/el7/noarch/ceph-release-1-0.el7.noarch.rpm
warning: /var/tmp/rpm-tmp.5WFToc: Header V4 RSA/SHA1 Signature, key ID 460f3994: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:ceph-release-1-0.el7             ################################# [100%]

6. Build the cluster with ceph-deploy. Install ceph-deploy on the admin node:

yum -y install ceph-deploy
mkdir /etc/ceph
cd /etc/ceph
ceph-deploy new admin-node

Install Ceph on every node:

[[email protected] ceph]# ceph-deploy install ceph-node1 admin-node ceph-node2

7. Create the first Ceph monitor on admin-node: ceph-deploy mon create-initial

The basic deployment is now done; check the cluster status:

[[email protected] ceph]# ceph -s
    cluster 5035c6ba-96c8-4378-a086-a8b579089dd6
     health HEALTH_ERR
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 1 mons at {admin-node=192.168.63.140:6789/0}
            election epoch 2, quorum 0 admin-node
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

The cluster is reported as unhealthy, so further configuration is still needed.

8. Add OSDs. Under /var/local, run mkdir osd1 osd2 osd3 and chmod 777 osd*, then prepare and activate the first OSD:

[[email protected] ceph]# ceph-deploy --overwrite-conf osd prepare admin-node:/var/local/osd1
[[email protected] ceph]# ceph-deploy --overwrite-conf osd activate admin-node:/var/local/osd1
[[email protected] ceph]# ceph -s
    cluster 7863ef1c-1e65-4db7-af36-1310975e056e
     health HEALTH_WARN
            64 pgs degraded
            64 pgs stuck inactive
            64 pgs stuck unclean
            64 pgs undersized
     monmap e1: 3 mons at {admin-node=192.168.63.141:6789/0,ceph-node1=192.168.63.142:6789/0,ceph-node2=192.168.63.143:6789/0}
            election epoch 4, quorum 0,1,2 admin-node,ceph-node1,ceph-node2
     osdmap e5: 1 osds: 1 up, 1 in
            flags sortbitwise
      pgmap v7: 64 pgs, 1 pools, 0 bytes data, 0 objects
            6890 MB used, 10987 MB / 17878 MB avail
                  64 undersized+degraded+peered
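The remaining OSDs are added the same way. A hedged sketch for the other two nodes, assuming /var/local/osd2 and /var/local/osd3 were created (and chmod'ed) on ceph-node1 and ceph-node2 respectively:

ceph-deploy --overwrite-conf osd prepare ceph-node1:/var/local/osd2 ceph-node2:/var/local/osd3
ceph-deploy --overwrite-conf osd activate ceph-node1:/var/local/osd2 ceph-node2:/var/local/osd3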

Add another monitor:

[[email protected] ceph]# ceph-deploy mon create ceph-node2

To back an OSD with a dedicated disk, format and mount it first, then point ceph-deploy at the mounted directory:

# mkfs.xfs /dev/sdb
# mount /dev/sdb /opt/ceph/
ceph-deploy osd prepare {ceph-node}:/path/to/directory
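For example (hypothetical values, assuming the disk was mounted at /opt/ceph on ceph-node1 as above):

ceph-deploy osd prepare ceph-node1:/opt/ceph
ceph-deploy osd activate ceph-node1:/opt/ceph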

6.2.4. Mount the data partition

mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/{cluster-name}-{osd-number}

Run:


sudo mount -o user_xattr /dev/sdc1 /var/lib/ceph/osd/ceph-0

If any strange problem comes up during deployment that you cannot solve, you can simply wipe everything and start over:

# ceph-deploy purge ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2

# ceph-deploy purgedata ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2

# ceph-deploy forgetkeys

5.1 How to purge Ceph data

First purge all previous Ceph data. This step is unnecessary on a fresh install, but if you are redeploying, run the following commands:

ceph-deploy purgedata {ceph-node} [{ceph-node}]

ceph-deploy forgetkeys

Check whether the configuration succeeded:

# ceph health
HEALTH_WARN too few PGs per OSD (10 < min 30)

Increase the PG count. Use the formula Total PGs = (#OSDs * 100) / pool size to decide pg_num (pgp_num should be set to the same value as pg_num). Here that gives 20 * 100 / 2 = 1000, and Ceph recommends rounding to the nearest power of 2, so 1024 is chosen (a small shell sketch of this calculation follows the commands below). If all goes well, you should then see HEALTH_OK:

# ceph osd pool set rbd size 2
set pool 0 size to 2
# ceph osd pool set rbd min_size 2
set pool 0 min_size to 2
# ceph osd pool set rbd pg_num 1024
set pool 0 pg_num to 1024
# ceph osd pool set rbd pgp_num 1024
set pool 0 pgp_num to 1024
# ceph health
HEALTH_OK
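A minimal sketch of the pg_num calculation, assuming the values used in the text (20 OSDs, pool size 2); the variable names are illustrative, not from the original:

# Total PGs = (#OSDs * 100) / pool_size, rounded up to the next power of two
osds=20
pool_size=2
total=$(( osds * 100 / pool_size ))   # 1000
pg_num=1
while [ $pg_num -lt $total ]; do
    pg_num=$(( pg_num * 2 ))
done
echo "pg_num = pgp_num = $pg_num"     # prints 1024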

If those commands worked, remember to write the corresponding settings into ceph.conf and push the file to every node in the deployment:

# vi ceph.conf
[global]
fsid = 6349efff-764a-45ec-bfe9-ed8f5fa25186
mon_initial_members = ceph-mon1, ceph-mon2, ceph-mon3
mon_host = 192.168.2.101,192.168.2.102,192.168.2.103
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd pool default size = 2
osd pool default min size = 2
osd pool default pg num = 1024
osd pool default pgp num = 1024

# ceph-deploy admin ceph-adm ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2
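If only the configuration file needs to be re-synced (without the admin keyring), ceph-deploy also provides a config push subcommand; a hedged example using the same host names as above:

ceph-deploy --overwrite-conf config push ceph-mon1 ceph-mon2 ceph-mon3 ceph-osd1 ceph-osd2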

6.3 Remove an OSD


[ceph@node1 cluster]$ cat rmosd.sh
#!/bin/bash
###############################################################################
# Author : [email protected]
# File Name : rmosd.sh
# Description : remove the OSD with the given numeric ID from the cluster
#
###############################################################################
if [ $# != 1 ]; then
    echo "Error!";
    exit 1;
fi
ID=${1}
# stop the daemon, then remove the OSD from the CRUSH map, auth list and OSD map
sudo systemctl stop ceph-osd@${ID}
ceph osd crush remove osd.${ID}
ceph osd down ${ID}
ceph auth del osd.${ID}
ceph osd rm ${ID}
[ceph@node1 cluster]$
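To use it, pass the numeric OSD id, e.g. ./rmosd.sh 2 removes osd.2 (this assumes the script is run on a node that has admin access to the cluster).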

If yum reports "Another app is currently holding the yum lock; waiting for it to exit...", force-clear the stale yum lock: # rm -f /var/run/yum.pid
