
Installing Ceph 0.80.1 from Source

Node roles:

ceph-node1   mon.0, mds.0

ceph-node4   osd (osd.0, osd.1), client

ceph-node5   osd (osd.2, osd.3), client

1. Install required support packages

yum install automake autoconf boost-devel

yum install fuse-devel libtool libuuid-devel

yum install libblkid-devel keyutils-libs-devel

yum install cryptopp-devel fcgi-devel libcurl-devel

Installing cryptopp-devel (from RPMforge):

cryptopp: rpm -ivh http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/cryptopp-5.5.2-1.el6.rf.x86_64.rpm

cryptopp-devel: rpm -ivh http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/cryptopp-devel-5.5.2-1.el6.rf.x86_64.rpm

Installing fcgi-devel:

epel-release installs a yum repository that provides the ceph-related packages: rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

In the epel.repo file, uncomment the baseurl lines and comment out the mirrorlist lines.
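That repo-file edit can be scripted. A minimal sketch (the helper name is mine; the default path is the standard location written by the epel-release RPM):

```shell
#!/bin/sh
# Flip an EPEL repo file to use baseurl instead of mirrorlist.
fix_epel_repo() {
    repo="${1:-/etc/yum.repos.d/epel.repo}"
    sed -i -e 's/^#baseurl=/baseurl=/' \
           -e 's/^mirrorlist=/#mirrorlist=/' "$repo"
}
```

Run `fix_epel_repo` (or pass an alternative path), then install fcgi-devel with yum.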

yum install expat-devel gperftools-devel libedit-devel

yum install libatomic_ops-devel snappy-devel leveldb-devel

yum install libaio-devel xfsprogs-devel

yum install libudev-devel btrfs-progs

2. Build the source package

Source tarball: ceph-0.80.1.tar.gz

Build and install commands:

tar -zxf ceph-0.80.1.tar.gz -C /root

cd /root/ceph-0.80.1

./autogen.sh

./configure --prefix=/usr/local/ceph

make && make install

3. Install and configure Ceph

① Initial configuration

yum install python-pip

pip install argparse

; the two packages above are dependencies for creating osds; installing them straight from the yum repos is fine

cp /usr/local/ceph/bin/ceph /usr/local/bin/ ; the install does not copy the ceph executable into /usr/local/bin, so this must be done by hand

/usr/local/bin/ceph osd create ; create an osd

Running the command above may produce the following error:

[root@ceph-node1 ~]# /usr/local/bin/ceph create osd
Traceback (most recent call last):
  File "/usr/local/bin/ceph", line 56, in <module>
	import rados
ImportError: No module named rados
[root@ceph-node1 ~]# /usr/local/bin/ceph osd create
Traceback (most recent call last):
  File "/usr/local/bin/ceph", line 56, in <module>
	import rados
ImportError: No module named rados
           

To fix this, run:

cp -vf /usr/local/ceph/lib/python2.6/site-packages/* /usr/lib64/python2.6
echo /usr/local/ceph/lib > /etc/ld.so.conf.d/ceph.conf
echo /usr/local/lib > /etc/ld.so.conf.d/libtcmalloc.conf
ldconfig

② On every node, create the config directory: mkdir /etc/ceph

③ Add Ceph to the PATH: export PATH=$PATH:/usr/local/ceph/bin:/usr/local/ceph/sbin

④ Install the ceph init service: cp init-ceph /etc/init.d/ceph (this file must be copied, and must not be modified in any way)

Note: the steps above were performed on node ceph-node1; the steps below involve the other nodes, in particular configuring ceph-node4 and creating its partitions. To keep installation simple, you can clone the system once the steps above are done.

4. Passwordless SSH login

Set up passwordless ssh login between the nodes. Note that every node must be able to log in to every other node without a password, and the same setup must be repeated for any newly added node.
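The usual key-exchange steps can be sketched as below (the function names are mine, not from this guide); run this on every node, against every other node:

```shell
#!/bin/sh
# Generate an ssh key if one does not exist yet.
ensure_ssh_key() {
    keyfile="${1:-$HOME/.ssh/id_rsa}"
    mkdir -p "$(dirname "$keyfile")"
    [ -f "$keyfile" ] || ssh-keygen -t rsa -N '' -f "$keyfile" -q
}

# Append our public key to each peer's authorized_keys
# (ssh-copy-id prompts once per peer for that peer's password).
push_key_to_peers() {
    for node in "$@"; do
        ssh-copy-id "root@$node"
    done
}
```

For example, on ceph-node1: `ensure_ssh_key && push_key_to_peers ceph-node4 ceph-node5`.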

5. Create partitions on ceph-node4 and ceph-node5

① The partitions are assigned as follows:

[osd.0]
host = ceph-node4
devs = /dev/sdb1
[osd.1]
host = ceph-node4
devs = /dev/sdc1
[osd.2]
host = ceph-node5
devs = /dev/sdb1
[osd.3]
host = ceph-node5
devs = /dev/sdc1
           

Add two 50 GB disks on each osd node and mount them at the corresponding mount points.

② Format the partitions with the xfs filesystem.
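Formatting and mounting one osd partition can be sketched as follows (the helper name is mine; the devices and mount points come from the layout above and from the osd data paths used later in ceph.conf):

```shell
#!/bin/sh
# Format a partition as xfs and mount it at the osd data path
# used in ceph.conf (/mnt/osd$id). Destroys data on the device.
format_osd() {
    dev="$1"    # e.g. /dev/sdb1
    mnt="$2"    # e.g. /mnt/osd0
    mkfs.xfs -f "$dev"
    mkdir -p "$mnt"
    mount "$dev" "$mnt"
}
```

On ceph-node4: `format_osd /dev/sdb1 /mnt/osd0` and `format_osd /dev/sdc1 /mnt/osd1`; on ceph-node5 the same for osd2 and osd3.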

6. Script notes

On ceph-node1:

cd /root/ceph-0.80.1/src

cp sample.ceph.conf /etc/ceph/ceph.conf                             ; then edit in the settings below (copying the sample first is optional)

cp sample.fetch_config /etc/ceph/fetch_config                     ; fetch_config is not really used and is optional

7. The ceph.conf configuration file:

[global]
public network             = 10.10.2.0/24
cluster network            = 10.10.2.0/24
;fsid                       = a3fa7253-63c2-4e98-a13c-9f9376157561 
pid file                   = /var/run/ceph/$name.pid
max open files             = 131072
auth cluster required      = cephx
auth service required      = cephx
auth client required       = cephx
keyring                  = /etc/ceph/$cluster.$name.keyring
cephx require signatures   = true 
osd pool default size      = 2
osd pool default min size  = 1
osd pool default crush rule = 0
osd crush chooseleaf type = 1
osd pool default pg num    = 192
osd pool default pgp num   = 192
osd auto discovery         = false
journal collocation        = false
raw multi journal          = true
[mon]
mon data                   = /var/lib/ceph/mon/$name
mon clock drift allowed    = .15
keyring = /etc/ceph/keyring.$name
[mon.0]
host=ceph-node1
mon addr = 10.10.2.171:6789
[mds]
keyring = /etc/ceph/keyring.$name
[mds.0]
host=ceph-node1
[osd]
osd data = /mnt/osd$id
osd recovery max active      = 5
osd mkfs type                = xfs
;osd mount options btrfs      = noatime,nodiratime
osd journal = /mnt/osd$id/journal
osd journal size             = 1000
keyring=/etc/ceph/keyring.$name
[osd.0]
host = ceph-node4
devs = /dev/sdb1
[osd.1]
host = ceph-node4
devs = /dev/sdc1
[osd.2]
host = ceph-node5
devs = /dev/sdb1
[osd.3]
host = ceph-node5
devs = /dev/sdc1
           

Create the directories referenced by the config file:

On ceph-node1: /var/lib/ceph/mon/mon.0

On ceph-node4 and ceph-node5: create the mount points matching the directories in the config file.

Whether to create journal directories depends on the setup: create them only if the journal lives on a different disk from the osd; otherwise do not, or starting the cluster may complain that the osd mount point is not empty.

The config file can be synced with a script like the following:

#!/bin/bash
cp /etc/ceph/ceph.conf /usr/local/ceph/etc/ceph/
scp /etc/ceph/ceph.conf ceph-node4:/usr/local/ceph/etc/ceph/ 
scp /etc/ceph/ceph.conf ceph-node4:/etc/ceph/
#scp /etc/ceph/ceph.conf ceph-node2:/usr/local/ceph/etc/ceph/ 
#scp /etc/ceph/ceph.conf ceph-node2:/etc/ceph/
scp /etc/ceph/ceph.conf ceph-node5:/usr/local/ceph/etc/ceph/ 
scp /etc/ceph/ceph.conf ceph-node5:/etc/ceph/
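The directory creation described above can be staged with a short script. ROOT is an illustrative prefix of mine for dry runs; set it to the empty string on the real nodes, and run each mkdir on the node it belongs to:

```shell
#!/bin/sh
# Create the directories that ceph.conf references.
ROOT="${ROOT-/tmp/ceph-dirs}"                  # staging prefix; run with ROOT= on real nodes
mkdir -p "$ROOT/var/lib/ceph/mon/mon.0"        # ceph-node1: mon data
mkdir -p "$ROOT/mnt/osd0" "$ROOT/mnt/osd1"     # ceph-node4: osd mount points
mkdir -p "$ROOT/mnt/osd2" "$ROOT/mnt/osd3"     # ceph-node5: osd mount points
```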
           

8. Start the cluster

Before running the scripts below, run "yum install redhat-lsb"; otherwise you may get "/etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or directory" (for details see "init-functions No such file or directory.txt"). For a first start, or to restart after a failed start, run the formatting script below:

#!/bin/bash
sh reset_settings.sh	# reset_settings.sh clears data left from previous runs and syncs the new config file
mkcephfs -a -c /etc/ceph/ceph.conf --mkfs
/etc/init.d/ceph start
ceph osd create
ceph osd create
ceph osd create
ceph osd create
ssh root@ceph-node4 "/etc/init.d/ceph start osd"
ssh root@ceph-node5 "/etc/init.d/ceph start osd"
           

Startup script:

#!/bin/bash
/etc/init.d/ceph start
ceph osd create
ceph osd create
ceph osd create
ceph osd create
ssh root@ceph-node4 "/etc/init.d/ceph start osd"
ssh root@ceph-node5 "/etc/init.d/ceph start osd"
           

Stop script:

#!/bin/bash
/etc/init.d/ceph -a stop
           

Note: run "ceph osd create" once for each osd you want; ids are assigned starting from 0.

9. Mount Ceph

Method 1:

Upgrade the kernel to 3.10 or later, then mount:

mount -t ceph 10.10.2.171:/ /mnt/ceph

If cephx authentication is enabled, mount with:

mount -t ceph 10.10.2.171:/ /mnt/ceph -o name=admin,secret=AQCGdJ5TYLNrCBAAkoMJgdYHW66ITpnWyItccw==

or directly:

mount -t ceph 10.10.2.171:/ /mnt/ceph -o name=admin,secret=`ceph-authtool /etc/ceph/keyring.client.admin -p`

The secret is stored in keyring.client.admin.

Method 2:

No kernel upgrade is needed.

scp ceph.client.admin.keyring ceph-node4:/etc/ceph/

scp ceph.client.admin.keyring ceph-node5:/etc/ceph/

Then run on the client:

ceph-fuse /mnt/ceph

This method is not recommended, since it is insecure for the client to hold the admin credentials and ceph.conf, but it is what is used here for now.

Notes:

① Ceph source download:

http://ceph.com/download/

② Download location for related packages:

http://ceph.com/rpm/el6/x86_64/

③ You can refer to the article "Centos 6.5 安裝 ceph", but try not to follow its steps verbatim; it contains too many errors.

④ Log file: "init-functions No such file or directory.txt"

⑤ Log file: "坑死人的fsid.txt"

⑥ For versions after 0.80.2, use the formatting method described at http://blog.csdn.net/skdkjzz/article/details/41445847