DRBD (Distributed Replicated Block Device) is a distributed storage system for the Linux platform. It consists of a kernel module, several userspace management utilities and shell scripts, and is typically used in high-availability (HA) clusters. DRBD is similar to RAID 1 (mirroring) in a disk array, except that RAID 1 mirrors disks inside a single machine while DRBD mirrors them over the network.
DRBD is free software distributed under the GPLv2 license.
Lab architecture diagram: http://s3.51cto.com/wyfs02/M01/58/03/wKioL1Sn0UegrXZtAAGVF8BhCKU166.jpg
I. Prerequisites for building the high-availability cluster
1. The hosts must resolve each other's names so they can communicate by host name
[root@node1 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.31.10 node1.stu31.com node1
172.16.31.11 node2.stu31.com node2
Copy the file to node2:
[root@node1 ~]# scp /etc/hosts node2:/etc/hosts
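A quick optional check (my addition, not part of the original walkthrough) that both nodes now resolve each other by name:
[root@node1 ~]# getent hosts node2
[root@node2 ~]# getent hosts node1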
2. Set up passwordless SSH communication directly between the nodes
Node 1:
[root@node1 ~]# ssh-keygen -t rsa -P ""
[root@node1 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node2
Node 2:
[root@node2 ~]# ssh-keygen -t rsa -P ""
[root@node2 ~]# ssh-copy-id -i .ssh/id_rsa.pub root@node1
Test passwordless SSH between the nodes:
[root@node2 ~]# date ; ssh node1 'date'
Fri Jan 2 12:34:02 CST 2015
The clocks are synchronized; the two nodes above show the same time.
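If the clocks were not already in sync, a simple fix is to pull time from an NTP server on both nodes; the server address 172.16.0.1 below is only an assumption for this lab network, substitute your own:
[root@node1 ~]# ntpdate 172.16.0.1
[root@node1 ~]# echo '*/10 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null' >> /var/spool/cron/root
Run the same two commands on node2 as well.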
II. Installing the DRBD software
1. Obtain the DRBD packages. The kernel version on CentOS 6.6 is 2.6.32-504:
[root@node1 ~]# uname -r
2.6.32-504.el6.x86_64
DRBD has been merged into the Linux kernel since 2.6.33, so on those kernels only the management utilities need to be installed. If the kernel is older than 2.6.33, as it is here, install the DRBD kernel module package as well and keep its version in step with the management utilities.
kmod-drbd84-8.4.5-504.1.el6.x86_64.rpm
drbd84-utils-8.9.1-1.el6.elrepo.x86_64.rpm
These packages were built from source; I provide them for download, see the attachment to this article:
2. Install the packages on both node1 and node2
The installation can take quite a while:
[root@node1 ~]# rpm -ivh drbd84-utils-8.9.1-1.el6.elrepo.x86_64.rpm kmod-drbd84-8.4.5-504.1.el6.x86_64.rpm
warning: drbd84-utils-8.9.1-1.el6.elrepo.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY
Preparing... ########################################### [100%]
1:drbd84-utils ########################################### [ 50%]
2:kmod-drbd84 ########################################### [100%]
Working. This may take some time ...
Done.
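To confirm that the kmod package supplied a drbd module matching the running kernel, a quick check (not shown in the original) is:
[root@node1 ~]# modinfo drbd | grep -E '^(filename|version)'
The filename should point somewhere under /lib/modules/2.6.32-504.el6.x86_64/ and the version should read 8.4.5.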
3. Prepare the storage device on each node
Both node1 and node2 need this step:
[root@node1 ~]# echo -n -e "n\np\n3\n\n+1G\nw\n"|fdisk /dev/sda
[root@node1 ~]# partx -a /dev/sda
BLKPG: Device or resource busy
error adding partition 1
error adding partition 2
error adding partition 3
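partx complains about partitions the kernel already knows about; before continuing it is worth confirming that the new sda3 partition is actually visible (a check of my own, not in the original):
[root@node1 ~]# grep sda /proc/partitions
If sda3 is not listed, re-run partx -a /dev/sda (or reboot the node) before creating the DRBD metadata.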
IV. Configuring DRBD
1. The DRBD configuration file:
[root@node1 ~]# vim /etc/drbd.conf
# You can find an example in /usr/share/doc/drbd.../drbd.conf.example
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
All DRBD behaviour is controlled through the configuration file /etc/drbd.conf. Normally this file contains nothing but the following:
include "/etc/drbd.d/global_common.conf";
include "/etc/drbd.d/*.res";
By convention, /etc/drbd.d/global_common.conf contains the global and common sections of the DRBD configuration, while each .res file contains one resource section.
Putting the whole configuration into a single drbd.conf file is possible, but it quickly becomes cluttered and hard to manage, which is one reason the multi-file layout is preferred.
Whichever layout you use, drbd.conf and the files it includes must be kept exactly identical on every cluster node.
2. Configure DRBD's global and common configuration file
[root@node1 drbd.d]# cat global_common.conf
# DRBD is the result of over a decade of development by LINBIT.
# In case you need professional services for DRBD or have
# feature requests visit http://www.linbit.com
global {
    # Usage statistics: each time a new DRBD version is installed it contacts
    # LINBIT's HTTP server to be counted. This is enabled by default; it is
    # disabled here.
    usage-count no;
    # minor-count dialog-refresh disable-ip-verification
}
common {
    handlers {
        # These are EXAMPLE handlers only.
        # They may have severe implications,
        # like hard resetting the node under certain circumstances.
        # Be careful when choosing your poison.

        # Node is primary on inconsistent, degraded data: notify, then reboot
        pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        # Primary lost after split brain: notify, then reboot
        pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
        # Local I/O error: notify, then shut the node down
        local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
        # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
        # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
        # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
        # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
    }
    startup {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
    }
    options {
        # cpu-mask on-no-data-accessible
    }
    disk {
        # size on-io-error fencing disk-barrier disk-flushes
        # On a local disk I/O error, detach the backing device
        on-io-error detach;
        # disk-drain md-flushes resync-rate resync-after al-extents
        # c-plan-ahead c-delay-target c-fill-target c-max-rate
        # c-min-rate disk-timeout
    }
    net {
        # protocol timeout max-epoch-size max-buffers unplug-watermark
        # Use fully synchronous replication (Protocol C) unless explicitly
        # overridden: a write is considered complete only after the remote
        # host has confirmed it.
        protocol C;
        # connect-int ping-int sndbuf-size rcvbuf-size ko-count
        # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
        # HMAC algorithm used to authenticate messages between the peers
        cram-hmac-alg "sha1";
        # Shared secret used with the HMAC algorithm above
        shared-secret "password";
        # after-sb-1pri after-sb-2pri always-asbp rr-conflict
        # ping-timeout data-integrity-alg tcp-cork on-congestion
        # congestion-fill congestion-extents csums-alg verify-alg
        # use-rle
    }
    syncer {
        # Maximum resynchronization rate between the nodes
        rate 1000M;
    }
}
3. Define the resource configuration file for the node storage
A DRBD device (/dev/drbdX) is called a "resource". Its definition contains the IP addresses of the primary and secondary nodes, the name of the backing storage device, the device size, how the metadata is stored, the device name DRBD exposes, and so on.
[root@node1 drbd.d]# vim mystore.res
resource mystore {
    # Each host section starts with "on" followed by the host name;
    # the settings for that host go inside the braces.
    on node1.stu31.com {
        device    /dev/drbd0;
        disk      /dev/sda3;
        # Address and port DRBD listens on to talk to the peer
        address   172.16.31.10:7789;
        meta-disk internal;
    }
    on node2.stu31.com {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   172.16.31.11:7789;
        meta-disk internal;
    }
}
Once the configuration is complete, copy it to node 2:
[root@node1 drbd.d]# ls
global_common.conf mystore.res
[root@node1 drbd.d]# scp * node2:/etc/drbd.d/
global_common.conf 100% 2105 2.1KB/s 00:00
mystore.res 100% 318 0.3KB/s 00:00
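To double-check that the DRBD configuration is now byte-for-byte identical on both nodes (my own sanity check, not from the original article):
[root@node1 drbd.d]# md5sum /etc/drbd.conf /etc/drbd.d/* && ssh node2 'md5sum /etc/drbd.conf /etc/drbd.d/*'
The two sets of checksums should match exactly.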
4. Create the metadata
Before DRBD can be started, the block of metadata in which DRBD records its state must be created on the backing partition (sda3) of each host. Run on both hosts:
[root@node1 drbd.d]# drbdadm create-md mystore
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
[root@node2 ~]# drbdadm create-md mystore
5. Start the DRBD service
[root@node1 ~]# /etc/init.d/drbd start
Starting DRBD resources: [
create res: mystore
prepare disk: mystore
adjust disk: mystore
adjust net: mystore
]
..........
***************************************************************
DRBD's startup script waits for the peer node(s) to appear.
 - In case this node was already a degraded cluster before the
   reboot the timeout is 0 seconds. [degr-wfc-timeout]
 - If the peer was available before the reboot the timeout will
   expire after 0 seconds. [wfc-timeout]
   (These values are for resource 'mystore'; 0 sec -> wait forever)
   To abort waiting enter 'yes' [ 21]:
.
[root@node1 ~]#
Start drbd on node 2:
[root@node2 ~]# /etc/init.d/drbd start
6. 檢視DRBD的狀态,分别在兩台主機上執行
[root@node1 ~]# cat /proc/drbd
version: 8.4.5 (api:1/proto:86-101)
GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by [email protected], 2015-01-02 12:06:20
0:cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:1059216
The output fields mean the following:
ro is the role information. The first time DRBD is started, both nodes default to the Secondary role.
ds is the disk state. "Inconsistent/Inconsistent" means the data on the two nodes is not yet consistent.
ns is the amount of data sent over the network.
dw is the amount of data written to disk.
dr is the amount of data read from disk.
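For quick per-resource queries, drbdadm can also report each of these states individually; the sub-commands below are standard drbdadm commands, shown here as an optional extra rather than as part of the original walkthrough:
[root@node1 ~]# drbdadm cstate mystore    # connection state, e.g. Connected
[root@node1 ~]# drbdadm role mystore      # local/peer roles, e.g. Secondary/Secondary
[root@node1 ~]# drbdadm dstate mystore    # disk states, e.g. Inconsistent/Inconsistent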
7. Set the primary node
By default neither node is primary, so one of the two hosts has to be promoted. On the host that should become primary, run the following.
node1 will be the primary node.
# force-promote node1 to primary
[root@node1 ~]# drbdadm primary --force mystore
Watch the synchronization:
[root@node1 ~]# drbd-overview
0:mystore/0 SyncSource Primary/Secondary UpToDate/Inconsistent
[=====>..............] sync'ed: 32.1% (724368/1059216)K
[root@node1 ~]# watch -n1 'cat /proc/drbd'
完成後檢視節點狀态:
0:cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
ns:1059216 nr:0 dw:0 dr:1059912 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0
8. Format the storage
[root@node1 ~]# mke2fs -t ext4 /dev/drbd0
Mount it on a directory:
[root@node1 ~]# mount /dev/drbd0 /mnt
Copy a file into /mnt:
[root@node1 ~]# cp /etc/issue /mnt
Unmount the storage:
[root@node1 ~]# umount /mnt
9. Demote the current primary to secondary and promote node2 to primary
Demote node 1 to secondary; drbd-overview on node1 then shows:
[root@node1 ~]# drbdadm secondary mystore
0:mystore/0 Connected Secondary/Secondary UpToDate/UpToDate
Promote node 2 to primary:
[root@node2 ~]# drbdadm primary mystore
[root@node2 ~]# drbd-overview
0:mystore/0 Connected Primary/Secondary UpToDate/UpToDate
Mount the file system and check that the file is there:
[root@node2 ~]# mount /dev/drbd0 /mnt
[root@node2 ~]# ls /mnt
issue lost+found
Notes:
(1) A DRBD device must be switched to the primary role before it can be mounted.
(2) At any given moment only one of the two nodes may be primary; the other must be secondary.
(3) The DRBD device cannot be mounted on the node that is in the secondary role (see the quick demonstration below).
(4) The two partitions being mirrored should ideally be the same size so no disk space is wasted, since a DRBD mirror is essentially RAID 1 over the network.
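As a quick illustration of note (3), trying to mount the device on the node that is currently secondary simply fails, because a Secondary DRBD device cannot be opened (assuming node1 is the secondary at this point):
[root@node1 ~]# mount /dev/drbd0 /mnt     # refused while node1 is Secondary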
10. Stop the drbd service on both nodes and disable it at boot:
[root@node1 ~]# service drbd stop
Stopping all DRBD resources: .
[root@node1 ~]# chkconfig drbd off
[root@node2 ~]# service drbd stop
Stopping all DRBD resources:
[root@node2 ~]# chkconfig drbd off
V. A highly available MariaDB cluster with corosync + pacemaker + DRBD
1. Install the corosync and pacemaker packages on both node1 and node2:
# yum install corosync pacemaker -y
2. Create the configuration file and edit it
[root@node1 ~]# cd /etc/corosync/
[root@node1 corosync]# cp corosync.conf.example corosync.conf
[root@node1 corosync]# cat corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
    version: 2

    # secauth: Enable mutual node authentication. If you choose to
    # enable this ("on"), then do remember to create a shared
    # secret with "corosync-keygen".
    # Authentication is enabled here
    secauth: on

    threads: 0

    # interface: define at least one interface to communicate
    # over. If you define more than one interface stanza, you must
    # also set rrp_mode.
    interface {
        # Rings must be consecutively numbered, starting at 0.
        ringnumber: 0

        # This is normally the *network* address of the
        # interface to bind to. This ensures that you can use
        # identical instances of this configuration file
        # across all your cluster nodes, without having to
        # modify this option.
        # Network address to bind to
        bindnetaddr: 172.16.31.0

        # However, if you have multiple physical network
        # interfaces configured for the same subnet, then the
        # network address alone is not sufficient to identify
        # the interface Corosync should bind to. In that case,
        # configure the *host* address of the interface
        # instead:
        # bindnetaddr: 192.168.1.1

        # When selecting a multicast address, consider RFC
        # 2365 (which, among other things, specifies that
        # 239.255.x.x addresses are left to the discretion of
        # the network administrator). Do not reuse multicast
        # addresses across multiple Corosync clusters sharing
        # the same network.
        # Multicast address used for cluster messaging
        mcastaddr: 239.224.131.31

        # Corosync uses the port you specify here for UDP
        # messaging, and also the immediately preceding
        # port. Thus if you set this to 5405, Corosync sends
        # messages over UDP ports 5405 and 5404.
        # Messaging port
        mcastport: 5405

        # Time-to-live for cluster communication packets. The
        # number of hops (routers) that this ring will allow
        # itself to pass. Note that multicast routing must be
        # specifically enabled on most network routers.
        ttl: 1
    }
}

logging {
    # Log the source file and line where messages are being
    # generated. When in doubt, leave off. Potentially useful for
    # debugging.
    fileline: off

    # Log to standard error. When in doubt, set to no. Useful when
    # running in the foreground (when invoking "corosync -f")
    to_stderr: no

    # Log to a log file. When set to "no", the "logfile" option
    # must not be set.
    # Log file location
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log

    # Log to the system log daemon. When in doubt, set to yes.
    #to_syslog: yes

    # Log debug messages (very verbose). When in doubt, leave off.
    debug: off

    # Log messages with time stamps. When in doubt, set to on
    # (unless you are only logging to syslog, where double
    # timestamps can be annoying).
    timestamp: on

    logger_subsys {
        subsys: AMF
        debug: off
    }
}

# Start pacemaker as a corosync plugin
service {
    ver: 0
    name: pacemaker
}
3. Generate the authentication key file. corosync-keygen needs 1024 bits of entropy from /dev/random; producing that by typing on the keyboard is tedious, so you can fill the kernel's entropy pool by, for example, downloading packages while it runs:
[root@node1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/random.
Press keys on your keyboard to generate entropy.
Press keys on your keyboard to generate entropy (bits = 128).
Press keys on your keyboard to generate entropy (bits = 192).
Press keys on your keyboard to generate entropy (bits = 256).
Press keys on your keyboard to generate entropy (bits = 320).
Press keys on your keyboard to generate entropy (bits = 384).
Press keys on your keyboard to generate entropy (bits = 448).
Press keys on your keyboard to generate entropy (bits = 512).
Press keys on your keyboard to generate entropy (bits = 576).
Press keys on your keyboard to generate entropy (bits = 640).
Press keys on your keyboard to generate entropy (bits = 704).
Press keys on your keyboard to generate entropy (bits = 768).
Press keys on your keyboard to generate entropy (bits = 832).
Press keys on your keyboard to generate entropy (bits = 896).
Press keys on your keyboard to generate entropy (bits = 960).
Writing corosync key to /etc/corosync/authkey.
Downloading just about any package will do the trick!
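An alternative that avoids the waiting game, if the rng-tools package is available (my suggestion, not from the original): feed /dev/random from /dev/urandom with rngd while corosync-keygen runs.
[root@node1 corosync]# yum install -y rng-tools
[root@node1 corosync]# rngd -r /dev/urandom
[root@node1 corosync]# corosync-keygen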
完成後将配置檔案及認證密鑰複制一份到節點2:
[root@node1 corosync]# scp authkey corosync.conf node2:/etc/corosync/
authkey 100% 128 0.1KB/s 00:00
corosync.conf 100% 2724 2.7KB/s 00:00
4. Start the corosync service:
[root@node1 corosync]# service corosync start
Starting Corosync Cluster Engine(corosync): [ OK ]
[root@node2 ~]# service corosync start
5. Check the logs:
Verify that the corosync engine started correctly.
Node 1 startup log:
[root@node1 corosync]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Jan 02 14:20:28 corosync [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Jan 02 14:20:28 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Node 2 startup log:
[root@node2 ~]# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Jan 02 14:20:39 corosync [MAIN ] Corosync Cluster Engine ('1.4.7'): started and ready to provide service.
Jan 02 14:20:39 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Check the TOTEM keyword to confirm that the membership initialization notices were sent:
[root@node1 corosync]# grep "TOTEM" /var/log/cluster/corosync.log
Jan 02 14:20:28 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Jan 02 14:20:28 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jan 02 14:20:28 corosync [TOTEM ] The network interface [172.16.31.10] is now up.
Jan 02 14:20:28 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jan 02 14:20:37 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check that the listening port 5405 is open:
[root@node1 ~]# ss -tunl |grep 5405
udp UNCONN 0 0 172.16.31.10:5405 *:*
udp UNCONN 0 0 239.224.131.31:5405 *:*
Check the log for errors:
[root@node1 ~]# grep ERROR /var/log/cluster/corosync.log
# Warnings about running pacemaker as a corosync plugin; they can be ignored
Jan 02 14:20:28 corosync [pcmk ] ERROR: process_ais_conf: You have configured a cluster using the Pacemaker plugin for Corosync. The plugin is not supported in this environment and will be removed very soon.
Jan 02 14:20:28 corosync [pcmk ] ERROR: process_ais_conf: Please see Chapter 8 of 'Clusters from Scratch' (http://www.clusterlabs.org/doc) for details on using Pacemaker with CMAN
Jan 02 14:20:52 [6260] node1.stu31.com pengine: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
[root@node1 ~]# crm_verify -L -V
# Warnings about the missing STONITH device; they can be ignored for now
error: unpack_resources: Resource start-up disabled since no STONITH
resources have been defined
error: unpack_resources: Either configure some or disable STONITH with the
stonith-enabled option
error: unpack_resources: NOTE: Clusters with shared data need STONITH to
ensure data integrity
Errors found during check: config not valid
VI. Installing the cluster configuration tool: crmsh
1. Configure a yum repository. In my setup a complete local yum repository server is available:
[root@node1 yum.repos.d]# vim centos6.6.repo
[base]
name=CentOS $releasever $basearch on localserver 172.16.0.1
baseurl=http://172.16.0.1/cobbler/ks_mirror/CentOS-6.6-$basearch/
gpgcheck=0
[extra]
name=CentOS $releasever $basearch extras
baseurl=http://172.16.0.1/centos/$releasever/extras/$basearch/
[epel]
name=Fedora EPEL for CentOS$releasever$basearch on local server 172.16.0.1
baseurl=http://172.16.0.1/fedora-epel/$releasever/$basearch/
[corosync2]
name=corosync2
baseurl=ftp://172.16.0.1/pub/Sources/6.x86_64/corosync/
Copy the repo file to node 2:
[root@node1 ~]# scp /etc/yum.repos.d/centos6.6.repo node2:/etc/yum.repos.d/
centos6.6.repo 100% 521 0.5KB/s 00:00
2. Install the crmsh package on both nodes
# yum install -y crmsh
# rpm -qa crmsh
crmsh-2.1-1.6.x86_64
3. Get rid of the STONITH warnings shown above:
[root@node1 ~]# crm
crm(live)# configure
crm(live)configure# property stonith-enabled=false
crm(live)configure# verify
# A two-node cluster cannot maintain quorum when one node fails, so quorum is ignored here (at the risk of split brain)
crm(live)configure# property no-quorum-policy=ignore
crm(live)configure# commit
crm(live)configure# show
node node1.stu31.com
node node2.stu31.com
property cib-bootstrap-options: \
dc-version=1.1.11-97629de \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes=2 \
stonith-enabled=false \
no-quorum-policy=ignore
No more errors are reported:
[root@node1 ~]# crm_verify -L -V
七.将DRBD定義為叢集服務
1.按照叢集服務的要求,首先確定兩個節點上的drbd服務已經停止,且不會随系統啟動而自動啟動:
0:mystore/0 Unconfigured . .
[root@node1 ~]# chkconfig --list drbd
drbd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
2. Configure drbd as a cluster resource:
The resource agent (RA) for drbd is provided in the OCF class under the linbit provider; its path is /usr/lib/ocf/resource.d/linbit/drbd. The following commands list this RA and its meta information:
[root@node1 ~]# crm ra classes
lsb
ocf / heartbeat linbit pacemaker
service
stonith
[root@node1 ~]# crm ra list ocf linbit
drbd
The following command shows the detailed information:
[root@node1 ~]# crm ra info ocf:linbit:drbd
(output omitted)
drbd must run on both nodes at the same time, but in the primary/secondary model only one node can be Master while the other is Slave. It is therefore a special kind of cluster resource: a multi-state clone. Its instances are divided into Master and Slave roles, and when the service first starts both nodes are expected to come up in the Slave state.
Now define the cluster resource:
[root@node1 ~]# crm configure
crm(live)configure# primitive mydrbd ocf:linbit:drbd params drbd_resource="mystore" op monitor role=Slave interval=20s timeout=20s op monitor role=Master interval=10s timeout=20s op start timeout=240s op stop timeout=100s
将叢集資源設定為主從模式:
crm(live)configure# ms ms_mydrbd mydrbd meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
After verifying and committing, the new definitions look like this (crm configure show):
primitive mydrbd ocf:linbit:drbd \
params drbd_resource=mystore \
op monitor role=Slave interval=20s timeout=20s \
op monitor role=Master interval=10s timeout=20s \
op start timeout=240s interval=0 \
op stop timeout=100s interval=0
ms ms_mydrbd mydrbd \
meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# cd
crm(live)# status
Last updated: Sat Jan 3 11:22:54 2015
Last change: Sat Jan 3 11:22:50 2015
Stack: classic openais (with plugin)
Current DC: node1.stu31.com - partitionwith quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
2 Resources configured
Online: [ node1.stu31.com node2.stu31.com ]
Master/Slave Set: ms_mydrbd [mydrbd]
Masters: [ node2.stu31.com ]
Slaves: [ node1.stu31.com ]
# master-max: how many Master instances the resource may have; master-node-max: how many Master instances may run on a single node
# clone-max: how many clone instances in total; clone-node-max: how many clone instances may run on a single node
# A master/slave resource is just a special kind of clone resource that additionally has a master/slave relationship
檢視drbd的主從狀态:
0:mystore/0 Connected Secondary/Primary UpToDate/UpToDate
将node2降級成從節點并上線:
[root@node2 ~]# crm node standby
[root@node2 ~]# crm node online
[root@node2 ~]# drbd-overview
0:mystore/0 Connected Secondary/Primary UpToDate/UpToDate
node1 has now become the master node.
3. Define automatic mounting of the DRBD storage. The filesystem has to follow the master node wherever it runs, which requires constraints:
crm(live)configure# primitive myfs ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/mydata fstype="ext4" op monitor interval=20s timeout=40s op start timeout=60s op stop timeout=60s
# Colocation constraint: the filesystem must run wherever the DRBD master is promoted
crm(live)configure# colocation myfs_with_ms_mydrbd_master inf: myfs ms_mydrbd:Master
# Order constraint: the master role must be promoted before the filesystem is started
crm(live)configure# order ms_mydrbd_master_before_myfs inf: ms_mydrbd:promote myfs:start
Commit the changes and check the status:
Last updated: Sat Jan 3 11:34:23 2015
Last change: Sat Jan 3 11:34:12 2015
3 Resources configured
Masters: [ node1.stu31.com ]
Slaves: [ node2.stu31.com ]
myfs (ocf::heartbeat:Filesystem): Started node1.stu31.com
So the master node is node1 and the storage is mounted on node1 as well.
Check the mounted directory; the file is there, so the mount succeeded:
[root@node1 ~]# ls /mydata
With the master/slave resource and the filesystem mount working, the next step is to install the MariaDB database.
VIII. Installing the MariaDB database
1. The initial MariaDB setup must be performed on the master node.
Create the mysql user that will own the database and its data directory. The user has to exist on both nodes:
# groupadd -r -g 306 mysql
# useradd -r -g 306 -u 306 mysql
Obtain the MariaDB binary tarball:
mariadb-10.0.10-linux-x86_64.tar.gz
Unpack it into /usr/local:
[root@node1 ~]# tar xf mariadb-10.0.10-linux-x86_64.tar.gz -C /usr/local/
Create a symlink:
[root@node1 ~]# cd /usr/local
[root@node1 local]# ln -sv mariadb-10.0.10-linux-x86_64/ mysql
Set up the database data directory on the mounted DRBD storage:
# chown -R mysql:mysql /mydata/
Change into the installation directory:
[root@node1 local]# cd mysql
[root@node1 mysql]# pwd
/usr/local/mysql
[root@node1 mysql]# chown -R root:mysql ./*
Initialize the MariaDB installation:
[root@node1 mysql]# scripts/mysql_install_db --user=mysql --datadir=/mydata/data
After the initialization, check the data directory:
[root@node1 mysql]# ls /mydata/data/
aria_log.00000001 ibdata1 ib_logfile1 performance_schema
aria_log_control ib_logfile0 mysql test
The installation succeeded.
As for the MariaDB configuration file: if a change made on one node should automatically be picked up by the standby node, the best place to keep the file is on the DRBD storage itself:
[root@node1 mysql]# mkdir /mydata/mysql/
[root@node1 mysql]# chown -R mysql:mysql /mydata/mysql/
[root@node1 mysql]# cp support-files/my-large.cnf /mydata/mysql/my.cnf
[root@node1 mysql]# vim /mydata/mysql/my.cnf
[mysqld]
port = 3306
datadir = /mydata/data
socket = /tmp/mysql.sock
skip-external-locking
key_buffer_size = 256M
max_allowed_packet = 1M
table_open_cache = 256
sort_buffer_size = 1M
read_buffer_size = 1M
read_rnd_buffer_size = 4M
myisam_sort_buffer_size = 64M
thread_cache_size = 8
query_cache_size= 16M
# Try number of CPU's*2 for thread_concurrency
thread_concurrency = 8
innodb_file_per_table = on
skip_name_resolve = on
Create a local symlink pointing to the configuration directory on the DRBD storage:
[root@node1 ~]# ln -sv /mydata/mysql /etc/mysql
`/etc/mysql' -> `/mydata/mysql'
Create the service script:
[root@node1 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@node1 mysql]# chkconfig --add mysqld
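Since mysqld will later be managed by the cluster as an lsb:mysqld resource, it should presumably not start automatically at boot on node1 either; the article only shows this step for node2, so the following is my addition:
[root@node1 mysql]# chkconfig mysqld off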
Start the service to test it:
[root@node1 mysql]# service mysqld start
Starting MySQL. [ OK ]
Log in to mysql and create a database:
[root@node1 mysql]# /usr/local/mysql/bin/mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 4
Server version: 10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2014, Oracle, SkySQL Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database testdb;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> \q
Bye
Stop the mysql server:
[root@node1 mysql]# service mysqld stop
Shutting down MySQL.. [ OK ]
2. Node 2 must also be set up for MariaDB
Switch node1 to the slave role:
[root@node1 ~]# crm node standby
[root@node1 ~]# crm status
Last updated: Sat Jan 3 12:21:38 2015
Last change: Sat Jan 3 12:21:34 2015
Node node1.stu31.com: standby
Online: [ node2.stu31.com ]
Stopped: [ node1.stu31.com ]
myfs (ocf::heartbeat:Filesystem): Started node2.stu31.com
Bring node1 back online as a slave:
[root@node1 ~]# crm node online
Last updated: Sat Jan 3 12:21:52 2015
Last change: Sat Jan 3 12:21:48 2015
Unpack the MariaDB tarball:
[root@node2 ~]# tar xf mariadb-10.0.10-linux-x86_64.tar.gz -C /usr/local
[root@node2 ~]# cd /usr/local
[root@node2 local]# ln -sv mariadb-10.0.10-linux-x86_64/ mysql
`mysql' ->`mariadb-10.0.10-linux-x86_64/'
[root@node2 local]# cd mysql
[root@node2 mysql]# chown -R root:mysql ./*
There is no need to run the initialization again on this node.
Check whether the storage is now mounted on node 2:
[root@node2 local]# ls /mydata/data/
aria_log.00000001 ib_logfile1 mysql-bin.index testdb
aria_log_control multi-master.info mysql-bin.state
ibdata1 mysql performance_schema
ib_logfile0 mysql-bin.000001 test
The mount succeeded.
Only the service script is still needed:
[root@node2 mysql]# cp support-files/mysql.server /etc/init.d/mysqld
[root@node2 mysql]# chkconfig --add mysqld
[root@node2 mysql]# chkconfig mysqld off
建立軟連結将存儲的配置檔案定位到/etc/下,友善mysql啟動:
[root@node2 ~]# ln -sv /mydata/mysql/ /etc/mysql
`/etc/mysql' -> `/mydata/mysql/'
Start the mysqld service:
[root@node2 ~]# service mysqld start
Starting MySQL... [ OK ]
[root@node2 ~]# /usr/local/mysql/bin/mysql
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
| testdb             |
+--------------------+
5 rows in set (0.04 sec)
MariaDB [(none)]> grant all on *.* to 'root'@'172.16.%.%' identified by 'oracle';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
As you can see, the testdb database is there: it was replicated over from node1.
We also grant remote clients permission to log in (the GRANT statement above).
MariaDB is now installed on both nodes.
IX. Defining the MariaDB database cluster service resources
[root@node2 ~]# crm
# Define the cluster VIP for the database
crm(live)configure# primitive myip ocf:heartbeat:IPaddr params ip="172.16.31.166" op monitor interval=10s timeout=20s
# Define the mysqld service resource for the database cluster
crm(live)configure# primitive myserver lsb:mysqld op monitor interval=20s timeout=20s
# Put the resources in a group so they run together
crm(live)configure# group myservice myip ms_mydrbd:Master myfs myserver
ERROR: myservice refers to missing object ms_mydrbd:Master
INFO: resource references in colocation:myfs_with_ms_mydrbd_master updated
INFO: resource references in order:ms_mydrbd_master_before_myfs updated
# Order constraint: start the myserver resource only after the myfs resource has started
crm(live)configure# order myfs_before_myserver inf: myfs:start myserver:start
# Commit once everything is defined. The mysql service can be a little slow to start; just give it a moment.
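The commit and the subsequent status check are not shown explicitly in the original; presumably they were simply:
crm(live)configure# commit
crm(live)configure# cd
crm(live)# status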
Last updated: Sat Jan 3 13:42:13 2015
Last change: Sat Jan 3 13:41:48 2015
5 Resources configured
Resource Group: myservice
myip (ocf::heartbeat:IPaddr): Started node2.stu31.com
myfs (ocf::heartbeat:Filesystem): Started node2.stu31.com
myserver (lsb:mysqld): Started node2.stu31.com
Once everything has started, connect to the database from a remote client to test it:
[root@nfs ~]# mysql -h 172.16.31.166 -uroot -poracle
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.5.5-10.0.10-MariaDB-log MariaDB Server
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
mysql> show databases;
5 rows in set (0.05 sec)
mysql> use testdb
Database changed
mysql> create table t1 (id int);
Query OK, 0 rows affected (0.18 sec)
mysql> show tables;
+------------------+
| Tables_in_testdb |
+------------------+
| t1               |
+------------------+
1 row in set (0.01 sec)
mysql> \q
将節點2切換為備節點,讓node1成為主節點:
輸入切換指令後我們監控node1轉換成主節點的過程:
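The switchover command itself only appears in the screenshots; based on the earlier switchover it was presumably run on node2:
[root@node2 ~]# crm node standby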
(Screenshots of node1 taking over the resources: http://s3.51cto.com/wyfs02/M01/58/05/wKiom1Snwwnh5p-4AAaiKxVNqOI705.jpg and http://s3.51cto.com/wyfs02/M02/58/02/wKioL1Snw9zTudTEAAVgCw7ErhI127.jpg)
檢視節點1的叢集狀态資訊:
Last updated: Sat Jan 3 13:59:38 2015
Last change: Sat Jan 3 13:48:49 2015
Node node2.stu31.com: standby
Online: [ node1.stu31.com ]
Stopped: [ node2.stu31.com ]
myip (ocf::heartbeat:IPaddr): Started node1.stu31.com
myfs (ocf::heartbeat:Filesystem): Started node1.stu31.com
myserver (lsb:mysqld): Started node1.stu31.com
Connect to the database remotely again to test:
mysql> use testdb;
Reading table information for completion oftable and column names
You can turn off this feature to get aquicker startup with -A
1 row in set (0.00 sec)
The test succeeded; the data is fully in sync!
This completes the build of a highly available database server cluster with corosync + pacemaker + crmsh + DRBD!
This article was originally published on dengaosky's 51CTO blog: http://blog.51cto.com/dengaosky/1964590. Please contact the original author for permission to republish.