This post walks through building a highly available MySQL cluster. DRBD stands for Distributed Replicated Block Device; it is a distributed storage system in the Linux kernel's storage layer. With DRBD you can mirror a block device, and the filesystem and data on it, between two Linux servers, much like a network RAID 1. The DRBD architecture is shown in this diagram: http://blog.51cto.com/attachment/201205/135052270.gif
That's enough background; let's build it.
Prerequisites:
1) This setup uses two test nodes, node1.zhou.com and node2.zhou.com, whose IP addresses are 192.168.35.11/24 and 192.168.35.12/24 respectively;
2) node1 and node2 each provide a partition of identical size to serve as the DRBD device; here it is /dev/sda5 on both nodes, 2G in size;
3) the clocks of the two nodes must be kept in sync (see the sketch after this list);
4) disable SELinux on both servers.
To disable it right away: # setenforce 0
To keep it disabled after a reboot, edit the configuration file:
# vim /etc/selinux/config
Locate the SELINUX line and change it to: SELINUX=permissive
5) configure a working yum repository;
6) the OS is RHEL 5.4 on the x86 platform.
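For item 3, a minimal way to keep the clocks aligned (a sketch; 192.168.35.1 stands in for whatever NTP server your nodes can actually reach) is a periodic ntpdate job on both nodes:
# crontab -e
*/5 * * * * /sbin/ntpdate 192.168.35.1 &> /dev/null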
Part 1: Preparation
Host name resolution must work for both nodes, and each node's host name must match the output of "uname -n". Therefore, make sure /etc/hosts on both nodes contains the following:
192.168.35.11 node1.zhou.com node1
192.168.35.12 node2.zhou.com node2
So that the host names survive a reboot, also run commands like the following on the respective nodes:
Node1:
# sed -i 's@\(HOSTNAME=\).*@\1node1.zhou.com@g' /etc/sysconfig/network
# hostname node1.zhou.com
Node2:
# sed -i 's@\(HOSTNAME=\).*@\1node2.zhou.com@g' /etc/sysconfig/network
# hostname node2.zhou.com
To make copying files between the two servers convenient, set up mutual SSH trust next.
Allow the two nodes to communicate over SSH using keys, with commands like the following:
Node1:
# ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node2
Node2:
# ssh-keygen -t rsa
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@node1
That completes the setup: copying files between the two servers no longer asks for a password, which makes everything much more convenient.
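To confirm the trust works, run a remote command from node1; it should print node2's time without prompting for a password (and doubles as a quick check of the clock sync from item 3):
# ssh node2 'date'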
Part 2: Installing and Configuring DRBD
Download the required packages: drbd83-8.3.8-1.el5.centos.i386.rpm and kmod-drbd83-8.3.8-1.el5.centos.i686.rpm. Pick the packages that match your own system.
Once downloaded, install them directly:
# yum -y --nogpgcheck localinstall drbd83-8.3.8-1.el5.centos.i386.rpm kmod-drbd83-8.3.8-1.el5.centos.i686.rpm
They must be installed on both servers.
The following steps are performed on node1.zhou.com.
1) Copy the sample configuration file into place as the configuration we are about to use:
# cp /usr/share/doc/drbd83-8.3.8/drbd.conf /etc
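For reference, the sample drbd.conf shipped with drbd83 8.3 is essentially just two include directives, which is why the real configuration lives under /etc/drbd.d/ (quoted from memory of the packaging; verify against your own copy):
include "drbd.d/global_common.conf";
include "drbd.d/*.res";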
2) Configure /etc/drbd.d/global_common.conf (note the underscore in the file name):
global {
        usage-count no;
        # minor-count dialog-refresh disable-ip-verification
}
common {
        protocol C;
        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }
        startup {
                #wfc-timeout 120;
                #degr-wfc-timeout 120;
        }
        disk {
                on-io-error detach;
                #fencing resource-only;
        }
        net {
                cram-hmac-alg "sha1";
                shared-secret "mydrbdlab";
        }
        syncer {
                rate 1000M;
        }
}
Protocol C makes replication fully synchronous: a write is only acknowledged once it has reached the disk on both nodes, which is what allows failover without data loss.
3) Define a resource in /etc/drbd.d/web.res with the following content:
resource web {
        on node1.zhou.com {
                device    /dev/drbd0;
                disk      /dev/sda5;
                address   192.168.35.11:7789;
                meta-disk internal;
        }
        on node2.zhou.com {
                device    /dev/drbd0;
                disk      /dev/sda5;
                address   192.168.35.12:7789;
                meta-disk internal;
        }
}
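Before the device can be used, the configuration has to reach node2 and the resource has to be initialized and synced. A typical sequence for DRBD 8.3 (a sketch built from the standard drbdadm tooling; adjust names to your setup) looks like this:
# scp /etc/drbd.conf node2:/etc/
# scp -r /etc/drbd.d node2:/etc/
# drbdadm create-md web ----> run on both nodes, to write the metadata
# service drbd start ----> run on both nodes
# drbdadm -- --overwrite-data-of-peer primary web ----> on node1 only, to make it primary and start the initial sync
# watch -n 1 'cat /proc/drbd' ----> wait until the sync finishes
# mke2fs -j /dev/drbd0 ----> create the ext3 filesystem on node1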
Part 3: Installing and Configuring Corosync
Install corosync and pacemaker on both nodes from the yum repository configured earlier, then prepare the configuration on node1:
# cd /etc/corosync
# cp corosync.conf.example corosync.conf
totem {
        version: 2
        secauth: on ---> enable this
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.35.0 ----> change to the matching network address
                mcastaddr: 226.94.1.9 -----> change the multicast address a little as well, so it does not collide with anyone else's
                mcastport: 5405
        }
}
logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: no -----> one log destination is enough, so turn this one off
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
amf {
        mode: disabled
}
service { -----> everything from this line to the end must be added
        ver: 0
        name: pacemaker
}
aisexec {
        user: root
        group: root
}
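Before the checks below, the cluster authentication key has to be generated, the configuration copied to node2, and Corosync started on both nodes. A typical sequence (a sketch, assuming the corosync 1.x tooling on RHEL 5) is:
# corosync-keygen ----> creates /etc/corosync/authkey; it gathers entropy, so it may pause
# scp -p authkey corosync.conf node2:/etc/corosync/
# mkdir /var/log/cluster ; ssh node2 'mkdir /var/log/cluster' ----> the log directory named in the config must exist
# service corosync start
# ssh node2 'service corosync start'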
# grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Jun 14 19:02:08 node1 corosync[5103]: [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.
Jun 14 19:02:08 node1 corosync[5103]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Jun 14 19:02:08 node1 corosync[5103]: [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1397.
Jun 14 19:03:49 node1 corosync[5120]: [MAIN ] Corosync Cluster Engine ('1.2.7'): started and ready to provide service.
Jun 14 19:03:49 node1 corosync[5120]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Check that the TOTEM layer initialized:
# grep TOTEM /var/log/cluster/corosync.log
Jun 14 19:03:49 node1 corosync[5120]: [TOTEM ] Initializing transport (UDP/IP).
Jun 14 19:03:49 node1 corosync[5120]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jun 14 19:03:50 node1 corosync[5120]: [TOTEM ] The network interface [192.168.35.11] is now up.
Jun 14 19:03:50 node1 corosync[5120]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Check that no errors occurred during startup (the unpack_resources noise is filtered out because STONITH has not been configured yet):
# grep ERROR: /var/log/cluster/corosync.log | grep -v unpack_resources
Check whether Pacemaker started normally:
# grep pcmk_startup /var/log/cluster/corosync.log
Jun 14 19:03:50 node1 corosync[5120]: [pcmk ] info: pcmk_startup: CRM: Initialized
Jun 14 19:03:50 node1 corosync[5120]: [pcmk ] Logging: Initialized pcmk_startup
Jun 14 19:03:50 node1 corosync[5120]: [pcmk ] info: pcmk_startup: Maximum core file size is: 4294967295
Jun 14 19:03:50 node1 corosync[5120]: [pcmk ] info: pcmk_startup: Service: 9
Jun 14 19:03:50 node1 corosync[5120]: [pcmk ] info: pcmk_startup: Local hostname: node1.zhou.com
# crm status
============
Last updated: Tue Jun 14 19:07:06 2011
Stack: openais
Current DC: node1.zhou.com - partition with quorum
Version: 1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87
2 Nodes configured, 2 expected votes
0 Resources configured.
Online: [ node1.zhou.com node2.zhou.com ]
This output shows that Corosync was installed successfully.
Part 4: Installing MySQL
Download the software: mysql-5.5.20-linux2.6-i686.tar.gz. The following is done on node1, with node1 acting as the DRBD primary:
# mkdir /mydata
# mount /dev/drbd0 /mydata
# mkdir /mydata/data
# groupadd -r -g 306 mysql
# useradd -r -g mysql -u 306 -s /sbin/nologin -M mysql
# chown -R mysql:mysql /mydata/data
# tar xvf mysql-5.5.20-linux2.6-i686.tar.gz -C /usr/local/
# cd /usr/local/
# ln -sv mysql-5.5.20-linux2.6-i686 mysql
# cd mysql
# chown -R mysql:mysql .
# ./scripts/mysql_install_db --user=mysql --datadir=/mydata/data/
# chown -R root .
# cp support-files/my-large.cnf /etc/my.cnf
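After copying the template, point mysqld at the DRBD-backed data directory; the rest of the setup depends on this. Add the following line under the [mysqld] section of /etc/my.cnf (a minimal sketch):
datadir = /mydata/data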
# cp support-files/mysql.server /etc/rc.d/init.d/mysqld
# ln -sv /usr/local/mysql/include /usr/include/mysql
# echo "/usr/local/mysql/lib" > /etc/ld.so.conf.d/mysql.conf
# ldconfig -v | grep mysql
# service mysqld start ----> verify that MySQL comes up correctly
# service mysqld stop
# chkconfig mysqld off ----> the cluster, not init, will manage mysqld from now on
# umount /mydata ----> release the filesystem so the cluster can take it over
On node2, fetch the same tarball from node1:
# scp node1:/root/mysql-5.5.20-linux2.6-i686.tar.gz ./
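node2 needs the same mysql user and binaries as node1, so repeat those steps (a sketch assuming the same layout as on node1; there is no need to run mysql_install_db again, because the data directory lives on the DRBD device):
# groupadd -r -g 306 mysql
# useradd -r -g mysql -u 306 -s /sbin/nologin -M mysql
# tar xvf mysql-5.5.20-linux2.6-i686.tar.gz -C /usr/local/
# cd /usr/local/
# ln -sv mysql-5.5.20-linux2.6-i686 mysql
# cd mysql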
# chown -R root:mysql .
# scp node1:/etc/my.cnf /etc/
# scp node1:/etc/rc.d/init.d/mysqld /etc/rc.d/init.d/
# scp node1:/etc/ld.so.conf.d/mysql.conf /etc/ld.so.conf.d/
# chkconfig mysqld off ----> on node2 as well, mysqld must not start at boot
Part 5: Configuring the Cluster Resources
Back on node1, set the global cluster options first:
# crm configure
crm(live)configure# property stonith-enabled="false"
crm(live)configure# property no-quorum-policy="ignore"
crm(live)configure# rsc_defaults resource-stickiness=100
crm(live)configure# verify
crm(live)configure# commit
crm(live)configure# show
node node1.zhou.com
node node2.zhou.com
property $id="cib-bootstrap-options" \
dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
cluster-infrastructure="openais" \
expected-quorum-votes="2" \
stonith-enabled="false" \
no-quorum-policy="ignore"
rsc_defaults $id="rsc-options" \
resource-stickiness="100"
crm(live)configure# primitive drbd ocf:linbit:drbd params drbd_resource="web" op monitor interval=29s role="Master" op monitor interval=31s role="Slave" op start timeout=240s op stop timeout=100s
crm(live)configure# ms ms_drbd drbd meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
crm(live)configure# primitive fs ocf:heartbeat:Filesystem params device="/dev/drbd0" directory="/mydata" fstype="ext3" op start timeout=60s op stop timeout=60s
crm(live)configure# primitive ip ocf:heartbeat:IPaddr params ip="192.168.35.4" ---> the floating IP the service will be reached on
crm(live)configure# primitive mysqld lsb:mysqld
crm(live)configure# colocation fs_with_ms_drbd inf: fs ms_drbd:Master ---> keep fs on the node where ms_drbd is master
crm(live)configure# order fs_after_ms_drbd inf: ms_drbd:promote fs:start ----> promote ms_drbd before starting fs
crm(live)configure# colocation ip_with_ms_drbd inf: ip ms_drbd:Master ---> keep the IP address with the ms_drbd master
crm(live)configure# order fs_after_ip inf: ip fs:start ----> start fs only after ip
crm(live)configure# colocation mysqld_with_fs inf: mysqld fs ------> keep the mysql service together with fs
crm(live)configure# order mysqld_after_fs inf: fs mysqld:start -------> start fs before the mysql service
Finally, verify the configuration and commit it:
crm(live)configure# verify
crm(live)configure# commit
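As a design note, the three colocation/order pairs for ip, fs and mysqld could equally be expressed by collapsing the three resources into a group, which implies both colocation and start order (a sketch of the alternative, not what was configured above):
crm(live)configure# group mysql_service ip fs mysqld
crm(live)configure# colocation mysql_service_with_ms_drbd inf: mysql_service ms_drbd:Master
crm(live)configure# order mysql_service_after_ms_drbd inf: ms_drbd:promote mysql_service:start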
# crm_mon
The result looks like this:
Master/Slave Set: ms_drbd [drbd]
Masters: [ node2.zhou.com ]
Slaves: [ node1.zhou.com ]
fs (ocf::heartbeat:Filesystem): Started node2.zhou.com
ip (ocf::heartbeat:IPaddr): Started node2.zhou.com
mysqld (lsb:mysqld): Started node2.zhou.com
To test failover, put the currently active node (node2) into standby by running this on node2:
# crm node standby
Node node2.zhou.com: standby
Online: [ node1.zhou.com ]
Masters: [ node1.zhou.com ]
Stopped: [ drbd:0 ]
fs (ocf::heartbeat:Filesystem): Started node1.zhou.com
ip (ocf::heartbeat:IPaddr): Started node1.zhou.com
mysqld (lsb:mysqld): Started node1.zhou.com
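Node2 stays in standby until told otherwise; to bring it back into the cluster as the DRBD slave afterwards, run this on node2:
# crm node online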
This article was reproduced from ZhouLS's 51CTO blog; original link: http://blog.51cto.com/zhou123/878151