drbd (part 4)

1. Concepts:

DAS (SCSI bus, supports 7-16 devices; a controller is a chip integrated on the motherboard, an adapter is a card plugged into a PCI or PCI-E slot, e.g. a RAID card)

NAS (file server)

SAN (SCSI packets tunneled over FC or TCP/IP, enabling long-distance transport)

In some scenarios none of DAS, NAS, or SAN fits; use DRBD instead.

RAID1: mirroring. The two disks are the same size and correspond to each other bit for bit; the catch is that both disks must sit in the same host.

drbd: a kernel feature, a kernel module, similar to ipvs — loaded into the kernel and given rules when needed. drbd turns two disk partitions on two hosts into a mirrored device, pairing the two partitions bit for bit over the network.

drbd has a primary/secondary model:

At any moment only the current primary node may mount the device and perform reads and writes; the secondary must never mount it (that is, of the two nodes, at any given time one is primary and the other is necessarily secondary);

only two nodes are allowed (either primary/secondary or dual-primary);

as a primary/secondary resource, the roles can switch automatically.

drbd supports a dual-primary (dual master) model in which both nodes may mount the device simultaneously, provided it sits on a CFS (cluster filesystem, with a DLM, distributed lock manager), i.e. OCFS2 or GFS2; dual-primary does not improve performance (writes are not parallel — one node's write blocks the other's), it merely lets multiple machines use the same filesystem.
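Dual-primary mode is switched on in the resource's net section (with an optional automatic promotion at startup). A minimal sketch, assuming drbd 8.4 option syntax and a made-up resource name (drbd 8.3 uses the bare keyword `allow-two-primaries;`):

```
resource r0 {
  net {
    allow-two-primaries yes;   # both nodes may hold the primary role at once
  }
  startup {
    become-primary-on both;    # promote both nodes when the resource comes up
  }
  ...
}
```

As noted above, this is only safe on top of a cluster filesystem (OCFS2/GFS2) with a DLM.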

In a high-availability cluster, pacemaker defines drbd as a resource (drbd run as an HA cluster service); with the dual-primary model the filesystem must be formatted as a CFS, and the DLM must itself be defined as a cluster resource.

<a href="http://s2.51cto.com/wyfs02/M00/77/26/wKiom1ZkD1riR7lCAAB8oOHezLc810.jpg" target="_blank">[figure]</a>

service <--> FS (user-space processes issue filesystem-related system calls to the kernel through APIs (open(), read(), write(), etc.) that the kernel exports to user space)

Note: different filesystems name their system calls differently and accept different parameters (and different numbers of parameters); VFS papers over these differences between filesystems.

FS <--> buffer cache <--> disk scheduler (the kernel sets aside a region of memory as the buffer cache (caching metadata, data, etc.); file reads and writes happen in the buffer cache. For reads, the buffer cache holds a copy of the file; if that memory is freed, the next access to the file simply reads it again. For writes, data first sits in the buffer cache and the system flushes it to disk a moment later; on a mechanical disk the write must wait for the right track and sector to rotate under the head, so with multiple outstanding writes the disk scheduler merges and sorts operations on adjacent tracks or sectors to improve performance)

Note: the disk scheduler merges read requests and merges write requests, coalescing many I/Os into fewer; on mechanical disks, turning random reads and writes into sequential ones gives a very noticeable performance gain.

disk <--> driver (drivers generally execute in the kernel (they are critical); the driver carries out the actual read or write and knows the address involved (which track, which sector))

drbd (think of it as a filter: it ignores operations not aimed at drbd and does nothing with them)

Three data-synchronization protocols (differing in when completion is reported back to the app):

Protocol A (returns success as soon as the data reaches the local TCP/IP stack; asynchronous; the reliable choice for performance)

Protocol B (returns success once the data reaches the peer's TCP/IP stack; semi-synchronous)

Protocol C (returns success only after the data is stored on the peer's disk; synchronous; the reliable choice for data safety, and the default)
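The protocol choice is a single line in the common or per-resource section; a sketch of the tradeoff as comments:

```
common {
    protocol C;   # A = async: ack at the local TCP stack (fastest, may lose data)
                  # B = semi-sync: ack when the peer's TCP stack has the data
                  # C = sync: ack after the peer's disk write (default, safest)
}
```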

Things to think about: gigabit NICs, encrypted transport.

One host can carry several drbd devices as long as they use different partitions; each must define which disk it uses, which network, whether to encrypt, and how much bandwidth to take. For any given drbd the primary and secondary roles can swap at any time, and each host must run the app listening on its socket (plan both sides' ports, disks, and NICs ahead of time).
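Two independent resources on the same host pair could be sketched like this (the resource names, second partition, and second port below are made up for illustration; each resource gets its own minor device, backing partition, and TCP port):

```
resource web {
  device /dev/drbd0;
  disk   /dev/sdb1;
  meta-disk internal;
  on node1.magedu.com { address 192.168.41.129:7789; }
  on node2.magedu.com { address 192.168.41.130:7789; }
}

resource db {
  device /dev/drbd1;        # different minor number
  disk   /dev/sdb2;         # different partition (hypothetical)
  meta-disk internal;
  on node1.magedu.com { address 192.168.41.129:7790; }  # different port
  on node2.magedu.com { address 192.168.41.130:7790; }
}
```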

Defining a group of drbd devices (a drbd resource); a drbd resource has four attributes:

resource name (any ASCII characters except whitespace);

drbd device (on both nodes the device file is normally /dev/drbdNUM; its major number is 147 and the minor number distinguishes devices, like /dev/md{0,1} in RAID);

disk configuration (the backing storage device each node contributes);

network configuration (the network properties the two sides use when synchronizing data)

user space administration tools:

drbdadm (high-level, similar to ipvsadm; reads its configuration from /etc/drbd.conf, which merely says to include every *.conf and *.res file under /etc/drbd.d/)

drbdsetup (low-level; its options and arguments are complicated to use, not as convenient as drbdadm)

drbdmeta (low-level; manipulates drbd's metadata (not the same thing as file metadata) — the metadata used to maintain drbd devices, which can live on the local disk (internal, the usual choice) or on a separate disk)

2. Operations:

Environment: RedHat 5.8 i386, kernel 2.6.18-308.el5, two nodes node1 and node2

drbd83-8.3.15-2.el5.centos.i386.rpm (userland tools)

kmod-drbd83-8.3.15-3.el5.centos.i686.rpm (kernel module)

Note: drbd was only merged into the mainline kernel in 2.6.33; CentOS supplies both the kernel module and the userland tool package.

Prepare the environment (see the second cluster article, on heartbeat V2): time synchronization, passwordless ssh both ways, hostnames, the /etc/hosts file, and an unformatted 2G partition on each node.

node1-side:

#scp /root/*.rpm  root@node2:/root/

#for I  in  {1..2};do ssh  node$I  'yum -y  --nogpgcheck  localinstall /root/*.rpm';done

#rpm -ql  drbd83

/etc/drbd.conf

/etc/drbd.d/global_common.conf

/etc/ha.d/resource.d/drbddisk

/etc/ha.d/resource.d/drbdupper

/sbin/drbdadm

/sbin/drbdmeta

/sbin/drbdsetup

/usr/sbin/drbd-overview  (shows a brief drbd status summary)

#cp /usr/share/doc/drbd83-8.3.15/drbd.conf  /etc/drbd.conf

#vim /etc/drbd.conf

include "drbd.d/global_common.conf";

include "drbd.d/*.res";  (the *.res files define the resources)

#vim /etc/drbd.d/global_common.conf

global {

         usage-count no;  (if the host is internet-connected, enabling this automatically reports information to the authors for install-count statistics)

}

common {  (default property definitions; settings that are identical for resources across nodes can go in the common section)

         protocol C;  (data synchronization protocol; one of the three models A, B, C)

         handlers {  (handlers: if drbd fails, e.g. split brain, one side has to discard its data; scripts implement different policies, such as "whoever wrote less discards" or "whoever just became primary discards")

                   pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                   pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";

                   local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";

         }

         startup {  (synchronization when the drbd device starts; if the current primary cannot reach the secondary, the wait timeout and the degraded-mode (degr) timeout are defined here)

                   #wfc-timeout 120;

# degr-wfc-timeout 120;

         disk {  (disk settings; different drbd devices use different disks)

                   on-io-error detach;

                   #fencing resource-only;

   net {

       cram-hmac-alg "sha1";  (authentication for the replication link)

       shared-secret "mydrbd";

    }

   syncer {

       rate 1000M;

#vim /etc/drbd.d/mydrbd.res

resource mydrbd  {

 device  /dev/drbd0;

  disk  /dev/sdb1;

 meta-disk  internal;

 on  node1.magedu.com  {

  address  192.168.41.129:7789;

  }

 on  node2.magedu.com  {

  address  192.168.41.130:7789;

#scp -r  /etc/drbd*  node2:/etc/

#drbdadm help

#drbdadm create-md  mydrbd  (initialize drbd; this must be run on both nodes)

node2-side:

#drbdadm create-md  mydrbd

[root@node1 ~]# cat  /proc/drbd  (check status; Inconsistent means the two sides are not yet in sync)

version: 8.3.15 (api:88/proto:86-97)

GIT-hash: 0ce4d235fc02b5c53c1c52c53433d11a694eab8c build [email protected], 2013-03-27 16:04:08

 0:cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

   ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:2096348

[root@node1 ~]# drbd-overview  (check status)

 0:mydrbd  Connected Secondary/Secondary Inconsistent/Inconsistent C r-----

[root@node1 ~]# drbdadm  --  --overwrite-data-of-peer  primary  mydrbd  (make the current node primary; note this command is run on only ONE node; equivalently: #drbdsetup  /dev/drbd0 primary  -o)

[root@node1 ~]# drbd-overview  (there is now a primary and a secondary and the data sync is in progress; you can also watch it live with #watch -n 1 'cat /proc/drbd')

 0:mydrbd  SyncSource Primary/Secondary UpToDate/Inconsistent C r---n-

         [==>.................] sync'ed: 16.3% (1760348/2096348)K

[root@node1 ~]# drbd-overview

 0:mydrbd  Connected Primary/Secondary UpToDate/UpToDate C r-----

[root@node2 ~]# drbd-overview  (in the Secondary/Primary shown below, the first part is this node's own state, the second is the other node's)

 0:mydrbd  Connected Secondary/Primary UpToDate/UpToDate C r-----

[root@node1 ~]# mke2fs -j /dev/drbd0

[root@node1 ~]# mkdir /mydata

[root@node1 ~]# mount /dev/drbd0 /mydata

[root@node1 ~]# ls /mydata

lost+found

[root@node1 ~]# cp /etc/issue /mydata

[root@node1 ~]# umount /mydata  (important: unmount the filesystem before demoting the current node to secondary)

[root@node1 ~]# drbdadm secondary mydrbd  (demote the current node to secondary)

 0:mydrbd  Connected Secondary/Secondary UpToDate/UpToDate C r-----

[root@node2 ~]# drbdadm primary mydrbd  (promote the current node to primary)

[root@node2 ~]# drbd-overview

[root@node2 ~]# mkdir /mydata

[root@node2 ~]# mount /dev/drbd0 /mydata

[root@node2 ~]# ls /mydata

issue lost+found

=============================================================

drbd, distributed replicated block device: software that synchronizes and mirrors data between pairs of HA servers at the block-device level. Over the network it gives two servers real-time synchronous replication or asynchronous mirroring at the block level. It plays a role similar to an inotify+rsync style stack, but inotify+rsync synchronizes real files on top of the filesystem, while drbd synchronizes blocks underneath the filesystem, so drbd is more efficient and works better;

drbd refers to block devices designed as a building block to form high availability clusters, this is done by mirroring a whole block device via an assigned network. drbd can be understood as network based raid-1.

www.drbd.org

How it works:

<a href="http://s5.51cto.com/wyfs02/M02/85/DD/wKiom1etEfSgg2ToAABU4sh0TNc852.jpg" target="_blank">[figure]</a>

drbd works below the filesystem, closer to the OS kernel and the I/O stack than the filesystem is. On an HA server pair, as data is written to disk it is also sent in real time over the network to the other host and recorded in the same form on that host's disk, keeping the local (master node) and remote (backup node) data synchronized in real time. If the master node fails, the backup node still holds the same data and can take over directly and keep serving, shortening the outage-repair window and improving the user experience;

drbd's role is like the RAID1 function of a disk array: it effectively turns two networked servers into a RAID1 pair. In an HA setup drbd can replace a shared disk array, because the data exists on both master and backup; on failover the backup node can use the data directly (master and backup hold identical data);

drbd synchronization modes:

real-time synchronous mode (drbd protocol C: a write returns success only after the data has been written to the local disk and to the disks of all remote servers; this prevents inconsistency or loss between local and remote, and is the usual choice in production);

asynchronous mode (drbd protocol A or B: a write returns success once the data is on the local server's disk, or once it has reached the remote buffer, without regard to whether the remote server actually committed it);

Note: nfs has similar options and behavior, sync and async; the mount command has such options too;

drbd production deployment modes:

primary/secondary (single-primary; the classic HA cluster design);

dual-primary (requires a CFS cluster filesystem such as GFS or OCFS2);

drbd's 3 replication protocols:

Protocol A (asynchronous replication: the write is considered complete once the data is on the local disk and the replicated copy has been placed in the local TCP send buffer awaiting transmission; efficient, but data can be lost);

Protocol B (semi-synchronous, i.e. memory-synchronous, replication: the write is considered complete once the data is on the local disk and has been received into the peer's memory buffer (the peer's TCP stack has the data); if both machines lose power the data can be lost. This semi-sync is not the same as MySQL's: in a MySQL one-master-many-slaves setup, semi-sync only ensures the master and one slave commit in sync and ignores the other slaves);

Protocol C (synchronous replication: the primary node has written and the secondary node's disk has also written; data survives a single machine losing power or a single disk failing);

Note: if something goes wrong, troubleshoot in this order:

check iptables, selinux, the drbd configuration files, IP configuration, host routing, and so on;

if both ends show Secondary/Unknown, the most likely cause is split brain; on the backup node run:

#drbdadm secondary data

#drbdadm -- --discard-my-data connect data

#cat /proc/drbd

on the master node run:

#drbdadm connect data

#cat /proc/drbd

drbd enterprise use cases:

heartbeat+drbd+nfs

heartbeat+drbd+MySQL

Note: drbd fits any service scenario that needs data synchronization. The drbd backup node is invisible: it is not mounted and applications cannot use it, which leaves one server's resources idle; at any moment only the master node provides read/write. drbd can be paired with MySQL or Oracle; in Oracle 8i through 10g the standby cannot serve (e.g. reads), from 11g it can; Oracle's Data Guard offers both physical and logical standby modes.

Related synchronization tools:

rsync (sersync, inotify, lsyncd), scp, nc, nfs, union (two-machine sync), csync2 (multi-machine sync), applications' built-in replication (MySQL, Oracle, mongodb, ttserver, redis), drbd

Note:

drbd status information:

0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----

   ns:4 nr:4 dw:8 dr:1039 al:1 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@test-master ~]# drbdadm cstate data

Connected

cs (connection state). Status of the network connection. See Section 6.1.5, "Connection states" for details about the various connection states.

StandAlone. No network configuration available. The resource has not yet been connected, or has been administratively disconnected (using drbdadm disconnect), or has dropped its connection due to failed authentication or split brain.

Disconnecting. Temporary state during disconnection. The next state is StandAlone.

Unconnected. Temporary state, prior to a connection attempt. Possible next states: WFConnection and WFReportParams.

Timeout. Temporary state following a timeout in the communication with the peer. Next state: Unconnected.

BrokenPipe. Temporary state after the connection to the peer was lost. Next state: Unconnected.

NetworkFailure. Temporary state after the connection to the partner was lost. Next state: Unconnected.

ProtocolError. Temporary state after the connection to the partner was lost. Next state: Unconnected.

TearDown. Temporary state. The peer is closing the connection. Next state: Unconnected.

WFConnection. This node is waiting until the peer node becomes visible on the network.

WFReportParams. TCP connection has been established, this node waits for the first network packet from the peer.

Connected. A DRBD connection has been established, data mirroring is now active. This is the normal state.

StartingSyncS. Full synchronization, initiated by the administrator, is just starting. The next possible states are: SyncSource or PausedSyncS.

StartingSyncT. Full synchronization, initiated by the administrator, is just starting. Next state: WFSyncUUID.

WFBitMapS. Partial synchronization is just starting. Next possible states: SyncSource or PausedSyncS.

WFBitMapT. Partial synchronization is just starting. Next possible state: WFSyncUUID.

WFSyncUUID. Synchronization is about to begin. Next possible states: SyncTarget or PausedSyncT.

SyncSource. Synchronization is currently running, with the local node being the source of synchronization.

SyncTarget. Synchronization is currently running, with the local node being the target of synchronization.

PausedSyncS. The local node is the source of an ongoing synchronization, but synchronization is currently paused. This may be due to a dependency on the completion of another synchronization process, or due to synchronization having been manually interrupted by drbdadm pause-sync.

PausedSyncT. The local node is the target of an ongoing synchronization, but synchronization is currently paused. This may be due to a dependency on the completion of another synchronization process, or due to synchronization having been manually interrupted by drbdadm pause-sync.

VerifyS. On-line device verification is currently running, with the local node being the source of verification.

VerifyT. On-line device verification is currently running, with the local node being the target of verification.

[root@test-master ~]# drbdadm role data

Primary/Secondary

ro (roles). Roles of the nodes. The role of the local node is displayed first, followed by the role of the partner node shown after the slash. See Section 6.1.6, "Resource roles" for details about the possible resource roles.

Primary. The resource is currently in the primary role, and may be read from and written to. This role only occurs on one of the two nodes, unless dual-primary mode is enabled.

Secondary. The resource is currently in the secondary role. It normally receives updates from its peer (unless running in disconnected mode), but may neither be read from nor written to. This role may occur on one or both nodes.

Unknown. The resource's role is currently unknown. The local resource role never has this status. It is only displayed for the peer's resource role, and only in disconnected mode.

[root@test-master ~]# drbdadm dstate data

UpToDate/UpToDate

ds (disk states). State of the hard disks. Prior to the slash the state of the local node is displayed, after the slash the state of the hard disk of the partner node is shown. See Section 6.1.7, "Disk states" for details about the various disk states.

Diskless. No local block device has been assigned to the DRBD driver. This may mean that the resource has never attached to its backing device, that it has been manually detached using drbdadm detach, or that it automatically detached after a lower-level I/O error.

Attaching. Transient state while reading meta data.

Failed. Transient state following an I/O failure report by the local blockdevice. Next state: Diskless.

Negotiating. Transient state when an Attach is carried out on an already-Connected DRBD device.

Inconsistent. The data is inconsistent. This status occurs immediately upon creation of a new resource, on both nodes (before the initial full sync). Also, this status is found in one node (the synchronization target) during synchronization.

Outdated. Resource data is consistent, but outdated.

DUnknown. This state is used for the peer disk if no network connection isavailable.

Consistent. Consistent data of a node without connection. When the connection is established, it is decided whether the data is UpToDate or Outdated.

UpToDate. Consistent, up-to-date state of the data. This is the normal state.

ns (network send). Volume of net data sent to the partner via the network connection; in Kibyte.

nr (network receive). Volume of net data received by the partner via the networkconnection; in Kibyte.

dw (disk write). Net data written on local hard disk; in Kibyte.

dr (disk read). Net data read from local hard disk; in Kibyte.

al (activity log). Number of updates of the activity log area of the meta data.

bm (bit map). Number of updates of the bitmap area of the meta data.

lo (local count). Number of open requests to the local I/O sub-system issued by DRBD.

pe (pending). Number of requests sent to the partner, but that have not yet been answered by the latter.

ua (unacknowledged). Number of requests received by the partner via the network connection, but that have not yet been answered.

ap (application pending). Number of block I/O requests forwarded to DRBD, but not yet answered by DRBD.

ep (epochs). Number of epoch objects. Usually 1. Might increase under I/O load when using either the barrier or the none write ordering method.

wo (write order). Currently used write ordering method: b(barrier), f(flush),d(drain) or n(none).

oos (out of sync). Amount of storage currently out of sync; in Kibibytes.
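The cs/ro/ds fields described above can be pulled out of a /proc/drbd status line with ordinary text tools; a small sketch (the sample line below is hypothetical, in the format documented above):

```shell
# sample /proc/drbd status line in the format documented above (hypothetical values)
line='0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----'

# pull out each key:value field and strip the key prefix
cs=$(echo "$line" | grep -o 'cs:[^ ]*'); cs=${cs#cs:}
ro=$(echo "$line" | grep -o 'ro:[^ ]*'); ro=${ro#ro:}
ds=$(echo "$line" | grep -o 'ds:[^ ]*'); ds=${ds#ds:}
echo "connection=$cs roles=$ro disks=$ds"
# → connection=Connected roles=Primary/Secondary disks=UpToDate/UpToDate
```

On a live node the same extraction would run against the output of `cat /proc/drbd`; `drbdadm cstate/role/dstate` (shown above) prints the same fields directly.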

Prepare the environment:

master: eth0 (10.96.20.113), eth1 (172.16.1.113, for data sync and heartbeat; no gateway or DNS configured), hostname test-master

backup: eth0 (10.96.20.114), eth1 (172.16.1.114, for data sync and heartbeat; no gateway or DNS configured), hostname test-backup

each host has two disks

On each, configure: the hostname in /etc/sysconfig/network (the result must match uname -n), the /etc/hosts file, passwordless ssh both ways, time synchronization, iptables, selinux

test-master:

[root@test-master ~]# fdisk -l

……

Disk /dev/sdb: 2147 MB, 2147483648 bytes

255 heads, 63 sectors/track, 261 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

[root@test-master ~]# parted /dev/sdb  #(parted handles disks larger than 2T; split the new disk into two partitions, one for data and the other for drbd's meta data)

GNU Parted 2.1

Using /dev/sdb

Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) h                                                               

 align-check TYPE N                       check partition N for TYPE(min|opt) alignment

 check NUMBER                            do a simple check on the file system

  cp [FROM-DEVICE] FROM-NUMBER TO-NUMBER  copy file system to another partition

 help [COMMAND]                          print general help, or help on COMMAND

  mklabel,mktable LABEL-TYPE               create a new disklabel (partition table)

 mkfs NUMBER FS-TYPE                     make a FS-TYPE file system on partition NUMBER

  mkpart PART-TYPE [FS-TYPE] START END     make a partition

 mkpartfs PART-TYPE FS-TYPE START END    make a partition with a file system

 move NUMBER START END                   move partition NUMBER

 name NUMBER NAME                        name partition NUMBER as NAME

  print [devices|free|list,all|NUMBER]     display the partition table, available devices, free space, all found partitions, or a

       particular partition

 quit                                    exit program

 rescue START END                        rescue a lost partition near START and END

 resize NUMBER START END                 resize partition NUMBER and its file system

  rm NUMBER                               delete partition NUMBER

 select DEVICE                           choose the device to edit

  set NUMBER FLAG STATE                   change the FLAG on partition NUMBER

 toggle [NUMBER [FLAG]]                  toggle the state of FLAG on partition NUMBER

 unit UNIT                               set the default unit to UNIT

 version                                 display the version number and copyright information of GNU Parted

(parted) mklabel gpt                                                     

(parted) mkpart primary 0 1024

Warning: The resulting partition is not properly aligned for best performance.

Ignore/Cancel? Ignore

(parted) mkpart primary 1025 2147                                         

Ignore/Cancel? Ignore

(parted) p                                                               

Model: VMware, VMware Virtual S (scsi)

Disk /dev/sdb: 2147MB

Sector size (logical/physical): 512B/512B

Partition Table: gpt

Number Start   End     Size   File system  Name     Flags

 1     17.4kB  1024MB  1024MB               primary

 2     1025MB  2147MB  1122MB               primary

[root@test-master ~]# wget http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

[root@test-master ~]# rpm -ivh elrepo-release-6-6.el6.elrepo.noarch.rpm

warning: elrepo-release-6-6.el6.elrepo.noarch.rpm: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY

Preparing...               ########################################### [100%]

  1:elrepo-release        ########################################### [100%]

[root@test-master ~]# yum -y install drbd kmod-drbd84

[root@test-master ~]# modprobe drbd

FATAL: Module drbd not found.

[root@test-master ~]# yum -y install kernel*   #(after upgrading the kernel, reboot the system)

[root@test-master ~]# uname -r

2.6.32-642.3.1.el6.x86_64

[root@test-master ~]# depmod

[root@test-master ~]# lsmod | grep drbd

drbd                  372759  0

libcrc32c               1246  1 drbd

[root@test-master ~]# ll /usr/src/kernels/

total 12

drwxr-xr-x. 22 root root 4096 Mar 31 06:46 2.6.32-431.el6.x86_64

drwxr-xr-x. 22 root root 4096 Aug  8 03:40 2.6.32-642.3.1.el6.x86_64

drwxr-xr-x. 22 root root 4096 Aug  8 03:40 2.6.32-642.3.1.el6.x86_64.debug

[root@test-master ~]# chkconfig drbd off

[root@test-master ~]# chkconfig --list drbd

drbd              0:off 1:off 2:off 3:off 4:off 5:off 6:off

[root@test-master ~]# echo "modprobe drbd > /dev/null 2>&1" > /etc/sysconfig/modules/drbd.modules

[root@test-master ~]# cat !$

cat /etc/sysconfig/modules/drbd.modules

modprobe drbd > /dev/null 2>&1

test-backup:

[root@test-backup ~]# parted /dev/sdb

(parted) mklabel gpt

(parted) mkpart primary 0 4096                                           

Ignore/Cancel? Ignore                                                    

(parted) mkpart primary 4097 5368                                        

(parted) p                                                               

Disk /dev/sdb: 5369MB

 1     17.4kB  4096MB  4096MB               primary

 2     4097MB  5368MB  1271MB               primary

[root@test-backup ~]# wget http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm

[root@test-backup ~]# rpm -ivh elrepo-release-6-6.el6.elrepo.noarch.rpm

[root@test-backup ~]# ll /etc/yum.repos.d/

total 20

-rw-r--r--. 1 root root 1856 Jul 19 00:28 CentOS6-Base-163.repo

-rw-r--r--. 1 root root 2150 Feb  9  2014 elrepo.repo

-rw-r--r--. 1 root root  957 Nov  4  2012 epel.repo

-rw-r--r--. 1 root root 1056 Nov  4  2012 epel-testing.repo

-rw-r--r--. 1 root root  529 Mar 30 23:00 rhel-source.repo.bak

[root@test-backup ~]# yum -y install drbd kmod-drbd84

[root@test-backup ~]# yum -y install kernel*

[root@test-backup ~]# depmod

[root@test-backup ~]# lsmod | grep drbd

[root@test-backup ~]# chkconfig drbd off

[root@test-backup ~]# chkconfig --list drbd

[root@test-backup ~]# echo "modprobe drbd > /dev/null 2>&1" > /etc/sysconfig/modules/drbd.modules

[root@test-backup ~]# cat !$

[root@test-master ~]# vim /etc/drbd.d/global_common.conf

[root@test-master ~]# egrep -v "#|^$" /etc/drbd.d/global_common.conf

         usage-count no;

common {

         handlers {

         startup {

         options {

         disk {

                on-io-error detach;

         net {

         syncer {

                   rate 50M;

                   verify-alg crc32c;

[root@test-master ~]# vim /etc/drbd.d/data.res

resource data {

       protocol C;

       on test-master {

                device  /dev/drbd0;

                disk    /dev/sdb1;

                address 172.16.1.113:7788;

                meta-disk       /dev/sdb2[0];

       }

       on test-backup {

                address 172.16.1.114:7788;

[root@test-master ~]# cd /etc/drbd.d

[root@test-master drbd.d]# scp global_common.conf data.res root@test-backup:/etc/drbd.d/

global_common.conf                                                                                     100% 2144     2.1KB/s   00:00   

data.res                                                                                               100%  251     0.3KB/s  00:00   

[root@test-master drbd.d]# drbdadm --help

USAGE: drbdadm COMMAND [OPTION...] {all|RESOURCE...}

GENERAL OPTIONS:

 --stacked, -S

 --dry-run, -d

 --verbose, -v

 --config-file=..., -c ...

 --config-to-test=..., -t ...

 --drbdsetup=..., -s ...

 --drbdmeta=..., -m ...

 --drbd-proxy-ctl=..., -p ...

 --sh-varname=..., -n ...

 --peer=..., -P ...

 --version, -V

 --setup-option=..., -W ...

 --help, -h

COMMANDS:

 attach                             disk-options                      

 detach                             connect                           

 net-options                        disconnect                        

 up                                 resource-options                  

 down                               primary                           

 secondary                          invalidate                        

 invalidate-remote                  outdate                           

 resize                             verify                            

 pause-sync                         resume-sync                       

 adjust                            adjust-with-progress              

 wait-connect                       wait-con-int                      

 role                               cstate                            

 dstate                             dump                              

 dump-xml                           create-md                          

 show-gi                            get-gi                            

 dump-md                            wipe-md                           

 apply-al                           hidden-commands    

[root@test-master drbd.d]# drbdadm create-md data

initializing activity log

NOT initializing bitmap

Writing meta data...

New drbd meta data block successfullycreated.

[root@test-master drbd.d]# ssh test-backup 'drbdadm create-md data'

[root@test-master drbd.d]# drbdadm up data

[root@test-master drbd.d]# ssh test-backup 'drbdadm up data'

[root@test-master drbd.d]# cat /proc/drbd

version: 8.4.7-1 (api:1/proto:86-101)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

   ns:0 nr:0 dw:0 dr:0 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:999984

[root@test-master drbd.d]# ssh test-backup 'cat /proc/drbd'

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

   ns:0 nr:0 dw:0 dr:0 al:16 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:999984

[root@test-master drbd.d]# drbdadm -- --overwrite-data-of-peer primary data   #(run only on the primary; this overwrites the backup node's data)

GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by mockbuild@Build64R6, 2016-01-12 13:27:11

 0:cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----

    ns:339968 nr:0 dw:0 dr:340647 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:660016

         [=====>..............] sync'ed: 34.3% (660016/999984)K

         finish: 0:00:15 speed: 42,496 (42,496) K/sec

[root@test-master drbd.d]# cat /proc/drbd

 0:cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----

    ns:630784 nr:0 dw:0 dr:631463 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:369200

         [===========>........] sync'ed: 63.3% (369200/999984)K

         finish: 0:00:09 speed: 39,424 (39,424) K/sec

    ns:942080 nr:0 dw:0 dr:942759 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:57904

         [=================>..] sync'ed: 94.3% (57904/999984)K

         finish: 0:00:01 speed: 39,196 (39,252) K/sec

 0:cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----

    ns:999983 nr:0 dw:0 dr:1000662 al:8 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

 0:cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate C r-----

    ns:0 nr:999983 dw:999983 dr:0 al:16 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@test-master drbd.d]# mkdir /drbd

[root@test-master drbd.d]# ssh test-backup 'mkdir /drbd'

[root@test-master drbd.d]# mkfs.ext4 -b 4096 /dev/drbd0   #(run only on the primary; do not format the meta partition)

Writing superblocks and filesystem accounting information: done

[root@test-master drbd.d]# tune2fs -c -1 /dev/drbd0

tune2fs 1.41.12 (17-May-2010)

Setting maximal mount count to -1

[root@test-master drbd.d]# mount /dev/drbd0 /drbd

[root@test-master drbd.d]# cd /drbd

[root@test-master drbd]# for i in `seq 1 10`; do touch test$i; done

[root@test-master drbd]# ls

lost+found test1  test10  test2 test3  test4  test5 test6  test7  test8 test9

[root@test-master drbd]# cd

[root@test-master ~]# umount /dev/drbd0

[root@test-master ~]# drbdadm secondary data

[root@test-master ~]# cat /proc/drbd

 0:cs:Connected ro:Secondary/Secondary ds:UpToDate/UpToDate C r-----

    ns:1032538 nr:0 dw:32554 dr:1001751 al:19 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@test-backup ~]# cat /proc/drbd

    ns:0 nr:1032538 dw:1032538 dr:0 al:16 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@test-backup ~]# drbdadm primary data

[root@test-backup ~]# cat /proc/drbd

 0:cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----

    ns:0 nr:1032538 dw:1032538 dr:679 al:16 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

[root@test-backup ~]# mount /dev/drbd0 /drbd

[root@test-backup ~]# ls /drbd

This article was reposted from chaijowin's 51CTO blog; original link: http://blog.51cto.com/jowin/1720094. Please contact the original author for reprint permission.