
[RHEL 6.5] Configuring luci and ricci + Apache + iSCSI / GFS2, plus the errors encountered and how to fix them

  • Environment
    OS: RHEL 6.5, three virtual machines:

    luci host: 172.25.23.8, hostname: server8.com
    ricci host: 172.25.23.9, hostname: server9.com
    ricci host: 172.25.23.10, hostname: server10.com
  • Preparation before installing (a command sketch follows this list)
  • 1. Temporarily stop iptables; otherwise you would have to write firewall rules for all the cluster ports;
  • 2. Temporarily disable SELinux;
  • 3. Keep the clocks of the nodes synchronized;
  • 4. Make sure hostname resolution between the nodes works;
  • 5. Make sure the yum repositories are usable;
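    A minimal sketch of these preparation steps on RHEL 6, run on every node (the NTP server address below is a placeholder, not taken from this environment):

service iptables stop && chkconfig iptables off    # temporary: no firewall rules to maintain
setenforce 0                                       # temporary: put SELinux into permissive mode
ntpdate 172.25.23.250                              # placeholder time source; any reachable NTP server works
cat >> /etc/hosts <<'EOF'
172.25.23.8   server8.com
172.25.23.9   server9.com
172.25.23.10  server10.com
EOF
yum repolist                                       # confirm the repositories answer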
  • Software installation
  • Install luci, the cluster management software, on server8.com (the command is sketched after the ricci install below)
  • Install ricci on every cluster node

[root@my Desktop]# ssh 172.25.23.9 'yum install ricci -y'
[root@my Desktop]# ssh 172.25.23.10 'yum install ricci -y'
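# Installing luci on server8.com (assumed, not shown above) is a single package:
[root@server8 ~]# yum install luci -y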
           
  • Set a password for the ricci user on each node
[root@server9 ~]# passwd ricci
Changing password for user ricci.
New password: 
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: 
passwd: all authentication tokens updated successfully.

[root@server10 ~]# passwd ricci
Changing password for user ricci.
New password: 
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: 
passwd: all authentication tokens updated successfully.
           
  • First start the ricci service and enable it at boot
[root@server9 ~]# /etc/init.d/ricci start
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@server9 ~]# chkconfig ricci on

[root@server10 ~]# /etc/init.d/ricci start
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@server10 ~]# chkconfig ricci on
           
  • Check the ports the service is listening on (a quick check follows)
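    For reference, ricci listens on TCP port 11111; a quick check on either node looks like:

[root@server9 ~]# netstat -tlnp | grep ricci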
  • Start the luci service on the management node
[root@server8 ~]# /etc/init.d/luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `server8.com' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
    (none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Starting saslauthd:                                        [  OK  ]
Start luci...                                              [  OK  ]
Point your web browser to https://server8.com:8084 (or equivalent) to access luci
           

Check the port luci listens on (8084, per the start-up message above)

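    One way to check it from the shell:

[root@server8 ~]# netstat -tlnp | grep 8084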
  • Access https://server8.com:8084 from a browser over HTTPS;
  • First add server9.com and server10.com as cluster nodes: click Create;

  • Following this configuration, create server9.com and server10.com as cluster nodes managed by server8.com;

  • After clicking Submit, a progress screen shows the cluster nodes being created and the required packages being installed
  • This step takes quite a while; once the packages are installed the nodes reboot, and luci temporarily reports that it cannot contact them;
  • Once the process finishes, both cluster nodes show up as fully configured;
  • The cluster services on the ricci nodes are installed automatically under luci's control; after the reboot, make sure all of them are running;
  • The node status can also be checked from the command line on the ricci nodes (commands below);
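    The usual commands for this on RHEL 6 are clustat and cman_tool, for example:

[root@server9 ~]# clustat            # cluster membership and service status
[root@server9 ~]# cman_tool status   # quorum, votes and node information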
  • The node configuration can also be read from the cluster configuration file
[root@server9 ~]# cat /etc/cluster/cluster.conf 
<?xml version="1.0"?>
<cluster config_version="1" name="xixihaha">
    <clusternodes>
        <clusternode name="server9.com" nodeid="1"/>
        <clusternode name="server10.com" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices/>
    <rm/>
</cluster>
           
  • Next, add a fence device to the cluster. This step cannot be done inside the virtual machines; it must be done on the physical host
  • Install the fence software; again, this is on the physical host;
[root@my Desktop]# yum install fence* -y
fence-agents-ipmilan--el7.x86_64
fence-agents-eps--el7.x86_64
fence-agents-ibmblade--el7.x86_64
fence-agents-apc-snmp--el7.x86_64
fence-agents-kdump--el7.x86_64
fence-agents-all--el7.x86_64
fence-agents-wti--el7.x86_64
fence-agents-ilo2--el7.x86_64
libxshmfence-devel--el7.x86_64
fence-agents-rsa--el7.x86_64
fence-agents-brocade--el7.x86_64
fence-agents-emerson--el7.x86_64
fence-agents-cisco-ucs--el7.x86_64
fence-virtd--el7.x86_64
fence-agents-cisco-mds--el7.x86_64
fence-agents-ilo-ssh--el7.x86_64
fence-agents-intelmodular--el7.x86_64
fence-agents-drac5--el7.x86_64
fence-virtd-serial--el7.x86_64
libxshmfence--el7.x86_64
fence-agents-ilo-mp--el7.x86_64
fence-agents-eaton-snmp--el7.x86_64
fence-agents-vmware-soap--el7.x86_64
fence-agents-ipdu--el7.x86_64
fence-agents-ifmib--el7.x86_64
fence-agents-hpblade--el7.x86_64
fence-agents-ilo-moonshot--el7.x86_64
fence-virtd-libvirt--el7.x86_64
fence-agents-common--el7.x86_64
fence-agents-rsb--el7.x86_64
fence-virtd-multicast--el7.x86_64
fence-virt--el7.x86_64
fence-agents-scsi--el7.x86_64
fence-agents-bladecenter--el7.x86_64
fence-agents-mpath--el7.x86_64
fence-agents-rhevm--el7.x86_64
fence-agents-compute--el7.x86_64
fence-agents-apc--el7.x86_64
libxshmfence--el7.i686
Installing with a wildcard is not really recommended, but it makes it harder to miss a package;
           
  • Configure how fence_virtd works; again, run this on the physical host
[root@my Desktop]# fence_virtd -c    # the command is fence_virtd; do not confuse it with the very similar fence_virt
Module search path [/usr/lib64/fence-virt]: 

Available backends:
    libvirt 
Available listeners:
    multicast 
    serial 

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:  # this type must match the virtual fence device type chosen later in luci

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [.]: 

Using ipv4 as family.

Multicast IP Port []: 

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0  # the bridge interface the VMs use on this host; yours may differ

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]: 

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]: 

Configuration complete.

=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }

}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }

}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
           
  • The key file /etc/cluster/fence_xvm.key may not have been generated; create one by hand from random data
[root@my Desktop]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
+ records in
+ records out
 bytes ( B) copied,  s,  kB/s
           
  • Start the fence_virtd service
[root@my Desktop]# systemctl restart fence_virtd
[root@my Desktop]# systemctl status fence_virtd
           
  • Make sure it really did start correctly,
  • and that it opened its listening port (a quick check follows);
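    With the configuration written above, fence_virtd uses UDP port 1229, so a quick check on the physical host is:

[root@my Desktop]# netstat -aunlp | grep 1229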
  • Next, copy the key file to each ricci host
[root@my Desktop]# scp /etc/cluster/fence_xvm.key 172.25.23.9:/etc/cluster/
fence_xvm.key                                 %       KB/s   :    
[root@my Desktop]# scp /etc/cluster/fence_xvm.key 172.25.23.10:/etc/cluster/
fence_xvm.key                                 %       KB/s   :
           
  • Next, add the fence device through luci

  • The device type must be the virtual machine fencing type (agent fence_xvm), in multicast mode;
  • Then, under each node in the cluster, find Add Fence Method

  • Click it, give the method a name, and click Submit
  • Then choose Add Fence Instance

  • Copy the node's UUID from the virtual machine manager

  • and paste it into the Domain field of the fence instance
  • Do the same for the other host, server10.com,
  • copying server10.com's UUID and pasting it in the same place;
    • This is what the configuration looks like once finished
    • The UUID is the key to making the fencing work, so it must not be copied incorrectly
    • Below is how the cluster configuration file changes after the fence device is added
[root@server9 ~]# cat /etc/cluster/cluster.conf 
<?xml version="1.0"?>
<cluster config_version="1" name="xixihaha">
    <clusternodes>
        <clusternode name="server9.com" nodeid="1"/>
        <clusternode name="server10.com" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices/>
    <rm/>
</cluster>
[root@server9 ~]# cat /etc/cluster/cluster.conf 
<?xml version="1.0"?>
<cluster config_version="6" name="xixihaha">
    <clusternodes>
        <clusternode name="server9.com" nodeid="1">
            <fence>
                <method name="fence1">
                    <device domain="d1c2f8d4-7f8e-453e-9584-0b53d1256d62" name="vmFence"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="server10.com" nodeid="2">
            <fence>
                <method name="fence2">
                    <device domain="ac8be087-b7bc-48f6-b6dd-b482badd8572" name="vmFence"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="vmFence"/>
    </fencedevices>
</cluster>
           
  • Next, fence a host manually by its domain name; the fenced host reboots and automatically rejoins the cluster
  • This must be verified, and in both directions, to guard against a host not responding during automatic fencing (example command below);
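    Manual fencing can also be triggered from one cluster node against the other, for example:

[root@server9 ~]# fence_node server10.com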
  • Next, add a failover domain;
  • and check the change in the configuration file; this is the failover-domain section (a typical stanza is sketched below)
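    A failover-domain stanza in /etc/cluster/cluster.conf typically looks roughly like this (the domain name, ordering and priorities below are illustrative, not copied from this cluster):

<rm>
    <failoverdomains>
        <failoverdomain name="webfail" ordered="1" restricted="1">
            <failoverdomainnode name="server9.com" priority="1"/>
            <failoverdomainnode name="server10.com" priority="2"/>
        </failoverdomain>
    </failoverdomains>
</rm>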
  • Everything so far only builds the cluster framework; next, configure resources for the cluster
  • First add an IP address resource to the cluster
  • Then add an LSB-style service script; httpd is used here because it is already installed on the ricci nodes, so luci can manage it directly
  • These two global resources then form a resource group; a resource group means the resources prefer to run on the same ricci node
  • Click Add Resource to add the global resources to the group; the IP resource must start first, so add it first
  • Click Add Resource again to add the httpd resource;
  • Click Submit. Because the failover domain defines priorities, the resources run on server9.com; first confirm that the IP 172.25.23.98 has been configured as an alias on its network interface,

  • then access the httpd service from a browser
  • The running state of the nodes can also be checked from the command line
  • This is the state of the apache resource group as shown in the luci management interface
  • When the ricci service on server9.com can no longer be contacted by luci, the restart policy is executed and the resources are reallocated, i.e. moved to server10.com. The failover domain defined here only contains two hosts; with more hosts, failover follows the priorities.
  • First stop the network service on server9.com (command sketched below)
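    On RHEL 6 this is simply:

[root@server9 ~]# /etc/init.d/network stop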
  • While server9.com is rebooting, the service has already moved over to server10.com
  • This is how high availability of the resource is achieved;
  • When server9.com is healthy again, the apache resource group does not move back to it; this depends on the defined policy: if failback is configured, it would move back to the original node
  • The luci management node also shows that the resources now run on server10.com;
  • Next, force a reboot of server10.com by feeding a crash parameter to the kernel, so that the service moves back to server9.com (command sketched below)
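    The usual way to crash the kernel on purpose for this kind of test is the sysrq trigger:

[root@server10 ~]# echo c > /proc/sysrq-trigger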
  • While server10.com reboots, the service again successfully moves to server9.com
  • Next, configure iSCSI to provide the shared storage
    Plan: server8.com acts as the iSCSI target and exports the disks; server9.com and server10.com act as initiators and consume them;
  • Three backing devices are provided here for the experiment
[root@server8 /]# ll -lh /srv/iscsi/
total 200M
-rw-r--r--. 1 root root 200M 2月  25 18:53 disk1.img
/dev/vdb1               1        1017      512536+  83  Linux
/dev/vdb2            1018        2034      512568   8e  Linux LVM
           
  • Change the SELinux security context of the directory
[root@server8 /]# chcon -Rv -t tgtd_var_lib_t /srv/iscsi/
changing security context of `/srv/iscsi/disk1.img'
changing security context of `/srv/iscsi/'
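# chcon does not survive a filesystem relabel; to make the context persistent,
# one would typically also run (assuming policycoreutils-python is installed):
semanage fcontext -a -t tgtd_var_lib_t '/srv/iscsi(/.*)?'
restorecon -Rv /srv/iscsi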
           
  • Create the logical volume
[root@server8 /]# pvcreate /dev/vdb2 
  Physical volume "/dev/vdb2" successfully created
[root@server8 /]# vgcreate server /dev/vdb2 
  Volume group "server" successfully created
[root@server8 /]# lvcreate -L 2G -n iscsi server
  Volume group "server" has insufficient free space ( extents):  required.
[root@server8 /]# lvcreate -l 124 -n iscsi server
  Logical volume "iscsi" created
           
  • Edit the iSCSI target configuration file
[root@server8 /]# vim /etc/tgt/targets.conf 
<target iqn.com.server8:server.target6>
    backing-store /srv/iscsi/disk1.img
    backing-store /dev/vdb1
    backing-store /dev/server/iscsi
    initiator-address /
    incominguser westos westos
    write-cache off
</target>
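# Note: initiator-address normally names a host or network allowed to log in;
# the value was lost above, a typical entry would be something like
#     initiator-address 172.25.23.0/24     (the /24 subnet is an assumption)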
           
  • Start the tgtd service
[root@server8 /]# /etc/init.d/tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@server8 /]# netstat -tlunp | grep tgt
tcp               ...:                ...:*                   LISTEN      /tgtd           
tcp               :::                     :::*                        LISTEN      /tgtd  
           
  • On the target side, check that the service started and exports everything correctly
[root@server8 /]# tgt-admin --show 
Target : iqn.com.server8:server.target6
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 
            Type: controller
            SCSI ID: IET     
            SCSI SN: beaf10
            Size:  MB, Block size: 
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 
            Type: disk
            SCSI ID: IET     
            SCSI SN: beaf11
            Size:  MB, Block size: 
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/server/iscsi
            Backing store flags: 
        LUN: 
            Type: disk
            SCSI ID: IET     
            SCSI SN: beaf12
            Size:  MB, Block size: 
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vdb1
            Backing store flags: 
        LUN: 
            Type: disk
            SCSI ID: IET     
            SCSI SN: beaf13
            Size:  MB, Block size: 
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /srv/iscsi/disk1.img
            Backing store flags: 
    Account information:
        westos
    ACL information:
        ./
           
  • On the initiators, configure /etc/iscsi/iscsid.conf like this
node.session.auth.username = westos
node.session.auth.password = westos

# To set a CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#node.session.auth.username_in = username_in
#node.session.auth.password_in = password_in

# To enable CHAP authentication for a discovery session to the target
# set discovery.sendtargets.auth.authmethod to CHAP. The default is None.
#discovery.sendtargets.auth.authmethod = CHAP

# To set a discovery session CHAP username and password for the initiator
# authentication by the target(s), uncomment the following lines:
discovery.sendtargets.auth.username = westos
discovery.sendtargets.auth.password = westos
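# If the target enforces CHAP, the authmethod lines usually have to be set as well
# (an assumption, they were not shown above):
node.session.auth.authmethod = CHAP
discovery.sendtargets.auth.authmethod = CHAP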
           
  • Start the service and check that the remote target is discovered
[root@server9 iscsi]# iscsiadm -m discovery -t sendtargets -p 172.25.23.8
Starting iscsid:                                           [  OK  ]
172.25.23.8:3260,1 iqn.com.server8:server.target6
           
  • Log in and check how the shared devices are mapped on this host
[root@server9 iscsi]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.com.server8:server.target6, portal: ,] (multiple)
Login to [iface: default, target: iqn.com.server8:server.target6, portal: ,] successful.
[root@server9 iscsi]# fdisk -l

Disk /dev/vda:  GB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical): 512 bytes /  bytes
I/O size (minimum/optimal): 512 bytes /  bytes
Disk identifier: 

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *                              Linux
Partition  does not end on cylinder boundary.
/dev/vda2                          e  Linux LVM
Partition  does not end on cylinder boundary.

Disk /dev/mapper/VolGroup-lv_root:  GB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical): 512 bytes /  bytes
I/O size (minimum/optimal): 512 bytes /  bytes
Disk identifier: 


Disk /dev/mapper/VolGroup-lv_swap:  MB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical): 512 bytes /  bytes
I/O size (minimum/optimal): 512 bytes /  bytes
Disk identifier: 


Disk /dev/sdb:  MB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical): 512 bytes /  bytes
I/O size (minimum/optimal): 512 bytes /  bytes
Disk identifier: 


Disk /dev/sda:  MB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical): 512 bytes /  bytes
I/O size (minimum/optimal): 512 bytes /  bytes
Disk identifier: 


Disk /dev/sdc:  MB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical): 512 bytes /  bytes
I/O size (minimum/optimal): 512 bytes /  bytes
Disk identifier: 
           
  • Doing the same on server10.com gives the same result
[root@server10 mnt]# fdisk -l

Disk /dev/vda:  GB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical):  bytes /  bytes
I/O size (minimum/optimal):  bytes /  bytes
Disk identifier: 

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *                              Linux
Partition  does not end on cylinder boundary.
/dev/vda2                          e  Linux LVM
Partition  does not end on cylinder boundary.

Disk /dev/mapper/VolGroup-lv_root:  GB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical):  bytes /  bytes
I/O size (minimum/optimal):  bytes /  bytes
Disk identifier: 


Disk /dev/mapper/VolGroup-lv_swap:  MB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical):  bytes /  bytes
I/O size (minimum/optimal):  bytes /  bytes
Disk identifier: 


Disk /dev/sda:  MB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical):  bytes /  bytes
I/O size (minimum/optimal):  bytes /  bytes
Disk identifier: 


Disk /dev/sdb:  MB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical):  bytes /  bytes
I/O size (minimum/optimal):  bytes /  bytes
Disk identifier: 


Disk /dev/sdc:  MB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical):  bytes /  bytes
I/O size (minimum/optimal):  bytes /  bytes
Disk identifier: 
           
For the detailed iSCSI configuration, see

http://blog.csdn.net/qq_36294875/article/details/79514247

  • Use the shared devices exported by the iSCSI target
  • First partition the disk
[root@server9 iscsi]# fdisk /dev/sda 
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier .
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (-)
p
Partition number (-): 
First cylinder (-, default ): 
Using default value 
Last cylinder, +cylinders or +size{K,M,G} (-, default ): 
Using default value 

Command (m for help): p

Disk /dev/sda:  MB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical): 512 bytes /  bytes
I/O size (minimum/optimal): 512 bytes /  bytes
Disk identifier: 

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1                                  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@server9 iscsi]# partx /dev/sda
sda   sda1  
[root@server9 iscsi]# partx /dev/sda
# 1:        62-  1015807 (  1015746 sectors,    520 MB)
# 2:         0-       -1 (        0 sectors,      0 MB)
# 3:         0-       -1 (        0 sectors,      0 MB)
# 4:         0-       -1 (        0 sectors,      0 MB)
           
  • A partition table written on one node shows up on the other node as well
[root@server10 mnt]# fdisk -l /dev/sda 

Disk /dev/sda:  MB,  bytes
 heads,  sectors/track,  cylinders
Units = cylinders of  *  =  bytes
Sector size (logical/physical):  bytes /  bytes
I/O size (minimum/optimal):  bytes /  bytes
Disk identifier: 

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1                                  Linux
           
  • Next, turn the partition into a clustered logical volume and format it
[root@server9 iscsi]# pvcreate /dev/sda1 
  dev_is_mpath: failed to get device for :
  Physical volume "/dev/sda1" successfully created
[root@server9 iscsi]# pvs
  PV         VG       Fmt  Attr PSize   PFree  
  /dev/sda1           lvm2 a--  m m
  /dev/vda2  VolGroup lvm2 a--   g       
[root@server9 iscsi]# vgcreate clustervg /dev/sda1 
  Clustered volume group "clustervg" successfully created
[root@server9 iscsi]# lvcreate -l 123 -n demo clustervg
  Logical volume "demo" created
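# The "Clustered" attribute requires clvmd to be running on every node; if it is
# not, a minimal sketch of enabling it on RHEL 6 (assuming lvm2-cluster is installed):
lvmconf --enable-cluster        # sets locking_type = 3 in /etc/lvm/lvm.conf
/etc/init.d/clvmd start
chkconfig clvmd on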
           
  • An error that may appear here
[root@server9 iscsi]# lvcreate -l 123 -n demo clustervg
  Error locking on node server10.com: Volume group for uuid not found: VzCzPUWP9RGUQGpu8KiZ50MJzodsb0KGCBPhLCOddLrJwNKKPJoYaHoLd1ovuK5k
  Failed to activate new LV.
           
  • It means the cluster lock could not be taken on the other node; here that node was logged out of its iSCSI session;
[root@server10 mnt]# iscsiadm -m node --logout
Logging out of session [sid: , target: iqn.com.server8:server.target6, portal: ,]
Logout of [sid: , target: iqn.com.server8:server.target6, portal: ,] successful.
           
  • The operations done on /dev/sda1 on server9.com are clearly visible on the other node as well
[root@server10 mnt]# pvs
  PV         VG        Fmt  Attr PSize   PFree
  /dev/sda1  clustervg lvm2 a--  492.00m    0 
  /dev/vda2  VolGroup  lvm2 a--   19.51g    0 
[root@server10 mnt]# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  VolGroup           wz--n-  19.51g    0 
  clustervg          wz--nc 492.00m    0 
[root@server10 mnt]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g                                             
  lv_swap VolGroup  -wi-ao---- 992.00m                                             
  demo    clustervg -wi-a----- 492.00m  
           
  • Then format it
[root@server10 mnt]# mkfs.ext4 /dev/clustervg/demo 
mke2fs  (-May-)
Filesystem label=
OS type: Linux
Block size= (log=)
Fragment size= (log=)
Stride= blocks, Stripe width= blocks
 inodes,  blocks
 blocks (%) reserved for the super user
First data block=
Maximum filesystem blocks=
 block groups
 blocks per group,  fragments per group
 inodes per group
Superblock backups stored on blocks: 
    , , , , , , , 

Writing inode tables: done                            
Creating journal ( blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every  mounts or
 days, whichever comes first.  Use tune2fs -c or -i to override.
           
  • Then it can be mounted
[root@server9 iscsi]# mount /dev/clustervg/demo /mnt/
[root@server9 iscsi]# cd /mnt/
[root@server9 mnt]# ls
lost+found
           
  • Both nodes can mount it, but because ext4 is not a cluster file system, file operations done on one node are not seen by the other;
  • If, while the volume is still mounted, you log out of the iSCSI session, i.e. run
[root@server9 /]# iscsiadm -m node --logout
Logging out of session [sid: , target: iqn.com.server8:server.target6, portal: ,]
Logout of [sid: , target: iqn.com.server8:server.target6, portal: ,] successful.
           
  • then after logging back in, mounting fails with the following error
[root@server9 /]# mount /dev/clustervg/demo /mnt/
mount: you must specify the filesystem type
           
  • and pvs and vgs report
[root@server9 /]# pvs
  /dev/clustervg/demo: read failed after  of  at : Input/output error
  /dev/clustervg/demo: read failed after  of  at : Input/output error
  /dev/clustervg/demo: read failed after  of  at : Input/output error
  /dev/clustervg/demo: read failed after  of  at : Input/output error
  PV         VG        Fmt  Attr PSize   PFree
  /dev/sdb1  clustervg lvm2 a--  492.00m    0 
  /dev/vda2  VolGroup  lvm2 a--   19.51g    0 
           
  • My way out of this error was to recreate the logical volume on server9.com
[root@server9 /]# pvcreate /dev/sda1 
Can't initialize physical volume "/dev/sda1" of volume group "clustervg" without -ff
[root@server9 /]# pvcreate /dev/sda1 -ff
Really INITIALIZE physical volume "/dev/sda1" of volume group "clustervg" [y/n]? y
  WARNING: Forcing physical volume creation on /dev/sda1 of volume group "clustervg"
  Physical volume "/dev/sda1" successfully created
[root@server9 /]# vgcreate server /dev/sda1 
  Clustered volume group "server" successfully created
[root@server9 /]# lvcreate -l 123 -n iscsi server
  Logical volume "iscsi" created
[root@server9 /]# mkfs.ext4 /dev/server/iscsi 
           
  • Then mount it and create the default index page
[root@server9 /]# mount /dev/server/iscsi /mnt/
[root@server9 /]# cd /mnt/
[root@server9 mnt]# vim index.html
[root@server9 mnt]# cat index.html 
luci.iscsi.server.com
[root@server9 /]# umount /mnt/
           
  • Disable the apache service; first check which host it is running on
[root@server9 /]# clustat
Cluster Status for xixihaha @ Mon Mar 12 17:30:43 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server9.com                                 1 Online, Local, rgmanager
 server10.com                                2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:apache                 server9.com                    started 
[root@server9 /]# clusvcadm -d apache
Local machine disabling service:apache...Success
           
  • Then add the file-system resource through the luci web interface
  • Then go to Service Groups and first remove the existing resources from the group,
  • then add the webdata (file system) resource first,

  • and then the httpd resource
    • The resulting change in the configuration can be seen below
<resources>
            <ip address="172.25.23.98/24" sleeptime="5"/>
            <script file="/etc/init.d/httpd" name="httpd"/>
            <fs device="/dev/server/iscsi" force_unmount="1" fsid="42848" fstype="ext4" mountpoint="/var/www/html" name="webdata" quick_status="1" self_fence="1"/>
        </resources>

           
  • Then re-enable the service
[root@server9 cluster]# clusvcadm -e apache 
Local machine trying to enable service:apache...Success
service:apache is now running on server9.com
           
  • The page the browser now serves is the content stored on the iSCSI shared storage
  • Relocating the service to the other node by hand does not interrupt browser access

  • The service has now been relocated to the other node

[root@server9 cluster]# clustat 
Cluster Status for xixihaha @ Mon Mar 12 18:04:14 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server9.com                                 1 Online, Local, rgmanager
 server10.com                                2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:apache                 server10.com                   started 
           
  • It can also be reached directly through server10.com's address
  • An error that came up: cman cannot start
[root@server9 ~]# /etc/init.d/cman start
Starting cluster: 
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager... 
Network Manager is either running or configured to run. Please disable it in the cluster.
                                                           [FAILED]
Stopping cluster: 
   Leaving fence domain...                                 [  OK  ]
   Stopping gfs_controld...                                [  OK  ]
   Stopping dlm_controld...                                [  OK  ]
   Stopping fenced...                                      [  OK  ]
   Stopping cman...                                        [  OK  ]
   Unloading kernel modules...                             [  OK  ]
   Unmounting configfs...                                  [  OK  ]
           
  • and luci cannot add node server9.com to the cluster
  • The solution: stop and disable NetworkManager,
[root@server9 ~]# service NetworkManager stop
Stopping NetworkManager daemon:                            [  OK  ]
[root@server9 ~]# chkconfig NetworkManager off
           
  • then add the cluster node again through luci
  • Switch the file system to the global (cluster) file system, GFS2
  • First remove the old logical volume, then recreate it
[root@server9 /]# lvremove /dev/server/iscsi 
  Volume Groups with the clustered attribute will be inaccessible.
Do you really want to remove active logical volume iscsi? [y/n]: y
  Logical volume "iscsi" successfully removed
[root@server9 /]# lvcreate -l 255 -n demo server
 connect() failed on local socket: Connection refused
 Internal cluster locking initialisation failed.
 WARNING: Falling back to local file-based locking.
 Volume Groups with the clustered attribute will be inaccessible.
 Logical volume "demo" created
           
  • Then create the new logical volume
[root@server9 /]# pvcreate /dev/sdb1
[root@server9 /]# vgcreate iscsi /dev/sdb1
[root@server9 /]# lvcreate -l 49 -n server iscsi
  Logical volume "server" created
[root@server9 /]# lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----  g                                             
  lv_swap VolGroup -wi-ao---- m                                             
  server  iscsi    -wi-a----- m      
           
  • Then format it as a GFS2 (global) file system
[root@server9 /]# mkfs.gfs2 -p lock_dlm -t xixihaha:mygfs2 -j  /dev/iscsi/server 
This will destroy any data on /dev/iscsi/server.
It appears to contain: symbolic link to `../dm-'

Are you sure you want to proceed? [y/n] y

Device:                    /dev/iscsi/server
Blocksize:                 
Device Size                 GB ( blocks)
Filesystem Size:            GB ( blocks)
Journals:                  
Resource Groups:           
Locking Protocol:          "lock_dlm"
Lock Table:                "xixihaha:mygfs2"
UUID:                      6bf3a490-54f7-cf03-3a80-7186d6b41e9d
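# The value after -j was lost above; for a two-node cluster it is normally -j 2
# (one journal per node that mounts the file system). More journals can be added
# later to the mounted file system with gfs2_jadd, for example:
gfs2_jadd -j 1 /var/www/html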
           
  • Then set up automatic mounting in /etc/fstab
[root@server9 /]# blkid
/dev/vda1: UUID="06e3b19e-51e5-47c4-96c7-8e2168624055" TYPE="ext4" 
/dev/vda2: UUID="OaQY3R-qzEd-16cc-wD0F-HUjd-FneW-J40cXT" TYPE="LVM2_member" 
/dev/mapper/VolGroup-lv_root: UUID="3701355f-87cb-45c7-8007-66179acd9356" TYPE="ext4" 
/dev/mapper/VolGroup-lv_swap: UUID="6d3d832a-1ff6-4bbc-833a-af04a199d2d0" TYPE="swap" 
/dev/sda1: UUID="J1G3pZ-fMxQ-ehkq-I0dA-tsmG-0iFk-amEJJU" TYPE="LVM2_member" 
/dev/sdb1: UUID="ft46ZQ-IIr6-amjn-fbgt-dt5K-hLKO-SCjJ64" TYPE="LVM2_member" 
/dev/mapper/iscsi-server: LABEL="xixihaha:mygfs2" UUID="6bf3a490-54f7-cf03-3a80-7186d6b41e9d" TYPE="gfs2"
[root@server9 /]# vim /etc/fstab
Add the line:
UUID="6bf3a490-54f7-cf03-3a80-7186d6b41e9d"     /var/www/html gfs2 _netdev  
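# The options field above was cut off; a complete entry normally looks like this
# (defaults and the trailing "0 0" are the usual values, assumed here):
UUID="6bf3a490-54f7-cf03-3a80-7186d6b41e9d"  /var/www/html  gfs2  _netdev,defaults  0 0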
           
  • The same information is visible on the other node as well
[root@server10 ~]# blkid
/dev/vda1: UUID="6831b144-7f92-4d3d-bd3f-1532ed0ab581" TYPE="ext4" 
/dev/vda2: UUID="SQfbye-9dUU-3nYg-64Rj-YMJt-mIwJ-JS8y97" TYPE="LVM2_member" 
/dev/mapper/VolGroup-lv_root: UUID="caa9d225-b786-41f3-bd7c-a0995c56868e" TYPE="ext4" 
/dev/mapper/VolGroup-lv_swap: UUID="ac5ad443-b74e-4934-ac36-b2dc2d9d2337" TYPE="swap" 
/dev/sda1: UUID="J1G3pZ-fMxQ-ehkq-I0dA-tsmG-0iFk-amEJJU" TYPE="LVM2_member" 
/dev/sdb1: UUID="ft46ZQ-IIr6-amjn-fbgt-dt5K-hLKO-SCjJ64" TYPE="LVM2_member" 
/dev/mapper/iscsi-server: LABEL="xixihaha:mygfs2" UUID="6bf3a490-54f7-cf03-3a80-7186d6b41e9d" TYPE="gfs2" 

[root@server10 ~]# vim /etc/fstab 
Add the line:
UUID="6bf3a490-54f7-cf03-3a80-7186d6b41e9d"     /var/www/html gfs2 _netdev  
           
  • On either node, add the default page to be served
[root@server9 /]# mount /dev/mapper/iscsi-server /mnt/
[root@server9 /]# cd /mnt/
[root@server9 mnt]# ls
[root@server9 mnt]# vim index.html
           
  • When starting apache at this point, the following problem appeared
Local machine trying to enable service:apache...Could not connect to resource group manager
           
  • This happens because rgmanager did not start properly; start it manually
[root@server9 /]# /etc/init.d/rgmanager status
rgmanager dead but pid file exists
[root@server9 /]# /etc/init.d/rgmanager start
Starting Cluster Service Manager:                          [  OK  ]
           

  • Then do the following on the luci management node:

  • First remove the old file system and httpd resources from the resource group,


  • then add a GFS2 (global file system) resource,


  • and after that, inside the service group, add the file system first and then the httpd resource


  • The order in which the resources are added must be correct

  • After that, the page can be reached through the browser again

  • Check the cluster node information and relocate the service manually
[root@server9 /]# clustat 
Cluster Status for xixihaha @ Tue Mar 13 14:33:30 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server9.com                                 1 Online, Local, rgmanager
 server10.com                                2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:apache                 server9.com                    started       
[root@server9 /]# clusvcadm -r apache -m server10.com
Trying to relocate service:apache to server10.com...Success
service:apache is now running on server10.com
[root@server9 /]# clustat 
Cluster Status for xixihaha @ Tue Mar 13 14:34:14 2018
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 server9.com                                 1 Online, Local, rgmanager
 server10.com                                2 Online, rgmanager

 Service Name                   Owner (Last)                   State         
 ------- ----                   ----- ------                   -----         
 service:apache                 server10.com                   started