
Linux Operations Documentation: RAID

RAID (Software)

Table of Contents

    • RAID (Software)
      • I. RAID Levels
      • II. Creating RAID Arrays
        • 1. Create RAID 0
        • 2. Create RAID 1
        • 3. Create RAID 5
      • III. Other Related Commands
        • 1. Remove a disk (disk failure)
        • 2. Add a replacement disk after a disk failure
        • 3. Stop the RAID array
        • 4. Reassemble the RAID array
        • 5. Scan RAID array information

I. RAID Levels

Level | Performance | Data redundancy | Minimum disks | Space utilization
RAID 0 (striping) | Fast reads and writes | None | 2 | 1
RAID 1 (mirroring) | Slower writes, faster reads | Tolerates 1 failed disk | 2 | 1/2
RAID 10 | Fast reads and writes | Survives failures as long as no mirror group loses all of its disks | 4 | 1/2
RAID 01 | Fast reads and writes | A whole group may fail, but not the same-numbered disks in different groups | 4 | 1/2
RAID 5 (parity) | Fast reads and writes | Tolerates only 1 failed disk | 3 | (n-1)/n
RAID 6 | Fast reads and writes | Tolerates 2 simultaneously failed disks | 4 | (n-2)/n
RAID 7 | Fast reads and writes | Tolerates 3 simultaneously failed disks | 5 | (n-3)/n

II. Creating RAID Arrays

Create mode command: mdadm -C

-l: specify the RAID level

-n: number of member devices

-a {yes|no}: automatically create the device file for the array

-c: chunk (stripe unit) size, which must be a power of 2; older mdadm releases default to 64K, current versions to 512K (as the output below shows)

-x: number of hot-spare disks (a combined example follows this list)
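
A hedged sketch that combines these options (the device names here are placeholders for illustration): create a RAID 5 array from three members plus one hot spare, with an explicit 256K chunk:

[root@localhost ~]# mdadm -C /dev/md5 -a yes -l 5 -n 3 -x 1 -c 256 /dev/sd{b1,c1,d1,e1}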

1. Create RAID 0

[root@localhost ~]# mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sd{b1,c1}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] 
md0 : active raid0 sdc1[1] sdb1[0]
      41906176 blocks super 1.2 512k chunks
      
unused devices: <none>
[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mkdir /mnt/Raid0
[root@localhost ~]# mount /dev/md0 /mnt/Raid0/
[root@localhost ~]# df -hT /mnt/Raid0/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       xfs    40G   33M   40G   1% /mnt/Raid0
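
The mount above does not survive a reboot. A minimal sketch of making it persistent, assuming the array is also recorded in /etc/mdadm.conf (see III.5 below):

[root@localhost ~]# echo "/dev/md0  /mnt/Raid0  xfs  defaults  0 0" >> /etc/fstab
[root@localhost ~]# mount -a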
           

2. Create RAID 1

[root@localhost ~]# mdadm -C /dev/md1 -a yes -l 1 -n 2 /dev/sd{b1,c1}
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] 
md1 : active raid1 sdc1[1] sdb1[0]
      20953088 blocks super 1.2 [2/2] [UU]
      [=========>...........]  resync = 46.7% (9802112/20953088) finish=0.1min speed=980211K/sec
      
unused devices: <none>
[root@localhost ~]# mkdir /mnt/Raid1
[root@localhost ~]# mkfs.xfs /dev/md1
meta-data=/dev/md1               isize=512    agcount=4, agsize=1309568 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=5238272, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mount /dev/md1 /mnt/Raid1/
[root@localhost ~]# df -hT /mnt/Raid1/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md1       xfs    20G   33M   20G   1% /mnt/Raid1
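
The [UU] flags in /proc/mdstat above mean both mirror members are in sync. While the initial resync is still running, its progress can be followed with, for example:

[root@localhost ~]# watch -n 1 cat /proc/mdstat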
           

3. Create RAID 5

[root@localhost ~]# mdadm -C /dev/md5 -a yes -l 5 -n 3 /dev/sd{b1,c1,d1}
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] 
md5 : active raid5 sdd1[3] sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [=======>.............]  recovery = 37.8% (7921536/20953088) finish=0.4min speed=528102K/sec
      
unused devices: <none>
[root@localhost ~]# mkdir /mnt/raid5
[root@localhost ~]# mkfs.xfs /dev/md5
meta-data=/dev/md5               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mount /dev/md5 /mnt/raid5/
[root@localhost ~]# df -hT /mnt/raid5/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md5       xfs    40G   33M   40G   1% /mnt/raid5
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
           Version : 1.2
     Creation Time : Fri May  8 02:04:28 2020
        Raid Level : raid5
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Fri May  8 02:06:50 2020
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : 5b706bd6:feaf4048:5d1160a3:b53e91ea
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       3       8       49        2      active sync   /dev/sdd1
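
As an aside, a RAID 5 array built this way can later be extended with an additional member. A sketch under the assumption that a spare partition /dev/sde1 of the same size exists (older mdadm versions may additionally require --backup-file, and the reshape can take a long time):

[root@localhost ~]# mdadm /dev/md5 -a /dev/sde1
[root@localhost ~]# mdadm -G /dev/md5 -n 4
[root@localhost ~]# xfs_growfs /mnt/raid5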
           

III. Other Related Commands

1. Remove a disk (disk failure)

[root@localhost Raid1]# mdadm /dev/md1 -r /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md1
[root@localhost Raid1]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri May  8 01:22:36 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Fri May  8 01:27:01 2020
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 53b4a211:634a85a6:288b86c3:9a404213
            Events : 26

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       -       0        0        1      removed
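
Note that mdadm -r can only hot-remove devices that are already failed or spare. If the disk has not actually dropped out of the array, it first has to be flagged as faulty; a minimal sketch:

[root@localhost Raid1]# mdadm /dev/md1 -f /dev/sdc1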
           

2. Add a replacement disk after a disk failure

The replacement disk should match the original as closely as possible in capacity, rotational speed, and other characteristics, and should already be partitioned (one way to prepare it is sketched below).
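
A sketch of copying the partition layout from a surviving member to the new disk, assuming MBR partition tables (for GPT disks, sgdisk is the usual tool):

[root@localhost Raid1]# sfdisk -d /dev/sdb | sfdisk /dev/sdd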

[root@localhost Raid1]# mdadm /dev/md1 -a /dev/sdd1
mdadm: added /dev/sdd1
[root@localhost Raid1]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri May  8 01:22:36 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri May  8 01:28:34 2020
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 27% complete

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 53b4a211:634a85a6:288b86c3:9a404213
            Events : 36

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       2       8       49        1      spare rebuilding   /dev/sdd1
           

3. Stop the RAID array

[root@localhost ~]# umount /dev/md1
[root@localhost ~]# df -hT /mnt/Raid1/
Filesystem              Type  Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs    17G  1.2G   16G   7% /
[root@localhost ~]# mdadm -S /dev/md1
mdadm: stopped /dev/md1
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1] 
unused devices: <none>
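
If the array is being decommissioned for good rather than just stopped (do not do this if it is to be reassembled, as in the next step), the md superblocks on the member partitions can also be wiped so they are no longer detected as RAID members:

[root@localhost ~]# mdadm --zero-superblock /dev/sd{b1,c1}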
           

4. Reassemble the RAID array

[root@localhost ~]# mdadm -AR /dev/md1 /dev/sd{b1,c1}
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri May  8 01:22:36 2020
        Raid Level : raid1
        Array Size : 20953088 (19.98 GiB 21.46 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri May  8 01:45:02 2020
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : resync

    Rebuild Status : 9% complete

              Name : localhost.localdomain:1  (local to host localhost.localdomain)
              UUID : 53b4a211:634a85a6:288b86c3:9a404213
            Events : 52

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       2       8       33        1      spare rebuilding   /dev/sdc1
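
Instead of listing the member partitions by hand, a stopped array can also be reassembled from its recorded configuration, assuming /etc/mdadm.conf contains the matching ARRAY line (see step 5 below), or directly by its UUID:

[root@localhost ~]# mdadm -A --scan
[root@localhost ~]# mdadm -A /dev/md1 --uuid=53b4a211:634a85a6:288b86c3:9a404213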
           

5. Scan RAID array information

[root@localhost ~]# mdadm -D --scan
ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=53b4a211:634a85a6:288b86c3:9a404213
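
This output is already in the format expected by /etc/mdadm.conf; appending it there lets the arrays be assembled automatically at startup, for example:

[root@localhost ~]# mdadm -D --scan >> /etc/mdadm.conf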
           
