Introduction to mdadm:
mdadm is short for "multiple devices admin" and is the standard software RAID management tool on Linux.
- mdadm can diagnose, monitor, and collect detailed array information.
- mdadm is a single integrated program rather than a collection of scattered utilities, so its RAID management commands share a common syntax.
- mdadm can perform almost all of its functions without a configuration file (and it has no default configuration file).
Linux currently implements software RAID through MD (Multiple Devices) virtual block devices: several underlying block devices are combined into one new virtual device. Striping distributes data blocks evenly across the disks to improve the virtual device's read/write performance, while redundancy algorithms protect user data from being lost entirely when one member device fails and allow the lost data to be rebuilt onto a replacement device.
MD currently supports the linear, multipath, raid0 (striping), raid1 (mirroring), raid4, raid5, raid6, and raid10 redundancy levels and layouts, and RAID arrays can also be stacked to build nested arrays such as RAID 1+0 and RAID 5+1.
Environment:
CentOS 7.5 Minimal
VMware Workstation 15
mdadm tool
Six disks: sdb sdc sdd sde sdf sdg
複制
Common options:
Option | Purpose |
---|---|
-a | Add a device to an array |
-n | Specify the number of active devices |
-l | Specify the RAID level |
-C | Create an array |
-v | Verbose output |
-f | Mark a device as faulty (simulate a failure) |
-r | Remove a device |
-Q | Show summary information |
-D | Show detailed information |
-S | Stop a RAID array |
-x | Specify the number of spare (hot-standby) disks; a spare automatically takes over when a working disk fails |
Basic mdadm command syntax:
mdadm -C -v <array device> -l <level> -n <number of disks> <member device paths>
Checking a RAID array:
cat /proc/mdstat // check array status
mdadm -D <array device> // show detailed information
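Of the options in the table, -Q is the only one not demonstrated in the experiments below; a quick sketch of its use (assuming an array /dev/md0 already exists):
mdadm -Q /dev/md0   # one-line summary: size, level, and member count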
複制
1. Prepare the virtual machine disks
(Screenshot: the six new virtual disks added in VMware Workstation.)
2. Check the newly added disks
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos-root 253:0 0 17G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 20G 0 disk // new
sdc 8:32 0 20G 0 disk // new
sdd 8:48 0 20G 0 disk // new
sde 8:64 0 20G 0 disk // new
sdf 8:80 0 20G 0 disk // new
sdg 8:96 0 10G 0 disk // new
sr0 11:0 1 906M 0 rom
複制
3. Install the mdadm tool
[root@localhost ~]# yum -y install mdadm
複制
Change the partition type:
Each disk gets a single partition whose type is set to fd (Linux raid autodetect).
****************************************
[root@localhost ~]# fdisk /dev/sdb // partition sdb
...
Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-41943039, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-41943039, default 41943039):
Using default value 41943039
Partition 1 of type Linux and of size 20 GiB is set
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'
Command (m for help): p
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xf056a1fe
Device Boot Start End Blocks Id System
/dev/sdb1 2048 41943039 20970496 fd Linux raid autodetect
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
****************************************
[root@localhost ~]# fdisk /dev/sdc // partition sdc the same way
Command (m for help): p
...
Device Boot Start End Blocks Id System
/dev/sdc1 2048 41943039 20970496 fd Linux raid autodetect
****************************************
[root@localhost ~]# fdisk /dev/sdd // partition sdd
Command (m for help): p
...
Device Boot Start End Blocks Id System
/dev/sdd1 2048 41943039 20970496 fd Linux raid autodetect
****************************************
[root@localhost ~]# fdisk /dev/sde // partition sde
Command (m for help): p
...
Device Boot Start End Blocks Id System
/dev/sde1 2048 41943039 20970496 fd Linux raid autodetect
****************************************
[root@localhost ~]# fdisk /dev/sdf // partition sdf
Command (m for help): p
...
Device Boot Start End Blocks Id System
/dev/sdf1 2048 41943039 20970496 fd Linux raid autodetect
****************************************
[root@localhost ~]# fdisk /dev/sdg
Command (m for help): p
...
Device Boot Start End Blocks Id System
/dev/sdg1 2048 20971519 10484736 fd Linux raid autodetect
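Repeating the interactive fdisk dialog six times is tedious; a non-interactive sketch using parted achieves the same layout (it assumes the disks are empty, and it destroys any existing data on them):
# one RAID-flagged primary partition spanning each disk
for d in sdb sdc sdd sde sdf sdg; do
    parted -s /dev/$d mklabel msdos mkpart primary 0% 100% set 1 raid on
done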
複制
RAID 0 experiment
1. Create the RAID 0 array
[root@localhost ~]# mdadm -C -v /dev/md0 -l 0 -n 2 /dev/sdb1 /dev/sdc1
// build sdb1 and sdc1 into a RAID 0 array at /dev/md0 (level 0, 2 devices)
[root@localhost ~]# cat /proc/mdstat // check RAID 0 status
Personalities : [raid0]
md0 : active raid0 sdc1[1] sdb1[0]
41906176 blocks super 1.2 512k chunks
unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md0 // show RAID 0 details
/dev/md0:
Version : 1.2
Creation Time : Tue Apr 21 15:55:29 2020
Raid Level : raid0
Array Size : 41906176 (39.96 GiB 42.91 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Apr 21 15:55:29 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Consistency Policy : none
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : 4b439b50:63314c34:0fb14c51:c9930745
Events : 0
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
複制
2. Create a filesystem on the array
[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0 isize=512 agcount=16, agsize=654720 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=10475520, imaxpct=25
= sunit=128 swidth=256 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=5120, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@localhost ~]# blkid /dev/md0
/dev/md0: UUID="13a0c896-5e79-451f-b6f1-b04b79c1bc40" TYPE="xfs"
複制
3. Mount the filesystem
[root@localhost ~]# mkdir /raid0 // create the mount point
[root@localhost ~]# mount /dev/md0 /raid0/ // mount /dev/md0 at /raid0
[root@localhost ~]# df -h // verify the mount
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 11G 6.7G 61% /
devtmpfs 1.1G 0 1.1G 0% /dev
tmpfs 1.1G 0 1.1G 0% /dev/shm
...
/dev/md0 40G 33M 40G 1% /raid0
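The mount does not survive a reboot on its own. A minimal persistence sketch, reusing the UUID reported by blkid above (adapt the mount point if yours differs):
mdadm --detail --scan >> /etc/mdadm.conf   # record the array so it reassembles at boot
echo "UUID=13a0c896-5e79-451f-b6f1-b04b79c1bc40 /raid0 xfs defaults 0 0" >> /etc/fstab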
複制
RAID 1 experiment
1. Create the RAID 1 array
[root@localhost ~]# mdadm -C -v /dev/md1 -l 1 -n 2 /dev/sdd1 /dev/sde1 -x 1 /dev/sdb1
// build sdd1 and sde1 into a RAID 1 array at /dev/md1 (level 1, 2 devices),
with sdb1 as a hot spare
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid0 devices=2 ctime=Tue Apr 21 15:55:29 2020
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md1 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
複制
2.檢視RAID 1狀态
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1]
md1 : active raid1 sdb1[2](S) sde1[1] sdd1[0]
20953088 blocks super 1.2 [2/2] [UU]
[==========>..........] resync = 54.4% (11407360/20953088) finish=0.7min speed=200040K/sec
unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Apr 21 16:11:16 2020
Raid Level : raid1
Array Size : 20953088 (19.98 GiB 21.46 GB)
Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Apr 21 16:12:34 2020
State : clean, resyncing
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Consistency Policy : resync
Resync Status : 77% complete
Name : localhost.localdomain:1 (local to host localhost.localdomain)
UUID : 98b76e6e:b6390011:26a822a8:3dcc4cc9
Events : 12
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 65 1 active sync /dev/sde1
2 8 17 - spare /dev/sdb1
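The resync shown above runs in the background; a small sketch for following it live until the [UU] state settles:
watch -n 1 cat /proc/mdstat   # refresh the status every second (Ctrl-C to exit)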
複制
3. Create a filesystem and mount it
[root@localhost ~]# mkfs.xfs /dev/md1
[root@localhost ~]# blkid /dev/md1
/dev/md1: UUID="18a8f33b-1bb6-43c2-8dfc-2b21a871961a" TYPE="xfs"
[root@localhost ~]# mkdir /raid1
[root@localhost ~]# mount /dev/md1 /raid1/
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 11G 6.7G 61% /
...
/dev/md1 20G 33M 20G 1% /raid1
複制
4. Simulate a disk failure
After marking the disk as failed, check the array details: /dev/sdb1 has automatically taken over for the failed /dev/sdd1.
[root@localhost ~]# mdadm /dev/md1 -f /dev/sdd1
[root@localhost ~]# mdadm -D /dev/md1 // check details
/dev/md1:
Version : 1.2
Creation Time : Tue Apr 21 16:11:16 2020
Raid Level : raid1
Array Size : 20953088 (19.98 GiB 21.46 GB)
Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Apr 21 16:29:38 2020
State : clean, degraded, recovering // rebuilding onto the spare automatically
Active Devices : 1
Working Devices : 2
Failed Devices : 1 // the failed disk
Spare Devices : 1 // number of spares
Consistency Policy : resync
Rebuild Status : 46% complete
Name : localhost.localdomain:1 (local to host localhost.localdomain)
UUID : 98b76e6e:b6390011:26a822a8:3dcc4cc9
Events : 26
Number Major Minor RaidDevice State
2 8 17 0 spare rebuilding /dev/sdb1
1 8 65 1 active sync /dev/sde1
0 8 49 - faulty /dev/sdd1
****** The spare is rebuilding in place of the failed disk; wait a few minutes, then check the array details again
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Apr 21 16:11:16 2020
Raid Level : raid1
Array Size : 20953088 (19.98 GiB 21.46 GB)
Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Apr 21 16:30:39 2020
State : clean // clean: the rebuild has completed
Active Devices : 2
Working Devices : 2
Failed Devices : 1 // the failed disk
Spare Devices : 0 // the spare count is now 0
Consistency Policy : resync
Name : localhost.localdomain:1 (local to host localhost.localdomain)
UUID : 98b76e6e:b6390011:26a822a8:3dcc4cc9
Events : 36
Number Major Minor RaidDevice State
2 8 17 0 active sync /dev/sdb1
1 8 65 1 active sync /dev/sde1
0 8 49 - faulty /dev/sdd1
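In production you would want to be told about such failures rather than polling mdadm -D. A minimal monitoring sketch (the mail address is an assumption, and mail delivery requires a working local MTA):
mdadm --monitor --scan --daemonise --mail=root@localhost   # mail an alert on Fail/SpareActive events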
複制
5. Remove the failed disk
[root@localhost ~]# mdadm -r /dev/md1 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md1
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Apr 21 16:11:16 2020
Raid Level : raid1
Array Size : 20953088 (19.98 GiB 21.46 GB)
Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Apr 21 16:38:32 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0 // no longer counted as failed, since the disk has been removed
Spare Devices : 0
Consistency Policy : resync
Name : localhost.localdomain:1 (local to host localhost.localdomain)
UUID : 98b76e6e:b6390011:26a822a8:3dcc4cc9
Events : 37
Number Major Minor RaidDevice State
2 8 17 0 active sync /dev/sdb1
1 8 65 1 active sync /dev/sde1
複制
6. Add a new disk to the RAID 1 array:
[root@localhost ~]# mdadm -a /dev/md1 /dev/sdc1
// add /dev/sdc1 to the RAID 1 array as a spare
mdadm: added /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Apr 21 16:11:16 2020
Raid Level : raid1
Array Size : 20953088 (19.98 GiB 21.46 GB)
Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Apr 21 16:40:20 2020
State : clean
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1 // the disk just added now shows up as a spare
Consistency Policy : resync
Name : localhost.localdomain:1 (local to host localhost.localdomain)
UUID : 98b76e6e:b6390011:26a822a8:3dcc4cc9
Events : 38
Number Major Minor RaidDevice State
2 8 17 0 active sync /dev/sdb1
1 8 65 1 active sync /dev/sde1
3 8 33 - spare /dev/sdc1
複制
Notes:
- A newly added disk must be the same size as the original member disks.
- If the array is short of working disks (e.g. only one disk of a RAID 1 is still working, or only two disks of a RAID 5), a newly added disk immediately becomes a working disk; if the array is healthy, the new disk becomes a hot spare. (A spare can also be promoted explicitly; see the sketch below.)
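If you would rather have the spare hold live data than sit idle, the mirror can be widened so the spare becomes a third active copy. A minimal sketch (a judgment call, not part of the original walkthrough):
mdadm --grow /dev/md1 --raid-devices=3   # 2-way mirror + spare becomes a 3-way mirror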
7. Stop a RAID array
An array must be unmounted before it can be stopped; once stopped, its device node is removed automatically.
[root@localhost ~]# umount /dev/md1
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 11G 6.7G 61% /
devtmpfs 1.1G 0 1.1G 0% /dev
tmpfs 1.1G 0 1.1G 0% /dev/shm
tmpfs 1.1G 9.7M 1.1G 1% /run
tmpfs 1.1G 0 1.1G 0% /sys/fs/cgroup
/dev/sda1 1014M 130M 885M 13% /boot
overlay 17G 11G 6.7G 61% /var/lib/docker/overlay2/2131dc663296fd193837265e88fa5c9c62b9bfd924303381cea8b4c39c652c84/merged
shm 64M 0 64M 0% /var/lib/docker/containers/436f7e6619c1805553ea71d800fd49ab08843cef6ed162acb35b4c32064ea449/mounts/shm
tmpfs 211M 0 211M 0% /run/user/0
[root@localhost ~]# mdadm -S /dev/md1
mdadm: stopped /dev/md1
[root@localhost ~]# ls /dev/md1
ls: cannot access /dev/md1: No such file or directory
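Stopping the array leaves the md superblocks on the member partitions, which is why later mdadm -C runs warn that the devices "appear to be part of a raid array". To wipe them before reuse, a sketch (this destroys the array metadata for good):
mdadm --zero-superblock /dev/sdb1 /dev/sdc1 /dev/sde1   # the former md1 members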
複制
RAID 5 experiment
1. Create the RAID 5 array
[root@localhost ~]# mdadm -C -v /dev/md5 -l 5 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1 -x 1 /dev/sde1
// build sdb1, sdc1, and sdd1 into a RAID 5 array at /dev/md5 (level 5,
3 devices), with sde1 as a hot spare
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Tue Apr 21 16:11:16 2020
mdadm: /dev/sdc1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Tue Apr 21 16:11:16 2020
mdadm: /dev/sdd1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Tue Apr 21 16:11:16 2020
mdadm: /dev/sde1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Tue Apr 21 16:11:16 2020
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md5 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
複制
2. Check the RAID 5 array
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4]
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
41906176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Tue Apr 21 16:56:09 2020
Raid Level : raid5
Array Size : 41906176 (39.96 GiB 42.91 GB)
Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue Apr 21 16:59:56 2020
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1 // one spare
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 422363cb:e7fd4d3a:aaf61344:9bdd00b3
Events : 18
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
3 8 65 - spare /dev/sde1
複制
3. Simulate a disk failure
[root@localhost ~]# mdadm -f /dev/md5 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md5 // sdb1 is now marked as failed
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Tue Apr 21 16:56:09 2020
Raid Level : raid5
Array Size : 41906176 (39.96 GiB 42.91 GB)
Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue Apr 21 17:04:36 2020
State : clean, degraded, recovering // the spare is taking over automatically
Active Devices : 2
Working Devices : 3
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Rebuild Status : 16% complete
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 422363cb:e7fd4d3a:aaf61344:9bdd00b3
Events : 22
Number Major Minor RaidDevice State
3 8 65 0 spare rebuilding /dev/sde1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
0 8 17 - faulty /dev/sdb1
****** After the rebuild finishes, check again:
[root@localhost ~]# mdadm -D /dev/md5
/dev/md5:
Version : 1.2
Creation Time : Tue Apr 21 16:56:09 2020
Raid Level : raid5
Array Size : 41906176 (39.96 GiB 42.91 GB)
Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue Apr 21 17:07:58 2020
State : clean // the spare has fully taken over
Active Devices : 3
Working Devices : 3
Failed Devices : 1 // one failed disk
Spare Devices : 0 // no spares left, since the spare was promoted
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : localhost.localdomain:5 (local to host localhost.localdomain)
UUID : 422363cb:e7fd4d3a:aaf61344:9bdd00b3
Events : 37
Number Major Minor RaidDevice State
3 8 65 0 active sync /dev/sde1
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
0 8 17 - faulty /dev/sdb1
複制
4. Create a filesystem and mount it (three 20 GB members at RAID 5 leave (3 - 1) × 20 GB = 40 GB usable, as df confirms below)
[root@localhost ~]# mkdir /raid5
[root@localhost ~]# mkfs.xfs /dev/md5
[root@localhost ~]# mount /dev/md5 /raid5/
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos-root 17G 11G 6.7G 61% /
...
/dev/md5 40G 33M 40G 1% /raid5
複制
5. Stop the array
[root@localhost ~]# mdadm -S /dev/md5
mdadm: stopped /dev/md5
複制
RAID 6 experiment
1. Create the RAID 6 array
[root@localhost ~]# mdadm -C -v /dev/md6 -l 6 -n 4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 -x 2 /dev/sdf1 /dev/sdg1
// build sdb1, sdc1, sdd1, and sde1 into a RAID 6 array at /dev/md6 (level 6,
4 devices), with sdf1 and sdg1 as hot spares
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid6 devices=4 ctime=Tue Apr 21 17:37:15 2020
mdadm: partition table exists on /dev/sdb1 but will be lost or
meaningless after creating array
mdadm: /dev/sdc1 appears to be part of a raid array:
level=raid6 devices=4 ctime=Tue Apr 21 17:37:15 2020
mdadm: /dev/sdd1 appears to be part of a raid array:
level=raid6 devices=4 ctime=Tue Apr 21 17:37:15 2020
mdadm: /dev/sde1 appears to be part of a raid array:
level=raid6 devices=4 ctime=Tue Apr 21 17:37:15 2020
mdadm: /dev/sdf1 appears to be part of a raid array:
level=raid6 devices=4 ctime=Tue Apr 21 17:37:15 2020
mdadm: size set to 10467328K
mdadm: largest drive (/dev/sdb1) exceeds size (10467328K) by more than 1%
Continue creating array? y
mdadm: Fail create md6 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.
複制
2. Check the RAID 6 array
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5](S) sdf1[4](S) sde1[3] sdd1[2] sdc1[1] sdb1[0]
20934656 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
Version : 1.2
Creation Time : Tue Apr 21 17:54:25 2020
Raid Level : raid6
Array Size : 20934656 (19.96 GiB 21.44 GB)
Used Dev Size : 10467328 (9.98 GiB 10.72 GB)
Raid Devices : 4
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Tue Apr 21 17:58:16 2020
State : clean
Active Devices : 4
Working Devices : 6
Failed Devices : 0
Spare Devices : 2
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : localhost.localdomain:6 (local to host localhost.localdomain)
UUID : 9a5c470e:eb95d0b4:2a213dac:f0fd3315
Events : 17
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
4 8 81 - spare /dev/sdf1
5 8 97 - spare /dev/sdg1
複制
3. Simulate disk failures (two disks at once)
[root@localhost ~]# mdadm -f /dev/md6 /dev/sdb1 // fail sdb1
mdadm: set /dev/sdb1 faulty in /dev/md6
[root@localhost ~]# mdadm -f /dev/md6 /dev/sdc1 // fail sdc1
mdadm: set /dev/sdc1 faulty in /dev/md6
[root@localhost ~]# mdadm -D /dev/md6 // check RAID 6 array status
/dev/md6:
Version : 1.2
Creation Time : Tue Apr 21 17:54:25 2020
Raid Level : raid6
Array Size : 20934656 (19.96 GiB 21.44 GB)
Used Dev Size : 10467328 (9.98 GiB 10.72 GB)
Raid Devices : 4
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Tue Apr 21 18:01:46 2020
State : clean, degraded, recovering // both spares are taking over
Active Devices : 2
Working Devices : 4
Failed Devices : 2 // two failed disks
Spare Devices : 2 // two spares
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Rebuild Status : 19% complete
Name : localhost.localdomain:6 (local to host localhost.localdomain)
UUID : 9a5c470e:eb95d0b4:2a213dac:f0fd3315
Events : 29
Number Major Minor RaidDevice State
5 8 97 0 spare rebuilding /dev/sdg1
4 8 81 1 spare rebuilding /dev/sdf1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
0 8 17 - faulty /dev/sdb1
1 8 33 - faulty /dev/sdc1
****** After the rebuild finishes, check again:
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
Version : 1.2
Creation Time : Tue Apr 21 17:54:25 2020
Raid Level : raid6
Array Size : 20934656 (19.96 GiB 21.44 GB)
Used Dev Size : 10467328 (9.98 GiB 10.72 GB)
Raid Devices : 4
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Tue Apr 21 18:04:02 2020
State : clean // both spares have taken over
Active Devices : 4
Working Devices : 4
Failed Devices : 2
Spare Devices : 0 // no spares left
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : resync
Name : localhost.localdomain:6 (local to host localhost.localdomain)
UUID : 9a5c470e:eb95d0b4:2a213dac:f0fd3315
Events : 43
Number Major Minor RaidDevice State
5 8 97 0 active sync /dev/sdg1
4 8 81 1 active sync /dev/sdf1
2 8 49 2 active sync /dev/sdd1
3 8 65 3 active sync /dev/sde1
0 8 17 - faulty /dev/sdb1
1 8 33 - faulty /dev/sdc1
複制
4. Create a filesystem and mount it
Same procedure as before; a sketch follows.
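For completeness, a minimal sketch of "same procedure as before" applied to md6 (the /raid6 mount point is an assumption):
mkfs.xfs /dev/md6      # create the filesystem
mkdir /raid6           # create the mount point
mount /dev/md6 /raid6/ # mount it, then verify with df -h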
5. Stop the array
[root@localhost ~]# mdadm -S /dev/md6
mdadm: stopped /dev/md6
複制
RAID 10 experiment
RAID 1+0 is built here from two RAID 1 arrays (a direct alternative is sketched further below).
1. Create the two RAID 1 arrays
[root@localhost ~]# mdadm -C -v /dev/md1 -l 1 -n 2 /dev/sdb1 /dev/sdc1
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Wed Apr 22 00:47:05 2020
mdadm: partition table exists on /dev/sdb1 but will be lost or
meaningless after creating array
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/sdc1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Wed Apr 22 00:47:05 2020
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md1 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@localhost ~]# mdadm -C -v /dev/md0 -l 1 -n 2 /dev/sdd1 /dev/sde1
mdadm: /dev/sdd1 appears to be part of a raid array:
level=raid6 devices=4 ctime=Tue Apr 21 17:54:25 2020
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/sde1 appears to be part of a raid array:
level=raid6 devices=4 ctime=Tue Apr 21 17:54:25 2020
mdadm: size set to 20953088K
Continue creating array? y
mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
複制
2. Check the two RAID 1 arrays
[root@localhost ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Wed Apr 22 00:48:19 2020
Raid Level : raid1
Array Size : 20953088 (19.98 GiB 21.46 GB)
***** the first RAID 1 provides 20 GB *****
Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Apr 22 00:50:21 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : localhost.localdomain:1 (local to host localhost.localdomain)
UUID : 95cd9b90:8dcbbbef:7974f3aa:d38d7f5b
Events : 17
Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 33 1 active sync /dev/sdc1
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Apr 22 00:48:52 2020
Raid Level : raid1
Array Size : 20953088 (19.98 GiB 21.46 GB)
***** the second RAID 1 provides 20 GB *****
Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Apr 22 00:50:44 2020
State : clean, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Resync Status : 96% complete
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : ae813945:1174d6cb:ad1e3a33:1303a7d3
Events : 15
Number Major Minor RaidDevice State
0 8 49 0 active sync /dev/sdd1
1 8 65 1 active sync /dev/sde1
複制
3. Create the RAID 1+0 array
[root@localhost ~]# mdadm -C -v /dev/md10 -l 0 -n 2 /dev/md1 /dev/md0
mdadm: chunk size defaults to 512K
mdadm: Fail create md10 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md10 started.
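Nesting two RAID 1 arrays under a RAID 0 is the classic construction, but mdadm can also build the equivalent in one step with its native raid10 level; a sketch (not what this walkthrough uses):
mdadm -C -v /dev/md10 -l 10 -n 4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1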
複制
4. Check the RAID 1+0 array
[root@localhost ~]# mdadm -D /dev/md10
/dev/md10:
Version : 1.2
Creation Time : Wed Apr 22 00:55:41 2020
Raid Level : raid0
Array Size : 41871360 (39.93 GiB 42.88 GB)
***** the resulting RAID 1+0 provides 40 GB *****
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Apr 22 00:55:41 2020
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Consistency Policy : none
Name : localhost.localdomain:10 (local to host localhost.localdomain)
UUID : 09a95fcb:c9a2ec94:4461c81e:a9a65c2f
Events : 0
Number Major Minor RaidDevice State
0 9 1 0 active sync /dev/md1
1 9 0 1 active sync /dev/md0
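To put the nested array to use, the same format-and-mount steps apply; a closing sketch (the /raid10 mount point is an assumption). Recording all three arrays matters for boot-time assembly, since md1 and md0 must exist before md10 can start:
mkfs.xfs /dev/md10
mkdir /raid10
mount /dev/md10 /raid10/
mdadm --detail --scan >> /etc/mdadm.conf   # records md0, md1, and md10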
複制