1. Experiment Overview:
1. RAID 5 is a storage solution that balances performance, data safety, and cost; it can be seen as a compromise between RAID 0 and RAID 1. Like RAID 4, RAID 5 distributes data across the member disks in blocks. Instead of mirroring the data, RAID 5 stores the data together with its parity information across all member disks, with the parity for a given stripe kept on a different disk than the data it protects. When one disk in a RAID 5 array fails, the lost data is reconstructed from the remaining data and the corresponding parity. A RAID 5 array requires at least 3 disks. This experiment is meant to deepen the understanding of how RAID 5 works.
2. mdadm is the Linux command for creating and managing software RAID; it is a modal (mode-based) command. Since modern servers usually ship with inexpensive hardware RAID controllers, and software RAID has inherent drawbacks (it cannot be used for the boot partition, and its parity calculations are done by the CPU, consuming CPU cycles), it is rarely used in production. It remains well suited for learning and understanding how RAID works, which is the purpose of the following experiment.
2. Environment:
VMware Workstation 12.0.0 build-2985596, CentOS 6.9 64-bit
3. Preparation:
1. Before powering on the CentOS 6.9 64-bit guest in VMware Workstation, add 4 new virtual disks of 20GB, 25GB, 30GB and 35GB. To add a disk: open the menu "VM" --> "Settings (Ctrl+D)" --> "Hardware" --> "Add..." --> hardware type: "Hard Disk" --> "Next" --> disk type: keep the default --> "Next" --> select "Create a new virtual disk" --> "Next" --> specify the maximum disk size (e.g. 20 for 20GB), leave "Allocate all disk space now" unchecked, check "Store virtual disk as a single file" --> "Next" --> disk file: keep the default --> click "Finish". Repeat the same steps to create the remaining 3 virtual disks.
2. Power on the CentOS guest. If a disk was added while the system was running, you can make the kernel detect it without rebooting with: echo '- - -' > /sys/class/scsi_host/host2/scan. Then use lsblk to list the block devices present:
[root@centos6 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 3.7G 0 rom /media/CentOS_6.9_Final # mounted installation disc, ignore
sda 8:0 0 200G 0 disk # disk holding the root filesystem, ignore
├─sda1 8:1 0 1G 0 part /boot
├─sda2 8:2 0 100G 0 part /
├─sda3 8:3 0 50G 0 part /app
├─sda4 8:4 0 1K 0 part
└─sda5 8:5 0 1G 0 part [SWAP]
sdb 8:16 0 20G 0 disk # first newly added disk
sdc 8:32 0 25G 0 disk # second newly added disk
sdd 8:48 0 30G 0 disk # third newly added disk
sde 8:64 0 35G 0 disk # fourth newly added disk
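The rescan command above only targets host2, but the SCSI host number is not fixed across VMs. A sketch that loops over every host (assuming a Linux guest with sysfs mounted) is more robust:

```shell
#!/bin/sh
# Rescan every SCSI host so hot-added virtual disks appear without a reboot.
for scan in /sys/class/scsi_host/host*/scan; do
    [ -w "$scan" ] || continue      # skip if the path is absent or unwritable
    echo '- - -' > "$scan"
done
```

After the loop, lsblk should list the new disks (sdb through sde in this lab).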
4. Procedure:
1. Planning: sdb, sdc and sdd will form the RAID 5 array, with sde as a hot spare. Every member must contribute the same amount of space, and the array's per-member capacity is bounded by the smallest disk (20G), so sdc, sdd and sde each need a 20G partition. The session below creates a 20G primary partition on sdc; repeat the same steps on sdd and sde.
[root@centos6 ~]# fdisk /dev/sdc # partition disk sdc
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xe691aa35.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').
Command (m for help): m # show the help menu
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)
Command (m for help): n # create a new partition
Command action
e extended
p primary partition (1-4)
p # choose primary partition
Partition number (1-4): 1 # partition number
First cylinder (1-3263, default 1): # press Enter to accept the default start
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-3263, default 3263): +20G # size of the new partition
Command (m for help): p # print the partition table
Disk /dev/sdc: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe691aa35
Device Boot Start End Blocks Id System
/dev/sdc1 1 2612 20980858+ 83 Linux
Command (m for help): t # change the partition's system id
Selected partition 1
Hex code (type L to list codes): fd # type L to list ids; fd is Linux raid autodetect
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w # write the table and exit
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
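The interactive session above can also be scripted for the remaining disks. A sketch, assuming blank disks sdd and sde; the answer sequence mirrors the session above exactly (new, primary, partition 1, default start, +20G, type fd, write):

```shell
#!/bin/sh
# Answers fed to fdisk, one per line; the blank line accepts the
# default first cylinder.
FDISK_ANSWERS='n
p
1

+20G
t
fd
w
'
for dev in /dev/sdd /dev/sde; do
    [ -b "$dev" ] || continue                 # skip disks that are not present
    printf '%s' "$FDISK_ANSWERS" | fdisk "$dev"
    partx -a "$dev"                           # make the kernel re-read the table
done
```

Scripting fdisk this way is fragile across fdisk versions, so verify the result with lsblk afterwards.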
2. After the partitions are created on sdc, sdd and sde, lsblk shows the layout below. If a new partition does not appear, the kernel may not have re-read the partition table; sync it with partx -a /dev/DEVICE, e.g. partx -a /dev/sdc for disk sdc.
[root@centos6 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 3.7G 0 rom /media/CentOS_6.9_Final
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
├─sda2 8:2 0 100G 0 part /
├─sda3 8:3 0 50G 0 part /app
├─sda4 8:4 0 1K 0 part
└─sda5 8:5 0 1G 0 part [SWAP]
sdb 8:16 0 20G 0 disk
sdc 8:32 0 25G 0 disk
└─sdc1 8:33 0 20G 0 part
sdd 8:48 0 30G 0 disk
└─sdd1 8:49 0 20G 0 part
sde 8:64 0 35G 0 disk
└─sde1 8:65 0 20G 0 part
3. Create the RAID 5 array with mdadm.
[root@centos6 ~]# mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sd{b,c1,d1,e1} # see the notes section for a breakdown of this command
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
4. Inspect the new array with mdadm -D /dev/md0; cat /proc/mdstat shows similar information.
[root@centos6 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Jun 19 21:09:04 2017
Raid Level : raid5
Array Size : 41910272 (39.97 GiB 42.92 GB)
Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon Jun 19 21:10:51 2017
State : clean
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : centos6.xh.com:0 (local to host centos6.xh.com)
UUID : 07feea7a:f29f82fd:a59063c3:6b52657a
Events : 18
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
3 8 65 - spare /dev/sde1
5. Save the array's configuration to a config file so it is not lost across reboots.
[root@centos6 ~]# ll /etc/mdadm.conf # mdadm.conf does not exist by default; it must be created
ls: cannot access /etc/mdadm.conf: No such file or directory
[root@centos6 ~]# mdadm -Ds /dev/md0 > /etc/mdadm.conf # save md0's configuration to /etc/mdadm.conf
[root@centos6 ~]# cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 spares=1 name=centos6.xh.com:0 UUID=07feea7a:f29f82fd:a59063c3:6b52657a
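With the configuration saved, you can verify that the array re-assembles from /etc/mdadm.conf after being stopped. A sketch, meaningful only on the lab VM and only while the filesystem is unmounted:

```shell
#!/bin/sh
# Stop md0 and re-assemble it from /etc/mdadm.conf; only attempt this
# on a system that actually has the array.
if [ -e /dev/md0 ]; then
    mdadm -S /dev/md0      # stop the array (fails if it is still mounted)
    mdadm -A -s            # assemble all arrays listed in /etc/mdadm.conf
    mdadm -D /dev/md0 | grep 'State :'
fi
```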
6. Create a filesystem on the array.
[root@centos6 ~]# mkfs.ext4 /dev/md0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
...(output omitted)...
This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
7. Verify the filesystem created on md0.
[root@centos6 ~]# blkid
/dev/sda1: UUID="50d74642-6b7c-4df1-bb9c-d90f7a9849a3" TYPE="ext4"
/dev/sda2: UUID="11baefb7-52b3-4d71-a4b3-d054606c452b" TYPE="ext4"
/dev/sda3: UUID="f0f08ce9-77b9-4b4f-ba27-63583b24efea" TYPE="ext4"
/dev/sda5: UUID="50d2c8e4-c1bf-49f3-b925-e5a602967bdc" TYPE="swap"
/dev/sdb: UUID="07feea7a-f29f-82fd-a590-63c36b52657a" UUID_SUB="5cb22a99-15bb-1e91-d2fc-faf392669555" LABEL="centos6.xh.com:0" TYPE="linux_raid_member"
/dev/sdc1: UUID="07feea7a-f29f-82fd-a590-63c36b52657a" UUID_SUB="97571e1a-e7b3-f665-42f5-42701c111dae" LABEL="centos6.xh.com:0" TYPE="linux_raid_member"
/dev/sdd1: UUID="07feea7a-f29f-82fd-a590-63c36b52657a" UUID_SUB="e31d1b2b-8cdd-1c83-ff90-69ec0db5ccdf" LABEL="centos6.xh.com:0" TYPE="linux_raid_member"
/dev/sde1: UUID="07feea7a-f29f-82fd-a590-63c36b52657a" UUID_SUB="373332ef-e2c8-daac-8e5f-84d73b76cddb" LABEL="centos6.xh.com:0" TYPE="linux_raid_member"
/dev/md0: UUID="0e54fa3c-446f-4da4-a0fb-4f9dd927073c" TYPE="ext4" # the newly created ext4 filesystem on md0
8. Mount the filesystem created on /dev/md0 (if the mount point does not exist yet, create it first with mkdir /mnt/raid5).
[root@centos6 ~]# mount /dev/md0 /mnt/raid5
[root@centos6 ~]# vim /etc/fstab # record the mount in the config file
[root@centos6 ~]# tail -n 1 /etc/fstab
UUID=0e54fa3c-446f-4da4-a0fb-4f9dd927073c /mnt/raid5 ext4 defaults 0 0 # appended as the last line of /etc/fstab; get md0's UUID from blkid (the device name also works, but is not recommended)
[root@centos6 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 99G 4.8G 89G 6% /
tmpfs 491M 76K 491M 1% /dev/shm
/dev/sda3 50G 1.1G 46G 3% /app
/dev/sda1 976M 35M 891M 4% /boot
/dev/sr0 3.7G 3.7G 0 100% /media/CentOS_6.9_Final
/dev/md0 40G 48M 38G 1% /mnt/raid5
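The new fstab entry can be verified without rebooting: mount -a mounts everything listed in /etc/fstab that is not already mounted, so a bad entry surfaces immediately. A sketch for this lab's layout:

```shell
#!/bin/sh
# Re-mount md0 via /etc/fstab to prove the entry works before the next boot.
if [ -b /dev/md0 ]; then
    umount /mnt/raid5      # drop the manual mount first
    mount -a               # mount everything listed in /etc/fstab
    df -h /mnt/raid5       # confirm md0 came back via the fstab entry
fi
```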
9. Use the filesystem.
[root@centos6 ~]# cp /etc/man.config /mnt/raid5
[root@centos6 ~]# ls -l /mnt/raid5
total 24
drwx------. 2 root root 16384 Jun 19 21:35 lost+found
-rw-r--r--. 1 root root 4940 Jun 19 21:46 man.config
[root@centos6 ~]# head -n 5 /mnt/raid5/man.config
#
# Generated automatically from man.conf.in by the
# configure script.
#
# man.conf from man-1.6f
10. Simulate a failure of disk /dev/sdc in software and observe how md0 behaves.
[root@centos6 ~]# mdadm -D /dev/md0 # md0's state before sdc fails
/dev/md0:
...(output omitted)...
Update Time : Tue Jun 20 07:23:30 2017
State : clean
Active Devices : 3 # 3 active devices
Working Devices : 4 # 4 working devices
Failed Devices : 0 # no failed devices
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : centos6.xh.com:0 (local to host centos6.xh.com)
UUID : 07feea7a:f29f82fd:a59063c3:6b52657a
Events : 18
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 33 1 active sync /dev/sdc1
4 8 49 2 active sync /dev/sdd1
3 8 65 - spare /dev/sde1 # /dev/sde1 is on standby
[root@centos6 ~]# mdadm /dev/md0 -f /dev/sdc1 # simulate a failure of /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0
[root@centos6 ~]# mdadm -D /dev/md0 # md0's state after /dev/sdc1 failed
/dev/md0:
...(output omitted)...
Update Time : Tue Jun 20 07:40:55 2017
State : clean, degraded, recovering # running degraded, recovery in progress
Active Devices : 2 # 2 active devices
Working Devices : 3 # 3 working devices
Failed Devices : 1 # 1 failed device
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 42% complete # md0 is being rebuilt
Name : centos6.xh.com:0 (local to host centos6.xh.com)
UUID : 07feea7a:f29f82fd:a59063c3:6b52657a
Events : 26
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
3 8 65 1 spare rebuilding /dev/sde1 # sde1 switched from spare to rebuilding
4 8 49 2 active sync /dev/sdd1
1 8 33 - faulty /dev/sdc1 # failed device sdc1
11. Remove sdc1 from the array md0.
[root@centos6 ~]# mdadm /dev/md0 -r /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md0
[root@centos6 ~]# mdadm -D /dev/md0
/dev/md0:
...(output omitted)...
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
3 8 65 1 active sync /dev/sde1
4 8 49 2 active sync /dev/sdd1
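mdadm's manage mode accepts several operations in one invocation, so the fail and remove steps above can also be combined. A sketch using this lab's device names:

```shell
#!/bin/sh
# Mark sdc1 faulty and hot-remove it from md0 in a single command,
# equivalent to the separate -f and -r steps shown above.
if [ -e /dev/md0 ] && [ -b /dev/sdc1 ]; then
    mdadm /dev/md0 -f /dev/sdc1 -r /dev/sdc1
    mdadm -D /dev/md0 | grep -A 10 'Number'   # show the device table afterwards
fi
```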
12. Simulate a physical disk failure: shut down the VM, remove the 20G virtual disk (the whole-disk member sdb), and power the VM back on. After booting, inspect md0:
[root@centos6 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 99G 5.8G 88G 7% /
tmpfs 491M 76K 491M 1% /dev/shm
/dev/sda3 50G 2.1G 45G 5% /app
/dev/sda1 976M 35M 891M 4% /boot
/dev/md0 40G 1.1G 37G 3% /mnt/raid5
[root@centos6 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Jun 19 21:09:04 2017
Raid Level : raid5
Array Size : 41910272 (39.97 GiB 42.92 GB)
Used Dev Size : 20955136 (19.98 GiB 21.46 GB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Jun 20 08:20:17 2017
State : clean, degraded # running degraded
Active Devices : 2
Working Devices : 2 # only 2 disks are working
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : centos6.xh.com:0 (local to host centos6.xh.com)
UUID : 07feea7a:f29f82fd:a59063c3:6b52657a
Events : 40
Number Major Minor RaidDevice State
0 0 0 0 removed
3 8 49 1 active sync /dev/sdd1
4 8 33 2 active sync /dev/sdc1
[root@centos6 ~]# cd /mnt/raid5/
[root@centos6 raid5]# ls # md0 is still accessible
bigfile lost+found man.config
[root@centos6 raid5]# head -n 5 man.config
#
# Generated automatically from man.conf.in by the
# configure script.
#
# man.conf from man-1.6f
13. Find a free disk and add it to the RAID.
[root@centos6 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 3.7G 0 rom /media/CentOS_6.9_Final
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
├─sda2 8:2 0 100G 0 part /
├─sda3 8:3 0 50G 0 part /app
├─sda4 8:4 0 1K 0 part
└─sda5 8:5 0 1G 0 part [SWAP]
sdb 8:16 0 25G 0 disk # compared with the earlier lsblk output, note that device names are not stable identifiers; they change as devices are added or removed and across reboots
└─sdb1 8:17 0 20G 0 part # partition sdb1 is free
sdc 8:32 0 30G 0 disk
└─sdc1 8:33 0 20G 0 part
└─md0 9:0 0 40G 0 raid5 /mnt/raid5
sdd 8:48 0 35G 0 disk
└─sdd1 8:49 0 20G 0 part
└─md0 9:0 0 40G 0 raid5 /mnt/raid5
[root@centos6 ~]# mdadm /dev/md0 -a /dev/sdb1 # add sdb1 to md0
mdadm: added /dev/sdb1
[root@centos6 ~]# cat /proc/mdstat # check md0's state
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb1[5] sdc1[4] sdd1[3]
41910272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [_UU]
[==>..................] recovery = 10.0% (2103556/20955136) finish=1.4min speed=210355K/sec # md0 is being rebuilt
unused devices: <none>
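Instead of polling /proc/mdstat by hand, a script can block until the recovery finishes; mdadm -W (--wait) returns once any resync, recovery, or reshape on the device is done. A sketch:

```shell
#!/bin/sh
# Wait for the rebuild on md0 to complete, then show its final state.
if [ -e /dev/md0 ]; then
    mdadm -W /dev/md0            # --wait: blocks while recovery is running
    grep -A 2 '^md0' /proc/mdstat
fi
```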
5. Notes:
1. A brief guide to the mdadm command:
mdadm: a modal tool
Syntax: mdadm [mode] <raiddevice> [options] <component-devices>
Supported RAID levels: LINEAR, RAID0, RAID1, RAID4, RAID5, RAID6, RAID10
Modes:
Create: -C
Assemble: -A
Monitor: -F
Manage: -f, -r, -a
<raiddevice>: /dev/md#
<component-devices>: any block devices
-C: create mode
-n #: build the RAID from # member devices
-l #: RAID level to create
-a {yes|no}: automatically create the device file for the RAID device
-c CHUNK_SIZE: chunk size
-x #: number of spare disks
-D: show detailed RAID information:
mdadm -D /dev/md#
Manage mode:
-f: mark a disk as faulty
-a: add a disk
-r: remove a disk
Watch md status:
cat /proc/mdstat