
Setting up software RAID on CentOS

System: CentOS 5.1, running in a VM.

[root@localhost ~]# yum install -y mdadm

[root@localhost ~]# fdisk /dev/hdb

The number of cylinders for this disk is set to 17753.

There is nothing wrong with that, but this is larger than 1024,

and could in certain setups cause problems with:

1) software that runs at boot time (e.g., old versions of LILO)

2) booting and partitioning software from other OSs

   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n  ----- create a new partition

Command action

   e   extended

   p   primary partition (1-4)

p ------ primary partition

Partition number (1-4): 1 --- any number will do

First cylinder (1-17753, default 1):

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-17753, default 17753):

Using default value 17753   ---- just press Enter

Command (m for help): t ------ set the partition type (L lists all type codes)

Hex code (type L to list codes): fd

Changed system type of partition 1 to fd (Linux raid autodetect) -- type fd, because this partition will be a RAID member

Command (m for help): w  ----------- write the table to disk

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.

[root@localhost ~]# fdisk /dev/hdd   -- same steps as above
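The interactive fdisk dialogue above can also be scripted. A minimal sketch, assuming sfdisk is available (it ships with util-linux): the loop only prints the commands so they can be reviewed first; run each printed line as root to actually partition.

```shell
# Print an sfdisk command per member disk that would create a single
# whole-disk partition of type fd (Linux raid autodetect).
# In sfdisk input, ',,fd' means: default start, maximum size, type fd.
for disk in /dev/hdb /dev/hdd; do
    printf "echo ',,fd' | sfdisk %s\n" "$disk"
done
```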

Then create the RAID array with mdadm:

[root@localhost ~]# mdadm -C /dev/md0 -l0 -n2 /dev/hd[bd]1   -- create RAID 0

This is the short form; the full command is: mdadm -C /dev/md0 --level=raid0 --raid-devices=2 /dev/hdb1 /dev/hdd1

[root@localhost ~]# mkfs.ext3 /dev/md0   -- format it

[root@localhost ~]# mkdir -p /mnt/raid0

[root@localhost ~]# mount /dev/md0 /mnt/raid0   -- mount it

[root@localhost ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/hda1             7.3G  1.8G  5.2G  26% /

tmpfs                  94M     0   94M   0% /dev/shm

/dev/md0               16G  173M   15G   2% /mnt/raid0

cat /proc/mdstat

To check the RAID status in more detail:

mdadm -D /dev/md0
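For scripting, the bracketed status field in /proc/mdstat (e.g. [UU]) is enough to tell whether all members are up. A small sketch; md_healthy is a hypothetical helper, not part of mdadm:

```shell
# Return success iff the mdstat text passed in shows no failed slot:
# a '_' inside the status brackets (e.g. [_UU]) marks a missing member.
md_healthy() {
    echo "$1" | grep -Eo '\[[U_]+\]' | grep -qv '_'
}

# Typical use on a live system (reads /proc/mdstat):
# md_healthy "$(cat /proc/mdstat)" && echo "all members up"
```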

--------------------------------------------------------------------------------------------------------------------------

1、確定你的 2 塊硬碟一模一樣

2、fdisk 對兩塊硬碟分區,全部隻分一個區即可,分區辨別用 fd

3、假如你的系統硬碟是 sda,兩塊硬碟分别是 sdb 和 sdc

# mdadm -C /dev/md0 -l1 -n2 /dev/sdb1 /dev/sdc1

# mdadm -Ds > /etc/mdadm.conf

# mkfs.ext3 /dev/md0

# mount /dev/md0 /mnt/raid

4. Edit /etc/fstab so the array is mounted automatically at boot.
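For step 4, an illustrative /etc/fstab line (device and mount point as used above; adjust the mount point and options to taste):

```
/dev/md0    /mnt/raid    ext3    defaults    0 0
```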

------------------------------------------------------------------------------------------------------------------------------

Command to create a RAID 5 array (three active disks plus one hot spare):

[root@localhost /]# mdadm -C -v /dev/md0 -l5 -n3 /dev/sda1 /dev/sdb1 /dev/sdc1 -x1 /dev/sdd1
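A quick sanity check on the resulting capacity: RAID 5 with n active members stores n-1 members' worth of data, and the -x1 hot spare contributes nothing until it is pulled in. Using the per-member size that appears in the mdadm -D output later in this article:

```shell
# RAID 5 usable size = (active members - 1) * member size.
device_blocks=8385792    # per-member size in 1K blocks (from mdadm -D)
members=3                # active devices (-n3); the -x1 spare adds nothing
raid5_blocks=$(( (members - 1) * device_blocks ))
echo "$raid5_blocks"     # 16771584, the array size mdstat reports
```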

------------------------------------------------------------------------------------------------------------------------------- 

I. Stopping the RAID

[root@localhost media]# mdadm -S /dev/md10

mdadm: stopped /dev/md10

Starting and stopping:

Before stopping, umount the filesystem first, then run mdadm -S /dev/md0.

To start (assemble) it: [root@localhost /]# mdadm -A /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

If a configuration file has already been created, the array can be started with mdadm -As /dev/md0.
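The configuration file in question is /etc/mdadm.conf, created earlier with mdadm -Ds > /etc/mdadm.conf. An illustrative hand-written equivalent for the four-disk RAID 5; the UUID shown is the one from the mdadm -D output in this article, and yours will differ:

```
DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=63cb965b:79486986:d389c551:67677f20
```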


II. Removing the member disks from the RAID

mdadm /dev/md10 --fail /dev/sde1 --remove /dev/sde1

mdadm /dev/md10 --fail /dev/sdd1 --remove /dev/sdd1

mdadm /dev/md10 --fail /dev/sdc1 --remove /dev/sdc1

mdadm /dev/md10 --fail /dev/sdb1 --remove /dev/sdb1

III. Removing /dev/md10

rm -f /dev/md10
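Stopping the array and deleting the device node still leaves md superblocks on the member partitions, so the kernel may try to re-assemble the array at the next boot. A fuller teardown sketch; teardown_md is a hypothetical helper that only prints the commands, so they can be reviewed before running them as root:

```shell
# Print the commands that fully dismantle an md array: unmount, stop,
# then wipe the md metadata on every member with --zero-superblock.
teardown_md() {
    md=$1; shift
    echo "umount $md"
    echo "mdadm -S $md"
    for part in "$@"; do
        echo "mdadm --zero-superblock $part"
    done
}

teardown_md /dev/md10 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```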

1. Before the test, create a file under /mnt/raid to verify later:

[root@localhost raid]# ll / | tee ./ls.txt

mdadm itself can mark a disk as failed; see the help for mdadm --manage -h.

For the failure test, mark the sda disk as faulty:

[root@localhost /]# mdadm /dev/md0 -f /dev/sda1

mdadm: set /dev/sda1 faulty in /dev/md0

2. Check the status

[root@localhost /]# more /proc/mdstat

Personalities : [raid5] 

md0 : active raid5 sdd1[3] sdc1[2] sdb1[1] sda1[4](F) 

16771584 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU] 

[====>................] recovery = 22.8% (1916928/8385792) finish=1.9min speed=55892K/sec 

unused devices: &lt;none&gt;

[root@localhost /]#

Because sdd was the hot spare, the RAID is rebuilding onto it: recovery is 22.8% done and sda1 is marked (F).

Wait a moment and check again:

[root@localhost /]# more /proc/mdstat

Personalities : [raid5] 

md0 : active raid5 sdd1[0] sdc1[2] sdb1[1] sda1[3](F) 

16771584 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU] 

unused devices: &lt;none&gt;

[root@localhost /]# mdadm -D /dev/md0

/dev/md0: 

Version : 00.90.01 

Creation Time : Fri Jun 1 13:00:31 2007 

Raid Level : raid5 

Array Size : 16771584 (15.99 GiB 17.17 GB) 

Device Size : 8385792 (7.100 GiB 8.59 GB) 

Raid Devices : 3 

Total Devices : 4 

Preferred Minor : 0 

Persistence : Superblock is persistent 

Update Time : Fri Jun 1 13:23:49 2007 

State : clean 

Active Devices : 3 

Working Devices : 3 

Failed Devices : 1 

Spare Devices : 0 

Layout : left-symmetric 

Chunk Size : 64K 

    Number   Major   Minor   RaidDevice State

       0       8       49        0      active sync   /dev/sdd1

       1       8       17        1      active sync   /dev/sdb1

       2       8       33        2      active sync   /dev/sdc1

       3       8        1       -1      faulty        /dev/sda1

UUID : 63cb965b:79486986:d389c551:67677f20

3. Remove the failed disk

[root@localhost /]# mdadm /dev/md0 -r /dev/sda1

mdadm: hot removed /dev/sda1

4. Add the new disk to the RAID

After the new disk is attached to the system, partition it correctly, ideally keeping the same device name as the disk it replaces. Then run:

mdadm /dev/md0 -a /dev/sda1
