
Linux software RAID: an mdadm RAID experiment

While doing a RAID experiment with mdadm I hit a problem I simply could not crack. Searching online turned up nothing, and I waited half the day for the training instructor, who never showed up. I went back to Google and finally found the cause on a foreign forum.

The problem is shown below and needs little explanation; the output speaks for itself:

[root@localhost ~]# mdadm -C /dev/md5 -l 5 -n 3 -x 1 /dev/sdb[5-8]

mdadm: Cannot open /dev/sdb5: Device or resource busy

mdadm: Cannot open /dev/sdb6: Device or resource busy

mdadm: Cannot open /dev/sdb7: Device or resource busy

mdadm: Cannot open /dev/sdb8: Device or resource busy

That was on my first virtual machine. I later switched to a clean machine, added three separate disks, and repeated the experiment; the problem was essentially the same, only with three whole disks instead of three partitions.

While hunting for the cause I found people online blaming the motherboard (which seemed unlikely to me), and others blaming the kernel. The real problem was neither: the /dev/md* device was busy, because the kernel had already assembled an array on it. Stopping that device with mdadm --stop is all it takes, and the problem is solved. The process is below.
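Stated as commands, the fix looks like this. It is a sketch using the device names from my transcript (/dev/md5 over /dev/sdb5..8); the run="echo" guard makes it a dry run that only prints each command, so clear it and execute as root to apply the fix for real.

```shell
# Dry-run guard: with run="echo" every line just prints the command it would run.
# Set run="" and execute as root to actually apply the fix.
run="echo"

$run cat /proc/mdstat                        # which md array grabbed the members?
$run mdadm --stop /dev/md5                   # release the busy /dev/md* device
$run mdadm --zero-superblock /dev/sdb[5-8]   # optional: wipe the stale RAID superblocks
```

The --zero-superblock step is not in my original session; it is only needed if you also want mdadm -C to stop warning that the members "appear to be part of a raid array".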

[root@localhost ~]# mdadm --stop /dev/md5

mdadm: stopped /dev/md5

[root@localhost ~]# mdadm -C /dev/md5 -l 5 -n 3 /dev/sd[bcd]

mdadm: /dev/sdb appears to be part of a raid array:

level=raid5 devices=3 ctime=Wed Dec  9 03:37:11 2009

mdadm: /dev/sdc appears to be part of a raid array:

level=raid5 devices=3 ctime=Wed Dec  9 03:37:11 2009

mdadm: /dev/sdd appears to be part of a raid array:

level=raid5 devices=3 ctime=Wed Dec  9 03:37:11 2009


Continue creating array? (y/n) y

mdadm: array /dev/md5 started.

[root@localhost ~]# mdadm --detail /dev/md5

/dev/md5:

Version : 00.90.03

Creation Time : Wed Dec  9 03:46:26 2009

Raid Level : raid5

Array Size : 2097024 (2048.22 MiB 2147.35 MB)

Used Dev Size : 1048512 (1024.11 MiB 1073.68 MB)

Raid Devices : 3

Total Devices : 3

Preferred Minor : 5

Persistence : Superblock is persistent

Update Time : Wed Dec  9 03:46:26 2009

State : clean, degraded, recovering

Active Devices : 2

Working Devices : 3

Failed Devices : 0

Spare Devices : 1

Layout : left-symmetric

Chunk Size : 64K

Rebuild Status : 55% complete

UUID : 297ef6fe:26379be0:84c578f1:21ebe5cf

Events : 0.1

Number   Major   Minor   RaidDevice State

0       8       16        0      active sync   /dev/sdb

1       8       32        1      active sync   /dev/sdc

3       8       48        2      spare rebuilding   /dev/sdd

[root@localhost ~]#
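The "Rebuild Status : 55% complete" line above can also be watched without mdadm, straight from the kernel. A small sketch (md5 as in the transcript; /proc/mdstat only exists on Linux with the md driver loaded):

```shell
# /proc/mdstat lists every md array and shows a progress bar during a
# resync/rebuild, e.g. "md5 : active raid5 ... [==>......] recovery = 55%".
mdstat=/proc/mdstat
if [ -r "$mdstat" ]; then
    cat "$mdstat"
else
    echo "no $mdstat on this system"
fi
# For continuous polling:  watch -n 5 cat /proc/mdstat
```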

[root@localhost ~]# mdadm --detail /dev/md5

/dev/md5:

Version : 00.90.03

Creation Time : Wed Dec  9 03:46:26 2009

Raid Level : raid5

Array Size : 2097024 (2048.22 MiB 2147.35 MB)

Used Dev Size : 1048512 (1024.11 MiB 1073.68 MB)

Raid Devices : 3

Total Devices : 3

Preferred Minor : 5

Persistence : Superblock is persistent

Update Time : Wed Dec  9 03:47:34 2009

State : clean

Active Devices : 3

Working Devices : 3

Failed Devices : 0

Spare Devices : 0

Layout : left-symmetric

Chunk Size : 64K

UUID : 297ef6fe:26379be0:84c578f1:21ebe5cf

Events : 0.2

Number   Major   Minor   RaidDevice State

0       8       16        0      active sync   /dev/sdb

1       8       32        1      active sync   /dev/sdc

2       8       48        2      active sync   /dev/sdd
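The size lines in the two --detail listings are worth a quick sanity check: RAID-5 spends one device's worth of space on parity, so the usable Array Size should be (n - 1) times the Used Dev Size. With the numbers above:

```shell
# RAID-5 usable space = (number of devices - 1) * per-device size.
raid_devices=3               # "Raid Devices" from mdadm --detail
used_dev_size_kib=1048512    # "Used Dev Size" from mdadm --detail, in KiB
array_size_kib=$(( (raid_devices - 1) * used_dev_size_kib ))
echo "$array_size_kib"       # 2097024, matching the "Array Size" line
```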

[root@localhost ~]# mkfs.ext3 /dev/md5

mke2fs 1.39 (29-May-2006)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

262144 inodes, 524256 blocks

26212 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=536870912

16 block groups

32768 blocks per group, 32768 fragments per group

16384 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912

Writing inode tables: done

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@localhost ~]# mount /dev/md5 /mnt

[root@localhost ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda2             3.8G  1.9G  1.8G  53% /

/dev/sda3             4.3G  137M  3.9G   4% /home

/dev/sda1              46M   11M   33M  24% /boot

tmpfs                 339M     0  339M   0% /dev/shm

/dev/md5              2.0G   36M  1.9G   2% /mnt

[root@localhost ~]#
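One step the session above skips: on many distributions the array will not reliably come back as /dev/md5 after a reboot unless it is recorded in mdadm's configuration file. A hedged sketch, again as a dry run (the config path varies: /etc/mdadm.conf on Red Hat-style systems, /etc/mdadm/mdadm.conf on Debian-style ones):

```shell
# Dry run: with run="echo" each line only prints the command it would run.
run="echo"

# mdadm --detail --scan emits one ARRAY line per running array, e.g.:
#   ARRAY /dev/md5 UUID=297ef6fe:26379be0:84c578f1:21ebe5cf
$run mdadm --detail --scan
# Append that ARRAY line to the config file so the array is assembled at boot.
```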
