Partitioning - Formatting - Mounting - RAID arrays

============================== Production environment: partitioning and RAID

[root@192_168_11_203 tmp]# cat /proc/mdstat   <--- check the RAID status

Personalities : [raid1] 

md0 : active raid1 sdc[1] sdb[0]

      998891520 blocks super 1.2 [2/2] [UU]

      bitmap: 2/8 pages [8KB], 65536KB chunk

unused devices: <none>
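
In this output, [2/2] [UU] means both mirror members are present and in sync; a degraded mirror would show [2/1] [U_]. The bitmap line is the write-intent bitmap, which shortens resync after an unclean shutdown.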

[root@192_168_11_203 ~]# fdisk -l  

   Disk /dev/sdb: 1023.0 GB, 1022999134208 bytes

   Disk /dev/sdc: 1023.0 GB, 1022999134208 bytes

   Disk /dev/sdd: 899.0  GB,  898999779328 bytes

   Disk /dev/sde: 899.0  GB,  898999779328 bytes

   Disk /dev/sda: 299.0  GB,  298999349248 bytes

   Disk /dev/md0: 1022.9 GB, 1022864916480 bytes

[root@192_168_11_203 ~]# df -h

   Filesystem            Size  Used Avail Use% Mounted on

   /dev/sda2              30G  4.8G   24G  17% /

   tmpfs                  64G  4.0K   64G   1% /dev/shm

   /dev/sda1             190M   47M  133M  27% /boot

   /dev/md0p1            938G  132G  759G  15% /data    <--- /dev/md0: 1022.9 GB

   /dev/sdd1             824G   73M  783G   1% /backup  <--- /dev/sdd: 899.0 GB

   /dev/sde1             824G   11G  773G   2% /logs    <--- /dev/sde: 899.0 GB

[root@192_168_11_203 ~]# cat /etc/fstab 

  /dev/md0p1              /data                   ext4    defaults        0 0

  /dev/sdd1               /backup                 ext4    defaults        0 0

  /dev/sde1               /logs                   ext4    defaults        0 0
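
Bare device names such as /dev/sdd1 can change across reboots if the disks are re-enumerated. A more robust fstab references the filesystem UUID instead; a minimal sketch (the UUID below is a placeholder):

  blkid /dev/md0p1                                  # print the real filesystem UUID
  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  ext4  defaults  0 0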

################### Create a software RAID 1 array

### Partition the disks

  fdisk /dev/sdb

    n/p/1/+50M/w        <--- new, primary, partition 1, size +50M, write table

  fdisk /dev/sdc        <--- repeat the same keystrokes
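
For scripted setups, the interactive fdisk session can be replaced with a non-interactive tool such as parted; a minimal sketch, assuming an msdos label and the same 50 MB partition size:

  parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 51MiB
  parted -s /dev/sdc mklabel msdos mkpart primary 1MiB 51MiB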

### Install the software RAID tool (mdadm)

   Download mdadm:

      http://download.chinaunix.net/down.php?id=28095&ResourceID=6641&site=1

      tar jxf mdadm-3.0.tar.bz2

      make

      make install
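
On RHEL/CentOS the tool is also packaged, so installing from the repositories is a simpler alternative to building from source:

      yum install -y mdadm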

### Create the RAID array

   mdadm -C --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

   y        <--- confirm the "Continue creating array?" prompt
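
For unattended runs, the confirmation prompt can be suppressed with --run, which tells mdadm to start the array without asking:

   mdadm -C --verbose /dev/md0 --level=1 --raid-devices=2 --run /dev/sdb1 /dev/sdc1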

### Verify that /dev/md0 now exists

 fdisk -l

### Format the array

   mkfs.ext4 /dev/md0

### View RAID information

mdadm -D /dev/md0  

cat /proc/mdstat
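
To have the array reassemble under the same /dev/md0 name after a reboot, record it in the mdadm config file:

mdadm --detail --scan >> /etc/mdadm.conf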

### Mount

  mkdir /mdata

  mount /dev/md0 /mdata
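
The mount above does not survive a reboot. To make it persistent, add an fstab entry in the same form as the production box above, then verify it:

  echo '/dev/md0  /mdata  ext4  defaults  0 0' >> /etc/fstab
  mount -a        # mounts everything in fstab; an error here means a bad entry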

### Verify with df -h

/dev/md0         80M  1.6M   74M   3% /mdata
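
A mirror is worth exercising before it holds real data. A minimal failure drill, assuming the test array built above:

mdadm /dev/md0 --fail /dev/sdc1       # mark one member faulty
mdadm /dev/md0 --remove /dev/sdc1     # remove it from the array
mdadm /dev/md0 --add /dev/sdc1        # re-add it; resync starts automatically
cat /proc/mdstat                      # watch the recovery progress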

===================== Production example

[root@oth-bj-110-110-119-119 /]# fdisk -l

Disk /dev/sda: 300.0 GB, 300000000000 bytes

255 heads, 63 sectors/track, 36472 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0xe2b2e2b2

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          26      204800   83  Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2              26        3942    31457280   83  Linux

/dev/sda3            3942        4465     4194304   82  Linux swap / Solaris

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes

255 heads, 63 sectors/track, 121601 cylinders

Disk identifier: 0x00000000

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes

Disk /dev/md0: 1000.1 GB, 1000070447104 bytes

2 heads, 4 sectors/track, 244157824 cylinders

Units = cylinders of 8 * 512 = 4096 bytes

I/O size (minimum/optimal): 524288 bytes / 524288 bytes

Disk identifier: 0xfa2f1e15

[root@oth-bj-110-110-119-1193 /]# mdadm -C  /dev/md0 -c512 -l5 -n2 /dev/sd[b-c] --auto=yes

[root@oth-bj-110-110-119-1193 /]# cat /proc/mdstat 

Personalities : [raid6] [raid5] [raid4] 

md0 : active raid5 sdc[2] sdb[0]

      976631296 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/1] [U_]

      [==============>......]  recovery = 74.4% (727073492/976631296) finish=46.4min speed=89539K/sec

      bitmap: 0/8 pages [0KB], 65536KB chunk
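
The recovery line updates in place, so it is convenient to watch it live:

watch -n 1 cat /proc/mdstat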

[root@oth-bj-110-110-119-1193 /]# mdadm -D /dev/md0

/dev/md0:

        Version : 1.2

  Creation Time : Fri May 20 16:41:57 2016

     Raid Level : raid5

     Array Size : 976631296 (931.39 GiB 1000.07 GB)

  Used Dev Size : 976631296 (931.39 GiB 1000.07 GB)

   Raid Devices : 2

  Total Devices : 2

    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri May 20 18:49:25 2016

          State : clean, degraded, recovering 

 Active Devices : 1

Working Devices : 2

 Failed Devices : 0

  Spare Devices : 1

         Layout : left-symmetric

     Chunk Size : 512K

 Rebuild Status : 81% complete

           Name : oth-bj-119-090-062-013:0  (local to host oth-bj-119-090-062-013)

           UUID : 5e3e97cc:441c6a7e:5adc8f88:f21ec830

         Events : 1553

    Number   Major   Minor   RaidDevice State

       0       8       16        0      active sync   /dev/sdb

       2       8       32        1      spare rebuilding   /dev/sdc
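
Once the rebuild finishes, State should read clean and the second member changes from spare rebuilding to active sync; a quick check:

mdadm -D /dev/md0 | grep -E 'State|Rebuild'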

Reposted from cloves' 51CTO blog. Original link: http://blog.51cto.com/yeqing/1775512
