RAID 1 creates a mirror on a second drive. Before you can create the array you will need to create RAID-aware partitions on your drives, and you will need to install mdadm on Ubuntu.
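If mdadm is not yet installed, it is available from the standard Ubuntu repositories, and fdisk can list the partitions you have prepared. The drive name /dev/sdb matches the example used below; substitute your own drives.
# sudo apt-get install mdadm
# fdisk -l /dev/sdb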
You may have to create the RAID device node first with mknod, giving the block device type and the major and minor numbers. Be sure to increment the minor number (the "2" below) by one each time you create an additional RAID device.
# mknod /dev/md1 b 9 2
This will create the device if you have already used /dev/md0.
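If you are not sure which md device nodes already exist, you can list them first:
# ls -l /dev/md*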
Create RAID 1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb7 /dev/sdb8
--create
This option creates a RAID array. The device used for this array is /dev/md1.
--level=1
The level option determines which RAID level the array will use; level 1 is a mirror.
--raid-devices=2 /dev/sdb7 /dev/sdb8
Note: for illustration or practice this example uses two partitions on the same drive. This is NOT what you want in production; the partitions must be on separate drives. It does, however, provide a practice scenario. You must specify the number of devices in the array and list the devices that you partitioned with fdisk. The example shows two RAID partitions.
mdadm: array /dev/md0 started.
Verify the Creation of the RAID
# cat /proc/mdstat
Personalities : [raid0] [raid1]
md2 : active raid1 sdb8[1] sdb7[0]
497856 blocks [2/2] [UU]
[======>.............] resync = 34.4% (172672/497856) finish=0.2min speed=21584K/sec
md0 : active raid0 sdb6[1] sdb5[0]
995712 blocks 64k chunks
unused devices: <none>
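In addition to /proc/mdstat, mdadm itself can report the state of the array. The device name here assumes the /dev/md1 used in the example above.
# mdadm --detail /dev/md1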
You can also verify that the RAID is being built by watching /var/log/messages.
# tail /var/log/messages
May 19 09:21:45 ub1 kernel: [ 5320.433192] md: raid1 personality registered for level 1
May 19 09:21:45 ub1 kernel: [ 5320.433620] md2: WARNING: sdb7 appears to be on the same physical disk as sdb8.
May 19 09:21:45 ub1 kernel: [ 5320.433628] True protection against single-disk failure might be compromised.
May 19 09:21:45 ub1 kernel: [ 5320.433772] raid1: raid set md2 active with 2 out of 2 mirrors
May 19 09:21:45 ub1 kernel: [ 5320.433913] md: resync of RAID array md2
May 19 09:21:45 ub1 kernel: [ 5320.433926] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
May 19 09:21:45 ub1 kernel: [ 5320.433934] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
May 19 09:21:45 ub1 kernel: [ 5320.433954] md: using 128k window, over a total of 497856 blocks.
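If you want to watch the messages as the resync runs, tail can also follow the log in real time:
# tail -f /var/log/messages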
Create the ext3 File System
You have to place a file system on your RAID device. In this example the journaling file system ext3 is placed on the device.
# mke2fs -j /dev/md1
mke2fs 1.40.8 (13-Mar-2008)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
124928 inodes, 497856 blocks
24892 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
61 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
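If you want to change the automatic check interval mentioned above, tune2fs can adjust both the mount count and the time between checks. The values shown here are only an example, not a recommendation.
# tune2fs -c 50 -i 180d /dev/md1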
Mount the RAID on the /raid Mount Point
In order to use the RAID array you will need to mount it on the file system. For testing purposes you can create a mount point and test it there. To make the mount permanent you will need to edit /etc/fstab.
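If the /raid directory does not already exist, create it before mounting; the path simply matches the example that follows.
# mkdir /raid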
# mount /dev/md1 /raid
The df command will verify that the array has mounted.
# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 5809368 2699256 2817328 49% /
varrun 1037732 104 1037628 1% /var/run
varlock 1037732 0 1037732 0% /var/lock
udev 1037732 80 1037652 1% /dev
devshm 1037732 12 1037720 1% /dev/shm
/dev/sda1 474440 49252 400691 11% /boot
/dev/sda4 474367664 1738024 448722912 1% /home
/dev/md1 482090 10544 446654 3% /raid
You should be able to create files on the new partition. If this works you may edit /etc/fstab and add a line that looks like this:
/dev/md1 /raid ext3 defaults 0 2
Be sure to test and be prepared to enter single user mode to fix any problems with the new RAID device.
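One simple test, assuming the /dev/md1 device and /raid mount point from this example: create a file on the array, unmount it, and then remount everything listed in /etc/fstab. If the array comes back mounted on /raid with the test file intact, the fstab entry is working.
# touch /raid/testfile
# umount /raid
# mount -a
# ls /raid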
Create a Failed RAID Disk
In order to test your RAID 1 array you can fail a disk, remove it, and then re-add it. This is an important procedure to practice.
# mdadm /dev/md1 -f /dev/sdb8
This will deliberately mark /dev/sdb8 as faulty.
mdadm: set /dev/sdb8 faulty in /dev/md1
# cat /proc/mdstat
Personalities : [raid0] [raid1]
md2 : active raid1 sdb8[2](F) sdb7[0]
497856 blocks [2/1] [U_]
md0 : active raid0 sdb6[1] sdb5[0]
995712 blocks 64k chunks
unused devices: <none>
Hot Remove the Failed Disk
You can remove the faulty disk from the RAID array.
# mdadm /dev/md1 -r /dev/sdb8
mdadm: hot removed /dev/sdb8
Verify the Process
You should be able to see the process as it is working.
# cat /proc/mdstat
Personalities : [raid0] [raid1]
md2 : active raid1 sdb7[0]
497856 blocks [2/1] [U_]
md0 : active raid0 sdb6[1] sdb5[0]
995712 blocks 64k chunks
unused devices: <none>
Hot Add a Replacement Drive
This will add a device to the array to replace the failed one.
# mdadm /dev/md1 -a /dev/sdb8
mdadm: re-added /dev/sdb8
Verify the Process
# cat /proc/mdstat
Personalities : [raid0] [raid1]
md2 : active raid1 sdb8[2] sdb7[0]
497856 blocks [2/1] [U_]
[=====>..............] recovery = 26.8% (134464/497856) finish=0.2min speed=26892K/sec
md0 : active raid0 sdb6[1] sdb5[0]
995712 blocks 64k chunks
unused devices: <none>
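To watch the recovery progress update continuously rather than rerunning the command by hand, watch works well here:
# watch cat /proc/mdstat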