How to set up RAID 1 on Ubuntu Linux?


RAID 1 creates a mirror of one drive on a second drive. You will need to create RAID-aware partitions on your drives before you can create the RAID, and you will need to install mdadm on Ubuntu.

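If mdadm is not yet installed, a typical sequence looks like this (run as root; /dev/sdb is just an example device, so substitute your own):

# apt-get install mdadm
# fdisk /dev/sdb

Inside fdisk, create the partitions you need, press t to change each partition's type, enter fd (Linux raid autodetect), and write the table with w.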

You may have to create the RAID device node first with mknod, giving the device type (b for block), the md driver's major number (9), and a minor number. Be sure to increment the minor number (the final argument) by one each time you create an additional RAID device.

# mknod /dev/md1 b 9 2

This will create the device if you have already used /dev/md0. Note that the minor number, not the name, determines which kernel array the node refers to; with a minor number of 2, the kernel reports this array as md2 in the status output below.

Create RAID 1

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb7 /dev/sdb8

--create
This will create a RAID array. The device that you will use for this RAID array is /dev/md1.

--level=1
The level option determines which RAID level the array will use.

--raid-devices=2 /dev/sdb7 /dev/sdb8
Note: for illustration or practice, this example uses two partitions on the same drive. This is NOT what you want in real use; the partitions must be on separate drives for the mirror to protect against disk failure. However, it does provide a practice scenario. You must give the number of devices in the RAID array and list the devices that you partitioned with fdisk. The example shows two RAID partitions.
mdadm: array /dev/md1 started.

Verify the Creation of the RAID

# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb8[1] sdb7[0]

497856 blocks [2/2] [UU]

[======>.............] resync = 34.4% (172672/497856) finish=0.2min speed=21584K/sec

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices: <none>

You can also verify that the RAID is being built in /var/log/messages.

# tail /var/log/messages

May 19 09:21:45 ub1 kernel: [ 5320.433192] md: raid1 personality registered for level 1

May 19 09:21:45 ub1 kernel: [ 5320.433620] md2: WARNING: sdb7 appears to be on the same physical disk as sdb8.

May 19 09:21:45 ub1 kernel: [ 5320.433628] True protection against single-disk failure might be compromised.

May 19 09:21:45 ub1 kernel: [ 5320.433772] raid1: raid set md2 active with 2 out of 2 mirrors

May 19 09:21:45 ub1 kernel: [ 5320.433913] md: resync of RAID array md2

May 19 09:21:45 ub1 kernel: [ 5320.433926] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.

May 19 09:21:45 ub1 kernel: [ 5320.433934] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.

May 19 09:21:45 ub1 kernel: [ 5320.433954] md: using 128k window, over a total of 497856 blocks.

Create the ext3 File System
You have to place a file system on your RAID device. The journaling file system ext3 is placed on the device in this example.

# mke2fs -j /dev/md1

mke2fs 1.40.8 (13-Mar-2008)

Filesystem label=

OS type: Linux

Block size=1024 (log=0)

Fragment size=1024 (log=0)

124928 inodes, 497856 blocks

24892 blocks (5.00%) reserved for the super user

First data block=1

Maximum filesystem blocks=67633152

61 block groups

8192 blocks per group, 8192 fragments per group

2048 inodes per group

Superblock backups stored on blocks:

8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.
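Incidentally, mke2fs -j is equivalent to mkfs.ext3, so the same result can be had with:

# mkfs.ext3 /dev/md1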

Mount the RAID Array at /raid

In order to use the RAID array you will need to mount it on the file system. For testing purposes you can create a mount point and test. To make a permanent mount point you will need to edit /etc/fstab.
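Create the mount point first if it does not already exist (the same step appears in the RAID 0 walkthrough below):

# mkdir /raid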

# mount /dev/md1 /raid

The df command will verify that the array is mounted.

# df

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/sda2 5809368 2699256 2817328 49% /

varrun 1037732 104 1037628 1% /var/run

varlock 1037732 0 1037732 0% /var/lock

udev 1037732 80 1037652 1% /dev

devshm 1037732 12 1037720 1% /dev/shm

/dev/sda1 474440 49252 400691 11% /boot

/dev/sda4 474367664 1738024 448722912 1% /home

/dev/md1 482090 10544 446654 3% /raid

You should be able to create files on the new partition. If this works, you may edit /etc/fstab and add a line that looks like this:

/dev/md1 /raid ext3 defaults 0 2

Be sure to test and be prepared to enter single user mode to fix any problems with the new RAID device.
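To help the array reassemble reliably at boot, one common approach on Ubuntu is to append the array definition to mdadm's configuration file and refresh the initramfs (paths assume the stock mdadm package):

# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
# update-initramfs -u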

Fail a RAID Disk

In order to test your RAID 1, you can fail a disk, remove it, and re-add it. This is an important procedure to practice.

# mdadm /dev/md1 -f /dev/sdb8
This deliberately marks /dev/sdb8 as faulty.

mdadm: set /dev/sdb8 faulty in /dev/md1

# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb8[2](F) sdb7[0]

497856 blocks [2/1] [U_]

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices: <none>
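For a more detailed view of the degraded array, including which member is marked faulty, you can also query the array directly:

# mdadm --detail /dev/md1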

Hot Remove the Failed Disk
You can remove the faulty disk from the RAID array.

# mdadm /dev/md1 -r /dev/sdb8

mdadm: hot removed /dev/sdb8

Verify the Removal

You should see that the failed device is gone and the array is running on a single mirror.

# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb7[0]

497856 blocks [2/1] [U_]

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices: <none>

Hot Add a Replacement Drive

This will add a device into the array to replace the failed one.
# mdadm /dev/md1 -a /dev/sdb8

mdadm: re-added /dev/sdb8

Verify the Rebuild

# cat /proc/mdstat

Personalities : [raid0] [raid1]

md2 : active raid1 sdb8[2] sdb7[0]

497856 blocks [2/1] [U_]

[=====>..............] recovery = 26.8% (134464/497856) finish=0.2min speed=26892K/sec

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices: <none>
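If you would rather follow the rebuild continuously than re-run cat, the standard watch utility reruns a command every two seconds:

# watch cat /proc/mdstat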

How to set up RAID 0 on Ubuntu Linux?


RAID 0 stripes data across drives to increase read/write speeds, since data can be read and written on separate disks at the same time. This is the RAID level to use if you need to increase the speed of disk access. You will need to create RAID-aware partitions on your drives before you can create the RAID, and you will need to install mdadm on Ubuntu.

These commands must be run as root, or you must prefix each command with sudo.

# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb5 /dev/sdb6

--create
This will create a RAID array. The device that you will use for the first RAID array is /dev/md0.

--level=0
The level option determines which RAID level the array will use.

--raid-devices=2 /dev/sdb5 /dev/sdb6
Note: for illustration or practice, this example uses two partitions on the same drive. This is NOT what you want in real use; the partitions must be on separate drives to get the speed benefit of striping. However, it does provide a practice scenario. You must give the number of devices in the RAID array and list the devices that you partitioned with fdisk. The example shows two RAID partitions.
mdadm: array /dev/md0 started.

Check the progress of the RAID.

# cat /proc/mdstat

Personalities : [raid0]

md0 : active raid0 sdb6[1] sdb5[0]

995712 blocks 64k chunks

unused devices: <none>

You can also verify that the RAID is being built in /var/log/messages.

# tail /var/log/messages

May 19 09:08:51 ub1 kernel: [ 4548.276806] raid0: looking at sdb5

May 19 09:08:51 ub1 kernel: [ 4548.276809] raid0: comparing sdb5(497856) with sdb6(497856)

May 19 09:08:51 ub1 kernel: [ 4548.276813] raid0: EQUAL

May 19 09:08:51 ub1 kernel: [ 4548.276815] raid0: FINAL 1 zones

May 19 09:08:51 ub1 kernel: [ 4548.276822] raid0: done.

May 19 09:08:51 ub1 kernel: [ 4548.276826] raid0 : md_size is 995712 blocks.

May 19 09:08:51 ub1 kernel: [ 4548.276829] raid0 : conf->hash_spacing is 995712 blocks.

May 19 09:08:51 ub1 kernel: [ 4548.276831] raid0 : nb_zone is 1.

May 19 09:08:51 ub1 kernel: [ 4548.276834] raid0 : Allocating 4 bytes for hash.

Create the ext3 File System
You have to place a file system on your RAID device. The journaling file system ext3 is placed on the device in this example.

# mke2fs -j /dev/md0

mke2fs 1.40.8 (13-Mar-2008)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

62464 inodes, 248928 blocks

12446 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=255852544

8 block groups

32768 blocks per group, 32768 fragments per group

7808 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376

Writing inode tables: done

Creating journal (4096 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

Create a Place to Mount the RAID on the File System

In order to use the RAID array you will need to mount it on the file system. For testing purposes you can create a mount point and test. To make a permanent mount point you will need to edit /etc/fstab.

# mkdir /raid

Mount the RAID Array

# mount /dev/md0 /raid

You should be able to create files on the new partition. If this works, you may edit /etc/fstab and add a line that looks like this:

/dev/md0 /raid ext3 defaults 0 2

Be sure to test and be prepared to enter single user mode to fix any problems with the new RAID device.
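If you want to double-check what is on the device before committing it to /etc/fstab, the blkid utility reports the filesystem type and UUID:

# blkid /dev/md0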

Hope you find this article helpful.

How do I bind NIC interrupts to selected CPUs?


I read an interesting mailing-list thread a few weeks back, and I could not wait to share it with open source enthusiasts like you. Here goes the story:


I have a multi-core server, and I am trying to bind the eth0 NIC interrupt(s) to CPU4 and CPU5. As of now, eth0 is spread across all eight of its interrupt vectors:

# grep eth0 /proc/interrupts | awk '{print $NF}' | sort

eth0-0
eth0-1
eth0-2
eth0-3
eth0-4
eth0-5
eth0-6
eth0-7

How do I move ahead?

Solution: Follow these steps to get it done.

As I am using a Broadcom card (bnx2), I am going to run this command and reboot my machine.

Open the terminal:

echo "options bnx2 disable_msi=1" > /etc/modprobe.d/bnx2.conf

Then reboot; afterwards you'll see only one IRQ for eth0.

Next, run this command:

echo cpumask > /proc/irq/IRQ-OF-ETH0-0/smp_affinity

I believe the mask for CPU4 is 10 and for CPU5 is 20.
(Don't forget to disable irqbalance.)
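As a concrete sketch, suppose /proc/interrupts shows eth0 on IRQ 58 (a made-up number; read the real one from your own output). Binding it to CPU4 with the mask above, and stopping irqbalance so it does not rewrite the setting (the service name may vary by release), would look like:

# echo 10 > /proc/irq/58/smp_affinity
# service irqbalance stop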

You can only bind the IRQs for one NIC to one core at a time.

Or you could do something fancy/silly with isolcpus: isolate every CPU except 4 and 5, so that all IRQs end up scheduled on CPU4/5. This also means the kernel can only schedule tasks on CPU4/5 (a sketch of the boot parameter follows below).
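On an eight-CPU box (a hypothetical layout matching the eight vectors above), that would mean booting with a kernel command-line parameter along these lines:

isolcpus=0,1,2,3,6,7

This removes CPUs 0-3 and 6-7 from the default scheduler, leaving CPU4 and CPU5 available.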

Then use cpusets/taskset/tuna to move all the processes off CPU4/5, and you'll have to use taskset/cpuset/tuna for every task to ensure it's not using CPU4/5.

Hope it helps!!!