### Configuring and using a software RAID1 (mirroring) with mdadm ###
       
       
        Linux has really nice support for software RAID. The backend tool to manage it is called 'mdadm'. I will describe here the few commands that I use with mdadm to handle my RAID1 (disk mirror) setup. The system I tested the commands on is Debian 7.0 ('Wheezy'), but it should be pretty much the same on any Linux distribution.
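
        If mdadm is not installed yet, it is a single package away on Debian, and you can check which version you got afterwards:

        # apt-get install mdadm
        # mdadm --version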
       
       
       *** CREATING THE RAID1 MIRROR ***
       
       
       Creating the RAID1 mirror, using two disks: sdb and sdc:
       
       # mdadm --create --verbose /dev/md0 --level=mirror --metadata 1.0 --raid-devices=2 /dev/sdb /dev/sdc
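
        The initial synchronization runs in the background; you can follow its progress in /proc/mdstat, for example with:

        # watch -n 5 cat /proc/mdstat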
       
        You can also build your RAID-1 with only one disk and add the second one later (e.g. because you need to copy data from one of the disks to the new array first). If the disks are not exactly the same size, create the array on the smaller one, because the disk you add later must be at least as large as the array:
       
       # mdadm --create --verbose /dev/md0 --level=mirror --metadata 1.0 --raid-devices=2 /dev/sdb missing
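
        Later, once your data has been copied onto the degraded array, you add the second disk (assuming here that it shows up as /dev/sdc) and mdadm will sync it from the first member:

        # mdadm --manage /dev/md0 --add /dev/sdc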
       
        Note that I explicitly asked for an array superblock in version 1.0. This is actually pretty important in our situation.
        The version 1.0 superblock is stored near the end of the device (at least 8K, and less than 12K, from the end). This is useful, especially with RAID-1 devices, because a filesystem on the RAID and a filesystem on a plain, non-RAID disk start in exactly the same place. This makes it possible to take a disk out of the array and access it on another machine just like a normal disk. It also has the *big* advantage that the MBR of the mdadm array will be located in the MBR of the physical disks, which makes your system able to boot from there, and, in case of a disaster, you will be able to take a disk out of the RAID and mount it anywhere else to dump the data, as if it were a normal single drive.
       
        You will notice that building the array takes a long time - several hours, or even several days, depending on how big your array is and how fast the drives are. It will take that long every time the array needs a full resync, which might happen for example if you temporarily take a disk out and re-add it to the array, or if one of the disks drops out of the array after a crash. To make that resync much faster, add a write-intent bitmap to your array:
       
       # mdadm --grow --bitmap=internal /dev/md0
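
        You can verify that the bitmap is in place by looking at /proc/mdstat, which gains an extra 'bitmap:' line for the array:

        # grep bitmap /proc/mdstat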
       
        It's important to keep your mdadm.conf up to date. To do so, use the following command to populate your mdadm.conf file:
       
       # mdadm --detail --scan > /etc/mdadm/mdadm.conf
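
        Careful: '>' replaces the whole file, so if yours already contains other settings (DEVICE, MAILADDR...), append with '>>' instead and clean up duplicate ARRAY lines by hand. The scan output boils down to one ARRAY line per array, something like this (hostname and UUID below are made up):

        ARRAY /dev/md0 metadata=1.0 name=myhost:0 UUID=a1b2c3d4:e5f60718:293a4b5c:6d7e8f90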
       
       
       *** MAKE YOUR RAID BOOTABLE ***
       
       
       If you'd like to make your mdadm array bootable, there are a few details to know.
       
       Apparently you need Grub2 to be able to boot from the RAID (incidentally Grub2 is the default boot loader on Debian).
       
        By default, Grub locates disks using their UUID. This can become messy with RAID, since the RAID might end up with the same UUID as one of the drives (that's what happened to me). To be safe, I disabled the UUID feature of Grub, via the /etc/default/grub file:
       GRUB_DISABLE_LINUX_UUID=true
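
        You can see whether you are affected by listing the UUIDs that blkid reports for the members and for the array itself (device names taken from the creation example above):

        # blkid /dev/sdb /dev/sdc /dev/md0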
       
        Grub needs a so-called 'device map'. It's basically a text file that provides the list of available drives. This file is stored in /boot/grub/device.map, and it can be generated by the command 'grub-mkdevicemap'. But in my case, this file didn't contain my RAID device, therefore Grub wasn't able to boot from it. I had to add it manually, as a new line at the end of the file:
       
       (hd0)   /dev/disk/by-id/ata-Hitachi_HDT721010SLA360_STF607MH3VJKBK
       (hd1)   /dev/disk/by-id/ata-ST1000DM003-9YN162_S1DAAY5W
       (hd2)   /dev/md0
       
       EDIT: It would seem that the current (as of 26 Dec 2015) version of Grub2 doesn't need a device map any more.
       
        Once you're done, you can call the 'update-grub2' command, which will regenerate the grub configuration, and then you will have to install grub on the member devices separately (and not on the md array!).
       
       Example:
       # grub-install --no-floppy /dev/sda
       # grub-install --no-floppy /dev/sdb
       
        Yes, this IS counter-intuitive, because one would think that doing grub-install /dev/md0 would install the grub MBR stuff on all disk members (it is a RAID 1 after all, remember?). But no, that's not what happens, although I don't have much clue why. Also, writing directly to RAID members sounds really scary (at least it was a scary experience for me), and I was expecting a whole lot of problems afterwards, like the RAID breaking because the disks weren't in sync anymore, etc., but nothing like that happened.
       
        But grub is not the only piece involved in the boot process - it only loads the kernel and an initramfs image and runs them. The initramfs image must support your RAID array to be able to recognize it properly. The only detail I had to take care of was to make sure that mdadm is set up to assemble arrays at boot. To set this up, I had to change the 'INITRDSTART' parameter in the /etc/default/mdadm file:
       
       INITRDSTART='all'
       
        ...And then, update the initramfs image via 'update-initramfs -u'. Since then, my system has happily booted from the RAID.
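
        If you want to double-check that the image really contains the mdadm bits, initramfs-tools (the Debian default) ships a helper to list its contents:

        # lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm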
       
        Last note: not directly related to RAID, but I wasted a few hours on this. Even though your MBR contains Grub, some BIOSes might refuse to boot from it as long as they do not see at least one partition marked as "active". In my case, I was getting a "no boot device found" message from my BIOS until I marked the first partition on my md0 RAID as "active", using fdisk.
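
        If you prefer a non-interactive way of setting that flag, parted can do it too (assuming a classic MBR partition table on the array, and that the partition to mark is number 1):

        # parted /dev/md0 set 1 boot on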
       
       
       *** DAY TO DAY MANAGEMENT OF THE ARRAY ***
       
       
       How do I check the RAID status?
       
       # mdadm --detail /dev/md0
       # cat /proc/mdstat
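
        For scripts or cron jobs, mdadm can also report the array health through its exit code, when --detail is combined with --test (0 means the array is clean, anything else means degraded, dead or an error):

        # mdadm --detail --test /dev/md0 > /dev/null; echo $?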
       
       How to run a consistency check on the array?
       
       # echo check > /sys/block/md0/md/sync_action
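
        The check runs in the background: its progress shows up in /proc/mdstat, and once it's over the number of mismatches found is available in sysfs:

        # cat /sys/block/md0/md/mismatch_cnt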
       
        How to remove a failed drive? This is a two-step process: first mark the drive as 'failed', and then actually remove it from the RAID:
       
       # mdadm --manage /dev/md0 --fail /dev/sdb
       # mdadm --manage /dev/md0 --remove /dev/sdb
       
        If you'd like to erase the RAID tag from the drive (for example to avoid Linux thinking it's still part of some RAID), you can zero the mdadm superblock as easily as this:
       
       # mdadm --zero-superblock /dev/sdb
       
       Adding a new drive into the array:
       
       # mdadm --manage /dev/md0 --add /dev/sdb
       
       Growing the size of an array after replacing disks with bigger ones:
       
       # mdadm --grow /dev/md0 --size=max
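
        Keep in mind that this only grows the md device itself; whatever sits on top of it still has to be enlarged separately. For example, assuming an ext4 filesystem living directly on the array (if the array is partitioned, grow the partition and its filesystem instead):

        # resize2fs /dev/md0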