I recently had a drive in my Linux software RAID 5 array begin to die. When I went to install the replacement, I could not find any straightforward instructions on how to put the new drive in the array, so I thought I’d toss what I did here in case others find it useful.
My machine is running Fedora Core 6 using mdadm with a RAID 5 array (so one disk in the array can die without data loss). For these directions, let's say the RAID array is /dev/md0, the bad disk is /dev/sdc, and the RAID partition on the bad disk was /dev/sdc1.
1. I shut down the machine and removed the bad drive, because my SATA controller and the libata driver for it do not support drive hot-swap.
2. I installed the new drive in the same bay and SATA controller port.
3. I restarted the machine. The RAID array came up degraded since it was missing an active drive.
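If you want to confirm the degraded state before touching anything, mdadm can report it directly. A quick check, assuming the array is /dev/md0 as in this post:

```shell
# Show the array status; a degraded 3-disk RAID 5 reports something
# like "State : clean, degraded" and lists one slot as removed.
/sbin/mdadm --detail /dev/md0

# The one-line summary in /proc/mdstat shows the same thing: "[3/2]"
# means 3 devices expected but only 2 active, and the underscore in
# "[UU_]" marks the missing member.
cat /proc/mdstat
```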
4. I used "/sbin/fdisk /dev/sdb" to look at the partition table of another disk in the array so I would know what to create on the new drive. Alternatively, if the old drive is still functional enough, its partition table could be used as an example.
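As a side note, on MBR disks the partition table of a healthy member can also be copied to the new disk in one shot with sfdisk instead of recreated by hand in fdisk. This is a sketch using the device names from this post; double-check which disk is the healthy source (sdb) and which is the blank target (sdc) before running anything like it:

```shell
# Dump the partition table of a healthy array member (sdb) and write
# an identical table to the new disk (sdc). Argument order matters:
# get it backwards and you wipe the good disk's partition table.
/sbin/sfdisk -d /dev/sdb | /sbin/sfdisk /dev/sdc
```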
5. I used "/sbin/fdisk /dev/sdc" to create the RAID partition on the new drive:
‘n’ to create the new partition. I created primary partition #1 and sized it to use the whole disk.
‘t’ to change the partition id. I chose type fd (hexadecimal, i.e. 0xfd), "Linux raid autodetect".
‘a’ to toggle the bootable flag, since the Fedora installer had set all RAID partitions on my disks bootable during the original install/upgrade process.
‘w’ to write the new partition table and exit fdisk.
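For reference, the whole fdisk session from step 5 looks roughly like the following. The exact prompts (and whether fdisk asks for a partition number when only one partition exists) vary between fdisk versions, so treat this as a sketch of the keystrokes rather than something to pipe in blindly:

```shell
# Keystrokes, in order: n (new partition), p (primary), 1 (partition
# number), two empty lines (accept default first/last cylinders),
# t (change id), fd (Linux raid autodetect), a (toggle bootable),
# 1 (partition number), w (write and exit).
/sbin/fdisk /dev/sdc <<'EOF'
n
p
1


t
fd
a
1
w
EOF
```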
6. "/sbin/mdadm /dev/md0 -a /dev/sdc1" to add the new RAID partition on the new disk to the array.
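(Here "-a" is the short form of mdadm's "--add".) Once the partition is added, it should show up right away as a member being rebuilt into the array. A quick way to confirm, again assuming /dev/md0:

```shell
# The new member should be listed with a status along the lines of
# "spare rebuilding" until the resync finishes, at which point it
# becomes a normal active device.
/sbin/mdadm --detail /dev/md0
```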
The kernel then rebuilt the array onto the new disk automatically over the next few hours. The progress of the rebuild can be checked with "cat /proc/mdstat", or watched continuously with "watch -n .1 cat /proc/mdstat". "dmesg" will also show a message at the start and at the completion of the rebuild.