[SOLVED] Additional RAID1 arrays not started on degraded boot
Posted: 2015/02/02 03:24:56
Hi Everyone,
I have a new CentOS 7 install that I have been having trouble with. It's the usual two-drive RAID1 setup: the normal boot, root, and swap arrays created during install. These all come up just fine in degraded mode (single drive), EXCEPT for the additional RAID1 partition I have. The system brings the surviving member up as a spare and does not automatically activate the array; it ends up marked like this:
Personalities : [raid1]
md100 : inactive sda4[1](S)
1927256320 blocks super 1.2
Whereas with both drives present, it comes up just fine:
md100 : active raid1 sda4[2] sdc4[1]
20000000 blocks super 1.2 [2/2] [UU]
Note the size shown is wrong, but that doesn't seem to matter; I tried it with the full partition size too. If the array is supposed to be mounted (it's in fstab), the boot drops to the emergency prompt. I can force it to run in degraded mode with mdadm -R /dev/md100, and it then starts up just fine. Here is my line in mdadm.conf:
ARRAY /dev/md/100 metadata=1.2 UUID=5fb266ab:8d3b4672:ff810cac:ad834674
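In case anyone wants to reproduce the manual fix, the sequence I run from the emergency shell looks roughly like this (sda4 is the surviving member on my box; substitute your own device names):

# show the array sitting inactive with the (S) spare flag
cat /proc/mdstat
# tear down the half-assembled array, then reassemble and force it to run degraded
mdadm --stop /dev/md100
mdadm --assemble --run /dev/md100 /dev/sda4
# or, if it is already assembled but inactive, just force-run it in place
mdadm -R /dev/md100

After that it mounts and runs fine on the one drive.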
I was initially on the older 0.9 superblock; I zeroed it out and created a fresh 1.2 array with ext4, and got the same result. GRUB is installed on both drives (at one point I had four drives in the RAID1 array, same thing), and I've rebuilt the initramfs with dracut both with and without mdadm.conf included. I can't seem to figure out what is responsible for starting this array properly when a drive is missing. I updated the stock install the other day; mdadm, dracut, and the kernel (3.10.0-123.20.1.el7.x86_64) all had updates, but I still have the issue.
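For completeness, the superblock redo and the initramfs rebuilds went roughly like this (my device names and current kernel; --mdadmconf/--nomdadmconf are dracut's switches for including or omitting the local mdadm.conf):

# wipe the old 0.9 metadata and recreate the array with 1.2 metadata, then a fresh ext4
mdadm --stop /dev/md100
mdadm --zero-superblock /dev/sda4 /dev/sdc4
mdadm --create /dev/md100 --level=1 --raid-devices=2 --metadata=1.2 /dev/sda4 /dev/sdc4
mkfs.ext4 /dev/md100
# rebuild the initramfs with mdadm.conf included...
dracut -f --mdadmconf /boot/initramfs-$(uname -r).img $(uname -r)
# ...and without it
dracut -f --nomdadmconf /boot/initramfs-$(uname -r).img $(uname -r)

Neither initramfs variant changed the degraded-boot behavior.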
Anyone seen this before? Am I just not looking in the right place? I appreciate any advice. Thanks!
Regards,
-Moses