Hi guys, I need help: our RAID 0 becomes inactive on a RAID 1+0 when one drive is removed.
We use CentOS 6.5 booting from USB, and the RAID 1+0 is mounted as /data, which holds VirtualBox images and files.
We formatted the drives using GParted and created the arrays using mdadm.
This is software RAID.
so the setup is
/dev/sda and /dev/sdb = /dev/md1 (raid 1)
/dev/sdc and /dev/sdd = /dev/md2 (raid 1)
then we set them up as raid 0
/dev/md1 and /dev/md2 = /dev/md0 (raid 0)
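For reference, the arrays were created roughly like this (a sketch from memory; the exact options and metadata version are assumptions and may differ from what we actually ran):

```shell
# Two RAID 1 mirrors (device names per the layout above)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Stripe the two mirrors together as RAID 0
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2

# Record the arrays so they can be assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf
```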
I remember testing this a few years back: when /dev/sda was removed, the RAID 0 stayed active and simply reported that a drive had failed.
Same for sdb, sdc, and sdd.
But when we tested it again recently, we found that the RAID 0 becomes inactive when any one of the four drives is removed.
So this is basically a change in behavior. I tested this with both the old kernel version (the one we first tested the RAID 1+0 on) and the new version via yum update, and the result is the same.
Does anyone know why this happens?
Thanks!
Raid 0 becomes inactive in raid 1+0 when one drive is removed
Re: Raid 0 becomes inactive in raid 1+0 when one drive is removed
Are you absolutely sure that you don't already have a failed drive?
Re: Raid 0 becomes inactive in raid 1+0 when one drive is removed
Hi, that is what we are simulating. We simulate a failed drive by shutting down the server, pulling one drive out, then booting the system.
cat /proc/mdstat with all drives will produce something like
Personalities : [raid1] [raid0]
md125 : active raid0 md127[0] md126[1]
1953262592 blocks super 1.2 512k chunks
md126 : active raid1 sdc[2] sdd[1]
976631360 blocks super 1.2 [2/2] [UU]
md127 : active raid1 sdb[1] sda[2]
976631360 blocks super 1.2 [2/2] [UU]
But when we took one drive out it becomes
Personalities : [raid1] [raid0]
md125 : inactive raid0
1953262592 blocks super 1.2 512k chunks
md126 : active raid1 sdd[1]
976631360 blocks super 1.2 [2/1] [U_]
md127 : active raid1 sdb[1] sda[2]
976631360 blocks super 1.2 [2/2] [UU]
In theory the RAID 0 should still work, since the RAID 1 array only lost one drive and is merely degraded, not failed.
Thanks!
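For anyone hitting the same thing: an array that shows up as inactive after boot can usually be started by hand, degraded mirror and all. A sketch (the md names match the mdstat output above and may differ on your box):

```shell
# Stop the half-assembled stripe, then reassemble it,
# allowing it to start even though a member is degraded
mdadm --stop /dev/md125
mdadm --assemble --run /dev/md125 /dev/md126 /dev/md127

# Or, if mdstat already lists it as inactive, just force it to run
mdadm --run /dev/md125
```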
Re: Raid 0 becomes inactive in raid 1+0 when one drive is removed
We use centos 6.5
Don't. It's 5 years old; you should be on 6.10, not 6.5.
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead; do not use them.
Use the FAQ Luke
Re: Raid 0 becomes inactive in raid 1+0 when one drive is removed
Hi, I also tried this on CentOS 7 and CentOS 6.10; the behavior is still the same.
The CentOS 7 OS gets broken when one drive is pulled out, which is weird because the OS disk is outside of the RAID array.
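One guess about why the OS breaks (an assumption; check your own /etc/fstab): on CentOS 7, systemd drops to emergency mode at boot if a filesystem listed in fstab fails to mount, and a missing array member can trigger exactly that. Adding nofail lets the system finish booting even when the array is absent. A hypothetical entry (UUID and filesystem type are placeholders):

```
# /etc/fstab -- example entry; UUID and fs type are assumptions
UUID=xxxx-xxxx  /data  ext4  defaults,nofail  0  2
```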