CentOS 6.0 x86 RAID 1 Does not load Grub
Posted: 2012/05/13 13:40:57
Hello all,
I've got a situation here that I have been trying to resolve for the past week. I have a test server with dual 80GB and dual 250GB drives, each pair to be set up in its own RAID1. The 250GB drives are set up using a Promise FASTTrak PCI card and are mounted as /home (ext4). The 80GB mirrored drives are currently using the onboard Promise FASTTrak RAID on my Tyan server motherboard. This RAID1 is formatted as a 150MB /boot (ext4), 1GB swap, and / (ext4). Both RAID1 arrays are created in the Promise firmware and are fully recognized, with the drives testing good under the WD manufacturer diagnostics. The 80GB array is also set as the bootable primary one in the Promise firmware.
When I go to install CentOS 6 (x86), everything goes great. I select my 80GB array as the one to be formatted and where the OS will be installed to, leaving the 250GB array as a data drive. Then I custom-select standard partitions for the 80GB array as specified above, write the changes to the disk, and let it install the OS. Once the install is done, I remove the disc and reboot. Except once the BIOS finishes the POST process, the system simply stops after checking whether there is any disc in the CD-ROM drive to boot from. GRUB doesn't load and nothing happens; the BIOS hangs with a black screen.
I figured there might have been something messed up with my 80GB RAID1 array, so I went back, disabled the onboard Promise FASTTrak array, and connected both drives as primary on two different standard motherboard IDE controllers. This time I booted from the DVD once more and the installer saw two separate 80GB drives. During the custom partition setup, I specified software RAID partitions on each drive (/boot - 150MB, swap - 1GB, / - the remainder), created an LVM device, and created the software mirror. Again I wrote the changes to the disk, proceeded with the install, and rebooted the server at the end.
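For what it's worth, before rebooting, the state of the software mirror can be checked from the installer's shell (or later from a rescue shell). A rough sketch — /dev/md0 and the sdX names are assumptions and depend on how the kernel enumerated the disks:

```shell
# Show all assembled md arrays and their sync status
cat /proc/mdstat

# Inspect the /boot mirror in detail. GRUB legacy (what CentOS 6 ships)
# can only boot a RAID1 /boot that uses the old 0.90 metadata format,
# which sits at the END of the partition so each member still looks
# like a plain ext4 filesystem to the bootloader.
mdadm --detail /dev/md0      # /dev/md0 is an assumption; adjust to match

# Confirm which disk(s) actually carry a bootable partition flag
fdisk -l /dev/sda /dev/sdb
```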
Once again, when the server finishes the POST process, it simply stops after checking whether there is any disc to boot from. GRUB doesn't load at all.
I'm simply dumbfounded here as to what is happening. I've googled this issue but haven't found an exact replica of it elsewhere, and I'm confident that it's an issue with GRUB not loading; I'm just not sure why. I even took my Ubuntu 11.10 disc and booted from the live CD portion in both scenarios to make sure the installation actually did work. Sure enough, I saw my 250GB (/home) device, 78GB (/) device, and my 150MB (/boot) device all intact, with Ubuntu easily recognizing them.
Does anyone have any remote idea of what in the world is going on here? Do I need to somehow use a terminal from a live CD to fix GRUB?
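If it comes to that, GRUB can be reinstalled from the CentOS DVD's own rescue mode rather than a live CD. A hedged sketch of what I'd try — CentOS 6 ships GRUB legacy (0.97), and the /dev/sda and /dev/sdb names are assumptions that depend on how the rescue environment enumerates the disks:

```shell
# Boot the CentOS 6 DVD, type "linux rescue" at the boot prompt, and
# let it mount the installed system under /mnt/sysimage, then:
chroot /mnt/sysimage

# Reinstall GRUB legacy to the MBR of BOTH mirror members, so the
# machine can still boot if either drive dies. The "device" command
# remaps (hd0) so setup writes to the second physical disk too.
grub --no-floppy <<'EOF'
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
EOF

exit    # leave the chroot, then reboot
```

After that, it's also worth double-checking in the BIOS that the disk GRUB was written to is first in the hard-drive boot order.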
Thanks
Brad