Server will not boot after new RAID array was installed

Support for the other architectures (X86_64, IA-64, and PowerPC)
hrobinson
Posts: 13
Joined: 2010/02/03 12:55:54

Server will not boot after new RAID array was installed

Post by hrobinson » 2010/02/03 13:40:10

Hello Everyone!

I have been working on this all night and have not found a solution.

I needed to add more space to my VMware server. An LSI MegaRAID SATA 300-8X controller is installed.

Before the upgrade I had the following hardware installed.

4 GB RAM
1 x 250 GB IDE hard drive
1 x IDE CD-ROM drive
4 x 500 GB drives configured as RAID 5

The 250GB drive is configured as follows:
Volume Group: CentOS5
FSRoot (/)
Usr (/usr)
Tmp (/tmp)
Var (/var)
Home (/home)
Swap (Swap) 2GB

/boot ext3 100 MB

The first 4 SATA hard drives on the RAID controller are configured as follows:
RAID 5:
Volume Group: Data
Data (/data) ext3 1.4 TB

This configuration worked just fine.

I did updates and shutdown the system.

Next I added 3 new 1 TB hard drives to the server and attached them to ports 4, 5, and 6. I configured the MegaRAID SATA 300-8X controller for a second RAID 5 array using the 1 TB drives. The total size of this "drive" is 2 TB.

After I put the system back together again, I attempted to reboot and received the following error:
(I will go line by line)
memory for crash kernel (0x0 to 0x0) not within permissible range
PCI: BIOS Bug: MMCFG area at E0000000 is not E820-Reserved
PCI: Not using MMCONFIG.
Red Hat nash version 5.1.19.6 starting
ahci 0000:02:00.0 MV_AHCI Hack: port_map 7 -> 3
sda: asking for cache data failed
sda: assuming drive cache: write through
sda: asking for cache data failed
sda: assuming drive cache: write through
sda: asking for cache data failed
sda: assuming drive cache: write through
sda: asking for cache data failed
sda: assuming drive cache: write through
sdb: asking for cache data failed
sdb: assuming drive cache: write through
sdb: asking for cache data failed
sdb: assuming drive cache: write through
Reading all physical volumes. This may take a while...
Found volume group "Data" using metadata type lvm2
Volume group "CentOS5" not found 130000000 @ 8000-e000
unable to access resume device (/dev/CentOS5/Swap)
mount: could not find filesystem /dev/root
setuproot: moving /dev failed: No such file or directory
setuproot: error mounting /proc: no such file or directory
setuproot: error mounting /sys: No such file or directory
switchroot: mount failed: No such file or directory
Kernel panic - not syncing: Attempted to kill init!

(and it hangs right there).

Before this I could not even get that far: I eventually had to mount the /boot partition and change the root device in grub.conf from (hd1,0) to (hd2,0) to reach this point in the boot sequence.
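For anyone following along, the edit was to the root line of the boot entry in /boot/grub/grub.conf. This is only an illustrative fragment, not copied from my server; the kernel version and the root= path (built from the CentOS5 volume group and FSRoot logical volume above) are assumptions:

```
# /boot/grub/grub.conf -- illustrative fragment only
title CentOS (2.6.18-128.4.1.el5)
        root (hd2,0)        # was (hd1,0) before the new array was added
        kernel /vmlinuz-2.6.18-128.4.1.el5 ro root=/dev/CentOS5/FSRoot
        initrd /initrd-2.6.18-128.4.1.el5.img
```

Note that in GRUB legacy the device numbers count BIOS drives in boot order, so adding an array ahead of the boot disk shifts every (hdN,0) reference.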

I am able to boot in rescue mode and all the partitions mount normally with no errors.

I have successfully built a new initrd image. Initially I tried to run mkinitrd from the rescue CD, which is when I discovered that I had burned the wrong disc: the OS on the server is x86_64, but I was using the 32-bit CentOS rescue CD, so chroot failed with a "bad format" error and I could not create the new initrd image from there.
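For the record, here is roughly what the rebuild looks like from a matching-arch rescue CD. This is a sketch, not my exact session: the helper function name is mine, the kernel version is an example, and /mnt/sysimage is the mount point CentOS rescue mode uses for the installed system.

```shell
# Hypothetical helper: rebuild the initrd for a given kernel version
# from CentOS rescue mode, where the installed system is mounted at
# /mnt/sysimage. Must be run as root from the rescue environment.
rebuild_initrd() {
    kver="$1"
    if [ -d /mnt/sysimage ]; then
        # -f overwrites the existing (broken) image for that kernel
        chroot /mnt/sysimage mkinitrd -f "/boot/initrd-${kver}.img" "$kver"
    else
        echo "rescue chroot /mnt/sysimage not found; boot the rescue CD first" >&2
        return 1
    fi
}
```

From rescue mode you would then run, e.g., `rebuild_initrd 2.6.18-128.4.1.el5` for each kernel that fails to boot.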

So it looks to me like the drives have been inadvertently reordered, and that is why it will not boot.

Can someone suggest how I might be able to fix this?

Meanwhile I will keep looking as I have to have this server up before I go home to bed.

Thanks for your help.

Sincerely,

Harold Robinson

hrobinson
Posts: 13
Joined: 2010/02/03 12:55:54

Re: Server will not boot after new RAID array was installed

Post by hrobinson » 2010/02/03 14:02:13

Update:

I decided to try the other kernels that I had in the system, starting at the bottom.

The last one reported an invalid partition (hd1,0). The second-to-last one finally got the system up and running.

Here is a list of the kernels in my GRUB menu:

CentOS (2.6.18-128.4.1.el5) -> does not boot; fails with the error in my last post. This is the kernel I have been fighting.
CentOS (2.6.18-128.1.10.el5) -> does not boot; fails with the error in my last post.
CentOS (2.6.18-92.1.22.el5) -> This one works and boots normally :-)
CentOS (2.6.18-92.el5) -> After modifying grub.conf to change the root device from (hd1,0) to (hd2,0), this also boots! :-)

Do you think that maybe yum update installed the 32-bit kernel instead of the 64-bit kernel?
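One way I could check this myself (a sketch, assuming an RPM-based system like this CentOS box): compare the running kernel's architecture against the arch of every installed kernel package. An i686 entry next to x86_64 ones would confirm the suspicion.

```shell
# Report the running kernel's architecture (i686 vs x86_64)
uname -m

# List name-version-release.arch for every installed kernel package;
# guarded so the snippet is harmless on systems without rpm
if command -v rpm >/dev/null 2>&1; then
    rpm -q --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' kernel
fi
```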

Sincerely,

Harold Robinson

gerald_clark
Posts: 10642
Joined: 2005/08/05 15:19:54
Location: Northern Illinois, USA

Server will not boot after new RAID array was installed

Post by gerald_clark » 2010/02/03 14:30:41

Check your BIOS drive order.
You should probably be booting off hd0 (the IDE drive), not hd1 or hd2.
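GRUB legacy records that BIOS ordering in /boot/grub/device.map. A sketch of what it might look like with the IDE disk first in the BIOS boot order (the device nodes are assumptions, not taken from the poster's machine):

```
# /boot/grub/device.map -- illustrative; actual entries depend on the BIOS
# With the IDE disk first it is hd0, and the two RAID arrays
# (each presented as one disk by the controller) follow as hd1 and hd2.
(hd0)   /dev/hda
(hd1)   /dev/sda
(hd2)   /dev/sdb
```

If the BIOS order matches this map, the original (hd1,0)/(hd2,0) edits in grub.conf should become unnecessary.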
