Unable to boot to degraded raid 1

naamchan
Posts: 2
Joined: 2017/08/31 02:47:54

Unable to boot to degraded raid 1

Post by naamchan » 2017/08/31 03:03:23

I've followed this tutorial https://terrydactyl10.wordpress.com/201 ... uefi-boot/ to create a RAID 1 array on an EFI boot system. I can boot properly when both of my drives are present, but I can't boot if I unplug one of the drives: it loads for a while and then drops into emergency mode. Any suggestions for solving this problem?

Here is my current config.

blkid

Code: Select all

/dev/sdb1: UUID="0290c885-d660-3321-86b6-f9f80352e535" UUID_SUB="24ac4491-f35c-aaf7-431e-d83c43eaa4ca" LABEL="localhost.localdomain:boot" TYPE="linux_raid_member" PARTUUID="ac3bcb4b-79cb-4a98-b72b-e140975c4740" 
/dev/sdb2: SEC_TYPE="msdos" UUID="41A8-91A2" TYPE="vfat" PARTUUID="221828db-123e-4e8c-9d3c-658ae161b9e0" 
/dev/sdb3: UUID="0a17a048-97d4-d7c7-1894-49657c8159ea" UUID_SUB="718c95b7-e195-d456-eb0c-83fda57943c9" LABEL="localhost.localdomain:pv00" TYPE="linux_raid_member" PARTUUID="c7470639-6ff3-428a-856b-e59ffc2d88cb" 
/dev/sda1: SEC_TYPE="msdos" UUID="41A8-91A2" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="0e16077d-8126-49e5-916e-fbb2e8c62422" 
/dev/sda2: UUID="0290c885-d660-3321-86b6-f9f80352e535" UUID_SUB="1a4e6a04-30cc-72d4-b379-7e546da8edda" LABEL="localhost.localdomain:boot" TYPE="linux_raid_member" PARTUUID="9c9afe54-28b9-4469-bd2b-28a02e262635" 
/dev/sda3: UUID="0a17a048-97d4-d7c7-1894-49657c8159ea" UUID_SUB="03e39d38-ecc9-c3cc-aa9b-3639a4c23591" LABEL="localhost.localdomain:pv00" TYPE="linux_raid_member" PARTUUID="70ae3d1b-5914-4150-9a1a-d7927bcbca14" 
/dev/md127: UUID="a9f420ab-89a7-4318-b6d0-fccd8832a888" TYPE="xfs" 
/dev/md126: UUID="AehQGf-sWyC-Fwjp-WZbf-noBr-PUYh-nFAcCY" TYPE="LVM2_member" 
/dev/mapper/cl-root: UUID="777fbfba-3220-41e5-8fa5-7fa9f48b60e5" TYPE="xfs" 
/dev/mapper/cl-swap: UUID="378b4833-1a55-4a34-98e7-368ee597c22c" TYPE="swap"
fstab

Code: Select all

#
# /etc/fstab
# Created by anaconda on Wed Aug 30 09:34:31 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=a9f420ab-89a7-4318-b6d0-fccd8832a888 /boot                   xfs     defaults        0 0
PARTUUID=0e16077d-8126-49e5-916e-fbb2e8c62422          /boot/efi               vfat    umask=0077,shortname=winnt 0 0
PARTUUID=221828db-123e-4e8c-9d3c-658ae161b9e0          /boot/efi2              vfat    umask=0077,shortname=winnt 0 0
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
efibootmgr

Code: Select all

BootCurrent: 0001
Timeout: 1 seconds
BootOrder: 0001,0002
Boot0001* CentOS
Boot0002* CentOS
Thanks in advance

Boyd.ako
Posts: 46
Joined: 2016/06/22 08:49:07
Location: Honolulu, HI

Re: Unable to boot to degraded raid 1

Post by Boyd.ako » 2017/09/01 04:03:00

ick... Well, my first question is: is there any irreplaceable data on the drives? Because your setup is not what one would consider ideal.

1) Normally, you never want to put the /boot partition on a RAID. I've gone as far as installing Linux on a very large thumbdrive and dedicating the hard drives entirely to the RAID. The RAIDed drives you normally want to dedicate to irreplaceable data like /home, network shares, or swap partitions.

2) RAID 1 is a mirror, and unplugging one of the drives breaks the mirror. The reason you can still boot a degraded mirror is so that you can mark a "new" drive to replace the broken one and let it sync back up. Normally, in your case with just two drives, you would boot directly into single-user mode to do that.

3) Intentionally breaking the mirror by unplugging a drive doesn't work the way you think it does. You need to use mdadm to break the mirror first and then unplug the drive; a rough sketch follows below. Getting mdadm to break a mirror non-destructively on purpose is quite involved, and I'd actually avoid it.
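
For reference, the basic fail/remove/re-add sequence looks roughly like this. It's an untested sketch; the device names (/dev/md127 and /dev/sdb1, taken from your blkid output) are assumptions, so substitute your own.

Code: Select all

# Mark one half of the mirror as failed, then pull it out of the array
mdadm /dev/md127 --fail /dev/sdb1
mdadm /dev/md127 --remove /dev/sdb1
# ...swap the drive, partition it to match the survivor, then re-add...
mdadm /dev/md127 --add /dev/sdb1
# Watch the resync progress
cat /proc/mdstat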

In short, I wouldn't put the OS-required partitions on a RAID unless it was at least a RAID 5. Even then, I'd script up an inventory of the installed RPMs and back up the config files that I change.
My noob level: LPIC-2, Sec+ CE, Linux+
https://boydhanaleiako.me

TrevorH
Site Admin
Posts: 33219
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Unable to boot to degraded raid 1

Post by TrevorH » 2017/09/01 08:47:01

Putting /boot on RAID 1 is a common practice and can help to protect against single drive failure. Testing that by removing a drive is a good thing to do (TM). More likely the problem is the EFI part of this, as that requires a vfat /boot/efi partition, and that does not get mirrored by mdadm - you have to do strange tricks manually to make that part work. There are older forum threads where this has been done but I'm afraid I don't know where they are. Searching via google using "site:www.centos.org/forums uefi boot" or similar might find them.
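
Roughly, the manual trick amounts to keeping the second ESP as a byte-for-byte copy of the first and registering a firmware boot entry for it. An untested sketch using the partitions from your blkid output (sda1 is the live ESP, sdb2 the spare); the \EFI\centos\shimx64.efi loader path is an assumption, so check what your working boot entry actually points at:

Code: Select all

# Clone the live ESP onto the spare one (vfat, so mdadm can't mirror it)
dd if=/dev/sda1 of=/dev/sdb2 bs=1M
# Register a firmware boot entry for the copy on the second disk
efibootmgr -c -d /dev/sdb -p 2 -L "CentOS (backup)" -l '\EFI\centos\shimx64.efi'

The clone has to be redone whenever anything under /boot/efi changes, e.g. after a shim or grub2 update.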
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

Boyd.ako
Posts: 46
Joined: 2016/06/22 08:49:07
Location: Honolulu, HI

Re: Unable to boot to degraded raid 1

Post by Boyd.ako » 2017/09/01 10:56:04

TrevorH wrote:Putting /boot on RAID 1 is a common practice and can help to protect against single drive failure.

Another common practice is to install Microsoft Windows. Need I say more?

And from my experience and observation, those really concerned about single-drive failure would make the extra purchase to RAID 5 it. Yes, RAID 1 is legendary. But it's also becoming less used as new tech and software come out. Let's keep up with the times, shall we?
My noob level: LPIC-2, Sec+ CE, Linux+
https://boydhanaleiako.me

TrevorH
Site Admin
Posts: 33219
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: Unable to boot to degraded raid 1

Post by TrevorH » 2017/09/01 11:18:25

You can only use RAID 1 with /boot due to grub restrictions. That does not apply to hardware RAID, of course, as that's transparent - this applies only to mdadm software RAID.
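
If you want to see what grub has to cope with, mdadm will show the RAID level and metadata version of the array backing /boot - for example, using the md device from the blkid output earlier in the thread:

Code: Select all

# Show level, metadata version and member state for the /boot array
mdadm --detail /dev/md127
# One-line summary of all arrays
cat /proc/mdstat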
The future appears to be RHEL or Debian. I think I'm going Debian.
Info for USB installs on http://wiki.centos.org/HowTos/InstallFromUSBkey
CentOS 5 and 6 are dead, do not use them.
Use the FAQ Luke

naamchan
Posts: 2
Joined: 2017/08/31 02:47:54

Re: Unable to boot to degraded raid 1

Post by naamchan » 2017/09/02 08:57:09

Thank you for your answers. Anyway, Windows is not the solution, as I want to install NextCloud, which is Linux-only. I will try to find the answer, or maybe switch from NextCloud to different software. Thanks again!

Boyd.ako
Posts: 46
Joined: 2016/06/22 08:49:07
Location: Honolulu, HI

Re: Unable to boot to degraded raid 1

Post by Boyd.ako » 2017/09/03 08:40:05

naamchan wrote:Thank you for your answers. Anyway, Windows is not the solution, as I want to install NextCloud, which is Linux-only. I will try to find the answer, or maybe switch from NextCloud to different software. Thanks again!
Forgive me. The Windows statement was playful sarcasm.

I run ownCloud on FreeNAS. In ownCloud you state where the data directory is going to be - say, a ZFS raidz2 vdev. (ZFS has a mirror configuration as well.) Yes, if the OS or the jail plugin goes to fudge, it won't work. But it's not hard to restore via ZFS snapshots or by straight up reinstalling from scratch. The point being: the important, irreplaceable data is on a RAID. Do you really need to RAID data like the OS and software and take up valuable RAIDed disk space?

The FreeNAS OS is on a thumbdrive, as they recommend for multiple reasons.

If you wanted to go to extremes, you would boot from a PXE image that mounts network shares for the data and the other directories that contain the installed software. This is kind of where cloud-based architecture using containers comes in, with Docker and whatnot. Install the Nextcloud Docker container and point the data directory at a RAIDed network share, preferably NFS or iSCSI; a sketch is below. I don't suggest Samba/SMB/CIFS, simply because I've been categorized as a GNU Linux Nazi.
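
A minimal sketch of that idea, assuming the official nextcloud image and a host called nas exporting /export/nextcloud over NFS (those names are mine, not anything from this thread):

Code: Select all

# Mount the RAIDed NFS export on the Docker host
mount -t nfs nas:/export/nextcloud /mnt/ncdata
# Run Nextcloud with its data directory on the share
docker run -d --name nextcloud -p 8080:80 \
    -v /mnt/ncdata:/var/www/html/data nextcloud

If the host dies, you rebuild it and re-run those two commands; the irreplaceable data never lived on the host's own disks.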
My noob level: LPIC-2, Sec+ CE, Linux+
https://boydhanaleiako.me

PhoenixM
Posts: 3
Joined: 2019/01/18 22:36:11

Re: Unable to boot to degraded raid 1

Post by PhoenixM » 2019/01/23 02:12:06

I know that commenting on old posts in forums is generally frowned upon, but I am wondering if the OP ever found a solution to this problem, as I am having literally the exact same problem right now. I didn't use the tutorial the OP pointed to (I hadn't even seen it until now), but I had used similar logic and similar steps when crafting my own RAID 1 setup, with about the only difference being that I didn't employ LVM. I have no idea why I keep getting dropped into emergency mode; the efibootmgr-created entry pointing to the EFI partition backup on sdb works the way it should, and I get past grub, but then things die at mounting the root RAID array if both drives are not present.
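
For what it's worth, from the emergency shell the array state can be inspected with something like the following (the md device name is taken from the OP's blkid output and is a guess for anyone else's setup, so substitute your own):

Code: Select all

# What state did the arrays come up in?
cat /proc/mdstat
mdadm --detail /dev/md126
# Try to force-start an array left inactive because it's degraded
mdadm --run /dev/md126
# Check the journal from the failed boot for md/raid messages
journalctl -xb | grep -i raid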
