Installation on LVM RAID (not mdraid+lvm)

General support questions
Cent0Snewbie
Posts: 5
Joined: 2015/02/28 21:50:29

Installation on LVM RAID (not mdraid+lvm)

Post by Cent0Snewbie » 2015/02/28 23:36:59

I wish to use LVM RAID capabilities instead of mdraid+lvm. I have chosen LVM RAID because it allows different RAID modes within the same volume group, and because expanding volume groups and logical volumes is significantly easier than with mdraid+lvm. I am prepared to sacrifice some performance, and to have a bit of a fight with the installation, in order to gain that flexibility.

I understand that the CentOS7 installer does not support LVM RAID creation; however, I have read that it is possible to pre-configure the disks with the desired LVM RAID (or other) layout prior to installation, and to select that layout during installation.

I am performing tests on a virtual machine with three SATA virtual hard disks, prior to attempting an installation on a full system, as follows.

I would like to use the following layout for each disk:

/dev/sd[abc]: GPT, pmbr_boot
/dev/sd[abc]1: Offset 1MiB, Length 16MiB, bios_grub
/dev/sd[abc]2: Length 4096MiB, LVM, LVM RAID0(x3), SWAP (mkswap /dev/vg_swap/lv_swap)
/dev/sd[abc]3: Length 512MiB, Primary Partition, /boot (unallocated prior to installation)
/dev/sd[abc]4: Length 16GiB, LVM, LVM RAID5(8GBx2+8GB Parity), / (root)

On the full system, sd[abc]4 would be 64GB (32GBx2+32GB parity), and the remaining disk space would be split into a number of additional partitions/volume groups/logical volumes, for use as additional OS system volumes, application volumes, data volumes, and other space as required.
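The pre-installation partitioning described above could be scripted, for example with sgdisk (my assumption; any GPT partitioner works). The sketch below uses the VM test sizes and type codes I believe fit the layout; the DRYRUN variable makes it print the commands instead of running them.

```shell
# Sketch only: partition each disk per the layout above (VM test sizes).
# Assumes sgdisk from the gdisk package. DRYRUN=echo just prints the
# commands; remove it to actually write the partition tables.
DRYRUN=echo
for d in /dev/sda /dev/sdb /dev/sdc; do
  $DRYRUN sgdisk --zap-all "$d"
  $DRYRUN sgdisk -n 1:0:+16M  -t 1:EF02 -c 1:"BIOS boot" "$d"   # bios_grub
  $DRYRUN sgdisk -n 2:0:+4G   -t 2:8E00 -c 2:"swap PV"   "$d"   # LVM RAID0 swap
  $DRYRUN sgdisk -n 3:0:+512M -t 3:8300 -c 3:"boot"      "$d"   # /boot, left unformatted
  $DRYRUN sgdisk -n 4:0:+16G  -t 4:8E00 -c 4:"system PV" "$d"   # LVM RAID5 /
done
```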

I have determined from initial tests that the installer will not select any pre-existing partition or logical volume, though it will pick up pre-configured swap space and free space in existing volume groups, in addition to any unallocated space.

It therefore appears not to be possible to pre-build a disk with logical volumes prior to installation, contrary to what I have read.

I wish to ascertain whether I have missed any steps in preparing the system prior to installation, or whether I have to perform the installation in some different manner; for example, is it possible to run the installer as a program while running from a LiveDVD?

I understand that one approach may be to install to a single disk and then convert to LVM or LVM RAID. I have not explored this option, as I am concerned that it may not be possible to convert a standard partition containing a filesystem into a volume group and logical volume using LVM RAID. Further, I am concerned that even if such a conversion is possible, the resulting volume group may not be efficiently arranged, as the bulk of the data would be written to only the first drive.

I have been struggling to figure this out for weeks, so any pointers or links to some hidden documentation would be appreciated.

cmurf
Posts: 64
Joined: 2015/02/12 01:31:31

Re: Installation on LVM RAID (not mdraid+lvm)

Post by cmurf » 2015/03/01 20:30:58

man lvconvert shows that type is limited to cache, cache-pool, raid1, snapshot, thin, or thin-pool, but I can't tell if this is a complete list. So I'm not sure whether conversion from single to raid5/6 is possible, as it is with mdadm.

python-blivet might need specific awareness of LVM RAID in order to recognize and support it as a target, even if it's pre-created. I know that linear and thin LVs can be pre-created, and the GUI installer will recognize them and let them be used.

Performance of LVM RAID should be similar to mdadm-managed RAID, because the backend RAID code is still the md kernel code. It's just the user-space tools (and metadata) that are different.

I'll try doing this in a VM and see whether either the CentOS 7 or the Fedora 22 Server (alpha TC7) installer recognizes LVs of type raid5.

cmurf
Posts: 64
Joined: 2015/02/12 01:31:31

Re: Installation on LVM RAID (not mdraid+lvm)

Post by cmurf » 2015/03/01 22:55:11

OK so this is possible.

## GPT partitioning on three disks, sd[abc]

Code: Select all

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            4095   1024.0 KiB  EF02  BIOS boot partition
   2            4096         1028095   500.0 MiB   FD00  Linux RAID
   3         1028096       511999966   243.7 GiB   8E00  Linux LVM

And then precreate the RAID Logical Volumes:

Code: Select all

# pvcreate /dev/sd[abc]3
# vgcreate VG /dev/sd[abc]3
# lvcreate --type raid5 --nosync -i 2 -I 64 -L 4G -n swap VG
# lvcreate --type raid5 --nosync -i 2 -I 64 -L 25G -n root VG
# lvcreate --type raid5 --nosync -i 2 -I 64 -L 50G -n home VG
## --nosync: be careful. In my case the virtual disks were new, so everything was guaranteed to be zeros. If you want to use --nosync on real drives, consider an ATA Secure Erase with hdparm first, or just let them sync.

Results:

Code: Select all

# pvs
  PV         VG   Fmt  Attr PSize   PFree  
  /dev/sda3  VG   lvm2 a--  243.65g 204.14g
  /dev/sdb3  VG   lvm2 a--  243.65g 204.14g
  /dev/sdc3  VG   lvm2 a--  243.65g 204.14g
# vgs
  VG   #PV #LV #SN Attr   VSize   VFree  
  VG     3   3   0 wz--n- 730.95g 612.41g
# lvs
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home VG   Rwi-a-r--- 50.00g                                    100.00          
  root VG   Rwi-a-r--- 25.00g                                    100.00          
  swap VG   Rwi-a-r---  4.00g                                    100.00


Boot up the CentOS installer...

The installer sees the raid5 LVs. I can click on them, assign them mount points, and check reformat to format them as XFS.

I can't directly use the three pre-created 500MB partitions to create a raid1 /boot, because I can't select all three at the same time. Instead I have to delete them individually, freeing up 500MB on each drive, then create a new /boot mount point of 500MB, and then, on the right side of the UI, change the Device Type to RAID with a RAID level of 1.

Regarding the pre-created BIOSBoot partitions, the installer behavior is totally catatonic. It requires one of them to be added to the New CentOS Installation section by clicking it and checking reformat, leaving the format as BIOSBoot. Fine. I was able to do the same with the other two, but after making an unrelated change they vanished completely from both the New Installation and Unknown portions of the UI. It is super annoying that users even have to be involved in managing these required partitions. See the "notabug" bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1022316
Anyway, this partition really doesn't need to be bigger than 1MB, because the core.img that grub2-install stuffs into it is only 31KB for the above layout. For simpler layouts it's around 25KB; for /boot on Btrfs it's 38KB.

Click Done.

Begin Installation.

And it boots!

Code: Select all

# ssm list
-------------------------------------------------------------
Device          Free       Used      Total  Pool  Mount point
-------------------------------------------------------------
/dev/sda                         244.14 GB        PARTITIONED
/dev/sda1                          1.00 MB                   
/dev/sda2    0.00 KB  499.94 MB  500.00 MB  md               
/dev/sda3  204.14 GB   39.51 GB  243.65 GB  VG               
/dev/sdb                         244.14 GB                   
/dev/sdb1                          1.00 MB                   
/dev/sdb2    0.00 KB  499.94 MB  500.00 MB  md               
/dev/sdb3  204.14 GB   39.51 GB  243.65 GB  VG               
/dev/sdc                         244.14 GB                   
/dev/sdc1                          1.00 MB                   
/dev/sdc2    0.00 KB  499.94 MB  500.00 MB  md               
/dev/sdc3  204.14 GB   39.51 GB  243.65 GB  VG               
-------------------------------------------------------------
----------------------------------------------------
Pool  Type  Devices       Free       Used      Total  
----------------------------------------------------
VG    lvm   3        612.41 GB  118.54 GB  730.95 GB  
----------------------------------------------------
------------------------------------------------------------------------------
Volume        Pool  Volume size  FS     FS size       Free  Type   Mount point
------------------------------------------------------------------------------
/dev/VG/swap  VG        4.00 GB                             raid5             
/dev/VG/home  VG       50.00 GB  xfs   49.97 GB   49.97 GB  raid5  /home      
/dev/VG/root  VG       25.00 GB  xfs   24.99 GB   24.24 GB  raid5  /          
/dev/md127    md      499.94 MB  xfs  496.61 MB  436.28 MB  raid1  /boot      
/dev/sda2     md      500.00 MB  xfs  496.61 MB  436.28 MB  part              
/dev/sdb2     md      500.00 MB  xfs  496.61 MB  436.28 MB                    
/dev/sdc2     md      500.00 MB  xfs  496.61 MB  436.28 MB                    
------------------------------------------------------------------------------
## Hint: the ssm command above is available after doing 'yum install system-storage-manager'. Very neat tool.
Last edited by cmurf on 2015/03/01 23:09:47, edited 3 times in total.

cmurf
Posts: 64
Joined: 2015/02/12 01:31:31

Re: Installation on LVM RAID (not mdraid+lvm)

Post by cmurf » 2015/03/01 22:59:07

One more thing. I recommend post-install and reboot doing:

Code: Select all

grub2-install /dev/sda
grub2-install /dev/sdb
grub2-install /dev/sdc
to make sure the core.img is written to the BIOSBoot partition on each drive. Unfortunately, grub2-install isn't smart enough to accept all the devices on one line.
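The three invocations can be wrapped in a loop; this is just a convenience, with the device names mirroring the layout in this thread, and a DRYRUN variable to preview the commands first:

```shell
# Run grub2-install once per member disk. DRYRUN=echo previews the
# commands; remove it to actually (re)write core.img to each BIOS boot
# partition.
DRYRUN=echo
for d in /dev/sda /dev/sdb /dev/sdc; do
  $DRYRUN grub2-install "$d"
done
```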

*sigh* And, off topic: change /etc/fstab to set the last two columns to 0 for any XFS filesystem. There is no such thing as an unattended fsck for XFS at boot time. For some reason I see some systemd-fsck confusion where it tries to run fsck.ext2 on the root LV. Weird.
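A root entry with the last two fields zeroed might look like this (the UUID is a placeholder; use the one blkid reports for your root LV):

```
# /etc/fstab -- fields 5 and 6 set to 0 disable dump and boot-time fsck
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  xfs  defaults  0 0
```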

Cent0Snewbie
Posts: 5
Joined: 2015/02/28 21:50:29

Re: Installation on LVM RAID (not mdraid+lvm)

Post by Cent0Snewbie » 2015/03/02 08:37:57

Thank you all, I shall look at these and give it another go.

Cent0Snewbie
Posts: 5
Joined: 2015/02/28 21:50:29

Re: Installation on LVM RAID (not mdraid+lvm)

Post by Cent0Snewbie » 2015/03/09 18:50:59

cmurf,

I have had some time to run some further tests, and have some additional observations.

I have noted the unusual behavior of the installer, in particular the "catatonic" behaviour when starting to allocate partitions.

BIOS_BOOT Partition
Selecting the [sda] BIOS_BOOT partition and ticking reformat results in two entries for the one partition under Data. BIOS_BOOT on the other disks can be selected, with no duplicates appearing.
This partition is added automatically when /boot or / is added first.

/boot Partition
[sda] added as a basic partition with no issue. I will not be using RAID for this partition; I would prefer to manually sync these partitions on each disk.
In situations where RAID fails or is not cleanly unmounted, the system will refuse to boot, and I am not skilled enough to deal with those issues.

vg_swap/lv_swap
Must be pre-formatted using the command "mkswap /dev/vg_swap/lv_swap". It is auto-allocated as soon as any other partition/volume/mount point is defined.
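For reference, the whole swap stack from the first post's layout could be pre-built like this (a sketch: the vg_swap/lv_swap names are the ones used in this thread, and I am assuming a plain 3-way striped LV stands in for RAID0 on the CentOS 7 lvm2; DRYRUN=echo previews the commands):

```shell
# Build the striped swap LV across the three sd[abc]2 partitions and
# format it so the installer can pick it up. DRYRUN=echo previews;
# remove it to run for real.
DRYRUN=echo
$DRYRUN pvcreate /dev/sda2 /dev/sdb2 /dev/sdc2
$DRYRUN vgcreate vg_swap /dev/sda2 /dev/sdb2 /dev/sdc2
$DRYRUN lvcreate -i 3 -I 64 -l 100%FREE -n lv_swap vg_swap   # 3-way stripe
$DRYRUN mkswap /dev/vg_swap/lv_swap
```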

vg_system1/lv_root1
I am unable to select this logical volume unless it has been pre-formatted using mkfs; if it is not pre-formatted, the installer crashes.
Might pre-formatting be necessary for all logical volumes when selecting pre-defined logical volumes in the installer?
As I do not have access to external support services, I do not know how to capture the output.
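If pre-formatting really is required, a minimal sketch (assuming XFS is the intended filesystem, and using this thread's LV name; extend the list with your other LVs, using mkswap for the swap LV instead):

```shell
# Pre-format LVs before running the installer. DRYRUN=echo previews the
# commands; remove it to actually create the filesystems.
DRYRUN=echo
for lv in /dev/vg_system1/lv_root1; do
  $DRYRUN mkfs.xfs -f "$lv"
done
```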

I have been able to install CentOS7 in a VM, using the disk structure previously described.

I am still having issues on actual hardware, possibly due to the use of larger disks and the pre-configuration of a number of additional non-system partitions/volume groups/logical volumes.
The installer crashes when selecting any logical volume.

I am going to pre-format all the available logical volumes and attempt the installation one last time.

I am continuing to try to find a solution; however, this is quite frustrating.

I would ask: if I am unable to resolve this in the short term, I am considering installing into a temporary standard partition. Is it possible (and if so, how?) to copy the contents of the installed root partition into a logical volume? More specifically, I may create a temporary partition, e.g. /dev/sda10, install into it, then copy its contents into /dev/vg_system1/lv_root1, and subsequently remove the temporary partition and run the server from the logical volume.

Assuming I get this system running, I will follow your recommendations in your last message.

Thanks again for your help

cmurf
Posts: 64
Joined: 2015/02/12 01:31:31

Re: Installation on LVM RAID (not mdraid+lvm)

Post by cmurf » 2015/03/12 06:30:45

You could leave enough room in the VG to create a linear LV for root, copy its contents to the raid5 root LV, and then blow away the linear LV later. To do such a copy you should use:

Code: Select all

rsync -pogAXtlHrDx
I got that from an anaconda program.log for a live install.
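Those flags need a source and a destination; a full invocation might look like this (the mount points are illustrative, not from the thread, and the trailing slashes matter so that contents rather than the directory itself are copied):

```shell
# Copy the contents of the old root into the mounted raid5 root LV.
# /mnt/oldroot and /mnt/newroot are example mount points. DRYRUN=echo
# previews the command; remove it to run for real.
DRYRUN=echo
$DRYRUN rsync -pogAXtlHrDx /mnt/oldroot/ /mnt/newroot/
```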

-Don't forget that fstab needs updating, because the fs UUID will have changed, and so will the VG/LV path.
-The grub.cfg will need to be updated so it knows the new fs UUID and VG/LV path; that's grub2-mkconfig -o /boot/grub2/grub.cfg (assumes BIOS firmware; it's /boot/efi/EFI/redhat/grub.cfg for RHEL and I think also CentOS, check your /boot/efi/EFI folder).
-And finally, there's a decent chance the initramfs should be rebuilt too, since it tends to capture UUIDs and such; use dracut for that.

Steps 2 and 3 above are tricky because they depend on the system already being booted with the new root. So wherever you have your newly copied root mounted, say /mnt/sysimage, you'll want to assemble it with bind mounts like:

Code: Select all

mount -B /boot /mnt/sysimage/boot
mount -B /dev /mnt/sysimage/dev
mount -B /proc /mnt/sysimage/proc
mount -B /sys /mnt/sysimage/sys
chroot /mnt/sysimage
grub2-mkconfig -o blahblah
dracut -f
exit
reboot
In each case, I had made raid5 LVs for swap, home, and root; they were not preformatted, and the installer didn't complain. That was in a VM, though that shouldn't matter.

Definitely make sure each drive's SCT ERC is set to 7 seconds (70 deciseconds). You can check per drive with:

Code: Select all

smartctl -l scterc <dev>
If you don't, any bad sectors that cause read errors will not get corrected, and that will cause problems later.
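smartctl can also set the value; a sketch for the three drives in this thread (the values are in deciseconds, so 70 = 7.0 seconds; not all consumer drives support SCT ERC, and smartctl will say so if yours doesn't):

```shell
# Set SCT ERC to 7.0 s for reads and writes on each member drive.
# DRYRUN=echo previews the commands; remove it to apply the setting.
DRYRUN=echo
for d in /dev/sda /dev/sdb /dev/sdc; do
  $DRYRUN smartctl -l scterc,70,70 "$d"
done
```

Note the setting is not persistent across power cycles on many drives, so it may need to be reapplied at boot.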

Cent0Snewbie
Posts: 5
Joined: 2015/02/28 21:50:29

Re: Installation on LVM RAID (not mdraid+lvm)

Post by Cent0Snewbie » 2015/03/12 13:40:15

cmurf,

Many thanks, this is fantastic information.

I'm going to explore this over the next few days and see if I can make some progress.

I should say again that, with your help, I did not have problems installing CentOS7 into the VM; however, I did experience ongoing issues when trying this on real hardware.

Typically, the installer failed with the same or a similar error suggesting a sizing problem. I cannot say what specifically, as I have no idea how to capture the trace info. In short:

(typed):
Anaconda 19.31.79-1 exception report:
File "usr/lib/python2.7/site-packages/blivet/size.py", line 132, in _parseSpec
raise SizeNotPositiveError("spec= param must be >=0")

Note: I am attempting to configure the disks first, before defining the packages to be installed, since I don't wish to waste time until I get past this hurdle. I don't know whether this error is due to a problem with the disks and sizing, or to the packages not being selected, though I definitely did not have this problem in the VM when following the revised install. This is the error that pops up every time it does fail.

I deleted a 370GB partition at the end of one of the disks and attempted to create a 16GB root FS, and it still crashed. I am going to pre-allocate the space using a Live DVD, which should hopefully get around the crashing problem!

I'll update once I have had a chance to review and test your last message (in a VM first!).

Thanks again!

cmurf
Posts: 64
Joined: 2015/02/12 01:31:31

Re: Installation on LVM RAID (not mdraid+lvm)

Post by cmurf » 2015/03/12 15:45:00

Cent0Snewbie wrote: Anaconda 19.31.79-1 exception report:
File "usr/lib/python2.7/site-packages/blivet/size.py", line 132, in _parseSpec
raise SizeNotPositiveError("spec= param must be >=0")
Yeah, I ran into some of these in Fedora testing, circa Fedora 19/20. I googled:

Code: Select all

"param must be" size.py site:bugzilla.redhat.com
There are maybe a dozen, but yours, with line 132, isn't listed, so it's probably an edge case. Most were fixed in what became Fedora 21, which post-dates the CentOS 7 installer and its backported fixes.
I just checked:
http://buildlogs.centos.org/rolling/7/isos/x86_64/
and CentOS-7-x86_64-DVD-20150228_01.iso has the same anaconda, so that won't fix it.

If you get to a shell (Ctrl-Alt-F2, or Ctrl-Alt-Fn-F2 on some keyboards), there will be a file in /tmp named anaconda-tb-xxxxxxx, with some random text in the name. I suggest opening a CentOS bug and attaching that file; it contains pretty much everything the devs need to sort it out.
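Since capturing that file was the sticking point earlier in the thread, one way to get it off the machine is via a USB stick (the /dev/sdd1 device name below is an example; check dmesg for yours):

```shell
# From the installer's shell (Ctrl-Alt-F2): copy the traceback file to a
# USB stick so it can be attached to a bug report. DRYRUN=echo previews
# the commands; remove it to run for real.
DRYRUN=echo
$DRYRUN mkdir -p /mnt/usbkey
$DRYRUN mount /dev/sdd1 /mnt/usbkey
$DRYRUN cp /tmp/anaconda-tb-* /mnt/usbkey/
$DRYRUN umount /mnt/usbkey
```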

gerald_clark
Posts: 10642
Joined: 2005/08/05 15:19:54
Location: Northern Illinois, USA

Re: Installation on LVM RAID (not mdraid+lvm)

Post by gerald_clark » 2015/03/12 16:08:23

You need to open the bug report with Red Hat, not CentOS.
CentOS does not fix bugs.

Post Reply