Submitted by xtremeservices on Sat, 06/25/2011 - 16:10
I am using Cloudmin to provision Xen VPS instances using the LVM option. I am finding that after a reboot of the CentOS dom0, the Xen systems can't boot because the LVs are not active.
I am having to manually activate the VG after each reboot and then manually start the Xen instances.
This is a fresh CentOS 5.6 dom0 using the GPL Cloudmin install.sh script, with Webmin used to create the VG.
The underlying device that the VG and LV are on is an iSCSI device. The iscsi init script's start priority is 13, compared to 26 for LVM monitoring, so the device is always there by the time LVM initializes.
Any assistance with this would be greatly appreciated.
Submitted by xtremeservices on Sat, 06/25/2011 - 17:10 Comment #1
I resolved this issue for now by adding the following to the very top of /etc/init.d/xend:
/sbin/vgchange -a y
This seems like a lame solution so I hope someone has a better answer as to why LVs are not active after reboot. I assume it is because they do not have mount points and are not in /etc/fstab but doing that would defeat the purpose of running Xen instances on LVs.
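For anyone else hitting this, a slightly less invasive variant of the same workaround is a tiny dedicated init script instead of editing /etc/init.d/xend (which a package update could overwrite). The script name and start priority below are illustrative only; pick a priority later than iscsi's and earlier than xend's on your system (check with `chkconfig --list iscsi xend`):

```shell
#!/bin/sh
# /etc/init.d/activate-vgs -- activate LVM volume groups before xend starts.
# chkconfig: 2345 25 75
# description: Activates VGs on late-appearing devices (e.g. iSCSI LUNs).
# NOTE: name and priority (25) are examples, not from the original post.

case "$1" in
  start)
    # Activate every volume group found on the system
    /sbin/vgchange -a y
    ;;
  stop|restart|status)
    # Nothing to do; activation only matters at boot
    ;;
esac
exit 0
```

Install it with `chkconfig --add activate-vgs` so it runs in the right order at boot.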
Submitted by JamieCameron on Sat, 06/25/2011 - 22:37 Comment #2
It's unusual that you would need to add that line, as on most systems one of the standard init scripts will run vgchange to detect all the volume groups and logical volumes.
For example, on one of our CentOS 5 systems /etc/init.d/netfs runs it (assuming that /sbin/lvm.static exists).
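The relevant logic in that script is roughly of this shape (paraphrased, not an exact quote of the CentOS 5 netfs source):

```shell
# Paraphrased sketch of the LVM-activation step in /etc/init.d/netfs:
# if the statically linked LVM binary exists, scan for and activate any
# volume groups that only became visible after the network came up
# (e.g. VGs on iSCSI LUNs).
if [ -x /sbin/lvm.static ]; then
    if /sbin/lvm.static vgscan > /dev/null 2>&1 ; then
        /sbin/lvm.static vgchange -a y --ignorelockingfailure
    fi
fi
```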
Submitted by xtremeservices on Mon, 06/27/2011 - 10:43 Comment #3
I agree, very unusual. I even discovered that lvm.static did in fact exist. Anyway, I have moved away from the idea of iSCSI since it most definitely does not allow concurrent LUN mounting. NFS is a godsend on bonded gigabit. NFS, I have found, is a lot easier to set up than CLVM, and I have come to the conclusion that backing up individual files vs. LVs will be easier as well.
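For reference, the NFS side of a setup like this can be as simple as one export on the storage box and a mount on each dom0. The paths, hostname, and subnet below are examples, not from my actual setup:

```shell
# On the NFS server: add an export (example path and subnet)
# to /etc/exports, then reload the export table.
#
#   /srv/xen-images  192.168.1.0/24(rw,no_root_squash,sync)
#
exportfs -ra

# On each Xen dom0: mount the share where the disk images live
# (hostname and mount point are placeholders).
mount -t nfs storage1:/srv/xen-images /var/xen

# Add the equivalent line to /etc/fstab to make it persistent:
#   storage1:/srv/xen-images  /var/xen  nfs  defaults  0 0
```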
I would like to suggest that Cloudmin have an NFS setup area (Shared Storage Setup) with optional failover replication using DRBD or similar block replication. I have manually set up two systems as my HA shared storage using Heartbeat/Pacemaker and DRBD serving NFS shares. Now the Xen live migrations are seamless - a beautiful thing. I think this setup as an option through the Cloudmin master would be a nice extra out of the box. Maybe that is just a pipe dream though.
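For anyone wanting to reproduce that HA pair, the DRBD piece boils down to a single resource definition replicating the block device that backs the NFS export. Hostnames, disks, and addresses below are placeholders for illustration only:

```
# /etc/drbd.d/xenstore.res -- example DRBD resource backing the NFS share.
# All names, devices, and IPs here are placeholders.
resource xenstore {
  protocol C;              # synchronous replication, needed for failover
  device    /dev/drbd0;    # replicated device the NFS export sits on
  disk      /dev/sdb1;     # local backing partition on each node
  meta-disk internal;
  on storage1 {
    address 10.0.0.1:7788;
  }
  on storage2 {
    address 10.0.0.2:7788;
  }
}
```

Pacemaker/Heartbeat then manages which node has the DRBD resource primary, mounts it, and runs the NFS server plus a floating IP that the dom0s mount from.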
Submitted by JamieCameron on Mon, 06/27/2011 - 11:03 Comment #4
Yes, we plan to add better support for shared storage setup in the next release or two.
I'm glad NFS worked out for you - it is often considered inferior due to the additional filesystem overhead, but it is much simpler to set up.