Is it recommended to use xvda in the Xen .cfg file, or in the /etc/fstab file on the VM?
Cloudmin currently uses xvda in /etc/fstab if the kernel supports it.
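For reference, a guest /etc/fstab using xvda naming might look like the sketch below (partition layout and filesystem types are illustrative, not taken from a real Cloudmin image):

```
# /etc/fstab inside the guest (illustrative)
/dev/xvda1  /     ext3  errors=remount-ro  0  1
/dev/xvda2  none  swap  sw                 0  0
```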
''
Here is the relevant block of /etc/xen-tools/xen-tools.conf:
# If you're using the lenny or later version of the Xen guest kernel you will
# need to make sure that you use 'hvc0' for the guest serial device,
# and 'xvdX' instead of 'sdX' for disk devices.
#
# You may specify the things to use here:
#
# serial_device = hvc0 #default
# serial_device = tty1
#
# disk_device = xvda #default
# disk_device = sda
#
I use Debian 6 as the host system and am trying to create Debian 5 and 6 based VMs using PyGrub. All VMs are created with /dev/sda partitions.
Cloudmin 5.8 and later should solve this issue, as VM images that support xvda devices use /dev/xvda1 in their /etc/fstab file. This will be preserved when the VM is created.
''
Sounds good, but in practice I always get sda-based partitions by default, even if the VM is configured to use xvda in fstab. Maybe this inconvenience is present only in the GPL version that I have in use? I am using the latest version of Cloudmin.
It's not a very big problem for me. I can always rename the partitions for my VMs, but this is not the right way. :o)
So do you mean your Xen config file uses sda device names, or do they appear somewhere else in your VM?
''
I can confirm the issue: the Xen config file always contains "sda" entries, even if the fstab shows xvda. Also, it would be nice to have the option to set parameters like extra='elevator=noop' in specific images' configs.
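For illustration, such an option would live in the domU config file. A minimal sketch (the filename and bootloader path are assumptions, not a real generated Cloudmin config):

```
# /etc/xen/example-vm.cfg (illustrative)
bootloader = '/usr/bin/pygrub'
extra      = 'elevator=noop'   # appended to the guest kernel command line
```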
Does the use of sda instead of xvda in the Xen config cause problems, though?
''
It does, actually, as the machine doesn't boot: the Xen config file also sets the "root=/dev/sdaX" option, so the machine boots up, searches for sdaX, doesn't find it, and panics. If I manually change it to "xvdaX", it boots perfectly.
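In other words, the disk line and the root line in the generated config have to agree on the device name. A minimal sketch of a fixed config (volume path and VM name are made up):

```
# /etc/xen/debian6-vm.cfg (illustrative)
disk = ['phy:/dev/vg0/debian6-disk,xvda,w']
root = '/dev/xvda1 ro'   # must match the device name given in 'disk' above
```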
Thanks - I forgot about that case, as most users boot their VMs using pygrub, in which case the root line isn't needed. I will fix this in the next Cloudmin release.
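Until that release, a one-off workaround is to rewrite the device names in the generated config yourself. A sketch using sed; the config contents below are a stand-in, not a real Cloudmin-generated file, so substitute your actual /etc/xen/<vm>.cfg path:

```shell
# Stand-in for a generated domU config; your real file would be /etc/xen/<vm>.cfg
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
disk = ['phy:/dev/vg0/debian6-disk,sda,w']
root = '/dev/sda1 ro'
EOF

# Rewrite sda -> xvda; \b matches a word boundary, so sda1 becomes xvda1
# while names like 'debian6-disk' are left untouched (GNU sed)
sed -i 's/\bsda/xvda/g' "$cfg"
cat "$cfg"
```

Re-running the VM-creation wizard would overwrite this change, so it only papers over the bug until the fixed release.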
''
Thank you very much. I would love to deploy a Cloudmin install on a Debian Squeeze dom0, and I'm trying to find out all the possible problems in advance. Cloudmin (like all the *min projects) is a great piece of software. Thank you for that.
Sounds good. Thanks for the great software.