Moving a virtual system's disk doesn't update the Xen config

Using Resources > Manage disks to move an LVM disk to another Volume Group doesn't update the settings in the Xen instance's config file, leaving the virtual system unable to boot:

PTY PID: 701 Error: Device 51714 (vbd) could not be connected. /dev/vg0/vm01_swap does not exist

Note: vm01_swap was moved to LVM Volume Group 'vg1' and renamed to vm1_3_img by Cloudmin.

Status: 
Closed (fixed)

Comments

How did you do the move exactly? Cloudmin doesn't yet support moving disks between VGs.

Here are the steps:

  1. Shut down the virtual system
  2. Go to Resources > Manage disks
  3. Click on the disk name (the LVM disk)
  4. At the bottom there is a form to move the LVM partition to another VG

Note that I created 2 VGs on the same server node.
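
For reference, the result of the move can be checked from the shell with the standard LVM tools (the LV name below is the renamed one mentioned above; adjust it if yours differs):

lvs vg0 vg1                       # list the logical volumes in each Volume Group
lvdisplay /dev/vg1/vm1_3_img      # details of the moved (and renamed) LV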

Sorry, I'd forgotten about that feature ... even though I wrote it!

So, when you did the disk move, did it report any error? Also, if you fix the Xen .cfg file for the VM manually, does it start up OK?

The disk move went fine; I think it's just not updating the Xen config file.

One more question - which Xen version are you running there?

I'm using CentOS 6 and installed Cloudmin using the GPL installer.

Xen version 4.2.4

So on Xen 4.x, the .cfg file can differ from the actual settings Xen uses in some cases.

Can you check if the .cfg file (usually in the /xen directory) was updated with the new path?
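
For example, something like this would show the disk paths (assuming the file is /xen/vm01.cfg):

grep disk /xen/vm01.cfg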

I checked again, and Cloudmin actually did change the disk configuration in the .cfg file to the correct LVM partition. However, when I boot the virtual system I get the same error:

Error: Device 51714 (vbd) could not be connected. /dev/vg1/vm01_swap does not exist

Note: I moved the LV back to vg0; below is the new .cfg:

===CFG===
memory = 1024
maxmem = 4096
name = 'vm01'
vif = ['ip=192.168.1.80,mac=00:16:3e:94:C2:83']
address = '192.168.1.80'
netmask = '255.255.255.0'
disk = ['phy:/dev/vg0/vm01_img,xvda1,w','phy:/dev/vg0/vm01_3_img,xvda2,w']
kernel = "/usr/lib/xen/boot/pv-grub-x86_64.gz"
extra = "(hd0)/boot/grub/grub.conf"
vcpus = 2
===END===

Strangely, if I start the virtual system from the console by running xm create vm01.cfg, it boots just fine.

I noticed a strange thing: when I shut down the virtual system, either from Cloudmin or through the xm shutdown command, the virtual system is still listed in xm list:

xm list

Name                 ID   Mem VCPUs      State   Time(s)
Domain-0              0  1024     1     r-----    7640.8
vm01                      1024     2                13.8

If I move the disk to another VG and then delete the VM by running "xm delete vm01", I can boot the virtual system from Cloudmin just fine.

But if I don't delete the VM with "xm delete vm01" first, I get the error.

So I think the solution is to delete the virtual system from xend management after shutdown, but I find this strange, as on my other (non-Cloudmin) installation a shutdown also removes the VM from xend management.

I think this is a new feature of Xen 4.2: it saves the VM as a xend-managed domain, so when you shut the VM down it is still listed in xend and you can start it again with the "xm start" command. Unfortunately, this command doesn't re-read the .cfg file, so when a disk is moved to another LVM VG, xend doesn't know about it.

Running xm create from the console gives an error:

Error: VM name 'vm01' already exists

In the xl command list there is a "config-update" option, but I can't find it among the xm commands, so you need to run xm delete vm01 to have xend re-read the config file.

I read http://wiki.xenproject.org/wiki/Xen_4.2_Feature_List, and it says that xend is formally deprecated. Is there any plan to update Cloudmin to use the xl toolstack instead of xm?

Yeah, Xen 4.x changed the way the .cfg file is used - rather than being the authoritative source, it is only used when creating a VM.

The work-around may be to use commands like:

xm delete vm01
xm new /xen/vm01.cfg

Is it possible to call these commands from the API? Or can you implement this in the next update, please?

Yes, this needs to be fixed in Cloudmin. But did that work-around work for you?

Yes, that's how I got the VM started and updated with the new configuration, except I used "xm create" instead of "xm new".

It's also necessary to run "xm delete" every time a virtual system gets reset/reinstalled with a different architecture (32-bit to 64-bit or vice versa).

This is also needed when the new template uses a different disk device; for example, "Debian Lenny 64 bit base OS" uses "sda" as the disk, while "CentOS 6 64 bit base OS" uses "xvda".
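
For illustration, here are hypothetical disk lines showing just the device-name difference (not copied from the actual templates):

disk = ['phy:/dev/vg0/vm01_img,sda1,w']     # e.g. a Debian Lenny based template
disk = ['phy:/dev/vg0/vm01_img,xvda1,w']    # e.g. a CentOS 6 based template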

Ok, these will be fixed in the next Cloudmin release.

Automatically closed -- issue fixed for 2 weeks with no activity.

Update:

After reading an article, I found out that 'xm new' creates a xend-managed domU, while 'xm create' creates a xend-unmanaged domU.

So, to have the .cfg file re-read every time the virtual system is started, the virtual system has to be created with the 'xm create' command.
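
Roughly, the difference looks like this (a sketch assuming the config is /xen/vm01.cfg):

xm new /xen/vm01.cfg      # managed: xend stores the config; later starts ignore the file
xm start vm01             # starts from xend's stored copy, not from the .cfg

xm create /xen/vm01.cfg   # unmanaged: the .cfg file is read on every start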

Hi Jamie,

When will the next Cloudmin be released? Any ETA?

Sorry, it's been held up a bit while we add CentOS 7 support. I expect it will be out in a week though.

I have updated to 7.8, and it seems that changing settings on a virtual system still isn't applied unless I manually run "xm delete" and restart it.

Are you changing the path to a virtual disk, or something else?

See reply number 2 for the part about moving the LVM disk.

The problem is not only with moving the LVM disk: anything that changes the config file (.cfg) is not applied even if you restart the virtual system, because the virtual system (VS) is started with the xm new command, which makes it a xend-managed domain. The next time you start/reboot the VS, Xen will not re-read the config file, since the domain is managed and already in the managed-domain list (xm list).

Try changing the number of virtual CPUs, resetting the image from 32-bit to 64-bit, or changing the IP address; it will not be applied.

This can be solved either by replacing xm new with xm create, or by executing xm delete before xm new.
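
For example, a refresh sequence along these lines (assuming the VM is vm01 with its config at /xen/vm01.cfg) would pick up whatever changed in the file:

xm shutdown vm01          # if it is still running
xm delete vm01            # drop the stale managed-domain record from xend
xm new /xen/vm01.cfg      # re-register with the updated config
xm start vm01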

Ok, this is odd as it doesn't match the behavior of our test systems.

I will dig into it some more on a CentOS 6 machine with Xen setup, and update this bug.

This happens on a newly installed server (fresh install of CentOS 6 64-bit and Cloudmin).

Jamie, did you find a solution for this problem? Have you tested it?

No, still working on it .. but I should have a fix in a day or two.

So I did some testing here with Cloudmin 7.8, and both memory and address changes seem to work fine on a CentOS 6 host system with Xen 4.2.

A memory change doesn't have this problem because it doesn't require a reboot and is applied on-the-fly, but changes like disk size and reinstalling from a 32-bit to a 64-bit OS (and vice versa) don't work.

Ok, for an architecture change on reset within Cloudmin the Xen config isn't being refreshed - I'll fix that.

Disk sizes aren't stored in the Xen config file though, so there should be no need for it to be modified when a disk is expanded.

I suggest making "reboot system" under "system state" always re-read the config file before restarting the virtual system.

For example, when I change the virtual CPU settings I'm offered a reboot of the virtual system; if I click the reboot button, the settings are applied. But if I prefer to reboot later and use "reboot system" under "system state", the changes to the virtual CPU settings don't seem to be applied.

On second thought, I think it should re-read the config every time the virtual system is booted or rebooted.

About the HDD: it's not disk size changes I'm talking about, but moving the LVM partition. On my test server I have 2 Volume Groups (VGs); if I move the disk of a virtual system to another VG, the config is changed but not re-read on boot/reboot, so the virtual system fails to boot.
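
To illustrate with the disk line from the .cfg posted earlier (the vg1 path here is hypothetical, just showing what the move produces):

disk = ['phy:/dev/vg0/vm01_img,xvda1,w','phy:/dev/vg0/vm01_3_img,xvda2,w']   # before the move
disk = ['phy:/dev/vg0/vm01_img,xvda1,w','phy:/dev/vg1/vm01_3_img,xvda2,w']   # after the move; xend still holds the old path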

I suppose that could be done, but I'm not sure if it is a great idea to delete and then re-create the Xen instance continually - it would be better for Cloudmin to do this only when necessary. I'll make sure this happens for Xen disk moves in the next release.

I've fixed this in the code, but haven't done a new release that includes the fix yet.

Let me know if you'd like to try a pre-release version though.

Let me test it and I'll give you feedback.

Ok, I am emailing you an updated RPM for installation on the Cloudmin master system now.

It seems to work fine now.

Great! This fix will be included in the upcoming 7.9 Cloudmin release.

Automatically closed -- issue fixed for 2 weeks with no activity.