Trying again - cannot manage disks after moving cloudmin to another server

The last time I tried to file this it failed with a 500 error --

I recently attempted to move my Cloudmin instance from one server to another. This ran into some issues, some of which I reported earlier (the Virtualmin license problems); others seemed to be general problems with managing the servers. I fixed most of them by simply unregistering the systems and then re-registering them. Everything is now working fairly well with one exception --- I can no longer resize disks. When I attempt to do so I get an error reading

Warning - this disk cannot be safely resized. Cloudmin does not know what type of filesystem this disk contains, and so cannot properly resize it. This is most likely because the disk is not mounted on the virtual system.

Which is a little odd, since the screen before that clearly shows it is an ext3 filesystem. I tried to upload screenshots of all of this, but the file attachment widget on the issue form doesn't work -- it reports a JavaScript error when you try to attach a file.

Status: 
Closed (fixed)

Comments

Maybe this helps ---

It appears that while the virtual server is running, Cloudmin knows that the filesystem is mounted on / and is ext3. As soon as I shut the server down, Cloudmin only seems to know that it is a virtual partition.

What does the /etc/fstab file contain in one of your VMs with this issue?

If the device for the root filesystem is something like LABEL=/ , then Cloudmin can have trouble working out which partition within the disk image it refers to. The solution is to change this to the actual device name, like /dev/sda1 .
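If that is the cause, the change is a one-line edit to /etc/fstab inside the VM. A minimal sketch, simulated here on a temp file rather than the real fstab; /dev/sda1 is an example name, so check the VM's actual root device (with `mount` or `blkid`) before editing:

```shell
# Simulate an /etc/fstab whose root entry is mounted by label.
# In a real VM you would edit /etc/fstab itself, after confirming
# the actual root device; /dev/sda1 here is only an example.
fstab=$(mktemp)
echo 'LABEL=/ / ext3 defaults 0 1' > "$fstab"

# Replace the LABEL= reference with the real device name.
sed -i 's|^LABEL=/ |/dev/sda1 |' "$fstab"
cat "$fstab"
```

After the edit the root entry names a concrete device, which Cloudmin can map back to a partition inside the disk image.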

Nope -- I don't see any substantial difference between the ones that work and the ones that don't --

/dev/sda1 / ext3 grpquota,usrquota,rw 0 1
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
/dev/sda2 swap swap defaults 0 0

Also -- this is the Xen cfg file for the same domain:

memory = 3284
maxmem = 4096
name = 'uppitywis'
vif = ['ip=64.33.160.71,mac=00:16:3e:9B:68:D8']
address = "64.33.160.71"
netmask = '255.255.255.240'
disk = ['phy:/dev/VolGroup00/uppitywis_img,sda1,w','phy:/dev/VolGroup00/uppitywis_swap,sda2,w']
bootloader = "/usr/bin/pygrub"
vnc = 1
vnclisten = "0.0.0.0"
vncunused = 1
vncpasswd = "expurgatedforsecurity"
vfb = ['type=vnc,vncunused=1,vncpasswd=expurgatedforsecurity,vnclisten=0.0.0.0']
vcpus = 2
cpus = "2,3"
cpu_cap = 0
cpu_weight = 256

If you SSH into the Cloudmin master system as root and run:

cloudmin list-disks --host xen-instance-name --multiline

what does it output?

Also, what output do you get from:

cloudmin download-file --host xen-instance-name --source /etc/fstab --stdout

cloudmin list-disks --host uppitywis.cloudmin.cruiskeenconsulting.com --multiline
/dev/VolGroup00/uppitywis_img
Description: LVM VG VolGroup00, LV uppitywis_img
Device on system: /dev/sda1
Description on system: SCSI device A partition 1
Format: partition
Device file on system: /dev/sda
Type: Device
LVM volume group: VolGroup00
LVM logical volume: uppitywis_img
Storage name: LVM volume group VolGroup00
Storage ID: lvm_VolGroup00
Media: disk
Size: 25769803776
Nice size: 24 GB
Filesystem size: 25374093312
Filesystem used: 24254513152
Filesystem free: 1119580160
Nice filesystem size: 23.63 GB
Nice filesystem used: 22.59 GB
Nice filesystem free: 1.04 GB
Mount point: /
Mount device: /dev/sda1
Filesystem: ext3
Mounted: Yes
Xen block ID: 2049
/dev/VolGroup00/uppitywis_swap
Description: LVM VG VolGroup00, LV uppitywis_swap
Device on system: /dev/sda2
Description on system: SCSI device A partition 2
Format: partition
Device file on system: /dev/sda
Type: Device
LVM volume group: VolGroup00
LVM logical volume: uppitywis_swap
Storage name: LVM volume group VolGroup00
Storage ID: lvm_VolGroup00
Media: disk
Size: 2147483648
Nice size: 2 GB
Mount point: /
Mount device: /dev/sda1
Filesystem: ext3
Mounted: Yes
Xen block ID: 2050

cloudmin download-file --host uppitywis.cloudmin.cruiskeenconsulting.com --source /etc/fstab --stdout
/dev/sda1 / ext3 grpquota,usrquota,rw 0 1
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
/dev/sda2 swap swap defaults 0 0

I should have mentioned -- could you run those commands while the VM is down, and what do they output then?

while the same system is down ---

cloudmin list-disks --host uppitywis.cloudmin.cruiskeenconsulting.com --multiline
/dev/VolGroup00/uppitywis_img
Description: LVM VG VolGroup00, LV uppitywis_img
Device on system: /dev/sda1
Description on system: SCSI device A partition 1
Format: partition
Type: Device
LVM volume group: VolGroup00
LVM logical volume: uppitywis_img
Storage name: LVM volume group VolGroup00
Storage ID: lvm_VolGroup00
Media: disk
Size: 25769803776
Nice size: 24 GB
/dev/VolGroup00/uppitywis_swap
Description: LVM VG VolGroup00, LV uppitywis_swap
Device on system: /dev/sda2
Description on system: SCSI device A partition 2
Format: partition
Type: Device
LVM volume group: VolGroup00
LVM logical volume: uppitywis_swap
Storage name: LVM volume group VolGroup00
Storage ID: lvm_VolGroup00
Media: disk
Size: 2147483648
Nice size: 2 GB

cloudmin download-file --host uppitywis.cloudmin.cruiskeenconsulting.com --source /etc/fstab --stdout
ERROR: Failed to list partitions in disk image /dev/VolGroup00/uppitywis_img : Disk /dev/VolGroup00/uppitywis_img doesn't contain a valid partition table

Disk /dev/VolGroup00/uppitywis_img: 25.7 GB, 25769803776 bytes
255 heads, 63 sectors/track, 3133 cylinders, total 50331648 sectors
Units = sectors of 1 * 512 = 512 bytes

I should have mentioned --- these are systems whose virtual machines I also moved from another physical server using Cloudmin. So what I think has happened here is that the disk images got broken during the move, which is weird because the systems boot and shut down just fine.

Ok, I see the problem now - Cloudmin thinks that your Xen VM has a whole-disk image, but it really contains only a single partition.
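One way to see the difference, as a sketch simulated on temp files rather than the real LVM volumes: a whole-disk image starts with an MBR partition table, which carries the boot signature 0x55 0xAA at byte offset 510, while a bare single-partition image starts directly with filesystem data (which is why fdisk reported no valid partition table above).

```shell
# Build a fake one-sector "whole-disk" image carrying an MBR boot signature.
# (Simulated on a temp file; the real volume in this thread would be
# something like /dev/VolGroup00/uppitywis_img.)
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
# Write 0x55 0xAA (octal 125 252) at offset 510, as an MBR would have.
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null

# Read back the two signature bytes and classify the image.
sig=$(od -An -tx1 -j510 -N2 "$img" | tr -d ' \n')
if [ "$sig" = "55aa" ]; then
    echo "boot signature present -- looks like a whole-disk image"
else
    echo "no boot signature -- likely a bare single-partition image"
fi
```

On a real partition image like the one in this thread, the signature check would fail, matching the fdisk error shown earlier.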

The fix is to SSH into the master system and get the system's ID with the command:

cloudmin list-systems --host uppitywis.cloudmin.cruiskeenconsulting.com --id-only

Then edit the file in /etc/webmin/servers named XXX.serv, where XXX is the unique ID. At the end of this file, add the line:

xen_wholedisk=0
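Put together, the fix looks roughly like this. A sketch, simulated on a temp file: the hostname is the one from this thread, and on a real master the file lives under /etc/webmin/servers.

```shell
# On the master, the VM's unique ID would come from (shown for reference):
#   cloudmin list-systems --host uppitywis.cloudmin.cruiskeenconsulting.com --id-only
# Simulate the resulting /etc/webmin/servers/XXX.serv file on a temp copy.
serv=$(mktemp)
echo 'host=uppitywis.cloudmin.cruiskeenconsulting.com' > "$serv"

# Append the flag only if it is not already set, so reruns stay harmless.
grep -q '^xen_wholedisk=' "$serv" || echo 'xen_wholedisk=0' >> "$serv"
tail -n 1 "$serv"
```

The xen_wholedisk=0 flag tells Cloudmin the image holds a single partition rather than a whole disk, so it can find the filesystem when the VM is down.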

Yup, that seems to have sorted it -- thanks. Now my only question is: why did several of these virtual hosts get horked like that when I moved them to a different server?

This is due to a bug in Cloudmin that is triggered only when you un-register and then re-register a VM.

I will fix it in the next release (version 5.5).

Automatically closed -- issue fixed for 2 weeks with no activity.