Error when adding second LVM volume to KVM guest

Today I was trying to add a second volume to a KVM guest running CentOS 5.7, on a CentOS 5.7 host with Cloudmin 5.7, using the virtio block driver.

I chose a mount point (an empty dir on the guest), chose a size (there's plenty of room in the LVM volume group), and hit "Create".
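
As far as I understand it, behind the scenes this should just come down to an lvcreate plus a mkfs on the host .. something like the sketch below, where the size is only an example and the ext3 filesystem type is my guess, not necessarily what Cloudmin actually runs:

    # carve a new LV for the second disk out of the volume group
    lvcreate -L 20G -n xxxx_yyyyyy_net_2_img VolGroup00
    # put a filesystem on it before it gets attached to the guest
    mkfs.ext3 /dev/VolGroup00/xxxx_yyyyyy_net_2_img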

Then this is what I got:

Creating virtual disk on /dev/VolGroup00/xxxx_yyyyyy_net_2_img .. creation failed :

    QEMU 0.9.1 monitor - type 'help' for more information
    (qemu) pci...

followed by a big bunch of characters (the monitor echoing the attach command back character by character, mixed in with terminal control codes), and at the end:

    storage media=disk,file=/dev/VolGroup00/xxxx_yyyyyy_net_2_img,if=virtio
    Invalid pci address

Now /dev/vdb is listed in Cloudmin and in /etc/fstab on the guest, and the corresponding LVM volume IS created on the host. But even after rebooting the guest it's not mounted, and if I try to mount it manually I get a message saying that /dev/vdb doesn't exist...
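
For anyone hitting the same thing, the checks inside the guest are just the standard commands below (nothing Cloudmin-specific .. the new disk should show up as vdb in each of them if the kernel actually saw it):

    cat /proc/partitions      # lists the block devices the kernel knows about
    ls -l /dev/vd*            # virtio block device nodes that actually exist
    dmesg | grep -i virtio    # kernel messages from the virtio_blk driver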

I deleted it, shut down the guest, rebooted the host, and tried again... with the same result.

Any idea?

Status: Active

Comments

That error means that KVM could not live-attach the new disk to the VM.

However, if you shut down the VM and start it up again, the new disk should appear OK.

If not, does the existing root disk on the VM use /dev/vda as the device, or is it on /dev/hda instead?
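
You can check that quickly on the guest with something like:

    df -h /                   # shows which device the root filesystem lives on
    mount | grep ' on / '     # same information from the mount table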

Nope, even after a restart the VM wouldn't mount /dev/vdb. Yes, the VM's existing root disk uses /dev/vda, since it uses virtio_blk.

I removed the second volume and tried again with the VM shut down: this time it worked. After boot, /dev/vdb is mounted on the chosen dir.

I guess something goes wrong when adding the volume to a live VM. Maybe there's a clue in the "Invalid pci address" error.
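
Judging from the output above, the command being typed into the monitor looks like the pci_add hot-plug command from that QEMU/KVM generation .. roughly the following, where the "auto" PCI address is my assumption rather than something I saw in the output:

    # qemu 0.9.1-era monitor syntax for hot-plugging a virtio disk (approximate)
    pci_add auto storage media=disk,file=/dev/VolGroup00/xxxx_yyyyyy_net_2_img,if=virtio

so "Invalid pci address" would then be the monitor rejecting whatever address it was given.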

P.S.: as a side question, now that it works, I tried to mount this VM's filesystems on the host via the "cloudmin mount-system" command. It works as expected, but I only have access to the data on the root FS (/dev/vda), and not to the data on /dev/vdb, which is mounted on a subdir of /home. I tried fiddling with the "--want-dir" option, but no luck... Any idea?

That's unusual .. failure to attach the disk to a running VM shouldn't prevent it from being created so that it can be mounted after a reboot.

Unfortunately, hot-adding a disk to a running VM is not completely reliable, as it relies on interaction between KVM, PCI hot-plug and the Linux kernel .. and in my experience the KVM side of that is the flaky part.

Regarding mounting the second disk, there is no way to do this currently ..
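
If you just need to get at that data from the host, one workaround is to mount the second disk's LV directly while the VM is shut down .. a rough sketch, assuming the filesystem was written straight onto the LV (if it contains a partition table instead, kpartx -av on the LV can expose the partitions first), with the mount point path just as an example:

    # make sure the VM is shut down first, so the guest isn't using the filesystem
    mkdir -p /mnt/guest-vdb
    mount -o ro /dev/VolGroup00/xxxx_yyyyyy_net_2_img /mnt/guest-vdb
    # ... copy out whatever is needed, then clean up ...
    umount /mnt/guest-vdb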