Failed to attach VDI while creating Xenserver 6.2 VMs

Hi,

It seems like we cannot create new VMs using Cloudmin and XenServer 6.2. Whenever we try to create a VM, we get the following error:

Failed to attach VDI to host system : All /dev/td* devices are in use!

We can, however, manually create VMs in XenServer without issues (using the same template).

Also, it used to work and then stopped all of a sudden.

I have checked memory, disk space, CPU and other resources to rule those out as the cause.

Can you please help us with this? TIA

Status: 
Closed (fixed)

Comments

Did an upgrade of your Xen host perhaps cause this to stop working?

No, the XenServers were not updated. We have been on the same patch level for more than a couple of years now, and it just stopped working randomly.

Hmmm ... I wonder what changed that would suddenly trigger it. If you can, does rebooting the xen host help?

We have rebooted the XenServer twice hoping to fix it, but that didn't help. The problem is that we cannot create new hosts anymore. Is there any alternative way for now (other than logging into XENconsole) that you can think of which would allow us to create VMs?

It's possible that the cause is actually that all /dev/td* devices are in use by VMs.

What output do you get if you run ls -l /dev/td* ?

Hi Jamie

No, I do not see any /dev/td* devices:

[root@xxx log]# ls -ld /dev/td*
ls: /dev/td*: No such file or directory

BTW, I ran this command on the XenServer itself. We also found that /dev/td* devices do not exist on the production XenServers either, which are running fine.

Can you suggest any other way (a command or something similar) that we can run from Cloudmin to create the VMs?

Regards

What about /dev/xvd* devices? Cloudmin assumes that those are used on XenServer versions below 6.2, but that /dev/td* devices will exist on 6.2 and above (which is what you have).
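For reference, here is a minimal Python sketch of the version-based assumption described above (the helper name is hypothetical; this is illustrative only, not Cloudmin's actual code):

# Pick the expected block-device prefix from the XenServer version string,
# following the rule described above: /dev/xvd* below 6.2, /dev/td* on 6.2+.
def device_prefix(xenserver_version):
    major, minor = (int(x) for x in xenserver_version.split(".")[:2])
    return "td" if (major, minor) >= (6, 2) else "xvd"

print(device_prefix("6.2"))  # td
print(device_prefix("6.1"))  # xvd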

Same issue.

[root@xxx ~]# cat /etc/issue | head -n 1
Citrix XenServer Host 6.2.0-70446c
[root@xxx ~]# ls -ld /dev/xvd*
ls: /dev/xvd*: No such file or directory

Can you also post the output from /proc/partitions on the host system?

[root@xxx log]# cat /proc/partitions
major minor  #blocks  name

   7        0      52252 loop0
   8        0  142737408 sda
   8        1      32976 sda1
   8        2    4194304 sda2
   8        3    4194304 sda3
   8        4  134307890 sda4
252        0       4096 dm-0
   8       48  209715200 sdd
   8       64  209715200 sde
   8       80  209715200 sdf
   8       96  314572800 sdg
   8      112  314572800 sdh
   8      128  314572800 sdi
   8      144  209715200 sdj
   8      160  209715200 sdk
   8      176  209715200 sdl
   8      192  314572800 sdm
   8      208  314572800 sdn
   8      224  314572800 sdo
   8      240  209715200 sdp
  65        0  209715200 sdq
  65       16  209715200 sdr
  65       32  314572800 sds
  65       48  314572800 sdt
  65       64  314572800 sdu
  65       80  209715200 sdv
  65       96  209715200 sdw
  65      112  209715200 sdx
  65      128  314572800 sdy
  65      144  314572800 sdz
  65      160  314572800 sdaa
252        1  209715200 dm-1
252        2       4096 dm-2
252        3  209715200 dm-3
252        4       4096 dm-4
252        5  314572800 dm-5
252        6       4096 dm-6
252        7  209715200 dm-7
252        8       4096 dm-8
252        9  314572800 dm-9
252       10       4096 dm-10
252       11  314572800 dm-11
252       12       4096 dm-12
252       16   16818176 dm-16
253        0   16777216 tda
252       14    8413184 dm-14
253        2    8388608 tdc
252       17   31526912 dm-17
253        4   31457280 tde
252       19   16818176 dm-19
253        6   16777216 tdg
252       13   10514432 dm-13
253        1   10485760 tdb
252       18    1056768 dm-18
253        5    1048576 tdf
252       20    8413184 dm-20
253        7    8388608 tdh
252       21    8413184 dm-21
252       44    3235840 dm-44
252       45     892928 dm-45
252       46    1818624 dm-46
252       47    1437696 dm-47
252       48    1150976 dm-48
252       49    1646592 dm-49
252       50    1208320 dm-50
252       51    1114112 dm-51
252       53    2088960 dm-53
252       54    1765376 dm-54
252       55     880640 dm-55
252       56    4820992 dm-56
252       57    6950912 dm-57
252       58    1056768 dm-58
253       17   10485760 tdr
252       61    1056768 dm-61
253       20    1048576 tdu
252       62   10514432 dm-62
253       21   10485760 tdv
252       15   10514432 dm-15
253        3   10485760 tdd
252       27   15765504 dm-27
253       10   15728640 tdk
252       29    8413184 dm-29
252       30    3182592 dm-30
252       31    8413184 dm-31
252       32    4526080 dm-32
253       11    8388608 tdl
252       68    3964928 dm-68
252       69    9494528 dm-69
253       25   16777216 tdz
252       70    4526080 dm-70
253       26   16777216 tdaa
252       35    1056768 dm-35
252       52      36864 dm-52
253       14    1048576 tdo
252       75   10514432 dm-75
252       76    8413184 dm-76
253       16   10485760 tdq
252       59    1056768 dm-59
253       18    1048576 tds
252       60    8413184 dm-60
253       19    8388608 tdt
252       79   36777984 dm-79
253       33   36700160 tdah
252       80   21020672 dm-80
253       34   20971520 tdai
252       77    1056768 dm-77
253       31    1048576 tdaf
252       78   13664256 dm-78
253       32   13631488 tdag
252       67    1056768 dm-67
252       71   21020672 dm-71
253       24    1048576 tdy
253       27   20971520 tdab
252       72   10514432 dm-72
252       81   10514432 dm-81
253       28   10485760 tdac
252       82    1978368 dm-82
253       35   10487808 tdaj
252       84   16818176 dm-84
252       83   16818176 dm-83
252       63   16818176 dm-63
253       22   16777216 tdw
252       64   16818176 dm-64
253       23   16777216 tdx
252       65    8413184 dm-65
253       29    8388608 tdad

We also got this while creating a new VM. During creation:

Creating virtual system with Citrix Xen ..
.. creation failed : Failed to create root disk : Failed to attach VDI to host system : Failed to connect VDI a650ef84-7d88-4901-914c-2ac2e62eaaea to host : The uuid you supplied was invalid. type: VM uuid:

We manually determined that the VM's UUID is 0aef8648-f56d-5aa2-b43f-c6776a563d93.

So it seems like the UUID of the new VM is not being supplied.
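As an aside, a quick way to pull the td* entries out of a /proc/partitions listing like the one above and count how many virtual block devices are attached (illustrative Python only, not part of Cloudmin):

# Extract the td* device names from /proc/partitions output on the host.
import re

def td_devices(proc_partitions_text):
    names = re.findall(r"^\s*\d+\s+\d+\s+\d+\s+(td\w+)\s*$",
                       proc_partitions_text, flags=re.MULTILINE)
    return sorted(set(names))

with open("/proc/partitions") as f:
    devices = td_devices(f.read())
print(len(devices), "td devices in use:", devices)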

Ok, I think I see the issue now - there are > 26 virtual devices, so everything from /dev/tda to /dev/tdz is used up! Citrix Xen has gone on to /dev/tdaa and so on, but Cloudmin doesn't recognize these.
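To illustrate the naming scheme in question, here is a small Python sketch (just an illustration, not the actual patch): after the 26 single-letter names, XenServer continues with two-letter names, spreadsheet-column style.

# Map a 0-based index to a device name: 0 -> tda, 25 -> tdz, 26 -> tdaa, ...
from string import ascii_lowercase

def device_name(index, prefix="td"):
    letters = ""
    n = index + 1  # bijective base-26: a..z, then aa..az, ba.., and so on
    while n > 0:
        n, rem = divmod(n - 1, 26)
        letters = ascii_lowercase[rem] + letters
    return prefix + letters

print([device_name(i) for i in (0, 25, 26, 27)])
# ['tda', 'tdz', 'tdaa', 'tdab']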

We can fix this relatively easily though. I can send you a patched version of Cloudmin now if you like?

Yes, that would be great. Would you please attach a download link?

I forgot to ask - which Linux distribution is your Cloudmin master system running?

Hi

Thanks for the quick response. It is running CentOS 6.8.

Regards,

Now we have a new issue (after applying the patch you sent over email).

When we try to create a VM, we get the message:

Creating virtual system with Citrix Xen ..
.. creation failed : Failed to create root disk : Failed to attach VDI to host system : Failed to connect VDI (uuid) to host (host-uuid) : A device with the name given already exists on the selected VM device: 6

Argh, I missed another case where two-letter device names are in use. I'll send you another patch.
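The idea behind this second fix, roughly, is to skip any candidate name (one- or two-letter) that is already attached to the VM. A sketch of that selection logic in Python, not the actual Cloudmin code:

# Walk the candidate names tda..tdz, then tdaa..tdzz, and return the first
# one that is not already in use on the VM.
from itertools import chain, product
from string import ascii_lowercase

def candidate_names(prefix="td"):
    singles = (c for c in ascii_lowercase)
    doubles = (a + b for a, b in product(ascii_lowercase, repeat=2))
    return (prefix + s for s in chain(singles, doubles))

def first_free_name(in_use, prefix="td"):
    return next(n for n in candidate_names(prefix) if n not in in_use)

used = {"td" + c for c in ascii_lowercase} | {"tdaa", "tdab", "tdac"}
print(first_free_name(used))  # tdad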

Thanks a lot. The patch works great :-)

Awesome! I'm actually a little surprised that it did work, because I wasn't able to reproduce this problem on my test systems, so the code was essentially untested.

The same fix will be included in the next Cloudmin release.

Status: Active » Fixed
Status: Fixed » Closed (fixed)

Automatically closed - issue fixed for 2 weeks with no activity.