Bug: System Backup using "Systems hosted on" feature fails when only Cloudmin master selected.

Steps to reproduce the bug:
1) From the Cloudmin web interface, open "Backup and Restore".
2) Choose "System Backups".
3) Click "Add a new system backup".
4) Under "Systems to backup", choose the radio button for "Systems hosted on".
5) In the "Systems hosted on" list, select only the hostname of the Cloudmin master and move it to the right to select it.
6) Set the other backup options and create the backup.

The evidence and effect of the bug:
Evidence:
When you are returned to the backup list, under "Systems to backup" you will see "???" instead of the expected "Systems hosted on CLOUDMASTERHOSTNAME".
Effects:
When you execute the backup manually, you will see:

Backing up 0 systems to destinations from host systems ..

Finding systems to backup ..
.. found 0 systems
Working out backup destinations ..
.. found 0 usable destinations

Backups of 0 systems completed successfully.

If you set the backup to run on a schedule, it will do nothing and report nothing, since the failure does not actually generate an error.
This bug was present in 6.6 and persists in 6.7, in both Cloudmin GPL and Cloudmin Pro.
Both systems on which I am seeing this bug were installed using the KVM/Debian install script onto a 64-bit Ubuntu host OS.
Franco

Status: 
Closed (fixed)

Comments

OK, this is a bug that is triggered when the host system is also the Cloudmin master. It will be fixed in the 6.8 release.

Thank you sir :) Franco

OK, just a note: as a workaround I usually create a backup job using "Selected systems", choose the VMs manually, and then set the destination to the Cloudmin master's hostname and folders. While this approach is unaffected by the bug, it has the downside that if you move VMs from host to host, you must manually update the backup job to reflect the VMs actually hosted on the master. So although this is higher maintenance than the "Systems hosted on" approach, it works as expected and should serve fine until 6.8.
Franco

We will be releasing version 6.8 today or tomorrow to fix this bug, along with a couple of other issues.

Automatically closed -- issue fixed for 2 weeks with no activity.