Looking for instructions to create a Cloudmin 64-bit CentOS disk image

OK, so you DO have documentation on how to create a disk image; however, that relies on already having a pre-set-up KVM guest. Using Cloudmin, one can only create a 32-bit guest. I need a 64-bit CentOS guest.

I presume you guys create the disk images that are available, so I would also guess that you have a procedure for creating them. Would it be possible to post that procedure?

I want to create a 64-bit CentOS 6.4 disk image, using ext4, with Virtualmin installed, so I can create a number of guests for my application. 32-bit doesn't really cut it for our application. If you could provide the procedure you use for creating your own disk images, that would be very useful.

Status: 
Closed (fixed)

Comments

Ok, that makes sense. I presume any IP assigned to the machine is ignored when a disk image is created from it and will be re-assigned by Cloudmin.

However, if I install Virtualmin on this now-not-so-empty host and then create the disk image, will it also remove the host-specific configuration from Virtualmin? Or is there a trick to that? It of course asks for a fully qualified domain name as part of the Virtualmin install process.

I have it all working up to the point of pre-installing Virtualmin. Obviously, I would not complete the setup via the HTTP interface.

When a new VM is created from an image, Cloudmin updates all the relevant configuration files with the assigned hostname and IP addresses. This should apply to all Virtualmin-related settings too.

However, any domains you created in Virtualmin before imaging will not be modified or renamed when creating a new VM.
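For illustration, these are the kinds of places a CentOS 6 guest stores its hostname and IP, and thus the sort of settings that need rewriting when a VM is cloned from an image (whether these are exactly the files Cloudmin rewrites is an assumption on my part):

# Hostname (CentOS 6 keeps it in /etc/sysconfig/network)
grep HOSTNAME /etc/sysconfig/network
# Address, netmask and gateway for the first interface
cat /etc/sysconfig/network-scripts/ifcfg-eth0
# Hostname-to-IP mappings
cat /etc/hosts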

So, this all seems to have worked except for one detail... The new system is up and running based on my new template. When I create a new virtual host based on my CentOS 6.3 image, here is what happens... I am creating a new host with disk space AND swap space. In the new virtual host, fstab has:

UUID=bd74e6e5-5cdd-45c6-8943-684a4c8aeba9 / ext4 grpquota,usrquota,rw 0 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/hdb1 swap swap defaults 0 0

Which is great, except for one thing... there is no /dev/hdb1; it's /dev/sdb1. The host was created using LVM for each drive via your new-system screen.

Both LVM drives were created for the new host.

Executing swapon yields:

swapon -a

swapon: /dev/hdb1: stat failed: No such file or directory

I believe if I change the fstab to sdb1, it will work fine:

ls -l /dev/sd*

brw-rw---- 1 root disk 8, 0 Feb 4 04:02 /dev/sda
brw-rw---- 1 root disk 8, 1 Feb 4 04:02 /dev/sda1
brw-rw---- 1 root disk 8, 16 Feb 4 04:02 /dev/sdb
brw-rw---- 1 root disk 8, 17 Feb 4 04:02 /dev/sdb1

WELL... maybe not. I looked at it, and the partition actually has a Linux partition type. It should be swap.

So, something is not quite right in how the swap space gets created. When creating the new server, it says:

Creating swap file of 1024 MB .. .. done

My guess is this is because I am using LVM for the disk "partitions".

I think the cause here is that the /etc/fstab file on the original system contains a UUID= instead of the actual device name. This causes problems for Cloudmin, as it cannot work out what the root device name will be inside the VM.

The fix is to edit /etc/fstab and change the first field to the actual device path, like /dev/sda1, before creating the image.
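For example, on the image source system the root line in /etc/fstab would change from the UUID form (the UUID below is the one from the fstab posted above) to the plain device path:

UUID=bd74e6e5-5cdd-45c6-8943-684a4c8aeba9 / ext4 grpquota,usrquota,rw 0 1

becomes

/dev/sda1 / ext4 grpquota,usrquota,rw 0 1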

By "original" system, you mean the system I created as an empty system to install Centos on, right? Not the master system for cloudmin?

OK, understand that this is just how the CentOS installer sets up fstab when you use LVM; I did not specify the UUID or change fstab in any way, shape or form. So, I think what you are saying here is that Cloudmin has a problem with using LVM for the host system.

Note that the line with the UUID is the root partition, which boots and works just fine. I don't see any issue with that. The issue is with the swap partition, which is defined in fstab as:

/dev/hdb1 swap swap defaults 0 0

This is the root of the problem. If I manually change /dev/hdb1 to /dev/sdb1 in the created system and type swapon -a, it works fine.

So, the issue is /dev/hdb1 which is wrong. Perhaps you are saying the UUID on the root partition is why it is using /dev/hdb1?

If that is the case, then please confirm: on the image source system I should change the UUID to the /dev device, make a new image, then re-create the new system, and swap should work fine. Is that what you are saying? If not, please elaborate.

Yes, you need to fix the /etc/fstab file on the VM that you created as an empty system in Cloudmin.

The incorrect swap file device is being used because the root filesystem device cannot be determined from the /etc/fstab file.

Sorry to drag this on, but it still doesn't work. I updated the fstab on the machine the image is created from, re-created the image, deleted and re-created the virtual machine from the new image, and the fstab now shows up as:

/dev/sda1 / ext4 grpquota,usrquota,rw 0 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/hdb1 swap swap defaults 0 0

So... it's still using hdb1 for some reason. Seems like a bug to me?

It does work if I simply change that to sdb1 and swapon -a.
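In case it helps anyone hitting the same thing, this is roughly the workaround I'm applying inside the guest until the fix ships (assuming the swap partition really is /dev/sdb1, as it is in my case):

# Point the fstab swap entry at the real device name
sed -i 's|^/dev/hdb1 |/dev/sdb1 |' /etc/fstab
# Enable everything listed in fstab and confirm it took effect
swapon -a
swapon -s
free -m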

Looks like there is another issue here - on the Linux distribution you are using, the disk devices are seen as sdX instead of hdX (as Cloudmin expects).

I will fix this in the next Cloudmin release. Let me know if you'd like a fix sooner ..

No quick fix needed. Will deal with it. It's no big deal to change the fstab and swapon it for the time being. Just wanted it to eventually get fixed.

I had one more issue with this, more related to a bug with checking disk space and resizing after a machine is created. Probably I should make this a separate ticket. The issue is that I created the vhost with 1 GB of swap space so as to not have it tell me I was out of space. Using Cloudmin, I then changed the disk size to 4 GB via Resources -> Manage Disks. It did increase the LVM partition to 4 GB; however, it was still 1 GB on the vhost once it was started. The reason seems to be that no mkswap command was run to change the size.

I don't think mkswap needs to be run actually.

If you run fdisk -l on the swap disk, does it show the partition as 4 GB?

Yes, it shows 4 GB; however, free -m, top, you name it, all show 1 GB of swap space.

As soon as I run mkswap, it shows 4GB.
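For the record, this is roughly the sequence I used inside the guest to get the enlarged swap recognized (the swapoff step is my own addition for safety; the swap partition is /dev/sdb1, as above):

# Disable the old 1 GB swap area
swapoff /dev/sdb1
# Write a new swap signature covering the whole 4 GB partition
mkswap /dev/sdb1
# Re-enable it and confirm the new size
swapon /dev/sdb1
free -m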

If you run the command:

cloudmin list-disks --host your-vm-name --multiline

as root via SSH on the Cloudmin master system, what does it output? This will tell me if the disk is being detected as swap or not..

I don't know if this is correct or not, as obviously I had already fixed it. I had also changed the filesystem type from Linux to swap, as it was NOT swap.

See attached since line spacing doesn't always seem to work right.

Where did you make the filesystem type change to swap exactly?

In Webmin for the vhost, under hardware and partitions. I am not sure that had to be done, though. Virtual memory was working at 1 GB even with filesystem type Linux; I was just trying to get to 4 GB.
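For what it's worth, the command-line equivalent of that change would be roughly the following interactive fdisk session (shown as comments, assuming the swap disk is /dev/sdb again):

# fdisk /dev/sdb
#   t    (change the partition's type code)
#   82   (Linux swap / Solaris)
#   w    (write the table and exit)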

I don't know if Cloudmin has the feature Virtualmin does where I can grant you temporary access, but if you wanted to play around a bit, it might be helpful for you to have the same environment. Just let me know.

Ok, I see the bug that is causing this now, and have implemented a fix for inclusion in the next Cloudmin release.

Automatically closed -- issue fixed for 2 weeks with no activity.