IP Pool Sharing for different virtualizations on same host system

As the title says, is this possible?

Status: 
Closed (fixed)

Comments

Yes, this is absolutely possible - the same host system can be registered as (for example) both a Xen and an OpenVZ host, and can have the same IP range for both. Virtual systems of either type will not use each other's IPs.

Thanks Jamie, I've just added a Xen host system, but there's only an option to add Xen instances, not create them. Any ideas?

Have you downloaded any Xen images yet, at Cloudmin Settings -> New System Images?

Hi Jamie, that seemed to be the problem - thanks.

Unfortunately, I am stumped by another problem, this time to do with Xen networking.

The server I have is hosted in a datacenter which provides subnets in a routed format. I have a single IP, plus an allocation of a /25 which is routed to that single IP. Any idea how this can be configured in Cloudmin?

Hmm, I've never tried that setup .. so if you create a virtual IP (like eth0:1) with one of the IPs in the range you have been allocated, can it be pinged from outside?

If so, creating Xen instances with IPs within that range should work OK. However, you may have to set their default gateways to your main IP.
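
For example, on the host, something like this (a rough sketch - the address and netmask here are just one IP from your routed /25, so substitute your own):

ifconfig eth0:1 188.40.204.130 netmask 255.255.255.128 up

Then try pinging that address from a machine outside the datacenter.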

Hi Jamie,

Yes, they can be pinged from the outside if I create an IP alias. Though if I set the default gateway to my main IP, it doesn't seem to help.

Can the Xen instance IPs be pinged from the Xen host system?

Only when I add an IP as a virtual interface, as you asked me to test earlier. Would you like access to the system? It's pretty bare at the moment, so you're more than welcome to do anything you like if that makes it easier than going back and forth with questions.

Sure, if I could login to the host system that would be very useful. You can email me login details for root SSH access at jcameron@virtualmin.com

Thanks for the login..

Do you have a system already created with Cloudmin in the problem IP range, or can I create a new one?

I deleted all the systems, go ahead and create a new one.

I had no luck getting it working, sorry ...

Is there any chance your hosting company could assign you a real IP range in the same network as your main IP?

I haven't set up Xen instances in a different network range from the host like this before, and I'm not even sure it will work if the instances cannot talk to the network directly themselves.

Thanks Jamie. What is peculiar is that OpenVZ works practically flawlessly with the IP range. My host has a wiki article here: http://translate.google.com/translate?prev=hp&hl=en&js=y&u=http://wiki.h...

It describes why most virtualization types may have issues with the way they allocate and manage IP ranges, and it addresses Xen specifically. Though to be frank, I have little to no knowledge of virtualization networking, so it does not mean much to me, especially in translation, but it may help enlighten you.

FYI "Allokierung" is "allocation" and Virtualisierungsart is "virtualization type".

Yes, OpenVZ does its networking in quite a different way - for Xen, the virtual system's network interfaces are bridged at the Ethernet level to the network, so there isn't really any scope for the Xen host to do routing.
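
You can actually see this bridging on the dom0 with brctl (illustrative output only - the bridge name depends on your xend network script):

brctl show
bridge name     bridge id               STP enabled     interfaces
xenbr0          8000.feffffffffff       no              peth0, vif1.0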

Can you try the instructions on your host's wiki page? I don't speak enough German to follow them, sorry ..

Hi Jamie,

At long last I have fixed the issue. I set up the virtual interfaces correctly and the only remaining problem seemed to be the way the Xen instance config file is set up.

#############
#NON-WORKING#
#############

[root@cloud xen]# cat test.cfg
kernel = '/xen/vmlinuz-vm2-xenU'
ramdisk = '/xen/initrd.vm2.xenU.img'
memory = 256
name = 'test'
vif = [ '' ]
address = '188.40.204.130'
netmask = '255.255.255.128'
disk = ['file:/xen/test.img,sda1,w']
root = '/dev/sda1 ro'
vnc = 5901
vnclisten = "127.0.0.1"
vncdisplay = 1
vncpasswd = "passwordings"

#########
#WORKING#
#########

[root@cloud xen]# cat test.cfg
kernel = '/xen/vmlinuz-vm2-xenU'
ramdisk = '/xen/initrd.vm2.xenU.img'
memory = 256
name = 'test'
vif = [ 'ip=188.40.204.130' ]
address = '188.40.204.130'
netmask = '255.255.255.128'
disk = ['file:/xen/test.img,sda1,w']
root = '/dev/sda1 ro'
vnc = 5901
vnclisten = "127.0.0.1"
vncdisplay = 1
vncpasswd = "passwordings"

---

Is it possible for you to fix this automatically? So that, for example, there are two radio buttons, one for a bridged subnet and one for a routed subnet, which generate the former and latter configs respectively?

That reply didn't really format as I intended. But you should be able to see what I mean.

(Fixed it)

It also seems that when I bring up a virtual machine, I have to restart the networking service on the dom0 to get any network throughput.

Cool, I'm glad you got that working .. to be honest, this setup was kind of a mystery to me too.

So the only change you had to make to the .cfg file was the ip= option in the vif= line?

Also, what do you mean by restarting networking on the dom0 system? Do you mean restarting xend?

Yes, that was the only change to the config file.

What I mean is, whenever a Xen system starts up, initially there is no connectivity - but if I run "service network restart", it then proceeds to work flawlessly.

Thanks, I will have Cloudmin add that ip= option in the next release .. in fact, it seems safe to do this for all Xen guests.

Regarding the networking restart, it is quite surprising that it is needed. It would be rather drastic for Cloudmin to do this every time, as it would interrupt all other network connections to the host.

Do you know if there is a specific part of the network restart that solves the problem, such as reloading firewall rules or flushing the ARP table?
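
For example, you could try each of these separately after starting an instance, instead of a full restart (just suggestions to narrow it down, not Cloudmin commands):

ip neigh flush all          # flush the ARP table
service iptables restart    # reload firewall rules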

I will try to find this out asap. Thanks.

While it's on my mind, would it be possible to have a "Reboot" feature which will reboot virtual machines from the list of managed systems?

Jamie,

It seems the culprit of the problem is one of these things:

a) I was imagining things and the problem never actually occurred.
b) The problem has now magically fixed itself.
c) The problem is just hiding and will come back after I post this message.

So for now, forget about the "service network restart" issue. When will I be able to get the new release with the patch for the ip=xxx.xxx.xxx.xxx parameter in the config file? Also, how's the HyperVM migration feature coming along?

Thanks.

It will be a couple of weeks till a new release comes out with the ip= feature, as I just released version 3.0.

But I can send you a patch now if you like ..

Also, that mass reboot is a good idea ... that will be in the next release.

Regarding HyperVM, Cloudmin can already take over control of existing OpenVZ instances. What other features are you looking for in HyperVM migration?

A patch would be very nice, thank you Jamie.

I have a couple of other things to ask, if you don't mind. Regarding OpenVZ migration, can you briefly go over the procedure (or refer me to documentation) for how it works? Also, I am trying to work out how DNS works for the VPSes.

I have set up local master zones on the main Cloudmin server - xen.exoware.net and vz.exoware.net respectively - and records are being added to them fine and are locally resolvable. But when I create instances, there are no DNS servers that can resolve global records for general-purpose use. What is expected of the administrator in order to set this up (via Webmin, or otherwise)?

Also, please let me know if I'm starting to be a bit cheeky with my questions - with these latter queries I may be touching the border of "not Cloudmin's job, RTFM, you're the sysadmin".

Regarding DNS, you should delegate those xen and vz sub-domains from your exoware.net domain using NS records .. then they will be globally resolvable. Is that what you are looking for?
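
For example, the delegation records in your exoware.net zone might look like this (a sketch - ns1.exoware.net is a placeholder for whichever nameserver actually hosts the sub-zones):

xen    IN    NS    ns1.exoware.net.
vz     IN    NS    ns1.exoware.net.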

Regarding HyperVM, Cloudmin has an "Add OpenVZ container" menu item that can bring a running OpenVZ system under its control. However, in this case no DNS records are added .. it is expected that either a DNS entry resolving to the OpenVZ container's IP already exists, or you just add it by IP address.

Great, thanks. At the moment, when creating instances, /etc/resolv.conf is empty other than a "search exoware.net" line, which doesn't help. Is there any Cloudmin-specific way of adding nameservers for the instances to use?

The resolv.conf file should actually be copied from the host system ... or you can specify an alternate location to copy it from at Host Systems -> Xen Host Systems -> yourhost -> Source DNS resolv.conf file.
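
For example, a source resolv.conf to be copied into new instances might look like this (the nameserver addresses are placeholders - use your datacenter's resolvers or your own):

search exoware.net
nameserver 192.0.2.53
nameserver 192.0.2.54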

Thanks Jamie. Can you explain how Cloudmin expects to receive an OpenVZ container?

It just needs to be told the container name and host system, and can pick up the other settings from that.

Thanks Jamie, but this seems to just manage the OpenVZ instance on the old server. What I wish to do is actually move the container to a new server - the one Cloudmin is currently running on.

Due to an oversight by me, moving OpenVZ containers isn't supported yet .. but will be in the next release (3.1)

Hehe, glad I wasn't going crazy. Is it possible to get a patch for this as soon as it's ready? This particular issue is of the utmost importance (above all others) due to a renew-or-cancel decision on my old OpenVZ server's contract in 8 days.

If I do not transfer the containers within 8 days, I either have to let them go down (which isn't happening) or renew my contract on the server for an additional 3 months, which is pretty costly. Whilst I appreciate this is my problem and not yours, priority on this feature and a patch would be appreciated above anything else.

Sure, that should be possible ..

Hi Jamie, will you let me know when you have any patches available? Thanks.

Sure, will do .. it will be a couple of days.

I've just completed support for moving OpenVZ containers .. I will email you an updated Cloudmin version.

Hi Jamie,

I've installed the new version, but when attempting to move an OpenVZ container, I get:

Moving OpenVZ system from vpsnode1.exoware.net to cloud.exoware.net ..
Saving state of running processes ..
.. done
 
Sending process state file to new host system ..
.. done
 
Copying OpenVZ config file /etc/vz/conf/270.conf ..
.. done
 
Creating TAR file of filesystem ..
Re-fetching system status ..
.. done. New status is : Down
 
.. done

And I see nothing on the recipient physical node.

Is that the whole output? It seems to have been truncated ...

That was indeed the whole output. It just seemed to stop.

Screenshot attached of output. As soon as it reaches the last line, the browser loading animation stops as if it had finished loading.

Ok, I see it .. an error message is missing.

As a quick work-around so you can see the real error, create /etc/webmin/server-manager/custom-lang containing the line :

smove_tarfailed=.. TAR failed : $1

Then re-try the move.

TAR failed : tar: ./dev/log: socket ignored

Odd, that looks like just a warning, not an error..

Does a tar file named $hostname.tar.gz get created in /tmp or /tmp/.webmin on the source host system?

No, but it does seem to be making these files:

[root@vpsnode1 tmp]# ls -lah
total 478M
drwxrwxrwt  3 root root 4.0K Aug 29 00:50 .
drwxr-xr-x 26 root root 4.0K Jun  6 23:46 ..
-rw-r--r--  1 root root  60M Aug 29 00:17 213060_2_move.cgi
-rw-r--r--  1 root root  60M Aug 28 15:55 256662_2_backup.cgi
-rw-r--r--  1 root root  60M Aug 29 00:27 503976_2_move.cgi
-rw-r--r--  1 root root  60M Aug 28 23:04 571901_2_move.cgi
-rw-r--r--  1 root root  60M Aug 28 23:09 599420_2_move.cgi
-rw-r--r--  1 root root  60M Aug 29 00:20 603707_2_move.cgi
-rw-r--r--  1 root root  60M Aug 28 23:06 667642_2_move.cgi
-rw-r--r--  1 root root  60M Aug 29 00:50 826579_2_move.cgi
drwxrwxrwt  2 root root 4.0K Feb 28 09:11 .ICE-unix

These are valid .tar.gz archives of the system.

Ok, those seem fine..

What output do you get from the following commands :

cd /vz/private/XXXX && tar czf /tmp/123456_2_move.cgi
echo $?

where XXXX is the OpenVZ context ID of the container you want to move.

[root@vpsnode1 270]# tar czf /tmp/123456_2_move.cgi
tar: Cowardly refusing to create an empty archive
[root@vpsnode1 270]# echo $?
2

Sorry, there should be a . at the end of that tar command. The commands should really be :

cd /vz/private/XXXX && tar czf /tmp/123456_2_move.cgi .
echo $?

[root@vpsnode1 270]# tar czf /tmp/123456_2_move.cgi .
tar: ./dev/log: socket ignored
[root@vpsnode1 270]# echo $?
0

That is really odd, as that's the exact same command Cloudmin runs to tar up an OpenVZ container prior to moving it.

Just for debugging purposes, can you back up this OpenVZ system with a command like the following, run on the Cloudmin master :

vm2 backup-systems --host youropenvzsystem --dest /tmp

To debug this further, it would be useful if I could SSH into the Cloudmin master myself .. let me know if that is possible.

[root@cloud tmp]# vm2 backup-systems --host betterbrief.vz.exoware.net --dest /tmp
Finding systems to backup ..
.. found 1 systems
 
Working out backup destinations ..
.. found 1 usable destinations
 
Backing up betterbrief.vz.exoware.net to /tmp/betterbrief.vz.exoware.net.tar.gz on Cloudmin master ..
    TARing up filesystem for betterbrief.vz.exoware.net under /vz/private/270 ..
.. backup failed : TAR creation appeared to succeed, but destination file /tmp/474914_1_backup-systems.pl was not created
 
betterbrief.vz.exoware.net: FAILED TAR creation appeared to succeed, but destination file /tmp/474914_1_backup-systems.pl was not created

Login credentials to the master are on their way to your inbox.

Thanks for the login - I found a bug that causes this problem, and patched it on your Cloudmin system (and will include the fix in future releases).

Please re-try the move, and let me know if it works now..

Hi Jamie,

Thanks, the process gets a little further, but still fails.

Moving OpenVZ system from vpsnode1.exoware.net to cloud.exoware.net ..
Saving state of running processes ..
.. done
 
Sending process state file to new host system ..
.. done
 
Copying OpenVZ config file /etc/vz/conf/270.conf ..
.. done
 
Creating TAR file of filesystem ..
.. create TAR file of 59.65 MB
 
Copying 59.65 MB TAR file to new host system ..
.. done
 
Extracting TAR file on new host system ..
Re-fetching system status ..
.. done. New status is : Down
 
.. done

Looks like it's failing on the extraction. I've left the credentials as they are.

I fixed another bug, and it made more progress this time .. but oddly, failed when restoring the dump of running processes on cloud.exoware.net.

The error I get is :

Error: No checkpointing support, unable to open /proc/rst: No such file or directory

Does cloud.exoware.net have the same OpenVZ version and kernel as the old host system?
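
You can check whether checkpoint/restore support is loaded with something like this (vzcpt and vzrst are the usual module names, though they may vary by OpenVZ build):

lsmod | grep -E 'vzcpt|vzrst'
ls /proc/cpt /proc/rst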

Old:

[root@vpsnode1 270]# uname -a
Linux vpsnode1.exoware.net 2.6.18-92.1.18.el5.028stab060.2 #1 SMP Tue Jan 13 11:38:36 MSK 2009 x86_64 x86_64 x86_64 GNU/Linux

New:

[root@cloud tmp]# uname -a
Linux cloud.exoware.net 2.6.18-128.2.1.el5.028stab064.4xen #1 SMP Mon Jul 27 13:15:05 MSD 2009 x86_64 x86_64 x86_64 GNU/Linux

I think the command "modprobe vzctp" might fix the issue. Try doing what you did again - I've run the modprobe command.

I've sent you a new RPM that will allow OpenVZ systems to be moved when shut down, which avoids this problem ..

I still got that error about /proc/rst .. but was able to move betterbrief.vz.exoware.net by first shutting it down, moving it, and then starting it up. Take a look, and let me know if it looks OK.

Hi Jamie,

Thanks, that seemed to work. There is one small issue though: the IP address is still set to the old IP address from the old server (or at least, that's what Cloudmin is reporting).

I tried editing /etc/network/interfaces with a new IP, but it seems to get rewritten every time the OpenVZ instance is rebooted.

You can change a system's IP address at System Configuration -> Network Interfaces.

How do we import HyperVM Xen instances?

Assuming they use regular open-source Xen, you can import them at Add System -> Add Xen Instance.