LXC problems

Hello Jamie,

I bought a license for one of the two servers I have, and am trying to register the host itself for running LXC containers.

But when I try to register the host I get: Failed to save LXC host : The selected system cannot support LXC containers : The /cgroup filesystem is not mounted

I have read a lot about this, but haven't found anything that makes sense.

When I run "mount" I get: cgroups on /sys/fs/cgroup type tmpfs (rw,uid=0,gid=0,mode=0755)

Can you help me out here?

Once this works I will have to buy another license for the other server and do the same.

Thanks.

Status: 
Closed (fixed)

Comments

Howdy -- just to verify, when you run that "mount" command and see the "cgroups" line, is that command being run on the remote system, the one you're planning to set up LXC on?

If so, you might be seeing a bug -- but assuming I understand the issue correctly, there's a simple workaround we can use until we can correct that in the next Cloudmin version.

I think the issue here may be that Cloudmin expects the cgroups filesystem to be at /cgroup, when on your system it is at /sys/fs/cgroup.
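A quick way to confirm where the cgroup filesystem actually lives is to extract the mount point from the mount table. A minimal sketch, using the line reported above as sample input rather than live mount output:

```shell
# Sample line copied from the report above; on a real system you would
# parse the output of `mount` or the contents of /proc/mounts instead.
mount_line="cgroups on /sys/fs/cgroup type tmpfs (rw,uid=0,gid=0,mode=0755)"

# Field 1 is the device name ("cgroup" or "cgroups"), field 3 the mount point.
cgroup_path=$(printf '%s\n' "$mount_line" | awk '$1 ~ /^cgroups?$/ { print $3 }')
echo "$cgroup_path"   # prints /sys/fs/cgroup
```

If this prints /sys/fs/cgroup rather than /cgroup, you are in exactly the situation described above.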

OK, but how do I safely correct this?

Can I still create the container on the large extra disk (sdb1)? The Cloudmin install is on the small SSD RAID and is only for the OS.

Thanks, by the way, for your answer.

--edit-- I see there's an /etc/cgconfig.conf. Can I just do a mkdir /cgroup

and change the lines to:

cpu = /cgroup/cpu;
cpuacct = /cgroup/cpuacct;
devices = /cgroup/devices;
memory = /cgroup/memory;
freezer = /cgroup/freezer;
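For context, those assignments live in the mount section of cgconfig.conf; the edit being proposed would look roughly like this (a sketch using the paths above; check cgconfig.conf(5) on your distribution for the exact syntax):

```
mount {
    cpu     = /cgroup/cpu;
    cpuacct = /cgroup/cpuacct;
    devices = /cgroup/devices;
    memory  = /cgroup/memory;
    freezer = /cgroup/freezer;
}
```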

Hi,

Yes, it's run on the server I've installed Cloudmin, KVM and LXC on.

Conrad

Ok, so did that change to /etc/cgconfig.conf solve the problem for you?

No, unfortunately not. After restarting cgconfig, cgred, etc., the cgroup filesystem is still mounted on /sys/fs/cgroup.

Even a reboot did not help.

Is there a cgroups line in your /etc/fstab file?

I tried, but if I put it there and do a "mount /cgroup", it says it's busy or already mounted. But it isn't.
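For reference, the kind of /etc/fstab line being attempted here would look roughly like this (a sketch; the filesystem type and options vary by distribution, and on Ubuntu 12.04 the hierarchy is normally mounted per-controller under /sys/fs/cgroup instead):

```
# hypothetical fstab entry mounting the cgroup filesystem at /cgroup
none  /cgroup  cgroup  defaults  0  0
```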

Ok, I think the only fix here will require a code change in Cloudmin. I can send you a beta with the fix if you like though?

If that will be an easy update, yes please.

Next week the two servers will be in production.

Another question: I now have a licensed master server (10 VMs).

If I set up master-slave replication for the configs only, I will need another license for 10 machines, right?

Thanks Jamie,

Conrad Maayen

Ok, just save the file attached to this bug report as /usr/share/webmin/server-manager/lxc-type-lib.pl and then run /etc/webmin/restart to apply the patch.

Jamie,

I did the above, but no success.

Failed to save LXC host : The selected system cannot support LXC containers : The /cgroup filesystem is not mounted.

Stopping Webmin server in /usr/share/webmin
Starting Webmin server in /usr/share/webmin
Pre-loaded server-manager/server-manager-lib-funcs.pl in server_manage
Pre-loaded WebminCore

Conrad

Now, after a full reboot, the error while adding the server as an LXC host is:

Failed to save LXC host : The selected system cannot support LXC containers : The required command lxc-cgroup does not exist. Perhaps the LXC software is not installed?

But on the command line I can run the command lxc-cgroup.

Conrad

What directory is the lxc-cgroup command in on the host system? It may be one that Cloudmin doesn't check.

which lxc-cgroup

/usr/bin/lxc-cgroup

Now the error is suddenly: The /cgroup filesystem is not mounted

Could it be that Cloudmin is looking for the mount /cgroup, while the default mount in Ubuntu appears to be cgroups, as shown in /etc/mtab?

Conrad

Ok, that looks like a separate issue - I have attached a further update to lxc-type-lib.pl to this bug.

Jamie,

I have replaced the file lxc-type-lib.pl,

but when registering the host I still get the message:

Failed to save LXC host : The selected system cannot support LXC containers : The /cgroup filesystem is not mounted.

On the second, clean server I installed lxc and bridge-utils.

When I run "mount", there too the mount is "cgroups" (with an s), so that's the default in Ubuntu.

Can you post the full output from the mount command?

/dev/sda1 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sdb1 on /mnt/data type ext3 (rw)
cgroups on /sys/fs/cgroup type tmpfs (rw,uid=0,gid=0,mode=0755)

Ok, I see now .. my previous fix didn't handle that properly.

Try editing that lxc-type-lib.pl file again, and change line 2043 to:

if ($out !~ /(cgroup|cgroups)\s+on\s+(\/cgroup|\/cgroups|\/sys\/fs\/cgroup)\s/) {
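To sanity-check that pattern against the mount output before restarting Webmin, the same match can be approximated with grep -E (Cloudmin itself does this in Perl; the mount line below is copied from the report above):

```shell
mount_line="cgroups on /sys/fs/cgroup type tmpfs (rw,uid=0,gid=0,mode=0755)"

# Roughly the same test as the patched line 2043: accept "cgroup" or
# "cgroups" mounted on /cgroup, /cgroups or /sys/fs/cgroup.
if printf '%s\n' "$mount_line" | \
   grep -Eq 'cgroups?[[:space:]]+on[[:space:]]+(/cgroups?|/sys/fs/cgroup)[[:space:]]'; then
    echo "cgroup mount detected"
fi
```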

Yes!! After restarting the Webmin service, my host is now a registered LXC host.

The only thing now is that on a default Ubuntu 12.04, when I try to create a new container, I get the message: Failed to create system : LXC host system does not support memory limits

But thanks for the help and patience :)

Conrad

However much I like Webmin and Virtualmin Pro, which I use on multiple systems, I do not have much confidence in Cloudmin.

Now LXC is running, but setting up a container does not work.

After creating a container, it will not start:

lxc-start: failed to attach 'vethKT12vV' to the bridge 'lxcbr0' : No such device
lxc-start: failed to create netdev
lxc-start: failed to create the network
lxc-start: failed to spawn 'mail.osec.nl'
lxc-start: No such file or directory - failed to remove cgroup '/sys/fs/cgroup/cpu/sysdefault/lxc/mail.osec.nl'

But I did create the network (br0) settings as stated in the documentation on the Cloudmin site.
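One thing worth checking in the error above: lxc-start is looking for a bridge named 'lxcbr0', while the bridge that was created is br0, so the container's lxc.network.link setting may simply name a bridge that doesn't exist. For comparison, a br0 definition in /etc/network/interfaces on Ubuntu 12.04 looks roughly like this (a sketch; eth0 and the use of DHCP are assumptions):

```
auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```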

I have to go live with two hosts which can manage the virtual servers, but I think it isn't going to work. I was hoping it would work as easily as Virtualmin and Webmin, but ...

Conrad

Sorry to hear you've been hitting all these problems with LXC - it may be due to changes in how LXC is set up on Ubuntu 12.04 versus older distributions.

Too bad.

I do not have enough confidence that this is going to work. I really like using Webmin and Virtualmin Pro, but in this case Cloudmin is not going to work for me. I just installed two Proxmox servers and everything is just the way I want it now.

When I have more time I will experiment some more with Cloudmin.

Is there a refund policy for the Cloudmin license?

Conrad

We're sorry that it's not working well for you!

If at some point you wanted to troubleshoot it more, we'd happily work with you and correct these issues.

LXC does get less testing and use than other VM types though, and it sounds like we need to do some more testing of that on newer platforms to figure out what might have changed.

If you would like a refund, you're certainly welcome to one, and we can have that to you shortly.

If you would rather wait a little bit and see how long it takes to resolve the LXC issues, we can do that as well.

Do you have any thoughts on that?

Hi,

Ubuntu 12.04 LTS is not that new, is it? :)

I wish it had run well, because then I would have had Webmin, Virtualmin and Cloudmin, but I didn't have the time to correct the issues. The fact that our time difference is about 8 hours is also not very helpful.

I removed Cloudmin and installed Proxmox PVE; now I have KVM and OpenVZ out of the box. It's bare metal, so a lot less freedom, but it is already in production now and running OK.

I hope in the future I will have some more time to test Cloudmin, because I really like the concept.

I am also still using several Virtualmin Pro servers and am very happy with them. Although I hope the end-user interface will soon get an update :) Love the screenshots.

So for now I would appreciate the refund, and I hope to try some more in the future. Thanks for your help so far.

Conrad Maayen

A refund is no problem.

Jamie, can you issue a refund of $149 for Order #9084?

We'll look into the LXC issues that you ran into on Ubuntu 12.04.

Sure, this has been refunded.

Automatically closed -- issue fixed for 2 weeks with no activity.