Any chance of xfs Centos 7.1 image?

I need to use XFS in the VM to mimic what we do on a live machine (this would be a test VM). There is no Cloudmin XFS image at present. So, I went down the path of an empty system and installation from a CentOS 7.1 virtual CD. I got it installed, changed the boot method to hard disk, used one partition, and it sort of boots but ends up at a GRUB prompt.

IF, instead of one partition, I let CentOS partition automatically, it DOES boot; however, I cannot resize the XFS disk from Cloudmin.

So, what is the correct way to get CentOS 7.1 installed with XFS such that disk resizes work and everything can be fully managed from Cloudmin? If I can get there, I can then build a new image to make more VMs just like it.

Status: 
Active

Comments

I was able to find a way, on my tenth try, to get CentOS 7 loaded on XFS. Now I want to resize the partition, as I made it too small, and will need to grow it over time anyway.

The VM is shut down. I go to Manage Disks in Cloudmin, change the capacity, click save, and get:

Updating virtual disk on /dev/VGDATA/testsolr_cloudmin_commservhost_com_img .. .. update failed : (mkdir -p \/var\/.webmin\/597891_15472_2_save_disk.cgi && mount -t xfs /dev/loop1 /var/.webmin/597891_15472_2_save_disk.cgi && xfs_growfs -D 13105152 \/var\/.webmin\/597891_15472_2_save_disk.cgi ; ex=$? ; umount /var/.webmin/597891_15472_2_save_disk.cgi ; rmdir /var/.webmin/597891_15472_2_save_disk.cgi ; exit $ex) failed : sh: ((: mkdir -p /var/.webmin/597891_15472_2_save_disk.cgi && mount -t xfs /dev/loop1 /var/.webmin/597891_15472_2_save_disk.cgi && xfs_growfs -D 13105152 /var/.webmin/597891_15472_2_save_disk.cgi ; ex=0 ; umount /var/.webmin/597891_15472_2_save_disk.cgi ; rmdir /var/.webmin/597891_15472_2_save_disk.cgi ; exit : division by 0 (error token is "/.webmin/597891_15472_2_save_disk.cgi && mount -t xfs /dev/loop1 /var/.webmin/597891_15472_2_save_disk.cgi && xfs_growfs -D 13105152 /var/.webmin/597891_15472_2_save_disk.cgi ; ex=0 ; umount /var/.webmin/597891_15472_2_save_disk.cgi ; rmdir /var/.webmin/597891_15472_2_save_disk.cgi ; exit ")

A VM using XFS should work fine - just make sure that you create it with a single partition on the virtual disk, and the filesystem directly inside that partition. Don't use RAID or LVM, or else Cloudmin will be unable to perform resizes.

I probably should have closed this case and opened another, see comment #1.

So are you still seeing problems with resizing disks with XFS filesystems? If so, make sure you have the XFS tools installed on the host system too.

rpm -qa | grep xfs
xfsprogs-3.2.1-6.el7.x86_64

The host system uses XFS. It was installed via the Cloudmin installer, of course, so I would presume all the tools are there. My comment #1 shows exactly what Cloudmin displayed to me on the Resources -> Manage Disks screen after changing and saving the new capacity.

As an aside, it did change the LVM volume to the new size, and I had to boot the VM and use xfs_growfs, which did work. However, is something wrong with Cloudmin perhaps?

So I just did a test by installing CentOS 7 using XFS into an empty KVM instance on a CentOS 7 host, and was able to resize the disks just fine.

Which is what I did, and you have the error message, so you should be able to figure out why it failed.

Unfortunately it isn't clear from the error what went wrong - the only useful message in there is "division by 0", which is coming from the xfs_growfs command.

You might want to try it again with a different size (like not such a big increase) and see if it still fails.

So, I tried to change the capacity of the down system from 50GB to 51GB and got this:

Updating virtual disk on /dev/VGDATA/testsolr_cloudmin_commservhost_com_img ... .. update failed : (mkdir -p \/var\/.webmin\/375365_25980_2_save_disk.cgi && mount -t xfs /dev/loop1 /var/.webmin/375365_25980_2_save_disk.cgi && xfs_growfs -D 13367296 \/var\/.webmin\/375365_25980_2_save_disk.cgi ; ex=$? ; umount /var/.webmin/375365_25980_2_save_disk.cgi ; rmdir /var/.webmin/375365_25980_2_save_disk.cgi ; exit $ex) failed : sh: ((: mkdir -p /var/.webmin/375365_25980_2_save_disk.cgi && mount -t xfs /dev/loop1 /var/.webmin/375365_25980_2_save_disk.cgi && xfs_growfs -D 13367296 /var/.webmin/375365_25980_2_save_disk.cgi ; ex=0 ; umount /var/.webmin/375365_25980_2_save_disk.cgi ; rmdir /var/.webmin/375365_25980_2_save_disk.cgi ; exit : division by 0 (error token is "/.webmin/375365_25980_2_save_disk.cgi && mount -t xfs /dev/loop1 /var/.webmin/375365_25980_2_save_disk.cgi && xfs_growfs -D 13367296 /var/.webmin/375365_25980_2_save_disk.cgi ; ex=0 ; umount /var/.webmin/375365_25980_2_save_disk.cgi ; rmdir /var/.webmin/375365_25980_2_save_disk.cgi ; exit ")

Not sure where the division by 0 is. This is on the latest CentOS 7.1 with all patches applied. Almost nothing extra is loaded, just wget and perl.

It may be worth trying a manual filesystem expansion. Try SSHing into your Cloudmin master system as root and running the command :

cloudmin mount-system --host your.vm.name

this will output a mount point directory on the host system. Then SSH into the host, and run :

xfs_growfs -D 13367296 /path/to/mount/point

and see if that fails or not. Finally, SSH into the master system again and run :

cloudmin umount-system --host your.vm.name
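For reference, the three steps above can be collected into one dry-run script (run from the master; "your.vm.name", the mount point, and the block count are placeholders, as in the commands above):

```shell
# Dry-run sketch of the manual expansion steps above. Swap "echo" for direct
# execution to actually run them; the VM must be shut down first.
VM=your.vm.name
BLOCKS=13367296                  # 4 KiB blocks, as shown in the error output

run() { echo "would run: $*"; }

run cloudmin mount-system --host "$VM"              # on the master; prints a mount point
run xfs_growfs -D "$BLOCKS" /path/to/mount/point    # on the host, at that mount point
run cloudmin umount-system --host "$VM"             # on the master again
```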

This is the root FS; I wouldn't think that would be good. BUT, as mentioned, while it fails from Cloudmin, it does resize the LVM disk image, which means after a startup I can grow the FS from inside the VM.

So, there is indeed a workaround. However, seems like a bug somewhere.

I even made another CentOS 7.1 system (with all CentOS updates) and still cannot resize it. I'm good; just wondering if you want to figure out what the bug is for others.

Yes, it looks like the LVM and partition resizes worked OK, but the filesystem resize failed.

That's why I'm interested to know if a filesystem resize from the host system (which is what Cloudmin does) works OK, using the commands I posted above.

OK, I would be glad to try, but I'm not sure of your sequence here...

The cloudmin mount-system command gives the message: "Only down systems can be mounted"

Sorry, I forgot to mention that you have to shut down the VM first..

I see; I guess I was misunderstanding the instructions. Step 2 was to "Then SSH into the host". Since the term was different (it didn't say master), I presumed you meant the VM, not the master, which made no sense of course. Since you said to SSH three times, it seemed like they were three different systems.

Ok, I get it, but I don't want 51GB. Since I cannot shrink the machine, and it's currently good at 20GB, we could try 21GB, if I knew what -D should be for that. So, I think for 21GB, it should be 21*1024*1024*1024/4096 = 5505024.

So, it still fails, since that's an impossible size:

data size 5505024 too large, maximum is 5242624

That's because that's how big the LVM disk size is. So, there is a missing step of increasing the LVM size, right? Can I do that using webmin without messing up the virtual disk?

The -D parameter is in 4k blocks - odd, but that's what the xfs_growfs command expects.
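That conversion is easy to check with shell arithmetic. A small sketch (the helper name is mine, not a Cloudmin command); interestingly, the figure Cloudmin passed for 51GB above (13367296) is 2048 blocks, i.e. 8 MiB, below the raw conversion, which would be consistent with the partition starting at an offset inside the disk image:

```shell
# Convert a size in GiB to the 4 KiB block count that xfs_growfs -D expects.
gib_to_blocks() {
    echo $(( $1 * 1024 * 1024 * 1024 / 4096 ))
}

gib_to_blocks 21   # 5505024
gib_to_blocks 51   # 13369344
```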

Yes, I know. So, isn't there the missing step of first upping the limit of the LVM virtual drive? After that, xfs_growfs can be used to grow it.

No, Cloudmin already does that - to expand a disk, it first increases the LVM volume size, then increases the size of a partition within the volume, then expands the filesystem. However, it looks like the last step is failing on your system for some reason..
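A rough manual equivalent of that three-step sequence, sketched as a dry run (the LV path, partition number, and mount point are made-up examples, and the commands Cloudmin runs internally may differ):

```shell
# Dry-run sketch of the expand sequence: LV first, then partition, then filesystem.
# These commands modify disks; swap "echo" for real execution only on a test system.
LV=/dev/VGDATA/example_vm_img    # example LV path
MNT=/mnt/example-vm-root         # where the guest filesystem is loop-mounted

run() { echo "would run: $*"; }

run lvextend -L +1G "$LV"            # 1. grow the LVM logical volume
run parted "$LV" resizepart 1 100%   # 2. grow the partition inside it
run xfs_growfs "$MNT"                # 3. grow the mounted XFS filesystem
```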

No no no! You are asking me to do what cloudmin does, manually. You are asking me to manually grow the system via xfs_growfs, without the step of increasing the lvm volume. Please review what you asked in comment 10 for the manual expansion, there is no step listed to increase the lvm volume size.

I am saying isn't this procedure wrong, and, shouldn't I modify the lvm volume size in webmin before the grow?

Right, I am assuming that you already tried to increase the virtual disk size in Cloudmin, but it failed at the filesystem expansion step? If so, the LV will have been left already expanded..

xfs_growfs -D 5504768 /mnt/kvm-testsolr.cloudmin.commservhost.com
meta-data=/dev/loop1             isize=256    agcount=27, agsize=196544 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=5242368, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 5242368 to 5504768

BTW, I don't think you really need the -D, as it's supposed to grow to whatever fits by default, I believe. Anyway, it worked fine when mounted from the master host, as you can see.

That's really odd - Cloudmin effectively uses the exact same command, so I have no idea why xfs_growfs would be crashing when run as part of Cloudmin's resize process.

I assume the VM was shut down when you did this resize?

Yes, it was, in both cases.

I might have some useful info for you... I'll try to explain. To do what I did above, I changed the size from 20GB to 21GB in Cloudmin -> Resources -> Manage Disks. Of course, it failed with the same error I reported, but that has the side benefit of showing me the xfs_growfs command. I executed that command from the master host as you requested, and it "worked", as I said. HOWEVER, by the next day the VM had crashed. The error was that XFS was trying to write beyond the end of the "volume". You cannot shrink XFS, so I simply changed it to 22GB and used xfs_growfs without the -D parameter. It has not crashed since.

Looking at xfs_info, it was clear that the filesystem was indeed sized larger than the LVM volume. I'm not sure why xfs_growfs lets you do that without an error; perhaps because it's done on the master machine against an LVM volume rather than a real disk. Perhaps the calculation is wrong somehow, not excluding the boot area or other metadata. You might try it without the -D. -D is useful for non-LVM setups, but when resizing a VM stored on an LVM logical volume from Cloudmin, I would think you'd want the maximum size 100% of the time?

Interesting, that may explain it. Cloudmin has to use the -D flag because the size of the volume sometimes cannot be detected automatically by the xfs_growfs command, but I've seen issues with other filesystem types in which some rounding down was needed to get the correct size. I will change the code to do this for XFS as well, which should fix the issue..
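One way to round down safely is to derive the -D value from the actual device size rather than the requested GB figure; a sketch, assuming the size in bytes comes from `blockdev --getsize64` on the loop device (the byte count below is an example, exactly 21 GiB):

```shell
# Compute an xfs_growfs -D value from the real device size; integer division
# rounds down to whole 4 KiB blocks, so the filesystem cannot exceed the device.
DEV_BYTES=22548578304      # example; in practice: blockdev --getsize64 /dev/loop1
BLOCK=4096
echo $(( DEV_BYTES / BLOCK ))   # 5505024
```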