When editing the disk size under Resources > Manage Disks, I received this error:

Updating Disk
mktnextc.cloudmin.tchod.com Updating virtual disk on /home/xen/mktnextc.img .. .. update failed : e2fsck -f -p \/home\/xen\/mktnextc.img failed : root: Inodes that were part of a corrupted orphan linked list found. root: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY. (i.e., without -a or -p options)

Status: 
Closed (fixed)

Comments

Wow, it looks like the disk already has some serious filesystem corruption on it.

Can you boot this VM, and if so does it report any filesystem errors internally?
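If it does boot, a quick way to look for filesystem errors from inside the guest is something like the two commands below; treat this as a rough sketch, since log locations and the exact ext version vary by distribution.

dmesg | grep -iE 'ext[234]'    # kernel messages about ext2/3/4 filesystem errors
grep ' / ' /proc/mounts        # if the root filesystem shows "ro" here, it was remounted read-only after an error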

Starting up mktnextc.cloudmin.tchod.com .. .. failed :

PTY PID: 14577 xenconsole: Could not read tty from store: No such file or directory

Your support is very slow... Can you be faster?

Are you still seeing this problem? It looks like the VM mktnextc is running.

By the way, your system seems pretty heavily loaded, which is probably contributing to these issues. You have only 4 cores, but are running 9 VMs.
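If you want to verify that yourself on the host, a rough sketch of what to look at (this assumes the xm toolstack that older Xen versions ship with; newer setups use xl with the same subcommands):

nproc       # number of CPU cores on the host
uptime      # load averages; values consistently well above the core count suggest overload
xm list     # running Xen domains, with their memory and VCPU allocations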

Updating virtual disk on /home/xen/mktnextc.img .. .. update failed : e2fsck -f -p \/home\/xen\/mktnextc.img failed : root: Inodes that were part of a corrupted orphan linked list found. root: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY. (i.e., without -a or -p options)

What should I do here to resolve this error?

You could try stopping the VM, and then running fsck against the image on the Cloudmin host with a command such as this:

e2fsck -f /home/xen/mktnextc.img
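For reference, the full sequence on the Cloudmin host would look roughly like the sketch below. It assumes the Xen domain is simply named mktnextc and that the image file holds the filesystem directly rather than a partition table (the usual layout for Cloudmin file-backed disks); if the image is partitioned, you would need to map it with losetup/kpartx before running fsck.

xm shutdown mktnextc                 # or stop the VM from the Cloudmin UI, then confirm it is gone from "xm list"
e2fsck -f /home/xen/mktnextc.img     # answer the repair prompts, or add -y to accept every suggested fix

Once the check completes cleanly, start the VM again from Cloudmin.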

I'm only running two VMs with Webmin, and even so the machine won't start up.

I believe the problem is not a lack of system resources.

Can you look into this for me? Webmin won't come up on the VM, and I need it working.

Refreshing Status mktnextc.cloudmin.tchod.com Re-fetching system status .. .. done. New status is: Webmin down

This bootup/Webmin issue was resolved in this request here:

https://www.virtualmin.com/node/24978

If you're still having problems resizing the disks, let us know, but I'll mark this as fixed in the meantime.

Automatically closed -- issue fixed for 2 weeks with no activity.