Submitted by Jesse OBrien on Wed, 02/20/2013 - 08:59
When running a full backup to Amazon S3, the log reports a failure and specifies (database name redacted for security purposes):
Dumping MySQL database $databasename ..
.. dump failed! mysqldump: Error: 'Got error 28 from storage engine' when trying to dump tablespaces
mysqldump: Couldn't execute 'show fields from `adodb_logsql`': Got error 28 from storage engine (1030)
Backing up Webmin ACL files ..
.. done
Creating TAR file of home directory ..
.. TAR failed! cat: write error: No space left on device
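For reference, error 28 is the operating system's ENOSPC ("No space left on device"). If the MySQL client utilities are installed, perror decodes the numeric code:
$ perror 28
OS error code  28:  No space left on device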
However, the drive appears to have plenty of space:
$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             709G  567G  107G  85% /
tmpfs                 7.9G     0  7.9G   0% /lib/init/rw
udev                  7.9G  100K  7.9G   1% /dev
tmpfs                 7.9G     0  7.9G   0% /dev/shm
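For completeness, "No space left on device" can also mean the filesystem has run out of inodes even when df -h shows free blocks; df -i shows inode usage and would rule that out:
$ df -i /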
The database itself is only 154M, nowhere near enough to fill any of these partitions.
Does the backup script default to the root partition or tmpfs? And could error handling be added so that the log indicates which partition filled up?
Status:
Active
Comments
Submitted by andreychek on Wed, 02/20/2013 - 09:07 Comment #1
Howdy -- well, it does look like you have plenty of disk space (the backups use /tmp, but that's part of your / partition).
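You can confirm which filesystem /tmp lives on with:
df -h /tmp
which should report /dev/sda1 if it is indeed part of /.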
However, with both tar and mysqldump failing with disk space errors, it does look like you're seeing some sort of disk problem.
Is there perhaps a quota assigned to the root user?
You can determine that by running these two commands:
quota -u root
quota -g root
Also, what does the "mount" command output?
And lastly, does the command "dmesg | tail -30" show any errors?
Submitted by Jesse OBrien on Wed, 02/20/2013 - 10:40 Comment #2
Thanks for the quick reply. There is no quota set for root:
quota -u root
Disk quotas for user root (uid 0): none
quota -g root
Disk quotas for group root (gid 0): none
Quite a bit of work has been done on the server since this morning, so "dmesg | tail -30" would give odd results. Would you accept "dmesg | grep error" (which returned no results) as roughly equivalent?
After looking at the S3 backup bucket, it appears that Virtualmin creates backups of ALL the sites locally, then copies the whole backup directory to the destination at once. I may have misled you into thinking this was the only site on the server: there are roughly 130, and together they would likely fill that 107G and then some. If each site were backed up, copied to the destination, then deleted before the next, this would not be an issue.
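A rough way to confirm that the staged archives are what fill the disk, assuming they land under /tmp as noted in comment #1, is to watch usage grow while a backup runs:
watch -n 10 'df -h /; du -sh /tmp/* 2>/dev/null | sort -h | tail -5'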
I'm going to try resizing the root partition and running the backup again. I would still consider this a feature request, though, since large Virtualmin installations with many sites would otherwise require a massive amount of empty space just to stage and upload backups.
Submitted by andreychek on Wed, 02/20/2013 - 10:48 Comment #3
Done! That feature has been created and retroactively added to your system :-)
You can enable it by going to Backup and Restore -> Scheduled Backups -> BACKUP_NAME and, in the "Destination and format" section, checking the "Transfer each virtual server after it is backed up" option.
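For a one-off run from the shell, the rough CLI equivalent would be something like the command below; the flag names here are from memory and should be verified against "virtualmin backup-domain --help" on your install (ACCESS_KEY, SECRET_KEY, and BUCKET_NAME are placeholders):
virtualmin backup-domain --all-domains --newformat --onebyone \
  --dest s3://ACCESS_KEY:SECRET_KEY@BUCKET_NAME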
Let us know if that resolves the space issues you're seeing.
Submitted by Jesse OBrien on Thu, 02/21/2013 - 08:25 Comment #4
Excellent! I hadn't read all the documentation (which is quite good and accessible, by the way), so I wasn't sure what that checkbox did.
I'm running this backup again now. Is there a way to limit the bandwidth used during a backup as well?
Submitted by andreychek on Thu, 02/21/2013 - 09:13 Comment #5
No, unfortunately, there isn't a way to limit the bandwidth usage for file transfers, sorry!
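As a general Linux-level workaround (not a Virtualmin feature), a manual command-line backup could in principle be run under the trickle utility, which rate-limits a process's network throughput:
trickle -s -u 512 virtualmin backup-domain --all-domains --dest s3://...
Here -s runs trickle in standalone mode and -u 512 caps uploads at roughly 512 KB/s. This is untested with Virtualmin's own transfer code, so treat it as a sketch rather than a supported option.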