Submitted by andymorton on Sun, 12/06/2015 - 15:38
When backing up to an S3 bucket, the files are uploaded successfully, but when Virtualmin finds old backups to delete, the deletions fail with the following message:
Deleting file backups/backup-20151101-0001/mydomain.co.uk.tar.gz, which is 35 days old .. .. deletion failed : Completed 1 part(s) with ... file(s) remaining
The old backups remain intact, even though they are older than the limit (which in this case is 31 days).
The backup log ends with the line:
.. found 63 backups, but none were old enough
But they were old; it just didn't delete them!
If it makes a difference, I have the AWS command line tools installed (to get round the problem with the empty HTTP headers...)
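For anyone checking the same symptom, a quick way to confirm the objects and their ages is to list them directly with the CLI (bucket name here is taken from the log line above):

aws s3 ls s3://backups/ --recursive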
Status:
Closed (fixed)
Comments
Submitted by JamieCameron on Sun, 12/06/2015 - 21:03 Comment #2
That message "Completed 1 part(s) with ... file(s) remaining" is interesting, as it's the error message Amazon is providing.
How large are these backup files that would be deleted?
Submitted by andymorton on Mon, 12/07/2015 - 03:46 Comment #3
They range from 15MB to 500MB.
Submitted by JamieCameron on Mon, 12/07/2015 - 23:13 Comment #4
Ok, that's not unreasonably large. Is the bucket you are backing up to in the US, Europe, or some other region?
So I just did some tests to try to reproduce this problem, but the purging of old backups went fine.
Submitted by andymorton on Sat, 12/12/2015 - 05:07 Comment #5
The backup is to the US, and the box itself is situated in Europe. It's a bit odd - is there a way to throw the aws-cli into debug, or to get the command that Virtualmin is calling...?
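For reference, the AWS CLI does have a global --debug flag that prints the full request/response exchange; running the failing delete by hand with it (path taken from the log above) would look like:

aws --debug s3 rm s3://backups/backup-20151101-0001/mydomain.co.uk.tar.gz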
Submitted by JamieCameron on Sat, 12/12/2015 - 18:18 Comment #6
You could try running the aws command Virtualmin uses, which is just:
aws s3 rm s3://backups/backup-20151101-0001/mydomain.co.uk.tar.gz
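If the delete goes through, a follow-up head-object call on the same key (s3api syntax, same bucket and key) should then report Not Found:

aws s3api head-object --bucket backups --key backup-20151101-0001/mydomain.co.uk.tar.gz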
Submitted by andymorton on Tue, 12/22/2015 - 09:31 Comment #7
I'll do that - in the meantime, does Virtualmin have a recommended user policy that will give S3 users the correct level of access?
At the moment this is mine:
{ "Statement": [ { "Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "arn:aws:s3:::" }, { "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:GetBucketLocation" ], "Resource": "arn:aws:s3:::BucketName" }, { "Effect": "Allow", "Action": [ "s3:PutObject", "s3:GetObject", "s3:DeleteObject" ], "Resource": "arn:aws:s3:::BucketName/" } ] }
Submitted by JamieCameron on Tue, 12/22/2015 - 12:49 Comment #8
We haven't really looked into the "minimum" S3 access required for backups - generally we just use a user who has full access to the destination bucket.
Submitted by andymorton on Wed, 01/06/2016 - 15:02 Comment #9
This turned out to be an S3 permissions problem; I had to use the CLI to diagnose it. For reference, here is the final IAM policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::bucket-name",
        "arn:aws:s3:::bucket-name/*"
      ]
    }
  ]
}
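For anyone applying this, one way is an inline policy via the CLI; the user name and policy name below are just placeholders, and the JSON above is assumed to be saved as policy.json:

aws iam put-user-policy \
  --user-name backup-user \
  --policy-name virtualmin-s3-backup \
  --policy-document file://policy.json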