[container] joch's s3backup



What exactly does this do? Does it strictly copy files from unRAID to S3, or does it perform a sync between unRAID and S3? So if you were to delete a file off unRAID, would the object be removed from your S3 bucket?

It's basically up to you. The default behaviour is to sync, but *not* delete files on the remote which have been deleted locally. You can, however, enable that behaviour by adding the "--delete-removed" flag to S3CMDPARAMS.
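For illustration, here's roughly what the sync the container runs looks like in both cases (a sketch; the bucket name is just an example, the real target comes from S3PATH):

# default: new and changed files are uploaded, nothing is ever deleted in the bucket
/usr/local/bin/s3cmd sync /data/ s3://my-backup-bucket/

# with S3CMDPARAMS set to "--delete-removed", files deleted locally are also removed from the bucket
/usr/local/bin/s3cmd sync --delete-removed /data/ s3://my-backup-bucket/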

13 hours ago, UntouchedWagons said:

Hi there. How do I find out why s3cmd did not upload anything to my bucket? I configured your container to run every day at 3 AM and supplied brand new access and secret keys, but when I checked the bucket this afternoon nothing had been uploaded. I checked the container's log and there were no messages of any sort.

Hm, that's strange. Try adding "--verbose" to S3CMDPARAMS; the extra output may tell you what's going on.
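For example, the setting would end up looking something like this (flags are space-separated, so it can be combined with anything already in there):

S3CMDPARAMS="--verbose"
# or, together with the delete option discussed above:
S3CMDPARAMS="--verbose --delete-removed"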

On 9/16/2017 at 4:29 PM, UntouchedWagons said:

Okay, I added that parameter but there's still nothing in the log or in my bucket.

Strange, let's do some debugging. First, enter the container:

docker exec -ti S3Backup bash

Then run the command manually, adding the verbose flag:

/usr/local/bin/s3cmd sync $S3CMDPARAMS -v /data/ $S3PATH

If that doesn't show anything, run the command with the debug flag:

/usr/local/bin/s3cmd sync $S3CMDPARAMS -d /data/ $S3PATH

 

To exit the container, just type "exit".

 

Did any of that help you find out the reason why it doesn't work?

On 4/10/2020 at 4:10 PM, troyan said:

It doesn't work: https://photos.app.goo.gl/4ypj73YkTf9SUvqn8

When I connect to the Docker container, should I edit /root/.s3config and set delete_removed = False?


 

Editing the config file shouldn't be necessary.

 

If you're in the container, does "echo $S3CMDPARAMS" show "--delete-removed"?

 

If it does, then try running the command (in the container) manually to see if it works: "/usr/local/bin/s3cmd sync $S3CMDPARAMS /data/ $S3PATH"
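Step by step, that check looks like this (assuming the container is named S3Backup, as in the earlier examples):

# open a shell inside the container
docker exec -ti S3Backup bash
# confirm the flag actually made it into the environment
echo $S3CMDPARAMS
# run the same sync the scheduled job runs
/usr/local/bin/s3cmd sync $S3CMDPARAMS /data/ $S3PATH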

 

10 hours ago, joch said:

Editing the config file shouldn't be necessary.

If you're in the container, does "echo $S3CMDPARAMS" show "--delete-removed"?

If it does, then try running the command (in the container) manually to see if it works: "/usr/local/bin/s3cmd sync $S3CMDPARAMS /data/ $S3PATH"

 

OK, I have made the change and added it to S3CMDPARAMS.
I'll see tomorrow if it's OK :)

Thanks

 


New unraid and s3backup user here.  First, this container is great, and thanks for providing it!

 

One question though: I'm not sure if I'm doing something wrong, but every time I start the container, I get another cron entry. Now /etc/cron.d/s3backup has 8 entries (the first one and 7 duplicates).

 

I can't figure out how to edit the cron file or how to make it stop duplicating entries. I know this file has to be stored somewhere, but there aren't any disk mappings beyond the share I'm backing up. Any help is appreciated.

On 7/31/2020 at 3:16 PM, Michael Hacker said:

[...] every time I start the container, I get another cron entry. Now /etc/cron.d/s3backup has 8 entries (the first one and 7 duplicates). [...]

Hi! That sounds weird. Did you manage to solve it, or are you still having an issue with this? How are you running the container?
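For reference, a quick way to inspect the cron file from the host is (assuming the container is named S3Backup):

docker exec -ti S3Backup cat /etc/cron.d/s3backup

Since that file lives in the container's own filesystem rather than on a mapped volume, removing the container and re-adding it from the template should also give you a fresh copy of it.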

18 hours ago, cptechnology said:

I may be stupid, but I don't understand this part: "Just mount the directories you want backed up under the /data volume". I can't seem to find any /data volume on my unRAID server?

There is nothing wrong with asking questions! What I meant by that is that you need to mount the folders you want backed up under /data in the Docker container, e.g. the host path /mnt/user/Documents needs to map into the container as /data/Documents in order for it to be backed up. You can add as many of those mappings as you like for everything you want backed up.
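As a sketch in plain docker run terms (the image name and the second share are just examples; in unRAID you add these as path mappings on the container's settings page):

docker run -d --name S3Backup \
  -v /mnt/user/Documents:/data/Documents \
  -v /mnt/user/Photos:/data/Photos \
  joch/s3backup

Anything mounted somewhere under /data gets included in the backup.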


I'm trying out this container, hoping to automate my AWS backups with a lightweight solution! At the moment I use a Windows app called FastGlacier to manually backup files to AWS, but obviously an automated solution would be better.

 

I have installed the container and set my configuration as per the following:

 

[screenshot: container configuration settings]

 

As I want to use Glacier for backup and its lower cost, my storage-class command parameter is set to GLACIER, which according to https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/sync.html is a supported option.

 

After starting the container and checking the logs, I saw this error saying that the bucket didn't exist:

 

[screenshot: "bucket does not exist" error in the container log]

 

I don't know if there's a way to create the bucket automatically if it doesn't exist, but in the meantime I consoled into the container and used s3cmd to make a new bucket. I then checked the bucket exists and also verified that I could manually use s3cmd to upload a file from my container's 'data' path:

 

[screenshot: s3cmd bucket creation and test upload]

 

But I am now seeing a similar issue to the one joeschmoe saw: the job fails due to an existing S3 lockfile:

 

[screenshot: lockfile error in the container log]

 

Based on joeschmoe's findings, I assume this is because s3cmd doesn't like my storage-class parameter being set to GLACIER, yet the same option runs fine when I run s3cmd sync manually from the command line. I can also see in the AWS management console that the file uploaded successfully to the bucket.

 

I would really like to get this working, so I'd be grateful for any pointers! :) I did try restarting the container several times in case that would help clean out /tmp, but it didn't change the behaviour.

13 hours ago, TangoEchoAlpha said:

[...] But I am now seeing a similar issue to the one joeschmoe saw: the job fails due to an existing S3 lockfile [...]

 

Hi! Try removing the lock file from the Docker container, like this:

docker exec -ti S3Backup rm -f /tmp/s3cmd.lock

 

1 hour ago, joch said:

 

Hi! Try removing the lock file from the Docker container, like this:


docker exec -ti S3Backup rm -f /tmp/s3cmd.lock

 

Hi Joch -

 

Sorry, I meant to add to this thread earlier this morning. I added the --verbose flag to the command parameters and now I am successfully uploading to S3 straight into the Glacier storage class. Maybe the lock file got cleaned up in the interim, maybe it's a coincidence, but it's working!

 

Am I right in thinking that this will not support either client side or server side encryption due to the need to do the MD5 hash as part of the file comparison?

 

Thanks 😀

On 4/19/2021 at 12:05 PM, TangoEchoAlpha said:

[...] Am I right in thinking that this will not support either client side or server side encryption due to the need to do the MD5 hash as part of the file comparison?

Hi! Great to hear.

 

You can enable transparent server side encryption on the S3 bucket side of things.
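For example, default encryption can be switched on for the whole bucket from the AWS CLI (the bucket name is just an example; the same setting is available in the S3 console under the bucket's properties):

aws s3api put-bucket-encryption --bucket my-backup-bucket --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

S3 then encrypts the objects at rest itself, so unlike client-side encryption it doesn't interfere with s3cmd's MD5-based file comparison.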

On 4/18/2021 at 12:17 PM, TangoEchoAlpha said:

As I want to use Glacier for backup and the lower cost

 

There is also Glacier Deep Archive. It is slower to retrieve the data if you do actually need to access it (the thawing time is 12-48 hours depending on what you want to pay), but it is a fraction of the price of standard Glacier.

[screenshot: storage class pricing comparison]

 

On 4/29/2021 at 5:47 AM, joch said:

You can enable transparent server side encryption on the S3 bucket side of things.

 

^^ Do this please.

@joch do you have a write-up for users to help them create the IAM user and policy, disable public access, enable encryption etc? Some general ways that the users can harden their buckets?
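(For reference, a couple of those steps can be done straight from the AWS CLI; the bucket name is just an example:

# block all forms of public access to the backup bucket
aws s3api put-public-access-block --bucket my-backup-bucket --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

plus the put-bucket-encryption command shown earlier for default encryption.)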

 

6 hours ago, mmwilson0 said:

 

There is also Glacier Deep Archive. It is slower to retrieve the data if you do actually need to access it (the thawing time is 12-48 hours depending on what you want to pay), but it is a fraction of the price of standard Glacier.

 

 

Thanks, I found the command-line name for it (DEEP_ARCHIVE) after I made those previous posts. But as you say, for big files you don't need quick access to, it's massively cheaper!
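For anyone else following along, the only change needed is the storage-class flag in the extra s3cmd parameters, something like this (exact syntax may vary slightly between s3cmd versions):

S3CMDPARAMS="--storage-class=DEEP_ARCHIVE"
# which makes the scheduled job effectively run:
/usr/local/bin/s3cmd sync --storage-class=DEEP_ARCHIVE /data/ $S3PATH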


I've been using this simple yet very useful app for many years now. 

 

I'm just reinstalling it on a new UNRAID server, but when I click "Install" from the UNRAID Apps page, the web page stalls: the UNRAID "busy" animation is shown and the "Add Container" settings page never appears.

 

I can't see anything in the log files showing any error.

 

I have tried restarting Docker and the UNRAID server, but this did not help. Other Docker apps are not affected and bring up their "Add Container" settings pages with no problem.

 

Edit: I was able to install the S3Backup container manually by using the Add Container option on the Docker page.

