alphazo Posted August 18, 2015

Hello,

borg-backup (https://borgbackup.github.io/borgbackup/) is a fork of the excellent Attic (https://attic-backup.org/) that provides deduplicated and optionally encrypted backups. It is pretty similar to bup, which I have been using extensively. borg brings many exciting new features over Attic, including configurable chunk sizes to accommodate lower RAM (important with very large backups) and a different password-based encryption scheme. The latest git version also brings the lz4 compression scheme (in addition to zlib). One of the reasons for moving away from bup is the impossibility of pruning older backups.

I'm planning to use it on unRAID both internally, to do periodic backups/snapshots of important data (which take no additional space if nothing has changed), and also remotely from clients.

Based upon the work done by Silvio Fricke, I published two projects on Docker Hub:
- Latest git version: https://hub.docker.com/r/alphazo/borgbackup-git/
- Latest released version: https://hub.docker.com/r/alphazo/borgbackup/

You can find and install them using the new extended search feature found in the Docker Community Application plugin. I quickly tested them and was able to perform backups. I haven't gone through the generation of the unRAID template yet.

Hope this will be useful to the unRAID community.

PS: this could also be used in a distributed, encrypted, incremental and deduplicated backup scheme where you store some of your content on another (untrusted) remote unRAID machine.
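For readers new to borg, the workflow the post describes boils down to init once, then create and prune on a schedule. A minimal sketch with hypothetical paths (assumes the `borg` binary is installed and on the PATH; flag names follow current borg docs and older 0.x versions may differ):

```shell
# Hypothetical repository location; adjust for your setup.
REPO=/mnt/disks/Backup/borg-repo

# One-time: initialize an encrypted, deduplicated repository
borg init --encryption=repokey "$REPO"

# Each run: archive named by host and date; unchanged data adds no space
borg create --stats "$REPO::$(hostname)-$(date +%Y-%m-%d)" /mnt/user/important

# Expire old archives according to a retention policy
borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 "$REPO"
```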
Squid Posted August 18, 2015

You can find and install them using the new extended search feature found in the Docker Community Application plugin. Search for either alphazo or borg-backup.
Squid Posted August 18, 2015

Wouldn't be a bad idea to expose the volume /B as something a little more descriptive...
alphazo Posted August 18, 2015 (Author)

Wouldn't be a bad idea to expose the volume /B as something a little more descriptive...

Fully agree on this one. I don't know what the original author's motivation was.
jonp Posted August 18, 2015

Going to move this to the Docker subforum here.
alphazo Posted August 31, 2015 (Author)

Updated the Dockerfile so it now uses /sourcedir instead of /B. BTW, borgbackup v0.25 has been released and brings the fast lz4 compression algorithm.
Squid Posted August 31, 2015

You should create a template so that it's easier for users to access.
page3 Posted December 2, 2015

I've just spent some time reading up on Borg and it really does look like this could be a simple, easy-to-use backup solution for unRAID. Would someone consider creating a docker template and also adding BorgWeb, the web interface, to this?
s.Oliver Posted December 3, 2015

Well, this seems to be really very interesting. One of its aspects, being cross-platform, could resolve my dilemma of backing up data from other operating systems. Would be really interested in seeing this take off.
page3 Posted January 6, 2016

Anyone working on this, especially adding the GUI? Hoping 2016 will bring an easy-to-use backup solution to unRAID, and this looks like it could be the basis of one.
page3 Posted January 23, 2016

As no-one else has replied, I'm trying this now to compare it to hashbackup. I'm not sure of the benefit of a docker, as the borg executable is self-contained, just like hashbackup. With hashbackup you need to back up drives separately rather than user shares, as otherwise it will think there have been changes even when there haven't been. Does anyone know if this is also the case with borg? I.e., can I back up /mnt/user/media rather than /mnt/disk1/media, /mnt/disk2/media, etc.?
abs0lut.zer0 Posted January 23, 2016

You should create a template so that it's easier for users to access.

Hey alphazo, thank you for this. Are you planning to do a template for us docker beginners?
k2e2ni Posted January 25, 2016

page3, have you managed to get it working? I am not sure if I am missing something, but how are we meant to use this docker? When I start it, it just stops right away. Looking at it as well, it seems like it doesn't have the webUI that a lot of the other common dockers like Plex have, which to my understanding means you don't have an interface to interact with it. I see mentions of some examples on the docker's page, but have no idea where to type them, nor can I find the ini file which is meant to store configuration. Is there an advanced guide or documentation for using dockers in cases like these where they have not been properly set up with a template? Most of the guides I have been able to find seem to be for the more typical scenario.
trurl Posted January 25, 2016

Haven't tried this, but the docker run examples for it on Docker Hub use -ti, so it is going to give you an interactive command-line interface. No point in having a template for it; just run it from the command line.
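A sketch of what such an interactive run might look like, using the /sourcedir volume name mentioned earlier in the thread. The host paths and the in-container command are assumptions for illustration; check the image's Docker Hub page for the real examples:

```shell
# Illustrative only: mount a source share read-only and a backup target,
# then run borg inside the container interactively.
docker run -ti --rm \
  -v /mnt/user/media:/sourcedir:ro \
  -v /mnt/disks/Backup:/backupdir \
  alphazo/borgbackup \
  borg create --stats /backupdir::media-$(date +%Y-%m-%d) /sourcedir
```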
abs0lut.zer0 Posted January 26, 2016

You can find and install them using the new extended search feature found in the Docker Community Application plugin. Search for either alphazo or borg-backup.

Tried this, cannot find it. Don't have the knowledge to do a docker from scratch... any help? Thanks.
page3 Posted January 26, 2016

I didn't use the docker, but simply downloaded the binary and put it on my cache drive. It is running fine, but I'm having a problem with running out of disk space during a large backup. I'm having difficulty trying to determine how to change where Borg writes its cache, or how to regain the space now used on root.
trurl Posted January 26, 2016

Seems more likely you would be running out of RAM, since that is where everything except /boot and /mnt actually is, including "root". Simplest way to regain that space is just to reboot unRAID.
abs0lut.zer0 Posted January 26, 2016

What would the thread suggest for the following scenario? I have a directory of photos that I would like to back up with dedupe and maximum compression, and then copy to another network drive. What should I use?
page3 Posted January 27, 2016

Thanks. I've found Borg has two environment variables that can override where it places its keys and cache. The cache can be quite large, far bigger than RAM. I've now got it to back up 2 TB successfully and can easily mount the backup as a FUSE file system.

Borg is a self-contained binary, so all I did to get it going was create a directory (on my cache drive) and place the binary there. I then copied their example backup script and modified it for my setup (I want to back up to USB). I've set the two environment variables mentioned above within the script and all works fine.
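The two variables page3 refers to are BORG_CACHE_DIR and BORG_KEYS_DIR (they appear in the full script later in the thread). A minimal sketch of redirecting them off the RAM-backed root, using a throwaway /tmp path purely for illustration; on unRAID you would point them at the cache drive:

```shell
# Demo paths only; on unRAID use something like /mnt/cache/apps/borg/...
export BORG_CACHE_DIR=/tmp/borg-demo/cache
export BORG_KEYS_DIR=/tmp/borg-demo/keys

# borg creates these on first use, but creating them up front makes the
# redirection easy to verify before running a large backup
mkdir -p "$BORG_CACHE_DIR" "$BORG_KEYS_DIR"
echo "cache: $BORG_CACHE_DIR"
echo "keys: $BORG_KEYS_DIR"
```

Any borg command run in the same shell session will then read and write its cache and keys at those locations instead of under /root.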
abs0lut.zer0 Posted February 6, 2016

Any chance of sharing some of your scripts, please? I am having trouble trying to wrap my head around this.
page3 Posted February 7, 2016

Sure. I'm experimenting with a "daily" backup. So far, I have:

#!/bin/sh

REPOSITORY=/mnt/disks/Backup

export BORG_CACHE_DIR=/mnt/cache/apps/borg/.cache/borg
export BORG_KEYS_DIR=/mnt/cache/apps/borg/.borg/keys

# Paths currently disabled: /mnt/user/Photos, /mnt/user/media
/mnt/cache/apps/borg/borg create --compression zlib,6 --stats --progress \
    $REPOSITORY::`hostname`-`date +%Y-%m-%d` \
    /mnt/user/Lightroom \
    /mnt/user/iTunes \
    /mnt/cache/apps \
    --exclude '.DS_Store' \
    --exclude '._.DS_Store' \
    --exclude '.AppleDouble/' \
    --exclude '.Recycle.Bin/'

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. --prefix `hostname`- is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also.
/mnt/cache/apps/borg/borg prune -v $REPOSITORY --prefix `hostname`- \
    --keep-daily=7 --keep-weekly=4 --keep-monthly=6

/mnt/cache/apps/borg is on my SSD and where I put the borg executable. /mnt/disks/Backup is my external USB drive, mounted via the Unassigned Devices plugin. BORG_CACHE_DIR and BORG_KEYS_DIR move borg's cache files from /root/tmp (memory) to my cache drive (a small SSD).
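One hypothetical way to run a script like this nightly is a cron entry (the script path, schedule, and log location here are placeholders; on unRAID, the User Scripts plugin or an entry in /boot/config/go are common alternatives since crontab changes don't survive a reboot by default):

```shell
# Illustrative only: append a 03:30 nightly run of the backup script
(crontab -l 2>/dev/null; \
 echo "30 3 * * * /mnt/cache/apps/borg/daily-backup.sh >> /var/log/borg-backup.log 2>&1") | crontab -
```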
abs0lut.zer0 Posted February 7, 2016

Thanks for your script.
AcidRaZor Posted February 8, 2016

Silly question, but can this be used to push backups to Google Drive?
martinj Posted September 29, 2017

Did anyone else notice slow backups when backing up shares? My incremental backups take hours, even with no changed files. But if I add the --ignore-inode argument, the backups take a few minutes instead. Aren't inodes stable on unRAID shares?
Nischi Posted July 28, 2018 (edited)

Noticed the same thing, martinj, but I use the --files-cache=mtime,size option instead, as they say --ignore-inode is deprecated now.

I also made the terrible mistake of not backing up my .cache directory before my reboot, so I kept wondering why it always thinks my repository is unknown. Now I copy it over on array startup, the same way as I do with my ssh files from /boot/ssh.

Edited July 29, 2018 by Nischi
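For reference, the flag slots into a borg create call like this (repository and source paths are hypothetical). With mtime,size, borg decides whether a file is unchanged by its modification time and size, ignoring inode numbers, which the posts above suggest are not stable on unRAID user shares:

```shell
# Hypothetical paths; --files-cache=mtime,size avoids rescanning files
# whose inode changed but whose content did not (the slow-share symptom
# described by martinj).
borg create --files-cache=mtime,size --stats \
    /mnt/disks/Backup::"$(hostname)-$(date +%Y-%m-%d)" \
    /mnt/user/media
```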