[Plugin] rclone


Waseh

Recommended Posts

I agree with bobbintb!

Just be aware that only the beta branch includes the scripts and GUI. I will update the stable branch (and the OP) one of the coming days :)

 

I am using the scripts now and still not seeing anything on the mount. I am using the mount script, and I have my docker container set to RW/Slave. When I go into the folder listed in the mount script, I still don't see any contents.

 

#!/bin/bash
#-----------------------------------------------------------------------------
# This script mounts your remote share with the recommended options.         |
# Just define the remote you wish to mount as well as the local mountpoint.  |
# The script will create a folder at the mountpoint.                         |
#-----------------------------------------------------------------------------

# Local mountpoint
mntpoint="/local/path" # It's recommended to mount your remote share in /mnt/disks/subfolder -
                       # This is the only way to make it accessible to dockers

# Remote share
remoteshare="remote:path" # If you want to share the root of your remote share you have to
                          # define it as "remote:" eg. "acd:" or "gdrive:"

#-----------------------------------------------------------------------------

mkdir -p $/mnt/user/disk3
rclone mount --max-read-ahead 1024k --allow-other $secret: $/mnt/disk3/Mount &

Link to comment

I have my container for Plex set up like this:

/mnt/disk3/Mount/

Access Mode: RW/Slave

 

The local mount point should be inside /mnt/disks/ if you want to share the files with your docker containers.

 

I don't see a folder for /mnt/disks/

 

I have tried /mnt/user and I have tried /mnt/disk3 but neither seems to work.  How do I get the option for /mnt/disks?

Link to comment

First of all, the point of the script is to edit it where it says mntpoint="/local/path" and remoteshare="remote:path", replacing what's inside the double quotes.

 

The script will then create all the necessary folders.

 

In your case it should be mntpoint="/mnt/disks/Mount/"

You need to delete the script folder in the plugin folder to reset the scripts and undo your changes. I will add a button in the GUI to reset the scripts as well.

 

I will try and edit the script to make it even more obvious
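For reference, a minimal sketch of what the edited variables could look like in this case (the remote name "secret" is taken from the earlier paste and is just an example; yours may differ). Note that the script expands $mntpoint itself - there is no "$" glued onto a literal path:

```shell
#!/bin/bash
# Only the text inside the double quotes gets replaced:
mntpoint="/mnt/disks/Mount"   # local mountpoint, under /mnt/disks so dockers can reach it
remoteshare="secret:"         # root of the remote named "secret"

# The plugin's script then uses these variables itself, roughly:
#   mkdir -p "$mntpoint"
#   rclone mount --max-read-ahead 1024k --allow-other "$remoteshare" "$mntpoint" &
echo "Would mount $remoteshare at $mntpoint"
```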

Link to comment

First of all, the point of the script is to edit it where it says mntpoint="/local/path", replacing what's inside the double quotes. The script will then create all the necessary folders. I will try and edit the script to make it even more obvious

 

Oh, I didn't realize those were variables; I thought they were just the instructions. I finally figured it out. I had tried manually creating the Mount folder by adding a share, but you have to let the script create that new folder.

 

Like I said, I am a noob, but thanks for the help, guys!

Link to comment

Good thread! I too ran into the issues with mount points being abandoned and getting stuck; the fusermount command to unmount cleared it right up. I found that the user scripts plugin had issues with the comments in the script for some reason; removing them and leaving the bash line cleared it.

 

I do notice folks talking about a /mnt/disks directory. Under /mnt I too have my disks listed and two user directories, user and user0. Do others have something named disks? I've been creating a mount point under one of the directories in /mnt/user/ and it seems to work fine.

 

Still figuring out how best to run this in the background and schedule it, but this seems a viable alternative to Syncovery, which I'd been looking at. Have folks figured out exactly what needs to be stored offsite to make recovery easiest? Is it simply the config file from rclone? Is anyone encrypting that file? Storing it in ACD in a zip with a password might be a good way; any reason to include the rclone binary with it, maybe? I'd hope they won't break compatibility along the line. :o

Link to comment

The user scripts plugin was recently updated to include variables that are preceded with a comment "#". That might have something to do with it, as I have not yet updated my user scripts plugin and haven't heard of anyone coming across the issue you are describing.

 

The reason people are mounting in /mnt/disks is because it is necessary if you want your dockers to have access to the mounts, such as Plex. If this doesn't matter to you then you can mount it under /mnt/user. I've got user and user0 as well.

 

It's not on the OP yet, but I have a custom script that will run sync as a daemon, so you do not need to schedule it. You would have to adjust the sync command to your liking, however, such as if you wanted to throttle bandwidth as you mentioned in the other thread. I've been using it and it's been working fine for me. The only thing you need to back up for a restore is your config file, or really just your encryption password(s). It would be easiest to just back up the config file by zipping it up in a password-protected file like you said.
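The daemon script isn't posted yet, but the idea can be sketched roughly like this (a hedged sketch only, not the actual script; the remote name "crypt", the flags, and the one-hour interval are placeholders you would adjust):

```shell
#!/bin/bash
# Sketch: run rclone sync in an endless loop so it behaves like a daemon.
sync_loop() {
    while true; do
        rclone sync /mnt/user/ crypt: --bwlimit 6M --max-size 50G
        sleep 3600   # wait an hour between passes
    done
}
# Start it in the background:
# sync_loop &
```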

Link to comment

 

The userscripts plugin was recently updated to include variables that are preceded with a comment "#". That might have something to do with it as I have not yet updated my userscripts plugin..

 

 

Just an FYI: user scripts parses the commented lines but passes the script untouched for execution. The comment lines do not have to be present (and even if they are, they are just comments in bash and PHP, and probably other languages).

 

Any issues with user scripts and comments are probably fixed with today's update. But a super simple workaround is to put a dummy command (something like cat /dev/null) before any legitimate comments in a script, as user scripts stops processing once it hits a non-comment line.
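In other words, the workaround looks something like this (cat /dev/null is just a harmless no-op; any real command works):

```shell
#!/bin/bash
cat /dev/null   # dummy command: user scripts stops parsing comments after a non-comment line
# These comments are now invisible to the user scripts parser,
# but bash still treats them as ordinary comments.
status="script body runs normally"
echo "$status"
```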

 

Sent from my LG-D852 using Tapatalk

 

 

Link to comment

What I'm trying to tell you is there's no such directory as "disks" on my system, and apparently on the systems of others too. Do I need Docker installed in order for that to appear? I'm not using it and I don't know, but please be aware that not everyone has that directory, and it can be confusing when others mention it. I have an ESX server that runs many of my external sorts of programs in VMs, but I'll get around to Docker and KVM eventually :)

 

I look forward to seeing rclone as a daemon. Still puzzling it out, but I have run a successful test of both it and Syncovery; ACD is rocking for me! Close to half a TB went up, no problem. How long does rclone need to scan a large datastore? I'm wondering if I should do this all in one pass or not - it'll take "awhile" lol. No problems starting where it left off; I wish it could tweak files a la rsync. Lots to learn, but I really appreciate the plugin to access the scripts more easily!

 

P.S. Looks to be nothing native in rclone to throttle its bandwidth? I can do this at my firewall, but it's a shame it's not native, as it certainly has the ability to be a hog!

 

Edit: I was wrong, there is a bandwidth limit: it's --bwlimit. Looks like there's also a 50 gig limit on ACD. Using --max-size=50G should solve that. I've got a ton of system backups with 500 gig files :( There's also a dry run flag, -n, but it can still take ages to return. I am sending output to a log file with --log-file but cannot access it via Windows, as its permissions are whacked since I run as root, so I need to sort that.

Link to comment

What I'm trying to tell you is there's no such directory as "disks" on my system, and apparently on the systems of others too. Do I need Docker installed in order for that to appear? I'm not using it and I don't know, but please be aware that not everyone has that directory, and it can be confusing when others mention it. I have an ESX server that runs many of my external sorts of programs in VMs, but I'll get around to Docker and KVM eventually :)

The "disks" folder is normally created by the Unassigned Devices plugin.  Setting the mount point to be within /mnt/disks is really only required if you need the mounts created by this plugin to be utilized by a docker container.  If you don't require that, then it doesn't matter where the mount is located.

 

Perhaps it would be a good idea for this plugin to automatically create the disks folder upon installation to eliminate any confusion.

Link to comment

What I'm trying to tell you is there's no such directory as "disks" on my system, and apparently on the systems of others too. Do I need Docker installed in order for that to appear? I'm not using it and I don't know, but please be aware that not everyone has that directory, and it can be confusing when others mention it. I have an ESX server that runs many of my external sorts of programs in VMs, but I'll get around to Docker and KVM eventually :)

The "disks" folder is normally created by the Unassigned Devices plugin.  Setting the mount point to be within /mnt/disks is really only required if you need the mounts created by this plugin to be utilized by a docker container.  If you don't require that, then it doesn't matter where the mount is located.

 

Perhaps it would be a good idea for this plugin to automatically create the disks folder upon installation to eliminate any confusion.

 

I went ahead and installed the Unassigned Devices plugin, and I now have a disks folder. However, it's completely empty and I see no way to get to it via SMB etc., so I'm not sure what good it would do for me. Assigning mount points under an existing share works well for being able to see the cleartext progress of what's being loaded to ACD.

 

One interesting issue I'm puzzled about, though, that I somehow hoped the disks share might solve: I appear to have two main subdirectories that I need to back up, user and user0. At first glance the data in them appeared to be the same, but then I began noticing there were files in each of them that didn't exist in the other - or the sorting was wacky ::) I've tried specifying two different targets for rclone, but this appears to be a no-go. I'm not certain what to do - two different jobs run back to back, maybe? Has anyone else gotten around this? I could jigger mount symlinks or something, but I don't want to accidentally recurse something or end up moving data twice. I've got enough data that doing just user will take a month or so; hopefully a solution presents itself, or I'm stupid and missed something lol

Link to comment

You can make an SMB share just fine from the /mnt/disks mount point. In unRAID, go to the Settings tab, then SMB, and add the share under the Samba extra configuration. Here's my example:

 

[Amazon]
   path = /mnt/disks/Amazon
   read only = yes
   guest ok = yes

 

Of course, that is a somewhat roundabout way of doing it - mounting a network share of a network share - but I didn't want to have to install rclone on another machine just to view the files. I'm only doing it for a visual reference.

 

Also look at this:

https://lime-technology.com/forum/index.php?topic=45880.0

Link to comment

What I'm trying to tell you is there's no such directory as "disks" on my system, and apparently on the systems of others too. Do I need Docker installed in order for that to appear? I'm not using it and I don't know, but please be aware that not everyone has that directory, and it can be confusing when others mention it. I have an ESX server that runs many of my external sorts of programs in VMs, but I'll get around to Docker and KVM eventually :)

The "disks" folder is normally created by the Unassigned Devices plugin.  Setting the mount point to be within /mnt/disks is really only required if you need the mounts created by this plugin to be utilized by a docker container.  If you don't require that, then it doesn't matter where the mount is located.

 

Perhaps it would be a good idea for this plugin to automatically create the disks folder upon installation to eliminate any confusion.

 

I went ahead and installed the Unassigned Devices plugin, and I now have a disks folder. However, it's completely empty and I see no way to get to it via SMB etc., so I'm not sure what good it would do for me. Assigning mount points under an existing share works well for being able to see the cleartext progress of what's being loaded to ACD.

 

One interesting issue I'm puzzled about, though, that I somehow hoped the disks share might solve: I appear to have two main subdirectories that I need to back up, user and user0. At first glance the data in them appeared to be the same, but then I began noticing there were files in each of them that didn't exist in the other - or the sorting was wacky ::) I've tried specifying two different targets for rclone, but this appears to be a no-go. I'm not certain what to do - two different jobs run back to back, maybe? Has anyone else gotten around this? I could jigger mount symlinks or something, but I don't want to accidentally recurse something or end up moving data twice. I've got enough data that doing just user will take a month or so; hopefully a solution presents itself, or I'm stupid and missed something lol

/mnt/user is the contents of all the shares including the cache drive

/mnt/user0 is the contents of all the shares excluding the cache drive

 

You would only need to back up user, not user0

 

Sent from my SM-T560NU using Tapatalk

 

 

Link to comment

Is /mnt/disks empty even when rclone is mounted?

 

I'm not mounting to it. Actually, I have my mount in a bad state right now: both rclone and fusermount are telling me the device or resource is busy, and if I attempt to ls the mounted directory, I'm informed that the transport endpoint isn't connected. I'll cycle the box and clear that when I don't have anyone else trying to access it :) This happened after breaking a mount command with rclone, then unmounting, then trying a mount script via user scripts. Not sure what occurred, but I see an update to user scripts just came down that fixes some PHP errors, so I'll blame it on that ;)

 

Truly appreciate the clarification on user and user0, whew! I'd have gotten around to looking it up eventually.

 

I like the idea of adding a read-only Amazon share. I've gone ahead and used your code in the extras section. I assume this will come into play upon a reboot? I won't mess with it for now, but doing it read-only seems a smart way to go. When I fix the issue with my current mount, I'll move to using this one - thanks!

 

For anyone who cares, here's the rclone command I'll be using to back things up for now; examples always seem to be helpful for me. My target name is crypt, and the configuration will create a subdirectory on the target for me. I have a 100 megabit connection, and I'm reserving 6 megabytes for upload. I'll be generating a log file that I'll try to tail in a shell. I'm limiting file size to 50 gigabytes, as that's the ACD max file size and my backups (stupidly) exceed that :(

 

 rclone --log-file /mnt/user/work/rclone.log --max-size=50G --bwlimit=6M sync /mnt/user/ crypt:

 

Edit: Bah, the state of the mount prevents me from running it. Server shutdown is going poorly too; the endpoint being in a user share likely isn't helping. May have a lengthy parity check in my future. Okay, finally fusermount -u worked, and sure enough the server stopped fine, whew.

Link to comment

It would be empty if you just created it and haven't mounted anything to it yet. If things got messed up with the mount, you can either reboot or force kill the mount command. I'm pretty sure you don't need to reboot for the SMB share to work. You may have to restart SMB or stop and start the array, but you might not even need to do that. If you want to kill the mount command that is stuck, find the process ID with

 

ps aux | grep mount

 

then just use the "kill" command followed by the process number. You may need to do a "kill -9". Run the first command again to see if it was stopped.
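Putting those steps together, a rough cleanup sketch (a clean unmount first, then the kill fallback; /mnt/disks/Mount is an example path, not your actual mountpoint):

```shell
#!/bin/bash
# Try a clean unmount first, then fall back to killing the rclone mount process.
mnt="/mnt/disks/Mount"   # example path; adjust to your own mountpoint
fusermount -u "$mnt" 2>/dev/null || {
    # [r]clone keeps grep from matching its own process line
    pid=$(ps aux | grep '[r]clone mount' | awk '{print $2}')
    [ -n "$pid" ] && kill $pid   # escalate to kill -9 if it survives
}
result="cleanup attempted"
echo "$result"
```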

Link to comment

The "disks" folder is normally created by the Unassigned Devices plugin.  Setting the mount point to be within /mnt/disks is really only required if you need the mounts created by this plugin to be utilized by a docker container.  If you don't require that, then it doesn't matter where the mount is located.

 

Perhaps it would be a good idea for this plugin to automatically create the disks folder upon installation to eliminate any confusion.

 

That's a very good idea!

Link to comment

I've been using htop to find the process and kill it after the first time. Stopping the server kills it as well, without a full reboot. User scripts doesn't properly kill the script, but it's not hard to find. Honestly, how are they expecting you to properly stop a mount with rclone? The fusermount unmount command won't stop it if it's running, will it? Must break or kill the process, then use fusermount? Seems clunky if that's right, but I know it's still under development.

 

Comment on the rclone plugin: if I edit a script and hit apply to save it, I get taken back to the top config script (from memory). To have it moved, I have to drop back down to the script I just saved and move it. It might be better to apply and not go back to the top, so I can just move it? Or have I missed something? :)

 

P.S. If you're moving large multi-gig files, use the --acd-upload-wait-per-gb switch. This sets a wait state that kicks in after each gig of data. It's used because Amazon can take up to 30 seconds for the data to be posted; if rclone doesn't see it and this doesn't make it wait, rclone will start uploading the file again and spiral into a loop. Need to add this to mine lol

 

Edit: Looking at the notes for 1.34, released recently, it *looks* like the wait per gb already has a default and scales by the size of the file, so this might not need to be set unless you see errors.

 

* Amazon Drive

      * New wait for upload option `--acd-upload-wait-per-gb`

        * upload timeouts scale by file size and can be disabled

Link to comment
