Quiks

Members
  • Posts: 30
  • Joined
  • Last visited


Quiks's Achievements: Noob (1/14)
Reputation: 8

Recent Posts

  1. Hi, I recently changed the security on my Windows shares. Previously everyone had read/write/etc. access and I was able to mount my shares in unRAID without issue using this plugin. I've now created a single user (domain\user) that has write access to these shares, but I cannot mount the share using the following info:

     IP/Host: 192.168.1.60
     Username: domain\user
     Password: password
     Share: R

     I ran into a similar issue using cifs in Ubuntu and had to specify domain=domain in the command or credentials file. Is there a way to do a domain=xxx in this plugin without using the command line? Clicking mount on the R share refreshes the page but yields the same result. Logs:

     Jun 1 10:29:23 Tower unassigned.devices: Mount of '//192.168.1.60/R' failed. Error message: mount error(13): Permission denied Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
     Jun 1 10:34:40 Tower unassigned.devices: Mount SMB share '//192.168.1.60/R' using SMB1 protocol.

     Thanks!
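     For reference, this is roughly what fixed it for me on Ubuntu, using the same share details as above (the mount point and credentials-file path are just examples):

         # One-off mount, passing the domain explicitly (SMB1 per the log above)
         mount -t cifs //192.168.1.60/R /mnt/r -o username=user,password=password,domain=domain,vers=1.0

         # Or keep the secrets in a credentials file (chmod 600) that includes a domain= line:
         #   username=user
         #   password=password
         #   domain=domain
         mount -t cifs //192.168.1.60/R /mnt/r -o credentials=/root/.smbcred,vers=1.0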
  2. I ended up migrating ruTorrent off of my unRAID box onto a Hyper-V Docker server I have. No issues there, with a nice low 0-2% CPU usage. It might have been an unRAID issue; I noticed VM performance increase after upgrading to 6.4. Did you upgrade to the latest version to see if your Docker container's performance also increased? Sorry I can't be of more help, since I ditched the platform in this case :(.
  3. Try adding another app into it and see if you can get that working. Post your conf file and I'll eyeball it, but I'm by no means an nginx expert.
  4. You just have to wait for a fix, or for Let's Encrypt to accept ports other than 80/443 =P. I'm betting this container will be fixed before that, though.
  5. Maybe try restarting nextcloud? Can you access it locally (not through nginx)? Is it only nextcloud having an issue?
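     A quick way to test local access, bypassing the proxy entirely (LAN IP and port are placeholders for your setup):

         # status.php should return a small JSON blob if nextcloud itself is up
         curl -k https://<lan-ip>:<port>/status.php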
  6. Are you accessing it the same way? What do you see instead of your nextcloud page? My only issue was getting my certificate pushed; after that, everything worked as normal. You should be able to go to your public IP address:port instead of the domain and have it work as well (albeit without the pretty "secure" icon), assuming you have this allowed in your conf.
  7. Just tried HTTPVAL = true, forwarded port 80 to my exposed HTTP port (router 80 → host 90 → container 80), and it did the trick. Hopefully they fix this so I can close port 80 back up. Edit: for anyone else that needs to know where to edit this, it's under the container's advanced settings.
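     For anyone running this outside unRAID's template UI, a rough docker run equivalent of the above (other required settings like the email and domain variables are omitted; port numbers are from my setup):

         # HTTPVAL=true makes the cert challenge use plain HTTP on port 80;
         # the router forwards external 80 to host 90, which maps to container 80
         docker run -d --name letsencrypt \
           -e HTTPVAL=true \
           -p 90:80 \
           -p 443:443 \
           linuxserver/letsencrypt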
  8. Like others, I'm also getting the challenge error as well as the "no such file or directory" problem. Firstly, it's complaining about /config/keys/letsencrypt. This is a symlink that points to /etc/letsencrypt/live/domain.com. I can't verify whether this is correctly linked inside the container, because the container stops immediately once started; there's no time to docker exec in and see what's wrong. Has anyone come to a conclusion on what's going on with this file error? I haven't tried the HTTPVAL fix yet, as I'm dealing with the directory problem first. I'd also prefer not to have to forward port 80.
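     Since the container dies before you can exec in, a couple of ways one could still inspect that symlink (container name and unRAID appdata path assumed):

         # /config is bind-mounted from the host, so the link is visible there
         ls -l /mnt/user/appdata/letsencrypt/keys/letsencrypt

         # Or copy it out of the stopped container; docker cp keeps symlinks as symlinks
         docker cp letsencrypt:/config/keys/letsencrypt /tmp/le-keys && ls -l /tmp/le-keys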
  9. New question: the CPU indicator is showing 100% usage, but my processor is running at ~50%. I docker exec'd into the container and it doesn't seem to be showing high usage either. Is this indicator not accurate, or is there some underlying problem that needs to be addressed? docker stats doesn't show too much either. Any help is greatly appreciated. Thanks!
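     Roughly what I ran to cross-check (container name assumed; neither showed real load):

         # Per-container CPU/memory as Docker measures it
         docker stats --no-stream rutorrent

         # Process list from inside the container
         docker exec rutorrent top -bn1 | head -n 20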
  10. Hi, I find that when I restart the container, any labels I have assigned torrents are removed. Is there something I'm missing? Does it only save data like this every so often? edit: it looks like the above isn't saved instantly. I'm unsure of the interval, but when I left it alone for a few minutes and restarted, the labels, ratio, etc, saved. Another, separate problem, likely not the problem of this container. When downloading a torrent with only 1 file, it gets dumped into the main download folder. Is it possible to have every torrent created with a folder of its torrent name in the save path? The reason I ask, is I'd like to automate the removal of torrents past a certain number of days with the ratio group set to "remove data (all)". The reason I want to use "remove data (all)" is so that it deletes unrarred contents as well. Thanks in advance!
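      On the save interval: rtorrent persists labels and ratio data in its session directory, and you can make it save more often. A sketch, assuming the container uses a standard rtorrent.rc with the older-style schedule syntax (newer builds use schedule2):

          # rtorrent.rc -- save session state (labels, ratios, etc.) every 5 minutes,
          # so a container restart loses at most that much
          schedule = session_save,300,300,session_save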
  11. Good to know. I didn't even know apps existed for this. Neat
  12. You don't mount things that way; a container-side /music isn't going to be used for anything, because nextcloud keeps files in each user's directory. To better explain: /data is where all the files get mapped. (The screenshots of the folder layout didn't survive here; the two blocked-out folders shown under /data were usernames in nextcloud, each containing a files directory.) Inside files is where all of that user's files are stored. To my knowledge, you can't mount things the way you are thinking; they have to be placed in the specified user's folder. You could create a symlink from the files directory to /music if you wanted, I guess (see the sketch below), but it seems counterintuitive unless you need that share for multiple users in nextcloud.
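      If you do go the symlink route, a sketch of what it might look like (the username, container name, and occ path are assumptions; nextcloud also has to rescan before the files show up, and it can be picky about symlinks):

          # Link the share into one user's files directory...
          ln -s /music /data/<username>/files/Music

          # ...then have nextcloud index it (abc is the container's php user)
          docker exec -u abc nextcloud php /config/www/nextcloud/occ files:scan <username>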
  13. I did read it, but I think I may have missed some settings. I was successfully able to upload the 3.5 GB file that had failed on my Android client, so the back end of nextcloud is working fine; I probably need to change some nginx settings.

      Edit: changing a few settings allowed me to upload >2 GB files via the webui. In php.ini I mapped the tmp directory to /config/www/nextcloud/data/upload-tmp (with a chown abc:abc on it) and set:

          max_execution_time = 7200
          max_input_time = 7200
          post_max_size = 16400M
          upload_max_filesize = 16000M
          memory_limit = 1024M

      I was also having issues using the letsencrypt container in conjunction with this one. When I upload a file to the local webui (this container), I see the temp file created immediately as it uploads. When I used the remote link (letsencrypt nginx proxying to nextcloud's nginx), no tmp file was created until the upload completed. To fix this, set your letsencrypt config file to something similar to the below:

          location / {
              proxy_pass_header Authorization;
              proxy_pass https://192.168.1.185;
              proxy_set_header Host $host;
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_http_version 1.1;
              proxy_set_header Connection "";
              proxy_buffering off;
              proxy_request_buffering off;
              client_max_body_size 0;
              proxy_read_timeout 36000s;
              proxy_redirect off;
              proxy_ssl_session_reuse off;
          }
  14. Hi, does anyone have any issues uploading files >2 GB? I don't get any error messages. I've been having issues and thought the webui was causing it (maybe nginx?), but the file uploads successfully to wherever it goes (tmp folder?). After it uploads through the interface, it pieces the file together in a .part file; the webui displays a 100% full bar saying "a few seconds" while it assembles this .part file into the finished file. This is where the file has problems: it gets stuck around 2.2 GB every time for a 3.3 GB file, though the exact byte size it gets stuck at isn't the same each time. When I tried a 1.3 GB file, it did everything above except it actually completed and stopped being a .part file. So my issue seems to stem from being denied the ability to allocate >2 GB to a single file.

      Everything I read regarding 2 GB limits and nextcloud mentions 32-bit. I assume that since my unRAID install is "Linux 4.9.30-unRAID x86_64", that shouldn't be an issue, right? (A quick check for this is sketched below.) From some searching, I made some changes to the default php.ini and to .user.ini (the latter got changed when I set the webui max filesize to 20 GB), but I think they are in vain, since the file itself uploads just fine; it just can't build the file from the tmp cache afterward. The docker log has nothing in it, and neither does the php log or the nginx error log. I heard someone talking about container-related issues with nextcloud and the tmp folder filling up, but I don't think that's an issue either; my docker image is at about 2 GB of 20 GB. Of all the research I've done, it doesn't seem like people hit issues at the point where the file is assembled from chunks after upload like I do. Is there anything I can provide to help figure out what may be causing this? Thanks in advance!
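      Since everything about a 2 GB ceiling points at a 32-bit build somewhere in the chain, a quick sanity check one could run against the container's PHP (container name assumed):

          # PHP_INT_SIZE is 8 on a 64-bit build; 4 here would explain a 2 GB file cap
          docker exec nextcloud php -r 'echo PHP_INT_SIZE * 8, "-bit", PHP_EOL;'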