Jorgen

About Jorgen

  • Birthday: 02/04/1976
  • Gender: Male
  • Location: Manly, Australia


Community Answers

  1. Yes, unfortunately. Unless your library and everything else Apple likes to store in iCloud comes in under 5GB…
  2. One thing that could lead to some of those symptoms is running out of space in the /downloads directory. For example: I have my downloads go to a disk outside the array, mounted by Unassigned Devices, and somehow the disk got unmounted but the mount point remained. This caused Deluge to write all downloads into a temp RAM area in Unraid, which filled up quickly and caused issues. I never found any logs showing this problem; I just stumbled upon it by chance while troubleshooting.
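     A quick way to rule this kind of thing out is to check whether the download path is actually a mount point (a minimal sketch; /mnt/disks/downloads is an example path, substitute your own):

         if mountpoint -q /mnt/disks/downloads; then
           echo "downloads disk is mounted"
         else
           echo "NOT mounted - writes would land in the RAM-backed mount point"
         fi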
  3. Since you're new to Unraid, have you looked at Spaceinvaderone's video guides? There are tweaks you can do on the Windows side to get it to work better as an Unraid VM. I had similar CPU spiking issues until I tweaked the MSI interrupt settings inside Windows. The Hyper-V changes in this thread also helped, of course. I'm not actually sure if the MSI interrupts were covered in this video series; it could also have been in:
  4. 1. Stop container
     2. Back up the Prowlarr appdata folder
     3. Delete everything in the Prowlarr appdata folder
     4. Start container
     You can also uninstall the container after step 1 and reinstall it after step 3.
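     In shell terms, the steps look something like this (a sketch only; it assumes the container is named prowlarr and appdata lives at /mnt/user/appdata/prowlarr, so adjust to your setup):

         docker stop prowlarr                                              # 1. stop container
         cp -a /mnt/user/appdata/prowlarr /mnt/user/appdata/prowlarr.bak   # 2. backup
         find /mnt/user/appdata/prowlarr -mindepth 1 -delete               # 3. delete contents
         docker start prowlarr                                             # 4. start container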
  5. I had problems with this inotify command. It ran and created the text file, but nothing was ever logged to it. I can only get it to log anything by removing *[!ramdisk] AND pointing it at /mnt/cache/appdata. Just curious if anyone can explain why this is? I have my appdata share defined (cache: prefer setting) as /mnt/cache/appdata for all containers and as the default in Docker settings, but I would have thought that /mnt/user/appdata should still work?
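     For reference, the kind of watch in question looks roughly like this (a sketch assuming inotify-tools is installed; paths and format are examples):

         inotifywait -m -r \
           --timefmt '%F %T' --format '%T %w%f %e' \
           -o /mnt/user/appdata/inotify.log \
           /mnt/cache/appdata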
  6. Yes, Deluge only does torrents; you'll need something like NZBget for Usenet.
  7. Use localhost instead of the Docker network IP.
  8. Sounds like the motherboard, but it's definitely not certain. The first step would be to hook up a monitor and keyboard directly to the server, power up, and see if it gets past the POST stage. If it doesn't, you'll need to dig into the beep codes to identify which component is faulty. This will be your first hurdle: I have the same mobo and it doesn't have a built-in speaker, so you'll need to rig something up yourself...
  9. Yeah, that should work. It looks like other Ubiquiti products auto-renew the self-signed cert on boot if it's within a certain number of days from expiry. Not sure if UniFi does the same?
  10. Ah, OK. The controller already ships with a self-signed cert; you should be able to extract it from /config/data/keystore, or even download it from the controller web page using the browser's "inspect certificate" functions. I assume Safari has those somewhere. Unless you need it for your own domain name; then you'll need to create it with pfSense and import it into the keystore as per above.
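     Extracting it from the keystore would look something like this (a sketch only, using the default alias and password mentioned in the next answer; your keystore may differ):

         keytool -exportcert -rfc -alias unifi \
           -keystore /config/data/keystore \
           -storepass aircontrolenterprise \
           -file unifi-selfsigned.pem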
  11. Depends on your situation. To start with, you need your own domain pointing to your UniFi controller IP. This guide will walk you through creating a new cert specifically for your UniFi domain/sub-domain: https://community.ui.com/questions/UniFi-Controller-SSL-Certificate-installation/2e0bb632-bd9a-406f-b675-651e068de973 I think you need to register for the UniFi forum to access it. It also has info on how the default keystore works. For this Docker the files are in /config/data (which is also mapped to your appdata share). You need to create a new keystore using the "unifi" alias and the default password "aircontrolenterprise". All commands can be run from the Docker console.
      If you already have an existing wildcard cert for your domain, you should be able to import it. You'll need to turn it into a PKCS12, then convert that to a keystore that UniFi will accept. Something like this if you have a private key and signed cert: https://stackoverflow.com/a/8224863
      Caveat: I never got it to work for me. My controller is only available on my LAN, I don't have an existing wildcard cert for my domain and didn't want to pay for one, and using the free certs from Let's Encrypt required a public IP + a refresh every 90 days, which seemed complicated for this use case. So I put it in the too-hard basket.
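      The PKCS12-to-keystore conversion described above goes roughly like this (an untested sketch; the file names are examples, the alias and password are the defaults mentioned above):

          # Bundle the private key and signed cert into a PKCS12 file
          openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem \
            -name unifi -password pass:aircontrolenterprise -out unifi.p12

          # Import it into the keystore the controller reads on startup
          keytool -importkeystore \
            -srckeystore unifi.p12 -srcstoretype pkcs12 \
            -srcstorepass aircontrolenterprise \
            -destkeystore /config/data/keystore \
            -deststorepass aircontrolenterprise \
            -srcalias unifi -destalias unifi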
  12. Since the topic of mismatched Docker path mappings comes up quite often with Radarr/Sonarr and download clients, maybe this diagram helps to visualize the three levels of folder config and how they interact? The important thing to realize is that the application running inside the Docker container knows nothing about the Unraid shares. It can ONLY access folders you have specifically added as a container path in the Docker config.
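      As an illustration (made-up paths, not anyone's actual config): with a mapping like the one below, Radarr sees only /downloads and /movies, and the /mnt/user/... paths on the left simply don't exist inside the container:

          docker run -d --name radarr \
            -v /mnt/user/downloads:/downloads \
            -v /mnt/user/media/movies:/movies \
            linuxserver/radarr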
  13. Q25 here: https://github.com/binhex/documentation/blob/master/docker/faq/vpn.md
  14. Nice solution @TurboStreetCar and thanks for sharing!
  15. Oh OK, that file is in the Docker image and needs to be patched with your changes every time you update the container. I was thinking of scripting the change via "extra parameters", but after some research it appears that is not available. See this thread for background and a potential workaround using user scripts: /topic/58700-passing-commandsargs-to-docker-containers-request/?do=findComment&comment=670979
      The Deluge daemon needs to be started with the --logrotate option for it to work, and it's started by one of binhex's scripts that is part of the image. So you're in the same situation as with your log modifications: either binhex updates the image to support logrotate, or you need to patch that script yourself.
      For persistent logs, I think logrotate would be the better option, but there are other ways. Here are some random thoughts, in no particular order of suitability or ease of implementation (a sketch of the second one follows below)…
      - A user script parses the logs on a schedule and writes the required data into a persistent file outside the container
      - A user script simply copies the whole log file into persistent storage (you'll end up with lots of duplication though)
      - Write your own Deluge plug-in to export the data to a persistent file
      - Identify another trigger to script your own log file, e.g. are the torrents added by Radarr, which might have better script support?
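      The "copy the whole log file" option could be as simple as a scheduled user script along these lines (a rough sketch; the log path is a guess, point it at wherever your container actually writes its log):

          #!/bin/bash
          # Copy the current deluged log to persistent storage with a timestamp
          SRC=/mnt/user/appdata/binhex-delugevpn/deluged.log
          DEST=/mnt/user/backups/deluge-logs
          mkdir -p "$DEST"
          cp "$SRC" "$DEST/deluged-$(date +%F-%H%M).log"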