spalmisano

Members
  1. I have a 6.3.5 unRaid environment hosting ~14TB of media for Plex/Radarr/Sonarr/etc., and I'm not convinced I'm using my cache and array settings as efficiently as I should be. There's a single parity drive plus five other array devices, as well as a 525GB SSD cache and a 32GB USB drive housing the unRaid OS. All of this is hosted on a Dell D710 with more than enough power/RAM to drive everything; if details of the hardware end up being pertinent, I'll provide them as well.

     I'm using Docker containers to run Plex, Radarr, Sonarr, NZBGet and a few other apps. I have appdata and system set to Prefer the cache device, which I'm interpreting to mean those shares will by default write data to cache, and if cache runs out of space they'll start writing to the array. When Mover runs it transfers things between the array and cache as needed based on each share's config. There's a share called media where the Linux ISOs are stored, which has 'Use cache' set to No. I'm interpreting this to mean that when the containers do their thing they'll end up putting their data onto the array. The containers have the media share mounted in their config. Does this also mean extraction/transformation from NZBGet/Sonarr/Radarr happens on the cache drive (their config is on appdata), and then everything moves to the array? I don't need anything on media immediately available on cache, but I would like to enjoy the speed with which the cache drive handles post-download processing.

     The questions: Based on the above, the cache drive should fill with what the containers download, and then immediately get transferred to the array, right? If you have a similar use case, what are you doing with your cache settings? Are there other things to take into consideration for getting optimal speed/efficiency from this kind of setup?

     Diagnostics from this afternoon are attached: media-diagnostics-20170628-1437.zip
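     For reference, unRAID records each share's cache policy in a per-share config file under `/boot/config/shares/`. The sketch below simulates two such files in a temp directory (so it runs anywhere, not just on a live server) and reads out the `shareUseCache` value for each; the share names mirror the setup described above.

     ```shell
     # Simulate unRAID's per-share config files; on a live server these
     # live at /boot/config/shares/<share>.cfg.
     cfg_dir=$(mktemp -d)
     printf 'shareUseCache="prefer"\n' > "$cfg_dir/appdata.cfg"
     printf 'shareUseCache="no"\n'     > "$cfg_dir/media.cfg"

     # Report each share's policy: "prefer" keeps data on cache (spilling
     # to the array only when cache is full); "no" writes straight to the
     # array and Mover never touches it.
     for f in "$cfg_dir"/*.cfg; do
       name=$(basename "$f" .cfg)
       policy=$(grep -o 'shareUseCache="[^"]*"' "$f" | cut -d'"' -f2)
       echo "$name: $policy"
     done
     ```

     With these two policies, downloads land wherever the download client's path points: a path inside a cache-preferred share (appdata/system) stays on the SSD until Mover runs, while a path inside media goes directly to the array.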
  2. This appears to be working. I am able to connect/authenticate with the non-admin user after a container restart. I'm guessing that user account got out of sync with the database while I was tweaking it via the command line. Thanks for the help.
  3. You mean the 'Authenticate users with' setting in the Active Configuration should be 'local'? That's what I have, and I'm still experiencing the above. The user account I log in with shows up in the UI, but I still need to use the command line to re-add that user and set its password.
  4. Yes, saw that. What I'm not clear on is why I have to update the user's password every time the container gets updated and restarted.
  5. I'm not. The user was added via the UI, and is still in the UI on container restart, but I still need to add the user via the command line in order to set its password. Authentication is set to local, not PAM.
  6. unRaid 6.3.5 and linuxserver.io's openvpn-as Docker container. I've not read through this whole thread, so if the answer exists here already, please shove me in that direction. This was originally posted at Reddit, with no resolution so far. I followed Spaceinvader One's video for setting up OpenVPN as a Docker container and was able to get everything set up correctly, save for one issue. Any time there's an update to the container, the user I've created to access the VPN (not admin) is deleted. I can manually create the user via the command line, but should I have to? Why would updating/restarting the container cause the user to get removed? The user does show up in the UI, but I can't connect using its credentials. When I do `docker exec -it openvpn-as adduser username` after restarting the container, I can connect again. What am I missing?
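     As a self-contained sketch of that workaround, the snippet below wraps the re-add step in a script you could run after each container update. `vpnuser` is a placeholder account name, and `docker` is stubbed with a shell function here purely so the example runs anywhere; on a real server the stub would be removed and the real CLI invoked.

     ```shell
     # Stub `docker` so this sketch is self-contained; delete this
     # function on a real host to invoke the actual Docker CLI.
     docker() { echo "docker $*"; }

     container=openvpn-as
     vpnuser=vpnuser   # placeholder VPN account, not the admin user

     # Re-create the VPN user inside the container after an update or
     # restart. Interactively (to get the password prompt) you would
     # run:  docker exec -it openvpn-as adduser vpnuser
     cmd_out=$(docker exec -i "$container" adduser "$vpnuser")
     echo "$cmd_out"
     ```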
  7. This was well written. Thanks for the reply. For anyone with the same issues/questions, the above, plus this thread, are both helpful.
  8. That makes sense; thanks. I'll leave it set to Only and keep an eye on the data's growth.
  9. Thanks itimpi. I'm guessing this also means I'm OK with directly specifying /mnt/cache to ensure the config data lives, and stays, on the cache. If anyone else has commentary, I'd love to hear it.
  10. My unRaid environment is about a week old and I'm working through setting up several Docker containers. The Docker portion is going well, but I'm curious about the proper usage of the cache drive for appdata. I'm currently specifying /mnt/cache/appdata for container config locations, and everything looks like it's correctly writing to the cache drive. Should I instead be using /mnt/user/appdata and setting the share's Use Cache Disk to Only? Is one different from the other? What is the best practice? I want to take advantage of the SSD speed of my cache drive, and don't want the drive's contents being written to the array. Which method is best?
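  The distinction between the two paths: /mnt/cache/appdata addresses the SSD directly, while /mnt/user/appdata is unRAID's merged (shfs/FUSE) view of the same share across cache and array disks. The toy model below simulates that merge with plain temp directories (the paths are stand-ins, not a live server) to show why a file written via the cache path still appears in the user-share view.

  ```shell
  # Toy model: /mnt/user/<share> is a merged view of /mnt/cache/<share>
  # and /mnt/disk*/<share>. Simulated here with ordinary directories.
  root=$(mktemp -d)
  mkdir -p "$root/cache/appdata" "$root/disk1/appdata"

  # Writing via the "cache" path pins the file to the SSD side...
  echo "config" > "$root/cache/appdata/plex.conf"

  # ...but a merged "user" view still lists it, which is why
  # /mnt/user/appdata with Use Cache = Only behaves like
  # /mnt/cache/appdata for reads and writes.
  merged=$(cd "$root" && ls cache/appdata disk1/appdata | grep -v ':' | grep -v '^$' | sort -u)
  echo "$merged"
  ```

  With the share set to Only, new writes through either path end up on cache, so the practical difference is mostly that the /mnt/user path keeps working unchanged if the cache layout ever changes.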
Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.