thaddeussmith

Members
  • Content count: 191

Community Reputation: 0 Neutral

About thaddeussmith
  • Rank: Advanced Member
  • Gender: Male
  • Location: Dallas
  1. I just RMA'd a 5TB Red and received a 6TB Red in return. Fortunately it was my parity drive, or I would have been screwed.
  2. Assign the SSD to your cache, and then keep those files in a cache-only share.
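For reference, a share's cache behavior lives in its config file on the flash drive. A minimal sketch, with a hypothetical share name, and the exact keys may vary by unRAID version:

```ini
# /boot/config/shares/appdata.cfg  (example share)
# "only" pins the share's files to the cache pool; mover will
# not migrate them to the array.
shareUseCache="only"
```

The same setting is exposed in the webGUI under the share's "Use cache disk" option.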
  3. Unraid doesn't see the underlying disks, just a single virtual disk, so I don't know how to answer your support question with any sort of confidence. Work from the supported-card list in the wiki and go from there. The 9211 is a known functional card in IT mode and a common recommendation for replacing the now-problematic Supermicro cards. You'll just have to try it.
  4. I use an LSI 9211-8i to support the array itself. My cache is for cache and Docker configs only, so I don't care about data resiliency, SMART status, etc. I didn't find any essential benefit to using the integrated btrfs pool, and since I only had two disks I didn't like the reduced capacity either. I had a spare MegaRAID 9261-8i lying around that doesn't support being flashed to IT mode, so I just run my cache drives on it in RAID 0 and present that virtual disk. Extra capacity, speed, and ease of use. If a drive fails, I just replace it and restore the Docker configs from backup. Dunno if that answers your question, since we obviously have different use cases and expectations for the cache storage, but it at least shows what is technically possible.
  5. You could always do hardware RAID and assign that virtual disk to the cache slot.
  6. It looks like in my case there was an issue with mover kicking off while the backup scripts were still running. I flipped the start times so that mover always finishes before the app backup kicks off, and it has run successfully twice since, including Plex.
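If staggered start times ever stop being enough, the two jobs can also be serialized outright with a shared lock so they can never overlap. A minimal sketch; the script paths in the comments are hypothetical, and `flock` is assumed available (it is standard util-linux):

```shell
#!/bin/sh
# Run a command under a shared lock file: whichever job starts
# second simply blocks until the first finishes, instead of
# colliding with it.
LOCK=/tmp/mover_backup.lock

run_locked() {
    flock "$LOCK" "$@"
}

# Illustrative cron entries (paths are examples, not the plugin's):
#   0 2 * * * run_locked /usr/local/sbin/mover
#   0 4 * * * run_locked /boot/scripts/appdata_backup.sh
run_locked echo "lock acquired, job ran"
```

This makes the ordering robust even when one job occasionally runs long.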
  7. I did this, built into a Neo Geo cabinet. It worked great, but I ultimately decided to dismantle it and rebuild in a different chassis. Now I just have a crappy old laptop running the arcade function.
  8. Already using XFS across the board. I deleted both backup directories manually using rm -rf and they cleared in a couple of minutes each, with no GUI hang-ups. Likewise, I never have issues with manual rsync. Good to know the issue is a big ¯\_(ツ)_/¯
  9. Derp, sorry. I just checked the status tab of the plugin's settings section. The referenced directory wasn't yet removed, as the last line stated.
  10. Does what persist past the reboot?
  11. Forced a reboot of the server. It looks like the backup succeeded and it started to hang when deleting the previous backup. I presume the issue is the massive number of files in the Plex directory. I'll exclude Plex and see if the next run is any better.
  12. Attempting now. So far it's just hanging on that command.
  13. Not at all - I have ample compute and bonded 1G NICs. I can't log in to the webGUI to check anything or manage the array, let alone cancel the tasks and restart my Docker containers.
  14. Why does it make the webGUI, SMB, etc. completely unresponsive? I'm able to SSH in, but even that is laggy.
  15. I can see where it would become a bit of a bear to manage hardware-level RAID and virtual-disk presentation for the array storage. I had a spare 8-port card and decided to use it to combine cheap 120GB SSDs into the cache drive. That data is moved off nightly and the persistent data is backed up weekly, so there's not much risk. If a drive dies, I simply rebuild the RAID 0 and restore my Docker config data. It could be a bit more painful doing that on array/parity disks.
Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.