thaddeussmith

Everything posted by thaddeussmith

  1. Never did, then stopped caring about the stats collection.
  2. Ha, I might have at the time! Now, not so much. I've decided to roll back to 2TB drives for the sake of replacement costs. So I'm running 28+2 2TB drives for the main array, then 16 2TB drives in RAID10 for the cache drive, to be used as unprotected scratch space, etc. I have a separate host for playing around with virtualization, so SSDs just really aren't needed in my use case of unRAID. I'll roll through the stash of 2TB replacement spares and not feel so bad once I start having to spend on retail 2TB drives. At 56TB+16TB, I've got all the storage I really need, and I ultimately just purge content that doesn't get watched in a year or two.
  3. Not really kudos, but certainly some dialogue about the concepts or avenues to explore from here. It's interesting to see the pushback on hardware RAID without anything to really back up the opinions.
  4. So nobody tinkers with this stuff? You just read through the unRAID wiki or watch the Linus Tech Tips videos and follow step-by-step? I guess I need a samurai mask in my case for it to be interesting to the community.
  5. 4U, front load with trays, like the Supermicro cases. I'll grab a pic or two, but my server room is in absolute disarray right now.
  6. What? Not 160 servers, just around 160 2TB drives, with about 140 2TB drives sitting as cold spares right now. I literally took all of the servers with the intent of only using one or two and keeping everything else as cold spares. Electricity cost hasn't been measured, but I've since stopped using several other virtualization hosts, so it should be about a wash (or less) compared to when all of that was running. No additional unRAID licensing - I already had unRAID running in a 16 bay server with the full Pro license and just moved that OS, disks, and HBAs over into this larger chassis.
  7. I have several 16 and 24 port cards as a result of this haul, but your concern is valid in "normal" scenarios.
  8. I'm not sure I'm following your logic. Losing parity is losing parity - so you're saying it's better to have a single 10TB drive with potentially a day or more of rebuild time (plus ordering time, because who's keeping a cold spare at that size/cost) than to have a RAID10-backed virtual disk made from 2TB drives with a dramatically shorter rebuild time? And I have 140+ cold spares. And I have to actually lose the entirety of one of the mirrored pairs before the virtual disk becomes compromised, which means my parity is actually more robust. I get the "more disks = greater risk of failure" argument, but that's why I've gone with smaller disks for faster rebuilds. I don't think I'm any more exposed than someone rolling a single 10TB drive as their parity drive. What am I missing?
  9. Most of my unRAID gear has been free over the years, so while I'd love to spend thousands on a super dense micro server with 10TB disks, that's just not something I've ever been able to prioritize in the budget. But since I work in IT and datacenter environments, I do manage to get some fun stuff to play with. Most recently, it was a bunch of "Aberdeen" storage servers - a mix of 48+2, 24, and 16 bay servers, all full with circa-2010 compute resources and RAID controllers. They also came populated with 2TB drives of varying ages, about 160 in total.

I've moved my current 16-drive unRAID build into one of the 48 bay enclosures - it's a mix of 6TB, 5TB, 4TB, 3TB, and 2TB drives totalling 56TB of array space. I've decided to leave a bunch of bays empty since I'm only at 50% utilization, but I'd like to slowly move to 10TB disks, and this will give me the space to either replace existing drives or add to the array without worrying about enclosure limits.

I hate the thought of throwing a 10TB disk into the parity slot, however, so I've instead got 12x 2TB drives running in the bottom of the chassis in RAID10. This presents to unRAID as 12TB and gives me plenty of overhead for whatever sizes the 10TB drives happen to show up with. These 2TB disks are old, and they're going to fail.. but I have about 140 sitting and waiting as cold spares, ready to pop in and automatically rebuild on the HW controller. This actually happened a couple of weeks ago - two drives failed. I replaced them and the RAID10 rebuild completed in just a couple of hours. unRAID was none the wiser, so I didn't have to worry about a parity rebuild on the 12TB virtual disk, which takes about 22 hours.

I've seen some old threads about this, but most seem to be abandoned and full of people chiming in with how stupid it is. I could understand some arguments against going out and purchasing a bunch of hardware to make this setup, but when it's just sitting in my lap available for use? So far, it's been fairly solid. Anyone else doing anything remotely similar with hardware RAID and virtual disk presentations?
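For what it's worth, the 22-hour parity figure above is roughly back-of-the-envelope checkable. A small sketch - the ~150 MB/s sequential speed is an assumption on my part, not a measured number:

```python
# Rough rebuild-time arithmetic: a parity rebuild walks the full virtual
# disk, while a failed RAID10 member only needs its 2TB mirror re-copied.
# The 150 MB/s sequential speed is an assumed figure, not a benchmark.

def rebuild_hours(capacity_tb: float, speed_mb_s: float = 150.0) -> float:
    """Hours to sequentially write capacity_tb at speed_mb_s (decimal units)."""
    capacity_mb = capacity_tb * 1_000_000  # 1 TB = 1,000,000 MB
    return capacity_mb / speed_mb_s / 3600.0

print(f"12TB parity rebuild: {rebuild_hours(12):.1f} h")  # ~22 h, matches the post
print(f"2TB mirror rebuild:  {rebuild_hours(2):.1f} h")   # a couple of hours
```

At an assumed 150 MB/s, a full 12TB parity pass comes out to about 22 hours, while re-copying a single 2TB mirror member is under 4 - consistent with the rebuild times reported above.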
  10. Motherboard - Supermicro X9DR3-F; CPU - dual Xeon E5-2620 v2 @ 2.10GHz; RAM - 44GB DDR3. $350 shipped and paypal'd (conUS)
  11. Thanks, I'll dig in and learn about what's required there. A quick look at TechPowerUp's database and I don't see the 1030 listed, so if you don't mind providing what you have as a starting point that would be great. Shoot me a PM. Thanks!
  12. Excellent, that's the one I'm looking at as well. Any quirks you had to overcome for passthrough?
  13. I'm looking to add a video card to my R910 chassis to improve Lightroom performance in my VM, and I have some size and power restrictions which require me to look at something like the EVGA GeForce GT 1030 SC. I was hoping to find an updated guide on supported GPUs for passthrough, or at least a clear indicator of whether NVIDIA graphics cards are still problematic, but I'm not finding anything which concisely answers that. Any links or direct answers you guys can provide?
  14. And let's be honest.. the cost of a pro license is fairly small, even for a home/lab environment. I bought my license almost 3 years ago and have been able to enjoy upgrades and a functioning system without any additional licensing costs, in spite of changing the disks and compute hardware numerous times.
  15. Negative, Ghost Rider: each system requires a separate license.
  16. I've moved the same array configuration across six different motherboard/controller configurations without any negative impact. You can find pre-flashed 9211s from this guy: https://www.ebay.com/itm/New-IT-Mode-Genuine-LSI-9211-8i-SAS-SATA-8-port-PCI-E-Card-Bulk-pack-US-Seller/291641245650?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2060353.m1438.l2649 I've had three from him so far without any issues, and I believe I found the recommendation for this seller through another post here on the unRAID forums.
  17. Selling two Samsung 250GB SSDs. One is an 840 EVO and one is an 850 EVO. Light usage, in good operational health, and currently cleared and ready to be formatted. $140 shipped conUS and paypal'd for both. SOLD
  18. You need to remove the parity2 assignment and start the array so that it forgets that disk. Then stop the array, assign the parity2 disk to slot 9, and start the array.
  19. tl;dr: it appears the information is static?
  20. So I've got the plugin installed, but where do I configure it? ACL, permissions, community string, etc.
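For anyone landing here later: assuming the plugin is a wrapper around stock net-snmp (the exact file location under the plugin's folder layout is a guess on my part), the community string and ACL normally live in snmpd.conf. A minimal read-only example:

```
# minimal read-only snmpd.conf (standard net-snmp directives; the file's
# location under the unRAID plugin is an assumption)
agentAddress udp:161                     # listen on the default SNMP port
rocommunity mycommunity 192.168.1.0/24   # community string + source-network ACL
syslocation "server room"
syscontact  admin@example.com
```

The second argument to rocommunity is what restricts which source network is allowed to query, which covers the ACL part of the question.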
  21. Will modifying the hosts file in this manner impact docker traffic? I run plex as a docker and wish to redirect traffic from their stats collection in light of their upcoming EULA changes, but cannot easily do so at the firewall.
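One wrinkle worth noting: a Docker container gets its own /etc/hosts generated at start, so editing the unRAID host's file generally won't affect the Plex container. Docker can inject an entry at run time with --add-host. The metrics hostname and image name below are illustrative assumptions - substitute whatever your firewall/DNS logs actually show:

```shell
# A container's /etc/hosts is generated by Docker, so host-side edits don't
# propagate into it. --add-host writes an extra entry into the container's
# hosts file. Hostname and image below are illustrative assumptions.
docker run -d \
  --name plex \
  --add-host metrics.plex.tv:127.0.0.1 \
  plexinc/pms-docker
```

The same thing is expressible as extra_hosts in a compose file, if that's how the container is launched.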
  22. I just RMA'd a 5TB Red and received a 6TB Red in return. Fortunately it was my parity drive, or I would have been screwed.