ufopinball

Everything posted by ufopinball

  1. UnRaid identifies drives by serial number, so motherboard SATA ports should definitely be fine. Add-in SATA PCIe cards should also be fine, but I don't believe USB connections are supported. Not sure about the M.2 slot idea as I haven't tried it. But, since it's newer technology, hopefully they've thought to support things like this. Just make sure all your drives show up and are properly accounted for before you start the array.
  2. Hmmm, they’re no longer running as my primary system, but they seemed okay before. Any idea how to get the ASRock x399 Taichi to play nice with LSI controllers? Or what’s a good option outside of LSI and Marvell?
  3. Sadly, the ASRock x399 Taichi MB just doesn't play well with LSI controller cards. Here's a conversation on Reddit that goes into greater detail: https://www.reddit.com/r/unRAID/comments/98kdyp/lsi_920116i_and_asrock_taichi_x399/ I've got the same setup, and I've never gotten it to work. I ended up going with dual AOC-SAS2LP-MV8 (Marvell-based) controller cards. That was many years ago, so there may be newer alternatives, but I know they work with this MB and a Threadripper 1950X. The two Dell Perc H310 (LSI-based) controllers that ASRock didn't like work just fine in an ASUS MB, so I know they're functionally okay.
  4. Just found this thread. I've been having the same problem on 6.9.2, in that I cannot change the custom temperature thresholds for my three NVMe drives. When I went to look at /boot/config/smart-one.cfg, it was actually empty: a zero-byte file. What I ended up doing was editing it, adding a single space character, and saving it so that it wasn't totally empty (quick sketch below). Once I did that, updating the temperatures from the GUI worked properly, with no further need for manual editing. Something to try if you're seeing the same thing.
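     In case it helps anyone who'd rather script it, here's a minimal Python sketch of that workaround, assuming the stock /boot/config/smart-one.cfg path; I just edited the file by hand, so treat this as illustration only.

     ```python
     # Minimal sketch of the zero-byte smart-one.cfg workaround described above.
     # Assumes the stock Unraid path; adjust if yours differs.
     import os

     CFG = "/boot/config/smart-one.cfg"

     if os.path.isfile(CFG) and os.path.getsize(CFG) == 0:
         # A zero-byte file seemed to keep the GUI from saving custom
         # thresholds; writing a single space was enough to unstick it.
         with open(CFG, "w") as f:
             f.write(" ")
         print(f"Padded empty {CFG}; try saving the thresholds from the GUI again.")
     else:
         print(f"{CFG} is missing or already has content; nothing changed.")
     ```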
  5. Updated smoothly from 6.8.3 --> 6.9.1 --> 6.9.2. No issues to report, everything seems to be running as it should. Thanks LT!
  6. I upgraded my server (AMD Ryzen Threadripper 1950X) and had no problems whatsoever. Uptime is a little over 2 days, have not had issues with booting, dockers, VMs, etc. I'm not sure what you mean by "cpu insulation"? My MB is the ASRock x399 Taichi, if it matters. Full specs in signature line.
  7. I upgraded my server, everything seems to be going smoothly. No issues with updates, Dockers, transfer speeds, etc. I did bump into the NoVNC bug once, but it didn't persist. So far my uptime is 6+ days so I guess I meant to post this earlier. Really enjoying being able to copy files on the server while the family is watching something on Plex. Thanks for all the efforts, LT!!
  8. Updated my server from 6.7.1 to 6.7.2 last week, and have had no problems on my Threadripper 1950X build. Previous uptime: 38d, 14m. Current uptime: 8d, 5h, 48m. Thanks for all the hard work!!
  9. I'm sure you're right. I did buy an AOC-SAS2LP-MV8 at one point in time, I must have lost track of which one I have in the machine. Thanks for the sharp eye!
  10. Hmmm, well I don't have the card in hand, but that's what I thought I bought off eBay some years ago. System Devices reads as follows: "RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev 03)". So, dunno?
  11. Upgraded from 6.6.7 to 6.7 and haven't had any problems. Previous uptime: 75d, 19h, 58m; current uptime: 9d, 1h, 21m. According to "System Devices", my Dell HV52W PERC H310 controller has a Marvell chipset (88SE9485). As far as I know, people on this chipset are not seeing the missing-drives issue, and I have not had any issues on my system so far. My next step is to add "amd_iommu=pt" (see the snippet below for where that goes), but for the moment, things are running smoothly.
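      For anyone wondering where "amd_iommu=pt" goes: it's a kernel boot option on the append line in /boot/syslinux/syslinux.cfg (editable via Main > Flash in the GUI). Roughly like the sketch below, though your stanza may differ, so check it against your own file before rebooting.

      ```
      label Unraid OS
        menu default
        kernel /bzimage
        append amd_iommu=pt initrd=/bzroot
      ```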
  12. Upgraded from 6.6.6 to 6.6.7, no problems with Dockers (Plex, Sickbeard) or VMs (LAMP, Win10, Win7). Uptime for 6.6.6: 67 days, 9 hours, 30 minutes. Uptime for 6.6.7: 15 hours, 4 minutes and counting.
  13. Looks like you have quite the variety of drives, so that complicates things. Here's how I would proceed.

      1) Build a new array. Ultimately your goal is a solid, reliable array, so don't reuse any of your old drives, since we are going to try to extract data from them. Note that with this method of recovery, I don't think you can count on any drive giving you back 100%, so if you have to rebuild any given drive (assuming you fully recover that many drives), I don't know how reliable the rebuilt drive would be, either. You're welcome to try it. If not, maybe start with 1 Parity and 1 Data and work your way up from there. I'm assuming you know which of the old drives were data and which were parity. This method recovers the data by treating the old array drives as JBOD.

      2) Take, say, the STBV5000100 and buy another drive of the exact same model. Last time, I bought a used working drive off eBay. Test the newer drive to make sure it works and is reliable. Replace the bad drive's board with the new drive's board. Plug the bad drive into the server, use something like Unassigned Devices to mount it, then see how much data you can copy off of it (rough copy sketch below). Once you have extracted as much data as you can, unmount and remove the bad drive. Swap the controller boards back; the bad drive goes on the shelf in case you need it for further recovery. The newer drive can be pre-cleared and added to the array. Repeat this step for all drives.

      Something I heard was that reallocated sectors are recorded somewhere on the controller board. I heard it quite a long time ago, so I don't know if it is/was true. If it is, your recovery may involve accessing some incorrect sectors, which is why I think the data isn't guaranteed to be 100%, but again, anything is better than 0%. This should also be non-destructive, so you could still use other methods to recover your data if you like.

      I have not heard of the diode fix, nor have I ever attempted to alter a controller board in any way. All I have done is a straight board swap, and hope that any data losses are livable. Thankfully, this isn't something I have had to do regularly, but it has worked once or twice.

      PS: Dunno about the warranty, but I'd skip the soldering iron if you intend to go this route.
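      For step 2's copy pass, here's a rough Python sketch of "copy what you can, skip what you can't". The mount point and destination share are made-up example paths, so adjust them to your system.

      ```python
      # Rough sketch: copy as much as possible off the mounted old drive,
      # skipping files that can't be read. Assumes the drive is mounted
      # (ideally read-only) via Unassigned Devices; both paths are examples.
      import os
      import shutil

      SRC = "/mnt/disks/recovery"   # mounted old drive (example path)
      DST = "/mnt/user/recovered"   # destination share (example path)

      failed = []
      for root, dirs, files in os.walk(SRC):
          out_dir = os.path.join(DST, os.path.relpath(root, SRC))
          os.makedirs(out_dir, exist_ok=True)
          for name in files:
              src_path = os.path.join(root, name)
              try:
                  # copy2 keeps timestamps; unreadable files are logged, not fatal
                  shutil.copy2(src_path, os.path.join(out_dir, name))
              except OSError as err:
                  failed.append((src_path, err))

      print(f"Finished; {len(failed)} file(s) could not be copied.")
      for path, err in failed:
          print(f"  {path}: {err}")
      ```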
  14. If I understand the proposed setup, the SSDs are passed through to the VMs and are not governed by unRAID. The OS to worry about would be the target OS on each VM. Is that Windows 10?
  15. Are these drives of the same make and model? Do you have a list of the drives? I have swapped working logic boards onto otherwise dead drives in order to recover data, so it can be done. This is not a 100% guarantee, but some recovered data is better than no recovered data. You'll still have to replace the drives with (new) known-working drives, so this is going to be expensive and time consuming. FYI
  16. To begin, are your m.2 drives the SATA variety or the PCIe x4 variety? The former will run at roughly the speed of your other SATA SSDs; the latter should run much, much faster. If you have PCIe x4 m.2 drives, you could try a mirrored cache and run all four gaming VMs off that. Samsung's SATA SSDs advertise "Up to 540 MBps", whereas the PCIe x4 m.2 SSDs offer "Up to 3500 MBps". Even with four VMs running at a time, you should still have a lot of speed headroom (rough math below). It may depend on what else (if anything) you're using your cache drive for, though. The alternative is that you have 4 VMs and 4 NVMe-type SSDs: pass through one drive to each VM, and each VM should enjoy dedicated performance from its assigned drive. If performance is an absolute must, maybe this is the way to go?
  17. Oh, okay. I have not pushed cache beyond two drives mirrored, but I'll keep it in mind for future reference. The most I see is people asking for the option to have multiple cache pools; dunno what priority that has on the wish list.
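      Rough math behind the headroom comment, using the vendors' "up to" sequential numbers (real-world speeds, and especially random I/O, will be lower):

      ```python
      # Back-of-the-envelope bandwidth split across four VMs sharing one NVMe drive.
      SATA_MBPS = 540    # advertised SATA SSD sequential read
      NVME_MBPS = 3500   # advertised PCIe x4 NVMe sequential read
      VMS = 4

      per_vm = NVME_MBPS / VMS
      print(f"{per_vm:.0f} MB/s per VM vs {SATA_MBPS} MB/s for one SATA SSD")
      # -> about 875 MB/s per VM, still well above a single SATA drive
      ```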
  18. Noted, but I already have a RAID1 cache pool (see signature). SSD capacities are going up and prices are coming down (relatively speaking). My needs are not so great that I'm out there buying 12TB drives, so someday I'd like to switch over to SSDs. This may be years in the future, and it may also be a slow migration from HDD to SSD, depending on how often I access data on any given drive. I'm not going to RAID1 my 40TB of existing space as SSD; I rather like the current setup with two Parity drives and ten Data drives. I mean, if you never want to add an SSD to your array, that's fine. I understand the options available via cache, but I want to do this specifically as an array drive. Since this configuration appears to be supported, I'll start with a small 1TB SSD and see how things go.
  19. Good to know, thanks for the info!
  20. Thanks, I will run a parity check here and there to make sure. The drive was one of my old cache pool drives and has not given me issues; it's still fairly young, as far as SSDs go. It's also a 1TB drive, and I don't really know that I'll be eating up so much of it; I guess it will come down to usage once I have it installed. As I mentioned in the other post, this is mostly for quick access to Read existing data. It's not going to be a heavy R/W sort of drive (like a cache drive), so hopefully TRIM won't be such a big issue.
  21. I have a cache drive. An array drive means I get Parity protection in case of a drive failure, which neither cache nor UD offers. The performance boost I'm looking for is on the Read end of things. Writing to this drive will likely be relatively rare; it's just data I don't want to wait for a HDD to spin up for.
  22. I upgraded some stuff via Black Friday, and now I have a 1TB SATA SSD that I'd like to use as an array drive. I understand there is no TRIM for array drives. Otherwise, is this configuration supported? Is anyone else here doing this? Also, in order to add this to an existing array, presumably I have to at least do a pre-clear (zero the drive) and have a pre-clear signature written. Anything else I should be aware of? Thanks in advance!
  23. Upgraded from 6.6.5 to 6.6.6 a few days ago ... no smoke, no fires.
  24. Upgraded yesterday from 6.6.0 to 6.6.5. No issues so far, and no problems starting Dockers or VMs. Thanks again, LT!
  25. (Quoting the previous post: "I'm using a Kill-A-Watt-like measurement tool. And I've swapped the graphics card out for a low-power alternative, and the GTX 970 draws around 10-13 watts. So 100W - 13W = 87W, which is still too much with all disks spun down. I thought it would be around 50W at worst without the graphics card... The last part I could swap is the motherboard...")

      Okay, so the Ryzen 1800X is no longer an unRAID box. That said, I set up my Kill-a-Watt and booted off my spare unRAID thumb drive. There's nothing configured. The machine is largely empty; there are no SATA drives nor SATA controllers. There is one Samsung 960 EVO M.2 1TB NVMe, the display card is an even older ASUS Radeon HD 4350 (EAH4350), and this (normally desktop) system uses Crucial Ballistix Elite 16GB (2x8GB) DDR4-2666 CL16, which is substantially less RAM. I have no VMs set up. So, certainly not apples-to-apples, but it's the best I can do under current circumstances.

      At the wall, once the unRAID boot reaches the login prompt, the Kill-a-Watt reports 43 watts at idle. So, minus the dozen or so spun-down SATA drives, an 8-port SATA controller, and less (16GB vs 64GB) memory, the 43-watt total comes in at roughly 78% of the previously reported 55 watts. That seems pretty fair to me. At least the readings seem reasonably consistent.

      Comparatively, your 87 watts is nearly 60% more than my original 55-watt reading (rough numbers below). I really don't know what could eat up that much more energy. The 1800X is binned higher than the 1700-series, but that's still a substantial leap in power consumption. Anything further from here is purely guesswork. Maybe try another forum, see if you can find someone with a setup more similar to yours, and see what they're getting? You could also try to replicate my recent test (pull power cables or whatever) and see how close to 43 watts you can get?
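      For what it's worth, here are the comparisons above worked out (these are wall readings, so treat them as ballpark figures only):

      ```python
      # Quick sanity check on the wattage comparisons in this post.
      baseline_w = 55   # my original full build, all disks spun down
      stripped_w = 43   # spare-stick boot: no SATA drives/controller, 16GB RAM
      theirs_w   = 87   # reported draw after subtracting the GTX 970

      print(f"stripped build: {stripped_w / baseline_w:.0%} of the original 55 W")
      print(f"their build:    {(theirs_w - baseline_w) / baseline_w:.0%} above 55 W")
      # -> roughly 78% and 58%, respectively
      ```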