electron286

Members

  • Posts: 213
  • Days Won: 1

electron286 last won the day on April 23, 2019

electron286 had the most liked content!

About electron286

  • Birthday 09/27/1961

  • Gender
    Male
  • Location
    USA

electron286's Achievements

Explorer (4/14)

Reputation: 7

  1. Are you using QEMU to create a virtual array that UNRAID is then in turn working with? I am not sure of any real advantage there, but there are a bunch of potential issues if an array rebuild is ever needed. Is the cache drive getting direct access from Unraid? It looks like it is. In my tests, many NVMe drives will slow down at the elevated temperature shown in your earlier picture. If the problem is temperature related, additional heat sinking and/or airflow over the NVMe drive may resolve it.
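     As a rough illustration of the temperature check suggested above, here is a minimal Python sketch (not from the original post) that reads an NVMe drive's reported temperature with smartctl and flags it when it approaches a range where throttling becomes likely. The device path /dev/nvme0 and the 70 C threshold are assumptions; adjust both for the actual system and the drive's spec sheet.

        # Minimal sketch: read an NVMe drive temperature with smartctl and warn
        # when it reaches a range where thermal throttling becomes likely.
        # Assumes smartmontools is installed; device path and threshold are
        # placeholders, not values from the original post.
        import re
        import subprocess

        DEVICE = "/dev/nvme0"   # hypothetical device path - adjust to your system
        WARN_CELSIUS = 70       # rough threshold - check your drive's spec

        def nvme_temperature(device: str):
            """Return the reported temperature in Celsius, or None if not found."""
            out = subprocess.run(
                ["smartctl", "-A", device], capture_output=True, text=True, check=False
            ).stdout
            for line in out.splitlines():
                if "Temperature:" in line:
                    match = re.search(r"(\d+)\s*Celsius", line)
                    if match:
                        return int(match.group(1))
            return None

        if __name__ == "__main__":
            temp = nvme_temperature(DEVICE)
            if temp is None:
                print(f"Could not read temperature from {DEVICE}")
            elif temp >= WARN_CELSIUS:
                print(f"{DEVICE} is at {temp} C - throttling is likely, improve cooling")
            else:
                print(f"{DEVICE} is at {temp} C - within a comfortable range")
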
  2. I think the real answer is to look at why there is a mismatch between the parity and the data. Something happened, or they would match. If you run successive parity checks, WITHOUT PARITY CORRECTIONS BEING WRITTEN, and get identical results, even with sync errors, then yes, I agree it does not currently look like a hardware issue. If the results are NOT consistent, then there probably is a hardware issue. Unless you have past logs to look at to determine where the error occurred, you do not really know whether a data drive or a parity drive is in error. However, there are tools people have used in the past to identify which files may be affected in such situations. The data files can then be verified as correct or not, depending on how good your backup strategy is. This way you can verify your data drives are correct, then rebuild your parity drive(s).
     Hardware issues and power bumps are the two main causes of bad data being written to either the data drives or the parity drives. Another often overlooked cause is the timing and voltage settings on motherboards. Some newer motherboards now ship with default settings aimed at GAMERS, favoring performance over reliability; many ASUS motherboards are one example. Pushing timing and voltage settings for better gaming performance is the opposite of what we should be seeking on a data server. We want stable, reliable, and repeatable results. Regardless, after data is written to the array and the data and parity writes are complete, any and all parity checks afterwards should report NO sync errors. If there are errors, something is wrong, no matter how much things seem to be OK.
     On critical data, I even use PAR files to create additional protection and recovery files for sets of data files. This allows me to verify the data files and recover from damaged and even MISSING data files. I then store all of them, the data files and the PAR files, on the DATA drives on unraid. There are many programs that work with PAR and PAR2 files. It is a similar concept to how the 2 parity drives work in UNRAID, but at the file level instead of the drive level. QuickPar is one such utility, though I have not used that one myself.
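     To make the PAR idea above concrete, here is a small sketch in Python that wraps the par2 command line tool (par2cmdline, assumed to be installed) to create roughly 10% recovery data for a folder and verify it later. The share path /mnt/user/critical and the base name critical.par2 are made-up examples, and this is only one possible workflow, not the exact one from the post.

        # Sketch of file-level parity with par2cmdline (assumed installed),
        # wrapped in Python for convenience. Creates ~10% recovery data for a
        # folder of files, then verifies them; "par2 repair" would rebuild
        # damaged or missing files. Paths are hypothetical examples.
        import subprocess
        from pathlib import Path

        DATA_DIR = Path("/mnt/user/critical")   # hypothetical share path
        PAR_BASE = DATA_DIR / "critical.par2"   # base name for the recovery set

        def create_recovery(redundancy_percent: int = 10) -> None:
            # Protect every regular file in the folder, skipping existing .par2 files.
            files = sorted(
                str(p) for p in DATA_DIR.iterdir()
                if p.is_file() and p.suffix.lower() != ".par2"
            )
            subprocess.run(
                ["par2", "create", f"-r{redundancy_percent}", str(PAR_BASE), *files],
                check=True,
            )

        def verify() -> bool:
            # par2 returns 0 when all protected files verify clean.
            result = subprocess.run(["par2", "verify", str(PAR_BASE)], check=False)
            return result.returncode == 0

        if __name__ == "__main__":
            create_recovery()
            print("Data files verify OK" if verify() else "Damage detected - run: par2 repair")
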
  3. While the N5105 is a very capable low-power CPU, the N100 seems to be a better option overall for Unraid. I am currently testing various ASMedia and JMicron controllers. The JMB585 is a great controller and so far works very well in my testing; it is a PCIe Gen 3 x2 bridge that provides 5 SATA III ports. Regardless of the hate people give to multiplexers/port multipliers, I am also going to be testing stacked port multipliers combined with the JMB585 next. If done properly, paying attention to bandwidth limits, they can be used in a very usable array configuration that can still keep up with physical spinning hard drives. Specifically, I am going to be testing with the JMB575: 1 SATA port in (host) to 5 SATA ports out, all SATA III 6Gb/s capable.
  4. Cheap low-power motherboard NAS array options... Yes, port multipliers divide bandwidth, but when planned out properly they can be very useful for building an array. The ASMedia port multipliers do not seem as reliable so far when compared with the JMicron devices. Also, there are some very attractively priced N100 based boards now with JMicron PCIe-SATA controllers onboard. It is also best to keep to one family of controller/multiplier when cascading devices, to maintain the best interoperability and reliability. I am starting tests with various ASMedia PCIe to SATA controllers for upgrading older PCI based systems. I am also looking at the N100 w/JMicron route for testing, which seems a better option for larger arrays. Obviously there is less bandwidth to work with than the LSI 12Gb SAS/SATA dual link via LSI port multipliers. But for a new low power system, the N100 option looks very attractive. And if not pushed to limits causing bandwidth throttling (see option 2 below), with SPINNING hard drives the new cheaper option looks like it should be able to do parity checks at speeds comparable to the LSI build (possibly limited more by the new CPU).

     N100 based MB - NAS bandwidth calculations:
     w/ JMB585 PCIe-SATA bridge controller - PCIe Gen 3 x2 to 5 SATA III 6Gb/sec ports
     ADD - JMB575 port multipliers - 1 to 5 ports, SATA 6Gb/sec
       Cascaded mode: up to 15 drives from 1 SATA port
       Cascaded mode: up to 75 drives from 5 JMB585 SATA ports!
     NOTE: 6Gb/sec SATA = 600 MB/sec max potential bandwidth per SATA port, unshared

     OPTION 1
     PCIe Gen 3 x2 (2 GB/sec) / 5 ports = 400 MB/sec per JMB585 port
     5ea JMB575 multipliers, 1 per JMB585 port = 25 ports total, SATA 6Gb/sec
     400 MB/sec per port in FULL USAGE, AVERAGED = 80 MB/sec per drive over 25 drives

     OPTION 2
     3ea JMB575 multipliers (1st level), 1 per port on 3 JMB585 ports
     15ea JMB575 multipliers (2nd level), 1 per port on the 1st-level JMB575s
     75 ports total, SATA 6Gb/sec
     1st level non-limiting - 600 MB/sec per uplink (200 MB/sec of PCIe bandwidth unused)
     2nd level - 600 MB/sec per 1st-level port / 5 = 120 MB/sec per 2nd-level multiplier uplink
     In FULL USAGE, AVERAGED, that works out to roughly 24 MB/sec per drive over 75 drives

     FULL USAGE = full array activity - parity check/build, drive rebuild
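     The per-drive figures above come down to dividing each shared link by the number of drives behind it and taking the smallest result. A short back-of-the-envelope sketch of that arithmetic (not from the original post; nominal numbers only):

        # Back-of-the-envelope sketch of the per-drive bandwidth math above.
        # Under full array activity the per-drive figure is set by the most
        # constrained shared link in the chain (PCIe uplink, JMB585 SATA port,
        # JMB575 uplink) divided by the drives behind it. Nominal, not measured.
        PCIE_GEN3_X2_MBS = 2000   # PCIe 3.0 x2, approx. usable MB/sec
        SATA3_MBS = 600           # SATA III, approx. MB/sec per port

        def per_drive(links):
            """links = [(link_bandwidth_MBs, drives_sharing_that_link), ...]"""
            return min(bw / drives for bw, drives in links)

        # Option 1: JMB585 (5 ports) + one JMB575 per port -> 25 drives
        option1 = per_drive([
            (PCIE_GEN3_X2_MBS, 25),  # PCIe uplink shared by all 25 drives
            (SATA3_MBS, 5),          # each JMB585 port feeds 5 drives via one JMB575
        ])

        # Option 2: 3 first-level JMB575s, 15 second-level JMB575s -> 75 drives
        option2 = per_drive([
            (PCIE_GEN3_X2_MBS, 75),  # PCIe uplink shared by all 75 drives
            (SATA3_MBS, 25),         # each 1st-level uplink feeds 25 drives
            (SATA3_MBS, 5),          # each 2nd-level uplink feeds 5 drives
        ])

        print(f"Option 1: ~{option1:.0f} MB/sec per drive over 25 drives")  # ~80
        print(f"Option 2: ~{option2:.0f} MB/sec per drive over 75 drives")  # ~24
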
  5. Missing drives in the Unassigned Devices section under the MAIN tab after updating to Unraid version 6.12.8 and updating the Unassigned Devices plugin to the version dated 2024.03.19 (updated from 6.6.7 and Unassigned Devices dated 2019.03.31). It looks like the SAMBA shares are still working on the drives that were previously shared, but MOST of the assigned drives did not show on the MAIN tab of Unraid; only two initially showed up. Each time I press the REFRESH DISKS AND CONFIGURATION icon, it adds ONE drive to the list... I am now seeing 7 of the 10 drives that are not in the array. MOST of them were precleared and ready to add to the array when needed; 3 were being used as unprotected temporary data drives for misc. uses. Is there a concern with the drives not showing up as expected? Also, previously under FS it listed precleared drives as being precleared. Has that functionality been deliberately removed? It was pretty convenient for hot spares. Thanks for the plugin, it has served me well for many years.
  6. If the SSD(s) are getting hot, they may be slowing down. Increased ventilation will often improve the performance of sustained data transfer on SSDs.
  7. Various system component speeds - I made this little breakdown of various system components to assist my decision making on my upgrade. Depending on actual use, it can help decide where to focus hardware upgrades... I hope someone else may also benefit from it. Newer SSDs (SATA and NVMe), while not listed here, are so fast overall for actual needs that the faster options only really benefit parity checks and array rebuilds. Read/write speeds of 500/400 MB/sec per device are screaming fast overall for most Unraid uses. It may still be very beneficial to performance to use a higher end, faster drive for parity and/or cache drives. But watch the caveat: some faster drives also have reduced MTBF ratings... Here are the speed breakdowns for various components, using general best case data on a well designed system (PCI-E to replace old PCI or PCI-X SATA controller cards, plus speed comparisons of various system components):

     SATA
     SATA 1 - 1.5 Gb/sec = 150 MB/sec
     SATA 2 - 3.0 Gb/sec = 300 MB/sec
     SATA 3 - 6.0 Gb/sec = 600 MB/sec

     Hard drives
     5400 RPM - up to ? - up to 180-210 MB/sec
     7200 RPM - up to 1030 Mb/sec disc to buffer - up to 255 MB/sec - 204 MB/sec sustained

     NETWORK limits (94% efficiency)
     10Mb = 1.18 MB/sec
     100Mb = 11.8 MB/sec
     1Gb = 118 MB/sec
     2.5Gb = 295 MB/sec
     10Gb = 1180 MB/sec

     PCI 32-bit 33 MHz
     133.33 MB/sec / 8 drives = 16.66 MB/sec per drive!
     133.33 MB/sec / 4 drives = 33.33 MB/sec per drive!
     133.33 MB/sec / 2 drives = 66.66 MB/sec per drive!

     PCI 32-bit 66 MHz
     266 MB/sec / 8 drives = 33.25 MB/sec per drive!
     266 MB/sec / 4 drives = 66.5 MB/sec per drive!
     266 MB/sec / 2 drives = 133 MB/sec per drive!

     PCI-X 64-bit 133 MHz
     1072 MB/sec / 8 drives = 134 MB/sec per drive!
     1072 MB/sec / 4 drives = 268 MB/sec per drive!
     1072 MB/sec / 2 drives = 536 MB/sec per drive!

     PCI-E 3.0 1 lane = 1 GB/sec / 16 drives = 62.5 MB/sec per drive!
     PCI-E 4.0 1 lane = 2 GB/sec / 16 drives = 125 MB/sec per drive!
     PCI-E 3.0 1 lane = 1 GB/sec / 8 drives = 125 MB/sec per drive!
     PCI-E 4.0 1 lane = 2 GB/sec / 8 drives = 250 MB/sec per drive!
     PCI-E 3.0 1 lane = 1 GB/sec / 4 drives = 250 MB/sec per drive!
     PCI-E 4.0 1 lane = 2 GB/sec / 4 drives = 500 MB/sec per drive!
     PCI-E 3.0 2 lanes = 2 GB/sec / 16 drives = 125 MB/sec per drive!
     PCI-E 4.0 2 lanes = 4 GB/sec / 16 drives = 250 MB/sec per drive!
     PCI-E 3.0 2 lanes = 2 GB/sec / 8 drives = 250 MB/sec per drive!
     PCI-E 4.0 2 lanes = 4 GB/sec / 8 drives = 500 MB/sec per drive!
     PCI-E 3.0 2 lanes = 2 GB/sec / 4 drives = 500 MB/sec per drive!
     PCI-E 4.0 2 lanes = 4 GB/sec / 4 drives = 1000 MB/sec per drive!
     PCI-E 3.0 4 lanes = 4 GB/sec / 16 drives = 250 MB/sec per drive!
     PCI-E 4.0 4 lanes = 8 GB/sec / 16 drives = 500 MB/sec per drive!
     PCI-E 3.0 4 lanes = 4 GB/sec / 8 drives = 500 MB/sec per drive!
     PCI-E 4.0 4 lanes = 8 GB/sec / 8 drives = 1000 MB/sec per drive!
     PCI-E 3.0 4 lanes = 4 GB/sec / 4 drives = 1000 MB/sec per drive!
     PCI-E 4.0 4 lanes = 8 GB/sec / 4 drives = 2000 MB/sec per drive!
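     For anyone who wants to extend the breakdown above to other drive counts, here is a small sketch (not part of the original post) that regenerates the per-drive numbers from nominal bus bandwidths; the figures are theoretical maximums and ignore protocol overhead.

        # Sketch that reproduces the per-drive figures above by dividing a
        # nominal bus bandwidth (MB/sec) evenly across the drives sharing it.
        # Nominal numbers only; real controllers lose some bandwidth to overhead.
        BUSES = {
            "PCI 32-bit 33 MHz":    133.33,
            "PCI 32-bit 66 MHz":    266.0,
            "PCI-X 64-bit 133 MHz": 1072.0,
            "PCI-E 3.0 x1":         1000.0,
            "PCI-E 4.0 x1":         2000.0,
            "PCI-E 3.0 x2":         2000.0,
            "PCI-E 4.0 x2":         4000.0,
            "PCI-E 3.0 x4":         4000.0,
            "PCI-E 4.0 x4":         8000.0,
        }

        for bus, bandwidth in BUSES.items():
            shares = ", ".join(
                f"{drives} drives: {bandwidth / drives:.1f} MB/sec"
                for drives in (2, 4, 8, 16)
            )
            print(f"{bus:22s} {bandwidth:7.1f} MB/sec -> {shares}")
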
  8. It looks like I will just need to copy/move the files over to new/re-formatted drives on the array for use under V6.x. Not fun, but it looks like it is needed. It also seems it may be time to consider a motherboard/SATA controller upgrade (though probably after I migrate to V6.x). 8 or 16 port SATA controllers on PCI-e would definitely increase parity check/rebuild speed. I was trying to delay any big changes since the hardware has been working so well for so many years... but looking at options now, I am not really sure.
  9. The oldest one is now running a Core 2 Duo (64-bit) E7200, with 2 GB of RAM, expandable up to 4 GB. It boots from a 4 GB flash drive. SATA drives are running on:
     1 ea - SuperMicro SAT2-MV8 (running in PCI mode) (8 drives)
     1 ea - Promise FASTTRAK S150 TX4 PCI card (4 drives)
     The rest of the drives are on the motherboard SATA ports (4 drives)
  10. I bought 2 more licenses to bring two more servers online a few years back. One is seeing heavy use, the other only gets odd tests as I think of them.
  11. OK, this may seem a bit odd, but most of my UNRAID servers are way back on 4.5.3. They have been rock stable forever, with the exceptions of power outages, a failed motherboard, and a few power supplies here and there over the years. I am finally done with my tests on 6.x... and am ready to commit to retiring and upgrading from the 4.x world... But I see no link that actually takes me to any upgrade guide now. Am I missing something?
  12. Most of mine are way back on 4.5.3. Finally considering updating them...
  13. Thanks, I just sent you two e-mails, one for each server, they have different controllers. I included the debug files for each server.
  14. No, it gives an error, so I played around until I got the flags set properly for my controller as prompted in the error. The following commands do properly return the respective drive serial numbers:
     smartctl -i /dev/twa1 -d 3ware,1
     smartctl -i /dev/twa1 -d 3ware,0
     smartctl -i /dev/twa1 -d 3ware,2
     smartctl -i /dev/twa0 -d 3ware,0
     smartctl -i /dev/twa0 -d 3ware,1
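     A small convenience sketch built on the commands above: it loops over 3ware ports with the same smartctl flags and prints the Serial Number line for each. The port counts per controller are assumptions, not from the post; adjust them to match the actual cards.

        # Sketch: query drives behind 3ware controllers using the smartctl flags
        # shown above (-d 3ware,N against /dev/twa0 and /dev/twa1) and print the
        # "Serial Number:" line from "smartctl -i". Port ranges are assumed.
        import subprocess

        CONTROLLERS = {"/dev/twa0": range(0, 2), "/dev/twa1": range(0, 3)}  # assumed port counts

        for device, ports in CONTROLLERS.items():
            for port in ports:
                out = subprocess.run(
                    ["smartctl", "-i", device, "-d", f"3ware,{port}"],
                    capture_output=True, text=True, check=False,
                ).stdout
                serial = next(
                    (line.strip() for line in out.splitlines()
                     if line.strip().startswith("Serial Number:")),
                    "Serial Number: not found",
                )
                print(f"{device} 3ware,{port}: {serial}")
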
  15. I also see what looks like the same results on the 2nd server.