Gizmotoy

Members
  • Posts: 276
  • Joined
  • Last visited



Gizmotoy's Achievements

Contributor (5/14)

Reputation: 4

  1. Ah, tricky. I did not see that in the log. I was hunting using the words "format" and "cache". This was exactly it, though: after a reboot everything is working correctly. Thank you!
  2. I've been using Unraid for a very long time, but have never run a cache drive. As I was deprecating some old hard drives, I decided I wanted to give this a shot. I'm getting an error in the UI that says "Unmountable: Unsupported or no file system". I do see the section at the bottom of the main page that says "Unmountable disks present", with the "Yes I want to do this" checkbox alongside the Format button. I tried that, but even after the format the drives still show the same error. It's not clear what I'm doing wrong. Does anything stick out? The process I followed:
     1. Got the former array drives out of the array and got the array back in good shape
     2. Precleared the former array drives (primarily as an integrity check)
     3. Stopped the array
     4. Increased the cache slot count to 2 to enable direct mirroring
     5. Added the precleared drives to cache slots 1 and 2
     6. Started the array
     7. At the bottom of the main page, checked "Yes I want to do this" and clicked the Format button
     8. Verified the errors and state still exist
     I think I then repeated steps 3-8 a second time, with the same result. Running 6.12.6. hyperion-diagnostics-20240115-0949.zip
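For anyone hitting the same wall: a freshly precleared drive is zeroed end to end, so it genuinely has no filesystem signature until a format actually succeeds, and that is exactly what "Unmountable: no file system" reports. A quick way to confirm that state from the console, sketched here against a scratch image file rather than a real device node (the `/dev` names are specific to each system):

```shell
# A precleared drive is all zeros, so it carries no filesystem signature.
# /tmp/precleared.img stands in for the real cache device.
truncate -s 16M /tmp/precleared.img
blkid /tmp/precleared.img || echo "no filesystem signature found"
```

On a real system you would point `blkid` at the cache device itself; once the format has taken, it reports a `TYPE` tag (e.g. `btrfs`) instead of nothing.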
  3. I had a set of 3 drives to remove. I followed the instructions laid out in the previous posts exactly, and it worked very well for my first drive. I created a new config, then as a follow-up confirmation ran a parity check, and all was good. Then I started on the second drive. Unfortunately, after the zeroing completed I noticed that creating a new config resets the md_write_method tunable from Turbo back to Auto. I'm wondering if this will break anything, or if the process will just take longer. I noticed that with Auto, only the drive being zeroed and my two parity disks are read during the operation; with Turbo it was reading from all disks and writing to the drive being zeroed and the parity drives. I'm OK with it taking longer (it's done now). I just want to confirm I didn't irreparably break anything.
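For anyone wondering why Auto versus Turbo only changes speed, not correctness: both write modes end up with identical parity, they just compute it from different reads. A minimal sketch using single-byte stand-in "disks" (the values are made up purely for illustration):

```shell
# Three data "disks" as single bytes; parity is the XOR of all data.
d1=0xA5; d2=0x3C; d3=0x0F
p=$(( d1 ^ d2 ^ d3 ))
new_d3=0x00                         # zeroing disk 3
# Read/modify/write (Auto): read old data + old parity, patch parity.
p_rmw=$(( p ^ d3 ^ new_d3 ))
# Reconstruct write (Turbo): read every other disk, recompute from scratch.
p_turbo=$(( d1 ^ d2 ^ new_d3 ))
echo "$p_rmw $p_turbo"              # identical either way
```

So switching modes mid-operation can't corrupt anything; Turbo just trades more reads across all disks for fewer reads per written stripe.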
  4. I thought I might start a fun thread for the holidays. So I have this hard drive that has been in service and in active use for 12 years and 6 months. It is horrifically slow compared to modern drives, but it is a trooper. I really want to let it keep going and see when it will eventually fail, but it feels like too big a risk to keep in my main array. Plus, as noted, the performance is terrible. I'm going to pull it out of the array next week and let it live out its retirement some other way. Any ideas or suggestions? Anyone still running something older than this? Drive details: Hitachi Deskstar 5K3000, model HDS5C3020ALA632. The drive's stats, according to SMART. Some of these are astonishing:
     • Power-on hours: 109,250
     • Start/stop count: 5,927
     • Load cycle count: 6,071
     • Reallocated sector count: 0
     • Power cycle count: 116
     • Logical sectors written: 66,681,556,472
     • Number of write commands: 460,521,050
     • Logical sectors read: 4,704,534,865,919 (4 trillion sector reads?!? Can this be right?!?)
     • Number of read commands: 7,586,319,622
     A big salute to you, bulletproof Hitachi I purchased on a whim from MicroCenter in 2011. Hitachi_HDS5C3020ALA632_ML0220Fxxxxx-20231224-2107.txt
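Sanity-checking those headline numbers (assuming this drive's standard 512-byte logical sectors):

```shell
# Convert the SMART counters into human units.
awk 'BEGIN {
  sectors_read = 4704534865919
  hours        = 109250
  printf "data read : %.1f TB\n", sectors_read * 512 / 1e12
  printf "powered on: %.1f years\n", hours / 8766   # ~8766 hours per year
}'
```

So the 4.7 trillion sector reads work out to roughly 2.4 PB read over the drive's life, and the power-on hours line up exactly with the 12 and a half years of service. The figure is enormous but internally consistent.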
  5. Oh man, what a silly mistake. That definitely resolved the issue. Too bad the folder picker doesn't work for that path or I would have found out immediately. Thanks for the suggestion, I'm all set!
  6. I was trying to figure out why my Docker container utilization was high, and after running `du -h -d 1 /` from every Docker container console and finding nothing, I decided to enable log rotation for my Docker image. So I set "Enabled" to "No", turned on Advanced Settings, enabled Log Rotation, and then set the max size to 1 file at 100MB. When I try to apply the changes I get the error shown in the attached image. The hover-over for the red directory is "please match the requested format". I did not change that directory at all, and the format is correct: the path is accurate, and it's where all the other containers are stored. I do run Docker from an Unassigned Devices volume since I don't use a cache disk, but this is the first issue I've ever had with the arrangement in the maybe 3-4 years I've had it like this. That disk is mounted and active. I can't even re-enable Docker now. Any suggestions?
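A related shortcut for the original problem of finding what was eating the image: container stdout logs live as `<id>-json.log` files under Docker's containers directory, so you can hunt for oversized ones from the host instead of running `du` inside every container. The sketch below simulates that layout with a scratch directory so it is runnable anywhere; on a real system you would point `find` at the containers directory inside the Docker image mount.

```shell
# Simulate Docker's containers/<id>/<id>-json.log layout with a scratch dir.
mkdir -p /tmp/docker-demo/containers/abc123
truncate -s 150M /tmp/docker-demo/containers/abc123/abc123-json.log
# List any container log over 100MB -- the runaway ones.
find /tmp/docker-demo/containers -name '*-json.log' -size +100M
```

Log rotation caps exactly these files, which is why enabling it fixes this class of image-utilization creep.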
  7. I inspected the cables; some affected drives are on SAS cables to the Dell H310, and some are on plain SATA cables to the motherboard. In total, 9 cables are affected, of varying brands, ages, and types. Power could potentially be the culprit, but I'm not sure how to figure that out short of just replacing the power supply. It's an oversized Seasonic 80 Plus Platinum, so not a cheap supply.
  8. So I replaced my AOC-SASLP-MV8 with the Dell H310 and ran a parity check, and all is fine. The WebUI responsiveness issues have resolved. That said, I'm still getting a bunch of those SCSI task aborts from above. It looks like they're on both the Dell adapter and the built-in motherboard ports, so everything is affected. Are they safe to ignore? New diag attached. hyperion-diagnostics-20190610-2005.zip
  9. A couple of developments here: As soon as the preclear finished, the system went back to normal without a reboot. Drives are all online and nominal, Docker is functional, and the WebUI is functional. I ordered a refurbished Dell H310 as a backup in case my MV8 is failing, but it'll take a few days to arrive.
  10. So I've had a disk or two fail with read errors in the past 2 months. I just recovered from a failure, and was preclearing my hot spare. I came home today to check on it, and noticed it was almost done. However, while interacting with it, the Unraid WebUI became unresponsive. I can still SSH in, so I grabbed the diagnostics (attached), but it looks like both my main Unraid WebUI and all Docker WebUIs are unresponsive. If I try to access the WebUI, I get the error in the attached image. It looks like maybe the webserver itself is up, but Unraid is down. If I look through the logs, I see a bunch of this starting today:
     Jun 5 19:12:37 Hyperion kernel: sd 8:0:1:0: attempting task abort! scmd(000000009f4a675f)
     Jun 5 19:12:37 Hyperion kernel: sd 8:0:1:0: [sdc] tag#0 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
     Jun 5 19:12:37 Hyperion kernel: scsi target8:0:1: handle(0x000b), sas_address(0x4433221102000000), phy(2)
     Jun 5 19:12:37 Hyperion kernel: scsi target8:0:1: enclosure logical id(0x5003048011f2a900), slot(1)
     Jun 5 19:12:38 Hyperion kernel: sd 8:0:1:0: task abort: SUCCESS scmd(000000009f4a675f)
     Jun 5 19:12:46 Hyperion kernel: sd 8:0:4:0: attempting task abort! scmd(0000000010ab0e43)
     Jun 5 19:12:46 Hyperion kernel: sd 8:0:4:0: [sdf] tag#0 CDB: opcode=0x85 85 06 20 00 d8 00 00 00 00 00 4f 00 c2 00 b0 00
     Jun 5 19:12:46 Hyperion kernel: scsi target8:0:4: handle(0x0010), sas_address(0x4433221107000000), phy(7)
     Jun 5 19:12:46 Hyperion kernel: scsi target8:0:4: enclosure logical id(0x5003048011f2a900), slot(4)
     Jun 5 19:12:47 Hyperion kernel: sd 8:0:4:0: task abort: SUCCESS scmd(0000000010ab0e43)
     Jun 5 19:32:39 Hyperion kernel: sd 8:0:1:0: Power-on or device reset occurred
     Jun 5 19:32:39 Hyperion kernel: sd 8:0:4:0: Power-on or device reset occurred
     Jun 5 19:32:39 Hyperion rc.diskinfo[5829]: SIGHUP received, forcing refresh of disks info.
     Jun 5 19:32:39 Hyperion rc.diskinfo[5829]: SIGHUP ignored - already refreshing disk info.
So it looks like something is up, but I'm not sure what. Two questions: Is there a way to cleanly shut down the array now that this has happened? And are there any clues as to what's gone wrong? I noticed it might be an error with my SAS controller. It's an AOC-SASLP-MV8 that, while a Marvell chipset, has never given me trouble before (but is 8 years old now). If so, is there a recommended drop-in replacement? Any suggestions appreciated. Thanks! hyperion-diagnostics-20190605-1959.zip
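One way to see whether task aborts like those cluster on a single controller, port, or drive is to tally them per SCSI target. A small sketch, using two of the log lines from the post above as sample input:

```shell
# Tally "attempting task abort" events per SCSI target.
cat > /tmp/syslog.sample <<'EOF'
Jun  5 19:12:37 Hyperion kernel: sd 8:0:1:0: attempting task abort! scmd(000000009f4a675f)
Jun  5 19:12:46 Hyperion kernel: sd 8:0:4:0: attempting task abort! scmd(0000000010ab0e43)
EOF
grep -o 'sd [0-9:]*: attempting task abort' /tmp/syslog.sample | sort | uniq -c
```

Run against the real syslog, the counts make it obvious whether one target (and therefore one cable or port) dominates or the problem is spread across the whole controller.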
  11. Have you had any problems with your SAS2LP-MV8 on 6.7? I have the same card and am hesitant to upgrade given the other issues noted in this thread. I guess I should investigate if there's a well-reviewed LSI-based card that's a drop-in replacement. I don't really want to redo cabling if I don't have to, and my MV8 has been rock solid for over 8 years now.
  12. Hmm. That does work, but only if I don't need access to Unraid itself. That makes Plex work, but it then breaks Unraid's Main page even if I set up a custom location for it. There are many interdependencies. This is tricky.
  13. Has anyone been able to get this to work with Plex? I've gotten Grafana and a bunch of others working, but Plex just redirects back to my main Unraid page. I just use an invalid TLD locally without SSL. With a local DNS server as well, this traffic never goes outside my network. Unraid is on 8008, and Plex is on its usual 32400.
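Plex is pickier than most apps behind a reverse proxy because its web client hard-codes paths, so it generally wants its own hostname rather than a subfolder under the Unraid UI. A hypothetical nginx server block illustrating the separate-hostname approach; the hostname, LAN IP, and everything except Plex's usual port 32400 are assumptions for illustration, not a drop-in config:

```nginx
# Hypothetical: give Plex its own hostname instead of a subfolder,
# so its hard-coded paths don't collide with Unraid's WebUI.
server {
    listen 80;
    server_name plex.home.test;                 # assumed local-DNS name

    location / {
        proxy_pass http://192.168.1.10:32400;   # assumed server LAN IP
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Upgrade $http_upgrade;  # Plex uses websockets
        proxy_set_header Connection "upgrade";
    }
}
```

With Unraid itself answering on a different hostname (or its own port, like 8008 as in the post), the two UIs no longer fight over "/".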
  14. I've tried everything at this point: Bridged/Host/Static network modes and every optional parameter in `speedtest-cli`. I installed it on my desktop machine on the same network, and it works there. It just won't work in this Docker container, for whatever reason. It's not a measurement/reporting issue, either: stats from my network switch confirm that the throughput is actually very low upstream. There's a large upstream burst right at the start of the test, then nothing, resulting in a low Mbps reading. I'm not an expert, but it kind of seems like it's waiting for an acknowledgement from the Speedtest server before sending more data, and that ACK never arrives. The current results use HTTP. There's a pull request outstanding for a socket-based option. Maybe that'll help whenever it gets merged into master.
  15. This is great, thank you. The downstream measurements are now roughly in line with expectations. I still can't figure out why the upstream is locked to 4Mbps, but at least the downstream works.