rowid_alex

Members
  • Posts

    43
  1. BTW, I checked the location of the disk and it is connected directly to a SATA port on the motherboard. So it doesn't seem to be a cable issue... unless it happens again next time, I think.
  2. Hello, I rebooted the server and this time the parity disk has SMART data. I found that the UDMA CRC error count (flags 0x000a) has permanently increased by one, but everything else seems fine: 0x06 0x018 4 1 --- Number of Interface CRC Errors. Please kindly suggest what I should do to re-enable the disk (a smartctl check sketch follows after this list). Thanks! unraid-diagnostics-20210524-2352.zip
  3. Understood. I will check the cable then. Thanks for the explanation.
  4. My understanding is that since the disk is disabled, SMART data is not available. Should I restart the array to get SMART back?
  5. Hello guys, I just found my parity disk sdf has been disabled since midnight. Here are the syslogs:
     May 24 00:53:29 UNRAID kernel: ata6.00: exception Emask 0x10 SAct 0x1c00000 SErr 0x400000 action 0x6 frozen
     May 24 00:53:29 UNRAID kernel: ata6.00: irq_stat 0x08000000, interface fatal error
     May 24 00:53:29 UNRAID kernel: ata6: SError: { Handshk }
     May 24 00:53:29 UNRAID kernel: ata6.00: failed command: WRITE FPDMA QUEUED
     May 24 00:53:29 UNRAID kernel: ata6.00: cmd 61/40:b0:40:4c:af/05:00:ca:01:00/40 tag 22 ncq dma 688128 out
     May 24 00:53:29 UNRAID kernel: res 40/00:00:d0:55:af/00:00:ca:01:00/40 Emask 0x10 (ATA bus error)
     May 24 00:53:29 UNRAID kernel: ata6.00: status: { DRDY }
     May 24 00:53:29 UNRAID kernel: ata6.00: failed command: WRITE FPDMA QUEUED
     May 24 00:53:29 UNRAID kernel: ata6.00: cmd 61/50:b8:80:51:af/04:00:ca:01:00/40 tag 23 ncq dma 565248 out
     May 24 00:53:29 UNRAID kernel: res 40/00:00:d0:55:af/00:00:ca:01:00/40 Emask 0x10 (ATA bus error)
     May 24 00:53:29 UNRAID kernel: ata6.00: status: { DRDY }
     May 24 00:53:29 UNRAID kernel: ata6.00: failed command: WRITE FPDMA QUEUED
     May 24 00:53:29 UNRAID kernel: ata6.00: cmd 61/40:c0:d0:55:af/05:00:ca:01:00/40 tag 24 ncq dma 688128 out
     May 24 00:53:29 UNRAID kernel: res 40/00:00:d0:55:af/00:00:ca:01:00/40 Emask 0x10 (ATA bus error)
     May 24 00:53:29 UNRAID kernel: ata6.00: status: { DRDY }
     May 24 00:53:29 UNRAID kernel: ata6: hard resetting link
     May 24 00:53:39 UNRAID kernel: ata6: softreset failed (1st FIS failed)
     May 24 00:53:39 UNRAID kernel: ata6: hard resetting link
     May 24 00:53:49 UNRAID kernel: ata6: softreset failed (1st FIS failed)
     May 24 00:53:49 UNRAID kernel: ata6: hard resetting link
     May 24 00:54:24 UNRAID kernel: ata6: softreset failed (1st FIS failed)
     May 24 00:54:24 UNRAID kernel: ata6: limiting SATA link speed to 3.0 Gbps
     May 24 00:54:24 UNRAID kernel: ata6: hard resetting link
     May 24 00:54:29 UNRAID kernel: ata6: softreset failed (1st FIS failed)
     May 24 00:54:29 UNRAID kernel: ata6: reset failed, giving up
     May 24 00:54:29 UNRAID kernel: ata6.00: disabled
     May 24 00:54:29 UNRAID kernel: ata6: EH complete
     May 24 00:54:29 UNRAID kernel: sd 6:0:0:0: [sdf] tag#26 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00 cmd_age=0s
     May 24 00:54:29 UNRAID kernel: sd 6:0:0:0: [sdf] tag#26 CDB: opcode=0x35 35 00 00 00 00 00 00 00 00 00
     May 24 00:54:29 UNRAID kernel: blk_update_request: I/O error, dev sdf, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
     May 24 00:54:29 UNRAID kernel: sd 6:0:0:0: [sdf] tag#27 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00 cmd_age=60s
     May 24 00:54:29 UNRAID kernel: sd 6:0:0:0: [sdf] tag#27 CDB: opcode=0x8a 8a 00 00 00 00 01 ca af 5b 10 00 00 01 70 00 00
     May 24 00:54:29 UNRAID kernel: blk_update_request: I/O error, dev sdf, sector 7695457040 op 0x1:(WRITE) flags 0x0 phys_seg 46 prio class 0
     May 24 00:54:29 UNRAID kernel: md: disk0 write error, sector=7695456976
     I referred to some threads in the forum, and I am not sure whether I should stop the array and rebuild the parity disk onto the same hard disk or not. Please kindly help check (a quick log/SMART re-check sketch follows after this list); the diagnostic file is attached. Thanks a lot! Alex unraid-diagnostics-20210524-2256.zip
  6. I had the same problem and found a way to fix it. Go to the Unraid Docker page, change the view from Basic to Advanced with the button in the top-right corner, and you will see a "force update" option for every docker container. Force update letsencrypt and the issue is resolved! (A rough CLI equivalent is sketched after this list.)
  7. Thanks! I thought this would be updated via the docker container. I have successfully upgraded to 15.0.5 now.
  8. Thanks. I just got the update this evening, so this time I tried the "force update" function in Advanced view to update the nextcloud container, and the updater said it is up to date. So nothing has changed; I am still on 14.0.3, however...
  9. Hello, I see people mentioning nextcloud 15.0.5. My docker container has recently shown an update for nextcloud a couple of times, but after the update I am still at 14.0.3. May I know how to upgrade to 15.0.5 like you guys did? (An in-container upgrade sketch follows after this list.) Thanks!
  10. Thanks. May I know exactly how the algorithm that calculates parity slot 2 differs from slot 1? Personally I don't mind whether the parity disk is in slot 1 or slot 2, as long as I have one parity disk. (A sketch of the usual P/Q parity formulas follows after this list.)
  11. OK, so what you mean is: parity-swap consists of one parity copy from the old parity disk to the new disk (this only needs the old parity disk to be healthy, rather than all other disks, since the array is offline), followed by one data rebuild (which requires all disks to be healthy). What I did via parity disk 2 needs one parity sync for parity disk 2 (which needs all disks healthy), followed by one data rebuild (which needs all disks healthy as well). Is my understanding correct?
  12. Here's the thing: I have an array with a 5TB parity drive and data drives of 5TB + 1.5TB + 1.5TB. A few days ago the 5TB data drive failed and I sent it to the vendor for RMA. At the same time I bought two new 8TB drives and want to replace both the failed data drive and the parity drive. I found out I cannot replace the failed data drive with an 8TB drive, since the new 8TB drive is larger than the current 5TB parity drive. Of course I cannot replace the parity drive with the new drive directly either, since then there would be no way to rebuild the data. Unraid suggests checking the "parity-swap" procedure, so I found it in the wiki: https://wiki.unraid.net/The_parity_swap_procedure
      My understanding is that I have to move the old parity disk into the missing data drive entry, assign my new drive as the parity disk, and then execute a "parity copy" to copy the parity data from the old parity disk (now in the data disk entry) to the new parity disk (in the parity disk entry). Once the parity data is copied, starting the array will rebuild the data onto the data disk entry.
      However, I achieved this a different way via the parity 2 slot, since I don't have dual parity disks:
      - add the pre-cleared 8TB disk to the parity 2 entry
      - start the array and sync the parity data
      - once the parity 2 disk is ready, stop the array, remove the 5TB parity 1 disk from the configuration and start the array again; this removes parity 1 from the configuration completely
      - stop the array; now the parity disk has been replaced with a larger 8TB disk in the parity disk 2 entry
      - add the other new 8TB disk to the data disk entry
      - start the array and rebuild the data
      My questions are: is my procedure recommended, and is there any advantage to the parity-swap procedure over the procedure above? Thanks! Best Regards, Alex
  13. I have been searching the forums recently for Thunderbolt 3 support and found that most of the answers are "No", except this case: the poster used a Thunderbolt 2 enclosure for the drives and it was working properly with an earlier Unraid release (except for the booting issue he addressed). I am wondering if anybody has actually tried a Thunderbolt 3 external drive enclosure with a TB3 device (e.g. an Intel NUC)? I found the following information on egpu.io: since Unraid 6.5.2 is already on Linux kernel 4.14, it should include the following patch for TB3 security control: https://lkml.org/lkml/2017/5/26/432 Does that mean Unraid already supports eGPU by running: $ sudo sh -c 'echo 1 > /sys/bus/thunderbolt/devices/0-1/authorized' like in this topic: https://egpu.io/forums/pc-setup/egpu-in-linux-has-anyone-here-gotten-it-to-work/ Has anyone ever tested it? (A sysfs authorization sketch follows after this list.)
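
Regarding the CRC error count in post 2, here is a minimal sketch of reading that SMART attribute from the Unraid console with smartctl. The device name /dev/sdf is taken from the posts above and the attribute name can vary slightly by drive vendor:

    # Read all SMART attributes for the parity disk (device name assumed to be /dev/sdf)
    smartctl -A /dev/sdf

    # Show only the interface CRC counter (attribute 199 on most drives);
    # a value that stays stable over time points to a past cabling/connection glitch
    # rather than a failing disk
    smartctl -A /dev/sdf | grep -i -E 'crc|199'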
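For the disabled parity disk in post 5, a quick way to re-check the kernel log and SMART health from the console before deciding whether to rebuild; the ata port and device names are the ones from the log above and may differ after a reboot:

    # Look for further ATA bus errors on the affected port (ata6 / sdf per the log above)
    dmesg | grep -E 'ata6|sdf'

    # Full SMART report, including the device error log and self-test history
    smartctl -x /dev/sdf

    # Optionally run an extended self-test before trusting the disk for a rebuild
    smartctl -t long /dev/sdf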
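The "force update" in post 6 is done from the Unraid web UI; as a rough, hedged sketch of what it amounts to on the command line (pull the latest image, then recreate the container), assuming the container and repository are both named letsencrypt:

    # Pull the latest image for the container (repository name assumed)
    docker pull linuxserver/letsencrypt:latest

    # Stop and remove the old container; on Unraid you would normally let the
    # web UI recreate it from the saved template instead of recreating it by hand
    docker stop letsencrypt
    docker rm letsencrypt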
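On the Nextcloud version question in posts 7-9: updating the container image does not by itself upgrade the Nextcloud installation stored in the config volume; the web updater has to be run as well. A hedged sketch, assuming the container is named nextcloud, the app lives under /config/www/nextcloud, and the web files are owned by the user abc (typical for the linuxserver image, but all three are assumptions here):

    # Run Nextcloud's own updater inside the container as the web user
    docker exec -it -u abc nextcloud php /config/www/nextcloud/updater/updater.phar

    # Finish the upgrade with occ once the new files are in place
    docker exec -it -u abc nextcloud php /config/www/nextcloud/occ upgrade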
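On the parity slot 1 vs slot 2 question in posts 10-11: single parity is a plain XOR of all data disks, so it does not care which slot a disk sits in, while the second parity is (to my understanding) a RAID-6-style Q syndrome whose terms are weighted by the disk's slot number, which is why it is slot-dependent. In the usual notation, with D_i the block on data slot i and g a generator of the Galois field:

    P = \bigoplus_{i=1}^{n} D_i, \qquad
    Q = \bigoplus_{i=1}^{n} g^{i} \cdot D_i, \qquad g \in \mathrm{GF}(2^8)

Here the XOR is addition in GF(2^8); moving a disk to a different slot changes the weight g^i applied to it in Q but leaves P unchanged.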
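For the Thunderbolt question in post 13, a minimal sketch of inspecting and authorizing an attached device via sysfs on a 4.14+ kernel, run as root; the path 0-1 is just the example from the post and the real path depends on your topology:

    # List attached Thunderbolt devices and their current authorization state
    for d in /sys/bus/thunderbolt/devices/*; do
        echo "$d"
        cat "$d/device_name" "$d/authorized" 2>/dev/null
    done

    # Authorize the first device behind the host controller (path taken from the post)
    echo 1 > /sys/bus/thunderbolt/devices/0-1/authorized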