Yulquen

Members
  • Posts
    37
  • Joined

  • Last visited

  • Gender
    Undisclosed


Yulquen's Achievements

Noob (1/14)

Reputation: 0

  1. Finally upgraded my 7-year-old unRAID build from 4.7 to 6.3.5 last weekend to be able to use bigger disks and dual parity, which went quite OK. My hardware: Core2Duo E8500 / 4GB / Supermicro X7SBE / Corsair 850W power supply / 2 Supermicro SATA controller cards / 4 Supermicro 5-in-3 hotswap cages. I have installed only a few plugins, namely Preclear (V.2017.09.27), Community Applications, and Server Layout. My array currently holds 12 2TB WD enterprise disks (1 parity / 11 data). I just bought 3 10TB WD Gold disks, put them in my server, and launched preclear on all 3 on the evening of 31/10. When I got home from work today, one of the disks had passed (and an email was sent to my inbox), and another is now at 92% post-read, but what's troubling me is that the third (sdd) has just stopped at 34% post-read, and has even spun down. The time counter has also stopped for this disk. The array disks are set to spin down after 15 min. of no activity, but this should not affect disks outside the array. My syslog is attached. Thanks for any suggestions. syslog.txt
  2. I've been on 4.7 for years, and it's been a long time since I last kept up with unRAID development, so I have some questions:
     - Is V5 abandoned, and is V6 the way to go now?
     - Is V6 considered stable for day-to-day use?
     - Has a multiple-parity-disk scheme been implemented? With larger and larger disks available, it would be nice.
     Thanks for replies.
  3. I'm getting this every 3 minutes in my syslog: Aug 27 14:22:36 TheCube kernel: Hangcheck: hangcheck value past margin! I'm still on V4.7, with plans to migrate to V5. What causes this? The system has been running fine for many years. Thanks in advance for help.
  4. Sorry for bumping the thread, but I would like to know if anyone can answer my first question: if I were to keep the disks spun down, will the disks still spin up every now and then to do the offline check of sectors? And automatically spin down again when done? Thanks in advance for any answers.
  5. I have had my unRAID system running for more than 2.5 years now, and I would like to share my experience with my HDDs. I started out with 6 WD enterprise disks, model WDC_WD2003FYYS-0 (2TB). I chose to keep all disks spinning to avoid issues with spin-up delays. Their temperature is usually about 30-35°C in summer, but now in the winter months they are at 25-30°C.
     So far, 3 of those 6 disks have been replaced. The first one just died after less than a year. I replaced the second one this summer, and the third one was just returned; now I'm waiting for a replacement. Failing disks 2 and 3 started throwing pending sectors. I tried to rebuild the disks to force reallocation, but each disk just rewrote the bad sectors and was happy with that. The problem was that after a few days, the pending sectors were back. I tried several rebuilds, but the disks seemed to refuse to reallocate. I also tried putting the disks into another hotswap bay to rule out problems with the bays, but the problems would not go away. Finally, the disks started throwing read errors, and at that point I threw them out of the array and replaced them; after the rebuild, the replacement disks have not yet had any issues. All replacement disks were precleared 3 times before use.
     As of now, I have 8 disks in my array, 2 of them newer versions of the initial ones. I check the array every day using the excellent SMART view of unMENU. The oldest disks have been operating for 22,800 hours and have a load cycle count of >150,000. So far I have not lost any data that I'm aware of. I did a filesystem check of all disks a few days ago, and no corruption was found.
     I'm starting to think that keeping the disks spun down is a must, or else they will all die within 2 years. The 1.2 million hours MTBF of the enterprise disks seems unrealistic. Perhaps they last that long if you never use them. But at least they come with a 5-year warranty (5 years = 3.65% of 1,200,000 hours).
     I have some questions: if I were to keep the disks spun down, will they still spin up every now and then to do the offline check of sectors? I obviously want that to happen. And can I expect any trouble with media players, both software like VLC/XBMC and hardware ones like Syvio, since the content will initially be delayed for a few seconds? And finally, how long do your disks last (>=2TB ones), and which ones have the best track record so far? Thanks in advance for answers and/or comments.
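The warranty/MTBF arithmetic and the load-cycle rate quoted in the post above can be checked with a couple of one-liners (all numbers taken from the post; awk handles the floating-point division):

```shell
# Hours in a 5-year warranty period (ignoring leap days)
hours=$((5 * 365 * 24))
echo "5 years = ${hours} hours"

# What fraction of the quoted 1,200,000 h MTBF the warranty period covers
awk -v h="$hours" 'BEGIN { printf "warranty = %.2f%% of MTBF\n", 100 * h / 1200000 }'

# Load-cycle rate of the oldest disks: >150,000 cycles in 22,800 power-on hours
awk 'BEGIN { printf "load cycles per hour = %.1f\n", 150000 / 22800 }'
```

This confirms the 3.65% figure, and shows the oldest disks averaging about 6.6 load cycles per hour, i.e. a head park roughly every 9 minutes of power-on time.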
  6. The disk sits in a Supermicro hotswap bay with 4 other disks, and they are fine. They are fed from an 850W Corsair power supply with one 12V rail. I do have 3 additional hotswap bays in the same machine, of which 2 are unpopulated. I can try moving the disk to one of them, rebuild, and see if anything changes.
  7. My parity drive (WDC_WD2002FYPS-0_WD-WCAVY1999906) is constantly getting pending sectors and offline uncorrectable sectors, labeled red in the unMENU SMART view. They either go away by themselves, or they are forced out (by me rebuilding the parity drive). The parity drive is the only one with the red SMART issues. So far no blocks have been reallocated. This is the current status of my parity disk after a new rebuild tonight. Should I replace it?

     SMART Attributes Data Structure revision number: 16
     Vendor Specific SMART Attributes with Thresholds:
     ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
       1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           2377
       3 Spin_Up_Time            0x0027 151   148   021    Pre-fail Always  -           9433
       4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           45
       5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
       7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
       9 Power_On_Hours          0x0032 072   072   000    Old_age  Always  -           21033
      10 Spin_Retry_Count        0x0032 100   253   000    Old_age  Always  -           0
      11 Calibration_Retry_Count 0x0032 100   253   000    Old_age  Always  -           0
      12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           44
     192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           11
     193 Load_Cycle_Count        0x0032 150   150   000    Old_age  Always  -           151269
     194 Temperature_Celsius     0x0022 124   114   000    Old_age  Always  -           28
     196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
     197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
     198 Offline_Uncorrectable   0x0030 200   200   000    Old_age  Offline -           0
     199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
     200 Multi_Zone_Error_Rate   0x0008 200   187   000    Old_age  Offline -           1

     Thanks in advance for any replies.
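The attributes worth watching in a report like the one above can be pulled out with a short awk filter. A sketch, using a few lines copied from the posted report as sample input (on a live box you would pipe in `smartctl -A /dev/sdX` instead):

```shell
# Sample lines copied from the SMART report in the post above.
sample='  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0'

# Print name=raw_value for the reallocation/pending/CRC counters;
# any nonzero raw value here deserves attention.
echo "$sample" | awk '$1 == 5 || $1 == 197 || $1 == 198 || $1 == 199 { print $2 "=" $NF }'
```

The raw value (last column) is the count that matters for these attributes; the normalized VALUE/WORST/THRESH columns only trip once the drive itself considers the attribute failed.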
  8. One of my data disks is reporting a few current pending sectors. What exactly is the procedure to rebuild this data disk (which will force reallocation of the bad sectors)? I'm using unRAID 4.5.4. I have searched the forums and wiki without luck. Thanks in advance for help.
  9. Thanks, I rebuilt the parity by un-assigning it and assigning it again. It was successfully rebuilt, and a new parity check shows no errors. The 2 pending sectors are gone from the SMART report, although they were not reallocated, as that particular raw value is still 0. But along the way, I have now gotten 1 offline uncorrectable sector on my parity disk (it appeared before I rebuilt the parity). If I understand correctly, the disk does some sector checking of its own while it's idle, and marks the sectors that it cannot read. I have some questions:
     1. When unRAID is checking parity, and one or more sectors are reported unreadable by the hard disk to the filesystem, what happens? Is the disk with the faulty sector (and potentially corrupted files) kicked out, or is the error ignored?
     2. If there's an offline uncorrectable error count in the SMART status, does that mean that the sector is definitely gone, or can it still be readable? After all, a passed parity check means that all sectors were read and are consistent with the rest of the drives.
     3. Can the offline uncorrectable count decrement, like the pending sector count does if the sector reads OK a number of times, or is it latched, as a warning of things to come?
     4. Is there a correlation between offline uncorrectable sectors and pending sectors? If a sector is detected as unreadable by the internal housekeeping check, is it also marked as pending reallocation?
     Please forgive me for asking all those questions.
  10. I'm afraid I'm still running 4.5.4, as V5 is not the official stable release yet. Does that apply to my version as well?
  11. Yes, there's one pending sector on the parity disk; the data disks are fine.
  12. I've got 1 sector on my parity disk pending reallocation. If I understood correctly, the pending status was caused by a read error, and the sector will not be remapped until that particular sector is written. If 1 data disk fails and I rely on parity to rebuild it, do I risk the sector being unreadable, causing some data corruption on the replacement disk? Is there a way to force reallocation of the bad sector, or do I need to remove the parity from the system, start the array without it, re-attach it, and do a full rebuild to get around it? Thanks in advance for suggestions.
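One commonly suggested way to force reallocation of a single pending sector, without pulling the parity disk and doing a full rebuild, is to overwrite just that LBA with hdparm. This is destructive to that sector's contents (acceptable on a parity disk you resync afterwards). The device name and LBA below are placeholders, so this sketch only prints the commands rather than running them:

```shell
dev=/dev/sdX        # placeholder: the disk with the pending sector
lba=123456789       # placeholder: LBA of the bad sector, e.g. from the
                    # LBA_of_first_error column of `smartctl -l selftest`

# First confirm the sector really fails to read, then overwrite it with
# zeros so the drive firmware gets a chance to remap it.
echo "hdparm --read-sector $lba $dev"
echo "hdparm --yes-i-know-what-i-am-doing --write-sector $lba $dev"
```

Afterwards, re-check Current_Pending_Sector with `smartctl -A`; if the write succeeded in place, the count drops without Reallocated_Sector_Ct moving, just like in the rebuilds described in the earlier posts.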
  13. Thanks again for all the replies. I'm running only 7200rpm enterprise disks in my unRAID box, so I'll probably end up with the Supermicro cages.
  14. I would like to put my HDDs into hot swap bays, which would make my life easier when replacing failed disks or adding new ones. Is there anything wrong with this one: http://www.supermicro.nl/products/accessories/mobilerack/cse-m35tq.cfm It's the same brand as my mainboard (MBD-X7SBE). The question is whether I can use this in my unRAID setup without any issues whatsoever. And does unRAID support hot swap? If not, would I need to power the server down when replacing/adding a disk? Thanks in advance for any replies.