Koolkiwi

  1. Thanks. For the record: further research does indeed show that the AT Attachment (ATA) Command Set documentation confirms that the lifetime timestamp in the ATA self-test log data structure is a 16-bit word value, which "shall contain the power-on lifetime of the device in hours when command completion occurred." So there's no reasonable workaround for this, and I suspect unRAID systems still containing drives with more than 65535 power-on hours (~7.5 years) are probably a relatively rare edge case anyway.
  2. I have an old (but still good) drive that is in excess of 65535 power-on hours, ie. over 7.5 years! I just noticed, after doing a SMART extended offline self-test, that the unRAID GUI is showing the test as being completed after 8401 "LifeTime(hours)", instead of the expected ~73036 hours. As per the screenshot attached, you can see the current SMART Attribute RAW VALUE shown in the GUI is now: 73955 (8y, 5m, 6d, 11h). So, it appears the SMART Attribute RAW VALUE is being shown correctly, but the "SMART self-test history" "LifeTime(hours)" value is being truncated (wrapped) as a 16-bit value (the arithmetic is sketched after this list).
  3. Thanks jonathanm, I think the above (and your prior reply) does answer my question, and points out what I had been overlooking. This being the case, I can now understand the logic for ensuring the full Parity drive represents the full Parity (for the full Parity-size space). If I've understood you correctly, you're saying that when a new larger drive is installed, it is (in effect) the new "empty" drive space that is being brought into line with the equivalent empty / cleared space Parity. ie. It is the new Data space that is cleared, not the Parity being updated to reflect whatever was on the new Data drive space. In terms of the Parity process itself, no, I'm not going to cry semantics. I do believe I have a good understanding of the Parity mechanism, and do understand that parity is calculated across the raw content of the drives (yep, I've had too long a career in computers, IT, and software development).
  4. Yes, of course. But the point of discussing questions of this nature is so that the developers can consider potential improvements to the product in some future version (if it is deemed a valid point).
  5. So you are basically saying that the continued reading is just to verify that the remainder of the Parity drive reflects the Parity (0?) for zeroed Data, which is in effect what you could say we have when there is no Data (see the parity sketch after this list). I can see the argument for doing this (ie. easier / quicker to add a pre-cleared new disk that is larger than all your other data disks!). But, in terms of efficiency / parity disk wear, it does seem rather wasteful to be doing this extended zero-parity check *every time* you do a Parity Check, just to allow for the single case of adding a new pre-cleared disk that is also larger than any existing Data disk! It would seem more efficient (and logical) to deal with this extra Parity only in the actual case of adding a new disk that is larger than all other data disks (even when pre-cleared), so that the added disk space would only, on that one occasion, have the extra Parity space initialised / updated (if needed). ie. In my case, if I added my first 8TB Data disk, then there would just be a *once-off* need to initialise the extra 2TB of the Parity disk. Noting also that if my new 8TB drive had not been pre-cleared, the added Parity space is going to need to be initialised anyway!
  6. Hi Benson. Thanks, but I'm not questioning the MB/s speed of my system. I'm questioning why the Parity Check *needs* to continue beyond the Data Disk size. ie. Once all the Data drives have been Parity checked against the Parity Drive(s), I don't understand why the Parity Drive (alone) needs to continue to be read? There is no further Data Parity to calculate / check! Speed of the system is not relevant to this question. On any system, if you are performing a Parity Check before another operation (eg. upgrading a Data Drive), the additional time waiting for the larger Parity Drive to (seemingly unnecessarily) read *all* the way through just adds to the overall time for completing the whole operation.
  7. A quick question I couldn't find the answer to... On my system I have an 8TB Parity Drive, while my largest Data drive is 6TB. Perhaps not uncommon, as I'm allowing for future data drive upgrades. When doing a Parity Check it takes say 2 - 3 days to check all the required Data Drive Parity (ie. up to the 6TB mark), after which all the data drives have spun down and only the Parity Drive remains active. The Parity Drive then diligently continues reading through its remaining 2TB (which is not protecting any data drives). This continued reading of the remaining 2TB of the Parity Drive adds another 10 - 12 hours to the Parity Check completion (a rough timing check appears after this list)! So, my question is: why does a Parity Check need to continue reading the Parity Drive beyond the capacity of the largest Data Drive? Could this not be optimised, such that once the calculated Parity has been checked for all the Data Drives, the Parity Check is complete? ie. Assuming there is indeed no need to just continue on reading the Parity Drive when there is no remaining Data Drive capacity to "Parity Check".
  8. Thanks redia. But that's exactly what I did, when I said I'd "sent a support request last Saturday to ask how I get replacement keys". The copy confirmation auto-reply was from Tom's email address, so I didn't send any further email directly to him (no point adding further to his inbox). However, I've now replied to the support confirmation email, on the assumption it's just been overlooked. Perhaps he will see that email or this forum post. Fingers crossed my suspect backup flash drive hangs on! EDIT: All good... I have now heard back from Tom and sent him the new GUIDs for some replacement keys. Looks like it was just an overlooked email issue, something we all suffer from every now and then.
  9. I have a problem. My 4+ year old flash drive failed. Since I paid for a 2-pack license I have managed to get up and running again with my backup flash drive (I strongly recommend everyone buy the 2-pack!), however the second flash drive is in a sorry state, with a burn mark on the end - so I suspect its failure is imminent. My problem is that the license keys are tied to the GUIDs of my old failed and failing flash drives. I've gone out and purchased 2 brand new better quality SanDisk Cruzer flash drives, and sent a support request last Saturday to ask how I get replacement keys. However I have had no response, other than the copy confirmation of my email enquiry. Is there another process I need to follow to get support? If my second flash disk fails I will be without my unRAID! Aaargh - a very scary prospect!
  10. Hi, I've just come back after a long period running 4.2.4 without any issues. I decided to upgrade to stable 4.5.6 so I'm a little more current. However I seem to have a problem. To conserve power, I intentionally built my unRAID Server without a VGA card. This has worked fine in the past. I do have a spare VGA card that I can install if I ever have an issue that requires me to access the server from the console. However for normal use, I have no VGA card installed (reducing power consumption and heat), and I have no keyboard connected. ie. Just a bare minimum network-connected black box. However, after upgrading to 4.5.6, my server will no longer boot up if I do not have my VGA card installed. Basically, without the VGA card installed I see no disk activity, and I cannot connect over the network as the ethernet interface has presumably not been initialized at the point the boot appears to fail / freeze. With the VGA card installed all works fine, except I have a toasty hot display card unnecessarily using up power. I'm not sure how to diagnose this, as when the system refuses to start up without a VGA card, I have no way to connect to capture a log to see where the boot process has stopped. Any ideas on what to do next, or what may have changed since 4.2.4 that now requires a VGA card to be present for a successful boot? Thanks Greg
  11. Hi Flambot, fellow Kiwi here. I started out using the Seagate 500GB SATA drives, and have just made the jump to 750GB in the form of the WD7500AAKS. What triggered me was the price of the WD7500AAKS dropping below NZ$0.50/GB. Still more expensive per GB than you can now get the 500GB drives for, but considering I paid NZ$0.50+ per GB for the 6x Seagate 500GB drives I have, I'm a happy camper! Reading the various reviews, the WD7500AAKS appears to be an even better drive than the Seagate 500GB, in terms of noise level and performance. The only negative comment I saw was the relatively higher start-up current, but if you have a decent power supply this shouldn't be any major concern. I bought my first WD7500AAKS a couple of weeks ago and swapped out my parity drive (giving me only a 500GB additional data drive from the old parity), but I'm looking forward to adding the next one with a huge 750GB capacity per drive. Assuming I eventually add another 7 drives to my array, this will equate to an extra 1.75TB over what I would have had with the 500GB drives. I don't have the screen in front of me at the moment, but from memory the 750GB WD was actually more than 150% of the formatted capacity of the Seagate 500GB drives. I can check this later, unless someone else has the numbers.
  12. Thanks for the info Tom. Just to clarify though... is the re-scan button truly gone altogether, not just in regard to writable user shares? ie. If writing to the existing individual drive shares, will it also no longer be necessary to "re-scan"? And if so (which is fantastic by the way), I assume the automated process will not reset the user shares' network connections like the current "re-scan" button does. ie. So you could be watching a movie (uninterrupted) while completing a write to the unRAID server?
  13. Refer to my post over here (with reference link): http://lime-technology.com/forum/index.php?topic=571.msg3710#msg3710 ie. AHCI has full driver support for SATA features such as NCQ, instead of emulating the SATA drive as a PATA drive. What the performance benefit for unRAID is (if any) I have not tried to test, but it is possibly more significant in terms of hardware support. ie. The single fully functional AHCI driver will likely work with any AHCI-compatible controller (eg. the JMicron). I also can't answer the formatting question, although I would not have thought this would affect the existing data on the drive? Can anyone else chime in here?
  14. Thanks for the clarification Tom. I overlooked the kernel change, which of course is a significant change. I would also agree that moving to a later kernel to pick up related bug fixes is a very good move, ahead of adding new features. I would much prefer new features be added to as stable a base as possible, rather than building on wobbly foundations! John: Please have patience; we are all awaiting these new features, and I'm sure Tom will release 4.2beta when it is good and ready.
  15. Hi Joe. Just thought I would add my views on the points you raise. I can see what you mean by the version numbering and lack of a beta release, but I would add that this 4.1 release is really no different to the 4.0 final release process in this regard. ie. If you refer to the change log, there were 2 changes made between the last 4.0beta release and the subsequent release that was labelled 4.0 final (so what would have happened if there were issues introduced by these 2 'final release' changes?). Ideally a beta release process should continue until there are no reported issues from 'beta' testing, such that the 'final' stable release has only the version number changed from the last beta compile (ie. so no new issues can possibly be introduced). With regard to version numbering, I agree that a change of the minor version number is necessary for these subtle changes to an already released 4.0 final. However, my real point is that the more substantial feature enhancements of 'security' and 'writable user shares' warrant a more significant version number increment than the one used for the subtle changes that produced this 4.1 release. Personally, I would have considered the current release as more of a 4.0.1 release, with the 'security' and 'writable shares' additions still warranting the 4.1 release increment. Apologies for rambling Tom, but being a software developer myself, I tend to focus on these sorts of things.
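To make the 16-bit wrap described in posts 1 and 2 concrete, here is a minimal Python sketch. The function name is made up for illustration; only the masking reflects the 16-bit word the ATA self-test log allots to the lifetime timestamp:

    def reported_lifetime_hours(actual_hours):
        # The self-test log stores the power-on lifetime in a single
        # 16-bit word, so any value past 65535 wraps around.
        return actual_hours & 0xFFFF  # same as actual_hours % 65536

    # A drive at 73937 actual power-on hours (65536 + 8401) logs 8401,
    # matching the truncated value seen in the GUI, and consistent with
    # the raw attribute value of 73955 read a little later.
    print(reported_lifetime_hours(73937))  # -> 8401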
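For posts 5 - 7, a toy single-parity (XOR) model may help illustrate the zeroed-space reasoning. This is only a sketch with made-up disk contents, not unRAID's actual driver; the assumption is simply that space beyond a data disk's end reads back as zero:

    from functools import reduce

    # Three toy "data disks" of different sizes; the parity disk is
    # larger than any of them (4 bytes vs. a largest data disk of 2).
    data_disks = [b"\x12\x34", b"\xab", b"\x0f\x0f"]
    parity_disk_size = 4

    def byte_at(disk, offset):
        # Space beyond a data disk's end is treated as zero-filled.
        return disk[offset] if offset < len(disk) else 0

    # Parity is the XOR of the corresponding byte on every data disk.
    parity = bytes(
        reduce(lambda a, b: a ^ b, (byte_at(d, i) for d in data_disks), 0)
        for i in range(parity_disk_size)
    )
    print(parity.hex())  # -> 'b63b0000'

Every parity byte beyond the end of the largest data disk is the XOR of nothing but zeros, ie. zero. Checking that region on every Parity Check only re-confirms zeros, which is exactly the wasted work post 5 describes.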
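And a back-of-envelope check of the 10 - 12 hour figure in post 7, assuming (purely as an illustration) an average read speed of around 50 MB/s over the parity disk's outer 2TB:

    extra_bytes = 2 * 10**12   # the 2TB of parity beyond the largest data disk
    read_rate = 50 * 10**6     # bytes per second (an assumed average)
    print(extra_bytes / read_rate / 3600)  # -> ~11.1 hours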