limetech

Administrators

  • Content count: 6228
  • Joined
  • Last visited

Community Reputation: 16 Good

4 Followers

About limetech

  • Rank: Advanced Member
  1. Actually, a drive rebuild puts hardly any "stress" on the devices, because it's a sequential operation from beginning to end. It might put extra stress on your power supply, since it requires all drives to be spun up and seeking (though on drives not being accessed for any other purpose, these are tiny track-to-track seeks). However, if your PSU is not up to the task of a parity sync/rebuild, it is way under-spec'ed. Lesson: when it comes to the PSU, always get the highest quality you can find and more capacity than you think you need.
  2. Like any computer system, it can be more or less "secure" - it depends on what you want secured. Note: please only use the latest release. Note 2: the next minor release (6.4) includes https support for accessing the server.
  3. To clarify a bit on robw83's answers: 1. Yeah, that's right. 2. True hot-swap, where you can yank the old drive, plug in the new drive, and have it auto-rebuild while the server remains fully operational, is currently not supported. However, "warm swap" - where you Stop the array first, swap the drive without powering off, refresh the browser and assign the new drive, then Start the array (and start the rebuild) - is supported if your h/w supports it (a quick way to confirm the swapped drive was detected is sketched below). 3. Not sure what is meant by "too much stress" - not sure I agree with that.
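     As a rough illustration of that "warm swap" flow (not from the original post; /dev/sdX is a placeholder for the replacement drive), a couple of commands to confirm the kernel actually sees the new drive before you assign it in the webGui:

       # list block devices with model and serial so you can pick out the new drive
       lsblk -o NAME,SIZE,MODEL,SERIAL
       # check recent kernel messages for the drive detection events
       dmesg | tail -n 20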
  4. Do you have other support topics that describe these lockups?
  5. Well, at least it doesn't appear to be destructive. We have sent a report to the reiserfsprogs maintainer about this; so far no response. We are preparing a 6.3.3 release which will have this program reverted back to the previous version.
  6. Thank you - getting ready to get a 6.3.3 out there and will include this.
  7. Yeah, this is probably not good...
  8. Update: we have a fix for this, but the code changes are significant enough that we are going to implement it in the 6.4 series.
  9. Working on getting samples. Won't be until 6.4 series.
  10. Sure we'll include this driver in the next kernel build.
  11. Guys, we're on thin ice here with this topic. You are only permitted to create an OS X/macOS VM in unRAID when running on Apple hardware! To do otherwise breaks the terms of Apple's EULA, and the last thing we need is a haunting by the ghost of Steve Jobs. Please don't include the "osk" key or anything about using non-Apple h/w. Cheers, Tom
  12. Nice find, johnnie.black! Damn, I should have seen that... That particular model, HUH728080AL4200, is indeed a 4K-logical-sector-size device: https://www.hgst.com/sites/default/files/resources/Ultrastar-He8-DS.pdf ...and right, unRAID OS does not properly handle that at present (a quick way to check a drive's logical sector size is shown below).
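      For anyone who wants to check what a drive reports, here is a minimal sketch (not part of the original post; /dev/sdX is a placeholder for the drive in question):

        # logical sector size in bytes: 4096 on a 4Kn drive, 512 on 512n/512e drives
        blockdev --getss /dev/sdX
        # physical sector size in bytes
        blockdev --getpbsz /dev/sdX
        # the same information straight from sysfs
        cat /sys/block/sdX/queue/logical_block_size /sys/block/sdX/queue/physical_block_size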
  13. That error is meaningless and will be eliminated in the next release.
  14. The message "invalid partitions(s)" means that after using 'sgdisk' to write a GPT partition, upon read-back the exact GPT partition layout, including the contents of the protective MBR, is not the same as what was written. Usually this would happen due to a disk error, but if that had happened there would have been a syslog entry for it as well. Indeed puzzling.

      There is one odd message in the syslog:

        Mar 13 14:35:30 Tower kernel: BTRFS: device fsid 78f1f644-dea0-4e1d-a352-abbf2b21afc6 devid 1 transid 10 /dev/sde

      This happens near the start of boot-up, when the btrfs file system is loading. There is a subsystem called "blkid" that btrfs uses to keep a small in-memory database of the overall system btrfs configuration. Apparently this particular device was once formatted with btrfs as a "whole device", that is, not in a partition (there are no partitions). All I can think might be happening is that unRAID tries to write a GPT, but then blkid says, "hey, this is supposed to be btrfs", and goes and restores something in the MBR so that it continues to be recognized. This is a rather common "issue" with btrfs (I call it an issue): once a device/partition has been formatted with btrfs, it's difficult to purge it off that device/partition.

      I suggest trying something like this:

        wipefs -a /dev/sde

      See if that works. If not, something more drastic:

        dd if=/dev/zero of=/dev/sde bs=1M count=1

      But be very careful with the above commands - make absolutely sure that "/dev/sde" is indeed the problematic device (see the verification sketch below).
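      Before running either command, it is worth double-checking the target device. A minimal verification sketch (not from the original post; adjust the device name to match your system):

        # confirm model, serial and size so /dev/sde really is the intended disk
        lsblk -o NAME,SIZE,MODEL,SERIAL /dev/sde
        # list the filesystem signatures wipefs would erase, without touching anything (-n = no-act)
        wipefs -n /dev/sde
        # show what blkid currently reports for the device
        blkid /dev/sde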
Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.