  1. No, actually I meant just what I said. An extension of the current approach, "Easily add and replace disks", would be "Easily add, replace, and remove disks". These are the main operations a user would expect to be able to do with something designed around easy data management, and it stands to reason all of them should be (roughly) equally easy.
  2. It could also be argued that it's somewhat expected. I'll grant you removing a disk is far less common than adding one, but unRAID's big callout is how easily disks can be added, and that you don't have to faff about with matched drives, pools, etc. like you do with something like ZFS. By that, you'd think an obvious extension would be easily adding, removing, and replacing disks.
  3. This. This is what I was getting at. unRAID already does most of the stuff people here need, but if Lime Tech wants to target more of the mass market, there's a bunch of hand-holding and graceful exit stuff that has to be in place first. This topic is a prime example of one of those things.
  4. My apologies; I should have been clearer. I was talking about replacing a working drive with another (bigger) working drive. Naturally, replacing a failed disk means the array is running degraded at some point, but that should never need to be the case when no disk has actually failed. And yes, you could say "run dual parity", and that would protect you here, but the array would still be partially degraded, and on principle that's stupid, because it shouldn't have to happen. It's also not always viable: some people can't (out of physical slots, for example) or it wouldn't make sense (a 4-disk array). This is only true if you haven't written anything to the array in the meantime, because the new data wouldn't be reflected on the new drive, so you'd end up having to run a check afterwards (which you should anyway, but again, if this was handled more robustly, you shouldn't have to).
  5. I think Yippy's point is still valid though; the array shouldn't have to be put in a degraded (or partially so, with dual parity) state, simply to replace a drive. I think a "mirror commands going to parity to new drive as well, until new drive matches parity, then swap, and mark now extra parity as unneeded" sounds like the logical solution here. I'd imagine something similar already happens when adding a second parity drive, so surely some of the work is already done.
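  The "mirror writes to the new drive while rebuilding it from parity, then swap" idea from that post can be sketched roughly as below. This is a toy model, not unRAID code: it uses single XOR parity, made-up names, and in-memory lists of integer blocks purely to show why the old disk never has to leave the array.

```python
# Hypothetical sketch of a non-degrading drive replacement.
# The old disk stays online: each block of the new disk is computed
# from parity + peer disks (the normal rebuild math), and only when
# the copy is complete does the new disk atomically take over.

def xor_blocks(blocks):
    out = 0
    for b in blocks:
        out ^= b
    return out

class ToyArray:
    def __init__(self, disks):
        self.disks = disks  # list of data disks, each a list of int "blocks"
        self.parity = [xor_blocks(col) for col in zip(*disks)]

    def replace_live(self, idx, new_disk):
        """Rebuild disk `idx` onto `new_disk` while staying fully redundant."""
        for blk in range(len(self.disks[idx])):
            peers = [d[blk] for i, d in enumerate(self.disks) if i != idx]
            new_disk[blk] = self.parity[blk] ^ xor_blocks(peers)
        self.disks[idx] = new_disk  # atomic swap; old disk becomes a spare

    def write(self, disk_idx, blk, value, shadow=None):
        """Live write; mirrored to `shadow` if a replacement is in flight."""
        old = self.disks[disk_idx][blk]
        self.parity[blk] ^= old ^ value  # incremental parity update
        self.disks[disk_idx][blk] = value
        if shadow is not None:
            shadow[blk] = value
```

  The key property is that at no point between start and swap is the array depending on fewer devices than before, which is exactly the complaint in the post.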
  6. Bumping for visibility. New forums are lovely, but the theme does make my eyes bleed slightly.
  7. Valid points. And, to be honest, once I can move to somewhere big enough that I can shove that stuff in a separate room some place and leave a dedicated KVM with it, then I won't care much about IPMI anyway. Either way, it'll be an interesting couple of months, seeing how Threadripper/EPYC (Big Ryzen?) turns out.
  8. Most (if not all) Threadripper boards won't have server features though, namely IPMI. Have you ever used an IPMI solution? If you haven't, try it out. You'll never go back. That alone, never mind the longer warranty (though that's pretty damn nice as well), is worth the extra cost for me.
  9. I have a very similar issue every time something gets copied from user/downloads to user/media, both set to use cache. For some reason it isn't just an index change, it's a complete copy, and performance goes to hell when this happens.
  10. Supermicro? No-name board? Oh my sweet Summer child...
  11. Having fans ramp to required speed based on component temperature is a far more elegant solution than halting operations because of overheating.
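  The "ramp fans with component temperature instead of halting on overheat" point boils down to a fan curve. A minimal sketch, with entirely made-up breakpoints, might look like:

```python
# Toy fan-curve mapper: interpolate PWM duty (%) between
# (temperature C, duty %) breakpoints instead of using a hard cutoff.
# The curve values here are illustrative, not from any real board.

def fan_duty(temp_c, curve=((30, 20), (50, 40), (70, 100))):
    """Return fan duty for temp_c, linearly interpolated along curve."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]  # pegged at max above the last breakpoint
```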
  12. People have been asking for a while, but Lime Tech generally doesn't publicly disclose anything forward-looking, and I can understand why; being such a small team, they'd be opening up a rather large can of worms (read: user disappointment) over not getting feature X deployed on time, and the like. It would be lovely to see, but I doubt it. For what it's worth, I'd love to see more native monitoring tools in unRAID. I know there are Docker containers and plugins that can be used, but really, they should be native: container/VM usage of CPU, RAM, NICs, disks, etc. I'd also love some QoL stuff: not having to put the array in a degraded state to replace a drive, and similar for parity, for example.
  13. Would also be nice to see current core/thread assignments somewhere. Since you're playing with applicable things anyway...
  14. Wouldn't really work; he'd have to move the most frequently watched stuff manually, and might not have an idea what that's likely to be ahead of time. I confess, I thought the unRAID cache would do this before I used it. I'm not sure how feasible a feature it is, but it would be nice to be able to section off 200-300GB of the cache drive/pool to store frequently used stuff. One use-case that does come to mind: Steam games. Yes, you can move them manually, but really, that's a pain in the nuts. (I also know games don't benefit much from SSD vs. HDD, but when have PC users shied away from something because of diminishing returns? )
Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.