Roancea

  1. So, just wanted to contribute back to this thread with info that might help (and offer a chance for someone to correct me!). A few years back I had virtualized unRAID via PLOP and can attest first-hand to the slooooowww boot time and other issues with that; so much so that I went back to bare metal. With a new build, I decided I'd revisit virtualizing unRAID under ESX, and ran across plopkexec (actually mentioned elsewhere on this forum for a different purpose). I can say that it drastically cuts down on the boot time (for all practical purposes, the same as bare metal). With a passed-through M1015, I now have effectively the same config as my prior bare-metal box (and even moved it over seamlessly) without the hassles I recall from the earlier all-in-one solutions. Granted, this is only a day in, so I'm still waiting for the other shoe to drop... (A rough sketch of the boot config plopkexec picks up is after this list.)
  2. I realize you can do it this way if everything you're running is in one box (and in fact it has upsides because of the speed of the internal vmxnet). However, what I'm specifically talking about is having multiple VM hosts and a separate box for the shared storage datastores - i.e., all of the VMDKs will reside on a physically separate box from the VM hosts, which then raises the issue of getting sufficient network bandwidth/IOPS from the physical datastore box to the physical VM hosts.
  3. I apologize in advance if this has already been touched on, but the topic at hand seems to overlap significantly with my current situation. I have a VM host with a variety of guests on it, including passthrough-GPU W7 boxes and an unRAID server (an all-in-one, as a kind of test platform to date). I'm looking at breaking some of these functions out for better consolidation/grouping, and one of the primary things I want to do is split my storage out to a separate box - a) for ease of storage expansion, b) to centralize most storage, both media and VM datastores, and c) so data access isn't down whenever I want to change hardware around in a more experimental passthrough situation. Which brings me to this: I'm looking at using ZFS on a separate box for VM datastores, possibly with that box still being a VM host, but hosting only unRAID and ZFS side by side. The major issue I'm weighing right now is what type of network interface to use, as I could see 1 Gbps being a bottleneck for IOPS across numerous VMs. However, I'll be honest that network infrastructure is my weakest area, and while I'm more than willing to learn, any direction would definitely be welcomed. I've heard of a few people picking up 4 Gbps Fibre Channel equipment cheap off eBay, but I'm not sure what that would entail or quite where to start. Standard teaming/aggregation of 1 Gbps NICs seems to have drawbacks and doesn't quite deliver direct bandwidth gains, especially if I were to run NFS (multipathing doesn't really work there, correct?). And 10 Gbps Ethernet appears to be flat-out unaffordable. So, I'm sure I'm muddling certain issues together, but please bear with me here, and if you have any thoughts from your own experience, I'd love to hear them. (A rough sketch of the bonding setup I'm picturing is after this list.)
  4. I've heard good things about the Hitachi 3TBs, so if the cost were roughly even I'd go that route. That being said, I found a few of those Seagates for very cheap a month or so ago, and so far they seem to be decent. We'll see how they hold up.
  5. Don't bother testing it. LSI + spindown + spinup + parity check = a lot of errors. There's something bothering me about this issue: it's taking so long to get fixed! Has anyone tried the 3.1.x kernels with the original md driver? Maybe the way the unRAID md driver accesses the disks is triggering this bug. Disappointing to hear. Guess that means sitting on 5b12 for a while longer.
  6. Joe, you're always helpful and always prompt. Thanks so much for that info; I'll keep it in mind for next time, as I'm already halfway through rebuilding and I don't think it would be a great idea to stop partway through.
  7. So, I just want to confirm what I've skimmed from assorted threads over the last few months. Reading through everything here, it doesn't appear there's really a "trust my array" procedure anymore for the 5b series - mdcmd set invalidslot 99 apparently doesn't work any more, correct? So, when I bumped a cable loose (the array was idle, not writing) and had a drive red-ball on me, please tell me if there's any other option than: unassign the drive, restart the array, stop the array, reassign the drive, and rebuild the drive. If that's the case, what would one do now under the 5 betas if, for whatever reason, you had a similar issue with more than one drive? In that situation you can't rebuild, and there doesn't seem to be a way to just tell unRAID to accept the array anymore? Am I missing something? (The old procedure as I remember it is sketched after this list, for reference.)
  8. If I recall correctly, b12a would segfault on shutdown on some systems. Perhaps that is what is happening to yours. Thanks for the very fast reply, and for all your work on this. Is there any solution to this on b12a other than simply upgrading to a newer beta (assuming that is a fix)? I'm holding off on moving up to b14 because I'm using an LSI SAS controller and read that there were issues with that, so until a new release comes out I believe I'm stuck on 12a.
  9. So, I was experimenting with these under b12a using the tgz in the extra folder. They seem to work well so far: the vmxnet3 driver appears to work properly and the open-vm-tools load on startup. However, it's stated that a custom script is needed for a clean shutdown - supposedly so I could have the VM host initiate the call, correct? When I tested this by executing a guest shutdown from the vSphere client, the unRAID web GUI shut down and I believe the array was stopped, but the guest VM never actually powers off. Am I missing something here that should be evident? Any help would be great, as this is one piece of getting my VM host onto automated UPS shutdown. Thanks! (A sketch of the kind of shutdown script I mean is after this list.)
  10. I was under the impression that there were some weird permission issues with using NFS shares concurrently, but maybe that was just me reading something out of context. In any case, should SMB really be giving me only 15 MB/s?
  11. I'm sure I'm just making a stupid mistake, but hopefully someone can catch it for me. I'm transferring from one unRAID server to another; let's say Box 1 has the media I want to copy, and Box 2 is where it needs to go (Box 2 is a fresh install, no parity at the moment). If on Box 2 I mount the share from Box 1 --> mount -t cifs -o username=guest //192.168.1.x/Media /temp and then go into mc to start pushing stuff from Box 1 to Box 2, it shows up as a ~15-20 MB/s transfer speed. However, if I copy from Box 1 to a W7 machine, I get 100+ MB/s read speed, and a copy from W7 to Box 2 gives me ~70 MB/s write speed. So at least I know that neither box has defaulted down to a 100 Mbps link (and ifconfig accurately shows both as 1000 Mbps links). Any idea what's causing the slowdown when I do a direct copy between the boxes? Am I using a wrong parameter for my mount? Is there an additional setting I need to configure? Is mc somehow causing a slow transfer? Thanks for any help. edit: worst comes to worst, I can just use the W7 box as an intermediary to copy between the two, but I'd still like to know why I'm having this issue. (A couple of things worth trying are sketched after this list.)
  12. This mobo is incredibly interesting.... please link to your new thread when you start it. I love that it has the SAS2008 onboard. That would free me up to pass through more PCI devices to my guests. Hmmmm.... you've made me a little jealous. Yeah, I had definitely looked at that board as well. The one thing that occurred to me, though (and maybe I'm wrong), is that if you decided to go the Tyan route, wouldn't it be cheaper, and possibly have some advantages, to go with the sister model without the integrated SAS ports? Assuming you weren't in some odd position of getting the SAS model for significantly less than the normal price difference between it and the non-SAS version, buying the non-SAS board + a SAS card for $60-ish would net you some spare cash. The non-SAS version has an extra x8 slot that isn't dedicated to the integrated controller, so you haven't lost or gained anything there. And you're dealing with actual SAS connectors on the card, so if you did want to break out to an expander it would probably be easier, not to mention cleaner than having 14 SATA ports in that one corner. That being said, it's still a very attractive option, and I'm (very) slightly kicking myself for buying my X9SCM - not because it's not a fantastic board, but because I'm having to screw around to fit an x16-length card in one of the x8 slots (hooray for odd solutions).
  13. So, I just upgraded multiple drives in a 5-disk array to 3TB drives. I ran parity checks and spot-checked the media between each step to make sure there were no issues, and all of the added disks passed preclears. This finished a day or so ago and everything appeared to be functioning OK. I went to access the array a short while ago and came across something weird - the array and the web page were accessible and didn't appear to throw any errors, yet part of the media on my share was oddly missing. I checked the page and 2 drives weren't spun up - I spun them up, checked the individual disks, and couldn't see any files on either. I tried to telnet in and putty immediately crashed out before even pulling up a prompt. I restarted the server and came back to the tail end of a failed startup, with SMART statuses still on screen showing OK. I restarted again and the array came up fine, all media accessible as normal. The only thing that caught my eye was that the flash drive showed 2.5 GB free out of 4. I checked on it and there's a 1.6 GB log file there - I've never seen anything like that; any ideas? I'm running a parity check on it again, but now I'm concerned. And looking through a 1.6 GB log file is proving to be a bit of a pain. (Some commands for digging through a log that size are after this list.)
  14. Any overwhelming reason you didn't break the Usenet apps out to a separate VM? For me, that's definitely one of the first things I'm doing with my new ESXi transition. Am I missing something?
  15. Seconded. Just ordered one from them this week and it was here in 2 days and packed extremely well. Flashed with no real trouble, so would definitely recommend them.
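
A rough sketch related to post 1: as I understand it, plopkexec scans the attached drives for a boot loader config and then kexec-loads the kernel it finds, so the stock syslinux.cfg on the unRAID flash drive is all it needs. Something along these lines - the exact file location and menu entries vary between unRAID releases, so treat this as a sketch rather than my exact file:

    # syslinux.cfg on the unRAID flash drive (sketch)
    default menu.c32
    prompt 0
    timeout 50
    label unRAID OS
      menu default
      kernel /bzimage          # the unRAID kernel that plopkexec will kexec
      append initrd=/bzroot    # the unRAID root filesystem image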
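
A rough sketch of the NIC bonding mentioned in post 3, mostly to illustrate its limits: Linux bonding in 802.3ad (LACP) mode hashes traffic per flow, so a single NFS/TCP stream from one VM host to the storage box still tops out at one 1 Gbps link, and aggregation mainly helps total throughput across several clients. The interface names and address are placeholders, and the switch would need a matching LACP group:

    # load the bonding driver in LACP mode and enslave two gigabit NICs
    modprobe bonding mode=802.3ad miimon=100
    ifconfig bond0 192.168.1.50 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1
    # each TCP flow hashes to a single member link, so one NFS mount from one
    # host still sees ~1 Gbps; multiple clients can use both links in parallel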
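
For reference on post 7, this is the old "trust my array" sequence as I remember it from the 4.x days - written from memory, so treat it as a sketch only, and as noted above it apparently no longer works on the 5.0 betas:

    # old 4.x-era procedure, from memory -- do not run blindly on a 5.x beta
    /root/mdcmd stop                 # stop the array first
    /root/mdcmd set invalidslot 99   # mark no slot as invalid so parity is trusted as-is
    # ...then start the array again from the web GUI; no rebuild should kick off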
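
On the clean-shutdown issue in post 9, my understanding is that open-vm-tools invokes a power-off script when the host requests a guest shutdown, and that script has to both stop the array and actually power the machine off. A sketch of the kind of script I mean - the web-GUI stop URL and the sleep value are assumptions on my part, not something I've verified on b12a:

    #!/bin/bash
    # called by open-vm-tools when the ESXi host asks the guest to power off;
    # stop the unRAID array via the web GUI (URL/parameters assumed), then halt
    wget -q -O /dev/null "http://localhost/update.htm?cmdStop=Stop"
    sleep 30          # give the array time to unmount cleanly
    /sbin/poweroff    # actually power the VM off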
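
Following up on the slow box-to-box copy in post 11, two things worth trying (neither confirmed as the actual cause): remount the CIFS share with explicitly larger read/write sizes, and take mc out of the picture by copying with rsync so you get a throughput readout. The rsize/wsize values and destination path below are just examples:

    # remount the Box 1 share with bigger rsize/wsize
    umount /temp
    mount -t cifs -o username=guest,rsize=130048,wsize=130048 //192.168.1.x/Media /temp

    # copy with rsync instead of mc to see the real transfer rate
    rsync -av --progress /temp/ /mnt/disk1/Media/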
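
And for the 1.6 GB log file in post 13, a few ways to poke at it without opening the whole thing at once; the path and filename here are placeholders for wherever the file actually sits on the flash drive:

    # look at the end of the file, where the failed shutdown would have logged
    tail -n 200 /boot/logs/syslog-big.txt

    # pull out just the error/warning lines and page through them
    grep -iE "error|warn|fail" /boot/logs/syslog-big.txt | less

    # or page through the whole thing without loading it all into memory
    less /boot/logs/syslog-big.txt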