bman

  1. Definitely overkill. Grab a used Supermicro 24-bay case from eBay for a few hundred dollars and start from there. SAS expander backplanes are okay, but you will eventually saturate their 6Gbps or 12Gbps uplink as you add more drives. Normal unRAID use wouldn't show any problem, but parity checks will stretch from hours to days as you add more drives and no longer have full throughput available to each. I stick with SATA backplanes and multiple controllers to ensure full rates to each drive, just so I don't have 5-day-long parity checks like I did with my first PCI-based system. If you really feel the need to cram a whack ton of physical drives into 4RU, you may wish to visit https://www.backuppods.com/ to grab a chassis from the folks who make the Backblaze pods, and go from there. Keep in mind, though, that if unRAID is your OS choice, your limit for data storage is basically 30 physical drives (28 data and 2 parity, not counting cache drives). Using more than that is likely more troublesome than helpful.
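     To put rough numbers to that bottleneck, here's a back-of-the-napkin sketch (the single 12Gbps uplink, the 80% overhead factor, and 200MB/s per drive are my own assumptions, not measurements):
        # rough math on a shared SAS expander uplink
        link=$(( 12000 * 80 / 100 / 8 ))   # usable uplink in MB/s (~1200)
        drives=20
        echo "full speed for at most $(( link / 200 )) drives"
        echo "$drives drives get ~$(( link / drives )) MB/s each during a parity check"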
  2. Correct. So in essence, you will be running your FreeNAS as normal, and an unRAID NAS at the same time. Once you've got them both working and networked together, mount your FreeNAS volume on your unRAID system and rsync all your data across to your unRAID share(s) as required. FreeNAS does not support the XFS (or ReiserFS) volumes used by unRAID, and conversely unRAID does not support the ZFS volumes used with FreeNAS, at least by default. You can search for ways to add ZFS support to unRAID, but it requires a lot of DIY. For these reasons, build both, copy from one to the other, then dismantle the unwanted server and reuse its drives.
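     As a sketch of that copy step (the hostname, export path, and share name here are examples only; substitute your own):
        # on the unRAID box: mount the FreeNAS NFS export, then rsync into a share
        mkdir -p /mnt/freenas
        mount -t nfs freenas.local:/mnt/tank/media /mnt/freenas
        rsync -avh --progress /mnt/freenas/ /mnt/user/media/
        umount /mnt/freenas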
  3. I also agree with bjp999 about the drive cages. I gravitate toward the Supermicro 5-in-3, and have plenty of them in service, but for me they are not the ultimate in value. This is because I have built my unRAID systems out to more than 15 drives each, which is about the maximum most tower-style cases can hold. The extra money I spent on the drive cages could have gone toward a server chassis that houses 24 drives to begin with. Now when I do a build I go straight for a 24-bay hot-swap server chassis, usually used from eBay. It leaves lots of room to expand and ends up being a bit cheaper, and a lot nicer, than cobbling together an external 5-in-3 bay or two just to add more drives to a server.
  4. What unRAID does for NAS that no others do is offer ease of capacity expansion. You can add any size drive you want at any time (provided that your parity drive is at least as large as any of your data drives). This negates the need to match a bunch of identical drives when what you're looking for is oodles of storage, and so saves cost when upgrade time comes. The downside is that (generally) you're using one or two disks at a time with unRAID, which means your data flow is limited by the max transfer rate of a single drive (or multiple drives if your files are spread across them evenly). FreeNAS and others give you more bandwidth by striping multiple data paths together for higher throughput, at the cost of requiring same-size disks for each member, and so greater cost and lower upgrade flexibility. So in making the decision between the two (specifically, because you mentioned them both) you need to consider whether data transfer rates will be a limiting factor for your use case, and how you will solve that issue if so. Beyond that, and perhaps more importantly, is how concerned you are for the safety of your data. When you get into large volumes (above 10 or 12TB) you really need to consider traditional RAID level limitations (RAID5 or RAID6 or even ZFS variations on those themes), which are only really good up to that volume size (http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/, among other similar articles all over the Internet). Beyond that you are tempting fate to the point where you may win the lottery sooner than you'll escape data loss with RAID5 or RAID6. With unRAID the risks are nowhere near as high, because each data disk is its own separate file system that is simply unified for ease of file management, and XOR'd onto one or two parity drives for protection. The end result is that, as long as you keep on top of drive faults and other hardware failures, your data are much less likely to fall off the edge of the earth. If catastrophe should ensue and you lose more than one drive at the same time with unRAID, you still have all your other drives' worth of data intact, awaiting your enjoyment. In my own experience I have only lost data from my unRAID arrays because I hit the DELETE key when I shouldn't have. In nearly 10 years, I'd say that's a solid recommendation. I cannot say the same for any of the RAID5 or RAID6 arrays I have used over the years, unfortunately. I never trust them without full backups of all data.
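     If the XOR parity idea is new to you, this toy sketch shows the principle with three single-byte "drives" (the values are arbitrary):
        d1=170; d2=85; d3=60                # pretend bytes living on three data drives
        parity=$(( d1 ^ d2 ^ d3 ))          # what the parity drive stores
        rebuilt=$(( parity ^ d1 ^ d3 ))     # drive 2 died? XOR the survivors with parity
        echo "lost: $d2  rebuilt: $rebuilt" # prints 85 for both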
  5. Most drives use fewer than 10 watts each, so 200W (that's under 20A on the +12V rail) total during normal operation for 20 HDDs. Spin-up is an exception where a drive can draw a higher amperage than at any other time. So you could concern yourself with ensuring you have enough amperage on the 12V rail to accommodate 20 drives spinning up (40A, let's say) but even then your current power supply is plenty big. I had 20 drives on a 450W power supply for years (it was a decent Seasonic model) and the only thing I noticed when I swapped it out for a 650W model was that my 20 HDDs reached full RPM a little quicker. Your mileage may vary, but in my experience, PSU quality is much more important than its power rating.
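     The arithmetic behind those figures, as a sketch (the 2A spin-up draw is a typical spec-sheet number, not a guarantee):
        drives=20
        watts=10        # generous per-drive draw during normal operation
        spinup_amps=2   # rough +12V draw per drive while spinning up
        echo "steady state: $(( drives * watts ))W, ~$(( drives * watts / 12 ))A on +12V"
        echo "worst-case spin-up: $(( drives * spinup_amps ))A on +12V"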
  6. Never had much luck with S3 sleep no matter which OS or board I've tried. Weird things happen, at least to me. That said, I have had great success with Asus, MSI, and Supermicro boards for unRAID. My Gigabyte board for my main system has a flaky BIOS, and that was enough for me to steer clear of the brand for servers. My last few Asus systems were beyond 11 years old and too painfully slow to use daily; otherwise they still worked fine when I recycled them. My current unRAID servers both have 10+ year old Supermicro workstation/server motherboards in them (because I could get them for free) and they're solid. Admittedly a 12-year-old Supermicro from my first server died last week, prompting a swap to a newer (by a year) one last night, but I'd say Supermicro is a winner too, in general.
  7. +1 on having no flaky drives in your array. We're not talking pastry. You can get a lot of space in your chassis using 8TB or 10TB drives, and the sooner you start with the big drives, the happier you'll be down the road as you replace worn-out drives in your system. For most of us there's no way to go but up in storage capacity as the years go by. If you can swing it, get an 8TB drive with a 3- or 5-year warranty, as they generally last longer than the cheap 8TB externals and similar. Sometimes you get lucky (I have a WD Green drive that's 22,000 hours past its expected lifespan) but mostly you end up replacing lower-quality drives more often. The total cost of ownership either way is hard to predict, but higher-grade drives should last long enough to be worth it, if for no other reason than that you're replacing drives every 7 years rather than every 3.
  8. In my own situation, what I thought was good to start with just kept getting bigger over the years, so when I see you can get yourself another 4TB of space using one fewer drive for only $100 more, I'd +1 that idea all day! I suspect 3.5-inch HDDs won't get much larger than 10TB in the future, so starting with larger drives means you won't run out of physical space in your case sooner than you thought. It's amazing how many years pass by after you start relying on unRAID and it just keeps on working like it was new. To that point, I like the Samsung 850 Pro SSD over the EVO because they have greater longevity (and a 10-year warranty). My ten-year unRAID milestone is not far off and I wish I could say any of my drives lasted that long. One more thing to consider is that as your number of physical drives grows, you need bandwidth to perform parity checks. Something like the Supermicro AOC-SAS2LP-MV8 8-port controller card https://www.newegg.com/Product/Product.aspx?Item=N82E16816101792 needs 8 PCIe lanes to offer full read & write bandwidth to all connected drives. The motherboard you have listed should have plenty with its two x8 slots (the third only looks like an x8 but is actually an x4) as you go forward, for up to 22 physical drives (using the built-in SATA ports and two of the above cards), but this is something to note if you are looking at other motherboards with perhaps fewer lanes available. I'd say your list in general is perfect for a start. I suspect the really cheap networking gear may cause issues after a couple of years, but the price is compelling. You just need to be ready to test with newer or better pieces if/when you start experiencing connectivity troubles. Best of luck with your build.
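     To show why the lane count matters, a quick sketch (assuming roughly 500MB/s usable per PCIe 2.0 lane, which is the generation the SAS2LP runs at):
        per_lane=500   # rough usable MB/s per PCIe 2.0 lane
        ports=8        # drives hanging off one SAS2LP
        for lanes in 8 4; do
          echo "x$lanes slot: $(( lanes * per_lane / ports )) MB/s per drive"
        done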
  9. This is (to me at least) another bit of confusing nomenclature similar to the "Restore" button from unRAID 4.5 and below. What you're cloning is settings. Perhaps the button could be called "Clone Settings to..." What I'd like is a "Clone Destination" button that rsyncs the selected share to any mounted destination (say from my main server to my backup server). That's what I immediately think of when I see the button now. $0.02
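     In the meantime, something like this one-liner (a sketch; the server name and share paths are made up) does what I'm picturing:
        # push one share from the main server to a backup server over SSH;
        # --delete makes the destination an exact mirror, so drop it if you only want additions
        rsync -avh --delete /mnt/user/Movies/ root@backupserver:/mnt/user/Movies/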
  10. So if, in my messing around I DID do the above and my UnRAID system no longer works properly... how do I edit the XML from the command line? How do I find it to edit it?
  11. Ha! I had no idea, actually. Did not suspect that auto-complete was part of the package. Thanks for pointing that out.
  12. unRAID's purpose is to be a NAS. Yup, and while I empathize (somewhat) with your plight around DLNA, it is a NAS that I need to safely store and centralize my movie collection (so my four young kids don't destroy my Blu-ray and DVD discs). Said NAS has to be large enough and quick enough, but most importantly stable and safe enough, that I don't have to worry about it, and that's what unRAID is first and foremost. My WDTV Live works well enough playing straight from my two unRAID boxes that none of us in the family is crying for any upgrades, wife included. (My personal feeling on DLNA is that it's kludgy and glitch-prone; I haven't had great success with it yet. I therefore feel it is NOT necessarily the best thing since sliced bread, and I have doubts about it becoming the de facto standard for home theatre appliances the world over.)
  13. I arrived at a preclear-on-Windows solution in a similar but slightly modified way:
     I downloaded an unRAID VM from this thread: http://lime-technology.com/forum/index.php?PHPSESSID=38d4236c2b985151e3416c4f654b00a9&topic=6260.15
     I put a new 500GB WD Scorpio Black in my Windows machine's hot-swap bay (http://www.icydock.com/goods.php?id=141). Win7's Disk Management snap-in told me it was Disk3.
     Running VirtualBox Manager as administrator was the next step, then a Command Prompt, again as administrator:
        cd "c:\Program Files (x86)\Oracle\VirtualBox"
        VBoxManage internalcommands createrawvmdk -filename "h:\VBoxes\newrawhd.vmdk" -rawdisk "\\.\PhysicalDrive3"
     Then in the VirtualBox Manager GUI, add a machine by pointing to the previously downloaded UnRAID.vmdk. Choose the VM, go to Settings -> Storage, and attach the newrawhd.vmdk file to the hard drive controller. Boot up the unRAID VM and find its IP address with ifconfig eth0. Use that address to copy the latest preclear.sh to \\192.168.56.102\flash (which is my unRAID VM's USB drive root folder). Then a
        preclear.sh -c 5 /dev/hdb
     was all it took to get my Windows machine to perform a preclear on a new disk without me having to mess around with my slightly-more-difficult-to-add-drives-to proper unRAID boxes. I can't say the 47MB/s rate pleases me as much as 100+ MB/s would, but it was easy enough that I'd do it again in a heartbeat versus wrestling with my over-stuffed unRAID boxes unless I had to. I didn't try USB or FireWire drive connectivity because I know they'd both be slower than a proper SATA connection. You'll also note I changed preclear_disk.sh to preclear.sh (by renaming the file). I'm not the world's best typist, and I need no better reason to truncate text from script names.
  14. I experienced a similar problem when trying to follow the "remove drive" procedure here: http://lime-technology.com/wiki/index.php/FAQ_remove_drive I'm not sure why, but after I removed disk13, the webGUI complained that there were "no drives" in the array, even though I only removed one. When I performed an 'initconfig' and rebooted, sure enough, there were no drives in my array! After my initial panic subsided a bit, I realized there was a super.old file on my flash drive that was my configuration before I tried to remove disk13. After renaming super.old to super.dat and rebooting the server, all was well.
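     For anyone else in the same spot, the fix amounted to this (a sketch; on my servers super.dat lives in /boot/config, but verify the path on yours before touching anything):
        # with the array stopped, put the previous disk assignments back, then reboot
        cp /boot/config/super.old /boot/config/super.dat
        reboot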
  15. I'm not 100% sure what you're asking. Are you asking if you can recover your files, or are you asking if you can configure a new flash drive to work the same way as your old one did, with shares and files all intact? I'll answer the former question, of whether you can recover your files: Yes. Each disk is its own self-contained ReiserFS volume, and will be accessible under any operating system (unRAID is one example) that supports ReiserFS. You do not need to "set drives" and configure an array. All you need to do is mount the file system from each of your drives, and presumably mount a new "spare" drive as well, then proceed to copy your files to a new location. If you're asking about rebuilding a new flash drive while keeping your old array intact, I suspect that is also relatively easy to do, but I don't have the expertise to say what would work with certainty.
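     If it helps, the recovery I'm describing looks roughly like this from any Linux box with ReiserFS support (a sketch; device names will differ on your system, and mounting read-only keeps the source disks safe):
        mkdir -p /mnt/rescue /mnt/spare
        mount -t reiserfs -o ro /dev/sdb1 /mnt/rescue   # one of the old data disks
        mount /dev/sdc1 /mnt/spare                      # the new spare drive
        rsync -avh /mnt/rescue/ /mnt/spare/
        umount /mnt/rescue /mnt/spare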