praeses

Members

  • Posts: 55

Converted

  • Gender: Undisclosed
  • Personal Text: ... for Life

praeses's Achievements

  • Rank: Rookie (2/14)
  • Reputation: 1

  1. I work with 24-bay hotswap chassis regularly, but for unRAID, where I don't find I need them, I prefer the Rosewill 15-bay chassis. With a single data cable and power cable per drive it's easier to isolate a problem, and without expanders/midplanes there are fewer catastrophic points of failure. I'd recommend just unscrewing the middle fan bracket (I think 4 screws on the sides?) and tossing it out if you don't need it for cooling; it opens up a ton of room in the case. Then just pencil the serial numbers of the drives on top of the 3 drive racks along with the drive number (either 1-5 on each rack or 1-15 overall).
  2. The common route on a budget seems to be a used Dell PERC H310 or IBM M1015, re-flashed into IT mode and used with SAS-to-SATA breakout cables. There is also the wiki. You can get fancier with SAS/SATA backplanes, hotswap bays, etc., but that will depend on your requirements. Personally I like to keep things simple: one connector per drive for both data and power. I would suggest checking out other builds in the forum to see what aligns with your requirements, especially form factor and budget: https://forums.unraid.net/forum/35-unraid-compulsive-design/ Many folks are happy buying used off-the-shelf servers as well, since most of the cabling etc. is already in place. A few questions to start with (at least to ask yourself): how much are you looking to spend, are you looking to re-use any existing hardware, what form factor (size/dimensions), are hotswap bays important, is ECC important, single or dual parity, cache drive, and is faster-than-gigabit (~125 MByte/sec) networking required, as that is much more demanding on performance? (There's some rough bandwidth math in the sketch after this list.)
  3. One thing to keep in mind with larger arrays, and specifically large arrays of large drives, is how long a parity check will take to complete and whether that will get in the way of your business. If running over the weekend (36 hours) is fine, then starting with fewer, larger drives and sufficient I/O makes sense in the long term, as it is easier to upgrade when, as you mentioned, you add 4TB regularly. If you have a tighter window (12 hours, for example), go with more, smaller drives and more PCIe lanes/controllers. I'm bringing this up specifically because you mentioned the word "office". You will typically run into PCIe/controller bottlenecks far before CPU. For comparison, I have an AMD Sempron 3850 (quad-core 1.3GHz Kabini, Atom-class) that does fine at ~60% utilization during a parity check averaging 115MB/sec across 13 data + 1 parity + 2 cache drives (there is a slight PCIe bottleneck for the first third of the parity check, where the drives could go slightly faster, but it's not worth upgrading). A file-server-only box really doesn't need much CPU power for single parity, just a platform with sufficient I/O for the number of drives you plan to run concurrently; 8x12TB drives is a different beast than 24x4TB drives. (See the rough numbers in the sketch after this list.)
  4. I usually go with one of the PCIe 1x Matrox G550 cards; there are open-source drivers, and they have been around forever. I don't feel like re-inventing the wheel when it comes to console access. They're often integrated onto server motherboards, and I typically pick them up for $20-30. They also come in a low-profile form factor. There may be versions with fans, but every one I've gotten to date has been fanless.
  5. The main reason is reliability. The Atom-based platforms have been hit with issues, and folks have lost faith; to compound this, Intel has not handled these matters well. Additionally, the cost per performance has never been as good in practice as folks hoped or expected, and there are certain tasks they struggle with. Early on, their benefits came from having many threads and reasonable I/O, but since they're based on older platforms they are now often surpassed by cheap, ubiquitous desktop/workstation boards and processors. SMT being available on a broad range of processors, advancements in chipsets and RAM reducing platform power consumption considerably, and other improvements make Denverton and company a hard sell. Being bound to a motherboard's controller has its own issues as well, especially considering the value of the cheap, compatible, IT-flashed PERC controllers. They've always looked neat and seem like they should be decent, but whenever I've worked with them, or thought about buying one personally, the math has never added up.
  6. Scroll through this list for a visual idea: http://www.satacables.com/html/sata-pci-brackets.html I've seen ones with 6 ports before in a full-height, single-width bracket. When it comes to storage, my preference is to stay away from anything external, especially eSATA, for a few reasons:
       • separate power supplies mean more points of failure
       • the connection is not robust (prone to damage or temporary disconnects)
       • the cables are typically stiff, so things may not sit nicely
     Everyone has their preferences, however. For temporary use, USB 3.0+ seems fairly reliable, as in just copying data off a disk, most notably if the drives are small and can be safely powered directly off the USB port; system performance can take a hit, though. Typically, when you add external drives as part of your array, the cost, power consumption, and cabling, and in some cases the overall volume/size, go up more than you may expect. I wouldn't recommend planning a build with anything dangling on the outside except maybe a USB NIC or similar (I think Aquantia has 2.5/5GBase-T USB 3.0/3.1 versions coming soon; it will be interesting to see if they end up being supported).
  7. I don't have the skills to contribute, but I gotta say I like the stylizing of sane's, with the interlinked lowercase "u" and "n".
  8. Since you mention expandability, an option I use in servers for scratch disks is this: http://ca.startech.com/HDD/Mobile-Racks/25in-SATA-Removable-Hard-Drive-Bay-for-PC-Expansion-Slot~S25SLOTR Connect it to the motherboard's SATA ports and enable hotswap on those ports in the BIOS. That may be worth doing for your docker/cache drive(s). Keep in mind that many motherboards do not allow non-video cards in some slots (typically the first physical 16x slot); there's information on this forum. SAS expanders/multipliers are a way around that but introduce another point of failure and cost. Often people choose a motherboard with 8 ports onboard (or 4x from a PCI slot), use reverse SATA-SAS cables to connect them to two backplane ports, then connect the other 4 backplane ports to two cards so that all 24 bays are usable. There are lots of ways to do it, though, each with caveats (price being one of them). Noise-wise, I do like the 120mm fan bracket for that case. What you're going for looks similar to what I've been using, although I am using Supermicro OEM cards (same chipset, I believe, and maybe a bit cheaper; I haven't had issues, but I haven't read up on them recently, so I'd trust someone else's more recent claims over mine) and I chose Ubiquiti UniFi 720p cameras. For the flash drive, you can copy the files off to restore it later if it breaks, and Lime Tech posted a replacement policy in the announcements section recently, I believe. If it's a decent drive, wear and tear will be minimal, as unRAID shouldn't be writing to it regularly. Use an internal USB port or a nano/flush USB drive to reduce the chance of it getting broken off if knocked around. Spindown is configurable and shouldn't be an issue.
  9. Although it does add another point of failure, there are SATA-to-IDE adapters (some bi-directional, some uni-directional; you would have to be careful about which direction you order). As far as I know, they don't work with ATAPI, though. You could get a pair of those and a SIL 3132 card.
  10. Norco 4224 Thread

      http://www.norcotek.com/item_detail.php?categoryid=1&modelno=RPC-4224 So yeah, it looks like it has what most people are looking for. I am just trying to find some place that sells it.
  11. The version I found before was slightly more compact (only a single connector, despite using the 10 pins for stability), although both of the ones linked by prostuff1 and xamindar are far superior to the models I have come across in recent searches. I will just choose one of those and proceed. It's for two purposes: one, I have a very small unRAID server (ITX) and want to put the rather long key inside so it doesn't get bent if slid against the wall; the other is for a development virtual server which isn't related to unRAID. Thanks guys.
  12. I'm looking for a source for an adapter to plug the flash drive directly into a USB header on the motherboard without an intermediary cable or circuit board. I previously had a bookmark from one of the distributors for multimedia PCs in cars, but I can no longer find it. Does anyone know of a source?
  13. That's 80x80x15mm, i.e. 15mm thick; standard fans are 80x80x25mm. You can find quieter 15mm replacements, although airflow is better the way I have it set up. I'll take a picture sometime (it'll be a little while).
  14. I have pretty much the same thing, although re-branded. The top slot failed pretty quickly for me, so I'm only using 4 drives. There may be a bit of a bottleneck during parity checks if you're using all 5 drives over SATA II on a PCIe 1x slot, which works out to 250/5 = 50 MBytes/sec per drive minus overhead (rough math in the sketch below). I still find the whole idea of having it separate from the server a little awkward. The stock cooling fans were uselessly loud 15mm units; I replaced them with fairly silent ones mounted externally using the same holes, with grills, which reduced the noise significantly while still maintaining 5-10C above ambient.
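
For the parity-check windows in post 3 (and the ~125 MByte/sec gigabit figure in post 2), here is a minimal back-of-the-envelope sketch in Python; the 115 MB/sec average and the 4TB/12TB drive sizes are just the illustrative numbers mentioned in those posts, not measurements of any particular build:

    # A parity check reads every array drive end to end, so its duration is
    # governed by the largest drive and the average sustained speed.

    def parity_check_hours(drive_tb: float, avg_mb_per_sec: float) -> float:
        """Hours to read one drive of drive_tb TB at avg_mb_per_sec MB/sec."""
        seconds = (drive_tb * 1_000_000) / avg_mb_per_sec  # 1 TB = 1,000,000 MB
        return seconds / 3600

    # Gigabit Ethernet ceiling from post 2: 1000 Mbit/sec over 8 bits per byte.
    gigabit_mb_per_sec = 1000 / 8  # = 125 MByte/sec

    for size_tb in (4, 12):
        print(f"{size_tb}TB @ 115 MB/sec: ~{parity_check_hours(size_tb, 115):.0f} h")
    # 4TB  -> ~10 h, fits a 12-hour overnight window
    # 12TB -> ~29 h, needs something like the 36-hour weekend window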
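
And for the bottleneck arithmetic in post 14: drives behind one controller share the slot's bandwidth, so a PCIe 1.0 x1 link (roughly 250 MByte/sec usable) caps each drive during a parity check, when all drives are read at once. A small sketch of that division:

    PCIE1_X1_MB_PER_SEC = 250  # approximate usable throughput of a PCIe 1.0 x1 slot

    def per_drive_mb_per_sec(active_drives: int,
                             link_mb_per_sec: float = PCIE1_X1_MB_PER_SEC) -> float:
        """Upper bound per drive when active_drives are read simultaneously."""
        return link_mb_per_sec / active_drives

    print(per_drive_mb_per_sec(5))  # 50.0 MB/sec per drive, before protocol overhead
    print(per_drive_mb_per_sec(4))  # 62.5 MB/sec with the failed top slot unused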