Leaderboard

Popular Content

Showing content with the highest reputation on 07/21/17 in all areas

  1. Mostly agree with your premise, but just because private companies are involved doesn't mean total deregulation is in order. Those private companies take advantage of a public good (road right of way or radio spectrum) to deliver their product. It's in everyone's best interest not to allow abuse of a public good for private benefit. Notice I said abuse, not use. Economies of scale, barriers to market entry, monopolistic tendencies, carving up of territories to the detriment of the public good - these are just some of the complications that make a pure free market difficult to manage when you are talking about internet connections. Would you be in favor of allowing power companies to turn off service if a particular feed line to a remote neighborhood is going to cost more to maintain than they could ever recoup from the customers connected to it? Is grid connection a "right"? Is internet access a "right"? Is it OK for an ISP to charge certain customers extra based on the service level and infrastructure necessary to serve them? The telcos were regulated by the government to provide voice lines at comparable prices to remote locations because telephone service was deemed necessary for the public good. Does internet take that role now? I'm trying to ask these questions in a way that provokes thought on both sides, but I'm aware my opinions shade the wording. Please don't take offense.
    2 points
  2. Is anybody using docker compose? Are there any plans to integrate it with unRAID?
    1 point
  3. Firstly I would like to thank johnnie.black for all the info he has provided; any references to speeds are taken from his real-world tests post here. I'm still trying to wrap my head around the different configurations for hooking up 24+ drives. I've also noticed a lot of questions around using expanders, so I've made some terrible diagrams in Paint to help visualize things. I've been hung up on the different ways to connect the controllers to the SAS expanders, and what max speed the resulting drives will have available to them. I'm going to try to keep the questions, and hopefully the answers, universal so it's helpful to all.
     These examples show how to connect 24 drives. Of course there are also onboard ports/controllers, and some people may want to add a couple of SSDs for cache, increasing the number of ports needed. Everyone will have their own optimal configuration. Hopefully we can have some discussion about different configurations in this thread.
     Some info on the diagrams below: Each black line is a cable. The 8 port controller is actually a 2 port controller with 4 channels/lanes per port for a total of 8 channels/lanes per card. This can be a Dell Perc H310 or IBM M1015, for example. There are others, but these seem to be the most popular and have been used with the expanders before, so they are known to be compatible. Connection from the expander or controller (if not using an expander) to the hard drives uses an SFF8087 to SATA (forward breakout) cable, allowing 4 drives per expander/controller port. Connection from controller to expander uses an SFF8087 to SFF8087 cable. Controller to a backplane with a built-in expander would also use the same SFF8087 to SFF8087 cable.
     The diagrams cover: 2 controllers and 2 Intel RAID SAS Expanders RES2SV240 (shown in dual link); 1 controller and 1 HP 6Gb (3Gb SATA) SAS Expander (dual link); 3 controllers, direct connect; 1 controller and 2 Intel RAID SAS Expanders RES2SV240.
     Expected but not tested speeds from johnnie.black: using a PCIe 2.0 HBA the bottleneck is the PCIe bus, max speed ~110/125MB/s; using a PCIe 3.0 HBA the bottleneck is the SAS2 links, 2200 * 2 / 24 ≈ 183MB/s. The M1015 and H310 are both PCIe 2.0 x8 cards.
     2 controllers and 1 HP 6Gb (3Gb SATA) SAS Expander - not possible: "This configuration is not possible, you can't connect the same expander to more than one controller." - johnnie.black
     Some more info: General expander info. General expander info 2, HP vs Intel. Intel RES2SV240 Wiki. Some info on the HP 6Gb (3Gb SATA) SAS Expander: it does require a PCIe x4 slot, but for power only. I suspect that you could use one of the following types of adapters, but I haven't seen anyone confirm that they work: Type 1, Type 2, Type 3, Type 4.
     Now I know some cases have expanders built in; I don't have one (yet), so I don't know a lot about them as far as dual link, single link, speed, etc.
     I would also like to add some info/discussion on disk speed: what is considered too much of a bottleneck for current and future drives, and how much the speed decrease will affect unRAID usage. With most people using gigabit LAN, which has a theoretical max of 125MB/s and a real-world max close to that, any disk speed over 125MB/s won't make any difference when writing to the array. The only benefit of speeds over 125MB/s is decreased time for parity checks. Will add more info as I find it, and hopefully some of the smarter people will start to chime in.
    1 point
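     A quick way to re-derive the per-disk figures above for other drive counts: the sketch below just reuses the post's numbers (~2200MB/s usable per x4 SAS2 link, plus an assumed ~2800MB/s practical ceiling for a PCIe 2.0 x8 HBA, in the middle of the 2500-3000MB/s range johnnie.black quotes). It's a rough illustration, not a measurement.

     def per_disk_speed(link_mb_s, links, drives, pcie_limit_mb_s=None):
         """Rough per-disk MB/s when every drive is read at once (parity check)."""
         total = link_mb_s * links                # combined expander uplink bandwidth
         if pcie_limit_mb_s is not None:
             total = min(total, pcie_limit_mb_s)  # the HBA's PCIe slot can cap it first
         return total / drives

     # Dual-linked SAS2 expander, 24 drives, PCIe 3.0 HBA (links are the bottleneck):
     print(per_disk_speed(2200, links=2, drives=24))                        # ~183 MB/s
     # Same layout on a PCIe 2.0 x8 HBA, assuming ~2800MB/s usable on the bus:
     print(per_disk_speed(2200, links=2, drives=24, pcie_limit_mb_s=2800))  # ~117 MB/s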
  4. OK, this may be dumb, but I have a use case this would be really effective for. Currently I pass through 2 unassigned 10k drives to a VM as scratch disks for audio/video editing. In the VM, they are then set up as RAID 0. Super fast. The problem is that the drives are then bound to that VM. I can't use the disks for any other VM, nor span a disk image (work areas) for separate VMs on that pool. I think it would be better to have the host (unRAID) manage the RAID, and then mount the "network" disks and use them that way. Since the VM uses paravirtualized 10GbE adapters, performance should be no issue. And multiple VMs could access them as well. Why don't I just add more drives to my current cache pool? Separation. I don't want the dockers that are running, or the mover, or anything else to interfere with performance. Plus, I'm not sure how mixing SSDs and spinners would work out. Maybe OK? I'm sure someone has done that. TLDR: Essentially I'm suggesting that we be able to have more than one pool of drives in a specifiable RAID setup (0 and 10, please!)
    1 point
  5. I would definitely suggest the hot swap cages, like the SuperMicro CSE-M35T-1B (available on eBay or new). The cabling issues you mention are real, and the people who forgo the cages often get into nasty situations and wind up struggling to avoid losing data on their very first disk add or replacement. I do not believe that the Node 804 can be adapted. Look at something like the Antec 900 or Rebel REX 8, which can accommodate 3 of those cages (15 drives). You don't have to add them all at once if your build does not require that many slots initially. The heavy drivers for faster, higher-core-count CPUs are VMs and transcodes. Are you looking for a more-or-less full-time Windows VM? Will you be playing games requiring a high performance CPU and graphics? Running a normal Windows workstation (a so-called "daily driver")? Or not running a Windows VM at all? Will you be interested in transcoding video? If your "players" are pretty powerful, they can often handle the full video image and do any transcoding themselves. But if the players are lower power, especially phones or tablets, a lot of the processing falls on the server. HEVC (8 bit and now 10 bit) is especially processing intensive to transcode, and some CPUs have special hardware support that turns the transcode into child's play, allowing more transcodes without bogging down the processor. The Kaby Lakes are the best at doing these transcodes. My general recommendation is a 4-core Xeon with 32GB of ECC memory, 6-8 SATA motherboard ports, an add-on LSI SAS9201-8i HBA (SATA controller), and a 250GB or 500GB SSD. But AMD recently released the Ryzen processors with more cores at a cheaper price, which looks pretty compelling if they can resolve some technical issues that are giving the early adopters problems. Most of the early adopters seem to have workarounds that, although they increase power consumption, lead to a stable build. There is a long thread on Ryzen you should read if you are interested in moving in that direction. Otherwise a 4-core Kaby Lake would be my suggestion.
    1 point
  6. Hello and welcome. You've done a lot of great research, and it sounds like you are close to piecing it all together. That said, let's focus on the CPU for the moment. Here are several points I noticed:
     - 4-6 simultaneous transcode streams: at roughly 2,000 PassMarks per 1080p stream, you're looking at a pretty good size CPU. Don't forget that unRAID needs some CPU for itself.
     - Up to 4 1080i/4k streams: do you want to transcode 4k source, or will you just be supplying a 4k stream to a 4k-ready player? If you want to encode/transcode 4k then you will want to look at the latest Kaby Lake processors for their support for HEVC 10 bit video.
     - VMs and gaming: storing game ROMs and serving them to devices isn't a big deal, but I wasn't sure if you want to actually game in a VM (i.e. IOMMU/VT-d) - if so then you'll need to size appropriately.
     Which CPU to pick depends on how you answer the questions above. A Kaby Lake i7 or E3 Xeon is probably a great choice unless you want to get serious about VMs and gaming on your server.
    1 point
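     To put the rule of thumb above into numbers, here's a rough sizing sketch. It assumes ~2,000 PassMark points per simultaneous 1080p transcode (as stated in the post) plus an arbitrary 20% of headroom for unRAID itself and any Dockers - that overhead figure is an assumption, not from the post.

     PASSMARK_PER_1080P_TRANSCODE = 2000   # rule of thumb quoted above
     OVERHEAD = 1.2                        # assumed 20% headroom for unRAID/Dockers

     def required_passmark(streams):
         """Very rough total PassMark score for N simultaneous 1080p transcodes."""
         return streams * PASSMARK_PER_1080P_TRANSCODE * OVERHEAD

     for n in (4, 5, 6):
         print(f"{n} streams -> ~{required_passmark(n):,.0f} PassMark")
     # 4 streams -> ~9,600 PassMark ... 6 streams -> ~14,400 PassMark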
  7. I do similar stuff on my NAS, specs in my signature. The i5 is good enough for 2 or 3 transcodes, so bumping up to an i7 is more than adequate for what you're looking for. A few comments about the parts you've chosen: Check that the Node 804's drive cages will accept the 8TB drives you've chosen; the screw holes on the 8TB Red are different to 'traditional' drives, and the 8TB drives don't work in the Node 304 without janky mods. Choose a different cache SSD - the TLC Samsung will wear out quite quickly. I use an MLC Samsung (the OEM NVMe drive that's sort of an 850 PRO) and have had no problems, but I have used both an 850 EVO and a 750 EVO and both lost 10% of their life in a matter of months.
    1 point
  8. For existing users our image is still running; for new users, however, it won't pull the initial files. In light of this we're beginning to move some of our repos over to having the files pulled at build time rather than runtime. We'd been thinking about it for a while, but this situation was the proverbial straw that broke the camel's back.
    1 point
  9. For anyone interested, here's the Geekbench comparison for a VM on a Ryzen 1700 vs an FX-8350:
     FX-8350: Single-Core Score: 1744, Multi-Core Score: 3731
     Ryzen 1700: Single-Core Score: 3486, Multi-Core Score: 7292
    1 point
  10. I do not think this is very practical, as it is likely to involve some fundamental changes to core Linux code to be that useful. unRAID tries to avoid that sort of change.
    1 point
  11. OK, since you're not using SAS3 disks and the server is PCIe 2.0, there's no point in buying a more expensive PCIe 3.0 and/or SAS3 HBA - though of course you can for future proofing. Each x4 mini-SAS link has 2400MB/s of bandwidth, of which 2200MB/s is usable. So if you use 2 HBAs, one for each EXP3000, you'll have around 180MB/s for each disk during parity checks/rebuilds (note: this is with all SATA3 disks; using some SATA2 disks will bring the whole speed down). If you use a single HBA for both EXP3000s, the bottleneck will be the x8 PCIe 2.0 slot; this can vary a little from board to board, but in my experience it's between 2500 and 3000MB/s, so you'd get around 100/125MB/s per disk during parity checks/rebuilds.
    1 point
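      The same arithmetic, written out for the two EXP3000 hook-ups described above. The figures (2200MB/s usable per x4 SAS2 link, roughly 2500-3000MB/s through a PCIe 2.0 x8 slot) are the ones quoted in the post; treat the output as a ballpark sketch, not a benchmark.

      USABLE_PER_X4_SAS2_LINK = 2200   # MB/s, from the post
      PCIE2_X8_USABLE = (2500, 3000)   # MB/s, board-dependent range from the post
      DISKS_PER_EXP3000 = 12

      # One HBA per EXP3000: each shelf gets its own x4 link.
      print(USABLE_PER_X4_SAS2_LINK / DISKS_PER_EXP3000)    # ~183 MB/s per disk

      # One HBA shared by both shelves: the PCIe 2.0 x8 slot becomes the ceiling.
      for limit in PCIE2_X8_USABLE:
          print(limit / (2 * DISKS_PER_EXP3000))            # ~104 and ~125 MB/s per disk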
  12. That server uses PCIe 2.0; I assume you'll be using SATA3 disks?
    1 point
  13. @gridrunner Thank you. I passed a USB controller to it. All working well now. Thanks for your videos - I've found them very informative.
    1 point
  14. My bad, I've forgotten how to do the computations... The diagram you've quoted above is exactly the case for a single 9200-8e card. There will be no speed disadvantage with a 16e (however, the 16e might be a PCIe 3.0 card, thus possibly giving you better bandwidth), so you need to take this into account: is your server populated with PCIe 3.0 capable Xeon(s)?
    1 point
  15. Sorry, just realized I wasn't clear. The single 9208e card you're getting has two x4 ports, and each EXP3000 ESM only requires a single x4 port to work with. Thus a single 9208e card should be able to access each EXP3000 at 4.8Gb/s - of course, 12 drives in an EXP3000 would mean a theoretical max speed of 40MB/s. @johnnie.black would have better numbers for the actual real-world performance.
    1 point
  16. Hmm. It seems the EXP3000 has built-in expanders and needs a minimum of a single link to each EXP3000, so a single LSI9208e should be enough... unless the EXP3000 has dual ESMs, in which case another LSI9208e may be necessary.
    1 point
  17. If nothing else, after a week or two the backups will only take ~10 minutes instead of 1 hour, as they only have to copy changed files.
    1 point
  18. Same as the 9211-8i; the only difference is the vertical connectors instead of horizontal.
    1 point
  19. You can find the 9201-8i or 9211-8i considerably cheaper, but they are EOL and the 9211-8i may require flashing.
    1 point
  20. Just do a quick search here on the forums on "Marvell", which is the chipset on the Supermicro cards. You won't like the results. That said, the SASLP and SAS2LP were the preferred controllers for a long time and a lot of people are still using them successfully - but as you can see from the search results, there are good reasons not to use them in a new build if you have a better alternative.
    1 point
  21. Have you tried changing the BIOS on the VM to SeaBIOS instead of OVMF? You'll have to recreate the VM (you can't edit it and change the BIOS type).
    1 point
  22. Hi - Since you are putting in a couple of monster CPUs, I assume you are either planning to do lots of Plex transcoding or do some serious VMing. In either case, especially with VMs, the SAS2LP is no longer the preferred SATA controller. Get something LSI based like an IBM M1015, Dell PERC H310, or LSI SAS9201-8i HBA (all on eBay, and the first two would need to be flashed). Curiously, that motherboard doesn't have any x16 slots - were you planning any high end video cards?
    1 point
  23. Yes, replace:
      197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 45
      You can skip some steps:
      1: Power down
      2: Remove old parity and install the new drive
      3: Power on
      4: Assign the new drive in the slot of the parity drive and start array to begin parity sync
    1 point
  24. I have mine set to transcode as well and it seems to work fine. I forked their GitHub last night and added my changes and created the project on the docker hub. Looks like it was a success so I just need to test it out myself and then I will put it up with my dockers and do a pull request. https://hub.docker.com/r/pinion/docker-plex-1/
    1 point
  25. The RAID5/6 write hole is one of the remaining data integrity risks. Since 4.11 the Linux kernel has support for a journal device where writes to the array and parity are journaled for a number of stripes before they are written to the array devices. In the case of a crash, this makes it possible to redo the last write operation and/or use the journaled data to reconstruct inconsistent information. Maybe this could be leveraged for unRAID (or implemented in another way), in particular because unRAID already relies heavily on a cache drive where this data could be written. Even if this would require an additional (smaller) SSD, it seems like a good idea to add to unRAID. Performance-wise it should not really have an impact, as these writes go to an SSD and can be done in parallel to the array writes.
      The relevant changes in the Linux kernel, 4.11.x - journaled RAID4/5/6 to close the write hole: Based on work started in Linux 4.4, this release adds journalling support to RAID4/5/6 in the MD layer (not to be confused with btrfs RAID). With a journal device configured (typically NVRAM or SSD), the "RAID5 write hole" is closed - a crash during degraded operations cannot result in data corruption. Recommended LWN article: A journal for MD/RAID5. Blog entry: Improving software RAID with a write-ahead log. Code: commit
      Sorry for the bold face, but somehow the editor refuses to change it.
    1 point
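      To make the journaling idea above concrete, here is a toy Python model of the concept: a stripe update is recorded in a journal first (an SSD/NVRAM device in the real kernel feature) and only then applied to the data and parity devices, so an interrupted update can be replayed after a crash. This is not how the md driver implements it - just a sketch of the mechanism.

      journal = []            # stands in for the journal device (SSD/NVRAM)
      data, parity = {}, {}   # stand in for the array's data and parity disks

      def xor_parity(blocks):
          p = 0
          for b in blocks:
              p ^= b
          return p

      def write_stripe(stripe_no, blocks):
          entry = {"stripe": stripe_no, "blocks": blocks,
                   "parity": xor_parity(blocks), "applied": False}
          journal.append(entry)    # 1) journal the whole update first
          apply_entry(entry)       # 2) then write data + parity to the array

      def apply_entry(entry):
          data[entry["stripe"]] = entry["blocks"]
          parity[entry["stripe"]] = entry["parity"]
          entry["applied"] = True  # 3) mark the journal entry as committed

      def replay_after_crash():
          for entry in journal:
              if not entry["applied"]:   # redo anything a crash interrupted
                  apply_entry(entry)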
  26. I posted on the rc9 post and this is Tom's response: Could this be your problem?
    1 point