Leaderboard

Popular Content

Showing content with the highest reputation on 08/17/17 in all areas

  1. Turbo Write, technically known as "reconstruct write" - a new method for updating parity

     JonP gave a short description of what "reconstruct write" is, but I thought I would give a little more detail: what it is, how it compares with the traditional method, and the ramifications of using it.

     First, where is the setting? Go to Settings -> Disk Settings, and look for Tunable (md_write_method). The 3 options are read/modify/write (the way we've always done it), reconstruct write (Turbo write, the new way), and Auto, which is something for the future but is currently the same as the old way. To change it, click on the option you want, then the Apply button. The effect should be immediate.

     Traditionally, unRAID has used the "read/modify/write" method to update parity, to keep parity correct for all data drives. Say you have a block of data to write to a drive in your array, and naturally you want parity to be updated too. In order to know how to update parity for that block, you have to know the difference between this new block of data and the existing block of data currently on the drive. So you start by reading in the existing block and comparing it with the new block. That allows you to figure out what is different, so now you know what changes you need to make to the parity block - but first you need to read in the existing parity block. You apply the changes you figured out to the parity block, resulting in a new parity block to be written out. Now you want to write out the new data block and the parity block, but the drive head is just past the end of the blocks because you just read them. So you have to wait a long time (in computer time) for the disk platters to rotate all the way back around, until they are positioned to write to that same block. That platter rotation time is the part that makes this method take so long. It's the main reason why parity writes are so much slower than regular writes.

     To summarize, for the "read/modify/write" method, you need to:
     * read in the parity block and read in the existing data block (can be done simultaneously)
     * compare the data blocks, then use the difference to change the parity block to produce a new parity block (very short)
     * wait for platter rotation (very long!)
     * write out the parity block and write out the data block (can be done simultaneously)

     That's 2 reads, a calc, a long wait, and 2 writes.

     Turbo write is the new method, often called "reconstruct write". We start with that same block of new data to be saved, but this time we don't care about the existing data or the existing parity block. So we can immediately write out the data block - but how do we know what the parity block should be? We issue a read of the same block on all of the *other* data drives, and once we have them, we combine all of them plus our new data block to give us the new parity block, which we then write out. Done!

     To summarize, for the "reconstruct write" method, you need to:
     * write out the data block while simultaneously reading in the data blocks of all other data drives
     * calculate the new parity block from all of the data blocks, including the new one (very short)
     * write out the parity block

     That's a write and a bunch of simultaneous reads, a calc, and a write, but no platter rotation wait! Now you can see why it can be so much faster! The upside is it can be much faster. The downside is that ALL of the array drives must be spinning, because they ALL are involved in EVERY write.

     So what are the ramifications of this?
     * For some operations, like parity checks, parity builds, and drive rebuilds, it doesn't matter, because all of the drives are spinning anyway.
     * For large write operations, like large transfers to the array, it can make a big difference in speed!
     * For a small write, especially at an odd time when the drives are normally sleeping, all of the drives have to be spun up before the small write can proceed.
     * And what about those little writes that go on in the background, like file system housekeeping operations? EVERY write at any time forces EVERY array drive to spin up. So you are likely to be surprised at odd times when checking on your array, expecting all of your drives to be spun down, and finding every one of them spun up for no discernible reason.
     * So one of the questions to be faced is: how do you want your various write operations to be handled? Take a small scheduled backup of your phone at 4 in the morning. The backup tool determines there's a new picture to back up, so it tries to write it to your unRAID server. If you are using the old method, the data drive and the parity drive have to spin up, then this small amount of data is written, possibly taking a couple more seconds than Turbo write would take. It's 4am, do you care? If you were using Turbo write, then all of the drives will spin up, which probably takes longer than any time saved by using Turbo write to save that picture (the save itself being only a couple of seconds faster). Plus, all of the drives are now spinning, uselessly.
     * Another possible problem: if you were in Turbo mode and you are watching a movie streaming to your player, then a write kicks in on the server and starts spinning up ALL of the drives, causing that well-known pause and stuttering in your movie. Who wants to deal with the whining that starts then?

     Currently, you only have the option to use the old method or the new (currently the Auto option means the old method). But the plan is to add a true Auto option that will use the old method by default, *unless* all of the drives are currently spinning. If the drives are all spinning, then it slips into Turbo. This should be enough for many users. It would normally use the old method, but if you planned a large transfer or a bunch of writes, then you would spin up all of the drives - and enjoy faster writing.

     Tom talked about that Auto mode quite a while ago, but I'm rather sure he backed off at that time, once he faced the problems of knowing when a drive is spinning and being able to detect it without noticeably affecting write performance, which would ruin the very benefits we were trying to achieve. If on every write you have to query each drive for its status, then you will noticeably impact I/O performance. So to maintain good performance, you need another function working in the background, keeping near-instantaneous track of spin status and providing a single flag for the writer to check - are they all spun up or not? - to know which method to use.

     So that provides 3 options, but many of us are going to want tighter and smarter control of when it is in either mode. Quite a while ago, WeeboTech developed his own scheme of scheduling. If I remember right (and I could have it backwards), he was going to use cron to toggle it twice a day, so that it used one method during the day and the other method at night. I think many users may find that scheduling satisfies their needs: Turbo when there's lots of writing, old style overnight and when they are streaming movies.
     For a while, I did think that other users, including myself, would be happiest with a Turbo button on the Main screen (and Dashboard). Then I realized that that's exactly what our Spin up button would be, if we used the new Auto mode. The server would normally be in the old mode (except for times when all drives were spinning). If we had a big update session, backing up or downloading lots of stuff, we would click the Turbo / Spin up button and would have Turbo write, which would then automatically time out when the drives started spinning down, after the backup session or transfers were complete.

     Edit: added what the setting is and where it's located (completely forgot this!)
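     To make the parity math concrete, here is a small Python sketch of the two methods. This is my own simplified model, not unRAID code: drives are byte arrays, "blocks" are single bytes, single parity is plain XOR, and the platter-rotation wait obviously isn't modeled.

        # Simplified model of the two parity-update methods described above.

        def rmw_write(data_drives, parity, drive, block, new):
            """read/modify/write: 2 reads, a calc, (a rotation wait), 2 writes."""
            old_data = data_drives[drive][block]          # read the existing data block
            old_parity = parity[block]                    # read the existing parity block
            parity[block] = old_parity ^ old_data ^ new   # flip only the bits that changed
            data_drives[drive][block] = new               # write data and parity (after the wait)

        def reconstruct_write(data_drives, parity, drive, block, new):
            """reconstruct write (Turbo): write data, read every OTHER drive, write parity."""
            data_drives[drive][block] = new               # write the new data block immediately
            p = new
            for i, d in enumerate(data_drives):           # read the same block on all other drives
                if i != drive:
                    p ^= d[block]
            parity[block] = p                             # write the new parity block

        # Tiny demo: 3 data drives of 4 blocks each, with parity consistent at the start.
        data = [bytearray([1, 2, 3, 4]), bytearray([5, 6, 7, 8]), bytearray([9, 10, 11, 12])]
        parity = bytearray(data[0][b] ^ data[1][b] ^ data[2][b] for b in range(4))

        rmw_write(data, parity, drive=0, block=2, new=42)
        reconstruct_write(data, parity, drive=1, block=2, new=99)

        # Either way, parity still equals the XOR of all data drives for every block.
        assert all(parity[b] == data[0][b] ^ data[1][b] ^ data[2][b] for b in range(4))
        print("parity after both writes:", list(parity))

     Both functions leave parity consistent; the difference is purely in which blocks get read, which is where the rotation wait (or the need to spin up every drive) comes from.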
    1 point
  2. Not familiar with that card, but I know this one works under unRAID, or did for me anyway. https://www.amazon.com/IO-Crest-Controller-Non-Raid-SI-PEX40064/dp/B00AZ9T3OU/ref=sr_1_1?ie=UTF8&qid=1502994985&sr=8-1&keywords=io+crest
    1 point
  3. Excellent, thanks for confirming.
    1 point
  4. Yes, CHBMB is correct: the macvlan information is stored in the docker image, and this image needs to be deleted to start with a clean sheet.
    1 point
  5. Enable Docker Hub searching in Community Applications, then search for and install it from the Apps tab as normal. You may have to adjust some settings, but with an app this simple it'll probably work with little to no tweaking.
    1 point
  6. Both should work, and most users have no issues with Realtek NICs, though Intel is always better.
    1 point
  7. I do believe the macvlan stuff is configured within the docker.img.
    1 point
  8. I think so. If all are in use it can be a little limiting, but nothing major, and it should still provide decent performance.
    1 point
  9. Not much info on that architecture, but I would expect it to use UMI (AMD's DMI equivalent), which would be shared by both SATA ports plus the 6 PCIe lanes, so around 1600MB/s usable for everything.
    1 point
  10. No, unless the PCIe x4 slot is connected through the DMI.
    1 point
  11. Sounds about right, since in the first instance you are reading from one array disk and writing to another, while in the second you are reading from your desktop and writing to an array disk. Parity-protected writes are complex and, with turbo write enabled, involve all disks, so it makes sense that reading from one of the other array disks at the same time would cause a slowdown compared to a pure write.
    1 point
  12. AFAIK, bridging is optional for dockers with macvlan support, but for me it's easier to keep bridging turned on.
    1 point
  13. Hmm. Run "docker network ls" - you should have only 3 networks: bridge, host, and none. Run "docker network rm <name>" on all the others. I'm guessing dm-ba57b5a60b33 is an autogenerated docker network from the 6.4 series. Docker will persist the network settings in the docker.img file across unRAID upgrades. Also, telling docker to use br0.1 when it has not been configured will make docker create it anyway, but how it's set up is not clear - which can cause problems that will be hard to debug. (AFAIK it will try to do a macvlan subinterface, which makes the containers use a subinterface of a subinterface.)
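      As a rough Python sketch of that cleanup (just a wrapper around the stock docker CLI; review the printed list before actually removing anything):

        # List docker networks and remove everything except the three defaults.
        import subprocess

        DEFAULTS = {"bridge", "host", "none"}

        names = subprocess.run(
            ["docker", "network", "ls", "--format", "{{.Name}}"],
            capture_output=True, text=True, check=True,
        ).stdout.split()

        for name in names:
            if name not in DEFAULTS:
                print("removing", name)
                subprocess.run(["docker", "network", "rm", name], check=True)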
    1 point
  14. Something is wrong with your setup right now... How did you create the br0.1 interface? The one in your ip commands is created as a VLAN subinterface. You can reconfigure the pihole container back, delete the homenet docker network with "docker network rm homenet", and stop the array and disable the VLAN network. Then try again.
    1 point
  15. Then my answer stands.
      * You can't use VLANs unless you have a switch that supports VLANs.
      * With only a single NIC, dockers with dedicated IPs cannot talk to the host and vice versa.
    1 point
  16. I think you need to create new private keys for Transmit 5, then export the public keys to OpenSSH format. The client is Transmit 5, so the private keys must be in Transmit 5's format; the server is OpenSSH, so the public keys must be in OpenSSH format. This is how I sometimes work with PuTTY and WinSCP from Windows to unRAID (and other servers), unless Transmit 5 refuses to talk to an OpenSSH server for the time being...
    1 point
  17. I have downloaded and installed the 0807 BIOS dated 07/19/2017. Under "Advanced" --> "AMD CBS", there appear to be several new options:

      Zen Common Options: RedirectForReturnDIs, L2 TLB Associativity, Platform First Error Handling, Enable IBS, Opcache Control, Custom Pstates / Throttling (*), Core/Thread Enablement (*), Streamline Stores Control

      DF Common Options: DRAM scrub time, Redirect scrubber control, Disable DF sync flood propagation, GMI encryption control, xGMI encryption control, CC6 memory region encryption, Location of private memory regions, System probe filter, Memory interleaving size, Channel Interleaving hash

      UMC Common Options: DDR4 Common Options (Fall_CNT, DRAM Controller Configuration (*), CAD Bus Configuration (*), Data Bus Configuration (*), Common RAS (*), Security (*), DRAM Memory Mapping, Chipselect Interleaving, BankGroupSwap, BankGroupSwapAlt, Address Hash Bank, Address Hash CS)

      NBIO Common Options: NB Configuration (*), NBIO Internal Poison Consumption, NBIO RAS Control, PSI, ACS Enable, PCIe ARI Support, CLDO_VDDP Control (*)

      There are more sub-menus under this option, but I didn't dig to the bottom. Overall, the BIOS upgrade went smoothly. None of the "dead system hang" boots like in the past couple of BIOS upgrades. unRAID booted normally and all my VMs/Dockers auto-started properly ... including the Windows 10 VM with GPU pass-through. System temperatures are about the same, idling around 45C. Not sure what else to check? - Bill
    1 point
  18. The trick is to make a share to handle your downloads and set that share's "Use cache disk" setting to "Only". Configure all your post-download automation tasks on this share - par checks, unrar, repair, unzip, and rename should all be done in a folder on this cache-only share. It will be unprotected but lightning fast and won't put a strain on the protected array. Secondly, train your Sonarr and Radarr to go fetch the finalized product from the "download" share and save it on the "media" share. I have no mover enabled on my media share; it does not make any sense to do so. That's how I have set it up and, imho, the good way to do it.
    1 point
  19. Change to advanced mode and switch the BIOS to SeaBIOS instead of OVMF before you create the VM.
    1 point
  20. Here's the whole part list:

      AMD Threadripper 1950X: https://www.newegg.com/Product/Product.aspx?Item=N82E16819113447
      MSI X399 Gaming Pro Carbon AC: https://www.newegg.com/Product/Product.aspx?Item=N82E16813144079
      Fractal Design S24 CPU watercooler: https://www.newegg.com/Product/Product.aspx?Item=N82E16835352029
      Phanteks PH-ES614P_BK Case: https://www.newegg.com/Product/Product.aspx?Item=N82E16811854003
      Rosewell 1000 watt power supply: https://www.newegg.com/Product/Product.aspx?Item=N82E16817182188
      Samsung 960 Pro M.2 512GB NVMe: https://www.newegg.com/Product/Product.aspx?Item=N82E16820147596

      I'm hoping the watercooler radiator/fans fit in the case properly -- my concern with a regular heatsink/fan was compatibility for the sTR4 socket and RAM slot clearance. As for RAM, I'm going to borrow existing RAM from our Ryzen or other machines here for the time being, until I can find an official memory compatibility list for this board, which MSI seems to be doing a great job at hiding.
    1 point
  21. Well, yes, and I'm not sure it's worth the expense, but they fit your requirements quite nicely... In truth, it's hard to justify expensive hardware based solely on power savings. I pay over $0.21/kWh (expensive in the US), and when I did the math on a much more power-efficient build, it would have taken *years* to pay back the expense. For a system that idles most of the time, the actual $ savings of a low-power build over a modern standard build just aren't that great. Just be careful with board selection and don't get a lot of features you don't need. For instance, let's say a low-power build idles at 35W. Running 24 hours a day, that's 306 kWh per year. A 45W standard build would use 394 kWh per year. At $0.21/kWh that's a cost difference of $18/year (and as @jonathanm notes, it may not even be that great). I won't make any assumptions about how important $18/year is to you. But it's clear that a) it takes a long time to justify spending any extra money on a low-power build, and b) at US energy prices it doesn't cost that much more over time to have a fully functional modern 1151 Pentium setup running.
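      As a quick Python sketch of that arithmetic (the 35W and 45W idle figures and the $0.21/kWh rate are simply the numbers from the example above):

        # Back-of-the-envelope yearly cost of a constant idle power draw.
        RATE_PER_KWH = 0.21           # $/kWh
        HOURS_PER_YEAR = 24 * 365

        def yearly(watts):
            """Return (kWh per year, dollars per year) for a constant draw in watts."""
            kwh = watts * HOURS_PER_YEAR / 1000
            return kwh, kwh * RATE_PER_KWH

        low_kwh, low_cost = yearly(35)    # "low power" build idling at 35W
        std_kwh, std_cost = yearly(45)    # standard build idling at 45W
        print(f"35W: {low_kwh:.1f} kWh/yr, ${low_cost:.2f}/yr")
        print(f"45W: {std_kwh:.1f} kWh/yr, ${std_cost:.2f}/yr")
        print(f"difference: ${std_cost - low_cost:.2f}/yr")    # roughly $18/year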
    1 point
  22. Which means the server will be fully spun up much longer, increasing power usage overall. "Low power" chips are useful in enclosures where you can't get rid of heat, but otherwise are generally not more efficient at idle than the same family chip in a normal usage scenario.
    1 point
  23. I just noticed that you wanted to use dual parity. You should read this post: When I was looking at CPUs, this basically meant that I was looking at processor families that had been introduced after 2014 for Intel and 2015 for AMD. With my Sempron 140, dual parity check times almost doubled, to give a very rough feel. Even upgrading to a dual-core AMD processor (off eBay) didn't get the time down that much. My Media Server (specs below) is idling at 36 watts, and that includes three case fans and the PSU losses! Remember that you haven't even thought about what your SATA card, case fans, and PSU inefficiencies are going to add. You may be chasing a red herring in attempting to get to the lowest possible power CPU-MB combination. You may only end up saving 5-10W and then have sub-par performance. I just started a non-correcting parity check and the power is now in the 95-108W area. (Refreshing the GUI screen increases the CPU usage, which accounts for the increase from the 95W range!)
    1 point
  24. No! I believe that four lanes will handle eight spinners without a bottleneck. I think you would need eight lanes if you used eight SSDs! If the MB has two PCIe 2.0 x8 slots, you might be able to use two of these cards and not have any problems. You might want to look through these threads:
    1 point
  25. TDP isn't a particularly good way to select a CPU for unRAID. TDP represents the maximum amount of heat you should have to dissipate - but that's at full load. Your server will spend most of its time at idle so it's more important to evaluate idle power consumption. Intel chips have traditionally had better idle power consumption than AMD. You're on the right track with implementations like the Celeron J's but you should also look at the Intel Atom boards. You might get a better port/slot configuration and the slightly higher TDP will not necessarily translate into higher power consumption figures.
    1 point
  26. To begin with, your power profiles are going to get shot in the behind as soon as you add a SATA card into the mix. Nobody has designed and built a RAID controller chipset with power conservation in mind (and probably nobody ever will)! Performance is the first objective! You are going to end up needing a PCIe 2.0 x4 slot. PCIe 2.0 x1 will only really support two modern HDs at a time. (8+ drives on a PCIe 2.0 x1 will drop parity and rebuild speeds below 25MB/sec. You do the math... Plus both x1 slots will (probably) be sharing the same internal MB bus, which will take things even lower if you have a card in each.) Writing also requires the use of two disks: parity plus the data disk. If you really want to reduce that power profile, look at going to 8TB or 10TB drives to see if you can cut the drive count that way.
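      To put rough numbers on that lane math, a back-of-the-envelope Python sketch; the ~400MB/s of usable bandwidth per PCIe 2.0 lane is an assumed best case, and real HBAs often deliver noticeably less:

        # Best-case MB/s available to each drive when all drives stream at once,
        # as they do during a parity check or rebuild. A ceiling, not a benchmark.
        USABLE_MB_S_PER_PCIE2_LANE = 400   # assumed usable bandwidth per lane

        def per_drive_ceiling(lanes, drives):
            return lanes * USABLE_MB_S_PER_PCIE2_LANE / drives

        print(per_drive_ceiling(lanes=1, drives=8))   # ~50 MB/s ceiling; real-world overhead drags it far lower
        print(per_drive_ceiling(lanes=4, drives=8))   # ~200 MB/s, about what a modern spinner can sustain
        print(per_drive_ceiling(lanes=8, drives=8))   # ~400 MB/s, closer to what eight SSDs would want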
    1 point
  27. If anyone is interested, I did toggle a few settings to see what worked and what didn't. I was able to get this working on Q35-2.7 & OVMF. Also worked on i440fx-2.7 & OVMF. I created a new VM with SeaBIOS and after a restart was able to get both Q35-2.7 and i440fx-2.7 running. All of this was with the syslinux.cfg modification. With that said, is there an "optimal" configuration that I should be using to deploy a LibreELEC VM?
    1 point
  28. Well gents, just wanted to give you an update on Ryzen with c-states enabled locking up unRAID... or rather how we've fixed it! On my Ryzen test machine, with the array stopped (to keep it idle) and c-states enabled in the BIOS, I'm approaching 7 days of uptime. Before the changes we made to the kernel in the upcoming RC7, it would only make it a few hours and lock up. I think you guys will find RC7 just epyc! (sorry, couldn't resist)

      FYI, these are the two kernel changes we had to add to make it stable for Ryzen:
      - CONFIG_RCU_NOCB_CPU: Offload RCU callback processing from boot-selected CPUs
      - CONFIG_RCU_NOCB_CPU_ALL: All CPUs are build_forced no-CBs CPUs

      Thanks to everyone here for testing and helping narrow it down to c-states. That made it easier for us to test and find a solution.
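      If you want to confirm those options are present once RC7 is out, here is a quick Python check - a sketch that assumes the running kernel exposes its build config via /proc/config.gz (i.e. it was built with CONFIG_IKCONFIG_PROC):

        # Look for the two RCU options mentioned above in the running kernel's config.
        import gzip

        WANTED = ("CONFIG_RCU_NOCB_CPU", "CONFIG_RCU_NOCB_CPU_ALL")

        with gzip.open("/proc/config.gz", "rt") as cfg:
            lines = cfg.read().splitlines()

        for opt in WANTED:
            # An enabled option appears as "CONFIG_FOO=y"; otherwise report it as unset.
            state = next((line for line in lines if line.startswith(opt + "=")), opt + " is not set")
            print(state)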
    1 point
  29. Thank you for the feedback. I have limited access to different Blu-ray disc types. Are there additional strings you can provide? Just let me know if any more show up. I'll update the script soon!
    1 point