Leaderboard

Popular Content

Showing content with the highest reputation on 10/22/17 in all areas

  1. Hey everyone, just thought I'd put this up here after reading a syslog from another forum member and noticing a repeating pattern here: folks let Plex create its temporary transcoding files on an array or cache device instead of in RAM.

Why should I move transcoding into RAM? What do I gain?

In short, transcoding is both CPU and IO intensive. Many write operations occur on the storage medium used for transcoding, and on an SSD specifically this causes unnecessary wear and tear that burns the drive out more quickly than necessary. By moving transcoding to RAM, you take that burden off your non-volatile storage devices. RAM isn't subject to "burn out" from usage the way an SSD is, and transcoding doesn't need nearly as much space in memory as some would think.

How much RAM do I need for this?

A single stream of video content transcoded to 12 Mbps on my test system took up 430 MB on the root RAM filesystem. The quality of the source content shouldn't matter, only the bitrate to which you are transcoding. In addition, there are other transcoder settings you can tweak that affect this number, including how many seconds of transcoding should occur in advance of playback. Bottom line: if you have 4 GB or less of total RAM on your system, you may have to tweak settings based on how many different streams you intend to transcode simultaneously. If you have 8 GB or more, you are probably in the safe zone, but obviously the more RAM you use in general, the less space will be available for transcoding.

How do I do this?

There are two tweaks to make in order to move your transcoding into RAM. One is to the Docker container you are running and the other is a setting within the Plex web client itself.

Step 1: Changing your Plex container properties

From within the webGui, click on "Docker" and click on the name of the PlexMediaServer container. From here, add a new volume mapping: /transcode to /tmp. Click "Apply" and the container will be started with the new mapping.

Step 2: Changing the Plex Media Server to use the new transcode directory

Connect to the Plex web interface from a browser (e.g. http://tower:32400/web). From there, click the wrench in the top right corner of the interface to get to settings. Now click the "Server" tab at the top of this page. On the left, you should see a setting called "Transcoder." Clicking on that and then clicking the "Show Advanced" button will reveal the magical setting that lets you redirect the transcoding directory. Type "/transcode" in there, click "Apply," and you're all set. You can tweak some of the other settings if desired to see if that improves your media streaming experience. Thanks for reading and enjoy!
    1 point
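Not part of the original post, but the 430 MB-per-stream figure above can be turned into a rough back-of-envelope capacity check. The helper name and the assumption that usage scales linearly per stream are mine, not from the post:

```shell
#!/bin/sh
# Rough capacity estimate based on the ~430 MB of /tmp usage the post
# measured for one 12 Mbps transcode. Assumes usage scales roughly
# linearly per stream, which is a simplification.
streams_for_ram() {
    free_mb=$1
    echo $(( free_mb / 430 ))
}

streams_for_ram 4096    # ~9 streams in 4 GB of free RAM
streams_for_ram 8192    # ~19 streams in 8 GB
```

In practice leave headroom for the OS and other services, since /tmp in RAM competes with everything else for the same memory.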
  2. Anyone had any luck with a Threadripper build yet? I am thinking about jumping over to team red and am curious what types of issues folks are having. I saw you need to run a release candidate right now. Just curious what other problems are out there?
    1 point
  3. unRAID v6.4 has a built-in USB backup feature. It creates a zip file of the USB contents, which can be stored locally. The USB Creator tool can then rebuild a USB stick from the previously stored zip file.
    1 point
  4. LSI Corporation SAS2 Flash Utility
Version 19.00.00.00 (2014.03.17)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved

No LSI SAS adapters found! Limited Command Set Available!
ERROR: Command Not allowed without an adapter!
ERROR: Couldn't Create Command -c
Exiting Program.

LSI Corporation SAS2 Flash Utility
Version 20.00.00.00 (2014.09.18)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved

Adapter Selected is a LSI SAS: SAS2008(B1)

Controller Number : 0
Controller : SAS2008(B1)
PCI Address : 00:01:00:00
SAS Address : 5003005-.....
NVDATA Version (Default) : 14.01.00.08
NVDATA Version (Persistent) : 14.01.00.08
Firmware Product ID : 0x2213 (IT)
Firmware Version : 20.00.07.00
NVDATA Vendor : LSI
NVDATA Product ID : SAS9211-8i
BIOS Version : N/A
UEFI BSD Version : N/A
FCODE Version : N/A
Board Name : SAS9211-8i
Board Assembly : N/A
Board Tracer Number : N/A

Finished Processing Commands Successfully. Exiting SAS2Flash.

How I did it, just in case I did something wrong: booted into FreeDOS and issued megarec -writesbr 0 SBR-A21.bin. Your custom SBR didn't work at all, but at least we know that flashing a different SBR works; the A21 SBR shows the same results. Also no drives recognized. I pulled an unRAID syslog with the A21 SBR in place; it seems the controller doesn't reset when queried by the OS. To rule out general issues I plugged in an H200 controller and ran unRAID. It detects drives as expected. syslog
    1 point
  5. Just add a path /media (container path) mapped to /mnt/user/media (or whatever the share is called). Then create a library by adding the files from /media.
    1 point
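As a sketch of how that mapping behaves (the helper below is hypothetical, just illustrating the /media → /mnt/user/media translation from the post):

```shell
#!/bin/sh
# Hypothetical helper illustrating the volume mapping from the post:
# a path the container sees under /media lives under /mnt/user/media
# on the host. The function name is an assumption for illustration.
container_to_host() {
    printf '%s\n' "$1" | sed 's|^/media|/mnt/user/media|'
}

container_to_host /media/movies/film.mkv
# -> /mnt/user/media/movies/film.mkv
```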
  6. @Jonny I just flashed a D2607 A11 with the toolset you put together. This is my result:

LSI Corporation SAS2 Flash Utility
Version 19.00.00.00 (2014.03.17)
Copyright (c) 2008-2014 LSI Corporation. All rights reserved

Adapter Selected is a LSI SAS: SAS2008(B1)

Controller Number : 0
Controller : SAS2008(B1)
PCI Address : 00:01:00:00
SAS Address : 5003005censored:)
NVDATA Version (Default) : 14.01.00.08
NVDATA Version (Persistent) : 14.01.00.08
Firmware Product ID : 0x2213 (IT)
Firmware Version : 20.00.07.00
NVDATA Vendor : LSI
NVDATA Product ID : SAS9211-8i
BIOS Version : N/A
UEFI BSD Version : N/A
FCODE Version : N/A
Board Name : SAS9211-8i
Board Assembly : N/A
Board Tracer Number : N/A

Finished Processing Commands Successfully. Exiting SAS2Flash.

Yea, looks good, but unfortunately the controller is not detecting any drive on any port. It says A11-GS1 on the PCB. Attached is the original SBR dump. Any advice on what I could try next?

Also, could you try the command: sas2flsh.efi -l adapter.txt -c 0 -list
My machine is locking up when I try to log; instead I need to pipe the output into a file. "-c 0 -list" alone is working for some reason...

Could you also show me your output of e.g.: megarec -writesbr 0 SBR-A11.bin
The last word is "success", but in between I get:
Warning! IO Base address high. Currently not supported.
Warning! IO Base address high. Currently not supported.

O-SBR.BIN
    1 point
  7. Apologies for the necro, but having seen that single-drive vdev expansion is coming to ZFS some time in the future, I figured I'd nudge this again for visibility. For myself, I'd be happy just having ZFS for the cache, not the main array. Some other users and I have been having issues possibly related to BTRFS cache pooling (see below; the issues seem to go away when a single XFS cache device is used), and I feel like having something that's been around longer and has been put through its paces a little more might be a nice option. I understand that, for all the reasons Limetech listed above, it might still not be viable, but I'm putting it out there nonetheless. https://forums.lime-technology.com/topic/58381-large-copywrite-on-btrfs-cache-pool-locking-up-server-temporarily/
    1 point
  8. I'd be just as worried (if not more so) about domestic (US) companies having to comply with secret FISA orders.
    1 point
  9. The docker image is corrupt; delete and recreate it.
    1 point
  10. Simultaneous errors on both:

Oct 20 20:13:27 Tower2 kernel: md: disk0 read error, sector=4580201424
Oct 20 20:13:27 Tower2 kernel: md: disk29 read error, sector=4580201424
Oct 20 20:13:27 Tower2 kernel: md: disk0 read error, sector=4580201432
Oct 20 20:13:27 Tower2 kernel: md: disk29 read error, sector=4580201432

They are both on the same controller and on the same breakout cable. I would try replacing or swapping that cable first, and also check the power connections; if the issue persists, swap them to another controller.
    1 point
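One way to spot this pattern in a longer syslog is to tally the md read errors per sector, so paired errors on the same sector line up next to each other. The sample lines are copied from the post above; the pipeline itself is a sketch of mine, not from the post:

```shell
#!/bin/sh
# Sketch: extract "md: diskN read error, sector=S" lines from a syslog
# and sort them by sector, so two disks erroring on the same sector at
# the same moment (a cable/controller symptom) become easy to spot.
cat > /tmp/syslog.sample <<'EOF'
Oct 20 20:13:27 Tower2 kernel: md: disk0 read error, sector=4580201424
Oct 20 20:13:27 Tower2 kernel: md: disk29 read error, sector=4580201424
Oct 20 20:13:27 Tower2 kernel: md: disk0 read error, sector=4580201432
Oct 20 20:13:27 Tower2 kernel: md: disk29 read error, sector=4580201432
EOF

sed -n 's/.*md: \(disk[0-9]*\) read error, sector=\([0-9]*\).*/\2 \1/p' \
    /tmp/syslog.sample | sort -n
```

With the sample above, each sector number appears twice, once for disk0 and once for disk29, which is what points at the shared cable rather than the drives themselves.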
  11. I think you are overthinking this! You can always specify a specific disk for the vdisk of your VMs. That way you bypass the User Share system and have absolute control over where the vdisk is placed.
    1 point
  12. Pre-clearing is never a requirement - it is only worth doing if you want to run a confidence check on a drive before adding it to an already existing parity-protected array. The OP's message sounds like a new array where the disks have already been tested. If that is true, then the easiest thing is to just create the array and let unRAID format the drives and build parity. Pre-clearing the drives in such a scenario adds no value.
    1 point
  13. The title says it all. Auto-update docker containers.
    1 point