Leaderboard

Popular Content

Showing content with the highest reputation on 07/19/17 in all areas

  1. Would it be possible to add a module for the USB Attached SCSI (CONFIG_USB_UAS) protocol? This is required to support a card like the StarTech PEXUSB3S44V. There are a number of users trying to get this card working. The unRAID kernel enumerates the card but it does not actually work. This card has 4 independent USB3 controllers, and it looks as if you could pass each controller to a different virtual machine. UASP Information
     On this thread @Siberys details his issue with the StarTech PEXUSB3S44V USB3 card, which has 4 separate USB3 controllers and needs UASP support to function:
     08:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02) (A)
     09:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02) (B)
     0a:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02) (C)
     0b:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller [1912:0015] (rev 02) (D)
     /sys/kernel/iommu_groups/42/devices/0000:08:00.0
     /sys/kernel/iommu_groups/43/devices/0000:09:00.0
     /sys/kernel/iommu_groups/44/devices/0000:0a:00.0
     /sys/kernel/iommu_groups/45/devices/0000:0b:00.0
     Post 1 Post 2
     UPDATE: I have built an unRAID 6.4-rc6 test kernel with USB Attached SCSI enabled and have a card to test with on the way. I will hopefully have an update after July 1. Thanks
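A quick way to check whether a given kernel already has this enabled is to look at the kernel config. A sketch (reading /proc/config.gz only works when the kernel was built with CONFIG_IKCONFIG_PROC, which a particular unRAID build may or may not expose):

```shell
# Sketch: check whether the running kernel was built with USB Attached SCSI.
# /proc/config.gz is only present when the kernel has CONFIG_IKCONFIG_PROC.
uas=$(zcat /proc/config.gz 2>/dev/null | grep '^CONFIG_USB_UAS=' || true)
echo "${uas:-CONFIG_USB_UAS not found (or /proc/config.gz unavailable)}"

# If UAS is built as a module, it should be loadable:
# modprobe uas && lsmod | grep -w uas
```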
    1 point
  2. Hi, guys. This is a series of 3 videos about tuning the unRAID server. It is a guide for the server as a whole, but it has a lot of information for VMs, so I thought this forum section was the best place to post it. Some of the topics are:
     - CPU governor and enabling turbo boost
     - About vCPUs and hyperthreading
     - How VMs and Docker containers can affect each other's performance
     - Pinning cores to Docker containers
     - Using the same container with different profiles
     - Allocating resources to Docker containers
     - Decreasing latency in VMs
     - Using emulatorpin
     - Isolating CPU cores
     - Setting extra profiles in syslinux for isolated cores
     - Checking whether cores have been correctly isolated
     - Disabling hyperthreading
     - Having unRAID manage vCPUs as opposed to vCPU pinning
     Hope these videos are interesting. Part 1 Part 2 Part 3
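For the isolation topics in the list, the moving parts look roughly like this. A sketch only: the core numbers and the VM name "Windows10" are placeholders, not recommendations from the videos:

```shell
# syslinux.cfg: isolate cores 2,3 and their hyperthread siblings 6,7, e.g.
#   append isolcpus=2,3,6,7 initrd=/bzroot

# After a reboot, confirm which cores the kernel actually isolated:
isolated=$(cat /sys/devices/system/cpu/isolated 2>/dev/null)
echo "isolated cores: ${isolated:-none}"

# Pin the QEMU emulator threads of a running VM onto the non-isolated cores:
# virsh emulatorpin Windows10 0-1 --live
```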
    1 point
  3. What it'd do: maintain a catalog of all the content on the server - files, folders, shares, etc. The catalog would be built on initial install. After that it would hook into the unRAID filesystem and get updated automatically on any file modification/access, without needing to run a separate script on a schedule. Thus it'd be like updating parity in realtime vs doing it on a schedule. Features it'd provide to the user:
     - instant search of any file/folder on the server: would list the locations (disk, share) where it's located, without having to search
     - if a disk goes offline, can generate a report of all missing content and all affected shares/folders
     - various stats such as most used, age, size map, distribution
     - possibly a realtime view of shares/disks with file access lighting up sections
     The key to all this is automatic updates of the underlying catalog by hooking into the file system, which I believe is shfs. IMO this would be a very powerful addon. I don't know if APIs exist to enable this, I've never written a plugin, but am interested.
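shfs doesn't expose public hooks, but a userspace approximation is possible: build the initial catalog with find, then keep it in sync with inotify. A minimal sketch, where build_catalog and the /tmp/demo_share path are illustrative assumptions (the realtime part needs inotify-tools installed, and inotify may miss events that bypass the watched mount):

```shell
# Build the initial catalog as "path<TAB>size<TAB>mtime" lines (GNU find).
build_catalog() {
    find "$1" -type f -printf '%p\t%s\t%T@\n' 2>/dev/null | sort
}
build_catalog /tmp/demo_share   # stand-in for /mnt/user

# Realtime updates (blocks forever, so run it in the background):
# inotifywait -m -r -e create,delete,modify,move /mnt/user |
# while read -r dir event name; do
#     ...update the catalog entry for "$dir$name"...
# done
```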
    1 point
  4. You are correct. Something like a preclear (preread) would exercise the disk end to end (from computer to the drive, including HBA, cabling, backplanes, etc.). The drive extended test runs 100% on the drive. If the cabling works well enough to initiate the test, that is all that is required. I expect (though I'm not recommending it) that you could pull the SATA cable and the self-test would continue unaffected. The nice thing about the self-test is that if it fails, all fingers and toes point at the drive. It can't be anything else. And if it runs successfully and there continue to be issues from the OS, you have great confidence that the problem is not with the drive itself, and is something like a cabling issue or HBA compatibility issue.
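The extended self-test described above can be driven with smartmontools; a sketch, with /dev/sdX standing in for the real device node:

```shell
# Start the drive-internal extended self-test (the command returns
# immediately; the test runs on the drive's own controller):
smartctl -t long /dev/sdX

# Check progress and read the result log once it finishes:
smartctl -l selftest /dev/sdX
```

Because the test runs inside the drive, the host can keep using the array while it completes.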
    1 point
  5. Your UPS may be sending those messages when the voltage ranges outside a threshold, even though it can correct through AVR and doesn't need to switch over to batteries. There is a wide range of voltage that can be boosted or bucked to OK levels with a good quality UPS. I'm not sure apcupsd makes a distinction (or can) between a full failure and a line correction event. Your house wiring may be borderline inadequate, and events like the HVAC compressor kicking in may be sagging the voltage enough to make the UPS complain, especially if the neighborhood is under power stress to begin with.
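One way to watch line-voltage sag without waiting for an on-battery event is the LINEV field that apcupsd reports. extract_linev below is a helper written for this sketch, not part of apcupsd:

```shell
# Pull the numeric line voltage out of apcaccess status output.
extract_linev() { awk -F': *' '/^LINEV/ {print $2+0}'; }

# Live usage (requires a running apcupsd):
# apcaccess status | extract_linev

# Example against a captured status line:
echo 'LINEV    : 118.0 Volts' | extract_linev   # prints "118"
```

Logging that value around the times the messages appear would show whether they line up with AVR boost/trim corrections rather than real transfers to battery.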
    1 point
  6. It gets really messy. CouchPotato does a great job at renaming and moving files. What I like about it is that I could drop a file into the watched folder and CP would grab it, rename it, and move it with no issues. Radarr, on the other hand, does not: if it's not in your list it will not touch it. You have to add it to the list and then import it manually. I could be wrong or have it set up wrong, but that's what I've experienced. Also, Radarr is still in development status.
    1 point
  7. Luckily I'm asking this one in advance - my unRAID is currently out of commission due to a near-instant failure of a drive during my trial run, haha. I'll definitely check out that drive cage, thanks!
    1 point
  8. Welcome to the forums! You've already demonstrated a valuable skill that many new users lack - asking a question before doing something you don't fully understand. We are all influenced by our experiences and preconceptions, and what we read is colored by those, leading us to not fully grasp everything on the first read. Although tinkering and trying things is part and parcel of learning unRAID, operations that affect array integrity should be confirmed until you fully understand how parity works. When a disk is being simulated (I prefer that word, but it has the same meaning as emulated), you are in a precarious position. Another disk dropping from the array will cause the simulation to end and your data to be potentially lost. And contrary to what you might think, drives drop for lots of reasons unrelated to actual drive failure. Putting Humpty Dumpty back together so you can simulate again can get very tricky if a cable is loose and another drive gets kicked. Loose or slightly skewed cables are incredibly common! I would advise that if you need to enter the case, you be extremely careful that all connections are secure! Consider investing in drive cages (look at the CSE-M35T-1B). Enjoy your array, and keep asking questions, and you'll avoid your own Waterloo! :)
    1 point
  9. I just added port 2203 and now it works. By the looks of it you have to have two ports mapped: 2202 and 2203. I noticed this in the syslog also:
     Fix Common Problems: Error: Docker Application ubooquity, Container Port 2203 not found or changed on installed application
     https://hub.docker.com/r/linuxserver/ubooquity/ tells you:
     -p 2202 - the library port
     -p 2203 - the admin port
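A docker run sketch with both ports mapped; the host-side appdata path is a typical unRAID location, substitute your own:

```shell
# Map both the library port (2202) and the admin port (2203)
# of the linuxserver/ubooquity image.
docker run -d --name=ubooquity \
  -p 2202:2202 \
  -p 2203:2203 \
  -v /mnt/user/appdata/ubooquity:/config \
  linuxserver/ubooquity
```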
    1 point
  10. Yes. The 'emulated' disk will be treated just like the original physical disk as far as accessing its data is concerned.
    1 point
  11. If you remove a single disk from a parity protected array then the data is still there, as unRAID will emulate the missing/damaged disk using the combination of parity plus the other data drives. At this point you are no longer protected against data loss if another drive fails. To get back to a protected state you add a replacement disk and unRAID rebuilds the contents of the ‘emulated’ drive onto the replacement physical drive. When that completes you are back in a protected state. Note that this is different from removing a disk drive that you do not intend to replace. If you want to reduce the number of drives then it is your responsibility to handle getting the data off the drive that is to be removed; there is no automated built-in capability in unRAID for this task, although plugins such as unBALANCE can help.
    1 point
  12. Absolutely not! If you remove a disk from the array (do a new config and recompute parity excluding one of your disks), any data on the disk you removed will remain on the disk you removed. It will not be redistributed onto other array disks. After it is removed, you could mount it outside the array and copy the data back onto the array, but 1 TB will take a while. And until it is complete, the data will not be protected. I think you might want to look into the unBALANCE plugin. That would redistribute the data from the disk while it remains in the array. Afterwards you could remove the disk and all of the data would remain on the array. Keep checking in before you do anything significant to confirm you understand the instructions and outcomes. It is easy to make mistakes, and we're happy to help.
    1 point
  13. Good catch. Hard for the UPS to calculate things properly if the load isn't being sensed. Maybe a defective UPS? (Or you were right in your first guess about the server not being on a protected circuit.) The dummy-load lightbulb test is looking more and more valuable here, assuming things are plugged in correctly.
    1 point
  14. I looked at your UPS settings page (in your first post) and noticed that the "UPS Load" and "UPS Load %" readings were both zero. Are you sure that you have plugged the server into a battery protected outlet on the UPS? (I know this is like a customer service representative asking if a dead computer is plugged into the wall outlet, but there is always a reason for this sort of question!)
    1 point
  15. This will back up your VM configuration. Any files on the drive image of the VM will have to be backed up separately, treating the VM as a separate machine.
    1 point
  16. Right, the permission issue is actually due to some missing libraries. The container is using Slackware as the base OS, after all. I've pushed out the necessary change to correct the missing library issue, so the docker should work again. However, the account linking issue is still there AFAIK, and I'm still looking for a way to consistently replicate the problem so I can fix it.
    1 point
  17. Every single one of those errors means that unRAID was unable to read a sector from the disk, so what it did was spin up all the other drives and re-write the contents of those sectors based upon the data on the other drives and the parity drive. Reset it if you choose, but if/when you see another one, post up the diagnostics for people to see and properly advise. What this means is that as far as the drive is concerned, it is OK. But SMART tests do not test the transfer of data to/from the server itself, so the implication is that this is the failure point. Quite probably poor cabling connections.
    1 point
  18. @shEiD @johnnie.black @itimpi Oh how we love to be comforted! While it is true that the mathematics show you are protected from two failures, drives don't study mathematics. And they don't die like light bulbs. In the throes of death they can do nasty things, and those nasty things can pollute parity. And if it pollutes one parity, it pollutes both parities. So even saying single parity protects against one failure is not always so, but let's say it protects against 98% of them. Now the chances of a second failure are astronomically smaller than a single failure. And dual parity does not help in the 2% of cases where even a single failure isn't protected, and that 2% may dwarf the percentage of failures dual parity is going to rescue. I did an analysis a while back - the chances of dual parity being needed in a 20 disk array are about the same as the risk of a house fire. And that was with some very pessimistic failure rate estimates.
     Now RAID5 is different. First, RAID5 is much faster to kick a drive that does not respond within a tight time tolerance than unRAID (which only kicks a disk on a write failure). And second, if RAID5 kicks a second drive, ALL THE DATA in the entire array is lost, with no recovery possible except from backups. And it takes the array offline - a major issue for commercial enterprises that depend on these arrays to support their businesses. With unRAID the exposure is less, only affecting the two disks that "failed", and still leaving open other disk recovery methods that are very effective in practice. And typically our media servers going down is not a huge economic event.
     Bottom line - you need backups. Dual parity is not a substitute. Don't be sucked into the myth that you are fully protected from any two disk failures. Or that you can use the arguments for RAID6 over RAID5 to decide if dual parity is warranted in your array. A single disk backup of the size of a dual parity disk might provide far more value than using it for dual parity! And dual parity only starts to make sense with arrays containing disk counts in the high teens or twenties. (@ssdindex)
    1 point
  19. Here is a way to do it, assuming you already have the linuxserver container running on your server:
     1. Go to: \\Tower\flash\config\plugins\dockerMan\templates-user
     2. Copy my-plex.xml to my-k-plex.xml
     3. Edit the new xml file, change lines three and four as below, and save:
     <Name>k-plex</Name>
     <Repository>kmcgill88/k-plex</Repository>
     4. Stop your current linuxserver docker container and add a new container using the k-plex template that you just created.
     5. After the container is running, follow the instructions from https://hub.docker.com/r/kmcgill88/k-plex/
     6. Done. Enjoy and send your thanks to linuxserver, kmcgill88 and erikkaashoek.
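Steps 1-3 can also be done from a shell on the server itself (the \\Tower\flash share is mounted at /boot locally). retarget_template is a helper written for this sketch, assuming GNU sed and one tag per line in the template:

```shell
# Copy a dockerMan template and point it at a different image.
retarget_template() {   # $1=src xml  $2=dst xml  $3=new name  $4=new repository
    cp "$1" "$2"
    sed -i -e "s|<Name>.*</Name>|<Name>$3</Name>|" \
           -e "s|<Repository>.*</Repository>|<Repository>$4</Repository>|" "$2"
}

# retarget_template /boot/config/plugins/dockerMan/templates-user/my-plex.xml \
#                   /boot/config/plugins/dockerMan/templates-user/my-k-plex.xml \
#                   k-plex kmcgill88/k-plex
```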
    1 point
  20. Here's my list of instructions.... Use them at your own risk..... If upgrading to v12 please see here:
     ## Turn on maintenance mode
     docker exec -it nextcloud occ maintenance:mode --on
     ## Backup current nextcloud install
     docker exec -it nextcloud mv /config/www/nextcloud /config/www/nextcloud-backup
     ## Grab newest nextcloud release and unpack it
     docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest.tar.bz2 -P /config
     docker exec -it nextcloud tar -xvf /config/latest.tar.bz2 -C /config/www
     ## Copy across old config.php from backup
     docker exec -it nextcloud cp /config/www/nextcloud-backup/config/config.php /config/www/nextcloud/config/config.php
     ## Now restart docker container
     docker restart nextcloud
     ## Perform upgrade
     docker exec -it nextcloud occ upgrade
     ## Turn off maintenance mode
     docker exec -it nextcloud occ maintenance:mode --off
     ## Now restart docker container
     docker restart nextcloud
     Once all is confirmed as working:
     ## Remove backup folder
     docker exec -it nextcloud rm -rf /config/www/nextcloud-backup
     ## Remove Nextcloud tar file
     docker exec -it nextcloud rm /config/latest.tar.bz2
    1 point
  21. SUCCESS!!! These are all the devices associated with the High Point RocketU 1144C 4-port USB3 PCIe card, and they were all in the same IOMMU group:
     0b:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA [10b5:8609] (rev ba)
     0b:00.1 System peripheral [0880]: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA [10b5:8609] (rev ba)
     0c:01.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA [10b5:8609] (rev ba)
     0c:05.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA [10b5:8609] (rev ba)
     0c:07.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA [10b5:8609] (rev ba)
     0c:09.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8609 8-lane, 8-Port PCI Express Gen 2 (5.0 GT/s) Switch with DMA [10b5:8609] (rev ba)
     0d:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller [1b21:1142]
     0e:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller [1b21:1142]
     0f:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller [1b21:1142]
     10:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1042A USB 3.0 Host Controller [1b21:1142]
     I applied the pcie_acs_override option, but all 4 USB controllers were always in the same IOMMU group. After googling a ton and reading many comments by Alex Williamson, I decided to add the following to the pcie_acs_override option in the syslinux.cfg file:
     append pcie_acs_override=downstream,id:10b5:8609 initrd=/bzroot
     I can't seem to highlight the part I added, but I added the "id:10b5:8609". I'm not sure if this PCIe switch is on the motherboard or if it's on the 1144C card.
By specifically adding this device to the pcie_acs_override option it split all of the USB controllers on the card into their own IOMMU group. I now have a single PCIe USB3 controller card which passes through individual USB3 ports to 4 different VMs.
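To confirm the override took effect, the grouping can be listed straight from sysfs. A sketch: list_iommu_groups is a helper written here, and its optional argument exists only so the logic can be exercised against a fixture directory:

```shell
# Print every PCI device with the IOMMU group it landed in.
list_iommu_groups() {
    local root=${1:-/sys/kernel/iommu_groups}
    local d g
    for d in "$root"/*/devices/*; do
        [ -e "$d" ] || continue
        g=${d%/devices/*}; g=${g##*/}
        echo "group $g: ${d##*/}"
    done | sort -n -k2
}
list_iommu_groups
```

After the id:10b5:8609 override, each of the four USB controllers should appear under its own group number.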
    1 point