Leaderboard

Popular Content

Showing content with the highest reputation on 06/21/17 in all areas

  1. Your case should have come with one side filter that covered the two 120mm fans and one on top for the power supply intake, similar to the link you posted. Silverstone probably has replacements or might send you some if you didn't get any. That power supply is full size; you would need an SFX one. This is the one I got: https://www.amazon.com/gp/aw/d/B01CGI5M24/ref=mp_s_a_1_1?ie=UTF8&qid=1498069747&sr=8-1&pi=AC_SX236_SY340_FMwebp_QL65&keywords=corsair+sfx+600&dpPl=1&dpID=51KMMloK5gL&ref=plSrch But I would wait and see if your board still keeps shutting off. I had that same power supply with more drives and it worked fine. It's still working in my backup server. Maybe it was just a power outage/flicker. If you don't have a UPS, that would be a better investment. If it's just freezing, you can log in to the IPMI and load up the console to see if there are any errors, or hook up a monitor.
    1 point
  2. Sync is not a backup. If corruption or accidental deletion occurs, it will be synced to the remote location. If you want the files to be identically usable at both locations, that is sync. If you want to be able to recover from corruption or deletion, that is a backup, but typically the files will be difficult or impossible to directly access at the second location to protect them. For sync, I'd probably use VPN + rsync or some variation. For backup, crashplan is free for site to site and works fairly well.
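A minimal sketch of the VPN + rsync approach mentioned above (paths are hypothetical; over a VPN the destination would be a user@host:/path target reachable through the tunnel, but the mirroring behaviour is the same shown here between two local directories):

```shell
# sync_share SRC DST: mirror SRC into DST.
# --delete makes DST an exact copy, so deletions (and corruption) at the
# source propagate too - which is exactly why sync is not a backup.
sync_share() {
  rsync -a --delete "$1"/ "$2"/
}

# Over a VPN it would look more like (hypothetical host and paths):
#   rsync -avz --delete /mnt/user/data/ backup@10.8.0.2:/mnt/user/data/
```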
    1 point
  3. That's what I have blanked... the first one is an (optional) source network IP. I don't need that in any of the other NAT rules I have that are working. I just blanked it out of habit... lol
    1 point
  4. Yes, copying while parity is syncing is a bad idea, as it will slow down both operations and take a longer total time.
    1 point
  5. You can make unRAID forget all assignments by going to Tools and clicking on New Config, but this won't delete any data. If you want to do that, the easiest way is to change to a different filesystem, format, change back to the intended filesystem and format one more time.
    1 point
  6. No. That's good to rule out any other issue, so if the same disk fails again it's probably bad.
    1 point
  7. Give this a try: Go to unRAID --> Settings --> Community Applications --> General Settings. Find the option labelled "Enable additional search results from dockerHub?" and change the drop down to "Yes". Click "Apply", then click "Done". Go to unRAID --> Apps and locate the search field (top right hand corner of the Apps page). Type "ib docker" (no quotes) and click the search button. You'll be shown a Community Applications (CA) search page which states that there was "no matching content found". Inspect the search page closely, however, and you'll see a docker icon/logo in the middle of the page, with a link underneath labelled "Get more results from DockerHub". Click the link and a short search will then be conducted on your behalf, resulting in a list of potential apps from the DockerHub. Select your preferred version, click "Add" and follow the prompts (same as adding a docker from the CA "Apps" section of unRAID). Please note that Ryan Kennedy of ryankennedy.io has not published this app on DockerHub (his only public repository is DeepDream - last updated 2 years ago) so the specific code you linked to will not be shown in the search results. However, if I understand your intent correctly, I believe that a search conducted via the Community Applications "Apps" page will list several dockers for the same "Interactive Brokers" application, essentially assisting you to achieve the same outcome (getting the IB app running in a docker on unRAID). A casual inspection indicates that the 5th, or perhaps 8th, app in the search results might be worth a quick look. Hope that helps
    1 point
  8. There are a lot of UDMA_CRC errors for such a new disk; the first thing to try is replacing that SATA cable and re-syncing parity.
    1 point
  9. Update the firmware of the Marvell controller to see if it helps; they are a known problem on those boards.
    1 point
  10. I can't think of a way that a bad drive would cause those symptoms. We need more info about your hardware, and the condition of things when you noticed the server was offline. Was the tower completely powered down? If not, was there an error message on the screen? My immediate gut reaction is a bad power supply, but without more info that's a pretty useless guess.
    1 point
  11. HDHR does not save anything anywhere. It rebroadcasts the OTA TV signal over the network. That's all it does: converts the OTA signal to a signal/stream on your network that any device can access. Plex or whatever PVR client you use can access that network stream and do things with it, like display it live or record it, etc. The HDHR Connect's broadcast stream is MPEG2 (standard DVD format). The Extend broadcasts in h264, which is a newer compression method that takes significantly less space and bandwidth. So the transcode I was referring to happens while the Extend is converting the OTA TV signal to the network stream, before it reaches Plex or another PVR client.
    1 point
  12. Hey! So I was having the same issue. I went back and tried again with newer versions, no luck. I then went and rewatched the tutorial video and realized that when installing, you have to click "custom" or whatever, and choose which features to install. When you do that and select what the tutorial shows, it installs just fine!
    1 point
  13. @Catsk To install on a Synology DS you cannot use the command line (or at least I have never got it to work on an image). You want to enter the parameters via the Docker GUI the first time you launch the container. You can then save your settings so you don't have to redo them when you update the container. In the General Settings you want to make sure you check the box "Execute Container Using Higher Privilege". Do this last or it has a habit of unchecking itself. It will give a warning when you do. Enter the parameters via the GUI: Volume, Port Settings, and Environment. Nothing goes in the "Links" section. Because of an issue with Synology on current firmware (talked about in this thread, but I don't have the link) you also need to set up a Task to run as "root" on "Bootup" with the commands shown in the screen shot (Run Commands). Otherwise the docker will not launch correctly. Once you have everything running well, export your configuration through the GUI and save it to disk somewhere for when you need to download a new version/upgrade. That way, you simply delete the Container and Image, redownload the new Image and, before you launch, import the saved settings for the Container. This will work as long as you use the same basic image each time (i.e., "Latest"). That way you only have to do the GUI deal one time (which is a pain). I have this running on Synology with no major issues. You do have to watch any patches that come out for Synology, since sometimes they break things, but other than some weird aspects of the way Synology did things, you can get most any Docker image working easily with the version above.
    1 point
  14. If it's not on the parity disk, then don't worry about it. And, with most BIOSes, so long as they see an HPA partition already existing, they won't try to create another one.
    1 point
  15. For the money, I'd tend to go with the 9201-8i card on eBay (~$45 shipped). It will do the same thing for nearly every use case. But I will say that controllers tend to be the longest-living assets in my unRAID server. I still have a couple of Adaptec 1430SAs that still work quite well. So if you are betting on drives continuing to get faster, maybe the 9207 is worth a small premium. It's up to you. Here are some facts to guide the decision depending on your use case.
The 9201 and 9207 are basically the same thing, except PCIe 3.0 vs 2.0. I think the card might have been better packaged as an x4 card. Or better yet, been sold as a 16 drive card in an x8 slot. But maybe if using this with a SAS expander, the x8 bandwidth makes sense. If you put it in a PCIe 3.0 x8 slot, you'd basically have 1000 MB/sec to each drive. That is faster than the 6 Gb/sec SATA spec for the drives, so you'd really only be getting 600 MB/sec to each drive. With the 2.0 card, you'd get 500 MB/sec, enough to run 8 SSDs at near full speed (full speed is 550 MB/sec) in parallel. (Who runs 8 SSDs at full speed?) With the PCIe 3.0 bandwidth, I'd like the 12 Gb/sec SAS capability for future proofing.
If you put it in a PCIe 3.0 x4 slot, you'd have half that bandwidth (500 MB/sec per drive), enough to run 8 SSD drives at nearly full speed in parallel. Not bad. And for spinners, this is way overkill. At PCIe 2.0, you'd be at half that (250 MB/sec), constrained for a full complement of SSDs running in parallel, but plenty fast for 8 spinners or 7 spinners with 1 SSD. This is the same as the speed of the PCIe 2.0 card in an x8 slot, which is how many people run the 9201-8i.
Interestingly, if you put it into a PCIe 3.0 x1 slot, you'd be able to run 6 drives at a very respectable 165 MB/sec, or 8 at 125 MB/sec. That is actually very good for an x1 slot, where with a typical PCIe 1.x card you'd be limited to 1 drive at 250 MB/sec or 2 drives at 125 MB/sec. Although I don't know too many x1 slots that are PCIe 3.0 or would hold an x8 card without melting the back of the slot! The PCIe 2.0 card would get you 3-4 drives, considerably fewer than 6 to 8. This card really shines in a fast x1 slot!
I'd have few reservations about ordering the $99 card if that's what I wanted. I have to believe Newegg would hold them to a high standard for customer satisfaction. And if you put it on a credit card, you'd have all the power needed to get your money back if they didn't work properly. I bought a new 9201-16i from Hong Kong for a decent price on eBay, and it works great!
    1 point
  16. As said before, hdhomerun is super easy to set up. Connect it to ethernet and then in plex go to add a dvr, it scans the network and finds the hdhomerun. With regards to hdhr connect and extend, the connect broadcasts the stream pretty much as is in mpeg2 format. Plex *can* transcode that if you set it to (marked experimental but works fine for me). Extend takes the mpeg2 stream and transcodes it to h264 before broadcasting over the network. If plex is running on a beefy server and you don't mind it transcoding, get the connect as it's cheaper. That's what I did. I haven't tried live TV because it's currently only supported on ios and android tv (not regular Android) clients and I don't have either of those.
    1 point
  17. The unsupported part is what you are watching Live TV on. Only iOS and Android TV are supported for Live TV viewing right now. This is different from which HDHR models are supported. - https://support.plex.tv/hc/en-us/articles/115007689648-Watching-Live-TV
    1 point
  18. And note that only a limited number of devices and clients are currently supported. I have HDHR and can record OTA fine, but Live TV doesn't work for me because I don't have a supported client.
    1 point
  19. You have a VM running on a corporate server? You certainly do. This is not the way to save four hundred dollars. In fact, you should not have shared the Flash Drive as a Public share. Shut the network down completely and start googling from a single trusted computer with all sharing turned off. Find out the names of the processes that do this encrypting and start checking every computer in the facility until you find the one(s) with that process running. You may get lucky and find a key that will unlock your files. (Apparently, some of these guys were lazy and reused the keys...) Most users have found that LimeTech treats its customers very fairly. You might have to present your case to them, but they have treated most folks very, very well and on a timely basis. I suspect some employee/officer got 'social engineered' and turned this beast loose. You need to re-instruct folks about security and look at what you are doing. (I personally think that running a VM (or even a Docker with outside access) on a corporate server is not an ideal way to save a few bucks. The less stuff that is running on your servers, the easier it will be to secure them.) You also need to determine who needs write privileges versus read-only access and implement that. And stop sharing the Flash Drive...
    1 point
  20. I can see one share seems to have a split level of 0. The unRAID manual says: c) 0 (zero) = everything for that User Share is kept on one disk, but system chooses which disk. Later you can add more disk(s) to the share by explicitly creating a share name folder on those disk(s). This is the share called "process". Suggest checking that one. Whilst I'm not an expert in unRAID, it looks to me like anything written to the share "process" will always get written to disk 1. The syslog errors indicate multiple failures writing files into the "process" share on disk 1 due to lack of space.
    1 point
  21. Cannot see off-hand why files are going to the wrong disk. However, you will want to set the Minimum Free Space to something larger than zero. A good value is something like twice the size of the largest file you want to copy. That should at least stop the case of unRAID starting to write a file to a disk and then finding it runs out of space for the file during the copy.
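The "twice the largest file" rule of thumb can be sketched as a quick shell helper (the share path you pass in is whatever fits your system):

```shell
# suggest_min_free DIR: print twice the size (in bytes) of the largest
# file under DIR - a reasonable Minimum Free Space value for that share.
# Uses GNU find's -printf, so this assumes a Linux box such as the unRAID host.
suggest_min_free() {
  local largest
  largest=$(find "$1" -type f -printf '%s\n' | sort -n | tail -1)
  echo $(( largest * 2 ))
}
```

For example, `suggest_min_free /mnt/user/Movies` prints a byte count you could round up when filling in the share's Minimum Free Space field.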
    1 point
  22. https://wiki.lime-technology.com/Files_on_v6_boot_drive
    1 point
  23. There are two things at play here - one is the PCIe version number (1.0/1.1, 2.0, 3.0) and the other is the number of "lanes".
PCIe 1.x is 250 MB/sec per lane
PCIe 2.0 is 500 MB/sec per lane
PCIe 3.0 is 1000 MB/sec per lane
Each card's maximum number of lanes is determined by the card's physical design (i.e., literally the length of the PCIe bus connector). A 1 lane card is the shortest, and a 16 lane card is the longest. The motherboard and the card will negotiate a specification based on the highest spec both the slot and the card support. So a PCIe 2.0 card in a PCIe 3.0 slot will run at PCIe 2.0 speed. Similarly, they will agree on the number of lanes based on the "shortest" one - the card or the slot. Most disk controller cards are either 1 lane, 4 lane, or 8 lane, often referred to as x1, x4, x8. If you put an x4 card in an x8 slot, you will only have 4 usable lanes. And if you put an x8 card in an x4 slot, you will also have 4 usable lanes. Putting an x8 card into an x4 slot is not always physically possible, because the x4 slot is too short. But some people have literally melted away the back end of the slot to accommodate the wider card, which is reported to work just fine. Making things just a little more confusing, some motherboards have an x8 physical slot that is actually just wired for x4. So you can put a longer card in there with no melting, but it only uses 4 of the lanes.
If you have, say, a PCIe 1.1 card with 1 lane, and it supports 4 drives, then your performance per drive would be determined by dividing the 250 MB/sec bandwidth by 4 = ~62.5 MB/sec max speed if all four drives are running in parallel. Since many drives are capable of 2-3x that speed, you would be limited by the card. If the slot were a PCIe 2.0 slot, you'd have 500 MB/sec for the 4 drives, meaning 125 MB/sec each. While drives can run faster on their outer cylinders, this would likely be acceptable speed, with only minor impact on parity check speeds. With a PCIe 3.0 slot, you'd have 250 MB/sec per drive for each of the 4 drives. More than fast enough for any spinner, but maybe not quite fast enough for 4 fast SSDs all running at full speed at the same time.
You might think of each step in PCIe spec as equivalent to doubling the number of lanes from a performance perspective. So a PCIe 1.1 x8 card would be roughly the same speed as a PCIe 2.0 x4 card. Hope that background allows you to answer most any question about controller speed. I should note that PCIe 1.x and 2.0 controller cards are the most popular, and as I said, x1, x4 and x8 are the most common widths.
If you are looking at a 16 port card, and looking at the speed necessary to support 16 drives on a single controller:
PCIe 1.1 at x4 = 1 GB/sec / 16 = 62.5 MB/sec - significant performance impact with all drives driven
PCIe 1.1 at x8 / PCIe 2.0 at x4 = 2 GB/sec / 16 = 125 MB/sec - some performance impact with all drives driven
PCIe 2.0 at x8 / PCIe 3.0 at x4 = 4 GB/sec / 16 = 250 MB/sec - no performance limitations for spinning disks (at least today)
PCIe 3.0 at x8 = 8 GB/sec / 16 = 500 MB/sec per drive - no performance limitations even for 16 SSDs
The speeds listed are approximate, but close enough for government work. Keep in mind, it is very uncommon to drive all drives at max speed simultaneously. But the unRAID parity check does exactly that, and parity check speed is a common measure here. If you are willing to sacrifice parity check speed, a slower net controller speed will likely not hold you back for most non-parity check operations. For 16 drives on one controller, I'd recommend a PCIe 2.0 slot at x8 - for example, an LSI SAS 9201-16i. Here is a pretty decent article on PCIe if you need more info: http://www.tested.com/tech/457440-theoretical-vs-actual-bandwidth-pci-express-and-thunderbolt/
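All of the arithmetic above boils down to lanes times per-lane speed divided by drive count, which can be sketched as:

```shell
# per_drive_mb LANES MB_PER_LANE DRIVES: approximate max MB/sec per drive
# when all drives run in parallel. Per-lane speed is 250/500/1000 MB/sec
# for PCIe 1.x/2.0/3.0. (Integer maths, so 62.5 shows as 62.)
per_drive_mb() {
  echo $(( $1 * $2 / $3 ))
}
```

For instance, `per_drive_mb 8 500 16` gives 250, matching the PCIe 2.0 x8 figure for a 16 drive controller.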
    1 point