bjp999

Moderators
  • Content count

    6452
  • Joined

  • Last visited

Community Reputation

81 Good

About bjp999

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    Virginia
  • Personal Text
    ASRock E3C224-4L/A+, SM C2SEE/B, Asus P5B VM DO/C
  1. I very much disagree with this. While it's true that unRAID has redundancy to rebuild a single failed disk, buying cheap drives and ignoring reliability is not a wise course. An array full of low-reliability drives is a huge headache: SMART issues pop up frequently, requiring replacements, warranty actions, and ultimately higher cost in the long run. That said, there are not that many truly low-reliability drives to avoid. I do agree that buying economical drives with reasonably good reliability ratings and good experiences on the forum is a very reasonable option. Maybe this is what curious meant. HGST is at the top of the heap in terms of reliability, and commands a deserved premium. But the helium feature and its premium seem unnecessary on a media array. The warranty is excellent. The HGST 8T NAS drive is a better idea IMO and cheaper. And this seems to be what toe went for. It is more expensive than some, but they are excellent drives with a longer warranty. With a couple of out-of-warranty failures of value drives in an array, you'd probably wind up with similar cost in the long run. BTW, the WD Red 8T is a helium-filled drive (not that it matters) and was recently available for $180, but with a 2 yr warranty, extracted from an external. Not as good as the HGST IMO, but taking cost into consideration, something to consider.
  2. Ha ha. I used a Timex Sinclair. As you entered a program, the screen would eventually have fewer and fewer lines, as it was using that memory for storing the program!
  3. I was playing around with Medusa and copied a show over from SickBeard. Figured I would add it and see how Medusa works. The show had 3 seasons. But strangely, when it added, the first two seasons added fine and the episodes were recognized. But for the third season, Medusa decided it wanted to re-download all of the episodes rather than catalog the files that were there. I canceled the downloads and moved it back for now. Any idea what happened or how to avoid this behavior? The episodes, as far as I can tell, were in the quality settings I had selected. But regardless, I'd expect Medusa to merely re-catalog when adding an existing show, not go searching for new versions to download. Not sure how to get support for Medusa directly, and thought I would check here to see if any unRAIDers could help.
  4. The earlier versions weren't quite as whiz bang. And if you installed 4x 5in3s, all those blue fans on the front would be gone. But I get you. My Sharkoon:
  5. I have my Windows userid / password entered into the unRAID server, and it passes it through and no need to log on. Are you using any special characters in the userid or password? Almost seems like the Windows 7 version of Samba is not compatible with the unRAID version. Maybe @johnnie.black would have an idea.
  6. I am not sure how to get a GPU to do transcoding. Maybe you can share that trick with me. My current server has all slots consumed with controller cards, but I do have an RES2SV240 put away for a rainy day that I could put into service to maybe free up an x4 slot.
My experience with drives: HGST (previously Hitachi) are great. Last forever. Tanks. Highest recommendation. I'd use them all day and all night if they were a little cheaper, but they are worth the premium in reliability IMO. Seagate: long ago, best of the best. Had an 80MB ST4096. What a tank it was! Today, not a good reputation. Recent class action lawsuit due to high failure rates; one model was at something like a 90% failure rate. I think the lawsuit may have jolted them a little to improve quality. I have invested in some 8T Archives that work well so far. I will only buy cheap Seagates; if I'm going to spend more, I'll go with the HGST. WD: people here like them. I don't particularly. Their incredibly high load cycle counts always annoyed me. The 1T Green drives were awesome, though that was a long time ago. The 2T Greens were not nearly as good, and then I switched to Hitachi and stayed there. Hitachi didn't draw a premium price then. I went from 2T to 6T with almost all Hitachi / HGST. Did recently buy 2 8T from WD. Good price. So far so good. Like Seagate, I'd only buy cheap WDs, expecting higher failure rates. HGST is $260/8T; Seagate/WD are $180/8T. I actually consider each a good deal, bought some of each, and they will likely cost me about the same in the long run.
Supermicro CSE-M35T-1B cages can be had for ~$50-$60 on eBay. Kind of expensive for 4, but they are awesome and last forever. I assume the same cages are in the Supermicro rack-mounted case. There was a flurry a month or two ago of people buying those Supermicro rack-mounted cases on eBay, with upgraded backplanes. Not sure why they dried up, if you can't find 'em. Is it happy hour yet? Cheers!
  7. Extremely unlikely. Can you access via the IP address of the server? Go into Windows Explorer (not the browser), enter \\ip_address (e.g., \\192.168.1.50), and hit Enter. If you do that, what happens? Does it prompt for a userid/password? Does it take a long time and say host not found? Does it return relatively quickly but just not list anything? Something else?
  8. Welcome to the forums. You'll find we're a friendly and helpful bunch, but we have our limitations in answering questions like this one. In fact, it is one of the hardest questions we get: what server hardware should I buy? The selections are frequently changing, with motherboards coming and going. Most experienced users here HAVE a server, and unless they are planning an upgrade, are not shopping servers to determine which is the best motherboard, CPU, etc. We might have some ideas based on what other users selected, or mention features we ourselves think are important, but it is very hard to pick out something for someone else, especially when you factor cost into the equation. First YOU need to understand what you want to do with the server (# of VMs and their OSes/purposes, passthrough? gaming? target max disks or capacity, # of transcodes? HEVC transcodes? quiet? anything else?). And then you need to dig in, understand the offerings, understand the pricing, and put together a configuration made up of real products, with links to those products, that seems like it would generally meet your requirements. Users here can then react to your selections based on your wants and needs and give you some useful advice. But the first step is yours. In this thread the only one that listed specific products was @tdallen. And you were not happy with his advice, despite the fact that he spec'ed an excellent unRAID motherboard / CPU combo, very similar to mine in fact, and in or very close to your budget. And then you dissed him. TBH it is very much in line with what I would recommend. Come back with a configuration in line with what you are wanting, and you might get better responses. You also might want to look back through the forums, because we get questions like yours a lot, and you may find some good pointers for things others are considering and the pros / cons.
  9. This has been said so many times. I must have said it a dozen times in the last month. The issue is not PCIe 2.0 vs 3.0. That is not a problem at all. The issue is the Marvell chips and their ability to support hardware passthrough to VMs without corrupting data. Having certain BIOS settings enabled, even without using VMs, can cause problems. And the problems seem to vary by motherboard. I suggest steering clear until this is resolved. My motherboard has a Marvell controller onboard for 2 ports, and I have disabled it in my BIOS. And I have both Supermicro SAS and SAS2 controllers, which are sitting on a shelf awaiting a fix. I have an LSI SAS9201-16i and it works great. I do think it is supported invisibly. But what I think or you think doesn't much matter. What matters is reality. You should do your homework. But even if it is not supported now, it seems it will be supported very soon. I personally would not buy one of these older Xeons, because my primary need is Plex. I don't want my server to run games or anything like that which requires lots of processing muscle. ~10,000 Passmark is plenty of power for me without HEVC transcoding. And I feel that a Kaby Lake at that power level will be my next upgrade. EVERY new unRAID user wants to optimize speed. It is a hard message to explain that you have the tools to make the things that need to be fast, fast, but that much of what unRAID does does not need to be super fast. The speed of your individual disks matters little. Parity is going to slow down the writes. 40-50 MB/sec is about the most you'd expect for writes (see below about turbo writes, which are faster). But you are NOT very frequently waiting for those writes to occur; for most it happens in the background or overnight. And turbo write (see below) gives a sizable bump in performance when needed. Reads are not striped, so they are limited by your individual disk speeds. But much of what our servers do is stream light data streams to viewers.
And even if we want faster speed for a specific need, most of us are limited to gigabit LAN, so whether your disk is able to read at 125 MB/sec or 225 MB/sec, you are still going to be limited. Believe it or not, in practice, the speed of unRAID is very seldom mentioned as a problem by users. The things you really want to be fast are your Dockers and VMs: their OSes / containers and the files being downloaded, uncompressed, repaired, etc. You want that to be fast, and all of it can be configured to happen on the SSD. Copies to the server can be configured to go through the cache and be made as fast as the network allows. And if you want something like a database, you can store that on an SSD in a RAID-1 configuration, so you have very fast media for database accesses, with redundancy. On the occasions when you want writes to the unRAID array to be fast(er), unRAID has a write mode called "Turbo Write" or "Reconstructed Write" that will drive higher performance. This is particularly valuable on the initial array load. In summary: I don't much care whether my disks are 7200 RPM, 5900 RPM, or 5400 RPM. I just want them to be big, reliable, and inexpensive. 8T is the "sweet spot" for now IMO, and 10T is not enough bigger to make it worthwhile IMO. 12T, 16T? I might feel different. But every user makes these decisions. I do have a 500G SSD and use it effectively to drive performance in my time-sensitive use cases. I have a Sharkoon Rebel 12 case. It holds 4 5in3s (20 drives). I have one extra 5in3 that sits on top of the case for 5 extra drives. Works for me. The Sharkoon case is no longer made (I don't think it is, anyway), but might be obtained on eBay. Antec makes a 1200 that I think could be made to support my configuration.
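To put numbers on that gigabit ceiling, here's a quick back-of-the-envelope sketch (the ~94% protocol-efficiency figure is my rough assumption for TCP/IP + SMB overhead, not from the post):

```python
# Why disk read speed above the LAN ceiling doesn't help over gigabit Ethernet.
GIGABIT_BITS_PER_SEC = 1_000_000_000
PROTOCOL_EFFICIENCY = 0.94  # rough TCP/IP + SMB overhead allowance (assumption)

lan_ceiling_mb_s = GIGABIT_BITS_PER_SEC * PROTOCOL_EFFICIENCY / 8 / 1_000_000
print(f"Usable gigabit throughput: ~{lan_ceiling_mb_s:.0f} MB/sec")

for disk_mb_s in (125, 225):
    effective = min(disk_mb_s, lan_ceiling_mb_s)
    print(f"{disk_mb_s} MB/sec disk -> ~{effective:.0f} MB/sec over the LAN")
```

Both the 125 MB/sec and 225 MB/sec disks land at the same ~117 MB/sec over the wire, which is the point: past the network ceiling, faster spindles buy you nothing for LAN transfers.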
  10. I'd suggest steering clear of the Marvell-based controller cards, like the Supermicro SAS card, and even confirming that none of the motherboard ports are based on Marvell chips. Might look at the LSI SAS9201-16i, a non-RAID card that works very well with unRAID. Also, take a good look at the requirements to transcode HEVC. A ton of horsepower may not be enough. Kaby Lake CPUs have hardware features that might be more advantageous at much lower Passmark scores. I have a cheap, slow Braswell Celeron NUC for playback that can handle 8-bit HEVC that would stress an earlier-generation Xeon with much more muscle. (But it doesn't handle 10-bit like the Kaby Lakes.) You don't mention VMs, but 64G of memory implies that is your plan. If not, that might be overkill. 10T drives are relatively new and expensive per TB. 8T deals abound. Recently both the Seagate Archive and WD Red 8T drives have been under $180. I'd suggest looking there as a start to moderate cost. The HGST NAS drives are a bit more expensive, but will likely prove to be the longest-lived of the options. They are also faster at 7200 RPM. The Norco is a beast, but I've heard complaints on the quality of the cases and cages. There is a comparable Supermicro people have bought on eBay and are happy with. I am not a rack-mount guy, and use a tall tower with 4 5in3 cages (plus one external) to provide 25 drive slots in a much smaller footprint. Personal preference. Good luck with your build!
  11. Sometimes the simplest ideas are the best. I have four Dockers that I use frequently, and often have them open in my browser, each one taking a tab. I just click on the tab of the one whose web interface I want to use. But it occurred to me it would be nice to have one tab that contains all four web interfaces in four rectangles (upper left, upper right, lower left, lower right). That means three fewer tabs, and since I frequently use them in combination, it's very handy to just move the mouse to the one I want to control. And less hunting for the one I want. So I created a very basic .html file that I call quad.html, which creates 4 frames on the screen. Each frame contains a Docker management interface. The web pages seem to size nicely into these frames, and are big enough to be useful without a lot of scrolling and sliding. YMMV.
quad.html:
<!DOCTYPE html>
<html>
  <FRAMESET cols="100%" rows="50%,50%">
    <FRAMESET cols="50%,50%" rows="100%">
      <FRAME src="http://192.168.1.199:1111">
      <FRAME src="http://192.168.1.199:2222">
    </FRAMESET>
    <FRAMESET cols="50%,50%" rows="100%">
      <FRAME src="http://192.168.1.199:3333">
      <FRAME src="http://192.168.1.199:4444">
    </FRAMESET>
  </FRAMESET>
</html>
Just copy and paste the above into Notepad and edit the "src" tags to refer to the URLs for your Dockers. Call it whatever you want with an .html extension. Double-click the file in Explorer, and voila: it opens in your browser with all four Docker WebGUIs in one convenient window. You can adjust the frame borders to create different-sized windows, which the web pages will resize to fit. Using the tags in the html file, you can hard-code the widths and heights as desired. If your eyes are better than mine, you could make 6, 9, 16, whatever frames. But for me 4 is enough to make my life just a little easier. Enjoy your arrays!
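If you have more (or fewer) than four Dockers, hand-editing the frameset gets tedious. Here's a small Python sketch, not part of the original post, that generates the same kind of file from a list of URLs (the addresses and ports below are placeholders; substitute your own):

```python
# Generate a quad.html-style frameset from a list of Docker web UI URLs.
# The URLs are placeholders -- replace them with your server's addresses/ports.
urls = [
    "http://192.168.1.199:1111",
    "http://192.168.1.199:2222",
    "http://192.168.1.199:3333",
    "http://192.168.1.199:4444",
]

def build_frameset(urls, cols=2):
    """Lay the URLs out in a grid of FRAMEs, `cols` per row."""
    rows = [urls[i:i + cols] for i in range(0, len(urls), cols)]
    row_pct = f"{100 // len(rows)}%"
    col_pct = f"{100 // cols}%"
    framesets = []
    for row in rows:
        frames = "\n".join(f'    <FRAME src="{u}">' for u in row)
        framesets.append(
            f'  <FRAMESET cols="{",".join([col_pct] * len(row))}" rows="100%">\n'
            f"{frames}\n"
            f"  </FRAMESET>"
        )
    return (
        "<!DOCTYPE html>\n<html>\n"
        f'<FRAMESET cols="100%" rows="{",".join([row_pct] * len(rows))}">\n'
        + "\n".join(framesets)
        + "\n</FRAMESET>\n</html>\n"
    )

with open("quad.html", "w") as f:
    f.write(build_frameset(urls))
```

Add a fifth or sixth URL to the list and it grows into a 2x3 grid automatically, instead of you rebalancing the percentages by hand.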
  12. It looks like a solid build, but you really need to be more specific about what you intend to do with it. If you want to run 3 Windows VMs and transcode 8 HEVC streams, this is not powerful enough: you have 4 cores and a ~7800 Passmark score. If you want to transcode 2-3 non-HEVC streams, and maybe run 1 Windows VM, you should be good. But if you want a Windows VM, I'd up the memory to 16G so you can give 8G to Windows.
  13. For the money, I'd tend to go with the 9201-8i card on eBay (~$45 shipped). It will do the same thing for nearly every use case. But I will say that controllers tend to be the longest-living assets in my unRAID server. I still have a couple of Adaptec 1430SAs that still work quite well. So if you are betting on drives continuing to get faster, maybe the 9207 is worth a small premium. It's up to you. Here are some facts to guide the decision depending on your use case. The 9201 and 9207 are basically the same thing, except PCIe 2.0 vs 3.0. I think the card might have been better packaged as an x4 card, or better yet, sold as a 16-drive card in an x8 slot. But maybe if using this with a SAS expander, the x8 bandwidth makes sense. If you put it in a PCIe 3.0 x8 slot, you'd basically have 1000 MB/sec to each drive. That is faster than the 6 Gb/sec SATA spec for the drives, so you'd really only be getting 600 MB/sec to each drive. With the 2.0 card, you'd get 500 MB/sec, enough to run 8 SSDs at near full speed (full speed is 550 MB/sec) in parallel. (Who runs 8 SSDs at full speed?) With the PCIe 3.0 bandwidth, I'd like the extra headroom for future-proofing. If you put it in a PCIe 3.0 x4 slot, you'd have half that bandwidth (500 MB/sec per drive), enough to run 8 SSDs at nearly full speed in parallel. Not bad. And for spinners, this is way overkill. At PCIe 2.0 x4, you'd have half that again (250 MB/sec per drive), constrained for a full complement of SSDs running in parallel, but plenty fast for 8 spinners or 7 spinners with 1 SSD. This is the same as the speed of the PCIe 2.0 card in an x8 slot, which is how many people run the 9201-8i. Interestingly, if you put it into a PCIe 3.0 x1 slot, you'd be able to run 6 drives at a very respectable 165 MB/sec, or 8 at 125 MB/sec. That is actually very good for an x1 slot, where with a typical PCIe 1.x card you'd be limited to 1 drive at 250 MB/sec or 2 drives at 125 MB/sec.
Although I don't know of too many x1 slots that are PCIe 3.0, or that would hold an x8 card without melting the back of the slot! The PCIe 2.0 card would get you 3-4 drives at those speeds, considerably fewer than 6 to 8. This card really shines in a fast x1 slot! I'd have few reservations about ordering the $99 card if that's what I wanted. I have to believe Newegg would hold them to a high standard for customer satisfaction. And if you put it on a credit card, you'd have all the power needed to get your money back if they didn't work properly. I bought a new 9201-16i from Hong Kong for a decent price on eBay, and it works great!
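The slot arithmetic above boils down to a few lines. Here's a rough model (my assumptions: ~250/500/1000 MB/sec of usable bandwidth per lane for PCIe 1.x/2.0/3.0, and a ~600 MB/sec cap per drive from the 6 Gb/sec SATA link):

```python
# Approximate per-drive bandwidth for an 8-port HBA by PCIe generation and slot width.
LANE_MB_S = {"1.x": 250, "2.0": 500, "3.0": 1000}  # usable MB/sec per lane (approx.)
SATA3_CAP_MB_S = 600  # a single 6 Gb/sec SATA link tops out near this

def per_drive_mb_s(gen, lanes, drives=8):
    """Slot bandwidth split evenly across drives, capped by the SATA link rate."""
    return min(LANE_MB_S[gen] * lanes / drives, SATA3_CAP_MB_S)

for gen, lanes in [("3.0", 8), ("2.0", 8), ("3.0", 4), ("2.0", 4), ("3.0", 1)]:
    print(f"PCIe {gen} x{lanes}: ~{per_drive_mb_s(gen, lanes):.0f} MB/sec per drive")
```

The printed figures line up with the post: 600 (SATA-capped), 500, 500, 250, and 125 MB/sec per drive respectively.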
  14. This is not the FSBO board. This is good deals on new products. If you are interested in buying one, click on the link and buy from Newegg.
  15. The license key would be the last thing I'd be worried about if this were to happen to me. The very first thing I would do upon realizing I had ransomware on my server is to power it down. That is me, but I'd rather deal with a dirty shutdown than give the virus more time. I'd do the same with any Windows workstation. It's like having a tiger in the house: I'd want to shut every door. You need one machine to be able to Google for research; even a tablet or smartphone might suffice. Unlike a normal Linux or Windows environment, unRAID completely reinstalls the OS to memory with every boot. You should be able to build a new USB stick and reboot. Don't run any Dockers, VMs, etc. It seems impossible that ransomware would survive that. unRAID seems the easiest to clean vs. a Windows or regular Linux box with a persistent OS install. I want to reiterate that it is much more likely that a ransomware attack would occur through an infected Windows machine and get to unRAID through unRAID's Samba shares. The flash disk IS a Samba share. After shutting down the unRAID server, I would go hunting for an infected Windows box. We know that unRAID is vulnerable to such an attack through Samba. We have never seen one of these (until now, possibly). (An aside: I keep all of my media shares as read-only Samba shares to prevent being exposed.) There is an old expression: when you hear hoofbeats, think horses, not zebras. Attacking unRAID would not be very profitable given the small user base. It might be possible for a generic Linux strain to do so, but it seems very unlikely. This is the zebra IMO. It could be a zebra, but nothing I've seen so far convinces me. Which leads me to believe, Siwat, that you have another infected computer that you haven't found. A few questions for Siwat ... 1 - Are you running any VMs on unRAID? A Windows VM is just like a Windows computer. It can infect unRAID, and probably do more damage faster than a network-connected Windows box.
2 - Have you checked each and every Windows box for signs of the virus? 3 - Are you running any Dockers on unRAID? Anything recently installed or updated? 4 - Is your server exposed to the Internet for external access? If so, how? I do not envy you the next few days sorting this out. But please keep us updated on the steps you are taking. We may be able to provide some general guidance and suggestions. I am not aware of anyone on the forum with specific experience with this issue.
Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.