Fleat

Members
  • Posts: 13
  • Gender: Undisclosed

Fleat's Achievements

Noob (1/14)

Reputation: 0

  1. Great, thank you for the information. Leaning towards option 2 because it would be much quicker. Do you foresee any problems with that (if I don't lose an array drive during the parity rebuild)?
  2. I would like to shrink my Unraid array by removing 4 data drives and both of the parity drives. I intend to spin up another NAS solution for testing purposes and as a backup for the Unraid solution. I have attached a diagram of what I intend to do.

     Ideal end result: Unassigned Disk 1 (Toshiba 5TB) as Parity 1, with all WD Red 8TB drives removed from the array & parity.

     I started here: https://wiki.lime-technology.com/Shrink_array

     Can I do this swap while remaining fault tolerant, since one of the existing parity drives will still have the calculated parity, and without using the clear-me script on each drive? Here are a couple of the approaches I am considering, and I am wondering if these are feasible (a quick empty-disk check sketch is appended after this post listing).

     Approach 1: Interim config with 1 valid parity drive and a new dual parity - will I remain fault tolerant?
       1. New config
       2. Remove Parity Disk 2
       3. Remove Array Disks 7, 8, 9, and 10 from the config (disks are empty)
       4. Check the box "Parity is already valid"
          Array result: array with one 8TB parity drive and six 5TB data drives - is the parity truly still valid here?
       5. Stop the array and add Unassigned Disk 1 (Toshiba 5TB) as dual parity, then build the parity
       6. New config
       7. Remove the WD 8TB and leave Unassigned Disk 1 (Toshiba 5TB) as the only parity
       8. Check the box "Parity is already valid"
       9. Start the array

     Approach 2: Abandon parity and follow this: "Do not click the check box for Parity is already valid; make sure it is NOT checked; parity is not valid now and won't be until the parity build completes"
       1. New config
       2. Remove Parity Disks 1 and 2
       3. Remove Array Disks 7, 8, 9, and 10 from the config (disks are empty)
       4. Add Unassigned Disk 1 (Toshiba 5TB) as Parity Disk 1
       5. Make sure "Parity is already valid" is unchecked
       6. Start the array and let the parity rebuild
  3. I do not seem to notice the same type of behavior when testing with the HP220 (SAS9205-8i) outside of Unraid. My interim solution is a spin-up group that spins up the entire array, as that seems to retain the performance I would expect, with none of the weirdness when accessing various content across drives (a spin-up sketch is appended after this post listing). This is not a great solution, as I will soon have 20 drives that will all have to spin up just to access the media on one drive with the performance I need. Lots of wasted energy and generated heat that shouldn't be required.
  4. Thanks for the heads up about that. I will test out the extra SAS controller in that kind of scenario outside of Unraid.
  5. TLDR: Having a SAS card in the loop with only the necessary drive spun up causes slow transfer speeds and some other weirdness. Could use any advice people have.

     Problem: Everything was working great before I moved to a SAS card (for more drives). With a SAS card in place, I can only achieve a maximum of around 40MB/s in transfers to external networked computers as well as internally networked VMs unless all drives in the array are spun up. Accessing a drive that causes a "spin up" in Unraid will also freeze access on any other drives that are currently being utilized. For example, watching a movie stored on Disk 1 goes fine until someone tries to stream a TV show stored on Disk 4 at the same time; this essentially "freezes" the movie until Disk 4 is spun up and the information is found. With the current speeds, I cannot stream high-quality FLAC music or my ripped Blu-ray movies without getting "Reads too slow from server" in Kodi many times.

     Troubleshooting steps (a throughput-timing sketch is appended after this post listing):

     Test #1: Tests performed with all drives on the SAS2 backplane plugged into M1015 port 0
       a. Transfer file directly from cache drive (SSD) to networked PC: 110MB/s
       b. Transfer file directly from array drive (HD) to networked PC: 40MB/s (only the necessary disk spun up)
       c. Transfer file directly from array drive (HD) to VM on cache drives: 40MB/s (only the necessary disk spun up)
       d. Transfer file directly from array drive (HD) to networked PC: 110MB/s (all drives spun up)
       e. Transfer file directly from array drive (HD) to VM on cache drives: 150-200MB/s (all drives spun up)

     Test #2: Tests performed with a single drive from the array on port 1 of the M1015 using an 8087-to-SATA breakout cable (bypassing the SATA backplane)
       a. Transfer file directly from single drive (HD) to networked PC: 40MB/s (only the necessary disk spun up)
       b. Transfer file directly from single drive (HD) to VM on cache drives: 40MB/s (only the necessary disk spun up)
       c. Transfer file directly from single drive (HD) to networked PC: 110MB/s (all drives spun up)

     Test #3: Tests performed with a single drive from the array on motherboard SATA
       a. Transfer file directly from single drive (HD) to networked PC: 110MB/s (only the necessary disk spun up)
       b. Transfer file directly from single drive (HD) to VM on cache drives: 150-200MB/s (only the necessary disk spun up)

     Test #4: Bought a new SAS card, HP220 (SAS2308 vs. SAS2008 chipset), and flashed it to LSI IT firmware. All tests from Test #2 were repeated with identical results, aside from a few unexpected parity drive read errors.

     Test #5: M1015 in an Xpenology setup with WD 8TB Reds on a reverse SATA breakout, set up in SHR. All tests resulted in full gigabit transfer speeds to networked PCs. Note: this test doesn't really validate much except that the SAS card is working.

     Other configuration tests:
       1. Swapped to a new SFF-8087 cable
       2. Swapped from the onboard Intel NIC to a dual-port dedicated Intel card
       3. Performed SMB tweaks per this post (https://forums.lime-technology.com/topic/46802-faq-for-unraid-v6/?page=2&tab=comments#comment-526285)
       4. Removed all other cards and swapped PCIe slots, with identical results
       5. Ran the diskspeed script from these forums; performance seems adequate
       6. Two full passes of Memtest on Unraid boot without any errors

     Hardware specifications: PCPartPicker
     Software: Unraid 6.3.5 with dual parity (2x 8TB WD Reds)
     Diagnostics: The anonymized version contained more info than I felt it should - I can provide this upon request to Unraid folks
     Diskspeed script results:
  6. FYI for anyone using this: it looks like you should point your repository to "homeassistant/home-assistant:latest" instead of "balloob/home-assistant:latest" to pull in updates now. Here is a note from the breaking changes of the recent update: "The location of the Docker image has changed. There was no possibility for us to keep maintaining the old image (as it was bound to the GitHub repo under my name) or to make a redirect. So if you are using the Home Assistant Docker image, change it to run homeassistant/home-assistant:latest"
  7. Excellent! I have actually been on the Gitter chat today trying to determine if you can speed up the polling between Plex and Home Assistant. The automation I have set up works, but it is fairly slow to actually react to changes from the Plex media player.
  8. Thank you for the docker, it is working well for me. My theater now automatically turns off the lights when a movie plays in Plex, turns them to a dimmed state when paused, and turns them back on when Plex is stopped (a Plex polling sketch is appended after this post listing). I also added some lights in the living room that automatically turn on when the sun goes down for the dogs.

     I do get an error when starting up Home Assistant, but it doesn't seem to affect anything I am using it for at this point. It appears to be related to the piece that you can use to track your router?

     Error:
     16-04-17 20:55:41 homeassistant.bootstrap: Error during setup of component device_tracker
     Traceback (most recent call last):
       File "/usr/src/app/homeassistant/bootstrap.py", line 158, in _setup_component
         if not component.setup(hass, config):
       File "/usr/src/app/homeassistant/components/device_tracker/__init__.py", line 98, in setup
         conf = conf[0]
     IndexError: list index out of range
  9. Yes, I have definitely been considering switching to a workstation motherboard. If I did, I would just use the X99 board to upgrade my desktop. I have had this up and running for a few weeks now and everything seems great, so we will see how I feel about it in a couple of months. The processor was an OEM eBay purchase from Japan. The turbo clock speed for all cores is lower (2200 MHz), but it serves my needs quite well for the type of workloads I toss at it. Edit (to further elaborate): The motherboard does support RDIMM with a Xeon processor. You can see an example of it on eBay here. It doesn't seem to have any limitations that affect me personally. CPU Mark is around 24,000. I was interested in hardware pass-through, a high core count, and low power usage, and it meets those needs well. If anyone has any additional specific questions about the CPU, I can do my best to get that information for you.
  10. FNG here. Just introducing myself and sharing a bit of my disaster / project. I am primarily here because I had to un-f$%* my previous mistakes from my last NAS.

      The single largest mistake was building a ZFS server with 8 of the dreaded Seagate ST3000DM001. If only I knew then what I know now... I saw that more than I saw a happy status for the array. In the end, I ended up RMA'ing 6 of the 8 drives, and all 6 of the replacements have reallocated sectors and errors as well by this point. Two of the original drives are somehow miraculously still good.

      My second mistake was using FreeNAS and ZFS. I built the last server at a time when FreeNAS was not that mature, and ZFS is extremely unforgiving with pool expansion. This led to a server that just didn't fit my needs that well, and one that constantly needed maintenance because of my poor choice of drives.

      SO, time to move on in a big way! The party starts here with 7 shiny enterprise-grade 5TB hard drives. And for good measure, let's add some enterprise-grade SSDs to the mix (along with two regular burners for pass-through). Those should certainly get the juices flowing! And now we skip all the boring stuff and get straight to the money shot.

      A little more detail about my setup (see full specs in signature):
      - Only 6 of the 7 5TB drives are being utilized until Unraid adds dual parity
      - Currently running 32GB non-ECC memory, soon to be upgraded to 64GB DDR4 ECC
      - Cache pool is comprised of 4 Intel DC S3500 300GB in RAID10 (2 of the 6 went to a friend)
      - Hosting 4 primary virtual machines with a handful of others for test environments
      - Plex Media Server, NZBGet, and Sonarr on a VM that is intended for external RDP
      - CouchPotato, Guacamole, Muximux, PlexPy, and ownCloud Dockers
      - AMD 6450 is passed through for a software development machine
      - Nvidia GT 720 is passed through for an HTPC in the basement theater via HDMI over Ethernet
      - Yes, I cheaped out and didn't buy a workstation motherboard

      Things I still need to figure out:
      - The Plex Media Server docker went into a rogue state and caused the Unraid web GUI to become unresponsive. I was only able to fix this by SSH'ing in and kill -9'ing the docker process in its entirety. CPU and memory usage was low at the time, so I'm not sure what to make of that. For now, I am sticking with my VM for this purpose until that kind of behavior is resolved. My library is very large (15TB+), so that could impact things (I did see this is a logged defect in the forums).
      - Cache pool performance is pretty poor overall (read some posts that this should improve in 6.2)
      - Get a git repository up and running (either via Docker or VM): any suggestions?

      So that is where I stand at this point. I spent a LOT of time researching NAS solutions, and settled on Unraid because it seemed like I could replace 3+ machines with one box. I have been following it for a long time and it has really matured nicely, which is what brought me on board. Hoping this time I made a better choice with the disk drives.
  11. This update broke notifications sent via Gmail for me as well. I switched over to an Outlook account, which appears to be working just fine (an SMTP sketch is appended below).
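
For the array-shrink question in post 2: both approaches depend on Disks 7-10 being empty before they are dropped in the New Config. A minimal pre-flight check, as a rough sketch assuming the drives are mounted at the usual /mnt/diskN paths (the disk numbers are only an example):

import os

# Assumed mount points for the data drives being removed; adjust to your array.
DISKS = ["/mnt/disk7", "/mnt/disk8", "/mnt/disk9", "/mnt/disk10"]

for mount in DISKS:
    entries = os.listdir(mount)  # top-level files and folders still on the disk
    if entries:
        print(f"{mount}: NOT empty ({len(entries)} top-level entries), e.g. {entries[:5]}")
    else:
        print(f"{mount}: empty, safe to drop from the config")

This only confirms that removing the disks will not discard data; it says nothing about whether the remaining parity stays valid.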
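
For the spin-up-group workaround in post 3: a rough sketch of waking every array member before playback by reading a small amount of real data from each device. The device names are an assumption; run as root and substitute the actual array members:

import glob

# Assumed device names for the array members; adjust the pattern to your system.
DEVICES = sorted(glob.glob("/dev/sd[b-h]"))

def spin_up(dev, nbytes=1024 * 1024):
    """Read a little real data so a drive in standby spins back up."""
    with open(dev, "rb") as f:
        f.read(nbytes)

for dev in DEVICES:
    spin_up(dev)
    print(f"spun up {dev}")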
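
For the transfer-speed tests in post 5: a simple way to time a sequential read from a single array disk, independent of SMB and the network, so controller behaviour can be separated from share overhead. The file path is only an example; use a file larger than RAM (or one not recently read) so the page cache doesn't inflate the result:

import time

def read_speed(path, block=1024 * 1024, limit=512 * 1024 * 1024):
    """Sequentially read up to `limit` bytes from `path` and return MB/s."""
    done = 0
    start = time.monotonic()
    with open(path, "rb") as f:
        while done < limit:
            chunk = f.read(block)
            if not chunk:
                break
            done += len(chunk)
    return done / (time.monotonic() - start) / 1e6

# Hypothetical test file sitting on a single array disk.
print(f"{read_speed('/mnt/disk1/Movies/sample.mkv'):.1f} MB/s")

Running it once against a spun-down disk and again against a spun-up one shows whether the slowdown exists before the network is involved.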
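
For the lighting automation in post 8: a minimal sketch of polling Plex directly for the current player state, assuming the third-party plexapi package and placeholder server details, useful for experimenting with how quickly state changes can be picked up outside Home Assistant:

from plexapi.server import PlexServer

# Placeholder connection details - substitute your own server URL and token.
PLEX_URL = "http://192.168.1.10:32400"
PLEX_TOKEN = "your-plex-token"

plex = PlexServer(PLEX_URL, PLEX_TOKEN)

def player_state():
    """Return 'playing', 'paused', or 'stopped' for the first active session."""
    sessions = plex.sessions()
    if not sessions:
        return "stopped"
    return sessions[0].players[0].state

state = player_state()
if state == "playing":
    print("turn theater lights off")
elif state == "paused":
    print("dim theater lights")
else:
    print("turn theater lights on")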
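
For the notification switch in post 11: a minimal sketch of sending a test message through Outlook's SMTP endpoint (smtp-mail.outlook.com, port 587 with STARTTLS) using only the Python standard library; the address and password are placeholders:

import smtplib
from email.message import EmailMessage

# Placeholder credentials - use your own Outlook address and password.
USER = "you@outlook.com"
PASSWORD = "your-password"

msg = EmailMessage()
msg["From"] = USER
msg["To"] = USER
msg["Subject"] = "Unraid notification test"
msg.set_content("Test notification sent via Outlook SMTP.")

with smtplib.SMTP("smtp-mail.outlook.com", 587) as smtp:
    smtp.starttls()
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)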