EGOvoruhk

Members

  • Posts: 95
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


EGOvoruhk's Achievements

Apprentice (3/14)

Reputation: 1

  1. Hopefully this news means Unraid 7 is going to come with regularly added features/upgrades to justify the increase? It seems like all the good stuff over the past few years has come directly from the community, while Unraid itself has mostly been in maintenance/security mode. My (non-internet-connected) server has been on 6.11 for a year and a half and I don't feel like I'm missing out on anything yet. I would love to see "unlimited" devices actually apply to the main array, bcachefs when it's "ready", a simple way to deal with bit-rot detection (and healing), 3+ parity drive support, etc. (A rough sketch of hash-based bit-rot checking follows this post list.)
  2. Are there any issues with older macOS versions? I'm trying to follow the YouTube video (though it seems outdated with regard to the settings shown), but I get stuck on the notify script, which just hangs forever. I'm trying to install Mojave within Unraid 6.10.3. It creates a "Macinabox Mojave" folder in the VM share with OpenCore and a vDisk, and also downloads a roughly 2GB "Mojave-install.img" file to the ISO location, and that's about it; the script just sits there running in User Scripts. (A rough sketch for checking whether the download is still progressing follows this post list.)
  3. Does a cache pool using btrfs RAID suffer from the massive 5.10 speed regression? https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.10-Btrfs-Regression (A simple before/after benchmark sketch follows this post list.)
  4. I'm trying to pinpoint this issue. Both of my parity drives end up in an error state (red X/disabled) after I attempt to write a large amount (~50GB) of data to a share. It first happened about a week ago, but the drives came back as fine after some testing, parity was recreated with no issues, and the large file download was hash checked/resumed with no errors. Smaller files (~5GB) downloaded just fine between then and now, until another large download killed them both again today. The only thing I can think of is that it's an issue with how little free space I have (which is to say none, as most drives are at 99% capacity), because it popped at the exact moment I started writing that large amount each time. (A free-space check sketch follows this post list.)
     Here's my setup:
     Parity: 2x10TB
     Disks: 20x8TB, 3x4TB, 1x10TB (slowly switching out to 8TB, and now 10TB, as drives die)
     Cache: 2x1TB SSD (RAID1)
     Hardware:
     Case: Supermicro SC846E16-R1200B
     Motherboard: X8DTE-F
     CPU: 2x Intel Xeon L5640
     Memory: 48GB
     Controllers: 2x SuperMicro AOC-S2308L-L8i (LSI SAS3008-based; one card dedicated to the dual-channel link to the 24-bay chassis, one card dedicated to the 2 parity drives and 2 cache drives)
     I know I need to free up some space, or rather get some more, but I'd like to rule out any hardware or software setup issues first. Like I said, the failed drives tested and rebuilt fine, the controller card and cabling seem to test fine as well, and there have been no issues with the cache drives, which live on the exact same controller. I'm hoping it's just a write error due to how low I am on space, and that I can simply set a mental threshold not to go below until I can add some more. I'm not sure if it's relevant, but because of how low on space I am, I was writing to disk shares rather than to a global share, because I knew the chosen disk had enough space for the file (I was making sure the disk had twice the available space of the file size, e.g. 100GB free for a 50GB file). Logs attached. My system has been pretty rock solid (minus a brief issue where I was running some mismatched firmware a while back), so I'm not too well versed in what I should be poking around the logs looking for. void-diagnostics-20201124-1539-anon.zip
  5. Those drives were sitting physically untouched in a rackmount server for days (Disk0 was never physically touched at all, as that was the 8TB parity drive that was left intact), and they went through 2 full checks. I'm curious why both would fail at the exact same time (within 3 seconds, per the log). Could it be indicative of a different issue? They're connected via SFF-8087 fanout cables, and the controller/SFF-8087 end was never unplugged (and hasn't been for over a year), so it shouldn't be a seating issue. They also passed SMART tests after the failure without ever being touched, so it's obviously not a cabling issue. Just wondering where I should be focusing my attention; one drive failing would make sense, but both simultaneously throws me for a loop. (A sketch for grouping syslog error lines by timestamp follows this post list.)
  6. Upgraded to 6.8, and about 8 hours later both my parity drives popped up as disabled simultaneously after some normal usage. Curious if there's anything in the logs that may hint at why they would both drop at the same time. Note: prior to the 6.8 upgrade I had run a full parity check with zero errors, then I upgraded my 2x8TB dual parity with a new 10TB drive (now 1x8TB, 1x10TB) and the parity sync passed with zero errors, then I upgraded a 4TB data drive with the retired 8TB parity drive, and that sync also passed with zero errors. Then I made a backup of my flash and upgraded to 6.8 without issues. All my VMs and Dockers were running, and everything seemed normal until later, when both parity drives popped up with red Xs. I shut down all VMs/Dockers, ran a SMART test on both parity drives (they came back fine), grabbed my diagnostics, stopped the array, and powered down. void-diagnostics-20191228-1756.zip
  7. I came home this weekend to find my 6.3.5 dual-parity system had lost one of its parity drives and a data drive. I immediately overnighted some replacement drives from Amazon, along with a third, which should get here in a few hours. I was curious what the proper method for replacing them is. Should I do them both at the same time? The data drive first and then the parity drive, or vice versa?
  8. This would only solve the LSI card passthrough, correct? Is there a way to disable MSI-X for other drivers? I pass through my NIC as well.
  9. If you're referring to running unRAID as a VM guest, there is currently a passthrough bug: any hardware passed through to the unRAID VM will no longer be seen in 6.2. You might be okay if you RDM your drives, but my servers have their controllers passed through, so I can't confirm.
  10. Confirmed issue: http://lime-technology.com/forum/index.php?topic=48374 Can limetech or jonp chime in? Hopefully it's something that can be fixed, but it's been around for all the 6.2 betas.
  11. Seems to be a passthrough issue in 6.2. It's affecting my RAID card and the NIC that I pass through as well (no network, no disks showing).
  12. Quote: "So it doesn't boot at all? Do you get a local console prompt? If so, log in and run the diagnostics script at the local console, then post the resulting zip file. Just because the network doesn't appear to be working doesn't mean the system isn't booting."
      It boots, and looks totally fine, apart from the fact that the network (and presumably the other passed-through devices) isn't being picked up. I can get into the console, but I have no way of downloading the diagnostic tool, so the best I can do is copy the normal unRAID log to the flash drive so I can access and share it.
      Quote: "from the console, diagnostics will be saved to the flash drive"
      Oh, whoops. I was looking at the wiki and it mentioned I had to download something. I ran it and attached it.
      Quote: "What would happen if you set /boot/config/network.cfg and BONDING="NO""
      There's only one NIC on the VM, so there's no bonding available. But it's not a network configuration issue; it's an issue with the passthrough. It's affecting other passed-through devices too: 6.2 doesn't see the drives on my RAID card either, which pick up just fine on 6.1.9. The network problem was just the first issue I noticed, because I couldn't even get into the system right off the bat.
  13. Quote: "So it doesn't boot at all? Do you get a local console prompt? If so, log in and run the diagnostics script at the local console, then post the resulting zip file. Just because the network doesn't appear to be working doesn't mean the system isn't booting."
      It boots, and looks totally fine, apart from the fact that the network (and presumably the other passed-through devices) isn't being picked up. I can get into the console, but I have no way of downloading the diagnostic tool, so the best I can do is copy the normal unRAID log to the flash drive so I can access and share it.
      Quote: "from the console, diagnostics will be saved to the flash drive"
      Oh, whoops. I was looking at the wiki and it mentioned I had to download something. I ran it and attached it. tower-diagnostics-20160419-1149.zip
  14. Quote: "So it doesn't boot at all? Do you get a local console prompt? If so, log in and run the diagnostics script at the local console, then post the resulting zip file. Just because the network doesn't appear to be working doesn't mean the system isn't booting."
      It boots, and looks totally fine, apart from the fact that the network (and presumably the other passed-through devices) isn't being picked up. I can get into the console, but I have no way of downloading the diagnostic tool, so the best I can do is copy the normal unRAID log to the flash drive so I can access and share it. syslog.txt
  15. This is the first 6.2 beta I've tested on my server, but the 6.2 releases seem to have problems with ESXi and hardware passthrough. I've got a fresh install of ESXi 6.0.0.3620759 and a fresh USB install of 6.2 Beta 21, with my Intel 82574L Ethernet card and LSI 2308 RAID card passed through. Upon booting, unRAID fails to identify the network card or appear on my network, and ifconfig returns no IP. Wiping the USB drive, installing 6.1.9, and booting the same VM with the same settings, the card is identified fine and I'm able to browse to it from the network immediately. Obviously I can't share any logs since I can't get into the system, but... thoughts? (A sketch for checking whether the passed-through devices are even visible on the guest's PCI bus follows this post list.) Edit: Never mind, I didn't know the diagnostics script was built in now. Zip attached: tower-diagnostics-20160419-1149.zip
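
Regarding the bit-rot wish in post 1: Unraid's array doesn't do this natively today, but a minimal sketch of the hash-and-verify idea might look like the following. The share path and manifest location are placeholder assumptions, and a changed hash only means the file differs from the previous scan, not necessarily that it rotted.

# Hypothetical sketch, not an Unraid feature: hash files under a share and
# compare them against the hashes recorded on the previous run.
import hashlib
import json
import pathlib

MANIFEST = pathlib.Path("/mnt/user/system/hash-manifest.json")  # assumed location
SHARE = pathlib.Path("/mnt/user/media")                         # assumed share to scan

def sha256_of(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

previous = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
current = {}
for p in SHARE.rglob("*"):
    if p.is_file():
        digest = sha256_of(p)
        current[str(p)] = digest
        if str(p) in previous and previous[str(p)] != digest:
            # could be bit-rot, or just a legitimately modified file
            print(f"hash changed since last scan: {p}")

MANIFEST.write_text(json.dumps(current, indent=2))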
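
For the hung notify script in post 2, one way to tell a slow Mojave download apart from a genuinely stuck script is to watch whether the install image keeps growing. This is only a rough sketch; the image path is assumed from the post and may differ on your ISO share.

# Hypothetical sketch: poll the size of the image being downloaded to see
# whether it is still growing. The path is an assumption based on the post.
import os
import time

IMG = "/mnt/user/isos/Mojave-install.img"  # assumed ISO share location

last_size = -1
while True:
    size = os.path.getsize(IMG) if os.path.exists(IMG) else 0
    if size == last_size:
        print(f"no growth in the last 60s at {size / 1e9:.2f} GB -- likely hung")
        break
    print(f"{size / 1e9:.2f} GB so far, still downloading")
    last_size = size
    time.sleep(60)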
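
For the btrfs 5.10 regression question in post 3, the practical check is to run the same sequential-write test on the cache pool before and after the kernel change. A minimal sketch, assuming the pool is mounted at /mnt/cache and a few GiB of scratch space are free:

# Minimal sketch: time a sequential write to the cache pool so the same test
# can be repeated on different kernels. Mount point and test size are assumptions.
import os
import time

TARGET = "/mnt/cache/speedtest.bin"   # assumed btrfs cache pool mount point
SIZE_GIB = 4
CHUNK = os.urandom(1 << 20)           # 1 MiB of random data, so compression doesn't skew the result

start = time.monotonic()
with open(TARGET, "wb") as f:
    for _ in range(SIZE_GIB * 1024):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())              # make sure the data actually hits the pool
elapsed = time.monotonic() - start

print(f"{SIZE_GIB} GiB in {elapsed:.1f}s = {SIZE_GIB * 1024 / elapsed:.0f} MiB/s")
os.remove(TARGET)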
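
For the low-free-space suspicion in post 4, the "keep twice the file size free" rule described there can be checked programmatically before starting a large download. A sketch, assuming Unraid's usual /mnt/diskN mount points:

# Sketch of the "twice the file size free" rule from the post: find a disk
# share with enough headroom before starting a large download.
import glob
import shutil

def pick_disk(needed_bytes, headroom=2.0):
    """Return the first /mnt/diskN with at least headroom * needed_bytes free."""
    for disk in sorted(glob.glob("/mnt/disk[0-9]*")):
        if shutil.disk_usage(disk).free >= needed_bytes * headroom:
            return disk
    return None

target = pick_disk(50 * 10**9)  # the ~50GB download from the post
print(target if target else "no disk has enough headroom -- add capacity first")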
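
For the "both parity drives failed within 3 seconds" question in posts 5 and 6, grouping syslog error lines by timestamp can show whether the two drives share a single triggering event (controller, cable, or power path) rather than failing independently. A sketch, with an assumed log path and keyword list:

# Sketch: group disk error lines in syslog by timestamp to see whether both
# parity drives were disabled by one event. Path and keywords are assumptions.
from collections import defaultdict

KEYWORDS = ("I/O error", "read error", "write error", "disk disabled", "UNC")
events = defaultdict(list)

with open("/var/log/syslog", errors="replace") as log:
    for line in log:
        if any(k in line for k in KEYWORDS):
            stamp = line[:15]  # classic syslog lines start like "Dec 28 17:56:01"
            events[stamp].append(line.strip())

for stamp, lines in sorted(events.items()):
    marker = "<-- multiple errors in the same second" if len(lines) > 1 else ""
    print(stamp, f"{len(lines)} matching line(s)", marker)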
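
For the ESXi passthrough problem in post 15, it helps to separate "the guest kernel never sees the device" from "the device is visible but the driver doesn't bind". A sketch that greps lspci output for the devices named in that post (the search strings are assumptions about how lspci labels them):

# Sketch: check whether the passed-through cards show up on the guest's PCI
# bus at all; run it on both 6.1.9 and 6.2 and compare.
import subprocess

out = subprocess.run(["lspci", "-nnk"], capture_output=True, text=True).stdout
for wanted in ("82574L", "2308"):
    hits = [line for line in out.splitlines() if wanted in line]
    print(wanted, "->", hits if hits else "not visible on the PCI bus")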