All Activity


  1. Past hour
  2. Nothing special, just swap them. The LSI needs to be in IT mode, so it needs to be flashed if it's in IR mode, but it's simple, like a BIOS update.
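The IR-to-IT crossflash mentioned above boils down to roughly the sequence below. This is a hedged dry-run sketch: the firmware and boot-ROM file names are assumptions that vary by firmware package, so the script only prints the `sas2flash` commands instead of executing them.

```shell
# Dry-run sketch of crossflashing an LSI SAS2008 card (e.g. a 9211-8i)
# from IR to IT firmware. File names below are assumptions; check the
# firmware package you download.
ADAPTER=0            # controller index shown by `sas2flash -listall`
FW=2118it.bin        # IT-mode firmware image (assumed name)
BIOS=mptsas2.rom     # optional boot ROM (assumed name)

run() { echo "would run: $*"; }   # dry-run: print instead of executing

run sas2flash -listall                               # record the SAS address first
run sas2flash -o -e 6 -c "$ADAPTER"                  # erase the existing (IR) flash
run sas2flash -o -f "$FW" -b "$BIOS" -c "$ADAPTER"   # write IT firmware + boot ROM
```

Note the SAS address printed by `-listall` before erasing; some firmware packages need it re-programmed afterwards.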
  3. +1 for integrated Docker Compose
  4. Hmm. The LSI 9211-8i seems affordable enough. Does anything special need to be done when swapping cards? Will I have to reassign all drives again?
  5. The app folder contains settings for CrashPlan, but not for the container itself. Since most of the time problems are related to the container's configuration, re-installing it is often sufficient.
  6. Alright. Thanks again
  7. Being new to Docker, I was thinking the config files would be in there. I only removed the CrashPlan folder from the app folder. Figured better safe than sorry. Thank you for the quick feedback!
  8. Yeah, it would be. 3 files, in which case you upgrade again. But a dated backup wouldn't make a difference anyway. (BTW, on upgrades, the previous version is stored on the flash drive in the "previous" folder.) It's on a to-do list. Just not at the top.
  9. Any 6TB disk will work.
  10. If those are real pending sectors, and they probably are, you'll get some read errors during the parity check; those are expected. Then either unRAID will successfully write those sectors back to the disk, or there will be a write error and the disk will be disabled. If it's disabled, replace it; if not, run another non-correcting check. If there are more read errors, replace it; if not, you can give it a second chance.
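"Real pending sectors" here means a non-zero raw value for SMART attribute 197 (Current_Pending_Sector), which `smartctl -A /dev/sdX` reports. A minimal sketch of pulling that value out, parsing a canned sample line so the logic is visible without touching a disk (the sample values are made up):

```shell
# Extract the raw Current_Pending_Sector count from smartctl -A style
# output. `sample` stands in for real `smartctl -A /dev/sdX` output;
# the numbers are invented for illustration.
sample='197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       8'
pending=$(printf '%s\n' "$sample" | awk '$2 == "Current_Pending_Sector" { print $NF }')
echo "pending sectors: $pending"
```

A non-zero count that drops back to zero after a successful rewrite (e.g. during a rebuild or correcting write) is the "second chance" case; a count that grows is the replace-the-disk case.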
  11. To be honest, I have yet to see any store ever run diagnostics on hardware. If it passes BIOS tests, then it's good to go for them. Try a different stick and a fresh install.
  12. Just before I started the check, it started saying this. I'm gonna buy a new drive. I have 4 WD 6TB Blues in there. Can I buy ANY 6TB, or does it have to be WD Blue? Or WD?
  13. Since IPv6 is not supported on unRAID, the container can't bind to an IPv6 interface, which is not really a problem. This is the suggestion I would have made. Removing and re-installing the container makes sure your template has the proper defaults. For example, playing with the networking mode can affect port mappings, something that is not obvious to see when you edit the template. Note that you could simply re-install the container, without removing your app folder.
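To illustrate the point about networking mode and port mappings: with bridge networking the template's `-p` mappings are what expose the port, while with host networking they are silently ignored, which is easy to miss when editing a template. A dry-run sketch (the container name, image name, port, and appdata path are hypothetical):

```shell
run() { echo "would run: $*"; }   # dry-run: print instead of executing

# Bridge mode: the -p host:container mapping is what exposes the port.
run docker run -d --name crashplan --net=bridge -p 4239:4239 \
    -v /mnt/user/appdata/crashplan:/config example/crashplan

# Host mode: the container shares the host's network stack, so any
# -p flags would have no effect.
run docker run -d --name crashplan --net=host \
    -v /mnt/user/appdata/crashplan:/config example/crashplan
```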
  14. Thanks
  15. So I guess I'm not being hacked then.
  16. Yes
  17. Wow. Thanks! heh. Just so I am 100% clear: un-checking this is ALL I need to do?
  18. Yes, running a correcting check with a possibly bad disk is a very bad idea: it can corrupt your parity, and if the disk really needs to be replaced you will then rebuild a corrupt disk.
  19. Those are hardware errors, possibly a bad cable.
  20. So uncheck the write corrections box?
  21. Just make sure it's a non correcting check.
  22. My motherboard, CPU and memory were assembled at the hardware store. A BIOS update was done for the board, and they claim everything is tested, including the memory. But to be sure I can run a test myself. The USB stick comes from my previous unRAID server; that one worked without problems. I just plugged this USB stick into my new server. My server boots. I see the unRAID screen with options. unRAID boots and I see an OK and a 2nd one. After that one my server reboots. So unRAID doesn't start up.
  23. Hi there. I'm an old sheep that strayed away from the unRAID flock years ago but have found my way back. Overall, this is a great preview release, and I've been running 6.3.3-6.3.5 Pro for the previous month with no issues. Since upgrading to 6.4-rc6 this a.m., I can't boot into GUI mode. I'm also running with UEFI boot. It seems to load bzimage fine and I can see all the text, but I end up with a blinking cursor in the upper left-hand corner. This is also the case in GUI safe mode. When I change the syslinux config to boot into non-GUI mode, I get the root command prompt just fine. Is there something I'm missing? I've browsed the RC forum but can't seem to find my way through this. Sorry in advance if I've missed something. Thanks! Edit: To clarify, I can boot the machine just fine since applying the RC, but can't access the onboard GUI. I can access the web GUI just fine through my phone's browser, the browsers in my Win10 VM hosted in unRAID, etc.
  24. Thanks guys. I will start a check now and, failing that, take strike's advice.
  25. A couple days ago I started noticing that the mover was taking forever to transfer movies from the cache pool to the data drives, and checking the log I found a ton of BTRFS errors. Having experienced something like this before, I followed the same steps this time to re-do the cache pool (stopping Docker, deleting the docker.img, moving cache shares to the array, wiping the file system on both cache drives, then moving the cache shares back). After starting the array again, I ran a BTRFS scrub which found and corrected 9 read errors, then ran it again, correcting 4 read errors, and finally got no errors running the scrub a third time. After that I ran a BTRFS balance, then restarted the array in maintenance mode and ran the BTRFS check under Check Filesystem Status, which returned this:

    checking extents
    checking free space cache
    checking fs roots
    checking csums
    checking root refs
    Checking filesystem on /dev/sdc1
    UUID: 10fca33f-5aca-4c9b-8a8e-99c373eb0fe4
    found 13622984704 bytes used err is 0
    total csum bytes: 13082368
    total tree bytes: 222707712
    total fs tree bytes: 186515456
    total extent tree bytes: 20742144
    btree space waste bytes: 45097406
    file data blocks allocated: 13400276992
     referenced 13400276992

I assume "err is 0" means that no filesystem errors were found; however, checking the attached log I still see blk_update_request I/O errors associated with sdc, which is cache drive 1.
Ex:

    Jun 25 13:22:33 JBOX kernel: sd 1:0:1:0: [sdc] tag#29 UNKNOWN(0x2003) Result: hostbyte=0x0b driverbyte=0x00
    Jun 25 13:22:33 JBOX kernel: sd 1:0:1:0: [sdc] tag#29 CDB: opcode=0x28 28 00 01 e8 d1 80 00 00 20 00
    Jun 25 13:22:33 JBOX kernel: blk_update_request: I/O error, dev sdc, sector 32035200
    Jun 25 13:22:33 JBOX kernel: mpt2sas_cm0: log_info(0x31080000): originator(PL), code(0x08), sub_code(0x0000)
    Jun 25 13:22:33 JBOX kernel: mpt2sas_cm0: log_info(0x31080000): originator(PL), code(0x08), sub_code(0x0000)
    Jun 25 13:22:33 JBOX kernel: sd 1:0:1:0: [sdc] tag#30 UNKNOWN(0x2003) Result: hostbyte=0x0b driverbyte=0x00
    Jun 25 13:22:33 JBOX kernel: sd 1:0:1:0: [sdc] tag#30 CDB: opcode=0x28 28 00 01 e8 d1 60 00 00 20 00
    Jun 25 13:22:33 JBOX kernel: blk_update_request: I/O error, dev sdc, sector 32035168
    Jun 25 13:22:33 JBOX kernel: sd 1:0:1:0: [sdc] tag#31 UNKNOWN(0x2003) Result: hostbyte=0x0b driverbyte=0x00
    Jun 25 13:22:33 JBOX kernel: sd 1:0:1:0: [sdc] tag#31 CDB: opcode=0x28 28 00 01 e8 d0 a0 00 00 20 00
    Jun 25 13:22:33 JBOX kernel: blk_update_request: I/O error, dev sdc, sector 32034976

So apparently there's still a problem with the cache pool, even though neither SSD in the pool is reporting any SMART errors, etc. Any ideas what's going on and what I should do next? When this happened before I chalked it up to issues with the SAS2LP-MV8 controller I was using at the time (data drives on that controller would sometimes drop off the array too, including once during a parity check, which wasn't fun). Since I replaced it with the LSI 9211-8i, though, everything had been working fine. So do the continued I/O errors after replacing the cache pool suggest some new hardware issue (like maybe the card has become slightly unseated?), or is it more likely an issue with BTRFS? Because at this point I'm reluctant to even try restarting Docker and recreating the docker.img again if there's still some kind of corruption, and I think I'm about fed up with BTRFS anyway.
That being the case, is it worth trying the replace-cache procedure again, this time reformatting one of the cache SSDs to XFS, and then hoping that copying the cache shares back to the single cache drive might somehow get rid of the I/O errors? If the cache data is just corrupted at this point and there's no real fix except to delete all of it and start over, then I'm not recreating my Plex server from template again on top of BTRFS. It's a lot of work, and the extra protection offered by the cache pool is only a nice theory if you're repeatedly having to wipe your cache due to software corruption when the drives themselves are perfectly healthy. Or that's the way I'm leaning at the moment, anyway. I'd still appreciate any feedback anyone has to offer on the best way forward from here. Thanks. syslog.txt
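For anyone following along, the scrub-until-clean-then-balance sequence described above maps to roughly these commands. A dry-run sketch (`/mnt/cache` is the usual unRAID cache mount point; adjust to your system):

```shell
run() { echo "would run: $*"; }   # dry-run: print instead of executing

run btrfs scrub start -B /mnt/cache   # -B: run in foreground, print a summary
run btrfs scrub status /mnt/cache     # corrected vs. uncorrectable error counts
run btrfs balance start /mnt/cache    # rewrite chunks after repairs
run btrfs device stats /mnt/cache     # per-device error counters; helps separate
                                      # cable/controller trouble from filesystem bugs
```

The `btrfs device stats` counters are worth checking in a case like this: rising read/write errors on one device point at the cable, port, or controller rather than at BTRFS itself.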
  26. I removed CrashPlan, deleted the folder in the app folder, and rebooted the server. Then I reinstalled CrashPlan. I used TightVNC, which seemed to have better performance than the web GUI, and was able to log in without an issue. Not sure why it worked. But it worked.
Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.