twg

Members
  • Posts: 126
  • Joined
  • Last visited
  • Gender: Undisclosed

twg's Achievements: Apprentice (3/14)
Reputation: 0
  1. Does no one here use Unraid with Sonos?
  2. Didn't find much via search, so here goes. Unraid has generally been solid, and I haven't looked at it in a long while besides keeping it updated to the latest version. For a while now I've noticed my Sonos could not connect to my music share on Unraid. I finally did some googling today, and it seems Sonos will only connect to a NAS via SMB1. I've got NetBIOS enabled, which I believe is SMB1, and WSD enabled too. I have "min protocol=SMB1" in the smb.conf file, yet Sonos still cannot find my music share. It used to work, and I'm not sure when it stopped; it must have been an upgrade or something. Anyone with suggestions?
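A note on the setting in the post above: if memory serves, Samba does not accept "SMB1" as a value for the protocol options; in smb.conf the SMB1 dialect is spelled NT1. A sketch of what the Samba extra configuration might look like; the ntlm auth line is an assumption, since some SMB1-only clients also require NTLMv1 (verify against the smb.conf man page and Sonos documentation):

```ini
[global]
    # Samba's name for the SMB1 dialect is NT1; "SMB1" is not a recognised value
    server min protocol = NT1
    # Assumption: some SMB1-only clients also need NTLMv1 authentication enabled
    ntlm auth = yes
```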
  3. I had a similar problem trying to upgrade to 6.8.1. I only have one network interface, so no bonding or anything. I reverted, and just recently tried 6.8.3, and it's still not working. I had to set my network to static to get it to work.
  4. I am seeing the same problem, so it's not fixed yet.
  5. There's a problem with 6.8.0. I'm running 6.7.2 with no issues, and every time I update to 6.8.0 the server can't get an IP address. When I revert to 6.7.2 the problem goes away. I've tried twice already, same behaviour.
  6. Brand new, I took it out of the box and tested it once. Paid $180, selling for $150.
  7. So for whatever reason, my 6-year-old AOC-SAS2LP cards that used to work are now exhibiting the dropped-drive issue, even the brand new replacement spare AOC-SAS card I had. I recently upgraded from Unraid 6.3.5 to the latest 6.6.2, so maybe the newer kernels are more finicky with that controller. I went out and bought an LSI 9260-16i, and now I'm reading that this card can't be flashed to IT mode... just my bad luck. Before I try to sell this card to recoup my losses, does anyone have anything I can try to get it to work? I've read that I can use megacli to force each connected drive into RAID0, thereby creating a whole array of single-drive RAID0 volumes, but I don't want to do that because it messes up the drive identifiers, so if a drive goes bad it becomes a problem replacing disks... not to mention I've got an existing array that I don't want to screw up.
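For completeness, the megacli approach mentioned above is usually done with the -CfgEachDskRaid0 switch, which creates one single-drive RAID0 virtual disk per attached drive in one shot. This is a sketch from memory, not a recommendation (the objection about masked drive identifiers stands), and the cache-policy arguments and adapter selector are assumptions to verify against the MegaCLI manual for your firmware:

```
# Create a single-drive RAID0 virtual disk for every unconfigured drive
# on all adapters (WB/RA/Direct/CachedBadBBU are cache policies; adjust)
MegaCli64 -CfgEachDskRaid0 WB RA Direct CachedBadBBU -aALL

# Confirm the resulting virtual drives
MegaCli64 -LDInfo -Lall -aALL
```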
  8. Hmm, I just purchased an LSI MegaRAID 9260-16i to replace my AOC-SAS2LP cards, which started exhibiting the dropped-disk problem... Unraid doesn't see the drives connected to the 9260, and googling reveals there isn't an IT mode for the card. How did you get your 9260 working?
  9. I put a new drive in to replace the failed parity drive, and it finished rebuilding parity. I decided to buy another parity drive and run 2 parity drives to cover myself... and within 2 hours of adding the 2nd parity drive, another one of my disks redballed. The relevant part of the log shows similar errors:

     Oct 20 14:21:48 Tower emhttpd: shcmd (217): echo 128 > /sys/block/sdp/queue/nr_requests
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#0 CDB: opcode=0x88 88 00 00 00 00 00 82 30 e8 10 00 00 02 f8 00 00
     Oct 20 18:09:01 Tower kernel: print_req_error: I/O error, dev sdp, sector 2184243216
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#1 CDB: opcode=0x88 88 00 00 00 00 00 82 30 e6 d0 00 00 01 40 00 00
     Oct 20 18:09:01 Tower kernel: print_req_error: I/O error, dev sdp, sector 2184242896
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#2 CDB: opcode=0x88 88 00 00 00 00 00 82 30 e2 d0 00 00 04 00 00 00
     Oct 20 18:09:01 Tower kernel: print_req_error: I/O error, dev sdp, sector 2184241872
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#3 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#3 CDB: opcode=0x88 88 00 00 00 00 00 82 30 e1 90 00 00 01 40 00 00
     Oct 20 18:09:01 Tower kernel: print_req_error: I/O error, dev sdp, sector 2184241552
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#4 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#4 CDB: opcode=0x88 88 00 00 00 00 00 82 30 dd 90 00 00 04 00 00 00
     Oct 20 18:09:01 Tower kernel: print_req_error: I/O error, dev sdp, sector 2184240528
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#5 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#5 CDB: opcode=0x88 88 00 00 00 00 00 82 30 dc 50 00 00 01 40 00 00
     Oct 20 18:09:01 Tower kernel: print_req_error: I/O error, dev sdp, sector 2184240208
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#6 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#6 CDB: opcode=0x88 88 00 00 00 00 00 82 30 d8 50 00 00 04 00 00 00
     Oct 20 18:09:01 Tower kernel: print_req_error: I/O error, dev sdp, sector 2184239184
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#7 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#7 CDB: opcode=0x88 88 00 00 00 00 00 82 30 d7 08 00 00 01 48 00 00
     Oct 20 18:09:01 Tower kernel: print_req_error: I/O error, dev sdp, sector 2184238856
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#8 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#8 CDB: opcode=0x88 88 00 00 00 00 00 82 30 d3 08 00 00 04 00 00 00
     Oct 20 18:09:01 Tower kernel: print_req_error: I/O error, dev sdp, sector 2184237832
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#9 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] tag#9 CDB: opcode=0x88 88 00 00 00 00 00 82 30 cf 08 00 00 04 00 00 00
     Oct 20 18:09:01 Tower kernel: print_req_error: I/O error, dev sdp, sector 2184236808
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] Read Capacity(16) failed: Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] Sense not available.
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] Read Capacity(10) failed: Result: hostbyte=0x04 driverbyte=0x00
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] Sense not available.
     Oct 20 18:09:01 Tower kernel: sd 13:0:5:0: [sdp] 0 512-byte logical blocks: (0 B/0 B)

     I'm beginning to think the chances of 3 of my drives failing all within 1-2 days are too coincidental... there must be something else going on. I've attached the output of my diagnostics: tower-diagnostics-20181020-1829.zip
  10. I recently had a data drive quit on me (at least Unraid said so), so I replaced it and it went through a data rebuild. In the process the server froze, so I rebooted it. It completed rebuilding the data drive, and when it finished I saw the following message:

      Event: Unraid Parity sync / Data rebuild
      Subject: Notice [TOWER] - Parity sync / Data rebuild finished (11640829 errors)
      Description: Duration: 1 day, 6 minutes, 23 seconds. Average speed: 92.2 MB/s
      Importance: warning

      What does it mean when it lists all those errors? Are those errors in the drive rebuild, i.e. there's bad data? When I check the drive log, I get a whole bunch of these:

      Oct 18 20:45:35 Tower kernel: sd 13:0:5:0: [sdo] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
      Oct 18 20:45:35 Tower kernel: sd 13:0:5:0: [sdo] tag#0 CDB: opcode=0x88 88 00 00 00 00 03 9d f4 24 a0 00 00 02 00 00 00
      Oct 18 20:45:35 Tower kernel: print_req_error: I/O error, dev sdo, sector 15534924960
      Oct 18 20:45:35 Tower kernel: sd 13:0:5:0: [sdo] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
      Oct 18 20:45:35 Tower kernel: sd 13:0:5:0: [sdo] tag#1 CDB: opcode=0x88 88 00 00 00 00 03 9d f4 26 a0 00 00 02 00 00 00
      Oct 18 20:45:35 Tower kernel: print_req_error: I/O error, dev sdo, sector 15534925472
      Oct 18 20:45:35 Tower kernel: sd 13:0:5:0: [sdo] tag#2 UNKNOWN(0x2003) Result: hostbyte=0x04 driverbyte=0x00
      Oct 18 20:45:35 Tower kernel: sd 13:0:5:0: [sdo] tag#2 CDB: opcode=0x88 88 00 00 00 00 03 9d f4 28 a0 00 00 02 00 00 00
      Oct 18 20:45:35 Tower kernel: print_req_error: I/O error, dev sdo, sector 15534925984

      I've attached the full drive log. So I was getting some really weird issues: multiple drives would drop out on me, and different drives every time I rebooted. I opened my server, and it seemed like some power cables were loose, so I replugged those in... still multiple drives failing on me, and it seemed like it was coming from one drive controller, the AOC-SASLP-MV8. Luckily I had a spare AOC-SAS2LP-MV8 controller, so I plugged that in... I see almost all of my drives, except my parity drive is not listed. I hear a drive struggling to seek properly, and sure enough it's my parity drive... it seems my parity drive has died. Now I'm not sure what to do... did my original data drive rebuild properly, considering the errors I got? I still have the failed data drive... suggestions? I have a spare drive I can use to replace the parity drive, but I'm hesitant to do anything at this point that may be permanent and damage my data... help!!
  11. I may have another problem which caused this. I noticed my nightly rsyncs were failing, and it seems a mount command has stopped working properly ever since I upgraded to the latest version of Unraid, so the mount command likely has some new behaviour that is breaking things. It's filling up my rootfs as a result and probably freezing my system... I will go dig and post another thread.
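On the rootfs-filling theory above: the classic failure mode is a destination mountpoint that silently failed to mount, so rsync writes into the empty directory on rootfs instead. Two quick checks, sketched with a placeholder path (/mnt/backup is an example, not from the post):

```shell
# If rsync has been writing into a dead mountpoint, rootfs fills up:
df -h /

# Check whether the destination is actually a mounted filesystem right now
# (/mnt/backup is a placeholder path):
grep -qs ' /mnt/backup ' /proc/mounts && echo mounted || echo "NOT mounted"
```

If the second check reports the path is not mounted while the nightly job thinks it is, the rsync data has been landing on rootfs.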
  12. Thanks, I rebooted and the data rebuild restarted from 0%... all seems to be going well right now, 43% done with another 13 hrs to go.
  13. I was running 6.3.5 for the longest time and recently upgraded to 6.6.1. I followed all the instructions and everything seemed to go smoothly. Then recently one of my drives went offline. I shut down Unraid, pulled the bad drive and put in a new one, restarted Unraid, stopped the array, added the new drive to the array, and restarted the array. At that point, Unraid started to rebuild my damaged drive (one of the data drives). The next morning, I noticed I couldn't access my Unraid server: no web interface, no telnet, not even the console. The Num/Caps Lock lights on the keyboard connected to the server weren't even toggling when I pressed the keys, so it seems the server is completely frozen. I'm hesitant to shut down/reboot, but is that my only option right now? And when I reboot, will it resume restoring data from where it left off/crashed?
  14. Is there a way to preserve the directory timestamps?
  15. I'm getting an exit status 23 with nothing happening. I'm trying to move files from disk 3 to disk 12, and I've run the docker-safe new permissions script. This is the command: MOVE: Move command (rsync -avRX --partial "Console/Security Camera" "/mnt/disk12/") was interrupted: exit status 23. When I ran the rsync manually, I realized I had to change the current dir to /mnt/disk3, but then the rsync command actually worked... not sure what the unbalance plugin error could be caused by.