zaker

Members
  • Posts: 37
  • Joined
  • Last visited
Everything posted by zaker

  1. @JorgeB thanks. I knew it was sdd; sdd is my backup disk for my Docker storage as well as crucial things on my Unraid array, should I manage to destroy it. I will indeed try a cable swap, although curiously it's been content since I powered it back on... Thanks for your nod to cables.
  2. I've been seeing issues with my setup. I'm thinking maybe my PSU is on its way out, or possibly my battery backup. I attempted to grab it via syslog, but that has been pretty fruitless so far. However, something strange: see the attached image of the boot messages sent to the monitor about the SATA devices. It seemed to struggle to get one to talk correctly. Eventually the thing booted, and it reads fine atm. I'm thinking (hoping) it's a bad cable; SMART seems okay, but... anyone seen this before with some insight? Also sending diagnostics, just 'cause. backup-diagnostics-20231117-1805.zip
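For anyone hitting similar symptoms later: a marginal SATA cable usually shows up in dmesg/syslog as repeated SError and link-reset lines for one ata port. A minimal sketch of counting them (the log lines below are a fabricated sample for illustration, not output from this system):

```shell
# Fabricated dmesg excerpt showing the kind of link-reset noise a bad
# SATA cable typically produces (sample only, not from the original post).
log='ata5.00: exception Emask 0x10 SAct 0x0 SErr 0x4050000 action 0xe frozen
ata5: SError: { PHYRdyChg CommWake DevExch }
ata5: hard resetting link
ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)'

# Count link-level events; repeated resets on one port point at cabling
# or power rather than the disk itself.
echo "$log" | grep -cE 'SError|resetting link'
```

A steadily climbing count on the same port after a cable swap would instead suggest the controller or backplane side.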
  3. A manual update and an error check on the USB stick seem to have solved the issues.
  4. I found the manual upgrade/downgrade procedure; I'll give that a go for 6.12.1 and see if it gets unstuck. Otherwise I'll attempt a 6.11.something, I guess.
  5. Like a hasty fool, I didn't notice I was getting a 6.x.0 release until it was too late, and I guess I'm paying the price. Looking for help: either a link to the manual upgrade process to get 6.12.1, or a specific pointer to the issue. I shut things down to troubleshoot local routing concerns, and it doesn't come back up now (no GUI). Something goofed in the startup scripts, or so it appears. Here's the boot output:

     Updating hardware database index: /sbin/udevadm hwdb --update
     Triggering udev events: /bin/udevadm trigger --action=change
     Starting system message bus: /usr/bin/dbus-uuidgen --ensure ; /usr/bin
     Starting elogind: /lib64/elogind/elogind --daemon
     Starting Internet super-server daemon: /usr/sbin/inetd
     Starting OpenSSH SSH daemon: /usr/sbin/sshd
     Starting NTP daemon: /usr/sbin/ntpd -g -u ntp:ntp
     Starting ACPI daemon: /usr/sbin/acpid
     Enabled CPU frequency scaling governor: ondemand
     Updating MIME database: /usr/bin/update-mime-database /usr/share/mime
     Updating gdk-pixbuf loaders: /usr/bin/update-gdk-pixbuf-loaders &
     Compiling GSettings XML schema files: /usr/bin/glib-compile-schemas /usr/share/glib-2.0/schemas &
     Starting crond: /usr/sbin/crond
     Starting atd: /usr/sbin/atd -b 15 -l 1
     mv: cannot stat '/usr/local/bin/mover': No such file or directory
     sh: line 6: Device: command not found
     sh: line 7: Serial: command not found
     sh: line 8: LU: command not found
     sh: line 9: Firmware: command not found
     sh: line 10: User: command not found
     sh: line 11: Sector: command not found
     sh: line 12: Rotation: command not found
     sh: line 13: Form: command not found
     sh: line 14: Device: command not found
     sh: line 15: ATA: command not found
     sh: -c: line 16: syntax error near unexpected token '('
     sh: -c: line 16: 'SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)'
     sh: line 6: Device: command not found
     sh: line 7: Serial: command not found
     sh: line 8: LU: command not found
     sh: line 9: Firmware: command not found
     sh: line 10: User: command not found
     sh: line 11: Sector: command not found
     sh: line 12: Rotation: command not found
     sh: line 13: Form: command not found
     sh: line 14: Device: command not found
     sh: line 15: ATA: command not found
     sh: -c: line 16: syntax error near unexpected token '('
     sh: -c: line 16: 'SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)'
     error: failed to connect to the hypervisor
     error: Operation not supported: Cannot use direct socket mode if no URI is set
     Unraid Server OS version: 6.12.0
     IPv4 address: 192.168.1.50
     IPv6 address: not set
     backup login:
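The "Device: command not found" cascade looks like the text output of a SMART query being fed to a shell as if each line were a command (an assumption based on the field names, which match a drive-identity report). A minimal reproduction of that failure mode:

```shell
# Feeding report-style text to a shell makes it try to execute the first
# word of each line ("Device", "Serial", ...) as a command, producing the
# same "command not found" cascade seen in the boot output above.
# The drive model string is a placeholder, not from the original system.
printf 'Device Model: PLACEHOLDER\nSerial Number: XXXX\n' | sh 2>&1 || true
```

That would make this a startup-script bug in the release rather than a hardware fault, which fits the manual up/downgrade fixing it.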
  6. First time seeing this warning. The machine is 4 years old now; the processor (Ryzen 3700X) is not as old, though, as the original was replaced under warranty just shy of 3 years in, when one of the memory controller's channels died.
  7. So says the Fix Common Problems plugin, but the directions to address it are stale. Anyway, here's my diagnostics: backup-diagnostics-20230326-2209.zip. Help appreciated.
  8. I just found this, so, sort of answering my own question here: https://docs.ibracorp.io/docker-compose/docker-compose-for-unraid I also found this, looks very promising: https://dev.to/felizk/remote-deploy-with-docker-compose-to-unraid-1dai
  9. Can anyone summarize what needs to be done to set up and use this, for someone who is familiar with Unraid and its use of Docker, but maybe not well versed (yet) with Docker Compose? This would be super helpful. My case is that I have a docker-compose.yaml for a web app I've written that I'd like to host on my Unraid. I don't yet have this plugin installed, and I don't want to go through the slow process of publishing my work on GitHub and creating an Unraid app that gets full (but inadequate) support via the official route... I wish I could sit here and digest ten long forum pages, but that's just not in the cards atm.
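For anyone in the same spot, the general shape of a self-hosted Compose file on Unraid might look like the sketch below. This is illustrative only: the service name, image, port, and paths are placeholders, not values from any post here; the one Unraid-specific habit shown is keeping bind mounts under /mnt/user so data survives container rebuilds.

```yaml
# Illustrative docker-compose.yaml sketch; all names and paths are
# placeholders, not taken from the original post.
services:
  webapp:
    image: mywebapp:latest          # assumption: a locally built image
    ports:
      - "8080:8080"
    volumes:
      - /mnt/user/appdata/mywebapp:/config   # Unraid-style appdata bind mount
    restart: unless-stopped
```

With a file of this shape in place, the Compose plugin (or `docker compose up -d` from a console) can bring the stack up without publishing anything to GitHub or Community Applications first.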
  10. I have a 3TB drive that is formatted btrfs: TOSHIBA_DT01ACA300_Y5UNPWAGS (sde), dev 1. I mount it and then reboot. I hear the thing should automatically be mounted again; it isn't. I was thinking of moving one of my caches to be an unassigned device rather than a cache pool, because I don't need its folders listed as shares. It is simply going to be a backup target exposed to a Docker app. Logs are pretty clean, and I have the requisite plugins. What am I missing here? backup-diagnostics-20220526-2219.zip
  11. All of a sudden, all Docker containers show version "not available". The log shows this, very frequently:

     May 25 23:11:21 backup root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token

     What led to this, probably, was that I copied over the contents of a cache pool (one 3TB drive) to an unassigned 16TB drive, then took the array offline and reassigned the drive in the cache pool to that larger drive (no, I didn't use the user mount). The other thing I did was set up to encrypt my drives, but then I stopped without actually encrypting anything once I realized that I didn't stand to gain anything, assuming nobody has physical access to my box. Diagnostics attached. backup-diagnostics-20220525-2316.zip
  12. Sounds like a workaround but not resolution. Did anything else present itself? I have an update ready and am nervous!
  13. I think this has to do with the IOMMU: https://docs.microsoft.com/en-us/windows-hardware/drivers/display/iommu-model Still learning myself...
  14. I also see this very same message, in the very same place, at the end of login, with v6.9.2. Nevertheless, I do have a VM that works. Perhaps it is not running optimally; who knows...
  15. Yeah, the thing seems to have died: expired or invalid SSL setup.
  16. @trurl yes, that is something I plan to address, though that may not have been clear in the message a few hours ago. @Squid yes, that is the symptom I'm describing; I need you to tell me the cure :). The VM service is not enabled yet, for the reasons explained above. Nobody has bothered to explain that anywhere, AFAICT. Perhaps if I just enable it, things automagically happen? How would it know to put it on a cache? etc.
  17. Sure, here you go! Note that I have a parity sync/data rebuild in progress, as I'm upgrading my parity to a 2nd, bigger parity drive. Any thoughts as to why the non-user shares are missing, and how to resolve it, are appreciated. backup-diagnostics-20220317-1435.zip
  18. Related to HellDiverUK's suggestion, I actually want the system share but don't have it. I'm not finding guidance on setting up non-user (system) shares for system, domains, and appdata. I have Unraid 6.9.2, upgraded over the years from the early days (pre-5.0, pre-VM and fancy Docker support). For me, appdata is a user share, and in there I see the docker.img file. I'd really like to reorganize these "system" things more along the lines of SpaceInvaderOne's videos, as he describes in the "Part 4 multiple cache pools" video.

     I've today moved it to modern hardware. The previous hardware was an Intel Core 2 Duo 4GB machine, so it ran Docker but not VMs (and I don't think it had virtualization support at the CPU level anyway). Now I'm planning on enabling VMs so I can pass through my Windows 11 setup, which is on a 1TB NVMe SSD (or truly virtualize it so I can use the SSD as a cache drive and leverage that 80% of free space on it for other things, coupled with the VM/appdata backup plugins). I may also buy another drive for a 2nd cache, but I only have one M.2 NVMe connector, so it would be SATA. Anyway, any guidance on setting up the missing system shares would be appreciated!
  19. I am seeing that the install option is not available to me for this plugin, strangely. I am running 6.9.2, and am adding a 2nd parity (so it is doing a parity sync/data rebuild). I guess that is why...?
  20. Is it normal for the parity drive to report no SMART test result when it is done? Because that has happened twice now...
  21. I find myself in this same dark hole of Docker not starting. Attached are my diagnostics. Extended SMART tests on all drives turn up nothing (although the next day, one of the drives that is very old had no record of the SMART report; not sure if that is "normal"). Then I revisited later and it did see the test... weird. Actually, what I think happened here is that I kicked off SMART extended tests on multiple drives at once, and the system can't keep them straight when you do that. As I revisit the thing now, it sees the test again for that drive as completed without error, and a different drive as test interrupted. I think it is severely confused, so I'm kicking off the supposedly interrupted one again.

     When I first saw the error, I decided to restart the thing. Upon reboot it started a parity check. After 8 hours it finished successfully, having supposedly corrected tens of errors. Restarted again; still no Docker. I thought that was really odd, so I kicked off another parity check, but not correcting errors. After just a few minutes it reported finding 17 errors. I immediately cancelled and thought "oh no, my parity might be bad." Then the SMART tests surfaced nothing.

     SMART attributes on the ancient WDC VelociRaptor show this, which I believe points not at the drive but at a cable or memory:

     199 UDMA CRC error count, flag 0x003e, value 200, worst 200, threshold 000, type Old age, updated Always, failed Never, raw value 31

     I've gone ahead and ordered a couple of drives to be ready to act, but also to move away from ReiserFS if I can figure out how to do that. Any help is appreciated. backup-diagnostics-20220315-2240.zip
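Attribute 199 can be read mechanically: the raw value (31 in the paste above) is a count of CRC errors on the SATA link, which generally implicates the cable or connector rather than the platters, and it only ever grows. A small sketch of pulling the raw value out (the row is hardcoded from the post; the column order assumed here is the standard smartmontools attribute-table layout):

```shell
# SMART attribute row as pasted above (assumed column order: ID, name,
# flag, value, worst, thresh, type, updated, when_failed, raw_value).
row='199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 31'

# The raw value is the last field; a nonzero, growing count points at the
# SATA cable/connector path, not the disk surface.
echo "$row" | awk '{print $NF}'
```

Re-checking this value after a cable swap tells you whether the problem is still occurring, since the counter never resets on its own.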
  22. Thanks. Well, a reboot of the Unraid box did see my drives mounted normally, meaning I can write to the file system again. I realize that I should probably make the switch, and if I saw an easy button for that, I would. With 3 little kids, my time is soooo strapped! Anyway, here's my full diagnostics. backup-diagnostics-20211219-2240.zip
  23. I'm also seeing a drive in read-only mode, noticed because of issues I was seeing with Docker, although mine is a ReiserFS setup. Any suggestions welcome!

     Oct 27 20:16:13 backup kernel: md: recovery thread: exit status: 0
     Oct 27 22:45:55 backup kernel: REISERFS error (device md2): vs-4080 _reiserfs_free_block: block 356413332: bit already cleared
     Oct 27 22:45:55 backup kernel: REISERFS (device md2): Remounting filesystem read-only
     Oct 27 22:45:55 backup kernel: ------------[ cut here ]------------
     Oct 27 22:45:55 backup kernel: WARNING: CPU: 1 PID: 4714 at fs/reiserfs/journal.c:3379 journal_end+0x44/0xa3 [reiserfs]
     Oct 27 22:45:55 backup kernel: Modules linked in: veth xt_nat xt_tcpudp macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter reiserfs nfsd lockd grace sunrpc md_mod ip6table_filter ip6_tables iptable_filter ip_tables x_tables e1000e sky2 coretemp kvm_intel i2c_i801 kvm i2c_smbus i2c_core ata_piix thermal fan button intel_agp intel_gtt agpgart acpi_cpufreq [last unloaded: e1000e]
     Oct 27 22:45:55 backup kernel: CPU: 1 PID: 4714 Comm: shfs Not tainted 5.10.28-Unraid #1
     Oct 27 22:45:55 backup kernel: Hardware name: Dell Inc. Vostro 200/0CU409, BIOS 1.0.16 09/20/2008
     Oct 27 22:45:55 backup kernel: RIP: 0010:journal_end+0x44/0xa3 [reiserfs]
     Oct 27 22:45:55 backup kernel: Code: 08 41 83 f8 01 7e 1d 48 8b 3f 48 c7 c1 48 d3 21 a0 48 c7 c2 c8 90 21 a0 48 c7 c6 5d d3 21 a0 e8 91 45 ff ff 83 7d 14 00 75 0a <0f> 0b 41 b8 fb ff ff ff eb 50 8b 45 08 ff c8 85 c0 89 45 08 7e 39
     Oct 27 22:45:55 backup kernel: RSP: 0018:ffffc900013bfe50 EFLAGS: 00010246
     Oct 27 22:45:55 backup kernel: RAX: ffff888100a472c0 RBX: 00000000fffffffb RCX: 0000000000000000
     Oct 27 22:45:55 backup kernel: RDX: ffffc900013bfe68 RSI: ffffffffa021d2b9 RDI: ffffc900013bfe60
     Oct 27 22:45:55 backup kernel: RBP: ffffc900013bfe60 R08: 0000000000000000 R09: 00000000008373b8
     Oct 27 22:45:55 backup kernel: R10: ffffc900013bfd50 R11: ffff888103343000 R12: ffffffffa0218d40
     Oct 27 22:45:55 backup kernel: R13: 00000000ffffff9c R14: ffff888094906d80 R15: ffff88806b0dc938
     Oct 27 22:45:55 backup kernel: FS: 0000145cbe835700(0000) GS:ffff88812bc80000(0000) knlGS:0000000000000000
     Oct 27 22:45:55 backup kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Oct 27 22:45:55 backup kernel: CR2: 00002dbca07ad008 CR3: 000000007fa50000 CR4: 00000000000006e0
     Oct 27 22:45:55 backup kernel: Call Trace:
     Oct 27 22:45:55 backup kernel: reiserfs_evict_inode+0xbf/0x105 [reiserfs]
     Oct 27 22:45:55 backup kernel: evict+0xb7/0x16b
     Oct 27 22:45:55 backup kernel: do_unlinkat+0x13c/0x1d3
     Oct 27 22:45:55 backup kernel: do_syscall_64+0x5d/0x6a
     Oct 27 22:45:55 backup kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
     Oct 27 22:45:55 backup kernel: RIP: 0033:0x145cbf96c277
     Oct 27 22:45:55 backup kernel: Code: f0 ff ff 73 01 c3 48 8b 0d 16 6c 0d 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 b8 57 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e9 6b 0d 00 f7 d8 64 89 01 48
     Oct 27 22:45:55 backup kernel: RSP: 002b:0000145cbe834c38 EFLAGS: 00000213 ORIG_RAX: 0000000000000057
     Oct 27 22:45:55 backup kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 0000145cbf96c277
     Oct 27 22:45:55 backup kernel: RDX: 0000145cbe834c60 RSI: 0000145cbe834c60 RDI: 0000145cac01e200
     Oct 27 22:45:55 backup kernel: RBP: 0000145cbe834d10 R08: 0000000000000001 R09: 735f3636625f6c6c
     Oct 27 22:45:55 backup kernel: R10: fffffffffffff118 R11: 0000000000000213 R12: 000000000000031b
     Oct 27 22:45:55 backup kernel: R13: 0000145cbe533038 R14: 000000000046aa10 R15: 0000000000000000
     Oct 27 22:45:55 backup kernel: ---[ end trace 14579543766ef5b2 ]---
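When ReiserFS trips an internal error like the vs-4080 above, it remounts read-only to protect itself. The usual next step (hedged: this is standard reiserfsprogs usage, not advice specific to this thread) is to stop the array and run the filesystem checker against the md device, read-only check first. The sketch below only prints the commands rather than executing them, since /dev/md2 is specific to the original poster's array:

```shell
# Dry-run helper: print the check commands instead of running them,
# because /dev/md2 belongs to the original poster's array.
# reiserfsck --check is read-only; --fix-fixable should only be run
# afterwards, and only if --check reports fixable problems.
cmds='reiserfsck --check /dev/md2
reiserfsck --fix-fixable /dev/md2'
printf '%s\n' "$cmds" | sed 's/^/would run: /'
```

On Unraid the checker is run against the mdN device (not sdX) so that parity stays in sync with any repairs.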
  24. Homebridge is in need of an update. 2018? That's quite a while back. I dumbly updated it via the GUI, which ain't gonna work, and now it doesn't boot.

     [12/19/2021, 10:11:45 PM] Failed to save cached accessories to disk: EROFS: read-only file system, open '/root/.homebridge/accessories/cachedAccessories'
     /bin/sh: 1: cannot create /root/.homebridge/log.txt: Read-only file system
     npm WARN notsup Unsupported engine for [email protected]: wanted: {"homebridge":"^1.3.8","node":"^16.13.1"} (current: {"node":"12.20.1","npm":"6.14.10"})
     npm WARN notsup Not compatible with your version of node/npm: [email protected]