thany

Everything posted by thany

  1. I see this docker requires as many as 7 ports to be mapped. In the settings they are just called Host Port 1-7. 1. Can you please give them meaningful names? 2. Until then, can you list what they're for?
  2. I was never asked to do that when I installed the docker, so how could I possibly have known to do this? Or indeed, how should I have known that it's apparently /config? Edit: scratch that, it's under "more settings", and it's mapped to appdata/unms perfectly fine, and there is stuff in there. But I've also got another directory, appdata/unifi, that also has data. Could it have been renamed or something, and reset itself because of that?
  3. My UniFi docker (you call it unms) seems to have lost its login information. It seems it's not being persisted anywhere, which makes sense, because there's no directory mapping. Nobody wants to have to go through the setup every time Unraid goes through a reboot, the docker updates, or some other stop/start cycle. Can this be fixed please?
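For reference, a sketch of the kind of mapping that would fix this, assuming the container keeps its state under /config (a common convention for these images, but unverified here) and that the host path follows the usual Unraid appdata layout. In Unraid you would add this as a Path entry in the container template rather than running docker by hand:

```shell
# Hypothetical sketch; host path, container path, and image name are assumptions.
# Persist the controller state by mapping a host directory into the container:
docker run -d --name unms \
  -v /mnt/user/appdata/unms:/config \
  <image-name>
```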
  4. I just found that couchdb isn't persisting its config. I added CORS configuration, and it started doing what I needed it to do perfectly. But as soon as the docker had to restart (or maybe after an update? not sure, actually), this bit of configuration was gone, and all config fields were back to what looks like the default/initial config. Can this be fixed please? This is probably a matter of the dockerfile or the field definitions in the Unraid template. I'm pretty sure this is not a bug in couchdb itself.
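One hedged workaround until the template is fixed: CouchDB reads extra ini files from /opt/couchdb/etc/local.d, so keeping the CORS settings in an ini file inside a volume-mapped directory should survive restarts. The host path is an assumption based on a typical Unraid appdata layout:

```shell
# Sketch, not a verified fix: write the CORS config to the host side of a
# volume mapping (host path is an assumption; adjust to your appdata share).
APPDATA=/mnt/user/appdata/couchdb
mkdir -p "$APPDATA/local.d"
cat > "$APPDATA/local.d/cors.ini" <<'EOF'
[chttpd]
enable_cors = true

[cors]
origins = *
credentials = true
EOF
# Then map $APPDATA/local.d -> /opt/couchdb/etc/local.d in the container.
```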
  5. I'm getting only 9MB/s to my SSD cache, just by copying a big file over Windows networking to a share that uses an SSD as cache. Such SSDs are fast enough to saturate the 1Gb ethernet connection from the client, and the unraid server is connected at 10Gb. There are no other processes accessing the array. Even without cache, writing directly to the HDDs, it should still easily saturate a 1Gb network. The other way, server to client, is even worse at around 6MB/s. I also don't see anything hogging the CPU or memory, or any other significant I/O. It's also not intermittent: the slow speed is constant. It doesn't improve if I leave the file copy going, and it doesn't improve if I let things settle for a few minutes and then try again. Another thing worth noting is that on the "main" tab, I see 0.0B/s for every disk/SSD involved, all the time, in both directions. It's as if it's copying to/from the memory cache, which would be great, but that should be able to saturate ten network connections with bandwidth to spare, so it doesn't explain why it's so slow. Rather, I think something is broken. I would expect at least 100MB/s in each direction, and preferably see that reflected on the "main" tab too. So what's blocking it up? unraid-diagnostics-20231228-1504.zip
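A hedged way to narrow a problem like this down is to test the raw network and the raw disk separately, bypassing SMB entirely; the tools are standard, but the server name and the cache path are assumptions:

```shell
# Raw network throughput, independent of any disk:
iperf3 -s                      # run this on the Unraid server
iperf3 -c unraid               # run this on the client; expect ~110 MB/s on 1Gb

# Raw write speed to the cache SSD, bypassing the network and the page cache
# (path is an assumption; oflag=direct avoids measuring RAM):
dd if=/dev/zero of=/mnt/cache/test.bin bs=1M count=2048 oflag=direct
rm /mnt/cache/test.bin
```

If both numbers come out healthy, the bottleneck is somewhere in the SMB layer rather than the hardware.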
  6. It was alright for a while, but it suddenly started happening again. I double-clicked on a link in a reply that I was quoting (not yet submitted), which pops up the link editor. That would not go away, so I couldn't proceed to submit my reply (well, not without hackery in the devtools). Cookies cannot cause such behaviour, and if clearing the cache were to solve it (not tried yet), it would only mean you're telling the browser to cache your files too aggressively, or invalidate them incorrectly. Either way, it's 100% reproducible now. I see a "MultiQuote" tooltip just floating around, and I'm afraid to attach a screenshot, because the popup for it will never go away, leaving me effectively unable to attach one. Edit: clearing the cache (and refreshing the page) doesn't help.
  7. Yup, that was it. I must've added it at some point, yes... Totally missed this one. That wasn't it, because of the above, but I tried it anyway and it said it cleaned up 6 "configurations". I wish it would say which configurations it cleaned up; it doesn't feel right when it cleans up something without me knowing what that is. Or alternatively, make it so that cleaning up won't be necessary.
  8. That feels like something I would have to put somewhere deep into the OS, which then does its job. But if I ever need to touch it for whatever reason, I'm going to have the greatest difficulty finding where it got buried. So basically the answer is "no, it's impossible, unless you like to hack it in", which means that if it ever stops working (or starts working too well), I'm on my own. I would rather stick with supported features, which is why it would be nice to be able to use the "scheduler" for more than just the mover, and have the option to make it perform a selection of actions at certain intervals. Basically, make the scheduler more configurable and able to do more. Then it's also (more) visible in the web UI, so it's not buried in some script or config file that I won't remember after a week. It's okay though. Thanks for the reply, but no thanks.
  9. Sorry, I've been on holiday since my TS. Here are my diagnostics. But as said, such a file/directory simply doesn't exist anywhere, as proven by that `find` command. Or did I do it wrong? And surely if Windows discovered I have no access to a share, it would show an error about it, but that time I saw *nothing* pop up. After upgrading unraid to the latest version, however, I get the error "The network path cannot be found", which usually means the server name is incorrect, even though other shares on the same server work fine. unraid-diagnostics-20231221-1411.zip
  10. When connecting to my unraid through a VPN, I don't get any output. When I connect locally, it works fine. I do get the popup "updating plugins" or "updating the container", but it never gets any content. For the plugin updater, I decided to just hit the close button after some time, and refreshed the page only to see that the plugins did actually get updated. That's why my assessment is that *only* the verbose output in the popup isn't working, whereas the actual updating process in the background is (or seems to be) working totally fine. As for updating a docker, the three loading "balls" keep animating seemingly forever. Again, after allowing it some time to do its thing, I refreshed the page to find the docker updated. Even stranger, when I open the network tab in the devtools, the output starts working. Maybe something isn't quite right with the XHR calls, because the "Main" page wasn't showing any meaningful content either. I thought it was just slow (which it is), but now that I'm typing this out, it might be related. Any ideas?
  11. "Just" as if it's obvious 😕 It's not. Can you please add some sensible defaults?
  12. Okay, like I said, can this be added?
  13. When I go to \\unraid I can see a share called "downloading" that no longer exists. It's been a while since I've accessed unraid this way, but iirc, this share existed on a ZFS pool that I've since removed. Somehow the share still lingers. Trying to open it in Windows just doesn't do anything; no error or anything, as if I never clicked it. The confusing bit is that it doesn't show up on the shares tab in the web UI. So my question: where exactly does it pull shares from when returning the list of shares over SMB? I have already done `find . -name "downloading"` in the root, with 0 results. So at least we know it's not a file/directory with that name causing this phantom share to exist. I've also created a share with that name and then deleted it. That doesn't fix it either. I'm using Unraid 6.12.4.
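A few hedged places to look for the stale definition, assuming a typical Unraid layout where per-share config lives on the flash drive and the live Samba config is generated at boot (paths are assumptions, not confirmed for 6.12.4):

```shell
# Where the phantom share definition might linger (paths are assumptions):
ls /boot/config/shares/             # per-share .cfg files kept on the flash drive
grep -ri "downloading" /etc/samba/  # the generated Samba config actually in use
smbclient -L localhost -N           # list shares exactly as SMB clients see them
```

If the name shows up in the generated Samba config but not under /boot/config/shares, that would point at a stale include file rather than a real share.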
  14. Sorry, I know this has been asked before, but in my defence, it was asked many years ago. That said, I do know this can be done through a userscript. But I prefer not to install userscripts, because they feel like going outside the scope of the system you're applying them to. Kind of like applying user scripts/styles to a website - it may break at any future update. So here's hoping Unraid now supports this option natively. I just want to add a schedule where I set at which times on which days the disks are required to be spun up. The flip side would be to just never spin them down. And sure, some drives are designed for that, but I'm not sure that mine are. Sure, they are enterprisey-like drives, but that doesn't mean they are designed for 24/7 spinning, and even then, that still doesn't mean spin cycles are actually bad. And besides, I like to save power where I reasonably can, because I kind of like this planet 😊 Anyway, I digress. If this feature exists, where is it please? If it doesn't exist, pretty please add it? 🙏🏻
  15. I've just installed CouchDB with the default settings, and the first thing it does after starting is stop itself. The log says: ************************************************************* ERROR: CouchDB 3.0+ will no longer run in "Admin Party" mode. You *MUST* specify an admin user and password, either via your own .ini file mapped into the container at /opt/couchdb/etc/local.ini or inside /opt/couchdb/etc/local.d, or with "-e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password" to set it via "docker run". ************************************************************* I understand what it says, but I don't know how to act upon it. I don't know how to pass command-line arguments, or where I'm expected to put an ini file, or even what exactly to put in it. Dear author, could you please change the docker such that a default setup at least works? And/or add fields in the docker template to enter a username/password? Much obliged.
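The two options the error message describes, sketched out; the credential values are placeholders, and in Unraid the environment variables would be added as Variable fields in the container template rather than on a docker command line:

```shell
# Option 1 (from the error message): environment variables at container start.
docker run -d --name couchdb \
  -e COUCHDB_USER=admin \
  -e COUCHDB_PASSWORD='choose-a-password' \
  couchdb:3

# Option 2 (from the error message): an ini file mapped into the container
# at /opt/couchdb/etc/local.ini, containing an [admins] section:
#   [admins]
#   admin = choose-a-password
```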
  16. It's not working. I *know* that I'm accessing a file from the main array, I *know* which share I'm accessing it on, and I can *see* with my own two eyeballs that my access is producing disk I/O activity. All three monitors are enabled. Even though I might be seeing hundreds of MB/s being written and/or read, the File Activity plugin happily shows no activity whatsoever. Then why do I need the File Activity plugin? Because sometimes I also see some kind of ghost file activity, about 5MB/s being written, that for whatever reason blocks up most other I/O. And that doesn't show up either, even though that's actually the important bit that I need to know about. So how do I make it actually show activity that is definitely occurring?
  17. I'm just trying to make the simplest possible ZFS pool: one drive, no fancy settings. Here's what I did: 1. Stop the array 2. Add pool - giving it a name and 1 slot 3. Assign to it a totally empty new spinning rust drive 4. Click on the pool name 5. Set the filesystem to zfs 6. Set autotrim to Off 7. Apply & Done 8. Start the array Actual result: that one drive says "Unmountable: Unsupported or no file system" Expected result: since I said the pool should use the ZFS filesystem, I would expect it to give new (empty!) disks that filesystem. It doesn't make sense to yell at the user about the filesystem not being present, when the OS probably should have created the filesystem as part of creating the pool. So my question is two-fold: 1. What is the responsibility of the OS when it comes to creating a pool of new devices and creating filesystem structures on it? 2. Where can I find a concise but complete guide on how exactly to create a ZFS pool? For now, I'm just going to use the FORMAT option near the bottom of the Main tab, in hopes that it will satisfy the new pool with an actual filesystem. I'm also hoping that in a future version of unRAID, this process will have received some more love, giving the user a smoother experience without being yelled at.
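For context, this is roughly what that format step does under the hood with plain ZFS tooling, outside of Unraid's UI; the pool name and device are assumptions, and this destroys any data on the device:

```shell
# Hedged sketch of single-drive pool creation (name and device are assumptions):
zpool create -o autotrim=off mypool /dev/sdX
zpool status mypool   # confirm the pool is ONLINE with the single vdev
```

With ZFS there is no separate mkfs step: creating the pool is what writes the filesystem structures, which is why an assigned-but-never-formatted drive shows up as unmountable.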
  18. Ok, so the name "historical devices" is just a bit unfortunate. It should be something like "disconnected devices". Makes more sense to me anyway, not sure why the current name was chosen.
  19. I wonder what the purpose of Historical Devices is. From what I can see, it lists devices that are no longer present in the system. So why would I need to keep their information around? What is the actual real-world use case for this feature? Or alternatively: what problem does it solve?
  20. Same problem here. I'm on a 4K monitor at 27 inches, which is set to 200% scaling in the display settings. I think there's plenty of space for an extra column: but I actually have to zoom *out* to below 100% in the browser in order to get three columns. If we're being given a custom layout, the number of columns should also be custom. Two things concern me: 1) How could custom rearranging possibly work if the number of columns is not fixed? Where do things go from a column that is no longer allowed to be visible? 2) Regardless of the number of columns, why does the default layout not fill up the available columns equally? It doesn't feel right that my left column (again, default layout!) is four times longer than the right column.
  21. For now, I watched this video: I decided to go with his last approach, which is allegedly the safest: 1. Stop the array 2. Unassign the faulty device 3. Start the array in maintenance mode 4. Repair the XFS filesystem with the -L option (not sure if this is even necessary, because of the next steps) 5. Stop the array once more 6. Assign the drive back into the array where it belonged 7. Start the array The array is now rebuilding. This feels like a good sign to me, but let's see where it goes.
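The repair step from those instructions, sketched; the device name is an assumption and should be verified first, and this should only be run with the array in maintenance mode:

```shell
# Dry run first: report problems without changing anything on disk.
xfs_repair -n /dev/md1p1

# The actual repair with -L: zeroes the journal before repairing. This can
# discard the most recent metadata updates, which is why it's a last resort.
xfs_repair -L /dev/md1p1
```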
  22. First, I'll explain what exactly happened. I hot-added a drive to the system, not to the array; I don't intend to add it to the array at all. Everything seemed fine, except that in Unassigned Devices, one of the main array drives showed up. I didn't touch it, because the array seemed to be intact. The drive I added also showed up, and I mounted it. Then I added another drive, which also showed up in Unassigned Devices. Perfect. I wanted to see what was up with one of my drives showing in UD, so I attempted to stop the array. Nothing happened. Huh. So I manually stopped my VMs, and released files still locked by my pc. Stop the array. Still nothing. Alrighty then, a reboot it is. After about 3 minutes, the system came back up again, and this time, that one disk that showed up in UD doesn't anymore. Instead, it shows in the array as "Device is disabled, contents emulated." I guess emulated contents means it's using parity to serve the missing data. Doesn't matter, let's first solve the actual problem. In the log I was able to find:
Jun 28 00:35:37 unraid kernel: XFS (md1p1): Corruption warning: Metadata has LSN (1:155054) ahead of current LSN (1:149811). Please unmount and run xfs_repair (>= v4.3) to resolve.
Jun 28 00:35:37 unraid kernel: XFS (md1p1): log mount/recovery failed: error -22
Jun 28 00:35:37 unraid kernel: XFS (md1p1): log mount failed
Jun 28 00:35:37 unraid root: mount: /mnt/disk1: wrong fs type, bad option, bad superblock on /dev/md1p1, missing codepage or helper program, or other error.
Jun 28 00:35:37 unraid root: dmesg(1) may have more information after failed mount system call.
Jun 28 00:35:37 unraid emhttpd: shcmd (38): exit status: 32
Jun 28 00:35:37 unraid emhttpd: /mnt/disk1 mount error: Unsupported or no file system
So I stopped the array again, so that at least nothing would try to use a filesystem that can't be mounted. Then I started the array in maintenance mode, with the intent to do a filesystem check.
The check went through fairly quickly, and the result is as follows:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
sb_fdblocks 2596075000, counted 2283514869
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 12
        - agno = 13
        - agno = 14
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 2
        - agno = 4
        - agno = 5
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - agno = 6
        - agno = 12
        - agno = 13
        - agno = 1
        - agno = 7
        - agno = 14
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.
I'm not sure how to interpret this. It feels positive, but only because I'm not seeing anything that is yelling at me. The only thing that concerns me is that "replay the log" sentence; I have no idea how to do that.
For good measure, let's also supply the output of `blkid`:
root@unraid:~# blkid
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="272B-4CE1" BLOCK_SIZE="512" TYPE="vfat"
/dev/loop1: TYPE="squashfs"
/dev/sdf9: PARTUUID="c1c50c9c-48d1-1044-a906-d698d6bb5fba"
/dev/sdf1: LABEL="ssd" UUID="17022178612354483592" UUID_SUB="10143665043133095078" BLOCK_SIZE="512" TYPE="zfs_member" PARTLABEL="zfs-6a86c4c98cead86a" PARTUUID="610d4be7-859e-c04c-ac9c-64c88c2b549e"
/dev/sdd9: PARTUUID="8863e506-6f9e-c548-a0d6-4dd5402ca53d"
/dev/sdd1: LABEL="ssd" UUID="17022178612354483592" UUID_SUB="16910454766111439637" BLOCK_SIZE="512" TYPE="zfs_member" PARTLABEL="zfs-59bc099f2ed51ee4" PARTUUID="bb6c7dd3-52f9-1d41-ba9b-56fc0cb6b006"
/dev/md2p1: UUID="3d4faa0b-eb20-42d8-9635-98fa3154070c" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdb1: UUID="c4c67326-2450-46d6-b0bb-8f99c9cf535c" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdk1: PARTLABEL="Microsoft reserved partition" PARTUUID="0eddd9a7-6a13-486b-9084-8d635e560bc1"
/dev/sdk2: LABEL="Archive" UUID="32BB-D30F" BLOCK_SIZE="512" TYPE="exfat" PARTLABEL="Basic data partition" PARTUUID="68c0d2c6-c5c8-4f3f-918a-d00c7cb1b3bf"
/dev/sdi9: PARTUUID="e95523c3-3274-9e41-beea-18bed5fe8502"
/dev/sdi1: LABEL="ssd" UUID="17022178612354483592" UUID_SUB="4254412105164419679" BLOCK_SIZE="512" TYPE="zfs_member" PARTLABEL="zfs-b465be5a5071c8f5" PARTUUID="360de15b-cd6f-1444-ba23-05d364b8f2f5"
/dev/md1p1: UUID="10f79463-d4ab-4d1b-84bd-5a04a4934de4" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdg1: UUID="10f79463-d4ab-4d1b-84bd-5a04a4934de4" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="9d6ced82-21c9-43ba-9f30-1151fe599def"
/dev/loop0: TYPE="squashfs"
/dev/sde1: UUID="3d4faa0b-eb20-42d8-9635-98fa3154070c" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="7dbb0b26-8bca-4cb1-9b99-05271e644bef"
/dev/sdc9: PARTUUID="e324a5e9-e84b-7e43-9241-0e58097c03e8"
/dev/sdc1: LABEL="ssd" UUID="17022178612354483592" UUID_SUB="4258666910141786191" BLOCK_SIZE="512" TYPE="zfs_member" PARTLABEL="zfs-2ca269b663a3f652" PARTUUID="a8895a61-69b4-d948-b0b5-4f13f1389da0"
/dev/sdl1: PARTUUID="fa7a5bb3-1e71-263c-49de-d9d5243e7b88"
/dev/sdj1: UUID="3d0fba2a-df71-412b-9794-e386cef8f9b9" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="eef81965-07a6-441c-939e-04e281dc176d"
/dev/md3p1: UUID="10b78442-e0fa-4ee8-851c-21785b3fb351" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdh1: UUID="10b78442-e0fa-4ee8-851c-21785b3fb351" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="fe020cb4-b7dd-46c7-8039-32cd9adca459"
There's that /dev/md1p1 again: blkid detects an xfs filesystem on it, which seems positively correct. So I guess the disk is physically fine after all this. I shall also attach a diagnostics zipfile. I think it might be best to pull those two drives for now, since the trouble all started when I added the first one. Maybe that will even fix the problem magically. Currently I just want to get the array up and running again. That's priority number 1. After that, and *only* after that, I would like to evaluate how this could have happened, what the actual problem was, and how it can be prevented in the future. And perhaps even come up with a plan to bake protection against this failure into unRAID itself, if at all possible. But for now, what do I do to get the array up and running again please? unraid-diagnostics-20230628-0109.zip
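On the "replay the log" question: XFS replays its journal automatically on a successful mount, so that sentence just means "mount it once". A hedged sketch, with the device name and mount point as assumptions:

```shell
# Replaying the log simply means mounting the filesystem once; XFS recovers
# the journal on mount. Device and mount point are assumptions.
mkdir -p /mnt/check
mount -t xfs /dev/md1p1 /mnt/check && umount /mnt/check

# If the mount itself fails (as it did in the syslog above), xfs_repair -L
# is the remaining option, at the cost of the unreplayed log entries.
```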
  23. Here I'm triggering multiple popups in the header: And a user popup and sort dropdown at the same time: And here I'm able to popup multiple @-mentions at the same time:
  24. It worked perfectly! The only thing that wasn't quite clear is that the array has to be stopped in order to modify anything around ZFS. I wasn't used to that, but found out quickly enough.
  25. Thank you. If I'm not mistaken, it says I can just create a new pool and am allowed to assign existing ZFS devices to it, which will then "import" the previously created pool. If that works, that's excellent, but I've made a data backup just to be on the safe side, in case it decides to clear all my data as part of creating the pool. The `zpool status` command reports my pool as `raidz1-0`, so I think I should be fine, since `raidz1` appears to be supported. Fingers crossed.
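For anyone wanting a sanity check before and after letting the UI do the import, the plain ZFS commands are read-only when used like this:

```shell
# With no arguments, zpool import only scans and lists importable pools;
# it changes nothing on disk.
zpool import

# After the import, the raidz1-0 vdev and all member disks should show ONLINE.
zpool status
```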