Interstellar

Members
  • Posts: 676
  • Gender: Undisclosed


Interstellar's Achievements

Enthusiast (6/14)

12 Reputation

  1. Unsurprising that this is leading to some concern and misconceptions. It's good that we long-time licensees are grandfathered, but at the end of the day I paid $100 or whatever 10 years ago and have had many thousands of dollars' worth of development for that money. That sadly does not pay the bills at Limetech! With this new plan UnRAID keeps working, albeit without updates, so you haven't lost functionality; you technically got what you paid for, which was the use of UnRAID in perpetuity and updates for a year. I'd imagine quite a few basic users don't even remember to update that regularly anyway! Security is an interesting topic, and I'm not sure which side of the fence I sit on: if you want to be up to date then pay up, or should security updates for an additional year be covered? From LT's PoV that can only be done by maintaining multiple branches, and I don't think LT is big enough for that, so I wouldn't blame them for saying the former... It wasn't so long ago that you'd pay regularly for new SW versions; I used to buy every new version of Lightroom... Long story short, a balance needs to be struck and I don't blame LT for going down this route; it seems a reasonable balance. If I was buying again I'd just get the top tier and be done with it.
  2. Do you have a GPU passed through to one of them? If so, sleep that one and see what happens - it gave an immediate drop in power usage for me.
  3. This issue is still present in 6.12.6. I can provide a syslog at the weekend, as I may have some spare time, and I could also run certain commands at the same time if needed; so if there is any useful information that doesn't come with the diagnostics, please provide the commands now. Thanks.
  4. Still looking for a way to manually start a verification process, rather than temporarily setting a verification schedule and then, once it's started, turning it off again...
  5. Coming back to this, as having to keep a VM started to keep the GPU asleep is quite annoying. Are there any commands or investigations I can do to help? I'm going to avoid removing the GPU because it's not easy to do and the server runs the house and the internet 😃 Edit: To reiterate, with an AMD GPU: shutting down the VM in 6.11.5 results in roughly the same power level as a VM asleep in 6.12, whereas shutting down the VM in 6.12 results in much higher power levels. Given the errors I saw in the log above, it could be that after a VM shutdown the passed-through GPU no longer goes into D3cold, versus something else, but I can't find any commands that would tell me which state the GPU is in.
  6. Finally restarted. Although: if I force-shutdown a hibernating VM and then try to restart it, I get this in a loop...

     vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
     vfio-pci 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
     vfio-pci 0000:03:00.1: Unable to change power state from D3cold to D0, device inaccessible
     vfio-pci 0000:03:00.1: Unable to change power state from D3cold to D0, device inaccessible

     Which, whilst forcing a restart, gives some hints as to the problem: I assume the GPU is in D3cold when hibernating (and in 6.11.5 was in D3cold when shut down) but is no longer going into D3cold when shut down in 6.12. I can't seem to find a command that I can run to check what state the GPU is in - does anyone know how? (See the power-state check sketch after this list of posts.)
  7. Restarted... As requested:

     zfs list
     NAME                  USED   AVAIL  REFER  MOUNTPOINT
     master                9.11T  3.48T  104K   /mnt/master
     master/Mac HD 2       3.94T  3.48T  3.94T  /mnt/master/Mac HD 2
     master/Mac HD 3       5.09T  3.48T  5.09T  /mnt/master/Mac HD 3
     master/Working Files  76.8G  3.48T  76.8G  /mnt/master/Working Files

     zfs mount
     master                /mnt/master
     master/Mac HD 3       /mnt/master/Mac HD 3
     master/Working Files  /mnt/master/Working Files
     master/Mac HD 2       /mnt/master/Mac HD 2

     showmount -e
     Export list for TOWER-NAS:
     /mnt/master  <IP list>
     /mnt/cache   <IP list>
     /mnt/disk1   <IP list>

     Trying to mount via the command that worked in 6.11.5:
     mount -t nfs TOWER-NAS.local:"/mnt/master/Mac HD 2" "/Volumes/Mac HD 2"

     rpc.mountd[12392]: authenticated mount request from IP:870 for /mnt/master/Mac HD 3 (/mnt/master)
     rpc.mountd[12392]: request to export directory /mnt/master/Mac HD 3 below nearest filesystem /mnt/master

     Master is set up as Private and:
     <IP>(sec=sys,rw,anongid=100,anonuid=99,all_squash)

     Thanks
  8. Nothing special:

     MSI B650M mATX
     RX6600 - VFIO bind passthrough
     1TB NVMe Corsair MP510 - VFIO bind passthrough
     i5-13500
     Intel quad-port NIC I340-T4
     2x32GB DDR4
     5x 3.5" SATA
     1TB 850 EVO
     320GB 2.5"

     In any case, now confirmed: if I shut down a VM rather than sleep it, I get an increase in power usage. If I sleep the VM, I actually get a reduction in power usage of a few W compared to 6.11.5. I suspect this is to do with the state the GPU and/or the NVMe SSD and/or the PCI-E lanes are in when the VM is asleep as opposed to shut down. As it's a complete faff to remove/add the GPU (to aid diagnostics), I'm going to make a temporary mini VM (4GB, fewer cores) that I can sleep, to stop using up 20GB of RAM unnecessarily...! (See the virsh sketch after this list of posts.) Edit: However, the GPU seems not to want to give a signal out after a long period in sleep, so I had to resume and then stop the VM via the WebGUI and restart it. Edit 2: I've removed the GPU [from the VM], started the VM up and slept it again; I shall see overnight what effect this has (i.e. does the state the SSD is in affect the power usage). - Power levels are a tad lower maybe, but in the noise.
  9. Just for the record: even with the small test I've done now, if I sleep the VM the power levels go down to 'normal', i.e. low 30s. If I start it up and then shut it down, it's back to higher power levels. So without taking the GPU out and testing (not straightforward, as it messes up all the VFIO binds and so on for my OPNsense VM), I'm reasonably confident some of the changes w.r.t. the GPU and/or passthrough are causing this. Maybe the ReBAR stuff has something to do with it? Will confirm over the coming weeks whether sleeping the VM with the GPU passed through, rather than shutting it down, brings the power usage back to 'normal' levels. Edit: Starting it up and sleeping it again - back to 31W idle. So I'm 90% confident something has changed w.r.t. the state of a vfio-bound AMD GPU after VM shutdown, which means it stays at a higher power level on 6.12 compared to 6.11.
  10. Just doing that, plus some additional anonymising. My name is still in a lot of the files (disk.cfg, smb config, lost, etc...) - the anonymising needs a bit of work still. I've DM'd you the pertinent files though. Edit: Another datapoint, although tentative. Instead of shutting the VM down I put it to sleep instead; unfortunately a Plex stream started at the same time (disks spinning plus a CPU usage increase), but power usage is down a few W even then, so I'll see how it is overnight.
  11. I have now confirmed this again. See screenshot. There is a very clear and sudden increase in power usage after upgrading from 6.11.5 to 6.12.4:

      powertop --auto-tune has been run (as normal)
      fans are running at the same speeds
      hard disks are spun down (power goes up by 5-6W each when spun up, 25W in total when all are spinning)
      corefreq shows the CPU still going into the C3 state / <3W when idle
      CPU usage over time is the same
      all ASPM items seem to be enabled the same

      The only thing I can think of is that the GPU (passthrough, bound to vfio) is not going to sleep after VM shutdown, unlike in 6.11.5. The reason being that I had to start the VM, then shut it down, to get the GPU/PCI-E slot/etc. to go into a deeper sleep (if I didn't, I also had 9-11W more power draw). (See the power-state check sketch after this list of posts.) @mgutt - Have you seen anything like this? @Devs - It would be interesting to know if there are any commands or other checks I can do to see what is going on, but as it stands I think I have to go back to 6.11.5; increasing my power usage by 30% due to SW updates doesn't seem like a good thing.
  12. Progress! Thanks - we're now mounted and working, but the startup of the array takes so long that I'm getting GUI timeout errors...

      Sep 14 12:37:02 TOWER emhttpd: /usr/sbin/zpool import -d /dev/sdh1 2>&1
      Sep 14 12:37:03 TOWER emhttpd: pool: master
      Sep 14 12:37:03 TOWER emhttpd: id: 735173779397214344
      Sep 14 12:37:03 TOWER emhttpd: shcmd (634497): /usr/sbin/zpool import -N -o autoexpand=on -d /dev/sdh1 735173779397214344 master
      Sep 14 12:37:05 TOWER emhttpd: /usr/sbin/zpool status -PL master 2>&1
      Sep 14 12:37:05 TOWER emhttpd: pool: master
      Sep 14 12:37:05 TOWER emhttpd: state: ONLINE
      Sep 14 12:37:05 TOWER emhttpd: config:
      Sep 14 12:37:05 TOWER emhttpd: NAME STATE READ WRITE CKSUM
      Sep 14 12:37:05 TOWER emhttpd: master ONLINE 0 0 0
      Sep 14 12:37:05 TOWER emhttpd: /dev/sdh1 ONLINE 0 0 0
      Sep 14 12:37:05 TOWER emhttpd: errors: No known data errors
      Sep 14 12:37:05 TOWER emhttpd: shcmd (634498): /usr/sbin/zfs set mountpoint=/mnt/master master
      Sep 14 12:37:05 TOWER emhttpd: shcmd (634499): /usr/sbin/zfs set atime=off master
      Sep 14 12:37:05 TOWER emhttpd: shcmd (634500): /usr/sbin/zfs mount master
      Sep 14 12:39:01 TOWER emhttpd: shcmd (634505): /usr/sbin/zpool set autotrim=off master
      Sep 14 12:39:01 TOWER emhttpd: shcmd (634506): /usr/sbin/zfs set compression=off master
      Sep 14 12:39:01 TOWER emhttpd: shcmd (634507): /usr/sbin/zfs mount -a
      Sep 14 12:39:58 TOWER nginx: 2023/09/14 12:39:58 [error] 12456#12456: *255109 upstream timed out (110: Connection timed out) while reading upstream, client: 10.10.1.110, server: , request: "POST /update.htm HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm", host: "TOWER.local", referrer: "http://TOWER.local/Main"
      Sep 14 12:40:57 TOWER emhttpd: shcmd (634512): sync
      Sep 14 12:40:57 TOWER emhttpd: shcmd (634513): mkdir /mnt/user0
      Sep 14 12:40:57 TOWER emhttpd: shcmd (634514): /usr/local/bin/shfs /mnt/user0 -disks 14 -o default_permissions,allow_other,noatime
      Sep 14 12:40:57 TOWER shfs: FUSE library version 3.12.0
      Sep 14 12:40:57 TOWER emhttpd: shcmd (634515): mkdir /mnt/user
      Sep 14 12:40:57 TOWER emhttpd: shcmd (634516): /usr/local/bin/shfs /mnt/user -disks 15 -o default_permissions,allow_other,noatime -o remember=0

      It used to take about 2 minutes from the start button to having internet (OPNsense VM) running again; now it takes the best part of 10 minutes, and the section above is 3 minutes on its own. Why has adding the ZFS disk slowed the process down so much?

      Edit: Problem two is that I've now got two NFS shares I cannot get rid of, which is preventing me from mounting the folders rather than the top-level master drive. I think it's because there are spaces? (See the child-dataset export sketch after this list of posts.)

      Export list for TOWER:
      /mnt/master/Working *          <--- Not in /etc/exports
      /mnt/master/Mac (everyone)     <--- Not in /etc/exports
      /mnt/user/Working-Data ....<IPs>....
      /mnt/master ....<IPs>....
      /mnt/cache ....<IPs>....
      /mnt/disk1 ...<IPs>....

      zfs get sharenfs
      NAME                  PROPERTY  VALUE  SOURCE
      master                sharenfs  off    local
      master/Mac HD 2       sharenfs  off    inherited from master
      master/Mac HD 3       sharenfs  off    inherited from master
      master/Working Files  sharenfs  off    inherited from master

      Edit 2: Edited nano /etc/exports.d/zfs.exports and removed all the entries. Ran exportfs -r, and the entries are gone. But:

      rpc.mountd[31793]: request to export directory /mnt/master/Mac HD 2 below nearest filesystem /mnt/master

      ?? (See also the sharenfs note after this list of posts.)
  13. Took a while (disk spinning up, I think):

      NAME    PROPERTY              VALUE                  SOURCE
      master  type                  filesystem             -
      master  creation              Mon Jul 24 19:08 2023  -
      master  used                  9.07T                  -
      master  available             3.52T                  -
      master  referenced            104K                   -
      master  compressratio         1.01x                  -
      master  mounted               yes                    -
      master  quota                 none                   default
      master  reservation           none                   default
      master  recordsize            128K                   default
      master  mountpoint            /mnt/master            local
      master  sharenfs              on                     local
      master  checksum              on                     default
      master  compression           lz4                    local
      master  atime                 off                    local
      master  devices               on                     default
      master  exec                  on                     default
      master  setuid                on                     default
      master  readonly              off                    default
      master  zoned                 off                    default
      master  snapdir               hidden                 default
      master  aclmode               discard                default
      master  aclinherit            restricted             default
      master  createtxg             1                      -
      master  canmount              on                     default
      master  xattr                 on                     default
      master  copies                1                      default
      master  version               5                      -
      master  utf8only              off                    -
      master  normalization         none                   -
      master  casesensitivity       sensitive              -
      master  vscan                 off                    default
      master  nbmand                off                    default
      master  sharesmb              off                    default
      master  refquota              none                   default
      master  refreservation        none                   default
      master  guid                  5678621045851606521    -
      master  primarycache          all                    default
      master  secondarycache        all                    default
      master  usedbysnapshots       0B                     -
      master  usedbydataset         104K                   -
      master  usedbychildren        9.07T                  -
      master  usedbyrefreservation  0B                     -
      master  logbias               latency                default
      master  objsetid              54                     -
      master  dedup                 off                    default
      master  mlslabel              none                   default
      master  sync                  disabled               local
      master  dnodesize             legacy                 default
      master  refcompressratio      1.00x                  -
      master  written               104K                   -
      master  logicalused           9.23T                  -
      master  logicalreferenced     46K                    -
      master  volmode               default                default
      master  filesystem_limit      none                   default
      master  snapshot_limit        none                   default
      master  filesystem_count      none                   default
      master  snapshot_count        none                   default
      master  snapdev               hidden                 default
      master  acltype               off                    default
      master  context               none                   default
      master  fscontext             none                   default
      master  defcontext            none                   default
      master  rootcontext           none                   default
      master  relatime              off                    default
      master  redundant_metadata    all                    default
      master  overlay               on                     default
      master  encryption            off                    default
      master  keylocation           none                   default
      master  keyformat             none                   default
      master  pbkdf2iters           0                      default
      master  special_small_blocks  0                      default

      The disk is now mounted and the data is visible.
  14. "Cannot open 'master': dataset does not exist." Something has occurred to me: in 6.11.5 the disk automatically mounted at boot (even before array start). Is there a ZFS flag that I may have set and forgotten that automatically mounts disks, so that when 6.12 imports the pool, that flag mounts it? (See the auto-mount property sketch after this list of posts.)
  15. zfs list: no datasets available. zfs mount: blank output. Insightful 😄
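
On the power-state question in posts 6 and 11 (which D-state a vfio-bound GPU is left in after its VM shuts down): the Linux kernel exposes this via sysfs and lspci. A minimal sketch, assuming the GPU is the 0000:03:00.0 device from the log above and a kernel recent enough to provide the power_state attribute:

    # D-state reported by the PCI core (D0 / D3hot / D3cold) on recent kernels
    cat /sys/bus/pci/devices/0000:03:00.0/power_state

    # Runtime PM status (active / suspended) for the GPU and its audio function
    cat /sys/bus/pci/devices/0000:03:00.0/power/runtime_status
    cat /sys/bus/pci/devices/0000:03:00.1/power/runtime_status

    # The Power Management capability also reports the current state on its "Status:" line
    lspci -vvv -s 03:00.0 | grep -E "Power Management|Status: D"

If power_state is not present on the running kernel, the runtime_status files and the lspci output still show whether the device has been runtime-suspended.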
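
Posts 5 and 8 work around the issue by leaving a VM suspended rather than shut down. For reference, the suspend/wake cycle can also be driven from the command line with libvirt's virsh instead of the WebGUI; whether this maps exactly onto the WebGUI's sleep/hibernate buttons is an assumption here, the VM name "MiniVM" is made up, and the S3-style suspend needs guest-side support (working ACPI sleep / guest agent):

    # Suspend the guest to RAM and wake it again later
    virsh dompmsuspend MiniVM mem
    virsh dompmwakeup MiniVM

    # For comparison, a clean shutdown and restart
    virsh shutdown MiniVM
    virsh start MiniVM

    # Show the current state of all defined VMs
    virsh list --all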
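
The "request to export directory ... below nearest filesystem /mnt/master" messages in posts 7 and 12 are rpc.mountd refusing to serve a child ZFS filesystem (Mac HD 2, etc.) that is mounted below the exported /mnt/master but is not itself exported. On a stock Linux NFS server this is usually handled either by adding crossmnt to the parent export or by exporting each child dataset explicitly; spaces in export paths are written as \040 in exports syntax. A sketch of what such entries could look like - illustrative only, since Unraid generates its exports from the share settings and the subnet below is made up:

    # Parent export with crossmnt, so clients can cross into child filesystems mounted beneath it
    /mnt/master  10.10.1.0/24(sec=sys,rw,crossmnt,all_squash,anonuid=99,anongid=100)

    # Or export a child dataset explicitly; the spaces in the path are escaped as \040
    /mnt/master/Mac\040HD\0402  10.10.1.0/24(sec=sys,rw,all_squash,anonuid=99,anongid=100)

    # Reload the export table after editing
    exportfs -ra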
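
The stray exports in post 12 ("Not in /etc/exports") and the /etc/exports.d/zfs.exports file mentioned there tie in with the property dump in post 13, which shows sharenfs=on set locally on the master dataset at that point: with sharenfs enabled, ZFS maintains its own NFS export entries independently of Unraid's exports file, and those entries can linger until cleared. A sketch of how to switch that off, assuming the extra exports are in fact unwanted:

    # Stop ZFS from managing NFS exports for the pool and everything that inherits from it
    zfs set sharenfs=off master

    # Or drop the local setting entirely and fall back to the inherited/default value
    zfs inherit sharenfs master

    # Re-read the export table and confirm the stray entries are gone
    exportfs -ra
    showmount -e localhost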
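
Post 14 asks whether a ZFS flag could be making the pool mount itself at boot. Two things are worth checking, sketched below under the assumption that the pool is importable at the time: the pool-level cachefile property (a pool recorded in a cachefile can be imported automatically at boot by an init script), and the dataset-level canmount/mountpoint pair (an imported dataset with canmount=on and a valid mountpoint is mounted by zfs mount -a):

    # Is the pool recorded in a cachefile that a boot script could auto-import from?
    zpool get cachefile master

    # Will the top-level dataset mount itself once the pool is imported?
    zfs get canmount,mountpoint master

    # To keep the pool importable but stop the automatic mount, canmount can be set to noauto
    zfs set canmount=noauto master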