s.Oliver

Members
  • Content count

    148
  • Joined

  • Last visited

Community Reputation

1 Neutral

About s.Oliver

  • Rank
    Advanced Member

  • Gender
    Male
  1. unRAID OS version 6.4.0-rc15e available

    can't confirm that; the whole unRAID UI is lightning fast here.
  2. unRAID OS version 6.4.0-rc15e available

    well, then i have to join @bonienl here. starting up my windows 10 VM (with a passed-through NVIDIA GPU) also logged these vmwrite errors (Call Trace) three times in a row – but: the VM started without any visible error so far and seems to be stable. i didn't get these errors before, so that's an RC15e thingie. i haven't rebooted or shut it down since then, though, so i can't say whether i'd also see the hangs some others are experiencing.
  3. [-rc15e] IPMI Issues

    hey guys! i'm on an ASRock Rack EPC612D8A-TB board. it has a VGA connector for IPMI (which is also set as the primary display) and a display is connected to it. IPMI has its own dedicated network port. the board is updated to all the latest firmware / BMC, and up to and including RC14 the output on the (real) display was "correctly" white text on a black background.

    yesterday i watched the whole re-boot process with RC15 via IPMI*. i could follow it all the way and didn't get disconnected or anything else – so that works. text/background were (and still are) white on black via IPMI. but on the real display connected to VGA the text color is now blue (background still black). everything else seems to be ok. grep'ing the log for 'IPMI' results in these few lines:

      Dec 6 18:14:56 unRAID kernel: ipmi message handler version 39.2
      Dec 6 18:14:56 unRAID kernel: ipmi_si IPI0001:00: ipmi_si: probing via ACPI
      Dec 6 18:14:56 unRAID kernel: ipmi_si IPI0001:00: [io 0x0ca2] regsize 1 spacing 1 irq 0
      Dec 6 18:14:56 unRAID kernel: ipmi_si: Adding ACPI-specified kcs state machine
      Dec 6 18:14:56 unRAID kernel: IPMI System Interface driver.
      Dec 6 18:14:56 unRAID kernel: ipmi_si: probing via SPMI
      Dec 6 18:14:56 unRAID kernel: ipmi_si: SPMI: io 0xca2 regsize 1 spacing 1 irq 0
      Dec 6 18:14:56 unRAID kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x0, irq 0
      Dec 6 18:14:56 unRAID kernel: ipmi_si IPI0001:00: Found new BMC (man_id: 0x00c1d6, prod_id: 0xaabb, dev_id: 0x20)
      Dec 6 18:14:56 unRAID kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized

    * on a separate bloody windows VM, because the clients suck so much and no working solution is available on Mac – and boy, i tried a hell of a lot of stuff. i actually got to the point where the remote display would appear, but just a second before that it craps out with a '...redirecting floppy controller...' error and the java program gets killed. supermicro has a java utility for linux which does work on mac (not officially) and uses standard IPMI commands – but with it i couldn't get the remote display working (maybe with a lot of tinkering with its parameters; not sure, and no nerve for that).
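    for reference, a minimal sketch of the grep above – it assumes the default unRAID log location at /var/log/syslog, which is my assumption, not something stated in the post:

      # search the system log for IPMI driver messages (case-insensitive)
      grep -i 'ipmi' /var/log/syslog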
  4. oh well, i'm sorry to have wasted your time here. curiously, something must have changed along the way... i've had both plug-ins installed for almost all of my unraid time. there were one, two, three occasions where the counts were wrong, but an update to one of the plug-ins always fixed it again eventually. now it happened again, so i thought maybe it's yours that needs fixing. thanks a lot for being that patient and helpful. have a good night!
  5. cha-ching! bingo! without it, the values are reported correctly!!!
  6. yes, i have the latest installed.
  7. Version 2017.10.29b (i always updated immediately).
  8. thanks for the last update, but it didn't change anything. :-( to clarify my earlier postings: i used lsof (with no arguments) and then searched the output for the relevant parts (which is why the values were higher back then than they are right now). here are the actual outputs:

      ls -la /mnt/disks/
      drwxrwxrwx 1 nobody users 80 Oct 18 15:16 Samsung_SSD_850_EVO/
      drwxrwxrwx 5 nobody users 54 Oct 17 11:50 UDWD4TB/

      lsof /mnt/disks/UDWD4TB/
      COMMAND   PID   USER   FD  TYPE DEVICE SIZE/OFF   NODE       NAME
      tvheadend 14733 nobody 51w REG  8,65   2994501788 14206387   /recordings/Recording1.ts
      tvheadend 14733 nobody 57w REG  8,65   152684764  14206397   /recordings/Recording2.ts
      tvheadend 14733 nobody 62w REG  8,65   3916977180 4311589916 /recordings/Recording3.ts

    UD shows 7 open files – the correct value is 3.

      lsof /mnt/disks/Samsung_SSD_850_EVO/
      COMMAND   PID   USER FD  TYPE DEVICE SIZE/OFF    NODE NAME
      qemu-syst 12310 root 18u REG  0,38   55810129920 262  /mnt/disks/Samsung_SSD_850_EVO/VM1/Name1/vdisk2.qcow2
      qemu-syst 12310 root 19u REG  0,38   55810129920 262  /mnt/disks/Samsung_SSD_850_EVO/VM1/Name1/vdisk2.qcow2
      qemu-syst 21566 root 22u REG  0,38   85001043968 258  /mnt/disks/Samsung_SSD_850_EVO/VM2/vdisk2.qcow2
      qemu-syst 21566 root 23u REG  0,38   85001043968 258  /mnt/disks/Samsung_SSD_850_EVO/VM2/vdisk2.qcow2

    UD shows 10 open files – the correct value is 2.

    so i think it's not too bad: de-duplicating both result sets would give the correct number of open files.
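    a minimal sketch of that de-duplication idea, reusing the mount point from the outputs above – counting unique NAME entries collapses both duplicate descriptors within one process and duplicates across processes (it glosses over file names containing spaces, since it only takes the last column):

      # count distinct open files under a mount point
      lsof /mnt/disks/UDWD4TB/ 2>/dev/null | awk 'NR > 1 { print $NF }' | sort -u | wc -l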
  9. that lists the correct number of open files for the share, without duplicates. 😀 but for the UD device without a share it still needs to be filtered for the process 'duplicates' – in my case it lists 4 open files (two entries for each real physical file).
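    a sketch of that extra filtering, again with the device path from the outputs above – printing one line per (PID, file) pair collapses the duplicate descriptors each process holds on the same file:

      # one entry per process/file pair instead of per file descriptor
      lsof /mnt/disks/Samsung_SSD_850_EVO/ 2>/dev/null | awk 'NR > 1 { print $2, $NF }' | sort -u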
  10. no, and that's the strange part: UD does report open files for it. i can't trace anything with lsof – not the device, not the share.
  11. sorry to say, no change in my case. more precisely: lsof only shows these 30 open files, all on one UD device, sdi (the SSD with vdisk images on BTRFS; auto mount, but no share); it looks like this:

      /mnt/disks/Samsung_SSD_850_serial/pathtofolder1/vdisk2.qcow2 (12x)
      /mnt/disks/Samsung_SSD_850_serial/pathtofolder2/vdisk1.qcow2 (18x)

    lsof doesn't show any references to the other UD device, sde (HDD, XFS, auto mount + share). it is mounted at /mnt/disks/WDdisk_serial and the share has the custom name /mnt/disks/UDWD4TB (i can't find it in the lsof output by 'sde', by its name, or by its custom name).
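    for completeness, the lsof variants i'd expect to surface that device – illustrative invocations only, using the paths from above; none of them matched anything here:

      lsof /mnt/disks/WDdisk_serial    # files open under the mount point
      lsof +D /mnt/disks/UDWD4TB       # walk the share's directory tree recursively
      lsof | grep 'UDWD4TB'            # raw search over the full output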
  12. dlandon, that's what i did. 32 processes reported the same 2 files as open. UD reports 10. physically there are 2 files that are open. i would expect UD to report exactly those 2. why report more?
  13. well, in my case 32 processes have the same 2 files open. but that should result in (only) 2 open files being displayed, right? 😃
  14. well, i have no screenshot at hand (will provide one soon), but in the past it worked absolutely fine (there were 1-2 times where it was wrong too, but a following plug-in update always solved those issues). let's say my tvh docker recorded 2 shows; UD showed exactly those 2 open files. now, with a recording of 2-3 shows, it shows completely different values like 8-10. live example: right now, on another UD device (SSD), i have 2 vdisk images stored which are actively used by 2 VMs – UD shows 10 open files. lsof shows 32 hits, because different processes are using the same 2 files. just tell me how i can help debug this error case; i'll try to help as much as i can. free time is mostly a rare thing to have, so i do understand your comments about development – all the more it's valued that you take care of this. thx.
  15. OPEN FILES count is wrong

    hey dlandon, thanks for a wonderful plug-in. i've definitely had the issue in 6.4.0 RC9/RC10 (and maybe also in 6.3.5 before), but i can't say exactly at which plug-in version it started (a guess would be 5-7 versions back). and may i ask if we could get the same reads/writes column (with a flippable switch between read/write counts and speed)? ideally it would be aligned with the columns of the array drives above – so it would match optically and make it possible to check values at a quick glance. i see it would probably be necessary to move the mount button much further to the right. hopefully you can fix the counting bug – it's quite a useful feature. thx a lot, great work you do here.
