kode54

  1. Am I expecting too much from my Win10 VM?

    Bumping to prove that under certain conditions, great things can happen for no apparent reason, other than hopefully more optimal configuration settings. Upon adding the qemu namespace to my domain tag:

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

    and adding the following -cpu override at the end, just before the closing domain tag:

    <qemu:commandline>
      <qemu:arg value='-cpu'/>
      <qemu:arg value='host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_vendor_id=Microsoft'/>
    </qemu:commandline>

    I was able to eke out some more performance by turning on the Hyper-V enlightenments without tripping the Nvidia drivers. Doing it this way requires qemu version 2.5.0. It is possible to pass that vendor_id argument through the features->hyperv block instead, but only in libvirt 1.3.3 or newer, or possibly 2.1.0. On the left, my first post here; on the right, my most recent benchmark of the same VM setup, only with the enlightenments enabled. Again, it's still possible to trick Nvidia's drivers by faking the hypervisor's vendor_id, which they specifically blacklist. I chose Microsoft for the lulz.
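    For reference, the native libvirt route I mentioned (1.3.3 or newer) would look roughly like the following. I haven't tested this form myself, so treat it as a sketch of the equivalent settings rather than a drop-in snippet; the hv_time flag corresponds to the hypervclock timer, which lives in the separate clock element:

    <features>
      <!-- existing acpi/apic elements stay as they are -->
      <hyperv>
        <relaxed state='on'/>
        <vapic state='on'/>
        <spinlocks state='on' retries='8191'/>
        <vendor_id state='on' value='Microsoft'/>
      </hyperv>
      <kvm>
        <hidden state='on'/>
      </kvm>
    </features>
    <clock offset='localtime'>
      <timer name='hypervclock' present='yes'/>
    </clock>

    The kvm hidden element is the equivalent of kvm=off, and retries='8191' is just the decimal form of 0x1fff.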
  2. ZFS filesystem support

    Bumping this old topic to get it some more attention, especially since a branch with Nexenta ZFS-based TRIM support is waiting to be accepted into the main line. I for one would love to see ZFS support replace BTRFS use: create an n-drive zpool based on the current cache drive setup, and create specialized, quota-limited ZFS datasets for the Docker and libvirt configuration mount points (something like the sketch below). Yes, Docker supports ZFS. And from what I've seen, one only wants to stick with BTRFS on a system they're ready to nuke at a moment's notice.
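    A rough sketch of what I mean, assuming a two-SSD cache pool; the device names, quotas, and mount points are only examples:

    # mirror the current cache devices into one pool
    zpool create cache mirror /dev/sdb /dev/sdc

    # quota-limited datasets for the Docker and libvirt mount points
    zfs create -o quota=30G -o mountpoint=/mnt/cache/docker cache/docker
    zfs create -o quota=100G -o mountpoint=/mnt/cache/domains cache/domains

    Docker's zfs storage driver could then run directly on the docker dataset instead of the loopback BTRFS image.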
  3. L3 Cache is missing in the VM

    There is a trick to switching over (a sketch of the first step is below):

    1) Add another hard drive, maybe 1MB or slightly larger, and make it VirtIO.
    2) Boot the VM.
    3) Install the viostor drivers for your OS to support the tiny image you mounted above.
    4) Shut down the VM.
    5) Delete the mini temporary VirtIO drive.
    6) Change your boot image to VirtIO.
    7) It should boot fine now.
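    If you're editing the domain XML by hand, step 1 looks roughly like this; the image path and target dev are only examples:

    qemu-img create -f raw /mnt/user/domains/Windows10/virtio-dummy.img 1M

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/domains/Windows10/virtio-dummy.img'/>
      <target dev='vdb' bus='virtio'/>
    </disk>

    Once the viostor driver is installed, step 6 is just changing the real boot disk's target to bus='virtio' and removing this dummy disk block.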
  4. -Delete-

    Whatever it was, it obviously wasn't important enough to share.
  5. To install the driver in unRAID, he'd need a version of ALSA built for the correct kernel.
  6. Version 6.3.0-rc6 Release Notes

    I've dumped the ZFS stuff; probably too crazy to be using that bleeding-edge code anyway. And now I'm down one SSD; either it failed, or the cable or port did. Now I've got one cache drive, the 256GB, formatted XFS. It seems to be in working order now, though.
  7. Am I expecting too much from my Win10 VM?

    Updated CrystalDiskMark shots, with a new VM backed by an XFS cache drive. The first one uses cache='none' io='native'; the second one uses the defaults that always get overwritten by the template editor: cache='writeback' and no io parameter. The io='native' mode appears to reflect the actual drive performance, with the overhead of XFS and virtualization factored in.
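    For anyone wanting to try the same thing, those attributes live on the disk's driver element in the domain XML; everything here besides cache and io is just an example of where they sit:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/mnt/cache/domains/Windows10/vdisk1.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>

    Just remember the template editor will put its own defaults back if you re-save the VM from the form view.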
  8. GUI non responsive upon mounting array

    Wtf?

    Dec 12 18:23:50 unraid root: plugin: running: /boot/packages/python-2.7.5-x86_64-1.txz
    Dec 12 18:23:50 unraid root:
    Dec 12 18:23:50 unraid root: +==============================================================================
    Dec 12 18:23:50 unraid root: | Upgrading python-2.7.9-x86_64-1 package using /boot/packages/python-2.7.5-x86_64-1.txz
    Dec 12 18:23:50 unraid root: +==============================================================================
    Dec 12 18:23:50 unraid root:
    Dec 12 18:23:50 unraid root: Pre-installing package python-2.7.5-x86_64-1...
    Dec 12 18:23:55 unraid root:
    Dec 12 18:23:55 unraid root: Removing package /var/log/packages/python-2.7.9-x86_64-1-upgraded-2016-12-12,18:23:50...

    Then after taking the array online and offline a few times:

    Dec 12 18:25:20 unraid emhttp: unclean shutdown detected
    Dec 12 18:35:19 unraid sudo: root : TTY=unknown ; PWD=/ ; USER=nobody ; COMMAND=/usr/bin/deluged -c /mnt/cache/deluge -l /mnt/cache/deluge/deluged.log -P /var/run/deluged/deluged.pid

    I see you are using a very messy plugin-based Deluge setup that plonks conflicting versions of Python on the system: first 2.7.9, then 2.7.5 replaces 2.7.9 as if it were a newer version. And lots of Python libraries. And ZIP and RAR and Par2 and yenc utilities. And then the last thing it does at the end is start deluged, which may or may not be hanging the httpd because it never returns. Maybe try finding a Docker package that does everything you want?

    E: I also see you've got a weird Frankenstein mix of Reiser and XFS partitions, too. Yikes.
  9. Where is the Roadmap?

    If this weren't a sticky topic, I'd have called that one hell of a bump. And then I noticed the bump occurred 11 days ago.
  10. 6.3.0-rc already knows about newer virtio ISO downloads. Presumably, none of these were added to the newer 6.2 releases because they were presumed to be incompatible with the older qemu?
  11. Version 6.3.0-rc6 Release Notes

    I only supplied it to get the user here going quickly. Naturally, it's not a trusted repository of packages; I assumed that dmacias would build it himself. It has no dependencies, and building it merely requires the basic build tools on Slackware 14.1, plus the atop package from SlackBuilds, if you trust them.

    I intend to contribute to a "trusted" repository for unRAID-ZFS, at least for alternative branches. My builds are intended to be tested, and possibly trusted, on SSD block devices, as they are based on spl:master and zfs:ntrim (from dweeezil's repository). I have packages for my personal use built for 6.2.4 and 6.3.0-rc6, using the SlackBuilds scripts for spl-solaris and zfs-on-linux, with both modified to include a different version name, and the latter modified to pass --with-spl=/tmp/SBo/spl-<version> (roughly the process sketched below). I'm not sure what to do to earn trust for packages, though.

    E: Removed binary package from my bucket, as NerdPack makes it redundant.
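    Roughly how those packages get built; the paths are the stock SlackBuilds defaults, and the rest is only a sketch of the process described above:

    # SPL first, from the spl-solaris SlackBuild, pointed at the spl:master sources
    cd spl-solaris && ./spl-solaris.SlackBuild

    # then ZFS, from the zfs-on-linux SlackBuild, pointed at the zfs:ntrim sources
    # and edited to pass --with-spl=/tmp/SBo/spl-<version> to configure
    cd ../zfs-on-linux && ./zfs-on-linux.SlackBuild

    installpkg then installs the resulting packages from /tmp on the unRAID box itself.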
  12. Proposing the inclusion of the atop package from SlackBuilds, for moments when it's useful to monitor which resources are maxing out in a system. It handily displays color-coded load percentages for memory and disks, and blinks a status line red when something is maxed out. It may be useful in tracking down the overload issues some people are experiencing; basic usage is sketched below.
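    A quick usage sketch once it's installed; the interval is just an example:

    # refresh every 2 seconds; saturated resources are highlighted in the header lines
    atop 2

    # inside atop: 'm' shows memory-related process info, 'd' disk activity,
    # 'c' full command lines, and 'q' quits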
  13. Version 6.3.0-rc6 Release Notes

    Next time you prepare an operation like that, open an SSH terminal to your unRAID machine and run atop while it happens. atop has been added to NerdPack, so use that to install it, then watch it over SSH to look for bottlenecks.
  14. SPICE for VMs

    It is possible if you use virt-manager to configure your VMs. But you won't be able to connect to them from the WebUI, since there is no web SPICE client in unRAID. There are already performance issues with the stock noVNC client, so you may experience better performance if you use a native client.
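    If you go the virt-manager route, the graphics device in the domain XML ends up looking roughly like this; the port and listen address are only examples:

    <graphics type='spice' port='5901' autoport='no' listen='0.0.0.0'/>

    A native client such as remote-viewer can then connect with remote-viewer spice://tower:5901 (substitute your server's hostname and port).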
  15. unRAID Server Version 6.2.4 Available

    Try deleting network.cfg, rebooting, and reconfiguring your network settings from scratch? That fixed my Docker issues from downgrading, so it may fix your Samba issues from upgrading. Maybe also keep a backup copy and compare how your original differs from the recreated version; something like the steps below.
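    A minimal sketch, assuming the file is in the usual spot on the flash drive:

    # back up the current config, then remove it so unRAID regenerates it
    cp /boot/config/network.cfg /boot/config/network.cfg.bak
    rm /boot/config/network.cfg
    reboot

    # after reconfiguring the network in the WebUI, compare old and new
    diff /boot/config/network.cfg.bak /boot/config/network.cfg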