Dodgy

Members · 3 posts · Rank: Newbie · Community Reputation: 0 Neutral
  1. Interesting; if that were the case, there shouldn't be an issue specifying the locked flag, since it shouldn't be doing anything different. I have managed to get it working by building a custom kernel with hugepages (hugetlbfs) support enabled, rather than just transparent huge pages (THP). Using hugetlbfs I am able to lock all of the VM's RAM and perform manual allocation (though 1 GB pages are still proving problematic).
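For reference, the hugetlbfs-backed approach described above corresponds to a libvirt domain fragment along these lines. This is a sketch, not the poster's actual config: the 2048 KiB page size is an assumption, and the host must have pages reserved in advance (e.g. via vm.nr_hugepages or a hugepages= kernel parameter).

```xml
<!-- Sketch: hugetlbfs-backed, locked guest RAM in a libvirt domain.
     Page size (2048 KiB) is an assumption; adjust to what the host
     actually reserves. -->
<memoryBacking>
  <hugepages>
    <page size='2048' unit='KiB'/>
  </hugepages>
  <locked/>
</memoryBacking>
```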
  2. Description: Attempting to lock the RAM of a VM by adding the <locked/> tag to the XML results in the VM failing to start.

     How to reproduce: Edit the XML of a VM, add the <locked/> tag in the <memoryBacking> section, save, and attempt to start.

     Expected results: The VM should start with all memory fully allocated and locked from being paged out (see https://libvirt.org/formatdomain.html#elementsMemoryBacking).

     Actual results: The VM fails to start with error -12 from VFIO_MAP_DMA (see below).

     Other information: dmesg contains 'vfio_pin_pages: RLIMIT_MEMLOCK (18253611008) exceeded', yet ulimit shows locked memory as unlimited. The UI shows the following error message:

     internal error: process exited while connecting to monitor:
     qemu-system-x86_64: -device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.2,multifunction=on,addr=0x5: VFIO_MAP_DMA: -12
     qemu-system-x86_64: -device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.2,multifunction=on,addr=0x5: vfio_dma_map(0x2baab62e9c00, 0x0, 0x80000000, 0x2ba6b3e00000) = -12 (Cannot allocate memory)
     qemu-system-x86_64: -device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.2,multifunction=on,addr=0x5: VFIO_MAP_DMA: -12
     qemu-system-x86_64: -device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.2,multifunction=on,addr=0x5: vfio_dma_map(0x2baab62e9c00, 0x100000000, 0x380000000, 0x2ba733e00000) = -12 (Cannot allocate memory)
     qemu-system-x86_64: -device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.2,multifunction=on,addr=0x5: vfio: memory listener initialization failed for container

     The system has 128 GB of RAM and the VM has 16 GB allocated, so it isn't an overcommitment issue.

     Attachment: lime-diagnostics-20170421-1156.zip
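One detail worth noting about the dmesg line: the RLIMIT_MEMLOCK value it reports is not "unlimited" at all. Converting 18253611008 bytes shows it is exactly 17 GiB, which is consistent with libvirt imposing its own per-process memlock limit of guest RAM (16 GiB here) plus roughly 1 GiB of overhead for VFIO, regardless of the shell's ulimit. A quick check of the arithmetic:

```shell
# The dmesg message reports RLIMIT_MEMLOCK (18253611008) exceeded.
# Convert that byte count to GiB to see what limit was actually in force.
limit_bytes=18253611008
gib=$((limit_bytes / 1024 / 1024 / 1024))
remainder=$((limit_bytes % 1073741824))
echo "RLIMIT_MEMLOCK = ${gib} GiB (remainder ${remainder} bytes)"
```

So the effective limit was 17 GiB exactly, not the unlimited value shown by ulimit in an interactive shell; the limit QEMU runs under is set on its own process.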
  3. Currently testing out unRAID with an Intel 750 NVMe SSD (which is working well); however, one limitation is that the device cannot be properly managed (i.e. firmware updates, low-level partitioning, etc.) because the nvme CLI utility is not present. The utility can be found at https://github.com/linux-nvme/nvme-cli, and it would be great if it could be included in unRAID in the future, as it would make working with NVMe devices significantly easier. I know there are other NVMe-related gaps (i.e. support for reading drive temperatures, discard triggering, etc.), but those are separate issues for another day.
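To illustrate what having nvme-cli on the system would enable, here is a sketch of typical invocations. The device path /dev/nvme0 is an assumption, and the script only runs the queries if the tool is actually installed:

```shell
# Sketch of common nvme-cli queries (device path /dev/nvme0 is assumed).
if command -v nvme >/dev/null 2>&1; then
  nvme list                  # enumerate NVMe controllers and namespaces
  nvme smart-log /dev/nvme0  # health data, including drive temperature
  nvme id-ctrl /dev/nvme0    # controller identify data (firmware revision etc.)
  status="nvme-cli available"
else
  status="nvme-cli not installed"
fi
echo "$status"
```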
Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.