goinsnoopin

Members
Content count: 296
Community Reputation: 0 Neutral

About goinsnoopin

  • Gender: Undisclosed
  1. Assigned 2 cores and their hyperthread pair to each VM. Also increased RAM to 12 GB. Running fine at 720p. Thanks!
  2. I have 32 GB RAM total. I assigned 8 GB RAM to each VM. I am running the game at 1024x720 and will try your suggestions of lowering it even further and running PassMark. Here are the minimum requirements Bluehole lists for PlayerUnknown's Battlegrounds:

     OS: 64-bit Windows 7, 8.1, or 10
     CPU: Intel Core i3-4340 or AMD FX-6300
     GPU: nVidia GeForce GTX 660 2GB or AMD Radeon HD 7850 2GB
     Memory: 6 GB
     Storage: 30 GB
     DirectX: 11.1 compatible video card or equivalent
  3. I have an i7 4790K CPU....so 4 cores plus the associated hyperthreads. My kids have just discovered PlayerUnknown's Battlegrounds on Steam and have asked me to spin up two gaming VMs so they can play together. Running one VM is no problem...but when I run two gaming VMs the game stutters and becomes unplayable. I am passing an Nvidia GTX 970 to one VM and a GTX 950 to the second VM. Both VMs run fine by themselves. Wondering if anyone has suggestions on the best CPU assignment strategy for making this work...or does this CPU just not have enough horsepower to run two gaming VMs??? I have also tried stopping all my dockers to make sure they are not using resources while the kids are gaming. Currently I have cores 1,2,3,5,6,7 isolated, and my VM XML has an emulator pin for cores 0,4. I have tried assigning 1,2,3,5,6,7 to both VMs and have also tried assigning 4 cores to each so they both had at least one core of their own. I have never assigned cores 0 and 4 to anything....should I try that, or just leave those cores to unRAID? I have been reading the $90 Xeon 2670 thread and wondering if I need to go in that direction, but would prefer not to at this point since we need to get a new car. Thanks in advance. Dan
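     A sketch of what one such pinning layout might look like in the VM's libvirt XML (the core numbering here is an assumption inferred from the post: on a 4790K, 0/4, 1/5, 2/6, and 3/7 are physical-core/hyperthread pairs, with 0 and 4 left to unRAID and the emulator):

     ```xml
     <vcpu placement='static'>4</vcpu>
     <cputune>
       <!-- Pin each vCPU to a physical core or its hyperthread sibling,
            keeping sibling pairs together inside one VM. -->
       <vcpupin vcpu='0' cpuset='1'/>
       <vcpupin vcpu='1' cpuset='5'/>
       <vcpupin vcpu='2' cpuset='2'/>
       <vcpupin vcpu='3' cpuset='6'/>
       <!-- Keep emulator I/O threads on the cores reserved for the host. -->
       <emulatorpin cpuset='0,4'/>
     </cputune>
     ```

     The second VM would then pin a disjoint set (e.g. the 3/7 pair); with only four physical cores, two 4-vCPU VMs cannot both get exclusive pairs, which is consistent with the stuttering described above.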
  4. Thanks Johnnie...ran the xfs_repair....everything seems good! Thanks!
  5. Ran xfs_repair with the verbose switch....here is more detailed output:

     Phase 1 - find and verify superblock...
             - block cache size set to 3029920 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 312683 tail block 312683
             - scan filesystem freespace and inode maps...
     Metadata corruption detected at xfs_agf block 0x1/0x200
     flfirst 118 in agf 0 too large (max = 118)
     agf 118 freelist blocks bad, skipping freelist scan
     sb_fdblocks 131031877, counted 131031871
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.

     XFS_REPAIR Summary    Tue Aug  1 22:33:39 2017

     Phase       Start           End             Duration
     Phase 1:    08/01 22:33:18  08/01 22:33:23  5 seconds
     Phase 2:    08/01 22:33:23  08/01 22:33:23
     Phase 3:    08/01 22:33:23  08/01 22:33:32  9 seconds
     Phase 4:    08/01 22:33:32  08/01 22:33:32
     Phase 5:    Skipped
     Phase 6:    08/01 22:33:32  08/01 22:33:39  7 seconds
     Phase 7:    08/01 22:33:39  08/01 22:33:39

     Total run time: 21 seconds
  6. I could use some assistance....logs filling up seems to be an issue with disk md2. Stopped the array and entered maintenance mode....did an xfs check on all disks...this confirms errors with disk 2. Here is the output of xfs_repair -n...looking for advice on the next step before proceeding.

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
             - scan filesystem freespace and inode maps...
     Metadata corruption detected at xfs_agf block 0x1/0x200
     flfirst 118 in agf 0 too large (max = 118)
     agf 118 freelist blocks bad, skipping freelist scan
     sb_fdblocks 131031877, counted 131031871
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 1
             - agno = 3
             - agno = 2
             - agno = 0
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.

     tower-diagnostics-20170801-2022.zip
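     For anyone following this thread later, the usual command sequence for this situation looks roughly like the sketch below (device path taken from the post above; xfs_repair must be run from the console with the array started in Maintenance mode so /dev/md2 exists but is unmounted — do not run this blindly on your own system):

     ```shell
     # Dry run first: -n (no modify) reports problems without changing anything.
     xfs_repair -n /dev/md2

     # If the dry-run output looks reasonable, run the actual repair (no -n).
     xfs_repair /dev/md2

     # xfs_repair may refuse to run and ask for -L (zero the log). Zeroing the
     # log can discard recent metadata updates, so treat it as a last resort.
     # xfs_repair -L /dev/md2
     ```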
  7. Any suggestions on how to integrate Piwigo and COPS dockers to the reverse proxy? Thanks, Dan
  8. Jonp, To be honest I have not felt any impact from the upgrade in terms of my Windows 10 VM performance. Keyboard and mouse seem fine. I did plug in a USB printer and it seemed to take a long time to recognize and load the driver, however. I will pay more attention, as it seems like the Ubuntu VM users are reporting issues. For what it's worth, this is the USB3 PCI card that I have: https://www.amazon.com/Inateck-Express-Connector-Controller-Internal/dp/B00FPIMJEW/ref=cm_cr_arp_d_product_top?ie=UTF8 Thanks, Dan

     Quoting jonp: "Hi snoopin, Quick question: is there any felt impact from the upgrade in terms of your VM, performance, or anything with your USB devices attached to that controller? If not, the message is likely harmless, but curious if you can trace this back to any symptoms you notice when using the VM."
  9. After upgrading from 6.2.4 to 6.3.0, I get the following in my VM/qemu log:

     2017-02-05T20:29:39.741418Z qemu-system-x86_64: -device vfio-pci,host=08:00.0,id=hostdev3,bus=pci.0,addr=0x9: Failed to mmap 0000:08:00.0 BAR 2. Performance may be slow
     2017-02-05T20:29:39.746332Z qemu-system-x86_64: warning: Unknown firmware file in legacy mode: etc/msr_feature_control

     I looked back at some old diagnostics from a couple of weeks ago and I did not get this message. This is for an add-on USB 3.0 controller that I have passed through to the VM. I have not fully tested the USB, but can tell you that it is working on some level, as I am typing and using a mouse that is plugged into this device....not sure what the "Performance may be slow" warning means. I am posting here vs. the KVM forum since I did not get this message previously. Any suggestions? tower-diagnostics-20170205-1544.zip
  10. Those HTML5 templates are awesome...thanks for the recommendation. Just curious...what are you using to edit them? Do you have a recommended freeware HTML editor....or are you just using a text editor such as Notepad++? Thanks, Dan
  11. Has anyone configured their default file for the COPS calibre docker? If so, I would like suggestions on what to try. I got several other dockers working based on suggestions in this thread. Here is what I tried:

      location ^~ /cops {
          auth_basic "Restricted";
          auth_basic_user_file /config/nginx/.htpasswd;
          include /config/nginx/proxy.conf;
          proxy_pass http://192.168.0.50:85;
      }

      This is all new to me, and the confusing part is the URL base changes. I see how some dockers like Sonarr and HTPC Manager have settings within the docker...but others don't, so I am not sure what to do. Also, how are most people using this...for example, do you create an index.html page with links to each of the web interfaces of the dockers you are trying to reach? If so, do you keep the "landing" page open to the public, and then when you click the link to the docker it goes to https??? The reason I am asking is that I would like www.mydomain.com to be open to the public with a link to a public photo gallery (using an unRAID docker...haven't picked one yet), and then have some other page with hyperlinks to my hidden docker management tools. Thanks in advance for any help you can provide. Dan
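      For apps like this that have no URL-base setting of their own, one common variation worth trying (a sketch only, not verified against COPS specifically) is to have nginx strip the /cops prefix before proxying: when proxy_pass ends with a trailing slash, nginx replaces the matched location prefix with it, so the backend sees requests at its own root.

      ```nginx
      location ^~ /cops/ {
          auth_basic "Restricted";
          auth_basic_user_file /config/nginx/.htpasswd;
          include /config/nginx/proxy.conf;

          # Trailing slash: /cops/feed.php is forwarded as /feed.php,
          # so the app does not need to know it lives under /cops/.
          proxy_pass http://192.168.0.50:85/;
      }
      ```

      The caveat is that apps which generate absolute links or redirects to their own root can still break this way, which is why dockers with a built-in URL-base setting are easier to put behind a subfolder.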
  12. I had this issue once and it ended up being a bios setting issue. When I was getting the code 43, the primary video device was set to PCIE. Changing this to the integrated video card made the code 43 issue go away. Dan
  13. unRAID 6.2.4. I experienced an issue when upgrading to 6.2.3 (also 6.2.4). The symptom: after the plugin updated unRAID, a reboot is needed. I stopped the array, then clicked reboot, and the Chrome tab in my VM browser said the system is rebooting, and then the VM shut down as it should when the array is stopped. When I rebooted unRAID, the array was set to auto start and the VM was set to auto start. Once the VM was up and running and I opened Chrome (since my Chrome setting was set to "continue where I left off"), it opened the unRAID tab that had the "unRAID is rebooting" status message...and I am assuming that this somehow was sending the stop array function and would immediately shut down the VM. Forum users that were providing feedback suggested that I add this post to the defect report forum. Here is a link to my thread in the KVM subforum: http://lime-technology.com/forum/index.php?topic=53510.0 By changing my Chrome setting to the "open a new tab" setting, my system is stable. kode54 in the forum above suggested the following solution/items for consideration:
  14. Possibly resolved...please let me know if this is a plausible explanation. I realized that the VM was running fine when I ran other software. The VM shut down when Chrome was opened and I started browsing. I noticed that I had a second unRAID tab open...right after opening Chrome, I clicked this tab...and it had the unRAID "system is going down for a reboot" message (my text may not be exact)...but this was the tab that was open from when I rebooted unRAID for the 6.2.3 update. My Chrome settings in the "On Startup" section are set to "Continue where you left off". I am thinking this may have been sending the stop array signal??? Not sure...but now that I have closed all tabs, my VMs have been running for 30 minutes or so. I hope this makes sense...I will come back and mark this thread as solved if my system stays up for 24 hours. Fingers crossed! Dan
  15. RobJ, Thanks for sticking with me on this....as you can imagine, going from having no issues...to a system that does not work is frustrating. I updated to 6.2.4, no change. Booted in safe mode, same thing: the stop command is issued and the VM shuts down immediately. Attached is a diagnostic for 6.2.4 booted in safe mode. For reference, after the VM stopped, I did start the array back up. I know the old powerdown plugin issued a command when you hit/held the power button...now I no longer have this plugin, as it was deprecated with 6.2. Is there anything else that could cause the shutdown command? I will have to search for this ransomware plugin, as I never heard about that. Any other suggestions? Again...thanks for staying with me on this. Dan tower-diagnostics-20161107-1749.zip
Copyright © 2005-2017 Lime Technology, Inc. unRAID® is a registered trademark of Lime Technology, Inc.