Everything posted by JonathanM

  1. It's a good idea to run a non-correcting parity check after doing a disk rebuild. Rebuilds don't "check their work" by reading back what was written to the rebuilt drive; it's assumed that if a write completes without error, it wrote correctly.
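     If you'd rather kick that off from the console than from Main -> Array Operations, Unraid's mdcmd can do it; a minimal sketch, assuming a current release:

         # start a parity check that logs sync errors without writing corrections
         /usr/local/sbin/mdcmd check NOCORRECT
         # poll the driver state and current error count
         /usr/local/sbin/mdcmd status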
  2. Depends. There are other things you can tweak with regard to memory, cache pressure and such, and honestly Unraid is tuned for best performance with smaller amounts of RAM and may not make the best use of more than 64GB. I don't have the luxury of owning any systems with more than 32GB right now, so I must leave hands-on research as an exercise for the reader.
  3. It will kill performance if run too often, as cached data is what speeds many things along. As a clean-up tool run when performance isn't a priority, or before starting a task, it should be fine. It does prove to some extent that you are overcommitting the memory you have for optimum performance, so more RAM would help if you really need to reserve that much for VM use. I would try reducing the VM RAM allocations and see whether that hurts or helps VM performance. RAM caching by the host is one of the things that can really speed up a VM, and if you deny the host that RAM it can hurt the VM's speed.
  4. After a minute of googling (as in, no real research) I found something which may or may not do anything in Unraid. I haven't tried it, so use it at your own risk; it was billed as a "Linux" solution. It apparently (a) clears speculatively cached data and (b) consolidates the memory that is in use. If you are game to try it, execute it at a point where the VM would fail to launch. To repeat, I HAVE NO CLUE IF THIS WILL DO BAD THINGS TO UNRAID.
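     The snippet itself didn't survive the quote; the standard Linux commands matching that description are below. Treat them as a guess at what was meant, with the same warning as above:

         sync                                  # flush dirty pages to disk first
         echo 3 > /proc/sys/vm/drop_caches     # drop pagecache, dentries and inodes
         echo 1 > /proc/sys/vm/compact_memory  # ask the kernel to defragment free memory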
  5. Perhaps out of unfragmented memory. Some operations require contiguous blocks, and over time more and more addresses can be tied up and unable to be reallocated, even if the total amount of free memory is plentiful.
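     You can get a rough picture of how fragmented free memory is on any Linux box, Unraid included; a minimal sketch:

         # each column is the count of free blocks of order 0,1,2,... (4KB, 8KB, 16KB, ...)
         # plenty of low-order pages but empty high-order columns = fragmented free memory
         cat /proc/buddyinfo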
  6. If the stock scheduling doesn't give you the flexibility you need, maybe look into the tuning plugin?
  7. Don't do that. You need to leave resources available for the host (Unraid) to emulate the motherboard and other I/O. At the very least, leave CPU 0 available for Unraid. Since you only have 4 threads, I'd only use CPU 2 and CPU 3 for the VM, and maybe try with only the last thread for the VM, leaving the other three for the host. That may also be too much, depending on how much RAM the system has. If the physical box has 32GB, 8GB for the VM should be fine. If it has 16GB or less, reduce the VM to 4096MB. The more resources you tie up in the VM, the slower the host is going to run, which in turn slows the VM way down. Give the VM the absolute minimum and add a little at a time until performance stops improving.
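     Unraid's VM settings page does the pinning with checkboxes, but the equivalent can be done live from the console; a sketch, with "Windows10" standing in for whatever your VM is actually named:

         # pin the VM's two virtual CPUs to host threads 2 and 3, leaving 0 and 1 for Unraid
         virsh vcpupin Windows10 0 2
         virsh vcpupin Windows10 1 3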
  8. Just the reverse: array -> cache, or cache only. The advantage of having the cache as primary and the array as secondary with a move-to-cache setting is that if you ever accidentally fill the cache, and the minimum free space is set correctly, the excess data will go to the array; then, when the cache has room, the mover will put the data back on the cache. Cache only will give an out-of-space error when it gets below the minimum free space set.
  9. Yes, the parity array is great for mass storage but very bad for random I/O, especially random writes. SSD or NVMe is a must for vdisks.
  10. Yep, that would be why the VM is dog slow. vdisks should be on fast pools, not parity-protected array disks.
  11. All support questions for specific containers should be posted in their thread, not spread out across the forum. That way people can easily see what others have asked, and the answers they received. Many problems have already been asked and answered.
  12. At any point did you format a drive? If so, you erased all the existing files.
  13. 1. What drive is the VM using for the vdisk or passthrough? 2. Try changing the RAM to 8GB.
  14. Perhaps post in the support thread specific to your container. In the Unraid GUI, click on the container icon, and select support.
  15. Obviously, make a backup before messing with it. Stop your HA VM and click on the 32GB under CAPACITY. Change it to 42G, or whatever floats your boat, and apply the change. Set up a new VM with your favorite live utility OS as the ISO; https://gparted.org/livecd.php is a good option. Add the existing haos vmdk vdisk file as a disk to the new VM. Boot the new VM; it should start the utility OS, where you can use gparted to expand the partition to fill the expanded vdisk image.
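      If you'd rather do the first step from the console instead of the CAPACITY field, qemu-img can grow the image; a sketch with a made-up path, assuming a raw or qcow2 image (convert a vmdk first if qemu-img refuses to resize it):

          # grow the vdisk image file; the guest partition still needs gparted afterwards
          qemu-img resize /mnt/user/domains/haos/vdisk1.img 42G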
  16. Which is why the Unraid regular container startup has customizable delays between containers. A black start from nothing is easier; a partially running start during a backup sequence is more complex and needs even better customization. Shutdown and startup conditionals and/or delays would be ideal. As an example, for my nextcloud stack I'd like nc to be stopped, wait for it to close completely, stop collabora, then stop mariadb. Back up all three. Start mariadb, start collabora, wait for those to be ready to accept connections, then start nextcloud (see the sketch below). The arr stack is even more complex. The arrs and yt-dl need to be stopped, then the nzb, then the torrent and VPN. Startup should be exactly the reverse, with ping conditionals ideal and blind delays acceptable.
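      A rough sketch of that nextcloud sequence in plain docker commands; the container names and the wait loop are assumptions, not anything Unraid provides today:

          #!/bin/bash
          # stop the front end first, then its dependencies; docker stop waits for each exit
          docker stop nextcloud collabora mariadb
          # ... back up all three appdata shares here ...
          docker start mariadb
          # crude readiness conditional: loop until mariadb answers a ping
          until docker exec mariadb mariadb-admin ping --silent; do sleep 5; done
          docker start collabora
          sleep 30    # blind delay for collabora, acceptable per the above
          docker start nextcloud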
  17. I think that is backwards. emhttp was the only web engine in the past; currently nginx is the web server, and emhttp takes care of the background tasks.
  18. Sorry, I didn't mean to imply that there are properly working boards that don't run with all slots full. If the manufacturer says their board will run with model XXXX RAM, it should run it fine, but that doesn't mean boards don't fail. I just wanted to let you know that this could be a failure symptom: you can have a board where all the slots are fine and all the DIMMs are fine, but all 4 at once isn't. I personally had a board that ran fine with all 4 DIMMs for years, until it didn't. The only failure mode was random errors when all 4 slots were full; it ran perfectly on any 2 of the DIMMs, but put all 4 in and memtest would fail every time.
  19. Are you positive nothing else was trying to access the drive during the test?
  20. Some motherboards just won't run with all slots filled.