WEHA

Members · 91 posts


WEHA's Achievements: Apprentice (3/14) · Reputation: 1
  1. I guess this was one of those disks. I put in a non-SMR drive and it's doing 120-150 MB/s... I knew SMR was no good, but THIS bad... wow
  2. I mean, OK, it's an SMR drive, but this bad? Isn't this a sequential write? It was pretty much like this from the beginning, but I can give that a try.
  3. I had a bad data disk that even got disabled, so I replaced it. I only have a larger disk available, so it wants to do a parity swap. I get that it can take a while, but I'm seeing 5-6 MB/s... for 3TB. Can someone point out how to get the ball rolling faster? At this rate it will take more than 5 days. Diagnostics attached. unneptunus-diagnostics-20240120-1510.zip
  4. I have write errors on a cache drive and now my VMs & Dockers are not responding (just opnsense).

     mount|grep cache
     /dev/nvme2n1p1 on /mnt/cache type btrfs (ro,noatime,ssd,discard=async,space_cache=v2,subvolid=5,subvol=/)

     I'm on holiday, so I don't want to stop the array to remove the disk. Is there a way to tell Unraid to stop using that disk? I saw this, but it does not seem to be the right syntax for Unraid:

     echo 1 > /sys/block/nvme0n1/device/delete

     Sep 14 07:23:20 Tower kernel: BTRFS error (device nvme2n1p1): error writing primary super block to device 2

     Label: none  uuid: 987c4458-3b7c-4bbe-af87-c2f8bdde7c60
       Total devices 2  FS bytes used 724.88GiB
       devid 1 size 931.51GiB used 903.54GiB path /dev/nvme2n1p1
       devid 2 size 0 used 0 path /dev/nvme0n1p1 MISSING

     Sidenote: with errors like these, I would also expect at least an error when looking at the array itself?

     Would it be a good idea to do the following?

     btrfs device remove /dev/nvme0n1p1 /mnt/cache

     How can I remount rw? Thanks!
  5. You should be able to start a trial on an unknown USB stick to get started with your backup config. As it is, you are just hijacking the systems of people who are already dealing with the headache of a non-working server. EDIT: for a (few) day(s) or something, linked to an account so you can track abuse.
  6. Yes, helpful, thank you... as I already mentioned, I already requested it. There seems to be a flaw in the documentation: it says you need to request a trial, but that's not possible when you have a backup of your configuration.
  7. Starting a trial is also not a solution, as the configuration backup refers to an activated license, so the trial is invalid... I would like to use my purchased product, please???!!!
  8. Another stick bites the dust, not sure why, as these are practically unused sticks... New stick installed, no license key obviously, and the very convenient 1-year block on requesting a new license is very helpful. I requested it via e-mail, but who knows how long that is going to take. I click "free trial" but there is no procedure to get the trial license, only how to install a stick... yes, thank you, I've already done that. I'm assuming this should be available on the machine itself, but it does not have internet access because the firewall is a VM on the same machine... Either way, I only get "fix error", which just opens the messages with the options "purchase key" and "redeem activation code". So... now what?
  9. I had a share that was set to cache: prefer on cache pool 2. I want to get rid of cache 2 to replace HDDs with SSDs, so I changed the setting to cache: yes on cache pool 1. When I started the mover, it wanted to move the files on cache 2 but said "file exists": move: move_object: /mnt/cache2/xxx.yyy File exists. When I set it to cache: yes on cache pool 2 and restarted the mover, it started working again.
  10. That should not be necessary at all; exceptions are for when all else fails. Anyway, it does not really matter, it's "fixed" in beta35.
  11. Very well, thanks for your input.
  12. Well, yes, not via btrfs, but I have no issues with the VM, no errors in the event log, and full backups are working. That's why I believe the vdisk is fine. It's just weird to me that only the docker image is affected, and it was on a COW share. But if you're confident that there is no issue with this scenario, then OK.
  13. By "enabling" I mean COW. So the system share had COW and the vdisk had NOCOW, but the docker image was corrupt and the vdisk image was not.
  14. It's strange that it's only the docker file and not the VM file... could it be related to NOCOW / COW? I enabled COW for the system share, and thus the docker image; the vdisk has NOCOW. Thank you for assisting.
  15. I moved everything off; 2 files remained: 1 vdisk file and the docker img. The docker image could not be moved due to an I/O error, so I removed it and recreated it on another pool. I reran a scrub and now no errors are detected. Is this related to the docker image being set as xfs on a btrfs pool? I set it to xfs to make sure the bug that causes heavy disk I/O was gone. SMART does not show any errors on the disk, so can I be sure this was software corruption and not caused by a hardware (HDD) defect?
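A quick way to sanity-check the sequential write speeds discussed in posts 1-3 is a direct dd benchmark against the suspect disk. This is only a sketch: the TARGET directory is an assumption (point it at a mount on the disk under test, e.g. a disk share), and the test file is removed afterwards.

```shell
#!/bin/sh
# Rough sequential-write benchmark (a sketch, not a proper tool).
# TARGET is an assumption -- set it to a directory on the disk you want to
# test, e.g. TARGET=/mnt/disk1. It defaults to /tmp so the sketch runs as-is.
TARGET="${TARGET:-/tmp}"

# Write 256 MiB and force a flush at the end (conv=fdatasync) so the page
# cache does not inflate the reported throughput; dd prints the rate on exit.
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=256 conv=fdatasync

# Clean up the test file.
rm -f "$TARGET/ddtest.bin"
```

On a healthy CMR disk this should report well over 100 MB/s sustained; an SMR disk whose CMR cache zone is exhausted can collapse to single-digit MB/s on sustained writes, which would be consistent with the 5-6 MB/s parity-swap speed described in post 3.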
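For the read-only cache pool in post 4: a btrfs filesystem that has flipped to read-only after write errors generally cannot be remounted rw in place; it has to be unmounted and mounted again in degraded mode before the dead device can be dropped. The following is only a sketch of the generic btrfs procedure (device and mount names taken from the post); on Unraid, pools are normally reconfigured through the GUI with the array stopped, so manual commands like these are at your own risk and should not be run against a live array.

```shell
# Sketch only -- assumes /mnt/cache is the pool and /dev/nvme2n1p1 is the
# surviving device, as shown in the post. Requires root.

# 1. "mount -o remount,rw" usually fails once btrfs has gone read-only on
#    error, so unmount and mount degraded instead:
umount /mnt/cache
mount -o degraded /dev/nvme2n1p1 /mnt/cache

# 2. With a raid1 pool, btrfs refuses to drop below the minimum device
#    count, so convert data and metadata to single-device profiles first:
btrfs balance start -dconvert=single -mconvert=single /mnt/cache

# 3. Then remove the dead device; use the keyword "missing" rather than a
#    device path, since the device is already gone:
btrfs device remove missing /mnt/cache
```

This does not answer the "without stopping the array" part of the question; unmounting /mnt/cache will still take the VMs and Docker containers on it offline while the pool is repaired.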