WEHA

Members

  • Content count: 25
  • Community Reputation: 0 Neutral

About WEHA

  • Rank: Member
  1. Would it be possible to add an included-shares setting, like the excluded-shares one? Or can you tell me what I need to add to my manual change in the Samba config? Thanks!
  2. I have a problem where a share is not detected. It is one that is not exported via the unRAID GUI but through a manual change to the Samba config; the reason is that it is a user share whose subfolders have different permissions, which is not possible from the GUI. Is there a way to enable the recycle bin for this share? The plugin does not see it (see the share stanza sketch after this list). Thanks!
  3. Ok, well, that's not that interesting; I'm sure there are reasons for this... Anyway, thanks for assisting me!
  4. Just as I suspected, but then unRAID showing it as protected is a bug, no?
     Balance status:
         Data, single: total=871.00GiB, used=801.92GiB
         System, single: total=32.00MiB, used=128.00KiB
         Metadata, single: total=4.00GiB, used=1.25GiB
         GlobalReserve, single: total=512.00MiB, used=0.00B
     So nothing is RAID 1...
  5. Alright, thanks. Two questions though, if you don't mind. Do I need to move the data first, or can I be sure I won't lose any data during the conversion? This is not mentioned in the post, so I'd like to be sure; I can imagine btrfs is smart enough to do this when enough space is available. Reading your FAQ post properly, I see that metadata can be in RAID 1 mode separately. What exactly does this mean: are the files protected or not? unRAID indicates they are, but I would think metadata alone is not enough; this confuses me, as you would surely understand. Thanks again!
  6. Do I have to use -dconvert=single -mconvert=raid1? And is "single" here as in a single disk, or as in RAID 0? Thanks! (See the profile-check and conversion sketch after this list.)
  7. Fair enough, but why do the shares indicate that they are protected?
  8. So I was adding a PCIe card to the unRAID system and booted it back up. The array was set to auto-start, so it started, but I noticed one of the cache disks was missing. Not sure why this would be allowed to happen, since a missing array disk would prevent the array from starting; is this normal behavior?
     Anyway, I removed the card, got the cache disk back, and it started balancing. The strange thing was that the used-data figure kept dropping, which scared me at first, but when checking with du nothing had changed. The balance finished, and now I have a 2TB cache where this should be 1TB (2 x 1TB NVMe SSD). Cache shares still show green, meaning protected, but when I check the balance on the cache page it says no balance found. Stopping and starting the array does not change anything.
     So, two questions (see the pool-check sketch after this list):
     - How do I fix this, unless the only way is remaking the cache?
     - How can I make unRAID not mount the cache when one of the disks is missing?
     Diagnostics attached: tower-diagnostics-20170730-1757.zip
  9. So, I made a cache with two 120GB SSDs (112GB in reality) and made a VM with a 100GB img file. When I put the VM on the cache, everything is fine. When things are being written to the image, the cache fills up, but the img file is still 100GB. If I move it to the array and back to the cache, I can use it again until the remaining 12GB gets written again. The img file is the only thing on the cache:
         /dev/sde1 112G 112G 72K 100% /mnt/cache
         100G -rwxrwxrwx 1 root users 100G Jul 22 10:01 vdisk1.img*
     Can anyone explain to me why this is happening and how I can stop it (see the copy-on-write sketch after this list)? Thanks!
  10. Sorry for the late reply, but I wasn't able to turn the array off and on. I don't have the errors anymore, so I suppose it must somehow be related to a bad btrfs filesystem.
  11. Seems that was the problem: I reformatted sdd to XFS (instead of btrfs), stopped and started the array again, and voila, a working cache disk. So is this a bug?
  12. Ok, I will reformat that disk. Am I correct in thinking that excluding a disk from the shares and then running the mover moves the data away from that disk? Thanks!
  13. Your commands did not do/find anything:
          wipefs -o 0x10040 /dev/sdg
          wipefs: /dev/sdg: offset 0x10040 not found
          wipefs -o 0x10040 /dev/sdf
          wipefs: /dev/sdf: offset 0x10040 not found
      One disk was empty, so I wiped the other one with the other command:
          wipefs -a /dev/sdf1
          /dev/sdf1: 8 bytes were erased at offset 0x00010040 (btrfs): 5f 42 48 52 66 53 5f 4d
          wipefs -a /dev/sdf
          /dev/sdf: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
          /dev/sdf: calling ioctl to re-read partition table: Success
      I recreated the cache with only sdg; it mounted fine. The next command returned nothing:
          btrfs device add /dev/sdf1 /mnt/cache
      I stopped the array, and from the moment I selected 2 slots the green dot went blue; adding the second disk made 2 blue icons. I started the array and tried the following command anyway:
          btrfs balance start --bg --full-balance /mnt/cache
      Checking the status:
          Every 2.0s: btrfs balance status /mnt/cache
          ERROR: cannot access '/mnt/cache': No such file or directory
          btrfs fi show /mnt/cache
          ERROR: superblock checksum mismatch
          ERROR: cannot scan /dev/sdc1: Input/output error
          ERROR: not a valid btrfs filesystem: /mnt/cache
      sdc is an array disk formatted as btrfs. (See the signature-wipe sketch after this list.)
  14. Ok, I've been trying some things. Cache slots was set to 2 because I wanted to add 2 drives. When I set cache slots to 1, it works?! The disk is then tagged as normal (green icon). From the moment I set the slots to 2, the first disk is tagged as new (blue icon) and unmountable. This happens with either disk.
      EDIT: in disk.cfg, cacheUUID is still empty, whereas on another test server I have an SSD mounted for cache and that actually shows a UUID (see the UUID-check sketch after this list).
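Regarding posts 1 and 2: a minimal sketch of what a manually defined Samba share could look like with the recycle VFS module enabled, so deleted files are moved into a bin folder instead of being removed outright. This assumes the stanza lives in /boot/config/smb-extra.conf (the usual place for manual Samba additions on unRAID); the share name, path and repository folder are placeholders, and the exact parameters the Recycle Bin plugin expects for shares it manages may differ.

    # Hypothetical manual share; name and paths are placeholders
    [manualshare]
        path = /mnt/user/manualshare
        browseable = yes
        writeable = yes
        # Samba's recycle module diverts deletes into a per-share folder
        vfs objects = recycle
        recycle:repository = .Recycle.Bin
        recycle:keeptree = Yes
        recycle:versions = Yes
        recycle:directory_mode = 0777

Whether the plugin's scheduled emptying picks up a .Recycle.Bin folder in a share it did not create is a separate question for the plugin author.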
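Regarding posts 4 to 7: a minimal sketch, assuming the cache pool is mounted at /mnt/cache, of checking the current allocation profiles and converting both data and metadata to RAID1. The output quoted in post 4 shows everything as "single", which in btrfs means one copy with no redundancy (it is not RAID0 striping), so the data itself is not protected.

    # Show which profile (single, raid1, ...) data and metadata currently use
    btrfs filesystem df /mnt/cache

    # Convert both data and metadata to raid1; the balance rewrites the
    # existing chunks in place while the pool stays mounted, so the data
    # does not have to be moved off first (a backup is still wise)
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

    # Watch progress
    btrfs balance status /mnt/cache

With metadata in raid1 but data still single, only the filesystem's bookkeeping is mirrored; the file contents themselves exist on one device only.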
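Regarding post 8: a minimal sketch, assuming the pool is mounted at /mnt/cache, of checking whether both devices are actually pool members and whether anything is reported missing after the PCIe-card episode; nothing here is specific to unRAID's GUI.

    # Devices btrfs believes belong to the pool (look for "missing")
    btrfs filesystem show /mnt/cache

    # Per-device allocation; chunks written while a device was absent may
    # show up with the "single" profile on the remaining device
    btrfs filesystem usage /mnt/cache

    # Read/write/corruption error counters per device
    btrfs device stats /mnt/cache

If single chunks did appear, a converting balance back to raid1 (as in the previous sketch) is normally needed before the pool is redundant again.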
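Regarding post 9: a plausible explanation is btrfs copy-on-write. Overwrites inside the 100GB image allocate new extents before the old ones are released, so a 100GB image on a 112GB pool can run it out of space even though the file's apparent size never changes. Below is a minimal sketch of checking apparent versus allocated size and of marking a directory NOCOW; the /mnt/cache/domains path is a placeholder, since the post does not say which folder holds the image.

    # Apparent size vs. blocks actually allocated for the image
    du -h --apparent-size /mnt/cache/domains/vdisk1.img
    du -h /mnt/cache/domains/vdisk1.img

    # btrfs' own view of allocated and free space
    btrfs filesystem usage /mnt/cache

    # Mark the directory NOCOW; only files created in it afterwards inherit
    # the flag, so the existing image has to be copied back in to benefit
    chattr +C /mnt/cache/domains
    lsattr -d /mnt/cache/domains

NOCOW also disables checksumming (and compression) for those files, which is the trade-off usually accepted for VM disk images.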
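Regarding post 13: a minimal sketch, with /dev/sdX as a placeholder device, of the sequence for wiping stale signatures and then growing a mounted single-device pool to two redundant devices. This is only an assumed reasonable order of operations, not a description of what unRAID does when the slot count is changed in the GUI.

    # List remaining filesystem signatures, then wipe partition and disk
    wipefs /dev/sdX1
    wipefs -a /dev/sdX1
    wipefs -a /dev/sdX

    # Have the kernel re-read the partition table
    blockdev --rereadpt /dev/sdX

    # With the single-device pool mounted at /mnt/cache, add the second
    # device and convert to raid1 so existing data is mirrored onto it
    btrfs device add /dev/sdX1 /mnt/cache
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

btrfs device add printing nothing is normal; it only reports errors. A plain --full-balance spreads chunks across the devices but leaves the single profile in place, so the convert flags are what actually create the second copy.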
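Regarding post 14: a minimal sketch of comparing the UUID btrfs reports for the cache filesystem with what unRAID has recorded. The cacheUUID key comes from the post; the assumption that disk.cfg sits on the flash at /boot/config/disk.cfg, and the /dev/sdX1 device name, are placeholders of mine.

    # UUID of the btrfs filesystem on the cache partition (placeholder device)
    blkid /dev/sdX1
    btrfs filesystem show /dev/sdX1

    # What unRAID has recorded (path assumed to be the flash config directory)
    grep -i uuid /boot/config/disk.cfg

If the filesystem has a UUID but cacheUUID stays empty after assigning the disk, that would suggest the problem lies in unRAID's bookkeeping rather than in the filesystem itself.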