me.so.bad Posted March 31, 2017 Share Posted March 31, 2017 (edited)
I was rsyncing files from my MacBook to the server (rsync -av Projektarchiv [email protected]:/mnt/user/archive/Archiv/Projektarchiv) and when I came back the log was full of "no space on device" errors and my Windows VM was frozen… my log is full of these messages:

Quote
Mar 31 20:16:46 Datenteich shfs/user: err: shfs_write: write: (28) No space left on device
Mar 31 20:16:49 Datenteich shfs/user: err: shfs_rename: rename: /mnt/cache/archive/Archiv/Projektarchiv/[…] (28) No space left on device
Mar 31 20:16:49 Datenteich shfs/user: err: shfs_write: write: (28) No space left on device
Mar 31 20:16:49 Datenteich shfs/user: err: shfs_rename: rename: /mnt/cache/archive/Archiv/Projektarchiv[…] (28) No space left on device

But the GUI and df both tell me there is plenty of space everywhere:

Quote
root@Datenteich:~# df -h /var/
Filesystem      Size  Used Avail Use% Mounted on
rootfs           16G  619M   15G   4% /
root@Datenteich:~# df -h /tmp/
Filesystem      Size  Used Avail Use% Mounted on
rootfs           16G  619M   15G   4% /
root@Datenteich:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
rootfs           16G  619M   15G   4% /
root@Datenteich:~# df -h /mnt/
Filesystem      Size  Used Avail Use% Mounted on
rootfs           16G  619M   15G   4% /mnt
root@Datenteich:~# df -h /mnt/cache
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdg1       233G  127G  106G  55% /mnt/cache
root@Datenteich:~# df -h /mnt/user
Filesystem      Size  Used Avail Use% Mounted on
shfs            4.8T  1.8T  3.1T  37% /mnt/user
root@Datenteich:~# df -h /mnt/user0
Filesystem      Size  Used Avail Use% Mounted on
shfs            4.6T  1.7T  3.0T  36% /mnt/user0

As the logs say the cache was full, I tried to invoke the mover, which worked. Now I'm able to start my VM (nope, it crashed at startup) and can resume rsyncing… so what happened here? Diagnostics are attached, but without logs (too much personal information… why aren't mover logs anonymized?). I can mail the logs to Limetech if needed.
datenteich-diagnostics-20170331-2136_nologs.zip
Edited March 31, 2017 by me.so.bad
trurl Posted March 31, 2017 Share Posted March 31, 2017
You can turn off mover logging for future diagnostics: Settings - Scheduler - Mover Settings. You can also edit the syslog yourself and post it. Your post doesn't really follow the Defect Report Guidelines, so I am moving it to General Support for now.
me.so.bad Posted March 31, 2017 Author Share Posted March 31, 2017
3 minutes ago, trurl said:
Your post doesn't really follow the Defect Report Guidelines, so I am moving it to General Support for now.
Hm, OK, what's wrong with it? I don't really know what happened, so I can't add more information… I know logs are important, but as far as I can see there is nothing helpful in them, and, as I said, I can mail them to LT if needed; I just don't want to upload them here.
me.so.bad Posted March 31, 2017 Author Share Posted March 31, 2017
diags with logs: datenteich-diagnostics-20170331-2136 2_anon.zip
trurl Posted March 31, 2017 Share Posted March 31, 2017
5 minutes ago, me.so.bad said:
Hm, OK, what's wrong with it? I don't really know what happened, so I can't add more information…
If we determine it is really a defect, then we will know more about it and we can add more information and put it in Defect Reports. 99% of the time when someone doesn't know what happened, it is not a defect.
trurl Posted March 31, 2017 Share Posted March 31, 2017
I am hoping johnnie.black will comment; he is the btrfs/cache pool expert.
JorgeB Posted March 31, 2017 Share Posted March 31, 2017
This is a btrfs problem with allocated space. I'm on the phone, so it's not easy to write a lot; I'm going to try to find a previous thread with the same issue and link it here.
me.so.bad Posted March 31, 2017 Author Share Posted March 31, 2017 (edited)
That's just… wtf?! So I've got 50% used on my disk, but because of "something" btrfs refuses to put more data on it, and I have no chance to rely on the numbers displayed by unRAID? After running your balance cmd I do get a little space back, but how is it possible that of my 2*250G RAID1 only ~25% is usable?! At least my VM survived it… need to make backups ASAP.
Edited March 31, 2017 by me.so.bad
JorgeB Posted March 31, 2017 Share Posted March 31, 2017
4 minutes ago, me.so.bad said:
but how is it possible that of my 2*250G RAID1 only ~25% is usable?!
That's false, and if you read the other thread you'll see I explained why this happens. After running that command you should be able to completely fill your pool. You were getting an out-of-space error because no new metadata chunks could be allocated, since all unused space was already allocated to data chunks. This is a "characteristic" of btrfs, less likely to happen on the latest kernels but still possible, especially on the cache, since in normal usage it gets filled to the top and then emptied multiple times.
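[Editor's note] The balance command referred to above was posted in the linked thread and isn't quoted here. On btrfs pools, a filtered balance along these lines is the usual way to reclaim mostly-empty data chunks; the exact `-dusage` value is an assumption chosen to match the "5% or less used" behavior described later in this thread, not a quote from the linked thread:

```shell
# Sketch, not quoted from this thread: rewrite (and thereby free) data chunks
# that are at most 5% full, so btrfs regains unallocated space for new
# metadata chunks. /mnt/cache is the unRAID cache pool mount point used
# elsewhere in this thread; adjust the path and filter value for your system.
btrfs balance start -dusage=5 /mnt/cache

# Watch progress and verify the result:
btrfs balance status /mnt/cache
btrfs filesystem show /mnt/cache   # per-device "used" here means *allocated*
```

These commands need root and an actual btrfs mount, so run them on the server console.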
me.so.bad Posted March 31, 2017 Author Share Posted March 31, 2017
Sorry, my English… my question was: what happened in that situation so that the filesystem was "full" after only 25% was filled with data? And is there anything I can do to prevent it in the future?
JorgeB Posted March 31, 2017 Share Posted March 31, 2017 (edited)
OK, I'm home on the computer, so let me try to explain this one more time. First, btrfs works differently than most other file systems: before any writes are done it allocates chunks, mostly for data and metadata, usually 1GB and 256MB in size respectively. btrfs fi show displays allocated vs used space. This was your cache pool:

Quote
Label: none  uuid: c3200c16-0d62-4ee2-882d-3a5f8d32eec1
Total devices 2 FS bytes used 126.22GiB
devid 1 size 232.89GiB used 232.88GiB path /dev/sdg1
devid 2 size 232.89GiB used 232.88GiB path /dev/sdf1

So device size is 232.89GiB, only 126.22GiB were in use, but 232.88GiB were allocated. Now let's look at btrfs fi df, where we can see how much space is allocated and used for each type of chunk; again, data and metadata are the ones that interest us, the others are negligible:

Quote
Data, RAID1: total=231.83GiB, used=125.38GiB
System, RAID1: total=32.00MiB, used=48.00KiB
Metadata, RAID1: total=1.02GiB, used=867.03MiB
GlobalReserve, single: total=135.22MiB, used=0.00B

So you have a lot of free space in the chunks allocated for data, and that was not the problem. But if you look at metadata, the existing chunks are almost full, so btrfs needed to create a new metadata chunk, and because the devices were completely allocated that was not possible, hence the out-of-space error. The command you ran reclaimed all previously allocated data chunks that were only 5% or less used, so the needed metadata chunk could be created. Btrfs is being constantly improved and these situations should happen much less often in the future, but to avoid this for now, run the same balance command any time the allocated space comes close to total device size, say above 95% capacity or so.
Edited April 1, 2017 by johnnie.black
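[Editor's note] The allocated-vs-used distinction above can be checked with a bit of arithmetic. The sketch below is illustrative, not from this thread: it sums per-device size and allocation figures, such as the "devid 1 size 232.89GiB used 232.88GiB" lines quoted above, and warns when allocation crosses the ~95% rule of thumb suggested in this thread (the threshold and the suggested command are this thread's advice, not btrfs defaults):

```shell
# Sketch: warn when a btrfs pool is almost fully chunk-allocated.
# Reads "size allocated" number pairs (GiB) on stdin, one device per line,
# as in the "devid N size X used Y" lines from `btrfs fi show`.
check_allocation() {
  awk '{ size += $1; alloc += $2 }
       END {
         pct = 100 * alloc / size
         printf "allocated %.1f%% of device space\n", pct
         # 95% is the rule-of-thumb threshold from this thread
         if (pct > 95) print "WARNING: consider: btrfs balance start -dusage=5 /mnt/cache"
       }'
}

# Example with the pool quoted above (two devices, 232.89GiB size, 232.88GiB allocated):
printf '232.89 232.88\n232.89 232.88\n' | check_allocation
# prints: allocated 100.0% of device space
#         WARNING: consider: btrfs balance start -dusage=5 /mnt/cache
```

This makes the failure mode concrete: df saw 55% used, but nearly 100% of the device space was already allocated to chunks, so no new metadata chunk could be created.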
HellDiverUK Posted April 2, 2017 Share Posted April 2, 2017
I get this the odd time, even when writing to pool drives (no cache shares). I just run the "New Permissions" tool in unRAID and it fixes the problem.