unRaid GUI showing conflicting disk size information



Hello,

 

I have a disk in my unRaid array that is a WD 1TB Red.  As shown below, unRaid shows this correctly in the disk and device portion.  However, on the right side, the size, used, and free information is showing a 4TB drive, which is clearly incorrect.  I've pasted the output of fdisk for the drive, which also shows it's 1TB.  I was going to replace the drive with a 4TB drive until I saw this.

 

Is this a cosmetic issue?  Is there a way to "reset" the size, used, and free information?  Am I OK to replace it with a bigger drive?

 

Thanks,

 

Al

 

[screenshot: 2017-03-14_2-15-19.png]

 

root@Tower:~# fdisk -l /dev/sdc
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdc1          64 1953525167 1953525104 931.5G 83 Linux

 

EDIT:  The stat plugin is showing the size correctly.

[screenshot: 2017-03-14_2-28-03.png]

Edited by ajeffco
Added information, updated title
Link to comment

Need more information.  That's not cosmetic, that's something that is really wrong.  Need to see the diagnostics from the first post, as well as what actions you may have taken before that, as well as what steps you took to replace the drive with the 4TB.  Parity is probably completely invalid.

 

Is it possible you restored super.dat from an older backup?
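(For reference, a quick way to check when super.dat was last written is to look at its timestamp on the flash drive; the path below assumes the standard unRaid layout.)

# Assumes the standard unRaid flash layout, where super.dat lives in /boot/config.
# A recently replaced or unexpectedly old file would stand out in the timestamp.
ls -l --time-style=full-iso /boot/config/super.dat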

Link to comment
3 hours ago, RobJ said:

Need more information.  That's not cosmetic, that's something that is really wrong.  Need to see the diagnostics from the first post, as well as what actions you may have taken before that, as well as what steps you took to replace the drive with the 4TB.  Parity is probably completely invalid.

 

Is it possible you restored super.dat from an older backup?

By diagnostics, are you talking about the "diagnostics" command output file?

 

This is a new install; nothing was restored.  I started with a few drives, migrated some data from my other machine, moved drives over, precleared them, added them to the array, etc.  Two of the drives were 1TB drives and were replaced to gain more space.  Right now it looks normal in the GUI, as shown below.  I randomly picked 5 files on the drive and they are correct.

 

Steps to replace the drive were:  preclear a replacement 4TB; when complete, stop the array; in the Disk 3 selection box, choose the "new" 4TB drive; start the array; and let the data rebuild run.  The results of that action are below; the data rebuild on Disk 3 just completed about an hour ago.
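For reference, a quick console check of the replacement drive's reported capacity before assigning it might look like this (the device name is a placeholder):

# /dev/sdX is a placeholder for the new drive. lsblk -b prints sizes in bytes,
# so a 4TB drive should report roughly 4,000,000,000,000 bytes.
lsblk -b -o NAME,SIZE,MODEL,SERIAL /dev/sdX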

 

I don't doubt that it may be a serious problem; I'm just not sure how to proceed to assist with any resolution if there is one.  I'll read that other thread.

 

Thanks,


Al

 

[screenshot: 2017-03-14_15-26-16.png]

Edited by ajeffco
Add steps for rebuild
Link to comment
1 hour ago, ajeffco said:

Unfortunately I stopped and restarted the array just over an hour ago.  The diagnostic zip file looks to be mostly reset to that time.  So there's nothing pointing at a problem with Disk 3 in that file.

Stopping and starting does not reset the diagnostics. Reboot does. Did you actually go to Tools - Diagnostics and download the diagnostics zip?
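(Side note: the same zip can also be generated from the console with the diagnostics command mentioned earlier in the thread; the output location shown below is an assumption.)

# Generates the same zip as Tools -> Diagnostics; the path is assumed to be
# the logs folder on the flash drive.
diagnostics
ls -l /boot/logs/tower-diagnostics-*.zip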

Link to comment
58 minutes ago, johnnie.black said:

Post the output of both:


btrfs fi show /mnt/disk3

btrfs fi df /mnt/disk3

root@Tower:~# btrfs fi show /mnt/disk3
Label: none  uuid: 25d79d48-80f9-4b90-8091-515048193568
        Total devices 1 FS bytes used 3.30TiB
        devid    1 size 3.64TiB used 3.54TiB path /dev/md3

root@Tower:~# btrfs fi df /mnt/disk3
Data, single: total=3.53TiB, used=3.30TiB
System, single: total=4.00MiB, used=416.00KiB
Metadata, single: total=5.01GiB, used=3.40GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

 

Link to comment
24 minutes ago, trurl said:

Stopping and starting does not reset the diagnostics. Reboot does. Did you actually go to Tools - Diagnostics and download the diagnostics zip?

 

The server has been up for 25+ hours.  I should have been more clear: the syslog files are the only ones that appear to go back to the reboot of the server; the others are timestamped with the restart of the array.  I'll work on copying my syslogs to stable storage; I think I saw a post on that somewhere.
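A minimal sketch of that, assuming the flash drive is mounted at /boot, is just a timestamped copy:

# Keep a timestamped copy of the current syslog on the flash drive so it
# survives a reboot (assumes /boot is the unRaid flash).
mkdir -p /boot/logs
cp /var/log/syslog /boot/logs/syslog-$(date +%Y%m%d-%H%M%S).txt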

 

 

Link to comment

Does the disk still contain only the old 1TB data, or did you copy some more?

 

I find it curious that both disk2 and disk3 have the exact same free space; I've seen some btrfs "confusion" between disks before.  Can you try copying a >1GB file to disk2 and check that the free space changes only for disk2 and not for disk3 as well?
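A quick way to run that test from the console (the file name and ~2GB size are arbitrary; dd just generates throwaway data):

# Note free space before, write ~2GB directly to disk2, then compare.
# If disk3's numbers move as well, both mount points are backed by one filesystem.
df -h /mnt/disk2 /mnt/disk3
dd if=/dev/urandom of=/mnt/disk2/freespace-test.bin bs=1M count=2048
df -h /mnt/disk2 /mnt/disk3
rm /mnt/disk2/freespace-test.bin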

 

Still would like to see current diagnostics.

Link to comment
31 minutes ago, ajeffco said:

 

The server has been up for 25+ hours.  I should have been more clear: the syslog files are the only ones that appear to go back to the reboot of the server; the others are timestamped with the restart of the array.  I'll work on copying my syslogs to stable storage; I think I saw a post on that somewhere.

The files other than syslog are indeed a current snapshot.  Doesn't matter whether you start/stop or not. There is no history kept of all the many things that they show.

Link to comment

You should post the diags in the thread so everyone can see them (more eyes, more chance of getting help), not send them by PM.  But there's definitely something going on:
 

Mar 13 15:43:56 Tower emhttp: Mounting disks...
Mar 13 15:43:56 Tower emhttp: shcmd (51): /sbin/btrfs device scan |& logger
Mar 13 15:43:56 Tower root: ERROR: device scan failed on '/dev/md3': File exists
Mar 13 15:43:56 Tower root: ERROR: there are 1 errors while registering devices

I'm almost certain my earlier hunch is correct, i.e., the space stats showing for disk3 are repeated from disk2.  Did you try to copy a file to disk2 like I asked?
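One way to check that hunch from the console is to see which md device actually backs each mount point (a sketch; findmnt is part of util-linux):

# If both mount points resolve to the same source device, the repeated stats
# for disk2 and disk3 would be explained.
findmnt -o TARGET,SOURCE,FSTYPE /mnt/disk2
findmnt -o TARGET,SOURCE,FSTYPE /mnt/disk3
# Same information straight from the kernel's mount table:
grep '/mnt/disk' /proc/mounts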

 

You should reboot to see if this goes away; if not, it will need further investigation.

Link to comment
53 minutes ago, johnnie.black said:

You should post the diags in the thread so everyone can see them (more eyes, more chance of getting help), not send them by PM.  But there's definitely something going on:

The last paragraph in the first post of this sticky is relevant

  • PLEASE do not privately ask the moderators or other users for personal support!  Helpful moderators and users are just users like yourself, unpaid volunteers, and do not provide official support, so it is better to ask support questions on the forum instead.  That way, you get the benefit of having answers from the whole community, instead of just one person who may or may not know, or may be wrong, or may be unavailable.  Plus, you will probably get faster help.  And, other users in the community can benefit by learning from your problem and its solution.
Link to comment

One more thing: since the btrfs scan error is from right after boot and before you upgraded disk3, keep the old disk3 intact.  I just remembered to check this:

 

/dev/md1        1.9T  1.7T  133G  93% /mnt/disk1
/dev/md3        3.7T  3.4T  344G  91% /mnt/disk2
/dev/md4        3.7T  3.4T  257G  94% /mnt/disk4
...

 

So where's disk3 and what disk was rebuilt? O.o

Link to comment
49 minutes ago, trurl said:

The last paragraph in the first post of this sticky is relevant

  • PLEASE do not privately ask the moderators or other users for personal support!  Helpful moderators and users are just users like yourself, unpaid volunteers, and do not provide official support, so it is better to ask support questions on the forum instead.  That way, you get the benefit of having answers from the whole community, instead of just one person who may or may not know, or may be wrong, or may be unavailable.  Plus, you will probably get faster help.  And, other users in the community can benefit by learning from your problem and its solution.

 

Trurl, I didn't ask privately for support.  He asked for a diagnostic; my mistake was providing it via PM instead of in the thread.  My apologies.

Edited by ajeffco
Link to comment
16 minutes ago, johnnie.black said:

One more thing: since the btrfs scan error is from right after boot and before you upgraded disk3, keep the old disk3 intact.  I just remembered to check this:

 

/dev/md1        1.9T  1.7T  133G  93% /mnt/disk1
/dev/md3        3.7T  3.4T  344G  91% /mnt/disk2
/dev/md4        3.7T  3.4T  257G  94% /mnt/disk4
...

 

So where's disk3 and what disk was rebuilt? O.o

 

It's sitting on my desk :)

 

In the GUI I see all my drives.  In the CLI, df is missing disk 3.  Screenshots and a new diagnostic are attached.  Would it be easier for me to start converting the drives to XFS and migrating the data in a rolling fashion, starting with Disk 10?

 

GUI:

[screenshot: 2017-03-14_20-23-57.png]

 

CLI (Missing /dev/md2 aka /mnt/disk3):

root@Tower:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs           16G  404M   16G   3% /
tmpfs            16G  252K   16G   1% /run
devtmpfs         16G     0   16G   0% /dev
cgroup_root      16G     0   16G   0% /sys/fs/cgroup
tmpfs           128M  2.4M  126M   2% /var/log
/dev/sda1       976M  152M  825M  16% /boot
/dev/md1        1.9T  1.7T  133G  93% /mnt/disk1
/dev/md3        3.7T  3.4T  344G  91% /mnt/disk2
/dev/md4        3.7T  3.4T  257G  94% /mnt/disk4
/dev/md5        1.9T  1.8T   63G  97% /mnt/disk5
/dev/md6        3.7T  3.6T   76G  98% /mnt/disk6
/dev/md7        3.7T  3.6T   75G  98% /mnt/disk7
/dev/md8        3.7T  3.4T  249G  94% /mnt/disk8
/dev/md9        3.7T  784G  2.9T  22% /mnt/disk9
/dev/md10       3.7T   17M  3.7T   1% /mnt/disk10
shfs             33T   25T  8.1T  76% /mnt/user

 

BTRFS FI output in case it helps

root@Tower:~# btrfs fi show /dev/md2
Label: none  uuid: 25d79d48-80f9-4b90-8091-515048193568
        Total devices 1 FS bytes used 3.30TiB
        devid    1 size 3.64TiB used 3.54TiB path /dev/md3

root@Tower:~# btrfs fi show /mnt/disk3
Label: none  uuid: 25d79d48-80f9-4b90-8091-515048193568
        Total devices 1 FS bytes used 3.30TiB
        devid    1 size 3.64TiB used 3.54TiB path /dev/md3

root@Tower:~# btrfs fi df /dev/md2
ERROR: not a btrfs filesystem: /dev/md2
root@Tower:~# btrfs fi df /mnt/disk3
Data, single: total=3.53TiB, used=3.30TiB
System, single: total=4.00MiB, used=416.00KiB
Metadata, single: total=5.01GiB, used=3.40GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

 

 

tower-diagnostics-20170314-2025.zip

Link to comment

After running the test you mentioned: earlier I thought I had accessed files on Disk3, but they are really on Disk2.  When I do a find in the CLI, it's the same files.

 

An example, a file from my synology backup share:

root@Tower:/mnt/disk3/synback/filer_1.hbk# find /mnt -name synobkpinfo.db
/mnt/user/synback/filer_1.hbk/synobkpinfo.db
/mnt/disk3/synback/filer_1.hbk/synobkpinfo.db
/mnt/disk2/synback/filer_1.hbk/synobkpinfo.db

root@Tower:/mnt/disk3/synback/filer_1.hbk# md5sum /mnt/user/synback/filer_1.hbk/synobkpinfo.db
103358f0b308ae36349bc17d1103e607  /mnt/user/synback/filer_1.hbk/synobkpinfo.db

root@Tower:/mnt/disk3/synback/filer_1.hbk# md5sum /mnt/disk3/synback/filer_1.hbk/synobkpinfo.db
103358f0b308ae36349bc17d1103e607  /mnt/disk3/synback/filer_1.hbk/synobkpinfo.db

root@Tower:/mnt/disk3/synback/filer_1.hbk# md5sum /mnt/disk2/synback/filer_1.hbk/synobkpinfo.db
103358f0b308ae36349bc17d1103e607  /mnt/disk2/synback/filer_1.hbk/synobkpinfo.db
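Matching checksums alone could also mean two identical copies; comparing device and inode numbers distinguishes the two cases (a sketch using GNU stat):

# %d is the device number and %i the inode; identical values for both paths
# mean they are the same file seen through two mount points, not two copies.
stat -c '%d %i %n' /mnt/disk2/synback/filer_1.hbk/synobkpinfo.db \
                   /mnt/disk3/synback/filer_1.hbk/synobkpinfo.db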

 

 

Link to comment
Just now, ajeffco said:

 

You'd think I knew better ;).  Good reminder tho.

I would believe johnnie.black before I would believe myself, but in general, there have certainly been cases where someone gave bad advice in public, and it was corrected by others. So if you take advice in private...

 

And there has been more than one occasion where someone PMed me about something that needed a quick response, and I was out in the wilderness somewhere and didn't get the message till much later.

Link to comment

More oddity...  lsblk shows the disk at 4TB, but btrfs thinks it's 931GB.

 

lsblk output for disk 3:

sdg      8:96   0   3.7T  0 disk
└─sdg1   8:97   0   3.7T  0 part

 

btrfs fi show for /dev/sdg1:

 btrfs fi show /dev/sdg1
Label: none  uuid: 25d79d48-80f9-4b90-8091-515048193568
        Total devices 1 FS bytes used 384.00KiB
        devid    1 size 931.51GiB used 1.02GiB path /dev/sdg1
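For comparison, the raw partition size in bytes can be read with blockdev and set against the 931.51GiB the btrfs superblock is reporting (a sketch):

# Raw size of the partition in bytes (util-linux), to compare against the
# 931.51GiB filesystem size btrfs reports for /dev/sdg1.
blockdev --getsize64 /dev/sdg1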

And Unassigned Devices shows:

[screenshot: 2017-03-15_0-48-17.png]

 

It's really jacked up... 

Edited by ajeffco
Added unassigned devices screenshot
Link to comment
