Cleaning out Docker image



@ikosa

 

I used cadvisor as well. Like you, the size of my containers never changes. I conclude that either cadvisor is reporting the size of the images -- which AFAIK should not change -- or something in docker.img besides the containers is growing. AFAIK Lime has not said whether there is anything else in docker.img; perhaps Lime could say whether anything else is supposed to be there.

You can get similar size figures with docker commands (I don't remember them offhand), but of course both could be wrong.

We need some tools/commands to inspect and report on the non-container part of the docker image.
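
The docker commands in question are probably these (a sketch; flags existed in Docker of this vintage, but exact output varies by version):

docker ps -a -s   # per-container sizes: writable-layer size plus "virtual" size including the image
docker images     # per-image sizes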

Link to comment

I use Plex transcoding to RAM (I have 32 GB). I noticed my docker image growing the other day, but when I stopped and restarted Plex to let it update itself, the docker utilization percentage went down considerably. You might want to try that to see if it helps.

Link to comment

I don't use Plex, but I don't think this scenario can be right. IMO you would have to see that increase in cadvisor, but AFAIK the increase is not in any container, at least for me; the increase is in the non-container part of the docker image.

 

Plex may not be the only app that temporarily uses large amounts of disk space. Any media server that does transcoding is likely to store big temporary files in the container. I see you have Serviio installed. I don't have any experience with Serviio, but if it does transcoding it could have the same problem I suspect Plex has.

 

In my case, I suspect Plex is the problem because the amount of disk space used in docker.img sometimes jumps by 1 or 2 GB a day, and sometimes it is relatively stable for days at a time. 1 to 2 GB a day is far more than a container's log files would account for. And I noticed that the growth spurts seem to coincide with the days my daughter watched a lot of anime on her iPad.

 

I don't know why cadvisor does not show the size of the Plex container growing. Maybe cadvisor does not count temp files as part of the container's size? Or maybe cadvisor is reporting the size of the image, since that would be the container's size when it is first started. My container sizes, as reported by cadvisor, have been constant while my docker utilization continues to climb.

 

Questions for Lime Technology:

 

1)  When a docker app creates temporary files in the container, does the utilization of docker.img increase?

2)  When a docker app deletes temporary files in the container, does the utilization of docker.img decrease?

3)  What else, besides containers, is stored in docker.img?

Link to comment

3)  What else, besides containers, is stored in docker.img?

The Community Applications plugin stores some files in it.  Total space is ~2 Meg.

 

This is interesting. So we can confirm that there are other things in docker.img besides the containers. Is there any way to see what is in docker.img?

Link to comment

I'm at work, but IIRC navigate to /var/lib/docker/unraid

 

Thanks Squid. Is that where docker.img is mounted in unRAID's filesystem?

I would think so, but I'm not 100% sure. CA uses it for semi-temporary, reboot-persistent storage for things like what's currently displayed, a cache of dockerHub icons (if dockerHub mode is enabled), etc. CA's very temporary files (intermediate downloaded files, various "flags", etc.) are stored in /tmp (RAM).
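
A rough way to gauge how much space CA and friends actually occupy there (a sketch, assuming the path above):

du -sh /var/lib/docker/unraid/*   # per-entry totals; should come to only a few MB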
Link to comment

And even there, the vast majority of the storage is taken up by the containers. dockerMan presumably also stores its settings there, but everything else is minuscule compared to the size of the containers. We're talking megabytes here, compared to the gigabytes used by the containers.

 

Link to comment

And even there, the vast majority of the storage is taken up by the containers. dockerMan presumably also stores its settings there, but everything else is minuscule compared to the size of the containers. We're talking megabytes here, compared to the gigabytes used by the containers.

 

@squid, thanks for your help. This is the most progress I've made in a week of trying to understand what is going on.

 

At that location, I see a /tmp folder that contains dozens of files with names like tmp-1018085421.url, which mc reports to be of size 714424 (bytes, presumably, so roughly 700 KB each). By comparison, /community.applications is of size 60, so those tmp files are very much bigger. The oldest tmp file seems to correspond to the date I last rebuilt docker.img. Here is a list of what's in /tmp inside docker.img.

 

root@Media:/tmp# ls -l
total 50472
drwxrwxrwx 3 root root     60 Sep 23 07:29 community.applications/
drwx------ 2 root root     40 Sep 24 16:59 mc-root/
drwxr-xr-x 4 root root     80 Sep 23 07:27 notifications/
drwxr-xr-x 2 root root    200 Sep 28 23:16 plugins/
-rw-rw-rw- 1 root root      0 Sep 28 10:00 preclear_assigned_disks1
-rw-rw-rw- 1 root root   1680 Sep 28 10:00 preclear_report_sda
-rw-rw-rw- 1 root root    168 Sep 28 10:00 preclear_stat_sda
-rw-rw-rw- 1 root root     10 Sep 28 10:00 read_speedsda
-rw-rw-rw- 1 root root   4571 Sep 28 10:00 smart_finish_sda
-rw-rw-rw- 1 root root   4571 Sep 27 09:55 smart_mid_after_zero1_sda
-rw-rw-rw- 1 root root    144 Sep 27 09:55 smart_mid_pending_reallocate_sda
-rw-rw-rw- 1 root root   4571 Sep 26 23:02 smart_mid_preread1_sda
-rw-rw-rw- 1 root root   4574 Sep 26 09:22 smart_start_sda
-rw-rw-rw- 1 root root 705595 Sep 23 13:47 tmp-1001038311.url
-rw-rw-rw- 1 root root 714424 Sep 27 14:46 tmp-1018085421.url
-rw-rw-rw- 1 root root 711853 Sep 25 10:04 tmp-1022176830.url
-rw-rw-rw- 1 root root 711853 Sep 25 09:58 tmp-1026715570.url
-rw-rw-rw- 1 root root 714424 Sep 28 09:40 tmp-1037033344.url
-rw-rw-rw- 1 root root 711853 Sep 27 05:01 tmp-1041957111.url
-rw-rw-rw- 1 root root 711853 Sep 25 09:57 tmp-104351457.url
-rw-rw-rw- 1 root root 705595 Sep 23 11:43 tmp-1079397565.url
-rw-rw-rw- 1 root root 714424 Sep 27 15:08 tmp-1114330130.url
-rw-rw-rw- 1 root root 714554 Sep 28 23:16 tmp-1115295570.url
-rw-rw-rw- 1 root root 714424 Sep 27 15:09 tmp-1115779883.url
-rw-rw-rw- 1 root root 714554 Sep 29 10:48 tmp-1184253690.url
-rw-rw-rw- 1 root root 714938 Oct  1 08:22 tmp-1244214065.url
-rw-rw-rw- 1 root root 714424 Sep 28 02:27 tmp-124625526.url
-rw-rw-rw- 1 root root 705595 Sep 23 11:07 tmp-1257712804.url
-rw-rw-rw- 1 root root 711853 Sep 26 22:48 tmp-1262417929.url
-rw-rw-rw- 1 root root 705595 Sep 23 11:26 tmp-1266159711.url
-rw-rw-rw- 1 root root 705595 Sep 23 11:07 tmp-1303099892.url
-rw-rw-rw- 1 root root 705595 Sep 23 11:17 tmp-1304736983.url
-rw-rw-rw- 1 root root 714424 Sep 27 15:07 tmp-1320279264.url
-rw-rw-rw- 1 root root 714424 Sep 27 14:44 tmp-137839263.url
-rw-rw-rw- 1 root root 711853 Sep 26 14:04 tmp-1394325280.url
-rw-rw-rw- 1 root root 714424 Sep 27 14:47 tmp-1475139423.url
-rw-rw-rw- 1 root root 714554 Sep 29 10:48 tmp-1478797206.url
-rw-rw-rw- 1 root root 711853 Sep 26 22:58 tmp-1502409291.url
-rw-rw-rw- 1 root root 714554 Sep 28 10:04 tmp-1527394120.url
-rw-rw-rw- 1 root root 714554 Sep 28 23:16 tmp-1548075922.url
-rw-rw-rw- 1 root root 711853 Sep 26 22:47 tmp-1565010257.url
-rw-rw-rw- 1 root root 714554 Sep 28 10:05 tmp-1584110921.url
-rw-rw-rw- 1 root root 705595 Sep 23 07:30 tmp-160169679.url
-rw-rw-rw- 1 root root 714424 Sep 27 15:21 tmp-1661075416.url
-rw-rw-rw- 1 root root 711853 Sep 25 10:04 tmp-1672534634.url
-rw-rw-rw- 1 root root 705595 Sep 23 11:43 tmp-1701087208.url
-rw-rw-rw- 1 root root 714554 Sep 28 15:39 tmp-170252017.url
-rw-rw-rw- 1 root root 708172 Sep 24 22:10 tmp-1731846235.url
-rw-rw-rw- 1 root root 714554 Sep 29 10:49 tmp-1776699683.url
-rw-rw-rw- 1 root root 711853 Sep 25 10:04 tmp-1794736320.url
-rw-rw-rw- 1 root root 705595 Sep 23 11:17 tmp-1798316926.url
-rw-rw-rw- 1 root root 705595 Sep 23 11:23 tmp-1867079391.url
-rw-rw-rw- 1 root root 714938 Sep 30 18:02 tmp-1892408965.url
-rw-rw-rw- 1 root root 711853 Sep 27 09:44 tmp-1992924892.url
-rw-rw-rw- 1 root root 714424 Sep 27 15:06 tmp-2004304119.url
-rw-rw-rw- 1 root root 705595 Sep 23 07:34 tmp-2058323394.url
-rw-rw-rw- 1 root root 705595 Sep 23 07:44 tmp-2087387426.url
-rw-rw-rw- 1 root root 714424 Sep 27 14:44 tmp-2088655953.url
-rw-rw-rw- 1 root root 714554 Sep 28 15:39 tmp-2136823997.url
-rw-rw-rw- 1 root root 714554 Sep 29 10:50 tmp-2143980462.url
-rw-rw-rw- 1 root root 705595 Sep 23 10:56 tmp-238162114.url
-rw-rw-rw- 1 root root 714424 Sep 27 15:07 tmp-244130472.url
-rw-rw-rw- 1 root root 711853 Sep 26 11:22 tmp-297073847.url
-rw-rw-rw- 1 root root 714424 Sep 27 14:37 tmp-307181750.url
-rw-rw-rw- 1 root root 714554 Sep 29 13:18 tmp-319612520.url
-rw-rw-rw- 1 root root 708172 Sep 24 05:56 tmp-364760669.url
-rw-rw-rw- 1 root root 711853 Sep 26 14:23 tmp-381722333.url
-rw-rw-rw- 1 root root 705595 Sep 23 07:29 tmp-45712661.url
-rw-rw-rw- 1 root root 705595 Sep 23 07:37 tmp-46253092.url
-rw-rw-rw- 1 root root 714554 Sep 28 23:17 tmp-489865132.url
-rw-rw-rw- 1 root root 714424 Sep 27 14:46 tmp-49543768.url
-rw-rw-rw- 1 root root 705595 Sep 23 07:30 tmp-532269974.url
-rw-rw-rw- 1 root root 711853 Sep 27 09:48 tmp-54267583.url
-rw-rw-rw- 1 root root 714554 Sep 29 06:36 tmp-569650762.url
-rw-rw-rw- 1 root root 714554 Sep 29 07:31 tmp-595762458.url
-rw-rw-rw- 1 root root 705595 Sep 23 07:31 tmp-626615610.url
-rw-rw-rw- 1 root root 714424 Sep 27 14:46 tmp-741555393.url
-rw-rw-rw- 1 root root 711853 Sep 26 11:20 tmp-771294825.url
-rw-rw-rw- 1 root root 705595 Sep 23 07:36 tmp-793902811.url
-rw-rw-rw- 1 root root 714554 Sep 28 10:05 tmp-870656813.url
-rw-rw-rw- 1 root root 708172 Sep 24 13:29 tmp-87708617.url
-rw-rw-rw- 1 root root 714424 Sep 27 14:43 tmp-926217946.url
-rw-rw-rw- 1 root root 714424 Sep 27 14:37 tmp-9295676.url
-rw-rw-rw- 1 root root 714938 Sep 30 19:15 tmp-933241343.url
-rw-rw-rw- 1 root root 705595 Sep 23 11:22 tmp-970283086.url
drwx------ 2 root root     60 Sep 26 09:22 tmux-0/
-rw-rw-rw- 1 root root 263768 Sep 27 09:55 zerosda

 

A couple of questions:

 

1) Do you have any idea what is generating these tmp files?

2) If I did not rebuild docker.img, would these tmp files get purged from docker.img on their own?

Link to comment


1) Do you have any idea what is generating these tmp files?

2) If I did not rebuild docker.img, would these tmp files get purged from docker.img on their own?

/tmp is stored in RAM, not inside docker.img.

 

All those tmp-xxxx.url files in /tmp are from CA (there was a memory leak caused by a typo of mine, fixed in the latest CA update) and can be deleted (or you can just restart the server).
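
For anyone who would rather not reboot, a sketch of the manual cleanup (the wildcard matches the leaked files listed above):

rm -f /tmp/tmp-*.url   # /tmp is a RAM-backed tmpfs on unRAID, so a reboot clears it as well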

 

Link to comment

@squid 

 

Is docker.img mounted to /var/lib/docker/unraid or one level up at /var/lib/docker?

 

It looks to me like the containers are one level up at /var/lib/docker.  At that level I see:

 

/containers of size 768, and /graph of size 16520. Does this sound right to you?

 

root@Media:/var/lib/docker# ls -l
total 20
drwx------ 1 root root    20 Sep 22 16:17 btrfs/
drwx------ 1 root root   768 Sep 29 10:49 containers/
drwx------ 1 root root 16520 Sep 27 15:06 graph/
drwx------ 1 root root    32 Sep 22 16:16 init/
-rw-r--r-- 1 root root  5120 Sep 29 10:49 linkgraph.db
-rw------- 1 root root   604 Sep 29 10:49 repositories-btrfs
drwx------ 1 root root     0 Sep 27 15:06 tmp/
drwx------ 1 root root     0 Sep 22 16:16 trust/
drwxrwxrwx 1 root root   112 Oct  1 12:03 unraid/
-rw-rw-rw- 1 root root    71 Oct  1 12:03 unraid-autostart
-rw-rw-rw- 1 root root   166 Sep 29 10:49 unraid-update-status.json
drwx------ 1 root root   256 Sep 27 14:43 volumes/
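
One way to confirm the mount point, assuming docker.img is loop-mounted the way unRAID does it:

df -h /var/lib/docker   # should list /dev/loop0 mounted at /var/lib/docker
losetup -a              # maps each loop device back to its backing file, e.g. docker.img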

Link to comment

@squid 

 

Is docker.img mounted to /var/lib/docker/unraid or one level up at /var/lib/docker?

 

It looks to me like the containers are one level up at /var/lib/docker.  At that level I see:

Then that's probably where it's mounted. Like I said, I'm at work so I couldn't check.

 

@Squid  Thanks again. I'll monitor the size of the directories at /var/lib/docker. The next time I see a big jump in utilization of docker.img, I should be able to see what is growing. This is a big help.

Link to comment

This does not make sense to me. My docker image is 15 GB at 55% utilization, but ./btrfs is 67 GB :S My first guess is that it sums the hardlinked files mapped to the cache drive, but those are approx. 6 GB. Docker is a complicated thing :)

BTW, with this I guess I now understand the numbers on the Settings/Docker page: the first is the space used by the containers, the second is the complete docker image.

 

root@Tower:/var/lib/docker# du -sh ./*
67G     ./btrfs
6.7M    ./containers
1.9M    ./graph
5.5M    ./init
8.0K    ./linkgraph.db
4.0K    ./repositories-btrfs
0       ./tmp
0       ./trust
16M     ./unraid
4.0K    ./unraid-autostart
4.0K    ./unraid-update-status.json
0       ./volumes

 

Label: none  uuid: b882b5ad-8dc1-43e6-ade4-c9542b0051c2
Total devices 1 FS bytes used 5.21GiB
devid    1 size 15.00GiB used 8.04GiB path /dev/loop0

 

root@Tower:/var/lib/docker# df
Filesystem       1K-blocks       Used  Available Use% Mounted on
......
/dev/loop0        15728640    8288248    6815592  55% /var/lib/docker

 

tobbenb/mkvtoolnix-gui	latest	5036ba1338dcce3e3dcf6e52	698.55 MiB	26.07.2015 22:54:48
sparklyballs/serviio	latest	18049895a6aada79820586b2	1.02 GiB	22.05.2015 14:07:02
needo/sickrage		latest	b580bde9d271c63427fc6c19	339.51 MiB	01.08.2015 17:37:35
needo/mariadb		latest	566c91aa7b1e209ddd41e5b0	563.20 MiB	11.07.2014 14:53:52
needo/couchpotato	latest	196d8d33f934fa545f1310ef	322.36 MiB	06.05.2015 06:27:38
mobtitude/vpn-pptp	latest	1164a70c07b12e5287b06cad	202.88 MiB	01.02.2015 21:35:57
mace/openvpn-as		latest	e3724358ea4c045ae9727a8c	281.96 MiB	15.08.2015 12:19:30
hurricane/docker-btsync	free	7054a03fe9c4d8cbfba3d147	273.97 MiB	17.04.2015 20:25:42
google/cadvisor		latest	9d2add265f7f96e8973b7678	19.11 MiB	24.09.2015 01:12:14
gfjardim/transmission	latest	5a0c5c6db90d5a636c807bb5	454.02 MiB	06.09.2014 07:03:03
frosquin/softether	latest	bca90fde0bebc881df1a7dda	148.16 MiB	31.08.2015 17:51:40
emby/embyserver		latest	67bcdb16d1973cb5a96d49fd	886.14 MiB	22.09.2015 19:28:14
TOTAL								5,215.23 MiB

Link to comment

This does not make sense to me. My docker image is 15 GB at 55% utilization, but ./btrfs is 67 GB :S My first guess is that it sums the hardlinked files mapped to the cache drive, but those are approx. 6 GB. Docker is a complicated thing :)

BTW, with this I guess I now understand the numbers on the Settings/Docker page: the first is the space used by the containers, the second is the complete docker image.

Haven't looked that closely at it, but the 67G is probably the space the containers would take up if Docker didn't leverage the COW (copy-on-write) features of btrfs.
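
Worth noting: du walks every subvolume and snapshot and counts shared copy-on-write extents once per copy, so it over-reports on btrfs. A sketch for seeing the real allocation (assuming btrfs-progs is available, as it is on unRAID):

btrfs filesystem df /var/lib/docker   # data/metadata actually allocated inside docker.img
btrfs filesystem show                 # per-device totals; matches the "Label: none" output above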

Link to comment

I'm not sure about Serviio's or Emby Server's transcoding settings, but I don't really use them; I just installed them in case of emergency and to test.

I checked and mapped them to the cache drive. I don't think that will solve my issue (because I use them very rarely), but this is an important point if you use this kind of software.
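
A sketch of such a mapping; the host and container paths here are examples only, and the right container-side path depends on each app's transcode setting:

docker run -d --name plex \
  -v /mnt/cache/appdata/plex:/config \
  -v /mnt/cache/appdata/transcode:/transcode \
  linuxserver/plex
# with /transcode mapped out of the container, transcode scratch files land on the
# cache drive instead of inflating docker.img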

Link to comment

I guess I found a clue: I just added /tmp mappings to Emby and Serviio. In the process, Emby downloaded an image again (AFAIK it is ~200 MB; an updated version, I guess), and disk utilization changed significantly after that, but the container sizes are the same! (Nobody used any docker between the two summaries; except CouchPotato, SickRage, etc., of course, but there were no active downloads.)

 

Label: none  uuid: b882b5ad-8dc1-43e6-ade4-c9542b0051c2
    	Total devices 1 FS bytes used 6.45GiB
    	devid    1 size 15.00GiB used 9.79GiB path /dev/loop0

 

tobbenb/mkvtoolnix-gui	latest	5036ba1338dcce3e3dcf6e52	698.55 MiB	26.07.2015 22:54:48
sparklyballs/serviio	latest	18049895a6aada79820586b2	1.02 GiB	22.05.2015 14:07:02
needo/sickrage		latest	b580bde9d271c63427fc6c19	339.51 MiB	01.08.2015 17:37:35
needo/mariadb		latest	566c91aa7b1e209ddd41e5b0	563.20 MiB	11.07.2014 14:53:52
needo/couchpotato	latest	196d8d33f934fa545f1310ef	322.36 MiB	06.05.2015 06:27:38
mobtitude/vpn-pptp	latest	1164a70c07b12e5287b06cad	202.88 MiB	01.02.2015 21:35:57
mace/openvpn-as		latest	e3724358ea4c045ae9727a8c	281.96 MiB	15.08.2015 12:19:30
hurricane/docker-btsync	free	7054a03fe9c4d8cbfba3d147	273.97 MiB	17.04.2015 20:25:42
google/cadvisor		latest	9d2add265f7f96e8973b7678	19.11 MiB	24.09.2015 01:12:14
gfjardim/transmission	latest	5a0c5c6db90d5a636c807bb5	454.02 MiB	06.09.2014 07:03:03
frosquin/softether	latest	bca90fde0bebc881df1a7dda	148.16 MiB	31.08.2015 17:51:40
emby/embyserver		latest	32d7b2c34fcdc4a1c5c8f51c	886.15 MiB	30.09.2015 08:25:39

 

and an orphan image popped up:

(orphan image)
Image ID: 67bcdb16d197
<none>:<none>
Created 1 week ago

which is odd, since just 4 days ago I deleted my docker image and recreated all containers from scratch.
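
Note that the orphan's ID (67bcdb16d197) matches the emby/embyserver image in the earlier listing, so it is most likely the superseded image left dangling by Emby's update. If so, it can be listed and removed with stock docker commands:

docker images -f dangling=true                   # lists <none>:<none> images
docker rmi $(docker images -q -f dangling=true)  # removes them, provided no container still references them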

Link to comment

I am currently traveling but am getting the same error messages.

 

This just started happening after updating several plugins and dockers (including unRAID Server OS 6.1.3).

 

I am getting emails that say: "Docker image disk utilization of 70%"

 

Then a few minutes later I get an email that says: "Docker image disk utilization returned to normal level"

I called home, and it may be linked to my son watching movies via Plex.

 

I am running:

 

linuxserver/couchpotato

linuxserver/nzbget

linuxserver/plex

linuxserver/sonarr

gfjardim/crashplan

gfjardim/dropbox

sparklyballs/krusader

 

These dockers are installed but not running:

 

linuxserver/headphones

gfjardim/crashplan-desktop

 

In my situation the docker size drops back down. I am happy to provide any information to help debug this issue if people think it is related.

 

Thanks!

 

DZ

Link to comment

I get the same e-mails; however, mine go all the way up to 100% and never say "returned to normal".

The kicker here... I don't have Docker enabled, and never have.

I'm slightly confused.

:o

Would you have time to create a Defect Report outlining what you are seeing and the exact content of the messages, and include your diagnostics, so the settings/variables can be confirmed and checked?

Link to comment
