aptalca Posted January 6, 2016
I don't believe log rotation is enabled in unRAID's implementation of Docker, and there doesn't seem to be a limit on log size either. I kept running out of space in my docker image and finally realized that a few of the containers were filling the image with their logs. One container (couchpotato) had a log that was 2.8GB. The temporary solution was to reinstall the container, which reset the files in /var/lib/docker/containers, got rid of the large logs, and freed up a ton of space. (For any users interested, the easiest way to reinstall is to click on the container image in the unRAID GUI, select edit, don't change any settings, and just hit save; it will reinstall the container with the same settings.) Is there any way a size limit or log rotation could be implemented? Or perhaps an option to move the logs somewhere else where there is more storage available? Thanks
PS. If anyone's interested in checking how big their logs are, type this in the unRAID terminal and it will list the largest logs:
du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60
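The pipeline above can be tried safely on a scratch directory first; nothing in this sketch touches docker.img, and the file names are invented for illustration:

```shell
# Demo of the du | sort | head pipeline on a scratch directory, so it can
# be tried without touching docker.img. File names are invented.
dir=$(mktemp -d)
head -c 1048576 /dev/zero > "$dir/big-json.log"    # 1 MB dummy log
head -c 1024    /dev/zero > "$dir/small-json.log"  # 1 KB dummy log
# du -ah lists every file with human-readable sizes, sort -rh orders them
# largest first, and grep -v "/$" drops the directory-total line that du
# prints for an argument given with a trailing slash.
out=$(du -ah "$dir/" | grep -v "/$" | sort -rh | head -2)
echo "$out"
rm -rf "$dir"
```

The largest file lands on the first line, which is exactly how the real command surfaces the runaway logs.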
CHBMB Posted January 8, 2016
Thanks for this, Aptalca. Just regained 2GB of space in my docker.img from 2 containers alone....
mdumont1 Posted January 9, 2016
This solution works great, thanks! I was able to clear up 11GB of space by reinstalling the sonarr docker. I wonder if these log files can be removed directly without reinstalling the container? If so, a cron job to remove the files would keep things fairly clean.
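A cron job along those lines looks feasible. As a sketch (the *-json.log naming and the size threshold are assumptions about how Docker lays out its log files), the core of it can be demonstrated on a scratch directory; on unRAID the path would be /var/lib/docker/containers/ and the threshold something like +100M:

```shell
# Sketch of a periodic cleanup: find container logs over a size threshold
# and truncate them in place. Truncating (rather than deleting) matters
# because the docker daemon keeps the log file open while the container
# runs. Demonstrated here on a scratch directory with a dummy log.
dir=$(mktemp -d)
head -c 2097152 /dev/zero > "$dir/abcdef-json.log"   # 2 MB dummy log
find "$dir" -name '*-json.log' -size +1M -exec truncate -s 0 {} \;
size=$(wc -c < "$dir/abcdef-json.log")
echo "$size"
rm -rf "$dir"
```

Dropped into a nightly cron entry with the real path, this would keep any single log from ballooning between runs.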
unevent Posted January 10, 2016
Docker 1.8 added log rotation: https://docs.docker.com/reference/logging/overview/
--log-opt max-size=50m
To clear a log you can execute:
echo "" > $(docker inspect --format='{{.LogPath}}' <container_name_or_id>)
edit: the above does not display correctly on Tapatalk; substitute the container name before executing. Use Aptalca's command to list the log sizes, then turn on advanced view in the Docker tab, match the docker container ID to the docker name, and run the command above using the Docker name.
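One nuance with that clearing trick: the redirection truncates the file in place (which is why it is safe on a live container), but echo "" then writes a single newline back. A quick demonstration on a scratch file, with no docker involved:

```shell
# Shows what the echo "" > file trick actually does, using a scratch file
# in place of a real container log.
f=$(mktemp)
printf 'pretend this is a huge container log\n' > "$f"
echo "" > "$f"               # the trick from the post: truncate, then write "\n"
after_echo=$(wc -c < "$f")   # 1 byte remains (the newline echo wrote)
: > "$f"                     # ":" is a no-op, so the redirection just truncates
after_colon=$(wc -c < "$f")  # 0 bytes remain
echo "$after_echo $after_colon"
rm -f "$f"
```

Either form reclaims the space; the ": >" variant simply leaves a truly empty file.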
sparklyballs Posted January 10, 2016
unraid at present uses docker 1.7.1
BRiT Posted January 10, 2016
unraid at present uses docker 1.7.1
By unraid, do you mean the publicly released version 6.1.6, or does the private elite rockstar Linus edition also use docker 1.7.1?
sparklyballs Posted January 10, 2016
By unraid, do you mean the publicly released version 6.1.6, or does the private elite rockstar Linus edition also use docker 1.7.1?
I could set up a youtube channel and find out
smoldersonline Posted January 10, 2016
Thank you very much @Aptalca! In my case BTSync was causing my image to fill up quickly.
CHBMB Posted January 10, 2016
Thank you very much @Aptalca! In my case BTSync was causing my image to fill up quickly.
Whose version of BTSync? Perhaps if it's LT's version, that might give this some traction....
smoldersonline Posted January 10, 2016
Yep, LT's version. It's gone now, but the log was huge.
Capt.Insano Posted January 11, 2016
Great find by aptalca! That command showed that linuxserver/headphones had a 2GB log file!! Hopefully 6.2 will include some Docker updates with log rotation. Thanks!
JustinAiken Posted January 12, 2016
Hahaha wow, there goes 11GB of sickrage logs!! Thanks for the command/workaround!
NAS Posted January 13, 2016
IMHO we should treat situations like this as bugs and report them as such, because no single docker container should have the ability to take them all down (which is what happens because we uniquely run them via a loopback image).
tomhoover Posted January 22, 2016
PS. If anyone's interested in checking how big their logs are, type this in the unRAID terminal and it will list the largest logs:
du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60
Thanks for the command line to locate the runaway log file. I'm new to Docker, and incorrectly assumed the logs were within the container itself. I tried searching around inside my BitTorrent Sync container (by running 'docker exec -it Sync bash'), but couldn't find the problem. I'm syncing so many files with BTSync that the log would grow to 50GB during the initial "indexing" after starting BTSync. Thanks to you, I finally figured out what was happening and modified my Sync template to include '--log-driver=none' on the startup command line. I've lost my BTSync logs for now, but I no longer have to keep growing my docker.img to accommodate my increased use of BTSync. Thanks!
gshlomi Posted January 25, 2016
Yep, LT's version. It's gone now, but the log was huge.
Same here: 30GB(!!!) of logs... I've been increasing my docker image size each time, until I find some free time and an easy way to locate the growing files...
smashingtool Posted January 27, 2016
Yep, just read through a few threads and found myself here. Turns out my LimeTech BTSync container had a 7.3GB log file! I reinstalled it last night and my docker utilization has crept up all day.
Darts Posted January 28, 2016
Can't we just delete those logs with midnight commander?
unevent Posted January 28, 2016
Telnet/SSH to unRAID and issue the command from Aptalca's post:
du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60
It lists the containers' logs and their sizes. Note the Docker container ID of a large log you want to clear, then go to the unRAID GUI Docker tab, turn on advanced view, and find the container that matches that ID. To clear the log, issue the command below with the name of the container (case sensitive):
echo "" > $(docker inspect --format='{{.LogPath}}' sabnzbd)
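The reason this works on a running container (where deleting the file would not) is that the redirection truncates the file in place: it keeps the same inode, so the docker daemon's open handle stays valid. With rm, the daemon would keep writing to an unlinked file that still consumes space until the container restarts. A sketch with a scratch file standing in for a container log:

```shell
# Demonstrates that truncating via redirection preserves the inode,
# which is why any process holding the file open keeps working.
log=$(mktemp)
printf 'old log data\n' > "$log"
before=$(stat -c %i "$log")    # inode before clearing
echo "" > "$log"               # truncate in place, as in the command above
after=$(stat -c %i "$log")     # same inode: open handles are still good
echo "$before $after"
rm -f "$log"
```

This is also why a plain delete from Midnight Commander, as asked above, is the riskier option while the container is up.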
Darts Posted January 28, 2016
OK, and if Aptalca's command shows containers that are not in the unRAID GUI, can I just delete those with MC, since they're from old containers? Thanks for the walkthrough, by the way.
unevent Posted January 29, 2016
OK, and if Aptalca's command shows containers that are not in the unRAID GUI, can I just delete those with MC, since they're from old containers? Thanks for the walkthrough, by the way.
Not exactly sure how old containers/images and their logs are handled. You can try deleting them as you say, or wait for someone more familiar with it to chime in.
NAS Posted January 29, 2016
I think the issue is generally containers that haven't mapped their noisy daemon logs out to the host and don't rotate or destroy them. In this scenario the daemon logs accumulate within the container and consume loopback image space until they are either manually deleted (bad) or the container is reset (better, but not great). At the very minimum we should provide a GUI way for the user to know where this space is going, because currently we just alert with, essentially, "you're almost out of space, go work out the docker command line to find where it's gone". I am also of the firm opinion that docker containers with this issue should be fixed, as it is a bug. Python apps that use the native HTTP server are particularly bad for this.
aptalca Posted January 29, 2016 Author
These logs are not within the containers. The syslogs of the containers are mapped to the host, where a user can access them without having to exec into the containers. Under unRAID's implementation, these logs are saved within the docker.img but separately from the containers. That is part of the reason for the confusion: when the logs get large, the containers don't grow, but the docker.img fills up. Users get confused about why the image is ballooning when the containers themselves aren't.
ken-ji Posted January 29, 2016
If I remember correctly, the real culprit is the docker log, which is not syslog but rather the stdout and stderr of process ID 1, i.e. the app docker is running. So if wrappers are used and the wrappers allow stdout and stderr to just bubble up, that will cause the docker.img to balloon. A well-written docker container will either have provisions for syslog output, or a logfile/logdir which can be mapped to a host volume.
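For containers that do write to a logfile/logdir, mapping that directory out of the image is a one-flag change at container creation. A sketch only; the image name, the host path, and the internal /config/logs location are all made up for illustration:

```shell
# Hypothetical run command: the app's log directory inside the container
# is bind-mounted to cache storage on the host, so its file logs never
# land in docker.img. Image name and both paths are illustrative only.
docker run -d --name someapp \
  -v /mnt/cache/appdata/someapp/logs:/config/logs \
  some/image
```

This only helps with the app's own file logs; anything the process prints to stdout/stderr still goes through Docker's log driver as described above.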
NAS Posted January 29, 2016
OK, this is interesting and exactly not what I thought the (and my) issue was. Can we show some proof of this so that others can confirm?
Edit: I just remoted in to have a quick look and it seems to confirm the above. I think we would still benefit from showing the output of a real-world example others can check. If no one else gets to it I will do it when I return, as I am sure I am being hit by this the same as everyone else.
Nem Posted February 1, 2016
Just found 30GB worth of logs in my couchpotato container! Thanks for the fix, but we really should have a way to at least inspect the docker image from within the webgui to see what is taking up the space, if not clear old logs without having to SSH into the machine.