Log rotation for docker or limit log size?



Telnet or SSH into unRAID and issue the command from Aptalca's post:

 

du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60

 

This lists the container logs and their sizes. In the list, note the Docker container ID of the (large) log you want to delete:

 

 

 

Go to the unRAID GUI Docker tab.  Turn on advanced view.  Find the container that matches the ID from the 'du' command.
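If you'd rather stay in the terminal, a command like this should map container IDs to names (standard docker CLI; the format string is just one way to display it):

docker ps -a --no-trunc --format '{{.ID}}\t{{.Names}}'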

 

 

 

 

To delete the log, issue the command below with the name of the container (case sensitive):

 

echo "" > $(docker inspect --format='{{.LogPath}}' sabnzbd)

 

 

 

Mods, please pin this post.... It took me 30 minutes to find the answer to my increasing docker utilization warnings. It was CouchPotato from linuxserver for me. It showed 2 files at 18GB.

 


 

Thank you for this post!

 


I wonder if we could make a script that:

1. Issues the du command and keeps track of any logs over 512MB
2. Matches the du ID to the container name
3. Issues the final command to delete the log
4. Notifies unRAID of the logs that have been removed due to large file size

 

Someone PLEASE make this script... I for one would put it in a daily cron task. I use Pushover, and when my limits hit 75% it starts going crazy, dinging every 2 minutes.
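Not a polished solution, but a rough sketch of what such a script could look like. The 512MB threshold and the unRAID notify script path and flags are assumptions; check them on your version before using it:

#!/bin/bash
# Sketch only: truncate any container json.log over a threshold and notify unRAID.
# The threshold and the notify script path/options are assumptions - verify first.
THRESHOLD=$((512 * 1024 * 1024))   # 512MB in bytes

for LOG in /var/lib/docker/containers/*/*-json.log; do
    [ -e "$LOG" ] || continue
    SIZE=$(stat -c %s "$LOG")
    if [ "$SIZE" -gt "$THRESHOLD" ]; then
        # Container ID is the directory name; resolve it to a name if the container still exists
        ID=$(basename "$(dirname "$LOG")")
        NAME=$(docker inspect --format='{{.Name}}' "$ID" 2>/dev/null | sed 's|^/||')
        # Truncate the log in place (same effect as the echo "" > ... command above)
        : > "$LOG"
        # Send an unRAID notification (script path/flags may differ by version - assumption)
        /usr/local/emhttp/webGui/scripts/notify -s "Docker log truncated" \
            -d "Cleared $((SIZE / 1024 / 1024))MB log for ${NAME:-$ID}"
    fi
done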


Thought I'd share the contents of the docker log for my always-active ELK docker (elasticsearch, logstash, kibana):

# ls -l containers/*/*json.log
-rw------- 1 root root 42281 Jan 28 09:37 containers/26281e1b4e15e44587fd29c57a64673c34d21b620ea6d3845d08e4eb9debd712/26281e1b4e15e44587fd29c57a64673c34d21b620ea6d3845d08e4eb9debd712-json.log
# cat containers/*/*json.log
{"log":"2015-09-05 14:32:06,089 CRIT Supervisor running as root (no user in config file)\n","stream":"stdout","time":"2015-09-05T06:32:06.090005525Z"}
{"log":"2015-09-05 14:32:06,089 WARN Included extra file \"/etc/supervisor/conf.d/elasticsearch.conf\" during parsing\n","stream":"stdout","time":"2015-09-05T06:32:06.090064071Z"}
{"log":"2015-09-05 14:32:06,089 WARN Included extra file \"/etc/supervisor/conf.d/logstash.conf\" during parsing\n","stream":"stdout","time":"2015-09-05T06:32:06.090081198Z"}
{"log":"2015-09-05 14:32:06,089 WARN Included extra file \"/etc/supervisor/conf.d/kibana.conf\" during parsing\n","stream":"stdout","time":"2015-09-05T06:32:06.090090118Z"}
{"log":"2015-09-05 14:32:06,134 INFO RPC interface 'supervisor' initialized\n","stream":"stdout","time":"2015-09-05T06:32:06.134474366Z"}
{"log":"2015-09-05 14:32:06,134 CRIT Server 'unix_http_server' running without any HTTP authentication checking\n","stream":"stdout","time":"2015-09-05T06:32:06.134594865Z"}
{"log":"2015-09-05 14:32:06,134 INFO supervisord started with pid 1\n","stream":"stdout","time":"2015-09-05T06:32:06.134748882Z"}
{"log":"2015-09-05 14:32:07,137 INFO spawned: 'elasticsearch' with pid 14\n","stream":"stdout","time":"2015-09-05T06:32:07.138196439Z"}
{"log":"2015-09-05 14:32:07,144 INFO spawned: 'logstash' with pid 16\n","stream":"stdout","time":"2015-09-05T06:32:07.145160788Z"}
{"log":"2015-09-05 14:32:07,145 INFO spawned: 'kibana' with pid 17\n","stream":"stdout","time":"2015-09-05T06:32:07.146199429Z"}
{"log":"2015-09-05 14:32:08,405 INFO success: elasticsearch entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2015-09-05T06:32:08.406137648Z"}
{"log":"2015-09-05 14:32:08,406 INFO success: logstash entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2015-09-05T06:32:08.406227982Z"}
{"log":"2015-09-05 14:32:08,406 INFO success: kibana entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2015-09-05T06:32:08.406344723Z"}
{"log":"2015-09-05 15:00:23,948 INFO stopped: logstash (exit status 0)\n","stream":"stdout","time":"2015-09-05T07:00:23.948343974Z"}
{"log":"2015-09-05 15:00:24,577 INFO spawned: 'logstash' with pid 200\n","stream":"stdout","time":"2015-09-05T07:00:24.57817949Z"}
{"log":"2015-09-05 15:00:25,579 INFO success: logstash entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2015-09-05T07:00:25.579548873Z"}
...snip...
{"log":"2016-01-16 16:34:22,433 WARN received SIGTERM indicating exit request\n","stream":"stdout","time":"2016-01-16T08:34:22.448680678Z"}
{"log":"2016-01-16 16:34:22,497 INFO waiting for elasticsearch, logstash, kibana to die\n","stream":"stdout","time":"2016-01-16T08:34:22.498612724Z"}
{"log":"2016-01-16 16:34:22,645 INFO stopped: kibana (exit status 143)\n","stream":"stdout","time":"2016-01-16T08:34:22.645916423Z"}
{"log":"2016-01-16 16:34:23,574 INFO stopped: logstash (exit status 0)\n","stream":"stdout","time":"2016-01-16T08:34:23.57517315Z"}
{"log":"2016-01-16 16:34:25,076 INFO stopped: elasticsearch (exit status 143)\n","stream":"stdout","time":"2016-01-16T08:34:25.07722009Z"}
{"log":"2016-01-16 16:37:09,766 CRIT Supervisor running as root (no user in config file)\n","stream":"stdout","time":"2016-01-16T08:37:09.813773193Z"}
{"log":"2016-01-16 16:37:09,813 WARN Included extra file \"/etc/supervisor/conf.d/elasticsearch.conf\" during parsing\n","stream":"stdout","time":"2016-01-16T08:37:09.813887535Z"}
{"log":"2016-01-16 16:37:09,813 WARN Included extra file \"/etc/supervisor/conf.d/logstash.conf\" during parsing\n","stream":"stdout","time":"2016-01-16T08:37:09.814012204Z"}
{"log":"2016-01-16 16:37:09,813 WARN Included extra file \"/etc/supervisor/conf.d/kibana.conf\" during parsing\n","stream":"stdout","time":"2016-01-16T08:37:09.81402458Z"}
{"log":"2016-01-16 16:37:10,010 INFO RPC interface 'supervisor' initialized\n","stream":"stdout","time":"2016-01-16T08:37:10.010184361Z"}
{"log":"2016-01-16 16:37:10,010 CRIT Server 'unix_http_server' running without any HTTP authentication checking\n","stream":"stdout","time":"2016-01-16T08:37:10.010297305Z"}
{"log":"2016-01-16 16:37:10,010 INFO supervisord started with pid 1\n","stream":"stdout","time":"2016-01-16T08:37:10.01045926Z"}
{"log":"2016-01-16 16:37:11,012 INFO spawned: 'elasticsearch' with pid 8\n","stream":"stdout","time":"2016-01-16T08:37:11.01297078Z"}
{"log":"2016-01-16 16:37:11,013 INFO spawned: 'logstash' with pid 9\n","stream":"stdout","time":"2016-01-16T08:37:11.014255634Z"}
{"log":"2016-01-16 16:37:11,072 INFO spawned: 'kibana' with pid 19\n","stream":"stdout","time":"2016-01-16T08:37:11.072458946Z"}
{"log":"2016-01-16 16:37:12,150 INFO success: elasticsearch entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-16T08:37:12.150462608Z"}
{"log":"2016-01-16 16:37:12,150 INFO success: logstash entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-16T08:37:12.150501523Z"}
{"log":"2016-01-16 16:37:12,150 INFO success: kibana entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-16T08:37:12.150608654Z"}
{"log":"2016-01-17 19:07:33,709 WARN received SIGTERM indicating exit request\n","stream":"stdout","time":"2016-01-17T11:07:33.722539727Z"}
{"log":"2016-01-17 19:07:33,722 INFO waiting for elasticsearch, logstash, kibana to die\n","stream":"stdout","time":"2016-01-17T11:07:33.722797262Z"}
{"log":"2016-01-17 19:07:33,756 INFO stopped: kibana (exit status 143)\n","stream":"stdout","time":"2016-01-17T11:07:33.757462542Z"}
{"log":"2016-01-17 19:07:34,349 INFO stopped: logstash (exit status 0)\n","stream":"stdout","time":"2016-01-17T11:07:34.349372707Z"}
{"log":"2016-01-17 19:07:35,906 INFO stopped: elasticsearch (exit status 143)\n","stream":"stdout","time":"2016-01-17T11:07:35.906304864Z"}
{"log":"2016-01-17 19:23:03,896 CRIT Supervisor running as root (no user in config file)\n","stream":"stdout","time":"2016-01-17T11:23:03.9083908Z"}
{"log":"2016-01-17 19:23:03,908 WARN Included extra file \"/etc/supervisor/conf.d/elasticsearch.conf\" during parsing\n","stream":"stdout","time":"2016-01-17T11:23:03.90886324Z"}
{"log":"2016-01-17 19:23:03,908 WARN Included extra file \"/etc/supervisor/conf.d/logstash.conf\" during parsing\n","stream":"stdout","time":"2016-01-17T11:23:03.908874091Z"}
{"log":"2016-01-17 19:23:03,908 WARN Included extra file \"/etc/supervisor/conf.d/kibana.conf\" during parsing\n","stream":"stdout","time":"2016-01-17T11:23:03.90888071Z"}
{"log":"2016-01-17 19:23:03,972 INFO RPC interface 'supervisor' initialized\n","stream":"stdout","time":"2016-01-17T11:23:03.98494693Z"}
{"log":"2016-01-17 19:23:03,973 CRIT Server 'unix_http_server' running without any HTTP authentication checking\n","stream":"stdout","time":"2016-01-17T11:23:03.98505717Z"}
{"log":"2016-01-17 19:23:03,973 INFO supervisord started with pid 1\n","stream":"stdout","time":"2016-01-17T11:23:03.98506688Z"}
{"log":"2016-01-17 19:23:04,975 INFO spawned: 'elasticsearch' with pid 9\n","stream":"stdout","time":"2016-01-17T11:23:04.978630247Z"}
{"log":"2016-01-17 19:23:04,977 INFO spawned: 'logstash' with pid 10\n","stream":"stdout","time":"2016-01-17T11:23:04.978658854Z"}
{"log":"2016-01-17 19:23:04,978 INFO spawned: 'kibana' with pid 11\n","stream":"stdout","time":"2016-01-17T11:23:04.978765248Z"}
{"log":"2016-01-17 19:23:06,068 INFO success: elasticsearch entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-17T11:23:06.068778919Z"}
{"log":"2016-01-17 19:23:06,068 INFO success: logstash entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-17T11:23:06.068844456Z"}
{"log":"2016-01-17 19:23:06,068 INFO success: kibana entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-17T11:23:06.06899372Z"}
{"log":"2016-01-17 19:34:01,814 WARN received SIGTERM indicating exit request\n","stream":"stdout","time":"2016-01-17T11:34:01.817285436Z"}
{"log":"2016-01-17 19:34:01,815 INFO waiting for elasticsearch, logstash, kibana to die\n","stream":"stdout","time":"2016-01-17T11:34:01.81731838Z"}
{"log":"2016-01-17 19:34:01,820 INFO stopped: kibana (exit status 143)\n","stream":"stdout","time":"2016-01-17T11:34:01.821753957Z"}
{"log":"2016-01-17 19:34:04,848 INFO waiting for elasticsearch, logstash to die\n","stream":"stdout","time":"2016-01-17T11:34:04.848414675Z"}
{"log":"2016-01-17 19:34:08,287 INFO waiting for elasticsearch, logstash to die\n","stream":"stdout","time":"2016-01-17T11:34:08.287929894Z"}
{"log":"2016-01-17 19:34:11,291 INFO waiting for elasticsearch, logstash to die\n","stream":"stdout","time":"2016-01-17T11:34:11.291731559Z"}
{"log":"2016-01-17 19:34:28,406 CRIT Supervisor running as root (no user in config file)\n","stream":"stdout","time":"2016-01-17T11:34:28.406957095Z"}
{"log":"2016-01-17 19:34:28,406 WARN Included extra file \"/etc/supervisor/conf.d/elasticsearch.conf\" during parsing\n","stream":"stdout","time":"2016-01-17T11:34:28.40700857Z"}
{"log":"2016-01-17 19:34:28,406 WARN Included extra file \"/etc/supervisor/conf.d/logstash.conf\" during parsing\n","stream":"stdout","time":"2016-01-17T11:34:28.407106051Z"}
{"log":"2016-01-17 19:34:28,406 WARN Included extra file \"/etc/supervisor/conf.d/kibana.conf\" during parsing\n","stream":"stdout","time":"2016-01-17T11:34:28.407118501Z"}
{"log":"Unlinking stale socket /var/run/supervisor.sock\n","stream":"stderr","time":"2016-01-17T11:34:28.410087227Z"}
{"log":"2016-01-17 19:34:28,718 INFO RPC interface 'supervisor' initialized\n","stream":"stdout","time":"2016-01-17T11:34:28.71917638Z"}
{"log":"2016-01-17 19:34:28,719 CRIT Server 'unix_http_server' running without any HTTP authentication checking\n","stream":"stdout","time":"2016-01-17T11:34:28.719324938Z"}
{"log":"2016-01-17 19:34:28,719 INFO supervisord started with pid 1\n","stream":"stdout","time":"2016-01-17T11:34:28.719934049Z"}
{"log":"2016-01-17 19:34:29,721 INFO spawned: 'elasticsearch' with pid 8\n","stream":"stdout","time":"2016-01-17T11:34:29.722346458Z"}
{"log":"2016-01-17 19:34:29,723 INFO spawned: 'logstash' with pid 9\n","stream":"stdout","time":"2016-01-17T11:34:29.724682989Z"}
{"log":"2016-01-17 19:34:29,724 INFO spawned: 'kibana' with pid 10\n","stream":"stdout","time":"2016-01-17T11:34:29.725211896Z"}
{"log":"2016-01-17 19:34:30,742 INFO success: elasticsearch entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-17T11:34:30.742849818Z"}
{"log":"2016-01-17 19:34:30,742 INFO success: logstash entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-17T11:34:30.742928129Z"}
{"log":"2016-01-17 19:34:30,742 INFO success: kibana entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-17T11:34:30.743085732Z"}
{"log":"2016-01-28 09:29:10,241 WARN received SIGTERM indicating exit request\n","stream":"stdout","time":"2016-01-28T01:29:10.247255332Z"}
{"log":"2016-01-28 09:29:10,305 INFO waiting for elasticsearch, logstash, kibana to die\n","stream":"stdout","time":"2016-01-28T01:29:10.306160393Z"}
{"log":"2016-01-28 09:29:10,511 INFO stopped: kibana (exit status 143)\n","stream":"stdout","time":"2016-01-28T01:29:10.512025099Z"}
{"log":"2016-01-28 09:29:11,331 INFO stopped: logstash (exit status 0)\n","stream":"stdout","time":"2016-01-28T01:29:11.331625011Z"}
{"log":"2016-01-28 09:29:13,082 INFO stopped: elasticsearch (exit status 143)\n","stream":"stdout","time":"2016-01-28T01:29:13.082373861Z"}
{"log":"2016-01-28 09:37:13,270 CRIT Supervisor running as root (no user in config file)\n","stream":"stdout","time":"2016-01-28T01:37:13.270942212Z"}
{"log":"2016-01-28 09:37:13,270 WARN Included extra file \"/etc/supervisor/conf.d/elasticsearch.conf\" during parsing\n","stream":"stdout","time":"2016-01-28T01:37:13.271027008Z"}
{"log":"2016-01-28 09:37:13,270 WARN Included extra file \"/etc/supervisor/conf.d/logstash.conf\" during parsing\n","stream":"stdout","time":"2016-01-28T01:37:13.271155894Z"}
{"log":"2016-01-28 09:37:13,270 WARN Included extra file \"/etc/supervisor/conf.d/kibana.conf\" during parsing\n","stream":"stdout","time":"2016-01-28T01:37:13.271166407Z"}
{"log":"2016-01-28 09:37:13,620 INFO RPC interface 'supervisor' initialized\n","stream":"stdout","time":"2016-01-28T01:37:13.62075362Z"}
{"log":"2016-01-28 09:37:13,620 CRIT Server 'unix_http_server' running without any HTTP authentication checking\n","stream":"stdout","time":"2016-01-28T01:37:13.620885702Z"}
{"log":"2016-01-28 09:37:13,620 INFO supervisord started with pid 1\n","stream":"stdout","time":"2016-01-28T01:37:13.621044481Z"}
{"log":"2016-01-28 09:37:14,623 INFO spawned: 'elasticsearch' with pid 9\n","stream":"stdout","time":"2016-01-28T01:37:14.623676248Z"}
{"log":"2016-01-28 09:37:14,624 INFO spawned: 'logstash' with pid 10\n","stream":"stdout","time":"2016-01-28T01:37:14.625204605Z"}
{"log":"2016-01-28 09:37:14,626 INFO spawned: 'kibana' with pid 11\n","stream":"stdout","time":"2016-01-28T01:37:14.627862817Z"}
{"log":"2016-01-28 09:37:15,960 INFO success: elasticsearch entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-28T01:37:15.961012378Z"}
{"log":"2016-01-28 09:37:15,960 INFO success: logstash entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-28T01:37:15.961061424Z"}
{"log":"2016-01-28 09:37:15,961 INFO success: kibana entered RUNNING state, process has stayed up for \u003e than 1 seconds (startsecs)\n","stream":"stdout","time":"2016-01-28T01:37:15.961178099Z"}

They aren't syslogs, so the problem lies in how the docker was built. The JSON has a stream field, which is usually stdout or stderr; that means the container's developer left the app process to spew whatever it wants to stdout, which Docker happily saves for the user.

 

This particular logfile was generated from the initial deploy of my ELK docker, which I had not updated until now. Restarting the docker is not enough; the whole container needs to be updated or deleted. I have no idea what happens if the log is deleted while the docker is running (Docker might keep the file open, and from experience with a similar situation you could end up with a gigantic file full of nulls).

 

tl;dr: dockers spewing large amounts of output to stdout and stderr are buggy and their developers should fix them (or LT could implement Docker's other log drivers, like syslog).
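For what it's worth, stock Docker can already redirect container output instead of writing the json file; a hedged example using the syslog driver in place of the default json-file driver (the image name is a placeholder):

docker run -d --log-driver=syslog <image_name>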

  • 4 weeks later...

OK, and if there are containers in Aptalca's command output that are not in the unRAID GUI, can I just delete them with MC since they're from old containers?

 

Thanks for the walkthrough by the way :)

 

I have a container that is not showing in my unRAID docker GUI. How would I go about deleting the log file for it (24GB)?

 

 


I would wager that they're not actually logs; they're cached versions of the dockers' volume mounts, hence the large sizes with dockers like couch and btsync.

 

These dockers move large files in and out of their mounted folders, so the possibility of huge cached copies is higher with these types of apps.

  • 2 months later...

Docker 1.8 added log rotation: https://docs.docker.com/reference/logging/overview/

--log-opt max-size=50m
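For anyone running the Docker daemon directly, the same limit can be applied globally; a sketch of a daemon.json (whether your Docker/unRAID version reads this file is something to verify, and on unRAID the per-container extra parameters discussed below are the usual approach):

{
  "log-driver": "json-file",
  "log-opts": { "max-size": "50m", "max-file": "1" }
}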

 

 

To clear a log you can execute:

 

echo "" > $(docker inspect --format='{{.LogPath}}' <container_name_or_id>)

 

Edit: the above does not show correctly on Tapatalk; add the container name to the end before executing. Use Aptalca's command to list the log sizes, then turn on advanced view in the Docker tab, match the container ID to the container name, and run the command above using that name.

Since 6.2+ supports the log options, you'd want to add this to the extra parameters:

 

--log-opt max-size=50m --log-opt max-file=1

 

Specifying max-size without max-file won't actually accomplish anything, because all it's going to do is archive the old log and create another. Specifying max-file will delete any old logs.
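For reference, a hedged example of the same flags on a manual docker run (the container and image names here are placeholders):

docker run -d --name sonarr --log-opt max-size=50m --log-opt max-file=1 linuxserver/sonarr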

 

I've added this to the Docker FAQ, as a number of containers are guilty of extreme logging.

  • 2 months later...

OK, and if there are containers in Aptalca's command output that are not in the unRAID GUI, can I just delete them with MC since they're from old containers?

 

Thanks for the walkthrough by the way :)

 

I have a container that is not showing in my unRAID docker GUI. How would I go about deleting the log file for it (24GB)?

 

I would also like an answer to this post, please.


I will be updating to 6.2 soon and using the docker log file options to fix my insane sonarr logs.

 

Is there anything end users like me can do to fix the actual problem, rather than just dealing with it through those options?

Set Sonarr to info-level logging only, not trace or debug. Beyond that, I doubt it; it's purely a Sonarr issue with what they log. The logging options I listed should hopefully fix the problem.

Set Sonarr to info-level logging only, not trace or debug. Beyond that, I doubt it; it's purely a Sonarr issue with what they log. The logging options I listed should hopefully fix the problem.

 

Thanks! I changed it from trace to info. We shall see.

[image: http://i46.photobucket.com/albums/f109/squidaz/Untitled_zpswhsseute.png]

 

Haha, understood! I don't think I changed it to trace upon install, but maybe...

  • 1 month later...

Dear unraid team,

 

We desperately need log rotation for docker. One docker container went rogue and started filling up its logs. The app itself was designed to prevent issues (it rotates its own logs and discards old ones), so there was no damage in appdata.

 

However, the unRAID docker implementation has no rotation or limit built in, so the Docker log for that container ballooned to 10GB and filled the docker image.

 

Please, please, please implement either rotation or some sort of limit for these logs. Space in the docker.img is precious, and when it's full it breaks all other containers and may even cause permanent damage to some running containers.

 

MAJOR EDIT: Squid let me know that there is a way to limit log size, by adding in extra parameters to the container settings (thanks Squid). You can find the info here: http://lime-technology.com/forum/index.php?topic=40937.msg475225#msg475225

  • 1 month later...

Thanks to the previous posters. I just found this thread today; it answered a lot of questions about the ongoing issue of the docker image file filling up. I have been constantly increasing it to fit, and it's now 50GB. I have another question: it looks like the crap left behind is from some old dockers (x9) that have been long gone, and I'm not sure why it was left behind. Will the above command still remove them by container ID, since I don't have a name anymore? I will probably try it anyway since it won't kill what's running now, or at least I hope not. FYI: no, they don't show up as running containers or as orphaned containers.

 

mike

Unraid 6.2.4

  • 1 month later...
  • 3 weeks later...
  • 3 months later...

I think it's my resilio-sync, so I deleted the container. After deleting it, my docker size went down to 47%. The log stayed at 100% though. Looking at the logs now, I don't know what to do. I'm posting the logs; hope you can help identify the issue. They don't seem that big after all.

 

root@Tower:~# du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60
99M /var/lib/docker/containers/92d1ff5328c48875f90c3ad199e92c3f803a08edbb2ee24716b3f264701e405b/92d1ff5328c48875f90c3ad199e92c3f803a08edbb2ee24716b3f264701e405b-json.log
99M /var/lib/docker/containers/92d1ff5328c48875f90c3ad199e92c3f803a08edbb2ee24716b3f264701e405b
15M /var/lib/docker/containers/dff9fe5936a0e070693d239bf4df4a45025529350dddb54abf2a3d76edf1fa56/dff9fe5936a0e070693d239bf4df4a45025529350dddb54abf2a3d76edf1fa56-json.log
15M /var/lib/docker/containers/dff9fe5936a0e070693d239bf4df4a45025529350dddb54abf2a3d76edf1fa56
7.2M /var/lib/docker/containers/f5ca4e7ac826dabad11b81aea5a223170240c0df8ec0451efaa2dfaa220b7d6e/f5ca4e7ac826dabad11b81aea5a223170240c0df8ec0451efaa2dfaa220b7d6e-json.log
7.2M /var/lib/docker/containers/f5ca4e7ac826dabad11b81aea5a223170240c0df8ec0451efaa2dfaa220b7d6e
3.4M /var/lib/docker/containers/13dd6e9a05189978a82e5350706d0131557d42fc5ec2fa166dfca3c76d9f8d46/13dd6e9a05189978a82e5350706d0131557d42fc5ec2fa166dfca3c76d9f8d46-json.log
3.4M /var/lib/docker/containers/13dd6e9a05189978a82e5350706d0131557d42fc5ec2fa166dfca3c76d9f8d46
136K /var/lib/docker/containers/4a8031b5802b38062436dda7a1179c062a6e260937a4021a1c008eb83fd4b03c
112K /var/lib/docker/containers/4a8031b5802b38062436dda7a1179c062a6e260937a4021a1c008eb83fd4b03c/4a8031b5802b38062436dda7a1179c062a6e260937a4021a1c008eb83fd4b03c-json.log
100K /var/lib/docker/containers/73893104cfaee705465999746d74a0455c773f3161235e62061a56661a947019
84K /var/lib/docker/containers/1ff74239943f1026b5eebef11c6865201ddc21e894e40614567ead4ab9b1c25f
80K /var/lib/docker/containers/73893104cfaee705465999746d74a0455c773f3161235e62061a56661a947019/73893104cfaee705465999746d74a0455c773f3161235e62061a56661a947019-json.log
60K /var/lib/docker/containers/1ff74239943f1026b5eebef11c6865201ddc21e894e40614567ead4ab9b1c25f/1ff74239943f1026b5eebef11c6865201ddc21e894e40614567ead4ab9b1c25f-json.log
56K /var/lib/docker/containers/a70999d2919d797b031498c1a0be1d1e6162fc0c95144843520dad0e4dbf0163
40K /var/lib/docker/containers/a52adbc14657a465c790ddbf3a33d244bc6c0c5f9156c1588248b980a8ac0266
36K /var/lib/docker/containers/a70999d2919d797b031498c1a0be1d1e6162fc0c95144843520dad0e4dbf0163/a70999d2919d797b031498c1a0be1d1e6162fc0c95144843520dad0e4dbf0163-json.log
16K /var/lib/docker/containers/a52adbc14657a465c790ddbf3a33d244bc6c0c5f9156c1588248b980a8ac0266/a52adbc14657a465c790ddbf3a33d244bc6c0c5f9156c1588248b980a8ac0266-json.log
8.0K /var/lib/docker/containers/13dd6e9a05189978a82e5350706d0131557d42fc5ec2fa166dfca3c76d9f8d46/shm
4.0K /var/lib/docker/containers/f5ca4e7ac826dabad11b81aea5a223170240c0df8ec0451efaa2dfaa220b7d6e/resolv.conf
4.0K /var/lib/docker/containers/f5ca4e7ac826dabad11b81aea5a223170240c0df8ec0451efaa2dfaa220b7d6e/hosts
4.0K /var/lib/docker/containers/f5ca4e7ac826dabad11b81aea5a223170240c0df8ec0451efaa2dfaa220b7d6e/hostname
4.0K /var/lib/docker/containers/f5ca4e7ac826dabad11b81aea5a223170240c0df8ec0451efaa2dfaa220b7d6e/hostconfig.json
4.0K /var/lib/docker/containers/f5ca4e7ac826dabad11b81aea5a223170240c0df8ec0451efaa2dfaa220b7d6e/config.v2.json
4.0K /var/lib/docker/containers/dff9fe5936a0e070693d239bf4df4a45025529350dddb54abf2a3d76edf1fa56/resolv.conf.hash
4.0K /var/lib/docker/containers/dff9fe5936a0e070693d239bf4df4a45025529350dddb54abf2a3d76edf1fa56/resolv.conf
4.0K /var/lib/docker/containers/dff9fe5936a0e070693d239bf4df4a45025529350dddb54abf2a3d76edf1fa56/hosts
4.0K /var/lib/docker/containers/dff9fe5936a0e070693d239bf4df4a45025529350dddb54abf2a3d76edf1fa56/hostname
4.0K /var/lib/docker/containers/dff9fe5936a0e070693d239bf4df4a45025529350dddb54abf2a3d76edf1fa56/hostconfig.json
4.0K /var/lib/docker/containers/dff9fe5936a0e070693d239bf4df4a45025529350dddb54abf2a3d76edf1fa56/config.v2.json
4.0K /var/lib/docker/containers/a70999d2919d797b031498c1a0be1d1e6162fc0c95144843520dad0e4dbf0163/resolv.conf
4.0K /var/lib/docker/containers/a70999d2919d797b031498c1a0be1d1e6162fc0c95144843520dad0e4dbf0163/hosts
4.0K /var/lib/docker/containers/a70999d2919d797b031498c1a0be1d1e6162fc0c95144843520dad0e4dbf0163/hostname
4.0K /var/lib/docker/containers/a70999d2919d797b031498c1a0be1d1e6162fc0c95144843520dad0e4dbf0163/hostconfig.json
4.0K /var/lib/docker/containers/a70999d2919d797b031498c1a0be1d1e6162fc0c95144843520dad0e4dbf0163/config.v2.json
4.0K /var/lib/docker/containers/a52adbc14657a465c790ddbf3a33d244bc6c0c5f9156c1588248b980a8ac0266/resolv.conf.hash
4.0K /var/lib/docker/containers/a52adbc14657a465c790ddbf3a33d244bc6c0c5f9156c1588248b980a8ac0266/resolv.conf
4.0K /var/lib/docker/containers/a52adbc14657a465c790ddbf3a33d244bc6c0c5f9156c1588248b980a8ac0266/hosts
4.0K /var/lib/docker/containers/a52adbc14657a465c790ddbf3a33d244bc6c0c5f9156c1588248b980a8ac0266/hostname
4.0K /var/lib/docker/containers/a52adbc14657a465c790ddbf3a33d244bc6c0c5f9156c1588248b980a8ac0266/hostconfig.json
4.0K /var/lib/docker/containers/a52adbc14657a465c790ddbf3a33d244bc6c0c5f9156c1588248b980a8ac0266/config.v2.json
4.0K /var/lib/docker/containers/92d1ff5328c48875f90c3ad199e92c3f803a08edbb2ee24716b3f264701e405b/resolv.conf.hash
4.0K /var/lib/docker/containers/92d1ff5328c48875f90c3ad199e92c3f803a08edbb2ee24716b3f264701e405b/resolv.conf
4.0K /var/lib/docker/containers/92d1ff5328c48875f90c3ad199e92c3f803a08edbb2ee24716b3f264701e405b/hosts
4.0K /var/lib/docker/containers/92d1ff5328c48875f90c3ad199e92c3f803a08edbb2ee24716b3f264701e405b/hostname
4.0K /var/lib/docker/containers/92d1ff5328c48875f90c3ad199e92c3f803a08edbb2ee24716b3f264701e405b/hostconfig.json
4.0K /var/lib/docker/containers/92d1ff5328c48875f90c3ad199e92c3f803a08edbb2ee24716b3f264701e405b/config.v2.json
4.0K /var/lib/docker/containers/73893104cfaee705465999746d74a0455c773f3161235e62061a56661a947019/resolv.conf
4.0K /var/lib/docker/containers/73893104cfaee705465999746d74a0455c773f3161235e62061a56661a947019/hosts
4.0K /var/lib/docker/containers/73893104cfaee705465999746d74a0455c773f3161235e62061a56661a947019/hostname
4.0K /var/lib/docker/containers/73893104cfaee705465999746d74a0455c773f3161235e62061a56661a947019/hostconfig.json
4.0K /var/lib/docker/containers/73893104cfaee705465999746d74a0455c773f3161235e62061a56661a947019/config.v2.json
4.0K /var/lib/docker/containers/4a8031b5802b38062436dda7a1179c062a6e260937a4021a1c008eb83fd4b03c/resolv.conf.hash
4.0K /var/lib/docker/containers/4a8031b5802b38062436dda7a1179c062a6e260937a4021a1c008eb83fd4b03c/resolv.conf
4.0K /var/lib/docker/containers/4a8031b5802b38062436dda7a1179c062a6e260937a4021a1c008eb83fd4b03c/hosts
4.0K /var/lib/docker/containers/4a8031b5802b38062436dda7a1179c062a6e260937a4021a1c008eb83fd4b03c/hostname
4.0K /var/lib/docker/containers/4a8031b5802b38062436dda7a1179c062a6e260937a4021a1c008eb83fd4b03c/hostconfig.json
4.0K /var/lib/docker/containers/4a8031b5802b38062436dda7a1179c062a6e260937a4021a1c008eb83fd4b03c/config.v2.json
4.0K /var/lib/docker/containers/1ff74239943f1026b5eebef11c6865201ddc21e894e40614567ead4ab9b1c25f/resolv.conf.hash
4.0K /var/lib/docker/containers/1ff74239943f1026b5eebef11c6865201ddc21e894e40614567ead4ab9b1c25f/resolv.conf
root@Tower:~#


The log is the system log, not the docker logs. Docker logs are stored in the "docker" section of that screenshot along with the docker images and the containers.

Your issue is something else spamming the system log. Click on the log button at the top right of your unRAID GUI and it should show what's spamming it, likely an error that keeps getting repeated.
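If you'd rather check from the command line, the same system log should be readable at the usual location (path may vary by unRAID version):

tail -n 100 /var/log/syslog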

