robschaar

Members · 28 posts


Recent Profile Visitors

The recent visitors block is disabled and is not being shown to other users.

robschaar's Recent Posts

  1. Guess not... I forgot to delete my data directory.
  2. So completely dumb... I changed the name of the Docker to Pynab2 to give it one last shot, and it worked right off the bat this time. I don't know what was sticking before, but now it is working (I mean waiting).
  3. Same problem... it is still not writing to the main folder.
  4. I manually copied the files from /etc/postgresql/9.4/main to /data/main, and now it looks like it will work (see the copy sketch after this list).
  5. It seems like it happens during postgres-initialise.sh:

        *** Running /etc/my_init.d/003-postgres-initialise.sh...
        initialising empty databases in /data
        completed initialisation
        2015-06-03 02:57:02,644 CRIT Supervisor running as root (no user in config file)
        2015-06-03 02:57:02,647 INFO supervisord started with pid 55
        2015-06-03 02:57:03,650 INFO spawned: 'postgres' with pid 59
        2015-06-03 02:57:03,664 INFO exited: postgres (exit status 2; not expected)
        2015-06-03 02:57:04,667 INFO spawned: 'postgres' with pid 60
        2015-06-03 02:57:04,681 INFO exited: postgres (exit status 2; not expected)
        2015-06-03 02:57:06,684 INFO spawned: 'postgres' with pid 61
        2015-06-03 02:57:06,697 INFO exited: postgres (exit status 2; not expected)
        setting up pynab user and database
        2015-06-03 02:57:09,703 INFO spawned: 'postgres' with pid 87
        2015-06-03 02:57:09,715 INFO exited: postgres (exit status 2; not expected)
        2015-06-03 02:57:10,717 INFO gave up: postgres entered FATAL state, too many start retries too quickly
        pynab user and database created
        building initial nzb import
        THIS WILL TAKE SOME TIME, DO NOT STOP THE DOCKER
        IMPORT COMPLETED
        *** Running /etc/my_init.d/004-set-the-groups.sh...
        Testing whether database is ready
        database appears ready, proceeding
        Traceback (most recent call last):
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1033, in _do_get
            return self._pool.get(wait, self._timeout)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/queue.py", line 145, in get
            raise Empty
        sqlalchemy.util.queue.Empty

     For some reason it is not writing to the main directory inside the config folder; that, I believe, is where my problem is. I tried to make sure it has the right permissions, but when I reboot, everything gets changed back to what it was (see the permissions sketch after this list).
  6. I have it set up with the exact same settings as my sabnzbd container.
  7. Instead of pointing it to pynab/data, I made another directory, pynabdata, so it wasn't in the same location. It still has the same problem: for some reason it will not write to the main directory. I can create files and move files there myself, but PostgreSQL will not write there. It's not a space issue; I have over 10GB free for the Docker.
  8. The error log says postgres cannot access the server configuration file "/data/main/postgresql.conf": No such file or directory.
  9. I just increased my Docker image size from 10GB to 20GB to make sure. That didn't seem to help. My config goes to /mnt/user/appdata/pynab and the data directory goes to /mnt/user/appdata/pynab/data, which is where I keep all of my Docker configs (see the volume-mapping sketch after this list). I'll get the other logs.
  10. Here is my log (see the connection-check sketch after this list for the socket error at the end)...

        *** Running /etc/my_init.d/001-fix-the-time.sh...
        Current default time zone: 'America/Los_Angeles'
        Local time is now: Wed Jun 3 01:45:59 PDT 2015.
        Universal Time is now: Wed Jun 3 08:45:59 UTC 2015.
        *** Running /etc/my_init.d/002-set-the-config.sh...
        config.js exists in /config, may require editing
        config.py exists in /config, may require editing
        groups.json exists in /config, may require editing
        *** Running /etc/my_init.d/003-postgres-initialise.sh...
        initialising empty databases in /data
        completed initialisation
        2015-06-03 01:46:06,085 CRIT Supervisor running as root (no user in config file)
        2015-06-03 01:46:06,088 INFO supervisord started with pid 55
        2015-06-03 01:46:07,091 INFO spawned: 'postgres' with pid 59
        2015-06-03 01:46:07,103 INFO exited: postgres (exit status 2; not expected)
        2015-06-03 01:46:08,105 INFO spawned: 'postgres' with pid 60
        2015-06-03 01:46:08,117 INFO exited: postgres (exit status 2; not expected)
        2015-06-03 01:46:10,121 INFO spawned: 'postgres' with pid 61
        2015-06-03 01:46:10,133 INFO exited: postgres (exit status 2; not expected)
        setting up pynab user and database
        2015-06-03 01:46:13,138 INFO spawned: 'postgres' with pid 87
        2015-06-03 01:46:13,150 INFO exited: postgres (exit status 2; not expected)
        2015-06-03 01:46:14,151 INFO gave up: postgres entered FATAL state, too many start retries too quickly
        pynab user and database created
        building initial nzb import
        THIS WILL TAKE SOME TIME, DO NOT STOP THE DOCKER
        IMPORT COMPLETED
        *** Running /etc/my_init.d/004-set-the-groups.sh...
        Testing whether database is ready
        database appears ready, proceeding
        Traceback (most recent call last):
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1033, in _do_get
            return self._pool.get(wait, self._timeout)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/queue.py", line 145, in get
            raise Empty
        sqlalchemy.util.queue.Empty

        During handling of the above exception, another exception occurred:

        Traceback (most recent call last):
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
            return fn()
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 376, in connect
            return _ConnectionFairy._checkout(self)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 708, in _checkout
            fairy = _ConnectionRecord.checkout(pool)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 480, in checkout
            rec = pool._do_get()
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1049, in _do_get
            self._dec_overflow()
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
            compat.reraise(exc_type, exc_value, exc_tb)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 182, in reraise
            raise value
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1046, in _do_get
            return self._create_connection()
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
            return _ConnectionRecord(self)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 449, in __init__
            self.connection = self.__connect()
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 602, in __connect
            connection = self.__pool._invoke_creator(self)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
            return dialect.connect(*cargs, **cparams)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 377, in connect
            return self.dbapi.connect(*cargs, **cparams)
          File "/usr/local/lib/python3.4/dist-packages/psycopg2/__init__.py", line 164, in connect
            conn = _connect(dsn, connection_factory=connection_factory, async=async)
        psycopg2.OperationalError: could not connect to server: No such file or directory
            Is the server running locally and accepting connections
            on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

        The above exception was the direct cause of the following exception:

        Traceback (most recent call last):
          File "/opt/pynab/pynab.py", line 258, in <module>
            group_list()
          File "/opt/pynab/pynab.py", line 177, in group_list
            groups = pynab.groupctl.group_list()
          File "/opt/pynab/pynab/groupctl.py", line 72, in group_list
            for group in groups:
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2515, in __iter__
            return self._execute_and_instances(context)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2528, in _execute_and_instances
            close_with_result=True)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2519, in _connection_from_session
            **kw)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 882, in connection
            execution_options=execution_options)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
            engine, execution_options)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 334, in _connection_for_bind
            conn = bind.contextual_connect()
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2034, in contextual_connect
            self._wrap_pool_connect(self.pool.connect, None),
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2073, in _wrap_pool_connect
            e, dialect, self)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1403, in _handle_dbapi_exception_noconnection
            exc_info
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 188, in raise_from_cause
            reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 181, in reraise
            raise value.with_traceback(tb)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
            return fn()
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 376, in connect
            return _ConnectionFairy._checkout(self)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 708, in _checkout
            fairy = _ConnectionRecord.checkout(pool)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 480, in checkout
            rec = pool._do_get()
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1049, in _do_get
            self._dec_overflow()
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
            compat.reraise(exc_type, exc_value, exc_tb)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 182, in reraise
            raise value
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1046, in _do_get
            return self._create_connection()
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
            return _ConnectionRecord(self)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 449, in __init__
            self.connection = self.__connect()
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 602, in __connect
            connection = self.__pool._invoke_creator(self)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
            return dialect.connect(*cargs, **cparams)
          File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 377, in connect
            return self.dbapi.connect(*cargs, **cparams)
          File "/usr/local/lib/python3.4/dist-packages/psycopg2/__init__.py", line 164, in connect
            conn = _connect(dsn, connection_factory=connection_factory, async=async)
        sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: No such file or directory
            Is the server running locally and accepting connections
            on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
  11. I just tried it once more, making sure I deleted the config directory and the data directory, and I am still not able to get PostgreSQL to start up, and there is no database present. I don't really know what I could be doing wrong; it basically does everything for you. The script is in /etc/init.d, so it should start on boot, and I also made sure it was marked to start automatically, and that was set.
  12. I actually just deleted the container and all the folders and tried it again. It still isn't starting PostgreSQL.
  13. I noticed a few things with this. I thought I was getting the same error as you, but I guess I was not. PostgreSQL was not running on startup, so it was not able to see the database. I got that to start, and then it did not have a pynab database. I added that, and of course it does not have the pynab user either. I'm looking at adding the user and seeing if pynab works a little better (see the psql sketch after this list).
  14. Thank you for posting about the error that shows up while it is still building the database. I installed the Docker before you put that post up and have been getting the error ever since. I'm guessing I will just have to keep waiting for it to finish building the database. So far it has been going for about an hour.
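
Copy sketch (re post 4): a minimal outline of the manual copy described there, assuming the container is named pynab (the container name is an assumption; the paths come from the posts themselves). PostgreSQL also refuses to start unless its data directory is owned by the postgres user with restrictive permissions, so that fix is included.

    # Shell into the running container (container name "pynab" is assumed).
    docker exec -it pynab /bin/bash

    # Copy the stock 9.4 config files into the mapped data volume, as in post 4.
    cp /etc/postgresql/9.4/main/*.conf /data/main/

    # PostgreSQL will not start if the data directory is not owned by the
    # postgres user with mode 0700.
    chown -R postgres:postgres /data/main
    chmod 700 /data/main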
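
Permissions sketch (re posts 5 and 7): supervisord's "exit status 2" loop only says postgres died immediately; ownership problems on the mapped /data volume are a common cause. A rough check, assuming the container is named pynab and the host path from post 9 (/mnt/user/appdata/pynab/data). If the ownership keeps reverting after a reboot, something on the unRAID host is resetting permissions on that share, which the thread never pins down.

    # From the unRAID host: who owns the data directory?
    ls -ld /mnt/user/appdata/pynab/data /mnt/user/appdata/pynab/data/main

    # Inside the container, postgres (not root or nobody) must own it.
    docker exec pynab ls -ld /data/main
    docker exec pynab chown -R postgres:postgres /data/main
    docker exec pynab chmod 700 /data/main

    # Then restart the container and watch the startup output.
    docker restart pynab
    docker logs -f pynab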
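
Volume-mapping sketch (re post 9): the host paths listed there correspond to the container's /config and /data volumes. A bare-bones docker run equivalent under that assumption; the image name and published port are placeholders, since the actual unRAID template is not shown in these posts.

    docker run -d \
      --name pynab \
      -v /mnt/user/appdata/pynab:/config \
      -v /mnt/user/appdata/pynab/data:/data \
      -p 8080:8080 \
      pynab-image   # placeholder image name, not the real repository

Post 7 also tries moving /data out from under the /config path (e.g. to /mnt/user/appdata/pynabdata); either layout should be fine as long as the mapped directory ends up writable by the postgres user inside the container.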
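
Connection-check sketch (re posts 10 and 11): both tracebacks bottom out in psycopg2 failing to reach the Unix socket /var/run/postgresql/.s.PGSQL.5432, which means the pynab scripts ran before PostgreSQL ever came up. Some quick checks, assuming a Debian-style 9.4/main cluster inside a container named pynab:

    # Is any postgres process running at all?
    docker exec pynab ps aux | grep [p]ostgres

    # Does the server answer on the default local socket?
    docker exec pynab pg_isready

    # Is the socket file present?
    docker exec pynab ls -l /var/run/postgresql/

    # Start the cluster by hand to get the real error message instead of
    # supervisord's generic "exit status 2".
    docker exec pynab pg_ctlcluster 9.4 main start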
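
psql sketch (re post 13): once the server is up, the missing role and database from the log ("setting up pynab user and database") can be created by hand with the standard PostgreSQL wrappers. This is a sketch only; the password and any extra options must match whatever /config/config.py actually expects.

    # Create the pynab role (prompts for a password) and its database,
    # running as the postgres superuser inside the container.
    docker exec -it -u postgres pynab createuser --pwprompt pynab
    docker exec -u postgres pynab createdb --owner=pynab pynab

    # Sanity check: list roles and databases as the postgres superuser.
    docker exec -u postgres pynab psql -c "\du"
    docker exec -u postgres pynab psql -l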