In SQLAlchemy, the Engine is the starting point for database access: it maintains a pool of connections along with configuration about the database and DBAPI in use.
Typically, "the Engine is intended to normally be a permanent fixture established up-front and maintained throughout the lifespan of an application. It is not intended to be created and disposed on a per-connection basis; it is instead a registry that maintains both a pool of connections as well as configurational information about the database and DBAPI in use, as well as some degree of internal caching of per-database resources."
However, as pointed out in the Engine Disposal section of https://docs.sqlalchemy.org/en/14/core/connections.html:
"When a program uses multiprocessing or fork(), and an Engine object is copied to the child process, Engine.dispose() should be called so that the engine creates brand new database connections local to that fork. Database connections generally do not travel across process boundaries."
This bug was unearthed while working on making the datapusher concurrent - i.e. switching its job store to PostgreSQL and running it with multiple uwsgi workers.
ckan/datapusher#200
ckan/datapusher#198
ckan-service-provider maintains a job database using sqlalchemy. Currently, SQLALCHEMY_DATABASE_URI defaults to sqlite, which is not designed for concurrent access, resulting in database locks.
Changing SQLALCHEMY_DATABASE_URI to a postgresql connect string eliminated the database lock issues. However, since database connections do not travel across process boundaries, psycopg2 then raised another error:
(psycopg2.OperationalError) SSL error: decryption failed or bad record mac
Resolved this issue by setting 'lazy-apps = true' in uwsgi. (ckan/datapusher#201 (comment))
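For reference, the workaround is a one-line addition to the uwsgi ini configuration (section header shown for context):

```ini
[uwsgi]
; Load the application after fork()ing the workers, so each worker
; creates its own database connections instead of inheriting the
; master's copies.
lazy-apps = true
```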
Another fix, one that would benefit not just datapusher but other ckan-service-provider clients as well, would be to give each worker/process its own database connections by calling Engine.dispose() after fork.
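One way this could look under uwsgi is a postfork hook that disposes the Engine in each newly forked worker (a sketch, not datapusher's actual code; the sqlite URL is a stand-in for the real SQLALCHEMY_DATABASE_URI, and uwsgidecorators is only importable when running under uwsgi):

```python
from sqlalchemy import create_engine

# Stand-in for SQLALCHEMY_DATABASE_URI; a real deployment would use
# its PostgreSQL connect string here.
engine = create_engine("sqlite://")

try:
    # Available only when the app is running under uwsgi.
    from uwsgidecorators import postfork

    @postfork
    def _reset_engine():
        # Drop the pooled connections copied from the master process;
        # this worker opens its own connections on first use.
        engine.dispose()
except ImportError:
    pass
```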
https://virtualandy.wordpress.com/2019/09/04/a-fix-for-operationalerror-psycopg2-operationalerror-ssl-error-decryption-failed-or-bad-record-mac/