trbs / pid
Pidfile featuring stale detection and file-locking, can also be used as context-manager or decorator
Home Page: https://pypi.python.org/pypi/pid/
License: Apache License 2.0
ContextDecorator allows using the same object as both a decorator and a context manager.
It's simple and more Pythonic than maintaining a separate decorator, but ContextDecorator
was only added to the standard library in Python 3.2.
What do you think about that?
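For reference, the standard-library pattern looks like this (a toy stand-in to show the mechanism, not pid's actual implementation):

```python
from contextlib import ContextDecorator

class pidguard(ContextDecorator):
    """Toy stand-in for a pidfile guard, showing the ContextDecorator pattern."""
    def __enter__(self):
        print("acquired")
        return self

    def __exit__(self, *exc):
        print("released")
        return False

# Works as a context manager...
with pidguard():
    print("body")

# ...and as a decorator, with no extra code.
@pidguard()
def task():
    print("task body")

task()
```

One class then serves both call styles, which is exactly what pid would gain from inheriting ContextDecorator on Python >= 3.2.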
I've been working on a Python daemon with pid, and after initially logging directly to the console I wanted to switch to file logging using Python's logging module. This is when I ran into the problem.
I have start/stop functions to manage the daemon:
import os
import sys
import time
import signal
import lockfile
import logging
import logging.config
import daemon
from pid import PidFile
from mpmonitor.monitor import MempoolMonitor
# logging.config.fileConfig(fname="logging.conf", disable_existing_loggers=False)
# log = logging.getLogger("mpmonitor")
# curr_dir and pid_file are defined elsewhere in the original module;
# shown here so the snippet is self-contained:
curr_dir = os.path.dirname(os.path.abspath(__file__))
pid_file = os.path.join(curr_dir, "mpmonitor.pid")

def start():
    print("Starting Mempool Monitor")
    _pid_file = PidFile(pidname="mpmonitor.pid", piddir=curr_dir)
    with daemon.DaemonContext(stdout=sys.stdout,
                              stderr=sys.stderr,
                              stdin=sys.stdin,
                              pidfile=_pid_file):
        # Start the monitor:
        mpmonitor = MempoolMonitor()
        mpmonitor.run()

def stop():
    print("\n{}\n".format(pid_file))
    try:
        with open(pid_file, "r") as f:
            content = f.read()
    except FileNotFoundError:
        print("WARNING - PID file not found, cannot stop daemon.\n({})".format(pid_file))
        sys.exit()
    print("Stopping Mempool Monitor")
    # log.info("Stopping Mempool Monitor")
    pid = int(content)
    os.kill(pid, signal.SIGTERM)
    sys.exit()
which works as you would expect it to.
If I uncomment the logging code,
logging.config.fileConfig(fname="logging.conf", disable_existing_loggers=False)
log = logging.getLogger("mpmonitor")
the code breaks down, and some pretty random stuff happens. The error message:
--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.6/logging/__init__.py", line 998, in emit
self.flush()
File "/usr/lib/python3.6/logging/__init__.py", line 978, in flush
self.stream.flush()
OSError: [Errno 9] Bad file descriptor
Call stack:
File "mpmonitor.py", line 111, in <module>
start()
File "mpmonitor.py", line 48, in start
pidfile=_pid_file):
File "/home/leilerg/.local/lib/python3.6/site-packages/daemon/daemon.py", line 389, in __enter__
self.open()
File "/home/leilerg/.local/lib/python3.6/site-packages/daemon/daemon.py", line 381, in open
self.pidfile.__enter__()
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 216, in __enter__
self.create()
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 168, in create
self.setup()
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 77, in setup
self.logger.debug("%r entering setup", self)
Message: '%r entering setup'
Arguments: (<pid.PidFile object at 0x7fc8faa479e8>,)
--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.6/logging/__init__.py", line 998, in emit
self.flush()
File "/usr/lib/python3.6/logging/__init__.py", line 978, in flush
self.stream.flush()
OSError: [Errno 9] Bad file descriptor
Call stack:
File "mpmonitor.py", line 111, in <module>
start()
File "mpmonitor.py", line 48, in start
pidfile=_pid_file):
File "/home/leilerg/.local/lib/python3.6/site-packages/daemon/daemon.py", line 389, in __enter__
self.open()
File "/home/leilerg/.local/lib/python3.6/site-packages/daemon/daemon.py", line 381, in open
self.pidfile.__enter__()
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 216, in __enter__
self.create()
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 170, in create
self.logger.debug("%r create pidfile: %s", self, self.filename)
Message: '%r create pidfile: %s'
Arguments: (<pid.PidFile object at 0x7fc8faa479e8>, '/home/leilerg/python/mempoolmon/mpmonitor.pid')
Traceback (most recent call last):
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 136, in inner_check
pid = int(pid_str)
ValueError: invalid literal for int() with base 10: 'DEBUG - 2020-04-'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "mpmonitor.py", line 111, in <module>
start()
File "mpmonitor.py", line 48, in start
pidfile=_pid_file):
File "/home/leilerg/.local/lib/python3.6/site-packages/daemon/daemon.py", line 389, in __enter__
self.open()
File "/home/leilerg/.local/lib/python3.6/site-packages/daemon/daemon.py", line 381, in open
self.pidfile.__enter__()
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 216, in __enter__
self.create()
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 180, in create
check_result = self.check()
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 165, in check
return inner_check(self.fh)
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 139, in inner_check
raise PidFileUnreadableError(exc)
pid.PidFileUnreadableError: invalid literal for int() with base 10: 'DEBUG - 2020-04-'
--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.6/logging/__init__.py", line 998, in emit
self.flush()
File "/usr/lib/python3.6/logging/__init__.py", line 978, in flush
self.stream.flush()
OSError: [Errno 9] Bad file descriptor
Call stack:
File "/home/leilerg/.local/lib/python3.6/site-packages/pid/__init__.py", line 197, in close
self.logger.debug("%r closing pidfile: %s", self, self.filename)
Message: '%r closing pidfile: %s'
Arguments: (<pid.PidFile object at 0x7fc8faa479e8>, '/home/leilerg/python/mempoolmon/mpmonitor.pid')
The random stuff I was referring to: the file mpmonitor.pid no longer contains a PID but some attempted log/error messages:
user@mylaptor:mempoolmon: cat mpmonitor.pid
DEBUG - 2020-04-05 10:52:55,676 - PidFile: <pid.PidFile object at 0x7fc8faa479e8> entering setup
DEBUG - 2020-04-05 10:52:55,678 - PidFile: <pid.PidFile object at 0x7fc8faa479e8> create pidfile: /home/leilerg/python/mempoolmon/mpmonitor.pid
DEBUG - 2020-04-05 10:52:55,678 - PidFile: <pid.PidFile object at 0x7fc8faa479e8> check pidfile: /home/leilerg/python/mempoolmon/mpmonitor.pid
DEBUG - 2020-04-05 10:52:55,678 - PidFile: <pid.PidFile object at 0x7fc8faa479e8> closing pidfile: /home/leilerg/python/mempoolmon/mpmonitor.pid
To me this looks like the pid logfile somehow got confused with the PID file. This is odd, as I explicitly set disable_existing_loggers=False.
Any ideas? Am I doing something not-so-obviously wrong?
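A likely mechanism (an assumption, not confirmed from the traceback alone): DaemonContext closes every file descriptor it isn't told to preserve, and POSIX hands out the lowest free descriptor number on the next open. The logging handler's stale descriptor number can then be reused by the pidfile, so "log" writes land in the pidfile. A minimal sketch of that reuse:

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()
log_path = os.path.join(tmpdir, "mpmonitor.log")
pid_path = os.path.join(tmpdir, "mpmonitor.pid")

log_fd = os.open(log_path, os.O_CREAT | os.O_WRONLY)
os.close(log_fd)  # daemonization closes every fd it doesn't preserve

pid_fd = os.open(pid_path, os.O_CREAT | os.O_WRONLY)
assert pid_fd == log_fd  # the lowest free descriptor number is reused

# A write through the stale "log" descriptor now lands in the pidfile:
os.write(log_fd, b"DEBUG - entering setup\n")
os.close(pid_fd)

with open(pid_path) as f:
    print(f.read())
```

If this is indeed what happens, python-daemon's documented files_preserve argument to DaemonContext (passing the log handler's stream) should keep the logging descriptor out of the close pass.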
Hello,
I feel I'm doing something wrong, but I checked the source code of pid and couldn't find the issue.
I'm basically doing the following:
from pid import PidFile, PidFileError

try:
    with PidFile(piddir='.'):
        print('Starting script')
        # ... do even more stuff that takes quite a lot of time ...
except PidFileError:
    print('Unable to start script - another instance is running!')
I tried piddir = '.', '/tmp/', and quite a few other things, but for some reason the pid file is not being created.
I added a simple print statement inside the package's __init__.py to show me the filename, and sure enough it prints correctly, but the file is not getting created.
Consequently, I'm able to run multiple instances of the script.
Any thoughts?
Many thanks in advance
//M
Note that this issue is not an obvious library bug; it's more of an opening for discussion around the topic.
background: https://petermalmgren.com/signal-handling-docker/
use case:
The pid library is used inside Docker, with lock files stored in a mounted folder owned by the host (not inside the container). The library locks down a physical resource for test usage while tests are running. The pid file is removed normally when the tests end, but if for some reason a test crashes, the pid module doesn't get a chance to remove the pid file and it is left behind. The process PID is always 1 because it runs inside Docker.
problem:
If the process dies and a pid file containing PID 1 is left behind, the next process thinks the pid file is still valid, and the resource cannot be used until the pid file is removed separately.
This is an annoying issue that luckily happens rarely, but I haven't yet figured out any good way to avoid it.
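One possible workaround (a Linux-only sketch, and the helper name is hypothetical, not part of pid's API): since PID 1 always exists inside a container, a bare existence check can never detect staleness, but comparing the recorded PID's command line against the expected command can:

```python
import os

def pidfile_is_stale(pid_path, expected_cmd):
    """Treat the pidfile as stale unless /proc/<pid>/cmdline contains the
    command we expect. Assumes Linux with /proc mounted."""
    try:
        with open(pid_path) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        return True  # missing or unreadable pidfile: treat as stale
    try:
        with open("/proc/%d/cmdline" % pid, "rb") as f:
            argv = f.read().split(b"\0")
    except OSError:
        return True  # no such process
    return not any(expected_cmd.encode() in part for part in argv)
```

A sturdier variant could also compare the process start time from /proc/<pid>/stat, so that a recycled PID with the same command name is still detected.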
Hi ya
The other day I was running a second instance of a script, each in its own pipenv virtual environment, and noticed that it didn't run because the pidfile already existed. To make it work, I had to provide a separate path for each virtual environment's pidfile. It seems to me that if a script runs in its own virtual environment, it should manage its own pidfile, and we shouldn't have to make this explicit.
What I'm proposing is a change to the default location for the pidfile: if we are running in a virtual environment, the pidfile should be created in a subdirectory named after the virtual environment, based on sys.prefix, e.g. C:\Users\username1\AppData\Roaming\test-hvyc-pcr or /run/user/username1/test-hvyc-pcr or ...
The effect of the change is that scripts with the same name but running in different virtual environments can run at the same time, without specifying piddir or pidname.
Happy to code that up, but wanted to get opinions first.
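A rough sketch of what the proposal could look like (the function name is hypothetical; pid's actual default-directory logic differs):

```python
import os
import sys
import tempfile

def default_piddir():
    """If running inside a virtual environment, append a subdirectory named
    after sys.prefix, so same-named scripts in different venvs get
    distinct pidfiles."""
    base = os.environ.get("XDG_RUNTIME_DIR") or tempfile.gettempdir()
    # sys.base_prefix differs from sys.prefix inside a venv (PEP 405)
    in_venv = sys.prefix != getattr(sys, "base_prefix", sys.prefix)
    if in_venv:
        return os.path.join(base, os.path.basename(sys.prefix))
    return base
```

The PEP 405 prefix comparison detects standard venvs; older virtualenv versions would need a sys.real_prefix check as well.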
script.py:
import time
from pid import PidFile

pidfile = PidFile(allow_samepid=True)
pidfile.create()
while True:
    time.sleep(1)
If I start this script twice, I would expect the second create() to raise a PidFileError, but it doesn't.
I'm using version 2.1.1
The PidFile class registers close() to be called at process exit, but if the user manually calls close(), this can lead to a race condition: instance A of a script closes the pid file, instance B of the same script is then allowed to run, but A's atexit handler later deletes B's pid file.
Pseudo Code
pid_file = PidFile(...)
try:
    pid_file.create()
except Exception:
    if not pid_file.filename or not os.path.exists(pid_file.filename):
        raise
    raise MyAlreadyRunningError
try:
    do_something()
finally:
    pid_file.close()
    # A hasn't run its atexit handler yet, but B is allowed to run.
    # Shortly later, A exits and A's atexit handler deletes the pid file that B created.
It would be possible to simply not call close(), but we have tests that check for MyAlreadyRunningError, and pytest does not exit (and therefore never calls the registered atexit handlers) until the full suite of tests has run.
The registration is done here, but since self._is_setup is always True, it is never executed (setup() is called as the first step of create()).
I can't find any documentation for the PidFileBase constructor's arguments. I've been reading the source code, and I'm having trouble seeing, e.g., what the difference between lock_pidfile and allow_samepid is.
I'm having issues acquiring the lock file, either with the decorator or the context manager.
Here is how I'm proceeding:
from pid import PidFile

if __name__ == "__main__":
    with PidFile('deploy') as p:
        ideascube = initIdeascube()
        if ideascube.internet_is_up():
            ideascube.install_ansible()
            ideascube.run_deploy()
When I run the script, /var/run/deploy is not always created. What am I missing?
It would be really nice if this checked the pid file content against its own PID, so that multiple calls within the same process wouldn't fail so badly. My use case is tornado's exec-on-file-change mechanism.
Use fasteners or some other multiplatform locking, to be able to run on Windows.
Since version 2.2.4 this library is broken. If you run the same script twice, you get pid.PidFileAlreadyLockedError: [Errno 35] Resource temporarily unavailable; however, the pid file is also deleted, and if you run the script again it will succeed even though the first instance is still running.
Below is a sample script. Run it in one terminal, then run it again twice in a separate terminal to observe the incorrect behavior.
#!/usr/bin/env python
from pid import PidFile
from time import sleep

if __name__ == '__main__':
    with PidFile('test123', piddir='/tmp'):
        sleep(9999999)
I'm using the pid module in conjunction with python-daemon. Although I don't have a simple example yet, the way python-daemon works is by providing a DaemonContext to which you can pass a file-locking context manager. Normally I would use lockfile.pidfile, but it doesn't feature staleness detection, and pid seems to do everything correctly (almost).
The following code is contrived and hasn't been tested, but explains the situation I'm seeing.
from daemon import DaemonContext
from pid import PidFile

# Using detach_process, DaemonContext roughly performs the following:
# 1. fork() [exit parent]
# 2. setsid()
# 3. fork() [exit parent]
# 4. override signal handlers
# 5. pidfile.__enter__()
with DaemonContext(detach_process=True, pidfile=PidFile(pidname="test", piddir="/tmp")):
    pass
The problem is that by the time the context manager is entered at step 5, the PidFile instance has already gathered its information about the process environment, in __init__(), before step 1 in the above comment; entering it also overrides the SIGTERM handler installed at step 4.
I am trying to brainstorm a way to provide a DaemonContext-friendly pid context manager. I could subclass PidFile and implement a DaemonContextPidFile that reinitialises all variables from __init__() when __enter__() is called. Or perhaps the process environment should only be gathered once __enter__() has been called.
The simpler solution is:
with DaemonContext(detach_process=True):
    with PidFile(pidname="test", piddir="/tmp/"):
        pass
Which is perfectly valid, except PidFile() may tread on an existing SIGTERM handler.
Perhaps we could let PidFile accept a SIGTERM handler and decorate it, ensuring PidFile.close() is called before the passed-in handler executes.
Please let me know if you think any of these ideas seem viable.
Thanks.
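The last idea could be sketched like this (names are hypothetical, not pid's API): a SIGTERM handler that closes the pidfile first, then defers to whatever handler was already installed, e.g. the one DaemonContext set up.

```python
import signal

def chained_sigterm(pidfile_close, previous):
    """Build a SIGTERM handler that closes the pidfile, then chains to the
    previously installed handler if it is callable."""
    def handler(signum, frame):
        pidfile_close()
        if callable(previous):
            previous(signum, frame)
    return handler

# Installation would look roughly like:
# previous = signal.getsignal(signal.SIGTERM)
# signal.signal(signal.SIGTERM, chained_sigterm(pidfile.close, previous))
```

Note that signal.getsignal() can also return signal.SIG_DFL or signal.SIG_IGN, which the callable() check deliberately skips rather than re-invoking.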
Hi! Thanks for the awesome library, I think it's the best pidfile manager around.
I have a doubt about one thing though: on Windows, the pidfile is simply stored in the temp folder. Doesn't that make it susceptible to being deleted by some cleanup process?
I like the idea of trying a user folder first (similar to /var/user/%s), so I was wondering whether %APPDATA% would be a sensible first location to try, perhaps followed by ~\AppData\Roaming if APPDATA doesn't exist, and finally falling back to the temp dir? (inspired by confuse)
Happy to try to code it up (it would be my first GitHub contribution), although I'm pretty sure you know what you are doing.
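The proposed lookup order could be sketched as follows (the function name is hypothetical; this is the suggestion, not pid's current behaviour):

```python
import os
import tempfile

def candidate_piddirs():
    """Return candidate pidfile directories in the proposed order:
    %APPDATA% first, then ~\\AppData\\Roaming, then the temp directory
    as a last resort."""
    dirs = []
    appdata = os.environ.get("APPDATA")
    if appdata:
        dirs.append(appdata)
    dirs.append(os.path.expanduser(os.path.join("~", "AppData", "Roaming")))
    dirs.append(tempfile.gettempdir())
    return [d for d in dirs if d]
```

The caller would then pick the first candidate that exists and is writable, mirroring how pid probes its default directories on POSIX.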
If I understand this error correctly, the code did not raise a PidFileAlreadyRunningError when it was supposed to. I do not yet know why.
I have no problem running the test suite on Linux (Python 3.7.1), only on FreeBSD (Python 3.6.6).
======================================================================
FAIL: test_pid.test_pid_check_samepid_two_processes
----------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/nose/case.py", line 197, in runTest
self.test(*self.arg)
File "/usr/ports/devel/py-pid/work-py36/pid-2.2.1/tests/test_pid.py", line 280, in test_pid_check_samepid_two_processes
pidfile_proc2.create()
File "/usr/local/lib/python3.6/contextlib.py", line 88, in __exit__
next(self.gen)
File "/usr/ports/devel/py-pid/work-py36/pid-2.2.1/tests/test_pid.py", line 42, in raising
raise AssertionError("Failed to throw exception of type(s) %s." % (", ".join(exc_type.__name__ for exc_type in exc_types),))
AssertionError: Failed to throw exception of type(s) PidFileAlreadyRunningError, PidFileAlreadyLockedError.
-------------------- >> begin captured logging << --------------------
PidFile: DEBUG: <pid.PidFile object at 0x807eee2c8> entering setup
PidFile: DEBUG: <pid.PidFile object at 0x807eee2c8> create pidfile: /tmp/setup.py.pid
PidFile: DEBUG: <pid.PidFile object at 0x807eee2c8> check pidfile: /tmp/setup.py.pid
PidFile: DEBUG: <pid.PidFile object at 0x807eee368> entering setup
PidFile: DEBUG: <pid.PidFile object at 0x807eee368> create pidfile: /tmp/setup.py.pid
PidFile: DEBUG: <pid.PidFile object at 0x807eee368> check pidfile: /tmp/setup.py.pid
PidFile: DEBUG: <pid.PidFile object at 0x807eee2c8> closing pidfile: /tmp/setup.py.pid
PidFile: DEBUG: <pid.PidFile object at 0x807eee368> closing pidfile: /tmp/setup.py.pid
--------------------- >> end captured logging << ---------------------
----------------------------------------------------------------------
Ran 32 tests in 0.030s
FAILED (failures=1)
Test failed: <unittest.runner.TextTestResult run=32 errors=0 failures=1>
error: Test failed: <unittest.runner.TextTestResult run=32 errors=0 failures=1>
Also, on line 279, I don't understand why it expects PidFileAlreadyLockedError. If I remove it, so that the test only expects PidFileAlreadyRunningError, it still passes (on Linux). So what exactly is supposed to be happening in this test case?
Hello!
The Git version has the same_pid option, but that's not present in the 2.0.1 release on PyPI - could you please publish an update? Cheers
I am using the PidFileError exception raised while acquiring the lock to check its acquired status. But this interferes with other attempts to acquire the lock during the check period. Is there a way to get the status directly?
from pid import PidFile, PidFileError

def is_locked(filename):
    try:
        with PidFile(filename):
            # Parallel lock-acquire attempts fail here
            return False
    except PidFileError:
        return True
It would be nice to add better API support for querying the process status.
For example, if there is no extant pidfile, the .check() method throws an IOError due to the lack of write access. I suppose this could be interpreted as "file does not exist", but it's not very clear.
It would also be nice to be able to access the numeric PID if it exists. As it is now, if a process is already running, calling .check() throws an exception. The error message does contain the PID of the running process, but unless I'm missing something, there's no way to access it without parsing it out of the error message.
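The requested query API could look roughly like this (the function name is hypothetical, and this is a POSIX-flavoured sketch, not something pid provides today):

```python
import os

def read_running_pid(pid_path):
    """Return the PID recorded in the pidfile if that process is alive,
    None otherwise - no exception parsing required."""
    try:
        with open(pid_path) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        return None  # missing or unreadable pidfile
    try:
        os.kill(pid, 0)  # signal 0 checks existence without sending anything
    except ProcessLookupError:
        return None  # stale pidfile: no such process
    except PermissionError:
        pass  # process exists but belongs to another user
    return pid
```

Returning None for both "no pidfile" and "stale pidfile" keeps the read path non-throwing, while the lock-probing semantics stay with .check().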