apscheduler's Issues

APScheduler main loop throws exception IOError 514

Originally reported by: Anonymous


Environment
OS : Linux 2.6.18-92.el5xen #1 SMP Tue Jun 10 19:20:18 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux
Python : Python 2.7.1
APScheduler : APScheduler-2.0.2-py2.7

LOG
Exception in thread APScheduler:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 530, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 483, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/ead/disk1/adstat-tocassa/thirdpart/APScheduler-2.0.2-py2.7.egg/apscheduler/scheduler.py", line 552, in _main_loop
self._wakeup.wait(wait_seconds)
File "/usr/lib/python2.7/threading.py", line 394, in wait
self.__cond.wait(timeout)
File "/usr/lib/python2.7/threading.py", line 257, in wait
_sleep(delay)
IOError: [Errno 514] Unknown error 514


add_cron_job issue with minute='*/1'

Originally reported by: ecollage (Bitbucket: ecollage, GitHub: ecollage)


I am running the following code in a console:

#!python

from apscheduler.scheduler import Scheduler
import logging
logging.basicConfig()

sched = Scheduler()
sched.start()

def test_job():
    print "hello world"

sched.add_cron_job(test_job, minute='*/1')

I would expect test_job to fire every minute, but the job actually executes every second. Is this a bug?
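
A workaround that appears to avoid this behaviour (a sketch, assuming APScheduler 2.x, where leaving the lower fields unspecified may let the job fire on every matching second) is to pin the second field explicitly:

#!python

from apscheduler.scheduler import Scheduler
import logging
logging.basicConfig()

sched = Scheduler()
sched.start()

def test_job():
    print "hello world"

# Constrain the second field so the job fires once per minute, at second 0
sched.add_cron_job(test_job, minute='*/1', second='0')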


Unable to pass value to function

Originally reported by: Anonymous


Using Python 2.7.2. Trying to pass a value to the function that add_cron_job calls errors out.

#!python
from apscheduler.scheduler import Scheduler

sched = Scheduler()
sched.start()

jobid=2

def run_job(n):
    print "Running Job: ", n

def run_job2():
    print "Running Job without passing ID "

# Trying to pass a value to the function fails with "TypeError: func must be callable"
sched.add_cron_job(run_job(jobid),  minute='*/5')

# Not passing a value
sched.add_cron_job(run_job2,  minute='*/5')

sched.print_jobs()

Error:
Traceback (most recent call last):
File "schedual.py", line 17, in
sched.add_cron_job(run_job(2), minute='_/5')
File "c:\sw_nt\python27\lib\site-packages\apscheduler\scheduler.py", line 347,
in add_cron_job
return self.add_job(trigger, func, args, kwargs, *_options)
File "c:\sw_nt\python27\lib\site-packages\apscheduler\scheduler.py", line 258,
in add_job
options.pop('coalesce', self.coalesce), **options)
File "c:\sw_nt\python27\lib\site-packages\apscheduler\job.py", line 44, in i
nit

raise TypeError('func must be callable')
TypeError: func must be callable
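
The error occurs because run_job(jobid) calls the function immediately and hands its return value (None) to the scheduler. A sketch of the likely intended usage, assuming the 2.x add_cron_job signature that accepts an args list:

#!python

from apscheduler.scheduler import Scheduler

sched = Scheduler()
sched.start()

jobid = 2

def run_job(n):
    print "Running Job: ", n

# Pass the callable itself and supply the argument through args
sched.add_cron_job(run_job, minute='*/5', args=[jobid])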


Can't get jobs running

Originally reported by: andrea_crotti (Bitbucket: andrea_crotti, GitHub: Unknown)


So I'm trying out apscheduler and I think it's great, but for some reason the jobs never run.
I'm using APScheduler-2.1.0-py2.7 on Archlinux 64 bits.

Below is the simple script I'm trying. When I run it I see that the
job is actually created, but it looks like nothing is ever run:

$ python simple.py
[<Job (name=partial, trigger=<SimpleTrigger
(run_date=datetime.datetime(2013, 3, 19, 12, 29, 10, 845871))>)>]

All it should do is create a couchdb database and write a document
in it, but I don't get the entry in the db and I don't get any error
either.

wait=True also seems to do nothing, and even adding a

while True:
    pass

right after it doesn't seem to have any effect. Any idea why the job is not actually run?

#!python

from datetime import datetime, timedelta
from apscheduler.scheduler import Scheduler, EVENT_JOB_ERROR, EVENT_JOB_EXECUTED
from functools import partial

from couchdbkit import Server

sched = Scheduler()


def write_to_db(db_name, doc):
    s = Server()
    s.get_or_create_db(db_name)
    print("Saving document %s" % str(doc))
    s.save_doc(doc)
    raise AssertionError


def my_listener(event):
    if event.exception:
        print("Running the job crashed")
    else:
        print("Everything worked fine")


if __name__ == '__main__':
    sched.add_listener(my_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR)
    in_2_minutes = datetime.utcnow() + timedelta(seconds=2)
    sched.add_date_job(partial(write_to_db, 'test_db', {'x': 10}), in_2_minutes)

    sched.start()
    sched.shutdown(wait=True)
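
One likely explanation (an assumption, not confirmed in the report): shutdown(wait=True) only waits for jobs that are already executing, so calling it right after start() stops the scheduler before the date job's run time ever arrives; using datetime.utcnow() for the run date may also put the job in the past or future relative to the scheduler's local clock. A minimal sketch that keeps the process alive long enough for the job to fire:

#!python

from datetime import datetime, timedelta
from time import sleep

from apscheduler.scheduler import Scheduler

sched = Scheduler()

def write_something():
    print("job ran at %s" % datetime.now())

if __name__ == '__main__':
    sched.start()
    sched.add_date_job(write_something, datetime.now() + timedelta(seconds=2))
    sleep(5)  # keep the main thread alive until the job has had a chance to run
    sched.shutdown()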

Windows, PY2EXE with SQLAlchemy Fails

Originally reported by: Stan Brinkerhoff (Bitbucket: stanbrinkerhoff, GitHub: Unknown)


I am attempting to include APScheduler in a Windows PY2EXE built 'exe' that worked before the introduction of APScheduler.

Test Case

/dir/test.py

#!python
from apscheduler.scheduler import Scheduler
from sqlalchemy import create_engine

if __name__ == "__main__":
    pass

/dir/setup.py

#!python

import py2exe
from distutils.core import setup


setup (
    options = {"py2exe":
               {
                "includes": ["sqlalchemy",
                             "apscheduler"],
                }
               },
    console = ["test.py"],

)

Executing 'setup.py py2exe' seems to work correctly; however during execution of the resulting 'binary' this error is emitted:

#!python

C:\code\stan_sv\trunk\aps>dist\test.exe
Traceback (most recent call last):
  File "test.py", line 2, in <module>
    from sqlalchemy import create_engine
  File "sqlalchemy\__init__.pyc", line 112, in <module>
  File "sqlalchemy\engine\__init__.pyc", line 74, in <module>
  File "sqlalchemy\engine\strategies.pyc", line 21, in <module>
  File "sqlalchemy\pool.pyc", line 22, in <module>
ImportError: cannot import name queue

This issue does not occur when APScheduler is not present. The offending line of pool.py is the import below (sqla_queue).

#!python

import weakref, time, threading

from sqlalchemy import exc, log
from sqlalchemy import queue as sqla_queue
from sqlalchemy.util import threading, pickle, as_interface, memoized_property

proxies = {}
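
A workaround worth trying (a sketch, not verified): py2exe's module finder sometimes misses modules that are only imported indirectly, so explicitly listing sqlalchemy.queue (the module that pool.py imports above) in the includes may let the frozen executable find it:

#!python

import py2exe
from distutils.core import setup

setup(
    options={"py2exe": {
        "includes": ["sqlalchemy",
                     "sqlalchemy.queue",   # the module pool.py fails to import
                     "apscheduler"],
    }},
    console=["test.py"],
)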

ValueError: Unrecognized expression for field "week"

Originally reported by: skokan (Bitbucket: skokan, GitHub: skokan)


Original bad code, line 257 in scheduler.py:

#!python
        trigger = CronTrigger(year, month, day, day_of_week, hour, minute, second)
        return self.add_job(trigger, func, args, kwargs)

Fixed code (the original passed the wrong number of positional parameters to CronTrigger, so the week field was missing):

#!python
        trigger = CronTrigger(year, month, day, week, day_of_week, hour, minute, second)

Commit feature for persistent job stores

Originally reported by: Josh Harrison (Bitbucket: hijakk, GitHub: hijakk)


I'm running APS in an environment where I can't be assured that a clean scheduler.shutdown() will be issued when the service goes down or is restarted.
From what I can tell, this means that jobs stored to a persistent jobstore aren't successfully written to the file - so on restart, those jobs disappear. This is at least my experience with the file based solution.
Currently, I have my app take the new job, then close and reopen the jobstore. This makes sure that jobs are saved as they come in. But this seems like a fairly hacky way to address the problem - much better would be a scheduler.persistent_jobstore_commit(jobstore=None) type method, where you can optionally specify a jobstore and make sure that jobs you've entered into it are saved in case the process is terminated without getting a proper shutdown command.

#!python
#jobs.py
def afunction(term):
    try:
        f = open("/tmp/exampleout.txt","a")
    except:
        f = open("/tmp/exampleout.txt","w")
    f.write(term+"\n")
    f.close()
#!python
#apschedserver.py
from apscheduler.jobstores.shelve_store import ShelveJobStore
from apscheduler.scheduler import Scheduler
from jobs import afunction
import cherrypy, os

class Jobs(object):
    exposed=True
    scheduler = Scheduler()
    scheduler.add_jobstore(ShelveJobStore('jobs.shelf'), 'shelve')
    scheduler.start()

    def addjob(self, message, seconds, name):
        self.scheduler.add_interval_job(func=afunction, name=name,
                        args=[message], seconds=seconds, jobstore='shelve')

    def GET(self):
        jobs = self.scheduler.get_jobs()
        return {"jobs":[job.name for job in jobs]}

    def POST(self):
        json = cherrypy.request.json
        self.addjob(json["message"], json["seconds"], json["name"])
        self.scheduler.remove_jobstore('shelve')
        self.scheduler.add_jobstore(ShelveJobStore('jobs.shelf'), 'shelve')
        return self.GET()



if __name__ == '__main__':
    cherrypy.config.update({'server.socket_host': '0.0.0.0', 'server.socket_port': 8081, 'server.thread_pool': 1})

    cherrypy.tree.mount(Jobs(), "/", config={
        '/': {
            'tools.encode.on': True,
            'tools.decode.on': True,
            'tools.json_in.on': True,
            'tools.json_out.on': True,
            'request.dispatch': cherrypy.dispatch.MethodDispatcher(),

        },
    })
    cherrypy.engine.start()
    cherrypy.engine.block()

APScheduler-3.0.0.pre1 crontrigger throwing error.

Originally reported by: croxis (Bitbucket: croxis, GitHub: croxis)


As of the latest pull (on 4/13) I am getting an attribute error when attempting to set a cron schedule. I understand that 3.0 is still under construction and it's also very likely I'm just doing it wrong :)

#!python

from datetime import timezone

from apscheduler.triggers.cron import CronTrigger

trigger = CronTrigger(timezone.utc,
                      second='0',
                      month=month,
                      day=day,
                      hour=hour,
                      minute=minute,
                      day_of_week=day_of_week)
scheduler.add_job(auto_feed, trigger)

throws

#!python

File "/usr/lib/python3.4/site-packages/APScheduler-3.0.0.pre1-py3.4.egg/apscheduler/triggers/cron/__init__.py", line 41, in __init__
    self.timezone = astimezone(timezone) or getattr(self.start_date, 'tzinfo', None) or tzlocal()
AttributeError: 'CronTrigger' object has no attribute 'start_date'

Cannot create RPM

Originally reported by: Marco Mornati (Bitbucket: mmornati, GitHub: mmornati)


There is an error in setup.py that causes a bdist_rpm error.

Traceback (most recent call last):
File "setup.py", line 19, in
long_description=open('README.rst').read(),
IOError: [Errno 2] No such file or directory: 'README.rst'

I suppose the working path and project path are different!
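
A common fix for this class of problem (a sketch, assuming the cause is that bdist_rpm runs setup.py from a different working directory) is to open README.rst relative to the location of setup.py rather than the current directory:

#!python

import os
from distutils.core import setup

here = os.path.dirname(os.path.abspath(__file__))

setup(
    name='APScheduler',
    # read the long description relative to setup.py, not the working directory
    long_description=open(os.path.join(here, 'README.rst')).read(),
)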


Refactor job store system for greater scalability

Originally reported by: Alex Grönholm (Bitbucket: agronholm, GitHub: agronholm)


The job stores currently load all the jobs into memory. This was originally done for performance reasons when jobs are executed frequently. However, this turned out not to reflect real world needs. Instead, real world use cases can require massive amounts of jobs, which would quickly deplete the available memory. As such, the job store semantics will have to be changed so that the jobs are kept in the storage, and are only loaded on demand, based on their next run times. This will cause more frequent hitting of the backend, but it will also enable APScheduler to scale much better.


Cannot determine the reference when I try add callable class job

Originally reported by: Anonymous


My simple test

#!python

from apscheduler import util
class Test():
    def __call__(self):
        self.run(self)
    def run(self):
        print "Hello"

obj=Test()
print obj
util.obj_to_ref(obj)

my output
<__main__.Test instance at 0x0000000002F8E548>

Traceback (most recent call last):
File "D:\workspace\test\test.py", line 82, in <module>
util.obj_to_ref(obj)
File "build\bdist.win-amd64\egg\apscheduler\util.py", line 171, in obj_to_ref
ValueError: Cannot determine the reference to <__main__.Test instance at 0x0000000002F8E548>

How can I fix it?
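
obj_to_ref can only build a textual reference for objects that are reachable by name in their module (e.g. module-level functions); an instance of Test has no such name, so the lookup fails. A sketch of a reference-able alternative, assuming a module-level callable is acceptable instead of a stateful instance (module and function names here are hypothetical):

#!python

# mymodule.py
from apscheduler import util

class Test(object):
    def run(self):
        print "Hello"

def run_test():
    # module-level function: obj_to_ref(run_test) resolves to something like 'mymodule:run_test'
    Test().run()

print util.obj_to_ref(run_test)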


Use relative instead of absolute imports

Originally reported by: Novimir Pablant (Bitbucket: amicitas, GitHub: amicitas)


The use of relative imports instead of absolute imports would allow embedding of the apscheduler library into other projects without source modification.

The proposed change would be to go from imports like this:

#!python

from apscheduler.util import *

To import statements like this:

#!python

from .util import *

I found this change useful so that I could include the apscheduler source in a project of mine rather than having to specify an additional external library dependency.

If this is considered a reasonable suggestion I am happy to make the changes and a pull request.


Maximum run count usage example

Originally reported by: Dariusz Suchojad (Bitbucket: dsuch, GitHub: dsuch)


Hello,

I'm in the process of migrating code using APScheduler 1.x over to the 2.x series and have noticed a changelog entry "Maximum run count can be configured for all jobs, not just those using interval-based scheduling" at http://packages.python.org/APScheduler/index.html#id3

The trouble is that I can't seem to find any usage example of said feature. Comparing the 1.x vs. 2.x code bases I can see that IntervalTrigger's __init__ method no longer accepts the 'repeat' argument, yet there doesn't seem to be any equivalent way of achieving the same result.

Can you please shed some light on this and post a piece of sample code showing how one should use the feature in 2.x?

Thanks!
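
For what it's worth, a sketch of how the feature appears to be exposed in 2.x (an assumption based on the max_runs parameter visible in Job.__init__ elsewhere in this tracker, not an authoritative example): max_runs can be passed as a keyword option when adding a job.

#!python

from apscheduler.scheduler import Scheduler

sched = Scheduler()
sched.start()

def my_job():
    print "running"

# max_runs limits the total number of executions; this job stops after 5 runs
sched.add_interval_job(my_job, seconds=10, max_runs=5)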


Shutdown before Start

Originally reported by: Anonymous


If shutdown is called before start is called I get:

#!python

/usr/local/lib/python2.6/dist-packages/apscheduler/scheduler.pyc in shutdown(self, timeout)
    131             terminate, 0 to wait forever, None to skip waiting
    132         """
--> 133         if self.stopped or not self.thread.isAlive():
    134             return
    135

self.thread is still None. Probably one shouldn't call shutdown before start, but anyway.


docs: How to run cron jobs

Originally reported by: Thomas Güttler (Bitbucket: thomas-guettler, GitHub: Unknown)


Please explain in the docs how to run the example:

http://pythonhosted.org/APScheduler/cronschedule.html#example-1

My question (and I guess other people have this question, too):

How to execute this line?

Example: Linux server boots. The above snippet is installed somewhere, ... but how does this example get executed?

Please tell me, if you don't understand my question.
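
A minimal sketch of one way such an example is typically run (an assumption about deployment, not taken from the linked docs): the snippet has to live inside a long-running Python process, and that process is what you start at boot, e.g. from an init script, a systemd unit, or a cron @reboot entry.

#!python

# cron_example.py -- hypothetical standalone script wrapping the documented snippet
from time import sleep
from apscheduler.scheduler import Scheduler

sched = Scheduler()

def job_function():
    print "Hello World"

sched.add_cron_job(job_function, day_of_week='mon-fri', hour=17)

if __name__ == '__main__':
    sched.start()
    # keep the process alive; the scheduler runs in a background thread
    while True:
        sleep(60)

The script itself is then launched at boot, for example with "python /path/to/cron_example.py" from an init script.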


Odd shutdown behavior

Originally reported by: John Matthew (Bitbucket: jmatthew, GitHub: jmatthew)


First off, great library! Nothing comes close! Thank you for writing this.

I'm trying to build a system using APScheduler that provides a way to schedule jobs at various intervals; if we need to shut down the jobs early before they are finished, I pass in a function they can check to see if they should finish early.

I've written a test to try and make this work, and it gives inconsistent behavior, sometimes raising an exception about trying to schedule after shutdown.

Here is my code, and the log file is attached.

#!python

from datetime import datetime
from time import sleep
import os
import logging 
import logging.config
import logconfig

logger = logging.getLogger(__name__)

from apscheduler.scheduler import Scheduler

stopit = False

def tick5(sfunction):
    while not sfunction():
        logger.info('Tick5 The time is: %s' % datetime.now() )
        sleep(5)
    logger.debug("exiting tick5")

def tick10(sfunction):
    while not sfunction():
        logger.info('Tick10 The time is: %s' % datetime.now())
        sleep(10)
    logger.debug("exiting tick10")

def tick15(sfunction):
    while not sfunction():
        logger.info('Tick15 The time is: %s' % datetime.now())
        sleep(10)
    logger.debug("exiting tick15")

def tick20(sfunction):
    while not sfunction():
        logger.info('Tick20 The time is: %s' % datetime.now())
        sleep(20)
    logger.debug("exiting tick20")


def tick25(sfunction):
    while not sfunction():
        logger.info('Tick25 The time is: %s' % datetime.now())
        sleep(25)
    logger.debug("exiting tick25")


def tick30(sfunction):
    while not sfunction():
        logger.info('Tick30 The time is: %s' % datetime.now())
        sleep(30)
    logger.debug("exiting tick30")


def stop():
    return stopit

if __name__ == '__main__':
    scheduler = Scheduler()

    scheduler.add_interval_job(tick5,args=[stop], seconds=5 )
    scheduler.add_interval_job(tick10,args=[stop], seconds=10 )
    scheduler.add_interval_job(tick15,args=[stop], seconds=15 )
    scheduler.add_interval_job(tick20,args=[stop], seconds=20 )
    scheduler.add_interval_job(tick25,args=[stop], seconds=25 )
    scheduler.add_interval_job(tick30,args=[stop], seconds=30 )

    logger.info("starting")
    scheduler.start()

    logger.info("started, sleeping 40 seconds, then issuing shutdown")
    sleep(40)

    stopit = True
    logger.info("shutting down... set stopit to True.. waiting for all jobs to finish")
    scheduler.shutdown()

    logger.info("finished")

interval example does not work

Originally reported by: Michal Nowikowski (Bitbucket: godfryd, GitHub: godfryd)


#!python
$ PYTHONPATH=.. python interval.py 
Traceback (most recent call last):
  File "interval.py", line 16, in <module>
    scheduler.add_job(tick, 'interval', {'seconds': 3})
  File ".../apscheduler/apscheduler/scheduler.py", line 298, in add_job
    raise KeyError('No trigger by the name "%s" was found' % trigger)
KeyError: 'No trigger by the name "interval" was found'



global misfire_grace_time does not work

Originally reported by: Anonymous


The default misfire_grace_time value for a job is set to 1 in Job.__init__():

#!python
class Job(object):
    def __init__(self, trigger, func, args, kwargs, name=None,
                 misfire_grace_time=1, max_runs=None, max_concurrency=1):

So the following code in Scheduler.add_job() does not work:

#!python

class Scheduler(object):
    def add_job():
        ...
        job = Job(trigger, func, args or [], kwargs or {}, **options)
        if job.misfire_grace_time is None:
            job.misfire_grace_time = self.misfire_grace_time
        ...
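
A sketch of the kind of fix this implies (an assumption about the intended behaviour, not a committed patch): default misfire_grace_time to None in Job.__init__, so that the None check in Scheduler.add_job can actually apply the scheduler-wide value.

#!python

class Job(object):
    def __init__(self, trigger, func, args, kwargs, name=None,
                 misfire_grace_time=None, max_runs=None, max_concurrency=1):
        # With a None default, the branch in Scheduler.add_job
        #     if job.misfire_grace_time is None:
        #         job.misfire_grace_time = self.misfire_grace_time
        # can take effect and apply the global setting.
        pass  # rest of the constructor unchanged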

WeekdayPositionExpression: unexpected keyword argument 'option' and 'weekday'

Originally reported by: skokan (Bitbucket: skokan, GitHub: skokan)


Python (2.5) reports the following error:
File "C:\programy.win\Python25\lib\site-packages\apscheduler\triggers.py", line 35, in compile_single

#!python
    return compiler(**match.groupdict())
TypeError: __init__() got an unexpected keyword argument 'option'

Solution:
Original code with error:

#!python

class WeekdayPositionExpression(object):
    options = ['1st', '2nd', '3rd', '4th', '5th', 'last']
    value_re = re.compile(r'(?P<option>%s) +(?P<weekday>(?:\d+|\w+))'
                          % '|'.join(options), re.IGNORECASE)

Fixed line:

#!python
    value_re = re.compile(r'(?P<option_name>%s) +(?P<weekday_name>(?:\d+|\w+))'
                          % '|'.join(options), re.IGNORECASE)

Unexpected results when job is scheduled from inside another job

Originally reported by: samuelh (Bitbucket: samuelh, GitHub: samuelh)


#!python

from apscheduler.scheduler import Scheduler
from time import sleep
from datetime import datetime, timedelta

sched = Scheduler()
sched.start()

def job_2():
    print "Job 2"

def job_1():
    global sched
    print "Job 1"
    sched.add_date_job(job_2, datetime.now() + timedelta(0,1))

sched.add_interval_job(job_1, seconds=3)

while 1:
    sleep(1)

Race condition when adding a job once scheduler is started

Originally reported by: chemicalstorm (Bitbucket: chemicalstorm, GitHub: chemicalstorm)


Using the following code:

#!python
import logging
import time

from apscheduler.scheduler import Scheduler

logging.basicConfig(level=logging.DEBUG)

def job_function():
    print "Hello World"
    time.sleep(10)

# Start the scheduler
sched = Scheduler()

sched.start()
sched.add_cron_job(job_function, **{'minute': '*/2'})

It seems we can fall into a race condition. Several launches of this code display different outputs (I just added a log trace in scheduler.py, in the _real_add_job function).

Functional (correct) output:

guestdev@guestdev-laptop:~/Projet/bac_a_sable/APScheduler-2.0.1$ python test_aps.py 
INFO:apscheduler.threadpool:Started thread pool with 0 core threads and 20 maximum threads
INFO:apscheduler.scheduler:Scheduler started
DEBUG:apscheduler.scheduler:Looking for jobs to run
DEBUG:apscheduler.scheduler:No jobs; waiting until a job is added
DEBUG:apscheduler.scheduler:Adding real job with wakeup: True
INFO:apscheduler.scheduler:Added job "job_function (trigger: cron[minute='*/2'], next run at: 2011-06-20 13:32:00)" to job store "default"
DEBUG:apscheduler.scheduler:Looking for jobs to run
DEBUG:apscheduler.scheduler:Next wakeup is due at 2011-06-20 13:32:00 (in 100.794154 seconds)

Race condition, which leads to multiple fires of the same job:

guestdev@guestdev-laptop:~/Projet/bac_a_sable/APScheduler-2.0.1$ python test_aps.py 
INFO:apscheduler.threadpool:Started thread pool with 0 core threads and 20 maximum threads
INFO:apscheduler.scheduler:Scheduler started
DEBUG:apscheduler.scheduler:Adding real job with wakeup: True
DEBUG:apscheduler.scheduler:Looking for jobs to run
INFO:apscheduler.scheduler:Added job "job_function (trigger: cron[minute='*/2'], next run at: 2011-06-20 13:32:00)" to job store "default"
DEBUG:apscheduler.scheduler:Next wakeup is due at 2011-06-20 13:32:00 (in 99.725313 seconds)
DEBUG:apscheduler.scheduler:Looking for jobs to run
DEBUG:apscheduler.scheduler:Next wakeup is due at 2011-06-20 13:32:00 (in 99.724925 seconds)

I can't reproduce the bug if I add a job before starting the scheduler.


APScheduler main loop throws exception IOError 514

Originally reported by: maoz_guttman (Bitbucket: maoz_guttman, GitHub: Unknown)


Hi,

I hit the "APScheduler main loop throws exception IOError 514" as described in issue #18

It is not easy to reproduce since it rarely happens and you have to run APScheduler for a long time with a lot of jobs, but it is easy to fix.

  1. The time.sleep API rarely raises an "IOError: [Errno 514] Unknown error 514" exception. You might have to call time.sleep thousands of times in order to reproduce it. You can read about it here.
  2. The threading._Event.wait API calls that time.sleep API when the timeout argument is not None.
  3. scheduler.py --> the _main_loop function calls the threading._Event.wait API.

A simple way to work around it is to wrap "self._wakeup.wait(wait_seconds)" in scheduler.py --> _main_loop in a try-except block. Something like:

#!python
def _main_loop(self):
  ...
  try:
    self._wakeup.wait(wait_seconds)
  except IOError as exception:
    pass
  ...

Are you willing to do that fix?

Thanks,

Maoz


Unicode input to convert_to_datetime (from util.py)

Originally reported by: shoyu (Bitbucket: shoyu, GitHub: shoyu)


I use

#!python

from __future__ import unicode_literals

in my code.
There is an exception in util.convert_to_datetime when the program calls add_interval_job with a start_date parameter, because the parameter is unicode and not str.

Replacing

#!python

elif isinstance(input, str)

by

#!python

elif isinstance(input, basestring)

solves the issue.

Thank you for your great job.


Allow to schedule a provided Job-instance

Originally reported by: Kevin Palm (Bitbucket: palm_kevin, GitHub: Unknown)


Currently it is not possible to have your own Job classes: the Job instance is always created by the Scheduler.add_job method.

I think it would be nicer if there were another function like add_raw_job that received the Job instance as an argument.

Implemented, it would look something like this:

#!python

    def add_job(self, trigger, func, args, kwargs, jobstore='default', **options):
        job = Job(trigger, func, args or [], kwargs or {}, options.pop('misfire_grace_time', self.misfire_grace_time), options.pop('coalesce', self.coalesce), **options)
        return self.add_raw_job(job, trigger, func, args, kwargs, jobstore='default', **options)

    def add_raw_job(self, job, trigger, func, args, kwargs, jobstore='default', **options):
        if not self.running:
            self._pending_jobs.append((job, jobstore))
            logger.info('Adding job tentatively -- it will be properly '
                        'scheduled when the scheduler starts')
        else:
            self._real_add_job(job, jobstore, True)
        return job

BTW: I would make most arguments of the Job class optional (except trigger and func, of course).


Scheduling jobs causes all jobs that are still within misfire_grace_time to run again

Originally reported by: Caleb Shay (Bitbucket: Chinstrap, GitHub: Unknown)


Using the following code, and assuming that right now it is 11:30 am:

#!python
import sys
from apscheduler.scheduler import Scheduler

sched = Scheduler()
sched.configure({'misfire_grace_time': 3600})
sched.start()

# Schedule job in past, but inside grace_time
sched.add_cron_job(sys.stdout.write,hour='11',minute='1',second='0',args=['Job 1'])
# Immediately prints 'Job 1'

# Schedule another job in the past, but inside grace_time
sched.add_cron_job(sys.stdout.write,hour='11',minute='2',second='0',args=['Job 2'])
# Immediately prints 'Job 1Job 2'

# Schedule job in the future
sched.add_cron_job(sys.stdout.write,hour='12',minute='0',second='0',args=['Job 3'])
# Immediately prints 'Job 1Job 2'

Apparently, adding a job causes all existing jobs to be reevaluated against grace_time.


date_schedule decorator

Originally reported by: fmafma (Bitbucket: fmafma, GitHub: Unknown)


Hi,

What about adding a date_schedule() decorator? I tried, and it seems to work fine:

#!python

    def date_schedule(self, **options):
        """
        Decorator version of :meth:`add_date_job`.
        This decorator does not wrap its host function.
        Unscheduling decorated functions is possible by passing the ``job``
        attribute of the scheduled function to :meth:`unschedule_job`.
        """
        def inner(func):
            func.job = self.add_date_job(func, **options)
            return func
        return inner

There may be some issues I didn't see (yet)!
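
For illustration, a hedged usage sketch (assuming add_date_job accepts its run date via a 'date' keyword argument, which is what the decorator's **options forwarding relies on):

#!python

from datetime import datetime, timedelta
from apscheduler.scheduler import Scheduler

sched = Scheduler()
sched.start()

@sched.date_schedule(date=datetime.now() + timedelta(minutes=5))
def one_off():
    print "runs once, five minutes from now"

# the decorator leaves the job handle on the function for later unscheduling:
# sched.unschedule_job(one_off.job)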


modify get_jobs, so that a jobstore name could be passed

Originally reported by: Anonymous


In my environment, I use multiple jobstore aliases. I used to use get_jobs to know all the jobs that are scheduled, but I only needed to know which jobs were scheduled on a certain alias.

I modified get_jobs so that a jobstore alias could be passed as well. This solved my issue, and I am unsure if you would like to add this to your repo. But I figured, I will post it and you can deny it if you choose to :D.

#!python
    def get_jobs(self, alias=None):
        """
        Returns a list of all scheduled jobs.

        :return: list of :class:`~apscheduler.job.Job` objects
        """
        self._jobstores_lock.acquire()
        try:
            jobs = []
            if not alias:
                for jobstore in itervalues(self._jobstores):
                    jobs.extend(jobstore.jobs)
            else:
                if alias in self._jobstores:
                    jobs.extend(self._jobstores[alias].jobs)
            return jobs
        finally:
            self._jobstores_lock.release()
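
A usage sketch of the proposed signature (hypothetical, since this is the reporter's patch rather than the released API):

#!python

from apscheduler.scheduler import Scheduler

sched = Scheduler()

# all jobs, unchanged behaviour
all_jobs = sched.get_jobs()

# only the jobs stored under the 'shelve' alias (the new parameter)
shelve_jobs = sched.get_jobs(alias='shelve')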


Add test for misfire_grace_time

Originally reported by: freeplant (Bitbucket: freeplant, GitHub: freeplant)


Two months ago I used APScheduler for scheduling jobs that run on
minute 1 of every hour. However, I found that misfire_grace_time did not
work in version 1.3.1 or in version 2.0, so I fixed the problem
myself in order to use APScheduler.

I looked at the newest version and added a test script. Glad to see
APScheduler passes the test now.

The test script is attached.


High CPU Usage / Hang on System Time Change

Originally reported by: Robert Walter (Bitbucket: ttsthermaltech, GitHub: ttsthermaltech)


We seem to have found a reproducible bug on an embedded ARM project that we are working on, which uses APScheduler for performing interval-based logging operations.

The scheduler works exceedingly well, except under one abnormal condition found during testing.

If during boot the hardware clock resets for some reason (bad battery, etc.), the system time starts at 0 past the epoch on the initial program run, when APScheduler is configured and started with interval-based jobs ranging from 5 to 60 second intervals.

Once the user or the NTP daemon corrects the time, the APScheduler task hangs indefinitely and CPU usage shoots to the moon.

We have reproduced this with as little as a basic Python program with just the scheduler loaded, running a simple 5-second interval job that prints a single line of text. We set the date to an arbitrary time in the past (epoch + 0), start the program, and, using a separate terminal or via NTP, adjust the time to the current date/time. Immediately the program hangs and CPU usage goes up.

I am going to start digging for the reason, but any help would be appreciated.

Thanks !


raise TypeError('func must be callable')

Originally reported by: webus (Bitbucket: webus, GitHub: webus)


I'm using version 2.1.2 of the library.

#!python

def send_mail_job(job_dic):
    mail_theme = job_dic['mail_theme']
    mail_body = job_dic['mail_body']
    mail_from = job_dic['mail_from']
    mail_to = job_dic['mail_to']

sched = Scheduler()
sched.start()
job_dic = {}
job_dic['mail_theme'] = ""
job_dic['mail_body'] = ""
job_dic['mail_from'] = ""
job_dic['mail_to'] = ""
start_date = datetime(
                    task.start_date.year,
                    task.start_date.month,
                    task.start_date.day,
                    task.start_date.hour,
                    task.start_date.minute,
                    task.start_date.second)
job = sched.add_job(patent_send_mail_job, 'simple', [start_date], [job_dic])

What is this mysterious error? I just want to run a job at my datetime value.

#!bash

Traceback (most recent call last):                                                                                                     
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 115, in get_response       
    response = callback(request, *callback_args, **callback_kwargs)                                                                    
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/django/views/generic/base.py", line 68, in view                
    return self.dispatch(request, *args, **kwargs)                                                                                     
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/braces/views/_access.py", line 64, in dispatch                 
    request, *args, **kwargs)                                                                                                          
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/django/views/generic/base.py", line 86, in dispatch            
    return handler(request, *args, **kwargs)                                                                                           
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/django/views/generic/edit.py", line 222, in post               
    return super(BaseUpdateView, self).post(request, *args, **kwargs)                                                                  
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/django/views/generic/edit.py", line 165, in post               
    return self.form_valid(form)                                                                                                       
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/django/views/generic/edit.py", line 127, in form_valid         
    self.object = form.save()                                                                                                          
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/django/forms/models.py", line 370, in save                     
    fail_message, commit, construct=False)                                                                                             
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/django/forms/models.py", line 87, in save_instance             
    instance.save()                                                                                                                    
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/django/db/models/base.py", line 546, in save                   
    force_update=force_update, update_fields=update_fields)                                                                            
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/django/db/models/base.py", line 664, in save_base              
    update_fields=update_fields, raw=raw, using=using)                                                                                 
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/django/dispatch/dispatcher.py", line 170, in send              
    response = receiver(signal=self, sender=sender, **named)                                                                           
  File "/home/wbs/src/work/patent/patent/patent_tasks/tasks.py", line 59, in update_task_list_on_new_task                              │
    job = sched.add_job(patent_send_mail_job, 'simple', [start_date], [job_dic])                                                       
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/apscheduler/scheduler.py", line 284, in add_job               
    options.pop('coalesce', self.coalesce), **options)                                                                                 
  File "/home/wbs/.virtualenvs/patent/local/lib/python2.7/site-packages/apscheduler/job.py", line 47, in __init__ 
    raise TypeError('func must be callable')                                                                                           
TypeError: func must be callable           
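
The TypeError comes from the 2.x add_job signature, which expects a trigger instance as the first argument and the callable as the second; as written, the string 'simple' lands in the func slot. Two sketches of the likely intended call (assuming the callable is meant to be send_mail_job; the trigger import path is assumed for the 2.x series):

#!python

from apscheduler.triggers import SimpleTrigger

# Option 1: pass a trigger instance, then the callable, its args and kwargs
job = sched.add_job(SimpleTrigger(start_date), send_mail_job, [job_dic], None)

# Option 2: use the convenience wrapper for one-off date-based jobs
job = sched.add_date_job(send_mail_job, start_date, args=[job_dic])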

Scheduler.add_cron_job does not handle the "second" argument correctly.

Originally reported by: Kunitaka NIIDATE (Bitbucket: kunix, GitHub: kunix)


The add_cron_job method in the Scheduler class seems to handle the "second" argument as "minute".

For example, the code below should schedule the job at second 56 of every minute. But that code schedules the job at minute 56 of every hour, and the job then executes every second during 21:56. (The code was run at 21:55.)

Python version is 2.6.4 on Mac OS 10.6.3.

#!python
from apscheduler.scheduler import Scheduler
import logging
logging.basicConfig(level=logging.DEBUG,filename="./debug.log",filemode="w")

def job(name=""):
  print "------"+name+"------"

scheduler = Scheduler()
job1 = scheduler.add_cron_job(job, second=56, args=["job1"])
scheduler.start()

debug.log

INFO:apscheduler.scheduler:Added job "job"
DEBUG:apscheduler.scheduler:Next wakeup is due at 2010-05-07 21:56:00 (in 16.967846 seconds)
INFO:apscheduler.scheduler:Scheduler started
DEBUG:apscheduler.scheduler:Executing job "job"
DEBUG:apscheduler.scheduler:Next wakeup is due at 2010-05-07 21:56:01 (in 0.998793 seconds)
DEBUG:apscheduler.scheduler:Executing job "job"
DEBUG:apscheduler.scheduler:Next wakeup is due at 2010-05-07 21:56:02 (in 0.998881 seconds)
:

Custom datetime_func support.

Originally reported by: Anonymous


Add a datetime_func option to the scheduler class and trigger classes to support a custom datetime function. We want to add this powerful module to a game service, so we cannot use the system time as the scheduler's time source. Thank you very much!

Example (in scheduler.py, _main_loop):

#!python
    while not self._stopped:
        logger.debug('Looking for jobs to run')
        # now = datetime.now()
        now = datetime_func()
        next_wakeup_time = self._process_jobs(now)
        ...
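
A sketch of how the public side of such an option might look (purely illustrative; datetime_func is not an existing APScheduler parameter):

#!python

from datetime import datetime, timedelta
from apscheduler.scheduler import Scheduler

# hypothetical game clock running two hours ahead of the wall clock
def game_time():
    return datetime.now() + timedelta(hours=2)

# hypothetical constructor option: the scheduler would call game_time()
# wherever it currently calls datetime.now()
sched = Scheduler(datetime_func=game_time)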

Exception in thread Thread-1

Originally reported by: Jesid Andrey Durango Isaza (Bitbucket: jdurango, GitHub: jdurango)


Hi, I'm getting this exception; here is a little example to reproduce it:

#!python

from apscheduler.scheduler import Scheduler

def showMessage():
    print "Show this message"

sh = Scheduler()
sh.start()
sh.add_interval_job(showMessage, seconds=6, name='showMessage')

The job only executes one time and the output is:

#!python

Show this message
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 505, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/home/ubuntu/projects/env/local/lib/python2.7/site-packages/apscheduler/threadpool.py", line 91, in _run_jobs
    self._threads.remove(currentThread())
KeyError: <_MainThread(MainThread, started 140260121077504)>

Thanks


Persist the scheduled jobs list

Originally reported by: Anonymous


Add a configurable/abstract datastore mechanism to persist scheduled jobs. This would allow APScheduler to be used more easily in services that may be restarted [server reboot, host migration, etc...]. Quartz.NET provides a data store mechanism for persisting jobs.

One issue is that the triggers calculate the interval, etc. for recurring jobs, but they do not maintain the values from which those intervals were derived - which is useful when presenting the list of scheduled jobs to users/clients.


OverflowError on BlockingScheduler on Windows

Originally reported by: David Hait (Bitbucket: dhait, GitHub: dhait)


Simple blocking scheduler on windows (Python 3.3):

#!python
scheduler = BlockingScheduler()
try:
    scheduler.start()
except (KeyboardInterrupt, SystemExit):
    pass

throws exception:

File "testme.py", line 291, in main
scheduler.start()

File "C:\Python33\lib\site-packages\apscheduler-3.0.0.pre1-py3.3.egg\apscheduler\schedulers\blocking.py", line 17, in start
self._main_loop()

File "C:\Python33\lib\site-packages\apscheduler-3.0.0.pre1-py3.3.egg\apscheduler\schedulers\blocking.py", line 26, in _main_loop
self._event.wait(wait_seconds or self.MAX_WAIT_TIME)

File "C:\Python33\lib\threading.py", line 547, in wait
signaled = self._cond.wait(timeout)

File "C:\Python33\lib\threading.py", line 288, in wait
gotit = waiter.acquire(True, timeout)

OverflowError: timeout value is too large


How can APScheduler read existing jobs instead of creating new ones?

Originally reported by: kang cheng (Bitbucket: abudulemusa, GitHub: abudulemusa)


1. This is my code:

#!python
@scheduler.cron_schedule(day_of_week='0-6', hour='13', jobstore=STORE_ALIAS)
def scan_trm():
    ...

scheduler.start()

2. This is the record in my MySQL database (table t_jobs, column values as dumped): 465, 0xsdf, main:test, 0x80025D71012E, 0x80027D71012E, test, 10, 1, 1, 2014-01-13 13:00:00, 0
3. If I run this again, will APScheduler create a new job? I actually need it to pick up the jobs that were already created before.

What should I do?


MySQL use with SQLAlchemyJobStore

Originally reported by: lerovitch (Bitbucket: lerovitch, GitHub: lerovitch)


Hi,

I've tried to connect to a MySQL database to have a persistent job store. The problem is that when the 'apscheduler_jobs' table is created, its name column has a length of 1024 and is part of a primary key or unique constraint. The database responds with the following:
"MySQL #1071 - Specified key was too long; max key length is 1000 bytes"

I suppose that either reducing the size of the name column or not making it unique would be a good solution.

