go-crond's People

Contributors

anarcher, berendiwema, christophlehmann, danielkucera, dispensable, igorwwwwwwwwwwwwwwwwwwww, iloveitaly, kumy, lmunch, mblaschke, n-rodriguez, sergeyklay, smlx

go-crond's Issues

Dump all the configuration

Hi Markus,

Is it possible to add an option to dump the currently loaded configuration, like nginx -T?

Output from jobs

The output only shows that the job has been executed, but not the actual output from the job. Is there any way to get the output from the jobs on stdout?

Thanks

Segfault when getting user id?

Running in Docker (OpenShift) with the image openshift/base-centos7, I'm getting the following error:

sh-4.2$ go-crond --allow-unprivileged
go-crond: Starting go-crond version 0.6.0
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x519652]

goroutine 1 [running]:
main.main()
        /Users/mblaschke/Projects/go-crond/main.go:371 +0x332  

[Support] Unable to run cron jobs on nth weekday of the month

Hi, for some reason my cron job that is meant to run on every second Tuesday of the month does not honour the day-of-month range I give it in the expression 0 0 8-14 * 2, which I double-checked here

My setup:
Docker image used: webdevops/go-crond:23.2.0-alpine
CMD arguments: ["--auto", "--verbose"]
My crontabs folder is copied like so: COPY --chmod=755 crontabs /etc/crontabs
There I have a cron job; /etc/crontabs/job_1 contains: 0 0 8-14 * 2 /files/job_1.sh

The Results
docker logs go-crond 2>&1 | grep -i job_1

{"command":"/files/job_1.sh","crontab":"/etc/crontabs/job_1","level":"info","msg":"cronjob added","shell":"sh","spec":"0 0 8-14 * 2","time":"2023-06-21T09:04:22+12:00","user":"root"}
{"command":"/files/job_1.sh","crontab":"/etc/crontabs/job_1","level":"debug","msg":"executing","shell":"sh","spec":"0 0 8-14 * 2","time":"2023-06-27T00:00:00+12:00","user":"root"}
{"command":"/files/job_1.sh","crontab":"/etc/crontabs/job_1","elapsed_s":9306.533054711,"exitCode":0,"level":"info","msg":"finished","result":"success","shell":"sh","spec":"0 0 8-14 * 2","time":"2023-06-27T02:35:06+12:00","user":"root"}
{"command":"/files/job_1.sh","crontab":"/etc/crontabs/job_1","level":"debug","msg":"executing","shell":"sh","spec":"0 0 8-14 * 2","time":"2023-07-04T00:00:06+12:00","user":"root"}
{"command":"/files/job_1.sh","crontab":"/etc/crontabs/job_1","elapsed_s":9303.841112584,"exitCode":0,"level":"info","msg":"finished","result":"success","shell":"sh","spec":"0 0 8-14 * 2","time":"2023-07-04T02:35:10+12:00","user":"root"}

Looks like the 8-14 day-of-month range is ignored, and the job runs every Tuesday. I also double-checked the date and time on the container, and it's correct. Any ideas?
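
For context: in classic cron (and in the cron libraries commonly used by Go daemons), when both the day-of-month and the day-of-week fields are restricted, the entry fires when either of them matches; that would explain the job running on every Tuesday regardless of the 8-14 range. One possible workaround, shown as an untested sketch below, is to restrict only the day-of-month range and test the weekday inside the command itself (the script path is taken from the report above; note that classic cron requires % to be escaped as \% inside a crontab line):

# Sketch: fire on days 8-14 of the month and let the command verify the weekday.
# `date +%u` prints the ISO weekday number, 2 = Tuesday.
0 0 8-14 * * [ "$(date +\%u)" = "2" ] && /files/job_1.sh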

Segfault with simple commands

go-crond crashes randomly with this trace:

# cat crontab 
@every 5s www-data echo HELLO
@every 1s www-data echo HI

# go-crond -v  crontab --no-auto
go-crond: Starting go-crond version 0.6.1
go-crond: Add cron job spec:'@every 5s' usr:www-data cmd:'echo HELLO'
go-crond: Add cron job spec:'@every 1s' usr:www-data cmd:'echo HI'
go-crond: Start runner with 2 jobs
go-crond: exec: spec:'@every 1s' usr:www-data cmd:'echo HI'
go-crond: ok: spec:'@every 1s' usr:www-data cmd:'echo HI' time:1.084355ms
go-crond: exec: spec:'@every 1s' usr:www-data cmd:'echo HI'
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0xe5 pc=0x7eff8c61d4d8]

runtime stack:
runtime.throw(0x60c5cf, 0x2a)
	/usr/local/go/src/runtime/panic.go:596 +0x95
runtime.sigpanic()
	/usr/local/go/src/runtime/signal_unix.go:274 +0x2db

goroutine 20 [syscall, locked to thread]:
runtime.cgocall(0x51e200, 0xc4200f6b00, 0x60c2bd)
	/usr/local/go/src/runtime/cgocall.go:131 +0xe2 fp=0xc4200f6ac0 sp=0xc4200f6a80
os/user._Cfunc_mygetpwnam_r(0x7eff780008c0, 0xc4200ee3f0, 0x7eff780008e0, 0x400, 0xc4200fe030, 0x0)
	os/user/_obj/_cgo_gotypes.go:161 +0x4d fp=0xc4200f6b00 sp=0xc4200f6ac0
os/user.lookupUser.func2.1(0x7eff780008c0, 0xc4200ee3f0, 0x7eff780008e0, 0x400, 0xc4200fe030, 0xc4200f6ba8)
	/usr/local/go/src/os/user/lookup_unix.go:66 +0x17b fp=0xc4200f6b58 sp=0xc4200f6b00
os/user.lookupUser.func2(0x10)
	/usr/local/go/src/os/user/lookup_unix.go:70 +0x51 fp=0xc4200f6b98 sp=0xc4200f6b58
os/user.retryWithBuffer(0xc4200ec3b0, 0xc4200f6cb0, 0xc4200ec3b0, 0xc420014780)
	/usr/local/go/src/os/user/lookup_unix.go:253 +0x2b fp=0xc4200f6c00 sp=0xc4200f6b98
os/user.lookupUser(0xc42000d9aa, 0x8, 0x0, 0x0, 0x0)
	/usr/local/go/src/os/user/lookup_unix.go:71 +0x1ab fp=0xc4200f6ce8 sp=0xc4200f6c00
os/user.Lookup(0xc42000d9aa, 0x8, 0x9, 0xc42000d9aa, 0x8)
	/usr/local/go/src/os/user/lookup.go:15 +0x35 fp=0xc4200f6d20 sp=0xc4200f6ce8
main.(*Runner).AddWithUser.func1(0xc4200f02c0, 0x2)
	/Users/mblaschke/Projects/go-crond/runner.go:61 +0x84 fp=0xc4200f6de0 sp=0xc4200f6d20
main.(*Runner).cmdFunc.func1()
	/Users/mblaschke/Projects/go-crond/runner.go:133 +0x14d fp=0xc4200f6f90 sp=0xc4200f6de0
github.com/robfig/cron.FuncJob.Run(0xc4200a3180)
	/go/src/github.com/robfig/cron/cron.go:92 +0x25 fp=0xc4200f6fa0 sp=0xc4200f6f90
github.com/robfig/cron.(*Cron).runWithRecovery(0xc420016550, 0x8d4ac0, 0xc4200a3180)
	/go/src/github.com/robfig/cron/cron.go:165 +0x57 fp=0xc4200f6fc8 sp=0xc4200f6fa0
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc4200f6fd0 sp=0xc4200f6fc8
created by github.com/robfig/cron.(*Cron).run
	/go/src/github.com/robfig/cron/cron.go:199 +0x69d

goroutine 1 [chan receive]:
main.main()
	/Users/mblaschke/Projects/go-crond/main.go:413 +0x75c

goroutine 17 [syscall, locked to thread]:
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:2197 +0x1

goroutine 5 [syscall]:
os/signal.signal_recv(0x0)
	/usr/local/go/src/runtime/sigqueue.go:116 +0x104
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.1
	/usr/local/go/src/os/signal/signal_unix.go:28 +0x41

goroutine 6 [select, locked to thread]:
runtime.gopark(0x60e498, 0x0, 0x604d46, 0x6, 0x18, 0x2)
	/usr/local/go/src/runtime/proc.go:271 +0x13a
runtime.selectgoImpl(0xc42003af50, 0x0, 0x18)
	/usr/local/go/src/runtime/select.go:423 +0x1364
runtime.selectgo(0xc42003af50)
	/usr/local/go/src/runtime/select.go:238 +0x1c
runtime.ensureSigM.func1()
	/usr/local/go/src/runtime/signal_unix.go:434 +0x2dd
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:2197 +0x1

goroutine 7 [chan receive]:
main.registerRunnerShutdown.func1(0xc420014840, 0xc42000e088)
	/Users/mblaschke/Projects/go-crond/main.go:424 +0x68
created by main.registerRunnerShutdown
	/Users/mblaschke/Projects/go-crond/main.go:430 +0xf0

goroutine 8 [select]:
github.com/robfig/cron.(*Cron).run(0xc420016550)
	/go/src/github.com/robfig/cron/cron.go:191 +0x3f0
created by github.com/robfig/cron.(*Cron).Start
	/go/src/github.com/robfig/cron/cron.go:144 +0x55

# go-crond -v  crontab --no-auto
go-crond: Starting go-crond version 0.6.1
go-crond: Add cron job spec:'@every 5s' usr:www-data cmd:'echo HELLO'
go-crond: Add cron job spec:'@every 1s' usr:www-data cmd:'echo HI'
go-crond: Start runner with 2 jobs
go-crond: exec: spec:'@every 1s' usr:www-data cmd:'echo HI'
go-crond: ok: spec:'@every 1s' usr:www-data cmd:'echo HI' time:1.066013ms
go-crond: exec: spec:'@every 1s' usr:www-data cmd:'echo HI'
go-crond: ok: spec:'@every 1s' usr:www-data cmd:'echo HI' time:1.015143ms
go-crond: exec: spec:'@every 1s' usr:www-data cmd:'echo HI'
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0xe5 pc=0x7ff4221064d8]

runtime stack:
runtime.throw(0x60c5cf, 0x2a)
	/usr/local/go/src/runtime/panic.go:596 +0x95
runtime.sigpanic()
	/usr/local/go/src/runtime/signal_unix.go:274 +0x2db
[…]

checksum file

Could you add a checksum file so that the downloaded artifacts can be verified?
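
For illustration, a minimal verification flow, assuming a SHA-256 checksum file were published next to each release binary (the asset names here are hypothetical, not existing release files):

# Download a release binary plus its (hypothetical) checksum file, then verify.
curl -LO https://github.com/webdevops/go-crond/releases/download/23.2.0/go-crond.linux.amd64
curl -LO https://github.com/webdevops/go-crond/releases/download/23.2.0/go-crond.linux.amd64.sha256
sha256sum -c go-crond.linux.amd64.sha256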

Defunct/zombie processes accumulate under go-crond

Hi,

I made a container version of an app (which I have minimal control over) and implemented the cron part using go-crond, but at runtime I'm observing an accumulation of processes in the "defunct" state:

dev      2530591  0.0  0.3 722320 12440 ?        Sl   mars12   3:26  \_ /home/dev/bin/containerd-shim-runc-v2 -namespace moby -id 5571b9c2af06289520b43579d570dd83a766304c27c9c3019e07f8e2f16ba19f -address /run/user/1001/docker/containerd/containerd.sock                                                                  
dev      2530636  0.0  0.2 1234760 8688 ?        Ssl  mars12   0:40  |   \_ /usr/local/bin/go-crond -v www-data:/var/spool/cron/crontabs/www-data                                                                                                                                                                             
165568   2750043  0.0  0.0      0     0 ?        Z    mars12   0:00  |       \_ [exec65f0de702] <defunct>
165568   2823765  0.0  0.0      0     0 ?        Z    mars13   0:00  |       \_ [exec65f12a9c1] <defunct>
165568   2824042  0.0  0.0      0     0 ?        Z    mars13   0:00  |       \_ [exec65f12ad82] <defunct>
[...]
165568   2897261  0.0  0.0      0     0 ?        Z    04:24   0:00  |       \_ [exec66024e1c2] <defunct> 
165568   2897559  0.0  0.0      0     0 ?        Z    04:25   0:00  |       \_ [exec66024e581] <defunct> 
165568   2897853  0.0  0.0      0     0 ?        Z    04:26   0:00  |       \_ [exec66024e942] <defunct>

I do not observe this behavior when the application is running on a "real" server (e.g. using anacron).

The problem seems to be that the process launched by the crontab (which I have no control over) uses a shell & construct to "fork" itself into the background, and somehow this leaves the process in a zombie state when running under go-crond.

Here is a stripped-down test case that replicates the problem.

  • Create a crontab that performs a shell background execution:
cat <<'EOF' > crontab
*/1 * * * * sh -c 'sleep 2 &'
EOF

chmod 0600 crontab
  • Run the crontab in a container using go-crond:
docker run --rm -it --volume ./crontab:/tmp/crontab webdevops/go-crond:debian -v root:/tmp/crontab
  • While the go-crond container is running, observe that "defunct" processes are created and accumulate over time:
watch "ps auxwww | grep -P '\\bdefunct'"

We see that defunct/zombie processes accumulate over time:

root       71596  0.0  0.0      0     0 pts/0    Z+   11:59   0:00 [sleep] <defunct>
root       71766  0.0  0.0      0     0 pts/0    Z+   12:00   0:00 [sleep] <defunct>
[...]
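
One mitigation worth trying until go-crond reaps these children itself is to run the container with an init process as PID 1; Docker's --init flag injects tini for this. Note that an init process only reaps orphaned grandchildren that get re-parented to PID 1 (like the backgrounded sleep above); zombies that remain direct children of go-crond can only be reaped by go-crond itself. A sketch based on the reproduction above:

# Same reproduction, but with an init process (tini) as PID 1 to reap orphaned children.
docker run --rm -it --init --volume ./crontab:/tmp/crontab webdevops/go-crond:debian -v root:/tmp/crontab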

Update release binaries to fix CVEs

I'm using the current release (23.2.0) and its binaries from GitHub in my own Dockerfile build.

The current release binaries (23.2.0) contain critical and many high/medium CVEs due to using pkg:golang/stdlib@1.19.6.

Could you please update the built binaries to stdlib >=1.20.10 to fix these CVEs?

Docker scout shows:

## Packages and Vulnerabilities

   2C     9H     6M     0L  stdlib 1.19.6
pkg:golang/stdlib@1.19.6

    ✗ CRITICAL CVE-2023-24540
      https://scout.docker.com/v/CVE-2023-24540
      Affected range : <1.19.9  
      Fixed version  : 1.19.9   
    
    ✗ CRITICAL CVE-2023-24538
      https://scout.docker.com/v/CVE-2023-24538
      Affected range : <1.19.8  
      Fixed version  : 1.19.8   
    
    ✗ HIGH CVE-2023-29403
      https://scout.docker.com/v/CVE-2023-29403
      Affected range : <1.19.10  
      Fixed version  : 1.19.10   
    
    ✗ HIGH CVE-2023-45283
      https://scout.docker.com/v/CVE-2023-45283
      Affected range : <1.20.11  
      Fixed version  : 1.20.11   
    
    ✗ HIGH CVE-2023-44487
      https://scout.docker.com/v/CVE-2023-44487
      Affected range : <1.20.10  
      Fixed version  : 1.20.10   
    
    ✗ HIGH CVE-2023-39325
      https://scout.docker.com/v/CVE-2023-39325
      Affected range : <1.20.10  
      Fixed version  : 1.20.10   
    
    ✗ HIGH CVE-2023-24537
      https://scout.docker.com/v/CVE-2023-24537
      Affected range : <1.19.8  
      Fixed version  : 1.19.8   
    
    ✗ HIGH CVE-2023-24536
      https://scout.docker.com/v/CVE-2023-24536
      Affected range : <1.19.8  
      Fixed version  : 1.19.8   
    
    ✗ HIGH CVE-2023-24534
      https://scout.docker.com/v/CVE-2023-24534
      Affected range : <1.19.8  
      Fixed version  : 1.19.8   
    
    ✗ HIGH CVE-2023-29400
      https://scout.docker.com/v/CVE-2023-29400
      Affected range : <1.19.9  
      Fixed version  : 1.19.9   
    
    ✗ HIGH CVE-2023-24539
      https://scout.docker.com/v/CVE-2023-24539
      Affected range : <1.19.9  
      Fixed version  : 1.19.9   
    
    ✗ MEDIUM CVE-2023-29406
      https://scout.docker.com/v/CVE-2023-29406
      Affected range : <1.19.11  
      Fixed version  : 1.19.11   
    
    ✗ MEDIUM CVE-2023-39319
      https://scout.docker.com/v/CVE-2023-39319
      Affected range : <1.20.8  
      Fixed version  : 1.20.8   
    
    ✗ MEDIUM CVE-2023-39318
      https://scout.docker.com/v/CVE-2023-39318
      Affected range : <1.20.8  
      Fixed version  : 1.20.8   
    
    ✗ MEDIUM CVE-2023-45284
      https://scout.docker.com/v/CVE-2023-45284
      Affected range : <1.20.11  
      Fixed version  : 1.20.11   
    
    ✗ MEDIUM CVE-2023-29409
      https://scout.docker.com/v/CVE-2023-29409
      Affected range : <1.19.12  
      Fixed version  : 1.19.12   
    
    ✗ MEDIUM CVE-2023-24532
      https://scout.docker.com/v/CVE-2023-24532
      Affected range : <1.19.7  
      Fixed version  : 1.19.7   
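
Given the scan above, one stop-gap is to rebuild the binary with a patched Go toolchain instead of waiting for new release artifacts; a rough sketch (the repository URL and build command are assumptions, not taken from the project's docs):

# Rebuild go-crond with a current Go toolchain so the fixed stdlib is compiled in.
git clone https://github.com/webdevops/go-crond.git
cd go-crond
go build -o go-crond .   # requires a local Go toolchain >= 1.20.10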
    

Missing signal to jobs when receiving SIGTERM makes clean shutdown impossible

When trying to run cron jobs with go-crond from a container and a new deployment happens, the container receives a SIGTERM to signal the request for termination. This causes go-crond to exit immediately, while the running jobs never receive a signal. The exit of go-crond signals OpenShift that the container shut down, terminating the container and all still-running jobs uncleanly.

My expectation is: when go-crond receives SIGTERM, it should send all running jobs the same SIGTERM and wait for them to exit; only then should go-crond exit. This would let the jobs exit cleanly.
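
Until go-crond forwards signals itself, one experiment is an entrypoint wrapper that runs go-crond in its own process group and, on SIGTERM, signals that whole group before waiting. Whether the job processes actually stay in that group depends on how go-crond spawns them, so treat this purely as a hedged sketch:

#!/bin/sh
# Hypothetical entrypoint: forward SIGTERM to go-crond's process group, then wait for it.
set -m                                  # monitor mode: background jobs get their own process group
go-crond --include=/etc/cron.d &
pid=$!
trap 'kill -s TERM -- -"$pid"' TERM INT # relay the signal to go-crond and (hopefully) its jobs
wait "$pid"
wait "$pid"                             # the first wait returns when the trap fires; wait again for the real exit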

Catch worker output

An option --log-output to catch and log the output of cron jobs would be nice.

root@835e4577c072:/# su -s /bin/bash www-data -c './go-crond-64-linux-dynamic /etc/crontab --allow-unprivileged --no-auto'
go-crond: Starting go-crond version 0.6.1
go-crond: WARNING: go-crond is NOT running as root, disabling user switching
go-crond: add: spec:'* * * * *' usr:www-data cmd:'/bin/echo hello >> /var/log/cron.log'
go-crond: add: spec:'* * * * *' usr:www-data cmd:'date'
go-crond: Start runner with 2 jobs

In this example, the output of the date command should be written to stdout.
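
Until such an option exists, a common workaround is to redirect each job's output to the stdout of PID 1 so it ends up in the container log; the cron.d entries shown in the "Exit status" report further below use the same /proc/1/fd/1 trick. A sketch matching the crontab above:

# Redirect the job's stdout and stderr to the container's log stream (PID 1's stdout).
* * * * * www-data date >> /proc/1/fd/1 2>&1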

OpenShift containers

So I've been attempting to run OpenShift containers based on your OpenCPU image, and I've been running into some issues. I cannot seem to run a container without it failing or crashing.

The issue is that I'm given an error stating:

  • Starting periodic command scheduler cron
    --
      | cron: can't open or create /var/run/crond.pid: Permission denied
      | ...fail!
      | AH00526: Syntax error on line 33 of /etc/apache2/sites-enabled/default-ssl.conf:
      | SSLCertificateKeyFile: file '/etc/ssl/private/ssl-cert-snakeoil.key' does not exist or is empty
      | Action '-DFOREGROUND' failed.
      | The Apache error log may have more information.

Even something as simple as just a FROM opencpu/base seems to produce this error. While this is probably an issue on my end, because I'm not particularly familiar with OpenShift, do you have any suggestions on how to circumvent this problem?

Single files (/etc/crontab) are ignored

A check unfortunately only works with directories, leading to a rather irritating error message:

$ strace -f -e file /opt/go-crond-64-linux --allow-unprivileged --auto
INFO[0000] starting go-crond version 20.7.0 (09eeb85; go1.14.4)  
[pid   377] openat(AT_FDCWD, "/etc/passwd", O_RDONLY|O_CLOEXEC) = 3
WARN[0000] go-crond is NOT running as root, disabling user switching 
[pid   377] newfstatat(AT_FDCWD, ".", {st_mode=S_IFDIR|0777, st_size=42, ...}, 0) = 0
[pid   377] newfstatat(AT_FDCWD, "/var/www/nextcloud", {st_mode=S_IFDIR|0777, st_size=42, ...}, 0) = 0
INFO[0000] starting http server on                      
[pid   377] chdir("/var/www/nextcloud") = 0
[pid   377] newfstatat(AT_FDCWD, "/etc/alpine-release", 0xc0001a1a38, 0) = -1 ENOENT (No such file or directory)
[pid   377] newfstatat(AT_FDCWD, "/etc/redhat-release", 0xc0001a1b08, 0) = -1 ENOENT (No such file or directory)
[pid   377] newfstatat(AT_FDCWD, "/etc/SuSE-release", 0xc0001a1bd8, 0) = -1 ENOENT (No such file or directory)
[pid   377] newfstatat(AT_FDCWD, "/etc/debian_version", {st_mode=S_IFREG|0644, st_size=5, ...}, 0) = 0
INFO[0000]  --> detected Debian family, using distribution defaults 
[pid   377] newfstatat(AT_FDCWD, "/etc/crontab", {st_mode=S_IFREG|0644, st_size=589, ...}, 0) = 0
[pid   377] newfstatat(AT_FDCWD, "/etc/crontab", {st_mode=S_IFREG|0644, st_size=589, ...}, 0) = 0
INFO[0000] path /etc/crontab does not exists            
[pid   377] newfstatat(AT_FDCWD, "/etc/cron.d", 0xc0001a1f18, 0) = -1 ENOENT (No such file or directory)
[pid   376] chdir("/")                  = 0
INFO[0000] start runner with 0 jobs                     
^Cstrace: Process 373 detached
strace: Process 374 detached
strace: Process 375 detached
strace: Process 376 detached
strace: Process 377 detached
strace: Process 378 detached
strace: Process 379 detached
INFO[0003] got signal: interrupt                        
INFO[0003] stop runner                                  
INFO[0003] terminated                            

As you can see from strace, /etc/crontab does exist but is logged as "does not exist". Putting /etc/crontab into a dedicated folder and running with --include <folder> works. The linked line in the code seems to be the culprit, as it only lets directories pass.

//edit:

Running the file directly, like /opt/go-crond-64-linux /etc/crontab, works.
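
The two workarounds described above, spelled out as commands (the directory name /etc/cron.single is made up for the example; the binary path and flags follow the report):

# Workaround 1: move the single crontab into a dedicated directory and include that directory.
mkdir -p /etc/cron.single && cp /etc/crontab /etc/cron.single/
/opt/go-crond-64-linux --allow-unprivileged --include=/etc/cron.single
# Workaround 2: pass the crontab file directly as a positional argument.
/opt/go-crond-64-linux --allow-unprivileged /etc/crontab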

Exit status

When the container/go-crond is stopped, the exit status is 1:

-----> Setup log archives directories
=====> Executing command: go-crond --allow-unprivileged --include=/etc/cron.d --log.json --server.bind=0.0.0.0:8080
{"level":"info","msg":"starting go-crond version 23.2.0 (868c01d; go1.19.6) ","time":"2023-04-07T02:58:32+02:00"}
{"level":"info","msg":"{\"ShowVersion\":false,\"ShowOnlyVersion\":false,\"ShowHelp\":false,\"Cron\":{\"DefaultUser\":\"root\",\"IncludeCronD\":[\"/etc/cron.d\"],\"Auto\":false,\"RunParts\":null,\"RunParts1m\":null,\"RunParts15m\":null,\"RunPartsHourly\":null,\"RunPartsDaily\":null,\"RunPartsWeekly\":null,\"RunPartsMonthly\":null,\"AllowUnprivileged\":true,\"WorkDir\":\"/\",\"EnableUserSwitching\":false},\"Log\":{\"Verbose\":false,\"Json\":true},\"Server\":{\"Bind\":\"0.0.0.0:8080\",\"ReadTimeout\":5000000000,\"WriteTimeout\":10000000000,\"Metrics\":false},\"Args\":{\"Crontabs\":null}}","time":"2023-04-07T02:58:32+02:00"}
{"level":"warning","msg":"go-crond is NOT running as root, disabling user switching","time":"2023-04-07T02:58:32+02:00"}
{"level":"info","msg":"starting http server on 0.0.0.0:8080","time":"2023-04-07T02:58:32+02:00"}
{"command":"/app/bin/concerto-cli postgres reindex --env_file /app/.env --version_file /app/VERSION \u003e /proc/1/fd/1 2\u003e\u00261","crontab":"/etc/cron.d/concerto","level":"info","msg":"cronjob added","shell":"/bin/bash","spec":"00 3 * * 0","time":"2023-04-07T02:58:32+02:00","user":"nonroot"}
{"command":"/app/bin/concerto-cli postgres vacuum --env_file /app/.env --version_file /app/VERSION --full \u003e /proc/1/fd/1 2\u003e\u00261","crontab":"/etc/cron.d/concerto","level":"info","msg":"cronjob added","shell":"/bin/bash","spec":"45 3 * * 0","time":"2023-04-07T02:58:32+02:00","user":"nonroot"}
{"command":"/app/bin/concerto-cli postgres vacuum --env_file /app/.env --version_file /app/VERSION \u003e /proc/1/fd/1 2\u003e\u00261","crontab":"/etc/cron.d/concerto","level":"info","msg":"cronjob added","shell":"/bin/bash","spec":"45 3 * * 1-6","time":"2023-04-07T02:58:32+02:00","user":"nonroot"}
{"command":"/app/bin/concerto-cli postgres backup --env_file /app/.env --version_file /app/VERSION \u003e /proc/1/fd/1 2\u003e\u00261","crontab":"/etc/cron.d/concerto","level":"info","msg":"cronjob added","shell":"/bin/bash","spec":"30 5 * * *","time":"2023-04-07T02:58:32+02:00","user":"nonroot"}
{"command":"/app/bin/concerto-cli logrotate start --env_file /app/.env --verbose \u003e /proc/1/fd/1 2\u003e\u00261","crontab":"/etc/cron.d/concerto","level":"info","msg":"cronjob added","shell":"/bin/bash","spec":"00 0 * * *","time":"2023-04-07T02:58:32+02:00","user":"nonroot"}
{"level":"info","msg":"start runner with 5 jobs\n","time":"2023-04-07T02:58:32+02:00"}
{"level":"info","msg":"got signal: terminated","time":"2023-04-07T03:12:28+02:00"}
{"level":"info","msg":"stop runner","time":"2023-04-07T03:12:28+02:00"}
{"level":"info","msg":"terminated","time":"2023-04-07T03:12:28+02:00"}
[INFO  tini (1)] Spawned child process 'docker-entrypoint' with pid '7'
[DEBUG tini (1)] Passing signal: 'Terminated'
[DEBUG tini (1)] Received SIGCHLD
[DEBUG tini (1)] Reaped child with pid: '7'
[INFO  tini (1)] Main child exited normally (with status '1')

IMHO it should be 0 or 143 (128 + SIGTERM's signal number 15): https://komodor.com/learn/exit-codes-in-containers-and-kubernetes-the-complete-guide/

Thank you!

Note: I use exec in docker-entrypoint.

nonroot@4e6939d94f43:/app$ ps faux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
nonroot     2478  1.0  0.0   4164  3316 pts/0    Ss   03:43   0:00 bash
nonroot     2485  0.0  0.0   6760  2916 pts/0    R+   03:43   0:00  \_ ps faux
nonroot        1  0.0  0.0   2380   568 ?        Ss   02:32   0:00 tini -w -- docker-entrypoint go-crond --allow-unprivileged --include=/etc/cron.d --log.json --server.bind=0.0.0.0:8080
nonroot        7  0.0  0.0 720164 16072 ?        Sl   02:32   0:00 go-crond --allow-unprivileged --include=/etc/cron.d --log.json --server.bind=0.0.0.0:8080

Metrics http service always runs

The http server is always started even when metrics are not enabled.

This behaviour started with the latest 20.6.0 release.

panic: runtime error

I run

go-crond \
    --include=/usr/local/etc/cron.d \
    --run-parts-hourly=/usr/local/etc/cron.hourly \
    --run-parts-weekly=/usr/local/etc/cron.weekly \
    --run-parts-daily=/usr/local/etc/cron.daily \
    --run-parts-monthly=/usr/local/etc/cron.monthly \
    --run-parts=1m:application:/usr/local/etc/cron.minute \
    --run-parts=15m:admin:/usr/local/etc/cron.15min

and I get:

go-crond: Starting go-crond version 0.6.0
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x519652]

goroutine 1 [running]:
main.main()
	/Users/mblaschke/Projects/go-crond/main.go:371 +0x332
