
bigben's Introduction


NOTICE:

This repository has been archived and is not supported.



NOTICE: SUPPORT FOR THIS PROJECT HAS ENDED

This project was owned and maintained by Walmart. This project has reached its end of life and Walmart no longer supports it.

We will no longer be monitoring the issues for this project or reviewing pull requests. You are free to continue using this project, or forks of it, under the license terms at your own risk. This project is no longer subject to Walmart's bug bounty program or other security monitoring.

Actions you can take

We recommend you take the following action:

  • Review any configuration files used for build automation and make appropriate updates to remove or replace this project
  • Notify other members of your team and/or organization of this change
  • Notify your security team to help you evaluate alternative options

Forking and transition of ownership

For security reasons, Walmart does not transfer ownership of our primary repos on GitHub or other platforms to other individuals/organizations. Further, we do not transfer ownership of packages for public package management systems.

If you would like to fork this package and continue development, you should choose a new name for the project and create your own packages, build automation, etc.

Please review the licensing terms of this project, which continue to be in effect even after decommission.

BigBen

BigBen is a generic, multi-tenant, time-based event scheduler and cron scheduling framework based on Cassandra and Hazelcast

It has the following features:

  • Distributed - BigBen uses a distributed design and can be deployed on tens or hundreds of machines, either DC-local or cross-DC
  • Horizontally scalable - BigBen scales linearly with the number of machines.
  • Fault tolerant - BigBen employs a number of failure protection modes and can withstand arbitrarily prolonged downtimes
  • Performant - BigBen can easily scale to tens of thousands or even millions of event triggers with a very small cluster of machines. It can also manage millions of crons running in a distributed manner
  • Highly Available - As long as a single machine is available in the cluster, BigBen will guarantee the execution of events (albeit with a lower throughput)
  • Extremely consistent - BigBen employs a single-master design (the master itself is highly available, with n-1 masters on standby in an n-machine cluster) to ensure that no two nodes fire the same event or execute the same cron.
  • NoSQL based - BigBen comes with a default Cassandra implementation but can be easily extended to support other NoSQL or even RDBMS data stores
  • Auditable - BigBen keeps track of all the events fired and crons executed, with a configurable retention
  • Portable, cloud friendly - BigBen ships as an application bundled as a war or as an embedded library (jar), and can be deployed on any cloud, on-prem or public

Use cases

BigBen can be used for a variety of time-based workloads, both one-time triggers and repeating crons. Some example use cases are:

  • Delayed execution - E.g. if a job is to be executed 30 mins from now
  • System retries - E.g. if a service A wants to call service B and service B is down at the moment, then service A can schedule an exponential backoff retry strategy with retry intervals of 1 min, 10 mins, 1 hour, 12 hours, and so on.
  • Timeout tickers - E.g. if service A sends a message to service B via Kafka and expects a response in 1 min, then it can schedule a timeout check event to be executed after 1 min
  • Polling services - E.g. if service A wants to poll service B at some frequency, it can schedule a cron to be executed at some specified frequency
  • Notification Engine - BigBen can be used to implement a notification engine with scheduled deliveries, scheduled polls, etc.
  • Workflow state machine - BigBen can be used to implement a distributed workflow with state suspensions, alerts and monitoring of those suspensions.

Architectural Goals

BigBen was designed to achieve the following goals:

  • Uniformly distributed storage model
    • Resilient to hot spotting due to sudden surge in traffic
  • Uniform execution load profile in the cluster
    • Ensure that all nodes have similar load profiles to minimize misfires
  • Linear Horizontal Scaling
  • Lock-free execution
    • Avoid resource contentions
  • Plugin-based architecture to support a variety of databases like Cassandra, Couchbase, Solr Cloud, Redis, RDBMS, etc.
  • Low maintenance, elastic scaling

Design and architecture

See the blog post published on Medium for a full description of the various design elements of BigBen.

Events Inflow

BigBen can receive events in two modes:

  • kafka - inbound and outbound Kafka topics to consume event requests and publish event triggers
  • http - HTTP APIs to send event requests and HTTP APIs to receive event triggers.

It is strongly recommended to use Kafka for better scalability.

Event Inflow diagram

[figure: event inflow]

Request and Response channels can be mixed. For example, the event requests can be sent through HTTP APIs but the event triggers (response) can be received through a Kafka Topic.

Event processing guarantees

BigBen has robust event-processing guarantees to survive various failures. However, event processing is not the same as event acknowledgement. BigBen works in a no-acknowledgement mode (at least for now). Once an event is triggered, it is either published to Kafka or sent through an HTTP API. Once the Kafka producer returns success, or the HTTP API returns a non-500 status code, the event is assumed to be processed and marked as such in the system. However, if for whatever reason the event was not processed and resulted in an error (e.g. the Kafka producer timing out, or the HTTP API returning 503), then the event is retried multiple times as per the strategies discussed below.

Event misfire strategy

Multiple scenarios can prevent BigBen from triggering an event on time. Such scenarios are called misfires. Some of them are:

  • BigBen's internal components are down during event trigger. E.g.

    • BigBen's data store is down and events could not be fetched
    • VMs are down
  • Kafka Producer could not publish due to loss of partitions / brokers or any other reasons

  • HTTP API returned a 500 error code

  • Any other unexpected failure

In any of these cases, the event is first retried in memory using an exponential back-off strategy.

The following parameters control the retry behavior:

  • event.processor.max.retries - how many in-memory retries will be made before declaring the event as error, default is 3
  • event.processor.initial.delay - how long in seconds the system should wait before kicking in the retry, default is 1 second
  • event.processor.backoff.multiplier - the back off multiplier factor, default is 2. E.g. the intervals would be 1 second, 2 seconds, 4 seconds.

If the event is still not processed, then it is marked as ERROR. All events marked ERROR are retried up to a configured limit called events.backlog.check.limit. This value can be an arbitrary amount of time, e.g. 1 day, 1 week, or even 1 year. For example, if the limit is set at 1 week, then any failed events will be retried for 1 week, after which they will be permanently marked as ERROR and ignored. The events.backlog.check.limit can be changed at any time by changing the value in the bigben.yaml file and bouncing the servers.
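For reference, a minimal sketch of how these retry settings might be expressed as overrides. The key names and nesting below mirror the configuration excerpt quoted in the issues further down; verify them, and the unit of backlog.check.limit, against the bundled bigben.yaml before relying on them:

# illustrative sketch: writing the retry settings into the overrides file
# (key names/nesting are assumptions based on the config excerpt in the issues section)
cat > config/overrides.yaml <<'EOF'
processor:
  max.retries: 3           # in-memory retries before the event is marked ERROR
  initial.delay: 1         # seconds before the first retry
  backoff.multiplier: 2    # yields intervals of 1s, 2s, 4s, ...
buckets:
  backlog.check.limit: 60  # how long ERROR events keep being retried (unit depends on deployment)
EOF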

Event bucketing and shard size

BigBen buckets events by minute. However, since it's not known in advance how many events will be scheduled in a given minute, the buckets are further divided into shards of a predefined shard size. The shard size is a design choice that needs to be made before deployment. Currently, it's not possible to change the shard size once defined.

An undersized shard value has minimal performance impact; however, an oversized shard value may keep some machines idling. The default value of 1000 is good enough for most practical purposes as long as the number of events to be scheduled per minute exceeds 1000 x n, where n is the number of machines in the cluster. For example, with a shard size of 1000 and roughly 5,000 events scheduled in a given minute, that bucket splits into 5 shards that can be processed by up to 5 machines in parallel. If far fewer than 1000 events are scheduled per minute, a smaller shard size may be chosen.

Multi shard parallel processing

Each bucket with all its shards is distributed across the cluster for execution with an algorithm that ensures a random and uniform distribution. The following diagram shows the execution flow.
[figure: shard design]

Multi-tenancy

Multiple tenants can use BigBen in parallel. Each tenant can configure how its events will be delivered once triggered: tenant 1 can have its events delivered to Kafka topic t1, whereas tenant 2 can have them delivered via a specific HTTP URL. The usage of tenants will become clearer with the explanation of the BigBen APIs below.

Docker support

BigBen is dockerized and the image (bigben) is available on Docker Hub. The code also contains scripts that start Cassandra, Hazelcast, and the app. To quickly set up the application for local dev testing, follow the steps below (condensed into a shell sketch after the list):

  1. git clone $repo
  2. cd bigben/build/docker
  3. execute ./docker_build.sh
  4. start cassandra container by executing ./cassandra_run.sh
  5. start app by executing ./app_run.sh
  6. To run multiple app nodes, execute export NUM_INSTANCES=3 && ./app_run.sh
  7. wait for application to start on port 8080
  8. verify that curl http://localhost:8080/ping returns 200
  9. Use ./cleanup.sh to stop and remove all BigBen related containers
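The same flow, condensed into a hedged shell sketch ($repo is a placeholder for the actual repository URL):

# local dev run using the scripts listed above
git clone $repo
cd bigben/build/docker
./docker_build.sh                        # build the bigben image
./cassandra_run.sh                       # start the Cassandra container
export NUM_INSTANCES=3 && ./app_run.sh   # start three app nodes (omit the export for a single node)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/ping   # expect 200
./cleanup.sh                             # stop and remove all BigBen related containers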

Non-docker execution

BigBen can be run without Docker as well. The steps are:

  1. git clone $repo
  2. cd bigben/build/exec
  3. execute ./build.sh
  4. execute ./app_run.sh

Env properties

You can set the following environment properties

  1. APP_CONTAINER_NAME (default bigben_app)
  2. SERVER_PORT (default 8080)
  3. HZ_PORT (default 5701)
  4. NUM_INSTANCES (default 1)
  5. LOGS_DIR (default bigben/../bigben_logs)
  6. CASSANDRA_SEED_IPS (default $HOST_IP)
  7. HZ_MEMBER_IPS (default $HOST_IP)
  8. JAVA_OPTS

How to override default config values?

BigBen employs an extensive override system to allow users to override the default properties. The order of priority is: system properties > system env variables > overrides > defaults. The overrides can be defined in the config/overrides.yaml file. The log4j.xml can also be changed to alter logging behavior without recompiling binaries.
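For example, a value defined in config/overrides.yaml can itself be superseded by a system property. A hedged sketch, assuming app_run.sh passes JAVA_OPTS through to the JVM and using a property name documented in the retry section above:

# priority: system properties > env variables > overrides.yaml > defaults
export JAVA_OPTS="-Devent.processor.max.retries=5"   # wins over any overrides.yaml value
./app_run.sh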

How to setup Cassandra for BigBen?

Following are the steps to set up Cassandra (a shell/CQL sketch of steps 3 and 4 follows the list):

  1. git clone the master branch
  2. Set up a Cassandra cluster
  3. create a keyspace bigben in Cassandra cluster with desired replication
  4. Open the file bigben-schema.cql and execute cqlsh -f bigben-schema.cql
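A hedged sketch of steps 3 and 4 (the replication settings are illustrative; pick a strategy and factor appropriate for your cluster):

# create the keyspace (step 3), then load the schema (step 4)
cqlsh -e "CREATE KEYSPACE IF NOT EXISTS bigben WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"
cqlsh -f bigben-schema.cql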

APIs

cluster

GET /events/cluster

  • response sample (a 3-node cluster running on a single machine on three different ports: 5701, 5702, 5703):
{
    "[127.0.0.1]:5702": "Master",
    "[127.0.0.1]:5701": "Slave",
    "[127.0.0.1]:5703": "Slave"
}

The node marked Master is the master node that does the scheduling.

tenant registration

A tenant can be registered by calling the following API:

POST /events/tenant/register

  • payload schema
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "tenant": {
      "type": "string"
    },
    "type": {
      "type": "string"
    },
    "props": {
      "type": "object"
    }
  },
  "required": [
    "tenant",
    "type",
    "props"
  ]
}
  • tenant - specifies a tenant and can be any arbitrary value.

  • type - specifies the type of tenant. One of the three types can be used

    • MESSAGING - specifies that tenant wants events delivered via a messaging queue. Currently, kafka is the only supported messaging system.
    • HTTP - specifies that tenant wants events delivered via an http callback URL.
    • CUSTOM_CLASS - specifies a custom event processor implemented for custom processing of events
  • props - A bag of properties needed for each type of tenant.

  • kafka sample:

{
    "tenant": "TenantA/ProgramB/EnvC",
    "type": "MESSAGING",
    "props": {
        "topic": "some topic name",
        "bootstrap.servers": "node1:9092,node2:9092"
    }
}
  • http sample
{
     "tenant": "TenantB/ProgramB/EnvC",
     "type": "HTTP",
     "props": {
          "url": "http://someurl",
          "headers": {
            "header1": "value1",
            "header2": "value2"
          }
     }
}
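A hedged curl sketch of the registration call, reusing the HTTP sample above (host and port assume a local deployment on 8080):

curl -X POST http://localhost:8080/events/tenant/register \
  -H 'Content-Type: application/json' \
  -d '{
        "tenant": "TenantB/ProgramB/EnvC",
        "type": "HTTP",
        "props": {
          "url": "http://someurl",
          "headers": { "header1": "value1" }
        }
      }'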

fetch all tenants:

GET /events/tenants

event scheduling

POST /events/schedule

Payload - List<EventRequest>

EventRequest schema:

{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "properties": {
    "id": {
      "type": "string"
    },
    "eventTime": {
      "type": "string",
      "description": "An ISO-8601 formatted timestamp e.g. 2018-01-31T04:00.00Z"
    },
    "tenant": {
      "type": "string"
    },
    "payload": {
      "type": "string",
      "description": "an optional event payload, must NOT be null with deliveryOption = PAYLOAD_ONLY"
    },
    "mode": { 
      "type": "string",
      "enum": ["UPSERT", "REMOVE"],
      "default": "UPSERT",
      "description": "Use REMOVE to delete an event, UPSERT to add/update an event"
    },
    "deliveryOption": {
      "type": "string",
      "enum": ["FULL_EVENT", "PAYLOAD_ONLY"],
      "default": "FULL_EVENT",
      "description": "Use FULL_EVENT to have full event delivered via kafka/http, PAYLOAD_ONLY to have only the payload delivered"
    }
  },
  "required": [
    "id",
    "eventTime",
    "tenant"
  ]
}
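A hedged curl sketch of scheduling a single event (the tenant must already be registered; host, port, and field values are illustrative):

curl -X POST http://localhost:8080/events/schedule \
  -H 'Content-Type: application/json' \
  -d '[
        {
          "id": "order-123-timeout",
          "eventTime": "2018-01-31T04:00:00Z",
          "tenant": "TenantB/ProgramB/EnvC",
          "mode": "UPSERT",
          "payload": "an optional payload string"
        }
      ]'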

find an event

GET /events/find?id=?&tenant=?

dry run

POST /events/dryrun?id=?&tenant=?

fires an event without changing its final status
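Hedged curl sketches of the two calls above (host and port assume a local deployment; URL-encode the id and tenant values if they contain special characters):

# look up an event by id and tenant
curl "http://localhost:8080/events/find?id=order-123-timeout&tenant=TenantB%2FProgramB%2FEnvC"

# fire the same event without changing its final status
curl -X POST "http://localhost:8080/events/dryrun?id=order-123-timeout&tenant=TenantB%2FProgramB%2FEnvC"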

cron APIs

coming up...

bigben's People

Contributors

aaabramov, bobdevtools, cwstege, dependabot[bot], g0s0127, mmonto7, mosabua, msaxen1, sandeepmalik, shaunegan, smalik3


bigben's Issues

Request full wiki documentation

With only the readme, it takes a lot of effort for any developer evaluating a scheduler framework to decide if bigben is something we would want to go ahead with.
I am not sure if you have a wiki for internal use cases; if yes, can you please make it available to the open-source community, and if not, would it make sense to start working towards that?

Not able to run through Docker!

Hello,
I am not able to run BigBen with Docker.
docker_build.sh and cassandra_run.sh are working fine. The Cassandra container is getting deployed and the schema is also getting created. The BigBen image is also getting created. But while running the app, the docker image dadarek/wait-for-dependencies times out:
"Service 172.17.0.2:8080 did not start within 300 seconds. Aborting..."

Kindly help me resolve this issue. Thanks!

Best regards,
Radhakrishna

Info on bucket.backlog.check.limit retry mechanism

I am experimenting with a single-node cluster and facing an issue with the retry mechanism.
I am using the HTTP API for request/response. In-memory retries are working perfectly fine, but after that the event is not retried.
The configuration is given below:
processor:
  max.retries: 3
  initial.delay: 10
  backoff.multiplier: 3
  eager.loading: true
tasks:
  max.events.in.memory: 100000
  scheduler.worker.threads: 8
buckets:
  backlog.check.limit: 900
background:
  load.fetch.size: 100
  load.wait.interval.seconds: 15

Logs:
2019-03-08 20:05:37.347 ERROR [New I/O worker #4] ProcessorRegistry:98 - error in processing event by processor after multiple retries, will be retried later if within 'buckets.backlog.check.limit', e-id: c012b8a8-59bb-4a7c-9081-7343d92982d3
java.lang.RuntimeException: accepted
at com.walmartlabs.bigben.processors.ProcessorRegistry$getOrCreate$2$1$$special$$inlined$apply$lambda$1.onCompleted(processors.kt:150)

Ran into a problem when running BigBen locally with Docker

The scripts provided in the project work mostly OK, except when running the app I got:

bigben             | 2019-09-30 22:34:02.882 ERROR [main] BigBen:59 - error in loading modules, system will exit now
bigben             | com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.TransportException: [/127.0.0.1:9042] Cannot connect))
bigben             | 	at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:232)
bigben             | 	at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
bigben             | 	at com.datastax.driver.core.Cluster$Manager.negotiateProtocolVersionAndConnect(Cluster.java:1600)
bigben             | 	at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1518)
bigben             | 	at com.datastax.driver.core.Cluster.init(Cluster.java:159)
bigben             | 	at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:330)
bigben             | 	at com.datastax.driver.core.Cluster.connect(Cluster.java:280)
bigben             | 	at com.walmartlabs.bigben.providers.domain.cassandra.CassandraModule.<clinit>(CassandraModule.kt:63)
bigben             | 	at java.lang.Class.forName0(Native Method)
bigben             | 	at java.lang.Class.forName(Class.java:264)
bigben             | 	at com.walmartlabs.bigben.utils.commons.ModuleRegistry.createModule(modules.kt:75)
bigben             | 	at com.walmartlabs.bigben.utils.commons.ModuleRegistry.loadModules(modules.kt:63)
bigben             | 	at com.walmartlabs.bigben.BigBen$Initializer.<clinit>(BigBen.kt:57)
bigben             | 	at com.walmartlabs.bigben.BigBen.init(BigBen.kt:42)
bigben             | 	at com.walmartlabs.bigben.app.App.<init>(app.kt:64)
bigben             | 	at com.walmartlabs.bigben.app.RunKt.app(run.kt:50)
bigben             | 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
bigben             | 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
bigben             | 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
bigben             | 	at java.lang.reflect.Method.invoke(Method.java:498)
bigben             | 	at kotlin.reflect.jvm.internal.calls.CallerImpl$Method.callMethod(CallerImpl.kt:71)
bigben             | 	at kotlin.reflect.jvm.internal.calls.CallerImpl$Method$Static.call(CallerImpl.kt:80)
bigben             | 	at kotlin.reflect.jvm.internal.KCallableImpl.call(KCallableImpl.kt:106)
bigben             | 	at kotlin.reflect.jvm.internal.KCallableImpl.callDefaultMethod$kotlin_reflect_api(KCallableImpl.kt:152)
bigben             | 	at kotlin.reflect.jvm.internal.KCallableImpl.callBy(KCallableImpl.kt:110)
bigben             | 	at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.callFunctionWithInjection(ApplicationEngineEnvironmentReloading.kt:347)
bigben             | 	at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.executeModuleFunction(ApplicationEngineEnvironmentReloading.kt:297)
bigben             | 	at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.instantiateAndConfigureApplication(ApplicationEngineEnvironmentReloading.kt:273)
bigben             | 	at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.createApplication(ApplicationEngineEnvironmentReloading.kt:126)
bigben             | 	at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.start(ApplicationEngineEnvironmentReloading.kt:245)
bigben             | 	at io.ktor.server.netty.NettyApplicationEngine.start(NettyApplicationEngine.kt:106)
bigben             | 	at io.ktor.server.netty.NettyApplicationEngine.start(NettyApplicationEngine.kt:18)
bigben             | 	at io.ktor.server.engine.ApplicationEngine$DefaultImpls.start$default(ApplicationEngine.kt:46)
bigben             | 	at io.ktor.server.netty.EngineMain.main(EngineMain.kt:17)
bigben             | 	at com.walmartlabs.bigben.app.RunKt.main(run.kt:30)
bigben             | Exception in thread "main" java.lang.reflect.InvocationTargetException
bigben             | 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
bigben             | 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
bigben             | 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
bigben             | 	at java.lang.reflect.Method.invoke(Method.java:498)
bigben             | 	at kotlin.reflect.jvm.internal.calls.CallerImpl$Method.callMethod(CallerImpl.kt:71)
bigben             | 	at kotlin.reflect.jvm.internal.calls.CallerImpl$Method$Static.call(CallerImpl.kt:80)
bigben             | 	at kotlin.reflect.jvm.internal.KCallableImpl.call(KCallableImpl.kt:106)
bigben             | 	at kotlin.reflect.jvm.internal.KCallableImpl.callDefaultMethod$kotlin_reflect_api(KCallableImpl.kt:152)
bigben             | 	at kotlin.reflect.jvm.internal.KCallableImpl.callBy(KCallableImpl.kt:110)
bigben             | 	at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.callFunctionWithInjection(ApplicationEngineEnvironmentReloading.kt:347)
bigben             | 	at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.executeModuleFunction(ApplicationEngineEnvironmentReloading.kt:297)
bigben             | 	at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.instantiateAndConfigureApplication(ApplicationEngineEnvironmentReloading.kt:273)
bigben             | 	at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.createApplication(ApplicationEngineEnvironmentReloading.kt:126)
bigben             | 	at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.start(ApplicationEngineEnvironmentReloading.kt:245)
bigben             | 	at io.ktor.server.netty.NettyApplicationEngine.start(NettyApplicationEngine.kt:106)
bigben             | 	at io.ktor.server.netty.NettyApplicationEngine.start(NettyApplicationEngine.kt:18)
bigben             | 	at io.ktor.server.engine.ApplicationEngine$DefaultImpls.start$default(ApplicationEngine.kt:46)
bigben             | 	at io.ktor.server.netty.EngineMain.main(EngineMain.kt:17)
bigben             | 	at com.walmartlabs.bigben.app.RunKt.main(run.kt:30)
bigben             | Caused by: java.lang.ExceptionInInitializerError
bigben             | 	at com.walmartlabs.bigben.BigBen$Initializer.<clinit>(BigBen.kt:61)
bigben             | 	at com.walmartlabs.bigben.BigBen.init(BigBen.kt:42)
bigben             | 	at com.walmartlabs.bigben.app.App.<init>(app.kt:64)
bigben             | 	at com.walmartlabs.bigben.app.RunKt.app(run.kt:50)
bigben             | 	... 19 more
bigben             | Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.TransportException: [/127.0.0.1:9042] Cannot connect))
bigben             | 	at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:232)
bigben             | 	at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
bigben             | 	at com.datastax.driver.core.Cluster$Manager.negotiateProtocolVersionAndConnect(Cluster.java:1600)
bigben             | 	at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1518)
bigben             | 	at com.datastax.driver.core.Cluster.init(Cluster.java:159)
bigben             | 	at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:330)
bigben             | 	at com.datastax.driver.core.Cluster.connect(Cluster.java:280)
bigben             | 	at com.walmartlabs.bigben.providers.domain.cassandra.CassandraModule.<clinit>(CassandraModule.kt:63)
bigben             | 	at java.lang.Class.forName0(Native Method)
bigben             | 	at java.lang.Class.forName(Class.java:264)
bigben             | 	at com.walmartlabs.bigben.utils.commons.ModuleRegistry.createModule(modules.kt:75)
bigben             | 	at com.walmartlabs.bigben.utils.commons.ModuleRegistry.loadModules(modules.kt:63)
bigben             | 	at com.walmartlabs.bigben.BigBen$Initializer.<clinit>(BigBen.kt:57)
bigben             | 	... 22 more

So I switched to running it without docker. There might be something wrong in the image as it was built many months ago.

What is the future of BigBen?

Hi
BigBen has not been actively worked on for quite some time. What is the reason behind this? What is the future of BigBen?
-Tobias

Docker images for BigBen

It would be awesome to provide docker images, or at least instructions on how to create proper BigBen docker deployments.

Ideally, include a docker-compose.yml example in the readme.

I'm going to try that, and in case of success I will create a proper PR for it.

Module function cannot be found for the fully qualified name [com.walmartlabs.bigben.app.RunKt.logs]

Hello Team,

Has anyone faced this issue while running the app server?
log4j: Adding appender named [console] to category [com.walmartlabs.bigben].
Exception in thread "main" java.lang.ClassNotFoundException: Module function cannot be found for the fully qualified name 'com.walmartlabs.bigben.app.RunKt.logs'
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.executeModuleFunction(ApplicationEngineEnvironmentReloading.kt:367)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.access$executeModuleFunction(ApplicationEngineEnvironmentReloading.kt:33)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1$$special$$inlined$forEach$lambda$1.invoke(ApplicationEngineEnvironmentReloading.kt:287)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1$$special$$inlined$forEach$lambda$1.invoke(ApplicationEngineEnvironmentReloading.kt:33)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.avoidingDoubleStartupFor(ApplicationEngineEnvironmentReloading.kt:320)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.access$avoidingDoubleStartupFor(ApplicationEngineEnvironmentReloading.kt:33)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1.invoke(ApplicationEngineEnvironmentReloading.kt:286)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading$instantiateAndConfigureApplication$1.invoke(ApplicationEngineEnvironmentReloading.kt:33)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.avoidingDoubleStartup(ApplicationEngineEnvironmentReloading.kt:302)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.instantiateAndConfigureApplication(ApplicationEngineEnvironmentReloading.kt:284)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.createApplication(ApplicationEngineEnvironmentReloading.kt:137)
at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.start(ApplicationEngineEnvironmentReloading.kt:257)
at io.ktor.server.netty.NettyApplicationEngine.start(NettyApplicationEngine.kt:126)
at io.ktor.server.netty.EngineMain.main(EngineMain.kt:26)
at com.walmartlabs.bigben.app.RunKt.main(run.kt:30)

Issue with $HOST_IP

I've been trying to get bigben running, but there was an issue with how HOST_IP is initialized.

ifconfig | grep -Eo 'inet (addr:)?([0-9]*\.){3}[0-9]*' | grep -Eo '([0-9]*\.){3}[0-9]*' | grep -v '127.0.0.1'}

which gives me the following output -
172.17.0.1 10.250.1.129

My ifconfig output is -

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:c4:19:ef:ec  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp4s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.250.1.129  netmask 255.255.255.0  broadcast 10.250.1.255
        inet6 fe80::1306:4384:456a:2a3  prefixlen 64  scopeid 0x20<link>
        ether a0:8c:fd:28:8a:bd  txqueuelen 1000  (Ethernet)
        RX packets 22745  bytes 5727392 (5.7 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4943  bytes 615163 (615.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 14597  bytes 1526704 (1.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14597  bytes 1526704 (1.5 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

So basically the HOST_IP gets all the IP addresses except the lo one. In my case, changing it to 127.0.0.1 resulted in success, but I wanted to know if there's any other method?

What if I want to fire an event that is less than 60 seconds from now and the bucket scan has already happened for this event?

We have a special case where we want an event to be fired after now + X seconds (where X is less than 60 seconds).
Going through the code, I figured out that there is a ScheduleScanner which runs every minute to scan the buckets to be processed. So if the scan for this bucket has already happened and my event is added after that scan, the event is not fired, which makes sense as per the current implementation.

What I am suggesting: when adding events, can we have a lookup so that if the event's bucket has already been scanned, the event is still scheduled for processing?

Remove scheduled events?

Hi,

Is it possible to remove scheduled events?

Background of this question: I am looking to use BigBen for scheduling notifications based on (editable) events on calendars that support recurrence.

My idea is to schedule the next occurrence of the calendar event as a scheduled event in BigBen, but I need to take into account the user changing or deleting the calendar event, which means removing the scheduled event in BigBen if the next occurrence has changed.

Payload is null in scheduled events

Problem

When scheduling an event that will result in an event being fired at some point in the future, the payload received by the tenant is null, regardless of what was set during scheduling.

When executing the event to be fired using dryrun, the payload is delivered as expected.

When scheduling an event in the past, which results in an instant firing of the event, the payload is delivered as expected.

When delivering an event in the ERROR state when the target tenant is available, the payload is delivered as expected.

Example request

POST {{host}}/events/schedule

[
  {
    "id": "my-event-id",
    "eventTime": "some time in the future",
    "tenant": "my-tenant",
    "payload": "my-payload",
    "mode": "UPSERT"
  }
]

Results in the following body being delivered

{
  "id": "my-event-id",
  "eventTime": "some time",
  "tenant": "my-tenant",
  "mode": "UPSERT",
  "payload": null,
  "eventId": "some uuid",
  "triggeredAt": "some time",
  "eventStatus": "triggered",
  "error": null
}

In all other scenarios the payload is delivered.

purging: need clarity on bucket removal strategy while purging

My understanding after going through the logs and code:
The lookback range for the scheduler is controlled by buckets.backlog.check.limit (set to 30 for testing).
Once the bucket loader is initialised and the specified number of buckets are loaded,
subsequent tries are made from the cache (fun getProcessableShardsForOrBefore),
which is based on the calculation "buckets.keys.sorted().take(buckets.size - maxBuckets)" (fun ifPurgeNeeded()).
Scenario 1:
In the case where there are no events in a bucket, the bucket will be purged successfully and removed from the map.

Scenario 2:
If BigBen is not able to process a bucket, that bucket will still be there along with new additions,
but it will only be scheduled based on the lookback range.
Consider a case where, for some reason, buckets keep going into the error state:
what will be the maximum number of buckets in the map, and when will a bucket be removed from the map if it is always failing?

Sample logs where the bucket map size went up to 253 entries but could not be purged because of failed buckets (buckets = ConcurrentHashMap<ZonedDateTime, BucketSnapshot>()):
{2019-03-13T03:21Z=BucketSnapshot(id=2019-03-13T03:21Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:30Z=BucketSnapshot(id=2019-03-13T03:30Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:45Z=BucketSnapshot(id=2019-03-13T03:45Z, count=0, processing={}, awaiting={}), 2019-03-13T01:39Z=BucketSnapshot(id=2019-03-13T01:39Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:24Z=BucketSnapshot(id=2019-03-13T02:24Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:24Z=BucketSnapshot(id=2019-03-13T03:24Z, count=0, processing={}, awaiting={}), 2019-03-13T03:27Z=BucketSnapshot(id=2019-03-13T03:27Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:36Z=BucketSnapshot(id=2019-03-13T03:36Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:39Z=BucketSnapshot(id=2019-03-13T03:39Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:48Z=BucketSnapshot(id=2019-03-13T03:48Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:15Z=BucketSnapshot(id=2019-03-13T03:15Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:39Z=BucketSnapshot(id=2019-03-13T02:39Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:18Z=BucketSnapshot(id=2019-03-13T03:18Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:33Z=BucketSnapshot(id=2019-03-13T03:33Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:42Z=BucketSnapshot(id=2019-03-13T03:42Z, count=1, processing={0}, awaiting={}), 2019-03-13T01:41Z=BucketSnapshot(id=2019-03-13T01:41Z, count=1, processing={0}, awaiting={}), 2019-03-13T01:40Z=BucketSnapshot(id=2019-03-13T01:40Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:19Z=BucketSnapshot(id=2019-03-13T03:19Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:28Z=BucketSnapshot(id=2019-03-13T03:28Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:37Z=BucketSnapshot(id=2019-03-13T03:37Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:46Z=BucketSnapshot(id=2019-03-13T03:46Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:49Z=BucketSnapshot(id=2019-03-13T03:49Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:38Z=BucketSnapshot(id=2019-03-13T02:38Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:39Z=BucketSnapshot(id=2019-03-12T22:39Z, count=334, processing={0}, awaiting={}), 2019-03-13T03:50Z=BucketSnapshot(id=2019-03-13T03:50Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:41Z=BucketSnapshot(id=2019-03-13T03:41Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:25Z=BucketSnapshot(id=2019-03-13T02:25Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:23Z=BucketSnapshot(id=2019-03-13T03:23Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:14Z=BucketSnapshot(id=2019-03-13T03:14Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:32Z=BucketSnapshot(id=2019-03-13T03:32Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:34Z=BucketSnapshot(id=2019-03-13T02:34Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:38Z=BucketSnapshot(id=2019-03-12T22:38Z, count=61, processing={0}, awaiting={}), 2019-03-13T03:35Z=BucketSnapshot(id=2019-03-13T03:35Z, count=1, processing={0}, awaiting={}), 2019-03-13T01:44Z=BucketSnapshot(id=2019-03-13T01:44Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:17Z=BucketSnapshot(id=2019-03-13T02:17Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:44Z=BucketSnapshot(id=2019-03-13T03:44Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:20Z=BucketSnapshot(id=2019-03-13T03:20Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:26Z=BucketSnapshot(id=2019-03-13T03:26Z, count=1, 
processing={0}, awaiting={}), 2019-03-13T03:29Z=BucketSnapshot(id=2019-03-13T03:29Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:26Z=BucketSnapshot(id=2019-03-13T02:26Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:35Z=BucketSnapshot(id=2019-03-13T02:35Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:22Z=BucketSnapshot(id=2019-03-13T03:22Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:31Z=BucketSnapshot(id=2019-03-13T03:31Z, count=0, processing={}, awaiting={}), 2019-03-13T03:40Z=BucketSnapshot(id=2019-03-13T03:40Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:44Z=BucketSnapshot(id=2019-03-12T22:44Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:47Z=BucketSnapshot(id=2019-03-13T03:47Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:46Z=BucketSnapshot(id=2019-03-12T22:46Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:38Z=BucketSnapshot(id=2019-03-13T03:38Z, count=0, processing={}, awaiting={}), 2019-03-13T01:43Z=BucketSnapshot(id=2019-03-13T01:43Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:19Z=BucketSnapshot(id=2019-03-13T02:19Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:40Z=BucketSnapshot(id=2019-03-13T02:40Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:16Z=BucketSnapshot(id=2019-03-13T03:16Z, count=1, processing={0}, awaiting={}), 2019-03-13T01:38Z=BucketSnapshot(id=2019-03-13T01:38Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:41Z=BucketSnapshot(id=2019-03-13T02:41Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:45Z=BucketSnapshot(id=2019-03-12T22:45Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:54Z=BucketSnapshot(id=2019-03-13T02:54Z, count=1, processing={0}, awaiting={}), 2019-03-13T00:49Z=BucketSnapshot(id=2019-03-13T00:49Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:25Z=BucketSnapshot(id=2019-03-13T03:25Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:34Z=BucketSnapshot(id=2019-03-13T03:34Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:43Z=BucketSnapshot(id=2019-03-13T03:43Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:41Z=BucketSnapshot(id=2019-03-12T22:41Z, count=213, processing={0}, awaiting={})}

Error when trying to run bigben in a single node, non-docker execution.

Hello,

I'm getting the below error when trying to run non-docker, single node execution. Any pointers to resolve this?

2019-04-22 06:46:28 ERROR [main] BigBen:59 - error in loading modules, system will exit now
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /16.254.3.170:9042 (com.datastax.driver.core.exceptions.TransportException: [/16.254.3.170:9042] Cannot connect))
        at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:232)
        at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
        at com.datastax.driver.core.Cluster$Manager.negotiateProtocolVersionAndConnect(Cluster.java:1600)
        at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1518)
        at com.datastax.driver.core.Cluster.init(Cluster.java:159)
        at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:330)
        at com.datastax.driver.core.Cluster.connect(Cluster.java:280)
        at com.walmartlabs.bigben.providers.domain.cassandra.CassandraModule.<clinit>(CassandraModule.kt:63)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:264)
        at com.walmartlabs.bigben.utils.commons.ModuleRegistry.createModule(modules.kt:75)
        at com.walmartlabs.bigben.utils.commons.ModuleRegistry.loadModules(modules.kt:63)
        at com.walmartlabs.bigben.BigBen$Initializer.<clinit>(BigBen.kt:57)
        at com.walmartlabs.bigben.BigBen.init(BigBen.kt:42)
        at com.walmartlabs.bigben.app.App.<init>(app.kt:64)
        at com.walmartlabs.bigben.app.RunKt.app(run.kt:50)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at kotlin.reflect.jvm.internal.calls.CallerImpl$Method.callMethod(CallerImpl.kt:71)
        at kotlin.reflect.jvm.internal.calls.CallerImpl$Method$Static.call(CallerImpl.kt:80)
        at kotlin.reflect.jvm.internal.KCallableImpl.call(KCallableImpl.kt:106)
        at kotlin.reflect.jvm.internal.KCallableImpl.callDefaultMethod$kotlin_reflect_api(KCallableImpl.kt:152)
        at kotlin.reflect.jvm.internal.KCallableImpl.callBy(KCallableImpl.kt:110)
        at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.callFunctionWithInjection(ApplicationEngineEnvironmentReloading.kt:347)
        at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.executeModuleFunction(ApplicationEngineEnvironmentReloading.kt:297)
        at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.instantiateAndConfigureApplication(ApplicationEngineEnvironmentReloading.kt:273)
        at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.createApplication(ApplicationEngineEnvironmentReloading.kt:126)
        at io.ktor.server.engine.ApplicationEngineEnvironmentReloading.start(ApplicationEngineEnvironmentReloading.kt:245)
        at io.ktor.server.netty.NettyApplicationEngine.start(NettyApplicationEngine.kt:106)
        at io.ktor.server.netty.NettyApplicationEngine.start(NettyApplicationEngine.kt:18)
        at io.ktor.server.engine.ApplicationEngine$DefaultImpls.start$default(ApplicationEngine.kt:46)
        at io.ktor.server.netty.EngineMain.main(EngineMain.kt:17)
        at com.walmartlabs.bigben.app.RunKt.main(run.kt:30)

Thanks!

Inconsistent event status

When an event is received, the status is sent as TRIGGERED. However, when using the find endpoint and retrieving this event, the status is PROCESSED. Is this the expected status only after successful delivery of an event?

Unable to install BigBen:app version: 1.0.7-SNAPSHOT

Hello,

I tried installing BigBen (bigben#non-docker-execution), but the execution is failing due to an access error.

Does it require any configuration setup? Here I have attached the log trace for the reference.

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for BigBen:parent 1.0.7-SNAPSHOT:
[INFO]
[INFO] BigBen:parent ...................................... SUCCESS [ 0.287 s]
[INFO] BigBen:commons ..................................... SUCCESS [ 19.904 s]
[INFO] BigBen:lib ......................................... SUCCESS [ 10.523 s]
[INFO] Bigben:cron ........................................ SUCCESS [ 3.245 s]
[INFO] BigBen:cassandra ................................... SUCCESS [ 3.622 s]
[INFO] Bigben:kafka ....................................... SUCCESS [ 2.409 s]
[INFO] BigBen:app ......................................... FAILURE [ 2.051 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 43.303 s
[INFO] Finished at: 2020-01-20T12:25:17+05:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project bigben-app: Could not resolve dependencies for project com.walmartlabs.bigben:bigben-app:takari-jar:1.0.7-SNAPSHOT: Failed to collect dependencies at io.ktor:ktor-server-core:jar:1.1.1 -> org.jetbrains.kotlinx:atomicfu:jar:0.12.0: Failed to read artifact descriptor for org.jetbrains.kotlinx:atomicfu:jar:0.12.0: Could not transfer artifact org.jetbrains.kotlinx:atomicfu:pom:0.12.0 from/to jcenter (http://jcenter.bintray.com): Access denied to: http://jcenter.bintray.com/org/jetbrains/kotlinx/atomicfu/0.12.0/atomicfu-0.12.0.pom -> [Help 1]
[ERROR]

Request: Documentation on how to run a BigBen cluster in Kubernetes

First of all, thanks to all the maintainers for the hard work. I have a single-node BigBen running in our dev k8s cluster, and so far it has been working reliably for us. We now would like to set up a 3-node BigBen cluster in our production k8s cluster to support real traffic, but there is no documentation about this. So, as the title describes, what should I do to set up a 3-node BigBen cluster in k8s? If you already have this worked out, it would be really nice if you could share it; or else, if you could point me in the right direction, it would save us a bunch of time.

Correct format to configure Cassandra contactPoints?

Greetings,

I have been trying to configure BigBen to leverage multiple Cassandra seeds with the following formats:

CASSANDRA_HOST: 10.xxx.yy.zzz:9042,10.xxx.yy.aaa:9042,10.xxx.yy.bbb
CASSANDRA_HOST: "10.xxx.yy.zzz,10.xxx.yy.aaa,10.xxx.yy.bbb"
CASSANDRA_HOST: 10.xxx.yy.zzz:9042,10.xxx.yy.aaa:9042,10.xxx.yy.bbb (shot in the dark)

Unfortunately, I see the following in the logs:
Wait... cassandra at 10.xxx.yy.zzz,10.xxx.yy.aaa,10.xxx.yy.bbb:9042 ...

Can someone please inform me what the appropriate format is? I cannot seem to deduce the correct format.

Expected performance

Hi,

I was wondering what the expected performance of one bigben machine should be? Right now, we are running a simple performance test using kafka (one tenant with a topic of replication factor 1 and 1 partition). The test is as follows:

  1. Tenant is registered with topic
  2. 3 batches of equal amounts of jobs are scheduled using one producer:
    • first batch immediately
    • second batch after 30 second delay
    • third batch after 60 second delay
  3. One consumer listens on the output topic and acknowledges the message (simple log statement)

When the batch size is 100, this works just fine. However, we've seen hiccups where if the batch size is 200+, sometimes the events aren't properly sent. The immediately scheduled events will always send, but the delayed execution ones seem to only send either 40% or 0%, in the case of the 30 second delay batch. Are we configuring anything incorrectly?

FYI, we are using the default big ben shard sizing.

Bigben Kafka consumer dies whenever topic has >1 partition

Setting the topics under kafka.topics that bigben listens on to a replication factor of 1 and 1 partition works, but any time we try to go beyond that, the bigben consumers error out with the following message:

2019-01-10 01:21:19.931 ERROR [messageProcessor#0] KafkaMessageProcessor:172 - unknown exception, closing the consumer java.lang.IllegalStateException: No current assignment for partition bigben-1 at org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:269) at org.apache.kafka.clients.consumer.internals.SubscriptionState.pause(SubscriptionState.java:404) at org.apache.kafka.clients.consumer.KafkaConsumer.pause(KafkaConsumer.java:1526) at com.walmartlabs.bigben.kafka.KafkaMessageProcessor.init(kafka.kt:140) at com.walmartlabs.bigben.BigBen$3$1$1.run(BigBen.kt:105) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748)

We are using the master branch.

Guide on how to customize services using extensions/plugins

In the documentation, there is a mention that it would be possible to use other data stores via extension points, but there are no details on how this could be accomplished. I found the class that implements the interface for Cassandra, but I'm not sure what would need to be done to do it with an RDBMS like MySQL.

Also, is there an alternative to Hazelcast? We don't have that in our infrastructure, but we do have Zookeeper and Redis (which also offers distributed locks).
