Hi, the connector works great on one machine, but after porting the same connector configuration over to a new machine, I run into a NullPointerException with the following trace:
[2020-07-07 17:37:32,676] INFO Releasing access on FileStateBackingStore instance for group fp-test (remaining = null) (io.streamthoughts.kafka.connect.filepulse.state.StateBackingStoreRegistry:82)
[2020-07-07 17:37:32,676] INFO Stopping instance registered instance FileStateBackingStore for group fp-test (io.streamthoughts.kafka.connect.filepulse.state.StateBackingStoreRegistry:84)
[2020-07-07 17:37:32,676] INFO Closing FileStateBackingStore (io.streamthoughts.kafka.connect.filepulse.storage.KafkaStateBackingStore:112)
[2020-07-07 17:37:32,676] INFO Stopping KafkaBasedLog for topic connect-file-pulse-status (io.streamthoughts.kafka.connect.filepulse.storage.KafkaBasedLog:145)
[2020-07-07 17:37:32,677] ERROR WorkerConnector{id=fp-test} Error while starting connector (org.apache.kafka.connect.runtime.WorkerConnector:119)
java.lang.NullPointerException
at io.streamthoughts.kafka.connect.filepulse.storage.KafkaBasedLog.stop(KafkaBasedLog.java:153)
at io.streamthoughts.kafka.connect.filepulse.storage.KafkaStateBackingStore.stop(KafkaStateBackingStore.java:114)
at io.streamthoughts.kafka.connect.filepulse.state.StateBackingStoreRegistry.release(StateBackingStoreRegistry.java:85)
at io.streamthoughts.kafka.connect.filepulse.source.FilePulseSourceConnector.start(FilePulseSourceConnector.java:128)
at org.apache.kafka.connect.runtime.WorkerConnector.doStart(WorkerConnector.java:111)
at org.apache.kafka.connect.runtime.WorkerConnector.start(WorkerConnector.java:136)
at org.apache.kafka.connect.runtime.WorkerConnector.transitionTo(WorkerConnector.java:196)
at org.apache.kafka.connect.runtime.Worker.startConnector(Worker.java:252)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.startConnector(DistributedHerder.java:1079)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.access$1300(DistributedHerder.java:117)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:1095)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$15.call(DistributedHerder.java:1091)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{
  "config": {
    "connector.class": "io.streamthoughts.kafka.connect.filepulse.source.FilePulseSourceConnector",
    "fs.cleanup.policy.class": "io.streamthoughts.kafka.connect.filepulse.clean.LogCleanupPolicy",
    "fs.scanner.class": "io.streamthoughts.kafka.connect.filepulse.scanner.local.LocalFSDirectoryWalker",
    "fs.scan.directory.path": "/path/to/files/",
    "fs.scan.filters": "io.streamthoughts.kafka.connect.filepulse.scanner.local.filter.RegexFileListFilter,io.streamthoughts.kafka.connect.filepulse.scanner.local.filter.LastModifiedFileListFilter",
    "fs.scan.interval.ms": "1000",
    "fs.recursive.scan.enable": "false",
    "file.filter.regex.pattern": "status-[0-9]*.json$",
    "file.filter.minimum.age.ms": "180000",
    "internal.kafka.reporter.bootstrap.servers": "localhost:9092",
    "internal.kafka.reporter.topic": "topic_1",
    "offset.strategy": "name+hash",
    "task.reader.class": "io.streamthoughts.kafka.connect.filepulse.reader.RowFileInputReader",
    "topic": "topic_2",
    "tasks.max": 1,
    "transforms": "ExtractField",
    "transforms.ExtractField.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
    "transforms.ExtractField.field": "message",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "key.converter.schemas.enable": "false",
    "value.converter.schemas.enable": "false"
  },
  "name": "filepulse-test"
}
Any advice on what might be causing this? Much appreciated.