
logstash-output-dynatrace's Introduction

Logstash Dynatrace output plugin


This project is developed and maintained by Dynatrace R&D.

A Logstash output plugin for sending logs to the Dynatrace Generic log ingest API v2. Please review the documentation for this API before using the plugin.

Installation Prerequisites

  • Logstash 7.6+

Installation Steps

Logstash is typically installed in the /usr/share/logstash directory, and plugins are installed using the /usr/share/logstash/bin/logstash-plugin command. If your Logstash installation directory is different, your logstash-plugin command may be in a different location.

/usr/share/logstash/bin/logstash-plugin install logstash-output-dynatrace

Example Configuration

See below for a detailed explanation of the options used in this example configuration.

output {
  dynatrace {
    id => "dynatrace_output"
    ingest_endpoint_url => "${ACTIVE_GATE_URL}/api/v2/logs/ingest"
    api_key => "${API_KEY}"
  }
}
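
Logstash resolves ${VAR} references in the pipeline configuration from environment variables, so the endpoint URL and API token can be kept out of the configuration file. A minimal sketch, assuming the variable names used in the example above (the configuration file path and token value are illustrative):

export ACTIVE_GATE_URL="https://abc123456.live.dynatrace.com"
export API_KEY="dt0c01.4XLO3..."
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/dynatrace.conf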

Configuration Overview

The Dynatrace output plugin supports the following configuration options, in addition to the common options supported by all output plugins; both are described below.

Dynatrace-Specific Options

Setting              Input type  Required
ingest_endpoint_url  String      Yes
api_key              String      Yes
ssl_verify_none      Boolean     No

Common Options

The following configuration options are supported by all output plugins:

Setting        Input type  Required
codec          Codec       No
enable_metric  Boolean     No
id             String      No

Configuration Detail

ingest_endpoint_url

  • Value type is string
  • Required

This is the full URL of the Generic log ingest API v2 endpoint on your ActiveGate. Example: ingest_endpoint_url => "https://abc123456.live.dynatrace.com/api/v2/logs/ingest"
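
Before configuring the plugin, the endpoint and token can be verified with a direct request to the same API. A hedged sketch, assuming the example endpoint above and a token with the logs.ingest scope (see the Generic log ingest API v2 documentation for the full payload format):

curl -X POST "https://abc123456.live.dynatrace.com/api/v2/logs/ingest" \
  -H "Authorization: Api-Token dt0c01.4XLO3..." \
  -H "Content-Type: application/json; charset=utf-8" \
  -d '[{"content": "logstash connectivity test", "severity": "info"}]'

A 2xx response indicates the record was accepted; any other status points to an endpoint or token problem rather than a plugin problem.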

api_key

  • Value type is string
  • Required

This is the Dynatrace API token which will be used to authenticate log ingest requests. It requires the logs.ingest (Ingest Logs) scope, and it is recommended to limit the token to only this scope. Example: api_key => "dt0c01.4XLO3..."

ssl_verify_none

  • Value type is boolean
  • Optional
  • Default value is false

It is recommended to leave this optional setting at false unless absolutely required. Setting ssl_verify_none to true causes the output plugin to skip certificate verification when sending log ingest requests to TLS-protected HTTPS endpoints. This option may be required if you are using a self-signed certificate, an expired certificate, or a certificate which was generated for a different domain than the one in use.

NOTE: Starting in plugin version 0.5.0, this option has no effect in versions of Logstash older than 8.1.0. If this functionality is required, it is recommended to update Logstash or stay at plugin version 0.4.x or older.
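
For example, a sketch of a configuration for testing against an ActiveGate with a self-signed certificate (not recommended for production use):

output {
  dynatrace {
    ingest_endpoint_url => "${ACTIVE_GATE_URL}/api/v2/logs/ingest"
    api_key => "${API_KEY}"
    ssl_verify_none => true
  }
}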

enable_metric

  • Value type is boolean
  • Default value is true

Disable or enable metric logging for this specific plugin instance. By default we record all the metrics we can, but you can disable metrics collection for a specific plugin.

id

  • Value type is string
  • There is no default value for this setting.

Add a unique ID to the plugin configuration. If no ID is specified, Logstash will generate one. It is strongly recommended to set this ID in your configuration. This is particularly useful when you have two or more plugins of the same type, for example, two dynatrace outputs. Adding a named ID in this case will help in monitoring Logstash when using the monitoring APIs.

output {
  dynatrace {
    id => "my_plugin_id"
  }
}

Troubleshooting issues

When troubleshooting, always reduce the configuration as much as possible. It is recommended to disable all plugins except the Dynatrace output plugin and a simple input plugin like the http input plugin, in order to isolate problems to the Dynatrace output plugin.
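
A minimal isolation pipeline might look like the following sketch (the port number is arbitrary):

input {
  http {
    port => 8080
  }
}
output {
  dynatrace {
    id => "dynatrace_output"
    ingest_endpoint_url => "${ACTIVE_GATE_URL}/api/v2/logs/ingest"
    api_key => "${API_KEY}"
  }
}

Test events can then be injected locally, for example with curl -X POST http://localhost:8080 -d 'hello world'.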

Enable Debug Logs

See https://www.elastic.co/guide/en/logstash/current/logging.html#logging.

You can enable debug logging in one of several ways:

  • Use the --log.level debug command line flag
  • Configure the log4j2.properties file (usually in /etc/logstash)
  • Use the logging API
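
For example, to raise only the Dynatrace output plugin's logger to debug at runtime via the logging API (a sketch, assuming the default Logstash API port 9600):

curl -X PUT 'http://localhost:9600/_node/logging?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"logger.logstash.outputs.dynatrace": "DEBUG"}'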


logstash-output-dynatrace's Issues

Dynatrace gives a Bad Request

[2022-07-28T16:41:03,119][ERROR][logstash.outputs.dynatrace][main][dynatrace_nonprod] Dynatrace returned 400 Bad Request

Currently I have no idea why this happens. There are probably limitations at the API level, but I don't know whether the message is too long or something else is wrong.

How to set dt.entity.host

Hi all,

I need to set the dt.entity.host when configuring the output in the logstash config so I can find the logs later on in the Dynatrace log explorer...

Any Ideas?

rgds Steve

Manticore::SocketException / Broken pipe

We use your plugin for a large volume of logs and have noticed that it is not stable. After some hours, we get errors like the following and the service gets stuck, even though the Logstash status is green. Restarting Logstash corrects the problem.

[2023-09-11T09:16:58,461][ERROR][logstash.outputs.dynatrace][main][logstash_output] Could not fetch URL {:ingest_endpoint_url=>"https://##DYN_TENANT##.live.dynatrace.com/api/v2/logs/ingest", :message=>"Broken pipe", :class=>Manticore::SocketException, :will_retry=>true}

The pipeline looks like this:

    input {
      file {
        path => "/share/logs/**/*.log"
        sincedb_path => "/usr/share/logstash/data/file-sincedb.db"
        start_position => end
        mode => "tail"

        codec => multiline {
          pattern => "^\[[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3} GMT\] "
          negate => true
          what => "previous"
        }
      }
    }
    filter{
      mutate {
        replace => [ "host", "logstash-prod" ]
      }
      grok {
          match => [ "[@metadata][path]" ,"%{GREEDYDATA:log.source}"]
      }
    }
    output {
      dynatrace {
        id => "logstash_output"
        ingest_endpoint_url => "https://xxxxxx.live.dynatrace.com/api/v2/logs/ingest"
        api_key => "xxxxxx"
      }
    }

Expected:
The service should not get stuck and should manage the socket correctly.

Versions:
- Logstash is running on Kubernetes. The Logstash image is docker.elastic.co/logstash/logstash:8.5.1.
- The logstash-output-dynatrace plugin version is 0.5.0. Logstash is running as a StatefulSet.
- The env var LS_JAVA_OPTS is set to "-Xmx11g -Xms11g" and the StatefulSet resources are defined like this:

        resources:
          requests:
            cpu: "5"
            memory: 15Gi
          limits:
            cpu: "5"
            memory: 15Gi

Additional information:
Note that when the Logstash service is stuck with many "Could not fetch URL" errors, a curl request from the running Logstash pod to the same target "https://xxxxxx.live.dynatrace.com/api/v2/logs/ingest" completes without an HTTP error code, and the log record injected with curl can be seen on the Dynatrace dashboard.

Unknown error raised: "OpenSSL::SSL::SSLErrorWaitReadable: read would block"

Describe the bug
Running docker.elastic.co/logstash/logstash-oss:8.8.1 with the logstash-input-http and logstash-output-dynatrace plugins results in Logstash throwing the following error and stopping. Some data is sent to our Dynatrace instance.

Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2023-06-13T17:28:06,517][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2023-06-13T17:28:06,532][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.8.1", "jruby.version"=>"jruby 9.3.10.0 (2.6.8) 2023-02-01 107b2e6697 OpenJDK 64-Bit Server VM 17.0.7+7 on 17.0.7+7 +indy +jit [x86_64-linux]"}
[2023-06-13T17:28:06,537][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2023-06-13T17:28:06,551][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2023-06-13T17:28:06,553][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2023-06-13T17:28:06,750][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"2445bebd-b123-4e67-8940-0aa4175758fc", :path=>"/usr/share/logstash/data/uuid"}
[2023-06-13T17:28:07,340][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-06-13T17:28:07,674][INFO ][org.reflections.Reflections] Reflections took 109 ms to scan 1 urls, producing 132 keys and 464 values
[2023-06-13T17:28:07,939][INFO ][logstash.codecs.json     ] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2023-06-13T17:28:07,960][INFO ][org.logstash.ackedqueue.QueueUpgrade] No PQ version file found, upgrading to PQ v2.
[2023-06-13T17:28:08,027][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2023-06-13T17:28:08,133][INFO ][logstash.outputs.dynatrace][main] Client {:client=>"#<Net::HTTP <snipped>.live.dynatrace.com:443 open=false>"}
[2023-06-13T17:28:08,169][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x7a18fb44@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2023-06-13T17:28:09,232][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.06}
[2023-06-13T17:28:09,244][INFO ][logstash.codecs.json     ][main] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2023-06-13T17:28:09,330][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-06-13T17:28:09,333][INFO ][logstash.inputs.http     ][main][f0db974808adf9a8e2d0c031d75e3a8e944a03568b45634309250e38e174b6dd] Starting http input listener {:address=>"0.0.0.0:8080", :ssl=>"false"}
[2023-06-13T17:28:09,338][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2023-06-13T17:29:45,275][INFO ][logstash.codecs.json     ][main][f0db974808adf9a8e2d0c031d75e3a8e944a03568b45634309250e38e174b6dd] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2023-06-13T17:29:51,879][INFO ][logstash.codecs.json     ][main][f0db974808adf9a8e2d0c031d75e3a8e944a03568b45634309250e38e174b6dd] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2023-06-13T17:29:53,056][INFO ][logstash.codecs.json     ][main][f0db974808adf9a8e2d0c031d75e3a8e944a03568b45634309250e38e174b6dd] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2023-06-13T17:29:53,742][ERROR][logstash.outputs.dynatrace][main][dynatrace_output] Unknown error raised {:error=>"#<OpenSSL::SSL::SSLErrorWaitReadable: read would block>"}
[2023-06-13T17:29:53,745][ERROR][logstash.javapipeline    ][main] Pipeline worker error, the pipeline will be stopped {:pipeline_id=>"main", :error=>"(SSLErrorWaitReadable) read would block", :exception=>Java::OrgJrubyExceptions::StandardError, :backtrace=>[], :thread=>"#<Thread:0x7a18fb44@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 sleep>"}
[2023-06-13T17:29:55,490][ERROR][logstash.outputs.dynatrace][main][dynatrace_output] Unknown error raised {:error=>"#<OpenSSL::SSL::SSLErrorWaitReadable: read would block>"}
[2023-06-13T17:29:55,493][ERROR][logstash.javapipeline    ][main] Pipeline worker error, the pipeline will be stopped {:pipeline_id=>"main", :error=>"(SSLErrorWaitReadable) read would block", :exception=>Java::OrgJrubyExceptions::StandardError, :backtrace=>[], :thread=>"#<Thread:0x7a18fb44@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 sleep>"}
[2023-06-13T17:29:55,508][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2023-06-13T17:29:55,595][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:main}
[2023-06-13T17:29:55,598][INFO ][logstash.runner          ] Logstash shut down.

This happens when running the container in Rancher as a Kubernetes deployment, Kubernetes Version: v1.24.10 +k3s1.

I'm not seeing that behavior when running the same docker image in Docker Desktop or as a Kubernetes deployment in Rancher Desktop on an Intel MacBook Pro.

To Reproduce
See Dockerfile and Kubernetes definitions below.

Expected behavior
Pushing logs to the Dynatrace Classic log endpoint should not result in a crash.


Server (please complete the following information):

  • OS: Ubuntu 20.04.6 LTS
  • Platform: 3-node Rancher cluster

Additional context

# logstash.yml
allow_superuser: false
queue.type: persisted
path.data: /usr/share/logstash/data
path.queue: /usr/share/logstash/data/queue
api.http.host: "0.0.0.0"
# logstash.conf
input {
  http {
    port => 8080
  }
}

output {
  dynatrace {
    id => "dynatrace_output"
    ingest_endpoint_url => "${LOGSTASH_DYNATRACE_INGEST_URL:false}"
    api_key => "${LOGSTASH_DYNATRACE_API_KEY:false}"
  }
}
# Dockerfile
FROM docker.elastic.co/logstash/logstash-oss:8.8.1

RUN rm -f /usr/share/logstash/pipeline/logstash.conf

RUN /usr/share/logstash/bin/logstash-plugin install logstash-input-http
RUN /usr/share/logstash/bin/logstash-plugin install logstash-output-dynatrace

COPY logstash.conf /usr/share/logstash/pipeline/
COPY logstash.yml /usr/share/logstash/config/
# Kubernetes definitions
---
apiVersion: v1
kind: Service
metadata:
  name: logstash
spec:
  type: ClusterIP
  sessionAffinity: None
  ports:
    - port: 80
      name: http
      protocol: TCP
      targetPort: 8080
  selector:
    app: logstash

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash
  namespace: logstash
spec:
  serviceName: logstash
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: logstash
          image: custom-logstash-image:8.8.1
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 2000m
              memory: 2Gi
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
            - name: monitoring
              containerPort: 9600
              protocol: TCP
          startupProbe:
            httpGet:
              port: monitoring
              path: /
            initialDelaySeconds: 30
            periodSeconds: 1
            failureThreshold: 300
            timeoutSeconds: 10
          env:
            - name: LOGSTASH_DYNATRACE_INGEST_URL
              value: "https://<snipped>.live.dynatrace.com/api/v2/logs/ingest"
            - name: LOGSTASH_DYNATRACE_API_KEY
              value: "<api key>"
            - name: TZ
              value: "America/Kentucky/Louisville"

Intermittent Net::OpenTimeout - retry on plugin level or in Logstash?

Is your feature request related to a problem? Please describe.
We receive Net::OpenTimeout error occasionally while pushing logs to Dynatrace SaaS.

[2022-09-08T10:49:47,482][ERROR][logstash.javapipeline    ][logstash-dt] Pipeline worker error, the pipeline will be stopped {:pipeline_id=>"logstash-dt", :error=>"(OpenTimeout) Net::OpenTimeout", 
:exception=>Java::OrgJrubyExceptions::RuntimeError, 
:backtrace=>["E_3a_.logstash.logstash_minus_8_dot_3_dot_2.vendor.jruby.lib.ruby.stdlib.net.protocol.ssl_socket_connect(E:/logstash/logstash-8.3.2/vendor/jruby/lib/ruby/stdlib/net/protocol.rb:41)", 
"E_3a_.logstash.logstash_minus_8_dot_3_dot_2.vendor.jruby.lib.ruby.stdlib.net.http.connect(E:/logstash/logstash-8.3.2/vendor/jruby/lib/ruby/stdlib/net/http.rb:985)", 
"E_3a_.logstash.logstash_minus_8_dot_3_dot_2.vendor.jruby.lib.ruby.stdlib.net.http.do_start(E:/logstash/logstash-8.3.2/vendor/jruby/lib/ruby/stdlib/net/http.rb:924)", 
"E_3a_.logstash.logstash_minus_8_dot_3_dot_2.vendor.jruby.lib.ruby.stdlib.net.http.start(E:/logstash/logstash-8.3.2/vendor/jruby/lib/ruby/stdlib/net/http.rb:913)", 
"E_3a_.logstash.logstash_minus_8_dot_3_dot_2.vendor.jruby.lib.ruby.stdlib.net.http.request(E:/logstash/logstash-8.3.2/vendor/jruby/lib/ruby/stdlib/net/http.rb:1465)", 
"E_3a_.logstash.logstash_minus_8_dot_3_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_dynatrace_minus_0_dot_2_dot_1.lib.logstash.outputs.dynatrace.send(E:/logstash/logstash-8.3.2/vendor/bundle/jruby/2.5.0/gems/logstash-output-dynatrace-0.2.1/lib/logstash/outputs/dynatrace.rb:117)", 
"E_3a_.logstash.logstash_minus_8_dot_3_dot_2.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_output_minus_dynatrace_minus_0_dot_2_dot_1.lib.logstash.outputs.dynatrace.multi_receive(E:/logstash/logstash-8.3.2/vendor/bundle/jruby/2.5.0/gems/logstash-output-dynatrace-0.2.1/lib/logstash/outputs/dynatrace.rb:79)", 
"org.logstash.config.ir.compiler.OutputStrategyExt$AbstractOutputStrategyExt.multi_receive(org/logstash/config/ir/compiler/OutputStrategyExt.java:143)", 
"org.logstash.config.ir.compiler.AbstractOutputDelegatorExt.multi_receive(org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:121)", 
"E_3a_.logstash.logstash_minus_8_dot_3_dot_2.logstash_minus_core.lib.logstash.java_pipeline.start_workers(E:/logstash/logstash-8.3.2/logstash-core/lib/logstash/java_pipeline.rb:300)"], 
:thread=>"#<Thread:0x1babbff4 sleep>"}

Describe the solution you'd like
The possibility of retries in case of a connection timeout. When this issue happens, we simply restart the Logstash service and it resumes normally.

Additional context
I see the Dynatrace plugin is able to handle and retry HTTP errors, but not this one. My question is: can a connection timeout be handled and retried at the plugin level, or must it be done in Logstash as described in this issue? The main reason I'm asking is that the request to give Logstash the ability to restart failed pipelines has been open for 1.5 years now and doesn't have a developer assigned yet.
Thank you.

"Issue with 'dynatrace' Output Plugin Configuration: Unable to configure plugins"

 Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (PluginLoadingError) Couldn't find any output plugin named 'dynatrace'. Are you sure this is correct? Trying to load the dynatrace output plugin resulted in this error: Unable to load the requested plugin named dynatrace of type output. The plugin is not installed."

Environment:

Operating System: Rhel 8.4

Steps to Reproduce:

  1. install logstash
  2. install logstash-output-dynatrace
  3. add configuration file.
  4. start logstash
  5. observe error message in the Logstash Logs

Expected Behavior:

The 'dynatrace' output plugin should be loaded successfully, and Logstash should start without errors.

Actual Behavior:

Logstash fails to start due to an error loading the 'dynatrace' output plugin. The error message indicates that the plugin cannot be found or is not installed correctly.

Additional Information:

Our configuration file looks like this:

input {
  file {
    path => <PATH_TO_LOGS>
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter { }
output {
        #stdout { codec => json }
        dynatrace {
                ingest_endpoint_url => <URL>
                api_key => <API_TOKEN>
                ssl_verify_none => true
        }
}

Actual logs:

Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::JavaLang::IllegalStateException", :message=>"Unable to configure plugins: (PluginLoadingError) Couldn't find any output plugin named 'dynatrace'. Are you sure this is correct? Trying to load the dynatrace output plugin resulted in this error: Unable to load the requested plugin named dynatrace of type output. The plugin is not installed.", :backtrace=>["org.logstash.config.ir.CompiledPipeline.<init>(CompiledPipeline.java:120)", "org.logstash.execution.AbstractPipelineExt.initialize(AbstractPipelineExt.java:186)", "org.logstash.execution.AbstractPipelineExt$INVOKER$i$initialize.call(AbstractPipelineExt$INVOKER$i$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:847)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1319)", "org.jruby.ir.instructions.InstanceSuperInstr.interpret(InstanceSuperInstr.java:139)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:367)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:128)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:115)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:446)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:92)", "org.jruby.RubyClass.newInstance(RubyClass.java:931)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:446)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:92)", "org.jruby.ir.instructions.CallBase.interpret(CallBase.java:548)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:367)", 
"org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.ir.interpreter.InterpreterEngine.interpret(InterpreterEngine.java:88)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.INTERPRET_METHOD(MixedModeIRMethod.java:238)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:225)", "org.jruby.internal.runtime.methods.DynamicMethod.call(DynamicMethod.java:228)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:476)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:293)", "org.jruby.ir.interpreter.InterpreterEngine.processCall(InterpreterEngine.java:328)", "org.jruby.ir.interpreter.StartupInterpreterEngine.interpret(StartupInterpreterEngine.java:66)", "org.jruby.ir.interpreter.Interpreter.INTERPRET_BLOCK(Interpreter.java:116)", "org.jruby.runtime.MixedModeIRBlockBody.commonYieldPath(MixedModeIRBlockBody.java:136)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:66)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.Block.call(Block.java:144)", "org.jruby.RubyProc.call(RubyProc.java:352)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:111)", "java.base/java.lang.Thread.run(Thread.java:840)"]}
  • We have ensured that the 'dynatrace' output plugin is installed correctly on the Logstash instance.
  • We have double-checked the plugin configuration in the Logstash configuration file to ensure it is spelled correctly and includes all required parameters.

Logstash keep restarting after plugin installation

ISSUE TYPE

Bug Report

COMPONENT NAME

logstash-output-dynatrace

OS / ENVIRONMENT

CentOS Linux release 8.4.2105
4.18.0-305.25.1.el8_4.x86_64

CONFIGURATION

SUMMARY

I followed the steps from available documentation and together with Dynatrace support successfully installed the plugin.

[root@ip-xxx centos]# /usr/share/logstash/bin/logstash-plugin list | grep dynatrace
logstash-output-dynatrace

After that, the Logstash service kept restarting in an infinite loop. Here is the log:

Nov 25 23:18:52 ip-xxx.ap-southeast-2.compute.internal systemd[1]: Started logstash.
Nov 25 23:18:52 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: Using bundled JDK: /usr/share/logstash/jdk
Nov 25 23:18:52 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be remove>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: [FATAL] 2021-11-25 23:19:04.504 [main] Logstash - Logstash stopped processing because of an error: (PathError) The >
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: org.jruby.exceptions.StandardError: (PathError) The path /root/logstash-output-dynatrace does not exist.
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_30.lib.bundler.sour>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_30.lib.bundler.sour>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_30.lib.bundler.sour>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_30.lib.bundler.defi>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1820) ~[jruby-complete-9.2.19.0.jar:?]
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_30.lib.bundler.spec>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_30.lib.bundler.defi>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_30.lib.bundler.defi>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_30.lib.bundler.defi>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_30.lib.bundler.defi>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_30.lib.bundler.runt>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.bundler_minus_2_dot_2_dot_30.lib.bundler.setu>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.lib.bootstrap.bundler.setup!(/usr/share/logstash/lib/bootstrap/bundler.rb:79) ~[?:?]
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1839]: at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:89)>
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal systemd[1]: logstash.service: Main process exited, code=exited, status=1/FAILURE
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal systemd[1]: logstash.service: Failed with result 'exit-code'.
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal systemd[1]: logstash.service: Service RestartSec=100ms expired, scheduling restart.
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal systemd[1]: logstash.service: Scheduled restart job, restart counter is at 7.
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal systemd[1]: Stopped logstash.
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal systemd[1]: Started logstash.
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1891]: Using bundled JDK: /usr/share/logstash/jdk
Nov 25 23:19:04 ip-xxx.ap-southeast-2.compute.internal logstash[1891]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be remove>

According to this log:

The path /root/logstash-output-dynatrace does not exist.

But the directory is in place:

[root@ip-xxx centos]# ll -lth /root/logstash-output-dynatrace
total 44K
-rwxrwxrwx. 1 root root 4.6K Nov 12 03:51 Gemfile.lock
-rwxrwxrwx. 1 root root 2.2K Nov 12 03:39 logstash-output-dynatrace.gemspec
drwxrwxrwx. 3 root root   43 Nov 12 03:39 spec
-rwxrwxrwx. 1 root root   83 Nov 12 03:39 CHANGELOG.md
drwxrwxrwx. 2 root root   28 Nov 12 03:39 docs
-rwxrwxrwx. 1 root root  991 Nov 12 03:39 Gemfile
drwxrwxrwx. 3 root root   22 Nov 12 03:39 lib
-rwxrwxrwx. 1 root root  12K Nov 12 03:39 LICENSE
-rwxrwxrwx. 1 root root  712 Nov 12 03:39 Rakefile
-rwxrwxrwx. 1 root root 2.9K Nov 12 03:39 README.md
-rwxrwxrwx. 1 root root    5 Nov 12 03:39 VERSION
[root@ip-xxx centos]# ll -lth /root/
total 24K
-rwxrwxrwx. 1 root root  301 Nov 12 05:09 20-dynatrace-output.conf
drwxrwxrwx. 7 root root 4.0K Nov 12 03:51 logstash-output-dynatrace
-rw-------. 1 root root 6.2K Dec  4  2020 anaconda-ks.cfg
-rw-------. 1 root root 5.9K Dec  4  2020 original-ks.cfg

STEPS TO REPRODUCE

Install the plugin on Centos using provided documentation.

This looks like a bug to me. I tried recursively changing the ownership of the folder to the logstash user, but it didn't help.
