
logstash-output-syslog's Issues

'ssl-tcp' protocol enforces the use of ssl_cert with no error handling

This project has the same issue as logstash-plugins/logstash-output-tcp#22.

config:

input {
    stdin { }
}

output {
    syslog {
        host => "localhost"
        port => 9000
        protocol => "ssl-tcp"
        ssl_cacert => "./ca_cert.pem"
    }
}

output:

[2017-11-01T15:14:25,029][ERROR][logstash.agent           ] Pipeline aborted due to error {:exception=>#<TypeError: can't convert nil into String>, :backtrace=>["org/jruby/RubyIO.java:3804:in `read'", "org/jruby/RubyIO.java:3987:in `read'", "/usr/local/Cellar/logstash/5.6.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.3/lib/logstash/outputs/syslog.rb:229:in `setup_ssl'", "/usr/local/Cellar/logstash/5.6.2/libexec/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-3.0.3/lib/logstash/outputs/syslog.rb:132:in `register'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:17:in `register'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/output_delegator.rb:43:in `register'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/pipeline.rb:290:in `register_plugin'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/pipeline.rb:310:in `start_workers'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/pipeline.rb:235:in `run'", "/usr/local/Cellar/logstash/5.6.2/libexec/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}

I think the fix from logstash-plugins/logstash-output-tcp#31 can be ported to this project to solve this issue. Thank you.
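For illustration, here is a minimal sketch of the kind of guard that would help, assuming setup_ssl reads the certificate with File.read (which the backtrace above suggests); this is not the plugin's actual code, and key handling is simplified:

require "openssl"

# Hypothetical register-time guard: fail with a clear configuration error
# instead of letting File.read raise TypeError on a nil ssl_cert path.
def setup_ssl
  if @ssl_cert.nil?
    raise LogStash::ConfigurationError,
          "protocol => 'ssl-tcp' requires ssl_cert (and usually ssl_key) to be set"
  end
  ctx = OpenSSL::SSL::SSLContext.new
  ctx.cert = OpenSSL::X509::Certificate.new(File.read(@ssl_cert))
  ctx.key = OpenSSL::PKey::RSA.new(File.read(@ssl_key))
  ctx
end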

Please roll out a new patch version of this plugin upstream

The current upstream version 3.0.5 has the following code in the register function:

if @codec.instance_of? LogStash::Codecs::Plain
end

This condition always evaluates to false, so the @format of the LogStash::Codecs::Plain codec is never defined, making the plugin's message config option useless: it always stays at its default value of %{message}.

Further, since no @format is defined for LogStash::Codecs::Plain, the codec's encode function always falls back to event.to_s, so the actual syslog payload becomes %{timestamp} %{host} %{message}. The %{timestamp} %{host} part is always prepended, which breaks the desired syslog payload format.
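For reference, a hedged sketch of a delegator-aware check (inner_codec is a hypothetical accessor standing in for however the wrapping class exposes the wrapped codec; this is not the plugin's actual code):

# Unwrap a possible codec delegator before the class check, so the plain
# codec can be rebuilt with the plugin's `message` option as its format.
codec = @codec.respond_to?(:inner_codec) ? @codec.inner_codec : @codec
if codec.instance_of?(LogStash::Codecs::Plain)
  @codec = LogStash::Codecs::Plain.new("format" => @message)
end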

I'm using this plugin to send high-volume logs to our other critical systems, which use the messages received via syslog to identify various kinds of patterns in the logs. It would make our work easier and benefit everyone if the current changes were pushed upstream in a new patch version. Thanks.

Use metadata to set configuration properties

Will it be possible to use metadata fields to configure the plugin?

For instance being able to do the following:

syslog {
  port => 514
  host => '10.11.12.13'
  appname => "%{type}"
  message => "%{[@metadata][message]}"
}

I believe the Elasticsearch output plugin allows this syntax.

Enforce new connections like rsyslog does with the rebindinterval option

Rsyslog provides a rebindinterval option for several output plugins such as omelasticsearch, omrelp, and omfwd. This is very helpful when sending messages to a Kubernetes ClusterIP service, because the service uses static iptables rules for load balancing, which route all traffic from a single source (the Logstash output) to the same pod backing that service.

With the rebindinterval option, a new connection (with a new source port) is established every x messages. This ensures proper load balancing across all pods backing the ClusterIP service even with static iptables rules.

Something similar would also be beneficial for this output plugin.
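For illustration, a self-contained Ruby sketch of the idea (not the plugin's code; the option name rebind_interval and the class are assumptions):

require "socket"

# Close and reopen the TCP socket every `rebind_interval` messages so the
# new source port forces iptables-based load balancing to pick a backend again.
class RebindingSender
  def initialize(host, port, rebind_interval)
    @host, @port, @rebind_interval = host, port, rebind_interval
    @socket = nil
    @sent = 0
  end

  def send_line(line)
    @socket ||= TCPSocket.new(@host, @port)
    @socket.write(line + "\n")
    @sent += 1
    if @rebind_interval > 0 && @sent >= @rebind_interval
      @socket.close
      @socket = nil   # the next send reconnects from a fresh source port
      @sent = 0
    end
  end
end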

FYI: @richardgilm

timestamp from extra defined field

It should be possible to define a field from which to read the timestamp used in the syslog message, as is already possible for all other fields of a syslog message.
e.g.

syslog {
  host => "syslog-server"
  port => 514
  protocol => "udp"
  timestamp => "syslog-date"
}

This would help with some of the problems that arise when a syslog server cannot interpret timezones.

"message" configuration parameter ignored in Logstash 7.2.0 and up

  • Version: 7.2.0 and up
  • Operating System: Reproduced on RHEL 7.7 and Windows 10
  • Config File:
input {
  stdin { }
}
output {
  syslog {
    host => "127.0.0.1"
    port => "514"
    sourcehost => "test"
    message => "dummy"
  }
}
  • Steps to Reproduce:

Expected behaviour: The output will replace the default %{message} field with the text "dummy" and send it to the syslog server running on localhost.

Output on Logstash 7.1.1:
Feb 3 14:21:50 test LOGSTASH[-]: dummy

Output on Logstash 7.2.0 and up:
Feb 3 14:23:48 test LOGSTASH[-]: 2020-02-03T14:23:48.725Z hostname logmessage

Also validated using tcpdump:

# tcpdump -nnAs0 -i lo port 52467

listening on lo, link-type EN10MB (Ethernet), capture size 262144 bytes

02:41:27.609504 IP 127.0.0.1.52467 > 127.0.0.1.514: SYSLOG user.notice, length: 82

[raw IP/UDP header bytes omitted] <13>Feb 05 01:41:27 test LOGSTASH[-]: 2020-02-05T01:41:27.504Z hostname logmessage

I'm guessing the change in codec.encode() might be the culprit?
elastic/logstash#10620
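A possible workaround, assuming the plain codec's own format option is still honored by encode (an untested sketch): set the template on the codec directly instead of via message:

output {
  syslog {
    host => "127.0.0.1"
    port => "514"
    sourcehost => "test"
    codec => plain { format => "dummy" }
  }
}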

no implicit conversion of nil into String

When I try to run Logstash with the following config:

output {
  if "security" in [message] {
    syslog {
      host => "something.com"
      port => 5514
      protocol => "ssl-tcp"
      ssl_cacert => "/usr/share/logstash/cert.pem"
    }
  }
}

I get the following error:

[2020-07-03T16:10:59,570][ERROR][logstash.pipeline        ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::OutputDelegator:0x39163514>", :error=>"no implicit conversion of nil into String", :thread=>"#<Thread:0xd02579a run>"}
[2020-07-03T16:10:59,575][ERROR][logstash.pipeline        ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<TypeError: no implicit conversion of nil into String>, :backtrace=>["org/jruby/RubyIO.java:3770:in `read'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-syslog-3.0.5/lib/logstash/outputs/syslog.rb:230:in `setup_ssl'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-syslog-3.0.5/lib/logstash/outputs/syslog.rb:132:in `register'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:106:in `register'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:48:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:259:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:270:in `block in register_plugins'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:270:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:611:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:280:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:217:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:176:in `block in start'"], :thread=>"#<Thread:0xd02579a run>"}
[2020-07-03T16:10:59,600][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2020-07-03T16:10:59,961][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-07-03T16:11:04,861][INFO ][logstash.runner          ] Logstash shut down.

I tried the same config with the tcp output plugin and it works. (This looks like the same problem as the 'ssl-tcp' issue above: the protocol enforces ssl_cert, and the missing value crashes setup_ssl.)

the value of "priority" parameter is ignored

When testing the syslog output plugin of logstash-7.5.1, I noticed that setting the "priority" parameter has no effect. According to RFC3164, the priority value is a number calculated as 8 * facility + severity.
However, when setting priority to various values such as 30 ("daemon" facility, "info" severity) or 132 ("local0" facility, "warning" severity), the messages are still generated with the default priority of 13 ("user" facility, "notice" severity). I have tried to define the priority in various ways, including
priority => 30
priority => "30"
priority => "<30>"
but without any effect. Please fix this bug and document the expected values for the "priority" parameter.
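For reference, the RFC3164 arithmetic the values above follow (a worked example, not plugin code):

# PRI = 8 * facility_code + severity_code (RFC3164)
FACILITY = { "user" => 1, "daemon" => 3, "local0" => 16 }
SEVERITY = { "warning" => 4, "notice" => 5, "info" => 6 }

8 * FACILITY["daemon"] + SEVERITY["info"]    # => 30
8 * FACILITY["local0"] + SEVERITY["warning"] # => 132
8 * FACILITY["user"]   + SEVERITY["notice"]  # => 13 (the default seen above)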

"procid" between parentheses "()"

Hello everyone,

I would like to resend the syslog messages exactly as the log source sends them to me.

one line like this:
<%{PRI}> %{TIMESTAMP} %{LOGSOURCENAME} (%{procid}): %{message}
Logstash resends it like:
%{TIMESTAMP} %{HOSTIP} %{LOGSOURCENAME}[%{procid}]: %{message}

I need a space after %{LOGSOURCENAME} and parentheses instead of brackets around the procid, to match the original.

Could I redefine how this syslog output builds the syslog header?

Thank you so much.

RFC5424 output seems to format zone offset incorrectly

(This issue was originally filed by @intjonathan at https://github.com/elastic/logstash-contrib/issues/76)


When sending syslog output in rfc5424 mode, the output looks like this:

<141>1 2014-06-17T17:55:25.000+0000 twitter LOGSTASH - - - I'm hiring! Vice President, Residential Practice  at PECI - Portland, Oregon Area #jobs http://t.co/mS2qJaPwPN

The TZ offset in the timestamp shows +0000. The RFC specifies this should be Z (or +00:00) instead; it does not permit numeric time offsets that lack a : separator. This leads to parse errors on downstream systems, which is no fun.

The code path starts in the syslog output, which delegates an sprintf of the time format to event#sprintf. I'm guessing the .SSSZ element in the format string could be changed to something that produces compliant output.
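Assuming Logstash's %{+...} date patterns follow Joda-Time conventions (worth verifying), doubling the Z would produce the colon-separated offset RFC5424 requires:

%{+yyyy-MM-dd'T'HH:mm:ss.SSSZ}   # => 2014-06-17T17:55:25.000+0000  (non-compliant)
%{+yyyy-MM-dd'T'HH:mm:ss.SSSZZ}  # => 2014-06-17T17:55:25.000+00:00 (RFC5424-compliant)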

Codec support

Hi,

it would be great if one could use the syslog output like this:

syslog {
  [...]
  codec => my_codec
}

That would be helpful in order to transform the event to, for example, CEF (Common Event Format) and transmit it to a server via syslog.

The documentation (https://www.elastic.co/guide/en/logstash/current/plugins-outputs-syslog.html) states that "codec" is an available configuration option, but for me it didn't work.
I guess "@codec.encode(event)" and a codec callback are missing.
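For illustration, the usual output-plugin codec wiring looks roughly like this (a sketch assuming publish is the plugin's existing send path; not necessarily the plugin's actual code):

def register
  @codec.on_event do |event, payload|
    publish(event, payload)  # hand the encoded payload to the send path
  end
end

def receive(event)
  @codec.encode(event)       # triggers the on_event callback above
end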

Support DSCP Configuration Socket Option in Logstash Syslog output Plugin


Support is needed for sending syslog output with a DSCP value set on the UDP socket. DSCP is used for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks.

The request is to expose the socket option in the syslog output plugin so the DSCP value can be configured.

IPv4: setsockopt(sock, IPPROTO_IP, IP_TOS, &opt, sizeof(opt))
IPv6: setsockopt(sock, IPPROTO_IPV6, IPV6_TCLASS, &opt, sizeof(opt))
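For illustration, a hedged Ruby equivalent for the plugin's UDP socket (dscp would be a new, currently hypothetical option; host and port are placeholders):

require "socket"

dscp = 46                                            # e.g. Expedited Forwarding
socket = UDPSocket.new
# DSCP occupies the upper six bits of the TOS byte, hence the shift by 2.
socket.setsockopt(Socket::IPPROTO_IP, Socket::IP_TOS, dscp << 2)
socket.connect("198.51.100.10", 514)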

Could you please share whether this can be supported in a future Logstash release, and whether a backlog item for it already exists.

Ref:
https://en.wikipedia.org/wiki/Differentiated_services

TypeError: can't convert nil into String

I have installed the latest version of the plugin (v0.1.4).
I get an error every time I start Logstash.

TypeError: can't convert nil into String
              + at org/jruby/RubyString.java:1172
        receive at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-0.1.4/lib/logstash/outputs/syslog.rb:127
         handle at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/outputs/base.rb:88
    output_func at (eval):169
   outputworker at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/pipeline.rb:244
  start_outputs at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/pipeline.rb:166

The syslog output config is:

syslog {
  facility => "local4"
  host => "localhost"
  port => 514
  severity => "informational"
  appname => "varnish"
}

I use lumberjack as the input.
Everything works fine if I use the file output, but with the syslog output Logstash falls over.

Below is an example of an output event string:

{"@version":"1","@timestamp":"2015-07-31T06:28:57.000Z","type":"varnishncsa","file":"-","host":"hlcache03","offset":"848412357","clientip":"127.0.0.1","timestamp":"31/Jul/2015:00:28:57 -0600","verb":"GET","request":"http://localhost:80/varnish-status","httpversion":"1.1","response":"200","bytes":"231","agent":"\"Go 1.1 package http\"","req_proc_time":108.0,"sent_bytes":426,"be_proc_time":46.0}

Is the logstash-output-syslog plugin "supported" ?

Can someone confirm?

Loss of logs from logstash when rsyslog is restarted

Hi Team,

We are using Logstash (third-party packaged) v7.15.2 in our project. When a remote syslog server restarts, Logstash takes 1-60 minutes to reconnect.
During this time none of the logs are stored, so they are simply lost. After the rsyslog-Logstash connection is re-established everything works again, but the logs from around the reconnection are lost; they should have been stored in the Logstash persistent queue while the rsyslog output was unavailable.
We tried the following scenarios, with these findings:

Case 1: Deployed Elasticsearch and Logstash (rsyslog output configured, but rsyslog not deployed).
Stats of the rsyslog pipeline:

"outputs" : [ {
    "id" : "e6ad9c62172fd69b98c84ca8fada4ab96cb2e1b4bb5bc5248eead15a4b9f4012",
    "events" : {
      "out" : 0,
      "in" : 1,
      "duration_in_millis" : 86
    },
    "name" : "syslog"
  } ]
},
"reloads" : {
  "last_failure_timestamp" : null,
  "successes" : 0,
  "failures" : 0,
  "last_error" : null,
  "last_success_timestamp" : null
},
"queue" : {
  "type" : "persisted",
  "capacity" : {
    "max_unread_events" : 0,
    "max_queue_size_in_bytes" : 1073741824,
    "queue_size_in_bytes" : 7904,
    "page_capacity_in_bytes" : 67108864
  },
  "events" : 7,
  "data" : {
    "free_space_in_bytes" : 29855367168,
    "storage_type" : "ext4",
    "path" : "/opt/logstash/data/queue/syslog"
  },
  "events_count" : 7,
  "queue_size_in_bytes" : 7904,
  "max_queue_size_in_bytes" : 1073741824

We can see that the rsyslog pipeline's persistent queue stores events while rsyslog is not deployed, and once rsyslog is deployed it receives all the logs.

Case 2: Deployed Elasticsearch, Logstash, and rsyslog; restarted both Elasticsearch and rsyslog while sending logs.

Pipeline stats for Elasticsearch:

"name" : "elasticsearch"
  }, {
    "id" : "07aa8e0b7b6a3b03343369c6241012b28a5381ebbbb638b8ec25904c8e2f947b",
    "events" : {
      "out" : 0,
      "in" : 0,
      "duration_in_millis" : 68
    },
    "name" : "stdout"
  } ]
},
"reloads" : {
  "last_failure_timestamp" : null,
  "successes" : 0,
  "failures" : 0,
  "last_error" : null,
  "last_success_timestamp" : null
},
"queue" : {
  "type" : "persisted",
  "capacity" : {
    "max_unread_events" : 0,
    "max_queue_size_in_bytes" : 1073741824,
    "queue_size_in_bytes" : 29636,
    "page_capacity_in_bytes" : 67108864
  },
  "events" : 4,
  "data" : {
    "free_space_in_bytes" : 29845852160,
    "storage_type" : "ext4",
    "path" : "/opt/logstash/data/queue/elasticsearch"
  },
  "events_count" : 4,
  "queue_size_in_bytes" : 29636,
  "max_queue_size_in_bytes" : 1073741824
},

Pipeline stats for rsyslog:

     "name" : "syslog"
    } ]
  },
  "reloads" : {
    "last_failure_timestamp" : null,
    "successes" : 0,
    "failures" : 0,
    "last_error" : null,
    "last_success_timestamp" : null
  },
  "queue" : {
    "type" : "persisted",
    "capacity" : {
      "max_unread_events" : 0,
      "max_queue_size_in_bytes" : 1073741824,
      "queue_size_in_bytes" : 29636,
      "page_capacity_in_bytes" : 67108864
    },
    "events" : 0,
    "data" : {
      "free_space_in_bytes" : 29845852160,
      "storage_type" : "ext4",
      "path" : "/opt/logstash/data/queue/syslog"
    },
    "events_count" : 0,
    "queue_size_in_bytes" : 29636,
    "max_queue_size_in_bytes" : 1073741824
  },

In this case events are not stored in the syslog pipeline's persistent queue, but they are stored in the Elasticsearch pipeline's persistent queue.

Once Elasticsearch and rsyslog are restarted, the logs sent during the disconnection arrive only in Elasticsearch, not in rsyslog.

Could you please share your comments on this? We think Logstash does not detect that rsyslog has restarted, so the logs are not stored in the persistent queue and are lost.

Logstash information: Logstash 7.15.2 (third-party package).
JVM version: JDK 11.
Description of the problem including expected versus actual behavior: described above.
Steps to reproduce: described above.

Facility and Severity not set correctly

I forwarded syslog messages from rsyslog to Logstash and want to use the logstash-output-syslog output plugin in Logstash to forward the events to another syslog server. The issue is that I set the facility to "local0" in Logstash but a different facility (i.e., alert) arrives at the receiver. The plugin works properly for some facilities, though (e.g., mail). I attached my Logstash config file and a sample syslog event to the issue.

logstash-output-syslog-3.0.5 with protocol UDP and an IPv6 address fails in connect

When using logstash-output-syslog-3.0.5 with protocol UDP and an IPv6 address, it fails in connect, at line 209 of logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-syslog-3.0.5/lib/logstash/outputs/syslog.rb:

i.e.

208: socket = UDPSocket.new
209: socket.connect(@host, @port) <-- fails here, if @host is an IPv6 address.

If I change line 208 to explicitly use IPv6 it works for IPv6 addresses.

i.e.

208: socket = UDPSocket.new(Socket::AF_INET6)
209: socket.connect(@host, @port) <-- this now works when @host is an IPv6 address.

Obviously, that breaks IPv4, which isn't an issue for me, but a proper fix should handle both IPv4 and IPv6.
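One possible shape for such a fix, sketched under the assumption that resolving the host up front is acceptable (not the plugin's actual code):

require "socket"

# Resolve the destination first, then open the UDP socket with the
# matching address family so both IPv4 and IPv6 hosts work.
addrinfo = Addrinfo.udp(@host, @port)
socket = UDPSocket.new(addrinfo.afamily)
socket.connect(@host, @port)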

Absent my change, it works with IPv4 and it works with TCP; it's just the combination of UDP and IPv6 that fails.

This has been/is being discussed at https://discuss.elastic.co/t/logstash-output-syslog-3-0-5-not-sending-udp-over-ipv6/307279

Double syslog header when using logstash-input-syslog and logstash-output-syslog

Hello,

We are using Logstash as a shipper to receive syslog messages and distribute them to multiple destinations.
When we receive messages using the logstash-input-syslog plugin and forward them with logstash-output-syslog, a double syslog header is present on the output, which seems logical.

My request is for a parameter or option that checks the input message and makes it possible to skip adding a second syslog header when one is already present.

For all general issues, please provide the following details for fast resolution:

  • Version: 2.4
  • Operating System: Linux 6.8
  • Config file:

input {
  tcp {
    port => 5002
  }
}
output {
  syslog {
    facility => "daemon"
    host => "<IP>"
    port => 514
    severity => "alert"
    rfc => "rfc5424"
    appname => "dummy"
    protocol => ["tcp"]
  }
}

Preserve timezone info

Currently the plugin sends messages with the date in UTC for both the RFC3164 and RFC5424 formats.
But in the syslog world, messages are usually sent after applying a timezone.
In fact it gets a little messy when the whole syslog server uses local time but the messages received from Logstash are in UTC with the TZ info missing.
Is it possible to emit messages after applying the TZ, preserving the TZ info?
Before:
2018-02-28T14:55:34.706+00:00
After:
2018-02-28T10:55:34.706-04:00
I guess it would break current setups with the RFC3164 format, because there is no TZ info in that format,
so maybe there is no need to change the behavior there.
Nevertheless, RFC5424 messages could carry the timezone seamlessly. Maybe it would be reasonable to add a flag like apply_timezone to make this behavior explicit.

logstash-7.5.1 does not properly support IETF syslog protocol over TLS (RFC5424-5425)

The syslog output plugin supports the "rfc" parameter, which can be set to "rfc5424" to enable the IETF syslog protocol. In addition, there is a "protocol" parameter, which can be set to "ssl-tcp" to use the IETF syslog protocol over TLS. However, transmission of IETF syslog messages over TLS is standardized in RFC5425, and Logstash does not follow that standard: it fails to encapsulate syslog messages into syslog frames as RFC5425 requires. When receiving syslog messages from Logstash with syslog-ng, syslog-ng reports the problem in its debug log:

Feb 5 16:13:25 localhost syslog-ng[4850]: Invalid frame header; header=''

This issue is not new and has also been observed by other users:
https://discuss.elastic.co/t/logstash-ssl-tcp-syslog-ng-invalid-frame-header-header/118088

Here is the configuration file snippet for reproducing this issue:

output {
  syslog {
    id => "LogServer"
    sourcehost => "%{[host][name]}"
    appname => "myprogram"
    facility => "local7"
    severity => "informational"
    host => "127.0.0.1"
    port => "6514"
    protocol => "ssl-tcp"
    rfc => "rfc5424"
    ssl_cert => "/etc/logstash/cert.pem"
    ssl_key => "/etc/logstash/key.pem"
    codec => json
  }
}

This issue was tested with logstash-7.5.1 and CentOS 7, but as other users observed the same behavior two years ago, it is apparently not specific to one Logstash version or operating system platform.
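For reference, RFC5425 octet-counting framing prefixes each message with its length in bytes and a space; a minimal sketch of what the ssl-tcp + rfc5424 path would need to emit (illustrative code, not the plugin's):

# SYSLOG-FRAME = MSG-LEN SP SYSLOG-MSG (RFC5425)
def frame_rfc5425(syslog_msg)
  "#{syslog_msg.bytesize} #{syslog_msg}"
end

frame_rfc5425("<165>1 2003-10-11T22:14:15.003Z host app - - - hello")
# => "52 <165>1 2003-10-11T22:14:15.003Z host app - - - hello"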

Syslog output plugin does not translate host

  • Version:
    main
  • Operating System:
    any
  • Config File (if you have sensitive info, please remove it):
input {
  http {
    host => "0.0.0.0"
    port => "80"
    codec => "json"
  }
}
output {
  if [action] == "syslog" {
    syslog {
      host => "%{address}"
      appname => "myapp"
      protocol => "tcp"
      severity => "alert"
      facility => "log alert"
      message => "%{body}"
      port => 514
    }
  }
}
  • Sample Data:
{
  "action":"syslog",
  "address":"logstash-syslog",
  "subject":"blabla",
  "body":"bodybody"
}
  • Expected results
    Syslog plugin connects to dynamic host defined in field address (=logstash-syslog)

  • Actual results
Syslog plugin loads the host from the configuration file definition; it does not dynamically resolve what is inside the address field. The message option works as expected.

  • Steps to Reproduce:
    Run the sample data against the configuration.

  • Exception:

[2017-11-18T21:22:02,654][WARN][logstash.outputs.syslog  ] syslog tcp output exception: closing, reconnecting and resending event {:host=>"%{address}", :port=>514, :exception=>#<SocketError: initialize: name or service not known>, :backtrace=>["org/jruby/ext/socket/RubyTCPSocket.java:137:in `initialize'", "org/jruby/RubyIO.java:875:in `new'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-syslog-3.0.4/lib/logstash/outputs/syslog.rb:209:in `connect'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-syslog-3.0.4/lib/logstash/outputs/syslog.rb:177:in `publish'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-plain-3.0.4/lib/logstash/codecs/plain.rb:41:in `encode'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-syslog-3.0.4/lib/logstash/outputs/syslog.rb:147:in `receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `block in multi_receive'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:22:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:49:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:538:in `block in output_batch'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:536:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:481:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:439:in `block in start_workers'"], :event=>#<LogStash::Event:0x1d7b4ae4>}

Note the unresolved field reference in the retry log: reconnecting and resending event {:host=>"%{address}", ...
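A hedged sketch of the obvious fix (not the plugin's actual code), inside the publish/connect path: resolve the host per event with event.sprintf before connecting.

require "socket"

host = event.sprintf(@host)          # "%{address}" => "logstash-syslog"
socket = TCPSocket.new(host, @port)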

Exception thrown if "message" field is missing

If an event is missing the message field, the syslog output fails with the following Ruby stack trace and no good indication of what's actually wrong:

TypeError: can't convert nil into String
              + at org/jruby/RubyString.java:1172
        receive at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-syslog-0.1.4/lib/logstash/outputs/syslog.rb:128
         handle at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/outputs/base.rb:88
    output_func at (eval):416
   outputworker at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/pipeline.rb:244
  start_outputs at /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/pipeline.rb:166

This is evidenced by issue #10 and two cases in https://discuss.elastic.co/t/syslog-output-plugin-configuration/26368, so it's something real users are being confused by. At the very least we should document this de facto requirement, but preferably the code should be patched to give a better error message and/or use a default message of some kind.
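For illustration, a minimal sketch of the suggested fallback (an assumption, not the shipped code; the event["message"] accessor matches the Logstash 1.5-era API in the trace above):

msg = event["message"]
if msg.nil?
  @logger.warn("syslog output: event has no message field, using a placeholder")
  msg = "<no message>"   # avoid the nil that crashes the String concatenation
end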

output when have multiple logstash servers in cluster

If I have, let's say, 3 servers in a Logstash cluster, can I coordinate them somehow so that when I configure the syslog output only one of those servers actually sends the syslog message? Otherwise all 3 would send, and the receiving systems would get duplicate messages.

Dynamic configuration values.

It is not possible to configure the plugin using interpolation with values obtained from a log event.
Would you provide support for this, as other plugins already support this feature? Ideally the plugin could be configured in a manner similar to:

output {
    syslog {
        host        => '10.11.12.1'
        port        => 12345
        protocol    => 'tcp'
        facility    => "%{SYSLOG_FACILITY}"
        severity    => "%{SYSLOG_SEVERITY}"
        appname     => "%{PROGRAM}"
        sourcehost  => "%{LOGSOURCE}"
        procid      => "%{PID}"
        msgid       => 'logstash'
    }
}

This would allow the forwarding of filtered events to a SYSLOG host with elements of the original event.

Thanks,

Prevent reconnect on udp connections to localhost

We don't expect UDP connections to fail because they are stateless, but...
UDP sends may fail and raise an exception when used with localhost/127.0.0.1 if no one is listening on the respective port, because the kernel reports the resulting ICMP port-unreachable error back on the connected socket.

To prevent a failing UDP connection to localhost from halting the event queue in Logstash, we ignore exceptions if the connection is UDP.

Logstash syslog output plugin not setting facility correctly

Using this plugin to send some logs to a remote host.

The configuration in Logstash related to this plugin is as follows:

output {
  syslog {
    host => "10.0.0.25"
    port => "514"
    protocol => "udp"
    facility => "local5"
    severity => "debug"
  }
}

On the remote rsyslog server I noticed that the logs arrive with severity 3.
If I want them to arrive at the remote server as local5, I have to configure Logstash with facility => "local7". This does not scale: if we actually want to send to the remote host with facility local7, it's unclear which value to use in the config.

This behavior is seen in logstash 5.2. This was not happening in logstash 1.5.6.

Logstash hangs when connection to remote syslog server hangs

I'm sending logs both to ES and to a remote syslog server after parsing them. I ran into an issue today where, after a few logs were sent to ES, processing would stop completely. It turned out that I was unable to connect to my remote syslog server: the IP address was reachable, but security groups were blocking the connection, so connection attempts would just hang. This plugin also hangs while trying to connect and eventually stops all log processing with no indication as to why. There should be a timeout setting on the socket connection so that if it can't connect it fails gracefully and stops trying to send messages.
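For illustration, Ruby's standard library already supports a connect timeout, so a hedged sketch of the suggested behavior could be as small as this (the timeout value is arbitrary):

require "socket"

# Fail fast instead of hanging indefinitely on an unreachable host.
socket = Socket.tcp(@host, @port, connect_timeout: 5)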

RSpec Tests failing: order of json codec changed

Currently the rspec tests are failing (http://build-eu-00.elastic.co/job/logstash-plugin-output-syslog-unit/22/) due to a change in one of the dependent components. I am not sure whether it is due to a change in logstash-codec-json or in logstash-event.

The failing line is https://github.com/logstash-plugins/logstash-output-syslog/blame/master/spec/outputs/syslog_spec.rb#L102 (unfortunately introduced by myself). The problem with this line is that it validates the JSON output of the json codec with a regular expression, which expects the elements in the JSON output to appear in a fixed order.

@logstash-core what do you suggest to resolve this problem?
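One hedged way out: parse the payload and assert on the resulting structure instead of regexp-matching the raw JSON string (the field and value here are illustrative):

require "json"

# Order-insensitive: compares the parsed hash, not the byte layout.
expect(JSON.parse(payload)).to include("message" => "foo")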
