
postfix-grok-patterns's Introduction

Logstash grok patterns for postfix logging

A set of grok patterns for parsing Postfix logging. Also included is a sample Logstash config file for applying the grok patterns as a filter.

Usage

  • Install Logstash
  • Add 50-filter-postfix.conf to /etc/logstash/conf.d (or the pipeline dir for a dockerized Logstash)
  • Create the directory /etc/logstash/patterns.d
  • Add postfix.grok to /etc/logstash/patterns.d
  • Restart Logstash

The included Logstash config file requires two input fields to exist in input events:

  • program: the name of the program that generated the log line, e.g. postfix/smtpd (called tag in syslog lingo)
  • message: the log message payload without additional fields (program, pid, etc.), e.g. connect from 1234.static.ctinets.com[45.238.241.123]

This event format is supported by the Logstash syslog input plugin out of the box, but several other plugins produce input that can be adapted fairly easily to produce these fields too. See ALTERNATIVE INPUTS for details.
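
For example, inputs that produce syslog_program and syslog_message fields (as some official syslog examples do) can be adapted with a mutate filter. A minimal sketch, assuming those two field names:

    filter {
      mutate {
        # rename the syslog-style fields to the names the postfix filter expects
        rename => {
          "syslog_program" => "program"
          "syslog_message" => "message"
        }
      }
    }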

Tests

In the test/ directory there is a test suite that tries to make sure that no previously supported log line breaks when common patterns change. It also returns results a lot faster than doing sudo service logstash restart :-).

The test suite needs the patterns provided by Logstash; you can easily pull these from GitHub by running git submodule update --init. To run the test suite, you need a recent version of Ruby (2.6 or newer should work) and the jls-grok and minitest gems. Then simply execute ruby test/test.rb. NOTE: the whole test process can also be executed inside a Docker container by running the runtests.sh script.

Adding new test cases is easily done by creating new YAML files in the test/ directory. Each file specifies a grok pattern to validate, a sample log line, and a list of expected results.
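
A hedged sketch of what such a file might look like (the keys shown here are illustrative, not the exact schema; mirror an existing file in test/ for the real layout):

    # test/qmgr-example.yaml (illustrative keys, not the project's exact schema)
    pattern: POSTFIX_QMGR
    line: "9B6621E2221: from=<sender@example.com>, size=6273, nrcpt=1 (queue active)"
    expected:
      postfix_queueid: 9B6621E2221
      postfix_from: sender@example.com
      postfix_size: 6273
      postfix_nrcpt: 1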

Also, the example Logstash config file adds some informative tags that aid in finding grok failures and unparsed lines. If you're not interested in those, you can remove all occurrences of add_tag and tag_on_failure from the config file.

Contributing

I only have access to my own log samples, and my setup does not support or use every feature in Postfix. If anything is missing, please open a pull request on GitHub. If you're not very well versed in regular expressions, it's also fine to submit only the unsupported sample log lines.

Other guidelines:

  • There is no goal to parse every possible Postfix log line. The goal is to extract useful data from the logs in a generic way.
  • The target for data extraction is logging from a local server. There have been requests to parse SMTP replies from remote (Postfix) servers that are logged by the SMTP client (the postfix/smtp program name). There is no generic way to parse these replies: they differ from implementation to implementation (e.g. Postfix vs. Exim) and from server to server (every admin can customize the message format). Parsing stock replies from remote Postfix servers could be done, but would be confusing since the messages don't originate from the local server. Requests for parsing these are not honoured; if you'd like to do that, implement it yourself or start a separate project, and I'd be happy to add a link to it. :)

License

Everything in this repository is available under the New (3-clause) BSD license. See LICENSE for details.

Acknowledgement

I use Postfix, Logstash, Elasticsearch and Kibana to get everything working. For writing the grok patterns I depend heavily on grokdebug, and I looked a lot at antispin's useful Logstash grok patterns.

postfix-grok-patterns's People

Contributors

agentelinux, busindre, dh0mp5eur, eltrai, hyili, jarpy, matejzero, rooty0, thomaspatzke, ulab, whyscream, wolfgangkarall


postfix-grok-patterns's Issues

Polluted fields like #143

Like issue #143, I still get the same problem.

Hello, I got the same problem: a UTF-8 encoded-word chain was creating fields like "postfix_?utf-8?B?N2NmMGJmYjEyMTViYkBjb250YWN0LWVyYW0uZnINCj4" with the value "?=".
My data looks the same as yours: two encoded-word chunks separated by a space and no brackets, and it seems to trigger the kv filter anyway.

My message:
2B8A6446CC29: message-id==?utf-8?B?PDIwMjAwNjEyXzE2NDkwN19jZWI2OTlhMWU5Mjk0YzE1ODAz?=? =?utf-8?B?N2NmMGJmYjEyMTViYkBjb250YWN0LWVyYW0uZnINCj4=?=

The fix does not seem to work

Add support for command counters in disconnect

Examples:

Oct 24 06:35:16 alison postfix-in/smtpd[23832]: disconnect from unknown[72.10.165.66] ehlo=2 starttls=1 mail=1 rcpt=0/1 data=0/1 quit=1 commands=5/7

Oct 27 09:21:37 alison postfix-in/submission/smtpd[17892]: disconnect from unknown[94.142.213.250] ehlo=2 starttls=1 auth=1 mail=1 rcpt=1 data=1 quit=1 commands=8
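
A hedged sketch of one way to match these, assuming the repository's existing POSTFIX_CLIENT_INFO pattern; the optional counter tail is captured whole into an illustrative postfix_command_counter_data field for later splitting:

    POSTFIX_SMTPD_DISCONNECT disconnect from %{POSTFIX_CLIENT_INFO}( %{GREEDYDATA:postfix_command_counter_data})?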

Postfix lmtp logs

Hello!

We are using lmtp for local mail delivery and some of the logs are not getting parsed, since I see you don't use lmtp and therefore have not written filters for it.

I tried to do it on my own but I'm not getting anywhere. Could you help me out?

Log samples:

6BEA81CDC0AB: to=<[email protected]>, relay=1.example.com.si[private/dovecot-lmtp], delay=0.03, delays=0.02/0/0/0.01, dsn=2.0.0, status=sent (250 2.0.0 <[email protected]> CVN5Bcsl9FR9ZQAAA15QOA Saved)
B140F1CDC09B: to=<[email protected]>, relay=1.example.com.si[private/dovecot-lmtp], delay=0.1, delays=0.04/0/0/0.06, dsn=2.0.0, status=sent (250 2.0.0 <[email protected]> ifvECDwm9FS4AwAAA15QOA Saved)
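
Since these lmtp delivery lines use the same key-value layout as the smtp ones, a minimal sketch of a filter clause (assuming the existing POSTFIX_SMTP pattern also fits lmtp, which the samples above suggest):

    } else if [program] =~ /^postfix.*\/lmtp$/ {
        grok {
            patterns_dir   => "/etc/logstash/patterns.d"
            match          => [ "message", "^%{POSTFIX_SMTP}$" ]
            tag_on_failure => [ "_grok_postfix_lmtp_nomatch" ]
            add_tag        => [ "_grok_postfix_success" ]
        }
    }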

Thanks, Matej

program field is null

I created this mutate filter in conjunction with the two files:

filter {
  mutate {
    rename => [
      "syslog_program", "program",
      "syslog_message", "message"
    ]
  }
}

I restarted Logstash and ran bin/logstash -f 50-filter-postfix.conf, but the program field shows a null value in all events.

Unsupported Log Lines

Thank you for your fantastic work on this project; the patterns work flawlessly for the most part. I'm not familiar enough yet to contribute grok expressions, but I've included a sample of Postfix log lines that are not matched by the current grok expressions.

SSL-related

The following look like they could be generalised using the SSL_connect: prefix.

Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: SSL_connect:SSLv3 write finished A
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: SSL_connect:unknown state
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: SSL_connect:SSLv3 write change cipher spec A
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: SSL_connect:SSLv3 flush data
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: SSL_connect:before/connect initialization
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: SSL_connect:SSLv3 read server certificate A
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: SSL_connect:SSLv3 read server key exchange A
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: SSL_connect:SSLv3 read server hello A
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: SSL_connect:SSLv3 read server done A
Dec 29 17:47:34 myserverhostname postfix/smtp[26194]: SSL_connect:SSLv3 read server session ticket A
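
A minimal sketch of a catch-all for these, keyed on the SSL_connect: prefix (the field name is illustrative):

    POSTFIX_SMTP_SSL_CONNECT SSL_connect:%{GREEDYDATA:postfix_ssl_connect_state}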

TLS-related

Dec 29 17:47:30 myserverhostname postfix/smtp[26194]: initializing the client-side TLS engine
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: aspmx.l.google.com[74.125.24.26]:25: TLS cipher list "aNULL:-aNULL:ALL:+RC4:@STRENGTH"
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: setting up TLS connection to aspmx.l.google.com[74.125.24.26]:25
Dec 29 14:39:23 myserverhostname postfix/smtp[10868]: rbgcon07.fnb.co.za[196.10.116.9]:25: TLS cipher list "aNULL:-aNULL:ALL:+RC4:@STRENGTH"

Repeated Messages

This is just one example, but I assume we could see these types of repeated messages for any log line pattern.

Dec 29 11:09:22 myserverhostname postfix/smtp[27911]: message repeated 2 times: [ mail.leoaylen.com[78.129.232.71]:25: depth=0 verify=0 subject=/OU=GT50485055/OU=See www.rapidssl.com/resources/cps (c)15/OU=Domain Control Validated - RapidSSL(R)/CN=mira.bargus.co.uk]

Network Connectivity

Trying to connect to Google over IPv6, but IPv6 has not been set up locally. These can probably be generalised using the suffix Network is unreachable.

Dec 29 17:47:32 myserverhostname postfix/smtp[26194]: connect to aspmx.l.google.com[2a00:1450:400b:c02::1a]:25: Network is unreachable

Session-related

Dec 29 17:47:34 myserverhostname postfix/smtp[26194]: save session smtp&mydomain.com&aspmx.l.google.com&74.125.24.26&&5A61A6AC5D08312A12A4F4D11DE3219B2131E21613723A09C4E111C712512312 to smtp cache
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: looking for session smtp&mydomain.com&aspmx.l.google.com&74.125.24.26&&5A61A6AC5D08312A12A4F4D11DE3219B2131E21613723A09C4E111C712512312 in smtp cache

Other - 1

Dec 29 17:47:34 myserverhostname postfix/smtp[26194]: aspmx.l.google.com[74.125.24.26]:25: subject_CN=mx.google.com, issuer_CN=Google Internet Authority G2, fingerprint=00:24:C8:8D:4F:C6:93:D7:E0:8E:9B:55:46:13:50:0C, pkey_fingerprint=4F:E1:46:3D:60:1B:EC:85:2F:65:21:5F:AF:0A:3E:E7
Dec 29 11:47:44 myserverhostname postfix/smtp[30750]: bloz.com.au[103.226.222.194]:25: subject_CN=*.au.syrahost.com, issuer_CN=USERTrust RSA Organization Validation Secure Server CA, fingerprint=F8:E7:F8:69:20:74:5B:A8:62:FD:F1:31:E0:BD:2E:22, pkey_fingerprint=CD:5A:ED:59:6F:76:05:37:E3:31:DC:90:E0:A9:4D:B7
Dec 29 10:52:03 myserverhostname postfix/smtp[26654]: nav-gateway.mweb.co.za[196.35.198.130]:25: depth=0 verify=1 subject=/serialNumber=A6Vpz1M1RjmczfUE2dRkY8JO6RrkLfph/OU=GT83193388/OU=See www.rapidssl.com/resources/cps (c)13/OU=Domain Control Validated - RapidSSL(R)/CN=*.synaq.com
Dec 29 10:52:03 myserverhostname postfix/smtp[26654]: nav-gateway.mweb.co.za[196.35.198.130]:25: subject_CN=*.synaq.com, issuer_CN=RapidSSL CA, fingerprint=9B:89:62:D0:5F:36:6B:4E:44:6E:B7:A4:63:6B:F2:B0, pkey_fingerprint=46:68:D1:58:68:16:83:39:E8:9F:F9:19:42:31:8B:0D

Other - 2

Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: aspmx.l.google.com[74.125.24.26]:25: depth=1 verify=1 subject=/C=US/O=Google Inc/CN=Google Internet Authority G2
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: aspmx.l.google.com[74.125.24.26]:25: depth=2 verify=1 subject=/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
Dec 29 17:47:33 myserverhostname postfix/smtp[26194]: aspmx.l.google.com[74.125.24.26]:25: depth=0 verify=1 subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=mx.google.com
Dec 29 12:37:15 myserverhostname postfix/smtp[2060]: aon-co-za.mail.protection.outlook.com[207.46.163.215]:25: depth=2 verify=1 subject=/CN=Microsoft Internet Authority    
Dec 29 12:37:15 myserverhostname postfix/smtp[2060]: aon-co-za.mail.protection.outlook.com[207.46.163.215]:25: depth=1 verify=1 subject=/DC=com/DC=microsoft/DC=corp/DC=redmond/CN=MSIT Machine Auth CA 2
Dec 29 12:37:15 myserverhostname postfix/smtp[2060]: aon-co-za.mail.protection.outlook.com[207.46.163.215]:25: depth=0 verify=1 subject=/C=US/ST=WA/L=Redmond/O=Microsoft/OU=Forefront Online Protection for Exchange/CN=mail.protection.outlook.com
Dec 29 08:09:22 myserverhostname postfix/smtp[14708]: za-smtp-inbound-1.mimecast.co.za[41.74.193.201]:25: depth=1 verify=1 subject=/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
Dec 28 13:58:06 myserverhostname postfix/smtp[29673]: za-smtp-inbound-1.mimecast.co.za[41.74.193.201]:25: depth=2 verify=1 subject=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5

ECS compatibility

Seems like some fields can be mapped to ECS fields.

mutate {
      copy => {
          "postfix.message_level" => "log.level"
          "postfix.client_ip"     => "client.ip"
          "postfix.client_port"   => "client.port"
          "postfix.relay_ip"      => "destination.ip"
          "postfix.relay_port"    => "destination.port"
          "postfix.server_ip"     => "server.ip"
          "postfix.server_port"   => "server.port"
      }
}

Index Pattern

Hi
In /etc/logstash/conf.d/postfix.conf

input {
beats {
port => 5044
}
}

filter {
# grok log lines by program name (listed alphabetically)
if [program] =~ /^postfix.*\/anvil$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_ANVIL}$" ]
tag_on_failure => [ "_grok_postfix_anvil_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/bounce$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_BOUNCE}$" ]
tag_on_failure => [ "_grok_postfix_bounce_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/cleanup$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_CLEANUP}$" ]
tag_on_failure => [ "_grok_postfix_cleanup_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/dnsblog$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_DNSBLOG}$" ]
tag_on_failure => [ "_grok_postfix_dnsblog_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/error$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_ERROR}$" ]
tag_on_failure => [ "_grok_postfix_error_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/local$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_LOCAL}$" ]
tag_on_failure => [ "_grok_postfix_local_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/master$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_MASTER}$" ]
tag_on_failure => [ "_grok_postfix_master_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/pickup$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_PICKUP}$" ]
tag_on_failure => [ "_grok_postfix_pickup_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/pipe$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_PIPE}$" ]
tag_on_failure => [ "_grok_postfix_pipe_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/postdrop$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_POSTDROP}$" ]
tag_on_failure => [ "_grok_postfix_postdrop_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/postscreen$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_POSTSCREEN}$" ]
tag_on_failure => [ "_grok_postfix_postscreen_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/qmgr$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_QMGR}$" ]
tag_on_failure => [ "_grok_postfix_qmgr_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/scache$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_SCACHE}$" ]
tag_on_failure => [ "_grok_postfix_scache_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/sendmail$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_SENDMAIL}$" ]
tag_on_failure => [ "_grok_postfix_sendmail_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/smtp$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_SMTP}$" ]
tag_on_failure => [ "_grok_postfix_smtp_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/lmtp$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_LMTP}$" ]
tag_on_failure => [ "_grok_postfix_lmtp_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/smtpd$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_SMTPD}$" ]
tag_on_failure => [ "_grok_postfix_smtpd_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/postsuper$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_POSTSUPER}$" ]
tag_on_failure => [ "_grok_postfix_postsuper_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/tlsmgr$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_TLSMGR}$" ]
tag_on_failure => [ "_grok_postfix_tlsmgr_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/tlsproxy$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_TLSPROXY}$" ]
tag_on_failure => [ "_grok_postfix_tlsproxy_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/trivial-rewrite$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_TRIVIAL_REWRITE}$" ]
tag_on_failure => [ "_grok_postfix_trivial_rewrite_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/discard$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_DISCARD}$" ]
tag_on_failure => [ "_grok_postfix_discard_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\/virtual$/ {
grok {
patterns_dir => "/etc/logstash/patterns.d"
match => [ "message", "^%{POSTFIX_VIRTUAL}$" ]
tag_on_failure => [ "_grok_postfix_virtual_nomatch" ]
add_tag => [ "_grok_postfix_success" ]
}
} else if [program] =~ /^postfix.*\// {
mutate {
add_tag => [ "_grok_postfix_program_nomatch" ]
}
}

# process key-value data if it exists
if [postfix_keyvalue_data] {
    kv {
        source       => "postfix_keyvalue_data"
        trim_value   => "<>,"
        prefix       => "postfix_"
        remove_field => [ "postfix_keyvalue_data" ]
    }

    # some post processing of key-value data
    if [postfix_client] {
        grok {
            patterns_dir   => "/etc/logstash/patterns.d"
            match          => ["postfix_client", "^%{POSTFIX_CLIENT_INFO}$"]
            tag_on_failure => [ "_grok_kv_postfix_client_nomatch" ]
            remove_field   => [ "postfix_client" ]
        }
    }
    if [postfix_relay] {
        grok {
            patterns_dir   => "/etc/logstash/patterns.d"
            match          => ["postfix_relay", "^%{POSTFIX_RELAY_INFO}$"]
            tag_on_failure => [ "_grok_kv_postfix_relay_nomatch" ]
            remove_field   => [ "postfix_relay" ]
        }
    }
    if [postfix_delays] {
        grok {
            patterns_dir   => "/etc/logstash/patterns.d"
            match          => ["postfix_delays", "^%{POSTFIX_DELAYS}$"]
            tag_on_failure => [ "_grok_kv_postfix_delays_nomatch" ]
            remove_field   => [ "postfix_delays" ]
        }
    }
}

# process command counter data if it exists
if [postfix_command_counter_data] {
    grok {
        patterns_dir   => "/etc/logstash/patterns.d"
        match          => ["postfix_command_counter_data", "^%{POSTFIX_COMMAND_COUNTER_DATA}$"]
        tag_on_failure => ["_grok_postfix_command_counter_data_nomatch"]
        remove_field   => ["postfix_command_counter_data"]
    }
}

# Do some data type conversions
mutate {
    convert => [
        # list of integer fields
        "postfix_anvil_cache_size", "integer",
        "postfix_anvil_conn_count", "integer",
        "postfix_anvil_conn_rate", "integer",
        "postfix_client_port", "integer",
        "postfix_cmd_auth", "integer",
        "postfix_cmd_auth_accepted", "integer",
        "postfix_cmd_count", "integer",
        "postfix_cmd_count_accepted", "integer",
        "postfix_cmd_data", "integer",
        "postfix_cmd_data_accepted", "integer",
        "postfix_cmd_ehlo", "integer",
        "postfix_cmd_ehlo_accepted", "integer",
        "postfix_cmd_helo", "integer",
        "postfix_cmd_helo_accepted", "integer",
        "postfix_cmd_mail", "integer",
        "postfix_cmd_mail_accepted", "integer",
        "postfix_cmd_quit", "integer",
        "postfix_cmd_quit_accepted", "integer",
        "postfix_cmd_rcpt", "integer",
        "postfix_cmd_rcpt_accepted", "integer",
        "postfix_cmd_rset", "integer",
        "postfix_cmd_rset_accepted", "integer",
        "postfix_cmd_starttls", "integer",
        "postfix_cmd_starttls_accepted", "integer",
        "postfix_cmd_unknown", "integer",
        "postfix_cmd_unknown_accepted", "integer",
        "postfix_nrcpt", "integer",
        "postfix_postscreen_cache_dropped", "integer",
        "postfix_postscreen_cache_retained", "integer",
        "postfix_postscreen_dnsbl_rank", "integer",
        "postfix_relay_port", "integer",
        "postfix_server_port", "integer",
        "postfix_size", "integer",
        "postfix_status_code", "integer",
        "postfix_termination_signal", "integer",

        # list of float fields
        "postfix_delay", "float",
        "postfix_delay_before_qmgr", "float",
        "postfix_delay_conn_setup", "float",
        "postfix_delay_in_qmgr", "float",
        "postfix_delay_transmission", "float",
        "postfix_postscreen_violation_time", "float"
    ]
}

}

output {
elasticsearch {
hosts => "localhost:9200"
index => "postfix3-%{+YYYY.MM.dd}"
}
}
postfix.grok is in /etc/logstash/patterns.d/.

But in Kibana => Index Pattern (screenshot omitted) I only see the Time and source fields. Is that normal? I expected to see the other fields.

Integration with rsyslog

With this rsyslog JSON template:

template(name="json-template"
  type="list") {
    constant(value="{")
      constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"@version\":\"1")
      constant(value="\",\"message\":\"")     property(name="msg" format="json")
      constant(value="\",\"sysloghost\":\"")  property(name="hostname")
      constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
      constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
      constant(value="\",\"programname\":\"") property(name="programname")
      constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}

How do I get postfix/smtpd from rsyslog as the program field? The tag property you mention in the docs doesn't exist among the rsyslog properties.
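
One possible workaround, sketched and untested: newer rsyslog versions can be told to keep the slash in programname (the parameter below is believed to exist from rsyslog 8.25 on; treat the name as an assumption and check your version's docs), or the value can be derived from the raw syslogtag with a regex property:

    # assumption: rsyslog >= 8.25 keeps '/' so programname yields postfix/smtpd
    global(parser.permitSlashInProgramName="on")

    # alternative, inside the template: strip the "[pid]:" suffix from the raw tag
    constant(value="\",\"program\":\"")   property(name="syslogtag" regex.expression="([a-zA-Z0-9_/.-]+)" regex.type="ERE" regex.submatch="1")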

Improper command pipelining

Is this a lost-connection log?

postfix/smtpd[14759]: improper command pipelining after DATA from unknown[72.13.58.7]:

Then the relevant pattern would be:
POSTFIX_LOSTCONN (lost connection|timeout|improper command pipelining)

Thx

POSTFIX_SMTPD_LOSTCONN

My smtpd logs always contain a bytes value.
postfix/smtpd[10569]: lost connection after DATA (0 bytes) from unknown[X.X.X.X]
postfix/smtpd[31435]: lost connection after DATA (7774180 bytes) from unknown[X.X.X.X]

POSTFIX_SMTP_STAGE (CONNECT|HELO|EHLO|AUTH|MAIL|RCPT|DATA \((?:%{BASE10NUM}) bytes\)|STARTTLS|RSET|UNKNOWN|.)
or
POSTFIX_SMTP_STAGE (CONNECT|HELO|EHLO|AUTH|MAIL|RCPT|DATA|DATA \((?:%{BASE10NUM}) bytes\)|STARTTLS|RSET|UNKNOWN|.)

unknown[92.246.76.92]: SASL LOGIN authentication failed: UGFzc3dvcmQ6

Hi, dear all,

While testing the grok patterns for Postfix, I found something that could be helpful for analyzing Postfix logs.
Right now, if somebody tries to access the mail server with a wrong user or password, we can find the info below inside the field "postfix_message":

postfix_message |   | unknown[92.246.76.92]: SASL LOGIN authentication failed: UGFzc3dvcmQ6

It would be nice if we could also parse the attacker's IP address and the message "SASL LOGIN authentication failed: UGFzc3dvcmQ6".
If somebody has already solved this issue, please let me know.
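
A hedged sketch of a pattern that would split these out, assuming the repository's POSTFIX_CLIENT_INFO pattern (field names illustrative):

    POSTFIX_SMTPD_SASLAUTHFAIL %{POSTFIX_CLIENT_INFO}: SASL %{WORD:postfix_sasl_method} authentication failed: %{GREEDYDATA:postfix_smtpd_response}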

thx

Arek

Postfix polluted fields

Hi,

I noticed that some message IDs are messing up the Kibana fields.
I have more than 7000 polluted fields like this one:

{
    "program": "postfix/cleanup",
    "message": "ADD961003206D: message-id==?UTF-8?B?PDE5?=? =?UTF-8?B?MTIyMDExMjkxNjYxMUBDbGllbnQtMDQ1LnJ1YmluZXR0ZXJpZS5sb2NhbD4=?=",
    "received_at": "2019-12-20T10:29:09.784Z",
    "postfix_queueid": "ADD961003206D",
    "tags": [
      "_grok_postfix_success"
    ],
    "postfix_?UTF-8?B?MTIyMDExMjkxNjYxMUBDbGllbnQtMDQ1LnJ1YmluZXR0ZXJpZS5sb2NhbD4": "?=",
    "ecs": {
      "version": "1.1.0"
    },
    "@version": "1",
    "postfix_message-id": "=?UTF-8?B?PDE5?=?"
  }

Has anyone ever experienced such an issue?
Is there any way to sanitize the data from the message field to avoid this problem?

Thank you in advance

policyd-spf pattern

Hello,

would it be possible to add patterns for policyd-spf?

Example:
Apr 15 11:37:46 examplemx01 policyd-spf[28662]: None; identity=mailfrom; client-ip=192.192.192.192; helo=mail.example.com; [email protected]; [email protected]

The result would be:
program: policyd-spf
postfix_client_hostname: mail.example.com
postfix_client_ip: 192.192.192.192
postfix_from: [email protected]
postfix_to: [email protected]
postfix_spf_status: none

Status can be Pass, Fail or None (none = no SPF record).
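
A hedged sketch of such a pattern, assuming the redacted tail of the sample is the usual envelope-from=<addr>; receiver=<addr> pair (field names illustrative):

    POLICYD_SPF %{WORD:postfix_spf_status}; identity=%{WORD}; client-ip=%{IP:postfix_client_ip}; helo=%{HOSTNAME:postfix_client_hostname}; envelope-from=%{NOTSPACE:postfix_from}; receiver=%{NOTSPACE:postfix_to}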

No smtpd NOQUEUE mails in log

Hi there. With the aggregation module I get no NOQUEUE messages (like this: postfix/smtpd: NOQUEUE: reject: RCPT from test.sender: 450 4.2.0 test@receptient: Recipient address rejected: Greylisted for 60 seconds; from=[email protected] to=test@receptient proto=ESMTP helo=<test.sender>).
This is caused by this line in the aggregation module:
if ![postfix_queueid] { drop {} }
So, there are two solutions:
First: remove the drop of these messages, but that would let a large number of log events through.
Second: add NOQUEUE as a kind of queue ID. I prefer the second way.

So, if you agree, I can make a pull request with

POSTFIX_QUEUEID ([0-9A-F]{6,}|[0-9a-zA-Z]{12,}|NOQUEUE)
and
POSTFIX_SMTPD_NOQUEUE %{POSTFIX_QUEUEID:postfix_queueid}: blablabla

Not working with logstash 6.X

After upgrading to Logstash 6 this does not work anymore :(
Can anyone help me find a way to get it working again?

postfix grok patterns doesn't work

I'm trying to filter postfix logs using the grok patterns provided.

The thing is, it's not filtering the postfix logs properly. It is not segregating the message field, which contains various fields like FROM, TO, HOST, STATUS, NRCPT, etc.

The output just comes as

"message" => "Jun 16 00:00:01 serverhost postfix/qmgr[2337]: 9B6G21E2221: from=[email protected], size=6273, nrcpt=1 (queue active)",
"@Version" => "1",
"@timestamp" => "2016-06-23T13:25:21.958Z",
"type" => "log",
"file" => "maillog.1",
"host" => "localhost.localdomain",
"offset" => "541"

The message comes through as-is, without being filtered.

I'm using Logstash, Elasticsearch and Kibana, all at the latest stable release.

Can any one help?

A little error

Hi,

I've installed the grok patterns and the filter conf file.

I just had to add this to the top of the grok file to make it work:

HOST [0-9a-zA-Z\.]*

Otherwise, I had this error:

:message=>"The error reported is: \n  pattern %{HOST:postfix_client_hostname} not defined"}

I thought maybe you would know whether my change is good, and whether you need to add this to the file itself or document it somewhere.

I'm running version 2.0.0 on CentOS 7, and after I added the line it seems to be working fine.

The way to match

Dear authors,
Like issue #66, I have some confusion about what I saw in 50-filter-postfix.conf and postfix.grok.
First, I push my log up, for example like this:
Sep 30 00:00:02 SendingGW2 postfix/pickup[14400]: 321511FE45: uid=0 from=

The log will then be matched by the [program] =~ /^postfix.*\/pickup$/ branch, after which it is processed against the patterns in the /etc/logstash/patterns.d directory, where it matches POSTFIX_PICKUP:
POSTFIX_PICKUP %{POSTFIX_KEYVALUE}
--> POSTFIX_KEYVALUE %{POSTFIX_QUEUEID:postfix_queueid}: %{GREEDYDATA:postfix_keyvalue_data}

With my log matched, I just see postfix_queueid=321511FE45, but I'm confused about how "uid=0 from=" maps to %{GREEDYDATA:postfix_keyvalue_data}.

Do I need to change your configuration or not, and how can I edit your files to match my log?
Thanks and best regards.

Fields are not loaded if an aggregate filter is used

Thank you very much for the work done. Everything works fine.
I'm using the following setup: Filebeat -> Logstash -> Elasticsearch
I use your filters in my work. I noticed one peculiarity: if I use 51-filter-postfix-aggregate.conf, data for the postfix/anvil and postfix/scache fields stops flowing to Elasticsearch. As soon as I remove the aggregation, these fields are loaded into Elastic again. The fields that I set manually also stop loading.
Please tell me, what can be changed in 51-filter-postfix-aggregate.conf to make this work correctly?
Thanks for your help

Messages status doesn't work

First, thank you for this dashboard.

So, I have configured all the tools as in the description, but the panels "Messages Total", "Messages status totals" and "status Over time" don't work; I only have the logs at the bottom.

Do you have an idea how to resolve this problem?

Thank you

Pattern for smtp / PIX workarounds missing?

I am not using the patterns directly in Logstash, but tried to re-create them for direct ingest streams. So I am not sure whether I might have missed a pattern, but I just stumbled over this line where the groks did not match:

Jul 29 08:48:33 mx-out-01 postfix/smtp[9833]: B0ADE20FC7: enabling PIX workarounds: disable_esmtp delay_dotcrlf for example.com[93.184.216.34]:25
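
A hedged sketch of a matching pattern, assuming the repository's POSTFIX_QUEUEID and POSTFIX_RELAY_INFO patterns (the workaround field name is illustrative):

    POSTFIX_SMTP_PIX %{POSTFIX_QUEUEID:postfix_queueid}: enabling PIX workarounds: %{DATA:postfix_pix_workaround} for %{POSTFIX_RELAY_INFO}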

elastic search mapping

Hello,

can anybody share the mapping they used for Elasticsearch?

Any help will be appreciated.

Is this compatible with the latest Logstash?

Is this version compatible with the latest Logstash and Kibana 4? The reason I ask is that when I follow the readme, I get this line in my Logstash log and the Logstash service stops:

{:timestamp=>"2015-12-16T15:26:30.889000+0100", :message=>"Error: Expected one of #, input, filter, output at line 33, column 1 (byte 738) after "}
{:timestamp=>"2015-12-16T15:26:30.897000+0100", :message=>"You may be interested in the '--configtest' flag which you can\nuse to validate logstash's configuration before you choose\nto restart a running system."}

If I remove the 50-filter-postfix.conf file and restart Logstash, all is running again.

program / message vs. syslog_program / syslog_message

I'm new to Logstash, so I based a lot of my setup on the documentation.

In the configuration examples they create syslog_* fields instead of the unprefixed ones you are using.

Changing the field names might help others that base their setup on the official example (and realize that 50-filter-postfix.conf runs before logstash-syslog.conf ;-)).

Excess postfix_ fields created by unicode email subject lines

Thanks for this great set of patterns+config, it's already successfully parsing millions of postfix log lines per day!

Unfortunately, there seems to be an issue with non-ascii email subject lines like "Grüße ‒ test":

When parsing this line

Dec 14 11:52:17 mail.example.com postfix/cleanup[20534]: D95341E50350: info: header Subject: =?utf-8?Q?Gr=C3=BC=C3=9Fe?= =?utf-8?Q?_=E2=80=92?= test from local; from=<[email protected]> to=<[email protected]>

The resulting document in ES is mangled (screenshot omitted).

The problem is that it causes thousands of postfix_ fields.

"postfix_smtp_response" parsing issues

Seems like there may be issues with this; I started seeing some random terms appearing, such as:
*postfix_http://support.google.com/mail/bin/answer.py?hl.raw
*postfix_http://support.google.com/mail/bin/answer.py?answer.raw
*postfix_http://postmaster.1and1.com/en/error-messages?ip.raw
*postfix_http://support.google.com/mail/bin/answer.py?answer

Here's the full syslog output of the log line that I believe generated the first item in my list:

2015-04-22T09:44:35.282606-07:00 gallifrey postfix/smtp[62812]: 4E74613FE76: to=<[email protected]>, relay=gmail-smtp-in.l.google.com[2607:f8b0:400e:c04::1b]:25, delay=1.9, delays=0.05/0/1.1/0.71, dsn=5.7.1, status=bounced (host gmail-smtp-in.l.google.com[2607:f8b0:400e:c04::1b] said: 550-5.7.1 [2607:4100:3:25:100::2      12] Our system has detected that this 550-5.7.1 message is likely unsolicited mail. To reduce the amount of spam sent 550-5.7.1 to Gmail, this message has been blocked. Please visit 550-5.7.1 http://support.google.com/mail/bin/answer.py?hl=en&answer=188131 for 550 5.7.1 more information. qc12si8475009pab.211 - gsmtp (in reply to end of DATA command))

The reason I think it's messing with the smtp_response parsing is that I get these grok'd terms as a result:
*postfix_smtp_response in reply to end of DATA command)
*postfix_http://support.google.com/mail/bin/answer.py?hl en&answer=188131

Hope you have time to take a look. I'll update this if I figure out a nice fix.

Explaining some keyword

Dear authors,
My name is Tan and I come from Vietnam. At the moment I'm focusing on log processing for monitoring.
I read a lot of documents and found your work very useful, but there are some keywords that I don't understand well, so could you explain them to me?
For example:

if [program] =~ /^postfix.*\/anvil$/ {
    grok {
        patterns_dir => "/etc/logstash/patterns.d"
        match => [ "message", "%{POSTFIX_ANVIL}" ]
        tag_on_failure => [ "_grok_postfix_anvil_nomatch" ]
        add_tag => [ "_grok_postfix_success" ]
    }
}

---> In if [program]: what is program and what is its function?

Thank you and best regards!

Not getting new columns with aggregation

This is my variation of the aggregation script:

filter {
  if ![messageid] {
    drop {}
  } else if [program] == "amavis" and [message] =~ /(?i)message\-id/ {
    aggregate {
      task_id => "%{messageid}"
      code => "
        map['avamis_status'] ||= event.get('status')
        map['avamis_reason'] ||= event.get('reason')
        map['avamis_from'] ||= event.get('from')
        map['avamis_to'] ||= event.get('to')
        map['avamis_size'] ||= event.get('size')
      "
    }
  } else if [program] == "dovecot" and [message] =~ /(?i)msgid=/ {
    aggregate {
      task_id => "%{messageid}"
      code => "
        map['dovecot_status'] ||= event.get('status')
      "
    }
  } else if [program] == "postfix/cleanup" and [message] =~ /(?i)message\-id=/ {
    aggregate {
      task_id => "%{messageid}"
      code => "
       map.each do |key, value|
         event.set(key, value)
       end
     "
    }
  }
}

The problem is that postfix/cleanup doesn't get the aggregated columns appended to its data by messageid.

Could it be the order that's messing things up?

Btw, I set the pipeline.workers: 1, as required by the documentation, to force the execution in a single thread.

Any help would be precious.

Unable to parse logs (_grokparsefailure).

Hi,
I am using dbmail, and while shipping /var/log/mail.log, Logstash isn't able to parse the logs; in Kibana it shows [tags: _grokparsefailure].

Below is the snippet of my /var/log/mail.log

Apr 20 10:07:35 dbmail postfix/lmtp[29199]: master_notify: status 1
Apr 20 10:07:35 dbmail postfix/lmtp[29199]: connection closed
Apr 20 10:07:35 dbmail postfix/smtpd[29468]: disconnect from mail-la0-f50.google.com[192.168.215.50]
Apr 20 10:07:40 dbmail postfix/smtpd[28310]: connect from localhost[127.0.0.1]
Apr 20 10:07:40 dbmail postfix/smtpd[28310]: disconnect from localhost[127.0.0.1]

Below is the snippet of logstash log
{:timestamp=>"2015-04-20T06:05:24.332000-0400", :message=>"Exception in lumberjack input", :exception=>#<LogStash::ShutdownSignal: LogStash::ShutdownSignal>, :level=>:error}

Error when exporting from Logstash to Elasticsearch

Hello,

I'm using the following setup: Filebeat -> Logstash -> Elasticsearch

In the Logstash config I'm using, as suggested, a 49-postfix-fix.conf (to give me the needed input fields) and the 50-postfix-filter.conf.

When Filebeat ships logs to Logstash, I'm getting the following error in Logstash:

[2017-02-22T14:54:34,225][WARN ][logstash.outputs.elasticsearch] Failed action. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-postfix-2017.02.22", :_type=>"log", :_routing=>nil}, 2017-02-22T14:54:46.695Z mail02-dev.multicert.dev tlsmgr_cache_run_event: start TLS smtpd session cache cleanup], :response=>{"index"=>{"_index"=>"logstash-postfix-2017.02.22", "_type"=>"log", "_id"=>"AVpmURTOT-_IxuMBgvGH", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Mixing up field types: class org.elasticsearch.index.mapper.TextFieldMapper$TextFieldType != class org.elasticsearch.index.mapper.KeywordFieldMapper$KeywordFieldType on field message"}}}}}

Can you help me, please?

Kind Regards,
Pedro Queirós

Message-id produces wrong result

The following message-id breaks the parser:
Aug 14 15:29:57 X postfix/cleanup[11962]: 41qYP05TZCz5xY9: message-id==?utf-8?Q?=3CE1F7DC2C-82B5-4927-B0DB-0179227E665C=40aalborgf?=? =?utf-8?Q?=C3=B8rstehj=C3=A6lp=2Edk=3E?=

I suspect it is because of the space.

postfix/cleanup subject contains a URL

Hi

Firstly, thanks for the patterns; it is appreciated.

I am having an issue when the "warning" (we add the subject so we can easily find the email) contains strange characters, such as below:

May 25 11:59:15 mail4 postfix/cleanup[2185]: D8B07E3DB6: warning: header Subject: https://drive.google.com/file/d/0B8wxcvprDYVdlVsdf1kzOVk/view?usp=sharing from o1678917x173.outbound-mail.sendgrid.net[167.89.17.173]; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<o1678917x173.outbound-mail.sendgrid.net>

May 25 12:27:10 mail postfix/cleanup[10485]: AF35455A2C: warning: header Subject:  =?UTF-8?Q?ID&A_Awards_2016:_Bathroom_Over_=C2=A3100,000_Award_Coming_Soon?=? =?UTF-8?Q?...?[216.27.86.143]; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<drone086.ral.icpbounce.com>

Essentially we get:

:response=>{"create"=>{"_index"=>"logstash-2016.05.25", "_type"=>"syslog", "_id"=>"AVTnkYeOykeme6L6JtYY", "status"=>400, 
"error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Field name [postfix_https://drive.google.com/file/d/0B8wd8NTtsprDYVdlV29iY1kzOVk/view?usp] cannot contain '.'"}}}, :level=>:warn}

"[tags]"=>[{"message"=>"<22>May 25 12:27:10 mail postfix/cleanup[10485]: AF35455A2C: warning: header Subject:  =?UTF-8?Q?IDOver_=C2=A3100,000_Award_Coming_Soon?=? =?UTF-8?Q?...?= from drone086.ral.icpbounce.com[216.27.86.143]; from=<[email protected]> to=<frank.sawkins@czechandelo=<drone086.ral.icpbounce.com>", "@version"=>"1", "@timestamp"=>"2016-05-25T11:26:50.330Z", "host"=>"127.0.0.1", "port"=>37311, "type"=>"syslog", "program"=>["postfix/cleanup", "po_queueid"=>"AF35455A2C", "tags"=>["_grok_postfix_success"], "postfix_Subject:"=>"?UTF-8?Q?ID&A_Awards_2016:_Bathroom_Over_=C2=A3100000_Award_Coming_Soon?=?", "postfix_?UTF-8?Q?...?"="[email protected]", "postfix_to"=>"[email protected]", "postfix_proto"=>"ESMTP", "postfix_helo"=>"drone086.ral.icpbounce.com"}, "tags"]}>>]"_index"=>"logstash-2016.05.25", "_type"=>"syslog", "_id"=>"AVTnqspCykeme6L6LenL", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Field name [postfix_?UTF-8?'"}}}, :level=>:warn}

This seems to only be happening on the CLEANUP messages.

Is there a way that we can get these formatted correctly? (Something like: if the line contains warning: header Subject:, the rest is the data for postfix_subject.)

Any help is greatly appreciated.

Thanks

Could not index event to Elasticsearch

I discovered an error: if there are many indexes, then Logstash prints the following in its logs:

Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"%{id}", :_index=>"_integration_ms", :_type=>"integr_sybase", :routing=>nil}, #LogStash::Event:0x44b7f68], :response=>{"index"=>{"_index"=>"_integration_ms", "_type"=>"integr_syb", "_id"=>"%{id}", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text] in document with id '%{id}'. Preview of field's value: '{name=mail1.domain.com}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:299"}}}}}

That is, Filebeat tries to write data to each index.
With this output in Logstash:
output {
if "postfix" in [tags]{
elasticsearch {
hosts => "localhost:9200"
index => "postfix-%{+YYYY.MM.dd}"
}
}
}

and this filebeat.yml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/maillog*
    exclude_files: [".gz$"]
    tags: ["postfix"]

output.logstash:
  hosts: ["10.50.11.8:5044"]

the logs show the same errors and the index is not created.
Can you help?

_grok_postfix_command_counter_data_nomatch

I get the above error for the following line:

helo=1 auth=0/1 quit=1 commands=2/3

in the field

postfix_command_counter_data

It's because the auth field is missing in postfix.grok. Wouldn't it be better to simply use the kv filter for parsing this field? That should be a more general (and more robust) solution.
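
A sketch of that kv-based alternative, reusing the field the sample config already extracts; note that the n/m counters (e.g. rcpt=0/1) would land as string values like "0/1" and still need splitting:

    if [postfix_command_counter_data] {
        kv {
            source       => "postfix_command_counter_data"
            prefix       => "postfix_cmd_"
            remove_field => [ "postfix_command_counter_data" ]
        }
    }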

test suite is broken on travis

$ ruby test/test.rb
/home/travis/.rvm/rubies/ruby-2.2.7/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:59:in `require': cannot load such file -- minitest/autorun (LoadError)
	from /home/travis/.rvm/rubies/ruby-2.2.7/lib/ruby/site_ruby/2.2.0/rubygems/core_ext/kernel_require.rb:59:in `require'
	from test/test.rb:2:in `<main>'

The command "ruby test/test.rb" exited with 1.

Done. Your build exited with 1

Support for Logstash Beats plugin (Filebeat)

The current config assumes use of Logstash's syslog input to receive postfix logs. However, this implementation won't work if logs are transported using the Logstash Beats plugin instead of the syslog input, since the program field will not be present.

This should be mentioned in the docs. The following fix works for me: for people using Logstash Beats, adding this section before the very first if statement in the config file makes the program field available to subsequent expressions. Note that this assumes log lines are in the default syslog format.

    grok {
        match => { "message" => "%{SYSLOGTIMESTAMP} %{SYSLOGHOST} %{DATA:program}(?:\[%{POSINT}\])?: %{GREEDYDATA}" }
    }

Match statements don't match lines with carriage returns

I've been using your ES postfix grok patterns for quite some time. Amazing work! After upgrading to ES 5.5 I noticed that postfix messages were no longer getting tagged. After a bit of debugging I noticed that the following match:

match => [ "syslog_message", "^%{POSTFIX_SMTP}$" ]

Doesn't match syslog_messages ending with newlines or carriage returns. If I feed the following message to logstash:

"message" => "Aug 31 00:00:03 foo01 postfix/smtp[9147]: 30D232017FA: to=[email protected], relay=mail.foo.com[1.2.3.4]:25, delay=1.1, delays=0.04/0/0.03/1, dsn=2.6.0, status=sent (250 2.6.0 [email protected] [InternalId=306728030] Queued mail for delivery)\r"

It spits out a _grok_postfix_smtp_nomatch. If I change the match to include carriage returns and newlines:

match => [ "syslog_message", "^%{POSTFIX_SMTP}(\r|\n|$)" ]

It works as expected. I'm debating between altering the match or using gsub in my syslog input filter to remove carriage returns. Have you by any chance come across this? If so, which method did you use to get everything working? Thanks again for the amazing work!

postfix/local parsing

Hi! I found that you do not parse postfix/local lines. On purpose? I started on postfix/local parsing; just a quick version today, I can make a pull request next week:

postfix-grok:

# local patterns
# Fluent format: /^(?<queueid>[^ ]*): to=<(?<rcpt-to>[^ ]*)>, [^*]* relay=(?<relay>[^ ]*), [^*]* status=(?<status>[^ ]*)/
POSTFIX_LOCAL_TO to=%{NOTSPACE:postfix_to}
POSTFIX_LOCAL_RELAY relay=%{POSTFIX_RELAY_INFO}
POSTFIX_LOCAL_DELAY delay=%{NUMBER:postfix_delay_total}
POSTFIX_LOCAL_DELAYS delays=%{POSTFIX_DELAYS}
POSTFIX_LOCAL_STATUS status=%{NOTSPACE:postfix_status}
POSTFIX_LOCAL %{POSTFIX_QUEUEID:postfix_queueid}: %{POSTFIX_LOCAL_TO}, [^*]* %{POSTFIX_LOCAL_RELAY}, %{POSTFIX_LOCAL_DELAY}, delays=%{POSTFIX_DELAYS}, [^*]* %{POSTFIX_LOCAL_STATUS}

logstash config:

} else if [syslog_program] =~ /^postfix.*\/local$/ {
    grok {
        patterns_dir   => "/etc/logstash/patterns.d"
        match          => [ "message", "%{POSTFIX_LOCAL}" ]
        tag_on_failure => [ "_grok_postfix_local_nomatch" ]
        add_tag        => [ "_grok_postfix_success" ]
    }
}

It works for:

Feb 18 16:38:07 linloglogstash postfix/local[5789]: 2A22C263F6: to=[email protected], orig_to=root@localhost, relay=local, delay=0.07, delays=0.04/0/0/0.03, dsn=2.0.0, status=sent (delivered to command: procmail -a "$EXTENSION")

but not for:

Feb 18 16:26:39 linloglogstash postfix/local[5663]: 892A0205B6: to=ghdsgfhdslfh@localhost, relay=local, delay=0.05, delays=0.02/0/0/0.02, dsn=5.1.1, status=bounced (unknown user: "ghdsgfhdslfh")

because there is no orig_to in there, which I skip over in POSTFIX_LOCAL. Best would be to allow for optional "something=" parameters in between, without requiring them.
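
A hedged sketch of such a pattern with an explicitly optional orig_to, reusing the sub-patterns above (untested; the dsn handling and field names are illustrative):

    POSTFIX_LOCAL %{POSTFIX_QUEUEID:postfix_queueid}: %{POSTFIX_LOCAL_TO},(?: orig_to=%{NOTSPACE:postfix_orig_to},)? %{POSTFIX_LOCAL_RELAY}, %{POSTFIX_LOCAL_DELAY}, %{POSTFIX_LOCAL_DELAYS}, dsn=%{NOTSPACE:postfix_dsn}, %{POSTFIX_LOCAL_STATUS}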

Postfix QueueID Size

Hello,

In your grok pattern, you chose to match the postfix queue ID as 15+ characters. Upon testing at my workplace, we discovered that our IDs are 14 characters, not 15.

Can you change your rule to 14+?

postfix/error program

These patterns don't work for the postfix/error program

tags: _grokparsefailure, _grok_postfix_program_nomatch

message: 2DC5552A34: to=<[email protected]>, relay=none, delay=63495, delays=63350/144/0/0, dsn=4.4.2, status=deferred (delivery temporarily suspended: lost connection with mta7.am0.yahoodns.net[98.136.216.22] while sending RCPT TO)

How to use this script ?

Hi!
I'm starting to use Logstash and found your work for Postfix, but how can I use this with my other configurations?

I tried this (after adding your two files):

input{
  file {
    type => "linux-syslog"
    path => "/var/log/syslog"
  }
}

But without success… the linux-syslog pipeline indexes it with syslog_program, not program like in your filter.

Thanks for your help

optional field not optional

The following line

Nov 22 18:57:55 SERVER postfix/qmgr[1671]: QID: from=<>, size=3618, nrcpt=1 (queue active), QID: from=<>, size=3618, nrcpt=1 (queue active)

is parsed such that the field

postfix_from

contains

>

=> postfix_from should be optional

add include_keys into kv

Hi,

please add include_keys => [ "proto", "from", "helo", "client", "message-id", "to", "relay", "delay", "delays", "dsn" ] into kv {}.
Sometimes kv creates a field like postfix_10934gjoiasjgoeirqjg90234j09gqe, because sometimes the postfix from contains "=", for example (10934gjoiasjgoeirqjg90234j09gqe=[email protected]).

david
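
Applied to the kv block from the sample config, that suggestion would look roughly like this (a sketch; include_keys is a standard option of the kv filter and is applied before the prefix):

    kv {
        source       => "postfix_keyvalue_data"
        trim_value   => "<>,"
        prefix       => "postfix_"
        include_keys => [ "proto", "from", "helo", "client", "message-id", "to", "relay", "delay", "delays", "dsn" ]
        remove_field => [ "postfix_keyvalue_data" ]
    }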

New patterns

I added several patterns, so maybe they'll be helpful.

POSTFIX_CLEANUP_REPLACE %{POSTFIX_QUEUEID:postfix.queueid}: replace: header Message-(Id|ID): <%{NOTSPACE}> from %{POSTFIX_CLIENT_INFO}; %{POSTFIX_KEYVALUE_DATA:postfix.keyvalue_data}: Message-(Id|ID): <%{NOTSPACE}>

POSTFIX_SMTP_SSLAUTHERR %{POSTFIX_QUEUEID:postfix.queueid}: SASL authentication failed; server %{POSTFIX_RELAY_INFO} said: %{GREEDYDATA:postfix.smtp_response}

POSTFIX_CLEANUP %{POSTFIX_CLEANUP_MILTER}|%{POSTFIX_CLEANUP_REPLACE}|%{POSTFIX_WARNING}|%{POSTFIX_KEYVALUE}
POSTFIX_SMTP %{POSTFIX_SMTP_DELIVERY}|%{POSTFIX_SMTP_CONNERR}|%{POSTFIX_SMTP_SSLAUTHERR}|%{POSTFIX_SMTP_SSLCONNERR}|%{POSTFIX_SMTP_LOSTCONN}|%{POSTFIX_SMTP_TIMEOUT}|%{POSTFIX_SMTP_RELAYERR}|%{POSTFIX_TLSCONN}|%{POSTFIX_WARNING}|%{POSTFIX_SMTP_UTF8}|%{POSTFIX_TLSVERIFICATION}

POSTFIX_POSTMAP %{POSTFIX_WARNING}
POSTFIX_SCRIPT %{POSTFIX_WARNING}

Grok config file part:

if [program] =~ /^postfix.*\/postmap$/ {
    grok {
        patterns_dir   => "/etc/logstash/patterns"
        match          => [ "postfix.full_message", "^%{POSTFIX_POSTMAP}$" ]
        tag_on_failure => [ "_grok_postfix_postmap_nomatch", "_grokparsefailure" ]
        add_tag        => [ "_grok_postfix_success" ]
    }
} else if [program] =~ /^postfix.*\/postfix-script$/ {
    grok {
        patterns_dir   => "/etc/logstash/patterns"
        match          => [ "postfix.full_message", "^%{POSTFIX_SCRIPT}$" ]
        tag_on_failure => [ "_grok_postfix_script_nomatch", "_grokparsefailure" ]
        add_tag        => [ "_grok_postfix_success" ]
    }
}

Also there are several postfix.smtp_response patterns:

if [program] =~ /^postfix.*\/smtp$/ {
  grok {
      patterns_dir   => "/etc/logstash/patterns"
      match          => [ "postfix.full_message", "^%{POSTFIX_SMTP}$" ]
      tag_on_failure => [ "_grok_postfix_smtp_nomatch", "_grokparsefailure" ]
      add_tag        => [ "_grok_postfix_success" ]
  }
  if "postfix.smtp_response" {
    grok {
        patterns_dir   => "/etc/logstash/patterns"
        match => {
          "postfix.smtp_response" => [
            "^host %{NOTSPACE} said: %{POSTFIX_STATUS_CODE:postfix.status_code}",
            "%{POSTFIX_STATUS_CODE:postfix.status_code}(-| )%{POSTFIX_STATUS_CODE_ENHANCED:postfix.status_code_enhanced} %{POSTFIX_WARNING_LEVEL:postfix.message_level}: %{GREEDYDATA:postfix.message}",
            "%{POSTFIX_STATUS_CODE:postfix.status_code}(-| )%{POSTFIX_STATUS_CODE_ENHANCED:postfix.status_code_enhanced} %{GREEDYDATA:postfix.message}",
            "%{POSTFIX_STATUS_CODE:postfix.status_code} %{GREEDYDATA:postfix.message}"
          ]
        }
        tag_on_failure => [ "_grok_postfix_smtp_response_nomatch", "_grokparsefailure" ]
        add_tag        => [ "_grok_postfix_success" ]
    }
  }
}

postfix_action

Please change the line

POSTFIX_ACTION (discard|reject|defer|accept|header-redirect)
to
POSTFIX_ACTION (discard|reject|defer|accept|header-redirect|filter)

Now I can use the pattern for my logs from Postfix (Zimbra).

minor nomatch errors

_grok_postfix_local_nomatch

with

Nov 23 20:15:38 elk postfix/local[XXXX]: warning: dict_nis_init: NIS domain name not set - NIS lookups disabled, warning: dict_nis_init: NIS domain name not set - NIS lookups disabled

This is probably not so important. More important:

_grok_postfix_local_nomatch

with

Nov 23 20:15:38 elk postfix/smtp[XXXX]: QID: replace: header From: "(Cron Daemon)" <XXX>: From: XXX, QID: replace: header From: "(Cron Daemon)" <XXX>: From: XXX

warn_if_reject

Just brainstorming here; if you don't think it's feasible, just close this issue :)

I noticed that warn_if_reject entries are not parsed correctly. For example:

NOQUEUE: reject_warning: RCPT from example.com[93.184.216.34]: 553 5.7.1 <[email protected]>: 
    Sender address rejected: not owned by user ph123; 
    from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<example.com>

This can be fixed by adding reject_warning to POSTFIX_ACTION?

POSTFIX_ACTION (accept|defer|discard|filter|header-redirect|reject|reject_warning)

Environment variable `POSTFIX_COMMAND_COUNTER_DATA` is not set

I get this message when I try to use your patterns:

[2017-07-18T13:23:59,993][ERROR][logstash.agent ] Cannot create pipeline {:reason=>"Cannot evaluate ${POSTFIX_COMMAND_COUNTER_DATA}. Environment variable POSTFIX_COMMAND_COUNTER_DATA is not set and there is no default value given."}

Logstash 5.5 / Ubuntu 17.04

(Logstash expands ${NAME} in config files as environment-variable references, so this error typically means a pattern reference was written as ${POSTFIX_COMMAND_COUNTER_DATA} somewhere instead of %{POSTFIX_COMMAND_COUNTER_DATA}.)
