
pfelk's Introduction

Version badge

YouTube

Elastic Integration

pfSense/OPNsense + Elastic Stack

pfelk dashboard

Contents

Prerequisites

  • Ubuntu Server v20.04+ or Debian Server 11+ (stretch and buster tested)
  • pfSense v2.5.0+ or OPNsense 23.0+
  • Minimum of 8GB of RAM (Docker requires more); 32GB recommended (WiKi Reference)
  • Setting up remote logging (WiKi Reference); a minimal listener sketch follows this list
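
For reference, remote logging simply means pointing the firewall's syslog target at the host running Logstash. A minimal listener sketch (the commonly used UDP port 5140 is an assumption here; the actual port and options live in the project's 01-inputs.conf):

input {
  udp {
    port => 5140        # port the firewall's remote syslog target is configured to use (assumption)
    type => "syslog"
  }
}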

pfelk is a highly customizable open-source tool for ingesting and visualizing your firewall traffic with the full power of Elasticsearch, Logstash and Kibana.

Key features:

  • ingest and enrich your pfSense/OPNsense firewall traffic logs by leveraging Logstash (see the sketch after this list)

  • search your indexed data in near real time with the full power of Elasticsearch

  • visualize your network traffic with interactive dashboards, Maps, and graphs in Kibana
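
To illustrate the enrichment step, here is a simplified sketch of the kind of Logstash filter pfelk applies (field names follow the configurations quoted later on this page; this is not the project's full 11-firewall.conf):

filter {
  if "pf" in [tags] {
    # attach GeoIP data to the source address before the event is indexed
    geoip {
      source  => "[source][ip]"
      target  => "[source][geo]"
      add_tag => [ "GeoIP" ]
    }
  }
}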

Supported entries include:

  • pfSense/OPNsense setups
  • TCP/UDP/ICMP protocols
  • KEA-DHCP (v4/v6) message types with dashboard - in development
  • DHCP (v4/v6) message types with dashboard - deprecated
  • IPv4/IPv6 mapping
  • pfSense CARP data
  • OpenVPN log parsing
  • Unbound DNS Resolver with dashboard and Kibana SIEM compliance
  • Suricata IDS with dashboard and Kibana SIEM compliance
  • Snort IDS with dashboard and Kibana SIEM compliance
  • Squid with dashboard and Kibana SIEM compliance
  • HAProxy with dashboard
  • Captive Portal with dashboard
  • NGINX with dashboard

pfelk aims to replace the vanilla pfSense/OPNsense web UI with extended search and visualization features. You can deploy this solution via ansible-playbook, docker-compose, bash script, or manually.

pfelk overview

  • pfelk-overview

Quick start

Installation

docker-compose

script installation method

  • Download the installer script from the pfelk repository:
    $ wget https://raw.githubusercontent.com/pfelk/pfelk/main/etc/pfelk/scripts/pfelk-installer.sh
  • Make the script executable:
    $ chmod +x pfelk-installer.sh
  • Run the installer script:
    $ sudo ./pfelk-installer.sh
  • Configure Security here
  • Templates here
  • Finish Configuring here
  • YouTube Guide

manual installation method

Roadmap

This is the experimental public roadmap for the pfelk project.

See the roadmap »

Comparison to similar solutions

Comparisons »

Contributing

Please refer to the CONTRIBUTING file. Collectively we can enhance and improve this product. Issues, feature requests, PRs, and documentation contributions are encouraged and welcomed!

License

This project is licensed under the terms of the Apache 2.0 open source license. Please refer to LICENSE for the full terms.

pfelk's People

Contributors

13bm, a3ilson, ax42, bharathkarumudi, carlphilipp, clickhereforbadcode, eckley, fktkrt, gauthig, jclendineng, kaeltis, kaismax, kdgundermann, linkyone, mmartinello, nuggie, opoplawski, pclever1, pilotboy72, ragnarensar, revere521, shacthulu, shagoy, swedishmike, swiftbird07, tarkilhk, wylde780, yusi1


pfelk's Issues

Fix for 13-snort.conf Geo_IP and ET_Sig

Not really a bug, but not an enhancement either. I'm a super noob with the ELK stack, and your project here has been huge for getting me started. That said:

I was seeing no ET Sig info or Geo IP info for my Snort alerting.

Through trial and error I modified your existing 13-snort.conf to the following to fix that:

I'm not sure if this is specific to me, or if it will help others as well. (updated several times to try to get the code tag to not break everything --- sorry)

# 13-snort.conf
filter {
  if "pf" in [tags] and [syslog_program] =~ /^snort/ {
    mutate {
      add_tag => [ "Snort" ]
    }
    grok {
      patterns_dir => ["/etc/logstash/conf.d/patterns"]
      match => [ "syslog_message", "%{SNORT}"]
    }
    if ![geoip] and [ids_source][ip] {
      # Check if source IP address is private. ##change source to ids_source in all below##
      cidr {
        address => [ "%{[ids_source][ip]}" ]
        network => [ "0.0.0.0/32", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "fc00::/7", "127.0.0.0/8", "::1/128", "169.254.0.0/16", "fe80::/10", "224.0.0.0/4", "ff00::/8", "255.255.255.255/32", "::" ]
        add_field => { "[@metadata][ids_source][locality]" => "private" }
      }
      # Check to see if source.locality exists. If it doesn't, the source.ip didn't match a private address space and locality must be public.
      if ![@metadata][ids_source][locality] {
        geoip {
          add_tag => [ "GeoIP" ]
          source => "[ids_source][ip]"
          database => "/usr/share/GeoIP/GeoLite2-City.mmdb"
          target => "[ids_source][geo]"
        }
      }
      # if [application] =~ /^snort/ {  ##this looks like it should be reading syslog_program instead##
      if [syslog_program] =~ /^snort/ {
        mutate {
          add_tag => [ "ET-Sig" ]
          add_field => [ "Signature_Info", "http://doc.emergingthreats.net/bin/view/Main/%{[ids_sig_id]}" ]
        }
      }
    }
  }
}

Opnsense 19.7.3 Missing fields in index pattern

I've got ELK installed and pulling logs from OPNsense 19.7.3. The issue is that I'm only seeing 20 fields in the index pattern. I suppose it is due to the grok filter, but I can't seem to find the issue.
Any help is appreciated. Thank you !

Attached are the logs from Logstash. logstash-plain.log
Screenshot 2019-09-10 at 11 50 39 AM
Screenshot 2019-09-10 at 11 48 48 AM
Screenshot 2019-09-10 at 11 49 25 AM

dp-ubuntu@ubuntu:/etc/logstash/conf.d$ ls
01-inputs.conf 10-syslog.conf 11-pf.conf 30-outputs.conf patterns

dp-ubuntu@ubuntu:/etc/logstash/conf.d$ cat 11-pf.conf

filter { if "pf" in [tags] { grok { add_tag => [ "firewall" ] match => [ "message", "<(?<evtid>.*)>(?<datetime>(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s+(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]) (?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:[0-5][0-9])) (?<prog>.*?): (?<msg>.*)" ] } mutate { gsub => ["datetime"," "," "] } date { match => [ "datetime", "MMM dd HH:mm:ss" ] timezone => "Asia/Singapore"

P.S - I've set the timezone correctly and also the IP of the firewall

World Heat Map field - Could not locate that index-pattern-field (id: geoip.location)

Describe the bug
After following all of the instructions in the README and installing the searches, visualizations, and dashboard.json files, I'm presented with this error when I view the World Heat Map, along with being prompted to try and fix it. The Top Country word graph works without issue, and I can see geolocation fields being populated on the docs themselves.

Using Logstash 7.3 and Kibana 7.2.

Thanks for this project; it is helping me understand Logstash processing much better.

More Snort things I did... if they are useful and/or fun

Is your feature request related to a problem? Please describe.
Not a problem; I just developed some more Snort-based items and would like to share them if useful.

Describe the solution you'd like
I created a dashboard and some visualizations based on yours, added some things to 13-Snort.conf for source GeoIP, and a template to add them to the geo hash, all based on your work, so complete plagiarism.

I'll upload or send the ndjson file if you want; not sure how to do that here.

Here is my 13-Snort.conf,

# 13-snort.conf
filter {
  if "pf" in [tags] and [syslog_program] =~ /^snort/ {
    mutate {
      add_tag => [ "Snort" ]
    }
    grok {
      patterns_dir => ["/etc/logstash/conf.d/patterns"]
      match => [ "syslog_message", "%{SNORT}"]
    }
    if ![geoip] and [ids_source][ip] {
    # Check if source IP address is private.
      cidr {
        address => [ "%{[ids_source][ip]}" ]
        network => [ "0.0.0.0/32", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "fc00::/7", "127.0.0.0/8", "::1/128", "169.254.0.0/16", "fe80::/10", "224.0.0.0/4", "ff00::/8", "255.255.255.255/32", "::" ]
        add_field => { "[@metadata][ids_source][locality]" => "private" }
      }
    # Check to see if source.locality exists. If it doesn't the source.ip didn't match a private address space and locality must be public.
      if ![@metadata][ids_source][locality] {
        geoip {
          add_tag => [ "GeoIP" ]
          source => "[ids_source][ip]"
          database => "/usr/share/GeoIP/GeoLite2-City.mmdb"
          target => "[ids_source][geo]"
        }
        geoip {
         default_database_type => 'ASN'
         database => "/usr/share/GeoIP/GeoLite2-ASN.mmdb"
         #cache_size => 5000
         source => "[ids_source][ip]"
         target => "[ids_source][as]"
        }
        mutate {
         rename => { "[ids_source][as][asn]" => "[ids_source][as][number]"}
         rename => { "[ids_source][as][as_org]" => "[ids_source][as][organization][name]"}
        }
      }
      if [syslog_program] =~ /^snort/ {
        mutate {
          add_tag => [ "ET-Sig" ]
          add_field => [ "Signature_Info", "http://doc.emergingthreats.net/bin/view/Main/%{[ids_sig_id]}" ]
        }
      }
    }
  }
}

Here is the geo hash template based on yours. I just call it "pf-location-template2" because I am very creative.

PUT _template/pf-location-template2

{
  "pf-location-template2" : {
    "order" : 0,
    "index_patterns" : [
      "pf-*"
    ],
    "settings" : { },
    "mappings" : {
      "properties" : {
        "ids_source" : {
          "properties" : {
            "geo" : {
              "dynamic" : true,
              "properties" : {
                "ip" : {
                  "type" : "ip"
                },
                "location" : {
                  "type" : "geo_point"
                }
              }
            },
            "as" : {
              "dynamic" : true,
              "properties" : {
                "ip" : {
                  "type" : "ip"
                },
                "location" : {
                  "type" : "geo_point"
                }
              }
            }
          }
        }
      }
    },
    "aliases" : { }
  }
}

Problems with the new grok patterns

I get this after upgrading to the latest grok file:

[2018-06-09T00:21:47,250][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"} [2018-06-09T00:21:47,259][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"} [2018-06-09T00:21:47,590][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified [2018-06-09T00:21:47,796][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.4"} [2018-06-09T00:21:47,956][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600} [2018-06-09T00:21:50,090][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50} [2018-06-09T00:21:50,322][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}} [2018-06-09T00:21:50,325][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"} [2018-06-09T00:21:50,489][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"} [2018-06-09T00:21:50,529][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6} [2018-06-09T00:21:50,529][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the typeevent field won't be used to determine the document _type {:es_version=>6} [2018-06-09T00:21:50,535][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil} [2018-06-09T00:21:50,538][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}} [2018-06-09T00:21:50,545][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]} [2018-06-09T00:21:50,702][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::FilterDelegator:0x56b6146 @metric_events_out=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: out value:0, @metric_events_in=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: in value:0, @metric_events_time=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: duration_in_millis value:0, @id=\"bad3ef91a4a713af20e15b5f872cf70a9cb37544e4cccc9195af22e86b738e01\", @klass=LogStash::Filters::Grok, @metric_events=#<LogStash::Instrument::NamespacedMetric:0x3193091d @metric=#<LogStash::Instrument::Metric:0x53bc1c96 @collector=#<LogStash::Instrument::Collector:0x7ae16f64 @agent=nil, 
@metric_store=#<LogStash::Instrument::MetricStore:0x24de502f @store=#<Concurrent::Map:0x00000000000fbc entries=4 default_proc=nil>, @structured_lookup_mutex=#<Mutex:0xd2c6e48>, @fast_lookup=#<Concurrent::Map:0x00000000000fc0 entries=195 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :filters, :bad3ef91a4a713af20e15b5f872cf70a9cb37544e4cccc9195af22e86b738e01, :events]>, @filter=<LogStash::Filters::Grok patterns_dir=>[\"/etc/logstash/conf.d/patterns\"], match=>{\"message\"=>\"%{DHCPD}\"}, id=>\"bad3ef91a4a713af20e15b5f872cf70a9cb37544e4cccc9195af22e86b738e01\", enable_metric=>true, periodic_flush=>false, patterns_files_glob=>\"*\", break_on_match=>true, named_captures_only=>true, keep_empty_captures=>false, tag_on_failure=>[\"_grokparsefailure\"], timeout_millis=>30000, tag_on_timeout=>\"_groktimeout\">>", :error=>"pattern %{DHCPD} not defined", :thread=>"#<Thread:0x4fac4934@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 run>"} [2018-06-09T00:21:50,887][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Grok::PatternError: pattern %{DHCPD} not defined>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/jls-grok-0.11.4/lib/grok-pure.rb:123:inblock in compile'", "org/jruby/RubyKernel.java:1292:in loop'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/jls-grok-0.11.4/lib/grok-pure.rb:93:in compile'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:281:in block in register'", "org/jruby/RubyArray.java:1734:in each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:275:in block in register'", "org/jruby/RubyHash.java:1343:in each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:270:in register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:342:in register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:353:in block in register_plugins'", "org/jruby/RubyArray.java:1734:in each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:353:in register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:731:in maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:363:in start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290:in run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:250:in block in start'"], :thread=>"#<Thread:0x4fac4934@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 run>"} [2018-06-09T00:21:50,910][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: LogStash::PipelineAction::Create/pipeline_id:main, action_result: false", :backtrace=>nil}

grok parse error

While trying to start fresh I've encountered some grok errors. Based on the tags, it appears the error occurs in 10-pf.conf. Below is an event that did not match in 05-syslog.conf. I'm not sure how to debug this further.

{
  "host"       => "10.42.0.1",
  "@Version"   => "1",
  "type"       => "syslog",
  "message"    => "<134>Oct 1 22:13:51 filterlog: 5,,,1000000103,igb0.20,match,block,in,4,0x0,,113,10251,0,none,17,udp,132,83.248.107.164,161.184.221.253,8999,51412,112",
  "@timestamp" => 2019-10-02T04:13:51.412Z
}

Below is an event when matching:
{
  "@timestamp" => 2019-10-02T04:10:34.782Z,
  "host"       => "10.42.0.1",
  "tags"       => [
    [0] "pf",
    [1] "Ready",
    [2] "_grokparsefailure"
  ],
  "@Version"   => "1",
  "event"      => {
    "original" => "<134>Oct 1 22:10:34 filterlog: 7,,,1000000105,igb0.21,match,block,in,6,0x00,0x00000,1,UDP,17,438,fe80::7add:12ff:fe83:eba,ff02::c,60004,1900,438"
  },
  "type"       => "syslog"
}

Thanks

Only 16 fields in elastic

I installed the ELK stack according to the README and pointed pfSense to the ELK stack server, but I only get 16 fields after adding the index:

OS: Ubuntu 18.04

Elastic, Logstash, Kibana (please complete the following information):

  • Version v7.4

Aantekening 2019-11-04 180827

Help with ICMPv6/PIM grokparsefailure on pfSense 2.4.4-p3

Thanks for putting this together. I really appreciate it.

I am getting _grokparsefailure and _geoip_lookup_failure for a few different messages, like ICMPv6 and PIM messages. Is this normal? Can I fix this or turn them off?

tags: pfsense, firewall, _grokparsefailure, _geoip_lookup_failure
message: 5,,,1000000003,em0,match,block,in,6,0xe0,0x00000,255,ICMPv6,58,32,fe80::2aa:6eff:fec1:5019,ff02::1,

tags: pfsense, firewall, _grokparsefailure, _geoip_lookup_failure
message: 5,,,1000000003,em0,match,block,in,6,0xe0,0x00000,1,PIM,103,334,fe80::2aa:6eff:fec1:5019,ff02::d,

Thanks!
JuicyRoots
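
Not from the thread, but the _geoip_lookup_failure part can be avoided by classifying link-local, multicast, and private sources before the GeoIP lookup, the same way the firewall filter quoted later on this page does; the _grokparsefailure is a separate issue with the trailing empty protocol fields (see the "ICMPv6 not parsing" issue below). A rough sketch:

filter {
  if [source][ip] {
    # mark link-local / multicast / private sources so the GeoIP lookup is skipped
    cidr {
      address   => [ "%{[source][ip]}" ]
      network   => [ "fe80::/10", "ff00::/8", "fc00::/7", "::1/128", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16" ]
      add_field => { "[src_locality]" => "private" }
    }
    if ![src_locality] {
      geoip {
        source  => "[source][ip]"
        target  => "[source][geo]"
        add_tag => [ "GeoIP" ]
      }
    }
  }
}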

Parse Openvpn Logs also

Would it also be possible to parse OpenVPN logs in these confs? I would like to monitor failed login attempts (usernames) from an OpenVPN server, along with the country of these failed logins (GeoIP).

I attempted to give this a shot but failed miserably.
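
A rough starting point, not part of the repository: grok the authentication messages into user and address fields, then run GeoIP on the address. The pattern below is hypothetical and only matches one style of failure line, so adjust it to the exact OpenVPN messages you see:

filter {
  if "openvpn" in [tags] {
    grok {
      # hypothetical pattern for a line such as:
      #   user 'alice' could not authenticate from 203.0.113.10:51820
      match          => { "pf_message" => "user '%{DATA:[openvpn][user]}' could not authenticate from %{IP:[openvpn][remote_ip]}:%{POSINT:[openvpn][remote_port]}" }
      tag_on_failure => [ "_openvpn_auth_nomatch" ]
    }
    if [openvpn][remote_ip] {
      geoip {
        source  => "[openvpn][remote_ip]"
        target  => "[openvpn][geo]"
        add_tag => [ "GeoIP" ]
      }
    }
  }
}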

Did some debugging after code refactor

Describe the bug
minor code bugs - please feel free to review anything helpful

In 11-firewall.conf the "network.direction" field is now missing and replaced by simply "direction"; I think this is part of the grok pattern changes:

 mutate {
      add_field => { "[event][dataset]" => "firewall"}
      update => { "[direction]" => "%{[direction]}bound" }
      update => { "[network][type]" => "ipv%{[network][type]}" }
    }
  }
}

In 13-snort.conf, the ASN data part is missing, and syslog_program should be changed to pf_program in the conditional around the last mutate statement:

 }
        geoip {
         default_database_type => 'ASN'
         database => "/usr/share/GeoIP/GeoLite2-ASN.mmdb"
         #cache_size => 5000
         source => "[ids_source][ip]"
         target => "[ids_source][as]"
        }
        mutate {
         rename => { "[ids_source][as][asn]" => "[ids_source][as][number]"}
         rename => { "[ids_source][as][as_org]" => "[ids_source][as][organization][name]"}
        }
      }
      if [pf_program] =~ /^snort/ {
        mutate {
          add_tag => [ "ET-Sig" ]
          add_field => [ "Signature_Info", "http://doc.emergingthreats.net/bin/view/Main/%{[ids_sig_id]}" ]
        }
      }
    }
  }
}

In 15-others.conf I added unbound, as it's common for pfBlockerNG, but there aren't really any patterns yet for DNS; and I changed apinger to dpinger, since for my pfSense 2.4.4-RELEASE-p3 it's now dpinger.

    if [pf_program] =~ /^unbound/ {
      mutate {
        add_tag => [ "unbound" ]
      }
    }
    if [pf_program] =~ /^dpinger/ {
      mutate {
        add_tag => [ "dpinger" ]
      }
    }


For the new patterns I was getting _grokparsefailures for just some IGMP entries.
The reason appears to be that the "reason" field value sometimes has a hyphen; I commonly see the alert value "ip-option", which won't work with the standard WORD pattern. This is what I have found as a solution: (?<reason>\b[\w\-]+\b)

PF_LOG_DATA %{INT:rule_number},%{INT:sub_rule}?,,%{INT:tracker},%{DATA:interface},(?<reason>\b[\w\-]+\b),%{WORD:action},%{WORD:direction},

That was a huge refactor. I think it's easier to read and digest what's happening this way. Thanks!

Unable to start Logstash

Hello. I am having problems getting Logstash to run.
Software: CentOS 7, logstash-7.5.0-1

I followed all the steps from the documentation. The error I get is:

logstash: [2019-12-12T17:09:39,548][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.5.0"}
logstash: [2019-12-12T17:09:44,075][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [^\\]\[,], "]" at line 322, column 23 (byte 10131) after filter {\n if "pf" in [tags] {\n if [pf_program] =~ /^dhcpd$/ {\n mutate {\n add_tag => [ "dhcpd" ]\n }\n grok {\n patterns_dir => ["/etc/logstash/conf.d/patterns"]\n match => [ "pf_message", "%{DHCPD}"]\n }\n }\n if [pf_program] =~ /^charon$/ {\n mutate {\n add_tag => [ "ipsec" ]\n }\n }\n if [pf_program] =~ /^barnyard2/ {\n mutate {\n add_tag => [ "barnyard2" ]\n }\n }\n if [pf_program] =~ /^openvpn/ {\n mutate {\n add_tag => [ "openvpn" ]\n }\n grok {\n patterns_dir => ["/etc/logstas /conf.d/patterns"]\n match => [ "pf_message", "%{OPENVPN}"]\n }\n }\n if [pf_program] =~ /^ntpd/ {\n mutate {\n add_tag => [ "ntpd" ]\n }\n }\n if [pf_program] =~ /^php-fpm/ {\n mutate {\n add_tag => [ "web_portal" ]\n }\n grok {\n patterns_dir => ["/etc/logstash/conf.d/patterns"]\n match => [ "pf_message", "%{PF_APP}%{PF_APP_DATA}"]\n }\n mutate {\n lowercase => [ 'pf_ACTION' ]\n }\n }\n if [pf_program =~ /^unbound/ {\n mutate {\n add_tag => ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in block in compile_sources'", "org/jruby/RubyArray.java:2584:in map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:156:in initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:26:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:326:in block in converge_state'"]}

Can somebody help with this case?

Best wishes.
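
The configuration quoted inside the error message gives the cause away: near the end it reads "if [pf_program =~ /^unbound/ {", so the field reference is missing its closing "]", which matches the parser complaint about expecting "]" at line 322, column 23. The corrected conditional would look like this:

if [pf_program] =~ /^unbound/ {
  mutate {
    add_tag => [ "unbound" ]
  }
}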

Containerization of PFELK

Is your feature request related to a problem? Please describe.
Running pfelk in containers could be another deploy method.

Describe the solution you'd like
There would be Dockerfiles for the components (Elasticsearch, Logstash, Kibana); the configuration files and patterns would be included in them.
Management is a question to discuss: we could use simple Docker Compose for configuring multiple containers, or we could choose an orchestration tool, e.g. Kubernetes or Docker Swarm.
Should we target a single- or multi-node architecture?

In my opinion we should either stick to Docker Compose or choose Kubernetes, depending on the architecture.

Additional context
I am in favor of using VMs under the ELK stack, but being able to deploy this on containers could be better suited for some use cases.

I am quite busy at the moment, but I would like to work on this, so any help is appreciated, from design decisions to implementation details.

What is everybody thinking?

Problem with Grok

Hi,

first of all thanks for this fantastic solution.

I have a problem with the Logstash grok: the message field is not changed.
Surely it is my mistake, but I have followed the guide several times and I can't understand where I'm wrong.

This is my logstash log file:
logstash-plain.log

and this is my result in kibana:

{
  "_index": "pf-2019.09.04",
  "_type": "_doc",
  "_id": "hT9c-2wB3hlBxPd8ZQSS",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2019-09-04T08:20:32.169Z",
    "@version": "1",
    "tags": [
      "pfsense"
    ],
    "type": "syslog",
    "host": "192.168.100.1",
    "message": "<134>Sep  4 10:20:31 filterlog: 5,,,1000103483,em0,match,block,in,4,0x0,,128,2365,0,none,17,udp,78,10.0.101.173,10.0.101.255,137,137,58"
  },
  "fields": {
    "@timestamp": [
      "2019-09-04T08:20:32.169Z"
    ]
  },
  "sort": [
    1567585232169
  ]
}

I hope you can help. Thanks in advance,
Mattia

No data in ES

The latest few commits don't provide any data to Elasticsearch. I get UDP packets on 5140 from OPNsense, but ES is empty. Running the latest commit, updated today.

[geoip] and [ids_src_ip]: what are they used for?

I am curious what this block is for:

if ![geoip] and [ids_src_ip] !~ /^(10.|192.168.)/ {
  geoip {
    add_tag => [ "GeoIP" ]
    source => "ids_src_ip"
    database => "/usr/share/GeoIP/GeoLite2-City.mmdb"
  }

I find it in two locations in the file 11-pfsense.conf.

Thank you!

opnsense suricata _grokparsefailure

I'm getting _grokparsefailure from OPNsense Suricata logs in either syslog or EVE format.

logstash-plain.log

[2019-10-09T12:04:56,936][WARN ][logstash.filters.grok    ][main] Grok regexp threw exception {:exception=>"Invalid FieldReference: ids_network[transport] :backtrace=>["org/logstash/ext/JrubyEventExtLibrary.java:87:in `get'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:365:in `handle'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:343:in `block in match_against_groks'", "(eval):15:in `block in compile_captures_func'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:202:in `capture'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:343:in `block in match_against_groks'", "org/jruby/RubyArray.java:1800:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:339:in `match_against_groks'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:329:in `match'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:293:in `block in filter'", "org/jruby/RubyHash.java:1417:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:292:in `filter'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:143:in `do_filter'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:162:in `block in multi_filter'", "org/jruby/RubyArray.java:1800:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:159:in `multi_filter'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:115:in `multi_filter'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:243:in `block in start_workers'"], :class=>"RuntimeError"}
[2019-10-09T12:04:56,936][WARN ][logstash.filters.grok    ][main] Grok regexp threw exception {:exception=>"Invalid FieldReference: ids_network[transport] :backtrace=>["org/logstash/ext/JrubyEventExtLibrary.java:87:in `get'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:365:in `handle'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:343:in `block in match_against_groks'", "(eval):15:in `block in compile_captures_func'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jls-grok-0.11.5/lib/grok-pure.rb:202:in `capture'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:343:in `block in match_against_groks'", "org/jruby/RubyArray.java:1800:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:339:in `match_against_groks'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:329:in `match'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:293:in `block in filter'", "org/jruby/RubyHash.java:1417:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-grok-4.1.1/lib/logstash/filters/grok.rb:292:in `filter'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:143:in `do_filter'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:162:in `block in multi_filter'", "org/jruby/RubyArray.java:1800:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:159:in `multi_filter'", "org/logstash/config/ir/compiler/AbstractFilterDelegatorExt.java:115:in `multi_filter'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:243:in `block in start_workers'"], :class=>"RuntimeError"}

Kibana discover

tags:pf, Suricata, _grokparsefailure @timestamp:Oct 9, 2019 @ 12:05:43.129 received_from:0.0.0.0 syslog_program:suricata syslog_timestamp:Oct 9 12:05:43 received_at:Oct 9, 2019 @ 12:05:43.129 syslog_hostname:OPNsense.localdomain type:syslog syslog_message:{"timestamp": "2019-10-09T12:05:43.134723+0200", "flow_id": 1763022386147449, "in_iface": "igb1", "event_type": "alert", "src_ip": "0.0.0.0", "src_port": 3872, "dest_ip": "0.0.0.0", "dest_port": 57274, "proto": "TCP", "alert": {"action": "allowed", "gid": 1, "signature_id": 2000334, "rev": 13, "signature": "ET P2P BitTorrent peer sync", "category": "Potential Corporate Privacy Violation", "severity": 1, "metadata": {"updated_at": ["2010_07_30"], "created_at": ["2010_07_30"]}}, "app_proto": "failed", "flow": {"pkts_toserver": 433, "pkts_toclient": 615, "bytes_toserver": 38268, "bytes_toclient": 581579, "start": "2019-10-09T11:06:22.897145+0200"}} event.original:<174>Oct 9 12:05:43 OPNsense.localdomain suricata[41807]: {"timestamp": "2019-10-09T12:05:43.134723+0200", "flow_id": 1763022386147449, "in_iface": "igb1", "event_type": "alert", "src_ip": "0.0.0.0", "src_port": 3872, "dest_ip": "0.0.0.0", "dest_port": 57274, "proto": "TCP", "alert": {"action": "allowed", "gid": 1, "signature_id": 2000334, "rev": 13, "signature": "ET P2P BitTorrent peer sync", "category": "Potential Corporate Privacy Violation", "severity": 1
tags:pf, Suricata, _grokparsefailure @timestamp:Oct 9, 2019 @ 12:05:43.129 received_from:0.0.0.0 ids_gen_id:1 syslog_program:suricata ids_desc:ET P2P BitTorrent peer sync syslog_timestamp:Oct 9 12:05:43 received_at:Oct 9, 2019 @ 12:05:43.129 ids_sig_id:2000334 syslog_hostname:OPNsense.localdomain ids_class:Potential Corporate Privacy Violation type:syslog syslog_message:[1:2000334:13] ET P2P BitTorrent peer sync [Classification: Potential Corporate Privacy Violation] [Priority: 1] {TCP} 0.0.0.0:3872 -> 0.0.0.0:57274 event.original:<173>Oct 9 12:05:43 OPNsense.localdomain suricata[41807]: [1:2000334:13] ET P2P BitTorrent peer sync [Classification: Potential Corporate Privacy Violation] [Priority: 1] {TCP} 0.0.0.0:3872 -> 0.0.0.0:57274 event.dataset:suricata event.module:suricata host:0.0.0.0 ids_sig_rev:13 ids_pri:1 @version:1 syslog_pid:41807 _id:Dyv7r20BTkRClDrURZzC _type:_doc _index:pf-2019.10.09 _score: -

IPs are scrubbed to 0.0.0.0 for security reasons. Proper IPs are parsed into the logs from OPNsense.

Geohash not working

Hey, I have been trying to figure out how to get geohash working in the Coordinate Map visualization for a while and I can't figure it out. In the 3ilson YouTube video it looks like no configuration in ELK is required.

When I go to Visualize -> Coordinate Map and set Aggregation to Geohash I get the error:
The index pattern pf* does not contain any of the following compatible field types: geo_point

On a firewall event I see the following fields properly parsed out:

  • destination.geo.city_name
  • destination.geo.continent_code
  • destination.geo.country_code2
  • destination.geo.country_code3
  • destination.geo.country_name
  • destination.geo.ip
  • destination.geo.latitude
  • destination.geo.location.lat
  • destination.geo.location.lon
  • destination.geo.longitude
  • destination.geo.postal_code
  • destination.geo.region_code
  • destination.geo.region_name
  • destination.geo.timezone
  • tags : GeoIP

It looks like either geoipupdate isn't working properly or a grok needs to be created. Can anyone chime in and give me some direction on this?

From my research, the missing geo_point field just needs to have the lon, lat values in it.

Java 12 no longer available

Describe the bug
Following your instructions to install Java 12:

Reading package lists... Done
Building dependency tree
Reading state information... Done
Package oracle-java12-installer is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source

E: Package 'oracle-java12-installer' has no installation candidate

Steps to reproduce the behavior:

sudo apt-get install oracle-java12-installer

  • OS: Ubuntu
  • Version 18.04

Had to install Java 13 but logstash is not getting any logs from pf

elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2019-11-04 13:17:56 UTC; 16min ago
Docs: http://www.elastic.co
Main PID: 3098 (java)
Tasks: 58 (limit: 2181)
CGroup: /system.slice/elasticsearch.service
├─3098 /usr/share/elasticsearch/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiating
└─3189 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller

Nov 04 13:17:08 ubuntu systemd[1]: Starting Elasticsearch...
Nov 04 13:17:10 ubuntu elasticsearch[3098]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be
Nov 04 13:17:56 ubuntu systemd[1]: Started Elasticsearch.


kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2019-11-04 13:03:32 UTC; 33min ago
Main PID: 1010 (node)
Tasks: 21 (limit: 2181)
CGroup: /system.slice/kibana.service
└─1010 /usr/share/kibana/bin/../node/bin/node /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

Nov 04 13:23:44 ubuntu kibana[1010]: {"type":"response","@timestamp":"2019-11-04T13:23:44Z","tags":[],"pid":1010,"method":"get","statusCode":304,"req":{"
Nov 04 13:23:45 ubuntu kibana[1010]: {"type":"response","@timestamp":"2019-11-04T13:23:44Z","tags":[],"pid":1010,"method":"get","statusCode":304,"req":{"
Nov 04 13:23:53 ubuntu kibana[1010]: {"type":"response","@timestamp":"2019-11-04T13:23:53Z","tags":[],"pid":1010,"method":"post","statusCode":200,"req":{
Nov 04 13:23:54 ubuntu kibana[1010]: {"type":"response","@timestamp":"2019-11-04T13:23:54Z","tags":[],"pid":1010,"method":"post","statusCode":200,"req":{
Nov 04 13:23:54 ubuntu kibana[1010]: {"type":"response","@timestamp":"2019-11-04T13:23:54Z","tags":[],"pid":1010,"method":"post","statusCode":200,"req":{
Nov 04 13:23:54 ubuntu kibana[1010]: {"type":"response","@timestamp":"2019-11-04T13:23:54Z","tags":[],"pid":1010,"method":"post","statusCode":200,"req":{
Nov 04 13:23:55 ubuntu kibana[1010]: {"type":"response","@timestamp":"2019-11-04T13:23:55Z","tags":[],"pid":1010,"method":"post","statusCode":200,"req":{
Nov 04 13:23:55 ubuntu kibana[1010]: {"type":"response","@timestamp":"2019-11-04T13:23:55Z","tags":[],"pid":1010,"method":"post","statusCode":200,"req":{
Nov 04 13:23:56 ubuntu kibana[1010]: {"type":"response","@timestamp":"2019-11-04T13:23:56Z","tags":[],"pid":1010,"method":"post","statusCode":200,"req":{
Nov 04 13:24:03 ubuntu kibana[1010]: {"type":"response","@timestamp":"2019-11-04T13:24:02Z","tags":[],"pid":1010,"method":"post","statusCode":200,"req":{
lines 1-18/18 (END)


logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2019-11-04 13:36:52 UTC; 15s ago
Main PID: 5201 (java)
Tasks: 14 (limit: 2181)
CGroup: /system.slice/logstash.service
└─5201 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt

Nov 04 13:36:52 ubuntu systemd[1]: logstash.service: Service hold-off time over, scheduling restart.
Nov 04 13:36:52 ubuntu systemd[1]: logstash.service: Scheduled restart job, restart counter is at 29.
Nov 04 13:36:52 ubuntu systemd[1]: Stopped logstash.
Nov 04 13:36:52 ubuntu systemd[1]: Started logstash.
Nov 04 13:36:52 ubuntu logstash[5201]: Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely
Nov 04 13:36:55 ubuntu logstash[5201]: WARNING: An illegal reflective access operation has occurred
Nov 04 13:36:55 ubuntu logstash[5201]: WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-cor
Nov 04 13:36:55 ubuntu logstash[5201]: WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
Nov 04 13:36:55 ubuntu logstash[5201]: WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
Nov 04 13:36:55 ubuntu logstash[5201]: WARNING: All illegal access operations will be denied in a future release


from my /var/log/logstash/logstash-plain.log

[2019-11-04T13:37:29,078][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :excep$
[2019-11-04T13:37:29,094][ERROR][logstash.agent ] An exception happened when converging configuration {:exception=>LogStash::Error, :message=>$
[2019-11-04T13:37:29,162][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::Ja$
[2019-11-04T13:37:29,313][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExi$
[2019-11-04T13:37:59,567][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.4.2"}
[2019-11-04T13:38:04,577][INFO ][org.reflections.Reflections] Reflections took 66 ms to scan 1 urls, producing 20 keys and 40 values
[2019-11-04T13:38:05,739][ERROR][logstash.filters.geoip ] Invalid setting for geoip filter plugin:

filter {
  geoip {
    # This setting must be a path
    # File does not exist or cannot be opened /etc/logstash/GeoLite2-City.mmdb
    database => "/etc/logstash/GeoLite2-City.mmdb"
    ...
  }
}
[2019-11-04T13:38:05,744][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :excep$
[2019-11-04T13:38:05,766][ERROR][logstash.agent ] An exception happened when converging configuration {:exception=>LogStash::Error, :message=>$
[2019-11-04T13:38:05,830][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle `Java::Ja$
[2019-11-04T13:38:05,964][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExi$

GeoIP (geo_point)

Current configuration does not recognize GeoIP fields...working to update.

Newest releases of built in X-Pack functions

Any plan to release an updated version of the configs for the latest ELK stack, especially on the X-Pack side? I found that after upgrading to the latest version, the old config I had seems to cause all kinds of weird issues: unable to add users to log in, unable to properly activate the login function.
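
For reference, once X-Pack security is enabled the Logstash elasticsearch output also needs credentials, otherwise indexing fails even when Kibana logins work. A minimal sketch (host, user, and password below are placeholders, and TLS settings may also be required depending on the cluster):

output {
  elasticsearch {
    hosts    => ["https://localhost:9200"]   # placeholder host
    user     => "logstash_writer"            # placeholder user with write privileges
    password => "changeme"                   # placeholder password
    index    => "pf-%{+YYYY.MM.dd}"
  }
}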

grok failures: OpenVPN and DHCP

Hi

I found some parse errors that I hope you can help me straighten out.

Will post the lines that fail, in this format,
event.original:
pf_message:
tags:

Let's start with OpenVPN.

<28>Dec 23 13:19:27 openvpn[91034]: WARNING: 'cipher' is used inconsistently, local='cipher AES-256-GCM', remote='cipher AES-256-CBC'
WARNING: 'cipher' is used inconsistently, local='cipher AES-256-GCM', remote='cipher AES-256-CBC'
pf, openvpn, _grokparsefailure

<28>Dec 23 12:19:27 openvpn[91034]: WARNING: 'auth' is used inconsistently, local='auth [null-digest]', remote='auth SHA1'
WARNING: 'auth' is used inconsistently, local='auth [null-digest]', remote='auth SHA1'
pf, openvpn, _grokparsefailure

<28>Dec 23 11:19:27 openvpn[91034]: WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1550', remote='link-mtu 1558'
WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1550', remote='link-mtu 1558'
pf, openvpn, _grokparsefailure

Time for DHCP

<187>Dec 23 13:18:54 dhcpd: uid lease 192.168.54.71 for client b4:fb:e4:8d:41:92 is duplicate on 192.168.54.0/24
uid lease 192.168.54.71 for client b4:fb:e4:8d:41:92 is duplicate on 192.168.54.0/24
pf, dhcpd, _grokparsefailure

<191>Dec 23 13:14:24 dhcpd: reuse_lease: lease age 17401 (secs) under 25% threshold, reply with unaltered, existing lease for 192.168.54.54
reuse_lease: lease age 17401 (secs) under 25% threshold, reply with unaltered, existing lease for 192.168.54.54
pf, dhcpd, _grokparsefailure

<190>Dec 23 11:30:27 dhcpd: Wrote 46 leases to leases file.
Wrote 46 leases to leases file.
pf, dhcpd, _grokparsefailure

<190>Dec 23 11:30:27 dhcpd: Wrote 0 new dynamic host decls to leases file.
Wrote 0 new dynamic host decls to leases file.
pf, dhcpd, _grokparsefailure

I'm running the patterns file named "pf-12.2019.grok"

Regards
Christian
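
Not from the thread, but until dedicated patterns exist for these informational lines, one low-impact option is to give the dhcpd (and openvpn) grok a catch-all fallback so such events are still indexed without a _grokparsefailure tag. A sketch using the repository's DHCPD pattern plus a hypothetical fallback field:

filter {
  if "dhcpd" in [tags] {
    grok {
      patterns_dir => ["/etc/logstash/conf.d/patterns"]
      # the first pattern that matches wins; the GREEDYDATA fallback prevents _grokparsefailure
      match => { "pf_message" => [ "%{DHCPD}", "%{GREEDYDATA:[dhcpd][message]}" ] }
    }
  }
}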

New Fields to Openvpn

Is your feature request related to a problem? Please describe.
This is not a problem, just an improvement.

Describe the solution you'd like
I think the OpenVPN log parse/grok could be better by just adding fields for USER and IP. Then we could correlate in a better way and build better dashboards. GeoIP on OpenVPN IPs would also be great.

Describe alternatives you've considered
I think adding two more fields and the possibility of using GeoIP on OpenVPN logs would be great to correlate data in a better way. I have built some dashboards to monitor OpenVPN logs, but without both fields the information is not easy to correlate.


Redesign the index template for Maps support

Is your feature request related to a problem? Please describe.
Currently we cannot use Maps in dashboards, because the current index pattern stores the geo fields as float. This is a problem if one wants to create point-to-point or grid aggregation layers in Maps.

Describe the solution you'd like
We should use geo_point and/or geo_shape fields in the index pattern to have this ability.

Pipeline error

Describe the bug
I have followed your instructions provided in the README, but I have several problems with the outcome.
I am monitoring my Logstash instance, and in the Pipeline Viewer I can see the following throughput.

pipeline_viewer

To Reproduce
Steps to reproduce the behavior:
I have the following modified files in my conf.d folder (with original 01-input.conf and pf-12.2019.grok):

I have added the IP of my pfSense.

# 05-syslog.conf
filter {
  if [type] == "syslog" {
    #Adjust to match the IP address of pfSense or OPNSense
    if [host] =~ /172\.21\.0\.3/ {
      mutate {
        add_tag => ["pf", "Ready"]
      }
    }
    #To enable or ingest multiple pfSense or OPNSense instances uncomment the below section
    ##############################
    #if [host] =~ /172\.2\.22\.1/ {
    #  mutate {
    #    add_tag => ["pf-2", "Ready"]
    #  }
    #}
    ##############################
    if "Ready" not in [tags] {
      mutate {
        add_tag => [ "syslog" ]
      }
    }
  }
}
filter {
  if [type] == "syslog" {
    mutate {
      remove_tag => "Ready"
    }
  }
}

Enabled my fw platform.

# 10-pf.conf
filter {
  if "pf" in [tags] {
    grok {
      # OPNsense - Enable/Disable the line below based on firewall platform
      # match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      # OPNsense
      # pfSense - Enable/Disable the line below based on firewall platform
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      # pfSense
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    mutate {
      rename => { "[message]" => "[event][original]"}
    }
  }
}

Modified this to my interfaces.

# 11-firewall.conf
filter {
  if "pf" in [tags] and [syslog_program] =~ /^filterlog$/ {
    grok {
      add_tag => [ "firewall" ]
      patterns_dir => ["/etc/logstash/conf.d/patterns"]
      match => ["syslog_message", "%{PF_LOG_ENTRY}"]
    }
    # Change interface as desired
      if [interface] =~ /^bge2$/ {
        mutate {
          add_tag => [ "WAN" ]
      }
    }
    # Change interface as desired
      if [interface] =~ /^bge3$/ {
        mutate {
          add_tag => [ "DMZ" ]
      }
    }
    # Change interface as desired
      if [interface] =~ /^lagg0$/ {
        mutate {
          add_tag => [ "LAN" ]
      }
    }
    if [source][ip] {
      # Check if source.ip address is private.
      cidr {
        address => [ "%{[source][ip]}" ]
        network => [ "0.0.0.0/32", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "fc00::/7", "127.0.0.0/8", "::1/128", "169.254.0.0/16", "fe80::/10", "224.0.0.0/4", "ff00::/8", "255.255.255.255/32", "::" ]
        add_field => { "[src_locality]" => "private" }
      }
      if ![src_locality] {
        geoip {
          add_tag => [ "GeoIP" ]
          source => "[source][ip]"
          database => "/usr/share/GeoIP/GeoLite2-City_20191203/GeoLite2-City.mmdb"
          target => "[source][geo]"
        }
        geoip {
         default_database_type => 'ASN'
         database => "/usr/share/GeoIP/GeoLite2-ASN_20191203/GeoLite2-ASN.mmdb"
         #cache_size => 5000
         source => "[source][ip]"
         target => "[source][as]"
       }
       mutate {
         rename => { "[source][as][asn]" => "[source][as][number]"}
         rename => { "[source][as][as_org]" => "[source][as][organization][name]"}
       }
      }
    }
    if [destination][ip] {
      # Check if destination.ip address is private.
      cidr {
        address => [ "%{[destination][ip]}" ]
        network => [ "0.0.0.0/32", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "fc00::/7", "127.0.0.0/8", "::1/128", "169.254.0.0/16", "fe80::/10", "224.0.0.0/4", "ff00::/8", "255.255.255.255/32", "::" ]
        add_field => { "[dest_locality]" => "private" }
      }
      if ![dest_locality] {
        geoip {
          add_tag => [ "GeoIP" ]
          source => "[destination][ip]"
          database => "/usr/share/GeoIP/GeoLite2-City_20191203/GeoLite2-City.mmdb"
          target => "[destination][geo]"
        }
        geoip {
         default_database_type => 'ASN'
         database => "/usr/share/GeoIP/GeoLite2-ASN_20191203/GeoLite2-ASN.mmdb"
         #cache_size => 5000
         source => "[destination][ip]"
         target => "[destination][as]"
       }
       mutate {
         rename => { "[destination][as][asn]" => "[destination][as][number]"}
         rename => { "[destination][as][as_org]" => "[destination][as][organization][name]"}
       }
      }
    }
    mutate {
      add_field => { "[event][dataset]" => "firewall"}
      update => { "[network][direction]" => "%{[network][direction]}bound" }
      update => { "[network][type]" => "ipv%{[network][type]}" }
      convert => [ "[location]", "float" ]
    }
  }
}

And this is my output.

output {
        elasticsearch {
                hosts => ["172.18.1.36:9200"]
                index => "pf-%{+YYYY.MM.dd}" }
}

Since I am skipping certain sections in the pipeline my dashboard looks like this. We can see there is a problem with the parsing too.

dashboard

At Discover, I can see several types of logs, all parsed wrongly.

{
  "_index": "pf-2019.12.10",
  "_type": "_doc",
  "_id": "0FPg7m4B_XZUsMXMgqlC",
  "_version": 1,
  "_score": null,
  "_source": {
    "type": "syslog",
    "host": "172.20.0.6",
    "message": "<134>Dec 10 09:15:21 filterlog: 5,,,1000000103,bge4,match,block,in,4,0x0,,64,18720,0,DF,17,udp,138,172.21.2.51,235.5.5.5,60279,58581,118",
    "@version": "1",
    "@timestamp": "2019-12-10T08:15:21.052Z",
    "tags": [
      "syslog"
    ]
  },
  "fields": {
    "@timestamp": [
      "2019-12-10T08:15:21.052Z"
    ]
  },
  "sort": [
    1575965721052
  ]
}

At the Kibana index pattern interface I can see all 139 fields, but in Discover there are only the 10 listed in the JSON above, which can cause problems with the dashboard.
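
One detail visible in the sample document above: its host is 172.20.0.6, while the 05-syslog.conf snippet earlier matches /172\.21\.0\.3/, so the event never receives the "pf" tag and skips the firewall pipeline, which would explain why only the generic syslog tag and a handful of fields appear. A sketch of the host check adjusted to the address that actually arrives:

if [host] =~ /172\.20\.0\.6/ {
  mutate {
    add_tag => ["pf", "Ready"]
  }
}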

Operating System (please complete the following information):

  • OS: Debian
  • Version 10 (buster)

Elastic, Logstash, Kibana (please complete the following information):

  • Version: 7.5.0

ICMPv6 not parsing

I was having some issues with ICMPv6 messages parsing correctly. It appears to be related to missing protocol data. Changing the grok to below seems to have fixed it but I need more time to verify. I'm not sure the last match pattern is necessary. If it looks like a solid fix I can submit a pull request.

Example Message

11,,,1000000107,vtnet0,match,pass,in,6,0x00,0x00000,255,ICMPv6,58,32,fe80::201:5cff:fe74:d046,ff02::1:ffbb:5484,

Grok Change

grok {
  patterns_dir => ["/etc/logstash/conf.d/patterns"]
  match => [ 
    "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}",
    "message", "%{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}",
    "message", "%{PFSENSE_IPv4_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}",
    "message", "%{PFSENSE_IPv6_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}",
    "message", "%{PFSENSE_IPv6_SPECIFIC_DATA}%{PFSENSE_IP_DATA}"
  ]
}

Error in the logs

I'm receiving the following error when I run logstash. I'm not sure what the issue is as I am unable to locate any oddities in the files:
Any assistance would be appreciated.

[2019-10-06T00:16:31,031][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.4.0"}
[2019-10-06T00:16:32,625][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, {, ,, ] at line 45, column 28 (byte 844) after filter {\n  if \"pf\" in [tags] {\n    grok {\n      match => [ \"message\" ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2584:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:153:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:26:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:326:in `block in converge_state'"]}

Suricata fields not mapped / OPNsense 19.7.4

The Suricata fields are not mapped. I'm on OPNsense 19.7.4.
The firewall logs are correctly mapped and working fine. Here are two screenshots.
I used the git checkout from today (13.09.2019).

suricata_fields

OPNsense options:
suricata_opnsense

Is the new EVE format a problem?
Please give me a hint if you need more information.

Thanks for your help & time!

pfsense_2_4_2.grok Problem

When using the pfsense_2_4_2.grok file for a pfSense 2.4.2 instance Logstash fails to load with the following errors:

[2018-03-02T20:59:39,211][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::FilterDelegator:0x4f180c25 @metric_events_out=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: out value:0, @metric_events_in=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: in value:0, @metric_events_time=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: duration_in_millis value:0, @id="5d4ef03cf58a075aa69eab053881350a5f0847a563b2cc4a52d97394c3c6c3ef", @klass=LogStash::Filters::Grok, @metric_events=#<LogStash::Instrument::NamespacedMetric:0x25dc57e6 @Metric=#<LogStash::Instrument::Metric:0x5cb14d12 @collector=#<LogStash::Instrument::Collector:0x2922ba19 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x26d4f23c @store=#<Concurrent::Map:0x00000000000fc0 entries=4 default_proc=nil>, @structured_lookup_mutex=#Mutex:0x6170f267, @fast_lookup=#<Concurrent::Map:0x00000000000fc4 entries=139 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :filters, :"5d4ef03cf58a075aa69eab053881350a5f0847a563b2cc4a52d97394c3c6c3ef", :events]>, @filter=<LogStash::Filters::Grok patterns_dir=>["/etc/logstash/conf.d/patterns"], match=>{"message"=>["%{PFSENSE_LOG_DATA}%{PFSENSE_IP_SPECIFIC_DATA}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}", "%{PFSENSE_LOG_DATA}%{PFSENSE_IPv4_SPECIFIC_DATA_ECN}%{PFSENSE_IP_DATA}%{PFSENSE_PROTOCOL_DATA}", "%{PFSENSE_LOG_DATA}%{PFSENSE_IPv6_SPECIFIC_DATA}"]}, id=>"5d4ef03cf58a075aa69eab053881350a5f0847a563b2cc4a52d97394c3c6c3ef", enable_metric=>true, periodic_flush=>false, patterns_files_glob=>"*", break_on_match=>true, named_captures_only=>true, keep_empty_captures=>false, tag_on_failure=>["_grokparsefailure"], timeout_millis=>30000, tag_on_timeout=>"_groktimeout">>", :error=>"pattern %{PFSENSE_IPv4_SPECIFIC_DATA_ECN} not defined", :thread=>"#<Thread:0xb316c02@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 run>"}

[2018-03-02T20:59:39,390][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Grok::PatternError: pattern %{PFSENSE_IPv4_SPECIFIC_DATA_ECN} not defined>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/jls-grok-0.11.4/lib/grok-pure.rb:123:in block in compile'", "org/jruby/RubyKernel.java:1292:in loop'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/jls-grok-0.11.4/lib/grok-pure.rb:93:in compile'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.2/lib/logstash/filters/grok.rb:281:in block in register'", "org/jruby/RubyArray.java:1734:in each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.2/lib/logstash/filters/grok.rb:275:in block in register'", "org/jruby/RubyHash.java:1343:in each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.2/lib/logstash/filters/grok.rb:270:in register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:341:in register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:352:in block in register_plugins'", "org/jruby/RubyArray.java:1734:in each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:352:in register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:736:in maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:362:in start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:289:in run'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:249:in block in start'"], :thread=>"#<Thread:0xb316c02@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 run>"}

[2018-03-02T20:59:39,407][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: LogStash::PipelineAction::Create/pipeline_id:main, action_result: false", :backtrace=>nil}

grokfailures

Hi,

Great work on developing this set of scripts. I followed your tutorial, but I'm getting _grokparsefailure errors in the debug output.

[2017-12-21T14:13:44,060][DEBUG][logstash.pipeline ] output received {"event"=>{"@timestamp"=>2017-12-21T14:13:43.763Z, "syslog_severity_code"=>5, "syslog_facility"=>"user-level", "@Version"=>"1", "host"=>"10.10.0.1", "syslog_facility_code"=>1, "message"=>"<134>Dec 21 14:13:43 filterlog: 9,,,1000000103,em0,match,block,in,4,0x0,,255,19307,0,DF,17,udp,336,192.168.1.3,239.255.255.250,1900,1900,316", "type"=>"syslog", "syslog_severity"=>"notice", "tags"=>["syslog", "_grokparsefailure"]}}

any ideas what is causing this?

elasticsearch service does not autostart

This is probably not a bug in pfelk, but I thought I would ask here first.
The service does not start automatically on boot; however, if I run systemctl start elasticsearch manually, it runs fine.

Feel free to delete this.

● elasticsearch.service - Elasticsearch
Loaded: loaded (/etc/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: timeout) since Thu 2019-11-07 05:41:27 UTC; 6min ago
Docs: http://www.elastic.co
Main PID: 1047 (code=exited, status=143)

Nov 07 05:39:58 ubuntu systemd[1]: Starting Elasticsearch...
Nov 07 05:40:07 ubuntu elasticsearch[1047]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version
Nov 07 05:41:26 ubuntu systemd[1]: elasticsearch.service: Start operation timed out. Terminating.
Nov 07 05:41:27 ubuntu systemd[1]: elasticsearch.service: Failed with result 'timeout'.
Nov 07 05:41:27 ubuntu systemd[1]: Failed to start Elasticsearch.
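
If the start timeout itself is the cause (Elasticsearch can take well over a minute to come up on slower hardware), one common workaround is a systemd drop-in that lengthens the timeout. A minimal sketch, assuming the stock elasticsearch.service unit and that the timeout, rather than a failing JVM start, is the root cause:

# created with: sudo systemctl edit elasticsearch
# (writes /etc/systemd/system/elasticsearch.service.d/override.conf)
[Service]
TimeoutStartSec=300

After saving, sudo systemctl daemon-reload and sudo systemctl restart elasticsearch should pick the override up.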

Ansible playbook

Is your feature request related to a problem? Please describe.
I saw the card regarding the script installer, and I was wondering if we should make an Ansible playbook to accelerate the deployment process. This could co-exist with a script solution.

Describe the solution you'd like

  • Prerequisite role to install Java and MaxMind dependencies
  • Separate roles for Elasticsearch, Logstash, Kibana
  • Jinja templates for config files

Describe alternatives you've considered
An alternative would be a simple script installer, but in my opinion the two can co-exist.

Additional context
Proposed file structure for the ELK services:

common/roles/elasticsearch/
├── defaults
│   └── main.yml
├── handlers
│   └── main.yml
├── tasks
│   └── main.yml
└── templates
    └── elasticsearch.yml.j2
common/roles/logstash/
├── defaults
│   └── main.yml
├── files
│   ├── patterns
│       └── pf-12.2019.grok
├── handlers
│   └── main.yml
├── tasks
│   └── main.yml
└── templates
   ├── 01-input.conf.j2
   ├── 11-firewall.conf.j2
   ├── 12-suricata.conf.j2
   ├── 13-snort.conf.j2
   ├── 15-others.conf.j2
   └── 50-output.conf.j2
common/roles/kibana/
├── defaults
│   └── main.yml
├── files
│   └── dashboards
├── handlers
│   └── main.yml
├── tasks
│   └── main.yml
└── templates
    └── kibana.yml.j2
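
To make the proposal concrete, here is a rough sketch of what the Elasticsearch role's tasks/main.yml might contain; the package source, paths and handler name are illustrative assumptions, not part of pfelk:

# common/roles/elasticsearch/tasks/main.yml (illustrative sketch)
---
- name: Install Elasticsearch
  apt:
    name: elasticsearch
    state: present

- name: Deploy elasticsearch.yml from the Jinja template
  template:
    src: elasticsearch.yml.j2
    dest: /etc/elasticsearch/elasticsearch.yml
  notify: restart elasticsearch   # handler assumed to live in handlers/main.yml

- name: Enable and start Elasticsearch
  systemd:
    name: elasticsearch
    enabled: true
    state: started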

Updating the Suricata dashboards/visualizations

The old Suricata JSON files are not mapping well with the current indices. They are not in ndjson format either, and support for importing plain JSON is going away.

Can you provide details about their former working state? If you got them working, perhaps export them to ndjson to accelerate the reworking process.

An alternative solution I have considered is to dive into their JSON and rebuild them from scratch. I am planning to do this, but I am wondering whether you have a better solution at hand.

grok failures, mostly with IPv6 data

ELK 7.4.0
pf-09.2019.grok

Analyzing my pfSense logs in Kibana and filtering for grokparsefailures shows that most firewall entries cannot be parsed using the latest grok pattern listed above when the log entries contain IPv6 data. See the samples below:

NOTE: IPv6 and IPv4 addresses have been replaced with random values for security reasons.

5,,,1000000003,em0,match,block,in,6,0xe0,0x00000,1,Options,0,32,fe80::413:14ff:fe33:4327,ff421::1,HBH,RTALERT,0x0000,PADN,
5,,,1000000003,em0,match,block,in,6,0x00,0x00000,255,ICMPv6,58,32,2347:f721:454:e1::1,ff44:1:ff2f:9,
9,,,1000000103,em0,match,block,in,4,0x0,,52,5929,1480,none,17,udp,381,208.78.71.14,2.2.2.2,
9,,,1000000103,em0,match,block,in,4,0x0,,1,0,0,none,2,igmp,36,2.2.2.2,224.0.0.1,datalength=12

JSON output for some:

{
  "_index": "logstash-pfsense-000001",
  "_type": "_doc",
  "_id": "dF8tMW4BRPLOAM2DCL-s",
  "_version": 1,
  "_score": null,
  "_source": {
    "host": "2.2.2.2",
    "prog": "filterlog",
    "syslog_severity_code": 5,
    "severity_label": "Emergency",
    "syslog_facility": "user-level",
    "@version": "1",
    "evtid": "134",
    "tags": [
      "_grokparsefailure_sysloginput",
      "_grokparsefailure",
      "_geoip_lookup_failure"
    ],
    "type": "pfsense",
    "@timestamp": "2019-11-03T12:11:14.000Z",
    "priority": 0,
    "message": "9,,,1000000103,em0,match,block,in,4,0x0,,1,0,0,none,2,igmp,36,2.2.2.2,224.0.0.1,datalength=12 ",
    "severity": 0,
    "facility": 0,
    "facility_label": "kernel",
    "syslog_severity": "notice",
    "syslog_facility_code": 1
  },
  "fields": {
    "@timestamp": [
      "2019-11-03T12:11:14.000Z"
    ]
  },
  "sort": [
    1572783074000
  ]
}
{
  "_index": "logstash-pfsense-000001",
  "_type": "_doc",
  "_id": "WV8sMW4BRPLOAM2D9b4c",
  "_version": 1,
  "_score": null,
  "_source": {
    "host": "2.2.2.2",
    "prog": "filterlog",
    "syslog_severity_code": 5,
    "severity_label": "Emergency",
    "syslog_facility": "user-level",
    "@version": "1",
    "evtid": "134",
    "tags": [
      "_grokparsefailure_sysloginput",
      "_grokparsefailure",
      "_geoip_lookup_failure"
    ],
    "type": "pfsense",
    "@timestamp": "2019-11-03T12:11:09.000Z",
    "priority": 0,
    "message": "5,,,1000000003,em0,match,block,in,6,0x00,0x00000,255,ICMPv6,58,32,2444:f711:111:e1::1,ff11::1:ff14:361a,",
    "severity": 0,
    "facility": 0,
    "facility_label": "kernel",
    "syslog_severity": "notice",
    "syslog_facility_code": 1
  },
  "fields": {
    "@timestamp": [
      "2019-11-03T12:11:09.000Z"
    ]
  },
  "sort": [
    1572783069000
  ]
}
 
{
  "_index": "logstash-pfsense-000001",
  "_type": "_doc",
  "_id": "K2opMW4Bx2SgVlxP1aw1",
  "_version": 1,
  "_score": null,
  "_source": {
    "host": "2.2.2.2",
    "prog": "filterlog",
    "syslog_severity_code": 5,
    "severity_label": "Emergency",
    "syslog_facility": "user-level",
    "@version": "1",
    "evtid": "134",
    "tags": [
      "_grokparsefailure_sysloginput",
      "_grokparsefailure",
      "_geoip_lookup_failure"
    ],
    "type": "pfsense",
    "@timestamp": "2019-11-03T12:07:39.000Z",
    "priority": 0,
    "message": "5,,,1000000003,em0,match,block,in,6,0xe0,0x00000,1,Options,0,32,fe22::222:10ff:fe22:22,ff02::1,HBH,RTALERT,0x0000,PADN,",
    "severity": 0,
    "facility": 0,
    "facility_label": "kernel",
    "syslog_severity": "notice",
    "syslog_facility_code": 1
  },
  "fields": {
    "@timestamp": [
      "2019-11-03T12:07:39.000Z"
    ]
  },
  "sort": [
    1572782859000
  ]
}
{
  "_index": "logstash-pfsense-000001",
  "_type": "_doc",
  "_id": "HWooMW4Bx2SgVlxPLZew",
  "_version": 1,
  "_score": null,
  "_source": {
    "host": "2.2.2.2",
    "prog": "filterlog",
    "syslog_severity_code": 5,
    "severity_label": "Emergency",
    "syslog_facility": "user-level",
    "@version": "1",
    "evtid": "134",
    "tags": [
      "_grokparsefailure_sysloginput",
      "_grokparsefailure",
      "_geoip_lookup_failure"
    ],
    "type": "pfsense",
    "@timestamp": "2019-11-03T12:05:51.000Z",
    "priority": 0,
    "message": "9,,,1000000103,em0,match,block,in,4,0x0,,50,8301,0,+,17,udp,1500,199.253.60.1,2.2.2.2,53,48193,1631",
    "severity": 0,
    "facility": 0,
    "facility_label": "kernel",
    "syslog_severity": "notice",
    "syslog_facility_code": 1
  },
  "fields": {
    "@timestamp": [
      "2019-11-03T12:05:51.000Z"
    ]
  },
  "sort": [
    1572782751000
  ]
}

The filter for logstash:


 ## 11-pfsense.conf
filter {
  if [type] == "pfsense" {
    grok {
      match => [ "message", "<(?<evtid>.*)>(?<datetime>(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s+(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]) (?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:[0-5][0-9])) (?<prog>.*?): (?<msg>.*)" ]
    }
    mutate {
      gsub => ["datetime","  "," "]
    }
    date {
      match => [ "datetime", "MMM dd HH:mm:ss" ]
      timezone => "America/New_York"
    }
    mutate {
      replace => [ "message", "%{msg}" ]
    }
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
    if [prog] =~ /^filterlog$/ {
      mutate {
        remove_field => [ "msg", "datetime" ]
      }
      grok {
        add_tag => [ "firewall" ]
        patterns_dir => "/opt/elastic/logstash/conf.d/patterns"
        match => [ "message", "%{PF_LOG_DATA}%{PF_IP_SPECIFIC_DATA}%{PF_IP_DATA}%{PF_PROTOCOL_DATA}",
                   "message", "%{PF_IPv4_SPECIFIC_DATA}%{PF_IP_DATA}%{PF_PROTOCOL_DATA}",
                   "message", "%{PF_IPv6_SPECIFIC_DATA}%{PF_IP_DATA}%{PF_PROTOCOL_DATA}" ]
      }
      mutate {
        lowercase => [ 'proto' ]
      }
      if ![geoip] and [src_ip] !~ /^(10\.)/ {
        geoip {
          add_tag => [ "GeoIP" ]
          source => "src_ip"
        }
      }
      if ![geoip] and [dst_ip] !~ /^(10\.)/ {
        geoip {
          add_tag => [ "GeoIP" ]
          source => "dst_ip"
        }
      }
    }
  }
}

Significant Changes to Accessing and Using GeoLite2 Databases

Describe the bug

FYI
Just came across this: https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases/

Starting December 30, 2019, we will be requiring users of our GeoLite2 databases to register for a MaxMind account and obtain a license key in order to download GeoLite2 databases. We will continue to offer the GeoLite2 databases without charge, and with the ability to redistribute with proper attribution and in compliance with privacy regulations. In addition, we are introducing a new end-user license agreement to govern your use of the GeoLite2 databases. Previously, GeoLite2 databases were accessible for download to the public on our developer website and were licensed under the Creative Commons Attribution-ShareAlike 4.0 International License.

Starting December 30, 2019, downloads will no longer be served from our public GeoLite2 page, from geolite.maxmind.com/download/geoip/database/*, or from any other public URL. See the section below for steps on how to migrate to the new download mechanism.

GeoLite2 Databases Affected

  • GeoLite2 Country
  • GeoLite2 City
  • GeoLite2 ASN

Additional context
https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases/

Suricata grok for pfSense

The suricata EVE logs have a PID right next to the word suricata. The grok filter needs to be updated to account for that.

PFSENSE %{MONTH}.%{MONTHDAY}.*%{TIME}.%{WORD:application}(?<pid>(\[[0-9]*\])?):.%{GREEDYDATA:msg}

The (?<pid>(\[[0-9]*\])?) group matches the PID when it is present and still matches when it is absent.
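
For reference, a minimal sketch of how that pattern could be wired into the pipeline; the pattern file location and the surrounding conditional are assumptions, not the shipped pfelk configuration:

filter {
  if [syslog_program] =~ /^suricata/ {
    grok {
      # PFSENSE pattern above: the optional (?<pid>(\[[0-9]*\])?) group lets
      # both "suricata[99171]:" and "suricata:" match.
      patterns_dir => ["/etc/logstash/conf.d/patterns"]
      match => [ "message", "%{PFSENSE}" ]
    }
  }
}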

GeoIP trying to parse private IPs.

I borrowed an example from the logstash netflow module to clean up some of the geoip failures. This seems to filter out all of the private IP matches.

if ![geoip] and [src_ip] {
  # Check if source IP address is private.
  cidr {
         address => [ "%{[src_ip]}" ]
         network => [ "0.0.0.0/32", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "fc00::/7", "127.0.0.0/8", "::1/128", "169.254.0.0/16", "fe80::/10", "224.0.0.0/4", "ff00::/8", "255.255.255.255/32", "::" ]
         add_field => { "[@metadata][src_locality]" => "private" }
  }
  # Check to see if src_locality exists. If it doesn't the src_addr didn't match a private address space and locality must be public.
  if ![@metadata][src_locality] {
    geoip {
      add_tag => [ "GeoIP" ]
      source => "src_ip"
      database => "/etc/logstash/GeoLite2-City.mmdb"
    }
  }
}
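
Presumably the destination address wants the same guard. A sketch mirroring the block above (field names assumed to match the src_ip example; the destination lookup is written to a separate target so it does not clash with an already-populated [geoip] field):

if [dst_ip] {
  # Check if destination IP address is private.
  cidr {
         address => [ "%{[dst_ip]}" ]
         network => [ "0.0.0.0/32", "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "fc00::/7", "127.0.0.0/8", "::1/128", "169.254.0.0/16", "fe80::/10", "224.0.0.0/4", "ff00::/8", "255.255.255.255/32", "::" ]
         add_field => { "[@metadata][dst_locality]" => "private" }
  }
  # Only geolocate destinations that did not match a private range.
  if ![@metadata][dst_locality] {
    geoip {
      add_tag => [ "GeoIP" ]
      source => "dst_ip"
      target => "dst_geoip"
      database => "/etc/logstash/GeoLite2-City.mmdb"
    }
  }
}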

Most of the dashboard no longer gets data

Looks like a number of the fields have changed, causing issues with a large portion of the dashboard. Also, the path to the GeoIP database files wasn't /usr/share/GeoIP on my install; it was /var/lib/GeoIP/.

Thanks


Suricata JSON parsing seems not to work

Hi, I am deploying your pfelk in a Docker environment and have some trouble getting the Suricata feedback correct (using ELK 7.4.0, pfSense 2.4.4-RELEASE-p3).
In pfSense:

  • [Status/System Logs/Settings]: remote syslog activated for system, firewall and VPN events.
  • [Services/Suricata/Edit Interface settings]:
  1. send alerts to syslog using facility LOCAL1 and level NOTICE
  2. EVE JSON log enabled with SYSLOG output, SYSLOG facility and NOTICE priority
  3. all EVE traffic logged

In ELK/Kibana:
Incoming messages arrive but are not completely parsed:
Oct 12, 2019 @ 10:45:23.708 | tags:pf, Suricata, _grokparsefailure, Suricata syslog_program:suricata event.module:suricata, suricata  event.original:<45>Oct 12 10:45:23 suricata[99171]: {"timestamp": "2019-10-12T10:45:23.645713+0200", "flow_id": 1719720250391083, "in_iface": "igb0", "event_type": "dns", "src_ip": "216.239.34.10", "src_port": 53, "dest_ip": "12.34.56.78", "dest_port": 61815, "proto": "UDP", "dns": {"version": 2, "type": "answer", "id": 8058, "flags": "8400", "qr": true, "aa": true, "rrname": "google.com", "rrtype": "A", "rcode": "NOERROR", "answers": [{"rrname": "google.com", "rrtype": "A", "ttl": 300, "rdata": "172.217.20.110"}], "grouped": {"A": ["172.217.20.110"]}}}  event.dataset:suricata, ON4CRM host:172.17.0.1 type:syslog @timestamp:Oct 12, 2019 @ 10:45:23.708 syslog_timestamp:Oct 12 10:45:23 received_at:Oct 12, 2019 @ 10:45:23.708 @version:1
Looking at pipeline/12-suricata.conf, we end up in the first branch of this part of the code:

if [message] =~ /^{.*}$/ {
  json {
    source => "syslog_message"
    target => "[suricata][eve]"
  }
} else {
  grok {
    patterns_dir => ["/etc/logstash/patterns"]
    match => [ "syslog_message", "%{SURICATA}" ]
  }
}

... so the JSON plugin should be used. As I understand it, the code after the above test modifies some fields based on the info in the JSON EVE message stored in [suricata][eve] by the JSON plugin. Based on the message seen, I modified the if clause, as there is no [source_ip] field in the output of pfSense.

if [suricata][eve][src_ip] {
  mutate { add_field => { "[source][ip]" => "%{[suricata][eve][src_ip]}" } }
}
if [suricata][eve][dest_ip] {
  mutate { add_field => { "[destination][ip]" => "%{[suricata][eve][dest_ip]}" } }
}
if [suricata][eve][src_port] {
  mutate { add_field => { "[source][port]" => "%{[suricata][eve][src_port]}" } }
}
if [suricata][eve][dest_port] {
  mutate { add_field => { "[destination][port]" => "%{[suricata][eve][dest_port]}" } }
}
...

At the end of the code, some final fields are updated, which seems to work:

mutate {
  add_field => { "[event][module]" => "suricata" }
  add_field => { "[event][dataset]" => "ON4CRM" }
}

In the patterns/pf- file I got:

SURICATA \[%{NUMBER:ids_gen_id}:%{NUMBER:ids_sig_id}:%{NUMBER:ids_sig_rev}\]%{SPACE}%{GREEDYDATA:ids_desc}%{SPACE}\[Classification:%{SPACE}%{GREEDYDATA:ids_class}\]%{SPACE}\[Priority:%{SPACE}%{NUMBER:ids_pri}\]%{SPACE}{%{WORD:[ids_network[transport]}}%{SPACE}%{IP:[ids_source][ip]}:%{NUMBER:[ids_source][port]}%{SPACE}->%{SPACE}%{IP:[ids_destination][ip]}:%{NUMBER:[ids_destination][port]}

I removed the leading space, as there is no space in the message, but this pattern should not matter here anyway, since we are not in the grok-parsing branch...

Any ideas why we get a _grokparsefailure ?

_grokparsefailure on pfsense 2.4.3

I used your config files including the pfsense_2_4_2.grok with my pfsense 2.4.3 install.

But I get 100% _grokparsefailure when the logs come in. Because of this I don't have any of the fields you demonstrated in your video and guide to use with visualizers.
I pasted an example JSON message below:
{ "_index": "logstash-2018.06.03", "_type": "doc", "_id": "YzYwx2MB7ZXvZXBcOCFc", "_version": 1, "_score": null, "_source": { "type": "syslog", "syslog_facility": "user-level", "@version": "1", "syslog_severity": "notice", "tags": [ "syslog", "_grokparsefailure" ], "@timestamp": "2018-06-03T19:46:56.089Z", "syslog_severity_code": 5, "syslog_facility_code": 1, "host": "101.17.0.18", "message": "<134>Jun 3 12:46:56 filterlog: 9,,,1000000103,igb3,match,block,in,4,0x0,,51,16809,0,none,6,tcp,40,197.44.176.155,96.24.49.165,48368,23,0,S,1179624696,,4936,," }, "fields": { "@timestamp": [ "2018-06-03T19:46:56.089Z" ] }, "sort": [ 1528055216089 ] }

Grok filter is not able to parse TCP logs properly

Describe the bug
TCP logs from pfSense are not being parsed properly; for example, fields such as tcp_flags and sequence_number are not being extracted from the message.

To Reproduce
Using exactly the same configuration files as provided (except the geoip items, omitted for simplicity), the following data is extracted from the original event:
{
"geoip" => {
"timezone" => "Europe/Helsinki",
"ip" => "x.x.x.x",
"latitude" => 60.1708,
"country_code2" => "FI",
"country_name" => "Finland",
"country_code3" => "FI",
"continent_code" => "EU",
"location" => {
"lon" => 24.9375,
"lat" => 60.1708
},
"longitude" => 24.9375
},
"offset" => "0",
"destination" => {
"port" => "47433",
"ip" => "x.x.x.x"
},
"flags" => "none",
"length" => "40",
"source" => {
"port" => "58995",
"ip" => "x.x.x.x"
},
"syslog_program" => "filterlog",
"syslog_message" => "5,,,1000000103,vmx0,match,block,in,4,0x0,,249,36398,0,none,6,tcp,x.x.x.x,x.x.x.x,58995,47433,0,S,3763004844,,1024,,",
"interface" => "vmx0",
"type" => "syslog",
"ttl" => "249",
"network" => {
"transport_id" => "6",
"transport" => "tcp",
"type" => "4",
"packets" => "0",
"direction" => "in"
},
"tags" => [
[0] "pf",
[1] "firewall"
],
"received_from" => "x.x.x.x",
"@timestamp" => 2019-11-30T19:56:09.078Z,
"syslog_timestamp" => "Nov 30 21:56:09",
"received_at" => "2019-11-30T19:56:09.078Z",
"@Version" => "1",
"host" => "x.x.x.x",
"tracker" => "1000000103",
"tos" => "0x0",
"event" => {
"code" => "5",
"original" => "<134>Nov 30 21:56:09 filterlog: 5,,,1000000103,vmx0,match,block,in,4,0x0,,249,36398,0,none,6,tcp,x.x.x.x,x.x.x.x,58995,47433,0,S,3763004844,,1024,,",
"action" => "block",
"id" => "36398",
"outcome" => "match"
}
}


Operating System (please complete the following information):

  • OS: CentOS
  • Version 7.4

Elastic, Logstash, Kibana (please complete the following information):

  • Version 7.4


README.md- Preparation

Can someone clarify step 23? I don't understand what to do in the 3rd step in Dev Tools with the URL.
"Input the following and press the click to send request button (triangle)"

  1. Set-up Kibana
    In your web browser go to the ELK local IP using port 5601 (ex: 192.168.0.1:5601)
    Click the wrench (Dev Tools) icon in the left panel
    Input the following and press the click to send request button (triangle)
    https://raw.githubusercontent.com/a3ilson/pfelk/master/Dashboard/GeoIP(Template)
    Click the gear icon (management) in the lower left
    Click Kibana -> Index Patterns
    Click Create New Index Pattern
    Type "pf*" into the input box, then click Next Step
    In the Time Filter drop down select "@timestamp"
    Click Create then verify you have data showing up under the Discover tab
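
For clarity on that third step: the linked URL points to a raw file whose contents you paste into the Dev Tools console and then send with the play (triangle) button. It is an index-template request of roughly the following shape; this is only an illustrative sketch (the template name and mapping are assumptions), so use the actual contents of the linked file:

PUT _template/pf-geoip
{
  "index_patterns": ["pf*"],
  "mappings": {
    "properties": {
      "geoip": {
        "properties": {
          "location": { "type": "geo_point" }
        }
      }
    }
  }
}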

openVPN log

Hey guys, it's me again.

Is your feature request related to a problem? Please describe.
When creating new dashboards for OpenVPN errors, I found that this error line uses a counter:

Authenticate/Decrypt packet error: bad packet ID (may be a replay): [ #1280 ] -- see the man page entry for --no-replay and --replay-window for more info or silence this warning with --mute-replay-warnings
Authenticate/Decrypt packet error: bad packet ID (may be a replay): [ #1279 ] -- see the man page entry for --no-replay and --replay-window for more info or silence this warning with --mute-replay-warnings
Authenticate/Decrypt packet error: bad packet ID (may be a replay): [ #1278 ] -- see the man page entry for --no-replay and --replay-window for more info or silence this warning with --mute-replay-warnings

So it is impossible to aggregate this as a unique error and count it.

Describe the solution you'd like
It would be awesome if the grok removed that counter before saving the event to Elasticsearch.
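
A rough sketch of what that could look like in the Logstash filter; the conditional and field name are assumptions, and this is not part of the shipped pfelk config:

filter {
  if [syslog_program] == "openvpn" {
    mutate {
      # Strip the per-packet counter, e.g. "[ #1280 ]", so repeated
      # replay warnings collapse into one aggregatable message.
      gsub => [ "message", "\[ #[0-9]+ \] ", "" ]
    }
  }
}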

Describe alternatives you've considered
I have tried to use Lucene queries to fix it but failed.

Additional context
None.
