
Comments (49)

anojht avatar anojht commented on August 22, 2024 1

Sorry for that; Logstash uses @metric. I will try to be aware of this in the future.

@a3ilson I got it to work!! I had to change the host IP to my server's subnet IP. This is because of my environment, where I had ELK running as a container virtually bridged to the server's network. Hence, pfSense was sending to my server's address, which was then forwarding the traffic to the docker container.

Thanks again for your time and effort into putting this awesome guide and conf files together!

Cheers,
Anojh

from pfelk.

anojht avatar anojht commented on August 22, 2024

image
Those are the only fields I have available to me.


a3ilson avatar a3ilson commented on August 22, 2024


anojht avatar anojht commented on August 22, 2024

Thanks for your swift response and time in putting such a great guide together! I have uploaded my log below:
logstash.log


a3ilson avatar a3ilson commented on August 22, 2024


anojht avatar anojht commented on August 22, 2024

My grok pattern:
image

Discover menu:
image

pfSense Remote Log Settings:
image


a3ilson avatar a3ilson commented on August 22, 2024


anojht avatar anojht commented on August 22, 2024

Actually, I have IPv6 disabled on my firewall, along with a rule to drop any IPv6 traffic.
My network is entirely IPv4. Even if your grok patterns do not support IPv6, why does IPv4 not seem to work either? I can't import and use your dashboards/searches/visualizers, since I am missing all the necessary fields.

image


anojht avatar anojht commented on August 22, 2024

I am completely new to grok, but I can try working out the IPv6 pattern. In the meantime, here are the IPv6 entries in my log that are being dropped; it's the least I could do after you graciously made this guide open source:

Time Message
Jun 3 14:52:46 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,255,UDP,17,86,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,86
Jun 3 14:52:43 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,255,UDP,17,86,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,86
Jun 3 14:52:42 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,255,UDP,17,86,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,86
Jun 3 14:52:33 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,255,UDP,17,98,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,98
Jun 3 14:52:30 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x1839f,255,ICMPv6,58,16,fe80::146c:4c5b:1204:bb14,ff02::2,
Jun 3 14:52:25 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x1839f,255,ICMPv6,58,16,fe80::146c:4c5b:1204:bb14,ff02::2,
Jun 3 14:52:24 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,255,UDP,17,98,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,98
Jun 3 14:52:23 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,255,UDP,17,86,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,86
Jun 3 14:52:21 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,255,UDP,17,86,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,86
Jun 3 14:52:21 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,255,UDP,17,98,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,98
Jun 3 14:52:18 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,255,UDP,17,98,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,98
Jun 3 14:52:17 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x00000,1,Options,0,36,fe80::146c:4c5b:1204:bb14,ff02::16,HBH,PADN,RTALERT,0x0000,
Jun 3 14:52:15 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,255,UDP,17,98,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,98
Jun 3 14:52:15 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x00000,1,Options,0,36,fe80::146c:4c5b:1204:bb14,ff02::16,HBH,PADN,RTALERT,0x0000,
Jun 3 14:52:14 filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,255,UDP,17,98,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,98
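
For reference, these filterlog lines are plain CSV, so the fields can be pulled apart without grok while working out a pattern. A minimal Python sketch of the IPv6/UDP layout above (the field names are my reading of the pfSense filterlog format docs, not something confirmed in this thread):

```python
# Sketch: split a pfSense filterlog IPv6 line into named fields.
# Field names assume the documented filterlog CSV layout.
def parse_filterlog_v6(line: str) -> dict:
    csv = line.split("filterlog: ", 1)[-1]
    parts = csv.split(",")
    common = ["rule", "subrule", "anchor", "tracker", "iface",
              "reason", "action", "direction", "ip_ver"]
    rec = dict(zip(common, parts))
    if rec.get("ip_ver") == "6":
        v6 = ["class", "flow_label", "hop_limit", "proto",
              "proto_id", "length", "src_ip", "dst_ip"]
        rec.update(zip(v6, parts[9:]))
        if rec.get("proto") in ("TCP", "UDP"):
            rec.update(zip(["src_port", "dst_port", "data_length"],
                           parts[17:]))
    return rec

sample = ("filterlog: 5,,,1000000003,igb0,match,block,in,6,0x00,0x526c3,"
          "255,UDP,17,86,fe80::146c:4c5b:1204:bb14,ff02::fb,5353,5353,86")
fields = parse_filterlog_v6(sample)
# fields["src_ip"] is the link-local mDNS sender; fields["dst_port"] is "5353"
```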


a3ilson avatar a3ilson commented on August 22, 2024


anojht avatar anojht commented on August 22, 2024

I used the files on Github as I followed your guide.


a3ilson avatar a3ilson commented on August 22, 2024


anojht avatar anojht commented on August 22, 2024

Done. Logstash restarted without issues, but still the same results.


a3ilson avatar a3ilson commented on August 22, 2024


anojht avatar anojht commented on August 22, 2024

Wait, your 11-pfsense.conf file has two typos: the location of the GeoIP database should be /etc/logstash/ instead of /usr/share. Also, should that if [host] line be the IP of my pfSense, which is 192.168.1.1, or of my ELK host, which is 172.168.1.2?


a3ilson avatar a3ilson commented on August 22, 2024


anojht avatar anojht commented on August 22, 2024

OK, I ran that sequence of commands, made the appropriate changes, and restarted the service. Still getting the same results. Is this because I am on 2.4.3 and these files are no longer compatible?


a3ilson avatar a3ilson commented on August 22, 2024


anojht avatar anojht commented on August 22, 2024

Ok, finally we have something different lol
Now I am getting these error messages in the stdout of my docker container:

[2018-06-03T22:59:20,748][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"#<LogStash::FilterDelegator:0x3771f2f @metric_events_out=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: out value:0, @metric_events_in=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: in value:0, @metric_events_time=org.jruby.proxy.org.logstash.instrument.metrics.counter.LongCounter$Proxy2 - name: duration_in_millis value:0, @id="133ce2137b4dbbc0e088c90322bfabf68bab54416afdd92002eef681741eaae1", @klass=LogStash::Filters::Grok, @metric_events=#<LogStash::Instrument::NamespacedMetric:0x1f6fa712 @metric=#<LogStash::Instrument::Metric:0x3c3fd134 @collector=#<LogStash::Instrument::Collector:0x2ba83b99 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x109dd7a4 @store=#<Concurrent::Map:0x00000000000fc8 entries=3 default_proc=nil>, @structured_lookup_mutex=#Mutex:0x5ac2fdf3, @fast_lookup=#<Concurrent::Map:0x00000000000fcc entries=202 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :filters, :"133ce2137b4dbbc0e088c90322bfabf68bab54416afdd92002eef681741eaae1", :events]>, @filter=<LogStash::Filters::Grok patterns_dir=>["/etc/logstash/conf.d/patterns"], match=>{"message"=>"%{PFSENSE_SURICATA}"}, id=>"133ce2137b4dbbc0e088c90322bfabf68bab54416afdd92002eef681741eaae1", enable_metric=>true, periodic_flush=>false, patterns_files_glob=>"*", break_on_match=>true, named_captures_only=>true, keep_empty_captures=>false, tag_on_failure=>["_grokparsefailure"], timeout_millis=>30000, tag_on_timeout=>"_groktimeout">>", :error=>"pattern %{I P:ids_src_ip} not defined", :thread=>"#<Thread:0x13af94cc run>"}

[2018-06-03T22:59:21,145][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Grok::PatternError: pattern %{I P:ids_src_ip} not defined>, :backtrace=>["/opt/logstash/vendor/bundle/jruby/2.3.0/gems/jls-grok-0.11.4/lib/grok-pure.rb:123:in block in compile'", "org/jruby/RubyKernel.java:1292:in loop'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/jls-grok-0.11.4/lib/grok-pure.rb:93:in compile'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:281:in block in register'", "org/jruby/RubyArray.java:1734:in each'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:275:in block in register'", "org/jruby/RubyHash.java:1343:in each'", "/opt/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-filter-grok-4.0.3/lib/logstash/filters/grok.rb:270:in register'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:342:in register_plugin'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:353:in block in register_plugins'", "org/jruby/RubyArray.java:1734:in each'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:353:in register_plugins'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:731:in maybe_setup_out_plugins'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:363:in start_workers'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:290:in run'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:250:in block in start'"], :thread=>"#<Thread:0x13af94cc run>"}

[2018-06-03T22:59:21,203][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: LogStash::PipelineAction::Create/pipeline_id:main, action_result: false", :backtrace=>nil}
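
The error itself points at the cause: %{I P:ids_src_ip} has a stray space inside the pattern name, so grok looks up a pattern literally named "I P", which does not exist. A hypothetical little linter for that class of typo (assuming the usual convention that grok pattern names are upper-case letters, digits, and underscores):

```python
import re

# Find %{NAME} / %{NAME:field} references whose NAME does not look like
# a legal grok pattern name (e.g. "I P" with a stray space).
TOKEN = re.compile(r"%\{([^}]+)\}")

def bad_pattern_names(conf_text: str) -> list:
    bad = []
    for token in TOKEN.findall(conf_text):
        name = token.split(":", 1)[0]
        if not re.fullmatch(r"[A-Z0-9_]+", name):
            bad.append(name)
    return bad

print(bad_pattern_names('match => {"message" => "%{I P:ids_src_ip}"}'))
# -> ['I P']
```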


a3ilson avatar a3ilson commented on August 22, 2024


anojht avatar anojht commented on August 22, 2024

The new file has the large spaces as well, but it's OK; I removed them and restarted Logstash.
Now I get 124 fields, and this looks much better!
However, my Discover window is completely empty; is that normal?


a3ilson avatar a3ilson commented on August 22, 2024


anojht avatar anojht commented on August 22, 2024

The Discover menu is populating, however still 100% _grokparsefailure :(
What are we missing here? lol


anojht avatar anojht commented on August 22, 2024

I am running pfSense 2.4.3 with the latest Suricata in inline mode on my WAN and LAN. I have OpenVPN, ntopng, and pfBlockerNG running as packages. I think all of this is pretty standard?


a3ilson avatar a3ilson commented on August 22, 2024

I have all the same packages running... So you got 124 fields, but within Discover everything is still failing?


anojht avatar anojht commented on August 22, 2024

Yeah, every single entry has the _grokparsefailure tag. I filter by tags and it shows 100% failure...


anojht avatar anojht commented on August 22, 2024

I don't know if this is useful, but the docker container I am running uses the latest version of Elasticsearch, Logstash, and Kibana, which is 6.2.4.

For the configuration you currently have running and working, which version of the ELK stack are you on?


a3ilson avatar a3ilson commented on August 22, 2024

I am running the same version. I still have some _grokparsefailures, but mostly within IPv6 (RTALERT). I'm assuming something is slightly different or off, which is causing the issue... Can you provide a couple of your messages? Do you have any that are tagged with pfsense, firewall, dhcpd, etc.?


anojht avatar anojht commented on August 22, 2024

Here is something weird, every single entry has only two tags: syslog and _grokparsefailure

{
  "_index": "logstash-2018.06.04",
  "_type": "doc",
  "_id": "9wIdyGMB0SL-14aK_cgP",
  "_version": 1,
  "_score": null,
  "_source": {
    "syslog_facility": "user-level",
    "@version": "1",
    "message": "<134>Jun  3 17:06:38 filterlog: 81,,,1770008623,igb3,match,block,in,4,0x0,,244,54321,0,none,6,tcp,40,107.170.231.4,70.79.164.248,40426,9000,0,S,716072279,,65535,,",
    "type": "syslog",
    "syslog_severity_code": 5,
    "syslog_facility_code": 1,
    "tags": [
      "syslog",
      "_grokparsefailure"
    ],
    "host": "172.168.1.1",
    "syslog_severity": "notice",
    "@timestamp": "2018-06-04T00:06:38.488Z"
  },
  "fields": {
    "@timestamp": [
      "2018-06-04T00:06:38.488Z"
    ]
  },
  "sort": [
    1528070798488
  ]
}
{
  "_index": "logstash-2018.06.04",
  "_type": "doc",
  "_id": "8wIdyGMB0SL-14aKyMhD",
  "_version": 1,
  "_score": null,
  "_source": {
    "syslog_facility": "user-level",
    "@version": "1",
    "message": "<141>Jun  3 17:06:24 suricata[11921]: [Drop] [1:2009582:3] ET SCAN NMAP -sS window 1024 [Classification: Attempted Information Leak] [Priority: 2] {TCP} 5.188.86.27:59762 -> 70.79.164.248:52975",
    "type": "syslog",
    "syslog_severity_code": 5,
    "syslog_facility_code": 1,
    "tags": [
      "syslog",
      "_grokparsefailure"
    ],
    "host": "172.168.1.1",
    "syslog_severity": "notice",
    "@timestamp": "2018-06-04T00:06:24.968Z"
  },
  "fields": {
    "@timestamp": [
      "2018-06-04T00:06:24.968Z"
    ]
  },
  "sort": [
    1528070784968
  ]
}
{
  "_index": "logstash-2018.06.04",
  "_type": "doc",
  "_id": "8AIdyGMB0SL-14aKZsi5",
  "_version": 1,
  "_score": null,
  "_source": {
    "syslog_facility": "user-level",
    "@version": "1",
    "message": "<190>Jun  3 17:05:59 dhcpd: DHCPACK on 172.168.1.7 to 52:54:00:68:0b:82 (freebsd) via igb2",
    "type": "syslog",
    "syslog_severity_code": 5,
    "syslog_facility_code": 1,
    "tags": [
      "syslog",
      "_grokparsefailure"
    ],
    "host": "172.168.1.1",
    "syslog_severity": "notice",
    "@timestamp": "2018-06-04T00:05:59.988Z"
  },
  "fields": {
    "@timestamp": [
      "2018-06-04T00:05:59.988Z"
    ]
  },
  "sort": [
    1528070759988
  ]
}
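
One detail worth noting in the documents above: the angle-bracket prefix still sitting in each message is the syslog priority, and <134> decodes to facility local0, severity info. Logstash instead recorded user-level/notice, which, if I read the syslog_pri filter right, is its fallback (priority 13) when no priority was actually parsed. The decode is just integer division and remainder:

```python
# RFC 3164 priority decode: facility = pri // 8, severity = pri % 8.
# Facility names use the common short forms (Linux-style for 9 and 13-15).
FACILITIES = ["kern", "user", "mail", "daemon", "auth", "syslog", "lpr",
              "news", "uucp", "cron", "authpriv", "ftp", "ntp", "audit",
              "alert", "clock", "local0", "local1", "local2", "local3",
              "local4", "local5", "local6", "local7"]
SEVERITIES = ["emerg", "alert", "crit", "err", "warning", "notice",
              "info", "debug"]

def decode_pri(pri: int):
    return FACILITIES[pri // 8], SEVERITIES[pri % 8]

print(decode_pri(134))  # ('local0', 'info')  <- the filterlog message above
print(decode_pri(13))   # ('user', 'notice') <- what Logstash reported
```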


a3ilson avatar a3ilson commented on August 22, 2024

Check your Suricata log settings: Services>>Suricata>>Interfaces>>WAN Settings
suricata


anojht avatar anojht commented on August 22, 2024

Yup, we have the same settings for Suricata.


a3ilson avatar a3ilson commented on August 22, 2024

The issue is within the 10-syslog.conf file. I do not understand why or where your output is getting "syslog_facility": "user-level" from. The second issue appears to be that the syslog filter is not parsing out the timestamp, which is still contained within the message.

I assume you only have one pfSense instance? If so, please remove lines 8-12 from within 10-syslog.conf, or point the second host to an IP address not in use.


a3ilson avatar a3ilson commented on August 22, 2024

Take a look at line 38 within the 10-syslog.conf file: locale => "en". This may be the culprit... change it to the appropriate language code, e.g.:

"en", "CA"
"CA"
"fr_CA"
"en_CA"

I would adjust the locale to match what you're utilizing... Once changed, you'll need to restart Logstash (systemctl restart logstash.service).


anojht avatar anojht commented on August 22, 2024

My file only has one block, which points to my single pfSense instance. I have already removed the second block.

Ah, I see; my pfSense is set to English, which is en_US. I just changed the language code to en_US, but still the same result. I have been at this for two days and have tried just about everything I could... I still have no idea why it won't parse correctly...


a3ilson avatar a3ilson commented on August 22, 2024

The grok pattern works... I've taken and tested your message output. Both our setups are essentially the same. I would troubleshoot pfSense (their documentation is somewhat horrible):

Suggest: a clean/fresh install of pfSense (spare computer, VM, docker, etc.)
Configure just the log portion to send to your ELK stack
Configure 10-syslog.conf to listen to the new IP address of your vanilla pfSense instance

If that works, I would add one package at a time, testing after each, to hopefully identify and troubleshoot the specific issue. The DHCP/Suricata messages do look good, minus the prefix in the message that is inhibiting proper filtering. You may be able to adjust the filter to read and interpret the "syslog_facility": "user-level" field, which doesn't appear on mine.

{
  "_index": "logstash-2018.06.04",
  "_type": "doc",
  "_id": "_rxEyGMB0mW2pC3sUqhW",
  "_version": 1,
  "_score": null,
  "_source": {
    "proto": "tcp",
    "direction": "in",
    "ip_ver": "4",
    "src_port": "55291",
    "offset": "0",
    "tos": "0x0",
    "dest_port": "443",
    "length": "83",
    "evtid": "134",
    "prog": "filterlog",
    "id": "36350",
    "iface": "igb1",
    "tracker": "1000000103",
    "action": "block",
    "@Version": "1",
    "src_ip": "10.0.0.30",
    "reason": "match",
    "data_length": "31",
    "flags": "DF",
    "proto_id": "6",
    "host": "10.0.0.1",
    "ttl": "64",
    "message": "9,,,1000000103,igb1,match,block,in,4,0x0,,64,36350,0,DF,6,tcp,83,10.0.0.30,34.202.184.200,55291,443,31,PA,219253599:219253630,4059336140,30407,,nop;nop;TS",
    "rule": "9",
    "tags": [
      "pfsense",
      "firewall",
      "firewall"
    ],
    "type": "syslog",
    "@timestamp": "2018-06-04T00:48:30.000Z",
    "dest_ip": "34.202.184.200"
  },
  "fields": {
    "@timestamp": [
      "2018-06-04T00:48:30.000Z"
    ]
  },
  "sort": [
    1528073310000
  ]
}


a3ilson avatar a3ilson commented on August 22, 2024

I believe I found it....

Within your 10-syslog.conf file, what tag is added to your host?

filter {
  if [type] == "syslog" {
    if [host] =~ /10.0.0.1/ {
      mutate {
        add_tag => ["pfsense", "Ready"]

Be sure that ["pfsense", "Ready"] is added.

This is vital for 11-pfsense.conf to correctly extract the evtid and date.


anojht avatar anojht commented on August 22, 2024

First of all, thank you so much for your time and help. I do not doubt your files; I know the problem lies somewhere in my configuration.

This is how my 10-syslog.conf file starts out. I noticed you don't have backslashes for your IP??
EDIT: never mind, that's just GitHub escaping when the config isn't put inside code blocks.

filter {
  if [type] == "syslog" {
    if [host] =~ /192\.168\.1\.1/ {
      mutate {
        add_tag => ["pfsense", "Ready"]
      }
    }

    if "Ready" not in [tags] {
      mutate {
        add_tag => [ "syslog" ]
      }
    }
  }
}
filter {
  if [type] == "syslog" {
    mutate {
      remove_tag => "Ready"
    }
  }
}
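
As an aside, the escaping only affects strictness: in a regex, an unescaped dot matches any single character, so /10.0.0.1/ still matches the literal address, it just also matches strings it shouldn't. A quick check (in Python, but the dot semantics are the same for the Ruby regexes Logstash conditionals use):

```python
import re

# Unescaped dots match ANY character, so the pattern is looser than intended.
assert re.search(r"10.0.0.1", "host 10.0.0.1 up")         # intended match
assert re.search(r"10.0.0.1", "host 10a0b0c1 up")         # accidental match
assert not re.search(r"10\.0\.0\.1", "host 10a0b0c1 up")  # literal dots only
```

So the host condition works either way here; the backslashes just make it precise.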


a3ilson avatar a3ilson commented on August 22, 2024

I copied it, but the backslashes did not... Yours is correct.


anojht avatar anojht commented on August 22, 2024

Odd question: my host server, which runs unRAID OS, runs docker with the ELK container on a bridge network. The ports are correctly set up between the container and the host. Could this network setup be the culprit?

I ask because I skipped the initial hostname steps; I didn't think they were important, since I am running inside a docker environment where hostnames don't stick. Are those initial two steps necessary?


a3ilson avatar a3ilson commented on August 22, 2024

Give this a try...
Edit your 10-syslog.conf and add the following below the add_tag => [ "firewall" ] line:

      match => [ "message", "<(?<evtid>.*)>(?<datetime>(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\s+(?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9]) (?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:[0-5][0-9])) (?<prog>.*?): (?<msg>.*)" ]
    }
    mutate {
      gsub => ["datetime","  "," "]
    }
    date {
      match => [ "datetime", "MMM dd HH:mm:ss" ]
    }
    mutate {
      replace => [ "message", "%{msg}" ]
    }
    mutate {
      remove_field => [ "msg", "datetime" ]
    }
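
To sanity-check what that match line should pull out of the raw messages, here is the same pattern translated into Python's named-group syntax (and simplified: \d{1,2} in place of the explicit day/hour alternations), run against one of the failing messages quoted earlier in the thread:

```python
import re

# Simplified translation of the grok pattern above: strip the <evtid>
# priority prefix and the syslog timestamp, leaving prog and msg.
SYSLOG = re.compile(
    r"<(?P<evtid>\d+)>"
    r"(?P<datetime>(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)"
    r"\s+\d{1,2} \d{1,2}:\d{2}:\d{2}) "
    r"(?P<prog>.*?): (?P<msg>.*)")

m = SYSLOG.match("<134>Jun  3 17:06:38 filterlog: 81,,,1770008623,igb3,"
                 "match,block,in,4,0x0,,244,54321,0,none,6,tcp,40,...")
# m.group("evtid") == "134", m.group("prog") == "filterlog",
# and m.group("msg") is the bare CSV that 11-pfsense.conf expects.
```

Note the double space in "Jun  3"; that is what the gsub => ["datetime","  "," "] step normalizes before the date filter parses it.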


a3ilson avatar a3ilson commented on August 22, 2024

I would not think that your setup is the issue, nor is the hostname; you're receiving the logs, but the filter is not working correctly.


anojht avatar anojht commented on August 22, 2024

The 10-syslog.conf file does not have an add_tag => [ "firewall" ] line; did you mean 11-pfsense.conf?


a3ilson avatar a3ilson commented on August 22, 2024

That's all the help (time) I have for now; I'll check back this coming weekend. I believe the issue is within the filtering portion. Take a look at your messages: line 5 of 11-pfsense.conf should mutate and properly filter them...


anojht avatar anojht commented on August 22, 2024

Thank you so much, I really appreciate the troubleshooting session! I will keep trying and report if I get it working.

Cheers,
Anojh


a3ilson avatar a3ilson commented on August 22, 2024

The different files are just for organization... Copy those lines (the ones from the previous message) from 11-pfsense.conf and paste them after the add_tag => ["pfsense", "Ready"] line.


a3ilson avatar a3ilson commented on August 22, 2024

Essentially, this changes the order in hopes that it filters properly. Your files and setup are good, but for some odd reason the filtering is not taking effect. I would focus on that area, making sure you restart Logstash after each change and checking whether the message gets properly filtered.


a3ilson avatar a3ilson commented on August 22, 2024

The other areas to key in on would be:
"syslog_facility": "user-level",
"syslog_severity_code": 5,
"syslog_facility_code": 1,
"syslog_severity": "notice",

Those are visible within your provided outputs, yet they are not identified/filtered by the configuration files.


anojht avatar anojht commented on August 22, 2024

Thank you; I will try to troubleshoot 10-syslog.conf, since the problem definitely exists there. Adding the previous message's settings to that file results in a config error. I will keep trying!


a3ilson avatar a3ilson commented on August 22, 2024

