
ecs-logging-java's Introduction


ECS-based logging for Java applications

Centralized logging for Java applications with the Elastic stack made easy

Release announcements

To get notified about new releases, watch this repository for Releases only.

Getting Help

If you need help or hit an issue, please start by opening a topic on our discuss forums. Please note that we reserve GitHub tickets for confirmed bugs and enhancement requests.

Documentation

Docs are located on elastic.co.

License

ECS Logging Java is licensed under Apache License, Version 2.0.

ecs-logging-java's People

Contributors

amannocci, apmmachine, bjoern2, bmorelli25, cachedout, dependabot[bot], echatman, eyalkoren, felixbarny, github-actions[bot], halofour, ipalbeniz, jackshirazi, jonaskunz, joschi, michaelmcfadyensky, nhurion, nickwemekamp, odin568, peldan, pgomulka, philkra, rdifrango, reakaleek, redrathnure, sylvainjuge, tobiasstadler, v1v, venkat22, yorondevops


ecs-logging-java's Issues

add process.thread.name to JUL formatter

Hello,

I wonder if you could add the process.thread.name field to the JUL formatter (I tried to submit this through a PR, but I don't have permissions):

diff --git a/jul-ecs-formatter/src/main/java/co/elastic/logging/jul/EcsFormatter.java b/jul-ecs-formatter/src/main/java/co/elastic/logging/jul/EcsFormatter.java
index 940c5a5..30bda29 100644
--- a/jul-ecs-formatter/src/main/java/co/elastic/logging/jul/EcsFormatter.java
+++ b/jul-ecs-formatter/src/main/java/co/elastic/logging/jul/EcsFormatter.java
@@ -62,6 +62,7 @@ public class EcsFormatter extends Formatter {
         EcsJsonSerializer.serializeServiceName(builder, serviceName);
         EcsJsonSerializer.serializeEventDataset(builder, eventDataset);
         EcsJsonSerializer.serializeThreadId(builder, record.getThreadID());
+        EcsJsonSerializer.serializeThreadName(builder, Thread.currentThread().getName().replaceAll("\\[", "_").replaceAll("]", "_"));
         EcsJsonSerializer.serializeLoggerName(builder, record.getLoggerName());
         if (includeOrigin && record.getSourceClassName() != null && record.getSourceMethodName() != null) {
             EcsJsonSerializer.serializeOrigin(builder, buildFileName(record.getSourceClassName()), record.getSourceMethodName(), -1);

It works nicely for me:

{"@timestamp":"2020-07-31T14:43:23.827Z", "log.level": "INFO", "message":"HttpServerVerticle deployment complete.", "service.name":"myservice","event.dataset":"myservice.log","process.thread.id":71,"process.thread.name":"vert.x-eventloop-thread-0","log.logger":"myclass"}

Trace support for Java util logging when logging is called from SLF4j

Hi,

I'm considering adding a feature that provides trace support for JUL when JUL is called from SLF4J. This can be done using SLF4J's MDC.getCopyOfContextMap method. It is useful for Java EE projects that want to keep using the logging framework of their application server (which is often JUL, unfortunately).

Below is a code snippet that shows how this could be implemented

        Map<String, String> copyOfContextMap = MDC.getCopyOfContextMap();

        copyOfContextMap.forEach((k, v) -> {
            System.out.println("key: " + k + ", value: " + v);
        });

        System.out.println(copyOfContextMap.get("trace.id"));
        System.out.println(copyOfContextMap.get("transaction.id"));

I can implement this by making a new formatter, e.g. a JulSlf4jEcsFormatter that extends the JUL EcsFormatter, or I can edit the current JUL EcsFormatter.

Which solution do you prefer? I prefer the latter, but wanted to be sure before I write the code :)
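A minimal, self-contained sketch of the idea follows. The class name MdcTraceFields and its method are hypothetical, not part of the library; in a real formatter the map would come from SLF4J's MDC.getCopyOfContextMap(), which can return null when no MDC values have been set.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch (not the library's actual API): serialize trace
// correlation fields taken from an MDC snapshot. In a real formatter the
// map would come from org.slf4j.MDC.getCopyOfContextMap().
public class MdcTraceFields {

    public static String serializeTraceFields(Map<String, String> mdc) {
        if (mdc == null) { // MDC.getCopyOfContextMap() returns null if the MDC is empty
            return "";
        }
        StringBuilder builder = new StringBuilder();
        appendField(builder, "trace.id", mdc.get("trace.id"));
        appendField(builder, "transaction.id", mdc.get("transaction.id"));
        return builder.toString();
    }

    private static void appendField(StringBuilder builder, String key, String value) {
        if (value != null) {
            builder.append('"').append(key).append("\":\"").append(value).append("\",");
        }
    }

    public static void main(String[] args) {
        Map<String, String> mdc = new LinkedHashMap<>();
        mdc.put("trace.id", "abc123");
        mdc.put("transaction.id", "def456");
        System.out.println(serializeTraceFields(mdc));
    }
}
```

The null check matters: SLF4J's contract allows the copy to be null, so a formatter must not call forEach on it unconditionally (as the snippet above does).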

Add constants for known ecs fields for code efficiency when you log

Hi,
I'd like to contribute to the project. It would be my first time, so I am not sure whether I can create a branch or anything. Would that be possible?

I'd just like to create a Constants.java class containing all the ECS field names, so they would be accessible from any project; that would be more efficient.

Plus, it would make it easier to see which fields already exist.
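A minimal sketch of what such a constants class could look like. The class name EcsFields and its selection of fields are illustrative only; no such class exists in the library:

```java
// Illustrative sketch of the requested constants class. Field name strings
// follow the ECS spec; the class itself is hypothetical.
public final class EcsFields {
    public static final String TIMESTAMP = "@timestamp";
    public static final String LOG_LEVEL = "log.level";
    public static final String LOG_LOGGER = "log.logger";
    public static final String MESSAGE = "message";
    public static final String SERVICE_NAME = "service.name";
    public static final String EVENT_DATASET = "event.dataset";
    public static final String PROCESS_THREAD_NAME = "process.thread.name";
    public static final String TRACE_ID = "trace.id";
    public static final String TRANSACTION_ID = "transaction.id";

    private EcsFields() {
        // constants holder, not instantiable
    }
}
```

A caller could then write, for example, `map.put(EcsFields.SERVICE_NAME, "myservice")` instead of repeating the string literal.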

ecs logging can throw an NPE on null exception message

Hello,

Like in #44

Same thing for us in another method, with an NPE on an int compare:
java.lang.NullPointerException
at co.elastic.logging.JsonUtils.quoteAsString(JsonUtils.java:63)
at co.elastic.logging.EcsJsonSerializer.serializeFormattedMessage(EcsJsonSerializer.java:79)

In your code, the null message is not checked before JsonUtils.quoteAsString:
public static void serializeFormattedMessage(StringBuilder builder, String message) {
    builder.append("\"message\":\"");
    JsonUtils.quoteAsString(message, builder);
    builder.append("\", ");
}

Thanks to you ;)
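A null guard along these lines would avoid the NPE. This is a simplified, self-contained sketch, not the library's actual fix; quoteAsString is reduced to a stand-in that only escapes quotes and backslashes:

```java
// Sketch of a null-guarded serializeFormattedMessage; the method shape
// mirrors the snippet above but is simplified for illustration.
public class NullSafeSerializer {

    public static void serializeFormattedMessage(StringBuilder builder, String message) {
        builder.append("\"message\":\"");
        if (message != null) { // guard: quoteAsString dereferences its input
            quoteAsString(message, builder);
        }
        builder.append("\", ");
    }

    // Minimal stand-in for JsonUtils.quoteAsString: escape quotes and backslashes only.
    private static void quoteAsString(String content, StringBuilder builder) {
        for (int i = 0; i < content.length(); i++) {
            char c = content.charAt(i);
            if (c == '"' || c == '\\') {
                builder.append('\\');
            }
            builder.append(c);
        }
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        serializeFormattedMessage(sb, null); // no NPE; emits an empty message field
        System.out.println(sb);
    }
}
```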

Filebeat sample config replaces log timestamp

I set up the Logback EcsEncoder and Filebeat as described in the docs and noticed strange behavior.
The timestamps in Elasticsearch/Kibana were not the same as in my JSON log files (a few milliseconds to minutes off). The timestamp in Elasticsearch was always the indexed/read timestamp from Filebeat, not the actual timestamp from the log line.

While discussing this with @xeraa, he mentioned the json.overwrite_keys property.

Setting this property to true fixes this.

Please update the sample docs for the Filebeat setup to avoid confusion.

json:
    keys_under_root: true
    overwrite_keys: true

Docker configuration

Hi,

I am having a hard time working out how to use this in conjunction with Docker. I have a Java application that logs everything in JSON to a file inside the Docker container. I have created a symlink with ln -sf /dev/stdout /opt/application.json, so everything is written to stdout. Of course, looking at the *-json.log file created by Docker under /var/lib/docker/containers/containerid/, it looks something like this:

{"log":"{\"@timestamp\":\"2019-11-03T12:33:45.658Z\", \"log.level\": \"INFO\", \"message\":\"Login pkahr (philipp.kahr)\", \"service.name\":\"myservice\",\"process.thread.name\":\"http-bio-8080-exec-8\",\"log.logger\":\"com.myservice.server.core.utils.user.ThisDirectoryServiceImpl\",\"log.origin\":{\"file.name\":\"ThisDirectoryServiceImpl.java\",\"function\":\"loadUserByUsername\",\"file.line\":622}}\n","stream":"stdout","time":"2019-11-03T12:33:45.658546603Z"}

My filebeat config looks like this:

filebeat.inputs:
- type: container
  paths: 
    - '/var/lib/docker/containers/*/*.log'
  json.keys_under_root: true
  json.overwrite_keys: true
  json.message_key: log
  multiline.pattern: '^{'
  multiline.negate: true
  multiline.match: after

I am using the message decoder as suggested in the README:

  - decode_json_fields:
      fields: message
      target: ""
      overwrite_keys: true
  # flattens the array to a single string
  - script:
      when:
        has_fields: ['error.stack_trace']
      lang: javascript
      id: my_filter
      source: >
        function process(event) {
            event.Put("error.stack_trace", event.Get("error.stack_trace").join("\n"));
        }

However, I can see a couple of errors in journalctl -u filebeat -f:

filebeat[29865]: 2019-11-03T13:56:09.252+0100        ERROR        readjson/json.go:52        Error decoding JSON: json: cannot unmarshal number into Go value of type map[string]interface {}
filebeat[29865]: 2019-11-03T13:56:21.269+0100        ERROR        readjson/json.go:52        Error decoding JSON: invalid character '-' in numeric literal
filebeat[29865]: 2019-11-03T13:56:21.269+0100        ERROR        readjson/json.go:52        Error decoding JSON: invalid character 'E' looking for beginning of value
filebeat[29865]: 2019-11-03T13:56:21.268+0100        ERROR        readjson/json.go:52        Error decoding JSON: unexpected EOF

filebeat[29865]: 2019-11-03T13:56:17.267+0100        WARN        elasticsearch/client.go:535        Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0x120f69c0, ext:63708382571, loc:(*time.Location)(nil)}, Meta:common.MapStr(nil), Fields:common.MapStr{"agent":common.MapStr{"ephemeral_id":"f3bb8324-dda5-4dbe-9f1a-6fc86820c37d", "hostname":"CentOS-76-64-minimal", "id":"d8cb4370-3918-40a8-88c5-80a992868234", "type":"filebeat", "version":"7.4.2"}, "ecs":common.MapStr{"version":"1.1.0"}, "host":common.MapStr{"architecture":"x86_64", "containerized":false, "hostname":"CentOS-76-64-minimal", "id":"7d91e31c18374a518bb44ad58646dfeb", "name":"CentOS-76-64-minimal", "os":common.MapStr{"codename":"Core", "family":"redhat", "kernel":"3.10.0-1062.4.1.el7.x86_64", "name":"CentOS Linux", "platform":"centos", "version":"7 (Core)"}}, "input":common.MapStr{"type":"container"}, "log":"", "log.level":"INFO", "log.logger":"com.myservice.server.core.utils.user.ServiceDirectoryServiceImpl", "log.origin":common.MapStr{"file.line":622, "file.name":"ServiceDirectoryServiceImpl.java", "function":"loadUserByUsername"}, "message":"Login pkahr (philipp.kahr)", "process.thread.name":"http-bio-8080-exec-2", "service.name":"myservice", "stream":"stdout", "tags":[]string{"testing"}}, Private:file.State{Id:"", Finished:false, Fileinfo:(*os.fileStat)(0xc00022cc30), Source:"/var/lib/docker/containers/b35f79cc3aa44c98709c3a05dfee5fcc13c92d18b0cff35d821af9820e52c398/b35f79cc3aa44c98709c3a05dfee5fcc13c92d18b0cff35d821af9820e52c398-json.log", Offset:656232, Timestamp:time.Time{wall:0xbf67d03f87d679e9, ext:12500845494, loc:(*time.Location)(0x4de6580)}, TTL:-1, Type:"container", Meta:map[string]string(nil), FileStateOS:file.StateOS{Inode:0x18802e6, Device:0x902}}, TimeSeries:false}, Flags:0x1} (status=400): {"type":"mapper_parsing_exception","reason":"object mapping for [log] tried to parse field [log] as object, but found a concrete value"}

Any idea on how to use the ECS logging in combination with docker? I would love to have it, so I can jump from the APM to the container logs.

ECS formatter for JUL (java.util.logging)

What about the idea of implementing an ECS formatter for JUL (java.util.logging)?

Motivation

There are a bunch of old and very old applications that use JUL as their logging backend, e.g. Tomcat, the Tanuki Wrapper, and products based on them.
Sometimes it is not easy to switch logging frameworks, and sometimes it's not possible at all. But it would be nice to have logs in ECS format, and as a (relatively simple?) option we could have a class EcsFormatter extends java.util.logging.Formatter.

Open Issues

When I tried to implement it, I ran into a few issues:

  • there is no file-related meta-information (file name, line, etc.)
  • there appears to be no thread name, only a thread ID
  • there is no MDC or similar concept
  • JUL has slightly different log levels, so we need a mapping between JUL levels and the classical DEBUG/INFO/WARN/ERROR values. The mapping does not look pretty, but it works.

So it looks like an EcsFormatter class could produce ECS-compatible records, but some features would be missing, and it seems some of the AbstractEcsLoggingTest unit tests would fail.
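To illustrate the level-mapping point, here is one possible mapping from JUL levels to the classical names, based on JUL's integer level values. The thresholds are an assumption for this sketch, not the library's documented behavior:

```java
import java.util.logging.Level;

// Sketch of a JUL -> classical log level mapping using JUL's integer values.
// The thresholds chosen here are illustrative assumptions.
public class JulLevelMapper {

    public static String toClassicalLevel(Level level) {
        int value = level.intValue();
        if (value >= Level.SEVERE.intValue()) {         // SEVERE (1000) and above
            return "ERROR";
        } else if (value >= Level.WARNING.intValue()) { // WARNING (900)
            return "WARN";
        } else if (value >= Level.INFO.intValue()) {    // INFO (800)
            return "INFO";
        } else {                                        // CONFIG (700), FINE, FINER, FINEST
            return "DEBUG";
        }
    }

    public static void main(String[] args) {
        System.out.println(toClassicalLevel(Level.FINE));    // DEBUG
        System.out.println(toClassicalLevel(Level.WARNING)); // WARN
    }
}
```

Where to put CONFIG is one of the judgment calls the post alludes to; here it falls into DEBUG.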

Questions

  1. Does it make sense to implement EcsFormatter?
  2. If yes, how should we deal with the test cases from AbstractEcsLoggingTest and with unsupported features like MDC and file meta-information?

Add time zone support

Use the JVM's default timezone by default and let users configure a different time zone.
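A sketch of how such a configuration could look with java.time; the class and setter names are illustrative, not the library's API:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

// Illustrative sketch: default to the JVM's zone, allow a user override.
public class TimestampFormatter {

    private DateTimeFormatter formatter =
            DateTimeFormatter.ISO_OFFSET_DATE_TIME.withZone(ZoneId.systemDefault());

    // Hypothetical user-facing configuration hook, e.g. setTimeZone("Europe/Berlin")
    public void setTimeZone(String zoneId) {
        formatter = DateTimeFormatter.ISO_OFFSET_DATE_TIME.withZone(ZoneId.of(zoneId));
    }

    public String format(long epochMillis) {
        return formatter.format(Instant.ofEpochMilli(epochMillis));
    }

    public static void main(String[] args) {
        TimestampFormatter f = new TimestampFormatter();
        f.setTimeZone("UTC");
        System.out.println(f.format(0L));
    }
}
```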

Release version 0.1.3-SNAPSHOT please

I know this is hardly the place, but how can I ask for a new release with your change?

Could you release version 0.1.3-SNAPSHOT please? I would like to benefit from the fix where a null message no longer crashes the program, for a near-future deployment.

Set Content-Type Header when using EcsLayout with HTTPAppender in Log4J2

Hi,

I would like to use the "EcsLayout" together with the Log4j2 HTTP Appender to send ECS-formatted logs to a service of mine, which only accepts HTTP requests with Content-Type: application/json.

Previously, I used the Log4J2 provided "JsonLayout" layout in conjunction with the HTTP Appender, and according to the Log4J2 docs it was the layout that set the Content-Type header for requests (source).

When I use the "EcsLayout" layout, all requests are sent out with a Content-Type: text/plain.

I would very much appreciate it if a configuration property could be added so I could customize the Content-Type header while using the "EcsLayout" layout.

populate event.dataset to allow for dedicated partitions in "Log Rate" ML job

The Log Rate component of the Logs UI sets up an ML job to look for anomalies in log rate counts by event.dataset. For logs formatted via ecs-logging-java, that field is not set, so the logs show as "unknown" in the Log Rate UI and are grouped together with all other log sources using ecs-logging-java, which removes the out-of-the-box ability to see if one source has an unusual amount of logs.


Allow the newline to be optional when stack trace is structured as an array

Some log file observers have difficulty with newline characters: when a newline is encountered, it is presumed to start a separate log line. I'm having this problem with stackTraceAsArray="true" configured.

Each element of the array has a newline after it, e.g.:
"error.stack_trace":[
"stackelement1",
"\tat stackelement2",
"\tat stackelement3",
etc....]

Would it be possible to optionally have the lines in an array, but not with newlines after each array element?

E.g.:
"error.stack_trace":["stackelement1","\tat stackelement2", "\tat stackelement3", etc....]

Add ecs.version field

It would be great if there were a public constant or method exposing the ECS version that the library is compatible with.

Error handling

Hi,
Can someone explain to me the best way to handle errors?

If the remote host goes down, the port gets closed, or a firewall blocks the logs for some reason, I don't want to lose the logs.

I tried a few solutions in Log4j2, but none seems to work correctly.

Is it possible to log to a file if the socket appender fails?

Note: the Failover appender and the "errorRef" of AsyncAppenders do not seem to work.

I think error handling is an important aspect that has not been addressed yet in this project.

Thank you for any good information.

Add support for log4j < 1.2.15

We're currently using some methods that were introduced in 1.2.15, for example org.apache.log4j.spi.LoggingEvent#getTimeStamp and org.apache.log4j.spi.LoggingEvent#getProperties.

Let's see if we can find workarounds in order to support earlier versions.

Use nested JSON instead of dot strings

  • The document structure should be nested JSON objects. If you use Beats or Logstash, the nesting of JSON objects is done for you automatically. If you’re ingesting to Elasticsearch using the API, your fields must be nested objects, not strings containing dots.
  • See Why does ECS use a dot notation instead of an underline notation? for more details.

Some of us are using other tools to ingest into Elasticsearch (the Elasticsearch API, Fluentd, Kinesis streams). It would be nice if we did not have to do additional parsing to get these logs into the nested JSON format.

Include Process ID

This is the default format of a log line generated by a Spring Boot app using Logback:

2020-10-28 11:03:27.265 DEBUG 11672 --- [nio-8082-exec-1] c.b.estimator.domain.MarketEstimate : Value Estimation of Model : Toyota

The ECS format generated in JSON is as follows:

{
  "@timestamp": "2020-10-28T11:03:27.265Z",
  "log.level": "DEBUG",
  "message": "Value Estimation of Model : Toyota",
  "service.name": "car-value-estimator",
  "event.dataset": "car-value-estimator.log",
  "process.thread.name": "http-nio-8082-exec-2",
  "log.logger": "com.bvader.estimator.domain.MarketEstimate",
  "event.module": "car-value-estimator",
  "event.category": "log",
  "log.origin": {
    "file.name": "MarketEstimate.java",
    "function": "calculateEstimate",
    "file.line": 29
  }
}

As you can see, the process ID field (value 11672) is missing.

Java Json format not consistent with ECS

I've imported the ECS template into our cluster and ingested data using ecs-logging-java and ecs-logging-python. The latter formats JSON log statements that match the ECS structure, but the Java library does not.

Here is a snippet:
...
"log.level": "DEBUG",
...
"log.origin": {
  "file.line": 655,
  "file.name": "SpringApplication.java",
  "function": "logStartupProfileInfo"
},

This should look like:
...
"log": {
  "level": "DEBUG",
  "origin": {
    "file": {
      "line": 655,
      "name": "SpringApplication.java"
    },
    "function": "logStartupProfileInfo"
  }
},
...

Formatting of stacktraces

Currently, the exception stack trace is just part of the message. There are some advantages and disadvantages to this.

The advantage is that it integrates nicely in the logs UI:

There's no additional configuration required to see a stack trace which is natively rendered and looks just the same as in a regular log file.

The disadvantage is that the whole stack trace is squashed into a single JSON line, which makes it hard for humans to read.

Alternatives:

  • Rely on tools such as https://github.com/koenbollen/jl to view JSON logs and live with the fact that viewing them in plain does not give you a great experience when it comes to stack traces
  • Rely on plain-text logs which are logged alongside the JSON logs
  • Log the stack trace lines as JSON array elements
    • I'll have to check if Filebeat can cope with multi-line JSON but it probably can
    • Would it be instead of or in addition to printing the stacktrace in the message field?
      • If it's instead of, how do we make the Logs UI aware of it?
      • If it's in addition to, it means some overhead on disk. We could provide example Filebeat configurations on how to ignore the additional stacktrace array.
  • Render the exception in a more structured way, possibly in a format similar to https://github.com/elastic/apm-server/blob/master/docs/spec/errors/error.json#L52
    • Each stack trace element could be rendered on a new line so that it's easier for humans to read.
    • Similar set of questions (in addition to or instead of stacktrace in message?)
      • If it's instead of, how do we make sure the log UI re-constructs the stacktrace in a way that's native to the corresponding programming language?

cc @pgomulka @ruflin @webmat @weltenwort

related: elastic/ecs#154

Create a setter for "topLevelLabels"

Hi.
I want to set custom topLevelLabels in co.elastic.logging.logback.EcsEncoder. (E.g. "user.name".)
But there is no setter for this variable.
Could you please add a new setter for this variable? Thanks.
Greetings Björn
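A sketch of the requested configuration hook. The class here only models the setter shape; EcsEncoder's actual internals may differ. In Logback, an addXxx method is the usual way for the Joran configurator to bind repeated XML elements:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a configurable top-level-labels list; not EcsEncoder's
// real implementation. With Logback's Joran configurator, a method named
// addTopLevelLabel would typically bind repeated <topLevelLabel> elements.
public class TopLevelLabelsConfig {

    private final List<String> topLevelLabels =
            new ArrayList<>(List.of("trace.id", "transaction.id"));

    public void addTopLevelLabel(String label) {
        topLevelLabels.add(label);
    }

    public boolean isTopLevel(String key) {
        return topLevelLabels.contains(key);
    }

    public static void main(String[] args) {
        TopLevelLabelsConfig config = new TopLevelLabelsConfig();
        config.addTopLevelLabel("user.name");
        System.out.println(config.isTopLevel("user.name")); // true
    }
}
```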

Support structured logging with Logback

As a Logback user, I'd like the ability to create structured logs as in https://github.com/elastic/ecs-logging-java/tree/master/log4j2-ecs-layout#structured-logging

Currently the community relies on net.logstash.logback.encoder.LogstashEncoder to generate structured log statements, which leverages Markers. Although not ideal, as Markers are intended for filtering (http://logback.qos.ch/manual/filters.html), I think it would be useful if the EcsEncoder could support this scenario without requiring an additional dependency.

Enhancement for Compact JSON

The current implementation seems to yield pretty-printed output of the ECS JSON. Would it be possible to optionally configure the layout to output compact JSON?

Perhaps it would be valuable to add features analogous in capability and intent to the "compact", "eventEol", and "endOfLine" settings available on the stock JSONLayout.

See https://logging.apache.org/log4j/2.x/manual/layouts.html where pretty print vs compact format is discussed.

ecs logging can throw an NPE

ecs-logging can throw an NPE, as per stack trace below.

Caused by: java.lang.NullPointerException
        at co.elastic.logging.JsonUtils.quoteAsString(JsonUtils.java:63)
        at co.elastic.logging.EcsJsonSerializer.serializeException(EcsJsonSerializer.java:163)
        at co.elastic.logging.log4j.EcsLayout.format(EcsLayout.java:62)
        at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:310)
        at org.apache.log4j.DailyRollingFileAppender.subAppend(DailyRollingFileAppender.java:369)
        at org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
        at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
        at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
        at org.apache.log4j.Category.callAppenders(Category.java:206)
        at org.apache.log4j.Category.forcedLog(Category.java:391)
        at org.apache.log4j.Category.error(Category.java:322)
        at private code

AbstractEcsLoggingTest should log keys as nested objects, not with dotted key names

I've noticed AbstractEcsLoggingTest checks for keys like process.thread.name, while I think the key should be something like "process": { "thread": { "name": "co.elastic.logging.logback.EcsEncoderTest" } } instead of something like

 {"@timestamp":"2019-11-18T13:42:33.333Z","log.level":"DEBUG","message":"test","service.name":"test","process.thread.name":"main","log.logger":"co.elastic.logging.logback.EcsEncoderTest","log.origin":{"file.name":"AbstractEcsEncoderTest.java","function":"debug","file.line":47}}

Format logs in nested JSON

Hi

We are using Logstash in our log ingestion pipeline, and we found out that logs from this library are not in nested JSON format, which can cause issues in Logstash processing; correctly nested JSON formatting was also highly recommended by our Elasticsearch support (it is what the Beats produce).

An example log :
{"@timestamp":"2020-07-06T09:07:25.481Z", "log.level": "INFO", "message":"Command line argument: -Dcatalina.base=/opt/tomcat", "service.name":"tomcat","event.dataset":"catalina.out","process.thread.name":"main","log.logger":"org.apache.catalina.startup.VersionLoggerListener"}

would become:
{"@timestamp":"2020-07-06T09:07:25.481Z","log":{"level":"INFO","logger":"org.apache.catalina.startup.VersionLoggerListener"},"message":"Command line argument: -Dcatalina.base=/opt/tomcat","service":{"name":"tomcat"},"event":{"dataset":"catalina.out"},"process":{"thread":{"name":"main"}}}

Can this library's output be properly nested?
Thanks,
Alexandre
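For illustration, here is a small helper (not part of the library) that expands dotted keys into the nested structure the post asks for. Keys that appear both as a scalar and as an object prefix are not handled in this sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative helper: expand dotted keys such as "process.thread.name"
// into nested maps. Not part of ecs-logging-java.
public class DotKeyNester {

    @SuppressWarnings("unchecked")
    public static Map<String, Object> nest(Map<String, Object> flat) {
        Map<String, Object> root = new LinkedHashMap<>();
        for (Map.Entry<String, Object> entry : flat.entrySet()) {
            String[] parts = entry.getKey().split("\\.");
            Map<String, Object> current = root;
            // walk/create intermediate objects for all but the last segment
            for (int i = 0; i < parts.length - 1; i++) {
                current = (Map<String, Object>) current.computeIfAbsent(
                        parts[i], k -> new LinkedHashMap<String, Object>());
            }
            current.put(parts[parts.length - 1], entry.getValue());
        }
        return root;
    }

    public static void main(String[] args) {
        Map<String, Object> flat = new LinkedHashMap<>();
        flat.put("log.level", "INFO");
        flat.put("process.thread.name", "main");
        System.out.println(nest(flat)); // {log={level=INFO}, process={thread={name=main}}}
    }
}
```

Note that "@timestamp" contains no dots, so it passes through unchanged, matching the expected output above.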

Thread crash when parameter is a null value

	logger.info(markerCLM, new StringMapMessage()
			.with("message", "Test message")
			.with("event.action", null)
			.with("event.category", "General"));

This will crash. It is not supposed to happen, but sometimes a parameter can be null unexpectedly.
It would be sad for a thread to crash because of a null value.

Mostly, I think it crashes because the StringMapMessage class does not allow null as a value.

A short-term solution would be to wrap the value in a ParameterizedMessage, as in:
.with("event.action", new ParameterizedMessage(null)), which returns the string "null".

Should we use another type rather than StringMapMessage? (With Java 8.)

I thought 0.1.3 would fix it, but it looks like it was something else.
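As a workaround sketch (not the library's API), null values could be replaced with the string "null" before they reach StringMapMessage, which rejects nulls:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative workaround: replace null values with the text "null" so that
// a null-rejecting message type (such as StringMapMessage) never sees them.
public class NullSafeValues {

    public static Map<String, String> withNullsAsText(Map<String, String> fields) {
        Map<String, String> safe = new LinkedHashMap<>();
        for (Map.Entry<String, String> entry : fields.entrySet()) {
            safe.put(entry.getKey(), entry.getValue() == null ? "null" : entry.getValue());
        }
        return safe;
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("event.action", null); // would crash StringMapMessage directly
        fields.put("event.category", "General");
        System.out.println(withNullsAsText(fields)); // {event.action=null, event.category=General}
    }
}
```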

De-dot labels

Labels are supposed to be simple key/value pairs. If they contain dots, it creates a nested object structure. So for all non-top-level labels, the dots should be replaced with underscores.
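The rule above amounts to a one-line transformation; a sketch with illustrative names:

```java
// Illustrative sketch of the de-dotting rule: for labels that are not
// top-level fields, replace dots with underscores so labels stay flat
// key/value pairs rather than nested objects.
public class LabelDeDotter {

    public static String deDot(String labelKey) {
        return labelKey.replace('.', '_');
    }

    public static void main(String[] args) {
        System.out.println(deDot("my.custom.label")); // my_custom_label
    }
}
```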

No trace of Tomcat startup in logs using log4j2 and ecs-logging-java

I work in a company where several custom middleware Docker images are built for our developers (httpd, Tomcat, PHP-FPM, MySQL, etc.). Those images are used to build and run app images in OpenShift. Proper configuration has been made so our middleware images output their logs to stdout as JSON.

Recently, I decided to move to ECS to make it easier to read logs in Kibana, and your library seems like a perfect choice for Java applications.

First, I replaced log4j/Graylog with Log4j2 and followed the steps below to get JSON logs before trying my luck with ecs-logging-java:

setenv.sh

CLASSPATH=$CATALINA_HOME/lib/log4j-jul-2.13.2.jar:$CATALINA_HOME/lib/log4j-api-2.13.2.jar:$CATALINA_HOME/lib/log4j-core-2.13.2.jar:$CATALINA_HOME/lib/log4j-appserver-2.13.2.jar:$CATALINA_HOME/lib/jackson-annotations-2.11.0.jar:$CATALINA_HOME/lib/jackson-databind-2.11.0.jar:$CATALINA_HOME/lib/jackson-core-2.11.0.jar
LOGGING_CONFIG="-Dlog4j.configurationFile=$CATALINA_BASE/conf/log4j2.xml"
LOGGING_MANAGER="-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager"

log4j2.xml

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <JSONLayout compact="true" eventEol="true"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>

Access logs are managed through a valve in server.xml and are not controlled by log4j2.

When starting Tomcat, logs are properly written to stdout. I think org.apache.logging.log4j.jul.LogManager has something to do with it, as it makes Tomcat's legacy java.util.logging mechanism work with Log4j.

{"instant":{"epochSecond":1588848888,"nanoOfSecond":655000000},"thread":"main","level":"INFO","loggerName":"org.apache.catalina.startup.VersionLoggerListener","message":"Server version name:   Apache Tomcat/8.5.51","endOfBatch":false,"loggerFqcn":"org.apache.logging.log4j.jul.ApiLogger","threadId":1,"threadPriority":5}
(…)
{"instant":{"epochSecond":1588848901,"nanoOfSecond":455000000},"thread":"main","level":"INFO","loggerName":"org.apache.catalina.startup.Catalina","message":"Server startup in 10306 ms","endOfBatch":false,"loggerFqcn":"org.apache.logging.log4j.jul.ApiLogger","threadId":1,"threadPriority":5}

Nice, but we need ECS fields. Custom fields can be added in JsonLayout...

<JsonLayout>
    <KeyValuePair key="additionalField1" value="constant value"/>
    <KeyValuePair key="additionalField2" value="$${ctx:key}"/>
</JsonLayout>

...but I don't know how to replace existing ones. Let's give ecs-logging-java a shot!

setenv.sh

CLASSPATH=$CATALINA_HOME/lib/log4j-jul-2.13.2.jar:$CATALINA_HOME/lib/log4j-api-2.13.2.jar:$CATALINA_HOME/lib/log4j-core-2.13.2.jar:$CATALINA_HOME/lib/log4j-appserver-2.13.2.jar:$CATALINA_HOME/lib/jackson-annotations-2.11.0.jar:$CATALINA_HOME/lib/jackson-databind-2.11.0.jar:$CATALINA_HOME/lib/jackson-core-2.11.0.jar:$CATALINA_HOME/lib/log4j2-ecs-layout-0.3.0.jar

log4j2.xml

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <EcsLayout serviceName="Tomcat" eventDataset="catalina.out" stackTraceAsArray="true"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="INFO">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>

But now, no more messages pertaining to Tomcat's startup are shown in the logs. Has the bridge with java.util.logging been lost?

I did manage to get logs by raising the verbosity level to DEBUG...

<Configuration status="DEBUG">
(...)
<Root level="DEBUG">

...but the messages I get are restricted to Log4j2's inner workings (and are not even formatted as JSON):

2020-05-07 17:33:15,151 main DEBUG Apache Log4j Core 2.13.2 initializing configuration XmlConfiguration[location=/opt/tomcat/conf/log4j2.xml]
(…)
2020-05-07 17:33:18,554 main DEBUG EcsLayout$Builder(serviceName="Tomcat", eventDataset="catalina.out", includeMarkers="null", stackTraceAsArray="true", ={}, includeOrigin="null", charset="null", footerSerializer=null, headerSerializer=null, Configuration(/opt/tomcat/conf/log4j2.xml), footer="null", header="null")

I noticed a module was developed specifically for java.util.logging in ecs-logging-java, but I do not wish to revert to using it, as I believe Log4j2 is the way to go now!

Furthermore, there's a screenshot showing traces of a Tomcat startup on your home page, so maybe I have missed something somewhere, or a piece of ecs-logging-java has been broken?

Thanks :)

Released version of JBoss Formatter

I looked at the current master and found a fully implemented JBoss formatter.
Is there any chance of a released version soon? It would be very helpful.

Configuration for formatting Throwable stack traces

I'd like to see more options for how stack traces are formatted in the ECS logs to help in reducing the size of logs.

Some examples:

  1. Maximum depth
  2. Include/exclude stack frames based on a pattern
  3. Shorten class names by abbreviating package names to fit within a specified length
  4. Root cause first or last

Ideally a configuration option could override the formatting of the stack trace through an interface implementation provided by the application. That implementation could return a String[] or Stream<String> of the formatted stack frames which are then added to the ECS log either as a JSON string or an array of strings.

I'd be happy to work on a PR for this if there is interest in it being included.
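A sketch of the proposed interface with a maximum-depth example implementation; all names here are illustrative, following the shape described in the post, not an existing API:

```java
import java.util.Arrays;

// Illustrative sketch of an application-provided stack trace formatter.
// The interface and factory names are assumptions, not an existing API.
public class StackTraceFormatterSketch {

    public interface StackTraceFormatter {
        // Returns the already-formatted frames, to be serialized as a JSON
        // string or an array of strings.
        String[] format(Throwable t);
    }

    // Example implementation: cap the stack trace at a maximum depth.
    public static StackTraceFormatter maxDepth(int depth) {
        return t -> {
            StackTraceElement[] frames = t.getStackTrace();
            int limit = Math.min(depth, frames.length);
            String[] out = new String[limit];
            for (int i = 0; i < limit; i++) {
                out[i] = "\tat " + frames[i];
            }
            return out;
        };
    }

    public static void main(String[] args) {
        String[] frames = maxDepth(3).format(new RuntimeException("boom"));
        System.out.println(Arrays.toString(frames));
    }
}
```

The other requested options (frame include/exclude patterns, package abbreviation, root-cause ordering) would fit the same interface as alternative implementations.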

Nested JSON - De_dot plugin equivalent in log4j

Hello,
"Otherwise, the user doesn't know whether to access a field via doc["foo.bar"] or via doc["foo"]["bar"]. We don't want users to have knowledge about which fields are nested vs dotted as this is an implementation detail that can vary with different ecs-logging implementations and may even change for the same implementation."

This is a reference to issue #51 which is closed.

If we want to log directly into the ELK pipeline, and we can live with the fact that all dots are replaced with nested JSON objects (no exceptions), then, under those conditions, is it possible to send nested JSON?

I would like to avoid Filebeat for now, and the de_dot filter plugin is said to be CPU-intensive.
See: https://www.elastic.co/guide/en/logstash/current/plugins-filters-de_dot.html

Thank you for considering this option.
