logback-more-appenders's Issues

%line data can't be retrieved

<appender name="FLUENT_SYNC"
          class="ch.qos.logback.more.appenders.DataFluentAppender">

    <!-- Tag for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
    <tag>debug</tag>
    <!-- [Optional] Label for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
    <!-- Host name/address and port number where Fluentd is placed -->
    <port>32429</port>
    <!-- [Optional] Additional fields(Pairs of key: value) -->
    <additionalField>
        <key>foo</key>
        <value>bar</value>
    </additionalField>
    <additionalField>
        <key>foo2</key>
        <value>bar2</value>
    </additionalField>
    <!--  [Optional] If true, Map Marker is expanded instead of nesting in the marker name -->
    <flattenMapMarker>false</flattenMapMarker>

    <encoder>
        <pattern><![CDATA[%date{HH:mm:ss.SSS} [%thread] %-5level %logger{15}#%line %message]]></pattern>
    </encoder>
</appender>

Fluent not working

Hi sndyuk,
I am new to logging. I followed your tutorial and am testing the FLUENT and FLUENT_TEXT appenders, but they are not working. Here are the details. Can you help me? Thank you.

I have a td-agent running locally which accepts all forwarded logs.

<source>
  @type forward
</source>
<match **>
  @type stdout
</match>

I created a main class that only logs one line:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Hello world!
 */
public class App {
    static final Logger logger = LoggerFactory.getLogger(App.class);

    public static void main(String[] args) {
        logger.info("Hello World!");

        System.out.println("Hello World!");
    }
}

My logback.xml is:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} {%thread} %-5level %m%n</pattern>
        </encoder>
    </appender>

    <appender name="FLUENT_SYNC" class="ch.qos.logback.more.appenders.DataFluentAppender">
        <!-- Tag for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
        <tag>debug</tag>
        <!-- [Optional] Label for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
        <label>logback</label>
        <!-- Host name/address and port number where Fluentd is placed -->
        <remoteHost>localhost</remoteHost>
        <port>24224</port>
        <!-- Additional fields (pairs of key: value) -->
        <additionalField>
            <key>foo</key>
            <value>bar</value>
        </additionalField>
        <additionalField>
            <key>foo2</key>
            <value>bar2</value>
        </additionalField>
    </appender>

    <appender name="FLUENT" class="ch.qos.logback.classic.AsyncAppender">
        <!-- Max queue size of logs waiting to be sent (when it reaches the max size, logs will be discarded). -->
        <queueSize>999</queueSize>
        <appender-ref ref="FLUENT_SYNC" />
    </appender>

    <appender name="FLUENT_TEXT_SYNC" class="ch.qos.logback.more.appenders.FluentLogbackAppender">
        <!-- Tag for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
        <tag>debug</tag>
        <!-- [Optional] Label for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
        <label>logback</label>
        <!-- Host name/address and port number where Fluentd is placed -->
        <remoteHost>localhost</remoteHost>
        <port>24224</port>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern><![CDATA[%date{HH:mm:ss.SSS} [%thread] %-5level %logger{15}#%line %msg]]></pattern>
        </layout>
    </appender>

    <appender name="FLUENT_TEXT" class="ch.qos.logback.classic.AsyncAppender">
        <!-- Max queue size of logs waiting to be sent (when it reaches the max size, logs will be discarded). -->
        <queueSize>999</queueSize>
        <appender-ref ref="FLUENT_TEXT_SYNC" />
    </appender>

    <!-- Logger for a specific package -->
    <!-- Level: debug, info, error; debug includes everything. -->
    <logger name="be.landc.mydemo" level="info">
        <appender-ref ref="FLUENT" />
        <appender-ref ref="FLUENT_TEXT" />
    </logger>

    <root level="debug">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

Trouble getting millisecond timestamps with DataFluentAppender

Hi,

I'm having trouble getting logs with millisecond resolution from DataFluentAppender.

My logback config:

...
<appender name="FLUENT_SYNC" class="ch.qos.logback.more.appenders.DataFluentAppender">
        <tag>mytag</tag>
        <label>mylabel</label>
        <remoteHost>localhost</remoteHost>
        <port>24224</port>
</appender>

      <appender name="FLUENT" class="ch.qos.logback.classic.AsyncAppender">
        <queueSize>_queueSize_</queueSize>
        <appender-ref ref="FLUENT_SYNC"/>
      </appender>


    <logger name="my.packages" level="DEBUG">
      <appender-ref ref="default" />
      <appender-ref ref="FLUENT"/>
    </logger>

My td-agent conf:

<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

<match **>
  @type elasticsearch
  logstash_format true
  host _es_remote_host_
  port _es_remote_port_
  logstash_prefix ${tag}
  time_key_format %Y-%m-%dT%H:%M:%S.%NZ
</match>

All the logs I get from my app have .000 in the timestamp, which is very annoying.

Could you give me some help?

Best regards
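
A possible workaround, not verified here: the FluencyLogbackAppender configuration shown later on this page documents a useEventTime option for sub-second resolution of the log event date-time, while the fluent-logger protocol used by DataFluentAppender carries whole seconds only. A minimal sketch keeping your tag and host settings (note FluencyLogbackAppender needs the Fluency library on the classpath instead of fluent-logger):

    <appender name="FLUENT_SYNC" class="ch.qos.logback.more.appenders.FluencyLogbackAppender">
        <tag>mytag</tag>
        <remoteHost>localhost</remoteHost>
        <port>24224</port>
        <!-- EventTime gives sub-second resolution of the log event date-time -->
        <useEventTime>true</useEventTime>
    </appender>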

Cloudwatch-v2 appender and localstack

Is there any way to override the AWS endpoint URL (169.254.169.254) when using LocalStack locally?
We have tried setting environment variables, but it appears it is still trying to log to real AWS instead of LocalStack.

AWS_ACCESS_KEY_ID=access_key_id
AWS_SECRET_ACCESS_KEY=secret_access_key
AWS_SESSION_TOKEN=session_token
AWS_DEFAULT_REGION=us-east-1
AWS_REGION=us-east-1
AWS_ENDPOINT_URL=http://localstack:4566
AWS_ENDPOINT_URL_CLOUDWATCH_LOGS=http://localstack:4566
Exception in thread "Timer-1" software.amazon.awssdk.services.cloudwatchlogs.model.UnrecognizedClientException: The security token included in the request is invalid. (Service: CloudWatchLogs, Status Code: 400, Request ID: 53c25469-8c07-4a17-b588-667376eca83e)
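
For reference, and without confirming whether the appender exposes a hook for this: AWS SDK v2 clients accept an explicit endpoint on the client builder, which is the mechanism LocalStack setups usually rely on. A hypothetical direct-client sketch, not a documented appender option:

    import java.net.URI;
    import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
    import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.cloudwatchlogs.CloudWatchLogsClient;

    // Plain AWS SDK v2 usage, shown only to illustrate what an endpoint override
    // looks like; the appender would need to expose an equivalent setting.
    CloudWatchLogsClient client = CloudWatchLogsClient.builder()
            .region(Region.US_EAST_1)
            .endpointOverride(URI.create("http://localstack:4566"))
            .credentialsProvider(StaticCredentialsProvider.create(
                    AwsBasicCredentials.create("access_key_id", "secret_access_key")))
            .build();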

Issue while using this API

This API never sends traffic to Fluentd's port 24224. I checked that both UDP and TCP are open. Is there a way to debug the issue?
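
One way to narrow this down, sketched under the assumption that the fluent-logger dependency is on the classpath: take logback out of the picture and emit a record with fluent-logger directly. If this never reaches Fluentd either, the problem is on the network or daemon side rather than in the appender. Note that the forward input receives events over TCP; UDP is only used for heartbeats.

    import java.util.HashMap;
    import java.util.Map;
    import org.fluentd.logger.FluentLogger;

    public class FluentdPing {
        public static void main(String[] args) {
            // Tag prefix "debug", host and port matching the Fluentd forward input
            FluentLogger logger = FluentLogger.getLogger("debug", "localhost", 24224);
            Map<String, Object> data = new HashMap<>();
            data.put("ping", "pong");
            logger.log("test", data); // emitted with tag "debug.test"
            logger.close();           // flush before the JVM exits
        }
    }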

How to pass credentials to connect to a remote FluentD aggregator

I have a FluentD aggregator running in AWS with a configuration like this:

<source>
  @type       forward
  port        24224
  <security>
     self_hostname fluentd-aggregator
     shared_key  test_key
   </security>
</source>

I didn't find tags for passing self_hostname and shared_key in logback-spring.xml.
How do I pass these credentials from DataFluentAppender?

CloudWatch appenders duplicate logs

Hello,
I've found a bug affecting CloudWatchIntervalAppender.
Consider the loop done by the append() method:

long size = 0;
int putIndex = 0;
for (int i = 0, l = events.size(); i < l; i++) {
    InputLogEvent event = events.get(i);
    String message = event.getMessage();
    size += (message.length() * 4); // Approximately 4 times of the length >= bytes size of utf8
    if (size > 1048576 || i + 1 == l) {
        PutLogEventsRequest request;
        if (i + 1 == l) {
            request = new PutLogEventsRequest(logGroupName, streamName, events.subList(putIndex, i + 1));
        } else {
            request = new PutLogEventsRequest(logGroupName, streamName, events.subList(putIndex, i));
            i -= 1;
        }
        size = 0;
        putIndex = i;

        // .... redacted ...
    }
}

In general, subList(fromIndex, toIndex) returns a list view spanning from fromIndex inclusive to toIndex exclusive.

Consider the scenario when the loop has accumulated enough data but it has not gathered all the events, so i + 1 < l.
The control flow proceeds to the else branch, so subList(putIndex, i) is invoked and the event index i, say 10, is decreased, so it is set to 9.

After that, putIndex is set to the (updated) value of i, 9, which is incorrect. The next chunk will begin at the latest putIndex (9), which had already been sent as the last element of the previous chunk (subList(putIndex, 10)).

I fixed the problem simply by assigning putIndex before decrementing i:

if (size > 1048576 || i + 1 == l) {
    PutLogEventsRequest request;
    if (i + 1 == l) {
        request = new PutLogEventsRequest(logGroupName, streamName, events.subList(putIndex, i + 1));
    } else {
        request = new PutLogEventsRequest(logGroupName, streamName, events.subList(putIndex, i));
        // Update putIndex before decreasing the event index
        putIndex = i;
        i -= 1;
    }
    size = 0;
    
    // ....redacted....
}

I didn't try CloudWatchLogbackAppenderV2 but it seems to have the same issue.

Thanks

Unable to apply PatternLayout to DataFluentAppender

Hi,

I tried to send the hostname to Fluentd using DataFluentAppender. My logback configuration is as follows:

    <appender name="FLUENT" class="ch.qos.logback.more.appenders.DataFluentAppender">
        <tag>my-tag</tag>
        <label>my-label</label>
        <remoteHost>my-host</remoteHost>
        <port>24224</port>
        <maxQueueSize>20</maxQueueSize>
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>%date{HH:mm:ss.SSS} [%property{HOSTNAME}] [%thread] %-5level %logger{15}#%line %msg</pattern>
        </layout>
    </appender>

However, it doesn't work.
As far as I can tell from the DataFluentAppender implementation, there seem to be two problems.

Is there any alternative solution to add hostname?
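
One thing worth trying, going by the other configurations on this page rather than any documented guarantee: the Fluentd appenders shown elsewhere here take an <encoder> with a pattern, while <layout> appears only on FluentLogbackAppender. A hedged variant:

    <appender name="FLUENT" class="ch.qos.logback.more.appenders.DataFluentAppender">
        <tag>my-tag</tag>
        <label>my-label</label>
        <remoteHost>my-host</remoteHost>
        <port>24224</port>
        <maxQueueSize>20</maxQueueSize>
        <!-- encoder instead of layout, as in the other examples on this page -->
        <encoder>
            <pattern>%date{HH:mm:ss.SSS} [%property{HOSTNAME}] [%thread] %-5level %logger{15}#%line %msg</pattern>
        </encoder>
    </appender>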

Allow to truncate the DataFluentAppender throwable field

Hi,

Sometimes stack traces can be really, really big, and when a problem (for instance a network issue) lasts for several minutes, the volume of stack trace logs collected can be too much and can crash the fluentd agent.

It would be nice to have an option to disable the throwable field, or to truncate it to only a few lines. For instance, the SyslogAppender allows controlling how throwables are handled.
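
As a stopgap while no such option exists, logback's own pattern converters can bound how much of a throwable is rendered into the encoded message: %ex{short} keeps the first line, %ex{N} keeps N stack frames, and %nopex suppresses the default trailing stack trace. A sketch; note this only affects the encoded message, not the separate throwable field this issue is about:

    <encoder>
        <!-- %ex{5} limits the stack trace to five frames; %nopex stops logback
             from appending the full throwable after the pattern output -->
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{15} %msg %ex{5}%nopex%n</pattern>
    </encoder>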

[Improvement Request] [FluentdAppenderBase] Add the ability to remove fields

Currently, FluentdAppenderBase sends the following fields to the server:

  • msg
  • message
  • logger
  • thread
  • level
  • marker
  • caller
  • throwable

It follows that some people (e.g. myself) would like to be able to customize which fields are sent (by default, all of them could remain enabled).

Is this something of interest to anyone else?

Why do I get just three log entries, but more when using ConsoleAppender?

When using FluentLogbackAppender, I get just three log entries:

2018-02-06 04:30:24.000000000 -0500 fmshiot.bootstrap: {"msg":"[192.168.39.20] [port_IS_UNDEFINED] [INFO] [bootstrap] [] [] [] [] [8740] [main] [o.s.c.a.AnnotationConfigApplicationContext] [Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@55b0dcab: startup date [Tue Feb 06 17:30:24 CST 2018]; root of context hierarchy] []"}
2018-02-06 04:30:25.000000000 -0500 fmshiot.bootstrap: {"msg":"[192.168.39.20] [port_IS_UNDEFINED] [INFO] [bootstrap] [] [] [] [] [8740] [main] [o.s.b.f.a.AutowiredAnnotationBeanPostProcessor] [JSR-330 'javax.inject.Inject' annotation found and supported for autowiring] []"}
2018-02-06 04:30:25.000000000 -0500 fmshiot.bootstrap: {"msg":"[192.168.39.20] [port_IS_UNDEFINED] [INFO] [bootstrap] [] [] [] [] [8740] [main] [o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker] [Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$798dd570] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)] []"}

but I can get more when using ConsoleAppender, as follows:

2018-02-06 17:30:24.789 [main] INFO o.s.c.a.AnnotationConfigApplicationContext - Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@55b0dcab: startup date [Tue Feb 06 17:30:24 CST 2018]; root of context hierarchy
2018-02-06 17:30:25.127 [main] INFO o.s.b.f.a.AutowiredAnnotationBeanPostProcessor - JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2018-02-06 17:30:25.188 [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$798dd570] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

[Spring Boot ASCII art banner]
:: Spring Boot ::        (v1.5.7.RELEASE)

2018-02-06 17:30:25.696 [main] INFO c.f.f.registry.RegistryApplication - No active profile set, falling back to default profiles: default
2018-02-06 17:30:25.713 [main] INFO o.s.b.c.e.AnnotationConfigEmbeddedWebApplicationContext - Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@58c34bb3: startup date [Tue Feb 06 17:30:25 CST 2018]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext@55b0dcab
2018-02-06 17:30:26.793 [main] INFO o.s.cloud.context.scope.GenericScope - BeanFactory id=737bdab9-1582-3bc0-9e4b-54df761c6ee9
2018-02-06 17:30:26.816 [main] INFO o.s.b.f.a.AutowiredAnnotationBeanPostProcessor - JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2018-02-06 17:30:26.877 [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.cloud.netflix.metrics.MetricsInterceptorConfiguration$MetricsRestTemplateConfiguration' of type [org.springframework.cloud.netflix.metrics.MetricsInterceptorConfiguration$MetricsRestTemplateConfiguration$$EnhancerBySpringCGLIB$$8fa078b4] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-02-06 17:30:26.888 [main] INFO o.s.c.s.PostProcessorRegistrationDelegate$BeanPostProcessorChecker - Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$798dd570] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2018-02-06 17:30:27.224 [main] INFO o.s.b.c.e.t.TomcatEmbeddedServletContainer - Tomcat initialized with port(s): 9080 (http)
2018-02-06 17:30:27.237 [main] INFO o.a.catalina.core.StandardService - Starting service [Tomcat]
2018-02-06 17:30:27.238 [main] INFO o.a.catalina.core.StandardEngine - Starting Servlet Engine: Apache Tomcat/8.5.20
2018-02-06 17:30:27.384 [localhost-startStop-1] INFO o.a.c.c.C.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
2018-02-06 17:30:27.384 [localhost-startStop-1] INFO o.s.web.context.ContextLoader - Root WebApplicationContext: initialization completed in 1671 ms
2018-02-06 17:30:28.291 [localhost-startStop-1] INFO o.s.b.w.s.FilterRegistrationBean - Mapping filter: 'metricsFilter' to: [/]
2018-02-06 17:30:28.292 [localhost-startStop-1] INFO o.s.b.w.s.FilterRegistrationBean - Mapping filter: 'characterEncodingFilter' to: [/]
2018-02-06 17:30:28.292 [localhost-startStop-1] INFO o.s.b.w.s.FilterRegistrationBean - Mapping filter: 'hiddenHttpMethodFilter' to: [/]
2018-02-06 17:30:28.292 [localhost-startStop-1] INFO o.s.b.w.s.FilterRegistrationBean - Mapping filter: 'httpPutFormContentFilter' to: [/]
2018-02-06 17:30:28.292 [localhost-startStop-1] INFO o.s.b.w.s.FilterRegistrationBean - Mapping filter: 'requestContextFilter' to: [/]
2018-02-06 17:30:28.292 [localhost-startStop-1] INFO o.s.b.w.s.FilterRegistrationBean - Mapping filter: 'webRequestTraceFilter' to: [/]
2018-02-06 17:30:28.292 [localhost-startStop-1] INFO o.s.b.w.s.FilterRegistrationBean - Mapping filter: 'servletContainer' to urls: [/eureka/]
2018-02-06 17:30:28.293 [localhost-startStop-1] INFO o.s.b.w.s.FilterRegistrationBean - Mapping filter: 'applicationContextIdFilter' to: [/]
2018-02-06 17:30:28.293 [localhost-startStop-1] INFO o.s.b.w.s.ServletRegistrationBean - Mapping servlet: 'dispatcherServlet' to [/]
2018-02-06 17:30:28.381 [localhost-startStop-1] INFO c.s.j.s.i.a.WebApplicationImpl - Initiating Jersey application, version 'Jersey: 1.19.1 03/11/2016 02:08 PM'
2018-02-06 17:30:28.451 [localhost-startStop-1] INFO c.n.d.p.DiscoveryJerseyProvider - Using JSON encoding codec LegacyJacksonJson
2018-02-06 17:30:28.453 [localhost-startStop-1] INFO c.n.d.p.DiscoveryJerseyProvider - Using JSON decoding codec LegacyJacksonJson
2018-02-06 17:30:28.662 [localhost-startStop-1] INFO c.n.d.p.DiscoveryJerseyProvider - Using XML encoding codec XStreamXml
..........

Fluency logger should support the same fields as FluentLogger

Currently the FluentLogger appender pushes logging data into multiple fluentd fields such as "message", "caller", "marker", "throwable", MDC properties and custom fields. The new fluency logger only supports a single field "msg" which makes it impossible to swap out FluentLogger with Fluency.

Connection refused

Hi, I have set up an EFK Docker stack. I have verified this works correctly by redirecting the Docker console output of my Spring Boot app to fluentd with no issues.

I have modified the Spring Boot app logging configuration as follows:

build.gradle

            compile 'org.fluentd:fluent-logger:0.3.2'
            compile 'com.sndyuk:logback-more-appenders:1.4.1'

fluent.conf

<source>
type forward
port 24224
bind 0.0.0.0
</source>

<match *.*>
@type elasticsearch
host elasticsearch
logstash_format true
flush_interval 10s
</match>

logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <appender name="FLUENT" class="ch.qos.logback.more.appenders.DataFluentAppender">
        <remoteHost>localhost</remoteHost>
        <port>24224</port>
    </appender>
    <logger name="com.acme.joc" level="debug" additivity="false">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FLUENT"/>
    </logger>
</configuration>

And I got the following exception.

console output

...
2017-01-09 22:50:29.941 ERROR 6 --- [           main] o.fluentd.logger.sender.RawSocketSender  : org.fluentd.logger.sender.RawSocketSender

java.net.ConnectException: Connection refused (Connection refused)
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.fluentd.logger.sender.RawSocketSender.connect(RawSocketSender.java:83)
	at org.fluentd.logger.sender.RawSocketSender.reconnect(RawSocketSender.java:92)
	at org.fluentd.logger.sender.RawSocketSender.flush(RawSocketSender.java:186)
	at org.fluentd.logger.sender.RawSocketSender.send(RawSocketSender.java:177)
	at org.fluentd.logger.sender.RawSocketSender.emit(RawSocketSender.java:147)
	at org.fluentd.logger.sender.RawSocketSender.emit(RawSocketSender.java:129)
	at org.fluentd.logger.FluentLogger.log(FluentLogger.java:99)
	at ch.qos.logback.more.appenders.DataFluentAppender.append(DataFluentAppender.java:75)
	at ch.qos.logback.more.appenders.DataFluentAppender.append(DataFluentAppender.java:29)
	at ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:84)
	at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
	at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:270)
	at ch.qos.logback.classic.Logger.callAppenders(Logger.java:257)
	at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:421)
	at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:383)
	at ch.qos.logback.classic.Logger.log(Logger.java:765)
	at org.apache.commons.logging.impl.SLF4JLocationAwareLog.info(SLF4JLocationAwareLog.java:155)
	at org.springframework.boot.StartupInfoLogger.logStarted(StartupInfoLogger.java:57)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:321)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1186)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1175)
	at net.juniper.pronx.joc.services.Bootstrap.main(Bootstrap.java:38)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
	at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
	at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
...

Any help much appreciated.
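
A guess, assuming the Spring Boot app itself runs as a container inside the same Docker stack: localhost then resolves to the app's own container, not to the fluentd service, which would produce exactly this connection refused. The usual fix is to point remoteHost at the compose service name ("fluentd" below is an assumed name; use whatever your stack defines):

    <appender name="FLUENT" class="ch.qos.logback.more.appenders.DataFluentAppender">
        <remoteHost>fluentd</remoteHost> <!-- assumed docker-compose service name -->
        <port>24224</port>
    </appender>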

CloudWatch appender and AWS SDK v2

Looks like the CloudWatch appender is currently using v1 of the AWS SDK.

Are there any plans for another appender (presumably one in parallel to the current appender) that supports v2?

Output log contains both msg and message fields in FluencyLogbackAppender

Because of the following lines in FluentdAppenderBase, we are getting both msg and message in the output. Shouldn't msg only be put if the event is not an instance of ILoggingEvent?
https://github.com/sndyuk/logback-more-appenders/blob/master/src/main/java/ch/qos/logback/more/appenders/FluentdAppenderBase.java#L43

    protected Map<String, Object> createData(E event) {
        Map<String, Object> data = new HashMap<String, Object>();
        data.put(DATA_MSG, encoder.encode(event));

        if (event instanceof ILoggingEvent) {
            ILoggingEvent loggingEvent = (ILoggingEvent) event;
            data.put(DATA_MESSAGE, loggingEvent.getFormattedMessage());
            // ... (remainder of the quoted method elided)

Log Message Formatting

Hi
I am using ch.qos.logback.more.appenders.FluencyLogbackAppender along with Fluency. I was wondering if there is any way to format the log message before it gets emitted; I would need to mask certain fields.

<encoder>
    <charset>UTF-8</charset>
    <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS,UTC} UTC: ${PID:- }
        [%15.15thread] %5p [%X{traceId}/%X{spanId}] - %-40.40logger{39} :
        %replace(%.5000msg%n ){'[Pp]assword*([^,]*),', 'password=xxxx,'}%n
    </pattern>
</encoder>

I tried the above config in my logback XML but could not see any difference. Any suggestions?
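
Two hedged observations. First, the regex itself looks unlikely to match: [Pp]assword* repeats the letter d rather than the word, and the required trailing comma drops any final field; a plain Java regex such as the sketch below may behave better. Second, judging by the createData code quoted in another issue on this page, the encoder output goes into the msg field while message carries the raw formatted message, so masking applied through the encoder would only show up in msg.

    <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS,UTC} UTC: ${PID:- }
        [%15.15thread] %5p [%X{traceId}/%X{spanId}] - %-40.40logger{39} :
        %replace(%.5000msg){'(?i)password\s*=\s*[^,\s]+', 'password=xxxx'}%n
    </pattern>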

Can't shut down cleanly because DaemonAppender can't be interrupted.

My app can't shut down gracefully because the thread started by DaemonAppender keeps running. I've checked the code, and it seems InterruptedException is caught and the method recurses into itself instead of exiting. This code is quite similar to logback's built-in AsyncAppender, but that implementation breaks out of the loop when it catches an InterruptedException.
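
For comparison, the conventional worker-loop shape (roughly what AsyncAppender's worker does) restores the interrupt flag and exits instead of swallowing the exception and recursing. A generic sketch with hypothetical queue/append names, not the actual DaemonAppender code:

    // Worker loop that cooperates with interruption:
    while (!Thread.currentThread().isInterrupted()) {
        try {
            E event = queue.take();   // blocking take from the event queue
            append(event);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt status
            break;                              // exit so the thread can die
        }
    }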

Error creating cloud watch appender with version 1.8.2-JAVA9MODULE_SLF4J17 and 1.8.1-JAVA9MODULE_SLF4J17

I get an error when trying to initialize the CloudWatch appender with version 1.8.2-JAVA9MODULE_SLF4J17 or 1.8.1-JAVA9MODULE_SLF4J17.
It works fine with version 1.8.0-JAVA9MODULE_SLF4J17.
The error obtained is:

Exception in thread "main" java.lang.IllegalStateException: Logback configuration error detected: 
ERROR in ch.qos.logback.core.joran.action.AppenderAction - Could not create an Appender of type [ch.qos.logback.more.appenders.CloudWatchLogbackAppender]. ch.qos.logback.core.util.DynamicClassLoadingException: Failed to instantiate type ch.qos.logback.more.appenders.CloudWatchLogbackAppender
ERROR in ch.qos.logback.core.joran.spi.Interpreter@16:97 - ActionException in Action for tag [appender] ch.qos.logback.core.joran.spi.ActionException: ch.qos.logback.core.util.DynamicClassLoadingException: Failed to instantiate type ch.qos.logback.more.appenders.CloudWatchLogbackAppender
	at org.springframework.boot.logging.logback.LogbackLoggingSystem.loadConfiguration(LogbackLoggingSystem.java:169)
	at org.springframework.boot.logging.AbstractLoggingSystem.initializeWithConventions(AbstractLoggingSystem.java:80)
	at org.springframework.boot.logging.AbstractLoggingSystem.initialize(AbstractLoggingSystem.java:60)
	at org.springframework.boot.logging.logback.LogbackLoggingSystem.initialize(LogbackLoggingSystem.java:118)
	at org.springframework.boot.context.logging.LoggingApplicationListener.initializeSystem(LoggingApplicationListener.java:306)
	at org.springframework.boot.context.logging.LoggingApplicationListener.initialize(LoggingApplicationListener.java:281)
	at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEnvironmentPreparedEvent(LoggingApplicationListener.java:239)
	at org.springframework.boot.context.logging.LoggingApplicationListener.onApplicationEvent(LoggingApplicationListener.java:216)
	at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:172)
	at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:165)
	at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
	at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:127)
	at org.springframework.boot.context.event.EventPublishingRunListener.environmentPrepared(EventPublishingRunListener.java:80)
	at org.springframework.boot.SpringApplicationRunListeners.environmentPrepared(SpringApplicationRunListeners.java:53)
	at org.springframework.boot.SpringApplication.prepareEnvironment(SpringApplication.java:345)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:308)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1237)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1226)
	at roc.backend.backoffice.BackofficeApplicationKt.main(BackofficeApplication.kt:16)

My logback file is:

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="1 seconds" debug="false">
    <variable name="LOG_LEVEL" value="${LOG_LEVEL:-WARN}" />
    <variable name="AWS_REGION" value="${AWS_REGION:-eu-west-1}" />
    <include resource="org/springframework/boot/logging/logback/defaults.xml" />
    <include resource="org/springframework/boot/logging/logback/console-appender.xml" />

    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{"yyyy-MM-dd'T'HH:mm:ss,SSS", UTC} [%thread] %-5level %logger{36} - %replace(%msg){'[\r\n]',''} %replace(%ex){'[\r\n]+', '\\n'}%nopex %n</pattern>
        </encoder>
    </appender>

    <appender name="Sentry" class="io.sentry.logback.SentryAppender"/>

    <appender name="CLOUDWATCH" class="ch.qos.logback.more.appenders.CloudWatchLogbackAppender">
        <awsConfig>
            <!--        Uncomment when need to verify that cloudwatch works-->
            <profile>admin-dev</profile>
            <region>${AWS_REGION}</region>

        </awsConfig>
        <logGroupName>BackofficeAuditory</logGroupName>
        <logStreamName>auditory</logStreamName>
        <logStreamRolling class="ch.qos.logback.more.appenders.CloudWatchLogbackAppender$CountBasedStreamName">
            <limit>100000</limit>
        </logStreamRolling>
        <createLogDestination>true</createLogDestination>
        <emitInterval>100</emitInterval>
    </appender>

    <springProfile name="local">

<!--        Uncomment when need to verify that cloudwatch works-->
<!--        <logger name="roc.backend.backoffice.service.AuditService" level="INFO">-->
<!--            <appender-ref ref="CLOUDWATCH" />-->
<!--        </logger>-->
        <root level="INFO">
            <appender-ref ref="CONSOLE" />
        </root>
    </springProfile>

    <springProfile name="!local">
        <logger name="roc.backend.backoffice.service.AuditService" level="INFO" additivity="false">
            <appender-ref ref="CLOUDWATCH"/>
        </logger>
        <root level="${LOG_LEVEL}">
            <appender-ref ref="CONSOLE" />
            <appender-ref ref="Sentry" />
        </root>
    </springProfile>

</configuration>

DataFluentAppender not integrating with Elasticsearch when the message is structured

Sorry in advance if this is not an issue but a misunderstanding on my side.

I am using DataFluentAppender and it works like a charm ;). The information goes perfectly to Elasticsearch through FluentD.

Nevertheless, there is a point I am not getting: when my message is structured JSON, like '{"key1":"value1", "key2": "value2"}', it is indexed as a whole, at a sort of "second level", where the first level is all the metadata introduced by the appender.

How can I get the JSON deserialized so that each field (key1 and key2) is indexed separately in Elasticsearch?
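
If I understand the setup correctly, this is usually solved on the Fluentd side rather than in the appender: the bundled filter_parser plugin can re-parse one field as JSON and lift its keys to the top level of the record. A sketch assuming the JSON ends up in the message field and your events are tagged mytag:

    <filter mytag.**>
      @type parser
      key_name message      # the field holding the JSON string
      reserve_data true     # keep the other fields the appender added
      <parse>
        @type json
      </parse>
    </filter>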

Fluency - emit() failed due to buffer full. Flushing buffer. Please try again.

Hi, I ran into this problem; how should I deal with it?

[screenshots: "emit() failed due to buffer full" errors, repeated constantly in the logs]

Then I checked the source; could it be appending data whose size exceeds maxBufferSize? How could it append data larger than maxBufferSize (256 MB) in one go?

Looking forward to getting your reply

Thank you

Here is the config.

<appender name="FLUENCY_SYNC" class="ch.qos.logback.more.appenders.FluencyLogbackAppender">
    <!-- Tag for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
    <!-- Microservice name -->
    <tag>${applicationName}</tag>
    <!-- [Optional] Label for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->

    <!-- Host name/address and port number where Fluentd is placed -->
    <remoteHost>${fluentdAddr}</remoteHost>
    <port>24224</port>

    <!-- [Optional] Multiple names/addresses and port numbers where Fluentd is placed
   <remoteServers>
      <remoteServer>
        <host>primary</host>
        <port>24224</port>
      </remoteServer>
      <remoteServer>
        <host>secondary</host>
        <port>24224</port>
      </remoteServer>
    </remoteServers>
     -->

    <!-- [Optional] Additional fields (pairs of key: value) -->
    <!-- Environment -->
    <additionalField>
        <key>env</key>
        <value>${profile}</value>
    </additionalField>

    <!-- [Optional] Configurations to customize Fluency's behavior: https://github.com/komamitsu/fluency#usage  -->
    <ackResponseMode>true</ackResponseMode>
    <!-- <fileBackupDir>/tmp</fileBackupDir> -->
    <bufferChunkInitialSize>2097152</bufferChunkInitialSize>
    <bufferChunkRetentionSize>16777216</bufferChunkRetentionSize>
    <maxBufferSize>268435456</maxBufferSize>
    <bufferChunkRetentionTimeMillis>1000</bufferChunkRetentionTimeMillis>
    <connectionTimeoutMilli>5000</connectionTimeoutMilli>
    <readTimeoutMilli>5000</readTimeoutMilli>
    <waitUntilBufferFlushed>30</waitUntilBufferFlushed>
    <waitUntilFlusherTerminated>40</waitUntilFlusherTerminated>
    <flushAttemptIntervalMillis>200</flushAttemptIntervalMillis>
    <senderMaxRetryCount>12</senderMaxRetryCount>
    <!-- [Optional] Enable/Disable use of EventTime to get sub second resolution of log event date-time -->
    <useEventTime>true</useEventTime>
    <sslEnabled>false</sslEnabled>
    <!-- [Optional] Enable/Disable use of the JVM heap for buffering -->
    <jvmHeapBufferMode>false</jvmHeapBufferMode>
    <!-- [Optional] If true, Map Marker is expanded instead of nesting in the marker name -->
    <flattenMapMarker>false</flattenMapMarker>
    <!--  [Optional] default "marker" -->
    <markerPrefix></markerPrefix>

    <!-- [Optional] Message encoder if you want to customize message -->
    <encoder>
        <pattern><![CDATA[%-5level %logger{50}#%line %message]]></pattern>
    </encoder>

    <!-- [Optional] Message field key name. Default: "message" -->
    <messageFieldKeyName>msg</messageFieldKeyName>

</appender>

 <appender name="FLUENCY" class="ch.qos.logback.classic.AsyncAppender">
    <!-- Max queue size of logs waiting to be sent (when it reaches the max size, logs will be discarded). -->
    <queueSize>999</queueSize>
    <!-- Never block when the queue becomes full. -->
    <neverBlock>true</neverBlock>
    <!-- The default maximum queue flush time allowed during appender stop.
         If the worker takes longer than this time it will exit, discarding any remaining items in the queue.
         10000 millis
     -->
    <maxFlushTime>1000</maxFlushTime>
    <appender-ref ref="FLUENCY_SYNC"/>
</appender>


I am sure there is no single piece of log data that big (more than 256 MB). I think the unsent data may be accumulating into one big chunk, so it keeps failing. Is that possible?

FluencyLogbackAppender: logs missing when execution time is short

Hi,
we are using your FluencyLogbackAppender to collect logs on a distributed system. It worked perfectly until we tried to add a logger, for debugging purposes, that collects data about the tools used to manage the system. The logger uses a file appender and a FluencyLogbackAppender.
The thing is, some of these tools have a very short execution time (about 5 s, for example), and we found that the FluencyLogbackAppender does not seem to have enough time to send the logs. Adding a Thread.sleep(5000) in the main thread seems to "fix" the problem, but we can't afford to do so.
I checked the Fluency documentation, and a call to close() should wait until all messages are sent (https://github.com/komamitsu/fluency#wait-until-buffered-data-is-flushed-and-release-resource).
I saw that fluency.close() is called in the stop() method of FluencyLogbackAppender, but I can't understand why the logs are not sent. Is it a configuration problem?
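
One workaround to try, assuming the short-lived tools never stop the logback context before the JVM exits: stopping the LoggerContext at the end of main() runs every appender's stop(), which is where fluency.close() gets called.

    import ch.qos.logback.classic.LoggerContext;
    import org.slf4j.LoggerFactory;

    // At the very end of main(): stop logback so appenders flush and close.
    LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
    context.stop();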

Encoder stopped working in 1.8.0

It works fine in 1.7.5 but in 1.8.0 the output is always in the default format.

I think it's caused by commit d58af9d: encoder.encode was moved into the else block and is never called.

Allow assumeRole via StsAssumeRoleCredentialsProvider

Hi,
I'd like to be able to have the appender assume a role instead of plainly using the access/secret credentials for CloudWatch access.
Does this project currently accept new PRs? If a PR is submitted and accepted, could a fixed version be released in a timely manner?
Thanks.
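
For reference, a sketch of the SDK v2 provider being requested; the appender would need a configuration hook to accept something like this (the role ARN and session name below are hypothetical):

    import software.amazon.awssdk.services.sts.StsClient;
    import software.amazon.awssdk.services.sts.auth.StsAssumeRoleCredentialsProvider;
    import software.amazon.awssdk.services.sts.model.AssumeRoleRequest;

    // Credentials provider that assumes a role and refreshes it automatically
    StsAssumeRoleCredentialsProvider provider = StsAssumeRoleCredentialsProvider.builder()
            .stsClient(StsClient.create())
            .refreshRequest(AssumeRoleRequest.builder()
                    .roleArn("arn:aws:iam::123456789012:role/log-writer")
                    .roleSessionName("logback-appender")
                    .build())
            .build();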

How to parse msg as a JSON object instead of text?

Hello,
I have a problem.
My msg is a JSON string like "{"a":"b","c":"d"}".
The type of the msg field in Elasticsearch is text.

[screenshots of the Elasticsearch mapping and the indexed document]

I want the data to be parsed as JSON in Elasticsearch so I can filter by the parameters.
I tried to create an index template and set the msg type to "flattened" or "nested", but it didn't work.

Please help, thank you!

DataFluentAppender generates a new timestamp instead of the event's time

Why does DataFluentAppender generate a brand new timestamp?

The problem arises when the AsyncAppender thread is not as fast as we would like. With network delays or queue problems, Elasticsearch's timestamp becomes significantly later than the real log event.

protected void append(E event) {
    Map<String, Object> data = createData(event);

    if (isUseEventTime()) {
        fluentLogger.log(getLabel() == null ? getTag() : getLabel(), data, System.currentTimeMillis() / 1000);
    } else {
        fluentLogger.log(getLabel() == null ? getTag() : getLabel(), data);
    }
}

I can create a PR with changes like the following. Would that be useful for this repo?

protected void append(E event) {
    Map<String, Object> data = createData(event);

    if (isUseEventTime()) {
        fluentLogger.log(getLabel() == null ? getTag() : getLabel(), data, getTimestamp(event) / 1000);
    } else {
        fluentLogger.log(getLabel() == null ? getTag() : getLabel(), data);
    }
}

protected long getTimestamp(E event) {
    if (event instanceof ILoggingEvent) {
        return ((ILoggingEvent) event).getTimeStamp();
    }
    return System.currentTimeMillis();
}
  • It also loses milliseconds because of the division by 1000. Why is that?

Add JSON formatter

The current JSON for Fluentd is hard-coded, so we can't handle customization requests like #20.
We should introduce a JSON encoder that can format the log event into any layout:

  <appender ...>
    <encoder class="xxx.JsonEncoder">
      <pattern><![CDATA[{"key1":  "%msg", "key2": { "nestedKey1": "%ex{short}" } }]]></pattern>
    </encoder>
  </appender>

ERROR o.f.logger.sender.RawSocketSender - org.fluentd.logger.sender.RawSocketSender java.net.SocketException: Connection timed out (Write failed)

In my use case, we cannot assume that fluentd will always be running. Ideally, the logger would log as usual if fluentd is running; if it is not running, it should probably log a warning and move on instead of throwing exceptions; finally, it should silently try to reconnect in the background and resume logging as soon as a connection is available.

Ideally, the library would also do some in-memory buffering to minimize the loss of log messages. Thoughts?

Question: why is MSG_SIZE_LIMIT needed?

Hi,
I'm wondering why MSG_SIZE_LIMIT is used to shrink messages. I need to emit a huge event to fluentd (bigger than 65 KB), and only part of it gets through.

Regards

Add additional fields at runtime

Is it possible to support adding additional fields at runtime for ch.qos.logback.more.appenders.FluencyLogbackAppender?

At the moment one can specify additional fields up front in the logback file, but say I wanted to add a customerId field (a key/value pair) to the log output where customerId is only known at runtime.

I'm thinking of functionality along the lines of logstash MapEntriesAppendingMarker.
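
In the meantime, one workaround that may already cover part of this, assuming FluencyLogbackAppender forwards MDC properties the way the FluentLogger-based appender is described to do elsewhere on this page: put the runtime value into the SLF4J MDC around the logging call.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    public class Demo {
        private static final Logger logger = LoggerFactory.getLogger(Demo.class);

        void handleOrder(String customerId) {
            MDC.put("customerId", customerId); // known only at runtime
            try {
                logger.info("order created");
            } finally {
                MDC.remove("customerId");      // avoid leaking into other logs
            }
        }
    }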

feature request: flattenMapMarker behavior improvement

It appears the flattenMapMarker option offers a choice between two behaviors: either ignore the marker name or mangle it. Can we add a new option to actually use the marker name?

I.e., if I add a marker named "data", use "data" as the field key instead of "marker.data".
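
Possibly related, though untested here: the Fluency configuration earlier on this page shows a <markerPrefix> option whose default is "marker". If an empty prefix is honored, it might already produce a plain "data" key:

    <!-- Untested guess based on the option shown in the Fluency config above -->
    <markerPrefix></markerPrefix>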
