
logback-s3-rolling-policy's Introduction

Logback RollingPolicy with S3 upload

logback-s3-rolling-policy automatically uploads rolled log files to S3.

There are two rolling policies that can be used:

  • S3FixedWindowRollingPolicy
  • S3TimeBasedRollingPolicy

logback-s3-rolling-policy was forked from logback-s3 (https://github.com/shuwada/logback-s3) but moved into a new project because the changes were becoming too extensive.

Requirements

  • Java 1.7+

Configuration

logback.xml variables

Whichever of the available S3 rolling policies you use, the following extra variables (in addition to Logback's standard ones) can be set:

  • s3BucketName The S3 bucket name to upload your log files to (mandatory).
  • awsAccessKey Your AWS access key. If not provided it falls back to the AWS SDK default provider chain.
  • awsSecretKey Your AWS secret key. If not provided it falls back to the AWS SDK default provider chain.
  • s3FolderName The S3 folder name in your S3 bucket to put the log files in. This variable supports dates, just put your pattern between %d{}. Example: %d{yyyy/MM/dd}.
  • shutdownHookType Defines which type of shutdown hook to use. This variable is mandatory when you use rolloverOnExit. Defaults to NONE. Possible values are:
    • NONE No shutdown hook is added. Note that your most recent log file won't be uploaded to S3!
    • JVM_SHUTDOWN_HOOK Adds a runtime shutdown hook. If you're running a web application, use SERVLET_CONTEXT instead, as a JVM shutdown hook is not safe to use there.
    • SERVLET_CONTEXT Registers the shutdown with the contextDestroyed method of RollingPolicyContextListener. Don't forget to actually add the context listener to your web.xml (see below).
  • rolloverOnExit Whether to roll over when your application shuts down. Boolean value, defaults to false. If this is set to false and you have defined a shutdownHookType, the current log file will be uploaded as-is.
  • prefixTimestamp Whether to prefix the uploaded filename with a timestamp formatted as yyyyMMdd_HHmmss. Boolean value, defaults to false.
  • prefixIdentifier Whether to prefix the uploaded filename with an identifier. Boolean value, defaults to false. On an AWS EC2 instance, the instance ID is used; otherwise the host name is used, and if the host name can't be determined, a UUID is used.
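The date tokens inside %d{} follow java.text.SimpleDateFormat. As a self-contained illustration of how a pattern such as logs/%d{yyyy/MM/dd} turns into a dated folder name (the expansion itself is performed by the policy; the regex-based helper below is purely hypothetical):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class S3FolderNameDemo {
    // Hypothetical helper mirroring how the policy expands %d{...} in s3FolderName.
    static String expand(String folderPattern, Date now) {
        Matcher m = Pattern.compile("%d\\{([^}]+)\\}").matcher(folderPattern);
        if (!m.find()) {
            return folderPattern; // no date token: use the name as-is
        }
        // The token content is a SimpleDateFormat pattern, e.g. yyyy/MM/dd.
        String formatted = new SimpleDateFormat(m.group(1)).format(now);
        return folderPattern.replace(m.group(0), formatted);
    }

    public static void main(String[] args) {
        // For 18 August 2015, "logs/%d{yyyy/MM/dd}" expands to "logs/2015/08/18".
        Date aug18 = new GregorianCalendar(2015, Calendar.AUGUST, 18).getTime();
        System.out.println(expand("logs/%d{yyyy/MM/dd}", aug18));
    }
}
```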

web.xml

If you're using the shutdown hook SERVLET_CONTEXT as defined above, you'll need to add the context listener class to your web.xml:

<listener>
   <listener-class>ch.qos.logback.core.rolling.shutdown.RollingPolicyContextListener</listener-class>
</listener>

Run-time variables

As of version 1.3 you can set run-time variables. For now, the only supported variable is an extra S3 folder.

Call CustomData.extraS3Folder.set( "extra_folder_name" ); somewhere in your code before the upload occurs. You can change this value at any time during run-time; it is picked up on the next upload. Set it to null to disable the extra folder.
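As a self-contained sketch of that pattern (the stand-in holder below is hypothetical; the real one is CustomData.extraS3Folder in ch.qos.logback.core.rolling.data.CustomData), the run-time variable behaves like a shared mutable reference that is read at upload time:

```java
import java.util.concurrent.atomic.AtomicReference;

public class ExtraS3FolderDemo {
    // Stand-in for CustomData.extraS3Folder: a holder the policy
    // reads each time it uploads a rolled file.
    static final AtomicReference<String> extraS3Folder = new AtomicReference<>();

    public static void main(String[] args) {
        // Set before the upload occurs; picked up on the next upload.
        extraS3Folder.set("extra_folder_name");
        System.out.println(extraS3Folder.get());

        // Change at run time: the new value applies to subsequent uploads.
        extraS3Folder.set("another_folder");
        System.out.println(extraS3Folder.get());

        // null means no extra folder is added.
        extraS3Folder.set(null);
        System.out.println(extraS3Folder.get());
    }
}
```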

logback.xml rolling policy examples

An example logback.xml appender for each available policy using RollingFileAppender.

  • ch.qos.logback.core.rolling.S3FixedWindowRollingPolicy:
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>logs/myapp.log</file>
  <encoder>
    <pattern>[%d] %-8relative %22c{0} [%-5level] %msg%xEx{3}%n</pattern>
  </encoder>
  <rollingPolicy class="ch.qos.logback.core.rolling.S3FixedWindowRollingPolicy">
    <fileNamePattern>logs/myapp.%i.log.gz</fileNamePattern>
    <awsAccessKey>ACCESS_KEY</awsAccessKey>
    <awsSecretKey>SECRET_KEY</awsSecretKey>
    <s3BucketName>myapp-logging</s3BucketName>
    <s3FolderName>logs/%d{yyyy/MM/dd}</s3FolderName>
    <rolloverOnExit>true</rolloverOnExit>
    <shutdownHookType>SERVLET_CONTEXT</shutdownHookType>
    <prefixTimestamp>true</prefixTimestamp>
    <prefixIdentifier>true</prefixIdentifier>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    <maxFileSize>10MB</maxFileSize>
  </triggeringPolicy>
</appender>

In this example you'll find the logs at myapp-logging/logs/2015/08/18/.

  • ch.qos.logback.core.rolling.S3TimeBasedRollingPolicy:
<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>logs/myapp.log</file>
  <encoder>
    <pattern>[%d] %-8relative %22c{0} [%-5level] %msg%xEx{3}%n</pattern>
  </encoder>
  <rollingPolicy class="ch.qos.logback.core.rolling.S3TimeBasedRollingPolicy">
    <!-- Rollover every minute -->
    <fileNamePattern>logs/myapp.%d{yyyy-MM-dd_HH-mm}.%i.log.gz</fileNamePattern>
    <awsAccessKey>ACCESS_KEY</awsAccessKey>
    <awsSecretKey>SECRET_KEY</awsSecretKey>
    <s3BucketName>myapp-logging</s3BucketName>
    <s3FolderName>log</s3FolderName>
    <rolloverOnExit>true</rolloverOnExit>
    <shutdownHookType>SERVLET_CONTEXT</shutdownHookType>
    <prefixTimestamp>false</prefixTimestamp>
    <prefixIdentifier>true</prefixIdentifier>
    <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
      <maxFileSize>10MB</maxFileSize>
    </timeBasedFileNamingAndTriggeringPolicy>
  </rollingPolicy>
</appender>

In this example you'll find the logs at myapp-logging/log/.
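To make the interaction of s3FolderName, prefixIdentifier and prefixTimestamp concrete, here is a hypothetical sketch of how the uploaded object key could be composed. The actual separator and ordering are determined by the library; this only illustrates which pieces the variables contribute:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class UploadKeyDemo {
    // Hypothetical key composition: folder, optional identifier prefix,
    // optional yyyyMMdd_HHmmss timestamp prefix, then the rolled file name.
    static String uploadKey(String folder, String identifier, Date rolledAt,
                            boolean prefixIdentifier, boolean prefixTimestamp,
                            String fileName) {
        StringBuilder name = new StringBuilder();
        if (prefixIdentifier) {
            name.append(identifier).append('_');
        }
        if (prefixTimestamp) {
            name.append(new SimpleDateFormat("yyyyMMdd_HHmmss").format(rolledAt)).append('_');
        }
        name.append(fileName);
        return folder + "/" + name;
    }

    public static void main(String[] args) {
        // Mirrors the S3TimeBasedRollingPolicy example above:
        // prefixIdentifier=true, prefixTimestamp=false, s3FolderName=log.
        System.out.println(uploadKey("log", "i-0abc123", new Date(),
                true, false, "myapp.2015-08-18_10-00.0.log.gz"));
    }
}
```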

AWS Credentials

It is a good idea to create an IAM user that is only allowed to upload S3 objects to a specific S3 bucket. This improves control and reduces the risk of unauthorized access to your bucket.

The following is an example IAM policy.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject"
      ],
      "Sid": "Stmt1378251801000",
      "Resource": [
        "arn:aws:s3:::myapp-logging/log/*"
      ],
      "Effect": "Allow"
    }
  ]
}

Libraries

This project uses the following libraries:

  • com.amazonaws:aws-java-sdk:1.11.7
  • ch.qos.logback:logback-classic:1.2.3
  • com.google.guava:guava:18.0
  • javax.servlet:servlet-api:2.4 (scope provided)
  • org.jetbrains:annotations:15.0 (scope provided)

logback-s3-rolling-policy's People

Contributors

danosipov, dhouthoo, giannivh, shuwada, snyk-bot, wvdhaute


logback-s3-rolling-policy's Issues

Using default credential provider chain

awsAccessKey and awsSecretKey are documented as mandatory. This feels like bad practice from a security perspective.

Is there any specific reason why this wouldn't work with the AWS SDK default credential provider chain?

Extend to use another S3 Endpoint

I would like to use this policy with Minio (https://www.minio.io), which is an AWS S3-compatible implementation. Therefore I need to set a different endpoint in the configuration.

My approach was to add a getter/setter to S3TimeBasedRollingPolicy and add the corresponding configuration element. But it is not picked up from the configuration the way the bucket or S3 folder name is.

I get the following error message in the logs:
15:45:01,232 |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@38:19 - no applicable action for [s3Endpoint], current ElementPath is [[configuration][appender][rollingPolicy][s3Endpoint]]

How does the magic work that picks up a configuration element and passes it to the class via a setter?

Upgrade logback-classic

Would it be possible to upgrade the Logback version, as the latest releases aren't backwards compatible with the version currently used?

S3TimeBasedRollingPolicy does not work as expected.

It looks like S3TimeBasedRollingPolicy does not work as expected. When the app shuts down, it leaves the last log file as a .tmp file and does not compress it as expected.
Example appender configuration:

<appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>logs/myapp.csv</file>
  <encoder>
    <pattern>%d{"yyyy-MM-dd HH:mm:ss.SSSSSSSSS"},%msg%n</pattern>
  </encoder>
  <rollingPolicy class="ch.qos.logback.core.rolling.S3TimeBasedRollingPolicy">
    <!-- Rollover every hour -->
    <fileNamePattern>logs/accountID=${accountID}/audit.%d{yyyy-MM-dd_HH}.%i.csv.gz</fileNamePattern>
    <totalSizeCap>4000KB</totalSizeCap>
    <s3BucketName>dev</s3BucketName>
    <s3FolderName>test-buck</s3FolderName>
    <rolloverOnExit>true</rolloverOnExit>
    <shutdownHookType>JVM_SHUTDOWN_HOOK</shutdownHookType>
    <prefixTimestamp>false</prefixTimestamp>
    <prefixIdentifier>true</prefixIdentifier>
    <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
      <maxFileSize>10KB</maxFileSize>
    </timeBasedFileNamingAndTriggeringPolicy>
  </rollingPolicy>
</appender>

[screenshot attached: 2019-02-22, 8:47:59 PM]

It should have rolled the file over and created a .gz, but it doesn't seem to have incremented the file name counter or compressed it. Can someone confirm that this looks like a valid issue? I've been looking at the code, but attaching a debugger to the actual process is hard, so it's difficult to know what happens when onShutDown() calls the rollover() function.

Does this work in combination with SiftingAppender?

Hi guys, I see Logback provides different types of appenders configurable in logback.xml. So, just asking before I give it a try:
Would you happen to know if I can use a SiftingAppender with a dynamic discriminator for dynamic file creation, combined with the S3FixedWindowRollingPolicy, to push those dynamic files out to S3?
Something like using the incoming request's session ID to create log files per session, and your rolling S3 appender to upload those to an S3 bucket with the session ID as the key prefix.

Doesn't compile with new Logback

Hi
I get a compilation error:

S3TimeBasedRollingPolicy.java:158: error: cannot find symbol
    if (future != null) {
        ^
  symbol:   variable future
  location: class S3TimeBasedRollingPolicy<E>

Readme needs updating

When I go to the repo URL (http://repo.linkid.be/releases) it looks like the domain has lapsed and no longer hosts a Nexus repository. Does this live somewhere else? Also, the version in the readme can be bumped to the newest one.

Thanks

linkid.be is no longer available, ends up in HTTP 503

Hi,

I wanted to rely on your appender, but the Maven repository on which you publicly distribute your releases is no longer available.

Do you think you could publish it somewhere else? Or should I build from your sources into a local Maven repository?

Regards

License problem

Hello guys,

Could you please clarify your license policy regarding the use of logback-s3-rolling-policy in other projects?
As I see it, you've attached the Apache 2 license to the GitHub project: https://github.com/link-nv/logback-s3-rolling-policy/blob/master/LICENSE.txt

But here

https://github.com/link-nv/logback-s3-rolling-policy/blob/master/src/main/java/ch/qos/logback/core/rolling/data/CustomData.java

you have different license information:

/*
 * SafeOnline project.
 *
 * Copyright 2006-2013 Lin.k N.V. All rights reserved.
 * Lin.k N.V. proprietary/confidential. Use is subject to license terms.
 */

So the question: can we use it in our (closed-source) project?

Thanks in advance

Repository is down

http://repo.linkid.be/releases is down.
Could you bring the server back up or create another repository?

Thanks.

java.lang.NoSuchFieldError: future

Hey,
We wanted to use this library in a Play 2.5 application, but it crashes when it tries to upload the files:

Exception in thread "pool-64-thread-1" java.lang.NoSuchFieldError: future
	at ch.qos.logback.core.rolling.S3TimeBasedRollingPolicy.waitForAsynchronousJobToStop(S3TimeBasedRollingPolicy.java:158)
	at ch.qos.logback.core.rolling.S3TimeBasedRollingPolicy.access$000(S3TimeBasedRollingPolicy.java:37)
	at ch.qos.logback.core.rolling.S3TimeBasedRollingPolicy$UploadQueuer.run(S3TimeBasedRollingPolicy.java:212)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Looking at the code, this seems valid, because the field "future" does not exist.
