
amazon-ecs-firelens-examples's Introduction

Amazon ECS FireLens Examples

Sample logging architectures for FireLens on Amazon ECS and AWS Fargate.

Contributing

We want examples of as many use cases in this repository as possible! Submit a Pull Request if you would like to add something.

ECS Log Collection

Basic FireLens examples

AWS for Fluent Bit init tag examples

An init tag is distributed with each release; it adds useful features for ECS customers.

Multiline Examples

Monitoring Fluent Bit

Fluent Bit Examples

Fluentd Examples

Splitting an application's logs into multiple streams

Artifacts for the blog Splitting an application’s logs into multiple streams: a Fluent tutorial

Setup for the examples

Before you use FireLens, familiarize yourself with Amazon ECS and with the FireLens documentation.

In order to use these examples, you will need the following IAM resources:

  • A Task IAM Role with permissions to send logs to your log destination. Each of the examples in this repository that needs additional permissions has a sample policy.
  • A Task Execution Role. This role is used by the ECS Agent to make calls on your behalf. If you enable logging for your FireLens container with the awslogs Docker Driver, you will need permissions for CloudWatch. You also need to give it S3 permissions if you are pulling an external Fluent Bit or Fluentd configuration file from S3. See the FireLens documentation for more; a minimal sample policy for the awslogs case is sketched below.
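
For example, a minimal sketch of an inline execution-role policy for the awslogs driver; in practice, scope the Resource down to your own log groups rather than using a wildcard:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}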

Here is an example inline policy with S3 access for FireLens:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket/folder_name/config_file_name"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::examplebucket"
      ]
    }
  ]
}

Using the Examples

You must update each Task Definition to reflect your own needs: replace the IAM roles with your own roles, update the log configuration with the values you desire, and replace the app image with your own application image.

Additionally, several of these examples use a custom Fluent Bit/Fluentd configuration file in S3. You must upload it to your own bucket, and change the S3 ARN in the example Task Definition.

If you are using ECS on Fargate, then pulling a config file from S3 is not currently supported. Instead, you must create a custom Docker image with the config file.

Dockerfile to add a custom config:

FROM amazon/aws-for-fluent-bit:latest
ADD extra.conf /extra.conf

Then update the firelensConfiguration options in the Task Definition to the following:

"options": {
    "config-file-type": "file",
    "config-file-value": "/extra.conf"
}

License Summary

This sample code is made available under the MIT-0 license. See the LICENSE file.

amazon-ecs-firelens-examples's People

Contributors

akshay-saraswat, andreyukd, aripolavarapu, arunpatyal, bgola-signalfx, carmenapuccio, cce, claych, galaoaoa, hossain-rayhan, jamesiri, matthewfala, pettitwesley, pippinsoft, ravinaik1312, sahya, set808, tanvirsidhuallscripts, varun7447, yogeshjoshi, zhonghui12


amazon-ecs-firelens-examples's Issues

None of the containers' logs are shipping to the destination

I have tried the configuration shared in this repo.

Following the Fargate example (https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/fluent-bit/config-file-type-file), I wrote an extra.conf and built a custom Fluent Bit image by copying extra.conf in the Dockerfile.

I have attached extra.conf and the Dockerfile below for review.

But there is some problem or something missing, because none of my application container logs are shipping to Datadog; even stdout logs are not shipping.
Can anybody review this and suggest the best possible solution, or correct me if something is wrong or missing in my approach?

NOTE: In my case the destination is Datadog, and ${APP_LOG_FILE_PATH} will be set as an environment variable of the container.

extra.conf

[SERVICE]
Flush 5
Grace 30
Daemon off

[INPUT]
Name tail
Tag app-logs
Buffer_Chunk_Size 1mb
Buffer_Max_Size 100mb
Mem_Buf_Limit 250mb
Path ${APP_LOG_FILE_PATH}
Refresh_Interval 5
Path_Key source
Skip_Empty_Lines true

[OUTPUT]
Name datadog
Match *
Host http-intake.logs.datadoghq.com
TLS on
compress gzip
apikey #apikey
dd_service datadog-logging
dd_source fluentbit-test
dd_message_key log

Dockerfile

FROM public.ecr.aws/aws-observability/aws-for-fluent-bit:stable
COPY ./extra.conf /fluent-bit/etc/extra.conf
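
For reference, a config baked into the image at that path also has to be referenced from the FireLens container definition; a sketch matching the COPY destination above:

"firelensConfiguration": {
    "type": "fluentbit",
    "options": {
        "config-file-type": "file",
        "config-file-value": "/fluent-bit/etc/extra.conf"
    }
}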

Add access logs for Envoy/AppMesh FireLens example

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Tell us about your request
What do you want us to build?
Add an example that showcases how to parse the Envoy Access Logs in AWS App Mesh with FireLens.

Which service(s) is this request for?
Fargate, ECS and AppMesh

[Question] [Help] asctime not recognized as time field in Kibana

I'm struggling with getting a time field in Kibana, using the awsfirelens plugin.

Here's my ContainerDefinitions:

      ContainerDefinitions:
        - Essential: true
          Image: amazon/aws-for-fluent-bit:latest
          Name: !Join [ '-', [ 'LogRouter', 'energy', !Ref DeployEnvironment] ]
          FirelensConfiguration:
            Type: fluentbit
            Options:
              config-file-type: 'file'
              config-file-value: '/fluent-bit/configs/parse-json.conf'
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: firelens-container
              awslogs-region: !Ref 'AWS::Region'
              awslogs-create-group: 'true'
              awslogs-stream-prefix: firelens
          MemoryReservation: 50

        - Environment:
          ...
          Essential: true
          Image: !Sub '${RepositoryURL}:${CommitHash}'
          LogConfiguration:
            LogDriver: awsfirelens
            Options:
              Name: firehose
              region: !Ref 'AWS::Region'
              delivery_stream: !FindInMap [ EnvMap, !Ref DeployEnvironment, LogDeliveryStream ]
              data_keys: 'asctime,name,module,lineno,funcName,levelname,message'
              time_key: 'asctime'
              time_key_format: '%Y-%m-%dT%H:%M:%S%L'

I can see asctime in Kibana fields, but only as a string.

Do I need an extra config for that?

The value of total_file_size in task-definition.json for S3 output plugin is invalid.

In this task-definition.json, total_file_size is specified as 1M. However, the default value of upload_chunk_size is 5,242,880 bytes, upload_chunk_size cannot be larger than total_file_size, and upload_chunk_size must be at least 5,242,880 bytes. I guess this is caused by the following example in Fluent Bit's documentation.
https://docs.fluentbit.io/manual/pipeline/outputs/s3#using-s3-without-persisted-disk

[OUTPUT]
     Name s3
     Match *
     bucket your-bucket
     region us-east-1
     total_file_size 1M
     upload_timeout 1m

So I just created the issue on fluent-bit-docs repository side.
fluent/fluent-bit-docs#411
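
For reference, a sketch of an S3 output that satisfies both constraints (bucket name and sizes are illustrative; upload_chunk_size must be at least 5,242,880 bytes and no larger than total_file_size):

[OUTPUT]
     Name s3
     Match *
     bucket your-bucket
     region us-east-1
     total_file_size 50M
     upload_chunk_size 6M
     upload_timeout 1m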

Firelens Multiple Output based on regex

Hi - I'm starting to use FireLens/Fluent Bit and I'm trying to send my ECS task logs to different outputs (OpenSearch indexes) based on some tag or string in the log. I saw the example of multiple destinations, and I was able to filter the logs by regex, but I didn't find any example of how to route logs to different OpenSearch indexes based on a tag or string in the log.
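
One possible direction, sketched here rather than taken from an official example, is Fluent Bit's rewrite_tag filter: re-tag records whose log field matches a regex, then point each tag at its own index (the host, index names, and pattern below are all illustrative):

[FILTER]
    Name rewrite_tag
    Match app-firelens*
    Rule $log (ERROR|WARN) error-logs false

[OUTPUT]
    Name es
    Match error-logs
    Host my-domain.eu-west-1.es.amazonaws.com
    Port 443
    Index app-errors
    tls On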

AWS ECS FireLens to send logs to OpenSearch with a custom field

Hi Team,

We are able to push logs from an AWS ECS container to OpenSearch with the help of FireLens, but we want to separate each log on a custom field like log.filepath; it would be helpful for sorting out logs from a specific path in OpenSearch.

Is there any configuration available for this case? Please share your inputs

serialized JSON parsing

I am using the https://github.com/aws-samples/amazon-ecs-firelens-examples/blob/master/examples/fluent-bit/parse-json/extra.conf to parse a JSON field in my log message. This is my log message before parsing:

{
"container_id": "14927aae39875c1cbdae3e41e6a25b19968aba3f24f09404e76190ef61670202",
"container_name": "/ecs-LogisticsLDWtest-Logistics-17-LogisticsLDWtest-Logistics-cef884afc6c2dbe0db01",
"ec2_instance_id": "i-0a1e58472dc0e9e96",
"ecs_cluster": "LDWtest-Logistics",
"ecs_task_arn": "arn:aws:ecs:us-west-2:418338086575:task/5372b32a-0894-464d-9da1-601ba33fe77d",
"ecs_task_definition": "LogisticsLDWtest-Logistics:5",
"log": "{"thread":"http-nginx-8080-exec-3","level":"INFO","loggerName":"com.enterprise.cloud.Logistics.api.v1.KafkaConsumerV1","reqMsg":"{\"requestId\" : \"45da14d9-bbb2-4906-988c-9e8aead17f59\", \"message\" : \"Request to list consumer details.\"}","endOfBatch":false,"loggerFqcn":"org.apache.log4j.Category","instant":{"epochSecond":1578674283,"nanoOfSecond":74902000},"contextMap":{},"threadId":23,"threadPriority":5}\r",
"source": "stdout"
}

Then with JSON parsing it looks like this:

{
"container_id": "14927aae39875c1cbdae3e41e6a25b19968aba3f24f09404e76190ef61670202",
"container_name": "/ecs-LogisticsLDWtest-Logistics-17-LogisticsLDWtest-Logistics-cef884afc6c2dbe0db01",
"contextMap": {},
"ec2_instance_id": "i-0a1e58472dc0e9e96",
"ecs_cluster": "LDWtest-Logistics",
"ecs_task_arn": "arn:aws:ecs:us-west-2:418338086575:task/5372b32a-0894-464d-9da1-601ba33fe77d",
"ecs_task_definition": "LogisticsLDWtest-Logistics:5",
"endOfBatch": false,
"instant": {
"epochSecond": 1578690446,
"nanoOfSecond": 571551000
},
"level": "INFO",
"loggerFqcn": "org.apache.log4j.Category",
"loggerName": "com.enterprise.cloud.Logistics.api.v1.KafkaConsumerV1",
"reqMsg": "{"requestId" : "45da14d9-bbb2-4906-988c-9e8aead17f59", "message" : "Request to list consumer details."}",
"source": "stdout",
"thread": "http-nginx-8080-exec-3",
"threadId": 23,
"threadPriority": 5
}

The reqMsg field is JSON in itself and I am not sure why this is not getting formatted as JSON. Is there a different parser I need to use or a different regex to parse this? Any help is appreciated.

FireLens health check recommendation

As mentioned in https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/fluent-bit/health-check, the TCP Input Health Check is no longer recommended. We don't want our ECS tasks to depend tightly on CloudWatch, and we want to minimize the chance of losing logs. Therefore, the only option left is the Simple Uptime Health Check.

However, as you suggested, "It is a very shallow health check, Fluent Bit could be completely failing to send logs but as long as its still responsive on the monitoring interface, it will be marked as healthy."

Do you have a recommended way to monitor whether Fluent Bit is actually failing? We're OK with missing logs sporadically, but we want to monitor for that and do not want to miss logs for a long time.

Several of my thoughts are:

  1. Can we use the Simple Uptime Health Check as the container health check command in ECS but, at the same time, set up another container to call the deep health check command on Fluent Bit? Then, if it cannot respond, the additional container emits a metric? (A sketch of this health check follows below.)
  2. Can we monitor the CloudWatch logs size and use that to decide if we're losing logs?
  3. Or do you have any recommendation for monitoring in case the Simple Uptime Health Check isn't able to tell that Fluent Bit cannot send logs?
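
For context on option 1, a sketch of the Simple Uptime Health Check wired in as an ECS container health check, assuming Fluent Bit's HTTP monitoring server is enabled on its default port 2020 (all values are illustrative):

"healthCheck": {
    "command": [
        "CMD-SHELL",
        "curl -sf http://127.0.0.1:2020/api/v1/uptime || exit 1"
    ],
    "interval": 30,
    "timeout": 5,
    "retries": 3,
    "startPeriod": 10
}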

Metrics examples: remove need for metric filter by making FLB invoke a script that outputs EMF

Why?

  • removes the need for metric filter in CW log group in metrics tutorials
  • This might enable more fine-grained/specific metrics experiences, need to think on this more.

How?

  • exec input takes cmd output and turns it into logs (sketched below)
  • create script that obtains desired metrics and outputs EMF records
  • FLB sends cmd output (EMF records) as EMF logs
  • args to script can select metrics/experiences
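
A sketch of the exec input this describes; the script path and interval are hypothetical:

[INPUT]
    # Run the script every 60 seconds and ingest its stdout (EMF JSON) as log records
    Name         exec
    Tag          emf-metrics
    Command      /fluent-bit/scripts/emit-emf.sh
    Interval_Sec 60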

Feature Request: add Kinesis Data Streams Example

I'm not sure whether Kinesis Data Streams is directly supported with FireLens, but looking at the documentation it could be done if FireLens reads a Fluent Bit config file that has a Kinesis Data Streams output. It would be great to verify this by adding an example for sending logs to a Kinesis data stream.
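
For what it's worth, the aws-for-fluent-bit image ships a kinesis_streams output plugin, so a hedged sketch of the app container's log configuration could look like this (region and stream name are placeholders):

"logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "kinesis_streams",
        "region": "us-east-1",
        "stream": "my-log-stream"
    }
}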

Issues w/ send to multiple destinations config

Used examples/fluent-bit/send-to-multiple-destinations as config example with ECS on Fargate launch type.
The extra.conf is pulled from S3 via a bootstrap container into a shared volume. The log_router starts OK and I see it configure both destinations; the remaining app containers, however, fail with "CannotStartContainerError: ResourceInitializationError: failed to initialize logging driver: failed to get fluentd log driver arguments: logging driver specified of container app, but no log configuration specified"

The example docs imply the rest of the app containers only need to specify "awsfirelens" as the logDriver and no other options

from README.md: "Since the destinations are configured in the custom config file, the app log configuration does not have any options. To use FireLens for logs, and container only needs to specify the awsfirelens log driver- the options are optional."

log_router:
mountPoints = [
{
sourceVolume = var.volume
containerPath = "/mnt/firelens/"
}
]

app containers:
logConfiguration = {
logDriver = "awsfirelens"
}

Unable to download firelens s3 config file

I am using the send-to-multiple-destinations example. I added the log-router container definition and edited our app container definition to use the awsfirelens log driver.

When I update my CloudFormation template with these changes, it gets stuck starting and stopping tasks over and over again. Each task gives the error:
Unable to download firelens s3 config file: unable to download s3 config extra.conf from bucket mybucket: MissingRegion: could not find region configuration

The task role has all of the S3 permissions it should need, and I tried adding those permissions to the execution role as well, but I still get the same error. So I don't think this is an S3 permissions problem, but I am not sure what the "MissingRegion: could not find region configuration" error refers to.

docker pull fails

We need to pull the Docker image through our internal Artifactory Docker repository. When I try to pull this image, I get the error below:

docker pull 533243300146.dkr.ecr.us-east-1.amazonaws.com/newrelic/logging-firelens-fluentbit

Using default tag: latest
Error response from daemon: Get https://533243300146.dkr.ecr.us-east-1.amazonaws.com/v2/newrelic/logging-firelens-fluentbit/manifests/latest: no basic auth credentials

Is there a way to pull this image so that we can push it to our internal Artifactory afterwards?
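
For reference, cross-account ECR pulls require a Docker login first; a sketch using the AWS CLI, assuming your IAM credentials are permitted to pull from that repository:

aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 533243300146.dkr.ecr.us-east-1.amazonaws.com
docker pull 533243300146.dkr.ecr.us-east-1.amazonaws.com/newrelic/logging-firelens-fluentbit:latest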

Multiline logs configuration not working

Referring to the details in https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/master/examples/fluentd/multiline-logs, I have created a service, but the multiline logs are not being combined. I am trying to send logs to Sumo Logic.

Here is the Dockerfile:

FROM fluent/fluentd:v1.11.0-debian-1.0
ADD extra.conf /extra.conf
USER root
RUN buildDeps="sudo make gcc g++ libc-dev" \
 && apt-get update \
 && apt-get install -y --no-install-recommends $buildDeps \
 && gem install fluent-plugin-concat \
 && gem install fluent-plugin-cloudwatch-logs \
 && gem install fluent-plugin-sumologic_output \
 && gem install fluent-plugin-out-http \
 && gem sources --clear-all \
 && rm -rf /tmp/* /var/tmp/* /usr/lib/ruby/gems/*/cache/*.gem
USER fluent

extra.conf

 <filter app-firelens**>
   @type concat
   key log
   stream_identity_key container_id
   multiline_start_regexp /^\[DockerLogGenerator\]/
 </filter>
 #<match app-firelens**>
 #@type sumologic
 #endpoint https://collectors.sumologic.com/receiver/v1/http/XXXXXX
 #source_category ECS/FireLens/Custom/Fargatewoz
 #source_name FireLensCustom
 #</match>

Note the sumologic plugin above is commented out; I have tried the Sumo Logic plugin, the Fluentd HTTP out plugin, and the CloudWatch plugin, with the same result each time.

Task definition:

{
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "containerDefinitions": [
        {
            "image": "arunpatyal/firelens-sumo-logger:11",
            "name": "log_router_sumo_file_config",
            "memory": 100,
            "essential": true,
            "firelensConfiguration": {
                "type": "fluentd",
                "options": {
                    "enable-ecs-log-metadata": "true",
                    "config-file-type": "file",
                    "config-file-value": "/extra.conf"
                }
            },
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "awslogs-ecs-fargate-sumo",
                    "awslogs-region": "us-west-1",
                    "awslogs-stream-prefix": "awslogs-ecs-fargate-sumo"
            }
          }
        },
        {
            "essential": true,
            "image": "arunpatyal/firelens-log-generator:1",
            "name": "app",
            "logConfiguration": {
                "logDriver": "awsfirelens",
                "options": {
                    "log_group_name": "firelens-log-group",
                    "log_stream_name": "firelens-log-stream",
                    "@type": "cloudwatch_logs",
                    "region": "us-west-1"
                }
            },
            "memory": 100
        }
    ],
    "family": "firelens-service",
    "executionRoleArn": "arn:aws:iam::xxxxxxxxxx:role/ecsTaskExecutionRole",
    "cpu": "256",
    "memory": "512",
    "volumes": [],
    "placementConstraints": [],
    "taskRoleArn": "arn:aws:iam::xxxxxxxxxx:role/ecs_task_iam_role",
    "networkMode": "awsvpc"
}

The firelens-log-generator:1 image is based on https://github.com/DenisBiondic/DockerLogGenerator with slight modifications; it generates logs in the format below:

[DockerLogGenerator] Current Time: 2020-07-09 05:04:40.092007251 +0000 UTC m=+35.001891793
[DockerLogGenerator] Multiline: 2020-07-09 05:04:40.092052436 +0000 UTC m=+35.001936973
This is the second line
This is the third line
[DockerLogGenerator] Current Time: 2020-07-09 05:04:45.09222899 +0000 UTC m=+40.002113544

Here is the output from CloudWatch:

{
    "container_id": "d0c83eb7e9d94aff0ca7e25f54048031740712c5f52e9e4e3b6b73988bf57afa",
    "container_name": "/ecs-firelens-service-11-app-90d782cd98eb808d1700",
    "source": "stdout",
    "log": "[DockerLogGenerator] Current Time: 2020-07-09 05:04:40.092007251 +0000 UTC m=+35.001891793",
    "ecs_cluster": "arn:aws:ecs:us-west-1:XXXXXXXX:cluster/fargate-cluster",
    "ecs_task_arn": "arn:aws:ecs:us-west-1:XXXXXXXXX:task/38e847e2-2d9f-4e58-88e0-cd875d6f40d6",
    "ecs_task_definition": "firelens-service:11"
}
{
    "log": "[DockerLogGenerator] Multiline: 2020-07-09 05:04:40.092052436 +0000 UTC m=+35.001936973",
    "container_id": "d0c83eb7e9d94aff0ca7e25f54048031740712c5f52e9e4e3b6b73988bf57afa",
    "container_name": "/ecs-firelens-service-11-app-90d782cd98eb808d1700",
    "source": "stdout",
    "ecs_cluster": "arn:aws:ecs:us-west-1:XXXXXXX:cluster/fargate-cluster",
    "ecs_task_arn": "arn:aws:ecs:us-west-1:XXXXXXXXX:task/38e847e2-2d9f-4e58-88e0-cd875d6f40d6",
    "ecs_task_definition": "firelens-service:11"
}
{
    "container_id": "d0c83eb7e9d94aff0ca7e25f54048031740712c5f52e9e4e3b6b73988bf57afa",
    "container_name": "/ecs-firelens-service-11-app-90d782cd98eb808d1700",
    "source": "stdout",
    "log": " This is the second line",
    "ecs_cluster": "arn:aws:ecs:us-west-1:XXXXXXXXX:cluster/fargate-cluster",
    "ecs_task_arn": "arn:aws:ecs:us-west-1:XXXXXXXXX:task/38e847e2-2d9f-4e58-88e0-cd875d6f40d6",
    "ecs_task_definition": "firelens-service:11"
}
{
    "container_id": "d0c83eb7e9d94aff0ca7e25f54048031740712c5f52e9e4e3b6b73988bf57afa",
    "container_name": "/ecs-firelens-service-11-app-90d782cd98eb808d1700",
    "source": "stdout",
    "log": "This is the third line",
    "ecs_cluster": "arn:aws:ecs:us-west-1:XXXXXXXXXX:cluster/fargate-cluster",
    "ecs_task_arn": "arn:aws:ecs:us-west-1:XXXXXXXXX:task/38e847e2-2d9f-4e58-88e0-cd875d6f40d6",
    "ecs_task_definition": "firelens-service:11"
}
{
    "container_name": "/ecs-firelens-service-11-app-90d782cd98eb808d1700",
    "source": "stdout",
    "log": "[DockerLogGenerator] Current Time: 2020-07-09 05:04:45.09222899 +0000 UTC m=+40.002113544",
    "container_id": "d0c83eb7e9d94aff0ca7e25f54048031740712c5f52e9e4e3b6b73988bf57afa",
    "ecs_cluster": "arn:aws:ecs:us-west-1:XXXXXXXX:cluster/fargate-cluster",
    "ecs_task_arn": "arn:aws:ecs:us-west-1:XXXXXXXXX:task/38e847e2-2d9f-4e58-88e0-cd875d6f40d6",
    "ecs_task_definition": "firelens-service:11"
}

I have also tried with HTTP out as below:

"logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
        "endpoint": "https://collectors.sumologic.com/receiver/v1/http/XXXXXXX",
        "endpoint_url": "https://collectors.sumologic.com/receiver/v1/http/XXXXXX",
        "@type": "http"
    }
}
Please let me know if I am missing anything.

fluentd/multiline-logs example supports EC2 only

Please mention that the example in examples/fluentd/multiline-logs is not supported on the Fargate launch type. Attempting to launch a task with config-file-type: s3 and a config-file-value results in the error "The specified platform does not satisfy the task definition's required capabilities".

It took us a few days of wrangling to realize that; a note about compatibility might save someone else the time.

Issue with FARGATE 1.4.0

Trying to use awsfirelens and send to multiple destinations.
Created fluentbit config and uploaded to S3:

[SERVICE]
    Flush 1
    Grace 30

[OUTPUT]
    Name cloudwatch_logs
    Match *
    log_stream_name fluent-bit-cloudwatch
    log_group_name fluent-bit-cloudwatch
    region eu-central-1
    log_format json/emf
    auto_create_group true

[OUTPUT]
    Name        kafka
    Match       *
    Brokers     broker1:9092,broker2:9092,broker3:9092
    Topics      myTopic

And the TaskDefinition in YAML:

 TaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      ExecutionRoleArn: !GetAtt 
        - ECSTaskExecutionRole
        - Arn
      TaskRoleArn: !GetAtt 
        - ECSTaskExecutionRole
        - Arn
      ContainerDefinitions:
        - Name: 'log_router'
          Image: '906394416424.dkr.ecr.eu-central-1.amazonaws.com/aws-for-fluent-bit:stable'
          Essential: true
          FirelensConfiguration:
            Type: "fluentbit"
            Options:
              config-file-type: "s3"
              config-file-value: "arn:aws:s3:::XXXXXX/fluentbit-service.conf"
          LogConfiguration:
            LogDriver: awslogs
            Options:
              awslogs-group: 'log_router'
              awslogs-region: !Ref AWS::Region
              awslogs-stream-prefix: 'firelens'
        - Name: 'logger'
          Image: !Sub '${AWS::AccountId}.dkr.ecr.eu-central-1.amazonaws.com/random-logger:latest'
          Essential: true
          PortMappings:
            - HostPort: 80
              Protocol: tcp
              ContainerPort: 80
          LogConfiguration:
            LogDriver: awsfirelens
      RequiresCompatibilities:
        - FARGATE
      NetworkMode: awsvpc
      Cpu: '256'
      Memory: '512'
      Family: 'task-family'

In CloudFormation, during creation, I repeatedly see this message:

Resource handler returned message: "One or more of the requested capabilities are not supported. 
(Service: AmazonECS; Status Code: 400; Error Code: PlatformTaskDefinitionIncompatibilityException; 
Request ID: 3926503c-e2d8-4d02-b385-7619f4a7a5c3; Proxy: null)" 
(RequestToken: 32f71748-d116-65e1-185e-467067390ded, HandlerErrorCode: GeneralServiceException)

If I set PlatformVersion: 1.3.0 for the Service, everything works just fine.
It seems I can't use a LogConfiguration with just LogDriver: awsfirelens and no options.
I also tried adding options for 'cloudwatch', thinking they would be appended to the 2 existing [OUTPUT] sections in the config file, but I see exactly the same issue.
awsfirelens also works with the "file" config-file-type.
But there is clearly an issue when config-file-type is s3 and the LogConfiguration has just LogDriver: awsfirelens with no options.

Different behaviour for Customized FluentBit image for Fargate 1.4.0 and 1.3.0.

As Fargate doesn't support a custom config from S3, I am creating a custom Fluent Bit image as described here: https://help.sumologic.com/03Send-Data/Collect-from-Other-Data-Sources/AWS_Fargate_log_collection#optional-support-for-multiple-http-headers This results in a Sumo Logic logger as a sidecar container, with the main application container using awsfirelens without any further options. With Fargate 1.3.0 everything works.

However, after upgrading to Fargate 1.4.0, the main application container fails with the error:

CannotStartContainerError: ResourceInitializationError: failed to initialize logging driver: failed to get fluentd logdriver arguments: logging driver specified of container vernemq

When specifying a "Name" attribute in the main container's log configuration options:

  • With Name: forward, the sidecar starts and logs are transported to Sumo Logic, but the logs seem to loop and the sidecar container goes out of memory after half a minute.
  • With Name: http, the logs go to Sumo Logic, but it seems there are 2 output configurations now: one working to Sumo, and one other with no valid configuration:
[2020/08/10 15:15:34] [error] [io] TCP connection failed: 127.0.0.1:80 (Connection refused) 
[2020/08/10 15:15:33] [ info] [output:http:http.0] endpoint1.collection.eu.sumologic.com:443, HTTP status=200

What is the correct option configuration with Fargate 1.4.0, and why does the empty option not work like it did with Fargate 1.3.0?

Error in "options" in the logConfiguration declaration of the Datadog documentation

I found the following error in your amazon-ecs-firelens-examples/examples/fluent-bit/datadog/ README.md. In the declaration of options inside "logConfiguration", the key "name" is written in lowercase:

"logConfiguration": {
	"logDriver":"awsfirelens",
	"options": {
	   "name": "datadog",
	   "apiKey": "<DATADOG_API_KEY>",
	   "dd_service": "my-httpd-service",
	   "dd_source": "httpd",
	   "dd_tags": "project:example",
	   "provider": "ecs"
   }
},

and when I tried to execute my task definition I kept getting the following error:

Error
unable to generate firelens config: unable to apply log options of container app to firelens config: missing output key Name which is required for firelens configuration of type fluentbit

I finally found another doc from a random user, and in their example they were using this declaration:

	"logDriver":"awsfirelens",
	"options": {
	   "Name": "datadog",
	   "apiKey": "<DATADOG_API_KEY>",
	   "dd_service": "my-httpd-service",
	   "dd_source": "httpd",
	   "dd_tags": "project:example",
	   "provider": "ecs"
   }
},

When I changed my own task definition, changing the "name" option to "Name", the issue was gone.
It's just a small error in the documentation, but I believe fixing it will help others.

Log driver awslogs disallows options: log_key

Hi,

I am facing the error below with ECS on the Fargate platform while using Fluent Bit through awsfirelens for logging:
An error occurred (ClientException) when calling the RegisterTaskDefinition operation: Log driver awslogs disallows options: log_key

I have referred this example:

https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/mainline/examples/fluent-bit/cloudwatchlogs

If I want to pick only the log field using log_key, what alternative do I have?
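
For reference, log_key is an option of Fluent Bit's cloudwatch_logs output plugin, not of the awslogs Docker driver, so it has to ride on the awsfirelens driver; a sketch (group name, prefix, and region are illustrative):

"logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
        "Name": "cloudwatch_logs",
        "region": "us-east-1",
        "log_group_name": "my-app-logs",
        "log_stream_prefix": "app-",
        "log_key": "log",
        "auto_create_group": "true"
    }
}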

Is it possible to define a custom input tag for Fluent Bit configs?

Hi,

I have a use case where I'm moving my infrastructure to ECS Fargate and would like to define custom input configs.

In my task definition, I'm launching two containers: app and fluentbit. The app container uses firelens to forward stdout logs to fluentbit, which then forwards the logs to fluentd aggregators.

Additionally, I create a custom output config file at /fluent-bit/configs/custom-output.conf in fluentbit image with the following content:

[OUTPUT]
    Name forward
    Match **
    Host FLUENT_AGGREGATOR_URL
    Port FLUENT_AGGREGATOR_PORT
    tls on
    tls.verify off

The Host and Port are replaced using an entrypoint.sh script (both are passed to the task definition as environment variables).

The logs that get forwarded have a custom index/tag which starts with the service name defined in the task definition.

Task definition

  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Name: nginx-web
          Cpu: !Ref ContainerCpu
          Essential: true
          Image: !Ref ContainerImageTag
          Memory: !Ref ContainerMemory
          PortMappings:
          - ContainerPort: !Ref ContainerPort
          LogConfiguration:
            LogDriver: awsfirelens
        - Name: fluentbit-agent
          Image: !Ref FluentbitImageUrl
          Essential: true
          Environment:
          - Name: "FLUENT_AGGREGATOR_URL"
            Value: !Ref FluentdAggregatorUrl
          - Name: "FLUENT_AGGREGATOR_PORT"
            Value: !Ref FluentdAggregatorPort
          FirelensConfiguration:
            Type: fluentbit
            Options:
              config-file-type: "file"
              config-file-value: "fluent-bit/configs/custom-output.conf"
          LogConfiguration:
            LogDriver: awslogs
            Options :
              awslogs-group: !Ref CloudWatchLogsGroup
              awslogs-region: !Ref AWS::Region
              awslogs-create-group: true
              awslogs-stream-prefix: nginx-web/fluentbit-logs
          MemoryReservation: 50
      Cpu: !Ref ContainerCpu
      ExecutionRoleArn: !GetAtt TaskExecutionRole.Arn
      Memory: !Ref Memory
      NetworkMode: awsvpc
      Family: webapp
      RequiresCompatibilities:
      - FARGATE

Sample log

{
  "_index": "nginx-web-45",
  "_type": "fluentd",
  "_id": "EVLFrXUBCNxJLISHC",
  "_version": 1,
  "_score": null,
  "_source": {
    "container_name": "/ecs-nginx-web-TaskDefinition-19BCASD9W3QZ-1-nginx-web-d0bcb99c14300",
    "source": "stdout",
    "log": "10.10.37.65 - - [09/Nov/2020:16:09:46 +0000] \"GET / HTTP/1.1\" 200 2350 \"-\" \"ELB-HealthChecker/2.0\"",
    "container_id": "add7cee329be065b3735eb0ba8b8",
    "ecs_cluster": "arn:aws:ecs:eu-west-1:323423439477:cluster/web",
    "ecs_task_arn": "arn:aws:ecs:eu-west-1:3234234239477:task/web/808ec9a8fc7b68b895f03c95",
    "ecs_task_definition": "nginx-web:16",
    "@timestamp": "2020-11-09T16:09:46.000000000+00:00"
  },
  "fields": {
    "@timestamp": [
      "2020-11-09T16:09:46.000Z"
    ]
  },
  "sort": [
    1604938186000
  ]
}

From the above example, the logs are sent with a Tag derived from the service name, nginx-web-45.

So the question here is:

Is it possible to also have a custom input config? If yes, is it possible to have a custom input Tag?

Any guidance would be much appreciated.

thanks in advance!

Adding Dynatrace FluentBit example

Hello team,

I am about to submit a PR to add an example for FluentBit integration with Dynatrace so wanted to create this issue to give you a heads up.

It's adding another folder called 'dynatrace' to the fluent-bit examples folder and contains a task definition example as well as the README.

Thanks,
Ari

parse ecs nginx logs with fluentbit

Hello @PettitWesley. Can you please advise here?
I have this task definition:

[{
		"essential": true,
		"image": "AWS_ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/custom-fluent-bit:latest",
		"name": "log_router",
		"firelensConfiguration": {
			"type": "fluentbit",
			"options": {
				"enable-ecs-log-metadata": "false"
			}
		},
		"logConfiguration": {
			"logDriver": "awslogs",
			"options": {
				"awslogs-group": "${aws_cloudwatch_log_group.log-group.name}",
				"awslogs-region": "us-east-1",
				"awslogs-create-group": "true",
				"awslogs-stream-prefix": "firelens"
			}
		},
		"memoryReservation": 50
	},
	{
		"name": "${local.env}-${local.application}",
		"image": "${var.repo_url}:${var.image_tag}",
		"essential": true,
		"portMappings": [{
			"containerPort": 80
		}],
		"logConfiguration": {
			"logDriver": "awsfirelens",
			"options": {
				"Name": "es",
				"Host": "${var.elasticsearch_host}",
				"Port": "443",
				"Index": "my_index",
				"Type": "my_type",
				"Aws_Region": "us-east-1",
				"tls": "On"
			}
		},
		"memory": 512,
		"cpu": 256
	}
]

I'm building the custom custom-fluent-bit image.
Dockerfile:

FROM amazon/aws-for-fluent-bit:latest
ADD fluent-bit.conf /fluent-bit/etc/
ADD parsers.conf /fluent-bit/parsers/

I need to parse nginx logs, so here's my fluent-bit.conf:

[SERVICE]
    Parsers_File /fluent-bit/parsers/parsers.conf
    Log_Level debug

[INPUT]
    Name forward
    unix_path /var/run/fluent.sock

[FILTER]
    Name parser
    Match **
    Parser nginx
    Key_Name log

[OUTPUT]
    Name es
    Match *
    Host  ES_HOST
    Port  443
    Index my_index
    Type  my_type

but the problem is that I don't see a parsed nginx log. Instead, I see something like:

{
  "_index": "my_index",
  "_type": "my_type",
  "_id": "C8H09nUBFsE_GWj4U09x",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2020-11-23T21:13:25.050Z",
    "container_id": "94ebad2b6cfd8a078f776ac8f0947ecb072e111db34ebe24d699a2ca9e29d208",
    "container_name": "/ecs-dev-emulator-sdk-35-dev-service-f4e6ceceadcfd4819401",
    "source": "stdout",
    "log": "172.16.6.223 - - [23/Nov/2020:21:13:25 +0000] \"GET / HTTP/1.1\" 200 5901 \"-\" \"ELB-HealthChecker/2.0\" \"-\""
  },
  "fields": {
    "@timestamp": [
      "2020-11-23T21:13:25.050Z"
    ]
  },
  "sort": [
    1606166005050
  ]
}

Can you please advise how to parse the line "log": "172.16.6.223 - - [23/Nov/2020:21:13:25 +0000] \"GET / HTTP/1.1\" 200 5901 \"-\" \"ELB-HealthChecker/2.0\" \"-\""?
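
For reference, the [FILTER] above assumes a parser named nginx exists in the parsers file the Dockerfile adds; the stock nginx parser that ships with Fluent Bit looks like this:

[PARSER]
    Name   nginx
    Format regex
    Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S %z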

Provide examples of using log key values in confs and task definitions

I'm having trouble finding examples of how to access log keys in conf files or, ideally, within the log configuration options in the task definition. For example, I'd like to set the s3_key_format using the value of a key in a JSON log, like below, where keyspace is a top-level key. Is that possible?

{
  "logConfiguration" : {
    "logDriver" : "awsfirelens",
    "options" : {
      "Name" : "s3",
      "bucket" : "my-bucket",
      "s3_key_format": "/$(keyspace)/dt=%Y-%m-%d-%H/$UUID.gz"
    }
  }
}

tail file with firelens is not working

I've been playing with the tail input and the AWS Elasticsearch output for over 6 hours but can't figure out how to stream logs from a file. Here is my config:
[SERVICE]
Log_Level trace
Parsers_File /fluent-bit/parsers/parsers_extra.conf

[INPUT]
Name tail
Path /var/www/html/var/log
Path_Key dev.log
Key log
Tag nas-client
Parser universal

[OUTPUT]
Name es
Match nas-client
Host xxxxx.eu-central-1.es.amazonaws.com
Type _doc
Aws_Auth On
Port 443
Index casino999-stage
tls On
Aws_Region eu-central-1

Expected output: logs delivered to es

Actual behaviour:
[2020/08/27 22:20:08] [debug] [out_es] Enabled AWS Auth
[2020/08/27 22:20:08] [debug] [input:tail:tail.3] inotify watch fd=22
[2020/08/27 22:20:08] [debug] [input:tail:tail.3] scanning path /var/www/html/var/log
[2020/08/27 22:20:08] [debug] [input:tail:tail.3] cannot read info from: /var/www/html/var/log
[2020/08/27 22:20:08] [debug] [input:tail:tail.3] 0 new files found on path '/var/www/html/var/log'

I can confirm that there is a dev.log file. I checked permissions and set 777 to rule that out, set an absolute path to the file, and then tried it with Path_Key, but none of that worked. I also played with other parsers besides universal (json, docker, multiline, docker_mode); nothing works. Does anyone have any idea what the issue could be?
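
One thing worth checking: the debug output shows the tail input found 0 files because Path points at a directory, and tail's Path expects a file or a glob pattern; a sketch pointing at the file itself (adjust the path to taste):

[INPUT]
    Name   tail
    # Point at the file (or a glob like /var/www/html/var/log/*.log), not the bare directory
    Path   /var/www/html/var/log/dev.log
    Tag    nas-client
    Parser universal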

Use forward input in FARGATE

Hello,

I have 2 containers in my task definition: my app in PHP and FireLens.
The PHP container already uses the container's stdout for everything related to the PHP process, with basic text output.
My application reports logs with some context variables so they can ultimately be processed through S3/Athena (e.g. TraceId, an ID related to the app, datetime, log level, the file where the log was fired, etc.).
So I'm trying to use a different input by sending JSON to tcp://127.0.0.1:24224 through the logging lib I use, but I don't see anything processed by the FireLens container.

So I have 2 questions:

  1. Is there a way to do that?
  2. Do you have a best approach on how to handle this?

Thanks,
Lucas

Support of multiple headers in FireLens task definition

Hello,

I'm trying to figure out how to add multiple HTTP headers (one for Authorization and one for Content-Type) as part of the HTTP output (https://docs.fluentbit.io/manual/pipeline/outputs/http), but it doesn't look like the default FireLens Fluent Bit image supports it.

Any time I try to use the HTTP output and provide two headers, the resulting config still only shows one header.

Are there any plans to support multiple headers?

Thanks,
Ari

Configuration file contain errors. Aborting - parse-envoy-app-mesh

When deploying the parse-envoy-app-mesh example:
https://github.com/aws-samples/amazon-ecs-firelens-examples/tree/master/examples/fluent-bit/parse-envoy-app-mesh

I have found the error with the log-router CloudWatch log group:

tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
AWS for Fluent Bit Container Image Version 2.2.0
tput: No value for $TERM and no -T specified
tput: No value for $TERM and no -T specified
Fluent Bit v1.3.9
Copyright (C) Treasure Data
Error: Configuration file contain errors. Aborting

I have not updated the Dockerfile or any conf/parser files.

Any ideas what this could be? Thanks.
