
LoggerJSON


A collection of formatters and utilities for JSON-based logging for various cloud tools and platforms.

Supported formatters

  • LoggerJSON.Formatters.Basic
  • LoggerJSON.Formatters.GoogleCloud
  • LoggerJSON.Formatters.Datadog
  • LoggerJSON.Formatters.Elastic

Installation

Add logger_json to your list of dependencies in mix.exs:

def deps do
  [
    # ...
    {:logger_json, "~> 6.0"}
    # ...
  ]
end

then install it by running mix deps.get.

Then, enable the formatter in your config.exs:

config :logger, :default_handler,
  formatter: {LoggerJSON.Formatters.Basic, []}

or at runtime (e.g. in your application.ex):

:logger.update_handler_config(:default, :formatter, {LoggerJSON.Formatters.Basic, []})

You might also want to format the log messages when migrations are running:

config :my_app, MyApp.Repo,
  # ...
  start_apps_before_migration: [:logger_json]

Additionally, you may want to try redirecting OTP reports to Logger (see the "Configuration" section).

Configuration

Configuration is set via the second element of the tuple passed to the :formatter option in the Logger configuration. For example, in config.exs:

config :logger, :default_handler,
  formatter: {LoggerJSON.Formatters.GoogleCloud, metadata: :all, project_id: "logger-101"}

or during runtime:

:logger.update_handler_config(:default, :formatter, {LoggerJSON.Formatters.Basic, %{metadata: {:all_except, [:conn]}}})

Docs

The docs can be found at https://hexdocs.pm/logger_json.

Examples

Basic

{
  "message": "Hello",
  "metadata": {
    "domain": ["elixir"]
  },
  "severity": "notice",
  "time": "2024-04-11T21:31:01.403Z"
}

Google Cloud Logger

Follows the Google Cloud Logger LogEntry format; for more details, see "special fields in structured payloads" in the Google Cloud documentation.

{
  "logging.googleapis.com/trace": "projects/my-projectid/traces/0679686673a",
  "logging.googleapis.com/spanId": "000000000000004a",
  "logging.googleapis.com/operation": {
    "pid": "#PID<0.29081.0>"
  },
  "logging.googleapis.com/sourceLocation": {
    "file": "/Users/andrew/Projects/os/logger_json/test/formatters/google_cloud_test.exs",
    "function": "Elixir.LoggerJSON.Formatters.GoogleCloudTest.test logs an LogEntry of a given level/1",
    "line": 44
  },
  "message": {
    "domain": ["elixir"],
    "message": "Hello"
  },
  "severity": "NOTICE",
  "time": "2024-04-12T15:07:55.020Z"
}

and this is how it looks in Google Cloud Logger:

{
  "insertId": "1d4hmnafsj7vy1",
  "jsonPayload": {
    "message": "Hello",
    "logging.googleapis.com/spanId": "000000000000004a",
    "domain": ["elixir"],
    "time": "2024-04-12T15:07:55.020Z"
  },
  "resource": {
    "type": "gce_instance",
    "labels": {
      "zone": "us-east1-d",
      "project_id": "firezone-staging",
      "instance_id": "3168853301020468373"
    }
  },
  "timestamp": "2024-04-12T15:07:55.023307594Z",
  "severity": "NOTICE",
  "logName": "projects/firezone-staging/logs/cos_containers",
  "operation": {
    "id": "F8WQ1FsdFAm5ZY0AC1PB",
    "producer": "#PID<0.29081.0>"
  },
  "trace": "projects/firezone-staging/traces/bc007e40a2e9edffa23785d8badc43b8",
  "sourceLocation": {
    "file": "lib/phoenix/logger.ex",
    "line": "231",
    "function": "Elixir.Phoenix.Logger.phoenix_endpoint_stop/4"
  },
  "receiveTimestamp": "2024-04-12T15:07:55.678986520Z"
}

Exception that can be sent to Google Cloud Error Reporter:

{
  "httpRequest": {
    "protocol": "HTTP/1.1",
    "referer": "http://www.example.com/",
    "remoteIp": "",
    "requestMethod": "PATCH",
    "requestUrl": "http://www.example.com/",
    "status": 503,
    "userAgent": "Mozilla/5.0"
  },
  "logging.googleapis.com/operation": {
    "pid": "#PID<0.250.0>"
  },
  "logging.googleapis.com/sourceLocation": {
    "file": "/Users/andrew/Projects/os/logger_json/test/formatters/google_cloud_test.exs",
    "function": "Elixir.LoggerJSON.Formatters.GoogleCloudTest.test logs exception http context/1",
    "line": 301
  },
  "@type": "type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent",
  "context": {
    "httpRequest": {
      "protocol": "HTTP/1.1",
      "referer": "http://www.example.com/",
      "remoteIp": "",
      "requestMethod": "PATCH",
      "requestUrl": "http://www.example.com/",
      "status": 503,
      "userAgent": "Mozilla/5.0"
    },
    "reportLocation": {
      "filePath": "/Users/andrew/Projects/os/logger_json/test/formatters/google_cloud_test.exs",
      "functionName": "Elixir.LoggerJSON.Formatters.GoogleCloudTest.test logs exception http context/1",
      "lineNumber": 301
    }
  },
  "domain": ["elixir"],
  "message": "Hello",
  "serviceContext": {
    "service": "nonode@nohost"
  },
  "stack_trace": "** (EXIT from #PID<0.250.0>) :foo",
  "severity": "DEBUG",
  "time": "2024-04-11T21:34:53.503Z"
}

Datadog

Adheres to the default standard attribute list as much as possible.

{
  "domain": ["elixir"],
  "http": {
    "method": "GET",
    "referer": "http://www.example2.com/",
    "request_id": null,
    "status_code": 200,
    "url": "http://www.example.com/",
    "url_details": {
      "host": "www.example.com",
      "path": "/",
      "port": 80,
      "queryString": "",
      "scheme": "http"
    },
    "useragent": "Mozilla/5.0"
  },
  "logger": {
    "file_name": "/Users/andrew/Projects/os/logger_json/test/formatters/datadog_test.exs",
    "line": 239,
    "method_name": "Elixir.LoggerJSON.Formatters.DatadogTest.test logs http context/1",
    "thread_name": "#PID<0.225.0>"
  },
  "message": "Hello",
  "network": {
    "client": {
      "ip": "127.0.0.1"
    }
  },
  "syslog": {
    "hostname": "MacBook-Pro",
    "severity": "debug",
    "timestamp": "2024-04-11T23:10:47.967Z"
  }
}

Elastic

Follows the Elastic Common Schema (ECS) format.

{
  "@timestamp": "2024-05-21T15:17:35.374Z",
  "ecs.version": "8.11.0",
  "log.level": "info",
  "log.logger": "Elixir.LoggerJSON.Formatters.ElasticTest",
  "log.origin": {
    "file.line": 18,
    "file.name": "/app/logger_json/test/logger_json/formatters/elastic_test.exs",
    "function": "test logs message of every level/1"
  },
  "message": "Hello"
}

When an error is thrown, the message field is populated with the error message and the error.* fields are set:

Note: when throwing a custom exception type that defines the fields id and/or code, the error.id and/or error.code fields will be set, respectively.

{
  "@timestamp": "2024-05-21T15:20:11.623Z",
  "ecs.version": "8.11.0",
  "error.message": "runtime error",
  "error.stack_trace": "** (RuntimeError) runtime error\n    test/logger_json/formatters/elastic_test.exs:191: anonymous fn/0 in LoggerJSON.Formatters.ElasticTest.\"test logs exceptions\"/1\n",
  "error.type": "Elixir.RuntimeError",
  "log.level": "error",
  "message": "runtime error"
}
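
For illustration, a custom exception that would populate error.code might look like the following (the module name and field values here are hypothetical, not from the library):

```elixir
# Hypothetical exception type; the `code` field is expected to surface as `error.code`
defmodule MyApp.PaymentError do
  defexception [:message, code: :payment_declined]
end
```

Raising it via `raise MyApp.PaymentError, message: "card declined"` inside a rescued-and-logged block should then produce error.code alongside error.message.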

Any custom metadata fields will be added to the root of the message, so that your application can fill any other ECS fields that you require:

Note that this also allows you to produce messages that do not strictly adhere to the ECS specification.

// Logger.info("Hello") with Logger.metadata(:"device.model.name": "My Awesome Device")
// or Logger.info("Hello", "device.model.name": "My Awesome Device")
{
  "@timestamp": "2024-05-21T15:17:35.374Z",
  "ecs.version": "8.11.0",
  "log.level": "info",
  "log.logger": "Elixir.LoggerJSON.Formatters.ElasticTest",
  "log.origin": {
    "file.line": 18,
    "file.name": "/app/logger_json/test/logger_json/formatters/elastic_test.exs",
    "function": "test logs message of every level/1"
  },
  "message": "Hello",
  "device.model.name": "My Awesome Device"
}

Copyright and License

Copyright (c) 2016 Andrew Dryga

Released under the MIT License, which can be found in LICENSE.md.

logger_json's People

Contributors

altjohndev, andrewdryga, btkostner, bvobart, davidjulien, dnnx, hkrutzer, imricardoramos, jbernardo95, kaaboaye, kbredemeier, kianmeng, luizmiranda7, marinakr, mathieubotter, mcrumm, moxley, patmaddox, pobo380, robsonpeixoto, ryanhart2, sgerrand, sophisticasean, sorliem, sparkertime, w1mvy, williamgelhar-validere, wingyplus, woylie, yogirajh007


logger_json's Issues

The documentation refers to Ecto :loggers option, which is deprecated

The documentation section shown below refers to the :loggers option, which was deprecated in Ecto v3.0.0, when telemetry was introduced. It was replaced with a :log option. Per the docs, the :log option provides the log level used when logging the query with Elixir's Logger; it defaults to :debug and, if false, disables logging for the repository. I think the explanation of the :loggers option should be replaced with an explanation of the :log option. I can submit a PR if agreed that this should be changed.

config :my_app, MyApp.Repo,
  adapter: Ecto.Adapters.Postgres,
  ...
  loggers: [{LoggerJSON.Ecto, :log, [:info]}]
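
For comparison, the post-deprecation equivalent would be roughly the following (a sketch; :my_app and MyApp.Repo are the placeholder names from the snippet above):

```elixir
# Sketch: Ecto 3.x replaces the :loggers option with :log
config :my_app, MyApp.Repo,
  log: :info   # or `false` to disable query logging for this repo
```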

How do I use this with Azure?

Thank you for this wonderful package!!

But it looks like it only supports GCP.

How can I use it with Azure?

Thank you. :)

Stackdriver logs do not show

This might not be an issue; it could be me doing something incorrectly.

I have the following in /etc/google-fluentd/config.d/api.conf:

<source>
    @type tail
    format none
    path /opt/api/var/log/*.log*
    pos_file /var/lib/google-fluentd/pos/api.pos
    read_from_head true
    tag structured-log
</source>

As per the guide here https://cloud.google.com/logging/docs/agent/configuration

With the following in my application config:

config :logger, backends: [LoggerJSON]
config :logger_json, :backend, json_encoder: Jason

With this configuration the log message is in textPayload as expected however if I change format in the fluentd conf file from format none to format json the logs never arrive 🤷‍♂️

Changelog/migration guide for going from 4.3.0 to 5.0.0

Hi there! I have an app using logger_json 4.3.0, and I was wondering what (if anything) I need to do to upgrade to 5.0.0.

I looked through the commits, but it wasn't clear to me what the breaking changes might be.

Any help would be much appreciated. (And if this is documented elsewhere, I'd be happy to submit a PR to add it to a CHANGELOG.md file or something.)

Compilation Error: LoggerJSON.Formatter.Plug is not loaded

Hi -- In a new project with only {:logger_json, "~> 6.0"} added to the mix.exs compilation fails:

❯ mix deps.get           
Resolving Hex dependencies...
Resolution completed in 0.03s
New:
  jason 1.4.1
  logger_json 6.0.0
* Getting logger_json (Hex package)
* Getting jason (Hex package)


❯ mix compile       
==> jason
Compiling 10 files (.ex)
Generated jason app
==> logger_json
Compiling 17 files (.ex)
    error: module LoggerJSON.Formatter.Plug is not loaded and could not be found. This may be happening because the module you are trying to load directly or indirectly depends on the current module
    │
 45 │   import LoggerJSON.Formatter.{MapBuilder, DateTime, Message, Metadata, Code, Plug, RedactorEncoder}
    │   ^
    │
    └─ lib/logger_json/formatters/datadog.ex:45:3: LoggerJSON.Formatters.Datadog (module)

    error: module LoggerJSON.Formatter.Plug is not loaded and could not be found. This may be happening because the module you are trying to load directly or indirectly depends on the current module
    │
 92 │   import LoggerJSON.Formatter.{MapBuilder, DateTime, Message, Metadata, Code, Plug, RedactorEncoder}
    │   ^
    │
    └─ lib/logger_json/formatters/google_cloud.ex:92:3: LoggerJSON.Formatters.GoogleCloud (module)

    error: module LoggerJSON.Formatter.Plug is not loaded and could not be found. This may be happening because the module you are trying to load directly or indirectly depends on the current module
    │
 90 │   import LoggerJSON.Formatter.{MapBuilder, DateTime, Message, Metadata, Plug, RedactorEncoder}
    │   ^
    │
    └─ lib/logger_json/formatters/elastic.ex:90:3: LoggerJSON.Formatters.Elastic (module)

    error: module LoggerJSON.Formatter.Plug is not loaded and could not be found. This may be happening because the module you are trying to load directly or indirectly depends on the current module
    │
 18 │   import LoggerJSON.Formatter.{MapBuilder, DateTime, Message, Metadata, Plug, RedactorEncoder}
    │   ^
    │
    └─ lib/logger_json/formatters/basic.ex:18:3: LoggerJSON.Formatters.Basic (module)


== Compilation error in file lib/logger_json/formatters/datadog.ex ==
** (CompileError) lib/logger_json/formatters/datadog.ex: cannot compile module LoggerJSON.Formatters.Datadog (errors have been logged)

could not compile dependency :logger_json, "mix compile" failed. Errors may have been logged above. You can recompile this dependency with "mix deps.compile logger_json --force", update it with "mix deps.update logger_json" or clean it with "mix deps.clean logger_json"

1 ❯ elixir --version            
Erlang/OTP 26 [erts-14.2.1] [source] [64-bit] [smp:8:8] [ds:8:8:10] [async-threads:1] [jit] [dtrace]

Elixir 1.16.1 (compiled with Erlang/OTP 26)

The obvious work around is to include Plug as a dep but it seems like LoggerJSON.Formatter.Plug was written to account for its absence.

Usage of "device" undocumented

I'm already using LoggerJSON for our cloudwatch logs, but have written my own logger for Datadog.

However, I'm wondering about the "device" config option. I think it would be easier for me to create a "device" to send logs to Datadog.

Is it safe to use this undocumented feature?

Usage of Jason.Helpers.json_map may cause memory spikes

Related: michalmuskala/jason#94

As per the issue above:

Since conn.method is being passed to json_map, a closure pointing to conn will be created. Then the whole conn is sent to the Logger process, which may cause spikes in memory usage.

LoggerJSON.Plug.MetadataFormatters.GoogleCloudLogger.build_metadata/3 uses this function and this may cause large memory spikes. We were seeing something in the region of ~1.6GB allocated in the Stack + Heap.

Add fallback when binary encoding fails

GenServer #PID<0.262.0> terminating
** (stop) {:EXIT, {%Jason.EncodeError{message: "invalid byte 0xF0 in <<125, 95, 95, 116, 101, 115, 116, 124, 79, 58, 50, 49, 58, 34, 74, 68, 97, 116, 97, 98, 97, 115, 101, 68, 114, 105, 118, 101, 114, 77, 121, 115, 113, 108, 105, 34, 58, 51, 58, 123, 115, 58, 50, 58, 34, 102, 99, 34, 59, 79, ...>>"}, [{Jason, :encode_to_iodata!, 2, [file: 'lib/jason.ex', line: 199]}, {LoggerJSON, :format_event, 5, [file: 'lib/logger_json.ex', line: 299]}, {LoggerJSON, :log_event, 5, [file: 'lib/logger_json.ex', line: 234]}, {LoggerJSON, :handle_event, 2, [file: 'lib/logger_json.ex', line: 139]}, {:gen_event, :server_update, 4, [file: 'gen_event.erl', line: 573]}, {:gen_event, :server_notify, 4, [file: 'gen_event.erl', line: 555]}, {:gen_event, :handle_msg, 6, [file: 'gen_event.erl', line: 296]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 247]}]}}
Last message: {:gen_event_EXIT, LoggerJSON, {:EXIT, {%Jason.EncodeError{message: "invalid byte 0xF0 in <<125, 95, 95, 116, 101, 115, 116, 124, 79, 58, 50, 49, 58, 34, 74, 68, 97, 116, 97, 98, 97, 115, 101, 68, 114, 105, 118, 101, 114, 77, 121, 115, 113, 108, 105, 34, 58, 51, 58, 123, 115, 58, 50, 58, 34, 102, 99, 34, 59, 79, ...>>"}, [{Jason, :encode_to_iodata!, 2, [file: 'lib/jason.ex', line: 199]}, {LoggerJSON, :format_event, 5, [file: 'lib/logger_json.ex', line: 299]}, {LoggerJSON, :log_event, 5, [file: 'lib/logger_json.ex', line: 234]}, {LoggerJSON, :handle_event, 2, [file: 'lib/logger_json.ex', line: 139]}, {:gen_event, :server_update, 4, [file: 'gen_event.erl', line: 573]}, {:gen_event, :server_notify, 4, [file: 'gen_event.erl', line: 555]}, {:gen_event, :handle_msg, 6, [file: 'gen_event.erl', line: 296]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 247]}]}}}
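
A fallback could be sketched as follows (a hypothetical helper, not the library's actual code; assumes the Jason dependency used in the stack trace above):

```elixir
# Hypothetical fallback: if strict encoding raises, log an `inspect`-ed
# representation of the event instead of crashing the handler
defmodule SafeEncoder do
  def encode_to_iodata(map) do
    Jason.encode_to_iodata!(map)
  rescue
    e in [Jason.EncodeError, Protocol.UndefinedError] ->
      Jason.encode_to_iodata!(%{
        "message" => inspect(map),
        "encode_error" => Exception.message(e)
      })
  end
end
```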

Time, Severity, and Message should be merged after metadata in GoogleCloudLogger

I recently debugged an issue where my log messages weren't showing up as expected. I had introduced a log message that had time metadata (how long an operation took). Since LoggerJSON.Formatters.GoogleCloudLogger merges the metadata into the map of time, severity, and message, metadata time replaced the actual timestamp. The same would happen for severity or message.

This should at a minimum be documented. Ideally the time, severity, and message would be merged into the metadata. A plus would be some way of warning that the original value had been overwritten and recording the original value.

Issue with GCP latest K8s version

This library, with the formatter LoggerJSON.Formatters.GoogleCloudLogger, stopped working in our GCP K8s cluster after the 7 April GCP K8s update. The latest update uses the fluent-bit agent, whereas the earlier one used fluentd.

Compilation issue with Elixir 1.6 on OTP 19

Hi and thank you for this fine library!

I tried to use logger_json with Elixir 1.6 on OTP 19 (which is what we run in production), but it won't compile due to the usage of unicode atoms in ecto.ex. This is not supported on OTP 19, since unicode atom support was introduced in OTP 20.

I'm not sure if you want to support OTP 19, but perhaps it could be some value in mentioning that OTP 20 is required in the docs.

This is the compilation error I get:

== Compilation error in file lib/logger_json/ecto.ex ==
** (SyntaxError) lib/logger_json/ecto.ex:61: unexpected token: "μ" (column 26, codepoint U+03BC)
    (elixir) lib/kernel/parallel_compiler.ex:198: anonymous fn/4 in Kernel.ParallelCompiler.spawn_workers/6

Question: are logs sent properly to Stackdriver Logging?

I am using the current latest code from master. This is my config:

config :logger_json, :backend,
       metadata: :all,
       formatter: LoggerJSON.Formatters.GoogleCloudLogger

config :logger,
       backends: [LoggerJSON],
       level: :info 

Logs are sent to Stackdriver Logging, but they are not parsed, they show up as plaintext logs, although they are JSON structured.

(this is how it looks in the console when viewing the logs)

2018-10-22 09:57:13.907 CEST
{"application":{"version":"0.5.17","name":"tzdata"},"log":"Tzdata has updated the release from 2018e to 2018f"}

Questions:

  1. Are you perhaps not using the V2 LogEntry? Because the JSON looks like the old version and not like the one described on https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry - and is this maybe the reason why the logs are not parsed?

  2. I am using Elixir in K8S, maybe something needs to be activated/enabled somewhere to tell Stackdriver Logging that it is receiving structured and not plaintext logs?

Thanks for the help!

FunctionClauseError in GoogleCloudLogger formatter

:gen_event handler LoggerJSON installed in Logger terminating
** (exit) an exception was raised:
    ** (FunctionClauseError) no function clause matching in LoggerJSON.Formatters.GoogleCloudLogger.format_crash_reason/1
        (logger_json) lib/logger_json/formatters/google_cloud_logger.ex:75: LoggerJSON.Formatters.GoogleCloudLogger.format_crash_reason({{:bad_label, {:alabel, 'The label "docker_host"  is not a valid A-label: ulabel error={bad_label,\n                                                               {context,\n                                                                "Codepoint 95 not allowed (\'DISALLOWED\') at posion 6 in \\"docker_host\\""}}'}}, [{:idna, :alabel, 1, [file: '/opt/app/deps/idna/src/idna.erl', line: 277]}, {:idna, :encode_1, 2, [file: '/opt/app/deps/idna/src/idna.erl', line: 145]}, {:hackney_url, :normalize, 2, [file: '/opt/app/deps/hackney/src/hackney_url.erl', line: 99]}, {:hackney, :request, 5, [file: '/opt/app/deps/hackney/src/hackney.erl', line: 305]}, {DealershipsCatalog.HTTPClient, :fetch_file, 2, [file: 'lib/dealerships_catalog/http_client.ex', line: 11]}, {DealershipsCatalog.ProductFeeds, :do_check_for_updates, 2, [file: 'lib/dealerships_catalog/product_feeds.ex', line: 120]}, {DealershipsCatalog.Util, :with_tmp_path, 2, [file: 'lib/dealerships_catalog/util.ex', line: 12]}, {Task.Supervised, :do_apply, 2, [file: 'lib/task/supervised.ex', line: 89]}]})
        (logger_json) lib/logger_json/formatters/google_cloud_logger.ex:67: anonymous fn/3 in LoggerJSON.Formatters.GoogleCloudLogger.format_process_crash/1
        (jason) lib/encode.ex:163: Jason.Encode.map_naive/3
        (jason) lib/encode.ex:35: Jason.Encode.encode/2
        (jason) lib/jason.ex:197: Jason.encode_to_iodata!/2
        (logger_json) lib/logger_json.ex:299: LoggerJSON.format_event/5
        (logger_json) lib/logger_json.ex:234: LoggerJSON.log_event/5
        (logger_json) lib/logger_json.ex:139: LoggerJSON.handle_event/2

It happens when I try to fetch a URL like http://docker_host:9000/... using hackney.

Logs show up as JSON and are not parsed by Stackdriver Logging

Hi, I am using the latest code from master from K8S pod, with the following config:

config :logger_json, :backend,
       metadata: :all,
       formatter: LoggerJSON.Formatters.GoogleCloudLogger

config :logger,
       backends: [LoggerJSON],
       level: :info

The logs do show up in Stackdriver Logging, but they are not parsed, i.e. it doesn't know they are structured (JSON) and is treating them like plaintext. For example this is how it looks like visually:

2018-10-22 09:57:13.907 CEST
{"application":{"version":"0.5.17","name":"tzdata"},"log":"Tzdata has updated the release from 2018e to 2018f"}

Instead it should be parsed, and only show JSON once you click on it, AFAIK.

Is this because your JSON format is not as on https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry or something needs to be enabled/set somewhere to tell Stackdriver Logging to expect a structured log entry (so that it parses it)?

Thanks for the help!

Problem with sourceLocation

For some reason my sourceLocation only points to {"file":"lib/logger_json/plug.ex","line":44,"function":"Elixir.LoggerJSON.Plug.call/2"}

Is there a problem, or am I missing something?

In my debugging, that's the value that handle_event receives in logger_json.ex.

How to avoid duplicating Formatter Configuration

Hey team! Currently, I have the following config files (test and prod).

# config/dev.exs

config :logger, :default_formatter,
  truncate: :infinity,
  format: "$time $metadata[$level] $message\n",
  metadata: [
    :request_id,
    :trace_id,
    :span_id
    # ... even more keys
  ]

And

# config/prod.exs

config :logger, :default_handler,
  formatter:
    {LoggerJSON.Formatters.Basic,
     metadata: [
       :request_id,
       :trace_id,
       :span_id
       # ... even more keys
     ]}

This duplicates the configuration across both config files. Ideally, I'd like to use config/config.exs only, or be able to configure only the :default_formatter metadata, to avoid forgetting to update all the required files to keep them in sync.

What would you suggest here to avoid duplication?
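
Since config files are plain Elixir, one option (a sketch; assumes Elixir ≥ 1.11 for Config.config_env/0) is to bind the shared metadata list once and branch on the environment in config/config.exs:

```elixir
# config/config.exs — sketch: define the shared metadata list once
import Config

shared_metadata = [:request_id, :trace_id, :span_id]

if config_env() == :prod do
  config :logger, :default_handler,
    formatter: {LoggerJSON.Formatters.Basic, metadata: shared_metadata}
else
  config :logger, :default_formatter,
    truncate: :infinity,
    format: "$time $metadata[$level] $message\n",
    metadata: shared_metadata
end
```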

Crash when logging logs from OTP

:gen_event handler LoggerJSON installed in Logger terminating
** (exit) an exception was raised:
    ** (Protocol.UndefinedError) protocol Jason.Encoder not implemented for &:application_controller.format_log/1 of type Function, Jason.Encoder protocol must always be explicitly implemented. This protocol is implemented for the following type(s): Ecto.Schema.Metadata, Ecto.Association.NotLoaded, BitString, Date, Atom, NaiveDateTime, Map, Decimal, DateTime, Time, Float, List, Any, Integer, Jason.Fragment

It appears that OTP logs have functions in their metadata. This is the beginning of the metadata of the log that caused the crash:

%{domain: [:otp],
 error_logger: %{report_cb: &:application_controller.format_log/1,
 tag: :info_report,
 type: :std_info}, ...

the environment is elixir 1.10.3 with OTP 23.0.2

Datadog FORMATTER CRASH

Hi,

Since upgrading to logger_json 6.0.1, we've been getting 'FORMATTER CRASH' messages in Datadog. I followed the upgrade guide and made the following changes:

config.exs

- config :logger_json, :backend,
-   metadata: [:network, :phoenix, :duration, :http, :"usr.id"],
-   formatter: LoggerJSON.Formatters.DatadogLogger

- config :logger, backends: [LoggerJSON]
+ config :logger, :default_handler,
+ formatter:
+  {LoggerJSON.Formatters.DatadogLogger,
+  metadata: [:network, :phoenix, :duration, :http, :"usr.id"]}

endpoint.ex

- plug LoggerJSON.Plug, metadata_formatter: LoggerJSON.Plug.MetadataFormatters.DatadogLogger

application.ex

def start(_, _) do
...
+ LoggerJSON.Plug.attach("logger-json-phoenix", [:phoenix, :endpoint, :stop], :info)
...

The output we're getting is of the form:

2024-05-24T08:19:38.771136+00:00 info: FORMATTER CRASH: {string,[<<"PATCH">>,32,<<"{REDACTED_URL}">>,32,<<"[">>,<<"Sent">>,32,[<<"200">>,32],<<"in ">>,[<<"1">>,<<"ms">>],<<"]">>]}

We don't seem to be getting much feedback about why there's an issue or where it's coming from. Any advice appreciated!

Logs from LoggerJSON.Plug show as "" in Google Cloud Logging

We're using Elixir/Phoenix and LoggerJSON in our Plug pipelines to format log lines for Google Cloud Logging. These log lines used to be shown with proper labels in the GCL summary line, using fields from httpRequest:

20201027133125-zsxruetksy

However, today we noticed these logs are no longer shown that way in GCL, instead showing up as an empty string ("") with no labels applied:

20210318164455-6n0pz1haa0

Expanding one of these log lines still shows the required httpRequest fields are present, but GCL seems to take the jsonPayload.message (which is logged as an empty string in LoggerJSON.Plug) and for some reason no longer shows the httpRequest labels:

20210318164659-x5uj18ov72

Things I've tried in a fork of LoggerJSON:

  • Log nil in LoggerJSON.Plug, to see if that would make GCL disregard the jsonPayload.message and fall back to the properly formatted log line. This doesn't change anything.
  • Remove the message field from the payload altogether, if it's empty. This makes GCL show the JSON payload as the log line, instead of showing the properly formatted log line.

Is anyone else experiencing these issues, and do you have any idea how these log lines should be formatted to get proper formatting back in GCL?

App not passing FQDNs to Datadog

We noticed that :inet.gethostname() is not sending fully qualified domain names to Datadog.
A simple fix would be to use :net_adm.localhost() and the --name option.

iex([email protected])3> :inet.gethostname()
{:ok, 'corp-7220'}
iex([email protected])4> :net_adm.localhost()
'corp-7220.us-central1-c.c.corp-integ-0.internal'

We can have a PR swapping the two functions and call it a day.
Would you see any drawbacks in abandoning :inet.gethostname/0?

cc: @rudebono

timestamp is in wrong timezone

The following assumes the timestamp is in UTC:

  def format_timestamp({date, time}) do
    [format_date(date), ?T, format_time(time), ?Z]
    |> IO.iodata_to_binary()
  end

however {date, time} seems to be provided in localtime, which means that this library ends up logging incorrect timestamps
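
One mitigation on the legacy Logger-backend path is to ask Elixir's Logger for UTC timestamps, so the hard-coded `Z` suffix becomes accurate:

```elixir
# Elixir's Logger can emit timestamps in UTC rather than local time
config :logger, utc_log: true
```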

"time" be overwritten by metadata "time" when use Elixir 1.10 with GoogleCloudLoggingFormatter

First of all, thank you for this library!!

About the title: Elixir 1.10 integrates with Erlang's logger, and Erlang's logger has "time" metadata that holds a Unix timestamp in microseconds.

https://elixir-lang.org/blog/2020/01/27/elixir-v1-10-0-released/
https://erlang.org/doc/man/logger.html#timestamp-0

This allows us to provide tighter integration with Erlang/OTP’s new logger
This means that the logger level, logger metadata, as well as all log messages are now shared between Erlang and Elixir APIs.

The Google Cloud Logger formatter merges metadata and logger_json values, so logger_json's "time" is overwritten by the metadata "time" value.
Is this the correct behavior?

My understanding is that logger_json expects the "time" value in milliseconds.

I confirmed the behavior as below.

code

w1mvy@3939502

Elixir 1.9

%{"logging.googleapis.com/sourceLocation" => %{"file" => "/home/w1mvy/ghq/github.com/Nebo15/logger_json/test/unit/logger_json_google_test.exs", "function" => "Elixir.LoggerJSONGoogleTest.test logs binary messages/1", "line" => 59}, "message" => "hello", "severity" => "DEBUG", "time" => "2020-08-05T21:54:34.233Z"}

Elixir 1.10

%{"domain" => ["elixir"], "logging.googleapis.com/sourceLocation" => %{"file" => "/home/w1mvy/ghq/github.com/Nebo15/logger_json/test/unit/logger_json_google_test.exs", "function" => "Elixir.LoggerJSONGoogleTest.test logs binary messages/1", "line" => 59}, "message" => "hello", "severity" => "DEBUG", "time" => 1596632018401128}

protocol Jason.Encoder not implemented for tuple

I'm getting the following error (the value {1, 1, 1, 1} is a dummy here):

Protocol.UndefinedError: protocol Jason.Encoder not implemented for {1, 1, 1, 1} of type Tuple, Jason.Encoder protocol must always be explicitly implemented. This protocol is implemented for the following type(s): ....
  File "lib/jason.ex", line 199, in Jason.encode_to_iodata!/2
  File "lib/logger_json.ex", line 290, in LoggerJSON.format_event/5
  File "lib/logger_json.ex", line 225, in LoggerJSON.log_event/5
  File "lib/logger_json.ex", line 127, in LoggerJSON.handle_event/2
  File "gen_event.erl", line 620, in :gen_event.server_update/4
  File "gen_event.erl", line 602, in :gen_event.server_notify/4
  File "gen_event.erl", line 343, in :gen_event.handle_msg/6
  File "proc_lib.erl", line 226, in :proc_lib.init_p_do_apply/3

It seems that the event cannot be encoded to JSON by the formatter (GoogleCloudLogger in my case). I have no idea where this tuple is coming from (it looks like an IP address). The GoogleCloudLogger should never crash, right? The only thing I can think of is that one of the @processed_metadata_keys (pid, file, line, function, module or application) contains this tuple as a value, because these are not safely converted but taken as-is. Do you have any idea how I can debug this?
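
As a caller-side workaround, metadata can be sanitized before logging so that tuples (such as IP addresses), pids, and functions become strings. This is a hypothetical helper, not part of LoggerJSON:

```elixir
# Hypothetical helper: convert JSON-unfriendly terms (tuples, pids, functions)
# to their `inspect` string form before they reach the formatter
defmodule LogMeta do
  def sanitize(map) when is_map(map) and not is_struct(map) do
    Map.new(map, fn {k, v} -> {k, sanitize(v)} end)
  end

  def sanitize(list) when is_list(list), do: Enum.map(list, &sanitize/1)
  def sanitize(term) when is_tuple(term) or is_pid(term) or is_function(term), do: inspect(term)
  def sanitize(term), do: term
end
```

It could be applied to individual values, e.g. `Logger.info("request", peer: LogMeta.sanitize(peer))`.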

Dialyzer warnings: Ecto.LogEntry and DatadogLogger pattern match

I fixed some Dialyzer warnings in #75, but there are two left:

:0:unknown_type
Unknown type: Ecto.LogEntry.t/0.

Ecto.LogEntry has been removed in Ecto 3.2.0. If you want to keep this for backwards compatibility, we can tell Dialyzer to ignore this warning.

________________________________________________________________________________
lib/logger_json/formatters/datadog_logger.ex:43:pattern_match
The pattern can never match the type.

Pattern:
%{:reason => _reason}

Type:
nil | %Jason.Fragment{:encode => ({_, _} -> [any(), ...])}

I'm not using the datalog logger, but it looks like Jason.Helpers.json_map/1 (used in LoggerJSON.FormatterUtils.format_process_crash/1) returns a Jason.Fragment struct, on which you'd have to call the encode function in order to get the final map. The datadog logger matches on the encoded map though, and not on Jason.Fragment. So this looks like a bug to me.

Bug: LoggerJSON no-ops when it's not the group leader.

This is an issue because LoggerJSON will not log if you make an RPC to a running mix release. I'm using RPC to start tasks periodically on a running node (like a cronjob), but I don't get logs from the result of that RPC because of this bug in LoggerJSON.

We should remain close to the console implementation of Logger, which does not no-op if it's not the group leader:
https://github.com/elixir-lang/elixir/blob/master/lib/logger/lib/logger/backends/console.ex#L138

So under the same circumstances, if I have both the :console backend and LoggerJSON, I will only get a log from :console when I make an RPC to a running release.

I will fork and make the PR.

How do we customise metadata sent to GCL?

Hey, first, thanks for making this, it's exactly what I need! I want to send our session ID and user ID along with our log entries for grouping/analysis later. Is there a way to pass this in when I use the plug?
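Concretely, I'm imagining something like this (a sketch in my app's terms — `conn.assigns` keys and `user` are from my code, and I'm assuming the backend is configured with `metadata: :all` so custom keys pass through):

```elixir
# Attach per-process Logger metadata in a plug or controller; with
# metadata: :all these keys should appear on every subsequent log entry
# emitted by this process.
Logger.metadata(
  session_id: conn.assigns.session_id,  # assumption: set earlier in the pipeline
  user_id: user.id
)

Logger.info("account created")
```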

Suggestions on how to set google logger logName?

Do you have any suggestions on how we could go about setting the logName attribute per LogEntry documentation? I don't believe this library currently supports that - and before I fork and commit to an approach, I wanted to see how you'd consider going about this.

One possibility is just to look for the log_name metadata keyword and move it to the appropriate place in the google log structure.

Logger.info("my log message", log_name: "projects/myproject/logs/custom-log-name") would produce:

"jsonPayload":{
    "message":"my log message",
    ...
  },
"logName":"projects/myproject/logs/custom-log-name",

(as opposed to the current behavior which puts logName inside of the jsonPayload)

But it's not the only google-logging specific keyword that people might want to set, so perhaps we want a special metadata struct to hold on to google-specific keywords. Something like Logger.info("my log message", _google: [log_name: "projects/myproject/logs/custom-log-name"]) to produce the same JSON structure as above.

Another possibility is to not do any of this in LoggerJSON and instead wrap GoogleCloudLogger and modify the data structure in the wrapper module.
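The first idea could look roughly like this (a sketch with made-up names, not a working formatter change): pop the metadata key out of the payload map and promote it to the top-level field Google expects.

```elixir
# Hypothetical post-processing step: move :log_name from the JSON payload
# to a top-level "logName" field, per the LogEntry documentation.
defmodule PromoteLogName do
  def run(json_payload) when is_map(json_payload) do
    case Map.pop(json_payload, :log_name) do
      {nil, payload} -> payload
      {name, payload} -> Map.put(payload, "logName", name)
    end
  end
end
```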

Use json logging for mix ecto.migrate

I was wondering if anyone else was also using mix ecto.migrate within the application to run migrations on the database before starting the application?

The problem is that, no matter what I configure, the application is not loaded for mix tasks, so config.exs is (I'm guessing) not read and the mix task just logs with the default settings.

I've done some research and this seems to be my issue... but I cannot find a solution out of the issues:
elixir-ecto/ecto#2038
elixir-ecto/ecto#2225

Just want to keep consistent application logs and hoping someone has a pattern in order to get the migration logs into json (preferably using this library).
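One thing worth trying (it's the approach the LoggerJSON README itself suggests for migrations): Ecto repos can start extra applications before migrating, which should route migration logs through LoggerJSON. The app name here is a placeholder for your own OTP app.

```elixir
# config.exs — start :logger_json before migrations run, so
# `mix ecto.migrate` output is formatted as JSON too.
config :my_app, MyApp.Repo,
  start_apps_before_migration: [:logger_json]
```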

DatadogLogger passing all JSON data to `message` field and not being parsed correctly

I set up logger_json with a Phoenix service and am using the DatadogLogger (with all the same configs as in the README).

However, whenever I log something now, I see this in DataDog:
(screenshot: the full JSON payload shows up as the log message in DataDog)

When clicking the "copy log as JSON" button in the top right, I get:

{
	"id": "<ID>",
	"content": {
		"timestamp": "2022-11-23T22:11:47.677Z",
		"tags": [
			"<REDACTED TAGS>"
		],
		"service": "<SERVICE NAME>",
		"message": "<REDACTED JSON I WANT DATADOG TO ACTUALLY USE/PARSE>",
		"attributes": {
			"service": "<SERVICE NAME>",
			"source": "stdout",
			"timestamp": 1669241507677
		}
	}
}

So, using the example from the README:

{
  "domain": ["elixir"],
  "duration": 3863403,
  "http": {
    "url": "http://localhost/create-account",
    "status_code": 200,
    "method": "GET",
    "referer": "http://localhost:4000/login",
    "request_id": "http_FlDCOItxeudZJ20AAADD",
    "useragent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36",
    "url_details": {
      "host": "localhost",
      "port": 4000,
      "path": "/create-account",
      "queryString": "",
      "scheme": "http"
    }
  },
  "logger": {
    "thread_name": "#PID<0.1042.0>",
    "method_name": "Elixir.LoggerJSON.Plug.call/2"
  },
  "message": "",
  "network": {
    "client": {
      "ip": "127.0.0.1"
    }
  },
  "phoenix": {
    "controller": "Elixir.RecognizerWeb.Accounts.UserRegistrationController",
    "action": "new"
  },
  "request_id": "http_FlDCOItxeudZJ20AAADD",
  "syslog": {
    "hostname": [10, 10, 100, 100, 100, 100, 100],
    "severity": "info",
    "timestamp": "2020-12-14T19:16:55.088Z"
  }
}

All of that information gets shoved into the message field in DataDog so it's not parsed and turned into attributes:

  "message": "{\"domain\":[\"elixir\"],\"duration\":3863403,\"http\":{\"url\":\"http://localhost/create-account\",\"status_code\":200,\"method\":\"GET\",\"referer\":\"http://localhost:4000/login\",\"request_id\":\"http_FlDCOItxeudZJ20AAADD\",\"useragent\":\"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36\",\"url_details\":{\"host\":\"localhost\",\"port\":4000,\"path\":\"/create-account\",\"queryString\":\"\",\"scheme\":\"http\"}},\"logger\":{\"thread_name\":\"#PID<0.1042.0>\",\"method_name\":\"Elixir.LoggerJSON.Plug.call/2\"},\"message\":\"\",\"network\":{\"client\":{\"ip\":\"127.0.0.1\"}},\"phoenix\":{\"controller\":\"Elixir.RecognizerWeb.Accounts.UserRegistrationController\",\"action\":\"new\"},\"request_id\":\"http_FlDCOItxeudZJ20AAADD\",\"syslog\":{\"hostname\":[10,10,100,100,100,100,100],\"severity\":\"info\",\"timestamp\":\"2020-12-14T19:16:55.088Z\"}}"

The only thing I suspect at this point is the DataDog agent itself or the FluentBit container we use for log routing. We don't have any crazy custom configuration for these though. The DataDog agent has the following env vars set up:

ECS_FARGATE=true
DD_ENV=dev
DD_APM_ENABLED=true
DD_PROFILING_ENABLED=true
DD_PROCESS_AGENT_ENABLED=true
DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true

But as far as I can tell from DataDog docs none of these should be changing how JSON logs are being parsed. So my last guess is that I've set up logger_json incorrectly?

Any and all ideas/help is appreciated!

Value in adding request_path?

Hi, maintainers!

First of all, thank you for this excellent library ❤️ 💙 💚

I know we can add our own formatter, but I'm wondering whether others would find value in including the request path as well as the request url. Having the request path can make it easy to exclude health checks and prometheus metrics requests.

Happy to create the PR if you think others would find this change useful.

Crash on Jason.encode_to_iodata/2

Hello!

I am converting maps and tuples to strings with data |> Kernel.inspect() before sending them to Logger, and I am having some crashes.

I would like to send the logs in JSON format as well. Do you think it would be possible to change Jason.encode_to_iodata!/2 to Jason.decode, or is there any other way to log as JSON?

Crashes

Jason.EncodeError
(Jason.EncodeError) invalid byte 0xE7 in <<67, 104, 101, 99, 107, 111, 117, 116, 115, 46, 68, 105, 112, 108, 111, 109, 97, 116, 46, 77, 101, 115, 115, 97, 103, 101, 115, 46, 79, 114, 100, 101, 114, 115, 32, 58, 111, 114, 100, 101, 114, 115, 32, 114, 101, 99, 101, 105, 118, 101, ...>>

and

ErlangError
(ErlangError) Erlang error: {:EXIT, {%Jason.EncodeError{message: "invalid byte 0xE7 in <<67, 104, 101, 99, 107, 111, 117, 116, 115, 46, 68, 105, 112, 108, 111, 109, 97, 116, 46, 77, 101, 115, 115, 97, 103, 101, 115, 46, 79, 114, 100, 101, 114, 115, 32, 58, 111, 114, 100, 101, 114, 115, 32, 114, 101, 99, 101, 105, 118, 101, ...>>"}, [{Jason, :encode_to_iodata!, 2, [file: 'lib/jason.ex', line: 199]}, {LoggerJSON, :format_event, 5, [file: 'lib/logger_json.ex', line: 299]}, {LoggerJSON, :log_event, 5, [file: 'lib/logger_json.ex', line: 234]}, {LoggerJSON, :handle_event, 2, [file: 'lib/logger_json.ex', line: 139]}, {:gen_event, :server_update, 4, [file: 'gen_event.erl', line: 577]}, {:gen_event, :server_notify, 4, [file: 'gen_event.erl', line: 559]}, {:gen_event, :handle_msg, 6, [file: 'gen_event.erl', line: 300]}, {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]}}
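For context, byte 0xE7 is "ç" in Latin-1, so this looks like a non-UTF-8 binary reaching the JSON encoder. A hedged workaround on the calling side (not a LoggerJSON feature): validate the binary first and fall back to `inspect/1`, which always produces an encodable string.

```elixir
# Sketch: guard against non-UTF-8 binaries before they reach Jason.
safe_string = fn
  s when is_binary(s) ->
    if String.valid?(s), do: s, else: inspect(s)

  other ->
    inspect(other)
end

# A Latin-1 "ç" byte is not valid UTF-8, so this logs the inspect/1
# representation instead of crashing the encoder.
Logger.info(safe_string.(<<"Checkouts", 0xE7>>))
```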

Datadog Formatter Issue

Hello,
Sorry to ask here but I didn't find a better place to ask this question.
I have an Elixir app running in a GKE cluster that is sending logs and OTEL data to Datadog. We are using the LoggerJSON.Formatters.DatadogLogger in the Elixir config file to get the Span and Trace IDs in the format Datadog "understands", but we are facing an issue where the IDs are not being converted.

For example, I can see these values in the GKE logs console in GCP:

jsonPayload: {
message: "[1.4.9|oban_logger.ex:0028|handle_event/4] [Oban] stop heartbeat/Sync(_DHL:a1LHs00000SVhCAMA1) ran in 435ms"
dd.trace_id: "d44f4e1f64c0d340e33bb9493f1bc491"
syslog: {3}
domain: [1]
otel_trace_flags: [2]
logger: {4}
dd.span_id: "f84c24131a261413"
erl_level: "notice"
}
labels: {6}
logName: "projects/cargosense-production-general/logs/stdout"
receiveTimestamp: "2023-07-11T20:30:24.026440953Z"

As you can see, the IDs are being kept with their original values.

I have a small Elixir app where the formatter is working correctly and correlation in Datadog is working fine.
The configuration in both apps is exactly the same, but I know from the developers (I'm the DevOps guy) that they are using some decorators (I think that's the correct term) for log messages, and I'm wondering if that is interfering with the Datadog formatter.

Is there any options we need to set to get the correct values for the Span and Trace IDs?

This is the configuration we are using in the app where the formatter is not working:

config :logger,
  # This backend is required for data dog
  backends: [LoggerJSON],
  level: :info

config :logger_json, :backend,
  metadata: :all,
  formatter: LoggerJSON.Formatters.DatadogLogger

Version of the logger we are using is 5.1.2
Elixir version is 1.14.3

I can send more details if needed.
Thanks a lot for the help, and sorry I can't give more details since I'm not an Elixir developer.

ExUnit.CaptureLog support for LoggerJSON backend?

First of all, thank you for creating this library! I have a question regarding ExUnit:

ExUnit.CaptureLog is a useful feature for capturing log messages produced during test runs so they can be inspected and suppressed from the console output. I have a test suite that takes advantage of this feature. When I switched our Logger backend to LoggerJSON, these log messages started to appear again in the ExUnit console output.

After reviewing the documentation for ExUnit.CaptureLog, it appears that the following test_helper.exs configuration only supports the :console backend:

ExUnit.start(capture_log: true)

Do you know of a way to accomplish something similar for the LoggerJSON backend? If not, what would you recommend I do to suppress these messages from my test suite?

Why do we ignore default metadata?

The basic logger purges rather useful metadata like line, file, and function.

When I tested this locally, I was able to completely wipe out the blacklisted fields and this library worked just fine.

I'm happy to make a PR with my changes to remove the rather annoying discard if we're up for that.

Formatter for ElasticSearch

Hi, thanks for the awesome library - I love using it on all of my projects.

I would like to report my Elixir logs to ElasticSearch, which uses the "Elastic Common Schema". Would you be open to merging such a formatter into this library, or should I make my own?
