
log-analytics-datadog's Introduction

JFrog Artifactory and Xray Log Analytics with Fluentd and Datadog

The following document describes how to configure Datadog to gather logs, metrics and violations from Artifactory and Xray through the use of FluentD.

Versions Supported

This integration was last tested with Artifactory 7.84.17 and Xray 3.92.7.

Table of Contents

Note! You must follow the order of the steps throughout the Datadog configuration.

  1. DataDog Setup
  2. JFrog Metrics Setup
  3. Fluentd Installation
  4. Dashboards
  5. References

DataDog Setup

To set up DataDog for this integration, follow the step below to add a new DataDog apiKey, or use an existing one.
If a DataDog apiKey already exists and can be used for this integration, skip this part and move on to Fluentd Installation to forward logs and metrics to your DataDog account.

If you don't have a DataDog apiKey:

  • Create an account in DataDog, if one doesn't exist
  • Follow the official DataDog instructions here to generate an apiKey; it will be used in the following sections
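Once you have a key, you can sanity-check it against Datadog's public key-validation endpoint (GET /api/v1/validate with the DD-API-KEY header) before wiring it into fluentd. The helper below is a sketch of my own (dd_key_check_cmd is not from the Datadog docs); it only prints the curl command, so nothing is sent until you run it yourself:

```shell
# Hypothetical helper: print (not execute) the curl command that validates a
# Datadog API key against Datadog's public /api/v1/validate endpoint.
dd_key_check_cmd() {
  # $1 = API key, $2 = Datadog API host (defaults to datadoghq.com)
  printf 'curl -s -H "DD-API-KEY: %s" "https://api.%s/api/v1/validate"\n' "$1" "${2:-datadoghq.com}"
}

dd_key_check_cmd "<your_api_key>"
```

A {"valid": true} response from the printed request confirms the key works before you continue with the sections below.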

JFrog Metrics Setup

Metrics collection is disabled by default in Artifactory. For non-Kubernetes installations, enable metrics in Artifactory by making the following configuration changes to the Artifactory System YAML:

shared:
    metrics:
        enabled: true

artifactory:
    metrics:
        enabled: true

Once this configuration is done and the application is restarted, metrics will be available in Open Metrics format.

💡 Metrics are enabled by default in Xray.

💡 For Kubernetes-based installs, openMetrics collection is enabled in the helm install commands listed in the sections below.

Fluentd Installation

OS / Virtual Machine

Ensure the virtual machine (VM) has access to the Internet. We recommend installing through FluentD's native OS-based packages:

OS              Package Manager       Link
CentOS/RHEL     Linux - RPM (YUM)     https://docs.fluentd.org/installation/install-by-rpm
Debian/Ubuntu   Linux - APT           https://docs.fluentd.org/installation/install-by-deb
MacOS/Darwin    MacOS - DMG           https://docs.fluentd.org/installation/install-by-dmg
Windows         Windows - MSI         https://docs.fluentd.org/installation/install-by-msi
Gem Install     MacOS & Linux - Gem   https://docs.fluentd.org/installation/install-by-gem
Gem based install

For a Gem-based install, the Ruby interpreter must be set up first. You can install it as follows:

  1. Install Ruby Version Manager (RVM) outlined in the RVM documentation.

  2. After the RVM installation is complete, execute the command 'rvm -v' to verify.

  3. Install Ruby v3.3.0 or above with the command rvm install <ver_num>, (for example, rvm install 3.3.0).

  4. Verify the installation: execute ruby -v, gem -v, and bundler -v to ensure all the components are intact.

  5. Install the FluentD gem with the command gem install fluentd.

  6. After FluentD is successfully installed, install the following plugins.

gem install fluent-plugin-concat
gem install fluent-plugin-datadog
gem install fluent-plugin-jfrog-siem
gem install fluent-plugin-jfrog-metrics
gem install fluent-plugin-jfrog-send-metrics
Configure Fluentd

We rely on environment variables to stream log files to your observability dashboards. Ensure that you fill in the .env file with the correct values. You can download the .env file here.

  • JF_PRODUCT_DATA_INTERNAL: The environment variable JF_PRODUCT_DATA_INTERNAL must be defined to the correct location. For each JFrog service, you can find its active log files in the $JFROG_HOME/<product>/var/log directory
  • DATADOG_API_KEY: API Key from Datadog
  • DATADOG_API_HOST: Your DataDog host based on your DataDog Site Parameter from this list
  • JPD_URL: Artifactory JPD URL with the format http://<ip_address>
  • JPD_ADMIN_USERNAME: Artifactory username for authentication
  • JFROG_ADMIN_TOKEN: Artifactory Access Token for authentication
  • COMMON_JPD: This flag should be set as true only for non-Kubernetes installations or installations where the JPD base URL is the same to access both Artifactory and Xray (for example, https://sample_base_url/artifactory or https://sample_base_url/xray)
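As an illustration only (every value below is a placeholder, not a real host, key, or token), a filled-in jfrog.env might look like:

```shell
# Example jfrog.env -- all values are placeholders; substitute your own.
export JF_PRODUCT_DATA_INTERNAL=/var/opt/jfrog/artifactory
export DATADOG_API_KEY=0123456789abcdef0123456789abcdef
export DATADOG_API_HOST=datadoghq.com
export JPD_URL=http://10.0.0.10
export JPD_ADMIN_USERNAME=admin
export JFROG_ADMIN_TOKEN=placeholder_access_token
export COMMON_JPD=false
```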

Apply the .env file and run the fluentd wrapper with the following command; note that the argument points to the fluent.conf.* file configured previously:

source jfrog.env
./fluentd $JF_PRODUCT_DATA_INTERNAL/fluent.conf.<product_name>
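As an optional pre-flight step (this helper is my addition, not part of the integration), you can fail fast when one of the variables above was left unset:

```shell
# check_env: print "ok" if every named environment variable is non-empty,
# otherwise print the first missing variable and return non-zero.
check_env() {
  for v in "$@"; do
    if eval "[ -z \"\${$v}\" ]"; then
      echo "missing: $v"
      return 1
    fi
  done
  echo "ok"
}

DEMO_VAR=set
check_env DEMO_VAR   # prints: ok
```

For example, run check_env DATADOG_API_KEY DATADOG_API_HOST JPD_URL JPD_ADMIN_USERNAME JFROG_ADMIN_TOKEN after sourcing jfrog.env and before starting fluentd.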

Docker

To run FluentD as a Docker image that sends logs, violations, and metrics data to Datadog, execute the following commands on the host that runs Docker.

  1. Execute the docker version and docker ps commands to verify that the Docker installation is functional.

  2. If the version and process are listed successfully, build the intended Docker image for Datadog using the Dockerfile. You can download [this Dockerfile](https://raw.githubusercontent.com/jfrog/log-analytics-datadog/master/docker-build/Dockerfile) to any directory that has write permissions.

  3. Download the docker.env file needed to run JFrog/FluentD Docker images for Datadog. You can download [this docker.env](https://raw.githubusercontent.com/jfrog/log-analytics-datadog/master/docker-build/docker.env) to the directory where the Dockerfile was downloaded.

  4. Execute the following command to build the Docker image: docker build --build-arg SOURCE="JFRT" --build-arg TARGET="DATADOG" -t <image_name> . (note the trailing dot, which is the build context). For example:

     docker build --build-arg SOURCE="JFRT" --build-arg TARGET="DATADOG" -t jfrog/fluentd-datadog-rt .
  5. Fill out the necessary information in the docker.env file:

    • JF_PRODUCT_DATA_INTERNAL: The environment variable JF_PRODUCT_DATA_INTERNAL must be defined to the correct location. For each JFrog service you will find its active log files in the $JFROG_HOME/<product>/var/log directory
    • DATADOG_API_KEY: API Key from Datadog
    • DATADOG_API_HOST: Your DataDog host based on your DataDog Site Parameter from this list
    • JPD_URL: Artifactory JPD URL of the format http://<ip_address>
    • JPD_ADMIN_USERNAME: Artifactory username for authentication
    • JFROG_ADMIN_TOKEN: Artifactory Access Token for authentication
    • COMMON_JPD: This flag should be set as true only for non-Kubernetes installations or installations where the JPD base URL is the same to access both Artifactory and Xray (for example, https://sample_base_url/artifactory or https://sample_base_url/xray)
  6. Execute 'docker run -it --name jfrog-fluentd-datadog-rt -v <path_to_logs>:/var/opt/jfrog/artifactory --env-file docker.env <image_name>'

    The <path_to_logs> should be an absolute path where the JFrog Artifactory logs folder resides, such as /var/opt/jfrog/artifactory/var/logs on the Docker host for a Docker-based Artifactory installation. For example:

     docker run -it --name jfrog-fluentd-datadog-rt -v $JFROG_HOME/artifactory/var/:/var/opt/jfrog/artifactory --env-file docker.env jfrog/fluentd-datadog-rt

Kubernetes Deployment with Helm

The recommended installation method for Kubernetes is to utilize the helm chart with the associated values.yaml in this repo.

Product          Example Values File
Artifactory      helm/artifactory-values.yaml
Artifactory HA   helm/artifactory-ha-values.yaml
Xray             helm/xray-values.yaml

Warning

The old Docker registry partnership-pts-observability.jfrog.io, which contains older versions of this integration, is now deprecated. We'll keep the existing Docker images on this old registry until August 1st, 2024; after that date, the registry will no longer be available. Please helm upgrade your JFrog Kubernetes deployment to pull images from the new releases-pts-observability-fluentd.jfrog.io registry, as specified in the helm values files above, in order to avoid ImagePullBackOff errors in your deployment once the old registry is gone.

Add JFrog Helm repository:

helm repo add jfrog https://charts.jfrog.io
helm repo update

Throughout the example helm installations we'll use jfrog-dd as the namespace. You can use a different or existing namespace instead by setting the following environment variable:

export INST_NAMESPACE=jfrog-dd

If you don't have an existing namespace for the deployment, create it and set the kubectl context to use this namespace

kubectl create namespace $INST_NAMESPACE
kubectl config set-context --current --namespace=$INST_NAMESPACE

Generate masterKey and joinKey for the installation

export JOIN_KEY=$(openssl rand -hex 32)
export MASTER_KEY=$(openssl rand -hex 32)
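openssl rand -hex 32 emits a 64-character lowercase hex string. As an optional sanity check (the is_hex_key helper is my own sketch, not from the JFrog docs), you can confirm both keys have that shape before using them:

```shell
# is_hex_key: succeed only if the argument is a 64-char lowercase hex string,
# the shape produced by `openssl rand -hex 32`.
is_hex_key() { printf '%s' "$1" | grep -Eq '^[0-9a-f]{64}$'; }

{ is_hex_key "$MASTER_KEY" && is_hex_key "$JOIN_KEY" && echo "keys look valid"; } || echo "regenerate keys"
```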

Artifactory ⎈:

  1. Skip this step if you already have Artifactory installed. Else, install Artifactory using the command below

    helm upgrade --install artifactory jfrog/artifactory \
           --set artifactory.masterKey=$MASTER_KEY \
           --set artifactory.joinKey=$JOIN_KEY \
           --set artifactory.license.secret=artifactory-license \
           --set artifactory.license.dataKey=artifactory.cluster.license \
           --set artifactory.metrics.enabled=true \
           -n $INST_NAMESPACE --create-namespace

    💡 Metrics collection is disabled by default in Artifactory. Please make sure that you follow the above helm upgrade command to enable metrics in Artifactory by setting artifactory.metrics.enabled=true. For Artifactory versions <=7.86.x, please enable metrics by setting the flag artifactory.openMetrics.enabled=true

    Get the ip address of the newly deployed Artifactory:

    export SERVICE_IP=$(kubectl get svc -n $INST_NAMESPACE artifactory-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')   
    echo $SERVICE_IP
  2. Create a secret for JFrog's admin token (Access Token) using either of the following methods:

    kubectl create secret generic jfrog-admin-token --from-file=token=<path_to_token_file>
    
    OR
    
    kubectl create secret generic jfrog-admin-token --from-literal=token=<JFROG_ADMIN_TOKEN>
  3. For Artifactory installation, download the .env file from here. Fill in the jfrog_helm.env file with correct values.

    • JF_PRODUCT_DATA_INTERNAL: Helm based installs will already have this defined based upon the underlying Docker images. Not a required field for k8s installation
    • DATADOG_API_KEY: API Key from Datadog
    • DATADOG_API_HOST: Your DataDog host based on your DataDog Site Parameter from this list
    • JPD_URL: Artifactory JPD URL of the format http://<ip_address>
    • JPD_ADMIN_USERNAME: Artifactory username for authentication
    • COMMON_JPD: This flag should be set as true only for non-Kubernetes installations or installations where the JPD base URL is the same to access both Artifactory and Xray (for example, https://sample_base_url/artifactory or https://sample_base_url/xray)

    Apply the .env file using the command below

    source jfrog_helm.env
  4. The Postgres password is required to upgrade Artifactory. Run the following command to get the current password

    POSTGRES_PASSWORD=$(kubectl get secret artifactory-postgresql -n $INST_NAMESPACE -o jsonpath="{.data.postgresql-password}" | base64 --decode)
  5. Upgrade Artifactory installation using the command below

    helm upgrade --install artifactory jfrog/artifactory \
             --set artifactory.joinKey=$JOIN_KEY \
             --set databaseUpgradeReady=true --set postgresql.postgresqlPassword=$POSTGRES_PASSWORD --set nginx.service.ssloffload=true \
             --set datadog.api_key=$DATADOG_API_KEY \
             --set datadog.api_host=$DATADOG_API_HOST \
             --set jfrog.observability.jpd_url=$JPD_URL \
             --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
             --set jfrog.observability.common_jpd=$COMMON_JPD \
             -f helm/artifactory-values.yaml \
             -n $INST_NAMESPACE

Artifactory-HA ⎈:

  1. For HA installation, please create a license secret on your cluster prior to installation.

    kubectl create secret generic artifactory-license --from-file=artifactory.cluster.license=<path_to_license_file>
  2. Skip this step if you already have Artifactory installed. Else, install Artifactory using the command below

    helm upgrade --install artifactory-ha  jfrog/artifactory-ha \
       --set artifactory.masterKey=$MASTER_KEY \
       --set artifactory.joinKey=$JOIN_KEY \
       --set artifactory.license.secret=artifactory-license \
       --set artifactory.license.dataKey=artifactory.cluster.license \
       --set artifactory.metrics.enabled=true \
       -n $INST_NAMESPACE

    💡 Metrics collection is disabled by default in Artifactory-HA. Please make sure that you follow the above helm upgrade command to enable metrics in Artifactory by setting artifactory.metrics.enabled=true. For Artifactory versions <=7.86.x, please enable metrics by setting the flag artifactory.openMetrics.enabled=true

    Get the ip address of the newly deployed Artifactory:

    export SERVICE_IP=$(kubectl get svc -n $INST_NAMESPACE artifactory-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')   
    echo $SERVICE_IP
  3. Create a secret for JFrog's admin token (Access Token) using either of the following methods:

    kubectl create secret generic jfrog-admin-token --from-file=token=<path_to_token_file>
    
    OR
    
    kubectl create secret generic jfrog-admin-token --from-literal=token=<JFROG_ADMIN_TOKEN>
  4. Download the .env file from here. Fill in the jfrog_helm.env file with correct values.

    • JF_PRODUCT_DATA_INTERNAL: Helm based installs will already have this defined based upon the underlying Docker images. Not a required field for k8s installation
    • DATADOG_API_KEY: API Key from Datadog
    • DATADOG_API_HOST: Your DataDog host based on your DataDog Site Parameter from this list
    • JPD_URL: Artifactory JPD URL of the format http://<ip_address>
    • JPD_ADMIN_USERNAME: Artifactory username for authentication
    • COMMON_JPD: This flag should be set as true only for non-Kubernetes installations or installations where the JPD base URL is the same to access both Artifactory and Xray (for example, https://sample_base_url/artifactory or https://sample_base_url/xray)

    Apply the .env files and then run the helm command below

    source jfrog_helm.env
  5. The Postgres password is required to upgrade Artifactory. Run the following command to get the current password

    POSTGRES_PASSWORD=$(kubectl get secret artifactory-ha-postgresql -n $INST_NAMESPACE -o jsonpath="{.data.postgresql-password}" | base64 --decode)
  6. Upgrade Artifactory HA installation using the command below

    helm upgrade --install artifactory-ha  jfrog/artifactory-ha \
        --set artifactory.joinKey=$JOIN_KEY \
        --set artifactory.openMetrics.enabled=true \
        --set databaseUpgradeReady=true --set postgresql.postgresqlPassword=$POSTGRES_PASSWORD --set nginx.service.ssloffload=true \
        --set datadog.api_key=$DATADOG_API_KEY \
        --set datadog.api_host=$DATADOG_API_HOST \
        --set jfrog.observability.jpd_url=$JPD_URL \
        --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
        --set jfrog.observability.common_jpd=$COMMON_JPD \
        -f helm/artifactory-ha-values.yaml \
        -n $INST_NAMESPACE

Xray ⎈:

Create a secret for JFrog's admin token (Access Token) using either of the following methods, if it doesn't already exist:

kubectl create secret generic jfrog-admin-token --from-file=token=<path_to_token_file>

OR

kubectl create secret generic jfrog-admin-token --from-literal=token=<JFROG_ADMIN_TOKEN>

For Xray installation, download the .env file from here. Fill in the jfrog_helm.env file with correct values.

  • JF_PRODUCT_DATA_INTERNAL: Helm based installs will already have this defined based upon the underlying Docker images. Not a required field for k8s installation
  • DATADOG_API_KEY: API Key from Datadog
  • JPD_URL: Artifactory JPD URL of the format http://<ip_address>
  • JPD_ADMIN_USERNAME: Artifactory username for authentication
  • COMMON_JPD: This flag should be set as true only for non-Kubernetes installations or installations where the JPD base URL is the same to access both Artifactory and Xray (for example, https://sample_base_url/artifactory or https://sample_base_url/xray)

Apply the .env files and then run the helm command below

source jfrog_helm.env

Generate a master key for Xray:

export XRAY_MASTER_KEY=$(openssl rand -hex 32)

Use the same joinKey as in the Artifactory installation to allow the Xray node to connect to Artifactory successfully.

helm upgrade --install xray jfrog/xray --set xray.jfrogUrl=$JPD_URL \
       --set xray.masterKey=$XRAY_MASTER_KEY \
       --set xray.joinKey=$JOIN_KEY \
       --set datadog.api_key=$DATADOG_API_KEY \
       --set datadog.api_host=$DATADOG_API_HOST \
       --set jfrog.observability.jpd_url=$JPD_URL \
       --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
       --set jfrog.observability.common_jpd=$COMMON_JPD \
       -f helm/xray-values.yaml \
       -n $INST_NAMESPACE

Dashboards

JFrog Artifactory Dashboard

This dashboard is divided into three sections: Application, Audit, and Requests.

  • Application - This section tracks Log Volume (information about different log sources) and Artifactory Errors over time (bursts of application errors that may otherwise go undetected)
  • Audit - This section tracks audit logs, which help you determine who is accessing your Artifactory instance and from where. These can help you track potentially malicious requests or processes (such as CI jobs) using expired credentials.
  • Requests - This section tracks HTTP response codes and the top 10 IP addresses for uploads and downloads

JFrog Artifactory Metrics dashboard

This dashboard tracks Artifactory System Metrics, JVM Memory, Garbage Collection, Database Connections, and HTTP Connections metrics.

JFrog Xray Logs dashboard

This dashboard provides a summary of access, service, and traffic log volumes associated with Xray. Additionally, customers can track various HTTP response codes, HTTP 500 errors, and log errors for greater operational insight.

JFrog Xray Violations Dashboard

This dashboard provides an aggregated summary of all the license violations and security vulnerabilities found by Xray. Information is segmented by watch policies and rules. Trending information is provided on the type and severity of violations over time, as well as insights on the most frequently occurring CVEs and the top impacted artifacts and components.

JFrog Xray Metrics Dashboard

This dashboard tracks System Metrics as well as data metrics about Scanned Artifacts and Scanned Components.

Demo Requirements

Generating Data for Testing

The Partner Integration Test Framework can be used to generate data for metrics.

References

  • Datadog - Cloud monitoring as a service

log-analytics-datadog's People

Contributors

benharosh, betarelease, danielmkn, mahithab, peters95, turhsus


log-analytics-datadog's Issues

Mention of Metrics Enablement

Hi Team,

It might be beneficial to mention that metrics must be enabled in Artifactory for this plugin's metric outputs to function. There has been some confusion regarding this issue, and although it's mentioned in the docs here, it's not mentioned in this repository from what I can see.

Metrics are disabled by default in the helm charts for HA and non-HA. I believe explicitly mentioning JFrog's documentation on metrics enablement in the instructions of this plugin would be helpful, especially for helm installations.

Thanks!
Alex G (DSE - Atlanta)

Configuration not sending data to Datadog (Uncaught processing exception in datadog forwarder execution expired)

Hi,
I can see that logs are being tailed, but no information is sent to Datadog:
2022-01-28 14:46:32 +0000 [info]: #0 [artifactory_service_tail] following tail of /opt/jfrog/artifactory/var/log/artifactory-service.log
2022-01-28 14:46:32 +0000 [info]: #0 disable filter chain optimization because [Fluent::Plugin::RecordTransformerFilter, Fluent::Plugin::RecordTransformerFilter] uses #filter_stream method.
2022-01-28 14:46:32 +0000 [info]: #0 [access_service_tail] following tail of /opt/jfrog/artifactory/var/log/access-service.log
2022-01-28 14:46:32 +0000 [info]: #0 fluentd worker is now running worker=0
2022-01-28 14:48:33 +0000 [error]: #0 [datadog_agent_jfrog_artifactory] Uncaught processing exception in datadog forwarder execution expired

I have run /opt/jfrog/artifactory/var/fluentd-1.11.0-linux-x86_64/lib/ruby/bin/gem list --local | grep fluent
fluent-plugin-datadog (0.14.1, 0.12.1)
fluent-plugin-elasticsearch (4.2.2, 4.0.9)
fluent-plugin-jfrog-metrics (0.1.0)
fluent-plugin-jfrog-siem (2.0.1, 2.0.0, 1.0.0, 0.1.9, 0.1.7, 0.1.6, 0.1.5, 0.1.4)
fluent-plugin-record-modifier (2.1.0)
fluent-plugin-splunk-enterprise (0.10.2)
fluentd (1.11.0)

I can also see that this API URL to get system usage doesn't exist in Artifactory (version 7)

endpoint http://localhost:8081/artifactory/api/system/usage

"status" : 405,
"message" : "Method Not Allowed"

Impossible to supply admin token to the k8s sidecar container on first install

I can't find a way to supply the custom sidecar container with the admin token since the admin token can't be created until Artifactory has started once.

Why are you suggesting a pattern that can't be supported on a fresh kubernetes install?

To use the pattern currently suggested I need to start Artifactory once using helm, bootstrap an admin token that in turn can create a more permanent access token. Then when I have this new access token I need to redeploy the artifactory helm release with the custom sidecar container pattern. This is not a functional pattern for automatic deployments.

datadog eu region

it appears this config doesn't support datadog eu region when using helm yaml
i need some place to set
host 'http-intake.logs.datadoghq.eu'

frontend-request.logs are not parsing properly

I just found out that frontend-request.log entries are not being parsed correctly.

I am using this config for fluentd:
https://github.com/jfrog/log-analytics-datadog/blob/master/fluent.conf.rt

To reproduce the error you can use the following tool:

https://fluentular.herokuapp.com/

This is the same regexp for all request logs:

^(?<timestamp>[^ ]*)\|(?<trace_id>[^\|]*)\|(?<remote_address>[^\|]*)\|(?<username>[^\|]*)\|(?<request_method>[^\|]*)\|(?<request_url>[^\|]*)\|(?<return_status>[^\|]*)\|(?<response_content_length>[^\|]*)\|(?<request_content_length>[^\|]*)\|(?<request_duration>[^\|]*)\|(?<request_user_agent>.+)$

These are the logs for frontend-request.logs

2021-08-17T10:53:01.427Z|2ab11e927753a3b |127.0.0.1|anonymous|GET|/api/v1/system/version|200|20|-|1.654|Chrome|92.0.4515.131|Windows|10

2021-08-17T10:53:01.509Z|70362cd9f01a74b9|127.0.0.1|[email protected]|GET|/api/v1/ui/storagesummary/projects|200|132|-|23.228|Chrome|92.0.4515.131|Windows|10

The first one has a space in the trace_id, and it is parsed properly.
The second one is not being parsed properly.

For the rest of the request logs (artifactory, metadata, event, or access), entries are parsed properly.

export.json Unable to import dashboard with different layout type

I have added fluentd as a sidecar to my artifactory deployment, and can see artifactory logs in Datadog. When I try to import the dashboard JSON in export.json, I get an error message saying "Unable to import dashboard with different layout type".

Comparing the JSON from export.json to the JSON editor in Datadog, it looks like maybe the schema has changed? Is there a newer JSON dashboard available anywhere?

Adding Tags to the metrics

How do I add tags to the metric level in the fluentd config, I'm using the datadog integrations/dashboards and I want to be able to filter based on artifactory version.

<match jfrog.metrics.**>
  @type jfrog_send_metrics
  target_platform "DATADOG"
  apikey "#{ENV['DATADOG_API_KEY']}"
  url "https://api.datadoghq.eu/api/v2/series"
  include_tag_key true
  dd_source jfrog_platform
  dd_tags version:7.71.4,env:test
</match>

This doesn't seem to work; I'm assuming it's because @type jfrog_send_metrics doesn't recognise those values?

Can I have some advice on this?
Thanks

Fluentd does not parse stacktraces

Support ticket link: https://support.jfrog.com/s/tickets/50069000042OVyu/artifactory-fluentd-sidecar-not-parsing-stacktraces-and-sending-them-to-datadog

We have fluentd sidecars set up on our deployment in kubernetes. When artifactory has an error and outputs a stacktrace, these are not properly parsed by fluentd and sent to datadog.

2022-03-10 16:02:25 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data 'org.artifactory.security.props.auth.BadPropsAuthException: Bad authentication Key'" location=nil tag="jfrog.rt.artifactory.service" time=2022-03-10 16:02:25.612223493 +0000 record={"message"=>"org.artifactory.security.props.auth.BadPropsAuthException: Bad authentication Key"}
2022-03-10 16:02:25 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '\tat org.artifactory.security.db.apikey.PropsAuthenticationProvider.authenticate(PropsAuthenticationProvider.java:96)'" location=nil tag="jfrog.rt.artifactory.service" time=2022-03-10 16:02:25.612224743 +0000 record={"message"=>"\tat org.artifactory.security.db.apikey.PropsAuthenticationProvider.authenticate(PropsAuthenticationProvider.java:96)"}
2022-03-10 16:02:25 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '\tat org.artifactory.security.PasswordDecryptingManager.authenticate(PasswordDecryptingManager.java:136)'" location=nil tag="jfrog.rt.artifactory.service" time=2022-03-10 16:02:25.612225643 +0000 record={"message"=>"\tat org.artifactory.security.PasswordDecryptingManager.authenticate(PasswordDecryptingManager.java:136)"}
2022-03-10 16:02:25 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '\tat org.artifactory.rest.resource.security.AuthDelegationHandler.authenticateCredentials(AuthDelegationHandler.java:116)'" location=nil tag="jfrog.rt.artifactory.service" time=2022-03-10 16:02:25.612226840 +0000 record={"message"=>"\tat org.artifactory.rest.resource.security.AuthDelegationHandler.authenticateCredentials(AuthDelegationHandler.java:116)"}
2022-03-10 16:02:25 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '\tat org.artifactory.rest.resource.security.AuthDelegationHandler.handleBasicAuth(AuthDelegationHandler.java:99)'" location=nil tag="jfrog.rt.artifactory.service" time=2022-03-10 16:02:25.612227833 +0000 record={"message"=>"\tat org.artifactory.rest.resource.security.AuthDelegationHandler.handleBasicAuth(AuthDelegationHandler.java:99)"}
2022-03-10 16:02:25 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '\tat org.artifactory.rest.resource.security.AuthDelegationHandler.handleRequest(AuthDelegationHandler.java:69)'" location=nil tag="jfrog.rt.artifactory.service" time=2022-03-10 16:02:25.612228747 +0000 record={"message"=>"\tat org.artifactory.rest.resource.security.AuthDelegationHandler.handleRequest(AuthDelegationHandler.java:69)"}
2022-03-10 16:02:25 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '\tat org.artifactory.addon.sso.openid.OpenIdGatewayResource.auth(OpenIdGatewayResource.java:160)'" location=nil tag="jfrog.rt.artifactory.service" time=2022-03-10 16:02:25.612229745 +0000 record={"message"=>"\tat org.artifactory.addon.sso.openid.OpenIdGatewayResource.auth(OpenIdGatewayResource.java:160)"}
2022-03-10 16:02:25 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)'" location=nil tag="jfrog.rt.artifactory.service" time=2022-03-10 16:02:25.612230439 +0000 record={"message"=>"\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)"}
2022-03-10 16:02:25 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)'" location=nil tag="jfrog.rt.artifactory.service" time=2022-03-10 16:02:25.612231101 +0000 record={"message"=>"\tat java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)"}
2022-03-10 16:02:25 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)'" location=nil tag="jfrog.rt.artifactory.service" time=2022-03-10 16:02:25.612231835 +0000 record={"message"=>"\tat java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)"}
2022-03-10 16:02:25 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '\tat java.base/java.lang.reflect.Method.invoke(Method.java:566)'" location=nil tag="jfrog.rt.artifactory.service" time=2022-03-10 16:02:25.612232568 +0000 record={"message"=>"\tat java.base/java.lang.reflect.Method.invoke(Method.java:566)"}

From the ticket I learned that there is an internal JIRA with no ETA. I wanted to open this for other community members that may be looking for this as well.

This makes us not trust the logging platform, as not everything is being sent to Datadog.

Issue with documented artifactory-ha-values.yaml file

Using this line as is:

image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }}

causes the following error:

create Pod artifactory-ha-artifactory-ha-primary-1 in StatefulSet artifactory-ha-artifactory-ha-primary failed error: Pod "artifactory-ha-artifactory-ha-primary-1" is invalid: spec.initContainers[0].image: Required value

For the artifactory-ha chart it should be:
image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }}

Can't use #{ENV['FOO_BAR']} expressions in @jfrog_send_metrics ddtags

The ddtags array doesn't support ruby expressions in the strings.

In most places you can use #{} to include ruby code, but in the ddtags array there is no such support.

config

      <match jfrog.metrics.**>
        @type jfrog_send_metrics
        target_platform "DATADOG"
        apikey "#{ENV['DATADOG_API_KEY']}"
        url "https://api.#{ENV['DATADOG_API_HOST']}/api/v2/series"
        ddtags [
          "env:#{ENV['JPD_ENV']}",
          "hostname:#{ENV['JPD_HOSTNAME']}",
          "version:#{ENV['JPD_VERSION']}"
        ]
      </match>

fluentd logs

Additional tags to be added to metrics are
env:#{ENV['JPD_ENV']}
hostname:#{ENV['JPD_HOSTNAME']}
version:#{ENV['JPD_VERSION']}
Sending received metrics data

In datadog the values become

env:_env_jpd_env
hostname:_env_jpd_hostname
version:_env_jpd_version

DD tags should be able to use dynamic values.
