
log-analytics-splunk's Introduction

SPLUNK

Versions Supported

This integration was last tested with Artifactory 7.71.11 and Xray 3.88.12.

Table of Contents

Note! You must follow the order of the steps throughout the Splunk configuration

  1. Splunk Setup
  2. JFrog Metrics Setup
  3. Fluentd Installation
  4. Dashboards
  5. Splunk Demo
  6. References

Splunk Setup

Splunkbase App

Install the JFrog Log Analytics Platform app from Splunkbase here!

1. Download the file from Splunkbase
2. Open the Splunk web console as administrator
3. From the homepage, click the settings wheel at the top right of the Apps section
4. Click "Install app from file"
5. Select the file downloaded from Splunkbase on your computer
6. Check "Upgrade app" if an older version is already installed
7. Click "Upload"

Restart Splunk after installing the app.

1. Open the Splunk web console as administrator
2. Click Settings, then Server Controls
3. Click Restart

Log in to Splunk after the restart completes.

Confirm the installed version is the latest available on Splunkbase.

Configure Splunk

Our integration uses the Splunk HTTP Event Collector (HEC) to send data to Splunk.

You will need to configure the HEC to accept data (enabled) and create new tokens. The steps are below.

Create index jfrog_splunk

1. Open Splunk web console as administrator
2. Click on "Settings" and select "Indexes" from the dropdown
3. Click on "New Index"
4. Enter Index name as jfrog_splunk
5. Click "Save"

Create index jfrog_splunk_metrics

1. Open Splunk web console as administrator
2. Click on "Settings" and select "Indexes" from the dropdown
3. Click on "New Index"
4. Enter Index name as jfrog_splunk_metrics
5. Select Index Data Type as Metrics
6. Click "Save"

Configure new HEC token to receive Logs

1. Open Splunk web console as administrator
2. Click on "Settings" and select "Data inputs" from the dropdown
3. Click on "HTTP Event Collector"
4. Click on "New Token"
5. Enter a "Name" in the textbox
6. (Optional) Enter a "Description" in the textbox
7. Click on the green "Next" button
8. Add the "jfrog_splunk" index to store the JFrog platform log data
9. Click on the green "Review" button
10. If everything looks correct, click the green "Done" button
11. Save the generated token value

Configure new HEC token to receive Metrics

1. Open Splunk web console as administrator
2. Click on "Settings" and select "Data inputs" from the dropdown
3. Click on "HTTP Event Collector"
4. Click on "New Token"
5. Enter a "Name" in the textbox
6. (Optional) Enter a "Description" in the textbox
7. Click on the green "Next" button
8. Add the "jfrog_splunk_metrics" index to store the JFrog platform metrics data
9. Click on the green "Review" button
10. If everything looks correct, click the green "Done" button
11. Save the generated token value
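With both tokens saved, it can help to smoke-test the HEC before wiring up Fluentd. The sketch below uses hypothetical placeholder values for the host, port and token; it only assembles the collector URL, and the commented curl shows how you might verify that Splunk accepts an event into the jfrog_splunk index.

```shell
# Hypothetical placeholder values -- substitute your own Splunk details
SPLUNK_HEC_HOST="splunk.example.com"
SPLUNK_HEC_PORT="8088"          # Splunk's default HEC port
SPLUNK_HEC_TOKEN="00000000-0000-0000-0000-000000000000"

# The HEC event endpoint that all events are posted to
HEC_URL="https://${SPLUNK_HEC_HOST}:${SPLUNK_HEC_PORT}/services/collector/event"
echo "${HEC_URL}"

# To verify the token (requires network access to your Splunk instance):
#   curl -k "${HEC_URL}" \
#     -H "Authorization: Splunk ${SPLUNK_HEC_TOKEN}" \
#     -d '{"event": "hec smoke test", "index": "jfrog_splunk"}'
# A valid token and index should return {"text":"Success","code":0}
```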

JFrog Metrics Setup

To enable metrics in Artifactory, make the following configuration changes to the Artifactory System YAML:

artifactory:
    metrics:
        enabled: true
    openMetrics:
        enabled: true

Once this configuration is in place and the application is restarted, metrics will be available in Open Metrics format.
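As a quick check after the restart, you can fetch the metrics endpoint directly. The sketch below assumes Artifactory exposes metrics under /artifactory/api/v1/metrics and uses a hypothetical base URL and credentials; it only assembles the URL, with the actual fetch left as a commented curl.

```shell
JPD_URL="http://artifactory.example.com"   # hypothetical JPD base URL
METRICS_URL="${JPD_URL}/artifactory/api/v1/metrics"
echo "${METRICS_URL}"

# Fetch the metrics with an admin user and access token (placeholders):
#   curl -u admin:<JFROG_ADMIN_TOKEN> "${METRICS_URL}"
# The response is plain-text Open Metrics; metric names vary by version.
```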

Metrics are enabled by default in Xray. For Kubernetes-based installs, openMetrics is enabled by the helm install commands listed below.

Fluentd Installation

OS / Virtual Machine

Ensure you have Internet access from the VM. The recommended install is through Fluentd's native OS package installs:

OS              Package Manager       Link
CentOS/RHEL     Linux - RPM (YUM)     https://docs.fluentd.org/installation/install-by-rpm
Debian/Ubuntu   Linux - APT           https://docs.fluentd.org/installation/install-by-deb
MacOS/Darwin    MacOS - DMG           https://docs.fluentd.org/installation/install-by-dmg
Windows         Windows - MSI         https://docs.fluentd.org/installation/install-by-msi
Gem Install**   MacOS & Linux - Gem   https://docs.fluentd.org/installation/install-by-gem

** For a Gem-based install, a Ruby interpreter has to be set up first; the following is the recommended process to install Ruby

1. Install Ruby Version Manager (RVM) as described in https://rvm.io/rvm/install#installation-explained; be sure to follow all the onscreen instructions to complete the RVM installation
	* For installation across users, a sudo-based install is recommended, as described in https://rvm.io/support/troubleshooting#sudo

2. Once the RVM installation is complete, verify it by executing the command 'rvm -v'

3. Install Ruby v2.7.0 or above by executing the command 'rvm install <ver_num>', e.g. 'rvm install 2.7.5'

4. Verify the Ruby installation: execute 'ruby -v', 'gem -v' and 'bundler -v' to ensure all the components are intact

5. Once Ruby and Gems are installed, install Fluentd:

	'gem install fluentd'

After Fluentd is successfully installed, install the following required plugins:

gem install fluent-plugin-concat
gem install fluent-plugin-splunk-hec
gem install fluent-plugin-jfrog-siem
gem install fluent-plugin-jfrog-metrics

Configure Fluentd

We rely heavily on environment variables so that the correct log files are streamed to your observability dashboards. Ensure that you fill in the .env file with correct values. Download the .env file from here

  • JF_PRODUCT_DATA_INTERNAL: The environment variable JF_PRODUCT_DATA_INTERNAL must be set to the correct location. For each JFrog service, you will find its active log files in the $JFROG_HOME/<product>/var/log directory
  • SPLUNK_COM_PROTOCOL: HTTP Scheme, http or https
  • SPLUNK_HEC_HOST: Splunk Instance URL
  • SPLUNK_HEC_PORT: Splunk HEC configured port
  • SPLUNK_HEC_TOKEN: Splunk HEC Token for sending logs to Splunk
  • SPLUNK_METRICS_HEC_TOKEN: Splunk HEC Token for sending metrics to Splunk
  • SPLUNK_INSECURE_SSL: false for test environments only or if http scheme.
  • JPD_URL: Artifactory JPD URL of the format http://<ip_address>
  • JPD_ADMIN_USERNAME: Artifactory username for authentication
  • JFROG_ADMIN_TOKEN: Artifactory Access Token for authentication
  • COMMON_JPD: Set this flag to true only for non-Kubernetes installations, or installations where the same JPD base URL is used to access both Artifactory and Xray (ex: https://sample_base_url/artifactory or https://sample_base_url/xray)

Apply the .env file and then run the fluentd wrapper with one argument pointing to the configured fluent.conf.* file.

source jfrog.env
./fluentd $JF_PRODUCT_DATA_INTERNAL/fluent.conf.<product_name>
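The jfrog.env file sourced above is simply a set of shell exports, one per variable listed. A minimal sketch, where every value is a hypothetical placeholder:

```shell
# jfrog.env -- all values below are hypothetical placeholders
export JF_PRODUCT_DATA_INTERNAL="/var/opt/jfrog/artifactory/var"
export SPLUNK_COM_PROTOCOL="https"
export SPLUNK_HEC_HOST="splunk.example.com"
export SPLUNK_HEC_PORT="8088"
export SPLUNK_HEC_TOKEN="00000000-0000-0000-0000-000000000000"
export SPLUNK_METRICS_HEC_TOKEN="11111111-1111-1111-1111-111111111111"
export SPLUNK_INSECURE_SSL="false"
export JPD_URL="http://10.0.0.10"
export JPD_ADMIN_USERNAME="admin"
export JFROG_ADMIN_TOKEN="replace-with-a-real-access-token"
export COMMON_JPD="false"
```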

Docker

Note! These steps were not tested to work out of the box on macOS.

To run fluentd as a Docker image that sends the logs, violations and metrics data to Splunk, the following commands need to be executed on the host that runs Docker.

  1. Check that the Docker installation is functional: execute 'docker version' and 'docker ps'.

  2. Once the version and processes are listed successfully, build the intended Docker image for Splunk using the Dockerfile:

    • Download the Dockerfile from here to any directory which has write permissions.

  3. Download the docker.env file needed to run the JFrog/Fluentd Docker images for Splunk:

    • Download docker.env from here to the directory where the Dockerfile was downloaded.

For Splunk as the observability platform, execute these commands to set up the Docker container running the fluentd installation:

1. Execute 'docker build --build-arg SOURCE="JFRT" --build-arg TARGET="SPLUNK" -t <image_name> .'

    Command example

    'docker build --build-arg SOURCE="JFRT" --build-arg TARGET="SPLUNK" -t jfrog/fluentd-splunk-rt .'

    The above command builds the Docker image.

2. Fill the necessary information in the docker.env file

    JF_PRODUCT_DATA_INTERNAL: The environment variable JF_PRODUCT_DATA_INTERNAL must be set to the correct location. For each JFrog service, you will find its active log files in the `$JFROG_HOME/<product>/var/log` directory
    SPLUNK_COM_PROTOCOL: HTTP Scheme, http or https
    SPLUNK_HEC_HOST: Splunk Instance URL
    SPLUNK_HEC_PORT: Splunk HEC configured port
    SPLUNK_HEC_TOKEN: Splunk HEC Token for sending logs to Splunk
    SPLUNK_METRICS_HEC_TOKEN: Splunk HEC Token for sending metrics to Splunk
    SPLUNK_INSECURE_SSL: false for test environments only or if http scheme
    JPD_URL: Artifactory JPD URL of the format `http://<ip_address>`
    JPD_ADMIN_USERNAME: Artifactory username for authentication
    JFROG_ADMIN_TOKEN: Artifactory [Access Token](https://jfrog.com/help/r/how-to-generate-an-access-token-video/artifactory-creating-access-tokens-in-artifactory) for authentication
    COMMON_JPD: Set this flag to true only for non-Kubernetes installations, or installations where the same JPD base URL is used to access both Artifactory and Xray (ex: https://sample_base_url/artifactory or https://sample_base_url/xray)

3. Execute 'docker run -it --name jfrog-fluentd-splunk-rt -v <path_to_logs>:/var/opt/jfrog/artifactory --env-file docker.env <image_name>' 

    The <path_to_logs> should be the absolute path where the JFrog Artifactory logs folder resides; for a Docker-based Artifactory installation, e.g. /var/opt/jfrog/artifactory/var/logs on the Docker host.

    Command example

    'docker run -it --name jfrog-fluentd-splunk-rt -v $JFROG_HOME/artifactory/var/:/var/opt/jfrog/artifactory --env-file docker.env jfrog/fluentd-splunk-rt'
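Since a wrong bind-mount path is the most common reason the container sees no logs, a small pre-flight check before the docker run can save debugging time. This is only a sketch; the default host path and the expectation of a log subdirectory are assumptions about a typical install.

```shell
# Host directory holding Artifactory's var tree (hypothetical default)
HOST_LOG_DIR="${JFROG_HOME:-/opt/jfrog}/artifactory/var"
# Path the fluent.conf files expect inside the container
CONTAINER_LOG_DIR="/var/opt/jfrog/artifactory"

# Warn early if the host path looks wrong, before starting the container
if [ ! -d "${HOST_LOG_DIR}/log" ]; then
  echo "warning: ${HOST_LOG_DIR}/log not found; check JFROG_HOME" >&2
fi

# The -v argument you would pass to docker run
echo "-v ${HOST_LOG_DIR}:${CONTAINER_LOG_DIR}"
```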


Kubernetes Deployment with Helm

Recommended installation for Kubernetes is to utilize the helm chart with the associated values.yaml in this repo.

Product Example Values File
Artifactory helm/artifactory-values.yaml
Artifactory HA helm/artifactory-ha-values.yaml
Xray helm/xray-values.yaml

Warning

The old docker registry partnership-pts-observability.jfrog.io, which contains older versions of this integration, is now deprecated. We'll keep the existing Docker images on this old registry until August 1st, 2024; after that date, the registry will no longer be available. Please helm upgrade your JFrog Kubernetes deployment to pull images from the new releases-pts-observability-fluentd.jfrog.io registry, as specified in the above helm values files, in order to avoid ImagePullBackOff errors in your deployment once the old registry is gone.

Add JFrog Helm repository:

helm repo add jfrog https://charts.jfrog.io
helm repo update

Replace the placeholders with your masterKey and joinKey. To generate each of them, use the command openssl rand -hex 32
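For example, each key generated this way is a 64-character hex string (32 random bytes, hex-encoded):

```shell
# Generate the two keys; openssl prints 2*32 = 64 hex characters each
MASTER_KEY=$(openssl rand -hex 32)
JOIN_KEY=$(openssl rand -hex 32)
echo "${#MASTER_KEY} ${#JOIN_KEY}"   # prints: 64 64
```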

Artifactory ⎈:

  1. Skip this step if you already have Artifactory installed. Else, install Artifactory using the command below

    helm upgrade --install artifactory  jfrog/artifactory \
           --set artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF \
           --set artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE \
           --set artifactory.license.secret=artifactory-license \
           --set artifactory.license.dataKey=artifactory.cluster.license \
           --set artifactory.metrics.enabled=true \
           --set artifactory.openMetrics.enabled=true
  2. Create a secret for JFrog's admin token - Access Token using any of the following methods

    kubectl create secret generic jfrog-admin-token --from-file=token=<path_to_token_file>
    
    OR
    
    kubectl create secret generic jfrog-admin-token --from-literal=token=<JFROG_ADMIN_TOKEN>
  3. For Artifactory installation, download the .env file from here. Fill in the jfrog_helm.env file with correct values.

    • SPLUNK_COM_PROTOCOL: HTTP Scheme, http or https
    • SPLUNK_HEC_HOST: Splunk Instance URL
    • SPLUNK_HEC_PORT: Splunk HEC configured port
    • SPLUNK_HEC_TOKEN: Splunk HEC Token for sending logs to Splunk
    • SPLUNK_METRICS_HEC_TOKEN: Splunk HEC Token for sending metrics to Splunk
    • SPLUNK_INSECURE_SSL: false for test environments only or if http scheme
    • JPD_URL: Artifactory JPD URL of the format http://<ip_address>
    • JPD_ADMIN_USERNAME: Artifactory username for authentication
    • COMMON_JPD: Set this flag to true only for non-Kubernetes installations, or installations where the same JPD base URL is used to access both Artifactory and Xray (ex: https://sample_base_url/artifactory or https://sample_base_url/xray)

    Apply the .env file using the command below

    source jfrog_helm.env
  4. Postgres password is required to upgrade Artifactory. Run the following command to get the current password

    POSTGRES_PASSWORD=$(kubectl get secret artifactory-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
  5. Upgrade Artifactory installation using the command below

    helm upgrade --install artifactory jfrog/artifactory \
           --set artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF \
           --set artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE \
           --set artifactory.metrics.enabled=true --set artifactory.openMetrics.enabled=true \
           --set databaseUpgradeReady=true --set postgresql.postgresqlPassword=$POSTGRES_PASSWORD --set nginx.service.ssloffload=true \
           --set splunk.host=$SPLUNK_HEC_HOST \
           --set splunk.port=$SPLUNK_HEC_PORT \
           --set splunk.logs_token=$SPLUNK_HEC_TOKEN \
           --set splunk.metrics_token=$SPLUNK_METRICS_HEC_TOKEN \
           --set splunk.com_protocol=$SPLUNK_COM_PROTOCOL \
           --set splunk.insecure_ssl=$SPLUNK_INSECURE_SSL \
           --set jfrog.observability.jpd_url=$JPD_URL \
           --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
           --set jfrog.observability.common_jpd=$COMMON_JPD \
           -f helm/artifactory-values.yaml

Artifactory-HA ⎈:

  1. For HA installation, please create a license secret on your cluster prior to installation.

    kubectl create secret generic artifactory-license --from-file=artifactory.cluster.license=<path_to_license_file>
  2. Skip this step if you already have Artifactory installed. Else, install Artifactory using the command below

    helm upgrade --install artifactory-ha  jfrog/artifactory-ha \
       --set artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF \
       --set artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE \
       --set artifactory.license.secret=artifactory-license \
       --set artifactory.license.dataKey=artifactory.cluster.license \
       --set artifactory.metrics.enabled=true \
       --set artifactory.openMetrics.enabled=true
  3. Create a secret for JFrog's admin token - Access Token using any of the following methods

    kubectl create secret generic jfrog-admin-token --from-file=token=<path_to_token_file>
    
    OR
    
    kubectl create secret generic jfrog-admin-token --from-literal=token=<JFROG_ADMIN_TOKEN>
  4. Download the .env file from here. Fill in the jfrog_helm.env file with correct values.

    • SPLUNK_COM_PROTOCOL: HTTP Scheme, http or https
    • SPLUNK_HEC_HOST: Splunk Instance URL
    • SPLUNK_HEC_PORT: Splunk HEC configured port
    • SPLUNK_HEC_TOKEN: Splunk HEC Token for sending logs to Splunk
    • SPLUNK_METRICS_HEC_TOKEN: Splunk HEC Token for sending metrics to Splunk
    • SPLUNK_INSECURE_SSL: false for test environments only or if http scheme
    • JPD_URL: Artifactory JPD URL of the format http://<ip_address>
    • JPD_ADMIN_USERNAME: Artifactory username for authentication
    • COMMON_JPD: Set this flag to true only for non-Kubernetes installations, or installations where the same JPD base URL is used to access both Artifactory and Xray (ex: https://sample_base_url/artifactory or https://sample_base_url/xray)

    Apply the .env file and then run the helm command below

    source jfrog_helm.env
  5. Postgres password is required to upgrade Artifactory. Run the following command to get the current password

    POSTGRES_PASSWORD=$(kubectl get secret artifactory-ha-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode)
  6. Upgrade Artifactory HA installation using the command below

    helm upgrade --install artifactory-ha  jfrog/artifactory-ha \
        --set artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF \
        --set artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE --set artifactory.replicaCount=0 \
        --set artifactory.metrics.enabled=true --set artifactory.openMetrics.enabled=true \
        --set databaseUpgradeReady=true --set postgresql.postgresqlPassword=$POSTGRES_PASSWORD --set nginx.service.ssloffload=true \
        --set splunk.host=$SPLUNK_HEC_HOST \
        --set splunk.port=$SPLUNK_HEC_PORT \
        --set splunk.logs_token=$SPLUNK_HEC_TOKEN \
        --set splunk.metrics_token=$SPLUNK_METRICS_HEC_TOKEN \
        --set splunk.com_protocol=$SPLUNK_COM_PROTOCOL \
        --set splunk.insecure_ssl=$SPLUNK_INSECURE_SSL \
        --set jfrog.observability.jpd_url=$JPD_URL \
        --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
        --set jfrog.observability.common_jpd=$COMMON_JPD \
        -f helm/artifactory-ha-values.yaml
    

Xray ⎈:

Create a secret for JFrog's admin token - Access Token using any of the following methods if it doesn't exist

kubectl create secret generic jfrog-admin-token --from-file=token=<path_to_token_file>

OR

kubectl create secret generic jfrog-admin-token --from-literal=token=<JFROG_ADMIN_TOKEN>

For Xray installation, download the .env file from here. Fill in the jfrog_helm.env file with correct values.

  • SPLUNK_COM_PROTOCOL: HTTP Scheme, http or https
  • SPLUNK_HEC_HOST: Splunk Instance URL
  • SPLUNK_HEC_PORT: Splunk HEC configured port
  • SPLUNK_HEC_TOKEN: Splunk HEC Token for sending logs to Splunk
  • SPLUNK_METRICS_HEC_TOKEN: Splunk HEC Token for sending metrics to Splunk
  • SPLUNK_INSECURE_SSL: false for test environments only or if http scheme
  • JPD_URL: Artifactory JPD URL of the format http://<ip_address>
  • JPD_ADMIN_USERNAME: Artifactory username for authentication
  • JFROG_ADMIN_TOKEN: For security reasons, this value will be pulled from the secret jfrog-admin-token created in the step above
  • COMMON_JPD: Set this flag to true only for non-Kubernetes installations, or installations where the same JPD base URL is used to access both Artifactory and Xray (ex: https://sample_base_url/artifactory or https://sample_base_url/xray)

Apply the .env file and then run the helm command below

source jfrog_helm.env

Use the same joinKey as you used in Artifactory installation to allow Xray node to successfully connect to Artifactory.

helm upgrade --install xray jfrog/xray --set xray.jfrogUrl=http://my-artifactory-nginx-url \
       --set xray.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF \
       --set xray.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE \
       --set splunk.host=$SPLUNK_HEC_HOST \
       --set splunk.port=$SPLUNK_HEC_PORT \
       --set splunk.logs_token=$SPLUNK_HEC_TOKEN \
       --set splunk.metrics_token=$SPLUNK_METRICS_HEC_TOKEN \
       --set splunk.com_protocol=$SPLUNK_COM_PROTOCOL \
       --set splunk.insecure_ssl=$SPLUNK_INSECURE_SSL \
       --set jfrog.observability.jpd_url=$JPD_URL \
       --set jfrog.observability.username=$JPD_ADMIN_USERNAME \
       --set jfrog.observability.common_jpd=$COMMON_JPD \
       -f helm/xray-values.yaml

Dashboards

Artifactory dashboard

The JFrog Artifactory Dashboard is divided into multiple sections: Application, Audit, Requests, Docker, System Metrics, Heap Metrics and Connection Metrics

  • Application - This section tracks Log Volume (information about different log sources) and Artifactory Errors over time (bursts of application errors that may otherwise go undetected)
  • Audit - This section tracks audit logs, which help you determine who is accessing your Artifactory instance and from where. These can help you track potentially malicious requests or processes (such as CI jobs) using expired credentials.
  • Requests - This section tracks HTTP response codes, Top 10 IP addresses for uploads and downloads
  • Docker - To monitor Docker Hub pulls, users should have a Docker Hub account, either paid or free. Free accounts allow up to 200 pulls per 6-hour window. Various widgets have been added in the new Docker tab under Artifactory to help monitor your Docker Hub pulls. An alert is also available that, if enabled, lets you send emails or add outbound webhooks through configuration to be notified when you exceed the configurable threshold.
  • System Metrics - This section tracks CPU Usage, System Memory and Disk Usage metrics
  • Heap Metrics - This section tracks Heap Memory and Garbage Collection
  • Connection Metrics - This section tracks Database connections and HTTP Connections

Xray dashboard

The JFrog Xray Dashboard is divided into three sections: Logs, Violations and Metrics

  • Logs - This section provides a summary of access, service and traffic log volumes associated with Xray. Additionally, customers are also able to track various HTTP response codes, HTTP 500 errors, and log errors for greater operational insight
  • Violations - This section provides an aggregated summary of all the license violations and security vulnerabilities found by Xray. Information is segmented by watch policies and rules. Trending information is provided on the type and severity of violations over time, as well as insights on the most frequently occurring CVEs and the top impacted artifacts and components.
  • Metrics - This section tracks CPU usage, System Memory, Disk Usage, Heap Memory and Database Connections

CIM Compatibility

Log data from JFrog platform logs is translated to pre-defined Common Information Models (CIM) compatible with Splunk. This compatibility enables advanced features where users can search and access JFrog log data using data models. For example:

| datamodel Web Web search
| datamodel Change_Analysis All_Changes search
| datamodel Vulnerabilities Vulnerabilities search

Splunk Demo

To try out this integration, users can create a Splunk instance in Kubernetes with the correct ports open by applying the yaml file:

kubectl apply -f k8s/splunk.yaml

This will create a new Splunk instance that can be used as a demo target for JFrog logs, violations and metrics. Follow the setup steps listed above to see data in the dashboards.

References

  • Fluentd - Fluentd Logging Aggregator/Agent
  • Splunk - Splunk Logging Platform
  • Splunk HEC - Splunk HEC used to upload data into Splunk

log-analytics-splunk's People

Contributors: benharosh, betarelease, danielmkn, jefferyfry, mahithab, mritunjaykumar, peters95, turhsus, vasukinjfrog

log-analytics-splunk's Issues

Trouble rendering with Helm

I'm following the instructions on the README: https://github.com/jfrog/log-analytics-splunk#kubernetes-deployment-with-helm

I've saved this file as k8s/fluentd/artifactory.yml and just have my license and db config in a separate k8s/values.yml. The only thing I've changed in the artifactory.yml is the splunk parameters.

However when I run

helm template jfrog-platform --version 10.4.1 --namespace jfrog-platform  jfrog/jfrog-platform -f k8s/values.yml -f k8s/fluentd/artifactory.yml > chart.yaml

The rendered template doesn't include the sidecar or initContainer, that bit's just blank. What am I doing wrong?

eventtypes.conf does not make use of the settings in macros.conf

The first 3 stanzas within the default/eventtypes.conf use searches with a hardcoded index which means if the user/admin changes the index to something other than the default (jfrog_splunk) then the eventtypes and tags won't work.

I believe that the three instances of index="jfrog_splunk" should be changed to `default_index`

fluentd not detecting log rotation

I am seeing an issue where fluentd is not detecting when artifactory-request.log rotates. When I restart td-agent, it detects the rotation, restarts, and picks up the log again. However, it will not detect the rotation on its own. We are using the fluentd configs in this repo and have the default logback.xml file.

Xray Top Vulnerable Artifact Downloads panels don't work

  1. The searches that power the "Top Vulnerable Artifact Downloads by XX" panels seem to have a logical problem:
index="jfrog_splunk"  log_source="jfrog.rt.artifactory.request" return_status=200 [search index=xray_violations impacted_artifacts{}=* | stats count by impacted_artifacts{}  | rex field=impacted_artifacts{} "default\/(?<rex_repo_path>.*)" | return 500000 $rex_repo_path] | stats count(username) by request_url | rename request_url as impacted_artifact

stats count(username) by request_url should be stats count(request_url) by username, I think. This affects both the downloads-by-user and downloads-by-IP panels.

  1. For docker images, the nested search returns strings which never appear in artifact download request URLs

For images such as reponame/team/app:1.0, the subquery search returns strings like (team/app/1.0) OR (team/app/1.0/manifest.json)
Actual requests logged by artifactory during download look like: request_url: /api/docker/repo_name/v2/app/manifests/1.0.0

So the search returns no data even when vulnerable images have been downloaded.

  1. Several panels in the xray dashboard use searches containing term index="main". Since the documentation calls for the index used to be jfrog_splunk, these searches never return any data

log_source field always blank/null or missing

We are finding that our logs from Xray into Splunk always have a blank "log_source" field. This field is used in the JFrog Splunk App across several dashboards such as "Log Volume", so the log volume shows as zero. Please advise on a solution.

Error installing jfrog_siem plugin

I am running fluentd in the official image from dockerhub. I am using the fluent.conf.xray from this repo (with required modifications) and I get error starting fluentd:
[error]: config error file="/fluentd/etc/fluent.conf" error_class=Fluent::ConfigError error="Unknown input plugin 'jfrog_siem'. Run 'gem search -rd fluent-plugin to find plugins

Ok, just run the fluent-gem install command, blam!:

root@13eb89abbb26:/# fluent-gem install fluent-plugin-jfrog-siem
Fetching http-accept-1.7.0.gem
Fetching unf-0.1.4.gem
Fetching domain_name-0.5.20190701.gem
Fetching unf_ext-0.0.8.gem
Fetching mime-types-3.4.1.gem
Fetching netrc-0.11.0.gem
Fetching http-cookie-1.0.4.gem
Fetching mime-types-data-3.2022.0105.gem
Fetching fluent-plugin-jfrog-siem-2.0.1.gem
Fetching rest-client-2.1.0.gem
Fetching concurrent-ruby-edge-0.6.0.gem
Successfully installed http-accept-1.7.0
Building native extensions. This could take a while...
ERROR:  Error installing fluent-plugin-jfrog-siem:
        ERROR: Failed to build gem native extension.

    current directory: /usr/local/bundle/gems/unf_ext-0.0.8/ext/unf_ext
/usr/local/bin/ruby -I /usr/local/lib/ruby/2.6.0 -r ./siteconf20220110-11-1sis4ah.rb extconf.rb
checking for -lstdc++... *** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of necessary
libraries and/or headers.  Check the mkmf.log file for more details.  You may
need configuration options.

Provided configuration options:
        --with-opt-dir
        --without-opt-dir
        --with-opt-include
        --without-opt-include=${opt-dir}/include
        --with-opt-lib
        --without-opt-lib=${opt-dir}/lib
        --with-make-prog
        --without-make-prog
        --srcdir=.
        --curdir
        --ruby=/usr/local/bin/$(RUBY_BASE_NAME)
        --with-static-libstdc++
        --without-static-libstdc++
        --with-stdc++lib
        --without-stdc++lib
/usr/local/lib/ruby/2.6.0/mkmf.rb:467:in `try_do': The compiler failed to generate an executable file. (RuntimeError)
You have to install development tools first.
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:552:in `try_link0'
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:570:in `try_link'
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:789:in `try_func'
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:1016:in `block in have_library'
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:959:in `block in checking_for'
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:361:in `block (2 levels) in postpone'
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:331:in `open'
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:361:in `block in postpone'
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:331:in `open'
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:357:in `postpone'
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:958:in `checking_for'
        from /usr/local/lib/ruby/2.6.0/mkmf.rb:1011:in `have_library'
        from extconf.rb:6:in `<main>'

To see why this extension failed to compile, please check the mkmf.log which can be found here:

  /usr/local/bundle/extensions/x86_64-linux/2.6.0/unf_ext-0.0.8/mkmf.log

extconf failed, exit code 1

Gem files will remain installed in /usr/local/bundle/gems/unf_ext-0.0.8 for inspection.
Results logged to /usr/local/bundle/extensions/x86_64-linux/2.6.0/unf_ext-0.0.8/gem_make.out

I'm stuck here. The other required plugins weren't any issue: fluent-plugin-splunk-enterprise, fluent-plugin-splunk-hec

fluentd unable to find concat plugin

All 4 required plugins are installed, including concat:

fluent-gem install fluent-plugin-concat
Fetching fluent-plugin-concat-2.5.0.gem
Successfully installed fluent-plugin-concat-2.5.0
Parsing documentation for fluent-plugin-concat-2.5.0
Installing ri documentation for fluent-plugin-concat-2.5.0
Done installing documentation for fluent-plugin-concat after 0 seconds
1 gem installed

However, when starting fluentd, I am getting this error:

[error]: config error file="/etc/calyptia-fluentd/calyptia-fluentd.conf" error_class=Fluent::NotFoundPluginError error="Unknown filter plugin 'concat'. Run 'gem search -rd fluent-plugin' to find plugins"

I am stuck on this and can't seem to find a way to move forward.

Unable to find the Vulnerabilities report in Splunk.

Hi Team, we have used the same configuration for our Xray integration, and we see all the logs except the Vulnerabilities.
We would like to see the impacted artifact URLs and the CVSS scores of the artifacts in Splunk.
Please let us know what needs to be done to get those logs too.

Get owners from JFrog logs

I want to create a simple Splunk dashboard with all JFrog artifacts and the corresponding owner/creator of each artifact. The question is whether I can retrieve the owners (names) of artifacts from JFrog logs. I'm just curious if I'm looking in the right direction. I appreciate any advice. Thanks in advance!

fluent.conf.rt log parsing issue

in https://github.com/jfrog/log-analytics-splunk/blob/master/fluent.conf.rt line 111

expression ^(?<timestamp>[^ ]*)\|(?<trace_id>[^\|]*)\|(?<remote_address>[^\|]*)\|(?<username>[^\|]*)\|(?<request_method>[^\|]*)\|(?<request_url>[^\|]*)\|(?<return_status>[^\|]*)\|(?<response_content_length>[^\|]*)\|(?<request_content_length>[^\|]*)\|(?<request_duration>[^\|]*)\|(?<request_user_agent>.+)$

response_content_length comes first before request_content_length

As per https://www.jfrog.com/confluence/display/JFROG/Logging request log format looks like this:

Timestamp | Trace ID | Remote Address | Username | Request method | Request URL | Return Status | Request Content Length | Response Content Length | Request Duration | Request User Agent

Request Content Length comes first, which means request_content_length should come first as well in fluent.conf.rt

Fluentd parsing issues with example configuration

I've followed the examples on this site, specifically the one at https://github.com/jfrog/log-analytics-splunk/blob/master/helm/artifactory-ha-values.yaml, in order to get fluentd forwarding to Splunk. My setup is almost identical to that link, with two changes that I don't believe would have any impact here:

  • I want to be in control of the version of the fluent file rather than possibly getting a different file each time (i.e. a wget on https://raw.githubusercontent.com/jfrog/log-analytics-splunk/master/fluent.conf.rt) so I load the fluent.conf.rt as a volume via a ConfigMap rather than use a wget.
  • I'm not checking the HEC token into Git so instead I use an env variable from an existing Secret for the sed step.

Artifactory is running, fluentd is running, but my sidecar container is full of parsing errors:

2021-10-12 18:45:09 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="invalid time format: value = 2021-10-12T18:45:09Z, error_class = ArgumentError, error = invalid date or strptime format - `2021-10-12T18:45:09Z' `%Y-%m-%dT%H:%M:%S.%LZ'" location="/opt/bitnami/fluentd/gems/fluentd-1.13.2/lib/fluent/plugin/parser.rb:196:in `rescue in parse_time'" tag="jfrog.rt.artifactory.request" time=2021-10-12 18:45:09.001736865 +0000 record={"message"=>"2021-10-12T18:45:09Z|56c1a63d7e218fc3|127.0.0.1|jffe@000|POST|/api/auth/loginRelatedData|200|46|0|12|JFrog-Frontend/1.27.7"}
2021-10-12 16:22:37 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '2021-10-12T16:22:37.004Z [jffe ] [\e[34M[INFO ]\e[39M] [                ] [                              ] [main                ] - frontend (jffe) service initialization completed in 38.62 seconds. Listening on port: port 8070'" location=nil tag="jfrog.rt.frontend.service" time=2021-10-12 16:22:37.004951790 +0000 record={"message"=>"2021-10-12T16:22:37.004Z [jffe ] [\e[34M[INFO ]\e[39M] [                ] [                              ] [main                ] - frontend (jffe) service initialization completed in 38.62 seconds. Listening on port: port 8070"}
2021-10-12 16:22:36 +0000 [warn]: #0 dump an error event: error_class=Fluent::Plugin::Parser::ParserError error="pattern not matched with data '2021-10-12T16:22:36.183Z [jffe ] [\e[34M[INFO ]\e[39M] [                ] [                              ] [main                ] - artifactory was pinged successfully'" location=nil tag="jfrog.rt.frontend.service" time=2021-10-12 16:22:36.183468419 +0000 record={"message"=>"2021-10-12T16:22:36.183Z [jffe ] [\e[34M[INFO ]\e[39M] [                ] [                              ] [main                ] - artifactory was pinged successfully"}

I know the token is right: I've tested sending a single event from the container and it was delivered OK. It is definitely a parsing issue, and it makes me worry I may lose crucial logs.

Many thanks
Michael
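The first warning is a time-format mismatch: the configured format `%Y-%m-%dT%H:%M:%S.%LZ` demands fractional seconds (`%L` is Ruby's milliseconds directive), but the request-log timestamp `2021-10-12T18:45:09Z` has none. A minimal Python sketch of the same mismatch, with `%f` standing in for Ruby's `%L`:

```python
from datetime import datetime

ts = "2021-10-12T18:45:09Z"  # whole-second timestamp from the request log

# The strict format requires fractional seconds, so parsing fails:
try:
    datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ")
    parsed_strict = True
except ValueError:
    parsed_strict = False

# A format without the fractional part matches the actual log line:
parsed = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

print(parsed_strict)  # -> False: the strict format rejects the line
print(parsed)         # -> 2021-10-12 18:45:09
```

This is only a sketch of why the parser rejects those events, not the fix itself; the fix would be adjusting the `time_format` for that source in fluent.conf.rt so it matches what the log actually emits.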
