newrelic / entity-definitions

The definition files contained in this repository are mappings between the telemetry attributes New Relic ingests and the entities users can interact with. If you have telemetry from any source that is not supported out of the box, you can propose a mapping for it by opening a PR.

License: Apache License 2.0


entity-definitions's Introduction


Entity Definitions

This repository holds all the entity types that exist in New Relic and their configurations.

By proposing changes to this repository you can achieve the following:

  • Create a new entity type
  • Generate entities from a new source of data (telemetry, logs, etc.)
  • Change how an entity is represented in different experiences via golden metrics (New Relic Lookout, workloads, etc.) and summary metrics (entity explorer)
  • Modify the summary of an entity type
  • Modify the lifecycle of an entity and make it alertable (see Lifecycle for more information about this attribute)

Changelog

All notable changes are defined in the releases page.

Getting started

For newcomers, we recommend starting with the creating an entity type example, which walks through creating an entity type from scratch.

If you have experience with the repo and are looking for documentation on a specific section:

If you're interested in exploring experimental features, you can find them here:

Validation

Whenever there's a contribution via pull request, some validations are automatically executed to verify that the provided definition meets the basic requirements:

  • The definition files are not malformed, incorrect, or missing mandatory fields.
  • The identifier cannot be extracted from an attribute with the same name for two different domain-types unless conditions are set to differentiate them, such that the conditions of one entity are not a superset of the other's.
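
To illustrate the second rule, a synthesis definition can restrict identifier extraction with conditions. The snippet below is only a sketch: the domain, type, attribute names, and values are hypothetical, loosely following the shape of existing definition.yml files in this repo.

```yaml
# Hypothetical sketch: this type extracts its identifier from an attribute
# another type may also use, so a condition keeps their telemetry disjoint.
domain: EXT
type: ACME_ROUTER
synthesis:
  rules:
    - identifier: device_id      # same attribute another type could also use
      name: device_name
      conditions:
        # differentiating condition: only telemetry from this provider
        - attribute: provider
          value: acme-router
```

Without the provider condition, the telemetry matched by this rule could be a superset of another type's rule that also extracts device_id, and validation would reject the definition.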

You can execute the validations locally using our dockerized validator:

docker-compose run validate-definitions

Remember that you may need to rebuild the images to pick up validation changes if you have run this in the past.

docker-compose build validate-definitions

Read more about the current validations.

Testing

You can test that the synthesis rules in your entity definition match the expected telemetry and therefore generate the expected entities. To do this, you can add test data that simulates telemetry events. Whenever a contribution is made via pull request, the test data is checked against the synthesis rules, ensuring your changes behave as expected.

How to add testing data

  1. If it does not exist, create a folder named tests under your entity definition directory. If it already exists, skip this step.

e.g. definitions/ext-pihole/tests/

  2. Build one or more test files that represent the telemetry data that would synthesize entities of your domain and type. Each file must comply with the following:
  • The file name is the event name reported to New Relic, e.g. Log, CustomEvent

  • The file name has a .json extension, e.g. Log.json, CustomEvent.json

  • The file content is valid JSON consisting of an array of objects, where every object represents a telemetry data point

    Log.json

[
	{
		"attribute1": "value1"
	},
	{
		"attribute1": "value1",
		"attribute2": "value2",
		"attribute3": "value3"
	}
]
  3. Create your pull request normally; the tests will be executed in the background. If the synthesis rules from the definition don't match the test data, a bot will let you know with an explanatory comment on the pull request.

See ext-pihole definition for an example of test data.

Support

Is the information provided in the repository not enough to answer your questions? Get in touch with the team by opening an issue!

Other Support Channels

Contributing

We encourage you to add new entity types! Keep in mind that when you submit your pull request, you'll need to sign the CLA via the click-through using CLA-Assistant. You only have to sign the CLA once per project.

If you have any questions, or to execute our corporate CLA, required if your contribution is on behalf of a company, please drop us an email at [email protected].

A note about vulnerabilities

As noted in our security policy, New Relic is committed to the privacy and security of our customers and their data. We believe that providing coordinated disclosure by security researchers and engaging with the security community are important means to achieve our security goals.

If you believe you have found a security vulnerability in this project or any of New Relic's products or websites, we welcome and greatly appreciate you reporting it to New Relic through HackerOne.

License

Entity Synthesis Definitions is licensed under the Apache 2.0 License.

entity-definitions's Issues

Need NRQL alert condition example

I am a first-timer. I need more information or examples on how I can tie an NRQL alert condition to a synthesized entity.
The customer is using custom events/attributes in the NRQL alerts.

๐Ÿ› Dashboard Sanitization script is improperly changing certain patterns

Description

Using the dashboard sanitization script during local development (npm --prefix validator run sanitize-dashboards) creates broken widgets on a specific query pattern seen in both the existing kentik_default dashboard and the new dashboard being added in PR #468.

Steps to Reproduce

Clone the repo locally and run the dashboard validation script. No changes should be necessary as the current dashboard for Kentik Default entities is in "violation" and should show the problem.

Expected Behavior

This pattern should be ignored by the validation script since it's a valid dashboard definition and NRQL syntax before being adjusted.

Relevant Logs / Console output

This is only happening on the following pattern:

"FROM Metric SELECT round(latest(kentik.snmp.hrStorageUsed)*100/latest(kentik.snmp.hrStorageSize), .01) AS 'Used %', round(latest(kentik.snmp.hrStorageSize)*1e-6,.01) AS 'Total (MB)', round(latest(kentik.snmp.hrStorageUsed)*1e-6,.01) AS 'Used (MB)' WHERE provider = 'kentik-poweriq' FACET storage_description LIMIT MAX"

Specifically, this portion of the string:

(kentik.snmp.hrStorageSize), .01)

which is being improperly sanitized to the following (replacing ) with } and removing the ,):

(kentik.snmp.hrStorageSize} .01)

Additional context

This has a negative impact every time a PR is run and the sanitization script is executed locally, as it puts these dashboards at risk of a breaking change in production, with bad NRQL syntax in these widgets.


Adding IO/VM Wait Time Host Golden Metrics

Summary

I'm not sure how useful golden metrics such as average(cpuPercent) are in general, so I'd like to propose some new ones.

  • CPU time spent waiting for I/O (while non-idle): percentile((cpuIOWaitPercent / cpuPercent) * 100, 50, 75, 95, 99, 99.9)
  • CPU time spent waiting for VM (while non-idle): percentile((cpuStealPercent / cpuPercent) * 100, 50, 75, 95, 99, 99.9)

Also, what is the policy about percentiles with golden metrics? I don't see many using percentiles. Is a golden metric of percentile(cpuPercent, 50, 75, 95, 99, 99.9) an option?

Desired Behavior

Averages for golden metrics aren't terribly useful in my opinion. Would like to see more percentile based golden metrics so I can compare the median to outliers.

Possible Solution

If a percentile() with multiple percentiles specified is discouraged, we could define one percentile per golden metric.

Additional context

Can submit a PR, just looking for some guidance on the best approach.
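
A sketch of what a single-percentile golden metric could look like. The metric key and YAML shape here are assumptions modeled on golden-metric files elsewhere in the repo (title, unit, query selection), not a confirmed schema:

```yaml
# Illustrative sketch: one percentile per golden metric, as suggested above.
cpuPercentMedian:
  title: CPU Utilization (median %)
  unit: PERCENTAGE
  queries:
    newRelic:
      select: percentile(cpuPercent, 50)
      from: SystemSample
      eventId: entity.guid
```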

Adding support for Prometheus Redis

I have a customer using Prometheus heavily for Redis instances in their Kubernetes cluster.

I tried creating the config file for this specific case, but I tried to stay away from Kubernetes specifics so I expect this to also work for Redis outside of Kubernetes.

ref: #5

Add Mikrotik router synthesis definition

Summary

Hello, I want to add an entity definition for Mikrotik routers (https://mikrotik.com), but I have doubts about the best way of doing it.

Desired Behavior

Either have a specific Mikrotik router entity type, or have them in a generic router type.

Possible Solution

I see that there is currently an EXT-ROUTER definition, whose code talks about Kentik routers, but it also has this comment:

  • # Leaving this ambiguous to allow future technical partners to use the same entity type with additional conditions

Should I:

  1. submit a new entity definition,
  2. or modify the EXT-ROUTER to synthesize also the data from the Mikrotik routers?

If answer is 2:

  • How do I add a mikrotik condition to the type definition?
  • How do I modify these summary metrics to pick up the metrics starting with mikrotik.? For example, this is what we have for Kentik:
 - cpuUtilization:
     title: CPU Utilization (%)
     unit: PERCENTAGE
     query:
       select: average(kentik.snmp.CPU)
       from: Metric
       where: "provider = 'kentik-router'"
       eventId: entity.guid
  • How do I modify the dashboard definitions to conditionally use metric names kentik.x.y or mikrotik.a.b? For example:
"nrqlQueries": [
  {
    "accountId": 0,
    "query": "FROM Metric SELECT average(kentik.snmp.CPU) FACET entity.name WHERE provider = 'kentik-router' TIMESERIES 5 MINUTES"
  }
],
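
If option 2 were taken, one conceivable approach (purely a sketch; the maintainers would need to confirm the intended pattern, and the condition form used here is an assumption) is to add a second synthesis rule to the EXT-ROUTER definition keyed on a Mikrotik-specific attribute:

```yaml
# Hypothetical additional rule for the EXT-ROUTER definition:
# synthesize routers from telemetry carrying mikrotik.* attributes.
- identifier: mikrotik.serialnumber
  name: device_name
  conditions:
    - attribute: metricName
      prefix: mikrotik.   # assumed condition syntax; verify against current docs
```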

Additional context

The router's data is dumped via the Metrics API with a script that runs on the router itself (pretty much like an agent), found here https://github.com/OscarDCorbalan/mikrotik-newrelic.

By impersonating userId 2848364, you should see a dashboard Mikrotik Router in the EU region (one.eu.newrelic) that uses that data.

This is the shape of the data that is sent:

[
   {
      "common":{
         "attributes":{
            "mikrotik.currentfirmware":"6.48.2",
            "mikrotik.model":"RBD52G-5HacD2HnD",
            "mikrotik.serialnumber":"C6140C7EBEF8",
            "mikrotik.upgradefirmware":"6.48.2"
         },
         "timestamp":1620969329
      },
      "metrics":[
         {
            "name":"mikrotik.firewall.connection.established",
            "type":"gauge",
            "value":32
         },
         {
            "name":"mikrotik.firewall.connection.tcp",
            "type":"gauge",
            "value":32
         },
         {
            "name":"mikrotik.firewall.connection.udp",
            "type":"gauge",
            "value":11
         },
         {
            "name":"mikrotik.interface.ether1.fprxbps",
            "type":"gauge",
            "value":656
         },
         {
            "name":"mikrotik.interface.ether1.fptxbps",
            "type":"gauge",
            "value":2736
         },
         [ ... rest of interfaces' throughput data ...] 
         {
            "name":"mikrotik.ip.dhcpserver.leases",
            "type":"gauge",
            "value":3
         },
         {
            "name":"mikrotik.ip.dns.cache.size",
            "type":"gauge",
            "value":1024
         },
         {
            "name":"mikrotik.ip.dns.cache.used",
            "type":"gauge",
            "value":589
         },
         {
            "name":"mikrotik.ip.pool.used",
            "type":"gauge",
            "value":2
         },
         {
            "name":"mikrotik.system.cpu.load",
            "type":"gauge",
            "value":2
         },
         {
            "name":"mikrotik.system.memory.free",
            "type":"gauge",
            "value":80629760
         },
         {
            "name":"mikrotik.system.memory.total",
            "type":"gauge",
            "value":134217728
         }
      ]
   }
]

ext-host metrics are malformed

Description

The metrics in ext-host/golden_metrics.yml (e.g., https://github.com/newrelic/entity-definitions/blame/main/definitions/ext-host/golden_metrics.yml#L4) are malformed in a way that is not wrong but is likely misleading.

A metric has a single value; these have multiple values. This is likely not what was intended.

The stream processing system that generates the golden metric data stored in NRDB will only use the first aggregation (in the example, it will construct newrelic.goldenmetrics.ext.host.systemLoad using only sum(datadog.system.load.1)); the other values are ignored.

If those other values are important, then they should be included as separate metrics.
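
Concretely, instead of one golden metric whose query mixes several load aggregations, the intent could be expressed as one metric per value. The sketch below uses the datadog.system.load.1 metric named above plus an assumed sibling datadog.system.load.5; the exact YAML schema (keys, units, queries block) should be checked against the repo's golden_metrics.yml files:

```yaml
# One golden metric per value, so no aggregation is silently dropped.
systemLoad1:
  title: System load (1m)
  unit: COUNT
  queries:
    datadog:
      select: sum(datadog.system.load.1)
      from: Metric
      eventId: entity.guid
systemLoad5:
  title: System load (5m)
  unit: COUNT
  queries:
    datadog:
      select: sum(datadog.system.load.5)
      from: Metric
      eventId: entity.guid
```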


New entity synthesis for RF Scanner devices

Summary

Opening enhancement request to add support for RF Scanner devices.

Possible Solution

Create RF Scanner entity. The required metrics are generated after receiving and parsing a scanner generated file in JSON format and ingesting via the Events API to create two custom event types.

Additional context

Summary metrics for Unix host

Description

Summary metrics for Unix host appear empty (CPU, memory, network)

Steps to Reproduce

Solaris hosts are reporting metrics, but this entity dashboard is empty save for the entity names/account info:

The backing metrics are present, as unixMonitor:Process, unixMonitor:Stats, etc, and the host entities exist.


Expected Behavior

Expected summary metrics to appear in the metric columns on the dashboard.

Your Environment

  • Solaris

Additional context

Metrics queries here: https://github.com/newrelic/entity-definitions/blob/main/definitions/ext-unix_host/golden_metrics.yml

The query for Memory relies on memory.used, but these hosts aren't sending it. A workaround could be to use other data, but I'm not sure if this is an issue only for a particular OS. For example, this query works instead on the same data, calculating used memory as total minus free:

FROM `unixMonitor:Stats` select ((sum(memory.total) - sum(memory.free)) / sum(memory.total)) * 100 as 'Memory utilization (%)' facet hostname timeseries

UPS Entity Dashboard does not fully populate

Description

Only the Device Uptime is populating when viewing the UPS entity dashboard for Vertiv UPS devices whose data is gathered via SNMP. Data for other fields is visible when you view the data via NRQL. Vertiv-ups.yml is a recent addition to the Kentik SNMP profile and likely needs an entity definition defined for it.

Steps to Reproduce

Search for and select a UPS entity to display the dashboard.

I am able to see all of the data for the Vertiv UPSs when I run a standalone NRQL query.
FROM Metric SELECT latest(kentik.snmp.PollingHealth), latest(kentik.snmp.lgpPwrNominalBatteryCapacity), latest(kentik.snmp.upsEstimatedMinutesRemaining), latest(kentik.snmp.upsBatteryTemperature), latest(kentik.snmp.gpPwrBatteryCapacity), latest(kentik.snmp.upsSecondsOnBattery), latest(kentik.snmp.upsBypassNumLines), latest(kentik.snmp.lgpPwrBatteryChargeStatus), latest(kentik.snmp.lgpPwrBatteryCharger), latest(kentik.snmp.upsInputLineBads), latest(kentik.snmp.upsOutputFrequency), latest(kentik.snmp.upsEstimatedChargeRemaining), latest(kentik.snmp.upsTestStartTime), latest(kentik.snmp.upsBatteryStatus), latest(kentik.snmp.lgpPwrBrownOutCount), latest(kentik.snmp.upsTestResultsSummary), latest(kentik.snmp.lgpPwrBatteryTestResult), latest(kentik.snmp.upsOutputSource), latest(kentik.snmp.upsTestResultsDetail), latest(kentik.snmp.lgpPwrBatteryLastCommissionTime), latest(kentik.snmp.upsAlarmsPresent), latest(kentik.snmp.upsBatteryVoltage), latest(kentik.snmp.lgpPwrBatteryTimeRemaining), latest(kentik.snmp.upsBypassFrequency), latest(kentik.snmp.lgpEnvTemperatureMeasurementDegF), latest(kentik.snmp.upsOutputVoltage), latest(kentik.snmp.upsOutputPower), latest(kentik.snmp.upsOutputPercentLoad), latest(kentik.snmp.upsInputCurrent), latest(kentik.snmp.upsInputFrequency), latest(kentik.snmp.upsBypassVoltage), latest(kentik.snmp.upsInputVoltage), latest(kentik.snmp.upsOutputCurrent), latest(kentik.snmp.lgpEnvTemperatureMeasurementDegC), latest(kentik.snmp.upsInputCurrent), latest(kentik.snmp.upsInputFrequency), latest(kentik.snmp.upsBypassVoltage), latest(kentik.snmp.upsInputVoltage), latest(kentik.snmp.upsOutputCurrent), latest(kentik.snmp.lgpEnvTemperatureMeasurementDegC), latest(kentik.snmp.lgpEnvTemperatureMeasurementDegF), latest(kentik.snmp.upsOutputVoltage), latest(kentik.snmp.upsOutputPower), latest(kentik.snmp.upsOutputPercentLoad), latest(kentik.snmp.upsInputFrequency), latest(kentik.snmp.Uptime), latest(kentik.snmp.upsOutputNumLines), 
latest(kentik.snmp.upsBatteryCurrent), latest(kentik.snmp.lgpPwrBlackOutCount), latest(kentik.snmp.upsInputNumLines) where provider='kentik-ups' FACET device_name since 5 minute ago LIMIT MAX

Expected Behavior

I would expect the built-in entity dashboard for UPS to be able to populate the Vertiv UPS data.

Entity infra-elasticsearchnode Does Not Exist as Entity Type

Description

When navigating to the Explorer, the ElasticSearch entity type is not defined.

Steps to Reproduce

  1. Log in to New Relic
  2. Click Explorer
  3. On the left navigation menu, scroll down to On Host

Expected Behavior

Expected to see Elasticsearch

Relevant Logs / Console output

Metrics are currently captured by New Relic via the on-host infrastructure agent.

Your Environment

  • Newrelic Infrastructure Agent 1.27.4

  • nri-elasticsearch 4.5.3


Additional context

Definition is defined here: https://github.com/newrelic/entity-definitions/tree/main/definitions/infra-elasticsearchnode

Custom Entity Definitions Summary Metrics Null

Support AWS Database Migration Service (DMS) as one or more entities

Summary

I want to add one or more entity synthesis definitions to support the AWS service of database migration service (DMS) so that we can see DMS instances, DMS tasks, and DMS tables as entities.

Possible Solution

Create 3 different entities

  1. DMS Instance - Track each instance which holds many tasks which run many tables. There are many metrics like CPU and rows per second which are meaningful to track at a higher level than a DMS task.
  2. DMS Replication Task - Track each task for replication which has 1 source endpoint and 1 target endpoint for 1 or more tables. Would want to track the name, task stats, logs, etc. here.
  3. DMS Table - Track each table and its counts and status.

Additional context
