Comments (8)
I feel we should maintain the kind of the original instrument, but I agree with the comments above. Suppose I export a Measure instrument. If I know that it was absolute, then I can automatically show you a plot of the sum as a rate. If the Measure is not absolute, I should not try to plot the sum as a rate. In both cases, the value being exported is a histogram or a summary.
@bogdandrutu I am suggesting that we keep the original instrument kind as well as be explicit about the aggregation (somehow). @tigrannajaryan I was imagining that there would be only one value present, which is unambiguous in the sense that you have exactly one value, but doesn't actually state what the aggregation is.
@bogdandrutu If I read you correctly, you're agreeing with a structure like
Metric {
  MetricDescriptor
  repeated Aggregation { AggregationType, repeated timeseries/points }
}
is that right?
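Spelled out as a proto sketch, that structure might look like the following (message and field names here are hypothetical, not taken from the actual schema; the referenced types are assumed to be defined elsewhere):

```protobuf
syntax = "proto3";

// Hypothetical shape of the structure under discussion: a Metric
// carries its descriptor plus one or more aggregations, each with
// its own aggregation type and timeseries.
message Metric {
  MetricDescriptor descriptor = 1;
  repeated Aggregation aggregations = 2;
}

message Aggregation {
  AggregationType type = 1;           // e.g. SUM, HISTOGRAM, SUMMARY
  repeated TimeSeries timeseries = 2; // points for this aggregation
}
```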
from opentelemetry-proto.
@bogdandrutu @jmacd what do you think?
@tigrannajaryan I like the MONOTONIC version (first proposal).
I would suggest the descriptor match the API more closely. There are 12 combinations of { Counter, Gauge, Measure } × { int64, float64 } × { monotonic/non-monotonic or absolute/non-absolute }.
As discussed in open-telemetry/opentelemetry-specification#364, there is a motion to avoid using a single boolean to reflect the bit for { monotonic/non-monotonic or absolute/non-absolute }. In the protocol it matters because we'd prefer to set the bit only when the non-default case is chosen, but counters and gauges have different defaults. In this respect, maybe we're best off with 6 kinds:
{ monotonic_counter, nonmonotonic_counter, monotonic_gauge, nonmonotonic_gauge, absolute_measure, nonabsolute_measure }
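As a proto3 sketch, the six-kind enum might look like this (value names and numbering are illustrative). Note that with a single enum only one value can be the proto3 default (0), so the per-kind default trick doesn't carry over directly; the enum instead makes the choice explicit for every kind:

```protobuf
// Hypothetical six-value kind enum folding the monotonic/absolute
// bit into the instrument kind, so no separate boolean is needed.
enum MetricKind {
  MONOTONIC_COUNTER = 0;    // counter default
  NONMONOTONIC_COUNTER = 1;
  NONMONOTONIC_GAUGE = 2;   // gauge default
  MONOTONIC_GAUGE = 3;
  ABSOLUTE_MEASURE = 4;     // measure default
  NONABSOLUTE_MEASURE = 5;
}
```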
I am not sure about "data type". I would prefer if "data type" referred to the original input values (i.e., { int64, float64 }), which may not be important to include in the export.
I am more familiar with using "value type" to refer to { Int64, Double, Histogram, Summary }, and I would not call "value type" information part of the descriptor itself. The same descriptor could generate any of these values, so I would let that be implied by the type of value that accompanies it in the Metric message, not part of the descriptor.
To summarize, I would go with a 6-value enum (monotonic_counter, nonmonotonic_counter, monotonic_gauge, nonmonotonic_gauge, absolute_measure, nonabsolute_measure) and I would leave out whether the inputs are integer or floating point.
Do we need to differentiate between absolute and non-absolute measures? I cannot think of anything that requires interpreting them differently and they are going to be represented as signed numbers in the proto anyway.
I am not sure about "data type". I would prefer if "data type" referred to the original input values (i.e., { int64, float64 }), which may not be important to include in the export.
The data type must be reflected in the metric descriptor; otherwise there is no way to know which data field in the Metric message to use (e.g. int64 or double?). So this either needs to be an explicit field or must be unambiguously inferrable from the Value Type (or Kind) enum, if we use only one enum.
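For contrast, the alternative @jmacd describes, inferring the data type from whichever value field is set, could be sketched with a oneof (all names here are hypothetical):

```protobuf
// Hypothetical Metric message where the concrete value type
// (int64, double, histogram, summary) is implied by which oneof
// member is populated, not by a field in the descriptor.
message Metric {
  MetricDescriptor descriptor = 1;
  oneof data {
    Int64TimeSeriesList int64_data = 2;
    DoubleTimeSeriesList double_data = 3;
    HistogramTimeSeriesList histogram_data = 4;
    SummaryTimeSeriesList summary_data = 5;
  }
}
```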
I believe we should. An example case that has come up is using an absolute measure to convey individual measurements where the sum is relevant. If the measure is absolute, the sum is monotonic, which makes it easier to automate the rate-of-sum calculation. I see this as metadata that the transport and protocol itself probably don't care about, but that might matter for presentation and downstream use.
This was written before I had read through the conversation a few times:
I would suggest the descriptor match the API more closely. There are 12 combinations of { Counter, Gauge, Measure }, { int64, float64 }, { monotonic/non-monotonic or absolute/non-absolute }.
I am not sure what the representation of the Measure would be in this case. We need to map the values to an aggregation that the SDK produces; otherwise you are proposing only raw values (no aggregation), which is sub-optimal, and I think we should not do that (see, for example, OpenMetrics, which does NOT send raw values, for all the good reasons).
That being said, I think we should design the protocol around the aggregations we produce (similar to OpenMetrics), plus a time interval (cumulative value over time OR value set at a specific moment).
This was written after I had read through the conversation more than three times:
If I understand correctly @jmacd proposes something like this:
Metric {
  MetricDescriptor
  repeated Aggregation { AggregationType, repeated timeseries/points }
}
The current proposal does not try to preserve the initial API instrument description; rather, it unifies the initial descriptor with the aggregation type and the labels used. From one instrument we may therefore produce multiple metrics (based on the aggregations defined). For example, from an instrument called rpc_latency we may want to produce two histograms (different labels, different buckets); they will become two metrics. This way our protocol is compatible with OpenMetrics and much easier to transform to/from.
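To illustrate the one-instrument-to-many-metrics idea, here is a sketch (message and field names hypothetical) of a descriptor that records the aggregation and labels rather than the original instrument kind:

```protobuf
// Hypothetical aggregation-oriented descriptor: two metrics derived
// from one rpc_latency instrument would each get their own descriptor.
message MetricDescriptor {
  string name = 1;                 // e.g. "rpc_latency"
  AggregationType type = 2;        // e.g. HISTOGRAM
  repeated string label_keys = 3;  // labels chosen for this view
}
// e.g. one descriptor with label_keys = ["method"] and one with
// label_keys = ["peer"], each producing a separate histogram metric.
```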
Closing this, since we went with an approach that describes the aggregation of the data, not the instrument that was initially used.