
opentelemetry-collector-contrib's Introduction




OpenTelemetry Collector Contrib

This is a repository for OpenTelemetry Collector components that are not suitable for the core repository of the collector.

The official distributions, core and contrib, are available as part of the opentelemetry-collector-releases repository. Some of the components in this repository are part of the "core" distribution, such as the Jaeger and Prometheus components, but most of the components here are only available as part of the "contrib" distribution. Users of the OpenTelemetry Collector are also encouraged to build their own custom distributions with the OpenTelemetry Collector Builder, using the components they need from the core repository, the contrib repository, and possibly third-party or internal repositories.

Each component has its own support level, as defined in the following sections. For each signal that a component supports, there is a stability level that sets the right expectations. A component may therefore be Stable for traces but Alpha for metrics and in Development for logs.

Stability levels

Stability levels for components in this repository follow the definitions from the OpenTelemetry Collector repository.

Gated features

Some features are hidden behind feature gates before they are part of the main code path for the component. Note that the feature gates themselves might be at different lifecycle stages.
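As an illustration, here is a minimal sketch of registering and checking such a gate in Go, assuming the current go.opentelemetry.io/collector/featuregate API; the gate ID, description, and function are hypothetical, not an existing component's code.

package exporterexample

import "go.opentelemetry.io/collector/featuregate"

// The gate itself has a lifecycle stage (Alpha here) independent of the component's stability.
var useNewCodePathGate = featuregate.GlobalRegistry().MustRegister(
	"exporter.example.useNewCodePath", // hypothetical gate ID
	featuregate.StageAlpha,
	featuregate.WithRegisterDescription("Routes data through the new code path."),
)

// selectCodePath checks the gate at runtime; users flip it with
// --feature-gates=exporter.example.useNewCodePath on the collector binary.
func selectCodePath() string {
	if useNewCodePathGate.IsEnabled() {
		return "new"
	}
	return "old"
}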

Support

Each component is supported either by the community of OpenTelemetry Collector Contrib maintainers, as defined by the GitHub group @open-telemetry/collector-contrib-maintainer, or by specific vendors. See the individual README files for information about the specific components.

The OpenTelemetry Collector Contrib maintainers may at any time downgrade specific components, including vendor-specific ones, if they are deemed unmaintained or if they pose a risk to the repository and/or binary distribution.

Even though the OpenTelemetry Collector Contrib maintainers are ultimately responsible for the components hosted here, actual support will likely be provided by individual contributors, typically a code owner for the specific component.

Contributing

See CONTRIBUTING.md.

Triagers (@open-telemetry/collector-contrib-triagers)

Emeritus Triagers:

Approvers (@open-telemetry/collector-contrib-approvers):

Emeritus Approvers:

Maintainers (@open-telemetry/collector-contrib-maintainer):

Emeritus Maintainers:

Learn more about roles in the community repository.

PRs and Reviews

When creating a PR please follow the process described here.

New PRs will be automatically associated with reviewers based on CODEOWNERS. PRs will also be automatically assigned to one of the maintainers or approvers for facilitation.

The facilitator is responsible for helping the PR author and reviewers make progress or, if progress cannot be made, for closing the PR.

If the reviewers do not have approval rights, the facilitator is also responsible for the official approval required for the PR to be merged; if the facilitator is a maintainer, they are responsible for merging the PR as well.

The facilitator is not required to perform a thorough review, but they are encouraged to enforce Collector best practices and consistency across the codebase and component behavior. Facilitators will typically rely on the code owners' detailed review of the code when making the final approval decision.


opentelemetry-collector-contrib's Issues

Add ability to build multiple binaries from this repo

We should be able to build multiple binaries of the OpenTelemetry Contrib Collector from this repository. The binaries would bundle different sets of components, use different libraries or protocols, etc.

In the future, we could even have a form on the OpenTelemetry website that would allow users to pick which components they want and generate a custom build for them. For now we just need the ability to generate more than one build, i.e., add another cmd/<buildName> directory and the tooling to support it.

Perf test: TestTrace10kSPS/OpenCensus CPU consumption is 40.3%, max expected is 35%

CI failure for PR #88

--- FAIL: TestTrace10kSPS (22.32s)
    --- FAIL: TestTrace10kSPS/OpenCensus (6.00s)
        test_case.go:354: CPU consumption is 40.3%, max expected is 35%
    --- PASS: TestTrace10kSPS/SAPM (16.32s)
FAIL
exit status 1
FAIL	github.com/open-telemetry/opentelemetry-collector-contrib/testbed/tests	54.980s
# Test Results
Started: Fri, 03 Jan 2020 22:23:23 +0000

Test                                    |Result|Duration|CPU Avg%|CPU Max%|RAM Avg MiB|RAM Max MiB|Sent Items|Received Items|
----------------------------------------|------|-------:|-------:|-------:|----------:|----------:|---------:|-------------:|
Metric10kDPS/SignalFx                   |PASS  |     15s|    20.0|    20.7|         36|         45|    150000|        150000|
Metric10kDPS/OpenCensus                 |PASS  |     18s|     7.5|     8.0|         42|         52|    149900|        149900|
Trace10kSPS/OpenCensus                  |FAIL  |      6s|    32.5|    40.3|         39|         59|     59660|         57400|CPU consumption is 40.3%, max expected is 35%
Trace10kSPS/SAPM                        |PASS  |     16s|    43.7|    53.0|         69|         88|    149590|        149590|

Style: organize imports for current sources

The import statements are not consistent across the repo. This is especially important given that others can use the implementations here as a starting point for their own.

exporter/honeycomb: Collector crashes if Node.ServiceInfo not set on TraceData.

Zipkin and Jaeger clients support sending trace data without populating Node.ServiceInfo on TraceData instances. This will currently cause the collector to crash if the honeycomb exporter is configured as part of a pipeline exporting those traces. We should check that the field is not nil before trying to set it as an attribute.
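As a minimal sketch of the guard the exporter needs, assuming the OpenCensus agent proto types the collector used at the time; the helper below is illustrative, not the exporter's actual code.

package honeycombexample

import (
	commonpb "github.com/census-instrumentation/opencensus-proto/gen-go/agent/common/v1"
)

// serviceName safely extracts the service name for use as an attribute.
// Node and Node.ServiceInfo may both be nil when Zipkin or Jaeger clients
// do not populate them; dereferencing either without a check would panic.
func serviceName(node *commonpb.Node) (string, bool) {
	if node == nil || node.ServiceInfo == nil {
		return "", false
	}
	return node.ServiceInfo.Name, true
}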

Host Metrics Receiver

Add a receiver to generate host metrics about the machine the collector is running on. Initially this will be for Windows only, but extendable so that the same receiver can also collect Linux metrics when implemented.
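As a rough illustration of the kind of data involved, here is a sketch of gathering host CPU and memory metrics in Go with gopsutil, one library such a receiver could build on; this is not the receiver's implementation and the printed metric names are placeholders.

package main

import (
	"fmt"
	"log"

	"github.com/shirou/gopsutil/cpu"
	"github.com/shirou/gopsutil/mem"
)

func main() {
	// Memory: total and used bytes for the whole machine.
	vm, err := mem.VirtualMemory()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("system.memory.usage: used=%d total=%d\n", vm.Used, vm.Total)

	// CPU: overall utilization; interval 0 compares against the previous call.
	pct, err := cpu.Percent(0, false)
	if err != nil {
		log.Fatal(err)
	}
	if len(pct) > 0 {
		fmt.Printf("system.cpu.utilization: %.1f%%\n", pct[0])
	}
}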

Add Splunk HEC exporter

Splunk HTTP Event Collector can digest traces and events as event data, and data points as metrics data.

Add support to send to Splunk HEC, along with configuration items.
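For reference, here is a sketch of the HEC wire format such an exporter would target, based on Splunk's public HEC API; the endpoint path and token header are standard HEC, while the event shape and function name are placeholders, not the exporter's eventual configuration.

package hecexample

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// sendHECEvent posts a single event to HEC. Log records and spans are nested
// under the "event" key; metric data points additionally carry a metric name
// inside "fields".
func sendHECEvent(client *http.Client, baseURL, token string, event map[string]interface{}) error {
	body, err := json.Marshal(map[string]interface{}{
		"event":      event,
		"sourcetype": "otel",
	})
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/services/collector/event", bytes.NewReader(body))
	if err != nil {
		return err
	}
	// HEC authenticates with a per-token Authorization header.
	req.Header.Set("Authorization", "Splunk "+token)
	req.Header.Set("Content-Type", "application/json")
	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("HEC returned status %d", resp.StatusCode)
	}
	return nil
}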

Example RBAC configuration for k8sprocessor

Hi everyone,

We're currently trying to implement the K8s processor in our agent pods, but seem to have hit a hurdle. When checking the logs of the pods we are seeing the following:

pkg/mod/k8s.io/client-go@<version>/tools/cache/reflector.go:98: Failed to list *v1.Pod: Get "https://172.20.0.1:443/api/v1/pods?limit=500&resourceVersion=0": Forbidden

Adding the following ServiceAccount, ClusterRole and ClusterRoleBinding to the pod does not seem to have resolved the issue:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: opentracing-agent

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: opentracing-agent
rules:
- apiGroups: [""]
  resources:
  - configmaps
  - daemonsets
  - deployments
  - endpoints
  - events
  - namespaces
  - nodes
  - pods
  - replicasets
  - services
  - statefulset
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: opentracing-agent
subjects:
- kind: ServiceAccount
  name: opentracing-agent
  namespace: tracing
roleRef:
  kind: ClusterRole
  name: opentracing-agent
  apiGroup: "rbac.authorization.k8s.io"

Is there an example of the required RBAC configuration? I've had a look around but couldn't find one. Below is some additional information if it's of use.

Many thanks

  • K8s version (EKS): v1.14.9-eks-502bfb
  • Deployed using a Daemonset.
  • Service account seems to have permission to access the API:
kubectl auth can-i get pods --as=system:serviceaccount:tracing:opentracing-agent
  yes 
  • Service accounts token is correctly mounted on the pod and in /var/run/secrets/kubernetes.io/serviceaccount
  • K8sprocessor config:
processors:
  k8s_tagger:
    filter:
      node_from_env_var: NODE_NAME
    passthrough: false
    extract:
      metadata:
        - podName

K8S processor: Add detailed user documentation

Add a README file to the k8sprocessor containing user documentation. In addition to documenting all the config options and what they do, we should also mention recommended deployment scenarios and configurations.

Allow denylisting in the string-attribute-filter

Currently, the string-attribute-filter in OpenCensus collector allows spans to be exported if they match specified strings, i.e. whitelisted. It is, however, currently not possible to create a blacklist.

One specific use-case for this is to exclude traces for grpc.health.v1.HealthCheck, which are a standard way of adding liveness and readiness endpoints to GRPC servers, but which add lots of noise to trace lists. (The current SpanContext for OpenCensus does not include the span name, so exclusion at that point is not possible either.)

From a flexibility point of view, it would be ideal if there was some sort of simple syntax for including or excluding partial matches too, whether through regexes, globs etc, though latency might be a consideration. At a minimum though, the ability to blacklist certain strings would be great.

Per @songy23, this is blocked by open-telemetry/opentelemetry-collector#221.
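As an illustration of what a denylist could look like, here is a sketch of a regex-based filter over a span's string attributes; the function and pattern below are hypothetical, not part of the existing processor.

package spanfilterexample

import "regexp"

// Deny spans whose attribute values match any of these patterns, e.g. the
// standard gRPC health check that adds noise to trace lists.
var denylist = []*regexp.Regexp{
	regexp.MustCompile(`^grpc\.health\.v1\.Health/Check$`),
}

// shouldDrop reports whether a span should be excluded because one of its
// string attribute values matches a denylisted pattern.
func shouldDrop(attributes map[string]string) bool {
	for _, value := range attributes {
		for _, pattern := range denylist {
			if pattern.MatchString(value) {
				return true
			}
		}
	}
	return false
}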

Re-evaluate gosec error about using "weak cryptographic primitive"

crypto/sha1 is imported in processor/processorhelper/hasher.go and some tests, and we suppress the warning from Gosec about it being a weak cryptographic primitive. We should document why SHA1 is appropriate (e.g. it's part of an external specification), or switch to something else.

[/Users/lazy/github/opentelemetry-collector/processor/attributesprocessor/attribute_hasher.go:18] - G505 (CWE-327): Blacklisted import crypto/sha1: weak cryptographic primitive (Confidence: HIGH, Severity: MEDIUM)
  > "crypto/sha1"


[/Users/lazy/github/opentelemetry-collector/processor/attributesprocessor/attribute_hasher.go:61] - G401 (CWE-326): Use of weak cryptographic primitive (Confidence: HIGH, Severity: MEDIUM)
  > sha1.New()
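If the digest does not need to follow an external specification, one option is a drop-in switch to SHA-256. A minimal sketch follows, with names that are illustrative rather than the processor's actual code; note this changes existing hashed values, which may matter to users.

package hasherexample

import (
	"crypto/sha256"
	"encoding/hex"
)

// hashValue returns a hex digest of the input. Using sha256 avoids the gosec
// G401/G505 findings without changing how the digest is consumed downstream.
func hashValue(value string) string {
	sum := sha256.Sum256([]byte(value))
	return hex.EncodeToString(sum[:])
}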

Give approvers/maintainers permissions

Please give service-approvers Write access and service-maintainers Admin access to this repository (this will mirror the permissions we already have for opentelemetry-service repo).

LightStep exporter missing parentid

When the otel collector is configured to receive Zipkin spans and export them into Lightstep and Zipkin, it sounds like the parentId is not propagated with the LightStep exporter.

In LightStep, the spans are then not linked, so all traces are unique to the service operation and not fully assembled as they are with the Zipkin exporter. The expected behavior is that the traces are fully assembled in LightStep as well.

How to reproduce it:

  • Context propagation is properly setup and configured with B3 headers and working in Zipkin.
  • The otel collector version: v0.3.x
  • The configuration is similar to this:
    receivers:
      zipkin:
        endpoint: "0.0.0.0:80"
    exporters:
      zipkin:
        url: "http://localhost:8080/api/v2/spans"
      lightstep:
        access_token: "Lightstep access token"
    service:
      pipelines:
        traces:
          receivers: [zipkin]
          exporters: [zipkin, lightstep]

honeycombexporter.TestEmptyNode() is unstable

See https://app.circleci.com/pipelines/github/open-telemetry/opentelemetry-collector-contrib/991/workflows/3099dddd-0525-46c2-b781-aef9b0d88bc5/jobs/1932/steps

==================

WARNING: DATA RACE

Write at 0x00000165f518 by goroutine 98:

  github.com/honeycombio/libhoney-go.Init()

      /home/circleci/go/pkg/mod/github.com/honeycombio/libhoney-go@<version>/libhoney.go:222 +0x202

  github.com/open-telemetry/opentelemetry-collector-contrib/exporter/honeycombexporter.newHoneycombTraceExporter()

      /home/circleci/project/exporter/honeycombexporter/honeycomb.go:94 +0x1c6

  github.com/open-telemetry/opentelemetry-collector-contrib/exporter/honeycombexporter.(*Factory).CreateTraceExporter()

      /home/circleci/project/exporter/honeycombexporter/factory.go:55 +0x73

  github.com/open-telemetry/opentelemetry-collector-contrib/exporter/honeycombexporter.testTraceExporter()

      /home/circleci/project/exporter/honeycombexporter/honeycomb_test.go:82 +0x3a5

  github.com/open-telemetry/opentelemetry-collector-contrib/exporter/honeycombexporter.TestEmptyNode()

      /home/circleci/project/exporter/honeycombexporter/honeycomb_test.go:278 +0x3fd

  testing.tRunner()

      /usr/local/go/src/testing/testing.go:991 +0x1eb



Previous read at 0x00000165f518 by goroutine 52:

  github.com/honeycombio/libhoney-go.TxResponses()

      /home/circleci/go/pkg/mod/github.com/honeycombio/libhoney-go@<version>/libhoney.go:511 +0x6b

  github.com/open-telemetry/opentelemetry-collector-contrib/exporter/honeycombexporter.(*honeycombExporter).RunErrorLogger()

      /home/circleci/project/exporter/honeycombexporter/honeycomb.go:287 +0x5b



Goroutine 98 (running) created at:

  testing.(*T).Run()

      /usr/local/go/src/testing/testing.go:1042 +0x660

  testing.runTests.func1()

      /usr/local/go/src/testing/testing.go:1284 +0xa6

  testing.tRunner()

      /usr/local/go/src/testing/testing.go:991 +0x1eb

  testing.runTests()

      /usr/local/go/src/testing/testing.go:1282 +0x527

  testing.(*M).Run()

      /usr/local/go/src/testing/testing.go:1199 +0x2ff

  main.main()

      _testmain.go:116 +0x337



Goroutine 52 (finished) created at:

  github.com/open-telemetry/opentelemetry-collector-contrib/exporter/honeycombexporter.(*honeycombExporter).pushTraceData()

      /home/circleci/project/exporter/honeycombexporter/honeycomb.go:121 +0xd5

  github.com/open-telemetry/opentelemetry-collector-contrib/exporter/honeycombexporter.(*honeycombExporter).pushTraceData-fm()

      /home/circleci/project/exporter/honeycombexporter/honeycomb.go:114 +0xe4

  go.opentelemetry.io/collector/exporter/exporterhelper.traceDataPusherOld.withObservability.func1()

      /home/circleci/go/pkg/mod/go.opentelemetry.io/collector@<version>/exporter/exporterhelper/tracehelper.go:96 +0x129

  go.opentelemetry.io/collector/exporter/exporterhelper.(*traceExporterOld).ConsumeTraceData()

      /home/circleci/go/pkg/mod/go.opentelemetry.io/collector@<version>/exporter/exporterhelper/tracehelper.go:48 +0x14a

  github.com/open-telemetry/opentelemetry-collector-contrib/exporter/honeycombexporter.testTraceExporter()

      /home/circleci/project/exporter/honeycombexporter/honeycomb_test.go:86 +0x4b2

  github.com/open-telemetry/opentelemetry-collector-contrib/exporter/honeycombexporter.TestExporter()

      /home/circleci/project/exporter/honeycombexporter/honeycomb_test.go:182 +0x1ae9

  testing.tRunner()

      /usr/local/go/src/testing/testing.go:991 +0x1eb

==================

--- FAIL: TestEmptyNode (0.02s)

    testing.go:906: race detected during execution of test

FAIL

Add a Dockerfile that can build the binary too

Currently, it seems that there is a Dockerfile that can create an image from a binary built on the host machine. It would be nice to have a Dockerfile that can actually build the binary too, for example this one - https://github.com/anuraaga/opentelemetry-collector-contrib/blob/custom/Dockerfile

This way, it's easy to locally build a Docker image for master without worrying about tooling, using something like docker build -t opentelemetry-collector:test https://github.com/opentelemetry/opentelemetry-collector-contrib.git, or similarly for local changes. This works especially well with docker-compose, which will always start up containers after building from source without any other steps.

If this seems like a reasonable idea, I can send a PR with the Dockerfile I linked.

Merge coverage reports of each module into a single coverage report

After #65, tests are run for each module in its own sub-directory. The raw coverage report for each module is separate. These reports can be merged with the following bash script:

#!/bin/bash
# Usage: pass the path of the merged report as the first argument.

# Collect every per-module coverage.txt, excluding a pre-existing report at the repo root.
files=$(find . -name 'coverage.txt' | grep -Ev '^\./coverage.txt' | tr '\n' ' ')
# Write a single "mode:" header, then append all entries, dropping duplicate lines.
echo "mode: set" > "$1" && cat $files | grep -v mode: | sort -r | awk '{if($1 != last) {print $0; last=$1}}' >> "$1"

The merged report can then be converted to HTML format using:

go tool cover -html=coverage.txt -o coverage.html

The issue with merging is that it doesn't work for a module that is new. The following error occurs for the new module:

cover: no matching versions for query "latest"

Not sure what the root cause is, but it is probably related to the go tool attempting to retrieve the latest version of the new module from GitHub, which doesn't exist (because it is new: a chicken-and-egg problem).

How to handle receiver start errors in receiver_creator at runtime

It's currently unclear how to appropriately handle start errors for receivers started in receiver_creator. This includes errors returned by Start() as well as errors reported through ReportFatalError. The existing methods are built around the assumption that start errors always occur at process start or shortly thereafter.

See discussion in #173 (comment)

Revisit association of workloads and pods in k8s_cluster receiver

Relates to #175

The k8s_cluster receiver currently associates Pods with their workloads using k8s.workload.name and k8s.workload.kind. For example, if a Pod is spawned by a CronJob called foo, the created Pod will have labels like k8s.workload.kind:cronjob and k8s.workload.name:foo. This applies to workloads of other types as well.

The above mechanism of association should be revisited after we finalize the new conventions for naming Kubernetes metadata.

Move Makefile common targets to Makefile.Common

There is some redundancy between the Makefile that builds the contrib executable and the Makefile.Common that is used to build individual components. We should have a single source for the "common" targets to keep the convenience of quick builds and custom targets for components while avoiding divergence of targets between Makefile and Makefile.common. /cc @owais

Add ability to sync metadata to SignalFx from Kubernetes

This issue relates to #175. The k8scluster receiver currently is capable of syncing metrics representing the state of a Kubernetes cluster. Apart from these metrics, services like SignalFx are also interested in cluster state information in the form of metadata (properties and tags in SignalFx's case). Add ability to sync this information.

helm chart

Hello,

I am aware of a k8s operator to deploy the collector, but would like to propose providing a Helm chart. If so, we could have a repo such as open-telemetry/helm-charts.

K8S Processor: Include more tags

Currently, the processor supports the following tags:

  • namespace
  • podName
  • deployment
  • cluster
  • node
  • startTime

It would be great to include more tags, such as:

  • hostName
  • containerName
  • daemonSetName
  • serviceName
  • statefulSetName

Additionally, it would be good to have the ability to include all labels and annotations.

Effectively handle Kubernetes container spec name in naming convention

Kubernetes has a notion of "container spec name". This is different from the actual name of the running container. The latter is a construct of the container engine, whereas the former seems specific to Kubernetes.

Container spec name is exposed via the Pod spec: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#podspec-v1-core and is not the same as the name of the running container. Currently this information is captured in a label called container_spec_name.

Handle these differences appropriately in k8s_cluster receiver once details are finalized.

Relates to: #175

[azuremonitorexporter] ai.cloud.role and ai.cloud.roleinstance are not populated

Collector and SDK version

  • Collector version: v0.3.0 (otel/opentelemetry-collector-contrib:0.3.0)
  • SDK: Go SDK 0.4.2

Problem

azuremonitorexporter does not populate the service name into the cloudRole and cloudRoleInstance properties of the envelope, so the application map shows an incorrect service name.

The exporter code is supposed to populate the service name, but the App Insights telemetry doesn't include the cloudRole field.
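As an illustrative sketch of the mapping the issue expects, assuming the Application Insights context tag keys ai.cloud.role and ai.cloud.roleInstance; the envelope is modeled here as a plain tag map rather than the exporter's actual contracts types.

package azuremonitorexample

// applyCloudRoleTags copies the resource-level service and instance names onto
// the telemetry envelope tags so the application map groups nodes by service
// name instead of by instrumentation key.
func applyCloudRoleTags(tags map[string]string, serviceName, instanceID string) {
	if serviceName != "" {
		tags["ai.cloud.role"] = serviceName
	}
	if instanceID != "" {
		tags["ai.cloud.roleInstance"] = instanceID
	}
}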

Application map

The test code (see below) uses the Jaeger exporter and configures the "trace-demo" service name. The expected service name at the bottom of the circle is "trace-demo", but it shows "dapr-dev-insights".


Dependency Telemetry

timestamp [UTC]        | 2020-04-24T13:38:22.411464Z
id                     | 71c8d87ea48d50b5
name                   | bar
success                | True
resultCode             | 0
duration               | 0.004
performanceBucket      | <250ms
itemType               | request
customDimensions       | {"float":"312.23","span.kind":"server","exporter":"jaeger"}
operation_Name         | bar
operation_Id           | 07a742e8cd5e7afa33d8bb34c2c59f9b
operation_ParentId     | dfc8071f1b93436b
client_Type            | PC
client_Model           | Other
client_OS              | Other
client_IP              | 0.0.0.0
client_City            | xxx
client_StateOrProvince | xx
client_CountryOrRegion | United States
client_Browser         | Go-http-client 1.1
appId                  | b61b23aa-7a2e-4182-9431-8689af7bd8d5
appName                | dapr-dev-insight
iKey                   | b723ef3d-a015-4e6e-84bf-e898d528f677
itemId                 | ddd5f422-8630-11ea-bc9c-936b910cbc1c
itemCount              | 1

Configurations

OpenTelemetry configuration

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config: |
    receivers:
      jaeger:
        protocols:
          thrift_http:
            endpoint: "0.0.0.0:14268"
    processors:
      queued_retry:
      batch:
    extensions:
      health_check:
      pprof:
        endpoint: :1888
      zpages:
        endpoint: :55679
    exporters:
      azuremonitor:
      azuremonitor/2:
        endpoint: "https://dc.services.visualstudio.com/v2/track"
        instrumentation_key: "ikey"
        # maxbatchsize is the maximum number of items that can be queued before calling to the configured endpoint
        maxbatchsize: 100
        # maxbatchinterval is the maximum time to wait before calling the configured endpoint.
        maxbatchinterval: 10s
    service:
      extensions: [pprof, zpages, health_check]
      pipelines:
        traces:
          receivers: [jaeger]
          exporters: [azuremonitor/2]
          processors: [batch, queued_retry]
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: opencesus
    component: otel-collector
spec:
  ports:
  - name: otel # Default endpoint for Opencensus receiver.
    port: 14268
    protocol: TCP
    targetPort: 14268
  selector:
    component: otel-collector
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  replicas: 1  # scale out based on your usage
  selector:
    matchLabels:
      app: opentelemetry
  template:
    metadata:
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      containers:
      - name: otel-collector
        image: otel/opentelemetry-collector-contrib:0.3.0
        command:
          - "/otelcontribcol"
          - "--config=/conf/otel-collector-config.yaml"
        resources:
          limits:
            cpu: 1
            memory: 2Gi
          requests:
            cpu: 200m
            memory: 400Mi
        ports:
          - containerPort: 14268 # Default endpoint for Opencensus receiver.
        volumeMounts:
          - name: otel-collector-config-vol
            mountPath: /conf
          #- name: otel-collector-secrets
          #  mountPath: /secrets
        livenessProbe:
          httpGet:
            path: /
            port: 13133
        readinessProbe:
          httpGet:
            path: /
            port: 13133
      volumes:
        - configMap:
            name: otel-collector-conf
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
          name: otel-collector-config-vol
#       - secret:
#            name: otel-collector-secrets
#            items:
#              - key: cert.pem
#                path: cert.pem
#              - key: key.pem
#                path: key.pem

Test Code

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/api/core"
	"go.opentelemetry.io/otel/api/global"
	"go.opentelemetry.io/otel/api/key"
	"go.opentelemetry.io/otel/api/trace"

	"go.opentelemetry.io/otel/exporters/trace/jaeger"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// initTracer creates a new trace provider instance and registers it as global trace provider.
func initTracer() func() {
	// Create and install Jaeger export pipeline
	_, flush, err := jaeger.NewExportPipeline(
		jaeger.WithCollectorEndpoint("http://localhost:14268/api/traces"),
		jaeger.WithProcess(jaeger.Process{
			ServiceName: "trace-demo",
			Tags: []core.KeyValue{
				key.String("exporter", "jaeger"),
				key.Float64("float", 312.23),
			},
		}),
		jaeger.RegisterAsGlobal(),
		jaeger.WithSDK(&sdktrace.Config{DefaultSampler: sdktrace.AlwaysSample()}),
	)
	if err != nil {
		log.Fatal(err)
	}

	return func() {
		flush()
	}
}

func main() {
	fn := initTracer()
	defer fn()

	ctx := context.Background()

	tr := global.Tracer("component-main")
	ctx, span := tr.Start(ctx, "foo", trace.WithSpanKind(trace.SpanKindClient))
	bar(ctx)
	span.End()
}

func bar(ctx context.Context) {
	tr := global.Tracer("component-bar")
	_, span := tr.Start(ctx, "bar", trace.WithSpanKind(trace.SpanKindServer))
	defer span.End()

	// Do bar...
}

Performance Benchmarking of Attributes processor

Investigate using integers instead of strings for Actions when applying attributes logic to a span. The investigation would compare string value comparison with integer value comparison to determine whether the current implementation is a bottleneck.
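A rough sketch of that comparison as Go micro-benchmarks; the action names and constants below are illustrative, not the processor's actual types.

package attributesbenchexample

import "testing"

const (
	actionInsertStr = "insert"
	actionUpdateStr = "update"
)

const (
	actionInsertInt = iota
	actionUpdateInt
)

// BenchmarkStringActionCompare measures the string comparison made today
// for every attribute action applied to a span.
func BenchmarkStringActionCompare(b *testing.B) {
	action := actionUpdateStr
	matches := 0
	for i := 0; i < b.N; i++ {
		if action == actionInsertStr {
			matches++
		}
	}
	_ = matches
}

// BenchmarkIntActionCompare measures the same decision made with integers.
func BenchmarkIntActionCompare(b *testing.B) {
	action := actionUpdateInt
	matches := 0
	for i := 0; i < b.N; i++ {
		if action == actionInsertInt {
			matches++
		}
	}
	_ = matches
}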

Tail-Based Sampling - Scalability Issues

I like the idea of tail based sampling. I would like to enhance tail based sampling in 2 ways:

Consider a case where we are tail-sampling based on span->http.status_code=500.

  1. I would want to sample 100% of all http.status_code "500"s and 10% of http.status_code "200"s. I would like to create a new sampler that combines string_tag_filter and probabilistic sampler.
  2. What happens when spans for a trace land in different OTel Collector instances? One instance can sample the trace when it finds http.status_code 500 while another could decide not to sample. I was thinking of using groupcache to solve this problem; a sketch of routing by trace ID is shown after this list.
    Please share your thoughts on the above.
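A minimal sketch of one way to keep all spans of a trace on the same collector instance, assuming a routing layer in front of the collectors that can hash by trace ID; this is illustrative and not an existing collector feature.

package main

import (
	"fmt"
	"hash/fnv"
)

// instanceFor deterministically maps a trace ID to one collector instance so
// that every span of the trace reaches the same tail sampler.
func instanceFor(traceID []byte, instances []string) string {
	h := fnv.New32a()
	h.Write(traceID)
	return instances[int(h.Sum32())%len(instances)]
}

func main() {
	instances := []string{"collector-0:55680", "collector-1:55680"}
	traceID := []byte{0x07, 0xa7, 0x42, 0xe8, 0xcd, 0x5e, 0x7a, 0xfa}
	fmt.Println(instanceFor(traceID, instances))
}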

Specify requirements for components to be added to the repo

We intend to build a distribution of the contrib repo with all components present. In order to ensure the quality of this build, we should define a minimum set of requirements (tests, documentation, etc.) for all components available in the repo.

Fix build for go 1.13

Attempting to build this repo with go 1.13 currently fails. Here is the output of make test:

$ make test
go: directory receiver/zipkinscribereceiver is outside main module
go test -race -timeout 30s 
build .: cannot find module for path .
make: *** [test] Error 1

It appears the behavior of go list in go 1.13 is different. It prints the warning "outside main module", which then causes the problem when the makefile attempts to parse that output.

Tested on Mac OS X, go1.13 darwin/amd64


Likely caused by golang/go@b9edee3


A possible solution is to exclude nested modules from the ALL_SRC variable in the makefile. However, this will also remove them from the make test target. To bring back testing of components in nested modules, we can call make test in component subdirectories.

This will probably also require merging test coverage results into one report.

[QUESTION] Possibility to use duration as policy for tail-based sampling

Does the OTel Collector provide a possibility to use duration as a policy?
I am trying to achieve that with the following collector configuration:

...
processors:
  batch:
  queued_retry:
  tail_sampling:
    decision_wait: 10s
    policies:
      - name: sample-long-running-requests
        type: numeric_attribute
        numeric_attribute: {key: duration, min_value: 1000, max_value: 10000}
...

tailsamplingprocessor: Sampling decision from OnDroppedSpans() is not honoured

In the tail sampling processor, the following lines of code delete the trace from memory without honouring the sampling decision from OnDroppedSpans() for the given policies.

https://github.com/open-telemetry/opentelemetry-collector/blob/fe3782c76de259c34c381e7e2d0a732c014a87fc/processor/samplingprocessor/tailsamplingprocessor/processor.go#L341-L346

We could either:

  • Modify the OnDroppedSpans() signature to never return a sampling decision (force the drop of an old trace). Currently the implementation nullifies the always_sample policy.
  • Modify the dropTrace() / ConsumeTraceData() method to accommodate custom use cases where the user might want to stop trace ingestion if the queue is full, instead of dropping old traces which are already batched for processing.
