
common's Issues

common logging has a type error

D:...\github.com\weaveworks\common\logging\logging.go:35:13: cannot use hook (type *promrus.PrometheusHook) as type "github.com/sirupsen/logrus".Hook in argument to "github.com/sirupsen/logrus".AddHook:

*promrus.PrometheusHook does not implement "github.com/sirupsen/logrus".Hook (wrong type for Fire method)

	have Fire(*"github.com/weaveworks/promrus/vendor/github.com/sirupsen/logrus".Entry) error
	want Fire(*"github.com/sirupsen/logrus".Entry) error

test.Diff output too verbose

Here's an example of a test failure, reported with test.Diff

--- FAIL: TestContainerHostnameRenderer (0.08s)
	container_test.go:82: 
		--- want
		+++ have
		@@ -247,6 +247,45 @@
		 }
		 , }
		 }
		+, }) {
		+   psMap: (*ps.tree)(0xc420c760e0)(({;10.10.10.20;54001: {;10.10.10.20;54001 endpoint {} {} [;192.168.1.1;80] {0001-01-01 00:00:00 +0000 UTC []} {} {} map[] {} {}}, a1b2c3d4e5;<container>: {a1b2c3d4e5;<container> container {} {} [5e4d3c2b1a;<container>] {0001-01-01 00:00:00 +0000 UTC []} {} {} map[] {} {;10.10.10.20;54001: (report.Node) {
...
		+ Children: (report.NodeSet) {}
		+}
		+, }) {
		+           psMap: (*ps.tree)(0xc4208f7b90)(({;10.10.10.20;54002: {;10.10.10.20;54002 endpoint {} {} [;192.168.1.1;80] {0001-01-01 00:00:00 +0000 UTC []} {} {} map[] {} {}}, }
		+) {
		+            count: (int) 1,
		+            hash: (uint64) 14557804364885503053,
		+            key: (string) (len=18) ";10.10.10.20;54002",
		+            value: (report.Node) {
		+             ID: (string) (len=18) ";10.10.10.20;54002",
		+             Topology: (string) (len=8) "endpoint",
		+             Counters: (report.Counters) ({}) {
		+              psMap: (*ps.tree)(0x25e4c00)(({}
		+) {
		+               count: (int) 0,
		+               hash: (uint64) 0,
		+               key: (string) "",
		+               value: (interface {}) <nil>,
		+               children: ([8]*ps.tree) (len=8 cap=8) {
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>)
		+               }
		+              })
		+             },
		+             Sets: (report.Sets) ({}) {
		+              psMap: (*ps.tree)(0x25e4c00)(({}
		+) {
		+               count: (int) 0,
		+               hash: (uint64) 0,
		+               key: (string) "",
		+               value: (interface {}) <nil>,
		+               children: ([8]*ps.tree) (len=8 cap=8) {
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>),
		+                (*ps.tree)(0x25e4c00)(<already shown>)
		+               }
		+              })
		+             },
...

This is showing the internal structures of ps.Maps, which is clearly not desirable.

This used to work much better until 544c349. That set config.ContinueOnMethod = true, which causes structures to be traversed even when they implement Stringer.
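
For illustration, a minimal sketch of the spew setting involved, assuming test.Diff builds its dump on davecgh/go-spew; the mapLike type is made up for the example:

package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

// mapLike stands in for a type such as ps.Map that implements fmt.Stringer.
type mapLike struct{ inner map[string]int }

func (m mapLike) String() string { return fmt.Sprintf("mapLike(len=%d)", len(m.inner)) }

func main() {
	v := mapLike{inner: map[string]int{"a": 1}}

	// Default behaviour: spew invokes String() and stops there.
	concise := spew.ConfigState{Indent: " "}
	fmt.Println(concise.Sdump(v))

	// With ContinueOnMethod set (as in 544c349), spew invokes String() and then
	// keeps recursing into the internal fields, producing output like the above.
	verbose := spew.ConfigState{Indent: " ", ContinueOnMethod: true}
	fmt.Println(verbose.Sdump(v))
}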

Jaeger agent setup error

When configuring Jaeger tracing, it would be good to return the error to the caller rather than exiting. The caller can then decide whether to exit or continue.

FYR: grafana/loki#1397
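
A minimal sketch of the requested behaviour, assuming the uber/jaeger-client-go config package; the installJaeger name and wiring are illustrative:

package tracing

import (
	"io"

	jaegercfg "github.com/uber/jaeger-client-go/config"
)

// installJaeger returns any setup error instead of exiting, so the caller can
// decide whether a tracing failure is fatal.
func installJaeger(serviceName string) (io.Closer, error) {
	cfg, err := jaegercfg.FromEnv()
	if err != nil {
		return nil, err
	}
	closer, err := cfg.InitGlobalTracer(serviceName)
	if err != nil {
		return nil, err
	}
	return closer, nil
}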

Jaeger is always enabled

if cfg.Sampler.SamplingServerURL == "" && cfg.Reporter.LocalAgentHostPort == "" && cfg.Reporter.CollectorEndpoint == "" {

Jaeger is always enabled because LocalAgentHostPort defaults to 'localhost:6831'.

Annoyingly, the environment variable names used by Jaeger are not publicly exposed by the jaeger package, so a quick, clean fix is not that easy.
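
A minimal sketch of one possible workaround: hard-code the documented JAEGER_* variable names and only install the tracer when at least one of them is set (the names below are copied from the Jaeger client documentation, since the package does not export them):

package tracing

import "os"

// jaegerConfiguredFromEnv reports whether the user explicitly pointed Jaeger at
// an agent, collector or sampling server via environment variables.
func jaegerConfiguredFromEnv() bool {
	for _, name := range []string{
		"JAEGER_AGENT_HOST",
		"JAEGER_ENDPOINT",
		"JAEGER_SAMPLER_MANAGER_HOST_PORT",
	} {
		if os.Getenv(name) != "" {
			return true
		}
	}
	return false
}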

node_exporter/https moved to exporter-toolkit/web

go: finding module for package github.com/prometheus/node_exporter/https
github.com/grafana/agent/cmd/grafana-agent-crow imports
	github.com/weaveworks/common/server imports
	github.com/prometheus/node_exporter/https: module github.com/prometheus/node_exporter@latest found (v1.2.2), but does not contain package github.com/prometheus/node_exporter/https

It appears as though github.com/prometheus/node_exporter/https was moved to github.com/prometheus/exporter-toolkit/https (see prometheus/node_exporter#1907).
That was then moved to github.com/prometheus/exporter-toolkit/web (see prometheus/exporter-toolkit#29).

Better handle configuring jaeger tracing with thrift http transport

I'm trying to configure Loki to send traces to Tempo over the thrift HTTP protocol for the OpenTelemetry Jaeger receiver.
Loki uses weaveworks/common to set this up.

I've configured my components with the JAEGER_ENDPOINT env var, but I'm getting an error:

2020-11-04T14:53:17-08:00 level=error ts=2020-11-04T22:53:17.150980442Z caller=main.go:112 msg="error in initializing tracing. tracing will not be enabled" err="no trace report agent or config server specified"

This appears to come from here:
https://github.com/grafana/loki/blob/master/vendor/github.com/weaveworks/common/tracing/tracing.go#L32-L43

It appears there may be a workaround by specifying some other options, like JAEGER_AGENT_HOST, as that will cause the LocalAgentHostPort configuration to get a default value, bypassing the validation error. And according to https://github.com/grafana/loki/blob/master/vendor/github.com/uber/jaeger-client-go/config/config.go#L416, if the CollectorEndpoint (JAEGER_ENDPOINT) is set, then the UDP transport isn't even used.

I'm still testing the workaround, but https://github.com/grafana/tempo/blob/master/example/docker-compose/docker-compose.loki.yaml#L70-L73 indicates it might work.

Allow disabling the GRPC/HTTP listeners

Problem:

When using common/server/server.go to serve only HTTP traffic, there currently is no way to entirely disable the GRPC listener. Conversely, when only using GRPC, the HTTP listener will always be started.

This can be an issue when you want to, for example, limit the attack surface or reduce resource usage.

Proposed solution:

Add HTTPDisabled and GRPCDisabled configuration options that, when set to true, will disable HTTP and GRPC servers respectively.
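
A minimal sketch of what the proposal could look like; the field and flag names are illustrative, not existing server.Config API:

package server

import "flag"

type Config struct {
	// ... existing fields ...

	HTTPDisabled bool // when true, do not open the HTTP listener
	GRPCDisabled bool // when true, do not open the GRPC listener
}

func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
	f.BoolVar(&cfg.HTTPDisabled, "server.http-disabled", false, "Disable the HTTP server entirely.")
	f.BoolVar(&cfg.GRPCDisabled, "server.grpc-disabled", false, "Disable the GRPC server entirely.")
}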

Discussion

Have we ever considered this in the past, and is there a reason we decided against it?
I'm curious to hear your thoughts on this approach.
Are there any alternatives that you would recommend?
Would you be open to a contribution?

Logging: avoid expensive formatting when level is disabled.

This function:

common/logging/gokit.go

Lines 39 to 41 in 53b7240

func (g gokit) Debugf(format string, args ...interface{}) {
level.Debug(g.Logger).Log("msg", fmt.Sprintf(format, args...))
}

will do the printf with all string formatting, memory allocation, etc., regardless of whether debug logging is enabled.

This appears to be the intention of go-kit/log/level; it does not expose any way to ask about levels outside of Log().
We could keep a note of which levels are enabled, and thus short-circuit the Sprintf call above.
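
A minimal sketch of the idea, assuming the wrapper records whether debug is enabled when it is built; the gokit struct here is a simplified stand-in for the one in common/logging/gokit.go:

package logging

import (
	"fmt"

	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/log/level"
)

// gokit remembers whether debug logging is enabled so Debugf can skip the
// Sprintf (and its allocations) when the line would be filtered anyway.
type gokit struct {
	log.Logger
	debugEnabled bool
}

func (g gokit) Debugf(format string, args ...interface{}) {
	if !g.debugEnabled {
		return
	}
	level.Debug(g.Logger).Log("msg", fmt.Sprintf(format, args...))
}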

We depend on too much stuff

$ gvt fetch github.com/weaveworks/common
2017/01/25 14:21:24 Fetching: github.com/weaveworks/common
2017/01/25 14:21:26 · Skipping (existing): golang.org/x/net/context
2017/01/25 14:21:26 · Fetching recursive dependency: golang.org/x/tools/cover
2017/01/25 14:21:27 · Skipping (existing): github.com/opentracing/opentracing-go/ext
2017/01/25 14:21:27 · Skipping (existing): github.com/davecgh/go-spew/spew
2017/01/25 14:21:27 · Fetching recursive dependency: github.com/weaveworks/docker/pkg/mflag
2017/01/25 14:21:35 ·· Fetching recursive dependency: github.com/docker/docker/pkg/homedir
2017/01/25 14:22:04 ··· Fetching recursive dependency: github.com/docker/docker/vendor/github.com/opencontainers/runc/libcontainer/user
2017/01/25 14:22:04 ··· Fetching recursive dependency: github.com/docker/docker/pkg/idtools
2017/01/25 14:22:04 ···· Fetching recursive dependency: github.com/docker/docker/pkg/system
2017/01/25 14:22:04 ····· Fetching recursive dependency: github.com/docker/docker/vendor/github.com/Microsoft/go-winio
2017/01/25 14:22:04 ······ Fetching recursive dependency: github.com/docker/docker/vendor/golang.org/x/sys/windows
2017/01/25 14:22:04 ····· Fetching recursive dependency: github.com/docker/docker/vendor/github.com/docker/go-units
2017/01/25 14:22:04 ····· Fetching recursive dependency: github.com/docker/docker/vendor/github.com/Sirupsen/logrus
2017/01/25 14:22:04 ······ Fetching recursive dependency: github.com/docker/docker/vendor/golang.org/x/sys/unix
2017/01/25 14:22:05 ·· Fetching recursive dependency: github.com/docker/docker/pkg/mflag
2017/01/25 14:22:05 command "fetch" failed: error fetching github.com/docker/docker/pkg/mflag: lstat /var/folders/_b/ktq_dxhx0nbbw7gjtdzb3tn40000gn/T/gvt-769657176/pkg/mflag: no such file or directory

In particular, the mflag package is painful.

Restarting a server leads to metrics re-registration issues

Problem:

When I run a common/server/server.go, then shut it down and start a new one (e.g. with a different port) in the same metrics namespace, I get a panic due to re-registration of metrics.

I tried working around this by keeping track of the metrics and explicitly Unregister-ing them, but this will still lead to errors such as:

An error has occurred while serving metrics:

1 error(s) occurred:
* collected metric "my_metric_name" { label:<name:"test_label" value:"test_value" > gauge:<value:0 > } was collected before with the same name and label values

This is because even after unregistering, it is not possible to register a new Collector that is inconsistent with the unregistered collector (as per prometheus.Registerer docs).

Proposed solution:

Instead of using reg.MustRegister(<collector>), the server should use reg.Register. When the AlreadyRegisteredError is returned by reg.Register, it will contain the previously registered Collector. The server should use that previously registered Collector instead.
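
A minimal sketch of that pattern with the prometheus client library; registerOrReuse is an illustrative helper, not existing API:

package server

import "github.com/prometheus/client_golang/prometheus"

// registerOrReuse registers c and, if an identical collector was already
// registered, returns the existing one instead of panicking.
func registerOrReuse(reg prometheus.Registerer, c prometheus.Collector) (prometheus.Collector, error) {
	if err := reg.Register(c); err != nil {
		if are, ok := err.(prometheus.AlreadyRegisteredError); ok {
			return are.ExistingCollector, nil
		}
		return nil, err
	}
	return c, nil
}

The server would then keep using the returned collector, so restarting it against the same registry reuses the existing metrics instead of panicking.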

Discussion

I'm curious to hear your thoughts on this approach.
Are there any alternatives that you would recommend?
Would you be open to a contribution?

Broken grpc.WithBalancerName

After updating to google.golang.org/grpc v1.46.0, I can no longer compile; I get the following error:

../../../../pkg/mod/github.com/weaveworks/[email protected]/httpgrpc/server/server.go:137:8: undefined: grpc.WithBalancerName

It seems that WithBalancerName was removed from grpc after being deprecated for some time (grpc/grpc-go#5232).

The code in this repository that needs updating can be found here:

grpc.WithBalancerName(roundrobin.Name),
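
For reference, a minimal sketch of the commonly suggested replacement; whether this is the fix the maintainers choose is of course up to them:

package server

import "google.golang.org/grpc"

func dialOptions() []grpc.DialOption {
	return []grpc.DialOption{
		// Replaces the removed grpc.WithBalancerName(roundrobin.Name).
		grpc.WithDefaultServiceConfig(`{"loadBalancingPolicy": "round_robin"}`),
	}
}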

Dot not allowed for aws azure minio proxy

With grafana/loki#800 I found a problem with the dot handling in this file: https://github.com/weaveworks/common/blob/master/aws/config.go

What I'd like to do is run an S3-speaking Minio proxy for Azure blobs next to my Loki service in k8s. For this I added the following URL to the Loki values.yaml:

s3://<azure-storage-key>:<azure-storage-secret>@minio:9000/logs
s3forcepathstyle: true

This is a normal requirement for a docker or k8s environment.

But the problem now is that the code in https://github.com/weaveworks/common/blob/master/aws/config.go assumes (because of the missing dot) that the minio host is a standard AWS URL, which it is not in this case; it should be treated as an endpoint.

Renaming it to minio.local would only help in Docker, not in k8s, where dots in service names are not allowed.

Do you have any suggestions for how to handle this? For now I have to run Minio as an external service.

Logging middleware is reporting every websocket call

Seeing things like this:

WARN: 2017/10/10 15:49:01.291677 GET /api/app/proud-wind-05/api/notification/sender (0) 58.274676905s
WARN: 2017/10/10 15:49:01.291704 Is websocket request: true
GET /api/app/proud-wind-05/api/notification/sender HTTP/1.1
Host: frontend.dev.weave.works
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en;q=0.5
Cache-Control: no-cache
Connection: keep-alive, Upgrade
Cookie: [redacted]
Origin: https://frontend.dev.weave.works
Pragma: no-cache
Sec-Websocket-Extensions: permessage-deflate
Sec-Websocket-Key: Q9tVfJ1lS+qSY6tr+UT00g==
Sec-Websocket-Version: 13
Upgrade: websocket
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:56.0) Gecko/20100101 Firefox/56.0


WARN: 2017/10/10 15:49:01.291728 Response: 

The important part is the (0) status at the top, which is why it was printed out: the code doesn't print if 100 <= statusCode && statusCode < 500.

I think this is because it's a websocket request.

Add gzip compression middleware to server

Hi, we noticed that Server by default does not compress its responses, even if the client accepts this. I.e. when an HTTP client sends a request with Accept-Encoding: gzip, the server can choose to compress the response (using gzip in this case), which saves bandwidth and is usually more efficient.

I suggest adding this to the middleware that is added by default. I'm basically echoing @bboreham's comments:

  • If it's not enabled by default, most users won't be able to profit from this
  • The Go http client will request compression and decompress responses by default, so a user will not even notice compression is used.

To implement this we can use nytimes/gziphandler. Its interface matches middleware.Interface very closely, so the entire implementation would be pretty small. gziphandler will by default only compress responses larger than 1400 bytes.

package middleware

import (
	"net/http"

	"github.com/NYTimes/gziphandler"
)

type Gzip struct {
}

func (g Gzip) Wrap(next http.Handler) http.Handler {
	return gziphandler.GzipHandler(next)
}

gziphandler is already being used by Cortex, so it has been battle tested with Server.

If this makes sense I can create a PR for this with tests etc.

Future of weaveworks/common repo

I'm writing this to make the situation clear for projects that depend on this repo.

weaveworks/common was created at Weaveworks, for the Weave Cloud service.
Today Weave Cloud is shut down, and Weaveworks the company has no motivation to support the repo.
I (Bryan Boreham) have been the only maintainer for several years, and I left Weaveworks 2 years ago.

As of ~March 2023, Continuous Integration started to fail intermittently. I do not seem to have permissions to fix it.

We have two options:

  1. Transfer the repo to another organisation.
  2. Archive the repo, everyone wishing to update it makes a copy of what they need.

I reached out to the main communities of downstream projects I know - Grafana, Cortex and Thanos.
To date nobody has gone for option 1.

trace: Migrate to open-telemetry for instrumentation.

Currently Loki is instrumented with the opentracing/jaeger client libraries for tracing (I believe it's the same for Mimir and Tempo as well).

This instrumentation comes from the weaveworks/common package and dskit's spanlogger package.

Those client libraries (opentracing, jaeger) are deprecated in favor of the OpenTelemetry client SDK, so it would be better to migrate.

It should be possible to migrate the underlying dependencies without changing any API of those packages.

An example of using the otel tracing client libraries for instrumentation in Go is here.
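
For illustration, a minimal sketch of OpenTelemetry setup in Go; the exporter choice and names are assumptions, not a proposal for this package's API:

package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Send spans over OTLP/HTTP, e.g. to an OpenTelemetry Collector or Tempo.
	exporter, err := otlptracehttp.New(ctx)
	if err != nil {
		panic(err)
	}

	provider := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer func() { _ = provider.Shutdown(ctx) }()
	otel.SetTracerProvider(provider)

	// Instrumented code then only talks to the otel API.
	tracer := otel.Tracer("example")
	_, span := tracer.Start(ctx, "do-work")
	defer span.End()
}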

HTTPoGRPC masking errors

As long as all the IO works, all HTTP responses will be returned as successes. We should ensure we return HTTP 500s as errors.

Trace sampling should not be on by default when trace reporting is configured

Trace sampling should not be enabled by default if trace reporting is configured. It should be enabled separately.

The decision about whether to capture a trace should be made by one service, as close to the request source as possible.
Then that decision should be passed along with the request and honored by each service involved in the request processing.

For example in cortex, if a service is configured to send samples to jaeger, it also defaults to a sampling rate of 10 per second.
If the auth layer, distributor and ingester were all configured this way, you would see 10-20 traces per second from the distributor, and 20-30 traces per second from the ingester. Additionally you'd see some traces which start at the distributor or the ingester.

Cortex issue: cortexproject/cortex#885

Enable advanced TLS configuration parameters

Hi @pracucci & @bboreham Good Day!

We need the full set of TLS configuration parameters that are available via exporter-toolkit/web to also be configurable via the weaveworks/common package.

We see that as part of #245 this was removed. We are using Cortex, and per our organization's standards we want to use a set of strong ciphers for all HTTPS listening endpoints. If we had the above config parameters we could address this using the cipher_suites and prefer_server_cipher_suites options.

The same problem applies to Loki, Tempo and Mimir. Let us know if you need any additional information.

Note: We already enabled the client authentication by setting "RequireAndVerifyClientCert".
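
For illustration, a minimal crypto/tls sketch of the kind of settings we are after; the cipher list is an example, not our exact policy:

package server

import (
	"crypto/tls"
	"net/http"
)

func newTLSServer(addr string, handler http.Handler) *http.Server {
	return &http.Server{
		Addr:    addr,
		Handler: handler,
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS12,
			CipherSuites: []uint16{ // equivalent of the cipher_suites option
				tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
				tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
			},
			PreferServerCipherSuites: true, // equivalent of prefer_server_cipher_suites
			ClientAuth:               tls.RequireAndVerifyClientCert,
		},
	}
}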

HTTP Middleware Error Logging: Vision on StatusBadGateway/ServiceUnavailable

The http logging middleware splits out different request results and logs them as either debug or warn. Generally errors are logged as warn and successes are logged as debug.

if 100 <= statusCode && statusCode < 500 || statusCode == http.StatusBadGateway || statusCode == http.StatusServiceUnavailable {
	l.logWithRequest(r).Debugf("%s %s (%d) %s", r.Method, uri, statusCode, time.Since(begin))
	if l.LogRequestHeaders && headers != nil {
		l.logWithRequest(r).Debugf("ws: %v; %s", IsWSHandshakeRequest(r), string(headers))
	}
} else {
	l.logWithRequest(r).Warnf("%s %s (%d) %s Response: %q ws: %v; %s",
		r.Method, uri, statusCode, time.Since(begin), buf.Bytes(), IsWSHandshakeRequest(r), headers)
}

We need to log the below error conditions that are currently being logged as debug. Unfortunately, due to volume, we can't turn on debug logging.

statusCode == http.StatusBadGateway || statusCode == http.StatusServiceUnavailable

My guess is that these two statuses are logged at a debug level due to volume if the backend is unavailable. We would like to log these failures at a higher level than debug, but also recognize that the volume would be too great to log if a backend is down.

The change we'd like to make:

  • Move http.StatusBadGateway and http.StatusServiceUnavailable to be logged at a Warn level with the other errors
  • Use a configurable rate-limited logger to log these errors instead of logging 100% of them at Warn (a sketch follows after this list)
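
A minimal sketch of the rate-limited warning, assuming golang.org/x/time/rate; the limiter settings and wiring are illustrative:

package middleware

import (
	"log"
	"net/http"
	"time"

	"golang.org/x/time/rate"
)

// Allow roughly one upstream-failure warning per second, with a small burst.
var upstreamWarnLimiter = rate.NewLimiter(rate.Limit(1), 5)

func logUpstreamFailure(r *http.Request, statusCode int, duration time.Duration) {
	if !upstreamWarnLimiter.Allow() {
		return // drop the line rather than flood the log at warn level
	}
	log.Printf("WARN: %s %s (%d) %s", r.Method, r.URL.RequestURI(), statusCode, duration)
}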

Thoughts?

If this (or something similar) is acceptable I'd be glad to PR it.

@bboreham

panic: duplicate metrics collector registration attempted

I just want to support reload in Loki/promtail by calling server.Shutdown() and then calling server.New() again:

panic: duplicate metrics collector registration attempted

goroutine 212 [running]:
github.com/grafana/loki/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).MustRegister(0xc0000de370, 0xc000670090, 0x1, 0x1)
	/deploy/go/src/github.com/grafana/loki/vendor/github.com/prometheus/client_golang/prometheus/registry.go:391 +0xad
github.com/grafana/loki/vendor/github.com/prometheus/client_golang/prometheus.MustRegister(...)
	/deploy/go/src/github.com/grafana/loki/vendor/github.com/prometheus/client_golang/prometheus/registry.go:176
github.com/grafana/loki/vendor/github.com/weaveworks/common/server.New(0x0, 0x0, 0x0, 0x0, 0x2378, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/deploy/go/src/github.com/grafana/loki/vendor/github.com/weaveworks/common/server/server.go:111 +0x464
github.com/grafana/loki/pkg/promtail/server.New(0x0, 0x0, 0x0, 0x0, 0x2378, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/deploy/go/src/github.com/grafana/loki/pkg/promtail/server/server.go:36 +0x55
github.com/grafana/loki/pkg/promtail.New(0x7fff0d6b2248, 0xf, 0x6, 0x0, 0x0)
	/deploy/go/src/github.com/grafana/loki/pkg/promtail/promtail.go:52 +0x319
main.(*Master).DoReload(0xc00002a420, 0x0, 0x0)
	/deploy/go/src/github.com/grafana/loki/cmd/promtail/main.go:48 +0x74
main.(*Master).Reload.func1(0xc00002a420)
	/deploy/go/src/github.com/grafana/loki/cmd/promtail/main.go:40 +0x2b
created by main.(*Master).Reload
	/deploy/go/src/github.com/grafana/loki/cmd/promtail/main.go:39 +0x3f

A similar issue was found, but there seems to be no fix here.

Further secure TLS communications

Currently when using TLS, the servers will accept requests from any client that has a certificate signed by the specified Certificate Authority. As such, I'd like to see custom server certificate validation supported. This will help enforce deny-by-default.

I'd like to be able to pass a flag, such as -cert-allowed-cn, that can be used to create a custom VerifyPeerCertificate (part of the crypto/tls package) and can be passed as a callback directly to the tls config. All this function needs to do is verify that the seen common-name is the same as the expected common-name.

Willing to submit a PR if the maintainers think this is a good idea. Thanks!
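
A minimal sketch of the kind of callback I have in mind; the allowedCN parameter and function names are illustrative:

package server

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
)

// verifyCommonName builds a VerifyPeerCertificate callback that only accepts
// client certificates whose subject CN matches the expected value.
func verifyCommonName(allowedCN string) func([][]byte, [][]*x509.Certificate) error {
	return func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
		for _, chain := range verifiedChains {
			if len(chain) > 0 && chain[0].Subject.CommonName == allowedCN {
				return nil
			}
		}
		return errors.New("client certificate common name not allowed")
	}
}

func newTLSConfig(allowedCN string) *tls.Config {
	return &tls.Config{
		ClientAuth:            tls.RequireAndVerifyClientCert,
		VerifyPeerCertificate: verifyCommonName(allowedCN),
	}
}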

httpgrpc client/server does not preserve root trace ID

We see disconnected traces in jaeger where requests cross this boundary, i.e. we see separate traces for auth -> distributor and distributor -> ingester.
Oddly, it does seem to preserve the fact that the request ought to be sampled.

Provide function for finding address of interface used in outgoing connections

I'm working on instrumenting Go services so they can make Zipkin aware of their location, and found (through this SO answer) a way of fetching the service IP.

I saw there's already a function for getting address given an interface:

func GetFirstAddressOf(name string) (string, error) {

I guess services will almost (?) always use the eth0 interface, but it seems to me that getting the address without fixing a particular interface is a more robust way of finding the service IP.

If so, shall we provide such a function, for instance GetOutboundIP, in weaveworks/common?
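
For reference, a minimal sketch of the approach from that answer; the destination address only needs to be routable, and no packets are actually sent for a UDP dial:

package main

import (
	"fmt"
	"net"
)

// GetOutboundIP returns the local address the kernel would use for outgoing
// connections, without hard-coding a particular interface name.
func GetOutboundIP() (net.IP, error) {
	conn, err := net.Dial("udp", "8.8.8.8:80")
	if err != nil {
		return nil, err
	}
	defer conn.Close()
	return conn.LocalAddr().(*net.UDPAddr).IP, nil
}

func main() {
	ip, err := GetOutboundIP()
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}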

grpc_logging interceptor logs full request bodies as ASCII byte arrays, not strings

A prime example of this is the behavior seen on a failed push to an ingester where the entire push body is logged as a frighteningly large byte array.

Reference #89 and cortexproject/cortex#709 for a more thorough conversation on the details.

The above-referenced weaveworks/common PR avoids this issue by simply allowing a per-service configuration that truncates these logs, omitting the request body.

It might be better, though, to retain the request bodies and simply log them as the strings they are, rather than their ASCII code representation.

error of "github.com/uber/jaeger-lib/metrics/testutils"

gitlab.com/momentum-valley/rome/cmd/rome imports
github.com/weaveworks/common/middleware imports
github.com/uber/jaeger-client-go tested by
github.com/uber/jaeger-client-go.test imports
github.com/uber/jaeger-lib/metrics/testutils: module github.com/uber/jaeger-lib@latest found (v2.2.0+incompatible), but does not contain package github.com/uber/jaeger-lib/metrics/testutils

flaky test TestRunReturnsError

As seen in https://circleci.com/gh/weaveworks/common/412, https://circleci.com/gh/weaveworks/common/406

panic: test timed out after 1m0s

goroutine 12 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:1240 +0x146
created by time.goFunc
	/usr/local/go/src/time/sleep.go:172 +0x52

goroutine 1 [chan receive]:
testing.(*T).Run(0xc4202be000, 0xc8d45a, 0x13, 0xcaa220, 0xc4201c9c01)
	/usr/local/go/src/testing/testing.go:825 +0x597
testing.runTests.func1(0xc4202be000)
	/usr/local/go/src/testing/testing.go:1063 +0xa5
testing.tRunner(0xc4202be000, 0xc4201c9d48)
	/usr/local/go/src/testing/testing.go:777 +0x16e
testing.runTests(0xc42024e2a0, 0x1040a60, 0x3, 0x3, 0xc42026e080)
	/usr/local/go/src/testing/testing.go:1061 +0x4e2
testing.(*M).Run(0xc42026e080, 0x0)
	/usr/local/go/src/testing/testing.go:978 +0x2ce
main.main()
	_testmain.go:184 +0x335

goroutine 50 [syscall]:
os/signal.signal_recv(0x487db1)
	/usr/local/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:22 +0x30
created by os/signal.init.0
	/usr/local/go/src/os/signal/signal_unix.go:28 +0x4f

goroutine 6 [select, locked to thread]:
runtime.gopark(0xcabfa8, 0x0, 0xc85824, 0x6, 0x18, 0x1)
	/usr/local/go/src/runtime/proc.go:291 +0xf9
runtime.selectgo(0xc420097f50, 0xc4200262a0)
	/usr/local/go/src/runtime/select.go:392 +0x11d4
runtime.ensureSigM.func1()
	/usr/local/go/src/runtime/signal_unix.go:549 +0x19f
runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:2361 +0x1

goroutine 41 [chan receive]:
github.com/weaveworks/common/server.TestRunReturnsError.func1(0xc4202c01e0)
	/go/src/github.com/weaveworks/common/server/server_test.go:116 +0x218
testing.tRunner(0xc4202c01e0, 0xc42025cb60)
	/usr/local/go/src/testing/testing.go:777 +0x16e
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:824 +0x565

goroutine 54 [select]:
github.com/weaveworks/common/signals.(*Handler).Loop(0xc4201b43f0)
	/go/src/github.com/weaveworks/common/signals/signals.go:47 +0x284
github.com/weaveworks/common/server.(*Server).Run.func1(0xc4202c6000, 0xc42028c3c0)
	/go/src/github.com/weaveworks/common/server/server.go:187 +0x71
created by github.com/weaveworks/common/server.(*Server).Run
	/go/src/github.com/weaveworks/common/server/server.go:186 +0x91

goroutine 62 [select]:
github.com/weaveworks/common/signals.(*Handler).Loop(0xc420472c00)
	/go/src/github.com/weaveworks/common/signals/signals.go:47 +0x284
github.com/weaveworks/common/server.(*Server).Run.func1(0xc420272a00, 0xc4201da960)
	/go/src/github.com/weaveworks/common/server/server.go:187 +0x71
created by github.com/weaveworks/common/server.(*Server).Run
	/go/src/github.com/weaveworks/common/server/server.go:186 +0x91

goroutine 40 [chan receive]:
testing.(*T).Run(0xc4202c00f0, 0xc83cb0, 0x4, 0xc42025cb60, 0x0)
	/usr/local/go/src/testing/testing.go:825 +0x597
github.com/weaveworks/common/server.TestRunReturnsError(0xc4202c00f0)
	/go/src/github.com/weaveworks/common/server/server_test.go:105 +0x11e
testing.tRunner(0xc4202c00f0, 0xcaa220)
	/usr/local/go/src/testing/testing.go:777 +0x16e
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:824 +0x565

goroutine 43 [chan receive]:
github.com/weaveworks/common/server.(*Server).Run(0xc420272a00, 0xc420084fb8, 0x45c96d)
	/go/src/github.com/weaveworks/common/server/server.go:222 +0x1d1
github.com/weaveworks/common/server.TestRunReturnsError.func1.1(0xc4202415c0, 0xc420272a00)
	/go/src/github.com/weaveworks/common/server/server_test.go:112 +0x39
created by github.com/weaveworks/common/server.TestRunReturnsError.func1
	/go/src/github.com/weaveworks/common/server/server_test.go:111 +0x175

goroutine 27 [IO wait]:
internal/poll.runtime_pollWait(0x7fa45dd95ea0, 0x72, 0xc4202d5be0)
	/usr/local/go/src/runtime/netpoll.go:173 +0x5e
internal/poll.(*pollDesc).wait(0xc42026e518, 0x72, 0xc4201d8400, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0xe5
internal/poll.(*pollDesc).waitRead(0xc42026e518, 0xffffffffffffff00, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x4b
internal/poll.(*FD).Accept(0xc42026e500, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/internal/poll/fd_unix.go:372 +0x2e2
net.(*netFD).accept(0xc42026e500, 0xc4202802a8, 0xc420084d90, 0xc420226d80)
	/usr/local/go/src/net/fd_unix.go:238 +0x53
net.(*TCPListener).accept(0xc4202501f8, 0xc4201d8470, 0xc420084df0, 0x4538bd)
	/usr/local/go/src/net/tcpsock_posix.go:136 +0x4e
net.(*TCPListener).Accept(0xc4202501f8, 0xcab348, 0xc4202801c0, 0xc420232f60, 0x0)
	/usr/local/go/src/net/tcpsock.go:259 +0x50
github.com/weaveworks/common/vendor/google.golang.org/grpc.(*Server).Serve(0xc4202801c0, 0xd0d5a0, 0xc4202501f8, 0x0, 0x0)
	/go/src/github.com/weaveworks/common/vendor/google.golang.org/grpc/server.go:544 +0x2e1
github.com/weaveworks/common/server.(*Server).Run.func3(0xc420272a00, 0xc4201da960)
	/go/src/github.com/weaveworks/common/server/server.go:211 +0x9e
created by github.com/weaveworks/common/server.(*Server).Run
	/go/src/github.com/weaveworks/common/server/server.go:210 +0x199
FAIL	github.com/weaveworks/common/server	60.044s

Can't pass in options to opentracing handler

So, the query endpoint AND the /metrics endpoint both end up under the operation "HTTP GET", and this clutters up the traces a fair bit.

We could pass in something like https://godoc.org/github.com/opentracing-contrib/go-stdlib/nethttp#OperationNameFunc to differentiate them, but because opentracing is a default middleware, there is no way to do that; see: https://github.com/weaveworks/common/blob/master/middleware/http_tracing.go#L17

Now, we can pass in options to the tracer, but would you be open to adding something like OpentracingOptions to server.Config?
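
For illustration, a minimal sketch of what passing such an option looks like with go-stdlib's nethttp middleware; the handler and naming scheme are examples, not a concrete proposal for server.Config:

package main

import (
	"net/http"

	"github.com/opentracing-contrib/go-stdlib/nethttp"
	opentracing "github.com/opentracing/opentracing-go"
)

func main() {
	mux := http.NewServeMux()

	// Name operations after method and path, so /metrics and the query
	// endpoints no longer all collapse into "HTTP GET".
	handler := nethttp.Middleware(
		opentracing.GlobalTracer(),
		mux,
		nethttp.OperationNameFunc(func(r *http.Request) string {
			return r.Method + " " + r.URL.Path
		}),
	)

	_ = http.ListenAndServe(":8080", handler)
}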
