weaveworks / common
Libraries used in multiple Weave projects
License: Other
Currently I am doing this:
req.Header.Set("Authorization", fmt.Sprintf("Scope-Probe token=%s", token))
Should there be a helper in weaveworks/common/user?
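If a helper were added, a minimal sketch might look like this (the name SetScopeProbeToken is hypothetical, not an existing function in weaveworks/common/user):

package user

import (
	"fmt"
	"net/http"
)

// SetScopeProbeToken is a hypothetical helper that would replace the
// hand-rolled header line above.
func SetScopeProbeToken(req *http.Request, token string) {
	req.Header.Set("Authorization", fmt.Sprintf("Scope-Probe token=%s", token))
}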
Please see the original issue: grafana/loki#759
The problem seems to be located with the logic at:
https://github.com/weaveworks/common/blob/master/aws/config.go#L48
D:...\github.com\weaveworks\common\logging\logging.go:35:13: cannot use hook (type *promrus.PrometheusHook) as type "github.com/sirupsen/logrus".Hook in argument to "github.com/sirupsen/logrus".AddHook:
*promrus.PrometheusHook does not implement "github.com/sirupsen/logrus".Hook (wrong type for Fire method)
have Fire(*"github.com/weaveworks/promrus/vendor/github.com/sirupsen/logrus".Entry) error
want Fire(*"github.com/sirupsen/logrus".Entry) error
Here's an example of a test failure, reported with test.Diff
--- FAIL: TestContainerHostnameRenderer (0.08s)
container_test.go:82:
--- want
+++ have
@@ -247,6 +247,45 @@
}
, }
}
+, }) {
+ psMap: (*ps.tree)(0xc420c760e0)(({;10.10.10.20;54001: {;10.10.10.20;54001 endpoint {} {} [;192.168.1.1;80] {0001-01-01 00:00:00 +0000 UTC []} {} {} map[] {} {}}, a1b2c3d4e5;<container>: {a1b2c3d4e5;<container> container {} {} [5e4d3c2b1a;<container>] {0001-01-01 00:00:00 +0000 UTC []} {} {} map[] {} {;10.10.10.20;54001: (report.Node) {
...
+ Children: (report.NodeSet) {}
+}
+, }) {
+ psMap: (*ps.tree)(0xc4208f7b90)(({;10.10.10.20;54002: {;10.10.10.20;54002 endpoint {} {} [;192.168.1.1;80] {0001-01-01 00:00:00 +0000 UTC []} {} {} map[] {} {}}, }
+) {
+ count: (int) 1,
+ hash: (uint64) 14557804364885503053,
+ key: (string) (len=18) ";10.10.10.20;54002",
+ value: (report.Node) {
+ ID: (string) (len=18) ";10.10.10.20;54002",
+ Topology: (string) (len=8) "endpoint",
+ Counters: (report.Counters) ({}) {
+ psMap: (*ps.tree)(0x25e4c00)(({}
+) {
+ count: (int) 0,
+ hash: (uint64) 0,
+ key: (string) "",
+ value: (interface {}) <nil>,
+ children: ([8]*ps.tree) (len=8 cap=8) {
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>)
+ }
+ })
+ },
+ Sets: (report.Sets) ({}) {
+ psMap: (*ps.tree)(0x25e4c00)(({}
+) {
+ count: (int) 0,
+ hash: (uint64) 0,
+ key: (string) "",
+ value: (interface {}) <nil>,
+ children: ([8]*ps.tree) (len=8 cap=8) {
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>),
+ (*ps.tree)(0x25e4c00)(<already shown>)
+ }
+ })
+ },
...
This is showing the internal structures of ps.Maps, which is clearly not desirable.
This used to work much better until 544c349, which set config.ContinueOnMethod = true, causing structures to be traversed even when they implement Stringer.
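For illustration, a small sketch of how that spew option changes the output (based on go-spew's documented ConfigState fields):

package main

import (
	"fmt"

	"github.com/davecgh/go-spew/spew"
)

type point struct{ X, Y int }

// String gives the compact form we would like to see in diffs.
func (p point) String() string { return fmt.Sprintf("(%d,%d)", p.X, p.Y) }

func main() {
	cfg := spew.ConfigState{Indent: " "}
	cfg.Dump(point{1, 2}) // stops at String(): prints the compact form

	cfg.ContinueOnMethod = true
	cfg.Dump(point{1, 2}) // prints String() and then recurses into X and Y
}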
https://github.com/weaveworks/common/blob/master/middleware/instrument.go#L70
On this line, the condition effectively says: if fetching the route template returned an error, use the template anyway. In that case the template will be empty or invalid. The logic should be inverted.
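A sketch of the inverted check, using gorilla/mux's documented API (the surrounding code in instrument.go may differ):

package middleware

import (
	"net/http"

	"github.com/gorilla/mux"
)

// routeTemplate returns the matched route template, or "" when none could
// be retrieved. Note the err == nil check: the template is only used when
// it was fetched successfully.
func routeTemplate(r *http.Request) string {
	if route := mux.CurrentRoute(r); route != nil {
		if template, err := route.GetPathTemplate(); err == nil {
			return template
		}
	}
	return ""
}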
When configuring Jaeger tracing, it would be good to return the error to the caller rather than exiting. The caller can decide whether to exit or continue.
FYR: grafana/loki#1397
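A sketch of the suggested shape, built on jaeger-client-go's config.FromEnv; the function name and signature here are illustrative, not the current API:

package tracing

import (
	"fmt"
	"io"

	"github.com/uber/jaeger-client-go/config"
)

// NewFromEnv propagates errors instead of exiting, leaving the
// exit-or-continue decision to the caller.
func NewFromEnv(serviceName string) (io.Closer, error) {
	cfg, err := config.FromEnv()
	if err != nil {
		return nil, fmt.Errorf("could not load jaeger config from env: %w", err)
	}
	closer, err := cfg.InitGlobalTracer(serviceName)
	if err != nil {
		return nil, fmt.Errorf("could not initialize jaeger tracer: %w", err)
	}
	return closer, nil
}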
server/server.go imports github.com/opentracing-contrib/go-grpc
This repo doesn't have a license (opentracing-contrib/go-grpc#3), so this makes weaveworks/common unusable for us.
go-grpc looks trivial, so if getting it licensed is somehow hard, it would probably not take very long to rewrite...
I.e. this line:
ext.SpanKindRPCClient.Set(sp)
is unwarranted. Maybe it should be parameterised?
Line 38 in 47e357f
Jaeger is always enabled because LocalAgentHostPort has a default value ('localhost:6831').
Annoyingly, the env variables used by Jaeger are not publicly exposed by the jaeger package, so a quick and clean fix is not that easy.
We have repeated/list flag handling code in multiple repos. Collect it and put it here.
go: finding module for package github.com/prometheus/node_exporter/https
github.com/grafana/agent/cmd/grafana-agent-crow imports
github.com/weaveworks/common/server imports
github.com/prometheus/node_exporter/https: module github.com/prometheus/node_exporter@latest found (v1.2.2), but does not contain package github.com/prometheus/node_exporter/https
It appears as though github.com/prometheus/node_exporter/https was moved to github.com/prometheus/exporter-toolkit/https (see prometheus/node_exporter#1907). That was then moved to github.com/prometheus/exporter-toolkit/web (see prometheus/exporter-toolkit#29).
We'd like to retire our CircleCI subscription this year.
Need to find an alternative to the recorder.
I'm trying to configure Loki to send traces to Tempo over the Thrift HTTP protocol for the OpenTelemetry Jaeger receiver.
Loki uses weaveworks/common
to set this up.
I've configured my components with the JAEGER_ENDPOINT
env var, but I'm getting an error:
2020-11-04T14:53:17-08:00 level=error ts=2020-11-04T22:53:17.150980442Z caller=main.go:112 msg="error in initializing tracing. tracing will not be enabled" err="no trace report agent or config server specified"
This appears to come from here:
https://github.com/grafana/loki/blob/master/vendor/github.com/weaveworks/common/tracing/tracing.go#L32-L43
It appears there may be a workaround by specifying some other options, like JAEGER_AGENT_HOST, as that will cause the LocalAgentHostPort configuration to get a default value, bypassing the validation error. And according to https://github.com/grafana/loki/blob/master/vendor/github.com/uber/jaeger-client-go/config/config.go#L416, if the CollectorEndpoint (JAEGER_ENDPOINT) is set, then the UDP transport isn't even used.
I'm still testing the workaround, but https://github.com/grafana/tempo/blob/master/example/docker-compose/docker-compose.loki.yaml#L70-L73 indicates it might work.
I read that there are edge cases where intercepting status codes from a response writer can fail:
https://github.com/felixge/httpsnoop#why-this-package-exists
Consider replacing https://github.com/weaveworks/common/blob/master/middleware/instrument.go#L44 with httpsnoop.
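A sketch of what the replacement could look like on top of httpsnoop's CaptureMetrics helper (the instrument function and observe callback are illustrative):

package middleware

import (
	"net/http"
	"strconv"
	"time"

	"github.com/felixge/httpsnoop"
)

// instrument wraps a handler and records status code and duration via
// httpsnoop, which handles the awkward ResponseWriter interface cases.
func instrument(next http.Handler, observe func(code string, d time.Duration)) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		m := httpsnoop.CaptureMetrics(next, w, r)
		observe(strconv.Itoa(m.Code), m.Duration)
	})
}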
When using common/server/server.go to serve only HTTP traffic, there is currently no way to entirely disable the GRPC listener. Conversely, when only using GRPC, the HTTP listener will always be started.
This can be an issue when you want to, e.g., limit the attack surface, reduce resource usage, etc.
Add HTTPDisabled and GRPCDisabled configuration options that, when set to true, disable the HTTP and GRPC servers respectively.
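A minimal sketch of the proposed options; the field and YAML names come from this proposal, not from the existing server.Config:

package server

// Config sketch: hypothetical flags to disable each listener.
type Config struct {
	HTTPDisabled bool `yaml:"http_disabled"` // do not start the HTTP server
	GRPCDisabled bool `yaml:"grpc_disabled"` // do not start the GRPC server
}

// In New(), each listener would then be guarded, roughly:
//   if !cfg.HTTPDisabled { go httpServer.Serve(httpListener) }
//   if !cfg.GRPCDisabled { go grpcServer.Serve(grpcListener) }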
Have we ever considered this in the past, and is there a reason we decided against it?
I’m curious to hear your thoughts on this approach.
Are there any alternatives that you would recommend?
Would you be open to a contribution?
This function:
Lines 39 to 41 in 53b7240
will do the printf with all its string formatting, memory allocation, etc., regardless of whether debug logging is enabled.
This appears to be the intention of go-kit/log/level; it does not expose any way to ask about levels outside of Log().
We could keep a note of which level is allowed, and thus short-circuit the Sprintf call above.
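A sketch of the short-circuit, assuming the wrapper records its configured level at construction time (the type and field names are illustrative; the go-kit import path may differ by version):

package logging

import (
	"fmt"

	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
)

type gokitLogger struct {
	kit          log.Logger
	debugAllowed bool // noted once, when the level is configured
}

// Debugf skips all formatting and allocation when debug is disabled.
func (l gokitLogger) Debugf(format string, args ...interface{}) {
	if !l.debugAllowed {
		return
	}
	level.Debug(l.kit).Log("msg", fmt.Sprintf(format, args...))
}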
$ gvt fetch github.com/weaveworks/common
2017/01/25 14:21:24 Fetching: github.com/weaveworks/common
2017/01/25 14:21:26 · Skipping (existing): golang.org/x/net/context
2017/01/25 14:21:26 · Fetching recursive dependency: golang.org/x/tools/cover
2017/01/25 14:21:27 · Skipping (existing): github.com/opentracing/opentracing-go/ext
2017/01/25 14:21:27 · Skipping (existing): github.com/davecgh/go-spew/spew
2017/01/25 14:21:27 · Fetching recursive dependency: github.com/weaveworks/docker/pkg/mflag
2017/01/25 14:21:35 ·· Fetching recursive dependency: github.com/docker/docker/pkg/homedir
2017/01/25 14:22:04 ··· Fetching recursive dependency: github.com/docker/docker/vendor/github.com/opencontainers/runc/libcontainer/user
2017/01/25 14:22:04 ··· Fetching recursive dependency: github.com/docker/docker/pkg/idtools
2017/01/25 14:22:04 ···· Fetching recursive dependency: github.com/docker/docker/pkg/system
2017/01/25 14:22:04 ····· Fetching recursive dependency: github.com/docker/docker/vendor/github.com/Microsoft/go-winio
2017/01/25 14:22:04 ······ Fetching recursive dependency: github.com/docker/docker/vendor/golang.org/x/sys/windows
2017/01/25 14:22:04 ····· Fetching recursive dependency: github.com/docker/docker/vendor/github.com/docker/go-units
2017/01/25 14:22:04 ····· Fetching recursive dependency: github.com/docker/docker/vendor/github.com/Sirupsen/logrus
2017/01/25 14:22:04 ······ Fetching recursive dependency: github.com/docker/docker/vendor/golang.org/x/sys/unix
2017/01/25 14:22:05 ·· Fetching recursive dependency: github.com/docker/docker/pkg/mflag
2017/01/25 14:22:05 command "fetch" failed: error fetching github.com/docker/docker/pkg/mflag: lstat /var/folders/_b/ktq_dxhx0nbbw7gjtdzb3tn40000gn/T/gvt-769657176/pkg/mflag: no such file or directory
In particular, the mflag package is painful.
When I run a common/server/server.go server, then shut it down and start a new one (e.g. with a different port) in the same metrics namespace, I get a panic due to re-registration of metrics.
I tried working around this by keeping track of the metrics and explicitly Unregister-ing them, but this will still lead to errors such as:
An error has occurred while serving metrics:
1 error(s) occurred:
* collected metric "my_metric_name" { label:<name:"test_label" value:"test_value" > gauge:<value:0 > } was collected before with the same name and label values
This is because even after unregistering, it is not possible to register a new Collector that is inconsistent with the unregistered collector (as per the prometheus.Registerer docs).
Instead of using reg.MustRegister(<collector>), the server should use reg.Register. When reg.Register returns an AlreadyRegisteredError, that error contains the previously registered Collector, and the server should use that Collector instead.
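This is the pattern the Prometheus client library documents for this case; a minimal sketch:

package server

import (
	"errors"

	"github.com/prometheus/client_golang/prometheus"
)

// registerOrGet registers c, or returns the Collector that was already
// registered under the same name and labels.
func registerOrGet(reg prometheus.Registerer, c prometheus.Collector) prometheus.Collector {
	if err := reg.Register(c); err != nil {
		var already prometheus.AlreadyRegisteredError
		if errors.As(err, &already) {
			return already.ExistingCollector
		}
		panic(err) // any other registration error is a programming bug
	}
	return c
}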
I’m curious to hear your thoughts on this approach.
Are there any alternatives that you would recommend?
Would you be open to a contribution?
never initializes buffer in the struct.
After updating to google.golang.org/grpc v1.46.0, I cannot compile anymore, with the following error:
../../../../pkg/mod/github.com/weaveworks/common@<version>/httpgrpc/server/server.go:137:8: undefined: grpc.WithBalancerName
It seems that WithBalancerName was actually removed from grpc after being deprecated for some time (grpc/grpc-go#5232).
The code in this repository that needs updating can be found here:
common/httpgrpc/server/server.go
Line 137 in f83ccc7
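The migration suggested in grpc-go's deprecation notice is to pick the balancer through the default service config instead; a sketch (the dial target and other options are illustrative):

package main

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func dial(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		// Replaces the removed grpc.WithBalancerName("round_robin"):
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
	)
}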
With grafana/loki#800 I found a problem with the dot handling in this file: https://github.com/weaveworks/common/blob/master/aws/config.go
What I'd like to do is run an S3-speaking Minio proxy (backed by Azure blobs) next to my Loki service in k8s. For this I added the following URL to the Loki values.yaml:
s3://<azure-storage-key>:<azure-storage-secret>@minio:9000/logs
s3forcepathstyle: true
This is a normal requirement for a docker or k8s environment.
But the problem now is that the script https://github.com/weaveworks/common/blob/master/aws/config.go thinks (because of the missing dot) that the minio hostname belongs to a standard AWS URL. In this case it doesn't; it should be treated as an endpoint.
Renaming it to minio.local would only help in Docker, not in k8s, where dots in service names are not allowed.
Do you have any suggestions on how to handle this? For now I must use Minio as an external service.
Seeing things like this:
WARN: 2017/10/10 15:49:01.291677 GET /api/app/proud-wind-05/api/notification/sender (0) 58.274676905s
WARN: 2017/10/10 15:49:01.291704 Is websocket request: true
GET /api/app/proud-wind-05/api/notification/sender HTTP/1.1
Host: frontend.dev.weave.works
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en;q=0.5
Cache-Control: no-cache
Connection: keep-alive, Upgrade
Cookie: [redacted]
Origin: https://frontend.dev.weave.works
Pragma: no-cache
Sec-Websocket-Extensions: permessage-deflate
Sec-Websocket-Key: Q9tVfJ1lS+qSY6tr+UT00g==
Sec-Websocket-Version: 13
Upgrade: websocket
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:56.0) Gecko/20100101 Firefox/56.0
WARN: 2017/10/10 15:49:01.291728 Response:
The important part is the (0) at the top, which is why this was printed: the code doesn't print if 100 <= statusCode && statusCode < 500.
I think this is because it's a websocket request.
Hi, we noticed that Server by default does not compress its responses, even if the client accepts it. I.e. when an HTTP client sends a request with Accept-Encoding: gzip, the server can choose to compress the response (using gzip in this case), which saves bandwidth and is usually more efficient.
I suggest adding this to the middleware that is added by default. I'm basically echoing @bboreham's comments:
To implement this we can use nytimes/gziphandler. Its interface matches middleware.Interface very closely, so the entire implementation would be pretty small. gziphandler will by default only compress responses larger than 1400 bytes.
package middleware

import (
	"net/http"

	"github.com/NYTimes/gziphandler"
)

// Gzip is a middleware that gzip-compresses responses for clients that
// advertise support via the Accept-Encoding header.
type Gzip struct{}

// Wrap implements middleware.Interface.
func (g Gzip) Wrap(next http.Handler) http.Handler {
	return gziphandler.GzipHandler(next)
}
gziphandler is already being used by Cortex, so it has been battle-tested with Server.
If this makes sense I can create a PR for this with tests etc.
The RegisterInstrumentation option cannot be changed via YAML; refer to https://github.com/weaveworks/common/blob/master/server/server.go#L33.
But https://github.com/weaveworks/common/blob/master/server/server.go#L158-L160 still checks it.
It would be better to expose this setting in the YAML config.
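Exposing it could be as small as adding a yaml tag to the existing field; a sketch (the tag name is an assumption):

package server

// Sketch: the flag-only field would gain a yaml tag so it can also be
// set from config files.
type Config struct {
	RegisterInstrumentation bool `yaml:"register_instrumentation"`
}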
I'm writing this to make the situation clear for projects that depend on this repo.
weaveworks/common was created at Weaveworks, for the Weave Cloud service.
Today Weave Cloud is shut down, and Weaveworks the company has no motivation to support the repo.
I (Bryan Boreham) have been the only maintainer for several years, and I left Weaveworks 2 years ago.
As of ~March 2023, Continuous Integration started to fail intermittently. I do not seem to have permissions to fix it.
We have two options:
I reached out to the main communities of downstream projects I know - Grafana, Cortex and Thanos.
To date nobody has gone for option 1.
Currently Loki is instrumented with opentracing/jaeger client libraries for tracing (I hope it's the same for Mimir and Tempo as well).
This instrumentation comes from the weaveworks/common package and dskit's spanlogger package.
Those client libraries (opentracing, jaeger) are deprecated in favor of the OpenTelemetry client SDK. It's better to migrate.
Hopefully it's completely possible to migrate the underlying dependencies without changing any API of those packages.
An example of using OTel tracing client libraries for instrumentation in Go is here.
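For reference, a minimal sketch of OpenTelemetry tracer setup in Go (the exporter choice and options are illustrative):

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Export spans over OTLP/gRPC; the endpoint comes from OTEL_* env vars.
	exp, err := otlptracegrpc.New(ctx)
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)

	// Instrumented code then obtains tracers via otel.Tracer("name").
}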
As long as all the IO works, all HTTP responses will be returned as success. We should ensure we return HTTP 500s as error.
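Assuming this refers to span status, a sketch of flagging 5xx responses as errors with opentracing-go's ext tags:

package middleware

import (
	"github.com/opentracing/opentracing-go"
	"github.com/opentracing/opentracing-go/ext"
)

// markSpanStatus records the HTTP status on the span and flags 5xx
// responses as errors.
func markSpanStatus(span opentracing.Span, statusCode int) {
	ext.HTTPStatusCode.Set(span, uint16(statusCode))
	if statusCode >= 500 {
		ext.Error.Set(span, true)
	}
}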
Trace sampling should not be enabled by default if trace reporting is configured; it should be enabled separately.
The decision about whether to capture a trace should be made by one service, as close to the request source as possible.
Then that decision should be passed along with the request and honored by each service involved in the request processing.
For example in cortex, if a service is configured to send samples to jaeger, it also defaults to a sampling rate of 10 per second.
If the auth layer, distributor, and ingester were all configured this way, you would see 10-20 traces per second from the distributor and 20-30 traces per second from the ingester. Additionally, you'd see some traces which start at the distributor or the ingester.
Cortex issue: cortexproject/cortex#885
Hi @pracucci & @bboreham Good Day!
We need the full set of TLS config parameters that are available via exporter-toolkit/web to also be configurable via the weaveworks/common package.
We see that as part of #245 this was removed. We are using Cortex and, per our organization's standard, we want to use a set of strong ciphers for all HTTPS listening endpoints. If we had the above config parameters we could fix this by using the cipher_suites and prefer_server_cipher_suites options.
The same problem applies to Loki, Tempo and Mimir. Let us know if you need any additional information.
Note: We already enabled the client authentication by setting "RequireAndVerifyClientCert".
The http logging middleware splits out different request results and logs them as either debug or warn. Generally errors are logged as warn and successes are logged as debug.
Lines 56 to 64 in 4b18475
We need to log the below error conditions that are currently being logged as debug. Unfortunately, due to volume, we can't turn on debug logging.
statusCode == http.StatusBadGateway || statusCode == http.StatusServiceUnavailable
My guess is that these two statuses are logged at a debug level due to volume if the backend is unavailable. We would like to log these failures at a higher level than debug, but also recognize that the volume would be too great to log if a backend is down.
The change we'd like to make: http.StatusBadGateway and http.StatusServiceUnavailable would be logged at Warn level with the other errors. Thoughts?
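A sketch of the proposed split; the surrounding code is paraphrased from the logging middleware, not exact:

package middleware

import (
	"net/http"
	"time"

	"github.com/weaveworks/common/logging"
)

// logRequest keeps the quoted debug case, except that 502/503 now fall
// through to Warn with the other errors.
func logRequest(log logging.Interface, r *http.Request, statusCode int, d time.Duration) {
	if 100 <= statusCode && statusCode < 500 &&
		statusCode != http.StatusBadGateway &&
		statusCode != http.StatusServiceUnavailable {
		log.Debugf("%s %s (%d) %s", r.Method, r.RequestURI, statusCode, d)
		return
	}
	log.Warnf("%s %s (%d) %s", r.Method, r.RequestURI, statusCode, d)
}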
If this (or something similar) is acceptable I'd be glad to PR it.
Currently, if it can't find a match for the URL, it uses the whole thing (lightly munged) as the value for the route label.
This is a bad idea if you have malicious folks on the other end of the wire.
See discussion on #cortex channel.
While I think the idea was to print user/org IDs with HTTP request logging, as seen here: https://github.com/weaveworks/common/blob/master/middleware/logging.go#L40, we don't.
This is because logging is currently the first middleware and auth is usually the last, which means the context that the logging middleware has access to doesn't have the user/org IDs.
Not sure how to put the auth middleware first, though...
I just want to support reload in Loki/promtail, by calling server.Shutdown() and then server.New() again:
panic: duplicate metrics collector registration attempted
goroutine 212 [running]:
github.com/grafana/loki/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).MustRegister(0xc0000de370, 0xc000670090, 0x1, 0x1)
/deploy/go/src/github.com/grafana/loki/vendor/github.com/prometheus/client_golang/prometheus/registry.go:391 +0xad
github.com/grafana/loki/vendor/github.com/prometheus/client_golang/prometheus.MustRegister(...)
/deploy/go/src/github.com/grafana/loki/vendor/github.com/prometheus/client_golang/prometheus/registry.go:176
github.com/grafana/loki/vendor/github.com/weaveworks/common/server.New(0x0, 0x0, 0x0, 0x0, 0x2378, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/deploy/go/src/github.com/grafana/loki/vendor/github.com/weaveworks/common/server/server.go:111 +0x464
github.com/grafana/loki/pkg/promtail/server.New(0x0, 0x0, 0x0, 0x0, 0x2378, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/deploy/go/src/github.com/grafana/loki/pkg/promtail/server/server.go:36 +0x55
github.com/grafana/loki/pkg/promtail.New(0x7fff0d6b2248, 0xf, 0x6, 0x0, 0x0)
/deploy/go/src/github.com/grafana/loki/pkg/promtail/promtail.go:52 +0x319
main.(*Master).DoReload(0xc00002a420, 0x0, 0x0)
/deploy/go/src/github.com/grafana/loki/cmd/promtail/main.go:48 +0x74
main.(*Master).Reload.func1(0xc00002a420)
/deploy/go/src/github.com/grafana/loki/cmd/promtail/main.go:40 +0x2b
created by main.(*Master).Reload
/deploy/go/src/github.com/grafana/loki/cmd/promtail/main.go:39 +0x3f
A similar issue was found, but there seems to be no fix here.
Remove dependency on https://github.com/gogo/protobuf .
This would be a breaking change, thus requires a major version bump.
Currently, when using TLS, the servers will accept requests from any client that has a certificate signed by the specified Certificate Authority. As such, I'd like to see custom server-side certificate validation supported. This will help enforce deny-by-default.
I'd like to be able to pass a flag, such as -cert-allowed-cn, that can be used to create a custom VerifyPeerCertificate callback (part of the crypto/tls package) which is passed directly to the TLS config. All this function needs to do is verify that the presented common name matches the expected common name.
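A sketch of such a callback (allowedCN stands in for the value of the proposed -cert-allowed-cn flag):

package server

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
)

// withAllowedCN installs a VerifyPeerCertificate hook that accepts only
// client certs whose common name matches allowedCN. It relies on
// ClientAuth being RequireAndVerifyClientCert so verifiedChains is populated.
func withAllowedCN(cfg *tls.Config, allowedCN string) {
	cfg.VerifyPeerCertificate = func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error {
		for _, chain := range verifiedChains {
			if len(chain) > 0 && chain[0].Subject.CommonName == allowedCN {
				return nil
			}
		}
		return fmt.Errorf("client certificate CN does not match %q", allowedCN)
	}
}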
Willing to submit a PR if the maintainers think this is a good idea. Thanks!
We see disconnected traces in Jaeger where requests cross this boundary, i.e. we get separate traces for auth -> distributor and distributor -> ingester.
Oddly, it does seem to preserve the fact that the request ought to be sampled.
I'm working on instrumenting go services so they may make Zipkin aware of their location and found (through this SO answer) this way for fetching service IP.
I saw there's already a function for getting address given an interface:
Line 9 in 955c130
I guess services will almost (?) always use the eth0 interface, but it seems to me that getting the address without fixing it to a given interface is a more robust way of finding the service IP.
If so, shall we provide this, for instance as a GetOutboundIP function, in weaveworks/common?
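For reference, the trick from the linked Stack Overflow answer: dialing UDP sends no packets, but makes the OS pick the preferred outbound interface, whose local address can then be read:

package main

import (
	"fmt"
	"log"
	"net"
)

// GetOutboundIP returns the preferred outbound IP of this host.
// No traffic is actually sent; UDP "dialing" only resolves a route.
func GetOutboundIP() (net.IP, error) {
	conn, err := net.Dial("udp", "8.8.8.8:80")
	if err != nil {
		return nil, err
	}
	defer conn.Close()
	return conn.LocalAddr().(*net.UDPAddr).IP, nil
}

func main() {
	ip, err := GetOutboundIP()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ip)
}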
A prime example of this is the behavior seen on a failed push to an ingester where the entire push body is logged as a frighteningly large byte array.
Reference #89 and cortexproject/cortex#709 for a more thorough conversation on the details.
The above-referenced weaveworks/common PR avoids this issue by simply allowing a per-service configuration that truncates these logs, omitting the request body.
It might be better, though, to retain the request bodies and simply log them as the strings they are, rather than their ASCII code representation.
gitlab.com/momentum-valley/rome/cmd/rome imports
github.com/weaveworks/common/middleware imports
github.com/uber/jaeger-client-go tested by
github.com/uber/jaeger-client-go.test imports
github.com/uber/jaeger-lib/metrics/testutils: module github.com/uber/jaeger-lib@latest found (v2.2.0+incompatible), but does not contain package github.com/uber/jaeger-lib/metrics/testutils
As seen in https://circleci.com/gh/weaveworks/common/412, https://circleci.com/gh/weaveworks/common/406
panic: test timed out after 1m0s
goroutine 12 [running]:
testing.(*M).startAlarm.func1()
/usr/local/go/src/testing/testing.go:1240 +0x146
created by time.goFunc
/usr/local/go/src/time/sleep.go:172 +0x52
goroutine 1 [chan receive]:
testing.(*T).Run(0xc4202be000, 0xc8d45a, 0x13, 0xcaa220, 0xc4201c9c01)
/usr/local/go/src/testing/testing.go:825 +0x597
testing.runTests.func1(0xc4202be000)
/usr/local/go/src/testing/testing.go:1063 +0xa5
testing.tRunner(0xc4202be000, 0xc4201c9d48)
/usr/local/go/src/testing/testing.go:777 +0x16e
testing.runTests(0xc42024e2a0, 0x1040a60, 0x3, 0x3, 0xc42026e080)
/usr/local/go/src/testing/testing.go:1061 +0x4e2
testing.(*M).Run(0xc42026e080, 0x0)
/usr/local/go/src/testing/testing.go:978 +0x2ce
main.main()
_testmain.go:184 +0x335
goroutine 50 [syscall]:
os/signal.signal_recv(0x487db1)
/usr/local/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:22 +0x30
created by os/signal.init.0
/usr/local/go/src/os/signal/signal_unix.go:28 +0x4f
goroutine 6 [select, locked to thread]:
runtime.gopark(0xcabfa8, 0x0, 0xc85824, 0x6, 0x18, 0x1)
/usr/local/go/src/runtime/proc.go:291 +0xf9
runtime.selectgo(0xc420097f50, 0xc4200262a0)
/usr/local/go/src/runtime/select.go:392 +0x11d4
runtime.ensureSigM.func1()
/usr/local/go/src/runtime/signal_unix.go:549 +0x19f
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2361 +0x1
goroutine 41 [chan receive]:
github.com/weaveworks/common/server.TestRunReturnsError.func1(0xc4202c01e0)
/go/src/github.com/weaveworks/common/server/server_test.go:116 +0x218
testing.tRunner(0xc4202c01e0, 0xc42025cb60)
/usr/local/go/src/testing/testing.go:777 +0x16e
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x565
goroutine 54 [select]:
github.com/weaveworks/common/signals.(*Handler).Loop(0xc4201b43f0)
/go/src/github.com/weaveworks/common/signals/signals.go:47 +0x284
github.com/weaveworks/common/server.(*Server).Run.func1(0xc4202c6000, 0xc42028c3c0)
/go/src/github.com/weaveworks/common/server/server.go:187 +0x71
created by github.com/weaveworks/common/server.(*Server).Run
/go/src/github.com/weaveworks/common/server/server.go:186 +0x91
goroutine 62 [select]:
github.com/weaveworks/common/signals.(*Handler).Loop(0xc420472c00)
/go/src/github.com/weaveworks/common/signals/signals.go:47 +0x284
github.com/weaveworks/common/server.(*Server).Run.func1(0xc420272a00, 0xc4201da960)
/go/src/github.com/weaveworks/common/server/server.go:187 +0x71
created by github.com/weaveworks/common/server.(*Server).Run
/go/src/github.com/weaveworks/common/server/server.go:186 +0x91
goroutine 40 [chan receive]:
testing.(*T).Run(0xc4202c00f0, 0xc83cb0, 0x4, 0xc42025cb60, 0x0)
/usr/local/go/src/testing/testing.go:825 +0x597
github.com/weaveworks/common/server.TestRunReturnsError(0xc4202c00f0)
/go/src/github.com/weaveworks/common/server/server_test.go:105 +0x11e
testing.tRunner(0xc4202c00f0, 0xcaa220)
/usr/local/go/src/testing/testing.go:777 +0x16e
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x565
goroutine 43 [chan receive]:
github.com/weaveworks/common/server.(*Server).Run(0xc420272a00, 0xc420084fb8, 0x45c96d)
/go/src/github.com/weaveworks/common/server/server.go:222 +0x1d1
github.com/weaveworks/common/server.TestRunReturnsError.func1.1(0xc4202415c0, 0xc420272a00)
/go/src/github.com/weaveworks/common/server/server_test.go:112 +0x39
created by github.com/weaveworks/common/server.TestRunReturnsError.func1
/go/src/github.com/weaveworks/common/server/server_test.go:111 +0x175
goroutine 27 [IO wait]:
internal/poll.runtime_pollWait(0x7fa45dd95ea0, 0x72, 0xc4202d5be0)
/usr/local/go/src/runtime/netpoll.go:173 +0x5e
internal/poll.(*pollDesc).wait(0xc42026e518, 0x72, 0xc4201d8400, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:85 +0xe5
internal/poll.(*pollDesc).waitRead(0xc42026e518, 0xffffffffffffff00, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:90 +0x4b
internal/poll.(*FD).Accept(0xc42026e500, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:372 +0x2e2
net.(*netFD).accept(0xc42026e500, 0xc4202802a8, 0xc420084d90, 0xc420226d80)
/usr/local/go/src/net/fd_unix.go:238 +0x53
net.(*TCPListener).accept(0xc4202501f8, 0xc4201d8470, 0xc420084df0, 0x4538bd)
/usr/local/go/src/net/tcpsock_posix.go:136 +0x4e
net.(*TCPListener).Accept(0xc4202501f8, 0xcab348, 0xc4202801c0, 0xc420232f60, 0x0)
/usr/local/go/src/net/tcpsock.go:259 +0x50
github.com/weaveworks/common/vendor/google.golang.org/grpc.(*Server).Serve(0xc4202801c0, 0xd0d5a0, 0xc4202501f8, 0x0, 0x0)
/go/src/github.com/weaveworks/common/vendor/google.golang.org/grpc/server.go:544 +0x2e1
github.com/weaveworks/common/server.(*Server).Run.func3(0xc420272a00, 0xc4201da960)
/go/src/github.com/weaveworks/common/server/server.go:211 +0x9e
created by github.com/weaveworks/common/server.(*Server).Run
/go/src/github.com/weaveworks/common/server/server.go:210 +0x199
FAIL github.com/weaveworks/common/server 60.044s
So, the query endpoint AND the /metrics endpoint both end up under the operation "HTTP GET", and this clutters up the traces a fair bit.
We could pass in something like this: https://godoc.org/github.com/opentracing-contrib/go-stdlib/nethttp#OperationNameFunc to differentiate them, but because opentracing is a default middleware, there is no way to do that; see: https://github.com/weaveworks/common/blob/master/middleware/http_tracing.go#L17
Now, we could pass the options to the tracer, but would you be open to adding something like OpentracingOptions to server.Config?
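A sketch of per-request operation naming with go-stdlib's OperationNameFunc option (the name format is illustrative):

package main

import (
	"net/http"

	"github.com/opentracing-contrib/go-stdlib/nethttp"
	"github.com/opentracing/opentracing-go"
)

func traced(next http.Handler) http.Handler {
	return nethttp.Middleware(
		opentracing.GlobalTracer(),
		next,
		nethttp.OperationNameFunc(func(r *http.Request) string {
			// e.g. "HTTP GET /metrics" instead of just "HTTP GET"
			return "HTTP " + r.Method + " " + r.URL.Path
		}),
	)
}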
gorilla/mux is archived. This repo uses mux for routing. It would be better to step down to net/http, or move to another alternative, to remove the dependency on mux. Thoughts?