linkerd / linkerd-examples
Examples of how to configure and run linkerd
Home Page: https://linkerd.io
License: Apache License 2.0
RBAC is enabled by default on newer versions of Kubernetes; we should make it part of the default Kubernetes linkerd config.
Commit 8967603 added RBAC to the default linkerd config, but did not update any docs around it.
re: #129
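For reference, the RBAC piece amounts to granting linkerd's service account access to the Kubernetes API. A minimal sketch of such a binding, modeled on the binding that appears later in this page (there with apiVersion v1alpha1; v1beta1 shown here since that is the version supported on newer clusters, and the name linkerd-rbac is illustrative; the actual manifest in commit 8967603 may differ):

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: linkerd-rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: default

Binding to cluster-admin is broader than linkerd strictly needs (it mainly reads services and endpoints), but it keeps the example config short.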
The codebase has some conventions/tribal knowledge that would be good to put into a written format.
I am informed that version 3 of the docker-compose config is supported by Swarm. If we upgrade the docker example configs, maybe it will Just Work for Swarm?
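For concreteness, upgrading mostly means bumping the version key and, for Swarm, putting scheduling options under deploy:. A hedged sketch of what an upgraded linkerd service entry might look like (the service name, host config path, and port choices here are assumptions borrowed from the k8s examples in this repo):

version: "3"
services:
  l5d:
    image: buoyantio/linkerd:1.1.2
    # linkerd takes its config file path as the container argument
    command: /io.buoyant/linkerd/config/config.yaml
    volumes:
      - ./linkerd.yaml:/io.buoyant/linkerd/config/config.yaml
    ports:
      - "4140:4140"  # outgoing router
      - "9990:9990"  # admin UI
    deploy:          # v3-only section, honored by `docker stack deploy` on Swarm
      replicas: 1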
For the kubernetes examples in this repo and the getting started guides on linkerd.io, I'd like to start assuming that folks are running kubernetes 1.4+, which gives us access to the node name as part of the downward api. This will make our example configs and guides a lot more intelligible.
For instance, in our current getting started guide, we link to the hostIP.sh script, and have the following config:
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NS
valueFrom:
fieldRef:
fieldPath: metadata.namespace
command:
- "/bin/bash"
- "-c"
- "http_proxy=`./hostIP.sh`:4140 myCoolApp"
Instead, if we assume Kubernetes 1.4+, then we can omit the script, stop overriding the container's default entrypoint via the command: field, and constrain all of the changes needed to set up the http_proxy environment variable to the env: field, with the following config:
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: http_proxy
value: $(NODE_NAME):4140
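(The $(NODE_NAME) reference in http_proxy works because Kubernetes expands $(VAR) references to environment variables defined earlier in the same env list, so no shell is needed.)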
We could still provide the hostIP.sh script as a legacy option, but it shouldn't be the initially recommended approach.
As part of this change, I think we should rename hello-world.yml to hello-world-legacy.yml, and hello-world-1_4.yml to hello-world.yml.
I'm trying to run linkerd-examples for k8s but I can't run namerd. When I start namerctl I get this error:
$ kubectl logs -f namerd-sqpko namerd
-XX:+AggressiveOpts -XX:+CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:+CMSScavengeBeforeRemark -XX:InitialHeapSize=33554432 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:OldPLABSize=16 -XX:+PrintCommandLineFlags -XX:+ScavengeBeforeFullGC -XX:-TieredCompilation -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+UseStringDeduplication
Nov 22, 2016 10:36:30 AM com.twitter.finagle.http.HttpMuxer$$anonfun$4 apply
INFO: HttpMuxer[/admin/metrics.json] = com.twitter.finagle.stats.MetricsExporter(<function1>)
Nov 22, 2016 10:36:30 AM com.twitter.finagle.http.HttpMuxer$$anonfun$4 apply
INFO: HttpMuxer[/admin/per_host_metrics.json] = com.twitter.finagle.stats.HostMetricsExporter(<function1>)
I 1122 10:36:31.642 THREAD1: Finagle version 6.39.0 (rev=f49b26aa1a89cbaec5aa7ac956ae7f379b4a5d97) built at 20161011-165646
I 1122 10:36:32.160 THREAD1: Resolver[inet] = com.twitter.finagle.InetResolver(com.twitter.finagle.InetResolver@189f665)
I 1122 10:36:32.162 THREAD1: Resolver[flag] = com.twitter.server.FlagResolver(com.twitter.server.FlagResolver@c16548)
I 1122 10:36:32.163 THREAD1: Resolver[zk] = com.twitter.finagle.zookeeper.ZkResolver(com.twitter.finagle.zookeeper.ZkResolver@105edc)
I 1122 10:36:32.165 THREAD1: Resolver[zk2] = com.twitter.finagle.serverset2.Zk2Resolver(com.twitter.finagle.serverset2.Zk2Resolver@b972d7)
I 1122 10:36:32.160 THREAD1: Resolver[fixedinet] = com.twitter.finagle.FixedInetResolver(com.twitter.finagle.FixedInetResolver@1f1cff6)
I 1122 10:36:32.161 THREAD1: Resolver[neg] = com.twitter.finagle.NegResolver$(com.twitter.finagle.NegResolver$@d926d3)
I 1122 10:36:32.162 THREAD1: Resolver[nil] = com.twitter.finagle.NilResolver$(com.twitter.finagle.NilResolver$@1ce27f2)
I 1122 10:36:32.162 THREAD1: Resolver[fail] = com.twitter.finagle.FailResolver$(com.twitter.finagle.FailResolver$@96a4ea)
I 1122 10:36:32.695 THREAD1: serving http admin on /0.0.0.0:9990
I 1122 10:36:32.772 THREAD1: serving io.l5d.thriftNameInterpreter interface on /0.0.0.0:4100
I 1122 10:36:32.802 THREAD1: serving io.l5d.httpController interface on /0.0.0.0:4180
I 1122 10:39:51.767 THREAD22: Drainer is disabled; bypassing
E 1122 10:40:36.490 THREAD19: adminhttp
io.buoyant.k8s.Api$NotFound
at io.buoyant.k8s.Api$.parse(Api.scala:64)
at io.buoyant.k8s.ListResource$$anonfun$get$2.apply(resources.scala:136)
at io.buoyant.k8s.ListResource$$anonfun$get$2.apply(resources.scala:136)
at com.twitter.util.Future$$anonfun$flatMap$1.apply(Future.scala:1092)
at com.twitter.util.Future$$anonfun$flatMap$1.apply(Future.scala:1091)
at com.twitter.util.Promise$Transformer.liftedTree1$1(Promise.scala:107)
at com.twitter.util.Promise$Transformer.k(Promise.scala:107)
at com.twitter.util.Promise$Transformer.apply(Promise.scala:117)
at com.twitter.util.Promise$Transformer.apply(Promise.scala:98)
at com.twitter.util.Promise$$anon$1.run(Promise.scala:421)
at com.twitter.concurrent.LocalScheduler$Activation.run(Scheduler.scala:200)
at com.twitter.concurrent.LocalScheduler$Activation.submit(Scheduler.scala:158)
at com.twitter.concurrent.LocalScheduler.submit(Scheduler.scala:272)
at com.twitter.concurrent.Scheduler$.submit(Scheduler.scala:108)
at com.twitter.util.Promise.runq(Promise.scala:405)
at com.twitter.util.Promise.updateIfEmpty(Promise.scala:801)
at com.twitter.util.Promise.update(Promise.scala:775)
at com.twitter.util.Promise.setValue(Promise.scala:751)
at com.twitter.concurrent.AsyncQueue.offer(AsyncQueue.scala:120)
at com.twitter.finagle.netty3.transport.ChannelTransport.handleUpstream(ChannelTransport.scala:55)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
at com.twitter.finagle.netty3.channel.ChannelStatsHandler.messageReceived(ChannelStatsHandler.scala:68)
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
at com.twitter.finagle.netty3.channel.ChannelRequestStatsHandler.messageReceived(ChannelRequestStatsHandler.scala:32)
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at com.twitter.finagle.util.ProxyThreadFactory$$anonfun$newProxiedRunnable$1$$anon$1.run(ProxyThreadFactory.scala:19)
at java.lang.Thread.run(Thread.java:745)
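(For context: io.buoyant.k8s.Api$NotFound is raised when the Kubernetes API returns a 404 for something namerd requested, which usually means a namespace, service, or endpoints object referenced during a dtab lookup doesn't exist.)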
Env: Kubernetes 1.7
Networking: Canal (flannel + calico)
Application architecture: taken from the istio sample application (bookinfo)
In this sample we will deploy a simple application that displays information about a book, similar to a single catalog entry of an online book store. Displayed on the page is a description of the book, book details (ISBN, number of pages, and so on), and a few book reviews.
The BookInfo application is broken into four separate microservices: productpage, details, reviews, and ratings.
There are 3 versions of the reviews microservice (v1, v2, and v3), all deployed below:
apiVersion: v1
kind: Service
metadata:
name: details
labels:
app: details
spec:
ports:
- port: 9080
name: http
selector:
app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: details-v1
spec:
replicas: 1
template:
metadata:
labels:
app: details
version: v1
spec:
containers:
- name: details
image: istio/examples-bookinfo-details-v1
imagePullPolicy: IfNotPresent
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
value: details
- name: http_proxy
value: $(NODE_NAME):4140
ports:
- containerPort: 9080
---
################################################
# Ratings service
################################################
apiVersion: v1
kind: Service
metadata:
name: ratings
labels:
app: ratings
spec:
ports:
- port: 9080
name: http
selector:
app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ratings-v1
spec:
replicas: 1
template:
metadata:
labels:
app: ratings
version: v1
spec:
containers:
- name: ratings
image: istio/examples-bookinfo-ratings-v1
imagePullPolicy: IfNotPresent
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: http_proxy
value: $(NODE_NAME):4140
- name: POD_NAME
value: ratings
ports:
- containerPort: 9080
---
########################################################
# Reviews service
########################################################
apiVersion: v1
kind: Service
metadata:
name: reviews
labels:
app: reviews
spec:
ports:
- port: 9080
name: http
selector:
app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v1
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v1
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v1
imagePullPolicy: IfNotPresent
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: http_proxy
value: $(NODE_NAME):4140
- name: POD_NAME
value: reviews1
ports:
- containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v2
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v2
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v2
imagePullPolicy: IfNotPresent
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: http_proxy
value: $(NODE_NAME):4140
- name: POD_NAME
value: reviews2
ports:
- containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v3
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v3
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v3
imagePullPolicy: IfNotPresent
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: http_proxy
value: $(NODE_NAME):4140
- name: POD_NAME
value: reviews3
ports:
- containerPort: 9080
---
##################################################
# Productpage service
##################################################
apiVersion: v1
kind: Service
metadata:
name: productpage
labels:
app: productpage
spec:
ports:
- port: 9080
name: http
selector:
app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: productpage-v1
spec:
replicas: 1
template:
metadata:
labels:
app: productpage
version: v1
spec:
containers:
- name: productpage
image: istio/examples-bookinfo-productpage-v1
imagePullPolicy: IfNotPresent
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: http_proxy
value: $(NODE_NAME):4140
- name: POD_NAME
value: productpage
ports:
- containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
k8s-app: bookinfo
name: bookinfo
spec:
rules:
- host: bookinfo.13.54.45.20.nip.io
http:
paths:
- path: /productpage
backend:
serviceName: productpage
servicePort: 9080
- path: /login
backend:
serviceName: productpage
servicePort: 9080
- path: /logout
backend:
serviceName: productpage
servicePort: 9080
apiVersion: rbac.authorization.k8s.io/v1alpha1
kind: ClusterRoleBinding
metadata:
name: linkerd
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: default
namespace: default
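# NB: rbac.authorization.k8s.io/v1alpha1 has been deprecated since Kubernetes 1.6;
# on a 1.7 cluster the v1beta1 API is the supported version, so this apiVersion
# may need to be bumped.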
---
apiVersion: v1
kind: ConfigMap
metadata:
name: l5d-config
data:
config.yaml: |-
admin:
port: 9990
namers:
- kind: io.l5d.k8s
experimental: true
host: localhost
port: 8001
telemetry:
- kind: io.l5d.prometheus
- kind: io.l5d.zipkin
host: zipkin-collector.default.svc.cluster.local
port: 9410
sampleRate: 1.0
- kind: io.l5d.recentRequests
sampleRate: 0.25
usage:
orgId: linkerd-examples-daemonset-zipkin
routers:
- protocol: http
label: outgoing
dtab: |
/srv => /#/io.l5d.k8s/default/http;
/host => /srv;
/svc => /host;
/host/reviews1 => /srv/reviews;
/host/reviews2 => /srv/reviews;
/host/reviews3 => /srv/reviews;
/host/details => /srv/details;
/host/ratings => /srv/ratings;
interpreter:
kind: default
transformers:
- kind: io.l5d.k8s.daemonset
namespace: default
port: incoming
service: l5d
hostNetwork: true
servers:
- port: 4140
ip: 0.0.0.0
service:
responseClassifier:
kind: io.l5d.http.retryableRead5XX
- protocol: http
label: incoming
dtab: |
/srv => /#/io.l5d.k8s/default/http;
/host => /srv;
/svc => /host;
/host/reviews1 => /srv/reviews;
/host/reviews2 => /srv/reviews;
/host/reviews3 => /srv/reviews;
/host/details => /srv/details;
/host/ratings => /srv/ratings;
interpreter:
kind: default
transformers:
- kind: io.l5d.k8s.localnode
hostNetwork: false
servers:
- port: 4141
ip: 0.0.0.0
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
labels:
app: l5d
name: l5d
spec:
template:
metadata:
labels:
app: l5d
spec:
hostNetwork: true
volumes:
- name: l5d-config
configMap:
name: "l5d-config"
containers:
- name: l5d
image: buoyantio/linkerd:1.1.2
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
args:
- /io.buoyant/linkerd/config/config.yaml
ports:
- name: outgoing
containerPort: 4140
hostPort: 4140
- name: incoming
containerPort: 4141
- name: admin
containerPort: 9990
volumeMounts:
- name: "l5d-config"
mountPath: "/io.buoyant/linkerd/config"
readOnly: true
- name: kubectl
image: buoyantio/kubectl:v1.6.2
args:
- "proxy"
- "-p"
- "8001"
---
apiVersion: v1
kind: Service
metadata:
name: l5d
spec:
selector:
app: l5d
ports:
- name: outgoing
port: 4140
- name: incoming
port: 4141
- name: admin
port: 9990
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
k8s-app: linkerd-dashboard
name: linkerd-dashboard
namespace: default
spec:
rules:
- host: linkerd-dashboard.13.54.45.20.nip.io
http:
paths:
- backend:
serviceName: l5d
servicePort: 9990
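A note on how the config above routes, as I read it: linkerd's HTTP router names a request by its Host header, so a call to http://details:9080 becomes /svc/details, which the dtab rewrites step by step to /host/details, then /srv/details, then /#/io.l5d.k8s/default/http/details. The io.l5d.k8s namer resolves that to the details service's endpoints through the kubectl proxy sidecar on localhost:8001, and the io.l5d.k8s.daemonset transformer in the outgoing router then redirects each request to the l5d pod on the destination endpoint's node, on the incoming port (4141).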
######################################################
# Details service
######################################################
apiVersion: v1
kind: Service
metadata:
name: details
labels:
app: details
spec:
ports:
- port: 9080
name: http
selector:
app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: details-v1
spec:
replicas: 1
template:
metadata:
labels:
app: details
version: v1
spec:
containers:
- name: details
image: istio/examples-bookinfo-details-v1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
####################################################
# Ratings service
####################################################
apiVersion: v1
kind: Service
metadata:
name: ratings
labels:
app: ratings
spec:
ports:
- port: 9080
name: http
selector:
app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ratings-v1
spec:
replicas: 1
template:
metadata:
labels:
app: ratings
version: v1
spec:
containers:
- name: ratings
image: istio/examples-bookinfo-ratings-v1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
####################################################
# Reviews service
####################################################
apiVersion: v1
kind: Service
metadata:
name: reviews
labels:
app: reviews
spec:
ports:
- port: 9080
name: http
selector:
app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v1
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v1
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v2
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v2
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v2
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v3
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v3
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v3
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
####################################################
# Productpage service
####################################################
apiVersion: v1
kind: Service
metadata:
name: productpage
labels:
app: productpage
spec:
ports:
- port: 9080
name: http
selector:
app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: productpage-v1
spec:
replicas: 1
template:
metadata:
labels:
app: productpage
version: v1
spec:
containers:
- name: productpage
image: istio/examples-bookinfo-productpage-v1
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
######################################
# Ingress resource
######################################
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
labels:
k8s-app: bookinfo
name: bookinfo
spec:
rules:
- host: bookinfo.13.54.45.20.nip.io
http:
paths:
- path: /productpage
backend:
serviceName: productpage
servicePort: 9080
- path: /login
backend:
serviceName: productpage
servicePort: 9080
- path: /logout
backend:
serviceName: productpage
servicePort: 9080
Without the http_proxy environment variable the application works perfectly.
But when I add the http_proxy environment variable, it stops responding.
As per my understanding, it's not able to connect to the "details" & "reviews" microservices and fetch the data.
When I tried to deploy k8s-daemonset/k8s/hello-world-1_4.yml in my minikube, I ran into an issue where the value given to spec.nodeName (minikube) wasn't resolvable by kube-dns, causing an exception in hello.py. I'm not sure why this wasn't a problem in other k8s deployments. In minikube and k8s 1.4, hostIP.sh or a similar approach looking up status.hostIP may still be necessary.
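One possible workaround, assuming a cluster new enough to have it (status.hostIP was added to the downward API around Kubernetes 1.7): expose the node IP directly instead of resolving the node name through DNS, e.g.:

env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
- name: http_proxy
  value: $(HOST_IP):4140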
There is a proposal in kubernetes to improve node-local service discovery when using DaemonSets that could solve this issue.
Hello,
I'm trying "A Service Mesh for Kubernetes, Part IV: Continuous deployment via traffic shifting". I have a private k8s cluster using CNI. Svc NodePort is used instead of LoadBalancer.
namerd is working.
$ namerctl dtab get internal
# version MjQ5OTUxNjM1
/srv => /#/io.l5d.k8s/default/http ;
/host => /srv ;
/tmp => /srv ;
/http/*/* => /host ;
/host/world => /srv/world-v1 ;
I'm stuck in STEP 3 and can't get the expected Hello (10.196.2.5) world (10.196.2.6)! response.
$ ./kubectl get svc
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello 10.1.170.214 <none> 7777/TCP,80/TCP 13m
l5d 10.1.212.42 <nodes> 80:30334/TCP,4140:30052/TCP,4141:31597/TCP,9990:31450/TCP 16m
namerd 10.1.234.160 <nodes> 4100:30608/TCP,4180:32058/TCP,9990:31167/TCP 2h
world-v1 None <none> 7778/TCP 13m
$ curl http://10.128.112.28:30334
exceeded 10.seconds to unspecified while dyn binding /http/1.1/GET/10.128.112.28:30334. Remote Info: Not Available
10.128.112.28 is one of my worker IPs. Port 30334 points to l5d port 80.
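(If I read the dtab right, that error is expected for a bare curl: a request sent straight to a worker IP carries the Host header 10.128.112.28:30334, so it is named /http/1.1/GET/10.128.112.28:30334, which no /host entry can ever bind. Sending a Host header that names a service, e.g. curl -H "Host: hello" http://10.128.112.28:30334, should exercise the intended path.)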
I tried putting hostNetwork: true in the configmap, but it is not working (see #88).
apiVersion: v1
kind: ConfigMap
metadata:
name: l5d-config
data:
config.yaml: |-
admin:
port: 9990
routers:
- protocol: http
label: outgoing
interpreter:
kind: io.l5d.namerd
dst: /$/inet/namerd.default.svc.cluster.local/4100
namespace: internal
transformers:
- kind: io.l5d.k8s.daemonset
namespace: default
port: incoming
service: l5d
hostNetwork: true
servers:
- port: 4140
ip: 0.0.0.0
- protocol: http
label: incoming
interpreter:
kind: io.l5d.namerd
dst: /$/inet/namerd.default.svc.cluster.local/4100
namespace: internal
transformers:
- kind: io.l5d.k8s.localnode
hostNetwork: true
servers:
- port: 4141
ip: 0.0.0.0
- protocol: http
label: external
interpreter:
kind: io.l5d.namerd
dst: /$/inet/namerd.default.svc.cluster.local/4100
namespace: external
servers:
- port: 4142
ip: 0.0.0.0
Any idea how to continue?
We observed the add-steps demo failing with Docker for Mac; specifically, the slow_cooker instances were unable to connect to the application instances.
@aalimovs reports that he's seeing 502s when trying to do a simple test against a kubernetes l2l setup on AWS t2.large instances.
Using this configuration: https://github.com/BuoyantIO/linkerd-examples/blob/master/k8s-daemonset/k8s/linkerd-zipkin.yml
{ "clnt/daemonsetTransformer/available" : 1.0, "clnt/daemonsetTransformer/cancelled_connects" : 0, "clnt/daemonsetTransformer/closes" : 0, "clnt/daemonsetTransformer/connect_latency_ms.count" : 0, "clnt/daemonsetTransformer/connection_duration.count" : 0, "clnt/daemonsetTransformer/connection_received_bytes.count" : 0, "clnt/daemonsetTransformer/connection_requests.count" : 0, "clnt/daemonsetTransformer/connection_sent_bytes.count" : 0, "clnt/daemonsetTransformer/connections" : 1.0, "clnt/daemonsetTransformer/connects" : 1, "clnt/daemonsetTransformer/dispatcher/serial/queue_size" : 0.0, "clnt/daemonsetTransformer/dtab/size.count" : 0, "clnt/daemonsetTransformer/failed_connect_latency_ms.count" : 0, "clnt/daemonsetTransformer/failfast/marked_available" : 0, "clnt/daemonsetTransformer/failfast/marked_dead" : 0, "clnt/daemonsetTransformer/failfast/unhealthy_for_ms" : 0.0, "clnt/daemonsetTransformer/failfast/unhealthy_num_tries" : 0.0, "clnt/daemonsetTransformer/failure_accrual/probes" : 0, "clnt/daemonsetTransformer/failure_accrual/removals" : 0, "clnt/daemonsetTransformer/failure_accrual/removed_for_ms" : 0, "clnt/daemonsetTransformer/failure_accrual/revivals" : 0, "clnt/daemonsetTransformer/loadbalancer/adds" : 1, "clnt/daemonsetTransformer/loadbalancer/available" : 1.0, "clnt/daemonsetTransformer/loadbalancer/busy" : 0.0, "clnt/daemonsetTransformer/loadbalancer/closed" : 0.0, "clnt/daemonsetTransformer/loadbalancer/load" : 1.0, "clnt/daemonsetTransformer/loadbalancer/max_effort_exhausted" : 0, "clnt/daemonsetTransformer/loadbalancer/meanweight" : 1.0, "clnt/daemonsetTransformer/loadbalancer/p2c" : 1.0, "clnt/daemonsetTransformer/loadbalancer/rebuilds" : 99, "clnt/daemonsetTransformer/loadbalancer/removes" : 0, "clnt/daemonsetTransformer/loadbalancer/size" : 1.0, "clnt/daemonsetTransformer/loadbalancer/updates" : 99, "clnt/daemonsetTransformer/namer/bind_latency_us.count" : 0, "clnt/daemonsetTransformer/namer/dtabcache/evicts" : 0, "clnt/daemonsetTransformer/namer/dtabcache/misses" : 1, "clnt/daemonsetTransformer/namer/dtabcache/oneshots" : 0, "clnt/daemonsetTransformer/namer/namecache/evicts" : 0, "clnt/daemonsetTransformer/namer/namecache/misses" : 1, "clnt/daemonsetTransformer/namer/namecache/oneshots" : 0, "clnt/daemonsetTransformer/namer/nametreecache/evicts" : 0, "clnt/daemonsetTransformer/namer/nametreecache/misses" : 1, "clnt/daemonsetTransformer/namer/nametreecache/oneshots" : 0, "clnt/daemonsetTransformer/pending" : 0.0, "clnt/daemonsetTransformer/pool_cached" : 0.0, "clnt/daemonsetTransformer/pool_num_too_many_waiters" : 0, "clnt/daemonsetTransformer/pool_num_waited" : 0, "clnt/daemonsetTransformer/pool_size" : 1.0, "clnt/daemonsetTransformer/pool_waiters" : 0.0, "clnt/daemonsetTransformer/received_bytes" : 4557, "clnt/daemonsetTransformer/request_latency_ms.count" : 0, "clnt/daemonsetTransformer/requests" : 2, "clnt/daemonsetTransformer/retries/budget" : 100.0, "clnt/daemonsetTransformer/retries/budget_exhausted" : 0, "clnt/daemonsetTransformer/retries/cannot_retry" : 0, "clnt/daemonsetTransformer/retries/not_open" : 0, "clnt/daemonsetTransformer/retries/request_limit" : 0, "clnt/daemonsetTransformer/retries/requeues" : 0, "clnt/daemonsetTransformer/retries/requeues_per_request.count" : 0, "clnt/daemonsetTransformer/sent_bytes" : 491, "clnt/daemonsetTransformer/service_creation/service_acquisition_latency_ms.count" : 0, "clnt/daemonsetTransformer/socket_unwritable_ms" : 0, "clnt/daemonsetTransformer/socket_writable_ms" : 0, "clnt/daemonsetTransformer/success" : 2, 
"clnt/zipkin-tracer/loadbalancer/adds" : 1, "clnt/zipkin-tracer/loadbalancer/available" : 1.0, "clnt/zipkin-tracer/loadbalancer/busy" : 0.0, "clnt/zipkin-tracer/loadbalancer/closed" : 0.0, "clnt/zipkin-tracer/loadbalancer/load" : 0.0, "clnt/zipkin-tracer/loadbalancer/max_effort_exhausted" : 0, "clnt/zipkin-tracer/loadbalancer/meanweight" : 1.0, "clnt/zipkin-tracer/loadbalancer/p2c" : 1.0, "clnt/zipkin-tracer/loadbalancer/rebuilds" : 1, "clnt/zipkin-tracer/loadbalancer/removes" : 0, "clnt/zipkin-tracer/loadbalancer/size" : 1.0, "clnt/zipkin-tracer/loadbalancer/updates" : 1, "clnt/zipkin-tracer/retries/budget" : 100.0, "clnt/zipkin-tracer/retries/budget_exhausted" : 0, "clnt/zipkin-tracer/retries/cannot_retry" : 0, "clnt/zipkin-tracer/retries/not_open" : 0, "clnt/zipkin-tracer/retries/request_limit" : 0, "clnt/zipkin-tracer/retries/requeues" : 0, "clnt/zipkin-tracer/retries/requeues_per_request.count" : 0, "clnt/zipkin-tracer/service_creation/service_acquisition_latency_ms.count" : 0, "clnt/zipkin-tracer/tries/pending" : 0.0, "clnt/zipkin-tracer/tries/request_latency_ms.count" : 0, "clnt/zipkin-tracer/tries/requests" : 0, "clnt/zipkin-tracer/tries/success" : 0, "inet/dns/cache/evicts" : 0.0, "inet/dns/cache/hit_rate" : 1.0, "inet/dns/cache/size" : 0.0, "inet/dns/dns_lookup_failures" : 0, "inet/dns/dns_lookups" : 0, "inet/dns/failures" : 0, "inet/dns/lookup_ms.count" : 48, "inet/dns/lookup_ms.max" : 0, "inet/dns/lookup_ms.min" : 0, "inet/dns/lookup_ms.p50" : 0, "inet/dns/lookup_ms.p90" : 0, "inet/dns/lookup_ms.p95" : 0, "inet/dns/lookup_ms.p99" : 0, "inet/dns/lookup_ms.p9990" : 0, "inet/dns/lookup_ms.p9999" : 0, "inet/dns/lookup_ms.sum" : 0, "inet/dns/lookup_ms.avg" : 0.0, "inet/dns/queue_size" : 0.0, "inet/dns/successes" : 388, "jvm/application_time_millis" : 494709.0, "jvm/classes/current_loaded" : 7280.0, "jvm/classes/total_loaded" : 7280.0, "jvm/classes/total_unloaded" : 0.0, "jvm/compilation/time_msec" : 2722.0, "jvm/fd_count" : 42.0, "jvm/fd_limit" : 1048576.0, "jvm/gc/ConcurrentMarkSweep/cycles" : 2.0, "jvm/gc/ConcurrentMarkSweep/msec" : 22.0, "jvm/gc/ParNew/cycles" : 32.0, "jvm/gc/ParNew/msec" : 126.0, "jvm/gc/cycles" : 34.0, "jvm/gc/eden/pause_msec.count" : 3, "jvm/gc/eden/pause_msec.max" : 2, "jvm/gc/eden/pause_msec.min" : 2, "jvm/gc/eden/pause_msec.p50" : 2, "jvm/gc/eden/pause_msec.p90" : 2, "jvm/gc/eden/pause_msec.p95" : 2, "jvm/gc/eden/pause_msec.p99" : 2, "jvm/gc/eden/pause_msec.p9990" : 2, "jvm/gc/eden/pause_msec.p9999" : 2, "jvm/gc/eden/pause_msec.sum" : 6, "jvm/gc/eden/pause_msec.avg" : 2.0, "jvm/gc/msec" : 148.0, "jvm/heap/committed" : 3.244032E7, "jvm/heap/max" : 1.05630925E9, "jvm/heap/used" : 1.8293512E7, "jvm/mem/allocations/eden/bytes" : 1.40168416E8, "jvm/mem/buffer/direct/count" : 8.0, "jvm/mem/buffer/direct/max" : 266240.0, "jvm/mem/buffer/direct/used" : 266240.0, "jvm/mem/buffer/mapped/count" : 0.0, "jvm/mem/buffer/mapped/max" : 0.0, "jvm/mem/buffer/mapped/used" : 0.0, "jvm/mem/current/CMS_Old_Gen/max" : 8.9928499E8, "jvm/mem/current/CMS_Old_Gen/used" : 1.1704056E7, "jvm/mem/current/Code_Cache/max" : 5.0331648E7, "jvm/mem/current/Code_Cache/used" : 1737792.0, "jvm/mem/current/Compressed_Class_Space/max" : 1.07374182E9, "jvm/mem/current/Compressed_Class_Space/used" : 6411144.0, "jvm/mem/current/Metaspace/max" : -1.0, "jvm/mem/current/Metaspace/used" : 3.6028464E7, "jvm/mem/current/Par_Eden_Space/max" : 1.3959168E8, "jvm/mem/current/Par_Eden_Space/used" : 6266520.0, "jvm/mem/current/Par_Survivor_Space/max" : 1.7432576E7, "jvm/mem/current/Par_Survivor_Space/used" : 
327840.0, "jvm/mem/current/used" : 6.2500528E7, "jvm/mem/metaspace/max_capacity" : 1.0968105E9, "jvm/mem/postGC/CMS_Old_Gen/max" : 8.9928499E8, "jvm/mem/postGC/CMS_Old_Gen/used" : 8868664.0, "jvm/mem/postGC/Par_Eden_Space/max" : 1.3959168E8, "jvm/mem/postGC/Par_Eden_Space/used" : 0.0, "jvm/mem/postGC/Par_Survivor_Space/max" : 1.7432576E7, "jvm/mem/postGC/Par_Survivor_Space/used" : 327840.0, "jvm/mem/postGC/used" : 9196504.0, "jvm/nonheap/committed" : 4.5699072E7, "jvm/nonheap/max" : -1.0, "jvm/nonheap/used" : 4.4191576E7, "jvm/num_cpus" : 2.0, "jvm/postGC/CMS_Old_Gen/max" : 8.9928499E8, "jvm/postGC/CMS_Old_Gen/used" : 8868664.0, "jvm/postGC/Par_Eden_Space/max" : 1.3959168E8, "jvm/postGC/Par_Eden_Space/used" : 0.0, "jvm/postGC/Par_Survivor_Space/max" : 1.7432576E7, "jvm/postGC/Par_Survivor_Space/used" : 327840.0, "jvm/postGC/used" : 9196504.0, "jvm/safepoint/count" : 510.0, "jvm/safepoint/sync_time_millis" : 84.0, "jvm/safepoint/total_time_millis" : 266.0, "jvm/start_time" : 1.48795372E12, "jvm/tenuring_threshold" : 6.0, "jvm/thread/count" : 17.0, "jvm/thread/daemon_count" : 16.0, "jvm/thread/peak_count" : 19.0, "jvm/uptime" : 495175.0, "larger_than_threadlocal_out_buffer" : 0, "namer/namer/#/io.l5d.k8s/default/available" : 1.0, "namer/namer/#/io.l5d.k8s/default/cancelled_connects" : 0, "namer/namer/#/io.l5d.k8s/default/closes" : 0, "namer/namer/#/io.l5d.k8s/default/connect_latency_ms.count" : 0, "namer/namer/#/io.l5d.k8s/default/connection_duration.count" : 0, "namer/namer/#/io.l5d.k8s/default/connection_received_bytes.count" : 0, "namer/namer/#/io.l5d.k8s/default/connection_requests.count" : 0, "namer/namer/#/io.l5d.k8s/default/connection_sent_bytes.count" : 0, "namer/namer/#/io.l5d.k8s/default/connections" : 1.0, "namer/namer/#/io.l5d.k8s/default/connects" : 1, "namer/namer/#/io.l5d.k8s/default/dispatcher/serial/queue_size" : 0.0, "namer/namer/#/io.l5d.k8s/default/dtab/size.count" : 0, "namer/namer/#/io.l5d.k8s/default/failed_connect_latency_ms.count" : 0, "namer/namer/#/io.l5d.k8s/default/failfast/marked_available" : 0, "namer/namer/#/io.l5d.k8s/default/failfast/marked_dead" : 0, "namer/namer/#/io.l5d.k8s/default/failfast/unhealthy_for_ms" : 0.0, "namer/namer/#/io.l5d.k8s/default/failfast/unhealthy_num_tries" : 0.0, "namer/namer/#/io.l5d.k8s/default/failure_accrual/probes" : 0, "namer/namer/#/io.l5d.k8s/default/failure_accrual/removals" : 0, "namer/namer/#/io.l5d.k8s/default/failure_accrual/removed_for_ms" : 0, "namer/namer/#/io.l5d.k8s/default/failure_accrual/revivals" : 0, "namer/namer/#/io.l5d.k8s/default/loadbalancer/adds" : 1, "namer/namer/#/io.l5d.k8s/default/loadbalancer/available" : 1.0, "namer/namer/#/io.l5d.k8s/default/loadbalancer/busy" : 0.0, "namer/namer/#/io.l5d.k8s/default/loadbalancer/closed" : 0.0, "namer/namer/#/io.l5d.k8s/default/loadbalancer/load" : 1.0, "namer/namer/#/io.l5d.k8s/default/loadbalancer/max_effort_exhausted" : 0, "namer/namer/#/io.l5d.k8s/default/loadbalancer/meanweight" : 1.0, "namer/namer/#/io.l5d.k8s/default/loadbalancer/p2c" : 1.0, "namer/namer/#/io.l5d.k8s/default/loadbalancer/rebuilds" : 95, "namer/namer/#/io.l5d.k8s/default/loadbalancer/removes" : 0, "namer/namer/#/io.l5d.k8s/default/loadbalancer/size" : 1.0, "namer/namer/#/io.l5d.k8s/default/loadbalancer/updates" : 95, "namer/namer/#/io.l5d.k8s/default/namer/bind_latency_us.count" : 0, "namer/namer/#/io.l5d.k8s/default/namer/dtabcache/evicts" : 0, "namer/namer/#/io.l5d.k8s/default/namer/dtabcache/misses" : 1, "namer/namer/#/io.l5d.k8s/default/namer/dtabcache/oneshots" : 0, 
"namer/namer/#/io.l5d.k8s/default/namer/namecache/evicts" : 0, "namer/namer/#/io.l5d.k8s/default/namer/namecache/misses" : 1, "namer/namer/#/io.l5d.k8s/default/namer/namecache/oneshots" : 0, "namer/namer/#/io.l5d.k8s/default/namer/nametreecache/evicts" : 0, "namer/namer/#/io.l5d.k8s/default/namer/nametreecache/misses" : 1, "namer/namer/#/io.l5d.k8s/default/namer/nametreecache/oneshots" : 0, "namer/namer/#/io.l5d.k8s/default/pending" : 0.0, "namer/namer/#/io.l5d.k8s/default/pool_cached" : 0.0, "namer/namer/#/io.l5d.k8s/default/pool_num_too_many_waiters" : 0, "namer/namer/#/io.l5d.k8s/default/pool_num_waited" : 0, "namer/namer/#/io.l5d.k8s/default/pool_size" : 1.0, "namer/namer/#/io.l5d.k8s/default/pool_waiters" : 0.0, "namer/namer/#/io.l5d.k8s/default/received_bytes" : 3654, "namer/namer/#/io.l5d.k8s/default/request_latency_ms.count" : 0, "namer/namer/#/io.l5d.k8s/default/requests" : 2, "namer/namer/#/io.l5d.k8s/default/retries/budget" : 100.0, "namer/namer/#/io.l5d.k8s/default/retries/budget_exhausted" : 0, "namer/namer/#/io.l5d.k8s/default/retries/cannot_retry" : 0, "namer/namer/#/io.l5d.k8s/default/retries/not_open" : 0, "namer/namer/#/io.l5d.k8s/default/retries/request_limit" : 0, "namer/namer/#/io.l5d.k8s/default/retries/requeues" : 0, "namer/namer/#/io.l5d.k8s/default/retries/requeues_per_request.count" : 0, "namer/namer/#/io.l5d.k8s/default/sent_bytes" : 491, "namer/namer/#/io.l5d.k8s/default/service_creation/service_acquisition_latency_ms.count" : 0, "namer/namer/#/io.l5d.k8s/default/socket_unwritable_ms" : 0, "namer/namer/#/io.l5d.k8s/default/socket_writable_ms" : 0, "namer/namer/#/io.l5d.k8s/default/success" : 2, "retries.count" : 0, "retries/budget_exhausted" : 0, "rt/incoming/bindcache/bound/evicts" : 0, "rt/incoming/bindcache/bound/misses" : 0, "rt/incoming/bindcache/bound/oneshots" : 0, "rt/incoming/bindcache/client/evicts" : 0, "rt/incoming/bindcache/client/misses" : 0, "rt/incoming/bindcache/client/oneshots" : 0, "rt/incoming/bindcache/path/evicts" : 0, "rt/incoming/bindcache/path/misses" : 0, "rt/incoming/bindcache/path/oneshots" : 0, "rt/incoming/bindcache/tree/evicts" : 0, "rt/incoming/bindcache/tree/misses" : 0, "rt/incoming/bindcache/tree/oneshots" : 0, "rt/incoming/srv/0.0.0.0/4141/dtab/size.count" : 0, "rt/incoming/srv/0.0.0.0/4141/handletime_us.count" : 0, "rt/incoming/srv/0.0.0.0/4141/nacks" : 0, "rt/incoming/srv/0.0.0.0/4141/nonretryable_nacks" : 0, "rt/incoming/srv/0.0.0.0/4141/pending" : 0.0, "rt/incoming/srv/0.0.0.0/4141/request_latency_ms.count" : 0, "rt/incoming/srv/0.0.0.0/4141/requests" : 0, "rt/incoming/srv/0.0.0.0/4141/status/1XX" : 0, "rt/incoming/srv/0.0.0.0/4141/status/2XX" : 0, "rt/incoming/srv/0.0.0.0/4141/status/3XX" : 0, "rt/incoming/srv/0.0.0.0/4141/status/4XX" : 0, "rt/incoming/srv/0.0.0.0/4141/status/5XX" : 0, "rt/incoming/srv/0.0.0.0/4141/status/error" : 0, "rt/incoming/srv/0.0.0.0/4141/success" : 0, "rt/incoming/srv/0.0.0.0/4141/time/1XX.count" : 0, "rt/incoming/srv/0.0.0.0/4141/time/2XX.count" : 0, "rt/incoming/srv/0.0.0.0/4141/time/3XX.count" : 0, "rt/incoming/srv/0.0.0.0/4141/time/4XX.count" : 0, "rt/incoming/srv/0.0.0.0/4141/time/5XX.count" : 0, "rt/incoming/srv/0.0.0.0/4141/time/error.count" : 0, "rt/incoming/srv/0.0.0.0/4141/transit_latency_ms.count" : 0, "rt/outgoing/bindcache/bound/evicts" : 0, "rt/outgoing/bindcache/bound/misses" : 1, "rt/outgoing/bindcache/bound/oneshots" : 0, "rt/outgoing/bindcache/client/evicts" : 0, "rt/outgoing/bindcache/client/misses" : 1, "rt/outgoing/bindcache/client/oneshots" : 0, 
"rt/outgoing/bindcache/path/evicts" : 0, "rt/outgoing/bindcache/path/misses" : 2, "rt/outgoing/bindcache/path/oneshots" : 0, "rt/outgoing/bindcache/tree/evicts" : 0, "rt/outgoing/bindcache/tree/misses" : 2, "rt/outgoing/bindcache/tree/oneshots" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/available" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/cancelled_connects" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/closes" : 2342, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connect_latency_ms.count" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_duration.count" : 495, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_duration.max" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_duration.min" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_duration.p50" : 63, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_duration.p90" : 329, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_duration.p95" : 547, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_duration.p99" : 680, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_duration.p9990" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_duration.p9999" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_duration.sum" : 57355, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_duration.avg" : 115.86868686868686, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_received_bytes.count" : 495, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_received_bytes.max" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_received_bytes.min" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_received_bytes.p50" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_received_bytes.p90" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_received_bytes.p95" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_received_bytes.p99" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_received_bytes.p9990" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_received_bytes.p9999" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_received_bytes.sum" : 0, 
"rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_received_bytes.avg" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_requests.count" : 495, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_requests.max" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_requests.min" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_requests.p50" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_requests.p90" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_requests.p95" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_requests.p99" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_requests.p9990" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_requests.p9999" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_requests.sum" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_requests.avg" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_sent_bytes.count" : 495, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_sent_bytes.max" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_sent_bytes.min" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_sent_bytes.p50" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_sent_bytes.p90" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_sent_bytes.p95" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_sent_bytes.p99" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_sent_bytes.p9990" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_sent_bytes.p9999" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_sent_bytes.sum" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connection_sent_bytes.avg" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connections" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/connects" : 1171, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/dtab/size.count" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/exn/org.jboss.netty.channel.ConnectTimeoutException" : 1171, 
"rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failed_connect_latency_ms.count" : 495, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failed_connect_latency_ms.max" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failed_connect_latency_ms.min" : 1003, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failed_connect_latency_ms.p50" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failed_connect_latency_ms.p90" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failed_connect_latency_ms.p95" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failed_connect_latency_ms.p99" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failed_connect_latency_ms.p9990" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failed_connect_latency_ms.p9999" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failed_connect_latency_ms.sum" : 499282, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failed_connect_latency_ms.avg" : 1008.6505050505051, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failure_accrual/probes" : 5, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failure_accrual/removals" : 1, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failure_accrual/removed_for_ms" : 228371, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/failure_accrual/revivals" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/adds" : 1, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/available" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/busy" : 1.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/closed" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/load" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/max_effort_exhausted" : 1153, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/meanweight" : 1.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/p2c" : 1.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/rebuilds" : 1154, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/removes" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/size" : 1.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/loadbalancer/updates" : 1, 
"rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/pending" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/pool_cached" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/pool_num_too_many_waiters" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/pool_num_waited" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/pool_size" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/pool_waiters" : 0.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/received_bytes" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/request_latency_ms.count" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/requests" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/retries/budget" : 100.0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/retries/budget_exhausted" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/retries/cannot_retry" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/retries/not_open" : 1143, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/retries/request_limit" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/retries/requeues" : 4, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/retries/requeues_per_request.count" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/sent_bytes" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/failures" : 1171, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/failures/com.twitter.finagle.ChannelWriteException" : 1147, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/failures/com.twitter.finagle.ChannelWriteException/org.jboss.netty.channel.ConnectTimeoutException" : 1147, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/failures/com.twitter.finagle.Failure" : 24, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/failures/com.twitter.finagle.Failure/com.twitter.finagle.CancelledRequestException" : 24, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/service_acquisition_latency_ms.count" : 495, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/service_acquisition_latency_ms.max" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/service_acquisition_latency_ms.min" : 152, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/service_acquisition_latency_ms.p50" : 
1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/service_acquisition_latency_ms.p90" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/service_acquisition_latency_ms.p95" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/service_acquisition_latency_ms.p99" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/service_acquisition_latency_ms.p9990" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/service_acquisition_latency_ms.p9999" : 1013, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/service_acquisition_latency_ms.sum" : 494421, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/service_creation/service_acquisition_latency_ms.avg" : 998.8303030303031, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/socket_unwritable_ms" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/socket_writable_ms" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/status/1XX" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/status/2XX" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/status/3XX" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/status/4XX" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/status/5XX" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/status/error" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/success" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/time/1XX.count" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/time/2XX.count" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/time/3XX.count" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/time/4XX.count" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/time/5XX.count" : 0, "rt/outgoing/dst/id/%/io.l5d.k8s.daemonset/default/incoming/l5d/#/io.l5d.k8s/default/http/nginx/time/error.count" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/failures" : 18, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/failures/com.twitter.finagle.NoBrokersAvailableException" : 18, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/pending" : 0.0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/request_latency_ms.count" : 18, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/request_latency_ms.max" : 0, 
"rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/request_latency_ms.min" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/request_latency_ms.p50" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/request_latency_ms.p90" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/request_latency_ms.p95" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/request_latency_ms.p99" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/request_latency_ms.p9990" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/request_latency_ms.p9999" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/request_latency_ms.sum" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/request_latency_ms.avg" : 0.0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/requests" : 18, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/budget_exhausted" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/per_request.count" : 18, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/per_request.max" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/per_request.min" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/per_request.p50" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/per_request.p90" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/per_request.p95" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/per_request.p99" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/per_request.p9990" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/per_request.p9999" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/per_request.sum" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/per_request.avg" : 0.0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/retries/total" : 0, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/sourcedfailures//svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140" : 18, "rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/sourcedfailures//svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/com.twitter.finagle.NoBrokersAvailableException" : 18, 
"rt/outgoing/dst/path/svc/aeb6fe752fa9711e6a23906490c9cdb1-246420995.eu-west-1.elb.amazonaws.com:4140/success" : 0, "rt/outgoing/dst/path/svc/nginx/failures" : 46, "rt/outgoing/dst/path/svc/nginx/failures/com.twitter.finagle.ChannelWriteException" : 22, "rt/outgoing/dst/path/svc/nginx/failures/com.twitter.finagle.ChannelWriteException/org.jboss.netty.channel.ConnectTimeoutException" : 22, "rt/outgoing/dst/path/svc/nginx/failures/interrupted" : 24, "rt/outgoing/dst/path/svc/nginx/failures/interrupted/com.twitter.finagle.Failure" : 24, "rt/outgoing/dst/path/svc/nginx/failures/interrupted/com.twitter.finagle.Failure/com.twitter.finagle.CancelledRequestException" : 24, "rt/outgoing/dst/path/svc/nginx/failures/restartable" : 24, "rt/outgoing/dst/path/svc/nginx/failures/restartable/com.twitter.finagle.Failure" : 24, "rt/outgoing/dst/path/svc/nginx/failures/restartable/com.twitter.finagle.Failure/com.twitter.finagle.CancelledRequestException" : 24, "rt/outgoing/dst/path/svc/nginx/pending" : 0.0, "rt/outgoing/dst/path/svc/nginx/request_latency_ms.count" : 37, "rt/outgoing/dst/path/svc/nginx/request_latency_ms.max" : 30138, "rt/outgoing/dst/path/svc/nginx/request_latency_ms.min" : 1003, "rt/outgoing/dst/path/svc/nginx/request_latency_ms.p50" : 22360, "rt/outgoing/dst/path/svc/nginx/request_latency_ms.p90" : 30138, "rt/outgoing/dst/path/svc/nginx/request_latency_ms.p95" : 30138, "rt/outgoing/dst/path/svc/nginx/request_latency_ms.p99" : 30138, "rt/outgoing/dst/path/svc/nginx/request_latency_ms.p9990" : 30138, "rt/outgoing/dst/path/svc/nginx/request_latency_ms.p9999" : 30138, "rt/outgoing/dst/path/svc/nginx/request_latency_ms.sum" : 641204, "rt/outgoing/dst/path/svc/nginx/request_latency_ms.avg" : 17329.837837837837, "rt/outgoing/dst/path/svc/nginx/requests" : 46, "rt/outgoing/dst/path/svc/nginx/retries/budget_exhausted" : 22, "rt/outgoing/dst/path/svc/nginx/retries/per_request.count" : 37, "rt/outgoing/dst/path/svc/nginx/retries/per_request.max" : 29, "rt/outgoing/dst/path/svc/nginx/retries/per_request.min" : 0, "rt/outgoing/dst/path/svc/nginx/retries/per_request.p50" : 22, "rt/outgoing/dst/path/svc/nginx/retries/per_request.p90" : 29, "rt/outgoing/dst/path/svc/nginx/retries/per_request.p95" : 29, "rt/outgoing/dst/path/svc/nginx/retries/per_request.p99" : 29, "rt/outgoing/dst/path/svc/nginx/retries/per_request.p9990" : 29, "rt/outgoing/dst/path/svc/nginx/retries/per_request.p9999" : 29, "rt/outgoing/dst/path/svc/nginx/retries/per_request.sum" : 603, "rt/outgoing/dst/path/svc/nginx/retries/per_request.avg" : 16.2972972972973, "rt/outgoing/dst/path/svc/nginx/retries/total" : 1121, "rt/outgoing/dst/path/svc/nginx/success" : 0, "rt/outgoing/srv/0.0.0.0/4140/closes" : 0, "rt/outgoing/srv/0.0.0.0/4140/connection_duration.count" : 36, "rt/outgoing/srv/0.0.0.0/4140/connection_duration.max" : 11595, "rt/outgoing/srv/0.0.0.0/4140/connection_duration.min" : 0, "rt/outgoing/srv/0.0.0.0/4140/connection_duration.p50" : 161, "rt/outgoing/srv/0.0.0.0/4140/connection_duration.p90" : 7264, "rt/outgoing/srv/0.0.0.0/4140/connection_duration.p95" : 9694, "rt/outgoing/srv/0.0.0.0/4140/connection_duration.p99" : 11595, "rt/outgoing/srv/0.0.0.0/4140/connection_duration.p9990" : 11595, "rt/outgoing/srv/0.0.0.0/4140/connection_duration.p9999" : 11595, "rt/outgoing/srv/0.0.0.0/4140/connection_duration.sum" : 83092, "rt/outgoing/srv/0.0.0.0/4140/connection_duration.avg" : 2308.1111111111113, "rt/outgoing/srv/0.0.0.0/4140/connection_received_bytes.count" : 36, "rt/outgoing/srv/0.0.0.0/4140/connection_received_bytes.max" : 480, 
"rt/outgoing/srv/0.0.0.0/4140/connection_received_bytes.min" : 0, "rt/outgoing/srv/0.0.0.0/4140/connection_received_bytes.p50" : 139, "rt/outgoing/srv/0.0.0.0/4140/connection_received_bytes.p90" : 386, "rt/outgoing/srv/0.0.0.0/4140/connection_received_bytes.p95" : 386, "rt/outgoing/srv/0.0.0.0/4140/connection_received_bytes.p99" : 480, "rt/outgoing/srv/0.0.0.0/4140/connection_received_bytes.p9990" : 480, "rt/outgoing/srv/0.0.0.0/4140/connection_received_bytes.p9999" : 480, "rt/outgoing/srv/0.0.0.0/4140/connection_received_bytes.sum" : 5680, "rt/outgoing/srv/0.0.0.0/4140/connection_received_bytes.avg" : 157.77777777777777, "rt/outgoing/srv/0.0.0.0/4140/connection_requests.count" : 36, "rt/outgoing/srv/0.0.0.0/4140/connection_requests.max" : 5, "rt/outgoing/srv/0.0.0.0/4140/connection_requests.min" : 0, "rt/outgoing/srv/0.0.0.0/4140/connection_requests.p50" : 1, "rt/outgoing/srv/0.0.0.0/4140/connection_requests.p90" : 4, "rt/outgoing/srv/0.0.0.0/4140/connection_requests.p95" : 4, "rt/outgoing/srv/0.0.0.0/4140/connection_requests.p99" : 5, "rt/outgoing/srv/0.0.0.0/4140/connection_requests.p9990" : 5, "rt/outgoing/srv/0.0.0.0/4140/connection_requests.p9999" : 5, "rt/outgoing/srv/0.0.0.0/4140/connection_requests.sum" : 52, "rt/outgoing/srv/0.0.0.0/4140/connection_requests.avg" : 1.4444444444444444, "rt/outgoing/srv/0.0.0.0/4140/connection_sent_bytes.count" : 36, "rt/outgoing/srv/0.0.0.0/4140/connection_sent_bytes.max" : 3584, "rt/outgoing/srv/0.0.0.0/4140/connection_sent_bytes.min" : 0, "rt/outgoing/srv/0.0.0.0/4140/connection_sent_bytes.p50" : 654, "rt/outgoing/srv/0.0.0.0/4140/connection_sent_bytes.p90" : 2686, "rt/outgoing/srv/0.0.0.0/4140/connection_sent_bytes.p95" : 2686, "rt/outgoing/srv/0.0.0.0/4140/connection_sent_bytes.p99" : 3584, "rt/outgoing/srv/0.0.0.0/4140/connection_sent_bytes.p9990" : 3584, "rt/outgoing/srv/0.0.0.0/4140/connection_sent_bytes.p9999" : 3584, "rt/outgoing/srv/0.0.0.0/4140/connection_sent_bytes.sum" : 29322, "rt/outgoing/srv/0.0.0.0/4140/connection_sent_bytes.avg" : 814.5, "rt/outgoing/srv/0.0.0.0/4140/connections" : 1.0, "rt/outgoing/srv/0.0.0.0/4140/connects" : 65, "rt/outgoing/srv/0.0.0.0/4140/dtab/size.count" : 0, "rt/outgoing/srv/0.0.0.0/4140/exn/java.nio.channels.ClosedChannelException" : 24, "rt/outgoing/srv/0.0.0.0/4140/failures" : 64, "rt/outgoing/srv/0.0.0.0/4140/failures/com.twitter.finagle.service.ResponseClassificationSyntheticException" : 64, "rt/outgoing/srv/0.0.0.0/4140/handletime_us.count" : 44, "rt/outgoing/srv/0.0.0.0/4140/handletime_us.max" : 187, "rt/outgoing/srv/0.0.0.0/4140/handletime_us.min" : 75, "rt/outgoing/srv/0.0.0.0/4140/handletime_us.p50" : 118, "rt/outgoing/srv/0.0.0.0/4140/handletime_us.p90" : 140, "rt/outgoing/srv/0.0.0.0/4140/handletime_us.p95" : 171, "rt/outgoing/srv/0.0.0.0/4140/handletime_us.p99" : 187, "rt/outgoing/srv/0.0.0.0/4140/handletime_us.p9990" : 187, "rt/outgoing/srv/0.0.0.0/4140/handletime_us.p9999" : 187, "rt/outgoing/srv/0.0.0.0/4140/handletime_us.sum" : 5327, "rt/outgoing/srv/0.0.0.0/4140/handletime_us.avg" : 121.06818181818181, "rt/outgoing/srv/0.0.0.0/4140/nacks" : 0, "rt/outgoing/srv/0.0.0.0/4140/nonretryable_nacks" : 0, "rt/outgoing/srv/0.0.0.0/4140/pending" : 0.0, "rt/outgoing/srv/0.0.0.0/4140/read_timeout" : 0, "rt/outgoing/srv/0.0.0.0/4140/received_bytes" : 7696, "rt/outgoing/srv/0.0.0.0/4140/request_latency_ms.count" : 55, "rt/outgoing/srv/0.0.0.0/4140/request_latency_ms.max" : 30138, "rt/outgoing/srv/0.0.0.0/4140/request_latency_ms.min" : 1, "rt/outgoing/srv/0.0.0.0/4140/request_latency_ms.p50" : 7050, 
"rt/outgoing/srv/0.0.0.0/4140/request_latency_ms.p90" : 30138, "rt/outgoing/srv/0.0.0.0/4140/request_latency_ms.p95" : 30138, "rt/outgoing/srv/0.0.0.0/4140/request_latency_ms.p99" : 30138, "rt/outgoing/srv/0.0.0.0/4140/request_latency_ms.p9990" : 30138, "rt/outgoing/srv/0.0.0.0/4140/request_latency_ms.p9999" : 30138, "rt/outgoing/srv/0.0.0.0/4140/request_latency_ms.sum" : 641295, "rt/outgoing/srv/0.0.0.0/4140/request_latency_ms.avg" : 11659.90909090909, "rt/outgoing/srv/0.0.0.0/4140/requests" : 64, "rt/outgoing/srv/0.0.0.0/4140/sent_bytes" : 31418, "rt/outgoing/srv/0.0.0.0/4140/socket_unwritable_ms" : 0, "rt/outgoing/srv/0.0.0.0/4140/socket_writable_ms" : 0, "rt/outgoing/srv/0.0.0.0/4140/status/1XX" : 0, "rt/outgoing/srv/0.0.0.0/4140/status/2XX" : 0, "rt/outgoing/srv/0.0.0.0/4140/status/3XX" : 0, "rt/outgoing/srv/0.0.0.0/4140/status/4XX" : 0, "rt/outgoing/srv/0.0.0.0/4140/status/502" : 64, "rt/outgoing/srv/0.0.0.0/4140/status/5XX" : 64, "rt/outgoing/srv/0.0.0.0/4140/status/error" : 0, "rt/outgoing/srv/0.0.0.0/4140/success" : 0, "rt/outgoing/srv/0.0.0.0/4140/time/1XX.count" : 0, "rt/outgoing/srv/0.0.0.0/4140/time/2XX.count" : 0, "rt/outgoing/srv/0.0.0.0/4140/time/3XX.count" : 0, "rt/outgoing/srv/0.0.0.0/4140/time/4XX.count" : 0, "rt/outgoing/srv/0.0.0.0/4140/time/5XX.count" : 55, "rt/outgoing/srv/0.0.0.0/4140/time/5XX.max" : 30138, "rt/outgoing/srv/0.0.0.0/4140/time/5XX.min" : 1, "rt/outgoing/srv/0.0.0.0/4140/time/5XX.p50" : 7050, "rt/outgoing/srv/0.0.0.0/4140/time/5XX.p90" : 30138, "rt/outgoing/srv/0.0.0.0/4140/time/5XX.p95" : 30138, "rt/outgoing/srv/0.0.0.0/4140/time/5XX.p99" : 30138, "rt/outgoing/srv/0.0.0.0/4140/time/5XX.p9990" : 30138, "rt/outgoing/srv/0.0.0.0/4140/time/5XX.p9999" : 30138, "rt/outgoing/srv/0.0.0.0/4140/time/5XX.sum" : 641325, "rt/outgoing/srv/0.0.0.0/4140/time/5XX.avg" : 11660.454545454546, "rt/outgoing/srv/0.0.0.0/4140/time/error.count" : 0, "rt/outgoing/srv/0.0.0.0/4140/transit_latency_ms.count" : 0, "rt/outgoing/srv/0.0.0.0/4140/write_timeout" : 0, "zipkin/zipkin/log_span/ok" : 0, "zipkin/zipkin/log_span/try_later" : 0, "zk2/inet/dns/cache/evicts" : 0.0, "zk2/inet/dns/cache/hit_rate" : 1.0, "zk2/inet/dns/cache/size" : 0.0, "zk2/inet/dns/dns_lookup_failures" : 0, "zk2/inet/dns/dns_lookups" : 0, "zk2/inet/dns/failures" : 0, "zk2/inet/dns/lookup_ms.count" : 0, "zk2/inet/dns/queue_size" : 0.0, "zk2/inet/dns/successes" : 0, "zk2/observed_serversets" : 0.0, "zk2/session_cache_size" : 0.0 }
What especially stands out here is that the client's /closes metric is more than twice that of /connects -- I assume that /closes counts failed connections and /connects does not?
We have disabled telemetry, usage reporting, etc., with no improvement. Why can't we keep connections established?
For each linkerd config file tested, ci.sh calls /admin/shutdown. If that shutdown fails, it force-kills the process. This should not be necessary once twitter/util#189 ships.
I am using linkerd to set up an authorization plugin in Java. Currently I am using an Identifier to intercept the request and check whether the user is authorized (using https://github.com/linkerd/linkerd-examples/tree/master/plugins/header-classifier as an example).
If the user is not authorized, the request should not reach the backend service. I am throwing a runtime exception in such cases (do let me know if there is another way to prevent the request from reaching the backend), and linkerd then sends the response as 502 Bad Gateway. Linkerd should have a mechanism by which I can specify the HTTP status code I want to send back.
I tried using a responseClassifier along with the identifier, but it looks like the responseClassifier does not get invoked when the exception is thrown.
Got a report that the TLS part of the blog post no longer works:
https://buoyant.io/2017/04/06/a-service-mesh-for-kubernetes-part-viii-linkerd-as-an-ingress-controller/
This may be due to the recent 1.2 upgrade, where keys must now be in PKCS#8 format.
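If the PKCS#8 requirement is indeed the cause, an existing PEM private key can usually be converted with openssl pkcs8 -topk8 -nocrypt -in key.pem -out key.pk8 (the filenames here are placeholders), and the TLS config from the post would then point at the converted key.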
I tried running the latest example in Marathon but I get:
Nov 23, 2017 12:34:25 PM com.twitter.finagle.http.HttpMuxer$ $anonfun$new$1
INFO: HttpMuxer[/admin/metrics.json] = com.twitter.finagle.stats.MetricsExporter(<function1>)
Nov 23, 2017 12:34:25 PM com.twitter.finagle.http.HttpMuxer$ $anonfun$new$1
INFO: HttpMuxer[/admin/per_host_metrics.json] = com.twitter.finagle.stats.HostMetricsExporter(<function1>)
Error parsing flag "com.twitter.finagle.tracing.debugTrace": flag undefined
usage: io.buoyant.linkerd.Main [<flag>...]
flags:
-help='false': Show this help
-log.append='true': If true, appends to existing logfile. Otherwise, file is truncated.
-log.async='true': Log asynchronously
-log.async.inferClassNames='false': Infer class and method names synchronously. See com.twitter.logging.Queuein
....
This article and this example setup are great reference points for implementing the blog series, but they are missing examples of namerd setup.
Maybe it would be nice to add a corresponding sample for linkerd + namerd?
Docker for Mac is the recommended install option for macOS: https://docs.docker.com/engine/getstarted/step_one/
But our README still uses the older VM-style commands:
https://github.com/BuoyantIO/linkerd-examples/blob/master/linkerd-tcp/README.md
For example, on my machine:
$ docker-machine ip
Error getting IP address: Host is not running
Some example files inline linkerd and namerd configs; for example, linkerd's config YAML is embedded in this k8s YAML:
https://github.com/linkerd/linkerd-examples/blob/master/k8s-daemonset/k8s/linkerd-ingress.yml#L8
It would be helpful to call this out with a comment, to indicate the delineation between the linkerd config and the environment config.
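For illustration, a minimal sketch of the kind of delineating comments meant here (the ConfigMap name and the embedded linkerd config are illustrative, not the repo's actual file):
# --- Kubernetes environment config ---
apiVersion: v1
kind: ConfigMap
metadata:
  name: l5d-config
data:
  config.yaml: |-
    # --- everything below this line is linkerd config, not Kubernetes config ---
    routers:
    - protocol: http
      servers:
      - port: 4140
        ip: 0.0.0.0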
I got the error below while trying to open the admin dashboard, following the instructions in the docs (linkerd-examples/k8s-daemonset):
error: error executing jsonpath "{.status.loadBalancer.ingress[0].ip}": ip is not found -bash: open: command not found
When I check the content of the JSON using:
kubectl get svc l5d --output=json
I discovered that ip has been changed to hostname as of Kubernetes 1.5.0.
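For what it's worth, the fix on such clusters is the same jsonpath approach used elsewhere in this repo, with {.status.loadBalancer.ingress[0].hostname} substituted for the ip field; the second half of the error (-bash: open: command not found) is unrelated -- open is a macOS-only command.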
When I test with https://github.com/linkerd/linkerd-examples/tree/master/getting-started/k8s:
Because I have already deployed linkerd, I just run the command:
kubectl apply -f nginx.yml
The nginx service starts successfully, but when I use 'curl -H "Host: nginx" :4140', I get this failure message:
No hosts are available for /svc/nginx, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/svc=>/host;/host/world=>/srv/world-v1], Dtab.local=[]. Remote Info: Not Available
How do I add a service after linkerd has started? I have done the following steps:
2017-04-18T05:43:24.398460821Z starting gRPC server on :7777
2017-04-18T05:43:34.508996477Z 2017/04/18 05:43:34 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp: lookup minikube on 10.0.0.10:53: server misbehaving"; Reconnecting to {minikube:4140 }
Can you please suggest what's going wrong?
Thanks,
Jayesh
The example configs and instructions at https://github.com/BuoyantIO/linkerd-examples/tree/master/gob/dcos need updating.
I configured linkerd with the settings mentioned in the docs, but how do I do the service discovery part? Do I manually set k/v stuff? Does it rely on DNS (i.e., consul.service.consul or mynode.node.dc1.consul)? It's not entirely clear what I'm supposed to do next once linkerd is running on the host.
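For reference, the usual next step with Consul is that services register themselves in Consul's service catalog (no manual k/v writes), and linkerd resolves them through the io.l5d.consul namer via a dtab. A minimal sketch, with the datacenter name dc1 and the addresses here as placeholders:
namers:
- kind: io.l5d.consul
  host: 127.0.0.1   # address of a local consul agent
  port: 8500
routers:
- protocol: http
  dtab: |
    /svc => /#/io.l5d.consul/dc1;
  servers:
  - port: 4140
    ip: 0.0.0.0
With this dtab, a request identified as /svc/<name> resolves to the Consul service <name> in datacenter dc1 -- treat the exact parameter names as something to verify against the namer docs.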
I've been trying to come up with a good configuration for my DC/OS setup, where I use linkerd-to-linkerd routing. I was hoping to find some examples of how to set up egress traffic to the outside world, but wasn't successful. Is there any chance an example like this could be provided in the repo?
I'm trying to start linkerd in Docker under /getting-started/docker.
When running docker-compose up, I get the following error in the terminal:
l5d_1 | -XX:+AggressiveOpts -XX:+CMSClassUnloadingEnabled -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:+CMSScavengeBeforeRemark -XX:InitialHeapSize=33554432 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=357916672 -XX:MaxTenuringThreshold=6 -XX:OldPLABSize=16 -XX:+PrintCommandLineFlags -XX:+ScavengeBeforeFullGC -XX:-TieredCompilation -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+UseStringDeduplication
l5d_1 | Nov 04, 2016 1:03:12 PM com.twitter.finagle.http.HttpMuxer$$anonfun$4 apply
l5d_1 | INFO: HttpMuxer[/admin/metrics.json] = com.twitter.finagle.stats.MetricsExporter()
l5d_1 | Nov 04, 2016 1:03:12 PM com.twitter.finagle.http.HttpMuxer$$anonfun$4 apply
l5d_1 | INFO: HttpMuxer[/admin/per_host_metrics.json] = com.twitter.finagle.stats.HostMetricsExporter()
l5d_1 | I 1104 13:03:12.413 THREAD1: linkerd 0.7.0 (rev=f3bfd36f30ef832667cdbe271701e24c9d6947f4) built at 20160628-132911
l5d_1 | I 1104 13:03:12.511 THREAD1: Finagle version 6.35.0 (rev=4f353f2d009ea37dc853bbee875a04cf8f720b52) built at 20160427-114558
l5d_1 | com.fasterxml.jackson.databind.JsonMappingException: requirement failed: disco is not a directory
l5d_1 | at [Source: java.io.StringReader@355e34c7; line: 6, column: 12] (through reference chain: com.fasterxml.jackson.module.scala.deser.BuilderWrapper[0])
l5d_1 | at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
l5d_1 | at com.fasterxml.jackson.databind.DeserializationContext.mappingException(DeserializationContext.java:770)
l5d_1 | at io.buoyant.config.ConfigDeserializer.catchMappingException(Parser.scala:25)
l5d_1 | at io.buoyant.config.types.DirectoryDeserializer.deserialize(DirectoryDeserializer.scala:14)
l5d_1 | at io.buoyant.config.types.DirectoryDeserializer.deserialize(DirectoryDeserializer.scala:12)
l5d_1 | at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
l5d_1 | at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:344)
l5d_1 | at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1064)
l5d_1 | at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:264)
l5d_1 | at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeOther(BeanDeserializer.java:156)
l5d_1 | at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:126)
l5d_1 | at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedForId(AsPropertyTypeDeserializer.java:113)
l5d_1 | at com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:84)
l5d_1 | at com.fasterxml.jackson.databind.deser.AbstractDeserializer.deserializeWithType(AbstractDeserializer.java:132)
l5d_1 | at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:234)
l5d_1 | at com.fasterxml.jackson.databind.deser.std.CollectionDeserializer.deserialize(CollectionDeserializer.java:206)
l5d_1 | at com.fasterxml.jackson.module.scala.deser.SeqDeserializer.deserialize(SeqDeserializerModule.scala:76)
l5d_1 | at com.fasterxml.jackson.module.scala.deser.SeqDeserializer.deserialize(SeqDeserializerModule.scala:59)
l5d_1 | at com.fasterxml.jackson.module.scala.deser.OptionDeserializer$$anonfun$deserialize$1.apply(OptionDeserializerModule.scala:54)
l5d_1 | at com.fasterxml.jackson.module.scala.deser.OptionDeserializer$$anonfun$deserialize$1.apply(OptionDeserializerModule.scala:54)
l5d_1 | at scala.Option.map(Option.scala:146)
l5d_1 | at com.fasterxml.jackson.module.scala.deser.OptionDeserializer.deserialize(OptionDeserializerModule.scala:54)
l5d_1 | at com.fasterxml.jackson.module.scala.deser.OptionDeserializer.deserialize(OptionDeserializerModule.scala:15)
l5d_1 | at com.fasterxml.jackson.databind.deser.SettableBeanProperty.deserialize(SettableBeanProperty.java:538)
l5d_1 | at com.fasterxml.jackson.databind.deser.BeanDeserializer._deserializeUsingPropertyBased(BeanDeserializer.java:344)
l5d_1 | at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.deserializeFromObjectUsingNonDefault(BeanDeserializerBase.java:1064)
l5d_1 | at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:264)
l5d_1 | at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:124)
l5d_1 | at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3066)
l5d_1 | at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2175)
l5d_1 | at com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper$class.readValue(ScalaObjectMapper.scala:180)
l5d_1 | at io.buoyant.config.Parser$$anon$1.readValue(Parser.scala:71)
l5d_1 | at io.buoyant.linkerd.Linker$.parse(Linker.scala:64)
l5d_1 | at io.buoyant.Linkerd$.loadLinker(Linkerd.scala:63)
l5d_1 | at io.buoyant.Linkerd$.main(Linkerd.scala:26)
l5d_1 | at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
l5d_1 | at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
l5d_1 | at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
l5d_1 | at java.lang.reflect.Method.invoke(Method.java:498)
l5d_1 | at com.twitter.app.App$$anonfun$nonExitingMain$3.apply(App.scala:176)
l5d_1 | at com.twitter.app.App$$anonfun$nonExitingMain$3.apply(App.scala:175)
l5d_1 | at scala.Option.foreach(Option.scala:257)
l5d_1 | at com.twitter.app.App$class.nonExitingMain(App.scala:175)
l5d_1 | at io.buoyant.Linkerd$.nonExitingMain(Linkerd.scala:18)
l5d_1 | at com.twitter.app.App$class.main(App.scala:141)
l5d_1 | at io.buoyant.Linkerd$.main(Linkerd.scala:18)
l5d_1 | at io.buoyant.Linkerd.main(Linkerd.scala)
l5d_1 | Exception thrown in main on startup
Am I doing something wrong?
Follow-up from #32: we should consolidate all k8s example configs and apps into one place, specifically k8s-daemonset and getting-started/k8s.
RBAC is no longer in beta when starting up new Kubernetes clusters -- in fact, starting a new cluster on GKE enables RBAC by default.
We should rename the file, as we will need to reference it in our "Linkerd + Kubernetes" blog post series, and having beta in the title might confuse new users, since the series is built on a GKE environment.
I have a private k8s 1.5.1 cluster with three workers. I use NodePort instead of LoadBalancer in the l5d service yml definition. Pods and svc are created successfully.
Just like #81, the error below appears.
$ http_proxy=10.128.112.27:31396 curl -s http://hello
No hosts are available for /http/1.1/GET/hello, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/http/*/*=>/host;/host/world=>/srv/world-v1], Dtab.local=[]. Remote Info: Not Available
10.128.112.27 is one of the worker IPs; 31396 is the NodePort for l5d port 4140.
$ ./kubectl describe svc l5d
Name: l5d
Namespace: default
Labels: <none>
Selector: app=l5d
Type: NodePort
IP: 10.1.224.144
Port: outgoing 4140/TCP
NodePort: outgoing 31396/TCP
Endpoints: 10.128.112.26:4140,10.128.112.27:4140,10.128.112.28:4140
Port: incoming 4141/TCP
NodePort: incoming 31375/TCP
Endpoints: 10.128.112.26:4141,10.128.112.27:4141,10.128.112.28:4141
Port: admin 9990/TCP
NodePort: admin 31957/TCP
Endpoints: 10.128.112.26:9990,10.128.112.27:9990,10.128.112.28:9990
Session Affinity: None
No events.
The world-v1 pod is working, since I can reach it from an ubuntu pod inside k8s.
$ ./kubectl get po
NAME READY STATUS RESTARTS AGE
hello-851zz 1/1 Running 0 1h
hello-dq33d 1/1 Running 0 1h
hello-mk6m2 1/1 Running 0 1h
l5d-8fdrr 2/2 Running 0 1h
l5d-n8hfm 2/2 Running 0 1h
l5d-wfrtg 2/2 Running 0 1h
ubuntu 1/1 Running 0 2h
world-v1-h38bl 1/1 Running 0 1h
world-v1-n0fql 1/1 Running 0 1h
world-v1-x19n4 1/1 Running 0 1h
$ ./kubectl exec -it ubuntu bash
root@ubuntu:/# curl http://world-v1:7778
world (10.2.15.174)!
The hello pod is not working:
root@ubuntu:/# nslookup hello
Server: 10.1.0.10
Address: 10.1.0.10#53
Name: hello.default.svc.cluster.local
Address: 10.1.143.36
root@ubuntu:/# curl http://hello:7777
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p>
I can go into a hello pod and look around.
$ ./kubectl exec -it hello-dq33d bash
root@hello-dq33d:/usr/src/app# echo $NODE_NAME
10.128.112.27
root@hello-dq33d:/usr/src/app# echo $POD_IP
10.2.17.227
root@hello-dq33d:/usr/src/app# echo $http_proxy
10.128.112.27:4140
root@hello-dq33d:/usr/src/app# curl -s http://hello
No hosts are available for /http/1.1/GET/hello, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/http/*/*=>/host;/host/world=>/srv/world-v1], Dtab.local=[]. Remote Info: Not Available
I'm stuck. Any idea how to continue?
Could we have more documentation in that config file, so users could read and understand what is being configured there?
I would love for a user to be able to just read it, understand what is happening, and rework the config file for their own purposes.
I'm trying to set up linkerd on Kubernetes 1.5.0 using https://github.com/BuoyantIO/linkerd-examples/tree/master/k8s-daemonset. I keep getting the error below for:
http_proxy=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"):4140 curl -s http://hello
No hosts are available for /http/1.1/GET/hello, Dtab.base=[/srv=>/#/io.l5d.k8s/default/http;/host=>/srv;/http/*/*=>/host;/host/world=>/srv/world-v1], Dtab.local=[].Remote Info: Not Available
When I downgrade to Kubernetes 1.4.0, it works.
Does anyone have an idea of what's happening here?
Version 1.2 of grpc-go has a bug where the :authority header doesn't have the port set when the port isn't the default port. This creates interop problems.
Env: Kubernetes 1.7.0
Using the linkerd-zipkin yml file (https://github.com/linkerd/linkerd-examples/blob/master/k8s-daemonset/k8s/linkerd-zipkin.yml)
Output of oc get all in the default namespace:
root@ip-172-31-29-6:~/zipkin# oc get all
NAME READY STATUS RESTARTS AGE
po/l5d-f8193 2/2 Running 0 4m
po/mongo-2011101372-c92qq 1/1 Running 0 1h
po/myemp-2587386090-mxx9q 1/1 Running 0 1h
po/zipkin-274mz 1/1 Running 0 1h
NAME DESIRED CURRENT READY AGE
rc/zipkin 1 1 1 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kubernetes 10.96.0.1 <none> 443/TCP 2h
svc/l5d 10.104.38.74 <pending> 4140:30428/TCP,4141:30973/TCP,9990:32462/TCP 4m
svc/mongo 10.99.95.4 <none> 27017/TCP 1h
svc/myemp 10.99.64.252 <none> 80/TCP 1h
svc/zipkin 10.111.128.125 <pending> 80:32418/TCP 1h
svc/zipkin-collector 10.100.55.82 <none> 9410/TCP 1h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/mongo 1 1 1 1 1h
deploy/myemp 1 1 1 1 1h
NAME DESIRED CURRENT READY AGE
rs/mongo-2011101372 1 1 1 1h
rs/myemp-2587386090 1 1 1 1h
Output of the l5d container logs from the pod (root@ip-172-31-29-6:~# oc logs -f po/l5d-f8193 l5d):
I 0716 10:48:18.356 UTC THREAD10: k8s initializing default
E 0716 10:48:18.362 UTC THREAD17: k8s failed to list endpoints
io.buoyant.k8s.Api$UnexpectedResponse: Response("HTTP/1.1 Status(403)"): User "system:serviceaccount:default:default" cannot list endpoints in the namespace "default".
at io.buoyant.k8s.Api$.parse(Api.scala:72)
It looks like services are not discovered by the linkerd pod. After finding linkerd/linkerd#1347 and applying its fix, the error is gone.
Not sure if zipkin requires a cluster role binding?
Please help.
Hi,
I was not able to run the hello world sample on Google Cloud 1.8.7-gke.1. I followed the instructions but got this error message in the l5d log:
The workaround was to give the "default" service account permission to request the endpoints API in the default namespace, and everything works now.
Regards.
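A minimal sketch of the workaround described in the two reports above, granting the default service account read access to endpoints (and the related resources linkerd watches) in the default namespace; the role name is illustrative:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linkerd-endpoints-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints", "services", "pods"]   # "endpoints" is what the 403 complains about
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: linkerd-endpoints-reader
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  kind: Role
  name: linkerd-endpoints-reader
  apiGroup: rbac.authorization.k8s.io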
The test environment introduced in BuoyantIO/emojivoto#41, intended to exercise routing, telemetry, and service discovery lifecycles in Conduit, includes the following:
We should extend this environment to exercise additional dimensions, namely:
The Prometheus Benchmark Grafana dashboard introduced in linkerd/linkerd2#984 should provide better performance analysis of Prometheus in Conduit.
Relates to BuoyantIO/emojivoto#42.
(this issue copied from BuoyantIO/emojivoto#43)
I've followed along with several of the blog posts in the "A Service Mesh for Kubernetes" series. Each time, the LoadBalancer is not created and the IP is pending indefinitely.
Maybe I am missing a service account or the correct permissions on the VMs?
At least for Kubernetes 1.7, one has to specifically request dnsPolicy: ClusterFirstWithHostNet for the linkerd container in the deployment config linkerd-cni.yml if one wants to use zipkin telemetry:
...
telemetry:
- kind: io.l5d.zipkin
host: zipkin-collector.default.svc.cluster.local
port: 9410
sampleRate: 1.0
...
Otherwise, due to the hostNetwork: true pod specifier (for the CNI deployment), dnsPolicy: default is applied, so "cluster" addresses (....svc.cluster.local) are not resolved: the default DNS policy uses the node's host /etc/resolv.conf, which may or may not (as in my case) refer to kube-dns for name resolution; at least this latter kind of resolv.conf setup is what kubeadm produces. Anyway, this clarification should be stated somewhere (https://discourse.linkerd.io/t/flavors-of-kubernetes/53 ?).
P.S. I do not know since which Kubernetes version the ClusterFirstWithHostNet DNS policy has been available.
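For concreteness, a partial sketch of the pod-spec fields that have to appear together in linkerd-cni.yml (only the relevant fields are shown):
spec:
  template:
    spec:
      hostNetwork: true                    # required for the CNI deployment
      dnsPolicy: ClusterFirstWithHostNet   # without this, dnsPolicy "default" applies and
                                           # cluster-local names resolve via the node's
                                           # /etc/resolv.conf, which may not point at kube-dns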
"We should have a version of the hello world app that can be deployed with namerd automatically to Kubernetes."
HelloWorldIdentifierConfig.scala fails to compile.
issue-type: bug
scala version: 2.12.2
JVM: 8
steps to reproduce:
./sbt headerClassifier/assembly
Error log:
[error] /root/linkerd-official/linkerd-examples/plugins/header-classifier/src/main/scala/io/buoyant/http/identifiers/HelloWorldIdentifierConfig.scala:10: class HelloWorldIdentifierConfig needs to be abstract, since method newIdentifier in class HttpIdentifierConfig of type (prefix: com.twitter.finagle.Path, baseDtab: () => com.twitter.finagle.Dtab, routerParams: com.twitter.finagle.Stack.Params)io.buoyant.router.RoutingFactory.Identifier[com.twitter.finagle.http.Request] is not defined
[error] class HelloWorldIdentifierConfig extends HttpIdentifierConfig{
[error] ^
[error] /root/linkerd-official/linkerd-examples/plugins/header-classifier/src/main/scala/io/buoyant/http/identifiers/HelloWorldIdentifierConfig.scala:15: method newIdentifier overrides nothing.
[error] Note: the super classes of class HelloWorldIdentifierConfig contain the following, non final members named newIdentifier:
[error] def newIdentifier(prefix: com.twitter.finagle.Path,baseDtab: () => com.twitter.finagle.Dtab,routerParams: com.twitter.finagle.Stack.Params): io.buoyant.router.RoutingFactory.Identifier[com.twitter.finagle.http.Request]
[error] override def newIdentifier(prefix: Path, baseDtab: () => Dtab): Identifier[Request] = {
[error] ^
[error] two errors found
It would be good if we had configuration files with and without RBAC for Kubernetes.
In addition to the existing simple-proxy and linker-to-linker DC/OS configurations, provide two more example configurations:
Depends on linkerd/linkerd#797.
Running ./sbt headerClassifier/assembly from the plugins directory is failing to compile the HeaderClassifier example.
Compiler error:
[info] Compiling 3 Scala sources and 3 Java sources to ~/IdeaProjects/linkerd-examples/plugins/header-classifier/target/scala-2.11/classes...
[info] 'compiler-interface' not yet compiled for Scala 2.11.7. Compiling...
[info] Compilation completed in 7.915 s
[error] ~/IdeaProjects/linkerd-examples/plugins/header-classifier/src/main/java/io/buoyant/http/classifiers/HeaderClassifierConfig.java:13: io.buoyant.http.classifiers.HeaderClassifierConfig is not abstract and does not override abstract method kind_$eq(java.lang.String) in io.buoyant.config.PolymorphicConfig
[error] public class HeaderClassifierConfig implements ResponseClassifierConfig {
[error]
[error] /* This public member is populated by the json property of the same name. */
[error] public String headerName;
[error]
[error] /**
[error] * Construct the repsonse classifier.
[error] */
[error] @Override
[error] @JsonIgnore
[error] public PartialFunction<ReqRep, ResponseClass> mk() {
[error] String headerName = this.headerName;
[error] if (headerName == null) headerName = "status";
[error] return new HeaderClassifier(headerName);
[error] }
[error] }
[error] (headerClassifier/compile:compileIncremental) javac returned nonzero exit code
Scala: 2.11.7
sbt: 0.13.9
java: 1.8.0-b132
Applying the current k8s/linkerd.yml example on Kubernetes 1.6.4 (master & nodes) results in the following exceptions:
W 0718 14:40:28.865 UTC THREAD31: Exception propagated to the default monitor (upstream address: /10.52.6.74:47528, downstream address: n/a, label: 0.0.0.0/4140).
io.buoyant.router.RoutingFactory$UnknownDst: Unknown destination: Request("PRI *", from /10.52.6.74:47528) / Host header is absent
W 0718 14:40:28.881 UTC THREAD32: Exception propagated to the default monitor (upstream address: /10.52.6.74:47530, downstream address: n/a, label: 0.0.0.0/4140).
io.buoyant.router.RoutingFactory$UnknownDst: Unknown destination: Request("PRI *", from /10.52.6.74:47530) / Host header is absent
[indefinitely(?) stepping through all even port numbers, same IP]
They don't occur on Kubernetes 1.5.2 (master: 1.5.7). Tested both on Google Container Engine.
The request PRI * seems to be an HTTP/2 request (PRI * is the client's HTTP/2 connection preface).
When running linkerd as a daemonset in a CNI environment (such as Calico or Weave), we need to set hostNetwork on the daemonset spec and on the daemonset and localnode transformer configs. We should have example configs demonstrating this.
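A sketch of what the transformer side of this might look like in the linkerd config, assuming the daemonset transformer accepts a hostNetwork option as described (service and port names follow the daemonset examples):
routers:
- protocol: http
  interpreter:
    kind: default
    transformers:
    - kind: io.l5d.k8s.daemonset
      namespace: default
      port: incoming
      service: l5d
      hostNetwork: true   # resolve to node IPs rather than pod IPs
  servers:
  - port: 4140
    ip: 0.0.0.0
The daemonset spec itself would additionally need hostNetwork: true, as in the CNI pod-spec sketch earlier.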
As https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-third-party-resource/ describes, ThirdPartyResource will be removed in 1.8.
ThirdPartyResource is deprecated as of Kubernetes 1.7 and has been removed in version 1.8 in accordance with the deprecation policy for beta features.
To avoid losing data stored in ThirdPartyResources, you must migrate to CustomResourceDefinition before upgrading to Kubernetes 1.8 or higher.
I guess the examples need to be updated to use CustomResourceDefinition. I'm having a go at getting things working with CustomResourceDefinition, so we'll see.
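For reference, a sketch of roughly what the CustomResourceDefinition replacement for namerd's dtab ThirdPartyResource would look like; the group and kind names mirror the old d.l5d.io resource, but the details should be checked against the migrated examples:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: dtabs.l5d.io    # must be <plural>.<group>
spec:
  group: l5d.io
  version: v1alpha1
  scope: Namespaced
  names:
    plural: dtabs
    singular: dtab
    kind: DTab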
The hello world app that we use in the k8s-daemonset examples is kinda crappy. It just runs the built-in Flask webserver, which only serves HTTP/1.0 responses and is not threadsafe. Linkerd (correctly) closes connections when it receives an HTTP/1.0 response, so this makes our examples much less performant.
I'm inclined to rewrite the services in Go, push that to a separate docker image (buoyantio/hello-world-go?), and then update all of our configs to use that image instead.
I have followed your documentation, and it looks like when the service type is LoadBalancer, the external IP remains in the pending state.
kubectl get svc/linkerd
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
linkerd 10.3.0.52 <pending> 4140:31286/TCP,9990:30618/TCP 13m
Is there some alternative to this?
I have deployed a couple of containers into a cluster; one of the pods is a daemonset pod configured with a host port. When I try to connect to the host IP and the host port, it fails. When I manually execute Test-NetConnection from the pod against the host IP and port, it reports that the TCP connection didn't succeed. I'm using Kubernetes v1.11. Are there any other dependencies, like a particular Docker version?
I'm using the following in the daemonset yaml file.
ports:
- name: http
containerPort: 1277
hostPort: 1277
protocol: TCP
Can someone please help me with this?
First reported by @dlaidlaw.
As of Kubernetes 1.7 (kubernetes/kubernetes#42717), we can now retrieve the host IP directly from the downward API via the status.hostIP field, without needing to rely on the spec.nodeName field, which isn't addressable in all Kubernetes environments (including minikube -- #130). We should update our example configs in this repo to start using status.hostIP instead of spec.nodeName, but we also need to figure out a way to ensure backwards compatibility with older versions of Kubernetes, either by versioning the config files or doing something smarter with environment variable fallback.
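Concretely, on 1.7+ the downward API stanza would become something like the following (the NODE_IP variable name is illustrative):
env:
- name: NODE_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP   # available as of Kubernetes 1.7
- name: http_proxy
  value: $(NODE_IP):4140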
Pull the mixer and pilot configs from the istio repo instead of copying them.
We should (maybe) consider updating the servicemesh.yml config to use the new io.l5d.k8s.configMap interpreter, such that dtabs for the running linkerds can be updated without a restart. That might give users more flexibility over configuration.
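For illustration, roughly what that interpreter stanza might look like; the ConfigMap name and key are placeholders, and the exact parameters should be verified against the io.l5d.k8s.configMap docs:
routers:
- protocol: http
  interpreter:
    kind: io.l5d.k8s.configMap
    namespace: default
    name: dtabs      # ConfigMap holding the dtab
    filename: dtab   # key within the ConfigMap
  servers:
  - port: 4140
    ip: 0.0.0.0
Edits to the ConfigMap would then be picked up by running linkerds via the Kubernetes watch API, without a restart.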