projectcontour / contour
Contour is a Kubernetes ingress controller using Envoy proxy.
Home Page: https://projectcontour.io
License: Apache License 2.0
This is mainly intended as a debugging aid.
When running Contour locally, outside the cluster, it would be useful to have a gRPC client to interrogate Contour without having to deploy the pod to Kubernetes and divine its workings via Envoy's debug logs.
At the moment Contour configures Envoy to accept HTTP from upstream connections (either an ELB, or an NLB in pass-through mode). However, on the backend, traffic from Envoy to pods is always HTTP.
There is no reason this has to be the case (each port on a Service has its own Envoy cluster stanza, which can hold the relevant connection parameters), but there is no established mechanism on the Service object to say what protocol to speak on a given port.
This issue tracks two things
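As an aside, one hypothetical way to express a per-port protocol hint would be an annotation on the Service; the annotation key and value below are invented for illustration and are not an existing Contour feature:
apiVersion: v1
kind: Service
metadata:
  name: grpc-backend
  annotations:
    # Hypothetical annotation, shown only to illustrate the idea:
    # tell the ingress controller to speak HTTP/2 to the port named "grpc".
    contour.example/upstream-protocol.grpc: h2
spec:
  selector:
    app: grpc-backend
  ports:
  - name: grpc
    port: 8080
    targetPort: 8080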
Maybe we should add a FAQ document to address common questions such as the difference between Contour and Istio.
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  timeoutSeconds: 1
Something like this; it requires #10.
It seems I ran into this issue; after the cluster was up, this was the first thing I tried. Let me know what you need to help me debug this.
$ kubectl get nodes
NAME                                         STATUS    ROLES     AGE       VERSION
gke-gke-cluster-default-pool-97bc52ee-076l   Ready     <none>    19h       v1.8.1-gke.1
gke-gke-cluster-default-pool-97bc52ee-b0wh   Ready     <none>    19h       v1.8.1-gke.1
gke-gke-cluster-default-pool-97bc52ee-g42r   Ready     <none>    19h       v1.8.1-gke.1
$ kubectl apply -f http://j.hept.io/contour-deployment-rbac
namespace "heptio-contour" created
serviceaccount "contour" created
deployment "contour" created
clusterrolebinding "contour" created
service "contour" created
Error from server (Forbidden): error when creating "http://j.hept.io/contour-deployment-rbac": clusterroles.rbac.authorization.k8s.io "contour" is forbidden: attempt to grant extra privileges: [PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["configmaps"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["endpoints"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["pods"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["secrets"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["nodes"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["get"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["list"]} PolicyRule{Resources:["services"], APIGroups:[""], Verbs:["watch"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["get"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["list"]} PolicyRule{Resources:["ingresses"], APIGroups:["extensions"], Verbs:["watch"]}] user=&{[email protected] [system:authenticated] map[]} ownerrules=[PolicyRule{Resources:["selfsubjectaccessreviews"], APIGroups:["authorization.k8s.io"], Verbs:["create"]} PolicyRule{NonResourceURLs:["/api" "/api/*" "/apis" "/apis/*" "/healthz" "/swaggerapi" "/swaggerapi/*" "/version"], Verbs:["get"]}] ruleResolutionErrors=[]
$ kubectl delete -f http://j.hept.io/contour-deployment-rbac
namespace "heptio-contour" deleted
serviceaccount "contour" deleted
deployment "contour" deleted
clusterrolebinding "contour" deleted
service "contour" deleted
Error from server (NotFound): error when deleting "http://j.hept.io/contour-deployment-rbac": clusterroles.rbac.authorization.k8s.io "contour" not found
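For reference, this Forbidden error on GKE usually means the account running kubectl does not itself hold the privileges it is trying to grant. The usual workaround is to bind your Google account to the cluster-admin role before applying the Contour RBAC manifest; a minimal sketch, where the subject name is a placeholder for your own account:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: owner-cluster-admin-binding
subjects:
- kind: User
  # Placeholder: the account you authenticate to the cluster with.
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io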
@timothysc suggested that watching all endpoints is expensive in high scale systems.
Evaluate the cost of this: does it put load on the API server, or on Contour?
Either way, the likely result is that we will have to filter the endpoints we watch down to the set of endpoints referenced by Services that are referenced by Ingresses.
I would like to be able to bind contour/envoy on a port other than 8080 for daemonset deployments.
Hi,
I am trying to use Contour in conjunction with Ingress objects for host-based routing to backend services. Here is an example of the spec section from the Ingress definition:
"spec": {
"rules": [
{
"host": "my-very-very-long-service-host-name.my.domainname",
"http": {
"paths": [
{
"backend": {
"serviceName": "my-service-name",
"servicePort": 8088
}
}
]
}
}
]
}
Contour picks it up but throws an error because the generated virtual host name turns out to be too long:
[2017-11-01 14:22:54.787][1][warning][upstream] source/common/router/rds_subscription.cc:65] rds: fetch failure: Invalid virtual host name: Length of default/my-service-name/my-very-very-long-service-host-name.my.domainname (73) exceeds allowed maximum length (60)
Unfortunately, I am unable to change the enforced naming convention to shorten the host names very quickly. Is there a workaround or a plan to make this configurable? What would the considerations be if the virtual host could be overridden through an annotation? Thanks!
Add basic benchmarks for insert, delete, and iteration for the three watcher caches (maybe one is sufficient).
Benchmark sizes should be 1, 10, 100, and 10,000.
This will inform #5 and possibly inform a move from using a map to something like xtgo's sorted set implementation.
This is a follow up from our slack conversation some time ago. I'll outline what would be involved in this work and how people would use it.
A ksonnet library is collected into a set of parts that can be used to configure and deploy an application on Kubernetes for many different scenarios. For example, in redis, we expose parts.networkPolicy, parts.deployment, parts.secret (for storing the password, if we need one), parts.pvc (if we wish to deploy redis backed by a persistent volume), and so on.
We expose a set of pre-fabricated combinations of these parts using prototypes, which can be used with ks generate. For example, the redis library exposes the following prototypes:
io.ksonnet.pkg.redis-all-features
io.ksonnet.pkg.redis-persistent
io.ksonnet.pkg.redis-stateless
Each of these uses a subset of the parts exposed by the library. For example, io.ksonnet.pkg.redis-stateless uses a deployment and a secret, but does not use a PVC. Using the ks tool to generate a fully-formed and functional manifest is as easy as:
ks generate io.ksonnet.pkg.redis-stateless lovely-cache --name redis
This would generate a file that contains (roughly) the following:
{
  kind: "List",
  apiVersion: "v1",
  items: [
    redis.parts.deployment.nonPersistent(namespace, name, name),
    redis.parts.secret(namespace, name, redisPassword),
    redis.parts.svc.metricDisabled(namespace, name),
  ],
}
The ultimate goal is that getting started for Contour users should be a couple of commands:
$ PATH=$PATH:$GOPATH/bin
$ go get github.com/ksonnet/ksonnet
$ ks init simple-contour && cd simple-contour
$ ks generate com.heptio.pkg.simple-contour contour --name simple-contour [... other flags here ...]
To accomplish this, I think we can follow basically the same pattern here: collect the objects currently in the deployment/ folder (e.g., the DaemonSet, Deployment, NetworkPolicy, and so on) into a contour.libsonnet that exposes them as .parts, in place of the raw yaml folder.
Following this, and out of scope of this issue, is to talk about how we can make it very easy for people to write new apps against the Contour ingress controller. I think it's possible to have a good story there, but I don't think it's purely a question of templating, so we should defer this discussion for later.
The Ingress spec permits the routes for a vhost to be defined across several Ingress objects. This could be in the same, or across, namespaces.
Currently Contour does not handle this, generating duplicate vhost definitions which Envoy will therefore reject.
This is needed for #78 because cert managers like jetstack/kube-lego expect to be able to inject a route onto an Ingress from a different namespace.
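For illustration, two Ingress objects like the following (all names and namespaces are made up for this example) both contribute routes to the same vhost, example.com, and should be merged into one virtual host rather than producing duplicate definitions:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-a
  namespace: team-a
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /a
        backend:
          serviceName: app-a
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-b
  namespace: team-b
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /b
        backend:
          serviceName: app-b
          servicePort: 80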
This Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: kuard
  name: kuard
  namespace: default
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: kuard
          servicePort: 80
will register the route on ingress_https even though it has no hostname or TLS stanza. This should not happen.
Envoy provides a collection of features that are beyond host/URI routing formalized in the Ingress resources (advanced load balancing, fault injection, custom filters, etc). Do you have any plans or thoughts how to expose those features in the ingress controller?
Contour should be defining probes so that traffic isn't sent to an instance until it is ready. Similarly, this should support a more graceful drain mechanism when downscaling.
The way I thought about doing this was injecting a default vhost into Envoy, pointing to Contour, that would provide a /healthz endpoint. The idea is that hitting the node via its IP or internal host name (as the ELB does) would not return healthy until both Envoy and Contour were running.
Support the ingress.kubernetes.io/rewrite-target annotation.
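A sketch of how an Ingress using this annotation might look; the names are illustrative and Contour does not implement the annotation yet:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    # Rewrite the matched path prefix to / before proxying to the backend.
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /app
        backend:
          serviceName: app
          servicePort: 80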
Add support for ingress.kubernetes.io/whitelist-source-range
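For reference, the annotation (as used by other ingress controllers) takes a comma-separated list of source CIDRs; the values below are examples only:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whitelist-example
  annotations:
    # Only allow requests originating from these CIDR ranges (example values).
    ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.0.0/16"
spec:
  backend:
    serviceName: app
    servicePort: 80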
Hello,
I have a use case where I need multiple hostnames routed to the same backend service, example:
http://foo.webrelay.io -> relayServiceName:9400
http://bar.webrelay.io -> relayServiceName:9400
https://foobar.webrelay.io -> relayServiceName:9500
https://foobarfoo.webrelay.io -> relayServiceName:9500
This list could be expanded and contracted dynamically, therefore wildcard hostnames would be great. Ingress example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: relay-ingress
spec:
  rules:
  - host: '*.webrelay.io'
    http:
      paths:
      - backend:
          serviceName: whr
          servicePort: 9400
        path: /
This is a high level tracking issue for promoting gRPC support to GA. Specifically
ERROR: logging before flag.Parse: W1218 11:31:33.776933 73199 reflector.go:341] github.com/heptio/contour/internal/k8s/watcher.go:61: watch of *v1.Endpoints ended with: too old resource version: 8605078 (8606302)
cmd/contour uses kingpin, not flag, but I suspect the glog infection via client-go requires flag.Parse to be called unconditionally.
It was outlined in the Roadmap that Ingress Status updates were coming, just adding an issue for tracking.
In addition to the example given, there are several projects that rely on this information being present. Of particular interest to us would be External-DNS, which utilizes the status to determine the appropriate destination for the relevant DNS entries.
Blocked:
Add some basic benchmarks for CDS, SDS, and RDS for 1, 10, 100, and 10,000 cache entries.
The key things I want to see are the processing time and the bytes allocated (JSON is expensive).
We need this information to judge the cost of making v1 REST a wrapper around v2 gRPC.Fetch.
Support, probably via an annotation, an Ingress object which exists to redirect www.example.com to example.com (or vice versa)
In YAML, this might look like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.host-rewrite: "example.com"
spec:
  rules:
  - host: www.example.com
    backend:
      serviceName: foo # ignored
      servicePort: http # ignored
How does Contour differ from the Istio Ingress Controller?
Add support for the following annotations:
ingress.kubernetes.io/auth-type
ingress.kubernetes.io/auth-secret
ingress.kubernetes.io/auth-realm
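For reference, other ingress controllers use these annotations together roughly as follows; the secret name and realm below are example values:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: auth-example
  annotations:
    ingress.kubernetes.io/auth-type: basic
    # Name of a Secret holding the htpasswd-style credentials (example name).
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  backend:
    serviceName: app
    servicePort: 80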
Hi,
I'm trying Contour (and Envoy too) in my Kubernetes cluster, but I'm not able to make it work because Envoy binds to IPv4 only in the pod.
It would be awesome if all services could bind to IPv6 too, by default.
/ # netstat -lptn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:8000          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:8001          0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/envoy
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      1/envoy
Looking at the code, it seems it needs a small change in internal/contour/json.go (l147):
Listeners: []envoy.Listener{{
Name: "ingress_http",
- Address: "tcp://0.0.0.0:8080", // TODO(dfc) should come from pod.hostIP
+ Address: "tcp://[::]:8080", // TODO(dfc) should come from pod.hostIP
Filters: []envoy.Filter{
What do you think about that?
Contour is currently deployed as a daemonset on each node, with the bootstrap configuration provided by Contour in a shared volume mount. The configuration is written via an init container, which is Contour with the -initconfig flag.
There is no fixed requirement for this when deploying Contour; this information could be provided by a ConfigMap, or perhaps a PV, noting that hot reloading of configuration via the filesystem is not the recommended use case for Contour.
This issue exists mainly to document the use of -initconfig and the options for providing the bootstrap configuration in other ways.
Add support for adding / removing headers at the Ingress, namespace, and default level
Add support for the proxy protocol via the service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" annotation on the Service.
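For illustration, the annotation goes on the Service fronting Envoy and asks the AWS ELB to speak the PROXY protocol to all backend ports; the service name and ports here are examples:
apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: heptio-contour
  annotations:
    # Ask the AWS ELB to use the PROXY protocol when talking to Envoy.
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: contour
  ports:
  - name: http
    port: 80
    targetPort: 8080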
Right now Envoy is locked to port 8080 and Contour to 8000. This is fine with pod networking but may not be great with host networking.
The README mentions that SSL/TLS support is a work in progress. SNI has landed in Envoy now (envoyproxy/envoy#95).
I'm wondering what the plan / timeline is for using that SNI support in Contour? I have some projects blocked on this support, and if it's not coming fairly soon I was wondering if it's an open project I could take on.
Running on K8s 1.8.4 + RBAC through OpenStack, LoadBalancer support (I think..), I re-deployed Contour just now to make sure I was on a new version of Contour too. Testing out a simple rule.
This Ingress definition works, confirmed with curl -i http://<LB IP>:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello
  labels:
    app: hello
spec:
  backend:
    serviceName: hello
    servicePort: 80
This however does not, giving instead a 404 when I curl -i http://<LB IP>/hello:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello
  labels:
    app: nelson
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: hello
          servicePort: 80
Here are the Deployment and Service objects I'm using:
kind: Deployment
apiVersion: apps/v1beta1
metadata:
  labels:
    app: hello
    version: v1
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
        version: v1
      name: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/hightowerlabs/server:0.0.1
        imagePullPolicy: Always
        args:
        - "-name=istio-test-v1"
apiVersion: v1
kind: Service
metadata:
  name: hello
  labels:
    app: hello
spec:
  ports:
  - port: 80
    name: http
    targetPort: 80
    protocol: TCP
  selector:
    app: hello
  type: ClusterIP
Tracking issue to identify any issues related to running Contour on arm64, and to report back successes.
GCP switched to gogo for gRPC; watch for that on the upgrade.
From #22
kubectl apply is arbitrary code execution as your user on a cluster. As such, the URLs should not be trivially MITM-able to serve malicious code on, e.g., coffee shop wifi.
We should find a solution for serving the short links over https.
Add support for ingress.kubernetes.io/limit-whitelist
Hello, it would be good to know whether Contour supports ssl passthrough and if it doesn't - whether it would be possible/reasonable to add it.
The NGINX ingress controller has the ingress.kubernetes.io/ssl-passthrough: "true" annotation; it might make sense to keep the format the same to make it easier to switch between ingresses.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: contour
    ingress.kubernetes.io/ssl-passthrough: "true"
  name: relay-ingress
  namespace: default
Add support for ingress.kubernetes.io/limit-rpm
Add support for ingress.kubernetes.io/force-ssl-redirect per Ingress, which will require creating a redirect route in the HTTP listener. This will need LDS to become a proper cache.
@ncdc recommends that we raise the current shared informer resync interval from 30 minutes to 12 or 24 hours, or perhaps disable it completely.
sw := cache.NewSharedInformer(lw, objType, 30*time.Minute)
Starting in Contour 0.3, Envoy will only open a listening socket if there is something to listen for. For example, if none of the ingress objects in your cluster that are visible to Contour (the ingress class can exclude ingress objects from Contour) use TLS, Envoy will not open the HTTPS listening socket.
Possibly more surprisingly, if there are no ingress objects visible to Contour at all (maybe because they are all assigned to a different ingress class, or because this is a fresh cluster), Envoy will not open any listening sockets. This can be confusing for admins who are expecting Envoy to open a listening socket before there is anything to listen for.
We probably need to document this in a troubleshooting page.
Updates #48
/cc @Bradamant3
Possibly support the ingress.kubernetes.io/configuration-snippet annotation.
Most of the configuration for Envoy has a JSON or YAML representation, so in theory a snippet could be overlaid onto the configuration passed back to Envoy in the caches. In practice this sounds quite hard to do (merging, then retaining the configuration in the face of dynamic updates from k8s), but it's still an interesting prospect.
Contour currently supports the existing v1 polling APIs. This is a meta issue to track support for the v2 API.
See the design document for more information.
Hi,
Currently my ELB is configured to handle the SSL stuff, and it's working fine, but I want to enforce HTTPS when hitting the URL. It would be awesome if Contour could support ingress.kubernetes.io/force-ssl-redirect: "true" like NGINX does; see here.
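For illustration, this is how the requested annotation is used with the NGINX ingress controller; the Ingress and Service names are examples:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: secure-app
  annotations:
    # Redirect plain HTTP requests to HTTPS.
    ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  backend:
    serviceName: app
    servicePort: 80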
The Envoy image has moved; the new location is
https://hub.docker.com/r/envoyproxy/envoy-alpine/
We should make this change before 0.2
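A minimal sketch of the corresponding change in the Envoy container spec; the tag is a placeholder:
containers:
- name: envoy
  # New image location; pin to an appropriate tag rather than this placeholder.
  image: envoyproxy/envoy-alpine:latest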
Hello,
We recently were trying out Contour on our 1.7.7 k8s dev cluster. We needed to upgrade the cluster to 1.8.4 in order to resolve a bug. The cluster itself was deployed using kops, so it was upgraded using this tool. The upgrade went fine, but we noticed that the Contour deployment was trying to run on a node that no longer existed:
node 'ip-172-20-95-73.ec2.internal' not found
I deleted the deployment, service, namespace, service account, cluster role, and cluster role binding. I tried redeploying, and I still keep getting the same error. Any thoughts on how to resolve this? I was using the following to deploy Contour:
kubectl apply -f https://j.hept.io/contour-deployment-rbac
Please let me know if you require further info. TIA!