kubernetes-retired / service-catalog
Consume services in Kubernetes using the Open Service Broker API
Home Page: https://svc-cat.io
License: Apache License 2.0
We currently use the 2.0 RC, which can now be replaced by the final 2.0 release.
We'll want to do it after #68 is in.
In the F2F meetings in Boulder and Seattle, we made significant progress on the provision/bind workflow. We still need to explore the API flows for unbind. Some quick thoughts:
The current precedent for this type of thing in kubernetes is to delete the resource to release it. For example, to unbind a PV from a PVClaim, you delete the claim. Here's a wrinkle for our specific situation: in the PV/PVC domain, you have something to reconcile against (the PV) if your controller misses the PVC delete. However, in the scenario of a controller missing the delete event for a Binding, there is nothing to reconcile against a priori. So, this seems to indicate that we need to have a way to preserve the Binding resource until we can be sure it is fully deleted and unbound at the broker. The pattern we have for this in Kubernetes is called 'finalization'.
Let's assume you have a Binding B to an Instance I of a ServiceClass S. There are a few permutations to talk through, but I'll keep it simple for now. Imagine the following:
/v2/service_instances/:instance_id/service_bindings/:binding_id
From here it gets a little tricky -- do you allow the delete of B to happen, even if the broker doesn't respond with a 200? Here's where finalization comes in. The REST API for deleting a binding will, instead of fully deleting the object, manipulate its status to indicate that the object is in an 'unbinding' condition. There are a vast number of details involved in finalization, which I will gloss over here for brevity and ease of understanding. Essentially, part of a namespace's spec is the list of remaining finalizers that must run for the namespace to be fully deleted. When the first delete request happens, the namespace's status is updated to reflect the finalizing condition, and various controllers go to work deleting resources and removing themselves from the list of finalizers as they complete their work. There is a namespace controller that handles doing the final delete once the list of finalizers is empty, and the REST API only allows a 'full' delete of a namespace once that list is empty. We can leverage a similar pattern here to allow the following:
When a binding is undone, the resources created from that binding should be deleted. This should be implemented by a controller doing another part of finalization.
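To make the finalization idea concrete, here is a minimal sketch in Go of what the controller-side flow could look like. The Binding type, finalizer name, and function names below are illustrative stand-ins, not the project's actual API:

```go
package controller

import (
	"fmt"
	"time"
)

// Binding is a stand-in for the real resource; only the fields needed
// to illustrate finalization are shown.
type Binding struct {
	Name              string
	DeletionTimestamp *time.Time // non-nil once a delete has been requested
	Finalizers        []string   // e.g. ["servicecatalog/unbind"]
}

const unbindFinalizer = "servicecatalog/unbind"

// reconcileBinding shows the shape of the flow: on delete, unbind at the
// broker first, then drop the finalizer so the API server can fully
// remove the object.
func reconcileBinding(b *Binding, unbindAtBroker func(name string) error) error {
	if b.DeletionTimestamp == nil {
		return nil // not being deleted; nothing to finalize
	}
	if !hasFinalizer(b, unbindFinalizer) {
		return nil // our finalization work is already done
	}
	if err := unbindAtBroker(b.Name); err != nil {
		return fmt.Errorf("unbind failed, will retry: %v", err)
	}
	removeFinalizer(b, unbindFinalizer)
	// In a real controller we would now update the object via the API
	// server; once the finalizer list is empty it can be fully deleted.
	return nil
}

func hasFinalizer(b *Binding, f string) bool {
	for _, x := range b.Finalizers {
		if x == f {
			return true
		}
	}
	return false
}

func removeFinalizer(b *Binding, f string) {
	kept := b.Finalizers[:0]
	for _, x := range b.Finalizers {
		if x != f {
			kept = append(kept, x)
		}
	}
	b.Finalizers = kept
}
```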
Per discussion in #52, this predates the watch on the k8s TPRs of interest to the controller, isn't strictly needed, and should probably be eliminated.
However... I would propose that we do implement a single /healthz endpoint to facilitate HTTP-based health checks.
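For illustration only, a bare-bones /healthz handler in Go could look like this (the port and wiring are assumptions, not the project's actual server code):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Respond 200 OK on /healthz so HTTP-based health checks (e.g. a
	// Kubernetes liveness probe) can verify the process is serving.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})
	http.ListenAndServe(":8080", nil)
}
```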
The controller does not currently set status when managing objects.
This depends on #127.
In #86 a client was introduced for the CF SB API v2.x. Now, we need a fake client and some tests of the existing code to exercise it.
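As a rough sketch of the fake-client idea (the interface and field names below are hypothetical, not the actual client from #86):

```go
package broker

// BrokerClient is a stand-in for the interface introduced in #86; the
// real interface has more methods and different signatures.
type BrokerClient interface {
	CreateServiceInstance(id string) error
}

// FakeBrokerClient records calls so tests can assert on controller
// behavior without talking to a real broker.
type FakeBrokerClient struct {
	CreatedInstances []string
	Err              error // error to return, for failure-path tests
}

func (f *FakeBrokerClient) CreateServiceInstance(id string) error {
	f.CreatedInstances = append(f.CreatedInstances, id)
	return f.Err
}
```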
We need someone to volunteer to set up and tend a basic CI setup for the service catalog. We will undoubtedly increase complexity of the CI setup as we go, and can graduate to more sophisticated setups, but travisci is probably sufficient for the next couple weeks.
Any takers?
@kubernetes-incubator/maintainers-service-catalog
The controller/server package seems to implement the CF service broker API. Our (Deis') understanding is that the controller's responsibility is only to watch k8s resources and reify based on events affecting those resources. This obviously does require the controller to act as a client in its interaction with brokers that do implement the CF service broker API, but it's not clear why the controller itself implements a server listening on the endpoints described by that API. Can anyone clarify the reasoning behind this? What is the client that we anticipate being on the other end of such an interaction?
This is a compound issue. It includes the following TODO items, preferably done in this order:
It's been tough trying to get through the walkthrough on minikube. (And because I am working locally, I really only want to push built images to a local registry.) I think the difficulty I'm encountering exposes a fault: the build, push, and (helm) install steps are too tightly coupled to an (unfair) expectation that Google services (GCR in particular) are being used.
I would propose that we abandon such assumptions and use a more generic option that allows users to simply specify a Docker registry URL without any assumption that it is GCR.
This directory seems to include helm and glide binaries, a README, etc. for both Linux/64 and Darwin/64. I can't think of any good reason to package these with this repository.
As discussed in the 20161024 SIG meeting, we should finalize the use cases for the v1 milestone of service catalog.
Use cases articulated during the meeting:
This is a follow up to some discussion in #110
I'd like to propose that test-wise, we have too many moving pieces and that one of those pieces comes with some undesirable baggage.
First the case for why there's one moving part too many:
To power e2e tests, it seems the current approach is that a client program (written in Python) drives traffic to the bookstore app (a Node app, as far as I can tell). That bookstore app depends on a service provisioned and bound via use of our APIs and the service catalog controller. So the test program (client) is testing connectivity and usability of the provisioned / bound service indirectly. Why so much indirection? The bookstore app is a middleman that can be removed if the client / test program itself is executed within k8s and is itself the direct consumer of the service in question.
To make this suggestion more concrete, imagine that the test program is just a k8s job that we load into a cluster. It connects to a database, does some writes, does some reads, then disconnects. We can watch that job for completion and regardless of success or failure, capture its logs.
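To sketch what such a test program might look like (the database driver, environment variable, and table name below are assumptions for illustration, not an agreed-upon design):

```go
// Sketch of the proposed in-cluster test program, intended to run as a
// Kubernetes Job.
package main

import (
	"database/sql"
	"log"
	"os"

	_ "github.com/lib/pq" // hypothetical driver choice for a Postgres-backed service
)

func main() {
	// Credentials injected from the secret created by the binding.
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	defer db.Close()

	// A write followed by a read is enough to prove the provisioned,
	// bound service is reachable and usable.
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS smoke (v text)`); err != nil {
		log.Fatalf("write: %v", err)
	}
	if _, err := db.Exec(`INSERT INTO smoke (v) VALUES ('ok')`); err != nil {
		log.Fatalf("write: %v", err)
	}
	var v string
	if err := db.QueryRow(`SELECT v FROM smoke LIMIT 1`).Scan(&v); err != nil {
		log.Fatalf("read: %v", err)
	}
	log.Printf("read back %q; test passed", v)
}
```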
Then the case against bookstore baggage... even if it's a very small application, it is essentially a distraction:
The bookstore app implements use cases that aren't specifically part of the service catalog problem domain. Service catalog is concerned with provisioning and binding services... building an example that does anything of significance with such a service seems like it should be out of scope (and could be a potential maintenance burden).
To be fair, I recognize there is a counter-argument to this where using a "real" app as a consumer is a more compelling demo, but I don't think that's a good argument for the source code of said "real app" to necessarily live in this repository. If we wanted to provide a truly compelling demo (entirely separate from the optimized test app I have described above), we could instead create a chart that pairs some existing, popular application like Drupal or Wordpress with a database provisioned and bound through our APIs and the service catalog controller.
Node isn't necessarily a competency for developers working in this repo. There's already been some discussion (see #103) about limiting the number of languages a contributor needs to know. Don't get hung up on this tho... this is minor in comparison to the previous point.
The walkthrough neglects to document the need for glide install to be run prior to building the project.
Currently we are using the prototype object model in model/service_controller. We should migrate to the object model defined by our API.
...as agreed to in the SIG:
brokers directory will move to contrib/brokers
examples directory will move to contrib/examples
We should document all the resources like k8s does, for example Pods:
http://kubernetes.io/docs/api-reference/v1/definitions/#_v1_pod
After #139 and similar PRs, jenkins scripts are "all over the place". This issue is for standardizing a location for them and making the change to move them to the right directory.
To approve a PR, 3 committers must LGTM it. We should create PR labels so that committers can simply label a PR if they have LGTM-ed it. Additionally, the labels should accommodate the veto policy.
The cfV2BrokerClient has methods that make calls to CF service broker APIs. We should use ctxhttp to ensure that a call that hangs for any reason (no connection timeout, etc...) is cancelled. ctxhttp is a very convenient mechanism for doing these cancellations.
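A minimal sketch of what a cancellable broker call could look like with ctxhttp (the URL, timeout, and function name are illustrative, not the client's actual code):

```go
package broker

import (
	"context"
	"net/http"
	"time"

	"golang.org/x/net/context/ctxhttp"
)

// getCatalogStatus fetches the broker catalog with a deadline. If the
// broker hangs, the context deadline cancels the request instead of
// leaving the call blocked indefinitely.
func getCatalogStatus(client *http.Client, brokerURL string) (int, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	resp, err := ctxhttp.Get(ctx, client, brokerURL+"/v2/catalog")
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return resp.StatusCode, nil
}
```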
Currently it is in ./deploy/catalog
Includes
This issue will be a bucket to hold use cases focused on issues service producers are facing. After we have a solid set we can organize them into a document and review in a sig meeting.
After #63, we'll have tests running with code coverage, but we won't (yet) have a place to send that coverage data or surface it to developers. If we get that, it will allow developers to make measurable gains on test coverage in each PR.
We've used codecov.io successfully in the past. Here is an example for how to send coverage data up to the service.
We have not yet made an agreement as a group about whether service plans are an API resource or a type subordinate to ServiceClass. I personally do not think that plans warrant a resource at this point -- they are more of an attribute of service classes than they are a first-level thing in the API. Additionally, I do not think there is enough information about them to warrant an API resource. Finally, it is MUCH easier to add a resource later than it is to subtract one.
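To illustrate the "subordinate type" option, a sketch of what this could look like (field names are placeholders, not an agreed-upon schema):

```go
package servicecatalog

// ServiceClass carries its plans inline rather than referencing a
// separate, top-level ServicePlan resource.
type ServiceClass struct {
	Name  string
	Plans []ServicePlan // plans live inside the class, not as their own resource
}

// ServicePlan is a plain struct, not an API resource.
type ServicePlan struct {
	Name        string
	Description string
	Free        bool
}
```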
Forking from #39
We need to establish a consensus for the API mechanics for deprovision. I think this can be similar to the mechanics described in #39. This would go something like:
Delete the Instance
Finalize and delete the Binding instances associated with the Instance
Fully delete the Instance once the Bindings are ultimately deleted
@arschles asked about whether deprovision should mean that bindings are deleted. I do think deletes should cascade -- if an instance has been deprovisioned, a binding against it has no meaning.
This needs to be implemented in the BrokerClient, and also needs to be plumbed through in the Controller where it currently just routes to CreateServiceInstance.
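A rough sketch of the kind of deprovision call that would need to be added (the type, field, and method names below are assumptions; the endpoint follows the CF service broker API):

```go
package broker

import (
	"fmt"
	"net/http"
)

// brokerClient is a stand-in for the real cfV2BrokerClient; the field
// names here are assumptions for illustration.
type brokerClient struct {
	brokerURL  string
	httpClient *http.Client
}

// DeleteServiceInstance sketches the deprovision call: a DELETE against
// the broker's service_instances endpoint, distinct from the create
// path the Controller currently routes everything to.
func (c *brokerClient) DeleteServiceInstance(instanceID string) error {
	url := fmt.Sprintf("%s/v2/service_instances/%s", c.brokerURL, instanceID)
	req, err := http.NewRequest("DELETE", url, nil)
	if err != nil {
		return err
	}
	resp, err := c.httpClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusGone {
		return fmt.Errorf("deprovision failed: %s", resp.Status)
	}
	return nil
}
```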
User-provided services in Cloud Foundry are a way to share services which do not appear in the CF marketplace. Applications can bind to user-provided services and receive credentials as they would with ordinary broker based services, with the caveat that the credentials are the same for every app that binds to the service instance.
I think there is a strong need for the Kubernetes service catalog to implement this notion, but I have another use case to add:
I have ideas on how to do this that need to be thought through more carefully before I share them; I wanted to get the idea out there for now.
The BrokerClient does not implement the complete service broker API spec. It needs to be fleshed out to align with the full spec.
Currently, we have glide files, but the code dependencies (in the vendor directory) are not checked in. To bring this repository closer in structure and style to kubernetes/kubernetes, I propose checking in the vendor directory alongside the glide files.
We have addressed this concern piecemeal in a couple different places, but I think it is advantageous to open a high-level issue for it and explain the rationale in detail.
I propose that we make the service-catalog repo, to the closest possible degree, similar to the main kubernetes repo. This includes things like project layout, development tooling, dependency management, and other facets of developer experience.
My rationale for this is:
With that said, I realize that this is a big ask from the group. I have tried to make a good case for the ask on the merits, but I also want to acknowledge the fact that it is an ask. Some reasons why it is an ask:
Let me call out special attention to (3) and emphasize this point: basically, I am asking the group to trust me when I say this way is the best one for the long-term. Asking for trust on matters like this is something that I do not do lightly. I understand that this is new territory for all of us -- myself included, and I want to ensure that it is known that I value and appreciate others' opinions on matters of project structure and developer experience.
Let's discuss this in a SIG meeting, no matter what discussion happens in github. Thanks for reading.
Currently the helm reifier execs to the helm binary to get the job done. @martinmaly and I discussed a bit and agreed it wasn't ideal. Tiller uses GRPC, so that presents an easy(ish), low-friction option for improving that integration.
Currently the service controller model has instance-to-instance bindings, so the consumption mechanism needs to be moved to native kubernetes resources as outlined in the catalog discussions.
Needs the following:
The Helm chart found in the deployments dir configures namespaces through --set or a values.yaml. This is not how Helm charts are recommended to be written. The recommendation is that namespace: be entirely omitted from the metadata for each resource. Operators should specify a namespace when installing a chart by using the --namespace flag.
This directory structure follows what kubernetes/kubernetes does. We may have other directories in the future that have Go code in them, but this issue starts with moving code just to pkg
cc/ @pmorie
Functions in storage/storage_util.go
Given that registry is a dynamic registry of service classes which underpins the broker implementation, we probably want to move it next to the brokers, into the contrib/ directory.
As has been pointed out (tangentially) in a few other threads, I don't think we need to be checking bytecode into the repo. Let's remove .pyc files.
Acceptance criteria for this issue:
The controller sets the status.state fields of relevant Kubernetes resources when it receives an asynchronous response.
After #84, there will be an API server running that implements a /healthz endpoint which simply returns a 200 OK to the client. That endpoint should do something "real" and return the actual status of the process.
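As an illustration of what a "real" /healthz could do, here is a sketch that aggregates named checks and fails the probe when any check fails (the individual checks are placeholders, not the actual subsystems of this process):

```go
package main

import (
	"fmt"
	"net/http"
)

// healthChecks maps a check name to a function that returns an error
// when that subsystem is unhealthy (e.g. storage reachable, broker
// relisting loop running). The checks listed here are hypothetical.
var healthChecks = map[string]func() error{
	"storage": func() error { return nil },
}

func healthz(w http.ResponseWriter, r *http.Request) {
	for name, check := range healthChecks {
		if err := check(); err != nil {
			http.Error(w, fmt.Sprintf("%s: %v", name, err), http.StatusServiceUnavailable)
			return
		}
	}
	fmt.Fprintln(w, "ok")
}

func main() {
	http.HandleFunc("/healthz", healthz)
	http.ListenAndServe(":8080", nil)
}
```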
Currently the *ServiceInstance operations in the BrokerClient just return a ServiceInstance, but should return the actual response object that a broker returns.
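For reference, the provisioning response a v2 broker returns could be modeled roughly like this (a sketch following the CF service broker API field names, not the project's actual types):

```go
package broker

// CreateServiceInstanceResponse sketches the body a v2 broker returns
// from a provision call.
type CreateServiceInstanceResponse struct {
	DashboardURL string `json:"dashboard_url,omitempty"`
	// Operation is returned by brokers that provision asynchronously.
	Operation string `json:"operation,omitempty"`
}
```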
Currently there appears to be a Makefile for each go package in the project. This is not really sustainable and adds overhead to refactors. Packages should be buildable with go build, and there should be a single make target (in kubernetes, we call this make all) that builds all the go packages in the project.
As a result of the initial seed code evolution, the catalog component is named 'controller' rather than catalog. It should be renamed to match the terminology used by the SIG.
Proposal:
*.md files as well.
This replaces #67.
The docker (Docker build) and push (Docker push) make targets currently require either a GCP project ID or a GCR repository URL to be provided. The docker target uses this information when tagging images that are subsequently pushed by the push target. I propose that this bias toward GCR is a hindrance for anyone wishing to use another Docker registry -- Docker Hub, quay.io, a local registry, or other.
Why this is important: the beauty of Docker and k8s is that they run on almost any platform, yet give you (more or less) a uniform experience across those many platforms. Expressing a bias toward one vendor in particular harms the developer experience by raising the barrier to entry for project contribution, since it requires familiarity with and a budget for the vendor in question. What if I have no money and want to push images to a local registry that my minikube can pull from?
What I'd like to propose is we adopt something similar (not necessarily identical) to how we have long managed this issue at Deis. At Deis, all our build processes respect an environment variable DEIS_REGISTRY. When building and tagging images, the image's unqualified name (a component name like "router" or "controller") is prefixed with the value of the DEIS_REGISTRY environment variable so that a subsequent push uploads the image to the user-specified location.
I would further propose that we maximize the simplicity of this equation by not building in any specific support for authenticating to registries. I propose instead that we simply document the requirement that anyone using the push target must already be authenticated to whatever registry it is they intend to push to.
Some options discussed in #120 included:
/test/jenkins
/hack/jenkins
/ci/jenkins
/jenkins
/test/ci/jenkins
At the offsite, we discussed how to make service graphs. We have almost everything we need in the current design except a way to map instances to instances. Namespaces/label-selectors bind to instances so if we want to figure out the instance to instance binding, we need to have the instance to namespace/label-selector mapping so that we can traverse the chain (permissions allowing).
This is probably just a matter of storing one more piece of metadata somewhere on the binding operation. This issue is to figure out the best implementation and to add that to the design.