
service-catalog's People

Contributors

arschles, artmello, ash2k, bmelville, carolynvs, derekwaynecarr, directxman12, duglin, jberkhahn, jeremyrickard, jhvhs, jichenjc, jpeeler, kibbles-n-bytes, krancour, luksa, martinmaly, mhbauer, mooncak, mszostok, n3wscott, nilebox, piotrmiskiewicz, pmorie, polskikiel, richardfung, shouhong, staebler, taragu, wayilau

service-catalog's Issues

Create consensus on unbind

In the F2F meetings in Boulder and Seattle, we made significant progress on the provision/bind workflow. We still need to explore the API flows for unbind. Some quick thoughts:

Indicating intention to unbind

The current precedent for this type of thing in kubernetes is to delete the resource to release it. For example, to unbind a PV from a PVClaim, you delete the claim. Here's a wrinkle for our specific situation: in the PV/PVC domain, you have something to reconcile against (the PV) if your controller misses the PVC delete. However, in the scenario of a controller missing the delete event for a Binding, there is nothing to reconcile against a priori. So, this seems to indicate that we need to have a way to preserve the Binding resource until we can be sure it is fully deleted and unbound at the broker. The pattern we have for this in Kubernetes is called 'finalization'.

Why finalization is needed - unbind example

Let's assume you have a Binding B to an Instance I of a ServiceClass S. There are a few permutations to talk through, but I'll keep it simple for now. Imagine the following:

  1. The consumer wants to unbind B, so they delete B.
  2. The controller receives a delete event for B and issues a DELETE to /v2/service_instances/:instance_id/service_bindings/:binding_id

From here it gets a little tricky: do you allow the delete of B to go through even if the broker doesn't respond with a 200? Here's where finalization comes in. Instead of fully deleting the object, the REST API for deleting a binding manipulates its status to indicate that the object is in an 'unbinding' condition. There are a vast number of details to finalization, which I will gloss over here for brevity. Essentially, part of a namespace's spec is the list of finalizers that must run before the namespace can be fully deleted. When the first delete request happens, the namespace's status is updated to reflect the finalizing condition, and various controllers go to work deleting resources and removing themselves from the list of finalizers as they complete their work. A namespace controller handles the final delete once the list of finalizers is empty, and the REST API only allows a 'full' delete of a namespace once that list is empty. We can leverage a similar pattern here to allow the following (a sketch of steps 3 and 4 follows the list):

  1. The consumer wants to unbind B and makes a delete request to the service catalog API server to do so.
  2. The REST API updates B's status to reflect that it is deleting.
  3. The controller continues calling delete at the broker until receiving a 200.
  4. When the controller is satisfied that the delete at the broker succeeded, it can update B in a way that indicates that the finalization is complete, then delete B.
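To make the controller's side of steps 3 and 4 concrete, here is a minimal sketch in Go. All of the names (BrokerClient, CatalogClient, the finalizer string) are illustrative assumptions, not the actual seed-code API; the point is just the shape of the retry-then-remove-finalizer flow.

```go
package controller

// Illustrative types only; not the actual seed-code API.
type Binding struct {
	Name       string
	InstanceID string
	Finalizers []string
}

// BrokerClient issues DELETE /v2/service_instances/:instance_id/service_bindings/:binding_id.
type BrokerClient interface {
	DeleteBinding(instanceID, bindingID string) error
}

// CatalogClient talks to the service catalog API server.
type CatalogClient interface {
	UpdateBinding(b *Binding) error
	DeleteBinding(name string) error
}

// finalizeBinding is one pass of a hypothetical controller work loop for a
// Binding whose delete has been requested. Returning an error requeues the
// item, so the broker delete is retried until it succeeds (step 3); once the
// broker confirms, the finalizer is cleared and the final delete is issued
// (step 4).
func finalizeBinding(broker BrokerClient, catalog CatalogClient, b *Binding) error {
	if err := broker.DeleteBinding(b.InstanceID, b.Name); err != nil {
		return err // requeue; B stays in the 'unbinding' condition
	}
	b.Finalizers = removeFinalizer(b.Finalizers, "service-catalog") // finalizer name is made up
	if err := catalog.UpdateBinding(b); err != nil {
		return err
	}
	if len(b.Finalizers) == 0 {
		return catalog.DeleteBinding(b.Name)
	}
	return nil
}

func removeFinalizer(list []string, name string) []string {
	out := make([]string, 0, len(list))
	for _, f := range list {
		if f != name {
			out = append(out, f)
		}
	}
	return out
}
```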

Deleting services / secrets / SIPs after unbinding

When a binding is undone, the resources created from that binding should be deleted. This should be implemented by a controller doing another part of finalization.
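For example, if the binding was surfaced to the consumer as a Secret, the cleanup half of finalization might look roughly like the sketch below. This assumes a current client-go and a made-up "<binding>-credentials" naming convention; it is not the actual cleanup code.

```go
package controller

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupBindingArtifacts deletes the Secret that was created when the binding
// was injected. Idempotent: a missing Secret is treated as already cleaned up.
func cleanupBindingArtifacts(ctx context.Context, kube kubernetes.Interface, namespace, bindingName string) error {
	err := kube.CoreV1().Secrets(namespace).Delete(ctx, bindingName+"-credentials", metav1.DeleteOptions{})
	if apierrors.IsNotFound(err) {
		return nil
	}
	return err
}
```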

Set up a basic CI

We need someone to volunteer to set up and tend a basic CI setup for the service catalog. We will undoubtedly increase the complexity of the CI setup as we go, and can graduate to more sophisticated setups, but Travis CI is probably sufficient for the next couple of weeks.

Any takers?

@kubernetes-incubator/maintainers-service-catalog

Clarification: Why does the controller implement the CF service broker API?

The controller/server package seems to implement the CF service broker API. Our (Deis') understanding is that the controller's responsibility is only to watch k8s resources and reify based on events affecting those resources. This obviously does require the controller to act as a client in its interaction with brokers that do implement the CF service broker API, but it's not clear why the controller itself implements a server listening on the endpoints described by that API. Can anyone clarify the reasoning behind this? What is the client that we anticipate being on the other end of such an interaction?

Design document for API

This is a compound issue. It includes the following TODO items, preferably done in this order:

  • Propose and agree on common terms in the service catalog / broker space (#15)
  • Propose and agree on Kubernetes resource types for service catalogs (#16)
  • Propose and agree on API level policies. Examples are listed below. Note that the answers to these questions may affect the existence and/or structure of the aforementioned Kubernetes resource types
    • Do we want to separate {single,multi}tenancy concerns from provision/bind actions?
    • If so, how?
    • Do we want to add an in-cluster mechanism to track and assign running service catalog controllers?
    • Do we want to allow service catalog controllers to automatically publish their service catalogs to cluster-global resources?

Build / push is painful if not using gcr and gke

It's been tough trying to get through the walkthrough on minikube. (And because I am working locally, I really only want to push built images to a local registry.) I think the difficulty I'm encountering exposes a fault: the build, push, and (Helm) install steps are too tightly coupled to an (unfair) expectation that such Google services are being used.

I would propose that we abandon such assumptions and use a more generic option to allow users to simply specify a docker registry URL without any assumption that it is gcr.

Remove entire contents of deps dir

This directory seems to include helm and glide binaries, a README, etc., for both Linux/64 and Darwin/64. I can't think of any good reason to package these with this repository.

Meta: Use case documentation

As discussed in the 20161024 SIG meeting, we should finalize the use cases for the v1 milestone of service catalog.

Use cases articulated during the meeting:

  1. Adding service brokers into catalog
  2. Catalog curation: which broker-provided services are globally visible?
  3. Catalog curation: which namespaces can see which catalog services?
  4. As a consumer of a service, how do I indicate that I want to consume a particular service?
  5. As a consumer of a service, how do I indicate that I want to bind to an instance of a service?
  6. What is the unit of consumption of a service? Namespace? Pod? Something else? (brian to comment)
  7. What is the versioning and update story for a service: example: what happens when a broker changes the services it provides?
  8. When a user binds to an instance of a service, what does that look like? Which resources are created in their namespace?
  9. What is the update story for bindings to a service instance?
  10. How does the catalog support multiple consumers in different Kubernetes namespaces of the same instance of a service?
  11. How do I control the number and identity of consumers allowed to bind to my instance?
  12. Curation: how do I control who can see a service instance that I have provisioned?
  13. Curation: how do I control the number of consumers of my service instance?

Proposal: Remove bookstore and its client; replace with a single component

This is a follow up to some discussion in #110

I'd like to propose that test-wise, we have too many moving pieces and that one of those pieces comes with some undesirable baggage.

First the case for why there's one moving part too many:

To power e2e tests, it seems the current approach is that a client program (written in Python) drives traffic to the bookstore app (a Node app, as far as I can tell). That bookstore app depends on a service provisioned and bound via use of our APIs and the service catalog controller. So the test program (client) is testing connectivity and usability of the provisioned / bound service indirectly. Why so much indirection? The bookstore app is a middleman that can be removed if the client / test program itself is executed within k8s and is itself the direct consumer of the service in question.

To make this suggestion more concrete, imagine that the test program is just a k8s job that we load into a cluster. It connects to a database, does some writes, does some reads, then disconnects. We can watch that job for completion and regardless of success or failure, capture its logs.
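To sketch what that job's program might look like: assume the provisioned service is a Postgres-like database and that the binding's credentials reach the job as a DATABASE_URL environment variable (both assumptions for illustration).

```go
package main

import (
	"database/sql"
	"log"
	"os"

	_ "github.com/lib/pq" // Postgres driver; whichever driver matches the bound service
)

func main() {
	// DATABASE_URL is an assumption: the binding's Secret would be surfaced to
	// the Job as environment variables.
	db, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatalf("open: %v", err)
	}
	defer db.Close()

	// Write, then read back, to prove the provisioned/bound service is usable.
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS e2e_probe (id SERIAL PRIMARY KEY, note TEXT)`); err != nil {
		log.Fatalf("create table: %v", err)
	}
	if _, err := db.Exec(`INSERT INTO e2e_probe (note) VALUES ('hello from the e2e job')`); err != nil {
		log.Fatalf("write: %v", err)
	}
	var count int
	if err := db.QueryRow(`SELECT COUNT(*) FROM e2e_probe`).Scan(&count); err != nil {
		log.Fatalf("read: %v", err)
	}
	log.Printf("e2e probe OK, %d row(s) present", count)
	// Exit 0 marks the Job complete; log.Fatalf's non-zero exit marks it failed.
}
```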

Then the case against the bookstore baggage... even if it's a very small application, it is essentially a distraction:

  1. The bookstore app implements use cases that aren't specifically part of the service catalog problem domain. Service catalog is concerned with provisioning and binding services... building an example that does anything of significance with such a service seems like it should be out of scope (and could be a potential maintenance burden).

    To be fair, I recognize there is a counter-argument to this where using a "real" app as a consumer is a more compelling demo, but I don't think that's a good argument for the source code of said "real app" to necessarily live in this repository. If we wanted to provide a truly compelling demo (entirely separate from the optimized test app I have described above), we could instead create a chart that pairs some existing, popular application like Drupal or WordPress with a database provisioned and bound through our APIs and the service catalog controller.

  2. Node isn't necessarily a competency for developers working in this repo. There's already been some discussion (see #103) about limiting the number of languages a contributor needs to know. Don't get hung up on this, though... this is minor in comparison to the previous point.

Document code review / merge protocol

To approve a PR, 3 committers must LGTM it. We should create PR labels so that committers can simply label a PR if they have LGTM-ed it. Additionally, the labels should accommodate the veto policy.

Clean up storage code

Includes

  • Move to storage package
  • Refactor common storage methods into util for use across the different storage impls
  • Align naming with concepts
    • Simplify class names
    • Rename *Services() functions to *ServiceInstances()
    • Rename *Inventory() to *Catalog() or *ServiceClasses()

Gather use cases for Service Producers

This issue will be a bucket to hold use cases focused on issues service producers are facing. After we have a solid set we can organize them into a document and review in a sig meeting.

Store & report code coverage data

After #63, we'll have tests running with code coverage, but we won't (yet) have a place to send that coverage data or surface it to developers. If we get that, it will allow developers to make measurable gains on test coverage in each PR.

We've used codecov.io successfully in the past. Here is an example for how to send coverage data up to the service.

Create consensus on service plans

We have not yet made an agreement as a group about whether service plans are an API resource or a type subordinate to ServiceClass. I personally do not think that plans warrant a resource at this point -- they are more of an attribute of service classes than they are a first-level thing in the API. Additionally, I do not think there is enough information about them to warrant an API resource. Finally, it is MUCH easier to add a resource later than it is to subtract one.
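For illustration, treating plans as an attribute could be as simple as the following sketch; the field names are not a settled schema, just an example of a subordinate type.

```go
package servicecatalog

// ServiceClass is the top-level API resource; plans live inside it rather than
// being registered as their own resource. Field names are illustrative only.
type ServiceClass struct {
	Name        string        `json:"name"`
	BrokerName  string        `json:"brokerName"`
	Description string        `json:"description,omitempty"`
	Plans       []ServicePlan `json:"plans"`
}

// ServicePlan is a plain struct subordinate to ServiceClass. Promoting it to a
// first-class resource later would be additive; the reverse would not be.
type ServicePlan struct {
	Name        string `json:"name"`
	Description string `json:"description,omitempty"`
	Free        bool   `json:"free,omitempty"`
}
```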

Create consensus on deprovision

Forking from #39

We need to establish a consensus for the API mechanics for deprovision. I think this can be similar to the mechanics described in #39. This would go something like:

  1. In order to deprovision an instance, a user issues a DELETE request for that Instance
  2. The REST API endpoint accepts the request and updates the status of the Instance to show that it is being finalized
  3. The service-catalog controller calls the deprovision endpoint at the instance's broker
  4. On a success response from the broker, the service-catalog controller deletes the Binding instances associated with the Instance
  5. The service-catalog controller issues the final DELETE for the Instance once the Bindings are ultimately deleted

@arschles asked whether deprovision should mean that bindings are deleted. I do think deletes should cascade -- if an instance has been deprovisioned, a binding against it has no meaning.
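A minimal sketch of the cascade in steps 3-5, in the same spirit as the unbind sketch above; the interfaces and fields are illustrative, not the actual seed-code API.

```go
package controller

// Illustrative types; not the actual seed-code API.
type Instance struct {
	Name      string
	ServiceID string
	PlanID    string
}

// Broker issues DELETE /v2/service_instances/:instance_id at the instance's broker.
type Broker interface {
	Deprovision(instanceID, serviceID, planID string) error
}

// Catalog talks to the service catalog API server.
type Catalog interface {
	ListBindingsForInstance(instanceName string) ([]string, error)
	DeleteBinding(name string) error
	DeleteInstance(name string) error
}

// finalizeInstance handles an Instance whose delete has been requested: call
// the broker (retried via requeue until it succeeds), cascade-delete the
// Bindings that reference the Instance, then issue the final DELETE for the
// Instance itself.
func finalizeInstance(broker Broker, catalog Catalog, i *Instance) error {
	if err := broker.Deprovision(i.Name, i.ServiceID, i.PlanID); err != nil {
		return err // requeue until the broker acknowledges the deprovision
	}
	bindings, err := catalog.ListBindingsForInstance(i.Name)
	if err != nil {
		return err
	}
	for _, b := range bindings {
		if err := catalog.DeleteBinding(b); err != nil {
			return err
		}
	}
	return catalog.DeleteInstance(i.Name)
}
```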

Implement UpdateServiceInstance

This needs to be implemented in the BrokerClient, and also needs to be plumbed through in the Controller where it currently just routes to CreateServiceInstance.
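In the broker API, an update is a PATCH to /v2/service_instances/:instance_id. A sketch of what the missing BrokerClient method could look like follows; the client struct and request shape are illustrative, not the existing code.

```go
package brokerapi

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// client is an illustrative stand-in for the existing BrokerClient implementation.
type client struct {
	brokerURL  string
	username   string
	password   string
	httpClient *http.Client
}

// UpdateServiceInstanceRequest is an illustrative payload (plan change, new parameters).
type UpdateServiceInstanceRequest struct {
	ServiceID  string                 `json:"service_id"`
	PlanID     string                 `json:"plan_id,omitempty"`
	Parameters map[string]interface{} `json:"parameters,omitempty"`
}

// UpdateServiceInstance issues PATCH /v2/service_instances/:instance_id. The
// controller would call this instead of routing updates to CreateServiceInstance.
func (c *client) UpdateServiceInstance(instanceID string, req *UpdateServiceInstanceRequest) error {
	body, err := json.Marshal(req)
	if err != nil {
		return err
	}
	url := fmt.Sprintf("%s/v2/service_instances/%s", c.brokerURL, instanceID)
	httpReq, err := http.NewRequest(http.MethodPatch, url, bytes.NewReader(body))
	if err != nil {
		return err
	}
	httpReq.SetBasicAuth(c.username, c.password)
	httpReq.Header.Set("Content-Type", "application/json")
	resp, err := c.httpClient.Do(httpReq)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusAccepted {
		return fmt.Errorf("broker returned unexpected status %d", resp.StatusCode)
	}
	return nil
}
```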

Use case: user-provided service instances

User-provided services in Cloud Foundry are a way to share services which do not appear in the CF marketplace. Applications can bind to user-provided services and receive credentials as they would with ordinary broker based services, with the caveat that the credentials are the same for every app that binds to the service instance.

I think there is a strong need for the Kubernetes service catalog to implement this notion, but I have another use case to add:

  1. As a service operator, I want to be able to supply an endpoint that the service catalog can control to handle the bind and unbind API calls for a user provided service, so that I can hook into these flows and handle generating new credentials for each binding.

I have ideas on how to do this that need to be thought through more carefully before I share them, but I wanted to get this idea out there for now.
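To make the idea slightly more concrete without committing to a design, here is a toy version of such an endpoint: a tiny broker-API-shaped server whose bind handler mints fresh credentials per binding and whose unbind handler would revoke them. Every name and field here is an illustrative assumption.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"log"
	"net/http"
)

// bindResponse mirrors the shape of a broker bind response: a credentials map
// handed back to the platform. The fields are illustrative.
type bindResponse struct {
	Credentials map[string]string `json:"credentials"`
}

func main() {
	// The catalog would PUT/DELETE on broker-style binding URLs; this toy accepts
	// any path under /v2/service_instances/ and generates unique credentials per bind.
	http.HandleFunc("/v2/service_instances/", func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodPut: // bind
			secret := make([]byte, 16)
			if _, err := rand.Read(secret); err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			resp := bindResponse{Credentials: map[string]string{
				"uri":      "postgres://user-provided.example.com:5432/db", // placeholder
				"password": hex.EncodeToString(secret),                     // unique per binding
			}}
			w.Header().Set("Content-Type", "application/json")
			w.WriteHeader(http.StatusCreated)
			json.NewEncoder(w).Encode(resp)
		case http.MethodDelete: // unbind: revoke the per-binding credentials here
			w.Header().Set("Content-Type", "application/json")
			w.Write([]byte("{}"))
		default:
			http.Error(w, "unsupported method", http.StatusMethodNotAllowed)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```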

Make this repo in kubernetes' image

We have addressed this concern piecemeal in a couple different places, but I think it is advantageous to open a high-level issue for it and explain the rationale in detail.

I propose that we make the service-catalog repo, to the closest possible degree, similar to the main kubernetes repo. This includes things like project layout, development tooling, dependency management, and other facets of developer experience.

My rationale for this is:

  1. In many, many ways, the eventual shape of the project we have discussed is very similar to Kubernetes - we are basically building a Kubernetes-like system with a smaller problem space
  2. I expect that as this project becomes known and popular, that there will be a large influx of people from the wider community; it will save all of us time and effort if someone familiar with Kubernetes at a development level can understand how to internalize and develop on service-catalog with minimal mental context-switching
  3. We are one of the first incubator repos that has the smaller-kubernetes shape, and I would like to be able to point to our repo as a how-to example for future such projects (I expect there will be very many of these once API federation is baked in)
  4. If we want assistance from Kubernetes maintainers (as I expect that we will when we get to things like API conversion generation), it is most time efficient if they do not have to learn a new project structure
  5. Much of the tooling around code generation that we will want to pull in is very opinionated about project structure, and the path of least resistance is conforming to those opinions rather than trying to change the tooling to be more generic

With that said, I realize that this is a big ask from the group. I have tried to make a good case for the ask on the merits, but I also want to acknowledge the fact that it is an ask. Some reasons why it is an ask:

  1. Developers like to make projects structured the way they like and are used to, and most people in the SIG are new to the Kubernetes world to some degree
  2. Certain aspects of this ask might be alien to folks who have used go in the past in other projects / communities
  3. Certain aspects of the structure may be non-standard or have a cultural history that informs them that is not obvious, and to a great extent, the ask in this case is one of trust that it will be easiest in the future if we go this way

Let me call out special attention to (3) and emphasize this point: basically, I am asking the group to trust me when I say this way is the best one for the long-term. Asking for trust on matters like this is something that I do not do lightly. I understand that this is new territory for all of us -- myself included, and I want to ensure that it is known that I value and appreciate others' opinions on matters of project structure and developer experience.

Let's discuss this in a SIG meeting, no matter what discussion happens in github. Thanks for reading.

Inject binding credentials into native k8s resources

Currently the service controller model has instance-to-instance bindings, so the consumption mechanism needs to be moved to native kubernetes resources as outlined in the catalog discussions.

Needs the following (a rough sketch follows the list):

  • Define BindingInjector interface
  • Extract existing binding injection code to InstanceBindingInjector
  • Replace InstanceBindingInjector with K8sBindingInjector
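Here is that sketch of what the split could look like, assuming a current client-go for the Kubernetes-backed implementation. The interface shape and the secret-naming convention are illustrative, not agreed-upon code.

```go
package injector

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// BindingInjector makes binding credentials consumable by applications.
type BindingInjector interface {
	Inject(ctx context.Context, namespace, bindingName string, credentials map[string][]byte) error
	Remove(ctx context.Context, namespace, bindingName string) error
}

// K8sBindingInjector writes credentials into a native Kubernetes Secret so that
// ordinary pods can mount them or reference them as environment variables.
type K8sBindingInjector struct {
	Client kubernetes.Interface
}

func (k *K8sBindingInjector) Inject(ctx context.Context, namespace, bindingName string, credentials map[string][]byte) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: bindingName + "-credentials"}, // naming is an assumption
		Data:       credentials,
	}
	_, err := k.Client.CoreV1().Secrets(namespace).Create(ctx, secret, metav1.CreateOptions{})
	return err
}

func (k *K8sBindingInjector) Remove(ctx context.Context, namespace, bindingName string) error {
	return k.Client.CoreV1().Secrets(namespace).Delete(ctx, bindingName+"-credentials", metav1.DeleteOptions{})
}
```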

Helm chart doesn't follow best practices for specifying namespace

The Helm chart found in the deployments dir configures namespaces through --set or a values.yaml. This is not how Helm charts are recommended to be written. The recommendation is that namespace: be entirely omitted from the metadata for each resource. Operators should specify a namespace when installing a chart by using the --namespace flag.

Move registry/ into contrib/ directory

Given that registry is a dynamic registry of service classes which underpins the broker implementation, we probably want to move it next to the brokers, into the contrib/ directory.

Remove .pyc files

As has been pointed out (tangentially) in a few other threads, I don't think we need to be checking bytecode into the repo. Let's remove .pyc files.

Codify how asynchronous provision and deprovision responses are handled in the service catalog controller

See https://github.com/kubernetes-incubator/service-catalog/pull/40/files#diff-7bc5ab85846e6317175a21b5acb68061R65

Acceptance criteria for this issue:

  • Documentation that describes how the controller should modify the status.state fields of relevant Kubernetes resources when it receives an asynchronous response
  • Documentation that describes, at a high level, what logic the controller should implement to handle async responses. This should include what state the controller may need to store internally (a sketch of the async polling step follows this list).
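For context: when a broker handles provision or deprovision asynchronously it responds 202 Accepted, and the caller polls GET /v2/service_instances/:instance_id/last_operation until the operation reports succeeded or failed. Here is a sketch of what one poll iteration in the controller might look like (types and status values are illustrative, not a committed design):

```go
package controller

import "fmt"

// LastOperationState mirrors the broker API's last_operation states.
type LastOperationState string

const (
	StateInProgress LastOperationState = "in progress"
	StateSucceeded  LastOperationState = "succeeded"
	StateFailed     LastOperationState = "failed"
)

// AsyncBroker is an illustrative client for
// GET /v2/service_instances/:instance_id/last_operation.
type AsyncBroker interface {
	LastOperation(instanceID string) (LastOperationState, error)
}

// InstanceStatus is the internal state the controller keeps for an in-flight
// async operation; in practice this would live on the Instance's status.state.
type InstanceStatus struct {
	Name  string
	State string // e.g. "Provisioning", "Ready", "Failed"
}

// pollProvision is one poll iteration; while the operation is still in
// progress the controller would requeue the instance with a delay.
func pollProvision(broker AsyncBroker, status *InstanceStatus) (done bool, err error) {
	state, err := broker.LastOperation(status.Name)
	if err != nil {
		return false, err
	}
	switch state {
	case StateSucceeded:
		status.State = "Ready"
		return true, nil
	case StateFailed:
		status.State = "Failed"
		return true, nil
	case StateInProgress:
		status.State = "Provisioning"
		return false, nil
	default:
		return false, fmt.Errorf("unknown last_operation state %q", state)
	}
}
```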

Refactor to a single Makefile

Currently there appears to be a Makefile for each go package in the project. This is not really sustainable and adds overhead to refactors. Packages should be buildable with go build, and there should be a single make target (in kubernetes, we call this make all) that builds all the go packages in the project.

Rename controller to catalog

As a result of the initial seed code evolution, the catalog component is named 'controller' rather than catalog. It should be renamed to match the terminology used by the SIG.

Proposal:

  • directory /controller is renamed to /catalog
  • identifiers (or parts of identifiers) in the code that refer to the same concept will be renamed from [Cc]ontroller to [Cc]atalog.
  • update any documentation *.md files as well.

Proposal: Eliminate GCR bias

This replaces #67

The docker (Docker build) and push (Docker push) make targets currently require either a GCP project ID or a GCR repository URL to be provided. The docker target uses this information when tagging images that are subsequently pushed by the push target. However, I submit that this bias toward GCR is a hindrance for anyone wishing to use another Docker registry: Docker Hub, quay.io, a local registry, or others.

Why this is important: the beauty of Docker and k8s is that they run on most any platform, yet give you (more or less) a uniform experience across those many platforms. Expressing a bias toward one vendor in particular harms the developer experience by raising the barrier to entry for project contribution, since it requires familiarity with, and a budget for, the vendor in question. For example: what if I have no money and want to push images to a local registry that my minikube can pull from?

What I'd like to propose is we adopt something similar (not necessarily identical) to how we have long managed this issue at Deis. At Deis, all our build processes respect an environment variable DEIS_REGISTRY. When building and tagging images, the image's unqualified name (a component name like "router" or "controller") is prefixed with the value of the DEIS_REGISTRY environment variable so that a subsequent push uploads the image to the user-specified location.

I would further propose that we maximize the simplicity of this equation by not building in any specific support for authenticating to registries. I propose instead that we simply document the requirement that anyone using the push target must already be authenticated to whatever registry it is they intend to push to.

Metadata for tracking instance to deployed artifacts to enable service graphs

At the offsite, we discussed how to make service graphs. We have almost everything we need in the current design except a way to map instances to instances. Namespaces/label-selectors bind to instances, so if we want to figure out the instance-to-instance binding, we need the instance-to-namespace/label-selector mapping so that we can traverse the chain (permissions allowing).

This is probably just a matter of storing one more piece of metadata somewhere on the binding operation. This issue is to figure out the best implementation and to add that to the design.
