
operator-sdk's Introduction


Documentation

Docs can be found on the Operator SDK website.

Overview

This project is a component of the Operator Framework, an open source toolkit to manage Kubernetes native applications, called Operators, in an effective, automated, and scalable way. Read more in the introduction blog post.

Operators make it easy to manage complex stateful applications on top of Kubernetes. However, writing an Operator today can be difficult because of challenges such as using low-level APIs, writing boilerplate, and a lack of modularity, which leads to duplication.

The Operator SDK is a framework that uses the controller-runtime library to make writing operators easier by providing:

  • High level APIs and abstractions to write the operational logic more intuitively
  • Tools for scaffolding and code generation to bootstrap a new project fast
  • Extensions to cover common Operator use cases

Dependency and platform support

Go version

Release binaries will be built with the Go compiler version specified in the developer guide. A Go Operator project's Go version can be found in its go.mod file.

Kubernetes versions

Supported Kubernetes versions for your Operator project or relevant binary can be determined by following this compatibility guide.

Platforms

The set of supported platforms for all binaries and images can be found in these tables.

Community and how to get involved

How to contribute

Check out the contributor documentation.

License

Operator SDK is under Apache 2.0 license. See the LICENSE file for details.

operator-sdk's Issues

Change the Deployment apiVersion of operator.yaml to apps/v1

Currently, the Deployment inside operator.yaml has an apiVersion of extensions/v1beta1, which is deprecated. Since we test operator-sdk on Kubernetes 1.9.x, which supports apps/v1, we should default the Deployment apiVersion to apps/v1 as well.

Print the progress of operator-sdk new command

Currently there is no output when running the operator-sdk new command. I would like it to print what it is doing, the way Rails does.

For example:

$ rails new app
      create
      create  README.md
      create  Rakefile
      create  config.ru
      create  .gitignore
      create  Gemfile
         run  git init from "."
Initialized empty Git repository in /Users/fanminshi/work/src/github.com/coreos/app/.git/
      create  app
      create  app/assets/config/manifest.js
      create  app/assets/javascripts/application.js
      create  app/assets/javascripts/cable.js
      create  app/assets/stylesheets/application.css
      create  app/channels/application_cable/channel.rb
      create  app/channels/application_cable/connection.rb
...

cc/ @hasbro17 @hongchaodeng

What to strip from SDK's [project name] for setting defaults

"operator-sdk new play-operator
This will create the play-operator project with scaffolding with dependency code ready. It generates Kubernetes custom resource API of APIGroup play.example.com and Kind PlayService by default. APIGroups and Kinds can be overridden and added by flags." from the readme.

It seems like sdk new needs to extract play out of play-operator to set up the defaults; we need to figure out the exact rule for the extraction, e.g. what if the project name is playoperator, playOperator, or simply play?

cc/ @hasbro17 @hongchaodeng

Pod chaos middleware

The etcd Operator has a great feature that periodically kills pods as a way of testing the system. It would be great to introduce a middleware that injects chaos into pod types, such as deleting resources or starting/stopping them.
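
A minimal sketch of what such a chaos helper could look like, written against current client-go call signatures; the package and function names here are hypothetical, not the etcd Operator's implementation:

package chaos

import (
	"context"
	"math/rand"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// KillRandomPod deletes one randomly chosen pod matching selector in ns
// every interval, until ctx is cancelled.
func KillRandomPod(ctx context.Context, kc kubernetes.Interface, ns, selector string, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			pods, err := kc.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				continue // nothing to kill this round
			}
			victim := pods.Items[rand.Intn(len(pods.Items))]
			_ = kc.CoreV1().Pods(ns).Delete(ctx, victim.Name, metav1.DeleteOptions{})
		}
	}
}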

Setup CI for operator-sdk

We need to set up continuous integration testing for the operator-sdk.

Action items:

  • Test scripts to run the generator unit tests locally
  • Set up CI with Travis
  • Add a framework for running e2e tests on a k8s cluster
  • Add an e2e test for the user-guide example, i.e. build and test a memcached-operator using the SDK

API Get unmarshaller doesn't respect the APIVersion when the Kind is the same

For example, Get can retrieve an object at APIVersion apps/v1beta1 and still unmarshal the data into an object d of a different type, apps/v1, as declared by d := &appsv1.Deployment{}.

name := "memcached-operator"
namespace := "default"
d := &appsv1.Deployment{
	TypeMeta: metav1.TypeMeta{
		Kind:       "Deployment",
		APIVersion: "apps/v1beta1",
	},
	ObjectMeta: metav1.ObjectMeta{
		Name:      name,
		Namespace: namespace,
	},
}
err := query.Get(d)
if err != nil {
	logrus.Infof("Failed to get Deployment via sdk %v : %v", name, err)
}

Output:
None

Move Dockerfile to the project root

If the operator implementation requires changes to the Dockerfile (most likely), we'd commit those changes to GitHub. Since the Dockerfile generated by the operator-sdk is placed under tmp/build, developers may inadvertently commit the whole tmp/ directory, which is not desirable.

Suggest moving the Dockerfile to the root of the project.

build: failed to build a new operator project

On latest master: a115601
I couldn't build the binary for a new operator project.

Reproducible Steps:

Create an app-operator project:

operator-sdk new app-operator --api-version=app.example.com/v1alpha1 --kind=AppService

Change to app-operator/:
cd app-operator/

Invoke dep init:
dep init

Check dep files:

$ cat Gopkg.toml
...
[[constraint]]
  branch = "master"
  name = "github.com/coreos/operator-sdk"

[[constraint]]
  name = "k8s.io/apimachinery"
  version = "kubernetes-1.8.0"
$ cat Gopkg.lock
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.

[[projects]]
  branch = "master"
  name = "github.com/coreos/operator-sdk"
  packages = ["pkg/k8sclient","pkg/sdk","pkg/sdk/action","pkg/sdk/handler","pkg/sdk/informer","pkg/sdk/types","pkg/util/k8sutil"]
  revision = "a11560116f61ab3147e9b3a8e27be15fde10e827"
...
[[projects]]
  branch = "master"
  name = "k8s.io/api"
  packages = ["admissionregistration/v1alpha1","apps/v1beta1","apps/v1beta2","authentication/v1","authentication/v1beta1","authorization/v1","authorization/v1beta1","autoscaling/v1","autoscaling/v2beta1","batch/v1","batch/v1beta1","batch/v2alpha1","certificates/v1beta1","core/v1","extensions/v1beta1","networking/v1","policy/v1beta1","rbac/v1","rbac/v1alpha1","rbac/v1beta1","scheduling/v1alpha1","settings/v1alpha1","storage/v1","storage/v1beta1"]
  revision = "b378c47b2dcba7f5c3ef97d8a5a0b3821ec6a001"

[[projects]]
  name = "k8s.io/apimachinery"
  packages = ["pkg/api/equality","pkg/api/errors","pkg/api/meta","pkg/api/resource","pkg/apis/meta/internalversion","pkg/apis/meta/v1","pkg/apis/meta/v1/unstructured","pkg/apis/meta/v1alpha1","pkg/conversion","pkg/conversion/queryparams","pkg/conversion/unstructured","pkg/fields","pkg/labels","pkg/runtime","pkg/runtime/schema","pkg/runtime/serializer","pkg/runtime/serializer/json","pkg/runtime/serializer/protobuf","pkg/runtime/serializer/recognizer","pkg/runtime/serializer/streaming","pkg/runtime/serializer/versioning","pkg/selection","pkg/types","pkg/util/cache","pkg/util/clock","pkg/util/diff","pkg/util/errors","pkg/util/framer","pkg/util/intstr","pkg/util/json","pkg/util/net","pkg/util/runtime","pkg/util/sets","pkg/util/validation","pkg/util/validation/field","pkg/util/wait","pkg/util/yaml","pkg/version","pkg/watch","third_party/forked/golang/reflect"]
  revision = "019ae5ada31de202164b118aee88ee2d14075c31"
  version = "kubernetes-1.8.0"

[[projects]]
  name = "k8s.io/client-go"
  packages = ["discovery","discovery/cached","dynamic","kubernetes","kubernetes/scheme","kubernetes/typed/admissionregistration/v1alpha1","kubernetes/typed/apps/v1beta1","kubernetes/typed/apps/v1beta2","kubernetes/typed/authentication/v1","kubernetes/typed/authentication/v1beta1","kubernetes/typed/authorization/v1","kubernetes/typed/authorization/v1beta1","kubernetes/typed/autoscaling/v1","kubernetes/typed/autoscaling/v2beta1","kubernetes/typed/batch/v1","kubernetes/typed/batch/v1beta1","kubernetes/typed/batch/v2alpha1","kubernetes/typed/certificates/v1beta1","kubernetes/typed/core/v1","kubernetes/typed/extensions/v1beta1","kubernetes/typed/networking/v1","kubernetes/typed/policy/v1beta1","kubernetes/typed/rbac/v1","kubernetes/typed/rbac/v1alpha1","kubernetes/typed/rbac/v1beta1","kubernetes/typed/scheduling/v1alpha1","kubernetes/typed/settings/v1alpha1","kubernetes/typed/storage/v1","kubernetes/typed/storage/v1beta1","pkg/version","rest","rest/watch","tools/cache","tools/clientcmd/api","tools/metrics","tools/pager","tools/reference","transport","util/cert","util/flowcontrol","util/integer","util/workqueue"]
  revision = "35ccd4336052e7d73018b1382413534936f34eee"
  version = "kubernetes-1.8.2"

[[projects]]
  branch = "master"
  name = "k8s.io/kube-openapi"
  packages = ["pkg/common"]
  revision = "50ae88d24ede7b8bad68e23c805b5d3da5c8abaf"

[solve-meta]
  analyzer-name = "dep"
  analyzer-version = 1
  inputs-digest = "0f44f1453a6bc20f0d0a55ee8acd57bc589e0981f9578b3e830b237938cca28d"
  solver-name = "gps-cdcl"
  solver-version = 1

Build app-operator:

$ ./tmp/build/build.sh
building app-operator...
# github.com/coreos/app-operator/vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1alpha1
vendor/k8s.io/client-go/kubernetes/typed/admissionregistration/v1alpha1/externaladmissionhookconfiguration.go:36:5132: too many errors
# github.com/coreos/app-operator/vendor/k8s.io/client-go/kubernetes/typed/extensions/v1beta1
vendor/k8s.io/client-go/kubernetes/typed/extensions/v1beta1/thirdpartyresource.go:36:40: undefined: v1beta1.ThirdPartyResource
too many errors

Leader election locking for Operators

We need to ensure that in each namespace the Operator is a singleton, and that only one Operator is reconciling against the resources at any one time.

To accomplish this we should provide automatic singleton leader election logic in the SDK.
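
For reference, a minimal sketch of singleton leader election built on client-go's leaderelection package; the lock name and the choice of a Lease lock are illustrative, not the SDK's eventual API:

package leader

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// RunWithLease blocks until this instance acquires the lease, then calls run.
func RunWithLease(ctx context.Context, kc kubernetes.Interface, ns, lockName string, run func(context.Context)) {
	id, _ := os.Hostname() // identity of this operator instance
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock: &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: lockName, Namespace: ns},
			Client:     kc.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		},
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: run,                   // only the leader runs the reconcile loop
			OnStoppedLeading: func() { os.Exit(1) }, // lost the lease: stop reconciling immediately
		},
	})
}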

Watch multiple resources: Requeue events for the primary resource

Background

A common pattern for operators is to have a sync handler that reconciles events for a primary resource, which is usually the Custom Resource (CR) that defines the custom API for the application. However, there are also secondary resources, such as:

  • Resources that the operator creates, e.g. deployments and services
  • Resources that the operator does not create but depends on, e.g. user-provided secrets for TLS setup

The operator needs to receive an event for the primary resource, so it can reconcile state any time one of the secondary resources changes.

The current SDK Watch API allows watching multiple resources and sends all their events to the same handler, but provides no way to designate a primary resource or requeue events for it.

Goal

The SDK should provide an API that allows the operator to watch a primary resource and receive requeued primary resource events upon changes to any secondary resources.
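
For illustration, this is roughly how the pattern eventually surfaces in controller-runtime (which the SDK now builds on): Owns() watches a secondary resource and maps its events back to the owning primary resource for requeueing. The Memcached type and module path below are hypothetical placeholders:

package controllers

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	ctrl "sigs.k8s.io/controller-runtime"

	cachev1alpha1 "example.com/memcached-operator/api/v1alpha1" // hypothetical module path
)

type MemcachedReconciler struct{ /* client, scheme, ... */ }

// Reconcile always receives the primary resource's key, even when the
// triggering event came from an owned Deployment.
func (r *MemcachedReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the Memcached named by req.NamespacedName and reconcile here.
	return ctrl.Result{}, nil
}

func (r *MemcachedReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&cachev1alpha1.Memcached{}). // primary resource
		Owns(&appsv1.Deployment{}).      // secondary resource: its events requeue the owner
		Complete(r)
}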

Automatic CRD registration

CRD registration can be easily performed in the SDK by generating a few extra fields in register.go in the api folder, creating the CRD in the codebase, and then pushing it to Kubernetes when the operator starts up. This is a nice usability feature because people deploying the operator no longer need to worry about pushing the CRD into Kubernetes before starting the operator.

Ideally this would be a separate API so that users who prefer manually pushing the CRD to Kubernetes are still able to get that behavior.
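
A minimal sketch of the startup registration using the apiextensions clientset (function and package names are illustrative); tolerating "already exists" keeps the manual-creation workflow working:

package startup

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EnsureCRD registers the CRD when the operator starts up.
func EnsureCRD(ctx context.Context, c apiextclient.Interface, crd *apiextv1.CustomResourceDefinition) error {
	_, err := c.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return nil // someone (or a previous run) registered it already
	}
	return err
}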

Discuss: What's the default view for cmd/main.go?

Before: cmd/main.go is generated based on the user's Kind and APIVersion inputs, such as the PlayService kind. The reasoning is that users can watch their customized resource out of the box as the default behavior.

func main() {
	namespace := "default"
	// old Watch API.
	sdk.Watch(api.PlayServicePlural, namespace, api.PlayService)
	sdk.Handle(&stub.Handler{})
	sdk.Run(context.TODO())
}

Now: Should we change the above default behavior to watch on Deployment instead?

func main() {
	namespace := "default"
	sdk.Watch("apps/v1", "Deployment", namespace)
	sdk.Handle(&stub.Handler{})
	sdk.Run(context.TODO())
}

Discuss the behavior of operator-sdk build

Currently, operator-sdk build $IMAGE builds a docker image of the operator using $IMAGE and creates ./tmp/operator.yaml, which contains the CRD and Deployment details. What if I call operator-sdk build $IMAGE2 again? What should the behavior be?

The current behavior:

./tmp/operator.yaml is overwritten. If the user had made changes to it previously, those are gone.

Proposed behavior 1:

Calling operator-sdk build $IMAGE2 again leaves ./tmp/operator.yaml alone because it already exists, except that since the image has changed, the image field should be updated to $IMAGE2.

Proposed behavior 2:

Calling operator-sdk build $IMAGE2 only updates the image field of ./tmp/operator.yaml.

Proposed behavior 3:

The first call, operator-sdk build $IMAGE1, creates a one-time template file ./tmp/operator_tmpl.yaml. We expect the user to copy and paste from the template for any customized modification. Subsequent calls to operator-sdk build $IMAGE-X don't create any new manifests.

cc/ @hasbro17 @hongchaodeng

Make '/tmp' better

The name tmp might lead a user to assume its contents shouldn't go under version control. That's not the case: the tmp/ folder contains scripts that operator-sdk commands such as build and generate depend on. We should think about a better name for the tmp/ folder.

Document the Operator SDK API usage

The operator-sdk repo should have a doc explaining how to use each of the APIs. That would help users learn and understand the APIs much better.

dep problem on centos 7

While trying to build operator-sdk I ran into dep issues on my CentOS 7 dev box: basically, dep would hang after prompting me for my GitHub user ID.

After several attempts at fixing this with no luck, I tried exactly the same steps in an Ubuntu VM, and dep ensure worked (after removing the examples directory).

I suspect a version or package is causing this on CentOS 7; I would be curious whether anyone else has built on CentOS 7 or run into this issue there.

@fanminshi pointed out this dep issue while debugging: golang/dep#1726

None of the fixes in that issue worked for me on CentOS 7.

Change imagePullPolicy to speed up the development lifecycle

operator-sdk build regenerates operator.yaml with imagePullPolicy: Always. During the development phase of an operator, imagePullPolicy: Always forces developers to push the image to a Docker repo before they can test their changes.

This extra step could be avoided if operator-sdk build generated operator.yaml with imagePullPolicy: IfNotPresent, in which case the image already on the developer's machine would be used.

Keep deployment and CRD manifests separate

Currently we generate a single file deploy/operator.yaml that has the CRD and deployment manifests:

$ cat deploy/operator.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: appservices.app.example.com
spec:
  group: app.example.com
  names:
    kind: AppService
    listKind: AppServiceList
    plural: appservices
    singular: appservice
  scope: Namespaced
  version: v1alpha1
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: app-operator
    spec:
      containers:
        - name: app-operator
          image: quay.io/coreos/operator-sdk-dev:app-operator
          command:
          - app-operator

The CRD is something that the user will only create once, but the operator Deployment is something the user might want to delete, edit and recreate multiple times.
For instance on my first run I forgot to specify the pull secret in the deployment manifest and had to edit the manifest and recreate it.

From a usability perspective it would be better to keep them as two separate files deploy/operator.yaml and deploy/<kind>-CRD.yaml and just specify in the README that the user should do the following:

$ kubectl create -f deploy/<kind>-CRD.yaml
$ kubectl -n <ns> create -f deploy/operator.yaml

/cc @fanminshi @hongchaodeng

Allow configuration of the informer resync parameter

Resync tells the informer to resend the latest version of the custom resource to the operator at a given time interval (in seconds) even if there are no updates to the custom resource. This behavior is necessary for operators that want to periodically check the actual state of the cluster and compare it to the desired state. This will allow operators to take advantage of using the existing informer worker goroutines instead of having to create a second goroutine for a reconcile loop.
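
As a sketch of what a configurable resync would control: in client-go the period is simply a constructor argument to the shared informer factory, and on each resync the latest cached object is re-delivered to the handlers even without changes. The Pods informer below is just an illustration:

package operator

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func startPodInformer(kc kubernetes.Interface, resync time.Duration, stopCh <-chan struct{}) {
	// resync (e.g. 30*time.Second) is passed straight to the factory; 0 disables it.
	factory := informers.NewSharedInformerFactory(kc, resync)
	inf := factory.Core().V1().Pods().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			// On a resync tick this fires even when nothing changed;
			// compare actual vs. desired state here.
			fmt.Println("update or resync")
		},
	})
	factory.Start(stopCh)
}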

pkg/sdk/{action,handler,informer,query,types} should probably just be one package called sdk

All of these packages implement small bits of code and it's not clear why they're separate. Things like handler.Handler and informer.Informer are good examples of types that should just be in a common package.

We can have more than one type per package. This ain't Java.

https://rakyll.org/style-packages/

$ go doc github.com/coreos/operator-sdk/pkg/sdk/handler
package handler // import "github.com/coreos/operator-sdk/pkg/sdk/handler"

type Handler interface{ ... }
    var RegisteredHandler Handler
$ go doc github.com/coreos/operator-sdk/pkg/sdk/informer
package informer // import "github.com/coreos/operator-sdk/pkg/sdk/informer"

type Informer interface{ ... }
    func New(resourcePluralName, namespace string, resourceClient dynamic.ResourceInterface, ...) Informer
$ go doc github.com/coreos/operator-sdk/pkg/sdk/query
package query // import "github.com/coreos/operator-sdk/pkg/sdk/query"

Package query contains a set of APIs for accessing kubernetes objects.

func Get(into sdkTypes.Object, opts ...GetOption) error
func List(namespace string, into sdkTypes.Object, opts ...ListOption) error
type GetOp struct{ ... }
    func NewGetOp() *GetOp
type GetOption func(*GetOp)
    func WithGetOptions(metaGetOptions *metav1.GetOptions) GetOption
type ListOp struct{ ... }
    func NewListOp() *ListOp
type ListOption func(*ListOp)
    func WithListOptions(metaListOptions *metav1.ListOptions) ListOption
$ go doc github.com/coreos/operator-sdk/pkg/sdk/types
package types // import "github.com/coreos/operator-sdk/pkg/sdk/types"

type Context struct{ ... }
type Event struct{ ... }
type Object runtime.Object

Action Items

This tracks the action items needed for milestone 0.0.1.
Target Date: 03/12/18

Operator-SDK APIs:

Operator-SDK Generation:

$GOPATH/github.com/example.com/play
├── cmd
│   └── play
├── deploy
│   └── play
├── config
├── tmp
│   ├── build
│   └── codegen
└── pkg
    ├── apis
    │   └── play
    │       └── v1alpha1
    ├── client
    └── stub  
  • check operator-sdk new input args and flags ref: #106
  • save sdk command inputs into config/config.yaml ref: #71
  • populate files for each dir in the order of following:
    • cmd/play ref: #29
    • pkg/stub ref: #32
    • deploy/play ref: #72
    • pkg/apis/play/v1alpha1 ref: #37
    • tmp/build ref: #46 #69
    • tmp/codegen ref: #47
  • generate Gopkg.lock Gopkg.toml ref: #39 #67

cc/ @hasbro17 @hongchaodeng @xiang90 for any modification.

Metadata about Kubernetes/OpenShift

The SDK should provide information about the Kubernetes distribution in use, including items like:

  • kubectl version
  • oc version
  • kubectl api-versions
  • ...

Use cases:

  • Take appropriate measures if run on OpenShift.
  • Determine if feature X is available.
  • Different versions might require special handling. E.g. version upgrade / migrations.

CLI should guide through version upgrades

The CLI should be able to guide the operator author in case of an upgrade to a newer version of the SDK:

  1. Perform automatic migrations if possible.
  2. Point out guidelines the author needs to tackle manually.

Don't auto-gen clientset, listers, and informers for Custom Resource.

Because sdk.Watch() uses the dynamic client under the hood, it is no longer necessary to auto-generate a clientset, listers, or informers for the Custom Resource. However, operator-sdk generate k8s still creates them:

$ operator-sdk generate k8s
Generating deepcopy funcs
Generating clientset for app:v1alpha1 at github.com/coreos/app-operator/pkg/generated/clientset
Generating listers for app:v1alpha1 at github.com/coreos/app-operator/pkg/generated/listers
Generating informers for app:v1alpha1 at github.com/coreos/app-operator/pkg/generated/informers

We need to change tmp/codegen/update-generated.sh to avoid generating the clientset, listers, and informers for the operator's custom resource.

cc/ @hasbro17 @hongchaodeng

edit: fixed via #103

Provide RBAC manifests

We need to generate a basic RBAC role and rolebinding. To start with, it can simply allow access to all resources. Later on we can instruct the user to narrow the permissions to the resources that are being Watched and have Actions taken on them.

For example:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: memcached-operator
rules:
- apiGroups:
  - "*"
  resources:
  - "*"
  verbs:
  - "*"

---

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: default-account-memcached-operator
subjects:
- kind: ServiceAccount
  name: default
roleRef:
  kind: Role
  name: memcached-operator
  apiGroup: rbac.authorization.k8s.io

Generic k8s resource client

Currently, using client-go's per-resource APIs requires picking a specific GroupVersionKind (GVK). For example:

kubecli.CoreV1().Services(ns).Create(svc)

In the SDK, we can't pick a client for the GVK of objects output from the handler. Even though we can hardcode the predefined k8s APIs, there are unknown custom resource GVKs.

This is actually the same problem kubectl has: how does kubectl know about custom resources?

Creating this issue to collect discussion.
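
For reference, this is essentially how client-go's dynamic client sidesteps the problem: the caller supplies a GroupVersionResource at runtime instead of compiling against a typed client, so unknown custom resources work too. A minimal sketch:

package operator

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// createObject can create any resource, including unknown custom resources,
// because the GVR is a runtime value rather than a compiled-in typed client.
// For the Services example above, gvr would be {Group: "", Version: "v1", Resource: "services"}.
func createObject(ctx context.Context, dc dynamic.Interface, gvr schema.GroupVersionResource, ns string, obj *unstructured.Unstructured) error {
	_, err := dc.Resource(gvr).Namespace(ns).Create(ctx, obj, metav1.CreateOptions{})
	return err
}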

Prevent resource version conflicts

Resource version conflicts can occur when the operator writes to the same object twice in the same reconcile loop.
The SDK Action API would provide generic Create and Update functions that could let a Handler run into resource conflicts.

The SDK should prevent the operator from writing to an object twice in its reconcile loop, to discourage the use of retry loops inside the handler for handling resource conflicts.

/cc @philips
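
For context, this is the standard client-go retry pattern the SDK aims to make unnecessary inside handlers; the Deployment scaling here is just an illustrative mutation:

package operator

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// scaleDeployment re-reads the object and re-applies the mutation whenever
// the Update hits a resourceVersion conflict.
func scaleDeployment(ctx context.Context, kc kubernetes.Interface, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := kc.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = kc.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
		return err // a Conflict error here triggers another attempt
	})
}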

Update k8s dependencies to 1.9.3

The k8s.io dependencies should be updated to 1.9.3 to allow testing against minikube k8s v1.9.0.

[[constraint]]
  name = "k8s.io/api"
  version = "kubernetes-1.9.3"

[[constraint]]
  name = "k8s.io/apimachinery"
  version = "kubernetes-1.9.3"

[[constraint]]
  name = "k8s.io/client-go"
  version = "kubernetes-1.9.3"

Discuss the behavior of operator-sdk new

Currently, running operator-sdk new app-operator --api-version=app.example.com/v1alpha1 --kind=AppService creates a new folder called app-operator. If I run the same command again, it overwrites everything in the app-operator folder. I don't think that's the behavior we want. In Rails, the new command skips any files that already exist and generates the ones that don't. The Rails behavior seems great but might complicate the current operator-sdk new code base. A simpler approach is to check whether the app-operator folder already exists; if not, create it; if it does, skip creating the files and dirs that are already there.

cc/ @hasbro17 @hongchaodeng

Better error message when dep isn't in the current PATH

Please use https://golang.org/pkg/os/exec/#LookPath to check whether dep is installed (a sketch follows the output below). This requirement also needs to be in the README.

$ operator-sdk new app-operator --api-version=app.example.com/v1alpha1 --kind=App
Create app-operator/cmd/app-operator/main.go 
Create app-operator/config/config.yaml 
Create app-operator/deploy/rbac.yaml 
Create app-operator/deploy/cr.yaml 
Create app-operator/pkg/apis/app/v1alpha1/doc.go 
Create app-operator/pkg/apis/app/v1alpha1/register.go 
Create app-operator/pkg/apis/app/v1alpha1/types.go 
Create app-operator/pkg/stub/handler.go 
Create app-operator/tmp/build/build.sh 
Create app-operator/tmp/build/docker_build.sh 
Create app-operator/tmp/build/Dockerfile 
Create app-operator/tmp/codegen/boilerplate.go.txt 
Create app-operator/tmp/codegen/update-generated.sh 
Create app-operator/Gopkg.toml 
Create app-operator/Gopkg.lock 
Run dep ensure ...
Error: failed to ensure dependencies: ()
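
A minimal sketch of the suggested pre-flight check; where exactly it would live in the scaffolding code is left open:

package cmdutil

import (
	"fmt"
	"os"
	"os/exec"
)

// checkDep fails fast with a readable message instead of the opaque
// "failed to ensure dependencies: ()" shown above.
func checkDep() {
	if _, err := exec.LookPath("dep"); err != nil {
		fmt.Fprintln(os.Stderr, `Error: "dep" not found in PATH; see https://github.com/golang/dep for install instructions`)
		os.Exit(1)
	}
}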

operator-sdk new should also generate deepcopy functions for custom resource

The current operator-sdk new generates a skeleton that doesn't include deepcopy functions for the custom resource. The skeleton still compiles because none of the custom resource objects are used by sdk.Watch() by default. However, if the user changes sdk.Watch() to watch the custom resource and then converts the watched object to the custom resource type, the code no longer compiles.

To resolve this potential issue, operator-sdk new should generate deepcopy functions for the custom resource when creating a new project.
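
For reference, this is roughly the shape of the functions deepcopy-gen emits; App stands in for the scaffolded custom resource type from types.go in the same package, and implementing DeepCopyObject is what lets *App satisfy runtime.Object:

package v1alpha1

import "k8s.io/apimachinery/pkg/runtime"

// DeepCopyInto copies the receiver into out.
func (in *App) DeepCopyInto(out *App) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	// deep-copy any pointer/slice/map fields of Spec and Status here
}

// DeepCopy returns a new App that shares no memory with the receiver.
func (in *App) DeepCopy() *App {
	if in == nil {
		return nil
	}
	out := new(App)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject satisfies runtime.Object, which sdk.Watch and the
// informer machinery require.
func (in *App) DeepCopyObject() runtime.Object {
	return in.DeepCopy()
}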

Customizable Dockerfile

Currently, the Dockerfile used to build the operator image under tmp/build/ is pre-generated with alpine-3.6 as the base image. Users might want to customize how their operator image is built, so we need to provide a way to do that.

A workaround is to change tmp/build/Dockerfile directly and invoke operator-sdk build $image, which builds the image from the modified Dockerfile. Underneath, the operator-sdk build command simply calls tmp/build/docker_build.sh, which in turn builds the docker image from tmp/build/Dockerfile.

Operator panics if it starts before creating the CRD

When deploying the operator image built by operator-sdk, if the operator Deployment is created before the CRD, the operator panics.

Reproducible steps:

# setup
$ operator-sdk new app-operator --api-version=app.example.com/v1alpha1 --kind=App
$ cd app-operator/
$ operator-sdk build quay.io/coreos/operator-sdk-dev:app-operator
$ docker push quay.io/coreos/operator-sdk-dev:app-operator

# remove crd creation from operator.yaml.
$ cat deploy/operator.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: app-operator
  template:
    metadata:
      labels:
        name: app-operator
    spec:
      containers:
        - name: app-operator
          image: quay.io/coreos/operator-sdk-dev:app-operator
          command:
          - app-operator
          imagePullPolicy: Always
      imagePullSecrets:
      - name: operator-sdk-secret

# deploy app-operator
kubectl create -f deploy/operator.yaml

# logs
$ kubectl logs -f app-operator-67c6b694-jtgjt
time="2018-04-11T17:57:57Z" level=info msg="Go Version: go1.10"
time="2018-04-11T17:57:57Z" level=info msg="Go OS/Arch: linux/amd64"
time="2018-04-11T17:57:57Z" level=info msg="operator-sdk Version: 0.0.4"
time="2018-04-11T17:57:57Z" level=error msg="failed to get resource client for (apiVersion:app.example.com/v1alpha1, kind:App, ns:default): failed to get resource type: failed to get the resource REST mapping for GroupVersionKind(app.example.com/v1alpha1, Kind=App): no matches for app.example.com/, Kind=App"
panic: failed to get resource type: failed to get the resource REST mapping for GroupVersionKind(app.example.com/v1alpha1, Kind=App): no matches for app.example.com/, Kind=App

goroutine 1 [running]:
github.com/coreos-inc/app-operator/vendor/github.com/coreos/operator-sdk/pkg/sdk.Watch(0x1025c40, 0x18, 0x1017e6d, 0x3, 0x10199f2, 0x7, 0x5)
	/Users/fanminshi/work/src/github.com/coreos-inc/app-operator/vendor/github.com/coreos/operator-sdk/pkg/sdk/api.go:48 +0x389
main.main()
	/Users/fanminshi/work/src/github.com/coreos-inc/app-operator/cmd/app-operator/main.go:22 +0x72

Because the operator deployment has the default restartPolicy: Always, the operator will be restarted until the CRD is created, after which it works as usual.

Potential fix:
The operator should wait until the CRD is created before proceeding.
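
A minimal sketch of one possible fix: poll the discovery API until the CR's group/version serves the expected Kind before starting the watch, instead of panicking. The function name and polling intervals are illustrative:

package operator

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/discovery"
)

// waitForCRD blocks until the API server serves `kind` under `groupVersion`
// (e.g. "app.example.com/v1alpha1" and "App"), or the timeout expires.
func waitForCRD(dc discovery.DiscoveryInterface, groupVersion, kind string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		list, err := dc.ServerResourcesForGroupVersion(groupVersion)
		if err != nil {
			return false, nil // group/version not served yet; keep polling
		}
		for _, r := range list.APIResources {
			if r.Kind == kind {
				return true, nil
			}
		}
		return false, nil
	})
}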

discuss: difference between apiVersion and apiGroup

according to api-groups:

The API group is specified in a REST path and in the apiVersion field of a serialized object.

The operator-sdk has an --api-group flag that represents the above specification. But it seems that api-group and apiVersion have the same string value. We should figure out the distinction between the two and choose which one to use.

ref: #21

operator-sdk build does not print stderr

Currently, running the operator-sdk build command does not show the error output explaining why a build failed, e.g.:

$ operator-sdk build quay.io/coreos/operator-sdk-dev:haseeb
Error: failed to build: (exit status 1)

Whereas the actual error is

$ ./tmp/build/build.sh
building memcached-operator...
pkg/stub/handler.go:6:2: case-insensitive import collision: "github.com/hasbro17/memcached-operator/vendor/github.com/Sirupsen/logrus" and "github.com/hasbro17/memcached-operator/vendor/github.com/sirupsen/logrus"

This is because commands/operator-sdk/cmd/build.go doesn't print the output buffer returned from the failed command. That buffer holds stderr, so it should be printed on error.
https://github.com/coreos/operator-sdk/blob/master/commands/operator-sdk/cmd/build.go#L42-L44

/cc @fanminshi
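
A sketch of the fix (not the actual build.go code): CombinedOutput captures stderr along with stdout, so it can be printed when the command fails:

package cmd

import (
	"fmt"
	"os"
	"os/exec"
)

// runBuild surfaces the build script's output (stdout and stderr) when it
// fails, instead of only the exit status.
func runBuild() error {
	cmd := exec.Command("./tmp/build/build.sh")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Fprint(os.Stderr, string(out)) // this is the part currently missing
		return fmt.Errorf("failed to build: (%v)", err)
	}
	return nil
}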

Don't use a custom Context type

This block is particularly odd

// Context is the special context that is passed to the Handler.
// It includes:
// - Context: standard Go context that is used to pass cancellation signals and deadlines
type Context struct {
	Context context.Context
}

Are there plans to expand it? If not, we should just pass additional arguments to the Handler.

Better local deployment workflow

Currently, the deployment workflow of an operator is:

  • Build the image: operator-sdk build $Image
  • Push the image to a public registry: docker push $Image
  • Create the operator deployment: kubectl create -f deploy/operator.yaml
  • Then test out the operator image via kubectl

The development cycle could be faster if we avoided building and pushing the image. A workflow like the following may be better (an idea borrowed from kubebuilder):

  • Build an operator binary bin/operator that reads a kubeconfig, so that the client can access the Kubernetes cluster associated with that kubeconfig.
  • Execute ./bin/operator
  • Then test out the operator via kubectl

The proposed approach saves the developer the time spent building and pushing the image to a registry and deploying the operator.
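
A minimal sketch of the out-of-cluster half of that workflow: building a client config from the local kubeconfig so the operator binary can run on the developer's machine against a remote cluster. The flag name and defaulting are illustrative:

package main

import (
	"flag"
	"log"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := flag.String("kubeconfig", filepath.Join(homedir.HomeDir(), ".kube", "config"), "path to kubeconfig")
	flag.Parse()

	// Out-of-cluster config: talk to whatever cluster the kubeconfig points at.
	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatalf("failed to load kubeconfig: %v", err)
	}
	kc, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("failed to create client: %v", err)
	}
	_ = kc // hand the client to the operator's watch/handle loop here
}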

Action Items:

  • Implement operator-sdk up command ref: #219
  • Document how to use the operator-sdk up ref: #279

Operator SDK CLI should report its version

Currently, users can't easily determine the version of the Operator SDK CLI.
We can add a command or flag to operator-sdk to print the version:

$ operator-sdk version
v0.0.2
...

or

 $ operator-sdk --version
v0.0.2
...
 $ operator-sdk -v
v0.0.2
...
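
A minimal sketch of the version subcommand, assuming the CLI keeps using cobra; the Version variable and its ldflags stamping are illustrative:

package cmd

import (
	"fmt"

	"github.com/spf13/cobra"
)

// Version would be stamped at build time, e.g.
// go build -ldflags "-X <module>/cmd.Version=v0.0.2"
var Version = "unset"

func NewVersionCmd() *cobra.Command {
	return &cobra.Command{
		Use:   "version",
		Short: "Prints the operator-sdk version",
		Run: func(cmd *cobra.Command, args []string) {
			fmt.Println(Version)
		},
	}
}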

Handler does not receive delete event

When a user deletes an existing Custom Resource (CR) that the operator is watching, they expect a delete event for that CR to be delivered to Handle(ctx types.Context, event types.Event) error. However, that's not the case.

Steps to reproduce:

Change pkg/stub/handler.go to print received events:

func (h *Handler) Handle(ctx types.Context, event types.Event) error {
	switch o := event.Object.(type) {
	case *v1alpha1.App:
		logrus.Printf("Name %v Deleted? %v", o.Name, event.Deleted)
	}
	return nil
}

Create a CR:

$ kubectl create -f deploy/cr.yaml

output:

$ kubectl logs -f app-operator-67c6b694-nh7l7
...
time="2018-04-04T23:15:12Z" level=info msg="Name example Deleted? false"

Delete a CR:

$ kubectl delete -f deploy/cr.yaml

Expect "Name example Deleted? true", but got none.

Allow configuration of the number of informer workers

Currently the number of workers in the informer is hardcoded to 1. Being able to set this number higher would allow the operator to process multiple CR updates at once, helping it stay responsive when managing a large number of CRs.

Note: I think 1 is definitely the right default.
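
A sketch of the standard worker-pool pattern a configurable worker count would enable; the controller struct and queue handling are elided placeholders:

package operator

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

type controller struct {
	// workqueue, informer, handler ... elided
}

// processNextItem pops one key from the workqueue and invokes the handler;
// it returns false once the queue is shut down.
func (c *controller) processNextItem() bool {
	// ... elided ...
	return true
}

func (c *controller) runWorker() {
	for c.processNextItem() {
	}
}

// Run starts `workers` goroutines draining the same queue, instead of the
// single hardcoded worker.
func (c *controller) Run(workers int, stopCh <-chan struct{}) {
	for i := 0; i < workers; i++ {
		go wait.Until(c.runWorker, time.Second, stopCh)
	}
	<-stopCh
}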
