
operator-marketplace's Introduction

Marketplace Operator

Marketplace is a conduit to bring off-cluster operators to your cluster.

Prerequisites

In order to deploy the Marketplace Operator, you must:

  1. Have an OKD cluster with Operator Lifecycle Manager (OLM) installed.
  2. Be logged in as a user with the Cluster Admin role.

Using the Marketplace Operator

Description

The operator manages one CRD: OperatorHub.

OperatorHub

The OperatorHub resource named cluster is used to manage the default CatalogSources found on OpenShift distributions.

Here is a description of the spec fields:

  • disableAllDefaultSources allows you to disable all the default hub sources. If this is true, a specific entry in sources can be used to enable a default source. If this is false, a specific entry in sources can be used to disable or enable a default source.

  • sources is the list of default hub sources and their configuration. If the list is empty, it implies that the default hub sources are enabled on the cluster unless disableAllDefaultSources is true. If disableAllDefaultSources is true and sources is not empty, the configuration present in sources will take precedence. The list of default hub sources and their current state will always be reflected in the status block.

Please see https://docs.openshift.com/container-platform/4.13/operators/understanding/olm-understanding-operatorhub.html for more information.
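For illustration, here is a minimal sketch of an OperatorHub resource that keeps the defaults enabled overall but turns off a single default source (the source name shown is one of the common OpenShift defaults; adjust it for your cluster):

apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  # Leave the defaults globally enabled...
  disableAllDefaultSources: false
  sources:
  # ...but disable this one specific default source.
  - name: community-operators
    disabled: true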

Deploying the Marketplace Operator with OKD

The Marketplace Operator is deployed by default with OKD and no further steps are required.

Marketplace End to End (e2e) Tests

A full writeup on Marketplace e2e testing can be found here


operator-marketplace's Issues

Command in README refers to removed fields

The README instructs the user to verify the install of the OperatorSource for community-operators with this command:

kubectl get opsrc upstream-community-operators -o=custom-columns=NAME:.metadata.name,PACKAGES:.status.packages -n marketplace

This command fails as the PACKAGES field has been removed:

kubectl get opsrc upstream-community-operators -o=custom-columns=NAME:.metadata.name,PACKAGES:.status.packages -n marketplace
error: status is not found
kubectl get opsrc upstream-community-operators -n marketplace
NAME                           KIND
upstream-community-operators   OperatorSource.v1.operators.coreos.com

kubectl get opsrc upstream-community-operators -n marketplace -o=json
{
    "apiVersion": "operators.coreos.com/v1",
    "kind": "OperatorSource",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"operators.coreos.com/v1\",\"kind\":\"OperatorSource\",\"metadata\":{\"annotations\":{},\"name\":\"upstream-community-operators\",\"namespace\":\"marketplace\"},\"spec\":{\"displayName\":\"Upstream Community Operators\",\"endpoint\":\"https://quay.io/cnr\",\"publisher\":\"Red Hat\",\"registryNamespace\":\"upstream-community-operators\",\"type\":\"appregistry\"}}\n"
        },
        "creationTimestamp": "2019-04-22T13:49:53Z",
        "generation": 1,
        "name": "upstream-community-operators",
        "namespace": "marketplace",
        "resourceVersion": "32872",
        "selfLink": "/apis/operators.coreos.com/v1/namespaces/marketplace/operatorsources/upstream-community-operators",
        "uid": "81b68191-6505-11e9-b685-020ee67d5454"
    },
    "spec": {
        "displayName": "Upstream Community Operators",
        "endpoint": "https://quay.io/cnr",
        "publisher": "Red Hat",
        "registryNamespace": "upstream-community-operators",
        "type": "appregistry"
    }
}

Ref:

bug 1683422: [csc] Make CLI output less verbose
- Fixes #116
- Removed TARGETNAMESPACE and PACKAGES fields from output
- Modified catalogsourceconfig.crd to remove fields from `additionalPrinterColumns`

Question: 2 operators having different install plan deployed within same namespace

Question

How can we manage such a use case, given that currently it is only possible to have one OperatorGroup per namespace?

We would like to install 2 operators within the marketplace namespace. The marketplace namespace contains the OperatorSource and the generated CatalogSource of the Community Operators. This namespace also contains an OperatorGroup which does not include any targetNamespaces.

Question: Is it possible to install 2 operators within the marketplace namespace where:

  • one has an InstallPlan whose CSV install mode is AllNamespaces
  • the other has an InstallPlan that is namespace scoped

If this is not possible, what is the alternative solution?
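For illustration, a minimal sketch of the two OperatorGroup shapes being discussed (the names are hypothetical and the API version may differ depending on the OLM release). Note that OLM generally expects a single OperatorGroup per namespace, so the two shapes below cannot coexist in marketplace:

apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: all-namespaces-group
  namespace: marketplace
spec: {}          # no targetNamespaces: member operators watch all namespaces
---
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: single-namespace-group
  namespace: marketplace
spec:
  targetNamespaces:
  - marketplace   # member operators only watch this namespace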

Question: How to use

I have built a catalog and pushed it to a private registry, but I am having issues pulling the image. From my reading of the code, this should work:

apiVersion: v1
kind: Secret
metadata:
  name: testcatalogkey
  namespace: openshift-marketplace
data:
  .dockerconfigjson: <a valid base64 of my dockerconfigjson>
type: kubernetes.io/dockerconfigjson
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: test-operators
  namespace: openshift-marketplace
spec:
  authorizationToken:
    secretName: testcatalogkey
  displayName: mytestoperators
  sourceType: grpc
  image: someprivate.docker.registry/hktest/catalog-test:0.0.1

But when I deploy this and look in the openshift-marketplace namespace, my container is stuck in ImagePullBackOff with the error `......hktest/catalog-test:0.0.1": rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password: unknown: Authentication is required`.

Doing a describe on the pod I see no reference to the secret I created.

Is there any guidance on what I'm doing wrong?
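One possible explanation, offered as an assumption rather than confirmed behaviour: the index image for a grpc CatalogSource is pulled by the kubelet, so the registry credentials may need to be attached to the service account the catalog pod runs under rather than (or in addition to) being referenced from the CatalogSource. A sketch of that workaround, assuming the pod uses the default service account in openshift-marketplace:

# Attach the pull secret to the service account used by the catalog pod,
# then delete and recreate the CatalogSource so the pod is restarted.
oc secrets link default testcatalogkey --for=pull -n openshift-marketplace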

Alerting rules incomplete

Hello from the OpenShift monitoring team, I noticed that you could improve your alerts a bit. 👋

A couple of things to change (a combined sketch follows this list):

  • Names should be PascalCase (see existing alerts for examples), so community-operators-alert -> CommunityOperatorsAlert
  • The alert naming doesn't follow best practices; instead of CommunityOperatorsAlert I would suggest CommunityOperatorsRegistryErrors
  • The alert expression itself is not an actual error rate; you should compare the errors against the total number of requests
  • Another thing missing is the description, so to each alert you should add:
      annotations:
        description:
        summary: Fill in the summary.
  • I would also suggest adding:
      for: 15m
      labels:
        severity: warning
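Putting those suggestions together, a hypothetical rule could look like the sketch below; the metric names and threshold are placeholders, not the marketplace operator's actual metrics:

- alert: CommunityOperatorsRegistryErrors
  # Express a real error rate: errors divided by total requests.
  expr: |
    sum(rate(marketplace_registry_request_errors_total[5m]))
      /
    sum(rate(marketplace_registry_requests_total[5m])) > 0.1
  for: 15m
  labels:
    severity: warning
  annotations:
    summary: Community operators registry is returning errors.
    description: More than 10% of requests to the community-operators registry have failed over the last 15 minutes.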

Currently we are seeing this in Alertmanager; it does not tell customers what to do when this alert fires:
Screenshot 2019-10-30 at 13 36 14
We want to end up with something like this:
Screenshot 2019-10-31 at 12 57 34

See the alerting rules docs for more information. There is also an example of an error rate alert here.

Let me know if you have any questions. :)

Openshift cluster `operatorsource` stuck in Configuring state

The OperatorSource is stuck in the Configuring state with the message Get https://quay.io/cnr/api/v1/packages?namespace=certified-operators: dial tcp: lookup quay.io on 172.30.0.10:53: server misbehaving.

oc get operatorsource -n openshift-marketplace

NAME                  TYPE          ENDPOINT              REGISTRY              DISPLAYNAME           PUBLISHER   STATUS        MESSAGE                                                                                                                                 AGE
certified-operators   appregistry   https://quay.io/cnr   certified-operators   Certified Operators   Red Hat     Configuring   Get https://quay.io/cnr/api/v1/packages?namespace=certified-operators: dial tcp: lookup quay.io on 172.30.0.10:53: server misbehaving   46h
community-operators   appregistry   https://quay.io/cnr   community-operators   Community Operators   Red Hat     Configuring   Get https://quay.io/cnr/api/v1/packages?namespace=community-operators: dial tcp: lookup quay.io on 172.30.0.10:53: server misbehaving   46h
redhat-marketplace    appregistry   https://quay.io/cnr   redhat-marketplace    Red Hat Marketplace   Red Hat     Configuring   Get https://quay.io/cnr/api/v1/packages?namespace=redhat-marketplace: dial tcp: lookup quay.io on 172.30.0.10:53: server misbehaving    46h
redhat-operators      appregistry   https://quay.io/cnr   redhat-operators      Red Hat Operators     Red Hat     Configuring   Get https://quay.io/cnr/api/v1/packages?namespace=redhat-operators: dial tcp: lookup quay.io on 172.30.0.10:53: server misbehaving      46h

In addition to this, the clusteroperator Insights is in a Degraded state with the message Unable to report: unable to build request to connect to Insights server: Post https://cloud.redhat.com/api/ingress/v1/upload: dial tcp: lookup cloud.redhat.com on 172.30.0.10:53: server misbehaving

oc version - 
Client Version: 4.4.0-202006061254-d038424
Server Version: 4.4.8
Kubernetes Version: v1.17.1+3f6f40d
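Since every failure points at the cluster DNS service (172.30.0.10), one quick sanity check, sketched here with a throwaway busybox pod (the image choice is just an assumption), is to try the lookup from inside the cluster:

# Run a temporary pod and resolve quay.io through the cluster DNS.
oc run dns-test -it --rm --restart=Never --image=busybox:1.28 -- nslookup quay.io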

OperatorHub screen empty in the console

Issue

I successfully installed OLM and the Operator Marketplace. I can see the upstream and community operators installed, but they don't appear on the OperatorHub screen.

Screenshot 2019-04-04 08 42 59

Am I missing something in order to see the operators in the UI?

Steps to reproduce using okd 3.11

oc cluster up 
oc adm policy add-cluster-role-to-user cluster-admin admin
oc login -u admin -p admin

echo "Install olm"
oc create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml 

echo "Install marketplace"
oc create -f https://raw.githubusercontent.com/operator-framework/operator-marketplace/master/deploy/upstream/01_namespace.yaml
oc create -f https://raw.githubusercontent.com/operator-framework/operator-marketplace/master/deploy/upstream/02_catalogsourceconfig.crd.yaml
oc create -f https://raw.githubusercontent.com/operator-framework/operator-marketplace/master/deploy/upstream/03_operatorsource.crd.yaml
oc create -f https://raw.githubusercontent.com/operator-framework/operator-marketplace/master/deploy/upstream/04_service_account.yaml
oc create -f https://raw.githubusercontent.com/operator-framework/operator-marketplace/master/deploy/upstream/05_role.yaml
oc create -f https://raw.githubusercontent.com/operator-framework/operator-marketplace/master/deploy/upstream/06_role_binding.yaml
oc create -f https://raw.githubusercontent.com/operator-framework/operator-marketplace/master/deploy/upstream/07_operator.yaml

echo "Install upstream and community examples"
oc apply -f https://raw.githubusercontent.com/operator-framework/operator-marketplace/master/deploy/examples/upstream.operatorsource.cr.yaml
oc apply -f https://raw.githubusercontent.com/operator-framework/operator-marketplace/master/deploy/examples/community.operatorsource.cr.yaml

echo "Build and run new console"
git clone https://github.com/openshift/console.git
cd console
./build.sh
source ./contrib/oc-environment.sh
./bin/bridge

open browser at the address localhost:9000

Check the operators

oc get opsrc upstream-community-operators -o=custom-columns=NAME:.metadata.name,PACKAGES:.status.packages -n marketplace
NAME                           PACKAGES
upstream-community-operators   jaeger,prometheus,aws-service,etcd,mongodb-enterprise,redis-enterprise,federation,planetscale,strimzi-kafka-operator,cockroachdb,microcks,vault,percona,couchbase-enterprise,postgresql,oneagent

oc get opsrc community-operators -o=custom-columns=NAME:.metadata.name,PACKAGES:.status.packages -n marketplace
NAME                  PACKAGES
community-operators   prometheus,jaeger,kiecloud-operator,elasticsearch-operator,node-network-operator,microcks,metering,descheduler,cluster-logging,planetscale,cockroachdb,etcd,camel-k,oneagent,templateservicebroker,federation,node-problem-detector,automationbroker,percona,postgresql,strimzi-kafka-operator

Help with updating OperatorSource

I have created a CSV, https://github.com/eclipse/che-operator/blob/0.3.0/olm-catalog/cheoperator.0.0.1.csv.yaml, which works fine with OLM.

I followed https://github.com/operator-framework/operator-marketplace/blob/master/docs/how-to-upload-artifact.md and pushed an artifact to https://quay.io/application/eivantsov/cheoperator?tab=releases

I updated operatorsource.cr.yaml and applied it:

apiVersion: "marketplace.redhat.com/v1alpha1"
kind: "OperatorSource"
metadata:
  name: "global-operators"
  namespace: "openshift-marketplace"
spec:
  type: appregistry
  endpoint: "https://quay.io/cnr"
  registryNamespace: "eivantsov"
  displayName: "Marketplace Operators"
  publisher: "Red Hat"

Once configured, I see the following error in operator-marketplace log:

time="2019-01-23T09:59:33Z" level=error msg="Error \"package [] not found\" getting manifest" name=global-operators targetNamespace=openshift-marketplace type=CatalogSourceConfig

At this point I am puzzled - there must be some errors in one of the yamls but I can't figure out which.

Error "found invalid field namespace for v1beta1.RoleRef" installing on OS4

Seeing this error installing on an OS4 cluster:

kubectl apply -f deploy/upstream/
namespace "marketplace" created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition "catalogsourceconfigs.operators.coreos.com" configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition "operatorsources.operators.coreos.com" configured
serviceaccount "marketplace-operator" created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole "marketplace-operator" configured
role "marketplace-operator" created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding "marketplace-operator" configured
deployment "marketplace-operator" created
error: error validating "deploy/upstream/06_role_binding.yaml": error validating 
data: found invalid field namespace for v1beta1.RoleRef; if you choose to ignore 
these errors, turn validation off with --validate=false
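The underlying problem is that RoleRef has no namespace field; a RoleBinding always points at a Role in its own namespace. As a sketch, the roleRef stanza in 06_role_binding.yaml would need to look roughly like this:

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: marketplace-operator
  namespace: marketplace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: marketplace-operator   # note: no namespace key is allowed inside roleRef
subjects:
- kind: ServiceAccount
  name: marketplace-operator
  namespace: marketplace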

Errors you may encounter when upgrading the library

(The purpose of this report is to alert operator-framework/operator-marketplace to the possible problems it may run into when trying to upgrade the following dependencies.)

An error will happen when upgrading the library prometheus/client_golang:

github.com/prometheus/client_golang

  • Latest version: v1.7.1 (latest commit fe7bd95, 20 days ago)
  • Where you use it: https://github.com/operator-framework/operator-marketplace/search?q=prometheus%2Fclient_golang%2Fprometheus&unscoped_q=prometheus%2Fclient_golang%2Fprometheus
  • Detail:

github.com/prometheus/client_golang/go.mod

module github.com/prometheus/client_golang
require (
	github.com/beorn7/perks v1.0.1
	github.com/cespare/xxhash/v2 v2.1.1
	…
)
go 1.11

github.com/prometheus/client_golang/blob/prometheus/desc.go

package prometheus
import (
	"github.com/cespare/xxhash/v2"
	…
)

This problem was introduced in prometheus/client_golang v1.2.0 (commit 9a2ab94, 16 Oct 2019). You currently use version v0.9.3. If you try to upgrade prometheus/client_golang to version v1.2.0 or above, you will get an error: no package exists at "github.com/cespare/xxhash/v2".

I investigated the release information of these libraries (prometheus/client_golang >= v1.2.0) and found the root cause of this issue:

  1. These dependencies all adopted Go modules in their recent versions.

  2. They all comply with the specification of "Releasing Modules for v2 or higher" available in the Modules documentation. Quoting the specification:

A package that has migrated to Go Modules must include the major version in the import path to reference any v2+ modules. For example, say the repo github.com/my/module migrated to Modules at version v3.x.y. This repo should then declare its module path with the MAJOR version suffix "/v3" (e.g., module github.com/my/module/v3), and its downstream projects should use "github.com/my/module/v3/mypkg" to import this repo's package.

  3. This "github.com/my/module/v3/mypkg" is not the physical path, so earlier versions of Go and all third-party tooling (like dep, glide, govendor, etc.) don't have module awareness and therefore don't handle such import paths correctly.

Note: creating a new branch is not required. If instead you have been previously releasing on master and would prefer to tag v3.0.0 on master, that is a viable option. (However, be aware that introducing an incompatible API change in master can cause issues for non-modules users who issue a go get -u given the go tool is not aware of semver prior to Go 1.11 or when module mode is not enabled in Go 1.11+).
Pre-existing dependency management solutions such as dep currently can have problems consuming a v2+ module created in this way. See for example dep#1962.
https://github.com/golang/go/wiki/Modules#releasing-modules-v2-or-higher

Solution

1. Migrate to Go Modules.

Go Modules is the general trend of the ecosystem; if you want a better package upgrade experience, migrating to Go Modules is a good choice.

Migrating to modules will be accompanied by the introduction of virtual paths (as discussed above).

This "github.com/my/module/v3/mypkg" is not the physical path, so Go versions older than 1.9.7 and 1.10.3, plus all third-party dependency management tools (like dep, glide, govendor, etc.), don't have minimal module awareness and therefore don't handle such import paths correctly.

Downstream projects might then have their builds negatively affected if they are module-unaware (Go versions older than 1.9.7 and 1.10.3, or projects using third-party dependency management tools such as dep, glide, govendor…).
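As a rough sketch of what Solution 1 would mean for this repo (the module path matches the repo, but the Go version and dependency pin are illustrative and untested):

// go.mod (sketch)
module github.com/operator-framework/operator-marketplace

go 1.13

require (
	github.com/prometheus/client_golang v1.7.1 // with modules enabled, the transitive github.com/cespare/xxhash/v2 import resolves correctly
)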

2. Maintaining v2+ libraries that use Go Modules in Vendor directories.

If operator-framework/operator-marketplace wants to keep using the dependency management tools (like dep, glide, govendor, etc.) and still wants to upgrade the dependencies, it can choose this fix strategy:
manually download the dependencies into the vendor directory and handle compatibility yourself (materialize the virtual path or delete the virtual part of the path), avoiding fetching the dependencies by virtual import paths. This may add some maintenance overhead compared to using modules.

As the import paths have different meanings between projects adopting module repos and non-module repos, materializing the virtual path is a better way to solve the issue while ensuring compatibility with downstream module users. A textbook example provided by the repo github.com/moby/moby is here:
https://github.com/moby/moby/blob/master/VENDORING.md
https://github.com/moby/moby/blob/master/vendor.conf
In the vendor directory, github.com/moby/moby adds a /vN subdirectory to the corresponding dependencies.
This will help more downstream module users to work well with your package.

3. Request upstream to do compatibility processing.

prometheus/client_golang has 1039 module-unaware users on GitHub, such as AndreaGreco/mqtt_sensor_exporter, seekplum/plum_exporter, arl/monitoring…
https://github.com/search?q=prometheus%2Fclient_golang+filename%3Avendor.conf+filename%3Avendor.json+filename%3Aglide.toml+filename%3AGodep.toml+filename%3AGodep.json

Summary

You can make a choice when you meet these dependency management issues by balancing your own development schedule/mode against the effects on downstream projects.

For this issue, Solution 1 maximizes your benefits with minimal impact on your downstream projects and the ecosystem.

References

Do you plan to upgrade the libraries in the near future?
Hope this issue report can help you ^_^
Thank you very much for your attention.

Best regards,
Kate

Missing operators in packagemanifests

So I'm running OKD v4.5 and I wanted to install the cluster-logging operator in my cluster, but I ran into a few problems. I followed the instructions in the OKD Docs, where at some point I have to create a Subscription resource pointing to the redhat-operators CatalogSource. My first problem was that the only CatalogSource I had in my cluster was community-operators. Is this normal? From this issue it seems that it's not.

I solved my first problem by manually creating the CatalogSource CR, but unfortunately, when I then listed the available package manifests with oc get packagemanifest -n openshift-marketplace, the cluster-logging operator didn't show up. I also noticed that the number of operators that appeared after adding redhat-operators was very low (only 11), whereas other users have many more, as seen in this comment.
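For comparison, a sketch of what the manually created CatalogSource might look like; the index image and tag are assumptions and should be taken from the OKD/OpenShift docs for your exact cluster version (registry.redhat.io also needs working pull credentials, which could explain a partial catalog):

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operators
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: registry.redhat.io/redhat/redhat-operator-index:v4.5   # assumed tag, match your cluster version
  displayName: Red Hat Operators
  publisher: Red Hat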

Community operator pod is crashing

Issue

The community operator pod is crashing on OKD 3.11 and reports the following error: invalid source, secret specified is malformed - upstream-community-operators

time="2019-04-04T07:10:02Z" level=info msg="Using in-cluster kube client config" port=50051 type=appregistry
time="2019-04-04T07:10:02Z" level=info msg="operator source(s) specified are - [https://quay.io/cnr|community-operators  --registry=https://quay.io/cnr|upstream-community-operators ]" port=50051 type=appregistry
time="2019-04-04T07:10:02Z" level=info msg="package(s) specified are - percona,postgresql,strimzi-kafka-operator,node-problem-detector,automationbroker,kiecloud-operator,elasticsearch-operator,node-network-operator,microcks,prometheus,jaeger,cluster-logging,planetscale,cockroachdb,metering,descheduler,oneagent,templateservicebroker,federation,etcd,camel-k" port=50051 type=appregistry
time="2019-04-04T07:10:02Z" level=info msg="can't proceed, bailing out" port=50051 type=appregistry
time="2019-04-04T07:10:02Z" level=error msg="the following error(s) occurred while parsing input - invalid source, secret specified is malformed - upstream-community-operators" port=50051 type=appregistry
time="2019-04-04T07:10:02Z" level=fatal msg="error loading manifest from remote registry - invalid source, secret specified is malformed - upstream-community-operators" port=50051 type=appregistry

Pod deployed

apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: restricted
  creationTimestamp: '2019-04-04T06:53:28Z'
  generateName: community-operators-664fc79df5-
  labels:
    marketplace.catalogSourceConfig: community-operators
    pod-template-hash: '2209735891'
  name: community-operators-664fc79df5-j8n92
  namespace: marketplace
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: ReplicaSet
      name: community-operators-664fc79df5
      uid: 5a36e2c0-56a6-11e9-b581-080027f60d62
  resourceVersion: '31663'
  selfLink: /api/v1/namespaces/marketplace/pods/community-operators-664fc79df5-j8n92
  uid: 5a3c5ef5-56a6-11e9-b581-080027f60d62
spec:
  containers:
    - command:
        - appregistry-server
        - >-
          --registry=https://quay.io/cnr|community-operators
          --registry=https://quay.io/cnr|upstream-community-operators
        - '-o'
        - >-
          percona,postgresql,strimzi-kafka-operator,node-problem-detector,automationbroker,kiecloud-operator,elasticsearch-operator,node-network-operator,microcks,prometheus,jaeger,cluster-logging,planetscale,cockroachdb,metering,descheduler,oneagent,templateservicebroker,federation,etcd,camel-k
      image: quay.io/openshift/origin-operator-registry
      imagePullPolicy: Always
      livenessProbe:
        exec:
          command:
            - grpc_health_probe
            - '-addr=localhost:50051'
        failureThreshold: 30
        initialDelaySeconds: 5
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      name: community-operators
      ports:
        - containerPort: 50051
          name: grpc
          protocol: TCP
      readinessProbe:
        exec:
          command:
            - grpc_health_probe
            - '-addr=localhost:50051'
        failureThreshold: 30
        initialDelaySeconds: 5
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      resources: {}
      securityContext:
        capabilities:
          drop:
            - KILL
            - MKNOD
            - SETGID
            - SETUID
        runAsUser: 1000200000
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: default-token-pgrsw
          readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
    - name: default-dockercfg-kkjkn
  nodeName: localhost
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000200000
    seLinuxOptions:
      level: 's0:c14,c9'
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
    - name: default-token-pgrsw
      secret:
        defaultMode: 420
        secretName: default-token-pgrsw
status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: '2019-04-04T06:53:29Z'
      status: 'True'
      type: Initialized
    - lastProbeTime: null
      lastTransitionTime: '2019-04-04T06:53:29Z'
      message: 'containers with unready status: [community-operators]'
      reason: ContainersNotReady
      status: 'False'
      type: Ready
    - lastProbeTime: null
      lastTransitionTime: null
      message: 'containers with unready status: [community-operators]'
      reason: ContainersNotReady
      status: 'False'
      type: ContainersReady
    - lastProbeTime: null
      lastTransitionTime: '2019-04-04T06:53:29Z'
      status: 'True'
      type: PodScheduled
  containerStatuses:
    - containerID: >-
        docker://26d42781129f924cf8586848d3dff3665056ef16f0198c3df6c53f9db7f36b7a
      image: 'quay.io/openshift/origin-operator-registry:latest'
      imageID: >-
        docker-pullable://quay.io/openshift/origin-operator-registry@sha256:aa424fce55ad5a28b19fcc9399cba6d62bc04a6d9a4730dfd150e9b36d4262f6
      lastState:
        terminated:
          containerID: >-
            docker://26d42781129f924cf8586848d3dff3665056ef16f0198c3df6c53f9db7f36b7a
          exitCode: 1
          finishedAt: '2019-04-04T07:15:18Z'
          message: >
            time="2019-04-04T07:15:18Z" level=fatal msg="error loading manifest
            from remote registry - invalid source, secret specified is malformed
            - upstream-community-operators" port=50051 type=appregistry
          reason: Error
          startedAt: '2019-04-04T07:15:18Z'
      name: community-operators
      ready: false
      restartCount: 9
      state:
        waiting:
          message: >-
            Back-off 5m0s restarting failed container=community-operators
            pod=community-operators-664fc79df5-j8n92_marketplace(5a3c5ef5-56a6-11e9-b581-080027f60d62)
          reason: CrashLoopBackOff
  hostIP: 10.0.3.15
  phase: Running
  podIP: 172.17.0.13
  qosClass: BestEffort
  startTime: '2019-04-04T06:53:29Z'

Failed to build the `marketplace-operator` binary

What should I do before running the make command? Thanks!

[root@preserve-olm-env operator-marketplace]# make osbs-build
# hack/build.sh
./build/build.sh
building marketplace-operator...
# k8s.io/client-go/tools/clientcmd/api/v1
../../../../pkg/mod/k8s.io/client-go@v12.0.0+incompatible/tools/clientcmd/api/v1/conversion.go:29:15: scheme.AddConversionFuncs undefined (type *runtime.Scheme has no field or method AddConversionFuncs)
../../../../pkg/mod/k8s.io/client-go@v12.0.0+incompatible/tools/clientcmd/api/v1/conversion.go:31:12: s.DefaultConvert undefined (type conversion.Scope has no field or method DefaultConvert)
../../../../pkg/mod/k8s.io/client-go@v12.0.0+incompatible/tools/clientcmd/api/v1/conversion.go:34:12: s.DefaultConvert undefined (type conversion.Scope has no field or method DefaultConvert)
../../../../pkg/mod/k8s.io/client-go@v12.0.0+incompatible/tools/clientcmd/api/v1/conversion.go:37:12: s.DefaultConvert undefined (type conversion.Scope has no field or method DefaultConvert)
../../../../pkg/mod/k8s.io/client-go@v12.0.0+incompatible/tools/clientcmd/api/v1/conversion.go:40:12: s.DefaultConvert undefined (type conversion.Scope has no field or method DefaultConvert)
../../../../pkg/mod/k8s.io/client-go@v12.0.0+incompatible/tools/clientcmd/api/v1/conversion.go:43:12: s.DefaultConvert undefined (type conversion.Scope has no field or method DefaultConvert)
../../../../pkg/mod/k8s.io/client-go@v12.0.0+incompatible/tools/clientcmd/api/v1/conversion.go:46:12: s.DefaultConvert undefined (type conversion.Scope has no field or method DefaultConvert)
../../../../pkg/mod/k8s.io/client-go@v12.0.0+incompatible/tools/clientcmd/api/v1/conversion.go:49:12: s.DefaultConvert undefined (type conversion.Scope has no field or method DefaultConvert)
../../../../pkg/mod/k8s.io/client-go@v12.0.0+incompatible/tools/clientcmd/api/v1/conversion.go:52:12: s.DefaultConvert undefined (type conversion.Scope has no field or method DefaultConvert)
# sigs.k8s.io/controller-runtime/pkg/metrics
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/metrics/client_go_adapter.go:133:24: not enough arguments in call to metrics.Register
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/metrics/client_go_adapter.go:133:25: undefined: metrics.RegisterOpts
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/metrics/workqueue.go:99:48: cannot use workqueueMetricsProvider literal (type workqueueMetricsProvider) as type workqueue.MetricsProvider in argument to workqueue.SetProvider:
	workqueueMetricsProvider does not implement workqueue.MetricsProvider (missing NewDeprecatedAddsMetric method)
# sigs.k8s.io/controller-runtime/pkg/client
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/client/typed_client.go:46:5: too many arguments in call to o.resourceMeta.Interface.Post().NamespaceIfScoped(o.Object.GetNamespace(), o.resourceMeta.isNamespaced()).Resource(o.resourceMeta.resource()).Body(obj).VersionedParams(createOpts.AsCreateOptions(), c.paramCodec).Do
	have (context.Context)
	want ()
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/client/typed_client.go:65:5: too many arguments in call to o.resourceMeta.Interface.Put().NamespaceIfScoped(o.Object.GetNamespace(), o.resourceMeta.isNamespaced()).Resource(o.resourceMeta.resource()).Name(o.Object.GetName()).Body(obj).VersionedParams(updateOpts.AsUpdateOptions(), c.paramCodec).Do
	have (context.Context)
	want ()
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/client/typed_client.go:84:5: too many arguments in call to o.resourceMeta.Interface.Delete().NamespaceIfScoped(o.Object.GetNamespace(), o.resourceMeta.isNamespaced()).Resource(o.resourceMeta.resource()).Name(o.Object.GetName()).Body(deleteOpts.AsDeleteOptions()).Do
	have (context.Context)
	want ()
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/client/typed_client.go:103:5: too many arguments in call to o.resourceMeta.Interface.Delete().NamespaceIfScoped(deleteAllOfOpts.ListOptions.Namespace, o.resourceMeta.isNamespaced()).Resource(o.resourceMeta.resource()).VersionedParams(deleteAllOfOpts.ListOptions.AsListOptions(), c.paramCodec).Body(deleteAllOfOpts.DeleteOptions.AsDeleteOptions()).Do
	have (context.Context)
	want ()
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/client/typed_client.go:126:5: too many arguments in call to o.resourceMeta.Interface.Patch(patch.Type()).NamespaceIfScoped(o.Object.GetNamespace(), o.resourceMeta.isNamespaced()).Resource(o.resourceMeta.resource()).Name(o.Object.GetName()).VersionedParams(patchOpts.ApplyOptions(opts).AsPatchOptions(), c.paramCodec).Body(data).Do
	have (context.Context)
	want ()
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/client/typed_client.go:139:20: too many arguments in call to r.Interface.Get().NamespaceIfScoped(key.Namespace, r.isNamespaced()).Resource(r.resource()).Name(key.Name).Do
	have (context.Context)
	want ()
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/client/typed_client.go:154:5: too many arguments in call to r.Interface.Get().NamespaceIfScoped(listOpts.Namespace, r.isNamespaced()).Resource(r.resource()).VersionedParams(listOpts.AsListOptions(), c.paramCodec).Do
	have (context.Context)
	want ()
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/client/typed_client.go:175:5: too many arguments in call to o.resourceMeta.Interface.Put().NamespaceIfScoped(o.Object.GetNamespace(), o.resourceMeta.isNamespaced()).Resource(o.resourceMeta.resource()).Name(o.Object.GetName()).SubResource("status").Body(obj).VersionedParams((&UpdateOptions literal).ApplyOptions(opts).AsUpdateOptions(), c.paramCodec).Do
	have (context.Context)
	want ()
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/client/typed_client.go:199:5: too many arguments in call to o.resourceMeta.Interface.Patch(patch.Type()).NamespaceIfScoped(o.Object.GetNamespace(), o.resourceMeta.isNamespaced()).Resource(o.resourceMeta.resource()).Name(o.Object.GetName()).SubResource("status").Body(data).VersionedParams(patchOpts.ApplyOptions(opts).AsPatchOptions(), c.paramCodec).Do
	have (context.Context)
	want ()
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/client/unstructured_client.go:56:5: too many arguments in call to o.resourceMeta.Interface.Post().NamespaceIfScoped(o.Object.GetNamespace(), o.resourceMeta.isNamespaced()).Resource(o.resourceMeta.resource()).Body(obj).VersionedParams(createOpts.AsCreateOptions(), uc.paramCodec).Do
	have (context.Context)
	want ()
../../../../pkg/mod/sigs.k8s.io/controller-runtime@v0.6.0/pkg/client/unstructured_client.go:56:5: too many errors
# github.com/openshift/client-go/config/clientset/versioned/typed/config/v1
../../../../pkg/mod/github.com/openshift/client-go@v0.0.0-20200326155132-2a6cd50aedd0/config/clientset/versioned/typed/config/v1/apiserver.go:56:5: too many arguments in call to c.client.Get().Resource("apiservers").Name(name).VersionedParams(&options, scheme.ParameterCodec).Do
	have (context.Context)
	want ()
../../../../pkg/mod/github.com/openshift/client-go@v0.0.0-20200326155132-2a6cd50aedd0/config/clientset/versioned/typed/config/v1/apiserver.go:72:5: too many arguments in call to c.client.Get().Resource("apiservers").VersionedParams(&opts, scheme.ParameterCodec).Timeout(timeout).Do
	have (context.Context)
	want ()
../../../../pkg/mod/github.com/openshift/client-go@v0.0.0-20200326155132-2a6cd50aedd0/config/clientset/versioned/typed/config/v1/apiserver.go:88:8: too many arguments in call to c.client.Get().Resource("apiservers").VersionedParams(&opts, scheme.ParameterCodec).Timeout(timeout).Watch
	have (context.Context)
	want ()
../../../../pkg/mod/github.com/openshift/client-go@v0.0.0-20200326155132-2a6cd50aedd0/config/clientset/versioned/typed/config/v1/apiserver.go:98:5: too many arguments in call to c.client.Post().Resource("apiservers").VersionedParams(&opts, scheme.ParameterCodec).Body(aPIServer).Do
	have (context.Context)
	want ()
../../../../pkg/mod/github.com/openshift/client-go@v0.0.0-20200326155132-2a6cd50aedd0/config/clientset/versioned/typed/config/v1/apiserver.go:111:5: too many arguments in call to c.client.Put().Resource("apiservers").Name(aPIServer.ObjectMeta.Name).VersionedParams(&opts, scheme.ParameterCodec).Body(aPIServer).Do
	have (context.Context)
	want ()
../../../../pkg/mod/github.com/openshift/client-go@v0.0.0-20200326155132-2a6cd50aedd0/config/clientset/versioned/typed/config/v1/apiserver.go:126:5: too many arguments in call to c.client.Put().Resource("apiservers").Name(aPIServer.ObjectMeta.Name).SubResource("status").VersionedParams(&opts, scheme.ParameterCodec).Body(aPIServer).Do
	have (context.Context)
	want ()
../../../../pkg/mod/github.com/openshift/client-go@v0.0.0-20200326155132-2a6cd50aedd0/config/clientset/versioned/typed/config/v1/apiserver.go:137:5: too many arguments in call to c.client.Delete().Resource("apiservers").Name(name).Body(&opts).Do
	have (context.Context)
	want ()
../../../../pkg/mod/github.com/openshift/client-go@v0.0.0-20200326155132-2a6cd50aedd0/config/clientset/versioned/typed/config/v1/apiserver.go:152:5: too many arguments in call to c.client.Delete().Resource("apiservers").VersionedParams(&listOpts, scheme.ParameterCodec).Timeout(timeout).Body(&opts).Do
	have (context.Context)
	want ()
../../../../pkg/mod/github.com/openshift/client-go@v0.0.0-20200326155132-2a6cd50aedd0/config/clientset/versioned/typed/config/v1/apiserver.go:165:5: too many arguments in call to c.client.Patch(pt).Resource("apiservers").Name(name).SubResource(subresources...).VersionedParams(&opts, scheme.ParameterCodec).Body(data).Do
	have (context.Context)
	want ()
../../../../pkg/mod/github.com/openshift/client-go@v0.0.0-20200326155132-2a6cd50aedd0/config/clientset/versioned/typed/config/v1/authentication.go:56:5: too many arguments in call to c.client.Get().Resource("authentications").Name(name).VersionedParams(&options, scheme.ParameterCodec).Do
	have (context.Context)
	want ()
../../../../pkg/mod/github.com/openshift/client-go@v0.0.0-20200326155132-2a6cd50aedd0/config/clientset/versioned/typed/config/v1/authentication.go:56:5: too many errors
# github.com/openshift/library-go/pkg/operator/v1helpers
../../../../pkg/mod/github.com/openshift/library-go@v0.0.0-20200518140451-8b2ad0d4eeef/pkg/operator/v1helpers/core_getters.go:39:35: cannot use combinedConfigMapInterface literal (type combinedConfigMapInterface) as type "k8s.io/client-go/kubernetes/typed/core/v1".ConfigMapInterface in return argument:
	combinedConfigMapInterface does not implement "k8s.io/client-go/kubernetes/typed/core/v1".ConfigMapInterface (wrong type for Get method)
		have Get(context.Context, string, "k8s.io/apimachinery/pkg/apis/meta/v1".GetOptions) (*"k8s.io/api/core/v1".ConfigMap, error)
		want Get(string, "k8s.io/apimachinery/pkg/apis/meta/v1".GetOptions) (*"k8s.io/api/core/v1".ConfigMap, error)
../../../../pkg/mod/github.com/openshift/library-go@v0.0.0-20200518140451-8b2ad0d4eeef/pkg/operator/v1helpers/core_getters.go:93:32: cannot use combinedSecretInterface literal (type combinedSecretInterface) as type "k8s.io/client-go/kubernetes/typed/core/v1".SecretInterface in return argument:
	combinedSecretInterface does not implement "k8s.io/client-go/kubernetes/typed/core/v1".SecretInterface (wrong type for Get method)
		have Get(context.Context, string, "k8s.io/apimachinery/pkg/apis/meta/v1".GetOptions) (*"k8s.io/api/core/v1".Secret, error)
		want Get(string, "k8s.io/apimachinery/pkg/apis/meta/v1".GetOptions) (*"k8s.io/api/core/v1".Secret, error)
../../../../pkg/mod/github.com/openshift/library-go@v0.0.0-20200518140451-8b2ad0d4eeef/pkg/operator/v1helpers/test_helpers.go:160:9: cannot use &fakeNodeLister literal (type *fakeNodeLister) as type "k8s.io/client-go/listers/core/v1".NodeLister in return argument:
	*fakeNodeLister does not implement "k8s.io/client-go/listers/core/v1".NodeLister (missing ListWithPredicate method)
../../../../pkg/mod/github.com/openshift/library-go@v0.0.0-20200518140451-8b2ad0d4eeef/pkg/operator/v1helpers/test_helpers.go:168:46: too many arguments in call to n.client.CoreV1().Nodes().List
	have (context.Context, "k8s.io/apimachinery/pkg/apis/meta/v1".ListOptions)
	want ("k8s.io/apimachinery/pkg/apis/meta/v1".ListOptions)
make: *** [osbs-build] Error 2

The golang env:

[root@preserve-olm-env operator-marketplace]# go env
GO111MODULE=""
GOARCH="amd64"
GOBIN="/usr/local/go/bin"
GOCACHE="/root/.cache/go-build"
GOENV="/root/.config/go/env"
GOEXE=""
GOFLAGS="-mod=mod"
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOINSECURE=""
GONOPROXY=""
GONOSUMDB=""
GOOS="linux"
GOPATH="/data/goproject"
GOPRIVATE=""
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/go"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
AR="ar"
CC="gcc"
CXX="g++"
CGO_ENABLED="1"
GOMOD="/data/goproject/src/operator-framework/operator-registry/go.mod"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build571846914=/tmp/go-build -gno-record-gcc-switches"

The branch:

[root@preserve-olm-env operator-marketplace]# git branch
  cs
* master
[root@preserve-olm-env operator-marketplace]# git log
commit b1ba97ff4cf3999fd8fcdc2c97700d5291dca1f0
Merge: c53c1e3 c6457f8
Author: OpenShift Merge Robot <[email protected]>
Date:   Fri May 8 20:36:15 2020 +0200

It works well in the container:

[root@preserve-olm-env operator-marketplace]# docker build . --tag=quay.io/olmqe/operator-marketplace:binary
Sending build context to Docker daemon  261.4MB
Step 1/12 : FROM registry.svc.ci.openshift.org/openshift/release:golang-1.10 AS builder
 ---> eae4d37ce64f
Step 2/12 : WORKDIR /go/src/github.com/operator-framework/operator-marketplace
 ---> Using cache
 ---> e0bc5f711cc4
Step 3/12 : COPY . .
 ---> 3a79a9aabd3a
Step 4/12 : RUN make osbs-build
 ---> Running in 16cd3fa57d63
# hack/build.sh
./build/build.sh
building marketplace-operator...
Removing intermediate container 16cd3fa57d63
 ---> ae3cd120cd43
Step 5/12 : FROM registry.svc.ci.openshift.org/openshift/origin-v4.0:base
 ---> f779ab8af41a
Step 6/12 : RUN useradd marketplace-operator
 ---> Using cache
 ---> 9e808118e7b3
Step 7/12 : USER marketplace-operator
 ---> Using cache
 ---> a1297ffa8ddd
Step 8/12 : COPY --from=builder /go/src/github.com/operator-framework/operator-marketplace/build/_output/bin/marketplace-operator /usr/bin
 ---> Using cache
 ---> 82a3cd539038
Step 9/12 : ADD manifests /manifests
 ---> Using cache
 ---> 70e616c6a11b
Step 10/12 : ADD defaults /defaults
 ---> 9dcd9a17b562
Step 11/12 : LABEL io.k8s.display-name="OpenShift Marketplace Operator"       io.k8s.description="This is a component of OpenShift Container Platform and manages the OpenShift Marketplace."       io.openshift.tags="openshift,marketplace"       io.openshift.release.operator=true       maintainer="AOS Marketplace <[email protected]>"
 ---> Running in 30252d058fee
Removing intermediate container 30252d058fee
 ---> d71ccb29bb3e
Step 12/12 : CMD ["/usr/bin/marketplace-operator"]
 ---> Running in ad566766d15a
Removing intermediate container ad566766d15a
 ---> 512a6589d127
Successfully built 512a6589d127
Successfully tagged quay.io/olmqe/operator-marketplace:binary

Unknown field namespace in RoleRef

With minikube 0.35.0 and kubectl 1.13.4, I see the following when trying to install marketplace:

error: error validating "deploy/upstream/06_role_binding.yaml": 
error validating data: ValidationError(RoleBinding.roleRef): 
unknown field "namespace" in io.k8s.api.rbac.v1beta1.RoleRef; 
if you choose to ignore these errors, turn validation off with --validate=false

Proposal: marketplace CLI

It's nice to install operators via the UI, but when you run things from the command line, or in a CI environment, it's not that easy to deal with subscriptions, operator groups and CSVs.

In some cases we added bash scripts to have more control over operator groups or to wait for a subscription to be satisfied.

I'm asking around, so I am asking here as well.

Why not add a CLI to work with the operators available in a cluster marketplace?
Listing, installing, checking, creating groups, etc.

That could also technically be embedded in the oc or kubectl CLI.
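To make the proposal concrete, a purely hypothetical command set might look like the following; none of these commands exist today:

# Hypothetical marketplace CLI (illustration only, not an existing tool).
oc marketplace list                        # list packages available in the cluster marketplace
oc marketplace install jaeger -n tracing   # create the OperatorGroup + Subscription and wait for the CSV
oc marketplace status jaeger -n tracing    # check whether the Subscription is satisfied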

Unable to reintroduce "redhat-operators" operatorsource

This is more of a question: I accidentally deleted the "redhat-operators" OperatorSource. Is there a way to reintroduce redhat-operators to the operatorsources?

The only operatorsource ATM is upstream-community-operators.

$ oc get operatorsource
NAME                           TYPE          ENDPOINT              REGISTRY                       DISPLAYNAME                    PUBLISHER   STATUS      MESSAGE                                       AGE
upstream-community-operators   appregistry   https://quay.io/cnr   upstream-community-operators   Upstream Community Operators   Red Hat     Succeeded   The object has been successfully reconciled   40m
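For reference, recreating it should just be a matter of re-applying the default OperatorSource. A sketch built from the fields shown for redhat-operators earlier on this page (the apiVersion and namespace depend on your marketplace version, so treat them as assumptions):

apiVersion: operators.coreos.com/v1
kind: OperatorSource
metadata:
  name: redhat-operators
  namespace: openshift-marketplace
spec:
  type: appregistry
  endpoint: https://quay.io/cnr
  registryNamespace: redhat-operators
  displayName: Red Hat Operators
  publisher: Red Hat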

Upgrade: help needed

I have successfully added my custom operator to the OperatorHub:

  1. oc new-project eclipse-che
  2. oc apply -f $operatorsource.yaml:
apiVersion: marketplace.redhat.com/v1alpha1
kind: OperatorSource
metadata:
  name: che-operator
  namespace: openshift-marketplace
spec:
  type: appregistry
  endpoint: https://quay.io/cnr
  registryNamespace: eivantsov
  displayName: "Community Operators"
  publisher: "Red Hat"
  3. oc apply -f operatorgroup.yaml:
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: che-operator
  namespace: eclipse-che
spec:
  targetNamespaces:
  - eclipse-che

I can see my operator in the Hub, I can install it, and it works as expected.

Now I want to test the upgrade procedure. My subscription has the automatic approval strategy, so I simply pushed a new app to quay.io. I can see a new tag with version 0.0.2 in Quay.

Should I expect my operator to upgrade automatically, or do I need to do something else to make it happen?

apiVersion: marketplace.redhat.com/v1alpha1
kind: OperatorSource
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"marketplace.redhat.com/v1alpha1","kind":"OperatorSource","metadata":{"annotations":{},"name":"che-operator","namespace":"openshift-marketplace"},"spec":{"displayName":"Community Operators","endpoint":"https://quay.io/cnr","publisher":"Red Hat","registryNamespace":"eivantsov","type":"appregistry"}}
  creationTimestamp: 2019-02-12T13:10:10Z
  finalizers:
  - finalizer.operatorsources.marketplace.redhat.com
  generation: 1
  name: che-operator
  namespace: openshift-marketplace
  resourceVersion: "4449662"
  selfLink: /apis/marketplace.redhat.com/v1alpha1/namespaces/openshift-marketplace/operatorsources/che-operator
  uid: 86a1e748-2ec7-11e9-ac57-0a0047439e26
spec:
  displayName: Community Operators
  endpoint: https://quay.io/cnr
  publisher: Red Hat
  registryNamespace: eivantsov
  type: appregistry
status:
  currentPhase:
    lastTransitionTime: 2019-02-12T13:10:12Z
    lastUpdateTime: 2019-02-12T13:10:12Z
    phase:
      message: did not find replaces CSV[cheoperator.v0.0.1], channel[alpha] package[che]
      name: Failed

The status above suggests that there's no CSV cheoperator.v0.0.1. In what namespace does it look for it?

I do have the CSV in the namespace where the operator is installed:

NAME                 AGE
cheoperator.v0.0.1   51m

Can't install an operator from a subscription - csv in namespace with no operatorgroups

Issue

When we deploy the following CSV, which uses one of the upstream community operators, on OKD 3.11, we get the error csv in namespace with no operatorgroups

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    capabilities: Seamless Upgrades
    categories: Logging & Tracing
    certified: 'false'
    containerImage: 'docker.io/jaegertracing/jaeger-operator:1.8.2'
    createdAt: '2019-01-09 12:00:00'
    description: >-
      Provides tracing, monitoring and troubleshooting microservices-based
      distributed systems
    support: Jaeger
  selfLink: >-
    /apis/operators.coreos.com/v1alpha1/namespaces/operators/clusterserviceversions/jaeger-operator.v1.8.2
  resourceVersion: '15909'
  name: jaeger-operator.v1.8.2
  uid: 34276920-56be-11e9-8d22-08002717d626
  creationTimestamp: '2019-04-04T09:44:13Z'
  generation: 1
  namespace: operators
  labels:
    olm.api.Jaeger.v1alpha1.io.jaegertracing: provided
spec:
  customresourcedefinitions:
    owned:
      - description: A configuration file for a Jaeger custom resource.
        displayName: Jaeger
        kind: Jaeger
        name: jaegers.io.jaegertracing
        version: v1alpha1
  apiservicedefinitions: {}
  keywords:
    - tracing
    - monitoring
    - troubleshooting
  displayName: jaeger-operator
  provider:
    name: Jaeger
  installModes:
    - supported: true
      type: OwnNamespace
    - supported: true
      type: SingleNamespace
    - supported: false
      type: MultiNamespace
    - supported: true
      type: AllNamespaces
  version: 1.8.2
  icon:
    - base64data: >-
      ...
      mediatype: image/png
  links:
    - name: Jaeger Operator Source Code
      url: 'https://github.com/jaegertracing/jaeger-operator'
  install:
    spec:
      deployments:
        - name: jaeger-operator
          spec:
            replicas: 1
            selector:
              matchLabels:
                name: jaeger-operator
            template:
              metadata:
                labels:
                  name: jaeger-operator
              spec:
                containers:
                  - args:
                      - start
                      - '--platform=openshift'
                    env:
                      - name: WATCH_NAMESPACE
                        valueFrom:
                          fieldRef:
                            fieldPath: 'metadata.annotations[''olm.targetNamespaces'']'
                      - name: POD_NAME
                        valueFrom:
                          fieldRef:
                            fieldPath: metadata.name
                      - name: OPERATOR_NAME
                        value: jaeger-operator
                    image: 'jaegertracing/jaeger-operator:1.8.2'
                    imagePullPolicy: Always
                    name: jaeger-operator
                    ports:
                      - containerPort: 60000
                        name: metrics
                serviceAccountName: jaeger-operator
      permissions:
        - rules:
            - apiGroups:
                - ''
              resources:
                - pods
                - services
                - endpoints
                - persistentvolumeclaims
                - events
                - configmaps
                - secrets
                - serviceaccounts
              verbs:
                - '*'
            - apiGroups:
                - apps
              resources:
                - deployments
                - daemonsets
                - replicasets
                - statefulsets
              verbs:
                - '*'
            - apiGroups:
                - monitoring.coreos.com
              resources:
                - servicemonitors
              verbs:
                - get
                - create
            - apiGroups:
                - io.jaegertracing
              resources:
                - '*'
              verbs:
                - '*'
            - apiGroups:
                - extensions
              resources:
                - ingresses
              verbs:
                - '*'
            - apiGroups:
                - batch
              resources:
                - jobs
                - cronjobs
              verbs:
                - '*'
          serviceAccountName: jaeger-operator
    strategy: deployment
  maintainers:
    - email: [email protected]
      name: Jaeger Google Group
  description: >-
    Jaeger, inspired by [Dapper](https://research.google.com/pubs/pub36356.html)
    and [OpenZipkin](http://zipkin.io/), is a distributed tracing system
    released as open source by Uber Technologies. It is used for monitoring and
    troubleshooting microservices-based distributed systems.


    ### Supported Features


    * **Multiple modes** - Supports `allInOne`, `production`, and `streaming`
    [modes of
    deployment](https://github.com/jaegertracing/jaeger-operator#strategies).


    * **Configuration** - Directly pass down all supported Jaeger configuration
    through the Operator.


    * **Storage** - Configure storage used by Jaeger. By default, `memory` is
    used.


    ### Accessing the UI


    By default, the Operator exposes the Jaeger UI as an Ingress. This can be
    disabled via configuration. See [Accessing the
    UI](https://github.com/jaegertracing/jaeger-operator#accessing-the-ui) in
    the documentation for more details.


    ### Example Configuration


    A more complex Jaeger instance taken from the
    [documentation](https://github.com/jaegertracing/jaeger-operator#creating-a-new-jaeger-instance):

        apiVersion: io.jaegertracing/v1alpha1
        kind: Jaeger
        metadata:
          name: my-jaeger
        spec:
          strategy: allInOne
          allInOne:
            image: jaegertracing/all-in-one:latest
            options:
              log-level: debug
          storage:
            type: memory
            options:
              memory:
                max-traces: 100000
          ingress:
            enabled: false
          agent:
            strategy: DaemonSet
          annotations:
            scheduler.alpha.kubernetes.io/critical-pod: ""
  selector:
    matchLabels:
      name: jaeger-operator
  labels:
    name: jaeger-operator
status:
  certsLastUpdated: null
  certsRotateAt: null
  conditions:
    - lastTransitionTime: '2019-04-04T09:44:13Z'
      lastUpdateTime: '2019-04-04T09:44:13Z'
      message: csv in namespace with no operatorgroups
      phase: Failed
      reason: NoOperatorGroup
  lastTransitionTime: '2019-04-04T09:44:13Z'
  lastUpdateTime: '2019-04-04T09:44:33Z'
  message: csv in namespace with no operatorgroups
  phase: Failed
  reason: NoOperatorGroup
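The NoOperatorGroup reason means the operators namespace contains no OperatorGroup at all. As a sketch, creating one there (the name is arbitrary; the empty spec targets all namespaces) should let OLM move the CSV out of the Failed phase:

apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: global-operators
  namespace: operators
spec: {}   # no targetNamespaces: supports the AllNamespaces install mode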

Marketplace Operator in OKD

README says

The Marketplace Operator is deployed by default with OKD and no further steps are required.

However, with OKD 3.11 I see only two operators:

eugene@ivantsoft ~/Downloads/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit $ oc get pods --all-namespaces
NAMESPACE                       NAME                                                      READY     STATUS      RESTARTS   AGE
default                         docker-registry-1-8992s                                   1/1       Running     0          8m
default                         persistent-volume-setup-k2jpx                             0/1       Completed   0          8m
default                         router-1-8n27x                                            1/1       Running     0          8m
kube-dns                        kube-dns-xhkfk                                            1/1       Running     0          10m
kube-proxy                      kube-proxy-8chgs                                          1/1       Running     0          10m
kube-system                     kube-controller-manager-localhost                         1/1       Running     0          10m
kube-system                     kube-scheduler-localhost                                  1/1       Running     0          9m
kube-system                     master-api-localhost                                      1/1       Running     0          9m
kube-system                     master-etcd-localhost                                     1/1       Running     0          9m
openshift-apiserver             openshift-apiserver-jp9vk                                 1/1       Running     0          10m
openshift-controller-manager    openshift-controller-manager-4zglb                        1/1       Running     0          8m
openshift-core-operators        openshift-service-cert-signer-operator-6d477f986b-8mc6l   1/1       Running     0          10m
openshift-core-operators        openshift-web-console-operator-664b974ff5-lf6sr           1/1       Running     0          8m
openshift-service-cert-signer   apiservice-cabundle-injector-8ffbbb6dc-6wfrb              1/1       Running     0          9m
openshift-service-cert-signer   service-serving-cert-signer-668c45d5f-m2fqv               1/1       Running     0          9m
openshift-web-console           webconsole-595b6b5d9-mgprj                                1/1       Running     0          7m

There's no OLM in OKD by default either, so I wonder whether the existing instructions are accurate.
Thanks

The latest image "quay.io/openshift/origin-operator-marketplace:latest" looks broken

Image log:

time="2020-08-17T07:21:32Z" level=info msg="Go Version: go1.10.8"
time="2020-08-17T07:21:32Z" level=info msg="Go OS/Arch: linux/amd64"
time="2020-08-17T07:21:32Z" level=info msg="operator-sdk Version: v0.8.0"
flag provided but not defined: -registryServerImage
Usage of marketplace-operator:
  -clusterOperatorName string
    	the name of the OpenShift ClusterOperator that should reflect this operator's status, or the empty string to disable ClusterOperator updates
  -defaultsDir string
    	the directory where the default CatalogSources are stored
  -kubeconfig string
    	Paths to a kubeconfig. Only required if out-of-cluster.
  -master string
    	The address of the Kubernetes API server. Overrides any value in kubeconfig. Only required if out-of-cluster.
  -tls-cert string
    	Path to use for certificate (requires tls-key)
  -tls-key string
    	Path to use for private key (requires tls-cert)
  -version
    	displays marketplace source commit info.

When I replaced it with the stable tag, the image worked and the operator installation recovered...
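
As a workaround sketch until the latest tag is fixed (assuming the stock deploy/operator.yaml is being used), the image can be pinned to the stable tag in the operator Deployment:

containers:
  - name: marketplace-operator
    image: quay.io/openshift/origin-operator-marketplace:stable   # pinned instead of :latest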

error: unable to recognize "operator-marketplace/deploy/upstream/07_upstream_operatorsource.cr.yaml": no matches for kind "OperatorSource" in version "operators.coreos.com/v1"

The error error: unable to recognize "operator-marketplace/deploy/upstream/07_upstream_operatorsource.cr.yaml": no matches for kind "OperatorSource" in version "operators.coreos.com/v1" occurs when running kubectl apply -f operator-marketplace/deploy/upstream/ as described here: https://github.com/operator-framework/community-operators/blob/master/docs/testing-operators.md#3-install-the-operator-marketplace
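
A quick way to confirm whether the OperatorSource CRD is actually registered (and under which group/version) before applying the CR:

kubectl get crds | grep -i operatorsource
kubectl api-resources | grep -i operatorsource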

Missing AddToScheme for API

All of the Go client APIs have a function called AddToScheme, like:

import (
	"k8s.io/apimachinery/pkg/runtime"
)

// AddToSchemes may be used to add all resources defined in the project to a Scheme
var AddToSchemes runtime.SchemeBuilder

// AddToScheme adds all Resources to the Scheme
func AddToScheme(s *runtime.Scheme) error {
	return AddToSchemes.AddToScheme(s)
}

This is needed to enable using the API in other operators.
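
For illustration only, a minimal sketch of how a consuming operator would use such a function if it were exported from this repository (the import path mirrors the package layout referenced above and is an assumption):

import (
    "k8s.io/apimachinery/pkg/runtime"

    // assumed import path for the marketplace API types
    marketplace "github.com/operator-framework/operator-marketplace/pkg/apis/operators/v1"
)

// registerMarketplaceTypes adds the marketplace resources to the consumer's scheme.
func registerMarketplaceTypes(s *runtime.Scheme) error {
    return marketplace.AddToScheme(s)
}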

Deleting marketplace-operator leaves service behind

After creating and deleting the marketplace operator, a search for services in the namespace reveals a leftover service named marketplace-operator.

$ operator-marketplace oc project
Using project "marketplace" on server "https://192.168.42.164:8443".
$ operator-marketplace oc apply -f deploy/operatorsource.crd.yaml
customresourcedefinition.apiextensions.k8s.io/operatorsources.marketplace.redhat.com created
$ operator-marketplace oc apply -f deploy/catalogsourceconfig.crd.yaml
customresourcedefinition.apiextensions.k8s.io/catalogsourceconfigs.marketplace.redhat.com created
$ operator-marketplace oc apply -f deploy/rbac.yaml
clusterrole.rbac.authorization.k8s.io/marketplace-operator created
clusterrolebinding.rbac.authorization.k8s.io/default-account-marketplace-operator created
$ operator-marketplace oc apply -f deploy/operator.yaml
deployment.apps/marketplace-operator created
$ operator-marketplace oc get svc
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
marketplace-operator   ClusterIP   172.30.26.44   <none>        60000/TCP   3s
$ operator-marketplace oc delete -f deploy/
customresourcedefinition.apiextensions.k8s.io "catalogsourceconfigs.marketplace.redhat.com" deleted
deployment.apps "marketplace-operator" deleted
customresourcedefinition.apiextensions.k8s.io "operatorsources.marketplace.redhat.com" deleted
clusterrole.rbac.authorization.k8s.io "marketplace-operator" deleted
clusterrolebinding.rbac.authorization.k8s.io "default-account-marketplace-operator" deleted
$ operator-marketplace oc get svc
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
marketplace-operator   ClusterIP   172.30.26.44   <none>        60000/TCP   9s
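
Until the Service is included in the deploy/ manifests (or owned by the Deployment), it can be cleaned up manually; for example, assuming the marketplace namespace used above:

$ oc delete service marketplace-operator -n marketplace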

Liveness/readiness probe killing community pod before it has a chance to download package

Hello!

I have encountered a problem with the Operators Marketplace in CRC. However, it's not CRC specific.

When the marketplace container "community-operators-XXX" is deployed, it starts by downloading a list of packages. This is over 50MB of data downloaded in small chunks and it takes some time to download even on a decent internet connection.

While this is happening the readiness and/or liveness probes report failure.

What happens in my case is that the pods get repeatedly killed before they have collected the necessary data and become ready. This happens several times before OpenShift gives up and the deployment stays failed.

This is a very well hidden problem because the list of operators still shows items; there is no indication of the problem except for the failed Pods in the operator-marketplace project and a lower-than-expected number of items in the .....

$ oc get packagemanifests -n openshift-marketplace | wc -l
124

vs

$ oc get packagemanifests -n openshift-marketplace | wc -l
250

In my case, I solved the problem by increasing the initialDelaySeconds to 300 and the failureThreshold to 100. That gave the container enough time to download the data before it would get killed and redeployed.

However, I am not sure what is the correct place to do that.
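
For reference, this is roughly the shape of the change that worked for me, applied to the registry pod's probes (the probe command shown is an assumption about the existing spec; only the two values below were changed):

readinessProbe:
  exec:
    command: ["grpc_health_probe", "-addr=localhost:50051"]   # assumed existing probe
  initialDelaySeconds: 300
  failureThreshold: 100
livenessProbe:
  exec:
    command: ["grpc_health_probe", "-addr=localhost:50051"]   # assumed existing probe
  initialDelaySeconds: 300
  failureThreshold: 100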

I think that I am surely not the only person having this issue. Especially with CRC people might be testing OpenShift on not-that-great internet connectivity. The issue is well hidden. In the console (web interface), there's no indication of a problem, just that operators are missing in the list. I have noticed only because I was missing a particular operator that I wanted to work with. Also, the wording in CRC docs suggests that some things might be degraded or reporting issues due to memory limitations so it's easy to miss the problem of a failed deployment.

A temporary solution to the problem could be increasing the initialDelaySeconds etc. in the right place. Fixing it properly might involve reimplementing the initialization with the "Init Container" pattern, or changing the readiness/liveness probes or something else?

Thanks and regards!

Reinstall Operator - Openshift 4.3

People,

I made a big mistake in my cluster: I tried to install the Spinnaker Operator and mistakenly uninstalled the Operator instead of Spinnaker.

I found this GitHub repository, which gave me hope, so I tried to install the Operator again, but an error about a ConfigMap appeared.

I already have the Operator Lifecycle Manager installed.

(Screenshot: 2020-06-17 10-22-05)
(Screenshot: 2020-06-17 08-09-08)

shared package importing non-existent package k8sutil from operator-sdk

Hi, operator-sdk v1.0.0 was recently released and we're trying to upgrade to it. We are using the master branch of operator-marketplace, whose shared package tries to import the k8sutil package from operator-sdk, which has been moved to an internal package in that release (see). This import causes the following error:

 github.com/operator-framework/operator-marketplace/pkg/apis/operators/v1 imports
        github.com/operator-framework/operator-marketplace/pkg/apis/operators/shared imports
        github.com/operator-framework/operator-sdk/pkg/k8sutil: module github.com/operator-framework/operator-sdk@latest found (v1.0.0), but does not contain package github.com/operator-framework/operator-sdk/pkg/k8sutil

Subscription status stuck at Upgrading

I have installed OpenShift 4.0 (installer v0.11.0)

Luckily in this version OperatorHub console page is fixed, and I can see Operators.

However, upon installing an operator a Subscription is created, but it is stuck in the Upgrading phase no matter which upgrade policy is chosen.

This happens to Red Hat, certified, and community operators, as well as the one I added myself (by editing the OperatorSource for community operators).


Which logs may I look at to try to figure out what's happening?
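
Some hedged starting points (the namespace is an assumption based on a default OpenShift 4.x install):

# OLM's catalog operator reconciles Subscriptions and InstallPlans
oc logs deployment/catalog-operator -n openshift-operator-lifecycle-manager

# the Subscription's status conditions often name the blocking resource
oc get subscription <name> -n <namespace> -o yaml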

No matches for CatalogSourceConfig in marketplace.redhat.com/v1alpha1

The Marketplace deployment fails when running kubectl apply -f deploy/upstream:

time="2019-03-25T13:51:17Z" level=info msg="Go Version: go1.10.3"
time="2019-03-25T13:51:17Z" level=info msg="Go OS/Arch: linux/amd64"
time="2019-03-25T13:51:17Z" level=info msg="operator-sdk Version: v0.3.0"
time="2019-03-25T13:51:17Z" level=info msg="Registering Components."
time="2019-03-25T13:51:17Z" level=warning msg="[status] ClusterOperator API not present: server does not support API version \"config.openshift.io/v1\""
time="2019-03-25T13:51:17Z" level=fatal msg="no matches for kind \"CatalogSourceConfig\" in version \"marketplace.redhat.com/v1alpha1\""

Test flake (409 Conflict) in TestMarketplace/no-setup-test-group/default-catalogsource-test-suite/update-default-catalogsource

             --- FAIL: TestMarketplace/no-setup-test-group/default-catalogsource-test-suite/update-default-catalogsource (0.14s)
            	require.go:794: 
            			Error Trace:	defaultcatsrctests.go:75
            			Error:      	Received unexpected error:
            			            	Operation cannot be fulfilled on catalogsources.operators.coreos.com "redhat-operators": the object has been modified; please apply your changes to the latest version and try again
            			Test:       	TestMarketplace/no-setup-test-group/default-catalogsource-test-suite/update-default-catalogsource
            			Messages:   	Default CatalogSource could not be updated successfully 
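
The 409 is the usual optimistic-concurrency conflict; a minimal sketch of wrapping the test's update in client-go's retry helper (getCatalogSource, mutate, and updateCatalogSource are placeholders for whatever the test currently does):

import "k8s.io/client-go/util/retry"

// updateDefaultCatalogSource re-reads the object and reapplies the change
// whenever the apiserver reports a resourceVersion conflict.
func updateDefaultCatalogSource() error {
    return retry.RetryOnConflict(retry.DefaultRetry, func() error {
        latest, err := getCatalogSource("redhat-operators") // placeholder: re-read the object
        if err != nil {
            return err
        }
        mutate(latest)                     // placeholder: reapply the change under test
        return updateCatalogSource(latest) // placeholder: write it back
    })
}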

operator source version

xxxxxxx ~ % kubectl apply -f /Users/xxxxxx/Documents/GitHub/operator-marketplace/deploy/examples/community.operatorsource.cr.yaml
error: unable to recognize "/Users/xxxxxx/Documents/GitHub/operator-marketplace/deploy/examples/community.operatorsource.cr.yaml": no matches for kind "OperatorSource" in version "operators.coreos.com/v1"
xxxxxxxxx~ %

I used the YAML below to create the operator source in my CRC cluster. Please help me with the API version.

apiVersion: operators.coreos.com/v1
kind: OperatorSource
metadata:
  name: community-operators
  namespace: marketplace
spec:
  type: appregistry
  endpoint: https://quay.io/cnr
  registryNamespace: community-operators
  displayName: "Community Operators"
  publisher: "Red Hat"
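
To check which API group/version (if any) serves OperatorSource on a given cluster before guessing at the apiVersion (newer OpenShift releases dropped the OperatorSource API entirely):

kubectl api-resources --api-group=marketplace.redhat.com
kubectl api-resources --api-group=operators.coreos.com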

openshift installer hangs waiting for marketplace operator

I run the installer several times a day for CodeReady Containers, and I noticed this afternoon that the marketplace operator was stuck. It blocked the whole installation process. Deleting the pod to force its recreation fixed the issue.

kubectl --kubeconfig crc-tmp-install-data/auth/kubeconfig -n openshift-sdn get co             
NAME                                       VERSION      AVAILABLE   PROGRESSING   DEGRADED   SINCE                                 
...
machine-config                             4.5.0-rc.6   True        False         False      19m
marketplace                                                                       False      
monitoring                                 4.5.0-rc.6   True        False         False      15m
...

I searched for errors in the log from before the restart:

time="2020-07-23T13:39:18Z" level=error msg="no matches for kind \"CatalogSourceConfig\" in version \"operators.coreos.com/v2\"[migration] Error in migrating Marketplace away from CatalogSourceConfig API"
time="2020-07-23T13:42:08Z" level=error msg="Could not determine operator upgradeablity. Error: Get https://172.25.0.1:443/apis/operators.coreos.com/v1/operatorsources: http2: server sent GOAWAY and closed the connection; LastStreamID=75, ErrCode=NO_ERROR, debug=\"\""

Perhaps the operator doesn't like it when the apiserver restarts? That happens very frequently during the install.

(It was using 4.5.0-rc.6 but the marketplace operator code didn't change since then I believe)
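
For reference, the manual workaround was simply recreating the pod; a hedged example (the namespace and label are assumptions about a default install):

oc -n openshift-marketplace get pods
oc -n openshift-marketplace delete pod -l name=marketplace-operator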

Proposal: maintenance window and maintenance blackout window

This issue is to discuss a proposal to improve the operator-marketplace CatalogSourceConfig CRD.

If I understand the current behavior of this controller correctly, the catalog sources are fetched from an app-registry using a polling mechanism every hour.

If that is the current behavior, the operator-marketplace could be improved by letting a user configure a maintenance window (e.g. update the catalog source every Monday) and an optional blackout window during which the catalog source would not be updated (e.g. from 2019-12-23 to 2020-01-10).

@awgreene, @aravindhp, @tkashem, @kevinrizza, and @ecordell WDYT of that kind of proposal? Is it a feature that could be added to the operator-marketplace?
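
Purely to make the proposal concrete, a hypothetical shape the spec could take (updatePolicy and its children are invented for discussion, not an existing API):

apiVersion: operators.coreos.com/v2
kind: CatalogSourceConfig
metadata:
  name: example-csc
spec:
  # ...existing fields...
  updatePolicy:                  # proposed, does not exist today
    maintenanceWindow:
      schedule: "0 2 * * 1"      # e.g. every Monday at 02:00
    blackoutWindows:
      - start: "2019-12-23"
        end: "2020-01-10"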

Add a command line argument for resyncInterval

It would be very helpful to be able to configure the resyncInterval as an argument using the command line.

Example:

marketplace-operator --resync-interval 1h

I could contribute to this issue if this feature is accepted.
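
If the feature were accepted, a minimal sketch of wiring such a flag with the standard library (the flag name and default are assumptions mirroring the example above):

package main

import (
    "flag"
    "time"
)

var resyncInterval = flag.Duration("resync-interval", time.Hour, "how often the informers resync (proposed flag)")

func main() {
    flag.Parse()
    _ = *resyncInterval // would be passed into the controller/manager options
}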

Can't install a custom operator

This was working on a cluster installed from the 0.12.0 installer tag; after updating to 0.13.1 I cannot install an operator from a custom OperatorSource:

apiVersion: marketplace.redhat.com/v1alpha1
kind: OperatorSource
metadata:
  name: codeready-operator
  namespace: openshift-marketplace
spec:
  type: appregistry
  endpoint: https://quay.io/cnr
  registryNamespace: eivantsov
  displayName: "Custom Operators"
  publisher: "Red Hat"

After I create the custom OperatorSource, I can see my operator in the Operator Marketplace. I then install it and a Subscription is created. However, this Subscription is bound to a non-existent CatalogSource, installed-custom-openshift-operators.

What is the right way to install an Operator that comes from a custom OperatorSource?

I can see this in catalog operator pod in olm namespace:

E0301 11:32:31.082993       1 queueinformer_operator.go:155] Sync "openshift-operators" failed: {codeready alpha crwoperator.v1.0.1 {installed-custom-openshift-operators openshift-operators}} not found: CatalogSource {installed-custom-openshift-operators openshift-operators} not found
time="2019-03-01T11:32:31Z" level=info msg="retrying openshift-operators"

I can see that my CatalogSourceConfig has problems:

eugene@ivantsoft ~/go/src/github.com/operator-framework/operator-marketplace/scripts (master) $ oc get catalogsourceconfig/installed-custom-openshift-operators -n=openshift-marketplace -o=yaml
apiVersion: marketplace.redhat.com/v1alpha1
kind: CatalogSourceConfig
metadata:
  creationTimestamp: 2019-03-01T10:38:33Z
  finalizers:
  - finalizer.catalogsourceconfigs.marketplace.redhat.com
  generation: 1
  name: installed-custom-openshift-operators
  namespace: openshift-marketplace
  resourceVersion: "47494"
  selfLink: /apis/marketplace.redhat.com/v1alpha1/namespaces/openshift-marketplace/catalogsourceconfigs/installed-custom-openshift-operators
  uid: 29b74be8-3c0e-11e9-be55-024be96b7e64
spec:
  csDisplayName: Custom Operators
  csPublisher: Custom
  packages: codeready
  targetNamespace: openshift-operators
status:
  currentPhase:
    lastTransitionTime: 2019-03-01T10:38:33Z
    lastUpdateTime: 2019-03-01T10:38:33Z
    phase:
      message: Still resolving package(s) - codeready. Please make sure these are
        valid packages.
      name: Configuring

How can I find out what causes problems with resolving a package?
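
Some hedged places to look when a CatalogSourceConfig sits in Configuring like this (the namespace and names are taken from the output above; exact resource names may vary by version):

# does the OperatorSource report that the app-registry namespace was downloaded?
oc get operatorsource codeready-operator -n openshift-marketplace -o yaml

# the marketplace operator logs report package-resolution failures
oc logs deployment/marketplace-operator -n openshift-marketplace

# list the packages the sources actually expose
oc get packagemanifests -n openshift-marketplace | grep -i codeready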
