
image-registry's Introduction

OpenShift Image Registry

Licensed under the Apache License, version 2.0.

OpenShift Image Registry is an application, tightly integrated with OpenShift Origin, that lets you distribute Docker images.

Installation and configuration instructions can be found in the OpenShift documentation.

Features:

  • Pull and cache images from remote registries.
  • Role-based access control (RBAC).
  • Audit log.
  • Prometheus metrics.

image-registry's People

Contributors

adambkaplan, akhil-rane, bparees, coreydaley, deads2k, dmage, enj, flavianmissi, ironcladlou, jhadvig, kargakis, legionus, liggitt, mfojtik, nichitagutu, openshift-bot, openshift-ci-robot, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, pweil-, qiwang19, ricardomaraschini, sadasu, smarterclayton, soltysh, stbenjam, stevekuznetsov, yupengzte, yuwamsft2


image-registry's Issues

allowedRegistriesForImport Behavior

Issue

While recently working in a somewhat disconnected environment (where ImageContentSourcePolicy objects work through a Proxy instance), we have seen "odd" behavior with allowedRegistriesForImport and registrySources.insecureRegistries.

When attempting a cluster version upgrade, the clusterversion pod had logs saying Forbidden: registry "quay.io" not allowed by whitelist: "quay.io/openshift-release-dev:443", similar to what is seen in this Solution from access.redhat.com: https://access.redhat.com/solutions/6531701. We have quay.io/openshift-release-dev in our ImageContentSourcePolicy to pull from our proxy instance, and no other OpenShift images have issues except during the cluster upgrade, when updating/creating an ImageStream. I looked at the manifests and they were targeting the digest of the image, so the ImageContentSourcePolicy should have triggered, but it did not (a separate question in itself).

To attempt to get around this, we updated the config below (and waited for the nodes to roll out the new config) to see where we stood, but we were met with the same error as before.

Here is the configuration (trimmed down of course) that we were using:

apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  allowedRegistriesForImport: 
  - domainName: quay.io
    insecure: true
  - domainName: someotherregistries.com
    insecure: true
  registrySources:
    insecureRegistries:
    - quay.io
    - someotherregistries.com

We are aware that quay.io is not an insecure registry; this config was just copied over from one of the other entries in the list and wasn't changed to false. I did a manual test via oc import-image to the exact same digest (quay.io/openshift-release-dev@sha256:something) and at least got the same error the clusterversion pod had, which was expected. When we updated the above config to state insecure=false for quay.io and let the nodes roll, it was able to get past the issue we were seeing, and quay.io was then part of the "whitelist". Still strange, considering the manifests being applied were targeting a digest, which should have had the ImageContentSourcePolicy trigger.
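For completeness, the variant that finally got us past the error looked roughly like this (trimmed again; the inline comment reflects only our observation, not a documented contract):

```yaml
apiVersion: config.openshift.io/v1
kind: Image
metadata:
  name: cluster
spec:
  allowedRegistriesForImport:
  - domainName: quay.io
    insecure: false   # changed from true; with true the whitelist entry never matched
  - domainName: someotherregistries.com
    insecure: true
  registrySources:
    insecureRegistries:
    - someotherregistries.com
```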

I don't know the codebase of this repo all that well, but I did go looking around for how it handles allowedRegistriesForImport, and nothing seemed to be "unique" about how quay.io was handled.

Expected Behavior

quay.io "working" as expected even with insecure=true listed in the image.config CRD for OpenShift as being on the allowed list.

This was tested on OCP 4.6 during the -> 4.7 upgrade (looking back now, the same issue was seen for a different ImageStream from the Samples operator, also involving quay.io/openshift-release-dev) and during the -> 4.8 upgrade.

image-registry is tied to Sirupsen (uppercase)

This means we can't pick up newer kube deps (which are tied to lowercase sirupsen).

It looks like the master branch of docker distribution has handled this (moved to lowercase), so we should probably do what we proposed a few weeks ago and move up to master.

The only alternative I see (other than getting left behind on really old kube deps) would be to do a carry patch for docker distribution to switch it to lowercase sirupsen and that seems terrible.

@legionus @dmage any other options you see?

Problems with non AWS-S3 storage backend

We are experiencing lots of problems when using a rados-gw-based S3 backend (Ceph Luminous) for the image-registry (3.11). We are not able to push any images; pushes are aborted with an HTTP 500 error.
When looking at the raw requests, we see problems with multipart uploads.
After searching the web for some hours, it seems that we are not the only ones having problems with rados-gw S3.
It seems to be related to the AWS Go SDK, which doesn't properly support backends other than AWS.
For example, see the following issue for docker-registry: distribution/distribution#2563

Any ideas / clues?

Thank you

tests in tools directory are broken

$ go test ./tools/junitreport/pkg/parser/gotest
--- FAIL: TestFlatParse (0.00s)
    --- FAIL: TestFlatParse/failure (0.00s)
        parser_flat_test.go:374: did not produce the correct test suites from file:
                &api.TestSuites{XMLName:xml.Name{Space:"", Local:""}, Suites:[]*api.TestSuite{(*api.TestSuite)(0xc42009caa0)}}
                &api.TestSuites{XMLName:xml.Name{Space:"", Local:""}, Suites:[]*api.TestSuite{(*api.TestSuite)(0xc42009d360)}}
    --- FAIL: TestFlatParse/skip (0.00s)
        parser_flat_test.go:374: did not produce the correct test suites from file:
                &api.TestSuites{XMLName:xml.Name{Space:"", Local:""}, Suites:[]*api.TestSuite{(*api.TestSuite)(0xc42009cb40)}}
                &api.TestSuites{XMLName:xml.Name{Space:"", Local:""}, Suites:[]*api.TestSuite{(*api.TestSuite)(0xc42009d400)}}
    --- FAIL: TestFlatParse/multiple_suites (0.00s)
        parser_flat_test.go:374: did not produce the correct test suites from file:
                &api.TestSuites{XMLName:xml.Name{Space:"", Local:""}, Suites:[]*api.TestSuite{(*api.TestSuite)(0xc42009cc80), (*api.TestSuite)(0xc42009cd20)}}
                &api.TestSuites{XMLName:xml.Name{Space:"", Local:""}, Suites:[]*api.TestSuite{(*api.TestSuite)(0xc42009d540), (*api.TestSuite)(0xc42009d5e0)}}
    --- FAIL: TestFlatParse/nested (0.00s)
        parser_flat_test.go:374: did not produce the correct test suites from file:
                &api.TestSuites{XMLName:xml.Name{Space:"", Local:""}, Suites:[]*api.TestSuite{(*api.TestSuite)(0xc42009cfa0), (*api.TestSuite)(0xc42009d040), (*api.TestSuite)(0xc42009d0e0)}}
                &api.TestSuites{XMLName:xml.Name{Space:"", Local:""}, Suites:[]*api.TestSuite{(*api.TestSuite)(0xc42009d860), (*api.TestSuite)(0xc42009d900), (*api.TestSuite)(0xc42009d9a0)}}
--- FAIL: TestNestedParse (0.00s)
    --- FAIL: TestNestedParse/failure (0.00s)
        parser_nested_test.go:535: did not produce the correct test suites from file:
                 
                object.Suites[0].TestCases[0].FailureOutput.Output:
                  a: "file_test.go:11: Error message\nfile_test.go:11: Longer\nerror\nmessage.\n
                  b: "file_test.go:11: Error message\nfile_test.go:11: Longer\nerror\nmessage."
    --- FAIL: TestNestedParse/skip (0.00s)
        parser_nested_test.go:535: did not produce the correct test suites from file:
                 
                object.Suites[0].TestCases[0].SkipMessage.Message:
                  a: "file_test.go:11: Skip message\n"
                  b: "file_test.go:11: Skip message"
    --- FAIL: TestNestedParse/multiple_suites (0.00s)
        parser_nested_test.go:535: did not produce the correct test suites from file:
                 
                object:
                  a: []*api.TestSuite{(*api.TestSuite)(0xc42009dea0), (*api.TestSuite)(0xc42009d
                  b: []*api.TestSuite{(*api.TestSuite)(0xc42011cb40), (*api.TestSuite)(0xc42011c
    --- FAIL: TestNestedParse/nested (0.00s)
        parser_nested_test.go:535: did not produce the correct test suites from file:
                 
                object:
                  a: []*api.TestSuite{(*api.TestSuite)(0xc42011c1e0), (*api.TestSuite)(0xc42011c
                  b: []*api.TestSuite{(*api.TestSuite)(0xc42011ce60), (*api.TestSuite)(0xc42011c
    --- FAIL: TestNestedParse/nested_tests_with_inline_output (0.00s)
        parser_nested_test.go:535: did not produce the correct test suites from file:
                 
                object.Suites[0]:
                  a: []*api.TestCase{(*api.TestCase)(0xc420109c00), (*api.TestCase)(0xc420109c80
                  b: []*api.TestCase{(*api.TestCase)(0xc42014a580), (*api.TestCase)(0xc42014a600
    --- FAIL: TestNestedParse/multi-suite_nested_output_with_coverage (0.00s)
        parser_nested_test.go:535: did not produce the correct test suites from file:
                 
                object.Suites[0]:
                  a: []*api.TestCase{(*api.TestCase)(0xc420109e00), (*api.TestCase)(0xc420109e80
                  b: []*api.TestCase{(*api.TestCase)(0xc42014a880), (*api.TestCase)(0xc42014a900
FAIL
FAIL    github.com/openshift/image-registry/tools/junitreport/pkg/parser/gotest 0.004s
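Judging from the diffs (e.g. TestNestedParse/skip, "Skip message\n" vs "Skip message"), the expected and actual values appear to differ only by a trailing newline. Whether the parser or the test fixtures should change is a separate question, but the mismatch itself is easy to illustrate; a minimal sketch (the normalize helper is hypothetical, not part of the junitreport code):

```go
package main

import (
	"fmt"
	"strings"
)

// normalize trims a single trailing newline, the only difference visible
// in the failing diffs above (e.g. "Skip message\n" vs "Skip message").
func normalize(s string) string {
	return strings.TrimSuffix(s, "\n")
}

func main() {
	a := "file_test.go:11: Skip message\n" // what the parser produced
	b := "file_test.go:11: Skip message"   // what the fixture expects
	fmt.Println(a == b)                    // false: raw comparison fails
	fmt.Println(normalize(a) == b)         // true: equal after trimming
}
```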

[Need help] How to get a username/password (that does not need refreshing) with admin permission for the OpenShift registry

OCP 4.3

I want to get a user/password or serviceaccount/token that has admin permission for the OpenShift internal registry.

I have a use case where a user with a password (that will not expire) can access the OCP image-registry.

Tried cases:
admin user with a token (generated by oc, which will expire)

oc whoami -t
9XIUr1LqQ7eIzR4DnFVaWundefinedwFAeHtsFnYQcB97u4AiH90
bash-4.4$ curl -k -s -u admin:9XIUr1LqQ7eIzR4DnFVaWundefinedwFAeHtsFnYQcB97u4AiH90 "https://image-registry.openshift-image-registry.svc:5000/openshift/token?service=token-service&scope=registry:catalog:*"
{"access_token":"9XIUr1LqQ7eIzR4DnFVaWundefinedwFAeHtsFnYQcB97u4AiH90","token":"9XIUr1LqQ7eIzR4DnFVaWundefinedwFAeHtsFnYQcB97u4AiH90"}
bash-4.4$ export TOKEN=9XIUr1LqQ7eIzR4DnFVaWundefinedwFAeHtsFnYQcB97u4AiH90
bash-4.4$ curl -k -s -H "Authorization: Bearer $TOKEN " "https://image-registry.openshift-image-registry.svc.cluster.local:5000/v2/_catalog"
{"repositories":["openshift/apicast-gateway","openshift/apicurito-ui","openshift/cli","openshift/cli-artifacts","openshift/dotnet","openshift/dotnet-runtime","openshift/eap-cd-openshift","openshift/fis-java-openshift","openshift/fis-karaf-openshift","openshift/fuse-apicurito-generator","openshift/fuse7-console","openshift/fuse7-eap-openshift","openshift/fuse7-java-openshift","openshift/fuse7-karaf-openshift","openshift/golang","openshift/httpd","openshift/installer","openshift/installer-artifacts","openshift/java","openshift/jboss-amq-62","openshift/jboss-amq-63","openshift/jboss-datagrid65-client-openshift","openshift/jboss-datagrid65-openshift","openshift/jboss-datagrid71-client-openshift","openshift/jboss-datagrid71-openshift","openshift/jboss-datagrid72-openshift","openshift/jboss-datagrid73-openshift","openshift/jboss-datavirt64-driver-openshift","openshift/jboss-datavirt64-openshift","openshift/jboss-decisionserver64-openshift","openshift/jboss-eap64-openshift","openshift/jboss-eap70-openshift","openshift/jboss-eap71-openshift","openshift/jboss-eap72-openshift","openshift/jboss-fuse70-console","openshift/jboss-fuse70-eap-openshift","openshift/jboss-fuse70-java-openshift","openshift/jboss-fuse70-karaf-openshift","openshift/jboss-processserver64-openshift","openshift/jboss-webserver30-tomcat7-openshift","openshift/jboss-webserver30-tomcat8-openshift","openshift/jboss-webserver31-tomcat7-openshift","openshift/jboss-webserver31-tomcat8-openshift","openshift/jboss-webserver50-tomcat9-openshift","openshift/jenkins","openshift/jenkins-agent-maven","openshift/jenkins-agent-nodejs","openshift/mariadb","openshift/modern-webapp","openshift/mongodb","openshift/must-gather","openshift/mysql","openshift/nginx","openshift/nodejs","openshift/openjdk-11-rhel7","openshift/perl","openshift/php","openshift/postgresql","openshift/python","openshift/redhat-openjdk18-openshift","openshift/redhat-sso70-openshift","openshift/redhat-sso71-openshift","openshift/redhat-sso72-openshift","
openshift/redhat-sso73-openshift","openshift/redis","openshift/rhdm74-decisioncentral-openshift","openshift/rhdm74-kieserver-openshift","openshift/rhdm74-optaweb-employee-rostering-openshift","openshift/rhpam74-businesscentral-monitoring-openshift","openshift/rhpam74-businesscentral-openshift","openshift/rhpam74-kieserver-openshift","openshift/rhpam74-smartrouter-openshift","openshift/ruby","openshift/tests"]}

This method has permission to list all images in the OCP registry, but the password is generated by

oc whoami -t, and this password will expire.

How can I get a username and password/token that will not expire and has admin permission to the OCP internal registry? Or can I use a service account token, with username serviceaccount, from the default docker secret created by OpenShift in namespaces?
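One commonly suggested approach is a dedicated service account bound to one of the built-in registry roles; its token secret does not expire the way oc whoami -t tokens do. A sketch (registry-client and my-project are placeholder names; registry-viewer grants read-only catalog/pull access, and registry-editor would be needed for push — the exact role to use depends on what "admin" means for your case):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: registry-client
  namespace: my-project
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: registry-client-viewer
  namespace: my-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: registry-viewer
subjects:
- kind: ServiceAccount
  name: registry-client
  namespace: my-project
```

The service account's token (from its generated token secret on OCP 4.3) can then be used as the password with any username when authenticating to the registry.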

Origin Registry Console Logout not working when using KeyCloak with Openshift

Having set up OpenShift with OpenID auth via Keycloak: when logging out of the Registry Console with the "Logout" link, I am logged out of OpenShift, but I am not logged out of Keycloak. Clicking "Login Again" and selecting the "Keycloak" login dialog automatically logs me in.

The web console of OpenShift does not have this issue (if I configure the logoutUrl correctly).

Version: 3.9

Steps To Reproduce

  • Deploy Openshift with Registry
  • Configure Openshift with Keycloak
  • Login to Openshift with Keycloak User
  • Browse to Registry Console
  • Logout of Registry Console
  • Follow "Login Again"

Current Result

  • You are automatically logged in using the old KeyCloak Session.

Expected Result

  • You have to reauthenticate (Cockpit redirects to logoutUrl after logging out locally)

See openshift/origin#20288

TLS handshake timeout

How to split image-registry

To create this repo, I did...

  1. clone openshift/origin
  2. delete everything except cmd/dockerregistry and pkg/dockerregistry
  3. clean git history
  4. rewrite imports to re-root this repo at openshift/image-registry
  5. create a glide.yaml to vendor dependencies
  6. create the upstream patch required to have the conflicting (for now) docker/distribution vendor
  7. make sure that go build github.com/openshift/image-registry/cmd/dockerregistry produces a binary

@bparees @dmage @smarterclayton @liggitt @mfojtik @derekwaynecarr Let's talk here about whether or not we should use this as a starting point and bring in what's needed from here.

I see next steps as:

  1. create a Makefile for build and unit test @stevekuznetsov
  2. move the image producing pieces here @stevekuznetsov
  3. convert existing openshift/origin integration tests for the registry to integration or e2e tests in this repo @bparees
  4. update openshift/origin CI for new master (after we fork 3.7) to use the latest available image-registry image.

Mirroring uses a wrong context

time="2018-02-19T07:45:44.666096171Z" level=error msg="Error committing to storage: openshift.auth.completed missing from context" go.version=go1.9.2 http.request.host="172.30.170.106:5000" http.request.id=f639c20e-8b09-4e22-8b9a-f78222cab75e http.request.method=GET http.request.remoteaddr="10.128.0.1:57412" http.request.uri="/v2/extended-test-prune-images-xkcfs-7jg44/origin-release/blobs/sha256:ae60289ae57f6a7da4a23756109b6078c209a2e280be031a70bf00e198b6b1a0" http.request.useragent=Go-http-client/2.0 instance.id=765a5503-241f-4a08-bce9-6868f18ee885 openshift.auth.user=extended-test-prune-images-xkcfs-7jg44-user openshift.auth.userid=e1003c5e-1548-11e8-b6a5-0e6fc89d94ce vars.digest="sha256:ae60289ae57f6a7da4a23756109b6078c209a2e280be031a70bf00e198b6b1a0" vars.name=extended-test-prune-images-xkcfs-7jg44/origin-release

DELETE operation is unsupported

Using version: image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:71145c39d67b47163c5b9ecace119ee286f30e06fd6645cb07d61dc8f27c96e2

$ oc version
Client Version: 4.10.0-202203170916.p0.g6de42bd.assembly.stream-6de42bd
Server Version: 4.7.44
Kubernetes Version: v1.20.11+e880017
$ oc exec -it -n openshift-image-registry image-registry-86466fdfd7-8t9bf -- cat /config.yml
version: 0.1
log:
  level: info
http:
  addr: :5000
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /registry
  delete:
    enabled: true
auth:
  openshift: {}
openshift:
  version: 1.0
  auth:
    realm: openshift
    # tokenrealm is a base URL to use for the token-granting registry endpoint.
    # If unspecified, the scheme and host for the token redirect are determined from the incoming request.
    # If specified, a scheme and host must be chosen that all registry clients can resolve and access:
    #
    # tokenrealm: https://example.com:5000
  audit:
    enabled: false
  metrics:
    enabled: false
    # secret is used to authenticate to metrics endpoint. It cannot be empty.
    # Attention! A weak secret can lead to the leakage of private data.
    #
    # secret: TopSecretLongToken
  requests:
    # GET and HEAD requests
    read:
      # maxrunning is a limit for the number of in-flight requests. A zero value means there is no limit.
      maxrunning: 0
      # maxinqueue sets the maximum number of requests that can be queued if the limit for the number of in-flight requests is reached.
      maxinqueue: 0
      # maxwaitinqueue is how long a request can wait in the queue. A zero value means it can wait forever.
      maxwaitinqueue: 0
    # PUT, PATCH, POST, DELETE requests and internal mirroring requests
    write:
      # See description of openshift.requests.read.
      maxrunning: 0
      maxinqueue: 0
      maxwaitinqueue: 0
  quota:
    enabled: false
    cachettl: 1m
  cache:
    blobrepositoryttl: 10m
  pullthrough:
    enabled: true
    mirror: true
  compatibility:
    acceptschema2: true

Hi guys,

I am trying to delete an image as documentation states for V2 registries:
$ curl -Ik -u <user:password> -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://<registry>/v2/<image>/manifests/<tag> | awk '$1 == "Docker-Content-Digest:" { print $2 }' | tr -d $'\r'
$ curl -k -u <user:password> -X DELETE https://<registry>/v2/<image>/manifests/<digest>

However, I am getting {"errors":[{"code":"UNSUPPORTED","message":"The operation is unsupported."}]}. Please note that storage.delete.enabled: true is correctly set (see the config above).

Notifications/Webhooks for repository notifications

Expected Behavior

The internal OpenShift registry should have an easily accessible way to add endpoints that will be notified of events, preferably dynamically, via an API similar to https://docs.docker.com/docker-hub/webhooks/.

Actual Behavior

I do not see any documentation for such an API within https://docs.openshift.com/container-platform/4.2/welcome/index.html, but I may be mistaken. Also, there might be a CRD approach to achieving this? It seems like there were some CRD approaches, but they didn't accomplish the same thing.

Additional Information

As per tektoncd/experimental#56, I have been looking into methods to get notifications/webhooks for the internal OpenShift registry. Older (v3.5) OpenShift documentation mentions ways to further configure the registry here; however, in later versions this is omitted. When digging into this configuration approach, I see that the docker/distribution repo/middleware is still being used, such that this should work. However, the configuration file is mounted inside the Dockerfile for the registry itself (https://github.com/openshift/image-registry/blob/master/Dockerfile#L14 and https://github.com/openshift/image-registry/blob/master/images/dockerregistry/config.yml), so there is likely only a very roundabout way to achieve this. Also, I believe this would require the registry to be restarted.
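For context, upstream docker/distribution already supports static notification endpoints in its configuration file; the roundabout approach above would amount to mounting a fragment along these lines into the registry config (the endpoint name and URL are placeholders):

```yaml
notifications:
  endpoints:
    - name: my-listener            # placeholder endpoint name
      url: https://listener.example.com/events
      timeout: 1s                  # per-delivery HTTP timeout
      threshold: 5                 # failures before backing off
      backoff: 10s                 # wait between retry attempts
```

This is static configuration, though, which is exactly the limitation described: changing it means rebuilding or remounting the config and restarting the registry, rather than registering webhooks dynamically.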

Track client operations using tokens

For each operation (pull, push), Docker obtains a new OAuth2 token. The current implementation of the token endpoint is basically a stub.

We can use this feature to tie the sequence of requests together:

  1. /openshift/token endpoint returns a signed tuple (openshift token, random id) as a registry token.
  2. On each operation, the registry checks the signature of the provided token.
  3. If the signature is valid, the registry extracts the random id and puts it into the log context as token.id (like http.request.id).

While Docker doesn't share tokens between operations, this token id should be helpful for debugging and analytics.
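A minimal sketch of steps 1–3, assuming an HMAC-based signature (the token layout, key, and function names here are illustrative, not the actual implementation):

```go
package main

import (
	"crypto/hmac"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"strings"
)

// secret would be shared by the token endpoint and the registry instances.
var secret = []byte("registry-signing-key")

// sign packs (openshift token, random id) into an opaque registry token:
// base64(token) "." base64(id) "." hex(HMAC-SHA256 over the payload).
func sign(openshiftToken, id string) string {
	payload := base64.RawURLEncoding.EncodeToString([]byte(openshiftToken)) +
		"." + base64.RawURLEncoding.EncodeToString([]byte(id))
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(payload))
	return payload + "." + hex.EncodeToString(mac.Sum(nil))
}

// verify checks the signature and returns the embedded random id, which
// the registry could attach to its log context as token.id.
func verify(token string) (id string, ok bool) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return "", false
	}
	payload := parts[0] + "." + parts[1]
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(payload))
	want := hex.EncodeToString(mac.Sum(nil))
	if !hmac.Equal([]byte(want), []byte(parts[2])) {
		return "", false
	}
	raw, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return "", false
	}
	return string(raw), true
}

// newID generates the random id minted by the /openshift/token endpoint.
func newID() string {
	b := make([]byte, 8)
	rand.Read(b)
	return hex.EncodeToString(b)
}

func main() {
	id := newID()
	token := sign("sha256~example-openshift-token", id)
	got, ok := verify(token)
	fmt.Println(ok, got == id)
}
```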

registry fails to start (4.2-2019-08-08-070705)

The registry fails to start after a nightly build on Azure:

openshift-image-registry                                cluster-image-registry-operator-c8d6678cd-v8zmp                   0/1     CrashLoopBackOff   10         33m
[mjudeiki@redhat installer]$ oc describe pod cluster-image-registry-operator-c8d6678cd-v8zmp 
Name:               cluster-image-registry-operator-c8d6678cd-v8zmp
Namespace:          openshift-image-registry
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               mjtest-2gslw-master-0/10.0.0.7
Start Time:         Thu, 08 Aug 2019 09:40:26 +0100
Labels:             name=cluster-image-registry-operator
                    pod-template-hash=c8d6678cd
Annotations:        openshift.io/scc: restricted
Status:             Running
IP:                 10.129.0.20
Controlled By:      ReplicaSet/cluster-image-registry-operator-c8d6678cd
Containers:
  cluster-image-registry-operator:
    Container ID:  cri-o://c07f5c2d6d41e4ad5e78b34f50c0081202d065e2d5ce3a5af1029ac3c518c632
    Image:         registry.svc.ci.openshift.org/origin/4.2-2019-08-08-070705@sha256:1f03caa36cec488f8b8a854c81577f7e89e822623396e83cbb9275c8090643db
    Image ID:      registry.svc.ci.openshift.org/origin/4.2-2019-08-08-070705@sha256:1f03caa36cec488f8b8a854c81577f7e89e822623396e83cbb9275c8090643db
    Port:          60000/TCP
    Host Port:     0/TCP
    Command:
      cluster-image-registry-operator
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 08 Aug 2019 10:11:11 +0100
      Finished:     Thu, 08 Aug 2019 10:11:30 +0100
    Ready:          False
    Restart Count:  10
    Requests:
      cpu:  10m
    Environment:
      RELEASE_VERSION:  4.2.0-0.okd-2019-08-08-070705
      WATCH_NAMESPACE:  openshift-image-registry (v1:metadata.namespace)
      OPERATOR_NAME:    cluster-image-registry-operator
      IMAGE:            registry.svc.ci.openshift.org/origin/4.2-2019-08-08-070705@sha256:e252d48cc5108337c1a1b4a90366898da3454b50e4407a620d8e7ba146758e15
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from cluster-image-registry-operator-token-dc5kt (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  cluster-image-registry-operator-token-dc5kt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-image-registry-operator-token-dc5kt
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 120s
                 node.kubernetes.io/unreachable:NoExecute for 120s
Events:
  Type     Reason     Age                    From                            Message
  ----     ------     ----                   ----                            -------
  Normal   Scheduled  34m                    default-scheduler               Successfully assigned openshift-image-registry/cluster-image-registry-operator-c8d6678cd-v8zmp to mjtest-2gslw-master-0
  Normal   Pulling    34m                    kubelet, mjtest-2gslw-master-0  Pulling image "registry.svc.ci.openshift.org/origin/4.2-2019-08-08-070705@sha256:1f03caa36cec488f8b8a854c81577f7e89e822623396e83cbb9275c8090643db"
  Normal   Pulled     33m                    kubelet, mjtest-2gslw-master-0  Successfully pulled image "registry.svc.ci.openshift.org/origin/4.2-2019-08-08-070705@sha256:1f03caa36cec488f8b8a854c81577f7e89e822623396e83cbb9275c8090643db"
  Normal   Created    30m (x5 over 33m)      kubelet, mjtest-2gslw-master-0  Created container cluster-image-registry-operator
  Normal   Started    30m (x5 over 33m)      kubelet, mjtest-2gslw-master-0  Started container cluster-image-registry-operator
  Normal   Pulled     28m (x5 over 33m)      kubelet, mjtest-2gslw-master-0  Container image "registry.svc.ci.openshift.org/origin/4.2-2019-08-08-070705@sha256:1f03caa36cec488f8b8a854c81577f7e89e822623396e83cbb9275c8090643db" already present on machine
  Warning  BackOff    3m54s (x110 over 33m)  kubelet, mjtest-2gslw-master-0  Back-off restarting failed container

Logs:

[mjudeiki@redhat installer]$ oc logs -f cluster-image-registry-operator-c8d6678cd-v8zmp 
I0808 09:11:11.318896       1 main.go:20] Cluster Image Registry Operator Version: 4.0.0-300-g4a57a26-dirty
I0808 09:11:11.318989       1 main.go:21] Go Version: go1.12.5
I0808 09:11:11.318996       1 main.go:22] Go OS/Arch: linux/amd64
I0808 09:11:11.322967       1 controller.go:473] waiting for informer caches to sync
I0808 09:11:12.925302       1 controller.go:482] started events processor
I0808 09:11:13.381040       1 azure.go:144] attempt to create azure storage account mjtest2gslwmtnn4 (resourceGroup="mjtest-2gslw-rg", location="eastus")...
I0808 09:11:30.889054       1 azure.go:174] azure storage account mjtest2gslwmtnn4 has been created
E0808 09:11:30.925871       1 runtime.go:69] Observed a panic: "slice bounds out of range" (runtime error: slice bounds out of range)
/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:54
/go/src/github.com/openshift/cluster-image-registry-operator/pkg/storage/azure/azure.go:471
/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:102
/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:143
/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:121
/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:159
/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:249
/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:256
/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152
/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153
/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/usr/local/go/src/runtime/asm_amd64.s:1337
panic: runtime error: slice bounds out of range [recovered]
	panic: runtime error: slice bounds out of range

goroutine 253 [running]:
github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x105
panic(0x1863020, 0x3052d20)
	/usr/local/go/src/runtime/panic.go:522 +0x1b5
github.com/openshift/cluster-image-registry-operator/pkg/storage/azure.(*driver).CreateStorage(0xc000a01ec0, 0xc0005d0840, 0xc0005c2101, 0x1e66f20)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/storage/azure/azure.go:471 +0x18da
github.com/openshift/cluster-image-registry-operator/pkg/resource.(*Generator).syncStorage(0xc0000be280, 0xc0005d0840, 0x41365a, 0xc000a41580)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:102 +0xc9
github.com/openshift/cluster-image-registry-operator/pkg/resource.(*Generator).Apply(0xc0000be280, 0xc0005d0840, 0x0, 0x0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/resource/generator.go:143 +0x4d
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).createOrUpdateResources(0xc0005c4180, 0xc0005d0840, 0x7, 0xc0000f6801)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:121 +0x1a9
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).sync(0xc0005c4180, 0x1e765b0, 0xc0000be320)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:159 +0x108e
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).eventProcessor.func1(0xc0005c4180, 0x17bbd00, 0x1dde3d0)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:249 +0x98
github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).eventProcessor(0xc0005c4180)
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:256 +0x45
github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc000a41560)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x54
github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a41560, 0x3b9aca00, 0x0, 0x1, 0xc0000886c0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc000a41560, 0x3b9aca00, 0xc0000886c0)
	/go/src/github.com/openshift/cluster-image-registry-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/openshift/cluster-image-registry-operator/pkg/operator.(*Controller).Run
	/go/src/github.com/openshift/cluster-image-registry-operator/pkg/operator/controller.go:480 +0xf5e
[mjudeiki@redhat installer]$ 

/kind bug

Update regions

Via openshift/origin#12454 (comment)

@pweil- +1 for cn-northwest.

Regions are pulled from the generated endpoints in aws-sdk-go. It looks like our version is 1.12.36. Latest is 1.16.26 which includes the cn-northwest region mentioned in the request.

@tj13 To set expectations: this requires a risk evaluation whenever we rebase onto new versions of the upstream repo, since features may be pulled in, removed, or changed, and everything must be tested. It's not a quick update and will be prioritized against the existing feature queue. Thanks for the request!

registry image reports "unknown" openshift version

time="2017-12-11T18:50:48.773847179Z" level=info msg="start registry" distribution_version="v2.6.2+unknown" go.version=go1.8.3 instance.id=0bfa960a-ac57-4138-abf9-f9445aa5eed7 kubernetes_version=v1.7.6+$Format:%h$ openshift_version=unknown

Use latest version of origin in tests

We want to test the master code against the latest version of origin. See #50.

At the same time, the code from the release-3.8 branch is supposed to be tested with origin's release-3.8 branch. And when we have a release-3.9 branch, it should use origin from the release-3.9 branch.

Request ID is inconsistent

In our access logs, the request ID doesn't match the request ID in the rest of the log.

time="2018-10-28T02:39:20.637639967Z" level=info msg="response completed" go.version=go1.10.3 http.request.contenttype=application/octet-stream http.request.host="docker-registry.default.svc:5000" http.request.id=729f671f-38a9-4407-ae25-704ab17ae046 http.request.method=PATCH http.request.remoteaddr="172.16.2.15:51286" http.request.uri="/v2/e2e-test-build-image-source-r8gd7/inputimage/blobs/uploads/c38c3834-65d9-4378-9b85-d0c6d841abb3?_state=tzyBmSeGmAgk-_E4AFF96Avab9m5f66x8uxJMw6gQOR7Ik5hbWUiOiJlMmUtdGVzdC1idWlsZC1pbWFnZS1zb3VyY2UtcjhnZDcvaW5wdXRpbWFnZSIsIlVVSUQiOiJjMzhjMzgzNC02NWQ5LTQzNzgtOWI4NS1kMGM2ZDg0MWFiYjMiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTgtMTAtMjhUMDI6Mzk6MDkuOTE0NzAzODk2WiJ9" http.request.useragent=Go-http-client/1.1 http.response.duration=10.539792322s http.response.status=202 http.response.written=0
time="2018-10-28T02:39:20.637721561Z" level=info msg=response go.version=go1.10.3 http.request.contenttype=application/octet-stream http.request.host="docker-registry.default.svc:5000" http.request.id=2810569a-6dc4-4d91-961a-8bad629ab9e5 http.request.method=PATCH http.request.remoteaddr="172.16.2.15:51286" http.request.uri="/v2/e2e-test-build-image-source-r8gd7/inputimage/blobs/uploads/c38c3834-65d9-4378-9b85-d0c6d841abb3?_state=tzyBmSeGmAgk-_E4AFF96Avab9m5f66x8uxJMw6gQOR7Ik5hbWUiOiJlMmUtdGVzdC1idWlsZC1pbWFnZS1zb3VyY2UtcjhnZDcvaW5wdXRpbWFnZSIsIlVVSUQiOiJjMzhjMzgzNC02NWQ5LTQzNzgtOWI4NS1kMGM2ZDg0MWFiYjMiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTgtMTAtMjhUMDI6Mzk6MDkuOTE0NzAzODk2WiJ9" http.request.useragent=Go-http-client/1.1 http.response.duration=10.539920957s http.response.status=202 http.response.written=0

/cc @legionus

Fix build scripts to accept go1.10

We are stuck with go1.8, and we need to modify our build scripts to accept newer versions.

[FATAL] Detected Go version: go version go1.10.1 linux/amd64.
[FATAL] Builds require Go version go1.8 or greater.

REGISTRY_OPENSHIFT_SERVER_ADDR env var does not appear to work

The registry does not appear to actually respect this value:

$ oc logs docker-registry-1-knwnf -n default | grep URL

time="2018-01-28T03:00:12.310853118Z" level=info msg="Using \"172.30.183.222:5000\" as Docker Registry URL" go.version=go1.9.2 instance.id=a82b7ca8-0b55-4567-92ef-2c1744e6074a 

$ oc get pod docker-registry-1-knwnf -o yaml | grep -A 2 ADDR 
--
    - name: REGISTRY_OPENSHIFT_SERVER_ADDR
      value: docker-registry.default.svc:5000

OCP 4.2 ImageRegistry fails with nfs-storage on s390x

I am trying to install OCP 4.2 on s390x with NFS storage for the image registry, but the image-registry pod fails with the error below:

oc describe po image-registry-5598d76584-4wfxq -n openshift-image-registry
Name:               image-registry-5598d76584-4wfxq
Namespace:          openshift-image-registry
Priority:           2000000000
PriorityClassName:  system-cluster-critical
Node:               infnod-0.ocp-s390x-test-8c646a.redhat.com/192.168.79.27
Start Time:         Fri, 24 Jan 2020 08:35:12 -0500
Labels:             docker-registry=default
                    pod-template-hash=5598d76584
Annotations:        imageregistry.operator.openshift.io/dependencies-checksum: sha256:fd243d455b973898e45266abe841357bb6ad51aae1516a62f8c41e1882b75431
                    openshift.io/scc: restricted
Status:             Pending
IP:
Controlled By:      ReplicaSet/image-registry-5598d76584
Containers:
  registry:
    Container ID:
    Image:          quay.io/multi-arch/s390x-openshift4-ose-docker-registry@sha256:c63d5d969507636e727d053bb4f41771102dfe8a4b01a554181c7d289a77e3e5
    Image ID:
    Port:           5000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   256Mi
    Liveness:   http-get https://:5000/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get https://:5000/healthz delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment:
      REGISTRY_STORAGE:                           filesystem
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY:  /registry
      REGISTRY_HTTP_ADDR:                         :5000
      REGISTRY_HTTP_NET:                          tcp
      REGISTRY_HTTP_SECRET:                       4ca47b5fbe085ff4b7a1d0f2ec515a5a9526ba077f56c4fbcde291ebdce41a6f2450232373143410a2a8568fe5962f9d2188f2b40033ebcde1ef1f97591e3d6a
      REGISTRY_LOG_LEVEL:                         info
      REGISTRY_OPENSHIFT_QUOTA_ENABLED:           true
      REGISTRY_STORAGE_CACHE_BLOBDESCRIPTOR:      inmemory
      REGISTRY_STORAGE_DELETE_ENABLED:            true
      REGISTRY_OPENSHIFT_METRICS_ENABLED:         true
      REGISTRY_OPENSHIFT_SERVER_ADDR:             image-registry.openshift-image-registry.svc:5000
      REGISTRY_HTTP_TLS_CERTIFICATE:              /etc/secrets/tls.crt
      REGISTRY_HTTP_TLS_KEY:                      /etc/secrets/tls.key
    Mounts:
      /etc/pki/ca-trust/source/anchors from registry-certificates (rw)
      /etc/secrets from registry-tls (rw)
      /registry from registry-storage (rw)
      /usr/share/pki/ca-trust-source from trusted-ca (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from registry-token-r7fjj (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  registry-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  image-registry-storage
    ReadOnly:   false
  registry-tls:
    Type:                Projected (a volume that contains injected data from multiple sources)
    SecretName:          image-registry-tls
    SecretOptionalName:  <nil>
  registry-certificates:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      image-registry-certificates
    Optional:  false
  trusted-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      trusted-ca
    Optional:  true
  registry-token-r7fjj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  registry-token-r7fjj
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age   From                                                Message
  ----     ------       ----  ----                                                -------
  Normal   Scheduled    45m   default-scheduler                                   Successfully assigned openshift-image-registry/image-registry-5598d76584-4wfxq to infnod-0.ocp-s390x-test-8c646a.redhat.com
  Warning  FailedMount  43m   kubelet, infnod-0.ocp-s390x-test-8c646a.redhat.com  MountVolume.SetUp failed for volume "nfs-storage" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/59083bdd-3eae-11ea-8a3e-525400da7041/volumes/kubernetes.io~nfs/nfs-storage --scope -- mount -t nfs 192.168.79.1:/home/nfsshare /var/lib/kubelet/pods/59083bdd-3eae-11ea-8a3e-525400da7041/volumes/kubernetes.io~nfs/nfs-storage
Output: Running scope as unit: run-ra160e91a12db44649e45f7405cf0124a.scope
mount.nfs: No route to host
  Warning  FailedMount  41m  kubelet, infnod-0.ocp-s390x-test-8c646a.redhat.com  MountVolume.SetUp failed for volume "nfs-storage" : mount failed: exit status 32

Please guide...

Fix integration test framework

With the latest version of the origin container, we get an authorization error in the integration tests.

OpenShift access denied: User "admin" cannot get imagestreams/layers.image.openshift.io in project "namespace"

git bisect said that openshift/origin@b81b982 is the first bad commit

commit b81b982406f5b9a7dbba2b3b0ff7edd702356408
Author: David Eads <[email protected]>
Date:   Wed Jan 3 15:41:10 2018 -0500

    use cluster role aggregation for admin, edit, and view

:040000 040000 5d24ce1704026fa9c922a5e7c3c0bebb07e15e06 533af19293f56511d24d88b0a75c14528211cee3 M      pkg
:040000 040000 c8909287a9a6f4c5a6cc0bfafd40a890c092429c 9a7cb8beae0222f9d4ce2db4f18bf6f87d7d83e3 M      test
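Until the test framework is updated for aggregated roles, a ClusterRole along these lines would grant the missing permission explicitly. This is a hypothetical sketch; the name is illustrative, and it would still need a ClusterRoleBinding to the test user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: image-layers-reader   # illustrative name
rules:
- apiGroups: ["image.openshift.io"]
  resources: ["imagestreams/layers"]
  verbs: ["get"]
```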

redirect parameter not working

When using the docker-registry on Azure, the registry uses SAS tokens and redirects clients to the blob URLs for download/upload. If you have VNet security enabled, this does not work (a 403 is returned) unless you remove that restriction.

The upstream registry provides a parameter for this (redirect); it is part of the documentation https://docs.openshift.com/container-platform/3.11/install_config/registry/extended_registry_configuration.html#docker-registry-configuration-reference-storage, however it is not working.

I need a way to disable redirects.
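For reference, the upstream distribution configuration puts this switch under storage.redirect; the report here is that the OpenShift registry does not honor it. A sketch of the upstream form (account values are placeholders):

```yaml
storage:
  azure:
    accountname: <account>
    accountkey: <key>
    container: registry
  redirect:
    disable: true
```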

invalid repository/tag

[INFO] hack/build-images.sh exited with code 0 after 00h 01m 01s
++ cat /data/src/github.com/openshift/aos-cd-jobs/ORIGIN_COMMIT
+ docker tag openshift/origin-docker-registry:latest 'openshift/origin-docker-registry:5eb8353
5eb8353'
Error parsing reference: "openshift/origin-docker-registry:5eb8353\n5eb8353" is not a valid repository/tag
++ export status=FAILURE
++ status=FAILURE

https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/openshift_image-registry/5/test_pull_request_image_registry_e2e/8/

[Need help] Does the OpenShift internal registry have a token service API?

With other image registry solutions, there is a token service API like:

curl -k -s -u admin:password "https://xxxx/service/token?service=token-service&scope=repository:project/image-name:pull,push" 

We can use it to get an access token with different permissions.

Then we can use it to access the registry API, for example to list all image names:

curl -k -s -H "Authorization: Bearer $token" "https://xxxxx:8500/v2/_catalog" | python -m json.tool

After checking the docs of the OpenShift internal image registry, it seems there is no token service like that, and we can use a kube-system service account token or a user token to access the registry API without permission limitation.

Could you help clarify whether there is a token service for the OpenShift image registry?

Thanks very much.

Data race in pullthrough_test

WARNING: DATA RACE
Read at 0x00c420498268 by goroutine 99:
  github.com/openshift/image-registry/test/integration/pullthrough.TestPullThroughInsecure.func1()
      /go/src/github.com/openshift/image-registry/_output/local/go/src/github.com/openshift/image-registry/test/integration/pullthrough/pullthrough_test.go:143 +0x9f7
  net/http.HandlerFunc.ServeHTTP()
      /usr/local/go/src/net/http/server.go:1918 +0x51
  net/http.serverHandler.ServeHTTP()
      /usr/local/go/src/net/http/server.go:2619 +0xbc
  net/http.(*conn).serve()
      /usr/local/go/src/net/http/server.go:1801 +0x83b

Previous write at 0x00c420498268 by goroutine 82:
  github.com/openshift/image-registry/test/integration/pullthrough.TestPullThroughInsecure.func1()
      /go/src/github.com/openshift/image-registry/_output/local/go/src/github.com/openshift/image-registry/test/integration/pullthrough/pullthrough_test.go:143 +0xa16
  net/http.HandlerFunc.ServeHTTP()
      /usr/local/go/src/net/http/server.go:1918 +0x51
  net/http.serverHandler.ServeHTTP()
      /usr/local/go/src/net/http/server.go:2619 +0xbc
  net/http.(*conn).serve()
      /usr/local/go/src/net/http/server.go:1801 +0x83b

Goroutine 99 (running) created at:
  net/http.(*Server).Serve()
      /usr/local/go/src/net/http/server.go:2720 +0x37c
  net/http.Serve()
      /usr/local/go/src/net/http/server.go:2323 +0xe2
  github.com/openshift/image-registry/pkg/testframework.NewHTTPServer.func1()
      /go/src/github.com/openshift/image-registry/_output/local/go/src/github.com/openshift/image-registry/pkg/testframework/httptest.go:47 +0x5d

Goroutine 82 (running) created at:
  net/http.(*Server).Serve()
      /usr/local/go/src/net/http/server.go:2720 +0x37c
  net/http.Serve()
      /usr/local/go/src/net/http/server.go:2323 +0xe2
  github.com/openshift/image-registry/pkg/testframework.NewHTTPServer.func1()
      /go/src/github.com/openshift/image-registry/_output/local/go/src/github.com/openshift/image-registry/pkg/testframework/httptest.go:47 +0x5d

openshift create ca server certificate for docker registry

How do I create the ca.crt file in order to log in to the OpenShift Docker registry? The documentation has the command to create the ca.crt file on the Docker registry, and this ca.crt is to be copied to the Docker client machine. Here is the command:

oc adm ca create-server-cert \
    --signer-cert=/etc/origin/master/ca.crt \
    --signer-key=/etc/origin/master/ca.key \
    --signer-serial=/etc/origin/master/ca.serial.txt \
    --hostnames='docker-registry-default.company.cl,openshift.lab.company.corp,<ip_registry_host>,<ip_oc_cluster>' \
    --cert=/etc/secrets/registry.crt \
    --key=/etc/secrets/registry.key

I have the registry.key and registry.crt files from the default namespace > Resources > Secrets > registry-certificates, copied to /etc/secrets on the Docker client machine.

How can I log in to the OpenShift Docker registry host or container?
How do I get the credentials to log in to the OpenShift Docker registry?
Where should the command "oc adm ca create-server-cert" be executed, on the Docker server or the client machine?
Could someone please explain the values to pass for the options signer-cert, signer-key, and signer-serial, and which hostnames and IP addresses should be added?

CRC Image Registry docker login issue on Windows - 127.0.0.1:80 connection refused

When trying to docker login to the CRC internal image registry default-route-openshift-image-registry.apps-crc.testing, it throws an error.
It only happens on Windows; it works fine on Linux.

Version
Client Version: 4.7.5
Kubernetes Version: v1.20.0+bafe72f

Steps To Reproduce
crc start (Windows)
docker login -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.apps-crc.testing
Current Result
Get http://default-route-openshift-image-registry.apps-crc.testing/v2/: dial tcp 127.0.0.1:80: connect: connection refused

Expected Result
Login Succeeded!

Additional Information
I have tried the following -

oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"defaultRoute":true}}'
oc policy add-role-to-user registry-editor kubeadmin
oc policy add-role-to-user registry-viewer kubeadmin
oc get clusteroperator image-registry
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
image-registry   4.7.5     True        False         False      15d

oc registry info --public
default-route-openshift-image-registry.apps-crc.testing

oc get pods -n openshift-image-registry
NAME                                              READY   STATUS    RESTARTS   AGE
cluster-image-registry-operator-66b48f9bd-ln99j   1/1     Running   0          33d
image-registry-77979c4dc7-nszrj                   1/1     Running   0          15d
node-ca-q6gc8                                     1/1     Running   0          34d

oc edit configs.imageregistry.operator.openshift.io
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  creationTimestamp: "2021-04-08T06:11:05Z"
  finalizers:
  - imageregistry.operator.openshift.io/finalizer
  generation: 5
  name: cluster
  resourceVersion: "28869"
  selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster
  uid: 4644048c-fdd2-4c9d-a86a-d398326ee3e3
spec:
  defaultRoute: true
  httpSecret: a2361d60ce4c3abf3d39b2810532c3ad6252772f0c1894f68013bbb82904475f8ecd5425ab8df66fad0a6aee616a4c1fc95bb463440b07fced01908543e804d6
  logLevel: Normal
  managementState: Managed
  observedConfig: null
  operatorLogLevel: Normal
  proxy: {}
  replicas: 1
  requests:
    read:
      maxWaitInQueue: 0s
    write:
      maxWaitInQueue: 0s
  rolloutStrategy: RollingUpdate
  storage:
    managementState: Managed
    pvc:
      claim: crc-image-registry-storage
  unsupportedConfigOverrides: null
status:
  conditions:
  - lastTransitionTime: "2021-04-09T06:37:34Z"
    reason: PVC Exists
    status: "True"
    type: StorageExists
  - lastTransitionTime: "2021-04-26T15:55:33Z"
    message: The registry is ready
    reason: Ready
    status: "True"
    type: Available
  - lastTransitionTime: "2021-04-26T15:55:56Z"
    message: The registry is ready
    reason: Ready
    status: "False"
    type: Progressing
  - lastTransitionTime: "2021-04-26T15:55:33Z"
    status: "False"
    type: Degraded
  - lastTransitionTime: "2021-04-08T06:11:06Z"
    status: "False"
    type: Removed
  - lastTransitionTime: "2021-04-08T06:11:06Z"
    reason: AsExpected
    status: "False"
    type: ImageRegistryCertificatesControllerDegraded
  - lastTransitionTime: "2021-04-08T06:11:07Z"
    reason: AsExpected
    status: "False"
    type: ImageConfigControllerDegraded
  - lastTransitionTime: "2021-04-08T06:11:07Z"
    reason: AsExpected
    status: "False"
    type: NodeCADaemonControllerDegraded
  generations:
  - group: apps
    hash: ""
    lastGeneration: 5
    name: image-registry
    namespace: openshift-image-registry
    resource: deployments
  - group: apps
    hash: ""
    lastGeneration: 0
    name: node-ca
    namespace: openshift-image-registry
    resource: daemonsets
  observedGeneration: 5
  readyReplicas: 0
  storage:
    managementState: Managed
    pvc:
      claim: crc-image-registry-storage
  storageManaged: true

Rework registry logging

We need to change the approach to logging so that a single error message contains all the information (as much as possible) needed to investigate a problem. Then we can change the logging level to error and remove some of the debug messages we have now.

After discussing this with Oleg, we came to the conclusion that we can add prefixes to the error as it passes through our code.

Disclaimer: we have to review all the errors (ideally they should be reproduced) and check whether each stage provides enough information.

After we solve this, we can change the loglevel as suggested in #75 .

error getting secrets: <nil>

secrets, err := secretsGetter()
if err != nil {
	dcontext.GetLogger(ctx).Errorf("error getting secrets: %v", err)
}

To track the source of errors, we have our own error type. Unfortunately, there is a side effect: when we assign rerrors.Error(nil) to an error interface, we get a typed nil, so the condition err != nil becomes true even though the underlying value is nil.

To avoid this, we should use an interface, but we still need a compile-time check that we wrap errors in pkg/imagestream and other hard-to-debug places; i.e., this interface should have more than just the Error() string method. I'd say it also needs to implement golang.org/x/xerrors.Wrapper and maybe some other private/public methods.

Openshift 4.3 - Getting timeout trying to log into image-registry using podman

Getting timeout trying to log into the image-registry using podman

[root@config ocp4]# oc login -u kubeadmin -p ***************
Login successful.

You have access to 56 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "test".

[root@config ocp4]# podman login -u $(oc whoami) -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000 --tls-verify=false

Error: error authenticating creds for "image-registry.openshift-image-registry.svc:5000": pinging docker registry returned: Get http://image-registry.openshift-image-registry.svc:5000/v2/: dial tcp 92.242.140.21:5000: i/o timeout

Any suggestions on how to get this to work will be greatly appreciated.

Sync labels with openshift/origin

The repository has no labels for priorities, statuses, or kinds, so the bot commands do not work. I cannot create or synchronize them because I do not have the permissions for that.

Get blob may return 200/206 when blob is not present on storage

10.128.0.1 - - [19/Feb/2018:07:45:44 +0000] "GET /v2/extended-test-prune-images-xkcfs-7jg44/origin-release/blobs/sha256:ae60289ae57f6a7da4a23756109b6078c209a2e280be031a70bf00e198b6b1a0 HTTP/2.0" 206 2 "" "Go-http-client/2.0"

10.128.0.1 - - [19/Feb/2018:07:45:51 +0000] "DELETE /v2/extended-test-prune-images-xkcfs-7jg44/origin-release/blobs/sha256:ae60289ae57f6a7da4a23756109b6078c209a2e280be031a70bf00e198b6b1a0 HTTP/2.0" 202 0 "" "oc/v1.9.1+a0ce1bc657 (linux/amd64) kubernetes/a0ce1bc"

10.128.0.1 - - [19/Feb/2018:07:45:53 +0000] "DELETE /admin/blobs/sha256:ae60289ae57f6a7da4a23756109b6078c209a2e280be031a70bf00e198b6b1a0 HTTP/2.0" 204 0 "" "oc/v1.9.1+a0ce1bc657 (linux/amd64) kubernetes/a0ce1bc"

10.128.0.1 - - [19/Feb/2018:07:47:13 +0000] "GET /v2/extended-test-prune-images-xkcfs-rqqql/origin-release/blobs/sha256:ae60289ae57f6a7da4a23756109b6078c209a2e280be031a70bf00e198b6b1a0 HTTP/2.0" 206 0 "" "Go-http-client/2.0"

Failed to create image registry with Swift storage on s390x

I'm trying to deploy OCP 4.5 on the s390x OpenStack platform, but the image registry with Swift storage cannot be created; it fails with an x509 certificate error:

$ oc logs image-registry-57446c46b7-2zdpf -n openshift-image-registry
time="2020-08-04T06:37:29.698512303Z" level=info msg="start registry" distribution_version=v2.6.0+unknown go.version=go1.13.4 openshift_version=4.5.0-202006291853.p0-861af92
time="2020-08-04T06:37:29.700502106Z" level=info msg="caching project quota objects with TTL 1m0s" go.version=go1.13.4
panic: Swift authentication failed: Post https://xx.xx.xx.xx:5000/v3/auth/tokens: x509: certificate signed by unknown authority

goroutine 1 [running]:
github.com/docker/distribution/registry/handlers.NewApp(0x1aac9e0, 0xc0000440e0, 0xc000614e00, 0xc0005f2f60)
	/go/src/github.com/openshift/image-registry/vendor/github.com/docker/distribution/registry/handlers/app.go:127 +0x2ef6
github.com/openshift/image-registry/pkg/dockerregistry/server/supermiddleware.NewApp(0x1aac9e0, 0xc0000440e0, 0xc000614e00, 0x1ab5c60, 0xc00067ccf0, 0x1abe9c0)
	/go/src/github.com/openshift/image-registry/pkg/dockerregistry/server/supermiddleware/app.go:96 +0x86
github.com/openshift/image-registry/pkg/dockerregistry/server.NewApp(0x1aac9e0, 0xc0000440e0, 0x1a78020, 0xc0006821c8, 0xc000614e00, 0xc000362320, 0x0, 0x0, 0xc0001202d0, 0x1658760)
	/go/src/github.com/openshift/image-registry/pkg/dockerregistry/server/app.go:138 +0x2a2
github.com/openshift/image-registry/pkg/cmd/dockerregistry.NewServer(0x1aac9e0, 0xc0000440e0, 0xc000614e00, 0xc000362320, 0x0, 0x0, 0x1ae7c80)
	/go/src/github.com/openshift/image-registry/pkg/cmd/dockerregistry/dockerregistry.go:210 +0x1bc
github.com/openshift/image-registry/pkg/cmd/dockerregistry.Execute(0x1a67ea0, 0xc000130020)
	/go/src/github.com/openshift/image-registry/pkg/cmd/dockerregistry/dockerregistry.go:164 +0x9da
main.main()
	/go/src/github.com/openshift/image-registry/cmd/dockerregistry/main.go:93 +0x454

clouds.yaml:

clouds:
  openstack:
    auth:
      auth_url: https://xxxx:5000/v3
      username: root
      project_name: ibm-default
      project_id: e4363f795191482fb7478a6fd1852da0
      domain_name: Default
      password: dfltpass
      user_domain_name: Default
    region_name: "RegionOne"
    cacert: /etc/pki/tls/certs/icic.crt
    interface: "public"
    identity_api_version: 3
$ oc edit configs.imageregistry.operator.openshift.io
  storage:
    swift:
      authURL: https://xx.xx.xx.xx:5000/v3
      authVersion: "3"
      container: ocpcic-d5lj8-image-registry-cijqbvlwdundbvrdkwniwqjytqsgjcqxks
      domain: Default
      regionName: RegionOne
      tenant: ibm-default
      tenantID: e4363f795191482fb7478a6fd1852da0
  storageManaged: true

Document how to work around unknown AWS regions in S3 driver

Much like k8s has done, we (well, the upstream S3 filesystem driver) should trust the region list from AWS instead of hardcoding it:
kubernetes/kubernetes#38880

that way when new regions are introduced, we automatically support them.

@legionus @dmage any reason this is a bad idea or hard to do?

Edit: see comment #104 (comment) which summarizes why we can't easily do this and describes a workaround we should document for users that allows them to bypass this check entirely.
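For reference, the workaround relies on the S3 driver only validating the region when no custom endpoint is set: supplying regionendpoint bypasses the hardcoded list entirely. Roughly (bucket and endpoint values are illustrative):

```yaml
storage:
  s3:
    region: cn-northwest-1
    regionendpoint: https://s3.cn-northwest-1.amazonaws.com.cn
    bucket: my-registry-bucket   # illustrative
```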
