
cluster-authentication-operator's Introduction

cluster-authentication-operator

The authentication operator is an OpenShift ClusterOperator.
It installs and maintains the Authentication custom resource in a cluster. The operator's status can be viewed with:

$ oc get clusteroperator authentication -o yaml

The Custom Resource Definition authentications.operator.openshift.io
can be viewed in a cluster with:

$ oc get crd authentications.operator.openshift.io -o yaml

Many OpenShift ClusterOperators share common build, test, deployment, and update methods.
For more information about how to build, deploy, test, update, and develop OpenShift ClusterOperators, see
OpenShift ClusterOperator and Operand Developer Document

To deploy OpenShift with your own test cluster-authentication-operator image, see:
Testing a ClusterOperator/Operand image in a cluster

Add a basic IdP to test your stuff

The most common identity provider for demoing and testing is the HTPasswd IdP.

To set it up, take the following steps:

  1. Create a new htpasswd file
$ htpasswd -bBc /tmp/htpasswd testuser testpasswd
  2. (optional) Add more users
$ htpasswd -bB /tmp/htpasswd testuser2 differentpassword
  3. Create a secret from that htpasswd file in the openshift-config namespace
$ oc create secret generic myhtpasswdidp-secret -n openshift-config --from-file=/tmp/htpasswd
  4. Configure the OAuth server to use the HTPasswd IdP from the secret by editing the spec of the cluster-wide OAuth/cluster object (for example with oc edit oauth cluster) so that it looks like the one in this example:
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpassidp
    type: HTPasswd
    htpasswd:
      fileData:
        name: myhtpasswdidp-secret
  5. The operator will now restart the OAuth server deployment and mount the new config
  6. When the operator is available again (oc get clusteroperator authentication), you should be able to log in:
$ oc login -u testuser -p testpasswd

cluster-authentication-operator's People

Contributors

csrwng, deads2k, derekwaynecarr, dgrisonnet, dmage, emilym1, enj, ibihim, ingvagabund, jaormx, liouk, marun, mfojtik, openshift-bot, openshift-ci[bot], openshift-merge-bot[bot], openshift-merge-robot, p0lyn0mial, ravisantoshgudimetla, rhamilto, s-urbaniak, sallyom, sanchezl, sg00dwin, stlaz, sttts, tkashem, vareti, vrutkovs, wking


cluster-authentication-operator's Issues

"failed to find PEM block" after successfully configuring ingress controller with custom certs on OCP 4.6.4

I have a custom wildcard cert, which I managed to configure IngressController with, after much struggle.

I chained the certificate, the intermediate certificate, the root certificate, and the key into the PEM; only then did the ingress controller come up, and I finally saw that my custom certificate was indeed being served (for example, when accessing the console route).

However, the oauth-openshift pod now complains:

Copying system trust bundle
I0203 14:17:38.623709 1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "serving-cert::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt::/var/config/system/secrets/v4-0-config-system-serving-cert/tls.key"
F0203 14:17:38.624117 1 cmd.go:49] failed to load SNI cert and key: tls: failed to find PEM block with type ending in "PRIVATE KEY" in key input after skipping PEM blocks of the following types: [CERTIFICATE CERTIFICATE CERTIFICATE]

I take it that the ingress controller operator, upon creation of my custom cert secret, also updates the v4-0-config-system-serving-cert secret that openshift-oauth uses?

How should the pem be constructed so that both the ingress controller and the oauth pod are at peace?
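For reference, a small standalone Go sketch (not the operator's code, and the file path argument is just an example) that lists the PEM block types in a file; the tls.key input the error refers to should contain a PRIVATE KEY block, while tls.crt should contain only CERTIFICATE blocks:

package main

import (
    "encoding/pem"
    "fmt"
    "os"
)

// Prints the PEM block types found in the file given as the first argument,
// e.g. three CERTIFICATE blocks, or an RSA PRIVATE KEY block.
func main() {
    data, err := os.ReadFile(os.Args[1])
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for {
        var block *pem.Block
        block, data = pem.Decode(data)
        if block == nil {
            break
        }
        fmt.Println(block.Type)
    }
}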

SAN validation fails on intermediate/issuing cert

For bug 2052467, in merge request #545, validation that every certificate has a SAN was added:

for _, cert := range certs {
    if !crypto.CertHasSAN(cert) {
        errs = append(errs, newErrNoSAN(cert))
    }
}

This check fails if one provides additional intermediate/issuing certificates, leaving the cluster in a not-upgradeable/degraded state. This is not an uncommon configuration when clients only receive the root CA (the one that signed the intermediate/issuing certificate). If the server additionally sends the intermediate/issuing certificate, the client can verify the trust chain. Trust chain:

  1. "Route"/sender/server Cert (sent by the server)
  2. Intermediate/Issuing CA (sent by the server)
  3. Root CA (in clients trust-store)

Providing additional certs is covered in RFC-8446 Section 4.4.2.

If the corresponding certificate type extension
("server_certificate_type" or "client_certificate_type") was not
negotiated in EncryptedExtensions, or the X.509 certificate type was
negotiated, then each CertificateEntry contains a DER-encoded X.509
certificate. The sender's certificate MUST come in the first
CertificateEntry in the list. Each following certificate SHOULD
directly certify the one immediately preceding it. Because
certificate validation requires that trust anchors be distributed
independently, a certificate that specifies a trust anchor MAY be
omitted from the chain, provided that supported peers are known to
possess any omitted certificates.

IMHO the SAN validation should only cover the FIRST certificate and not every one, as the first MUST be the sender's certificate according to the RFC.
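A minimal sketch of the suggested change, reusing the helper names from the snippet above (crypto.CertHasSAN, newErrNoSAN) and assuming the chain is already parsed into crypto/x509 certificates:

// Per RFC 8446 Section 4.4.2, the sender's certificate is the first entry
// in the chain, so only that one needs a SAN; intermediates are exempt.
func validateLeafSAN(certs []*x509.Certificate) []error {
    var errs []error
    if len(certs) > 0 && !crypto.CertHasSAN(certs[0]) {
        errs = append(errs, newErrNoSAN(certs[0]))
    }
    return errs
}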

Configuring OpenID Connect IDP is failing

Steps to Reproduce

  • Install OpenShift cluster 4.2
  • Create realm toolchain-dev and client toolchain for our own running Keycloak.
  • Create the OAuth, Secret, User, Identity, and UserIdentityMapping objects from the following YAML manifests.
  • When we try to log in, we can see "login with rhd", but it fails with an authentication error.

OpenShift versions tried

  • 4.2.0-0.okd-2019-07-22-044315
  • 4.1.4

Pod logs

I0723 15:16:22.650706       1 log.go:172] http: TLS handshake error from 10.128.2.16:51866: EOF
E0723 15:16:44.494035       1 errorpage.go:26] AuthenticationError: Code not valid

YAML manifests

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: rhd
    mappingMethod: lookup
    type: OpenID
    openID:
      clientID: toolchain
      clientSecret:
        name: rhd-idp-secret
      claims:
        preferredUsername:
        - preferred_username
        name:
        - name
        email:
        - email
      issuer: https://sso.prod-preview.openshift.io/auth/realms/toolchain-dev
---
apiVersion: v1
kind: Secret
metadata:
  name: rhd-idp-secret
  namespace: openshift-config
type: Opaque
data:
  clientSecret: <replace_with_secret>
---
apiVersion: user.openshift.io/v1
groups: null
identities: null
kind: User
metadata:
  name: ${USERNAME}
---
apiVersion: user.openshift.io/v1
kind: Identity
metadata:
  name: rhd:${USER_ID_FROM_SUB_CLAIM}
providerName: rhd
providerUserName: ${USER_ID_FROM_SUB_CLAIM}
---
apiVersion: user.openshift.io/v1
kind: UserIdentityMapping
metadata:
  name: rhd:${USER_ID_FROM_SUB_CLAIM}
user:
  name: ${USERNAME}
identity:
  name: rhd:${USER_ID_FROM_SUB_CLAIM}

Note:

  1. This setup was working previously (not sure in which OpenShift version).
  2. We need mappingMethod: lookup in the OAuth configuration.

cc @xcoulon @alexeykazakov

Authentication degraded v4.1 : failed to GET route: x509: certificate signed by unknown authority

I can't bootstrap a new cluster; I tried 3 times with the same results.
The certificate used by the authentication pods is not valid.

*** openshift-install version
./openshift-install v4.1.0-201905212232-dirty
built from commit 71d8978039726046929729ad15302973e3da18ce
release image quay.io/openshift-release-dev/ocp-release@sha256:b8307ac0f3ec4ac86c3f3b52846425205022da52c16f56ec31cbe428501001d6

*** oc get nodes
oc get nodes
NAME STATUS ROLES AGE VERSION
okd-master-0 Ready master 3h6m v1.13.4+cb455d664
okd-master-1 Ready master 3h6m v1.13.4+cb455d664
okd-master-2 Ready master 3h6m v1.13.4+cb455d664
okd-worker-1 Ready worker 3h6m v1.13.4+cb455d664
okd-worker-2 Ready worker 3h6m v1.13.4+cb455d664
okd-worker-3 Ready worker 3h6m v1.13.4+cb455d664

*** oc get co
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
authentication Unknown Unknown True 9m52s
cloud-credential 4.1.0 True False False 3h6m
cluster-autoscaler 4.1.0 True False False 3h6m
console 4.1.0 True False False 177m
dns 4.1.0 True False False 3h5m
image-registry False False True 3h
ingress 4.1.0 True False False 179m
kube-apiserver 4.1.0 True False False 3h4m
kube-controller-manager 4.1.0 True False False 3h3m
kube-scheduler 4.1.0 True False False 3h3m
machine-api 4.1.0 True False False 3h6m
machine-config 4.1.0 True False False 3h5m
marketplace 4.1.0 True False False 3h
monitoring 4.1.0 True False False 177m
network 4.1.0 True False False 3h5m
node-tuning 4.1.0 True False False 3h2m
openshift-apiserver 4.1.0 True False False 3h2m
openshift-controller-manager 4.1.0 True False False 3h5m
openshift-samples 4.1.0 True False False 174m
operator-lifecycle-manager 4.1.0 True False False 3h4m
operator-lifecycle-manager-catalog 4.1.0 True False False 3h4m
service-ca 4.1.0 True False False 3h5m
service-catalog-apiserver 4.1.0 True False False 3h2m
service-catalog-controller-manager 4.1.0 True False False 3h2m
storage 4.1.0 True False False 3h

*** oc get co authentication -oyaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: "2019-06-19T13:04:33Z"
  generation: 1
  name: authentication
  resourceVersion: "54550"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/authentication
  uid: c890f7f2-9292-11e9-843e-3ad4e0cc66cb
spec: {}
status:
  conditions:
  - lastTransitionTime: "2019-06-19T13:04:33Z"
    message: 'Degraded: error checking current version: unable to check route health:
      failed to GET route: x509: certificate signed by unknown authority'
    reason: DegradedOperatorSyncLoopError
    status: "True"
    type: Degraded
  - lastTransitionTime: "2019-06-19T13:04:33Z"
    reason: NoData
    status: Unknown
    type: Progressing
  - lastTransitionTime: "2019-06-19T13:04:33Z"
    reason: NoData
    status: Unknown
    type: Available
  - lastTransitionTime: "2019-06-19T13:04:33Z"
    reason: NoData
    status: Unknown
    type: Upgradeable
  extension: null
  relatedObjects:
  - group: operator.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: infrastructures
  - group: config.openshift.io
    name: cluster
    resource: oauths
  - group: ""
    name: openshift-config
    resource: namespaces
  - group: ""
    name: openshift-config-managed
    resource: namespaces
  - group: ""
    name: openshift-authentication
    resource: namespaces
  - group: ""
    name: authentication-operator
    resource: namespaces

*** oc get pods --all-namespaces | grep -i auth
openshift-authentication-operator authentication-operator-69d5d8bf84-6pdvq 1/1 Running 0 11m
openshift-authentication oauth-openshift-6889dd56f6-6s6rx 1/1 Running 0 10m
openshift-authentication oauth-openshift-6889dd56f6-kgnmm 1/1 Running 0 10m

*** oc logs -n openshift-authentication-operator authentication-operator-69d5d8bf84-6pdvq

W0619 13:11:14.297959 1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 14653 (55932)
W0619 13:11:14.298005 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 54387 (55421)
W0619 13:11:14.298042 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 54520 (55421)
W0619 13:11:14.310693 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 54487 (55422)
W0619 13:11:14.314416 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 54449 (56288)
W0619 13:11:14.314452 1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 14655 (55932)
W0619 13:11:14.324679 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 54531 (56288)
W0619 13:11:14.341407 1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 54575 (56420)
W0619 13:11:14.357583 1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 14655 (56420)
W0619 13:11:14.440928 1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 14655 (56422)
W0619 13:11:14.459253 1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.OAuth ended with: too old resource version: 14655 (56422)
E0619 13:11:18.097725 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: x509: certificate signed by unknown authority
E0619 13:11:20.501783 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: x509: certificate signed by unknown authority
E0619 13:15:55.724989 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: x509: certificate signed by unknown authority

things david doesn't like

  1. https://github.com/openshift/cluster-authentication-operator/blob/master/pkg/operator2/configmap.go#L55-L59. route.spec isn't the same as route.status.
    Your route hasn't been accepted yet, so the kube-apiserver could end up forwarding to an endpoint that isn't you! (See the sketch after this list.)
  2. This configmap is an output, not an input. Please make it a separate control loop to keep your main loop tight with logical dependencies. Also, you don't want a rollout failure to disrupt keeping this outbound information correct.
    // make sure API server sees our metadata as soon as we've got a route with a host
    metadata, _, err := resourceapply.ApplyConfigMap(c.configMaps, c.recorder, getMetadataConfigMap(route))
    if err != nil {
        return fmt.Errorf("failure applying configMap for the .well-known endpoint: %v", err)
    }
    resourceVersions = append(resourceVersions, metadata.GetResourceVersion())
  3. Use a different FooDegraded for each control loop and allow the status union to combine them. It will make each condition write more obvious.
  4. handleAuthConfig is outbound state. Move into a different sync loop
  5. your check functions are good, but they should each set different FooDegraded conditions.
  6. serviceCA, servingCert, err := c.handleServiceCA() appears unnecessary. Directly depend on the key and the kubelet will properly put your pod into pending.
  7. hard code the hardcoded values in your oauth config. Service name for instance. Flexibility that isn't flexible is really hard to read.
  8. instead of passing a bunch of state through various handles, use your clients to lookup values. You can make them caching if the load becomes too high.
  9. accessTokenInactivityTimeoutSeconds appears to be a default. Why didn't you default it?
  10. handleOAuthConfig appears to be an attempt at combining multiple different configobservers into a single loop. You do logically own all these things, but configobservation (even of a single value) distinct from the main loop will give you working generations and the logical separation you're lacking here.
  11. sync-ing of user config is driven outside the main loop of other operators to ensure that it always works regardless of the state of other rollouts. Secrets must rotate without choice, the old values are invalid.
  12. ensureBootstrappedOAuthClients looks completely distinct.
  13. looks like availability may be worth separating like static pod operators that have complicated availability rules.
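A minimal sketch of the route.status check item 1 asks for, assuming the standard openshift/api route types (routev1 = github.com/openshift/api/route/v1, corev1 = k8s.io/api/core/v1); the function name is illustrative, not the operator's actual code:

// admittedHost returns the host from route.status rather than route.spec,
// and only once an ingress controller has actually admitted the route.
func admittedHost(route *routev1.Route) (string, bool) {
    for _, ingress := range route.Status.Ingress {
        for _, cond := range ingress.Conditions {
            if cond.Type == routev1.RouteAdmitted && cond.Status == corev1.ConditionTrue {
                return ingress.Host, true
            }
        }
    }
    return "", false
}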

x509: certificate signed by unknown authority error using keycloak operator

Hi!
We are trying to deploy Keycloak as an external identity provider via the Keycloak operator during the OpenShift installation process.
The Keycloak operator uses the OpenShift service-ca-operator to generate a service serving certificate, as described here.

By default, the authentication-operator does not trust certificates generated by the service-ca-operator.
It only trusts CAs located in the trusted-ca-bundle ConfigMap.
We have noticed that the serving-cert CA is already mounted into the authentication-operator pod.

Entrypoint args of authentication-operator:

if [ -s /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then
              echo "Copying system trust bundle"
              cp -f /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
          fi
          exec authentication-operator operator --config=/var/run/configmaps/config/operator-config.yaml --v=2 --terminate-on-files=/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt

Perhaps we should make the serving-cert CA trusted by default instead of configuring the ca section in the OAuth config after deploying the Keycloak instance:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - mappingMethod: claim
    name: keycloak
    openID:
      ca:  <--- this section
        name:  <ConfigMap with keycloak certificate>
      claims:
        email:
        - email
        name:
        - name
        preferredUsername:
        - preferred_username
      clientID: openshift
      clientSecret:
        name: openid-client-secret-djbhz
      extraScopes: []
      issuer: https://keycloak-url
    type: OpenID

For example, we could append the service CA to tls-ca-bundle.pem:

if [ -s /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then
              echo "Copying system trust bundle"
              cp -f /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
              echo "Copying service trust bundle"
              cat /var/run/configmaps/service-ca-bundle/service-ca.crt >> /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
          fi
          exec authentication-operator operator --config=/var/run/configmaps/config/operator-config.yaml --v=2 --terminate-on-files=/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt

Could you please suggest any better solution?

oauth-openshift pod failed to start due to permission issue.

I have an OCP 4.3 cluster deployed.

[root@hchenfly-inf ~]# oc version
Client Version: 4.3.13
Server Version: 4.3.13
Kubernetes Version: v1.16.2

But the oauth pods on my cluster cannot start up due to a permission error.

[root@hchenfly-inf ~]# oc get pods -n openshift-authentication
NAME                               READY   STATUS             RESTARTS   AGE
oauth-openshift-795cf97644-b7m6n   0/1     CrashLoopBackOff   1          17s
oauth-openshift-7f8dd5d86d-9qbc4   0/1     CrashLoopBackOff   1          17s
oauth-openshift-855b8c64-pq97f     0/1     CrashLoopBackOff   1          17s
[root@hchenfly-inf ~]# oc logs -n openshift-authentication oauth-openshift-855b8c64-pq97f
Copying system trust bundle
cp: cannot remove '/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem': Permission denied

Not sure how to solve this.

Missing auth server pods when running etcd Disaster Recovery

I noticed that the oauth server deployment has 2 replicas and the pods are generally present on 2 different master nodes (because of the configuration).

It is observed that in cases where 2 master VMs (running the oauth server pods) are wedged at the same time and disaster recovery is run on the remaining nodes, it takes a while (about 5 minutes) for the two pods to get rescheduled on the remaining master node after DR is performed. Related BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1722807

Would it make sense to run it as a DaemonSet so that auth server pods are present on all master nodes?

Having pods on all three master nodes will help reduce additional steps in the DR procedures of waiting for the auth pods.

Login URL shown for token display is not configurable

"Copy Login Command" currently restrict the api url to be one from the in-cluster configuration; however, it is absolutely possible to place a load balancer in front of master servers and have users interact with api via loadbalancer. The only way the users will discover is by going to "Copy Login Command", so I need a way to configure the api loginUrl displayed there. A user configurable configmap or env value would be a good way to expose it.

[Solved] Authentication operator degraded

I'm not sure how to report this correctly, but here is what I have.
Context:
4.2 UPI iPXE-booted libvirt setup of 1 node (HAProxy, iPXE, TFTP, DNS, DHCP) + 3 masters + 2 infras + 3 workers. Masters and workers have 4 vCPUs, 6 GB RAM, 20 GB volumes (iPXE-booted test setup).
Client Version: openshift-clients-4.2.0-201910041700
Server Version: 4.2.0
Kubernetes Version: v1.14.6+2e5ed54

Everything went well except that the authentication and console operators fail.

oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                                       Unknown     Unknown       True       12h
cloud-credential                           4.2.0     True        False         False      12h
cluster-autoscaler                         4.2.0     True        False         False      12h
console                                    4.2.0     False       True          False      12h
dns                                        4.2.0     True        False         False      12h
...
oc logs authentication-operator-59bd6dffb8-r4phm -n openshift-authentication-operator
....
controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp.example.com: []

and

E1101 20:31:15.352344       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF

Found similar issues:
https://bugzilla.redhat.com/show_bug.cgi?id=1740121#c10
https://bugzilla.redhat.com/show_bug.cgi?id=1743353
https://bugzilla.redhat.com/show_bug.cgi?id=1744370
https://bugzilla.redhat.com/show_bug.cgi?id=1744599

I would be grateful for any hints on resolution and/or dependencies to check and fix.

What I did so far:

  1. Opened HAProxy ports such as 6443, 22623, and others.
  2. Added SELinux ports: http_port_t 6443; 22623.
  3. Can dig *.apps.* from any cluster node when using ssh core@*.

My HAProxy rules:

frontend ocp-kubernetes-api-server
    mode tcp
    option tcplog
    bind api.ocp.example.com:6443
    default_backend ocp-kubernetes-api-server

backend ocp-kubernetes-api-server
    balance source
    mode tcp
    server bootstrap-0 bootstrap-0.ocp.example.com:6443 check
    server master-0 master-0.ocp.example.com:6443 check
    server master-1 master-1.ocp.example.com:6443 check
    server master-2 master-2.ocp.example.com:6443 check

frontend ocp-machine-config-server
    mode tcp
    option tcplog
    bind api.ocp.example.com:22623
    default_backend ocp-machine-config-server

backend ocp-machine-config-server
    balance source
    mode tcp
    server bootstrap-0 bootstrap-0.ocp.example.com:22623 check
    server master-0 master-0.ocp.example.com:22623 check
    server master-1 master-1.ocp.example.com:22623 check
    server master-2 master-2.ocp.example.com:22623 check

frontend ocp-router-http
    mode tcp
    option tcplog
    bind apps.ocp.example.com:80
    default_backend ocp-router-http

backend ocp-router-http
    balance source
    mode tcp
    server infnod-0 infnod-0.ocp.example.com:80 check
    server infnod-1 infnod-1.ocp.example.com:80 check

frontend ocp-router-https
    mode tcp
    option tcplog
    bind apps.ocp.example.com:443
    default_backend ocp-router-https

backend ocp-router-https
    balance source
    mode tcp
    server infnod-0 infnod-0.ocp.example.com:443 check
    server infnod-1 infnod-1.ocp.example.com:443 check

failed to decode metadata: invalid character '<' looking for beginning of value

OpenShift 4.1.7
UPI: vSphere
ENV: 3 masters, 3 workers

I configured an OpenID OAuth identity provider:

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  annotations:
    release.openshift.io/create-only: 'true'
  creationTimestamp: '2019-07-29T19:12:50Z'
  generation: 19
  name: cluster
  resourceVersion: '250219'
  selfLink: /apis/config.openshift.io/v1/oauths/cluster
  uid: dbce3f57-b234-11e9-ba76-005056a5829d
spec:
  identityProviders:
    - mappingMethod: claim
      name: ibm
      openID:
        claims:
          email:
            - email
          name:
            - name
          preferredUsername:
            - preferred_username
        clientID: *****
        clientSecret:
          name: openid-client-secret-9z8dr
        issuer: 'https://w3id.sso.ibm.com/isam'
      type: OpenID

Getting error from the operator:

Update is failing. Cluster operator authentication is reporting a failure: Degraded: failed to apply IDP ibm config: failed to decode metadata: invalid character '<' looking for beginning of value. View Cluster Operators for more details.

I see the same in the authentication-operator pod:

E0730 09:33:40.282106       1 oauth.go:69] failed to honor IDP v1.IdentityProvider{Name:"ibm", MappingMethod:"claim", IdentityProviderConfig:v1.IdentityProviderConfig{Type:"OpenID", BasicAuth:(*v1.BasicAuthIdentityProvider)(nil), GitHub:(*v1.GitHubIdentityProvider)(nil), GitLab:(*v1.GitLabIdentityProvider)(nil), Google:(*v1.GoogleIdentityProvider)(nil), HTPasswd:(*v1.HTPasswdIdentityProvider)(nil), Keystone:(*v1.KeystoneIdentityProvider)(nil), LDAP:(*v1.LDAPIdentityProvider)(nil), OpenID:(*v1.OpenIDIdentityProvider)(0xc42098ab00), RequestHeader:(*v1.RequestHeaderIdentityProvider)(nil)}}: failed to decode metadata: invalid character '<' looking for beginning of value

Am I missing something?
Thanks
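For context, the "metadata" here is the issuer's OIDC discovery document, which is expected to be JSON; an HTML error page from the issuer (which starts with '<') produces exactly this decode failure. A standalone Go sketch of such a fetch, assuming the standard OIDC discovery path (this is not the operator's actual code):

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "os"
)

func main() {
    issuer := "https://w3id.sso.ibm.com/isam" // issuer from the OAuth spec above
    resp, err := http.Get(issuer + "/.well-known/openid-configuration")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    defer resp.Body.Close()

    // Decoding fails with "invalid character '<' looking for beginning of
    // value" when the endpoint returns HTML instead of a JSON document.
    var metadata map[string]interface{}
    if err := json.NewDecoder(resp.Body).Decode(&metadata); err != nil {
        fmt.Fprintln(os.Stderr, "failed to decode metadata:", err)
        os.Exit(1)
    }
    fmt.Println("token_endpoint:", metadata["token_endpoint"])
}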

Freshly installed authentication operator keeps "Degraded" status.

When installing OKD on bare metal, all operators except authentication successfully reach Available status.

$ oc get clusteroperators
NAME                                       VERSION                          AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.10.0-0.okd-2022-03-07-131213   True        False         True       14h     CustomRouteControllerDegraded: Ingress.config.openshift.io "cluster" is invalid: [status.componentRoutes.currentHostnames: Invalid value: "oauth-openshift.apps.okd.ict620": status.componentRoutes.currentHostnames in body must be of type hostname: "oauth-openshift.apps.okd.ict620", status.componentRoutes.defaultHostname: Invalid value: "oauth-openshift.apps.okd.ict620": status.componentRoutes.defaultHostname in body must be of type hostname: "oauth-openshift.apps.okd.ict620"]...
baremetal                                  4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
cloud-controller-manager                   4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
cloud-credential                           4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
cluster-autoscaler                         4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
config-operator                            4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
console                                    4.10.0-0.okd-2022-03-07-131213   True        False         False      5h4m
csi-snapshot-controller                    4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
dns                                        4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
etcd                                       4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
image-registry                             4.10.0-0.okd-2022-03-07-131213   True        False         False      39h
ingress                                    4.10.0-0.okd-2022-03-07-131213   True        False         False      5h4m
insights                                   4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
kube-apiserver                             4.10.0-0.okd-2022-03-07-131213   True        False         False      39h
kube-controller-manager                    4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
kube-scheduler                             4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
kube-storage-version-migrator              4.10.0-0.okd-2022-03-07-131213   True        False         False      39h
machine-api                                4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
machine-approver                           4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
machine-config                             4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
marketplace                                4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
monitoring                                 4.10.0-0.okd-2022-03-07-131213   True        False         False      38h
network                                    4.10.0-0.okd-2022-03-07-131213   True        False         False      39h
node-tuning                                4.10.0-0.okd-2022-03-07-131213   True        False         False      39h
openshift-apiserver                        4.10.0-0.okd-2022-03-07-131213   True        False         False      14h
openshift-controller-manager               4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
openshift-samples                          4.10.0-0.okd-2022-03-07-131213   True        False         False      39h
operator-lifecycle-manager                 4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
operator-lifecycle-manager-catalog         4.10.0-0.okd-2022-03-07-131213   True        False         False      39h
operator-lifecycle-manager-packageserver   4.10.0-0.okd-2022-03-07-131213   True        False         False      39h
service-ca                                 4.10.0-0.okd-2022-03-07-131213   True        False         False      40h
storage                                    4.10.0-0.okd-2022-03-07-131213   True        False         False      40h

openshift-install wait-for install-complete returned successfully and I can log in to the web console.

$ ./openshift-install --dir ict-okd wait-for install-complete
INFO Waiting up to 40m0s (until 11:55AM) for the cluster at https://api.okd.ict620:6443 to initialize...
INFO Waiting up to 10m0s (until 11:25AM) for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/stream/openshift/ict-okd/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.okd.ict620
INFO Login to the console with user: "******", and password: "******"
INFO Time elapsed: 0s

Full MESSAGE for authentication operator:

CustomRouteControllerDegraded: Ingress.config.openshift.io "cluster" is invalid: [status.componentRoutes.currentHostnames: Invalid value: "oauth-openshift.apps.okd.ict620": status.componentRoutes.currentHostnames in body must be of type hostname: "oauth-openshift.apps.okd.ict620", status.componentRoutes.defaultHostname: Invalid value: "oauth-openshift.apps.okd.ict620": status.componentRoutes.defaultHostname in body must be of type hostname: "oauth-openshift.apps.okd.ict620"]
OAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the "openshift-authentication/oauth-openshift" route

okd.ict620 is only resolvable in-cluster.

I added an OpenID Connect identity provider and it works as expected.

authentication-operator uses a proxy to access the internal address of the cluster

I deployed an OCP (4.8.0-0.nightly-2021-06-09-214128) cluster via IPI. After the deployment succeeded, I found that the authentication-operator accesses an address inside the cluster (oauth-openshift.apps.openshift.zz.local) through the proxy, which leads to access failures:
The following is the log of the proxy service:

2021/06/10 07:42:17 server.go:96: [http] 192.168.30.101:56436 <-> oauth-openshift.apps.openshift.zz.local:443 [c] via ***.***.***.***:8080, 
error in dial: [http] can not connect remote address: oauth-openshift.apps.openshift.zz.local:443. error code: 503

I logged in to the authentication-operator container and found that the NO_PROXY I set is in effect (for readability, I only show the entries related to this address):

sh-4.4# env | grep NO_PROXY
NO_PROXY=.cluster.local,.svc,.zz.local,oauth-openshift.apps.openshift.zz.local

The following is the log of authentication-operator:

2021-06-10T07:25:36.852124986+00:00 stderr F I0610 07:25:36.851991       1 request.go:668] Waited for 2.587263238s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/kube-system/secrets/kubeadmin
2021-06-10T07:25:38.051260934+00:00 stderr F I0610 07:25:38.051205       1 request.go:668] Waited for 2.196184297s due to client-side throttling, not priority and fairness, request: DELETE:https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets/encryption-config
2021-06-10T07:25:39.052030581+00:00 stderr F I0610 07:25:39.051901       1 request.go:668] Waited for 1.782880288s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-metadata
2021-06-10T07:25:40.052123592+00:00 stderr F I0610 07:25:40.051998       1 request.go:668] Waited for 1.791930805s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-cliconfig
2021-06-10T07:35:29.176269702+00:00 stderr F I0610 07:35:29.176226       1 request.go:668] Waited for 1.032187262s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-config-managed/secrets/router-certs
2021-06-10T07:35:30.376324628+00:00 stderr F I0610 07:35:30.376247       1 request.go:668] Waited for 1.618937069s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-config/secrets/webhook-authentication-integrated-oauth
2021-06-10T07:35:31.376560351+00:00 stderr F I0610 07:35:31.376446       1 request.go:668] Waited for 1.997080649s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-authentication/secrets/v4-0-config-system-session
2021-06-10T07:35:32.376732953+00:00 stderr F I0610 07:35:32.376648       1 request.go:668] Waited for 2.391812071s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-metadata
2021-06-10T07:35:33.376750827+00:00 stderr F I0610 07:35:33.376696       1 request.go:668] Waited for 2.426829904s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver
2021-06-10T07:35:34.576035008+00:00 stderr F I0610 07:35:34.575988       1 request.go:668] Waited for 2.595683923s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-authentication/endpoints/oauth-openshift
2021-06-10T07:35:35.576654495+00:00 stderr F I0610 07:35:35.576520       1 request.go:668] Waited for 1.995346443s due to client-side throttling, not priority and fairness, request: DELETE:https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets/encryption-config
2021-06-10T07:35:36.776318219+00:00 stderr F I0610 07:35:36.776184       1 request.go:668] Waited for 1.993754073s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api
2021-06-10T07:35:37.976966359+00:00 stderr F I0610 07:35:37.976919       1 request.go:668] Waited for 1.596880981s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/kube-system
2021-06-10T07:38:29.339042704+00:00 stderr F I0610 07:38:29.338965       1 request.go:668] Waited for 1.035359616s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-service-ca
2021-06-10T07:38:31.739780519+00:00 stderr F I0610 07:38:31.739650       1 request.go:668] Waited for 1.035100315s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-service-ca
2021-06-10T07:38:32.739951870+00:00 stderr F I0610 07:38:32.739820       1 request.go:668] Waited for 1.395485729s due to client-side throttling, not priority and fairness, request: DELETE:https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/secrets/encryption-config

Secrets "v4-0-config-system-router-certs" not found for cluster-authentication operator

The auth service is not able to start because of a "v4-0-config-system-router-certs" not found error.

$ openshift-install version
openshift-install unreleased-master-577-g9777df02d2a0dc287bb520c5ea2f409499c28eca
built from commit 9777df02d2a0dc287bb520c5ea2f409499c28eca

$ oc get co
NAME                                  VERSION                           AVAILABLE   PROGRESSING   FAILING   SINCE
authentication                                                          False       False         True      5m15s
cluster-autoscaler                    4.0.0-0.alpha-2019-03-18-221255   True        False         False     19m
dns                                   4.0.0-0.alpha-2019-03-18-221255   True        False         False     24m
kube-apiserver                        4.0.0-0.alpha-2019-03-18-221255   True        False         False     21m
kube-controller-manager               4.0.0-0.alpha-2019-03-18-221255   True        False         False     18m
kube-scheduler                        4.0.0-0.alpha-2019-03-18-221255   True        False         False     20m
machine-api                           4.0.0-0.alpha-2019-03-18-221255   True        False         False     25m
machine-config                        4.0.0-0.alpha-2019-03-18-221255   True        False         False     24m
network                               4.0.0-0.alpha-2019-03-18-221255   True        False         False     25m
node-tuning                           4.0.0-0.alpha-2019-03-18-221255   True        False         False     16m
openshift-apiserver                   4.0.0-0.alpha-2019-03-18-221255   True        False         False     18m
openshift-cloud-credential-operator   4.0.0-0.alpha-2019-03-18-221255   True        False         False     24m
openshift-controller-manager          4.0.0-0.alpha-2019-03-18-221255   True        False         False     18m
operator-lifecycle-manager            4.0.0-0.alpha-2019-03-18-221255   True        False         False     25m
service-ca                                                              True        False         False     18m
service-catalog-apiserver             4.0.0-0.alpha-2019-03-18-221255   True        False         False     17m
service-catalog-controller-manager    4.0.0-0.alpha-2019-03-18-221255   True        False         False     17m

$ oc get co authentication -oyaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: 2019-03-19T05:42:40Z
  generation: 1
  name: authentication
  resourceVersion: "15617"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/authentication
  uid: cf8b6794-4a09-11e9-b47e-664f163f5f0f
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-03-19T05:42:40Z
    message: 'Failing: secrets "v4-0-config-system-router-certs" not found'
    reason: Failing
    status: "True"
    type: Failing
  - lastTransitionTime: 2019-03-19T05:42:40Z
    reason: AsExpected
    status: "False"
    type: Progressing
  - lastTransitionTime: 2019-03-19T05:42:40Z
    reason: Available
    status: "False"
    type: Available
  - lastTransitionTime: 2019-03-19T05:42:40Z
    reason: NoData
    status: Unknown
    type: Upgradeable
  extension: null
  relatedObjects:
  - group: operator.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: oauths
  - group: ""
    name: openshift-config
    resource: namespaces
  - group: ""
    name: openshift-config-managed
    resource: namespaces
  - group: ""
    name: openshift-authentication
    resource: namespaces
  - group: ""
    name: openshift-authentication-operator
    resource: namespaces
  versions: null

$ oc get pods --all-namespaces| grep -i auth
openshift-authentication-operator                       openshift-authentication-operator-7d69d9795-nds62                 1/1     Running     0          2m55s

$ oc logs openshift-authentication-operator-7d69d9795-kwnv5 -n openshift-authentication-operator
W0319 05:42:23.713608       1 cmd.go:134] Using insecure, self-signed certificates
I0319 05:42:23.714007       1 crypto.go:493] Generating new CA for cluster-authentication-operator-signer@1552974143 cert, and key in /tmp/serving-cert-030527269/serving-signer.crt, /tmp/serving-cert-030527269/serving-signer.key
I0319 05:42:24.737477       1 observer_polling.go:106] Starting file observer
W0319 05:42:25.221662       1 authorization.go:47] Authorization is disabled
W0319 05:42:25.221769       1 authentication.go:55] Authentication is disabled
I0319 05:42:25.223943       1 secure_serving.go:116] Serving securely on 0.0.0.0:8443
I0319 05:42:25.224768       1 leaderelection.go:205] attempting to acquire leader lease  openshift-authentication-operator/cluster-authentication-operator-lock...
I0319 05:42:40.782056       1 leaderelection.go:214] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock
I0319 05:42:40.790265       1 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"177c52f7-4a08-11e9-b47e-664f163f5f0f", APIVersion:"v1", ResourceVersion:"15612", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' c6383bf3-4a09-11e9-aeee-0a580a80003d became leader
I0319 05:42:40.810675       1 status_controller.go:173] Starting StatusSyncer-authentication
I0319 05:42:40.820314       1 resourcesync_controller.go:207] Starting ResourceSyncController
I0319 05:42:40.827469       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"openshift-authentication-operator", UID:"081df1fe-4a08-11e9-b47e-664f163f5f0f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'StatusNotFound' Unable to determine current operator status for authentication
I0319 05:42:40.828105       1 controller.go:54] Starting AuthenticationOperator2
I0319 05:42:40.865859       1 status_controller.go:98] clusteroperator/authentication not found
I0319 05:42:40.875153       1 status_controller.go:150] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-03-19T05:42:40Z","message":"Failing: secrets \"v4-0-config-system-router-certs\" not found","reason":"Failing","status":"True","type":"Failing"},{"lastTransitionTime":"2019-03-19T05:42:40Z","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2019-03-19T05:42:40Z","reason":"Available","status":"False","type":"Available"},{"lastTransitionTime":"2019-03-19T05:42:40Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}],"relatedObjects":[{"group":"operator.openshift.io","name":"cluster","resource":"authentications"},{"group":"config.openshift.io","name":"cluster","resource":"authentications"},{"group":"config.openshift.io","name":"cluster","resource":"oauths"},{"group":"","name":"openshift-config","resource":"namespaces"},{"group":"","name":"openshift-config-managed","resource":"namespaces"},{"group":"","name":"openshift-authentication","resource":"namespaces"},{"group":"","name":"openshift-authentication-operator","resource":"namespaces"}]}}
E0319 05:42:41.217898       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:42:41.417729       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:42:41.617912       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:42:41.819724       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:42:42.021670       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:42:42.219946       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:42:42.421906       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:42:42.772591       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:42:43.435065       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:42:44.732913       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:42:47.326001       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:42:52.468559       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:43:02.729566       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found
E0319 05:43:23.228771       1 controller.go:130] {🐼 🐼} failed with: secrets "v4-0-config-system-router-certs" not found

Impossible to update route - Error in route oauth-openshift - 503 Service Unavailable

Hello,

I am installing a new cluster OCP 4.6.1

[root@paas-dev masters]# oc version
Client Version: 4.6.1
Server Version: 4.6.1
Kubernetes Version: v1.19.0+d59ce34

So far, all the operators are fine and available (not degraded) except the Authentication and the Console.

NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.1     False       True          True       25h
cloud-credential                           4.6.1     True        False         False      4d10h
cluster-autoscaler                         4.6.1     True        False         False      4d5h
config-operator                            4.6.1     True        False         False      4d5h
console                                    4.6.1     False       True          True       34h
csi-snapshot-controller                    4.6.1     True        False         False      53m
dns                                        4.6.1     True        False         False      4d5h
etcd                                       4.6.1     True        False         False      4d5h
image-registry                             4.6.1     True        False         False      4d5h
ingress                                    4.6.1     True        False         False      4d4h
insights                                   4.6.1     True        False         False      4d5h
kube-apiserver                             4.6.1     True        False         False      4d5h
kube-controller-manager                    4.6.1     True        False         False      4d5h
kube-scheduler                             4.6.1     True        False         False      4d5h
kube-storage-version-migrator              4.6.1     True        False         False      4d4h
machine-api                                4.6.1     True        False         False      4d5h
machine-approver                           4.6.1     True        False         False      4d5h
machine-config                             4.6.1     True        False         False      3h49m
marketplace                                4.6.1     True        False         False      50m
monitoring                                 4.6.1     True        False         False      48m
network                                    4.6.1     True        False         False      4d5h
node-tuning                                4.6.1     True        False         False      4d5h
openshift-apiserver                        4.6.1     True        False         False      54m
openshift-controller-manager               4.6.1     True        False         False      4d5h
openshift-samples                          4.6.1     True        False         False      4d5h
operator-lifecycle-manager                 4.6.1     True        False         False      4d5h
operator-lifecycle-manager-catalog         4.6.1     True        False         False      4d5h
operator-lifecycle-manager-packageserver   4.6.1     True        False         False      48m
service-ca                                 4.6.1     True        False         False      4d5h
storage                                    4.6.1     True        False         False      4d5h

Checking the console project, the console pods are not running because they cannot access the OAuth endpoint.

[root@gvapaas-dev-bastion1 install_07122020]# oc get pods
NAME                         READY   STATUS    RESTARTS   AGE
console-574b85c888-77j6q     0/1     Running   15         73m
console-64cd86cd6c-5wxzk     0/1     Running   15         73m
console-9bbf5899-pjdzf       0/1     Running   14         68m
downloads-85df645c7c-8nr7h   1/1     Running   0          2d3h
downloads-85df645c7c-n8jcq   1/1     Running   0          2d3h

The error occurs because the oauth-openshift route is not working properly:

2020-12-11T20:02:22Z auth: error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.ocp-dev.mydomain.com/oauth/token failed: Head "https://oauth-openshift.apps.ocp-dev.mydomain.com": read tcp 10.129.0.17:46794->192.168.10.10:443: read: connection reset by peer

So I checked the pods in the openshift-authentication and openshift-authentication-operator projects; all pods are running fine, but the one in the openshift-authentication-operator project reports an error:

[root@paas-dev masters]# oc get pods -n openshift-authentication
NAME                               READY   STATUS    RESTARTS   AGE
oauth-openshift-6dbc889fc8-96wbj   1/1     Running   0          55m
oauth-openshift-6dbc889fc8-mxhmp   1/1     Running   0          58m

[root@paas-dev masters]# oc get pods -n openshift-authentication-operator
NAME                                       READY   STATUS    RESTARTS   AGE
authentication-operator-5d575b6f8b-9q8jz   1/1     Running   0          55m

The error is the following:

E1211 19:47:18.497520       1 base_controller.go:250] "OAuthRouteCheckEndpointAccessibleController" controller failed to sync "key", err: "https://oauth-openshift.apps.ocp-dev.mydomain/healthz" returned "503 Service Unavailable"

Researching this error, I found some information from Red Hat indicating it is related to certificates on the route, so everything makes sense:
https://access.redhat.com/solutions/4601031

Then, the solution should be to update the route called oauth-openshift in the openshift-authentication project, changing the configuration to reencrypt the connection using the certificates from my load balancer (apps.ocp-dev.mydomain.com). In order to do it properly, I followed the instructions to set up a certificate for the whole apps subdomain. The instructions are here:

https://docs.openshift.com/container-platform/4.6/security/certificates/replacing-default-ingress-certificate.html

So far, this is working fine for all the routes, except for the one called oauth-openshift in the project openshift-authentication.

The next step is to try to modify this route with oc edit route oauth-openshift. So far, this is not possible: every time I try to change it, it reverts to the default, original value. I tried creating a new route, but the same thing happens, and it is not possible to update the other projects with the new route even though I am cluster-admin.

This is the original value:

NAME                        HOST/PORT                                            PATH   SERVICES          PORT   TERMINATION            WILDCARD
oauth-openshift             oauth-openshift.apps.ocp-dev.corp.sch.ch                    oauth-openshift   6443   passthrough/Redirect   None

This is the desired output:

NAME                        HOST/PORT                                            PATH   SERVICES          PORT   TERMINATION            WILDCARD
oauth-openshift             oauth-openshift.apps.ocp-dev.corp.sch.ch                    oauth-openshift   6443   reencrypt/Redirect   None

So my question is: is there a way to update this route? Is there something I am doing wrong?

Thanks a lot in advance,
Best regards,

Proxy enabled Issue

Hello,

I have set up an OpenShift cluster with the "4.2.0-0.nightly-2019-09-18-114152" tag and installed it using a proxy server. The authentication operator picks up the PROXY environment variables, but I thought the operator should use an svc.cluster.local address rather than the IP.

Also, I tried editing the operator's Deployment with oc edit deployment.apps/authentication-operator and it worked, but after a minute it gets overwritten. Could you please help me find a way to edit NO_PROXY without reinstalling the cluster?

Error happening

Openshift42 # oc log authentication-operator-575d4c97c5-pflvk -f
E0930 14:29:39.212961 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check the .well-known endpoint: failed to GET well-known https://10.18.46.51:6443/.well-known/oauth-authorization-server: Tunnel or SSL Forbidden

The proxy configuration

[ openshift42]# oc rsh authentication-operator-575d4c97c5-pflvk
sh-4.2# env | grep -i proxy
NO_PROXY=.apps.ose.example.com,.cluster.local,.example.com,.ose.example.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,api-int.ose.example.com,api.ose.example.com,etcd-0.ose.example.com,etcd-1.ose.example.com,etcd-2.ose.example.com,localhost
HTTPS_PROXY=http://proxy.prod.example.com:8080
HTTP_PROXY=http://proxy.prod.example.com:8080

Exporting the CIDR did not work.

sh-4.2# export NO_PROXY=$NO_PROXY,10.18.0.0/16
sh-4.2# curl -k https://10.18.46.51:6443/.well-known/oauth-authorization-server
curl: (56) Received HTTP code 403 from proxy after CONNECT

Exporting the IP itself worked:

sh-4.2# export NO_PROXY=$NO_PROXY,10.18.46.51
sh-4.2# curl -k https://10.18.46.51:6443/.well-known/oauth-authorization-server
{
"issuer": "https://oauth-openshift.apps.ose.example.com",
"authorization_endpoint": "https://oauth-openshift.apps.ose.example.com/oauth/authorize",
"token_endpoint": "https://oauth-openshift.apps.ose.example.com/oauth/token",
"scopes_supported": [
"user:check-access",
"user:full",
"user:info",
"user:list-projects",
"user:list-scoped-projects"
],

The svc.cluster.local address worked

sh-4.2# curl -k https://10-18-46-51.kubernetes.default.svc.cluster.local:6443/.well-known/oauth-authorization-server
{
"issuer": "https://oauth-openshift.apps.ose.example.com",
"authorization_endpoint": "https://oauth-openshift.apps.ose.example.com/oauth/authorize",
"token_endpoint": "https://oauth-openshift.apps.ose.example.com/oauth/token",
"scopes_supported": [
"user:check-access",
"user:full",
"user:info",
"user:list-projects",
"user:list-scoped-projects"
],
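
For context, the operator's Deployment is managed and edits to it are reconciled away; the proxy environment variables are normally derived from the cluster-wide Proxy object (proxy.config.openshift.io/cluster), whose spec.noProxy field can be extended instead. A hedged sketch, assuming that object is present on this cluster:

$ oc get proxy/cluster -o yaml    # inspect the current httpProxy, httpsProxy and noProxy values
$ oc edit proxy/cluster           # append the required host (or CIDR) to spec.noProxy

Note that support for CIDR ranges in no_proxy depends on the HTTP client; curl in particular has typically not matched CIDR ranges, so the in-pod curl test above may not reflect what the operator's own Go client does.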

'WellKnownAvailable: The well-known endpoint is not yet available:

I installed OCP 4.6.8 on-prem with Assisted Bare Metal Clusters, and I cannot seem to access the console. Did I miss a step?

apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  annotations:
    exclude.release.openshift.io/internal-openshift-hosted: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
  creationTimestamp: "2021-02-07T01:48:11Z"
  generation: 1
  managedFields:
  - apiVersion: config.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:exclude.release.openshift.io/internal-openshift-hosted: {}
          f:include.release.openshift.io/self-managed-high-availability: {}
      f:spec: {}
      f:status:
        .: {}
        f:extension: {}
    manager: cluster-version-operator
    operation: Update
    time: "2021-02-07T01:48:11Z"
  - apiVersion: config.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions: {}
        f:relatedObjects: {}
        f:versions: {}
    manager: authentication-operator
    operation: Update
    time: "2021-02-07T04:13:17Z"
  name: authentication
  resourceVersion: "55392"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/authentication
  uid: 75fdd654-5936-4c33-8515-7afc08302eb8
spec: {}
status:
  conditions:
  - lastTransitionTime: "2021-02-07T01:54:41Z"
    message: |-
      APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()
      WellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.1.179:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)
    reason: APIServerDeployment_UnavailablePod::WellKnownReadyController_SyncError
    status: "True"
    type: Degraded
  - lastTransitionTime: "2021-02-07T04:13:17Z"
    reason: AsExpected
    status: "False"
    type: Progressing
  - lastTransitionTime: "2021-02-07T01:52:41Z"
    message: 'WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver
      oauth endpoint https://192.168.1.179:6443/.well-known/oauth-authorization-server
      is not yet served and authentication operator keeps waiting (check kube-apiserver
      operator, and check that instances roll out successfully, which can take several
      minutes per instance)'
    reason: WellKnown_NotReady
    status: "False"
    type: Available
  - lastTransitionTime: "2021-02-07T01:52:42Z"
    reason: AsExpected
    status: "True"
    type: Upgradeable
  extension: null
  relatedObjects:
  - group: operator.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: infrastructures
  - group: config.openshift.io
    name: cluster
    resource: oauths
  - group: route.openshift.io
    name: oauth-openshift
    namespace: openshift-authentication
    resource: routes
  - group: ""
    name: oauth-openshift
    namespace: openshift-authentication
    resource: services
  - group: ""
    name: openshift-config
    resource: namespaces
  - group: ""
    name: openshift-config-managed
    resource: namespaces
  - group: ""
    name: openshift-authentication
    resource: namespaces
  - group: ""
    name: openshift-authentication-operator
    resource: namespaces
  - group: ""
    name: openshift-ingress
    resource: namespaces
  - group: ""
    name: openshift-oauth-apiserver
    resource: namespaces
  versions:
  - name: oauth-apiserver
    version: 4.6.8
  - name: oauth-openshift
    version: 4.6.8_openshift
  - name: operator
    version: 4.6.8
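
The Degraded condition above points at two things: one of the three openshift-oauth-apiserver pods is unavailable, and the kube-apiserver is not yet serving the .well-known/oauth-authorization-server endpoint. A hedged sketch of the checks that would narrow this down, assuming cluster-admin access:

$ oc get pods -n openshift-oauth-apiserver -o wide     # find the unavailable apiserver pod and its node
$ oc get clusteroperator kube-apiserver                # the well-known endpoint is served by the kube-apiserver
$ oc describe clusteroperator authentication           # full condition messages
$ curl -k https://192.168.1.179:6443/.well-known/oauth-authorization-server   # should return the OAuth metadata once the rollout finishes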

Update degraded

Hello, I can't update to 4.1.6: my authentication cluster operator will not finish rolling out.

[core@ tmp]$ oc get clusterversions.config.openshift.io
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.4     True        True          43m     Unable to apply 4.1.6: the cluster operator authentication has not yet successfully rolled out
[core@ tmp]$ oc get clusteroperator authentication
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication   4.1.4     True        False         True       46h
[core@ tmp]$

Need help.
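
A hedged sketch of how one might see why the operator is reporting Degraded before retrying the update:

$ oc describe clusteroperator authentication                                  # shows the Degraded condition message
$ oc -n openshift-authentication-operator logs deployment/authentication-operator
$ oc -n openshift-authentication get pods                                     # the oauth-openshift pods the operator manages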

Configurable id for OIDC auth

Hello,

After reading openshift/oauth-server#33 and having the same problem in my organization, I came here in order to understand the reasoning behind this comment:

// There is no longer a user-facing setting for ID as it is considered unsafe

Can you somehow justify this claim?
Otherwise I am going to submit a PR that allows adventurous users to override this...

Authentication degraded v4.1 - Working towards 4.1.12: 99% complete, waiting on authentication, console

While running openshift-install, the Authentication Operator does not reach the AVAILABLE state.
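
The transcript below comes down to the oauth-openshift route at oauth-openshift.apps.ocp67.nrgy.lan not answering TLS ("route is not available at canonical host" followed by "TLS handshake timeout"). A hedged sketch of the external checks one might run first, assuming the *.apps wildcard record is supposed to point at the ingress routers:

$ dig +short oauth-openshift.apps.ocp67.nrgy.lan                 # should resolve to the router/ingress address
$ curl -kv https://oauth-openshift.apps.ocp67.nrgy.lan/healthz   # a 200 here means the route itself is reachable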

$ openshift-install wait-for install-complete --log-level debug
<snip>
DEBUG Still waiting for the cluster to initialize: Some cluster operators are still updating: authentication, console
$ oc get clusteroperator authentication
NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication             Unknown     Unknown       True       56m
$ openshift-install version
openshift-install v4.1.12-201908150609-dirty
built from commit c67fe4d87dcb325a374d8438e3b8f31eb3ac6f2a
release image quay.io/openshift-release-dev/ocp-release@sha256:c28afba66cc09233f7dfa49177423e124d939cf5b0cd60d71bbb918edb0ed739
$ oc get nodes
NAME                     STATUS   ROLES    AGE   VERSION
master0.ocp67.nrgy.lan   Ready    master   63m   v1.13.4+d81afa6ba
master1.ocp67.nrgy.lan   Ready    master   63m   v1.13.4+d81afa6ba
master2.ocp67.nrgy.lan   Ready    master   63m   v1.13.4+d81afa6ba
worker0.ocp67.nrgy.lan   Ready    worker   63m   v1.13.4+d81afa6ba
worker1.ocp67.nrgy.lan   Ready    worker   63m   v1.13.4+d81afa6ba
$ oc get co
NAME                                 VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                                 Unknown     Unknown       True       59m
cloud-credential                     4.1.12    True        False         False      62m
cluster-autoscaler                   4.1.12    True        False         False      62m
console                              4.1.12    False       True          False      58m
dns                                  4.1.12    True        False         False      62m
image-registry                       4.1.12    True        False         False      55m
ingress                              4.1.12    True        False         False      57m
kube-apiserver                       4.1.12    True        False         False      61m
kube-controller-manager              4.1.12    True        False         False      59m
kube-scheduler                       4.1.12    True        False         False      59m
machine-api                          4.1.12    True        False         False      62m
machine-config                       4.1.12    True        False         False      61m
marketplace                          4.1.12    True        False         False      57m
monitoring                           4.1.12    True        False         False      56m
network                              4.1.12    True        False         False      62m
node-tuning                          4.1.12    True        False         False      59m
openshift-apiserver                  4.1.12    True        False         False      59m
openshift-controller-manager         4.1.12    True        False         False      61m
openshift-samples                    4.1.12    True        False         False      56m
operator-lifecycle-manager           4.1.12    True        False         False      61m
operator-lifecycle-manager-catalog   4.1.12    True        False         False      61m
service-ca                           4.1.12    True        False         False      61m
service-catalog-apiserver            4.1.12    True        False         False      59m
service-catalog-controller-manager   4.1.12    True        False         False      59m
storage                              4.1.12    True        False         False      58m
$ oc get co authentication -oyaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: "2019-08-19T13:48:19Z"
  generation: 1
  name: authentication
  resourceVersion: "20374"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/authentication
  uid: 008db2ad-c288-11e9-abef-00505686b42b
spec: {}
status:
  conditions:
  - lastTransitionTime: "2019-08-19T13:50:53Z"
    message: 'RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout'
    reason: RouteHealthDegradedError
    status: "True"
    type: Degraded
  - lastTransitionTime: "2019-08-19T13:48:19Z"
    reason: NoData
    status: Unknown
    type: Progressing
  - lastTransitionTime: "2019-08-19T13:48:19Z"
    reason: NoData
    status: Unknown
    type: Available
  - lastTransitionTime: "2019-08-19T13:48:19Z"
    reason: AsExpected
    status: "True"
    type: Upgradeable
  extension: null
  relatedObjects:
  - group: operator.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: infrastructures
  - group: config.openshift.io
    name: cluster
    resource: oauths
  - group: ""
    name: openshift-config
    resource: namespaces
  - group: ""
    name: openshift-config-managed
    resource: namespaces
  - group: ""
    name: openshift-authentication
    resource: namespaces
  - group: ""
    name: openshift-authentication-operator
    resource: namespaces
$ oc get pods --all-namespaces | grep -i auth
openshift-authentication-operator                       authentication-operator-65b84f95d4-btv85                          1/1     Running            0          62m
openshift-authentication                                oauth-openshift-dfffdbc88-q7w7x                                   1/1     Running            0          60m
openshift-authentication                                oauth-openshift-dfffdbc88-sf9dp                                   1/1     Running            0          60m
$ oc logs authentication-operator-65b84f95d4-btv85 -n openshift-authentication-operator
I0819 13:48:16.320719       1 cmd.go:160] Using service-serving-cert provided certificates
I0819 13:48:16.333142       1 observer_polling.go:106] Starting file observer
I0819 13:48:18.908674       1 secure_serving.go:116] Serving securely on 0.0.0.0:8443
I0819 13:48:18.911107       1 leaderelection.go:205] attempting to acquire leader lease  openshift-authentication-operator/cluster-authentication-operator-lock...
I0819 13:48:18.955683       1 leaderelection.go:214] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock
I0819 13:48:18.994621       1 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"00646f22-c288-11e9-abef-00505686b42b", APIVersion:"v1", ResourceVersion:"15750", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 005f7db3-c288-11e9-ac7f-0a580afe040c became leader
I0819 13:48:19.040566       1 remove_stale_conditions.go:71] Starting RemoveStaleConditions
I0819 13:48:19.088713       1 resourcesync_controller.go:219] Starting ResourceSyncController
I0819 13:48:19.089275       1 controller.go:53] Starting AuthenticationOperator2
I0819 13:48:19.089295       1 status_controller.go:187] Starting StatusSyncer-authentication
I0819 13:48:19.089304       1 unsupportedconfigoverrides_controller.go:151] Starting UnsupportedConfigOverridesController
I0819 13:48:19.089313       1 logging_controller.go:82] Starting LogLevelController
I0819 13:48:19.089322       1 management_state_controller.go:99] Starting management-state-controller-authentication
I0819 13:48:19.196312       1 status_controller.go:108] clusteroperator/authentication not found
I0819 13:48:19.228898       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Degraded"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Progressing"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Available"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}],"relatedObjects":[{"group":"operator.openshift.io","name":"cluster","resource":"authentications"},{"group":"config.openshift.io","name":"cluster","resource":"authentications"},{"group":"config.openshift.io","name":"cluster","resource":"infrastructures"},{"group":"config.openshift.io","name":"cluster","resource":"oauths"},{"group":"","name":"openshift-config","resource":"namespaces"},{"group":"","name":"openshift-config-managed","resource":"namespaces"},{"group":"","name":"openshift-authentication","resource":"namespaces"},{"group":"","name":"openshift-authentication-operator","resource":"namespaces"}]}}
I0819 13:48:19.245497       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Progressing"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Available"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Upgradeable"}]}}
I0819 13:48:19.261871       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Progressing"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Available"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
I0819 13:48:19.268623       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("")
I0819 13:48:19.297284       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Upgradeable changed from Unknown to True ("")
I0819 13:48:19.512472       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-08-19T13:48:19Z","message":"RouteStatusDegraded: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Progressing"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Available"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
E0819 13:48:19.524503       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
I0819 13:48:19.564443       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "" to "RouteStatusDegraded: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []"
E0819 13:48:19.733390       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:48:19.956916       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:48:20.101491       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:48:20.327844       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:48:20.499150       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:48:20.708373       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:48:20.897823       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:48:21.185658       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:48:22.495211       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:48:25.114348       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:48:30.269628       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:48:40.518573       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
E0819 13:49:01.013881       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
I0819 13:49:24.479039       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-08-19T13:49:24Z","message":"RouteStatusDegraded: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []","reason":"RouteStatusDegradedError","status":"True","type":"Degraded"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Progressing"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Available"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
I0819 13:49:24.505383       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded changed from False to True ("RouteStatusDegraded: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []")
I0819 13:49:30.975608       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing
E0819 13:49:31.046439       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
I0819 13:49:31.388187       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretUpdated' Updated Secret/v4-0-config-system-router-certs -n openshift-authentication because it changed
E0819 13:49:31.404479       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp67.nrgy.lan: []
I0819 13:49:39.001252       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing
I0819 13:49:39.018148       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing
I0819 13:49:39.062550       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ServiceCreated' Created Service/oauth-openshift -n openshift-authentication because it was missing
E0819 13:49:39.132951       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling service CA: config map has no service ca data: &v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"v4-0-config-system-service-ca", GenerateName:"", Namespace:"openshift-authentication", SelfLink:"/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-service-ca", UID:"3029cb8c-c288-11e9-abef-00505686b42b", ResourceVersion:"19121", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701819379, loc:(*time.Location)(0x2b543c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"oauth-openshift"}, Annotations:map[string]string{"service.alpha.openshift.io/inject-cabundle":"true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}
I0819 13:49:39.133340       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-08-19T13:49:39Z","message":"OperatorSyncDegraded: failed handling service CA: config map has no service ca data: \u0026v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"v4-0-config-system-service-ca\", GenerateName:\"\", Namespace:\"openshift-authentication\", SelfLink:\"/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-service-ca\", UID:\"3029cb8c-c288-11e9-abef-00505686b42b\", ResourceVersion:\"19121\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701819379, loc:(*time.Location)(0x2b543c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"oauth-openshift\"}, Annotations:map[string]string{\"service.alpha.openshift.io/inject-cabundle\":\"true\"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:\"\"}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Progressing"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Available"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
I0819 13:49:39.182297       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded changed from True to False ("OperatorSyncDegraded: failed handling service CA: config map has no service ca data: &v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"v4-0-config-system-service-ca\", GenerateName:\"\", Namespace:\"openshift-authentication\", SelfLink:\"/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-service-ca\", UID:\"3029cb8c-c288-11e9-abef-00505686b42b\", ResourceVersion:\"19121\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701819379, loc:(*time.Location)(0x2b543c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"oauth-openshift\"}, Annotations:map[string]string{\"service.alpha.openshift.io/inject-cabundle\":\"true\"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:\"\"}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}")
I0819 13:49:41.398033       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'SecretCreated' Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing
I0819 13:49:42.593332       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing
I0819 13:49:43.225903       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentCreated' Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing
E0819 13:49:53.830797       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
I0819 13:49:53.833427       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-08-19T13:49:39Z","message":"RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Progressing"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Available"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
I0819 13:49:53.850122       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "OperatorSyncDegraded: failed handling service CA: config map has no service ca data: &v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"v4-0-config-system-service-ca\", GenerateName:\"\", Namespace:\"openshift-authentication\", SelfLink:\"/api/v1/namespaces/openshift-authentication/configmaps/v4-0-config-system-service-ca\", UID:\"3029cb8c-c288-11e9-abef-00505686b42b\", ResourceVersion:\"19121\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63701819379, loc:(*time.Location)(0x2b543c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"oauth-openshift\"}, Annotations:map[string]string{\"service.alpha.openshift.io/inject-cabundle\":\"true\"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:\"\"}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}" to "RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout"
E0819 13:50:04.680965       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 13:50:15.501485       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 13:50:26.137654       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
I0819 13:50:53.258459       1 status_controller.go:164] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-08-19T13:50:53Z","message":"RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout","reason":"RouteHealthDegradedError","status":"True","type":"Degraded"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Progressing"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"NoData","status":"Unknown","type":"Available"},{"lastTransitionTime":"2019-08-19T13:48:19Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
I0819 13:50:53.270878       1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"6d3d24c2-c287-11e9-88f5-00505686d413", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded changed from False to True ("RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout")
W0819 13:53:52.210443       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19166 (22162)
E0819 13:54:03.822680       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.823021       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.823199       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.823290       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.823510       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.823686       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.824127       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.824587       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.825027       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.825372       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.825921       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.826377       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.823688       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.823761       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.823771       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.823779       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.823787       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
E0819 13:54:03.823796       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=721, ErrCode=NO_ERROR, debug=""
W0819 13:54:04.379908       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19339 (22240)
W0819 13:54:04.380108       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 19339 (22240)
W0819 13:54:04.407809       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 15767 (22240)
W0819 13:54:04.408028       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 12822 (17773)
W0819 13:54:04.408157       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 12721 (17513)
W0819 13:54:04.415323       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.OAuth ended with: too old resource version: 12917 (18068)
E0819 13:54:17.002839       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 13:54:27.698544       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 13:55:48.841770       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.842107       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.842291       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.868467       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.872949       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.907020       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.907388       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.943854       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.944179       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.944364       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.949162       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.955760       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.959776       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.960022       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:48.967948       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:49.009511       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:49.040388       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
E0819 13:55:49.040695       1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=255, ErrCode=NO_ERROR, debug=""
W0819 13:55:49.129882       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 19741 (22400)
W0819 13:55:49.130020       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 19111 (23156)
W0819 13:55:49.130095       1 reflector.go:270] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: watch of *v1.Authentication ended with: too old resource version: 19805 (23063)
W0819 13:55:49.130123       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Secret ended with: too old resource version: 23208 (23253)
W0819 13:55:49.130176       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.OAuth ended with: too old resource version: 18068 (23173)
W0819 13:55:49.130234       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.ClusterOperator ended with: too old resource version: 23112 (23172)
W0819 13:55:49.130287       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Ingress ended with: too old resource version: 17773 (23103)
W0819 13:55:49.130318       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22319 (23415)
W0819 13:55:49.130372       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Console ended with: too old resource version: 19162 (23063)
W0819 13:55:49.137393       1 reflector.go:270] github.com/openshift/client-go/config/informers/externalversions/factory.go:101: watch of *v1.Infrastructure ended with: too old resource version: 17513 (23207)
W0819 13:55:49.137466       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22380 (23415)
W0819 13:55:49.137494       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22380 (23415)
W0819 13:55:49.143569       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Service ended with: too old resource version: 19200 (22390)
W0819 13:55:49.232128       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 22380 (23415)
E0819 13:56:02.572971       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 13:56:13.263584       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 14:00:59.820225       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:01:03.155379       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23547 (24879)
W0819 14:01:51.150011       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23547 (25079)
W0819 14:02:03.169187       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23547 (25134)
W0819 14:03:09.534351       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 23549 (25407)
E0819 14:03:21.358807       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:05:44.257331       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
E0819 14:05:55.896932       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 14:08:30.179637       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 14:08:40.811089       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:09:04.160943       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25007 (26972)
W0819 14:09:10.173289       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25264 (26990)
W0819 14:10:00.538922       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25536 (27222)
E0819 14:10:12.363241       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:10:25.136686       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 23103 (24472)
W0819 14:10:36.155019       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 25204 (27398)
E0819 14:10:36.771666       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:14:16.166383       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27101 (28346)
W0819 14:15:35.543247       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27371 (28720)
W0819 14:15:39.159295       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27525 (28731)
W0819 14:15:39.301731       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
E0819 14:15:47.363069       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 14:15:57.994778       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 14:16:08.624141       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:17:23.177068       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 27121 (29167)
E0819 14:17:50.460124       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:21:41.175295       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28484 (30312)
W0819 14:22:34.182029       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 29303 (30546)
W0819 14:23:47.373889       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
E0819 14:23:59.015309       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:24:56.142621       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 27309 (28443)
W0819 14:25:06.547549       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28846 (31226)
E0819 14:25:07.776852       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 14:25:18.407211       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:25:24.163326       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 28857 (31320)
W0819 14:27:08.179903       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 30439 (31756)
E0819 14:28:30.181649       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:29:17.186235       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 30674 (32300)
W0819 14:32:07.551712       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 31371 (33078)
E0819 14:32:19.385124       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:32:20.459979       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
E0819 14:32:32.099320       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:33:19.167224       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 31448 (33377)
E0819 14:34:41.096673       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:35:50.184791       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 31882 (34052)
E0819 14:36:01.386717       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 14:36:12.026533       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:38:31.192263       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 32440 (34749)
W0819 14:39:22.556434       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 33205 (34964)
E0819 14:39:34.419279       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:42:05.544860       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 34191 (35703)
W0819 14:42:22.174548       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 33508 (35781)
W0819 14:44:41.149306       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 31278 (32597)
E0819 14:44:52.782124       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:44:54.560161       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 35108 (36442)
E0819 14:45:06.387927       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:46:54.198914       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 34875 (36970)
W0819 14:48:10.549384       1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 35843 (37298)
E0819 14:48:30.180644       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
E0819 14:48:40.819468       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
W0819 14:49:41.504229       1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
E0819 14:49:53.147333       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
$ oc logs oauth-openshift-dfffdbc88-q7w7x -n openshift-authentication
Command "openshift-osinserver" is deprecated, will be removed in 4.0
I0819 13:49:52.695973       1 clientca.go:92] [0] "/tmp/requestheader-client-ca-file771429603" client-ca certificate: "aggregator-signer" [] issuer="<self>" (2019-08-19 13:30:35 +0000 UTC to 2019-08-20 13:30:35 +0000 UTC (now=2019-08-19 13:49:52.695950863 +0000 UTC))
I0819 13:49:52.696928       1 clientca.go:92] [0] "/tmp/client-ca-file728276116" client-ca certificate: "admin-kubeconfig-signer" [] issuer="<self>" (2019-08-19 13:30:31 +0000 UTC to 2029-08-16 13:30:31 +0000 UTC (now=2019-08-19 13:49:52.696908068 +0000 UTC))
I0819 13:49:52.696999       1 clientca.go:92] [1] "/tmp/client-ca-file728276116" client-ca certificate: "kube-csr-signer_@1566222323" [] issuer="kubelet-signer" (2019-08-19 13:45:22 +0000 UTC to 2019-08-20 13:30:38 +0000 UTC (now=2019-08-19 13:49:52.696987669 +0000 UTC))
I0819 13:49:52.697069       1 clientca.go:92] [2] "/tmp/client-ca-file728276116" client-ca certificate: "kubelet-signer" [] issuer="<self>" (2019-08-19 13:30:38 +0000 UTC to 2019-08-20 13:30:38 +0000 UTC (now=2019-08-19 13:49:52.697056462 +0000 UTC))
I0819 13:49:52.697112       1 clientca.go:92] [3] "/tmp/client-ca-file728276116" client-ca certificate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2019-08-19 13:30:39 +0000 UTC to 2020-08-18 13:30:39 +0000 UTC (now=2019-08-19 13:49:52.697103253 +0000 UTC))
I0819 13:49:52.697154       1 clientca.go:92] [4] "/tmp/client-ca-file728276116" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2019-08-19 13:30:39 +0000 UTC to 2020-08-18 13:30:39 +0000 UTC (now=2019-08-19 13:49:52.697145653 +0000 UTC))
I0819 13:49:52.704670       1 secure_serving.go:66] Forcing use of http/1.1 only
I0819 13:49:52.705938       1 serving.go:195] [0] "/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt" serving certificate: "oauth-openshift.openshift-authentication.svc" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer="openshift-service-serving-signer@1566222339" (2019-08-19 13:49:39 +0000 UTC to 2021-08-18 13:49:40 +0000 UTC (now=2019-08-19 13:49:52.70592379 +0000 UTC))
I0819 13:49:52.706081       1 serving.go:195] [1] "/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1566222339" [] issuer="<self>" (2019-08-19 13:45:39 +0000 UTC to 2020-08-18 13:45:40 +0000 UTC (now=2019-08-19 13:49:52.706068423 +0000 UTC))
I0819 13:49:52.706151       1 secure_serving.go:136] Serving securely on 0.0.0.0:6443
I0819 13:49:52.706295       1 serving.go:77] Starting DynamicLoader
I0819 13:49:52.708998       1 clientca.go:58] Starting DynamicCA: /tmp/requestheader-client-ca-file771429603
I0819 13:49:52.710449       1 clientca.go:58] Starting DynamicCA: /tmp/client-ca-file728276116
I0819 13:51:20.768404       1 log.go:172] http: TLS handshake error from 10.254.0.1:56006: EOF
$ oc logs oauth-openshift-dfffdbc88-sf9dp -n openshift-authentication
Command "openshift-osinserver" is deprecated, will be removed in 4.0
I0819 13:49:53.334362       1 clientca.go:92] [0] "/tmp/requestheader-client-ca-file223858105" client-ca certificate: "aggregator-signer" [] issuer="<self>" (2019-08-19 13:30:35 +0000 UTC to 2019-08-20 13:30:35 +0000 UTC (now=2019-08-19 13:49:53.334325767 +0000 UTC))
I0819 13:49:53.334762       1 clientca.go:92] [0] "/tmp/client-ca-file491355458" client-ca certificate: "admin-kubeconfig-signer" [] issuer="<self>" (2019-08-19 13:30:31 +0000 UTC to 2029-08-16 13:30:31 +0000 UTC (now=2019-08-19 13:49:53.334750442 +0000 UTC))
I0819 13:49:53.334786       1 clientca.go:92] [1] "/tmp/client-ca-file491355458" client-ca certificate: "kube-csr-signer_@1566222323" [] issuer="kubelet-signer" (2019-08-19 13:45:22 +0000 UTC to 2019-08-20 13:30:38 +0000 UTC (now=2019-08-19 13:49:53.33477837 +0000 UTC))
I0819 13:49:53.334799       1 clientca.go:92] [2] "/tmp/client-ca-file491355458" client-ca certificate: "kubelet-signer" [] issuer="<self>" (2019-08-19 13:30:38 +0000 UTC to 2019-08-20 13:30:38 +0000 UTC (now=2019-08-19 13:49:53.334792144 +0000 UTC))
I0819 13:49:53.334810       1 clientca.go:92] [3] "/tmp/client-ca-file491355458" client-ca certificate: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2019-08-19 13:30:39 +0000 UTC to 2020-08-18 13:30:39 +0000 UTC (now=2019-08-19 13:49:53.334804261 +0000 UTC))
I0819 13:49:53.334821       1 clientca.go:92] [4] "/tmp/client-ca-file491355458" client-ca certificate: "kube-control-plane-signer" [] issuer="<self>" (2019-08-19 13:30:39 +0000 UTC to 2020-08-18 13:30:39 +0000 UTC (now=2019-08-19 13:49:53.334815107 +0000 UTC))
I0819 13:49:53.358022       1 secure_serving.go:66] Forcing use of http/1.1 only
I0819 13:49:53.360002       1 serving.go:195] [0] "/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt" serving certificate: "oauth-openshift.openshift-authentication.svc" [serving] validServingFor=[oauth-openshift.openshift-authentication.svc,oauth-openshift.openshift-authentication.svc.cluster.local] issuer="openshift-service-serving-signer@1566222339" (2019-08-19 13:49:39 +0000 UTC to 2021-08-18 13:49:40 +0000 UTC (now=2019-08-19 13:49:53.3599828 +0000 UTC))
I0819 13:49:53.360023       1 serving.go:195] [1] "/var/config/system/secrets/v4-0-config-system-serving-cert/tls.crt" serving certificate: "openshift-service-serving-signer@1566222339" [] issuer="<self>" (2019-08-19 13:45:39 +0000 UTC to 2020-08-18 13:45:40 +0000 UTC (now=2019-08-19 13:49:53.360014519 +0000 UTC))
I0819 13:49:53.360086       1 secure_serving.go:136] Serving securely on 0.0.0.0:6443
I0819 13:49:53.360212       1 serving.go:77] Starting DynamicLoader
I0819 13:49:53.366408       1 clientca.go:58] Starting DynamicCA: /tmp/requestheader-client-ca-file223858105
I0819 13:49:53.366713       1 clientca.go:58] Starting DynamicCA: /tmp/client-ca-file491355458
$ oc get routes.route.openshift.io --all-namespaces
NAMESPACE                  NAME                HOST/PORT                                                    PATH   SERVICES            PORT    TERMINATION            WILDCARD
openshift-authentication   oauth-openshift     oauth-openshift.apps.ocp67.nrgy.lan                                 oauth-openshift     6443    passthrough/Redirect   None
openshift-console          console             console-openshift-console.apps.ocp67.nrgy.lan                       console             https   reencrypt/Redirect     None
openshift-console          downloads           downloads-openshift-console.apps.ocp67.nrgy.lan                     downloads           http    edge                   None
openshift-monitoring       alertmanager-main   alertmanager-main-openshift-monitoring.apps.ocp67.nrgy.lan          alertmanager-main   web     reencrypt/Redirect     None
openshift-monitoring       grafana             grafana-openshift-monitoring.apps.ocp67.nrgy.lan                    grafana             https   reencrypt/Redirect     None
openshift-monitoring       prometheus-k8s      prometheus-k8s-openshift-monitoring.apps.ocp67.nrgy.lan             prometheus-k8s      web     reencrypt/Redirect     None

Fix TestRouterCerts

This step is invalid. It will work until you reflect reality and create a situation where they don't match. I can merge this now, but this will fail for your next test.

Originally posted by @deads2k in #183

Cannot connect to LDAP servers whose certificate has only a CN

I get the following error after updating to OKD 4.6.0-0.okd-2020-12-12-135354.

E1215 13:41:23.520778       1 login.go:171] Error authenticating "tf" with provider "ldap": LDAP Result Code 200 "Network Error": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0                                                                           

It looks like golang/go#39568 is the culprit. It used to work before, and I assume the new oauth-openshift deployment was compiled with a newer Go.
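If re-issuing the LDAP server certificate is an option, adding a subjectAltName avoids the problem entirely, since Go 1.15+ no longer falls back to the CN for hostname verification. A minimal sketch; the hostname ldap.example.com and the file names are placeholders:

$ openssl x509 -in ldap-server.crt -noout -text | grep -A1 "Subject Alternative Name"
$ # re-issue with a SAN (-addext needs OpenSSL 1.1.1+)
$ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -keyout ldap-server.key -out ldap-server.crt \
    -subj "/CN=ldap.example.com" \
    -addext "subjectAltName=DNS:ldap.example.com"

Setting GODEBUG=x509ignoreCN=0 on the oauth-openshift deployment could serve as a stopgap, but since the deployment is managed by this operator, a manual edit would likely be reverted.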

Authentication degraded v4.2

Can't get it to work anymore. Cluster setup was working fine a few times, but now I get Authentication degraded every time, and so the console does not come up either. Everything else seems OK.
This is what is see in logs:

~ oc logs authentication-operator-59bd6dffb8-xhssv -n openshift-authentication-operator

I1024 13:05:25.642299       1 status_controller.go:165] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-10-24T12:52:00Z","message":"RouteHealthDegraded: failed to GET route: EOF","reason":"RouteHealthDegradedFailedGet","status":"True","type":"Degraded"},{"lastTransitionTime":"2
019-10-24T12:47:26Z","reason":"NoData","status":"Unknown","type":"Progressing"},{"lastTransitionTime":"2019-10-24T12:47:26Z","reason":"NoData","status":"Unknown","type":"Available"},{"lastTransitionTime":"2019-10-24T12:47:26Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
I1024 13:05:25.668003       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"af3b5e80-f65b-11e9-abe5-001a4a160128", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Sta
tus for clusteroperator/authentication changed: Degraded message changed from "ResourceSyncControllerDegraded: Post https://172.30.0.1:443/api/v1/namespaces/openshift-config-managed/configmaps: http2: server sent GOAWAY and closed the connection; LastStreamID=479, ErrCode=NO_ERROR, debug=\"\"\nRouteHealthDegraded: fai
led to GET route: EOF" to "RouteHealthDegraded: failed to GET route: EOF"
~ oc version
Client Version: openshift-clients-4.2.0-201910041700
Server Version: 4.2.0
Kubernetes Version: v1.14.6+2e5ed54

I have DNS and HAProxy in front of it which was working well before.

What i did so far:

  1. Downloaded the latest installer from the link given at Install on Bare Metal: User-Provisioned Infrastructure
  2. Downloaded the given pull_secret
  3. Created install-config.yaml with given pull_secret and ssh key
  4. Generated new ignition configs.
  5. Uploaded those to Matchbox and restarted service
  6. Downloaded given RHCOS images and uploaded to TFTP
  7. Did iPXE boot

Basically, I ensured that I have the latest versions of all required components.
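Since the only symptom here is "failed to GET route: EOF", it can help to make the same request the operator makes, once through HAProxy and once against a router pod's node directly; if the EOF only shows up through the load balancer, its 443 frontend/backend is the likely culprit. A rough sketch, assuming the default route name and that /healthz is a reachable path on the OAuth server; <cluster-domain> and <router-node-ip> are placeholders:

$ oc -n openshift-authentication get route oauth-openshift -o jsonpath='{.spec.host}{"\n"}'
$ curl -kv https://oauth-openshift.apps.<cluster-domain>/healthz
$ oc -n openshift-ingress get pods -o wide
$ curl -kv --resolve oauth-openshift.apps.<cluster-domain>:443:<router-node-ip> \
    https://oauth-openshift.apps.<cluster-domain>/healthz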

openshift-install Authentication Operator does not reach AVAILABLE

When installing OpenShift 4.3.29 on bare metal, almost all operators come up successfully, but the Authentication Operator does not reach the AVAILABLE state.

[root@support tmp]# oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                                       Unknown     Unknown       True       23h
cloud-credential                           4.3.29    True        False         False      26h
cluster-autoscaler                         4.3.29    True        False         False      23h
console                                    4.3.29    True        False         False      4h8m
....

I have 2 workers and one master, and am using HAProxy for my load balancer. I've verified its configuration, and all nodes are showing as healthy and available. I've also verified connectivity between all nodes. Details are below; please let me know if any more are needed.

While running openshift-install, the Authentication Operator does not reach the AVAILABLE state.

oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          26h     Unable to apply 4.3.29: the cluster operator authentication has not yet successfully rolled out
oc describe co authentication
Name:         authentication
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2020-08-04T11:47:35Z
  Generation:          1
  Resource Version:    380628
  Self Link:           /apis/config.openshift.io/v1/clusteroperators/authentication
  UID:                 8ffd7d5c-9fd2-443e-ac8a-de8724988dc7
Spec:
Status:
  Conditions:
    Last Transition Time:  2020-08-04T12:16:16Z
    Message:               RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout
    Reason:                RouteHealthDegradedFailedGet
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2020-08-04T11:47:35Z
    Reason:                NoData
    Status:                Unknown
    Type:                  Progressing
.....
oc describe co/authentication
Name:         authentication
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2020-08-04T11:47:35Z
  Generation:          1
  Resource Version:    26914
  Self Link:           /apis/config.openshift.io/v1/clusteroperators/authentication
  UID:                 8ffd7d5c-9fd2-443e-ac8a-de8724988dc7
Spec:
Status:
  Conditions:
    Last Transition Time:  2020-08-04T12:16:16Z
    Message:               RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout
    Reason:                RouteHealthDegradedFailedGet
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2020-08-04T11:47:35Z
    Reason:                NoData
    Status:                Unknown
    Type:                  Progressing
[root@support tmp]# oc logs authentication-operator-5d9d9d49fd-chkqt -n openshift-authentication-operator
Copying system trust bundle
I0804 11:49:32.801173       1 cmd.go:188] Using service-serving-cert provided certificates
I0804 11:49:32.821707       1 observer_polling.go:137] Starting file observer
I0804 11:49:32.838408       1 observer_polling.go:137] Starting file observer
I0804 11:49:34.180699       1 secure_serving.go:123] Serving securely on [::]:8443
I0804 11:49:34.185239       1 leaderelection.go:241] attempting to acquire leader lease  openshift-authentication-operator/cluster-authentication-operator-lock...
I0804 11:50:41.519473       1 leaderelection.go:251] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock
I0804 11:50:41.543085       1 event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"b34216f4-e009-46ea-bf30-88b58963f0f5", APIVersion:"v1", ResourceVersion:"18487", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 1a39105e-7fad-400d-8eb8-c5b3230adf15 became leader
I0804 11:50:41.649148       1 resourcesync_controller.go:218] Starting ResourceSyncController
I0804 11:50:41.649325       1 remove_stale_conditions.go:71] Starting RemoveStaleConditions
I0804 11:50:41.649379       1 status_controller.go:189] Starting StatusSyncer-authentication
I0804 11:50:41.649415       1 unsupportedconfigoverrides_controller.go:152] Starting UnsupportedConfigOverridesController
I0804 11:50:41.649462       1 logging_controller.go:83] Starting LogLevelController
I0804 11:50:41.649500       1 controller.go:205] Starting RouterCertsDomainValidationController
I0804 11:50:41.649585       1 management_state_controller.go:102] Starting management-state-controller-authentication
I0804 11:50:41.649896       1 controller.go:53] Starting AuthenticationOperator2
E0804 11:50:42.004416       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp4-1.example.internal: []
E0804 11:50:42.014917       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: failed handling the route: route is not available at canonical host oauth-openshift.apps.ocp4-1.example.internal: []
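"route is not available at canonical host ...: []" appears to mean that the route's status.ingress list is empty, i.e. no ingress controller has admitted the route yet, so the problem is usually on the ingress/DNS side rather than in the OAuth server itself. A few checks that typically narrow it down; names and namespaces are the cluster defaults, and the hostname follows this cluster's domain:

$ oc -n openshift-authentication get route oauth-openshift -o jsonpath='{.status.ingress}{"\n"}'
$ oc get co ingress
$ oc -n openshift-ingress get pods -o wide
$ dig +short oauth-openshift.apps.ocp4-1.example.internal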

panic: osins.osin.openshift.io "openshift-osin" is forbidden: caches not synchronized

$ oc get clusterversion
NAME      VERSION                           AVAILABLE   PROGRESSING   SINCE     STATUS
version   4.0.0-0.alpha-2019-01-12-162217   False       True          12m       Unable to apply 4.0.0-0.alpha-2019-01-12-162217: a required extension is not available to update
$ oc logs -f -n openshift-core-operators pods/origin-cluster-osin-operator2-5f4666bd78-4pbgh                                                                                                                       
ERROR: logging before flag.Parse: W0112 20:07:59.848110       1 cmd.go:127] Using insecure, self-signed certificates                                                                                               
ERROR: logging before flag.Parse: I0112 20:07:59.848566       1 crypto.go:459] Generating new CA for cluster-osin-operator2-signer@1547323679 cert, and key in /tmp/serving-cert-148521676/serving-signer.crt, /tmp
/serving-cert-148521676/serving-signer.key                                                                                                                                                                         
ERROR: logging before flag.Parse: I0112 20:08:00.497211       1 crypto.go:536] Generating server certificate in /tmp/serving-cert-148521676/tls.crt, key in /tmp/serving-cert-148521676/tls.key                    
ERROR: logging before flag.Parse: I0112 20:08:00.748096       1 observer_polling.go:37] Adding reactor for file "/var/run/configmaps/config/operator-config.yaml"                                                  
ERROR: logging before flag.Parse: I0112 20:08:00.748260       1 observer_polling.go:37] Adding reactor for file "/var/run/secrets/serving-cert/tls.crt"                                                            
ERROR: logging before flag.Parse: I0112 20:08:00.748346       1 observer_polling.go:37] Adding reactor for file "/var/run/secrets/serving-cert/tls.key"                                                            
ERROR: logging before flag.Parse: I0112 20:08:00.748719       1 observer_polling.go:96] Starting file observer                                                                                                     
ERROR: logging before flag.Parse: I0112 20:08:01.574507       1 serve.go:96] Serving securely on 0.0.0.0:8443                                                                                                      
ERROR: logging before flag.Parse: I0112 20:08:01.575501       1 leaderelection.go:185] attempting to acquire leader lease  openshift-core-operators/cluster-osin-operator2-lock...                                 
ERROR: logging before flag.Parse: I0112 20:08:01.578809       1 leaderelection.go:253] lock is held by 804fd06e-16a5-11e9-8974-0a580a820003 and has not yet expired                                                
ERROR: logging before flag.Parse: I0112 20:08:01.578827       1 leaderelection.go:190] failed to acquire lease openshift-core-operators/cluster-osin-operator2-lock                                                
ERROR: logging before flag.Parse: I0112 20:08:03.753321       1 leaderelection.go:253] lock is held by 804fd06e-16a5-11e9-8974-0a580a820003 and has not yet expired                                                
ERROR: logging before flag.Parse: I0112 20:08:03.753346       1 leaderelection.go:190] failed to acquire lease openshift-core-operators/cluster-osin-operator2-lock                                                
ERROR: logging before flag.Parse: I0112 20:08:08.001561       1 leaderelection.go:253] lock is held by 804fd06e-16a5-11e9-8974-0a580a820003 and has not yet expired                                                
ERROR: logging before flag.Parse: I0112 20:08:08.001702       1 leaderelection.go:190] failed to acquire lease openshift-core-operators/cluster-osin-operator2-lock                                                
ERROR: logging before flag.Parse: I0112 20:08:11.777108       1 leaderelection.go:253] lock is held by 804fd06e-16a5-11e9-8974-0a580a820003 and has not yet expired                                               
ERROR: logging before flag.Parse: I0112 20:08:11.777731       1 leaderelection.go:190] failed to acquire lease openshift-core-operators/cluster-osin-operator2-lock                                               
ERROR: logging before flag.Parse: I0112 20:08:15.661233       1 leaderelection.go:253] lock is held by 804fd06e-16a5-11e9-8974-0a580a820003 and has not yet expired                                               
ERROR: logging before flag.Parse: I0112 20:08:15.661651       1 leaderelection.go:190] failed to acquire lease openshift-core-operators/cluster-osin-operator2-lock                                               
ERROR: logging before flag.Parse: I0112 20:08:19.283125       1 leaderelection.go:194] successfully acquired lease openshift-core-operators/cluster-osin-operator2-lock                                           
ERROR: logging before flag.Parse: I0112 20:08:19.283988       1 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-core-operators", Name:"cluster-osin-operator2-lock", UID:"137814e9-1$
a5-11e9-abfa-664f163f5f0f", APIVersion:"v1", ResourceVersion:"3841", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' c36ef16f-16a5-11e9-bd77-0a580a820003 became leader                                   
ERROR: logging before flag.Parse: I0112 20:08:21.301683       1 leaderelection.go:209] successfully renewed lease openshift-core-operators/cluster-osin-operator2-lock                                            
ERROR: logging before flag.Parse: I0112 20:08:23.311653       1 leaderelection.go:209] successfully renewed lease openshift-core-operators/cluster-osin-operator2-lock                                            
ERROR: logging before flag.Parse: I0112 20:08:25.335608       1 leaderelection.go:209] successfully renewed lease openshift-core-operators/cluster-osin-operator2-lock                                            
ERROR: logging before flag.Parse: I0112 20:08:27.398161       1 leaderelection.go:209] successfully renewed lease openshift-core-operators/cluster-osin-operator2-lock                                            
panic: osins.osin.openshift.io "openshift-osin" is forbidden: caches not synchronized                                                                                                                             
                                                                                                                                                                                                                  
goroutine 81 [running]:                                                                                                                                                                                           
github.com/openshift/cluster-osin-operator/vendor/github.com/openshift/library-go/pkg/operator/v1helpers.EnsureOperatorConfigExists(0x1c2bf60, 0xc4200b7378, 0xc4204cf5e0, 0x91, 0xa0, 0x1a5e164, 0x11, 0x1a529f5,
0x8, 0x1a4e7be, ...)                                                                                                                                                                                              
        /go/src/github.com/openshift/cluster-osin-operator/vendor/github.com/openshift/library-go/pkg/operator/v1helpers/helpers.go:99 +0x34e                                                                     
github.com/openshift/cluster-osin-operator/pkg/operator2.RunOperator(0xc4205b1bc0, 0x0, 0x434d48)                                                                                                                 
        /go/src/github.com/openshift/cluster-osin-operator/pkg/operator2/starter.go:83 +0x5a6                                                                                                                     
github.com/openshift/cluster-osin-operator/vendor/github.com/openshift/library-go/pkg/controller/controllercmd.(*ControllerBuilder).Run.func2(0xc420646180)                                                       
        /go/src/github.com/openshift/cluster-osin-operator/vendor/github.com/openshift/library-go/pkg/controller/controllercmd/builder.go:213 +0x4f                                                               
created by github.com/openshift/cluster-osin-operator/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run                                                                                           
        /go/src/github.com/openshift/cluster-osin-operator/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:155 +0x92

Looks like it got restarted and succeeded on the sixth attempt.

Authentication degraded v4.1 - unable to check route health and too old resource version

The auth service does not start on a fresh installation because of the following errors:

$ openshift-install version

./openshift-install v4.1.0-201905171742-dirty
built from commit 6ba66dbb6c2c53e1901a6d167d1c813bbbf27f4d
release image quay.io/openshift-release-dev/ocp-release@sha256:dc67ad5edd91ca48402309fe0629593e5ae3333435ef8d0bc52c2b62ca725021

$ oc get nodes

NAME         STATUS   ROLES    AGE   VERSION
os-master1   Ready    master   13h   v1.13.4+27816e1b1
os-master2   Ready    master   13h   v1.13.4+27816e1b1
os-master3   Ready    master   13h   v1.13.4+27816e1b1
os-node1     Ready    worker   13h   v1.13.4+27816e1b1
os-node2     Ready    worker   13h   v1.13.4+27816e1b1
os-node3     Ready    worker   13h   v1.13.4+27816e1b1

$ oc get co

NAME                                  VERSION      AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                                     Unknown     Unknown       True       12h
cloud-credential                      4.1.0-rc.5   True        False         False      12h
cluster-autoscaler                    4.1.0-rc.5   True        False         False      12h
console                               4.1.0-rc.5   False       True          False      12h
dns                                   4.1.0-rc.5   True        False         False      12h
image-registry                        4.1.0-rc.5   True        False         False      12h
ingress                               4.1.0-rc.5   True        False         False      12h
kube-apiserver                        4.1.0-rc.5   True        False         False      12h
kube-controller-manager               4.1.0-rc.5   True        False         False      12h
kube-scheduler                        4.1.0-rc.5   True        False         False      12h
machine-api                           4.1.0-rc.5   True        False         False      12h
machine-config                        4.1.0-rc.5   True        False         False      12h
marketplace                           4.1.0-rc.5   True        False         False      23m
monitoring                            4.1.0-rc.5   True        False         False      22m
network                               4.1.0-rc.5   True        False         False      12h
node-tuning                           4.1.0-rc.5   True        False         False      12h
openshift-apiserver                   4.1.0-rc.5   True        False         False      12h
openshift-controller-manager          4.1.0-rc.5   True        False         False      12h
openshift-samples                     4.1.0-rc.5   True        False         False      12h
operator-lifecycle-manager            4.1.0-rc.5   True        False         False      12h
operator-lifecycle-manager-catalog    4.1.0-rc.5   True        False         False      12h
service-ca                            4.1.0-rc.5   True        False         False      12h
service-catalog-apiserver             4.1.0-rc.5   True        False         False      12h
service-catalog-controller-manager    4.1.0-rc.5   True        False         False      12h
storage                               4.1.0-rc.5   True        False         False      12h

$ oc get co authentication -oyaml

apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: 2019-06-05T18:16:06Z
  generation: 1
  name: authentication
  resourceVersion: "21493"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/authentication
  uid: fc937d71-87bd-11e9-a2a2-005056ab11e4
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-06-05T18:17:06Z
    message: 'Degraded: error checking current version: unable to check route health:
      failed to GET route: EOF'
    reason: DegradedOperatorSyncLoopError
    status: "True"
    type: Degraded
  - lastTransitionTime: 2019-06-05T18:16:06Z
    reason: NoData
    status: Unknown
    type: Progressing
  - lastTransitionTime: 2019-06-05T18:16:06Z
    reason: NoData
    status: Unknown
    type: Available
  - lastTransitionTime: 2019-06-05T18:16:06Z
    reason: NoData
    status: Unknown
    type: Upgradeable
  extension: null
  relatedObjects:
  - group: operator.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: infrastructures
  - group: config.openshift.io
    name: cluster
    resource: oauths
  - group: ""
    name: openshift-config
    resource: namespaces
  - group: ""
    name: openshift-config-managed
    resource: namespaces
  - group: ""
    name: openshift-authentication
    resource: namespaces
  - group: ""
    name: authentication-operator
    resource: namespaces

$ oc get pods --all-namespaces | grep -i auth

openshift-authentication-operator authentication-operator-6c9d7dc7b-zt8gx 1/1 Running 1 12h
openshift-authentication oauth-openshift-7fc9c8bd5d-85wj6 1/1 Running 0 22m
openshift-authentication oauth-openshift-7fc9c8bd5d-hbtrj 1/1 Running 0 22m

$ oc logs authentication-operator-6c9d7dc7b-zt8gx -n openshift-authentication-operator

I0606 06:41:51.169806 1 cmd.go:138] Using service-serving-cert provided certificates
I0606 06:41:51.171559 1 observer_polling.go:106] Starting file observer
I0606 06:41:52.688189 1 secure_serving.go:116] Serving securely on 0.0.0.0:8443
I0606 06:41:52.689646 1 leaderelection.go:205] attempting to acquire leader lease openshift-authentication-operator/cluster-authentication-operator-lock...
I0606 06:42:57.489872 1 leaderelection.go:214] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock
I0606 06:42:57.491075 1 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"fc7f90d4-87bd-11e9-a2a2-005056ab11e4", APIVersion:"v1", ResourceVersion:"195321", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 2b3a6f30-8826-11e9-b86d-0a580a820031 became leader
I0606 06:42:57.493049 1 status_controller.go:183] Starting StatusSyncer-authentication
I0606 06:42:57.497202 1 controller.go:53] Starting AuthenticationOperator2
I0606 06:42:57.497367 1 resourcesync_controller.go:219] Starting ResourceSyncController
I0606 06:42:59.930408 1 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"20813df4-87bd-11e9-a3df-005056ab5877", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed
E0606 06:42:59.967383 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:02.550516 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:05.130920 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:07.526528 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:09.920814 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:12.322594 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:14.722355 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:17.124281 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:19.533355 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:22.119572 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:24.523253 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:26.923203 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:31.702182 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:34.068468 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:43:37.594117 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:44:59.950581 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 06:47:44.230719 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
W0606 06:47:59.529700 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 195321 (197129)
E0606 06:48:01.170534 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
W0606 06:48:31.527213 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 195321 (197330)
W0606 06:48:42.516251 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 195321 (197408)
W0606 06:50:50.514268 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 195321 (198201)
E0606 06:53:12.347069 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
W0606 06:54:45.532230 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 197554 (199722)
W0606 06:55:32.536189 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 197356 (200010)
E0606 06:55:34.167676 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
W0606 06:56:50.550991 1 reflector.go:270] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
E0606 06:56:51.993818 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
W0606 06:57:00.519197 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 198432 (200558)
W0606 06:57:18.532695 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.Deployment ended with: too old resource version: 195622 (196329)
E0606 06:57:19.966126 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
W0606 06:58:00.523602 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 197621 (200966)
E0606 07:02:58.739133 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
E0606 07:03:01.141332 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
W0606 07:04:08.541948 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 200234 (203310)
E0606 07:04:10.178204 1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: EOF
W0606 07:04:21.541270 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 199935 (203385)
W0606 07:04:38.524900 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 200799 (203498)
W0606 07:05:41.529327 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1.ConfigMap ended with: too old resource version: 201182 (203881)
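With "unable to check route health: failed to GET route: EOF", the operator is opening a TLS connection to the oauth-openshift route host and the connection is closed before a response arrives, which usually points at the load balancer or at no router actually answering on 443 for the *.apps domain. A rough way to confirm from a master node; this is only a sketch, assuming the default route and service names, and that /healthz is a reachable path on the OAuth server:

$ oc -n openshift-authentication get route oauth-openshift -o jsonpath='{.spec.host}{"\n"}'
$ oc -n openshift-authentication get endpoints oauth-openshift
$ curl -kv https://$(oc -n openshift-authentication get route oauth-openshift -o jsonpath='{.spec.host}')/healthz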

4.2 Authentication Operator Fails on vSphere Install

I'm setting up OpenShift 4.2 on vSphere. I have the cluster running and all the operators are running except the Authentication Operator, which is throwing the error below.
I have 3 workers and two masters, and am using HAProxy for my load balancer. I've verified its configuration, and all nodes are showing as healthy and available. I've also verified connectivity between all nodes. Details are below; please let me know if any more are needed.

  Conditions:
    Last Transition Time:  2019-12-05T15:21:50Z
    Reason:                AsExpected
    Status:                False
    Type:                  Degraded
    Last Transition Time:  2019-12-05T00:23:36Z
    Message:               Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.6.202.40:6443/.well-known/oauth-authorization-server endpoint data
    Reason:                ProgressingWellKnownNotReady
    Status:                True
    Type:                  Progressing
    Last Transition Time:  2019-12-05T00:23:36Z
    Reason:                Available
    Status:                False
    Type:                  Available
    Last Transition Time:  2019-12-05T00:23:36Z
    Reason:                AsExpected
    Status:                True
    Type:                  Upgradeable

I'm running this command to complete the install

openshift-install wait-for install-complete
INFO Cluster operator authentication Progressing is True with ProgressingWellKnownNotReady: Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.6.202.40:6443/.well-known/oauth-authorization-server endpoint data
INFO Cluster operator authentication Available is False with Available:
INFO Cluster operator insights Disabled is False with :
FATAL failed to initialize the cluster: Cluster operator authentication is still updating
[root@JX2LUTL01 ~]# oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                                       False       True          False      4d23h
cloud-credential                           4.2.2     True        False         False      5d6h
cluster-autoscaler                         4.2.2     True        False         False      5d6h
console                                    4.2.2     True        False         False      3d8h
dns                                        4.2.2     True        False         False      3d8h
image-registry                             4.2.2     True        False         False      3d8h
ingress                                    4.2.2     True        False         False      3d8h
insights                                   4.2.2     True        False         False      5d6h
kube-apiserver                             4.2.2     True        False         False      5d6h
kube-controller-manager                    4.2.2     True        False         False      5d6h
kube-scheduler                             4.2.2     True        False         False      5d6h
machine-api                                4.2.2     True        False         False      5d6h
machine-config                             4.2.2     True        False         False      5d5h
marketplace                                4.2.2     True        False         False      4d8h
monitoring                                 4.2.2     True        False         False      3d8h
network                                    4.2.2     True        False         False      5d6h
node-tuning                                4.2.2     True        False         False      3d8h
openshift-apiserver                        4.2.2     True        False         False      3d8h
openshift-controller-manager               4.2.2     True        False         False      3d8h
openshift-samples                          4.2.2     True        False         False      5d6h
operator-lifecycle-manager                 4.2.2     True        False         False      5d6h
operator-lifecycle-manager-catalog         4.2.2     True        False         False      5d6h
operator-lifecycle-manager-packageserver   4.2.2     True        False         False      3d8h
service-ca                                 4.2.2     True        False         False      5d6h
service-catalog-apiserver                  4.2.2     True        False         False      5d6h
service-catalog-controller-manager         4.2.2     True        False         False      5d6h
storage                                    4.2.2     True        False         False      5d6h
[root@JX2LUTL01 ~]# oc get pods -n=openshift-authentication-operator
NAME                                       READY   STATUS    RESTARTS   AGE
authentication-operator-75ffd7fb6c-w85qx   1/1     Running   0          4h8m
[root@JX2LUTL01 ~]# oc get pods -n=openshift-authentication
NAME                               READY   STATUS    RESTARTS   AGE
oauth-openshift-56d9f65fd7-2nr4r   1/1     Running   0          4h8m
oauth-openshift-56d9f65fd7-dk2mh   1/1     Running   0          4h7m
oc describe co authentication
Name:         authentication
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2019-12-05T00:23:36Z
  Generation:          1
  Resource Version:    2075821
  Self Link:           /apis/config.openshift.io/v1/clusteroperators/authentication
  UID:                 7a6cb015-16f5-11ea-8114-005056a3b687
Spec:
Status:
  Conditions:
    Last Transition Time:  2019-12-09T19:36:33Z
    Reason:                AsExpected
    Status:                False
    Type:                  Degraded
    Last Transition Time:  2019-12-05T00:23:36Z
    Message:               Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.6.202.40:6443/.well-known/oauth-authorization-server endpoint data
    Reason:                ProgressingWellKnownNotReady
    Status:                True
    Type:                  Progressing
    Last Transition Time:  2019-12-05T00:23:36Z
    Reason:                Available
    Status:                False
    Type:                  Available
    Last Transition Time:  2019-12-05T00:23:36Z
    Reason:                AsExpected
    Status:                True
    Type:                  Upgradeable
  Extension:               <nil>
  Related Objects:
    Group:     operator.openshift.io
    Name:      cluster
    Resource:  authentications
    Group:     config.openshift.io
    Name:      cluster
    Resource:  authentications
    Group:     config.openshift.io
    Name:      cluster
    Resource:  infrastructures
    Group:     config.openshift.io
    Name:      cluster
    Resource:  oauths
    Group:
    Name:      openshift-config
    Resource:  namespaces
    Group:
    Name:      openshift-config-managed
    Resource:  namespaces
    Group:
    Name:      openshift-authentication
    Resource:  namespaces
    Group:
    Name:      openshift-authentication-operator
    Resource:  namespaces
Events:        <none>

This command works from all masters and workers

curl https://10.6.202.40:6443/.well-known/oauth-authorization-server -k
{
  "paths": [
    "/apis",
    "/metrics",
    "/version"
  ]
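For comparison, when the OAuth metadata has been wired into the kube-apiserver, the same curl should return a JSON document describing the OAuth server rather than a path listing. A response shaped roughly like the sketch below is expected (hostnames are illustrative); a 404 or a {"paths": [...]} body like the one above means the kube-apiserver has not yet rolled out with the oauthMetadata the authentication operator publishes, or the request is not reaching a kube-apiserver at all.

$ curl -k https://10.6.202.40:6443/.well-known/oauth-authorization-server
{
  "issuer": "https://oauth-openshift.apps.<cluster-domain>",
  "authorization_endpoint": "https://oauth-openshift.apps.<cluster-domain>/oauth/authorize",
  "token_endpoint": "https://oauth-openshift.apps.<cluster-domain>/oauth/token",
  ...
}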
[root@JX2LUTL01 ~]# oc logs oauth-openshift-56d9f65fd7-2nr4r -n=openshift-authentication
Copying system trust bundle
I1209 19:38:36.928941       1 secure_serving.go:65] Forcing use of http/1.1 only
I1209 19:38:36.929030       1 secure_serving.go:127] Serving securely on 0.0.0.0:6443
[root@JX2LUTL01 ~]# oc logs authentication-operator-75ffd7fb6c-w85qx -n=openshift-authentication-operator
Copying system trust bundle
I1209 19:37:24.376262       1 observer_polling.go:116] Starting file observer
I1209 19:37:24.376264       1 cmd.go:188] Using service-serving-cert provided certificates
I1209 19:37:24.377067       1 observer_polling.go:116] Starting file observer
I1209 19:37:24.884709       1 secure_serving.go:116] Serving securely on 0.0.0.0:8443
I1209 19:37:24.885329       1 leaderelection.go:217] attempting to acquire leader lease  openshift-authentication-operator/cluster-authentication-operator-lock...
I1209 19:38:25.477171       1 leaderelection.go:227] successfully acquired lease openshift-authentication-operator/cluster-authentication-operator-lock
I1209 19:38:25.477300       1 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"openshift-authentication-operator", Name:"cluster-authentication-operator-lock", UID:"faa2572c-16bb-11ea-b2f3-005056a3e087", APIVersion:"v1", ResourceVersion:"2075606", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' 53698669-1abb-11ea-968e-0a580afe0254 became leader
I1209 19:38:25.479815       1 remove_stale_conditions.go:71] Starting RemoveStaleConditions
I1209 19:38:25.479910       1 status_controller.go:188] Starting StatusSyncer-authentication
I1209 19:38:25.480105       1 unsupportedconfigoverrides_controller.go:151] Starting UnsupportedConfigOverridesController
I1209 19:38:25.480117       1 logging_controller.go:82] Starting LogLevelController
I1209 19:38:25.480124       1 controller.go:204] Starting RouterCertsDomainValidationController
I1209 19:38:25.480130       1 management_state_controller.go:101] Starting management-state-controller-authentication
I1209 19:38:25.480261       1 controller.go:53] Starting AuthenticationOperator2
I1209 19:38:25.480921       1 resourcesync_controller.go:217] Starting ResourceSyncController
I1209 19:38:28.299980       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"309b7dbf-16bb-11ea-88fe-005056a3055e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed
E1209 19:38:54.527028       1 controller.go:129] {AuthenticationOperator2 AuthenticationOperator2} failed with: error checking current version: unable to check route health: failed to GET route: net/http: TLS handshake timeout
I1209 19:38:54.527633       1 status_controller.go:165] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-12-09T19:36:33Z","message":"RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2019-12-05T00:23:36Z","message":"Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.6.202.40:6443/.well-known/oauth-authorization-server endpoint data","reason":"ProgressingWellKnownNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2019-12-05T00:23:36Z","reason":"Available","status":"False","type":"Available"},{"lastTransitionTime":"2019-12-05T00:23:36Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
I1209 19:38:54.532774       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"309b7dbf-16bb-11ea-88fe-005056a3055e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "" to "RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout"
I1209 19:38:55.951891       1 status_controller.go:165] clusteroperator/authentication diff {"status":{"conditions":[{"lastTransitionTime":"2019-12-09T19:36:33Z","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2019-12-05T00:23:36Z","message":"Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.6.202.40:6443/.well-known/oauth-authorization-server endpoint data","reason":"ProgressingWellKnownNotReady","status":"True","type":"Progressing"},{"lastTransitionTime":"2019-12-05T00:23:36Z","reason":"Available","status":"False","type":"Available"},{"lastTransitionTime":"2019-12-05T00:23:36Z","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}
I1209 19:38:55.956704       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-authentication-operator", Name:"authentication-operator", UID:"309b7dbf-16bb-11ea-88fe-005056a3055e", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/authentication changed: Degraded message changed from "RouteHealthDegraded: failed to GET route: net/http: TLS handshake timeout" to ""
W1209 19:44:15.503371       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2075606 (2077071)
W1209 19:46:22.589372       1 reflector.go:289] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
W1209 19:46:42.500749       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2075606 (2077744)
W1209 19:46:58.500759       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2075606 (2077814)
W1209 19:47:37.499418       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2075606 (2078011)
W1209 19:48:07.508717       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Deployment ended with: too old resource version: 2075843 (2077090)
W1209 19:49:37.508003       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2077244 (2078512)
W1209 19:52:52.506131       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2077904 (2079390)
W1209 19:53:03.503265       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2078164 (2079439)
W1209 19:53:04.504946       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2077981 (2079441)
W1209 19:55:03.515775       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2078651 (2079949)
W1209 19:58:15.511349       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2079535 (2080770)
W1209 20:01:26.507378       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2079591 (2081614)
W1209 20:01:41.509589       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2079598 (2081689)
W1209 20:01:54.520611       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2080089 (2081752)
W1209 20:01:55.514431       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Deployment ended with: too old resource version: 2078063 (2080300)
W1209 20:04:04.671306       1 reflector.go:289] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
W1209 20:05:22.516274       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2080936 (2082640)
W1209 20:08:14.511160       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2081774 (2083411)
W1209 20:10:30.517731       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2081836 (2083962)
W1209 20:11:46.525473       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2081901 (2084304)
W1209 20:12:28.521079       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2082779 (2084483)
W1209 20:16:13.514966       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2083551 (2085413)
W1209 20:17:04.538829       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2084099 (2085655)
W1209 20:17:16.721612       1 reflector.go:289] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
W1209 20:17:48.525928       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2084623 (2085851)
W1209 20:19:33.519985       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Deployment ended with: too old resource version: 2081900 (2083208)
W1209 20:20:10.530486       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2084455 (2086434)
W1209 20:24:02.530452       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2085995 (2087464)
W1209 20:24:13.518678       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2085580 (2087511)
W1209 20:25:48.544517       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2085817 (2087921)
W1209 20:26:55.535049       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2086577 (2088235)
W1209 20:27:58.523566       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.Deployment ended with: too old resource version: 2085764 (2086828)
W1209 20:30:17.536340       1 reflector.go:289] k8s.io/client-go/informers/factory.go:133: watch of *v1.ConfigMap ended with: too old resource version: 2087607 (2089082)
W1209 20:31:44.803184       1 reflector.go:289] github.com/openshift/client-go/route/informers/externalversions/factory.go:101: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
[root@JX2LUTL01 ~]# oc describe pod authentication-operator-75ffd7fb6c-w85qx -n=openshift-authentication-operator
 State:          Running
      Started:      Mon, 09 Dec 2019 14:37:24 -0500
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:     10m
      memory:  50Mi
    Environment:
      IMAGE:                   quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3e00a6a92d9443240151e5eaab44cc24a25d8fcb833c29404552494064923cb
      OPERATOR_IMAGE_VERSION:  4.2.2
      OPERAND_IMAGE_VERSION:   4.2.2_openshift
      POD_NAME:                authentication-operator-75ffd7fb6c-w85qx (v1:metadata.name)
    Mounts:
      /var/run/configmaps/config from config (rw)
      /var/run/configmaps/trusted-ca-bundle from trusted-ca-bundle (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from authentication-operator-token-7d6z8 (ro)
      /var/run/secrets/serving-cert from serving-cert (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      authentication-operator-config
    Optional:  false
  trusted-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      trusted-ca-bundle
    Optional:  true
  serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  serving-cert
    Optional:    true
  authentication-operator-token-7d6z8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  authentication-operator-token-7d6z8
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 120s
                 node.kubernetes.io/unreachable:NoExecute for 120s
Events:          <none>

[root@JX2LUTL01 ~]# oc describe pods oauth-openshift-56d9f65fd7-2nr4r -n=openshift-authentication
Name:                 oauth-openshift-56d9f65fd7-2nr4r
Namespace:            openshift-authentication
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 master2.openshift.bdlab.local/10.6.202.38
Start Time:           Mon, 09 Dec 2019 14:38:28 -0500
Labels:               app=oauth-openshift
                      pod-template-hash=56d9f65fd7
Annotations:          k8s.v1.cni.cncf.io/networks-status:
                        [{
                            "name": "openshift-sdn",
                            "interface": "eth0",
                            "ips": [
                                "10.254.1.216"
                            ],
                            "default": true,
                            "dns": {}
                        }]
                      openshift.io/scc: anyuid
                      operator.openshift.io/pull-spec:
                        quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3e00a6a92d9443240151e5eaab44cc24a25d8fcb833c29404552494064923cb
                      operator.openshift.io/rvs-hash: XxhPdZ7p7TZEWwxM-1X9YUaDpKJP34ykFlYB0NaBw3yySP7D1Up7d9YE2XUk7jccIPtaiRiyd2vcq76iUyR96g
Status:               Running
IP:                   10.254.1.216
IPs:                  <none>
Controlled By:        ReplicaSet/oauth-openshift-56d9f65fd7
Containers:
  oauth-openshift:
    Container ID:  cri-o://7aee822a5e2855b017559ba74f8221556e78aa7d242adab2b573dfc3a2cb20cd
    Image:         quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3e00a6a92d9443240151e5eaab44cc24a25d8fcb833c29404552494064923cb
    Image ID:      quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3e00a6a92d9443240151e5eaab44cc24a25d8fcb833c29404552494064923cb
    Port:          6443/TCP
    Host Port:     0/TCP
    Command:
      /bin/bash
      -ec
    Args:

      if [ -s /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt ]; then
          echo "Copying system trust bundle"
          cp -f /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
      fi
      exec oauth-server osinserver --config=/var/config/system/configmaps/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig --v=2

    State:          Running
      Started:      Mon, 09 Dec 2019 14:38:36 -0500
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     50Mi
    Liveness:     http-get https://:6443/healthz delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:6443/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/config/system/configmaps/v4-0-config-system-cliconfig from v4-0-config-system-cliconfig (ro)
      /var/config/system/configmaps/v4-0-config-system-service-ca from v4-0-config-system-service-ca (ro)
      /var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle from v4-0-config-system-trusted-ca-bundle (ro)
      /var/config/system/secrets/v4-0-config-system-ocp-branding-template from v4-0-config-system-ocp-branding-template (ro)
      /var/config/system/secrets/v4-0-config-system-router-certs from v4-0-config-system-router-certs (ro)
      /var/config/system/secrets/v4-0-config-system-serving-cert from v4-0-config-system-serving-cert (ro)
      /var/config/system/secrets/v4-0-config-system-session from v4-0-config-system-session (ro)
      /var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data from v4-0-config-user-idp-0-file-data (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from oauth-openshift-token-r7dmh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  v4-0-config-system-session:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  v4-0-config-system-session
    Optional:    true
  v4-0-config-system-cliconfig:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      v4-0-config-system-cliconfig
    Optional:  true
  v4-0-config-system-serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  v4-0-config-system-serving-cert
    Optional:    true
  v4-0-config-system-service-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      v4-0-config-system-service-ca
    Optional:  true
  v4-0-config-system-router-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  v4-0-config-system-router-certs
    Optional:    true
  v4-0-config-system-ocp-branding-template:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  v4-0-config-system-ocp-branding-template
    Optional:    true
  v4-0-config-system-trusted-ca-bundle:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      v4-0-config-system-trusted-ca-bundle
    Optional:  true
  v4-0-config-user-idp-0-file-data:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  v4-0-config-user-idp-0-file-data
    Optional:    false
  oauth-openshift-token-r7dmh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  oauth-openshift-token-r7dmh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/master=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 120s
                 node.kubernetes.io/unreachable:NoExecute for 120s
Events:          <none>

The vulnerability CVE-2022-2403 has been fixed, but no specific tag denotes the patched version.

Hello, we are a team researching the dependency management mechanism of Golang. During our analysis, we came across your project and noticed that you have fixed a vulnerability (snyk references, CVE: CVE-2022-2403, CWE: CWE-200, fix commit id: 4bddf47). However, we observed that you have not tagged the fixing commit or any subsequent commit. As a result, users are unable to obtain the patched version through the Go tool 'go list'.

We kindly request your assistance in addressing this issue. Tagging the fixing commit or its subsequent commits will greatly benefit users who rely on your project and are seeking the patched version to address the vulnerability.
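For illustration only (not part of the original request): a minimal sketch of how tagging the fixing commit would make the patched version discoverable to module consumers. The tag name v1.0.1 is hypothetical, and the module path is assumed to match the repository path.

# Tag the fixing commit (hypothetical tag name) and publish the tag
git tag v1.0.1 4bddf47
git push origin v1.0.1

# Module consumers could then list the available (patched) versions with:
go list -m -versions github.com/openshift/cluster-authentication-operator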

We greatly appreciate your attention to this matter and collaboration in resolving it. Thank you for your time and for your valuable contributions to our research.

With custom networking, authentication operator never goes healthy

It's quite possible that this is due to something on my side, but I'm hoping folks here can help point me in the right direction as for what to check next.

I've started a cluster using Calico for pod networking with openshift-install version v0.16.0. The installer gets most of the way through, but then fails to complete due to what appears to be a problem with the authentication operator.

i.e.,

time="2019-06-05T13:50:20-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.0.0-0.9: 98% complete"
time="2019-06-05T13:51:54-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.0.0-0.9: 99% complete"
time="2019-06-05T13:53:07-07:00" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.0.0-0.9: 99% complete"
time="2019-06-05T14:01:24-07:00" level=debug msg="Still waiting for the cluster to initialize: Cluster operator authentication is still updating"
time="2019-06-05T14:18:24-07:00" level=debug msg="Still waiting for the cluster to initialize: Cluster operator console has not yet reported success"
time="2019-06-05T14:20:20-07:00" level=fatal msg="failed to initialize the cluster: Cluster operator console has not yet reported success: timed out waiting for the condition"

Running this command:

kubectl get clusteroperators authentication -o yaml                                                                                                                                                                                                                                                                                      

Shows me this:

apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: "2019-06-05T20:47:23Z"
  generation: 1
  name: authentication
  resourceVersion: "26265"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/authentication
  uid: 1ec781df-87d3-11e9-8411-0a428212f130
spec: {}
status:
  conditions:
  - lastTransitionTime: "2019-06-05T20:49:03Z"
    message: 'Failing: error checking payload readiness: unable to check route health:
      failed to GET route: EOF'
    reason: Failing
    status: "True"
    type: Failing
  - lastTransitionTime: "2019-06-05T20:50:48Z"
    reason: AsExpected
    status: "False"
    type: Progressing
  - lastTransitionTime: "2019-06-05T20:47:29Z"
    reason: Available
    status: "False"
    type: Available
  - lastTransitionTime: "2019-06-05T20:47:23Z"
    reason: NoData
    status: Unknown
    type: Upgradeable
  extension: null
  relatedObjects:
  - group: operator.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: authentications
  - group: config.openshift.io
    name: cluster
    resource: infrastructures
  - group: config.openshift.io
    name: cluster
    resource: oauths
  - group: ""
    name: openshift-config
    resource: namespaces
  - group: ""
    name: openshift-config-managed
    resource: namespaces
  - group: ""
    name: openshift-authentication
    resource: namespaces
  - group: ""
    name: openshift-authentication-operator
    resource: namespaces
  versions:
  - name: integrated-oauth-server
    version: 4.0.0-0.9_openshift

I can confirm that the Route the authentication operator is attempting to hit doesn't seem to be working through the ingress controller. However, the Service backing the route is reachable from within the cluster.

Hitting the service directly:

curl -k https://openshift-authentication:443 
{
  "paths": [
    "/apis",
    "/healthz",
    "/healthz/log",
    "/healthz/ping",
    "/healthz/poststarthook/oauth.openshift.io-startoauthclientsbootstrapping",
    "/metrics",
    "/readyz",
    "/readyz/log",
    "/readyz/ping",
    "/readyz/poststarthook/oauth.openshift.io-startoauthclientsbootstrapping",
    "/readyz/terminating"
  ]
}

Hitting the service through the Route:

curl -k https://openshift-authentication-openshift-authentication.apps.casey-ocp.openshift.crc.aws.eng.tigera.net:443
curl: (35) Encountered end of file
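Not part of the original report, but a few generic commands that might help narrow down whether the problem lies with the route, the service endpoints, or the ingress controller itself:

# Does the route exist, and does the backing service have endpoints?
oc -n openshift-authentication get route,svc,endpoints

# Are the default ingress controller pods running, and on which nodes?
oc -n openshift-ingress get pods -o wide

# What does the ingress cluster operator report?
oc get clusteroperator ingress -o yaml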

Also please let me know if there is a better place to raise this issue. Thanks!

Operator is not reporting a ClusterOperator resource

Expected:

# find an operator status for this operator
# the object should have a pointer to the namespaces for this operator in RelatedObjects
# to support must-gather tool for diagnostics
oc get clusteroperators

Actual:
No ClusterOperator status appears to be reported.
This prevents the ClusterVersionOperator from knowing if a payload is rolling out properly.
It also prevents telemetry from collecting information about whether the operator/operand is functioning.
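Once the operator does report a ClusterOperator resource, a quick generic check (not from the original report) that the relatedObjects list points at the operator and operand namespaces could look like this:

oc get clusteroperator authentication -o jsonpath='{.status.relatedObjects[*].name}{"\n"}'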

/cc @ericavonb

Oauth pod fails to respond to TLS 1.2 requests

I have encountered a problem with OKD 4.4 authentication: the authentication-operator manages the oauth route as passthrough, and the oauth pod uses TLS 1.3 to secure the connection while other routes use TLS 1.2; the ciphers are not the same either.

But when I replace the default router HTTPS cert and key with one that browsers trust by default, or add the ingress self-signed CA cert to the browser, then a redirect from another app (e.g. openshift-console) to oauth shows the app as not available, even though the oauth pod is running and returns 403 on the / URI.

I checked the redirect: it uses status code 303, so the browser does not perform a new handshake with oauth; oauth appears not to support TLS 1.2 and therefore shows up as unavailable.
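As a sanity check (not part of the original report), one could probe the oauth route directly to see which TLS versions it accepts; the host below is a placeholder for the actual oauth route host.

# Should complete the handshake if the endpoint accepts TLS 1.2
openssl s_client -connect oauth-openshift.apps.example.com:443 -servername oauth-openshift.apps.example.com -tls1_2 < /dev/null

# Compare against TLS 1.3 (requires OpenSSL 1.1.1 or newer)
openssl s_client -connect oauth-openshift.apps.example.com:443 -servername oauth-openshift.apps.example.com -tls1_3 < /dev/null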

I made no changes to any settings related to this. Setting the CRD to Unmanaged and then changing the route to reencrypt seems to work, but I don't think that is what I am supposed to do.
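For reference, a hedged sketch of the workaround described above, assuming the route is named oauth-openshift as in recent 4.x releases; note that an Unmanaged operator stops reconciling the oauth resources, so use this with caution.

# Stop the operator from reverting manual changes
oc patch authentication.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Unmanaged"}}'

# Then adjust the route's TLS termination by hand
oc -n openshift-authentication edit route oauth-openshift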
