
cloud-credential-operator's Introduction

OpenShift Cloud Credential Operator

The cloud credential operator is a controller that will sync on CredentialsRequest custom resources. CredentialsRequests allow OpenShift components to request fine-grained credentials for a particular cloud provider (as opposed to using the admin credentials, or elevated permissions granted via instance roles).

Design Principles

  • The controller should be able to run in either a cluster itself, or in a centralized management cluster, most likely alongside Hive.
  • Controller expects access to a set of credentials we refer to as the "admin" credentials.
  • If the admin credentials are missing, but all credentials requests are fulfilled and valid, this is considered a valid state (i.e. the admin creds were removed from the cluster after use).
  • If the admin credentials are able to create additional credentials, we will create fine grained permissions as defined in the credentials request (best practice).
  • If the admin credentials cannot create additional credentials, but do themselves fulfill the requirements of the credentials request, they will be used (with logged warnings and a condition on the credentials request).
  • If the admin credentials fulfill neither of the above requirements, the controller will fail to generate the credentials, report failure back to the Cluster Version Operator, and thus block upgrading. The installer will also perform this check early to inform the user their cluster will not function.

Cloud Providers

Currently the operator supports AWS, Azure, GCP, KubeVirt, OpenStack, oVirt, and VMware.

Credentials Root Secret Formats

Each cloud provider utilizes a credentials root secret in the kube-system namespace (by convention), which is then used to satisfy all CredentialsRequests and create their respective Secrets, either by minting new credentials (mint mode) or by copying the credentials root secret (passthrough mode).

The format for the secret varies by cloud, and is also used for each CredentialsRequest Secret.

AWS

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: aws-creds
data:
  aws_access_key_id: Base64encodeAccessKeyID
  aws_secret_access_key: Base64encodeSecretAccessKey
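
The data values are base64-encoded, as in any Kubernetes Secret. As a quick sketch (the key pair below is AWS's documented example placeholder, not a real credential), the encoded fields can be produced like this:

```python
import base64

# Hypothetical placeholder values (AWS's documented example key pair),
# not real credentials.
access_key_id = "AKIAIOSFODNN7EXAMPLE"
secret_access_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

# Each field under the Secret's `data` stanza is the base64 encoding
# of the raw credential value.
data = {
    "aws_access_key_id": base64.b64encode(access_key_id.encode()).decode(),
    "aws_secret_access_key": base64.b64encode(secret_access_key.encode()).decode(),
}

# Decoding a field recovers the original value.
assert base64.b64decode(data["aws_access_key_id"]).decode() == access_key_id
```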

Azure

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: azure-credentials
data:
  azure_subscription_id: Base64encodeSubscriptionID
  azure_client_id: Base64encodeClientID
  azure_client_secret: Base64encodeClientSecret
  azure_tenant_id: Base64encodeTenantID
  azure_resource_prefix: Base64encodeResourcePrefix
  azure_resourcegroup: Base64encodeResourceGroup
  azure_region: Base64encodeRegion

GCP

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: gcp-credentials
data:
  service_account.json: Base64encodeServiceAccount

KubeVirt

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: kubevirt-credentials
data:
  kubeconfig: Base64encodeKubeconfig

OpenStack

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: openstack-credentials
data:
  clouds.yaml: Base64encodeCloudCreds
  clouds.conf: Base64encodeCloudCredsINI
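
For reference, the decoded clouds.yaml value typically looks something like the following: a minimal sketch with placeholder endpoint and credentials, assuming password authentication (the cloud entry name and values are illustrative):

```yaml
clouds:
  openstack:
    auth:
      auth_url: https://openstack.example.com:13000/v3
      username: placeholder-user
      password: placeholder-password
      project_name: placeholder-project
      user_domain_name: Default
      project_domain_name: Default
    region_name: regionOne
```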

oVirt

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: ovirt-credentials
data:
  ovirt_url: Base64encodeURL
  ovirt_username: Base64encodeUsername
  ovirt_password: Base64encodePassword
  ovirt_insecure: Base64encodeInsecure
  ovirt_ca_bundle: Base64encodeCABundle

VSphere

apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: vsphere-creds
data:
  {{VCenter.username}}: Base64encodeUsername
  {{VCenter.password}}: Base64encodePassword


Modes of Operation

1. Mint Mode

The default and recommended best practice for running OpenShift is to run the installer with an admin-level cloud credential. The admin credential is stored in the kube-system namespace and is then used by the cloud credential operator to process the CredentialsRequests in the cluster and create new users for each with fine-grained permissions.

Pros:

  • Each cluster component has only the permissions it needs.
  • Automatic ongoing reconciliation for cloud credentials including upgrades, which may require additional credentials or permissions.

Cons:

  • Requires admin credential storage in a cluster kube-system secret. (However, if a user has access to all secrets in your cluster, you are severely compromised regardless.)

Supported clouds: AWS, GCP

1.1 Mint Mode With Removal/Rotation Of Admin Credential

In this mode a user installs OpenShift with an admin credential per the normal mint mode, but removes the admin credential Secret from the cluster after installation. The cloud credential operator makes its own request for a read-only credential that allows it to verify if all CredentialsRequests have their required permissions, thus the admin credential is not needed unless something needs to be changed (e.g. on upgrade). Once removed the associated credential could then be destroyed on the underlying cloud if desired.

Prior to upgrade, the admin credential should be restored. In the future, upgrades may be blocked if the credential is not present (see the secret formats above).
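
For example, on AWS, restoring the admin credential means recreating the root secret in the format shown earlier. A sketch using stringData, which lets Kubernetes perform the base64 encoding (the values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: kube-system
  name: aws-creds
stringData:
  aws_access_key_id: RESTORED_ACCESS_KEY_ID
  aws_secret_access_key: RESTORED_SECRET_ACCESS_KEY
```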

Pros:

  • Admin credential is not stored in the cluster permanently and does not need to be long-lived.

Cons:

  • Still requires admin credential in the cluster for brief periods of time.
  • Requires manually reinstating the Secret with admin credentials for each upgrade.

Supported clouds: AWS, GCP

2. Passthrough Mode

In this mode a user installs OpenShift with a single credential that is not an admin and cannot mint additional credentials, but which itself has enough permissions to perform the installation as well as all operations needed by all components in the cluster. The cloud credential operator then shares this credential with each component.

Your passthrough mode credential will need to be manually maintained if CredentialsRequests change over time as the cluster is upgraded. This should be checked prior to every upgrade, and in the future you may be required to confirm you have done so if a change in CredentialsRequests is detected.

By default the permissions needed only for installation are required, however it is possible to reduce the permissions on your credential after install to just what is needed to run the cluster (as defined by the CredentialsRequests in the current release image). See the secret formats above for details on how to do this.

Pros:

  • Does not require installing or running with an admin credential.

Cons:

  • Includes broad permissions only needed at install time, unless manual action is taken to reduce permissions after install.
  • Credential permissions may need to be manually updated prior to any upgrade.
  • Each component has permissions used by all other components.

Supported clouds: AWS, GCP, Azure, VMware, OpenStack, oVirt, KubeVirt

3. Manual Credentials Management

In this mode a user manually performs the job of the cloud credential operator. This requires examining the CredentialsRequests in an OpenShift 4 release image, creating credentials in the underlying cloud provider, and finally creating Kubernetes Secrets in the correct namespaces to satisfy all CredentialsRequests for the cluster's cloud provider.
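
As a sketch, satisfying a CredentialsRequest manually on AWS means creating a Secret in the requested namespace using the AWS secret format shown earlier. The names below match the AWS sample CredentialsRequest later in this document; the values are placeholders for a manually created IAM user's keys:

```yaml
apiVersion: v1
kind: Secret
metadata:
  namespace: openshift-image-registry
  name: installer-cloud-credentials
stringData:
  aws_access_key_id: MANUALLY_CREATED_ACCESS_KEY_ID
  aws_secret_access_key: MANUALLY_CREATED_SECRET_ACCESS_KEY
```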

Pros:

  • Admin credential never stored in the cluster.
  • Each cluster component has only the permissions it needs.

Cons:

  • Manual process required for install and every upgrade to reconcile permissions with the new release image.

Supported clouds: AWS


4. Short-Lived Tokens

OpenShift can be configured to use short-lived credentials for individual in-cluster components. This enables an authentication flow in which a component assumes a cloud role, resulting in short-lived credentials, and automates the requesting and refreshing of credentials using an OpenID Connect (OIDC) identity provider. OpenShift can sign ServiceAccount tokens trusted by the OIDC provider, which can be projected into a Pod and used for authentication.
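
The token projection mentioned above is standard Kubernetes; a minimal sketch of a projected ServiceAccount token volume (the audience value and volume name are illustrative):

```yaml
# Pod spec fragment: a ServiceAccount token bound to a specific
# audience is projected into the container filesystem, where a
# cloud SDK can exchange it for short-lived credentials.
volumes:
- name: bound-sa-token
  projected:
    sources:
    - serviceAccountToken:
        audience: openshift
        expirationSeconds: 3600
        path: token
```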

Pros:

  • Admin credentials are never stored in the cluster.
  • Each cluster component has only the permissions it needs.
  • Credentials for each cluster component are rotated periodically.

Cons:

  • Requires additional cloud infrastructure setup from the user. The ccoctl tool can assist in the setup process.
  • Push-button upgrades will not work, as the cluster no longer has the admin credentials needed to mint credentials.


Support Matrix

Cloud       Mint    Mint + Remove Admin Cred   Passthrough   Manual   Token
AWS         Y       4.4+                       Y             4.3+     4.8+
Azure       N [1]   N                          Y             Y        4.14+
GCP         Y       4.7+                       Y             Y        4.10+
IBMCloud    N       N                          N             Y        N
KubeVirt    N       N                          Y             N        N
Nutanix     N       N                          N             Y        N
OpenStack   N       N                          Y             N        N
oVirt       N       N                          Y             N        N
VMware      N       N                          Y             N        N

[1] Mint mode was previously supported, but with the sunsetting of the Azure Active Directory Graph API, Mint mode support on Azure has been removed.

Developer Instructions

Log in to a cluster with admin credentials, then:

$ make install
$ make run

NOTE: To keep the in-cluster versions of the code from conflicting with your local copy, you should scale down the deployments for cloud-credential-operator and cluster-version-operator:

$ kubectl scale -n openshift-cluster-version deployment.v1.apps/cluster-version-operator --replicas=0
$ kubectl scale -n openshift-cloud-credential-operator deployment.v1.apps/cloud-credential-operator --replicas=0

As an alternative to disabling the cluster version operator entirely, you can add the CCO Deployment as an unmanaged object in the ClusterVersion resource:

spec:
  overrides:
    - kind: Deployment
      group: apps
      name: cloud-credential-operator
      namespace: openshift-cloud-credential-operator
      unmanaged: true

Deploying in cluster

  1. export IMG=quay.io/dgoodwin/cloud-credential-operator:latest
    • You can upload to a personal repo if you wish to build images from source.
  2. make buildah-push
  3. make deploy

Cred Minter should now be running in the openshift-cloud-credential-operator namespace.

Credentials Requests

The primary custom resource used by this operator is the CredentialsRequest, which allows cluster components to request fine-grained credentials.

A CredentialsRequest spec consists of:

  1. secretRef - Points to the secret where the credentials should be stored once generated. Can be in a separate namespace from the CredentialsRequest where it can be used by pods. If that namespace does not yet exist, the controller will immediately sync when it is created.
  2. providerSpec - Contains the cloud provider specific credentials specification.

Once created, assuming admin credentials are available, the controller will provision, for example, a user, access key, and user policy in AWS. The access key ID and secret access key will be stored in the target secret specified above.

You can freely edit a CredentialsRequest to adjust permissions and the controller will reconcile those changes out to the respective user policy (assuming valid admin credentials still exist).

AWS Sample

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: openshift-image-registry
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: installer-cloud-credentials
    namespace: openshift-image-registry
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - s3:CreateBucket
      - s3:DeleteBucket
      resource: "*"

Azure Sample

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: openshift-image-registry
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: installer-cloud-credentials
    namespace: openshift-image-registry
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
      - role: Storage Account Contributor
      - role: Storage Blob Data Contributor

List of Azure built-in roles: https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles

Instructions for adding a new cloud provider

Please refer to this documentation for adding a new provider.

For OpenShift Second Level Operators

  1. Add CredentialsRequest objects to your CVO manifests so they are deployed via the release payload. Please do not create them in operator code, as we want to use the release manifests for auditing and dynamically checking permissions.
  2. The cloud credential operator launches early (runlevel 30), so it should be available when your component's manifests are applied.
  3. Your CredentialsRequests should be created in the openshift-cloud-credential-operator namespace.
  4. Your component should tolerate the credentials secret not existing immediately.
  5. Your component should tolerate the credentials secret periodically being rotated.


cloud-credential-operator's Issues

Alibaba Cloud: delete-ram-user requires the credentials-requests

Summary:

When removing RAM users, the ccoctl command requires the original --credentials-requests directory. This can be an issue when customers remove those files after a cluster install; having to preserve the original release payload or the credentials requests becomes cumbersome.

Example:

~/go/src/github.com/openshift/cloud-credential-operator/ccoctl alibabacloud  delete-ram-users --region us-east-1 --name test-nsrlt   --credentials-requests-dir ~/tmp/alibaba/crs

Would it be possible to update the delete-ram-users command to use only the --name <cluster_id> parameter to remove credentials? This would greatly simplify the process of removing credentials from a customer's standpoint and for CI.

cc @DahuK

Improve awsPolicyEqualsDesiredPolicy function to handle IAM policies that may have out of order items

if currentUserPolicy != desiredUserPolicy {

There are a couple of issues with this approach:

  1. As written, the above code will fail if two otherwise identical policies have permissions that are not in the exact same order.
  2. If the currentUserPolicy contains line breaks or even a tab, but the policy is otherwise the same as the desiredUserPolicy, the function will return false.
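
A sketch of the normalization the issue is asking for, shown in Python for illustration (the operator itself is written in Go); the policy documents below are hypothetical:

```python
import json

def normalized(policy_doc: str) -> str:
    """Canonicalize an IAM policy document: parsing discards whitespace
    differences, sorting each statement's Action list discards ordering
    differences, and sort_keys gives a stable serialization."""
    policy = json.loads(policy_doc)
    for statement in policy.get("Statement", []):
        if isinstance(statement.get("Action"), list):
            statement["Action"] = sorted(statement["Action"])
    return json.dumps(policy, sort_keys=True, separators=(",", ":"))

current = """{"Statement": [{"Effect": "Allow",
    "Action": ["s3:DeleteBucket", "s3:CreateBucket"], "Resource": "*"}]}"""
desired = '{"Statement":[{"Action":["s3:CreateBucket","s3:DeleteBucket"],"Effect":"Allow","Resource":"*"}]}'

# A raw string comparison reports these as different,
# but the normalized forms compare equal.
print(current == desired)                       # False
print(normalized(current) == normalized(desired))  # True
```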

cloud-credential-operator gives an error with the Openshift private cluster deployed on AWS

When cloud-credential-operator runs on an OpenShift private cluster deployed on AWS, it gives an error.
(By "private", I mean the OpenShift cluster cannot access the internet.)
It seems that cloud-credential-operator tries to access "https://iam.amazonaws.com" at execution time, and this is causing the error.

Please refer below for a sample error message.

2020-09-23T08:37:17.068691724Z time="2020-09-23T08:37:17Z" level=debug msg="target secret exists" actuator=aws cr=openshift-cloud-credential-operator/openshift-machine-api-aws
2020-09-23T08:37:17.068702622Z time="2020-09-23T08:37:17Z" level=debug msg="found access key ID in target secret" accessKeyID=xxx actuator=aws cr=openshift-cloud-credential-operator/openshift-machine-api-aws
2020-09-23T08:37:17.068826396Z time="2020-09-23T08:37:17Z" level=debug msg="loading AWS credentials from secret" actuator=aws cr=openshift-cloud-credential-operator/openshift-machine-api-aws secret=openshift-cloud-credential-operator/cloud-credential-operator-iam-ro-creds
2020-09-23T08:37:17.06883828Z time="2020-09-23T08:37:17Z" level=debug msg="creating read AWS client" actuator=aws cr=openshift-cloud-credential-operator/openshift-machine-api-aws secret=openshift-cloud-credential-operator/cloud-credential-operator-iam-ro-creds
2020-09-23T08:37:18.827808247Z time="2020-09-23T08:37:18Z" level=error msg="error while validating cloud credentials: failed checking create cloud creds: error gathering AWS credentials details: error querying username: RequestError: send request failed\ncaused by: Post https://iam.amazonaws.com/: dial tcp xxx.xxx.xxx.xxx:443: i/o timeout" controller=secretannotator
2020-09-23T08:37:19.828069938Z time="2020-09-23T08:37:19Z" level=info msg="validating cloud cred secret" controller=secretannotator
2020-09-23T08:37:19.828120926Z time="2020-09-23T08:37:19Z" level=debug msg="Loading infrastructure name: xxx" controller=secretannotator
2020-09-23T08:39:11.679120982Z time="2020-09-23T08:39:11Z" level=info msg="calculating metrics for all CredentialsRequests" controller=metrics
2020-09-23T08:39:11.679976107Z time="2020-09-23T08:39:11Z" level=info msg="reconcile complete" controller=metrics elapsed="912.531µs"
2020-09-23T08:39:20.160892359Z time="2020-09-23T08:39:20Z" level=error msg="error while validating cloud credentials: failed checking create cloud creds: error gathering AWS credentials details: error querying username: RequestError: send request failed\ncaused by: Post https://iam.amazonaws.com/: dial tcp xxx.xxx.xxx.xxx:443: i/o timeout" controller=secretannotator
2020-09-23T08:39:21.161091474Z time="2020-09-23T08:39:21Z" level=info msg="validating cloud cred secret" controller=secretannotator
2020-09-23T08:39:21.161123255Z time="2020-09-23T08:39:21Z" level=debug msg="Loading infrastructure name: xxx" controller=secretannotator

Q1. Are there any workaround for this?
Q2. Is it MANDATORY for cloud-credential-operator to be able to access the internet? (This makes it impossible for any OpenShift cluster to be private...)

Thanks

error handling is not clear

An Azure cluster with a broken cloud-credential-operator should be more verbose and report errors more clearly.

time="2019-06-18T05:47:06Z" level=debug msg="set ClusterOperator condition" message="No credentials requests reporting errors." reason=NoCredentialsFailing status=False type=Degraded
time="2019-06-18T05:47:06Z" level=debug msg="set ClusterOperator condition" message="4 of 7 credentials requests provisioned, 0 reporting errors." reason=Reconciling status=True type=Progressing
time="2019-06-18T05:47:06Z" level=debug msg="set ClusterOperator condition" message= reason= status=True type=Available
time="2019-06-18T05:47:32Z" level=info msg="syncing credentials request" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-azure-temp
time="2019-06-18T05:47:32Z" level=debug msg="found secret namespace" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-azure-temp secret=openshift-machine-api/azure-cloud-credentials-test
time="2019-06-18T05:47:32Z" level=error msg="error checking whether credentials already exists: Secret \"azure-credentials\" not found" controller=credreq cr=openshift-cloud-credential-operator/openshift-machine-api-azure-temp secret=openshift-machine-api/azure-cloud-credentials-test

This indicates that the azure-credentials secret does not exist in the kube-system namespace.
In this state I would call this operator degraded, but its status shows differently:

NAME                                 VERSION                         AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                                                       Unknown     Unknown       True       19h
cloud-credential                     4.2.0-0.okd-2019-06-17-063354   True        True          False      19h

The cluster can't fully function without it.

Documentation: Improve documentation by adding more details

I was following the GCP documentation trying to Install an OpenShift Cluster with GCP Workload Identity.

I noticed that some of the steps are not well described and need to be reviewed:

  • RELEASE_IMAGE => this variable is noted but not described. Sharing an example and suggesting checking the release tags on quay.io would help the user understand it.

  • ccoctl => the command used to describe the syntax is almost correct, but could be even better with an example. Furthermore, I've raised Issue #451 to address the confusing sentence about confirming the JSON.
    You should be able to explain it like this (placing this step before openshift-installer create manifests):

INSTALL_CONFIG_FILE=install-config.yaml
CCO_NAME=$(yq -r '.metadata.name' ${INSTALL_CONFIG_FILE}); CCO_REGION=$(yq -r '.platform.gcp.region' ${INSTALL_CONFIG_FILE}); CCO_PROJECT=$(yq -r '.platform.gcp.projectID' ${INSTALL_CONFIG_FILE})
  • Also, ccoctl is not macOS friendly, whereas the openshift-installer is, and the documentation should provide a way to handle this.
    I managed to run it in a docker container on my laptop. Here is my command:
# Variables need to be set from  the install_config file as described previously
# Copy the osServiceAccount.json into your Install_dir
cp ~/Download/osServiceAccount.json /full/path/to/your/install_dir/
# Use docker to run ccoctl into an ubi8 image
$ docker run --rm -ti -v /full/path/to/your/install_dir:/mnt redhat/ubi8-minimal /mnt/ccoctl gcp create-all --name=${CCO_NAME} --region=${CCO_REGION} --project=${CCO_PROJECT} --credentials-requests-dir=./mnt/credreqs --output-dir=/mnt/cco_output

And to delete the cco:

docker run --rm -ti -v /full/path/to/your/install_dir:/mnt redhat/ubi8-minimal /mnt/ccoctl gcp delete --name=${CCO_NAME} --project=${CCO_PROJECT}

Otherwise, the rest of the documentation is OK.

Set metadata.ownerReferences

Status Quo

Currently, the only way to connect a CredentialsRequest to its associated Secret(s) seems to be via annotation cloudcredential.openshift.io/credentials-request on the Secret(s).
I am not sure if this is documented/stable behaviour; I could not find any mention of it in the documentation.

Proposal

Set metadata.ownerReferences as per the standard.

Precedence

external-secrets/external-secrets sets owner references on the Secret(s) created from ExternalSecret.

Use Case

In tools such as ArgoCD, it is useful to see all owned resources. Consider, for instance, this situation:
image

ℹ️ I have created an issue for considering cloudcredential.openshift.io/credentials-request in ArgoCD: argoproj/argo-cd#15353
That said, my feeling is that adding ownerReferences here is the more sustainable approach overall.

Running ccoctl with CloudFront a second time generates incorrect manifests

Hi,

I am currently working on implementing STS with Cloudfront in our in-house tool for deploying new clusters. I am currently using 4.13.

Everything works when I run ccoctl the first time:

./ccoctl aws create-all \
  --create-private-s3-bucket \
  --credentials-requests-dir /home/ec2-user/installer/redacted-test12/credentials-requests \
  --name redacted-redacted-test12-irsa-test \
  --region eu-central-1 \
  --output-dir /home/ec2-user/installer/redacted-test12/irsa-config

All manifests are created, together with a correct cluster-authentication-02-config.yaml:

apiVersion: config.openshift.io/v1
kind: Authentication
metadata:
  name: cluster
spec:
  serviceAccountIssuer: https://redacted.cloudfront.net

However, when I run the same command again (e.g. because installation failed for whatever reason), the following happens:

  • manifests for existing roles are not generated (this is a known, documented bug)
  • ccoctl detects that the S3 bucket already exists and generates a manifest with the S3 URL instead of CloudFront:
apiVersion: config.openshift.io/v1
kind: Authentication
metadata:
  name: cluster
spec:
  serviceAccountIssuer: https://redacted-redacted-test12-irsa-test-oidc.s3.eu-central-1.amazonaws.com

First run logs:

2024/03/06 00:18:28 Using existing RSA keypair found at /home/ec2-user/installer/redacted-test12/irsa-config/serviceaccount-signer.private
2024/03/06 00:18:28 Copying signing key for use by installer
2024/03/06 00:18:29 Bucket redacted-redacted-test12-irsa-test-oidc created
2024/03/06 00:18:31 CloudFront origin access identity created with ID REDACTED, waiting 30s for it to become active
2024/03/06 00:19:01 Update policy for bucket redacted-redacted-test12-irsa-test-oidc to allow access from CloudFront origin access identity with ID REDACTED
2024/03/06 00:19:02 Blocked public access for the bucket redacted-redacted-test12-irsa-test-oidc
2024/03/06 00:19:03 CloudFront distribution created with ID REDACTED
2024/03/06 00:19:03 Waiting 30s for CloudFront distribution with ID REDACTED to be deployed...
2024/03/06 00:19:33 Waiting 30s for CloudFront distribution with ID REDACTED to be deployed...
2024/03/06 00:20:04 Waiting 30s for CloudFront distribution with ID REDACTED to be deployed...
2024/03/06 00:20:34 Waiting 30s for CloudFront distribution with ID REDACTED to be deployed...
2024/03/06 00:21:05 Waiting 30s for CloudFront distribution with ID REDACTED to be deployed...
2024/03/06 00:21:35 Waiting 30s for CloudFront distribution with ID REDACTED to be deployed...
2024/03/06 00:22:06 Waiting 30s for CloudFront distribution with ID REDACTED to be deployed...
2024/03/06 00:22:36 Waiting 30s for CloudFront distribution with ID REDACTED to be deployed...
2024/03/06 00:23:07 CloudFront distribution with ID REDACTED is successfully deployed
2024/03/06 00:23:07 OpenID Connect discovery document in the S3 bucket redacted-redacted-test12-irsa-test-oidc at .well-known/openid-configuration updated
2024/03/06 00:23:07 Reading public key
2024/03/06 00:23:07 JSON web key set (JWKS) in the S3 bucket redacted-redacted-test12-irsa-test-oidc at keys.json updated
2024/03/06 00:23:08 Identity Provider created with ARN: arn:aws:iam::redacted:oidc-provider/d6j73lq0exp5g.cloudfront.net
2024/03/06 00:23:08 Ignoring CredentialsRequest openshift-cloud-credential-operator/openshift-cluster-api-aws with tech-preview annotation
2024/03/06 00:23:08 Role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-machine-api-aws-c created
2024/03/06 00:23:08 Saved credentials configuration to: /home/ec2-user/installer/redacted-test12/irsa-config/manifests/openshift-machine-api-aws-cloud-credentials-credentials.yaml
2024/03/06 00:23:08 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-machine-api-aws-c
2024/03/06 00:23:09 Role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-cloud-credential- created
2024/03/06 00:23:09 Saved credentials configuration to: /home/ec2-user/installer/redacted-test12/irsa-config/manifests/openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
2024/03/06 00:23:09 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-cloud-credential-
2024/03/06 00:23:09 Role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-image-registry-in created
2024/03/06 00:23:09 Saved credentials configuration to: /home/ec2-user/installer/redacted-test12/irsa-config/manifests/openshift-image-registry-installer-cloud-credentials-credentials.yaml
2024/03/06 00:23:09 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-image-registry-in
2024/03/06 00:23:10 Role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-ingress-operator- created
2024/03/06 00:23:10 Saved credentials configuration to: /home/ec2-user/installer/redacted-test12/irsa-config/manifests/openshift-ingress-operator-cloud-credentials-credentials.yaml
2024/03/06 00:23:10 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-ingress-operator-
2024/03/06 00:23:10 Role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-cloud-network-con created
2024/03/06 00:23:10 Saved credentials configuration to: /home/ec2-user/installer/redacted-test12/irsa-config/manifests/openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
2024/03/06 00:23:10 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-cloud-network-con
2024/03/06 00:23:11 Role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-cluster-csi-drive created
2024/03/06 00:23:11 Saved credentials configuration to: /home/ec2-user/installer/redacted-test12/irsa-config/manifests/openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
2024/03/06 00:23:11 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-cluster-csi-drive

Second run logs:

2024/03/06 00:23:47 Using existing RSA keypair found at /home/ec2-user/installer/redacted-test12/irsa-config/serviceaccount-signer.private
2024/03/06 00:23:47 Copying signing key for use by installer
2024/03/06 00:23:47 Bucket redacted-redacted-test12-irsa-test-oidc already exists and is owned by the user
2024/03/06 00:23:47 OpenID Connect discovery document in the S3 bucket redacted-redacted-test12-irsa-test-oidc at .well-known/openid-configuration updated
2024/03/06 00:23:47 Reading public key
2024/03/06 00:23:47 JSON web key set (JWKS) in the S3 bucket redacted-redacted-test12-irsa-test-oidc at keys.json updated
2024/03/06 00:23:48 Existing Identity Provider found with ARN: arn:aws:iam::redacted:oidc-provider/d6j73lq0exp5g.cloudfront.net
2024/03/06 00:23:48 Ignoring CredentialsRequest openshift-cloud-credential-operator/openshift-cluster-api-aws with tech-preview annotation
2024/03/06 00:23:48 Existing role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-machine-api-aws-c found
2024/03/06 00:23:49 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-machine-api-aws-c
2024/03/06 00:23:49 Existing role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-cloud-credential- found
2024/03/06 00:23:49 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-cloud-credential-
2024/03/06 00:23:49 Existing role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-image-registry-in found
2024/03/06 00:23:49 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-image-registry-in
2024/03/06 00:23:49 Existing role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-ingress-operator- found
2024/03/06 00:23:49 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-ingress-operator-
2024/03/06 00:23:50 Existing role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-cloud-network-con found
2024/03/06 00:23:50 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-cloud-network-con
2024/03/06 00:23:50 Existing role arn:aws:iam::redacted:role/redacted-redacted-test12-irsa-test-openshift-cluster-csi-drive found
2024/03/06 00:23:50 Updated Role policy for Role redacted-redacted-test12-irsa-test-openshift-cluster-csi-drive

Problem: ccoctl seems to ignore the "--create-private-s3-bucket" flag when the S3 bucket already exists, and generates an incorrect manifest.

Expectation: when "--create-private-s3-bucket" is set and the S3 bucket exists, I would expect ccoctl to query AWS for the associated CloudFront distribution and generate the "cluster-authentication-02-config.yaml" manifest with the correct value.

cloud credentials secret "noobaa-aws-cloud-creds-secret" is not ready yet

OCP Version : 4.8.15
OCS Version: 4.8.3

Deployed OCP with CCO / STS.

Installed OpenShift Container Storage Operator without any issue. Created a storage cluster via the UI.

ocs-storagecluster-cephcluster deployed fine and is in ready state.
noobaa is in Configuring state.
Conditions show the following message:
cloud credentials secret "noobaa-aws-cloud-creds-secret" is not ready yet

There isn't any secret with that name.
I cannot use an IAM account key/secret in my setup.

I am expecting OCS to support CCO.

[IBMCloud] [4.10] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter

Hi @mkumatag, as discussed via Slack, I'm creating an issue here to track the strange behavior of ccoctl when using a ServiceID API key instead of a user-based one. Example:

$ ./ccoctl ibmcloud create-service-id --name="${infraID}" --credentials-requests-dir="cco-creds" --resource-group-name="${resourceGN}" --output-dir="cco-mnfst"
Error: Failed to getResourceGroupID: Failed to list resource groups for the name: pamoedo-ibmtest10-rn2q5: Can not get resource groups without account id in parameter by service id token

NOTE: The ServiceID API key has Power Users access group with default Access policies in place.

Best Regards.

mockgen dependency not pinned, generated mocks not verified

Today the mockgen binary is used to generate mocks for the cloud-specific client packages, but the generator is not installed via make, and its version is not validated when running make generate. Furthermore, the mocks are not verified to be up to date in CI.

Today, the correct version is

go install github.com/golang/mock/mockgen@73266f9366fcf2ccef0b880618e5a9266e4136f4

Ideally we have:

  • a make target to install this tool
  • a make target as a dependency for make generate to validate that the version of mockgen being run is correct
  • a make target under make validate that ensures the mocks are not changed when re-generated, to ensure they are up to date
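A sketch of what such targets might look like (target names and recipe bodies are my own suggestion, not taken from the repository's Makefile):

```makefile
MOCKGEN_REF := 73266f9366fcf2ccef0b880618e5a9266e4136f4

.PHONY: ensure-mockgen
ensure-mockgen:
	go install github.com/golang/mock/mockgen@$(MOCKGEN_REF)

.PHONY: generate
generate: ensure-mockgen
	go generate ./pkg/...

# Fails if regenerating the mocks produces a diff, i.e. they are stale.
.PHONY: verify-mocks
verify-mocks: generate
	git diff --exit-code ./pkg
```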

cc @abutcher

Security Risk - Cloud Credential Operator Needs Credential Ephemeral or Automatically Rotate

Description of Problem

cloud-credential-operator doesn't allow a credential to be ephemeral or to be automatically rotated. This raises a serious security concern and needs to be addressed as soon as possible. Please see the code: https://github.com/openshift/cloud-credential-operator/blob/master/pkg/aws/client.go#L105 (client.go, lines 105-106).

The code doesn't allow a credential to be ephemeral or automatically rotated. For other AWS integrations, security teams require the use of temporary tokens, which the cloud credential operator doesn't support.

Currently, the testing platform is AWS, but this needs to be addressed for bare metal (on-prem) and other platforms.

OCP version

  • 4.2
  • 4.1
  • 3.11

Hosted Cloud

  • AWS
  • Bare Metal
  • Azure
  • GCP
  • VMWare

Expected behavior

The Cloud Credential Operator should rotate the credential automatically or make it ephemeral.

Severity

Critical. This is a security concern.
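The request here can be modeled as a simple rotation policy: every minted credential carries a creation timestamp, and the operator re-mints once a TTL expires. A minimal sketch of that policy (the TTL and all names are hypothetical, not CCO code):

```python
from datetime import datetime, timedelta, timezone

ROTATION_TTL = timedelta(hours=12)  # hypothetical policy, not an operator default

def needs_rotation(created_at, now=None):
    """Return True when a minted credential is older than the rotation TTL."""
    if now is None:
        now = datetime.now(timezone.utc)
    return now - created_at >= ROTATION_TTL
```

A reconcile loop that called this on every sync, deleting and re-minting the access key when it returns True, would bound the lifetime of any leaked credential to the TTL.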

Hotloop In Credentials Operator Possible

This was spotted last night by the devex team. It appeared to surface after a cluster upgrade, at which point the cred operator goes into a hot loop for all credentials it manages.

The loop is essentially this block of logging repeated over and over:

time="2019-01-29T21:03:33Z" level=info msg="syncing credentials request" controller=credreq cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="found secret namespace" controller=credreq cr=openshift-cloud-credential-operator/openshift-image-registry secret=openshift-image-registry/installer-cloud-credentials
time="2019-01-29T21:03:33Z" level=debug msg="running Exists" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="target secret does not exist" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="running Exists" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="target secret does not exist" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="running sync" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="loading cluster install config"
time="2019-01-29T21:03:33Z" level=debug msg="cluster install config loaded successfully"
time="2019-01-29T21:03:33Z" level=debug msg="loading AWS credentials from secret" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry secret=kube-system/aws-creds
time="2019-01-29T21:03:33Z" level=debug msg="creating root AWS client" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry secret=kube-system/aws-creds
time="2019-01-29T21:03:33Z" level=debug msg="loading AWS credentials from secret" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry secret=openshift-cloud-credential-operator/cloud-credential-operator-iam-ro-creds
time="2019-01-29T21:03:33Z" level=debug msg="creating read AWS client" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry secret=openshift-cloud-credential-operator/cloud-credential-operator-iam-ro-creds
time="2019-01-29T21:03:33Z" level=debug msg="loading cluster version to read clusterID" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="found cluster ID" actuator=aws clusterID=8b812d0a-1795-42ff-852f-a50211c47598 cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=info msg="user exists" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry userName=sjenning-openshift-image-registry-5pbs2
time="2019-01-29T21:03:33Z" level=debug msg="desired user policy: {\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:CreateBucket\",\"s3:DeleteBucket\",\"s3:PutBucketTagging\",\"s3:GetBucketTagging\",\"s3:PutEncryptionConfiguration\",\"s3:GetEncryptionConfiguration\",\"s3:PutLifecycleConfiguration\",\"s3:GetLifecycleConfiguration\",\"s3:GetBucketLocation\",\"s3:ListBucket\",\"s3:HeadBucket\",\"s3:GetObject\",\"s3:PutObject\",\"s3:DeleteObject\",\"s3:ListBucketMultipartUploads\",\"s3:AbortMultipartUpload\"],\"Resource\":\"*\"},{\"Effect\":\"Allow\",\"Action\":[\"iam:GetUser\"],\"Resource\":\"arn:aws:iam::269733383066:user/sjenning-openshift-image-registry-5pbs2\"}]}" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="current user policy: {\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:CreateBucket\",\"s3:DeleteBucket\",\"s3:PutBucketTagging\",\"s3:GetBucketTagging\",\"s3:PutEncryptionConfiguration\",\"s3:GetEncryptionConfiguration\",\"s3:PutLifecycleConfiguration\",\"s3:GetLifecycleConfiguration\",\"s3:GetBucketLocation\",\"s3:ListBucket\",\"s3:HeadBucket\",\"s3:GetObject\",\"s3:PutObject\",\"s3:DeleteObject\",\"s3:ListBucketMultipartUploads\",\"s3:AbortMultipartUpload\"],\"Resource\":\"*\"},{\"Effect\":\"Allow\",\"Action\":[\"iam:GetUser\"],\"Resource\":\"arn:aws:iam::269733383066:user/sjenning-openshift-image-registry-5pbs2\"}]}" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="no changes to user policy" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="sync ListAccessKeys" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="secret does not exist" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="access key exists? false" accessKeyID= actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=info msg="generating new AWS access key" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=info msg="deleting all AWS access keys" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=info msg="deleting access key" accessKeyID=AKIAI2UVYWZYBVSEZJZA actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=info msg="all access keys deleted" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=info msg="access key created" accessKeyID=AKIAI6YRJPMGRMLEDHOA actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=info msg="creating secret" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry targetSecret=openshift-image-registry/installer-cloud-credentials
time="2019-01-29T21:03:33Z" level=info msg="secret created successfully" actuator=aws cr=openshift-cloud-credential-operator/openshift-image-registry targetSecret=openshift-image-registry/installer-cloud-credentials
time="2019-01-29T21:03:33Z" level=debug msg="updating credentials request status" controller=credreq cr=openshift-cloud-credential-operator/openshift-image-registry secret=openshift-image-registry/installer-cloud-credentials
time="2019-01-29T21:03:33Z" level=debug msg="parsed annotation" cr=openshift-cloud-credential-operator/openshift-image-registry
time="2019-01-29T21:03:33Z" level=debug msg="status unchanged" controller=credreq cr=openshift-cloud-credential-operator/openshift-image-registry secret=openshift-image-registry/installer-cloud-credentials

The crux of the problem is that the controller cannot see the secret it's supposed to be writing: it claims to write it successfully, then re-syncs only to find no secret.

It knows the username, probably recorded in the request's status pre-upgrade, before the hotloop. Because it cannot find the secret, however, it does not know the secret access key (which is effectively lost without the secret and cannot be re-obtained), so it destroys all existing access keys, creates a new one, saves it (supposedly successfully), and then re-syncs.

There is no error taking place and thus backoff is not being triggered.

@sjenning also reported a 20-second watch which appears to show the secret being recreated repeatedly. (unclear why the controllers can't see it)

$ oc get secret -w
NAME                                              TYPE                                  DATA   AGE
builder-dockercfg-9cwg8                           kubernetes.io/dockercfg               1      30m
builder-token-48h2v                               kubernetes.io/service-account-token   3      30m
builder-token-6wlxb                               kubernetes.io/service-account-token   3      31m
cluster-image-registry-operator-dockercfg-l2fxw   kubernetes.io/dockercfg               1      30m
cluster-image-registry-operator-token-d2v7l       kubernetes.io/service-account-token   3      30m
cluster-image-registry-operator-token-t9nhg       kubernetes.io/service-account-token   3      31m
default-dockercfg-wr89c                           kubernetes.io/dockercfg               1      30m
default-token-4pfhm                               kubernetes.io/service-account-token   3      31m
default-token-tl4l7                               kubernetes.io/service-account-token   3      30m
deployer-dockercfg-bc8tx                          kubernetes.io/dockercfg               1      30m
deployer-token-fc77j                              kubernetes.io/service-account-token   3      31m
deployer-token-fz74s                              kubernetes.io/service-account-token   3      30m
image-registry-private-configuration              Opaque                                2      30m
image-registry-tls                                kubernetes.io/tls                     2      30m
node-ca-dockercfg-tbshp                           kubernetes.io/dockercfg               1      30m
node-ca-token-m284n                               kubernetes.io/service-account-token   3      31m
node-ca-token-vvhbd                               kubernetes.io/service-account-token   3      30m
registry-dockercfg-zkh7p                          kubernetes.io/dockercfg               1      30m
registry-token-7wn7f                              kubernetes.io/service-account-token   3      30m
registry-token-dsqd4                              kubernetes.io/service-account-token   3      30m
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     1s
image-registry-private-configuration   Opaque   2     30m
installer-cloud-credentials   Opaque   2     2s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     1s
installer-cloud-credentials   Opaque   2     1s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s
installer-cloud-credentials   Opaque   2     0s

Does this imply that something is actually deleting the secrets? What could it be? The credentials operator never deletes the secret itself; it sets a controller reference on the target secret. It is worth noting, however, that this controller reference points to an object in another namespace. This behavior appeared fine on Kube 1.11 — perhaps something changed when we jumped to 1.12?

Questions to answer:

  • Did this upgrade involve Kube 1.12, either before or after the upgrade?
  • Was anything else in the cluster acting unusually?
  • How do we trigger one of these upgrades and does it reproduce the problem?

Current Theory

The controller reference on target secrets, pointing to the credentials request in another namespace, is the likely source of the bug. This appeared to work fine and cleaned up target secrets correctly, but during the upgrade this behavior may have changed: Kube now thinks the controller of the secret is gone and deletes it, causing the cred operator to see nothing and recreate it.
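For context, Kubernetes garbage collection only honors owner references within the same namespace: a namespaced dependent whose controller reference points at an object in a different namespace is treated as having no valid owner, so the GC collects it even though the owner still exists. A toy model of that rule (a deliberate simplification, not the real GC code):

```python
def gc_should_delete(dependent_ns, owner_ns, owner_exists):
    """Simplified model of Kubernetes GC owner-reference resolution."""
    if dependent_ns != owner_ns:
        # Cross-namespace owner refs cannot be resolved, so the owner
        # is treated as absent and the dependent is collected.
        return True
    return not owner_exists

# The CCO case: the secret lives in openshift-image-registry while its
# "owner" CredentialsRequest lives in openshift-cloud-credential-operator.
```

Under this model the target secret is always eligible for collection, which matches the delete/recreate churn in the watch output above.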

Operator Doesn't Delete User in AWS if User is Added to Group

My IT department automatically adds every IAM user created to a group that denies a number of "unsafe" (and reasonably so) AWS permissions. When a new user created via a CredentialsRequest is later deleted, the delete operation hangs with:

    message: 'failed to deprovision resource: AWS Error: DeleteConflict: Cannot delete
      entity, must remove users from group first., status code: 409'
    reason: CloudCredDeprovisionFailure
    status: "True"
    type: CredentialsDeprovisionFailure

If I delete the user manually via AWS, the object will eventually delete successfully.

How to reproduce (I think):

  • Create a CredentialRequest in Openshift
  • Use AWS tools to add user to some AWS group
  • Delete CredentialRequest in Openshift

Possible solutions:
I've run into this same issue with users created via Terraform. Terraform has an option to "force_delete" that automatically removes the user from any groups. Also in Terraform, if you explicitly add the user to the forced AWS group during creation, Terraform knows the group is there and removes the user automatically. If the CredentialsRequest had the ability to specify "additionalUserGroups" or something, and knew to remove the user from those groups before deleting, it should succeed.
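The fix described above amounts to a "force delete" order of operations: before calling DeleteUser, enumerate and leave the user's groups (and delete its access keys, which have the same prerequisite). Sketched against a fake in-memory IAM store rather than boto3 or the operator's actual client:

```python
def force_delete_user(iam, username):
    """Remove the user from all groups and delete its access keys,
    then delete the user itself so DeleteUser cannot return a 409.
    `iam` is a toy store: {"users": {name: {"groups": [...], "keys": [...]}}}.
    """
    user = iam["users"][username]
    for group in list(user["groups"]):  # mirrors `aws iam remove-user-from-group`
        user["groups"].remove(group)
    user["keys"].clear()                # mirrors `aws iam delete-access-key`
    del iam["users"][username]          # mirrors `aws iam delete-user`
```

The key point is only the ordering: group membership removal must precede user deletion, which is exactly what the DeleteConflict error is complaining about.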

More info:
I run into this during IPI cluster install when the process destroys the bootstrap resources; in that case I added the force_delete to the installer Terraform templates so the install wouldn't fail. I also see this when destroying a cluster, where openshift-install gets stuck with that same 409 error deleting all the users created via CredentialsRequest (for example, the ebs and machine-api users). I scripted this up to clear out the users and let the destroy process complete:

# Find IAM users whose names match this cluster's infra ID prefix.
users=$(aws iam list-users | jq -r '.Users[] | select(.UserName|test("^'"$env"'-[a-z0-9]{5}-")) | .UserName')

if [ -n "$users" ]; then
    for user in $users; do
        echo "Removing $user..."
        # Leave the forced group first, otherwise delete-user returns a 409.
        aws iam remove-user-from-group --user-name "$user" --group-name ForcedUserGroup
        aws iam delete-user --user-name "$user"
    done
else
    echo "No users found"
fi

Differentiate the Cloud Credential for different compute from same cloud

Overview:

Right now, CredentialsRequests are mapped to individual cloud providers like AWS, Azure, IBMCloud, etc. In IBMCloud we have different compute offerings, like VPC and PowerVS, with different policies and ACLs, but they share a common IAM model, and the same CredentialsRequest code can be used with different policies.

Existing Solution:

Solution 1: Include all the possible ACL in the single CredentialsRequest

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: openshift-machine-api-ibmcloud
  namespace: openshift-cloud-credential-operator
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: IBMCloudProviderSpec
    policies:
      - <policies for vpc>
      - <policies for powervs>

Pros:

  • One CR for the platform

Cons:

  • in-cluster components will have wider ACLs

Solution 2: Create multiple CRs

for vpc:

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: openshift-machine-api-ibmcloud
  namespace: openshift-cloud-credential-operator
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: IBMCloudProviderSpec
    policies:
      - <policies for vpc>

for powervs:

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
  name: openshift-machine-api-ibmcloud
  namespace: openshift-cloud-credential-operator
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: IBMCloudProviderSpec
    policies:
      - <policies for powervs>

Pros:

  • Better ACL with different CRs

Cons:

  • Still not foolproof: in-cluster components can still see the additional CR, and it could be used accidentally.

Proposed solution:

  1. Include a label on the CR naming the compute subresource,
    e.g:
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  labels:
    controller-tools.k8s.io: "1.0"
    cloudcredential.openshift.io/compute-type: powervs
  name: openshift-machine-api-ibmcloud
  namespace: openshift-cloud-credential-operator
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: IBMCloudProviderSpec
    policies:
      - <policies for powervs>
  2. Use this label when extracting the CRs from the release image:
    e.g: $ oc adm release extract --credentials-requests --cloud=ibmcloud --selector cloudcredential.openshift.io/compute-type=powervs quay.io/<path_to>/ocp-release:<version>
  3. Run the ccoctl command with no changes

Pro:

  • All the required in-cluster components will have only the required credentials
  • No need to change anything in ccoctl
  • No need to change existing CRs that do not need such classification

Cons:

  • Need to include an additional selector in oc adm release extract
  • Docs need to be updated
  • Any existing CRs that need classification must be updated.

the credential secret does not seem to be monitored

Using this operator, I noticed that the secret created with the credentials is not monitored by the operator. That is to say:

  1. if the secret is deleted, it is not recreated by the operator
  2. if the namespace containing the secret is deleted, the secret is not recreated when the namespace is recreated
  3. (untested) if the secret is modified, the changes should be overwritten.

Is there a reason for this design?
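For what it's worth, a level-driven reconciler that did watch the target secret would behave like this sketch on every sync: recreate the secret if missing (cases 1 and 2) and overwrite drift (case 3). This is a simplified model to illustrate the expected semantics, not the operator's actual code:

```python
def reconcile_secret(cluster, namespace, name, desired_data):
    """`cluster` is a toy store: {namespace: {secret_name: data}}.
    Returns the action taken so callers can log it."""
    ns = cluster.setdefault(namespace, {})  # handles a recreated namespace
    current = ns.get(name)
    if current is None:
        ns[name] = dict(desired_data)
        return "created"
    if current != desired_data:
        ns[name] = dict(desired_data)       # overwrite out-of-band edits
        return "updated"
    return "unchanged"
```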

Getting failed to fetch Terraform Variables: failed to fetch dependency of "Terraform Variables" error while creating openshift cluster on AWS

Version: ./openshift-install v4.2.0

Getting failed to fetch Terraform Variables: failed to fetch dependency of "Terraform Variables" error while creating openshift cluster on AWS.

Full Error :

FATAL failed to fetch Terraform Variables: failed to fetch dependency of "Terraform Variables": failed to fetch dependency of "Cluster ID": failed to fetch dependency of "Install Config": failed to generate asset "Base Domain": no public Route 53 hosted zones found

Please let me know anything else that needs to be configured.

Does the Credential Provider have to use the Windows password?

Hello there, great product!
I am using a Credential Provider to develop my custom login method. After my research, the Credential Provider seems to require the user's original Windows password. My question is: does the Credential Provider have to know the original Windows password and pass it to the operating system for verification? Otherwise, is the user unable to log in successfully?

Set sideEffects for pod-identity-webhook mutatingwebhook

Bug description

When MutatingWebhookConfiguration/pod-identity-webhook is deployed, its sideEffects is Unknown (v1beta1's default).

MutatingWebhookConfiguration/pod-identity-webhook
- apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    annotations:
      service.beta.openshift.io/inject-cabundle: "true"
    creationTimestamp: "2020-07-29T05:07:55Z"
    generation: 2
    managedFields:
    - apiVersion: admissionregistration.k8s.io/v1beta1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:service.beta.openshift.io/inject-cabundle: {}
        f:webhooks:
          .: {}
          k:{"name":"pod-identity-webhook.amazonaws.com"}:
            .: {}
            f:admissionReviewVersions: {}
            f:clientConfig:
              .: {}
              f:service:
                .: {}
                f:name: {}
                f:namespace: {}
                f:path: {}
                f:port: {}
            f:failurePolicy: {}
            f:matchPolicy: {}
            f:name: {}
            f:namespaceSelector: {}
            f:objectSelector: {}
            f:reinvocationPolicy: {}
            f:rules: {}
            f:sideEffects: {}
            f:timeoutSeconds: {}
      manager: cloud-credential-operator
      operation: Update
      time: "2020-07-29T05:07:55Z"
    - apiVersion: admissionregistration.k8s.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:webhooks:
          k:{"name":"pod-identity-webhook.amazonaws.com"}:
            f:clientConfig:
              f:caBundle: {}
      manager: service-ca-operator
      operation: Update
      time: "2020-07-29T05:07:55Z"
    name: pod-identity-webhook
    resourceVersion: "13547"
    selfLink: /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/pod-identity-webhook
    uid: c6ebf9c9-a279-43ae-8d91-1aecd00414a9
  webhooks:
  - admissionReviewVersions:
    - v1beta1
    clientConfig:
      caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURVVENDQWptZ0F3SUJBZ0lJTWtQSm96S3BGd1V3RFFZSktvWklodmNOQVFFTEJRQXdOakUwTURJR0ExVUUKQXd3cmIzQmxibk5vYVdaMExYTmxjblpwWTJVdGMyVnlkbWx1WnkxemFXZHVaWEpBTVRVNU5UazVPRGszTlRBZQpGdzB5TURBM01qa3dOVEF5TlRSYUZ3MHlNakE1TWpjd05UQXlOVFZhTURZeE5EQXlCZ05WQkFNTUsyOXdaVzV6CmFHbG1kQzF6WlhKMmFXTmxMWE5sY25acGJtY3RjMmxuYm1WeVFERTFPVFU1T1RnNU56VXdnZ0VpTUEwR0NTcUcKU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLQW9JQkFRQzVPbG5GTEcxMjB1ZUtKOVVPS2lJMGJhSnNmQ0t1M0dxMwpjWHliME8xMmh1MTRoQ043bE1BdjFQQTAvSFhBWGFEdnJkd25vQ3Q0VnlRQ0dJWERwUDBmOEpiUmhUT1o5V05sCit6TFkvZjJaeERUdXZBOFNaR1Bhem1CdXdSWnY1Y25xRVJMRG1BaEFDaEVrQ0Q2R2g2bHVVRXpUbU9wRHo4TFEKNVdMTnJJZGhxV0xHTTZjcHg0ZHI1LytPSE5VK1FWVlJKcnV3THc5QUc0VnFhRHY3WVF6UVorbVJibmJJekZPegozZU45U25SS1JKVHY4eVJ2bG5HZEx4MGNuc3RyVWtsSUwvU1pmT3hpN3ZESzVUY3h2OUFBMENsMnpadnVpV1JUCm1pNXV5TDgvWFUzUm1TSW1DK01HdUt1eERZZ01xMit5dmJnMFdUdXpLQ2tVeWlNVDAwRzdBZ01CQUFHall6QmgKTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQQmdOVkhSTUJBZjhFQlRBREFRSC9NQjBHQTFVZERnUVdCQlFlUmN3NApIb0xRVEVnOHQwbHMrUHhlYjJwbHFqQWZCZ05WSFNNRUdEQVdnQlFlUmN3NEhvTFFURWc4dDBscytQeGViMnBsCnFqQU5CZ2txaGtpRzl3MEJBUXNGQUFPQ0FRRUFYUEMwK3dMWStldmFRejJrbUp0U211a3VjN2RJNUo4NlQzeDYKV1c2cE9YbnNrak5adHRCR0Y4OXZzWTRTbmp1YmtKb3FaUktIVzV4ZGEzVURDckgyY2JGWVFiK0xSWE9pdG9RSgowK2VicTZ1WWRadTFGdXhObG51enF5SndFR29hQnRhblN4cHpxQUFWMmxCaUF3eTY0MEF2b1JXSXcyVkVJdUNLCnp1QkdNVXZKaXpWSGI0bkorNVk2ZVhoWXErNjZHeDBxUXkrMmRWK2trL3VmVklIaVhNVjBJbDJMZ0t0elJVaEcKMEV2dUlBeTBsbGdhaW05NzhpVmVHUElsYXpBeG1WL21XSkRLU21CSnk5cVJnSEt0ZHRYOHgwZ1FKTWc1QVNUbApaM1pBU0wxNXQ5cFg2THpYUGoxSUV0UHZtMU1xOERxdGxjbmYrbE1Yb3AwaHc5MTJuQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
      service:
        name: pod-identity-webhook
        namespace: openshift-cloud-credential-operator
        path: /mutate
        port: 443
    failurePolicy: Ignore
    matchPolicy: Exact
    name: pod-identity-webhook.amazonaws.com
    namespaceSelector: {}
    objectSelector: {}
    reinvocationPolicy: Never
    rules:
    - apiGroups:
      - ""
      apiVersions:
      - v1
      operations:
      - CREATE
      resources:
      - pods
      scope: '*'
    sideEffects: Unknown
    timeoutSeconds: 30

Due to this, when we deploy another webhook that relies on dryRun in the cluster, it does not work.

Please see: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#side-effects

Unknown: no information is known about the side effects of calling the webhook. If a request with dryRun: true would trigger a call to this webhook, the request will instead fail, and the webhook will not be called.
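With admissionregistration.k8s.io/v1, sideEffects is a required field and must be None or NoneOnDryRun. Since pod-identity-webhook only mutates the admitted pod, the fix is presumably to declare it side-effect free — a sketch of the relevant fragment, not the actual patch:

```yaml
webhooks:
- name: pod-identity-webhook.amazonaws.com
  # No out-of-band side effects: dryRun requests can safely call the webhook.
  sideEffects: None
```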

AWS Client does not support Session Token

Cannot assume a role and use ccoctl:

ubuntu@ip-10-10-1-116:~/git/ocp-clusters/ansible$ export | grep AWS
declare -x AWS_ACCESS_KEY_ID=""
declare -x AWS_SECRET_ACCESS_KEY=""
declare -x AWS_SESSION_TOKEN=""

Results in

6a-oidc: AccessDenied: Access Denied
        status code: 403, request id: YQYJAXJ619CHP004, host id: L6hbZviaRB98dMyEzDz65LA/WGmhLZNhcRAn5zarLZ3aYPy0O3+f+aGnDluPTzugz/GkEWW3tAv5I5DtWoVyUQ==
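For reference, temporary STS credentials are a triple, and a client that only reads the key-ID/secret pair will fail like this with AccessDenied. The standard shared-credentials-file form (values redacted):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = <redacted>
aws_secret_access_key = <redacted>
aws_session_token     = <redacted>  # required when the keys come from AssumeRole
```

Supporting the session token would mean the client honoring this third field (or the AWS_SESSION_TOKEN environment variable) alongside the key pair.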

S3 Bucket URLs and Identity Provider URLs are mismatched for us-gov-west-1 region

Hi OpenShift Team,

Thank you for making this utility to help with installing OpenShift on AWS.

When I was running through this tutorial: https://docs.openshift.com/container-platform/4.12/authentication/managing_cloud_provider_credentials/cco-mode-sts.html#cco-ccoctl-configuring_cco-mode-sts

I encountered a bug where the Identity Provider was pointing to the wrong S3 Bucket URL for the us-gov-west-1 region.

The S3 bucket URL in the us-gov-west-1 region follows this scheme, with s3-: https://[name]-oidc.s3-us-gov-west-1.amazonaws.com
whereas the Identity Provider provisioned by the ccoctl tool used this hostname scheme, with s3.: https://[name]-oidc.s3.us-gov-west-1.amazonaws.com

I hope that helps. 🤞

Thanks again for this wonderful tool 🙏

GCP: Make the sentence "Enter 2 empty lines to finish" less confusing && add option to specify the Service Account file.

As someone whose first language is not English, I understood the sentence "[Enter 2 empty lines to finish]" as something like:

Enter 2 empty lines to “cancel”

So it would be better to use a stronger word (or, much better, avoid having to hit Enter twice?) to confirm:

Press Enter two additional times to confirm your choice

# grep -n -A2 -B2 "Enter 2 empty lines to finish" cloud-credential-operator/vendor/github.com/AlecAivazis/survey/v2/multiline.go
33-{{- else }}
34-  {{- if .Default}}{{color "white"}}({{.Default}}) {{color "reset"}}{{end}}
35:  {{- color "cyan"}}[Enter 2 empty lines to finish]{{color "reset"}}
36-{{- end}}`
37-

This will have less confusion here:

33-{{- else }}
34-  {{- if .Default}}{{color "white"}}({{.Default}}) {{color "reset"}}{{end}}
35:  {{- color "cyan"}}[Press Enter two additional times to confirm your choice]{{color "reset"}}
36-{{- end}}`
37-

Another way to avoid this would be to allow passing the GCP service account JSON file with an option like --gcp-sa-json.

Cloud credentials platform scope

Follow-up openshift/machine-api-operator#328

Currently, all cloud credentials requests are added to every cluster.
This means if I'm running an AWS cluster I will see references to Azure in the secrets, and vice versa.

[mjudeiki@redhat openshift-azure]$ oc get credentialsrequests.cloudcredential.openshift.io 
NAME                               AGE
azure-openshift-ingress            161m
cloud-credential-operator-iam-ro   161m
openshift-image-registry           161m
openshift-ingress                  161m

openshift-machine-api              161m
openshift-machine-api-azure        161m

Only the required credentials are fulfilled, based on the cloud we are running on, but the other CredentialsRequests still exist.

OCP will be running as a managed service on Azure and AWS, and potentially more clouds will follow. This means these credentials now show up in managed service offerings, which is not acceptable.

For certain cloud providers, this is a first-party service, sold by them to their customers. Having references to other cloud providers in their offerings is not acceptable.

We need a clear way to distinguish which platform we are running and deliver only those credentials.

/cc @jim-minter @pweil- @dgoodwin

CloudCredentialsOperator Generating tons of logs

The CloudCredentialsOperator is generating tons of log entries; after searching in Kibana, we're getting a lot of hits from debug messages. Is there a way to configure the log level to be less verbose?
We're using OCP 4.3.13.

AWS: add encryption to pod-identity-webhook S3 bucket

As an enterprise user of OpenShift, I want to address the need for encryption of the pod-identity-webhook S3 bucket as we have corresponding security guidelines. From my own experiments in which I deployed my own pod-identity-webhook, I know that at least S3 SSE with AES-256 is working. Of course, it would be ideal to use KMS, but I haven't found a way so far to make that work.

Is there any chance for encryption support or did you choose not to encrypt the bucket by design?

The readme references cloudcredential.openshift.io/v1beta1

When trying this out, I had to adjust the version to cloudcredential.openshift.io/v1.
The message I received was:

 no matches for kind "CredentialsRequest" in version "cloudcredential.openshift.io/v1beta1"

Happy to create a PR if you want this change reflected in the readme
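For anyone hitting the same error, a minimal request against the v1 API looks like this — the name, namespace, and permissions are illustrative only:

```yaml
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: my-component            # illustrative
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: my-component-creds    # where CCO writes the minted credentials
    namespace: my-namespace
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - iam:GetUser             # illustrative permission
      resource: "*"
```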

oc adm release extract --cloud=ibmcloud

I am unable to extract the credentials for the IBM Cloud Provider.

We are using OpenShift version 4.10.17.

Running the following command produces the following:

➜ oc adm release extract --cloud=powervs --credentials-requests $RELEASE_IMAGE --to=./credreqs 
error: --cloud value not recognized, must be one of: [aws azure openstack gcp ovirt vsphere]

This error message does not display the options for IBM Cloud, IBM Cloud Power VS or Alibaba.

Operator is looking for "azure-credentials" in STS mode

Build version is: 4.14.0-0.nightly-2023-08-08-222204

The operator is logging errors while searching for "azure-credentials"; this shouldn't happen in STS mode, correct?

Error logged by cloud-credential-operator container:

time="2023-09-12T19:23:33Z" level=info msg="operator detects STS enabled cluster" controller=credreq cr=openshift-cloud-credential-operator/openshift-cloud-network-config-controller-azure
time="2023-09-12T19:23:33Z" level=info msg="syncing credentials request" controller=credreq cr=openshift-cloud-credential-operator/openshift-cloud-network-config-controller-azure
time="2023-09-12T19:23:33Z" level=error msg="error checking whether credentials already exists: Secret \"azure-credentials\" not found" controller=credreq cr=openshift-cloud-credential-operator/openshift-cloud-network-config-controller-azure secret=openshift-cloud-network-config-controller/cloud-credentials

[serial-log-bundle-20230914072950.tar.gz](https://github.com/openshift/cloud-credential-operator/files/12609977/serial-log-bundle-20230914072950.tar.gz)

Attached:

  • must-gather log bundle
  • VM serial console logs collected with openshift-install gather bootstrap

Creating credentials with ccoctl tool failing - SIGSEGV: segmentation violation

$ openshift-install version
4.14.1

Platform type: Alibaba Cloud

Install type: IPI (automated install with openshift-install)

What happened?
I am following the steps in the documentation for quickly installing a cluster on Alibaba Cloud:
"Creating credentials for OpenShift Container Platform components with the ccoctl tool"

ccoctl alibabacloud create-ram-users \
  --name=<name> \
  --region=<region> \
  --credentials-requests-dir=<path-to-credentials-requests-directory> \
  --output-dir=<path-to-output-dir>

Please find the error in the attached screenshot. I verified that the oc version and the cluster version are the same.

Please help on how to fix this issue.

Force Passthrough Mode

Hi Team,

I'm working through the documentation and code for the 4.5 release.

There appears to be quite a bit of development going on around configuration, moving from the ConfigMap-driven approach to a CRD.

Is there a way to deploy CCO and force passthrough mode in the 4.5 release? If so, what can be used to control this?
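For what it's worth, in releases where the cluster-scoped CloudCredential CRD is available, the mode can be forced through that resource; 4.5 may still rely on the older kube-system ConfigMap mechanism, so treat this as a sketch of the CRD-based direction rather than a confirmed 4.5 answer:

```yaml
apiVersion: operator.openshift.io/v1
kind: CloudCredential
metadata:
  name: cluster
spec:
  credentialsMode: Passthrough   # valid values: "", Mint, Passthrough, Manual
```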

Provision credentials for OpenStack

With openshift/cluster-image-registry-operator#279, the registry will support provisioning Swift storage for OpenStack installs. If the cloud credential operator does not provision OpenStack credentials (as it does for AWS/S3), IPI installs will hang waiting for the registry to become available.

The registry operator currently expects a clouds.yaml file to be placed in the registry's cloud credentials secret, containing all credentials and pertinent info (AuthURL, Domain, etc.) needed to connect to OpenStack [1]. If needed, the registry operator can be updated to accept a different format.

[1] openshift/cluster-image-registry-operator@c3d1962#diff-1a2dfb021d90e1e32592899d022b0078R198
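For context, the clouds.yaml layout the registry operator expects follows the standard OpenStack client configuration format; every value below is a placeholder, and the cloud name (`openstack`) is whatever key the consumer is configured to look up:

```yaml
clouds:
  openstack:                     # cloud name referenced by the consumer
    auth:
      auth_url: https://keystone.example.com:5000/v3
      username: registry-user
      password: example-password
      project_name: openshift
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```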
