kube-object-storage / lib-bucket-provisioner
Library for the dynamic provisioning of object store buckets to be used by object store providers.
License: Apache License 2.0
This is for public FYI:
Today, once a bucket is provisioned, all subsequent OBC updates are ignored. The implication is that changes to (future) bucket properties such as ACLs, size limits, etc. won't be processed by the lib.
When an OBC is fulfilled, the user will have full access to the dynamically generated bucket. In practical use however, the owner should be capable of requesting keys with defined limited permissions (RO, WO, RW, etc). This differs from ACLs, which define public access.
The purpose of this feature is to allow users to limit access to their buckets to only what the connecting app requires. For instance, an app that only serves data from the bucket should not have write access.
We need to add event recording to OBs and OBCs so that kubectl describe can show events.
All long, blocking operations take context.Context. Context is a standard way to give users of your library control over when actions should be interrupted.
and related:
Allow clean shutdown of background goroutines
This echoes the "restrict goroutine lifetimes" feedback: there should be a way to end any goroutines your library creates, in a way that won't signal spurious errors.
From: https://medium.com/@cep21/aspects-of-a-good-go-library-7082beabb403
No code path within NewProvisioner() generates an error; the function will only ever return nil. Let's remove the error return value from the function so consumers don't end up writing unnecessary error checks.
Easily customizable loggers
There is no broadly accepted Go logging library. Expose a logging interface that does not force me to import your favorite.
From: https://medium.com/@cep21/aspects-of-a-good-go-library-7082beabb403
As the library gains use, the provisioner constructor function's signature continues to change. This is due in large part to a desire to make the library more configurable. This also means that each new feature that requires a change to the function signature breaks existing callers.
Instead of configuration being handled via unique parameters, a more flexible method should be implemented. A simple solution would be a config struct which is passed as a parameter to the constructor. Nil values should have a default meaning. e.g.:
type Provisioner struct {
    config *config
}

type config struct {
    namespace  string
    logger     logr.Logger
    kubeconfig *rest.Config
    ...
}

func NewProvisioner(
    provisionerName string,
    provisioner api.Provisioner,
    cfg *config,
) (*Provisioner, error) { ... }
There are also chaining solutions using simple functions to define the target config.
Anything is better than the current implementation 😄
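One such chaining approach is the functional-options pattern. A minimal sketch (all names here are hypothetical illustrations, not the library's actual API):

```go
package main

import "fmt"

// config holds optional settings; zero values mean "use the default".
type config struct {
	namespace string
	logLevel  int
}

// Option mutates the config; callers pass any subset in any order.
type Option func(*config)

func WithNamespace(ns string) Option { return func(c *config) { c.namespace = ns } }
func WithLogLevel(lvl int) Option    { return func(c *config) { c.logLevel = lvl } }

type Provisioner struct {
	name string
	cfg  config
}

// NewProvisioner keeps a stable signature; new settings are added as Options
// without breaking existing callers.
func NewProvisioner(name string, opts ...Option) *Provisioner {
	cfg := config{namespace: "default"} // defaults first
	for _, o := range opts {
		o(&cfg) // each option overrides one setting
	}
	return &Provisioner{name: name, cfg: cfg}
}

func main() {
	p := NewProvisioner("aws-s3.io/bucket", WithNamespace("s3-provisioner"))
	fmt.Println(p.cfg.namespace)                  // s3-provisioner
	fmt.Println(NewProvisioner("x").cfg.namespace) // default
}
```

The advantage over a config struct is that adding a new option is purely additive: no caller ever sees a signature change, and unset options silently keep their defaults.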
On Delete, the OBC is deleted, but the OB is not. After restarting the provisioner, the Delete function is not called again.

Hi @jeffvance @copejon
CC: @dannyzaken @tamireran @nimrod-becker
The current provisioner requires resolving the service to a single address, which is assigned to BUCKET_HOST and BUCKET_PORT in the configmap. This makes sense for a global service like AWS/Google/Azure endpoints.
However, when the S3 endpoint is served inside the cluster, there are multiple ways clients may need to connect to the service: using internal DNS (<service-name>.<namespace>), a nodePort, an external load balancer, a route/ingress, etc. It seems weird that every OBC should be aware of the client's network requirements.
I would expect that in this case the configmap would include just BUCKET_SERVICE_NAME and BUCKET_SERVICE_NAMESPACE, so that the OBC provisioner can assign a service to the client, and the client can decide which service endpoint to use.
Thoughts?
Phase names should begin with an upper-case letter, similar to PVC phase names (Pending, Bound, etc.).
This issue is to be the primary thread for early design dialogue.
As far as I understand the OLM flow requires that every CRD is owned
by one OLM package, and other operators can require
a CRD and OLM will automatically pull that package and install it to satisfy the requirement. See operator-framework/community-operators#625 (comment)
Read more in owned-crds vs required-crds.
So what I think is to publish a CSV from this repo that includes just the CRDs, but without an operator deployment, so that noobaa-operator, rook-ceph, and others can require it in order to have the CRDs installed.
Thoughts?
Per original design doc:
kind: Secret
metadata:
labels:
objectbucket.io/PROVISIONER-NAME: [3]
...
1. label may be used to associate all artifacts under a particular provisioner.
kind: ConfigMap
metadata:
labels:
objectbucket.io/PROVISIONER-NAME: [3]
...
1. label here associates all artifacts under a specific provisioner.
kind: ObjectBucket
metadata:
labels:
objectbucket.io/PROVISIONER-NAME: [3]
...
1. label here associates all artifacts under the particular provisioner.
Hi
I couldn't find in the design doc any reference to how permissions are granted/revoked -
https://github.com/yard-turkey/lib-bucket-provisioner/blob/master/doc/design/object-bucket-lib.md#bucket-sharing
Did I miss it specified somewhere?
Otherwise, anyone can claim any existing bucket and get unconditional access to it.
Thanks
This request came from the 2019 Cephalocon conference in Barcelona.
Maybe Prometheus can be used?
Maybe we need to expose existing s3/ceph object metrics?
Anytime the lib generates a bucket name, the obc.Spec.BucketName field should be updated with the generated bucket name.
The /pkg/provisioner directory was hastily written and could use a cleanup. Helper function files should be moved to a subdirectory (./internal/ would be preferable) to separate the responsibilities of the packages.
The current design is to call the Delete method only for new buckets (greenfield) with a reclaimPolicy of "Delete". All other cases call Revoke.
The problem with this is that provisioners cannot distinguish between Revoke called on a greenfield bucket's delete and Revoke called for any brownfield bucket, regardless of its reclaimPolicy. And, perhaps, some provisioners will want to distinguish these two cases.
We could change the design such that Delete is always called when reclaimPolicy == "Delete" and Revoke is always called when reclaimPolicy == "Retain". Provisioners should then delete the bucket when Delete is called, even if it was a brownfield bucket. If this is not the desired behavior, then the storage class's reclaimPolicy should not be "Delete".
This approach keeps the library neutral on greenfield vs. brownfield and relies simply on the reclaim policy.
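The proposed rule can be sketched as a single dispatch on the reclaim policy (teardownAction is a hypothetical helper, not library code):

```go
package main

import "fmt"

// teardownAction models the proposed rule: the choice between Delete and
// Revoke depends only on the storage class's reclaim policy, not on whether
// the bucket was greenfield or brownfield.
func teardownAction(reclaimPolicy string) string {
	switch reclaimPolicy {
	case "Delete":
		return "Delete" // provisioner removes the bucket itself
	case "Retain":
		return "Revoke" // provisioner revokes access; the bucket survives
	default:
		return "Revoke" // safe default: never destroy data unexpectedly
	}
}

func main() {
	fmt.Println(teardownAction("Delete")) // Delete
	fmt.Println(teardownAction("Retain")) // Revoke
}
```

Note the default branch: an unrecognized policy falls back to Revoke, matching the principle that the library should never destroy data unless the policy explicitly says "Delete".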
We need to normalize brownfield vs. greenfield detection. In the docs (I think) we specify that a StorageClass should have a parameter containing a bucket name; if it is defined, the request is assumed to be brownfield. This isn't an honored part of the API contract, though. Instead, provisioners have to duplicate the key name in code.
Fix this by adding an API call to check storage classes for brown vs green field cases.
Here are some rules agreed to by myself and @copejon:
As a note, this hierarchy does not support a fully random (no prefix) bucket name, meaning the OBC's bucketName == "" and generateBucketName == "". At a minimum, a 1-letter generateBucketName value must be supplied for random names.
Allow library consumers to specify the info and debug log levels of the library; default values should be set if not configured. Currently the log levels are 0 and 1 for info and debug, respectively. The reasons for doing so are to not spam developers' logs and to allow devs to align levels with the consuming project.
From the doc:
status:
phase: {"pending", "bound", "released", "failed"} [5]
...
1. phases of bucket creation, mutually exclusive:
- _pending_: the operator is processing the request
- _bound_: the operator finished processing the request and linked the OBC and OB
- _released_: the OB has been deleted, leaving the OBC unclaimed but unavailable.
Tracking go.sh lint results:
[screeley@screeley lib-bucket-provisioner (gosh)]$ ./hack/go.sh lint
WARN [runner/megacheck] Can't run megacheck because of compilation errors in packages [github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/clientset/versioned/typed/objectbucket.io/v1alpha1 github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/clientset/versioned github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/clientset/versioned/typed/objectbucket.io/v1alpha1/fake github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/clientset/versioned/fake github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/informers/externalversions/internalinterfaces github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/listers/objectbucket.io/v1alpha1 github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/informers/externalversions/objectbucket.io/v1alpha1 github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/informers/externalversions/objectbucket.io github.com/kube-object-storage/lib-bucket-provisioner/pkg/client/informers/externalversions github.com/kube-object-storage/lib-bucket-provisioner/pkg/provisioner]: pkg/client/clientset/versioned/typed/objectbucket.io/v1alpha1/objectbucket.io_client.go:79: DirectCodecFactory not declared by package serializer and 17 more errors: run `golangci-lint run --no-config --disable-all -E typecheck` to see all errors
pkg/provisioner/controller.go:1: /home/screeley/git/go/pkg/mod/k8s.io/[email protected]+incompatible/rest/request.go:598:91: too few arguments in call to watch.NewStreamWatcher (typecheck)
/*
pkg/apis/objectbucket.io/register.go:17:1: don't use an underscore in package name (golint)
package objectbucket_io
^
pkg/provisioner/controller.go:342:21: error strings should not be capitalized or end with punctuation or a newline (golint)
return fmt.Errorf("error updating OB %q's status to %q:", ob.Name, v1alpha1.ObjectBucketStatusPhaseBound, err)
With mods disabled (we think)
pkg/provisioner/controller.go:1: /home/screeley/git/go/pkg/mod/k8s.io/[email protected]+incompatible/kubernetes/typed/autoscaling/v1/autoscaling_client.go:74:43: DirectCodecFactory not declared by package serializer (typecheck)
/*
pkg/apis/objectbucket.io/register.go:17:1: don't use an underscore in package name (golint)
package objectbucket_io
^
pkg/provisioner/controller.go:342:21: error strings should not be capitalized or end with punctuation or a newline (golint)
return fmt.Errorf("error updating OB %q's status to %q:", ob.Name, v1alpha1.ObjectBucketStatusPhaseBound, err)
kubectl create obc baba ...
kubectl get obc,ob,secret,cm
kubectl delete obc baba
kubectl get obc,ob,secret,cm
# ob, secret, cm are still there...
This seems to be because they have a finalizer and the provisioner is not removing it...
The request came from the 2019 Cephalocon conference in Barcelona.
Perhaps we need policy CRDs or perhaps policy can be expressed in the OBC and/or storage class.
ACLs are a common feature of S3 API implementations and provide an easy method for defining access at creation time. Users requesting dynamic buckets should be the ones defining the public/private access policies of their buckets and should do so via the OBC.
A spec sub-structure should be added to encapsulate these ACLs, and the library should have the capability to check that the value matches a supported list. Creation of the ACLs via an S3 interface should be left to provisioner authors.
Edit - static buckets (a.k.a. brownfield) should not be considered in this design - users should not be allowed to alter static bucket ACLs.
An ever expanding list of things that should (probably) be done
refactor unexported methods into helper funcs
organize helper funcs! (util/ should not be a catchall)
refactor funcs to an internal/ subdirectory of the controller
API validation / webhooks
update unit tests - the crunch we've been under means a lot of untested funcs and stale tests
Add ACLs field in OBC (e.g. PublicReadWrite, PublicRead, etc)
Add Policy field in OBC (e.g. RO, WO, RW permissions for generated key)
API change: add API to request additional key pairs to dynamic bucket
leader election to enable fail over in deployment
Prometheus integration
event logging in OBC
CI linters/vetters should be finalized and build failure should be defined
move developer how-to doc into lib repo (from examples repo)
review design doc for adherence to overall vision (does it include the above features?)
code walk - before we can hand this off, we should give it a deep cleaning (reduce redundancy, detangle spaghetti code)
enable custom loggers
rework the provisioner constructor: field additions/deletions force a change to the parameter list. We should find a more stable strategy that allows structural changes without breaking NewProvisioner() callers. One alternative is a set of method calls that can be chained together arbitrarily to set individual options; another is a variadic parameter list of option-setter functions, detailed here.
logging clean up and configurable verbosity levels for info and debug
There is nothing in place to provide immediate user feedback for invalid OBC or OB definitions.
Today, the library reacts to an OBC add event by calling Provision()/Grant(), and creating a CM, Secret, and OB. If at any point during this process an error occurs, the library calls Delete().
This creates a situation where Provision() or Grant() may return success but api server failures cause the library to take a scorched earth policy and completely destroy all artifacts, including the new bucket, authentication credentials, and api resources.
This is extremely inefficient. Once every provisioning artifact is gone, they then have to be recreated. Since most errors are persistent, this cycle continues at an exponential backoff, spamming logs, api calls, and object store operations.
It is prone to instability. If cleanup fails, the sync function exits and leaves an irreconcilably dirty environment. The next sync starts off assuming a clean environment, and either
a) errors due to collisions on statically named resources, or
b) avoids collisions with generated names, thus spamming object store artifacts and orphaning them immediately.
The conclusion is that when the library works, it's great. When it doesn't, it actively confuses debugging by thrashing artifact creation, spamming logs, and likely never reconciles.
The library must be more nuanced in how it reconciles Actual State of the World(ASotW) to Desired State of the World(DSotW). Do away with cleanup-on-error entirely. Preserve backwards compatibility.
Instead, every sync operation should first assess the ASotW, determine what is unfulfilled, and proceed to selectively reconcile the situation.
For example, this abbreviated OBC is created:

kind: ObjectBucketClaim
metadata:
  name: my-claim
spec:
  storageClassName: bucket-class
  bucketName: my-bucket
The library calls Provision(), which succeeds and returns an OB. At any point after this, an error may occur. Instead of immediately cleaning up and exiting, the library must preserve state for the next go-around. This is a three-fold operation.
State must be stored in memory and be safely accessible between syncs. Golang's sync.Map is used internally by k8s' external provisioner library and provides a means of persisting sync state. As the data we want to store is encapsulated in the OB, the OB would be a good candidate to store in the map. The key would then be the queue key, the string value of the OBC's "namespace/name", as it's always the same between sync ops.
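A minimal sketch of the sync.Map idea (ObjectBucket, storeOB, and loadOB are illustrative stand-ins, not the library's actual types):

```go
package main

import (
	"fmt"
	"sync"
)

// ObjectBucket is a stand-in for the real OB type.
type ObjectBucket struct{ Name string }

// inFlight holds per-claim state across sync iterations. sync.Map is safe for
// concurrent use by multiple reconcile workers without extra locking.
var inFlight sync.Map

// storeOB records the OB produced by Provision(), keyed by the queue key,
// i.e. the OBC's "namespace/name".
func storeOB(key string, ob *ObjectBucket) { inFlight.Store(key, ob) }

// loadOB returns a previously provisioned OB, if any, so a later sync can
// resume where it left off instead of re-provisioning.
func loadOB(key string) (*ObjectBucket, bool) {
	v, ok := inFlight.Load(key)
	if !ok {
		return nil, false
	}
	return v.(*ObjectBucket), true
}

func main() {
	key := "s3-provisioner/screeley-provb-3"
	storeOB(key, &ObjectBucket{Name: "obc-s3-provisioner-screeley-provb-3"})

	if ob, ok := loadOB(key); ok {
		fmt.Println("resuming with OB:", ob.Name)
	}
	inFlight.Delete(key) // clean up once the claim is Bound (or deleted)
	_, ok := loadOB(key)
	fmt.Println(ok) // false
}
```

On the next sync, a hit in the map means Provision() already succeeded, so the reconciler can skip straight to creating whichever API resources are still missing.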
All API resources should be created, as far as is reasonable. If Provision() returns an error, then nothing should be posted to the API server. If Provision() succeeds, then an error creating one resource should not prevent the creation of the others.
OBC and OB status must reflect what's going on. Status.Phase should reflect the currently observed state. Most likely, an error will cause the OBC to transition from Pending, to Error, to Bound (assuming it was able to reconcile the error). Status.Conditions must be used to communicate the specific errors that are occurring. This should be a purely informational operation and must not be used to determine the ASotW; for why this is the case, see the long-running k8s issue on the subject.
Related to #151
Critical questions, suggestions, and ideas welcome
@guymguym @dannyzaken @nimrod-becker @liewegas @travisn @jeffvance @erinboyd
Use of /internal
The /internal package is woefully underused. I recommend both binaries and libraries take advantage of /internal to hide public functions that aren’t intended to be imported. Hiding your public import space also makes clearer which packages users should import and where to look for useful logic.
From: https://medium.com/@cep21/aspects-of-a-good-go-library-7082beabb403
The primary build script (./hack/go.sh) is completely undocumented, meaning devs are left to figure out the quirks of testing with generated code, using the standardized linter, and more on their own.
For tracking: results of vet imports:
[screeley@screeley lib-bucket-provisioner (gosh)]$ ./hack/go.sh vet imports
-------- vetting
# k8s.io/client-go/rest
../../../../pkg/mod/k8s.io/[email protected]+incompatible/rest/request.go:598:31: not enough arguments in call to watch.NewStreamWatcher
have (*versioned.Decoder)
want (watch.Decoder, watch.Reporter)
-------- formatting
goimports -w -local sigs.k8s.io/controller-runtime,github.com/kube-object-storage/lib-bucket-provisioner/ for packages under /home/screeley/git/go/src/github.com/kube-object-storage/lib-bucket-provisioner/pkg/apis/objectbucket.io
goimports -w -local sigs.k8s.io/controller-runtime,github.com/kube-object-storage/lib-bucket-provisioner/ for packages under /home/screeley/git/go/src/github.com/kube-object-storage/lib-bucket-provisioner/pkg/apis/objectbucket.io/v1alpha1
goimports -w -local sigs.k8s.io/controller-runtime,github.com/kube-object-storage/lib-bucket-provisioner/ for packages under /home/screeley/git/go/src/github.com/kube-object-storage/lib-bucket-provisioner/pkg/provisioner
goimports -w -local sigs.k8s.io/controller-runtime,github.com/kube-object-storage/lib-bucket-provisioner/ for packages under /home/screeley/git/go/src/github.com/kube-object-storage/lib-bucket-provisioner/pkg/provisioner/api
goimports -w -local sigs.k8s.io/controller-runtime,github.com/kube-object-storage/lib-bucket-provisioner/ for packages under /home/screeley/git/go/src/github.com/kube-object-storage/lib-bucket-provisioner/pkg/provisioner/api/errors
[screeley@screeley lib-bucket-provisioner (gosh)]$ git status
# On branch gosh
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: pkg/apis/objectbucket.io/register.go
# modified: pkg/apis/objectbucket.io/v1alpha1/doc.go
# modified: pkg/apis/objectbucket.io/v1alpha1/objectbucket_types.go
# modified: pkg/apis/objectbucket.io/v1alpha1/objectbucket_types_test.go
# modified: pkg/apis/objectbucket.io/v1alpha1/register.go
# modified: pkg/provisioner/api/domain.go
# modified: pkg/provisioner/api/errors/errors.go
# modified: pkg/provisioner/api/provisioner.go
# modified: pkg/provisioner/controller.go
# modified: pkg/provisioner/fakeinterface_test.go
# modified: pkg/provisioner/helpers.go
# modified: pkg/provisioner/helpers_test.go
# modified: pkg/provisioner/log.go
# modified: pkg/provisioner/manager.go
CRDs support multiple versions, though today the lib only defines v1alpha1. Later, we might add v1alpha2, v1beta1, and v1. If each of these versions at one time had the storage property set to true, then etcd has stored that version and remembers this fact (in the status.storedVersions field).
There are suggested steps to follow when deprecating a CRD version. But we can also use a conversion webhook.
Example:
Should a provisioner importing the v1alpha1 version of the lib see a v1alpha2 OBC? If yes, then there are potential compatibility issues. If no, then the lib can fail the provisioner's call to NewProvisioner, or can ignore this OBC (which will likely remain unowned).
Similarly, the provisioner could be at v1alpha2 of the lib when a v1alpha1 OBC is created. Should this scenario also fail the provisioner?
If version mismatches always fail the provisioner (at least for now) then I see no reason to support multiple versions of the CRDs.
Created a new OBC; the secret did not exist, but the provisioner failed because it got an "AlreadyExists" error on the secret. Maybe due to a create retry?
Then it tries to clean everything up and fails. Perhaps the names of those resources are not initialized when creation is only partially done?
I0904 09:30:06.270492 1 resourcehandlers.go:139] "level"=0 "msg"="creating Secret" "key"="noobaa-2/kaka" "name"="noobaa-2/kaka"
E0904 09:30:06.369702 1 controller.go:243] "msg"="cleaning up reconcile artifacts" "error"="error creating secret for OBC \"noobaa-2/kaka\": secrets \"kaka\" already exists" "key"="noobaa-2/kaka"
I0904 09:30:06.370114 1 controller.go:245] "level"=0 "msg"="deleting storage artifacts" "key"="noobaa-2/kaka"
time="2019-09-04T09:30:06Z" level=info msg="Delete: got request to delete bucket \"kaka-36f0517b-4016-464c-a2c0-5
07a63041f8f\" and account \"obc-account.kaka-36f0517b-4016-464c-a2c0-507a63041f8f.5d6f841d@noobaa.io\"" provision
er=noobaa.io/noobaa-2.bucket
time="2019-09-04T09:30:06Z" level=info msg="deleting account \"obc-account.kaka-36f0517b-4016-464c-a2c0-507a63041
[email protected]\"" provisioner=noobaa.io/noobaa-2.bucket
time="2019-09-04T09:30:06Z" level=info msg="✈️ RPC: account.delete_account() Request: {Email:obc-account.kaka-36f
[email protected]}"
time="2019-09-04T09:30:06Z" level=info msg="UpdateStatus: Done generation 1" sys=noobaa-2/noobaa
time="2019-09-04T09:30:06Z" level=info msg="✅ RPC: account.delete_account() Response OK: &{Op:res RequestID:1@https://::ffff:172.17.0.1:44186(cxevzhbc) Took:30.417999999946915 Error:<nil>}"
time="2019-09-04T09:30:06Z" level=info msg="✅ Successfully deleted account \"obc-account.kaka-36f0517b-4016-464c-a2c0-507a63041f8f.5d6f841d@noobaa.io\"" provisioner=noobaa.io/noobaa-2.bucket
I0904 09:30:06.421912 1 resourcehandlers.go:230] "level"=0 "msg"="removing ObjectBucket's finalizer" "key"="noobaa-2/kaka" "name"=""
E0904 09:30:06.422056 1 controller.go:423] "msg"="error deleting objectBucket" "error"="resource name may not be empty" "key"="noobaa-2/kaka" "/"=null
E0904 09:30:06.422168 1 controller.go:429] "msg"="error releasing secret" "error"="resource name may not be empty" "key"="noobaa-2/kaka" "/"=null
Today the library calls the provisioner's Delete method in several situations, but only for greenfield provisioning where the reclaim policy is "Delete":
1. the OBC is deleted;
2. Provision returns an error;
3. Provision is successful but the lib is unable to complete its job, e.g. it can't create the OB, secret, or configmap (greenfield only).
Note: assume Revoke is called for cleanup when Grant is called.
While provisioners generally don't distinguish between these cases, the 3rd reason can likely be folded into the first, since in both situations the provisioner knows the bucket was created and is now being called to delete the bucket and related artifacts.
This issue addresses the 2nd case above. I propose that the lib should not call Delete when Provision returns an error. In this case, provisioners should be expected to clean up all artifacts that were created or altered before their Provision method returns.
Note: if this is implemented, we will break all provisioners that rely on the lib calling their Delete method as one of the cleanup steps. This applies to all errors: those returned by Provision and those detected by the lib when it's creating resources.
The OBC status is not updated if the provisioning flow fails early. E.g., try to use a nonexistent storage class: the provisioner will fail, but status is not set on the OBC.
This request came from the 2019 Cephalocon conference in Barcelona.
It may be related to #119
It will be useful for multi-cluster storage migration (cc @erinaboyd @guymargalit @liewegas)
This came from the original design doc:
+ detects OBC update events: (post phase-0)
+ skip if the OBC's StorageClass' provisioner != the provisioner doing this watch
+ ensure the expected OB, Secret and ConfigMap are present
+ if all present:
+ update OBC status to "Bound"
+ sync the OB's status to match the OBC's status
Today we define some bucket attributes, eg. SSL, Versioned, as properties of the OBC, while other attributes, eg. Region, are defined in the storage class. Before we formalize which attributes are defined where, we should first decide if the library should include any bucket attributes.
Reasons to not define any bucket attributes:
Reasons to define some bucket properties:
If the decision is made to define bucket attributes in the OBC and storage class then we need to discuss which of these, if any, can be overridden by the OBC -- implying the SC contains the default value for this attribute.
When working on the NooBaa provisioner, it would have been helpful to have more technical documentation and examples regarding building a provisioner.
Unit tests have fallen behind while we were under the crunch to get something up and running.
Post-refactor, functions and methods have changed so much that we currently sit at 0% test coverage. This needs to be addressed sooner rather than later.
Measured using
go test -covermode=atomic -coverpkg $(go list github.com/yard-turkey/lib-bucket-provisioner/...) github.com/yard-turkey/lib-bucket-provisioner/...
E0416 14:30:26.636238 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io "screeley-provb-3": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
Full log:
./jeff-prov --kubeconfig=/var/run/kubernetes/admin.kubeconfig --master=https://localhost:6443 -alsologtostderr -v=2
I0416 14:30:06.738227 27193 aws-s3-provisioner.go:499] AWS S3 Provisioner - main
I0416 14:30:06.738429 27193 aws-s3-provisioner.go:500] flags: kubeconfig="/var/run/kubernetes/admin.kubeconfig"; masterURL="https://localhost:6443"
I0416 14:30:06.740993 27193 manager.go:81] objectbucket.io/provisioner-manager "level"=0 "msg"="constructing new Provisioner" "name"="aws-s3.io/bucket"
I0416 14:30:06.741456 27193 manager.go:92] objectbucket.io/provisioner-manager "level"=0 "msg"="generating controller manager"
I0416 14:30:06.781048 27193 manager.go:99] objectbucket.io/provisioner-manager "level"=0 "msg"="adding schemes to manager"
I0416 14:30:06.781209 27193 manager.go:104] objectbucket.io/provisioner-manager "level"=0 "msg"="constructing new ObjectBucketClaimReconciler"
I0416 14:30:06.781236 27193 reconiler.go:50] objectbucket.io/claim-reconciler "level"=0 "msg"="constructing new reconciler" "provisioner"="aws-s3.io/bucket"
I0416 14:30:06.781268 27193 reconiler.go:55] objectbucket.io/claim-reconciler "level"=1 "msg"="retry loop setting" "RetryBaseInterval"=3
I0416 14:30:06.781288 27193 reconiler.go:59] objectbucket.io/claim-reconciler "level"=1 "msg"="retry loop setting" "RetryTimeout"=30
I0416 14:30:06.781305 27193 manager.go:133] objectbucket.io/provisioner-manager "level"=0 "msg"="building controller manager"
I0416 14:30:06.783062 27193 :0] objectbucket.io/provisioner-manager/kubebuilder/controller "level"=0 "msg"="Starting EventSource" "controller"="objectbucketclaim-application" "source"={"Type":{"metadata":{"creationTimestamp":null},"spec":{"storageClassName":"","bucketName":"","ssl":false,"cannedBucketAcl":"","versioned":false,"additionalConfig":null,"ObjectBucketName":""},"status":{"Phase":""}}}
I0416 14:30:06.783160 27193 aws-s3-provisioner.go:517] main: running aws-s3.io/bucket provisioner...
I0416 14:30:06.783200 27193 manager.go:150] objectbucket.io/provisioner-manager "level"=0 "msg"="Starting manager" "provisioner"="aws-s3.io/bucket"
I0416 14:30:06.786099 27193 manager.go:114] objectbucket.io/provisioner-manager "level"=1 "msg"="event: Create() " "Kind"={"Group":"objectbucket.io","Version":"v1alpha1","Kind":"ObjectBucketClaim"} "Name"="screeley-provb-3"
I0416 14:30:06.883479 27193 :0] objectbucket.io/provisioner-manager/kubebuilder/controller "level"=0 "msg"="Starting Controller" "controller"="objectbucketclaim-application"
I0416 14:30:06.983670 27193 :0] objectbucket.io/provisioner-manager/kubebuilder/controller "level"=0 "msg"="Starting workers" "controller"="objectbucketclaim-application" "worker count"=1
I0416 14:30:06.983759 27193 reconiler.go:81] "level"=0 "msg"="new Reconcile iteration" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:06.983776 27193 helpers.go:60] "level"=0 "msg"="getting claim for key" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:06.983810 27193 helpers.go:21] "level"=0 "msg"="validating claim for provisioning" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:06.983827 27193 helpers.go:120] "level"=0 "msg"="getting storageClass for claim" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:06.983843 27193 helpers.go:127] "level"=0 "msg"="OBC defined class" "request"="s3-provisioner/screeley-provb-3" "name"="s3-buckets"
I0416 14:30:06.983858 27193 helpers.go:130] "level"=0 "msg"="getting storage class" "request"="s3-provisioner/screeley-provb-3" "name"="s3-buckets"
I0416 14:30:07.084186 27193 helpers.go:141] "level"=0 "msg"="successfully got class" "request"="s3-provisioner/screeley-provb-3" "name"=null
I0416 14:30:07.088365 27193 helpers.go:91] "level"=0 "msg"="determining bucket name" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:07.088388 27193 helpers.go:101] "level"=0 "msg"="bucket name generated" "request"="s3-provisioner/screeley-provb-3" "name"="screeley-provb-3"
I0416 14:30:07.088401 27193 helpers.go:21] "level"=0 "msg"="validating claim for provisioning" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:07.088421 27193 reconiler.go:189] "level"=0 "msg"="provisioning" "request"="s3-provisioner/screeley-provb-3" "bucket"="screeley-provb-3"
I0416 14:30:07.088434 27193 helpers.go:60] "level"=0 "msg"="getting claim for key" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:07.088465 27193 aws-s3-provisioner.go:233] initializing and setting CreateOrGrant services
I0416 14:30:07.088477 27193 util.go:34] getting storage class "s3-buckets"...
I0416 14:30:07.088698 27193 manager.go:118] objectbucket.io/provisioner-manager "level"=1 "msg"="event: Update (ignored)" "Kind"="/, Kind=" "Name"="screeley-provb-3"
I0416 14:30:07.091027 27193 iam.go:345] storage class flag createBucketUser's value, or absence of flag, indicates to create a new user
I0416 14:30:07.091042 27193 aws-s3-provisioner.go:215] Creating AWS session based on storageclass "s3-buckets"
I0416 14:30:07.091051 27193 util.go:64] getting secret "s3-provisioner/s3-bucket-owner"...
I0416 14:30:07.093978 27193 aws-s3-provisioner.go:202] Creating AWS session using credentials from storage class s3-buckets's secret
I0416 14:30:07.094035 27193 aws-s3-provisioner.go:221] Creating S3 service based on storageclass "s3-buckets"
I0416 14:30:07.094075 27193 aws-s3-provisioner.go:342] Creating bucket "screeley-provb-3"
I0416 14:30:08.411336 27193 aws-s3-provisioner.go:162] Bucket screeley-provb-3 successfully created
I0416 14:30:08.434997 27193 util.go:109] Generated user screeley-provb-3-X5gsu after 1 iterations
I0416 14:30:08.435020 27193 iam.go:49] creating user and policy for bucket "screeley-provb-3"
I0416 14:30:08.435029 27193 iam.go:303] creating IAM user "screeley-provb-3-X5gsu"
I0416 14:30:08.495822 27193 iam.go:327] successfully created IAM user "screeley-provb-3-X5gsu" with access keys
I0416 14:30:08.495856 27193 iam.go:150] createBucketPolicyDocument for bucket "screeley-provb-3" and ARN "arn:aws:s3:::screeley-provb-3"
I0416 14:30:08.560980 27193 iam.go:217] createUserPolicy "screeley-provb-3-X5gsu" successfully created
I0416 14:30:08.561005 27193 iam.go:241] attach policy "screeley-provb-3-X5gsu" to user
I0416 14:30:08.561012 27193 iam.go:223] getting ARN for policy "screeley-provb-3-X5gsu"
I0416 14:30:08.561020 27193 iam.go:259] creating new user "screeley-provb-3-X5gsu"
I0416 14:30:08.582382 27193 iam.go:272] created user "screeley-provb-3-X5gsu" and accountID "939345161466"
I0416 14:30:08.582403 27193 iam.go:234] successfully got PolicyARN "arn:aws:iam::939345161466:policy/screeley-provb-3-X5gsu" for AccountID 939345161466's Policy "screeley-provb-3-X5gsu"
I0416 14:30:08.625058 27193 iam.go:252] successfully attached policy "screeley-provb-3-X5gsu" to user "screeley-provb-3-X5gsu"
I0416 14:30:08.625083 27193 iam.go:87] successfully created user and policy for bucket "screeley-provb-3"
I0416 14:30:08.625135 27193 helpers.go:86] "level"=0 "msg"="setting OB name" "request"="s3-provisioner/screeley-provb-3" "name"=""
I0416 14:30:08.625155 27193 helpers.go:60] "level"=0 "msg"="getting claim for key" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:08.625195 27193 resourcehandlers.go:120] "level"=0 "msg"="creating ObjectBucket" "request"="s3-provisioner/screeley-provb-3" "name"="obc-s3-provisioner-screeley-provb-3"
I0416 14:30:08.625228 27193 resourcehandlers.go:45] "level"=0 "msg"="creating object until timeout" "request"="s3-provisioner/screeley-provb-3" "interval"=3000000000 "timeout"=30000000000
E0416 14:30:08.633512 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:11.636487 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:14.636467 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:17.636340 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:20.636376 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:23.636373 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:26.636238 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:29.636205 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:32.636338 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:35.636463 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:38.636270 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:38.637899 27193 resourcehandlers.go:285] "msg"="possibly intermittent, retrying" "error"="Operation cannot be fulfilled on objectbucketclaims.objectbucket.io \"screeley-provb-3\": the object has been modified; please apply your changes to the latest version and try again" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:38.637929 27193 reconiler.go:152] "msg"="cleaning up reconcile artifacts" "error"="error updating phase: timed out waiting for the condition" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:38.637956 27193 reconiler.go:154] "level"=0 "msg"="deleting bucket" "request"="s3-provisioner/screeley-provb-3" "name"="screeley-provb-3"
I0416 14:30:38.637967 27193 aws-s3-provisioner.go:395] Deleting bucket "screeley-provb-3" for OB "obc-s3-provisioner-screeley-provb-3"
I0416 14:30:38.637979 27193 util.go:34] getting storage class "s3-buckets"...
I0416 14:30:38.639315 27193 aws-s3-provisioner.go:215] Creating AWS session based on storageclass "s3-buckets"
I0416 14:30:38.639331 27193 util.go:64] getting secret "s3-provisioner/s3-bucket-owner"...
I0416 14:30:38.640574 27193 aws-s3-provisioner.go:202] Creating AWS session using credentials from storage class s3-buckets's secret
I0416 14:30:38.640617 27193 aws-s3-provisioner.go:221] Creating S3 service based on storageclass "s3-buckets"
I0416 14:30:38.640644 27193 iam.go:93] deleting user and policy for bucket "screeley-provb-3"
I0416 14:30:38.672963 27193 iam.go:107] successfully detached policy "arn:aws:iam::939345161466:policy/screeley-provb-3-X5gsu", user "screeley-provb-3-X5gsu"
I0416 14:30:38.720315 27193 iam.go:117] successfully deleted policy "arn:aws:iam::939345161466:policy/screeley-provb-3-X5gsu"
I0416 14:30:38.720340 27193 iam.go:279] getting access key for user "screeley-provb-3-X5gsu"
I0416 14:30:38.802054 27193 iam.go:129] successfully deleted access key for user "screeley-provb-3-X5gsu"
I0416 14:30:38.802078 27193 iam.go:133] Deleting User "screeley-provb-3-X5gsu"
I0416 14:30:38.830286 27193 iam.go:142] successfully deleted user and policy for bucket "screeley-provb-3"
I0416 14:30:38.830315 27193 aws-s3-provisioner.go:424] Deleting all objects in bucket "screeley-provb-3" (from OB "obc-s3-provisioner-screeley-provb-3")
I0416 14:30:39.140218 27193 aws-s3-provisioner.go:430] Deleting empty bucket "screeley-provb-3" from OB "obc-s3-provisioner-screeley-provb-3"
I0416 14:30:39.472895 27193 aws-s3-provisioner.go:437] Deleted bucket "screeley-provb-3" from OB "obc-s3-provisioner-screeley-provb-3"
I0416 14:30:39.472950 27193 resourcehandlers.go:219] "level"=0 "msg"="deleting ObjectBucket" "request"="s3-provisioner/screeley-provb-3" "name"="obc-s3-provisioner-screeley-provb-3"
I0416 14:30:39.472975 27193 helpers.go:170] "level"=0 "msg"="checking for finalizer" "request"="s3-provisioner/screeley-provb-3" "object"="obc-s3-provisioner-screeley-provb-3" "value"="objectbucket.io/finalizer"
I0416 14:30:39.472990 27193 helpers.go:173] "level"=0 "msg"="found finalizer in obj" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:39.473004 27193 helpers.go:182] "level"=0 "msg"="removing finalizer from object" "request"="s3-provisioner/screeley-provb-3" "name"="obc-s3-provisioner-screeley-provb-3"
I0416 14:30:39.473017 27193 helpers.go:191] "level"=0 "msg"="found finalizer, deleting and updating API" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:39.476087 27193 helpers.go:197] "level"=0 "msg"="finalizer deletion successful" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:39.479563 27193 resourcehandlers.go:186] "level"=0 "msg"="got nil secret, skipping" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:39.479586 27193 resourcehandlers.go:156] "level"=0 "msg"="got nil configMap pointer, skipping delete" "request"="s3-provisioner/screeley-provb-3"
E0416 14:30:39.479642 27193 :0] objectbucket.io/provisioner-manager/kubebuilder/controller "msg"="Reconciler error" "error"="error updating phase: timed out waiting for the condition" "controller"="objectbucketclaim-application" "request"={"Namespace":"s3-provisioner","Name":"screeley-provb-3"}
I0416 14:30:40.479896 27193 reconiler.go:81] "level"=0 "msg"="new Reconcile iteration" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:40.479927 27193 helpers.go:60] "level"=0 "msg"="getting claim for key" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:40.480215 27193 helpers.go:21] "level"=0 "msg"="validating claim for provisioning" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:40.480239 27193 helpers.go:120] "level"=0 "msg"="getting storageClass for claim" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:40.480265 27193 helpers.go:127] "level"=0 "msg"="OBC defined class" "request"="s3-provisioner/screeley-provb-3" "name"="s3-buckets"
I0416 14:30:40.480287 27193 helpers.go:130] "level"=0 "msg"="getting storage class" "request"="s3-provisioner/screeley-provb-3" "name"="s3-buckets"
I0416 14:30:40.480316 27193 helpers.go:141] "level"=0 "msg"="successfully got class" "request"="s3-provisioner/screeley-provb-3" "name"=null
I0416 14:30:40.484126 27193 helpers.go:91] "level"=0 "msg"="determining bucket name" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:40.484153 27193 helpers.go:101] "level"=0 "msg"="bucket name generated" "request"="s3-provisioner/screeley-provb-3" "name"="screeley-provb-3"
I0416 14:30:40.484167 27193 helpers.go:21] "level"=0 "msg"="validating claim for provisioning" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:40.484183 27193 reconiler.go:189] "level"=0 "msg"="provisioning" "request"="s3-provisioner/screeley-provb-3" "bucket"="screeley-provb-3"
I0416 14:30:40.484195 27193 helpers.go:60] "level"=0 "msg"="getting claim for key" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:40.484210 27193 aws-s3-provisioner.go:233] initializing and setting CreateOrGrant services
I0416 14:30:40.484219 27193 util.go:34] getting storage class "s3-buckets"...
I0416 14:30:40.485737 27193 iam.go:345] storage class flag createBucketUser's value, or absence of flag, indicates to create a new user
I0416 14:30:40.485752 27193 aws-s3-provisioner.go:215] Creating AWS session based on storageclass "s3-buckets"
I0416 14:30:40.485761 27193 util.go:64] getting secret "s3-provisioner/s3-bucket-owner"...
I0416 14:30:40.487057 27193 aws-s3-provisioner.go:202] Creating AWS session using credentials from storage class s3-buckets's secret
I0416 14:30:40.487147 27193 aws-s3-provisioner.go:221] Creating S3 service based on storageclass "s3-buckets"
I0416 14:30:40.487175 27193 aws-s3-provisioner.go:342] Creating bucket "screeley-provb-3"
I0416 14:30:41.229210 27193 aws-s3-provisioner.go:162] Bucket screeley-provb-3 successfully created
I0416 14:30:41.245367 27193 util.go:109] Generated user screeley-provb-3-QdoNY after 1 iterations
I0416 14:30:41.245386 27193 iam.go:49] creating user and policy for bucket "screeley-provb-3"
I0416 14:30:41.245394 27193 iam.go:303] creating IAM user "screeley-provb-3-QdoNY"
I0416 14:30:41.306068 27193 iam.go:327] successfully created IAM user "screeley-provb-3-QdoNY" with access keys
I0416 14:30:41.306091 27193 iam.go:150] createBucketPolicyDocument for bucket "screeley-provb-3" and ARN "arn:aws:s3:::screeley-provb-3"
I0416 14:30:41.360274 27193 iam.go:217] createUserPolicy "screeley-provb-3-QdoNY" successfully created
I0416 14:30:41.360299 27193 iam.go:241] attach policy "screeley-provb-3-QdoNY" to user
I0416 14:30:41.360306 27193 iam.go:223] getting ARN for policy "screeley-provb-3-QdoNY"
I0416 14:30:41.360312 27193 iam.go:259] creating new user "screeley-provb-3-QdoNY"
I0416 14:30:41.380312 27193 iam.go:272] created user "screeley-provb-3-QdoNY" and accountID "939345161466"
I0416 14:30:41.380333 27193 iam.go:234] successfully got PolicyARN "arn:aws:iam::939345161466:policy/screeley-provb-3-QdoNY" for AccountID 939345161466's Policy "screeley-provb-3-QdoNY"
I0416 14:30:41.420277 27193 iam.go:252] successfully attached policy "screeley-provb-3-QdoNY" to user "screeley-provb-3-QdoNY"
I0416 14:30:41.420301 27193 iam.go:87] successfully created user and policy for bucket "screeley-provb-3"
I0416 14:30:41.420335 27193 helpers.go:86] "level"=0 "msg"="setting OB name" "request"="s3-provisioner/screeley-provb-3" "name"=""
I0416 14:30:41.420353 27193 helpers.go:60] "level"=0 "msg"="getting claim for key" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:41.420383 27193 resourcehandlers.go:120] "level"=0 "msg"="creating ObjectBucket" "request"="s3-provisioner/screeley-provb-3" "name"="obc-s3-provisioner-screeley-provb-3"
I0416 14:30:41.420402 27193 resourcehandlers.go:45] "level"=0 "msg"="creating object until timeout" "request"="s3-provisioner/screeley-provb-3" "interval"=3000000000 "timeout"=30000000000
I0416 14:30:41.427987 27193 manager.go:118] objectbucket.io/provisioner-manager "level"=1 "msg"="event: Update (ignored)" "Kind"="/, Kind=" "Name"="screeley-provb-3"
I0416 14:30:41.428538 27193 resourcehandlers.go:133] "level"=0 "msg"="creating Secret" "request"="s3-provisioner/screeley-provb-3" "name"="screeley-provb-3" "namespace"="s3-provisioner"
I0416 14:30:41.428562 27193 resourcehandlers.go:45] "level"=0 "msg"="creating object until timeout" "request"="s3-provisioner/screeley-provb-3" "interval"=3000000000 "timeout"=30000000000
I0416 14:30:41.431599 27193 resourcehandlers.go:71] "level"=0 "msg"="defining new configMap" "request"="s3-provisioner/screeley-provb-3" "for claim"="s3-provisioner/screeley-provb-3"
I0416 14:30:41.431627 27193 resourcehandlers.go:146] "level"=0 "msg"="creating configMap" "request"="s3-provisioner/screeley-provb-3" "name"="screeley-provb-3" "namespace"="s3-provisioner"
I0416 14:30:41.431644 27193 resourcehandlers.go:45] "level"=0 "msg"="creating object until timeout" "request"="s3-provisioner/screeley-provb-3" "interval"=3000000000 "timeout"=30000000000
I0416 14:30:41.436236 27193 reconiler.go:246] "level"=0 "msg"="provisioning succeeded" "request"="s3-provisioner/screeley-provb-3"
I0416 14:30:41.436275 27193 controller.go:236] objectbucket.io/provisioner-manager/kubebuilder/controller "level"=1 "msg"="Successfully Reconciled" "controller"="objectbucketclaim-application" "request"={"Namespace":"s3-provisioner","Name":"screeley-provb-3"}
The controller eventually recovers and the k8s resources are created. However, there is a lot of overhead in this code path: AWS and k8s resources are created, deleted, and then re-created.
Even though we cannot add new reclaim policies to storage classes, we probably need to add a reclaimPolicyOverride to the StorageClass's Parameters map. This issue proposes a library-defined name for the override and some potential values. If the override is absent, the standard reclaimPolicy is used (with its own default if it too is missing).
Override Policy Key Name: reclaimPolicyOverride
Override Policy Values:
We can decide if we want to also support the existing reclaim policy values in the override, but my opinion now is no.
Note: "Erase" could be named "Recycle" but there are negative connotations with that name in the k8s community.
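A minimal sketch of how the lib might resolve the effective policy. The key name reclaimPolicyOverride comes from this proposal and "Erase" is one of the values discussed above; the helper name and fallback logic are hypothetical, not the lib's actual API:

```go
package main

import "fmt"

// reclaimPolicyOverrideKey is the library-defined Parameters key proposed above.
const reclaimPolicyOverrideKey = "reclaimPolicyOverride"

// effectiveReclaimPolicy is a hypothetical helper: it prefers the override from
// the StorageClass Parameters map, falls back to the standard reclaimPolicy,
// and finally to a default when both are unset.
func effectiveReclaimPolicy(params map[string]string, standard, defaultPolicy string) string {
	if v, ok := params[reclaimPolicyOverrideKey]; ok && v != "" {
		return v
	}
	if standard != "" {
		return standard
	}
	return defaultPolicy
}

func main() {
	params := map[string]string{reclaimPolicyOverrideKey: "Erase"}
	fmt.Println(effectiveReclaimPolicy(params, "Delete", "Retain")) // override wins
	fmt.Println(effectiveReclaimPolicy(nil, "Delete", "Retain"))    // falls back to standard
}
```

Keeping the override in Parameters (rather than in reclaimPolicy itself) avoids fighting the API server's validation of the standard field.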
The use case: when uninstalling a provisioner that is not running, we need a way to clean up all of its resources. Since these resources (OB, OBC, Secret, ConfigMap) are not confined to a single namespace, it would be best to add a label provisioner=<provisioner-name>
to all of them.
Should this be done in the lib or in each provisioner?
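If it lands in the lib, the change could be as small as stamping the label onto each object's metadata before creation. A sketch under that assumption; the helper name is hypothetical and the label key is the one proposed above:

```go
package main

import "fmt"

// provisionerLabelKey is the label key proposed in this issue; every resource
// the lib creates (OB, OBC, Secret, ConfigMap) would carry it.
const provisionerLabelKey = "provisioner"

// addProvisionerLabel returns the labels map with provisioner=<name> set,
// allocating the map when the object had no labels yet. The returned map
// would be assigned back to the object's ObjectMeta.Labels.
func addProvisionerLabel(labels map[string]string, provisionerName string) map[string]string {
	if labels == nil {
		labels = map[string]string{}
	}
	labels[provisionerLabelKey] = provisionerName
	return labels
}

func main() {
	labels := addProvisionerLabel(nil, "aws-s3.io/bucket")
	fmt.Println(labels)
}
```

Cleanup after an uninstall then becomes a label-selector delete, e.g. `kubectl delete ob,obc,secret,configmap -l provisioner=<provisioner-name> --all-namespaces`.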
E0926 13:38:26.647416 1 controller.go:286] "msg"="cleaning up reconcile artifacts" "error"="error provisioning bucket: Failed to create tier \"liran-05b3c247-e26d-49a0-821f-ef3c0482f6e0.5d8cbf52.0\" with error: Cannot read property 'hosts_pool_info' of undefined" "key"="ggg/liran"
I0926 13:38:26.647429 1 resourcehandlers.go:203] "level"=0 "msg"="got nil secret, skipping" "key"="ggg/liran"
I0926 13:38:26.647438 1 resourcehandlers.go:182] "level"=0 "msg"="got nil configmap, skipping" "key"="ggg/liran"
I0926 13:38:26.647445 1 resourcehandlers.go:223] "level"=0 "msg"="got nil obc, skipping" "key"="ggg/liran"
E0926 13:38:26.647466 1 controller.go:190] error syncing 'ggg/liran': error provisioning bucket: Failed to create tier "liran-05b3c247-e26d-49a0-821f-ef3c0482f6e0.5d8cbf52.0" with error: Cannot read property 'hosts_pool_info' of undefined, requeuing
I0926 13:39:07.612585 1 controller.go:203] "level"=0 "msg"="new Reconcile iteration" "key"="ggg/liran"
I0926 13:39:07.612704 1 helpers.go:80] "level"=0 "msg"="getting claim for key" "key"="ggg/liran"
I0926 13:39:07.616795 1 controller.go:219] "level"=0 "msg"="OBC deleted, proceeding with cleanup" "key"="ggg/liran"
I0926 13:39:07.616896 1 controller.go:546] "level"=0 "msg"="getting objectBucket for key" "key"="ggg/liran"
I0926 13:39:07.619812 1 helpers.go:106] "level"=0 "msg"="getting configMap for key" "key"="ggg/liran"
I0926 13:39:07.621375 1 helpers.go:119] "level"=0 "msg"="getting secret for key" "key"="ggg/liran"
E0926 13:39:07.623208 1 controller.go:222] "msg"="error cleaning up OBC" "error"="could not get all needed resources: error getting object bucket \"obc-ggg-liran\": objectbuckets.objectbucket.io \"obc-ggg-liran\" not found : error getting configmap \"ggg/liran\": configmaps \"liran\" not found : error getting secret \"ggg/liran\": secrets \"liran\" not found" "key"="ggg/liran" "name"="ggg/liran"
E0926 13:39:07.623285 1 controller.go:190] error syncing 'ggg/liran': could not get all needed resources: error getting object bucket "obc-ggg-liran": objectbuckets.objectbucket.io "obc-ggg-liran" not found : error getting configmap "ggg/liran": configmaps "liran" not found : error getting secret "ggg/liran": secrets "liran" not found, requeuing
From the original design doc:
spec:
  SSL: true | false        [6] # post phase-0, if at all
  cannedBucketACL:         [7] # post phase-0, if at all
  versioned: true | false  [8] # post phase-0, if at all
  ...
6. SSL defines whether the connection to the bucket requires SSL authentication.
7. cannedBucketACL selects one of the predefined bucket ACLs:
{"BucketCannedACLPrivate", "BucketCannedACLPublicRead", "BucketCannedACLPublicReadWrite", "BucketCannedACLAuthenticatedRead"}.
8. versioned determines if versioning is enabled.
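If these fields were ever implemented, they could be modeled in Go roughly as below. The struct and field names here are illustrative only, not the lib's actual types:

```go
package main

import "fmt"

// BucketOptions sketches the post-phase-0 spec fields from the design doc.
// Field numbers in comments refer to the footnotes above.
type BucketOptions struct {
	SSL             bool   // [6] connection to the bucket requires SSL
	CannedBucketACL string // [7] one of the predefined canned ACL names
	Versioned       bool   // [8] bucket versioning enabled
}

func main() {
	opts := BucketOptions{
		SSL:             true,
		CannedBucketACL: "BucketCannedACLPrivate",
		Versioned:       false,
	}
	fmt.Printf("%+v\n", opts)
}
```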
I think the most readable pattern for functions that accept k8s objects as arguments is to accept them by reference (to save memory and slightly improve performance). However, if the function modifies the object, it should explicitly return it. For example:
Not So Good:
func Foo(obc *v1alpha1.ObjectBucketClaim) error {
	obc.SetFinalizers(...) // mutates the caller's object, but nothing in the signature says so
	return nil
}
Better:
func Foo(obc *v1alpha1.ObjectBucketClaim) (*v1alpha1.ObjectBucketClaim, error) {
	obc.SetFinalizers(...) // the mutation is made explicit by returning the modified object
	return obc, nil
}
We have not checked the lib code for this pattern but recommend that the next owner do so.
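A self-contained illustration of the "Better" shape at a call site; the claim type here is a stand-in for the real v1alpha1 struct, and the function name is hypothetical:

```go
package main

import "fmt"

// claim stands in for v1alpha1.ObjectBucketClaim.
type claim struct{ finalizers []string }

func (c *claim) SetFinalizers(f []string) { c.finalizers = f }

// addFinalizer follows the "Better" pattern: it mutates via the pointer but
// also returns the object, so the mutation is visible in the caller's code.
func addFinalizer(c *claim, f string) (*claim, error) {
	c.SetFinalizers(append(c.finalizers, f))
	return c, nil
}

func main() {
	c := &claim{}
	// The reassignment makes it obvious at the call site that c was modified.
	c, err := addFinalizer(c, "objectbucket.io/finalizer")
	if err != nil {
		panic(err)
	}
	fmt.Println(c.finalizers)
}
```

Either shape mutates the same underlying object; the second simply advertises that fact in the signature, which is the readability point being made above.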