
oci-flexvolume-driver's Introduction

⚠️ oci-flexvolume-driver is now being maintained at https://github.com/oracle/oci-cloud-controller-manager/tree/master/pkg/flexvolume. This repository will be archived soon.


OCI Flexvolume Driver


This project implements a flexvolume driver for Kubernetes clusters running on Oracle Cloud Infrastructure (OCI). It enables mounting of OCI block storage volumes to Kubernetes Pods via the Flexvolume plugin interface.

We recommend you use this driver in conjunction with the OCI Volume Provisioner. See the oci-volume-provisioner for more information.

Install / Setup

We publish the OCI flexvolume driver as a single binary that needs to be installed on every node in your Kubernetes cluster.

Kubernetes DaemonSet Installer

The recommended way to install the driver is through the DaemonSet installer mechanism. This will create two daemonsets, one specifically for master nodes, allowing configuration via a Kubernetes Secret, and one for worker nodes.

kubectl apply -f https://github.com/oracle/oci-flexvolume-driver/releases/download/${flexvolume_driver_version}/rbac.yaml

kubectl apply -f https://github.com/oracle/oci-flexvolume-driver/releases/download/${flexvolume_driver_version}/oci-flexvolume-driver.yaml

You'll still need to add the config file as a Kubernetes Secret.

Configuration

The driver requires API credentials for an OCI account with the ability to attach and detach OCI block storage volumes to/from the appropriate nodes in the cluster.

These credentials should be provided via a YAML file present on master nodes in the cluster at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/oracle~oci/config.yaml in the following format:

---
auth:
  tenancy: ocid1.tenancy.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  compartment: ocid1.compartment.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  user: ocid1.user.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
  region: us-phoenix-1
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    <snip>
    -----END RSA PRIVATE KEY-----
  passphrase: my secret passphrase
  fingerprint: aa:bb:cc:dd:ee:ff:gg:hh:ii:jj:kk:ll:mm:nn:oo:pp
  vcn: ocid1.vcn.oc1.phx.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

If "region" and/or "compartment" are not specified in the config file, they will be retrieved from the host's OCI metadata service.
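As a sketch of that fallback, the values can be read from the instance metadata endpoint on the node. The endpoint URL and the abridged JSON below are assumptions for illustration; the real response contains many more fields.

```shell
# On an OCI node the driver's fallback values come from the metadata service:
#   METADATA=$(curl -s http://169.254.169.254/opc/v1/instance/)
# Offline sample of the (abridged, assumed) JSON shape for illustration:
METADATA='{"region": "phx", "compartmentId": "ocid1.compartment.oc1..example"}'

# Extract the region without requiring jq:
REGION=$(echo "$METADATA" | sed -n 's/.*"region" *: *"\([^"]*\)".*/\1/p')
echo "$REGION"
```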

Submit configuration as a Kubernetes secret

The configuration file above can be submitted as a Kubernetes Secret onto the master nodes.

kubectl create secret generic oci-flexvolume-driver \
    -n kube-system \
    --from-file=config.yaml=config.yaml

Once the Secret is set and the daemonsets deployed, the configuration file will be placed onto the master nodes.

Using instance principals

To authenticate using instance principals the following policies must first be applied to the dynamic group of instances that intend to use the flexvolume driver:

"Allow group id ${oci_identity_group.flexvolume_driver_group.id} to read vnic-attachments in compartment id ${var.compartment_ocid}",
"Allow group id ${oci_identity_group.flexvolume_driver_group.id} to read vnics in compartment id ${var.compartment_ocid}",
"Allow group id ${oci_identity_group.flexvolume_driver_group.id} to read instances in compartment id ${var.compartment_ocid}",
"Allow group id ${oci_identity_group.flexvolume_driver_group.id} to read subnets in compartment id ${var.compartment_ocid}",
"Allow group id ${oci_identity_group.flexvolume_driver_group.id} to use volumes in compartment id ${var.compartment_ocid}",
"Allow group id ${oci_identity_group.flexvolume_driver_group.id} to use instances in compartment id ${var.compartment_ocid}",
"Allow group id ${oci_identity_group.flexvolume_driver_group.id} to manage volume-attachments in compartment id ${var.compartment_ocid}",

When using instance principals, the configuration file reduces to the following:

---
useInstancePrincipals: true

Driver Kubernetes API Access

The driver needs to get node information from the Kubernetes API server. A kubeconfig file with appropriate permissions (RBAC: nodes/get) needs to be provided in the same manner as the OCI auth config file above.

kubectl create secret generic oci-flexvolume-driver-kubeconfig \
    -n kube-system \
    --from-file=kubeconfig=kubeconfig

Once the Secret is set and the DaemonSet deployed, the kubeconfig file will be placed onto the master nodes.

Extra configuration values

You can set these in the environment to override the default values.

  • OCI_FLEXD_DRIVER_LOG_DIR - Directory where the log file is written (Default: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/oracle~oci)
  • OCI_FLEXD_DRIVER_DIRECTORY - Directory where the driver binary lives (Default: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/oracle~oci)
  • OCI_FLEXD_CONFIG_DIRECTORY - Directory where the driver configuration lives (Default: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/oracle~oci)
  • OCI_FLEXD_KUBECONFIG_PATH - An override to allow the fully qualified path of the kubeconfig resource file to be specified. This takes precedence over other configuration.
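For example, these overrides can be exported in the driver's environment before it runs. The paths below are illustrative, not required values:

```shell
# Illustrative overrides (hypothetical paths) for the driver's environment.
export OCI_FLEXD_DRIVER_LOG_DIR="/var/log/oci-flexvolume-driver"
export OCI_FLEXD_CONFIG_DIRECTORY="/etc/oci-flexvolume-driver"
export OCI_FLEXD_KUBECONFIG_PATH="/etc/oci-flexvolume-driver/kubeconfig"
echo "$OCI_FLEXD_CONFIG_DIRECTORY"
```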

OCI Policies

You must ensure the user (or group) associated with the OCI credentials provided has the following level of access. See policies for more information.

"Allow group id GROUP to read vnic-attachments in compartment id COMPARTMENT",
"Allow group id GROUP to read vnics in compartment id COMPARTMENT",
"Allow group id GROUP to read instances in compartment id COMPARTMENT",
"Allow group id GROUP to read subnets in compartment id COMPARTMENT",
"Allow group id GROUP to use volumes in compartment id COMPARTMENT",
"Allow group id GROUP to use instances in compartment id COMPARTMENT",
"Allow group id GROUP to manage volume-attachments in compartment id COMPARTMENT"

Tutorial

This guide will walk you through creating a Pod with persistent storage. It assumes that you have already installed the flexvolume driver in your cluster.

See example/nginx.yaml for a finished Kubernetes manifest that ties all these concepts together.

  1. Create a block storage volume. This can be done using the oci CLI as follows:
$ oci bv volume create \
    --availability-domain="aaaa:PHX-AD-1" \
    --compartment-id "ocid1.compartment.oc1..aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
  2. Add a volume to your pod.yml in the format below, named with the last section of your volume's OCID (see limitations). E.g. a volume with the OCID
ocid1.volume.oc1.phx.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

would be named aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa in the pod.yml as shown below.

volumes:
  - name: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
    flexVolume:
      driver: "oracle/oci"
      fsType: "ext4"
  3. Add volume mount(s) in the appropriate container(s) in your pod.yml as follows:
volumeMounts:
  - name: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
    mountPath: /usr/share/nginx/html

(Where "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" is the last '.' separated section of the volume OCID.)
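Extracting that final section can be done with plain shell parameter expansion, for example:

```shell
# Derive the flexvolume name from a block volume OCID by taking everything
# after the final '.' (pure shell parameter expansion, no external tools).
OCID="ocid1.volume.oc1.phx.aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
VOLUME_NAME="${OCID##*.}"
echo "$VOLUME_NAME"
```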

Fixing a Pod to a Node

It's important to note that a block volume can only be attached to a node that runs in the same availability domain (AD). To get around this, you can use a nodeSelector to ensure that a Pod is scheduled on a particular node.

The following example shows how to do this.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
      mountPath: /usr/share/nginx/html
  nodeSelector:
    node.info/availability.domain: 'UpwH-US-ASHBURN-AD-1'
  volumes:
  - name: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    flexVolume:
      driver: "oracle/oci"
      fsType: "ext4"

Debugging

The flexvolume driver writes logs to /usr/libexec/kubernetes/kubelet-plugins/volume/exec/oracle~oci/oci_flexvolume_driver.log by default.

Assumptions

  • If a flexvolume is specified for a Pod, it will only work with a single replica (if there is more than one replica, they will all have to be scheduled on the same Kubernetes node). This is because a volume can only be attached to one instance at any one time. Note: this constraint is shared with both the Amazon and Google persistent volume implementations.

  • If nodes in the cluster span availability domains, you must make sure your Pods are scheduled in the correct availability domain. This can be achieved using label selectors with the zone/region.

    Using the oci-volume-provisioner makes this much easier.

  • For all nodes in the cluster, the instance display name in the OCI API must match the instance hostname, start with the VNIC hostname label, or match the public IP. This relies on the requirement that the node name be resolvable.

Limitations

Due to kubernetes/kubernetes#44737 ("Flex volumes which implement getvolumename API are getting unmounted during run time") we cannot implement getvolumename. From the issue:

Detach call uses volume name, so the plugin detach has to work with PV Name

This means that the Persistent Volume (PV) name in the pod.yml must be the last part of the block volume OCID ('.' separated). Otherwise, we would have no way of determining which volume to detach from which worker node. Even if we were to store state at the time of volume attachment PV names would have to be unique across the cluster which is an unreasonable constraint.

The full OCID cannot be used because the PV name must be shorter than 63 characters and cannot contain '.' characters. To reconstruct the OCID we use the region of the master on which Detach() is executed, which blocks support for cross-region clusters.
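A minimal sketch of that reconstruction, assuming the OCID template follows the examples in this document (the driver's actual template may differ):

```shell
# Rebuild a volume OCID from the PV name plus the region key of the master
# running Detach(). The "ocid1.volume.oc1.<region>." prefix is an assumption
# based on the OCID examples in this document.
PV_NAME="aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
REGION_KEY="phx"
VOLUME_OCID="ocid1.volume.oc1.${REGION_KEY}.${PV_NAME}"
echo "$VOLUME_OCID"
```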

Support

Please check out our documentation. If you find a bug, please raise an issue.

Contributing

oci-flexvolume-driver is an open source project. See CONTRIBUTING for details.

Oracle gratefully acknowledges the contributions to this project that have been made by the community.

License

Copyright (c) 2017, Oracle and/or its affiliates. All rights reserved.

oci-flexvolume-driver is licensed under the Apache License 2.0.

See LICENSE for more details.


oci-flexvolume-driver's Issues

System test cleanup issues

Currently, if the system test gets cancelled part way through, the cluster can end up in a state where one or more of the nodes are cordoned off. We need to ensure either that we clean up properly or that, when we start a new test, the cluster is in a good state. Initially, this might be as simple as the following:

  • On startup, check that all nodes are schedulable (i.e. not cordoned), and uncordon any that are not.

Integration test cleanup issues

If the integration test gets cancelled part way through, the OCI resources (i.e. instance/volume) fail to get cleaned up. Not sure what we can do about this though...

Make all should run golint/fmt/etc (+ clean up existing errors)

It would be nice if the Make 'all' target was updated to be:
all: gofmt golint govet test build manifests build-integration-tests
instead of:
all: clean test build manifests build-integration-tests

However, currently golint produces a load of errors, so these will need fixing/ignoring first:

pkg/mount/mount.go:17:1: package comment should be of the form "Package mount ..."
pkg/mount/mount.go:34:2: exported const MountsInGlobalPDPath should have comment (or a comment on this block) or be unexported
pkg/mount/mount.go:37:6: exported type Interface should have comment or be unexported
pkg/mount/mount.go:64:1: comment on exported type MountPoint should be of the form "MountPoint ..." (with optional leading article)
pkg/mount/mount.go:65:6: type name will be used as mount.MountPoint by other packages, and that stutters; consider calling this Point
pkg/mount/mount.go:142:1: comment on exported function GetDeviceNameFromMount should be of the form "GetDeviceNameFromMount ..."
pkg/mount/mount_unsupported.go:21:6: exported type Mounter should have comment or be unexported
pkg/mount/mount_unsupported.go:25:1: exported method Mounter.Mount should have comment or be unexported
pkg/mount/mount_unsupported.go:29:1: exported method Mounter.Unmount should have comment or be unexported
pkg/mount/mount_unsupported.go:33:1: exported method Mounter.List should have comment or be unexported
pkg/mount/mount_unsupported.go:37:1: exported method Mounter.IsLikelyNotMountPoint should have comment or be unexported
pkg/mount/mount_unsupported.go:41:1: exported method Mounter.GetDeviceNameFromMount should have comment or be unexported
pkg/mount/mount_unsupported.go:45:1: exported method Mounter.DeviceOpened should have comment or be unexported
pkg/mount/mount_unsupported.go:49:1: exported method Mounter.PathIsDevice should have comment or be unexported
pkg/mount/mount_unsupported.go:61:1: exported function IsNotMountPoint should have comment or be unexported
pkg/oci/client/oci.go:47:23: interface method parameter volumeId should be volumeID
pkg/oci/client/oci.go:51:24: interface method parameter volumeAttachmentId should be volumeAttachmentID
pkg/oci/client/oci.go:59:15: interface method parameter instanceId should be instanceID
pkg/oci/client/oci.go:59:27: interface method parameter volumeId should be volumeID
pkg/oci/client/oci.go:63:15: interface method parameter volumeAttachmentId should be volumeAttachmentID
pkg/oci/client/oci.go:67:24: interface method parameter volumeAttachmentId should be volumeAttachmentID
pkg/oci/client/oci.go:124:40: method parameter volumeAttachmentId should be volumeAttachmentID
pkg/oci/client/oci.go:154:39: method parameter volumeId should be volumeID
pkg/oci/client/oci.go:386:31: method parameter instanceId should be instanceID
pkg/oci/client/oci.go:386:43: method parameter volumeId should be volumeID
pkg/oci/client/oci.go:405:31: method parameter volumeAttachmentId should be volumeAttachmentID
pkg/oci/client/oci.go:423:40: method parameter volumeAttachmentId should be volumeAttachmentID
pkg/oci/client/cache/ocicache.go:26:6: exported type OCICache should have comment or be unexported

OCI regions

The driver uses the region key (e.g. phx) when dynamically creating OCIDs. Unfortunately, there are inconsistencies across regions. For example:

A volume in PHX uses the region key

“ocid1.volume.oc1.phx.abyhqljrsvvjw3qvfg52n3dmhh7yt3bo6zuprwmvefwx3n5xyhsiyprs6gpa”

But in Frankfurt it doesn't use the key; it uses the region itself:

“ocid1.volume.oc1.eu-frankfurt-1.abtheljsch5y7wfnk5jyadulsjnbpl5t7afrpbneh5rdh4gg22hlf4u3asra”

Document the running of the test image

The test image we deliver is useful to users of the volume provisioner, i.e. you can run it in a cluster to check that the deployed provisioner is operational. We need to document how to do this.

Release builds need to fail if a git tag already exists

It is possible when doing a release that we overwrite an existing release by re-pushing an image.

To avoid this the release pipeline should fail if a git tag already exists.

if git rev-parse "$VERSION" >/dev/null 2>&1; then
    echo "Tag $VERSION already exists. Doing nothing."
    exit 1
fi

Another option

- script:
  name: Ensure version is unique
  code: |
    if curl -s https://api.github.com/repos/oracle/oci-cloud-controller-manager/git/refs/tags | grep "tags/$VERSION"; then exit 1; fi

System tests do not install with ansible role

Currently the system tests install the driver by shelling out to scp from Python. The documented installation method, however, uses the provided Ansible role. For the system tests to be a truly end-to-end test, we should install the driver using the Ansible role.

Deploy as DaemonSet

We should deploy the oci flexvolume driver as a daemon set so that users don't have to manually copy the flex binary on to each node

Allow specifying a different directory for the configuration

Currently the configuration has to live in the same directory as the binary, which means we can't mount in a kubernetes secret to the same directory.

This will enable self-hosted deployments to use a side-car + secret with the kube controller manager.

Daemonset should accept secret for config

The daemonset should require that a secret has been created.

This should be like the volume config (Or maybe the same secret) and mounted just on the master.

This will require creating two daemonsets: one which runs on the master (and copies the config secret to disk) and one that runs on the nodes (and doesn't copy the secret).

Pass $KUBECONFIG to system tests and remove SSH usage

Prior to merging #11, SCP/SSHing to the master node in the system tests to run the test made sense, as we were provisioning the cluster in that fashion.

Now that we are provisioning the cluster with Ansible the tests could be considerably simplified by passing $KUBECONFIG in the Wercker environment and running the kubectl commands from the pipeline rather than on the master node.

Rotate keys

Currently both the system and the integration tests use an API signing key that is not associated with an account scoped to the kubernetes-test compartment (AFAIK).

We should also rotate the instance (ssh) key as well given that they are/have been associated with instances deployed into the bristol-cloud compartment.

This is a precautionary step. There is no reason to suspect we have leaked any credentials but taking this precaution prior to OSS release seems expedient.

This should be done after #14 has been implemented.

Skip validation when using instance principals

We should skip validation of AuthConfig, as the credentials do not get overridden when instance principals is set to true, and currently we hard fail, which seems unnecessary in this scenario. This could be replaced by simply logging a message to make the user aware that AuthConfig credentials are redundant when instance principals are used.

Driver doesn't handle vnics with no public IP

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x692c55]

goroutine 1 [running]:
github.com/oracle/oci-flexvolume-driver/pkg/oci/client.(*client).findInstanceByNodeNameIsVnic(0xc4201725a0, 0xc42016c840, 0x7fffe3a9a775, 0x33, 0x0, 0xc4203e00c0, 0x51)
	/home/gbushell/src/github.com/oracle/oci-flexvolume-driver/pkg/oci/client/oci.go:260 +0x305
github.com/oracle/oci-flexvolume-driver/pkg/oci/client.(*client).GetInstanceByNodeName(0xc4201725a0, 0x7fffe3a9a775, 0x33, 0x0, 0x0, 0x0)
	/home/gbushell/src/github.com/oracle/oci-flexvolume-driver/pkg/oci/client/oci.go:362 +0x2e7
github.com/oracle/oci-flexvolume-driver/pkg/oci/driver.OCIFlexvolumeDriver.Attach(0xc420087d70, 0x7fffe3a9a775, 0x33, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/home/gbushell/src/github.com/oracle/oci-flexvolume-driver/pkg/oci/driver/driver.go:94 +0x17c
github.com/oracle/oci-flexvolume-driver/pkg/oci/driver.(*OCIFlexvolumeDriver).Attach(0x91b808, 0xc420087d70, 0x7fffe3a9a775, 0x33, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	<autogenerated>:1 +0xa6
github.com/oracle/oci-flexvolume-driver/pkg/flexvolume.ExecDriver(0x8cab00, 0x91b808, 0xc420080000, 0x4, 0x4)
	/home/gbushell/src/github.com/oracle/oci-flexvolume-driver/pkg/flexvolume/flexvolume.go:178 +0x85d
main.main()
	/home/gbushell/src/github.com/oracle/oci-flexvolume-driver/cmd/oci/main.go:50 +0x226

Daemonset not included in releases

The documentation states that the DaemonSet should be installed with:

kubectl apply -f https://github.com/oracle/oci-flexvolume-driver/releases/download/${flexvolume_driver_version}/oci-flexvolume-driver.yaml

However the oci-flexvolume-driver.yaml file doesn't exist in any release. E.g.:

error: unable to read URL "https://github.com/oracle/oci-flexvolume-driver/releases/download/0.6.2/oci-flexvolume-driver.yaml", server reported 404 Not Found, status code=404

Node name lookup scalability issues / rate limiting

Node name to instance OCID resolution is currently implemented by:

  • First checking whether or not the given node name matches the display name of a running instance (1 API call)
  • If it does not, call ListVnicAttachments and page through the results (1 API request per page)
  • For each VnicAttachment in the ATTACHED state we GET the associated Vnic (1 API request per ATTACHED VnicAttachment in compartment)
  • Check the node name against the VNIC's hostnameLabel and publicIP

The main fault of this algorithm is that the number of HTTP requests increases linearly with the number of instances in the compartment (which is strictly equal to or greater than the number of nodes in the cluster). This (understandably) in turn triggers rate limiting when the number of instances is large enough.

Notes:

  • A similar scalability problem was resolved in the CCM by using a NodeLister to cache the Nodes in the cluster and then using the Node providerID field.
  • The calls that utilise this node name lookup are Attach, Detach, and IsAttached.
  • These calls are made from the kube-controller-manager.
  • We don't cache GetVnic() calls so every time we hit the rate limit we start over in an infinite loop of rate limiting.

Ansible is out of date

Terraform for OCI recently moved to OEL 7.4. Ansible contains superfluous information about Ubuntu (python3 etc)

Don't use VCN for Finding Nodes

Today

instance, err := c.GetInstanceByNodeName(nodeName)
GetInstanceByNodeName relies on using the VCN details to look up from IP -> instance OCID.

However, customers can put resources in arbitrary compartments, and all LIST operations require a compartment. This makes it difficult to know how many compartments may be in scope; in practice, customers are split across at least two compartments, one for compute resources and one for networking resources.

Instead of using the convention of IP to look up instances, we should use node.spec.providerId, as storage operations should only require the instance ID and volume ID to create the appropriate attachment.

RE: #18

Push tags to GH releases

When a new tag is created in Github, the wercker build should build and push a binary release to GH Releases.

  1. Separate wercker pipeline to manually create the tag and upload to github releases
  2. Wercker build runs on every tag

FlexVolume driver for OCI Classic?

Hi,

Do we have anything similar for OCI Classic on the roadmap? Or do you suggest using something else for PersistentVolumes when running a k8s environment on OCI Classic?

Any help will be much appreciated.

Auth configuration format

In order to unify the authentication formats across OCI K8s projects, we should change the auth format in the driver to be consistent with the CCM and volume provisioner.

Example configuration

auth:
  tenancy: ocid1.tenancy.oc1..aaaaaaaatyn7scrtwtq
  user: ocid1.user.oc1..aaaaaaaao235lbcxvdrrqlrp
  key: |
    -----BEGIN RSA PRIVATE KEY-----
    <snip>
    -----END RSA PRIVATE KEY-----
  fingerprint: 4d:f5:ff:0e:a9:10:e8:5a:d3:52:6a:f8:1e:99:a3:47
  region: us-phoenix-1

use instance id to fetch the instance

The flexvolume driver has the luxury of running on every instance in a cluster, which means it has access to the local metadata server. Using the instance ID means we can avoid unnecessary calls by node name (which could be wrong if using the FQDN of the instance) or the public IP/name approach used by OKE.

We should add support for using this approach instead.

System test lockfile not getting deleted when pipeline cancelled

I'm guessing that this just kills the docker container, so the atexit hook in the system test doesn't get called for the cleanup.

The only way I can think of sorting this out is in the case where a lockfile exists, the test will need to check if there is a system test pipeline actually running, and if not, clean the file up before starting.

Support for using kubernetes secrets

Currently, this driver uses attach/detach; however, that's not compatible with using kubernetes secrets to pass in the required secrets to the flexvolume driver.

// Mount is unimplemented as we use the --enable-controller-attach-detach flow

As you can read here, the driver needs to use mount instead.

This would allow users to do

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: oci
provisioner: oracle.com/oci
parameters:
  secretName: flex-volume-driver
  secretNamespace: kube-system

Combined with daemonset deployments, you wouldn't need to run anything on the Kubernetes hosts, which is ideal.

OCI API credentials required on nodes

Currently OCI API credentials must be present on both the control plane and all nodes in the cluster. Ideally they would only be required in the control plane.

  • Update OCIFlexvolumeDriver.Init() to only require credentials be present on the control plane.
  • Update OCIFlexvolumeDriver.IsAttached() to determine whether it is being called from the Controller manager or the Kubelet and only use the OCI API in the first case.
  • Update OCIFlexvolumeDriver.Attach() to block waiting for the VolumeAttachment to reach the ATTACHED state and pass the templated device path.
  • Update OCIFlexvolumeDriver.WaitForAttach() to query the device path rather than the OCI API.

Authconfig structure is overloaded

To coincide with the issue raised against the volume provisioner: some of the credentials do not make sense under AuthConfig and should perhaps be moved to the higher-level Config.

Tighten RBAC rules for the test image runner

We validate the test image as part of our build pipeline. However, currently we have a very permissive set of RBAC rules for running the test inside of the cluster (see: test/system/run-test-image.yaml.template). Would be nice to tighten these up so they only allow the operations that the test requires.

Instance principal authentication doesn't work

The configuration code sets a default for the API region, but the configuration validation requires that region not be set when instance principals are enabled. This causes the flexvolume driver to not work.

[oci-flex-volume-driver-bhwcf] 3820 2018/07/09 17:14:02 Command result: {"status":"Failure","message":"auth.region: Forbidden: cannot be used when useInstancePrincipals is enabled"}

Update documentation

  • Consolidate docs/ into README.md
  • Document install via downloading latest release
  • Developer documentation?
