ocm-api's Issues

updating shares

There is no way of updating shares — is this intentional?

Private data: endpoints for operations

  1. There are different endpoints for GET, POST and DELETE, but the method itself should be enough to indicate the intended operation on the private data endpoint:

    • GET /privatedata/{storename}/{key} instead of GET /privatedata/getattribute/{storename}/{key}
    • POST /privatedata/{storename}/{key} instead of GET /privatedata/setattribute/{storename}/{key}
    • DELETE /privatedata/{storename}/{key} instead of GET /privatedata/deleteattribute/{storename}/{key}
  2. Should there also be GET /privatedata and GET /privatedata/{storename} endpoints to explore what's in there?

Typos here and there

  • The description for updating a share has typos (it says "get information about a share").
  • The data attributes for "get group memberships" and "get members of groups" are wrong; it should be vice versa.

Provide more specific media type

The documentation provides a clear description of how responses are structured:

A OCS response MUST consist of the following elements:

ocs: Array that contains the whole response
meta: Array that contains meta information
status: The status of the response, either “ok” or “fail”. MUST be “ok” if statuscode is set to 200, “fail” otherwise.
statuscode: The OCS status code of the response. 200 indicates a successful response.
message: An optional message that MAY contain a status message, such as a error message.
data: Array that contains the actual response, content of the array depends completely on the endpoint.

Describing the global response format is exactly what media types are meant for. Media types are part of the contract between server and client and provide instructions on how the data should be handled, e.g. its format or the way certain information is embodied or embedded.

There are lots of existing media types that provide similar structures:

http://stateless.co/hal_specification.html (application/hal+json / application/hal+xml)
http://jsonapi.org/format/ (application/vnd.api+json)

If no standard media type suffices, you could provide a custom media type, for example:

application/vnd.ocm.v1+json
application/vnd.ocm.v1+xml

This gives providers some extra advantages:

  • You can serve your own preferred media type, next to the OCM media type (using content negotiation). This makes the provider less dependent on OCM.
  • OCM can change representations across media type versions without affecting provider's API versions.
  • The media type can describe hypermedia controls (also called HATEOAS, which we will describe in a separate issue).

Custom media types could inherit from a more generic media type, like HAL. You can even register your vendor media type at IANA, the central registry for media types.
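
As an illustration, a client could then negotiate the representation it wants via the Accept header (the path, identifier and exchange below are purely hypothetical):

GET /cloud/shares/4518
Accept: application/vnd.ocm.v1+json

HTTP/2.0 200 OK
Content-Type: application/vnd.ocm.v1+json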

Federated sharing: missing GET operation?

It's not possible to get information about federated shares. I can imagine that sending as well as receiving clients would like to see (a paginated list of) their pending, accepted and rejected share offers:

  • GET /cloud/shares
  • GET /cloud/shares/{remoteId}
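
To illustrate, a response of the list endpoint could look roughly like this (field names, filtering and pagination parameters are purely illustrative and not part of the current spec):

GET /cloud/shares?status=pending&page=1

{
    "page": 1,
    "total": 2,
    "shares": [
        { "remoteId": 4518, "status": "pending" },
        { "remoteId": 4519, "status": "pending" }
    ]
}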

Are notifications allowed to have side-effects?

A little philosophical question I ran into while writing the test suite:

When Alice shares a resource with Bob, she cannot share it again. If she tries to share it a second time, her GUI will warn her that 'Bob already has access'. But if Bob sent back a notification like SHARE_DECLINED or SHARE_REMOVED, then in practice that will usually have the side-effect that Alice can share the resource with Bob again.

This suggests that either sharing with someone should be idempotent, or the notifications and their side-effects should be standardised.

Federated sharing: uniqueness of remoteId

The remoteId parameter is determined by the sending instance. However, how can uniqueness on the receiving side be guaranteed if multiple sending instances provide the same remoteId? To solve this, the remoteId should be generated by the receiving instance instead and be accessible via a GET operation on /cloud/shares and upon share offer creation.

Server alias

In order to improve the usability, it might be useful to provide a way to set aliases for a service, allowing users to share using that alias instead of the full url of the service.

Imagine the case of CERNBox: the service is deployed at cernbox.cern.ch, and this should be the endpoint used for auto discovery/getting information about the service (we're not planning to put this information in the top domain, which is not controlled by us). In the current model, a user would have to put user@cernbox.cern.ch to share something, which is not intuitive. If we set "cern.ch" as an alias, then it would be possible to share with user@cern.ch (which is generally the way users know each other), and the client would contact cernbox.cern.ch.

All of this is assuming that the services are configured manually, since - as I understood - you, as a sysadmin, need to approve it first.

Resending cookies should be unnecessary

The current documentation states:

To work together with load-balanced environments consumers SHOULD resend any cookies as defined in RFC 6265. As stated in the “Authentication” section any Basic Auth authentication header MUST be resend as the session referenced by the cookie MAY expire.

It shall be noted that OCS endpoints MUST behave properly regardless whether cookies are resent or not.

The above statement is probably intended to enable the use of sticky sessions in a load-balanced environment. In a RESTful architecture, all requests between client and server are stateless. Stateless requests contain all the information necessary for the server to understand the meaning of the request, including authentication headers. Because each request is self-contained and does not depend on any previous interaction, the server (or any intermediate layer) can treat each request independently and should therefore not store session state. This is why sticky sessions are meaningless and should be avoided. Sticky sessions reduce scalability because requests can be spread less evenly between server nodes. They also make failover scenarios more complex.
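
For illustration, a stateless request carries its credentials on every call and does not rely on a session cookie (the path and credentials below are just examples):

GET /cloud/shares
Authorization: Basic YWxpY2U6c2VjcmV0
Accept: application/json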

Existing standards

There are already existing standards that we could use or base this work on:

For using WebDAV as synchronisation protocol there are already documents here:
https://github.com/labkode/Internet-Storage-Sync/blob/master/applicability-of-webdav.md

Also, it's worth looking into this for the pros/cons of WebDAV vs REST for syncing:

subfiles parameter when getting shares

Listing shares with the subfiles parameter can be very costly.
Is it critical to have that functionality, or is it just a nice-to-have feature to show a "shared" icon on top-level folders of the user directory when some children have been shared?

To be discussed on follow-up call.

Deployment: preview of changes in API in a web

A PR that submits changes should have a preview URL to visualize the proposed changes, rather than requiring reviewers to run the documentation generation locally.

For other projects we use Netlify to automatically generate these preview URLs; it should be straightforward to add to this repo as well.

Consolidation of federated sharing and sharing modules

As discussed,
the first step is to define the federated sharing specification.

Currently, this functionality is split into two different modules (federated sharing and sharing), the idea
is to consolidate into one where the only share type is federated sharing.

From my point of view, to have a complete set of functionality the endpoints would be:

From a sender point of view:

  • Sending a share offer
  • List created shares, query parameter to filter by pending or accepted
  • Delete share
  • Update share? (in case an expiration date can be set on federated shares?)

From the receiver point of view

  • List received shares, query parameter to filter by pending or accepted
  • Accept a share offer
  • Reject a share offer

From both:

  • Get information about a share, with a status field (pending|accepted)

The remoteID is the share ID given by the sender. In order for it to be unique on the receiving side, the receiver must scope this remoteID to the trusted server and remote user to avoid any clash with other users/remote servers.

Example scenario:

UserA from Organization OA sends share offer with:

  • remoteID:123
  • token:$$$
  • owner:UserA

to UserB from Organization OB.

Organization OB's OCM implementation must save the share offer scoped to the remote user and remote server; this could be a simple string with prefixes, like <trustedremoteserver>:<remoteuser>:<remoteid>, which in this example would be ocm.OA.com:UserA:123.
This schema grants uniqueness of the remote ID.
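
A minimal sketch of this scoping scheme (in Python; the delimiter and function name are illustrative, and a real implementation would need to escape or forbid ":" inside the individual components):

def scoped_remote_id(remote_server: str, remote_user: str, remote_id: str) -> str:
    # Prefix the sender-chosen remoteID with the trusted remote server and
    # the remote user, so identical remoteIDs coming from different senders
    # cannot clash on the receiving side.
    return f"{remote_server}:{remote_user}:{remote_id}"

# Example from the scenario above:
# scoped_remote_id("ocm.OA.com", "UserA", "123") -> "ocm.OA.com:UserA:123"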

Do you think it makes sense?

A corner case is also what happens when the trusted server changes domain. How do ownCloud/NextCloud handle this situation? Are shares no longer accessible? @karlitschek @LukasReschke @DeepDiver1975 @PVince81

Cheers

subadmin endpoints are really needed?

From my point of view, the subadmin endpoints in the provisioning API are very tied to the ownCloud/NextCloud implementations and could be avoided. These endpoints just promote users to be part of some admin groups, but this logic is not used by other endpoints; as a result, it can be moved to the particular implementations, which will know whether the user/group has admin privileges.

Why application/x-www-url-encoded

The API relies on formData for data submission.

What is the reason for choosing this encoding over JSON? JSON allows expressing complex data structures in a simpler and more efficient way than application/x-www-form-urlencoded.

If the API were only based on POST and GET methods I could understand the use of this encoding, but the API is REST oriented and contains PUT and DELETE methods, which are not supported by XHTML 1.x forms.

Responses are given in JSON, why not consolidate on just one encoding format?

ownCloud custom response

Almost every response contains the redundant ownCloud/NextCloud meta envelope, which duplicates the HTTP status code. The HTTP status code already tells us that the request was wrong; there is no reason to repeat this information.

HTTP/2.0 400 Bad Request

{
    "meta": {
        "status": "error",
        "statusCode": 400
    }
}

What makes sense is to provide useful information for the end user in the response, something like this:

HTTP/2.0 400 Bad Request

{
    "code": 3,
    "message": "invalid token",
    "documentation": "url pointing to the endpoint definition"
}

As far as I've seen in recent years, this is what modern APIs return.
In some cases the 'statusCode' is 997; that error code is not documented at all and comes from the ownCloud/NextCloud implementation.
The same applies to successful responses: the data envelope could be avoided, as this API isn't HATEOAS oriented.

create new share: add permission specification

I am wondering whether basic permission information should be made protocol independent, so that it is clear, for example, whether a resource is shared read-only or whether the receiver may modify it in some way. Pushing everything to the protocol level actually makes the "standard" very implementation specific (i.e. not standard).

application module

The application endpoints are very ownCloud/NextCloud specific. A better endpoint would be a capabilities one, where clients could check which modules of the spec (shares, federatedsharing, privatedata, ...) are enabled or not, and behave accordingly.

Is this the objective of the application endpoint?

Proposal for notifications endpoint

In our proposals we let the providing service maintain control over 'their' share. In the end, they are the owner and are able to revoke access etc. By that I mean that when provider A wants to unshare a resource with consumer B, we should not rely on B's implementation of 'unshare' to change permissions. Instead, the 'unshare' feature can simply be removing or changing the access token on the provider side, making the resource unavailable for the corresponding consumer.

However, to make life easier for the consumer we should provide a notification functionality to notify trusted services of certain events. If we do this in a generic way, we might use this feature for (future) different notifications as well.

Notifications endpoint might also return a 202 accepted

At the moment we assume that the only valid success response is a 201 Created. However, if the receiver wants to process the notification in another way, it might also return a 202 Accepted response to inform the sender that the notification has been received but cannot be fetched later, because the receiver implemented its own notification handler.
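
For illustration, the two success cases would then look like this (the endpoint path and Location header are only examples):

POST /notifications

HTTP/2.0 201 Created
Location: /notifications/{id}

or, when the receiver handles the notification itself and does not expose it as a fetchable resource:

HTTP/2.0 202 Accepted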

How does a provider know where to send the invitation to?

In all our proposals we assume that the invitation object contains a user identifier which exists at the given endpoint. If we don't want to expose users via the API because we want to keep the attack surface as small as possible (described in issue #23), then how does a user know where to send the invitation to? In other words, if Joost (user on A) wants to share a resource with Dimitri (user on B), then:

  1. Can we make the assumption that Joost already knows that Dimitri is working with B?
    1. If not, the provider needs to know all users of every trusted service, which is a problem.
    2. If yes, can we make the assumption that Joost already knows the unique identifier of Dimitri on B?
      1. If not, we could perform a user search on B (because we know that Dimitri is working with B), but we don't want to do that due to the size of the attack surface.
      2. If yes, can we use the e-mail address as the unique identifier per platform?
        1. If not, what can we use as the unique identifier per platform?

Security proposal: trusted services and shared secrets / request signing

In the file sharing context, security is a serious concern. Therefore, it's a best practice to keep your
attack surface as small as possible. There are various measures to take when it comes to minimizing risk as much as possible.

TLS/SSL

The TLS/SSL protocol (serving traffic over HTTPS) guarantees, when implemented correctly, an end-to-end secure connection. Support for SSL is an absolute MUST for any decent API nowadays. SSL helps encrypt any private data when transferring it over the wire. However, as we will see in the "Request signing" chapter, SSL alone is not always enough.

Trust relationships

To be able to initiate resource sharing between different users on different instances, some API endpoints have to be exposed. These endpoints, especially for open-source projects, are an easy target for malicious services and should be secure and only be accessible by services you trust. Therefore, the concept of "trusted services" could be implemented. When you configure a trusted service, you are whitelisting this service to access the endpoints used for the negotiation and revocation of resource shares.

A trusted service means you trust the owner of the service and all code in the service application
to behave itself at all times. You must also be confident that the trusted application will maintain
the security of its shared secret.

Request signing

A common approach for managing trust relationships between services is by providing a
shared secret, which is generally a long sequence of randomly generated characters, known only to the parties involved. This shared secret could be generated by the trusting service admin and should be provided to the trusted party (to be). The shared secret should be provided to the latter party once and not change over time (of course you could decide to rotate all shared secrets once in a while, but we leave this out of scope for now).

Even when using SSL, there is still a small chance of a man-in-the-middle attack. Recent OpenSSL exploits (e.g. Heartbleed) have also demonstrated that the SSL protocol cannot be considered 100% secure. Especially because sharing services of different vendors could be hosted in various environments with different security risks/policies and management levels, it is not wise to rely fully on SSL encryption. Therefore, a proper choice would be to use the shared secret for signing requests.

Signing API requests means that the shared secret itself is never sent over the wire (as often happens when using traditional API keys), which takes away the risk of the shared secret being intercepted. Instead, the secret is used to sign the request on the requester's side using a cryptographic algorithm like HMAC-SHA1. With the same shared secret, the receiver of the request can verify the validity of the signature and the integrity of the message, to make sure that the contents of the request (including metadata) have been authored by the party claiming to be the sender and have remained untouched during the transfer.
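
A minimal sketch of such request signing (in Python, using HMAC-SHA256; the text above mentions HMAC-SHA1, but any HMAC works the same way, and the canonicalization and header name below are only assumptions):

import hmac
import hashlib

def sign_request(shared_secret: str, method: str, path: str, body: bytes) -> str:
    # Canonical string to sign: method, path and body, newline-separated.
    message = b"\n".join([method.encode(), path.encode(), body])
    return hmac.new(shared_secret.encode(), message, hashlib.sha256).hexdigest()

def verify_request(shared_secret: str, method: str, path: str, body: bytes, signature: str) -> bool:
    # The receiver recomputes the signature with the same shared secret and
    # compares in constant time to avoid timing attacks.
    expected = sign_request(shared_secret, method, path, body)
    return hmac.compare_digest(expected, signature)

# The sender could transmit the signature in a request header, e.g.
# X-OCM-Signature (a hypothetical header name); the shared secret itself
# never appears in the request.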

Signing and validating the request still does not mean that messages cannot be intercepted. This means there is still a small chance of leaking confidential information to malicious parties. To take away this risk, one could decide to add encryption so that messages are unreadable when intercepted.

Discussion

Do we want to go this far when it comes to security? Or is the data mostly not "that" sensitive? Other thoughts or alternatives?

Do we want to support more than one protocol at a time?

This discussion started on another issue, but it is important to settle on it.

Following #54 and #57, we have introduced the option to specify more than one protocol in a share action.

The rationale of this is to enable richer use-cases such as what I demonstrated at CS3 Barcelona, where the user could access the data (via webdav) and the remote application at the same time (via webapp), or what @schiessle came up with as another example: imagine you share a calendar event, and you want the recipient to access the event data via CalDAV (i.e. protocol = webdav), as well as connecting to some groupware tool such as Zoom, Nextcloud talk, etc. (i.e. protocol = webapp).

From @smesterheide:

I think the original definition was superior to having different protocols defined as properties. Can we use name as a discriminator?

The original definition was just an object, which does not offer any clue as to how implementations are supposed to encode the properties. No surprise that compatibility across different vendors was not ideal (e.g. only recently did Seafile announce compatibility with Nextcloud).

Using just a single property as a discriminator (the shareType rather than the name, I guess) implies that there's a single "dimension" to express all options. This has two issues:

  1. We'd need to come up with counterintuitive labels to represent the whole shareType x protocol matrix: for the above examples, something like file-webdav, file-app, caldav, calendar-talk-app?
  2. Even worse, a user who wants to share a file with two protocols / access methods at the same time would need to create two independent shares, with poor UX (why would I need to share twice?) and inconsistent scenarios thereafter: what if one of the two shares is later removed? Does the recipient keep the permissions on the other one? Either way it's wrong/inconsistent.

More opinions welcome!

create new share: indicate resource type

POST /shares

Shouldn't we include the type of the shared resource, so the receiver can correctly interpret what is shared? This could be an enumerated value in the set "file", "folder".
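
For example, the request body could carry such a field (the field name and allowed values shown here are only a suggestion, not part of the current spec):

POST /shares

{
    "name": "project-docs",
    "resourceType": "folder"
}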

Backwards compatibility

I am not sure whether it has already been decided if the next version (v2.0.0 as it stands now) will be backwards compatible with v1.0.0. As a reminder, v1.0.0 is the only spec available with some degree of interoperability among vendors. As this is the stated goal of OCM, we should maybe discuss this aspect before settling on the next version. As a starting point, maybe we can first agree on the notion of backwards compatibility.

  • Should the specification itself include all endpoints, schemas, etc. from previous versions (marked as deprecated)?
  • Should the API version be encoded in the URL path, e.g. {server}/v1?
  • How can an implementation advertise compatibility with multiple versions?
  • How does the discoverability endpoint /ocm-provider tie into that?
  • Do we maybe need a minor release first without any new features to facilitate backwards compatibility with new API versions?

Endpoint discovery through https://example.com/ocm-provider/

In practice, if servers A and B participate in OpenCloudMesh and Alice@A wants to share a resource with Bob@B, then the first thing server A will do is a GET to https://B/ocm-provider/ for endpoint discovery.

I don't think this practice is currently documented in this spec repo, but without it, OCM doesn't work.

There is another option: we could define endpoint discovery to be out of scope, meaning OpenCloudMesh will only work between previously federated servers. This is certainly how it will be used in ScienceMesh, so there it's fine, but in general I think it would be a pity to lose the open-ended host discovery.
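
A minimal sketch of that discovery step (in Python; the structure of the returned JSON differs per implementation and is an assumption here):

import json
import urllib.request

def discover_ocm_provider(host: str) -> dict:
    # Fetch the discovery document from the receiving server. The path
    # /ocm-provider/ is the one described above; the advertised fields
    # (API endpoint, version, resource types) vary per implementation.
    with urllib.request.urlopen(f"https://{host}/ocm-provider/") as response:
        return json.load(response)

# Example: server A would call discover_ocm_provider("B") and read the
# advertised API endpoint from the result before POSTing the share offer.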

RFE: make invitation workflow symmetric

The current OCM invite/accept workflow to create shares between users A and B is not symmetric, in the sense that once A has invited B and B has accepted, A can share with B, but B is required to invite A in turn before B can share with A.

This RFE is to evolve the workflow - currently still to be merged from #54 - such that A can choose a two-way invitation (which implicitly accepts the back invitation from B). As discussed with @michielbdejong, this requires creating a "back token" for the back invitation process to take place.

To be further detailed with the exact expected sequence of operations.

support sub-shares?

What if you, for instance, get read access to /foo but write access to /foo/bar? In OCM that would be two separate shares, and if you open /foo and then drill down to /foo/bar, the receiving server will not recognise that you have entered the folder where you do have read/write access.

Describe how "sharedSecret" may be used in WebDAV protocol

If you look at the $settings in https://github.com/owncloud/core/blob/e5e3da4ee8179055d479025404f7b70996d5fed3/lib/private/Http/Client/WebDavClientService.php#L90 you'll see something like:

[
  'baseUri' => 'https://oc1.docker/public.php/webdav/',
  'userName' => 'DyQARczFGQI7V3S',
  'password' => '',
  'authType' => 1,
]

(source: SURFnet/rd-sram-integration#114 (comment))

I have a strong feeling that this userName value is the protocol.webDav.sharedSecret value from https://github.com/cs3org/OCM-API/blame/c590a3d6c2c0388bb63b76c7c4c30cc9aed5f75d/spec.yaml#L224 (or "protocols" - see #62)
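
If that guess is right, accessing the shared resource boils down to HTTP Basic auth against the advertised WebDAV root, with the sharedSecret as the user name and an empty password. A sketch based on the settings quoted above (not verified against the spec):

import urllib.request

def fetch_shared_resource(base_uri: str, shared_secret: str, path: str) -> bytes:
    # Basic auth with the sharedSecret as userName and an empty password,
    # mirroring the ownCloud WebDAV client settings quoted above.
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    password_mgr.add_password(None, base_uri, shared_secret, "")
    handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
    opener = urllib.request.build_opener(handler)
    with opener.open(base_uri + path) as response:
        return response.read()

# Values taken from the quoted settings (the file path is hypothetical):
# fetch_shared_resource("https://oc1.docker/public.php/webdav/", "DyQARczFGQI7V3S", "some-file.txt")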

Trade in shared secret for an OAuth-grade refresh/access token?

We could benefit from OAuth best practices if we frame OCM as a pro-active flavour of OAuth.
The resource owner pro-actively gives the client access and sends them a shared secret via OCM.
After that, the current practice is that the client would then use this secret as-is to access the resource API (WebDAV or otherwise).
It might be better if we use the OAuth framework to make sure this client secret is not included in each network request.

(Wrong) usage of status code 503

The federated sharing module uses status code 503 to indicate that the module isn't available at the
specific endpoint.

  1. This should be a 501 Not Implemented (RFC7231, Section 6.6.2) instead of a 503 Service Unavailable, because the latter "indicates that the server is currently unable to handle the request due to a temporary overload or scheduled maintenance" (RFC7231, Section 6.6.4).
  2. It would be a good idea to use this status code for the other modules as well, to indicate whether they are available.

create new share: name vs opaque resource id

POST /shares

In reality, “name” should be a unique identifier of the resource in the scope of owner and providerid. It should not be a filename or path as such (although it could be, if unique), but it could be an opaque fileid, for example. For that purpose I would maybe call it resourceid.

If we accept the idea that the resourceid string may not be interpretable by the receiving end, the question arises whether a name for the resource (basename) should be provided in addition.

Group-owned shares and invites to/from groups

OCM assumes the sender of a share is a specific user. But in some situations it would be useful to think of shares as owned by a group.

This would mirror the concept of "Group Folders" which exists in some EFSS systems, but across multiple sites.

Document current translation that happens for webdav

I'm a bit embarrassed that I don't know this by heart, but I'm experimentally observing how to access a webdav share from, say, nc1.docker. I see my nc2.docker make requests like:

PROPFIND https://nc1.docker/remote.php/dav/files/einstein/asdf/qwer/asdf
And with a 'requesttoken' => '2oCY+AwkE9BKYBynQflAS+G4RnQcU3yyHgkewQblqNw=:q6v2ikhhSZR+I1aTcL0kGNjgCxUrfBSHcH5Q8ESPza8=' header (I'm still figuring out if this is what carries the sharedSecret and if so, how this is put into what looks like two base64-encoded strings).

I see https://nc1.docker/ocm-provider advertises https://nc1.docker/remote.php/webdav/ as the webdav root so I think that was then redirected to https://nc1.docker/remote.php/dav/

I'll get to the bottom of this so we can add it to the spec.

shareID should be a string

ShareID/RemoteID should be a string (currently an integer), as not everybody will use an auto-incrementing integer in a SQL DB :)

Framing in terms of OAuth

We could inherit a lot of best practices if we could somehow fit OCM into OAuth.
The flow is entirely different from classic OAuth, but maybe not that different from UMA.
When the resource owner picks a "sharee" (user@host), they are basically picking both a requesting party (user) and a client (host).

Sharing proposal: federated sharing without acceptance from consumer

It might be interesting to think about a scenario where the consumer doesn't have to accept a provider's share offer at all. To have a complete picture of all scenarios I've added the image below as well.

[image]

  1. Provider sends an invitation to consumer, including the access token.
  2. Consumer responds that it has received the invitation and can access the resource from now on.
  3. What happens next is up to the consumer. The provider wants to share this resource after all, and if it wants to revoke access, it can change or delete the access token.

Federated sharing: missing notification possibility

Providers MAY inform the sending user if a share has been accepted.

I'd suggest that the sending instance provides a federated sharing endpoint to receive notifications, and/or is able to send a callback URL when creating a share offer, which the receiving instance can call when an offer is accepted or rejected.

Sharing proposal: federated sharing with acceptance from consumer

This issue describes the proposed scenario where the owner of a resource (provider) wants to share his/her resource with someone else (consumer) and the consumer needs to accept that this resource is being shared with him/her.

[image]

The image above illustrates the global workflow for this scenario:

  1. The provider sends a 'ShareOffer' to the consumer.
  2. If the consumer rejects or accepts the offer, the provider should be notified of this event.
  3. If the offer is expired, already accepted or invalid for any other reason, the provider should report this to the consumer.
  4. If the offer is still valid, the provider grants access to the resource by issuing an access token to the consumer.

Provider wants to share a resource

The next image illustrates the interaction between the consumer (left) and the provider (right) for the part of the flow where the provider wants to share a resource. To distinguish the contexts of 'Share' between both parties, we use the term 'Invitation' instead of 'ShareOffer'. Security, communication, authentication and authorization between the two parties are dealt with in the proposal of issue #23 and are therefore left out of the scope of this particular issue.

[image]

  1. The rounded rectangle represents the starting point of this process. The provider creates a share that the consumer can reference. It's probably stored in an internal database, but as we are not interested in vendor-specific logic, we'll just illustrate this 'black box' as a database with the title 'Vendor specific internals'.
  2. Whether it's stored in a database or not, the provider performs a POST request to the consumer's /invitations endpoint. If the request is successful (possible errors and the structure of the invitation object are described in the Open API Specification), the consumer should notify the provider by returning the correct response. Both parties can then do whatever they want in their vendor-specific logic, but we now know that successful communication between the two parties has been established.

Consumer accepts the invitation

If there is an invitation pending at the consumer's side, the consumer can accept the invitation to obtain actual access to the file and to notify the provider that the invitation has been accepted:

[image]

Once again the rounded rectangle is the start of the flow, and this time it starts on the left side where the consumer wants to accept the pending invitation.

  1. The consumer performs a POST request to the provider's /shares/{id}/accept endpoint.
  2. The provider checks whether the invitation is still valid and might perform additional vendor-specific business logic (which we are not interested in) before returning a successful response (or an error) to the consumer. Possible errors and the structure of the Acceptance and Share objects are described in the Open API Specification.

Consumer rejects the invitation

If there is an invitation pending at the consumer's side, the consumer can also reject the invitation. To notify the provider of this event, the consumer should make a similar request as described above, but to the provider's /shares/{id}/reject endpoint.

Apply for funding to help develop Open Cloud Mesh within this community

As discussed at the CS3 2023 conference last week, I offer myself as a contact point for applying for funding to help improve our documentation, specification, test suite, and ultimately also OCM itself.

I already spoke with @schiessle, @moscicki, @glpatcern, @gmgigi96, @labkode, @smesterheide, @andreasfk and @guidoaben about this at the conference and I got the impression (but correct me if I'm wrong) that we all think this is a good idea. Not everybody needs funding, so some people could just add their name as an "advisor" instead of as a team member. Maybe you approve but don't want to link your name to OCM, or to this particular initiative to do some more work on OCM now; that's also fine.

CC also @LovisaLugnegard and @ishank011 from https://github.com/cs3org/OCM-API/graphs/contributors. Is anyone missing from this list who we should ping about this?

I was thinking of applying for a 50,000 euro grant from https://nlnet.nl/entrust/ (deadline: 1 April).
I can create a first draft text and maybe we can have a video call some time next week to discuss it.
I'll also ping each of you individually so we can determine who wants to be involved in which way, and put together a team and some milestones we want to work on.

I think the main goal would be to:

  • improve the documentation (for instance, write the spec down properly, in RFC style)
  • improve the test suite (it's still mostly manual at this point)
  • describe optional features (a number of interesting routes to explore have been proposed)

I think an anti-goal would be to redesign OCM in a way that's incompatible with what exists now. So if you just implement OCM or use it in any way, then hopefully this activity will either benefit you, or it will leave your situation as it is. Given the open nature of this protocol we just want to help improve the quality of the work that already exists, we don't want to end up hijacking or breaking it in any way! :)

Cannot specify options per protocol in create share endpoint

While #54 introduced the possibility of specifying multiple protocols in the /shares endpoint, there is still no way of specifying options per protocol, or at least it is not clear how.

This should be a map protocol -> options, for example

{
  "webdav": {
    "sharedSecret": "hfiuhworzwnur98d3wjiwhr",
    "permissions": "{https://open-cloud-mesh.org/ns}share-permissions",
    "uriTemplate": "https://open-cloud-mesh.org/s/{path-to-shared-resources}"
  },
  "other": {...}
}
