
charm-relation-interfaces's Introduction

Charm Relation Interfaces

A catalogue of opinionated and standardized interface specifications for charmed operator relations.


Purpose

The purpose of this repository is to outline the behavior and requirements for key interface names, ensuring that charms claiming to implement a given interface can actually be integrated with one another.

Contributing

Contributing a new interface specification is a lightweight process.

Interfaces

For the time being, to see available interfaces, their statuses, and schemas, browse the interfaces directory.

Relation interface testers

In order to automatically validate whether a charm satisfies a given relation interface, the relation interface maintainer(s) need to write one or more relation interface tests. A relation interface test is a scenario-based test case which checks that, given an initial context, when a relation event is triggered, the charm will do what the interface specifies. For example, most interface testers will check that, on relation changed, the charm will write a certain value into its (app/unit) databag and that that value matches a certain (Pydantic) schema.

See the tester documentation for more.
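
The shape of such a test can be sketched without any particular framework. The following is a minimal, illustrative stand-in: `REQUIRER_APP_SCHEMA`, `validate_databag`, and `handle_relation_changed` are hypothetical names, and a real tester would use the scenario framework and Pydantic models rather than this hand-rolled validation.

```python
# Hypothetical flat schema for the requirer's app databag; every value in a
# Juju databag is a string.
REQUIRER_APP_SCHEMA = {"database": str, "extra-user-roles": str}


def validate_databag(databag: dict, schema: dict) -> bool:
    """Check that every required key is present and has the right type."""
    return all(
        key in databag and isinstance(databag[key], typ)
        for key, typ in schema.items()
    )


def handle_relation_changed(initial_databag: dict) -> dict:
    """Stand-in for the charm's relation-changed handler under test."""
    databag = dict(initial_databag)
    databag["database"] = "mydb"  # the write the interface mandates
    return databag


# The "test": given an initial context, trigger the event, check the result.
result = handle_relation_changed({"extra-user-roles": "admin"})
assert validate_databag(result, REQUIRER_APP_SCHEMA)
```

The real tester plays the same role: simulate the event, then assert the resulting databag against the interface schema.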

charm-relation-interfaces's People

Contributors

abuelodelanada, alesstimec, arturo-seijas, bencekov, ca-scribner, danielarndt, delgod, deusebio, gboutry, ghislainbourgeois, gmerold, gruyaume, ibraaoad, jnsgruk, juditnovak, lucabello, marcoppenheimer, mthaddon, natalian98, nsklikas, nuccitheboss, patriciareinoso, pietropasotti, saltiyazan, sed-i, shayancanonical, simskij, weiiwang01, wood-push-melon, wrfitch


charm-relation-interfaces's Issues

Rethink how interface tests are specified

I know this is out of scope for this PR, but the framework makes it non-trivial to check what exactly is validated. We're not asserting anything explicitly.

When I'm reviewing this test, for example, I have a hard time figuring out what exactly we are validating. The same goes for the next one, where the devil is in the "schema=SchemaConfig.empty" detail.

Originally posted by @gruyaume in #57 (comment)

Add a CONTRIBUTING.md file

  • Should tell people how to contribute to the repo
  • Should include links to schema validation tools like the one from Newtonsoft.
  • Should outline our (slim) governance model
  • Should explain why you should get your charm validated
  • Should explain how to get your charm validated

How to share relation interface schemas between charms

Issue:
the charms implementing a charm-relation-interfaces-backed relation need access to the pydantic schemas in schema.py for the interface. Right now we need to copy-paste the code and share it 'manually', or bake it into the charm lib.

It would be nice if we could expose the interfaces from a pypi package (for example) or in some other way, so all the charm needs to do is:
from charm_relation_interfaces.foo.v0.bar import BarRequirerSchema

Interfaces table contains `ingress` twice with two categories

If this is not a mistake, we should rethink the table format, or at least the representation on charmhub.io. Is this going to be a common pattern? I assumed, perhaps incorrectly and without data/conversations to back it up, that interfaces would have a many-to-one relationship with a category.

Either way is fine, just want to make sure we're concrete on expectations :).

Create a relation interface for Mimir distributed

We need to add the mimir_cluster relation interface to this repository. The specifications for this relation are:

The worker (provider) side should provide:

  • its roles;
  • its address;
  • its port;
  • its scheme (http/https).

The coordinator (requirer) side should provide:

  • the Mimir configuration.

How to document/specify/provide schemas for nested data structures

Databags are flat string:string mappings.
Sometimes relation databags need or want more structure and therefore the values of that mapping are yaml/json/b64/pickle-encoded complex objects.
In our schemas we'd like to document what the structure of those complex objects is.

How do we document the type of encoding used?
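
One common convention, shown here as an illustrative sketch rather than a prescription, is to JSON-encode nested values and state that encoding explicitly in the interface spec (the `alert_rules` key is a hypothetical example):

```python
import json

# A nested structure one side wants to share over the relation.
alert_rules = {"groups": [{"name": "cpu", "rules": [{"alert": "HighCPU"}]}]}

# Databags are flat str -> str mappings, so the value is JSON-encoded...
databag = {"alert_rules": json.dumps(alert_rules)}
assert isinstance(databag["alert_rules"], str)

# ...and the reader must know, from the interface spec, to JSON-decode it.
assert json.loads(databag["alert_rules"]) == alert_rules
```

Whatever encoding is chosen (json, yaml, b64), the schema should name it per key, since nothing in the databag itself reveals it.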

How to deal with optional/unsupported keys?

Suppose I have two charms providing the ingress interface (interoperable).
One of them adds a new feature (say, TLS) to the ingress library and makes it optional, so that it will still support integrating with providers or requirers using earlier versions.

  • What does the ingress interface now look like?
  • What should the provider do when it sees a key it doesn't recognise? Block, silently ignore (risky with TLS...), or..?
  • How do we spec this in this repo? "The provider is expected to provide TLS, but only if it supports it...?"
  • What does the tester do? Is it 'happy' if the charm under test does not support the optional feature, or if it does strict validation and blocks when it sees the unknown key?

Pydantic error using cos_agent in a machine charm

I'm going through some code I wrote some time ago to test COS Lite with machine charms.

https://github.com/erik78se/juju-operators-examples/tree/main/observed

After upgrading a previously built charm that uses the cos_agent interface (https://canonical.github.io/charm-relation-interfaces/interfaces/cos_agent/v0/), I hit an ERROR in the upgrade hook.

unit-observed-1: 16:44:51 WARNING unit.observed/1.upgrade-charm If you got this error by calling handler() within __get_pydantic_core_schema__ then you likely need to call handler.generate_schema(<some type>) since we do not call __get_pydantic_core_schema__ on <some type> otherwise to avoid infinite recursion.

Is this an error or am I doing something wrong?

Proposal: Introduce versioning and version alignment

We know that the relation interface is at the moment lacking capabilities in terms of:

  1. TLS
  2. Authentication

The capabilities above are virtually certain to require either side of the relation to perform significantly different logic than they do now. Current providers and requirers, even if they do not fail outright when confronted with, say, an endpoint requiring TLS, will not be able to set up their workloads to interoperate with that endpoint.

IMO, we should add an optional property, called `version`, to the root objects for provider and consumer, to represent the version of the relation interface. If not set, the version is assumed to be 0, which should correspond in capabilities to LIBAPI version 0 of the prometheus_remote_write charm library.

Future versions of the relation schema will have different structures depending on the version. Provider and requirer should always post to the relation interface the highest version they are capable of and, when receiving from the other side a version that they also support, "downgrade" the relation data they expose accordingly.

If one role in the relation receives relation data with a version it does not understand, it must do nothing in response. That charm may decide to go into BlockedState to signal the incompatibility to the Juju admin. (WaitingState seems semantically incorrect, because there is no guarantee that a common version will be agreed upon.) If the other side downgrades its relation data to a common version, the charm will then recover from the BlockedState.
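
The negotiation described above can be sketched as follows; `SUPPORTED_VERSIONS` and `negotiate` are hypothetical names and the version numbers are illustrative, not part of the proposal:

```python
from typing import Optional

# Versions this (hypothetical) charm can speak; 0 is the implicit baseline.
SUPPORTED_VERSIONS = {0, 1, 2}


def negotiate(remote_version: int) -> Optional[int]:
    """Pick the highest version both sides support, or None if incompatible.

    Each side publishes the highest version it is capable of and, on seeing
    the remote's version, downgrades its databag to the common version.
    """
    common = {v for v in SUPPORTED_VERSIONS if v <= remote_version}
    return max(common) if common else None


assert negotiate(1) == 1  # remote is older: downgrade to 1
assert negotiate(5) == 2  # remote is newer: stay at our maximum
```

A `None` result would correspond to the BlockedState case: no common version exists, so the charm publishes nothing and waits for the other side to downgrade.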

Proposal: Composition of interface schemas

We need some documentation, and an example, on how to do composition of interface schemas, for instance, the mongodb interface extending database.

  • Design Suggestion
  • Example
  • Validator Proof
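
As an illustration of the idea (not the proposed design), composition can be expressed as model inheritance. A plain-Python stand-in is used here in place of Pydantic models, and all class and field names are hypothetical:

```python
class DatabaseRequirerData:
    """Base fields every `database`-style interface shares (illustrative)."""

    def __init__(self, database: str):
        self.database = database


class MongodbRequirerData(DatabaseRequirerData):
    """`mongodb` composes `database` by inheriting it and adding its own field."""

    def __init__(self, database: str, replset: str):
        super().__init__(database)
        self.replset = replset


data = MongodbRequirerData(database="mydb", replset="rs0")
assert data.database == "mydb" and data.replset == "rs0"
```

With Pydantic the same shape falls out of subclassing `BaseModel`; in raw JSON Schema terms it would correspond to combining the base schema via `allOf`.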

Create an eponymous pypi package with pydantic models

Proposal

I would like to be able to pip install pydantic models

pip install charm-relation-interfaces
# or
pip install charm-relation-interfaces[alertmanager_dispatch]

so they could be easily imported from a centralized location

from charm_relation_interfaces import AlertmanagerDispatchProviderV1
# or
from charm_relation_interfaces.alertmanager_dispatch.v1 import AlertmanagerDispatchProvider

instead of storing them inside the relation's *.py file.

Background/assumptions

  1. JSON schemas are great for general interoperability, but they are not immediately usable from within charm code. What really makes a difference are pydantic models, and how properties are "static".
  2. The "currency" of charm relation interfaces should be pydantic models. Dataclasses usability is limited to very simple schemas and JSON schemas do not convert into a python object we can work with at dev time.
  3. Charm libs are, and will remain, limited to one file; yet if we choose to split provider/requirer into two separate libs, we may need to repeat the pydantic model in both requirer and provider files.
  4. Creating a dedicated charm lib to hold a pydantic model (or two models) would result in inter-lib dependencies, which is not ergonomic.
  5. Creating a dedicated charm lib to hold pydantic models per charming team (e.g. observability_libs.schemas) would artificially subject an essential part of an operator to the workflow of a particular team, and may be perceived as uninviting by the community. It may also give a wrong impression, as interfaces are intended to be unique across the ecosystem.

[proposal] uniform way to specify databag structure

I think for this project to be more useful we could attach to each relation interface a template, replacing the 'schemas'.
A template should be a pydantic model with this structure:

from typing import Optional

# `Model` is the thin pydantic-style base class we'll provide in base.py.
class DataBagModel(Model):
    unit: Optional[Model]
    app: Optional[Model]


class Template(Model):
    provider: Optional[DataBagModel]
    requirer: Optional[DataBagModel]

We'll provide this base class in a charm-relation-interfaces/base.py
So charm-relation-interfaces/interface/template.py should contain for example:

from base import DataBagModel, Model, Template
from typing import Dict

class ProviderAppModel(Model):
    foo: int
    bar: str
    baz: Dict[str, int]


class Boo(Model):
    nested: bool = True  # default value --> not required

class RequirerUnitModel(Model):
    qux: Boo

template = Template(
    provider=DataBagModel(app=ProviderAppModel),
    requirer=DataBagModel(unit=RequirerUnitModel),
)

Rationale:

  1. this is easier to read at a glance than that humongous json data structure we have right now
  2. if one wanted (for some reason) to validate a databag (the requirer, for example), it could copy-paste the template, add a pydantic dependency, and use pydantic to validate it. (right now, one could do the same with jsonschema I guess, but pydantic seems to be the standard way to do this in python, so... why not?)
  3. this could one day be integrated with https://github.com/PietroPasotti/relation-wrapper or something like that, so that when you install a charm lib you automatically get the schemas (from wherever they are) and get type hints and code completion when you're using the lib

[ingress] Support exposing application on a custom path from outside

The current ingress relation systematically exposes the application to the outside world on the URL http(s)://[ingress hostname]:[ingress port]/[app-name]-[model-name]/.
This doesn't match many web-application use cases where a user wants their application exposed at http(s)://[ingress hostname]/[custom-path] (e.g. https://ubuntu.com/pro).

In order to support a custom path we can do the following changes:

  • Add a custom-path optional attribute to the relation interface
  • If this new attribute is set, the reverse proxy is instructed to expose the application on this path instead of the generated [app-name]-[model-name]
  • This opens a risk of path collision; in case of a collision, the provider charm will be in blocked state, with none of the requirers asking for the same custom-path being exposed, until an operator fixes the conflict

Note: This is not a breaking change as it adds an optional attribute, therefore it won't require a new relation version.
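
The collision handling described above could be sketched as follows (all names are hypothetical; this is not the provider's actual implementation):

```python
from collections import Counter


def find_path_collisions(requests: dict) -> set:
    """Return the set of custom paths requested by more than one requirer.

    `requests` maps a requirer app name to its requested custom-path, or
    None, meaning the generated [app-name]-[model-name] path is used.
    """
    paths = [p for p in requests.values() if p is not None]
    return {p for p, n in Counter(paths).items() if n > 1}


requests = {"blog": "/", "shop": "/store", "docs": "/"}
assert find_path_collisions(requests) == {"/"}
# On a collision, the provider would set blocked status and expose neither
# conflicting requirer until an operator resolves the conflict.
```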

[opensearch_client] Migrate schemas to pydantic

Upon some closer investigation, I noticed that this interface is still using JSON schema only. I suggest you have a stab at implementing the schema with pydantic before we merge this as this will likely uncover things you might like to change.

One change that comes to mind just by reading the spec is that secret_fields is a space-delimited string rather than a string array. I would change this for multiple reasons:

  • Easier to work with (with pydantic deserialization, that is)
  • And probably more importantly, JSON allows spaces in object keys, which could potentially become a problem.

(and sorry for this becoming so long-lived, but I do think this is important enough to stall)

Originally posted by @simskij in #99 (review)
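
The concern with space-delimited keys can be shown in a few lines (the field values are made up for illustration):

```python
import json

# As a space-delimited string, a key containing a space is ambiguous:
raw = "username password tls ca"
assert raw.split() == ["username", "password", "tls", "ca"]
# ...there is no way to tell whether "tls ca" was one key or two.

# As a JSON array, each key survives intact, spaces and all:
encoded = json.dumps(["username", "password", "tls ca"])
assert json.loads(encoded) == ["username", "password", "tls ca"]
```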

Readme.md cleanup

Right now, the readme.md of all interfaces has a few issues:

  • unclear what the purpose of the introductory section is, given that it's always some paraphrase of

"this interface specification is meant to define what a charm providing or requiring this relation interface should do."

  • too much variability in the way the 'expectations' are stated. If the schema (pydantic or json) is not clear enough, then the tests should make it bulletproof-obvious what a charm is or is not expected to do.
  • there is still some lingering confusion in some specifications of whether the norms are behavioral or purely structural (i.e. if the ingress provider is actually expected to do something beyond replying with a syntactically correct url).
  • the 'directionality' graphs are somewhat useful, but also that's information that can be derived from the schema. Perhaps we can make the schemas more central and visualize them better?

I think this repo/project would benefit from some cleanup across the readme's to:

  • reduce duplication
  • make things uniform across the interfaces
  • make the language more clear in general

`tox -e build-json-schemas` does not work as expected

It does not seem that the new build-json-schemas tox env is working correctly. I am currently working on adding the nfs-share integration specification to this repository, and when I run tox -e build-json-schemas as a pre-commit hook it overwrites the requirer.json and provider.json files for the ingress integration:

Provider

(screenshot of the overwritten provider.json omitted)

Requirer

(screenshot of the overwritten requirer.json omitted)

This issue is causing the CI jobs to fail on my pull request.

requirer/provider pattern

It seems to me that this collection of interfaces is biased towards the provider/requirer pattern.
That is perfectly fine, but we could state this more clearly in the top-level README.

Are we only going to spec provider/requirer interfaces here, or also other types of interfaces (e.g. rolling-ops or other 'relation models')?
If so: we can make this clear in the top-level README, so each interface's own README doesn't have to repeat or rephrase it.
If not: we have to be clearer in how we state it on a per-interface basis, and realize that this will have an effect on how the testers are set up.

Resource name on the provider side

Right now the interface for most databases is such that the requirer provides the resource name (be it a database or a topic) in its request, and the provider answers with the credentials (e.g. username, password, endpoints).

This is somewhat inconsistent with the UX for the s3 integrator, where the provider ALSO provides the bucket name and the requirer has a clear requirement listed that:

Is expected to tolerate that the Provider may ignore the bucket field in some cases (e.g. S3Proxy or S3 Integrator) and instead use the bucket name received.

Could we do this also for all database interfaces?

One of the things we are currently struggling with is that the resource name is present in the requirer application databag, which can be read by the leader but not by other units. In the use case where the client application wants to scale out (adding more units), we need the resource name to be available to those units as well. Of course one could include this info in the peer relation, but duplicating information (from the application relation to the peer relation) does not seem optimal to me. Besides, providing this on the provider side would fix this, and it would also be more general and consistent with S3 if we want to provide the feature of overriding the requirer's preference (similar to the requirement above for s3).

Tagged "releases" of this repo

Would it be possible to tag releases for this repo? I recognise it's a fluid project, BUT, the UI on charmhub.io relies on the main readme.md to be in a certain format. Tagging would allow us to pin the UI to a specific version and only update when we're sure nothing will break. Thoughts?

Why not just pin to a git hash? Because it's hard to tell, at a glance, which hash is newer.
