
data-and-domain-models's Introduction

Open Integration Hub (OIH)

Open source framework for easy data synchronization between business applications.

Visit the official Open Integration Hub homepage

Introduction

License

The Open Integration Hub enables data synchronization across a variety of applications. This requires unified data structures — the master data models of the Open Integration Hub. These master data models can make your life easier, but are optional. You can always just do a 1:1 mapping or use your own data models. The models are developed and maintained by the community, so please do not hesitate to give feedback, suggest changes or propose new models.

Join the Community

Do you have questions, ideas, feedback or just want to chat about integration? Please join our growing developer community on Slack!

Contribution

Getting Started

Within the scope of the master data models of the Open Integration Hub, you can contribute in two different ways: you can either request/propose a change to an existing model, or contribute a new model for a currently non-existent domain.

Propose a Model Change

An existing model can be changed in different ways: something can be added to or removed from the model, or an existing part, such as an attribute, can be changed.

If you want to propose a model change, please open an issue here or against the monorepo.

The unified process for changing a model is explained in the following:

  1. Fill out the request for model change
  2. Submit the proposal
  3. The relevant workgroup will check the proposal according to fixed rules
  4. If the proposal is approved, the change will be incorporated into the model
  5. A new model version will be published within the next release

Change Request Flow:

Change Request

Realization of Request Sub-Task:

Realization of Request

Contribute a new Model

If you want to contribute a new model for a domain that does not yet exist, please consider the following steps:

Before you start please read the Introduction into Open Integration Hub master data models and How to contribute a new data model.

Contribution Guidelines

Before you contribute please read our contribution guidelines.

Code of Conduct

To see how members of the community are expected to behave, please read the code of conduct. We apply the code of conduct defined by the Contributor Covenant, which is used across many open source projects, such as NodeJS, Atom and Kubernetes.

Content

Folders

  • Decisions: This folder contains all outstanding/completed decisions of the workgroup, categorized into open and closed decisions
  • MasterDataModels: This folder contains general information about the data models, an explanation of the OIHDataRecord and all currently existing master data models
  • Protocols: Contains the meeting protocols of the workgroup meetings
  • src: All JSON schemas can be found here. This includes the JSON schemas for all existing master data models, the overarching OIHDataRecord and a generic example as a starting point for writing JSON schemas for the corresponding data model
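To illustrate what the generic starting-point schema in src might look like, here is a minimal sketch; the property names are invented placeholders, not the actual OIH model fields, and the `required` check is a toy stand-in for a full JSON Schema validator:

```javascript
// Minimal JSON Schema sketch in the spirit of the generic example in src/.
// The property names below are illustrative, not the real OIH fields.
const addressSchema = {
  $schema: "http://json-schema.org/draft-04/schema#",
  title: "Address",
  type: "object",
  required: ["firstName", "lastName"],
  properties: {
    firstName: { type: "string", description: "Given name of the person" },
    lastName: { type: "string", description: "Family name of the person" },
    street: { type: "string" }
  }
};

// Toy check of the schema's `required` list; a real setup would use a
// complete JSON Schema validator instead.
function missingRequired(schema, record) {
  return (schema.required || []).filter((key) => !(key in record));
}

console.log(missingRequired(addressSchema, { firstName: "Jane" })); // ["lastName"]
```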

Documents

  • CONTRIBUTING: Gathers the rules governing contributions to the Open Integration Hub project
  • CODE_OF_CONDUCT: Contains an explanation of the expected behavior of the community members, following the code of conduct defined by the Contributor Covenant

data-and-domain-models's People

Contributors

ag737, ddasberg, dennisces, ealtendorf, heggert, hschmidthh, josefbraeuer, jschuesslerhh, nils-mosbach, philecs, raphaelkomander, robinbrinkmann, spyanev


data-and-domain-models's Issues

Provide deliverables on basis of Lutz' suggestions (Tool + Schema)

Derived from the 2017-11-29Workshop.md protocol.

The deliverables for a 'generic model of addresses'

  • in the shape of a UML class diagram (attached) by @hschmidthh till 19th Dec. '17
  • in the shape of a JSON schema by @hschmidthh till 19th Dec. '17
  • as conceptual elaborations for addresses (including decision processes)

The deliverables for a 'generic model of products'

  • in the shape of a UML class diagram (attached) by @JosefBraeuer till 19th Dec. '17
  • in the shape of a JSON schema by @JosefBraeuer till 19th Dec. '17
  • as conceptual elaborations for products (including decision processes)

Challenge the Standard Data Model

The standard model (configuration) holds grouped subsets of the data fields that are actually transmitted in business transactions. Because of backward compatibility (#68), the evolution process is heavily shaped by the first version. A sustainable selection of verified data fields would provide a good starting point; these can be proven under near-realistic conditions within a runtime.

OIH 0.5 approach to addresses and products

  • Challenge the standard data model, by transmitting representative aggregates
    • Invoice data as combination of contact/address and items/products

Data transmission via Standard Data Model

POC 1: Adapter (mapping via UI) <> FLOW <> Adapter (mapping via UI)
POC 2: Adapter (mapping via UI) <> FLOW <> SDM <> FLOW <> Adapter (mapping via UI)
POC 3: Adapter+Transformer <> FLOW <> SDM <> FLOW <> Transformer/Adapter

Feedback on Identity Management Description

👍 For providing a detailed description on Identity and Access Management
👍 For using illustrations/diagrams to visualize the ideas.

Tenant and Tenant Users

The highlighted names in the example workflow differ from the notation in the illustration (e.g. Vendor1 & ACME Inc. and Tenant admin). I think we should adjust them to maintain consistency.

Sequence Diagram

The example workflow just before the sequence diagram describes that the connector transmits this tenant specific access token to OIH but in the sequence diagram the ISV Connector returns the isv_tenantId. Are the access token and isv_tenantId the same in this scenario? Or is the described scenario not illustrated in the sequence diagram?

Prototype Tier 2 (incl. Data Hub)

work in progress!

We define an integration flow.
In SilvERP, the target URL is stored, together with any required credentials.
When an address record is saved, a trigger fires that sends the complete record via webhook to the target URL.

In the Data Hub:

  • the record is first stored raw
  • the record is transformed into the OIH data model and stored (a Data Hub unique identifier is created in the process)
  • a link to the SilverERP (creator) instance is stored together with SilverERP's unique identifier
  • the action to transmit the record to every other instance (in this case Snazzy) is triggered
  • Snazzy receives the record and recognizes from the missing "Snazzy unique identifier" that it is a new record
  • Snazzy therefore creates a new record and returns the created Snazzy identifier to the Data Hub
  • For each instance, the Data Hub remembers the ID of the record in that instance and the timestamp (the version) of the record that was successfully handed over to the instance

When this record is changed in one of the two systems (which hold identical copies), for example in Snazzy:

  • On saving in Snazzy, the trigger fires and sends the complete record via webhook to the Data Hub's target URL
  • The Data Hub stores the raw data
  • The Data Hub transforms the data into the Data Hub format and applies the changes to the stored record
  • The change timestamp (and, if applicable, the changing instance and user) is stored in the Data Hub as well
  • Afterwards, the action to transmit the record to all instances is triggered (here excluding the triggering instance)
  • The instances recognize that the call contains a local ID (e.g. SilverERP's own) and interpret the call as an update

Issues:

  • When transmitting state (the complete record), note that changes can be overwritten if a change is based on an older version of the record.
    Example:
    • User 1 in instance A loads an address record
    • User 2 in instance B loads the same address record
    • User 2 changes the content of field Name1 and saves
    • User 1 changes the content of field Name2 and saves
    • In the Data Hub, the last save overwrites field Name1 with the content that was current when user 1 loaded the record.

      User 2's change is lost

Required:

  • Every system must manage a unique identifier within its own system.
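The lost-update issue described above can be sketched in a few lines; `hub` and `saveFullState` are illustrative stand-ins for the Data Hub's stored record and a full-state webhook save, not actual OIH components:

```javascript
// Sketch of the lost-update problem with full-state transmission.
// `hub` stands in for the Data Hub's stored record; names are illustrative.
let hub = { name1: "ACME", name2: "Logistics" };

// Full-state save: the client writes back its entire (possibly stale) copy.
function saveFullState(clientCopy) {
  hub = { ...clientCopy };
}

// Both users load the same record.
const user1 = { ...hub }; // instance A
const user2 = { ...hub }; // instance B

user2.name1 = "ACME GmbH";
saveFullState(user2); // user 2 saves first

user1.name2 = "Shipping";
saveFullState(user1); // user 1 saves a stale copy afterwards

console.log(hub.name1); // "ACME", user 2's change is lost
```

Transmitting field-level changes (a diff) instead of the complete state, or rejecting saves based on an outdated version timestamp, would avoid this overwrite.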

Provide a standard data model for identities

One aim of the Open Integration Hub is to foster convenience at marketplaces where independent SaaS integrations can be used in combination, according to the business needs of small and medium-sized enterprises (SMEs).

In order to prevent a scenario where access permissions have to be managed in every single app, identity management should also cover tenant users. Derived from that, a further domain model may need to be introduced that supports managing identities.

This requirement was discussed at the MS2 planning workshop, as seen in the protocol

Describe requirements for a master data model

This Epic is derived from the 3rd paragraph of the MasterDataModelAdresses.md

  • General requirements
  • User requirements
    • Types and objects
      • UseCase: properties of organization
      • UseCase: properties of persons
    • Relations
      • UseCase: relations person to organization
      • UseCase: relations organization to others
    • Rights management
    • Data integrity
  • Legal requirements
    • data protection
    • data sovereignty

Provide the proprietary interface documentation of Snazzy

Provide standardized endpoints - public. The following structure was derived in order to arrive at a unified documentation format #49

  • Request methods: GET/POST/PUT/PATCH/DEL
    • Interface and description - purpose(s)
    • HEADERS/BODY/PARAMS and description - test(s)
    • Status code(s) and description - response(s)

The more convenient your documentation is, the more likely third parties are to integrate your app
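Assuming the structure above (method, interface/purpose, headers/body/params, responses), a single entry in such a unified documentation could look like the following sketch; the path, headers, and wording are invented for illustration:

```javascript
// Illustrative entry in a unified interface documentation, following the
// structure above. All values are placeholders, not real OIH endpoints.
const endpointDoc = {
  method: "GET",
  interface: "/addresses/{id}",
  purpose: "Fetch a single address record",
  headers: { Authorization: "Bearer <token>" },
  params: { id: "Unique identifier of the address" },
  responses: {
    200: "Address record as JSON",
    404: "No address with the given id"
  }
};

console.log(endpointDoc.method, endpointDoc.interface);
```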


OData 4.0 documentation / in a nutshell

Use explicit file names to link referenced documents

Provide the actual mapping table of Snazzy

Mapping example of the proprietary data model towards the current standard data model.

Proprietary data model   Direction   Standard data model - address
name                     >           person.firstName
name                     >           person.lastName
street                   >           person.address.street
street                   >           person.address.number
note                     ?           -

As seen in @ealtendorf's contribution, dot notation allows nested data structures to be explained in a readable form, with columns left-aligned.
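A minimal sketch of such a mapping in code, splitting the proprietary `name` and `street` fields into the nested standard fields; the splitting rules (first space, trailing house number) are assumptions for illustration, real data needs more robust handling:

```javascript
// Illustrative mapping from the proprietary model to the standard model.
// Splitting `name` at the first space and `street` at the trailing house
// number are assumptions; real-world data needs more robust rules.
function mapToStandard(src) {
  const [firstName, ...rest] = src.name.split(" ");
  const match = src.street.match(/^(.*?)\s+(\d+\w*)$/);
  return {
    person: {
      firstName,
      lastName: rest.join(" "),
      address: {
        street: match ? match[1] : src.street,
        number: match ? match[2] : ""
      }
    }
  };
}

console.log(mapToStandard({ name: "Jane Doe", street: "Main Street 12a" }));
```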


Regarding nested objects, a data point mapping in a two-dimensional table isn't practical.

Rather, document your application's data model in JSON format via doca's web interface, which is suited to presenting nested structures. @spyanev

JSON STRUCTURE (object.json)

"definition": {
  "datapoint": {
    "type": "string",
    "description" : "Important for the semantic data point mapping"
    "example" : "An entity would look like"
    }
},
"required": ["datapoint"],
"properties": {
   "datapoint": {"$ref": "#/defintions/datapoint"},
   "NestedObject": {"$ref": "./source.json"},
   "FetchedDefinition": {"$ref": "./source.json#/definitions/datapoint"},
   "FetchedProperty": {"$ref": "./source.json#/propterties/datapoint"}
}

In this example, source.json has a proper object defined. The "$ref" attribute is a namespace link.

Provide the regulations on how to design a generic domain model

According to the 2017-12-11 Telko.md, @josef and @lashauer wanted to define the regulations. These still need to be published in an official document for the 1st milestone and the community (independent implementations).

Structure of the Regulations.md derived from our call on the 15th Dec. '17

  • General regulations for generic data models @lashauer till 19th Dec. '17
  • Specific regulations for the generic data model addresses @hschmidthh till 19th Dec. '17
  • Specific regulations for the generic data model products @JosefBraeuer till 19th Dec. '17

Verify the scope of standard data models

The required attributes for the standard data model (configured domain models) arise from the data that a company uses across multiple applications. The underlying business transaction determines the scope of a data set. Such an aggregate consists of nested attributes of separate domain models.

  • Ask for the data that need to be exchanged
  • Make a list of representative business transactions
  • Identify the underlying data of the most frequent cases
  • Create aggregates that allow seamless data exchange
  • Derive an efficient cut for the domain model configuration
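Following these steps, the invoice example mentioned earlier (contact/address combined with items/products) could be sketched as an aggregate like this; all field names are illustrative, not the actual OIH model fields:

```javascript
// Illustrative invoice aggregate nesting attributes of two domain models
// (address and product). Field names are placeholders for illustration.
function buildInvoiceAggregate(contact, items) {
  return {
    recipient: { firstName: contact.firstName, lastName: contact.lastName },
    lines: items.map((item) => ({
      product: item.name,
      price: item.price,
      quantity: item.quantity
    })),
    // Total derived from the nested product attributes.
    total: items.reduce((sum, item) => sum + item.price * item.quantity, 0)
  };
}

const invoice = buildInvoiceAggregate(
  { firstName: "Jane", lastName: "Doe" },
  [{ name: "Widget", price: 10, quantity: 3 }]
);
console.log(invoice.total); // 30
```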

Unify the documentation for the current prototype

As mentioned in #65, an 'ISV-N integration flow' should be convenient to create, so that third parties are more willing to invest their time in building further integrations. It is therefore important for the acceptance of the Open Integration Hub to find a common API documentation format. Renat @zubairov mentioned the open data initiative in this context as an approach for proprietary APIs.

These tasks were derived from the MS2 planning workshop protocol. Our goal is to accomplish POC1 of that roadmap. The following criteria were identified during the workshop in order to do so:

  • Create a mapping table #47 / #62
  • Create JSONata expression (mapping) #46 / #61
  • Document which Elastic components were used and why
  • Create an interface documentation #45 / #60
  • Compare the documentation in order to unify them

hub and spoke and the OIH Data Hub

Status

proposed

Context

Is the OIH Data Hub necessary to enable the hub-and-spoke communication style within the OIH?
https://en.wikipedia.org/wiki/Hub_and_spokes_architecture

Alternatives

Alternative 1

The OIH Data Hub is not necessary to enable the hub-and-spoke communication style within the OIH.
The OIH Data Hub must be an optional component. The basic OIH features, like the hub-and-spoke communication style, must be usable without the Data Hub.

Decision

ND

Consequences

Individual transformations involving a central hub might be more complex.

Alternative 2

The OIH Data Hub is necessary to enable the hub-and-spoke communication style within the OIH.

Decision

ND

Consequences

Without the OIH Data Hub it is necessary to create point-to-point connections between applications, resulting in many such connections when more than two applications share the same master data.

Describe the standard data fields semantically

All semantic field mappings from proprietary data models to the standard data model (model configuration) should be possible to perform by hand. A third party needs to understand the usage of every single standard data field so that mismatches become less likely.

Vulnerable implementations would harm a seamless data flow; a proper implementation needs to rely on identical mappings across all other integrations. Therefore, detailed descriptions of the standard data fields are mandatory.

  • Quality criteria for the description
  • Generic procedure model - later extensions
  • Effect on aggregates - usage context implicit
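Such a description could be attached directly to each standard data field; a small sketch, with an invented field and wording, showing the kind of usage context a third party would need for a correct mapping:

```javascript
// Illustrative semantic description of a single standard data field.
// The field name and wording are invented; the point is that the
// description states what the field is and, crucially, what it is not.
const fieldDescription = {
  field: "person.jobTitle",
  type: "string",
  description:
    "Job title as printed on a business card, e.g. 'Head of Sales'. " +
    "Not the department name and not the role within the OIH tenant.",
  example: "Head of Sales"
};

console.log(fieldDescription.field);
```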

Document a unified API for addresses and products

The API controls the way address and product data can be retrieved and propagated. Its design heavily affects the data flow along the standard data model (configuration). The actual data to be transmitted are determined by the business transaction.

The transmission itself occurs in the shape of nested fields, as a context-bound collection of data fields, also known as an 'aggregate', which can hold the content of a single receipt, for instance.

Consider that operations on data should only be possible with a proper permission level, according to company guidelines.

  • Align endpoints with the OIH API design pattern - also #65
  • Align aggregates towards the effective data transmissions
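The permission requirement can be sketched as a guard in front of every data operation; the roles, permission names, and API shape below are assumptions for illustration, not part of the OIH specification:

```javascript
// Sketch of a permission guard applied before data operations, following
// company guidelines. Roles and permission names are assumptions.
const guidelines = {
  sales: ["addresses:read", "products:read"],
  admin: ["addresses:read", "addresses:write", "products:read", "products:write"]
};

// Throws unless the role's guideline entry grants the operation.
function authorize(role, operation) {
  if (!(guidelines[role] || []).includes(operation)) {
    throw new Error(`role '${role}' may not perform '${operation}'`);
  }
}

authorize("admin", "addresses:write"); // passes silently
// authorize("sales", "addresses:write"); // would throw
```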
