
dataspecer's Issues

Prefixes in jsonld context

Prefixes are specified as part of the schema definition with the `psm:prefixName` and `psm:prefixIRI` predicates. All prefixes are optional, meaning that a prefix is used in the output JSON-LD context only if it was applied to some IRI in that context.

When a context is imported, all of its prefixes are imported as well, meaning that prefixes already declared in the imported context are not re-declared.
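The "declare only when used" rule can be sketched as follows; all names here are illustrative, not the actual dataspecer API:

```typescript
type PrefixMap = { [name: string]: string };

// Hypothetical sketch: given the declared prefixes and the IRIs that appear
// in the context, emit only those prefixes that actually shorten some IRI.
function buildContext(prefixes: PrefixMap, iris: string[]): PrefixMap {
  const context: PrefixMap = {};
  for (const iri of iris) {
    for (const [name, prefixIri] of Object.entries(prefixes)) {
      if (iri.startsWith(prefixIri)) {
        context[name] = prefixIri; // declared on first use only
      }
    }
  }
  return context;
}
```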

PSM operations

The following PSM operations are required for now.

Schema

  • Create schema
  • Modify schema label and description
  • Set schema root (only at creation)

Class

  • Create (interpreted) class
  • Modify class label and description
  • Modify class technical label
  • Change order of class parts

Attribute

  • Create (interpreted) attribute
  • Modify attribute label and description
  • Modify attribute technical label
  • Delete attribute (+delete itself from class parts)

Association

  • Create (interpreted) association
  • Modify association label and description
  • Modify association technical label
  • Delete association (+delete itself from class parts)

Read resources of given type from CoreModelReader

As of now, it is possible to request all resources, load them, and filter out those with the desired type. This is not an optimal solution when searching for the schema root. A solution is to allow the user of the interface to request only resources of a particular RDF type.
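The proposed extension could take roughly this shape; the class and method names below are assumptions for illustration, not the real interface:

```typescript
interface Resource {
  iri: string;
  types: string[];
}

// A minimal in-memory reader sketch with the proposed type-filtered method,
// so the caller does not need to load every resource to find the schema root.
class MemoryReader {
  constructor(private resources: Resource[]) {}

  async listResources(): Promise<string[]> {
    return this.resources.map((resource) => resource.iri);
  }

  // Proposed addition: return only resources of the given RDF type.
  async listResourcesOfType(rdfType: string): Promise<string[]> {
    return this.resources
      .filter((resource) => resource.types.includes(rdfType))
      .map((resource) => resource.iri);
  }
}
```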

PIM operations

The following PIM operations are required for now.

Schema

  • Create schema
  • Modify schema label and description

Class

  • Create class
  • Modify class label and description
  • Delete class

Attribute

  • Create attribute
  • Modify attribute label and description
  • Modify attribute datatype
  • Delete attribute

Association

  • Create association
  • Modify association label and description
  • Delete association

Add specialization of operation results

As of now, each operation produces a general object that captures the change in the model. It would be easier to add operation-specific results; for example, DataPsmCreateClass could return an extra field with the name of the created class.
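One possible shape of such a specialization, with all names being illustrative assumptions rather than the actual dataspecer types:

```typescript
// Generic result as it exists today: lists of changed and deleted resources.
class CoreOperationResult {
  changed: string[] = [];
  deleted: string[] = [];
}

// Operation-specific specialization carrying an identifier of the created
// class, so callers do not have to dig it out of the generic change lists.
class DataPsmCreateClassResult extends CoreOperationResult {
  constructor(public createdClassIri: string) {
    super();
  }
}
```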

Harmonize naming of core.api

As of now, there are CoreModelReader and CoreModelWriter. Since there is no longer a single model, it would be better to rename them to CoreResourceReader and CoreResourceWriter.

Add new iri property to operation result

As of now, the operation result contains lists of changed and deleted properties. We need to add a list of new properties. This information is easily available to the operations and will make use of the operation results much easier.

Design and implement output stream

As of now, there is no interface for text output. Bikeshed and ReSpec employ the NodeJS WriteStream and consequently cannot be used in the browser.

A solution is to introduce a new interface, e.g.

interface TextSink {
  write(content: string): Promise<void>;
  flush(): Promise<void>;
}

As a next step, a browser and a NodeJS version should be implemented. The browser version can utilize string concatenation, while the NodeJS version can use the NodeJS `WriteStream`.
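The browser variant could be as simple as the following sketch (the interface is repeated for self-containment; the class name is an assumption):

```typescript
interface TextSink {
  write(content: string): Promise<void>;
  flush(): Promise<void>;
}

// Possible browser-side implementation that accumulates output in memory
// via string concatenation; a NodeJS variant would wrap a WriteStream.
class StringTextSink implements TextSink {
  private buffer = "";

  async write(content: string): Promise<void> {
    this.buffer += content;
  }

  async flush(): Promise<void> {
    // Nothing to do for the in-memory variant.
  }

  getContent(): string {
    return this.buffer;
  }
}
```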

Adapters for RdfSink

Once #7 is solved, we need to implement this interface in all adapters to allow for RDF IO operations.

Rename data-psm to dpsm

data-psm stands for data platform-specific model, so it makes sense to shorten it to dpsm instead of the original name.

Thread-safe operations

Although JS code runs in a single thread, it is possible to call a single function multiple times using async/await and Promises. This can be a problem for the CoreModelWriter, as we do not support running multiple operations simultaneously. This must be at least documented so the user is aware of it.

In addition, we may try to introduce some mechanism that would make sure that operations are evaluated one at a time. We just need to keep in mind that the Store implementing the `CoreModelWriter` may be on a server, so the functionality may need to be close to the store.
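One client-side possibility is to chain every operation onto a single pending promise, so at most one runs at a time. This is only a sketch; if the store lives on a server, the serialization would have to happen there instead:

```typescript
// Serializes async operations: each enqueued operation starts only after
// the previous one has settled, regardless of how callers interleave calls.
class OperationQueue {
  private last: Promise<unknown> = Promise.resolve();

  enqueue<T>(operation: () => Promise<T>): Promise<T> {
    const next = this.last.then(operation, operation);
    // Keep the chain alive even if an operation fails.
    this.last = next.catch(() => undefined);
    return next;
  }
}
```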

Add CIM interface

As of now, we use CIM only as a source of data to pre-fill PIM operations. But there are some common properties, like color or "group", that we may want to use across multiple applications. A solution is to introduce a shared CIM interface.

Fix bugs in several files

The following files are breaking the build because of bugs:

  • lib/platform-independent-model/adapter/rdf/pim-resource-adapter.ts
  • lib/io/rdf/sparql/sparql-rdf-source.ts
  • lib/io/rdf/jsonld/jsonld-adapter.ts
  • lib/data-psm/adapter/rdf/data-psm-resource-adapter.ts

Null & Undefined

We need to agree on a common approach to using undefined, null, and exceptions in the API. The objective of this issue is to collect ideas and decide how we will use them. Once decided, another issue should be created to implement those decisions and describe them in the readme file / wiki.

Edit: In addition, we may need to think about handling "expected" errors such as invalid data, invalid operations, or an unavailable service. As TypeScript does not support exception type declarations, the alternative is to return an error object as an alternative to the regular function result. In order to make it easy to work with other code, there can be a wrapper with a getOrThrow method.

CIM-specific attributes

Unlike generic entities, entities from slovník.gov.cz have an extra attribute, a glossary. In general, different CIMs may assign arbitrary attributes (predefined per CIM) to entities. These need to be stored within the PIM model, because the PIM must be data-independent of the CIM.

We therefore need to think through how the adapters will handle storing a model that, as a result, contains data specific to a particular CIM.

Stores from URL

The goal is to implement a read-only store for different sources described by URL. For now, we only consider:

  • Static file located on the Internet in RDF format (Turtle for example)

CoreResourceReader query

We need to be able to query CoreResourceReader for resources with a particular interpretation or for the ends of an association.
Possibilities are:

  • Return all resources with arbitrary predicate and object of given value
  • Let user specify RDF predicate and object
  • Let user query by example using model objects
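The first option could be sketched over a simple triple representation; the types and names below are illustrative assumptions:

```typescript
type Quad = { subject: string; predicate: string; object: string };

// Return all subjects that have the given predicate with the given object
// value, e.g. all resources whose interpretation points at a specific CIM IRI.
function findByPredicateObject(
  quads: Quad[],
  predicate: string,
  object: string,
): string[] {
  return quads
    .filter((quad) => quad.predicate === predicate && quad.object === object)
    .map((quad) => quad.subject);
}
```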

Add support for CLI

We need to be able to bundle and execute the transformers using a command-line interface.

Store IO adapters

Add an interface and implementation that would allow us to load / save stores to a given URL.
The URL may represent a simple REST API (GET, POST) or potentially a storage of a certain type (LDP, Solid).

Add properties cache for SparqlSource

As stated in #12, there is a need to optimize the reading of remote resources. While the proposed solution addresses situations where the user requests multiple properties, it may not always be optimal from the code perspective.

A solution is to introduce a cache that reads the whole object at once and serves future requests from this cache. We may still implement this so as not to force the user to request all properties at once.

As a result, the user can still request properties one at a time without producing another SPARQL query for each call.

Create wiki

As of now, some architectural decisions are in the readme file. Those must be moved to the wiki together with all relevant information and decisions related to this repository.

Add multiple properties fetch to RdfSource

As of now, RdfSource allows the user to read one property at a time. When reading multiple properties for a single object, such an approach is sub-optimal.

In addition, we can also add new methods that allow the user to request multiple properties of a single object, e.g.

properties(iri: string, predicates: string[]): Promise<{[predicate: string]: RdfObject[]}>;

and similar for reverse property.

The default implementation may utilize the property and reverseProperty operations. In the final step an optimization can be implemented for SparqlSource.
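A default implementation along those lines might look like the following sketch, built on the single-property call; the `RdfObject` shape here is a simplifying assumption:

```typescript
type RdfObject = { value: string };

interface RdfSource {
  property(iri: string, predicate: string): Promise<RdfObject[]>;
}

// Default multi-property fetch built on the existing single-property
// operation; a SparqlSource could later override this with one query.
async function properties(
  source: RdfSource,
  iri: string,
  predicates: string[],
): Promise<{ [predicate: string]: RdfObject[] }> {
  const result: { [predicate: string]: RdfObject[] } = {};
  for (const predicate of predicates) {
    result[predicate] = await source.property(iri, predicate);
  }
  return result;
}
```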

Improve operation validation

Each operation should validate its prerequisites in the scope of the schema. In the current state, not everything that should be validated is; the purpose of this issue is to validate everything that can be. As a result, if the schema is valid before the operation, it must be valid after the operation.

As a part of this issue, we may also introduce a better way of handling invalid state, i.e. using exceptions of a particular type rather than return statements that clutter the code with if-return blocks.

SPARQL loader

The goal is to implement a Webpack loader for *.sparql files that minimizes them and transforms each into a parametrized function replacing the template variables inside it.

Create PimReadOnlyMemoryStore

It is useful to have the ability to read PIM/PSM using the CoreModelReader interface. However, this interface is now implemented only by PimMemoryStore, which does not allow for easy modification. A solution is to introduce a new store that would implement CoreModelReader using a user-provided list of resources as a database.

Replace || with ?? where applicable

Coming in TypeScript 3.7, ?? is a binary operator that returns the second value iff the first one is null or undefined. This is better than ||, which returns the second argument iff the first one is falsy; this includes false, 0, and "".
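The difference in one example:

```typescript
// || falls back on any falsy value, while ?? falls back only on
// null or undefined, so 0 survives the ?? operator.
const count: number = 0;
const withOr = count || 10; // 10, because 0 is falsy
const withNullish = count ?? 10; // 0, because 0 is neither null nor undefined
```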

We also need to make sure to use

"compilerOptions": {
    "strict": true,
    "target": "ES2020"
  }

as the configuration, so that ?? is emitted as-is, since it is now part of ES2020.

Bundling

We need to be able to build bundles of the code to allow importing it into other projects.

Branching model

Merge develop into main and use the main branch instead. As new features are developed in feature branches, main should not break so often.

Performance wrap

We may need to measure performance in order to be able to optimize. This is not easy due to the heavy use of async calls. A solution might be to design wrappers that could be used with the classes from this repository and would allow measuring the execution time of particular operations.
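Such a wrapper could have roughly this shape; the function and parameter names are illustrative:

```typescript
// Wraps an async function so that every call reports its execution time
// via a user-provided callback, without changing the function's behavior.
function measured<T extends unknown[], R>(
  name: string,
  fn: (...args: T) => Promise<R>,
  report: (name: string, milliseconds: number) => void,
): (...args: T) => Promise<R> {
  return async (...args: T) => {
    const start = Date.now();
    try {
      return await fn(...args);
    } finally {
      report(name, Date.now() - start);
    }
  };
}
```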

Move wiki to code

While using the GitHub wiki provides us with some advantages like a menu, etc., it is not suitable for rapidly changing software, as it is not easy to connect a commit with the corresponding wiki version. A solution is to move the wiki into the code.

Monorepository

As of now, model-driven-data hosts a single project. It may be beneficial to create a monorepository.
This would allow us to have packages for different generators (xsd, json-schema, ...); in addition, it would support development of the UI directly within this repository.

Migrate object-model

As of now, the object model is not working; we need to update it to work with the current data-psm and pim classes in develop.

Update linter rules

We need more restrictive Linter rules to increase code style consistency.
