atomic-data-docs's Issues

Timestamps in Commits - how strict should verification be?

I just had a commit rejected on my server because the client made a commit that had a CreatedAt timestamp greater than the Now time of the server.

Now I'm not so sure this check should occur. Of course, I can allow some acceptable difference, which would probably resolve this issue, but it would still cause problems down the line when computer clocks are set incorrectly - either on the server or on the client.
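For example, the server could allow a configurable amount of clock skew before rejecting. A minimal sketch in Rust, assuming timestamps are Unix milliseconds; the MAX_CLOCK_SKEW_MS constant is a made-up tolerance, not part of any spec:

// Reject only if CreatedAt is further in the future than the allowed skew.
const MAX_CLOCK_SKEW_MS: i64 = 10_000; // arbitrary tolerance, an assumption

fn validate_created_at(created_at: i64, now: i64) -> Result<(), String> {
    if created_at > now + MAX_CLOCK_SKEW_MS {
        return Err(format!(
            "Commit CreatedAt is {} ms ahead of server time, exceeding the allowed skew",
            created_at - now
        ));
    }
    Ok(())
}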

Atomic Endpoints - Standardize query parameters (for example in Collections)

Some resources function like an endpoint: they accept (optional) query parameters which modify the response. A Collection, for example, might use a page parameter and a sortedBy parameter. These available query parameters and their datatypes might be standardized.

How should the client know if a certain resource has query parameters available? How do we communicate this?

One approach is to make all query parameters available as nested resources in an array, under some new property. Each parameter needs a shortname, a datatype (perhaps defaulting to string), and a boolean indicating whether it's optional.
We could also have two property URLs: requiredParams and optionalParams.
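To make that concrete, a hypothetical sketch of such a parameter description as a Rust struct (all names are illustrative, not part of the spec):

struct QueryParameter {
    shortname: String, // e.g. "page" or "sort-by"
    datatype: String,  // URL of the Datatype; could default to string
    optional: bool,    // whether the parameter may be omitted
}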

Authentication, key management and secrets

One of the core concepts of Atomic Data is that users sign their changes using their identity. This allows for fully traceable, verifiable information, which is one of the core features of Atomic Data. However, this also poses a challenge: how do you manage the private key? How do you deal with users forgetting their key, and how do you make sure the key is stored safely in apps? In this issue, we'll discuss various topics related to key management of Agents.

Storing the key safely in client apps

Client apps need the key to sign Commits. We don't want to bother users by making them enter their key on every single commit, but we do want to protect them.
In the current implementation of atomic-data-browser, the private key is simply stored in the front-end app, which is not great for security (see issue).

Forgetting a secret

If you don't know your secret (containing the private key), you can't log in anywhere or sign anything. Since an Agent is stored on some Server, the server's admin can always change the Agent's public key. This process should be facilitated through software, maybe even through a standardized endpoint. It might be a good idea to, as a backup, add your e-mail to your agent.

Easy-to-remember secret

At this point in time, secrets are very long strings that are practically impossible to remember. This means users have to use some form of key-manager (e.g. bitwarden / lastpass / browser password store), which is a bit inaccessible for some. We could use some form of seed that is easier to remember, but this still needs a nice bit of randomness / entropy.

The BIP39 spec uses a set of words such as welcome bar control expand desk wonder naive stove sight human furnace arrow ill exclude govern.

Session scoped keys

If a user enters their secret and the app needs to store that secret to sign items, we will always have a risk of leaking that secret somewhere. If we scope keys to sessions, we can reduce this risk. We could let users enter their secret to prove their identity to the server, and sign some nonce from the server. This signed nonce could be used as the seed for a new session scoped private key, which signs the actual commits. In theory, Commits signed by these derived keys would still be traceable to the Agent.
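A minimal sketch of that derivation, assuming the ed25519-dalek (v2) and sha2 crates; the flow itself is the proposal above, not an implemented feature:

use ed25519_dalek::{Signer, SigningKey};
use sha2::{Digest, Sha256};

fn derive_session_key(agent_key: &SigningKey, server_nonce: &[u8]) -> SigningKey {
    // Sign the server-provided nonce with the long-lived Agent key...
    let signature = agent_key.sign(server_nonce);
    // ...and hash the signature down to a 32-byte seed for a short-lived keypair.
    let seed: [u8; 32] = Sha256::digest(signature.to_bytes()).into();
    SigningKey::from_bytes(&seed)
}

The session key never requires storing the Agent's secret in the client after login, which is the risk reduction described above.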

Service description - discovering endpoints / other functionality

Currently, there exists only one implementation of an Atomic Data Server. However, in the future, there might be several, and each one might have different features enabled.

Most of these features are described as endpoints, such as a path endpoint or an all-versions endpoint. Ideally, the client would check this collection and, depending on the available endpoints, render different actions in a context menu.

But how would the client know where to find these endpoints?

Some ideas:

  • Using HTTP header(s) from the server (lots of repeated information)
  • Adding links to resources (highly discoverable, but also redundant and expensive)
  • Using paths (see below)

Using paths

Atomic Paths can make it easy to traverse a graph. We could have a common starting point (the drive? the root URL of the server? something else?) that describes these.

Commits, Cloning and Collaboration

I love git: it enables cloning a repo, making changes, and giving these changes back. That's an incredibly powerful feature to have. Atomic Data already has Commits, and with a Cloning feature there's also Forking. Does it also need a way to merge changes and make suggestions? And should this use Cloning, or is Forking too different?

Let's assume a user wants to improve some piece of data on a webpage - let's call him the changer. Say the local grocery store has an issue in its 'open times' during covid, and a customer wants to edit this. From the perspective of the changer, they could click the data they want to change, make a change, and click 'share suggestion'. What might happen under the hood to enable this?

Clone, change URL, save Commits

  • Changer clones original resource from source, changes the URL to something that he controls.
  • This creates an initial Commit in the User's Store, which contains a reference to the HTTP URL and Commit hash on which that Commit is built.
  • The User makes changes to the resource as usual, adding Commits.
  • If the User wants to make these changes to the original resource, the Commits will no longer make sense, because the subject has changed.

Store Commits, create merge request

  • User creates Commits to the resource and stores them in their own Server.
  • User creates a Merge Request at the target server. This merge request is a resource containing the sequence of Commits.
  • The Source owner opens some inbox

Tables: constrained and highly customizable collections

Tools like Airtable and Notion have re-defined how we think about tables. With these tools, users can:

  • Change the columns and set custom properties with constrained datatypes
  • Configure sorting / filter / view types, and share these views
  • Easily create / instantiate / edit data from within this view

In Atomic Data, we already have Collections that can be shown in a Table component (in Atomic Data Browser). However, Tables could offer some extra features:

  • Constrain Classes. All children need to be instances of some particular Class. This prevents setting the Class as a child, though. We could also only constrain members instead of children.
  • Support a regular-old JSON (without long HTTP urls) REST API that accepts POSTing to /new or something. Because we have Class constraints, we don't need to use long URLs in properties. Also, we could scope tokens / authorization to this collection, and remove the need for Commits.
  • Custom Views. A view contains information about filter / sort / displayed properties. This is what airtable and notion do.
  • Hierarchy guarantees. A Table is always the parent of its children, contrary to Collections, which often just query over existing items. Maybe this means it always filters data in some particular way.

More thoughts:

  • The name Collections seems to insinuate some form of ownership, so maybe these should be renamed to Queries (thanks @Polleps)
  • If you create a table and edit its columns, are you in fact editing a Class? I'd assume so. And is this Class a child of the Table? This would not work if you constrain the children of a table.

So, how to achieve this?

Extend the existing Collection class

The Collection Class already allows for filtering by property / value, sorting on any property.
Maybe adding fields for view (e.g. grid, or some custom thing) makes sense?

But how should we indicate that it filters by parent? We could introduce a parent filter, which is an optional resource that performs a filter in the Query.

Adding a new Table class

The Table:

  • class: the required Class, containing the required and recommended properties
  • view: the default view (e.g. table / grid / list). Maybe we also have a list of availableViews
  • members: the list of child resources. We could only show items here that are both children and fit the shape of the table.
  • new: an endpoint that allows posting a new resource to this table.

The View:

  • properties: the order of properties shown in the header (question: how does this relate to the properties in the class? Must they be the same? What if there are more properties in the Table - does that mean these are required?)
  • sort, filter, pagination... all the Collection stuff
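To make the shape concrete, a hypothetical Rust sketch of these two classes (all field names are illustrative, not part of any spec):

struct Table {
    class: String,        // URL of the required Class
    view: String,         // URL of the default View
    members: Vec<String>, // children that also fit the Class
    new_endpoint: String, // endpoint for POSTing new rows
}

struct View {
    properties: Vec<String>,        // ordered Property URLs shown in the header
    sort_by: Option<String>,        // Property URL to sort on
    filters: Vec<(String, String)>, // (Property URL, value) pairs
    page_size: usize,               // pagination, like Collections
}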

Authorization / rights and appending

Let's say I want to add a new or an existing item to a table, such as a Comment to some ChatBox. How do I do this?

We probably don't just want to determine who can read and edit tables, but also who can add items to it. I think we'll need a new right. #96

We could set the parent of the Comment to the ChatBox's Table.
No extra properties required, because parent is always required. Seems clean!

But what if you don't want the one controlling the Table to control all its items? For example, if you start a Thread you might not want to give the creator of the thread edit rights to all resources.
Should we allow users

Consider a different name

Atomic Data is a working title, and it might change. I like the way it sounds, how it refers to the smallest possible amount of data (indivisible, hence atomic), and how you can use the prefix Atomic to refer to specific elements (e.g. Atomic Mutations, Atomic Paths), but:

  • Atomic data can also refer to shared-memory communication between threads (e.g. rust/atomic)
  • There are quite a few 'atomic *' projects on the web, so it's not a very unique name. There's also a business called Atomic Data. This means that many 'atomic' names in package managers (npm, cargo, etc.) are already taken.

So let's take a moment to consider some alternatives.

  • Tyli: Typed, Linked data. It's short, it's catchy, and it stands for the essence of the project. However, the spelling is unconventional and non-trivial: one might expect 'tily', 'tai-lee', or 'ty-lee'.

SSI and privacy friendly sub-agents

The Agent model is designed to be a publicly accessible and verifiable (decentralized) identity that can be re-used by various apps. However, re-using an identity costs privacy. Of course, users could create new identities to deal with this, but ideally, these sub-identities should be able to (if the user wants) prove that they were made by some specific user.

How to achieve this?

Sign the key with your private key

Basically, have an anonymous signature that proves the parent Agent has signed it.
However, this does mean a significant attack vector: simply try the public keys of Agents that you suspect.

There must be a better way.

Save the sub-agent's private key somewhere safe, and prove the relation by simply signing a commit that says "yeah, this is my parent".

Way more elegant, but this allows the sub-agent to lie about who the parent is.

Still not ideal.

Have both Agents sign a resource that confirms they are the same

This is cryptographically sound and actually proves they are the same.

Increasing traceability: Verifiable Credentials / Verifiable Claims

There are many use cases for verifiable credentials.

One of the core features of Atomic Data is the Commit model, which makes data highly traceable. However, making sure that one specific value is 'accredited' by a specific individual is kind of a bothersome process: get a resource, find the commit which updated some specific field, get the agent (which includes the public key), and check if the public key matches the signature.

A single Verifiable Credential contains one or more Claims, one or more Proofs, and some metadata.

Generally, I think there are two ways of thinking about credentials:

Resource level

The first one is to think of credentials as just Resources with their own properties. This approach is the most familiar - just take the W3C VC model, create some Atomic properties, and we're good to go.

But this approach introduces a few difficult problems:

  • Can we re-use a Credential resource as a normal property-value combination? For example, if my birthdate is actually verified by my municipality, can I still use this as a birthdate property in my profile? Should we convert all Credentials to regular property-values?

Atom level - Just use Commits

Atomic Commits are, in essence, all signed credentials. There is a date, an author, a signature, a subject, and a set of properties. This means all Atomic Data created using Commits is entirely verifiable!
So we don't have to invent anything new, right?
Well, with Commits we've tackled an important part of the problem already, but the next step is discoverability.

How would you know that a specific property is actually a proven, verified one, instead of something that I just made up?

We'll need a way of finding the Commit. We could use an Endpoint for that.

Path Endpoint for claim validation

One way of finding the credentials (the Commits) for a certain Atom is a /verifiable-check?path="thing property" endpoint, which takes an Atomic Path and returns a collection of Credentials.
For example, I might try to find a bachelor's degree signed by a university by visiting /verifiable-check?path="profile bachelors-degree".
Maybe we could also filter by value.
It would return the Commit(s) that match that subject / property / value combination.
The client can then verify the signature, and check the set value to verify / validate the commit.

Rename Agent to User

First I felt like Agent was a bit more fitting, as it also entails non-human actors. However, Agent feels unconventional and unfamiliar compared to 'user', even though it is more technically correct. Changing this name has quite an impact on URLs, documentation, and various implementations. Best to do it soon.

Consistency between shortnames and URLs, or kebab-case vs camelCase

Currently, Shortnames only allow lowercase characters and dashes. However, most properties currently use camelCase.

For example, the property https://atomicdata.dev/properties/isA has a path ending in isA but a shortname of isa, which I considered replacing with is-a.

I think a better way to go is to use kebab-case everywhere.

Dynamic / generated / computed values in Properties?

Many values are generated not by users, but by some system. Think about things like counts, createdAt dates, and paginated lists.

Whether something is generated by a machine at runtime can be relevant for users of the data - for example, when rendering an edit form. Should I be able to edit the 'editedBy' or 'editedAt' field? Or manually update the amount of comments something has? I'd rather hide these fields from the user doing the editing.

So where does this information come from? How does the client know that some fields can be edited, and some fields should not be edited?

isDynamic property for Properties

One solution is to add an optional isDynamic property to Properties: a boolean that defaults to false. If it's true, the value should be considered dynamically generated or computed.

Array manipulation methods (e.g. Push) in the commits

Commits have a set property, which can be used for adding items to arrays.

However, this leads to sometimes unnecessarily large commits when an item is added to an array. For example, the use case of adding a paragraph to a document.

We could introduce one or more methods for doing this. However, keep in mind that extending Commits has serious implications for everything involved. They can be very complex to manage in stateful systems, such as forms in a front-end library.

Insert property

The insert property takes an array of anonymous resources, each of which is an insert, which contains:

  • property url
  • location
  • value to insert

append / push

  • An object named push, for which every key is a Property that is to be appended / pushed to
  • Each value is a ResourceArray, i.e. an array of Subjects or Nested Resources
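As a sketch, the Commit struct could gain a push field next to set; the field names below follow the description above and are hypothetical:

use std::collections::HashMap;

pub struct Commit {
    pub subject: String,
    // Existing behavior: replaces the full value of each Property.
    pub set: HashMap<String, String>,
    // Proposed: Property URL -> Subjects / Nested Resources to append.
    pub push: HashMap<String, Vec<String>>,
    pub signature: String,
}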

Use JCS and consider switching to JSON Web Signatures (JWS) for Commit Signatures

Although the current signature spec + implementations (server, client) works, it is very much custom and not well described. I'm still kind of new to all this crypto stuff, so I just picked a proper algorithm and defined a canonical JSON serialization to make both the client and server (which use different libraries) reach consensus on signatures. I didn't know I could just use all the existing JWT / JWE / JWS tooling...

Anyway, this needs to be reflected in the spec, and in both implementations. Gonna take quite a bit of time, but it's the right decision.

  • Use JCS in docs
  • Use JWS
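To illustrate what canonicalization buys us: if client and server serialize a Commit with deterministically ordered keys, the signature is computed over identical bytes. A simplified sketch using serde_json (real JCS / RFC 8785 additionally pins down number and string formatting):

use serde_json::{Map, Value};

// Recursively rebuild a JSON value with object keys in sorted order.
fn canonicalize(value: &Value) -> Value {
    match value {
        Value::Object(map) => {
            let mut keys: Vec<&String> = map.keys().collect();
            keys.sort();
            let mut sorted = Map::new();
            for key in keys {
                sorted.insert(key.clone(), canonicalize(&map[key]));
            }
            Value::Object(sorted)
        }
        Value::Array(items) => Value::Array(items.iter().map(canonicalize).collect()),
        other => other.clone(),
    }
}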

Tokens, invites and Share URLs with edit rights - no sign up / in required

I absolutely love SaaS services that provide one-click collaboration. HackMD, for example, offers a single share link that grants edit rights.


I think we could have this same feature with Atomic Data. Standardizing how this works could help to make this something that all apps could get for free - without burdening devs with implementation details.

Approaches

Query param containing scoped agent private key

This means the front-end should check the URL for a guestKey, and the server should make sure it sets the correct grants for the selected resource(s).

  • Everybody with that specific URL will post things as the same agent, so it's not possible to discern multiple users.
  • Server knows private key, which means it can act as if it were the guest
  • No, bad idea.

Query param containing scoped token

Front-end reads query param that contains token. Front-end generates keypair, sends public key to back-end token service, which adds the newly created agent (pubkey) to the allowed posters to some scope.

In-between Invite resource

Invites are a new Class that has some fields:

  • Target resource: where the invite points to

  • Usage limit: whether it is a multi-use or single-use invite token

  • Target Agents: an optional set of Agents that are allowed to use the Invite

Pros and cons:

  • Allows for short, query-param-free, easy-to-read URLs

  • Less clear what the identifier of the actual resource being edited is

When the client has fetched an Invite resource, it should know that it can post an Agent Subject (or public key?) to the Server, after which it gains the rights to edit the resource. Viewing is another question.

Multi-class resource with 'Invite' class

Atomic Data allows for multi-class resources. This means that we could have a resource with both a regular class (e.g. document) and an Invite class.

Invite to create new resources

The #23 Invites model allows for granting some Agent read or write rights to an existing resource. But what if you want to invite people to create something new? One use case for this is surveys: #32

If we want to create a new thing, we need to know:

  • What the target parent is to be
  • What the resource class is
  • Whether some fields are pre-filled (default values)
  • Whether the questions shown in the form have custom labels / fields?

But we could also say: just invite people to a parent that contains this information (such as the class). A possible approach to this is constrained collections: #37

Hierarchy, folders, authorization and nested (parent-child) relationships

This is going to be a mess of various thoughts regarding hierarchies - sorry for the lack of structure

On desktops, we generally use folders and sub-folders for various goals:

  1. Identification: the path of a file is often also its ID - it tells the machine where the file can be found
  2. Authorization: setting rights - whether a user or program has read / write access
  3. Categorization: grouping related stuff together.
  4. Disk space management: calculating size / identifying data usage culprits.
  5. Navigation & Intuition: useful for humans for finding some file about some subject.

If Atomic Data is to be just as useful as a Unix filesystem or a Google Drive, we need to find solutions for the earlier mentioned five goals, too.

  1. Identification: The ID is obviously the Subject of the resource. This is the URL. There are no real restrictions on Atomic URLs - as long as they resolve using HTTP(S) or IPFS. If a resource moves and changes its URL, the owner must redirect to the new location.
  2. Authorization: Doing authorization without some hierarchical model seems almost impossible, although a multi-parent model could work fine (if it's additive).
  3. Categorization: We could use a tag-like system for this.
  4. Disk space management: Counting things gets harder when items can have multiple parents - see the tag-size discussion below.
  5. Navigation & Intuition: useful for humans for finding some file about some subject.

So let's check out some approaches to hierarchy

Approaches to hierarchy

Classic Unix-style file/folder model

This is the one we're most familiar with. Each file has a path, and only one parent.
The five goals in the intro describe what we use these paths for.
That's a lot of responsibilities for a single string, but it has some merits:

  • Simple mental model. It's easy to reason about where a file is and who has rights.

But it also causes issues:

  • Changing IDs. If we move a file, we can no longer find it by its ID. That doesn't fare well on the web, we want Cool URIs. Google Drive solves this by using UUIDs in URLs, instead of paths. This leads to ugly, nontransparent URLs, though.
  • Having a single parent limits how we can organize a file. For example, you might want a picture inside your personal vacation 2019 folder, but also on your public timeline. You could copy it, but that wastes space and makes it harder to manage items from one place.

Nested tagging

In this approach, resources can be linked to Tags. A Tag is like a parent folder, but it's an n-to-m relationship: every resource can have multiple Tags - contrary to how folders work in most systems. Google Drive does use a tag-like model, though: a folder can be placed in multiple places.

Tags can be nested, like folders. Should circular tags be possible? If they are, then implementations need to be very much aware of this, to prevent getting stuck in some loop.

These tags could easily be used for authorization. If you want to find out whether an agent can read something, check the tags of the resource. Then, check the agents-read-access property (an array of Agents), and check if the requester is present in that array. If not, check the groups-read-access property (an array of Groups), and check if the requester is present in any of those groups.

We could use an additive rights model for authorization. Check all the tags of a resource (and all parent tags of these tags) to find out whether the user has the correct rights. If the correct rights are present in any of the tags, you're good to go.
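A minimal sketch of that additive check, with hypothetical stand-in types (assuming tags are acyclic, per the circular-tag caveat above; group checks are omitted for brevity):

struct Tag {
    agents_read_access: Vec<String>, // Agent subjects with read rights
    parent: Option<String>,          // optional parent Tag URL
}

fn can_read(agent: &str, resource_tags: &[String], get_tag: impl Fn(&str) -> Tag) -> bool {
    for tag_url in resource_tags {
        // Walk up the tag chain; since rights are additive, any hit grants access.
        let mut current = Some(tag_url.clone());
        while let Some(url) = current {
            let tag = get_tag(&url);
            if tag.agents_read_access.iter().any(|a| a == agent) {
                return true;
            }
            current = tag.parent;
        }
    }
    false
}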

Folders are often also used for calculating disk space. This might be a bit harder with tags - you don't want to count items with multiple tags in each parent. So how do you decide how big a tag is, in bytes? One solution is to have a 'main' tag - perhaps simply the first tag - and only count an item towards that one.

Consider replacing string value data with Primitives & data URI schemes

Suggestion by Thom (@fletcher91).

The current design of Atomic Data requires that Properties specify a Datatype. That way, a triple can be parsed correctly by resolving the Property. However, this also means that these Properties must be resolved in order to properly parse the data. In practice, this means that Properties might be included by the server or cached by the client.

Alternatively, a datatype could be included in the serialized representation, similar to how this works in RDF.

Thom suggested using data URI schemes:

data:[<media type>][;base64],<data>

In an Atom, they would look like this:

["https://example.com/john","https://example.com/name","data:primitve/string;utf8,John"]

This would also mean that a primitive MIME type would be introduced for some of the fundamental core models, such as integer, string and datetime.
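A quick sketch of how a client might split such a value (purely illustrative; the scheme itself is the open question here):

// Parse "data:primitive/string;utf8,John" into (media type, encoding, payload).
fn parse_data_uri(value: &str) -> Option<(&str, &str, &str)> {
    let rest = value.strip_prefix("data:")?;
    let (meta, data) = rest.split_once(',')?;
    let (media_type, encoding) = meta.split_once(';').unwrap_or((meta, ""));
    Some((media_type, encoding, data))
}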

Multi-class Resources

Currently, Atomic Data sets classes for a Resource using the is-a Property. This supports multiple values, which means that Resources can be instances of multiple classes.
Classes are mainly used for:

  • Rendering forms (because a class could tell something about required and recommended properties)
  • Selecting the best View for a resource (the front-end allows Views to register for some specific Class)
  • Validating resources (similar to forms, by using required / recommended properties)

However, I'm having some doubts on supporting a multi-class model.
In other words, I'm considering using an AtomicURL instead of a ResourceArray for the datatype of the isA property.
Let's use this issue to consider the merits / downsides of having multi-class support.

Using a single URL for an instance that is many things

Imagine wanting to describe Jay-Z.
He's a person, so you might want to show properties like his first name, gender, birthplace, etc.
However, he's also a musical act, with its own discography, labels and genres properties.
We might say Jay-Z is just one Subject, with one URL, which has two classes: Human and MusicAct.
That is the multi-class approach.

A different approach, is to have a URL for Jay-Z the person, and a separate URL for Jay-Z the music act.

Increased complexity in views and forms

Having multiple classes makes things harder. For example, implementing a Form that combines all the required / recommended properties.
Also, when rendering a View for a resource, things become more complicated if there are multiple Views available. Imagine rendering something that's both a Calendar item and a Person - which view do you want?

Nested Resources

All Atomic Data Resources that we've discussed so far have a URL as a subject.
Unfortunately, creating unique and resolvable URLs can be a bother, and sometimes not necessary.
If you've worked with RDF, this is what Blank Nodes are used for.
In Atomic Data, we have something similar: Nested Resources.

Let's use a Nested Resource in the example from the previous section:

["https://example.com/john", "https://example.com/lastName", "McLovin"]
["https://example.com/john https://example.com/employer", "https://example.com/description", "The greatest company!"]

By combining two Subject URLs into a single string, we've created a nested resource.
The Subject of the nested resource is https://example.com/john https://example.com/employer, including the space.

So how should we deal with these in atomic_lib?

Store nested in their parent

In both Db and Store, this would mean a fundamental change to the internal model for storing data. In both, the entire store is a HashMap<String, HashMap<String, String>>.

We could change this to:

HashMap<String, HashMap<String, StringOrHashMap>>, where StringOrHashMap is some enum that is either a String or a HashMap. This would have a huge impact on the codebase, since the most-used method (get_resource_string) changes. I don't think this is the way to go.
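As a sketch, that enum (here called Value; the name is hypothetical) could look like this:

use std::collections::HashMap;

enum Value {
    String(String),
    // A nested resource: Property URL -> value, possibly nested again.
    Nested(HashMap<String, Value>),
}

type Store = HashMap<String, HashMap<String, Value>>;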

Store as new entities, with path as subject

In this approach, the nested resources are stored like all other resources, except that the subject consists of two URLs separated by a space. This has a couple of implications:

  • When deleting the original resource, all its nested ones will not be deleted (but should be), so this requires some extra logic
  • When iterating over all resources, we can no longer assume that every single Key (subject) is a valid URL.

Service descriptions - servers describing their supported features and endpoints

I was working on creating tooling for Versioning for Atomic Data. I want to make this functionality discoverable for users, for example through a menu or a button on the resource. But... Not all atomic data servers will have this feature. And even if they do, how would the front-end app know?

Check hard coded endpoints

In this case, we could say that the commits endpoint should always be available at /commits. However, this would severely limit implementations and users in their choice of endpoint names, and it would make extending endpoint functionality impossible to do consistently. Not good.

Paths and endpoint descriptions

Paths can be really useful to provide a predictable way to find specific things on a server.
For example, finding the first name of the owner of a server might look like profile first-name.
Similarly, the path to the commits service might be found with endpoints commits.
Now, the resource resolving to that path may be available on example.com/commits, but this is not required.
The resolve mechanism traverses resources, instead of using a predictable URL.

So, the front-end requests the root resource (the Drive) and checks the endpoints resource.
When the required endpoint is there, the front-end will show the action.

Enum datatype

We don't have an enum datatype, but users have indicated that they want to constrain inputs to specific sets of values.

I think we should add an optional property to the Property class for this.
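For example, a hypothetical allows_only field on Property, constraining values to a fixed set (names are illustrative, not part of the spec):

struct Property {
    shortname: String,
    datatype: String,
    // Proposed: optional set of allowed values for this Property.
    allows_only: Option<Vec<String>>,
}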

Incoming messages, inbox, notifications, subscribe, publish, pubsub

I want to be able to:

  • Receive notifications when important things happen (such as someone replying to a message of mine)
  • Follow / unfollow certain feeds
  • View incoming items (such as blog posts, videos and pictures of friends) in a timeline, similar to facebook
  • Filter incoming items (depending on data available in my profile)

Use cases

  • As a user, I want to get a notification when some resource is updated
  • I want to see all updated things (of some specific class) from some point in the past
  • I want to keep a database in sync
  • I want realtime updates of data in my app (e.g. collaborative document editor, multiplayer game) (this is already covered by websockets #171 )

Interesting technologies

Possible approaches

Use commits - all Commits are messages

When you want to notify some server of a change, simply send the commit there. Incoming messages or notifications are not anything special, they are simply commits.

This, however, does not solve filtering and following.

Model for Mutations / Deltas / Commits

I believe standardizing changes in data is very important, and the docs show some (perhaps too) ambitious goals for Deltas. Some of these rely on hashing: verifiability (that a Delta is authored by a specific actor) and prevention of conflicts (by using the hash of the previous delta to make sure that the delta was made with consideration of the previous one).

Some approaches

Atomic Deltas

This is how it appears in the docs now. A delta mutates a single Atom. These are atomic - they are as small as possible. This makes them elegant and simple: it's a flat model without any nesting, which makes serialization very simple.

One problem with this approach, is that it will lead to invalid resources, since some Classes could require multiple props.
Another problem is that it might create unnecessary overhead, like checking hashes after every single atom, if multiple are changed.

pub struct DeltaLine {
    pub method: String,
    pub subject: String,
    pub property: String,
    pub value: String,
    // Who issued the changes
    pub actor: String,
    // A signed hash, proving the actor
    pub signature: String,
    // Hash of the previous state (e.g. IPFS CID), makes sure it mutates the correct state 
    pub prevhash: String,
}

Resource Deltas with DeltaLines

Scoped to a single resource. This means that we can do a single hash check after processing the delta. It also seems very doable to implement transactionally. This approach also allows for a simple TPF query to fetch all deltas for some subject - which is nice for re-creating the state at any point in time.

pub struct Delta {
    // The set of changes
    pub lines: Vec<DeltaLine>,
    // Who issued the changes
    pub actor: String,
    // A signed hash, proving the actor
    pub signature: String,
    // Hash of the previous state (e.g. IPFS CID), makes sure it mutates the correct state. Should be "init" if it's the first.
    pub prevhash: String,
    // Subject to be changed
    pub subject: String,
}

/// Individual change to a resource. Unvalidated.
pub struct DeltaLine {
    pub method: String,
    pub property: String,
    pub value: String,
}

Resource Deltas with nested resources

Instead of using DeltaLines (triples with a method, property, value), it's also possible to create one or multiple nested resources. For example, we can create an insert property with a nested Resource as its value. This nested resource contains all the prop-value combinations that need to be inserted. If we need to delete some fields, we can do the same with a remove property, which again has a nested resource inside of it. Later, this can be extended. For example, we could have a changeSubject property, which has an Atomic URL as its value.

use std::collections::HashMap;

pub struct Delta {
    // Who issued the changes
    pub actor: String,
    // A signed hash, proving the actor
    pub signature: String,
    // Properties to be inserted
    pub insert: HashMap<String, String>,
    // Properties to be deleted
    pub delete: Vec<String>,
}

Resource with deltaprops

A delta is similar to the resource that is being modified. It contains all the props and values set during the modification. Existing keys will be overwritten.
Some props will be metadata (signature, subject, hash, actor). To explicitly ignore these props, it is useful to add a deltaProps prop, which is an array of propUrls to be ignored when applying the delta.

This is very much an 'insert first' approach - as it seems harder to apply other kinds of methods (e.g. delete).

Batched Deltas

A set of changes to any resource. This is similar to how linked-delta works, except with the addition of Actor and Signature.

This approach makes it hard or impossible to check whether the requesting system is aware of the previous state.

pub struct Delta {
    // The set of changes
    pub lines: Vec<DeltaLine>,
    // Who issued the changes
    pub actor: String,
    // A signed hash, proving the actor
    pub signature: String,
}

/// Individual change to a resource. Unvalidated.
pub struct DeltaLine {
    pub method: String,
    pub subject: String,
    pub property: String,
    pub value: String,
}

Considerations

  • Deltas might fail - either completely or partially. How should a store deal with an incorrect delta, should it accept the correct part and throw out the rest? Or should all deltas be transactional, so they are either fully applied, or rolled back? I think the latter approach is probably the safest way to go. It makes the API easier to understand - a delta either fails or passes, not some complex in between state.
  • Perhaps I should rename Deltas to Commits. All developers know commits from Git, and these are very similar. They are changes that persist, use hashing to create a log...
  • Being able to make a bunch of changes across resources, which are applied at once, also makes sense in some contexts.
  • A delta will never change, so it makes sense to use a CID / IPFS hash as an identifier. But deltas should also be retrievable using HTTP(S). Perhaps example.com/commits/QwSomeSHA256Hash.
  • All resource-based approaches currently lack a strategy for dealing with nested resources.

Dividing responsibilities between Datatypes and Properties

In the Atomic-Data-Browser app, I use Classes, Properties and Datatypes to render the form and validate its inputs. The Class dictates which required and optional properties are used, the properties determine the labels, and the datatypes determine the type of form input. A normal string is a simple single line input, a markdown field a multiline one (perhaps with some UI elements for markdown syntax), a ResourceArray a more complex field with ordering, and so forth. And then, there is possibly some validation. A Slug, for example, checks if a string consists of only dashes, numbers and letters.

The problem is, I feel like sometimes the property should do validation, and not the datatype. If only datatypes do validation, we might get a lot of datatypes. I'm not sure if that is a bad thing, but it might be.

Approaches

I think we have (at least) two options:

Only datatypes do validation (status quo)

  • Strict distinction between semantics and data
  • Results in lots of datatypes
  • Many properties can use the same datatype
  • Might result in datatypes that are only used by a single property, which means that the user has to create both a property and a datatype.
  • We might need to add a regex field

Only Properties can do dynamic validations

  • Users cannot / should rarely create a new datatype
  • Properties have an optional regex field
  • Harder to re-use validation by other properties, which means we might get a lot of similar, yet different, validations.

Both can do dynamic validations

  • Every time a user wants to model something, they have to make this decision. Making decisions is not fun.
  • Enables re-use when it makes sense (user puts validation on datatype)

Examples

Let's consider some (not yet existing) properties that might require validation. Think about the form field, the validation, and whether the datatype could be re-usable in various contexts.

Phone number

An international phone number, such as +31123456789. Seems only relevant in a 'phonenumber' property.

Public key

A 32-byte, base64-serialized ed25519 public key. I don't think this is usable in any other property. The key can be validated with some JS - a regex won't suffice.

Color

A hexadecimal color value, e.g. EEFF22. Could be usable in many semantic contexts, such as 'backgroundColor' and 'buttonColor'. Should offer a colorpicker, probably.

Very large numbers

When describing concepts in physics, large numbers like 10^36 are pretty common. These could break normal u64 ints, so they cannot be regular integers.
We can come across these in many properties, such as a length in meters or the number of particles in a mole.

Current conclusions:

  • If it requires a different input type / form field, and can be shared across various properties, it probably is a new Datatype.

Linking resources to Commits - make every resource traceable

In the current implementation of atomic-server, we can find all versions of a resource by visiting the /all-versions endpoint. It works pretty well, but it is not discoverable. Also, it requires performing a search query on the back-end. This can be optimized, of course, but having an explicit link on the resource is more elegant and will also be more performant for other implementations.

One alternative way of doing things is adding a previous-commit property to every resource after applying a commit. With this property, it becomes easy to find the commit, and with that, also the last editor and the edit date. And if these commits also link to their previous commits, we can quickly find the first commit!
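With such a property, walking the full version history becomes a simple loop. A sketch, where get_previous stands in for resolving the hypothetical previous-commit property of a resource or commit:

fn history(subject: &str, get_previous: impl Fn(&str) -> Option<String>) -> Vec<String> {
    let mut commits = Vec::new();
    let mut next = get_previous(subject);
    while let Some(commit_url) = next {
        next = get_previous(&commit_url);
        commits.push(commit_url);
    }
    commits // newest first; the last entry is the first commit
}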

Dealing with server side generated data and forms

One of the things that Atomic Data enables is rendering Forms for data that you've just encountered. The client will be able to fetch the Properties and Classes, and determine which HTML input fields should be rendered. However, some of these properties may not be editable. Some fields are generated by the server at runtime, such as the members property in a Collection. This value depends on the filters set in the Collection, or the query params that are passed. The current implementation shows these fields anyway.

How to deal with this?

Add the generates property to classes, next to requires and recommends

So at this moment, Classes are responsible for generating most of the form. They define which properties are required and recommended. It therefore seems logical to also add a generates property, which tells the front-end that it doesn't need to show these fields in a form, because editing them doesn't make sense.

However, this means that the class becomes harder to re-use in a context that has different considerations. (see next item for example).

Add a isGenerated property to Properties

Instead of making Classes responsible for providing this information, we could let Properties describe which item is generated.

However, this would mean that re-using a Property becomes harder. Say we have two servers, and server 1 generates the fullName property, while server 2 has it stored as a literal, editable value. They would need to use different Properties, which would make their data harder to combine!

Thoughts on serialization (AD3, JSON-LD, JSON-AD)

The AtomicData Triples (AD3) format (described here) is inspired by hextuples - which is designed to be more performant than other existing RDF serialization formats. But I'm not entirely sold on the idea of having that as the standard way to serialize Atomic Data:

  • It's hard to read
  • It has a lot of duplication (subjects appear more than once), because it does not allow for nesting
  • NDJSON is unconventional
  • It cannot be navigated using thing.property.otherproperty[5] syntax (i.e. it's not json)

Atomic Data, however, is a bit different from RDF. For one, subject-property uniqueness means that we can use key-value stores and plain JSON objects without having key collisions. However, like RDF, Atomic Data uses URLs for keys (Properties / Predicates). Because these URLs are prone to typos and take too long to type, Atomic Data introduces the Shortname property, which means that these shortnames could be used as keys in serialization formats. This is what I've used in Atomic-Server for JSON-LD serialization - the keys are nice and short, while the long URLs are available in the @context object. This gives regular JSON users a familiar ORM-style syntax, whilst retaining a way to find out more about the properties.

But... Parsing JSON-LD is slow, if you want to actually use the linked data URLs and parse it as RDF. Using it as JSON with some embedded documentation is just fine, though.

So let's explore some serialization ideas. I think taking JSON as a starting point makes a ton of sense. It has awesome support, highly performant parsers, and developers are familiar with it.

URLs as keys, first key as Subject URL

{
  "https://example.com/someResource": {
    "https://example.com/somePropString": "someval",
    "https://example.com/somePropBool": true,
    "https://example.com/somePropThatLinksToANestedResource": {
      "https://example.com/somePropString": "some nested value",
    }
  }
}
  • Allows for multiple resources in a single object - both anonymous nested resources and included resources with @id
  • Intuitive, maps well to data model of Atomic Data
  • Not great to use as plain json - keys are long, easy to mess up, hard to autocomplete.
  • Might feel redundant when fetching single resources, where the fetched URL equals the first key in the JSON object.
  • It seems like it can almost be parsed as JSON-LD, but it still misses some things. All URLs should be denoted in nested @id objects, all lists need @list objects... Can't say I like that. The alternative, adding an @context object, also seems like a suboptimal way of doing things.

Langstrings as Resources or not?

Providing a standard for language strings (like RDF does) has some great benefits. For example, it allows for smart clients to show the right translation.

But how should it be modeled? There are at least a couple of options, and each one has some serious benefits and drawbacks:

Adding a language field in all Atoms

This is basically what RDF does - add a separate field in every single statement. This solves the issue, but adding a column to the Core model (of Subject, Property, Value) is very costly in many regards. The "triple" suddenly becomes a "quad" - and every part of the ecosystem has to explicitly deal with that. Serialization formats, libraries... Most importantly, the mental model becomes more complex. In Atomic Data, it also collides with the Subject Property uniqueness - how would you add two translated strings for one S P combination? You'd have to replace S P uniqueness with S P L uniqueness, again making everything more complex. It also makes translation-heavy resources very large.

Adding some custom serialization in the Value

This basically means - create some custom datatype with some custom parsing. Again, every single library has to deal with this. Even if we choose something simple (e.g. a JSON array with objects containing lang and text tags), we still require all Atomic Data parsers to also have some JSON parser, and implement some custom logic.

Another downside, is that this doesn't play nice with Atomic Mutations - it would be impossible to add a single translation, you'd have to replace the entire Value.

Just ignore it, let someone else (or some other proposed standard) deal with this issue

Tempting, but no. Not offering a default go-to solution in this book will probably lead to a fragmented landscape, incompatible formats and a lot of frustration.

Every single translation is a Resource

This actually makes a lot of sense, and does not require any weird parsing tricks. However, it requires clients to create these resources (and their respective identifiers), which can be a hassle. It also requires a model in between to provide the collections of translation resources themselves (like an array?), and that poses a new challenge: how do we make sure that the client is not required to fetch and parse every single translation, if it's only interested in a single one? Which brings us to...

Bundled translation Resources (all translations in 1 resource)

We introduce a class for Translations, and create a property for every single language.
Similar to the method above, this does not require weird parsing tricks.
It creates far fewer Resources than the method above, which is also nice.
The resulting resource could get quite big, though, and clients need to fetch every single one.
Combined with the Atomic Data Shortnames, it would offer some cool and clean query options:

harryPotter1.title.en => Harry Potter and the Philosopher's Stone

harryPotter1.title.nl => Harry Potter en de Steen der Wijzen

  • Could be nested in a Resource, so you don't need new Subjects
  • Search results will point to the translation, not the actual resource above it
  • We add a useLocalString hook, which will know to fetch the URL of the linked Translation and render the locale variant

All in all, this final option seems like the best for now, but if I'm missing some options or important insights - let me know below!

Combining Datatype and Predicate into a single field: Property

One of the core ideas in Atomic Data, is that the Property field (e.g. "example.com/birthdate") in an Atom should resolve to a Property Class.

This Property will tell something about:

  • A description of what it means (e.g. "The date when the person was born.")
  • A shortname (for ORM-style dot.syntax) (e.g. "birthDate")
  • The Datatype that it requires (e.g. "ISO 8601 Date string")

This is what gives Atomic Data typed data and shortnames, which enables ORM-style syntax (thing.property) with type safety. I think these things have proven to be very useful for developers.

This also means that the Datatype is tightly coupled to the Property, and could therefore be omitted from serialization (contrary to RDF). Doing so, however, requires clients to dereference unknown Properties and maintain some local cache of Properties. It also means that when the Properties cannot be retrieved (e.g. the server hosting the Property is offline), and the client does not have a complete schema stored, the datatype is not known. This is a downside of storing the Datatype somewhere else.

So here's the consideration: should Atomic Data, by default, include datatypes in serialized representations? Should it be optional, or maybe required?

Rename Subject Predicate Object to Thing Property Value

The RDF terminology subject predicate object is kind of unconventional in the computer world, and can confuse newcomers. It makes sense semantically, but since Atomic Data has subject-predicate uniqueness, it no longer views these three parts the same way.

Perhaps we should call them something else.

  • Subject Property Value
  • Thing Property Value
  • ID Property Value
  • Key Property Value

I'm pretty sure about Property (since this should always be an Atomic Property) and Value, but I'm not sure about the subject replacement.

Using IPFS in atomic data

IPFS (or other content-addressing protocols) is a very interesting technology, especially for linked data, as it helps make static resources highly available. Atomic Data Properties are an example of where this is very important: it is essential that these resolve, and it can be harmful if the owner decides to change a datatype, for example.

Relates to #64

More powerful query system

Currently, I use Triple Pattern Fragments in Atomic for all my query needs. Combined with the Atomic Collections abstraction, it works pretty well for basic things: listing all instances of some class, sorting these items...

But this is still kind of limited. Most query languages allow for way more powerful kinds of queries.

Let's say I want to find all Users who are friends with John. And let's assume that the friendship relation is not a direct one, but is a Friendship class in between Persons. This happens when a relationship itself requires properties (e.g. how intense is the friendship, when was it started, etc.).

Person -> hasFriendship -> Friendship -> friendsWith -> John.

So how do I find all the Persons with a hasFriendship relation to a Friendship with a friendsWith relation to John?

Perhaps we can use the Atomic Paths concept here.

We could describe the question as ? hasFriendship friendsWith John, where ? marks the unknown.

To be continued...

Usecase: surveys

Surveys are an interesting case for web applications and frameworks. How could surveys work in the Atomic Data paradigm (using concepts like Commits and Agents)?

Things that I want, as a survey creator:

  • A secret (non-indexed) URL for respondents to fill in a survey
  • Unique links for specific individuals, to make sure people only fill in an item once, or set unique identifiers for individuals.
  • A management screen for my surveys
  • Provide edit rights to colleagues
  • Decent survey tooling including...
    • a form builder with open questions, multiple choice options, logic jumps...
    • an interface for looking at individual responses
    • an interface for looking at invite links #23 , and whether they have been used
    • an interface for looking at aggregate / average responses (ideally with piecharts and all that)

And as a respondent, I want:

  • The option to fill in a survey without any registration barrier
  • The ability to pre-fill questions that I've entered before in other surveys.

So, how to implement this? Here's some thoughts:

  • Use Classes as a basis for surveys. Classes also set a bunch of properties that are required and recommended, which is similar. However, Classes don't enable things like logic jumps, sorting questions, pagination, or intro texts. They could provide an easy way to have basic surveys, though, as forms for classes are already functional, and Commits already perform the actual validation of the data.
  • We don't have an enum-like datatype yet (#27), which would be useful for multiple choice questions
  • Create Invite resources with secret, unindexed URLs #23, each with a Private Key and a link to the Class (survey). Upon opening, the front-end sets the private key as current agent, and uses it to sign the commit containing the filled-in survey. The back-end verifies the signature, checks if the agent's key is present in the parent's write ResourceArray.
  • Create a ResponseBox Resource which contains a box of invites. When an invite is used, the invite gets a used: true propval.
  • Pre-filling / autofilling questions can be done using Paths. A question could describe a certain path from a pre-defined starting point, which can be traversed to fetch a specific piece of data.

Prettier path URLs

I'd like to turn these:

https://atomicdata.dev/path?path=https%3A%2F%2Fatomicdata.dev%2F+https%3A%2F%2Fatomicdata.dev%2FfavMovies
https://atomicdata.dev/path?path=https%3A%2F%2Fatomicdata.dev%2Fagents%2F0XzHfi3he5xzUGAEpwg3H5PNlIrpHMrdQRV8oFqU9Fs%3D+name

Into something like these:

https://atomicdata.dev/_https://atomicdata.dev/favMovies
https://atomicdata.dev/agents/0XzHfi3he5xzUGAEpwg3H5PNlIrpHMrdQRV8oFqU9Fs=_name

Separation character

What is the best separation character? Ideally something that rarely occurs in a URL, or else we need to parse URI-escaped URLs.

  • (space) is really clean, but will not be recognized by most linters / parsers as part of a URL; in a markdown document, for example, it would not appear as a clickable link. We could of course accept %20 (the URL-encoded space), but that would be hard to read, which defeats the purpose
  • _ underscore is very human-readable, and often stands in for a space.
  • + is cool, but is also a base64 character

Companion app for authentication and share requests

Users need to store their agent's secret (which includes the private key) someplace safe, such as a password manager. However, this is still not optimal:

  • The secret is entered into the client app, which means that you need to fully trust the client and all its dependencies
  • A keylogger might help a hacker gain access to the secret
  • Storing the secret in the app is not easy to do safely, especially if the device is shared between users.

But still, I like the simplicity and the decentralized nature of the current authentication / authorization system.

One way to solve these issues (and some more) is to introduce a Companion App.

Atomic Companion App

This is a native app for smartphones that is responsible for storing the secret, signing commits, and granting other permissions.

Functionality

  1. Upon installation, the user (owner) can either generate a new keypair, or enter an existing secret. This can probably be done by scanning a QR code.
  2. When the owner tries to sign something, a notification is shown in the companion app. The owner presses accept, and can then use the client app.
  3. The owner can temporarily grant access to some client to modify a resource / write things.
  4. If a data user wants to use some specific piece of information (e.g. access to some piece of profile data) stored on the server of the owner, the owner receives a notification and can approve or deny this.
  5. When the user's server goes down / becomes unavailable, the user receives a notification

Step 1 feels trivial, but step 2 is still kind of mystifying.

Approaches

User's Server is connected to companion app

  1. When the QR code is scanned, the companion app is linked to a server. A (websocket?) connection is opened, which allows the server to send updates to the companion.
  2. The owner enters their agent's subject, which links to their server.
  3. The client app (e.g. browser app) sends a connection request to the server. The server sees it has a connection to a companion app.
  4. Server sends a notification alert to the companion app over their Websockets connection

Use a browser extension

  • I don't like this

Connect over bluetooth

  • Even browsers can connect over bluetooth

Connect over wifi locally

  • This means the phone can be treated as a server

Foreign keys: tracking incoming dependencies

Resources move, change and get deleted. This can be problematic when people depend on these Resources. Atomic Data is largely designed to re-use external content all the time, which means that these external dependencies become more important. For example, when a Commit is parsed and validated, the Properties mentioned in the commit will need to be available (either cached, or fetched). When these are unavailable, the Commit will fail.

A partial solution to this problem is using IPFS #42, which helps make resources immutable. However, this still does not solve the problem of updates themselves.

I think that we need two things:

  • A protocol for sharing updates (perhaps simply posting commits to various commit endpoints)
  • A protocol for following things (perhaps also by sharing a commit that has the target resource as a value) (overlap with notifications? #28)

Foreign keys for collection caching / cache invalidation

Calculating a collection can be an expensive endeavor, so ideally we'd use caching to avoid repeating these expensive calculations.
One way of approaching this is by keeping track of the collections in which a resource is used. When the resource changes, we can 'invalidate' the cache. This could mean that cached collections have an is-invalid property, which is set to true whenever an item holding a foreign key is updated. The function generating the collection can then check this property; as long as the cache is still valid, it can skip the expensive step of filtering all the existing resources.
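
A minimal sketch of this bookkeeping, assuming an in-memory store (the names memberOf and invalidCollections are made up for illustration):

    // Maps a resource subject to the collections that contain it (the 'foreign keys').
    const memberOf = new Map<string, Set<string>>();
    const invalidCollections = new Set<string>();

    // Record that `subject` appears in `collection`.
    function track(subject: string, collection: string) {
      if (!memberOf.has(subject)) memberOf.set(subject, new Set());
      memberOf.get(subject)!.add(collection);
    }

    // Called whenever a Commit changes `subject`: mark all dependent
    // collections as invalid, so they are recomputed on the next read.
    function onCommit(subject: string) {
      for (const collection of memberOf.get(subject) ?? []) {
        invalidCollections.add(collection);
      }
    }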

This approach, however, would miss new resources, or resources that first didn't match but after some commit do match. In other words, when a new resource is added, it will not invalidate a cache, even if it should. For example, the new todos collection would not be invalidated when a new todo is added. What we could do is, for every commit, run the changed resource through all stored collections' filters and see if it matches. Which... kind of defeats the purpose of using foreign keys for this at all :').

Also, whenever the filters of a Collection change, that Collection's cache should be invalidated.

Subscribing to changes using an Endpoint

  • When I want to subscribe to changes for Resource X, I find the /subscribe endpoint.
  • I enter a couple of query params: resource (the resource to which one subscribes) and commitEndpoint (the /commit endpoint to which the changes should be sent). Maybe we could later add subscription levels (e.g. 'delete only').
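
A hedged sketch of what such a request could look like, following the parameter names above (the server URL and the JSON-AD Accept header are assumptions):

    // Hypothetical subscription request to a /subscribe Endpoint.
    async function subscribe(resource: string, commitEndpoint: string) {
      const url = new URL("https://example.com/subscribe"); // hypothetical server
      url.searchParams.set("resource", resource);
      url.searchParams.set("commitEndpoint", commitEndpoint);
      const response = await fetch(url, {
        headers: { Accept: "application/ad+json" }, // JSON-AD, assumed here
      });
      if (!response.ok) throw new Error(`Subscription failed: ${response.status}`);
    }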

Signatures for Commits & the Author model

In Atomic Commits, every change is signed by some author. This makes Commits truly atomic, and means that they can be shared as fully verifiable pieces of information, similar to W3C Verifiable Credentials (although Atomic Commits are specifically made to describe changes instead of current state).

However, this requires some implementation. Here's what I'm thinking:

  1. The Author creates a keypair. The public key is stored publicly at the Author's URL; the private key is stored somewhere safe.
  2. The Commit should be serialized to some format (I'm thinking JSON-LD). This process should be deterministic - serializing some commit should always result in the exact same string, so perhaps the keys should be sorted alphabetically. Note that the Commit can only be serialized as some partial struct, since the signature is of course missing.
  3. The serialized Commit should be signed by the private key. This signature is included in the Commit that's sent to the server.
  4. The server receives the Commit and checks if the Author has the correct rights.
  5. If it does, it gets the Author resource and its public key.
  6. It generates the same serialized Commit representation from step 2, and it checks the signature using the public key.
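
A minimal sketch of steps 2 and 3, assuming an Ed25519 keypair (via the tweetnacl package) and alphabetically sorted JSON as the deterministic format; a real Commit would need recursive sorting for nested values too:

    import nacl from "tweetnacl";

    // Step 2: deterministic serialization - sort the keys alphabetically,
    // so the same commit always serializes to the exact same string.
    function serializeDeterministically(commit: Record<string, unknown>): string {
      const sorted = Object.fromEntries(
        Object.entries(commit).sort(([a], [b]) => a.localeCompare(b))
      );
      return JSON.stringify(sorted);
    }

    // Step 3: sign the serialized, signature-less commit with the private key.
    function signCommit(commit: Record<string, unknown>, secretKey: Uint8Array): string {
      const bytes = new TextEncoder().encode(serializeDeterministically(commit));
      const signature = nacl.sign.detached(bytes, secretKey);
      return Buffer.from(signature).toString("base64");
    }

In step 6 the server repeats the same serialization and checks the result with nacl.sign.detached.verify and the Author's public key.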

Atomic Data as data provider in JAMstack CMS / static site generation

JAMstack is a way to manage websites where you serve static (HTML) files, probably from a CDN, and re-build these sites automatically when data changes. This helps make apps fast and easy to manage, and it often means not managing a server at all.

Atomic Data could be really useful in this context:

  • Atomic Commits make sure that all content is properly versioned. Here, they could be a far easier-to-use and lighter-weight alternative to Git.
  • The Atomic Data Browser provides an easy-to-use admin UI (for writing articles / content).
  • The JAM tools are already there:
    • @tomic/lib for JavaScript
    • atomic-server is the API
    • @tomic/react is the Markup

We'll probably need some new tools / libraries / tutorials / templates for convincing people to use Atomic Data in a JAMstack app:

  • Netlify integration?
  • NEXT.js integration. I think this is already possible with @tomic/react, so all that remains is writing a tutorial or boilerplate.
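
As a taste of what that could look like, here is a hedged sketch of fetching a resource at build time in a NEXT.js page with plain fetch (the subject URL and the JSON-AD Accept header are assumptions; @tomic/lib would presumably wrap this more nicely):

    // pages/index.tsx - hypothetical build-time fetch of Atomic Data.
    export async function getStaticProps() {
      const subject = "https://example.com/my-article"; // hypothetical resource
      const response = await fetch(subject, {
        headers: { Accept: "application/ad+json" }, // JSON-AD, assumed here
      });
      const resource = await response.json();
      // Re-build (e.g. via a webhook on Commits) when the resource changes.
      return { props: { resource }, revalidate: 60 };
    }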

It would be cool if the docs.atomicdata.dev would be running using this stack.

Let the examples actually resolve to Atomic Data

People will follow these links and get 404s, and that limits how fast people will understand how this all works.

So, we have at least a few options for making the URLs resolve:

  • Use existing RDF resources (with the downside that it's not actually atomic data)
  • Create IPFS links (that don't work for most people, who don't have ipfs resolvers in their browser)
  • Set up a server... And actually host atomic data. Seems like the way to go, but doing this right also requires tooling.

Any other ideas?

One-liner slogan for Atomic Data

I've been trying out a couple of one-liner descriptions for atomic data, but I haven't really landed on something that feels entirely right.

  • The easiest way to create, share and model linked data.
  • A proposed standard for modeling and exchanging linked data
  • A specification for sharing, modifying and modeling graph data.
  • The web as a single database
  • Type-safe linked data
  • Re-decentralize the web
  • Take back control of your data
  • A better way to standardize
  • For an interoperable, decentralized and fair internet

Some considerations:

  • Should I emphasize linked data, or would that just confuse people? Atomic Data is not just any linked data, after all, but a strict subset of it. Also, linked data could put people off (at least some devs who've tried RDF).
  • Is the term specification the right one? Although much of the work goes into the specification, there is also a lot of implementation at the moment. I've built a working front end + back end + libraries, which together might feel more like a framework.
  • Should the one-liner stay mostly technical, or should it sell the dream? Maybe we need both as separate one-liners, depending on who is reading it where?

Initializing resources with a Commit

Currently, Commits have a serious ambiguity: they do not specify whether they are editing an existing resource or creating a new one. This can lead to accidental overwrites of existing data. How do we solve this?

Require previous commit ID / signature in each edit commit

  • Makes commits more easily browseable
  • It makes commits a blockchain, basically. That's a cool word!
  • Means that the client has to find this commit. That means adding logic to the client, which now has to query some endpoint of a server to find the latest commit for a resource. Which takes precious cycles.

Require 'init' boolean in initialization commits

  • Very lightweight to implement
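
To compare the two options, here are hedged sketches of what each Commit could carry; the field names previousCommit and init are hypothetical:

    // Option 1: reference the previous commit, forming a verifiable chain.
    const editCommit = {
      subject: "https://example.com/my-resource",
      previousCommit: "https://example.com/commits/abc123", // hypothetical field
      set: { "https://example.com/properties/name": "New name" },
    };

    // Option 2: a lightweight boolean that marks resource creation. The
    // server rejects the commit if the resource already exists.
    const initCommit = {
      subject: "https://example.com/my-new-resource",
      init: true, // hypothetical field
      set: { "https://example.com/properties/name": "First name" },
    };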

Arrays as Datatype

I suspect that every person working with RDF will, at one point, ask themselves: so how do I store ordered data? Some time ago I wrote a blog post about this question, and it still feels... too complex. Why not just introduce a serialized Array datatype? Parse the object string as a JSON array, with each value containing a URL.
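
A hedged sketch of parsing such a value (the validation rules here are an assumption):

    // The object string of an atom with the proposed Array datatype.
    const objectString = '["https://example.com/a", "https://example.com/b"]';

    // Parse and validate: the value must be a JSON array of URL strings.
    const parsed: unknown = JSON.parse(objectString);
    if (!Array.isArray(parsed) || !parsed.every((v) => typeof v === "string")) {
      throw new Error("Not a valid Array value");
    }
    const urls = parsed.map((v) => new URL(v)); // throws on malformed URLs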

Of course, this does not replace linked lists, which have their own merits (e.g. decentralized lists, which can be fun in games where everybody passes something along), and we still need to deal with very long lists (which require pagination).
