
rio's Introduction

Oxigraph


Oxigraph is a graph database implementing the SPARQL standard.

Its goal is to provide a compliant, safe, and fast graph database based on the RocksDB key-value store. It is written in Rust. It also provides a set of utility functions for reading, writing, and processing RDF files.

Oxigraph is under heavy development, and SPARQL query evaluation has not been optimized yet. The development roadmap is tracked with GitHub milestones. Oxigraph's internal design is described on the wiki.

Oxigraph implements the following specifications:

  • SPARQL 1.1 Query, SPARQL 1.1 Update, and SPARQL 1.1 Federated Query.
  • Turtle, TriG, N-Triples, N-Quads, and RDF/XML RDF serialization formats for both data ingestion and retrieval.
  • SPARQL Query Results XML Format, SPARQL 1.1 Query Results JSON Format, and SPARQL 1.1 Query Results CSV and TSV Formats.

It is split into multiple parts:

Also, some parts of Oxigraph are available as standalone Rust crates:

  • oxrdf, data structures encoding basic RDF concepts (the oxigraph::model module).
  • oxrdfio, a unified parser and serializer API for RDF formats (the oxigraph::io module). It itself relies on:
    • oxttl, N-Triples, N-Quads, Turtle, TriG, and N3 parsing and serialization.
    • oxrdfxml, RDF/XML parsing and serialization.
  • spargebra, a SPARQL parser.
  • sparesults, parsers and serializers for SPARQL result formats.
  • sparopt, a SPARQL optimizer.
  • oxsdatatypes, an implementation of some XML Schema datatypes.

The library layers in Oxigraph (the elements above depend on the elements below): [Oxigraph libraries architecture diagram]

A preliminary benchmark is provided. There is also a document describing Oxigraph technical architecture.

When cloning this codebase, don't forget the submodules: use git clone --recursive https://github.com/oxigraph/oxigraph.git to clone the repository including them, or git submodule update --init to add them to an already cloned repository.

Help

Feel free to use GitHub discussions or the Gitter chat to ask questions or talk about Oxigraph. Bug reports are also very welcome.

If you need advanced support or are willing to pay to get some extra features, feel free to reach out to Tpt.

License

This project is licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in Oxigraph by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Sponsors

And others. Many thanks to them!

rio's People

Contributors

alexchamberlain, dependabot-preview[bot], dependabot[bot], filippodebortoli, hoijui, pchampin, phillord, tpt


rio's Issues

Refactor the `Triple` variant of `Subject` and `Term`

Subject and Term both have a variant that goes:

    Triple(&'a Triple<'a>)

I am extending Sophia to use TriplesFormatter, and I find the above very inconvenient. Every time I want to create a Subject or Term "view" of my internal terms, if the term is a quoted triple, I need to allocate a Rio Triple somewhere that I can then reference. And I have to do this recursively, which is a pain. (I was unable to achieve that without some unsafe code...)

Did you run into a similar problem with Oxigraph? Did you find an elegant way to solve it?

I would suggest changing the variant in Subject and Term to:

    Triple(Box<Triple<'a>>)

I think that the code of the parser would be minimally impacted by that change, and it would make it much easier to use the formatters.

If you agree with the general idea, I can propose a PR.
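For illustration, here is a self-contained sketch (simplified stand-ins, not the real rio_api types) of what the boxed variant would look like and why it is easier to build inline:

```rust
// Sketch only: simplified stand-ins for rio_api's Triple and Term,
// showing the proposed owned (boxed) variant for quoted triples.
#[derive(Debug, Clone, PartialEq)]
struct Triple<'a> {
    subject: &'a str,
    predicate: &'a str,
    object: Term<'a>,
}

#[derive(Debug, Clone, PartialEq)]
enum Term<'a> {
    NamedNode(&'a str),
    // Proposed: Box<Triple> lets callers build nested quoted triples
    // inline, with no separate long-lived allocation to borrow from.
    Triple(Box<Triple<'a>>),
}

fn main() {
    let quoted = Term::Triple(Box::new(Triple {
        subject: "http://example.com/s",
        predicate: "http://example.com/p",
        object: Term::NamedNode("http://example.com/o"),
    }));
    if let Term::Triple(t) = &quoted {
        assert_eq!(t.subject, "http://example.com/s");
    } else {
        unreachable!();
    }
}
```

With the current reference-based variant, the inner Triple would have to outlive the Term that points at it, which is what forces the recursive allocations described above.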

Add Turtle* support

It would be nice to add support for Turtle* (and maybe NTriples*, NQuads*, and TriG*) to Rio.

This could be done by adding a "star" feature, disabled by default, that would add a Triple variant to the term enumeration and rename the NamedAndBlankNode enumeration to something like Node, also with a Triple variant.

IDEA: improving Turtle performance with non-strict mode

If I remember correctly, we found out some time ago that a lot of time was spent in checking PN_CHARS.

We could add a strict flag in the parser configuration; when set to true, the parser would use the current code, rejecting any invalid character. When set to false (the default, following Postel's law), the parser would accept any non-ascii character (since only ascii characters are "significant" for the syntax anyway).

Not sure how much this would improve performance, but it is worth a try. @Tpt WDYT?
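A minimal sketch of what the non-strict check could look like (the ASCII subset used here is an assumption, simplified from the real PN_CHARS production):

```rust
// Sketch of the proposed non-strict mode: any non-ASCII character is
// accepted outright, since only ASCII is syntactically significant in
// Turtle; ASCII characters are still checked against a (simplified)
// subset of PN_CHARS.
fn is_pn_char_non_strict(c: char) -> bool {
    if !c.is_ascii() {
        return true; // skip the expensive Unicode-range checks entirely
    }
    c.is_ascii_alphanumeric() || c == '_' || c == '-'
}

fn main() {
    assert!(is_pn_char_non_strict('é')); // accepted without range checks
    assert!(is_pn_char_non_strict('a'));
    assert!(!is_pn_char_non_strict(' ')); // ASCII is still validated
}
```

The strict mode would keep the current full Unicode-range checks; only the non-strict path takes this shortcut.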

parser mixes its own blank node labels with those from the graph

Consider the following Turtle snippet:

:alice :likes [ :name "bob" ].
_:riog00000001 <tag:name> "charlie".    

It is parsed as

:alice :likes [ :name "bob", "charlie" ].

which is a mistake.

A simple workaround would be to detect and alter any bnode label in the source that matches riog[0-9]{8}
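A sketch of that workaround (the `_user` suffix is an arbitrary choice for illustration, not something Rio prescribes):

```rust
// Detects source bnode labels that collide with the parser's own
// generated pattern riog[0-9]{8}, and renames them.
fn clashes_with_generated(label: &str) -> bool {
    label.len() == 12
        && label.starts_with("riog")
        && label[4..].bytes().all(|b| b.is_ascii_digit())
}

fn disambiguate(label: &str) -> String {
    if clashes_with_generated(label) {
        format!("{label}_user") // arbitrary suffix to avoid the clash
    } else {
        label.to_string()
    }
}

fn main() {
    assert_eq!(disambiguate("riog00000001"), "riog00000001_user");
    assert_eq!(disambiguate("bob"), "bob");
}
```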

Wrong behavior in Wordnet-LMF XML Format

Hello,
is it possible to extend Rio to support the WN-LMF (https://globalwordnet.github.io/schemas/) format as well?

Currently I use the library to import different wordnets into a triple store.
The XML pattern parser works fine.

However, under certain circumstances, the behavior of the parser is faulty:

XML in WN-LMF format:

<LexicalEntry id="ewn-symbolization-n">
  <Lemma partOfSpeech="n" writtenForm="symbolization" />
  <Sense id="ewn-symbolization-n-06614677-01" synset="ewn-06614677-n" dc:identifier="symbolization%1:10:00::">
        <SenseRelation relType="derivation" target="ewn-symbolize-v-00989629-01" />
  </sense>
   <Sense id="ewn-symbolization-n-05773412-02" synset="ewn-05773412-n" dc:identifier="symbolization%1:09:00::">
       <SenseRelation relType="derivation" target="ewn-symbolize-v-00837915-02" />
   </sense>
   <Sense id="ewn-symbolization-n-00413284-02" synset="ewn-00413284-n" dc:identifier="symbolization%1:04:00::" /></LexicalEntry>

Output:

...
['riog000034', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type', 'LexicalEntry' ]
('riog000034', 'id', '"ewn-symbolization-n"')
('riog000035', 'writtenForm', '"symbolization"')
('riog000035', 'partOfSpeech', '"n"')
('riog000034', 'Lemma', 'riog000035')
('riog000036', 'http://purl.org/dc/elements/1.1/identifier', '"symbolization%1:10:00::"')
('riog000036', 'synset', '"ewn-06614677-n"')
('riog000036', 'id', '"ewn-symbolization-n-06614677-01"')
('riog000037', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type', 'SenseRelation')
('riog000037', 'target', '"ewn-symbolize-v-00989629-01"')
('riog000037', 'relType', '"derivation"')
('riog000034', 'sense', 'riog000037')
('riog000038', 'http://purl.org/dc/elements/1.1/identifier', '"symbolization%1:09:00::"')
('riog000038', 'synset', '"ewn-05773412-n"')
('riog000038', 'id', '"ewn-symbolization-n-05773412-02"')
('riog000039', 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type', 'SenseRelation')
('riog000039', 'target', '"ewn-symbolize-v-00837915-02"')
('riog000039', 'relType', '"derivation"')
('riog000034', 'sense', 'riog000039')
('riog000040', 'http://purl.org/dc/elements/1.1/identifier', '"symbolization%1:04:00::"')
('riog000040', 'synset', '"ewn-00413284-n"')
('riog000040', 'id', '"ewn-symbolization-n-00413284-02"')
('riog000034', 'sense', 'riog000040')
...

If the pattern is LexicalEntry - Sense - LexicalEntry, it works fine.
But if a SenseRelation occurs within a Sense, no ID link for the Sense is generated: ('riog000034', 'sense', 'riog000037') -> the LexicalEntry points directly at the SenseRelation, skipping the Sense. It should instead be ('riog000034', 'sense', 'riog000036'), and additionally ('riog000036', 'SenseRelation', 'riog000037') must exist.

Thx Robert

HDT parse support

Given that they are much smaller than other serialization formats, it would be very useful to be able to load HDT (Header, Dictionary, Triples) files with Rio.
There seem to be only C++ and Java libraries handling HDT right now, so having this in the Rust world would let users create even more performant RDF applications in Rust.

Implement full IRI validation

The Iri parser should properly validate the IRI against the RFC grammar instead of just trying to split the IRI into its different components.

Assistance with MRE

Hello, thank you all very much for putting this together. As a newcomer to Rust and this module, I'm trying to sort out this pattern. I can't seem to find the proper incantation of lifetimes etc. to make this work. I've spent quite a while doing research and trying lifetimes in various combinations. Can this be made to work? Or maybe I'm just doing it wrong entirely?

I don't know what if anything in this construction has the same lifetime (or can be made to have the same lifetime) as some_vec.

Thanks!

use rio_turtle::{NQuadsParser, TurtleError};
use rio_api::parser::QuadsParser;
use rio_api::model::{NamedOrBlankNode};

fn main() {
    let file = b"<http://example.com/foo> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Person> <http://example.com/> .
    <http://example.com/foo> <http://schema.org/name> \"Foo\" <http://example.com/> .
    <http://example.com/bar> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Person> .
    <http://example.com/bar> <http://schema.org/name> \"Bar\" .";

    let count = do_work(file);
    println!("count {}", count);
}

#[allow(unused_must_use)]
fn do_work(file: &[u8]) -> i32 {
    let mut count = 0;
    let mut some_vec: Vec<&str> = Vec::new();
    NQuadsParser::new(file.as_ref()).parse_all(&mut |t| {
        let id = match t.subject {
            NamedOrBlankNode::NamedNode(node) => node.iri,
            NamedOrBlankNode::BlankNode(node) => node.id,
          };
        some_vec.push(id);
        count += 1;
        Ok(()) as Result<(), TurtleError>
    });
    count
}

The compiler error:

~/dev/basic-rio$ cargo build
   Compiling basic-rio v0.1.0 (/home/matt/dev/basic-rio)
error[E0521]: borrowed data escapes outside of closure
  --> src/main.rs:24:9
   |
18 |     let mut some_vec: Vec<&str> = Vec::new();
   |         ------------ `some_vec` declared here, outside of the closure body
19 |     NQuadsParser::new(file.as_ref()).parse_all(&mut |t| {
   |                                                      - `t` is a reference that is only valid in the closure body
...
24 |         some_vec.push(id);
   |         ^^^^^^^^^^^^^^^^^ `t` escapes the closure body here

error: aborting due to previous error

error: could not compile `basic-rio`

To learn more, run the command again with --verbose.
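For the record, the usual fix is to copy the borrowed data into an owned String before it leaves the closure: declare some_vec as Vec<String> and push id.to_string(). A self-contained analogue of the pattern (no rio dependency; parse_all here is a hypothetical stand-in for the library's callback API):

```rust
// Stand-in for rio's parse_all: it calls the callback with a &str that
// is only valid for the duration of each call, like rio's triple views.
fn parse_all(input: &str, mut on_subject: impl FnMut(&str)) {
    for line in input.lines() {
        if let Some(subject) = line.split_whitespace().next() {
            on_subject(subject);
        }
    }
}

fn main() {
    let file = "<http://example.com/foo> <p> <o> .\n\
                <http://example.com/bar> <p> <o> .";
    let mut subjects: Vec<String> = Vec::new(); // owned, not Vec<&str>
    parse_all(file, |s| {
        subjects.push(s.to_string()); // an owning copy may escape the closure
    });
    assert_eq!(subjects.len(), 2);
    assert_eq!(subjects[0], "<http://example.com/foo>");
}
```

Storing &str would require the borrowed data to outlive the closure call, which the parser cannot guarantee; owning the data sidesteps the lifetime question entirely.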

Add JSON-LD Support

Another common RDF format is JSON-LD. In fact, it is a required format for LDP servers. I'll gladly contribute to this when I get the time, but I figured I should go ahead and open this issue.

I assume this would involve adding a new crate rio_jsonld, following the paradigm already established for the other parsers.

Proposed improvements for ParseError

Add XML abbreviations and BNode elision

Currently the XML serialization is all in long hand. It would be good to have some of the short hand syntaxes, for example typed nodes and the ability to remove explicit BNode IDs where possible.

For example:

    f.format(&Triple {
        subject: NamedNode { iri: "http://top.level/top_sub" }.into(),
        predicate: NamedNode { iri: "http://top.level/top_pred" }.into(),
        object: BlankNode{id:&bnid}.into()
    })?;

    f.format(&Triple {
        subject: BlankNode{id:&bnid}.into(),
        predicate: NamedNode{iri: "http://one.deep/one_pred"}.into(),
        object: NamedNode{iri: "http://one.deep/one_obj"}.into()
    })?;

currently produces

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about="http://top.level/top_sub">
        <top_pred xmlns="http://top.level/" rdf:nodeID="bn1"/>
    </rdf:Description>
    <rdf:Description rdf:nodeID="bn1">
        <one_pred xmlns="http://one.deep/" rdf:resource="http://one.deep/one_obj"/>
    </rdf:Description>
</rdf:RDF>

whereas something like this:

<?xml version="1.0" encoding="utf-8"?>
<rdf:RDF
  xmlns="http://top.level/"
  xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
  xmlns:default1="http://one.deep/"
>
  <rdf:Description rdf:about="http://top.level/top_sub">
    <top_pred>
      <rdf:Description>
        <default1:one_pred rdf:resource="http://one.deep/one_obj"/>
      </rdf:Description>
    </top_pred>
  </rdf:Description>
</rdf:RDF>

would be better. Combined with the other shortcut syntaxes, this will make a big difference to the eventual file size!

don't ignore ascii case on BASE and PREFIX

Hello,

I was processing a graph where base was declared and used as a prefix. Example:

@prefix base: <http://data.gesis.org/claimskg/> .
@prefix dcat: <http://www.w3.org/ns/dcat#> .

base:claimskg a dcat:Dataset .

This resulted in an Unexpected char error which I traced to here: https://github.com/oxigraph/rio/blob/main/turtle/src/turtle.rs#L278

If you check Turtle's grammar specification, it allows declaring a base IRI either with @base or BASE, but not lowercase base.

So maybe the case should not be ignored? (the same is valid for PREFIX btw)

Thanks!

Access to line,column position for triples

I am currently using the RdfXmlParser to parse OWL files. While this works well, debugging my parser or the OWL file is fairly hard because my error messages are poor. I'd like to improve this. The main advance would be to get some line-position information from the parser as it goes. Currently, I can't see any mechanism for getting this information out of the RdfXmlParser -- the underlying quick_xml parser could at least provide access to the buffer position, which would be easy enough to surface.

I am not sure if this is a good solution or not, although it is cheap and cheerful. So I thought to ask before I sent in a PR to see if there is a better design.

Is trim_text necessary in rio_xml?

Thank you for building this library!

When I use rio_xml to update an existing RDFXML file that has trailing whitespace in some literals, I'm getting whitespace changes that are unrelated to the change I'm trying to make. I believe the source is this line that tells quick_xml to trim_text:

https://github.com/oxigraph/rio/blob/master/xml/src/parser.rs#L65

When I commented out this line and tried to read the same RDF/XML, I got this "Unexpected text event" error that I'm not sure how to manage:

https://github.com/oxigraph/rio/blob/master/xml/src/parser.rs#L661

Is trim_text essential to how rio_xml works? What changes would be required to stop trimming whitespace?

Thanks again!

RDF/XML parser could be easily optimized

The current RDF/XML parser is quite naive and copies the latest context each time an opening tag is read. I believe the parser could easily be sped up by avoiding such copies.

N3 parse support

hello @Tpt , thanks for your work.

It would be great if Rio supported an N3 parser. N3 is also the format chosen by the Solid spec for the N3 Patch format used to patch Solid RDF resources.

Thanks again.

Error in NTriplesParser example

Using the example from https://docs.rs/rio_turtle/0.7.0/rio_turtle/struct.NTriplesParser.html and adding a main function:

use rio_api::model::NamedNode;
use rio_api::parser::TriplesParser;
use rio_turtle::{NTriplesParser, TurtleError};

fn main() {
    let file = b"<http://example.com/foo> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Person> .
<http://example.com/foo> <http://schema.org/name> \"Foo\" .
<http://example.com/bar> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Person> .
<http://example.com/bar> <http://schema.org/name> \"Bar\" .";

    let rdf_type = NamedNode {
        iri: "http://www.w3.org/1999/02/22-rdf-syntax-ns#type",
    };
    let schema_person = NamedNode {
        iri: "http://schema.org/Person",
    };
    let mut count = 0;
    NTriplesParser::new(file.as_ref()).parse_all(&mut |t| {
        if t.predicate == rdf_type && t.object == schema_person.into() {
            count += 1;
        }
        Ok(()) as Result<(), TurtleError>
    })?;
    assert_eq!(2, count);
}

This results in:

error[E0277]: the `?` operator can only be used in a function that returns `Result` or `Option` (or another type that implements `FromResidual`)
  --> src/main.rs:23:7
   |
5  | / fn main() {
6  | |     let file = b"<http://example.com/foo> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Person> .
7  | | <http://example.com/foo> <http://schema.org/name> \"Foo\" .
8  | | <http://example.com/bar> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://schema.org/Person> .
...  |
23 | |     })?;
   | |       ^ cannot use the `?` operator in a function that returns `()`
24 | |     assert_eq!(2, count);
25 | | }
   | |_- this function should return `Result` or `Option` to accept `?`
   |
   = help: the trait `FromResidual<Result<Infallible, TurtleError>>` is not implemented for `()`

I'm a Rust beginner so unfortunately I cannot give more details about it.
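The ? at the end of parse_all only works inside a function that returns Result; the usual fix is to give main a Result return type (or call .unwrap() instead of using ?). A minimal standalone illustration of the pattern, not the rio code itself:

```rust
// A stand-in fallible call; in the real example this would be
// NTriplesParser::new(...).parse_all(...) returning Result<(), TurtleError>.
fn fallible_count() -> Result<i32, String> {
    Ok(2)
}

fn main() -> Result<(), String> {
    let count = fallible_count()?; // `?` is allowed: main returns Result
    assert_eq!(2, count);
    Ok(())
}
```

Applied to the doc example: change the signature to fn main() -> Result<(), TurtleError>, keep the ?, and end main with Ok(()).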

Turtleparser throw error with relative Iri whereas it shouldn't

Hello, I was testing rio_turtle to see if it's usable for one of my projects, and I ran into trouble when my RDF contains a relative IRI. When the TurtleParser encounters a relative IRI, it returns an error about a missing scheme, whereas it shouldn't: relative IRIs are allowed in Turtle.
Worse, there seems to be no way to skip an error and continue parsing. I tried to do it with an iterator, and when I encounter an error, the iterator gets stuck on it (try my attached code with my attached ttl file). I didn't find a way to get the content of the faulty IRI either. In general, these two issues make it impossible to be error-tolerant when parsing.

I think the origin of this issue is the Iri datatype: it doesn't allow relative IRIs. More generally, I think you should allow the existence of unchecked or invalid IRIs and let the user check them if needed -- or maybe the check should be done by the parser itself?

lv2core.ttl.txt
main.rs.txt

Wrong namespace splitting in XML formatter

Given the following triple:

<http://example.org/s> <http://example.org/properties:p> "o".

the XML formatter produces (indentation added for readability):

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about="http://example.org/s">
        <properties:p xmlns="http://example.org/" rdf:datatype="http://www.w3.org/2001/XMLSchema#string">o</properties:p>
  </rdf:Description>
</rdf:RDF>

which is invalid because of properties:p (no properties namespace prefix is declared).

Note that Wikidata has IRIs with this structure.

When looking for a split point between namespace and prefix, the formatter should also split on :.
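A sketch of such a split (a hypothetical helper, not Rio's actual code): take the last '#', '/', or ':' as the boundary, since an XML local name may not contain ':'.

```rust
// Splits an IRI into (namespace, local name), also treating ':' as a
// boundary so that local names never contain a colon.
fn split_iri(iri: &str) -> (&str, &str) {
    let boundary = iri
        .rfind(|c| c == '#' || c == '/' || c == ':')
        .map(|i| i + 1)
        .unwrap_or(0);
    iri.split_at(boundary)
}

fn main() {
    assert_eq!(
        split_iri("http://example.org/properties:p"),
        ("http://example.org/properties:", "p")
    );
    assert_eq!(
        split_iri("http://example.org/s"),
        ("http://example.org/", "s")
    );
}
```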

TurtleParser throws several spurious errors

Hi again,

Due to the issue I ran into yesterday, I decided to write a small program to test the Turtle parser by making it parse all the ttl files on my computer. It found two kinds of errors that aren't errors; you can see the corresponding files in tests/good_too of the attached archive. I also discovered an RDF test suite that I tried; it also triggered errors on some correct ttl files, which are in tests/good of the attached archive.

The attached archive contains the project I used to run my tests. It just takes the paths of the files to test as parameters and prints the parse error when one is triggered. For example, cargo run -- tests/*/*.ttl will parse all the ttl files of the test suite.

PS: I know there is a rio_testsuite crate, but I wasn't able to use the binary

test1.zip

Async parsing/serializing

In web contexts, with most web frameworks being async, it would help if the parsers were async like their JS counterparts. Since the parsers don't require the full content in memory anyway, it should be straightforward to make them async: mostly converting Read => AsyncRead and Write => AsyncWrite, turning iterators into streams, and making functions async, without changing any of the parsing logic.

It would be a very breaking change, but it would make the parsers non-blocking and far more efficient in concurrent contexts, and sync adapters could cover any synchronous usage.

IRI normalization and validation

I believe that the library should fully validate and normalize IRIs.

Missing features:

  • Full IRI validation (host...)
  • UTF-8 normalization using NFC

`Iri<String>` is a waste of space

I notice that the "default" type parameter for Iri<T> (i.e. the one used in many places in the crate) is String.

I would suggest switching to Box<str>, which consumes 33% less memory (16 bytes instead of 24). The only benefit of String over Box<str> is mutability, which is not needed here (Iris are conceptually immutable, right?).

If you agree about that, I can propose a pull request.
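The numbers behind the suggestion, on a typical 64-bit target: String is pointer + length + capacity, while Box<str> is just pointer + length.

```rust
// String carries a capacity field to support growth; Box<str> does not,
// which is exactly the field an immutable IRI never needs.
// (Sizes assume a 64-bit target: usize and pointers are 8 bytes.)
fn main() {
    assert_eq!(std::mem::size_of::<String>(), 24); // ptr + len + capacity
    assert_eq!(std::mem::size_of::<Box<str>>(), 16); // ptr + len
}
```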

method for getting inner value for Term?

Hi there! Thanks for creating / maintaining this.

I'm currently parsing some turtle documents, and I need to get the plain value inside the terms. The to_string() method adds N-Triples serialization syntax, which often makes sense, but I'm looking for the inner value.

I'm assuming there must be some way for me to get the value from a Term in some easy way. At the moment, the code that does this looks like this:

// Returns the inner value of a Term in an RDF triple. If it's a blank node
// or a quoted triple, it returns None.
use rio_api::model::{Literal, Term};
fn get_inner_value(t: Term) -> Option<String> {
    match t {
        Term::Literal(lit) => match lit {
            Literal::Simple { value } => Some(value.into()),
            Literal::LanguageTaggedString { value, language: _ } => Some(value.into()),
            Literal::Typed { value, datatype: _ } => Some(value.into()),
        },
        Term::NamedNode(nn) => Some(nn.iri.into()),
        // Blank nodes and quoted triples have no plain inner value.
        _ => None,
    }
}

RDFa support

Thanks for building this library! Would you be interested in adding RDFa support to rio and what's the best way we can help out with that?

Option to allow duplicated rdf:ID values

I am trying to load a large RDF file from UniProt using Sophia (which uses Rio), but it fails with the error:

SourceError(RdfXmlError { kind: Other("http://purl.uniprot.org/uniprot/#_3D5AA7A276B6E5CD_up.name_uORF has already been used as rdf:ID value") })

The RDF is actually invalid, as it uses the same rdf:ID for two property elements. However, with rapper I can force it to load by disabling this option:

checkRdfID RDF/XML parser checks rdf:ID values for duplicates (boolean)

Would it be at all possible to implement a similar option in Rio so I can force it to load? I already contacted the UniProt team and they have no plans to fix this unfortunately.

Resolving entity values with nested entities

I am trying to get RdfXmlParser to parse a file that contains entities that are defined in terms of other entities, like rdf and rdfs in the following example:

<?xml version="1.0"?>

<!DOCTYPE rdf:RDF [
  <!ENTITY ex "http://example.com/">
  <!ENTITY w3 "http://www.w3.org">
  <!ENTITY rdf "&w3;/1999/02/22-rdf-syntax-ns#">
  <!ENTITY rdfs "&w3;/2000/01/rdf-schema#">
  <!ENTITY xsd "&w3;/2001/XMLSchema#">
]>

<rdf:RDF xmlns:ex2="&ex;2/" xmlns:rdf="&rdf;">
	<rdf:Description rdf:about="&ex;foo">
	    <ex2:test rdf:datatype="&xsd;string">bar</ex2:test>
	</rdf:Description>
</rdf:RDF>

It seems that the parser is not performing entity resolution while parsing doctypes.
I have to admit that I am unsure whether the code above is well-formed, I encountered a situation of this kind while working with a seemingly established ontology.
I am trying to add this capability to the parser by modifying parser::parse_doctype().
I am going to address this issue in a pull request.

Am I missing some capabilities of RdfXmlParser that would allow me to get the parser to work on this example without changing code?

Is GeneralizedTerm a good name?

@pchampin I'm currently releasing a new version of Rio and, while doing so, I am questioning myself about the use of the word "generalized". Any generalization of RDF Terms/Quads could be called "generalized". Do you think about a better word to use for the generalization Rio is currently supporting?

Language tag normalization

For downstream tasks, I believe it would be nice if the library normalized the case of language tags.

If we do, we should maybe use the RFC 5646 format (i.e. lower-case language code, upper-case country code...).

@pchampin What do you think about it?
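A simplified sketch of that normalization (it handles only the primary language subtag and two-letter region subtags; full RFC 5646 casing also title-cases four-letter script subtags like "Latn"):

```rust
// Lower-cases the primary language subtag and upper-cases two-letter
// region subtags, e.g. "EN-us" -> "en-US". Simplified: script subtags
// and extensions are just lower-cased here.
fn normalize_lang_tag(tag: &str) -> String {
    tag.split('-')
        .enumerate()
        .map(|(i, sub)| {
            if i > 0 && sub.len() == 2 && sub.chars().all(|c| c.is_ascii_alphabetic()) {
                sub.to_ascii_uppercase() // region subtag
            } else {
                sub.to_ascii_lowercase()
            }
        })
        .collect::<Vec<_>>()
        .join("-")
}

fn main() {
    assert_eq!(normalize_lang_tag("EN-us"), "en-US");
    assert_eq!(normalize_lang_tag("fr"), "fr");
}
```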

Add indentation to RDF/XML

It would be nice not to have the RDF/XML output all on a single line.

quick_xml supports this through Writer::new_with_indent, but this isn't exposed in the rio_xml API.
