
reghex's Introduction

reghex

The magical sticky regex-based parser generator


Leveraging the power of sticky regexes and JS code generation, reghex lets you write parsers quickly by surrounding regular expressions with a regex-like DSL.

With reghex you can generate a parser from a tagged template literal, which is quick to prototype and generates reasonably compact and performant code.

This project is still in its early stages and is experimental. Its API may still change and some issues may need to be ironed out.

Quick Start

1. Install with yarn or npm
yarn add reghex
# or
npm install --save reghex
2. Add the plugin to your Babel configuration (optional)

In your .babelrc, babel.config.js, or the "babel" key in your package.json, add:

{
  "plugins": ["reghex/babel"]
}

Alternatively, you can set up babel-plugin-macros and import reghex from "reghex/macro" instead.

This step is optional. reghex can also generate its optimised JS code at runtime. This only incurs a tiny parsing cost on initialisation; thanks to the JIT compilers of modern JS engines there is otherwise no performance difference between the pre-compiled and runtime-compiled versions.

Since the reghex runtime is rather small, for larger grammars it may even make sense not to precompile the matchers at all. For this case you may pass the { "codegen": false } option to the Babel plugin, which will minify the reghex matcher templates without precompiling them.

3. Have fun writing parsers!
import { match, parse } from 'reghex';

const name = match('name')`
  ${/\w+/}
`;

parse(name)('hello');
// [ "hello", .tag = "name" ]

Concepts

The fundamental concept behind reghex is the regex, specifically the sticky regex! These are regular expressions that don't search a target string but instead match at the specific position they're at. The flag for sticky regexes is y, so they can be created using /phrase/y or new RegExp('phrase', 'y').

Sticky regexes are the perfect foundation for a parsing framework in JavaScript! Because they only match at a single position, they can be used to match patterns continuously, as a parser would. Like global regexes, we can control where they match by setting regex.lastIndex = index; and, after matching, read back the updated regex.lastIndex.
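The difference is easy to demonstrate in plain JavaScript, without reghex involved:

```javascript
// A sticky regex only matches at exactly lastIndex; a plain regex searches.
const sticky = /world/y;
const input = 'hello world';

sticky.lastIndex = 0;
const missAtStart = sticky.exec(input); // null: "world" is not at index 0

sticky.lastIndex = 6;
const hitAtSix = sticky.exec(input); // matches "world" at index 6
const nextIndex = sticky.lastIndex; // advanced past the match to 11
```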

Note: Sticky regexes aren't natively supported in any version of Internet Explorer. reghex works around this by imitating their behaviour, which may decrease performance on IE11.

This primitive allows us to build up a parser from regexes that you pass when authoring a parser function, also called a "matcher" in reghex. When reghex compiles to parser code, this code is just a sequence and combination of sticky regexes that are executed in order!

let input = 'phrases should be parsed...';
let lastIndex = 0;

const regex = /phrase/y;
function matcher() {
  // Before matching, we set the current index on the RegExp
  regex.lastIndex = lastIndex;
  // Then we match and store the result
  const match = regex.exec(input);
  if (match) {
    // If the RegExp matched successfully, we advance our shared index
    lastIndex = regex.lastIndex;
  }
  return match;
}

This mechanism is used in all matcher functions that reghex generates. Internally reghex keeps track of the input string and the current index on that string, and the matcher functions execute regexes against this state.
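As a hand-rolled sketch of that idea (not reghex's actual internals), two sticky regexes can be composed into a sequence matcher that rolls back by simply leaving the caller's index untouched on failure:

```javascript
// Sketch only: compose sticky regexes into a sequence matcher.
// On any failure the caller keeps its old index, i.e. a rollback.
function sequence(...regexes) {
  return (input, index) => {
    const results = [];
    let lastIndex = index;
    for (const regex of regexes) {
      regex.lastIndex = lastIndex;
      const match = regex.exec(input);
      if (!match) return undefined; // fail the whole sequence
      results.push(match[0]);
      lastIndex = regex.lastIndex;
    }
    return { results, lastIndex };
  };
}

const helloSeq = sequence(/hello /y, /\w+/y);
helloSeq('hello tim', 0); // { results: ['hello ', 'tim'], lastIndex: 9 }
helloSeq('goodbye tim', 0); // undefined
```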

Authoring Guide

You can write "matchers" by importing the match function from reghex and using it to write a matcher expression.

import { match } from 'reghex';

const name = match('name')`
  ${/\w+/}
`;

As seen above, the match function is called with a "node name", and its result is then used as a tagged template. This template is our parsing definition.

When the Babel plugin is enabled, it detects match('name') and replaces the entire tag with a parsing function, which may then look like the following in your transpiled code:

import { _pattern /* ... */ } from 'reghex';

var _name_expression = _pattern(/\w+/);
var name = function name() {
  /* ... */
};

We've now successfully created a matcher, which matches a single regex: a pattern of one or more word characters. We can execute this matcher by calling it with the curried parse utility:

import { parse } from 'reghex';

const result = parse(name)('Tim');

console.log(result); // [ "Tim", .tag = "name" ]
console.log(result.tag); // "name"

If the string (here "Tim") was parsed successfully by the matcher, it returns an array containing the result of the regex. The array is special in that it also has a tag property set to the matcher's name, here "name", which we chose when we defined the matcher as match('name').

import { parse } from 'reghex';
parse(name)('42'); // undefined

Similarly, if the matcher does not parse an input string successfully, it will return undefined instead.

Nested matchers

This on its own is nice, but a parser must be able to traverse a string and turn it into an abstract syntax tree. To introduce nesting to reghex matchers, we can refer to one matcher in another! Let's extend our original example:

import { match } from 'reghex';

const name = match('name')`
  ${/\w+/}
`;

const hello = match('hello')`
  ${/hello /} ${name}
`;

The new hello matcher matches /hello / and then attempts to match the name matcher afterwards. If either of these fails, hello returns undefined as well and rolls back its changes. Using this matcher will give us nested output.

We can also see in this example that outside of the regex interpolations, whitespace and newlines don't matter.

import { parse } from 'reghex';

parse(hello)('hello tim');
/*
  [
    "hello",
    ["tim", .tag = "name"],
    .tag = "hello"
  ]
*/

Furthermore, interpolations don't have to be reghex matchers. They can also be functions returning matchers, or completely custom matching functions. This is useful when your DSL becomes self-referential, i.e. when matchers start referencing each other, forming a loop. To fix this we can create a function that returns our root matcher:

import { match } from 'reghex';

const value = match('value')`
  (${/\w+/} | ${() => root})+
`;

const root = match('root')`
  ${/root/}+ ${value}
`;
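The function interpolation works because the thunk defers the reference until match time, when both bindings exist. The same pattern in plain JavaScript, with illustrative matchers standing in for reghex ones:

```javascript
// `value` refers to `root` through a thunk before `root` is assigned;
// the thunk is only invoked once matching starts, when `root` exists.
const value = (input) => (input === 'leaf' ? 'leaf!' : (() => root)()(input));
const root = (input) => (input === 'node' ? 'node!' : undefined);

value('leaf'); // 'leaf!'
value('node'); // 'node!' (resolved through the thunk)
```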

Regex-like DSL

We've seen in the previous examples that matchers are authored using tagged template literals, where interpolations can either be filled using regexes, ${/pattern/}, or with other matchers ${name}.

The tagged template syntax supports more ways to match these interpolations, using a regex-like Domain Specific Language. Unlike in regexes, whitespace and newlines don't matter, which makes it easier to format and read matchers.

We can create sequences of matchers by adding multiple expressions in a row. A matcher using ${/1/} ${/2/} will attempt to match 1 and then 2 in the parsed string. This is just one feature of the regex-like DSL. The available operators are the following:

Operator Example Description
? ${/1/}? A question mark makes an interpolation optional: it may or may not match.
* ${/1/}* A star matches an interpolation any number of times, or not at all.
+ ${/1/}+ A plus is used like *, but must match at least once. If it doesn't match at all, that is a failing case, since the match isn't optional.
| ${/1/} | ${/2/} An alternation can be used to match either one thing or another, falling back when the first interpolation fails.
() (${/1/} ${/2/})+ A group can be used to apply one of the other operators to an entire group of interpolations.
(?: ) (?: ${/1/}) A non-capturing group is like a regular group, but the interpolations matched inside it don't appear in the parser's output.
(?= ) (?= ${/1/}) A positive lookahead checks whether interpolations match, and if so continues the matcher without changing the input. If it matches, it's essentially ignored.
(?! ) (?! ${/1/}) A negative lookahead checks whether interpolations don't match, and if so continues the matcher without changing the input. If the interpolations do match the matcher is aborted.
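As a rough illustration (not reghex's generated code), a quantifier like * over a sticky regex boils down to a loop that keeps executing the regex until it stops matching:

```javascript
// Sketch: matching `${/ab/}*` by repeating a sticky regex.
function star(regex) {
  return (input, index) => {
    const results = [];
    regex.lastIndex = index;
    let match;
    while ((match = regex.exec(input))) {
      results.push(match[0]);
      if (regex.lastIndex === index) break; // zero-width match: avoid looping forever
      index = regex.lastIndex;
    }
    return { results, index }; // zero matches is still a success for `*`
  };
}

star(/ab/y)('ababX', 0); // { results: ['ab', 'ab'], index: 4 }
star(/ab/y)('X', 0); // { results: [], index: 0 }
```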

A couple of operators also support shorthands that allow you to write lookaheads or non-capturing groups a little more quickly.

Shorthand Example Description
: :${/1/} A non-capturing group is like a regular group, but the interpolations matched inside it don't appear in the parser's output.
= =${/1/} A positive lookahead checks whether interpolations match, and if so continues the matcher without changing the input. If it matches, it's essentially ignored.
! !${/1/} A negative lookahead checks whether interpolations don't match, and if so continues the matcher without changing the input. If the interpolations do match the matcher is aborted.

We can combine and compose these operators to create more complex matchers. For instance, we can extend the original example to only allow a specific set of names by using the | operator:

const name = match('name')`
  ${/tim/} | ${/tom/} | ${/tam/}
`;

parse(name)('tim'); // [ "tim", .tag = "name" ]
parse(name)('tom'); // [ "tom", .tag = "name" ]
parse(name)('patrick'); // undefined

The above will now only match specific name strings. When one pattern in this chain of alternations does not match, it will try the next one.

We can also use groups to add more matchers around the alternations themselves, by surrounding the alternations with ( and ):

const name = match('name')`
  (${/tim/} | ${/tom/}) ${/!/}
`;

parse(name)('tim!'); // [ "tim", "!", .tag = "name" ]
parse(name)('tom!'); // [ "tom", "!", .tag = "name" ]
parse(name)('tim'); // undefined

Maybe we're also not that interested in the "!" showing up in the output node. If we want to get rid of it, we can use a non-capturing group to hide it, while still requiring it.

const name = match('name')`
  (${/tim/} | ${/tom/}) (?: ${/!/})
`;

parse(name)('tim!'); // [ "tim", .tag = "name" ]
parse(name)('tim'); // undefined

Lastly, as in regexes, ?, *, and + are used as "quantifiers". The first two are optional: they may fail to match their patterns without the whole matcher failing. The + quantifier matches an interpolation one or more times, while * matches zero or more times. Let's use this to allow the "!" to repeat.

const name = match('name')`
  (${/tim/} | ${/tom/})+ (?: ${/!/})*
`;

parse(name)('tim!'); // [ "tim", .tag = "name" ]
parse(name)('tim!!!!'); // [ "tim", .tag = "name" ]
parse(name)('tim'); // [ "tim", .tag = "name" ]
parse(name)('timtim'); // [ "tim", "tim", .tag = "name" ]

As we can see from the above, quantifiers can be applied to groups and non-capturing groups just as well as to single interpolations.

Transforming as we match

In the previous sections, we've seen that the nodes that reghex outputs are arrays containing match strings or other nodes and have a special tag property with the node's type. We can change this output while we're parsing by passing a function to our matcher definition.

const name = match('name', (x) => x[0])`
  (${/tim/} | ${/tom/}) ${/!/}
`;

parse(name)('tim'); // "tim"

In the above example, we're passing a small function, x => x[0], to the matcher as a second argument. It transforms the matcher's output before the parser returns it.

We can use this function creatively by outputting full AST nodes, perhaps even ones resembling Babel's:

const identifier = match('identifier', (x) => ({
  type: 'Identifier',
  name: x[0],
}))`
  ${/[\w_][\w\d_]+/}
`;

parse(identifier)('var_name'); // { type: "Identifier", name: "var_name" }

We've now entirely changed the output of the parser for this matcher. Given that each matcher can change its output, we're free to reshape the parser's output entirely. By returning null or undefined from this function, we can also mark the matcher as not having matched, which causes other matchers to treat it as a mismatch!

import { match, parse } from 'reghex';

const name = match('name', (x) => {
  return x[0] !== 'tim' ? x : undefined;
})`
  ${/\w+/}
`;

const hello = match('hello')`
  ${/hello /} ${name}
`;

parse(hello)('hello tom'); // ["hello", ["tom", .tag = "name"], .tag = "hello"]
parse(hello)('hello tim'); // undefined

Lastly, if we need to create these special array nodes ourselves, we can use reghex's tag export for this purpose.

import { tag } from 'reghex';

tag(['test'], 'node_name');
// ["test", .tag = "node_name"]
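Conceptually, this amounts to attaching the tag property to the array (the helper name below is illustrative; reghex's real implementation may differ):

```javascript
// Sketch of what `tag` does conceptually: attach a `tag` property to an
// array node and return the same array.
function tagNode(array, tagName) {
  array.tag = tagName;
  return array;
}

const node = tagNode(['test'], 'node_name');
node[0]; // 'test'
node.tag; // 'node_name'
```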

Tagged Template Parsing

Any grammar in RegHex can also be used to parse a tagged template literal. A tagged template literal consists of a list of literals alternating with a list of "interpolations".

In RegHex we can add an interpolation matcher to our grammars to allow it to parse interpolations in a template literal.

import { interpolation } from 'reghex';

const anyNumber = interpolation((x) => typeof x === 'number');

const num = match('num')`
  ${/[+-]?/} ${anyNumber}
`;

parse(num)`+${42}`;
// ["+", 42, .tag = "num"]

This grammar now allows us to match arbitrary interpolated values. We can invoke our grammar as a tagged template literal itself to parse such input.
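Under the hood, a tagged template hands the tag function the literal chunks and the interpolated values separately, and a parser walks them in alternation. A plain-JS sketch of what the tag function receives:

```javascript
// A tag function receives the literal chunks (quasis) and the interpolated
// values (expressions) as separate arguments.
function inspect(quasis, ...expressions) {
  return { quasis: [...quasis], expressions };
}

inspect`+${42}`; // { quasis: ['+', ''], expressions: [42] }
```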

That's it! May the RegExp be ever in your favor.

reghex's People

Contributors

chocolateboy, kitten


reghex's Issues

Arrays as native alternatives for alternations

It would make my parsers a bit tidier if I could use arrays instead of alternations in the DSL passed to matchers.

Without arrays:

const Foo = passthrough`${A} | ${B} | ${C} | ${D} | ${E}`;
const Foos = somematcher`${Foo}+`;

With arrays:

const Foo = [A, B, C, D, E];
const Foos = somematcher`${Foo}+`;
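The requested desugaring (hypothetical, not implemented in reghex) would treat an array interpolation as an ordered, first-match-wins alternation. A sketch with matchers simplified to plain functions:

```javascript
// Sketch: treat an array of matchers as an ordered alternation.
// Each "matcher" here is simplified to (input, index) => result | undefined.
function alternation(matchers) {
  return (input, index) => {
    for (const matcher of matchers) {
      const result = matcher(input, index);
      if (result !== undefined) return result; // first match wins
    }
    return undefined;
  };
}

const A = (input, i) => (input.startsWith('a', i) ? 'A' : undefined);
const B = (input, i) => (input.startsWith('b', i) ? 'B' : undefined);

const Foo = alternation([A, B]);
Foo('abc', 0); // 'A'
Foo('bcd', 0); // 'B'
Foo('xyz', 0); // undefined
```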

RFC: Add a traverse function

The traverse function would only support the default tag output, so [ /* ... */, tag: 'node' ], or rather type Node = Array<Node | string> & { tag: string }. Maybe it can also be limited to support any object of the shape { tag: string } | { type: string }.

It would function similarly to GraphQL's visit function or Babel's traverse function.

traverse({
  [tagName]: node => {
    // ...
    return node;
  }
})(node)

The different visitor functions match by tag name and would execute their functions as the AST is traversed. The return value should replace the previous value. This way it'd be possible to also transform an AST into the desired shape.

Somehow it should also be possible for this traverser to output strings! If the returned value of each node function is a string, this should easily work by concatenating the child strings per node.
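A minimal sketch of such a traverse, assuming the default Array & { tag } node shape (the function is illustrative, not an existing reghex API):

```javascript
// Sketch of the proposed traverse: visits tagged array nodes depth-first,
// replacing each node with its visitor's return value.
function traverse(visitors) {
  const visit = (node) => {
    if (!Array.isArray(node)) return node; // strings pass through unchanged
    const children = node.map(visit);
    children.tag = node.tag;
    const visitor = visitors[node.tag];
    return visitor ? visitor(children) : children;
  };
  return visit;
}

const node = Object.assign(['tim'], { tag: 'name' });
traverse({ name: (n) => n[0].toUpperCase() })(node); // 'TIM'
```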

Lookaheads shorthands

It'd be useful to have shorthands for lookahead expressions; they would make parsers easier to read and less error-prone to write, as currently there's no syntax highlighting for unbalanced parentheses used in lookaheads.

Peg.js uses ! for the negative lookahead and & for the positive lookahead, I think we should use ! and = respectively to better align with how these expressions are written in JS regexes (I guess peg.js uses & because they already use = for something else).

Examples:

  • (?!${/foo/}) => !${/foo/}
  • (?!${Foo}) => !${Foo}
  • (?!${Foo} | ${Bar}) => !(${Foo} | ${Bar})
  • (?=${/foo/}) => =${/foo/}
  • (?=${Foo}) => =${Foo}
  • (?=${Foo} | ${Bar}) => =(${Foo} | ${Bar})

Allow starting with `|` in multiline match

I'm parsing BCP 47 language tags using reghex and it works perfectly!

Could we accept starting with a | character before the first pattern when using a multiline match?

const irregular = match('irregular')`
-  ${/en-GB-oed/}
+ | ${/en-GB-oed/}
  | ${/i-ami/}
  | ${/i-bnn/}
  | ${/i-default/}
  | ${/i-enochian/}
  | ${/i-hak/}
  | ${/i-klingon/}
  | ${/i-lux/}
  | ${/i-mingo/}
  | ${/i-navajo/}
  | ${/i-pwn/}
  | ${/i-tao/}
  | ${/i-tay/}
  | ${/i-tsu/}
  | ${/sgn-BE-FR/}
  | ${/sgn-BE-NL/}
  | ${/sgn-CH-DE/}
`;

Regexes that can match 0 characters aren't handled properly

The following two parsers should produce the same output, in this case, but they don't:

console.log(parse(match('👎')`${/\s*/} ${/foo/}`)('foo')); // => undefined
console.log(parse(match('👍')`${/\s/}? ${/foo/}`)('foo')); // => [ 'foo', tag: '👍' ]
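The discrepancy likely stems from how zero-width matches interact with sticky regexes: /\s*/y succeeds with an empty match and leaves lastIndex in place, which repetition and sequencing logic must handle specially. In plain JavaScript:

```javascript
// /\s*/y happily matches zero characters: exec succeeds, but lastIndex
// does not advance, so naive repetition logic can misbehave.
const ws = /\s*/y;
ws.lastIndex = 0;
const m = ws.exec('foo');
m[0]; // '' — a successful, zero-width match
ws.lastIndex; // still 0
```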

Improved debuggability

Currently it's pretty difficult to debug a parser written with reghex, it'd be great if that could be made easier somehow.

I'm thinking maybe there could be like an onBeforeMatch function called right before executing any matcher, with the sole purpose (that I can think of) that somebody can put a "debugger" in there. It sounds ugly though.

Types

It'd be useful if the library provided some TS types.

Built-in parser compiler

It would be useful to have a built-in CLI for bundling up a parser into a standalone file, from the user perspective it would be more user friendly if one didn't have to set-up Babel at all (I use TS most of the time), plus it would make the readme more impressive if with one command one could compile the demo parser into a 1kb file or whatever.

Miscellaneous feedback

I spent some time today benchmarking the library and playing with making a toy/useless Markdown parser with it, so here's some miscellaneous feedback after having interacted some more with the library, feel free to close this and perhaps open standalone issues for the parts you think are worth addressing.


For the Markdown parser thing I was trying to write a matcher that matched headings, and I had some problems with that:

  1. I wanted to get the leading hashes, trailing hashes, and content in between as individual strings in the tag to be transformed, that immediately implies that I have to use multiple regexes because capturing groups within a single regex are lost in the tag. Perhaps this should be changed somehow as that would be quite powerful.
  2. Not being able to use a single regex in this scenario means also that I can't use \1, \2 etc. to reference other capturing groups either, in my headings scenario the trailing hashes should really be considered trailing hashes only if they are exactly the same number as the leading hashes, otherwise they should be considered part of the body, this can't quite be expressed cleanly with the current system because the first capturing group/matcher can't be referenced.
    1. Addressing the first issue would kind of address this too.
    2. Another option would be to extend the DSL adding support for \[0-9] references, which in this case would mean referencing the 1st, 2nd... 9th whole sub-matcher.
    3. Perhaps both options should be implemented, I'm not sure.
  3. Continuing in the same scenario, there's another issue once the standalone regex gets broken up:
    Standalone:
    `${/(#{1,6} )(.+?)( #{1,6})/}`
    Broken-up:
    `${/#{1,6} /} ${/.+?/} ${/ #{1,6}/}`
    
    I forgot to take note of what the issue was exactly (🤦‍♂️), but unless I'm misremembering, the issue is that those two expressions don't quite match the same things, because the lazy modifier on the broken-up version doesn't behave the same way.
  4. Custom regex flags are discarded, it would be nice in some scenarios to be able to write a case-insensitive regex or a regex where "^" and "$" match the start and end of the line respectively for example.
  5. The DSL to me looks kind of like a reimplementation of a subset of the regex language, so perhaps it should be extended a bit to match it more closely, for example how-many-modifiers (what are these things actually called?) like {1,3} perhaps should be supported too.

Now about performance, perhaps the more interesting part of the feedback.

From what I've seen, every ~atomic thing the library does is pretty fast, so there shouldn't be any meaningful micro-optimizations available; the root issue seems to be that the library spends too much time on the wrong alternations.

Some actual real numbers first so that the rest of the feedback sounds less crazy:

  1. I've been benchmarking the library with this.zip. Basically there's a parser that matches against a subset of javascript and it is asked to match a bunch of expressions in a loop.
  2. Making this 14-character diff to a single matcher of the parser cut the time it took to run the benchmark by ~40%:
    -  = $`${LogicalORExpression} ${_} ${TernaryOperatorTrue} ${Expression} ${TernaryOperatorFalse} ${Expression}`; // Slow
    +  = $`${/(?=.*\?)/} ${LogicalORExpression} ${_} ${TernaryOperatorTrue} ${Expression} ${TernaryOperatorFalse} ${Expression}`; // Fast
  3. Basically this is what's happening:
    1. This rule matches a ternary expression.
    2. Most expressions in the benchmark aren't ternary expressions.
    3. Still those expressions could be the boolean condition of a ternary expression.
    4. So the parser parses the entire thing, until it realizes the required "?" character for the ternary expression can't be found.
    5. So it backtracks and eventually matches the entire thing again with another matcher.
    6. That "again" is where the almost-doubled performance of the changed rule comes from.
    7. The only thing the changed rule does is checking if the "?" character is present before running any other gazillion matchers.

That's kind of the root of the performance problems with RegHex parsers in my opinion, if I had to guess with enough sophistication perhaps some parsers could become 100x faster or more just by going down branches/alternations more intelligently.

At a high-level to me RegHex parsers look kinda like CPUs, individual patterns are like instructions, each alternation is a branch etc. it should follow then that the same optimizations used for CPUs could/should be used for RegHex. I know next to nothing about that really, but just to mention some potential things that crossed my mind:

  1. In my ternary expression matcher above there are a series of instructions that should succeed for the entire thing to be considered a match, but not all instructions are the same performance-wise, e.g. checking if the string to match contains a "?" is waaaaay faster than checking if the string starts with something that could be a boolean expression for the ternary operator, the fastest checks should be performed first.
    1. This optimization specifically could be performed at the parser-level with a lookahead, that works but that's kinda ugly.
    2. Another, more general, approach would be to analyze regexes, probably at build-time so that how long it takes to do that doesn't matter, and extract subsets of the automata that must match under every branch and are easy to check for, like the presence of "?" and ":", in that order, in the input string required by my ternary expression matcher.
  2. Next perhaps a branch predictor could be added, the order in which alternations are tried matters for performance, and if the 9th alternation in a set of 10 alternation is the one that matched the most in the past perhaps that should be tried first and most of the times we can skip failing to match the first 8 alternations altogether.
    1. This could be pretty tricky to optimize for automatically, because you need to know for which chunks in the alternations array the order doesn't matter. Maybe some API could be exposed to the user and just move the problem to the user, like a "isParallelizable" third argument to the match function or something.
  3. This is kinda out-there but in some sense RegHex took the individual automata I wrote (~the matchers) and built a larger one by putting the smaller ones in a tree (~the final parser), now currently what happens if I don't add the lookahead for "?" is that the parser goes down the branch for the ternary expression, matches a whole lot of stuff, then realizes the "?" character can't be found and it goes back to the starting point, taking another branch - now it might be interesting to note that there are multiple branches of the tree here that look exactly the same for a while, so they should kind of get merged together in a prefix tree, this way once the ternary expression wouldn't ultimately get matched RegHex wouldn't go back to the very root of the tree but it would take the nearest unexplored branch first instead, which in this particular case would lead to finding a match almost immediately (e.g. "so far that was an OR expression -> there are no more characters left in the input string -> done").
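The "cheapest check first" idea from point 1 can be sketched independently of reghex: before running an expensive matcher, test for a character the rule cannot succeed without (all names below are illustrative):

```javascript
// Sketch: wrap a matcher with a cheap required-character prefilter.
// `expensiveMatcher` stands in for a full ternary-expression matcher.
function withPrefilter(requiredChar, matcher) {
  return (input, index) => {
    if (input.indexOf(requiredChar, index) === -1) return undefined; // cheap bail-out
    return matcher(input, index);
  };
}

const expensiveMatcher = (input, index) =>
  /^[^?]+\?[^:]+:.+$/.test(input.slice(index)) ? 'ternary' : undefined;

const ternary = withPrefilter('?', expensiveMatcher);
ternary('a ? b : c', 0); // 'ternary'
ternary('a + b', 0); // undefined, without running the regex at all
```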

Depending on how many of these fancy optimizations you are willing to spend time on, perhaps a seriously fast JS parser could be written on top of RegHex 🤔 that'd be really cool.


Sorry for the long post, hopefully there's some useful feedback in here.

Recursive parsing

Trying to parse something like

x and y and z

with

const token = match('formula')`
  ${/\w+/}
`

const anded = match('anded')`
  ${token} (?: ${/\s+and\s+/}) ${token}
`

I can make it match when there's exactly two but it doesn't work with one or three:

  parse(anded)('x')
// undefined

  parse(anded)('x and y')
// [ [ 'x', tag: 'formula' ], [ 'y', tag: 'formula' ], tag: 'anded' ]

  parse(anded)('x and y and z')
// [ [ 'x', tag: 'formula' ], [ 'y', tag: 'formula' ], tag: 'anded' ]

But if I try to make it recursive:

const token = match('formula')`
  ${/\w+/}
`

const anded = match('anded')`
  ${row} (?: ${/\s+and\s+/}) ${token}
`

const row = match('row')`
  ${anded} | ${token}
`

console.log([
  parse(anded)('x'),
  parse(anded)('x and y'),
  parse(anded)('x and y and z')
])
/Users/glen/src/experiments/disclosure/src/3-parse-test.js:57
var anded = function _anded(state) {
                           ^

RangeError: Maximum call stack size exceeded

I think I'm doing something wrong!
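The recursive grammar above is left-recursive: row tries anded, whose first step is row again, before consuming any input, hence the stack overflow. A repetition avoids recursion entirely; a plain-JS sketch of parsing "x and y and z" that way, independent of reghex:

```javascript
// Sketch: parse "x and y and z" as token (separator token)* with a loop,
// sidestepping left recursion.
const word = /\w+/y;
const sep = /\s+and\s+/y;

function parseAnded(input) {
  word.lastIndex = 0;
  let match = word.exec(input);
  if (!match) return undefined;
  const tokens = [match[0]];
  let index = word.lastIndex;
  for (;;) {
    sep.lastIndex = index;
    if (!sep.exec(input)) break; // no more "and" separators
    word.lastIndex = sep.lastIndex;
    match = word.exec(input);
    if (!match) break;
    tokens.push(match[0]);
    index = word.lastIndex;
  }
  return tokens;
}

parseAnded('x'); // ['x']
parseAnded('x and y and z'); // ['x', 'y', 'z']
```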

RFC: Support tagged template parsing

Currently parse(node)(input) works using a string, i.e. input is expected to be a string.
It would be interesting to allow parse(node)(quasis, ...expressions) to be passed, where quasis is an array of strings and expressions is an array of interpolations.

This way it'd be possible to parse a tagged template literal input by introducing just one new matcher: match.interpolation('interpolation').

Support for string matchers

It'd be useful if a matcher could be specified using strings also, this would make parsers a bit cleaner and easier to read as a lot of characters wouldn't need to be escaped in strings.

E.g.: /\/\// => '//'
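Supporting strings would essentially mean escaping them into equivalent sticky regexes; a sketch of that conversion, using the common metacharacter-escaping recipe:

```javascript
// Sketch: turn a literal string into an equivalent sticky regex by
// escaping every regex metacharacter.
function stringToRegex(str) {
  return new RegExp(str.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'), 'y');
}

stringToRegex('//').exec('// comment')[0]; // '//'
stringToRegex('a+b').exec('a+b')[0]; // 'a+b', with the '+' treated literally
```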
