
summa-solvency's People

Contributors

alikonuk1, alxkzmn, ameya-deshmukh, enricobottazzi, sifnoc, teddav


summa-solvency's Issues

Modify `LTChip` to take `Value<F>` instead of `F`

The bug has been discussed here

The root cause of the problem is the design of the `assign` function of the `LTChip` gadget from the zkevm-circuits library.

As you can see, the witness assignment function takes `lhs` and `rhs` of type `F`. Because of that, the following trick had to be performed to extract `F` from an assigned cell and pass it to the chip:

total_assets_cell
    .value()
    .zip(computed_sum_cell.value())
    .map(|(total_assets, computed_sum)| {
        if let Err(e) = chip.assign(
            &mut region,
            0,
            computed_sum.to_owned(),
            total_assets.to_owned(),
        ) {
            println!("Error: {:?}", e);
        };
    });

The problem with this implementation is that the circuit assignment is conditional on the witness value being Some. This is dangerous because "the keygen synthesises the circuit without having access to the witness; whereas the prover runs synthesis with the witness. Since this closure is conditioned on witness values being Some, it will only execute in the prover, meaning we would end up with an inconsistent circuit structure cf. keygen."

  • Create PR to Zkevm
  • Modify our circuit accordingly
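
For reference, a minimal sketch of what the call site could look like once `assign` accepts `Value<F>` (assuming the proposed signature change): the `zip`/`map` closure disappears, and synthesis no longer depends on the witness being known.

    // Hypothetical call site after the signature change: `value().copied()`
    // yields a `Value<F>` that is `unknown` during keygen, so the same
    // assignment path runs in both keygen and proving.
    chip.assign(
        &mut region,
        0,
        computed_sum_cell.value().copied(),
        total_assets_cell.value().copied(),
    )?;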

Remove balance from public input

The balance was added to public input as a way to test the circuit.

In production, the balance of a user should not be public, as it is committed within the leaf_hash.

Merkle Sum Tree Chip Refactoring

Refactoring of the merkle sum tree chip

  • Add test to generate and verify proof without using mock prover
  • Better error handling in tests
  • Fix printing

Fix Circuit Printing

The rust compiler generates an error when printing the circuit by running
cargo test --features dev-graph -- --nocapture

For now I removed this from the GitHub Action, so the actions can run with no errors.

  • fix the bug
  • reintegrate the printing into the GitHub Action

Add benchmarking for circuits

Support functions to benchmark the performance of the library in terms of

  • merkle sum tree generation
  • verification key generation
  • proving key generation
  • zk proof generation
  • zk proof verification

Retrieve Poseidon configs from online storage

The procedure of running Python scripts to generate the hasher parameters might be daunting for new users. Instead, we can generate Poseidon configs for numbers of assets from 1 to 30 and upload them to an Amazon bucket. The user would just need to download the parameters that match their number of assets, analogous to downloading a ptau file.

In that case we might think about adding the file to the .gitignore and keep it for internal use.

Originally posted by @enricobottazzi in #29 (comment)

Support Floating Points

Is your feature request related to a problem? Please describe.
Since balances of users (and of the exchange) are not integers most of the time, we need to support the case in which these balances are represented as floating-point values.

Let's suppose that we want to represent the value of

0.165307909071604695 ETH in the Merkle Sum Tree.

ETH is generally represented as a value with up to 18 decimal places. The smallest unit of ETH is known as a "wei".

This means that the value 100 ETH would need to be represented as

100.000000000000000000

When adding together

100000000000000000000 +
   165307909071604695 =
100165307909071604695

In order to achieve that, the value 0.165307909071604695 can be represented as the field element 165307909071604695, while the value 100.000000000000000000 will be represented as the field element 100000000000000000000. In simpler terms, we'll convert every value to wei and execute all the operations (such as sum or less-than) on values denominated in wei.
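
As a quick illustration, a minimal out-of-circuit sketch of the conversion (assuming amounts fit in u128; eth_to_wei is a hypothetical helper, not part of the codebase):

    // Parse a decimal ETH amount into a wei-denominated integer (18 decimals).
    fn eth_to_wei(s: &str) -> u128 {
        let (int_part, frac_part) = s.split_once('.').unwrap_or((s, ""));
        assert!(frac_part.len() <= 18, "more than 18 decimal places");
        let int: u128 = int_part.parse().unwrap();
        // Right-pad the fractional part with zeros up to 18 digits.
        let frac: u128 = format!("{:0<18}", frac_part).parse().unwrap();
        int * 10u128.pow(18) + frac
    }

    fn main() {
        assert_eq!(eth_to_wei("0.165307909071604695"), 165_307_909_071_604_695);
        assert_eq!(eth_to_wei("100"), 100_000_000_000_000_000_000);
    }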

Personally, I don't see any breaking infrastructure change needed to deal with floating points. The only downside I see is the possibility that the sum of liabilities requires more than 254 bits to be represented. That would be a problem.

EDIT:

The total supply of ETH is 120,000,000, which means that the total supply of ETH denominated in wei is
120,000,000.000000000000000000 x 10^18. It may sound like an incredibly big number, but it requires "only" 87 bits to be represented. Therefore we are safe.

The current BTC total supply is 21,000,000. The smallest unit is the satoshi (8 decimal digits). The total supply of BTC denominated in satoshi is 21,000,000.00000000, which takes only 51 bits to represent: far from overflowing the prime modulus of our field.

Fix Poseidon Config

As things stand now, the Poseidon parameters are derived according to the number of assets supported in the proof of solvency. This is unnecessary overhead. As part of our effort to make the circuit more efficient and reduce the number of advice columns used, I propose fixing the Poseidon config used in our circuit to WIDTH = 3 and RATE = 2. No matter the number of inputs to the Poseidon hasher, the same Poseidon config will be used.
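
For illustration, an out-of-circuit sketch using halo2_gadgets' P128Pow5T3 spec (WIDTH = 3, RATE = 2, over the Pasta curves; our bn256 circuit would need an analogous spec): the sponge absorbs a constant-length input two elements at a time, so the same parameters cover any number of assets.

    use halo2_gadgets::poseidon::primitives::{ConstantLength, Hash, P128Pow5T3};
    use halo2_proofs::pasta::Fp;

    // Hash L field elements with a single fixed WIDTH = 3 / RATE = 2 config.
    fn hash_n<const L: usize>(message: [Fp; L]) -> Fp {
        Hash::<Fp, P128Pow5T3, ConstantLength<L>, 3, 2>::init().hash(message)
    }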

I don't know what the tradeoff is here, but looking online I see that there are a few projects, such as ANOMA, that implement Poseidon with these specifications.

@sifnoc would you be able to assess the security of such approach? We can also add it as a separate issue with lower priority right now

Benchmark the multi-asset feature

Create benchmark that measures the prover and verifier time depending on the number of assets, similar to existing "full solvency flow" benchmark:

  • Pre-generate the circuit and hasher configs up to the required max asset number;
  • Extend or generate the test data.

Merkle tree for proof of assets

Zero-Knowledge Proof of Assets

It is necessary for the exchange to prove the ownership of its m addresses whose balances sum up to an amount greater than or equal to the total liabilities L:

bal_1 + bal_2 + ... + bal_m >= L

The exchange wants to hide its addresses in the anonymity set of the top-holding n addresses. The straightforward approach is to provide an array of n addresses and balances as a public input, and binary indices and signatures as a private input, to select the m addresses for a signature check and a summation. A reasonably big anonymity set is n = 2^14 = 16384, but an input array of that size is challenging to use for a zero-knowledge proof. Instead, it is possible to use a Merkle tree and significantly reduce the number of inputs to the circuit.

Let's assume that m << n, e.g., the exchange owns fewer than 100 addresses out of 16384. This allows creating a circuit with a fixed number of inputs that is relatively small but large enough to handle any possible m, e.g., 100 inputs. The unused inputs will be stuffed with dummy values. It is then possible to prove the ownership and the total stored amount by providing:

  • Merkle tree root (publicly known) for all n addresses where leaves of the tree are pairs of (address, balance);
  • Merkle path indices to the owned m (addresses + balances) (and 100 - m dummy ones): an array of 14 elements each (tree height is 14), so 100 x 14 = 1400 inputs (or even fewer if each path is treated as a 14-bit binary number);
  • Merkle path elements to the owned m addresses (and 100 - m dummy ones). It makes sense to always provide all the top-7-level elements of the tree as a separate input, because m = 100 ≈ 2^7, and if the m exchange addresses are evenly spread among the n tree leaves, their Merkle paths will go through almost all the nodes of the top 7 levels. This way, the number of inputs for the remaining path elements will be 100 x (14 - 7) = 700;
  • m signatures (and 100 - m dummy ones) - 100 in total.

Comparison

Address array approach:

16384 addresses + 16384 signatures (including dummy ones) + 16384 balances + array index = at least 49152, plus some overhead depending on how the array index is encoded.

Merkle tree approach:

1 root + 254 "shared" top-7-level path elements + 700 "varying" lower path elements + 1400 (or 100) path indices + 100 signatures = 2455 (or 1155 with the compact index encoding)

The downside of the MT approach is that it requires extra hashing operations to prove the inclusion of an address.
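
A quick sketch reproducing the input counts above (all numbers as assumed in this issue):

    fn main() {
        let n: u32 = 1 << 14; // anonymity set size (tree leaves)
        let m: u32 = 100;     // max owned addresses, padded with dummies
        let height: u32 = 14;
        let shared_levels: u32 = 7;

        // Address-array approach: addresses + signatures + balances (+ index).
        let array_inputs = 3 * n; // 49152, plus index-encoding overhead

        // Merkle-tree approach.
        let shared_nodes: u32 = (1..=shared_levels).map(|l| 1u32 << l).sum(); // 254
        let varying = m * (height - shared_levels);                           // 700
        let indices = m * height;                                             // 1400
        let tree_inputs = 1 + shared_nodes + varying + indices + m;           // 2455

        println!("array: {array_inputs}, merkle: {tree_inputs}");
    }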

Development Notes

Put the circuit here:
https://github.com/summa-dev/circuits-halo2/tree/master/src/circuits
Reference Merkle tree implementations are here:
https://github.com/summa-dev/halo2-experiments#experiment-8---merkle-tree-v3

create overflow chip

I plan to integrate the overflow_chip_v2 from the summa/halo2-experiments repository into this repo, based on the mst-refactor branch, which is updated to the latest version of Halo2.

Reconsider using Array over Vectors

Looking again at the MerkleSumTreeCircuit struct, I don't see a reason why using vectors over arrays would make it more efficient. I think we should try using arrays again and benchmark the two cases more carefully.

pub struct MerkleSumTreeCircuit<const LEVELS: usize, const MST_WIDTH: usize, const N_ASSETS: usize> {
    pub leaf_hash: Fp,
    pub leaf_balances: Vec<Fp>,
    pub path_element_hashes: Vec<Fp>,
    pub path_element_balances: Vec<[Fp; N_ASSETS]>,
    pub path_indices: Vec<Fp>,
    pub assets_sum: Vec<Fp>,
    pub root_hash: Fp,
}

The same goes for MerkleProof

pub struct MerkleProof<const N_ASSETS: usize> {
    pub root_hash: Fp,
    pub entry: Entry<N_ASSETS>,
    pub sibling_hashes: Vec<Fp>,
    pub sibling_sums: Vec<[Fp; N_ASSETS]>,
    pub path_indices: Vec<Fp>,
}

and MerkleSumTree

pub struct MerkleSumTree<const N_ASSETS: usize> {
    root: Node<N_ASSETS>,
    nodes: Vec<Vec<Node<N_ASSETS>>>,
    depth: usize,
    entries: Vec<Entry<N_ASSETS>>,
}

Also note that we now have an extra generic, LEVELS, that can be used in these last two structs too (see the sketch below)!
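
For example, a sketch of the array-based variant of MerkleProof (Fp and Entry as in the existing code):

    pub struct MerkleProof<const LEVELS: usize, const N_ASSETS: usize> {
        pub root_hash: Fp,
        pub entry: Entry<N_ASSETS>,
        pub sibling_hashes: [Fp; LEVELS],
        pub sibling_sums: [[Fp; N_ASSETS]; LEVELS],
        pub path_indices: [Fp; LEVELS],
    }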

Docs

Prepare documentation illustrating how an exchange can use summa-solvency to generate a zk proof of solvency.
It should be structured around the user flow (where the user is the CEX). The profile README should also be updated accordingly.

  • separate proof of assets/proof of liabilities/proof of solvency/proof of correct inclusion

Improve Prover Performance

To improve the prover's performance, we could modify the design of the circuit to support vertical gates using (if possible) a single column. The current architecture privileges horizontal gates, using several columns for each step inside the circuit. Simply spreading the cells down a vertical column would improve the prover's performance, because the number of columns has a big impact on proof size and proving time. This topic has been suggested by:

Description on how we intend to optimize the circuit is mentioned here => https://hackmd.io/vEhAxFVZRb6TH1QGpokneA

Perform `overflow` check outside of `merkle_prove_layer`

As of now, the overflow chip is instantiated, loaded, and assigned inside merkle_prove_layer:

// Initiate the overflow check chip
let overflow_chip = OverflowChip::construct(self.config.overflow_check_config.clone());
overflow_chip.load(&mut layouter)?;

// Each balance cell is constrained to be less than the maximum limit
for left_balance in left_balances.iter() {
    overflow_chip.assign(
        layouter.namespace(|| "overflow check left balance"),
        left_balance,
    )?;
}
for right_balance in right_balances.iter() {
    overflow_chip.assign(
        layouter.namespace(|| "overflow check right balance"),
        right_balance,
    )?;
}

This approach is inefficient because it requires the chip to be loaded at every merkle_prove_layer, generating an 8-bit fixed column at every level, which is unnecessary.

Furthermore, I suggest also performing the overflow check on the leaf_balances, which is not done now (see the sketch below).
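
A rough sketch of the hoisting, assuming merkle_prove_layer is changed to take the already-loaded chip as a parameter (the names layers and leaf_balances are illustrative):

    // In the top-level synthesize: construct and load the chip once, so the
    // 8-bit fixed lookup column is generated a single time.
    let overflow_chip = OverflowChip::construct(config.overflow_check_config.clone());
    overflow_chip.load(&mut layouter)?;

    // Check the leaf balances too (currently not done).
    for balance in leaf_balances.iter() {
        overflow_chip.assign(layouter.namespace(|| "overflow check leaf"), balance)?;
    }

    // Reuse the same chip at every level instead of re-loading it.
    for (level, layer) in layers.iter().enumerate() {
        self.merkle_prove_layer(
            layouter.namespace(|| format!("layer {}", level)),
            &overflow_chip,
            layer,
        )?;
    }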

Supporting more than 64bits for userbalance

The current MST implementation accepts at most u64::MAX as a user balance.

https://github.com/summa-dev/merkle-sum-tree-rust/blob/2e3914ad3441aeaa6d323f2c14600cdd52b04aee/src/entry.rs#L7-L11

pub struct Entry {
    username_to_big_int: u64,
    balance: u64,
    username: String,
}

If the balance unit is wei, it can only handle around 18.45 ETH per account.

The largest account balance on Ethereum mainnet is around 19 septillion (83-84 bits) wei.

So, we should accept at least two u64s as the user balance (see the sketch below).
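
One possible widening, sketched (the two-limb representation is an assumption; u128 or a big-integer type would also work):

    pub struct Entry {
        username_to_big_int: u64,
        // Little-endian limbs: balance = limbs[0] + 2^64 * limbs[1].
        // 128 bits comfortably covers the ~84-bit balances seen on mainnet.
        balance: [u64; 2],
        username: String,
    }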

Chip Reorganization

As @alxkzmn pointed out, there should be a more granular implementation of the chip functionalities to make the whole code more composable. For example:

  • The functionalities of the MerkleSumTreeChip should be split into smaller functions
  • The Poseidon hasher and overflow check chips' functionalities should not be called inside the MerkleSumTreeChip, but inside the synthesize function of the top-level circuit.
  • The columns, selectors, etc. should be initialized inside the configure function of the top-level circuit and later passed to the lower-level chips. This also allows reusing the same advice columns across different chips, making the whole circuit more efficient.

Merkle proof layer region layout refactoring

If all of the asset balance cells are arranged in a row, the proof size will grow a lot as assets are added to the tree. For example, with a proof of solvency over 20 assets, we'd need to create a Poseidon hasher of rate 42 and width 43, resulting in 43 advice columns in the zk circuit.

An optimization that might reduce the dimension of the hasher is to perform a sort of recursive hashing where:

  • l1 is hash_left
  • l2 = H(balance_left_0, balance_left_1, ..., balance_left_19)
  • r1 is hash_right
  • r2 = H(balance_right_0, balance_right_1, ..., balance_right_19)

In this way we get a hasher whose maximum dimension is rate 20 and width 21. If we want to reduce the number of advice columns even further, we can add more hashing recursion.
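
Sketched below with a hypothetical poseidon(&[Fp]) -> Fp helper (not an existing function in the codebase), for N_ASSETS = 20:

    // Stage 1: hash each side's balances (widest hash is now rate 20, not 42).
    let l2 = poseidon(&left_balances);  // H(balance_left_0, ..., balance_left_19)
    let r2 = poseidon(&right_balances); // H(balance_right_0, ..., balance_right_19)

    // Stage 2: combine with the child hashes (rate 4).
    let parent = poseidon(&[l1, l2, r1, r2]); // l1 = hash_left, r1 = hash_right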

Additional info => https://hackmd.io/vEhAxFVZRb6TH1QGpokneA

Originally posted by @enricobottazzi in #29 (comment)

Redesign of the LTChip

EDIT: the number of advice columns in which it should fit was changed from 5 to 3.

The current LTChip works in a horizontal fashion.

| lt | diff[0] | diff[1] | diff[...] | diff[N_BYTES] |
|----|---------|---------|-----------|---------------|
| 1  | 0xff    | 0xff    | 0xff      | 0xff          |

In order to reduce the size of the proof and allow for on-chain verification, we need a more vertical design that fits in 3 advice columns.

Note that the range in which lhs and rhs lie in our case is 248 bits.

How to do that (alternatives):

  • PR the zkevm
  • Create a new LTChip in summa-solvency
  • Find another existing gadget that works as desired to import

What is part of the issue:

What is not part of the issue:

  • Integrate the Chip in Summa

Vertical Design Of Merkle Sum Tree Chip

As part of our effort to improve the efficiency of the circuit, a new design of the MerkleSumTreeChip that makes use of fewer advice columns is proposed here.

Let's consider the situation in which we have N_ASSETS = 2. The circuit as designed now would take MST_WIDTH advice columns, where pub const MST_WIDTH: usize = 3 * (1 + N_ASSETS), so, in this case, it would be 9.

The current layout would look like this:

| a          | b          | c           | d           | e | f | g         | h          | i |
|------------|------------|-------------|-------------|---|---|-----------|------------|---|
| bal_0_left | bal_1_left | bal_0_right | bal_1_right | - | - | hash_left | hash_right | - |

At row = 0, the bool selector is applied to check that swap_bit is either 0 or 1.

At row = 0, the swap selector is applied to check that, according to swap_bit, the values are swapped correctly in the next row.

For example, considering the case in which swap_bit is 0, no swap needs to be performed, as follows in row = 1:

| a          | b          | c           | d           | e     | f     | g         | h          | i        |
|------------|------------|-------------|-------------|-------|-------|-----------|------------|----------|
| bal_0_left | bal_1_left | bal_0_right | bal_1_right | sum_0 | sum_1 | hash_left | hash_right | swap_bit |

At row = 1, the sum_selector is applied to check that sum_i = bal_i_left + bal_i_right.

Furthermore, after row = 1 is computed, two other operations are performed on the assigned cells of row 1:

  • poseidon_chip is used to hash H(hash_left, bal_0_left, bal_1_left, hash_right, bal_0_right, bal_1_right, hash_leaf)
  • the overflow check is performed on bal_0_left, bal_1_left, bal_0_right, bal_1_right.

This structure grows linearly with N_ASSETS; in particular, the relation between the number of assets and the number of advice columns is defined by MST_WIDTH, so 3 assets would mean 12 columns.
Therefore we can try to redesign it as follows, considering N_ASSETS = 2 and again swap_bit = 0:

| a          | b           | c        |
|------------|-------------|----------|
| hash_left  | hash_right  | swap_bit |
| hash_left  | hash_right  |          |
| bal_0_left | bal_0_right | swap_bit |
| bal_0_left | bal_0_right | sum_0    |
| bal_1_left | bal_1_right | swap_bit |
| bal_1_left | bal_1_right | sum_1    |

At rows 0, 2, and 4, the bool_constraint and swap selectors need to be toggled.
At rows 1 and 3, the sum_selector needs to be toggled.

After row 5 is computed, the same poseidon and overflow check can be performed following the same structure as before.

An additional check that needs to be performed here is that the swap_bit remains consistent for a hasher round!

Note that this structure grows vertically with N_ASSETS: every new asset contributes two more rows, while it doesn't add any advice column, which is exactly what we are looking for.

As a result of this implementation:

  • MST_WIDTH will be removed as a generic of the circuit
  • the bool and swap selectors can be merged into a single selector, as they are always toggled at the same time

Circuit Design

Assemble currently available Chips to design a Circuit as described in the MasterPlan

Benchmarks are broken

Describe the bug
Hey! I've been trying to run the benchmarks, but it seems like the benchmark CSV files are out of date. The benchmarks expect 2 assets, whereas the CSV files only provide one. In addition, the formatting of the CSV files is wrong: the deserialization expects ; as the separator, while the CSV is comma-separated.


Add support for KZG commitments

Describe the solution you'd like
Replace the merkle sum tree commitment to the liabilities with a KZG commitment. The proof generated inside the circuit would no longer be a proof of inclusion in the merkle sum tree but, rather, a proof of inclusion in a KZG commitment.

Describe alternatives you've considered
The merkle sum tree commitment represents the state of the art at the moment. Replacing it with a KZG commitment would remove the need to perform a further commitment outside of the circuit.

Areas of concern for the KZG commitment feature:

  • performance
  • scalability
  • how to extract a KZG commitment from halo2 API

Additional context

  • A similar solution has been adopted by Semacaulk

POC

A PoC of this implementation has been developed by @0xbok: https://github.com/0xbok/proof-cex/tree/main. The part that is still missing is described below:

Once you execute this halo2 circuit, you have the KZG commitment to all the (user_id/hash, user_balance) pairs:
let a_poly_commitment = dirty_transcript.read_point()?;

Now, theoretically, you should be able to compute the KZG inclusion proof for each user just from this data at hand.
The part where I am stuck is that I am not sure the halo2 API makes it easy to do so;
otherwise it has to be done by exploring the internals of the halo2 code.

Increase performance of the Snark Verifier

Currently, the AggregationCircuit from the snark verifier takes a long time (a few minutes) to generate a recursive proof. The goal here is to increase its performance. An idea to try is Axiom's snark verifier => https://github.com/axiom-crypto/snark-verifier

  • gen_proof_shplonk (Axiom) for a standard PLONK circuit: 46s on main, 30s on the community edition branch
  • gen_proof_shplonk (PSE) for a standard PLONK circuit: 182s

Error handling bug in overflow chip

let _ = value.copy_advice(|| "assign value", &mut region, self.config.a, 0);

value.copy_advice(|| "assign value", &mut region, self.config.a, 0) might return an error, but this is not handled.

In fact, modifying the function to handle the error makes a lot of tests fail with:

panicked at 'called Result::unwrap() on an Err value: ColumnNotInPermutation(Column { index: 9, column_type: Advice })', src/circuits/tests.rs:631:71
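
For context, ColumnNotInPermutation in halo2 means the target column was never enabled for equality; copy_advice requires the column to be part of the permutation argument. A likely fix, sketched (assuming the column is called a in configure()):

    // In configure(): add the column to the permutation argument.
    meta.enable_equality(a);

    // At the call site: propagate the error instead of discarding it.
    value.copy_advice(|| "assign value", &mut region, self.config.a, 0)?;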

Efficiency improvement of `OverflowCheckChip` design

The current OverflowChip is designed to work with two advice columns as follows:

            // |     | a (value)   | b    |
            // |-----|-------------|------|
            // |  0  | 0x1f2f3f    | 0x1f |
            // |  1  |             | 0x2f |
            // |  2  |             | 0x3f |

This design was applied in #62. The current benchmark on my machine for proof generation is 673.770ms, measured via cargo test --release --features dev-graph -- --nocapture test_valid_merkle_sum_tree_with_full_prover.

From the printing of the circuit it is possible to see that there is a large empty region that could be used by the OverflowCheckChip:
[Screenshot of the circuit layout, 2023-06-27, omitted]

I propose trying a design that makes full use of the 5 advice columns at our disposal. This will reduce the maximum relative offset and, hopefully, increase the efficiency of the circuit:

| advice[0]    | advice[1] | advice[2] | advice[3] | advice[4] |
|--------------|-----------|-----------|-----------|-----------|
| 0x1f2f3f4f5f | -         | -         | -         | -         |
| 0x1f         | 0x2f      | 0x3f      | 0x4f      | 0x5f      |

Once this modification is applied, the following on-chain verifier test will be added:

    #[test]
    fn test_standard_on_chain_verifier() {
        let params = generate_setup_params(K);

        let circuit = MstInclusionCircuit::<LEVELS, L, N_ASSETS>::init(
            "src/merkle_sum_tree/csv/entry_16.csv",
            0,
        );

        let pk = gen_pk(&params, &circuit, None);

        let num_instances = circuit.num_instance();
        let instances = circuit.instances();

        let proof_calldata = gen_evm_proof_shplonk(&params, &pk, circuit, instances.clone());

        let deployment_code = gen_evm_verifier_shplonk::<MstInclusionCircuit<LEVELS, L, N_ASSETS>>(
            &params,
            pk.get_vk(),
            num_instances,
            None,
        );

        let gas_cost = evm_verify(deployment_code, instances, proof_calldata);

        // assert gas_cost to verify the proof on chain to be between 575000 and 600000
        assert!(
            (575000..=600000).contains(&gas_cost),
            "gas_cost is not within the expected range"
        );
    }

Range Check on Less Than Chip

Describe the bug
As described here, there's a known under-constraining bug in the less-than chip.

  • apply range check over the diff chunks
  • Test diff out of range
  • Open a PR into zkevm-circuits lib

Modify Test Suite

While adding support for the ECDSA circuit, it became apparent that we should structure our test suite directory differently. In particular, I ended up adding the ECDSA circuit tests to the same tests.rs file that contains all the tests related to the merkle sum tree.

I propose a directory structure with two further submodules within the circuit folder: one for the merkle sum tree and one for ECDSA. Each submodule would contain the implementation of the circuit and the related tests.
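
One possible layout (module names are only a suggestion):

    src/circuits/
    ├── merkle_sum_tree/
    │   ├── mod.rs    // chip + circuit implementation
    │   └── tests.rs  // merkle sum tree circuit tests
    └── ecdsa/
        ├── mod.rs    // ecdsa circuit implementation
        └── tests.rs  // ecdsa circuit tests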

Add privacy for the proof of assets

Is your feature request related to a problem? Please describe.
The desired feature allows performing the proof of solvency without revealing the assets owned by the exchange or the wallets controlling these assets.

Describe the solution you'd like
The solution adds a further check inside the proof of solvency prover:

  • I own wallet X
  • Wallet X has a balance of z for cryptocurrency y
  • Balance of z is greater than total liabilities for cryptocurrency y

The first step can be achieved with ECDSA signature verification.
The second step can be achieved using a modified version of Axiom's circuits, proving this from the Ethereum state trie. An alternative implementation would manually create a large anonymity set (address -> balance), add these entries to a merkle tree, and prove ownership of an address included in the merkle tree inside the circuit. This approach is similar to the one used by Sismo, Proven, and the Provisions paper.
The third step is already implemented using the less-than chip.

Separate Logic of Proof Of Inclusion and Proof Of Solvency in 2 different circuits

Currently, the MerkleSumTreeCircuit performs two types of checks leveraging the MerkleSumTreeChip:

  • checks that the user identified by the leaf_hash is included in the merkle sum tree identified by the public commitment root_hash. We call this the proof of inclusion.
  • checks that the computed_sum is less than the assets_sum. We call this the proof of solvency.

According to this implementation, every proof generated by the CEX for a user includes both a proof of inclusion and a proof of solvency.

But this approach is highly inefficient: the CEX can generate the proof of solvency only once and provide an individual proof of inclusion to each user.

The proof of solvency circuit would follow this logic:

    // check 1: Assets greater than liabilities
    ΣcexAssetsSum >= rootBalanceLeft + rootBalanceRight
    
    // check 2: Hash of the last level of the tree should match the publicly committed root hash
    root_hash === H(rootHashLeft, rootBalanceLeft, rootHashRight, rootBalanceRight)

The result of the PR will be two new circuits, mst_inclusion and solvency. Both can rely on the methods provided by the MerkleSumTreeChip.

Optimization for Merkle Sum Tree

Based on the benchmarks run, the creation of the merkle sum tree has proven very slow, especially considering that an exchange may have more than 100M users, which would result in a merkle tree of depth 27. Therefore, optimization must be performed.

Add License

Add an MIT/Apache license to the repo.

  • Check that there's no dependency that has GPL license

Notice statement for the configuration of overflow chips

Currently, no warning is issued for configurations that don't seem to be set up as intended.
For instance, if the overflow chip is configured with MOD_BITS set to 20 and MAX_BITS set to 8, it only utilizes 16 bits.
We are not sure if this is ever intentional.

Essentially, the chip checks whether MOD_BITS is divisible by MAX_BITS without a remainder.
We should make sure to notify the developer or operator when it is not (see the sketch below).
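
A sketch of the proposed notice, assuming MOD_BITS and MAX_BITS are in scope as constants (or const generics) where the chip is configured:

    if MOD_BITS % MAX_BITS != 0 {
        // Only floor(MOD_BITS / MAX_BITS) chunks of MAX_BITS are decomposed,
        // so the remaining high bits are silently ignored.
        eprintln!(
            "warning: MOD_BITS ({}) is not a multiple of MAX_BITS ({}); only {} bits are checked",
            MOD_BITS,
            MAX_BITS,
            (MOD_BITS / MAX_BITS) * MAX_BITS
        );
    }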

Introduce `LeChip` as an alternative method to prevent overflow.

Related discussion: #8 (comment)

We can prevent overflow by evaluating left_balance < computed_sum and right_balance < computed_sum, which eliminates the need to consider the number of bytes we have to manage in the implementation.

However, we can't employ LtChip directly, as some leaves might have 0 balances in the MST.

Therefore, we should modify LtChip or create a new chip that also functions when the two numbers are equal (see the sketch below).
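
One possible formulation, sketched under the assumption that the existing LtChip interface is kept: check lhs < rhs + 1 instead of lhs < rhs, so equal values (e.g., zero-balance leaves) pass.

    // lhs <= rhs  <=>  lhs < rhs + 1 (safe here since rhs stays far below the
    // field modulus after the overflow checks).
    chip.assign(&mut region, offset, lhs, rhs + F::ONE)?;
    // (F::one() in older versions of the ff crate.)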
