
snark's Introduction

SNARK and Relation Traits

The arkworks ecosystem consists of Rust libraries for designing and working with zero-knowledge succinct non-interactive arguments (zkSNARKs). This repository contains efficient libraries that describe interfaces for zkSNARKs, as well as interfaces for programming them.

This library is released under the MIT License and the Apache v2 License (see License).

WARNING: This is an academic proof-of-concept prototype, and in particular has not received careful code review. This implementation is NOT ready for production use.

Directory structure

This repository contains two Rust crates:

  • ark-snark: Provides generic traits for zkSNARKs
  • ark-relations: Provides generic traits for NP relations used in programming zkSNARKs, such as R1CS

Overview

This repository provides the core infrastructure for using the succinct argument systems that arkworks provides. Users who want to produce arguments about various problems of interest will first reduce those problems to an NP relation, various examples of which are defined in the ark-relations crate. Then a SNARK system defined over that relation is used to produce a succinct argument. The ark-snark crate defines a SNARK trait that encapsulates the general functionality, as well as specific traits for various types of SNARK (those with transparent and universal setup, for instance). Different repositories within the arkworks ecosystem implement this trait for various specific SNARK constructions, such as Groth16, GM17, and Marlin.
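To make this concrete, here is a hedged sketch of what that trait looks like (abridged; the exact associated types, bounds, and signatures may differ between releases of ark-snark):

use ark_ff::PrimeField;
use ark_relations::r1cs::ConstraintSynthesizer;
use ark_std::rand::{CryptoRng, RngCore};

/// Abridged sketch of the `SNARK` trait from `ark-snark`.
pub trait SNARK<F: PrimeField> {
    type ProvingKey: Clone;
    type VerifyingKey: Clone;
    type Proof: Clone;
    type Error;

    /// Circuit-specific setup: produce a proving key and a verifying key.
    fn circuit_specific_setup<C: ConstraintSynthesizer<F>, R: RngCore + CryptoRng>(
        circuit: C,
        rng: &mut R,
    ) -> Result<(Self::ProvingKey, Self::VerifyingKey), Self::Error>;

    /// Produce a succinct argument that the circuit is satisfiable.
    fn prove<C: ConstraintSynthesizer<F>, R: RngCore + CryptoRng>(
        pk: &Self::ProvingKey,
        circuit: C,
        rng: &mut R,
    ) -> Result<Self::Proof, Self::Error>;

    /// Check the argument against the public input.
    fn verify(
        vk: &Self::VerifyingKey,
        public_input: &[F],
        proof: &Self::Proof,
    ) -> Result<bool, Self::Error>;
}

A scheme like Groth16 then implements this trait, so callers write roughly Groth16::<Bls12_381>::circuit_specific_setup(circuit, rng), followed by ::prove(&pk, circuit, rng) and ::verify(&vk, &public_inputs, &proof).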

Build guide

The library compiles on the stable toolchain of the Rust compiler. To install the latest version of Rust, first install rustup by following the instructions here, or via your platform's package manager. Once rustup is installed, install the Rust toolchain by invoking:

rustup install stable

After that, use cargo, the standard Rust build tool, to build the libraries:

git clone https://github.com/arkworks-rs/snark.git
cd snark
cargo build --release

Tests

This library comes with comprehensive unit and integration tests for each of the provided crates. Run the tests with:

cargo test --all

License

The crates in this repo are licensed under either of the following licenses, at your option.

Unless you explicitly state otherwise, any contribution submitted for inclusion in this library by you shall be dual licensed as above (as defined in the Apache v2 License), without any additional terms or conditions.

Acknowledgements

This work was supported by: a Google Faculty Award; the National Science Foundation; the UC Berkeley Center for Long-Term Cybersecurity; and donations from the Ethereum Foundation, the Interchain Foundation, and Qtum.

An earlier version of this library was developed as part of the paper "ZEXE: Enabling Decentralized Private Computation".

snark's People

Contributors

antoinerondelet, brunoffranca, chenxingli, debris, dependabot-preview[bot], gakonst, howardwu, huitseeker, hyemmie, jon-chuang, kobigurk, mmagician, montekki, npwardberkeley, oblivious-app, onewayfunc, paberr, pratyush, psivesely, rex4539, rillian, ryanleh, savil, swasilyev, valardragon, weikengchen, xu-cheng, yelhousni


snark's Issues

Insufficient constraints for fn inverse in Fp2 (and other) extension field gadgets

As explained in detail in the same issue just patched in our Zexe fork, the two constraints used in the fn inverse in r1cs-std/src/fields/fp2.rs are not sufficient to enforce the inverse relation. The same issue is found in the following gadgets:

  • Fp4Gadget,
  • FP6Gadget in fp6_2over3.rs, and
  • Fp12Gadget.

The issue is easily patched: one just has to keep the equation for the "real" part of the product, i.e. 1 - nonresidue * v1 = a0 * b0 in the notation of the gadget.
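For concreteness, a hedged pseudocode sketch of the patched constraint set (the enforce(a, b, c) helper, standing for the R1CS constraint a * b = c, is hypothetical, and beta denotes the quadratic non-residue with u^2 = beta): for an input a = a0 + a1*u and a claimed inverse b = b0 + b1*u, the product is a*b = (a0*b0 + beta*a1*b1) + (a0*b1 + a1*b0)*u, so both coordinates must be constrained:

enforce(a1, b1, v1)                               // v1 = a1 * b1
enforce(a0 + a1, b0 + b1, 1 + (1 - beta) * v1)    // "imaginary" part of a*b vanishes (Karatsuba)
enforce(a0, b0, 1 - beta * v1)                    // "real" part: the restored equation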

Implement an efficient zero-knowledge-less variant of Groth16

When Groth16 is used for arguments that do not need zero-knowledge, it's possible to save some computation.

Specifically, as can be seen below, the term rB in C is not needed when r, s = 0 (the random elements required for zero-knowledge). This means we don't need to calculate [B]_1, saving a large multiexp. It's still sound, as the verifier algorithm is unchanged.

(Image in the original issue: the Groth16 prover's equations for A, B, and C.)
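For reference, the Groth16 prover computes (in the paper's notation, with a_i the instance/witness assignment):

A = alpha + sum_i a_i * u_i(x) + r * delta
B = beta  + sum_i a_i * v_i(x) + s * delta
C = ( sum_i a_i * (beta * u_i(x) + alpha * v_i(x) + w_i(x)) + h(x) * t(x) ) / delta
    + s * A + r * B - r * s * delta          (the sum in C runs over witness indices only)

With r = s = 0, the s*A + r*B - r*s*delta terms vanish, which is why [B]_1 never needs to be computed.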

An example of such an implementation can be seen in https://github.com/filecoin-project/bellman/pull/12/files.

Thanks to @arielgabizon for showing this to me a while ago!

Implement the `BW6_761` curve.

@yelhousni et al. have found a curve, BW6_761, which, like SW6, has scalar field equal to Bls12_377::Fq, but which has better efficiency properties. It would be great to have an implementation of this curve in our library to enable more efficient recursion.

The implementation consists primarily of the following steps:

  1. Constructing the field, which just requires instantiating the appropriate constants for Fq and Fq6.
  2. Writing code for pairings.

An excellent starting point is the implementation here, and the paper.

Closes #3

128 bit scalar multiplications?

Is there an exposed mechanism for doing 128-bit scalar multiplications? I know the zcash crates have this internally with their BitIterator, but they do not expose it.

Conversion between "PrimeField" and "ScalarField"

How can I convert a type that implements the trait "PrimeField" into the associated type "ScalarField" of the trait "ProjectiveCurve"?

fn test<F: PrimeField, C: ProjectiveCurve>() {
    let rng = &mut thread_rng();
    let value = F::rand(rng);
    // How to convert `value` (of type F) into `value_scalar`
    // (of type <C as algebra::ProjectiveCurve>::ScalarField)?
    let committed_value = C::prime_subgroup_generator().mul(&value_scalar);
}
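A hedged suggestion (an assumption about intent, not part of the original question): if the two fields are meant to coincide, unifying the type parameters avoids the conversion entirely:

fn test<C: ProjectiveCurve>() {
    let rng = &mut thread_rng();
    // Sample directly from the curve's scalar field.
    let value = C::ScalarField::rand(rng);
    let committed_value = C::prime_subgroup_generator().mul(&value);
}

If F and C::ScalarField really are distinct prime fields, a generic conversion has to round-trip through a canonical integer or byte encoding (e.g. into_repr on one side and from_repr on the other) and must handle the case where the two moduli differ.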

Polynomial coefficient as ScalarField

Hello! I am trying to use a polynomial coefficient as a ScalarField of an elliptic curve, like in the following function:

pub fn coeff_committment<G: ProjectiveCurve, F: PrimeField>(p: DensePolynomial<F>, g: G) -> Vec<G> {
    let mut points = Vec::new();
    for coeff in p.coeffs.iter() {
        let mut point = g.clone();
        // This is the line that triggers the reported error: `coeff` has type F,
        // not the scalar type the curve's multiplication expects.
        point.mul_assign(*coeff);
        points.push(point);
    }
    points
}

But I am getting the following error: "the trait std::convert::From is not implemented for <::ScalarField as algebra::fields::PrimeField>::BigInt".

Can someone help me, please? Thanks!
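A hedged suggestion, under the assumption that the coefficients are meant to live in the curve's scalar field (and assuming the zexe-era mul_assign that is generic over values convertible into the scalar's BigInt): pinning the polynomial to G::ScalarField and passing the scalar via its big-integer representation sidesteps the missing From impl:

use algebra::{PrimeField, ProjectiveCurve};
use ff_fft::DensePolynomial;

pub fn coeff_commitment<G: ProjectiveCurve>(p: &DensePolynomial<G::ScalarField>, g: G) -> Vec<G> {
    p.coeffs
        .iter()
        .map(|coeff| {
            let mut point = g.clone();
            // Pass the scalar as its BigInt representation, which mul_assign accepts.
            point.mul_assign(coeff.into_repr());
            point
        })
        .collect()
}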

Point with different representation

Hello! I have the following snippet of code:

use algebra::{PairingEngine, ProjectiveCurve, Group};

fn main() {
 let g = E::G1Projective::prime_subgroup_generator();
        let x = E::Fr::from(4);
        let q: E::G1Projective = g.mul(&x);
        assert_eq!(g+&(g+&(g+&g)), q);
        println!("q: {:#?}", q);
        println!("4g: {:#?}", g+&(g+&(g+&g)));
}

The assert passes, however the prints are different. Is it because one is of type ProjectiveCurve and the other Group? How can I convert one representation to/from another? Thanks!

`crates.io` release checklist

algebra:

  • Move curve specific modules to their own crates. (Reduces compilation time)
  • Investigate how to make ToBytes and FromBytes no_std compatible (WIP in #76).
  • Possibly replace identity functions in Field, Group, and {Projective,Affine}Curve traits with One/Zero traits from num_traits. (Done in #54)
  • Possibly add impl Add<Self> for Self (besides just impl Add<&Self> for Self>). (Done in #53)
  • Add a generic trait for Point Compression for AffineCurve elements (See #51; done in #73).

bench-utils:

  • Check whether it can be replaced/integrated with tracing.
  • Move into algebra behind a trace feature?

r1cs-core:

  • Refactor ConstraintSystem, ConstraintVar, LinearCombination, and Variable APIs as per #34 and #43
  • Rename ConstraintSystem to R1CS.
  • Prefer ConstraintSystem::enter_ns and ConstraintSystem::leave_ns to ConstraintSystem::ns. Additionally, refactor Namespace struct to be just a guard struct which automatically decrements the scope when dropped.

r1cs-std:

  • Make FieldVar, GroupVar, and other variable structs generic over CS: ConstraintSystem<F> as well. This enables getting rid of cs: CS arguments to various functions like add, mul, etc. In turn, this allows impl Add<Self> for FieldVar<...> impls.
  • Add new impl {Add, Mul}<Self::Constant> for FieldVar<...> impls for operating with constants. (This might not be possible due to coherence rules; we might have to use an enum-based approach.)
  • standardize UInt{8, 16, 32, 64}.
  • Make derive macros so that traits like EqGadget, ToBytesGadget are less of a pain to implement.
  • Add support for non-native FpGadgets.

groth16 + gm17:

  • Adopt similar code organization as marlin
  • Implement proving key, verification key, and proof serialization (blocked on #51).

crypto-primitives:

  • Make gadget APIs consistent with above conventions.

Unify Extension Field arithmetic

Currently we have a number of quadratic and cubic extension fields that have essentially the same arithmetic with respect to their base field. We should unify these implementations, perhaps with the following traits:

pub trait QuadraticExtensionParameters {
    /// The base field that we are building an extension over.
    type BaseField: Field;
    /// The underlying prime field.
    type Fp: PrimeField;
    /// The degree of the extension with respect to the prime field.
    const DEGREE_WRT_FP: usize;
    /// The non-residue used to construct the extension.
    const NONRESIDUE_FOR_TOWER: Self::BaseField;
}

pub trait CubicExtensionParameters {
    /// The base field that we are building an extension over.
    type BaseField: Field;
    /// The underlying prime field.
    type Fp: PrimeField;
    /// The degree of the extension with respect to the prime field.
    const DEGREE_WRT_FP: usize;
    /// The non-residue used to construct the extension.
    const NONRESIDUE_FOR_TOWER: Self::BaseField;
}

Then, in a module like Fp2, we can do the following:

pub trait Fp2Parameters
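(The snippet ends abruptly in the original. A hedged guess at the intended continuation, for illustration only: Fp2Parameters would become a thin specialization of the generic quadratic trait:)

pub trait Fp2Parameters: QuadraticExtensionParameters {}
// with BaseField = Fp, since Fp2 is a direct quadratic extension of the prime field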

Frobenius coefficients and permutation

frobenius_map(), as defined in algebra/src/fields/models/fp3.rs, is only valid for fields of characteristic 3k + 1, and will fail for characteristic 3k + 2.
That is fine for the curves of my interest, but the code probably was intended (and is expected) to be general.
A permutation of the c1 and c2 field components (with multiplication by constants) would probably fit the 3k + 2 case better.
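A short worked justification (standard field arithmetic, added for clarity): in Fp3 = Fp[u]/(u^3 - beta), the Frobenius map acts as phi(a0 + a1*u + a2*u^2) = a0 + a1*u^p + a2*u^(2p). For p = 3k + 1, u^p = beta^k * u, so each coefficient is merely scaled in place, which is what a table of precomputed Frobenius coefficients implements. For p = 3k + 2, however, u^p = beta^k * u^2 and u^(2p) = beta^(2k+1) * u, so besides the scaling, the c1 and c2 components swap places, which is exactly the permutation suggested above.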

Allow `/` in constraint namespace

Currently the test constraint system does not allow / to be used in names for constraints. However, I would like to use / for describing certain computations, e.g. c = a / b.

Could the separator be changed, or perhaps allow for slashes in the name?

Possible optimizations for GM17 proof verification gadget

Hi, @Pratyush!

I've been looking into the implementation of the SNARK gadget for proof verification of GM17 proofs. I noticed a couple of things regarding potential optimizations that we also discussed with @arielgabizon and I'm curious to hear your thoughts on those:

  • if I'm not mistaken, the g_psi computed here is a linear combination of the verification key and the public inputs to the SNARK, thus it requires one scalar multiplication and one addition per public input, so is quite expensive to compute inside the SNARK. Is there any particular reason why this is currently being computed in the circuit (the alternative being to provide g_psi as a public input and have the verifier check that it has been computed correctly)?
  • in the multi-Miller loop here the pairing e(pvk.g_alpha_pc.clone(), pvk.h_beta_pc.clone()) consists only of elements of the verification key (so only of public inputs), so it should also be possible to compute it outside of the circuit and provide it as a public input to the gadget
  • if g_psi is computed outside of the circuit as suggested above, I think the pairing e(g_psi_prep, pvk.h_gamma_pc.clone()) can also be computed outside and provided as a public input.

Overall, the above could potentially lead to quite a few constraints being removed from the verification gadget (at a small additional cost for the verifier), unless I'm missing something.

What are your thoughts on the above?

Add point compression + canonical serialization for `AffineCurve` elements

This is one of the critical missing pieces for this library. A sketch of the design I have in mind is to make CanonicalSerialize and CanonicalDeserialize traits:

pub trait CanonicalSerialize {
    type Output: Iterator<Item = u8>;
    fn serialize(&self) -> Self::Output;
}

pub trait CanonicalDeserialize: CanonicalSerialize {
    fn deserialize(bytes: Self::Output) -> Self;
}

The idea would then be to implement these directly for the final instantiated curves, e.g. for Bls12_381_G1. (I don't see an easy way to have a generic implementation that works for all base fields.)

Consider storing a pointer to the value of a coefficient in the final R1CS object

Loopring put out a blog post earlier today discussing optimizations they used (https://medium.com/loopring-protocol/zksnark-prover-optimizations-3e9a3e5578c0)

One interesting memory-reduction optimization is noting that for ~all circuits of interest, there are not that many distinct coefficient values. So instead of storing one field element per coefficient, you can make a list of the unique coefficients used in your circuit, and have each coefficient in the final R1CS constraints be an index into this list. This drops the memory needed per coefficient from 32 bytes to 8 bytes (a one-word index).

An 8-byte index is more than enough for 2^24 unique coefficients; if for some reason the library had to support R1CS instances with even more unique coefficients, 16 bytes would still be more than enough.

This optimization could be taken when producing the final R1CS object after optimizations, that gets passed into the prover.

This should be a notable reduction in the memory size of Groth16, less so for Fractal/Marlin due to them requiring degree K polynomials. This may increase proving times though, as you have to read the value of the coefficient from the pointer, though this is hopefully in L1 cache.

Similarly, I believe that this observation also suggests a better Fractal/Marlin arithmetization for circuits of interest. (Eliminating the preprocessed val oracles in exchange for more (relatively cheap) verifier work)
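A hedged sketch of the data layout this suggests (hypothetical types, not an existing API in this repository):

use std::collections::HashMap;
use std::hash::Hash;

/// Post-optimization R1CS matrices with interned coefficients.
struct InternedMatrices<F> {
    /// Deduplicated coefficient values; small for most circuits of interest.
    unique_coeffs: Vec<F>,
    /// Each entry is (variable index, index into `unique_coeffs`) instead of
    /// (variable index, full 32-byte field element).
    a: Vec<Vec<(usize, u32)>>,
    b: Vec<Vec<(usize, u32)>>,
    c: Vec<Vec<(usize, u32)>>,
}

/// Return the index of `coeff` in `unique`, inserting it on first sight.
fn intern<F: Clone + Eq + Hash>(
    coeff: &F,
    unique: &mut Vec<F>,
    index_of: &mut HashMap<F, u32>,
) -> u32 {
    *index_of.entry(coeff.clone()).or_insert_with(|| {
        unique.push(coeff.clone());
        (unique.len() - 1) as u32
    })
}

During proving, reading a coefficient then becomes unique_coeffs[idx as usize], the extra indirection (hopefully L1-resident) the issue mentions.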

CanonicalSerialize/Deserialize for dynamically sized types

Hi!

I just looked into implementing CanonicalSerialize/Deserialize for proofs and keys.
After discussing a bit with @brunoffranca and @kobigurk, here are some of the points that are still open.

  1. CanonicalSerialize and CanonicalDeserialize currently take (mutable) byte slices as input. Most other serialisation methods rely on a type that implements Write/Read instead. @kobigurk told me the choice was made to be no_std compatible. However, there is already a no_std io replacement as part of algebra, which could be used.
  2. Currently, buffer_size in CanonicalSerialize doesn’t depend on &self, but only statically on the type. While this works for constant size types, it won’t work for the keys that, e.g., depend on the number of inputs. For the serialisation, it would be perfectly fine to add the dependency on &self here. However, this wouldn't work for the length checks in CanonicalDeserialize. @kobigurk 's suggestion was:

I imagine we could introduce something like “base_buffer_size” and “instance_buffer_size”, where “instance_buffer_size” would default to the base buffer size.

Additionally, I thought about introducing convenience methods to serialize/deserialize without the need of caring about the extra_info (which is mostly required for the recursive calls anyway).
I agree with @xu-cheng in #51 that the current way of calling the (de-)serialization methods is quite unintuitive.

Alternatively, it could also be an option to look into using serde again or to adapt an easier to use interface similar to what we've done here (mainly since we rely on a BigEndian encoding instead of the current format in zexe):
https://github.com/nimiq/core-rs-albatross/blob/bls12-377/bls/src/compression.rs

Since the original issue #51 is closed, I opened up this new one for discussion.

Looking forward to hearing your thoughts on this!

Best,
Pascal

Optimizing time to run example "application" code

Context:
I'm trying to write example code implementing the Applications from the Zexe paper, starting with the MintOrConservePredicate.

Problem:
To get started, I copied the integration test in dpc/tests/plain_dpc.rs and began modifying it to run a basic MintOrConservePredicate. However, the code-debug cycle here is very slow. The build times for dpc are slow (separate issue), but the running time for this test is also many painful minutes on my laptop.

I added some timing code to identify the bottleneck:

<InstantiatedDPC as DPCScheme<MerkleTreeIdealLedger>>::setup: 1111.4s
InstantiatedDPC::execute: 642.0s
InstantiatedDPC::verify: 3.9s

i.e. generating the PublicParameters from the "trusted setup" is the biggest bottleneck.

Possible fixes:
Option 1: implement serde for PublicParameters. This way I could save these parameters to a file and load them up later (not suitable for an integration test, but fine for a function I can put in the examples/ folder), and then do InstantiatedDPC::execute and ::verify as before.

If you'd prefer not adding serde as an external dependency, then it could be marked as dev-dependency or an optional compilation feature?

I started down this rabbit hole, and it meant sprinkling serde implementations onto many, many structs. So I thought I'd first check whether this path is desirable.

Option 2:
profiling the "setup" function to understand which functions are slow in it, and possibly optimising them. Haven't done this yet.

Any other options?

Failing test for Boolean

It seems that there are missing asserts in this line and the next: https://github.com/scipr-lab/zexe/blob/14445938364f7ee97f42dfe7eaa1393f1f0b87c1/snark-gadgets/src/bits/boolean.rs#L1797, which causes test compilation to fail.

Does this change make sense?

-            Boolean::enforce_nand(&mut cs, &[Boolean::constant(false)]).is_ok();
-            Boolean::enforce_nand(&mut cs, &[Boolean::constant(true)]).is_err();
+            assert!(Boolean::enforce_nand(&mut cs, &[Boolean::constant(false)]).is_ok());
+            assert!(Boolean::enforce_nand(&mut cs, &[Boolean::constant(true)]).is_err());

DenseOrSparsePolynomial division out of bounds

I have this function, which compiles fine. It creates a numerator polynomial, which is always the same, and then an iterator of denominators.
It then iterates over them, dividing the numerator by each denominator and returning the unwrapped results.

pub fn compute_to_n_lagrange_polys(&self, size: usize) -> Vec<Polynomial<E::Fr>> {
        use ff_fft::{DenseOrSparsePolynomial, SparsePolynomial};
        use std::convert::TryInto;
        use std::str::FromStr;
        // Compute the numerator.
        // TODO: What should we do with the constant `Cx`?
        let numerator: DenseOrSparsePolynomial<E::Fr> =
            SparsePolynomial::from_coefficients_slice(&[
                (0, -E::Fr::one()),
                (size + 1, E::Fr::one()),
            ])
            .into();

        let denominators: Vec<DenseOrSparsePolynomial<E::Fr>> = {
            let mut den: Vec<DenseOrSparsePolynomial<E::Fr>> = Vec::default();
            for i in 0..=size {
                den.push(
                    SparsePolynomial::from_coefficients_slice(&[
                        (1, E::Fr::one()),
                        // Weird but `from_repr` and `from` are not suitable for this impl.
                        (0, -E::Fr::from_str(&format!("{}", i)).ok().unwrap()),
                    ])
                    .into(),
                );
            }
            den
        };

        let res: Vec<Polynomial<E::Fr>> = denominators
            .into_iter()
            .map(|den| numerator.divide_with_q_and_r(&den).unwrap().0)
            .collect();
        res
    }

When I call it with the input size = 1, it runs perfectly and provides the correct resulting polynomial.

The problem is that for size > 1 this is what happens:

panicked at 'index out of bounds: the len is 4 but the index is 4', ~/.cargo/git/checkouts/zexe-f74c42809fa6cfe1/859430c/ff-fft/src/polynomial/mod.rs:124:21

Maybe I'm doing something wrong since for size = 1 it works, but it should divide the numerator for each denominator of the iterator.

Move `crypto_primitives` and `gadgets` out of `dpc`

Currently dpc contains a number of cryptographic primitives that the DPC construction uses, such as commitments, CRHs, PRFs, signatures, etc. It also contains R1CS gadgets for these.

It would be beneficial to spin these off into their own separate crates to enable other projects to use them without pulling in the heavyweight dpc crate. This would improve compile time for those projects as well as for dpc.

There are two strategies for this:

  • Make a separate crypto-primitives crate that has modules for each primitive. The crate has an optional r1cs feature that enables the gadgets for the primitive.
  • Make separate crates for each primitive, each having an r1cs flag as above.

I don't have any strong preferences about the approach we take, but I am a bit wary of having a proliferation of tiny crates.

Use Secrecy/Zeroize for Secrets

Secrets such as private keys and toxic waste must remain in memory for as short a time as possible, to minimize the chance of extraction via side channels and similar methods. They must also never be logged.

All such types in the repo should implement Zeroize, so that they're written with 0's once they're dropped (or manually zeroized), and they should be wrapped with Secret so that they're not accidentally logged. The internal type can be accessed via ExposeSecret.
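A hedged sketch using the zeroize and secrecy crates (attribute and API names vary between crate versions; this follows the zeroize 1.x / secrecy 0.x style):

use secrecy::{ExposeSecret, Secret};
use zeroize::Zeroize;

/// Hypothetical wrapper for setup randomness ("toxic waste").
#[derive(Zeroize)]
#[zeroize(drop)] // overwrite the bytes with zeros when dropped
struct ToxicWaste([u8; 32]);

fn example() {
    let secret = Secret::new(ToxicWaste([7u8; 32]));
    // `Secret` is designed so the inner value is not printed by accident;
    // access requires an explicit, greppable call:
    let _bytes = &secret.expose_secret().0;
} // dropped here: the underlying buffer is zeroized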

Constant element from the finite field

To assign a random element from the field, we can do the following

let x = Fr::rand(rng);

How can we assign to x a constant element from the field? For example: let x = Fr::42, which assigns the field element 42 to x?
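A hedged answer (consistent with usage that appears elsewhere in these issues, e.g. E::Fr::from(4)): prime-field types provide From conversions from small unsigned integers, so the following should work:

let x = Fr::from(42u64); // the field element 42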

Failing test for Pedersen hash constraints counting

Hey @Pratyush,

The num_constraints test fails with expected constraints to be 8192 and the actual being 4096, on this line: https://github.com/scipr-lab/zexe/blob/14445938364f7ee97f42dfe7eaa1393f1f0b87c1/dpc/src/gadgets/crh/pedersen.rs#L178

It seems to me that it assumes it uses the trait's default method implementation for precomputed_base_scalar_mul: https://github.com/scipr-lab/zexe/blob/14445938364f7ee97f42dfe7eaa1393f1f0b87c1/snark-gadgets/src/groups/mod.rs#L91, rather than the custom method implemented here (which uses a 2-bit lookup table, which saves a constraint): https://github.com/scipr-lab/zexe/blob/14445938364f7ee97f42dfe7eaa1393f1f0b87c1/snark-gadgets/src/groups/curves/twisted_edwards/mod.rs#L710

Even if we fix that, that brings up to the next assert: https://github.com/scipr-lab/zexe/blob/14445938364f7ee97f42dfe7eaa1393f1f0b87c1/dpc/src/gadgets/crh/pedersen.rs#L182, where I'm not sure where the cost calculation came from.

Would greatly appreciate help :)

SWJacobian and SWProjective infinity point representation

I was looking at algebra/src/curves/models/short_weierstrass_jacobian.rs and algebra/src/curves/models/short_weierstrass_projective.rs and I noticed that:

The implementation of the write() method for GroupAffine in SWJacobian just serializes the x and y coordinates, and, when reading, the read() method reconstructs the infinity flag as:
x.is_zero() && y.is_one()
In SWProjective, instead, the infinity flag is explicitly written and read.
So why can the infinity flag be safely reconstructed from x and y in Jacobian while, apparently, not in Projective?

Also, the implementations of the zero() and is_zero() functions for GroupProjective look exactly the same for SWProjective and SWJacobian, but I think that (of course, correct me if I'm wrong):

  • In Projective, the zero point is (0:1:0), while in Jacobian it is any point (t^2:t^3:0) with t != 0 (conventionally represented as (1:1:0));
  • Because of that, the check that a point is zero, implemented as checking that z is zero, should work only in Jacobian but not in Projective.

What am I missing ?

Serialization Derive Macro does not support Tuples

As title, ref #104. Example:

#[derive(Debug, CanonicalSerialize)]
struct TestCase<E: PairingEngine> {
    a: (E::G1Affine, E::G1Affine), // this makes the macro fail
    b: E::G1Affine, // this works
    c: Vec<E::G2Affine>, // this works
}

It'd be nice if it would just serialize a.0 followed by a.1

Sidenote: In order to use the macro, I had to explicitly import algebra-core to my package. It'd be nice if the macro were usable via importing just algebra as well.

i.e. go from:

zexe_algebra = { package = "algebra", git = "https://github.com/scipr-lab/zexe", version = "0.1.0" }
zexe_algebra_core_derive = { package = "algebra-core-derive", git = "https://github.com/scipr-lab/zexe", version = "0.1.0" }
algebra_core = { package = "algebra-core", git = "https://github.com/scipr-lab/zexe", version = "0.1.0" }

to
zexe_algebra = { package = "algebra", git = "https://github.com/scipr-lab/zexe", version = "0.1.0", features = ["derive"] }

cc @paberr

[Question] Short Weierstrass curve with 381-bit subgroup

If I understand correctly, the SW6 curve in this library has a prime-order subgroup of 377 bits (for use with the BLS12-377 curve). Do you know of a corresponding curve with a prime-order subgroup of 381 bits, so that I can use it with the BLS12-381 curve?

Refactor `ConstraintSystem` API

Currently the method of creating constraints is via the ConstraintSystem trait. Proof systems that rely on r1cs-core have to create structs that implement this trait. There are a couple of problems with this approach:

  • A lot of logic is duplicated across reimplementations (eg in gm17 and in Marlin).
  • Gadgets have to be generic with respect to ConstraintSystem, which can blow up compilation times in large projects (as is the case in the dpc crate), and results in gadgets having an extraneous CS: ConstraintSystem parameter (thus obscuring what's actually going on).

As an alternative, I suggest changing the API to have r1cs-core implement a ConstraintSystem struct that runs in different modes:

  • At compile-time: decide whether or not to maintain debugging information (like in the current TestConstraintSystem).
  • At run-time: use a flag to specify whether we're in setup mode or in prover mode. When in the former, ignore assignments, and when in the latter (potentially) omit generating constraint matrices.

The downside of this approach is that downstream implementers of proof systems would no longer be able to customize the behaviour of constraint generation. However, in my experience this is not really a big issue as the two impls that I have worked with (gm17 and marlin) don't really take advantage of this flexibility.
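A hedged sketch of what the run-time flag might look like (illustrative names, not a committed design):

use ark_ff::Field;
use core::marker::PhantomData;

/// Mode in which a unified ConstraintSystem runs.
pub enum SynthesisMode {
    /// Setup: build constraint matrices, ignore witness assignments.
    Setup,
    /// Proving: build witness assignments; optionally skip the matrices.
    Prove { construct_matrices: bool },
}

/// Skeleton of the struct that r1cs-core would provide.
pub struct ConstraintSystem<F: Field> {
    pub mode: SynthesisMode,
    // matrices, instance/witness assignments, and optional debug info would live here
    _field: PhantomData<F>,
}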

Implement PLONK

Why? Very fast. Universal setup + proof composition = dark contracts.

  • Original PLONK
  • SHPLONK (non-essential): multiple commitments
  • TurboPLONK: custom gates (e.g. EC point-addition gates; can make pairing computation extremely cheap!)
  • PLOOKUP: Pedersen hashes. Extension: permutation networks for branching programs/RAM?
  • Zexe PLONK: PLONK single-layer rollup with BLS12-377 + BW6-761

Things to Investigate

  • Whether, using methods similar to TurboPLONK, not only elliptic-curve point addition but also pairing/G2 operations can be made primitive

Issue when importing external dependency

When I add:

[dependencies]
ff-fft = { git = "https://github.com/scipr-lab/zexe/" }

I get the error:

error: failed to load source for a dependency on ff-fft

Caused by:
Unable to update https://github.com/scipr/zexe

Caused by:
failed to clone into: /home/fiono/.cargo/git/db/zexe-3255c1405822b681

Caused by:
failed to authenticate when downloading repository
attempted to find username/password via git's credential.helper support, but failed

Caused by:
failed to acquire username/password from local configuration

Any idea why?

no-std implementation

Currently zexe requires the Rust std library to compile. This is a problem when compiling for certain embedded devices, such as hardware wallets. A no-std implementation not using std would allow usability across more devices.

FF improvements / comparison with GoFF

This was recently released: https://hackmd.io/@zkteam/goff

There might be improvements which we could port over. If not, it'd be useful to benchmark the algebra crate against goff.

iden3/go-iden3-crypto#15

this is amazing, just tried now on the Poseidon hash go implementation and just migrating from *big.Int to the code generated by goff is so much faster, as example some benchmark results:
- 1.550.013 ns/op with *big.Int
- 121.748 ns/op with goff generated finite fields arithmetics code

cargo-audit: Update crypto-primitives/blake2

Security advisory from cargo-audit. Not critical since we don't use HMACs anywhere

$ cargo audit
error: Vulnerable crates found!

ID:       RUSTSEC-2019-0019
Crate:    blake2
Version:  0.7.1
Date:     2019-08-25
URL:      https://rustsec.org/advisories/RUSTSEC-2019-0019
Title:    HMAC-BLAKE2 algorithms compute incorrect results
Solution:  upgrade to >= 0.8.1
Dependency tree:
blake2 0.7.1
└── crypto-primitives 0.1.0

Could be nice to also add cargo-audit to CI.

Remove duplicate constraints

If you're performing exponentiation on the same variable in multiple locations in your code, you may end up with duplicate constraints in many places.

Ideally the backend should be able to catch these and only use a single constraint. We don't want to do this naively at constraint-addition time, since that would be a quadratic overhead. It should be doable in n log n time via a sorting algorithm on the hashes of the constraints.

EDIT: We could have a hashmap on the constraints and then do it at constraint-addition time after all, since that's only a linear overhead.
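A hedged sketch of the hashmap variant from the edit (the Constraint type and its canonical encoding are hypothetical; a real version would hash a canonicalized form of the (A, B, C) linear combinations):

use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Hypothetical canonical constraint: sorted (variable, coefficient-id) terms.
#[derive(Hash, PartialEq, Eq)]
struct Constraint {
    a: Vec<(usize, u64)>,
    b: Vec<(usize, u64)>,
    c: Vec<(usize, u64)>,
}

fn canonical_hash(c: &Constraint) -> u64 {
    let mut h = DefaultHasher::new();
    c.hash(&mut h);
    h.finish()
}

struct DedupingCS {
    constraints: Vec<Constraint>,
    seen: HashMap<u64, usize>, // canonical hash -> index of first occurrence
}

impl DedupingCS {
    /// Add a constraint, reusing an identical earlier one when possible.
    fn add(&mut self, c: Constraint) -> usize {
        let h = canonical_hash(&c);
        if let Some(&idx) = self.seen.get(&h) {
            return idx; // duplicate: O(1) extra work per added constraint
        }
        let idx = self.constraints.len();
        self.constraints.push(c);
        self.seen.insert(h, idx);
        idx
    }
}

A production version would compare the constraints themselves on a hash hit rather than trusting the hash alone, to rule out collisions.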

Polynomial ring over a finite field

Hello!

I would like to construct a polynomial ring over a finite field, but I am new to Rust and this type of math.
Can this library be used for this and, if so, can someone point me in the right direction?

Thanks!
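A hedged pointer (using the ff_fft names that appear elsewhere in these issues; newer releases rename some of these types, and Fr below stands for any prime field type from the algebra crate): a dense polynomial over a prime field already models the ring Fp[x]:

use ff_fft::Polynomial;

// p(x) = 1 + 2x + 3x^2, with coefficients in the prime field Fr:
let p = Polynomial::from_coefficients_slice(&[Fr::from(1u64), Fr::from(2u64), Fr::from(3u64)]);

Ring addition and multiplication then correspond to the crate's polynomial addition and multiplication.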

Pedersen hash doesn't seem to work with bls12-377 curve

When using a Pedersen hash (primitive and gadget) with the following parameters:

pub type CRH = PedersenCRH<bls12_377::G1Projective, CRHWindow>;

#[derive(Clone, PartialEq, Eq, Hash)]
pub struct CRHWindow;

impl PedersenWindow for CRHWindow {
    const WINDOW_SIZE: usize = 128;
    const NUM_WINDOWS: usize = 8;
}

pub type CRHGadget = PedersenCRHGadget<bls12_377::G1Projective, bls12_377::Fq, bls12_377::G1Gadget>;

pub type CRHGadgetParameters = <CRHGadget as FixedLengthCRHGadget<CRH, bls12_377::Fq>>::ParametersGadget;

The gadget and the primitive output different points even when given the same input. The tests crh_works and crh_primitive_gadget_test fail.
Weirdly enough, the tests pass with Jubjub.

natural logarithm in variable_base.rs

I'm trying to make zexe work with parity's substrate runtime. In order to achieve that, I need to make zexe work with no_std and also remove all floating-point operations. While doing the latter, I stumbled upon an interesting line in variable_base.rs.

https://github.com/scipr-lab/zexe/blob/6c5ea20bc44f2a1edb1fd281d768ab57b0caa78f/algebra/src/msm/variable_base.rs#L14-L18

Line 17 looks very similar to ceil(ln(scalars_len)) and when I look at the commit history, this is exactly what it used to be:

https://github.com/scipr-lab/zexe/blob/bf21fd2834168e74cb2b0d4a850d76cd9ef01693/algebra/src/msm/variable_base.rs#L15-L19

Since the + 2.0 looks quite suspicious to me, I compared the two implementations, and it appears the current code is no longer exactly the ceil(ln(scalars_len)) it used to be. I don't understand how the result of this expression affects the code further on, but if it only needs to be approximately ceil(ln(scalars_len)), maybe it is possible to replace it with an implementation that doesn't use floats?

Here is a link to the Rust playground with 3 different implementations of this line:

https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=845f4ca1f1641eaf50445413f97569eb
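For what it's worth, a hedged float-free alternative (an assumption, not code from this repository): approximate ln(n) as log2(n) * ln(2), with ln(2) ~ 69/100, using only integer operations:

/// Integer approximation of ln(a); valid for a >= 1.
fn ln_without_floats(a: usize) -> usize {
    // floor(log2(a)) * 69 / 100 approximates log2(a) * ln(2) = ln(a)
    let floor_log2 = (usize::BITS - 1 - a.leading_zeros()) as usize;
    floor_log2 * 69 / 100
}

Whether this approximation is close enough depends on how the window size feeds into the bucket method's cost, which is exactly the open question in the issue.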

Add is_equal gadget

It's useful to have a gadget that outputs whether two numbers are equal, or equivalently whether an element is zero (then you just check the difference). A folklore way of doing it for x is (see the pseudocode sketch after the example link below):

  • Allocating an unconstrained variable inv_x that contains the inverse of x if x != 0, and 0 otherwise
  • Constraining x * inv_x == 1 - result
  • Constraining x * result == 0

An example implementation exists here: https://github.com/celo-org/bls-zexe/pull/148/files#diff-529c1da151158d954e4e07b51e4542d1R96
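A hedged pseudocode sketch tying the constraints together (the witness(...) and enforce(a, b, c) helpers, where enforce means the R1CS constraint a * b = c, are hypothetical; to compare two values a and b, apply the gadget to x = a - b):

inv_x  = witness( if x != 0 { 1/x } else { 0 } )
result = witness( if x == 0 { 1 } else { 0 } )
enforce( x, inv_x, 1 - result )   // when x = 0 this reads 0 = 1 - result, forcing result = 1
enforce( x, result, 0 )           // when x != 0 this forces result = 0 (and then inv_x = 1/x)

Together, the two constraints make result = 1 the only satisfying assignment when x = 0, and result = 0 the only one otherwise.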

A proof of proof

Hi all,
How can one use zexe to create a circuit that verifies a GM17 proof?
Thanks,

sccache not found on nightly CI

Currently nightly CI is failing with

$ sccache -s
No command 'sccache' found, did you mean:
 Command 'ccache' from package 'ccache' (main)
sccache: command not found

The log shows that cargo install sccache is never called: https://travis-ci.org/scipr-lab/zexe/jobs/656389865

Stable works, since it properly installs it: https://travis-ci.org/scipr-lab/zexe/jobs/656389864#L225

Not sure why this happens, maybe due to the install tag in the nightly section? https://github.com/scipr-lab/zexe/blob/master/.travis.yml#L10-L13

cocks-pinch curve

Hi all,

Is the only requirement for the Cocks-Pinch method in your scheme to have a subgroup order equal to the field size of the BLS12-377 curve? If so, there is a curve with slightly fewer bits than 782 (namely 766) with the same embedding degree (6) and the same 2-adicity (3).

Best,

Improve Gadget naming and infrastructure.

I'm not very happy with our gadget infrastructure. This issue is intended to serve as a collection of the pain points and possible solutions:

  • Gadget is appended everywhere. This makes code unnecessarily verbose and difficult to read. Possible solutions:
    • For gadgets that are tasked solely with generating constraints, move the functionality directly to the original structs. For example, instead of impl CRHGadget for PedersenCRHGadget, do impl CRHGadget for PedersenCRH. It might be tricky to get this to work, because the gadget might use type variables not present in the original.
    • For gadgets that represent variables in the constraint system, rename them to Var/Variable. E.g.: FpGadget -> FpVar, GroupGadget -> GroupVar.
  • It's not possible to track the provenance of LCs once they're entered into the ConstraintSystem API. This inhibits optimizations like the following: if a LinearCombination lc consisting of many different variables is used in many constraints, then those variables are substituted into each of those constraints. This increases the density of the constraint system, which hurts performance for SNARKs like Marlin and Fractal. A solution would be to replace a LinearCombination that is used many times with a new variable v via a single constraint like lc * 1 = v. However, because LCs are not tracked within the CS, this is currently not possible.

A possible approach:

  • Instead of generating linear-combinations by themselves, users must ask the CS to generate a linear combination. The CS can then just return an opaque LCId that can be used in place of LinearCombinations by downstream users. Internally, CS maps these IDs to actual linear combinations. Later on, it can decide how to optimize and merge these virtual LCs into constraints they appear in.
