dusk-network / hades252
Implementation of the Hades permutation algorithm used in Poseidon hashes, with ZK-proof capabilities.
Home Page: http://dusk.network
License: Mozilla Public License 2.0
Since the API change, the README.md is no longer up to date.
Currently, the pre-image tests seem to be failing. This indicates that the solid (non-circuit) version and the constraint-system version are not exact parallels.
Make the tests return an Error (as is now done in the blind bid, if you need a reference).
Originally posted by @ZER0 in #54 (comment)
As discussed in that thread, once PLONK migrates to anyhow for error handling, we can update this here.
The license headers need to correspond with what cargo-dusk-analyzer
checks for.
Describe what you want implemented
Update the dusk-plonk dependency to 0.9.
Describe "Why" this is needed
dusk-plonk 0.9 was released with major API improvements and general bug fixes. It is desirable to have these improvements available.
Describe alternatives you've considered
N/A
Additional context
N/A
The current implementation lacks the hash function for the Merkle tree.
We no longer need any std-related code. Therefore the repo can be completely no_std, requiring only the alloc feature to enable plonk and, with it, the capability to fill the StandardComposer.
Migrate all of the methods from the plonk_zexe branch to plonk's master branch.
This implies: TurboComposer.
Multiplying by the MDS matrix in the constraint system
Currently, proof generation takes more than 60 seconds when we use the prescribed parameters:
I believe I have tracked it down to the matrix-multiplication segment. Adding two LCs concatenates both vectors. Multiplying a 9x9 matrix by a vector will produce at least 81 terms in the linear combination.
The multiplication is done (partial_rounds + full_rounds) times, so we have at least 77 * 81 = 6237 terms.
This would be the minimum, assuming that each LC started with one term.
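The estimate above can be modelled as a toy function (not repo code): a t×t MDS multiplication over linear combinations of one term each yields t*t terms per round.

```rust
// Toy model of the term-count lower bound: each of the t output
// entries is a sum of t single-term LCs, so one round contributes
// t * t terms; the total grows linearly with the round count.
fn min_terms(t: usize, rounds: usize) -> usize {
    rounds * t * t
}

fn main() {
    // One round of a 9x9 matrix multiplication: at least 81 terms.
    assert_eq!(min_terms(9, 1), 81);
}
```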
In the code we find examples of duplicated code to which we could perhaps apply pattern-matching techniques:
fn input_full<T>(&self, data: &Vec<T>) -> bool {
    data.len() == self.t
}

pub fn width_left<T>(&self, data: &Vec<T>) -> usize {
    self.t - data.len()
}

pub fn input_bytes(&mut self, bytes: &[u8]) -> Result<(), PermError> {
    // Map arbitrary bytes to group using elligator2
    let scalar = Scalar::hash_from_bytes::<Sha512>(bytes);
    self.input(scalar)
}

pub fn input(&mut self, scalar: Scalar) -> Result<(), PermError> {
    if self.input_full(&self.data) {
        return Err(PermError::InputFull);
    }
    self.data.push(scalar);
    Ok(())
}
or:
fn pad(&mut self) {
    let pad_amount = self.width_left(&self.data);
    let zero = Scalar::zero();
    let zeroes = vec![zero; pad_amount];
    self.data.extend(zeroes);
}

fn pad_lc(&mut self) {
    let pad_amount = self.width_left(&self.data_lc);
    let zero_lc: LinearCombination = Scalar::zero().into();
    let zeroes = vec![zero_lc; pad_amount];
    self.data_lc.extend(zeroes);
}
The idea is to have a single function whose behaviour depends on the parameter passed.
This would reduce code complexity and provide easier usage for the end user.
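A minimal sketch of the merge, assuming the caller supplies the zero element (Scalar::zero(), or Scalar::zero().into() for a LinearCombination); pad_with is a hypothetical name:

```rust
use std::iter;

// Hypothetical generic helper replacing both `pad` and `pad_lc`:
// the element type and its zero value are supplied by the caller.
fn pad_with<T: Clone>(data: &mut Vec<T>, width: usize, zero: T) {
    let missing = width.saturating_sub(data.len());
    data.extend(iter::repeat(zero).take(missing));
}

fn main() {
    // Placeholder scalar = u64 just to demonstrate the behaviour.
    let mut data = vec![7u64; 3];
    pad_with(&mut data, 5, 0u64);
    assert_eq!(data, vec![7, 7, 7, 0, 0]);
}
```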
This branch can be used to test how fast the blindbid is in terms of parallel verification.
https://github.com/dusk-network/Hades252/tree/threads_bench_blindbid
CPU : 2.2 GHz Intel Core i7
Blindbid benchmarks: width = 9, number of constraints = 3196
Verification times
1000 proofs: 121 seconds
900 proofs: 85.05 seconds
800 proofs: 74.42 seconds
700 proofs: 76.81 seconds
600 proofs: 75.62 seconds
500 proofs: 46.88 seconds
400 proofs: 39.21 seconds
300 proofs: 31.41 seconds
200 proofs: 19 seconds
100 proofs: 10 seconds
This is a ~85% improvement over sequential (non-parallel) verification. On average, one blind-bid proof takes 700 ms to verify; the actual time varies between 600 ms and 800 ms.
Here we are setting the first element to zero:
Line 28 in 22fecbc
We need to set it to 2^(self.perm.t) - 1, according to appendix E of https://eprint.iacr.org/2019/458.pdf
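A minimal sketch of the proposed initialization, assuming t is small enough for the shift to fit in a u64 (the real code would build a Scalar from this value):

```rust
// Hypothetical sketch of the fix: the first state element becomes
// 2^t - 1 instead of zero, per appendix E of the Poseidon paper.
fn first_element(t: u32) -> u64 {
    (1u64 << t) - 1
}

fn main() {
    // For a width of t = 9 this yields 511.
    assert_eq!(first_element(9), 511);
}
```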
As per the discussion with Dmitry, the current implementation of the hash function is not correct; it should also be renamed to sponge_hash or something similar, since it is the sponge function. Poseidon also has a hash function for Merkle trees; see issue #28.
This crate's logic moved to dusk-poseidon (see issue dusk-poseidon#240).
This nitpick should be fixed.
We should add the benches in order to see the improvements achieved.
Currently, neither the generation nor the usage of the MDS matrix and round keys is optimized; that should be fixed.
This is in line with the deps we are going to use in #84, and also with the direction the Rust ecosystem is moving towards.
Creating a new permutation returns a Result because we currently ask for R_F.
Since we are aiming for a symmetric permutation, we error if R_F % 2 != 0.
A nicer API would be to ask for R_f and multiply it by two. The problem is that we also ask for R_P, which denotes all of the partial rounds. Changing to R_f would be quite unintuitive, because one parameter would ask for half of the full rounds while the other asks for all of the partial rounds.
This may be easily solved by introducing the term "half_full_rounds".
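A sketch of that constructor under the assumption above (field and method names are hypothetical, not the crate's actual API):

```rust
// Hypothetical "half_full_rounds" constructor: by taking half of R_F
// and doubling it internally, R_F % 2 == 0 holds by construction and
// `new` no longer needs to return a Result.
struct Permutation {
    full_rounds: usize,    // R_F, always even
    partial_rounds: usize, // R_P
}

impl Permutation {
    fn new(half_full_rounds: usize, partial_rounds: usize) -> Self {
        Permutation {
            full_rounds: half_full_rounds * 2,
            partial_rounds,
        }
    }
}

fn main() {
    let p = Permutation::new(4, 59);
    assert_eq!(p.full_rounds, 8);
    assert_eq!(p.full_rounds % 2, 0);
    assert_eq!(p.partial_rounds, 59);
}
```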
According to the paper, the ARK (add-round-key) step must be applied to all elements at the beginning of every round.
Currently, for the partial round, we are applying the ARK only to the last element.
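A sketch of the intended behaviour, using u64 as a placeholder scalar (the real code operates on field elements):

```rust
// Sketch: the round constants are added to *every* state element at
// the start of each round, not only to the last one.
fn add_round_key(state: &mut [u64], round_constants: &[u64]) {
    for (s, k) in state.iter_mut().zip(round_constants) {
        // Wrapping add stands in for field addition.
        *s = s.wrapping_add(*k);
    }
}

fn main() {
    let mut state = [1u64, 2, 3];
    add_round_key(&mut state, &[10, 10, 10]);
    assert_eq!(state, [11, 12, 13]);
}
```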
We have a problem regarding the documentation being uploaded to docs.rs.
See: https://docs.rs/crate/dusk-hades/0.16.0-rc.0
The main problem is how our build architecture is structured. In order to solve it, the crate should be split into a bin and a lib, so that one takes care of the constant generation and the other only of the logic of the Hades permutation algorithm.
While building the examples of gadget usage, I've realized that the way you're supposed to call the gadget functions exposed for the GadgetStrategy is a bit odd.
Currently, you need to pass the StandardComposer by value to the hades_gadget() function and then re-declare it with the mutated state.
This is not in line with typical gadget-function usage.
The way to interact with gadget functions should be to provide a &mut StandardComposer to the function so it gets modified (constraints get added to it), without ever needing to re-declare the same object to apply a gadget to it.
This is how it currently looks:
pub fn hades_gadget(
    composer: StandardComposer,
    pi: P,
    x: &mut [Variable],
) -> (StandardComposer, P) {
    #[cfg(feature = "trace")]
    let circuit_size = composer.circuit_size();

    let mut strategy = GadgetStrategy::new(composer, pi);
    strategy.perm(x);

    #[cfg(feature = "trace")]
    {
        trace!(
            "Hades permutation performed with {} constraints for {} bits",
            strategy.cs.circuit_size() - circuit_size,
            WIDTH
        );
    }

    strategy.into_inner()
}
This is how I envision the gadget function and the GadgetStrategy looking:
/// Implements a Hades252 strategy for `Variable` as input values.
/// Requires a reference to a `ConstraintSystem`.
pub struct GadgetStrategy<'a> {
    /// A reference to the constraint system used by the gadgets
    pub cs: &'a mut StandardComposer,
}

impl<'a> GadgetStrategy<'a> {
    /// Constructs a new `GadgetStrategy` with the constraint system.
    /// In this case, the CS is embedded inside of a plonk `StandardComposer`.
    pub fn new(cs: &'a mut StandardComposer) -> Self {
        GadgetStrategy { cs }
    }

    /// Perform the hades permutation on a plonk circuit given a set of Public
    /// Inputs and a set of Variables used as inputs.
    pub fn hades_gadget(composer: &mut StandardComposer, inputs: &mut [Variable]) {
        #[cfg(feature = "trace")]
        let circuit_size = composer.circuit_size();

        // `composer` is already a mutable reference, so it is passed on as-is.
        let mut strategy = GadgetStrategy::new(composer);
        strategy.perm(inputs);

        #[cfg(feature = "trace")]
        {
            trace!(
                "Hades permutation performed with {} constraints for {} bits",
                strategy.cs.circuit_size() - circuit_size,
                WIDTH
            );
        }
    }

    //.........
}
This way, we will be able to call the gadget in the manner recommended by PLONK; see: https://github.com/dusk-network/plonk/blob/f2e889184961b885d9e095afe35b137165112c91/examples/3_1_final_gadget_orientation.rs#L49
The call will look like:
let mut composer = StandardComposer::new();
let mut inputs = [BlsScalar::one(); WIDTH];
GadgetStrategy::hades_gadget(&mut composer, &mut inputs);
// Now the hades gadget is printed into my internal ConstraintSystem.
The Permutation currently contains the code for both Scalar and LinearCombination; we should probably split them. Also, the Hash struct seems to be a mere wrapper around Permutation, so it might not be necessary.
Currently Hades is populating ROUND_CONSTANTS and MDS_MATRIX at runtime (only once, thanks to lazy_static), but this overhead is unnecessary, since it could be done at compile time, which is preferable.
Error: --> /home/runner/.cargo/registry/src/github.com-1ecc6299db9ec823/dusk-hades-0.1.3/src/lib.rs:7:12
  |
7 | #![feature(external_doc)]
  |            ^^^^^^^^^^^^ feature has been removed
  |
  = note: use #[doc = include_str!("filename")] instead, which handles macro invocations
Update plonk dependencies to version 0.7
Currently, Hades is tied to the WIDTH that we have in the Tx tree, which is 5.
But in the bid tree, we have a WIDTH of two.
So, in order to not waste gates on useless hashes, we should implement all of the hashing functions to accept a WIDTH and thus adapt themselves to the number of hashes that we need, and no more.
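One way to make the width a parameter is const generics; a minimal sketch with u64 as a placeholder scalar and a hypothetical helper name:

```rust
// Hypothetical sketch of width-generic state handling: the state
// width W is a const-generic parameter instead of a fixed WIDTH,
// so bid-tree (width 2) and Tx-tree (width 5) share the same code.
fn pad_to_width<const W: usize>(inputs: &[u64]) -> [u64; W] {
    let mut state = [0u64; W];
    // Copy the inputs, leaving the remaining slots at zero.
    state[..inputs.len()].copy_from_slice(inputs);
    state
}

fn main() {
    assert_eq!(pad_to_width::<2>(&[1]), [1, 0]);
    assert_eq!(pad_to_width::<5>(&[1, 2]), [1, 2, 0, 0, 0]);
}
```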
The code is almost undocumented; we should start adding docs.
In the external tests we use unwrap for simplicity; the idiomatic way would be to use ? together with From implementations for the different Error enum types.
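A sketch of that pattern with hypothetical error and function names (only PermError resembles the crate's actual type):

```rust
// Sketch of the `?` + `From` pattern replacing unwrap() in tests.
#[derive(Debug)]
enum PermError {
    InputFull,
}

#[derive(Debug)]
enum TestError {
    Perm(PermError),
}

impl From<PermError> for TestError {
    fn from(e: PermError) -> Self {
        TestError::Perm(e)
    }
}

// Hypothetical stand-in for a fallible crate call.
fn input(full: bool) -> Result<(), PermError> {
    if full {
        Err(PermError::InputFull)
    } else {
        Ok(())
    }
}

fn run_test() -> Result<(), TestError> {
    // `?` converts PermError into TestError via the From impl.
    input(false)?;
    Ok(())
}

fn main() {
    assert!(run_test().is_ok());
    assert!(input(true).is_err());
}
```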
Currently, we allow users to call hash.add(input); however, because we can only accept a limited number of inputs, we must return a Result in case they add more than width - 1 scalars. A simpler API would be to ask the user for a vec![input1, input2, ...], which would reduce the number of error enums. We could also keep both approaches, in case the user does not know all of the inputs before they need to hash and does not want to create temporary values.
One possible change would be to create two structs, one for the solid impl and another for the gadget impl, but this would cause a lot of code duplication. Currently, there is one struct, and the user calls it indicating whether they want the gadget version or the solid impl.
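A sketch of the vec-based alternative, with hypothetical names and u64 standing in for the scalar type:

```rust
// Hypothetical vec-based API: accepting all inputs at once leaves a
// single failure case, an oversized batch.
#[derive(Debug, PartialEq)]
enum HashError {
    TooManyInputs,
}

#[derive(Debug)]
struct Hash {
    width: usize,
    data: Vec<u64>, // placeholder scalar = u64
}

impl Hash {
    fn with_inputs(width: usize, inputs: Vec<u64>) -> Result<Self, HashError> {
        // At most width - 1 scalars may be absorbed.
        if inputs.len() > width - 1 {
            return Err(HashError::TooManyInputs);
        }
        Ok(Hash { width, data: inputs })
    }
}

fn main() {
    assert!(Hash::with_inputs(5, vec![1, 2, 3]).is_ok());
    assert_eq!(
        Hash::with_inputs(5, vec![1, 2, 3, 4, 5]).unwrap_err(),
        HashError::TooManyInputs
    );
}
```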
The Readme.md file mentions the RistrettoScalar field several times, which is no longer used in the master impl.
Instead, it uses the Scalar & G1Affine of dusk-network/Bls12_381, which is what the PLONK repo currently relies on as a backend.
This info should be updated, as well as the parameters that provide values related to the Fr group used in the implementation.
Add a travis.yml file with all of the configurations for: