Math

The Cryptimeleon Math library provides the mathematical foundation for the other Cryptimeleon libraries. It implements mathematical basics such as groups, rings, and fields (e.g., Zn), as well as cryptographic pairings. Furthermore, it offers serialization support for the implemented structures.

Security Disclaimer

WARNING: This library is meant to be used for prototyping and as a research tool only. It has not been sufficiently vetted for use in security-critical production environments. All implementations are to be considered experimental.

Features

Below we give a more detailed list of features.

Groups

Math offers the following algebraic groups:

  • Bilinear groups:
    • Type 1 and type 3 pairings
  • Elliptic curves without pairings:
    • Secp256k1
  • Symmetric group Sn
  • Cartesian product group

Rings

Math offers the following algebraic rings and fields:

  • Boolean ring
  • Cartesian product ring
  • Field extension class for polynomials of the form x^d + c
  • Integer ring
  • Polynomial ring
  • Ring Zn and Field Zp for prime p

Other Features

Math also implements a number of other features:

  • Multi-exponentiation algorithms
  • Deferred evaluation of group operations for automatic application of those multi-exponentiation algorithms
  • Serialization features that integrate with the implemented algebraic structures
  • Group operation counting capabilities
  • A random generator
  • Hash function implementations such as SHA256 and SHA512

Quickstart

Installation With Maven

To add the newest Math version as a dependency, add this to your project's POM:

<dependency>
    <groupId>org.cryptimeleon</groupId>
    <artifactId>math</artifactId>
    <version>[3.0,)</version>
</dependency>

Installation With Gradle

Math is published via Maven Central. Therefore, you need to add mavenCentral() to the repositories section of your project's build.gradle file. Then, add implementation group: 'org.cryptimeleon', name: 'math', version: '3.+' to the dependencies section of your build.gradle file.

For example:

repositories {
    mavenCentral()
}

dependencies {
    implementation group: 'org.cryptimeleon', name: 'math', version: '3.+'
}

Tutorials

We recommend you go through our short Math tutorial to get started.

We also provide a walkthrough where we show you how to implement a pairing-based signature scheme here.
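
To give a first impression of the API, here is a minimal sketch of computing in Zp (class names follow the library's documented structure; exact signatures may differ):

import org.cryptimeleon.math.structures.rings.zn.Zp;

import java.math.BigInteger;

public class ZpExample {
    public static void main(String[] args) {
        Zp zp = new Zp(BigInteger.valueOf(101)); // the field Z_p for the prime p = 101

        Zp.ZpElement a = zp.valueOf(42);
        Zp.ZpElement b = zp.valueOf(7);

        // Arithmetic is done via methods on the elements
        System.out.println(a.mul(b).add(zp.getOneElement())); // (42*7 + 1) mod 101 = 93
    }
}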

Note Regarding Pairing Performance

The included Java pairings are not optimized for performance. If pairing performance matters to you, we recommend using our Mcl wrapper library, which includes an optimized type 3 pairing.

Miscellaneous Information

  • Official Documentation can be found here.
    • The For Contributors area includes information on how to contribute.
  • Math adheres to Semantic Versioning.
  • The changelog can be found here.
  • Math is licensed under Apache License 2.0, see LICENSE file.

Authors

The library was implemented at Paderborn University in the research group "Codes and Cryptography".

Issues

LazyGroup (auto-)optimizing computations

The question is whether we want to keep the current model, where everything is computed essentially the way the user writes it (e.g., g.pow(x).pow(y) is computed with two exponentiations, and e.apply(g,h).pow(x) exponentiates in GT), or whether we want to auto-optimize these expressions (e.g., the second .pow() call in g.pow(x).pow(y) would return g.pow(x*y) instead, and e.apply(g,h).pow(x) would become e.apply(g.pow(x),h)).

The question may be how far this should go (though I'd ideally like an all-or-nothing approach). What about (g1.pow(x1).op(g2.pow(x2)).op(g3.pow(x3))...).pow(y) ? Should this automatically become (g1.pow(x1.mul(y)).op(g2.pow(x2.mul(y))).op(g3.pow(x3.mul(y)))...)? Should we decide this dynamically depending on whether x1, x2,... are small numbers or not?

At some point, my thinking was that we should have the user make useful computational decisions statically instead of dynamically rewriting everything. The question is whether this is the right way to go.

Alternatively, as an idea out of left field, we could implement some sort of OpinionatedGroup that gives feedback on how to (statically) optimize operations. This has the advantage that we won't clutter the (normal) runtime with dynamic optimization decisions.
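
To make the rewrite idea concrete, here is a toy sketch (not the actual LazyGroup API) of how a deferred pow node could fold repeated exponentiations into one:

import java.math.BigInteger;

// Toy sketch: a deferred pow node that folds g.pow(x).pow(y) into g.pow(x*y).
// Purely illustrative; the real lazy group classes look different.
final class LazyPowNode {
    final Object base;          // stands in for the underlying group element
    final BigInteger exponent;

    LazyPowNode(Object base, BigInteger exponent) {
        this.base = base;
        this.exponent = exponent;
    }

    LazyPowNode pow(BigInteger y) {
        // Instead of nesting a second pow node (two exponentiations on compute()),
        // multiply the exponents and keep a single node (one exponentiation).
        return new LazyPowNode(base, exponent.multiply(y));
    }
}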

Create compressing Representation->JSON converter

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 35)

Currently, each object is represented completely and independently of everything else in its Representation.
This often produces redundancies (e.g., a polynomial ring over F_p and a hash function to the same F_p both include the representation of F_p in their own representation).

A solution to this (without losing the ability to easily reconstruct objects from their Representations) is to create a smart Representation -> JSON converter that looks through the supplied Representation, removes redundancies(*), and then encodes the result in JSON (in the JSON -> Representation conversion, it re-introduces the redundancies).

This is an optional enhancement that would improve public parameter and key sizes for protocols.

(*): go through the Representation's object tree, find equal subtrees (equals() method), and replace occurrences of redundant subtrees with a reference to a single Representation of that subtree.
For example: remove the F_p-Representation from the Representations of the polynomial ring and the hash function, add it to some top-level object at the key "ref_3", and put a reference for all occurrences (like polynomialRing.__ref_baseRing = 3). When going from JSON to Representation again, resolve these references.

Comment by Jan Bobolz

We could also do all this without reinventing the wheel, probably by just gzipping the JSON string when sending it.

Overall, I'll leave this as an optional enhancement with very low priority.
(However, the ticket will remain here as a ward against people saying that representations are too redundant at times)
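
For reference, the gzip alternative needs only the JDK (a minimal sketch; the integration with the converters is left out):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class JsonGzip {
    // Compresses a JSON string; redundant subtrees (repeated F_p representations, etc.)
    // compress very well even without custom deduplication logic.
    public static byte[] gzip(String json) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buffer)) {
            gz.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return buffer.toByteArray();
    }
}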

Efficiency improvements for package de.upb.crypto.math.pairings.BarretoNaehrig

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 85)

Several optimizations are pending:

  1. Currently, fields are derived from de.upb.crypto.math.structures.quotient.FiniteFieldExtension. This is inefficient, especially the implementation of multiplication. The BN fields shall directly implement the Field interface, including a Karatsuba-based implementation of finite-field multiplication.
  2. Only the less efficient Tate pairing is implemented. The Ate and optimal Ate pairings shall also be implemented.
  3. The final exponentiation is not optimized. It shall be based on the Frobenius endomorphism and multi-exponentiation.
  4. Remove artificial BN BaseField
  5. Implement inversion in GT based on conjugation (unitary elements)

Comment by Peter Günther

The first item is handled in branch feature/native_bn_arithmetic and already partly merged into develop with commit https://git.cs.uni-paderborn.de/ag-bloemer/sis/-/commit/0b709fe25bf8c56b423edb5d2959d04e6f5828c9

Comment by Fabian Eidens

@peterg, can you give us a hint for item 2 so that we can look at it and implement it?

Comment by Peter Günther

This should be a good starting point.

Frederik Vercauteren. "Optimal pairings." In: IEEE Transactions on Information Theory 56.1 (2010), pp. 455–461.

See also the PhD dissertation of Michael Naehrig.

Comment by Peter Günther

To get something closer to an implementation, look at Algorithm 6.3 in

http://nbn-resolving.de/urn:nbn:de:hbz:466:2-24853
http://digital.ub.uni-paderborn.de/hsx/download/pdf/2041207?originalFilename=true

The expected speedup for a product of pairings is roughly a factor of 4 compared to the Tate pairing, I guess.

Is it critical for you? If you really need it, I could implement it together with the Jacobian coordinates when there is a bit more time.

I would first implement Jacobian coordinates because they apply to a wider class of curves and should bring more speedup.

Improve Zp::injectiveValueOf

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 191)

Currently, de.upb.crypto.math.structures.zn.Zp#injectiveValueOf is only injective for each byte array length, but, for example, [0, 0, 1] and [0,1] collide.

This can be avoided by mapping a vector (a1,a2,a3,...) to (1,a1,a2,a3,...) in Zp.
That way, vectors of different lengths will have different images (at the cost of only one bit that Zp must be able to accommodate).

There are other APIs that would benefit from this idea as well (usually, they'll call de.upb.crypto.math.structures.zn.Zp#injectiveValueOf internally), such as the injective map for SignatureSchemes and for commitments.
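
A minimal sketch of the proposed fix (a hypothetical standalone helper, not the actual Zp method): prepend a constant byte so that arrays of different lengths map to different integers.

import java.math.BigInteger;

public class InjectiveEncoding {
    // Maps (a1, ..., an) to the integer with big-endian bytes (1, a1, ..., an).
    // The leading 1 ensures that [0, 0, 1] and [0, 1] no longer collide; the result
    // must still be below p, which costs Zp one extra bit of capacity.
    public static BigInteger encode(byte[] bytes) {
        byte[] prefixed = new byte[bytes.length + 1];
        prefixed[0] = 1;
        System.arraycopy(bytes, 0, prefixed, 1, bytes.length);
        return new BigInteger(1, prefixed); // interpret as a nonnegative integer
    }
}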

Remove Log4J

The library includes log4j for logging. However, logging seems like it should be done at the application level and not by us. Therefore, we should remove log4j as well as the logging.

Empty Restorer String cannot handle Enum

Problem

@Represented does not work for Enum types as ReprUtil does not know how to handle these.

How to Reproduce

Switch to deprecate_old_repr branch of Craco and run the tests. For example, testRecreateRepresentable number 40 has the aforementioned issue.

registerProvider is misleading

The BilinearGroupFactory.registerProvider method is misleading: it takes a list of providers, but its name (singular "provider") suggests it would only take a single one. Furthermore, it does not do what its javadoc says, which is to add external providers; instead, it just overwrites the existing providers.

I suggest renaming the method to setProviders and keeping it the same otherwise (and adjusting the javadoc, of course).
Also, the provider attribute should be renamed accordingly.

I'm creating an issue for this since it would be an API change (and to not forget about it).

Improve Javadoc and API succinctness using more package-private classes

@JanBobolz suggested we improve the javadoc, as some parts of it are fairly messy due to basically every class having public visibility.
His suggestion was to compile the javadoc with -public and --show-members protected flags, as well as mark classes we do not want to be part of the public API as package-private.
protected methods and member variables would then still be shown.

The only allowed class access modifiers are public and package-private.
By marking a class as package-private, it is only visible inside its package.

This has the advantage of allowing for more backwards-compatible API changes, i.e. reduced likelihood of major/minor version increases being necessary, as well as reducing noise inside the rendered Javadoc.

What to mark as package-private

The classes that I have selected as being likely candidates for a package-private access modifier are as follows:

Classes that implement RepresentationHandler

These are ArrayRepresentationHandler, DependentRepresentationHandler, ListAndSetRepresentationHandler, MapRepresentationHandler, StandaloneRepresentationHandler.

These classes are only used inside ReprUtil and will most likely never be used anywhere else, unless someone would want to implement their own ReprUtil (unlikely).

The serialization.annotations.internal package would then have to be merged with serialization.annotations to allow ReprUtil continued access.

Would then also make sense to make the RepresentationHandler interface itself package-private.
I think if you were to ever write your own representation handler, you would do it as part of the Math library and not in your own project.
Especially since there is no API to add new representation handlers to ReprUtil from the outside.
So would need to edit ReprUtil in any case.

Counting Impl classes

The user should always use the non-Impl version of the counting classes.
The non-Impl versions all make use of lazy group stuff to allow for counting multi-exponentiations, as well as counting in a way that results in the fewest operations (since optimizations can be done once compute() is called).

The Impl classes just allow you to count via the basic group stuff.
So if you are replacing a BasicGroup with the counting group, this would lead to more accurate counting.
However, the basic group stuff should only be used for already very efficient groups.
So why even bother with counting group operations in that case?
They don't really matter for performance.
Plus you can emulate a basic group implementation by calling compute() after every call.

So would make CountingBilinearGroupImpl, CountingBilinearMapImpl, CountingGroupElementImpl, CountingGroupImpl, CountingHomomorphismImpl, and HashIntoCountingGroupImpl package-private.

Supersingular classes

Would make everything but SupersingularBilinearGroup package-private.
Then one can still use the supersingular bilinear group as usual.
Not possible to wrap SupersingularBilinearGroup in a BasicBilinearGroup anymore, but we can add a class for that if we think that is useful.

Barreto-Naehrig classes

Would make everything but BarretoNaehrigBilinearGroup and BarretoNaehrigParameterSpec package-private.
Same reasoning as for supersingular classes.

Lazy group classes

Theoretically one could make some classes package-private, like ExpLazyGroupElement.
It seems a bit random though, as others, e.g. ConstLazyGroupElement, are used by the counting group classes and therefore could not be made package-private.

Ring group classes

Theoretically, one could make RingGroupImpl, RingAdditiveGroupImpl, and RingUnitGroupImpl package-private.
Only RingGroup is really needed as a public API as you can get the additive and unit subgroups via additiveGroupOf() and unitGroupOf().

This would mean, since RingGroup is a BasicGroup, that you cannot use the lazy group stuff with the ring group stuff, but are there any ring groups where lazy evaluation is interesting?
If there were, we could just introduce a RingLazyGroup anyways.

An issue would be that you could not extend RingAdditiveGroupImpl or RingUnitGroupImpl for your specific ring, which would allow you to override estimateCostInvPerOp with a more accurate number.
Not sure there are any other reasons to want to extend those classes.

Mcl bil group classes

Basically the same as for the other bilinear groups: make everything but MclBilinearGroup package-private.

Improve PairingSourceGroup::cofactorMultiplication(FieldElement, FieldElement)

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 207)

This is still somewhat sloppily implemented, as we steal GroupElement::op from the subgroup. Probably not a problem. The perfect solution would probably be to create a temporary PairingSourceGroup with size' = size() * cofactor and cofactor' = 1.

Future of BilinearGroupFactory

I want to discuss getting rid of BilinearGroupFactory.

A history lesson

The idea for the factory came from theory, where we usually just write "generate some type 3 bilinear group for security parameter λ, I don't really care about the details".
Of course, nowadays, we do care about the details: we want to measure performance and be able to say "this scheme takes this long on a BN256 group". So the old vision is actually somewhat outdated.

The way it actually is right now

From personal experience, using the BilinearGroupFactory was almost never something that made things much easier; it just added one extra step to instantiating the group I really wanted.
For example, to switch from our Java BN pairing to mcl, you need to

  1. register the mcl provider to the factory
  2. guess/check valid parameters for the factory requirements (in particular, security parameter).

Proposal

What if instead, to switch to mcl, you could just replace

BilinearGroup bg = new BarretoNaehrigProvider(128).provide();

with

BilinearGroup bg = new MclBilinearGroupProvider().provide();

i.e. we just leave out the BilinearGroupFactory middleman and remove this forced specification of requirements.

How to replace the features of the BilinearGroupFactory

Common interface for instantiation

We can still use the BilinearGroupProvider interface to instantiate bilinear groups. I'd relax the interface slightly to get rid of some of the complexity in using it.

Security parameter

Instead of requiring the user to supply the desired security level to provide(), we could add a getSecurityLevel() method to BilinearGroup to inform users of how many bits of security it provides. This makes more sense to me in a world where some instantiations (e.g., mcl's BN256) are not arbitrarily flexible in their security parameter.

Type

Type 1/2/3 is something users are usually aware of, anyway, when instantiating the group. Also here, the BilinearGroup should, in my opinion, inform about the type, instead of making the user specify it.
(Though in this case, one can argue that it would be nice to prevent a user from accidentally benchmarking with the wrong group type?! In the worst case, we could add the type as a provide() parameter, though it would probably be more confusing than helpful.)
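
As a sketch of the proposal so far (hypothetical additions, not the current interface):

// Hypothetical additions sketching the proposal above; not the current interface.
public interface BilinearGroup {
    /** Bits of security this concrete instantiation offers (e.g., 100 for mcl's BN256). */
    int getSecurityLevel();

    /** The pairing type (1, 2, or 3) of this instantiation. */
    int getPairingType();
}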

Hash function requirements

Let's remove these "I need a hash into G1" requirements and just supply the hash functions if they exist; otherwise, throw an exception when the user tries to access them.

Number of prime factors

This should become a constructor argument for those BilinearGroupProviders that actually support composite order groups.

Discoverability

To replace the "one stop shop" function of the BilinearGroupFactory (only one thing you need to know to get either our supersingular type 1 or our type 3 pairing), we could name our packages better and encode the "type1/type3" distinction into the package names. Then people will be able to easily find the corresponding classes.
Also, we only have two implementations, so we could just list them exactly where we currently nudge people towards the BilinearGroupFactory.

Implement FiniteFieldExtension::getPrimitiveElement()

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Note: FiniteFieldExtension has been deleted from the project. The comments may still be relevant, though.

Original Text (Issue 201)

In "Mathematics of Public Key Cryptography" (Galbrath 2012), there is an algorithm for this on page 51

Comment by Denis Diemert

Also related Zp::getPrimitiveElement()

Comment by Denis Diemert

Maybe ExtensionField::getPrimitiveElement() is also affected; currently, null is returned there. If that is intentional, we need to add javadoc.

Prune unused stuff

I'd like to remove from the library:

  • RingMultiplicationImpl
  • Seed
  • de/upb/crypto/math/structures/quotient

This is because those classes are unused and don't really offer many compelling features on their own.

Sliding Window (Multi-)Exponentiation: Handling Negative Exponents

Multi-Exponentiation

Currently, we do not have the standard interleaved sliding window multi-exponentiation integrated into LazyGroup due to the lack of handling of negative exponents.

The wNAF algorithm handles negative exponents as follows:

  1. The exponent is negated so that it becomes positive.
  2. During computation of the wNAF representation, the wNAF digits are negated. This is possible since we assume access to the inverted precomputations, thanks to efficient inversion.

This approach does not work the same way for the regular sliding window algorithm as we cannot assume inversion to be cheap.
In our discussion, @JanBobolz and I identified several approaches to handle negative exponents:

Approach 1 (A1)

Invert the basis. That is, given a negative exponent e with basis g, we invert g and treat the result as a whole new group element. The precomputation would then be done for the inverted basis as usual.

One problem with this is that precomputePow() is intended to be run by the user on each basis to do the precomputations for later. For a negative exponent, the user would need to be aware of this fact and call precomputePow on the inverted basis instead. This is where our previous approach using expressions had an advantage, since the precomputation there was able to automatically detect that a basis needed to be inverted.

Approach 2 (A2)

Take the absolute value of every exponent but store the fact that it is actually negative. Then, when accessing the precomputations for this element, we invert every precomputation we access.
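
A rough sketch of A2 with toy types (the real precomputation tables look different):

import java.math.BigInteger;
import java.util.List;

// Toy sketch of approach A2: store |e| and a sign bit, and invert precomputed
// odd powers on access when the sign is negative.
final class SignedExponentTerm {
    final BigInteger absExponent;
    final boolean negative;
    final List<Object> precomputedOddPowers; // stands in for cached g^1, g^3, g^5, ...

    SignedExponentTerm(BigInteger exponent, List<Object> precomputedOddPowers) {
        this.negative = exponent.signum() < 0;
        this.absExponent = exponent.abs();
        this.precomputedOddPowers = precomputedOddPowers;
    }

    Object getPrecomputedPower(int i) {
        Object power = precomputedOddPowers.get(i);
        return negative ? invert(power) : power; // invert on access for negative e
    }

    private Object invert(Object element) {
        return element; // placeholder: would call GroupElement::inv in the real library
    }
}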

Comparison

We will probably need some actual benchmarks to compare these properly. Plus, I don't actually know how long inversions take in Z_p compared to group operations.

Most likely, there is a tradeoff: if the inverted group element is used many times, doing a new precomputation might be better.

Clever choice of (multi-)exponentiation algorithm

Currently, the LazyGroup has a specific (multi-)exponentiation algorithm hardcoded. Instead, the LazyGroup::compute(Multiexponentiation) method (and related ones) should more cleverly choose which algorithm to use depending on the input, and/or allow the user to configure this.

Convert/Wrap concurrent SHA-3 to new interface

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 48)

Currently, all our implemented hashing algorithms are based on JCA's SHA-256. We want to have an implementation of our hash interface that uses the project group's Keccak implementation.

This also offers the chance to do smarter hashing into structures by choosing the byte output length of Keccak according to structure size.

Comment by Jan Bobolz

The current implementation of ConcurrentSHA3 is licensed under the AGPL (see javadoc), which is comparable to but less permissive than the GPL (somewhat stricter copyleft).

Unless we want to GPL (or AGPL?) our whole project, this is problematic. That has to be resolved before porting anything to our APIs.

Comment by Fabian Eidens

This is delayed until we find a Keccak implementation with a "good" license or someone has free time to implement it in Java according to the reference implementation (version >= 3.2), which is written in Python.

Comment by Jan Bobolz

SHA-3 will be part of Java 9 (scheduled for release in March 2017):

https://blogs.oracle.com/java/jdk-9-categories and
http://openjdk.java.net/jeps/287

That should make it easy to wrap that into our library.

Simplify Random interface

The whole de/upb/crypto/math/random package should be redesigned so that users can call the methods they're interested in statically without having to go through multiple classes to get the singleton.
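
A minimal sketch of such a static facade over the JDK's SecureRandom (hypothetical class and method names, not the current API):

import java.math.BigInteger;
import java.security.SecureRandom;

// Hypothetical static facade: no singleton lookup, just static methods.
public final class RandomGenerator {
    private static final SecureRandom RNG = new SecureRandom();

    private RandomGenerator() {}

    /** Returns a uniformly random integer in [0, bound). */
    public static BigInteger getRandomNumber(BigInteger bound) {
        BigInteger result;
        do {
            result = new BigInteger(bound.bitLength(), RNG); // rejection sampling
        } while (result.compareTo(bound) >= 0);
        return result;
    }

    public static byte[] getRandomBytes(int length) {
        byte[] bytes = new byte[length];
        RNG.nextBytes(bytes);
        return bytes;
    }
}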

Math Rewrite Bug: Wrong Window Size Used for Precomputation

The interleavingWnafMultiExp method here passes maxExp to ensurePrecomputation in line 193. However, ensurePrecomputation expects to be given the window size, not the maximum exponent for the odd power precomputation. Hence, this leads to a much larger window size than desired.

Continuous integration

Currently, CI only builds the project; it doesn't run tests.

Please make it so that it runs tests.

Add security remark Javadoc to GroupElement etc.

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 186)

Add Javadoc to GroupElement etc. It should state that

  • for op, we assume the right-hand side to be in the right group; otherwise, behavior is undefined (and may be a security risk); see the sketch below
  • document reasons why getElement(Representation) should be used for untrusted Representations of GroupElements, and make clear that you should never trust Group representations from an untrusted source. (Basically, tell people not to send groups alongside their elements just so everything is neatly StandaloneRepresentable.)
  • Add some remarks about third-party serialization frameworks that use reflection magic to serialize group elements; not a good idea. It would not be a breach of contract if each group element stored the exponents from which it was formed (i.e., you'd end up with some dlogs in your serialization). More realistically, we do have to normalize elements before sending them; getRepresentation() makes sure that it's safe to send.
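
For instance, the op remark could look like this (an illustrative sketch, not final wording):

/**
 * Calculates the result of applying the group operation to this element and e.
 * <p>
 * Security remark: e is assumed to lie in the same group as this element.
 * If e stems from an untrusted Representation, recreate it via the group's
 * getElement(Representation) method first; passing an element of another group
 * results in undefined behavior and may be a security risk.
 */
GroupElement op(GroupElement e);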

Fix normalization of EllipticCurvePoints

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 211)

Currently, there is a method normalize() that returns a new EllipticCurvePoint. I suggest instead adding a normalized version to the existing point, or just replacing the point values with normalized ones (which, I think, is never bad).
Furthermore, normalize() probably shouldn't be public. The user of such points doesn't care; protected should suffice.

Comment by Jan Bobolz

Also: let isNormalized() be protected

Implement Hash function into Supersingular Target Group

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 199)

Comment by Jan Bobolz

... if feasible. I currently wouldn't know how. @peterg, any insights?
I guess in the random oracle model you might just hash to a random polynomial and then exponentiate with the cofactor?

Comment by Peter Günther

Actually, I think there should be a more efficient method for type 1 and type 3 pairings. The keywords are torus-based cryptography and pairing compression (see Chapter 3 of Naehrig's PhD thesis or Chapter 6.2 of Galbraith's book on the mathematics of public key cryptography).

The method proposed by Jan does not work in general, although I forgot what the real problem was. I remember a paper that said that hashing to Gt is only securely possible for supersingular pairings. I cannot find the paper now and I forgot a lot of things.

One thing that you need to consider is that your collision-resistant hash function into the extension field has to cover the complete extension field. If you hit only subfields, the combination with cofactor exponentiation will not be collision resistant, because cofactor exponentiation maps all elements of a proper subfield to 1. (Gt is a subgroup of the norm-1 subgroup; the elements of a proper subfield are in the kernel of cofactor exponentiation.)

For the supersingular case there is also a more efficient way; look at Definition 6.3.7 of the Galbraith book.
Let Fq^2 = Fq(a) and let a' be the conjugate of a (a' = a^q). Then f(x) = (x+a)/(x+a') should be a bijective mapping from Fq to Gt. For the theory, look at the book. The point is that you get an element with norm 1, and the norm-1 subgroup contains Gt. Hence, basically one inversion. (Actually, efficient implementations of cofactor exponentiation also fall back to the inversion trick with the conjugate.)
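
As a sketch of that map with placeholder types (the real field element API differs; this encodes only the formula above):

// Placeholder field-element interface; the real library types differ.
interface FE {
    FE add(FE other);
    FE div(FE other);
}

// Sketch of the map f(x) = (x + a) / (x + a') from Fq into the norm-1 subgroup
// containing Gt, per Definition 6.3.7 of the Galbraith book; the cost is
// essentially one inversion.
final class SupersingularGtHashSketch {
    private final FE a;          // Fq^2 = Fq(a)
    private final FE aConjugate; // a' = a^q

    SupersingularGtHashSketch(FE a, FE aConjugate) {
        this.a = a;
        this.aConjugate = aConjugate;
    }

    FE map(FE x) {
        return x.add(a).div(x.add(aConjugate));
    }
}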

Maybe the following works also help to judge security. They are about G1 and G2, but they might still help to understand the security implications. I did not check them again.

Pierre-Alain Fouque and Mehdi Tibouchi. "Indifferentiable Hashing to Barreto–Naehrig Curves."

Eric Brier, Jean-Sébastien Coron, Thomas Icart, David Madore, Hugues Randriam, and Mehdi Tibouchi. "Efficient Indifferentiable Hashing into Ordinary Elliptic Curves."

Optimize Final Exponentiation of BN

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet.)

Original Text (Issue 203)

View on Gitlab as there is a lot of math formatting.

Improve test coverage

It would be nice to have tests for the following methods (or investigate why there is no test coverage in case IntelliJ is wrong):

  • HashIntoZnAdditiveGroup and HashIntoZp have no coverage at all (although the latter should be covered by HashIntoZn)
  • PolynomialRing.Polynomial#scalarProduct(Polynomial)
  • ExtensionField is theoretically covered by the bilinear group tests, but should also be tested in the RingTests
  • None of the new Vector type classes have any test coverage, and neither does the other cartesian product stuff
  • The boolean ring structure has no test coverage
  • RingGroup has no test coverage, but perhaps not necessary here
  • The exception condition for GroupElement#pow(RingElement) is never hit
  • For some reason none of the converter classes have coverage although they are tested in our ConverterTest.
  • Boolean expression stuff has no coverage

For some reason, some code that is definitely tested shows 0% coverage, such as our representation converters, RingElement#mul(BigInteger), and Ring#extendedEuclideanAlgorithm(List<RingElement>).

LazyGroup pairing product equations

Currently, the LazyGroup computes pairing product expressions (e(g1,g2) * e(g3,g4) * ...) naively.

Instead, the BilinearMapImpl interface should get methods to compute such expressions (similar to Group::compute(Multiexponentiation)) and the LazyGroup should defer computation to those.

This in theory enables shared final exponentiation (if the BilinearMapImpl plays along).

In addition, we could offer precomputation similar to GroupElement::precomputePow() but for pairings (GroupElement::precomputeLeftPairingArgument() 😉).

Allow changing the multi-exponentiation algorithm used for debug groups

The multi-exponentiation algorithm used by LazyGroup can be selected using setters. However, DebugGroup does not expose the underlying LazyGroup instances to the outside. Therefore the multi-exponentiation algorithm cannot be changed and will always be the wNAF algorithm.

It would be nice to somehow give the option to change the used multi-exponentiation algorithm so that we can count the operations done by other algorithms.

Hash Stuff

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 138)

  • Merge SHA256/SHA512
  • HashAccumulator implementations should call MessageDigest's update method in their own update method (see the sketch below)
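
A minimal sketch of the second point, wrapping the JDK's MessageDigest (hypothetical class name):

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: an accumulator that forwards update() directly to MessageDigest
// instead of buffering the whole input first.
public class SHA256HashAccumulator {
    private final MessageDigest digest;

    public SHA256HashAccumulator() {
        try {
            digest = MessageDigest.getInstance("SHA-256");
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 not available", e);
        }
    }

    public void update(byte[] bytes) {
        digest.update(bytes); // stream data into the digest, no intermediate buffer
    }

    public byte[] extractBytes() {
        return digest.digest();
    }
}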

Be more transparent about security parameter

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 181)

As it turns out, people have to select concrete security parameters for their schemes when using the BilinearGroupFactoryFactory.
The rough guideline was that security parameter x corresponds roughly to DLOG being as hard as AES with key length x.

However, there are probably better measures out there to judge how secure an elliptic curve is. I'd like the user to be able to easily see some basic measures (q, r, embedding degree, GT structure) that they can compare to values they see on the internet.

So this ticket is threefold:

  • Find out what ecc parameters one should use (for supersingular curves, our source seems to be outdated)
  • Allow a way to view the chosen curve parameters corresponding to a certain security parameter
  • Make it easier to instantiate a curve with given parameters (refactor the Factories that set up the elliptic curve groups)

This makes performance evaluation easier and makes it easier for people to judge what security level they actually get (if they're experts). I do not want to remove the current "just give me an AES-like security parameter" mechanism (but perhaps explain it better).

Math Rewrite Bug: Precomputations Are Not Reused

The following code leads to more group operations than expected:

DebugGroupImpl debugGroup = new DebugGroupImpl("D1", BigInteger.valueOf(1000000));
LazyGroup lazyGroup = new LazyGroup(debugGroup);
GroupElement elem = lazyGroup.getUniformlyRandomNonNeutral();
for (int i = 0; i < 5; ++i) {
    elem.pow(16).computeSync();
}

This is due to the precomputations for the exponentiation being done each time, instead of being cached as should be the case.

Admissible hash for BarretoNaehrigPointEncoding

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 212)

The BarretoNaehrigPointEncoding's hashIntoStructure currently has a todo that states:

TODO: this is not an admissible encoding in the sense of Boneh-Franklin because not every element in the codomain has the same number of pre-images. E.g., by setting sel to 0, we discard 2/3 of all points. Furthermore, we can have an admissible encoding in the sense of Boneh-Franklin only if the codomain of the hash function is larger than the codomain of the encoding.

Either we should implement an admissible encoding for this, or add some API that makes users aware of potential problems when using this hash.

Update Barreto-Naehrig pairing group bit size for specific pairing

According to the IETF, the special extended tower number field sieve [KB16] reduces the security level offered by the BN pairing at some group size.

This has already been taken into account for our mclwrap wrapper (security level down to 100 bits from previously 128 as calculated in [BD18, Section 6.2]), but still needs to be done for our own BN pairing. The current calculation is to choose the group bit size as double the security parameter. According to the IETF, the new bit size for a security level of 128 bits should be 461 bits. This would imply a factor of roughly 3.6 instead of the previous 2.

However, a linear scaling seems to be wrong anyway. This can be seen from the fact that a 256-bit group size corresponds to a 100-bit security parameter, a factor of roughly 2.56, leading to the question of whether we can find a better way to compute the group bit size.

Math Rewrite Bug: Recursive Group Ops lead to Exponential Blowup in Evaluation

The following code leads to an exponential blowup in group operations performed:

DebugGroupImpl debugGroup = new DebugGroupImpl("D1", BigInteger.valueOf(1000000));
LazyGroup lazyGroup = new LazyGroup(debugGroup);
GroupElement elem = lazyGroup.getUniformlyRandomNonNeutral();
for (int i = 0; i < 5; ++i) {
    elem = elem.op(elem);
}
elem.computeSync();

The cause is that reusing elem in each iteration builds a tree of group operations that grows exponentially when unfolded. The LazyGroup evaluation does not intelligently reuse already computed results; instead, it evaluates every group operation separately, resulting in an exponential number of group operations.
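
A sketch of the obvious fix with a toy node type (not the actual lazy group classes): cache each node's result so that shared subtrees are evaluated only once.

// Toy sketch: memoize each lazy node's result so that a DAG of group operations
// is evaluated in linear time even when nodes are shared, as in the loop above.
abstract class LazyNodeSketch {
    private Object cached; // result of this subtree, once computed

    final Object compute() {
        if (cached == null)
            cached = evaluate();
        return cached;
    }

    // Evaluates this node, recursively calling compute() (not evaluate()) on children.
    abstract Object evaluate();
}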

Reorganize packages

I noted the following:

  • The de/upb/crypto/math/interfaces package contains some interfaces, but some interfaces are, instead, contained in a non-"interface" package (e.g., de/upb/crypto/math/pairings/generic/BilinearGroup.java). We should make this consistent.
  • de/upb/crypto/math/structures/ec feels like it belongs much more into the pairing or elliptic package.

My personal take on the first point would be to get rid of de/upb/crypto/math/interfaces and instead put all those interfaces into the packages they belong to. I think this makes for much better browsability to not have to completely switch package roots to find an interface (e.g., when you're browsing Javadoc, you're interested in, say, groups, and then you'd click "group", instead of "interfaces" and then "group").

Overall, I'd appreciate the following structure over what we have now:

  • groups (contains GroupElement interface, etc. directly)
    • elliptic (WeierstrassCurve, BilinearGroup interfaces etc.)
      • type1.supersingular
      • type3.bn
      • nopairing (Secp256k1)
    • sn, etc.
  • rings (RingElement interface, etc.)
    • zn, polynomial, etc. - also: ExtensionField and such, which is currently, confusingly, under pairings
  • expressions, hash, random, serialization as before.

So elliptic, interfaces, pairings, structures would disappear from top-level and instead, we have this nice distinction between groups and rings at the top level.

Serialization Data Tracking

Similarly to group operation counting, we would like to be able to easily track serialization statistics, such as the number of serialized group elements for each group. Since the Converter classes have no interaction with the groups themselves, only with the Representation, integrating any counting within the group seems impractical.

@JanBobolz mentioned in this issue that we could just add a counter to the debug group and count the number of elements converted to their representations as a proxy for counting actually serialized group elements. This should suffice, since one probably would not calculate the representation without intending to actually serialize the affected group element.

This is very similar to the current plans for group operation counting and only requires adding a counter to DebugGroupImpl.

We also want to support benchmarking the actual length of the serialized data, but I currently think that this is easy enough by just checking the length of the result that comes out of the Converter after calling serialize().
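
The counter itself is trivial; a sketch of what would be added to DebugGroupImpl (hypothetical names):

import java.util.concurrent.atomic.AtomicLong;

// Sketch: a counter incremented whenever an element's representation is computed,
// as a proxy for "this element was serialized".
class SerializationCounterSketch {
    private final AtomicLong representationCalls = new AtomicLong();

    void onGetRepresentation() {
        representationCalls.incrementAndGet();
    }

    long getNumRepresentationCalls() {
        return representationCalls.get();
    }
}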

Group Operation Counting

The ability to count group operations such as ops, squarings and inversions, as well as exponentiations is useful to

  1. find bugs by comparing the actual number of ops to what one would expect from the algorithm, and
  2. benchmark an implemented scheme in a way that is mostly independent from the speed of the implementation.

Requirements

The requirements for such a feature would be as follows:

  1. Should be able to track group operations, inversions, squarings (interesting for elliptic curves), exponentiations and multi-exponentiations per group.
  2. Simple to implement to not complicate the code more.
  3. Easy to use.

Ideas

@JanBobolz had the idea of letting the DebugGroupImpl do the counting and enabling support for tracking multi-exponentiations via the LazyGroup. DebugGroupImpl, on its own, cannot properly track operations done in exponentiation algorithms since it just uses Zn multiplication to do the exponentiation, resulting in no actual group operations done that one could track. It is possible to calculate the group operations that would theoretically be done in something like binary square & multiply or wnaf, but we would need to know exactly which algorithm actually ends up being chosen. This also does not solve the issue of tracking multi-exponentiations.

These issues are fixed by using LazyGroup. LazyGroup can update the information stored in DebugGroupImpl whenever it performs a multi-exponentiation, enabling tracking of multi-exponentiations. Furthermore, it uses regular exponentiation algorithms, instead of the Zn multiplication, meaning that group operations used in, for example, wnaf, are tracked normally. The disadvantage of this is that evaluation becomes much less efficient since we don't use the efficient multiplication for exponentiation.

Proposal and Prototype

I have implemented the above approach on this branch.
It counts the number of operations, inversions, squarings, exponentiations, and multi-exponentiations (multi-exponentiations are tracked as a list of the number of terms in each multi-exponentiation performed) in the DebugGroupImpl instance.
To interact with these counters, methods for incrementing, resetting, and retrieving them have been added to DebugGroupImpl. The counters have not been added to the representation, as this seems not useful and would just increase the size of the representation.

LazyGroup has been modified so that, whenever it is initialized with a DebugGroupImpl, it sets a specific boolean variable indicating that it will track exponentiations and multi-exponentiations in the given group.
Whenever it performs the wnaf exponentiation or multi-exponentiation, the data in the debug group is updated.
One could avoid the class variable by checking for DebugGroupImpl explicitly before each (multi-)exponentiation, but this probably induces a slightly higher runtime because of the needed equality check.

Example

An example of a multi-exponentiation is given below.

DebugGroupImpl debugGroup = new DebugGroupImpl("D1", BigInteger.valueOf(1000000));
LazyGroup lazyGroup = new LazyGroup(debugGroup);
GroupElement elem = lazyGroup.getUniformlyRandomNonNeutral();
GroupElement elem2 = lazyGroup.getUniformlyRandomNonNeutral();
elem = elem.op(elem2.pow(16));
elem = elem.op(elem2.pow(16));
elem = elem.computeSync();

System.out.println("Ops: " + debugGroup.getNumOps());
System.out.println("Sqs: " + debugGroup.getNumSquarings());
System.out.println("Invs: " + debugGroup.getNumInversions());
System.out.println("Exps: " + debugGroup.getNumExps());
System.out.println("MultiExps: " + debugGroup.getMultiExpData().toString());

Output:

Ops: 32768 // precomputation for both terms in multi-exponentiation 
           // should only be done once since both bases are the same but that is a current bug
Sqs: 7
Invs: 0
Exps: 0
MultiExps: [2] // multiexponentiation contains two terms

Waiting for single-exponentiation within multiexponentiation

Consider the following code using the LazyGroup:

var h = g1.pow(x).compute();
var C = h.op(g2.pow(y)).compute();

Currently, this will (unless thread scheduling gets weird) cause computation of C to wait for the computation of h to finish before starting its own computation, i.e. the time-cost of the lines above is basically two exponentiations.

However, ideally (given you have two processors), the computation of C wouldn't wait for h, but instead immediately start with the multiexponentiation. In total, this would result in more group operations done, but the time to compute the lines would decrease to max(one pow, one multiexp) because of parallelism. So that would be faster than the current approach.

However, there may also be cases where it's stupid not to wait for a (sub-)computation to finish. For example (assume two cores):

var h = g1.pow(x).op(g3.pow(z)).compute();
var h2 = g1.pow(x).compute();
var C = h.op(g2.pow(y)).compute();

In this case, h and h2 are being computed in parallel. h2 would finish first, then C would start computing. It's likely that at that point, h is almost finished computing, so for C it's likely better to wait a short time for h instead of re-doing everything that h has already almost finished, anyway.

Provide standard ECC

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 183)

Provide a list of standard ECC parameters as an enum or something.

  • Related to plugin mechanism
  • Some standard parameters for pairing curves

Add new publishing method and mode

The goal is to publish to Maven Central via GitHub Actions, using the open source offering of OSSRH.

The question is whether we stick to the current mode @rheitjoh @JanBobolz.
The current publishing mode is publish on: push on branch: release.

Pros

  • we can protect the branch
  • we can establish that we review the pull requests for the release branch, thereby forcing us to stick to our versioning scheme

Cons

  • ?

StandaloneRepresentationHandler cannot handle generic types

Problem

I started moving all Craco constructions over to the representation framework v2 on the branch deprecate_old_repr. The SymmetricKeyKEM class has an attribute of type KeyEncapsulationMechanism<? extends KeyMaterial>. I added a @Represented annotation here and made the KeyEncapsulationMechanism<T> interface extend StandaloneRepresentable. However, StandaloneRepresentationHandler, when asked by ReprUtil whether it can handle this field, returns false due to the type of the field not being a class.

How to reproduce

Switch to the deprecate_old_repr branch of Craco and run the tests. The testRecreateRepresentable tests number 37 and 44 are affected by this.

Snapshot Builds (for all upbcuk projects)

I am posting this in the math issue section, but this obviously applies to all upbcuk projects.

The OSS Nexus Repository allows us to host snapshots as well. So far, we had only planned to make use of composite builds for development instead of snapshots.

Use Case Scenarios

I have written down some use case scenarios that came to mind and how one would solve these using either snapshot or composite builds.

Scenario 1

You are working on craco on branch feature and are simultaneously making changes to math. You want these changes to be integrated while working on craco.

Using Snapshot

Every time you make a change to math, you need to build and locally publish the new math build as a snapshot so that craco can use it. Only master builds are uploaded to the snapshot repo, so you can only use a local build here.

Using Composite Build

You create a feature branch for math and do your changes there. The composite build script includes the math build automatically in your craco project as long as that branch is checked out in git.

Comparison

Composite Build is better here since you don't need to do any manual building and publishing to a local repo.

Scenario 2

You are working on craco on branch feature and are using a specific release version of math. You don't intend to change math.

Gradle just automatically does the dependency resolution as usual. No need to do any snapshot publishing or composite build.

Scenario 3

You are working on craco on branch feature and are using a version of math on branch mbranch. You don't intend to make any changes to math.

Using Snapshot

You need to build and locally publish the new math build as a snapshot so that craco can use it. Only master builds are uploaded to the snapshot repo, so you can only use a local build here.

Using Composite Build

You check out mbranch in the math git and use the useCurrentBranch parameter so that the craco build script automatically includes that math build.

Comparison

Snapshots have the advantage that you always know that the right version is being used by gradle, unless you have messed up the versioning and have different math builds with the same version number. This can happen, however, and can be annoying as well as hard to track down. Composite builds are simple to use, but you need to make sure that the mbranch branch is actually checked out when working on craco. I think composite builds are preferable here.

Scenario 4

You are working on craco on branch feature and are using a version of math on branch master.
You don't intend to make any changes to math.

Using Snapshot

Snapshot builds on master are automatically published to the repository. Gradle automatically does the dependency resolution for you.

Using Composite Build

You check out master in the math git and the craco build script automatically includes it in the build.

Comparison

Here Snapshots win since you don't really need to do anything, although using the composite builds is not too hard either.

Overall

Scenario 4 does present a use case for snapshot builds. Whenever you want to make any changes to the dependency, or use a version from a specific branch, composite builds become easier to use since you avoid any manual build and local publishing process. The composite build using versions from specific branches is also nice in that you don't run into problems where there are different builds with the same version number, e.g. if you are still working on that specific version. There can, however, be problems if you have checked out the wrong branch of the dependency, since the useCurrentBranch parameter means the build script happily uses whatever branch is currently checked out. This should be easier to notice, though, if you, at any point, have the git folder open and can see the checked-out branch. Another advantage of composite builds is that IntelliJ automatically includes the included project's sources, which means you can jump to source without having to open math in a different project window.

I think the main question is whether we want to offer snapshot builds in addition to the composite build script. One could add a parameter to disable the composite build script and revert back to snapshots if desired. Then one could make use of snapshot builds for scenario 4. In general, though, I think that local snapshots are pointless with our composite build script.

Right now, I would say we remove the snapshot travis code so it does not constantly fail. Once we have the repository up and running, we can add snapshot builds back in if we feel the need.

Improve Javadoc Formatting

I just rendered the math Javadoc and noticed that Javadoc does not support markdown-style paragraph formatting, i.e. inserting an empty line between paragraphs. This leads to a lot of blocks of text without any formatting applied. Instead, HTML tags such as <p> should be used.

Furthermore, code blocks should be formatted using the HTML tag <pre>, or the {@code} tag for inline code.

Also, the first sentence of a javadoc block is automatically used as the summary for the object being documented.
This means you should avoid using anything that contains a "." in that summary such as "e.g." or "i.e." as this leads to early termination of the summary sentence (it will be cut off).
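
For example (illustrative only):

/**
 * Computes a multi-exponentiation, that is, a product of powers.
 *
 * <p>Paragraphs need an explicit tag; a blank line alone is ignored by the
 * Javadoc renderer.
 *
 * <pre>
 * // code blocks keep their formatting inside pre tags
 * group.compute(multiexp);
 * </pre>
 *
 * Inline code such as {@code compute()} uses the code tag.
 */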

We should take a look over the Javadoc and fix any such issues before the next release.

Implement fast pairing according to SFC specification

(This issue has been imported from the Gitlab repository because it seems to not have been addressed yet)

Original Text (Issue 70)

To get really fast pairings for various hardware architectures, the pbc Java implementation is not appropriate. This issue requests the integration of fast, hardware-specific pairings into the new interface. The arithmetic up to the pairing computation is provided as a library that is called from within Java.

Comment by Jan Bobolz

The new idea is to handle group operations, exponentiations, and pairings completely inside a C library (which @peterg writes). We (@mirkoj?!) then write a Java (JNI) wrapper that calls the C library for group operations and exposes the Group/GroupElement interface.

The main advantage is that we only need to implement group operations once (namely, in C), and that we do not have to implement a semantics-preserving mapping between the C representation of elliptic curve points and their corresponding Java object hierarchy (which turns out to be a lot of code and probably causes a lot of unnecessary runtime overhead).
So the C code produces points as simple structs and just hands the bytes in the struct's allocated memory to Java. Java would only get to see "black box" bit strings. Java can then hand back that bit string, which the C code can directly interpret as the appropriate struct.

Comment by Peter Günther

The pairing that shall be implemented as part of this issue is the (optimal) ate pairing on BN curves. Details pending.

Comment by Peter Günther

This issue is handled in branch feature/fast_pairing_sfc.

Comment by Jan Bobolz

One of the tasks left here is to integrate it into the build process.
I propose to make the native implementation (Java and C side) its own project so that people can build upb_pbc without any platform-dependent worries. (However, I'm not sure yet how to elegantly decide whether to instantiate the Java or the native pairing.)

For future reference, I found

Easier instantiation of Secp256k1

For the protocol tutorial, I wrote the following code:

// Set up group and generate key
var group = new LazyGroup(new Secp256k1()); // TODO can we make this easier somehow? Like maybe a GroupSuite or something? Like BilinearGroup has everything in one place.
var H1 = new HashIntoLazyGroup(new Secp256k1.HashIntoSecp256k1(), group);

Can we make it easier to instantiate the Secp256k1 group (or, more generally, groups without pairings)?
For BilinearGroup, you call one constructor, everything is immediately wrapped in the right (Lazy)Group, and you have getters for the hash into the group.
Can we have something similar for non-bilinear groups? Like a GroupSuite that's basically a BilinearGroup for a single non-pairing group? Or maybe we make Secp256k1 a singleton and have the getInstance() method return the (lazy-)wrapped GroupImpl by default. And/or maybe the Group interface gets a getHashIntoGroup() method, i.e. the Group shall instantiate/know a proper hash function.

I'm looking for good ideas on how to make sense of my rough thoughts above, and on the best way to make (non-pairing) groups and hashes easier to use in our library.
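
One possible shape for the GroupSuite idea (entirely hypothetical; types as used in the snippet above):

// Hypothetical GroupSuite: bundles a non-pairing group with its hash function,
// mirroring what BilinearGroup offers for pairing groups.
public class GroupSuite {
    private final LazyGroup group;
    private final HashIntoLazyGroup hashIntoGroup;

    public GroupSuite(GroupImpl groupImpl, HashIntoGroupImpl hashImpl) {
        this.group = new LazyGroup(groupImpl);
        this.hashIntoGroup = new HashIntoLazyGroup(hashImpl, group);
    }

    public LazyGroup getGroup() { return group; }
    public HashIntoLazyGroup getHashIntoGroup() { return hashIntoGroup; }
}

// Usage: var suite = new GroupSuite(new Secp256k1(), new Secp256k1.HashIntoSecp256k1());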
