
fplll


fplll contains implementations of several lattice algorithms. The implementation relies on floating-point orthogonalization, and LLL [LLL82] is central to the code, hence the name.

It includes implementations of floating-point LLL reduction algorithms [NS09,MSV09], offering different trade-offs between speed and guarantees. It contains a 'wrapper' choosing the estimated best sequence of variants in order to provide a guaranteed output as fast as possible [S09]. In the case of the wrapper, the succession of variants is oblivious to the user.

It includes an implementation of the BKZ reduction algorithm [SE94], including the BKZ-2.0 [CN11] improvements (extreme enumeration pruning, pre-processing of blocks, early termination). Additionally, Slide reduction [GN08] and self dual BKZ [MW16] are supported.

It also includes a floating-point implementation of the Kannan-Fincke-Pohst algorithm [K83,FP85] that finds a shortest non-zero lattice vector. Finally, it contains a variant of the enumeration algorithm that computes a lattice vector closest to a given vector belonging to the real span of the lattice.

fplll is distributed under the GNU Lesser General Public License (either version 2.1 of the License, or, at your option, any later version) as published by the Free Software Foundation.

How to cite

@unpublished{fplll,
    author = {The {FPLLL} development team},
    title = {{fplll}, a lattice reduction library, {Version}: 5.4.5},
    year = 2023,
    note = {Available at \url{https://github.com/fplll/fplll}},
    url = {https://github.com/fplll/fplll}
}

Table of contents

Compilation

Installation from packages

fplll is available as a pre-built package for a variety of operating systems; these pre-built packages typically include all mandatory dependencies, and so these packages can be used to start running fplll quickly.

Below, we give some instructions on how to install these packaged variants of fplll.

Note that these packages will be up-to-date for the most recent version of fplll. However, if you want a feature that has recently been added to master (that is not yet in a release) then it is necessary to build from source. If this is the case, please see the Installation from Source section.

Ubuntu and Debian

fplll can be installed directly via Aptitude or Synaptic. Both package managers provide fplll in the fplll-tools package. To install it with Aptitude, run the following command

sudo aptitude install fplll-tools

If you want to use Synaptic, then you will need to search for the fplll-tools package using the search bar.

Conda

fplll can be installed natively as a conda package using the following command

conda install fplll

MacOS

macOS users can install fplll via Homebrew. Assuming that you have Homebrew installed, you may install fplll using the following command

brew install fplll

Docker and AWS

We also provide Docker and AWS images for fplll. They are not hosted in this repository; you can find them here

Installation from source

fplll can also be built from source. Below, we describe the dependencies for building from source, as well as operating-system-specific instructions.

Dependencies

Required

Optional

NOTE: If you are intending to use fplll on Windows 10, then these packages should be installed after the Windows Subsystem for Linux has been installed and activated. Please go to the Windows 10 instructions for more information.

Linux and MacOS

You should download the source code from Github and then run

./autogen.sh

which generates the ./configure script used to configure fplll by calling the appropriate autotools command.

Then, to compile and install type

./configure
make
make install			# (as root)

If GMP, MPFR and/or MPIR are not in the $LD_LIBRARY_PATH, you have to point to the directories where the libraries are, with

./configure --with-gmp=path/to/gmp

or

./configure --with-mpfr=path/to/mpfr

The same philosophy applies to the (optional) QD library. If you want to use mpir instead of gmp, use --enable-mpir and --with-mpir=path/to/mpir.

You can remove the program binaries and object files from the source code directory by typing make clean. To also remove the files that ./configure created (so you can compile the package for a different kind of computer), type make distclean. By default, make install installs the package commands under /usr/local/bin, include files under /usr/local/include, etc. You can specify an installation directory name other than /usr/local by giving ./configure the option --prefix=dirname. Run ./configure --help for further details.

Windows 10

Windows 10 has the "Windows Subsystem for Linux" (WSL), which essentially allows you to use Linux features in Windows without the need for a dual-boot system or a virtual machine. To activate it, first go to Settings -> Update and security -> For developers and enable developer mode. (This may take a while.) Afterwards, open PowerShell as an administrator and run

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

This will enable the WSL. Next, open the Windows Store and navigate to your favourite available Linux distribution; it can be installed as if it were a regular application. Afterwards, the subsystem acts like a regular program, and so it can be opened however you like, e.g. by opening a command prompt and typing bash. With this Linux-like subsystem, installing fplll is similar to the above, except that the package repository is most likely not up to date, and various additional packages need to be installed first. To make sure you only install the most recent software, run:

sudo apt-get update

Then run sudo apt-get install <packages> for the (indirectly) required packages, such as make, autoconf, libtool, gcc, g++, libgmp-dev, libmpfr-dev and pkg-config. Finally, download the fplll source code, extract the contents, navigate to this folder in Bash (commonly found under /mnt/c/<local path> when stored somewhere on the C:\ drive), and run:

./autogen.sh
./configure
make 

The same comments as before apply for using e.g. make install or make distclean instead of make.

Note: to fix a potential error libfplll.so.5: cannot open shared object file: No such file or directory raised when running fplll after a successful compilation, find the location of libfplll.so.5 (probably something like /../fplll/.libs/; run find -name libfplll.so.5 to find it) and run export LD_LIBRARY_PATH=<path>.

Check

Type

make check

Optimization

The default compilation flag is -O3. One may use the -march=native -O3 flags to optimize the binaries. See "this issue" for their impact on the enumeration speed.

How to use

Executable files fplll and latticegen are installed in the directory /usr/local/bin. (Note that the programs generated by make in the fplll/ directory are only wrappers to the programs in fplll/.libs/).

If you type make check, it will also generate the executable file llldiff, in fplll/.libs/.

latticegen

latticegen is a utility for generating matrices (rows form input lattice basis vectors).

The options are:

  • r d b : generates a knapsack-like matrix of dimension d x (d+1) and b bits (see, e.g., [S09]): the i-th vector starts with a random integer of bit-length <=b and the rest is the i-th canonical unit vector.
  • s d b b2 : generates a d x d matrix of a form similar to the one involved when trying to find rational approximations to reals with the same small denominator (see, e.g., [LLL82]): the first vector starts with a random integer of bit-length <=b2 and continues with d-1 independent integers of bit-lengths <=b; for i>1, the i-th vector is the i-th canonical unit vector scaled by a factor 2^b.
  • u d b : generates a d x d matrix whose entries are independent integers of bit-lengths <=b.
  • n d b c : generates an ntru-like matrix. If c is 'b', then it first samples an integer q of bit-length <=b, whereas if c is 'q', then it sets q to the provided value. Then it samples a uniform h in the ring Z_q[x]/(x^n-1). It finally returns the 2 x 2 block matrix [[I, Rot(h)], [0, q*I]], where each block is d x d, the first row of Rot(h) is the coefficient vector of h, and the i-th row of Rot(h) is the shift of the (i-1)-th (with the last entry put back in first position), for all i>1. Warning: this does not produce a genuine ntru lattice with h a genuine public key (see [HPS98]).
  • N d b c : as the previous option, except that the constructed matrix is [[q*I, 0], [Rot(h), I]].
  • q d k b c : generates a q-ary matrix. If c is 'b', then it first samples an integer q of bit-length <=b; if c is 'p', it does the same and updates q to the smallest (probabilistic) prime that is greater; if c is 'q', then it sets q to the provided value. It returns a 2 x 2 block matrix [[I, H], [0, q*I]], where H is (d-k) x k and uniformly random modulo q. These bases correspond to the SIS/LWE q-ary lattices (see [MR09]). Goldstein-Mayer lattices correspond to k=1 and q prime (see [GM03]).
  • t d f : generates a d x d lower-triangular matrix B with B_ii = 2^(d-i+1)^f for all i, and B_ij is uniform between -B_jj/2 and B_jj/2 for all j<i.
  • T d : also takes as input a d-dimensional vector vec read from a file. It generates a d x d lower-triangular matrix B with B_ii = vec[i] for all i and B_ij is uniform between -B_jj/2 and B_jj/2 for all j<i.
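
These generator shapes are easy to mimic. The following Python sketch (not fplll's code; the names knapsack_like and rot are made up for illustration) builds the shape of the 'r d b' knapsack-like matrix and the circulant Rot(h) block used by the ntru-like options:

```python
import random

def knapsack_like(d, b, seed=0):
    """Sketch of the 'r d b' shape: row i is a random <=b-bit integer
    followed by the i-th canonical unit vector (a d x (d+1) matrix)."""
    rng = random.Random(seed)
    return [[rng.randrange(1 << b)] + [1 if j == i else 0 for j in range(d)]
            for i in range(d)]

def rot(h):
    """Circulant Rot(h): row 0 is the coefficient vector of h; each later
    row is the previous one cyclically shifted, with the last entry put
    back in the first position."""
    return [h[-i:] + h[:-i] for i in range(len(h))]

A = knapsack_like(3, 8)
R = rot([1, 2, 3, 4])
```

For the 'n d b c' option, the generated basis is then the block matrix [[I, R], [0, q*I]] assembled from such a circulant block.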

The generated matrix is printed to stdout.

Note that by default, the random bits always use the same seed, to ensure reproducibility. The seed may be changed with the option -randseed <integer>, or the current time (in seconds) may be used with -randseed time. If you use this option, it must be the first one on the command line.

fplll

fplll performs LLL, BKZ, HKZ or SVP on a matrix (considered as a set of row vectors) given on stdin or in a file passed as a parameter.

The options are:

  • -a lll : LLL-reduction (default).

  • -a bkz : BKZ-reduction.

  • -a hkz : HKZ-reduction.

  • -a svp : prints a shortest non-zero vector of the lattice.

  • -a sdb : self dual variant of BKZ-reduction.

  • -a sld : slide reduction.

  • -a cvp : prints the vector in the lattice closest to the input vector.

  • -v : verbose mode.

  • -nolll : does not apply LLL-reduction. In the case of bkz, hkz and svp, the input basis is by default LLL-reduced before anything else; this option skips that initial LLL-reduction (note that other calls to LLL-reduction may occur during the execution). In the case of hlll, it verifies whether the input basis is HLLL-reduced.

  • -a hlll : HLLL-reduction.

Options for LLL-reduction:

  • -d delta : δ (default=0.99)

  • -e eta : η (default=0.51). See [NS09] for the definition of (δ,η)-LLL-reduced bases.

  • -l lovasz : if !=0, uses Lovasz's condition; otherwise, Siegel's condition (default: Lovasz). See [A02] for the definition of Siegel's condition.

  • -f mpfr : sets the floating-point type to MPFR (default if m=proved).

  • -p precision : precision of the floating-point arithmetic, works only with -f mpfr.

  • -f dd : sets the floating-point type to double-double.

  • -f qd : sets the floating-point type to quad-double.

  • -f dpe : sets the floating-point type to DPE (default if m=heuristic).

  • -f double : sets the floating-point type to double (default if m=fast).

  • -f longdouble : sets the floating-point type to long double.

  • -z mpz : sets the integer type to mpz, the integer type of GMP (default).

  • -z int : sets the integer type to int.

  • -z long : as -z int.

  • -z double : sets the integer type to double.

  • -m wrapper : uses the wrapper. (default if z=mpz).

  • -m fast : uses the fast method, works only with -f double.

  • -m heuristic : uses the heuristic method.

  • -m proved : uses the proved version of the algorithm.

  • -y : early reduction.

With the wrapper or the proved version, it is guaranteed that the basis is LLL-reduced with δ'=2×δ-1 and η'=2×η-1/2. For instance, with the default options, it is guaranteed that the basis is (0.98,0.52)-LLL-reduced.
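
In code, this guarantee amounts to the following arithmetic (an illustrative sketch; the function name is made up):

```python
# With the wrapper or the proved method, a (delta, eta) request yields a
# basis that is guaranteed (2*delta - 1, 2*eta - 1/2)-LLL-reduced.
def guaranteed_params(delta, eta):
    return (2 * delta - 1, 2 * eta - 0.5)

# The defaults delta=0.99, eta=0.51 therefore guarantee (0.98, 0.52).
d2, e2 = guaranteed_params(0.99, 0.51)
```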

Options for BKZ-reduction:

  • -b block_size : block size, mandatory, between 2 and the number of vectors.

  • -f float_type : same as LLL (-p is required if float_type=mpfr).

  • -p precision : precision of the floating-point arithmetic with -f mpfr.

  • -bkzmaxloops loops : maximum number of full loop iterations.

  • -bkzmaxtime time : stops after time seconds (up to completion of the current loop iteration).

  • -bkzautoabort : stops when the average slope of the log ||b_i*||'s does not decrease fast enough.

Without any of the last three options, BKZ runs until no block has been updated for a full loop iteration.

  • -s filename.json : use strategies for preprocessing and pruning parameters (/strategies/default.json provided). Experimental.

  • -bkzghbound factor : multiplies the Gaussian heuristic by factor (of float type) to set the enumeration radius of the SVP calls.

  • -bkzboundedlll : restricts the LLL call before considering a block to vector indices within that block.

  • -bkzdumpgso file_name : dumps the log ||b_i*|| 's in specified file.
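
The -bkzautoabort criterion watches the average slope of the log ||b_i*|| profile. As an illustration of the underlying idea (this mirrors the concept, not fplll's internal code), such a slope can be estimated by a least-squares fit:

```python
import math

def log_gso_slope(gso_norms):
    """Least-squares slope of log ||b_i*|| against i -- the quantity an
    auto-abort style criterion can monitor (illustrative sketch only)."""
    ys = [math.log(x) for x in gso_norms]
    n = len(ys)
    xbar = (n - 1) / 2                      # mean of indices 0..n-1
    ybar = sum(ys) / n                      # mean of log norms
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# A perfectly geometric profile ||b_i*|| = 2^(-0.05 i) has slope -0.05*ln(2).
s = log_gso_slope([2 ** (-0.05 * i) for i in range(20)])
```

When this slope stops decreasing quickly between tours, an auto-abort style criterion stops the reduction.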

Output formats:

  • -of : prints new line (if -a [lll|bkz])
  • -of b : prints the basis (if -a [lll|bkz], this value by default)
  • -of bk : prints the basis (if -a [lll|bkz], format compatible with sage)
  • -of c : prints the closest vector (if -a cvp, this value by default)
  • -of s : prints the closest vector (if -a svp, this value by default)
  • -of t : prints status (if -a [lll|bkz|cvp|svp])
  • -of u : prints unimodular matrix (if -a [lll|bkz])
  • -of uk : prints unimodular matrix (if -a [lll|bkz], format compatible with sage)
  • -of v : prints inverse of u (if -a lll)
  • -of vk : prints inverse of u (if -a lll, format compatible with sage)

A combination of these options is allowed (e.g., -of bkut).

Only for -a hlll:

  • -t theta : θ (default=0.001). See [MSV09] for the definition of (δ,η,θ)-HLLL-reduced bases.
  • -c c : constant for HLLL during the size-reduction (only used if fplll is compiled with -DHOUSEHOLDER_USE_SIZE_REDUCTION_TEST)

llldiff

llldiff compares two bases (b_1,...,b_d) and (c_1,...,c_d'): they are considered equal iff d=d' and, for each i, b_i = ±c_i. Concretely, if basis B is in file 'B.txt' and basis C is in file 'C.txt' (in the fplll format), then one may run cat B.txt C.txt | ./llldiff.
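
The check llldiff performs can be sketched as follows (an illustrative Python re-implementation, not the actual tool):

```python
def bases_equal_up_to_sign(B, C):
    """Return True iff the bases have the same number of rows and each row
    of B equals the corresponding row of C up to sign."""
    if len(B) != len(C):
        return False
    return all(b == c or b == [-x for x in c] for b, c in zip(B, C))

# Row-wise sign flips are tolerated, anything else is a mismatch.
assert bases_equal_up_to_sign([[0, 1], [1, 0]], [[0, -1], [1, 0]])
```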

How to use as a library

See API documentation and tests as a source of examples.

Multicore support

This library does not currently use multiple cores and running multiple threads working on the same object IntegerMatrix, LLLReduction, MatGSO etc. is not supported. Running multiple threads working on different objects, however, is supported. That is, there are no global variables and it is safe to e.g. reduce several lattices in parallel in the same process.

As an exception to the above, fplll has an implementation of parallel lattice point enumeration. To enable this implementation, make sure you compile with the maximum parallel enumeration dimension greater than 0. Note that increasing this value will increase the compile-time due to the use of templates.

At present fplll does not contain strategies for multi-core pruned enumeration, and so speedups for pruned enumeration may be sub-linear (see this for more information). On the other hand, unpruned enumeration appears to scale linearly.

Examples

  1. LLL reduction

    ./latticegen r 10 1000 | ./fplll
    
  2. File input for reduction. If the file matrix contains

    [[ 10 11]
    [11 12]]
    

    then

    ./fplll matrix
    

    produces

    [[0 1 ]
     [1 0 ]
    ]
    
  3. Random generator

    ./latticegen -randseed 1234 r 10 1000 | ./fplll
    ./latticegen -randseed time u 10 16 | ./fplll
    
  4. Solving SVP

    ./latticegen r 30 3000 | ./fplll -a svp
    
  5. Solving CVP

    echo "[[17 42 4][50 75 108][11 47 33]][100 101 102]" | ./fplll -a cvp
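
For small instances like example 5, the CVP answer can be sanity-checked by brute force over a small coefficient box (an illustrative sketch, not the enumeration fplll uses; the box bound of 4 is an assumption that is only adequate for small, well-conditioned inputs):

```python
from itertools import product

# Basis rows and target vector from example 5 above.
B = [[17, 42, 4], [50, 75, 108], [11, 47, 33]]
t = [100, 101, 102]

def closest_in_box(B, t, bound=4):
    """Exhaustively search integer combinations c*B with each coefficient
    in [-bound, bound]; return the lattice vector minimizing the squared
    distance to t, together with that distance."""
    best, best_d2 = None, float("inf")
    for coeffs in product(range(-bound, bound + 1), repeat=len(B)):
        v = [sum(c * row[j] for c, row in zip(coeffs, B))
             for j in range(len(t))]
        d2 = sum((vj - tj) ** 2 for vj, tj in zip(v, t))
        if d2 < best_d2:
            best, best_d2 = v, d2
    return best, best_d2

v, d2 = closest_in_box(B, t)
```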
    

Alternative interfaces

Credit

Maintainers

fplll is currently maintained by:

Contributors

The following people have contributed to fplll:

  • Martin Albrecht
  • Shi Bai
  • Guillaume Bonnoron
  • David Cade
  • Léo Ducas
  • Joop van de Pol
  • Xavier Pujol
  • Damien Stehlé
  • Marc Stevens
  • Gilles Villard
  • Michael Walter

Please add yourself here if you make a contribution.

Acknowledgments

  • Patrick Pelissier and Paul Zimmermann for dpe.

  • David H. Bailey for QD.

  • Sylvain Chevillard, Christoph Lauter and Gilles Villard for the configure/make/make install packaging.

  • Timothy Abbott, Michael Abshoff, Bill Allombert, John Cannon, Sylvain Chevillard, Julien Clement, Andreas Enge, Jean-Pierre Flori, Laurent Fousse, Guillaume Hanrot, Jens Hermans, Jerry James, Christoph Lauter, Tancrède Lepoint, Andrew Novocin, Willem Jan Palenstijn, Patrick Pelissier, Julien Puydt, Michael Schneider, Thiemo Seufer, Allan Steel, Gilles Villard and Paul Zimmermann for their support and for many suggestions that helped debugging and improving this code.

  • CONTRIBUTING.md is taken, almost verbatim, from https://github.com/pydanny/djangopackages/blob/master/docs/contributing.rst

  • json.hpp is taken from https://github.com/nlohmann/json

  • This project has been supported by ERC Starting Grant ERC-2013-StG-335086-LATTAC, by the European Union PROMETHEUS project (Horizon 2020 Research and Innovation Program, grant 780701), by EPSRC grant EP/P009417/1 and by EPSRC grant EP/S020330/1.

Contributing

fplll welcomes contributions. See CONTRIBUTING.md for details.

New releases and bug reports

New releases will be announced on https://groups.google.com/forum/#!forum/fplll-devel.

Bug reports may be sent to https://groups.google.com/forum/#!forum/fplll-devel or via https://github.com/fplll/fplll/issues.

Bibliography

[A02] A. Akhavi. Random lattices, threshold phenomena and efficient reduction algorithms. Theor. Comput. Sci. 287(2): 359-385 (2002)

[Chen13] Y. Chen, Lattice reduction and concrete security of fully homomorphic encryption.

[CN11] Y. Chen and P. Q. Nguyen. BKZ 2.0: Better Lattice Security Estimates. ASIACRYPT 2011: 1-20

[GM03] D. Goldstein and A. Mayer. On the equidistribution of Hecke points. Forum Mathematicum, 15:165–189 (2003)

[GN08] N. Gama and P. Q. Nguyen. Finding Short Lattice Vectors within Mordell's Inequality. STOC 2008: 207-216

[GNR13] N. Gama, P. Q. Nguyen and Oded Regev. Lattice Enumeration Using Extreme Pruning.

[HPS98] J. Hoffstein, J. Pipher, J. H. Silverman. NTRU: A Ring-Based Public Key Cryptosystem. ANTS 1998: 267-288

[K83] R. Kannan. Improved algorithms for integer programming and related lattice problems. STOC 1983, 99-108

[FP85] U. Fincke and M. Pohst. Improved methods for calculating vectors of short length in a lattice, including a complexity analysis. Math. Comp., 44(170):463–471 (1985)

[LLL82] A. K. Lenstra, H. W. Lenstra, Jr. and L. Lovasz. Factoring polynomials with rational coefficients. Math. Ann., 261: 515–534 (1982)

[MSV09] I. Morel, D. Stehle and G. Villard. H-LLL: using Householder inside LLL. ISSAC 2009: 271-278

[MW16] D. Micciancio and M. Walter. Practical, Predictable Lattice Basis Reduction. EUROCRYPT 2016: 820-849

[MR09] D. Micciancio and O. Regev. Post-Quantum Cryptography. Chapter of Lattice-based Cryptography, 147-191 (2009)

[NS09] P. Q. Nguyen and D. Stehle. An LLL Algorithm with Quadratic Complexity. SIAM J. Comput. 39(3): 874-903 (2009)

[S09] D. Stehle. Floating-Point LLL: Theoretical and Practical Aspects. The LLL Algorithm 2009: 179-213

[SE94] C.-P. Schnorr and M. Euchner. Lattice basis reduction: Improved practical algorithms and solving subset sum problems. Math. Program. 66: 181-199 (1994)


fplll's Issues

-Wdelete-non-virtual-dtor

I'm getting this error when building fpylll

deleting object of polymorphic class type ‘fplll::Enumeration<fplll::FP_NR<double> >’ which has non-virtual destructor might cause undefined behaviour [-Wdelete-non-virtual-dtor]

any suggestions?

-O3 by default

We should consider using -O3 by default. @cr-marcstevens new recursive pruner benefits from it and by enabling it in DEFAULT_CFLAGS in configure.ac we keep it configurable.

to LLL or not to LLL

Over at https://github.com/fplll/fplll/pull/131/files/d6ed56afd1bffa21a24080cf0ef1be0ca30cd84a..865532ececb3abd7c856d93059e4eec98d9b3e33#r70401401

we add an LLL call when self-dual reduction is used in order to initialise the local LLL object. The question came up whether we should call LLL unconditionally in order to put all objects always in a sane state. However BKZ_NO_LLL is a supported option explicitly disabling the LLL preprocessing.

Shall we drop it? Shall we just stick with the compromise we have now?

option for not compiling libfplll

libtool seems to take a long time compiling libfplll. When developing or debugging, each time I make some changes and recompile, I have to wait about ten seconds on my laptop for the compilation. It seems that the libtool part cannot be sped up by make -jN. Thus I would prefer an option in makefile.am to control whether we compile fplll as a library, or just compile the binaries. The latter could be useful in development stages. I could assign this to myself -- would like to see if everyone is happy with such a feature (or dislikes it) :)

autotools generated files

autotools generates a bunch of files to take care of portability issues. These files can be generated by "autoreconf -i", which will also create the "configure" script based on configure.ac.
Can these files be added to the repository, as this would allow users to build the project without needing to install autotools?

should rerandomization set clean flag?

the rerandomization called at https://github.com/fplll/fplll/blob/master/fplll/bkz.cpp#L331 runs LLL once it is done, but it is not checked if LLL caused any progress. isn't it possible that by pure chance LLL improves the first vector of a block after rerandomization, even if the randomization is restricted to the tail of the block?

furthermore, just as in https://github.com/fplll/fplll/blob/master/fplll/bkz.cpp#L317, LLL should probably be started at the beginning of the basis rather than the block, for the same reason as this: #111.

finally, i think there is a corner case where svp_reduction calls enumeration on an unreduced block, that being: 1) we're in the k-th loop in the svp-reduction with k > 1; and 2) the (k-1)th loop found a short vector and the vector found was already in the basis (but not the first one). in this case the svp post processing is called and only moves the vector to the front, but does not call LLL afterwards. the rerandomization in the k-th loop is skipped (as it should - we already changed the basis), but that means that LLL is not called before we rerun enumeration, which unsurprisingly can lead to problems in the enumeration. there are two possible fixes for that: either call LLL after the post processing no matter what, or move the LLL call from the rerandomization into the loop of svp reduction (after the if(randomize) block). the latter would get rid of the LLL call before the for loop in svp reduction.

Missing overloaded operators for NR_FP

Currently, a few natural operations like x - 1 don't compile because NR_FP - double is not implemented.

Once added, a dirty hack in pruner.h should be removed (the creation and use of a "variable" FT one = 1;).

code for sieving

incorporate codes from Tuple lattice sieving to fplll; enable using sieving for SVP;

BKZ 2.0

Make fplll BKZ 2.0 ready by making the necessary tweaks in bkz.cpp such as an inner loop until a certain success probability is reached etc.

Credit authors

The AUTHORS file is outdated and should be updated.

We should also consider replacing it by a section in README.md where we also encourage contributors to add themselves.

preprocessing bugfix

from fplll/fpylll@17bebd8

Normally, we don't care about the state of the GSO after inserting a vector because we run LLL afterwards anyway, but when recursive preprocessing is used this is not always the case.

  • preprocessing finishes with just a row swap in its svp_postprocessing
  • the next step is enumeration in the main svp_reduction call
  • this enumeration now finds a garbled GSO because of the row swaps

running size reduction up to kappa + first_nonzero_vector restores the GSO to a sane state.

New "ntrulike" generator seems broken

The shortest vector is always (1,1,1,1,1,0,0,0,0,0):

In [41]: A = IntegerMatrix.random(5, "ntrulike", bits=10)

In [42]: A
Out[42]: <IntegerMatrix(10, 10) at 0x7fa2fd0b8600>

In [43]: print A
[ 1 0 0 0 0 555 499  68 115 217 ]
[ 0 1 0 0 0 217 555 499  68 115 ]
[ 0 0 1 0 0 115 217 555 499  68 ]
[ 0 0 0 1 0  68 115 217 555 499 ]
[ 0 0 0 0 1 499  68 115 217 555 ]
[ 0 0 0 0 0 727   0   0   0   0 ]
[ 0 0 0 0 0   0 727   0   0   0 ]
[ 0 0 0 0 0   0   0 727   0   0 ]
[ 0 0 0 0 0   0   0   0 727   0 ]
[ 0 0 0 0 0   0   0   0   0 727 ]

In [44]: LLL.reduction(A)
Out[44]: <IntegerMatrix(10, 10) at 0x7fa2fd0b8600>

In [45]: print A
[  -1  -1  -1  -1 -1   0   0   0   0   0 ]
[  17  -4   1 -22  6   2  -4  -7  -2  11 ]
[  -1  22  -6 -17  4   7   2 -11  -2   4 ]
[  22  -6 -17   4 -1   2 -11  -2   4   7 ]
[  -8   4  -3 -19 25  -4   0  -7   5   6 ]
[  -8  12  -1  -4  3   1  14   4   9 -28 ]
[   9  -5 -10  14 -6  -9  21   4  -8  -8 ]
[ -19  -1   1  13  5   2  13 -21   3   3 ]
[ -14   6  -9   5 10   8   8   9 -21  -4 ]
[   9 -11   1 -12 11 126 152 155 148 146 ]

Functions (sqrt/log) on dd_reals

The "easy" interface to <NR_FP> fails when FT = dd_real (but works well with double, long double and mpfr_t).

The following error is raised when uncommenting the line 245 in tests/test_pruner.cpp (currently under pull request from https://github.com/lducas/fplll):
test_pruner.o: In function `fplll::Pruner<fplll::FP_NR<dd_real> >::load_basis_shape(int, double*)':
test_pruner.cpp:(.text._ZN5fplll6PrunerINS_5FP_NRI7dd_realEEE16load_basis_shapeEiPd[_ZN5fplll6PrunerINS_5FP_NRI7dd_realEEE16load_basis_shapeEiPd]+0x95): undefined reference to `log(dd_real const&)'
test_pruner.cpp:(.text._ZN5fplll6PrunerINS_5FP_NRI7dd_realEEE16load_basis_shapeEiPd[_ZN5fplll6PrunerINS_5FP_NRI7dd_realEEE16load_basis_shapeEiPd]+0x267): undefined reference to `exp(dd_real const&)'
test_pruner.cpp:(.text._ZN5fplll6PrunerINS_5FP_NRI7dd_realEEE16load_basis_shapeEiPd[_ZN5fplll6PrunerINS_5FP_NRI7dd_realEEE16load_basis_shapeEiPd]+0x623): undefined reference to `sqrt(dd_real const&)'

JSON parsing

It'd be nice to store configuration such as BKZ strategies in JSON.

thread safety

Is fplll parallelizable with Intel Thread Building Blocks (TBB)?

I have tried parallel run of BKZ reduction using TBB.
My code is basically as follows:
serial code:

for (int i = 0; i < num; i++) {
    IntMatrix B(40,40);
    B.gen_uniform(40);
    bkzReduction(B, 20);
 }

parallel code:

parallel_for(blocked_range<int>(0, num),
             [](const blocked_range<int> &r) {
                 for (int i = r.begin(); i != r.end(); ++i) {
                     IntMatrix B(40, 40);
                     B.gen_uniform(40);
                     bkzReduction(B, 20);
                 }
             });

It only works for very small num, say 5.
When I increase num to 8, I can observe the segmentation fault sometimes.
After debugging, the error seems to happen inside the enumeration step of BKZ.
It reports that in the function Enumeration::enumerate(), there is a double free() or memory corruption.

BTW: when I use LLL reduction instead of BKZ, it seems fine so far.

Pruning constructor

Shall we add a constructor for the Pruner which takes all the standard parameters and assigns them?

  Pruner<FP_NR<double>> pru;
  pru.enumeration_radius   = enumeration_radius;
  pru.target_probability = target_probability;
  pru.preproc_cost         = preproc_cost;
  pru.load_basis_shape(m, start_row, end_row);
  pru.optimize_coefficients(pr);
  probability = pru.svp_probability(pr);

looks strange.

versioning material: version macros + version string

Please provide versioning material somewhere in the headers (version macros) and in the library itself (a version string: fplll_version): ideally something similar to what can be found in mpfr and/or gmp.
Thanks for your work.

Where are new releases to be seen?

According to README.md, one should learn about new releases by checking a URI at Damien Stehle's ENS Lyon page -- but that one is a redirect to github, where there are no actual releases!

Could you document it better and tag releases on github, so I can keep my Debian package up to date?

Thanks!

enumeration performance

Running python ./set_mdc.py from https://github.com/fplll/strategizer produces output stating that my system performs about 40 million enumerations per second. With @cr-marcstevens's unpublished libenum (on which the new fplll code is based) I used to get something close to 60 million enumerations per second.

libenum and old fplll

python ./set_mdc.py 1
[enumlib] setting verbose level to quiet.
[enumlib] setting number of threads to 1.
__main__:   fplll :: nodes est:   99492313.2, time: 2.4773s, nodes/s:   40161852.7, nodes act:   70987486
__main__: libenum :: nodes est:   99864843.8, time: 1.6496s, nodes/s:   60540109.0
utilities.mdc: number of core: 1
utilities.mdc: fplll_enum nodes per sec:    40161852.7
utilities.mdc: enumlib nodes per sec:       60540109.0

new fplll

python ./set_mdc.py 1
__main__:   fplll :: nodes:   86815246.0, time: 1.8795s, nodes/s:   46189871.0

Is that to be expected?

gen_ntrulike is misleading

Users experimenting with our NTRU-like matrices expect them to have unusually short vectors, but they don't (by construction). We should clarify this in the documentation at least:

template<class ZT> inline void ZZ_mat<ZT>::gen_ntrulike(int bits,int q) {
  int i, j, k;
  Z_NR<ZT> * h=new Z_NR<ZT>[d];

  for (i=0; i<d; i++)
    h[i].randb(bits);

  for (i=0; i<d; i++)    {
      for (j=0; j<i; j++)
        matrix[i][j] = 0;
      matrix[i][i] = 1;
      for (j=i+1; j<d; j++)
        matrix[i][j] = 0;
    }

  for (i=d; i<r; i++)
    for (j=0; j<d; j++)
      matrix[i][j] = 0;

  for (i=d; i<r; i++)   {
      for (j=d; j<i; j++)
        matrix[i][j] = 0;
      matrix[i][i] = q;
      for (j=i+1; j<c; j++)
        matrix[i][j] = 0;
    }

  for (i=0; i<d; i++)
    for (j=d; j<c; j++)   { 
        k = j+i;
        while (k>=d)k-=d;
        matrix[i][j] = h[k];
      }

  delete[] h;
}

problem using fplll as a library with g++ 6.1

Hi! Using fplll as a library with g++ 6.1 fails on my machine when compiling with the C++11 and C++14 standards. The failure is caused by the hexadecimal floating-point constants in src/dpe.h (lines 186-192), which are not officially part of standard C++ (they will be in C++17, however!) but are part of the C99 and C11 standards. The error message is: error: exponent has no digits

Before updating to g++ 6.1 I did not have this problem, even when using the -std=c++11 and -std=c++14 flags. In g++ 6.1, hexadecimal floats are available only as extensions, via -std=gnu89, -std=gnu++11 or -std=gnu++14 (the latter now being the default).

Maybe other users should be aware of this if they switch to a newer g++ version.
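For readers unfamiliar with the notation in question: a hexadecimal float constant such as 0x1.8p3 encodes a hexadecimal mantissa scaled by a power of two. Python exposes the same C99 notation through float.fromhex, which makes it easy to check what a given constant evaluates to:

```python
# Hexadecimal float notation: 0x<hex mantissa>p<binary exponent>.
# 0x1.8 is 1 + 8/16 = 1.5, and p3 multiplies by 2**3, so the value is 12.0.
x = float.fromhex("0x1.8p3")
print(x)  # 12.0

# Round-trip: every binary float has an exact hexadecimal representation.
print((0.1).hex())
```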

sd_tour and slide_tour don't take kappa_max

The interfaces of tour() vs. sd_tour() and slide_tour() don't match, in that the latter two do not report the maximum kappa being BKZ-reduced. Is that because the notion makes no sense here, or is it an oversight?

Update Documentation

The documentation seems to be somewhat outdated. When compiling the library from the github repo, the bkzReduction(BKZParam &) function no longer exists.

Also, a brief description of what values these BKZ parameters should typically take would be nice to have. E.g., when I want to control the precision, does the precision parameter correspond to bits, and is there an option to express "infinite" precision?

Quadruple precision

Shall we still use QD floating-point numbers or replace it with something more standard?

pkg-config

We should write a pkg-config file for fplll to make it easier for downstream projects to link against it. For example, optional dependencies such as QD change the flags that downstream builds need.
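A sketch of what such an fplll.pc file could look like (paths and version are placeholders; the Libs.private line is where optional private dependencies such as QD would be recorded):

```
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: fplll
Description: Lattice reduction library based on floating-point orthogonalization
Version: 5.4.5
Libs: -L${libdir} -lfplll
Libs.private: -lmpfr -lgmp
Cflags: -I${includedir}
```

Downstream projects could then run pkg-config --cflags --libs fplll instead of hard-coding flags.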

symbol lookup error

running lll

cat d80.base | fplll -a lll

works fine, but bkz on the same input raises an error

cat d80.base | fplll -a bkz -b 10
fplll: symbol lookup error: fplll: undefined symbol: _ZN5fplll12bkzReductionEPNS_6ZZ_matIA1_12__mpz_structEES4_RKNS_8BKZParamENS_9FloatTypeEi

Have you encountered that before? I suspect that installing and then removing the Debian package fplll-tools before reinstalling the latest git version may have caused this.

CONTRIBUTING.md

Create documentation on contributing

  • code conventions
  • building documentation
  • ???

clarify download instructions

Jeroen Demeyer writes:

I recommend to reword the part

"If you downloaded the source code from github, you’ll first need to run"

to

"You should download the source from github and then run"

more lattice generators

We use

def make_integer_matrix(n, q=None):
    # IntegerMatrix, randint and prime_list come from the surrounding project.
    A = IntegerMatrix(n, n)

    if q is None:
        q = prime_list[n]

    A[0, 0] = q
    for i in range(1, n):
        A[i, 0] = randint(0, q - 1)
        A[i, i] = 1
    return A
for our benchmarking/parameter estimation in another project. Maybe this should be added, under an appropriate name, to fplll?

rows vs columns in q-ary matrix generation

It seems the q-ary lattices are generated in column notation, while fplll otherwise uses row notation:

~$ latticegen q 6 1 8 p
[[127 75 47 67 72 97]
[0 1 0 0 0 0]
[0 0 1 0 0 0]
[0 0 0 1 0 0]
[0 0 0 0 1 0]
[0 0 0 0 0 1]]
~$ latticegen r 6 8
[[116 1 0 0 0 0 0]
[75 0 1 0 0 0 0]
[47 0 0 1 0 0 0]
[195 0 0 0 1 0 0]
[200 0 0 0 0 1 0]
[225 0 0 0 0 0 1]]
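The two outputs are indeed transposes of each other. A minimal pure-Python sketch (function names are ours) of the k = 1 case, illustrating that transposing the column-convention basis yields the row convention in which fplll stores basis vectors as rows:

```python
import random

def qary_basis_column(n, q):
    """q-ary basis as `latticegen q` prints it: q and random entries in row 0."""
    A = [[0] * n for _ in range(n)]
    A[0][0] = q
    for j in range(1, n):
        A[0][j] = random.randrange(q)  # random entries along the first row
        A[j][j] = 1
    return A

def transpose(A):
    return [list(row) for row in zip(*A)]

# transpose(qary_basis_column(n, q)) moves the random entries to column 0,
# matching the row convention used elsewhere in fplll.
```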

C++ Library

Make it easier to use fplll as a library by instantiating templates at compile time.

document thread safety guarantees

Document what is and what is not thread-safe:

  • working on different objects (matrices, LLL, BKZ) should be thread-safe
  • everything else isn't

fix LinearPruning constructor

The LinearPruning constructor currently lies about the success probability it reports. This should be fixed once the pruner, which allows computing said probability, is included.
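For context, linear pruning in the Gama–Nguyen–Regev sense searches level k of the enumeration tree within squared radius (k/n)·R² instead of the full R², i.e. with bounding coefficients c_k = k/n; the success probability of this profile is strictly below 1, which is what the constructor would need to report honestly. A minimal sketch of the coefficient profile (function name is ours):

```python
def linear_pruning_coefficients(n):
    """Bounding coefficients c_k = k/n for linear (GNR-style) pruning.

    Level k of the enumeration tree is searched within squared radius
    c_k * R**2 instead of the full R**2; the last level uses the full radius.
    """
    return [k / n for k in range(1, n + 1)]
```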

nolll bug

./latticegen r 20 2000 | ./fplll -a bkz -b 10 -nolll

This crashes.

QD required for pruner test

It seems the QD library is mandatory for the pruner test, while it is optional for fplll itself. So if one builds fplll without QD, the test suite fails. This doesn't seem right.
