
lwtnn's Introduction

Lightweight Trained Neural Network


What is this?

The code comes in two parts:

  1. A set of scripts to convert saved neural networks to a standard JSON format
  2. A set of classes which reconstruct the neural network for application in a C++ production environment

The main design principles are:

  • Minimal dependencies: The C++ code depends on C++11, Eigen, and boost PropertyTree. The converters have additional requirements (Python3 and h5py) but these can be run outside the C++ production environment.

  • Easy to extend: Should cover 95% of deep network architectures we would realistically consider.

  • Hard to break: The NN constructor checks the input NN for consistency and fails loudly if anything goes wrong.

We also include converters from several popular formats to the lwtnn JSON format. Currently the following formats are supported:

  • Scikit Learn
  • Keras (most popular, see below)

Why are we doing this?

Our underlying assumption is that training and inference happen in very different environments: we assume that the training environment is flexible enough to support modern and frequently-changing libraries, and that the inference environment is much less flexible.

If you have the flexibility to run any framework in your production environment, this package is not for you. If you want to apply a network you've trained with Keras in a 6M line C++ production framework that's only updated twice a year, you'll find this package very useful.

Getting the code

Clone the project from GitHub:

git clone git@github.com:lwtnn/lwtnn.git

Then compile with make. If you have access to a relatively new version of Eigen and Boost, everything should work without errors.

If you have CMake, you can build with no other dependencies:

mkdir build
cd build
cmake -DBUILTIN_BOOST=true -DBUILTIN_EIGEN=true ..
make -j 4

Running a full-chain test

To run the tests, first install h5py in a Python 3 environment, e.g. using pip:

python -m pip install -r tests/requirements.txt

Starting from the directory where you built the project, run

./tests/test-GRU.sh

(note that if you ran cmake this is ../tests/test-GRU.sh)

You should see some printouts that end with *** Success! ***.

Quick Start With Keras Functional API

The following instructions apply to the model/functional API in Keras. To see the instructions relevant to the sequential API, go to Quick Start With sequential API.

After building, there are some required steps:

1) Save your network output file

Make sure you have saved your architecture and weights files from Keras, and created your input variable file. See the lwtnn Keras Converter wiki page for the correct procedure for all of this.

Then

lwtnn/converters/kerasfunc2json.py architecture.json weights.h5 inputs.json > neural_net.json

Helpful hint: if you run lwtnn/converters/kerasfunc2json.py architecture.json weights.h5 (with no input file), it creates a skeleton of an input file for you, which can be used in the above command!
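In other words, a typical conversion might look like the following (a sketch: this assumes the skeleton is written to standard output, and reuses the file names from above).

# generate a skeleton input-variable file, then fill it in by hand
lwtnn/converters/kerasfunc2json.py architecture.json weights.h5 > inputs.json
# run the full conversion with the completed input file
lwtnn/converters/kerasfunc2json.py architecture.json weights.h5 inputs.json > neural_net.json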

2) Test your saved output file

A good idea is to test your converted network:

./lwtnn-test-lightweight-graph neural_net.json

A basic regression test is performed with a bunch of random numbers. This test just ensures that lwtnn can in fact read your NN.

3) Apply your saved neural network within C++ code
// Include several headers. See the files for more documentation.
// First include the class that does the computation
#include "lwtnn/LightweightGraph.hh"
// Then include the json parsing functions
#include "lwtnn/parse_json.hh"

...

// get your saved JSON file as an std::istream object
std::ifstream input("path-to-file.json");
// build the graph
LightweightGraph graph(parse_json_graph(input));

...

// fill a map of input nodes
std::map<std::string, std::map<std::string, double> > inputs;
inputs["input_node"] = {{"value", value}, {"value_2", value_2}};
inputs["another_input_node"] = {{"another_value", another_value}};
// compute the output values
std::map<std::string, double> outputs = graph.compute(inputs);

Once the LightweightGraph is constructed, it has one method, compute, which takes a map of named input nodes (each node itself a map<string, double> of named values, as in the snippet above) and returns a map of named outputs (map<string, double>). It's fine to give compute more inputs than the NN requires, but if a required input is missing it will throw an NNEvaluationException.

All inputs and outputs are stored in std::maps to prevent bugs with incorrectly ordered inputs and outputs. The strings used as keys in the map are specified by the network configuration.
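For reference, a complete minimal program along these lines might look as follows. This is only a sketch: the file name, node names, and variable names are placeholders that must match your converted JSON, and the lwt namespace qualifiers are written out explicitly here.

// Minimal end-to-end sketch (illustrative only)
#include "lwtnn/LightweightGraph.hh"
#include "lwtnn/parse_json.hh"

#include <fstream>
#include <iostream>
#include <map>
#include <string>

int main() {
  // read the converted network (placeholder file name)
  std::ifstream input("neural_net.json");
  lwt::LightweightGraph graph(lwt::parse_json_graph(input));

  // one inner map per input node, keyed by variable name
  std::map<std::string, std::map<std::string, double> > inputs;
  inputs["input_node"] = {{"value", 1.0}, {"value_2", 2.0}};
  inputs["another_input_node"] = {{"another_value", 3.0}};

  // compute returns a map of named outputs
  std::map<std::string, double> outputs = graph.compute(inputs);
  for (const auto& out: outputs) {
    std::cout << out.first << ": " << out.second << std::endl;
  }
  return 0;
}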

Supported Layers

In particular, the following layers are supported as implemented in the Keras sequential and functional models:

Layer            K sequential   K functional
Dense            yes            yes
Normalization    See Note 1     See Note 1
Maxout           yes            yes
Highway          yes            yes
LSTM             yes            yes
GRU              yes            yes
Embedding        sorta          issue
Concatenate      no             yes
TimeDistributed  no             yes
Sum              no             yes

Note 1: Normalization layers (i.e. Batch Normalization) are only supported for Keras 1.0.8 and higher.

Supported Activation Functions

Function Implemented?
ReLU Yes
Sigmoid Yes
Hard Sigmoid Yes
Tanh Yes
Softmax Yes
ELU Yes
LeakyReLU Yes
Swish Yes

The converter scripts can be found in converters/. Run them with -h for more information.

Have problems?

For more in-depth documentation please see the lwtnn wiki.

If you find a bug in this code, or have any ideas, criticisms, etc, please email me at [email protected].

lwtnn's People

Contributors

aghoshpub, benjaminhuth, demarley, dguest, ductng, jcvoigt, jwsmithers, krasznaa, laurilaatu, makagan, malanfer, matthewfeickert, mickypaganini, quantumdancer, sfranchel, tjkhoo, tprocter46, vukanj


lwtnn's Issues

Move the regression test inputs somewhere more reliable than CERN AFS

Right now the regression tests are downloading the larger input data files from dguest.web.cern.ch/dguest/nn-tests/. This should be something more permanent. The easiest solution is probably one of the following:

  1. download them from another git repository (hosted on github), or
  2. store them in a submodule

RNN preprocessing

If I understand correctly, preprocessing is done by default for RNNs. I think we need to make this optional: for instance, we don't do any preprocessing for our RNNs. Also, there is some ambiguity as to whether the preprocessing (if done) should be per time step or over all time steps. Can we add the functionality to turn this off if desired?

Parameter format

I was thinking, it would be really cool to support both Keras style configurations and AGILEPack style. It wouldn't be too hard to support like 95% of Keras layers so we don't have to be married to AGILEPack...

Support for 2D convolutions

Is it currently possible to convert models that include keras.layers.Conv2D layers?

If yes, then I think it would be helpful to include it in the "Supported Layers" box in the README. If not, consider this a feature request :)

Make Keras activation function name remapping use the key by default

The keras2json.py converter uses an _activation_map to map the Keras activation function names to lwtnn names. The key is often the same as the value, so we could probably just use something like

lwtnn_act_name = _activation_map.get(key, key)

so that the key is used if no other name is supplied.
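As a minimal illustration of that fallback behaviour (the map contents below are made up; the real entries live in keras2json.py):

# hypothetical mapping for illustration only
_activation_map = {'relu': 'rectified', 'sigmoid': 'sigmoid'}

def lwtnn_act_name(keras_name):
    # use the mapped name when one exists, otherwise pass the Keras name through
    return _activation_map.get(keras_name, keras_name)

assert lwtnn_act_name('relu') == 'rectified'
assert lwtnn_act_name('softmax') == 'softmax'  # no entry, so the key is reused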

Simple lxplus setup

I got an email from someone who must have tried the instructions on the wiki. They (mostly) work, but they are a bit heavy to maintain and I think it would make sense to remove those instructions in favor of something that gets more use.

@jwsmithers I had a more recent example in a tutorial I wrote for b-tagging, specifically the setup script here, but do you have better instructions to get things working on lxplus?

Also, the top level wiki should point to some list of "quick start" pages, i.e. a repository you can clone on lxplus that "just works".

Make test suite more robust to execution path

Ideally the test suite should be more robust to where the test-runner.sh Bash script is executed, as it currently requires that the tests are executed from the project top level directory.

If tests/check-version-number.sh changes to the tests directory before executing the tests, the tests fail with

grep: CMakeLists.txt: No such file or directory
ERROR: Can't find CMake project name. The CMakeLists.txt file should contain 'project( lwtnn VERSION X.X)

given that

CMAKE_PROJECT=$(egrep "project *\(.*\)" CMakeLists.txt)

The tests should either be made more robust against this path dependence, or it should be made more strict and fail quickly and cleanly if the execution path is wrong.
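One possible direction (a sketch only, not the fix that was actually adopted) is to resolve the path relative to the script itself rather than the current working directory, assuming the script stays in tests/ one level below the top of the repository:

# locate CMakeLists.txt relative to this script instead of $PWD
SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
CMAKE_PROJECT=$(egrep "project *\(.*\)" "${SCRIPT_DIR}/../CMakeLists.txt")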

Originally posted by @dguest in #121 (comment)

Add unit test for NNs we'll need in ATLAS code

We'll be adding several taggers to an Athena release soon, so it would be useful to write a unit test around a representative set of them. We currently already have something for ipmp which covers a good fraction of the code (for GRUs, highway layers, and maxout), but it would be nice to add something close to the DL1 implementation we end up using.

Option to avoid installing Eigen code in BUILTIN_EIGEN mode

The current cmake list correctly installs eigen headers when running in BUILTIN_EIGEN mode, but in some cases (i.e. when using the high level interfaces) eigen isn't required.

We should add an option to the cmake installation that avoids installing the Eigen headers and the low level interfaces that require Eigen. As a starting point, it might be reasonable to rearrange the headers in the CMake list into the high and low level classes.

Templating important classes

Hello,

It would be very useful if one could choose the underlying data-type which lwtnn uses. E.g., one could use single precision numbers, or even differentiable types. I tried this in a fork of lwtnn.

As a starting point for the discussion, the following snippet shows how I approached this:

  • LWTNN right now:

     class Stack
     {
         // something with double...
     };
  • Proposal:

     template<typename T>
     class StackT
     {
         // something generic
     };
     
     using Stack = StackT<double>;

The typedefs are introduced to maintain compatibility with the rest of the library.

I also think the interface classes can stay unchanged; we could ensure with a static_assert(...) that the type T can be assigned from and converted to double.
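A sketch of that check might look like this (illustrative only, not the actual patch):

#include <type_traits>

template <typename T>
class StackT
{
    static_assert(std::is_convertible<T, double>::value &&
                  std::is_convertible<double, T>::value,
                  "StackT requires a type convertible to and from double");
    // something generic, as above...
};

using Stack = StackT<double>;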

I would propose this as a change, but maybe there are also other approaches out there which we can discuss.

Best regards,
Benjamin

Maintain tensorflow and theano parameter saving between versions

In an email, Andreas Sogaard reported an error when trying to convert a NN.

>>> File "/afs/cern.ch/work/a/asogaard/private/lwtnn/converters/keras_v2_layer_converters.py", line 52, in _normalization_parameters
>>>     gamma = layers['gamma'+BACKEND_SUFFIX]
>>> KeyError: 'gamma:0'

which appears to be related to the converter's reading of the HDF5 weights file. The `layer_group` and `layers` in question are

>>> layer_group = <HDF5 group "/classifier/batch_normalization_1" (1 members)>
>>> layers = {'batch_normalization_1_1': array(['beta:0', 'gamma:0', 'moving_mean:0', 'moving_variance:0'], dtype='<U17')}

I have attached the outputs from json.tool and h5ls for the architecture and weights files, respectively. They look pretty healthy to me, e.g.

$ cat output_h5ls.txt | head -9
/                        Group
/classifier              Group
/classifier/batch_normalization_1 Group
/classifier/batch_normalization_1/classifier Group
/classifier/batch_normalization_1/classifier/batch_normalization_1_1 Group
/classifier/batch_normalization_1/classifier/batch_normalization_1_1/beta:0 Dataset {10}
/classifier/batch_normalization_1/classifier/batch_normalization_1_1/gamma:0 Dataset {10}
/classifier/batch_normalization_1/classifier/batch_normalization_1_1/moving_mean:0 Dataset {10}
/classifier/batch_normalization_1/classifier/batch_normalization_1_1/moving_variance:0 Dataset {10}

for 10 input variables, but what do I know. I am saving the model in a setup with the following package versions:
  Python version: 2.7.13 (so not python3)
  Numpy version: 1.13.1
  Keras version: 2.0.8
  TensorFlow version: 1.3.0

However, it actually looks like gamma:0 does exist. This will need to be looked into. He also sent the architecture and weights files, which I won't attach here.

Sequential model input variable file

Right now a template for the variable file is provided for the sequential model.
I understand that the Functional model is recommended. However, is there any technical challenge in generating the input variable file (with keras2json.py) for the sequential model?

Implement Maxout

It would be useful to support maxout. This should be pretty trivial, but there's some room for bugs storing the weights so it will have to be tested carefully.
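For reference, a maxout unit takes the element-wise maximum over several affine maps of the same input. A small numpy sketch of the math (illustrative only, not lwtnn's internal weight layout):

import numpy as np

def maxout(x, weights, biases):
    """Maxout: out_j = max_k (W_k x + b_k)_j.

    weights has shape (k, n_outputs, n_inputs), biases has shape (k, n_outputs).
    """
    return np.max(np.einsum('koi,i->ko', weights, x) + biases, axis=0)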

New tag for CMake version?

What version do you want to put in AtlasExternals? It probably shouldn't be master. But are we ready for a new tag?

Missing activation function in the config file

Hello!
I am trying to use lwtnn to convert the pixel NN model to a JSON config file. In the training structure, it applies the same activation function several times. I can generate the config file with "kerasfunc2json.py", but in the config file, it seems the activation function is missing for the second hidden layer.
I find that to fix this issue, I need to create multiple activation functions in the training script.
Best regards,
Boping

Support for trained TF models?

Hello,

I am very interested in using lwtnn to evaluate models trained directly in TensorFlow. Have you ever looked into supporting this?

Obviously, I am very happy to contribute the needed converter, but thought I'd ask before I get going with the implementation.

I guess one could hack something together, reconstructing a trained TF model within Keras and then use the established pipeline to get the weight and architecture into the correct formats. The other obvious option is to get a dedicated, stand-alone converter that avoids using Keras.

From your experience and perspective, which of the two paths (or any other) would be preferable?

Thanks!

Cheers,
Philipp

Point unit tests at tagged versions of data files

Some of the unit tests download datasets from the lwtnn-test-data master branch. They should point to specific tags, so that if someone updates the tests in the future it doesn't break older revisions of the lwtnn unit tests.

Keras converter

Probably good to add the Keras-to-JSON converter in the next weeks to have the full process chain up and running.

Create regression tests for JSON to C++ step

All our regression tests currently cover the full Keras -> JSON -> C++ conversion. This leaves us in danger of accidentally breaking networks which are saved as JSON if we update the Keras converters and the C++ code at the same time.

Using lwtnn from python code

Hi guys,

I would like to use lwtnn from a python program. I trained a model using Keras, but for deployment I would like to "replace" in my python program the old cut-based event selection with the DNN. Is there an easy way to load lwtnn from python? Perhaps via PyROOT (which will be necessary anyway)?

Cheers,
Riccardo

Add test for clean CMake build

Merge request #63 teaches CMake to download the eigen and boost dependencies and install them. Given all the things that could go wrong, it would be nice to have a regression test around this.

Linting with pre-commit hooks?

@dguest What are your feelings about running linting with pre-commit hooks (both in CI and optionally with pre-commit installed locally)? I'm not suggesting trying to enforce a C++ style linter or anything like that (as much as I'd like that 😉) but more about just using a subset of https://github.com/pre-commit/pre-commit-hooks.

If you like the idea I have a branch on my fork that is ready to go for a PR that I can demo with you as well. (c.f. https://github.com/matthewfeickert/lwtnn/runs/2522324028)

Implement time distributed wrapper for sequential graph inputs

Some people would like to be able to use the Keras TimeDistributed wrapper. This should be a relatively straightforward wrapper on the current feed-forward layers. I don't see any good reason to implement it in the sequential code, so it will mean implementing a new Node class in the Graph API.

Checklist:

  • Implement code that compiles
  • Verify that it works at runtime
  • Verify against keras
  • Write regression test

Add unit test for FastGraph

Almost everything is in place to have a regression test for FastGraph, I just haven't gotten around to writing one.

Add linting through pre-commit GHA workflow

With pre-commit it is easy to set up a linting workflow that is able to enforce the Coding Standards section of CONTRIBUTING.md. Other projects like pandamonium already use pre-commit, and with the https://github.com/pre-commit/pre-commit-hooks repo it is easy to get common linting tasks as one-liner hooks. pandamonium also uses the pre-commit.ci service to make linting even faster, but if the goal for lwtnn is to be as unfancy as possible while still having CI, just using pre-commit in a GHA workflow should work great when run with pre-commit run --all-files.

Even if a developer doesn't want to install pre-commit locally for whatever reason, the GHA workflow will still run and give linting feedback if any of the pre-commit hooks fail.

Add some unit tests

Good news everyone!

I added Travis CI integration into master (see the .travis.yml file). If anyone is interested (@Marie89, @mickypaganini), it would be fun to try adding some unit tests. Basically what I have in mind:

  • Call the tests lwtnn-unittest-XXX, they can live in scripts/ or bin/
  • They have to return non-zero on failure for Travis CI to realize something is wrong, see the example output that I'm turning on with the likes here
  • I'd rather not include much (or any) data in the repository (since git won't handle raw hdf5 well) but just checking to make sure the output of some of the basic nets doesn't change with new commits would already be very useful.

Obviously this is just for "fun", so low priority but it may also be very useful down the road.

Get embedding working in graphs

It's not clear that we'll ever fully support embedding in the sequential model, but it may be worth adding it in the graph models. Steps to implement:

  • Save a graph from the Keras functional API that uses embedding
    • Make sure this correctly handles masking
  • Add support for embedding and masking in the converter
  • Implement in C++
  • Test against Keras output

Support Keras 2.0

The converter scripts have to be updated to support Keras version 2.0. Right now we only support version 1.2, which is confusing given that the Keras documentation assumes 2.0 and higher.

Note that this being addressed in pull request #44

Add unit test for LSTM

There's currently no unit test for the LSTM layer, which is a bit uncomfortable given that we're now using an LSTM in ATLAS flavor tagging.

Move defaults to the input variable objects in JSON

The way the default value for variables is stored in its own dictionary within the JSON file doesn't really make sense given that the rest of the input variable attributes are already stored in the input variable list. They should be moved.

This requires some work in:

  • The existing converters
  • The JSON parser

The C++ JSON configuration object should probably keep the default values in another map, on the other hand, since these aren't part of the normal input for the LightweightNeuralNet constructor.

One-line export from python

Maybe I missed it but currently it seems it's quite complex to export the model (and one needs to execute external scripts). It would be nice if one could run one line in python and export it there directly (e.g. after saving the h5 file one also exports the json for lwtnn).

Add support for merge.Maximum()

Given the deprecation of maxout layers in Keras 2, we should probably consider supporting merge.Maximum() (which can be used in conjunction with Dense layers to do the same thing).

This is to keep track of a request from @Marie89. Marie, feel free to comment if this is urgent.

Rename `compute`, `reduce` and `scan` in the API

I originally called the method that runs the networks compute, but I didn't think very hard about it. Now there are a few other method names kicking around:

  • scan for the RNN methods, which map a MatrixXd to another MatrixXd
  • reduce for RecursiveStack, which maps a MatrixXd to a VectorXd. I named it this for consistency with the standard reduce function in most languages.

Maybe this isn't totally intuitive, and @makagan had the suggestion that we change the API to use predict for consistency with sklearn or Keras. If we're going to be changing the API for readability, I think we should do it consistently. So we should consider the following places:

  • The ILayer derived classes and the Stack class.
  • The IRecurrent derived classes and the RecursiveStack class.
  • The LightweightNeuralNetwork high-level wrapper on the Stack class.
  • I'm also going to write a LightweightRNN wrapper around RecursiveStack anyway, to take care of the variable naming, normalization, offsets, various sanity checks, etc. We could change both the LightweightRNN and LightweightNeuralNetwork classes to use predict, and keep the underlying API the same.

It's a small change, but it will have to happen consistently and obviously we'll only do it when everyone using the code is ready.

Add sequential nodes to Graph

The current Graph implementation only includes feed-forward and concatenate layers. To include recurrent layers, several additional classes will be required:

  • Add a new method to the ISource class that returns a matrix
  • Add an ISequenceNode base class that returns a matrix from compute(...) (see the sketch after this list)
  • Add a ReductionNode which inherits from INode and truncates sequences
  • Factorize RecurrentStack to make the internal methods available to the sequential nodes
  • Modify the Keras functional converter to support sequence nodes
  • Test full chain against a Keras model
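A very rough sketch of what the second item could look like (purely illustrative; the names and signatures here are assumptions, not the actual lwtnn interfaces):

#include <Eigen/Dense>

class ISource;  // existing input-source interface, declaration assumed

// a sequence-valued node: compute(...) returns a matrix, e.g. one column per time step
class ISequenceNode
{
public:
  virtual ~ISequenceNode() {}
  virtual Eigen::MatrixXd compute(const ISource& source) const = 0;
};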

Restructure LightweightNeuralNetwork and LightweightRNN

As it currently stands the code is a bit difficult to read, but it may be more clear with some restructuring. I was thinking about the following:

  • Move all the Stack and RecurrentStack code into one file, and put the high-level wrappers in another. This has two advantages:
    • The "user" code is all in one place, while the lower-level code is in another. Anyone who wants a bare-bones implementation can ignore the high-level stuff.
    • Including the high-level headers doesn't force anyone to include Eigen, which as a header-only library slows down compilation quite a bit.
  • Move the more important classes to the top. The "Stack" file would be something like:
    • Stack
    • Layers
    • RecurrentStack
    • RecurrentLayers
    • Activation functions
    • Utility functions
    • Exceptions
  • The "wrapper" classes would be something like:
    • LightweightNeuralNetwork
    • RecurrentNeuralNetwork
    • Input preprocessors
  • Obviously we'd want to keep the .cxx files and the .hh files in the same order.

Support "advanced" activation functions

Activation functions like ELUs aren't really supported well in this library.

The problem is that activation functions are currently just named functions with no parameters, whereas things like ELUs take at least one parameter. I'm not sure about the best way to fix this: we could replace all the activation functions in the C++ with things that inherit from ILayer or maybe we could opt for an intermediate solution like making them all std::functions.
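To make the problem concrete, ELU takes a shape parameter alpha, so it cannot be a bare named function. One of the options above, expressed as a std::function, might look like this (a sketch only; lwtnn's eventual solution may differ):

#include <cmath>
#include <functional>

// build a parameterized activation as a callable that captures alpha
std::function<double(double)> make_elu(double alpha) {
  return [alpha](double x) {
    // ELU: x for x > 0, alpha * (exp(x) - 1) otherwise
    return x > 0.0 ? x : alpha * (std::exp(x) - 1.0);
  };
}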

Add normalization layer

Having a "normalization" layer could be useful in a few ways:

  • We could replace the InputPreprocessor with a normalization layer
  • More importantly, we could support "batch" normalization without the batches

Keras stores the running mean and standard deviation from batch normalization, so converting this into layer information should be relatively easy.
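For example, folding the stored statistics into a plain scale and offset is a small calculation. A numpy sketch (the epsilon default is Keras's; variable names are only illustrative):

import numpy as np

def batchnorm_to_affine(gamma, beta, moving_mean, moving_variance, epsilon=1e-3):
    """Fold BatchNormalization statistics into an affine transform.

    At inference time the layer computes
        y = gamma * (x - moving_mean) / sqrt(moving_variance + epsilon) + beta
    which is just y = scale * x + offset applied element-wise.
    """
    scale = gamma / np.sqrt(moving_variance + epsilon)
    offset = beta - scale * moving_mean
    return scale, offset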

Request new tag due to issue with keras2json.py in v2.9

Hi,
I'm going to use this tool as part of the external packages in our framework and thus must stick to tagged versions. In the most up-to-date tag, v2.9, there is an issue with keras2json.py; it fails with:
lwtnn converter being configured for keras (v2.2).
Traceback (most recent call last):
File "/nfs/dust/cms/user/karavdia/lwtnn/converters/keras2json.py", line 159, in
_run()
File "/nfs/dust/cms/user/karavdia/lwtnn/converters/keras2json.py", line 59, in _run
'layers': _get_layers(arch, inputs, h5),
File "/nfs/dust/cms/user/karavdia/lwtnn/converters/keras2json.py", line 125, in _get_layers
layer_arch = in_layers[layer_n]
KeyError: 0

This seems to have been fixed in the current master branch some time ago (35c6b52).
Is it possible to create new tag with fixed script, please?

UnicodeDecodeError with keras2json

Hello,
I'm trying to run the keras2json.py script with the attached architecture to get the file I need to use my NN

While running it, I get the following issue, which I cannot reproduce when I try to json.load the attached files:

Traceback (most recent call last):
  File "/lwtnn/converters/keras2json.py", line 165, in <module>
    _run()
  File "/lwtnn/converters/keras2json.py", line 46, in _run
    inputs = json.load(inputs_file)
  File "/usr/lib/python3.6/json/__init__.py", line 296, in load
    return loads(fp.read(),
  File "/usr/lib/python3.6/codecs.py", line 321, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position
0: invalid start byte

Another relevant piece of information that @dguest noticed: I did use the Sequential API, explicitly adding an InputLayer at the beginning of the network. It seems this is the origin of the problem?

Kind regards, and thanks for the nice tool! :)
Cheers
Cyril

trained_model.zip

Support for sequences of length 0 in LSTM/GRU

The implementation of LSTM in keras also handles input sequences that are completely masked by returning a vector of zeros.

import numpy as np
from keras.layers import Input, Masking, LSTM
from keras.models import Model

masked_input = np.ones((1, 5, 4))

x = Input(shape=(5, 4))
mask = Masking(mask_value=1.0)(x)
lstm = LSTM(3, return_sequences=False)(mask)

model = Model(x, lstm)

model.predict(masked_input)
# array([[0., 0., 0.]], dtype=float32)

At the moment lwtnn does not allow computing networks when passing empty input vectors. For networks with a single recurrent branch and no other inputs this feature is not needed. However, if the recurrent branch is an auxiliary input to the network, it makes sense to return a zero vector so the network output can still be computed.
Given the current structure of lwtnn, would this be feasible to implement without large structural changes?

Revising minimal build for CI

@dguest As the "minimal" CI builds are now failing

-- Build files have been written to: /home/runner/work/lwtnn/lwtnn/build
[  1%] Creating directories for 'Eigen'
[  3%] Creating directories for 'Boost'
[  5%] Performing download step (download, verify and extract) for 'Eigen'
-- Downloading...
   dst='/home/runner/work/lwtnn/lwtnn/build/externals/src/eigen-3.3.7.tar.bz2'
   timeout='none'
   inactivity timeout='none'
-- Using src='https://gitlab.com/libeigen/eigen/-/archive/3.3.7/eigen-3.3.7.tar.bz2'
[  7%] Performing download step (download, verify and extract) for 'Boost'
-- Downloading...
   dst='/home/runner/work/lwtnn/lwtnn/build/externals/src/boost_1_64_0.tar.gz'
   timeout='none'
   inactivity timeout='none'
-- Using src='https://dl.bintray.com/boostorg/release/1.64.0/source/boost_1_64_0.tar.gz'
-- verifying file...
       file='/home/runner/work/lwtnn/lwtnn/build/externals/src/eigen-3.3.7.tar.bz2'
-- Downloading... done
-- extracting...
     src='/home/runner/work/lwtnn/lwtnn/build/externals/src/eigen-3.3.7.tar.bz2'
     dst='/home/runner/work/lwtnn/lwtnn/build/externals/src/Eigen'
-- extracting... [tar xfz]
-- [download 0% complete]
CMake Error at Boost-stamp/download-Boost.cmake:170 (message):
  Each download failed!

it seems that the idea of the "minimal" build only comes up in

if [[ ${MINIMAL+x} ]]; then
ARGS+=" -DBUILTIN_BOOST=TRUE -DBUILTIN_EIGEN=TRUE"
fi

so the idea of running a "minimal" build in a CI job seems a bit weird, as it makes assumptions about what the image should already contain, no? Wouldn't it be better to just drop this test from CI, since it makes assumptions about installed software?
