energyflow's People

Contributors

andersjohanandreassen, ericmetodiev, j-s-ashley, matthewfeickert, pkomiske, rikab

energyflow's Issues

Update EnergyFlow release procedure

First things I would recommend doing:

cf. thaler-lab/Wasserstein#7

Using advanced activation functions

Hi,

I am trying to build a network using a LeakyReLU activation function, one of the "advanced" activation functions from Keras (https://keras.io/layers/advanced-activations/).

However, I am hit with the error "ValueError: Unknown activation function:LeakyReLU" when calling this:

pfn = PFN(input_dim=X.shape[-1], ppm_sizes=ppm_sizes, dense_sizes=dense_sizes, dense_acts='LeakyReLU')

norm=True causes jet constituents to be outside of simplex

Hi all,

I see errors from POT when I try to turn on the norm=True option when calculating EMDs:

/usr/local/lib/python2.7/site-packages/energyflow/emd.py:255: UserWarning: Problem infeasible. Check that a and b are in the simplex

These are thrown for jets which otherwise seem to be working just fine: if I don't normalise the pTs, I get EMDs comparable to the ones in the demo you provide, provided I also disable the normalisation in that calculation.

Is there something I need to be aware of when using this option? I'm interested in trying to re-derive the plots from the EMD demo using my own input jets, and the correlation-dimension example seems to depend on this normalisation switch being turned on (or at least, it is done in the demo; I don't see details about this in your paper?).

Cheers,
Matt
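
A workaround sketch (not from the thread): normalise the pT weights by hand in float64 so each jet sums to exactly 1, then call emd with norm=False. This assumes pT is the first column of each jet array:

import numpy as np
import energyflow as ef

def normalize_pts(jet):
    # renormalise the pT column (assumed column 0) so it sums to 1;
    # float64 keeps rounding from tripping POT's simplex check
    jet = np.asarray(jet, dtype=np.float64).copy()
    jet[:, 0] /= jet[:, 0].sum()
    return jet

# hypothetical jets with columns (pT, y, phi)
jet0, jet1 = np.random.rand(20, 3), np.random.rand(25, 3)
val = ef.emd.emd(normalize_pts(jet0), normalize_pts(jet1), R=0.4, norm=False)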

Update NumPy use to be compatible with NumPy v2.0

Relevant for preparing for thaler-lab/Wasserstein#22

../../../../.pyenv/versions/3.11.7/envs/wasserstein-dev/lib/python3.11/site-packages/energyflow/algorithms/einsumfunc.py:15
  /home/feickert/.pyenv/versions/3.11.7/envs/wasserstein-dev/lib/python3.11/site-packages/energyflow/algorithms/einsumfunc.py:15: DeprecationWarning: `np.compat`, which was used during the Python 2 to 3 transition, is deprecated since 1.26.0, and will be removed
    from numpy.compat import basestring

../../../../.pyenv/versions/3.11.7/envs/wasserstein-dev/lib/python3.11/site-packages/energyflow/algorithms/einsumfunc.py:16
  /home/feickert/.pyenv/versions/3.11.7/envs/wasserstein-dev/lib/python3.11/site-packages/energyflow/algorithms/einsumfunc.py:16: DeprecationWarning: numpy.core.multiarray is deprecated and has been renamed to numpy._core.multiarray. The numpy._core namespace contains private NumPy internals and its use is discouraged, as NumPy internals can change without warning in any release. In practice, most real-world usage of numpy.core is to access functionality in the public NumPy API. If that is the case, use the public NumPy API. If not, you are using NumPy internals. If you would still like to access an internal attribute, use numpy._core.multiarray.c_einsum.
    from numpy.core.multiarray import c_einsum

../../../../.pyenv/versions/3.11.7/envs/wasserstein-dev/lib/python3.11/site-packages/energyflow/algorithms/einsumfunc.py:17
  /home/feickert/.pyenv/versions/3.11.7/envs/wasserstein-dev/lib/python3.11/site-packages/energyflow/algorithms/einsumfunc.py:17: DeprecationWarning: numpy.core.numeric is deprecated and has been renamed to numpy._core.numeric. The numpy._core namespace contains private NumPy internals and its use is discouraged, as NumPy internals can change without warning in any release. In practice, most real-world usage of numpy.core is to access functionality in the public NumPy API. If that is the case, use the public NumPy API. If not, you are using NumPy internals. If you would still like to access an internal attribute, use numpy._core.numeric.asarray.
    from numpy.core.numeric import asarray, asanyarray, result_type, tensordot, dot

../../../../.pyenv/versions/3.11.7/envs/wasserstein-dev/lib/python3.11/site-packages/energyflow/efm.py:44
  /home/feickert/.pyenv/versions/3.11.7/envs/wasserstein-dev/lib/python3.11/site-packages/energyflow/efm.py:44: DeprecationWarning: numpy.core.multiarray is deprecated and has been renamed to numpy._core.multiarray. The numpy._core namespace contains private NumPy internals and its use is discouraged, as NumPy internals can change without warning in any release. In practice, most real-world usage of numpy.core is to access functionality in the public NumPy API. If that is the case, use the public NumPy API. If not, you are using NumPy internals. If you would still like to access an internal attribute, use numpy._core.multiarray.c_einsum.
    from numpy.core.multiarray import c_einsum

These imports should all get updated to be NumPy 2.0 compatible.

energyflow/algorithms/einsumfunc.py:

from numpy.compat import basestring
from numpy.core.multiarray import c_einsum
from numpy.core.numeric import asarray, asanyarray, result_type, tensordot, dot

energyflow/efm.py:

from numpy.core.multiarray import c_einsum
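
A minimal sketch of NumPy 2.0-compatible replacements, assuming the public API covers these uses:

import numpy as np

# basestring was a Python 2/3 compatibility alias; on Python 3 it is just str
basestring = str

# the public np.einsum covers typical uses of the private c_einsum
einsum = np.einsum

# the numeric helpers are all exposed in the public namespace
from numpy import asarray, asanyarray, result_type, tensordot, dot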

Cannot pip install energyflow on M1 mac

Hi,

I am unable to pip install energyflow on an M1 mac ... the resulting error messages are below.

I think there needs to be a cibuildwheel build for M1 installations? @matthewfeickert might know better ...

๐Ÿป MLB

(emd) eduroam-hci-dock-1-305:benchmarks mleblanc$ pip install energyflow
Collecting energyflow
  Using cached EnergyFlow-1.3.2-py2.py3-none-any.whl (700 kB)
Collecting numpy>=1.16.0
  Downloading numpy-1.23.0-cp39-cp39-macosx_11_0_arm64.whl (13.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.3/13.3 MB 16.4 MB/s eta 0:00:00
Collecting wasserstein>=0.3.1
  Using cached Wasserstein-1.0.1.tar.gz (382 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting six>=1.10.0
  Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting h5py>=2.9.0
  Using cached h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl (2.6 MB)
Collecting wurlitzer>=2.0.0
  Using cached wurlitzer-3.0.2-py3-none-any.whl (7.3 kB)
Building wheels for collected packages: wasserstein
  Building wheel for wasserstein (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for wasserstein (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [29 lines of output]
      /private/var/folders/0c/bns72rts4694w4ftngm2_mh80000gn/T/pip-build-env-pqlly00e/overlay/lib/python3.9/site-packages/setuptools/config/setupcfg.py:463: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
        warnings.warn(msg, warning_class)
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build/lib.macosx-12-arm64-cpython-39
      creating build/lib.macosx-12-arm64-cpython-39/wasserstein
      copying wasserstein/__init__.py -> build/lib.macosx-12-arm64-cpython-39/wasserstein
      copying wasserstein/wasserstein.py -> build/lib.macosx-12-arm64-cpython-39/wasserstein
      running build_ext
      building 'wasserstein._wasserstein' extension
      creating build/temp.macosx-12-arm64-cpython-39
      creating build/temp.macosx-12-arm64-cpython-39/wasserstein
      clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk -DSWIG_TYPE_TABLE=wasserstein -I/private/var/folders/0c/bns72rts4694w4ftngm2_mh80000gn/T/pip-build-env-pqlly00e/overlay/lib/python3.9/site-packages/numpy/core/include -I. -I/Users/mleblanc/svjets/benchmarks/emd/include -I/opt/homebrew/Cellar/python@3.9/3.9.12_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c wasserstein/wasserstein.cpp -o build/temp.macosx-12-arm64-cpython-39/wasserstein/wasserstein.o -Xpreprocessor -fopenmp -ffast-math -std=c++14 -g0
      In file included from wasserstein/wasserstein.cpp:5457:
      In file included from ./wasserstein/Wasserstein.hh:47:
      In file included from wasserstein/internal/PairwiseEMD.hh:54:
      wasserstein/internal/PairwiseEMDBase.hh:54:10: fatal error: 'omp.h' file not found
      #include <omp.h>
               ^~~~~~~
      clang: error: unable to execute command: Segmentation fault: 11
      clang: error: clang frontend command failed due to signal (use -v to see invocation)
      Apple clang version 13.1.6 (clang-1316.0.21.2.3)
      Target: arm64-apple-darwin21.1.0
      Thread model: posix
      InstalledDir: /Library/Developer/CommandLineTools/usr/bin
      clang: note: diagnostic msg: Error generating preprocessed source(s).
      error: command '/usr/bin/clang' failed with exit code 254
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for wasserstein
Failed to build wasserstein
ERROR: Could not build wheels for wasserstein, which is required to install pyproject.toml-based projects
(emd) eduroam-hci-dock-1-305:benchmarks mleblanc$ pip install wheel
Collecting wheel
  Using cached wheel-0.37.1-py2.py3-none-any.whl (35 kB)
Installing collected packages: wheel
Successfully installed wheel-0.37.1
(emd) eduroam-hci-dock-1-305:benchmarks mleblanc$ pip install energyflow
Collecting energyflow
  Using cached EnergyFlow-1.3.2-py2.py3-none-any.whl (700 kB)
Collecting numpy>=1.16.0
  Using cached numpy-1.23.0-cp39-cp39-macosx_11_0_arm64.whl (13.3 MB)
Collecting h5py>=2.9.0
  Using cached h5py-3.7.0-cp39-cp39-macosx_11_0_arm64.whl (2.6 MB)
Collecting wasserstein>=0.3.1
  Using cached Wasserstein-1.0.1.tar.gz (382 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting six>=1.10.0
  Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting wurlitzer>=2.0.0
  Using cached wurlitzer-3.0.2-py3-none-any.whl (7.3 kB)
Building wheels for collected packages: wasserstein
  Building wheel for wasserstein (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for wasserstein (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [29 lines of output]
      /private/var/folders/0c/bns72rts4694w4ftngm2_mh80000gn/T/pip-build-env-cdkgzfz2/overlay/lib/python3.9/site-packages/setuptools/config/setupcfg.py:463: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
        warnings.warn(msg, warning_class)
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build/lib.macosx-12-arm64-cpython-39
      creating build/lib.macosx-12-arm64-cpython-39/wasserstein
      copying wasserstein/__init__.py -> build/lib.macosx-12-arm64-cpython-39/wasserstein
      copying wasserstein/wasserstein.py -> build/lib.macosx-12-arm64-cpython-39/wasserstein
      running build_ext
      building 'wasserstein._wasserstein' extension
      creating build/temp.macosx-12-arm64-cpython-39
      creating build/temp.macosx-12-arm64-cpython-39/wasserstein
      clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk -DSWIG_TYPE_TABLE=wasserstein -I/private/var/folders/0c/bns72rts4694w4ftngm2_mh80000gn/T/pip-build-env-cdkgzfz2/overlay/lib/python3.9/site-packages/numpy/core/include -I. -I/Users/mleblanc/svjets/benchmarks/emd/include -I/opt/homebrew/Cellar/python@3.9/3.9.12_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c wasserstein/wasserstein.cpp -o build/temp.macosx-12-arm64-cpython-39/wasserstein/wasserstein.o -Xpreprocessor -fopenmp -ffast-math -std=c++14 -g0
      In file included from wasserstein/wasserstein.cpp:5457:
      In file included from ./wasserstein/Wasserstein.hh:47:
      In file included from wasserstein/internal/PairwiseEMD.hh:54:
      wasserstein/internal/PairwiseEMDBase.hh:54:10: fatal error: 'omp.h' file not found
      #include <omp.h>
               ^~~~~~~
      clang: error: unable to execute command: Segmentation fault: 11
      clang: error: clang frontend command failed due to signal (use -v to see invocation)
      Apple clang version 13.1.6 (clang-1316.0.21.2.3)
      Target: arm64-apple-darwin21.1.0
      Thread model: posix
      InstalledDir: /Library/Developer/CommandLineTools/usr/bin
      clang: note: diagnostic msg: Error generating preprocessed source(s).
      error: command '/usr/bin/clang' failed with exit code 254
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for wasserstein
Failed to build wasserstein
ERROR: Could not build wheels for wasserstein, which is required to install pyproject.toml-based projects
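
The root failure here is the missing OpenMP header (omp.h), not pip itself: Apple's clang ships without OpenMP. A workaround sketch, assuming Homebrew is available, is to provide libomp explicitly before building:

$ brew install libomp
$ export CPPFLAGS="-I$(brew --prefix libomp)/include"
$ export LDFLAGS="-L$(brew --prefix libomp)/lib"
$ pip install energyflow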

Padding Values

We (@matthewfeickert, @mattleblanc, @kratsg) were finding that the stability of the training is rather dependent on the choice of default value for padding the inputs. We find that padding with zeros works well, but any other value results in instability. This sort of makes sense, because the network has to learn a nonzero bias in order for the padding values not to contribute to the sum in the lambda layer. If you pad with zeros, however, it doesn't have to learn any bias for the padding values to be ignored in the sum. One suggestion around this is to have a way to simply ignore padded values in the lambda layer. Since the default value is application-specific, maybe the padding value could be specified in your initialization?
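
A minimal NumPy illustration of why zero padding is the special case (shapes here are hypothetical): the padded rows contribute exactly zero to the per-event sum, so no compensating bias has to be learned.

import numpy as np

phi_out = np.random.rand(2, 5, 8)         # (events, particles, latent dim)
pad_mask = np.array([[1, 1, 1, 0, 0],
                     [1, 1, 0, 0, 0]])    # 1 = real particle, 0 = padding
phi_out *= pad_mask[..., None]            # an explicit mask, as suggested above
event_repr = phi_out.sum(axis=1)          # padded slots add nothing to the sum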

Thank you for such nice documentation and helpful working examples!

Sincerely,
Ben

Energyflow without zero padding

I am planning to use energyflow for a classification task, and I see that the examples involve data to which zero padding has been added. Reading the paper, it looks like the method allows events of variable size. Is this correct?
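
For context, a sketch of one common approach (an assumption about usage, not package behaviour): pad each batch only to its own maximum multiplicity, with zeros, so the padded rows drop out of the sum.

import numpy as np

def pad_batch(events, feature_dim):
    # events: list of (n_particles_i, feature_dim) arrays of varying length
    max_len = max(len(ev) for ev in events)
    out = np.zeros((len(events), max_len, feature_dim))
    for i, ev in enumerate(events):
        out[i, :len(ev)] = ev
    return out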

Module 'tensorflow.keras' has no attribute '__version__' when running examples

An error was encountered while running the EFN example:

/home/w568w/Projects/efn/.venv/lib/python3.11/site-packages/energyflow/archs/__init__.py:30: UserWarning: could not import some architectures - cannot import name '__version__' from 'tensorflow.keras' (/home/w568w/Projects/efn/.venv/lib/python3.11/site-packages/keras/api/_v2/keras/__init__.py) 
  warnings.warn('could not import some architectures - ' + str(e)) 
Traceback (most recent call last): 
  File "/home/w568w/Projects/efn/efn_example.py", line 32, in <module> 
    from energyflow.archs import EFN 
ImportError: cannot import name 'EFN' from 'energyflow.archs' (/home/w568w/Projects/efn/.venv/lib/python3.11/site-packages/energyflow/archs/__init__.py)

After inspection, I found that the tensorflow.keras module has been split out into a separate package since the 2.6.0 release, two years ago. tensorflow.keras still exists as a backward-compatible API endpoint, but of course the basic attribute of a package, __version__, does not come along with the re-export.

However, EnergyFlow is still looking for:

https://github.com/pkomiske/EnergyFlow/blob/e5eb2bdacba93fb933f63185817df7f37ad32b58/energyflow/archs/efn.py#L18

which is incompatible with any TensorFlow version greater than 2.6.0.
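
A patch sketch for the kind of fix needed (an assumption about intent, not the maintainers' plan): read the version from the top-level tensorflow package, which still carries __version__:

import tensorflow as tf

# tf.__version__ remains a public attribute of the top-level package
tf_version = tuple(int(x) for x in tf.__version__.split('.')[:2])
assert tf_version >= (2, 0)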

Reproducing Steps

  1. Create a clean environment:
     $ mkdir efn
     $ cd efn
     $ PIPENV_VENV_IN_PROJECT=1 pipenv shell
     (efn) $ pip install tensorflow scikit-learn matplotlib EnergyFlow
  2. Put efn_example.py into the directory.
  3. Run (efn) $ python efn_example.py.
  4. The error happens.

Environment Info

OS: Arch Linux (rolling release)
Python: 3.11.3
pipenv: version 2023.7.23
TensorFlow: 2.13.0
EnergyFlow: 1.3.2

Loading a trained PFN model

Hi,
I am having an issue reloading a trained PFN model to use for later predictions. I save the whole model at the end (not while training), and have tried loading it using load_model(), resulting in: "AttributeError: 'NoneType' object has no attribute 'get'". I am not sure if there is a specific way to load your saved models, but any help would be greatly appreciated.
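
A workaround sketch, assuming the architecture parameters are known and that the arch exposes its underlying Keras model as .model: rebuild the PFN and load only the weights, sidestepping full-model deserialization. Sizes and paths below are placeholders.

import numpy as np
from energyflow.archs import PFN

pfn = PFN(input_dim=4, Phi_sizes=(100, 100, 128), F_sizes=(100, 100, 100))
pfn.model.load_weights('pfn_weights.h5')   # hypothetical weights file
X_test = np.random.rand(10, 30, 4)         # placeholder input
preds = pfn.model.predict(X_test)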

Question about Batch Normalization

Hi,
I am thinking about using Batch Normalization to avoid overfitting when training an EFN/PFN model, but I don't find anything relevant to it in your documentation.

I am wondering whether Batch Normalization is implemented in the package? If yes, could you please give me some instructions on how to use it? If not, would it be useful to have it in the EFN package in the future?

Thanks,
Ang

Segfault while calculating emds after import pytorch

I'm getting an unusual segfault when using the emd.emds() function on macOS 11.4, which seems to occur only if pytorch is imported before energyflow.

Script to reproduce segfault:

import torch
import energyflow.emd as emd
import numpy as np

x = np.random.rand(100, 30, 4)
y = np.random.rand(100, 30, 4)

emd.emds(x, y)  # will segfault

using:

macOS: 11.4
python: 3.9.6
numpy: 1.21.2
energyflow: 1.3.2
wasserstein: 1.0.1
torch: 1.9.0

Importing torch after energyflow works fine:

import energyflow.emd as emd
import torch
import numpy as np

x = np.random.rand(100, 30, 4)
y = np.random.rand(100, 30, 4)

emd.emds(x, y)  # no segfault

Import error

Hi, while trying to run your code I encountered the following problem:
ImportError: cannot import name 'PFN' from 'energyflow.archs'

Will appreciate your response,
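
A diagnostic sketch: energyflow.archs only exposes PFN when TensorFlow/Keras imports cleanly (a failed import is downgraded to a UserWarning), so surfacing that warning usually reveals the root cause:

import warnings
warnings.simplefilter('always')   # show the "could not import some architectures" warning

import tensorflow as tf           # confirm TensorFlow itself imports
print(tf.__version__)

from energyflow.archs import PFN  # succeeds only if the imports above do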

Loading MOD datasets results in unsustainable memory consumption

Hi,

Attempting to download the MOD datasets using the built-in ef.datasets.mod.load function (as below) results in excess memory consumption, which kills my process on the machine I'm currently using.

I could grab the files myself from zenodo, but it seems possible that this is unintended behaviour and so I am reporting it here.

๐Ÿป MLB

ef.datasets.mod.load( amount=1.0,
                      cache_dir='/faxbox2/user/mleblanc/energyflow',
                      collection='CMS2011AJets',
                      dataset='sim',
                      subdatasets=None,
                      validate_files=False,
                      store_pfcs=True,
                      store_gens=True,
                      verbose=1)

ef.datasets.mod.load( amount=1.0,
                      cache_dir='/faxbox2/user/mleblanc/energyflow',
                      collection='CMS2011AJets',
                      dataset='gen',
                      subdatasets=None,
                      validate_files=False,
                      store_pfcs=True,
                      store_gens=True,
                      verbose=1)

ef.datasets.mod.load( amount=1.0,
                      cache_dir='/faxbox2/user/mleblanc/energyflow',
                      collection='CMS2011AJets',
                      dataset='cms',
                      subdatasets=None,
                      validate_files=False,
                      store_pfcs=True,
                      store_gens=True,
                      verbose=1)
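
A workaround sketch, assuming peak memory scales with the amount requested: load a small fraction of the collection at a time rather than amount=1.0, and only the subdatasets actually needed.

import energyflow as ef

# amount < 1.0 loads roughly that fraction of the files (an assumption
# about the parameter's semantics)
sim_small = ef.datasets.mod.load(amount=0.05,
                                 cache_dir='/faxbox2/user/mleblanc/energyflow',
                                 collection='CMS2011AJets',
                                 dataset='sim')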

Unexpected behaviour of periodic_phi

Hi,
I'm seeing very unexpected behaviour when testing periodic_phi. First, it seems to propagate also to wasserstein, and second, it seems to persist and can't be undone by setting periodic_phi=False.
Am I doing something wrong here, or is this a bug?

Cheers
Javier

(Screenshot attached: image_2022_10_19T09_35_50_494Z)
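
A repro sketch of the reported behaviour (values are illustrative; the actual numbers come from the screenshot):

import numpy as np
import energyflow as ef

# two toy events with columns (pT, y, phi)
a = np.array([[1.0, 0.0, 0.1], [1.0, 0.5, 6.2]])
b = np.array([[1.0, 0.1, 3.0], [1.0, 0.4, 0.2]])

d1 = ef.emd.emd(a, b, R=1.0, periodic_phi=False)
d2 = ef.emd.emd(a, b, R=1.0, periodic_phi=True)
d3 = ef.emd.emd(a, b, R=1.0, periodic_phi=False)  # reportedly matches d2, not d1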

qg_nsubs.load fail to download the data

I am trying to run dnn_example.py and it seems that the qg_nsubs.load(...) method cannot find a path to download the data. I attach the traceback from the output of the Jupyter notebook (ipynb renamed to txt) and my system setup.
Runner_dnn.txt
version.txt

Thank you in advance for the help!
Serhii

PFN training on tf.data.Dataset objects

Hello,
My PFN network seems to have some kind of incompatibility with training on Dataset objects. Using numpy arrays, the training works fine. However, I'm trying to construct a data pipeline in order to handle larger data sets, as well as to use Keras tools for multi-GPU training. I'm constructing this object in the following way:

deepSets = tf.data.Dataset.from_tensor_slices((X, Y))
ds_train = deepSets.skip(val+test)
ds_test = deepSets.take(test+val)
ds_val = ds_test.skip(test)
ds_test = ds_test.take(test)

pfn = PFN(input_dim=5, Phi_sizes=Phi_sizes, F_sizes=F_sizes,
          output_act=output_act, output_dim=output_dim, loss=loss,
          optimizer=optimizer, metrics=[])

history = pfn.fit(ds_train,
                  epochs=num_epoch,
                  batch_size=batch_size,
                  validation_data=ds_val,
                  verbose=1)

Now I get the following error:
Epoch 1/1500
WARNING:tensorflow:Model was constructed with shape (None, None, 5) for input KerasTensor(type_spec=TensorSpec(shape=(None, None, 5), dtype=tf.float32, name='input'), name='input', description="created by layer 'input'"), but it was called on an input with incompatible shape (1001, 5)

What is the correct way to make a PFN model compatible with a tf.data.Dataset object?
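
A likely-fix sketch, continuing the snippet above (an inference from the warning, not a confirmed answer): from_tensor_slices yields single events of shape (particles, features), so Keras sees no batch axis. Batching the dataset restores the expected (batch, particles, features) shape, and batch_size must then not be passed to fit:

ds_train = ds_train.batch(batch_size)
ds_val = ds_val.batch(batch_size)

history = pfn.fit(ds_train,              # batch size now comes from the dataset
                  epochs=num_epoch,
                  validation_data=ds_val,
                  verbose=1)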

Problem with parallel processing of batch_compute method

I downloaded and tried to run efp_example.py. I left the code running for a few hours, but it seems to get stuck at "Calculating .". It runs and uses all the CPU power without any response for hours. Going through the console output, it definitely looks like a problem with the parallel processes: one or more are not closed, which causes an immediate shutdown by Python. It then tries to re-evaluate this part of the code and gets stuck in this loop. I don't know how to extract the full output from the console, so I attach some screenshots; sorry for the inconvenience.
(Console screenshots attached: Console_1, Console_2)

Also, I attach the output from the Jupyter notebook interface (renamed from ipynb to txt). Please take into account that the evaluation was interrupted.
Runner_efp.txt

My setup:
version.txt

Thank you in advance for the help!
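
One common cause of this loop (a guess, not a confirmed diagnosis): on spawn-based platforms, the multiprocessing behind batch_compute re-imports the main script, so any top-level work is re-executed in every worker. Guarding the entry point, or forcing serial computation, is a standard sketch:

import numpy as np
import energyflow as ef

def main():
    events = [np.random.rand(20, 4) for _ in range(100)]  # placeholder events
    efpset = ef.EFPSet('d<=4')
    # n_jobs=1 as a serial fallback (assuming batch_compute accepts n_jobs)
    results = efpset.batch_compute(events, n_jobs=1)

if __name__ == '__main__':
    main()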

Memory usage in the EFP batch_compute function

Hi, I'm trying to calculate the n=4, d=4 EFPs for 100k jets with 30 particles each. Every time I use the batch_compute function, my program's memory usage jumps by ~8 GB even though the total output should only be ~100 MB. I was wondering if this is expected, and if there is any way to lower the memory usage?
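
A mitigation sketch (assuming the memory is held by per-call intermediates rather than the output): compute in chunks and concatenate, so only one chunk's intermediates are alive at a time.

import numpy as np
import energyflow as ef

efpset = ef.EFPSet('n==4', 'd==4')
events = [np.random.rand(30, 4) for _ in range(10000)]  # placeholder jets

chunk = 5000  # illustrative chunk size
results = np.concatenate([efpset.batch_compute(events[i:i + chunk])
                          for i in range(0, len(events), chunk)])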

Feature Request: Feeding data directly into F

We (Andy Buckley and myself) have been working on using PFNs on large pp event-level data (~300 particles per event). After conversations with Jesse Thaler and Ben Nachman, it was suggested that it might be useful to feed some data (e.g. the event's HT) directly into F, alongside the data obtained from the particles through Phi. This is currently not supported, but apparently would be relatively easy to implement.
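
A sketch of the idea in plain Keras (not the package's API; names and sizes are illustrative): concatenate per-event observables onto the summed latent representation before F.

import tensorflow as tf

latent_sum = tf.keras.Input(shape=(128,))   # stand-in for the summed Phi output
extras = tf.keras.Input(shape=(1,))         # e.g. the event's HT
f_in = tf.keras.layers.Concatenate()([latent_sum, extras])
f = tf.keras.layers.Dense(100, activation='relu')(f_in)
out = tf.keras.layers.Dense(2, activation='softmax')(f)
model = tf.keras.Model([latent_sum, extras], out)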

Examples don't run for num_data < 100000

Hi all,

Trying to run e.g. efp_example.py from a clean install doesn't work. I get the following:

mlb-macbookpro:EMDs mleblanc$ /usr/local/bin/python ~/.energyflow/examples/efp_example.py
Using TensorFlow backend.
Traceback (most recent call last):
  File "/Users/mleblanc/.energyflow/examples/efp_example.py", line 54, in <module>
    X, y = qg_jets.load(num_data)
  File "/usr/local/lib/python2.7/site-packages/energyflow/datasets/qg_jets.py", line 132, in load
    max_len_axis1 = max([X.shape[1] for X in Xs])
ValueError: max() arg is an empty sequence

If num_data on L40 of the example is changed to be >99999, the data files are downloaded and things proceed as expected. I guess this is because num_data/num_per_file = 0 if num_data < 100000, since this is integer division.

I have fixed it locally with a small hack, but you might want to do something else. I've changed L112 in qg_jets.py to be

    num_files = int(np.ceil(1.*num_data/num_per_file)) if num_data > -1 else max_num_files

... I'm happy to push the fix to the examples if you want!

๐Ÿป MLB

How was the q/g nsubs data generated?

Hello,

The documentation here:
https://energyflow.network/docs/datasets/#quark-and-gluon-nsubs
says that the q/g nsubs data is "A dataset consisting of 45 N-subjettiness observables for 100k quark and gluon jets generated with Pythia 8.230". But I don't see any other notes.

The kind of information I'm interested in: what processes were used, at what sqrt(s), how the final-state particles were clustered, which jet algorithm, which software, what kinematic cuts were applied, and whether there was any detector simulation, and of what kind.

I'm looking for similar information as shown for the other q/g data:
https://energyflow.network/docs/datasets/#quark-and-gluon-jets

Also, are there any papers using the nsubs dataset, for comparison of results?

Finally -- and importantly -- how do I cite use of your dataset? I don't see a DOI or a Zenodo link.

Thanks!

Torch support?

Hi,
I was considering some ideas with EFP in PyTorch, but from EFPSet.batch_compute I get:
TypeError: arg is not one of numpy.ndarray, list, or fastjet.PseudoJet

Any chance to support also torch.Tensor as input type?

Cheers
Javier
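
Until then, a workaround sketch: convert tensors to NumPy on the way in, since batch_compute accepts NumPy arrays.

import numpy as np
import torch
import energyflow as ef

efpset = ef.EFPSet('n==4', 'd==4')
events_torch = [torch.rand(20, 4) for _ in range(10)]          # placeholder events
events_np = [ev.detach().cpu().numpy() for ev in events_torch]
results = efpset.batch_compute(events_np)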

EMD Loss Function

How can I use the EMD as a loss function?

I tried using the following function:

def loss_function(r1, labels):
    check = energyflow.emd.emd(labels.detach().cpu().numpy(),
                               r1.detach().cpu().numpy(),
                               R=1, norm=False, beta=0.1, measure='euclidean',
                               coords='cartesian', return_flow=False, gdim=None,
                               mask=False, n_iter_max=100000, periodic_phi=False,
                               phi_col=2, empty_policy='error')
    return check

but running loss.backward() in PyTorch returns the following error:

AttributeError: 'numpy.float64' object has no attribute 'backward'

Any help is appreciated !
Thank you in advance.
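
The underlying problem is that ef.emd.emd returns a plain float computed outside the autograd graph, so it carries no gradient. A loss has to be built from differentiable torch ops; one sketch uses a Sinkhorn-style optimal-transport loss from the third-party geomloss package (an assumption, not part of energyflow):

import torch
from geomloss import SamplesLoss  # hypothetical extra dependency

sinkhorn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)
r1 = torch.rand(30, 3, requires_grad=True)  # placeholder predicted particles
labels = torch.rand(30, 3)                  # placeholder target particles
loss = sinkhorn(r1, labels)                 # stays in the autograd graph
loss.backward()                             # gradients now flow to r1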
