
netket / netket

502 stars · 25 watchers · 171 forks · 64.22 MB

Machine learning algorithms for many-body quantum systems

Home Page: https://www.netket.org

License: Apache License 2.0

Python 99.97% Shell 0.03%
machine-learning-algorithms quantum neural-networks monte-carlo-methods hamiltonian physics-simulation variational-method variational-monte-carlo exact-diagonalization markov-chain-monte-carlo

netket's Introduction


NetKet

Release Anaconda-Server Badge Paper (v3) codecov Slack

NetKet is an open-source project delivering cutting-edge methods for the study of many-body quantum systems with artificial neural networks and machine learning techniques. It is a Python library built on JAX.

Installation and Usage

NetKet runs on MacOS and Linux. We recommend installing NetKet with pip, but it can also be installed with conda. It is often necessary to first update pip to a recent release (>=20.3) so that upper compatibility bounds are taken into account and a broken installation is avoided. For instructions on how to install the latest stable/beta release of NetKet, see the Get Started page of our website or run the following commands (Apple M1 users: follow that link for further instructions):

pip install --upgrade pip
pip install --upgrade netket

If you wish to install the current development version of NetKet (the master branch of this GitHub repository) together with the additional dependencies, you can run the following commands:

pip install --upgrade pip
pip install 'git+https://github.com/netket/netket.git#egg=netket[all]'

To speed up NetKet computations, even on a single machine, you can install the MPI-related dependencies by adding [mpi] in square brackets:

pip install --upgrade pip
pip install --upgrade "netket[mpi]"

We recommend installing NetKet with all of its extra dependencies, which are documented below. However, if you do not have a working MPI compiler in your PATH, this installation will most likely fail because it will attempt to install mpi4py, which enables MPI support in NetKet.

The latest release of NetKet is always available on PyPI and can be installed with pip. NetKet is also available on conda-forge; however, the version available through conda install can be slightly out of date compared to PyPI. To check the latest version released on both distributions, you can inspect the badges at the top of this readme.

Extra dependencies

When installing netket with pip, you can pass the following extra variants in square brackets. You can install several of them by separating them with a comma, as shown in the example after this list.

  • "[dev]": installs development-related dependencies such as black, pytest and testing dependencies
  • "[mpi]": Installs mpi4py to enable multi-process parallelism. Requires a working MPI compiler in your path
  • "[extra]": Installs tensorboardx to enable logging to tensorboard, and openfermion to convert the QubitOperators.
  • "[all]": Installs all extra dependencies

MPI Support

To enable MPI support you must install mpi4jax. Please note that we advise installing mpi4jax with the same tool (conda or pip) that you used to install its dependency mpi4py.

To check whether MPI support is enabled, check the flag:

>>> import netket
>>> netket.utils.mpi.available
True

Getting Started

To get started with NetKet, we recommend having a look at our tutorials page and running them on your computer or on Google Colaboratory. There are also many example scripts that you can download, run and edit, showcasing some use-cases of NetKet, although they are not commented.

If you want to get in touch with us, feel free to open an issue or a discussion here on GitHub, or to join the MLQuantum Slack group where several people involved with NetKet hang out. To join the Slack channel, just accept this invitation.

License

Apache License 2.0

netket's People

Contributors

allesini99, attila-i-szabo, chenao-phys, chrisrothut, dependabot[bot], emilyjd, erinaldi, everthemore, fabienalet, femtobit, gcarleo, gpescia, gtorlai, imi-hub, inailuig, jamesetsmith, jwnys, kchoo1118, llviteritti, nikita-astronaut, philipvinc, riccardo-rende, shhslin, stavros11, tvieijra, volodyaco, wdphy16, wuyukai, yannra, z-denis

netket's Issues

Unsupervised Learning

Add learning methods to perform unsupervised learning with the neural-network quantum states in NetKet.

Complex support for custom operators

While trying to define a custom Hamiltonian with complex entries (for example, the sigma_y Pauli matrix), the following error occurs: Object of type complex is not JSON serializable
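
A minimal sketch of the usual workaround, assuming each complex entry is encoded as a [real, imag] pair before calling json.dump (the helper name below is illustrative, not part of NetKet):

import json
import numpy as np

sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

def to_json_compatible(matrix):
    # json.dump cannot serialize Python complex numbers,
    # so split each entry into a [real, imag] pair.
    return [[[z.real, z.imag] for z in row] for row in matrix.tolist()]

print(json.dumps({"Operators": [to_json_compatible(sigma_y)]}))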

The parameter 'Alpha' in RbmSpinSymm for custom Hamiltonian

in netket/Tutorials/CustomHamiltonian/custom_hamiltonian.py,
changing the machine to RbmSpinSymm and leaving the rest of the code untouched (so Alpha is left as 1) results in the output
'# Graph created'
'# Number of nodes = 20'
'# RBM Initizialized with nvisible = 20 and nhidden = 1'
'# Symmetries are being used : 22 parameters left, instead of 41'
.....
which implies that only 1 hidden unit is used instead of 20. Moreover, if the value of Alpha is changed, the displayed nhidden changes accordingly (and equals Alpha).

Confusion between wave function and probability

In the Restricted Boltzmann Machine, we define the wave function as
\Psi(S_1, \dots, S_N) = \sum_{\{h_j\}} \exp\Big( \sum_i a_i S_i + \sum_j b_j h_j + \sum_{ij} W_{ij} h_j S_i \Big)
and thus the probability of the state (S_1,...,S_N) should be the modulus squared of this quantity.

However, in the sampler source code (for example, Sampler/metropolis_local.hpp) the probability ratio is computed as (lines 137-138)

const auto lvd = psi_.LogValDiff(v_, tochange, newconf, lt_);
double ratio = std::norm(std::exp(lvd));

which looks like the modulus of the ratio between the wave functions without a square.
Is there a reason for doing so, or is it mathematically equivalent to taking the square? (Note that std::norm for std::complex returns the squared magnitude |z|², not |z|, so the two are in fact equivalent.)
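
A quick numeric check of the equivalence (plain numpy, independent of NetKet):

import numpy as np

lvd = 0.3 + 1.2j                # log Psi(v') - log Psi(v)
modulus = np.abs(np.exp(lvd))   # |Psi(v')/Psi(v)|
squared = modulus ** 2          # what std::norm(std::exp(lvd)) actually returns
assert np.isclose(squared, np.exp(2 * lvd.real))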

Pythonize NetKet for a simple user experience

Hi, thank you very much for releasing this tool. I am sure many in the community are looking forward to its use.

I was wondering if it is possible to take full advantage of the Python ecosystem and develop this tool in a way similar to any Python-based scientific computing library. More specifically, the installation of the package could be made as simple as pip install netket or conda install netket. Python can handle running C++ code in the background and allows the use of notebooks such as this one on superradiance, which showcases PIQS, a recent tool for dealing with permutational symmetry that we have developed.

The current workflow, where a JSON parameters file is created and then the C++ code is run using it, can be done completely behind the scenes by wrapping everything up in a Python-based package. This would allow users to simply run your code in a Python script or inside a Jupyter notebook. A simple prototype example could be:

from netket.graph import Hypercube
from netket.machine import RbmSpinSymm
from netket.hamiltonian import BoseHubbard
from netket import ground

graph = Hypercube(L=12, dim=1)
hamiltonian = BoseHubbard(4.0, nmax=3, bosons=12)
machine = RbmSpinSymm(alpha=4.0)

ground_state = ground(hamiltonian, graph, machine, method="Sr", learning_rate=0.1)
print(ground_state)

I am opening this issue in case you would like to develop this into a Python package.

Seeming preference for values of some operators

Hello,
I've been playing with the NetKet code for some time now. In particular, we've seen a particularly interesting behavior when it comes to measuring one-site observables of the 1D Ising model in the ferromagnetic regime.
Let us say we run the code as described in the Ising1d tutorial for different values of the transverse field h \in [0J, 2J]. We can also measure some on-site observables, such as the on-site magnetization, by specifying:

import numpy as np

def sigmaz():  # defines the third Pauli matrix
    return np.array([[1, 0], [0, -1]])

def get_loc_obs_dict(i, O, name=''):  # defines the information passed to pars['Observables']
    sites = [[i]]
    ops = [O.tolist()]
    return dict(ActingOn=sites, Operators=ops, Name=name + str(i))

# Defines observables to measure during the learning procedure
pars['Observables'] = []
for i in range(L):
    pars['Observables'].append(get_loc_obs_dict(i, sigmaz(), name='z'))

I've also written a Python script that runs a simulation on an L=12 chain for 40 equally spaced values of h in the interval specified. The energy has no trouble converging, and in fact observables such as <Z_i Z_{i+1}> also converge to the right values, but our on-site magnetization doesn't (see figures in the attachments). Sampling through different values for the hidden nodes did not help, as can be seen in the image I provide. The issue is prominent when the energy gap is small.

The issue is not merely that the theoretical and learned on-site magnetizations seem to disagree at small h values, but that NetKet seems to converge to ground states which prefer a negative on-site magnetization. I've been looking through the documentation and I cannot seem to spot the issue.

I was wondering if I could be given some guidance on how to approach this issue. I've looked through the seed in Utils/random_utils.hpp and the initialization ansatz in Machines/rbm_spin.hpp but haven't seen any issues. I even wrote an initialization scheme that approximates a GHZ state from the get-go, but the on-site magnetization is still negative. I'd be happy to give more details if needed.

(attachment: netket_ising1d_alphas)

Feed-forward networks

Add a generic implementation of feed-forward networks with arbitrary activation functions, number of layers, and local Hilbert (input) space for the visible units.

Cannot fix the total magnetization to an odd value?

It seems that something's wrong with MC sampling for odd magnetizations. Example config:

{ "Optimizer": {"Name": "AdaMax"}
, "Hamiltonian": {"Name": "Heisenberg", "TotalSz": 1}
, "Graph": {"Dimension": 1, "Name": "Hypercube", "Pbc": true, "L": 5}
, "GroundState": {"OutputFile": "test", "Diagshift": 0.1, "NiterOpt": 20, "UseIterative": true, "Nsamples": 2000.0, "Method": "Sr"}
, "Machine": {"Alpha": 2, "Name": "RbmSpin"}
, "Sampler": {"Name": "MetropolisHamiltonian"}
}

The netket executable then terminates with the following error:

Cannot fix the total magnetization

It looks like a bug to me, because clearly 5 spins can have a total magnetisation of 1. Am I missing something here?

Custom Hamiltonian failure

The file custom_hamiltonian.py relies on the function json.dump(), which doesn't support complex numbers. Therefore, an error occurs when creating a Hamiltonian with complex entries.

Add python documentation

As we move towards the alpha/beta release of version 2.0, we need complete documentation for the new Python functions.

The idea is to use the nice parser developed by @ooreilly here. This parser, plus some markdown templates, will be used to automatically generate the files for the docs website, here.

To do so, the Python functions exposed through pybind11 should support a minimally modified version of Google-style docstrings.

A simple function with signature

operation(self: python_example.Operations, i: int, j: int, op_name: str='add') -> int

would have an associated docstring:

      A function performing one of the allowed operations between two integers.

      Args:
          i (int): The first parameter.
          j (int): The second parameter.
          op_name (str='add'): The type of operation.

      Returns:
          int: If op_name=='add', it performs the sum i+j; otherwise 0.

Help is welcome to add documentation following this style, once we finalize the parser bit.

One can add docstrings to pybind11 functions like this:

m.def("operation", &operation, R"EOF(
      A function performing one of the allowed operations between two integers.

      Args:
          i (int): The first parameter.
          j (int): The second parameter.
          op_name (str='add'): The type of operation.

      Returns:
          int: If op_name=='add', it performs the sum i+j; otherwise 0.
)EOF");

Assertion failure in test-matrixwrapper on master

While searching for the memory leak in #29, I tried running the tests on master with NETKET_Sanitizer=ON. test-matrixwrapper_default and test-matrixwrapper_all both trigger an assertion failure for BoseHubbard. I think it might be a good idea to fix this before proceeding further with #29.

# Hypercube created 
# Dimension = 2
# L = 2
# Pbc = 0
# Bose Hubbard model created 
test-matrixwrapper: /home/tom/src/netket/NetKet/Hilbert/bosons.hpp:158: virtual void netket::Boson::UpdateConf(Eigen::VectorXd &, const std::vector<int> &, const std::vector<double> &) const: Assertion `CheckConstraint(v)' failed.
=================================================================
==27721==ERROR: AddressSanitizer: global-buffer-overflow on address 0x00000200cf40 at pc 0x0000009b2490 bp 0x00000200cef0 sp 0x00000200cee8
WRITE of size 8 at 0x00000200cf40 thread T0
    #0 0x9b248f in unsigned long* std::__uninitialized_copy<true>::__uninit_copy<std::move_iterator<unsigned long*>, unsigned long*>(std::move_iterator<unsigned long*>, std::move_iterator<unsigned long*>, unsigned long*) /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/stl_uninitialized.h
    #1 0x9b2293 in unsigned long* std::uninitialized_copy<std::move_iterator<unsigned long*>, unsigned long*>(std::move_iterator<unsigned long*>, std::move_iterator<unsigned long*>, unsigned long*) /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/stl_uninitialized.h:131:14
    #2 0x9b1e69 in unsigned long* std::__uninitialized_copy_a<std::move_iterator<unsigned long*>, unsigned long*, unsigned long>(std::move_iterator<unsigned long*>, std::move_iterator<unsigned long*>, unsigned long*, std::allocator<unsigned long>&) /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/stl_uninitialized.h:289:14
    #3 0x9b13c8 in unsigned long* std::__uninitialized_move_if_noexcept_a<unsigned long*, unsigned long*, std::allocator<unsigned long> >(unsigned long*, unsigned long*, unsigned long*, std::allocator<unsigned long>&) /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/stl_uninitialized.h:310:14
    #4 0x9afcd3 in void std::vector<unsigned long, std::allocator<unsigned long> >::_M_realloc_insert<unsigned long const&>(__gnu_cxx::__normal_iterator<unsigned long*, std::vector<unsigned long, std::allocator<unsigned long> > >, unsigned long const&&&) /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/vector.tcc:446:8
    #5 0x9aee3d in std::vector<unsigned long, std::allocator<unsigned long> >::push_back(unsigned long const&) /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/stl_vector.h:1085:4
    #6 0xc845a0 in Catch::StringStreams::release(unsigned long) /home/tom/src/netket/External/Catch/catch.hpp:9515:22
    #7 0xbd022b in Catch::ReusableStringStream::~ReusableStringStream() /home/tom/src/netket/External/Catch/catch.hpp:9544:35
    #8 0xc78401 in Catch::MessageStream::~MessageStream() /home/tom/src/netket/External/Catch/catch.hpp:1608:12
    #9 0xc70eb4 in Catch::MessageBuilder::~MessageBuilder() /home/tom/src/netket/External/Catch/catch.hpp:1619:12
    #10 0xb95993 in Catch::AssertionStats::AssertionStats(Catch::AssertionResult const&, std::vector<Catch::MessageInfo, std::allocator<Catch::MessageInfo> > const&, Catch::Totals const&) /home/tom/src/netket/External/Catch/catch.hpp:7566:9
    #11 0xbb7a76 in Catch::RunContext::assertionEnded(Catch::AssertionResult const&) /home/tom/src/netket/External/Catch/catch.hpp:8642:54
    #12 0xbbcd02 in Catch::RunContext::handleFatalErrorCondition(Catch::StringRef) /home/tom/src/netket/External/Catch/catch.hpp:8741:9
    #13 0xb93893 in (anonymous namespace)::reportFatal(char const*) /home/tom/src/netket/External/Catch/catch.hpp:7313:56
    #14 0xb9328d in Catch::FatalConditionHandler::handleSignal(int) /home/tom/src/netket/External/Catch/catch.hpp:7402:9
    #15 0x7f0c45642f4f  (/lib/x86_64-linux-gnu/libpthread.so.0+0x11f4f)
    #16 0x7f0c44c87e7a in gsignal (/lib/x86_64-linux-gnu/libc.so.6+0x34e7a)
    #17 0x7f0c44c89230 in abort (/lib/x86_64-linux-gnu/libc.so.6+0x36230)
    #18 0x7f0c44c809d9  (/lib/x86_64-linux-gnu/libc.so.6+0x2d9d9)
    #19 0x7f0c44c80a51 in __assert_fail (/lib/x86_64-linux-gnu/libc.so.6+0x2da51)
    #20 0x92009a in netket::Boson::UpdateConf(Eigen::Matrix<double, -1, 1, 0, -1, 1>&, std::vector<int, std::allocator<int> > const&, std::vector<double, std::allocator<double> > const&) const /home/tom/src/netket/NetKet/Hilbert/bosons.hpp:158:7
    #21 0x8ee8b9 in netket::Hilbert::UpdateConf(Eigen::Matrix<double, -1, 1, 0, -1, 1>&, std::vector<int, std::allocator<int> > const&, std::vector<double, std::allocator<double> > const&) const /home/tom/src/netket/NetKet/Hilbert/hilbert.hpp:104:16
    #22 0x9a86f0 in netket::SparseMatrixWrapper<netket::AbstractHamiltonian, Eigen::Matrix<std::complex<double>, -1, 1, 0, -1, 1> >::InitializeMatrix(netket::AbstractHamiltonian const&) /home/tom/src/netket/NetKet/Hamiltonian/MatrixWrapper/sparse_matrix_wrapper.hpp:91:25
    #23 0x861470 in netket::SparseMatrixWrapper<netket::AbstractHamiltonian, Eigen::Matrix<std::complex<double>, -1, 1, 0, -1, 1> >::SparseMatrixWrapper(netket::AbstractHamiltonian const&) /home/tom/src/netket/NetKet/Hamiltonian/MatrixWrapper/sparse_matrix_wrapper.hpp:41:9
    #24 0x849068 in ____C_A_T_C_H____T_E_S_T____0() /home/tom/src/netket/Test/Hamiltonian/unit-matrixwrapper.cc:85:70
    #25 0xbdc059 in Catch::TestInvokerAsFunction::invoke() const /home/tom/src/netket/External/Catch/catch.hpp:10067:9
    #26 0xbbf453 in Catch::TestCase::invoke() const /home/tom/src/netket/External/Catch/catch.hpp:9968:15
    #27 0xbbf104 in Catch::RunContext::invokeActiveTestCase() /home/tom/src/netket/External/Catch/catch.hpp:8832:27
    #28 0xbb5511 in Catch::RunContext::runCurrentTest(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >&) /home/tom/src/netket/External/Catch/catch.hpp:8806:17
    #29 0xbb122d in Catch::RunContext::runTest(Catch::TestCase const&) /home/tom/src/netket/External/Catch/catch.hpp:8595:13
    #30 0xbcd235 in Catch::(anonymous namespace)::runTests(std::shared_ptr<Catch::Config> const&) /home/tom/src/netket/External/Catch/catch.hpp:9135:39
    #31 0xbca54c in Catch::Session::runInternal() /home/tom/src/netket/External/Catch/catch.hpp:9333:27
    #32 0xbc93a6 in Catch::Session::run() /home/tom/src/netket/External/Catch/catch.hpp:9290:24
    #33 0xbc9047 in Catch::Session::run(int, char**) /home/tom/src/netket/External/Catch/catch.hpp:9258:26
    #34 0xc2a9eb in main /home/tom/src/netket/Test/unit-tests.cc:9:33
    #35 0x7f0c44c74a86 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21a86)
    #36 0x725729 in _start (/home/tom/src/netket/build/Test/test-matrixwrapper+0x725729)

0x00000200cf40 is located 32 bytes to the left of global variable '(anonymous namespace)::autoRegistrar4' defined in '/home/tom/src/netket/Test/Hamiltonian/unit-matrixwrapper.cc:93:1' (0x200cf60) of size 8
0x00000200cf40 is located 24 bytes to the right of global variable '(anonymous namespace)::autoRegistrar1' defined in '/home/tom/src/netket/Test/Hamiltonian/unit-matrixwrapper.cc:71:1' (0x200cf20) of size 8
SUMMARY: AddressSanitizer: global-buffer-overflow /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/stl_uninitialized.h in unsigned long* std::__uninitialized_copy<true>::__uninit_copy<std::move_iterator<unsigned long*>, unsigned long*>(std::move_iterator<unsigned long*>, std::move_iterator<unsigned long*>, unsigned long*)
Shadow bytes around the buggy address:
  0x0000803f9990: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0000803f99a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0000803f99b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0000803f99c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  0x0000803f99d0: 00 00 00 00 00 00 00 00 00 00 00 00 01 f9 f9 f9
=>0x0000803f99e0: f1 f1 f1 f1 00 f2 f2 f2[f9]f2 f2 f2 00 f2 f2 f2
  0x0000803f99f0: 00 f3 f3 f3 00 f9 f9 f9 f9 f9 f9 f9 00 00 00 00
  0x0000803f9a00: 00 00 00 00 00 00 00 00 f9 f9 f9 f9 00 f9 f9 f9
  0x0000803f9a10: f9 f9 f9 f9 f1 f1 f1 f1 00 f2 f2 f2 00 f2 f2 f2
  0x0000803f9a20: 00 f2 f2 f2 00 f3 f3 f3 00 00 00 00 00 00 00 00
  0x0000803f9a30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:       fa
  Freed heap region:       fd
  Stack left redzone:      f1
  Stack mid redzone:       f2
  Stack right redzone:     f3
  Stack after return:      f5
  Stack use after scope:   f8
  Global redzone:          f9
  Global init order:       f6
  Poisoned by user:        f7
  Container overflow:      fc
  Array cookie:            ac
  Intra object redzone:    bb
  ASan internal:           fe
  Left alloca redzone:     ca
  Right alloca redzone:    cb
==27721==ABORTING

Why does each graph store an adjacency list?

Looking at current master, the only meaningful use case for the adjacency list is DFS. DFS is in turn only used during initialisation, and seeing as it is O(N), it is no big deal to create a temporary adjacency list. That way, creation of the adjacency list becomes an internal detail we can test and trust; we don't have to accept it as an argument, which simplifies the interface (i.e., no more choosing between edges and adjacency_list) as well as the code. All of that without touching the hot path. A sketch of the construction follows.
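
A minimal sketch of building such a temporary adjacency list from an edge list (plain Python, for illustration only):

def adjacency_list(n_nodes, edges):
    # O(N) construction: one append per edge endpoint.
    adj = [[] for _ in range(n_nodes)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    return adj

# Example: a 4-site ring.
print(adjacency_list(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))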

Save and load objects from python

The last main design issue to be solved for v2.0 concerns saving and loading objects from python.

Pybind11 has some pickling support.

However, the design issue to be addressed is how to serialize objects stored internally as pointers.
Basically, each picklable object needs to define a GetState function, returning a Python tuple of the arguments needed to construct the object.

py::tuple GetState(const Pickleable &p) {
  return py::make_tuple(p.Field1(), p.Field2(), ...);
}

However, if the Pickleable stores a pointer to some abstract object (say, Hilbert), then one obviously cannot do:

py::tuple GetState(const Pickleable &p) {
  auto hilbert = p.GetHilbert();  // NO!
  return py::make_tuple(p.Field1(), p.Field2(), hilbert);
}

Suggestions are welcome.
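
For reference, a hedged sketch of the pure-Python analogue of the GetState pattern (illustrative names; note that the nested object must itself be picklable, which is exactly the open question above):

class Machine:
    def __init__(self, hilbert, alpha):
        self.hilbert = hilbert
        self.alpha = alpha

    def __getstate__(self):
        # Return the arguments needed to reconstruct the object.
        return {"hilbert": self.hilbert, "alpha": self.alpha}

    def __setstate__(self, state):
        self.__init__(state["hilbert"], state["alpha"])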

Lookups for derivative

Hello,

Thanks to everyone for developing and releasing this code.

Please excuse my C++ inexperience; I would like to ask a quick (and possibly simple) question: in abstract_machine.hpp, is there any reason that there is no DerLog function that takes lookups as an argument, similarly to LogVal and LogValDiff? I am assuming it has to do with the way the derivative and the lookup update are handled in the VMC part (for which I haven't checked the code); however, wouldn't it in principle be possible to use already-calculated lookups for a more efficient calculation of the derivative?

I am implementing some custom machines based on MPS, where all the left and right contractions are required for the full derivative. These contractions can also serve as lookups to assist LogValDiff calculations (particularly in the one-flip case). I am wondering whether it is possible to use them directly in DerLog without contracting from scratch every time. I tried defining private lookup variables within my class for this and updating them within UpdateLookup, but this did not work, I am guessing because UpdateLookup is not called before every derivative calculation.
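
For context, a hedged sketch of the caching I have in mind (plain numpy, not NetKet's actual Machine interface; names and shapes are illustrative):

import numpy as np

def build_environments(tensors, config):
    # tensors[k] has shape (d, D, D): one D x D matrix per local state.
    # left[k] is the product of the site matrices 0..k-1 and right[k] the
    # product of sites k+1..N-1, so derivatives at site k can reuse both.
    N = len(tensors)
    D = tensors[0].shape[1]
    left = [np.eye(D)]
    for k in range(N - 1):
        left.append(left[-1] @ tensors[k][config[k]])
    right = [np.eye(D) for _ in range(N)]
    for k in range(N - 2, -1, -1):
        right[k] = tensors[k + 1][config[k + 1]] @ right[k + 1]
    return left, right

rng = np.random.default_rng(0)
tensors = [rng.normal(size=(2, 3, 3)) for _ in range(4)]
config = [0, 1, 1, 0]
left, right = build_environments(tensors, config)
psi = np.trace(left[3] @ tensors[3][config[3]] @ right[3])
# d log(psi) / d tensors[k][config[k]] = (right[k] @ left[k]).T / psi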

Is Windows supported?

I noticed a few Windows specific code paths in setup.py. Is building under Windows supported and tested? Or is this code just noise from copy-pasting?

Custom Metropolis sampling

Add custom Metropolis sampling, specifying local transition operators with the same format implemented for Hamiltonians and Observables.

Exact Diagonalization

With a few tweaks, the NetKet infrastructure can be used to perform exact diagonalization on small quantum problems.

Implementing Boson Operators

Hello!

This is just a practical question about how to implement boson operators, such as the creation operator b†, and also how to implement correlation calculations. In particular, I am hoping to calculate a two-point correlation operator <b†_i b_{i+N/2}> for the 1D Bose-Hubbard model by implementing it in the bosehubbard1d.py tutorial code. The NetKet documentation says that custom observables are limited to lattice observables with local operators. Does that mean that correlators can't practically be calculated in the code yet, or is there a way to do so?

In bosehubbard1d.py, so far I have attempted variations along the lines of:

b_dagger = np.diag(np.sqrt(range(1, Nmax + 1)), k=-1).tolist()  # creation operator
b = np.diag(np.sqrt(range(1, Nmax + 1)), k=1).tolist()          # annihilation operator
zeromat = np.zeros(np.shape(b)).tolist()

twopointcorr = []
sites = []
for i in range(L):
    if i == 1:
        twopointcorr.append(b_dagger)
    elif i == round(L / 2):
        twopointcorr.append(b)
    else:
        twopointcorr.append(zeromat)
    sites.append([i])

or

for i in range(L):
    hopping_term = (np.kron(b_dagger, b)).tolist()
    sites.append([1, 1 + L // 2])

But these don't really make physical sense and neither do the results.
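
For comparison, here is a hedged sketch of how the correlator might instead be expressed as a single two-site operator via the Kronecker product, assuming the ActingOn/Operators/Name convention used elsewhere in this thread (names are illustrative):

import numpy as np

Nmax, L = 3, 12
b_dagger = np.diag(np.sqrt(np.arange(1, Nmax + 1)), k=-1)
b = np.diag(np.sqrt(np.arange(1, Nmax + 1)), k=1)

i = 0
j = i + L // 2
# kron builds b_dagger(i) x b(j) on the joint local Hilbert space of sites i and j.
corr_op = np.kron(b_dagger, b)
observable = dict(ActingOn=[[i, j]],
                  Operators=[corr_op.tolist()],
                  Name='bdag{}b{}'.format(i, j))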
Any help and advice would be greatly appreciated! Thank you for your time and for creating this library!

Supervised Learning

Add learning methods to perform supervised learning with the neural-network quantum states in NetKet.

Benchmarks?

Quoting the docs:

NetKet is built upon a fast C++ core,

NetKet is built using MPI primitives, and can scale up to thousands of CPU cores.

These are pretty strong statements, and I'm wondering whether anyone has done any benchmarks to verify the claims? If yes, could we perhaps make the data available? If not, I would suggest we implement some benchmarks.

More Steppers

Extend the current choice of steppers, for example taking inspiration from the excellent selection available in TensorFlow.

Add Monte Carlo sweeps for equilibration before making measurements

In ground_state.hpp, the sampling of the spin configurations starts directly from an initial configuration, which is random at iteration 0. Therefore, at this step the sampled derivatives can have a large error, as the Monte Carlo sampling has not reached equilibrium within the first few sweeps. This is not a big problem for a large number of iterations; but if I want to start the iteration from a previous one by specifying "InitFile", even the first few iterations become important. So I suggest adding some sweeps, say 1/10 of the total number of sweeps, after setting the random initial spin configuration. This will not significantly change the timing because it is not needed for the later iterations.
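
A minimal sketch of the proposed burn-in, in Python form (sampler.sweep() is an illustrative name, not NetKet's actual API):

def equilibrate(sampler, n_sweeps):
    # Discard the first n_sweeps // 10 sweeps so that measurements
    # start from a thermalized Markov chain.
    for _ in range(n_sweeps // 10):
        sampler.sweep()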

About the `netket::AbstractGraph::EdgeColorsFrom*` functions

Could someone please clarify what the pre- and postconditions for these functions are?
Looking at the code, I have a strange feeling that this whole "color" feature is broken and we don't have issues piling up just because nobody uses it. I really hope I'm wrong and someone more knowledgeable about this topic can explain (pinging you, @jamesETsmith, as you're the author, sorry :)).

Wave function log

Hi,

While attempting to look at the iteration of the wavefunction, I noticed that .wf is overwritten every SaveEvery steps.
From #66 I assume there is a technical problem concerning the stability of MPI.
Is there any easy way to get around this, e.g. outputting into different files?

Make Eigen a submodule

I just had a brief look at your repo. It would be better to make Eigen a git submodule and write a CMake file to download/load it as a dependency. This will make your repository cleaner. About how to use a git submodule: https://git-scm.com/docs/git-submodule

There is a mirror for eigen3 on github: https://github.com/eigenteam/eigen-git-mirror
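
For example (the target path is illustrative):

git submodule add https://github.com/eigenteam/eigen-git-mirror External/Eigen3
git submodule update --init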

IMHO, a standard modern C++ library should use CMake rather than a plain Makefile.

Furthermore, since this will be some kind of Python library, I would suggest using xtensor with self-defined operators instead. For quantum many-body systems, I believe high-dimensional arrays (like numpy's strided arrays) are needed quite often, and Eigen does not have mature support for them at the moment.

Add standard typedefs for matrices and vectors

Suggestion for discussion:

There are many places in the codebase where we use typedefs such as using Vector = Eigen::VectorXcd or using Matrix = Eigen::MatrixXcd, which I think is somewhat redundant and can be a bit tedious.

I suggest adding a couple of standard typedefs to netket.hpp or a custom header file, such as (current names are just for illustration)

using NkVector = Eigen::VectorXcd;
using NkMatrix = Eigen::MatrixXcd;

// if needed:
using NkRealMatrix = ...
using NkRealVector = ...

It could also be convenient to have a shorthand for std::complex<double>.

Of course, there might be legitimate cases where something else should be used (e.g., fixed-size Eigen vectors for performance reasons), but I think we do not lose much by using the types above by default, declaring them globally, and only adding template parameters or different typedefs when they are really needed. (Compare the YAGNI principle.)

This could also help reduce the number of template parameters we have which are only ever instantiated with one type.

Changing the Lookup class

The current implementation of the Lookup class (Lookup/lookup.hpp), while functional, has several limitations, and it is not exactly a beautiful piece of modern code!

I am thinking of rewriting it as

//Example of Lookup containing string, ints and vectors. This can be easily extended. 
using LookupElement =
    std::variant<std::string, int, std::vector<double>>;

using Lookup = std::unordered_map<std::string, LookupElement>;

//Let's add a vector to the lookup 
Lookup lt;
lt["myvector"] = std::vector<double>(100, 2);

//Let's retrieve a vector by reference (to modify it further) 
auto& v = std::get<std::vector<double>>(lt["myvector"]);

//Let's add another type 
lt["myint"] = 2;

//Let's add another vector
lt["myothervector"] = std::vector<double>(10,1);

This would allow for far greater flexibility than the current implementation, without paying a performance penalty (or at least, the penalty would be for all practical purposes unmeasurable).

Since compiler support for std::variant is still a bit sparse, in the meantime one can use one of the non-standard implementations, for example the header-only mapbox::util::variant.

What do the C++ experts think about this? @femtobit @twesterhout @verticube? Other possible solutions?

segmentation fault for Sr with iterative method

I'm able to run the examples in the Tutorial using Stochastic Reconfiguration with 'UseIterative' set to False. But if I set it to True, I get the message "Segmentation fault (core dumped)". For example, below is what I get from the J1-J2 model, with an empty .log file.

# Graph created
# Number of nodes = 20
# RBM Initizialized with nvisible = 20 and nhidden = 20
# Using visible bias = 1
# Using hidden bias = 1
# Machine initialized with random parameters
# Hamiltonian Metropolis sampler with parallel tempering is ready
# 16 replicas are being used
# Learning running on 1 processes
# Using the Stochastic reconfiguration method
# With iterative solver
Segmentation fault (core dumped)

I'm using Ubuntu 18.04 with gcc 7.3.0, python 3.6.5 and mpich 3.3a2. The other libraries are the ones bundled with netket.

Memory Leaks

I've had a very brief look at the code and found it shocking that netket::Machine and netket::Hamiltonian (and perhaps more classes) leak memory. They not only do raw memory allocation with new (which is already a bad idea in C++11: consider using std::unique_ptr instead), but are also missing destructors.

Time cost of tutorials

I tried to run the first tutorial example, the transverse field Ising model, and found that it took the computer (a Xeon E5-2630 with enough memory) 10 minutes to finish the calculation in serial mode.
If the tutorial example really needs 10 minutes, I think it would be nice to point this out in the tutorial, because a minimal working example usually does not take so much time. If something went wrong with my compilation or computer, it would also be helpful to know that the example should take less than 1 minute.
So, my suggestion is to add some comment on running time to the tutorial. I think it would be beneficial for helping new users get to know the program.

Restructure the optimization driver interface for v2.0

Version 2.0 calls for a rethinking of the way we handle the output. At the moment, this is done in the (excellent) JSON output writer developed by @femtobit.

I think that in 2.0 we should go beyond the JSON output.

As a first goal, I propose something similar to what is done in Keras, where the training history is saved in Python dictionaries.

import matplotlib.pyplot as plt

# ... construct various objects here ...
history = vmc.run(n_iter=300, n_samples=1000, save_every=10)

# Plot energies
plt.plot(history.history['energy'].mean)  # or something similar for a plot with error bars
plt.show()

As a second (slightly more ambitious) goal, it would be really cool to have callbacks à la Keras as well. One could use these callbacks to print each step to a file or, even better, to visualise the results dynamically. Once the callbacks are in place, live streaming of the results in a browser can easily be achieved using Bokeh (see here for an example). This would be really cool to have. A sketch of what such a callback could look like is below.
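
A hedged sketch of a possible callback interface (illustrative names only, not an existing NetKet API):

class Callback:
    def on_step_end(self, step, log_data):
        pass

class PrintEnergy(Callback):
    # Called after every optimization step with the current log data.
    def on_step_end(self, step, log_data):
        print(step, log_data['energy'].mean)

# Hypothetical usage:
# vmc.run(n_iter=300, callbacks=[PrintEnergy()])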

@femtobit, what do you think about goal 1, for example? Does it look like an easy modification of your output handlers?

Possible bug with logval_diff in Convolutional layer

import netket as nk

g = nk.graph.Hypercube(length=4, ndim=1)
hi = nk.hilbert.Spin(s=0.5, graph=g, total_sz=0)

layers = [
    nk.layer.Convolutional(
        graph=g,
        input_channels=1,
        output_channels=2,
        activation=nk.activation.Tanh())
]

# FFNN Machine (machines is the dict of machines under test)
machines = {}
machines["FFNN 1d Hypercube spin Convolutional"] = nk.machine.FFNN(hi, layers)

This seems to fail the logval_diff test (the logval_diff version without lookup tables).
I also realized that we didn't have a test of this logval_diff version in the previous C++ unit tests.

This emerged in PR #105 and does not affect NetKet versions 1.*

Unary RBM

Concerning implementing https://arxiv.org/abs/1810.02352 in NetKet: the authors of the paper provide a detailed derivation (in the appendix) of how to choose the RBM parameters for the unary layer to ensure that only certain states (like |100>, |010>, |001> for the spin-1 example) are allowed (i.e., other states have coefficient 0). However, since the unary layer is fixed anyway, as a "shortcut" one could simply set the coefficients to 0 directly (for states different from |100>, |010>, |001>). In other words, why is an RBM representation of the unary layer required in the first place?

Rename the Learning section

The Learning section of the input, as it stands, is too general and should be restructured to take into account the future development of the code.

The best approach would be to group together all classes solving similar problems, whether they address those problems exactly or with learning. The directory structure should reflect the input section. A first proposal is to remove the Learning section and create a section GroundState, for which one can specify FullEd, SparseEd, Sr, Gdesc and more as the method, and a section Dynamics, for which one can specify FullEd, SparseEd (possibly with sub-fields to ask for imaginary-time dynamics), t-VMC (for the future), etc. as the method.

This would also allow defining specific sections corresponding, for example, to Supervised learning, Unsupervised learning, Tomography, etc., once all these applications are realized through the Challenges.

For backward compatibility, in all versions v1.x.x we will still need to support the Learning field, though. A sketch of what the proposed input could look like is below.
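
A hedged sketch of the proposed input structure, written as the Python pars dictionary used in the tutorial scripts (section and method names follow the proposal above; the fields are illustrative):

pars = {
    'GroundState': {'Method': 'Sr', 'Nsamples': 1000, 'OutputFile': 'test'},
    # or, for time evolution:
    # 'Dynamics': {'Method': 'FullEd', 'ImaginaryTime': True},
}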
