
xtensor


Multi-dimensional arrays with broadcasting and lazy computing.

Introduction

xtensor is a C++ library meant for numerical analysis with multi-dimensional array expressions.

xtensor provides

  • an extensible expression system enabling lazy broadcasting.
  • an API following the idioms of the C++ standard library.
  • tools to manipulate array expressions and build upon xtensor.

Containers of xtensor are inspired by NumPy, the Python array programming library. Adaptors can easily be written to plug existing data structures into our expression system.

In fact, xtensor can be used to process NumPy data structures inplace using Python's buffer protocol. Similarly, we can operate on Julia and R arrays. For more details on the NumPy, Julia and R bindings, check out the xtensor-python, xtensor-julia and xtensor-r projects respectively.

xtensor requires a modern C++ compiler supporting C++14. The following C++ compilers are supported:

  • On Windows platforms, Visual C++ 2015 Update 2, or more recent
  • On Unix platforms, gcc 4.9 or a recent version of Clang

Installation

Package managers

We provide a package for the mamba (or conda) package manager:

mamba install -c conda-forge xtensor

Install from sources

xtensor is a header-only library.

You can directly install it from the sources:

cmake -DCMAKE_INSTALL_PREFIX=your_install_prefix .
make install

Installing xtensor using vcpkg

You can download and install xtensor using the vcpkg dependency manager:

git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install xtensor

The xtensor port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.

Trying it online

You can play with xtensor interactively in a Jupyter notebook right now! Just click on the binder link below:

Binder

The C++ support in Jupyter is powered by the xeus-cling C++ kernel. Together with xeus-cling, xtensor enables a similar workflow to that of NumPy with the IPython Jupyter kernel.

xeus-cling

Documentation

For more information on using xtensor, check out the reference documentation:

http://xtensor.readthedocs.io/

Dependencies

xtensor depends on the xtl library and has an optional dependency on the xsimd library:

xtensor   xtl       xsimd (optional)
master    ^0.7.5    ^11.0.0
0.25.0    ^0.7.5    ^11.0.0
0.24.7    ^0.7.0    ^10.0.0
0.24.6    ^0.7.0    ^10.0.0
0.24.5    ^0.7.0    ^10.0.0
0.24.4    ^0.7.0    ^10.0.0
0.24.3    ^0.7.0    ^8.0.3
0.24.2    ^0.7.0    ^8.0.3
0.24.1    ^0.7.0    ^8.0.3
0.24.0    ^0.7.0    ^8.0.3
0.23.x    ^0.7.0    ^7.4.8
0.22.0    ^0.6.23   ^7.4.8

The dependency on xsimd is required if you want to enable SIMD acceleration in xtensor. This can be done by defining the macro XTENSOR_USE_XSIMD before including any header of xtensor.

Usage

Basic usage

Initialize a 2-D array and compute the sum of one of its rows and a 1-D array.

#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
#include "xtensor/xview.hpp"

xt::xarray<double> arr1
  {{1.0, 2.0, 3.0},
   {2.0, 5.0, 7.0},
   {2.0, 5.0, 7.0}};

xt::xarray<double> arr2
  {5.0, 6.0, 7.0};

xt::xarray<double> res = xt::view(arr1, 1) + arr2;

std::cout << res;

Outputs:

{7, 11, 14}

Initialize a 1-D array and reshape it inplace.

#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

xt::xarray<int> arr
  {1, 2, 3, 4, 5, 6, 7, 8, 9};

arr.reshape({3, 3});

std::cout << arr;

Outputs:

{{1, 2, 3},
 {4, 5, 6},
 {7, 8, 9}}

Index Access

#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

xt::xarray<double> arr1
  {{1.0, 2.0, 3.0},
   {2.0, 5.0, 7.0},
   {2.0, 5.0, 7.0}};

std::cout << arr1(0, 0) << std::endl;

xt::xarray<int> arr2
  {1, 2, 3, 4, 5, 6, 7, 8, 9};

std::cout << arr2(0);

Outputs:

1
1

The NumPy to xtensor cheat sheet

If you are familiar with NumPy APIs, and you are interested in xtensor, you can check out the NumPy to xtensor cheat sheet provided in the documentation.

Lazy broadcasting with xtensor

xtensor can operate on arrays of different shapes and dimensions in an element-wise fashion. The broadcasting rules of xtensor are similar to those of NumPy and libdynd.

Broadcasting rules

In an operation involving two arrays of different dimensions, the array with the fewer dimensions is broadcast across the leading dimensions of the other.

For example, if A has shape (2, 3), and B has shape (4, 2, 3), the result of a broadcasted operation with A and B has shape (4, 2, 3).

   (2, 3) # A
(4, 2, 3) # B
---------
(4, 2, 3) # Result

The same rule holds for scalars, which are handled as 0-D expressions. If A is a scalar, the equation becomes:

       () # A
(4, 2, 3) # B
---------
(4, 2, 3) # Result

If matched up dimensions of two input arrays are different, and one of them has size 1, it is broadcast to match the size of the other. Let's say B has the shape (4, 2, 1) in the previous example, so the broadcasting happens as follows:

   (2, 3) # A
(4, 2, 1) # B
---------
(4, 2, 3) # Result

Universal functions, laziness and vectorization

With xtensor, if x, y and z are arrays of broadcastable shapes, the return type of an expression such as x + y * sin(z) is not an array. It is an xexpression object offering the same interface as an N-dimensional array, which does not hold the result. Values are only computed upon access or when the expression is assigned to an xarray object. This makes it possible to operate symbolically on very large arrays and only compute the result for the indices of interest.

We provide utilities to vectorize any scalar function (taking multiple scalar arguments) into a function that operates on xexpressions, applying the lazy broadcasting rules described above. These functions are called xfunctions. They are xtensor's counterpart to NumPy's universal functions.

In xtensor, arithmetic operations (+, -, *, /) and all special functions are xfunctions.

Iterating over xexpressions and broadcasting iterators

All xexpressions offer two sets of functions to retrieve iterator pairs (and their const counterpart).

  • begin() and end() provide instances of xiterators which can be used to iterate over all the elements of the expression. The order in which elements are listed is row-major in that the index of the last dimension is incremented first.
  • begin(shape) and end(shape) are similar but take a broadcasting shape as an argument. Elements are iterated upon in a row-major way, but certain dimensions are repeated to match the provided shape as per the rules described above. For an expression e, e.begin(e.shape()) and e.begin() are equivalent.

Runtime vs compile-time dimensionality

Two container classes implementing multi-dimensional arrays are provided: xarray and xtensor.

  • xarray can be reshaped dynamically to any number of dimensions. It is the container that is the most similar to NumPy arrays.
  • xtensor has a dimension set at compilation time, which enables many optimizations. For example, shapes and strides of xtensor instances are allocated on the stack instead of the heap.

xarray and xtensor containers are both xexpressions; they can be mixed in universal functions, assigned to each other, etc.

Besides, two access operators are provided:

  • The variadic template operator() which can take multiple integral arguments or none.
  • And the operator[], which takes a single multi-index argument whose size can be determined at runtime. operator[] also supports access with braced initializers.

Performance

xtensor operations make use of SIMD acceleration depending on which instruction sets are available on the platform at hand (SSE, AVX, AVX512, Neon).

xsimd

The xsimd project underlies the detection of the available instruction sets, and provides generic high-level wrappers and memory allocators for client libraries such as xtensor.

Continuous benchmarking

xtensor operations are continuously benchmarked, and are significantly improved at each new version. Current performance on statically-dimensioned tensors matches that of the Eigen library. Dynamically-dimensioned tensors, whose shape is heap-allocated, come at a small additional cost.

Stack allocation for shapes and strides

More generally, the library implements a promote_shape mechanism at build time to determine the optimal sequence type to hold the shape of an expression. A broadcasting expression whose members all have a dimensionality determined at compile time gets a stack-allocated sequence type for its shape. If at least one node of a broadcasting expression has a dynamic dimension (for example an xarray), this bubbles up to the entire broadcasting expression, which will have a heap-allocated shape. The same holds for views, broadcast expressions, etc.

Therefore, when building an application with xtensor, we recommend using statically-dimensioned containers whenever possible to improve the overall performance of the application.

Language bindings

xtensor-python

The xtensor-python project provides the implementation of two xtensor containers, pyarray and pytensor which effectively wrap NumPy arrays, allowing inplace modification, including reshapes.

Utilities to automatically generate NumPy-style universal functions from scalar functions and expose them to Python are also provided.

xtensor-julia

The xtensor-julia project provides the implementation of two xtensor containers, jlarray and jltensor which effectively wrap Julia arrays, allowing inplace modification, including reshapes.

Like in the Python case, utilities to generate NumPy-style universal functions are provided.

xtensor-r

The xtensor-r project provides the implementation of two xtensor containers, rarray and rtensor which effectively wrap R arrays, allowing inplace modification, including reshapes.

Like for the Python and Julia bindings, utilities to generate NumPy-style universal functions are provided.

Library bindings

xtensor-blas

The xtensor-blas project provides bindings to BLAS libraries, enabling linear-algebra operations on xtensor expressions.

xtensor-io

The xtensor-io project enables the loading of a variety of file formats into xtensor expressions, such as image files, sound files, HDF5 files, as well as NumPy npy and npz files.

Building and running the tests

Building the tests requires the GTest testing framework and cmake.

gtest and cmake are available as packages for most Linux distributions. They can also be installed with the conda package manager (even on Windows):

conda install -c conda-forge gtest cmake

Once gtest and cmake are installed, you can build and run the tests:

mkdir build
cd build
cmake -DBUILD_TESTS=ON ../
make xtest

You can also use CMake to download the source of gtest, build it, and use the generated libraries:

mkdir build
cd build
cmake -DBUILD_TESTS=ON -DDOWNLOAD_GTEST=ON ../
make xtest

Building the HTML documentation

xtensor's documentation is built with three tools: doxygen, breathe, and sphinx.

While doxygen must be installed separately, you can install breathe by typing

pip install breathe sphinx_rtd_theme

Breathe can also be installed with conda

conda install -c conda-forge breathe

Finally, go to docs subdirectory and build the documentation with the following command:

make html

License

We use a shared copyright model that enables all contributors to maintain the copyright on their contributions.

This software is licensed under the BSD-3-Clause license. See the LICENSE file for details.

xtensor's People

Contributors

adriendelsalle, antoineprv, davidbrochart, davisvaughan, derthorsten, dhermes, egpbos, emmenlau, ewoudwempe, frozenwinters, ghisvail, gouarin, johanmabille, jvce92, khanley6, kolibri91, martinrenou, matwey, oneraynyday, potpath, randl, serge-sans-paille, sounddev, spectre-ns, stuarteberg, sylvaincorlay, tdegeus, ukoethe, wolfv, zhujun98


xtensor's Issues

Make xscalar a non-const expression

Non-const functions should be added to xscalar so it can be used as a non-const xexpression. This is required by the xref feature, allowing xscalar to take a reference on the wrapped scalar instead of a copy.

xreducer

Goal: Provide an xexpression corresponding to the reduction of dimension based on a reducer

If m has shape (4, 3, 2, 5), sum(m, {1, 3}) sums over dimensions 1 and 3, lazily giving an expression of shape (4, 2).

Similarly to xfunction and vectorize, this should come with a helper generator function which creates an xreducer for a given function that takes a 1-D array.

matmul and dot

I've been looking, but I didn't find implementations for those two.

Is there a plan to leverage e.g. BLAS or similar libraries for those operations?

- Wolf

Iteration over trivial xview does not terminate

As of master:

xt::xtensor<double, 1> arr1 {{2}};
std::fill(arr1.begin(), arr1.end(), 6);
auto view {xt::make_xview(arr1, 0)};
std::cout << view << std::endl;
// -> 6, OK
for (auto x: view) { std::cout << x << std::endl; }
// -> infinite stream of 6's

cannot modify filter view?

This does not compile

xt::xarray<double> a = {{1, 5, 3}, {4, 5, 6}};
auto v = xt::filter(a, a >= 5);
v = 100;

but this does

xt::xarray<double> a = {{1, 5, 3}, {4, 5, 6}};
auto v = xt::view(a, xt::all());
v = 100;

xfunction optimization

Currently each time you instantiate an xfunction, its shape is computed and stored. This drastically hurts performance when manipulating complicated expressions involving xarray instances. For instance, consider the following code:

xt::xarray<double> a, b, c;
// init a, b, and c ....
xt::xarray<double> res = 2 * a + (b / c);

Here three xfunction instances are built, and thus three shape containers are dynamically allocated while only one is required (the global shape of the expression to be assigned).

A way to fix this is to make the computation of the shape lazy. The computation of the shape of the root node of the expression should not require computation of the shape of the other nodes but rely on the broadcast_shape applied to each node.

-Wreorder on xexpression

As of master,

xt::xtensor<double, 2> arr {{2, 3}};
xt::xtensor<double, 2> arr2 {{2, 3}};
arr2 = arr + 1;

triggers a -Wreorder warning with gcc 6.2.1.

Overloads of derived_cast

xexpression<T>::derived_cast should have different behaviors depending on whether this is an lvalue or an rvalue.

View in function with array as const reference gives errors

This seems like a strange bug. The following doesn't work for me:

#include "xtensor/xscalar.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

int main() {

	xt::xarray<double> arr1
	  {{1.0, 2.0, 3.0, 9},
	   {2.0, 5.0, 7.0, 9},
	   {2.0, 5.0, 7.0, 9}};

	auto func = [](const auto& arr1) {
		auto view = make_xview(arr1, 1, xt::all());
		for(const auto& el : view) {
			std::cout << el << " ";
		}
	};
	func(arr1);

}

While this one works perfectly fine:

#include "xtensor/xscalar.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

int main() {

	xt::xarray<double> arr1
	  {{1.0, 2.0, 3.0, 9},
	   {2.0, 5.0, 7.0, 9},
	   {2.0, 5.0, 7.0, 9}};

	auto func = [](const auto arr1) {
		auto view = make_xview(arr1, 1, xt::all());
		for(const auto& el : view) {
			std::cout << el << " ";
		}
	};
	func(arr1);

}

Cannot take view of const xtensor to new xtensor

As of master,

xt::xtensor<double, 3> const arr {{1, 2, 3}};
xt::xtensor<double, 2> arr2 {{2, 3}};
arr2 = xt::make_xview(arr, 0);

fails to compile even though constness of arr is not violated (as a copy is being made).

transpose operator

It can be emulated by using reshape, but it's currently missing, isn't it?

xindex_function for arange, linspace, meshgrid ...

In order to implement the mentioned functions, it might be good to have an xfunction-like xexpression that takes an object overloading operator()(Args... args) and operator[](xindex) and provides the appropriate values for each index.

Numpy style cheat sheet

Add a section in the HTML documentation similar to the NumPy cheat sheet, but as a NumPy-to-xtensor correspondence table.

Documentation update

Documentation should be refactored to integrate all recent features (generators and builders, comparison operators, newaxis, random module)

xio with view + newaxis never compiles

This snippet never finishes compiling for me (no error, just takes forever):

	xt::xarray<double> d1 = xt::random::rand<double>({5});
	auto d12 = view(d1, newaxis(), all());
	std::cout << d12 << std::endl;

However, this compiles fine:

	xt::xarray<double> d1 = xt::random::rand<double>({5});
	auto d12 = view(d1, newaxis(), all());
	xt::xarray<double> a = d12;
	std::cout << a << std::endl;

Iterator api renaming

Following the discussion we had on gitter about performances, I think we should rename storage_begin and storage_end into begin and end. The current begin and end would become xbegin and xend without argument.

There are mainly two reasons for that:

  • the range-based for loop is equivalent to a loop with the begin/end iterator pair. If the storage_begin/storage_end pair is faster than the begin/end pair, we effectively prevent this syntax from reaching that performance.

  • iterating on the storage container (i.e. regardless of the shape of the expression) is generally used for performing stl-like algorithms on the data. In that case, the algorithms are generally invoked with the begin/end iterator pair. Keeping the current interface would be a performance hit for generic code.

Since it's breaking backward compatibility, I think we should do it as soon as possible.

Computational Data Flow Expression / DAG Builder

I want to be able to do something like the following

xt::xexp<double> exp_res1 = xt::xvar("x") + xt::xvar("y") + xt::xconst(3);
xt::xexp<double> exp_res2 = exp_res1 /  xt::xconst(2);

xt::xarray<double> res1 = exp_res1.set("x", arr1).set("y", arr2).eval();
xt::xarray<double> res2 = exp_res1.set("y", arr3).eval();

xt::xarray<double> res3 = exp_res2.eval();

Here I am reusing the expression and also doing the evaluation when needed.

xio not working with "columnar" xview

This is not compiling:

#include <iostream>
#include "xtensor/xscalar.hpp"
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
#include "xtensor/xslice.hpp"

int main() {
	xt::xarray<double> arr1
	  {{1.0, 2.0, 3.0, 9},
	   {2.0, 5.0, 7.0, 9},
	   {2.0, 5.0, 7.0, 9}};

	std::cout <<  xt::make_xview(arr1, xt::all(), 1) << std::endl;
}

GCC 6 -> Test failure in xview_on_xfunction

Compiler (Fedora 25):

Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/6.3.1/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,objc,obj-c++,fortran,ada,go,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --disable-libgcj --with-isl --enable-libmpx --enable-gnu-indirect-function --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 6.3.1 20161221 (Red Hat 6.3.1-1) (GCC) 

And the failure:

/home/wolfv/Programs/xorig/test/test_xview.cpp:182: Failure
Value of: iter_end
  Actual: 120-byte object <20-61 C4-F7 FD-7F 00-00 50-61 C4-F7 FD-7F 00-00 8C-8F 54-01 00-00 00-00 00-00 00-00 00-00 00-00 90-62 C4-F7 FD-7F 00-00 20-61 C4-F7 FD-7F 00-00 B8-8E 54-01 00-00 00-00 00-00 00-00 00-00 00-00 DA-FF FF-FF FF-FF FF-FF 10-95 54-01 00-00 00-00 18-95 54-01 00-00 00-00 18-95 54-01 00-00 00-00 A0-97 54-01 00-00 00-00 A8-97 54-01 00-00 00-00 A8-97 54-01 00-00 00-00>
Expected: iter
Which is: 120-byte object <20-61 C4-F7 FD-7F 00-00 50-61 C4-F7 FD-7F 00-00 8C-8F 54-01 00-00 00-00 00-00 00-00 00-00 00-00 90-62 C4-F7 FD-7F 00-00 B0-61 C4-F7 FD-7F 00-00 40-97 54-01 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 B0-94 54-01 00-00 00-00 B8-94 54-01 00-00 00-00 B8-94 54-01 00-00 00-00 40-8D 54-01 00-00 00-00 48-8D 54-01 00-00 00-00 48-8D 54-01 00-00 00-00>
[  FAILED  ] xview.xview_on_xfunction (1 ms)

Incorrect iteration over xviews

As of master:

    xt::xarray<int> arr {{1, 2, 3}, {4, 5, 6}};
    auto arr_view = xt::make_xview(arr, 1);
    std::cout << arr_view << std::endl;
    // -> {4, 5, 6}, OK
    std::cout << arr_view.dimension() << std::endl;
    // -> 1, OK
    for (auto x: arr_view) {std::cout << x << std::endl;}
    // -> 4 4 4, ???

xiterator constructor missing?

When trying auto itpair = std::minmax_element(arr.begin(), arr.end()); or similar functions, it doesn't compile, as a temporary xiterator cannot be instantiated from an empty initializer list, which minmax_element apparently tries.

newaxis

Goal: Provide a special type of slice for xview to insert new dimensions of length one, like numpy.newaxis.

Pretty printing

Add pretty printing, like NumPy, and make it the default way of outputting xexpressions.

Incomplete indexings append zeros

I believe that "incomplete indexings" (e.g. indexing a 3-D array/tensor with 2 indices) add as many zeros as needed to complete the multi-index. At least in the case of tensors (where the dimensionality is known at compile time), perhaps it may make more sense to return a view in such a case? This would mimic NumPy's behavior.

Semantic

Semantics have to be fixed for xindexview and xview, as has been done for xbroadcast and xfunction.

Iteration over xtensor fails

As of xtensor 0.2.1,

    xt::xarray<int> arr {{1, 2, 3}, {4, 5, 6}};
    auto arr_view = xt::make_xview(arr, 0);
    std::cout << std::accumulate(arr_view.begin(), arr_view.end(), 0) << std::endl;

works (as advertised) but

    xt::xtensor<int, 2> tens {{3, 3}};
    auto tens_view = xt::make_xview(tens, 0);
    std::cout << std::accumulate(tens_view.begin(), tens_view.end(), 0) << std::endl;

fails to compile (gcc 6.2.1 from Arch Linux).

Dynamic xview's

Currently xview is implemented using tuple as holder for the slices.
As far as I understand, this necessitates that all slices are known at compile time.

But for example when creating a view from python, it's not possible to know the slices at compile time. It would also make writing the xreducer functionality easier (as e.dimension() is not a constexpr and cannot be used as template parameter etc.) (or that's at least how I tried doing it).

So I am wondering whether it would be a good idea to either create a separate, dynamic xview class or exchange the tuple in xview for a std::vector holding an std::variant<xall, xrange, size_t> or similar.

xrange_adaptor

I think it would be cool to allow for NumPy-style ranges in views with "colons" that find out about their length later, from the shape of the underlying expression.

I.e. NumPy style (or Python's): a[:3], a[1:], a[::-1], ...

E.g.

struct xnone {};

template <class A, class B, class C>
range(A min, B max, C step)

could return a range_adaptor object, which in turn returns a valid range when initiated with some shape.

E.g. if class A is an xnone tag and step is positive, then min -> 0; if step is negative, it would be the shape.
If class B is an xnone tag, then max is the size at that dimension; if step is negative, -1.

I am not sure about the naming though. xnone is not so nice.

Compiler workarounds

This issue is meant for tracking the workarounds we have implemented around compiler bugs

MSVC 2015: bug with std::enable_if and invalid types

std::enable_if evaluates its second argument, even if the condition is false. This is the reason for the get_xfunction_type_t workaround which adds a level of indirection for the second type to always be a valid type (Original issue #80, fixed in PR #148).

MSVC 2015: math functions not fully qualified

fma class is ambiguous if not fully qualified. See #81.

GCC-4.9 and clang < 3.8: constexpr std::min and std::max

std::min and std::max are not constexpr in these compilers. In xio.hpp, we define a XTENSOR_MIN macro before its usage and undefine it right after.

clang < 3.8 matching initializer_list with static arrays.

Old versions of clang don't handle overload resolution with braced initializer lists correctly: braced initializer lists are not properly matched to static arrays. This prevents compile-time detection of the length of a braced initializer list.

A consequence is that we need to use stack-allocated shape types in these cases.

GCC-6: std::isnan and std::isinf.

We are not directly using std::isnan or std::isinf in xmath, as a workaround to the following bug in GCC 6.

C++11 requires that the <cmath> header declares bool std::isnan(double) and bool std::isinf(double).
C99 requires that the <math.h> header declares int ::isnan(double) and int ::isinf(double).
These two definitions would clash when importing both headers and using namespace std.

As of version 6, gcc detects whether the obsolete functions are present in the C <math.h> header and uses them if they are, avoiding the clash. However, this means that the function might return int instead of bool as C++11 requires, which is a bug.

Default types

I think it could be nice to set a default type for zeros, ones, linspace, etc. Following NumPy, I think double is the right choice.

What do you think?

Homogenize naming for meta functions.

Following #101, I propose we homogenize the naming of the meta-functions used in xtensor:

common_value_type, common_difference_type, xclosure, get_xfunction_type...

and provide STL-style _t variants for versions returning the typename.

eval method

Armadillo has an eval method, which forces evaluation of expressions.

Maybe this would be useful for xtensor, too? E.g. giving some expression, it would return either an xtensor or xarray with the evaluation results.

If an xarray or xtensor is given, it just returns a closure to that.

Allow xscalar to take references on scalar

xscalar should be able to take a reference on the scalar it wraps instead of a copy. That would improve performance when copying the scalar type is expensive.

However, this behavior should be explicitly specified (via an xref function for instance); the default behavior should remain to take a copy.

Testing of expression access with more or less arguments

The desired behavior when accessing elements of an xexpression with operator(), element() and operator[] is

  • when the number of arguments is lesser than the dimensionality, behave as if zeros were appended to match the dimension.
  • when the number of arguments is greater than the dimensionality, discard the first arguments until their number matches the dimensionality.

Static tensor class

Goal: in addition to the dynamically-dimensioned xarray, provide an xexpression of fixed dimension.

  • strides and shape attributes will then be std::arrays of specified length, and will be on the stack.

Broadcasting assign operator for simple assignments

It would be nice if

xarray<double> e = xt::random::rand<double>({3, 3});
auto v = make_xindexview(e, {{1, 1}, {1, 2}, {2, 2}});
v = 3;

would be working.
With v = xt::broadcast(3, v.shape()); it currently works, and it should be easy to implement for the general case.

Dynamic operator[](index)

Goal: in addition to the variadic operator(), provide an operator[] taking a single multi-index argument.

Like for reshape, we should also enable passing a braced initializer list {4, 5, 6}.
