
torchsharp's Introduction


Please check the Release Notes file for news on what's been updated in each new release.

TorchSharp is now in the .NET Foundation!

If you are using TorchSharp from NuGet, you should be using a version >= 0.98.3 of TorchSharp, and >= 1.12.0 of the libtorch-xxx redistributable packages. We recommend using one of the 'bundled' packages: TorchSharp-cpu, TorchSharp-cuda-windows, or TorchSharp-cuda-linux. They will pull in the right LibTorch backends.

TorchSharp examples have their own home!

Head over to the TorchSharp Examples Repo for convenient access to existing and upcoming examples.

IMPORTANT NOTES:

When targeting .NET FX on Windows, the project configuration must be set to 'x64' rather than 'Any CPU' for anything that depends on TorchSharp.

As we build up to a v1.0 release, we will continue to make breaking changes, but only when we consider it necessary for usability. Similarity to the PyTorch experience is a primary design tenet, and we will continue on that path.

TorchSharp

TorchSharp is a .NET library that provides access to the library that powers PyTorch. It is part of the .NET Foundation.

The focus is to bind the API surfaced by LibTorch, with a particular focus on tensors. The design intent is to stay as close as possible to the PyTorch experience, while still taking advantage of the benefits of the .NET static type system where it makes sense. For example: method overloading is relied on when PyTorch defines multiple valid types for a particular parameter.

The technology is a "wrapper library": no more, no less. DiffSharp uses this repository extensively and has been a major factor in iterating support.

Things that you can try:

using TorchSharp;
using static TorchSharp.torch.nn;

var lin1 = Linear(1000, 100);
var lin2 = Linear(100, 10);
var seq = Sequential(("lin1", lin1), ("relu1", ReLU()), ("drop1", Dropout(0.1)), ("lin2", lin2));

using var x = torch.randn(64, 1000);
using var y = torch.randn(64, 10);

var optimizer = torch.optim.Adam(seq.parameters());

for (int i = 0; i < 10; i++) {
    using var eval = seq.forward(x);
    using var output = functional.mse_loss(eval, y, Reduction.Sum);

    optimizer.zero_grad();

    output.backward();

    optimizer.step();
}

A Few Things to Know

While the intent has been to stay close to the PyTorch experience, there are some peculiarities to take note of:

  1. We have disregarded .NET naming conventions in favor of Python where it impacts the experience. We know this will feel wrong to some, but after a lot of deliberation, we decided to follow the lead of the SciSharp community and embrace naming similarity with Python over .NET tradition. We believe this will make it easier to take Python-based examples and snippets and apply them in .NET.

  2. In order to make a constructor call look more like the PyTorch code, each class has a factory method with the same name. Because we cannot have a method and a class with the same name in a scope, we moved the class declarations to a nested scope, 'Modules'.

    For example:

    Module conv1 = Conv1d(...);

    creates an instance of Modules.Conv1d, which has 'torch.nn.Module' as its base class.

  3. C# uses ':' when passing a named parameter, while F# and Python use '=', and PyTorch functions have enough parameters to encourage passing them by name. This means that you cannot simply copy a lot of code into C# (see the sketch after this list).

  4. There are a number of APIs where Pytorch encodes what are effectively enum types as strings. We have chosen to use proper .NET enumeration types in most cases.

  5. The type torch.device is torch.Device in TorchSharp. We felt that using all-lowercase for a class type was one step too far. The device object constructors, which are what you use most of the time, are still called device().
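
To make these points concrete, here is a small sketch illustrating points 2, 3, and 5 (the parameter names are illustrative and may differ between TorchSharp versions):

using TorchSharp;
using static TorchSharp.torch;
using static TorchSharp.torch.nn;

// Point 2: 'Conv1d' is the factory method; the instance's type is Modules.Conv1d.
var conv = Conv1d(3, 16, 3);

// Point 3: C# named arguments use ':' where Python uses '='.
var pool = MaxPool1d(kernelSize: 2, stride: 2);

// Point 5: the class is torch.Device, but the factory is still called device().
var cpu = device("cpu");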

Memory management

See docfx/articles/memory.md.
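
The short version: tensors wrap native memory, so they should be disposed deterministically rather than left to the garbage collector. A minimal sketch, assuming the DisposeScope mechanism described in that article is available in your TorchSharp version:

using TorchSharp;

// Tensors created inside the scope are disposed when the scope is, instead of
// waiting for the garbage collector to release the underlying native memory.
using (var scope = torch.NewDisposeScope())
{
    var x = torch.randn(64, 1000);
    var y = x * 2.0 + 1.0;                         // temporaries are tracked by the scope, too
    var total = y.sum().MoveToOuterDisposeScope(); // keep this one alive past the scope
}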

Download

TorchSharp is distributed via the NuGet gallery: https://www.nuget.org/packages/TorchSharp/

We recommend using one of the 'bundled' packages, which will pull in both TorchSharp and the right backends: TorchSharp-cpu, TorchSharp-cuda-windows, or TorchSharp-cuda-linux.

Otherwise, you also need one of the LibTorch backend packages: https://www.nuget.org/packages?q=libtorch, specifically one of

  • libtorch-cpu-linux-x64 (CPU, Linux)

  • libtorch-cpu-win-x64 (CPU, Windows)

  • libtorch-cpu-osx-x64 (CPU, OSX)

  • libtorch-cpu (CPU, references all three, larger download but simpler)

  • libtorch-cuda-12.1-linux-x64 (CPU/CUDA 12.1, Linux)

    NOTE: Due to the presence of very large native binaries, using the libtorch-cuda-12.1-linux-x64 package requires .NET 6, e.g. .NET SDK version 6.0.100-preview.5.21302.13 or greater.

  • libtorch-cuda-12.1-win-x64 (CPU/CUDA 12.1, Windows)

Alternatively you can access the LibTorch native binaries via direct reference to existing local native binaries of LibTorch installed through other means (for example, by installing PyTorch using a Python package manager). You will have to add an explicit load of the relevant native library, for example:

    using System.Runtime.InteropServices;
    NativeLibrary.Load("/home/gunes/anaconda3/lib/python3.8/site-packages/torch/lib/libtorch.so");

NOTE: Some have reported that in order to use TorchSharp on Windows, the C++ redistributable needs to be installed. It will already be present where VS is installed, but it may be necessary to install this version of the C++ redistributable on machines where TorchSharp is deployed:

Microsoft Visual C++ 2015-2022 (14.36.32532)

Code of Conduct

This project has adopted the code of conduct defined by the Contributor Covenant to clarify expected behavior in our community. For more information see the .NET Foundation Code of Conduct.

Developing and Contributing

See DEVGUIDE.md and CONTRIBUTING.md.

Uses

DiffSharp also uses this repository extensively and has been a major factor in iterating support.


torchsharp's Issues

Binding: Sleef APIs

The Sleef set of symbols from libcaffe2

Sleef_acosd4_u10 Sleef_acosd4_u10avx Sleef_acosd4_u10avx2 Sleef_acosd4_u10fma4 Sleef_acosd4_u35 Sleef_acosd4_u35avx Sleef_acosd4_u35avx2 Sleef_acosd4_u35fma4 Sleef_acosf8_u10 Sleef_acosf8_u10avx Sleef_acosf8_u10avx2 Sleef_acosf8_u10fma4 Sleef_acosf8_u35 Sleef_acosf8_u35avx Sleef_acosf8_u35avx2 Sleef_acosf8_u35fma4 Sleef_acoshd4_u10 Sleef_acoshd4_u10avx Sleef_acoshd4_u10avx2 Sleef_acoshd4_u10fma4 Sleef_acoshf8_u10 Sleef_acoshf8_u10avx Sleef_acoshf8_u10avx2 Sleef_acoshf8_u10fma4 Sleef_asind4_u10 Sleef_asind4_u10avx Sleef_asind4_u10avx2 Sleef_asind4_u10fma4 Sleef_asind4_u35 Sleef_asind4_u35avx Sleef_asind4_u35avx2 Sleef_asind4_u35fma4 Sleef_asinf8_u10 Sleef_asinf8_u10avx Sleef_asinf8_u10avx2 Sleef_asinf8_u10fma4 Sleef_asinf8_u35 Sleef_asinf8_u35avx Sleef_asinf8_u35avx2 Sleef_asinf8_u35fma4 Sleef_asinhd4_u10 Sleef_asinhd4_u10avx Sleef_asinhd4_u10avx2 Sleef_asinhd4_u10fma4 Sleef_asinhf8_u10 Sleef_asinhf8_u10avx Sleef_asinhf8_u10avx2 Sleef_asinhf8_u10fma4 Sleef_atan2d4_u10 Sleef_atan2d4_u10avx Sleef_atan2d4_u10avx2 Sleef_atan2d4_u10fma4 Sleef_atan2d4_u35 Sleef_atan2d4_u35avx Sleef_atan2d4_u35avx2 Sleef_atan2d4_u35fma4 Sleef_atan2f8_u10 Sleef_atan2f8_u10avx Sleef_atan2f8_u10avx2 Sleef_atan2f8_u10fma4 Sleef_atan2f8_u35 Sleef_atan2f8_u35avx Sleef_atan2f8_u35avx2 Sleef_atan2f8_u35fma4 Sleef_atand4_u10 Sleef_atand4_u10avx Sleef_atand4_u10avx2 Sleef_atand4_u10fma4 Sleef_atand4_u35 Sleef_atand4_u35avx Sleef_atand4_u35avx2 Sleef_atand4_u35fma4 Sleef_atanf8_u10 Sleef_atanf8_u10avx Sleef_atanf8_u10avx2 Sleef_atanf8_u10fma4 Sleef_atanf8_u35 Sleef_atanf8_u35avx Sleef_atanf8_u35avx2 Sleef_atanf8_u35fma4 Sleef_atanhd4_u10 Sleef_atanhd4_u10avx Sleef_atanhd4_u10avx2 Sleef_atanhd4_u10fma4 Sleef_atanhf8_u10 Sleef_atanhf8_u10avx Sleef_atanhf8_u10avx2 Sleef_atanhf8_u10fma4 Sleef_cbrtd4_u10 Sleef_cbrtd4_u10avx Sleef_cbrtd4_u10avx2 Sleef_cbrtd4_u10fma4 Sleef_cbrtd4_u35 Sleef_cbrtd4_u35avx Sleef_cbrtd4_u35avx2 Sleef_cbrtd4_u35fma4 Sleef_cbrtf8_u10 Sleef_cbrtf8_u10avx Sleef_cbrtf8_u10avx2 Sleef_cbrtf8_u10fma4 Sleef_cbrtf8_u35 Sleef_cbrtf8_u35avx Sleef_cbrtf8_u35avx2 Sleef_cbrtf8_u35fma4 Sleef_ceild4 Sleef_ceild4_avx Sleef_ceild4_avx2 Sleef_ceild4_fma4 Sleef_ceilf8 Sleef_ceilf8_avx Sleef_ceilf8_avx2 Sleef_ceilf8_fma4 Sleef_copysignd4 Sleef_copysignd4_avx Sleef_copysignd4_avx2 Sleef_copysignd4_fma4 Sleef_copysignf8 Sleef_copysignf8_avx Sleef_copysignf8_avx2 Sleef_copysignf8_fma4 Sleef_cosd4_u10 Sleef_cosd4_u10avx Sleef_cosd4_u10avx2 Sleef_cosd4_u10fma4 Sleef_cosd4_u35 Sleef_cosd4_u35avx Sleef_cosd4_u35avx2 Sleef_cosd4_u35fma4 Sleef_cosf8_u10 Sleef_cosf8_u10avx Sleef_cosf8_u10avx2 Sleef_cosf8_u10fma4 Sleef_cosf8_u35 Sleef_cosf8_u35avx Sleef_cosf8_u35avx2 Sleef_cosf8_u35fma4 Sleef_coshd4_u10 Sleef_coshd4_u10avx Sleef_coshd4_u10avx2 Sleef_coshd4_u10fma4 Sleef_coshf8_u10 Sleef_coshf8_u10avx Sleef_coshf8_u10avx2 Sleef_coshf8_u10fma4 Sleef_cospid4_u05 Sleef_cospid4_u05avx Sleef_cospid4_u05avx2 Sleef_cospid4_u05fma4 Sleef_cospif8_u05 Sleef_cospif8_u05avx Sleef_cospif8_u05avx2 Sleef_cospif8_u05fma4 Sleef_currentTimeMicros Sleef_erfcd4_u15 Sleef_erfcd4_u15avx Sleef_erfcd4_u15avx2 Sleef_erfcd4_u15fma4 Sleef_erfcf8_u15 Sleef_erfcf8_u15avx Sleef_erfcf8_u15avx2 Sleef_erfcf8_u15fma4 Sleef_erfd4_u10 Sleef_erfd4_u10avx Sleef_erfd4_u10avx2 Sleef_erfd4_u10fma4 Sleef_erff8_u10 Sleef_erff8_u10avx Sleef_erff8_u10avx2 Sleef_erff8_u10fma4 Sleef_exp10d4_u10 Sleef_exp10d4_u10avx Sleef_exp10d4_u10avx2 Sleef_exp10d4_u10fma4 Sleef_exp10f8_u10 Sleef_exp10f8_u10avx Sleef_exp10f8_u10avx2 Sleef_exp10f8_u10fma4 Sleef_exp2d4_u10 Sleef_exp2d4_u10avx 
Sleef_exp2d4_u10avx2 Sleef_exp2d4_u10fma4 Sleef_exp2f8_u10 Sleef_exp2f8_u10avx Sleef_exp2f8_u10avx2 Sleef_exp2f8_u10fma4 Sleef_expd4_u10 Sleef_expd4_u10avx Sleef_expd4_u10avx2 Sleef_expd4_u10fma4 Sleef_expf8_u10 Sleef_expf8_u10avx Sleef_expf8_u10avx2 Sleef_expf8_u10fma4 Sleef_expfrexpd4 Sleef_expfrexpd4_avx Sleef_expfrexpd4_avx2 Sleef_expfrexpd4_fma4 Sleef_expfrexpf8_avx Sleef_expfrexpf8_avx2 Sleef_expfrexpf8_fma4 Sleef_expm1d4_u10 Sleef_expm1d4_u10avx Sleef_expm1d4_u10avx2 Sleef_expm1d4_u10fma4 Sleef_expm1f8_u10 Sleef_expm1f8_u10avx Sleef_expm1f8_u10avx2 Sleef_expm1f8_u10fma4 Sleef_fabsd4 Sleef_fabsd4_avx Sleef_fabsd4_avx2 Sleef_fabsd4_fma4 Sleef_fabsf8 Sleef_fabsf8_avx Sleef_fabsf8_avx2 Sleef_fabsf8_fma4 Sleef_fdimd4 Sleef_fdimd4_avx Sleef_fdimd4_avx2 Sleef_fdimd4_fma4 Sleef_fdimf8 Sleef_fdimf8_avx Sleef_fdimf8_avx2 Sleef_fdimf8_fma4 Sleef_floord4 Sleef_floord4_avx Sleef_floord4_avx2 Sleef_floord4_fma4 Sleef_floorf8 Sleef_floorf8_avx Sleef_floorf8_avx2 Sleef_floorf8_fma4 Sleef_fmad4 Sleef_fmad4_avx Sleef_fmad4_avx2 Sleef_fmad4_fma4 Sleef_fmaf8 Sleef_fmaf8_avx Sleef_fmaf8_avx2 Sleef_fmaf8_fma4 Sleef_fmaxd4 Sleef_fmaxd4_avx Sleef_fmaxd4_avx2 Sleef_fmaxd4_fma4 Sleef_fmaxf8 Sleef_fmaxf8_avx Sleef_fmaxf8_avx2 Sleef_fmaxf8_fma4 Sleef_fmind4 Sleef_fmind4_avx Sleef_fmind4_avx2 Sleef_fmind4_fma4 Sleef_fminf8 Sleef_fminf8_avx Sleef_fminf8_avx2 Sleef_fminf8_fma4 Sleef_fmodd4 Sleef_fmodd4_avx Sleef_fmodd4_avx2 Sleef_fmodd4_fma4 Sleef_fmodf8 Sleef_fmodf8_avx Sleef_fmodf8_avx2 Sleef_fmodf8_fma4 Sleef_free Sleef_frfrexpd4 Sleef_frfrexpd4_avx Sleef_frfrexpd4_avx2 Sleef_frfrexpd4_fma4 Sleef_frfrexpf8 Sleef_frfrexpf8_avx Sleef_frfrexpf8_avx2 Sleef_frfrexpf8_fma4 Sleef_getCpuIdString Sleef_getIntd4 Sleef_getIntd4_avx Sleef_getIntd4_avx2 Sleef_getIntd4_fma4 Sleef_getIntf8 Sleef_getIntf8_avx Sleef_getIntf8_avx2 Sleef_getIntf8_fma4 Sleef_getPtrd4 Sleef_getPtrd4_avx Sleef_getPtrd4_avx2 Sleef_getPtrd4_fma4 Sleef_getPtrf8 Sleef_getPtrf8_avx Sleef_getPtrf8_avx2 Sleef_getPtrf8_fma4 Sleef_hypotd4_u05 Sleef_hypotd4_u05avx Sleef_hypotd4_u05avx2 Sleef_hypotd4_u05fma4 Sleef_hypotd4_u35 Sleef_hypotd4_u35avx Sleef_hypotd4_u35avx2 Sleef_hypotd4_u35fma4 Sleef_hypotf8_u05 Sleef_hypotf8_u05avx Sleef_hypotf8_u05avx2 Sleef_hypotf8_u05fma4 Sleef_hypotf8_u35 Sleef_hypotf8_u35avx Sleef_hypotf8_u35avx2 Sleef_hypotf8_u35fma4 Sleef_ilogbd4 Sleef_ilogbd4_avx Sleef_ilogbd4_avx2 Sleef_ilogbd4_fma4 Sleef_ilogbf8_avx Sleef_ilogbf8_avx2 Sleef_ilogbf8_fma4 Sleef_ldexpd4 Sleef_ldexpd4_avx Sleef_ldexpd4_avx2 Sleef_ldexpd4_fma4 Sleef_ldexpf8_avx Sleef_ldexpf8_avx2 Sleef_ldexpf8_fma4 Sleef_lgammad4_u10 Sleef_lgammad4_u10avx Sleef_lgammad4_u10avx2 Sleef_lgammad4_u10fma4 Sleef_lgammaf8_u10 Sleef_lgammaf8_u10avx Sleef_lgammaf8_u10avx2 Sleef_lgammaf8_u10fma4 Sleef_log10d4_u10 Sleef_log10d4_u10avx Sleef_log10d4_u10avx2 Sleef_log10d4_u10fma4 Sleef_log10f8_u10 Sleef_log10f8_u10avx Sleef_log10f8_u10avx2 Sleef_log10f8_u10fma4 Sleef_log1pd4_u10 Sleef_log1pd4_u10avx Sleef_log1pd4_u10avx2 Sleef_log1pd4_u10fma4 Sleef_log1pf8_u10 Sleef_log1pf8_u10avx Sleef_log1pf8_u10avx2 Sleef_log1pf8_u10fma4 Sleef_log2d4_u10 Sleef_log2d4_u10avx Sleef_log2d4_u10avx2 Sleef_log2d4_u10fma4 Sleef_log2f8_u10 Sleef_log2f8_u10avx Sleef_log2f8_u10avx2 Sleef_log2f8_u10fma4 Sleef_logd4_u10 Sleef_logd4_u10avx Sleef_logd4_u10avx2 Sleef_logd4_u10fma4 Sleef_logd4_u35 Sleef_logd4_u35avx Sleef_logd4_u35avx2 Sleef_logd4_u35fma4 Sleef_logf8_u10 Sleef_logf8_u10avx Sleef_logf8_u10avx2 Sleef_logf8_u10fma4 Sleef_logf8_u35 Sleef_logf8_u35avx Sleef_logf8_u35avx2 Sleef_logf8_u35fma4 Sleef_malloc 
Sleef_modfd4 Sleef_modfd4_avx Sleef_modfd4_avx2 Sleef_modfd4_fma4 Sleef_modff8 Sleef_modff8_avx Sleef_modff8_avx2 Sleef_modff8_fma4 Sleef_nextafterd4 Sleef_nextafterd4_avx Sleef_nextafterd4_avx2 Sleef_nextafterd4_fma4 Sleef_nextafterf8 Sleef_nextafterf8_avx Sleef_nextafterf8_avx2 Sleef_nextafterf8_fma4 Sleef_powd4_u10 Sleef_powd4_u10avx Sleef_powd4_u10avx2 Sleef_powd4_u10fma4 Sleef_powf8_u10 Sleef_powf8_u10avx Sleef_powf8_u10avx2 Sleef_powf8_u10fma4 Sleef_rintd4 Sleef_rintd4_avx Sleef_rintd4_avx2 Sleef_rintd4_fma4 Sleef_rintf8 Sleef_rintf8_avx Sleef_rintf8_avx2 Sleef_rintf8_fma4 Sleef_roundd4 Sleef_roundd4_avx Sleef_roundd4_avx2 Sleef_roundd4_fma4 Sleef_roundf8 Sleef_roundf8_avx Sleef_roundf8_avx2 Sleef_roundf8_fma4 Sleef_sincosd4_u10 Sleef_sincosd4_u10avx Sleef_sincosd4_u10avx2 Sleef_sincosd4_u10fma4 Sleef_sincosd4_u35 Sleef_sincosd4_u35avx Sleef_sincosd4_u35avx2 Sleef_sincosd4_u35fma4 Sleef_sincosf8_u10 Sleef_sincosf8_u10avx Sleef_sincosf8_u10avx2 Sleef_sincosf8_u10fma4 Sleef_sincosf8_u35 Sleef_sincosf8_u35avx Sleef_sincosf8_u35avx2 Sleef_sincosf8_u35fma4 Sleef_sincospid4_u05 Sleef_sincospid4_u05avx Sleef_sincospid4_u05avx2 Sleef_sincospid4_u05fma4 Sleef_sincospid4_u35 Sleef_sincospid4_u35avx Sleef_sincospid4_u35avx2 Sleef_sincospid4_u35fma4 Sleef_sincospif8_u05 Sleef_sincospif8_u05avx Sleef_sincospif8_u05avx2 Sleef_sincospif8_u05fma4 Sleef_sincospif8_u35 Sleef_sincospif8_u35avx Sleef_sincospif8_u35avx2 Sleef_sincospif8_u35fma4 Sleef_sind4_u10 Sleef_sind4_u10avx Sleef_sind4_u10avx2 Sleef_sind4_u10fma4 Sleef_sind4_u35 Sleef_sind4_u35avx Sleef_sind4_u35avx2 Sleef_sind4_u35fma4 Sleef_sinf8_u10 Sleef_sinf8_u10avx Sleef_sinf8_u10avx2 Sleef_sinf8_u10fma4 Sleef_sinf8_u35 Sleef_sinf8_u35avx Sleef_sinf8_u35avx2 Sleef_sinf8_u35fma4 Sleef_sinhd4_u10 Sleef_sinhd4_u10avx Sleef_sinhd4_u10avx2 Sleef_sinhd4_u10fma4 Sleef_sinhf8_u10 Sleef_sinhf8_u10avx Sleef_sinhf8_u10avx2 Sleef_sinhf8_u10fma4 Sleef_sinpid4_u05 Sleef_sinpid4_u05avx Sleef_sinpid4_u05avx2 Sleef_sinpid4_u05fma4 Sleef_sinpif8_u05 Sleef_sinpif8_u05avx Sleef_sinpif8_u05avx2 Sleef_sinpif8_u05fma4 Sleef_sqrtd4 Sleef_sqrtd4_avx Sleef_sqrtd4_avx2 Sleef_sqrtd4_fma4 Sleef_sqrtd4_u05 Sleef_sqrtd4_u05avx Sleef_sqrtd4_u05avx2 Sleef_sqrtd4_u05fma4 Sleef_sqrtd4_u35 Sleef_sqrtd4_u35avx Sleef_sqrtd4_u35avx2 Sleef_sqrtd4_u35fma4 Sleef_sqrtf8 Sleef_sqrtf8_avx Sleef_sqrtf8_avx2 Sleef_sqrtf8_fma4 Sleef_sqrtf8_u05 Sleef_sqrtf8_u05avx Sleef_sqrtf8_u05avx2 Sleef_sqrtf8_u05fma4 Sleef_sqrtf8_u35 Sleef_sqrtf8_u35avx Sleef_sqrtf8_u35avx2 Sleef_sqrtf8_u35fma4 Sleef_tand4_u10 Sleef_tand4_u10avx Sleef_tand4_u10avx2 Sleef_tand4_u10fma4 Sleef_tand4_u35 Sleef_tand4_u35avx Sleef_tand4_u35avx2 Sleef_tand4_u35fma4 Sleef_tanf8_u10 Sleef_tanf8_u10avx Sleef_tanf8_u10avx2 Sleef_tanf8_u10fma4 Sleef_tanf8_u35 Sleef_tanf8_u35avx Sleef_tanf8_u35avx2 Sleef_tanf8_u35fma4 Sleef_tanhd4_u10 Sleef_tanhd4_u10avx Sleef_tanhd4_u10avx2 Sleef_tanhd4_u10fma4 Sleef_tanhf8_u10 Sleef_tanhf8_u10avx Sleef_tanhf8_u10avx2 Sleef_tanhf8_u10fma4 Sleef_tgammad4_u10 Sleef_tgammad4_u10avx Sleef_tgammad4_u10avx2 Sleef_tgammad4_u10fma4 Sleef_tgammaf8_u10 Sleef_tgammaf8_u10avx Sleef_tgammaf8_u10avx2 Sleef_tgammaf8_u10fma4 Sleef_truncd4 Sleef_truncd4_avx Sleef_truncd4_avx2 Sleef_truncd4_fma4 Sleef_truncf8 Sleef_truncf8_avx Sleef_truncf8_avx2 Sleef_truncf8_fma4 Sleef_x86CpuID

dotnet build fails to generate TypeGeneration.cs on Linux

Since the removal of the TypeGeneration.cs file in e1c83df, I cannot build the project on Linux. Obviously, I don't have Visual Studio to generate the file, and I cannot figure out if there is a combination of options to do it using only dotnet/msbuild. Here's what I get:

motus@motus-xps13:~/devel/TorchSharp/Tester:master$ dotnet run
Program.cs(5,20): error CS0246: The type or namespace name 'FloatTensor' could not be found (are you missing a using directive or an assembly reference?) [/home/motus/devel/TorchSharp/Tester/Tester.csproj]

The build failed. Please fix the build errors and run again.
motus@motus-xps13:~/devel/TorchSharp/Tester:master$ dotnet --version
2.1.403

I think @NiklasGustafsson has the same problem. @migueldeicaza, do you know how to fix it? Thank you!

Binding: THNN APIs

  • THNN_DoubleAbsCriterion_updateGradInput
  • THNN_DoubleAbsCriterion_updateOutput
  • THNN_DoubleBCECriterion_updateGradInput
  • THNN_DoubleBCECriterion_updateOutput
  • THNN_DoubleBatchNormalization_backward
  • THNN_DoubleBatchNormalization_updateOutput
  • THNN_DoubleClassNLLCriterion_updateGradInput
  • THNN_DoubleClassNLLCriterion_updateOutput
  • THNN_DoubleCol2Im_updateGradInput
  • THNN_DoubleCol2Im_updateOutput
  • THNN_DoubleELU_updateGradInput
  • THNN_DoubleELU_updateOutput
  • THNN_DoubleFeatureLPPooling_updateGradInput
  • THNN_DoubleFeatureLPPooling_updateOutput
  • THNN_DoubleGatedLinear_updateGradInput
  • THNN_DoubleGatedLinear_updateOutput
  • THNN_DoubleHardTanh_updateGradInput
  • THNN_DoubleHardTanh_updateOutput
  • THNN_DoubleIm2Col_updateGradInput
  • THNN_DoubleIm2Col_updateOutput
  • THNN_DoubleIndexLinear_accGradParameters
  • THNN_DoubleIndexLinear_accUpdateGradParameters
  • THNN_DoubleIndexLinear_updateOutput
  • THNN_DoubleIndexLinear_updateParameters
  • THNN_DoubleLeakyReLU_updateGradInput
  • THNN_DoubleLeakyReLU_updateOutput
  • THNN_DoubleLogSigmoid_updateGradInput
  • THNN_DoubleLogSigmoid_updateOutput
  • THNN_DoubleMSECriterion_updateGradInput
  • THNN_DoubleMSECriterion_updateOutput
  • THNN_DoubleMultiLabelMarginCriterion_updateGradInput
  • THNN_DoubleMultiLabelMarginCriterion_updateOutput
  • THNN_DoubleMultiMarginCriterion_updateGradInput
  • THNN_DoubleMultiMarginCriterion_updateOutput
  • THNN_DoubleRReLU_updateGradInput
  • THNN_DoubleRReLU_updateOutput
  • THNN_DoubleSigmoid_updateGradInput
  • THNN_DoubleSigmoid_updateOutput
  • THNN_DoubleSmoothL1Criterion_updateGradInput
  • THNN_DoubleSmoothL1Criterion_updateOutput
  • THNN_DoubleSoftMarginCriterion_updateGradInput
  • THNN_DoubleSoftMarginCriterion_updateOutput
  • THNN_DoubleSoftPlus_updateGradInput
  • THNN_DoubleSoftPlus_updateOutput
  • THNN_DoubleSoftShrink_updateGradInput
  • THNN_DoubleSoftShrink_updateOutput
  • THNN_DoubleSparseLinear_accGradParameters
  • THNN_DoubleSparseLinear_legacyAccGradParameters
  • THNN_DoubleSparseLinear_legacyUpdateOutput
  • THNN_DoubleSparseLinear_legacyUpdateParameters
  • THNN_DoubleSparseLinear_legacyZeroGradParameters
  • THNN_DoubleSparseLinear_updateOutput
  • THNN_DoubleSparseLinear_updateParameters
  • THNN_DoubleSparseLinear_zeroGradParameters
  • THNN_DoubleSpatialAdaptiveAveragePooling_updateGradInput
  • THNN_DoubleSpatialAdaptiveAveragePooling_updateOutput
  • THNN_DoubleSpatialAdaptiveMaxPooling_updateGradInput
  • THNN_DoubleSpatialAdaptiveMaxPooling_updateOutput
  • THNN_DoubleSpatialAveragePooling_updateGradInput
  • THNN_DoubleSpatialAveragePooling_updateOutput
  • THNN_DoubleSpatialClassNLLCriterion_updateGradInput
  • THNN_DoubleSpatialClassNLLCriterion_updateOutput
  • THNN_DoubleSpatialConvolutionMM_accGradParameters
  • THNN_DoubleSpatialConvolutionMM_updateGradInput
  • THNN_DoubleSpatialConvolutionMM_updateOutput
  • THNN_DoubleSpatialDilatedConvolution_accGradParameters
  • THNN_DoubleSpatialDilatedConvolution_updateGradInput
  • THNN_DoubleSpatialDilatedConvolution_updateOutput
  • THNN_DoubleSpatialDilatedMaxPooling_updateGradInput
  • THNN_DoubleSpatialDilatedMaxPooling_updateOutput
  • THNN_DoubleSpatialFractionalMaxPooling_updateGradInput
  • THNN_DoubleSpatialFractionalMaxPooling_updateOutput
  • THNN_DoubleSpatialFullDilatedConvolution_accGradParameters
  • THNN_DoubleSpatialFullDilatedConvolution_updateGradInput
  • THNN_DoubleSpatialFullDilatedConvolution_updateOutput
  • THNN_DoubleSpatialMaxUnpooling_updateGradInput
  • THNN_DoubleSpatialMaxUnpooling_updateOutput
  • THNN_DoubleSpatialReflectionPadding_updateGradInput
  • THNN_DoubleSpatialReflectionPadding_updateOutput
  • THNN_DoubleSpatialReplicationPadding_updateGradInput
  • THNN_DoubleSpatialReplicationPadding_updateOutput
  • THNN_DoubleSpatialUpSamplingBilinear_updateGradInput
  • THNN_DoubleSpatialUpSamplingBilinear_updateOutput
  • THNN_DoubleSpatialUpSamplingNearest_updateGradInput
  • THNN_DoubleSpatialUpSamplingNearest_updateOutput
  • THNN_DoubleTanh_updateGradInput
  • THNN_DoubleTanh_updateOutput
  • THNN_DoubleTemporalReflectionPadding_updateGradInput
  • THNN_DoubleTemporalReflectionPadding_updateOutput
  • THNN_DoubleTemporalReplicationPadding_updateGradInput
  • THNN_DoubleTemporalReplicationPadding_updateOutput
  • THNN_DoubleTemporalRowConvolution_accGradParameters
  • THNN_DoubleTemporalRowConvolution_updateGradInput
  • THNN_DoubleTemporalRowConvolution_updateOutput
  • THNN_DoubleTemporalUpSamplingLinear_updateGradInput
  • THNN_DoubleTemporalUpSamplingLinear_updateOutput
  • THNN_DoubleTemporalUpSamplingNearest_updateGradInput
  • THNN_DoubleTemporalUpSamplingNearest_updateOutput
  • THNN_DoubleThreshold_updateGradInput
  • THNN_DoubleThreshold_updateOutput
  • THNN_DoubleVolumetricAdaptiveAveragePooling_updateGradInput
  • THNN_DoubleVolumetricAdaptiveAveragePooling_updateOutput
  • THNN_DoubleVolumetricAdaptiveMaxPooling_updateGradInput
  • THNN_DoubleVolumetricAdaptiveMaxPooling_updateOutput
  • THNN_DoubleVolumetricAveragePooling_updateGradInput
  • THNN_DoubleVolumetricAveragePooling_updateOutput
  • THNN_DoubleVolumetricConvolutionMM_accGradParameters
  • THNN_DoubleVolumetricConvolutionMM_updateGradInput
  • THNN_DoubleVolumetricConvolutionMM_updateOutput
  • THNN_DoubleVolumetricDilatedConvolution_accGradParameters
  • THNN_DoubleVolumetricDilatedConvolution_updateGradInput
  • THNN_DoubleVolumetricDilatedConvolution_updateOutput
  • THNN_DoubleVolumetricDilatedMaxPooling_updateGradInput
  • THNN_DoubleVolumetricDilatedMaxPooling_updateOutput
  • THNN_DoubleVolumetricFullDilatedConvolution_accGradParameters
  • THNN_DoubleVolumetricFullDilatedConvolution_updateGradInput
  • THNN_DoubleVolumetricFullDilatedConvolution_updateOutput
  • THNN_DoubleVolumetricMaxUnpooling_updateGradInput
  • THNN_DoubleVolumetricMaxUnpooling_updateOutput
  • THNN_DoubleVolumetricReplicationPadding_updateGradInput
  • THNN_DoubleVolumetricReplicationPadding_updateOutput
  • THNN_DoubleVolumetricUpSamplingNearest_updateGradInput
  • THNN_DoubleVolumetricUpSamplingNearest_updateOutput
  • THNN_DoubleVolumetricUpSamplingTrilinear_updateGradInput
  • THNN_DoubleVolumetricUpSamplingTrilinear_updateOutput
  • THNN_Doubleunfolded_acc
  • THNN_Doubleunfolded_copy
  • THNN_FloatAbsCriterion_updateGradInput
  • THNN_FloatAbsCriterion_updateOutput
  • THNN_FloatBCECriterion_updateGradInput
  • THNN_FloatBCECriterion_updateOutput
  • THNN_FloatBatchNormalization_backward
  • THNN_FloatBatchNormalization_updateOutput
  • THNN_FloatClassNLLCriterion_updateGradInput
  • THNN_FloatClassNLLCriterion_updateOutput
  • THNN_FloatCol2Im_updateGradInput
  • THNN_FloatCol2Im_updateOutput
  • THNN_FloatELU_updateGradInput
  • THNN_FloatELU_updateOutput
  • THNN_FloatFeatureLPPooling_updateGradInput
  • THNN_FloatFeatureLPPooling_updateOutput
  • THNN_FloatGatedLinear_updateGradInput
  • THNN_FloatGatedLinear_updateOutput
  • THNN_FloatHardTanh_updateGradInput
  • THNN_FloatHardTanh_updateOutput
  • THNN_FloatIm2Col_updateGradInput
  • THNN_FloatIm2Col_updateOutput
  • THNN_FloatIndexLinear_accGradParameters
  • THNN_FloatIndexLinear_accUpdateGradParameters
  • THNN_FloatIndexLinear_updateOutput
  • THNN_FloatIndexLinear_updateParameters
  • THNN_FloatLeakyReLU_updateGradInput
  • THNN_FloatLeakyReLU_updateOutput
  • THNN_FloatLogSigmoid_updateGradInput
  • THNN_FloatLogSigmoid_updateOutput
  • THNN_FloatMSECriterion_updateGradInput
  • THNN_FloatMSECriterion_updateOutput
  • THNN_FloatMultiLabelMarginCriterion_updateGradInput
  • THNN_FloatMultiLabelMarginCriterion_updateOutput
  • THNN_FloatMultiMarginCriterion_updateGradInput
  • THNN_FloatMultiMarginCriterion_updateOutput
  • THNN_FloatRReLU_updateGradInput
  • THNN_FloatRReLU_updateOutput
  • THNN_FloatSigmoid_updateGradInput
  • THNN_FloatSigmoid_updateOutput
  • THNN_FloatSmoothL1Criterion_updateGradInput
  • THNN_FloatSmoothL1Criterion_updateOutput
  • THNN_FloatSoftMarginCriterion_updateGradInput
  • THNN_FloatSoftMarginCriterion_updateOutput
  • THNN_FloatSoftPlus_updateGradInput
  • THNN_FloatSoftPlus_updateOutput
  • THNN_FloatSoftShrink_updateGradInput
  • THNN_FloatSoftShrink_updateOutput
  • THNN_FloatSparseLinear_accGradParameters
  • THNN_FloatSparseLinear_legacyAccGradParameters
  • THNN_FloatSparseLinear_legacyUpdateOutput
  • THNN_FloatSparseLinear_legacyUpdateParameters
  • THNN_FloatSparseLinear_legacyZeroGradParameters
  • THNN_FloatSparseLinear_updateOutput
  • THNN_FloatSparseLinear_updateParameters
  • THNN_FloatSparseLinear_zeroGradParameters
  • THNN_FloatSpatialAdaptiveAveragePooling_updateGradInput
  • THNN_FloatSpatialAdaptiveAveragePooling_updateOutput
  • THNN_FloatSpatialAdaptiveMaxPooling_updateGradInput
  • THNN_FloatSpatialAdaptiveMaxPooling_updateOutput
  • THNN_FloatSpatialAveragePooling_updateGradInput
  • THNN_FloatSpatialAveragePooling_updateOutput
  • THNN_FloatSpatialClassNLLCriterion_updateGradInput
  • THNN_FloatSpatialClassNLLCriterion_updateOutput
  • THNN_FloatSpatialConvolutionMM_accGradParameters
  • THNN_FloatSpatialConvolutionMM_updateGradInput
  • THNN_FloatSpatialConvolutionMM_updateOutput
  • THNN_FloatSpatialDilatedConvolution_accGradParameters
  • THNN_FloatSpatialDilatedConvolution_updateGradInput
  • THNN_FloatSpatialDilatedConvolution_updateOutput
  • THNN_FloatSpatialDilatedMaxPooling_updateGradInput
  • THNN_FloatSpatialDilatedMaxPooling_updateOutput
  • THNN_FloatSpatialFractionalMaxPooling_updateGradInput
  • THNN_FloatSpatialFractionalMaxPooling_updateOutput
  • THNN_FloatSpatialFullDilatedConvolution_accGradParameters
  • THNN_FloatSpatialFullDilatedConvolution_updateGradInput
  • THNN_FloatSpatialFullDilatedConvolution_updateOutput
  • THNN_FloatSpatialMaxUnpooling_updateGradInput
  • THNN_FloatSpatialMaxUnpooling_updateOutput
  • THNN_FloatSpatialReflectionPadding_updateGradInput
  • THNN_FloatSpatialReflectionPadding_updateOutput
  • THNN_FloatSpatialReplicationPadding_updateGradInput
  • THNN_FloatSpatialReplicationPadding_updateOutput
  • THNN_FloatSpatialUpSamplingBilinear_updateGradInput
  • THNN_FloatSpatialUpSamplingBilinear_updateOutput
  • THNN_FloatSpatialUpSamplingNearest_updateGradInput
  • THNN_FloatSpatialUpSamplingNearest_updateOutput
  • THNN_FloatTanh_updateGradInput
  • THNN_FloatTanh_updateOutput
  • THNN_FloatTemporalReflectionPadding_updateGradInput
  • THNN_FloatTemporalReflectionPadding_updateOutput
  • THNN_FloatTemporalReplicationPadding_updateGradInput
  • THNN_FloatTemporalReplicationPadding_updateOutput
  • THNN_FloatTemporalRowConvolution_accGradParameters
  • THNN_FloatTemporalRowConvolution_updateGradInput
  • THNN_FloatTemporalRowConvolution_updateOutput
  • THNN_FloatTemporalUpSamplingLinear_updateGradInput
  • THNN_FloatTemporalUpSamplingLinear_updateOutput
  • THNN_FloatTemporalUpSamplingNearest_updateGradInput
  • THNN_FloatTemporalUpSamplingNearest_updateOutput
  • THNN_FloatThreshold_updateGradInput
  • THNN_FloatThreshold_updateOutput
  • THNN_FloatVolumetricAdaptiveAveragePooling_updateGradInput
  • THNN_FloatVolumetricAdaptiveAveragePooling_updateOutput
  • THNN_FloatVolumetricAdaptiveMaxPooling_updateGradInput
  • THNN_FloatVolumetricAdaptiveMaxPooling_updateOutput
  • THNN_FloatVolumetricAveragePooling_updateGradInput
  • THNN_FloatVolumetricAveragePooling_updateOutput
  • THNN_FloatVolumetricConvolutionMM_accGradParameters
  • THNN_FloatVolumetricConvolutionMM_updateGradInput
  • THNN_FloatVolumetricConvolutionMM_updateOutput
  • THNN_FloatVolumetricDilatedConvolution_accGradParameters
  • THNN_FloatVolumetricDilatedConvolution_updateGradInput
  • THNN_FloatVolumetricDilatedConvolution_updateOutput
  • THNN_FloatVolumetricDilatedMaxPooling_updateGradInput
  • THNN_FloatVolumetricDilatedMaxPooling_updateOutput
  • THNN_FloatVolumetricFullDilatedConvolution_accGradParameters
  • THNN_FloatVolumetricFullDilatedConvolution_updateGradInput
  • THNN_FloatVolumetricFullDilatedConvolution_updateOutput
  • THNN_FloatVolumetricMaxUnpooling_updateGradInput
  • THNN_FloatVolumetricMaxUnpooling_updateOutput
  • THNN_FloatVolumetricReplicationPadding_updateGradInput
  • THNN_FloatVolumetricReplicationPadding_updateOutput
  • THNN_FloatVolumetricUpSamplingNearest_updateGradInput
  • THNN_FloatVolumetricUpSamplingNearest_updateOutput
  • THNN_FloatVolumetricUpSamplingTrilinear_updateGradInput
  • THNN_FloatVolumetricUpSamplingTrilinear_updateOutput
  • THNN_Floatunfolded_acc
  • THNN_Floatunfolded_copy

Tensor operation - progress tracking

  • maskedFill

  • maskedCopy

  • maskedSelect

  • squeeze

  • squeeze1d

  • unsqueeze1d

  • nonzero

  • indexSelect

  • indexCopy

  • indexAdd

  • indexFill

  • take

  • put

  • gather

  • scatter

  • scatterAdd

  • scatterFill

  • dot

  • minall

  • maxall

  • medianall

  • sumall

  • prodall

  • neg

  • cinv

  • add

  • sub

  • add_scaled

  • sub_scaled

  • mul

  • div

  • lshift

  • rshift

  • fmod

  • remainder

  • clamp

  • bitand

  • bitor

  • bitxor

  • cadd

  • csub

  • cmul

  • cpow

  • cdiv

  • clshift

  • crshift

  • cfmod

  • cremainder

  • cbitand

  • cbitor

  • cbitxor

  • addcmul

  • addcdiv

  • addmv

  • addmm

  • addr

  • addbmm

  • baddbmm

  • match

  • numel

  • max

  • min

  • kthvalue

  • mode

  • median

  • sum

  • prod

  • cumsum

  • cumprod

  • sign

  • cross

  • cmax

  • cmin

  • cmaxValue

  • cminValue

  • zerosLike

  • onesLike

  • diag

  • eye

  • arange

  • range

  • randperm

  • sort

  • topk

  • tril

  • triu

  • cat

  • catArray

  • equal

  • ltValue

  • leValue

  • gtValue

  • geValue

  • neValue

  • eqValue

  • ltValueT

  • leValueT

  • gtValueT

  • neValueT

  • eqValueT

  • ltTensor

  • leTensor

  • gtTensor

  • geTensor

  • neTensor

  • eqTensor

  • ltTensorT

  • leTensorT

  • gtTensorT

  • geTensorT

  • neTensorT

  • eqTensorT

  • pow

  • tpow

  • abs

  • sigmoid

  • log

  • lgamma

  • digamma

  • trigamma

  • polygamma

  • log10

  • log1p

  • log2

  • exp

  • expm1

  • cos

  • acos

  • cosh

  • sin

  • asin

  • sinh

  • tan

  • atan

  • atan2

  • tanh

  • erf

  • erfc

  • erfinv

  • sqrt

  • rsqrt

  • ceil

  • floor

  • round

  • abs

  • trunc

  • frac

  • lerp

  • mean

  • std

  • var

  • norm

  • renorm

  • dist

  • histc

  • bhistc

  • meanall

  • varall

  • stdall

  • normall

  • linspace

  • logspace

  • dirichlet_grad

  • logicalAndAll

  • logicalAnyAll

  • logicalAnd

  • logicalAny

Make the API easier to use by following Python's syntax.

Is it possible to change the high-level API like what SciSharp has done? Check the README: https://github.com/SciSharp/TensorFlow.NET.

For example:

using TorchSharp;

var x = new FloatTensor(100);   // 1D-tensor with 100 elements

Will be

using torch = TorchSharp.Torch;

var x = torch.tensor(100);   // 1D-tensor with 100 elements

Mirroring the API first would make it faster to move ML models to the .NET world, and the project would be done faster than ML.NET. @migueldeicaza, when most things run well, we can refactor the code to be more in line with .NET conventions.

Binding: TH.*Blas APIs

  • Create DllImport bindings for the following functions:
    • THByteBlas_axpy
    • THByteBlas_copy
    • THByteBlas_dot
    • THByteBlas_gemm
    • THByteBlas_gemv
    • THByteBlas_ger
    • THByteBlas_scal
    • THByteBlas_swap
    • THCharBlas_axpy
    • THCharBlas_copy
    • THCharBlas_dot
    • THCharBlas_gemm
    • THCharBlas_gemv
    • THCharBlas_ger
    • THCharBlas_scal
    • THCharBlas_swap
    • THDoubleBlas_axpy
    • THDoubleBlas_copy
    • THDoubleBlas_dot
    • THDoubleBlas_gemm
    • THDoubleBlas_gemv
    • THDoubleBlas_ger
    • THDoubleBlas_scal
    • THDoubleBlas_swap
    • THFloatBlas_axpy
    • THFloatBlas_copy
    • THFloatBlas_dot
    • THFloatBlas_gemm
    • THFloatBlas_gemv
    • THFloatBlas_ger
    • THFloatBlas_scal
    • THFloatBlas_swap
    • THIntBlas_axpy
    • THIntBlas_copy
    • THIntBlas_dot
    • THIntBlas_gemm
    • THIntBlas_gemv
    • THIntBlas_ger
    • THIntBlas_scal
    • THIntBlas_swap
    • THLongBlas_axpy
    • THLongBlas_copy
    • THLongBlas_dot
    • THLongBlas_gemm
    • THLongBlas_gemv
    • THLongBlas_ger
    • THLongBlas_scal
    • THLongBlas_swap
    • THShortBlas_axpy
    • THShortBlas_copy
    • THShortBlas_dot
    • THShortBlas_gemm
    • THShortBlas_gemv
    • THShortBlas_ger
    • THShortBlas_scal
    • THShortBlas_swap
  • Create a high-level Torch.Tensor API for these functions.

Binding: TH.*Vector APIs

  • THByteVector_adds
  • THByteVector_cadd
  • THByteVector_cdiv
  • THByteVector_cmul
  • THByteVector_copy
  • THByteVector_divs
  • THByteVector_fill
  • THByteVector_muls
  • THByteVector_neg
  • THByteVector_normal_fill
  • THCharVector_adds
  • THCharVector_cadd
  • THCharVector_cdiv
  • THCharVector_cmul
  • THCharVector_copy
  • THCharVector_divs
  • THCharVector_fill
  • THCharVector_muls
  • THCharVector_neg
  • THCharVector_normal_fill
  • THDoubleVector_abs
  • THDoubleVector_acos
  • THDoubleVector_adds
  • THDoubleVector_adds_AVX
  • THDoubleVector_asin
  • THDoubleVector_atan
  • THDoubleVector_cadd
  • THDoubleVector_cadd_AVX
  • THDoubleVector_cdiv
  • THDoubleVector_cdiv_AVX
  • THDoubleVector_ceil
  • THDoubleVector_cinv
  • THDoubleVector_cmul
  • THDoubleVector_cmul_AVX
  • THDoubleVector_copy
  • THDoubleVector_copy_AVX
  • THDoubleVector_cos
  • THDoubleVector_cosh
  • THDoubleVector_digamma
  • THDoubleVector_divs
  • THDoubleVector_divs_AVX
  • THDoubleVector_erf
  • THDoubleVector_erfc
  • THDoubleVector_erfinv
  • THDoubleVector_exp
  • THDoubleVector_expm1
  • THDoubleVector_fill
  • THDoubleVector_fill_AVX
  • THDoubleVector_floor
  • THDoubleVector_frac
  • THDoubleVector_lgamma
  • THDoubleVector_log
  • THDoubleVector_log10
  • THDoubleVector_log1p
  • THDoubleVector_log2
  • THDoubleVector_muls
  • THDoubleVector_muls_AVX
  • THDoubleVector_neg
  • THDoubleVector_normal_fill
  • THDoubleVector_pow
  • THDoubleVector_round
  • THDoubleVector_rsqrt
  • THDoubleVector_sigmoid
  • THDoubleVector_sin
  • THDoubleVector_sinh
  • THDoubleVector_sqrt
  • THDoubleVector_tan
  • THDoubleVector_tanh
  • THDoubleVector_trigamma
  • THDoubleVector_trunc
  • THFloatVector_abs
  • THFloatVector_acos
  • THFloatVector_adds
  • THFloatVector_adds_AVX
  • THFloatVector_asin
  • THFloatVector_atan
  • THFloatVector_cadd
  • THFloatVector_cadd_AVX
  • THFloatVector_cdiv
  • THFloatVector_cdiv_AVX
  • THFloatVector_ceil
  • THFloatVector_cinv
  • THFloatVector_cmul
  • THFloatVector_cmul_AVX
  • THFloatVector_copy
  • THFloatVector_copy_AVX
  • THFloatVector_cos
  • THFloatVector_cosh
  • THFloatVector_digamma
  • THFloatVector_divs
  • THFloatVector_divs_AVX
  • THFloatVector_erf
  • THFloatVector_erfc
  • THFloatVector_erfinv
  • THFloatVector_exp
  • THFloatVector_expm1
  • THFloatVector_fill
  • THFloatVector_fill_AVX
  • THFloatVector_floor
  • THFloatVector_frac
  • THFloatVector_lgamma
  • THFloatVector_log
  • THFloatVector_log10
  • THFloatVector_log1p
  • THFloatVector_log2
  • THFloatVector_muls
  • THFloatVector_muls_AVX
  • THFloatVector_neg
  • THFloatVector_normal_fill
  • THFloatVector_pow
  • THFloatVector_round
  • THFloatVector_rsqrt
  • THFloatVector_sigmoid
  • THFloatVector_sin
  • THFloatVector_sinh
  • THFloatVector_sqrt
  • THFloatVector_tan
  • THFloatVector_tanh
  • THFloatVector_trigamma
  • THFloatVector_trunc
  • THIntVector_abs
  • THIntVector_adds
  • THIntVector_cadd
  • THIntVector_cdiv
  • THIntVector_cmul
  • THIntVector_copy
  • THIntVector_divs
  • THIntVector_fill
  • THIntVector_muls
  • THIntVector_neg
  • THIntVector_normal_fill
  • THLongVector_abs
  • THLongVector_adds
  • THLongVector_cadd
  • THLongVector_cdiv
  • THLongVector_cmul
  • THLongVector_copy
  • THLongVector_divs
  • THLongVector_fill
  • THLongVector_muls
  • THLongVector_neg
  • THLongVector_normal_fill
  • THShortVector_abs
  • THShortVector_adds
  • THShortVector_cadd
  • THShortVector_cdiv
  • THShortVector_cmul
  • THShortVector_copy
  • THShortVector_divs
  • THShortVector_fill
  • THShortVector_muls
  • THShortVector_neg
  • THShortVector_normal_fill

Sort out half-float support

Currently .NET lacks support for half-float, so we do not surface these data types in the API.

We should figure out whether we should at least provide some interop APIs, even if we cannot operate directly on the half values.
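
If we do provide interop APIs, one hedged shape they could take is sketched below; 'HalfTensor' and its members are hypothetical, not existing TorchSharp types:

using System;

// Hypothetical sketch only: keep half data opaque on the native side and
// surface widening conversions, since .NET cannot operate on half values.
public sealed class HalfTensor : IDisposable
{
    private IntPtr handle;                         // native half-precision tensor

    // Widen to float on the way out so .NET code can work with the values.
    public float[] ToFloatArray() =>
        throw new NotImplementedException("would call a native half->float copy");

    public void Dispose() { /* free the native handle */ handle = IntPtr.Zero; }
}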

Adding Training to TorchSharp: First Step

The new version of TorchSharp with training requires the addition of a C API over libtorch. The C API is contained in an external repository called LibTorchSharp. I chose to have a separate repo because I expect LibTorchSharp to eventually go away, since:

  • libtorch will directly provide the C API, or
  • LibTorchSharp will be merged into libtorch, or
  • LibTorchSharp will be automatically generated from PyTorch / libtorch.

Anyway, it would be great if we could move LibTorchSharp from my local repo to a xamarin repo, and while we do this:

  • we do code review,
  • we make sure that it compiles on different platforms (for the moment I have only tested it on Windows),
  • we add an Azure Pipeline to automate compilation and the creation of the NuGet package,
  • we add the NuGet package to TorchSharp.

After all of this, I think I can submit the PR adding training to TorchSharp.

Ponder: Tensor type?

Perhaps there should be a TorchSharp.Tensor type that proxies to the right storage tensor, with an abstract interface, so that people can write generic-ish code that deals with Tensors, rather than having different data types.

Bonus points - even better would be to have the Tensor not surface a Tensor<T>, as that would defeat the reusability at that point. This would have the downside that operations would have to dynamically check for type compatibility.
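
A hedged sketch of that non-generic direction (all names hypothetical), showing the runtime type check it would require:

using System;

// Hypothetical sketch: a non-generic Tensor base class lets callers write
// element-type-agnostic code, at the cost of checking compatibility at runtime.
public abstract class Tensor
{
    public abstract Type ElementType { get; }

    public Tensor Add(Tensor other)
    {
        if (other.ElementType != ElementType)
            throw new InvalidOperationException(
                $"Cannot add a {other.ElementType} tensor to a {ElementType} tensor.");
        return AddSameType(other);
    }

    protected abstract Tensor AddSameType(Tensor other);
}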

Rethink SafeHandle

The current design would require us to build a special SafeHandle for each kind of type. This is required to properly dispose the object, so for each type that has a special deallocator, we would need to have a SafeHandle type for that particular deallocator.

Currently this is not done. We could either do this, or use IntPtr in its place, check for IntPtr.Zero, and throw ObjectDisposedException.
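
For illustration, the per-deallocator pattern would look roughly like the sketch below; the native library name and exact entry point are assumptions:

using System;
using System.Runtime.InteropServices;

// One SafeHandle subclass per native deallocator: each tensor kind whose
// native type has its own free function gets a handle type like this.
internal sealed class FloatTensorHandle : SafeHandle
{
    [DllImport("caffe2")]  // the library name is an assumption
    private static extern void THFloatTensor_free(IntPtr handle);

    public FloatTensorHandle() : base(IntPtr.Zero, ownsHandle: true) { }

    public override bool IsInvalid => handle == IntPtr.Zero;

    protected override bool ReleaseHandle()
    {
        THFloatTensor_free(handle);   // the type-specific deallocator
        return true;
    }
}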

Doc comments in NNSupport.cs are generating warnings.

NNSupport.cs(2274,50): warning CS1573: Parameter 'finput' has no matching param tag in the XML comment for 'DoubleTensor.unfolded_acc(DoubleTensor, DoubleTensor, int, int, int, int, int, int, int, int, int, int, int)' (but other parameters do) [.../TorchSharp/TorchSharp/TorchSharp.csproj]
NNSupport.cs(2299,51): warning CS1573: Parameter 'finput' has no matching param tag in the XML comment for 'DoubleTensor.unfolded_copy(DoubleTensor, DoubleTensor, int, int, int, int, int, int, int, int, int, int, int)' (but other parameters do) [.../TorchSharp/TorchSharp/TorchSharp.csproj]
NNSupport.cs(5675,49): warning CS1573: Parameter 'finput' has no matching param tag in the XML comment for 'FloatTensor.unfolded_acc(FloatTensor, FloatTensor, int, int, int, int, int, int, int, int, int, int, int)' (but other parameters do) [.../TorchSharp/TorchSharp/TorchSharp.csproj]
NNSupport.cs(5700,50): warning CS1573: Parameter 'finput' has no matching param tag in the XML comment for 'FloatTensor.unfolded_copy(FloatTensor, FloatTensor, int, int, int, int, int, int, int, int, int, int, int)' (but other parameters do) [.../TorchSharp/TorchSharp/TorchSharp.csproj]
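
The fix is mechanical: add a <param> tag for finput wherever the other parameters are already documented. A minimal sketch (the description text is an assumption; the real wording should come from the libtorch sources):

internal static class NNSupportDocExample
{
    /// <summary>Accumulates unfolded columns back into a tensor.</summary>
    /// <param name="finput">The unfolded (im2col) scratch buffer used by the convolution.</param>
    internal static void unfolded_acc(object finput) { }
}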

PyTorch ver. 1.0 and the C++ Frontend

I am not sure that this is the correct place to ask, but PyTorch version 1.0 has the C++ frontend (which allows consuming the whole variety of neural network modules, optimization algorithms, etc. in C++). Is there an easy way to automate a C# <-> C++ binding to it?

Basically, what I am trying to achieve is to use the high-level API of PyTorch in C#, like torch::nll_loss, torch::log_softmax, or torch::nn::Linear.
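
There is no fully automatic route; the usual pattern is a flat extern "C" layer over the C++ frontend plus DllImport on the managed side. A hedged sketch, with the library and entry-point names as assumptions:

using System;
using System.Runtime.InteropServices;

// The native side would export a flat C function over torch::nn::Linear, e.g.
//   extern "C" Tensor* THSNN_Linear_forward(Module* module, Tensor* input);
// and the managed side binds it like this:
internal static class NativeBindings
{
    [DllImport("LibTorchSharp")]  // library and entry-point names are assumptions
    internal static extern IntPtr THSNN_Linear_forward(IntPtr module, IntPtr input);
}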

Automatic generation of the C API

It should be possible to automatically generate part of the API from the Declarations.yaml file created by PyTorch at compile time. This project could be an interesting starting point, although it requires compiling PyTorch to generate the input YAML file.
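
A hedged sketch of the core loop such a generator might have, assuming YamlDotNet for parsing and a 'name' field in the file's schema (both assumptions):

using System;
using System.Collections.Generic;
using System.IO;
using YamlDotNet.Serialization;

// Sketch: read the generated YAML and emit one DllImport stub per declaration.
internal static class BindingGenerator
{
    internal static void Generate(string yamlPath, TextWriter output)
    {
        var deserializer = new DeserializerBuilder().Build();
        var declarations = deserializer.Deserialize<List<Dictionary<string, object>>>(
            File.ReadAllText(yamlPath));

        foreach (var decl in declarations)
        {
            var name = (string)decl["name"];
            output.WriteLine("[DllImport(\"caffe2\")]");  // library name is an assumption
            output.WriteLine($"internal static extern IntPtr TH_{name}(IntPtr args);");
        }
    }
}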

Fix doc strings

For a number of the functions on tensors, the doc strings are either incomplete or missing information. A systematic scrubbing needs to be done, consulting the libtorch source code and docs in order to complete the generated API.

Copyright notices

We should add an appropriate standard copyright notice to all the files.

Binding: THFile API

  • THDiskFile_bigEndianEncoding
  • THDiskFile_isBigEndianCPU
  • THDiskFile_isLittleEndianCPU
  • THDiskFile_littleEndianEncoding
  • THDiskFile_longSize
  • THDiskFile_name
  • THDiskFile_nativeEndianEncoding
  • THDiskFile_new
  • THDiskFile_noBuffer
  • THFile_ascii
  • THFile_autoSpacing
  • THFile_binary
  • THFile_clearError
  • THFile_close
  • THFile_free
  • THFile_hasError
  • THFile_isAutoSpacing
  • THFile_isBinary
  • THFile_isOpened
  • THFile_isQuiet
  • THFile_isReadable
  • THFile_isWritable
  • THFile_noAutoSpacing
  • THFile_pedantic
  • THFile_position
  • THFile_quiet
  • THFile_readByte
  • THFile_readByteRaw
  • THFile_readByteScalar
  • THFile_readChar
  • THFile_readCharRaw
  • THFile_readCharScalar
  • THFile_readDouble
  • THFile_readDoubleRaw
  • THFile_readDoubleScalar
  • THFile_readFloat
  • THFile_readFloatRaw
  • THFile_readFloatScalar
  • THFile_readHalf
  • THFile_readHalfRaw
  • THFile_readHalfScalar
  • THFile_readInt
  • THFile_readIntRaw
  • THFile_readIntScalar
  • THFile_readLong
  • THFile_readLongRaw
  • THFile_readLongScalar
  • THFile_readShort
  • THFile_readShortRaw
  • THFile_readShortScalar
  • THFile_readStringRaw
  • THFile_seek
  • THFile_seekEnd
  • THFile_synchronize
  • THFile_writeByte
  • THFile_writeByteRaw
  • THFile_writeByteScalar
  • THFile_writeChar
  • THFile_writeCharRaw
  • THFile_writeCharScalar
  • THFile_writeDouble
  • THFile_writeDoubleRaw
  • THFile_writeDoubleScalar
  • THFile_writeFloat
  • THFile_writeFloatRaw
  • THFile_writeFloatScalar
  • THFile_writeHalf
  • THFile_writeHalfRaw
  • THFile_writeHalfScalar
  • THFile_writeInt
  • THFile_writeIntRaw
  • THFile_writeIntScalar
  • THFile_writeLong
  • THFile_writeLongRaw
  • THFile_writeLongScalar
  • THFile_writeShort
  • THFile_writeShortRaw
  • THFile_writeShortScalar
  • THFile_writeStringRaw
  • THMemoryFile_longSize
  • THMemoryFile_new
  • THMemoryFile_newWithStorage
  • THMemoryFile_storage
  • THPipeFile_new

Tester/Program.cs app fails to compile

...with the following error:

Program.cs(18,5): error CS1501: No overload for method 'Random' takes 2 arguments [/mnt/c/Users/sergiym/devel/TorchSharp/Tester/Tester.csproj]

Will send a patch in a minute.

Discussion: SNT integration with TorchSharp

Given that PR #51 got merged, I am opening this issue to log some of the issues we were discussing over there. The main issue is probably whether TorchSharp should remain in its own assembly or get merged with SNT. One point in favor of it being its own assembly is that it looks like it can live with no other dependencies, and the mapping onto the native Torch layer is lean and simple. The main question, however, is: do we expect developers to use TorchSharp tensors directly, or will they always use System.Numerics tensors? If the latter, it would make more sense to merge the two projects, have TorchSharp tensors as internals, and unify the TorchSharp Storage class with NativeMemory so that we can avoid unnecessary objects.

dotnet build -c Release fails on Linux

It probably happens when we try to build a (multi-platform?) NuGet package. I get the following error:

/usr/share/dotnet/sdk/2.1.403/Sdks/NuGet.Build.Tasks.Pack/build/NuGet.Build.Tasks.Pack.targets(199,5): error : Could not find a part of the path '/home/motus/devel/TorchSharp/windows'. [/home/motus/devel/TorchSharp/TorchSharp/TorchSharp.csproj]

Observed on my Ubuntu 18.10 laptop with dotnet sdk 2.1.403.

Is there a way to build a release without creating a NuGet package? Alternatively, maybe we should bundle (or git submodule?) the libtorch libraries for all the platforms we build a NuGet for? @migueldeicaza, you probably already have the solution in mind...

Binding: TH.*Lapack APIs

  • THDoubleLapack_geev
  • THDoubleLapack_gels
  • THDoubleLapack_geqrf
  • THDoubleLapack_gesdd
  • THDoubleLapack_gesv
  • THDoubleLapack_getrf
  • THDoubleLapack_getri
  • THDoubleLapack_getrs
  • THDoubleLapack_orgqr
  • THDoubleLapack_ormqr
  • THDoubleLapack_potrf
  • THDoubleLapack_potri
  • THDoubleLapack_potrs
  • THDoubleLapack_pstrf
  • THDoubleLapack_syev
  • THDoubleLapack_trtrs
  • THFloatLapack_geev
  • THFloatLapack_gels
  • THFloatLapack_geqrf
  • THFloatLapack_gesdd
  • THFloatLapack_gesv
  • THFloatLapack_getrf
  • THFloatLapack_getri
  • THFloatLapack_getrs
  • THFloatLapack_orgqr
  • THFloatLapack_ormqr
  • THFloatLapack_potrf
  • THFloatLapack_potri
  • THFloatLapack_potrs
  • THFloatLapack_pstrf
  • THFloatLapack_syev
  • THFloatLapack_trtrs
  • THDoubleLapack_geev
  • THDoubleLapack_gels
  • THDoubleLapack_geqrf
  • THDoubleLapack_gesdd
  • THDoubleLapack_gesv
  • THDoubleLapack_getrf
  • THDoubleLapack_getri
  • THDoubleLapack_getrs
  • THDoubleLapack_orgqr
  • THDoubleLapack_ormqr
  • THDoubleLapack_potrf
  • THDoubleLapack_potri
  • THDoubleLapack_potrs
  • THDoubleLapack_pstrf
  • THDoubleLapack_syev
  • THDoubleLapack_trtrs
  • THFloatLapack_geev
  • THFloatLapack_gels
  • THFloatLapack_geqrf
  • THFloatLapack_gesdd
  • THFloatLapack_gesv
  • THFloatLapack_getrf
  • THFloatLapack_getri
  • THFloatLapack_getrs
  • THFloatLapack_orgqr
  • THFloatLapack_ormqr
  • THFloatLapack_potrf
  • THFloatLapack_potri
  • THFloatLapack_potrs
  • THFloatLapack_pstrf
  • THFloatLapack_syev
  • THFloatLapack_trtrs

Split Razor template in several smaller pieces

The TypeGeneration.tt file is very big and will cause unnecessary merge conflicts unless it is split up into pieces.

I suggest the following organization into separate files:

  1. The Storage subclass (btw, it shouldn't need qualification, since it's a nested class).
  2. The basics -- constructors, ToString(), indexer, HType subclass, accessors, various static factories, etc.
  3. Copy and Fill operations.
  4. Tensor manipulation -- resize, transpose, etc.
  5. Element-wise operations, unary operators.
  6. Linear algebra operators

Tester is a .NET Framework app

The Tester project is a .NET Framework app, which makes it difficult to test things, as libtorch isn't available on Windows yet.

Implement (TorchScript) model scoring

We need to be able to load a model (trained in Python/TorchScript) from a file and use it for scoring in .NET. For that, we have to create a wrapper for the libtorch C++ function

TORCH_API std::shared_ptr<script::Module> torch::jit::load(const std::string& filename);

and the method

IValue torch::jit::script::Module::forward(std::vector<IValue> inputs);

(plus some supporting classes and methods)
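
For reference, this is roughly the shape the wrapper later took in TorchSharp as torch.jit.load; treat the exact signature as an assumption and check the current docs ("model.pt" is a placeholder path):

using TorchSharp;

var module = torch.jit.load<torch.Tensor, torch.Tensor>("model.pt");
using var input = torch.randn(1, 3, 224, 224);
using var output = module.forward(input);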

Publish NuGet

The packaging of the NuGet is complete; now we need to use the Publish feature.

Needed:

  • Script to adjust the NuGet package version based on the branch name
  • Try it out :-)

Integrate with the CoreFX Tensor proposal

CoreFX is looking to expose a set of types that allow exchange and interop between the various tensor frameworks.

We now have an API proposal for what these types would look like here: dotnet/corefx#35765

It would be beneficial if TorchSharp could review the proposal to ensure it matches their expectations and raise any concerns before we take it to API review.

Merging LibTorchSharp in TorchSharp

Current situation:

@interesaaat has done quite a bit of work on his fork of TorchSharp. He has also created a separate repo with C/C++ code, LibTorchSharp, on which his fork of TorchSharp depends.

I have started integrating TorchSharp with ML.NET, with a torch scoring transformer (which takes pretrained models) and a torch trainer (which trains a user-defined model). This work depends on @interesaaat's fork.

Plan:

Given the scattered work, it seems like we should decide on a high-level plan for how to streamline our development efforts. Here is what I think our goal should be:

  1. Have a single TorchSharp repository which builds both the native components and the C# code; this essentially means merging @interesaaat's TorchSharp and LibTorchSharp repositories.
  2. Have an official build that produces a signed NuGet package and publishes it to some NuGet feed.
  3. Either as part of this NuGet package, or as part of a separate one, redistribute the libtorch DLLs that we use and publish them to a NuGet feed.

Expected outcome:

For the developers:

  • Access to the latest work by @interesaaat.
  • Simplification of future work involving both native and C# code, since both would be in the same repository (like adding new APIs or working on auto-generating the API).
  • On the ML.NET side, we would be able to take a dependency on the produced NuGet packages.

For the users:
Anyone who wants to use TorchSharp directly can use the NuGet packages from the feed instead of having to:

  • download libtorch and add it to the environment variables,
  • compile LibTorchSharp and add it to the environment variables,
  • download the TorchSharp NuGet package.

Execution (WIP):

I am working on these three points with @interesaaat in my fork. It's still a work in progress, but I am using ML.NET's build infrastructure to build both the native and the C# code, to have an official build that pushes to a feed, and as a model for the redistribution of libtorch (see Microsoft.ML.TensorFlow.Redist). I am currently able to build everything and run the tests. I have discussed the redistribution of the libtorch DLLs with CELA. I am still figuring out the best way to do that, but it is possible.

cc/ @interesaaat @migueldeicaza
