
torch's Introduction

DISCONTINUATION OF PROJECT.

This project will no longer be maintained by Intel.

Intel has ceased development of and contributions to this project, including, but not limited to, maintenance, bug fixes, new releases, and updates.

Intel no longer accepts patches to this project.

If you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.

Torch*

Torch is a scientific computing framework with wide support for machine learning algorithms. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.
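
For readers new to Torch, the snippet below is a minimal, illustrative sketch of the Lua tensor API, run in the standard th REPL that ships with the Torch distribution; it is a sketch only and is not specific to this fork.

-- Minimal Torch sketch: create two random matrices and multiply them.
require 'torch'

local a = torch.rand(3, 4)    -- 3x4 matrix of uniform random numbers
local b = torch.rand(4, 5)
local c = torch.mm(a, b)      -- matrix product, dispatched to the BLAS backend
print(c:size())               -- prints the size: 3 5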

Intel® Software Optimization for Torch

This fork is dedicated to improving Torch performance when running on CPUs, in particular Intel® Xeon processors (HSW, BDW, Xeon Phi).

Requirements

If you are a root user, use this command to install OpenBLAS and the other dependencies:

bash install-deps

If you are not a root user, use this command to build OpenBLAS from source; the script also adds OpenBLAS to LD_LIBRARY_PATH automatically.

. ./install-openblas.sh

Building

Installing this repo installs the Torch distribution along with a number of useful packages.

You can specify which compiler is used to build the project; the default is gcc/g++.

git clone https://github.com/intel/torch.git ~/torch
cd ~/torch; bash install-deps;
./install.sh        #use gcc to install torch
./install.sh icc  #use icc to install torch

By default, Torch installs LuaJIT 2.1. If you want a different Lua version, use one of the following commands:

TORCH_LUA_VERSION=LUA51 ./install.sh
TORCH_LUA_VERSION=LUA52 ./install.sh

Cleaning

To remove all the temporary compilation files you can run:

./clean.sh

Test

You can test that all libraries are installed properly by running:

./test.sh

Tested on Ubuntu 14.04, CentOS 7.
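
Beyond the test suite, a rough timing of a large matrix multiply in the th REPL can help confirm that the optimized BLAS library is actually being picked up. This is only an informal sketch; the matrix size is arbitrary and timings will vary by machine.

-- Informal BLAS sanity check: time a large double-precision matrix multiply.
require 'torch'

local n = 2048
local a = torch.rand(n, n)
local b = torch.rand(n, n)

local timer = torch.Timer()
local c = torch.mm(a, b)      -- dgemm, dispatched to OpenBLAS or MKL
print(string.format('%d x %d matrix multiply took %.3f s', n, n, timer:time().real))

With a multi-threaded OpenBLAS or MKL build, this should typically finish in well under a second on a recent Xeon; a much slower result may indicate that a reference BLAS is being linked instead.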

More build options for install.sh

./install.sh [gcc] [avx512] [mklml] [noskip]
  • icc/gcc, default gcc.
  • avx512/off, default off. avx512 forces the compiler (GCC version must be greater than 4.9.2) to use AVX512F instructions when building the framework.
  • mklml/mkl, default mkl.
  • noskip/skip, default noskip. skip means skip the OpenBLAS check.

If you want to use MKL as the default BLAS library, source the MKL environment script before running install.sh:

source /opt/intel/mkl/bin/mklvars.sh intel64

* Other names and trademarks may be claimed as the property of others.

torch's People

Contributors

alband, alexbw, apaszke, borisfom, btnc, fidergo-stephane-gourichon, howard0su, hughperkins, jdonald, linusu, luoq, marioyc, mlwoo, mnogu, nagadomi, nicolasvasilache, nithishdivakar, notastudio, perweij, quin47, rdower, rsnk96, rufflewind, sayakbiswas, soumith, szagoruyko, techraf, vitorgalvao, xhzhao, zhenyang12121733


torch's Issues

lua <-> torch dependency error

After installing the MKL branch of the intel/torch implementation, I am unable to install the nn Lua rock:

luarocks install nn

Here's the error that I get:

Cloning into 'nn'...
/tmp/luarocks_nn-scm-1-7871/nn/lib/THNN/generic/Abs.c: In function 'THNN_FloatAbs_updateGradInput':
/tmp/luarocks_nn-scm-1-7871/nn/lib/THNN/init.c:31:5: error: unknown type name 'ptrdiff_t'
     ptrdiff_t n1 = THTensor_(nElement)(I1);     \
     ^
/tmp/luarocks_nn-scm-1-7871/nn/lib/THNN/generic/Abs.c:20:3: note: in expansion of macro 'THNN_CHECK_NELEMENT'
   THNN_CHECK_NELEMENT(input, gradOutput);
   ^
/tmp/luarocks_nn-scm-1-7871/nn/lib/THNN/init.c:32:5: error: unknown type name 'ptrdiff_t'
     ptrdiff_t n2 = THTensor_(nElement)(I2);                                 \
     ^

The solution, according to multiple sources posted around the time the code here was first pushed, is to upgrade to the latest Torch; the explanation given is a broken dependency between Torch and this Lua rock.

When is the next drop of intel/torch expected?

Indexing Failure BUG

I've found a bug in the index function that corrupts data whenever a Tensor is indexed along any dimension other than the first.

-- Create random data and random slice indices
x = torch.rand(2000,20000)+3
ind = torch.randperm(x:size(2))[{{1,10}}]:long()

-- Slice x along dimension 2 using the supplied indices in ind
sx = x:index(2,ind)

print("Original Range: " .. x:min() .. "," .. x:max())
print("Slice Range: " .. sx:min() .. "," .. sx:max())

The original range should report 3,4, while the sliced range reports an invalid range of roughly 0,~4 (the upper and lower bounds vary due to randomness, but the minimum should never be below 3).

This indicates that the internal data is invalid when using the index function. It certainly has something to do with the size of the matrix itself, since you get perfectly fine copies for Tensors of size [200,10]:

x = torch.rand(200,10)+3
ind = torch.LongTensor{1,2,3,4,5,6,7,8,9,10} -- Select all indices in order
sx = x:index(2,ind)
print("Matching: ".. x:eq(sx):sum() .. "/" .. x:nElement())

This produces Matching: 2000/2000; the critical point seems to be when the first dimension exceeds 720.

A matrix of size [720,10] with the above test produces Matching: 7200/7200, but a matrix of size [721,10] produces Matching: 129/7210, which is clearly incorrect.

I'd really appreciate any insights into this error.
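
One possible workaround while the root cause is investigated is to avoid index(2, ind) on large tensors and copy the requested columns explicitly. The helper below is only a sketch built from standard torch7 calls (new, select, copy); it is not a fix for the underlying bug.

-- Hypothetical workaround: gather columns one at a time instead of calling index(2, ind).
local function indexDim2(x, ind)
  local out = x.new(x:size(1), ind:size(1))   -- same tensor type as x
  for j = 1, ind:size(1) do
    out:select(2, j):copy(x:select(2, ind[j]))
  end
  return out
end

sx = indexDim2(x, ind)
print("Slice Range (workaround): " .. sx:min() .. "," .. sx:max())

For the test case above, the workaround should report a slice range within 3,4.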

Binary file incompatibility?

Hi, thank you for the great work. It runs faster than the original Torch on Xeon.

However, I'm having trouble loading trained .t7 files. These files were trained with GPU Torch but converted for CPU. The original CPU-only Torch (without GPGPU support) loads them without problems, but Intel Torch shows the following error message.

$ th ...
| loading model file...
/home/.../torch/inteltorch/install/bin/lua: .../torch/inteltorch/install/share/lua/5.2/torch/File.lua:301: Failed to load function from bytecode: binary string: not a precompiled chunk
stack traceback:
        [C]: in function 'error'
        .../torch/inteltorch/install/share/lua/5.2/torch/File.lua:301: in function 'readObject'
        .../torch/inteltorch/install/share/lua/5.2/torch/File.lua:369: in function 'readObject'

Any idea? Thanks.
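
For what it's worth, the traceback above points at install/share/lua/5.2, i.e. this build appears to be running plain Lua 5.2 rather than LuaJIT, and plain Lua cannot load function bytecode that was serialized under LuaJIT. A quick way to confirm which VM the install is using (a sketch, not a fix):

-- Prints the running VM: LuaJIT reports e.g. "LuaJIT 2.1.0-beta3", plain Lua reports "Lua 5.2".
print(jit and jit.version or _VERSION)

If it reports Lua 5.2, reinstalling with the default LuaJIT 2.1 (see the Building section above), or re-serializing the model on a matching VM, should avoid the bytecode mismatch.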

Installing the latest MKL version

Following up on this instruction:

Be sure you have installed the latest MKL version: parallel_studio_xe_2017.

I can't find a free download link; I do see that Parallel Studio is a commercial product.
