
nngraph's Introduction

THIS REPOSITORY IS DEPRECATED.

Please use https://github.com/torch/torch7

For install scripts, please look at: https://github.com/torch/ezinstall

Torch7 Library.

Torch7 provides a Matlab-like environment for state-of-the-art machine learning algorithms. It is easy to use and provides a very efficient implementation, thanks to an easy and fast scripting language (Lua) and an underlying C implementation.

In order to install Torch7 you can follow these simple instructions, but we suggest reading the detailed manual at http://www.torch.ch/manual/install/index

Requirements

  • C/C++ compiler
  • cmake
  • gnuplot
  • git

Optional

  • Readline
  • QT (QT4.8 is now supported)
  • CBLAS
  • LAPACK

Installation

$ git clone git://github.com/andresy/torch.git
$ cd torch
$ mkdir build
$ cd build

$ cmake .. 
OR
$ cmake .. -DCMAKE_INSTALL_PREFIX=/my/install/path

$ make install

Running

$ torch
Type help() for more info
Torch 7.0  Copyright (C) 2001-2011 Idiap, NEC Labs, NYU
Lua 5.1  Copyright (C) 1994-2008 Lua.org, PUC-Rio
t7> 

3rd Party Packages

Torch7 comes with a package manager based on LuaRocks, which makes it easy to install new packages:

$ torch-rocks install image
$ torch-rocks list
$ torch-rocks search --all

Documentation

The full documentation is installed in /my/install/path/share/torch/html/index.html

Also, http://www.torch.ch/manual/index points to the latest documentation of Torch7.

nngraph's People

Contributors

abursuc, adamlerer, akfidjeland, alvinlschua, andreaskoepf, apaszke, atcold, btnc, d11, dominikgrewe, fbesse, fidlej, hughperkins, iamalbert, ioannisantonoglou, jonathantompson, koraykv, leetaewoo, linusu, malcolmreynolds, rohanpadhye, schaul, soumith, yozw


nngraph's Issues

Speed reduction?

Couldn't find a mailing list for nngraph so I'll ask here. Is there any speed hit when using nngraph to specify a model over vanilla nn? Also, do these models port to the GPU like nn models do?
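A gModule is itself an nn.Module, so the standard type-conversion calls apply to it; a minimal sketch (assuming cutorch/cunn are installed, with made-up sizes):

require 'nngraph'
require 'cunn'

local x = nn.Identity()()
local g = nn.gModule({x}, {nn.Linear(10, 5)(x)})

g:cuda()                                   -- moves all parameters to the GPU
local out = g:forward(torch.randn(10):cuda())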

installation documentation

Hello,
It seems nobody else has trouble installing it, but I've tried on Linux and Mac and hit a similar problem on both: nngraph can't find graphviz.
nngraph is installed with:
luarocks install nngraph
and graphviz with:
sudo apt-get install graphviz -y
(and the equivalent brew command on OS X).

On Linux I get this error message:
/torch/install/share/lua/5.1/graph/graphviz.lua:145: attempt to call field 'gvContext' (a nil value)
On Mac it's a different message: it can't load libgvplugin_dot_layout.6.dylib.
Does the installation order matter? luarocks before graphviz?

best regards

split(1)

Hi,
Is it possible to have split(n) working with n=1? That would help in generalising part of a model.
Thanks

"Expecting only one start" error when using JoinTable

There seems to be a bug in the way I'm writing my LSTM architecture.

local LSTM = {}
function LSTM.create(input_size, input2_size, output_size, rnn_size)
    local inputs = {}
    local outputs = {}

    table.insert(inputs, nn.Identity()())
    table.insert(inputs, nn.Identity()())
    for L=1,2 do
        table.insert(inputs, nn.Identity()())
        table.insert(inputs, nn.Identity()())
    end

    local x, x_p, x2
    for L=1,2 do
        if L == 1 then
            x_p = inputs[1]
            input_size_L = input_size
        else 
            x = outputs[(L-2)*2]
            x2 = nn.LookupTable(input2_size+1, rnn_size)(inputs[2])
            x_p = nn.JoinTable(2)({x, x2})
            input_size_L = rnn_size*2
        end

        prev_c = inputs[L*2+1]
        prev_h = inputs[L*2+2]

        local i2h = nn.Linear(input_size_L, 4 * rnn_size)(x_p)
        local h2h = nn.Linear(rnn_size, 4 * rnn_size)(prev_h)
        local all_input_sums = nn.CAddTable()({i2h, h2h})

        local reshaped = nn.Reshape(4, rnn_size)(all_input_sums)
        local n1, n2, n3, n4 = nn.SplitTable(2)(reshaped):split(4)

        local in_gate = nn.Sigmoid()(n1)
        local forget_gate = nn.Sigmoid()(n2)
        local out_gate = nn.Sigmoid()(n3)

        local in_transform = nn.Tanh()(n4)

        local next_c           = nn.CAddTable()({
            nn.CMulTable()({forget_gate, prev_c}),
            nn.CMulTable()({in_gate,     in_transform})
          })

        local next_h = nn.CMulTable()({out_gate, nn.Tanh()(next_c)})

        table.insert(outputs, next_c)
        table.insert(outputs, next_h)
    end

    local last_h = outputs[#outputs]
    local proj = nn.Linear(rnn_size, output_size)(last_h)
    local logsoft = nn.LogSoftMax()(proj)
    table.insert(outputs, logsoft)

    return nn.gModule(inputs, outputs)
end

return LSTM

The problem seems to be when I try to use nn.JoinTable. What I'm trying to do here is send inputs[2] through a lookuptable and concatenate that (column wise) with the hidden state output of the previous layer.

The hidden state output will be: batchSize x rnnSize. The lookuptable output will be batchSize x rnnSize. If I concatenate these tensors, the new tensor will be batchSize x 2*rnnSize.

Regardless, I'm getting an "expecting only one start" error, which probably means I'm doing something fundamentally wrong.

Thanks!
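For reference, a minimal JoinTable graph that concatenates two batchSize x rnnSize tensors along dimension 2 does work in isolation (a sketch with made-up sizes):

require 'nngraph'

local a = nn.Identity()()
local b = nn.Identity()()
local joined = nn.JoinTable(2)({a, b})     -- batchSize x 2*rnnSize
local m = nn.gModule({a, b}, {joined})

local batchSize, rnnSize = 4, 8
local out = m:forward({torch.randn(batchSize, rnnSize),
                       torch.randn(batchSize, rnnSize)})
assert(out:size(2) == 2 * rnnSize)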

Crazy idea: make an nngraph 'optimizer'

Crazy idea: make an nngraph 'optimizer'.

I'm calling it crazy so I don't accidentally commit myself to doing something I then find won't work for reason x, y, z :-P Plus, it is a little way out there.

Thinking of creating an optimizer for nngraph that takes a graph as input and then culls edges where it can, e.g. maybe replacing nn.Narrow with torch.Narrow and removing the nn.Narrow node. Since I haven't actually implemented this yet, I don't know how possible this is going to be, and whether the obstacles I meet are merely challenging, or are weeks of work.

I'm also thinking of something more general, and more GPU-specific: implementing gmodule within clnn, walking the graph within clnn, and joining some of the nodes together during the forward/backward calls, e.g. replacing a Linear plus a Tanh by a single kernel that calls both, in one launch.

The overriding motivation for all of this is that on certain GPUs, I notice that the overhead of launching kernels, in char-rnn, appears to dominate the runtime. I haven't used any particular diagnostic tools to confirm this, and perhaps I should, but I notice that I can increase the data size 20 times whilst the execution time only increases by about 10-20%, which suggests to me that it's not a calc/data issue, but plausibly linked to kernel launches.

nodes should have parent link

Nodes should have a parent link. It would make certain transformations much easier (or rather, possible: the easiest way to achieve them is precisely to add the parents, or to create an associative map of nodes to parents).
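A sketch of the associative-map workaround mentioned above, built from a gModule's forwardnodes (assumes the graph package's node.children field):

-- given a gModule g, map each graph node to the list of its parents
local parents = {}
for _, node in ipairs(g.forwardnodes) do
   for _, child in ipairs(node.children) do
      parents[child] = parents[child] or {}
      table.insert(parents[child], node)
   end
end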

Wrong "Number of gradients do not match my graph" error

I have two outputs.
And I pass in two gradOutputs.
The nngraph complains:

#outputs   =1   
#gradients =2
nngraph/gmodule.lua:251: Number of gradients do not match my graph

Example code to reproduce the error:

local in1 = nn.Sigmoid()()
local splitTable = nn.SplitTable(1)({in1})
local module = nn.gModule({in1}, {splitTable})

local input = torch.randn(2, 3)
local output = module:forward(input)
assert(#output == 2, "we have two outputs")
module:backward(input, {torch.randn(3), torch.randn(3)})
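A common workaround (a sketch, not necessarily the intended API) is to declare the number of outputs explicitly with :split(), so the gModule registers two output nodes instead of one table-valued node:

local in1 = nn.Sigmoid()()
local out1, out2 = nn.SplitTable(1)(in1):split(2)
local module = nn.gModule({in1}, {out1, out2})

local input = torch.randn(2, 3)
module:forward(input)
module:backward(input, {torch.randn(3), torch.randn(3)})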

Removing Nodes

Is there an easy way to remove the last node or two from an architecture? I feel like this would be a fairly common operation, especially when adding a different classifier to fine-tune a model.
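gModules don't support removing nodes directly, as far as I know; one pattern that sidesteps the problem (a sketch with made-up sizes) is to build the trunk and the classifier as separate gModules, so the head can be swapped for fine-tuning:

local tin = nn.Identity()()
local trunk = nn.gModule({tin}, {nn.Tanh()(nn.Linear(10, 10)(tin))})

local cin = nn.Identity()()
local head = nn.gModule({cin}, {nn.LogSoftMax()(nn.Linear(10, 3)(cin))})

local model = nn.Sequential():add(trunk):add(head)
-- fine-tuning later: swap in a different classifier
-- model.modules[2] = newHead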

special handling of #input == 1

Hey,

I am curious to know why the case of #input == 1 is handled separately,
e.g.
https://github.com/torch/nngraph/blob/master/gmodule.lua#L195
and for the backward:
https://github.com/torch/nngraph/blob/master/gmodule.lua#L266
https://github.com/torch/nngraph/blob/master/gmodule.lua#L277

If I understand correctly, this also impacts split, which cannot be done on tables of size 1:
https://github.com/torch/nngraph/blob/master/node.lua#L50

Could you give me an idea why you made this choice?

thanks!

ronan

Input-free graph support?

I'd like to be able to build graphs that take no inputs during the forward pass, and build them cleanly, without unused bogus modules or having to pass placeholder bogus tensors... This example script shows what I mean...

Error when switching between float and cuda

Hi, consider the following example to illustrate this issue. It creates a gModule
and converts it to CUDA and back to a float.
After converting back to a float, I think some internal state isn't correctly updated
and I'm getting the error below.

I'm using the latest master branch of:

  • torch f62a95d3184fd730acdaa4754647b338d7686301
  • cutorch a7147d00e61a5e182a277995f5d1e99ec3bdf0f8
  • nn bc056eeb09f83aaba354d44b985b1819b6b6ee4a
  • cunn 3827fcd820d5d0d90cb37a443c403b47009cb7d4
  • nngraph d0c239b

require 'cutorch'
require 'nn'
require 'cunn'
require 'nngraph'
torch.manualSeed(1)
input = nn.Identity()()
L1 = nn.ReLU()(nn.Linear(3, 1)(input))
net = nn.Sequential()
net:add(L1)
g = nn.gModule({input}, {L1})
x = torch.randn(3)
g:forward(x)
g:cuda()
g:forward(x:cuda())
g:float()
g:forward(x)

Output

th> g:forward(x)
 0.1432
[torch.DoubleTensor of size 1]

                                                                      [0.0001s]
th> g:cuda()
nn.gModule
                                                                      [0.0596s]
th> g:forward(x:cuda())
 0.1432
[torch.CudaTensor of size 1]

                                                                      [0.0003s]
th> g:float()
nn.gModule
                                                                      [0.0004s]
th> g:forward(x)
/home/bamos/torch/install/share/lua/5.1/nn/Linear.lua:51: expected arguments: *FloatTensor~1D* [FloatTensor~1D] [float] FloatTensor~2D FloatTensor~1D | *FloatTensor~1D* float [FloatTensor~1D] float FloatTensor~2D FloatTensor~1D
stack traceback:
        [C]: in function 'addmv'
        /home/bamos/torch/install/share/lua/5.1/nn/Linear.lua:51: in function 'func'
        /home/bamos/torch/install/share/lua/5.1/nngraph/gmodule.lua:311: in function 'neteval'
        /home/bamos/torch/install/share/lua/5.1/nngraph/gmodule.lua:346: in function 'forward'
        [string "_RESULT={g:forward(x)}"]:1: in main chunk
        [C]: in function 'xpcall'
        /home/bamos/torch/install/share/lua/5.1/trepl/init.lua:630: in function 'repl'
        ...amos/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:185: in main chunk
        [C]: at 0x00406670

graphviz wrong number of arguments error

Running this (from the example)

require 'nngraph'

h1 = nn.Linear(20, 10)()
h2 = nn.Linear(10, 1)(nn.Tanh()(nn.Linear(10, 10)(nn.Tanh()(h1))))
mlp = nn.gModule({h1}, {h2})

x = torch.rand(20)
dx = torch.rand(1)
mlp:updateOutput(x)
mlp:updateGradInput(x, dx)
mlp:accGradParameters(x, dx)

-- draw graph (the forward graph, '.fg')
graph.dot(mlp.fg, 'MLP', '/home/yongfei')

Got this error.

.. /lua/5.1/graph/graphviz.lua:156: wrong number of arguments for function call

stack traceback:
    [C]: in function 'gvRender'
    ...e/yongfei/torch/install/share/lua/5.1/graph/graphviz.lua:156: in function 'graphvizFile'
    ...e/yongfei/torch/install/share/lua/5.1/graph/graphviz.lua:181: in function 'dot'
    train.lua:154: in main chunk
    [C]: in function 'dofile'
    ...gfei/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:133: in main chunk

What have I missed?

Any neat way to debug?

Is there any neat way to debug nngraph? Thanks!

Currently I write a Debug layer which is almost the same as the Identity layer, except that I can print whatever I want inside updateOutput() and updateGradInput().
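A sketch of such a Debug layer (an Identity that prints on both passes; the class name is made up):

local Debug, parent = torch.class('nn.Debug', 'nn.Identity')

function Debug:__init(name)
   parent.__init(self)
   self.name = name or 'debug'
end

function Debug:updateOutput(input)
   print(self.name .. ' forward:', input)
   return parent.updateOutput(self, input)
end

function Debug:updateGradInput(input, gradOutput)
   print(self.name .. ' backward:', gradOutput)
   return parent.updateGradInput(self, input, gradOutput)
end

nngraph also has a debug mode, nngraph.setDebug(true), which annotates the graph so that runtime errors point at the failing node.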

Can't install in local tree

I'm trying to install the nngraph rock on my university high performance computing cluster, on which I don't have root access. I tried to install it in the local tree with:

luarocks --local install nngraph

but the installation fails with the following error:

Install the project...
-- Install configuration: "Release"
-- Old export file "/software/torch/share/cmake/torch/TorchExports.cmake" will be replaced. Removing files [/software/torch/share/cmake/torch/TorchExports-release.cmake].
-- Installing: /software/torch/share/cmake/torch/TorchExports.cmake
CMake Error at cmake_install.cmake:48 (file):
file INSTALL cannot copy file
"/scratch/tmp/luarocks_torch-scm-1-5519/torch7/build/CMakeFiles/Export/share/cmake/torch/TorchExports.cmake"
to "/software/torch/share/cmake/torch/TorchExports.cmake".

Any suggestions?

"nesting.lua:36: bad argument #1 to 'resizeAs'"

The following script crashes with the error nesting.lua:36: bad argument #1 to 'resizeAs':

require 'nngraph'

x = nn.Identity()()
m1 = nn.Linear(5, 2)(x)
m2 = nn.Linear(5, 2)(x)
m = nn.JoinTable(1)({m1, m2})

g = nn.gModule({x}, {m})

input = torch.DoubleTensor(5):uniform()
gradOutput = torch.DoubleTensor(4):uniform()

g:forward(input)
g:updateGradInput(input, gradOutput)

g:float()
g:forward(input:float())
g:updateGradInput(input:float(), gradOutput:float())

Result:

$ th /norep/dev/issue-61.lua
/home/user/torch/install/bin/luajit: /home/user/torch/install/share/lua/5.1/nngraph/nesting.lua:36: bad argument #1 to 'resizeAs' (torch.DoubleTensor expected, got torch.FloatTensor)
stack traceback:
    [C]: in function 'resizeAs'
    /home/user/torch/install/share/lua/5.1/nngraph/nesting.lua:36: in function 'resizeNestedAs'
    /home/user/torch/install/share/lua/5.1/nngraph/gmodule.lua:14: in function 'getTotalGradOutput'
    /home/user/torch/install/share/lua/5.1/nngraph/gmodule.lua:300: in function 'neteval'
    /home/user/torch/install/share/lua/5.1/nngraph/gmodule.lua:346: in function 'updateGradInput'
    /norep/dev/issue-61.lua:18: in main chunk
    [C]: in function 'dofile'
    ...user/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
    [C]: at 0x00406670

Example: criteria

Would be nice to have an example for how to use multiple criteria on a network graph (e.g. MSE on the last layer, and L1 on the activations of the first hidden layer).
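One way to do this (a sketch with made-up sizes and weight lambda) is to expose the hidden activations as a second graph output and combine the criteria outside the graph:

local input = nn.Identity()()
local h1 = nn.Tanh()(nn.Linear(10, 5)(input))
local out = nn.Linear(5, 1)(h1)
local net = nn.gModule({input}, {out, h1})

local mse, l1, lambda = nn.MSECriterion(), nn.AbsCriterion(), 0.1
local x, target, zeros = torch.randn(10), torch.randn(1), torch.zeros(5)

local o = net:forward(x)
local loss = mse:forward(o[1], target) + lambda * l1:forward(o[2], zeros)
net:backward(x, {mse:backward(o[1], target),
                 l1:backward(o[2], zeros):mul(lambda)})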

Is it possible to get access to intermediate node in nngraph?

Sorry, I didn't find related documentation. I would also apologize if GitHub issues is not a good place to ask such questions (e.g. if there is a Google group for such discussions).

What I'd like to do is something similar to:

m = nn.Sequential()
m:add(nn.Linear(5,5))
m:get(1)

I have an rnn.GRU module in my gModule, and I sometimes have to manually call the forget() function and change self.userPrevOutput. So it would be great if I could get access to the intermediate node. Otherwise, I have to move rnn.GRU out of the gModule :(

Thanks in advance!
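One way (a sketch; the rnn.GRU constructor arguments are assumptions) is to tag the node with :annotate{} when building the graph and then search the gModule's forwardnodes for it:

local input = nn.Identity()()
local gru = rnn.GRU(10, 10)(input):annotate{name = 'gru'}
local gmod = nn.gModule({input}, {gru})

for _, node in ipairs(gmod.forwardnodes) do
   if node.data.annotations.name == 'gru' then
      node.data.module:forget()     -- the underlying rnn.GRU instance
   end
end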

Display gradOutput

How do you get graphviz to display gradOutput?

It doesn't display automatically on my backward graphs.

system_backward_graph

I've done something really stupid, and I can't backprop ??? 👎

Qtsvg problem

Hi

I can't manage to get this package to work. The error comes up when require 'qtsvg' is being used.
The error is as follows:

require 'qtsvg'
/usr/local/share/lua/5.1/qtsvg/init.lua:2: module 'qt' not found:No LuaRocks module found for qt
no field package.preload['qt']
no file '/usr/local/share/lua/5.1/qt.lua'
no file '/usr/local/share/lua/5.1/qt/init.lua'
no file '/home/siavash/.luarocks/share/lua/5.1/qt.lua'
no file '/home/siavash/.luarocks/share/lua/5.1/qt/init.lua'
no file '/usr/local/share/lua/5.1/qt.lua'
no file '/usr/local/share/lua/5.1/qt/init.lua'
no file '/home/siavash/.luarocks/share/lua/5.1/qt.lua'
no file '/home/siavash/.luarocks/share/lua/5.1/qt/init.lua'
no file '/usr/local/share/lua/5.1/qt.lua'
no file '/usr/local/share/lua/5.1/qt/init.lua'
no file './qt.lua'
no file '/usr/local/share/luajit-2.0.2/qt.lua'
no file '/usr/local/share/lua/5.1/qt.lua'
no file '/usr/local/share/lua/5.1/qt/init.lua'
no file '/usr/local/lib/lua/5.1/qt.so'
no file '/home/siavash/.luarocks/lib/lua/5.1/qt.so'
no file '/usr/local/lib/lua/5.1/qt.so'
no file '/home/siavash/.luarocks/lib/lua/5.1/qt.so'
no file './qt.so'
no file '/usr/local/lib/lua/5.1/qt.so'
no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
[C]: in function 'require'
/usr/local/share/lua/5.1/qtsvg/init.lua:2: in main chunk
[C]: in function 'f'
[string "local f = function() return require 'qtsvg' e..."]:1: in main chunk
[C]: in function 'xpcall'
/usr/local/share/lua/5.1/trepl/init.lua:553: in function </usr/local/share/lua/5.1/trepl/init.lua:489>

What could be the problem?

Example of an nngraph module that has multiple non-identical outputs

I'm writing an nngraph module with multiple non-identical outputs. Do we have any existing examples of modules which do this? As I remember, it is a principle of nngraph in general that a module can only produce one unique output?

Update: line 31 of gmodule.lua asserts that each node can only have one output https://github.com/torch/nngraph/blob/master/gmodule.lua#L31 , so I guess the answer is: no, it would need modification of gmodule, and socialization of why one might want to do that

=> closing, for now

failing local install

Hi,

I am trying to install nngraph locally using $ luarocks install --local nngraph. I was surprised to see that it said I had missing dependencies for nngraph, even though I could seemingly import them with require.

Missing dependencies for nngraph:
torch >= 7.0
nn
graph

After trying to build I end up with the following message.

Install the project...
-- Install configuration: "Release"
-- Installing: /usr/share/torch7-dist/install/share/cmake/torch/TorchExports.cmake
CMake Error at cmake_install.cmake:48 (FILE):
 file INSTALL cannot copy file
 "/tmp/luarocks_torch-scm-1-446/torch7/build/CMakeFiles/Export/share/cmake/torch/TorchExports.cmake"
 to "/usr/share/torch7-dist/install/share/cmake/torch/TorchExports.cmake".
make: *** [install] Error 1

Error: Failed installing dependency: https://raw.githubusercontent.com/torch/rocks/master/torch-scm-1.rockspec - Build error: Failed installing.

Would appreciate any help.

CAddTable and nngraph inconsistency: bad argument #1 to 'resizeAs'

nn.CAddTable, when used with nngraph, throws a strange error when given a table of size 1.

Here's a minimal test case:

require 'nn'
require 'nngraph'

N = 1
input = nn.Identity()()

hiddens = {}
for i = 1, N do
    table.insert(hiddens, nn.Identity()(input))
end
output = nn.CAddTable()(hiddens)
net = nn.gModule({input}, {output})
net:forward(torch.Tensor(1))

If you change N to be greater than 1, this works fine. For N = 1, it throws the error

...lua/5.1/nn/CAddTable.lua:10: bad argument #1 to 'resizeAs' (torch.DoubleTensor expected, got userdata)
stack traceback:
        [C]: in function 'resizeAs'
        ...lua/5.1/nn/CAddTable.lua:10: in function 'func'
        ...lua/5.1/nngraph/gmodule.lua:253: in function 'neteval'
        ...lua/5.1/nngraph/gmodule.lua:288: in function 'forward'

This seems like undesirable / unexpected behaviour so I thought I'd bring it up, though maybe it is an intentional consequence of #46.

Recurrence with split

Hi guys,

This crashes :

require 'nngraph';

n1 = 3
n2 = 4
n3 = 3

x1 = nn.Identity()()
x23 = nn.Identity()()
x2,x3 = x23:split(2)
z  = nn.JoinTable(1)({x1,x2,x3})
y1 = nn.Linear(n1+n2+n3,n2)(z)
y2 = nn.Linear(n1+n2+n3,n3)(z)
m = nn.gModule({x1,x23},{y1,y2})

input = {torch.randn(n1), {torch.randn(n2), torch.randn(n3)}}
output = m:forward(input)
print(output)
print(input)
input[2] = output
print(input)
m:forward(input)

The error:

/usr/local/bin/luajit: /usr/local/share/lua/5.1/nngraph/gmodule.lua:314: split(2) cannot split 0 outputs
stack traceback:
    [C]: in function 'error'
    /usr/local/share/lua/5.1/nngraph/gmodule.lua:314: in function 'neteval'
    /usr/local/share/lua/5.1/nngraph/gmodule.lua:346: in function 'forward'
    issues/issue172.lua:22: in main chunk
    [C]: in function 'dofile'
    /usr/local/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
    [C]: at 0x00405e60

Basically, the issue happens when feeding back outputs of a previous forward as inputs to the next forward to a gModule using split.

Inconsistent behaviour vs nn.Module: nil output

Just wanted to point out that gModules do not configure their output by default (before the call to forward), unlike nn.Module which set "self.output = torch.Tensor()". This can be an unfortunate cause of bugs when nn.Modules are swapped out for gModules. See code below for example.

require 'nn'
require 'nngraph'

local mod = nn.Module()
print(mod.output)

local input = nn.Identity()()
local output = nn.Linear(10,20)(input)
local gmod = nn.gModule({input},{output})
print(gmod.output)

which outputs

[torch.DoubleTensor with no dimension]

nil

questions on recursively traversing every node in nngraph

What if I build a model named my_module using nngraph submodules? That is, in the model, besides plain nodes, there is also an nngraph-module node. However, my_module.forwardnodes will not return the nodes inside the nngraph submodule for me to access. I tried to solve it by recursively traversing the table of nodes. Is there some clean way to do that? I hope someone can give me tips.
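A sketch of such a recursive traversal (assuming nested gModules expose their inner modules through node.data.module):

-- apply fn to every leaf module, descending into nested gModules
local function visit(m, fn)
   if torch.isTypeOf(m, 'nn.gModule') then
      for _, node in ipairs(m.forwardnodes) do
         if node.data.module then visit(node.data.module, fn) end
      end
   else
      fn(m)
   end
end

visit(my_module, function(m) print(torch.type(m)) end)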

Inconsistent Table Behavior

I noticed that feeding tables into nngraph nodes has inconsistent behavior. If I feed a node a table with multiple elements it treats the table as a table, however, if I feed it a table with a single element, it pops that element. Is this intentional? Here's a minimal example showing how it can cause trouble:

The module:

id1 = nn.Identity()()
id2 = nn.Identity()()
j = nn.JoinTable(1)({id1, id2})
net = nn.gModule({id1, id2}, {j})

behaves as expected when the input is a pair of tensors, producing their join. However:

id = nn.Identity()()
j = nn.JoinTable(1)({id})
net = nn.gModule({id}, {j})

produces an error in JoinTable (the input is not a table) because it pops the value of id from the table before feeding it to JoinTable. I would like it to feed {id.output} into JoinTable in the latter case, just as it feeds {id1.output, id2.output} in the former.

id should be on the data, so invariant

Node ids should be on the data rather than on the node, so that they are invariant.

E.g., currently node ids in fg and node ids in bg are different, I think?

dlsym(RTLD_DEFAULT, _fopen): symbol not found

Hi,

I'm trying to plot the graph for an NN, in debug mode, but I get this error:

.../torch/install/share/lua/5.1/graph/graphviz.lua:167: dlsym(RTLD_DEFAULT, _fopen): symbol not found
stack traceback:
    [C]: in function '__index'
    .../torch/install/share/lua/5.1/graph/graphviz.lua:167: in function 'graphvizFile'
    ...e/torch/install/share/lua/5.1/graph/graphviz.lua:195: in function 'dot'
    ./Source/NN.lua:125: in function 'opfunc'

The nn is this:
question = nn.Identity()()
local target = nn.Identity()(question)
self.model = nn.gModule({question}, {target})

The graph does plot when I use more complicated NNs (with two inputs, for example).

I installed graphviz through brew, nngraph through luarocks.
I am on OS X El Capitan v 10.11.4

type casting?

How do you guys do type casting? The current :type() method seems very partial; running the following code prints out lots of DoubleTensors.

setprintlevel(20)

require 'nngraph'

local i1 = nn.Identity()()
local i2 = nn.Identity()()

local o1 = nn.Tanh()( nn.Linear(10,10)(i1) )
local o2 = nn.Tanh()( nn.Linear(10,10)(i2) )

local m = nn.gModule({i1,i2}, {o1,o2})

m:forward({torch.randn(10), torch.randn(10)})
m:backward({torch.randn(10), torch.randn(10)}, {torch.randn(10), torch.randn(10)})

m:float()

print{m}

Wrong gradInput layout

In the following example, the gradInput layout should be:

{
  1 : DoubleTensor - size: 3
  2 : 
    {
      1 : DoubleTensor - size: 3
      2 : DoubleTensor - size: 3
    }
}

The obtained wrong gradInput layout is:

{
  1 : DoubleTensor - size: 3
  2 : 
    {
      1 : 
        {
          1 : DoubleTensor - size: 3
          2 : DoubleTensor - size: 3
        }
      2 : DoubleTensor - size: 3
    }
}

It happens when:

  1. An input is obtained by split().
  2. And the output of the split is used multiple times in the graph.

Code to reproduce the problem:

local xInput = torch.randn(3)
local h = torch.randn(3)

local x = nn.Identity()()
local prevRnnState = nngraph.Node({input={}})
local prevH1, prevCell = prevRnnState:split(2)
local prevH = prevH1
-- Wrapping the input by Identity() avoids the problem.
--local prevH = nn.Identity()(prevH1)

local cellOut = nn.CAddTable()({
        nn.CMulTable()({x, prevH}),
        nn.CMulTable()({prevH, prevCell})})
local module = nn.gModule({x, prevRnnState}, {cellOut})

local c = torch.randn(h:size())
local prevRnnState = {h, c}
local output = module:forward({xInput, prevRnnState})

local gradOutput = torch.randn(h:size())
local gradInput = module:backward({xInput, prevRnnState}, gradOutput)
local gradX, gradPrevState = unpack(gradInput)
local gradPrevH, gradPrevCell = unpack(gradPrevState)
assert(type(gradPrevH) == type(h), "wrong gradPrevH type")

Using a subset of outputs

Is there a way to pass a subset of outputs to the next module?
For example, nn.SplitTable() can have 2 outputs.
And I want to send output[1] to nn.Linear and output[2] to nn.Tanh.
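Yes: :split() turns each element of the table into its own node, and each node can feed a different module. A sketch with made-up sizes:

local input = nn.Identity()()
local a, b = nn.SplitTable(1)(input):split(2)
local out1 = nn.Linear(5, 3)(a)
local out2 = nn.Tanh()(b)
local net = nn.gModule({input}, {out1, out2})

net:forward(torch.randn(2, 5))   -- SplitTable(1) yields two 5-vectors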

Switched gradInput

I see that the gradInput returned from a gModule has wrong elements. The expected gradInput is a table. The shapes of the elements in the table seem to be switched: the shapes should be (2) and (5), but the obtained shapes are (5) and (2).

Am I using the nngraph incorrectly?

Code to reproduce the problem:

require 'nn'
require 'nngraph'

local getInput1 = nn.Identity()()
local getInput2 = nn.Identity()()
local mlp = nn.Tanh()(getInput1)
local net = nn.gModule({getInput1, getInput2}, {mlp, getInput2})


local input1 = torch.randn(2)
local input2 = torch.randn(5)

net:forward({input1, input2})
local gradInput = net:backward({input1, input2},
    {torch.randn(input1:size()), torch.randn(input2:size())})
print("gradInput[1]:", gradInput[1])
print("gradInput[2]:", gradInput[2])
assert(gradInput[1]:nElement() == input1:nElement(), "size mismatch")

Possible mistake in backward

Hi, in many situations we don't need to compute the derivative with respect to the input, so for optimization it is a good idea to write
....updateGradInput = function(input) return end
In the container nn.Sequential this works.
In gModule it doesn't.
But if you run your model once with the standard updateGradInput function and then write updateGradInput = function(input) return end, everything will be fine.

Nesting graphs

From my current experimentation it appears that nesting graphs within graphs (using the same __call__ syntax) is not yet possible. This is a nice-to-have, but not urgent.

can't get graphviz work on mac

When I try to use "graph.dot" on mac, it gives following error:

Warning: Could not load "/usr/local/Cellar/graphviz/2.38.0/lib/graphviz/libgvplugin_dot_layout.6.dylib" - file not found
Error: Layout type: "dot" not recognized. Use one of: circo dot fdp neato nop nop1 nop2 osage patchwork sfdp twopi
/Users/luo123n/torch/install/bin/luajit: ...s/luo123n/torch/install/share/lua/5.1/graph/graphviz.lua:135: graphviz layout failed
stack traceback:
    [C]: in function 'assert'
    ...s/luo123n/torch/install/share/lua/5.1/graph/graphviz.lua:135: in function 'graphvizFile'
    ...s/luo123n/torch/install/share/lua/5.1/graph/graphviz.lua:162: in function 'dot'
    test.lua:23: in main chunk
    [C]: in function 'dofile'
    ...123n/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
    [C]: at 0x010e387330

I can find similar issues using Google, but there seems to be no solution yet.

attempt to call field 'gvContext' (a nil value)

On Ubuntu 14.04, to get nngraph's graphviz support to work, I need to change lines 29 and 30 of graphviz.lua to:

local graphvizOk, graphviz = pcall(function() return ffi.load('libgvc.so.6') end)
local cgraphOk, cgraph = pcall(function() return ffi.load('libcgraph.so.6') end)

(i.e. adding the suffix '.6' to each of the filenames)

(I'm not sure how to do this portably though, hence putting this as an issue rather than a PR.)
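A sketch of a more portable variant, trying several sonames in order (the exact names are assumptions and vary across distributions):

local ffi = require 'ffi'

local function tryLoad(names)
   for _, name in ipairs(names) do
      local ok, lib = pcall(function() return ffi.load(name) end)
      if ok then return lib end
   end
   return nil
end

local graphviz = tryLoad{'gvc', 'libgvc.so.6', 'libgvc.so'}
local cgraph   = tryLoad{'cgraph', 'libcgraph.so.6', 'libcgraph.so'}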

how to skip some nodes ?

Hey guys,

suppose I create a gModule with two inputs, A and B.
Now A will be the training example, and B will be some randomly sampled negative example.
During the forward pass I want the full calculation to be done only once; from the second time on, only B will change and A will stay fixed, so the calculation related to A does not need to be done again.

So is there some example showing how to skip some nodes in the nngraph during the forward pass?
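nngraph has no node-skipping mechanism that I know of; one workaround (a sketch, with placeholder tensors A and negatives) is to factor the A branch into its own gModule, run it once, and feed the cached features into the rest of the graph:

-- branch for A, computed once
local aIn = nn.Identity()()
local aNet = nn.gModule({aIn}, {nn.Tanh()(nn.Linear(10, 10)(aIn))})

-- the rest of the model takes the cached A features plus B
local fIn, bIn = nn.Identity()(), nn.Identity()()
local scorer = nn.gModule({fIn, bIn},
                          {nn.CMulTable()({fIn, nn.Linear(10, 10)(bIn)})})

local aFeat = aNet:forward(A):clone()       -- cache the A branch once
for _, B in ipairs(negatives) do
   local score = scorer:forward({aFeat, B}) -- only the B branch recomputes
end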

Error when using graph.dot

I'm following the README.md file and when I get around to running:

graph.dot(mlp.fg, 'MLP')

from the first example (two hidden layers MLP), I get the following error:

...l/Cellar/torch/HEAD/share/lua/5.1/graph/graphviz.lua:162: attempt to index local 'g' (a nil value)
stack traceback:
    ...local/Cellar/torch/HEAD/share/lua/5.1/trepl/init.lua:501: in function <...local/Cellar/torch/HEAD/share/lua/5.1/trepl/init.lua:494>
    ...l/Cellar/torch/HEAD/share/lua/5.1/graph/graphviz.lua:162: in function 'graphvizFile'
    ...l/Cellar/torch/HEAD/share/lua/5.1/graph/graphviz.lua:195: in function 'dot'
    [string "_RESULT={graph.dot(mlp.fg, 'MLP')}"]:1: in main chunk
    [C]: in function 'xpcall'
    ...local/Cellar/torch/HEAD/share/lua/5.1/trepl/init.lua:651: in function 'repl'
    ...lar/torch/HEAD/lib/luarocks/rocks/trepl/scm-1/bin/th:199: in main chunk
    [C]: ?

I'm running this from trepl on a mac. I have already run brew install graphviz (it installed version 2.38.0) and luarocks install graph (as suggested at the end of this thread) successfully. I haven't been able to find anything on this error online either. How do I get around this error?

Should users use nngraph.Node()?

Is it intended to use nngraph.Node({input={}}) instead of nn.Identity()?

The following example works only when using nngraph.Node instead of nn.Identity.

local in1 = nn.Identity()()
local in2 = nn.Identity()()
-- The nngraph.Node() is needed for the example to work.
--local in2 = nngraph.Node({input={}})
local prevH, prevCell = in2:split(2)

local out1 = nn.CMulTable()({in1, prevH, prevCell})
local module = nn.gModule({in1, in2}, {out1})

local input = {torch.randn(3), {torch.randn(3), torch.randn(3)}}
module:forward(input)
local gradInput = module:backward(input, torch.randn(3))
assert(type(gradInput[2]) == "table", "wrong gradInput[2] type")
