
theanets's People

Contributors

ebenolson, john-a-m, jurcicek, kaikaun, kastnerkyle, lmjohns3, majidaldo, mhr, qyouurcs, reyoung, saromanov, talbaumel, yoavg

theanets's Issues

Pre-training and fine-tuning deep autoencoders

Dear Theano-nets users,

Thanks to your support, I have been able to make slow but steady progress in my experiments with neural networks.

Now I'm trying to work with deep autoencoders, starting by pre-training stacked autoencoders and then fine-tuning the resulting network with a linear regression layer on top.

Theano-nets ships with a deep-autoencoder sample code. However, I am not sure whether the pre-training and fine-tuning steps are performed in that code or whether they have to be written explicitly (a sketch of the workflow I mean follows below).

Also, is there any option to train deep autoencoders and use the resulting network with the provided recurrent-neural-network code?
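
For concreteness, here is a minimal sketch of the workflow I have in mind (the layerwise-then-sgd trainer combination is borrowed from other threads here, so I am not sure it is the intended API; train_set and valid_set stand for your own arrays):

import theanets

# Sketch: greedy layerwise pre-training followed by whole-network fine-tuning.
e = theanets.Experiment(theanets.Autoencoder,
                        layers=(784, 1000, 500, 250, 500, 1000, 784))
e.train(train_set, valid_set, optimize='layerwise')  # pre-train layer by layer
e.train(train_set, valid_set, optimize='sgd')        # fine-tune the whole stack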

Again, thank you very much for your help.

H.R.

TypeError

Hi,
Using the dev version of theano-nets, pulled today, I get the following TypeError:

Traceback (most recent call last):
  File "/Users/davegreenwood/Desktop/ttest.py", line 3, in <module>
    e = theanets.Classifier(layers = [5,5,5])
  File "/Users/davegreenwood/_git/theano-nets/theanets/feedforward.py", line 775, in __init__
    super(Classifier, self).__init__(**kwargs)
TypeError: must be type, not classobj
[Finished in 0.7s with exit code 1]

Using the simplest code example:

import theanets
e = theanets.Classifier(layers=[5, 5, 5])

Changing class Network: to class Network(object): makes the error go away, but I'm sure this is not the intended solution.
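
For anyone else hitting this, a minimal demonstration of the underlying Python 2 behaviour (nothing theanets-specific):

class Old:             # classic class: super() rejects it
    pass

class New(object):     # new-style class: super() accepts it
    pass

super(New, New())      # fine
super(Old, Old())      # TypeError: must be type, not classobj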

My OS and Python version:

Python 2.7.8 |Anaconda 2.1.0 (x86_64)| (default, Aug 21 2014, 15:21:46) 
[GCC 4.2.1 (Apple Inc. build 5577)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://binstar.org
>>> 

Many thanks
Dave.

batch_size=1 does not work

e = theanets.Experiment(
    theanets.recurrent.Regressor,
    # theanets.recurrent.Autoencoder,
    layers=(1, 10, 1),
    batch_size=1)

gives:

TypeError: Cannot convert Type TensorType(float32, 3D) (of Variable Subtensor{::-1}.0) into Type TensorType(float32, (False, True, False)). You can try to manually convert Subtensor{::-1}.0 into a TensorType(float32, (False, True, False)).

(The (False, True, False) pattern suggests that with batch_size=1 Theano marks the batch axis as broadcastable, which then clashes with the plain 3D type of the data.)

recurrent nets don't operate after loading

If you create an RNN, save it, then load it, it might not operate (train, predict, etc.). That's because some compilation info is not reinitialized.

In recurrent.Network.setup_encoder there is batch_size = kwargs.get('batch_size', 64). So if the network was created with batch_size=32, that value does not get recreated, since setup_encoder is not invoked on loading.

You may wish to reevaluate how nets are saved in order to better generalize saving any network. I suggest saving all the args and kwargs that went into creating the net and calling __init__ with them on loading, roughly as sketched below.
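
Something like this sketch is what I mean (get_params and set_params are illustrative names, not the real theanets API):

import pickle

def save_net(net, args, kwargs, path):
    # Remember everything needed to rebuild the network from scratch.
    state = {'args': args, 'kwargs': kwargs, 'params': net.get_params()}
    with open(path, 'wb') as handle:
        pickle.dump(state, handle)

def load_net(cls, path):
    with open(path, 'rb') as handle:
        state = pickle.load(handle)
    net = cls(*state['args'], **state['kwargs'])  # setup_encoder runs again here
    net.set_params(state['params'])
    return net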

Adding sparsity constraint to autoencoder cost

Is there currently any way to add a sparsity constraint to the cost of an autoencoder? I see regularization terms (weight_l1, weight_l2, etc.), but another tutorial also mentions an explicit sparsity term (sketched below), which seems (to me at least) different from regularization based on the weights. However, other deep learning notes don't seem to have this parameter, at least not in the form shown in the link.
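
To be explicit, the term I mean is the KL-divergence penalty from the UFLDL notes; a rough Theano sketch (the variable names are illustrative, this is not theanets API):

import theano.tensor as TT

def sparsity_penalty(hidden, rho=0.05):
    # hidden: matrix of hidden activations, one row per example;
    # rho: target mean activation for each hidden unit.
    rho_hat = TT.mean(hidden, axis=0)  # observed mean activation per unit
    return TT.sum(rho * TT.log(rho / rho_hat) +
                  (1 - rho) * TT.log((1 - rho) / (1 - rho_hat)))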

If we do need this functionality, I am thinking a separate SparseAutoencoder class might be better than adding construction options to the current autoencoder; what are your thoughts?

Validation

I like to set the validation frequency to 1 so that I get a very clear picture of how the model is doing in every epoch, but it seems that for the HF trainer I cannot set the validation. If I give the Experiment the argument validation=1, I get an error message saying that the train() function received two values for the 'validation' key.

Large dataset numpy arrays solution - store batches on hard drive?

I have tried to load big matrices into numpy arrays and then call the dataset creation routines. However, very large data can pose memory issues for numpy. Deep learning usually needs lots of data, and this data cannot all be kept in memory.
It would be nice to have the batches stored on the hard drive in separate parts, loading those parts into memory one by one during training rather than keeping the full dataset in memory. Maybe I can try to implement something like this; in that case, should the train function be modified, or what is the best way? A rough sketch of what I have in mind follows below.
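
A rough sketch, written as a callable (theanets seems to accept callables for datasets, judging by the SequenceDataset issues below); the chunked .npy file layout is hypothetical:

import numpy as np

class DiskBatches(object):
    def __init__(self, paths, batch_size=64):
        self.paths = paths            # one .npy chunk file per part
        self.batch_size = batch_size

    def __call__(self):
        path = np.random.choice(self.paths)
        chunk = np.load(path, mmap_mode='r')  # memory-map instead of loading fully
        i = np.random.randint(len(chunk) - self.batch_size)
        return [np.asarray(chunk[i:i + self.batch_size], dtype='float32')]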

Feature request: ongoing training error

Hi,

Thanks for this fantastic library. I would like to request ongoing training-error reporting, so that I can write loops that try different parameters and pick the ones that yield better convergence; something along the lines of the sketch below.
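
itertrain() (seen in other issues here) may already get partway there, though the monitor structure may differ by version:

best = float('inf')
for monitors in e.itertrain(train_set, valid_set):
    loss = monitors[0].get('loss')   # per-iteration training loss
    if loss is not None and loss < best:
        best = loss
print 'best training loss:', best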

Cheers,
Amin

theanets.recurrent.Predictor throws an error 'TypeError: Tried to provide value for implicit input: rnn1_xh'

Here's a copy-and-paste example of the code that is giving me the error:

#!/usr/bin/env python

import climate
import numpy as np
import theanets

import warnings
warnings.filterwarnings("ignore")

BS = 2

climate.enable_default_logging()

data = np.random.random((1000, BS, 7)), np.random.random((1000, 1))

e = theanets.Experiment(
    theanets.recurrent.Predictor,
    layers=(7, ('rnn', 20), 1),
    batch_size=BS)

for monitor in e.itertrain(data):
    print monitor[0].get('loss')

and here is the stack trace I get:

Traceback (most recent call last):
  File ".\test.py", line 21, in <module>
    for monitor in e.itertrain(data):
  File "C:\Users\John\.virtualenvs\firstco\lib\site-packages\theanets\main.py", line 328, in itertrain
    for i, monitors in enumerate(opt.itertrain(**sets)):
  File "C:\Users\John\.virtualenvs\firstco\lib\site-packages\theanets\trainer.py", line 248, in itertrain
    validation = self.evaluate(valid_set)
  File "C:\Users\John\.virtualenvs\firstco\lib\site-packages\theanets\trainer.py", line 186, in evaluate
    values = [self.f_eval(*x) for x in dataset]
  File "C:\Users\John\.virtualenvs\firstco\lib\site-packages\theano-0.6.0-py2.7.egg\theano\compile\function_module.py",
line 590, in __call__
    self.inv_finder[c]))
TypeError: Tried to provide value for implicit input: rnn1_xh

supervised layerwise pre-training details

Hi,

First of all, thanks for this super useful library. I need to learn about (and also cite) "layerwise pre-training" (when we set optimize='layerwise'). I have two basic questions:

1- Assume we have a network with 4 hidden layers: [100, 200, 300, 200, 50, 20]. At the first step, the network [100, 200, 20] is trained. At the second step, does it train a [100, 200, 300, 20] network (simply adding the next layer and training), or a [200, 300, 20] network (using the pre-trained first hidden layer as the input layer)?

2- Do you know any paper or tutorial that explains this approach? I searched and also looked through Bengio's work. I found several papers on unsupervised layerwise pre-training; however, here it seems that we are doing supervised pre-training (because the output layer is always used in the pre-training steps). Am I wrong about this?

Thanks,
Amin

HF trainer's cost is set to the wrong value

The code here sets the HF trainer to use all of the network's monitored values, which include accuracy. The HF trainer will never find a new best value, because the cost, which includes accuracy, will keep getting larger as the network trains well.

The accuracy should not be passed to the HF trainer.

No Diagnostic message with SGD

I use SGD for training, but for some reason I don't get the trainer's diagnostic messages, and I don't know what I am doing wrong:

e = Experiment(Regressor,
               layers=(input_layer, hidden1, output_layer),
               train_batches=100,
               optimize='sgd')

Simple regression using Theano-nets

Dear Theano-nets users,

I am new to coding with Theano and Theano-nets, and I have been trying to perform a simple prediction task that takes as input two-dimensional samples of real numbers (sample_size x 2) and returns a one-dimensional vector (1 x sample_size).

For example, my training set is extremely simple, as follows:

0 0 gives 1
1 1 gives 2
2 2 gives 3
3 3 gives 4
etc.

My test set would be, say:

10 10 gives 11
11 11 gives 12
etc.

Based on some provided examples, I have written the following:

import numpy as np
import theanets

train_set_x = np.genfromtxt('train_set_x.dat', delimiter=',', dtype='float32')
train_set_y = np.genfromtxt('train_set_y.dat', delimiter=',', dtype='float32')
train_set = [train_set_x, train_set_y]

valid_set_x = np.genfromtxt('valid_set_x.dat', delimiter=',', dtype='float32')
valid_set_y = np.genfromtxt('valid_set_y.dat', delimiter=',', dtype='float32')
valid_set = [valid_set_x, valid_set_y]

test_set_x = np.genfromtxt('test_set_x.dat', delimiter=',', dtype='float32')
test_set_y = np.genfromtxt('test_set_y.dat', delimiter=',', dtype='float32')
test_set = [test_set_x, test_set_y]

e = theanets.Experiment(theanets.feedforward.Regressor,
                        layers=(2, 100, 2),
                        learning_rate=0.1,
                        optimize='sgd',
                        patience=300,
                        activation='tanh')

e.run(train_set, train_set)

print "Input:"
print train_set[0]

print "Output"
print train_set[1]

print "Predictions"
print e.network(np.array([[1, 1], [3, 3]]))

The code runs, but the produced output values are not reasonable. In this case I get:

Predictions
[[-0.02094674  0.19985442]
 [-0.09269754  0.53628206]]

while

[[2 2]
 [4 4]]

would have been expected. (The output has two columns to avoid a matrix dimension error.)

I would be extremely grateful for any advice or hint on where the code is wrong.

Thank you very much for your help,

H.R.

ValueError: Shape mismatch when creating batches from 3d array on axis=1

I have a recurrent regression architecture in a toy example with the layers [8, 10, 24].
I am creating a dataset from two numpy arrays with dimensions [40, 64, 8] and [40, 64, 24], with batch_size taking different values; let's say batch_size=8 and axis=1 (the batches are split on the sequence axis).
Training the network gives this error, which does not happen if the split is done on the time axis (axis=0):

File "/home/neuralnets/theanets/main.py", line 246, in train
    for _ in self.itertrain(*args, **kwargs):
File "/home/neuralnets/theanets/main.py", line 315, in itertrain
    for i, costs in enumerate(opt.train(**sets)):
File "/home/neuralnets/theanets/trainer.py", line 162, in train
    if not self.evaluate(iteration, valid_set):
File "/home/marius/neuralnets/theanets/trainer.py", line 116, in evaluate
    np.mean([self.f_eval(*x) for x in valid_set], axis=0)))
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 588, in __call__
    self.fn.thunks[self.fn.position_of_error])
File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 579, in __call__
    outputs = self.fn()
File "/usr/local/lib/python2.7/dist-packages/theano/scan_module/scan_op.py", line 656, in rval
    r = p(n, [x[0] for x in i], o)
File "/usr/local/lib/python2.7/dist-packages/theano/scan_module/scan_op.py", line 650, in <lambda>
    self, node)
File "scan_perform.pyx", line 341, in theano.scan_module.scan_perform.perform (/home/marius/.theano/compiledir_Linux-3.13.0-40-generic-i686-with-Ubuntu-14.04-trusty-i686-2.7.6-32/scan_perform/mod.cpp:3573)
File "scan_perform.pyx", line 335, in theano.scan_module.scan_perform.perform (/home/marius/.theano/compiledir_Linux-3.13.0-40-generic-i686-with-Ubuntu-14.04-trusty-i686-2.7.6-32/scan_perform/mod.cpp:3505)
ValueError: Shape mismatch: x has 64 rows but z has 8 rows
Apply node that caused the error: Gemm{inplace}(Dot22.0, TensorConstant{1.0}, <TensorType(float64, matrix)>, W_pool_copy, TensorConstant{1.0})
Use another linker then the c linker to have the inputs shapes and strides printed.
Use the Theano flag 'exception_verbosity=high' for a debugprint of this apply node.
Apply node that caused the error: forall_inplace,cpu,scan_fn}(Shape_i{0}.0, Subtensor{int64:int64:int8}.0, Alloc.0, W_0, W_pool, InplaceDimShuffle{x,0}.0)
Inputs shapes: [(), (40, 8, 8), (40, 64, 10), (8, 10), (10, 10), (1, 10)]
Inputs strides: [(), (4096, 64, 8), (5120, 80, 8), (80, 8), (80, 8), (80, 8)]
Inputs types: [TensorType(int64, scalar), TensorType(float64, 3D), TensorType(float64, 3D), TensorType(float64, matrix), TensorType(float64, matrix), TensorType(float64, row)]
Use the Theano flag 'exception_verbosity=high' for a debugprint of this apply node.

I am working with audio, so at each time frame I have a 1-D array giving the spectrum, across however many audio sequences. I realize that the library is under intense development right now, but any help on this issue would be appreciated.

Overcomplete basis with autoencoder with tied_weights=True leads to total sparsity in decode?

This gist sums up what I am seeing. When I try to train an overcomplete autoencoder, the sparsity for all decode layers shows up as 1.0 and the cost gets "stuck" at ~87 (because the gradient can't flow backwards through totally sparse layers?).

I encountered this while trying to build the canonical 784-1000-500-250-30-350-500-1000-784 deep autoencoder for MNIST digits; I didn't have time to explore or recreate it until now. Any thoughts?

Support masks for targets/labels

Supervised models, especially recurrent models, need to support data of variable length. We should add a mask parameter to these models, or add support for masked target arrays.

TypeError in XOR example

Hi,
I was testing my setup with your examples just to be sure everything works before I start doing any serious work.

When I run the example, I get some log output and then the following error:

  File "xor-classfier.py", line 27, in <module>
    e.run(train, train)
  File "/home/theano/dp/python/theano-nets/theanets/main.py", line 210, in run
    for _ in self.train(train=train, valid=valid):
  File "/home/theano/dp/python/theano-nets/theanets/main.py", line 238, in train
    cg_set=self.datasets['cg']):
  File "/home/theano/dp/python/theano-nets/theanets/trainer.py", line 135, in train
    self.evaluate(i, valid_set)
  File "/home/theano/dp/python/theano-nets/theanets/trainer.py", line 93, in evaluate
    costs = np.mean([self.f_eval(*x) for x in valid_set], axis=0)
  File "/usr/lib/python2.7/site-packages/theano/compile/function_module.py", line 497, in __call__
    allow_downcast=s.allow_downcast)
  File "/usr/lib/python2.7/site-packages/theano/tensor/type.py", line 119, in filter
    raise TypeError(err_msg, data)
TypeError: ('Bad input argument to theano function at index 0(0-based)', 'TensorType(float32, matrix) cannot store a value of dtype float64 without risking loss of precision. If you do not mind this loss, you can: 1) explicitly cast your data to float32, or 2) set "allow_input_downcast=True" when calling "function".', array([[ ... ]]))

I understand what's going on; I'm just not sure how to fix it.

I see two options here: either my Theano installation is set up incorrectly, or there is some flaw in your code.

My setup taken from .theanorc

[global]
floatX = float32
device = gpu0
mode=FAST_RUN

[nvcc]
fastmath = True

[blas]
ldflags = -latlas -lgfortran -lf77blas

I think there is a conflict between your code and my settings: my Theano is trying to run on the GPU, while your code isn't aware of it.
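
The workaround I would try first (an assumption on my part, not a confirmed fix) is to cast the data explicitly before training so that it matches floatX:

import numpy as np

train = [np.asarray(X, dtype='float32'),
         np.asarray(Y, dtype='int32')]  # classifier labels stay integral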

Thanks for the help.

inconsistency in initializing SequenceDataset with ndarray vs callable

When SequenceDataset is initialized with an array, the array is broken into minibatches on the first axis. However, when it's given a callable, the data generated by the callable for an RNN is expected to have shape (sequence_length, batch_size, dimension). This creates an inconsistency in how SequenceDataset is initialized.

from theanets.dataset import SequenceDataset as DS
import numpy as np
import climate
climate.enable_default_logging()

class DataGen(object):

    def __init__(self, dim=(3000, 128, 3), I=13):
        self.mydim = dim    # (sequence_length, batch_size, dimension)
        self.I = I          # number of minibatches the callable will yield
        self.myiter = self.data_iter()

    def data_iter(self):
        i = 0
        while i < self.I:
            yield [np.random.rand(*self.mydim).astype('float32')]
            i += 1

    def __call__(self):
        return self.myiter.next()

adata = DataGen()()[0]  # the ndarray
dg = DataGen()          # yields [ndarray]

print 'sequencedataset initialized with array shape ', adata.shape
DS(adata)
print 'sequencedataset init. w/ a gen of data of shape ', dg.mydim
DS(dg)

output

sequencedataset initialized with array shape  (3000L, 128L, 3L)
I 2014-11-25 22:39:23 theanets.dataset:94 data dataset: 94x 94 mini-batches of (32L, 128L, 3L)
sequencedataset init. w/ a gen of data of shape  (3000, 128, 3)
I 2014-11-25 22:39:23 theanets.dataset:94 data dataset: 32x -> mini-batches of (3000L, 128L, 3L)

problems with climate / get_args

When I do a simple

exp = theanets.Experiment(theanets.Autoencoder, layers=(100,20,100))

it fails:

Traceback (most recent call last):
  File "/home/xtv/PycharmProjects/testhyperopt/test4.py", line 67, in <module>
    exp = theanets.Experiment(theanets.Autoencoder, layers=(100,20,10))
  File "/usr/local/lib/python2.7/dist-packages/theanets/main.py", line 78, in __init__
    self.args, self.kwargs = parse_args(**overrides)
  File "/usr/local/lib/python2.7/dist-packages/theanets/main.py", line 39, in parse_args
    args = climate.get_args().parse_args()
AttributeError: 'module' object has no attribute 'get_args'

It seems that climate 0.3.1 doesn't have this method; its package only exports:

from .flags import parse_args, add_mutex_arg_group, add_arg_group, add_arg, add_command, annotate
...

layerwise with SGD

Hi,

I use layerwise pre-training as below:

e = theanets.Experiment(theanets.feedforward.Regressor,
                        layers=(options.featuresN, 200, 300, 200, options.landmarksN*2),
                        optimize=['layerwise', 'sgd'],
                        activation='relu')

My code used to work fine with previous versions of the software. However, after I installed a newer version, it seems that the default trainer for layerwise changed from SGD to NAG, and it now gives me J=nan. Is there a way to use the layerwise pre-trainer with SGD again? If not, can you tell me how to download and install the older version (that used SGD) using git clone, roughly as in the sketch below?
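
For reference, the usual git workflow for pinning an older revision looks roughly like this (the repository URL and commit hash are placeholders):

git clone https://github.com/lmjohns3/theano-nets.git
cd theano-nets
git log --oneline            # find the revision that still used SGD
git checkout <commit-hash>   # placeholder for the revision you want
python setup.py install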

Thanks,
Amin

How to change the logging setting for SGD and layerwise trainer?

Hi,

I was wondering if there is a way to control the logging info for the layerwise and SGD optimizers, for example to see the training error only every 50 updates (not on each update). My training takes a couple of days; whenever I get to my computer, I see the logging for the last hour at most, so I don't get a feel for what's going on. A possible workaround is sketched below.
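
One workaround I have been considering (this assumes climate is built on the standard logging module, which the log lines in other issues suggest) is to raise the level of the trainer's logger:

import logging

# Keep warnings and errors, drop the per-update INFO lines.
logging.getLogger('theanets.trainer').setLevel(logging.WARNING)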

Thanks for your great package,
Amin

Unsupervised Pretraining Feature Request

Yes, I know, there is a growing sentiment that unsupervised pretraining (e.g. with Restricted Boltzmann Machines) is becoming obsolete in neural nets thanks to recent advances in regularisation and better optimisation techniques for back-propagation. I beg to differ, though. Unsupervised pretraining would allow the use of a much bigger set of unlabeled data first, while fine-tuning is done on a much smaller labeled data set. In other words, it could be helpful beyond just finding a better initialisation for the weights.

It would be lovely if we could do something similar to this in theano-nets:

exp = theanets.Experiment(theanets.Classifier)

exp.add_dataset('pretrain', unlabeled_dataset)
exp.add_dataset('train', labeled_dataset[:1000])
exp.add_dataset('valid', labeled_dataset[1000:])
exp.train()

That way, the (potentially unlabeled) pretrain data set can be different from the labeled training set, e.g. be much bigger.

wrong NN XOR predictions

Result of the first run:
I 2015-02-16 21:51:11 root:21 NN XOR predictions:
[[ 0. ]
[ 0.5 ]
[ 0.99000001]
[ 0.5 ]]

Oddly, it worked the second time:
I 2015-02-16 21:53:49 root:21 NN XOR predictions:
[[ 0.]
[ 1.]
[ 1.]
[ 0.]]

How to control new features?

Hi,

My neural network converges very well on a machine that has an older version of theano-nets. However, on a new machine (where we installed the latest version), the convergence is not as good. I guess that's because the default parameters changed. I noticed that the default value for "min_improvement" changed from 0.001 to 0.01; I changed it back, but that didn't solve the problem. Other parameters I noticed being added are rprop_decrease, rprop_increase, etc. Is it now using rprop by default? If so, how can I make it not use rprop?

Maybe these problems come from my lack of knowledge about GitHub. Are these changes written down anywhere so that I can go through them? And is there a way to check the version of theano-nets and install that specific version on a new machine?

Thanks,
Amin

double net init when loading from saved net

The following is the log I get when the network is reinitialized from the same script that has save_progress. You can see a double initialization. I guess it's somewhat minor, but initialization should only happen once.

I 2014-12-10 15:54:46 theanets.feedforward:156 hidden activation: logistic
I 2014-12-10 15:54:46 theanets.feedforward:161 output activation: linear
I 2014-12-10 15:54:46 theanets.feedforward:348 weights for layer 0: 1 x 50
I 2014-12-10 15:54:46 theanets.feedforward:348 weights for layer xh_1: 50 x 50
I 2014-12-10 15:54:46 theanets.feedforward:348 weights for layer hh_1: 50 x 50
I 2014-12-10 15:54:46 theanets.feedforward:348 weights for layer 2: 50 x 1
I 2014-12-10 15:54:46 theanets.recurrent:117 5201 total network parameters
I 2014-12-10 15:54:47 theanets.dataset:97 valid: 29 mini-batches from callable
I 2014-12-10 15:54:47 theanets.dataset:97 train: 272 mini-batches from callable
I 2014-12-10 15:54:47 theanets.main:351 loading model from trnn
I 2014-12-10 15:54:47 theanets.feedforward:156 hidden activation: logistic
I 2014-12-10 15:54:47 theanets.feedforward:161 output activation: linear
I 2014-12-10 15:54:47 theanets.feedforward:348 weights for layer 0: 1 x 50
I 2014-12-10 15:54:47 theanets.feedforward:348 weights for layer xh_1: 50 x 50
I 2014-12-10 15:54:47 theanets.feedforward:348 weights for layer hh_1: 50 x 50
I 2014-12-10 15:54:47 theanets.feedforward:348 weights for layer 2: 50 x 1
I 2014-12-10 15:54:47 theanets.recurrent:117 5201 total network parameters
I 2014-12-10 15:54:47 theanets.feedforward:577 W_0: setting value (1L, 50L)
I 2014-12-10 15:54:47 theanets.feedforward:577 W_xh_1: setting value (50L, 50L)
I 2014-12-10 15:54:47 theanets.feedforward:577 W_hh_1: setting value (50L, 50L)
I 2014-12-10 15:54:47 theanets.feedforward:577 W_2: setting value (50L, 1L)
I 2014-12-10 15:54:47 theanets.feedforward:580 b_0: setting value (50L,)
I 2014-12-10 15:54:47 theanets.feedforward:580 b_h_1: setting value (50L,)
I 2014-12-10 15:54:47 theanets.feedforward:580 b_2: setting value (1L,)
I 2014-12-10 15:54:47 theanets.feedforward:583 trnn: loaded model parameters
I 2014-12-10 15:54:47 theanets.main:198 creating trainer <class 'theanets.trainer.NAG'>

More explicit documentation on the decoding layer

Hello,

I think it is a little counterintuitive that the last layer is linear while the activation is chosen by the user. The user should be able to pick the decoding activation (and in my opinion it should default to the "activation" option).

RNN not working

commit 2e4d725 broke it.

C:\Anaconda\lib\site-packages\theano\tensor\subtensor.py:114: FutureWarning: comparison to `None` will result in an elementwise object comparison in the future.
  stop in [None, length, maxsize] or
C:\Anaconda\lib\site-packages\theano\scan_module\scan_perform_ext.py:85: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility
  from scan_perform.scan_perform import *
I 2014-12-16 14:11:01 theanets.trainer:142 compiling RmsProp learning function
Traceback (most recent call last):
  File "testrnn.py", line 65, in <module>
    xp.train(ecgb_trn, ecgb_val)
  File "c:\users\majid\documents\github\theano-nets\theanets\main.py", line 252, in train
    for _ in self.itertrain(*args, **kwargs):
  File "c:\users\majid\documents\github\theano-nets\theanets\main.py", line 315, in itertrain
    opt = self.create_trainer(opt, **kwargs)
  File "c:\users\majid\documents\github\theano-nets\theanets\main.py", line 207, in create_trainer
    return factory(*args, **kw)
  File "c:\users\majid\documents\github\theano-nets\theanets\trainer.py", line 354, in __init__
    super(RmsProp, self).__init__(network, **kwargs)
  File "c:\users\majid\documents\github\theano-nets\theanets\trainer.py", line 146, in __init__
    updates=list(network.updates) + list(self.learning_updates()))
  File "C:\Anaconda\lib\site-packages\theano\compile\function.py", line 223, in function
    profile=profile)
  File "C:\Anaconda\lib\site-packages\theano\compile\pfunc.py", line 490, in pfunc
    no_default_updates=no_default_updates)
  File "C:\Anaconda\lib\site-packages\theano\compile\pfunc.py", line 217, in rebuild_collect_shared
    raise TypeError(err_msg, err_sug)
TypeError: ('An update must have the same type as the original shared variable (shared_var=W_xh_0_g1, shared_var.type=CudaNdarrayType(float32, matrix), update_val=Elemwise{add,no_inplace}.0, update_val.type=TensorType(float64, matrix)).', 'If the difference is related to the broadcast pattern, you can call the tensor.unbroadcast(var, axis_to_unbroadcast[, ...]) function to remove broadcastable dimensions.')

Hessian Free Optimizer does not work

The HF optimizer does not work when run on xor.py in the examples folder.

Below is the error:
Traceback (most recent call last):
  File "/home/camaro/workspace/theanets/xor.py", line 14, in <module>
    e.train([X, Y], optimize='hf', patience=5000, batch_size=4)
  File "/home/camaro/theanets/theanets/main.py", line 258, in train
    for _ in self.itertrain(*args, **kwargs):
  File "/home/camaro/theanets/theanets/main.py", line 321, in itertrain
    opt = self.create_trainer(opt, **kwargs)
  File "/home/camaro/theanets/theanets/main.py", line 213, in create_trainer
    return factory(*args, **kw)
  File "/home/camaro/theanets/theanets/trainer.py", line 681, in __init__
    None)
  File "/tmp/hf.py", line 66, in __init__
    Gv = gauss_newton_product(costs[0], p, v, s)
  File "/tmp/hf.py", line 14, in gauss_newton_product
    Jv = T.Rop(s, p, v)
  File "/home/camaro/Theano/theano/gradient.py", line 292, in Rop
    elif seen_nodes[out.owner][out.owner.outputs.index(out)] is None:
KeyError: None

How do I get predictions from my Classifier network

Hi, I built a network for learning a XOR function using the following example. I just don't know how to get results/predictions from the network. Please can you help me?

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import lmj.cli
import numpy as np
import theanets

lmj.cli.enable_default_logging()

X = np.array([
               [0.0, 0.0],
               [0.0, 1.0],
               [1.0, 0.0],
               [1.0, 1.0],
             ])

Y = np.array([0, 1, 1, 0, ])

print X.shape
print Y.shape

train = [X,  Y.astype('int32')]

e = theanets.Experiment(theanets.Classifier,
                        layers=(2, 5, 2),
                        activation = 'tanh',
#                        learning_rate=.005,
#                        learning_rate_decay=.1,
#                        patience=20,
                        optimize="sgd",
                        num_updates=10,
#                        tied_weights=True,
#                        batch_size=32,
                        )
e.run(train, train)

print e.network(X)
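
For what it's worth, the network call above appears to return one row of class probabilities per input, so taking the argmax gives class predictions (this is my reading of the API, not something from the documentation):

probs = e.network(X)         # class probabilities, one row per example
print probs.argmax(axis=1)   # predicted class labels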

Network object has no attribute cost

This code runs, but if I change the Regressor to Network I get an error message saying that the Network has no attribute cost. What am I doing wrong?

import numpy
import theanets

def getDataset():
    description_vectors = numpy.load('train_desps_features.npy')
    signature_vectors = numpy.load('train_signatures_features.npy')
    dataset = (description_vectors, signature_vectors)
    return dataset

def trainModel(dataset):
    input_layer = len(dataset[0][0])
    hidden1 = int(input_layer*(2.0/3))
    output_layer = len(dataset[1][0])
    e = theanets.Experiment(theanets.Regressor, layers=(input_layer, hidden1, output_layer))
    e.run(dataset, dataset)

dataset = getDataset()
trainModel(dataset)

IPython compatibility

The code doesn't work in an IPython notebook due to argument parsing :( (IPython adds some extra arguments when launching Python). A simple workaround would be to change parse_args to parse_known_args.

Reproducing: create any experiment inside an IPython notebook.
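
In argparse terms the proposed workaround is (sketch; parser stands for climate's internal parser):

args, unknown = parser.parse_known_args()  # ignores IPython's extra argv entries
# instead of:
# args = parser.parse_args()               # errors out on unknown arguments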

How to check the training converges?

Hi,

I have a multivariate nonlinear regression problem and I am trying to solve it using deep neural networks. I use the code below for training. My X is 10000x40 and my Y is 10000x78. I was wondering how I can check a few things:
1- How do I know the training converged?
2- How do I know what 'learning rate', 'momentum' and 'update_num' it used as defaults?

e = theanets.Experiment(theanets.feedforward.Regressor,
                        layers=(40, 100, 200, 300, 150, 78),
                        optimize='sgd',
                        activation='tanh')
e.run(train_set, train_set)
Y_predicted = e.network(X_test_minmax)

I tried using 'hf' instead of 'sgd'. It printed some performance variables for each iteration, but it was too slow for my application. The other problem is that when I write 'layerwise' instead of 'sgd', it gives me an error. Any kind of help is appreciated.

Thanks,
Amin

How to Speed-up training?

Hi,

Thanks for your great package. I am training a network with three hidden layers, 400 features, and 78 targets, using the code below. The result on test data is impressive, but it takes too long to train (about two days). Is there a way to parallelize training on multiple cores or on a GPU? Or any other suggestions to speed up the training process?

train_set = [X_minmax, Y_xyz_minmax]
e = theanets.Experiment(theanets.feedforward.Regressor,
                        layers=(featuresNum, 200, 300, 150, vertebNum*3),
                        optimize='layerwise',
                        activation='tanh')
e.run(train_set, train_set)
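
On the GPU side, Theano itself can be pointed at a GPU through its configuration (compare the .theanorc in the XOR issue above); for example, hypothetically (train_script.py is a placeholder):

THEANO_FLAGS='device=gpu,floatX=float32' python train_script.py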

Dataset omits first called minibatch

The first minibatch called from a callable is left out when the callable is given to Dataset.
Demo:

from theanets.dataset import SequenceDataset as DS
import numpy as np
import climate
climate.enable_default_logging()

class DataGen(object):

    def __init__(self, dim=(3, 2, 1), I=5):
        self.mydim = dim   # shape of each generated minibatch
        self.I = I         # number of minibatches to generate
        self.myiter = self.data_iter()

    def data_iter(self):
        i = 0
        while i < self.I:
            yield [i + np.random.rand(*self.mydim).astype('float32')]
            i += 1

    def __call__(self):
        return self.myiter.next()

dg = DataGen()  # yields [ndarray]

print 'sequencedataset init. w/ a gen of data of shape ', dg.mydim
ds = DS(dg)

print 'should be', dg.I
print 'Dataset has', len([ad for ad in ds])
print '..while data gen has', len([ad for ad in DataGen().myiter])

output

I 2014-11-26 23:28:38 theanets.dataset:94 data dataset: 32x -> mini-batches of (3L, 2L, 1L)
should be 5
Dataset has 4
..while data gen has 5

...and this is leaving aside the fact that it's not really 32x.

Calling a parent script (which embeds a theanets Experiment) with command-line args triggers an error in theanets

There is a script which is launched from the command line with an arg that specifies the input directory for the feature files, so I call the script like this:
python lstm_ex.py --input_path /path/to/input/dir
I receive an error, which is triggered when the experiment is initialized:
usage: lstm_ex.py [-h] [--help-activation] [--help-optimize]
[-n N [N ...]] [-g FUNC] [--output-activation FUNC] [-t]
[--decode N] [-O ALGO [ALGO ...]] [--no-learn-biases]
[--num-updates N] [-p N] [-v N] [-b N] [-B N] [-V N]
[--save-progress FILE] [--save-every N]
[--contractive-l2 S] [--input-noise S]
[--input-dropouts R] [--hidden-noise S]
[--hidden-dropouts R] [--hidden-l1 K] [--hidden-l2 K]
[--weight-l1 K] [--weight-l2 K] [-l V] [-m V]
[--min-improvement R] [--gradient-clip V]
[--max-gradient-norm V] [--rms-halflife N]
[--rprop-increase R] [--rprop-decrease R]
[--rprop-min-step V] [--rprop-max-step V] [-C N]
[--initial-lambda K] [--global-backtracking]
[--preconditioner] [--recurrent-error-start T]
lstm_ex.py: error: unrecognized arguments: --input_path
I think something gets mixed up during climate's processing of the input args...

'Regressor' object has no attribute 'x' when initializing a feedforward regressor

I tried to initialize a feedforward regressor (code here: #11, lmjohns3's first response). However, I get an error on this line:

e = theanets.Experiment(theanets.feedforward.Regressor, layers=(2, 20, 1), optimize='sgd', activation='tanh')

Here's the stack trace

Couldn't import dot_parser, loading of dot files will not be possible.
Traceback (most recent call last):
  File "simple_regression2.py", line 21, in <module>
    activation='tanh')
  File "/Users/user/python/theano-nets/theanets/main.py", line 89, in __init__
    self.network = self._build_network(network_class, **kw)
  File "/Users/user/python/theano-nets/theanets/main.py", line 101, in _build_network
    return network_class(**kwargs)
  File "/Users/user/python/theano-nets/theanets/feedforward.py", line 188, in __init__
    _, encode_count = self.setup_encoder(**kwargs)
  File "/Users/user/python/theano-nets/theanets/feedforward.py", line 226, in setup_encoder
    self.x,
AttributeError: 'Regressor' object has no attribute 'x'

I'm new to this package. Is there something I'm missing?

net.predict() Raises Error Calculating Dot Product; Theano Cannot Find MKL Lib

I am trying to use theanets to take advantage of its favorable speed, replacing a slower ML library I was using (PyBrain). I have come across a confusing error that seems to be more an issue between Anaconda and Theano, but I was wondering if anyone here could provide some insight.

Here is a link to an issue on the Theano repository dealing with a similar problem: Theano/Theano#1871. The fix suggested in that thread is to set DYLD_FALLBACK_LIBRARY_PATH to .../anaconda/lib. After doing this, though, my issue persists.

import theanets
import numpy as np
x = np.array([1,2,3])
net = theanets.Network(layers=[3,10,3])
net.predict(x)
WARNING (theano.gof.compilelock): Overriding existing lock by dead process '1480' (I am process '4543')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theanets/feedforward.py", line 556, in predict
    return self.feed_forward(x)[-1]
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theanets/feedforward.py", line 540, in feed_forward
    self._compile()
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theanets/feedforward.py", line 428, in _compile
    [self.x], self.hiddens + [self.y], updates=self.updates)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/compile/function.py", line 265, in function
    profile=profile)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/compile/pfunc.py", line 511, in pfunc
    on_unused_input=on_unused_input)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/compile/function_module.py", line 1546, in orig_function
    defaults)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/compile/function_module.py", line 1409, in create
    _fn, _i, _o = self.linker.make_thunk(input_storage=input_storage_lists)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/gof/link.py", line 531, in make_thunk
    output_storage=output_storage)[:3]
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/gof/vm.py", line 897, in make_all
    no_recycling))
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/gof/op.py", line 722, in make_thunk
    output_storage=node_output_storage)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 1043, in make_thunk
    keep_lock=keep_lock)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 985, in compile
    keep_lock=keep_lock)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 1423, in cthunk_factory
    key=key, fn=self.compile_cmodule_by_step, keep_lock=keep_lock)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/gof/cmodule.py", line 1005, in module_from_key
    module = next(compile_steps)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/gof/cc.py", line 1338, in compile_cmodule_by_step
    preargs=preargs)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/gof/cmodule.py", line 2011, in compile_str
    return dlimport(lib_filename)
  File "/Users/tylerpayne/anaconda/lib/python2.7/site-packages/theano/gof/cmodule.py", line 289, in dlimport
    rval = __import__(module_name, {}, {}, [module_name])
ImportError: ('The following error happened while compiling the node', Dot22(x, W_0), '\n', 'dlopen(/Users/tylerpayne/.theano/compiledir_Darwin-14.0.0-x86_64-i386-64bit-i386-2.7.9-64/tmpkwsKlI/e72a390c58958feb2b036cde5102049d.so, 2): Library not loaded: libmkl_intel_lp64.dylib\n Referenced from: /Users/tylerpayne/.theano/compiledir_Darwin-14.0.0-x86_64-i386-64bit-i386-2.7.9-64/tmpkwsKlI/e72a390c58958feb2b036cde5102049d.so\n Reason: image not found', '[Dot22(x, W_0)]')

Distributed Computing Feature Request

The theanets package is working perfectly for me so far; it's very easy to use and has enough features to solve my problem, so basically: good job!
My only issue is that I cannot use my available architecture to its fullest. I have access to a server with 26 cores, and as far as I understand, theanets does not have the ability to train a model on a cluster. Cluster support would make theanets more versatile and also much faster. If theanets could do cluster computing, it would be my dream neural network package :-).

Cheers

Load Experiment

I save out a network like so:

e = Experiment(Regressor, layers=(input_layer, hidden1, output_layer), optimize='hf', num_updates=30, verbose='True')
e.run(dataset, dataset)
e.save('network.dat')

Then, when I try to load it back in:
network = theanets.Experiment(theanets.Network).load('network.dat')

I get the following error message, and I'm not sure what I am doing wrong.
Traceback (most recent call last):
  File "test.py", line 10, in <module>
    network = theanets.Experiment(theanets.Network).load('network.dat')
  File "/usr/local/lib/python2.7/dist-packages/theanets/main.py", line 90, in __init__
    self.network = self._build_network(network_class, **kw)
  File "/usr/local/lib/python2.7/dist-packages/theanets/main.py", line 103, in _build_network
    return network_class(activation=activation, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/theanets/feedforward.py", line 107, in __init__
    self.x.tag.test_value = np.random.randn(DEBUG_BATCH_SIZE, layers[0])
TypeError: 'NoneType' object has no attribute '__getitem__'

feature request: checkpointing

I see the argument for periodic saving on the command line; however, I'm not seeing its implementation in the code. I suggest periodic saving based on elapsed time (as opposed to, for example, training cycles), roughly as sketched below.

Given the long training times, this should be prioritized.
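
A sketch of what I mean, layered on itertrain() and save() (both appear in other issues here; the exact signatures may differ by version):

import time

SAVE_INTERVAL = 600  # seconds between checkpoints
last_save = time.time()
for monitors in e.itertrain(train_set, valid_set):
    if time.time() - last_save > SAVE_INTERVAL:
        e.save('checkpoint.dat')
        last_save = time.time()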

Cost function specification

I had been writing neural net code without Theano and recently converted to specifying my experiments with theanets, because I want to use Theano to experiment with different cost functions at the output.

I am running a regression model with 96 output variables where Euclidean distance (~mean squared error) is not a very good measure of estimation accuracy. Since theanets is built on top of Theano, it seems this cost function should be parameterizable, but I don't see anything about it in the documentation.

Can you tell me where I can specify the cost function as a Theano function?
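
In case it helps frame the question: other issues here suggest Regressor exposes a cost property, so I imagine something like the following subclass. The attribute names (self.y for the network output, self.k for the targets) are my guesses; check feedforward.py for the real ones.

import theano.tensor as TT
import theanets

class L1Regressor(theanets.Regressor):
    @property
    def cost(self):
        # Mean absolute error instead of the default squared error.
        err = self.y - self.k
        return TT.mean(abs(err).sum(axis=1))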

AsTensorError

Here's the traceback from code that worked with the previous (two-day-old) revision. With the latest it does not; if I revert, it works again.

I 2014-12-08 19:15:15 theanets.feedforward:161 hidden activation: logistic
I 2014-12-08 19:15:15 theanets.feedforward:166 output activation: linear
I 2014-12-08 19:15:15 theanets.feedforward:371 weights for layer 0: 75 x 512
Traceback (most recent call last):
  File "./thea.py", line 56, in <module>
    global_backtracking=True,
  File "theanets/main.py", line 152, in __init__
    self.network = network_class(**self.kwargs)
  File "theanets/feedforward.py", line 169, in __init__
    self.setup_layers(**kwargs)
  File "theanets/feedforward.py", line 201, in setup_layers
    inputs = self.setup_encoder(**kwargs)
  File "theanets/feedforward.py", line 238, in setup_encoder
    self.preacts.append(TT.dot(z, W) + b)
  File "/usr/local/lib/python2.7/dist-packages/theano/tensor/basic.py", line 4725, in dot
    a, b = as_tensor_variable(a), as_tensor_variable(b)
  File "/usr/local/lib/python2.7/dist-packages/theano/tensor/basic.py", line 192, in as_tensor_variable
    raise AsTensorError("Cannot convert %s to TensorType" % str_x, type(x))
theano.tensor.var.AsTensorError: ('Cannot convert (W_0, 38400) to TensorType', <type 'tuple'>)
