Hello! While trying to start training, I get the error below.
ipdb> > /home/seemab/seemab/Siamese-LSTM-master/lstmQA.py(191)adadelta()
    189                  for rg2, g in zip(running_grads2, grads)]
    190
--> 191     f_grad_shared = theano.function([emb11,mask11,emb21,mask21,y], cost, updates=zgup + rg2up,
    192                                     name='adadelta_f_grad_shared',allow_input_downcast=True)
    193

ipdb>
ipdb> Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    debugfile('/home/seemab/seemab/Siamese-LSTM-master/trainQA.py', wdir='/home/seemab/seemab/Siamese-LSTM-master')
File "/home/seemab/.local/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 728, in debugfile
debugger.run("runfile(%r, args=%r, wdir=%r)" % (filename, args, wdir))
File "/home/seemab/anaconda2/lib/python2.7/bdb.py", line 400, in run
exec cmd in globals, locals
File "", line 1, in
File "/home/seemab/.local/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "/home/seemab/.local/lib/python2.7/site-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
builtins.execfile(filename, *where)
File "/home/seemab/seemab/Siamese-LSTM-master/trainQA.py", line 14, in
sls=lstm("new.p",load=False,training=True) #call to class lstm()
File "lstmQA.py", line 303, in init
self.f_grad_shared, self.f_update = adadelta(lr, tnewp, grads,emb11,mask11,emb21,mask21,y, cost)
File "lstmQA.py", line 192, in adadelta
name='adadelta_f_grad_shared',allow_input_downcast=True)
File "/home/seemab/anaconda2/lib/python2.7/site-packages/theano/compile/function.py", line 298, in function
output_keys=output_keys)
File "/home/seemab/anaconda2/lib/python2.7/site-packages/theano/compile/pfunc.py", line 449, in pfunc
no_default_updates=no_default_updates)
File "/home/seemab/anaconda2/lib/python2.7/site-packages/theano/compile/pfunc.py", line 208, in rebuild_collect_shared
raise TypeError(err_msg, err_sug)
TypeError: ('An update must have the same type as the original shared variable (shared_var=1lstm1_U_rgrad2, shared_var.type=TensorType(float32, matrix), update_val=Elemwise{add,no_inplace}.0, update_val.type=TensorType(float64, matrix)).', 'If the difference is related to the broadcast pattern, you can call the tensor.unbroadcast(var, axis_to_unbroadcast[, ...]) function to remove broadcastable dimensions.')
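From the message, the adadelta update expression for the running-gradient accumulator comes out float64 while the shared variable it updates is float32. Below is a minimal sketch of what seems to trigger it (names like lstm_U_rgrad2 are illustrative, and I'm assuming the *_rgrad2 shared variables are float32 while the symbolic gradients end up float64):

import numpy
import theano
import theano.tensor as T

# float32 shared variable, like the *_rgrad2 accumulators in adadelta()
rg2 = theano.shared(numpy.zeros((2, 2), dtype='float32'), name='lstm_U_rgrad2')

# a float64 gradient (e.g. one computed from float64 inputs)
g = T.dmatrix('g')

# float32 + float64 upcasts to float64, so the update no longer matches rg2
rg2up = 0.95 * rg2 + 0.05 * (g ** 2)

# this raises the same TypeError as in the traceback above:
# theano.function([g], [], updates=[(rg2, rg2up)])

# casting the update back to the shared variable's dtype compiles fine
f = theano.function([g], [], updates=[(rg2, T.cast(rg2up, 'float32'))])
f(numpy.ones((2, 2)))  # runs without error

If this is the cause, wrapping the update expressions in T.cast(..., theano.config.floatX), or making sure everything is float32 in the first place (floatX=float32 in .theanorc and float32 inputs), should avoid the mismatch; note that allow_input_downcast=True only downcasts function inputs at call time, not intermediate dtypes in the graph.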