goodfeli / theano_exercises
Exercises for my tutorials on Theano
License: BSD 3-Clause "New" or "Revised" License
Hi there,
I'm new to Theano and have hit a wall. The solution says I should get an error, but I don't. How can I apply the flag THEANO_FLAGS="mode=FAST_COMPILE" from inside the code?
Sincerely,
Lerner
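THEANO_FLAGS is an environment variable, so the usual way is to set it in the shell before launching the script. If you want to do it from inside the code, one option is to set the variable before Theano is imported (a sketch; this relies on the fact that Theano reads its configuration once, at import time):

```python
import os

# Must run before the first `import theano`; Theano reads its
# configuration at import time, so setting the flag later has no effect.
os.environ["THEANO_FLAGS"] = "mode=FAST_COMPILE"
```

Alternatively, you can pass mode='FAST_COMPILE' directly to theano.function, or run the script as THEANO_FLAGS="mode=FAST_COMPILE" python ex_02_detect_negative.py.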
Traceback (most recent call last):
  File "ex_02_detect_negative.py", line 63, in
    f(0.)
  File "/home/danielcanelhas/workspace/Theano/theano/compile/function_module.py", line 588, in __call__
    outputs = self.fn()
  File "/home/danielcanelhas/workspace/Theano/theano/gof/link.py", line 761, in f
    raise_with_op(node, thunk)
  File "/home/danielcanelhas/workspace/Theano/theano/gof/link.py", line 759, in f
    wrapper(i, node, *thunks)
  File "/home/danielcanelhas/workspace/Theano/theano/gof/link.py", line 774, in wrapper
    f(*args)
  File "ex_02_detect_negative.py", line 41, in neg_check
    do_check_on(x, node, fn)
  File "ex_02_detect_negative.py", line 32, in do_check_on
    if var.min() < 0:
AttributeError: 'CudaNdarray' object has no attribute 'min'
Apply node that caused the error: GpuFromHost(x)
Inputs types: [TensorType(float32, scalar)]
Inputs shapes: ['No shapes']
Inputs strides: ['No strides']
Inputs scalar values: ['not scalar']
HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flags optimizer=fast_compile
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint of this apply node.
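The AttributeError itself has a straightforward fix: on the GPU, the value handed to the callback is a CudaNdarray, which does not implement .min(). Copying it to a host NumPy array first avoids that. A sketch of how the check in do_check_on could be rewritten (the function name and signature follow the traceback above; the error message is assumed):

```python
import numpy as np

def do_check_on(var, node, fn):
    # CudaNdarray has no .min(); np.asarray copies the value to the host
    # as a NumPy array, which does (and is a no-op for CPU ndarrays).
    arr = np.asarray(var)
    if arr.min() < 0:
        raise ValueError("Negative value detected in output of " + str(node))
```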
One workaround is to check whether the string form of the fgraph contains "Softmax", though I'm not sure how much you'll like it:
s = str(f.maker.fgraph)
if "Softmax" in s:
    return True
return False
Alternatively:
import theano.sandbox.cuda as CUDA
...
if isinstance(app.op, T.nnet.Softmax) or isinstance(app.op, CUDA.nnet.GpuSoftmax):
    return True
return False
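A middle ground between the two snippets above is to walk the compiled function's graph and match on each op's class name, which catches both the CPU Softmax and GPU variants without importing the CUDA module. A sketch, assuming every relevant op has "Softmax" in its class name:

```python
def uses_softmax(f):
    # f is a compiled theano.function; its optimized graph is reachable
    # via f.maker.fgraph. Matching on the class name covers Softmax,
    # GpuSoftmax, SoftmaxWithBias, and similar variants.
    for node in f.maker.fgraph.apply_nodes:
        if "Softmax" in type(node.op).__name__:
            return True
    return False
```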
Not really an issue, just a question.
I don't see the difference between the two cases in the 03_energy.py example:
if I comment out the _ElemwiseNoGradient class and use T.grad(energy(W, V, H).mean(), W) in grad_expected_energy, I still get the correct answer.
On the other hand, I get an error if I switch the mode to FAST_COMPILE (in both cases).
Error:
MethodNotDefined: ('perform', <class 'theano.sandbox.rng_mrg.GPU_mrg_uniform'>, 'GPU_mrg_uniform')
Apply node that caused the error: GPU_mrg_uniform{CudaNdarrayType(float32, vector),no_inplace}(<CudaNdarrayType(float32, vector)>, TensorConstant{(1,) of 12})
Inputs types: [CudaNdarrayType(float32, vector), TensorType(int32, (True,))]
Inputs shapes: [(12,), (1,)]
Inputs strides: [(1,), (4,)]
Inputs scalar values: ['not scalar', array([12], dtype=int32)]
Backtrace when the node is created:
  File "03_energy.py", line 55, in
    W = rng_factory.normal(size=(nv, nh), dtype=v0.dtype)
  File "/local/lib/python2.7/site-packages/theano/sandbox/rng_mrg.py", line 1328, in normal
    nstreams=nstreams)
  File "/local/lib/python2.7/site-packages/theano/sandbox/rng_mrg.py", line 1210, in uniform
    ndim, dtype, size))
  File "/local/lib/python2.7/site-packages/theano/sandbox/rng_mrg.py", line 545, in new
    return op(rstate, cast(v_size, 'int32'))
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint of this apply node.
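The MethodNotDefined error suggests that GPU_mrg_uniform only ships a C/GPU implementation and has no Python perform method, which the unoptimized linker used by FAST_COMPILE falls back to. One way to check whether the exercise code itself is at fault is to run it on the CPU (a config-fragment sketch; the script name comes from the backtrace above):

```shell
THEANO_FLAGS="device=cpu,mode=FAST_COMPILE" python 03_energy.py
```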
I was getting this error until I turned optimization off:
RuntimeError: Error doing inplace add
Apply node that caused the error: GpuIncSubtensor{InplaceInc;::}(GpuFromHost.0, GpuArrayConstant{1.0})
Toposort index: 1
Inputs types: [GpuArrayType(float32, (False,)), GpuArrayType(float32, ())]
Inputs shapes: [(4,), ()]
Inputs strides: [(4,), ()]
Inputs values: [gpuarray.array([ 0., 0., 0., 0.], dtype=float32), gpuarray.array(1.0, dtype=float32)]
Outputs clients: [[HostFromGpu(gpuarray)(GpuIncSubtensor{InplaceInc;::}.0)]]