nengo / nengo-dl
Deep learning integration for Nengo
Home Page: https://www.nengo.ai/nengo-dl
License: Other
import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

ignored_loss = lambda y_true, y_pred: 0 / 0  # raise ZeroDivisionError

with nengo.Network() as net:
    p = nengo.Probe(nengo.Node(1))

with nengo_dl.Simulator(net) as sim:
    sim.compile(loss=[nengo_dl.losses.Regularize(), ignored_loss])
    sim.fit(n_steps=1, y=np.zeros((1, 1, 1)), epochs=1)
Expected behaviour: this should raise an error or warning saying that extra elements in the loss list are not being used. We can see that ignored_loss is being ignored, since no ZeroDivisionError is raised; if we make it the first element in the loss list then we do get the zero division error.
Context: I was trying this in the hope that it might somehow weight the two loss functions together (normally in Keras one can easily add extra loss functions, like regularization, to other parts of the network). For reference, to do that the correct way, see the pattern in this unit test:
nengo-dl/nengo_dl/tests/test_objectives.py
Lines 38 to 76 in 7363bc3
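For completeness, a minimal sketch of that pattern (the network, shapes, and losses here are illustrative, not copied from the test): give each probe its own loss in sim.compile, attaching the task loss to the output probe and Regularize to a probe on the parameters being regularized; Keras then sums the per-output losses.
import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

with nengo.Network() as net:
    a = nengo.Node([0])
    b = nengo.Node(size_in=1)
    conn = nengo.Connection(a, b)
    p = nengo.Probe(b)
    p_weights = nengo.Probe(conn, "weights")  # probe the parameter to regularize

with nengo_dl.Simulator(net) as sim:
    sim.compile(
        optimizer=tf.optimizers.SGD(0.1),
        # one loss per probe (rather than a flat list); the per-output
        # losses are summed into the overall training loss
        loss={p: tf.losses.mse, p_weights: nengo_dl.losses.Regularize()},
    )
    sim.fit(
        n_steps=1,
        y={p: np.zeros((1, 1, 1)), p_weights: np.zeros((1, 1, 1))},
        epochs=1,
    )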
n_neurons = 100
# transform = np.zeros((n_neurons, 1))  # <-- okay
transform = nengo.dists.Choice([0])  # <-- bad

with nengo.Network() as model:
    x = nengo.Ensemble(n_neurons, 1)
    nengo.Connection(nengo.Node(0), x.neurons, transform=transform)

with nengo_dl.Simulator(model) as sim:
    sim.freeze_params(model)
Build finished in 0:00:00
Optimization finished in 0:00:00
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
<ipython-input-9-dabbd12ddcb0> in <module>
10
11 with nengo_dl.Simulator(model) as sim:
---> 12 sim.freeze_params(model)
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/nengo/utils/magic.py in __call__(self, *args, **kwargs)
179 return self.wrapper(wrapped, instance, args, kwargs)
180 else:
--> 181 return self.wrapper(self.__wrapped__, self.instance, args, kwargs)
182 else:
183 instance = getattr(self.__wrapped__, "__self__", None)
~/git/nengo-dl/nengo_dl/simulator.py in require_open(wrapped, instance, args, kwargs)
64 )
65
---> 66 return wrapped(*args, **kwargs)
67
68
~/git/nengo-dl/nengo_dl/simulator.py in freeze_params(self, objs)
1278 for o, params in zip(todo, self.get_nengo_params(todo)):
1279 for k, v in params.items():
-> 1280 setattr(o, k, v)
1281
1282 def get_nengo_params(self, nengo_objs, as_dict=False):
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/nengo/base.py in __setattr__(self, name, val)
106 SyntaxWarning,
107 )
--> 108 super().__setattr__(name, val)
109
110 def __str__(self):
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/nengo/config.py in __setattr__(self, name, val)
490 except ValidationError:
491 exc_info = sys.exc_info()
--> 492 raise exc_info[1].with_traceback(None)
493 else:
494 super().__setattr__(name, val)
ValidationError: init: Shape of initial value (100,) does not match expected shape (100, 1)
I've been following the from_nengo example (https://www.nengo.ai/nengo-dl/examples/from-nengo.html) as a starting point, and when I include the following line, which is in the example,
net.config[nengo.Ensemble].trainable = True
I get this error:
ValueError: Variable <tf.Variable 'TensorGraph/base_params/float32_1000_12288:0' shape=(1000, 12288) dtype=float32> has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
I've attached a minimal example that recreates the issue. You can try it out by commenting/uncommenting lines 53-54. I had to change the filetype to .txt to upload it here.
The 1.13 release candidate is out, so we need to check compatibility and perform any necessary updates.
Currently, the lif_smoothing config option only applies to nengo.LIF neurons, not to nengo.LIFRate neurons. It would make sense for it to apply to both.
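For reference, a minimal usage sketch of the option in question (values illustrative):
import nengo
import nengo_dl

with nengo.Network() as net:
    nengo_dl.configure_settings(lif_smoothing=0.1)
    a = nengo.Ensemble(10, 1, neuron_type=nengo.LIF())      # smoothed during training
    b = nengo.Ensemble(10, 1, neuron_type=nengo.LIFRate())  # currently unaffected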
Minimal Reproducer
do_bug = True

import pickle

import nengo
import nengo_dl

with nengo.Network() as model:
    nengo.Ensemble(100, 1)

with nengo_dl.Simulator(model) as sim:
    if do_bug:
        sim.freeze_params(model)

s = pickle.dumps(model)
pickle.loads(s)
Error
ValidationError Traceback (most recent call last)
<ipython-input-12-6a0fd89abc6d> in <module>()
13
14 s = pickle.dumps(model)
---> 15 pickle.loads(s)
~/git/nengo/nengo/base.py in __setstate__(self, state)
89 for attr in self.params:
90 if attr in state:
---> 91 setattr(self, attr, state.pop(attr))
92
93 for k, v in state.items():
~/git/nengo/nengo/base.py in __setattr__(self, name, val)
105 "Did you mean to change an existing attribute?" % (name, self),
106 SyntaxWarning)
--> 107 super().__setattr__(name, val)
108
109 def __str__(self):
~/git/nengo/nengo/config.py in __setattr__(self, name, val)
453 except ValidationError:
454 exc_info = sys.exc_info()
--> 455 raise exc_info[1].with_traceback(None)
456 else:
457 super().__setattr__(name, val)
ValidationError: Ensemble.n_neurons: Unconfigurable parameters have no defaults. Please ensure the value of the parameter is set before trying to access it.
Expected Behaviour
I expect to be able to unpickle the model. The reason I am using freeze_params here is so that I can save a trained nengo_dl model and then run the same model later on a different backend. One work-around is to keep the trained model in memory, avoiding pickling it between training and testing; this, however, means that I need to retrain the model every time I want to test it. The other work-around is to manually save all of the parameters for each layer and then manually reconstruct the model later.
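A sketch of that manual workaround (the network here is illustrative): extract constructor-style parameters with get_nengo_params, pickle those instead of the network, and re-apply them to a freshly constructed copy of the model.
import pickle

import nengo
import nengo_dl

with nengo.Network() as model:
    ens = nengo.Ensemble(100, 1)

with nengo_dl.Simulator(model) as sim:
    # ... training ...
    params = sim.get_nengo_params([ens])  # list of constructor kwargs
    with open("params.pkl", "wb") as f:
        pickle.dump(params, f)

# later, possibly under a different backend
with open("params.pkl", "rb") as f:
    params = pickle.load(f)
with nengo.Network() as model2:
    ens2 = nengo.Ensemble(100, 1, **params[0])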
Versions
Current master branch on both nengo and nengo_dl.
Testing issue syncing, ignore
Steps to reproduce:
Open docs/examples/spiking-mnist.ipynb
Change sim.fit(...) to sim.fit(..., validation_split=0.55)
Set do_training = True, then run the notebook
Train on 26999 samples, validate on 33001 samples
Epoch 1/200
26800/26999 [============================>.] - ETA: 0s - loss: 0.2636 - probe_loss: 0.2636
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-10-c2eaadcf9727> in <module>
6 loss={out_p: tf.losses.SparseCategoricalCrossentropy(from_logits=True)}
7 )
----> 8 sim.fit({inp: train_images}, {out_p: train_labels}, epochs=200, validation_split=0.55)
9
10 # save the parameters to file
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/nengo/utils/magic.py in __call__(self, *args, **kwargs)
179 return self.wrapper(wrapped, instance, args, kwargs)
180 else:
--> 181 return self.wrapper(self.__wrapped__, self.instance, args, kwargs)
182 else:
183 instance = getattr(self.__wrapped__, "__self__", None)
~/git/nengo-dl/nengo_dl/simulator.py in require_open(wrapped, instance, args, kwargs)
64 )
65
---> 66 return wrapped(*args, **kwargs)
67
68
~/git/nengo-dl/nengo_dl/simulator.py in fit(self, x, y, n_steps, stateful, **kwargs)
847
848 return self._call_keras(
--> 849 "fit", x=x, y=y, n_steps=n_steps, stateful=stateful, **kwargs
850 )
851
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/nengo/utils/magic.py in __call__(self, *args, **kwargs)
179 return self.wrapper(wrapped, instance, args, kwargs)
180 else:
--> 181 return self.wrapper(self.__wrapped__, self.instance, args, kwargs)
182 else:
183 instance = getattr(self.__wrapped__, "__self__", None)
~/git/nengo-dl/nengo_dl/simulator.py in with_self(wrapped, instance, args, kwargs)
48 instance.tensor_graph.device
49 ):
---> 50 output = wrapped(*args, **kwargs)
51 tf.keras.backend.set_floatx(keras_dtype)
52
~/git/nengo-dl/nengo_dl/simulator.py in _call_keras(self, func_type, x, y, n_steps, stateful, **kwargs)
995 func_args = dict(x=x, y=y, **kwargs)
996
--> 997 outputs = getattr(self.keras_model, func_type)(**func_args)
998
999 # update n_steps/time
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
726 max_queue_size=max_queue_size,
727 workers=workers,
--> 728 use_multiprocessing=use_multiprocessing)
729
730 def evaluate(self,
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
672 validation_steps=validation_steps,
673 validation_freq=validation_freq,
--> 674 steps_name='steps_per_epoch')
675
676 def evaluate(self,
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
391
392 # Get outputs.
--> 393 batch_outs = f(ins_batch)
394 if not isinstance(batch_outs, list):
395 batch_outs = [batch_outs]
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/keras/backend.py in __call__(self, inputs)
3578
3579 fetched = self._callable_fn(*array_vals,
-> 3580 run_metadata=self.run_metadata)
3581 self._call_fetch_callbacks(fetched[-len(self._fetches):])
3582 output_structure = nest.pack_sequence_as(
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/client/session.py in __call__(self, *args, **kwargs)
1470 ret = tf_session.TF_SessionRunCallable(self._session._session,
1471 self._handle, args,
-> 1472 run_metadata_ptr)
1473 if run_metadata:
1474 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: Input to reshape is a tensor with 156016 values, but the requested shape has 156800
[[{{node TensorGraph/while/iteration_0/DotIncBuilder/Reshape_1}}]]
[[loss_2/mul/_187]]
(1) Invalid argument: Input to reshape is a tensor with 156016 values, but the requested shape has 156800
[[{{node TensorGraph/while/iteration_0/DotIncBuilder/Reshape_1}}]]
0 successful operations.
0 derived errors ignored.
This error seems to appear when the size of the validation split (here, 33001) is not evenly divisible by the minibatch size (here, 200).
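Until that is fixed, a possible workaround sketch is to pick a split where both portions are divisible by the minibatch size (variable names taken from the example):
minibatch_size = 200
n_samples = train_images.shape[0]  # 60000 in the MNIST example

# 0.5 gives a 30000/30000 split, both multiples of minibatch_size,
# whereas 0.55 gives 26999/33001, which triggers the reshape error
sim.fit({inp: train_images}, {out_p: train_labels},
        epochs=200, validation_split=0.5)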
Versions:
nengo-dl
tensorflow-gpu (installed via conda install tensorflow-gpu)
I'm attempting to run the spiking_mnist demo with the fashion-MNIST dataset instead of digits, but am getting a "ValidationError: input data: should have rank 3 (batch_size, n_steps, dimensions), found rank 4" error when running sim.loss. I converted to one-hot, and it seems to be the exact same format as the MNIST example. What could the rank issue be? Running digits works, but fashion-MNIST fails.
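Without seeing the code, a guess (a sketch, assuming the Keras-style fashion-MNIST loader): the images may still have their 2-D (28, 28) shape, giving rank-4 data overall, and need to be flattened to the rank-3 (batch_size, n_steps, dimensions) layout:
import tensorflow as tf

(train_images, train_labels), _ = tf.keras.datasets.fashion_mnist.load_data()

# flatten each (28, 28) image and add the time axis:
# (n_samples, 28, 28) -> (n_samples, 1, 784)
train_images = train_images.reshape((train_images.shape[0], 1, -1))
print(train_images.shape)  # rank 3, as the simulator expects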
Hi,
In Nengo, how can I implement an autoencoder in spiking neural networks? Is that possible in Nengo?
Can the use of multiple GPUs be enabled? If so, how?
When attempting to run spiking-mnist.ipynb exactly as written in the example, I get
TypeError: loss() missing 1 required positional argument: 'targets'
Any ideas what may be causing this, or possible solutions?
Hi,
I was getting started with nengo-dl. I started with the example code in the repository README, but the code crashes at nengo_dl.Simulator.
I tried the other example code provided, but I get the same error. I also tried running on Google Colab and in a virtual environment, in case there is a conflict with the versions on my machine, but I still got the same error.
Is it expecting specific versions of Python or TensorFlow? I am using Python 3.7 and TensorFlow 2.1.
This is the error I got:
Traceback (most recent call last):
  File "nengodl_digit_class.py", line 11, in <module>
    with nengo_dl.Simulator(net, seed=0, minibatch_size=200, progress_bar=True) as sim:  # this is the only line that changes
  File "/Users/marinaneseem/Documents/Research/SNNs/src/nengo-dl/nengo_dl/simulator.py", line 516, in __init__
    seed,
  File "/usr/local/lib/python3.7/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/Users/marinaneseem/Documents/Research/SNNs/src/nengo-dl/nengo_dl/tensor_graph.py", line 124, in __init__
    old_operators = operators
  File "/usr/local/lib/python3.7/site-packages/progressbar/bar.py", line 548, in __exit__
    self.finish(dirty=bool(exc_type))
TypeError: finish() got an unexpected keyword argument 'dirty'
Thanks.
Trying to optimize a model that contains BatchNormalization Layers inside TensorNodes results in an error. E.g.,
import tensorflow as tf
import nengo
import nengo_dl
import numpy as np

with nengo.Network() as net:
    a = nengo.Node([0])
    b = nengo_dl.Layer(tf.keras.layers.BatchNormalization())(a)
    p = nengo.Probe(b)

with nengo_dl.Simulator(net) as sim:
    sim.compile(optimizer=tf.optimizers.SGD(0), loss=tf.losses.mse)
    sim.fit(np.ones((1, 1, 1)), np.ones((1, 1, 1)))
tensorflow.python.framework.errors_impl.InvalidArgumentError: {{node training_1/group_deps}} has inputs from different frames. The input {{node TensorGraph/while/iteration_0/SimTensorNodeBuilder/cond_2/Merge}} is in frame 'TensorGraph/while/while_context'. The input {{node loss/mul}} is in frame ''.
I'd guess that using BatchNormalization layers inside any TensorFlow while loop results in the same error, but I haven't looked into making a minimal example yet.
Line 353 in b9e7ae5
Building network
Build finished in 0:00:01
| # Optimizing graph | 0:00:00
Traceback (most recent call last):
  File "MSO_model.py", line 158, in <module>
    run_MSO_model()
  File "MSO_model.py", line 128, in run_MSO_model
    sim = build_model()
  File "MSO_model.py", line 119, in build_model
    sim = nengo_dl.Simulator(model, dt=dt, unroll_simulation=1)
  File "/home/cnrg-ntu/nengo-dl/nengo_dl/simulator.py", line 138, in __init__
    max_value=None) as progress:
  File "/home/cnrg-ntu/nengo-dl/nengo_dl/utils.py", line 353, in __enter__
    return self.start()
  File "/home/cnrg-ntu/nengo-dl/nengo_dl/utils.py", line 294, in start
    self.thread.start()
  File "/usr/lib/python2.7/threading.py", line 730, in start
    raise RuntimeError("threads can only be started once")
RuntimeError: threads can only be started once
| # Optimizing graph | 0:00:00
Just changing that line to
return self
seems to fix it.
I was just reading this page, and it recommends not using feed_dict with sess.run, since apparently it's often slow. This is how nengo_dl currently does things.
The first step is to determine whether this is actually a problem in nengo_dl, by benchmarking the feed_dict method against the alternatives (I'm not sure what all the alternatives are).
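A rough benchmarking sketch of the kind of comparison meant here (TF 1.x style, matching the API nengo_dl used at the time; sizes arbitrary), pitting feed_dict against a tf.data iterator bound into the graph:
import time

import numpy as np
import tensorflow as tf

data = np.random.rand(1000, 1000).astype(np.float32)

# feed_dict version: data is fed in at each sess.run call
x = tf.placeholder(tf.float32, shape=data.shape)
y_feed = tf.reduce_sum(x)

# tf.data version: data is delivered through an iterator in the graph
dataset = tf.data.Dataset.from_tensors(data).repeat()
y_data = tf.reduce_sum(dataset.make_one_shot_iterator().get_next())

with tf.Session() as sess:
    start = time.time()
    for _ in range(100):
        sess.run(y_feed, feed_dict={x: data})
    print("feed_dict:", time.time() - start)

    start = time.time()
    for _ in range(100):
        sess.run(y_data)
    print("tf.data:  ", time.time() - start)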
Hi,
I'm trying to use nengo-dl to train on a customized dataset for a classification task. The inputs to my dataset are 224x224 greyscale images, and the output is one of 56 classes. I trained my data with a VGG-like CNN architecture in Keras; it converges to 90% accuracy without any fine-tuning or data augmentation. I used the same architecture in Nengo, but it does not seem to converge. I'm new to this framework and just changed a few lines from the MNIST example. Could you help with the possible issues in my code?
import json
import os

import numpy as np
import nengo
import nengo_dl
import tensorflow as tf

with open('config.json', 'r') as fp:
    cfg = json.load(fp)
input_path = os.path.join(cfg['root_path'], "train_test_data")
# x_train shape: (n_samples, 224*224), y_train shape: (n_samples, 56) - onehot encoded
x_train, y_train = load_data(input_path)

h, w = 224, 224
with nengo.Network() as net:
    net.config[nengo.Ensemble].max_rates = nengo.dists.Choice([100])
    net.config[nengo.Ensemble].intercepts = nengo.dists.Choice([0])
    neuron_type = nengo.LIF(amplitude=0.01)
    nengo_dl.configure_settings(trainable=False)

    kernel_size = 32
    inp = nengo.Node([0] * h * w)

    x = nengo_dl.tensor_layer(inp, tf.layers.conv2d, shape_in=(h, w, 1),
                              filters=kernel_size, kernel_size=3)
    x = nengo_dl.tensor_layer(x, neuron_type)
    h, w = h - 2, w - 2
    x = nengo_dl.tensor_layer(x, tf.layers.conv2d, shape_in=(h, w, kernel_size),
                              filters=kernel_size, kernel_size=3)
    x = nengo_dl.tensor_layer(x, neuron_type)
    h, w = h - 2, w - 2
    x = nengo_dl.tensor_layer(x, tf.layers.average_pooling2d, shape_in=(h, w, kernel_size),
                              pool_size=2, strides=2)
    h, w = h // 2, w // 2

    x = nengo_dl.tensor_layer(x, tf.layers.conv2d, shape_in=(h, w, kernel_size),
                              filters=kernel_size * 2, kernel_size=3)
    x = nengo_dl.tensor_layer(x, neuron_type)
    h, w = h - 2, w - 2
    x = nengo_dl.tensor_layer(x, tf.layers.conv2d, shape_in=(h, w, kernel_size * 2),
                              filters=kernel_size * 2, kernel_size=3)
    x = nengo_dl.tensor_layer(x, neuron_type)
    h, w = h - 2, w - 2
    x = nengo_dl.tensor_layer(x, tf.layers.average_pooling2d, shape_in=(h, w, kernel_size * 2),
                              pool_size=2, strides=2)
    h, w = h // 2, w // 2

    x = nengo_dl.tensor_layer(x, tf.layers.conv2d, shape_in=(h, w, kernel_size * 2),
                              filters=kernel_size * 4, kernel_size=3)
    x = nengo_dl.tensor_layer(x, neuron_type)
    h, w = h - 2, w - 2
    x = nengo_dl.tensor_layer(x, tf.layers.conv2d, shape_in=(h, w, kernel_size * 4),
                              filters=kernel_size * 4, kernel_size=3)
    x = nengo_dl.tensor_layer(x, neuron_type)
    h, w = h - 2, w - 2
    x = nengo_dl.tensor_layer(x, tf.layers.conv2d, shape_in=(h, w, kernel_size * 4),
                              filters=kernel_size * 4, kernel_size=3)
    x = nengo_dl.tensor_layer(x, neuron_type)
    h, w = h - 2, w - 2
    x = nengo_dl.tensor_layer(x, tf.layers.average_pooling2d, shape_in=(h, w, kernel_size * 4),
                              pool_size=2, strides=2)
    h, w = h // 2, w // 2

    x = nengo_dl.tensor_layer(x, tf.layers.conv2d, shape_in=(h, w, kernel_size * 4),
                              filters=kernel_size * 8, kernel_size=3)
    x = nengo_dl.tensor_layer(x, neuron_type)
    h, w = h - 2, w - 2
    x = nengo_dl.tensor_layer(x, tf.layers.conv2d, shape_in=(h, w, kernel_size * 8),
                              filters=kernel_size * 8, kernel_size=3)
    x = nengo_dl.tensor_layer(x, neuron_type)
    h, w = h - 2, w - 2
    x = nengo_dl.tensor_layer(x, tf.layers.conv2d, shape_in=(h, w, kernel_size * 8),
                              filters=kernel_size * 8, kernel_size=3)
    x = nengo_dl.tensor_layer(x, neuron_type)
    h, w = h - 2, w - 2
    x = nengo_dl.tensor_layer(x, tf.layers.average_pooling2d, shape_in=(h, w, kernel_size * 8),
                              pool_size=2, strides=2)
    h, w = h // 2, w // 2

    # linear readout
    x = nengo_dl.tensor_layer(x, tf.layers.dense, units=56)

    out_p = nengo.Probe(x)
    out_p_filt = nengo.Probe(x, synapse=0.1)

minibatch_size = 8
sim = nengo_dl.Simulator(net, minibatch_size=minibatch_size)

# add the single timestep to the training data
train_data = {inp: x_train[:, None, :],
              out_p: y_train[:, None, :]}

n_steps = 30
n_test = 1000
test_data = {
    inp: np.tile(x_train[:n_test, None, :], (1, n_steps, 1)),
    out_p_filt: np.tile(y_train[:n_test, None, :], (1, n_steps, 1))}

def objective(outputs, targets):
    return tf.nn.softmax_cross_entropy_with_logits_v2(
        labels=targets, logits=outputs)

# opt = tf.train.RMSPropOptimizer(learning_rate=0.001)
opt = tf.train.GradientDescentOptimizer(learning_rate=0.0001)

def classification_error(outputs, targets):
    return 100 * tf.reduce_mean(
        tf.cast(tf.not_equal(tf.argmax(outputs[:, -1], axis=-1),
                             tf.argmax(targets[:, -1], axis=-1)),
                tf.float32))

# print("error before training: %.2f%%" % sim.loss(
#     test_data, {out_p_filt: classification_error}))

do_training = True
epochs = 100
weights_name = "./data/temp3"
weights_name_ep = weights_name
if do_training:
    # run training
    for i in range(epochs):
        if os.path.exists(weights_name_ep + ".index"):
            sim.load_params(weights_name_ep)
            print("load", weights_name_ep)
        sim.train(train_data, opt, objective={out_p: objective}, n_epochs=1)
        # save the parameters to file
        weights_name_ep = weights_name + "_" + str(i)
        sim.save_params(weights_name_ep)
sim.close()
To reproduce, visit: https://www.nengo.ai/nengo-dl/search.html?q=train&check_keywords=yes&area=default#
The page will say "Searching..." indefinitely, due to a JS error.
The console shows the following error in Chrome:
searchtools.js:144 Uncaught ReferenceError: Stemmer is not defined
at Object.query (searchtools.js:144)
at Object.setIndex (searchtools.js:83)
at <anonymous>:1:8
at p (jquery.js:2)
at Function.globalEval (jquery.js:2)
at text script (jquery.js:4)
at Qb (jquery.js:4)
at A (jquery.js:4)
at XMLHttpRequest.<anonymous> (jquery.js:4)
and in Firefox:
ReferenceError: Stemmer is not defined  searchtools.js:144:9
query
https://www.nengo.ai/nengo-dl/_static/searchtools.js:144:9
setIndex
https://www.nengo.ai/nengo-dl/_static/searchtools.js:83:7
<anonymous>
https://www.nengo.ai/nengo-dl/search.html#:1:1
p
https://www.nengo.ai/nengo-dl/_static/jquery.js:2:516
globalEval
https://www.nengo.ai/nengo-dl/_static/jquery.js:2:2581
text script
https://www.nengo.ai/nengo-dl/_static/jquery.js:4:17034
Qb
https://www.nengo.ai/nengo-dl/_static/jquery.js:4:10192
A
https://www.nengo.ai/nengo-dl/_static/jquery.js:4:13719
c/<
https://www.nengo.ai/nengo-dl/_static/jquery.js:4:16323
Hi, thank you for developing this great project.
I'm having trouble understanding how to use a class with a TensorNode. Specifically, I can't figure out how to set up a class that takes a batch of inputs; the example in the documentation only uses one image.
I've tried to make a minimal example that reproduces my issue.
MNISTY_SHAPE = (28, 28, 1)
SIZE_OUT = 10

class SimpleNode:
    def __call__(self, t, x):
        img = tf.reshape(tf.cast(x, tf.float32), (-1,) + MNISTY_SHAPE)
        conv1 = tf.layers.conv2d(img, filters=32, kernel_size=(5, 5), strides=(3, 3), padding='VALID')
        maxpool1 = tf.layers.max_pooling2d(conv1,
                                           pool_size=(2, 2),
                                           strides=(2, 2),
                                           padding='VALID')
        input_shape = maxpool1.get_shape().as_list()[1:]
        n_input_units = np.prod(input_shape)
        n_output_units = SIZE_OUT
        weights_shape = [n_input_units, n_output_units]
        fc1W = tf.get_variable(name='fc1W_weights',
                               shape=weights_shape)
        fc1b = tf.get_variable(name='fc1W_biases',
                               initializer=lambda shape, dtype, partition_info: tf.zeros(shape=shape, dtype=dtype),
                               shape=[n_output_units])
        fc1 = tf.nn.relu_layer(
            tf.reshape(maxpool1, [-1, int(np.prod(maxpool1.get_shape()[1:]))]), fc1W, fc1b)
        probabilities = tf.nn.softmax(fc1, name='probabilities')

net = nengo.Network()
with net:
    input_shape = np.prod(MNISTY_SHAPE)
    input_node = nengo.Node(output=np.zeros(MNISTY_SHAPE).flatten())
    simplenode = nengo_dl.TensorNode(SimpleNode(), size_in=input_shape, size_out=SIZE_OUT)
    nengo.Connection(input_node, simplenode, synapse=None)

minibatch_size = 20
sim = nengo_dl.Simulator(net, minibatch_size=minibatch_size)
When I run this, I get a crash; I think it's from the TensorFlow "VM" after the sim has been built.
# ...(very long traceback from my_environment/site-packages/tensorflow)...
ValueError: Tried to convert 'x' to a tensor and failed. Error: None values not supported.
Is there something I'm doing wrong in setting up the __call__ function of the class? I'm guessing it's because there's a None in the shape of the tensor being passed as x?
I'm trying to import nengo_dl (2.2.0), but it shows the following error:
Traceback (most recent call last):
  File "/home/snn/LIF/test.py", line 2, in <module>
    import nengo_dl
  File "/home/anaconda3/envs/nengo_dl/lib/python3.5/site-packages/nengo_dl/__init__.py", line 32, in <module>
    from nengo_dl.compat import tf_compat
  File "/home/anaconda3/envs/nengo_dl/lib/python3.5/site-packages/nengo_dl/compat.py", line 53, in <module>
    if LooseVersion(nengo.__version__) < "3.0.0":
AttributeError: module 'nengo' has no attribute '__version__'
Random operations (e.g. tf.random.uniform(...)) have an underlying state that controls the sequence of random numbers that are generated. Setting the TensorFlow seed (tf.random.set_seed(n)) does not reset that state, so e.g.
tf.random.set_seed(0)
x = tf.random.uniform(...)
sess.run(x) # --> produces some number A
tf.random.set_seed(0)
sess.run(x) # --> produces a different number B
More concretely for NengoDL, this means that calling Simulator.reset() does not reset that internal RNG state either, so
sim.run(...)
sim.reset()
sim.run(...)
may contain different random sequences in the two runs. This is probably surprising to most users. Note, however, that currently there is no TensorFlow randomness in a standard Nengo model (all the randomness is through numpy), so the only way this would occur is if someone has built a TensorNode that contains random ops like tf.random.uniform.
There is currently no way to reset the TensorFlow RNG other than completely rebuilding the graph/simulator. However, there is experimental support for a different RNG implementation that does support resetting, in tf.random.experimental (https://www.tensorflow.org/api_docs/python/tf/random/experimental). I looked into supporting this (which would basically mean resetting the tf.random.experimental.global_generator on Simulator.reset), but it still seems a bit buggy. We should investigate this more if this approach leaves the "experimental" status, though.
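For reference, a minimal sketch of what resetting via that experimental API might look like (based on the linked TF docs; not what NengoDL currently does):
import tensorflow as tf

def reset_tf_rng(seed=0):
    # replace the global generator, restoring its state from a fixed seed
    tf.random.experimental.set_global_generator(
        tf.random.experimental.Generator.from_seed(seed))

reset_tf_rng(0)
a = tf.random.experimental.get_global_generator().uniform((3,))
reset_tf_rng(0)
b = tf.random.experimental.get_global_generator().uniform((3,))
# unlike tf.random.set_seed, this reproduces the same sequence (a == b)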
The current nengo_dl.Simulator assumes that node.output does not change after the model is built. This means that it is possible to confuse the Simulator if those values are changed after build time:
def test_node_output_change(Simulator, seed):
    for pre_val in [0, lambda t: 0]:
        for post_val in [1, lambda t: 1]:
            with nengo.Network(seed=seed) as net:
                inp = nengo.Node(pre_val)
                ens = nengo.Ensemble(n_neurons=100, dimensions=1)
                nengo.Connection(inp, ens)
                p = nengo.Probe(ens)
            with Simulator(net) as sim:
                inp.output = post_val
                sim.run(0.05)
            assert np.abs(np.mean(sim.data[p]) - 0.0) < 0.01
The sliders in nengo_gui require this feature in order to work!
Steps to reproduce:
Open docs/examples/lmu.ipynb and change the network definition to:
with nengo.Network(seed=seed) as net:
    nengo_dl.configure_settings(
        trainable=None, stateful=False, keep_history=False,
    )
    inp = nengo.Node(np.zeros(train_images.shape[-1]))
    h = nengo_dl.Layer(tf.keras.layers.LSTM(units=128))(inp)
    out = nengo_dl.Layer(tf.keras.layers.Dense(units=10))(h)
    p = nengo.Probe(out)
Build finished in 0:00:00
Optimization finished in 0:00:00
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-25-fc54bb9ca5d2> in <module>
12
13 with nengo_dl.Simulator(
---> 14 net, minibatch_size=100, unroll_simulation=8) as sim:
15 sim.compile(
16 loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
~/git/nengo-dl/nengo_dl/simulator.py in __init__(self, network, dt, seed, model, device, unroll_simulation, minibatch_size, progress_bar)
510 # build keras models
511 self.graph = tf.Graph()
--> 512 self._build_keras()
513
514 # initialize sim attributes
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/nengo/utils/magic.py in __call__(self, *args, **kwargs)
179 return self.wrapper(wrapped, instance, args, kwargs)
180 else:
--> 181 return self.wrapper(self.__wrapped__, self.instance, args, kwargs)
182 else:
183 instance = getattr(self.__wrapped__, "__self__", None)
~/git/nengo-dl/nengo_dl/simulator.py in with_self(wrapped, instance, args, kwargs)
48 instance.tensor_graph.device
49 ):
---> 50 output = wrapped(*args, **kwargs)
51 tf.keras.backend.set_floatx(keras_dtype)
52
~/git/nengo-dl/nengo_dl/simulator.py in _build_keras(self)
535 # if the global learning phase is set, use that
536 training=backend._GRAPH_LEARNING_PHASES.get(
--> 537 backend._DUMMY_EAGER_GRAPH, None
538 ),
539 ),
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
845 outputs = base_layer_utils.mark_as_return(outputs, acd)
846 else:
--> 847 outputs = call_fn(cast_inputs, *args, **kwargs)
848
849 except errors.OperatorNotAllowedInGraphError as e:
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/autograph/impl/api.py in wrapper(*args, **kwargs)
290 def wrapper(*args, **kwargs):
291 with ag_ctx.ControlStatusCtx(status=ag_ctx.Status.DISABLED):
--> 292 return func(*args, **kwargs)
293
294 if inspect.isfunction(func) or inspect.ismethod(func):
~/git/nengo-dl/nengo_dl/tensor_graph.py in call(self, inputs, training, progress, stateful)
400 with progress.sub("build stage", max_value=len(self.plan) * self.unroll) as sub:
401 steps_run, probe_arrays, final_internal_state = (
--> 402 self._build_loop(sub) if self.use_loop else self._build_no_loop(sub)
403 )
404
~/git/nengo-dl/nengo_dl/tensor_graph.py in _build_loop(self, progress)
514 loop_vars=loop_vars,
515 parallel_iterations=1, # TODO: parallel iterations work in eager mode
--> 516 back_prop=not self.inference_only,
517 )
518
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py in while_loop_v2(cond, body, loop_vars, shape_invariants, parallel_iterations, back_prop, swap_memory, maximum_iterations, name)
2476 name=name,
2477 maximum_iterations=maximum_iterations,
-> 2478 return_same_structure=True)
2479
2480
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py in while_loop(cond, body, loop_vars, shape_invariants, parallel_iterations, back_prop, swap_memory, name, maximum_iterations, return_same_structure)
2751 ops.add_to_collection(ops.GraphKeys.WHILE_CONTEXT, loop_context)
2752 result = loop_context.BuildLoop(cond, body, loop_vars, shape_invariants,
-> 2753 return_same_structure)
2754 if maximum_iterations is not None:
2755 return result[1]
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py in BuildLoop(self, pred, body, loop_vars, shape_invariants, return_same_structure)
2243 with ops.get_default_graph()._mutation_lock(): # pylint: disable=protected-access
2244 original_body_result, exit_vars = self._BuildLoop(
-> 2245 pred, body, original_loop_vars, loop_vars, shape_invariants)
2246 finally:
2247 self.Exit()
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/ops/control_flow_ops.py in _BuildLoop(self, pred, body, original_loop_vars, loop_vars, shape_invariants)
2168 expand_composites=True)
2169 pre_summaries = ops.get_collection(ops.GraphKeys._SUMMARY_COLLECTION) # pylint: disable=protected-access
-> 2170 body_result = body(*packed_vars_for_body)
2171 post_summaries = ops.get_collection(ops.GraphKeys._SUMMARY_COLLECTION) # pylint: disable=protected-access
2172 if not nest.is_sequence_or_composite(body_result):
~/git/nengo-dl/nengo_dl/tensor_graph.py in loop_body(loop_i, n_steps, probe_arrays, saved_state, base_params)
486 )
487
--> 488 loop_i = self._build_inner_loop(loop_i, update_probes, progress)
489
490 state_arrays = tuple(self.signals.bases[key] for key in self.saved_state)
~/git/nengo-dl/nengo_dl/tensor_graph.py in _build_inner_loop(self, loop_i, update_probes, progress)
658 with tf.control_dependencies([loop_i]):
659 # build operators
--> 660 side_effects = self.op_builder.build(progress)
661
662 logger.debug("collecting probe tensors")
~/git/nengo-dl/nengo_dl/builder.py in build(self, progress)
98
99 with self.name_scope(ops):
--> 100 output = self.op_builds[ops].build_step(self.signals)
101
102 if isinstance(output, (tf.Tensor, tf.Variable)):
~/git/nengo-dl/nengo_dl/tensor_node.py in build_step(self, signals)
352 if len(inputs) == 1:
353 inputs = inputs[0]
--> 354 output = self.func.call(inputs)
355 else:
356 output = self.func(*inputs)
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/keras/layers/recurrent_v2.py in call(self, inputs, mask, training, initial_state)
918 input_length=timesteps,
919 time_major=self.time_major,
--> 920 zero_output_for_mask=self.zero_output_for_mask)
921 runtime = _runtime(_RUNTIME_UNKNOWN)
922 else:
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/keras/backend.py in rnn(step_function, inputs, initial_states, go_backwards, mask, constants, unroll, input_length, time_major, zero_output_for_mask)
3902
3903 for input_ in flatted_inputs:
-> 3904 input_.shape.with_rank_at_least(3)
3905
3906 if mask is not None:
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_shape.py in with_rank_at_least(self, rank)
1030 """
1031 if self.rank is not None and self.rank < rank:
-> 1032 raise ValueError("Shape %s must have rank at least %d" % (self, rank))
1033 else:
1034 return self
ValueError: Shape (1, 100) must have rank at least 3
I have also tried adding unroll=True to the LSTM, and/or configuring stateful=True, and/or configuring keep_history=True under nengo_dl.configure_settings.
This is pointed out in soft_reset but not in reset. I've suggested some changes in commit de7c89d, which is in the reset-docstrings branch.
Should check out what impact this has on memory/accuracy/performance.
Reloading model parameters can introduce unexpected indeterminacy to simulations. Minimal example:
import nengo
import nengo.spa as spa
import nengo_dl
import numpy as np
import tensorflow as tf

seed = 98
dims = 32

vocab = spa.Vocabulary(dimensions=dims)
vocab.parse('TRACE')
vocab.parse('CUE')
vocab.add('OUTPUT', vocab.parse('TRACE*~CUE').v)

with nengo.Network(seed=seed) as net:
    net.config[nengo.Ensemble].neuron_type = nengo.RectifiedLinear()
    net.config[nengo.Connection].synapse = None

    trace_inp = nengo.Node(vocab['TRACE'].v)
    cue_inp = nengo.Node(vocab['CUE'].v)

    extractor = nengo.networks.CircularConvolution(5, dims, invert_b=True)
    nengo.Connection(trace_inp, extractor.input_a)
    nengo.Connection(cue_inp, extractor.input_b)

    out = nengo.Probe(extractor.output)

inp_array = vocab['TRACE'].v[None, None, :]
cue_array = vocab['CUE'].v[None, None, :]
out_array = vocab['OUTPUT'].v[None, None, :]

inputs = {trace_inp: inp_array, cue_inp: cue_array}
outputs = {out: out_array}

with nengo_dl.Simulator(net, seed=seed) as sim1:
    optimizer = tf.train.RMSPropOptimizer(1e-3)
    print('loss pre-training ', sim1.loss(inputs, outputs, 'mse'))
    sim1.train(inputs, outputs, optimizer, n_epochs=5, objective='mse')
    print('loss post-training ', sim1.loss(inputs, outputs, 'mse'))
    sim1.save_params('./example-params')

with nengo_dl.Simulator(net, seed=seed) as sim2:
    sim2.load_params('./example-params')
    print('loss post-reloading', sim2.loss(inputs, outputs, 'mse'))
The loss computed after reloading the saved model parameters will not be equivalent to the loss computed prior to saving these parameters.
(Feature request) Since the synaptic operations are differentiable with respect to the coefficients in their difference equations, they can also be optimized via backpropagation.
After learning, the synaptic coefficients can be mapped back onto the time-constants of the synapse via the poles of the new discrete transfer function, as follows:
import numpy as np
from nengolib.synapses import DoubleExp
from nengolib.signal import cont2discrete

dt = 0.001
tau1 = 0.05
tau2 = 0.03

# LTI difference equation is obtained from the ZOH discretization tf coefficients
# https://en.wikipedia.org/wiki/Digital_filter#Difference_equation
disc = cont2discrete(DoubleExp(tau1, tau2), dt=dt, method='zoh')

# These coefficients can be related back to the time-constants by the following
# formula, derived by zero-pole matching
assert np.allclose(
    np.sort(-dt / np.log(disc.poles)),
    np.sort([tau1, tau2]))
This should be useful any time the data set has some temporal dynamics. These dynamics can be learned not only through recurrent connections, but also through the dynamics of the synapses (which are like miniature recurrent connections).
As a contrived yet simple example, suppose our input data is all ones, and the output data is a step response with some exponential time-constant. If our network is feed-forward with a specific time-constant, then backpropagation could in theory minimize the MSE by optimizing the time-constant on the synapse. However, nengo_dl is currently only able to reduce the error by scaling the static gain:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import nengo
import nengo_dl
import tensorflow as tf

tau_actual = 0.005
tau_ideal = 0.1
length = 1000
dt = 0.001

t = np.arange(length) * dt
y_ideal = nengo.Lowpass(tau_ideal).filt(np.ones_like(t), y0=0, dt=dt)

with nengo.Network() as model:
    u = nengo.Node(output=1)
    x = nengo.Node(size_in=1)
    nengo.Connection(u, x, synapse=tau_actual)
    p = nengo.Probe(x, synapse=None)

inputs = {u: np.ones((1, length, 1))}
outputs = {p: y_ideal[None, :, None]}

with nengo_dl.Simulator(model, dt=dt, minibatch_size=1) as sim:
    optimizer = tf.train.AdamOptimizer()
    sim.train(inputs, outputs, optimizer, n_epochs=1000, objective='mse')
    sim.run_steps(length)

plt.figure()
plt.plot(t, y_ideal, label="Ideal")
plt.plot(t, sim.data[p].squeeze(), label="Actual")
plt.xlabel("Time (s)")
plt.legend()
plt.show()
Note: if you set tau_actual == tau_ideal, then the MSE becomes zero. And so the optimal solution with this architecture is to modify the time-constant on the synapse.
import numpy as np
import matplotlib.pyplot as plt
import nengo
import nengo_dl

with nengo.Network() as model:
    stim = nengo.Node(output=lambda _: np.random.randn())
    p = nengo.Probe(stim[0], synapse=None)

for simulator in (nengo.Simulator, nengo_dl.Simulator):
    with simulator(model) as sim:
        sim.run(.1)
    plt.figure()
    plt.title(simulator.__module__)
    plt.plot(sim.trange(), sim.data[p])
plt.show()
The second graph's output is a flat 0, when it should be the same as in the first graph. Changing stim[0] to stim gives the desired output.
This also happens when the slice happens on connections from stim.
Just a very minor inconvenience when switching between Nengo and NengoDL. In Nengo, the default for sim.run and sim.run_steps is progress_bar=None, which inherits the progress bar setting from the constructor. But in NengoDL the default is progress_bar=True. This means I need to disable the progress bar in two places rather than one.
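To illustrate (a sketch; the model is a trivial placeholder):
import nengo
import nengo_dl

with nengo.Network() as model:
    nengo.Probe(nengo.Node(0))

# Nengo core: disabling the bar on the constructor is enough, since
# sim.run defaults to progress_bar=None (inherit from the constructor)
with nengo.Simulator(model, progress_bar=False) as sim:
    sim.run(0.1)

# NengoDL: sim.run defaults to progress_bar=True, so it has to be
# disabled a second time on every run call
with nengo_dl.Simulator(model, progress_bar=False) as sim:
    sim.run(0.1, progress_bar=False)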
Related to #132.
Minimal reproducer:
import nengo
import nengo_dl

with nengo.Network() as model:
    nengo.Probe(nengo.Node(0))

with nengo_dl.Simulator(model, minibatch_size=2) as sim:
    sim.keras_model.save_weights("temp.hdf5")

with nengo_dl.Simulator(model, minibatch_size=1) as sim:
    sim.keras_model.load_weights("temp.hdf5")
Stack trace:
ValueError Traceback (most recent call last)
<ipython-input-15-5bffe7574ed5> in <module>
6
7 with nengo_dl.Simulator(model, minibatch_size=1) as sim:
----> 8 sim.keras_model.load_weights("temp.hdf5")
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py in load_weights(self, filepath, by_name)
179 raise ValueError('Load weights is not yet supported with TPUStrategy '
180 'with steps_per_run greater than 1.')
--> 181 return super(Model, self).load_weights(filepath, by_name)
182
183 @trackable.no_automatic_dependency_tracking
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py in load_weights(self, filepath, by_name)
1175 saving.load_weights_from_hdf5_group_by_name(f, self.layers)
1176 else:
-> 1177 saving.load_weights_from_hdf5_group(f, self.layers)
1178
1179 def _updated_config(self):
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py in load_weights_from_hdf5_group(f, layers)
697 str(len(weight_values)) + ' elements.')
698 weight_value_tuples += zip(symbolic_weights, weight_values)
--> 699 K.batch_set_value(weight_value_tuples)
700
701
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/keras/backend.py in batch_set_value(tuples)
3356 assign_placeholder = array_ops.placeholder(tf_dtype,
3357 shape=value.shape)
-> 3358 assign_op = x.assign(assign_placeholder)
3359 x._assign_placeholder = assign_placeholder
3360 x._assign_op = assign_op
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/ops/resource_variable_ops.py in assign(self, value, use_locking, name, read_value)
812 with _handle_graph(self.handle):
813 value_tensor = ops.convert_to_tensor(value, dtype=self.dtype)
--> 814 self._shape.assert_is_compatible_with(value_tensor.shape)
815 assign_op = gen_resource_variable_ops.assign_variable_op(
816 self.handle, value_tensor, name=name)
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/tensorflow_core/python/framework/tensor_shape.py in assert_is_compatible_with(self, other)
1113 """
1114 if not self.is_compatible_with(other):
-> 1115 raise ValueError("Shapes %s and %s are incompatible" % (self, other))
1116
1117 def most_specific_compatible_shape(self, other):
ValueError: Shapes (1, 1) and (2, 1) are incompatible
Expected behaviour: I expected this to be okay, as one might want to change the minibatch size from one run to another (e.g., to work around #132 or #121, or to experiment with different batch sizes) while reusing the same model parameters from a previous run.
Version: master
sim = nengo_dl.Simulator(nengo.Network())
with sim:
    pass
with sim:
    pass
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-f89be4cbbd16> in <module>
2 with sim:
3 pass
----> 4 with sim:
5 pass
~/git/nengo-dl/nengo_dl/simulator.py in __enter__(self)
1905
1906 def __enter__(self):
-> 1907 self._graph_context = self.graph.as_default()
1908 self._device_context = self.graph.device(self.tensor_graph.device)
1909
AttributeError: 'NoneType' object has no attribute 'as_default'
I would expect either to be able to reopen the simulator (IMO useful for interactive coding in Jupyter), or to get a more informative error message.
One issue I've run into a number of times is that I've forgotten to set synapse=None on my connections for training, and the training does not converge.
I realize that in some types of networks (recurrent networks?) synapses might be desirable, but I'm wondering if there could be some kind of warning reminding users that if they have synapses in their network, training might not work. (A config-level way to default synapses to None is sketched below.)
can haz?
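The config-level default mentioned above, as a sketch (network contents illustrative):
import nengo

with nengo.Network() as net:
    # default every new connection to synapse=None for training
    net.config[nengo.Connection].synapse = None

    a = nengo.Node([0])
    b = nengo.Node(size_in=1)
    conn = nengo.Connection(a, b)  # picks up synapse=None by default
    assert conn.synapse is None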
I am trying to run the nengo_dl example using my own dataset, which is composed of 227x227 jpg images in the test and training sets. I kept one image in the test set and 3 images in the training set, but I still get memory errors. I have free memory, but the result is the same. I have 4GB of RAM with a 64-bit OS but a 32-bit Anaconda version; I don't know why this error is appearing. Please suggest a solution.
The code snippet along with the error trace is given below.
def build_network(neuron_type):
    with nengo.Network() as net:
        # we'll make all the nengo objects in the network
        # non-trainable. we could train them if we wanted, but they don't
        # add any representational power so we can save some computation
        # by ignoring them. note that this doesn't affect the internal
        # components of tensornodes, which will always be trainable or
        # non-trainable depending on the code written in the tensornode.
        nengo_dl.configure_settings(trainable=True)

        # the input node that will be used to feed in input images
        inp = nengo.Node(nengo.processes.PresentInput(X_train, 0.1))

        # add the first convolutional layer
        x = nengo_dl.tensor_layer(
            inp, tf.layers.conv2d, shape_in=(227, 227, 3), filters=32,
            kernel_size=3)

        # apply the neural nonlinearity
        x = nengo_dl.tensor_layer(x, neuron_type, **ens_params)

        # add another convolutional layer
        x = nengo_dl.tensor_layer(
            x, tf.layers.conv2d, shape_in=(225, 225, 32),
            filters=32, kernel_size=3)
        x = nengo_dl.tensor_layer(x, neuron_type, **ens_params)

        # add a pooling layer
        x = nengo_dl.tensor_layer(
            x, tf.layers.average_pooling2d, shape_in=(223, 223, 32),
            pool_size=2, strides=2)

        # add a dense layer, with neural nonlinearity.
        # note that for all-to-all connections like this we can use the
        # normal nengo connection transform to implement the weights
        # (instead of using a separate tensor_layer). we'll use a
        # Glorot uniform distribution to initialize the weights.
        x, conn = nengo_dl.tensor_layer(
            x, neuron_type, **ens_params, transform=nengo_dl.dists.Glorot(),
            shape_in=(255,), return_conn=True)

        # we need to set the weights and biases to be trainable
        # (since we set the default to be trainable=False)
        # note: we used return_conn=True above so that we could access
        # the connection object for this reason.
        net.config[x].trainable = True
        net.config[conn].trainable = True

        # add a dropout layer
        x = nengo_dl.tensor_layer(x, tf.layers.dropout, rate=0.4)

        # the final 10 dimensional class output
        x = nengo_dl.tensor_layer(x, tf.layers.dense, units=1)

    return net, inp, x

# construct the network
net, inp, out = build_network(softlif_neurons)
with net:
    out_p = nengo.Probe(out)

# construct the simulator
minibatch_size = None
sim = nengo_dl.Simulator(net, minibatch_size=minibatch_size)
The error messages are:
Building network
Build finished in 0:00:06
| # Optimizing graph: creating signals | 0:00:00
C:\ProgramData\Anaconda34\lib\site-packages\nengo_dl\graph_optimizer.py:1132: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  if np.issubdtype(sig.dtype, np.float):
Optimization finished in 0:00:20
The scenario is that I want to convert a Keras network to a subnetwork in a larger Nengo network. Currently this doesn't work because I can't pass input into the converted network. I can hack this in by changing
@Converter.register(tf.keras.layers.InputLayer)
class ConvertInput(LayerConverter):
    """Convert ``tf.keras.layers.InputLayer`` to Nengo objects."""

    def convert(self, node_id):
        try:
            # if this input layer has an input obj, that means it is a passthrough
            # (so we just return the input)
            output = self.get_input_obj(node_id)
        except KeyError:
            # not a passthrough input, so create input node
            shape = self.output_shape(node_id)
            if any(x is None for x in shape):
                raise ValueError(
                    "Input shapes must be fully specified; got %s" % (shape,)
                )
            # output = nengo.Node(np.zeros(np.prod(shape)), label=self.layer.name)
            output = nengo.Node(size_in=np.prod(shape), label=self.layer.name)
        ...
and then I can
with nengo.Network() as net:
    converter = nengo_dl.Converter(model)
    vision = converter.net
    ens = nengo.Ensemble(...)
    nengo.Connection(ens, converter.inputs[model.input])
    nengo.Connection(converter.layers[model.output_layer], ens)
which isn't great but works. But now it no longer works when just running it with the nengo-dl sim.predict. Since I'm unfamiliar with the code, I'm not sure what setting it up to handle both cases would look like, but ideally there would be another function to export to a subnetwork, so that I could do
with nengo.Network() as net:
    converter = nengo_dl.Converter(model, make_subnet=True)
    visionnet = converter.ExportSubNet()
    ens = nengo.Ensemble(...)
    nengo.Connection(ens, visionnet.input)
    nengo.Connection(visionnet.output, ens)
or somesuch!
Based on some work @hunse and I have been doing, I thought it might be worth discussing whether to add some shortcut options for adding regularization to a model. One way to regularize is to probe various parameters (e.g. connection weights), compute a cost (e.g. L2 norm), and then include this in the overall loss that is being minimized during training. However, it might also be nice to add a convenience interface to enable someone to set the regularization constant (lambda) and the type of regularization, and then have it automatically be incorporated into the loss for all trainable parameters.
Would it make sense to include this as part of the config system? E.g., you write something like net.config[nengo.Connection].l2_regularization = 0.001? If so, I can try wrangling something together.
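For context, a sketch of the manual approach this would streamline (the network, names, and the 0.001 constant are illustrative; the commented train call follows the sim.train-era objective signature):
import nengo
import nengo_dl
import tensorflow as tf

lam = 0.001  # regularization constant (lambda)

with nengo.Network() as net:
    a = nengo.Node([0])
    ens = nengo.Ensemble(10, 1)
    conn = nengo.Connection(a, ens)
    p_out = nengo.Probe(ens)
    p_weights = nengo.Probe(conn, "weights")  # probe the parameter to penalize

def l2_cost(outputs, targets):
    # targets are ignored; scaled L2 norm of the probed weights
    return lam * tf.reduce_sum(tf.square(outputs))

# combined during training by giving one objective per probe, e.g.:
# sim.train(inputs, targets, optimizer,
#           objective={p_out: 'mse', p_weights: l2_cost})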
After installing nengo_dl via pip (pip install nengo-dl), I tried the simple provided examples and received a runtime error: Cannot join thread before it is started.
This issue was resolved for me by uninstalling nengo-dl and installing its development version:
git clone https://github.com/nengo/nengo-dl.git
pip install -e ./nengo-dl
(Feature request)
Neuron models such as the Wilson model (see WilsonEuler, courtesy of @psipeter) build signals with dtype=bool, to track whether the neuron is undergoing an AP.
Even though this neuron model may not be differentiable, it has utility within the nengo_dl simulator when, say, interacting with a differentiable sub-network trained via backpropagation. But currently, if one tries to simulate a network containing this neuron type, we get the following error:
NotImplementedError Traceback (most recent call last)
<ipython-input-9-903ea365987f> in <module>()
----> 1 with nengo_dl.Simulator(model, dt=dt) as sim_collect:
2 sim_collect.run(sim_t)
3 built_x = sim_collect.data[x]
~/CTN/nengo-dl/nengo_dl/simulator.py in __init__(self, network, dt, seed, model, dtype, device, unroll_simulation, minibatch_size, tensorboard, progress_bar)
166 self.tensor_graph = TensorGraph(
167 self.model, self.dt, unroll_simulation, dtype,
--> 168 self.minibatch_size, device, progress)
169
170 # construct graph
~/CTN/nengo-dl/nengo_dl/tensor_graph.py in __init__(self, model, dt, unroll_simulation, dtype, minibatch_size, device, progress)
137 # base arrays)
138 with progress.sub("creating signals", max_value=None):
--> 139 self.create_signals(sigs)
140
141 logger.info("Optimized plan length: %d", len(self.plan))
~/CTN/nengo-dl/nengo_dl/tensor_graph.py in create_signals(self, sigs)
919 dtype = np.int32
920 else:
--> 921 raise NotImplementedError
922
923 # resize scalars to length 1 vectors
NotImplementedError:
The work-around is to replace:
spiked[:] = (V > self.threshold) & (~AP)
...
model.sig[neurons]['AP'] = Signal(
    np.zeros(neurons.size_in, dtype=bool), name="%s.AP" % neurons)
with:
spiked[:] = (V > self.threshold) & (AP == 0)
...
model.sig[neurons]['AP'] = Signal(
    np.zeros(neurons.size_in, dtype=int), name="%s.AP" % neurons)
In Lasagne, why is all the data automatically shuffled in each minibatch for each epoch in this code? If the user wants their data shuffled before training, shouldn't they specify it?
When I encountered this, it failed on TravisCI (Python 3.6 build, no environment variables set). I re-ran the build, and it passed, so definitely something non-deterministic going on here. I ran the test 10 times on my own machine and didn't get a failure, so either a) it only fails very occasionally, or b) the failures are specific to the TravisCI setup. My guess is (a).
I think the first step is to run this ~100 times locally and see if we can reproduce it. Probably something in the simulator causing things to be non-deterministic.
Following a convention from Keras, sim.fit may be called in either of these styles:
sim.fit(train_x, ...)
sim.fit({node: train_x}, ...)
When building off existing examples in the documentation, the first style is potentially dangerous, since it automatically uses the first node created in the network, regardless of how many nodes there are. For instance, taking the LMU example, simply adding nengo.Node(1) to the top of the network will cause the training to stay at random chance, because the input ends up going into this dangling node rather than the intended node.
After some discussion, it seems a good solution would be to require the number of inputs (in the first style) to match the number of input nodes in the graph, and otherwise force the user to pass in a dictionary to make things explicit (i.e., the second style); a rough sketch of such a check is below.
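A sketch of what that validation might look like (the function and message wording are hypothetical, not the actual nengo-dl implementation):
def check_unambiguous_inputs(x, input_nodes):
    """Hypothetical check: only allow the bare-array style when the
    network has exactly one input node."""
    if not isinstance(x, dict) and len(input_nodes) != 1:
        raise ValueError(
            "Network has %d input nodes; pass a dictionary mapping nodes "
            "to arrays to make the intended input explicit"
            % len(input_nodes)
        )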
An open question: how many existing examples/models would be affected by this change?
Alternatives to consider:
Related to #121.
Minimal reproducer:
import numpy as np
import nengo
import nengo_dl
import tensorflow as tf

with nengo.Network() as model:
    nengo.Probe(nengo.Node(0))

with nengo_dl.Simulator(model, minibatch_size=2) as sim:
    sim.compile(loss=tf.losses.MSE)
    sim.evaluate(np.zeros((1, 1, 1)), np.zeros((1, 1, 1)), verbose=0)
Stack trace:
ValidationError Traceback (most recent call last)
<ipython-input-12-859a5caa84cc> in <module>
4 with nengo_dl.Simulator(model, minibatch_size=2) as sim:
5 sim.compile(loss=tf.losses.MSE)
----> 6 sim.evaluate(np.zeros((1, 1, 1)), np.zeros((1, 1, 1)), verbose=0)
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/nengo/utils/magic.py in __call__(self, *args, **kwargs)
179 return self.wrapper(wrapped, instance, args, kwargs)
180 else:
--> 181 return self.wrapper(self.__wrapped__, self.instance, args, kwargs)
182 else:
183 instance = getattr(self.__wrapped__, "__self__", None)
~/git/nengo-dl/nengo_dl/simulator.py in require_open(wrapped, instance, args, kwargs)
65 )
66
---> 67 return wrapped(*args, **kwargs)
68
69
~/git/nengo-dl/nengo_dl/simulator.py in evaluate(self, x, y, n_steps, stateful, **kwargs)
888
889 return self._call_keras(
--> 890 "evaluate", x=x, y=y, n_steps=n_steps, stateful=stateful, **kwargs
891 )
892
~/anaconda3/envs/nengo-dl/lib/python3.7/site-packages/nengo/utils/magic.py in __call__(self, *args, **kwargs)
179 return self.wrapper(wrapped, instance, args, kwargs)
180 else:
--> 181 return self.wrapper(self.__wrapped__, self.instance, args, kwargs)
182 else:
183 instance = getattr(self.__wrapped__, "__self__", None)
~/git/nengo-dl/nengo_dl/simulator.py in with_self(wrapped, instance, args, kwargs)
49 instance.tensor_graph.device
50 ):
---> 51 output = wrapped(*args, **kwargs)
52 tf.keras.backend.set_floatx(keras_dtype)
53
~/git/nengo-dl/nengo_dl/simulator.py in _call_keras(self, func_type, x, y, n_steps, stateful, **kwargs)
947 x,
948 n_steps=n_steps,
--> 949 batch_size=self.minibatch_size if "on_batch" in func_type else None,
950 )
951
~/git/nengo-dl/nengo_dl/simulator.py in _check_data(self, data, batch_size, n_steps, nodes)
1854 "Size of minibatch (%d) less than Simulation `minibatch_size` (%d)"
1855 % (x.shape[0], self.minibatch_size),
-> 1856 "%s data" % name,
1857 )
1858 if nodes and x.shape[1] % self.unroll != 0:
ValidationError: node data: Size of minibatch (0) less than Simulation `minibatch_size` (2)
Expected behaviour: I expected this to be okay, as it may not be uncommon to want to evaluate some subset of the data (e.g., test data) that is smaller than the minibatch size. The error is also a little confusing, because the sample size isn't 0 (it is 1).
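In the meantime, a possible workaround sketch is to pad the batch up to minibatch_size and discard the outputs for the padded rows:
import numpy as np

minibatch_size = 2
x = np.zeros((1, 1, 1))  # one sample, smaller than minibatch_size
n = x.shape[0]

# pad the batch dimension by repeating the last sample
pad = np.repeat(x[-1:], minibatch_size - n, axis=0)
x_padded = np.concatenate([x, pad], axis=0)

# evaluate on the padded batch, then ignore metrics/outputs for the
# padded rows, e.g. sim.evaluate(x_padded, y_padded, verbose=0)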
It would be nice to have some support for automatically swapping out synapses during training, similar to how we can automatically swap neuron models.
import numpy as np
import matplotlib.pyplot as plt
import nengo
import nengo_dl
import tensorflow as tf

u = np.random.randn(100, 1, 1)
dt = 0.001

with nengo.Network() as model:
    stim = nengo.Node(output=nengo.processes.PresentInput(
        u, presentation_time=dt))
    x = nengo.Ensemble(100, 1,
                       neuron_type=nengo.LIFRate())  # <-- HERE
    nengo.Connection(stim, x, synapse=None)
    p = nengo.Probe(x, synapse=None)

inputs = {stim: u}
targets = {p: u}
opt = tf.train.MomentumOptimizer(
    learning_rate=1e-13, momentum=0.1, use_nesterov=True)

with nengo_dl.Simulator(model, minibatch_size=1, dt=dt) as sim:
    sim.train(inputs, targets, opt, n_epochs=1)  # <-- HERE
    sim.run(len(u) * dt)

plt.figure()
plt.plot(sim.trange(), sim.data[p].squeeze())
plt.plot(sim.trange(), u.squeeze(), linestyle='--')
plt.show()
Initially this is a communication channel. After a single epoch (with essentially 0 learning rate) the ensemble outputs a flat 0. Commenting out either of the lines labeled # <-- HERE makes the problem go away. Changing the neuron type to Sigmoid or LIF also makes the problem go away.
There are nightly preview builds available now, so it would be good to get started on this (so that we're ready once 2.0 is released).
When trying to use get_nengo_params on a connection with probed weights, I get the error below. This only seems to happen after 6ddf68e.
Traceback (most recent call last):
File "test_save_nengo_params.py", line 33, in <module>
params = sim.get_nengo_params(conn, as_dict=False)
File "/data/eric/workspace/nengo_dl/nengo_dl/simulator.py", line 892, in get_nengo_params
data = self.data.get_params(*fetches)
File "/data/eric/workspace/nengo_dl/nengo_dl/simulator.py", line 1488, in get_params
fetches[placeholder] = self.sim.tensor_graph.get_tensor(sig)
File "/data/eric/workspace/nengo_dl/nengo_dl/tensor_graph.py", line 29, in func_with_self
return func(self, *args, **kwargs)
File "/data/eric/workspace/nengo_dl/nengo_dl/tensor_graph.py", line 712, in get_tensor
return tf.gather(base, tensor_sig.tf_indices)
File "/home/eric/venv/full3/lib/python3.4/site-packages/tensorflow/python/ops/array_ops.py", line 2585, in gather
params, indices, validate_indices=validate_indices, name=name)
File "/home/eric/venv/full3/lib/python3.4/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1864, in gather
validate_indices=validate_indices, name=name)
File "/home/eric/venv/full3/lib/python3.4/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/eric/venv/full3/lib/python3.4/site-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
op_def=op_def)
File "/home/eric/venv/full3/lib/python3.4/site-packages/tensorflow/python/framework/ops.py", line 1672, in __init__
control_flow_util.CheckInputFromValidContext(self, input_tensor.op)
File "/home/eric/venv/full3/lib/python3.4/site-packages/tensorflow/python/ops/control_flow_util.py", line 200, in CheckInputFromValidContext
raise ValueError(error_msg + " See info log for more details.")
ValueError: Cannot use 'while/iteration_0/Const_3' as input to 'Gather' because 'while/iteration_0/Const_3' is in a while loop. See info log for more details.
Here's a MWE:
import numpy as np
import nengo
import nengo_dl
from nengo_dl import tensor_layer

DO_ERROR = True

n_inputs = 3
n_neurons = 2
neuron_type = nengo_dl.SoftLIFRate()

# --- make network
with nengo.Network() as net:
    nengo_dl.configure_settings(trainable=None)
    net.config[nengo.Connection].synapse = None
    net.config[nengo.Ensemble].trainable = False

    inp = nengo.Node(np.zeros(n_inputs), label='input_node')
    layer, layer_conn = tensor_layer(inp, neuron_type,
                                     transform=nengo_dl.dists.Glorot(),
                                     shape_in=n_neurons,
                                     return_conn=True)
    if DO_ERROR:
        nengo.Probe(layer_conn, 'weights')

with nengo_dl.Simulator(net, minibatch_size=1) as sim:
    for conn in net.connections:
        print(conn)
        params = sim.get_nengo_params(conn, as_dict=False)
A silly thing to do, but I got this:
File ".../nengo_dl/nengo_dl/simulator.py", line 570, in loss
    loss_val /= i + 1
when I had too large a minibatch_size for the training data I was using. It took me just a second to figure out what was wrong, but it might be useful to have a check or warning for this.
I think it's because this for-loop doesn't skip according to sample_every.
Training a model with TensorFlow's Adam optimizer (tf.train.AdamOptimizer()) produces a FailedPreconditionError. Minimal example:
import numpy as np
import nengo
import nengo_dl
import tensorflow as tf

with nengo.Network() as net:
    net.config[nengo.Ensemble].neuron_type = nengo.RectifiedLinear()
    inp = nengo.Node(0)
    ens = nengo.Ensemble(100, 1)
    probe = nengo.Probe(ens)

values = np.random.uniform(1, 1, size=(100, 1, 1))
inputs = {inp: values}
outputs = {probe: 2 * values}

with nengo_dl.Simulator(net) as sim:
    optimizer = tf.train.AdamOptimizer(0.2)
    sim.train(inputs, outputs, optimizer, 1, 'mse')
This may have something to do with how the optimizer requires the initialization of new variables for tracking gradient information.