Comments (3)
Yes, this would definitely be a cool thing to support, and it is very doable from a technical standpoint: it's basically just a matter of changing one flag from True to False. The main issue is a user-interface one: how to let the user easily control which parts of the model they want to be trainable. This is part of a larger question, as there are lots of other parameters that we could in theory allow to be trained but currently don't (e.g., tau_rc of LIF neurons).
Right now this is all controlled through the trainable config system. This works fairly well for the set of parameters that are trainable right now, but it doesn't scale up well. We could extend it fairly easily to support Synapse objects, which would be a good intermediate step. But how, for example, could we specify that we want the biases of an Ensemble to be trainable but not the RC constant? I suspect that in the long term we'll need a custom solution rather than piggybacking on the Nengo config system, but I/we need to do some thinking about what that system might look like.
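To make the scaling problem concrete, here is a purely illustrative sketch of what a per-parameter trainable registry might look like, keyed on (object, parameter name) rather than on whole objects. Nothing here is real nengo-dl API; `TrainableConfig`, `set`, and `is_trainable` are hypothetical names, and `Ensemble` is a stand-in class:

```python
# Hypothetical per-parameter trainable registry (not real nengo-dl code).
# Keying on (object, parameter name) lets us distinguish e.g. an
# Ensemble's biases from its RC constant.

class TrainableConfig:
    def __init__(self, default=True):
        self.default = default
        self._flags = {}  # maps (object id, parameter name) -> bool

    def set(self, obj, param, trainable):
        self._flags[(id(obj), param)] = trainable

    def is_trainable(self, obj, param):
        return self._flags.get((id(obj), param), self.default)


class Ensemble:
    """Stand-in for nengo.Ensemble, for illustration only."""


config = TrainableConfig()
ens = Ensemble()
config.set(ens, "bias", True)     # biases trainable...
config.set(ens, "tau_rc", False)  # ...but not the RC constant

assert config.is_trainable(ens, "bias") is True
assert config.is_trainable(ens, "tau_rc") is False
assert config.is_trainable(ens, "gain") is True  # falls back to the default
```

The per-object Nengo config system can express "this Ensemble is trainable" but not this finer-grained split, which is the scaling problem described above.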
from nengo-dl.
That's an interesting challenge from a design standpoint. One possibility would be to introduce operators that mark specific values as either learnable or static. For the sake of having something to look at, say L(·) for trainable and S(·) for static. Under the hood, these could recast values to a special object that the builder would consider when setting the trainable flag. To use a parameter's default value, the operator could be passed a special sentinel (e.g., Default). Example syntax:
from nengo_dl import L, S, Default
# Marking a transform as fixed (e.g., passthrough nodes)
nengo.Connection(..., transform=S(1))
# or, since 1 is the default:
nengo.Connection(..., transform=S(Default))
# Learning tau_rc while keeping tau_ref fixed (may not be possible right now?)
x = nengo.Ensemble(..., neuron_type=nengo.LIF(tau_rc=L(0.02), tau_ref=S(0.002)))
# Learning the neuron model, and the gains, but not the biases
x = nengo.Ensemble(..., neuron_type=L(Default), gain=L(Default), bias=S(Default))
# Marking a synapse as learnable
nengo.Connection(..., synapse=L(0.01))
This could further scale to learning only a subset of the coefficients in a discretized transfer function, although at that point the syntax starts to become unwieldy and the abstraction somewhat leaky. But if this can be done in a way that keeps the builder extensible, people could roll their own solutions for these special use cases.
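The marker idea above can be sketched in a few lines of plain Python. This is only a mock-up of the proposed (not existing) L/S/Default syntax; `_Marker`, `unwrap`, and the `Default` sentinel are all hypothetical names, and the real builder integration would of course be more involved:

```python
# Mock-up of the proposed L(.)/S(.) markers; not real nengo-dl API.

class _Marker:
    """Wraps a parameter value together with a trainability flag."""
    trainable = None

    def __init__(self, value):
        self.value = value

    def __repr__(self):
        return "%s(%r)" % (type(self).__name__, self.value)


class L(_Marker):
    """Mark a value as learnable."""
    trainable = True


class S(_Marker):
    """Mark a value as static (fixed during training)."""
    trainable = False


Default = object()  # sentinel meaning "use the parameter's default value"


def unwrap(param, default):
    """What a builder might do: recover the raw value and the flag."""
    if isinstance(param, _Marker):
        value = default if param.value is Default else param.value
        return value, param.trainable
    return param, None  # unmarked: fall back to the existing trainable rules


# tau_rc marked learnable, tau_ref marked static
assert unwrap(L(0.02), default=0.02) == (0.02, True)
assert unwrap(S(Default), default=0.002) == (0.002, False)
# unmarked values pass through unchanged
assert unwrap(0.005, default=0.005) == (0.005, None)
```

Because the markers only carry a value plus a flag, existing parameter validation could run on the unwrapped value, which is one way the abstraction could stay non-leaky for the simple cases.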
Quick update: this feature request is made somewhat obsolete by the new keras_spiking.Lowpass layer, which learns the time constant(s) (and initial state) of the lowpass filter for each dimension. There is also a trainable alpha filter in the same repo.
The caveat is that this layer currently cannot be converted by the NengoDL converter if there is more than one time constant in the layer. Related issue: nengo/nengo#1636.
Related Issues (20)
- AssertionError running custom neuron with TensorFlow 2.3.0
- Empty probes are Python lists instead of ndarrays
- Creating a simulator while keeping pretrained weights
- Uninformative error message when using `sim.compile` on a network with no probed outputs
- Support/examples for converting or embedding Keras RNNs
- Support scale_firing_rates with Regular/Poisson/Stochastic spiking wrappers
- Warn if converter's scale_firing_rates would skew the nonlinearities
- Support opting in to spikes on the forward pass
- Nengo version of ModelCheckpoint callback
- Use no-input nodes by default in converter
- load_params misbehaves with scale_firing_rates for some architectures
- Converter `synapse` not applied to `neurons`-to-`TensorNode` connections
- Converter fails with `tf.keras.applications.EfficientNet`
- Mistake in documentation
- Trainable parameters in Nengo LIF neurons
- Which neuromorphic hardware does NengoDL simulate?
- sim.predict makes GPU memory full
- BatchNormalization layer produces LOW accuracy
- Importing Nengo_DL in Google Colab
- `nengo_dl` cannot import `keras.engine`