
sinabs's People

Contributors

anyms-a, bauerfe, biphasic, edghyhdz, martinosorb, minakh, nkupelioglu, sheiksadique, ssinhaleite, waitxxxx, willian-girao


sinabs's Issues

Backpropagating through membrane potential

Currently it is not possible to backpropagate through the recorded membrane potential of a LIF or IAF layer. This seems to be intentional, as the gradients are being detached in sinabs/layers/functional/lif.py, line 84.

I wonder what the reason is for not letting users backpropagate through the membrane potential.
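For concreteness, a minimal sketch of the kind of use case this would enable. The attribute names (record_states, recordings) are assumptions based on the issue's mention of a recorded membrane potential, not a confirmed API:

import torch
import sinabs.layers as sl

# Hypothetical use case: regularizing the recorded membrane potential.
# record_states / recordings are assumed attribute names; this only works
# if the recorded trace is not detached from the graph.
layer = sl.LIF(tau_mem=20.0, record_states=True)
inp = torch.rand(1, 100, 10, requires_grad=True)   # (batch, time, neurons)
out = layer(inp)
v_trace = layer.recordings["v_mem"]                # recorded membrane potential
loss = out.sum() + 0.1 * v_trace.pow(2).mean()
loss.backward()                                    # fails if v_trace was detached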

Change the way to calculate Synops per second. Make it support batch calculation.

Description

The current synops calculation does not support the batch_size > 1 case, which limits training speed when a SynOp loss is used during training.

def synops_hook(layer, inp, out):
    assert len(inp) == 1, "Multiple inputs not supported for synops hook"
    inp = inp[0]
    layer.tot_in = inp.sum().item()
    layer.tot_out = out.sum().item()
    layer.synops = layer.tot_in * layer.fanout
    layer.tw = inp.shape[0]

Solution

Make the synops_hook calculate the average synops over one batch of data.
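One possible sketch of such a hook, assuming the leading input dimension is batch × time and that a batch_size attribute is attached to the layer when the hook is registered (both are assumptions, not the current API):

def batch_synops_hook(layer, inp, out):
    # Sketch only: averages the synops over the batch so that a SynOp loss
    # works with batch_size > 1. layer.batch_size is a hypothetical attribute.
    assert len(inp) == 1, "Multiple inputs not supported for synops hook"
    inp = inp[0]
    layer.tot_in = inp.sum().item() / layer.batch_size
    layer.tot_out = out.sum().item() / layer.batch_size
    layer.synops = layer.tot_in * layer.fanout
    layer.tw = inp.shape[0] // layer.batch_size   # time steps per sample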

spike_fn, reset_fn and surrogate_grad_fn as layer parameters

For spiking layers the spike and reset behaviors as well as surrogate gradients are currently defined by passing an ActivationFunction object at instantiation.

This has two disadvantages:

  1. Having to instantiate an ActivationFunction first is a bit tedious.
  2. The default behavior is not obvious this way.

Passing the spike_fn, reset_fn and surrogate_grad_fn directly as layer parameters rather than an ActivationFunction would solve these problems. The ActivationFunction could then be instantiated inside the forward functionals.
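As an illustration, the proposed interface could look roughly like this (a hypothetical constructor signature, not the current API; the activation classes are the existing ones from sinabs.activation):

import sinabs.activation as sa
import sinabs.layers as sl

# Hypothetical signature for the proposed change:
layer = sl.LIF(
    tau_mem=20.0,
    spike_fn=sa.MultiSpike,
    reset_fn=sa.MembraneSubtract(),
    surrogate_grad_fn=sa.SingleExponential(),
)

The defaults would then be visible directly in the layer signature.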

documentation updates

  • plots of neuron models
  • activation module
    • surrogate gradients
    • spike functions
  • data convention: the data format that this library expects

Support for auto-batching

Currently we can initialise the state shape (like v_mem) with shape=(batch_size, n_neurons). The batch size, however, should not be necessary for this and should be inferred automatically; Rockpool already handles it this way.
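A sketch of the inference step a layer could perform at the start of forward, assuming input of shape (batch, time, ...); the helper name is illustrative:

import torch

def required_state_shape(data: torch.Tensor, current_state=None):
    # Sketch: derive the state shape (e.g. for v_mem) from the input itself,
    # assuming data has shape (batch, time, *neuron_shape). Returns None if
    # the existing state already matches and no re-initialisation is needed.
    batch_size, _, *neuron_shape = data.shape
    target = (batch_size, *neuron_shape)
    if current_state is not None and tuple(current_state.shape) == target:
        return None
    return target

print(required_state_shape(torch.rand(8, 100, 16)))                       # (8, 16)
print(required_state_shape(torch.rand(8, 100, 16), torch.zeros(8, 16)))   # None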

Feature request: Re-introduce SpikingLayer class for easy identification of spiking layers.

Currently StatefulLayers have a property does_spike, which is True when the layer has an activation_fn. This only works, however, for StatefulLayer instances.
Looping over a model and calling layer.does_spike will throw an exception for all other types (e.g. torch linear layers).
Currently the only feasible option is hasattr(layer, "activation_fn"), which is not very intuitive for new users. If, instead, all spiking layers are subclasses of SpikingLayer, we can simply do isinstance(layer, SpikingLayer).
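With such a base class, collecting the spiking layers of a model would reduce to something like this (SpikingLayer here is the proposed, re-introduced class, not the current API):

import torch.nn as nn

def get_spiking_layers(model: nn.Module) -> list:
    # SpikingLayer is the proposed common base class for all spiking layers.
    from sinabs.layers import SpikingLayer
    return [m for m in model.modules() if isinstance(m, SpikingLayer)]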

Automatic instantiation of individual time constants per neuron

Currently the shape of time constant parameters (alpha_xxx or tau_xxx) in the LIF layer is determined by what has been provided during class instantiation.

If a user wants individual time constants (one per neuron), they have to provide a tensor with corresponding shape.

I suggest introducing an additional option, individual_time_constants. When it is set to True, the time constants of the layer will be expanded to the number of neurons (without batch size) when forward is called. This makes it possible to train time constants on a per-neuron basis, while still having the comfort of just passing a scalar initial value at instantiation.

When the new parameter is False (default), the behavior is the same as now.
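A minimal sketch of the proposed expansion, triggered once the neuron shape is known (the function name and call site are illustrative, not existing sinabs code):

import torch

def expand_time_constant(tau: torch.Tensor, neuron_shape: tuple) -> torch.nn.Parameter:
    # Turn a scalar time constant into one trainable value per neuron.
    if tau.dim() == 0:
        tau = torch.full(neuron_shape, tau.item())
    return torch.nn.Parameter(tau)

# e.g. a layer with 10 neurons, initialised from a scalar tau_mem of 20 (ms)
tau_mem = expand_time_constant(torch.tensor(20.0), (10,))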

Make Spiking Layer spike_threshold and min_v_mem parameters

Right now, if you save a model state dictionary using the torch utility function torch.save(...), it does not save spike_threshold and min_v_mem for the spiking layers. For most use cases this is not a problem, as thresholds and minimum membrane potentials are not trained, but when we customize these values we need to manually save them to a separate file and load them again. Furthermore, sinabs-dynapcnn layers, which also make use of the sinabs SpikingLayer objects, are quantized and discretized before deployment on the chip. If we save a DynapcnnNetwork state dictionary, there is no way to guess what the threshold values would be after quantization. Therefore, it is impossible to load the weights correctly through torch.load(...) after discretization.


This could easily be fixed by changing sinabs/lif.py lines 96 and 104 and making self.spike_threshold and self.min_v_mem objects of type torch.nn.Parameter, with requires_grad=False set during initialization to ensure the training process is unchanged. That way torch saving and loading would include these parameters as well.
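A minimal sketch of the proposed change, using a toy module rather than the actual SpikingLayer code:

import torch
from torch import nn

class ToySpikingLayer(nn.Module):
    # Sketch: registering spike_threshold and min_v_mem as non-trainable
    # Parameters makes torch.save / torch.load include them in the state dict.
    def __init__(self, spike_threshold: float = 1.0, min_v_mem: float = -1.0):
        super().__init__()
        self.spike_threshold = nn.Parameter(
            torch.tensor(spike_threshold), requires_grad=False
        )
        self.min_v_mem = nn.Parameter(torch.tensor(min_v_mem), requires_grad=False)

print(ToySpikingLayer().state_dict())   # now contains spike_threshold and min_v_mem

Registering the values as buffers via register_buffer would also persist them in the state dict and could be a design alternative if they should never appear in model.parameters().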

Examine effect of non-periodic surrogate gradient functions for MultiSpike activation.

SNN simulators typically use a single spike of activation per neuron per time step. Sinabs provides the option of MultiSpike activation, but the gradient for activations > 1 vanishes with surrogates such as SingleExponential or MultiGaussian (which both have a single peak around the spike threshold). It would be interesting to study whether using such a surrogate function has a regularizing effect on the firing rate, and what other performance differences there are in comparison to PeriodicExponential / PeriodicMultiGaussian.
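A toy illustration of the difference (these are simplified stand-ins, not the library's actual MultiGaussian / PeriodicMultiGaussian implementations):

import torch

def single_peak_gaussian(v_mem, threshold=1.0, sigma=0.5):
    # Single peak around the threshold: the gradient vanishes once the
    # membrane potential is several thresholds above, i.e. for activations > 1.
    return torch.exp(-0.5 * ((v_mem - threshold) / sigma) ** 2)

def periodic_gaussian(v_mem, threshold=1.0, sigma=0.5, max_spikes=5):
    # One peak at every integer multiple of the threshold, so higher
    # activations still receive a non-vanishing gradient.
    peaks = [torch.exp(-0.5 * ((v_mem - k * threshold) / sigma) ** 2)
             for k in range(1, max_spikes + 1)]
    return torch.stack(peaks).sum(dim=0)

v = torch.linspace(0, 5, 101)
print(single_peak_gaussian(v)[-1].item(), periodic_gaussian(v)[-1].item())  # ~0 vs. finite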

Which neuromorphic hardware does Sinabs use to simulate SNNs ?

Hello,

I have recently been using Sinabs, but I am still not sure which neuromorphic hardware it simulates.
For example,
I am running this code and I can see that we are getting an SNN model, but which neuromorphic hardware does Sinabs use to simulate the SNN?

Enable conversion of individual objects, as well as nested modules with replace_module

Currently, the new replace_module function iterates over the immediate children of a module to convert all objects of a given type to something else.

Because only immediate children are considered, objects that are nested more deeply are ignored. Furthermore, if the module passed to replace_module is itself of the class that is to be converted, it gets skipped as well.

I propose to account for these cases with a few small changes to how the function is implemented.
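A sketch of what the recursive behaviour could look like (an illustration only, not the existing replace_module implementation):

import torch.nn as nn

def replace_module_deep(model: nn.Module, source_class: type, mapper_fn):
    # Convert the root module itself if it matches, then recurse into all
    # children so that arbitrarily nested occurrences are converted too.
    if isinstance(model, source_class):
        return mapper_fn(model)
    for name, child in model.named_children():
        setattr(model, name, replace_module_deep(child, source_class, mapper_fn))
    return model

# e.g. replace every ReLU anywhere in a nested model with a LeakyReLU
model = nn.Sequential(nn.Linear(4, 4), nn.Sequential(nn.ReLU(), nn.Linear(4, 2)))
model = replace_module_deep(model, nn.ReLU, lambda m: nn.LeakyReLU())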

sinabs 0.3 release plan

  • Tests for LIF and ALIF #10 @gregor.lenz
  • Consistency with sinabs-slayer @felix.bauer
  • Merge branch alif into dev @sadique.sheik
  • Parameter harmonization between rockpool and sinabs @gregor.lenz
  • Documentation, i.e. add new layers to the documentation @gregor.lenz
    • Make it pretty
    • Add plugins and links to sister packages @sadique.sheik
    • Add some tutorials
    • Convention and usage @sadique.sheik
  • Stateful layer base class @felix.bauer

[Bug] RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed.

This error is caused by the reset_states method in the StatefulLayer class in sinabs.layers.stateful_layer.

The in-place operations that reset the state buffers seem to keep the buffers attached to the computation graph, so calling loss.backward() a second time raises the error above.

    def reset_states(self, randomize=False):
        """
        Reset the state/buffers in a layer.
        """
        if self.is_state_initialised():
            for buffer in self.buffers():
                if randomize:
                    torch.nn.init.uniform_(buffer)
                else:
                    buffer.zero_()

Adding a .detach_() operation after each reset operation could solve this problem:

    def reset_states(self, randomize=False):
        """
        Reset the state/buffers in a layer.
        """
        if self.is_state_initialised():
            for buffer in self.buffers():
                if randomize:
                    torch.nn.init.uniform_(buffer).detach_()
                else:
                    buffer.zero_().detach_()
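For reference, a minimal example of the reported failure mode that the detach-based fix is meant to address (assuming IAF with default arguments; behaviour as described in this issue):

import torch
import sinabs.layers as sl

layer = sl.IAF()                                 # any stateful sinabs layer
data = torch.rand(1, 10, 4, requires_grad=True)  # (batch, time, neurons)

out = layer(data)
out.sum().backward()      # first backward pass: fine
layer.reset_states()      # in-place reset; without .detach_() the buffers
                          # reportedly stay attached to the old graph
out = layer(data)
out.sum().backward()      # raises the "backward through the graph a second
                          # time" error unless the buffers are detached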

The docstring of StatefulLayer is inconsistent with the constructor's actual arguments.

    def __init__(self, state_names: List[str]):
        """
        Pytorch implementation of a stateful layer, to be used as base class.

        Parameters
        ----------
        threshold: float
            Spiking threshold of the neuron.
        min_v_mem: float or None
            Lower bound for membrane potential.
        membrane_subtract: float or None
            The amount to subtract from the membrane potential upon spiking.
            Default is equal to threshold. Ignored if membrane_reset is set.
        membrane_reset: bool
            If True, reset the membrane to 0 on spiking.
        """

MultiGaussian incorrect

The MultiGaussian surrogate gradient function seems to in fact be a single-spike Gaussian and not applicable to multi-spike, as is the case for MultiExponential.

bias parameter for LIF neuron missing

This parameter is not strictly necessary for an IAF or a LIF without a synapse, but if the neuron model has synaptic state, the behavior cannot be replicated without explicitly including a bias in the neuron model.
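For illustration, a toy discrete-time LIF update showing where such a bias current would enter (not sinabs's actual update equations):

import torch

def lif_step(v_mem, i_syn, alpha_mem, alpha_syn, inp, bias=0.0):
    # Toy LIF update with synaptic state: the bias acts as a constant input
    # current added at every time step.
    i_syn = alpha_syn * i_syn + inp
    v_mem = alpha_mem * v_mem + i_syn + bias
    return v_mem, i_syn

v, i = torch.zeros(4), torch.zeros(4)
v, i = lif_step(v, i, alpha_mem=0.9, alpha_syn=0.8, inp=torch.rand(4), bias=0.1)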

[bug] sinabs.layers.UnflattenTime will raise error with torch 1.8.1

Description

It seems that nn.Unflatten does not support -1 in the unflattened_size argument.

The code below raises an error with torch 1.8.1, but works fine with torch 1.9.0:

import torch
import torch.nn as nn
input = torch.randn(2, 50)
lyr = nn.Unflatten(dim=1, unflattened_size=(5, -1))
out = lyr(input)
Traceback (most recent call last):
  File "/home/allan/.local/share/JetBrains/PyCharm2021.1/python/helpers/pydev/pydevd.py", line 1483, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/allan/.local/share/JetBrains/PyCharm2021.1/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/allan/PycharmProjects/snn_classifier_trainer/sinabs_snn_trainer/dradt1.py", line 8, in <module>
    out = lyr(input)
  File "/home/allan/PycharmProjects/snn_classifier_trainer/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/allan/PycharmProjects/snn_classifier_trainer/venv/lib/python3.6/site-packages/torch/nn/modules/flatten.py", line 131, in forward
    return input.unflatten(self.dim, self.unflattened_size)
  File "/home/allan/PycharmProjects/snn_classifier_trainer/venv/lib/python3.6/site-packages/torch/tensor.py", line 840, in unflatten
    return super(Tensor, self).unflatten(dim, sizes, names)
RuntimeError: unflatten: Provided sizes [5, -1] don't multiply up to the size of dim 1 (50) in the input tensor
python-BaseException

Maybe we should consider updating the torch requirement of sinabs to >=1.9.0.
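If torch 1.8.1 support is kept instead, a possible workaround is to compute the trailing size explicitly rather than passing -1, assuming the number of time steps is known:

import torch
import torch.nn as nn

num_timesteps = 5
data = torch.randn(2, 50)
# Compute the trailing size explicitly instead of relying on -1:
lyr = nn.Unflatten(dim=1, unflattened_size=(num_timesteps, data.shape[1] // num_timesteps))
out = lyr(data)        # works on torch 1.8.1 as well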
