gsoc2016's People

Contributors

dokato, mstimberg


gsoc2016's Issues

StateMonitor is not recording

Currently, when the exporter sees a StateMonitor in Brian2, it adds a Display rather than an OutputFile. Not very hard to change, though; I just didn't have time.

Build automatically when `run` is encountered

Have a look at CPPStandaloneDevice: it has an option build_on_run, which defaults to True. With this option activated, it will call build automatically at the end of network_run. In LEMSDevice this also makes sense, in particular if we only support a single run statement. This will save the user some typing: adding the set_device call would be the only change needed to switch from a standard run to LEMS/NeuroML generation. The output file name can be set directly in set_device (again, have a look at CPPStandaloneDevice).
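The build-on-run pattern described above can be sketched roughly like this. This is a minimal toy, not the actual Brian2 device API; the class and method bodies are placeholders for illustration only:

```python
# Hypothetical sketch of the build_on_run pattern; LEMSDevice here is a
# stand-in, not Brian2's real device class.
class LEMSDevice:
    def __init__(self, filename='output.xml', build_on_run=True):
        self.filename = filename
        self.build_on_run = build_on_run
        self.built = False

    def network_run(self, network, duration):
        # The real device would record the run details here for export.
        if self.build_on_run:
            self.build()

    def build(self):
        # The real device would write the LEMS/NeuroML file here.
        self.built = True

device = LEMSDevice(filename='model.xml')
device.network_run(network=None, duration=0.1)
```

With build_on_run=False, the user would call build() explicitly, matching the CPPStandaloneDevice behaviour described above.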

Support network input

For now, we do not support feeding external input into a network, which limits the models we can handle. Have a look through Brian2's and NeuroML's classes for this purpose and see which ones can be translated easily. I think SpikeGeneratorGroup and PoissonGroup should be straightforward, at least.
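As a starting point, the translation could be driven by a simple lookup table. The NeuroML2 component-type names below come from the core-type definitions (Inputs.xml), but the exact mapping is my assumption, not a confirmed design for this exporter:

```python
# Hedged sketch: candidate mapping from Brian2 input classes to NeuroML2
# component types. The mapping itself is an assumption to be verified.
BRIAN2_TO_NEUROML_INPUT = {
    'SpikeGeneratorGroup': 'spikeArray',       # explicit list of spike times
    'PoissonGroup': 'spikeGeneratorPoisson',   # Poisson spike trains
}

def neuroml_input_type(brian2_class_name):
    """Return the candidate NeuroML2 element, or None if unsupported."""
    return BRIAN2_TO_NEUROML_INPUT.get(brian2_class_name)
```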

Use more specific base classes

As we discussed with Padraig, someone using neuron models exported from Brian2 would have an easier time if the generated types did not only inherit from the generic baseCell. Compared to the other issues this is a rather low-priority one, but some of the simple cases should be very easy to support, e.g.:

  • If the cell defines a threshold, have it inherit from baseSpikingCell
  • If it has a threshold and a variable v with dimensionality volt, have it inherit from baseCellMembPot
  • If instead it has a threshold and a dimensionless variable V, have it inherit from baseCellMembPotDL

https://github.com/NeuroML/NeuroML2/blob/master/NeuroML2CoreTypes/Cells.xml
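The rules above could be expressed as a small helper along these lines. The function and its arguments are hypothetical; only the base-type names come from NeuroML2's Cells.xml:

```python
# Hypothetical helper implementing the base-type selection rules listed
# above. `variables` maps variable name -> dimensionality string.
def select_base_cell(has_threshold, variables):
    if not has_threshold:
        return 'baseCell'
    if variables.get('v') == 'volt':
        return 'baseCellMembPot'
    if variables.get('V') == 'dimensionless':
        return 'baseCellMembPotDL'
    # threshold but no recognised membrane-potential variable
    return 'baseSpikingCell'
```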

Use "real examples" for testing

Have a look at Brian2's examples and see which could in principle be supported by our LEMS/NeuroML export. If only a small piece of functionality is missing, it might be worth implementing it. It would be great if, at the end, we had a few examples that do something non-trivial. With support for synapses (#6), I think we should be able to run e.g. examples/synapses/licklider.py (not 100% sure about the noise, though; you might have to manually replace "xi" by something like sqrt(dt) * normal() (or whatever LEMS uses for randn)).
With support for PoissonGroup (#8), examples/synapses/STDP.py should also work.
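The manual "xi" replacement could be done with a simple textual substitution, for example as below. Whether normal(0, 1) is the right LEMS spelling for a Gaussian sample is an open question, as noted above:

```python
import re

# Sketch of the manual noise-term rewrite: replace the standalone symbol
# "xi" in an equation string. The normal(0, 1) spelling is an assumption.
def replace_noise_term(equation):
    # \b word boundaries ensure we only match xi as a whole symbol
    return re.sub(r'\bxi\b', '(sqrt(dt) * normal(0, 1))', equation)
```

Usage: replace_noise_term('dv/dt = -v/tau + sigma*xi : volt') rewrites only the noise symbol and leaves the rest of the equation untouched.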

Synaptic connections

As we discussed, the next big feature to implement is synaptic connections. To start, deal with explicit synapses given via synapses.connect(i=..., j=...). To do this, you'll have to implement LEMSDevice.synapses_connect, which will be called instead of the original Synapses.connect function. It takes the same arguments as Synapses.connect, but note that it starts like this:

def synapses_connect(self, synapses, <arguments of Synapses.connect>)

i.e. you have "two self arguments": self corresponds to the LEMSDevice, and synapses is the Synapses object.

When connecting synapses works in this way, the next step is to support synaptic connections defined by expressions/patterns (probabilistic connections, one-to-one connections, etc.). As discussed with Padraig, this is not something that we can do in the generated LEMS/NeuroML code; instead, Brian will generate the connections, and you'll then include them in the XML file in the same way as if the user had called Synapses.connect(i=..., j=...) directly. IMO, the best way to implement this would be to have LEMSDevice.synapses_connect first call the original Synapses.connect with the given arguments and then pass the generated values for i and j to NeuroML. In the beginning, only support a single call to connect for each Synapses object (i.e. raise a NotImplementedError for a second call).

Alternatively (this might actually be even easier, and it naturally supports multiple connect calls): do not overwrite Synapses.connect at all; simply check Synapses.i and Synapses.j for all Synapses objects in the network and create the corresponding synapses.
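The alternative approach (reading Synapses.i and Synapses.j after Brian has generated the connections) might look roughly like this; FakeSynapses is only a stand-in for a real Synapses object, and export_connections is a hypothetical helper:

```python
from collections import namedtuple

# Hypothetical sketch: collect explicit (pre, post) index pairs from each
# Synapses-like object after the connections have been generated.
def export_connections(synapses_objects):
    connections = {}
    for syn in synapses_objects:
        # syn.i / syn.j hold the generated pre-/post-synaptic indices
        connections[syn.name] = list(zip(syn.i, syn.j))
    return connections

# Stand-in for a real Synapses object, for illustration only.
FakeSynapses = namedtuple('FakeSynapses', ['name', 'i', 'j'])
conns = export_connections([FakeSynapses('S', [0, 1, 2], [1, 2, 0])])
```

Each collected pair would then be written out as one explicit connection in the XML, exactly as if the user had passed the indices to connect directly.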

However, all of this will not work with the current design of LEMSDevice because the expressions for generating synapses refer to variables that are not actually stored anywhere. E.g. you could define spatial connectivity with something like this:

group = NeuronGroup(100, '''dv/dt = ... : volt
                            x : meter
                            y : meter''', threshold='v>v_th')
group.x = '(i % 10) * 50*um'
group.y = '(i // 10) * 50*um'
synapses = Synapses(group, group, ...)
synapses.connect('sqrt((x_pre - x_post)**2 + (y_pre - y_post)**2) < 100*umeter')

With the current approach, we'd generate <OnStart> assignments for the x and y values of group, but when we run Brian2's standard Synapses.connect function, it cannot actually refer to them.

The solution is, I think, to actually have LEMSDevice act like the standard runtime device except for the run. This means that it will actually assign and store values for x and y and it can create the synapses. Doing this should be straightforward:

  • Have LEMSDevice inherit from RuntimeDevice
  • Delete all the dummy functions that do nothing (e.g. LEMSDevice.add_array, LEMSDevice.get_value) -- their functionality will be inherited from RuntimeDevice
  • Also delete the DummyCodeObject and LEMSDevice.code_object
  • For the overwritten functions like variableview_set_with_expression_conditional, call the original implementation in addition to what LEMSDevice is doing currently. (Note that you cannot call variableview.set_with_expression_conditional, because this would call the overwritten function again. You have to take the quite ugly workaround of calling VariableView.original_function.set_with_expression_conditional(variableview, ...).)

This way, everything up to the run call (including most importantly the synapse creation) will work just as in a normal run, but then the run will not be executed but instead we'll write out the XML file.
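The "call the original implementation" workaround from the last bullet follows the usual monkey-patching pattern: keep a reference to the original function before replacing it (Brian2 stores it as original_function; the toy below keeps its own reference instead, and uses a stand-in class rather than Brian2's actual VariableView):

```python
# Toy illustration of delegating to an overwritten method's original
# implementation. VariableView here is a stand-in class.
class VariableView:
    def set_value(self, value):
        return ('original', value)

# Keep a reference to the unpatched function before overwriting it.
original_set_value = VariableView.set_value

def patched_set_value(self, value):
    # Record the assignment for LEMS export here, then delegate.
    # Calling self.set_value at this point would recurse into the patch.
    return ('patched',) + original_set_value(self, value)

VariableView.set_value = patched_set_value
```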

record=True in Spike/State monitors

When record=True in the monitors, the logical thing to do would be to add as many Lines to the Display, and as many EventSelections, as there are neurons. But if the number is big, I guess it may cause memory problems in the future.
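Expanding record=True into per-neuron entries could look like this for the Display/Line case. The element and attribute names follow LEMS plotting conventions, but treat the helper and the quantity paths as illustrative assumptions:

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: expand record=True into one <Line> per neuron.
def add_lines(display, n_neurons, quantity_template):
    for idx in range(n_neurons):
        ET.SubElement(display, 'Line',
                      id='line%d' % idx,
                      quantity=quantity_template % idx)

display = ET.Element('Display', id='d0')
add_lines(display, 3, 'pop[%d]/v')  # 'pop[%d]/v' is an assumed path format
```

For large groups this makes the memory concern above concrete: the XML grows linearly with the number of neurons, one element per recorded trace.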

Implement automatic tests

With the monitors writing to disk, it should now be possible to run the same simulation with Brian 2 and with LEMS and then compare the stored voltage traces and/or spikes.
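Once both simulators write their traces to disk, the comparison itself can be as simple as an element-wise tolerance check. A pure-Python sketch, with the file loading omitted and the tolerances chosen arbitrarily:

```python
import math

# Sketch of the proposed test: check that two recorded voltage traces
# agree element-wise within a tolerance. Tolerance values are placeholders.
def traces_match(trace_a, trace_b, rel_tol=1e-6, abs_tol=1e-9):
    if len(trace_a) != len(trace_b):
        return False
    return all(math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
               for a, b in zip(trace_a, trace_b))
```

The same check can be applied to spike times; for spike trains, comparing the sorted times per neuron should be enough.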

Find the value of dt

In the add_spikemonitor method, when I'm taking data out of the StateMonitor, I also wanted to record the integration time step of the simulation. But when I read obj.clock.dt, it is always 0. Why? And if that is intended, how can I extract the value at this point?

Bug in CGM example

I noticed that in example_cgm.py there are actually two runs, which should have resulted in separate MultiInstantiate elements and simulations. But I wonder how to check what actually changed, and what to modify in that case, without running the whole exporter again from scratch every time.
