brian-team / gsoc2016
Google Summer of Code project 2016 (Importing and exporting simulator-independent model-descriptions with the Brian simulator)
Currently, when the exporter sees a `StateMonitor` in Brian2, it adds a `Display`, not an `OutputFile`. This is not very hard to change, I just didn't have time.
Have a look at `CPPStandaloneDevice`: it has an option `build_on_run`, which defaults to `True`. With this option activated, it will call `build` automatically at the end of `network_run`. In `LEMSDevice` this also makes sense, in particular if we only support a single `run` statement -- it will save the user some typing, and adding the `set_device` call would be the only change needed to switch from a standard run to LEMS/NeuroML generation. The output file name can be set directly in `set_device` (again, have a look at `CPPStandaloneDevice`).
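The mechanism described above can be sketched in plain Python. This is a toy illustration of the `build_on_run` pattern, not Brian2's actual device classes -- all names here are simplified stand-ins:

```python
class Device:
    """Toy device illustrating the build_on_run option
    (hypothetical names, not Brian2's implementation)."""
    def __init__(self, build_on_run=True, filename='model.xml'):
        self.build_on_run = build_on_run
        self.filename = filename
        self.built = False

    def network_run(self, duration):
        # ... record the run for later code generation ...
        if self.build_on_run:
            # build() is called automatically at the end of network_run,
            # so the user never has to call it explicitly
            self.build()

    def build(self):
        # here the LEMS/NeuroML file would be written to self.filename
        self.built = True

device = Device(build_on_run=True, filename='lems_model.xml')
device.network_run(duration=100)
```

With `build_on_run=False`, the user would have to call `device.build()` manually after the run, which is the trade-off the option exists for.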
For now, we do not support feeding external input into a network, which limits the models we support. Have a look through Brian2's and NeuroML's classes for this purpose and see which ones can be translated easily. I think `SpikeGeneratorGroup` and `PoissonGroup` should be straightforward, at least.
As we discussed with Padraig, someone using neuron models exported from Brian2 would have an easier time if the generated types did not only inherit from the generic `baseCell`. Compared to the other issues this is rather low priority, but some of the simple classes should be very easy to support, e.g.:

- `baseSpikingCell`
- for a model with a variable `v` with dimensionality `volt`, have it inherit from `baseCellMembPot`
- for a model with a dimensionless variable `V`, have it inherit from `baseCellMembPotDL`

See https://github.com/NeuroML/NeuroML2/blob/master/NeuroML2CoreTypes/Cells.xml for the definitions of these cell types.
Have a look at Brian2's examples and see which could in principle be supported by our LEMS/NeuroML export. If only a small piece of functionality is missing, it might be worth implementing it. It would be great if at the end we had a few examples that do something non-trivial. With support for synapses (#6), I think we should be able to run e.g. `examples/synapses/licklider.py` (not 100% sure about the noise, though -- you might have to manually replace `xi` by something like `sqrt(dt) * normal()`, or whatever LEMS uses for `randn`). With support for `PoissonGroup` (#8), `examples/synapses/STDP.py` should also work.
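The `xi` replacement mentioned above is the standard Euler-Maruyama treatment of white noise: per time step, the noise contribution is `sqrt(dt) * normal()`. A minimal standalone sketch (plain Python, stdlib only; for unit white noise, the variance of the integrated signal should grow like the elapsed time):

```python
import random
from math import sqrt

def simulate_brownian(T=1.0, dt=0.001, seed=0):
    """Integrate dv = xi (unit white noise) by replacing xi with
    sqrt(dt) * normal(), as suggested for the LEMS export."""
    rng = random.Random(seed)
    v = 0.0
    for _ in range(int(T / dt)):
        v += sqrt(dt) * rng.gauss(0.0, 1.0)
    return v

# For unit white noise, Var[v(T)] should be close to T (= 1.0 here).
samples = [simulate_brownian(seed=s) for s in range(200)]
variance = sum(x * x for x in samples) / len(samples)
```

The `sqrt(dt)` factor is the crucial part -- naively substituting `normal()` for `xi` would make the result depend on the integration step size.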
As we discussed, the next big feature to implement is synaptic connections. To start, deal with explicit synapses given with `synapses.connect(i=..., j=...)`. To do this, you'll have to implement `LEMSDevice.synapses_connect`, which will be called instead of the original `Synapses.connect` function. It takes the same arguments as `Synapses.connect`, but note that it starts like this:

    def synapses_connect(self, synapses, <arguments of Synapses.connect>)

i.e. you have "two `self` arguments": `self` corresponds to the `LEMSDevice` and `synapses` is the `Synapses` object.
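The "two `self` arguments" pattern can be sketched in plain Python with dummy stand-ins for the Brian2 classes (these are not Brian2's real implementations): the device replaces `Synapses.connect` so that every call lands in a device method that receives both the device and the `Synapses` object.

```python
class Synapses:
    """Dummy stand-in for Brian2's Synapses class."""
    def connect(self, i=None, j=None):
        # original implementation (placeholder)
        self.i, self.j = list(i), list(j)

class LEMSDevice:
    """Dummy device that intercepts connect() calls."""
    def __init__(self):
        self.recorded = []

    def synapses_connect(self, synapses, i=None, j=None):
        # 'self' is the device, 'synapses' is the Synapses object;
        # the remaining arguments are those of Synapses.connect
        self.recorded.append((synapses, list(i), list(j)))

    def install(self):
        device = self
        def connect(syn_self, i=None, j=None):
            # forward the Synapses instance as an explicit argument
            device.synapses_connect(syn_self, i=i, j=j)
        Synapses.connect = connect

device = LEMSDevice()
device.install()
syn = Synapses()
syn.connect(i=[0, 1], j=[1, 0])
```

After `install()`, the call `syn.connect(...)` never reaches the original implementation -- the device records the connection request instead.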
When connecting synapses works in this way, the next step is to support synaptic connections defined by expressions/patterns (probabilistic connections, one-to-one connections, etc.). As discussed with Padraig, this is not something we can do in the generated LEMS/NeuroML code; instead, Brian will generate the connections and you'll include them in the XML file in the same way as if the user had called `Synapses.connect(i=..., j=...)` directly. IMO, the best way to implement this would be to have `LEMSDevice.synapses_connect` first call the original `Synapses.connect` with the given arguments, and then pass the generated values for `i` and `j` to NeuroML. In the beginning, only support a single call to `connect` for each `Synapses` object (i.e. raise a `NotImplementedError` for a second call). Alternatively (that might actually be even easier, and it naturally supports multiple `connect` calls): do not overwrite `Synapses.connect` at all -- simply check `Synapses.i` and `Synapses.j` for all `Synapses` objects in the network and create the corresponding synapses.
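The alternative just described can be sketched like this (dummy stand-ins for the Brian2 classes; the `<Connection>` element here is illustrative, not the exact NeuroML schema): leave `connect` untouched, let Brian generate the indices, and afterwards read `i` and `j` off every `Synapses` object to emit the explicit connection list.

```python
class Synapses:
    """Dummy stand-in: after connect(), i and j hold source/target indices."""
    def __init__(self, name):
        self.name = name
        self.i, self.j = [], []

    def connect(self, i, j):
        self.i.extend(i)
        self.j.extend(j)  # multiple connect() calls accumulate naturally

def export_connections(network_objects):
    """Collect explicit (source, target) pairs from all Synapses objects."""
    lines = []
    for obj in network_objects:
        if isinstance(obj, Synapses):
            for src, tgt in zip(obj.i, obj.j):
                lines.append(
                    f'<Connection synapses="{obj.name}" pre="{src}" post="{tgt}"/>')
    return lines

syn = Synapses('exc')
syn.connect(i=[0, 1], j=[2, 3])
syn.connect(i=[4], j=[5])   # a second connect() call is supported for free
xml = export_connections([syn])
```

Because the export step only reads the final `i`/`j` arrays, it does not matter how many `connect` calls produced them.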
However, all of this will not work with the current design of `LEMSDevice`, because the expressions for generating synapses refer to variables that are not actually stored anywhere. E.g. you could define spatial connectivity with something like this:

    group = NeuronGroup(100, '''dv/dt = ... : volt
                                x : meter
                                y : meter''', threshold='v>v_th')
    group.x = '(i % 10) * 50*um'
    group.y = '(i / 10) * 50*um'
    synapses = Synapses(group, group, ...)
    synapses.connect('sqrt((x_pre - x_post)**2 + (y_pre - y_post)**2) < 100*umeter')
With the current approach, we'd generate `<OnStart>` assignments for the `x` and `y` values of `group`, but when we run Brian2's standard `Synapses.connect` function, it cannot actually refer to them.
The solution, I think, is to have `LEMSDevice` act like the standard runtime device except for the run. This means it will actually assign and store values for `x` and `y`, so it can create the synapses. Doing this should be straightforward:

- let `LEMSDevice` inherit from `RuntimeDevice`
- remove the array-handling methods (`LEMSDevice.add_array`, `LEMSDevice.get_value`) -- their functionality will be inherited from `RuntimeDevice`
- remove `DummyCodeObject` and `LEMSDevice.code_object`
- in `variableview_set_with_expression_conditional`, call the original implementation in addition to what `LEMSDevice` is doing currently. (Note that you cannot call `variableview.set_with_expression_conditional`, because this would call the overwritten function again. You have to take the quite ugly workaround of calling `VariableView.original_function.set_with_expression_conditional(variableview, ...)`.)

This way, everything up to the run call (most importantly, the synapse creation) will work just as in a normal run, but the run itself will not be executed -- instead, we'll write out the XML file.
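The `original_function` workaround can be sketched in plain Python (a minimal stand-in for `VariableView`, not Brian2's real class): the override keeps a reference to the unwrapped function so the device can call the original implementation without re-entering its own override.

```python
class VariableView:
    """Minimal stand-in for Brian2's VariableView."""
    def set_with_expression_conditional(self, cond, value):
        # original implementation: actually store the values
        self.stored = (cond, value)

# save the unpatched implementation before overriding it,
# mirroring the VariableView.original_function idea from the text
original = VariableView.set_with_expression_conditional
recorded = []

def patched(self, cond, value):
    recorded.append((cond, value))   # what the device adds on top
    # calling self.set_with_expression_conditional(...) here would invoke
    # 'patched' again and recurse forever; call the saved original instead:
    original(self, cond, value)

VariableView.set_with_expression_conditional = patched

view = VariableView()
view.set_with_expression_conditional('i < 50', '0*mV')
```

Both effects happen: the device records the assignment, and the values are still stored as in a normal run.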
When `record=True` in monitors, the logical thing to do would be to add as many `Line`s to the `Display`, and as many `EventSelection`s, as there are neurons. But if the number is big, I guess this may cause memory problems in the future.
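The one-element-per-neuron expansion can be sketched like this (the attribute names on the generated element are illustrative placeholders, not the exact LEMS schema), which also makes the memory concern concrete -- the output grows linearly with the population size:

```python
def display_lines(population, n_neurons, variable='v'):
    """Generate one per-neuron <Line> entry for a Display block.
    Attribute names are illustrative, not an exact LEMS schema."""
    return [
        f'<Line id="{variable}_{i}" quantity="{population}[{i}]/{variable}"/>'
        for i in range(n_neurons)
    ]

lines = display_lines('pop', 3)
```

For a population of a million neurons this list alone would hold a million elements, which is why dumping to an output file scales better than a `Display` for large `record=True` monitors.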
With the monitors writing to disk, it should now be possible to run the same simulation with Brian 2 and with LEMS and then compare the stored voltage traces and/or spikes.
@mstimberg Just checking whether this is the latest version of code for exporting Brian models to LEMS.
If so, we can (slowly, low priority...) update it, and add information here: https://docs.neuroml.org/Userdocs/Software/Tools/Brian.html#userdocs-brian with what's currently supported.
In the `add_spikemonitor` method, when I'm taking data out of the `StateMonitor`, I also wanted to update the integration step of the simulation. But when I read `obj.clock.dt`, it is always 0. Why? And if that's intended, how can I extract that value at this point?
I noticed that in `example_cgm.py` there are actually two runs, which should have had separate `MultiInstantiate` and simulation elements. But I wonder how to check what actually changed, and what to modify in that case, without running the whole exporter again from scratch each time.