
uni-courses / snncompare

Runs networkx graphs representing spiking neural networks of LIF neurons on lava-nc or networkx.

License: GNU Affero General Public License v3.0

Python 98.37% Shell 1.63%
graph graph-algorithms networkx neuromorphic neuromorphic-engineering spiking-neural-network spiking-neural-networks lava-nc neuromorphic-computing

snncompare's People

Contributors: a-t-0, cashbobhudson, example123

snncompare's Issues

Ensure a_in signal is not returned as duplicate value.

a_in seems not to be erased per timestep: at t=2 the correct a_in comes in, yet at t=3, a_in*2 is added. Fix this.

This was half of the issue; this half is resolved.
For the recurrent nodes, the edge is perhaps looped through twice (once from left to right, once from right to left), which may also result in a duplicate spike input signal.
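
A minimal sketch of the intended fix (attribute names like "a_in" and "spike" are assumptions about the simulation state): reset a_in at the start of every timestep before accumulating this timestep's spikes, and iterate over the directed edges exactly once.

import networkx as nx

def accumulate_a_in(G: nx.DiGraph) -> None:
    # Erase last timestep's input so it cannot be added twice.
    for node in G.nodes:
        G.nodes[node]["a_in"] = 0
    # A DiGraph yields each directed edge once, so a recurrent pair
    # (left->right and right->left) is two distinct edges, each applied once.
    for left, right in G.edges:
        if G.nodes[left]["spike"]:
            G.nodes[right]["a_in"] += G.edges[left, right]["weight"]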

Visualise network behaviour for lava SNNs and networkx SNNs.

  • Store the lava SNN network for t timesteps.
  • Store the networkx SNN network for t timesteps.
  • Compute spikes per node per timestep and store them.
  • Make the node colour change if the neuron spikes.
  • Make the colour of a neuron's outgoing edges change if the edge's input neuron spikes (see the sketch below).
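
A sketch of the node/edge colouring, assuming each node stores a per-timestep boolean spike list under a "spikes" attribute (a hypothetical name):

import matplotlib.pyplot as plt
import networkx as nx

def plot_snn_at_t(G: nx.DiGraph, t: int, filename: str) -> None:
    # A node turns red when its neuron spikes at timestep t.
    node_colours = ["red" if G.nodes[n]["spikes"][t] else "lightblue" for n in G.nodes]
    # An outgoing edge turns red when its input (source) neuron spikes.
    edge_colours = ["red" if G.nodes[u]["spikes"][t] else "black" for u, v in G.edges]
    nx.draw(G, node_color=node_colours, edge_color=edge_colours, with_labels=True)
    plt.savefig(filename)
    plt.close()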

Write a test that verifies radiation is implemented correctly.

  • Allow users to run with radiation only, or with adaptation only.
  • Allow devs to run tests in the long test format, including radiation. Specify a seed and hardcode the expected neuron deaths and results (see the sketch below).
  • Verify the neuron termination code works if all (selector) neurons are killed.
  • Determine why simulation is not stopped when terminator node spikes for:
python -m src.snncompare -e mdsa_size3_m0_adap_rad -v -x png
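
A hedged pytest sketch of the seeded-radiation test; apply_radiation is a stand-in for the actual implementation, and the hardcoded deaths below were computed for Python's random module with seed 42:

import random

def apply_radiation(neurons, death_probability, seed):
    # Stand-in: kill each neuron independently with the given probability.
    rng = random.Random(seed)
    return {n for n in neurons if rng.random() < death_probability}

def test_radiation_deaths_are_deterministic():
    neurons = [f"selector_{i}" for i in range(5)]
    dead = apply_radiation(neurons, death_probability=0.25, seed=42)
    # Hardcoded expectation for this seed and draw order; recompute if the
    # RNG usage changes.
    assert dead == {"selector_1", "selector_3"}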

Write tests that verify running on networkx behaves the same as running on lava.

Simulation

  • Create list of deep copies of G per timestep and return them from simulation.
  • Write test that generates arbitrary connected directional networks using a random seed.
  • Create lava SNN network from these random networks.
  • Simulate the lava SNN network for t timesteps.
  • Store the lava SNN network for t timesteps.
  • Create networkx SNN network from these random networks.
  • Simulate the networkx SNN network for t timesteps.
  • Store the networkx SNN network for t timesteps.

Tests

  • For each timestep for each Edge:
    • Verify synapse weight is correct.
    • Verify the left node and right node are correct.
  • For each timestep for each Node:
    • Verify u[t] is correct.
    • Verify v[t] is correct.
    • Verify dv is correct.
    • Verify du is correct.
    • Verify bias is correct.
    • Verify vth is correct.
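
A sketch of the per-timestep node checks, assuming nx_graphs[t] holds the networkx state at timestep t and lava_neurons maps node names to lava processes, whose variables are read via .get() (as in the lava_neuron.du.get() call quoted elsewhere in these issues):

def assert_same_neuron_states(nx_graphs, lava_neurons, t):
    G = nx_graphs[t]
    for nodename in G.nodes:
        node = G.nodes[nodename]
        lava_neuron = lava_neurons[nodename]
        # Compare every stored LIF property against the lava process variables.
        assert node["u"] == lava_neuron.u.get()
        assert node["v"] == lava_neuron.v.get()
        assert node["du"] == lava_neuron.du.get()
        assert node["dv"] == lava_neuron.dv.get()
        assert node["bias"] == lava_neuron.bias.get()
        assert node["vth"] == lava_neuron.vth.get()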

Include check to prevent duplicate runs.

For stage 1:

  • Write a test that checks stage 1 is not performed if it already has been performed and overwrite is not true.
  • Write a test that checks stage 1 is performed if it already has been performed and overwrite is true.
  • Write a test that checks stage 1 is performed if it is not yet done and overwrite is False.
  • Write a test that checks stage 1 is performed if it is not yet done and overwrite is True.

For stage 2:

  • Write a test that checks stage 2 is not performed if it already has been performed and overwrite is not true.
  • Write a test that checks stage 2 is performed if it already has been performed and overwrite is true.
  • Write a test that checks stage 2 is performed if it is not yet done and overwrite is False.
  • Write a test that checks stage 2 is performed if it is not yet done and overwrite is True.

For stage 3:

  • Write a test that checks stage 3 is not performed if it already has been performed and overwrite is not true.
  • Write a test that checks stage 3 is performed if it already has been performed and overwrite is true.
  • Write a test that checks stage 3 is performed if it is not yet done and overwrite is False.
  • Write a test that checks stage 3 is performed if it is not yet done and overwrite is True.

For stage 4:

  • Write a test that checks stage 4 is not performed if it already has been performed and overwrite is not true.
  • Write a test that checks stage 4 is performed if it already has been performed and overwrite is true.
  • Write a test that checks stage 4 is performed if it is not yet done and overwrite is False.
  • Write a test that checks stage 4 is performed if it is not yet done and overwrite is True.
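
All sixteen tests above target the same skip-unless-overwrite rule; a minimal sketch of it, with the stage check and stage execution passed in as (hypothetical) callables:

def run_stage(stage, run_config, has_outputted_stage, perform_stage, overwrite=False):
    """Perform a stage only if it is not yet done, or if overwrite is True."""
    if has_outputted_stage(stage, run_config) and not overwrite:
        return False  # Already performed and overwrite is False: skip.
    perform_stage(stage, run_config)
    return True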

Additional tests:

  • Include check to verify each input graph is unique.
  • Verify the random graph generation is indeed triangle-free (see the check sketched below).
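
The triangle-free check can use networkx's per-node triangle counter, which is defined for undirected graphs:

import networkx as nx

def is_triangle_free(G: nx.Graph) -> bool:
    # nx.triangles returns, per node, the number of triangles it is part of.
    return sum(nx.triangles(G).values()) == 0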

Complete experiment with single command.

  • Ensure a single src/*.py file runs the entire experiment; don't require running a test file to perform the experiment.
  • Create a separate module that is able to run the experiments based on the generated graphs.

The networkx simulation adds input signals after 1 second of simulation, whereas LIF lava does not.

Adapt the networkx simulation by adding a delay of 1 timestep before adding the input signals of a spike, such that it runs in sync with the lava LIF simulation.

(Simply delaying the entire networkx SNN simulation by 1 timestep yields incorrect results, as that does not allow for computation of the bias etc. for v[t] and u[t].)
Ensure the default values, like the bias for v[t], are computed at the first/same timestep.
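
A hedged sketch of the fix, assuming the floating-point LIF update u[t] = u[t-1]*(1-du) + a_in and v[t] = v[t-1]*(1-dv) + u[t] + bias: spikes are buffered in a_in for one timestep, while the bias-driven updates of u[t] and v[t] still run from the first timestep. Attribute names are assumptions, and a_in is assumed initialised to 0.

def networkx_lif_step(G):
    for n in G.nodes:
        node = G.nodes[n]
        # a_in holds the spikes emitted one timestep ago (the 1-step delay).
        node["u"] = node["u"] * (1 - node["du"]) + node["a_in"]
        node["v"] = node["v"] * (1 - node["dv"]) + node["u"] + node["bias"]
        node["spike"] = node["v"] > node["vth"]
        if node["spike"]:
            node["v"] = 0  # Reset the membrane voltage after spiking.
        node["a_in"] = 0
    # Deliver this timestep's spikes at the next timestep.
    for left, right in G.edges:
        if G.nodes[left]["spike"]:
            G.nodes[right]["a_in"] += G.edges[left, right]["weight"]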

Convert old graph properties into new networkx graph properties.

The old graphs have certain properties per node.

  • List which properties are required for the new networkx LIF neurons.
    • name: int,
    • bias: float,
    • du: float,
    • dv: float,
    • vth: float
    • Find edge weights (and verify they are correct).
  • Find those properties in each of the old graphs.
    • mdsa_graph
    • brain_adaptation_graph
    • second_rad_damage_graph
  • Write function that converts those properties.
  • Re-run the SNN simulation.
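
A hypothetical conversion sketch that copies the listed properties per node, plus the edge weights, onto a fresh networkx graph:

import networkx as nx

def convert_old_graph(old_G: nx.DiGraph) -> nx.DiGraph:
    new_G = nx.DiGraph()
    for name in old_G.nodes:
        props = old_G.nodes[name]
        # The properties the new networkx LIF neurons require.
        new_G.add_node(
            name,
            bias=float(props["bias"]),
            du=float(props["du"]),
            dv=float(props["dv"]),
            vth=float(props["vth"]),
        )
    for left, right in old_G.edges:
        # Edge weights are copied as-is; their correctness is verified separately.
        new_G.add_edge(left, right, weight=old_G.edges[left, right]["weight"])
    return new_G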

Re-ensure brain adaptation graphs can be run and their behaviour can be outputted.

Traceback (most recent call last):
  File "/home/name/miniconda3/envs/nx2lava/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/name/miniconda3/envs/nx2lava/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/name/git/networkx-to-lava-nc/src/main.py", line 43, in <module>
    load_pickle_graphs()
  File "/home/name/git/networkx-to-lava-nc/src/pickle_load_graphs.py", line 73, in load_pickle_graphs
    properties_brain_adaptation_graph(
  File "/home/name/git/networkx-to-lava-nc/src/pickle_load_graphs.py", line 182, in properties_brain_adaptation_graph
    counter_neurons = print_graph_properties(brain_adaptation_graph)
  File "/home/name/git/networkx-to-lava-nc/src/pickle_load_graphs.py", line 224, in print_graph_properties
    for nodename in G.nodes:
AttributeError: 'NoneType' object has no attribute 'nodes'

Set up pre-commit hooks.

Include pre-commit hook:
https://ljvmiranda921.github.io/notebook/2018/06/21/precommits-using-black-and-flake8/

After looking into this example:
https://github.com/EdgarLefevre/python_cookiecutter/blob/fd033120be724027e52e5596c0c9cd5093f7db59/%7B%7Bcookiecutter.directory_name%7D%7D/.pre-commit-config.yaml
I determined it would be of added value to also include the following pre-commit checks for Python:

#-   repo: local
#    hooks:
#    -   id: run_test
#        name: "Run Pytest"
#        entry: "python .run_pytest.py"
#        language: "conda"
#        types: [python]
#    -   id: check_coverage
#        name: "Check Coverage"
#        entry: "python .check_coverage.py"
#        language: "conda"
#        types: [python]

Below are more pre-commit hook checks that can be included:

Check Yaml...............................................................Passed
Fix End of Files.........................................................Passed
Trim Trailing Whitespace.................................................Failed

Instead of creating duplicate graphs for each timestep, store the timestep data as attributes in a single graph.

Currently the graphs of a single run config can store up to 42 nx.DiGraph objects for a simulation of 42 timesteps. That means the graph nodes are stored in duplicate, 42 times in total. This results in JSON files of up to 61.1 MB for graphs of size 5, and leads to long read and write durations.

So instead, create one graph that always stores the initial graph at the first timestep, with the additional timesteps stored as sequential entries.

Ensure this change is reflected throughout the entire code.
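
A sketch of the single-graph layout (the "states" attribute name is an assumption): each simulated timestep appends one small state entry per node instead of deep-copying the whole nx.DiGraph:

def record_timestep(G) -> None:
    for n in G.nodes:
        node = G.nodes[n]
        # One small dict per timestep instead of a full nx.DiGraph copy.
        node.setdefault("states", []).append(
            {"u": node["u"], "v": node["v"], "spike": node["spike"]}
        )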

Verify redundancy behaviour on various neuron types.

Determine why selector_2_0 and red_selector_2_0 fire at the same time for:
second_rad_id670488_probability_0.25_adapt_True_42_size3_m0_iter0_t=0.png
(Probably because it is at t=0, meaning they both fire, whilst one should wait for the other to be inhibited.)

Verify why certain nodes (selector_0 and red_selector_0) are not shown as red even though they appear to be dead due to radiation.

Compute results per run.

Get the test results per run (out of the pickles).
Create a JSON file with the results, where each entry is keyed by a unique run id.

The JSON contains:

  • Unique run id (a hash of the following properties; see the sketch after this list).
  • graph size
  • m
  • iteration
  • Bool: has brain adaptation
  • Bool: has radiation damage
  • neuron_death_probability
  • dead neurons
  • is_correct
  • G_alipour nodes
  • Computed nodes in mdsa graph
  • Seed
  • Simulation time
  • original graph (such that it can be recreated)
  • brain adaptation graph (if exists, None otherwise)
  • radiation damaged graph (if exists, None otherwise)
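
A hedged sketch of the unique run id: serialise the identifying properties deterministically and hash them, e.g. unique_run_id({"graph_size": 3, "m": 0, "iteration": 0, "seed": 42}).

import hashlib
import json

def unique_run_id(run_properties: dict) -> str:
    # sort_keys makes the serialisation, and hence the id, deterministic.
    serialised = json.dumps(run_properties, sort_keys=True)
    return hashlib.sha256(serialised.encode("utf-8")).hexdigest()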

In the output image folder, sort graphs on:

  • without adaptation
  • with adaptation, without rad damage
  • with adaptation, with rad damage

Then create subfolders:

  • Failed
  • Passed

Then create a separate subfolder per unique run id.

Create and verify run object.

Create the run outputs.

  • This set of ranges forms a single experiment and can be exported as a dictionary/JSON.
  • Each experiment will have a unique id (that is not the same as the id of one of its runs).
  • SKIP: One should be able to combine multiple experiments into a single experiment. If one experiment is run without adaptation/radiation and the other with (or with and without), the merged experiment will run with and without.
  • An experiment consists of multiple runs.
  • Each run consists of 4 stages:
    • 1. Generate the graphs on which to run the experiment, and export this data into a pickle (and/or dictionary).
    • 2. Run the experiments on the graphs that are to be run by the experiment.
    • 3. Visualise the graph behaviour.
    • 4. Process the results and output the processed results.

Specify input/output.

  • Determine what the folder structure of the 4 output stages is going to be.
  • Determine what to output exactly.
  • Determine the naming scheme of the output.
  • Write method that loops over experiment setting and generates run setting.
    • Each run shall have a unique id.
  • Write method to determine whether output has been generated or not (see the sketch after this list).
    • Specify output of that method.
  • Write method that takes in what has already been computed, and then determines what should still be computed.
  • Write method that performs the computation for stage 1.
  • Write method that performs the computation for stage 2.
  • Write method that performs the computation for stage 3.
  • Write method that performs the computation for stage 4.
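
A hypothetical sketch of the output-existence check; the file layout (one JSON file per run per stage, named after the unique run id) is an assumption:

from pathlib import Path

def output_exists(run_id: str, stage: int, output_dir: str = "results") -> bool:
    # Stage output is considered generated once its JSON file exists.
    return Path(output_dir, f"{run_id}_stage{stage}.json").exists()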

Check which run outputs have been generated.
Compute which run segments to perform.

  • Create a run object that contains a single configuration of the configuration settings.
  • Ensure the run object is verified.
    After checking whether the run object exists or not:
  • Include a boolean indicating whether it should show the results or not.
  • Include a boolean indicating whether it should export the results (of stage 3, visualisation) or not.
  • Include a boolean indicating whether Stage 1 of that run is already performed.
  • Include a boolean indicating whether Stage 2 of that run is already performed.
  • Include a boolean indicating whether Stage 3 of that run is already performed.
  • Include a boolean indicating whether Stage 4 of that run is already performed.
    Then for another issue:

Create new issue:

  • Allow outputting the results without running the complete graph simulation if it already exists.

Create separate graph name for different graph types.

Currently the output graphs overwrite each other.

An example of how the radiation eliminates the performance of the algorithm without adaptation, for rad_snn_algo_graph:
rad_snn_algo_graph_{'adaptation_redundancy':1.0,'algorithm_MDSA_m_val':0,'iteration':0,'graph_size':3,'graph_nr':0,'radiation_neuron_death':0.2,'seed':42,'simulator':'nx'}_25.png

Make sure all nodes are reached by a single starter neuron.

There may be random neurons that only have outgoing synapses. Unless there is exactly one such neuron, and that neuron happens to be the starter neuron, these outgoing-only neurons would not be reached by the starter neuron.

This may result in unexpected behaviour, and may be why running the simulation yields a hanging network when `lava_neuron.du.get()` is evaluated afterwards.
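
The reachability property can be checked directly with networkx:

import networkx as nx

def all_nodes_reached(G: nx.DiGraph, starter: str) -> bool:
    # Descendants are all nodes reachable from the starter via directed paths.
    reached = nx.descendants(G, starter) | {starter}
    return reached == set(G.nodes)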

Ensure the recurrent synapses are created in the mdsa graphs.

Currently, no recurrent edges are created in the outputted mdsa graphs, nor in the graphs with brain adaptation or radiation.

  • Ensure the spike_once, rand neurons (and probably the selector neurons) have an inhibitory recurrent edge in the mdsa graph.
  • Ensure the spike_once, rand neurons (and probably the selector neurons) have an inhibitory recurrent edge in the brain adaptation graph.
  • Ensure the spike_once, rand neurons (and probably the selector neurons) have an inhibitory recurrent edge in the radiation graph.
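
A hedged sketch of adding the inhibitory recurrent edges as self-loops; the node-name prefixes follow the issue text, and the weight value is an assumption:

import networkx as nx

def add_recurrent_inhibition(G: nx.DiGraph, weight: float = -2.0) -> None:
    for node in list(G.nodes):
        if str(node).startswith(("spike_once_", "rand_", "selector_")):
            # A negative-weight self-loop inhibits the neuron after it spikes.
            G.add_edge(node, node, weight=weight)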

Specify experiment object hierarchy.

The experiment is run using the following for loop:

for m in range(0, 1):
    for iteration in range(0, 2, 1):
        for size in range(3, 4, 1):
            # for neuron_death_probability in [0.1, 0.25, 0.50]:
            for neuron_death_probability in [0.01, 0.05, 0.1, 0.2, 0.25]:
                rad_dam = Radiation_damage(neuron_death_probability)
                graphs = used_graphs.get_graphs(size)
                for G in graphs:
                    for has_adaptation in [True, False]:
                        for has_radiation in [True, False]:
                            ...  # loop body omitted in the issue
The experiment should be characterised by the following parameters:
iteration (range), m (range), size (range), neuron_death_probabilities (range), input_graph (range), has_adaptation (range), has_radiation (range), backend (networkx or lava), overwrite (bool).

EDIT: The experiment will not have an object hierarchy. It will consist of a flat dictionary, and so will the runs.
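
A sketch of that flat-dictionary form, replacing the nested for loop above: the experiment is a dict of ranges, and each run is generated as one flat dict per combination (backend and overwrite omitted here for brevity):

from itertools import product

experiment = {
    "m": range(0, 1),
    "iteration": range(0, 2),
    "size": range(3, 4),
    "neuron_death_probability": [0.01, 0.05, 0.1, 0.2, 0.25],
    "has_adaptation": [True, False],
    "has_radiation": [True, False],
}

def generate_runs(experiment: dict):
    keys = list(experiment.keys())
    for values in product(*(experiment[k] for k in keys)):
        # Each run is itself a flat dictionary.
        yield dict(zip(keys, values))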
