uni-courses / snncompare
Runs networkx graphs representing spiking neural networks of LIF-neurons on lava-nc or networkx.
License: GNU Affero General Public License v3.0
Currently, the concept of adaptation in the form of redundancy is implemented based on the MDSA SNN algorithm. It supports only a limited set of neuron types, and the redundancy is hardcoded per neuron type. It would be valuable if this redundancy could be generalised to apply automatically to arbitrary SNNs.
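A minimal sketch of what such a generalisation could look like, assuming the SNN is a networkx.DiGraph with LIF parameters stored as node attributes; add_redundancy, the "red_" prefix, and the inhibitory weight are illustrative assumptions, not the existing implementation.

# Illustrative sketch only: generalised redundancy for an arbitrary SNN stored
# as a networkx.DiGraph. Node and attribute names are assumptions, not the
# existing snncompare API.
import networkx as nx


def add_redundancy(snn: nx.DiGraph) -> nx.DiGraph:
    """Return a copy of the SNN in which every neuron gets a redundant twin.

    The twin ("red_<name>") copies the original LIF parameters and synapses,
    and is inhibited by the original so it only takes over if the original
    dies (e.g. due to simulated radiation).
    """
    redundant = snn.copy()
    for node in snn.nodes:
        twin = f"red_{node}"
        redundant.add_node(twin, **snn.nodes[node])
        # Copy incoming and outgoing synapses of the original neuron.
        for pre, _, data in snn.in_edges(node, data=True):
            redundant.add_edge(pre, twin, **data)
        for _, post, data in snn.out_edges(node, data=True):
            redundant.add_edge(twin, post, **data)
        # Original inhibits its twin, so the twin stays silent while the
        # original is alive (the inhibitory weight is an assumed convention).
        redundant.add_edge(node, twin, weight=-10)
    return redundant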
a_in does not seem to be reset per timestep: at t=2 the correct a_in comes in, yet at t=3, 2*a_in is added. Fix this.
This was half of the issue, this half is resolved.
For the recurrent nodes, the edge is perhaps looped through twice (once from left to right, once from right to left). This may also result in a duplicate spike input signal; see the sketch below.
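A minimal sketch, assuming the LIF spike flags and synapse weights are stored as node/edge attributes of a networkx.DiGraph (the names "spikes" and "weight" are illustrative, not the existing snncompare API). Rebuilding a_in from zero every timestep prevents input from accumulating across steps, and iterating snn.edges once makes every directed edge, including a recurrent self-loop, contribute exactly once.

import networkx as nx


def compute_a_in(snn: nx.DiGraph) -> dict:
    # Start from zero every timestep, so nothing carries over from t-1.
    a_in = {node: 0.0 for node in snn.nodes}
    for pre, post, data in snn.edges(data=True):
        # Each directed edge (including a self-loop) is visited exactly once.
        if snn.nodes[pre]["spikes"]:
            a_in[post] += data["weight"]
    return a_in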
Sources:
https://medium.com/@pietrassyk/building-a-custom-pip-library-for-python-fe618034d54a
pip install --upgrade pip setuptools wheel
pip install tqdm
pip install --user --upgrade twine
https://packaging.python.org/en/latest/glossary/#term-Distribution-Package
https://packaging.python.org/en/latest/tutorials/packaging-projects/
https://dzone.com/articles/executable-package-pip-install
python -m src.snncompare -e mdsa_size3_m0_adap_rad -v -x png
For stage 1:
For stage 2:
For stage 3:
For stage 4:
Additional tests:
The src/.py file runs the entire test. Don't require a test/file to run to perform the experiment. Currently, it is a boolean that outputs either all or none of the json files.
https://github.com/hhatto/autopep8
(First check difference between pep8 and flake8 and determine which you want.)
Adapt the networkx simulation by adding a delay of 1 timestep before adding the input signals of a spike, such that it runs in sync with the lava LIF simulation.
(Simply delaying the entire snn networkx simulation by 1 timestep yields incorrect results, as that does not allow computation of the bias etc. for v[t] and u[t].)
Ensure the default values, like the bias for v[t], are computed at the first/same timestep.
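A sketch of delaying only the synaptic input by one timestep while the bias/voltage dynamics still start at t=0, which is how Lava LIF processes receive a_in one step after the spike was sent. Attribute names ("v", "u", "bias", "vth", "spikes", "weight") are assumptions.

import networkx as nx


def run_snn(snn: nx.DiGraph, t_max: int) -> None:
    # Spikes emitted at timestep t arrive as a_in at timestep t+1.
    previous_spikes = {node: False for node in snn.nodes}

    for _ in range(t_max):
        # Synaptic input is based on the delayed (previous-step) spikes.
        a_in = {node: 0.0 for node in snn.nodes}
        for pre, post, data in snn.edges(data=True):
            if previous_spikes[pre]:
                a_in[post] += data["weight"]

        current_spikes = {}
        for node, nrn in snn.nodes(data=True):
            # Bias and membrane dynamics are applied from t=0 onwards,
            # even though no synaptic input can arrive before t=1.
            nrn["u"] = a_in[node]
            nrn["v"] = nrn["v"] + nrn["u"] + nrn["bias"]
            current_spikes[node] = nrn["v"] > nrn["vth"]
            if current_spikes[node]:
                nrn["v"] = 0.0
        previous_spikes = current_spikes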
If it is possible to restore the graphs and run configuration from the json files, then don't export pickle files. Otherwise, require a check on the completion of:
before deciding to skip the stage.
If no pickles are created/necessary, then skip the stage if only the:
The old graphs have certain properties per node.
Traceback (most recent call last):
File "/home/name/miniconda3/envs/nx2lava/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/name/miniconda3/envs/nx2lava/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/name/git/networkx-to-lava-nc/src/main.py", line 43, in
load_pickle_graphs()
File "/home/name/git/networkx-to-lava-nc/src/pickle_load_graphs.py", line 73, in load_pickle_graphs
properties_brain_adaptation_graph(
File "/home/name/git/networkx-to-lava-nc/src/pickle_load_graphs.py", line 182, in properties_brain_adaptation_graph
counter_neurons = print_graph_properties(brain_adaptation_graph)
File "/home/name/git/networkx-to-lava-nc/src/pickle_load_graphs.py", line 224, in print_graph_properties
for nodename in G.nodes:
AttributeError: 'NoneType' object has no attribute 'nodes'
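A defensive check along the lines below would turn this crash into a clearer error. The function name follows the traceback, but the body is a hypothetical sketch, not the existing implementation.

# Possible guard for the crash above: fail with a descriptive error when a
# graph was not restored from the pickle, instead of calling .nodes on None.
def print_graph_properties(G):
    if G is None:
        raise ValueError(
            "Expected a networkx graph, got None. Was the brain adaptation "
            "graph missing from the pickle?"
        )
    counter_neurons = 0
    for nodename in G.nodes:
        counter_neurons += 1
        print(f"{nodename}: {G.nodes[nodename]}")
    return counter_neurons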
Include pre-commit hook:
https://ljvmiranda921.github.io/notebook/2018/06/21/precommits-using-black-and-flake8/
After looking into this example:
https://github.com/EdgarLefevre/python_cookiecutter/blob/fd033120be724027e52e5596c0c9cd5093f7db59/%7B%7Bcookiecutter.directory_name%7D%7D/.pre-commit-config.yaml
I determined it would be of added value to also include the following pre-commit checks for Python:
#- repo: local
# hooks:
# - id: run_test
# name: "Run Pytest"
# entry: "python .run_pytest.py"
# language: "conda"
# types: [python]
# - id: check_coverage
# name: "Check Coverage"
# entry: "python .check_coverage.py"
# language: "conda"
# types: [python]
Below are more pre-commit hook checks that can be included:
Check Yaml...............................................................Passed
Fix End of Files.........................................................Passed
Trim Trailing Whitespace.................................................Failed
Verify:
https://stackoverflow.com/questions/70528/why-are-pythons-private-methods-not-actually-private
Or tip: use __some_function_name.
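A short illustration of that tip; the class and attribute names are arbitrary examples, not taken from the codebase.

# Double leading underscores trigger Python name mangling: the attribute is
# stored as _ClassName__name, which discourages (but does not prevent)
# access from outside the class.
class Neuron:
    def __init__(self):
        self.__threshold = 1.0  # stored as _Neuron__threshold


n = Neuron()
# n.__threshold              -> AttributeError
print(n._Neuron__threshold)  # still reachable, just inconvenient: 1.0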
Currently, the graphs of a single run config can store up to 42 nx.DiGraph objects for a simulation of 42 timesteps. That means the graph nodes are stored in duplicate, 42 times in total. This results in json files of up to 61.1 MB for graphs of size 5, and in long read and write durations.
So instead, create 1 graph, and make it such that it always stores the graph of the first timestep, and that the additional timesteps are stored in sequential steps.
Ensure this change is reflected throughout the entire code, especially between what is outputted to json and what is created if a new run config is created using the overwrite settings in the same run_config settings.
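A sketch of the proposed single-graph storage: one graph whose node attributes hold a list of per-timestep states instead of 42 graph copies. The attribute names ("history", "v", "u", "spikes") are illustrative assumptions.

import networkx as nx


def append_timestep_state(snn: nx.DiGraph, t: int) -> None:
    """Append the state at timestep t to each node's state history."""
    for _, nrn in snn.nodes(data=True):
        history = nrn.setdefault("history", [])
        # States must be stored in sequential steps (t=0, 1, 2, ...).
        assert len(history) == t, "Timestep states appended out of order."
        history.append({"v": nrn["v"], "u": nrn["u"], "spikes": nrn["spikes"]})

With this layout the static structure (nodes, synapses, weights) is written to json once, and only the small per-timestep state dictionaries grow with the simulation length.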
Verify that the get_unique_hash() function behaves deterministically for a given configuration. Verify redundancy behaviour on various neuron types.
Determine why selector_2_0 and red_selector_2_0 fire at the same time for:
second_rad_id670488_probability_0.25_adapt_True_42_size3_m0_iter0_t=0.png
(Probably because it is at t=0, meaning they should both fire, whilst one should wait for the other to get inhibited.)
Verify why certain nodes are not shown as red (selector_0 and red_selector_0) even though they seem to be dead due to radiation.
Get the test results per run (out of the pickles).
Create a JSON file with the results, where each entry is keyed by a unique id which is a hash of:
JSON with:
In output image folder:
Sort graphs on:
without adaptation
with adaptation without rad damage
with adaptation with rad damage
Then create subfolder:
Failed
Passed
Then create a separate subfolder per unique run id.
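A hypothetical sketch of how the unique run id and the folder layout above could be derived, assuming the run configuration is a flat, json-serialisable dictionary; get_unique_run_id and get_output_dir are illustrative names, not the existing API.

import hashlib
import json
from pathlib import Path


def get_unique_run_id(run_config: dict) -> str:
    # Sorting the keys makes the hash independent of insertion order.
    serialised = json.dumps(run_config, sort_keys=True)
    return hashlib.sha256(serialised.encode("utf-8")).hexdigest()[:16]


def get_output_dir(run_config: dict, passed: bool) -> Path:
    # Sort graphs into the three adaptation/radiation categories, then into
    # Passed/Failed, then into one subfolder per unique run id.
    if not run_config["has_adaptation"]:
        category = "without_adaptation"
    elif run_config["has_radiation"]:
        category = "with_adaptation_with_rad_damage"
    else:
        category = "with_adaptation_without_rad_damage"
    verdict = "Passed" if passed else "Failed"
    out_dir = Path("output") / category / verdict / get_unique_run_id(run_config)
    out_dir.mkdir(parents=True, exist_ok=True)
    return out_dir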
Include a catch for termination on exception.
Allow for recurrent synapses: convert_networkx_to_lava, line 173:
Create the run outputs.
Specify input output
Check which run outputs have been generated.
Compute which run segments to perform.
Create new issue:
Currently the output graphs are overwritten by each other.
An example of how the radiation eliminates the performance of the algorithm without adaptation, for rad_snn_algo_graph:
rad_snn_algo_graph_{'adaptation_redundancy':1.0,'algorithm_MDSA_m_val':0,'iteration':0,'graph_size':3,'graph_nr':0,'radiation_neuron_death':0.2,'seed':42,'simulator':'nx'}_25.png
There may be random neurons that only have outgoing synapses. Unless there is only one such neuron, and that neuron happens to be the starter neuron, these outgoing-only neurons would not be reached by the starter neuron.
This may result in some unexpected behaviour, and may be the cause of the simulation yielding a hanging network when lava_neuron.du.get() is evaluated afterwards.
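A small sanity check for this, using networkx reachability; the node name "starter" is an assumption.

import networkx as nx


def unreachable_from_starter(snn: nx.DiGraph, starter: str = "starter") -> set:
    # Neurons that can never receive a spike originating from the starter.
    reachable = nx.descendants(snn, starter) | {starter}
    return set(snn.nodes) - reachable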
Currently, no recursive edges are created in the outputted mdsa graphs, neither in the graphs with brain adaptation nor with radiation.
Ensure spike_once, rand neurons (and probably the selector neurons) have an inhibitory recurrent edge in the mdsa graph.
Ensure spike_once, rand neurons (and probably the selector neurons) have an inhibitory recurrent edge in the brain adaptation graph.
Ensure spike_once, rand neurons (and probably the selector neurons) have an inhibitory recurrent edge in the radiation graph.
Source: https://stackoverflow.com/questions/39491420/python-jsonexpecting-property-name-enclosed-in-double-quotes
In:
with open(f"results/{output_name}.json", "w", encoding="utf-8") as fp:
json.dump(test_results_dict, fp)
The dictionary is exported as json. However, the resulting json contains commas at the end of entries, which is not valid in the JSON format. Ensure the trailing commas are not included.
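One way to surface the problem early (a sketch with hypothetical placeholder values): re-load the file immediately after writing, since json.load raises a JSONDecodeError on trailing commas or other invalid JSON.

import json
from pathlib import Path

Path("results").mkdir(exist_ok=True)
output_name = "example_run"                             # hypothetical placeholder
test_results_dict = {"run_id": "abc", "passed": True}   # hypothetical placeholder

with open(f"results/{output_name}.json", "w", encoding="utf-8") as fp:
    json.dump(test_results_dict, fp, indent=2)

# Re-load immediately: any invalid JSON (e.g. trailing commas) raises a
# json.JSONDecodeError here, so a broken export is caught right away.
with open(f"results/{output_name}.json", "r", encoding="utf-8") as fp:
    json.load(fp)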
The experiment is run using the following for loop:
for m in range(0, 1):
    for iteration in range(0, 2, 1):
        for size in range(3, 4, 1):
            # for neuron_death_probability in [0.1, 0.25, 0.50]:
            for neuron_death_probability in [
                0.01,
                0.05,
                0.1,
                0.2,
                0.25,
            ]:
                rad_dam = Radiation_damage(neuron_death_probability)
                graphs = used_graphs.get_graphs(size)
                for G in graphs:
                    for has_adaptation in [True, False]:
                        for has_radiation in [True, False]:
The experiment should be characterised by the following parameters:
iteration (range), m (range), size (range), neuron_death_probabilities (range), input_graph (range), has_adaptation (range), has_radiation (range), backend (networkx or lava), overwrite (bool).
EDIT: The experiment will not have an object hierarchy. It will consist of a flat dictionary. The runs as well.
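A sketch of what that flat dictionary could look like, together with the run configurations generated from it; the key names mirror the parameters listed above but are otherwise assumptions.

from itertools import product

# Flat experiment configuration: no object hierarchy, just ranges per key.
experiment_config = {
    "m_vals": [0],
    "iterations": [0, 1],
    "sizes": [3],
    "neuron_death_probabilities": [0.01, 0.05, 0.1, 0.2, 0.25],
    "has_adaptation": [True, False],
    "has_radiation": [True, False],
    "simulators": ["nx"],
    "overwrite": True,
    "seed": 42,
}

# Each run is itself a flat dictionary, one per combination of the ranges,
# replacing the nested for-loops shown above.
run_configs = [
    {
        "m": m,
        "iteration": iteration,
        "size": size,
        "neuron_death_probability": p,
        "has_adaptation": adaptation,
        "has_radiation": radiation,
        "simulator": simulator,
        "seed": experiment_config["seed"],
        "overwrite": experiment_config["overwrite"],
    }
    for m, iteration, size, p, adaptation, radiation, simulator in product(
        experiment_config["m_vals"],
        experiment_config["iterations"],
        experiment_config["sizes"],
        experiment_config["neuron_death_probabilities"],
        experiment_config["has_adaptation"],
        experiment_config["has_radiation"],
        experiment_config["simulators"],
    )
]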
https://github.com/nedbat/coveragepy/blob/master/.github/workflows/quality.yml
See the coverage project badge named: quality.
https://pypi.org/project/coverage/
Perhaps related:
https://github.com/nedbat/coveragepy/blob/master/.github/workflows/codeql-analysis.yml
mdsa__death_prob0.01_adapt_True_raddamFalse__seed42_size3_m0_iter1_hash-2230525022878144772_t=0.png