
Comments (10)

rbodo commented on May 16, 2024

The aedat input format works similarly to the ImageDataGenerator.flow_from_directory method from tf/keras, which reads images from folders that represent a class.

That means you would have to cut your aedat sequence into shorter files that only contain one class. Then structure your dataset directory like this:

\MyDVSdata
    \apples
        00.aedat
        01.aedat
        ...
    \oranges
        00.aedat
        01.aedat
        ...

The [input] section in your config file would look like this:

[input]
dataset_format = aedat
label_dict = {'apples': '0', 'oranges': '1'}  # Map from folder name to class index in output layer.

For training your model, you may have binned the DVS events into frames. The following arguments allow reconstructing these frames from the event stream at test time:

num_dvs_events_per_sample = 2000     # How many events to accumulate into one frame.
chip_size = (240, 180)               # The dimensions of your DVS sensor.
frame_gen_method = rectified_sum     # rectified_sum: discard polarity. Other possible value: ``signed_sum`` (keep polarity while adding up events into a frame).
is_x_first = False                   # Depending on the axis ordering convention of your framework (numpy, PIL, ...) you may have to swap / flip the x/y coordinates.
is_x_flipped = True
is_y_flipped = True
do_clip_three_sigma = True           # Outlier removal.
maxpool_subsampling = True           # When events fall on the same pixel address within one time bin, we can either add them up (which results in a spike burst) or keep only one of them (which I found to be more robust).
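To make the interplay of these options concrete, here is a rough NumPy sketch of how events might be accumulated into a frame. Note that events_to_frame is a hypothetical helper for illustration only; the toolbox's internal implementation may differ.

```python
import numpy as np

def events_to_frame(xs, ys, polarities, chip_size=(240, 180),
                    frame_gen_method='rectified_sum',
                    maxpool_subsampling=True):
    """Accumulate one batch of DVS events into a single frame.

    Hypothetical helper illustrating the config options above.
    """
    width, height = chip_size
    frame = np.zeros((height, width))
    if frame_gen_method == 'signed_sum':
        # Keep polarity: ON events count +1, OFF events count -1.
        values = np.where(np.asarray(polarities) > 0, 1, -1)
    else:
        # 'rectified_sum': discard polarity, every event counts +1.
        values = np.ones(len(xs), dtype=int)
    for x, y, v in zip(xs, ys, values):
        if maxpool_subsampling:
            frame[y, x] = v   # keep only one event per pixel (no bursts)
        else:
            frame[y, x] += v  # sum repeated events (may cause spike bursts)
    return frame
```

With maxpool_subsampling enabled, two events landing on the same pixel within one bin count only once; without it they add up.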

Our simulator is time-stepped, which means we can only approximate the asynchronous nature of the DVS event stream. We achieve this by feeding in very thin frames (a few microseconds wide). You can specify the bin width in microseconds here:

eventframe_width = 10                      

Smaller values mean a more accurate "asynchronous" mode, but also more computation.
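The slicing into thin time bins can be pictured roughly like this (split_into_eventframes is a hypothetical helper, not the toolbox API):

```python
import numpy as np

def split_into_eventframes(timestamps_us, eventframe_width=10):
    """Assign each event to a consecutive time bin of `eventframe_width`
    microseconds and return, per bin, the indices of its events."""
    ts = np.asarray(timestamps_us)
    # Integer division maps each timestamp to its bin id.
    bin_ids = (ts - ts[0]) // eventframe_width
    return [np.flatnonzero(bin_ids == b) for b in range(int(bin_ids.max()) + 1)]
```

Each returned index group then becomes one thin input frame for the time-stepped simulator.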

from snn_toolbox.

hyeongilee commented on May 16, 2024

Thanks.
I wrote some Python code like /example/mnist_keras_brian2.py, using .aedat files.

But the following error occurs:

Traceback (most recent call last):
  File "KH_keras_brian2.py", line 164, in <module>
    main(config_filepath)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/run.py", line 31, in main
    run_pipeline(config)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/utils.py", line 145, in run_pipeline
    results = run(spiking_model, **testset)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/utils.py", line 220, in wrapper
    results.append(run_single(snn, **testset))
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/utils.py", line 142, in run
    return snn.run(**test_set)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/simulation/utils.py", line 549, in run
    self.parsed_model.layers[0].batch_input_shape, int))
AttributeError: 'InputLayer' object has no attribute 'batch_input_shape'

How can I solve the problem?

rbodo commented on May 16, 2024

Please update the toolbox to the latest PyPI (or better, development) version. The error you are seeing is due to a breaking change in Keras.

hyeongilee commented on May 16, 2024

After updating to the development version, the problem above was solved.

But the following error occurs. I think it is saying that the ANN needs training, and that a method to create an image from the aedat file (to be given as input to the ANN) is not implemented.

Should I implement this part?

Traceback (most recent call last):
  File "KH_keras_INI_2.py", line 165, in <module>
    main(config_filepath)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/run.py", line 31, in main
    run_pipeline(config)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/bin/utils.py", line 81, in run_pipeline
    num_to_test, **testset)
  File "/home/student1/.local/lib/python3.5/site-packages/snntoolbox/parsing/model_libs/keras_input_lib.py", line 209, in evaluate
    raise NotImplementedError
NotImplementedError

When I set the evaluate_ann option to False, I think the weights are set to 0, and nothing appears in the plotting example.

rbodo commented on May 16, 2024

When using aedat as input, you should turn off the evaluate_ann option. This option allows testing the original ANN before conversion, but that requires that you have the DVS dataset preprocessed as frames.

What do you mean by "nothing appears in the plotting example"? The weights shouldn't be set to 0, all that's happening is that the original model is not tested.
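If it isn't turned off already, the option can be disabled in the config file. Assuming it lives in the [tools] section, as in the toolbox's example configs:

```ini
[tools]
evaluate_ann = False
```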

hyeongilee commented on May 16, 2024

The activations of the layer are almost 0, as shown below.
[screenshot: layer activation plots, values near zero]

I'm building a classification network on my own dataset that distinguishes whether a short given sequence shows a glass bottle or a mug, as shown below.
[image: sample frames from the dataset]

If the weights are not 0, then why are the activation values all 0?

rbodo commented on May 16, 2024

OK, what you see here is unrelated to the evaluate_ann option.

The reason the activations are so low is not that the weights are modified but that the input image is so sparse.

Remember that your input is a stream of DVS events. If we want to apply the ANN to these events, we need to bin them into frames. The toolbox allows you to specify how many events should go into one frame:

[input]
num_dvs_events_per_sample = 2000

I would try to increase this number until you get frames that look reasonable. (If you have all plots enabled, you can find the created frames in the log dir.)
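If you prefer to check this programmatically rather than by eye, a simple density measure over the frames in the log dir can help (frame_density is a hypothetical helper, not part of the toolbox):

```python
import numpy as np

def frame_density(frame):
    """Fraction of pixels touched by at least one event. Values close to
    zero suggest num_dvs_events_per_sample is too small."""
    return np.count_nonzero(frame) / frame.size
```

For example, one could keep raising num_dvs_events_per_sample until the density of the reconstructed frames reaches at least a few percent.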

hyeongilee commented on May 16, 2024

I increased num_dvs_events_per_sample to 200000, but the input image doesn't change much...
Only the noise that looks like a line at x ~ 100 (I'm not sure what it is) is a bit sharper.
[screenshot: reconstructed input frame]

There seems to be no problem with the input aedat file.

I have attached the input file and the code. Could you please help me?
test.zip

rbodo commented on May 16, 2024

Which version of the aedat file format are you using? I'm guessing 3. The toolbox supports versions 1 and 2, so I think what you see here is the effect of incorrectly decoding the binary values read from the file.

You could probably implement this yourself - unfortunately I won't be able to work on this.
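For reference, AEDAT 2.0 files consist of an ASCII header (lines starting with '#') followed by big-endian pairs of a 32-bit address and a 32-bit timestamp in microseconds. Below is a minimal reader sketch, assuming a DVS128-style address layout; other sensors (e.g. DAVIS240) pack x/y/polarity into different bits, so treat the bit masks as an assumption to verify against your camera's documentation.

```python
import struct

def read_aedat2(path):
    """Minimal AEDAT 2.0 reader (illustrative sketch only).

    Assumes a DVS128-style address encoding; the bit layout below is an
    assumption and differs for other sensors.
    """
    with open(path, 'rb') as f:
        # Skip ASCII header lines starting with '#'.
        pos = f.tell()
        line = f.readline()
        while line.startswith(b'#'):
            pos = f.tell()
            line = f.readline()
        f.seek(pos)
        data = f.read()
    events = []
    # Each event record: 4-byte address + 4-byte timestamp, big-endian.
    for i in range(0, len(data) - 7, 8):
        addr, ts = struct.unpack('>II', data[i:i + 8])
        x = (addr >> 1) & 0x7F   # DVS128 bit layout (assumption)
        y = (addr >> 8) & 0x7F
        polarity = addr & 1
        events.append((ts, x, y, polarity))
    return events
```

Comparing the decoded x/y coordinates against what your recording software displays is a quick way to confirm (or rule out) a format mismatch.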
