tudelft / event_flow
Self-Supervised Learning of Event-based Optical Flow with Spiking Neural Networks
License: MIT License
Thanks for your work. I am running your code for training, and found something strange.
Line 66 in e81f963
I trained a spiking neural network with the command (python train_flow.py --config configs/train_SNN.yml). Then, to test my trained model, I ran the command (python eval_flow.py <model_name> --config configs/eval_MVSEC.yml), where I used my own trained model's ID as <model_name>.
However, the estimation results of my trained model are not as good as those of the pre-trained model. How do I reproduce the results of the pre-trained model?
my trained model:
pretrained model:
I would appreciate a quick response.
Dear authors, thank you for the great work. I am fairly new to optical flow research and I'm having a tough time understanding the average timestamp image loss in the paper. Can you explain it further? Thanks in advance.
To be specific, I understand how events are warped to t_ref, as well as the optical flow and the interpolations,
but I fail to understand the meaning of fw_iwe_pos_ts from here: fw_iwe_pos_ts = interpolate(fw_idx.long(), fw_weights * ts_list, self.res, polarity_mask=pol_mask[:, :, 0:1]),
and I fail to see how the image created by fw_iwe_pos_ts /= fw_iwe_pos + 1e-9 can produce a loss that directs the learning of optical flow estimation.
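As I understand it, fw_iwe_pos_ts is an image in which each pixel accumulates the timestamps of the positive events warped to it, while fw_iwe_pos counts those events, so fw_iwe_pos_ts /= fw_iwe_pos + 1e-9 yields the per-pixel average timestamp. Below is a minimal sketch of that idea, using nearest-neighbor accumulation instead of the repository's bilinear interpolate; all names here are illustrative, not the repository's API:

```python
import torch

def avg_timestamp_image(xs, ys, ts, flow, res, t_ref=1.0):
    """Sketch: per-pixel average timestamp of warped events.

    xs, ys: [N] event coordinates; ts: [N] normalized timestamps in [0, 1];
    flow: [N, 2] per-event optical flow (pixels per unit time); res: (H, W).
    """
    H, W = res
    # warp each event to the reference time t_ref along its flow vector
    wx = xs + (t_ref - ts) * flow[:, 0]
    wy = ys + (t_ref - ts) * flow[:, 1]
    # nearest-neighbor accumulation (the repo uses bilinear splatting)
    ix = wx.round().long().clamp(0, W - 1)
    iy = wy.round().long().clamp(0, H - 1)
    idx = iy * W + ix
    count = torch.zeros(H * W).index_add_(0, idx, torch.ones_like(ts))
    ts_sum = torch.zeros(H * W).index_add_(0, idx, ts)
    # per-pixel average timestamp of the events landing on each pixel
    return (ts_sum / (count + 1e-9)).view(H, W)
```

As I read the paper, the loss then sums the squared values of this image (per polarity, forward and backward): flow that correctly compensates the motion warps events toward the reference time and lowers this quantity, whereas poorly compensated flow leaves pixels with large average timestamps.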
Kindly let us know how to solve the issue we encounter after running
python eval_flow.py LIFFireNet --config configs/eval_ECD.yml
@Huizerd,
I was using your "SpikingRecEVFlowNet" module as a network to reconstruct images from an event camera rather than to estimate optical flow, and I accordingly modified the input and output channels to be compatible with image reconstruction. After setting up the environment, I obtained poor image reconstruction results.
On the left is the target image and on the right is the predicted image. I am using a supervised technique with a temporal consistency and LPIPS loss function, rather than the self-supervised framework.
Can you please suggest which changes, apart from the input and output channels, need to be incorporated in the encoder and decoder of your "SpikingRecEVFlowNet" module?
I would appreciate your help in this regard.
Dear authors,
It's me again. I'm curious which GPU(s) you utilized for training the neural networks in your implementation.
Could you also provide experimental details about the training time?
Additionally, did you consider employing multiple GPUs for parallel processing?
Thank you for your time and assistance.
Hi, I don't know if you are still around~ May I ask how many time-steps the trained SNN uses? According to the code and the paper, it appears to be one by default. Have you tried increasing the number of time-steps? Thanks!
Dear authors,
Firstly, I'd like to express my gratitude for your contributions to event-based optical flow research; it's truly remarkable work. I have been delving into your code, specifically the self-supervised event-based optical flow codebase, and I have a small question.
In dataloader/h5.py, line 279 (see below):
# data augmentation
xs, ys, ps = self.augment_events(xs, ys, ps, batch)
The function augment_events() is defined in dataloader/base.py, line 88 (see below):
def augment_events(self, xs, ys, ps, batch):
    """
    Augment event sequence with horizontal, vertical, and polarity flips.
    :param xs: [N] tensor with event x location
    :param ys: [N] tensor with event y location
    :param ps: [N] tensor with event polarity ([-1, 1])
    :param batch: batch index
    :return xs: [N] tensor with augmented event x location
    :return ys: [N] tensor with augmented event y location
    :return ps: [N] tensor with augmented event polarity ([-1, 1])
    """
    for i, mechanism in enumerate(self.config["loader"]["augment"]):
        if mechanism == "Horizontal":
            if self.batch_augmentation["Horizontal"][batch]:
                xs = self.config["loader"]["resolution"][1] - 1 - xs
        elif mechanism == "Vertical":
            if self.batch_augmentation["Vertical"][batch]:
                ys = self.config["loader"]["resolution"][0] - 1 - ys
        elif mechanism == "Polarity":
            if self.batch_augmentation["Polarity"][batch]:
                ps *= -1
    return xs, ys, ps
To the best of my knowledge, when performing data augmentation, the dataset should ideally include both the original data and the augmented data. For instance, with horizontal flipping, both the original dataset and its horizontally flipped version should be present.
However, based on the provided code, it appears that whenever a horizontal, vertical, or polarity flip is active, the original data is simply replaced by its transformed version; the dataset does not seem to include both versions.
Can you explain this? Thanks in advance.
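For context, on-the-fly augmentation of this kind typically re-samples the flip flags (e.g. per sequence or per epoch), so that across training the model effectively sees both the original and the flipped data without the dataset storing both copies. A hypothetical sketch of such flag sampling (names are illustrative, not the repository's API):

```python
import random

def sample_augmentation_flags(mechanisms, p=0.5):
    """Sketch: draw a fresh True/False flag per augmentation mechanism.

    `mechanisms` mirrors config["loader"]["augment"]. Because the flags
    are re-drawn over training, the same sequence is seen un-flipped in
    some passes and flipped in others, so the effective dataset covers
    both versions without duplicating any data on disk.
    """
    return {m: random.random() < p for m in mechanisms}
```

Re-drawing flags like this keeps memory usage constant while still exposing the network to the full set of augmented variants over many epochs.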
event_flow/models/spiking_util.py
Line 42 in 7d90c9d