
event_flow's People

Contributors

fedepare, huizerd


event_flow's Issues

Inquiry Regarding GPU Usage and Parallel Computation

Dear authors,

It's me again. I'm curious which GPU(s) you used for training the neural networks in your implementation.
Also, could you provide details about the training time?
Additionally, did you consider employing multiple GPUs for parallel processing?

Thank you for your time and assistance.

"SpikingRecEVFlowNet" encoder and decoder framework for image reconstruction using event camera

@Huizerd ,

I was using your module "SpikingRecEVFlowNet" as a network to reconstruct images from an event camera rather than to estimate optical flow, and accordingly modified the input and output channels to suit image reconstruction. After setting up the environment, I ended up with poor image reconstruction results.
[attached image: Custom_SNN reconstruction results]

Left is the target image and right is the predicted image. I am using a supervised technique rather than a self-supervised framework, with temporal consistency and an LPIPS loss function.

Could you please suggest what changes need to be incorporated in your "SpikingRecEVFlowNet" encoder and decoder framework, apart from the input and output channels?
I would be grateful for your help in this regard.
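
For context, here is a minimal sketch of the kind of changes usually needed beyond the channel counts, assuming a generic backbone; the class and its interface below are hypothetical and not part of the repository: a single-channel output head with a bounded activation, trained with an image-space loss.

import torch.nn as nn

class ImageReconstructionHead(nn.Module):
    """Hypothetical wrapper (not the repository's API): map a flow-style
    2-channel decoder output to a single intensity channel in [0, 1]."""

    def __init__(self, backbone: nn.Module, in_channels: int = 2):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 1, kernel_size=1),  # one intensity channel instead of (u, v)
            nn.Sigmoid(),  # intensity images are bounded, unlike optical flow
        )

    def forward(self, x):
        features = self.backbone(x)  # assumed to return a dense [B, C, H, W] map
        return self.head(features)

def reconstruction_loss(pred, target):
    # simple supervised image loss; a perceptual term (e.g. LPIPS) is often added
    return nn.functional.l1_loss(pred, target)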

About time-steps

Hi, I don't know if you are still around. May I ask how many time steps the trained SNN uses? From the code and paper, it appears to default to one. Have you tried increasing the number of time steps? Thanks!
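
For reference, a minimal sketch of what running more time steps could look like for a stateful SNN; the reset_states() method and the way the input is sliced are assumptions, not the repository's interface.

import torch

def forward_multi_step(model, event_voxel, num_steps=4):
    """Hypothetical: split one input window into `num_steps` temporal slices
    and run a stateful SNN on each slice in sequence, letting the neuron
    states persist between calls."""
    model.reset_states()  # assumed method that zeroes membrane potentials
    out = None
    for s in torch.chunk(event_voxel, num_steps, dim=1):  # split along the bin/time axis
        out = model(s)  # states carry over from one slice to the next
    return out  # prediction after the final time step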

Question about the data augmentation

Dear authors,

Firstly, I'd like to express my gratitude for your contributions to event-based optical flow research; it's truly remarkable work. I have been delving into your code, specifically the self-supervised event-based optical flow codebase, and I have a small question.

In dataloader/h5.py, line 279 (see below):

# data augmentation
xs, ys, ps = self.augment_events(xs, ys, ps, batch)

The function augment_events() is defined in dataloader/base.py, line 88 (see below):

    def augment_events(self, xs, ys, ps, batch):
        """
        Augment event sequence with horizontal, vertical, and polarity flips.
        :param xs: [N] tensor with event x location
        :param ys: [N] tensor with event y location
        :param ps: [N] tensor with event polarity ([-1, 1])
        :param batch: batch index
        :return xs: [N] tensor with augmented event x location
        :return ys: [N] tensor with augmented event y location
        :return ps: [N] tensor with augmented event polarity ([-1, 1])
        """

        for i, mechanism in enumerate(self.config["loader"]["augment"]):

            # each flip is applied in place, and only if it was enabled for this batch index
            if mechanism == "Horizontal":
                if self.batch_augmentation["Horizontal"][batch]:
                    xs = self.config["loader"]["resolution"][1] - 1 - xs

            elif mechanism == "Vertical":
                if self.batch_augmentation["Vertical"][batch]:
                    ys = self.config["loader"]["resolution"][0] - 1 - ys

            elif mechanism == "Polarity":
                if self.batch_augmentation["Polarity"][batch]:
                    ps *= -1

        return xs, ys, ps

To the best of my knowledge, when performing data augmentation, the dataset should ideally include both the original data and the augmented data. For instance, if we consider horizontal augmentation, both the original dataset and the horizontally augmented dataset should be present.

However, based on the provided code, the horizontal, vertical, and polarity flips seem to be applied to the original data in place, so the dataset does not appear to contain both the original and the augmented versions.
Could you explain this? Thanks in advance.
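
For what it's worth, here is a minimal sketch of the distinction in question; the function names and the sensor width are illustrative, not taken from the repository. On-the-fly augmentation randomly flips each sample when it is loaded, so over many epochs the model effectively sees both versions, whereas dataset duplication explicitly stores the originals plus flipped copies.

import random

def flip_horizontal(xs, width=240):
    # mirror event x coordinates; 240 is an illustrative sensor width
    return width - 1 - xs

def on_the_fly(xs, p=0.5):
    # the sample is flipped with probability p each time it is read,
    # so no augmented copies need to be stored
    return flip_horizontal(xs) if random.random() < p else xs

def duplicated_dataset(samples):
    # alternative: explicitly keep the originals plus flipped copies
    return samples + [flip_horizontal(xs) for xs in samples]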

Confused about the evaluation result of my trained model

I trained a spiking neural network with the command python train_flow.py --config configs/train_SNN.yml. Then, to test the trained model, I ran python eval_flow.py <model_name> --config configs/eval_MVSEC.yml, where <model_name> is my own trained model ID.
However, the estimation results of my trained model are not as good as those of the pre-trained model. How do I reproduce the results of the pre-trained model?
my trained model:
[screenshot of evaluation results]
pre-trained model:
[screenshot of evaluation results]

I appreciate your quick response.^.^

Question about the idea of average timestamp image

Dear authors, thank you for the great work. I am fairly new to optical flow research and I'm having a tough time understanding the average timestamp image loss in the paper. Could you explain it further? Thanks in advance.

To be specific, I understand how events are warped to t_ref using the optical flow, and how the interpolation works,

but I fail to understand the meaning of fw_iwe_pos_ts in this line: fw_iwe_pos_ts = interpolate(fw_idx.long(), fw_weights * ts_list, self.res, polarity_mask=pol_mask[:, :, 0:1])

and I fail to see how the image created by fw_iwe_pos_ts /= fw_iwe_pos + 1e-9 can produce a loss that guides the learning of optical flow estimation.
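
For context, a minimal sketch of how the positive-polarity, forward-warped branch in the question turns into a scalar loss; variable names mirror the snippet above, but the shapes, the exact normalization, and the combination with the negative-polarity and backward-warped branches in the repository may differ.

import torch

def avg_timestamp_loss(fw_iwe_pos, fw_iwe_pos_ts, eps=1e-9):
    """Sketch only; shapes and normalization are assumptions.
    fw_iwe_pos:    [B, 1, H, W] per-pixel count of forward-warped positive events
    fw_iwe_pos_ts: [B, 1, H, W] per-pixel sum of interpolation weight * timestamp"""
    # average timestamp of the events that landed on each pixel
    avg_ts = fw_iwe_pos_ts / (fw_iwe_pos + eps)
    # the scalar loss is the sum of squared per-pixel average timestamps: it is
    # lower when the estimated flow compensates the motion, i.e. when the
    # warped events form a sharp, deblurred image of warped events
    return (avg_ts ** 2).sum()

In other words, fw_iwe_pos_ts accumulates weight-times-timestamp per pixel, dividing by fw_iwe_pos turns that into an average timestamp image, and minimizing its squared values rewards flows that warp events into motion-compensated locations.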
