
pips2's People

Contributors

aharley, dli7319, eugenelyj


pips2's Issues

Coordinate System of Trajectory

Hi Admins,

While exploring your repo, I was wondering where the origin of the coordinate system for the trajectories is. Is it the upper-left corner?
I ask because the trajectories I loaded with the data loader in pointodysseydataset.py include negative values. Is this because the point is moving out of the image? And if so, are these points always marked as not visible?
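For what it's worth, a minimal sketch (my own, not from the repo) of how out-of-frame points could be masked, assuming `trajs` holds (x, y) pixel coordinates with the origin at the top-left corner:

```python
import numpy as np

def mask_out_of_frame(trajs, H, W):
    """Mark points that leave an H x W image as not visible.

    trajs: float array of shape (S, N, 2) holding (x, y) pixel
    coordinates; the origin is assumed to be the top-left corner.
    Returns a boolean array of shape (S, N): True = inside the frame.
    """
    x, y = trajs[..., 0], trajs[..., 1]
    inside = (x >= 0) & (x < W) & (y >= 0) & (y < H)
    return inside

# points with negative coordinates (or beyond H/W) are flagged as out of frame
trajs = np.array([[[10.0, 5.0], [-3.0, 5.0]],
                  [[12.0, 6.0], [2.0, 70.0]]])  # S=2 frames, N=2 points
print(mask_out_of_frame(trajs, H=64, W=64))
```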

Thanks a lot for your answer.

Trajectory has incorrect segments in the beginning

The videos logged to TensorBoard look fine, but when I plot the trajectories in 2D and 3D, the plots always have an extra segment at the start: even if a point doesn't move in the first few frames, the trajectory seems to indicate a lot of movement there. Has anyone else observed this?
(attached image: suc_1)

Visualize tracked points on long videos?

It seems like the points get re-initialized every S frames in demo.py, so the resulting logs are chopped up into segments.
And test_on_pod.py doesn't visualize the predictions.
I could probably chain the points manually in demo.py, but I was wondering whether there's a better way to visualize points for the entire video?
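One hedged way to chain segments manually (a sketch of the idea only; `track_segment` below is a toy stand-in for the model, not the repo's API): run the model on S-frame windows and seed each window with the last predicted positions of the previous one:

```python
import numpy as np

S = 8  # window length, standing in for the segment size used in demo.py

def track_segment(frames, init_xy):
    """Toy stand-in for the model's forward pass: it just drifts each
    point by +1 px per frame so the chaining logic can be demonstrated."""
    T = len(frames)
    steps = np.arange(T).reshape(T, 1, 1)
    return init_xy[None] + steps  # (T, N, 2)

def track_video(frames, init_xy):
    """Chain S-frame windows: each window starts from the previous
    window's last predicted positions, giving one continuous track."""
    trajs = [init_xy[None]]
    for s in range(0, len(frames) - 1, S - 1):
        window = frames[s:s + S]
        seg = track_segment(window, trajs[-1][-1])
        trajs.append(seg[1:])  # drop the seed frame to avoid duplicates
    return np.concatenate(trajs, axis=0)[:len(frames)]

frames = list(range(20))   # placeholder for 20 video frames
pts = np.zeros((3, 2))     # N=3 points starting at the origin
full = track_video(frames, pts)
print(full.shape)
```

The overlap of one frame between windows is what makes the track continuous; without it the segments stay chopped up as described above.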

Result of the demo is bad

Hi, I just tried the camel demo, but the result I got was messed up. I tried modifying some parameters, but the results were the same. Did I do something wrong, or is the demo model for testing only?

Thanks for any help!

(attached image)

Track new area

Is there a way to allocate more grid points as new areas enter the scene in later frames (and other points/tracks disappear), without relying on multiple inference passes over the full sequence?

Issues with sequence_loss() during training.

I was trying to train the model with S=8, N=75 but got this error. It seems there is a bug either in the sequence_loss function or in how the arguments are passed. Could you please take a look and provide a resolution?

line 120, in sequence_loss
i_loss = (flow_pred - flow_gt).abs() # B,S,N,2
~~~~~~~~~~^~~~~~~~~
RuntimeError: The size of tensor a (8) must match the size of tensor b (75) at non-singleton dimension 2
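For what it's worth, the error says dimension 2 (the N axis in B,S,N,2) disagrees: one tensor has 8 (= S) there and the other 75 (= N), which usually means the prediction and ground truth were built with S and N in opposite orders. A minimal reproduction of the shape contract, using numpy in place of torch (the transpose fix is an assumption; check which tensor is actually transposed in your run):

```python
import numpy as np

B, S, N = 1, 8, 75

flow_gt = np.zeros((B, S, N, 2))    # ground truth: B,S,N,2
flow_pred = np.zeros((B, N, S, 2))  # suspect: S and N swapped

# broadcasting (B,N,S,2) against (B,S,N,2) fails at dim 2 (8 vs 75),
# matching the RuntimeError in the traceback
try:
    np.broadcast_shapes(flow_pred.shape, flow_gt.shape)
except ValueError as e:
    print("shape mismatch:", e)

# one possible fix: put the prediction back into B,S,N,2 order
flow_pred = flow_pred.transpose(0, 2, 1, 3)
i_loss = np.abs(flow_pred - flow_gt)  # B,S,N,2, as the comment in line 120 expects
print(i_loss.shape)
```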

Tracking beyond border

Hey,
is there a way to handle points that are tracked beyond the borders of the image?

Thank you for the great work!

Can I get the visibility confidence score?

Thank you for your great work!!

In pips (the previous version), the visibility confidence vis is used in chain_demo.py to chain the results. The main idea is to choose the location with the highest visibility confidence in the last segment as the start of inference on the following segment.
In pips, vis is part of the output of the forward function.

I'm curious whether I can get the visibility confidence in pips++. My video is much longer than 48 frames, so I need to chain the results.
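The pips-style chaining idea described above, sketched with numpy (the vis values here are made up, and pips++ may not expose vis in the same form):

```python
import numpy as np

# visibility confidence for one point over the last S=8 frames (made up)
vis = np.array([0.9, 0.8, 0.3, 0.2, 0.95, 0.7, 0.1, 0.4])
# the point's (x, y) positions over the same S frames
trajs = np.stack([np.arange(8), np.arange(8)], axis=1).astype(float)  # (S, 2)

# choose the most-confident frame in the segment as the seed for the
# next inference window, as chain_demo.py does in the original pips
best = int(np.argmax(vis))
next_start_xy = trajs[best]
print(best, next_start_xy)
```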

demo.py still uses delta_mult

In 9f901be, delta_mult was removed as an argument to Pips.forward. However, it is still referenced when beautify=True and is still passed in demo.py.

When calling demo.py, this results in

TypeError: Pips.forward() got an unexpected keyword argument 'delta_mult'
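Until the call site is fixed, one workaround (a sketch, assuming the only change needed is dropping the stale kwarg) is to filter keyword arguments against the forward signature before calling:

```python
import inspect

def call_forward(model_forward, *args, **kwargs):
    """Drop kwargs that the forward signature no longer accepts,
    e.g. the stale delta_mult still passed by demo.py."""
    accepted = set(inspect.signature(model_forward).parameters)
    kept = {k: v for k, v in kwargs.items() if k in accepted}
    return model_forward(*args, **kept)

# toy stand-in for Pips.forward after 9f901be removed delta_mult
def forward(rgbs, iters=6):
    return (rgbs, iters)

out = call_forward(forward, "frames", iters=4, delta_mult=0.5)
print(out)  # delta_mult is silently dropped
```

The cleaner fix, of course, is simply to delete the delta_mult argument from the call in demo.py.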

Evaluation on PointOdyssey

Hi @aharley,

Thank you for the great work!

I'm trying to reproduce the results on the test split of PointOdyssey. That's the link that I used to download the dataset. After following the installation instructions, I launched test_on_pod.py with S=128 and sur_thr=50, which produced the following output:

1_128_i16_pod05_212310; step 000001/12; rtime 0.97; itime 23.07; d_x 13.7; sur_x 23.0; med_x 69.7
1_128_i16_pod05_212310; step 000002/12; rtime 1.01; itime 26.43; d_x 16.9; sur_x 27.1; med_x 56.1
1_128_i16_pod05_212310; step 000003/12; rtime 13.41; itime 98.56; d_x 18.3; sur_x 35.4; med_x 45.0
1_128_i16_pod05_212310; step 000004/12; rtime 16.99; itime 134.77; d_x 28.2; sur_x 49.8; med_x 36.1
1_128_i16_pod05_212310; step 000006/12; rtime 19.53; itime 104.47; d_x 23.7; sur_x 40.7; med_x 47.3
1_128_i16_pod05_212310; step 000007/12; rtime 6.80; itime 42.10; d_x 24.4; sur_x 43.5; med_x 45.4
1_128_i16_pod05_212310; step 000008/12; rtime 3.20; itime 36.09; d_x 26.9; sur_x 48.2; med_x 42.0
1_128_i16_pod05_212310; step 000009/12; rtime 13.59; itime 121.77; d_x 25.3; sur_x 47.2; med_x 42.8
1_128_i16_pod05_212310; step 000010/12; rtime 13.38; itime 82.74; d_x 25.1; sur_x 48.6; med_x 50.3
1_128_i16_pod05_212310; step 000011/12; rtime 8.20; itime 75.24; d_x 28.2; sur_x 47.5; med_x 46.7
1_128_i16_pod05_212310; step 000012/12; rtime 10.96; itime 77.90; d_x 29.1; sur_x 47.0; med_x 44.8

My results differ slightly from those described in the Testing section. Even accounting for my different survival threshold, d_avg and median_l2 should be 31.3 and 33.0, respectively. Do you know why this might be the case?

In order to load the dataset, I had to change annotations.npz to annot.npz:

annotations_path = os.path.join(seq, 'annotations.npz')

and visibilities to visibs here:

visibs = annotations['visibilities'][full_idx].astype(np.float32)

Can it be a different version of the dataset?
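A defensive loader that accepts either naming (a sketch only; aside from the two file names and two key names mentioned above, everything here is an assumption):

```python
import os
import tempfile
import numpy as np

def load_annotations(seq_dir):
    """Try both file names and both visibility keys, since different
    PointOdyssey releases appear to use annotations.npz/visibilities
    versus annot.npz/visibs."""
    for fname in ("annotations.npz", "annot.npz"):
        path = os.path.join(seq_dir, fname)
        if os.path.exists(path):
            annotations = np.load(path)
            key = "visibilities" if "visibilities" in annotations.files else "visibs"
            return annotations, annotations[key].astype(np.float32)
    raise FileNotFoundError(f"no annotation file in {seq_dir}")

# demo on a throwaway directory using the newer naming
with tempfile.TemporaryDirectory() as d:
    np.savez(os.path.join(d, "annot.npz"), visibs=np.ones((4, 3)))
    _, visibs = load_annotations(d)
    print(visibs.shape, visibs.dtype)
```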

test_on_tap.py results don't match expected results.

Hello,
When running test_on_tap.py, I get different results than reported in the testing section.
The mean d_avg over all 30 videos (output is added below) is 72.376, compared to the reported d_avg 70.6; survival_16 89.3; median_l2 6.9.
I downloaded the reference model using sh get_reference_model.sh, and I tested on tapvid_davis.pkl, which I downloaded and unzipped from https://storage.googleapis.com/dm-tapnet/tapvid_davis.zip.

I would really appreciate any assistance and clarifications on the matter!
Assaf

How do we know when the tracking fails?

Is there any way to detect tracking failure from the demo.py code?

Can we decide it ourselves by checking score thresholds? How do we check the score threshold for each predicted point?

How can we set a criterion to stop the tracking?
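One heuristic stopping criterion (my own sketch, not from the repo: it presupposes a per-frame confidence score is available, which is exactly what the question above is asking about) is to declare failure once the score stays below a threshold for several consecutive frames:

```python
import numpy as np

def failed(conf, thresh=0.5, patience=3):
    """Declare tracking failed once per-frame confidence stays below
    `thresh` for `patience` consecutive frames (heuristic, tune both)."""
    run = 0
    for below in (conf < thresh):
        run = run + 1 if below else 0
        if run >= patience:
            return True
    return False

conf = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6])
print(failed(conf))
```

The patience parameter keeps a single noisy frame from triggering a stop.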

Simplify demo.py

Could you simplify the demo to just return tracks and visibility, plus optionally a video writer?
All the TensorBoard summary logic is too convoluted for a simple/minimal inference demo and discourages people from testing pips2.

Simple demo

Other than the test_* scripts for the specific datasets, do you have a minimal inference demo script for generic image sequences or videos?

Processing Time for Tracking with PIPs++

Hi,

I am interested in understanding the processing time associated with using PIPs++ for tracking points across a standard video clip. Would it be possible to provide any benchmark data or insights regarding the time it takes to process a video of a common resolution and length, say 800x800 and 500 frames long?

Additionally, I'd appreciate any information on the hardware configurations used for any provided benchmark data to better understand the performance characteristics of PIPs++.

Thank you for your time and assistance!
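Runtime will depend heavily on the GPU, the window length S, and the number of tracked points, so any benchmark should report those. A rough timing harness (a sketch with a toy workload standing in for the model; for real GPU models, synchronize before reading the clock):

```python
import time
import numpy as np

def benchmark(fn, *args, warmup=1, reps=3):
    """Average wall-clock time per call. For CUDA models, call
    torch.cuda.synchronize() before each perf_counter() read, or the
    timings measure kernel launch rather than execution."""
    for _ in range(warmup):
        fn(*args)
    t0 = time.perf_counter()
    for _ in range(reps):
        fn(*args)
    return (time.perf_counter() - t0) / reps

# toy stand-in workload on one 800x800 frame; a 500-frame clip would
# scale roughly linearly in the number of inference windows
frame = np.random.rand(800, 800).astype(np.float32)
per_frame = benchmark(lambda f: f * 2.0, frame)
print(f"~{per_frame * 500:.4f} s for 500 frames (toy workload only)")
```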
