
depthstillation's People

Contributors

mattpoggi


depthstillation's Issues

Depthstillation on a real dataset

Good evening,
I'd like to reproduce your depthstillation process on a real dataset called CADDY, in order to evaluate some models (PWC-Net and RAFT).
I already extracted the depth maps with MiDaS and depthstilled them, but the metrics of the models I trained on these depthstilled data worsen considerably...
I'm attaching, from top to bottom, the computed flow, the depth map, and the original and depthstilled images.

Do you see anything strange in them? Any tips for this use case?
Thanks in advance!

[Attached images: brodarski-D_00001_left — computed flow, depth map, original and depthstilled frames]
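One thing worth double-checking in a setup like this: MiDaS predicts *relative inverse depth*, defined only up to an unknown scale and shift, so feeding its raw output into a pipeline that expects consistently scaled depth can degrade the distilled flow. A minimal per-image normalization sketch (an assumption about suitable preprocessing, not the authors' exact recipe):

```python
import numpy as np

def normalize_inverse_depth(inv_depth, eps=1e-6):
    """Rescale a MiDaS-style relative inverse-depth map to [0, 1].

    MiDaS outputs are only defined up to scale and shift, so a
    per-image min-max normalization gives the warping step a
    consistent range across frames. This is a hypothetical
    preprocessing step, not the repo's documented pipeline.
    """
    d = inv_depth.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), eps)
    return d

# toy check on a synthetic inverse-depth map
fake = np.array([[10.0, 20.0], [30.0, 40.0]])
norm = normalize_inverse_depth(fake)
print(norm.min(), norm.max())  # 0.0 1.0
```

If the distilled flow still looks wrong after this, comparing the depth-map statistics of CADDY frames against dCOCO ones is a cheap sanity check.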

Object motion in the generated dataset.

Thanks for your great work! I have a small question about the moving objects in the paper. Table 1 shows that adding moving objects gives better performance. However, in Table 2 and the following tables, dCOCO does not appear to include moving objects (EPE = 3.81 on KITTI 2015). Were the subsequent experiments, including dDAVIS and dKITTI, run without moving objects? This is very important to us because our recent work uses depthstillation as a key reference. We would appreciate your reply.

The generation process of depth image

I use MiDaS, following the instructions at https://pytorch.org/hub/intelisl_midas_v2/, to get the depth image, but the depth of the demo image I get is as follows:
demo

It seems very different from the depth image provided in this repo:
demo2

The code I use is as follows:

import numpy as np
import cv2
import torch
from PIL import Image

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS").cuda()
midas.eval()
midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.default_transform

image = np.asarray(Image.open(input_image))
img = transform(image)
with torch.no_grad():
    prediction = midas(img.cuda())
    # upsample the prediction back to the input resolution
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=image.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()
depth_image = prediction.cpu().numpy().squeeze()
depth_image = cv2.convertScaleAbs(depth_image, alpha=255 / np.max(depth_image))
# save the depth map, not the transformed input tensor
Image.fromarray(depth_image, mode="L").save("demo.png")

Could you please give some details about how to get the depth image?

Optical flow generation on a custom dataset

Hello,
Thank you for your great idea and code.
How can we apply your code to a custom dataset? I have video frames and want to generate optical flow for my dataset. How can I use your code? Thank you.
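For context, the geometry the depthstillation idea relies on can be sketched independently of the repo's CLI: backproject each pixel with its depth, apply a virtual camera motion, reproject, and take the pixel displacement as flow. This toy version (function name, intrinsics, and motion are all hypothetical; the repo's `depthstillation.py` additionally handles occlusions and hole filling) shows the core computation:

```python
import numpy as np

def flow_from_depth(depth, K, R, t):
    """Optical flow induced by a virtual camera motion (R, t),
    given a per-pixel depth map and intrinsics K.

    Sketch only: backproject to 3D, move the camera, reproject,
    and subtract the original pixel coordinates.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)
    # backproject to 3D, apply the virtual motion, reproject
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    proj = K @ (R @ pts + t.reshape(3, 1))
    proj = proj[:2] / np.clip(proj[2:], 1e-6, None)
    return (proj - pix[:2]).reshape(2, h, w)

# sanity check: identity motion should induce (near-)zero flow
K = np.array([[100.0, 0.0, 16.0], [0.0, 100.0, 16.0], [0.0, 0.0, 1.0]])
flow = flow_from_depth(np.full((32, 32), 5.0), K, np.eye(3), np.zeros(3))
```

In practice you would run this (or the repo's script) once per extracted video frame, using a depth map from MiDaS and a randomly sampled virtual motion.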

The result is strange on Windows 10

Hi! Thanks for your great work.

I tried to run the code on Windows 10, using gcc -shared -o libwarping.dll warping.c instead of compile.sh, and modified lib = cdll.LoadLibrary("external/forward_warping/libwarping.so") to lib = cdll.LoadLibrary("external/forward_warping/libwarping.dll"). Then I ran depthstillation.py but obtained a strange result like this (dCOCO/im1):
95022_00
which is quite different from the result in the supplementary material. Did I miss something?

PNG format to .flo format

Hello,
Thank you for your great code.
How did you train the optical flow models, given that the ground truth is usually expected in .flo format but your code generates .png files?

Thank you.
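For anyone hitting the same question: the Middlebury .flo layout is a 4-byte magic float 202021.25 (the bytes "PIEH"), an int32 width, an int32 height, then row-major interleaved (u, v) float32 pairs. A minimal converter sketch (if the generated PNGs are KITTI-style 16-bit flow maps — an assumption about this repo's format — decode them first with `(png.astype(np.float32) - 2**15) / 64.0`):

```python
import numpy as np

def write_flo(flow, path):
    """Write an (H, W, 2) flow field in Middlebury .flo format:
    magic float 202021.25, int32 width, int32 height, then
    row-major interleaved (u, v) float32 pairs."""
    h, w = flow.shape[:2]
    with open(path, "wb") as f:
        np.float32(202021.25).tofile(f)
        np.int32(w).tofile(f)
        np.int32(h).tofile(f)
        flow.astype(np.float32).tofile(f)

def read_flo(path):
    """Read a .flo file back into an (H, W, 2) float32 array."""
    with open(path, "rb") as f:
        magic = np.fromfile(f, np.float32, 1)[0]
        assert magic == np.float32(202021.25), "not a .flo file"
        w = int(np.fromfile(f, np.int32, 1)[0])
        h = int(np.fromfile(f, np.int32, 1)[0])
        return np.fromfile(f, np.float32, 2 * w * h).reshape(h, w, 2)
```

Most flow training code (e.g. RAFT's data loaders) accepts either format, so converting once offline is usually enough.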

The warped image is strange

With the command python depthstillation.py, I tried to warp your sample image im0 and the first cat image from your article, but the results were rather strange. The cat image was a screenshot from your article, and its depth map was obtained using MiDaS with the "dpt_large" weights. Here is a link to the warp results for these two images: im1_warped and cat_warped. Am I doing something wrong?

Dataset

Hi, thanks for your very interesting work. Can you provide the dataset you developed?
