
autoailab / fusiondepth


Official implementation for paper "Advancing Self-supervised Monocular Depth Learning with Sparse LiDAR"

License: MIT License

Python 98.96% Shell 1.04%
computer-vision computer-science lidar lidar-point-cloud depth-estimation monocular-depth-estimation self-supervised-learning self-driving-car artificial-intelligence convolutional-neural-networks

fusiondepth's Issues

Accurate or normalized depth value in training?

Thanks for your contribution to FusionDepth!

I wonder whether the "depth" in trainer.py is a normalized value or the actual metric depth at each pixel. When I print it during training, the values look normalized, generally below 1, yet in the reprojection step this depth is used directly.
[screenshot of the printed depth values]

What's more, when the loss is calculated the depth is multiplied by 26, and I have no idea what this constant means.

[screenshot of the loss calculation]
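For anyone hitting the same question: FusionDepth builds on a Monodepth2-style pipeline, where the network emits a sigmoid output in [0, 1] that is only converted to depth afterwards, which would explain printed values below 1. Below is a minimal sketch of that standard conversion; the function name and the min/max depth defaults follow the Monodepth2 convention and are an assumption about what trainer.py does (the factor 26 is not covered by this sketch).

    import torch

    def disp_to_depth(disp, min_depth=0.1, max_depth=100.0):
        """Convert a sigmoid disparity in [0, 1] to depth (Monodepth2 convention).

        min_depth / max_depth are the usual Monodepth2 defaults; whether
        trainer.py uses exactly these values is an assumption.
        """
        min_disp = 1.0 / max_depth            # disparity of the farthest depth
        max_disp = 1.0 / min_depth            # disparity of the nearest depth
        scaled_disp = min_disp + (max_disp - min_disp) * disp
        depth = 1.0 / scaled_disp
        return scaled_disp, depth

    # example: disp_to_depth(torch.sigmoid(torch.randn(1, 1, 192, 640)))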

Error when running evaluation on a training result

Thank you for your work!

I have followed the Preprocess Data and Depth Prediction steps. However, when I run evaluation on my own model, the following error occurs:

Traceback (most recent call last):
  File "evaluate_depth.py", line 510, in <module>
    evaluate(options.parse())
  File "evaluate_depth.py", line 120, in evaluate
    encoder.load_state_dict({k: v for k, v in encoder_dict.items() if k in model_dict})
  File "/home/chenwei/anaconda3/envs/diff/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ResnetEncoder:
        size mismatch for encoder.layer1.0.conv1.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
        size mismatch for encoder.layer1.1.conv1.weight: copying a param with shape torch.Size([64, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
        size mismatch for encoder.layer2.0.conv1.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 3, 3]).
        size mismatch for encoder.layer2.0.downsample.0.weight: copying a param with shape torch.Size([512, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 64, 1, 1]).
        size mismatch for encoder.layer2.0.downsample.1.weight: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for encoder.layer2.0.downsample.1.bias: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for encoder.layer2.0.downsample.1.running_mean: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for encoder.layer2.0.downsample.1.running_var: copying a param with shape torch.Size([512]) from checkpoint, the shape in current model is torch.Size([128]).
        size mismatch for encoder.layer2.1.conv1.weight: copying a param with shape torch.Size([128, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
        size mismatch for encoder.layer3.0.conv1.weight: copying a param with shape torch.Size([256, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
        size mismatch for encoder.layer3.0.downsample.0.weight: copying a param with shape torch.Size([1024, 512, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 1, 1]).
        size mismatch for encoder.layer3.0.downsample.1.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for encoder.layer3.0.downsample.1.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for encoder.layer3.0.downsample.1.running_mean: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for encoder.layer3.0.downsample.1.running_var: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([256]).
        size mismatch for encoder.layer3.1.conv1.weight: copying a param with shape torch.Size([256, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 256, 3, 3]).
        size mismatch for encoder.layer4.0.conv1.weight: copying a param with shape torch.Size([512, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 256, 3, 3]).
        size mismatch for encoder.layer4.0.downsample.0.weight: copying a param with shape torch.Size([2048, 1024, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 256, 1, 1]).
        size mismatch for encoder.layer4.0.downsample.1.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for encoder.layer4.0.downsample.1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for encoder.layer4.0.downsample.1.running_mean: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for encoder.layer4.0.downsample.1.running_var: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([512]).
        size mismatch for encoder.layer4.1.conv1.weight: copying a param with shape torch.Size([512, 2048, 1, 1]) from checkpoint, the shape in current model is torch.Size([512, 512, 3, 3]).
        size mismatch for encoder.fc.weight: copying a param with shape torch.Size([1000, 2048]) from checkpoint, the shape in current model is torch.Size([1000, 512]).

It seems that the saved weights do not match the model that the evaluation code builds. The same error occurs when I run evaluation on the initial model generated by running python trainer.py. However, everything works fine with your pretrained model.
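Judging from the 2048-channel and 1x1-conv shapes in the message, a likely cause is that the checkpoint comes from a ResNet-50 (bottleneck) encoder while the evaluation script built a ResNet-18/34 (basic-block) one. A minimal loading sketch under that assumption, using the Monodepth2-style ResnetEncoder constructor that this repo appears to follow and a hypothetical checkpoint path; passing --num_layers 50 to the evaluation script (the flag shown in the evaluate_completion.py command further down this page) may have the same effect:

    import torch
    import networks  # FusionDepth's networks package

    num_layers = 50  # must match the encoder depth used at training time
    encoder = networks.ResnetEncoder(num_layers, False)  # Monodepth2-style signature (assumed)

    encoder_dict = torch.load("log/res50/models/weights_best/encoder.pth",  # hypothetical path
                              map_location="cpu")
    model_dict = encoder.state_dict()
    encoder.load_state_dict({k: v for k, v in encoder_dict.items() if k in model_dict})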

About the depth completion task

The RMSE of your depth completion results improves a lot over other self-supervised methods, but the depth-completion training script completor.py is not provided.

Running trainer.py failed

File "trainer.py", line 272, in process_batch
inputs[key] = ipt.to(self.device)
AttributeError: 'list' object has no attribute 'to'

May I ask how to solve this problem? I have tried a few things but failed.
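Not an official fix, but the crash means at least one entry in inputs is a Python list (commonly string metadata collated by the default dataloader collate function) rather than a tensor. A minimal workaround sketch that only moves tensors, with illustrative names:

    import torch

    def move_inputs_to_device(inputs: dict, device: torch.device) -> dict:
        """Move tensor-valued entries of `inputs` to `device`, leaving lists alone."""
        for key, ipt in inputs.items():
            if isinstance(ipt, torch.Tensor):
                inputs[key] = ipt.to(device)
            # list-valued entries (e.g. filenames collated as lists of strings)
            # have no .to() method, so they are intentionally skipped
        return inputs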

No such file or directory

Hello, thanks for your work. However, when I run 'bash prepare_4beam_data_for_prediction.sh', files such as sparsify.py and the splits folder are missing. Please make sure the released code is complete. Thanks for your reply.

About static scenes

Hello, I have a question about the PoseNet. Can PoseNet handle the situation where the car stops at an intersection? In that case the consecutive frames are essentially static, so what does PoseNet output?

Visualization

Hi Ziyue,

Thanks for releasing this fantastic work. I have a quick question about your video demo: could you give some suggestions for point cloud visualization, such as tools or software?

Thanks,
Hang
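Not the authors' answer, but one common lightweight option is Open3D; a minimal sketch for viewing a KITTI velodyne scan (the file name is just an example):

    import numpy as np
    import open3d as o3d  # pip install open3d

    # KITTI velodyne .bin files store float32 (x, y, z, reflectance) tuples
    points = np.fromfile("0000000069.bin", dtype=np.float32).reshape(-1, 4)[:, :3]

    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points.astype(np.float64))
    o3d.visualization.draw_geometries([pcd])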

The quantitative comparison in your paper

Hi Ziyue,
For example, where do the DORN and BTS results in Table 1 of your paper come from?
I could not find them in their papers.
Did you retrain these networks on the KITTI Eigen split?

questions about detection

I want to train a 3D detection network with the generated depth maps, but I can't run export_detection.py correctly. What should I do?

No ptc2depth!

Hello, thanks for your work. However, when I run 'bash prepare_4beam_data_for_prediction.sh', gen2channel.py cannot import ptc2depth from kitti_utils.
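While the missing helper is not in the repo as shipped, a ptc2depth-style function typically just projects the LiDAR points into the image plane with the KITTI calibration and scatters their depths into a sparse map. A rough sketch under those assumptions (matrix names follow the usual KITTI raw-data conventions; this is not the authors' implementation):

    import numpy as np

    def ptc2depth_sketch(points, P_rect, R_rect, T_velo_to_cam, h, w):
        """points: (N, 4) velodyne x, y, z, reflectance.
        P_rect: (3, 4); R_rect and T_velo_to_cam: (4, 4) homogeneous matrices."""
        pts = np.hstack([points[:, :3], np.ones((points.shape[0], 1))])  # (N, 4) homogeneous
        cam = R_rect @ T_velo_to_cam @ pts.T                             # (4, N) in rectified camera frame
        cam = cam[:, cam[2] > 0]                                         # keep points in front of the camera
        pix = P_rect @ cam                                               # (3, N) homogeneous pixel coords
        u = np.round(pix[0] / pix[2]).astype(int)
        v = np.round(pix[1] / pix[2]).astype(int)
        depth = np.zeros((h, w), dtype=np.float32)
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        depth[v[valid], u[valid]] = cam[2][valid]                        # depth = z in the camera frame
        return depth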

Error in Preprocessing Data

Thanks for your work!
I'm running "bash prepare_4beam_data_for_prediction.sh". However, it seems to require the file "/Kitti_RAW_Data/2011_09_26/2011_09_26_drive_0002_sync/4beam/0000000069.bin".
[screenshot of the missing-file error]

I wonder whether this is the same as the file in the standard KITTI dataset, "/data/Kitti_RAW_Data/2011_09_26/2011_09_26_drive_0002_sync/velodyne_points/data/0000000069.bin".
If not, how can I obtain the data mentioned in the screenshot? Thanks!
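For what it's worth, the 4beam/*.bin files are presumably generated from the full 64-beam velodyne scans by the preprocessing scripts rather than shipped with KITTI. A rough sketch of how such a sparsification is typically done (the ring-binning details are assumptions, not the repo's sparsify.py):

    import numpy as np

    def simulate_4beam(points, n_rings=64, keep_every=16):
        """Keep roughly 4 of the 64 rings of a KITTI scan.
        points: (N, 4) velodyne x, y, z, reflectance."""
        r = np.linalg.norm(points[:, :3], axis=1)
        elevation = np.arcsin(points[:, 2] / np.maximum(r, 1e-6))
        # bin elevations into n_rings rings over the scan's vertical field of view
        edges = np.linspace(elevation.min(), elevation.max(), n_rings + 1)
        ring = np.clip(np.digitize(elevation, edges) - 1, 0, n_rings - 1)
        return points[ring % keep_every == 0]

    # scan = np.fromfile("velodyne_points/data/0000000069.bin", np.float32).reshape(-1, 4)
    # simulate_4beam(scan).astype(np.float32).tofile("4beam/0000000069.bin")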

Error when running evaluation with the pretrained model

Thank you for your interesting work.
I want to verify the performance on the depth completion task. I prepared the pretrained ResNet-50 model and the validation data (data_depth_selection).
When I run
python evaluate_completion.py --load_weights_folder log/res50/models/weights_best --eval_mono --nbeams 4 --num_layers 50
It returns the following error.

Traceback (most recent call last):
  File "evaluate_completion.py", line 373, in <module>
    evaluate(options.parse())
  File "evaluate_completion.py", line 174, in evaluate
    output = depth_decoder(features, beam_features=beam_features)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/workspace/localDisk/yangjunjie/scripts/3d/FusionDepth/networks/depth_decoder.py", line 70, in forward
    x = input_features[-1] + beam_features[-1]
RuntimeError: The size of tensor a (20) must match the size of tensor b (38) at non-singleton dimension 3

It seems that the input image is resized while the corresponding depth map keeps its original resolution.
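A possible workaround, under the assumption that the 2-channel LiDAR input simply was not resized to the network's input resolution, is to interpolate it to the RGB tensor's size before it reaches the beam encoder. A minimal sketch with illustrative names:

    import torch
    import torch.nn.functional as F

    def match_beam_to_image(beam_input: torch.Tensor, rgb_input: torch.Tensor) -> torch.Tensor:
        """Resize the (B, 2, H, W) beam tensor to the RGB tensor's spatial size.
        Nearest-neighbour keeps the sparse depth values intact."""
        return F.interpolate(beam_input, size=rgb_input.shape[-2:], mode="nearest")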

Absolute depth or relative depth?

Hello! Thank you for your splendid work! I have a question: is the depth we get from your model absolute (metric) depth or relative depth?

Loss becomes NaN when running refiner.py

Hi, thanks a lot for your work. When I run refiner.py, the loss becomes NaN. After debugging, I found that the main cause is that gdc_loss is NaN. Below are the values of my variables; I hope you can help me track down the problem.
[screenshots of the variable values]
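Generic NaN-hunting suggestions rather than a FusionDepth-specific fix: enable autograd anomaly detection to locate the op that first produces NaN, and guard divisions or square roots inside the loss with a small epsilon. A minimal sketch:

    import torch

    torch.autograd.set_detect_anomaly(True)  # reports the backward op that produced NaN/Inf

    def safe_div(num: torch.Tensor, den: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
        """Division guarded against (near-)zero denominators, a common NaN source."""
        return num / den.clamp(min=eps)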

Tensor size matching error

Dear authors, thanks for the great work! I'm trying to train on a custom dataset containing images and pseudo dense representations of size H x W x 1, and I have changed the ResNet encoder's input dimension from 2 to 1 accordingly. However, I'm getting RuntimeError: The size of tensor a (10) must match the size of tensor b (15) at non-singleton dimension 3 at x = input_features[-1] + beam_features[-1] in depth_decoder.py.

I suspect it is related to scaling, since my pseudo dense representation keeps its original resolution while the image is scaled down. However, in your original inputs["2channel"] = self.load_4beam_2channel(folder, frame_index, side, do_flip) there seems to be no downscaling involved. Do you have any idea what the issue might be? Thanks!
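Not an official answer, but the traceback points at the same fusion line as the issue above. One dataset-side sketch is to resize the H x W x 1 pseudo-dense map to the training resolution before it enters the pipeline (names and sizes here are illustrative, not the repo's loader):

    import numpy as np
    import torch
    import torch.nn.functional as F

    def resize_pseudo_dense(pseudo_dense: np.ndarray, height: int, width: int) -> torch.Tensor:
        """pseudo_dense: (H, W, 1) map at the original resolution; returns (1, height, width)."""
        t = torch.from_numpy(pseudo_dense).float().permute(2, 0, 1).unsqueeze(0)  # (1, 1, H, W)
        t = F.interpolate(t, size=(height, width), mode="nearest")  # nearest keeps sparse values intact
        return t.squeeze(0)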
