
atapour / monoculardepth-inference

Inference pipeline for the CVPR paper entitled "Real-Time Monocular Depth Estimation using Synthetic Data with Domain Adaptation via Image Style Transfer" (http://atapour.co.uk/papers/atapour18monocular.pdf).

Home Page: http://www.atapour.co.uk/monocularDepth.html

License: MIT License

Languages: Python 97.82%, Shell 2.18%
Topics: deep-learning, domain-adaptation, monocular-depth-estimators, pytorch-implementation, style-transfer, synthetic-data

monoculardepth-inference's People

Contributors

atapour, tobybreckon


monoculardepth-inference's Issues

Question about training dataset

Which dataset did you use to train the network?
As I read the paper, I believe you need a ground-truth image in both the synthetic and depth domains for each real-world image.

Where did you acquire such a dataset?

Share the train script?

Hello, Atapour. Thanks for the outstanding work. Could you share the training script? I do not fully understand the loss function (Section 3.1.1). I would appreciate a reply. Thank you very much.

Question about DATASET

Hi Atapour,

Thanks for sharing. The idea is really great!
I wonder if you could share the dataset with me.

Endoscope images

Hi,

Thanks for the code.
I am about to use this repo to train a model that can estimate depth for images acquired from a stereo endoscope under water. As far as I can see, this and most monocular depth methods are applied to streets and cars. Is there anything I should do, or avoid doing, when training the model, given that my aim is to estimate depth for underwater scenes with small, distant objects?

I also noticed that the amount of overlap between my stereo images is not as large as in typical street-view images, so my problem involves a smaller overlap.

Thanks for reading

some questions about synthetic training data

Hi Atapour,

Thanks for sharing. The result is really amazing!
But I'm not sure whether my understanding of the synthetic data is correct:

Using the DeepGTAV tool, you mounted a camera on a virtual car in GTA for data collection,
so that you could capture training data from that camera's perspective.

Then I'm wondering how you obtained the ground-truth disparity.
Did you put two cameras on the car and triangulate?
Could I have the training dataset you used, or just some sample pairs of data with ground truth?

Secondly, why not train on depth directly instead of disparity, so that the model could output depth directly?

thanks

error!

Hi atapour! Thanks for your work, but I hit an error when running inference on images. Here is the traceback:

Traceback (most recent call last):
  File "run_test.py", line 15, in <module>
    for i, data in enumerate(dataset):
  File "E:\monocularDepth-Inference\data\__init__.py", line 40, in __iter__
    for i, data in enumerate(self.dataloader):
  File "F:\anaconda3\envs\fast_dp\lib\site-packages\torch\utils\data\dataloader.py", line 501, in __iter__
    return _DataLoaderIter(self)
  File "F:\anaconda3\envs\fast_dp\lib\site-packages\torch\utils\data\dataloader.py", line 289, in __init__
    w.start()
  File "F:\anaconda3\envs\fast_dp\lib\multiprocessing\process.py", line 112, in start
    self._popen = self._Popen(self)
  File "F:\anaconda3\envs\fast_dp\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "F:\anaconda3\envs\fast_dp\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "F:\anaconda3\envs\fast_dp\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
    reduction.dump(process_obj, to_child)
  File "F:\anaconda3\envs\fast_dp\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "F:\anaconda3\envs\fast_dp\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "F:\anaconda3\envs\fast_dp\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Please tell me, what should I do?
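This EOFError is a common failure when PyTorch DataLoader worker processes are spawned on Windows: each worker re-imports the main script, and the multiprocessing pickling handshake fails. A minimal sketch of the usual workaround, assuming nothing about this repo beyond its use of torch.utils.data.DataLoader (the dataset and batch size below are placeholders), is to guard the entry point and/or disable worker processes:

# Common workaround for "EOFError: Ran out of input" when PyTorch
# DataLoader workers are spawned on Windows. The dataset and batch size
# below are placeholders, not taken from this repo.
import torch
from torch.utils.data import DataLoader, TensorDataset

def main():
    # Dummy dataset standing in for the repo's image dataset.
    data = TensorDataset(torch.randn(8, 3, 256, 512))
    # num_workers=0 loads data in the main process, avoiding the
    # multiprocessing pickling step that fails here on Windows.
    loader = DataLoader(data, batch_size=2, num_workers=0)
    for i, (batch,) in enumerate(loader):
        print(i, batch.shape)

if __name__ == "__main__":
    # On Windows, spawned workers re-import this module; the guard
    # prevents them from re-executing the loading loop.
    main()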

Training scheme of monocularDepthCGAN and CycleGAN

Hi,

The paper states

Our approach consists of two stages, the operations of which are carried out by two separate models, trained at the same time.

Is it possible for you to explain more about the training scheme of the two models in stage 1 (the monocular depth estimation model) and stage 2 (the domain adaptation model)?

There is no joint loss over both stages, so I suppose the two models are trained at the same time but their weights are updated separately, aren't they?

I am not sure about this, so I (and maybe others) would appreciate it if you could give a few more details.

Thanks in advance.
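For what it's worth, a minimal sketch of what "trained simultaneously but updated separately" could look like is below; the architectures, losses, and update order are illustrative placeholders, not the paper's actual training code:

# Illustrative sketch only: two models trained in the same loop but with
# separate optimizers, so each model's weights are updated independently.
# Architectures and losses are placeholders, not the paper's.
import torch
import torch.nn as nn

depth_model = nn.Conv2d(3, 1, 3, padding=1)   # stands in for stage 1
style_model = nn.Conv2d(3, 3, 3, padding=1)   # stands in for stage 2

opt_depth = torch.optim.Adam(depth_model.parameters(), lr=1e-4)
opt_style = torch.optim.Adam(style_model.parameters(), lr=1e-4)

for step in range(3):
    x = torch.randn(2, 3, 64, 64)             # dummy batch

    # Stage-1 update: depth estimation loss only.
    opt_depth.zero_grad()
    depth_loss = depth_model(x).abs().mean()  # placeholder loss
    depth_loss.backward()
    opt_depth.step()

    # Stage-2 update: style transfer loss only; no joint loss couples the two.
    opt_style.zero_grad()
    style_loss = (style_model(x) - x).pow(2).mean()  # placeholder loss
    style_loss.backward()
    opt_style.step()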

Depth or disparity map

Hello,

Thank you very much for sharing your work. I am trying to obtain depth maps from my own dataset with your code, and I then need to construct a 3D point cloud from the depth information, so I need depth values to convert into x, y and z points. As I understand it, the resulting image is a disparity map, so I assume the tensor variable fake_C (in this case), from which it is generated, also holds disparity values. Can I directly invert the tensor to obtain depth values, or does the tensor already hold absolute depth values?

Thank you!

How to get real depth in meters?

Hi. I want to obtain absolute depth values in meters, but run_test.py only outputs a depth image. Can your model output real depth values?
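For both questions above, the standard pinhole-stereo relation depth = focal_length × baseline / disparity applies. A sketch of the conversion follows; the focal length and baseline below are assumed placeholder values (they depend on the camera setup the network was trained against), not constants taken from this repo:

# Sketch: converting a disparity map to metric depth under the usual
# pinhole-stereo model: depth = f * B / disparity.
# focal_px and baseline_m are assumed values; substitute the calibration
# of the camera setup the network was trained against.
import numpy as np

def disparity_to_depth(disparity, focal_px=720.0, baseline_m=0.54):
    """Return depth in meters; focal_px and baseline_m are assumptions."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0                      # zero disparity -> infinite depth
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: a disparity of 36 px maps to 720 * 0.54 / 36 = 10.8 m.
print(disparity_to_depth([[36.0, 72.0]]))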
