alibaba / UniFuse-Unidirectional-Fusion
License: MIT License
Download PanoBasic
Copy stitching_Matterport3D.m to PanoBasic
Modify the directories in stitching_Matterport3D.m, i.e., change source_dir to the folder containing Matterport3D and target_dir to the output folder for the panorama images and depth maps.
Execute stitching_Matterport3D.m using Matlab.
It seems that stitching_Matterport3D.m uses stich.m, but stich.m is missing from PanoBasic.
Hi, thanks for your great work, but how can I access the pretrained model? Is there a link?
For the Matterport3D dataset, there are 1 TB of files on the official website, and I don't know which parts I need to download. For the 3D60 dataset, there are also hundreds of GB. For PanoSUNCG, the official website does not provide a download link, and the author did not reply. For Stanford2D3D, the official website is under maintenance, so the dataset cannot be obtained. I wonder if you could share the datasets. Thank you!
Hi, I have tested the Stanford2D3D dataset with the given model parameters, but the results are very different from the quantitative results in the paper. Have you uploaded the wrong model parameters? The results with the given parameters are
Besides, I have tried to retrain the network on Stanford2D3D with the same PyTorch version as your code, but the results are also worse than those in the paper. My reproduced results are
Is there any difference between the given codes and the implementation used in the paper?
In the original Stanford2D3D data, the top and bottom parts of the panoramas are missing, but in your paper they are complete. How do you deal with this?
It seems that your data split is different from OmniDepth and BiFuse.
How do you decide to use this split?
Thank you very much for your work, but there is a piece of code that makes me a little confused.
The location is lines 18-21 in metrics.py.
I understand that it means gt and pred are limited to the range 0.1-10 before computing the evaluation metrics. However, the logic of these four lines only restricts gt to [0.1, 10], while pred can still contain values outside this range.
I hope you can clear up my confusion.
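For reference, depth-evaluation code commonly masks pixels by the ground-truth range and clamps the predictions at those same pixels. The sketch below illustrates that pattern with toy metrics; `masked_errors` is a hypothetical helper, not the actual code in metrics.py:

```python
import numpy as np

def masked_errors(gt, pred, min_d=0.1, max_d=10.0):
    """Evaluate only where gt lies in (min_d, max_d); clamp pred there too.

    A sketch of common depth-evaluation practice, assuming the paper's
    0.1-10 m range; not the repo's exact implementation.
    """
    mask = (gt > min_d) & (gt < max_d)          # valid ground-truth pixels
    gt_m = gt[mask]
    pred_m = np.clip(pred[mask], min_d, max_d)  # clamp predictions as well
    abs_rel = np.mean(np.abs(gt_m - pred_m) / gt_m)
    rmse = np.sqrt(np.mean((gt_m - pred_m) ** 2))
    return abs_rel, rmse
```

Under this pattern, pred never contributes values outside the range either, which may be what the question is probing.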
[Question]
cube_inputs = torch.cat(torch.split(input_cube_image, self.cube_h, dim=-1), dim=0)
in unifuse.py.
This code seemingly concatenates the six cube faces along dim 0, i.e., (batch_size, 3, h, 6h) --> (batch_size*6, 3, h, h).
I think the cube encoder then cannot utilize features across faces, since they end up as separate minibatch entries. Intuitively, concatenating the faces along the channel dimension would be reasonable, but then the ImageNet-pretrained network (which expects 3 input channels) could not be used.
Did I understand correctly?
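The reshaping in question can be checked in isolation. This minimal sketch reproduces the split/cat on toy shapes (the tensor names here are illustrative, not from unifuse.py):

```python
import torch

B, h = 2, 4
cube = torch.randn(B, 3, h, 6 * h)  # six cube faces tiled along the width
# Split the width into six (B, 3, h, h) faces, then stack along the batch dim.
faces = torch.cat(torch.split(cube, h, dim=-1), dim=0)
print(faces.shape)  # torch.Size([12, 3, 4, 4]), i.e. (B*6, 3, h, h)
```

Note the resulting ordering: face i of sample b lands at index i*B + b, so each face is indeed processed as an independent minibatch entry by the shared-weight encoder.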
When I set --gpu_devices to [0, 1, 2, 3], the training process only runs on a single GPU. Is there any other setting that should be configured? Thanks!
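In plain PyTorch, multi-GPU data parallelism usually requires wrapping the model explicitly; passing device IDs on the command line is not enough unless the training script wires them up. A hedged sketch with `nn.DataParallel` (whether the repo's script does this is an assumption):

```python
import torch
import torch.nn as nn

# Toy stand-in for the network; not the repo's model.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

# nn.DataParallel splits each batch across the listed GPUs; without a wrapper
# like this (or DistributedDataParallel), training stays on one device.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model, device_ids=[0, 1, 2, 3]).cuda()
```

Checking whether the script ever calls `nn.DataParallel` (or `DistributedDataParallel`) with the parsed --gpu_devices would confirm the cause.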
How to download PanoSUNCG? I do not see any link in https://fuenwang.ml/project/360-depth/.
Thanks for your help. Looking forward to your reply!
Hi,
Can you please tell us how to process the Matterport3D images? I mean, once you downloaded the dataset from the official website, how did you create the equirectangular RGB images and depth maps?
Thanks in advance,
Valerio
Hi, thanks for your excellent work. I wanted to ask how the GPU memory reported in this paper is obtained or calculated?
Hello, I have encountered a problem similar to zcq15's with the 3D60 dataset test. I have tried to retrain the network on 3D60, but the results seem poor compared with the results in the paper. I just used your code directly and did not do any data preprocessing on 3D60. The results with the given parameters are
I have also tested the 3D60 dataset with the given model parameters, but the results seem worse than those in the paper too. The results are below
Are there any additional operations on the 3D60 dataset, as for Stanford2D3D?
Thanks for your help. Looking forward to your reply!
Hi, nice work.
Could you please tell me what is wrong here? After running python evaluate.py I get this error.
python evaluate.py --data_path /Users/kgkozlov/Downloads/img_4.png --dataset matterport3d --load_weights_dir /Users/UniFuse-Unidirectional-Fusion/PretrainedModels/
File "/UniFuse-Unidirectional-Fusion/UniFuse/datasets/matterport3d.py", line 83, in __getitem__
rgb = cv2.cvtColor(rgb, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(4.5.3) /private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/pip-req-build-z9mn802i/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
fpath = os.path.join(os.path.dirname(__file__), "datasets", "{}_{}.txt")
But the txt file lists both RGB and depth paths. Shouldn't we only need to provide a single RGB image?