
pvnet's Introduction

Good news! We have released a clean version of PVNet: clean-pvnet, which includes

  1. How to train PVNet on a custom dataset.
  2. How to use PVNet with a detector.
  3. Training and testing on the T-LESS dataset, where we detect multiple instances in an image.

PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation

Introduction

PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation
Sida Peng, Yuan Liu, Qixing Huang, Xiaowei Zhou, Hujun Bao
CVPR 2019 oral
Project Page

Any questions or discussions are welcome!

Truncation LINEMOD Dataset

Check TRUNCATION_LINEMOD.md for information about the Truncation LINEMOD dataset.

Installation

One way is to set up the environment with Docker: see How to install pvnet with docker.

Thanks to Joe Dinius for providing the Docker implementation.

Another way is to use the following commands.

  1. Set up python 3.6.7 environment
pip install -r requirements.txt

We need to compile several files, which works fine with pytorch v0.4.1/v1.1 and gcc 5.4.0.

If you use an RTX GPU, you must use CUDA 10 and pytorch v1.1 built with CUDA 10.

  2. Compile the RANSAC voting layer
ROOT=/path/to/pvnet
cd $ROOT/lib/ransac_voting_gpu_layer
python setup.py build_ext --inplace
  3. Compile some extension utils
cd $ROOT/lib/utils/extend_utils

Revise the cuda_include and dart paths in build_extend_utils_cffi.py to match the CUDA installation on your computer.
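For example, a hypothetical edit assuming CUDA 10.0 is installed under /usr/local/cuda-10.0 and that dart points to libcudart.so (as the gcc link commands in the issues below suggest); adjust the paths to your machine:

# hypothetical values near the top of build_extend_utils_cffi.py
cuda_include = '/usr/local/cuda-10.0/include'
dart = '/usr/local/cuda-10.0/lib64/libcudart.so'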

sudo apt-get install libgoogle-glog-dev=0.3.4-0.1
sudo apt-get install libsuitesparse-dev=1:4.4.6-1
sudo apt-get install libatlas-base-dev=3.10.2-9
python build_extend_utils_cffi.py

If you cannot install libsuitesparse-dev=1:4.4.6-1, please install libsuitesparse, run build_ceres.sh and move ceres/ceres-solver/build/lib/libceres.so* to lib/utils/extend_utils/lib.

Add the lib under extend_utils to the LD_LIBRARY_PATH

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/pvnet/lib/utils/extend_utils/lib

Dataset Configuration

Prepare the dataset

Download the LINEMOD dataset, which can be found here.

Download the LINEMOD_ORIG dataset, which can be found here.

Download the OCCLUSION_LINEMOD dataset, which can be found here.

Create the soft link

mkdir $ROOT/data
ln -s path/to/LINEMOD $ROOT/data/LINEMOD
ln -s path/to/LINEMOD_ORIG $ROOT/data/LINEMOD_ORIG
ln -s path/to/OCCLUSION_LINEMOD $ROOT/data/OCCLUSION_LINEMOD

Compute FPS keypoints

python lib/utils/data_utils.py
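FPS here refers to farthest point sampling on the object model; the repository computes the keypoints with the compiled farthest_point_sampling extension. The following is only a minimal numpy sketch of the idea, with hypothetical inputs, not the repository's implementation:

import numpy as np

def farthest_point_sampling(points, num_keypoints):
    # points: (N, 3) array of 3D points on the object surface
    selected = [points[0]]                                  # start from an arbitrary point
    dists = np.linalg.norm(points - selected[0], axis=1)    # distance of every point to the selected set
    for _ in range(num_keypoints - 1):
        idx = int(np.argmax(dists))                         # pick the point farthest from the current set
        selected.append(points[idx])
        dists = np.minimum(dists, np.linalg.norm(points - points[idx], axis=1))
    return np.stack(selected)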

Synthesize images for each object

See pvnet-rendering for information about the image synthesis.

Demo

Download the pretrained model of cat from here and put it at $ROOT/data/model/cat_demo/199.pth.

Run the demo

python tools/demo.py

If set up correctly, the output will look like the following:

(demo output image: pose visualization for the cat object)

Visualization of the voting procedure

We added a Jupyter notebook, visualization.ipynb, for the keypoint detection pipeline of PVNet, aiming to make it easier for readers to understand our paper. Thanks to Kudlur, M. for the suggestion.

Training and testing

Training on the LINEMOD

Before training, remember to add the lib under extend_utils to the LD_LIBRARY_PATH

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/pvnet/lib/utils/extend_utils/lib

Training

python tools/train_linemod.py --cfg_file configs/linemod_train.json --linemod_cls cat

Testing

We provide the pretrained models of each object, which can be found here.

Download the pretrained model and move it to $ROOT/data/model/{cls}_linemod_train/199.pth. For instance

mkdir -p $ROOT/data/model/ape_linemod_train
mv ape_199.pth $ROOT/data/model/ape_linemod_train/199.pth

Testing

python tools/train_linemod.py --cfg_file configs/linemod_train.json --linemod_cls cat --test_model

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{peng2019pvnet,
  title={PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation},
  author={Peng, Sida and Liu, Yuan and Huang, Qixing and Zhou, Xiaowei and Bao, Hujun},
  booktitle={CVPR},
  year={2019}
}

Acknowledgement

This work is affiliated with the ZJU-SenseTime Joint Lab of 3D Vision, and its intellectual property belongs to SenseTime Group Ltd.

Copyright (c) ZJU-SenseTime Joint Lab of 3D Vision. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

pvnet's People

Contributors

atopheim, dependabot[bot], jiamingsuen, jwdinius, pengsida


pvnet's Issues

Error building build_extend_utils_cffi.py

Hi,

Thanks for sharing your great work.

When I run the python script build_extend_utils_cffi.py, I get the following error:

python build_extend_utils_cffi.py
generating ./_extend_utils.c
(already up-to-date)
the current directory is '/root/Project/Image-based_Cabinet_Modeling/pvnet/lib/utils/extend_utils'
running build_ext
building '_extend_utils' extension
gcc -pthread -B /root/anaconda2/envs/image_cabinet/compiler_compat -Wl,--sysroot=/ -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/root/anaconda2/envs/image_cabinet/include/python2.7 -c _extend_utils.c -o ./_extend_utils.o
gcc -pthread -shared -B /root/anaconda2/envs/image_cabinet/compiler_compat -L/root/anaconda2/envs/image_cabinet/lib -Wl,-rpath=/root/anaconda2/envs/image_cabinet/lib -Wl,--no-as-needed -Wl,--sysroot=/ ./_extend_utils.o src/mesh_rasterization.cpp.o src/farthest_point_sampling.cpp.o src/uncertainty_pnp.cpp.o src/nearest_neighborhood.cu.o ./lib/libceres.so ./lib/libglog.so /usr/local/cuda-9.0/lib64/libcudart.so -L/root/anaconda2/envs/image_cabinet/lib -lstdc++ -lpython2.7 -o ./_extend_utils.so
/root/anaconda2/envs/image_cabinet/compiler_compat/ld:./lib/libceres.so: file format not recognized; treating as linker script
/root/anaconda2/envs/image_cabinet/compiler_compat/ld:./lib/libceres.so:0: syntax error
collect2: error: ld returned 1 exit status
Traceback (most recent call last):
File "build_extend_utils_cffi.py", line 41, in
ffibuilder.compile(verbose=True)
File "/root/anaconda2/envs/image_cabinet/lib/python2.7/site-packages/cffi/api.py", line 697, in compile
compiler_verbose=verbose, debug=debug, **kwds)
File "/root/anaconda2/envs/image_cabinet/lib/python2.7/site-packages/cffi/recompiler.py", line 1520, in recompile
compiler_verbose, debug)
File "/root/anaconda2/envs/image_cabinet/lib/python2.7/site-packages/cffi/ffiplatform.py", line 22, in compile
outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
File "/root/anaconda2/envs/image_cabinet/lib/python2.7/site-packages/cffi/ffiplatform.py", line 58, in _build
raise VerificationError('%s: %s' % (e.__class__.__name__, e))
cffi.error.VerificationError: LinkError: command 'gcc' failed with exit status 1

Are there any suggestions about that? My environment is Ubuntu 16.04 + gcc 5.4.0 + CUDA 9.0.

Thanks again.

Question about training times and parameters

Thanks for making the code available. I have successfully run the code on Google Colab (K80 GPU with 11G) and have reproduced the demo and testing results. I have also trained ("cat") using 10000 synthetic images and 10000 Fuse images (no occlusion images) and a single epoch took 4 hours to complete. I see that the pre-trained models use 200 epochs, which would imply 800 hours of training time (on a K80) per object. I am not sure if that time is expected or if I have an unknown issue. Could you share the training parameters used to generate the 199.pth files and also any information on the training time? Thanks again.

Occluded-LineMod dataset train/validation split

Hi,

I have a question regarding the train/validation split used for the Occluded-LineMod dataset from the onedrive link. I am trying to understand which train/validation split should be used. This ICCV_15 challenge page says "You can use anything as training data except the test sequences provided in the dataset." The dataset contains an XXX_val.txt file for each object. Does this mean we can use all the images in the dataset for training and avoid the specific occurrences of the objects listed in those XXX_val.txt files?

Sorry, this question is not really about PVNet but is a general question about Occluded-LineMod.

Questions related to YCB-v dataset

Hi, thanks for sharing the nice code. Several questions related to YCB-v dataset:

  1. How can I modify the code to train and evaluate on the YCB-v dataset? Would you release the scripts and pretrained models for the YCB-v dataset?
  2. How many additional synthetic images did you use for training, apart from those provided in the YCB-v dataset?

The link to pretrained models on LINEMOD seems to be broken.

Thank you.

Which is the URL to download LINEMOD_ORIG?

Hello, I cannot find the URL to download LINEMOD_ORIG. The URL in the README.md redirects to a personal page. Some links seem like datasets:

Here you can find the databases for the ape, benchvise, bowl, can, cat, cup, driller, duck, glue, holepuncher, iron, lamp, phone, cam and eggbox.

I wonder, do these links represent the LINEMOD_ORIG dataset? I'm not familiar with LINEMOD. Please help me.

When will the code be published?

Hi, congratulations on your remarkable achievements! I'm looking forward to learning the details of your code, so I just want to know when it will be published.
Thank you!

Test RuntimeError

When I run python demo.py, I met the following error:
RuntimeError: cublas runtime error : the GPU program failed to execute at /pytorch/aten/src/THC/THCBlas.cu:411
My GPU is a 2080 Ti with CUDA 10.0.
How can I solve the problem? Thank you.

Coordinate system convention of the ground truth poses

Hi,
One more question related to the original LineMOD. The ground truth poses in the Occluded-LineMOD dataset follow the OpenGL coordinate system convention (the camera viewing direction is the negative Z-axis), as mentioned in section 2.2 of this document. But I am not sure of the coordinate system convention used in the original LineMOD dataset. For example, one of the transformation files looks like
-13.0792
-7.83575
104.177
The values are in centimetres, so that's ~1 meter along the positive z-axis. This makes me think that the poses are annotated with the OpenCV coordinate system convention. But my rendering pipeline does not work correctly if I follow this assumption. Any idea how the ground truth poses in the original LineMOD dataset are defined?
Sorry again that this issue is also not about PVNet but rather a general question.
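For reference only (not an authoritative statement about how the original LineMOD poses are annotated): the usual conversion between the two conventions flips the Y and Z axes, as in this minimal numpy sketch:

import numpy as np

# OpenGL-style camera (looks down -Z, Y up)  ->  OpenCV-style camera (looks down +Z, Y down)
FLIP_YZ = np.diag([1.0, -1.0, -1.0])

def gl_to_cv(R_gl, t_gl):
    # apply the axis flip to both the rotation and the translation of the pose
    return FLIP_YZ @ R_gl, FLIP_YZ @ t_gl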

Runtime Performance

Hi there! Thank you for making this available and congrats on getting accepted into CVPR!

I was reading your paper and I was curious how you made your inference speed so fast with such big inputs. Using this implementation along with your pre-trained models, I re-measured each component's speed and found it to be different from what you detailed in the paper (I used the technique described here).

I then wrapped the net call, the RANSAC call, and from line 167 to the PnP call with the previously mentioned timing code. Using a 1080Ti and an i5-8500 CPU 3.00GHz, when calling "python tools/train_linemod.py --cfg_file configs/linemod_train.json --linemod_cls ape --test_model --use_uncertainty_pnp", I got the following times:

  • Forward Pass: ~27ms
  • RANSAC: ~14ms
  • PnP: ~1.6ms

Both RANSAC and the forward pass run on the GPU, so I expected the processing time to be the same. Am I doing something wrong? The whole evaluation (with metric measuring and loading) ran at 11 fps.

Thanks in advance!
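For anyone reproducing such measurements: CUDA kernels launch asynchronously, so naive wall-clock timing of GPU code can be misleading unless the device is synchronized around the timed call. A minimal PyTorch sketch (net and image are placeholders, not the repository's API):

import time
import torch

torch.cuda.synchronize()                      # finish any previously queued GPU work
start = time.time()
output = net(image)                           # the call being timed, e.g. the forward pass
torch.cuda.synchronize()                      # wait for the kernels launched above to complete
print('elapsed: %.1f ms' % ((time.time() - start) * 1000.0))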

compile ransac_voting

Hello, I have problems with the compilation of ransac_voting. I use python 3.6, pytorch 0.4.1 and gcc 5.4.0.

(pvnet) gaobaoding@gaobaoding-T5:~/pvnet$ python tools/demo.py
Traceback (most recent call last):
File "tools/demo.py", line 8, in
from lib.ransac_voting_gpu_layer.ransac_voting_gpu import ransac_voting_layer_v3
File "/home/ning/pvnet/lib/ransac_voting_gpu_layer/ransac_voting_gpu.py", line 2, in
import lib.ransac_voting_gpu_layer.ransac_voting as ransac_voting
ImportError: /home/ning/pvnet/lib/ransac_voting_gpu_layer/ransac_voting.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
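Undefined-symbol errors like this usually indicate that the extension was built against a different PyTorch version than the one used at runtime. A hedged suggestion (an assumption, not a confirmed fix) is to remove the stale build artifacts and recompile inside the same environment, reusing the commands from the installation section:

cd $ROOT/lib/ransac_voting_gpu_layer
rm -rf build ransac_voting.cpython-36m-x86_64-linux-gnu.so
python setup.py build_ext --inplace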

Multiple instance support

Hi, guys,
In the paper, you provide a method for multiple instances, but based on a rough reading of the code, I think it supports only one instance at a time.

Am I missing something, or does it need some revision to support multiple instances?

Thanks.

about hypothesis

Hello, thank you for making such a high-performance algorithm. I have a question about the hypothesis generation: we randomly choose two pixels and take the intersection of their vectors as a hypothesis h_ki for the keypoint x_k. I don't understand what this step means; can you explain it in detail? What is the dimension of the hypothesis? Thank you.
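My reading of that step (a sketch, not the authors' answer): each sampled pixel defines a 2D ray that starts at the pixel and points along its predicted unit vector; the hypothesis h_ki is the 2D intersection point of the two rays, so every hypothesis is a 2-dimensional image coordinate. A minimal numpy sketch with hypothetical names:

import numpy as np

def intersect_rays(p1, v1, p2, v2):
    # p1, p2: 2D pixel locations; v1, v2: predicted unit direction vectors at those pixels
    # solve p1 + s * v1 = p2 + t * v2 for the intersection (fails if v1 and v2 are parallel)
    A = np.stack([v1, -v2], axis=1)   # 2x2 system in the unknowns (s, t)
    s, _ = np.linalg.solve(A, p2 - p1)
    return p1 + s * v1                # the keypoint hypothesis, a 2D point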

problems about Compute FPS keypoints

When I run lib/utils/data_utils.py, I got this error:

Traceback (most recent call last):
File "lib/utils/data_utils.py", line 18, in
from lib.utils.extend_utils.extend_utils import farthest_point_sampling
File "/home/lax/pvnet/lib/utils/extend_utils/extend_utils.py", line 3, in
from lib.utils.extend_utils._extend_utils import lib, ffi
ImportError: libceres.so.2: cannot open shared object file: No such file or directory

How may I solve it? Thanks.

error in building build_extend_utils_cffi.py

x86_64-linux-gnu-gcc: error: src/uncertainty_pnp.cpp.o: No such file or directory
x86_64-linux-gnu-gcc: error: ./lib/libceres.so: No such file or directory
x86_64-linux-gnu-gcc: error: ./lib/libglog.so: No such file or directory

Surface points creation

When I try to create surface points using lib/utils/data_utils.py, I get:

pvnet/data/LINEMOD/ape/dense_pts.txt not found.

how to get dense_pts.txt

Dear author,

When I run 'lib/utils/data_utils.py', it shows: OSError: /home/xxx/pvnet/data/LINEMOD/ape/dense_pts.txt not found.

How can I generate dense_pts.txt?

./src/ransac_voting.cpp:7:81: error: ‘AT_ASSERTM’ was not declared in this scope

Hi, when I build the RANSAC voting layer, this error occurs. My gcc version is 5.4.0.
python setup.py build_ext --inplace
running build_ext
building 'ransac_voting' extension
gcc -pthread -B /media/data_1/home/zelin/anaconda3/envs/pvnet_new/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/media/data_1/home/zelin/anaconda3/envs/pvnet_new/lib/python3.6/site-packages/torch/lib/include -I/media/data_1/home/zelin/anaconda3/envs/pvnet_new/lib/python3.6/site-packages/torch/lib/include/TH -I/media/data_1/home/zelin/anaconda3/envs/pvnet_new/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/media/data_1/home/zelin/anaconda3/envs/pvnet_new/include/python3.6m -c ./src/ransac_voting.cpp -o build/temp.linux-x86_64-3.6/./src/ransac_voting.o -DTORCH_EXTENSION_NAME=ransac_voting -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
./src/ransac_voting.cpp: In function ‘at::Tensor generate_hypothesis(at::Tensor, at::Tensor, at::Tensor)’:
./src/ransac_voting.cpp:7:81: error: ‘AT_ASSERTM’ was not declared in this scope
#define CHECK_CUDA(x) AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor")
^
./src/ransac_voting.cpp:9:24: note: in expansion of macro ‘CHECK_CUDA’
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
^
./src/ransac_voting.cpp:26:5: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(direct);
^
./src/ransac_voting.cpp: In function ‘void voting_for_hypothesis(at::Tensor, at::Tensor, at::Tensor, at::Tensor, float)’:
./src/ransac_voting.cpp:7:81: error: ‘AT_ASSERTM’ was not declared in this scope
#define CHECK_CUDA(x) AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor")
^
./src/ransac_voting.cpp:9:24: note: in expansion of macro ‘CHECK_CUDA’
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
^
./src/ransac_voting.cpp:49:5: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(direct);
^
./src/ransac_voting.cpp: In function ‘at::Tensor generate_hypothesis_vanishing_point(at::Tensor, at::Tensor, at::Tensor)’:
./src/ransac_voting.cpp:7:81: error: ‘AT_ASSERTM’ was not declared in this scope
#define CHECK_CUDA(x) AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor")
^
./src/ransac_voting.cpp:9:24: note: in expansion of macro ‘CHECK_CUDA’
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
^
./src/ransac_voting.cpp:70:5: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(direct);
^
./src/ransac_voting.cpp: In function ‘void voting_for_hypothesis_vanishing_point(at::Tensor, at::Tensor, at::Tensor, at::Tensor, float)’:
./src/ransac_voting.cpp:7:81: error: ‘AT_ASSERTM’ was not declared in this scope
#define CHECK_CUDA(x) AT_ASSERTM(x.type().is_cuda(), #x " must be a CUDA tensor")
^
./src/ransac_voting.cpp:9:24: note: in expansion of macro ‘CHECK_CUDA’
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
^
./src/ransac_voting.cpp:93:5: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(direct);
^
error: command 'gcc' failed with exit status 1

some problem about debug

Hello, thanks for your method. When running demo.py, there are some problems, like:
Traceback (most recent call last):
File "/home/robot/Downloads/pvnet-master/tools/demo.py", line 8, in
from lib.ransac_voting_gpu_layer.ransac_voting_gpu import ransac_voting_layer_v3
File "/home/robot/Downloads/pvnet-master/lib/ransac_voting_gpu_layer/ransac_voting_gpu.py", line 2, in
import lib.ransac_voting_gpu_layer.ransac_voting as ransac_voting
ImportError: /home/robot/Downloads/pvnet-master/lib/ransac_voting_gpu_layer/ransac_voting.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at5ErrorC1ENS_14SourceLocationESs
Can you tell me the reasons? Thanks for your time.

run data_utils.py error, libspqr.2.0.2 not found

I found that the libspqr in /usr/lib/x86_64-linux-gnu/ is version 2.0.8, so I get the following error when running data_utils.py:
Traceback (most recent call last):
File "lib/utils/data_utils.py", line 18, in
from lib.utils.extend_utils.extend_utils import farthest_point_sampling
File "/home/jiaming/pvnet/lib/utils/extend_utils/extend_utils.py", line 3, in
from lib.utils.extend_utils._extend_utils import lib, ffi
ImportError: libspqr.so.2.0.2: cannot open shared object file: No such file or directory

How can this be solved?

Can this project run on CentOS linux?

The required versions of libglog/libsuitesparse/libatlas can't be found in CentOS's yum list. I used the versions available in the yum list and built my own libceres.so, but it can't be linked into _extend_utils.so.
Has anyone managed to get this project running on CentOS? I really need your help, thanks very much!

data_utils.py

Traceback (most recent call last):
File "lib/utils/data_utils.py", line 18, in
from lib.utils.extend_utils.extend_utils import farthest_point_sampling
File "/home/liuguangyuan/pvnet-master/lib/utils/extend_utils/extend_utils.py", line 3, in
from lib.utils.extend_utils._extend_utils import lib, ffi
ImportError: /home/hillaric/pvnet-master/lib/utils/extend_utils/_extend_utils.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN6google4base21CheckOpMessageBuilder9NewStringEv

python build_extend_utils_cffi.py Error

My gcc version is 5.4.0, and I revised the cuda_include and dart in build_extend_utils_cffi.py to be compatible with the CUDA on my computer.

The error is:
leviathan@leviathan-1080ti:~/PycharmProjects/pvnet/lib/utils/extend_utils$ python build_extend_utils_cffi.py
In file included from ./include/ceres/internal/autodiff.h:147:0,
from ./include/ceres/autodiff_cost_function.h:133,
from ./include/ceres/ceres.h:37,
from src/uncertainty_pnp.cpp:3:
./include/ceres/internal/fixed_array.h:38:26: fatal error: glog/logging.h: No such file or directory
compilation terminated.
generating ./_extend_utils.c
(already up-to-date)
the current directory is '/home/leviathan/PycharmProjects/pvnet/lib/utils/extend_utils'
running build_ext
building '_extend_utils' extension
gcc -pthread -B /home/leviathan/anaconda3/envs/torch0.4/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/leviathan/anaconda3/envs/torch0.4/include/python3.6m -c _extend_utils.c -o ./_extend_utils.o
gcc -pthread -shared -B /home/leviathan/anaconda3/envs/torch0.4/compiler_compat -L/home/leviathan/anaconda3/envs/torch0.4/lib -Wl,-rpath=/home/leviathan/anaconda3/envs/torch0.4/lib -Wl,--no-as-needed -Wl,--sysroot=/ ./_extend_utils.o src/mesh_rasterization.cpp.o src/farthest_point_sampling.cpp.o src/uncertainty_pnp.cpp.o src/nearest_neighborhood.cu.o ./lib/libceres.so ./lib/libglog.so /usr/local/cuda-9.0/lib64/libcudart.so -lstdc++ -o ./_extend_utils.cpython-36m-x86_64-linux-gnu.so
gcc: error: src/uncertainty_pnp.cpp.o: No such file or directory
Traceback (most recent call last):
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/distutils/unixccompiler.py", line 197, in link
self.spawn(linker + ld_args)
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/distutils/ccompiler.py", line 909, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/distutils/spawn.py", line 36, in spawn
_spawn_posix(cmd, search_path, dry_run=dry_run)
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/distutils/spawn.py", line 159, in _spawn_posix
% (cmd, exit_status))
distutils.errors.DistutilsExecError: command 'gcc' failed with exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/site-packages/cffi/ffiplatform.py", line 51, in _build
dist.run_command('build_ext')
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/distutils/command/build_ext.py", line 558, in build_extension
target_lang=language)
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/distutils/ccompiler.py", line 717, in link_shared_object
extra_preargs, extra_postargs, build_temp, target_lang)
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/distutils/unixccompiler.py", line 199, in link
raise LinkError(msg)
distutils.errors.LinkError: command 'gcc' failed with exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "build_extend_utils_cffi.py", line 39, in
ffibuilder.compile(verbose=True)
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/site-packages/cffi/api.py", line 723, in compile
compiler_verbose=verbose, debug=debug, **kwds)
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/site-packages/cffi/recompiler.py", line 1526, in recompile
compiler_verbose, debug)
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/site-packages/cffi/ffiplatform.py", line 22, in compile
outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
File "/home/leviathan/anaconda3/envs/torch0.4/lib/python3.6/site-packages/cffi/ffiplatform.py", line 58, in _build
raise VerificationError('%s: %s' % (e.__class__.__name__, e))
cffi.VerificationError: LinkError: command 'gcc' failed with exit status 1

About LINEMOD dataset

Thanks for your nice work! In the LINEMOD dataset, there is a file called, for example, ape_farthest.txt, which includes 15 3D points. What is it? Is it the same as farthest4/12...?

libceres.so.2

File "lib/utils/data_utils.py", line 18, in
from lib.utils.extend_utils.extend_utils import farthest_point_sampling
File "/home/liuguangyuan/pvnet-master/lib/utils/extend_utils/extend_utils.py", line 3, in
from lib.utils.extend_utils._extend_utils import lib, ffi
ImportError: libceres.so.2: cannot open shared object file: No such file or directory

Training image size?

(A screenshot of the dataset code, taken 2019-04-22, was attached.)
In your class LineModDatasetRealAug, I see that you resize the images, but I'm confused: what is the training image size?

How to Get 3D Points

In the demo, are the coordinates in cat_bb8_3d.txt preset by Blender, and are the coordinates in cat_points_3d.txt obtained by estimating_voting_distribution_with_mean?

Some questions about train and test!

I find that it doesn't use Hough voting during training, only during testing. What's more, your segmentation network (PVNet) directly regresses the prediction points; why don't you use a Mask R-CNN architecture?

RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /pytorch/aten/src/THC/generated/../generic/THCTensorMathReduce.cu:18

Sorry to bother you again! I am trying to train the network using my own data, so I wrote some code based on your repo to load my data, which can be found here: Dataset. But when I run:
$CUDA_LAUNCH_BLOCKING=1 python tools/train_linemod.py --cfg_file configs/linemod_train.json --linemod_cls db9
something goes wrong like this:
/home/tender/anaconda3/envs/pvnet/lib/python3.6/site-packages/torch/nn/functional.py:52: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
warnings.warn(warning.format(ret))
motion state False
/home/tender/anaconda3/envs/pvnet/lib/python3.6/site-packages/torch/nn/modules/upsampling.py:225: UserWarning: nn.UpsamplingBilinear2d is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.UpsamplingBilinear2d is deprecated. Use nn.functional.interpolate instead.")
/home/tender/anaconda3/envs/pvnet/lib/python3.6/site-packages/torch/nn/modules/upsampling.py:122: UserWarning: nn.Upsampling is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.Upsampling is deprecated. Use nn.functional.interpolate instead.")
train epoch 0 step 0 seg 0.61322248 ver 0.33627340 precision 6.14219141 recall 0.15100817
THCudaCheck FAIL file=/pytorch/aten/src/THC/generated/../generic/THCTensorMathReduce.cu line=18 error=77 : an illegal memory access was encountered
Traceback (most recent call last):
File "tools/train_linemod.py", line 372, in
train_net()
File "tools/train_linemod.py", line 356, in train_net
train(net, optimizer, train_loader, epoch)
File "tools/train_linemod.py", line 148, in train
seg_pred, vertex_pred, loss_seg, loss_vertex, precision, recall = net(image, mask, vertex, vertex_weights)
File "/home/tender/anaconda3/envs/pvnet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in call
result = self.forward(*input, **kwargs)
File "/home/tender/anaconda3/envs/pvnet/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 121, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/tender/anaconda3/envs/pvnet/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in call
result = self.forward(*input, **kwargs)
File "tools/train_linemod.py", line 90, in forward
loss_seg = torch.mean(loss_seg.view(loss_seg.shape[0],-1),1)
RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /pytorch/aten/src/THC/generated/../generic/THCTensorMathReduce.cu:18

some questions about running data_utils.py

Thank you very much for sharing your code.
This problem is one I cannot solve: it occurs when I run python lib/utils/data_utils.py.
(A screenshot of the error was attached.)
I searched the source code and found the extend_utils.py file.
(A screenshot of the file was attached.)
So I don't know why executing the code gives an error.
Does anyone know the reason? Thank you.

a little advice

Hi,
I notice that in your code, you read all the data (train, val and test) at once.
Isn't that a waste of memory?
Maybe you could improve the code?

nn.upsampling

UserWarning: nn.UpsamplingBilinear2d is deprecated. Use nn.functional.interpolate instead.
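This warning comes from PyTorch itself and is harmless. If you want to silence it, the deprecated module call can be replaced by the functional form, e.g. a sketch (not a change the repository has made):

import torch.nn.functional as F

# instead of nn.UpsamplingBilinear2d(scale_factor=2)(x):
x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)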

Occlusion dataset: inaccurate projection

I want to use the formula x = K [R|T] X to project the object's points. The result shows that the projection of the object cannot be displayed correctly. Has the author processed the model or the [R|T] matrix?
(Six screenshots of the incorrect projections, taken 2019-07-11, were attached.)
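For reference, this is the standard pinhole projection the question refers to, as a minimal numpy sketch with hypothetical inputs (it is not a statement about how the dataset poses are defined):

import numpy as np

def project(points_3d, K, R, t):
    # points_3d: (N, 3) model points; K: 3x3 intrinsics; R (3x3), t (3,): object pose in the camera frame
    cam = points_3d @ R.T + t.reshape(1, 3)   # X_cam = R * X + t
    uv = cam @ K.T                            # homogeneous image coordinates x = K * X_cam
    return uv[:, :2] / uv[:, 2:3]             # perspective division by depth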

About cat_points_3d.txt

Hello, thanks for your nice work! In the demo, the parameters in cat_points_3d.txt are used to help estimate the pose. However, I found that it is different from the farthest*.txt files. If I want to demo other categories, how can I get it (e.g. can_point_3d.txt)?

LinkError: command 'gcc' failed with exit status 1

(zhangjun) yanglu@yanglu:~/zhangjun/pvnet/lib/utils/extend_utils$ python build_extend_utils_cffi.py
generating ./_extend_utils.c
(already up-to-date)
the current directory is '/home/yanglu/zhangjun/pvnet/lib/utils/extend_utils'
running build_ext
building '_extend_utils' extension
gcc -pthread -B /home/yanglu/anaconda3/envs/zhangjun/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/yanglu/anaconda3/envs/zhangjun/include/python3.6m -c _extend_utils.c -o ./_extend_utils.o
gcc -pthread -shared -B /home/yanglu/anaconda3/envs/zhangjun/compiler_compat -L/home/yanglu/anaconda3/envs/zhangjun/lib -Wl,-rpath=/home/yanglu/anaconda3/envs/zhangjun/lib -Wl,--no-as-needed -Wl,--sysroot=/ ./_extend_utils.o src/mesh_rasterization.cpp.o src/farthest_point_sampling.cpp.o src/uncertainty_pnp.cpp.o src/nearest_neighborhood.cu.o ./lib/libceres.so ./lib/libglog.so /usr/local/cuda-8.0/lib64/libcudart.so -lstdc++ -o ./_extend_utils.cpython-36m-x86_64-linux-gnu.so
gcc: error: /usr/local/cuda-8.0/lib64/libcudart.so: No such file or directory
Traceback (most recent call last):
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/distutils/unixccompiler.py", line 197, in link
self.spawn(linker + ld_args)
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/distutils/ccompiler.py", line 909, in spawn
spawn(cmd, dry_run=self.dry_run)
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/distutils/spawn.py", line 36, in spawn
_spawn_posix(cmd, search_path, dry_run=dry_run)
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/distutils/spawn.py", line 159, in _spawn_posix
% (cmd, exit_status))
distutils.errors.DistutilsExecError: command 'gcc' failed with exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/site-packages/cffi/ffiplatform.py", line 51, in _build
dist.run_command('build_ext')
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/distutils/command/build_ext.py", line 448, in build_extensions
self._build_extensions_serial()
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/distutils/command/build_ext.py", line 473, in _build_extensions_serial
self.build_extension(ext)
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/distutils/command/build_ext.py", line 558, in build_extension
target_lang=language)
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/distutils/ccompiler.py", line 717, in link_shared_object
extra_preargs, extra_postargs, build_temp, target_lang)
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/distutils/unixccompiler.py", line 199, in link
raise LinkError(msg)
distutils.errors.LinkError: command 'gcc' failed with exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "build_extend_utils_cffi.py", line 39, in
ffibuilder.compile(verbose=True)
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/site-packages/cffi/api.py", line 697, in compile
compiler_verbose=verbose, debug=debug, **kwds)
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/site-packages/cffi/recompiler.py", line 1520, in recompile
compiler_verbose, debug)
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/site-packages/cffi/ffiplatform.py", line 22, in compile
outputfilename = _build(tmpdir, ext, compiler_verbose, debug)
File "/home/yanglu/anaconda3/envs/zhangjun/lib/python3.6/site-packages/cffi/ffiplatform.py", line 58, in _build
raise VerificationError('%s: %s' % (e.__class__.__name__, e))
cffi.error.VerificationError: LinkError: command 'gcc' failed with exit status 1

Question about the output in the paper

Thank you for your great work. However, there are some questions in the paper that confuse me.

According to the paper, the size of the output tensor of unit vectors is H x W x (K x 2 x C). Does this mean that for each class of objects, every pixel has K unit vectors? Does every pixel vote for all classes?

Another question is about the usage of the FPS algorithm. Is it used to generate the ground-truth keypoints? Or does the network output one keypoint and the others are generated by FPS?

Maybe these are stupid questions, but I would appreciate it if you could explain them. Thanks in advance.
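As a worked shape example (my reading of the paper, not an authoritative answer): with K keypoints and C object classes, the vertex branch outputs 2*K*C channels, so every pixel does carry K unit vectors for every class, and the segmentation branch decides which class's vectors a pixel actually votes with. A minimal sketch, where K = 9 (e.g. 8 FPS keypoints plus the center) and C = 1 are assumptions for illustration:

import torch

H, W, K, C = 480, 640, 9, 1
seg_pred = torch.zeros(1, C + 1, H, W)          # per-pixel class scores (background + C objects)
vertex_pred = torch.zeros(1, 2 * K * C, H, W)   # two vector components per keypoint per class at every pixel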

Train and test problem

When I came to the training step:
python tools/train_linemod.py --cfg_file configs/linemod_train.json --linemod_cls cat

The error is:
Traceback (most recent call last):
File "tools/train_linemod.py", line 72, in
os.path.join(cfg.REC_DIR,train_cfg['model_name']+'.log'))
File "/home/yx/Projects/PVNet/lib/utils/net_utils.py", line 177, in init
self.writer = SummaryWriter(log_dir=rec_dir)
File "/usr/local/lib/python3.5/dist-packages/tensorboardX/writer.py", line 254, in init
self._get_file_writer()
File "/usr/local/lib/python3.5/dist-packages/tensorboardX/writer.py", line 310, in _get_file_writer
self.file_writer = FileWriter(logdir=self.logdir, **self.kwargs)
TypeError: init() got an unexpected keyword argument 'log_dir'

Could anyone tell me how to solve it? Thanks!
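This looks like a tensorboardX version mismatch: newer releases renamed the SummaryWriter argument from log_dir to logdir, so the keyword used in net_utils.py is forwarded to FileWriter and rejected. A hedged workaround (an assumption about the installed version, not a repository fix) is to pass the directory positionally, or to pin an older tensorboardX:

# hypothetical edit in lib/utils/net_utils.py
self.writer = SummaryWriter(rec_dir)   # positional argument avoids the log_dir/logdir keyword rename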
