baowenbo / dain
Depth-Aware Video Frame Interpolation (CVPR 2019)
Home Page: https://sites.google.com/view/wenbobao/dain
License: MIT License
down_temp == 0.0f ?
I tried to get this running on Colab, but I'm running into CUDA issues...
Link to notebook.
Any ideas on how to fix this? It would be great to have a Colab notebook to experiment with!
error in correlation_forward_cuda_kernel: no kernel image is available for execution on the device
Warning: Legacy autograd function with non-static forward method is deprecated and will be removed in 1.3. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function) (THPFunction_do_forward at /pytorch/torch/csrc/autograd/python_function.cpp:622)
Traceback (most recent call last):
File "demo_MiddleBury.py", line 131, in <module>
y_s,offset,filter = model(torch.stack((X0, X1),dim = 0))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/content/DAIN/networks/DAIN.py", line 149, in forward
self.forward_flownets(self.flownets, cur_offset_input, time_offsets=time_offsets),
File "/content/DAIN/networks/DAIN.py", line 205, in forward_flownets
temp = model(input) # this is a single direction motion results, but not a bidirectional one
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/content/DAIN/PWCNet/PWCNet.py", line 220, in forward
corr6 = self.corr(c16, c26)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/content/DAIN/PWCNet/correlation_package_pytorch1_0/correlation.py", line 59, in forward
result = CorrelationFunction(self.pad_size, self.kernel_size, self.max_displacement,self.stride1, self.stride2, self.corr_multiply)(input1, input2)
File "/content/DAIN/PWCNet/correlation_package_pytorch1_0/correlation.py", line 27, in forward
self.pad_size, self.kernel_size, self.max_displacement,self.stride1, self.stride2, self.corr_multiply)
RuntimeError: CUDA call failed (correlation_forward_cuda at correlation_cuda.cc:80)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fe6bc85e193 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
frame #1: correlation_forward_cuda(at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, int, int, int, int, int, int) + 0x628 (0x7fe6b8f59ad8 in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #2: + 0x1bd3a (0x7fe6b8f69d3a in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #3: + 0x18880 (0x7fe6b8f66880 in /usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #4: python3() [0x50ac25]
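For what it's worth, "no kernel image is available for execution on the device" from a custom CUDA extension usually means the compiled .so has no -gencode entry matching the runtime GPU's compute capability: the build flags in this repo target sm_50/52/60/61, while e.g. a Colab T4 is sm_75. A minimal sketch of that check, where the helper function and arch list are illustrative, not part of DAIN:

```python
def has_kernel_image(device_capability, compiled_arches):
    """Simplified check: is the device's exact sm_XX among the compiled arches?

    device_capability: (major, minor) tuple, e.g. (7, 5) for a Colab T4.
    compiled_arches:   sm_XX values baked into the extension's -gencode flags.
    (Real binary compatibility is slightly broader within a major version,
    but an sm_5x/6x-only build still cannot run on a sm_7x GPU.)
    """
    dev = device_capability[0] * 10 + device_capability[1]
    return dev in compiled_arches

# Arch list taken from this repo's build flags:
DAIN_ARCHES = [50, 52, 60, 61]

print(has_kernel_image((7, 5), DAIN_ARCHES))  # False -> matches the reported error on a T4
print(has_kernel_image((6, 1), DAIN_ARCHES))  # True  -> e.g. a GTX 1080 would be covered
```

If that is the cause, adding `-gencode arch=compute_75,code=sm_75` to the nvcc flags in each setup.py and rebuilding the extensions on Colab should resolve it.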
Could you share your environment, e.g. by exporting it with conda env export > environment.yaml? Thank you!
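In case it helps, `conda env export > environment.yaml` produces a file along these lines. The pins below are hypothetical, based only on versions mentioned in this thread (Python 3.6, pytorch 1.0.0, cudatoolkit 9.0, cudnn 7.1.2), not the authors' actual environment:

```yaml
name: dain
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.6.8        # hypothetical pins; the real export would list exact builds
  - pytorch=1.0.0
  - cudatoolkit=9.0
  - cudnn=7.1.2
  - numpy
  - scipy
```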
When trying to install this project, it fails at the PyTorch extension compilation step.
Specifically, /usr/bin/nvcc failed with exit status 1, due to errors in /usr/include/c++/6/tuple and /usr/include/c++/6/type_traits (actual output below).
Our system is running Ubuntu 18.04 with CUDA 9.1 (nvcc V9.1.85) and NVIDIA driver 390.116. (Due to other projects on this workstation, reinstalling the graphics drivers or CUDA is not really viable...)
Python 3.6.8 is in a local conda environment, created with conda install cudatoolkit=9.0 cudnn=7.1.2.
For PyTorch, I've tried pytorch=1.0.0 and pytorch (no version pin) via conda install; neither resolves the compile error.
For gcc/g++, I've tried 5.5.0, 4.9.3, 6.5.0 and 7.4.0 (4.9 from the xenial repositories, since it's no longer available in bionic, the others from ppa:ubuntu-toolchain-r/test), switched via update-alternatives. No luck with any of them.
I've also tried to explicitly point nvcc at a specific gcc in the various setup.py scripts by adding '-DCUDA_HOST_COMPILER=/usr/bin/gcc-5' to the nvcc args list; even that did not work.
Googling similar issues suggests that PyTorch 1.0 extensions don't really work with nvcc/CUDA < 9.2; however, you're suggesting version 9.0 in the instructions...
Any thoughts on how best to resolve this, so that the PyTorch extensions can compile?
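One thing worth double-checking (a guess, not a confirmed fix): `-DCUDA_HOST_COMPILER=...` is a CMake cache variable, so passing it directly to nvcc merely defines a preprocessor macro and does not change the host compiler at all. nvcc's own flag for pinning the host compiler is `-ccbin` (long form `--compiler-bindir`). The nvcc argument list in each setup.py would then look something like this sketch (paths and arch choice illustrative):

```python
# Hypothetical tweak to a DAIN setup.py nvcc argument list: pin nvcc's
# host compiler with -ccbin instead of the CMake-only -DCUDA_HOST_COMPILER.
nvcc_args = [
    '-gencode', 'arch=compute_61,code=sm_61',  # one of the arches from the build log
    '-ccbin', '/usr/bin/gcc-5',                # actually selects gcc-5 as nvcc's host compiler
    '--compiler-options', "'-fPIC'",
]
print(' '.join(nvcc_args))
```

With gcc-5 genuinely driving the host-side compilation, the gcc-6 libstdc++ tuple errors below should no longer be reachable.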
Output from running ./build.sh in DAIN/my_packages (from one of the setup.py runs, to avoid replicating the same thing 8 times):
running install
running bdist_egg
running egg_info
creating filterinterpolation_cuda.egg-info
writing filterinterpolation_cuda.egg-info/PKG-INFO
writing dependency_links to filterinterpolation_cuda.egg-info/dependency_links.txt
writing top-level names to filterinterpolation_cuda.egg-info/top_level.txt
writing manifest file 'filterinterpolation_cuda.egg-info/SOURCES.txt'
reading manifest file 'filterinterpolation_cuda.egg-info/SOURCES.txt'
writing manifest file 'filterinterpolation_cuda.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'filterinterpolation_cuda' extension
creating build
creating build/temp.linux-x86_64-3.6
gcc -pthread -B /mnt/Partition2/deeplearning/DAIN/env/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/TH -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/include -I/mnt/Partition2/deeplearning/DAIN/env/include/python3.6m -c filterinterpolation_cuda.cc -o build/temp.linux-x86_64-3.6/filterinterpolation_cuda.o -std=c++11 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=filterinterpolation_cuda -D_GLIBCXX_USE_CXX11_ABI=0
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from filterinterpolation_cuda.cc:1:0:
/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/torch.h:7:2: warning: #warning "Including torch/torch.h for C++ extensions is deprecated. Please include torch/extension.h" [-Wcpp]
/usr/bin/nvcc -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/TH -I/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/include -I/mnt/Partition2/deeplearning/DAIN/env/include/python3.6m -c filterinterpolation_cuda_kernel.cu -o build/temp.linux-x86_64-3.6/filterinterpolation_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -DCUDA_HOST_COMPILER=/usr/bin/gcc-5 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=filterinterpolation_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:248: required by substitution of ‘template<class ... _UElements, ...> constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...)’ [full substitution context elided]
/mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:1117:48: required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
  return __and_<is_constructible<_Elements, _UElements&&>...>::value;
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [...]’ not a return-statement
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
  return __and_<is_convertible<_UElements&&, _Elements>...>::value;
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [...]’ not a return-statement
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
  return __and_<__not_<is_same<tuple<_Elements...>,
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [...]’ not a return-statement
[The same group of errors then repeats for the instantiations with _Elements = {at::Tensor&, at::Tensor&, at::Tensor&} (required from ATen/Functions.h:2558:85) and with _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor} (required from ATen/Functions.h:3623:197, where tuple:495:244 reports "wrong number of template arguments (6, should be 2)"). The log is truncated partway through a further instantiation with _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor>}.]
(_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’ /mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3626:267: required from here /usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’ return __and_<is_constructible<_Elements, _UElements&&>...>::value; ^~~~~ /usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement } ^ /usr/include/c++/6/tuple: In 
instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’: /usr/include/c++/6/tuple:626:362: required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... 
(_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’ /mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3626:267: required from here /usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’ return __and_<is_convertible<_UElements&&, _Elements>...>::value; ^~~~~ /usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement } ^ /usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’: /usr/include/c++/6/tuple:662:419: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... 
(_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’ /mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3626:267: required from here /usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2) return __and_<__not_<is_same<tuple<_Elements...>, ^ /usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’ struct is_convertible ^~~~~~~~~~~~~~ /usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, 
std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement } ^ /usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’: /usr/include/c++/6/tuple:686:422: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... 
(_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’ /mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3626:267: required from here /usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2) return __and_<__not_<is_same<tuple<_Elements...>, ^ /usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’ struct is_convertible ^~~~~~~~~~~~~~ /usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement } ^ /usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’: /usr/include/c++/6/tuple:626:248: required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... 
(_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’ /mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:4119:107: required from here /usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’ return __and_<is_constructible<_Elements, _UElements&&>...>::value; ^~~~~ /usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement } ^ /usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’: /usr/include/c++/6/tuple:626:362: required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... 
(_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’ /mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:4119:107: required from here /usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’ return __and_<is_convertible<_UElements&&, _Elements>...>::value; ^~~~~ /usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement } ^ /usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = 
{at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’: /usr/include/c++/6/tuple:662:419: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’ /mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:4119:107: required from here /usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2) return __and_<__not_<is_same<tuple<_Elements...>, ^ /usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’ struct is_convertible ^~~~~~~~~~~~~~ /usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; 
_Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement } ^ /usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’: /usr/include/c++/6/tuple:686:422: required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... 
(_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’ /mnt/Partition2/deeplearning/DAIN/env/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:4119:107: required from here /usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2) return __and_<__not_<is_same<tuple<_Elements...>, ^ /usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’ struct is_convertible ^~~~~~~~~~~~~~ /usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement } ^ error: command '/usr/bin/nvcc' failed with exit status 1
It might be sweet to integrate https://github.com/guochengqian/TENet and https://github.com/jlygit/AI-video-enhance into an advanced DAIN pipeline, since some videos delivered over the internet arrive at a lower resolution.
For super-resolution (when the video is delivered at a lower resolution):
For denoising (when the video is delivered over unstable bandwidth):
When I run build.sh for DAIN on Windows, I get:
D:\python3.7\lib\site-packages\torch\lib\include\torch\csrc\api\include\torch/torch.h(7): fatal error C1021: invalid preprocessor command 'warning'
D:\python3.7\lib\site-packages\torch\utils\cpp_extension.py:184: UserWarning: Error checking compiler version for cl: 'utf-8' codec can't decode byte 0xd3 in position 0: invalid continuation byte
warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
error: command 'D:\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.21.27702\bin\HostX86\x64\cl.exe' failed with exit status 2
But I have already configured the path for cl.exe. Can someone guide me? Thanks!
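The UserWarning in the log above is a locale issue rather than a path issue: the warning text itself says torch failed while "checking compiler version for cl", and on a Chinese-locale Windows cl.exe prints its banner in the GBK code page, which is not valid UTF-8. A minimal sketch of that failure (the banner text below is illustrative, not actual cl.exe output):

```python
# On a Chinese-locale Windows, cl.exe emits its banner in GBK; decoding
# those bytes as UTF-8 fails the same way as in the warning above.
# (Illustrative banner text; real cl.exe output may differ.)
banner = "用于 x64 的 Microsoft (R) C/C++ 优化编译器".encode("gbk")

try:
    banner.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)  # "'utf-8' codec can't decode byte ... invalid continuation byte"

# Decoding with the console's actual code page succeeds.
print(banner.decode("gbk"))
```

This only explains the version-check warning; the C1021 error from torch/torch.h is a separate MSVC issue.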
Can you share the train.py code? I don't understand the training strategy. Thanks.
Sorry to bother you (if you don't mind, please ignore the poor Chinglish below). Is DAIN suitable for real-time frame-interpolation scenarios? If not, do you know of any other open-source tools for real-time interpolation with low resource usage? I urgently need such a tool. I saw that you published a CVPR paper describing real-time frame interpolation on a heterogeneous computing system, which is how I found my way here; I was hoping to ask you about this area. I previously sent an inquiry from [email protected] to your Gmail address; did you receive it? If you don't mind, could you tell me your QQ or WeChat ID (or simply reply by email)?
Please forgive the intrusion.
Sorry to bother you, but I really need to know whether this tool is suitable for real-time video interpolation. I'm working on my graduation project, which needs a real-time video-interpolation tool.
I found that you published a paper titled "High-quality and real-time frame interpolation on heterogeneous computing system" at CVPR, which seems to be a suitable tool, but I didn't find its open-source code. If this project is not what I'm looking for, do you know of any tools suitable for real-time video interpolation? 😅
I was curious about the number of parameters in the 1D kernel estimation network. The paper states that this sub-module has 5.51M parameters. However, looking at the kernel estimation network structure in the supplemental material, I believe it is much larger than 5.51M: in the decoder, one 4x4x512x512 layer alone has around 4.19M parameters.
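The arithmetic behind that 4.19M figure can be checked in a few lines (the helper below is illustrative, not DAIN code):

```python
# Weight count of a 2D convolution: kernel_h * kernel_w * in_channels * out_channels,
# plus one bias term per output channel when bias is used.
def conv2d_params(kh, kw, c_in, c_out, bias=False):
    return kh * kw * c_in * c_out + (c_out if bias else 0)

# The 4x4x512x512 decoder layer from the supplemental material:
print(conv2d_params(4, 4, 512, 512))  # 4194304, i.e. ~4.19M weights
```

A single decoder layer is thus already most of the 5.51M quoted for the whole sub-network, which is the discrepancy the question raises.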
Newer SciPy (which is an unlisted requirement, by the way) requires code changes: scipy.misc.imread and friends have been removed, so the code should use http://imageio.github.io instead.
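A thin compatibility wrapper is one way to make the scripts run on newer SciPy (this sketch assumes the imageio package is installed; the function names mirror the removed SciPy ones):

```python
# scipy.misc.imread / scipy.misc.imsave were removed in SciPy 1.2.
# imageio provides equivalents; wrapping them lets the rest of the DAIN
# scripts keep calling imread()/imsave() unchanged.
# Assumption: the imageio package is installed (pip install imageio).
def imread(path):
    import imageio
    return imageio.imread(path)

def imsave(path, image):
    import imageio
    imageio.imwrite(path, image)
```

Call sites such as imread(frame_path) then work without further edits.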
Hi,
@baowenbo, thank you for your great work and for sharing the code. The test results were really impressive overall, but I found some blurry or unexpected results in DAIN_HD_videos, as shown below.
Can you explain why these happen, or share some ideas for improvement?
When executing sudo ./build.sh, I get this output:
pytorch 1.4.0
cuda-10.2.89-3
cudnn-7.6.5.32-3
on 5.4.24-1-MANJARO
GPU: NVIDIA GeForce GTX 1070
Need pytorch>=1.0.0
./build.sh: line 4: activate: No such file or directory
No CUDA runtime is found, using CUDA_HOME='/opt/cuda'
running install
running bdist_egg
running egg_info
creating mindepthflowprojection_cuda.egg-info
writing mindepthflowprojection_cuda.egg-info/PKG-INFO
writing dependency_links to mindepthflowprojection_cuda.egg-info/dependency_links.txt
writing top-level names to mindepthflowprojection_cuda.egg-info/top_level.txt
writing manifest file 'mindepthflowprojection_cuda.egg-info/SOURCES.txt'
reading manifest file 'mindepthflowprojection_cuda.egg-info/SOURCES.txt'
writing manifest file 'mindepthflowprojection_cuda.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'mindepthflowprojection_cuda' extension
creating build
creating build/temp.linux-x86_64-3.8
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fno-semantic-interposition -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fPIC -I/usr/lib/python3.8/site-packages/torch/include -I/usr/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/usr/lib/python3.8/site-packages/torch/include/TH -I/usr/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/usr/include/python3.8 -c mindepthflowprojection_cuda.cc -o build/temp.linux-x86_64-3.8/mindepthflowprojection_cuda.o -std=c++11 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=mindepthflowprojection_cuda -D_GLIBCXX_USE_CXX11_ABI=1
In file included from /usr/include/c10/cuda/CUDAStream.h:9,
from /usr/include/ATen/cuda/CUDAContext.h:11,
from mindepthflowprojection_cuda.cc:5:
/usr/include/c10/cuda/CUDAMacros.h:4:10: fatal error: c10/cuda/impl/cuda_cmake_macros.h: No such file or directory
4 | #include <c10/cuda/impl/cuda_cmake_macros.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1
No CUDA runtime is found, using CUDA_HOME='/opt/cuda'
running install
running bdist_egg
running egg_info
creating flowprojection_cuda.egg-info
writing flowprojection_cuda.egg-info/PKG-INFO
writing dependency_links to flowprojection_cuda.egg-info/dependency_links.txt
writing top-level names to flowprojection_cuda.egg-info/top_level.txt
writing manifest file 'flowprojection_cuda.egg-info/SOURCES.txt'
reading manifest file 'flowprojection_cuda.egg-info/SOURCES.txt'
writing manifest file 'flowprojection_cuda.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'flowprojection_cuda' extension
creating build
creating build/temp.linux-x86_64-3.8
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fno-semantic-interposition -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fPIC -I/usr/lib/python3.8/site-packages/torch/include -I/usr/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/usr/lib/python3.8/site-packages/torch/include/TH -I/usr/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/usr/include/python3.8 -c flowprojection_cuda.cc -o build/temp.linux-x86_64-3.8/flowprojection_cuda.o -std=c++11 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=flowprojection_cuda -D_GLIBCXX_USE_CXX11_ABI=1
In file included from /usr/include/c10/cuda/CUDAStream.h:9,
from /usr/include/ATen/cuda/CUDAContext.h:11,
from flowprojection_cuda.cc:5:
/usr/include/c10/cuda/CUDAMacros.h:4:10: fatal error: c10/cuda/impl/cuda_cmake_macros.h: No such file or directory
4 | #include <c10/cuda/impl/cuda_cmake_macros.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1
No CUDA runtime is found, using CUDA_HOME='/opt/cuda'
running install
running bdist_egg
running egg_info
creating separableconv_cuda.egg-info
writing separableconv_cuda.egg-info/PKG-INFO
writing dependency_links to separableconv_cuda.egg-info/dependency_links.txt
writing top-level names to separableconv_cuda.egg-info/top_level.txt
writing manifest file 'separableconv_cuda.egg-info/SOURCES.txt'
reading manifest file 'separableconv_cuda.egg-info/SOURCES.txt'
writing manifest file 'separableconv_cuda.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'separableconv_cuda' extension
creating build
creating build/temp.linux-x86_64-3.8
gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fno-semantic-interposition -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fPIC -I/usr/lib/python3.8/site-packages/torch/include -I/usr/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/usr/lib/python3.8/site-packages/torch/include/TH -I/usr/lib/python3.8/site-packages/torch/include/THC -I/opt/cuda/include -I/usr/include/python3.8 -c separableconv_cuda.cc -o build/temp.linux-x86_64-3.8/separableconv_cuda.o -std=c++11 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=separableconv_cuda -D_GLIBCXX_USE_CXX11_ABI=1
In file included from /usr/include/c10/cuda/CUDAStream.h:9,
from /usr/include/ATen/cuda/CUDAContext.h:11,
from separableconv_cuda.cc:5:
/usr/include/c10/cuda/CUDAMacros.h:4:10: fatal error: c10/cuda/impl/cuda_cmake_macros.h: No such file or directory
4 | #include <c10/cuda/impl/cuda_cmake_macros.h>
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: command 'gcc' failed with exit status 1
No CUDA runtime is found, using CUDA_HOME='/opt/cuda'
(The identical fatal error, "c10/cuda/impl/cuda_cmake_macros.h: No such file or directory", then repeats verbatim while building the interpolationch_cuda, depthflowprojection_cuda, interpolation_cuda, separableconvflow_cuda, and filterinterpolation_cuda extensions; each run is preceded by "No CUDA runtime is found, using CUDA_HOME='/opt/cuda'" and ends with "error: command 'gcc' failed with exit status 1".)
UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
Segmentation fault (core dumped)
Why?
Hello - firstly, thanks for this and your great documentation. Much appreciated.
I'm using Ubuntu 18.04 LTS, CUDA 10.2, the Nvidia 440 driver, and a single Titan X.
I've followed the readme, installed the dependencies in a virtual env, compiled the extensions, and am able to run the demo. However, after a few seconds the demo crashes and kernel-panics the entire system.
I've attempted to edit both extensions' NVCC flags, as per the helpful note in the documentation, but to no avail:
'-gencode', 'arch=compute_52,code=sm_52',
'-gencode', 'arch=compute_60,code=sm_60',
'-gencode', 'arch=compute_61,code=sm_61',
'-gencode', 'arch=compute_70,code=sm_70',
'-gencode', 'arch=compute_75,code=sm_75',
'-gencode', 'arch=compute_75,code=compute_75',
However, that also kernel-panics the machine.
I am able to monitor GPU memory usage right before the crash and can see PyTorch allocating GPU memory; it appears to climb to the maximum, and then the system dies.
Are there other specific hardware requirements for this code base?
I have a question from reading your paper. In your paper, the DVF method's PSNR is 34.12, but in the original DVF paper the PSNR is 35.8. So what is your evaluation method?
I have a question about calculating NIE.
I get "permission denied" when I run build.sh; how can I solve this problem?
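For what it's worth, "permission denied" on a script usually just means the file lacks the execute bit. A minimal sketch of the fix, demonstrated on a throwaway stand-in script (demo_build.sh is hypothetical; in the DAIN checkout you would run the same commands on build.sh):

```shell
# Create a stand-in script to demonstrate the permission fix
printf '#!/bin/sh\necho built\n' > demo_build.sh

sh demo_build.sh        # works even without the execute bit
chmod +x demo_build.sh  # grant execute permission (the usual fix)
./demo_build.sh         # direct execution now works too
```

In the repository itself, "chmod +x build.sh && ./build.sh" (or simply "bash build.sh") should get past the error; if it persists, check whether the checkout sits on a filesystem mounted with noexec.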
I want to train DAIN on a self-constructed HD video dataset; can you give me some suggestions on the details? Should I resize or crop the HD videos to match the Vimeo90K dataset?
Thank you in advance for your reply.
The results on SSIM cannot be reproduced. Please provide your SSIM code.
Hi, nice work!
Can you share the GT of HD dataset?
I tried looking for it, but unfortunately I cannot find it.
Thanks!
I am attempting to run DAIN on Linux, and the only way I could find was to clone this git repository and compile it. However, not a single line in the build.sh file seems to work. Is there something I might be missing or doing wrong?
when running CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py
I got the following error
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=405 error=11 : invalid argument
Traceback (most recent call last):
  File "demo_MiddleBury.py", line 131, in <module>
    y_s,offset,filter = model(torch.stack((X0, X1),dim = 0))
  File "/data/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/DAIN/networks/DAIN.py", line 130, in forward
    cur_filter_input[:, 3:, ...]),dim=0))
  File "/data/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/anaconda3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/data/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 320, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: cuda runtime error (11) : invalid argument at /pytorch/aten/src/THC/THCGeneral.cpp:405
PS: my environment is Python 3.7.3 + CUDA 9.0 + cuDNN 7.1.3 + PyTorch 1.0.0
In the paper you mentioned that the details of the adaptive warping layers and the configuration of the kernel estimation network were provided in the supplementary materials, but I couldn't find where the supplementary materials are. Would you mind providing them? Thanks.
Hi~, your code allows the user to run the modules in the my_package dir (such as DepthFlowProjection) on the CPU, but I couldn't find that implementation (DepthFlowProjectionLayer_cpu_forward()). Did I miss it?
When I try to interpolate a sequence (1 minute of video), it only interpolates the first frames and nothing else.
For example, with the interpolation at 0.25 it only interpolates the first 4 frames and then the process stops.
What could the problem be?
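For reference, interpolating a whole sequence means looping over every consecutive frame pair, not just the first one; if the loop stops after pair (0, 1), only the first few output frames appear. A toy sketch of the expected schedule (plain Python; the names are made up for illustration):

```python
def interpolation_schedule(n_frames, factor):
    # time steps to synthesize between each consecutive pair,
    # e.g. factor=4 means intermediate times 0.25, 0.5, 0.75
    steps = [j / factor for j in range(1, factor)]
    # one entry per consecutive frame pair: (left index, right index, times)
    return [(i, i + 1, steps) for i in range(n_frames - 1)]

print(interpolation_schedule(3, 4))
# [(0, 1, [0.25, 0.5, 0.75]), (1, 2, [0.25, 0.5, 0.75])]
```

A 1-minute clip should therefore yield one such entry per frame pair, not just the first.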
Hi all,
I've seen your videos on YouTube and the results are really amazing. Are you guys aware of the Smooth Video Project? https://www.svp-team.com/
It would be amazing if we could somehow integrate DAIN with Smooth Video Project. What do you guys think?
Here is a possible way of improving DAIN:
Instead of using frame X-1 and frame X+1 to get frame X, what about frames X-3, X-1, X+1, and X+3?
That could improve accuracy by giving the model more context, but it leads to some problems:
What if some of the frames are from different scenes? What should it do then?
Is there already scene identification for bi-frame interpolation within DAIN?
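On the scene-change point: one cheap guard is to compare intensity histograms of adjacent frames and skip interpolation for pairs whose difference exceeds a threshold. A toy sketch (plain Python; frames are flat lists of gray values, and the 0.5 threshold is an arbitrary assumption, not anything DAIN does):

```python
def hist(frame, bins=4, maxval=255):
    # coarse intensity histogram of a flat list of gray values
    h = [0] * bins
    for px in frame:
        h[min(px * bins // (maxval + 1), bins - 1)] += 1
    return h

def is_scene_cut(a, b, thresh=0.5):
    # normalized histogram distance in [0, 1]; large jumps suggest a cut
    ha, hb = hist(a), hist(b)
    diff = sum(abs(x - y) for x, y in zip(ha, hb)) / (2 * len(a))
    return diff > thresh

dark, light = [10] * 100, [240] * 100
print(is_scene_cut(dark, dark))   # False: same scene, safe to interpolate
print(is_scene_cut(dark, light))  # True: abrupt change, skip this pair
```

When a cut is detected, simply duplicating the nearer frame avoids blending two unrelated scenes.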
Hello, is the depth map actually useful? In the ablation study, adding the depth map only raised the score by 0.06. Could you provide a score vs. iteration curve? I want to get into this research direction, and the core problem seems to be motion estimation. Do you have any advice?
Line 169 in 4dbb134
Shouldn't the Interpolation error be RMSE instead of L1 distance?
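For context on the two candidate metrics: a root-mean-squared difference (the conventional Middlebury-style IE) and a mean absolute (L1) difference generally give different numbers. A toy comparison on flat lists of pixel values (plain Python; the function names are made up for illustration):

```python
import math

def ie_l1(pred, gt):
    # mean absolute difference (L1)
    return sum(abs(p - g) for p, g in zip(pred, gt)) / len(pred)

def ie_rmse(pred, gt):
    # root-mean-squared difference (Middlebury-style IE)
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(pred, gt)) / len(pred))

pred = [100.0, 102.0, 98.0, 101.0]
gt   = [100.0, 100.0, 100.0, 100.0]
print(ie_l1(pred, gt))    # 1.25
print(ie_rmse(pred, gt))  # 1.5
```

Since RMSE weights large errors more heavily, the reported IE depends directly on which of the two is used.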
Hello,
I am a bit inexperienced with Linux/Ubuntu, so this might be entirely me misunderstanding the requirements.
But as far as I understand, PyTorch 1.2.0 requires a Python version above 3.7? And there is no PyTorch 1.2.0 build for CUDA 9.0?
Do the versions need to be exactly what you wrote, or just at least that?
Again, this is probably me being a bit slow, but I would be grateful for an answer in any case.
Hi. Thanks for the wonderful work!
I ran the test code on my 1280x720 video, but I noticed that the edges of the image were moving; it looked like a few pixels had been vertically compressed (720 is not divisible by 128). I see that your code pads the input image and crops the output image, but I don't know what caused this boundary-drift problem.
Can you help me? Thanks!
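One common cause of such a shift (offered only as a guess) is the padding not being split evenly between the two sides, or the output being cropped with different offsets than were padded. A small sketch of symmetric padding to the next multiple of 128 (a hypothetical helper, not DAIN's actual code):

```python
def pad_to_multiple(h, w, m=128):
    # extra rows/cols needed so both dimensions divide evenly by m
    ph = (m - h % m) % m
    pw = (m - w % m) % m
    # split evenly: (top, bottom, left, right); cropping must use the
    # same offsets on the way out, or the content shifts by the mismatch
    return (ph // 2, ph - ph // 2, pw // 2, pw - pw // 2)

print(pad_to_multiple(720, 1280))  # (24, 24, 0, 0): 720 -> 768, 1280 unchanged
```

If the demo pads 48 rows on one side but crops symmetrically (or vice versa), the visible content drifts by exactly that mismatch.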
Hi Bao,
I executed this command: CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py
and I am getting a "module not found" error, even though the module is present.
I have followed everything as mentioned and got stuck at this step.
Need help, thanks!!
I get the following error when importing correlation_cuda on Python 3.6 and Python 2.7, with torch 1.4.0 and 1.1.0:
/usr/local/lib/python3.6/dist-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
the other modules from the my_package directories work fine.
Hi, I would like to adapt your implementation of separable convolution in other applications. However, it seems that the implementation could not pass the gradcheck of PyTorch. Have you ever tried it?
When I run
CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py
Dimetrodon
/home/zhenghe/anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/nn/modules/upsampling.py:129: UserWarning: nn.UpsamplingNearest2d is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
/home/zhenghe/anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/nn/modules/upsampling.py:129: UserWarning: nn.Upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.{} is deprecated. Use nn.functional.interpolate instead.".format(self.name))
/home/zhenghe/anaconda3/envs/pytorch1.0.0/lib/python3.6/site-packages/torch/nn/functional.py:2423: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
Segmentation fault
Hi. Thanks for the wonderful work!
I created the virtual environment from the provided environment.yaml
and executed build.sh successfully.
Now I am facing the issue below:
undefined symbol: _ZN3c105ErrorC1ENS_14SourceLocationERKSs
Any suggestion?
Can I get this running on Windows with Anaconda? I installed the required packages, but I don't know how to get the ./build.sh step working.
The demo script in the repo requires two frames as input; can DAIN directly take a video as input and output the corresponding synthesized video?
Or has anyone written a script to do so?
Can you provide sample code for calculating IE for color images? I cannot get the same numbers on the Middlebury OTHER dataset.
May I get your visualization code for the PWC-Net output?
I noticed that DAIN can't handle HD video sizes [1280x720] with 8 GB of GPU memory; I'm running on a GTX 1070. Is there any way for me to reduce the memory usage on the GPU side?
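As far as I know DAIN has no built-in tiling mode, but a generic way to cut peak GPU memory is to process the frame in overlapping tiles and blend the overlaps afterwards. A sketch of the tile layout along one dimension (plain Python; the tile size and overlap are arbitrary assumptions):

```python
def tile_spans(length, tile, overlap):
    # start/end offsets so consecutive tiles overlap by `overlap` pixels;
    # the last tile is anchored to the end of the dimension
    step = tile - overlap
    spans, start = [], 0
    while start + tile < length:
        spans.append((start, start + tile))
        start += step
    spans.append((max(length - tile, 0), length))
    return spans

print(tile_spans(1280, 512, 64))  # [(0, 512), (448, 960), (768, 1280)]
```

Each tile would be interpolated independently and the overlapping strips blended (e.g. linearly) to hide seams; halving the input resolution is the simpler fallback.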
Can you share the pre-trained PWCNet model? Thanks.
Error: FileNotFoundError: [Errno 2] No such file or directory: 'PWCNet/pwc_net.pth.tar'
Undefined names are usually a sign of a typo, missing imports, or code that has not been ported to Python 3. These would be compile-time errors in a compiled language but in Python a NameError is raised which will halt/crash the script on the user.
flake8 testing of https://github.com/baowenbo/DAIN on Python 3.8.0
$ flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
./my_package/test_module.py:21:25: F821 undefined name 'SeparableConvFlowModule'
FilterInterpolate = SeparableConvFlowModule(filtersize)
^
./my_package/test_module.py:125:25: F821 undefined name 'SeparableConvModule'
FilterInterpolate = SeparableConvModule(filtersize)
^
./my_package/test_module.py:219:25: F821 undefined name 'FilterInterpolationModule'
FilterInterpolate = FilterInterpolationModule()
^
./my_package/test_module.py:324:19: F821 undefined name 'InterpolationModule'
Interpolate = InterpolationModule()
^
./my_package/test_module.py:408:19: F821 undefined name 'InterpolationChModule'
Interpolate = InterpolationChModule(input1.size(1))
^
./my_package/test_module.py:492:15: F821 undefined name 'FlowProjectionModule'
Project = FlowProjectionModule()
^
./my_package/test_module.py:518:15: F821 undefined name 'FlowProjectionModule'
Project = FlowProjectionModule() # regnenerate
^
./my_package/test_module.py:632:23: F821 undefined name 'output'
x = output_cuda - output.cuda()
^
./my_package/test_module.py:683:15: F821 undefined name 'WeightedFlowProjectionModule'
Project = WeightedFlowProjectionModule(threshold=20.0/255.0,requires_grad=True)
^
./my_package/test_module.py:710:15: F821 undefined name 'WeightedFlowProjectionModule'
Project = WeightedFlowProjectionModule(threshold=20.0/255.0, requires_grad=True) # regnenerate
^
./my_package/test_module.py:770:19: F821 undefined name 'AdaptiveWeightInterpolationModule'
Interpolate = AdaptiveWeightInterpolationModule(training=training)
^
./MegaDepth/data/image_folder.py:42:78: F821 undefined name 'IMG_EXTENSIONS'
"Supported image extensions are: " + ",".join(IMG_EXTENSIONS)))
^
./MegaDepth/data/image_folder.py:131:78: F821 undefined name 'IMG_EXTENSIONS'
"Supported image extensions are: " + ",".join(IMG_EXTENSIONS)))
^
13 F821 undefined name 'IMG_EXTENSIONS'
13
https://flake8.pycqa.org/en/latest/user/error-codes.html
Regarding the flake8 test selection: this PR does not focus on style violations (the majority of flake8 error codes, which psf/black can autocorrect). Instead, these tests focus on runtime safety and correctness:
I get this error:
RuntimeError: CUDA call failed (correlation_forward_cuda at correlation_cuda.cc:80)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7fa66bef6193 in /opt/conda/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: correlation_forward_cuda(at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, at::Tensor&, int, int, int, int, int, int) + 0x5ea (0x7fa6694c0d4a in /opt/conda/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #2: + 0x1e9b4 (0x7fa6694d19b4 in /opt/conda/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #3: + 0x1b870 (0x7fa6694ce870 in /opt/conda/lib/python3.6/site-packages/correlation_cuda-0.0.0-py3.6-linux-x86_64.egg/correlation_cuda.cpython-36m-x86_64-linux-gnu.so)
frame #11: THPFunction_do_forward(THPFunction*, _object*) + 0x4ac (0x7fa6b72d0fec in /opt/conda/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
when running
CUDA_VISIBLE_DEVICES=0 python demo_MiddleBury.py
Anybody else get this error?
Also, does anybody know how to fix this issue?
As described in Figure 3 of the CVPR paper, the input of the frame synthesis network consists of five components: raw interpolation kernels, projected flows, warped depth maps, warped frames, and warped context features. However, in lines 177 to 181 of DAIN_slowmotion.py, the input to rectifyNet does not seem to match that description:
rectify_input = torch.cat((cur_output_temp,ref0,ref2, cur_offset_output[0],cur_offset_output[1], cur_filter_output[0],cur_filter_output[1], ctx0,ctx2 ),dim =1)
It seems that the actual input to the frame synthesis network does not include the warped depth maps, and instead uses a blended result of the warped frames.
So which is the correct form of the proposed method? Would you please give a numerical analysis of these different settings?
I want to use the obtained projected flows to warp a three-channel image, without considering the kernel estimation. What should I do?
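In case it helps, warping by a projected flow is just backward bilinear sampling: each output pixel reads the input at its flow-displaced position (torch.nn.functional.grid_sample implements this on the GPU; for a three-channel image, apply the same flow to each channel). A dependency-free toy sketch of the idea:

```python
def warp(img, flow_x, flow_y):
    # Backward-warp a 2D image (list of rows) by per-pixel flow fields:
    # output (y, x) samples the input at (y + flow_y, x + flow_x) bilinearly.
    h, w = len(img), len(img[0])

    def px(yy, xx):
        # clamp to the border, like grid_sample's padding_mode="border"
        return img[min(max(yy, 0), h - 1)][min(max(xx, 0), w - 1)]

    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x + flow_x[y][x], y + flow_y[y][x]
            x0, y0 = int(sx), int(sy)   # truncation is fine for the
            dx, dy = sx - x0, sy - y0   # non-negative coords in this toy
            out[y][x] = ((1 - dx) * (1 - dy) * px(y0, x0)
                         + dx * (1 - dy) * px(y0, x0 + 1)
                         + (1 - dx) * dy * px(y0 + 1, x0)
                         + dx * dy * px(y0 + 1, x0 + 1))
    return out

img = [[0, 10], [20, 30]]
zero = [[0.0, 0.0], [0.0, 0.0]]
one = [[1.0, 1.0], [1.0, 1.0]]
print(warp(img, zero, zero))  # identity: [[0.0, 10.0], [20.0, 30.0]]
print(warp(img, one, zero))   # content shifts left by one column
```

In PyTorch, the equivalent is building a sampling grid from the flow, normalizing it to [-1, 1], and calling grid_sample on the three-channel tensor.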