vqfr's Issues

requirements.txt pins numpy citing numba, but numba's setup now asks for numpy >= 1.21

It appears that numba has been updated since this project was uploaded. Looking at the numba setup file:
https://github.com/numba/numba/blob/main/setup.py

min_numpy_build_version = "1.11"
min_numpy_run_version = "1.21"
build_requires = ['numpy >={}'.format(min_numpy_build_version)]
install_requires = [
    'llvmlite >={},<{}'.format(min_llvmlite_version, max_llvmlite_version),
    'numpy >={}'.format(min_numpy_run_version),
    'importlib_metadata; python_version < "3.9"',
]

At any rate, that pin is overwritten by facexlib, which, if installed after vqfr, reinstalls the later versions.
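The version floor quoted above can be checked without installing anything. A minimal sketch (the `satisfies` helper and the version strings are illustrative, not part of vqfr or numba):

```python
# Minimal sketch: does an installed numpy version meet numba's declared
# runtime floor (numpy >= 1.21, per the setup.py excerpt above)?
min_numpy_run_version = "1.21"

def satisfies(installed: str, minimum: str) -> bool:
    """Compare dotted version strings numerically on major.minor."""
    def parts(v: str) -> list:
        return [int(p) for p in v.split(".")[:2]]
    return parts(installed) >= parts(minimum)

print(satisfies("1.20.3", min_numpy_run_version))  # False: below the floor
print(satisfies("1.23.0", min_numpy_run_version))  # True
```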

eye color change

Hello, when using the model for face restoration, the eye color changes. Do you have any suggestions for solving this problem?

Have you tried larger codebooks?

Hi, great work! Looking forward to the code release!
I have a question about the codebook: in the paper, 1024 codebook entries with 256 channels were used. Is this designed for small storage or for efficient training?
Have you tried larger codebooks? Would the results be better? Thanks!
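For context on what "1024 entries with 256 channels" means, here is a toy sketch of the vector-quantization lookup such a codebook performs (random data only; this is not the paper's model or weights):

```python
import numpy as np

rng = np.random.default_rng(0)
num_entries, channels = 1024, 256   # sizes quoted from the paper
codebook = rng.standard_normal((num_entries, channels))
features = rng.standard_normal((8, channels))  # 8 toy feature vectors

# Replace each feature with its nearest codebook entry (L2 distance).
dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
indices = dists.argmin(axis=1)      # shape (8,): chosen entry per feature
quantized = codebook[indices]       # shape (8, 256): snapped features
print(indices.shape, quantized.shape)
```

A larger codebook gives the lookup more candidates per feature, which is the storage/expressiveness trade-off the question is about.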

Results not good enough

Hey, thanks for the repository,
But the results are not really good enough for blind face restoration. GFP-GAN had a similar issue with heavy smoothing. GPEN works much better at restoring and preserving the overall texture, and its output looks highly realistic. I hope you improve on keeping the overall structure and texture so it does not look unrealistic.
Thanks

Hi! Training issue!

I am very impressed by your paper,
so I want to train the restoration phase, but I have a problem.
image
Please help me!
Thanks!!

Demo.py does not work

I set up the requirements in a virtual environment, downloaded and placed the pre-trained models, and executed demo.py. It runs but shows lots of warning and fallback messages. After execution, the output in the results folder is fully processed except that the faces of all persons in every image are the same as in the input images; that is, the faces are not restored. Any help in correcting the setup, if it is wrong, would be great.

PS D:\Development\Workspace\VSCode\Python\VQFR> & C:/Development/Tools/Anaconda3/envs/VQFR/python.exe d:/Development/Workspace/VSCode/Python/VQFR/demo.py
C:\Development\Tools\Anaconda3\envs\VQFR\lib\site-packages\torchvision\models\_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
C:\Development\Tools\Anaconda3\envs\VQFR\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing weights=None.
warnings.warn(msg)
Processing 00.jpg ...
Failed inference for VQFR: Could not run 'torchvision::deform_conv2d' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::deform_conv2d' is only available for these backends: [Dense, UNKNOWN_TENSOR_TYPE_ID, QuantizedXPU, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseCPU, SparseCUDA, SparseHIP, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, SparseXPU, UNKNOWN_TENSOR_TYPE_ID, SparseVE, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, NestedTensorCUDA, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID, UNKNOWN_TENSOR_TYPE_ID].

CPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\cpu\deform_conv2d_kernel.cpp:1162 [kernel]
BackendSelect: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\PythonFallbackKernel.cpp:133 [backend fallback]
Named: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradCPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradCUDA: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradXLA: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradMPS: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradIPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradXPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradHPU: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
UNKNOWN_TENSOR_TYPE_ID: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradLazy: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradPrivateUse1: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradPrivateUse2: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
AutogradPrivateUse3: registered at C:\Users\circleci\project\torchvision\csrc\ops\autograd\deform_conv2d_kernel.cpp:256 [autograd kernel]
Tracer: registered at C:\cb\pytorch_1000000000000\work\torch\csrc\autograd\TraceTypeManual.cpp:295 [backend fallback]
AutocastCPU: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\autocast_mode.cpp:481 [backend fallback]
Autocast: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\autocast_mode.cpp:324 [backend fallback]
Batched: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\BatchingRegistrations.cpp:1064 [backend fallback]
VmapMode: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
Functionalize: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\FunctionalizeFallbackKernel.cpp:89 [backend fallback]
PythonTLSSnapshot: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\PythonFallbackKernel.cpp:137 [backend fallback]
.
[The same "Failed inference for VQFR: Could not run 'torchvision::deform_conv2d'" traceback is printed a second time; identical output omitted.]
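The repeated deform_conv2d failure above typically indicates a CPU-only torchvision wheel paired with a CUDA build of torch: note that the dispatcher lists CPU and autograd kernels for the op but no CUDA kernel. A hedged sketch of the check; the version strings below are example values, not read from this machine's log:

```python
# Hypothetical mismatch check: compare the local build suffix of the torch
# and torchvision wheels (e.g. '1.12.1+cu113' vs '0.13.1+cpu').
def build_tag(version: str) -> str:
    """Return the wheel's local build suffix ('cpu', 'cu113', ...) or ''."""
    return version.split("+", 1)[1] if "+" in version else ""

torch_ver = "1.12.1+cu113"   # example: torch.__version__ on a CUDA install
tv_ver = "0.13.1+cpu"        # example: torchvision.__version__, CPU-only wheel
mismatch = build_tag(torch_ver) != build_tag(tv_ver)
print(mismatch)
```

If the tags differ, reinstalling torchvision from the wheel index matching the torch CUDA build usually resolves this class of error.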
Tile 1/20
Tile 2/20
Tile 3/20
Tile 4/20
Tile 5/20
Tile 6/20
Tile 7/20
Tile 8/20
Tile 9/20
Tile 10/20
Tile 11/20
Tile 12/20
Tile 13/20
Tile 14/20
Tile 15/20
Tile 16/20
Tile 17/20
Tile 18/20
Tile 19/20
Tile 20/20
Processing 10045.png ...
[The same "Failed inference for VQFR: Could not run 'torchvision::deform_conv2d'" traceback is printed three more times; identical output omitted.]
Tile 1/6
Tile 2/6
Tile 3/6
Tile 4/6
Tile 5/6
Tile 6/6
Processing Blake_Lively.jpg ...
[The same "Failed inference for VQFR: Could not run 'torchvision::deform_conv2d'" traceback is printed twice more, the second copy truncated; identical output omitted.]

Tile 1/4
Tile 2/4
Tile 3/4
Tile 4/4
Results are in the [results] folder.

List of installed packages are :

_pytorch_select 1.1.0 cpu
absl-py 1.2.0 pypi_0 pypi
addict 2.4.0 pypi_0 pypi
aom 3.4.0 h0e60522_1 conda-forge
basicsr 1.4.1 pypi_0 pypi
blas 2.108 mkl conda-forge
blas-devel 3.9.0 8_mkl conda-forge
brotlipy 0.7.0 py37hcc03f2d_1004 conda-forge
bzip2 1.0.8 h8ffe710_4 conda-forge
ca-certificates 2022.6.15 h5b45459_0 conda-forge
cachetools 5.2.0 pypi_0 pypi
certifi 2022.6.15 pypi_0 pypi
cffi 1.15.1 py37hd8e9650_0 conda-forge
charset-normalizer 2.1.0 pyhd8ed1ab_0 conda-forge
colorama 0.4.5 pyhd8ed1ab_0 conda-forge
cryptography 37.0.4 py37h65266a2_0 conda-forge
cudatoolkit 11.3.1 h280eb24_10 conda-forge
cycler 0.11.0 pypi_0 pypi
einops 0.4.1 pyhd8ed1ab_0 conda-forge
expat 2.4.8 h39d44d4_0 conda-forge
facexlib 0.2.4 pypi_0 pypi
ffmpeg 4.3 ha925a31_0 pytorch
filterpy 1.4.5 pypi_0 pypi
font-ttf-dejavu-sans-mono 2.37 hab24e00_0 conda-forge
font-ttf-inconsolata 3.000 h77eed37_0 conda-forge
font-ttf-source-code-pro 2.038 h77eed37_0 conda-forge
font-ttf-ubuntu 0.83 hab24e00_0 conda-forge
fontconfig 2.14.0 hce3cb01_0 conda-forge
fonts-conda-ecosystem 1 0 conda-forge
fonts-conda-forge 1 0 conda-forge
fonttools 4.34.4 pypi_0 pypi
freeglut 3.2.2 h0e60522_1 conda-forge
freetype 2.10.4 h546665d_2 conda-forge
future 0.18.2 py37h03978a9_5 conda-forge
gettext 0.19.8.1 ha2e2712_1008 conda-forge
gfpgan 1.3.4 pypi_0 pypi
glib 2.72.1 h7755175_0 conda-forge
glib-tools 2.72.1 h7755175_0 conda-forge
google-auth 2.10.0 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
grpcio 1.47.0 pypi_0 pypi
gst-plugins-base 1.20.3 he07aa86_0 conda-forge
gstreamer 1.20.3 hdff456e_0 conda-forge
icu 70.1 h0e60522_0 conda-forge
idna 3.3 pyhd8ed1ab_0 conda-forge
imageio 2.21.1 pypi_0 pypi
importlib-metadata 4.12.0 pypi_0 pypi
intel-openmp 2022.1.0 h57928b3_3787 conda-forge
jasper 2.0.33 h77af90b_0 conda-forge
joblib 1.1.0 pyhd8ed1ab_0 conda-forge
jpeg 9e h8ffe710_2 conda-forge
kiwisolver 1.4.4 pypi_0 pypi
krb5 1.19.3 h1176d77_0 conda-forge
lcms2 2.12 h2a16943_0 conda-forge
lerc 4.0.0 h63175ca_0 conda-forge
libblas 3.9.0 8_mkl conda-forge
libcblas 3.9.0 8_mkl conda-forge
libclang 14.0.6 default_h77d9078_0 conda-forge
libclang13 14.0.6 default_h77d9078_0 conda-forge
libdeflate 1.13 h8ffe710_0 conda-forge
libffi 3.4.2 h8ffe710_5 conda-forge
libglib 2.72.1 h3be07f2_0 conda-forge
libiconv 1.16 he774522_0 conda-forge
liblapack 3.9.0 8_mkl conda-forge
liblapacke 3.9.0 8_mkl conda-forge
libogg 1.3.4 h8ffe710_1 conda-forge
libopencv 4.6.0 py37hbae3f13_0 conda-forge
libpng 1.6.37 h1d00b33_3 conda-forge
libprotobuf 3.20.1 h7755175_0 conda-forge
libtiff 4.4.0 h92677e6_3 conda-forge
libuv 1.44.2 h8ffe710_0 conda-forge
libvorbis 1.3.7 h0e60522_0 conda-forge
libwebp 1.2.3 h8ffe710_1 conda-forge
libwebp-base 1.2.3 h8ffe710_2 conda-forge
libxcb 1.13 hcd874cb_1004 conda-forge
libxml2 2.9.14 hf5bbc77_3 conda-forge
libzlib 1.2.12 h8ffe710_2 conda-forge
llvmlite 0.39.0 pypi_0 pypi
lmdb 1.3.0 pypi_0 pypi
lz4-c 1.9.3 h8ffe710_1 conda-forge
m2w64-gcc-libgfortran 5.3.0 6 conda-forge
m2w64-gcc-libs 5.3.0 7 conda-forge
m2w64-gcc-libs-core 5.3.0 7 conda-forge
m2w64-gmp 6.1.0 2 conda-forge
m2w64-libwinpthread-git 5.0.0.4634.697f757 2 conda-forge
markdown 3.4.1 pypi_0 pypi
markupsafe 2.1.1 pypi_0 pypi
matplotlib 3.5.2 pypi_0 pypi
mkl 2020.4 hb70f87d_311 conda-forge
mkl-devel 2020.4 h57928b3_312 conda-forge
mkl-include 2020.4 hb70f87d_311 conda-forge
mkl-service 2.3.0 py37h4ab8f01_2 conda-forge
msys2-conda-epoch 20160418 1 conda-forge
networkx 2.6.3 pypi_0 pypi
ninja 1.11.0 h2d74725_0 conda-forge
numba 0.56.0 pypi_0 pypi
numpy 1.20.3 py37h4c2b6ed_2 conda-forge
oauthlib 3.2.0 pypi_0 pypi
opencv 4.6.0 py37h03978a9_0 conda-forge
openh264 2.2.0 h0e60522_2 conda-forge
openjpeg 2.4.0 hb211442_1 conda-forge
openssl 1.1.1q h8ffe710_0 conda-forge
packaging 21.3 pypi_0 pypi
pcre 8.45 h0e60522_0 conda-forge
pillow 9.2.0 py37h8675073_0 conda-forge
pip 22.2.2 pyhd8ed1ab_0 conda-forge
protobuf 3.19.4 pypi_0 pypi
pthread-stubs 0.4 hcd874cb_1001 conda-forge
py-opencv 4.6.0 py37h90c5f73_0 conda-forge
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pyopenssl 22.0.0 pyhd8ed1ab_0 conda-forge
pyparsing 3.0.9 pypi_0 pypi
pysocks 1.7.1 py37h03978a9_5 conda-forge
python 3.7.12 h7840368_100_cpython conda-forge
python-dateutil 2.8.2 pypi_0 pypi
python_abi 3.7 2_cp37m conda-forge
pytorch 1.12.1 py3.7_cuda11.3_cudnn8_0 pytorch
pytorch-mutex 1.0 cuda pytorch
pywavelets 1.3.0 pypi_0 pypi
pyyaml 6.0 py37hcc03f2d_4 conda-forge
qt-main 5.15.4 h467ea89_2 conda-forge
realesrgan 0.2.5.0 pypi_0 pypi
requests 2.28.1 pyhd8ed1ab_0 conda-forge
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
scikit-image 0.19.3 pypi_0 pypi
scikit-learn 1.0.2 py37hcabfae0_0 conda-forge
scipy 1.7.3 py37hb6553fb_0 conda-forge
setuptools 63.4.2 py37h03978a9_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
sqlite 3.39.2 h8ffe710_0 conda-forge
svt-av1 1.1.0 h0e60522_1 conda-forge
tb-nightly 2.10.0a20220809 pypi_0 pypi
tbb 2021.5.0 h2d74725_1 conda-forge
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
threadpoolctl 3.1.0 pyh8a188c0_0 conda-forge
tifffile 2021.11.2 pypi_0 pypi
timm 0.6.7 pypi_0 pypi
tk 8.6.12 h8ffe710_0 conda-forge
torch 1.12.1 pypi_0 pypi
torchaudio 0.12.1 py37_cu113 pytorch
torchvision 0.13.1 pypi_0 pypi
tqdm 4.64.0 pyhd8ed1ab_0 conda-forge
typing-extensions 4.3.0 hd8ed1ab_0 conda-forge
typing_extensions 4.3.0 pyha770c72_0 conda-forge
ucrt 10.0.20348.0 h57928b3_0 conda-forge
urllib3 1.26.11 pyhd8ed1ab_0 conda-forge
vc 14.2 hb210afc_6 conda-forge
vs2015_runtime 14.29.30037 h902a5da_6 conda-forge
werkzeug 2.2.2 pypi_0 pypi
wheel 0.37.1 pyhd8ed1ab_0 conda-forge
win_inet_pton 1.1.0 py37h03978a9_4 conda-forge
x264 1!161.3030 h8ffe710_1 conda-forge
x265 3.5 h2d74725_3 conda-forge
xorg-libxau 1.0.9 hcd874cb_0 conda-forge
xorg-libxdmcp 1.1.3 hcd874cb_0 conda-forge
xz 5.2.5 h62dcd97_1 conda-forge
yaml 0.2.5 h8ffe710_2 conda-forge
yapf 0.32.0 pyhd8ed1ab_0 conda-forge
zipp 3.8.1 pypi_0 pypi
zlib 1.2.12 h8ffe710_2 conda-forge
zstd 1.5.2 h6255e5f_3 conda-forge

Poor results when training VQFR v1

Training VQFR v1 with the stage-1 pretrained model for 200k iterations gives the results shown below.
image
Have you encountered a similar situation? How can it be solved?

How to convert the model to ONNX?

When converting the torch model to an ONNX model, the following error occurred:
    main()
  File "convert_to_onnx.py", line 118, in main
    torch.onnx.export(model, x, onnx_model_path,
  File "\Python\Python310\site-packages\torch\onnx\utils.py", line 506, in export
    _export(
  File "\Python\Python310\site-packages\torch\onnx\utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "\Python\Python310\site-packages\torch\onnx\utils.py", line 1117, in _model_to_graph
    graph = _optimize_graph(
  File "\Python\Python310\site-packages\torch\onnx\utils.py", line 665, in _optimize_graph
    graph = _C._jit_pass_onnx(graph, operator_export_type)
  File "\Python\Python310\site-packages\torch\onnx\utils.py", line 1891, in _run_symbolic_function
    return symbolic_fn(graph_context, *inputs, **attrs)
  File "\Python\Python310\site-packages\torch\onnx\symbolic_helper.py", line 306, in wrapper
    return fn(g, *args, **kwargs)
  File "\Python\Python310\site-packages\deform_conv2d_onnx_exporter.py", line 655, in deform_conv2d
    dcn_params = create_dcn_params(input, weight, offset, mask, bias,
  File "\Python\Python310\site-packages\deform_conv2d_onnx_exporter.py", line 568, in create_dcn_params
    in_h = get_tensor_dim_size(input, 2) + 2 * (pad_h + additional_pad_h)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
(Occurred when translating deform_conv2d).

How to solve this problem, thank you!

Resolution is low

I am not sure why, after running VQFR, the mouth region of the face has lower resolution than the result from GFPGAN. I used the default parameters.
VQFR
image
GFPGAN
image

Any parameter I can change to make the resolution higher?

config file for vqfr_v2

Thanks for your sharing and great contribution!!
I'd like to know whether the pretrained VQFR_v2 (VQFR_v2.pth) was trained with train_vqfr_v2_B16_200K.yml, VQ_Codebook_FFHQ512_v2.pth and the FFHQ 512 dataset. If any of these (config file, codebook or dataset) or anything else differs, could you share the details?

Thanks in advance. : )

Restoration Training. CUDA out of memory

Hi!
The inference results are amazing!!!
I have experience with your GFPGAN and I'd like to train VQFR.
I have two GPUs with 12 GB of memory each. When starting training I get the error "CUDA out of memory".
I tried changing "batch_size_per_gpu" to 1 and "--nproc_per_node" to 1, but I got the same error :(
Could you help me with the GPU requirements? Do I need an NVIDIA A100 (as in https://arxiv.org/pdf/2205.06803.pdf), or can I train on my GPUs?
Thank you
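As a stopgap while the GPU requirements are clarified: mixed precision and gradient accumulation are the usual ways to shrink memory use on 12 GB cards, though whether they reach the paper's quality is untested here. A hedged sketch of one such training step (not VQFR's actual trainer; `model`, `opt`, `lq`, `gt` are stand-ins):

```python
import torch
import torch.nn as nn

def amp_train_step(model, opt, lq, gt, use_amp=None):
    """One optimizer step under torch.cuda.amp; roughly halves activation
    memory on GPU. Falls back to plain fp32 when CUDA is unavailable."""
    if use_amp is None:
        use_amp = torch.cuda.is_available()
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
    with torch.cuda.amp.autocast(enabled=use_amp):
        loss = nn.functional.l1_loss(model(lq), gt)
    scaler.scale(loss).backward()  # scaled loss keeps fp16 grads from underflowing
    scaler.step(opt)
    scaler.update()
    opt.zero_grad(set_to_none=True)
    return float(loss)
```

Dropping batch_size_per_gpu to 1 plus AMP is often enough; if not, gradient checkpointing on the decoder blocks is the next lever to try.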

VQFR_v2 model not in Google Drive

Thank you for the work and the code. It looks great.

I cannot seem to find the v2 model in the Google Drive indicated in the README.
Are you still uploading it, or is there a different link?
xvdp

How to use this model on video?

Running inference on every frame and then combining the frames back into a video is the common method, but it causes temporal discontinuity.

Is there any method to get a better result on video?
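For reference, per-frame restoration with a light temporal blend is a cheap way to damp flicker; it trades a little sharpness for stability. This is only a sketch, with `restore_fn` standing in for the VQFR inference call:

```python
import numpy as np

def restore_video_frames(frames, restore_fn, smooth=0.3):
    """Yield restored frames; each output is blended with the previous
    output (exponential moving average) to reduce frame-to-frame jitter.
    frames: iterable of HxWxC uint8 arrays; smooth in [0, 1), 0 = off."""
    prev = None
    for frame in frames:
        out = restore_fn(frame).astype(np.float32)
        if prev is not None and smooth > 0:
            out = smooth * prev + (1.0 - smooth) * out
        prev = out
        yield np.clip(out, 0, 255).astype(np.uint8)
```

True temporal consistency needs flow-aligned or video-specific models; an EMA like this only hides small jitter and can ghost on fast motion.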

TypeError: __init__() got an unexpected keyword argument 'in_dim'

Hi, I get an error when I run inference with the v1 model. What should I do?

Traceback (most recent call last):
File "/root/autodl-fs/VQFR/demo.py", line 153, in
main()
File "/root/autodl-fs/VQFR/demo.py", line 102, in main
restorer = VQFR_Demo(model_path=model_path, upscale=args.upscale, arch=arch, bg_upsampler=bg_upsampler)
File "/root/autodl-fs/VQFR/vqfr/demo_util.py", line 36, in init
self.vqfr = VQFRv1(
File "/root/autodl-fs/VQFR/vqfr/archs/vqfrv1_arch.py", line 188, in init
self.quantizer = GeneralizedQuantizer(quantizer_opt)
File "/root/autodl-fs/VQFR/vqfr/archs/vqganv1_arch.py", line 177, in init
self.quantize_dict[level_name] = build_quantizer(level_opt)
File "/root/autodl-fs/VQFR/vqfr/archs/quantizer_arch.py", line 19, in build_quantizer
quantizer = QUANTIZER_REGISTRY.get(quantizer_type)(**opt)
TypeError: init() got an unexpected keyword argument 'in_dim'
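The traceback means the checkpoint's saved `quantizer_opt` carries keys (here `in_dim`) that the installed code's quantizer `__init__` no longer accepts — a code/checkpoint version mismatch. The clean fix is checking out the release matching the checkpoint; below is a hedged workaround sketch that drops unknown keys before instantiation (the `DemoQuantizer` class is a stand-in, not the real registered quantizer):

```python
import inspect

def build_with_known_kwargs(cls, opt):
    """Instantiate cls using only the kwargs its __init__ declares.
    A stopgap for old-config/new-code mismatches; silently dropping
    keys can change behavior, so prefer matching versions."""
    accepted = inspect.signature(cls.__init__).parameters
    return cls(**{k: v for k, v in opt.items() if k in accepted})

class DemoQuantizer:  # stand-in for the registered quantizer class
    def __init__(self, num_code, code_dim):
        self.num_code, self.code_dim = num_code, code_dim

q = build_with_known_kwargs(DemoQuantizer,
                            {"num_code": 1024, "code_dim": 256, "in_dim": 512})
print(q.num_code, q.code_dim)  # the unexpected 'in_dim' key was ignored
```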

about VQFR v2

Is the decoder updated during training?
image
Why are the decoder's parameters frozen?

Loss on codebook training. What are good loss values?

I'm trying to train a new codebook, but the recorded loss values seem suspiciously low. Training is also really slow, by the way, with an ETA of 7-8 days to reach the 800k mark. So here are my two questions:

  1. Were your losses also this low when training?
  2. What made you decide on the 800k?

[train..][epoch: 0, iter: 28,500, lr:(1.000e-04,1.000e-04,)] [eta: 7 days, 21:16:33, time (data): 0.782 (0.003)] l_rec: 7.4355e-03 l_g_percep: 3.9922e-02 l_codebook: 5.1051e-03 l_total_g: 5.2463e-02

Unstable video effects

I have tested the v2.0 model, but it seems unstable for video face restoration. Facial details such as hair are not temporally stable in the generated results, and abnormal jitter appears once the frames are combined back into a video.

4090 not working... CPU mode only?

(vqfr) E:\AI\VQFR>python demo.py -i inputs/whole_imgs -o results -v 2.0 -f 0.1
demo.py:69: UserWarning: The unoptimized RealESRGAN is slow on CPU. We do not use it. If you really want to use it, please modify the corresponding codes.
  warnings.warn('The unoptimized RealESRGAN is slow on CPU. We do not use it. '
e:\Anaconda3\envs\vqfr\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
e:\Anaconda3\envs\vqfr\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`.
  warnings.warn(msg)

Is this a bug, or not?
How can I run this script with CUDA on a 4090? GPU load stays at 0!

image
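One likely cause, stated as an assumption: the RTX 4090 is compute capability 8.9 (`sm_89`), and PyTorch wheels built for CUDA ≤ 11.7 don't include that architecture, so the install can silently fall back to CPU. On a real install, compare `torch.cuda.get_arch_list()` against `torch.cuda.get_device_capability(0)`; the sketch below models that check in pure Python so it runs anywhere (the example arch lists are illustrative):

```python
def arch_supported(arch_list, capability):
    """Return True if a build's arch list covers a device capability.
    arch_list entries look like 'sm_86' or 'compute_80'; a 'compute_XY'
    (PTX) entry also covers newer cards via JIT, modeled here by allowing
    any capability >= the PTX arch."""
    major, minor = capability
    cap = major * 10 + minor
    for arch in arch_list:
        kind, _, num = arch.partition("_")
        num = int(num)
        if kind == "sm" and num == cap:
            return True                      # exact binary for this card
        if kind == "compute" and num <= cap:
            return True                      # PTX can be JIT-compiled forward
    return False

# cu113-era wheels topped out at sm_86, so an RTX 4090 (8.9) is not covered:
print(arch_supported(["sm_37", "sm_50", "sm_70", "sm_86"], (8, 9)))  # False
print(arch_supported(["sm_86", "sm_89", "compute_90"], (8, 9)))      # True
```

Installing a cu118 (or newer) build of torch and torchvision is the usual fix for 40-series cards.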

About the Linux option

Hello! Can training of the VQFR network be changed to run under Windows?

How to obtain the metric_weights

Hi, I would like to know how inception_FFHQ_512.pth and alignment_WFLW_4HG.pth are generated when computing FID and the other metrics.

Hi, some questions about the deformable convolution

The paper says that simply feeding in the input features leads to poor details, because "in the upsampling process from small spatial size to large spatial size, the intermediate features will largely be influenced by input features, and gradually contain fewer high-quality facial details from the VQ codebook."
What exactly is the purpose of generating offsets from the two features and then applying them to F^t? And what does "warp" mean in the paper?

How to train on FFHQ at 1024x1024 resolution?

Hi @guyuchao the current VQFR model is trained on 512x512 FFHQ but I want to train it on 1024x1024. There are some questions I would like to ask before proceeding with training since it is computationally expensive.

  1. Are the following changes to the yml file enough:
    a. out_size: 1024 at line 18 (datasets)
    b. in_dim:1024 at line 88 (network_sr, quantizer_opt)
    c. in_dim: 1024 at line 109 (network_g,quantizer_opt)
    d. out_size: 1024 at line 119 (network_d_global)
    e. out_size: 1024 at line 131 (network_d_main_global)

  2. Can I still use the 512x512 pre-trained models (paths in the yml)?

  3. If there are any other changes that I have missed, please help.

about the function of reestimation

Hi, thank you for open-sourcing your code.
I read the VQGAN part of your code and I do not understand the function of re-estimation.
It also seems that this operation is slow during training.
Can you briefly explain what re-estimation does? Thank you.

add web demo/models/datasets to ECCV 2022 organization on Hugging Face

Hi, congrats for the acceptance at ECCV 2022. We are having an event on Hugging Face for ECCV 2022, where you can submit spaces(web demos), models, and datasets for papers for a chance to win prizes. The hub offers free hosting and would make your work more accessible to the rest of the community. Hugging Hub works similar to github where you can push to user profiles or organization accounts, you can add the models/datasets and spaces to this organization:

https://huggingface.co/ECCV2022

after joining the organization using this link: https://huggingface.co/organizations/ECCV2022/share/kZuMIwRJKOTteDgoueNuPAMUGSnfDjWAGq

let me know if you need any help with the above steps, thanks

Can VQFR2 be placed in google drive too?

The Google Drive link only allows one to download VQFR 1.3. The second link leads to a site which I, unfortunately, can't use, because I can't read Chinese and therefore can't understand the popup windows that appear whenever I click something. Is there a possibility to put the VQFR v2 model in Google Drive too?

EDIT: nevermind, found the model elsewhere. Closing.

Where can I get the trained weights of the V2 model???

Hello! Thanks for your great work and implementation!

How can I get the trained weights of the v2 models (including the v2 codebook and the v2 VQFR model weights)?
The provided Google Drive seems to include only the v1 models.
Could you please provide the trained v2 model weights?

Hi, training issue, "No object named 'VQFRModel' found in 'model' registry".

Thanks for your fantastic work.

When I tried to train VQFR, I met a KeyError: "No object named 'VQFRModel' found in 'model' registry!". I have already downloaded the pre-trained VQ codebook and modified the configuration file, but I am unsure how to fix this error.

Additionally, I was wondering if you could provide me with the parameters for the Generator in the VQFR model. I would greatly appreciate any information or guidance you could provide on this matter.
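On the registry error: in basicsr-style codebases a class only lands in the registry when its defining module is imported (the decorator runs at import time), so this usually means `vqfr.models` wasn't imported — e.g. the script was launched outside the repo root, or an installed older `vqfr` package shadows the source tree. A minimal sketch of the mechanism:

```python
class Registry(dict):
    """Name -> class map, filled by a decorator at import time."""
    def register(self, cls):
        self[cls.__name__] = cls
        return cls

MODEL_REGISTRY = Registry()

@MODEL_REGISTRY.register          # runs only when this module is imported
class VQFRModel:                  # stand-in for vqfr/models/vqfr_model.py
    pass

# If the defining module never runs, the lookup fails with exactly
# "No object named 'VQFRModel' found in 'model' registry".
print("VQFRModel" in MODEL_REGISTRY)
```

A quick check is `python -c "import vqfr.models"` from the repo root: if that import fails or resolves to a site-packages copy, registration never happens.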

About the FFHQ training set

Hello! I have already downloaded the FFHQ dataset (512×512), but how do I obtain the corresponding degraded images? I want to try training your network, but I don't know how the LQ images should be generated. Could you suggest a solution? Thanks!
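For context until the authors answer: VQFR follows the GFPGAN-style synthetic degradation (Gaussian blur, downsampling, noise, JPEG compression, with randomly sampled parameters), generated on the fly from the HQ images inside the training dataset class rather than stored as files. A rough numpy-only sketch of that idea (box blur instead of Gaussian, JPEG step omitted; all parameters illustrative):

```python
import numpy as np

def degrade(img, blur_k=5, scale=4, noise_sigma=10, rng=None):
    """Rough sketch of on-the-fly LQ synthesis: blur -> downsample -> noise.
    The real pipeline also applies JPEG compression and samples blur/scale/
    noise strengths randomly per image. img: HxWx3 uint8 array."""
    rng = rng or np.random.default_rng(0)
    x = img.astype(np.float32)
    # box blur as a stand-in for the Gaussian kernel used in practice
    pad = blur_k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for dy in range(blur_k):
        for dx in range(blur_k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    out /= blur_k ** 2
    out = out[::scale, ::scale]                   # nearest-neighbor downsample
    out += rng.normal(0, noise_sigma, out.shape)  # additive Gaussian noise
    return np.clip(out, 0, 255).astype(np.uint8)
```

So there is nothing to download: pointing the training config at the HQ FFHQ folder should be enough, with degradation parameters coming from the yml.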

Issue with runing demo on V1 model

Hello,

I have been successful in running the demo file with the VQFR V2, however, V1 returns an error:

Traceback (most recent call last):
  File "demo.py", line 161, in <module>
    main()
  File "demo.py", line 103, in main
    restorer = VQFR_Demo(model_path=model_path, upscale=args.upscale, arch=arch, bg_upsampler=bg_upsampler)
  File "/home/victorphilippe/research/VQFR/vqfr/demo_util.py", line 91, in __init__
    'warmup_iters': -1
  File "/home/victorphilippe/research/VQFR/vqfr/archs/vqfrv1_arch.py", line 188, in __init__
    self.quantizer = GeneralizedQuantizer(quantizer_opt)
  File "/home/victorphilippe/research/VQFR/vqfr/archs/vqganv1_arch.py", line 177, in __init__
    self.quantize_dict[level_name] = build_quantizer(level_opt)
  File "/home/victorphilippe/research/VQFR/vqfr/archs/quantizer_arch.py", line 19, in build_quantizer
    quantizer = QUANTIZER_REGISTRY.get(quantizer_type)(**opt)
TypeError: __init__() got an unexpected keyword argument 'in_dim'
I am guessing this is a version issue with a package or something, but I have not been able to figure it out yet. I attached my package list below. Can you help me?
Thanks!

_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
absl-py 1.3.0 pypi_0 pypi
addict 2.4.0 pypi_0 pypi
basicsr 1.4.2 pypi_0 pypi
bzip2 1.0.8 h7f98852_4 conda-forge
ca-certificates 2022.9.24 ha878542_0 conda-forge
cachetools 5.2.0 pypi_0 pypi
certifi 2022.9.24 pypi_0 pypi
charset-normalizer 2.1.1 pypi_0 pypi
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
cycler 0.11.0 pypi_0 pypi
einops 0.6.0 pypi_0 pypi
facexlib 0.2.5 pypi_0 pypi
filelock 3.8.0 pypi_0 pypi
filterpy 1.4.5 pypi_0 pypi
fonttools 4.38.0 pypi_0 pypi
future 0.18.2 pypi_0 pypi
gfpgan 1.3.8 pypi_0 pypi
google-auth 2.14.1 pypi_0 pypi
google-auth-oauthlib 0.4.6 pypi_0 pypi
grpcio 1.51.0 pypi_0 pypi
huggingface-hub 0.11.0 pypi_0 pypi
idna 3.4 pypi_0 pypi
imageio 2.22.4 pypi_0 pypi
importlib-metadata 5.0.0 pypi_0 pypi
install 1.3.5 pypi_0 pypi
joblib 1.2.0 pypi_0 pypi
kiwisolver 1.4.4 pypi_0 pypi
ld_impl_linux-64 2.39 hcc3a1bd_1 conda-forge
libffi 3.4.2 h7f98852_5 conda-forge
libgcc-ng 12.2.0 h65d4601_19 conda-forge
libgomp 12.2.0 h65d4601_19 conda-forge
libnsl 2.0.0 h7f98852_0 conda-forge
libsqlite 3.40.0 h753d276_0 conda-forge
libstdcxx-ng 12.2.0 h46fd767_19 conda-forge
libuuid 2.32.1 h7f98852_1000 conda-forge
libzlib 1.2.13 h166bdaf_4 conda-forge
llvmlite 0.39.1 pypi_0 pypi
lmdb 1.3.0 pypi_0 pypi
markdown 3.4.1 pypi_0 pypi
markupsafe 2.1.1 pypi_0 pypi
matplotlib 3.5.3 pypi_0 pypi
ncurses 6.3 h27087fc_1 conda-forge
networkx 2.6.3 pypi_0 pypi
numba 0.56.4 pypi_0 pypi
numpy 1.20.3 pypi_0 pypi
nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
oauthlib 3.2.2 pypi_0 pypi
opencv-python 4.6.0.66 pypi_0 pypi
openssl 3.0.7 h166bdaf_0 conda-forge
packaging 21.3 pypi_0 pypi
pillow 9.3.0 pypi_0 pypi
pip 22.3.1 pyhd8ed1ab_0 conda-forge
protobuf 3.20.3 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pyparsing 3.0.9 pypi_0 pypi
python 3.7.12 hf930737_100_cpython conda-forge
python-dateutil 2.8.2 pypi_0 pypi
pywavelets 1.3.0 pypi_0 pypi
pyyaml 6.0 pypi_0 pypi
readline 8.1.2 h0f457ee_0 conda-forge
realesrgan 0.3.0 pypi_0 pypi
requests 2.28.1 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
scikit-image 0.19.3 pypi_0 pypi
scikit-learn 1.0.2 pypi_0 pypi
scipy 1.7.3 pypi_0 pypi
setuptools 65.6.0 pypi_0 pypi
six 1.16.0 pypi_0 pypi
sqlite 3.40.0 h4ff8645_0 conda-forge
tb-nightly 2.12.0a20221121 pypi_0 pypi
tensorboard-data-server 0.6.1 pypi_0 pypi
tensorboard-plugin-wit 1.8.1 pypi_0 pypi
threadpoolctl 3.1.0 pypi_0 pypi
tifffile 2021.11.2 pypi_0 pypi
timm 0.6.11 pypi_0 pypi
tk 8.6.12 h27826a3_0 conda-forge
torch 1.13.0 pypi_0 pypi
torchvision 0.14.0 pypi_0 pypi
tqdm 4.64.1 pyhd8ed1ab_0 conda-forge
typing-extensions 4.4.0 pypi_0 pypi
tzdata 2022f h191b570_0 conda-forge
urllib3 1.26.12 pypi_0 pypi
vqfr 2.0.0 dev_0
werkzeug 2.2.2 pypi_0 pypi
wheel 0.38.4 pypi_0 pypi
xz 5.2.6 h166bdaf_0 conda-forge
yapf 0.32.0 pypi_0 pypi
zipp 3.10.0 pypi_0 pypi

questions about the evalutions of VQGAN.

Hi,
Could I ask about the evaluation of the VQGAN results? I generated them with:
python -u vqfr/test.py -opt options/test/VQGAN/test_vqgan_v1.yml

Then I evaluated the metrics with the commands shown in the screenshot below, but the results are lower than the VQFR results reported in the paper. Since the VQGAN is an autoencoder of the HR images, is it expected that the VQGAN results are lower than VQFR's, or did I evaluate something incorrectly?
Thanks in advance!
(results screenshot)

Failed inference for VQFR: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemmStridedBatched...

python demo.py -i ../output/frames -o results -v 2.0 -s 1 -f 0.8
got error message each frame processing
Failed inference for VQFR: CUDA error: CUBLAS_STATUS_NOT_SUPPORTED when calling `cublasSgemmStridedBatched( handle, opa, opb, m, n, k, &alpha, a, lda, stridea, b, ldb, strideb, &beta, c, ldc, stridec, num_batches)`.

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Fri_Dec_17_18:16:03_PST_2021
Cuda compilation tools, release 11.6, V11.6.55
Build cuda_11.6.r11.6/compiler.30794723_0
Thu Nov  9 10:53:56 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05   Driver Version: 525.147.05   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0  On |                  N/A |
| 62%   44C    P8    45W / 350W |    163MiB / 24576MiB |      4%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      3287      G   /usr/lib/xorg/Xorg                107MiB |
|    0   N/A  N/A     19571      G   /usr/bin/gnome-shell               30MiB |
|    0   N/A  N/A     29963      G   /usr/lib/firefox/firefox           22MiB |
+-----------------------------------------------------------------------------+

any idea?
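`cublasSgemmStridedBatched` is just batched fp32 matmul, so the failure should reproduce in a few lines without VQFR; on a 4090 (sm_89) it typically means the installed torch wheel was built for an older CUDA than the card supports — that is an assumption about this setup, not a confirmed diagnosis. A small smoke-test sketch (assumes torch is installed; it also runs on CPU):

```python
import torch

def batched_matmul_smoke_test():
    """Run the op behind cublasSgemmStridedBatched on whatever device is
    available; a CUBLAS error here points at the torch build, not VQFR."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    a = torch.randn(4, 8, 8, device=device)
    b = torch.randn(4, 8, 8, device=device)
    return torch.bmm(a, b).shape

print(batched_matmul_smoke_test())
```

If this raises on the GPU, trying a cu118-or-newer torch wheel (driver 525 supports CUDA 12, so newer wheels are fine) is the usual next step.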

PSNR is only about 20 throughout VQGAN training

Brilliant work! Thanks for sharing!

I tried to train VQGAN v1 and v2 on the FFHQ256 dataset with the original setup, without changing the batch size or other settings in the provided train_vqgan_v1_B16_800K.yml file.

May I ask why the PSNR is still only about 20 after training for more than 140k iterations, and what a normal validation PSNR is when training VQGAN?

I would be grateful if you could kindly give a hint! :)
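For calibration, PSNR in dB maps directly to RMSE, which makes it easier to judge what "only 20" means. A small sketch of the metric with peak 255 (the usual convention for uint8 images; whether it matches the repo's exact crop/channel handling is not verified here):

```python
import numpy as np

def psnr(img1, img2, peak=255.0):
    """PSNR in dB between two same-shaped images (uint8 or float in [0, 255]).
    20 dB corresponds to an RMSE of 25.5 gray levels out of 255 — still
    visibly blurry, which is plausible early in VQGAN training."""
    a = np.asarray(img1, dtype=np.float64)
    b = np.asarray(img2, dtype=np.float64)
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

If your own PSNR computation differs (e.g. peak 1.0 on [0, 1] tensors without rescaling), the reported number shifts by a constant, so it is worth checking the convention before comparing against the paper.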

Hi, training issue

Hello!
I will train using your model, but while reviewing your code I could not find the build_dataset and build_dataloader parts. I want to know what is done in these parts. Could you help me, please?

image2

image1
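For reference while waiting on the authors: in basicsr-derived code, `build_dataset`/`build_dataloader` are thin registry wrappers that end up constructing a torch `Dataset` returning `{'lq', 'gt'}` dicts and a configured `DataLoader`. A stand-in sketch of what they produce (random tensors replace the real face crops; keys and shapes are assumptions based on that family of codebases):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class PairedFaceDataset(Dataset):
    """Stand-in for the registered dataset: yields degraded/ground-truth
    pairs keyed the way basicsr-style training loops expect."""
    def __init__(self, length=8, size=32):
        self.length, self.size = length, size

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        g = torch.Generator().manual_seed(idx)   # deterministic per item
        gt = torch.rand(3, self.size, self.size, generator=g)
        lq = gt[:, ::4, ::4]                     # crude 4x "degradation" placeholder
        return {"lq": lq, "gt": gt}

# build_dataloader is essentially this, with options read from the yml:
loader = DataLoader(PairedFaceDataset(), batch_size=4, shuffle=True,
                    num_workers=0)
batch = next(iter(loader))
print(batch["gt"].shape, batch["lq"].shape)
```

The registry indirection just lets the yml's dataset `type:` string pick which `Dataset` subclass gets built.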
