msracver / deep-exemplar-based-colorization
The source code of "Deep Exemplar-based Colorization".
Home Page: https://arxiv.org/abs/1807.06587
License: MIT License
Hi, from the demo it seems that the input needs to be images with a fixed height and width.
Since I want to explore whether this model can be used as a downstream model for some preprocessed inputs, I wonder if it is possible to pass in only some selected samples from the source image/reference, where the selected samples have variable length. [The dimension of the samples is (length, 3).]
Hi, I tried to run the demo, but I encountered an error:
Traceback (most recent call last):
File "../colorization_subnet/test.py", line 60, in <module>
for iter, data in enumerate(data_loader):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 434, in reraise
raise exception
cv2.error: Caught error in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/src/app/colorization_subnet/lib/TestDataset.py", line 117, in __getitem__
errs, warp_ba, warp_aba = combo5_loader(combo_path, w, h)
File "/src/app/colorization_subnet/lib/TestDataset.py", line 52, in combo5_loader
img_data_ndarray = cv2.cvtColor(img_data_ndarray, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(3.4.8) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
I printed out the intermediate variables and found that:
file_bytes: [89 80 69 ... 62 10 10]
file_bytes.shape: (127386,)
img_data_ndarray: None # img_data_ndarray = cv2.imdecode(file_bytes, cv2.IMREAD_COLOR)
I did some research but did not find a workable solution.
Could you please advise on this? Thank you so much!
Hello, I did not understand where exactly the color transfer happens in your code, I mean in /colorization_subnet/test.py. Could you please explain it to me?
Best regards.
Does it contain the "Color Reference Recommendation" part in this work?
Since the code seems to be Caffe-based, what would be the steps needed to make this run on Linux?
Hello, could you please tell me how to use the files under Deep-Exemplar-based-Colorization-master\demo\example? Thank you! @hmmlillian @cddlyf
How can I generate the flow files using Deep-Image-Analogy when I want to colorize my own pictures?
When I try to run your model it sometimes runs successfully, but then with another image I get this error: "Check failed: error == cudaSuccess (2 vs. 0) out of memory". Do you have any idea about this? Could it depend on the image size?
I have 4 GB GPU
GeForce GTX 950M
Windows10
Since the input is in the range 0 to 255, should the mean be subtracted?
What is the mean value of the data you used to train VGG gray?
Thanks.
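(For what it's worth, the standard Caffe VGG means are public, and a single grayscale mean can be derived from them via the BT.601 luma weights. Whether the authors' gray VGG used these values, or any mean at all, is exactly what the question asks; the sketch below is only an illustration of the usual Caffe-style preprocessing, not the repo's actual values.)

```python
import numpy as np

# Standard Caffe VGG per-channel means (BGR order, 0-255 scale).
# Only an assumption here -- NOT confirmed to be what this repo uses.
VGG_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def gray_mean_from_bgr(mean_bgr=VGG_MEAN_BGR):
    """Project the per-channel mean through the ITU-R BT.601 luma
    weights to get a single grayscale mean."""
    b, g, r = mean_bgr
    return 0.299 * r + 0.587 * g + 0.114 * b

def preprocess_gray(img_uint8, mean):
    """Caffe-style preprocessing: float32, mean-subtracted,
    still on the 0-255 scale (no division by 255)."""
    return img_uint8.astype(np.float32) - mean
```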
We have read the paper and, replicating your development environment, successfully used the network models and precompiled exe files you provided, with good results.
However, due to our limited ability and the demanding environment and tooling requirements of the C++-based Caffe framework, we are unable to modify and train on top of your network. (The current version of the code does not seem to provide the training functions, or the associated losses, parameters, and training procedure.)
Do you plan to release a trainable, Python-based implementation of "Deep Exemplar-based Colorization" in the near future?
We would be very grateful if you could find the time to reply.
Is it possible to compile for Python 3.5? I'm getting errors that relate to python27.lib but I have Caffe compiled for Python 3.5.
I have two questions; I would be glad if you could answer them.
(1) Could you please explain what exactly the flow and combo files are. What do they contain?
(2) I got
Check failed: status == CUDNN_STATUS_SUCCESS (4 vs. 0) CUDNN_STATUS_INTERNAL_ERROR
What might be the reason for this error?
python = 3.6.7
pytorch = 1.0.0 (cuda 8, cudnn 5.1)
torchvision = 0.2.1
Thank you so much.
Hello,
I'm a newbie at deep learning, working with obsolete hardware that is no longer supported by the main frameworks. But because I really want to use your project, and have no money to upgrade my hardware right now, I hacked the Caffe version so I could build it with CUDA 6.5 and sm_13 (yes, sm_13). For that I just patched the math_functions.xxx files to use cuBLAS v1 (not the current v2).
My second problem is memory. Because my hardware is memory-limited, I have to load the two classifier models on the host/CPU rather than the GPU (out-of-memory error), and add some memory copies so I can keep using the current GPU code of the two projects (norm / blend / patchmatch and others). No problems there so far.
The similarity_combo exe builds fine, works fine on the provided example, and produces correct combo files from the provided bidirectional mapping files (the flow files in the demo directory).
My problem is with the deep_image_similarity exe, specifically the deconv (gradient descent) part. Because of my changes, I use Cuda-L-BFGS in CPU mode and modified the custom cost_function to generate the diff on the CPU, not the GPU. But there is a problem: every iteration returns the same values. Everything looks correct to me, but I suspect the cost function does not update correctly at:
m_classifier->net_->BackwardFromTo(m_id1, m_id2 + 1);
in Deconv.cpp
Here is the log in debug mode: http://s000.tinyupload.com/index.php?file_id=87185167689484967287
Any ideas, maybe? Thank you very much for your time.
Could you please tell me the PyTorch, torchvision, and Python versions you used to run the Colorization Sub-net?
Could you please tell me how long it takes to color a photo? It takes 7-8 minutes per photo for me.
Hi, guys! It's a really fascinating job. Right now, I also work on a similar problem. I just wonder if you have any plan to publish this modified dataset? Thanks a lot!
Hi, I have read the combo file in Python with 'rb', but I cannot figure out what it means.
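(The combo layout is not documented here, so I won't guess at it; a generic way to probe an unknown binary file before guessing its format is to dump the leading bytes and try a float32 reinterpretation. A sketch, with a made-up function name:)

```python
import numpy as np

def probe_binary(path, n=16):
    """Print the first n raw bytes of a file and, when the size allows,
    the same prefix reinterpreted as little-endian float32 -- a quick
    sanity check before speculating about an undocumented format."""
    raw = np.fromfile(path, dtype=np.uint8)
    print("size:", raw.size, "bytes")
    print("first bytes:", raw[:n])
    if raw.size >= 4:
        # truncate to a multiple of 4 bytes so the view is valid
        floats = raw[: (raw.size // 4) * 4].view("<f4")
        print("as float32:", floats[: n // 4])
    return raw
```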
Caffe is an outdated dinosaur; nobody uses it anymore.
Also, your Windows build does not work.
I suggest you migrate this and future projects to PyTorch or Keras, which are user-friendly.
I am sorry, but I cannot download the model from Dropbox; the speed is very slow even when I use a VPN.
So could you please provide a download link on Baidu cloud drive?
Thank you very much!
The project is very awesome, and it would be more convenient for testing if there were a Linux version of the code. I know the Colorization Subnet is based on PyTorch, which is easily used on Linux, so I hope I can get a Linux version of the Similarity SubNet if possible. Thank you very much.
Hi, thanks for your great work!
I ran the project code from https://github.com/ncianeo/Deep-Exemplar-based-Colorization, but this error occurred: FileNotFoundError: [Errno 2] No such file or directory: 'example/input/combo_new/in1_ref1.combo'
what is in1_ref1.combo?
For some reasons I cannot use Docker. Recently I used the https://github.com/ncianeo/Deep-Exemplar-based-Colorization/tree/linux-docker-cv-caffe-build repo and successfully ran it under Ubuntu 18.04 without Docker.
However, I had to change the source code of your supplied Caffe; otherwise there is a segmentation fault in both deepanalogy and similarity_net. The reason for the deepanalogy segfault is the force_backward param in its prototxt, and the reason for similarity_net is the following code in caffe/src/caffe/net.cpp, function AppendParam():
has_params_lr_.push_back(param_spec->has_lr_mult());
has_params_decay_.push_back(param_spec->has_decay_mult());
params_lr_.push_back(param_spec->lr_mult());
params_weight_decay_.push_back(param_spec->decay_mult());
Do you know the reason? Or could you please share your environment?
It seems my 2080 Ti driver is not compatible with CUDA 8 anymore (mine is CUDA 10.2). I tried to build Caffe with 10.2, but it kept producing this error:
Severity Code Description Project File Line Suppression State
Error MSB3721 The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2;\bin\nvcc.exe" -gencode=arch=compute_35,code="sm_35,compute_35" -gencode=arch=compute_52,code="sm_52,compute_52" --use-local-env -ccbin "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\bin\HostX86\x64" -x cu -I"C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\NugetPackages\lmdb-v120-clean.0.9.14.0\build\native....\lib\native\include" -I"C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\NugetPackages\LevelDB-vc120.1.2.0.0\build\native../..//build/native/include/" -I"C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\NugetPackages\protobuf-v120.2.6.1\build\native../..//build/native/include/" -I"C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\NugetPackages\glog.0.3.3.0\build\native../..//build/native/include/" -I"C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\NugetPackages\gflags.2.1.2.1\build\native../..///build/native/include/" -I"C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\NugetPackages\boost.1.59.0.0\build\native....\lib\native\include\" -I"C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\NugetPackages\hdf5-v120-complete.1.8.15.2\build\native....\lib\native\include" -I"C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\NugetPackages\OpenBLAS.0.2.14.1\build\native....\lib\native\include" -I"C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\NugetPackages\OpenCV.2.4.10\build\native../../build/native/include/" -I"C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\similarity_subnet\windows\libcaffe\....\src\" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2" -I\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2" -I\include -G -lineinfo --keep-dir 
C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\similarity_subnet\windows..\Build\Int\libcaffe\x64\Debug -maxrregcount=0 --machine 64 --compile -cudart static -Xcudafe "--diag_suppress=exception_spec_override_incompat --diag_suppress=useless_using_declaration --diag_suppress=field_without_dll_interface" -D_SCL_SECURE_NO_WARNINGS -DGFLAGS_DLL_DECL= -g -DHAS_LMDB -DHAS_HDF5 -DHAS_OPENBLAS -DHAS_OPENCV -D_DEBUG -D_SCL_SECURE_NO_WARNINGS -DUSE_OPENCV -DUSE_LEVELDB -DUSE_LMDB -DUSE_CUDNN -D_UNICODE -DUNICODE -Xcompiler "/EHsc /W1 /nologo /Od /FdC:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\similarity_subnet\windows..\Build\Int\libcaffe\x64\Debug\libcaffe.pdb /FS /Zi /RTC1 /MDd " -o C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\similarity_subnet\windows..\Build\Int\libcaffe\x64\Debug\absval_layer.cu.obj "C:\Tools\AI\VideoProduction\Deep-Exemplar-based-Colorization-master\similarity_subnet\src\caffe\layers\absval_layer.cu"" exited with code 1. libcaffe C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations\CUDA 10.2.targets 764
Is there any way to fix this issue?
Thank you!
Hello, I could not understand how you used Deep Image Analogy for colorization. I examined your colorization_subnet/test.py code and did not find anything related to Image Analogy. Could you please explain the correlation between colorization and Deep Image Analogy?
Thanks,
Best Regards.
I ran the PyTorch part and got this error:
FileNotFoundError: [Errno 2] No such file or directory: 'data/combo_new/1_1r.combo'
I would appreciate it if you could help me with this matter. Thanks!
Hi! It's an impressive work.
I am trying to understand the implemented method, in particular how the theta parameter is used in the Colorization Sub-Network. Neither Formula 3 nor Figure 4 in the paper gives a detailed description.
Hello, could you release the training code and the training details? Thank you!
Could you please share the PSNR calculation code?
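(While waiting for the authors' script, the textbook PSNR definition is straightforward; note the authors may compute it differently, e.g. per channel or on the Lab planes, so this is only a generic sketch:)

```python
import numpy as np

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two same-shape uint8 images:
    10 * log10(MAX^2 / MSE), in decibels."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)
    mse = np.mean((a - b) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```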
Hello,
I would like to ask about the test data. We do not have the class information that was used to match references for the training data, so we cannot look up a reference by class. Do we need to compare each image against all the data in ImageNet and then pick the closest one?
Thank you for your work; it helps a lot. I have run Deep-Image-Analogy on Linux successfully, but I met some problems when following your advice:
@TruePhone The Similarity SubNet (and Deep Image Analogy if you use it to build correspondence) is implemented in C++. It should be easy to transplant to Linux. Deep Image Analogy already has a Linux version (https://github.com/msracver/Deep-Image-Analogy/tree/linux) and our Similarity SubNet is actually built based on it. You may copy the source code from Deep-Exemplar-based-Colorization/similarity_subnet/windows/similarity_combo/ to msracver/Deep-Image-Analogy/tree/linux and slightly modify it on Linux.
But when I compile, some header files such as windows.h are missing. Sorry, I am not familiar with C++ on Windows. Can you give me some advice on how to build similarity_combo on Linux?
I am trying to train the gray VGG-19 from the paper, but using PyTorch. Could you please tell me some details about training it?
1. Which dataset did you use to train the VGG-19?
2. How did you convert the dataset from RGB to Lab?
3. The bidirectional mapping function only needs to be applied after the VGG-19 net, am I right?
4. Which loss function/optimizer did you use when training VGG-19?
Sorry to bother you...