
vita-group / enlightengan


[IEEE TIP] "EnlightenGAN: Deep Light Enhancement without Paired Supervision" by Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, Zhangyang Wang

License: Other

Languages: Python 99.16%, TeX 0.51%, Shell 0.33%
Topics: unsupervised-learning, generative-adversarial-networks, gan, pytorch, low-light-enhance, low-light

enlightengan's People

Contributors

yifanjiang19


enlightengan's Issues

KeyError: 'unexpected key "module.conv1_1.weight" in state_dict'

Traceback (most recent call last):
File "predict.py", line 18, in
model = create_model(opt)
File "/home/Documents//enlighten/EnlightenGAN/models/models.py", line 36, in create_model
model.initialize(opt)
File "/home/Documents/enlighten/EnlightenGAN/models/single_model.py", line 72, in initialize
self.load_network(self.netG_A, 'G_A', which_epoch)
File "/home/Documents/enlighten/EnlightenGAN/models/base_model.py", line 53, in load_network
network.load_state_dict(torch.load(save_path))
File "/home/anaconda3/envs/enlighten/lib/python3.5/site-packages/torch/nn/modules/module.py", line 522, in load_state_dict
.format(name))
KeyError: 'unexpected key "module.conv1_1.weight" in state_dict'

Python: 3.5.6
PyTorch: 0.3.1
CPU

I'm facing this error when I run the test command in scripts/script.py. I would appreciate any help!

Thank you in advance!
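For reference, one common workaround is to strip the DataParallel prefix before loading, since the released checkpoint appears to have been saved from a model wrapped in nn.DataParallel while the network being restored here is not wrapped. A minimal sketch under that assumption (the helper name is made up, not the repository's code):

import torch

def load_dataparallel_checkpoint(network, save_path):
    # Checkpoints saved from an nn.DataParallel-wrapped model prefix every
    # key with "module."; strip that prefix so the keys match a plain network.
    state_dict = torch.load(save_path, map_location="cpu")
    state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}
    network.load_state_dict(state_dict)
    return network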

Assertion error: ../final_dataset\trainA is not a valid directory

Can anyone help? The whole traceback is:
train.py:10: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
return yaml.load(stream)
CustomDatasetDataLoader
dataset [UnalignedDataset] was created
Traceback (most recent call last):
File "train.py", line 14, in
data_loader = CreateDataLoader(opt)
File "D:\EnlightenGAN-master\data\data_loader.py", line 6, in CreateDataLoader
data_loader.initialize(opt)
File "D:\EnlightenGAN-master\data\custom_dataset_data_loader.py", line 39, in initialize
self.dataset = CreateDataset(opt)
File "D:\EnlightenGAN-master\data\custom_dataset_data_loader.py", line 29, in CreateDataset
dataset.initialize(opt)
File "D:\EnlightenGAN-master\data\unaligned_dataset.py", line 65, in initialize
self.A_imgs, self.A_paths = store_dataset(self.dir_A)
File "D:\EnlightenGAN-master\data\image_folder.py", line 39, in store_dataset
assert os.path.isdir(dir), '%s is not a valid directory' % dir
AssertionError: ../final_dataset\trainA is not a valid directory
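A likely cause is simply that the folders the loader expects do not exist next to the repository; the data root is resolved relative to the working directory, and the backslash in the message is just Windows path joining. A quick check under that assumption, using the paths from the traceback:

import os

data_root = os.path.join("..", "final_dataset")
for split in ("trainA", "trainB"):  # the two halves of the unpaired training set
    path = os.path.join(data_root, split)
    print(path, "exists" if os.path.isdir(path) else "MISSING")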

Effect of SemanticLoss

Hi,

Thanks for your great work!

I found that there is another type of loss (SemanticLoss), offered as an alternative to the perceptual loss, that aims to preserve the semantics before/after the transformation.

I wonder how these two types of loss affect the final performance?

pretrained model problems

Could you share the configuration and experiment parameters used for your pretrained model? I cannot use the pretrained model to test photos directly. Could you also share the supplementary materials mentioned in your paper?

training error

I always get these kinds of errors when I train. How do I resolve them?

File "/home/jef.silang/.conda/envs/jepwey/lib/python3.5/random.py", line 205, in randrange
raise ValueError("empty range for randrange() (%d,%d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (0,-60, -60)

and this one

ConnectionRefusedError: [Errno 111] Connection refused
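For context, the randrange failure usually means an input image is smaller than the configured patch size, so the range for the random crop offset becomes empty; the ConnectionRefusedError is separate and only means no visdom server is listening on port 8097. A minimal sketch of a crop that guards against the first case, assuming PIL images and a square patch (illustrative only, not the repository's loader):

import random
from PIL import Image

def safe_random_crop(img, patch_size):
    # Upscale images that are smaller than the patch so the random offset
    # range (0, size - patch_size) is never empty.
    w, h = img.size
    if min(w, h) < patch_size:
        scale = patch_size / float(min(w, h))
        img = img.resize((int(w * scale) + 1, int(h * scale) + 1), Image.BICUBIC)
        w, h = img.size
    x = random.randint(0, w - patch_size)
    y = random.randint(0, h - patch_size)
    return img.crop((x, y, x + patch_size, y + patch_size))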

Memory increased when using a larger training dataset

Hi TAMU-VITA,
Thank you for the awesome work.
I have tried to train the model on a larger dataset, but RAM usage kept increasing while the data was being loaded, and as a consequence the process got killed.

Is there any place in the dataloader class that we can modify to avoid the above problem?

Thank you.
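One direction, assuming the growth comes from the dataset class preloading every decoded image into memory at initialization, is to keep only the file paths and decode lazily in __getitem__. A minimal sketch of that pattern, not the repository's actual class:

import os
from PIL import Image
from torch.utils.data import Dataset

class LazyImageFolder(Dataset):
    # Store paths only; decode each image on demand so RAM stays flat
    # regardless of dataset size.
    def __init__(self, root, transform=None):
        exts = (".png", ".jpg", ".jpeg", ".bmp")
        self.paths = sorted(os.path.join(root, f) for f in os.listdir(root)
                            if f.lower().endswith(exts))
        self.transform = transform

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, index):
        img = Image.open(self.paths[index]).convert("RGB")
        return self.transform(img) if self.transform is not None else img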

No module named lib.nn

I have already downloaded the latest version, but I still get an error about lib.nn missing when testing. It's strange, because the project does have the lib module and the nn module. Has anyone run the code successfully? And how can this problem be fixed?

About your NIQE result

The test dataset you provided includes the NPE dataset and its 3 extension sets (data11, data12, data13). Are the NIQE scores for NPE (Table 1, column 4) computed only on the NPE dataset, or on NPE together with its 3 extensions as a whole?

dataset preparation problem

I ran into a problem when I decompressed your training data; the linked files may be corrupted. Could you share the data through another method or link? Thank you.

Update to newest torch version?

Hi,

Thanks for the great work.

One short question: why did you implement this repo with an old version of torch (0.3) instead of a newer torch (>1.0)? Is there any advantage to using the old version?

BTW, do you plan to upgrade the code to support the newest version of torch?

about training

When I was training EnlightenGAN, I found that the G_A loss keeps oscillating around 1.5. How can I tell whether the network has converged?

Failed to download datasets

Dear researchers, unfortunately I failed to download the datasets for training and prediction. I have obtained the other materials, just not the datasets.
Could you please provide another way to download the datasets? I am looking forward to your reply.

Question about the RGAN loss

Hello, the paper mentions that the loss function follows the relativistic GAN (RGAN) formulation, but I could not find it in the code. Could you point out where it is? Thanks.
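For context, a minimal sketch of the relativistic average GAN (RaGAN) losses the paper describes, written with raw (pre-sigmoid) discriminator outputs; this is the standard formulation, not necessarily the exact code path in this repository:

import torch
import torch.nn.functional as F

def ragan_d_loss(real_logits, fake_logits):
    # Real samples should score higher than the *average* fake, and vice versa.
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return 0.5 * (F.binary_cross_entropy_with_logits(real_rel, torch.ones_like(real_rel))
                  + F.binary_cross_entropy_with_logits(fake_rel, torch.zeros_like(fake_rel)))

def ragan_g_loss(real_logits, fake_logits):
    # The generator tries to reverse that relative ranking.
    real_rel = real_logits - fake_logits.mean()
    fake_rel = fake_logits - real_logits.mean()
    return 0.5 * (F.binary_cross_entropy_with_logits(real_rel, torch.zeros_like(real_rel))
                  + F.binary_cross_entropy_with_logits(fake_rel, torch.ones_like(fake_rel)))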

Output in testing mode

Hi,
Thanks for sharing this code.
I was trying to run the algorithm in testing mode using the pre-trained model you provided.
I prepared the two folders as described and tried with 10 images. I got this output:

process image... ['../test_dataset/testA/Img_Shutter_03-claheC.png']
process image... ['../test_dataset/testA/Img_Shutter_06-claheC.png']
process image... ['../test_dataset/testA/Img_Shutter_09-claheC.png']
process image... ['../test_dataset/testA/Img_Shutter_12-claheC.png']
process image... ['../test_dataset/testA/Img_Shutter_15-claheC.png']
process image... ['../test_dataset/testA/Img_Shutter_18-claheC.png']
process image... ['../test_dataset/testA/Img_Shutter_21-claheC.png']
process image... ['../test_dataset/testA/Img_Shutter_24-claheC.png']
process image... ['../test_dataset/testA/Img_Shutter_27-claheC.png']
process image... ['../test_dataset/testA/Img_Shutter_30-claheC.png']

However, I can't find the processed images. Am I doing something wrong?

Unable to reproduce EnlightenGAN-N

Hi,

I followed the steps in your paper to reproduce EnlightenGAN-N, but to no avail. From my understanding, I just need to swap the training data with BDD-100k low-light images (mean pixel intensity below 45). Other parameters, such as the 200 epochs and the normal-light images, remain the same.

Please let me know what I am missing here.

Thanks
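For reference, a minimal sketch of the selection step described in the issue, i.e. keeping BDD-100k frames whose mean pixel intensity falls below 45; how exactly the authors computed the intensity and sampled the final subset is not stated here, so this is only an approximation:

import os
import numpy as np
from PIL import Image

def select_low_light(image_dir, threshold=45.0):
    # Mean intensity is computed on the grayscale image here; the paper may
    # average over the RGB channels instead.
    selected = []
    for name in sorted(os.listdir(image_dir)):
        if not name.lower().endswith((".jpg", ".jpeg", ".png")):
            continue
        gray = np.asarray(Image.open(os.path.join(image_dir, name)).convert("L"),
                          dtype=np.float32)
        if gray.mean() < threshold:
            selected.append(os.path.join(image_dir, name))
    return selected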

Settings for Training

Hi! Thank you for sharing EnlightenGAN. I just want to ask some questions, since I'm fairly new to this. How can I find the best settings for training? I used the training dataset you provided and the same settings (100 epochs at a learning rate of 0.0001, followed by another 100 epochs with the learning rate linearly decayed to zero), but the trained model seems to differ a lot from the pretrained one. What factors should I consider here, and which parameters should I change to get a result close to, or better than, the pretrained model?

training error (output size is too small)

Hi, I always get this error when I train, even though I haven't changed any parameters yet (only the patch size and batch size). How can I resolve this?

model [SingleGANModel] was created
Setting up a new session...
create web directory ./checkpoints\enlightening\web...
C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\functional.py:1890: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\functional.py:1961: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
"See the documentation of nn.Upsample for details.".format(mode))
Traceback (most recent call last):
File "train.py", line 31, in
model.optimize_parameters(epoch)
File "D:\Low-Light_Enhancement\EnlightenGAN-master\models\single_model.py", line 398, in optimize_parameters
self.backward_G(epoch)
File "D:\Low-Light_Enhancement\EnlightenGAN-master\models\single_model.py", line 339, in backward_G
self.fake_patch, self.input_patch) * self.opt.vgg
File "D:\Low-Light_Enhancement\EnlightenGAN-master\models\networks.py", line 1028, in compute_vgg_loss
img_fea = vgg(img_vgg, self.opt)
File "C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\modules\module.py", line 477, in call
result = self.forward(*input, **kwargs)
File "C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\parallel\data_parallel.py", line 121, in forward
return self.module(*inputs[0], **kwargs[0])
File "C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\modules\module.py", line 477, in call
result = self.forward(*input, **kwargs)
File "D:\Low-Light_Enhancement\EnlightenGAN-master\models\networks.py", line 963, in forward
h = F.max_pool2d(h, kernel_size=2, stride=2)
File "C:\Users\DCMC\Anaconda3\envs\EnlightenGAN\lib\site-packages\torch\nn\functional.py", line 396, in max_pool2d
ret = torch._C._nn.max_pool2d_with_indices(input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: Given input size: (128x1x1). Calculated output size: (128x0x0). Output size is too small at c:\programdata\miniconda3\conda-bld\pytorch_1533086652614\work\aten\src\thcunn\generic/SpatialDilatedMaxPooling.cu:69
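For context, this RuntimeError usually means the patch handed to the VGG feature loss is too small: VGG-16 halves the spatial resolution at every pooling stage, so a patch smaller than 32 pixels per side collapses to an empty feature map before the last pooling stage. A quick back-of-the-envelope check, assuming the patch size option you changed is the crop fed into the VGG loss:

def min_patch_for_vgg(num_pool_stages=5):
    # Each 2x2 max-pool halves H and W, so the input must be at least
    # 2 ** num_pool_stages pixels per side to keep the deepest map non-empty.
    return 2 ** num_pool_stages

print(min_patch_for_vgg())  # 32 -> choose a patch size of at least this many pixels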

The artifact problem

I followed your instructions, but the results have some artifacts, especially on clean images.

supplementary materials

When I read your paper, I keep seeing references to the supplementary materials. Could you please share them? Thank you!

ZeroDivisionError when running script.py with --predict

Running !python scripts/script.py --predict returns the following error:

Traceback (most recent call last):
File "predict.py", line 25, in
for i, data in enumerate(dataset):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 281, in next
return self._process_next_batch(batch)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 301, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
ZeroDivisionError: Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 55, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 55, in
samples = collate_fn([dataset[i] for i in batch_indices])
File "/content/gdrive/My Drive/Colab Notebooks/EnlightenGAN/data/unaligned_dataset.py", line 82, in getitem
B_img = self.B_imgs[index % self.B_size]
ZeroDivisionError: integer division or modulo by zero
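The modulo-by-zero means B_size, the number of images found in the testB folder, is zero; the dataset indexes B images with index % self.B_size even at prediction time, so testB must contain at least one image (a single placeholder is enough). A quick check under the assumption that the test layout is ../test_dataset/testA and ../test_dataset/testB:

import os

for split in ("testA", "testB"):
    path = os.path.join("..", "test_dataset", split)
    files = [f for f in os.listdir(path)
             if f.lower().endswith((".png", ".jpg", ".jpeg"))] if os.path.isdir(path) else []
    print(split, len(files), "images")  # testB must not be empty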

Requesting Sampled BDD-100k training images

Hi, could I have the 950 sampled BDD-100k training images?

If possible, could you also share how you picked the 940 night-time images? There are more than 10k images with a mean pixel intensity below 45, so knowing how you selected the 940 night-time images would be super helpful. Thanks!!

Running inference on a single image

Hello --
So my use case is this:
Take in a single input image, run it through EnlightenGAN, and get a single enhanced image as output.
A few questions:

  1. Is 200_net_G_A.pth only for aligned/unaligned datasets with an input_A and an input_B?
  2. In case I'd like to work with a single image, could I work with these weights, or would I need to train my model from scratch?
  3. I'm assuming the test-mode generator for a single image would be unet_256, or is there a different generator I should use?

I'd be grateful for the help.

Question about the attention map

The paper says "We take the illumination channel I of the input RGB image". How should the illumination of an RGB image be understood? From the code it looks like it is simply the grayscale version of the RGB image. Is that correct?

hope it supports pytorch 1.3

It would be better if PyTorch 1.3 were supported; otherwise we have to install PyTorch 0.3.1, since without it we cannot load the VGG weights.

ModuleNotFoundError: No module named 'torch.utils.serialization'

Hello, I'm trying to run train.py and it always hits a runtime error. I think it's a torch version problem; which torch version do you use?
Also, when I replace 'load_lua' with 'torchfile.load' in networks.py, I get the following problem. Can you show me how to handle it?
Thank you

Traceback (most recent call last):
File "train.py", line 31, in
model.optimize_parameters(epoch)
File "/opt/Github/EnlightenGAN/models/cycle_gan_model.py", line 252, in optimize_parameters
self.backward_G(epoch)
File "/opt/Github/EnlightenGAN/models/cycle_gan_model.py", line 199, in backward_G
pred_fake = self.netD_A.forward(self.fake_B)
File "/opt/Github/EnlightenGAN/models/networks.py", line 497, in forward
return self.model(input)
File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in call
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 320, in forward
self.padding, self.dilation, self.groups)
TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not tuple
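Regarding the load_lua replacement specifically: torchfile deserializes the .t7 file into plain Python objects holding numpy arrays, not torch tensors or nn modules, so its output has to be converted explicitly before it can serve as VGG weights. A heavily hedged sketch of that conversion, assuming the .t7 stores a Sequential whose convolution layers expose .weight/.bias arrays through a .modules list (verify against your file; this is not the repository's code, and the conv2d-tuple error in the traceback above may have a separate cause):

import torch
import torchfile

def copy_t7_vgg_weights(t7_path, vgg_model):
    # Pair up weight-bearing layers from the Torch7 dump with the PyTorch
    # VGG's conv layers and copy the numpy arrays over as tensors.
    t7 = torchfile.load(t7_path)
    src_convs = [m for m in t7.modules if getattr(m, "weight", None) is not None]
    dst_convs = [m for m in vgg_model.modules() if isinstance(m, torch.nn.Conv2d)]
    for src, dst in zip(src_convs, dst_convs):
        dst.weight.data.copy_(torch.from_numpy(src.weight))
        dst.bias.data.copy_(torch.from_numpy(src.bias))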

Problems while testing image with CPU

Hello, I'm trying to test the pre-trained model on the CPU.
First, I put the two files vgg16.weight and vgg16.t7 in the "model" directory, then I put 200_net_G_A.pth in checkpoints/enlightening and created test_dataset/testA and test_dataset/testB.
To test on the CPU, I added the option "--gpu_ids -1" to the "python predict.py" command in scripts/script.py.
Then I found that GPU loading is still attempted, at:
if len(gpu_ids) >= 0:
netG.cuda(device=gpu_ids[0])
netG = torch.nn.DataParallel(netG, gpu_ids)
netG.apply(weights_init)
return netG
I changed the >= into > (because with gpu_ids -1, len(gpu_ids) is 0), and then got the unexpected
KeyError: 'unexpected key "module.conv1_1.weight" in state_dict'
I then went to the corresponding closed issue for help, but ran into the same problem, which was left unsolved; I don't understand why that issue was closed.
In short, I want to test the model on the CPU and have obviously made some mistakes, so I hope you can help me. Thanks in advance!
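For reference, a minimal sketch of the guard as described above (based on the snippet quoted from networks.py), so the CUDA calls are skipped when no GPU ids are given; the remaining KeyError is the DataParallel "module." prefix discussed in the first issue, and torch.load(save_path, map_location="cpu") is also needed so the weights deserialize onto the CPU:

import torch

def move_netG(netG, gpu_ids):
    # Only move to the GPU / wrap in DataParallel when GPU ids were actually
    # given; with "--gpu_ids -1" the list is empty and the model stays on CPU.
    if len(gpu_ids) > 0:
        netG.cuda(device=gpu_ids[0])
        netG = torch.nn.DataParallel(netG, gpu_ids)
    return netG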

assert(torch.cuda.is_available()) AssertionError

Hi, can you help with the error I'm getting? The whole traceback is:
train.py:10: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
return yaml.load(stream)
CustomDatasetDataLoader
dataset [UnalignedDataset] was created
#training images = 20
single
Traceback (most recent call last):
File "train.py", line 19, in
model = create_model(opt)
File "D:\EnlightenGAN-master\models\models.py", line 36, in create_model
model.initialize(opt)
File "D:\EnlightenGAN-master\models\single_model.py", line 57, in initialize
opt.ngf, opt.which_model_netG, opt.norm, not opt.no_dropout, self.gpu_ids, skip=skip, opt=opt)
File "D:\EnlightenGAN-master\models\networks.py", line 86, in define_G
assert(torch.cuda.is_available())
AssertionError

CycleGAN image enhancement code replication

Hello! I reproduced your single model and it works great. I also wanted to reproduce the CycleGAN image enhancement, so I set the model to cycle_gan in script.py, but running it produced some errors.
I haven't studied the code carefully yet and just want to get it running first, so may I ask how you set the train and predict parameters in script.py? Looking forward to your reply. Thank you very much.

Training problems

Thank you for your excellent work. I have some questions:

  1. Is the pre-trained model open source?
  2. The model's block order is conv -> LeakyReLU -> BN; why not conv -> BN -> LeakyReLU?

No handlers could be found for logger "visdom"

  • When I use the command python3 scripts/script.py --predict

  • this happens:
    Total number of parameters: 8636675


model [SingleGANModel] was created
No handlers could be found for logger "visdom"
Exception in user code:

Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/visdom/init.py", line 711, in _send
data=json.dumps(msg),
File "/usr/local/lib/python2.7/dist-packages/visdom/init.py", line 677, in _handle_post
r = self.session.post(url, data=data)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 578, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='localhost', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f31cc8c66d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
0
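The stack trace is only about the visualizer: no visdom server is listening on localhost:8097, so every POST the client sends is refused. Prediction itself typically still completes; to silence the errors, start the server in another terminal before running the script (it listens on port 8097 by default):

python -m visdom.server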

Pretrained Discriminator for Continued Training

Thank you for sharing this great project!
I've been working for the past week on experimenting with your codebase and trying out object-specific enhancement on the Exclusively Dark dataset. I do have some decent results, but they do not look qualitatively the same as the ones in the arXiv paper for the domain-adapted BDD dataset.

I'd like to try continued training on the EnlightenGAN but can't seem to find the Discriminator weights. Would you mind pointing me in the right direction or sharing these weights if they aren't public?

Grayscale Attention Map Calculation

Is this the code used to generate the grayscale attention map?

r,g,b = input_img[0]+1, input_img[1]+1, input_img[2]+1
A_gray = 1. - (0.299*r+0.587*g+0.114*b)/2.

Where was this formula derived from? It looks like some weighted, luminance-based method.

Thanks
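For what it's worth, an annotated reading of that snippet, assuming input_img is a 3xHxW tensor normalized to [-1, 1]; the 0.299/0.587/0.114 weights are the standard ITU-R BT.601 luma coefficients, so it is indeed a luminance-based map (an interpretation, not an authoritative answer):

# Channels are in [-1, 1]; adding 1 maps them to [0, 2].
r, g, b = input_img[0] + 1, input_img[1] + 1, input_img[2] + 1
# BT.601 grayscale (luma) weights; dividing by 2 brings luminance back to [0, 1].
luminance = (0.299 * r + 0.587 * g + 0.114 * b) / 2.
# Invert so that darker pixels receive larger attention values.
A_gray = 1. - luminance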
