

Neural Photo Editor

A simple interface for editing natural photos with generative neural networks.

(Screenshots of the editor GUI.)

This repository contains code for the paper "Neural Photo Editing with Introspective Adversarial Networks," and the Associated Video.

Installation

To run the Neural Photo Editor, you will need:

  • Python 2.7. Earlier Python 2 releases may work, but there are known incompatibilities with Python 3.
  • Theano, development version.
  • lasagne, development version.
  • I highly recommend cuDNN as speed is key, but it is not a dependency.
  • numpy, scipy, PIL, Tkinter, and tkColorChooser; your Python distribution likely includes these already.

Running the NPE

By default, the NPE runs on IAN_simple. This is a slimmed-down version of the IAN without MDC or RGB-Beta blocks, which runs without lag on a laptop GPU with ~1GB of memory (a GT730M).

If you're on a Windows machine, you will want to create a .theanorc file and, at minimum, set floatX = float32.

If you're on a Linux machine, you can simply prepend THEANO_FLAGS=floatX=float32 to the command-line call.
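
For reference, a minimal .theanorc along these lines should work; the device line is an assumption for the old CUDA backend, so drop it or set it to cpu if that does not match your setup:

[global]
floatX = float32
device = gpu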

If you don't have cuDNN, simply change line 56 of the NPE.py file from dnn=True to dnn=False. Note that I presently only have the non-cuDNN option working for IAN_simple.
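
The relevant line constructs the model and, judging from the tracebacks reported in the issues below, looks roughly like this (the exact line number may differ between versions):

# In NPE.py: build the IAN without the explicit cuDNN code path
model = IAN(config_path='IAN_simple.py', dnn=False)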

Then, run the command:

python NPE.py

If you wish to use a different model, simply edit the config_path line in NPE.py.

You can make use of any model with an inference mechanism (VAE or ALI-based GAN).

Commands

  • You can paint on the image directly by picking a color and brushing over it, or paint in the latent space canvas (the red and blue tiles below the image).
  • The long horizontal slider controls the magnitude of the latent brush, and the smaller horizontal slider controls the size of both the latent and the main image brush.
  • You can select different entries from the subset of the celebA validation set (included in this repository as an .npz) by typing in a number from 0-999 in the bottom left box and hitting "infer."
  • Use the reset button to return to the ground truth image.
  • Press "Update" to update the ground-truth image and corresponding reconstruction with the current image. Use "Infer" to return to an original ground truth image from the dataset.
  • Use the sample button to generate a random latent vector and corresponding image.
  • Use the scroll wheel to lighten or darken an image patch (equivalent to using a pure white or pure black paintbrush). Note that this automatically returns you to sample mode, and may require hitting "infer" rather than "reset" to get back to photo editing.

Training an IAN on celebA

You will need Fuel along with the 64x64 version of celebA. See here for instructions on downloading and preparing it.

If you wish to train a model, the IAN.py file contains the model configuration, and the train_IAN.py file contains the training code, which can be run like this:

python train_IAN.py IAN.py

By default, this code will save (and overwrite!) the weights to a .npz file with the same name as the config file (i.e. IAN.py -> IAN.npz), and will output a .jsonl log of the training with metrics recorded after every chunk.

Use the --resume=True flag to resume training a model; it will automatically pick up from the most recent epoch.
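
Since the log is one JSON object per line, a minimal sketch for inspecting it might look like the following; the log filename and the metric keys are assumptions, so check the actual output of train_IAN.py:

import json

# Hypothetical log name; train_IAN.py names its outputs after the config file.
with open('IAN_log.jsonl') as f:
    records = [json.loads(line) for line in f if line.strip()]

# Print the metrics recorded for the first few chunks.
for record in records[:5]:
    print(record)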

Sampling the IAN

You can generate a sample and reconstruction+interpolation grid with:

python sample_IAN.py IAN.py

Note that you will need matplotlib to do so.

Known Issues/Bugs

My MADE layer currently only accepts hidden unit sizes that are equal to the size of the latent vector, which will present itself as a BAD_PARAM error.

Since the MADE really only acts as an autoregressive randomizer, I'm not too worried about this, but it does bear looking into.

I messed around with the keywords for get_model; you'll need to deal with these if you wish to run any model other than IAN_simple through the editor.

Everything is presently just dumped into a single, unorganized directory. I'll be adding folders and cleaning things up soon.

Notes

Remainder of the IAN experiments (including SVHN) coming soon.

I've integrated the plat interface, which makes the NPE itself framework-independent, so you should be able to run it with Blocks, TensorFlow, PyTorch, PyCaffe, or whatever else by modifying the IAN class provided in models.py.
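
As a rough illustration, a replacement back end only needs to expose whatever the editor calls on the model; the class and method names below are hypothetical, not the actual interface in models.py, so check that file for the real signatures:

class MyBackendModel(object):
    """Hypothetical drop-in model for the NPE; mirror the real IAN class in models.py."""

    def __init__(self, weights_path):
        # Load your framework's weights here (TensorFlow, PyTorch, etc.).
        self.weights_path = weights_path

    def encode(self, image):
        # Map an HxWx3 image array to a latent vector.
        raise NotImplementedError

    def decode(self, latents):
        # Map a latent vector back to an HxWx3 image array.
        raise NotImplementedError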

Acknowledgments

This code contains Lasagne layers and other goodies adapted from a number of places.

Contributors

ajbrock, dribnet, michaelrgb


Issues

IDEA: Option to save changed image as the new ground-truth image

I was trying to make multiple independent changes to an image and I came to the following conclusion:

The masking technique means the output image is based off the original ground-truth image. After an aesthetically pleasing change is made, such as growing the length of the hair, a new ground-truth image must be saved and the latent space recalculated before a different change is made, such as changing the hair color.

By adding the following PR I was able to get these results:
#8

Original image: (screenshot)

First operation, increasing hair length: (screenshot)

Second operation, after saving the new ground-truth image, with a new hair color: (screenshot)

Without saving the new ground-truth image I was unable to get these results: the algorithm attempted to remove my longer black hair when I tried to change it to yellow, because yellow matches the skin color.

"fuel-download celeba" returns Dropbox HTML

I presume @vdumoulin's folder sharing was turned off somehow (or reached a limit). I'd suggest making a placeholder project release on GitHub and putting the files there; release assets are hosted on S3 as well, but without the limits, and public access is expected.

Possibly incorrect implementation of Batch Renorm

I came across your implementation of batch re-normalization in the BatchReNormDNNLayer class, and I think there might be an error that could be affecting the model's performance.

My understanding of batch re-norm is that it applies the standard BN normalization first, then applies the r/d correction, and then finally applies the gamma/beta scaling and bias. Something along the lines of this:

normed_x = (x - batch_mean) / batch_std    # standard BN
normed_x = normed_x * r + d                # The batch renorm correction
normed_x = normed_x * gamma + beta         # final scale and bias

However, this line is applying the r/d correction after the scaling and centering with gamma and beta.
https://github.com/ajbrock/Neural-Photo-Editor/blob/master/layers.py#L128

It probably works anyway, based on the good results you seem to have gotten. I just thought I'd bring it to your attention.
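
For reference, here is a NumPy sketch of the ordering described above, covering only the training-mode forward pass; the r_max/d_max clipping and the use of running statistics follow the batch renorm paper rather than this repository's code:

import numpy as np

def batch_renorm_forward(x, gamma, beta, running_mean, running_std,
                         r_max=3.0, d_max=5.0, eps=1e-5):
    # Per-feature batch statistics for x of shape [batch, features].
    batch_mean = x.mean(axis=0)
    batch_std = x.std(axis=0) + eps

    # Batch renorm correction factors (treated as constants, i.e. no gradient).
    r = np.clip(batch_std / running_std, 1.0 / r_max, r_max)
    d = np.clip((batch_mean - running_mean) / running_std, -d_max, d_max)

    normed = (x - batch_mean) / batch_std   # standard BN normalization
    normed = normed * r + d                 # the batch renorm correction
    return normed * gamma + beta            # final scale and bias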

Trained faces are all blurry and seem not to have learned

I implemented a version in PyTorch with the same architecture illustrated in your paper and code, though without orthogonal regularization and MDC. However, my generated faces at 300k iterations are still very blurry, like the sample below. Do you have any idea why this might happen? Thanks very much!

(reconstruction sample at step 300,000)

Numpy fails to interpret IAN_Simple.npz?

Hi, I finally managed to install CUDA and the dev versions of Lasagne and Theano. Now when I try to launch the NPE I get this:

gray@gray-linux:~/Neural-Photo-Editor$ python NPE.py
Using gpu device 0: GeForce GTX 970 (CNMeM is disabled, cuDNN 5105)
Loading weights
Traceback (most recent call last):
  File "NPE.py", line 53, in <module>
    model = IAN(config_path = 'IAN_simple.py', dnn = True)
  File "/home/gray/Neural-Photo-Editor/API.py", line 30, in __init__
    GANcheckpoints.load_weights(self.weights_fname,params)
  File "/home/gray/Neural-Photo-Editor/GANcheckpoints.py", line 39, in load_weights
    param_dict = np.load(fname)
  File "/home/gray/miniconda2/lib/python2.7/site-packages/numpy/lib/npyio.py", line 416, in load
    "Failed to interpret file %s as a pickle" % repr(file))
IOError: Failed to interpret file 'IAN_simple.npz' as a pickle

Am I doing something wrong? As far as I can understand from search results, np.load is used for binary .npz files, but all I can see in IAN_Simple.npz are three strings of text:

version https://git-lfs.github.com/spec/v1
oid sha256:82e5fd3ff68b2c9095935c9db269e086e2dd27704b629853e1f03473e7059bd7
size 205207893

UPD: Whoops, my bad. For some reason git clone didn't download the raw .npz files. I downloaded them manually and now everything works.
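
A quick way to catch this failure mode programmatically is to check for the Git LFS pointer header before handing the file to numpy; the filename is just the one from this repository, and "git lfs pull" is the usual way to fetch the real objects:

import numpy as np

fname = 'IAN_simple.npz'
with open(fname, 'rb') as f:
    header = f.read(24)

if header.startswith(b'version https://git-lfs'):
    raise IOError('%s is a Git LFS pointer, not the real weights; '
                  'run "git lfs pull" or download the .npz manually.' % fname)

weights = np.load(fname)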

CPU support

Hello

Thanks for releasing the source code. I've got a problem while running the script on a laptop without a CUDA GPU:

Traceback (most recent call last):
  File "NPE.py", line 57, in <module>
    config_module = imp.load_source('config',config_path)
  File "IAN_simple.py", line 10, in <module>
    import lasagne.layers.dnn
  File "/usr/local/lib/python2.7/dist-packages/lasagne/layers/dnn.py", line 14, in <module>
    "requires GPU support -- see http://lasagne.readthedocs.org/en/"
ImportError: requires GPU support -- see http://lasagne.readthedocs.org/en/latest/user/installation.html#gpu-support

How can I enforce CPU mode?
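
As noted in the README, only IAN_simple currently works without cuDNN (via dnn=False); on top of that, Theano can be forced onto the CPU before it is imported, roughly like this (the import path for IAN is an assumption based on the tracebacks in these issues, and whether lasagne.layers.dnn is still imported depends on the config file):

import os
# Must be set before theano is imported anywhere in the process.
os.environ['THEANO_FLAGS'] = 'device=cpu,floatX=float32'

from API import IAN  # assumed location of the IAN class
model = IAN(config_path='IAN_simple.py', dnn=False)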

IOError: Failed to interpret file 'IAN_simple.npz' as a pickle

Traceback (most recent call last):
  File ".\NPE.py", line 18, in <module>
    model = IAN(config_path = 'IAN_simple.py', dnn = False)
  File "C:\Users\user\Desktop\Neural-Photo-Editor-master\API.py", line 30, in __init__
    GANcheckpoints.load_weights(self.weights_fname,params)
  File "C:\Users\user\Desktop\Neural-Photo-Editor-master\GANcheckpoints.py", line 39, in load_weights
    param_dict = np.load(fname)
  File "C:\Python27\lib\site-packages\numpy\lib\npyio.py", line 429, in load
    "Failed to interpret file %s as a pickle" % repr(file))
IOError: Failed to interpret file 'IAN_simple.npz' as a pickle

Any ideas?

No module named path

"Neural-Photo-Editor-master/GANcheckpoints.py", line 7, in
from path import Path
ImportError: No module named path
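
This import most likely comes from the third-party path.py package (which provides from path import Path), so installing it should resolve the error, assuming GANcheckpoints.py expects that package rather than a local module:

pip install path.py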

Failed to interpret file IAN_simple.npz as a pickle

After changing NPE.py to model = IAN(config_path = 'IAN_simple.py', dnn = False), I get:

$ python NPE.py
Loading weights
Traceback (most recent call last):
  File "NPE.py", line 53, in <module>
    model = IAN(config_path = 'IAN_simple.py', dnn = False)
  File "/Users/skurilyak/Documents/dev/testing/Neural-Photo-Editor/API.py", line 30, in __init__
    GANcheckpoints.load_weights(self.weights_fname,params)
  File "/Users/skurilyak/Documents/dev/testing/Neural-Photo-Editor/GANcheckpoints.py", line 39, in load_weights
    param_dict = np.load(fname)
  File "/usr/local/lib/python2.7/site-packages/numpy/lib/npyio.py", line 416, in load
    "Failed to interpret file %s as a pickle" % repr(file))
IOError: Failed to interpret file 'IAN_simple.npz' as a pickle

Any ideas?

Easier option to not use cuDNN

This project looks very interesting, but I don't have access to an NVIDIA card. The readme says "You'll need to uncomment my explicit DNN calls if you wish to not use it.", but when I look at the code there are a lot of references to DNN, so this doesn't look very trivial.

Is it possible to create a custom version (maybe a branch?) that works without having cuDNN installed?

Theano optimization failed

Hi,

I am trying to reproduce the code on a V100 instance, and I ran into the following issues when running python NPE.py.

Do you have any recommendations on how we can reproduce your experimental setup in the form of a Dockerfile?

/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file "/home/ubuntu/.config/matplotlib/matplotlibrc", line #2
  (fname, cnt))
/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file "/home/ubuntu/.config/matplotlib/matplotlibrc", line #3
  (fname, cnt))
Loading weights
Compiling Theano Functions
ERROR (theano.gof.opt): Optimization failure due to: LocalOptGroup(local_abstractconv_gemm,local_abstractconv_gradweight_gemm,local_abstractconv_gradinputs_gemm,local_abstractconv3d_gemm,local_abstractconv3d_gradweight_gemm,local_abstractconv3d_gradinputs_gemm,local_conv2d_cpu,local_conv2d_gradweight_cpu,local_conv2d_gradinputs_cpu)
ERROR (theano.gof.opt): node: AbstractConv2d{convdim=2, border_mode=(2, 2), subsample=(2, 2), filter_flip=False, imshp=(None, 3, 64, 64), kshp=(128, 3, 5, 5), filter_dilation=(1, 1), num_groups=1, unshared=False}(X, enc_conv1.W)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 2074, in process_node
    remove=remove)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 569, in replace_all_validate_remove
    chk = fgraph.replace_all_validate(replacements, reason)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/toolbox.py", line 518, in replace_all_validate
    fgraph.replace(r, new_r, reason=reason, verbose=False)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/fg.py", line 486, in replace
    ". The type of the replacement must be the same.", old, new)
BadOptimization: BadOptimization Error
  Variable: id 139714617198864 CorrMM{((2, 2), (2, 2)), (2, 2), (1, 1), 1 False}.0
  Op CorrMM{((2, 2), (2, 2)), (2, 2), (1, 1), 1 False}(Elemwise{Cast{float64}}.0, enc_conv1.W)
  Value Type: <type 'NoneType'>
  Old Value:  None
  New Value:  None
  Reason:  LocalOptGroup(local_abstractconv_gemm,local_abstractconv_gradweight_gemm,local_abstractconv_gradinputs_gemm,local_abstractconv3d_gemm,local_abstractconv3d_gradweight_gemm,local_abstractconv3d_gradinputs_gemm,local_conv2d_cpu,local_conv2d_gradweight_cpu,local_conv2d_gradinputs_cpu). The type of the replacement must be the same.
  Old Graph:
  AbstractConv2d{convdim=2, border_mode=(2, 2), subsample=(2, 2), filter_flip=False, imshp=(None, 3, 64, 64), kshp=(128, 3, 5, 5), filter_dilation=(1, 1), num_groups=1, unshared=False} [id A] <TensorType(float32, 4D)> ''
   |X [id B] <TensorType(float32, 4D)>
   |enc_conv1.W [id C] <TensorType(float64, 4D)>

  New Graph:
  CorrMM{((2, 2), (2, 2)), (2, 2), (1, 1), 1 False} [id D] <TensorType(float64, 4D)> ''
   |Elemwise{Cast{float64}} [id E] <TensorType(float64, 4D)> ''
   | |X [id B] <TensorType(float32, 4D)>
   |enc_conv1.W [id C] <TensorType(float64, 4D)>


Hint: relax the tolerance by setting tensor.cmp_sloppy=1
  or even tensor.cmp_sloppy=2 for less-strict comparison


(The same BadOptimization error is reported three more times for additional AbstractConv2d nodes; the duplicated output is omitted here.)

ERROR (theano.gof.opt): Optimization failure due to: local_abstractconv_check
ERROR (theano.gof.opt): node: AbstractConv2d{convdim=2, border_mode=(2, 2), subsample=(2, 2), filter_flip=False, imshp=(None, 3, 64, 64), kshp=(128, 3, 5, 5), filter_dilation=(1, 1), num_groups=1, unshared=False}(X, enc_conv1.W)
ERROR (theano.gof.opt): TRACEBACK:
ERROR (theano.gof.opt): Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 2034, in process_node
    replacements = lopt.transform(node)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/tensor/nnet/opt.py", line 500, in local_abstractconv_check
    node.op.__class__.__name__)
LocalMetaOptimizerSkipAssertionError: AbstractConv2d Theano optimization failed: there is no implementation available supporting the requested options. Did you exclude both "conv_dnn" and "conv_gemm" from the optimizer? If on GPU, is cuDNN available and does the GPU support it? If on CPU, do you have a BLAS library installed Theano can link against? On the CPU we do not support float16.

Traceback (most recent call last):
  File "NPE.py", line 19, in <module>
    model = IAN(config_path = 'IAN_simple.py', dnn = False)
  File "/home/ubuntu/Neural-Photo-Editor/API.py", line 51, in __init__
    self.Z_hat_fn = theano.function([self.X],self.Z_hat)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/compile/function.py", line 317, in function
    output_keys=output_keys)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/compile/pfunc.py", line 486, in pfunc
    output_keys=output_keys)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/compile/function_module.py", line 1839, in orig_function
    name=name)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/compile/function_module.py", line 1519, in __init__
    optimizer_profile = optimizer(fgraph)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 108, in __call__
    return self.optimize(fgraph)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 97, in optimize
    ret = self.apply(fgraph, *args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 251, in apply
    sub_prof = optimizer.optimize(fgraph)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 97, in optimize
    ret = self.apply(fgraph, *args, **kwargs)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 2143, in apply
    nb += self.process_node(fgraph, node)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 2039, in process_node
    lopt, node)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 1933, in warn_inplace
    return NavigatorOptimizer.warn(exc, nav, repl_pairs, local_opt, node)
  File "/home/ubuntu/anaconda3/envs/python2/lib/python2.7/site-packages/theano/gof/opt.py", line 1919, in warn
    raise exc
theano.gof.opt.LocalMetaOptimizerSkipAssertionError: AbstractConv2d Theano optimization failed: there is no implementation available supporting the requested options. Did you exclude both "conv_dnn" and "conv_gemm" from the optimizer? If on GPU, is cuDNN available and does the GPU support it? If on CPU, do you have a BLAS library installed Theano can link against? On the CPU we do not support float16.
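
One hint in the log above is that enc_conv1.W is float64 while the input X is float32; that mixed-precision graph appears to be why every candidate convolution implementation is rejected. Making sure everything runs in float32, starting with the flag advice from the README, would be the first thing to check:

THEANO_FLAGS=floatX=float32 python NPE.py

If the convolutions still fail after that, the final error message suggests verifying that Theano can link against a BLAS library.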

No module named CAcheckpoints

Traceback (most recent call last):
  File "train_IAN_simple.py", line 112, in <module>
    import CAcheckpoints
ImportError: No module named CAcheckpoints

Which library versions are used?

I got

Traceback (most recent call last):
  File "NPE.py", line 57, in <module>
    config_module = imp.load_source('config',config_path)
  File "IAN_simple.py", line 12, in <module>
    from lasagne.layers import batch_norm as BN
ImportError: cannot import name batch_norm

when trying to run this with stable lasagne.

That appears to be fixed after running

sudo pip2 install --upgrade https://github.com/Theano/Theano/archive/master.zip
sudo pip2 install --upgrade https://github.com/Lasagne/Lasagne/archive/master.zip

But now I get

Compiling Theano Functions
Traceback (most recent call last):
  File "NPE.py", line 75, in <module>
    Xh = lasagne.layers.get_output(model['l_out'],{model['l_latents']:ZZ},deterministic=True)
  File "/usr/lib/python2.7/site-packages/lasagne/layers/helper.py", line 191, in get_output
    all_outputs[layer] = layer.get_output_for(layer_inputs, **kwargs)
  File "/usr/lib/python2.7/site-packages/lasagne/layers/conv.py", line 330, in get_output_for
    conved = self.convolve(input, **kwargs)
  File "/home/tehdog/data/tmp/nobackup/pkg/Neural-Photo-Editor/layers.py", line 277, in convolve
    img = gpu_contiguous(input)
  File "/usr/lib/python2.7/site-packages/theano/gof/op.py", line 602, in __call__
    node = self.make_node(*inputs, **kwargs)
  File "/usr/lib/python2.7/site-packages/theano/sandbox/cuda/basic_ops.py", line 3963, in make_node
    input = as_cuda_ndarray_variable(input)
  File "/usr/lib/python2.7/site-packages/theano/sandbox/cuda/basic_ops.py", line 46, in as_cuda_ndarray_variable
    return gpu_from_host(tensor_x)
  File "/usr/lib/python2.7/site-packages/theano/gof/op.py", line 602, in __call__
    node = self.make_node(*inputs, **kwargs)
  File "/usr/lib/python2.7/site-packages/theano/sandbox/cuda/basic_ops.py", line 139, in make_node
    dtype=x.dtype)()])
  File "/usr/lib/python2.7/site-packages/theano/sandbox/cuda/type.py", line 95, in __init__
    (self.__class__.__name__, dtype, name))
TypeError: CudaNdarrayType only supports dtype float32 for now. Tried using dtype float64 for variable None

Can you elaborate on which exact versions of the libraries you were using?
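
For what it's worth, the CudaNdarrayType error above means some variable in the graph ended up as float64, which the old CUDA backend cannot handle; the first thing to check is the floatX=float32 flag the README mentions:

THEANO_FLAGS=floatX=float32 python NPE.py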
