
deconvfaces's People

Contributors

radarhere, somewacko, welch

deconvfaces's Issues

How to build a Holodeck with this

I have been contemplating what it takes to produce 3D objects/environments from interactive verbal commands (somewhat similar to the Star Trek Holodeck scenario; see my draft blog here: http://www.terraai.org/tag/holodeck/index.html), and this technology seems to be a reasonable starting point for that.

How so? Because if you can create good interpolations between images across lighting and poses, then the next step is to interpolate parts (like: Computer, make the nose more like Nicole Kidman's), then to interpolate spatial constraints (like: Computer, put the chair on top of the desk), followed by conversion to full 3D for deployment to a VR device, and there you have a simple Holodeck!

The reasonable next step is probably dealing with parts. Does anybody have thoughts on how to get there? I figure the masking capability mentioned in the original paper would be vital here.

Problem while generating: No data provided for "pose"

I used the Cropped Yale database to train a model. Since it's just for testing, I specified 2 epochs, 4 deconvolution layers, and a batch size of 6.

But while trying to generate faces using single.yaml or random.yaml, I got the following error:

File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 67, in _standardize_input_data
'for each key in: ' + str(names))
ValueError: No data provided for "pose". Need data for each key in: ['identity', 'pose', 'lighting']

It's clear that Keras's training.py expects specific data for each model input. Has anyone encountered a similar issue?
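
For anyone hitting the same thing, a minimal sketch of what the generator ultimately needs: every named model input must receive an array. The path and all the widths below are assumptions for illustration; only the input names come from the error message itself.

import numpy as np
from keras.models import load_model

model = load_model('output/FaceGen.model.h5')  # hypothetical path to your trained model

num_images = 4
batch = {
    'identity': np.zeros((num_images, 28)),  # one-hot identity vectors; width 28 is an assumption
    'pose': np.zeros((num_images, 2)),       # pose parameters; width 2 is an assumption
    'lighting': np.zeros((num_images, 4)),   # lighting parameters; width 4 is an assumption
}
# predict_on_batch raises the ValueError above if any of the three keys is
# missing, so the generation YAML has to supply values for pose as well.
images = model.predict_on_batch(batch)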

Problem generating interpolation from Yale Faces B

I managed to find a copy of the Extended Yale Faces Database B and trained a model using a single identity (just for testing) without problems. However, when I tried to generate some interpolated images I got the following error:

$ python faces.py generate -m output/FaceGen.YaleFaces.model.d5.adam.h5 -f params/interpolate.yaml -o generated
Using Theano backend.
Loading model...
Generating images...
----->parser.use_yale=True
----->num_images=121
----->inputs["identity"].shape=(121, 28)
----->inputs["identity"]=[[ 0. 1. 0. ..., 0. 0. 0. ]
[ 0. 1. 0. ..., 0. 0. 0. ]
[ 0. 1. 0. ..., 0. 0. 0. ]
...,
[ 0. 0.59143859 0. ..., 0. 0. 0. ]
[ 0. 0.58439443 0. ..., 0. 0. 0. ]
[ 0. 0.57735027 0. ..., 0. 0. 0. ]]
0%| | 0/2 [00:00<?, ?it/s]
Traceback (most recent call last):
File "faces.py", line 129, in
cmd()
File "faces.py", line 96, in generate
batch_size=args.batch_size, extension=args.extension)
File "/mnt/mybk/tests/deconvfaces/faces/generate.py", line 680, in generate_from_yaml
gen = model.predict_on_batch(batch)
File "/home/kaihuchen01/anaconda2/envs/ph3/lib/python3.5/site-packages/keras/engine/training.py", line 1268, in predict_on_batch
self.internal_input_shapes)
File "/home/kaihuchen01/anaconda2/envs/ph3/lib/python3.5/site-packages/keras/engine/training.py", line 108, in standardize_input_data
str(array.shape))
Exception: Error when checking : expected identity to have shape (None, 1) but got array with shape (64, 28)
$

The lines prefixed with '----->' are what I printed out for debugging purposes. Any suggestions on how I can get past this? Thanks in advance!
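
In case it helps others: the mismatch suggests the model was trained with a single identity (so its identity input has width 1), while interpolate.yaml expanded to 28-dimensional one-hot vectors. A quick way to check what the saved model actually expects, assuming a standard multi-input Keras model:

from keras.models import load_model

model = load_model('output/FaceGen.YaleFaces.model.d5.adam.h5')
# For a multi-input model these line up: one name and one shape per input.
print(model.input_names)  # e.g. ['identity', 'pose', 'lighting']
print(model.input_shape)  # list of (batch, width) tuples to compare with the debug output above

If the identity input really has width 1, the arrays built from the YAML (width 28 above) would need to be regenerated for a single identity, or the model retrained on all identities.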

Trained Model Request

Would you possibly be willing to make a trained model publicly available? Would be super grateful!

Use a more open dataset

The RaFD data used for this project is not open data, but there is a lot of potential for this to be used in interesting ways outside of academic research.

It would be really great if we could find other face data with a more permissive license that could be used to generate faces. That way non-researchers would be able to tinker with this, and we would be able to share the weights publicly.

I don't have much time at the moment to do more work on this, but because this project has gotten a lot of interest lately, I want to open this as an issue in case anyone's interested in pursuing it.

keras, tensorflow, theano versions?

Hi,

I'm tripping over a few 'module has no attribute' errors that may simply be version incompatibilities (I'm working from a fresh install of everything). Can you include version info for your major dependencies in the README?

Thanks
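
In the meantime, a quick way to capture what's installed so it can be reported here; this assumes nothing beyond the standard __version__ attribute:

# Report installed versions of the major dependencies.
for name in ('keras', 'theano', 'tensorflow', 'numpy'):
    try:
        module = __import__(name)
        print(name, getattr(module, '__version__', 'unknown'))
    except ImportError:
        print(name, 'not installed')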

Problem training on YaleFaces

First I want to thank you for publishing your code!

Since RaFD is not open and your code seems to support the Yale Face database (http://vismod.media.mit.edu/vismod/classes/mas622-00/datasets/), I chose to give the Yale Face DB a try instead. However, the program immediately exited with the following error:

(ph3) kaihuchen01@instance-6:~/mybk/tests/deconvfaces$ python faces.py train YALEfaces/yalefaces
Using Theano backend.
Found 0 instances with 0 identities
(5, 4)
height: 5 width: 4
Built model with:
Deconv layers: 5
Output shape: (320, 256, 3)
Loading data...
0it [00:00, ?it/s]
Training...
Epoch 1/100
/home/kaihuchen01/anaconda2/envs/ph3/lib/python3.5/site-packages/keras/callbacks.py:284: RuntimeWarning: Can save best model only with loss available, skipping.
'skipping.' % (self.monitor), RuntimeWarning)
/home/kaihuchen01/anaconda2/envs/ph3/lib/python3.5/site-packages/keras/callbacks.py:358: RuntimeWarning: Early stopping requires loss available!
(self.monitor), RuntimeWarning)
Traceback (most recent call last):
File "faces.py", line 127, in
cmd()
File "faces.py", line 65, in train
verbose = True,
File "/mnt/mybk/tests/deconvfaces/faces/train.py", line 182, in train_model
callbacks=callbacks, shuffle=True, verbose=1)
File "/home/kaihuchen01/anaconda2/envs/ph3/lib/python3.5/site-packages/keras/engine/training.py", line 1106, in fit
callback_metrics=callback_metrics)
File "/home/kaihuchen01/anaconda2/envs/ph3/lib/python3.5/site-packages/keras/engine/training.py", line 844, in _fit_loop
callbacks.on_epoch_end(epoch, epoch_logs)
File "/home/kaihuchen01/anaconda2/envs/ph3/lib/python3.5/site-packages/keras/callbacks.py", line 40, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/home/kaihuchen01/anaconda2/envs/ph3/lib/python3.5/site-packages/keras/callbacks.py", line 360, in on_epoch_end
if self.monitor_op(current, self.best):
TypeError: unorderable types: NoneType() < float()

Here is the content of the directory YALEfaces/yalefaces (excerpted):

subject01.centerlight subject06.centerlight subject11.centerlight
subject01.glasses subject06.glasses subject11.glasses
subject01.happy subject06.happy subject11.happy
subject01.leftlight subject06.leftlight subject11.leftlight
subject01.noglasses subject06.noglasses subject11.noglasses
subject01.normal subject06.normal subject11.normal
subject01.rightlight subject06.rightlight subject11.rightlight
subject01.sad subject06.sad subject11.sad
(more)

Any advice on what I should do to rectify this problem?
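
One guess while waiting for an answer: "Found 0 instances with 0 identities" suggests the loader never matched these files, possibly because they have no image extension (the Yale files are image data named subject01.centerlight and so on). A hedged workaround sketch, assuming the loader keys off file extensions; the target extension is an assumption:

import os
from PIL import Image

src = 'YALEfaces/yalefaces'
for name in os.listdir(src):
    path = os.path.join(src, name)
    if not os.path.isfile(path):
        continue
    try:
        img = Image.open(path)  # Pillow detects the format from the file contents
    except IOError:
        continue  # skip non-image files such as the database's readme
    img.convert('L').save(path + '.png')  # write a copy with an extension the loader can see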

Deconvolution vs Upsampling

Hi,

Awesome results, congrats!

I noticed that in the blog post you talk about deconvolution layers, but in the code you use the UpSampling2D layer instead of Deconvolution2D. Why this choice?

Thanks!
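
Not the author, but for reference: the usual trade-off is that Deconvolution2D (transposed convolution) learns its own upsampling kernel, but in Keras 1 it required an explicit output_shape and is often blamed for checkerboard artifacts, whereas UpSampling2D followed by an ordinary convolution is simpler. A minimal sketch of the upsample-then-convolve pattern, assuming the Keras 1 functional API and 'tf' dim ordering; the sizes are placeholders:

from keras.layers import Input, Convolution2D, UpSampling2D
from keras.models import Model

x = Input(shape=(8, 8, 64))       # placeholder feature-map size
h = UpSampling2D(size=(2, 2))(x)  # fixed nearest-neighbour resize, no learned weights
h = Convolution2D(32, 3, 3, border_mode='same', activation='relu')(h)  # learned filtering
model = Model(input=x, output=h)
print(model.output_shape)         # (None, 16, 16, 32)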

Runtime Error Warning

MacBook-Pro:deconvfaces-master CJL$ python faces.py train grey_scaled/
Using TensorFlow backend.
Found 0 instances with 0 identities
(5, 4)
height: 5 width: 4
Built model with:
Deconv layers: 5
Output shape: (320, 256, 3)
Loading data...
0it [00:00, ?it/s]
Training...
Epoch 1/100
/Users/CJL/anaconda/lib/python3.5/site-packages/keras/callbacks.py:286: RuntimeWarning: Can save best model only with loss available, skipping.
'skipping.' % (self.monitor), RuntimeWarning)
/Users/CJL/anaconda/lib/python3.5/site-packages/keras/callbacks.py:370: RuntimeWarning: Early stopping requires loss available!
(self.monitor), RuntimeWarning)
Traceback (most recent call last):
File "faces.py", line 127, in <module>
cmd()
File "faces.py", line 65, in train
verbose = True,
File "/Users/CJL/Downloads/deconvfaces-master/faces/train.py", line 182, in train_model
callbacks=callbacks, shuffle=True, verbose=1)
File "/Users/CJL/anaconda/lib/python3.5/site-packages/keras/engine/training.py", line 1124, in fit
callback_metrics=callback_metrics)
File "/Users/CJL/anaconda/lib/python3.5/site-packages/keras/engine/training.py", line 862, in _fit_loop
callbacks.on_epoch_end(epoch, epoch_logs)
File "/Users/CJL/anaconda/lib/python3.5/site-packages/keras/callbacks.py", line 42, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "/Users/CJL/anaconda/lib/python3.5/site-packages/keras/callbacks.py", line 372, in on_epoch_end
if self.monitor_op(current - self.min_delta, self.best):
TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'

I am using a folder of PNG files and getting this error during training. What kinds of image files does it expect? How many images did you use to train?

Can you share your dataset?

I would like to know what photos (how many, and at what resolution) you used to train your NN. Could you share them?
