Comments (10)
Here is how to load a model:
import joblib
from itertools import izip  # Python 2; use the builtin zip on Python 3

# gen_params and discrim_params must already exist, e.g. by running the
# model-definition part of the training script before loading
gen_params_values = joblib.load(model_path + '_gen_params.jl')
for p, v in izip(gen_params, gen_params_values):
    p.set_value(v)
discrim_params_values = joblib.load(model_path + '_discrim_params.jl')
for p, v in izip(discrim_params, discrim_params_values):
    p.set_value(v)
Thanks @udibr, that worked great! So I stand corrected: I tried to replicate this sampling near the origin, but did not see any difference. In fact, I strangely see no difference regardless of the range. Some examples below; keep in mind the RNG is run from the same state in each case. Here is the default range:
random_zmb = floatX(np_rng.uniform(-1., 1., size=(nvis, nz)))
here is a scaled range:
random_zmb = floatX(np_rng.uniform(-0.01, 0.01, size=(nvis, nz)))
and here is a scaled and translated range:
random_zmb = floatX(np_rng.uniform(0.25, 0.26, size=(nvis, nz)))
and just in case you think my code is having no effect, here is a constant range:
random_zmb = floatX(np_rng.uniform(0.0, 0.0, size=(nvis, nz)))
This seems to imply that the absolute magnitude and position of the vectors don't matter at all; what matters is their direction relative to their mean. This is surprising to me. I'll definitely have to rethink what a slice of the latent space should be, and it might have implications for what constitutes a random walk.
Here is one last one - this is a linear path from the (-1,...,-1) corner of the hypercube to the (1,...,1) corner (here nz=100):
random_zmb = floatX(np_rng.uniform(-1., 1., size=(nvis, nz)))  # fully overwritten below
for i in range(nvis):
    frac = -1.0 + 2.0 * i / (nvis - 1.0)  # ramp from -1 up to 1
    for j in range(nz):
        random_zmb[i][j] = frac
I like this one because it shows:
- there are only two images for the two vector directions (top half is negative, bottom positive)
- the image degrades as the magnitude of the vector approaches 0, which was causing my original issue
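Incidentally, the same ramp can be built without the explicit loops (a one-liner sketch, assuming numpy is available as np as in the training scripts):
random_zmb = floatX(np.tile(np.linspace(-1., 1., nvis)[:, None], (1, nz)))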
this reminds me of Hinton's famous document clustering plot (fig 4C in http://www.cs.toronto.edu/~hinton/science.pdf ).
Each cluster is a ray from the center, and the distance from the center can be interpreted as how confident the model is about the cluster.
A "random walk" would then be a circular path around the center.
It won't be a circular path in this case, because Z is sampled from a uniform dist, not a spherical dist like a normal. It also explains edge effects, which I think I can see happening with uniform.
@dribnet The generator/sample code currently uses the minibatch to calculate statistics for batchnorm. This is why changing the scale and mean of the sampled Zs has no effect: batchnorm shifts and scales everything back to zero mean and unit variance. The current batchnorm code supports using cached/computed inference values - modifying the generator to pass u (mean) and s (variance) into each call of batchnorm should fix the scale issues.
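A self-contained numpy illustration of the difference (my sketch, not the repo's Theano code; the real fix threads u/s through the repo's batchnorm calls):
import numpy as np

def batchnorm(x, g, b, u=None, s=None, e=1e-8):
    # default: normalize with this batch's own statistics;
    # if a cached mean u and variance s are given, use those instead
    if u is None:
        u = x.mean(axis=0)
    if s is None:
        s = x.var(axis=0)
    return g * (x - u) / np.sqrt(s + e) + b

g, b = np.ones(16), np.zeros(16)
ref = np.random.uniform(-1., 1., size=(512, 16))   # reference batch of Zs
u, s = ref.mean(axis=0), ref.var(axis=0)           # cached inference stats
small = 0.01 * np.random.uniform(-1., 1., size=(8, 16))
out_batch = batchnorm(small, g, b)         # rescaled back to unit variance
out_cached = batchnorm(small, g, b, u, s)  # stays tiny: the scale survives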
The hack that also works is to pass a large batch of "random" samples in alongside the points you want to sample - this was done for some of the figures in the paper. You should keep the ratio of visualization samples to random samples low, to avoid significantly changing the batchnorm statistics.
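In code, the hack might look something like this (_gen and my_z are illustrative names, assuming the usual compiled generator function):
n_pad = 128  # enough random Zs to dominate the batch statistics
pad_z = floatX(np_rng.uniform(-1., 1., size=(n_pad, nz)))
zmb = np.vstack([my_z, pad_z])   # points of interest first, padding after
samples = _gen(zmb)[:len(my_z)]  # keep only the samples we care about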
This may or may not explain the deadzone. I'm out of town right now, but when I'm back late this weekend I can take a look on my end - retraining with Z sampled from a unit sphere, or just from random normed vectors, may fix the issue. If I remember correctly, some of this was experimented with in Ian's original code base.
W(kx) = k(Wx) and BN(kh) = sign(k) * BN(h), so generator(kx) = generator(x) whenever k > 0, all the way through the tanh at the end of the generator.
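A quick numpy check of that sign/scale identity (my sketch):
import numpy as np

def bn(h, e=1e-8):
    return (h - h.mean(axis=0)) / np.sqrt(h.var(axis=0) + e)

h = np.random.randn(64, 10)
for k in (2.0, -3.0):
    # BN(k*h) == sign(k) * BN(h), up to the epsilon in the denominator
    assert np.allclose(bn(k * h), np.sign(k) * bn(h), atol=1e-5)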
Thanks @Newmu - this makes sense now. I can try regenerating my images using your hack soon and compare results. And maybe if I look at the BN code, I can figure out a more principled way to add just a few extra samples per sample that I want (e.g., maybe: -sample, sample scaled to unit norm, -sample scaled to unit norm).
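For example, that per-sample companion set could be built with a hypothetical helper like:
def companions(z):
    # z: one latent vector; returns z together with -z and the
    # unit-norm versions of both (names and choice of set are mine)
    zu = z / np.linalg.norm(z)
    return np.stack([z, -z, zu, -zu])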
Could someone explain why the middle output images are different in the following cases?
Number of samples to visualize:
nvis = 5
Generator:
def gen(Z, w, g, b, w2, g2, b2, w3, g3, b3, w4, g4, b4, wx):
    h = relu(batchnorm(T.dot(Z, w), g=g, b=b))
    h = h.reshape((h.shape[0], ngf*8, 4, 4))
    ...
Visualize samples:
color_grid_vis(inverse_transform(samples), (1, nvis), 'samples.png')
Case 1 (similar to what @dribnet posted above):
Inputs:
z = floatX(np_rng.uniform(0.0, 0.0, size=(nvis, nz)))
Since z = [[0,...,0],...,[0,...,0]], every row of h equals relu(b); in particular, the middle row is h[2] = relu(b).
Case 2 (almost the same as what @dribnet posted above):
Inputs:
step = 2.0 / (nvis - 1)
z = floatX(np_rng.uniform(0.0, 0.0, size=(nvis, nz)))
for i in range(nvis):
    v = -1.0 + step * i
    for j in range(nz):
        z[i][j] = v
Let H = T.dot(Z, w). Since z[i] = -z[nvis - 1 - i] and z[2] = [0,...,0], it follows that H[i] = -H[nvis - 1 - i] and H[2] = [0,...,0]. Batchnorm may change H[i] for i != 2, but because the batch mean is zero by this antisymmetry, batchnorm normalizes H[2] to [0,...,0] before the g/b scale-and-shift, and therefore h[2] = relu(b) again.
The values of h[2], which correspond to the middle images, are therefore the same in both cases, but the output images are different. Do you have any ideas?
After working more with interpolation, I think this result is likely due to the fact that random vectors in high-dimensional spaces concentrate in a thin shell around a hypersphere, so regions close to the origin are extremely unlikely to be sampled. This is true whether the prior is uniform or Gaussian, though it might be possible to construct a prior for which it does not hold. So I'm happy to close this issue.
A longer writeup of this reasoning is in this issue: soumith/dcgan.torch#14. @Newmu and @udibr - I would be interested in your feedback on this idea. Note that this result also has consequences for the best way to do interpolation and averaging (e.g. smiling woman) in the latent space.
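A quick numerical check of the concentration claim (my sketch): for Z ~ uniform(-1, 1)^nz, the norms cluster tightly around sqrt(nz/3).
import numpy as np

for nz in (2, 100):
    z = np.random.uniform(-1., 1., size=(10000, nz))
    r = np.linalg.norm(z, axis=1)
    print(nz, r.mean(), r.std())
# at nz=100 the norms sit near sqrt(100/3) ~ 5.77 with small spread,
# so samples near the origin essentially never occur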