musyoku / chainer-glow
Glow: Generative Flow with Invertible 1×1 Convolutions
Can you add details to the README about how many GPUs you used and how long it took to get your results?
Can you please provide your 128x128 CelebA-HQ weights?
Hi there! Thanks for your awesome work!
I am trying to get some stats (logpX, logpZ) from the model, using your pre-trained model on CelebA 64x64 images.
When handling logpZ, I saw that you first calculate the negative log-likelihood (NLL) of z at each scale of the multi-scale architecture separately, and then sum them up.
However, when I concatenate the z's into a single array, the NLL I get is larger (~1.5x) than the summed NLL.
Here are the stats I got:

level       mean          var
6x32x32     -0.0574313    0.311581
12x16x16     0.0713234    0.5110019
24x8x8       0.0486291    0.750326
48x4x4       0.0024840    0.994663

summed NLL: 9259.02

concatenated z:
mean: -0.022121632
var:  0.6087184
NLL:  14386.038
Do you have any idea why the variance at deeper levels is higher than at shallower ones? Or is the way I concatenate the z's wrong? I think the differing variances are the main reason the concatenated NLL is higher than the summed NLL.
Thank you in advance!
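For what it's worth, under a single standard-normal prior the summed per-scale NLL and the NLL of the concatenated z are identical by construction, so a gap like the one above suggests the per-scale terms are evaluated under different (e.g. learned) priors. A minimal NumPy sketch of the check, using the level shapes reported above (gaussian_nll is a hypothetical helper, not from this repo):

import numpy as np

def gaussian_nll(z):
    # NLL of z under a standard normal prior, summed over all elements:
    # 0.5 * (||z||^2 + D * log(2*pi))
    return 0.5 * np.sum(z ** 2 + np.log(2 * np.pi))

# Per-scale latents with the shapes reported above (random stand-ins).
zs = [np.random.randn(6, 32, 32),
      np.random.randn(12, 16, 16),
      np.random.randn(24, 8, 8),
      np.random.randn(48, 4, 4)]

summed_nll = sum(gaussian_nll(z) for z in zs)
concat_nll = gaussian_nll(np.concatenate([z.ravel() for z in zs]))
print(summed_nll, concat_nll)  # equal up to float error for a N(0, 1) prior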
Where do I specify the path to our own images for testing? Something like: "python3 change_temperature.py -snapshot ../snapshot"
I have a question: why is np.random.normal used to produce the input data?
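In a normalizing flow, generation works by sampling the latent from the Gaussian prior and running the flow in reverse, which is presumably why np.random.normal shows up. A minimal sketch under that assumption; the shape, temperature value, and model.reverse call are all illustrative, not this repo's exact API:

import numpy as np

# Draw a latent from the Gaussian prior; a temperature < 1 shrinks the
# prior's standard deviation and tends to give cleaner samples.
temperature = 0.7  # illustrative value
z = np.random.normal(0, temperature, size=(1, 3, 64, 64)).astype(np.float32)
# x = model.reverse(z)  # hypothetical inverse pass mapping z back to an image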
I've downloaded the sample 32x32 celeb dataset linked in the README, and then ran the same training command as in the README (except that I changed the data path to match my local setup).
When it trains, all the values printed during training are NaN (or 0, for the KLD):
python3 train.py -dataset /home/usr/celeba-64x64-images-npy/ -b 4 -depth 32 -levels 4 -nn 512 -bits 5 -ext npy
---- ------------
# 8500
mean -0.0831846
var 0.0825548
---- ------------
------------------ --------
levels 4
squeeze_factor 2
image_size (64, 64)
num_bits_x 5
nn_hidden_channels 512
lu_decomposition False
depth_per_level 32
------------------ --------
loading snapshot/model.hdf5
Can't broadcast (256,) -> (512,) <-- this is because I previously tried training with different settings
Iteration 1: Batch 7 / 2125 - loss: nan - nll: nan - kld: 0.00000000 - log_det: nan
It's training quite slowly, but is this expected, and will the values change after enough training? Or is something wrong? I'm training on a K80 GPU in Google Cloud.
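A likely culprit is the snapshot load flagged above: "Can't broadcast (256,) -> (512,)" suggests a stale snapshot/model.hdf5 from an earlier run (e.g. with -nn 256) was partially loaded into the -nn 512 model, which can leave mismatched weights and NaN losses. Deleting the old snapshot before retraining may help; as a small guard, here is a sketch of failing fast instead of logging NaN for thousands of iterations (check_finite is a hypothetical helper; loss stands for the scalar chainer.Variable computed each iteration):

import numpy as np

def check_finite(loss_value, iteration):
    # Abort early instead of silently logging NaN forever.
    if not np.isfinite(loss_value):
        raise RuntimeError(
            "loss became non-finite at iteration %d; a stale snapshot "
            "is a common cause - try deleting snapshot/model.hdf5" % iteration)

# usage inside the training loop (sketch):
# check_finite(float(loss.array), iteration)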
Hi, I am trying Glow on my computer, but it is too GPU-hungry.
Could you please upload pretrained model on CelebA 32x32?
Thanks a lot!