compvis / net2net
Network-to-Network Translation with Conditional Invertible Neural Networks
Home Page: https://compvis.github.io/net2net/
Hi,
Thanks for the interesting work. I am trying to reproduce the results by running faces32-to-faces256.yaml. However, I am confused about how to prepare the corresponding dataset.
I have downloaded the CelebA-HQ dataset from https://drive.google.com/drive/folders/11Vz0fqHS2rXDb5pprgTjpD7S2BAJhi1P and put it into the data/celebahq folder. There are 4 subfolders corresponding to different resolutions (128 x 128, 256 x 256, 512 x 512, and 1024 x 1024), and the images are in .jpg format.
My questions are:
First and foremost, I would like to express my sincere gratitude and respect for your work on this repository. The progress and innovations shared here have been immensely insightful and valuable to the community.
I am currently exploring the concept of fission in invertible neural networks, where a single latent representation 'x' can be decomposed into two distinct components 'y' and 'z'. My objective is to parameterize 'z' with a tractable distribution while ensuring that the combination of 'y' and 'z' can be accurately recombined to reconstruct 'x' using the reverse of the model.
Given your expertise in this field, I would greatly appreciate any guidance or suggestions you could provide on the following aspects:
Any insights, references, or examples you could share would be extremely helpful.
Thank you for your time and for the impactful contributions you've made to the field.
Best regards
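For concreteness, the kind of fission described above can be sketched with a single affine coupling step. This is a minimal illustration under my own assumptions, not the repo's actual flow code; the toy scale/shift functions stand in for learned conditioner networks, and invertibility holds for any choice of them:

```python
import numpy as np

def forward(x, scale_net, shift_net):
    # Split x into two parts: y is kept as-is, the rest becomes z.
    y, x2 = np.split(x, 2, axis=-1)
    # Affine coupling: z is an invertible function of x2, conditioned on y.
    s, t = scale_net(y), shift_net(y)
    z = x2 * np.exp(s) + t
    return y, z

def inverse(y, z, scale_net, shift_net):
    # Exact inverse: recompute the same (s, t) from y and undo the affine map.
    s, t = scale_net(y), shift_net(y)
    x2 = (z - t) * np.exp(-s)
    return np.concatenate([y, x2], axis=-1)

# Toy conditioners (placeholders for neural networks).
scale_net = lambda y: 0.5 * np.tanh(y)
shift_net = lambda y: y ** 2

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
y, z = forward(x, scale_net, shift_net)
x_rec = inverse(y, z, scale_net, shift_net)
assert np.allclose(x, x_rec)  # (y, z) recombine exactly to x
```

Stacking such steps (with permutations between them) and adding a log-likelihood term on z is the usual way to push z toward a tractable base distribution, e.g. a standard Gaussian, while keeping exact reconstruction.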
Thank you for your impressive work.
During the SBERT-to-BigGAN, SBERT-to-BigBiGAN and SBERT-to-AE (COCO) execution, I received the following error:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "translation.py", line 531, in <module>
    melk()
NameError: name 'melk' is not defined
I'd appreciate it if you could check.
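For what it's worth, translation.py calls melk() from an exception handler, and melk appears to be defined later in the script as a checkpoint-saving hook; any exception raised before that definition therefore surfaces as this NameError and masks the real error. A minimal sketch of a defensive pattern (the names here are illustrative, not the repo's actual code):

```python
# Hypothetical guard: only invoke the checkpoint hook if it was defined,
# and re-raise the original exception instead of a masking NameError.
def run_with_checkpoint_hook(fit, melk=None):
    try:
        fit()
    except Exception:
        if melk is not None:  # hook may not exist yet when the error fires
            melk()
        raise  # propagate the real error

saved = []
def failing_fit():
    raise RuntimeError("real training error")

try:
    run_with_checkpoint_hook(failing_fit, melk=lambda: saved.append("ckpt"))
except RuntimeError as e:
    caught = e

print(saved)        # the hook still ran
print(caught)       # and the original error is preserved
```

With a guard like this, the traceback above would show the underlying exception instead of the NameError.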
Thank you for sharing your code and pretrained models.
If I want to retrain the unpaired translation task Oil-Portrait ⟷ Photography, what dataset do I need?
For example, how many oil-portrait images and how many real photographs of people?
I want to train a model myself rather than use the pretrained one. Thank you.
Hi, thanks for your interesting work. When I run the anime-to-photography task with python translation.py --base configs/creativity/anime_photography_256.yaml -t --gpus 0, I receive the following error:
Traceback (most recent call last):
  File "translation.py", line 522, in <module>
    trainer.fit(model, data)
  File "/home/projects/miniconda3/envs/net2net/lib/python3.7/site-packages/pytorch_lightning/trainer/states.py", line 48, in wrapped_fn
    result = fn(self, *args, **kwargs)
  File "/home/projects/miniconda3/envs/net2net/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1058, in fit
    results = self.accelerator_backend.spawn_ddp_children(model)
  File "/home/projects/miniconda3/envs/net2net/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 123, in spawn_ddp_children
    results = self.ddp_train(local_rank, mp_queue=None, model=model, is_master=True)
  File "/home/projects/miniconda3/envs/net2net/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_backend.py", line 224, in ddp_train
    results = self.trainer.run_pretrain_routine(model)
  File "/home/projects/miniconda3/envs/net2net/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1224, in run_pretrain_routine
    self._run_sanity_check(ref_model, model)
  File "/home/projects/miniconda3/envs/net2net/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1257, in _run_sanity_check
    eval_results = self._evaluate(model, self.val_dataloaders, max_batches, False)
  File "/home/projects/miniconda3/envs/net2net/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 369, in _evaluate
    self.on_validation_batch_end(batch, batch_idx, dataloader_idx)
  File "/home/projects/miniconda3/envs/net2net/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_hook.py", line 156, in on_validation_batch_end
    callback.on_validation_batch_end(self, self.get_model(), batch, batch_idx, dataloader_idx)
  File "/home/projects/net2net/translation.py", line 297, in on_validation_batch_end
    self.log_img(pl_module, batch, batch_idx, split="val")
  File "/home/projects/net2net/translation.py", line 265, in log_img
    images = pl_module.log_images(batch, split=split)
  File "/home/projects/miniconda3/envs/net2net/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/projects/net2net/net2net/models/flows/flow.py", line 157, in log_images
    log["conditioning"] = log_txt_as_img((w,h), xc)
  File "/home/projects/net2net/net2net/modules/util.py", line 18, in log_txt_as_img
    lines = "\n".join(xc[bi][start:start+nc] for start in range(0, len(xc[bi]), nc))
  File "/home/projects/miniconda3/envs/net2net/lib/python3.7/site-packages/torch/_tensor.py", line 589, in __len__
    raise TypeError("len() of a 0-d tensor")
TypeError: len() of a 0-d tensor
I don't know what causes this error. I would greatly appreciate it if you could help me find out the problem. Thanks for your time.
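One plausible reading of the traceback: log_txt_as_img expects the conditioning xc to be a batch of strings, but this config appears to feed it numeric tensors, so len(xc[bi]) lands on a 0-d tensor. A hedged workaround sketch (to_text_list is a hypothetical helper, with NumPy standing in for torch; the real fix may instead be changing the config's conditioning) is to coerce the conditioning to strings before text logging:

```python
import numpy as np

def to_text_list(xc):
    """Coerce conditioning into a list of strings for text logging.

    Accepts a plain string, an iterable of strings/scalars, or a 0-d
    tensor/array (which is not iterable and has no len()).
    """
    if isinstance(xc, str):
        return [xc]
    try:
        return [x if isinstance(x, str)
                else str(x.item() if hasattr(x, "item") else x)
                for x in xc]
    except TypeError:  # 0-d tensor / bare scalar: not iterable
        return [str(xc.item() if hasattr(xc, "item") else xc)]

print(to_text_list(np.array(5)))      # a 0-d array no longer crashes
print(to_text_list(["cat", "dog"]))   # string lists pass through
```

This only makes the logging robust; it does not change what the model is conditioned on.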
In the README, about how to train the unpaired translation tasks, you said:
python translation.py --base configs/translation/<task-of-interest>.yaml -t --gpus 0,
but the translation folder only contains faces32-to-faces256.yaml and no other configs, so I think it should be:
python translation.py --base configs/creativity/<task-of-interest>.yaml -t --gpus 0,
Hi, if I have a new dataset with source domain x and target domain y, how do I train a model like creativity/portrait-to-photo?
As your paper says, one should train two autoencoders (ResNet-101 as encoder, BigGAN as decoder).
Is that right?
And would you provide a tutorial on how to apply this to new datasets? Thank you.
Hi,
Thank you for your amazing work!
I am trying to replicate your results and training using
python translation.py --base configs/translation/sbert-to-biggan256.yaml -t --gpus 0,
I was wondering which GPU was used to train your model and what batch size you used. I can only fit batch_size=2 on a TITAN Xp; the default batch_size in the config is 16, but I cannot launch it on 4 TITAN Xps without running into memory issues. Are BigGAN or the Sentence Transformer fine-tuned during training (from your paper it seems they are not)? Do you have any insight into what I am missing?
Thank you in advance