Comments (7)
@lucidrains Thanks for your kindness!
The results changed after a few modifications, and I will keep training to see what happens.
By the way, I am curious why you don't let the style codes participate in the attention process? For example, the style codes could modulate the query and key weights in self-attention.
Thanks.
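To make the suggestion concrete, here is a minimal sketch of what "style codes modulating the queries and keys" could look like. This is a hypothetical illustration, not code from gigagan-pytorch: the module name, the `to_gains` projection, and the `(1 + gain)` modulation scheme are all assumptions, loosely following StyleGAN-style modulation.

```python
import torch
import torch.nn as nn

class StyleModulatedSelfAttention(nn.Module):
    """Hypothetical sketch: a style code predicts per-channel gains that
    scale the query and key projections before attention. Illustrative
    only; not the gigagan-pytorch implementation."""

    def __init__(self, dim, style_dim, heads=4):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        # per-channel gains for q and k, predicted from the style code
        self.to_gains = nn.Linear(style_dim, dim * 2)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x, style):
        # x: (batch, seq, dim), style: (batch, style_dim)
        b, n, d, h = *x.shape, self.heads
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q_gain, k_gain = self.to_gains(style).chunk(2, dim=-1)  # (b, d) each
        q = q * (1 + q_gain.unsqueeze(1))   # style modulates queries
        k = k * (1 + k_gain.unsqueeze(1))   # style modulates keys
        q, k, v = (t.reshape(b, n, h, -1).transpose(1, 2) for t in (q, k, v))
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.to_out(out)
```

The `1 +` offset keeps the layer close to unmodulated attention at initialization, which is a common trick to avoid destabilizing early training.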
from gigagan-pytorch.
Thanks! You are very kind. Looking forward to your continued work.
@landian60 i can do a quick review of your integration code if you have a public repo or a gist
i'm actually not too sure if they also used the l2 distance attention for cross attention, or in the extra transformer blocks they appended to the clip text encoder
maybe i should make it an option to fall back to regular attention
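For readers unfamiliar with the distinction being discussed: l2 distance attention replaces the dot-product similarity with a negative squared euclidean distance between queries and keys, which bounds the similarity and tends to be better behaved in GAN discriminators. Below is a generic sketch of both variants with a fallback flag; the function names and the exact scaling are my own illustration, not the repo's API.

```python
import torch

def l2_distance_attention(q, k, v, scale):
    """Similarity = negative squared euclidean distance between queries
    and keys. Shapes: (batch, heads, seq, dim_head)."""
    # -||q - k||^2 = 2 q.k - ||q||^2 - ||k||^2, expanded to avoid
    # materializing pairwise differences
    sim = 2 * (q @ k.transpose(-2, -1))
    sim = sim - q.pow(2).sum(-1, keepdim=True)      # subtract ||q||^2 per row
    sim = sim - k.pow(2).sum(-1).unsqueeze(-2)      # subtract ||k||^2 per col
    attn = (sim * scale).softmax(dim=-1)
    return attn @ v

def dot_product_attention(q, k, v, scale):
    """Regular softmax attention, as the fallback option."""
    sim = q @ k.transpose(-2, -1)
    return (sim * scale).softmax(dim=-1) @ v

def attention(q, k, v, scale, use_l2 = True):
    """Dispatch on a flag, mirroring the 'option to fall back' idea."""
    fn = l2_distance_attention if use_l2 else dot_product_attention
    return fn(q, k, v, scale)
```

With identical inputs, the two variants differ only by the row-wise `||q||^2` term (which softmax ignores) and the `||k||^2` bias, so the fallback switch changes behavior in a controlled way.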
glad to hear it is working!
the style codes already participate in modulating the convolutional kernels. as for the text conditioning, that is cross attended to by each image feature map token
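The "style codes modulating the convolutional kernels" mentioned above follows the StyleGAN2 idea of weight modulation/demodulation. Here is a minimal functional sketch of that mechanism, assuming odd kernel sizes and using the standard grouped-conv trick for per-sample kernels; it is an illustration of the general technique, not the exact gigagan-pytorch code.

```python
import torch
import torch.nn.functional as F

def modulated_conv2d(x, weight, style, eps=1e-8, demodulate=True):
    """StyleGAN2-style modulation sketch: the style code scales the conv
    kernel per input channel, then the kernel is renormalized (demodulated).
    x: (b, c_in, h, w); weight: (c_out, c_in, k, k); style: (b, c_in)."""
    b, c_in, h, w = x.shape
    c_out, _, kh, kw = weight.shape
    # scale the kernel per-sample and per-input-channel by the style
    w_mod = weight.unsqueeze(0) * style.view(b, 1, c_in, 1, 1)
    if demodulate:
        # renormalize each output filter to unit l2 norm for stability
        d = (w_mod.pow(2).sum(dim=(2, 3, 4), keepdim=True) + eps).rsqrt()
        w_mod = w_mod * d
    # grouped-conv trick: fold the batch into the channel dimension so
    # each sample is convolved with its own modulated kernel
    x = x.reshape(1, b * c_in, h, w)
    w_mod = w_mod.reshape(b * c_out, c_in, kh, kw)
    out = F.conv2d(x, w_mod, padding=kh // 2, groups=b)
    return out.reshape(b, c_out, h, w)
```

The text conditioning then takes a different path: cross-attention lets each image feature map token attend over the text token embeddings, as described in the comment above.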
Hello Phil,
I have a question about training the model with cross attention. I only used the CLIP contrastive loss and did not add any other losses, and the generated results always collapsed to the same color while the G loss grew very large, eventually reaching NaN. I think the reason is mode collapse. Have you encountered similar issues when training the model? And have you noticed any losses, or a well-designed discriminator with certain vital structures, that could help stabilize the training process with the attention mechanism?
Thanks.
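One widely used stabilizer for exactly this kind of divergence is the R1 gradient penalty on the discriminator (Mescheder et al., 2018), which penalizes the gradient norm of the discriminator's output with respect to real images. The sketch below shows the standard formulation; whether and how gigagan-pytorch applies it is not confirmed here, so treat this as a generic technique, not the repo's recipe.

```python
import torch

def r1_gradient_penalty(real_images, real_logits):
    """R1 penalty: E[ ||grad_x D(x)||^2 ] over real samples. A common GAN
    stabilizer. Assumes real_images had requires_grad=True set before the
    discriminator forward pass that produced real_logits."""
    grad, = torch.autograd.grad(
        outputs=real_logits.sum(),
        inputs=real_images,
        create_graph=True,  # so the penalty itself can be backpropagated
    )
    return grad.pow(2).reshape(grad.shape[0], -1).sum(dim=1).mean()
```

In practice the penalty is weighted (often heavily) and sometimes applied only every few discriminator steps ("lazy regularization") to amortize its cost.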
Related Issues (20)
- Possible Discrepancies HOT 3
- The training code not deal with paired data yet? HOT 2
- [Question] About the upscaler HOT 2
- Multi GPU training HOT 4
- Multi GPU with gradient accumulation
- [Request] Please provide a replicate.com version
- Confused about this project?
- NaN losses after hours of training (UPSAMPLER) HOT 16
- How to implement this model to enhance my input images? Do I have to train the model to use? HOT 2
- Weights of Gigagan Upscaler HOT 1
- Turn on/off gradients computation between generator/discriminator HOT 2
- Wrong order of resolutions list HOT 1
- to_rgb branch has only 1 learnable kernel HOT 7
- Gradient Penalty is very high in the start HOT 10
- How to use this model for SR ?
- Has Anyone Trained This Model Yet? HOT 2
- The text-to-image tasks
- Config to reproduce paper
- question about code in unet_upsampler.py HOT 1
- the loss became nan after a few train steps HOT 2