Comments (5)
Hi,
I didn't encounter such a situation before. Which datasets did you use? It seems that $x_t$ with a larger sampled $t$ didn't get sufficient training.
from diffuseq.
Thanks! I am using a protein sequence dataset with 100K training sequences and 6K diffusion steps; the same change in the metrics happens when I use 2K.
Do you mean with "$x_t$ with a larger sampled $t$ didn't get sufficient training" that the last 25% of the denoising steps are not yet trained properly, while the previous denoising steps are already "done"?
If yes, can I influence that without changing the previous denoising steps? I also noticed that the q2 NLL goes up again after 1K iterations, whereas the q0/q1 NLL stays the same and the q3 NLL is still gradually going down.
Yes. This pattern is not observed on text sequences; I am not sure about protein sequences. Maybe you can edit the time sampler to add more weight on q2 steps.
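The suggestion above (editing the time sampler to put more weight on q2 steps) can be sketched as a weighted timestep sampler with importance weights, in the spirit of the schedule samplers DiffuSeq inherits from improved-diffusion. The class and parameter names here are illustrative assumptions, not DiffuSeq's actual API:

```python
import numpy as np

# Hypothetical sketch: over-sample timesteps in the q2 range [T/2, 3T/4)
# and return importance weights so the loss estimate stays unbiased.
class QuartileWeightedSampler:
    def __init__(self, num_timesteps, q2_boost=2.0):
        w = np.ones(num_timesteps, dtype=np.float64)
        lo, hi = num_timesteps // 2, 3 * num_timesteps // 4
        w[lo:hi] *= q2_boost               # extra sampling mass on q2 steps
        self.p = w / w.sum()               # normalized sampling distribution
        self.num_timesteps = num_timesteps

    def sample(self, batch_size, rng=None):
        rng = rng or np.random.default_rng()
        t = rng.choice(self.num_timesteps, size=batch_size, p=self.p)
        # weight = 1 / (T * p_t): reduces to 1.0 under uniform sampling
        weights = 1.0 / (self.num_timesteps * self.p[t])
        return t, weights
```

The per-example loss would then be multiplied by `weights` before averaging, so boosting q2 changes how often those steps are trained without biasing the objective.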
Thanks again :)
Two follow-ups to your answer ("Maybe you can edit the time sampler to add more weight on q2 steps."):
- Do you mean to adjust the fixed sampler so that t's that fall in q2 are weighted higher?
- What is the expected behavior of the different metrics for the different q_n's? I am currently running DiffuSeq on a reduced Conversation dataset, and there all metrics of a given type are in the same ballpark for each q_n. Is that also what you have observed?
Response to 1: Yes.
Response to 2: Yes; the specific number for each q_n is computed after re-weighting.
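The q0–q3 metrics discussed in this thread bucket each example's loss by which quarter of the timestep range $[0, T)$ its sampled $t$ falls into, then average within each bucket. A minimal sketch of that bucketing, with illustrative names (not DiffuSeq's actual logging code):

```python
import numpy as np

def bucket_by_quartile(ts, losses, num_timesteps):
    """Average per-example losses into q0..q3 buckets by sampled timestep,
    mirroring how DiffuSeq-style training loops report nll_q0..nll_q3."""
    buckets = {f"q{i}": [] for i in range(4)}
    for t, loss in zip(ts, losses):
        # quartile index of this timestep; clamp t = T-1 edge case into q3
        q = min(int(4 * t / num_timesteps), 3)
        buckets[f"q{q}"].append(loss)
    # only report buckets that actually received samples this step
    return {k: float(np.mean(v)) for k, v in buckets.items() if v}
```

This is why a non-uniform time sampler changes which bucket accumulates samples fastest: q2's average only improves as often as timesteps in $[T/2, 3T/4)$ are drawn.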
Related Issues (20)
- Issues with decoding and evaluation
- Padding during training results in a "Killed"
- BERT parameter
- Trying to train the model with another dataset, but getting many [UNK] tokens
- A few questions about the 'MBR' decoding strategy
- Versions of many packages
- Incorrect self-BLEU computation
- A question about --local_rank
- Could not find a version that satisfies the requirement torch==1.9.0+cu111
- Problems with Dataset(2) in "text_datasets.py"
- Is there any rule for modifying the parameters?
- Machine translation task with DiffuSeq
- A question about the loss in V2
- Implementation of the soft absorbing state in the forward process during training
- DDIM sampling
- DDPM
- train
- Where is CommonsenseConversation/test.jsonl? When I run train.sh and then run_decode_solver.sh or run_decode.sh, I can't find test.jsonl
- 'grad_norm' is NaN
- Understanding tT_loss