Comments (6)
Hi, it is true that the xt part is 0, referring to Eq. (2) in the paper. There is still the yt part for the MSE loss when training.
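To make this concrete, here is a minimal, hypothetical sketch of partial noising in the spirit of Eq. (2): only the target (y) positions of the embedded sequence are noised, while the source (x) positions stay at their clean embeddings, which is why the training MSE effectively only concerns the y part. The function name, mask convention, and shapes are illustrative and not taken from the repository.

```python
import torch

def partial_noising(z0, src_mask, alpha_bar_t):
    """Noise only the target (y) positions of z0; keep the source (x) positions fixed.

    z0:          (batch, seq_len, dim) embeddings of the concatenated sequence w^x + w^y
    src_mask:    (batch, seq_len) bool tensor, True at source (x) positions
    alpha_bar_t: float, cumulative noise-schedule value at step t
    """
    noise = torch.randn_like(z0)
    zt = (alpha_bar_t ** 0.5) * z0 + ((1.0 - alpha_bar_t) ** 0.5) * noise
    # Anchor the source span: x positions are never corrupted, so only the y part
    # carries a meaningful reconstruction objective.
    return torch.where(src_mask.unsqueeze(-1), z0, zt)

# Toy usage: 3 source positions and 2 target positions, embedding dim 4.
z0 = torch.randn(1, 5, 4)
src_mask = torch.tensor([[True, True, True, False, False]])
zt = partial_noising(z0, src_mask, alpha_bar_t=0.5)
```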
- Yes, what you mainly train is the loss on yt, but the x part fed into the mean_flat function is 0, and the function also counts the x positions, resulting in a smaller loss: the constant x part dilutes the yt loss. For example, with zt = [0, 0, 0, 2, 8], where the zeros are the MSE_x part, it actually computes the average (2+8)/5 (over both the x and y positions) instead of (2+8)/2 (over the y positions only).
- What does the embedding part of the Transformer learn during training? Does it learn how to embed the text (X) at the tail of the T = 2000 steps?
- Are diffusion models in NLP suitable only for large datasets, and not for small ones?
- You are the first to apply the diffusion model to the seq2seq task. I would like to ask whether the Transformer and the training methods are similar to Diffusion-LM?
Thank you for such beautiful code!
Hi,
- You're right, that would be more rigorous. For the whole sequence the total length is fixed to 128, and the length of x would not vary much, so I think this loss is bearable (see the sketch after this list).
- The embedding learns the word embedding of each token, just as in other pretrained language models. The word embedding here also corresponds to the semantic meaning of the token; e.g., similar words have higher cosine similarity between their vectors.
- There is no doubt that more data helps language-modeling capacity, but I am not sure how DiffuSeq performs on a small dataset.
- The Transformer architecture (layers, hidden dims, etc.) and the training methods (VLB loss, optimizer, learning-rate annealing) are similar to Diffusion-LM.
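As a rough illustration of the first point (not the repository's code): a mean_flat-style helper averages over every non-batch position, so zero-error source positions shrink the reported loss by a factor of |y| / (|x| + |y|) compared with averaging over the target positions alone.

```python
import torch

def mean_flat(tensor):
    # Average over all non-batch dimensions, as in common diffusion codebases.
    return tensor.mean(dim=list(range(1, tensor.dim())))

# Toy per-position squared errors from the example above:
# three source (x) positions with zero error, two target (y) positions with errors 2 and 8.
sq_err = torch.tensor([[0.0, 0.0, 0.0, 2.0, 8.0]])
tgt_mask = torch.tensor([[0.0, 0.0, 0.0, 1.0, 1.0]])

loss_all = mean_flat(sq_err)                                  # (2 + 8) / 5 = 2.0
loss_y_only = (sq_err * tgt_mask).sum(-1) / tgt_mask.sum(-1)  # (2 + 8) / 2 = 5.0
print(loss_all.item(), loss_y_only.item())
```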
Thank you for your answer. Regarding the second point about the embedding: from the source code, there is an Embedding (EMB) for mapping W to x0, but there is also a word_embedding in the BERT model itself, which shares its weight with lm_head. What is the difference? Judging from your code, sampling maps x to xt using the word_embedding in BERT. So what role does EMB(W) play during training and in the sampling process?
Hi,
I guess your "Embedding(W)" refers to the randomly initialized embedding at Line 73 in bdc8f0a.
In fact, we didn't use it in the end, either for training or for sampling. This code is a legacy of the previous work, and we leave it as a possible interface so that you can initialize the word embedding with your own weights (e.g., GloVe embeddings). We only use model.word_embedding.
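For readers following along, here is a minimal, hypothetical sketch of the word_embedding / lm_head weight sharing described above, with the kind of optional interface the unused embedding could provide (e.g., initializing from GloVe vectors). The class and method names are illustrative, not the repository's actual modules.

```python
import torch
import torch.nn as nn

class TiedWordEmbedding(nn.Module):
    """Word embedding whose weight matrix is shared with the output projection (lm_head)."""

    def __init__(self, vocab_size, dim, pretrained=None):
        super().__init__()
        self.word_embedding = nn.Embedding(vocab_size, dim)
        if pretrained is not None:
            # Optional warm start from external vectors (e.g., GloVe), shape (vocab_size, dim).
            self.word_embedding.weight.data.copy_(pretrained)
        self.lm_head = nn.Linear(dim, vocab_size, bias=False)
        self.lm_head.weight = self.word_embedding.weight  # weight tying

    def embed(self, token_ids):
        # w -> continuous embedding, used to build z0 for the diffusion process.
        return self.word_embedding(token_ids)

    def logits(self, hidden):
        # hidden state -> vocabulary logits, used to round back to tokens when sampling.
        return self.lm_head(hidden)
```

Tying the two matrices keeps the embedding step and the rounding step consistent, which is a common motivation for sharing the weight.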
Thank you for your patience,
Good luck!
Related Issues (20)
- Issues with decoding and evaluation
- Padding during training results in a "Killed"
- BERT parameter
- Try to train the model with another dataset, but get so many [UNK] token.
- a few questions about the 'MBR' decoding strategy.
- Version of many packages
- Incorrect self-BLEU Computation
- a question about --local_rank
- Could not find a version that satisfies the requirement torch==1.9.0+cu111
- i face some promble Dataset(2) in "text_datasets.py"
- If there is any rule to modify the parameters
- Machine Translation Task with DiffuSeq
- A question about the loss in V2
- Implementation of using soft absorbing state in the forward process in training.
- ddim sampling
- DDPM
- train
- Where is CommonsenseConversation/test.jsonl ? When I run train.sh and then run run_decode_solver.sh or run_decode.sh, I always can't find test.jsonl
- 'grad_norm' is NaN
- Understanding tT_loss