Comments (7)
I encountered the same issue. @DoodleJZ @tanvidadu Has anyone fixed this problem?
from hpsg-neural-parser.
Perhaps the dimensions do not match between d_model and the sum of d_tag, d_word, and d_char when you concatenate all the embeddings. You can check the dimension of each part of the embedding to find the problem easily.
@DoodleJZ, I do not use the d_tag and d_char embeddings~ I run this code with Python 3.6 and PyTorch 0.4.0, however.
The error is: residual has shape (packed_len, d_model), but outputs has shape (self.batch_size * self.max_len, d_model). The mismatch comes from the first dimension, not the second. I compressed glove.6B.100.txt into glove.gz, and I cannot confirm whether oov: 18820 is normal when running test.sh.
```
Loading model from models/cwt.pt...
loading embedding: glove from data/glove.gz
oov: 18820
Reading dependency parsing data from data/ptb_test_3.3.0.sd
Loading test trees from data/23.auto.clean...
Loaded 2,416 test examples.
Parsing test sentences...
packed_len: 2501
sentences: 100
torch.Size([2501])
self.batch_size: 100
self.max_len: 50
residual: torch.Size([2501, 1024])
v_padded: torch.Size([800, 50, 64])
outputs_padded: torch.Size([800, 50, 64])
outputs = outputs_padded[output_mask]: torch.Size([40000, 64])
d_v1: 32
outputs = self.combine_v(outputs): torch.Size([5000, 1024])
outputs = self.residual_dropout(outputs, batch_idxs): torch.Size([5000, 1024])
Traceback (most recent call last):
  File "src_joint/main.py", line 746, in <module>
    main()
  File "src_joint/main.py", line 742, in main
    args.callback(args)
  File "src_joint/main.py", line 577, in run_test
    predicted, _, = parser.parse_batch(subbatch_sentences)
  File "/home/LAB/wujs/word_pos/HPSG-Neural-Parser/src_joint/Zparser.py", line 1370, in parse_batch
    extra_content_annotations=extra_content_annotations)
  File "/home/LAB/wujs/software/pytorch-0.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/LAB/wujs/word_pos/HPSG-Neural-Parser/src_joint/Zparser.py", line 827, in forward
    res, current_attns = attn(res, batch_idxs)
  File "/home/LAB/wujs/software/pytorch-0.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/LAB/wujs/word_pos/HPSG-Neural-Parser/src_joint/Zparser.py", line 349, in forward
    return self.layer_norm(outputs + residual), attns_padded
RuntimeError: The size of tensor a (5000) must match the size of tensor b (2501) at non-singleton dimension 0
srun: error: dell-gpu-32: task 0: Exited with exit code 1
```
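Note that the jump from packed_len (2501) to batch_size * max_len (5000) suggests the mask index kept every padded position instead of only the valid ones. For reference, here is a minimal, hypothetical sketch (not the parser's code) of how a boolean mask index is supposed to shrink the first dimension:

```python
import torch

# Hypothetical shapes standing in for the parser's padded batch:
# 4 "positions", 3 features each.
x = torch.arange(12.0).reshape(4, 3)

# A boolean mask keeps only the rows marked True.
mask = torch.tensor([True, False, True, False])

selected = x[mask]
print(selected.shape)  # torch.Size([2, 3]) -- first dim shrinks to the True count
```

If the mask has the wrong dtype or shape, this selection can return all batch_size * max_len rows, which then fails to add to the packed residual.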
Maybe you need to try PyTorch >= 1.0.0; the error occurs in output_mask when the PyTorch version does not match.
@DoodleJZ Sorry to bother you, as I'm not so familiar with PyTorch.
Now I just changed torch_t.ByteTensor into torch_t.BoolTensor as follows, and now everything is perfect~
```
def pad_and_rearrange(....):
    invalid_mask = torch_t.BoolTensor(mb_size, len_padded).fill_(True)
```
it works perfectly!
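For anyone applying the same change, here is a minimal sketch, assuming a recent PyTorch, of building an all-True boolean mask; `torch.ones(..., dtype=torch.bool)` is an equivalent, slightly more idiomatic spelling of the constructor-plus-fill pattern (mb_size and len_padded values below are made up):

```python
import torch

mb_size, len_padded = 2, 5  # hypothetical batch size and padded length

# Equivalent of torch_t.BoolTensor(mb_size, len_padded).fill_(True):
invalid_mask = torch.ones(mb_size, len_padded, dtype=torch.bool)
print(invalid_mask.shape, invalid_mask.all().item())  # torch.Size([2, 5]) True
```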
Hi @wujsAct, @CoyoteLeo, @tanvidadu. I am afraid that I have a similar problem.
I have two tensors that I want to add: one is a noise tensor of shape N x 1 x 64 x 64, and the other tensor has the same shape.
Things work fine until the very last batch index, it seems, when the program stops and complains with the following error message: RuntimeError: The size of tensor a (96) must match the size of tensor b (128) at non-singleton dimension 0.
Here is a bit of the code:
```
for batch_idx, (real_images, targets) in enumerate(train_loader):
    noise_disc = -torch.rand(size=(batch_size, 1, 64, 64)) / 5
    noise_disc = noise_disc.to(device)
    real_images = real_images.to(device)  # shape: (batch_size, 1, 64, 64)
    images_disc = real_images + noise_disc
    # move to device:
    images_disc = images_disc.to(device)
```
Unfortunately, I don't really understand why this error occurs, and I would appreciate help a lot!
Merry Christmas. :-)
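One likely cause, a guess since the dataset size is not given: the last batch a DataLoader yields can be smaller than batch_size (here apparently 96 instead of 128), so noise built with a fixed batch_size no longer matches real_images along dimension 0. A minimal sketch of sizing the noise from the batch actually delivered; the 224-sample dataset below is a made-up stand-in for the real train_loader:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

batch_size = 128
# Stand-in dataset: 224 samples -> batches of 128 and then 96.
data = torch.randn(224, 1, 64, 64)
loader = DataLoader(TensorDataset(data), batch_size=batch_size)

for (real_images,) in loader:
    # Size the noise from the batch actually delivered, not from batch_size,
    # so the final (smaller) batch still adds element-wise without error.
    noise_disc = -torch.rand(real_images.shape[0], 1, 64, 64) / 5
    images_disc = real_images + noise_disc  # shapes now always match
```

Alternatively, DataLoader's `drop_last=True` option discards the short final batch entirely.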