
text-classification-models-pytorch's People

Contributors

anubhavgupta3377


text-classification-models-pytorch's Issues

NLLLoss

Hi, thanks for your code! However, I think there may be a bug with NLLLoss: according to the manual (https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html), the input to nn.NLLLoss() should be the output of a log-softmax. However, model.py applies only a plain softmax, so the loss is always negative. I have only checked the Transformer model.

Kind regards,
John
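
For reference, a minimal sketch of the two standard fixes (assuming, as described above, that the model currently ends in a plain Softmax and the training loop uses nn.NLLLoss): either apply LogSoftmax before NLLLoss, or keep raw logits and use CrossEntropyLoss, which fuses the two.

import torch
import torch.nn as nn

logits = torch.randn(4, 3)           # (batch, num_classes), raw scores
target = torch.tensor([0, 2, 1, 0])

# Option 1: NLLLoss expects log-probabilities, so apply LogSoftmax first.
log_probs = nn.LogSoftmax(dim=1)(logits)
loss_nll = nn.NLLLoss()(log_probs, target)

# Option 2: CrossEntropyLoss fuses LogSoftmax + NLLLoss; feed it raw logits.
loss_ce = nn.CrossEntropyLoss()(logits, target)

# Both give the same non-negative value. Feeding Softmax probabilities to
# NLLLoss instead yields -p, which is always negative, as reported above.
assert torch.allclose(loss_nll, loss_ce)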

RuntimeError: index out of range

Hi, when I run train.py in the Model_TextCNN folder, I get an error and I can't figure out what's wrong. Can you help me? Thank you very much.

Loaded 96000 training examples
Loaded 7600 test examples
Loaded 24000 validation examples
Epoch: 0
Traceback (most recent call last):
  File "/Users/y/Documents/Code/Text-Classification-Models-Pytorch-master/Model_TextCNN/train.py", line 43, in <module>
    train_loss,val_accuracy = model.run_epoch(dataset.train_iterator, dataset.val_iterator, i)
  File "/Users/y/Documents/Code/Text-Classification-Models-Pytorch-master/Model_TextCNN/model.py", line 85, in run_epoch
    y_pred = self.__call__(x)
  File "/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/Users/y/Documents/Code/Text-Classification-Models-Pytorch-master/Model_TextCNN/model.py", line 45, in forward
    embedded_sent = self.embeddings(x).permute(1,2,0)
  File "/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 118, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/anaconda3/python.app/Contents/lib/python3.6/site-packages/torch/nn/functional.py", line 1454, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range at /Users/administrator/nightlies/pytorch-1.0.0/wheel_build_dirs/conda_3.6/conda/conda-bld/pytorch_1544137972173/work/aten/src/TH/generic/THTensorEvenMoreMath.cpp:191

Process finished with exit code 1
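
For context (not from the repo author): this RuntimeError usually means some token index in the batch is >= the size of the embedding table, e.g. the vocabulary was built on a different split than the one being iterated. A minimal sketch of reproducing and checking for it (the variable names are illustrative, not from the repo):

import torch
import torch.nn as nn

vocab_size = 100
emb = nn.Embedding(vocab_size, 32)

x = torch.tensor([[5, 99, 100]])     # 100 is out of range for a 100-row table
print(x.max().item(), emb.num_embeddings)  # sanity check before the forward pass

if x.max().item() >= emb.num_embeddings:
    raise ValueError("batch contains an index outside the embedding table; "
                     "rebuild the vocab or map unknown tokens to <unk>")

out = emb(x)  # with the check removed, this line raises the RuntimeError above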

seq2seq-Attention inference batch size <pad> effects

Hi, even with max_length set to None, short sentences are still padded to the longest length in the batch, which affects the training and prediction results.

For example, predicting the single sentence "0516酸菜鱼" alone gives the index input

tensor([[241542],
        [  7789],
        [   192],
        [   260]])

but after adding "Bird&Bird 香港鸿鹄律师事务所北京代表处鸿鹄知识产权代理(北京有限公司)" to the same batch, the shorter sentence's index input is padded with index 1 ("<pad>"), and its output differs.

[screenshots of the padded index tensor and the two differing outputs omitted]
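
This is a common pitfall: if the attention does not mask <pad> positions, padded timesteps leak into the context vector, so the same sentence scores differently depending on batch composition. A minimal sketch of masking padded positions in a dot-product attention (assuming pad index 1, as above; the function and names are illustrative, not the repo's code):

import torch
import torch.nn.functional as F

PAD_IDX = 1

def masked_attention(query, enc_outputs, src_tokens):
    # query: (batch, hidden); enc_outputs: (batch, src_len, hidden)
    # src_tokens: (batch, src_len) token indices, used to locate pads
    scores = torch.bmm(enc_outputs, query.unsqueeze(2)).squeeze(2)   # (batch, src_len)
    scores = scores.masked_fill(src_tokens == PAD_IDX, float('-inf'))
    weights = F.softmax(scores, dim=1)             # pads get exactly zero weight
    context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
    return context, weights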

Use of permute in RCNN

On lines 53 and 61 of Text-Classification-Models-Pytorch/Model_RCNN/model.py, the permute function is used:

input_features = torch.cat([lstm_out,embedded_sent], 2).permute(1,0,2)
...
linear_output = linear_output.permute(0,2,1) # Reshaping for max_pool

Could you please explain why it is necessary or useful to permute the dimensions of these tensors?
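
A likely explanation (a sketch, not the author's answer): nn.LSTM without batch_first returns (seq_len, batch, hidden), while F.max_pool1d pools over the last dimension of an (N, C, L) tensor, so the sequence axis has to be moved into the right position at each step:

import torch
import torch.nn.functional as F

seq_len, batch, hidden = 7, 2, 16
lstm_out = torch.randn(seq_len, batch, hidden)   # LSTM's default output layout
embedded = torch.randn(seq_len, batch, 8)

# (seq_len, batch, feat) -> (batch, seq_len, feat): the following linear layer
# then maps the feature dim, keeping one row per timestep.
features = torch.cat([lstm_out, embedded], 2).permute(1, 0, 2)

linear_out = torch.tanh(features)                # stand-in for the Linear + tanh

# (batch, seq_len, feat) -> (batch, feat, seq_len): max_pool1d pools over the
# last axis, so the sequence dim must come last to max-pool over time.
pooled = F.max_pool1d(linear_out.permute(0, 2, 1), seq_len).squeeze(2)
print(pooled.shape)  # torch.Size([2, 24])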

No non-linearity in fastText model

In the fastText model:

def forward(self, x):
    embedded_sent = self.embeddings(x).permute(1,0,2)
    h = self.fc1(embedded_sent.mean(1))
    z = self.fc2(h)
    return self.softmax(z)

Why is there no call to relu on h?
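
For reference, the variant the question is asking about would look like this (a sketch of the quoted forward method, assuming the usual nn.Linear layers):

import torch.nn.functional as F

def forward(self, x):
    embedded_sent = self.embeddings(x).permute(1, 0, 2)
    h = F.relu(self.fc1(embedded_sent.mean(1)))  # non-linearity between the layers
    z = self.fc2(h)
    return self.softmax(z)

Note that without the ReLU, fc2(fc1(.)) collapses to a single linear map over the averaged embeddings, which actually matches the original fastText design, so the omission may be intentional.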

Model_Transformer

Hi, could you please share your reasoning behind "Only encoder part of Transformer model is used for classification"?
I applied this model to my dataset, and the accuracy is only 60%. I am wondering whether the number of layers is a cause.
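
For readers unfamiliar with the setup: classification needs a fixed-size representation rather than a generated sequence, so only the encoder stack is kept and a pooled output is fed to a classifier head. A minimal sketch of that pattern with torch.nn.TransformerEncoder (illustrative, not the repo's exact code; all hyperparameters below are placeholders):

import torch
import torch.nn as nn

class EncoderClassifier(nn.Module):
    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=2, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead)
        self.encoder = nn.TransformerEncoder(layer, num_layers)  # depth is tunable
        self.fc = nn.Linear(d_model, num_classes)

    def forward(self, x):                   # x: (seq_len, batch)
        h = self.encoder(self.embed(x))     # (seq_len, batch, d_model)
        return self.fc(h.mean(0))           # mean-pool over time -> class logits

model = EncoderClassifier(vocab_size=10000)
logits = model(torch.randint(0, 10000, (12, 3)))  # (batch=3, num_classes=4)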
