
Comments (7)

tonyswoo commented on August 17, 2024

Hello,

The three datasets I used to evaluate my implementation are the MTOP dataset, the multilingual ATIS dataset, and the IMDB dataset.

You can download the MTOP dataset here. The IMDB dataset can also be downloaded easily here. As for the multilingual ATIS dataset, getting access is a bit more involved: you need to create an LDC account, request the dataset, and wait for the request to be approved (this might be a manual process). The multilingual ATIS catalogue page is here.

If you have further questions, feel free to add a comment.

from pnlp-mixer.

tiendung commented on August 17, 2024

I tried several times but still cannot figure out how to run the training script on the IMDB dataset.
I got the following error:

t@medu pnlp-mixer % python3 run.py -c cfg/imdb_xs.yml -n imdb_xs -m train
  File "/Users/t/repos/pnlp-mixer/run.py", line 167, in <module>
    data_module = PnlpMixerDataModule(cfg.vocab, train_cfg, model_cfg.projection)
  File "/Users/t/repos/pnlp-mixer/dataset.py", line 20, in __init__
    self.tokenizer = BertWordPieceTokenizer(**vocab_cfg.tokenizer)
  File "/usr/local/lib/python3.9/site-packages/tokenizers/implementations/bert_wordpiece.py", line 30, in __init__
    tokenizer = Tokenizer(WordPiece(vocab, unk_token=str(unk_token)))
Exception: Error while initializing WordPiece: No such file or directory (os error 2)

I downloaded the IMDB dataset and put it at ./data/imdb:

t@medu pnlp-mixer % ll ./data/imdb
.rw-r--r-- t staff 826 KB Wed Apr 13 00:14:11 2011  imdb.vocab
.rw-r--r-- t staff 882 KB Sun Jun 12 05:54:43 2011  imdbEr.txt
.rw-r--r-- t staff 3.9 KB Sun Jun 26 07:18:03 2011  README
drwxr-xr-x t staff 224 B  Wed Apr 13 00:22:40 2011  test/
drwxr-xr-x t staff 320 B  Sun Jun 26 08:09:11 2011  train/

Can you give some hints?


tonyswoo commented on August 17, 2024

Hi,

Could you show me the configuration file (the .yml file) you are using?

I believe the issue is that the vocab file does not exist at the path provided in vocab.tokenizer.vocab of the configuration file.

If you wish to use the multilingual BERT vocabulary, the file is included in the repo at ./wordpiece/mbert_vocab.txt.
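For reference, a sketch of what that section of the configuration could look like (the key layout is inferred from the error messages and this comment, so double-check it against the actual cfg/imdb_xs.yml):

```yaml
# Relevant fragment only; other keys omitted.
vocab:
  tokenizer:
    vocab: ./wordpiece/mbert_vocab.txt  # must point at an existing vocab file
```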


tiendung commented on August 17, 2024

You are right. I needed to change the config to point to mbert_vocab.txt.


tiendung commented on August 17, 2024

Sorry to bother you again. Now I'm stuck at AttributeError: 'BertWordPieceTokenizer' object has no attribute 'pre_tokenizer'. I guess it's related to the tokenizer? My config file is https://github.com/telexyz/pnlp-mixer/blob/master/cfg/imdb_xs.yml

  File "/usr/local/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/Users/t/repos/pnlp-mixer/dataset.py", line 87, in __getitem__
    words = self.get_words(fields)
  File "/Users/t/repos/pnlp-mixer/dataset.py", line 109, in get_words
    return [w[0] for w in self.tokenizer.pre_tokenizer.pre_tokenize_str(self.normalize(fields[0]))][:self.max_seq_len]
AttributeError: 'BertWordPieceTokenizer' object has no attribute 'pre_tokenizer'


tonyswoo commented on August 17, 2024

Hi,

Which version of tokenizers are you using?


zzk0 commented on August 17, 2024

> Now I'm stuck at AttributeError: 'BertWordPieceTokenizer' object has no attribute 'pre_tokenizer'.

I have the same problem; the command below fixes it:

pip install tokenizers==0.11.4
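For context, a likely reason the version matters (hedged; the real tokenizers internals may differ): BertWordPieceTokenizer is a thin wrapper around a core Tokenizer object, and whether attribute lookups such as .pre_tokenizer are forwarded to the inner object varies between releases. A toy illustration of the two wrapper behaviors, with invented class names (this is not the actual tokenizers source):

```python
class PreTokenizer:
    """Stand-in for the inner pre-tokenizer: splits on whitespace."""
    def pre_tokenize_str(self, text):
        return [(w, None) for w in text.split()]

class CoreTokenizer:
    """Stand-in for the core Tokenizer that owns the pre_tokenizer."""
    def __init__(self):
        self.pre_tokenizer = PreTokenizer()

class WrapperNoForwarding:
    """Wrapper that keeps the core tokenizer private and does not
    forward attribute lookups: wrapper.pre_tokenizer raises
    AttributeError, like the failing version above."""
    def __init__(self):
        self._tokenizer = CoreTokenizer()

class WrapperWithForwarding(WrapperNoForwarding):
    """Wrapper that forwards unknown attributes to the core tokenizer,
    so wrapper.pre_tokenizer works as dataset.py expects."""
    def __getattr__(self, name):
        return getattr(self._tokenizer, name)

ok = WrapperWithForwarding()
print(ok.pre_tokenizer.pre_tokenize_str("great movie"))  # [('great', None), ('movie', None)]

broken = WrapperNoForwarding()
try:
    broken.pre_tokenizer
except AttributeError as e:
    print("AttributeError:", e)
```

Pinning tokenizers==0.11.4 simply selects a release whose wrapper exposes pre_tokenizer the way dataset.py expects.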

