ufal / augpt
DSTC9 Submission
License: MIT License
I am attempting to train an ablated model without back-translation, pre-training, and data cleaning, so I'm using the following command:

```
./train_multiwoz.py --train-dataset multiwoz-2.0-train --dev-dataset multiwoz-2.0-val --model gpt2 --response-loss unlikelihood --epochs 10 --fp16 --gradient-accumulation-steps 4
```
This works fine, but the subsequent generation step

```
./generate.py --model <model> --dataset multiwoz-2.0-test --file predictions.txt
```

throws the following error about 20% of the way through:
File "./generate.py", line 177, in <module>
orable_database_results=args.oracle_db)
File "./generate.py", line 131, in generate_predictions
conversation = pipeline(conversation)
File "/home/USER/augpt/pipelines.py", line 252, in __call__
for oracle_db, bs in zip(oracle_dbs_results, beliefs)]
File "/home/USER/augpt/pipelines.py", line 252, in <listcomp>
for oracle_db, bs in zip(oracle_dbs_results, beliefs)]
File "<string>", line 204, in __call__
File "<string>", line 191, in query_domain
sqlite3.OperationalError: no such column: destination
Is this something you encountered during your work, and do you have any thoughts on how I could go about debugging it? Thanks!
NB: Running the generation with jkulhanek/augpt-mw-20 works just fine.
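In case it helps narrow things down, this is how I have been inspecting the schema of the database file that the queries run against (a sketch only; the path below is a placeholder for wherever the MultiWOZ 2.0 database ends up locally):

```python
import sqlite3

# Placeholder path -- adjust to the actual location of the generated database file.
conn = sqlite3.connect("data/multiwoz-2.0/database.db")
cur = conn.cursor()

# List every table (one per domain) and its columns, to check whether
# the train domain really exposes a `destination` column.
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
for (table,) in cur.fetchall():
    cur.execute(f"PRAGMA table_info({table})")
    print(table, [row[1] for row in cur.fetchall()])
conn.close()
```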
Hi, can you clarify what is meant by: "In this case the expected number of GPUs is four, you may need to adjust learning_rate and/or gradient-accumulation-steps accordingly"? I would like to replicate your training conditions as closely as possible, so I will use 4 GPUs and wanted to check what the right learning rate and gradient accumulation values should be.
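For context, this is the scaling rule I am assuming (the per-GPU batch size below is a placeholder, not a value taken from train_multiwoz.py):

```python
# Effective batch size = per-GPU batch size * number of GPUs * accumulation steps.
# Under the usual rule, matching a 4-GPU run on a single GPU means quadrupling
# --gradient-accumulation-steps (or scaling the learning rate with the batch size).
per_gpu_batch = 8                    # placeholder value, not the repo's default
four_gpus = per_gpu_batch * 4 * 1    # 4 GPUs, no gradient accumulation
single_gpu = per_gpu_batch * 1 * 4   # 1 GPU, --gradient-accumulation-steps 4
assert four_gpus == single_gpu
```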
Perhaps the data version should not be hardcoded here in case one is training on MultiWOZ 2.0.
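Deriving the version from the dataset name might be enough; a rough sketch (the helper below is hypothetical, not the repo's actual code):

```python
import re

def multiwoz_version(dataset_name: str) -> str:
    """Derive the MultiWOZ version from a name like 'multiwoz-2.0-train'."""
    match = re.match(r"multiwoz-(\d+\.\d+)-", dataset_name)
    if match is None:
        raise ValueError(f"cannot infer MultiWOZ version from {dataset_name!r}")
    return match.group(1)

assert multiwoz_version("multiwoz-2.0-train") == "2.0"
assert multiwoz_version("multiwoz-2.1-test") == "2.1"
```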
Running the evaluation in the following fashion:

```
./generate.py --model jkulhanek/augpt-mw-21 --dataset multiwoz-2.1-test --file predictions.txt
./evaluate_multiwoz.py --file predictions.txt --dataset multiwoz-2.1-test
```
I get inform and success results somewhat lower than those reported in the paper (Table 1):

```
info: match: 0.8440, success: 0.6780
info: computing bleu
info: test bleu: 0.0000
info: test delex bleu: 0.1732
```
Is this to be expected? Is the jkulhanek/augpt-mw-21 checkpoint the same one used to obtain the results in Table 1?
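The test bleu: 0.0000 line also looks suspicious, so a direct cross-check of corpus BLEU might be worthwhile, e.g. with sacrebleu (a sketch; the file names and one-sentence-per-line format are assumptions, not the repo's own output format):

```python
import sacrebleu

# Hypothetical files: one sentence per line, hypotheses aligned with references.
with open("hypotheses.txt") as f:
    hypotheses = [line.strip() for line in f]
with open("references.txt") as f:
    references = [line.strip() for line in f]

# corpus_bleu takes the hypothesis list and a list of reference streams.
score = sacrebleu.corpus_bleu(hypotheses, [references])
print(score.score)  # BLEU on a 0-100 scale
```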
I believe this line can safely be commented out: it seems to serve no purpose, and the import doesn't work.
Is there any pretrained model for directly testing the performance using the ConvLab evaluation?