Comments (7)
Can you provide more information about the training? People in #4 reported higher performance than the results shown in the paper.
from glmp.
Yes. I ran the code with python3 myTrain.py -lr=0.001 -l=3 -hdd=128 -dr=0.2 -dec=GLMP -bsz=8 -ds=kvr and made no changes, which should achieve the best performance and be consistent with your paper.
My PyTorch version is 1.2.0.
But I have run this code about 10 times and only obtained entity F1 scores between 56 and 58.5, which is far from the score reported in your paper.
Now I have no idea how to reproduce the results reported in this paper. Am I missing something important?
The result in the paper using L=3 is 59.97, which is not "far away" from what you achieved. Have you tried other hyper-parameter settings: changing the hdd size from 128 to 256, using different dropout ratios, or using a smaller (4) or larger (16, 32) batch size? Also, the code currently early-stops on BLEU; if you are more interested in Entity F1, you can adjust that part. As I mentioned, people in #4 reported higher performance than the results shown in the paper. You can also PM them to ask how they did so and what hyper-parameters they are using.
Hopefully these give you some ideas of what you can do. Let me know what you can achieve after trying these. Side suggestion: it's better to be more polite when you are using other people's code and asking for help. :)
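The hyper-parameter ranges suggested above can be swept with a short script. This is only a sketch: the flags mirror the command quoted earlier in this thread, and the script just prints each command (drop the echo to actually run them):

```shell
# Grid over the suggested settings: hdd size, dropout, batch size.
# Prints the commands rather than running them; remove `echo` to launch.
sweep() {
  for hdd in 128 256; do
    for dr in 0.1 0.2 0.3; do
      for bsz in 4 8 16 32; do
        echo python3 myTrain.py -lr=0.001 -l=3 -hdd=$hdd -dr=$dr -dec=GLMP -bsz=$bsz -ds=kvr
      done
    done
  done
}
sweep
```

Redirecting each run's output to its own log file makes it easier to compare the final Entity F1 across settings afterwards.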
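Switching the early-stopping criterion from BLEU to Entity F1, as suggested above, amounts to tracking a different dev metric with patience. A minimal, self-contained sketch follows; the class and names are illustrative, not taken from the GLMP codebase:

```python
class EarlyStopper:
    """Track a dev metric and signal when training should stop.

    Illustrative only: GLMP's actual loop early-stops on BLEU by
    default; feeding Entity F1 into step() instead changes the
    criterion without touching the rest of the training code.
    """

    def __init__(self, patience=5):
        self.patience = patience          # allowed epochs without improvement
        self.best = float("-inf")         # best metric value seen so far
        self.bad_epochs = 0               # consecutive non-improving epochs

    def step(self, metric):
        """Record one epoch's dev metric; return True when it is time to stop."""
        if metric > self.best:
            self.best = metric            # new best: reset the patience counter
            self.bad_epochs = 0           # (this is also where you'd checkpoint)
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Example: stop after 2 epochs without Entity F1 improvement.
stopper = EarlyStopper(patience=2)
for epoch_f1 in [0.50, 0.55, 0.54, 0.53]:
    if stopper.step(epoch_f1):
        break
```

The same object works for BLEU or any higher-is-better metric; for a loss you would flip the comparison.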
@jasonwu0731 Thank you for your suggestions. I will try other hyper-parameter settings. BTW, if my words made you uncomfortable, I meant no offense. Sorry again :)
I will also ask how the people in #4 achieved better performance and let you know the final results.
Understood. I hope you get a response from #4 about how they obtained better results. After some internal effort, I was able to provide the trained model reported in the paper here. Hope that helps your research.
ACC SCORE: 0.12267657992565056
F1 SCORE: 0.5996873409698522
CAL F1: 0.6956311078262298
WET F1: 0.6257885219093696
NAV F1: 0.5298233847529625
BLEU SCORE: 14.79
Thanks again for your interest in our paper.
Jason
I am very grateful for your kindness; your efforts are very helpful to me. BTW, I also got a response from #4 just now. I will run this code more times with different hyper-parameter settings.
Again, thanks a lot. Have a good day :)