nlpersecjtu / ldsgm
Code for 'A Label Dependence-aware Sequence Generation Model for Multi-level Implicit Discourse Relation Recognition' (AAAI 2022)
Hi,
Thanks for your interesting work. I am confused about two parts of your implementation.
The first one is the data preprocessing. In the preprocess function, you maintain several "other" arrays, such as arg1_train_other and arg2_train_other. If a sample does not have a second-level sense label, it is assigned a default label and added to those arrays. During evaluation, you compute results on both the normal samples and the samples with default labels. This is fine for top-level evaluation, but not for the second level, because previous works usually consider only samples with gold second-level labels.
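To make the concern concrete, here is a minimal sketch of the evaluation convention from prior work: second-level accuracy is computed only over samples that carry a gold second-level label, skipping the default-labeled "other" samples. The function and field names are hypothetical, not taken from this repo.

```python
def second_level_accuracy(samples, predictions):
    """Accuracy over samples with a gold second-level sense label only.

    samples: list of dicts; samples without a gold second-level label
             simply lack the (hypothetical) 'sec_label' key.
    predictions: predicted second-level labels, aligned with samples.
    """
    correct, total = 0, 0
    for sample, pred in zip(samples, predictions):
        gold = sample.get("sec_label")
        if gold is None:
            # default-labeled sample: excluded from second-level scoring
            continue
        total += 1
        correct += int(pred == gold)
    return correct / total if total else 0.0
```

Including the default-labeled samples in the denominator would make the second-level numbers incomparable with previous papers.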
The second one is the construction of the graph. You mention in your paper that nodes have self-loop edges, but in the implementation there are no self-loops in the adjacency matrix you provided. This contradicts the description in the paper.
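For reference, adding a self-loop to every node is usually just adding the identity matrix to the adjacency matrix before normalization (the standard GCN-style trick); a minimal sketch, not the repo's actual code:

```python
import numpy as np

def add_self_loops(adj):
    """Return the adjacency matrix with a self-loop on every node (A + I)."""
    adj = np.asarray(adj, dtype=float)
    return adj + np.eye(adj.shape[0])
```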
Hi, when will the code be released? Thanks!
Hi,
(1) The data sizes in this paper are "the training set with 12,775 instances (Section 2-20), the validation set with 1,183 instances (Section 0-1), and the test set with 1,046 instances (Section 21-22)". Are these before or after processing?
*Processing here means: "Further, there exist 16 second-level labels, five of which with few training instances and no validation and test instance are removed. Therefore, we conduct an 11-way classification on the second-level labels." in the paper.
(2) In the paper "On the Importance of Word and Sentence Representation Learning in Implicit Discourse Relation Classification", the data sizes are train/dev/test: 12,362 / 1,183 / 1,046. Which one is correct, and is there any code to process the dataset?
Thank you!
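For anyone landing here: the section-based split quoted above (sections 2-20 train, 0-1 dev, 21-22 test) can be sketched as below. This is only an illustration of the split convention, not the authors' preprocessing code, and it does not explain the 12,775 vs. 12,362 discrepancy, which presumably comes from how multi-label instances and removed second-level labels are handled.

```python
def split_by_section(section_id):
    """Assign a PDTB 2.0 section (0-24) to a split per the quoted setup."""
    if 2 <= section_id <= 20:
        return "train"
    if section_id in (0, 1):
        return "dev"
    if section_id in (21, 22):
        return "test"
    return None  # remaining sections are typically unused in this setup
```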