This repository contains the source code for Fine-grained Fact Verification with Kernel Graph Attention Network (KGAT).
More information about the FEVER 1.0 shared task can be found on this website.
- Python 3.X
- fever_score
- PyTorch
- pytorch_pretrained_bert
- transformers
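A minimal setup sketch for the requirements above. The exact pip package names (in particular the one providing `fever_score`) and versions are assumptions and may differ from the authors' environment:

```shell
# Create an isolated environment (optional) and install the dependencies
# listed above. Package names are assumptions; pin versions to match the
# authors' setup if results differ.
python3 -m venv kgat-env
source kgat-env/bin/activate

pip install torch                     # PyTorch
pip install pytorch_pretrained_bert   # legacy BERT interface
pip install transformers              # HuggingFace Transformers
pip install fever-scorer              # provides fever_score (pip name assumed)
```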
- All data and BERT-based checkpoints can be found at Ali Drive.
- RoBERTa-based models and checkpoints can be found at Ali Drive.
- BERT-based ranker. Go to the `retrieval_model` folder for more information.
- Pre-train BERT with claim-evidence pairs. Go to the `pretrain` folder for more information.
- Our KGAT model. Go to the `kgat` folder for more information.
All results are listed on the Codalab leaderboard.
| User | Pre-train Model | Label Accuracy | FEVER Score |
|---|---|---|---|
| GEAR_single | BERT (Base) | 0.7160 | 0.6710 |
| a.soleimani.b | BERT (Large) | 0.7186 | 0.6966 |
| KGAT | RoBERTa (Large) | 0.7407 | 0.7038 |
KGAT performance with different pre-trained language models:
| Pre-train Model | Label Accuracy | FEVER Score |
|---|---|---|
| BERT (Base) | 0.7281 | 0.6940 |
| BERT (Large) | 0.7361 | 0.7024 |
| RoBERTa (Large) | 0.7407 | 0.7038 |
| CorefBERT (RoBERTa Large) | 0.7596 | 0.7230 |
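The two columns in the tables above measure different things. A minimal sketch of the distinction, assuming the standard FEVER definitions (label accuracy checks only the predicted label; the FEVER score additionally requires that the predicted evidence, capped at 5 sentences, fully covers at least one gold evidence set). The example data below is illustrative, not from the dataset:

```python
def label_accuracy(examples):
    """Fraction of claims whose predicted label matches the gold label."""
    correct = sum(1 for ex in examples if ex["predicted_label"] == ex["label"])
    return correct / len(examples)

def fever_score(examples, max_evidence=5):
    """Strict score: label must be correct AND, for SUPPORTS/REFUTES claims,
    the predicted evidence must cover a complete gold evidence set."""
    strict = 0
    for ex in examples:
        if ex["predicted_label"] != ex["label"]:
            continue
        if ex["label"] == "NOT ENOUGH INFO":
            strict += 1  # NEI claims need no evidence
            continue
        predicted = set(ex["predicted_evidence"][:max_evidence])
        # Each gold set is a list of (page, sentence_id) pairs; any one
        # fully covered set makes the prediction strictly correct.
        if any(set(gold) <= predicted for gold in ex["evidence"]):
            strict += 1
    return strict / len(examples)

examples = [
    {"label": "SUPPORTS", "predicted_label": "SUPPORTS",
     "evidence": [[("Page_A", 0)]],
     "predicted_evidence": [("Page_A", 0), ("Page_B", 3)]},
    {"label": "REFUTES", "predicted_label": "REFUTES",
     "evidence": [[("Page_C", 1)]],
     "predicted_evidence": [("Page_D", 2)]},  # right label, wrong evidence
    {"label": "NOT ENOUGH INFO", "predicted_label": "SUPPORTS",
     "evidence": [], "predicted_evidence": []},
]
print(label_accuracy(examples))  # 2/3: labels match for the first two claims
print(fever_score(examples))     # 1/3: only the first also has correct evidence
```

This is why FEVER Score is always at or below Label Accuracy in the tables: it is the stricter of the two metrics.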
```
@inproceedings{liu2020kernel,
  title={Fine-grained Fact Verification with Kernel Graph Attention Network},
  author={Liu, Zhenghao and Xiong, Chenyan and Sun, Maosong and Liu, Zhiyuan},
  booktitle={Proceedings of ACL},
  year={2020}
}
```
If you have any questions, suggestions, or bug reports, please email: