
codred's Introduction

CodRED

Dataset and baseline code for the EMNLP 2021 paper CodRED: A Cross-Document Relation Extraction Dataset for Acquiring Knowledge in the Wild

CodRED is the first human-annotated cross-document relation extraction (RE) dataset, designed to test RE systems' ability to acquire knowledge in the wild. CodRED has the following features:

  • it requires natural language understanding at different granularities, including coarse-grained document retrieval as well as fine-grained cross-document multi-hop reasoning;
  • it contains 30,504 relational facts associated with 210,812 reasoning text paths, and covers a broad range of balanced relations and long documents on diverse topics;
  • it provides strong supervision on the reasoning text paths for predicting the relation, to guide RE systems toward meaningful and interpretable reasoning;
  • it contains adversarially created hard NA instances to prevent RE models from predicting relations by inferring from entity names instead of the text.

Codalab

If you are interested in our dataset, you are welcome to join the Codalab competition at CodRED.

Baseline

Requirements:

pip install redis tqdm scikit-learn numpy
pip install transformers==4.3.3
pip install eveliver==1.21.0

Then download the following files: wiki_ent_link.jsonl, distant_documents.jsonl, popular_page_ent_link.jsonl to baseline/data/rawdata/:

wget https://thunlp.oss-cn-qingdao.aliyuncs.com/wiki_ent_link.jsonl
wget https://thunlp.oss-cn-qingdao.aliyuncs.com/distant_documents.jsonl
wget https://thunlp.oss-cn-qingdao.aliyuncs.com/popular_page_ent_link.jsonl
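The downloaded files are in JSON Lines format (one JSON object per line). A minimal sketch of loading and inspecting such a file, assuming only the JSONL structure (the field name in the synthetic sample is made up for illustration; the real files are large and have their own schemas):

```python
import json

def read_jsonl(path):
    """Yield one parsed JSON object per line of a JSON Lines file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Demonstrate on a tiny synthetic file rather than the real downloads.
with open("sample.jsonl", "w", encoding="utf-8") as f:
    f.write('{"title": "Le Mans"}\n{"title": "Thalys"}\n')

records = list(read_jsonl("sample.jsonl"))
print(len(records))  # 2
```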

To run the baseline (Table 3 in the paper, closed setting, end-to-end model):

cd baseline/data/
python load_data_doc.py
python redis_doc.py
cd ../codred-blend
python -m torch.distributed.launch --nproc_per_node=4 codred-blend.py --train --dev --per_gpu_train_batch_size 1 --per_gpu_eval_batch_size 1 --learning_rate 3e-5 --num_workers 2 --logging_step 10

The result is AUC=48.59, F1=51.99.
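The AUC here is presumably the area under the precision-recall curve over ranked relation predictions, with F1 taken at the best threshold (as in DocRED-style evaluation); under that assumption, a self-contained sketch of the computation:

```python
def pr_auc_and_best_f1(scores, labels):
    """Area under the precision-recall curve and best F1 over all thresholds.

    scores: confidence for each predicted fact; labels: 1 if the fact is correct.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = 0
    precisions, recalls = [], []
    for rank, i in enumerate(order, start=1):
        tp += labels[i]
        precisions.append(tp / rank)
        recalls.append(tp / total_pos)
    # Riemann sum of precision over recall increments.
    auc = precisions[0] * recalls[0]
    for k in range(1, len(order)):
        auc += precisions[k] * (recalls[k] - recalls[k - 1])
    best_f1 = max(2 * p * r / (p + r) for p, r in zip(precisions, recalls) if p + r > 0)
    return auc, best_f1

auc, f1 = pr_auc_and_best_f1([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0])
```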

Arguments:

  • --positive_only: Only use paths with positive relations.
  • --positive_ep_only: Only use entity pairs that have a positive path.
  • --no_doc_pair_supervision: Do not use path-level supervision.
  • --no_additional_marker: Do not use the additional markers [UNUSEDx].
  • --mask_entity: Replace entities with [MASK].
  • --single_path: Randomly choose one path for each entity pair.
  • --dsre_only: Only use intra-document relation prediction, without cross-document relation prediction.
  • --raw_only: Only use cross-document relation prediction, without intra-document relation prediction.
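For example, these flags can be appended to the base training command above; a hypothetical ablation run (this particular flag combination is chosen purely for illustration):

```shell
# Train only on entity pairs with a positive path, with entity names masked.
python -m torch.distributed.launch --nproc_per_node=4 codred-blend.py \
    --train --dev --per_gpu_train_batch_size 1 --per_gpu_eval_batch_size 1 \
    --learning_rate 3e-5 --num_workers 2 --logging_step 10 \
    --positive_ep_only --mask_entity
```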

To run experiments with evidence sentences:

cd ../codred-evidence
python -m torch.distributed.launch --nproc_per_node=4 codred-evidence.py --train --dev --per_gpu_train_batch_size 1 --per_gpu_eval_batch_size 1 --learning_rate 3e-5 --num_workers 2 --logging_step 10

The result is AUC=79.09, F1=73.76.

Cite

If you use the dataset or the code, please cite this paper:

@inproceedings{yao-etal-2021-codred,
    title = "{C}od{RED}: A Cross-Document Relation Extraction Dataset for Acquiring Knowledge in the Wild",
    author = "Yao, Yuan and Du, Jiaju and Lin, Yankai and Li, Peng and Liu, Zhiyuan and Zhou, Jie and Sun, Maosong",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    year = "2021",
    pages = "4452--4472",
}

codred's People

Contributors

dongbw18, jiajudu, yaoyuanthu


codred's Issues

Evaluating on the test set

Hi, it seems the released data only contains the train and dev sets. Could you give some instructions about when we can get test performance? Thanks

How to process multi-label samples in datasets

Hi,

Thanks for providing this amazing cross-document RE dataset! I have a small question about multi-label samples in both the training and dev sets.

I found that one entity pair and one text path can map to more than one relation. Here are some examples from the dev set (./rawdata/dev_dataset.json).

Q2718010#Q13646 ('Le Mans', 'Thalys') {'P137', 'P127'}
Q6488#Q67 ('IndiGo', 'Tom Enders') {'P176', 'n/a'}

I am wondering if this means that models are expected to have multi-label inference abilities under both the closed and open settings. I am also confused about how to deal with text paths that have both positive and NA relations.
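One way to quantify how often this happens is to group annotations by entity pair and collect the relation sets; a sketch using simplified (head, tail, relation) triples that mirror the examples quoted above (the real dev_dataset.json has a richer record format):

```python
from collections import defaultdict

# Simplified triples mirroring the two dev-set examples quoted above.
triples = [
    ("Q2718010", "Q13646", "P137"),
    ("Q2718010", "Q13646", "P127"),
    ("Q6488", "Q67", "P176"),
    ("Q6488", "Q67", "n/a"),
]

relations_by_pair = defaultdict(set)
for head, tail, rel in triples:
    relations_by_pair[(head, tail)].add(rel)

# Entity pairs annotated with more than one relation.
multi_label = {pair: rels for pair, rels in relations_by_pair.items()
               if len(rels) > 1}
```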

Which version of eveliver did you use? How can I avoid the following issue?

python codred-blend.py --train --dev --per_gpu_train_batch_size 1 --per_gpu_eval_batch_size 1 --learning_rate 3e-5 --num_workers 2 --logging_step 10

2022-04-08 22:13:30.272958: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
INFO:root:Process rank: -1, device: cuda, n_gpu: 1, distributed training: False, 16-bits training: False
Traceback (most recent call last):
  File "codred-blend.py", line 498, in <module>
    main()
  File "codred-blend.py", line 494, in main
    trainer.run()
  File "/.local/lib/python3.6/site-packages/eveliver/trainer.py", line 381, in run
    self.load_data()
  File "/.local/lib/python3.6/site-packages/eveliver/trainer.py", line 322, in load_data
    train_dataset, dev_dataset, test_dataset = self.callback.load_data()
ValueError: not enough values to unpack (expected 3, got 2)
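The traceback suggests a version mismatch: the installed eveliver trainer unpacks three datasets from load_data(), while the callback returns two, so the likely fix is installing eveliver==1.21.0 as pinned in the requirements above. For illustration, a defensive sketch of unpacking that tolerates either return shape (unpack_datasets is a hypothetical helper, not part of eveliver):

```python
def unpack_datasets(loaded):
    """Accept either (train, dev) or (train, dev, test) from a load_data() call."""
    if len(loaded) == 2:
        train, dev = loaded
        test = None  # no test set released, as noted in the issue above
    elif len(loaded) == 3:
        train, dev, test = loaded
    else:
        raise ValueError(f"expected 2 or 3 datasets, got {len(loaded)}")
    return train, dev, test

train, dev, test = unpack_datasets(("train_ds", "dev_ds"))
```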

How can I run codred-blend.py with a larger batch size?

I sincerely admire your outstanding contribution in this work!
However, I have met several problems when running codred-blend.py:
It seems that two different types of datasets (intra-doc/cross-doc) are mixed, so samples are processed one by one. Could you tell me how to modify the code so that I can process larger batches? With batch size 1, one epoch took me 6 hours.
