

(Comet-) ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs

Example for ATOMIC2020

Paper

(Comet-) Atomic 2020: On Symbolic and Neural Commonsense Knowledge Graphs.
Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, Yejin Choi
AAAI Conference on Artificial Intelligence, 2021

If you'd like to cite this paper, please use the reference below:

@inproceedings{Hwang2021COMETATOMIC2O,
  title={COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs},
  author={Jena D. Hwang and Chandra Bhagavatula and Ronan {Le Bras} and Jeff Da and Keisuke Sakaguchi and Antoine Bosselut and Yejin Choi},
  booktitle={AAAI},
  year={2021}
}

Data: ATOMIC 2020

The data for ATOMIC 2020 is available here. If you need the ATOMIC 2019 data (Sap et al., 2019), it is downloadable here.

Model: COMET-ATOMIC 2020

Trained COMET-BART model can be downloaded here.

Trained COMET-GPT2XL model can be downloaded here.

Codebase

We include the code used in the experiments for COMET-ATOMIC2020, for reproducibility and ease of use. Our models are based on the HuggingFace Transformers codebase, with minor adjustments to adapt the model to our data. Details can be found in the AAAI paper.

Setup

Run pip install -r requirements.txt to install the requirements for your Python instance. We recommend Conda to manage Python installs. Our codebase runs on Python 3.

It's recommended that you verify your environment is set up correctly before running the modeling code. You can do this via python models/comet_atomic2020_gpt2/comet_gpt2.py --test_install

The code for modeling is located in mosaic/infra/modeling. mosaic/datasets/KGDataset is used to convert the ATOMIC2020 CSV into a HuggingFace Datasets object.

Directory Overview

beaker_exp: Contains files needed to run experiments using Beaker (https://beaker.org/) instead of on your local machine.

human_eval: Contains HTML files for human evaluation on Amazon MTurk, as described in the AAAI paper.

models: Contains additional modeling files to reproduce the GPT2 and BART experiments. models/comet_atomic2020_bart contains a README and code to run COMET-BART2020.

scripts: Contains additional scripts (e.g. utils.py) used during experiments in the COMET-ATOMIC2020 paper.

split: Contains code used to make the test, train, and dev splits of ATOMIC2020 with Stratified Random Sampling.

system_eval: Contains code for automatic evaluation of generated entities.

Contributions

We welcome contributions to the COMET-ATOMIC2020 codebase. We encourage pull requests for code changes, and suggest filing a GitHub issue for questions or suggestions.

License

COMET-ATOMIC 2020 (codebase) is licensed under the Apache License 2.0. The ATOMIC 2020 dataset is licensed under CC-BY.

Contact

Email: jenah[at]allenai[dot]org

Contributors

csbhagav, jenahwang, keisks, rlebras


Issues

Update PyTorch-Lightning Requirement to 1.6.4

I'd like to use comet with the new PyTorch mps backend. The PyTorch mps implementation is still being shaken out, as is PyTorch-Lightning 1.6.4; however, some of the code works.
Can you create a development branch of comet that can use the latest version of PyTorch-Lightning, so we can run on the mps backend (GPU)?

T5 Implementation

Do you have an implementation of COMET with T5? I have seen the following paper mention it, but I couldn't find any code.

Understanding Few-Shot Commonsense Knowledge Models

Could you guide me on how to change the current code to use T5? I've seen T5 declared in some of the code, but it isn't actually used.

Thanks

Input format

The current COMET model often requires X wanted to as a prefix.
Without such a prefix, the model outputs are not good enough in some cases.

We might want to emphasize the prefix, or retrain the model so that it works well without it.
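In the meantime, one user-side workaround is to phrase queries with the template the model saw during training. A minimal sketch; the "{head} {relation} [GEN]" template is taken from the repo's generation example script, so verify it against your checkpoint:

```python
def build_query(head: str, relation: str) -> str:
    """Format a head/relation pair as a COMET query string:
    "{head} {relation} [GEN]". The [GEN] token and the exact template
    are assumptions based on the repo's generation example; adjust if
    your checkpoint was trained with a different format."""
    return f"{head.strip()} {relation} [GEN]"

# Example: phrasing the event the way the model saw it during training
print(build_query("PersonX wanted to go home", "xEffect"))
# PersonX wanted to go home xEffect [GEN]
```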

Can't reproduce results for BART

I downloaded the pre-trained COMET with BART and executed run.sh without --do-train, giving the path of the downloaded model to --model_name_or_path. However, the test RougeL is not what you reported in the paper.

I downloaded the test files from https://storage.googleapis.com/ai2-mosaic-public/projects/mosaic-kgs/data_atomic_2020_BART-format.tgz, as mentioned in another issue.
Here is the content of metrics.json:

            "test_avg_loss": 4.657896041870117,
            "test_avg_rouge1": 0.2141809276565792,
            "test_avg_rouge2": 0.03116943751406806,
            "test_avg_rougeL": 0.21284768699554912,
            "test_avg_gen_time": 0.003392871381747925,
            "test_avg_summ_len": 7.352367184676545,
            "avg_rouge1": 0.2141809276565792,
            "step_count": 1

COMET results

Could you please share or upload the COMET 2020 result files so that I can check the evaluation method you developed?
The file automatic_eval.py in system_eval refers to the experiment files in question.

Duplicate tuples and tuples with none tail node

There are duplicate tuples in all three splits of the data: ~68,626 in train, ~7,410 in dev, and ~8,473 in test (please correct me if I'm wrong). I wonder why. Should we just ignore the duplicates when using the data? One example:

['PersonX answers the question', 'xAttr', 'knowledgeable']
['PersonX answers the question', 'xAttr', 'knowledgeable']

Also, there are tuples with none tail node value (these none valued tuples are also part of the duplicate tuples). For example,

['PersonX accidentally threw ___', 'xIntent', 'none']
['PersonX accidentally threw ___', 'xIntent', 'none']

I wonder how these none values should be interpreted. Should we just ignore them? Or does it mean that the subject or head has no relation of the given type? For instance, in the case of PersonX accidentally threw ___, PersonX has no xIntent? If that's the case, then how should we treat the following cases:

['PersonX accidently left', 'oReact', 'none']
['PersonX accidently left', 'oReact', 'sad']

where we have the same relation once with a none tail node and once with a non-empty tail node.

Thanks.
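For readers hitting the same question: pending an official answer, dropping exact duplicates and literal "none" tails is a straightforward pre-processing step. A sketch (whether "none" should be removed or kept as an explicit "no inference" signal is a judgment call, not something the repo specifies):

```python
def clean_tuples(tuples):
    """Drop exact duplicate (head, relation, tail) triples and triples
    whose tail is the literal string 'none'. Treating 'none' as
    removable is an assumption; it may instead mark 'no inference'."""
    seen = set()
    cleaned = []
    for head, rel, tail in tuples:
        key = (head, rel, tail)
        if tail == "none" or key in seen:
            continue
        seen.add(key)
        cleaned.append([head, rel, tail])
    return cleaned

rows = [
    ["PersonX answers the question", "xAttr", "knowledgeable"],
    ["PersonX answers the question", "xAttr", "knowledgeable"],
    ["PersonX accidentally threw ___", "xIntent", "none"],
]
print(clean_tuples(rows))
# [['PersonX answers the question', 'xAttr', 'knowledgeable']]
```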

wandb.errors

I installed the wandb module, and there is an error:
wandb.errors.UsageError: api_key not configured (no-tty). call wandb.login(key=[your_api_key])
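Assuming the training scripts only use wandb for logging, the error can be avoided by configuring wandb through its documented environment variables before launching:

```shell
# Option 1: supply the API key non-interactively:
export WANDB_API_KEY="your-api-key-here"

# Option 2: disable wandb logging for this run entirely:
export WANDB_MODE=disabled
```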

lightning_base.py", line 59, in __init__: AttributeError: can't set attribute

I got this error running bash ./run.sh: AttributeError: can't set attribute

% ipython
Python 3.8.13 (default, Mar 28 2022, 06:16:26) 
Type 'copyright', 'credits' or 'license' for more information
IPython 8.2.0 -- An enhanced Interactive Python. Type '?' for help.
In [2]: import pytorch_lightning
In [3]: print(pytorch_lightning.__version__)
1.6.3

In [4]: import torch
In [5]: print(torch.__version__)
1.10.1

% bash ./run.sh           
Traceback (most recent call last):
  File "finetune.py", line 441, in <module>
    main(args)
  File "finetune.py", line 330, in main
    model: SummarizationModule = SummarizationModule(args)
  File "finetune.py", line 68, in __init__
    super().__init__(hparams, num_labels=None, mode=self.mode, **kwargs)
  File "/Users/davidlaxer/comet-atomic-2020/models/comet_atomic2020_bart/lightning_base.py", line 59, in __init__
    self.hparams = hparams
  File "/Users/davidlaxer/tensorflow-metal/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1225, in __setattr__
    object.__setattr__(self, name, value)
AttributeError: can't set attribute

Lightning-AI/pytorch-lightning#7443
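For anyone hitting this: newer pytorch-lightning releases made hparams a read-only property, so the plain assignment in lightning_base.py fails. A minimal illustration of the failure mode (no pytorch-lightning required), with the replacement usually suggested in such threads noted in comments; treat the exact fix as an assumption to verify against your pytorch-lightning version:

```python
# Recent pytorch-lightning exposes `hparams` as a read-only property,
# so plain attribute assignment raises AttributeError, as shown here
# with a stand-in class:
class Module:
    @property
    def hparams(self):
        return getattr(self, "_hparams", {})

m = Module()
try:
    m.hparams = {"learning_rate": 1e-3}  # same failure as lightning_base.py:59
    raised = False
except AttributeError:
    raised = True
print(raised)  # True

# The commonly suggested fix in lightning_base.py is to replace
#     self.hparams = hparams
# with pytorch-lightning's supported API:
#     self.save_hyperparameters(hparams)
```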

Reproducing the behavior of the AI2 demo

Hi,

Thanks for putting together the repo! I'm trying to make predictions with the model using the models/comet_atomic2020_bart/generation_example.py script. But the behavior is sometimes a little worse than that of the AI2 demo.

Could you please specify the model architecture/hyperparameter settings that are used by the demo? Thanks in advance!

How do you calculate BLEU for short targets?

The target or tail of many relations is short; for example, it can be adjectives like happy or satisfied for the xReact relation. How do you measure BLEU for these tails? As I read in the following implementation of BLEU,

mjpost/sacrebleu#150 (comment)

sentences must have at least 4 words for BLEU to be calculated. Moreover, because of the problems mentioned in previous issues, I couldn't use eval.py to evaluate the lists generated by your code.
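For context, a common workaround for very short targets is to score only unigram overlap (or apply smoothing to higher-order n-grams). A pure-Python sketch of unigram BLEU with a brevity penalty, purely as an illustration; it is not the paper's exact metric configuration:

```python
import math
from collections import Counter

def bleu1(candidate: str, references: list[str]) -> float:
    """Unigram (BLEU-1) modified precision with a brevity penalty.
    Higher-order n-grams are undefined for one-word tails like "happy",
    so falling back to unigrams is one common workaround."""
    cand = candidate.split()
    if not cand:
        return 0.0
    # Clip each unigram count by its maximum count across the references.
    max_ref = Counter()
    for ref in references:
        for w, c in Counter(ref.split()).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in Counter(cand).items())
    precision = clipped / len(cand)
    # Brevity penalty against the closest reference length.
    ref_len = min((len(r.split()) for r in references),
                  key=lambda n: (abs(n - len(cand)), n))
    bp = 1.0 if len(cand) >= ref_len else math.exp(1 - ref_len / len(cand))
    return bp * precision

print(bleu1("happy", ["happy", "satisfied"]))  # 1.0
```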

How to evaluate generated data

When I tried to run:

py eval.py --gen_file ../models/comet_atomic2020_gpt2/results/pred_generations.jsonl

I got the following error:

Traceback (most recent call last):
  File "eval.py", line 139, in <module>
    sources, references, predictions = preprocess(args.gen_file, keys)
  File "eval.py", line 109, in preprocess
    keys_list = keys if keys!=None else generations[0]["generations"].keys()
AttributeError: 'list' object has no attribute 'keys'

What are keys? The generated list has no keys.

You also pass keys to the evaluation, but what are they?

python anli_evaluation/eval.py --gen_file GENERATIONS_FILE --keys MODEL_KEYS[comma-separated list] --results_file RESULTS_FILE

I also checked automatic_eval.py in system_eval; it accepts different types of input (types 1, 2, 3), none of which matches the format of the generated results. The generated results have source, target, and generations, while the function expects other fields (such as tails, head, etc.).
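Judging from the traceback, preprocess() expects each record's generations field to be a dict keyed by model name (the --keys values), while the GPT-2 script writes a bare list. A sketch of a normalization shim, assuming that interpretation is correct; the key name "comet" is a placeholder for whatever you pass to --keys:

```python
import json

def normalize_records(jsonl_lines, key="comet"):
    """Wrap list-valued "generations" fields into a dict keyed by a
    model name, matching what eval.py's preprocess() appears to expect
    when it calls generations[0]["generations"].keys()."""
    records = []
    for line in jsonl_lines:
        rec = json.loads(line)
        if isinstance(rec.get("generations"), list):
            rec["generations"] = {key: rec["generations"]}
        records.append(rec)
    return records

recs = normalize_records(
    ['{"source": "PersonX pays ___ xNeed [GEN]", "generations": ["to have money"]}']
)
print(recs[0]["generations"])  # {'comet': ['to have money']}
```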

How to use ATOMIC to generate commonsense, just like the 'predict' button in your demo?

Hi, I'm quite interested in the result of applying COMET to my own dataset. I've checked your demo and found that I can do Commonsense Inferences about People and Events after entering a sentence and clicking 'predict'. But now I want to run inference for many sentences automatically; what should I edit in the corresponding .py files to reach this goal? Looking forward to your reply!
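One low-effort route, rather than editing the repo's files, is to wrap the existing generation entry point in a loop over your sentences. A sketch with the model call stubbed out; the "{head} {relation} [GEN]" query template is an assumption taken from the repo's generation example script:

```python
def batch_infer(events, relations, generate_fn, batch_size=16):
    """Run COMET-style inference over many (event, relation) pairs.
    generate_fn stands in for your loaded model's generation call,
    e.g. the helper in models/comet_atomic2020_bart/generation_example.py.
    Verify the query template against your checkpoint."""
    queries = [f"{e} {r} [GEN]" for e in events for r in relations]
    results = []
    for i in range(0, len(queries), batch_size):
        results.extend(generate_fn(queries[i:i + batch_size]))
    return dict(zip(queries, results))

# Usage with a dummy generate_fn; swap in the real model call.
out = batch_infer(["PersonX drinks coffee"], ["xEffect"],
                  lambda batch: [["stays awake"] for _ in batch])
print(out)  # {'PersonX drinks coffee xEffect [GEN]': ['stays awake']}
```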

Using comet-bart for zero-shot entailment?

Consider
sent1 = "X drives too fast" and
sent2 = "X is pulled over by a cop".
Now we know that "sent1 happens before sent2" is true. Is there any zero-shot way of finding out whether this is true or not?
Also, what if in ATOMIC we have sent1 -> r1 -> r2 -> ... -> rk -> sent2? Is there a way to find this out from COMET? I don't want to look it up in ATOMIC directly, because sent1 and sent2 can be sentences outside of ATOMIC; that's where COMET would be useful.

How to get .source and .target file at comet_atomic2020_bart

Hello sir.

I tried to run your code that uses the BART model to generate knowledge triples.

In your code, models/comet_atomic2020_bart/finetune.py requires a "train.source" file and a "train.target" file...

However, I couldn't figure out how to get these files.

How can I get these files?

Thanks.
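Pending an official answer, one can derive the parallel files from the triples. A sketch, assuming a line-by-line "{head} {relation}" → tail layout, which should be verified against the released data_atomic_2020_BART-format.tgz files before training on it:

```python
def write_bart_files(triples, prefix="train"):
    """Write {prefix}.source / {prefix}.target in a parallel-file
    layout: one "{head} {relation}" query per line in .source and the
    corresponding tail per line in .target. The exact template (and
    whether a [GEN] marker belongs in .source) is an assumption."""
    with open(f"{prefix}.source", "w") as src, open(f"{prefix}.target", "w") as tgt:
        for head, rel, tail in triples:
            src.write(f"{head} {rel}\n")
            tgt.write(f"{tail}\n")
```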

Missing files in system_eval

Hello, I'm trying to run "automatic_eval.py" in "system_eval" directory to see its behavior.
However, it seems to require modules that are not included in "system_eval".

I found "utils.py" from "split/utils.py", but I couldn't find the "evaluation" module which you import in "automatic_eval.py" (from evaluation.eval import QGEvalCap).
Can you provide these files?

Also, can I know how the data (json file) that is used in "main()" can be obtained?

Thanks.

--Edit--
I have found a repository at https://github.com/xinyadu/nqg which seems to include all required components.
Is this the right repository?

Thanks.
