lhrlab / nqe

[AAAI 2023] Official resources of "NQE: N-ary Query Embedding for Complex Query Answering over Hyper-relational Knowledge Graphs".

Home Page: https://doi.org/10.1609/aaai.v37i4.25576

License: MIT License

Language: Python (100.00%)

Topics: first-order-logic pytorch multi-hop-reasoning logical-reasoning transformer knowledge-graph hyper-relational fuzzy-logic query-embedding

nqe's Introduction

NQE

Official resources of "NQE: N-ary Query Embedding for Complex Query Answering over Hyper-Relational Knowledge Graphs" by Haoran Luo, Haihong E, Yuhao Yang, Gengxian Zhou, Yikai Guo, Tianyu Yao, Zichen Tang, Xueyuan Lin, and Kaiyang Wan. AAAI 2023 [paper].

Overview

An example of an n-ary FOL query:
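Since the figure is not reproduced here, the formula below is an illustrative sketch of what such a query looks like: one atom is a hyper-relational fact with a qualifier pair attached, and the atoms are combined by conjunction. The relation and entity names are invented for illustration and are not taken from the paper's figure.

% Illustrative n-ary FOL query (invented names): "At which institution
% V_? was the 1970 winner V of prize P educated?"  The first atom
% carries the qualifier pair (point_in_time: 1970) beyond the basic
% (subject, relation, object) triple.
q[V_?] = V_? \, . \, \exists V : \;
    \mathrm{win}(P,\, V;\ \mathrm{point\_in\_time}\!:1970)
    \;\land\; \mathrm{educated\_at}(V,\, V_?)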

The 16 kinds of n-ary FOL queries in WD50K-NFOL are shown in the figure in the repository.

Setup

Default implementation environment

  • Linux (SSH) + Python 3.7.13 + PyTorch 1.8.1 + CUDA 10.2
pip install torch==1.8.1+cu102 torchvision==0.9.1+cu102 torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
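If you want to confirm the installed versions match the tested setup, a minimal check (not part of the repository) is:

import torch

# Verify the PyTorch / CUDA versions match the tested environment.
print(torch.__version__)          # expected: 1.8.1+cu102
print(torch.version.cuda)         # expected: 10.2
print(torch.cuda.is_available())  # should be True on a GPU machine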

Install Dependencies

Install the remaining dependencies:

pip install -r requirements.txt

Configure the Dataset

We tested the effectiveness of our model on two datasets, WD50K-QE and WD50K-NFOL; a quick sanity check of the extracted data follows the list.

  • The WD50K-QE dataset was created with the multi-hop reasoning method StarQE; it covers multi-hop reasoning with the conjunction logic operation. We call it "wd50k_qe" in the code. You can download the WD50K-QE dataset file from TeraBox or Baidu Netdisk and unzip it:
unzip wd50k_qe.zip -d data/
  • WD50K-NFOL is a hyper-relational dataset we created, covering the logical operations of conjunction, disjunction, and negation, as well as their combinations. We call it "wd50k_nfol" in the code. You can download the WD50K-NFOL dataset file from TeraBox or Baidu Netdisk and unzip it:
unzip wd50k_nfol.zip -d data/
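After unzipping, you can sanity-check the layout with a few lines of Python. This is a minimal sketch that only assumes the archives extract to data/wd50k_qe and data/wd50k_nfol, as suggested by the unzip commands above:

import os

# Report whether each expected dataset directory was extracted under data/.
for name in ("wd50k_qe", "wd50k_nfol"):
    path = os.path.join("data", name)
    print(f"{path}: {'found' if os.path.isdir(path) else 'missing'}")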

Generate the Groundtruth

Next, generate the ground truth of the chosen dataset for evaluation. If you do not change the dataset, you can skip this step: the zip files above already contain the ground truth in the "gt" file, produced by the following commands (a conceptual sketch of the idea follows the commands).

  • For WD50K-QE dataset:
python src/generate_groundtruth.py --dataset wd50k_qe
  • For WD50K-NFOL dataset:
python src/generate_groundtruth.py --dataset wd50k_nfol
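Conceptually, the ground truth maps every query to its complete answer set over the graph, so that known answers can be filtered out when ranking. The sketch below shows that idea for plain 1p (single-atom) queries; the input format ((head, relation, tail) ids) is assumed for illustration, and the real src/generate_groundtruth.py additionally handles qualifiers and the other query types:

from collections import defaultdict

def build_1p_groundtruth(triples):
    """Map every (head, relation) query to the set of all true tails.

    `triples` is an assumed iterable of (head, relation, tail) ids;
    the real dataset also carries qualifier pairs on each fact.
    """
    answers = defaultdict(set)
    for h, r, t in triples:
        answers[(h, r)].add(t)
    return answers

# Example: two facts share the same (head, relation) query.
gt = build_1p_groundtruth([(0, 5, 7), (0, 5, 9), (1, 2, 3)])
print(gt[(0, 5)])  # {7, 9}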

Model Training

You can train the query embedding model using "src/map_iter_qe.py" (a simplified sketch of the training loop follows the commands).

  • For WD50K-QE dataset:
python src/map_iter_qe.py --dataset wd50k_qe --epoch 300 --gpu_index 0
  • For WD50K-NFOL dataset:
python src/map_iter_qe.py --dataset wd50k_nfol --epoch 50 --gpu_index 0
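For intuition, training iterates over (query, answer) batches, embeds each query, and scores every entity against it. The toy loop below illustrates that shape with invented names (ToyQueryEmbedding, random id batches); the real NQE encodes n-ary atoms with a Transformer and combines them with fuzzy-logic operators, which this sketch does not attempt:

import torch
import torch.nn as nn

# Hypothetical stand-in for a query-embedding model.
class ToyQueryEmbedding(nn.Module):
    def __init__(self, num_entities, dim=256):
        super().__init__()
        self.entity = nn.Embedding(num_entities, dim)
        self.encoder = nn.Linear(dim, dim)  # stand-in for the query encoder

    def forward(self, anchor_ids):
        q = self.encoder(self.entity(anchor_ids))  # query embeddings
        return q @ self.entity.weight.t()          # scores over all entities

model = ToyQueryEmbedding(num_entities=1000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative step on random ids (replace with real query batches).
anchors = torch.randint(0, 1000, (32,))
answers = torch.randint(0, 1000, (32,))
loss = loss_fn(model(anchors), answers)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))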

Evaluation

After training, you can run prediction alone with "src/map_iter_qe.py" by setting the "do_learn" argument to False and the "do_predict" argument to True. In this case, select the checkpoint file you want to use and pass it through the "prediction_ckpt" argument (a sketch of the ranking metric follows the commands).

  • For WD50K-QE dataset:
python src/map_iter_qe.py --dataset wd50k_qe --do_learn False --do_predict True --prediction_ckpt ckpts/wd50k_qe-train_tasks-1p-best-valid-DIM256.ckpt --prediction_tasks 1p,2p,3p,2i,3i,pi,ip --gpu_index 0
  • For WD50K-NFOL dataset:
python src/map_iter_qe.py --dataset wd50k_nfol --do_learn False --do_predict True --prediction_ckpt ckpts/wd50k_nfol-train_tasks-1p-best-valid-DIM256.ckpt --prediction_tasks 1p,2p,3p,2i,3i,pi,ip,2u,up,2cp,3cp,2in,3in,inp,pin,pni --gpu_index 0
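Prediction reports ranking metrics per task. The helper below sketches the standard filtered-ranking computation (the basis of MRR and Hits@K) under assumed inputs; it is an illustration of the metric, not code from the repository:

import torch

def filtered_rank(scores, answer, other_answers):
    """Rank of `answer` after filtering out the other known answers.

    scores: (num_entities,) tensor of model scores for one query.
    answer: the entity id being evaluated.
    other_answers: ids of the remaining true answers to mask out.
    """
    scores = scores.clone()
    scores[list(other_answers)] = float("-inf")  # filtered setting
    return int((scores > scores[answer]).sum()) + 1

# Example: entity 2 is the answer; entity 0 also answers the query.
scores = torch.tensor([9.0, 1.0, 5.0, 7.0])
rank = filtered_rank(scores, answer=2, other_answers={0})
print(1.0 / rank)   # reciprocal rank (contribution to MRR)
print(rank <= 10)   # contribution to Hits@10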

BibTex

If you find this work is helpful for your research, please cite:

@article{luo2023nqe,
  title={NQE: N-ary Query Embedding for Complex Query Answering over Hyper-Relational Knowledge Graphs},
  volume={37},
  number={4},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  url={https://ojs.aaai.org/index.php/AAAI/article/view/25576},
  DOI={10.1609/aaai.v37i4.25576},
  author={Luo, Haoran and E, Haihong and Yang, Yuhao and Zhou, Gengxian and Guo, Yikai and Yao, Tianyu and Tang, Zichen and Lin, Xueyuan and Wan, Kaiyang},
  year={2023},
  month={Jun.},
  pages={4543-4551}
}

For further questions, please contact [email protected], or WeChat: lhr1846205978.

nqe's People

Contributors

lhrlab

Forkers

kaiyangwan

nqe's Issues

Cannot reproduce the performance?

I ran the experiments with the parameters you provided, but I cannot reproduce the reported results. Can you help me? The results are below:
[results screenshot]

Cannot evaluate inp query

Hi, I followed your guide to train and evaluate the model with 1p training, but something goes wrong when evaluating the inp query. Could you help me? Thanks.
Besides, how long does it take to train all queries?
[error screenshot]
