
MKG_Analogy

Code and datasets for the ICLR 2023 paper "Multimodal Analogical Reasoning over Knowledge Graphs"

Overview

In this work, we propose a new task of multimodal analogical reasoning over knowledge graphs. An overview of the Multimodal Analogical Reasoning task is shown below:

We provide a knowledge graph to support the task and further divide it into single and blended patterns. Note that the relations marked by dashed arrows ($\dashrightarrow$) and the text in parentheses under the images are for annotation only and are not provided in the input.
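
Concretely, each instance supplies an analogy example pair and a question entity, and the model must predict the answer entity. The field names and the sample analogy below are purely illustrative, not the actual MARS data format:

```python
# Illustrative instance layout: (example head, example tail) :: (question, ?)
instance = {
    "example": ("sun", "solar_system"),  # the analogy example pair (e_h, e_t)
    "question": "nucleus",               # the question entity e_q
    "answer": "atom",                    # the entity to be predicted
}
```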

Requirements

pip install -r requirements.txt

Data Collection and Preprocessing

To support the multimodal analogical reasoning task, we collect a multimodal knowledge graph dataset MarKG and a Multimodal Analogical ReaSoning dataset MARS. A visual outline of the data collection is shown in the following figure:

We collect the datasets in the following steps:

  1. Collect Analogy Entities and Relations
  2. Link to Wikidata and Retrieve Neighbors
  3. Acquire and Validate Images
  4. Sample Analogical Reasoning Data
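
As a rough illustration of step 4, analogy quadruples can be sampled by pairing entity pairs that share the same relation. The toy triples and the helper below are hypothetical, not the actual MarKG schema or sampling procedure:

```python
import itertools

# Hypothetical toy knowledge graph: (head, relation, tail) triples.
triples = [
    ("sun", "center_of", "solar_system"),
    ("nucleus", "center_of", "atom"),
    ("wheel", "part_of", "car"),
    ("wing", "part_of", "plane"),
]

def sample_analogies(triples):
    """Pair triples sharing a relation into (a, b, c, d) quadruples, read a : b :: c : d."""
    by_rel = {}
    for h, r, t in triples:
        by_rel.setdefault(r, []).append((h, t))
    quads = []
    for pairs in by_rel.values():
        # Every ordered pairing of two entity pairs under the same relation.
        for (a, b), (c, d) in itertools.permutations(pairs, 2):
            quads.append((a, b, c, d))
    return quads

quads = sample_analogies(triples)
# e.g. ("sun", "solar_system", "nucleus", "atom") is one sampled analogy
```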

The statistics of the two datasets are shown in the following figures:

We put the text data under MarT/dataset/. The image data can be downloaded from Google Drive or Baidu Pan (TeraBox) (code: 7hoc) and should be placed in MarT/dataset/MARS/images. Please refer to MarT for details.

The expected structure of files is:

MKG_Analogy
 |-- M-KGE	# multimodal knowledge representation methods
 |    |-- IKRL_TransAE   
 |    |-- RSME
 |-- MarT
 |    |-- data          # data process functions
 |    |-- dataset
 |    |    |-- MarKG    # knowledge graph data
 |    |    |-- MARS     # analogical reasoning data
 |    |-- lit_models    # pytorch_lightning models
 |    |-- models        # source code of models
 |    |-- scripts       # running scripts
 |    |-- tools         # tool function
 |    |-- main.py       # main function
 |-- resources   # image resources
 |-- requirements.txt
 |-- README.md

Evaluate on Benchmark Methods

We select some baseline methods to establish the initial benchmark results on MARS, including multimodal knowledge representation methods (IKRL, TransAE, RSME), pre-trained vision-language models (VisualBERT, ViLBERT, ViLT, FLAVA) and a multimodal knowledge graph completion method (MKGformer).

In addition, we follow the structure-mapping theory and treat Abduction-Mapping-Induction as explicit pipeline steps for the multimodal knowledge representation methods. For transformer-based methods, we further propose MarT, a novel framework that implicitly combines these three steps to accomplish the multimodal analogical reasoning task end-to-end, which avoids error propagation during analogical reasoning. An overview of the baseline methods can be seen in the figure above.
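
A minimal sketch of the explicit Abduction-Mapping-Induction pipeline, using toy TransE-style embeddings (where tail ≈ head + relation). The vectors and the nearest-neighbour scoring are illustrative only, not the paper's actual models:

```python
# Toy 2-D entity embeddings; in TransE-style models, tail ≈ head + relation.
emb = {
    "sun": [0.0, 0.0], "solar_system": [1.0, 1.0],
    "nucleus": [0.2, 0.1], "atom": [1.2, 1.1],
}

def sub(u, v): return [a - b for a, b in zip(u, v)]
def add(u, v): return [a + b for a, b in zip(u, v)]
def dist(u, v): return sum((a - b) ** 2 for a, b in zip(u, v))

def abduction(head, tail):
    # Abduction: infer the latent relation vector from the example pair.
    return sub(emb[tail], emb[head])

def mapping_induction(rel, question):
    # Mapping: transfer the relation to the question entity.
    target = add(emb[question], rel)
    # Induction: predict the nearest entity to the transferred vector.
    return min(emb, key=lambda e: float("inf") if e == question else dist(emb[e], target))

rel = abduction("sun", "solar_system")      # abduction
answer = mapping_induction(rel, "nucleus")  # mapping + induction -> "atom"
```

An error in the abduction step propagates directly into mapping and induction, which is the failure mode MarT's end-to-end formulation is designed to avoid.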

Multimodal Knowledge Representation Methods

1. IKRL

We reproduce the IKRL model within the TransAE framework. To evaluate IKRL, run the following:

cd M-KGE/IKRL_TransAE
python IKRL.py

You can choose pre-train/fine-tune and TransE/ANALOGY by modifying the finetune and analogy parameters in IKRL.py, respectively.

2. TransAE

To evaluate TransAE, run the following:

cd M-KGE/IKRL_TransAE
python TransAE.py

You can choose pre-train/fine-tune and TransE/ANALOGY by modifying the finetune and analogy parameters in TransAE.py, respectively.

3. RSME

We only provide part of the data for RSME. To evaluate RSME, first generate the full data with the following scripts:

cd M-KGE/RSME
python image_encoder.py  # -> analogy_vit_best_img_vec.pickle
python utils.py          # -> img_vec_id_analogy_vit.pickle

First, pre-train the models on MarKG:

bash run.sh

Then modify the --checkpoint parameter and fine-tune the models on MARS:

bash run_finetune.sh

For more training details about the above models, please refer to their official repositories.

Transformer-based Methods

We leverage the MarT framework for transformer-based models. MarT consists of two stages: pre-training and fine-tuning.

To speed up training, we encode the image data in advance with the following script (note that the encoded data is about 7 GB):

cd MarT
python tools/encode_images_data.py
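
The pre-encoding step amounts to caching one feature vector per image so that training never touches raw pixels. A stdlib-only sketch of that caching pattern follows; the encode_image stub and file names are hypothetical stand-ins for the real vision encoder and output of encode_images_data.py:

```python
import os
import pickle

def encode_image(path):
    # Placeholder: the real script would run a vision encoder (e.g. a ViT) here.
    return [float(len(path))]  # dummy 1-D "feature"

def encode_all(image_dir, out_file, paths=None):
    """Encode every image once and cache all features in a single pickle."""
    paths = paths if paths is not None else sorted(os.listdir(image_dir))
    features = {p: encode_image(p) for p in paths}
    with open(out_file, "wb") as f:
        pickle.dump(features, f)
    return features

# Hypothetical usage with two fake image names:
feats = encode_all(".", "features.pkl", paths=["img_a.jpg", "img_b.png"])
```

Trading one up-front pass (and ~7 GB of disk) for per-step encoder calls is what makes the subsequent pre-training and fine-tuning runs fast.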

Taking MKGformer as an example, first pre-train the model with the following script:

bash scripts/run_pretrain_mkgformer.sh

After pre-training, fine-tune the model with the following script:

bash scripts/run_finetune_mkgformer.sh

🍓 We provide the best checkpoints of the transformer-based models from the pre-training and fine-tuning phases on Google Drive. Download them and add --only_test in scripts/run_finetune_xxx.sh to run test-only experiments.

Citation

If you use or extend our work, please cite the paper as follows:

@inproceedings{zhang2023multimodal,
  title={Multimodal Analogical Reasoning over Knowledge Graphs},
  author={Ningyu Zhang and Lei Li and Xiang Chen and Xiaozhuan Liang and Shumin Deng and Huajun Chen},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=NRHajbzg8y0P}
}
