
3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment

Ziyu Zhu, Xiaojian Ma, Yixin Chen, Zhidong Deng📧, Siyuan Huang📧, Qing Li📧

This repository is the official implementation of the ICCV 2023 paper "3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment".

Paper | arXiv | Project | HuggingFace Demo | Checkpoints

Abstract

3D vision-language grounding (3D-VL) is an emerging field that aims to connect the 3D physical world with natural language, which is crucial for achieving embodied intelligence. Current 3D-VL models rely heavily on sophisticated modules, auxiliary losses, and optimization tricks, which calls for a simple and unified model. In this paper, we propose 3D-VisTA, a pre-trained Transformer for 3D Vision and Text Alignment that can be easily adapted to various downstream tasks. 3D-VisTA simply utilizes self-attention layers for both single-modal modeling and multi-modal fusion without any sophisticated task-specific design. To further enhance its performance on 3D-VL tasks, we construct ScanScribe, the first large-scale 3D scene-text pairs dataset for 3D-VL pre-training. ScanScribe contains 2,995 RGB-D scans for 1,185 unique indoor scenes originating from ScanNet and 3R-Scan datasets, along with paired 278K scene descriptions generated from existing 3D-VL tasks, templates, and GPT-3. 3D-VisTA is pre-trained on ScanScribe via masked language/object modeling and scene-text matching. It achieves state-of-the-art results on various 3D-VL tasks, ranging from visual grounding and dense captioning to question answering and situated reasoning. Moreover, 3D-VisTA demonstrates superior data efficiency, obtaining strong performance even with limited annotations during downstream task fine-tuning.

Install

  1. Create the conda environment:
conda env create --name 3dvista --file=environments.yml
  2. Build and install the PointNet++ ops:
cd vision/pointnet2
python3 setup.py install
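
A quick import check can confirm the extension built correctly. This is a minimal sketch, not part of the repository; the top-level module name "pointnet2" is an assumption and should match whatever name vision/pointnet2/setup.py actually registers.

# Sanity check (sketch): confirm the compiled pointnet2 ops are importable.
# "pointnet2" is an assumed module name; adjust it to the name setup.py installs.
import importlib
try:
    mod = importlib.import_module("pointnet2")
    print("pointnet2 OK:", mod.__file__)
except ImportError as err:
    print("pointnet2 not importable:", err)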

Prepare dataset

  1. Follow Vil3dref to download the ScanNet data into data/scanfamily/scan_data; the folder should look like this:
./data/scanfamily/scan_data/
├── instance_id_to_gmm_color
├── instance_id_to_loc
├── instance_id_to_name
└── pcd_with_global_alignment
  2. Download the ScanRefer, ReferIt3D (Nr3D/Sr3D), ScanQA, and SQA3D annotations, and put them under data/scanfamily/annotations:
data/scanfamily/annotations/
├── meta_data
│   ├── cat2glove42b.json
│   ├── scannetv2-labels.combined.tsv
│   ├── scannetv2_raw_categories.json
│   ├── scanrefer_corpus.pth
│   └── scanrefer_vocab.pth
├── qa
│   ├── ScanQA_v1.0_test_w_obj.json
│   ├── ScanQA_v1.0_test_wo_obj.json
│   ├── ScanQA_v1.0_train.json
│   └── ScanQA_v1.0_val.json
├── refer
│   ├── nr3d.jsonl
│   ├── scanrefer.jsonl
│   ├── sr3d+.jsonl
│   └── sr3d.jsonl
├── splits
│   ├── scannetv2_test.txt
│   ├── scannetv2_train.txt
│   └── scannetv2_val.txt
└── sqa_task
    ├── answer_dict.json
    └── balanced
        ├── v1_balanced_questions_test_scannetv2.json
        ├── v1_balanced_questions_train_scannetv2.json
        ├── v1_balanced_questions_val_scannetv2.json
        ├── v1_balanced_sqa_annotations_test_scannetv2.json
        ├── v1_balanced_sqa_annotations_train_scannetv2.json
        └── v1_balanced_sqa_annotations_val_scannetv2.json
  3. Download all checkpoints and put them under project/pretrain_weights (a quick layout check follows the table):

Checkpoint    Link    Note
Pre-trained   link    3D-VisTA pre-trained checkpoint.
ScanRefer     link    Fine-tuned on ScanRefer from the pre-trained checkpoint.
ScanQA        link    Fine-tuned on ScanQA from the pre-trained checkpoint.
Sr3D          link    Fine-tuned on Sr3D from the pre-trained checkpoint.
Nr3D          link    Fine-tuned on Nr3D from the pre-trained checkpoint.
SQA           link    Fine-tuned on SQA from the pre-trained checkpoint.
Scan2Cap      link    Fine-tuned on Scan2Cap from the pre-trained checkpoint.
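
Before launching a long run, it can save time to verify that the layout above is in place. The following is a minimal sketch, not part of the official codebase; the paths are representative entries taken from the trees and table above.

# Layout check (sketch): each path below comes from the directory trees
# and the checkpoint table in this section; extend the list as needed.
import os

expected = [
    "data/scanfamily/scan_data/pcd_with_global_alignment",
    "data/scanfamily/annotations/refer/scanrefer.jsonl",
    "data/scanfamily/annotations/qa/ScanQA_v1.0_train.json",
    "data/scanfamily/annotations/sqa_task/answer_dict.json",
    "project/pretrain_weights",  # downloaded checkpoints go here
]
missing = [p for p in expected if not os.path.exists(p)]
if missing:
    print("Missing paths:\n" + "\n".join(missing))
else:
    print("Dataset and checkpoint layout looks complete.")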

Run 3D-VisTA

To run 3D-VisTA, use the following command, where {task} is one of scanrefer, scanqa, sr3d, nr3d, sqa, and scan2cap:

python3 run.py --config project/vista/{task}_config.yml
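
To sweep all downstream tasks with the same entry point, a small driver can shell out to the command above once per config. This is a sketch, assuming only the task names and the config naming pattern shown above:

# Run every downstream task config in sequence (sketch).
import subprocess

TASKS = ["scanrefer", "scanqa", "sr3d", "nr3d", "sqa", "scan2cap"]

for task in TASKS:
    config = f"project/vista/{task}_config.yml"
    print(f"==> {task}: {config}")
    subprocess.run(["python3", "run.py", "--config", config], check=True)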

Acknowledgement

We would like to thank the authors of Vil3dref for their open-source release.

News

  • [ 2023.08 ] First version!
  • [ 2023.09 ] We release code for all downstream tasks.

Citation

@inproceedings{zhu2023vista,
  title={3D-VisTA: Pre-trained Transformer for 3D Vision and Text Alignment},
  author={Zhu, Ziyu and Ma, Xiaojian and Chen, Yixin and Deng, Zhidong and Huang, Siyuan and Li, Qing},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2023}
}
