Grounding 3D Object Affordance from 2D Interactions in Images

PyTorch implementation of Grounding 3D Object Affordance from 2D Interactions in Images. This repository contains the PyTorch training and evaluation code; the dataset will be released soon.

πŸ“‹ Table of content

  1. πŸ“Ž Paper Link
  2. ❗ Quick Understanding
  3. πŸ’‘ Abstract
  4. πŸ“– Method
  5. πŸ“‚ Dataset
  6. πŸ“ƒ Requirements
  7. ✏️ Usage
    1. Train
    2. Evaluate
    3. Inference
  8. πŸ“Š Experimental Results
  9. 🍎 Potential Applications
  10. βœ‰οΈ Statement
  11. πŸ” Citation

News: Our paper has been accepted by ICCV 2023. We will release the pre-trained model soon, and the PIAD dataset will be released after the ICCV 2023 conference.

πŸ“Ž Paper Link

  • Grounding 3D Object Affordance from 2D Interactions in Images (link)

Authors: Yuhang Yang, Wei Zhai, Hongchen Luo, Yang Cao, Jiebo Luo, Zheng-Jun Zha

❗ Quick Understanding

The following demonstration gives a brief introduction to our task.


A single image can be used to infer different 3D object affordances.


Meanwhile, a single point cloud can be grounded with the same 3D affordance through the same interaction, and with different 3D affordances through distinct interactions.


πŸ’‘ Abstract

Grounding 3D object affordance seeks to locate objects' ''action possibilities'' regions in the 3D space, which serves as a link between perception and operation for embodied agents. Existing studies primarily focus on connecting visual affordances with geometry structures, e.g. relying on annotations to declare interactive regions of interest on the object and establishing a mapping between the regions and affordances. However, the essence of learning object affordance is to understand how to use it, and the manner that detaches interactions is limited in generalization. Normally, humans possess the ability to perceive object affordances in the physical world through demonstration images or videos. Motivated by this, we introduce a novel task setting: grounding 3D object affordance from 2D interactions in images, which faces the challenge of anticipating affordance through interactions of different sources. To address this problem, we devise a novel Interaction-driven 3D Affordance Grounding Network (IAG), which aligns the region feature of objects from different sources and models the interactive contexts for 3D object affordance grounding. Besides, we collect a Point-Image Affordance Dataset (PIAD) to support the proposed task. Comprehensive experiments on PIAD demonstrate the reliability of the proposed task and the superiority of our method. The dataset and code will be made available to the public.


Grounding Affordance from Interactions. We propose to ground 3D object affordance through 2D interactions: given an object point cloud and an interactive image as input, the model grounds the corresponding affordance on the 3D object.

πŸ“– Method

IAG-Net


Our Interaction-driven 3D Affordance Grounding Network. It first extracts the localized features $F_{i}$ and $F_{p}$, then uses the Joint Region Alignment Module to align them and obtain the joint feature $F_{j}$. Next, the Affordance Revealed Module uses $F_{j}$ together with $F_{s}$ and $F_{e}$ to reveal the affordance feature $F_{\alpha}$ via cross-attention. Finally, $F_{j}$ and $F_{\alpha}$ are sent to the decoder to obtain the final results $\hat{\phi}$ and $\hat{y}$.
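
The data flow described above can be summarized in a short PyTorch sketch. This is only an illustrative approximation: the backbones, feature dimensions, module internals, and the way $F_{s}$ and $F_{e}$ are obtained (assumed here to already be dim-dimensional context features) are assumptions, not the released implementation.

import torch
import torch.nn as nn

class IAGSketch(nn.Module):
    # Illustrative data flow of IAG-Net; shapes, dimensions, and modules are assumptions.
    def __init__(self, dim=512, num_affordances=17):  # affordance class count set per dataset (assumed here)
        super().__init__()
        self.img_backbone = nn.LazyLinear(dim)   # stands in for the 2D encoder producing F_i
        self.pc_backbone = nn.LazyLinear(dim)    # stands in for the 3D encoder producing F_p
        self.joint_align = nn.MultiheadAttention(dim, 8, batch_first=True)  # Joint Region Alignment -> F_j
        self.reveal = nn.MultiheadAttention(dim, 8, batch_first=True)       # Affordance Revealed Module -> F_alpha
        self.point_decoder = nn.Linear(dim, 1)                  # per-point affordance heatmap phi_hat
        self.cls_head = nn.Linear(dim, num_affordances)         # affordance category y_hat

    def forward(self, img_regions, pc_points, F_s, F_e):
        F_i = self.img_backbone(img_regions)      # (B, N_i, dim) localized image features
        F_p = self.pc_backbone(pc_points)         # (B, N_p, dim) localized point features
        F_j, _ = self.joint_align(F_p, F_i, F_i)  # align point regions with image regions
        context = torch.cat([F_s, F_e], dim=1)    # interactive contexts used as key/value
        F_a, _ = self.reveal(F_j, context, context)             # reveal affordance by cross-attention
        phi_hat = torch.sigmoid(self.point_decoder(F_j + F_a)).squeeze(-1)  # (B, N_p) heatmap
        y_hat = self.cls_head(F_a.mean(dim=1))                  # (B, num_affordances) logits
        return phi_hat, y_hat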

πŸ“‚ Dataset


Properties of the PIAD dataset. (a) Data pairs in PIAD; the red regions in the point clouds are the affordance annotations. (b) Distribution of the image data: the horizontal axis represents the affordance category, the vertical axis represents quantity, and different colors represent different objects. (c) Distribution of the point cloud data. (d) The ratio of images to point clouds in each affordance class, which shows that images and point clouds are not in a fixed one-to-one pairing; they can form multiple pairs.


Examples of PIAD. Some paired images and point clouds in PIAD. The ''yellow'' box in an image is the bounding box of the interactive subject; the ''red'' box is the bounding box of the interactive object.

We will release our PIAD soon...
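
Although PIAD is not released yet, the pairing described above can be pictured with a small loader sketch. The file layout, field names, and annotation format below are pure assumptions for illustration and will not match the released data exactly.

import json
import numpy as np
from PIL import Image
from torch.utils.data import Dataset

class PIADLikePairs(Dataset):
    # Hypothetical image / point-cloud pairing in the spirit of PIAD (layout is assumed).
    def __init__(self, pair_file):
        with open(pair_file) as f:
            # e.g. [{"affordance": "grasp", "images": [...], "pointclouds": [...]}, ...]
            entries = json.load(f)
        # Images and point clouds are not a fixed one-to-one pairing: every image of an
        # affordance class can be paired with every point cloud of that class.
        self.pairs = [(e["affordance"], img, pc)
                      for e in entries
                      for img in e["images"]
                      for pc in e["pointclouds"]]

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        affordance, img_path, pc_path = self.pairs[idx]
        image = np.asarray(Image.open(img_path).convert("RGB"))
        points = np.loadtxt(pc_path)   # assumed (N, 4): xyz plus a per-point affordance label
        return image, points, affordance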

πŸ“ƒ Requirements

  • python-3.9
  • pytorch-1.13.1
  • torchvision-0.14.1
  • open3d-0.16.0
  • scipy-1.10.0
  • matplotlib-3.6.3
  • numpy-1.24.1
  • OpenEXR-1.3.9
  • scikit-learn-1.2.0
  • mitsuba-3.0.1
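
If you use pip, the pinned versions above can be installed in one command (swap in the PyTorch wheel that matches your CUDA driver):

pip install torch==1.13.1 torchvision==0.14.1 open3d==0.16.0 scipy==1.10.0 matplotlib==3.6.3 numpy==1.24.1 OpenEXR==1.3.9 scikit-learn==1.2.0 mitsuba==3.0.1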

✏️ Usage

git clone https://github.com/yyvhang/IAGNet.git

Download PIAD

  • We will release the PIAD dataset soon.

Train

To train the IAG-Net model, you can modify the training parameters in config/config_seen.yaml and then run the following command:

python train.py --name IAG

To train the IAG-Net model with DDP, run train_DDP.py as follows:

python -m torch.distributed.run --nproc_per_node=4 train_DDP.py  --gpu_num 4  --name IAG_DDP  

gpu_num and nproc_per_node specify the number of GPUs; set them according to your devices.

Evaluate

To evaluate the trained IAG-Net model, run evalization.py:

python evalization.py

Inference

To run inference with the IAG-Net model and obtain the .ply file, run inference.py:

python inference.py --model_path runs/train/IAG/best.pt

To render the .ply file, we provide the script rend_point.py; run it to get the .xml files:

python rend_point.py

Once you get the .xml file, render it with mitsuba:

mitsuba Chair.xml
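
For a quick interactive preview without going through mitsuba, the exported .ply can also be opened with open3d (already listed in the requirements). This assumes the predicted affordance has been baked into the vertex colors of the .ply; the path below is only an example.

import open3d as o3d

# Load the point cloud written by inference.py (example path)
pcd = o3d.io.read_point_cloud("Chair.ply")

# If the affordance heatmap is stored as vertex colors, it is shown directly
o3d.visualization.draw_geometries([pcd])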

πŸ“Š Experimental Results

  • Visual comparison results on both partitions:


  • One image with multiple point clouds:


(a) Same object category. (b) Different object categories with similar geometry. (c) Different object categories and geometry; even in this situation, the model does not make random predictions.

  • Multiple Affordances:


Some objects, such as β€œBag", contain regions that correspond to multiple affordances.

  • Rotated and partial point clouds:


Results on rotated and partial point clouds.

🍎 Potential Applications


Potential applications of the IAG affordance system. This work has the potential to bridge the gap between perception and operation, serving areas such as demonstration learning and robot manipulation, and could become part of human-assistant agent systems, e.g. Tesla Bot and Boston Dynamics Atlas.

βœ‰οΈ Statement

This project is for research purposes only; please contact us for a commercial-use license. For any other questions, please contact [email protected].

πŸ” Citation

@inproceedings{Yang2023Affordance,
  title={Grounding 3D Object Affordance from 2D Interactions in Images},
  author={Yang, Yuhang and Zhai, Wei and Luo, Hongchen and Cao, Yang and Luo, Jiebo and Zha, Zheng-Jun},
  year={2023},
  eprint={2303.10437},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
