spell's Introduction

Please consider using the most recent version of our graph learning framework: GraVi-T

SPELL

Learning Long-Term Spatial-Temporal Graphs for Active Speaker Detection (ECCV 2022)
paper | poster | presentation

Overview

SPELL is a novel spatial-temporal graph learning framework for active speaker detection (ASD). It can model a minute-long temporal context without relying on computationally expensive networks. Through extensive experiments on the AVA-ActiveSpeaker dataset, we demonstrate that learning graph-based representations significantly improves the detection performance thanks to its explicit spatial and temporal structure. Specifically, SPELL outperforms all previous state-of-the-art approaches while requiring significantly lower memory and computation resources.

Ego4D Challenges

SPELL and its improved version (STHG) achieved 1st place in the Ego4D Challenges @ECCV22 and @CVPR23, respectively. We summarize ASD performance comparisons on the validation set of the Ego4D dataset:

| ASD Model | ASD mAP (%)↑ | ASD mAP@0.5 (%)↑ |
|-----------|--------------|------------------|
| RegionCls | -            | 24.6             |
| TalkNet   | -            | 50.6             |
| SPELL     | 71.3         | 60.7             |
| STHG      | 75.7         | 63.7             |

💡In this table, we report two metrics to evaluate ASD performance: mAP quantifies the ASD results by assuming that the face bounding-box detections are the ground truth (i.e., assuming a perfect face detector), whereas mAP@0.5 quantifies the ASD results on the detected face bounding boxes (i.e., a face detection is considered positive only if the IoU between a detected face bounding box and the ground truth exceeds 0.5). For more information, please refer to our technical reports for the challenge.

💡We computed mAP@0.5 using Ego4D's official evaluation tool.

ActivityNet 2022

SPELL achieved 2nd place in the AVA-ActiveSpeaker Challenge at ActivityNet 2022. For the challenge, we used a visual input spanning a longer period of time (23 consecutive face crops instead of 11). We also found that using a larger channel size can further boost the performance.
tech report | presentation

Dependency

We used python=3.6, pytorch=1.9.1, and torch-geometric=2.0.3 in our experiments.
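
As a quick, optional sanity check (not part of the repository), the installed versions can be confirmed from Python:

```python
# Optional environment sanity check; the expected versions are the ones listed above.
import sys
import torch
import torch_geometric

print(sys.version.split()[0])        # expected: 3.6.x
print(torch.__version__)             # expected: 1.9.1
print(torch_geometric.__version__)   # expected: 2.0.3
```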

Code Usage

  1. Download the audio-visual features and the annotation CSV files from Google Drive. The directories should be organized as follows:
|-- features
    |-- resnet18-tsm-aug
        |-- train_forward
        |-- val_forward
    |-- resnet50-tsm-aug
        |-- train_forward
        |-- val_forward
|-- csv_files
    |-- ava_activespeaker_train.csv
    |-- ava_activespeaker_val.csv
  2. Run generate_graph.py to create the spatial-temporal graphs from the features:
python generate_graph.py --feature resnet18-tsm-aug

Although this script takes some time to finish in its current form, it can be modified to run in parallel and create the graphs for multiple videos at once. For example, you can change the files variable in line 81 of data_loader.py (a rough parallelization sketch is given below).
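
A minimal sketch of that kind of parallelization, assuming a hypothetical per-video wrapper around the graph-building logic (the function name and the way video ids are obtained are placeholders, not part of this repository):

```python
from multiprocessing import Pool

def build_graphs_for_video(video_id):
    # Hypothetical per-video wrapper: load this video's features and write its graphs,
    # doing for one video what generate_graph.py does for the whole `files` list.
    ...

if __name__ == "__main__":
    # e.g. the entries of the `files` variable around line 81 of data_loader.py
    video_ids = ["video_0001", "video_0002"]  # placeholder ids
    with Pool(processes=8) as pool:
        pool.map(build_graphs_for_video, video_ids)
```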

  3. Use train_val.py to train and evaluate the model:
python train_val.py --feature resnet18-tsm-aug

You can change the --feature argument to resnet50-tsm-aug for SPELL with ResNet-50-TSM.

Note

  • We used the official code of Active Speakers in Context (ASC) to extract the audio-visual features (Stage 1). Specifically, we used STE_train.py and STE_forward.py of the ASC repository to train our two-stream ResNet-TSM encoders and extract the audio-visual features. We did not use any other components such as the postprocessing module or the context refinement modules. Please refer to models_stage1_tsm.py and the checkpoints from this link to see how we incorporated TSM into the two-stream ResNets (a generic sketch of the TSM shift operation is given below for reference).
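
For context, the core idea of TSM (Lin et al., ICCV 2019) is to shift a fraction of feature channels forward and backward along the temporal dimension at negligible extra cost. The following is a minimal, generic sketch of that shift operation, not the exact code from models_stage1_tsm.py:

```python
import torch

def temporal_shift(x, n_segment, fold_div=8):
    # x: (N*T, C, H, W) features of T stacked frames per clip; generic TSM-style shift.
    nt, c, h, w = x.size()
    n_batch = nt // n_segment
    x = x.view(n_batch, n_segment, c, h, w)
    fold = c // fold_div
    out = torch.zeros_like(x)
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # shift one fold of channels backward in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift another fold forward in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # leave the remaining channels unshifted
    return out.view(nt, c, h, w)

# Example: 2 clips of 11 frames each, with 64-channel feature maps
features = torch.randn(2 * 11, 64, 36, 36)
shifted = temporal_shift(features, n_segment=11)
print(shifted.shape)  # torch.Size([22, 64, 36, 36])
```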

Citation

ECCV 2022 paper:

@inproceedings{min2022learning,
  title={Learning Long-Term Spatial-Temporal Graphs for Active Speaker Detection},
  author={Min, Kyle and Roy, Sourya and Tripathi, Subarna and Guha, Tanaya and Majumdar, Somdeb},
  booktitle={European Conference on Computer Vision},
  pages={371--387},
  year={2022},
  organization={Springer}
}

Technical report for AVA-ActiveSpeaker challenge 2022:

@article{minintel,
  title={Intel Labs at ActivityNet Challenge 2022: SPELL for Long-Term Active Speaker Detection},
  author={Min, Kyle and Roy, Sourya and Tripathi, Subarna and Guha, Tanaya and Majumdar, Somdeb},
  journal={The ActivityNet Large-Scale Activity Recognition Challenge},
  year={2022},
  note={\url{https://research.google.com/ava/2022/S2_SPELL_ActivityNet_Challenge_2022.pdf}}
}


spell's Issues

Replicating results of the first stage (feature generation) of training using ASC repository

I am trying to replicate the performance of Stage 1 (88.0% mAP in Table 2 of the paper) by training a model using the code from the ASC repository and replacing their model file (models.py) with the TSM model file (models_stage1_tsm.py) provided in this repository. The best performance I could get was 83.27% mAP.

Would you be able to either share the code that you used to train the first-stage networks or share the training parameters that differ from the first stage of ASC? The parameters I am looking for are the initial learning rate, the number of steps after which the learning rate drops, the total number of epochs, the method for choosing the best-performing model, and the range of random factors used to reduce the volume of the audio files during data augmentation. It would be great if you could also share any other differences between your training method and the first-stage method of the ASC repository.

Question: Online inference

Thank you so much for such a wonderful paper!

I am exploring active speaker detection in real time, came across this paper and repo, and wanted to ask a question.
Is it possible to do online inference of the active speaker with this approach on a live video stream?

Thank you so much!

Identifying vertices at inference time

Hi Kyle, thank you for such a nice paper!
I really learned a lot from your work.

Currently, I am trying to adapt your model for inference on the ASD task (for an arbitrary video, with no annotations for any bounding box or entity).

As mentioned in #4, this code assumes that the face bounding boxes and audio-visual features are produced by other models (in this case, your model uses a 2D ResNet with TSM for feature extraction).

I understand that SPELL works on pre-built graph data, where each vertex is identified by (video_id, entity_id, timestamp). Could you give me some advice on how to obtain entity_id at inference time?

I tried to integrate L186-L223 of data_loader.py into my inference loop as you mentioned, but it seems to require an entity_id for each node. I thought adding a tracker would help, but I am curious whether modifying some of your code might address this.

Thank you so much,
Sangmo
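
One possible direction, which the question itself suggests, is a simple IoU-based tracker that assigns an entity id to each detected face box by matching it against the previous frame. The sketch below is purely illustrative and is not part of this repository or the authors' pipeline:

```python
def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def assign_entity_ids(frames, iou_thresh=0.5):
    # frames: per-timestamp lists of face boxes; returns per-timestamp lists of entity ids.
    next_id, prev, out = 0, [], []
    for boxes in frames:
        ids, used = [], set()
        for box in boxes:
            # greedily match against unused boxes from the previous frame
            best_j, best_iou = -1, 0.0
            for j, (pbox, _) in enumerate(prev):
                v = iou(box, pbox)
                if j not in used and v > best_iou:
                    best_j, best_iou = j, v
            if best_j >= 0 and best_iou >= iou_thresh:
                ids.append(prev[best_j][1])
                used.add(best_j)
            else:
                ids.append(next_id)  # start a new entity track
                next_id += 1
        prev = list(zip(boxes, ids))
        out.append(ids)
    return out
```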

Inference Code

Hi,

Thank you for sharing this amazing project. I was able to follow your instructions to train a model. I am also interested in running inference on my own data. Could you please share the visualization code as well?

Thank you very much!

Requirements file

Hello,

Thank you so much for sharing these files. Could you please share the requirements list (including Python, PyTorch, and other dependencies)? Thank you!

Google Drive links have become invalid

Hi Kyle,

Thanks for your very nice code with excellent results!
Unfortunately, the Google Drive links have become invalid.
I wonder if you could provide working Google Drive links in the README again?

Thank you very much!

Regarding the CSV files

Hi Kyle,
Congratulations on winning 2nd place in the challenge! I want to train and evaluate your model, but I couldn't find any relevant CSV files in the Google Drive. Do you know how I can get them? (P.S. I only found the *.pkl files in the Google Drive.)

"Failed - Forbidden" and "Quota exceeded" when downloading resnet18_features and resnet50_features

Hi Kyle,

Congratulations on winning 2nd place in the challenge!! Great work! I want to train and evaluate your model but I couldn't download the two feature files. The download of csv files went well. I have tried Chrome and Edge, on company and personal computers, and all have failed. It would pause in the middle, and then when I clicked "resume", it said "failed - forbidden". That is for resnet18. For resnet50, the error is "download quota exceeded". Do you know how I can fix this?

Thank you for your time!
Best regards,
Chen

Question about 0.7 GFLOPs for visual feature encoding reported in paper

Thank you for sharing this work!
I have a question about how the 0.7 GFLOPs for visual feature encoding is computed in the paper. I used the 2D ResNet-18+TSM shared in models_stage1_tsm.py and fed an input of shape (11, 3, 144, 144), which is a stack of 11 consecutive face crops of resolution 144x144, and I got 8.73 GFLOPs. I used this tool to compute GFLOPs: https://github.com/facebookresearch/fvcore/blob/main/docs/flop_count.md.
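
For reference, a minimal FLOP count along the lines described above might look like the following; a plain torchvision ResNet-18 is used here only as a stand-in, since the actual encoder in models_stage1_tsm.py will give different numbers:

```python
import torch
from torchvision.models import resnet18
from fvcore.nn import FlopCountAnalysis

# Stand-in model; the real two-stream ResNet-18 + TSM encoder differs.
model = resnet18().eval()

# 11 consecutive 144x144 RGB face crops, stacked along the batch dimension.
inputs = torch.randn(11, 3, 144, 144)

with torch.no_grad():
    flops = FlopCountAnalysis(model, inputs)
    print(flops.total() / 1e9, "GFLOPs")
```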

Error with TSM and the consecutive face-crop length (default 11)

Hi, I am trying to run your code with my own dataset for training and testing, but I found some possible bugs in models_stage1_tsm.py.

  1. I guess this '3' should be '3*clip_length', where 'clip_length' is the number of face crops. (screenshot omitted)

  2. Should this rgb_stack_size be equal to clip_length? (screenshot omitted)

When rgb_stack_size is 11 (the default), the program always throws an exception here (screenshot omitted). I have no idea how to solve it; if you can help, I'd really appreciate it.
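
To make the first point concrete, here is a hypothetical illustration of the kind of channel mismatch described above; it is not the actual code from models_stage1_tsm.py, which may stack the face crops differently:

```python
import torch
import torch.nn as nn

clip_length = 11
x = torch.randn(1, 3 * clip_length, 144, 144)  # face crops stacked along the channel axis

# If frames are stacked along channels, the first conv must expect 3*clip_length channels.
conv_ok = nn.Conv2d(3 * clip_length, 64, kernel_size=7, stride=2, padding=3)
conv_bad = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)

print(conv_ok(x).shape)  # torch.Size([1, 64, 72, 72])
# conv_bad(x)            # raises RuntimeError: expected input with 3 channels, got 33
```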
