
Official PyTorch Implementation of 'Learning Action Completeness from Points for Weakly-supervised Temporal Action Localization' (ICCV-21 Oral)

License: MIT License

Python 99.38% Shell 0.62%
deep-learning pytorch action-completeness weakly-supervised-learning temporal-action-localization point-level-supervision


Learning-Action-Completeness-from-Points

[Figure: overall architecture of the proposed framework]

Learning Action Completeness from Points for Weakly-supervised Temporal Action Localization
Pilhyeon Lee (Yonsei Univ.), Hyeran Byun (Yonsei Univ.)

Paper: https://arxiv.org/abs/2108.05029

Abstract: We tackle the problem of localizing temporal intervals of actions with only a single frame label for each action instance for training. Owing to label sparsity, existing work fails to learn action completeness, resulting in fragmentary action predictions. In this paper, we propose a novel framework, where dense pseudo-labels are generated to provide completeness guidance for the model. Concretely, we first select pseudo background points to supplement point-level action labels. Then, by taking the points as seeds, we search for the optimal sequence that is likely to contain complete action instances while agreeing with the seeds. To learn completeness from the obtained sequence, we introduce two novel losses that contrast action instances with background ones in terms of action score and feature similarity, respectively. Experimental results demonstrate that our completeness guidance indeed helps the model to locate complete action instances, leading to large performance gains especially under high IoU thresholds. Moreover, we demonstrate the superiority of our method over existing state-of-the-art methods on four benchmarks: THUMOS'14, GTEA, BEOID, and ActivityNet. Notably, our method even performs comparably to recent fully-supervised methods, at the 6 times cheaper annotation cost.
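
As a rough intuition for the score-based contrast described above, here is an illustrative sketch only (not the paper's actual loss; the function name, masks, and margin value are hypothetical): pseudo action segments obtained from the optimal sequence are encouraged to score higher than pseudo background segments by a margin.

import torch.nn.functional as F

def score_contrast_loss(scores, action_mask, bkg_mask, margin=0.5):
    # Hypothetical sketch, not the repository's implementation.
    # scores:      (T,) per-snippet action scores for one class
    # action_mask: (T,) bool mask of snippets marked as action by the pseudo sequence
    # bkg_mask:    (T,) bool mask of snippets marked as background
    act_score = scores[action_mask].mean()  # average score over pseudo action snippets
    bkg_score = scores[bkg_mask].mean()     # average score over pseudo background snippets
    # penalize the case where action snippets do not outscore background by the margin
    return F.relu(margin - (act_score - bkg_score))

In practice such a term would be accumulated over classes and videos; see the paper for the actual formulation.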

Prerequisites

Recommended Environment

  • Python 3.6
  • PyTorch 1.6
  • TensorFlow 1.15 (for TensorBoard)
  • CUDA 10.2

Dependencies

You can set up the environment by running $ pip3 install -r requirements.txt.
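
As an optional sanity check (not part of the repo), you can verify that the installed versions match the recommended environment above:

import torch
import tensorflow as tf

print(torch.__version__)            # expected: 1.6.x
print(tf.__version__)               # expected: 1.15.x
print(torch.cuda.is_available())    # should print True if CUDA 10.2 is set up correctly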

Data Preparation

  1. Prepare THUMOS'14 dataset.

    • We excluded three test videos (270, 1292, 1496) as previous work did.
  2. Extract features with two-stream I3D networks

    • We recommend extracting features using this repo.
    • For convenience, we provide the features we used. You can find them here.
  3. Place the features inside the dataset folder.

    • Please ensure the data structure is as below (a small feature-loading sketch follows the directory tree).
├── dataset
   └── THUMOS14
       ├── gt.json
       ├── split_train.txt
       ├── split_test.txt
       ├── fps_dict.json
       ├── point_gaussian
           └── point_labels.csv
       └── features
           ├── train
               ├── rgb
                   ├── video_validation_0000051.npy
                   ├── video_validation_0000052.npy
                   └── ...
               └── flow
                   ├── video_validation_0000051.npy
                   ├── video_validation_0000052.npy
                   └── ...
           └── test
               ├── rgb
                   ├── video_test_0000004.npy
                   ├── video_test_0000006.npy
                   └── ...
               └── flow
                   ├── video_test_0000004.npy
                   ├── video_test_0000006.npy
                   └── ...
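
For reference, a minimal feature-loading sketch, assuming each .npy file stores a (T, 1024) per-stream snippet feature array (typical for two-stream I3D, but not confirmed here); the variable names are placeholders:

import os
import numpy as np

feature_root = "dataset/THUMOS14/features/train"  # path following the tree above
vid_name = "video_validation_0000051"

rgb = np.load(os.path.join(feature_root, "rgb", vid_name + ".npy"))    # assumed shape: (T, 1024)
flow = np.load(os.path.join(feature_root, "flow", vid_name + ".npy"))  # assumed shape: (T, 1024)
features = np.concatenate([rgb, flow], axis=-1)                        # (T, 2048) two-stream feature
print(features.shape)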

Usage

Running

You can easily train and evaluate the model by running the script below.

If you want to try other training options, please refer to options.py.

$ bash run.sh

Evaluation

The pre-trained model can be found here. You can evaluate the model by running the command below.

$ bash run_eval.sh

References

We note that this repo was built upon our previous models.

  • Background Suppression Network for Weakly-supervised Temporal Action Localization (AAAI 2020) [paper] [code]
  • Weakly-supervised Temporal Action Localization by Uncertainty Modeling (AAAI 2021) [paper] [code]

We referenced the repos below for the code.

In addition, we referenced part of the code in the following repo for the greedy algorithm implementation.

Citation

If you find this code useful, please cite our paper.

@inproceedings{lee2021completeness,
  title={Learning Action Completeness from Points for Weakly-supervised Temporal Action Localization},
  author={Pilhyeon Lee and Hyeran Byun},
  booktitle={IEEE/CVF International Conference on Computer Vision},
  year={2021},
}

Contact

If you have any questions or comments, please contact the first author of the paper - Pilhyeon Lee ([email protected]).


Issues

About THUMOS14 labels

Hello, in THUMOS14, CliffDiving is a subclass of Diving, and the CliffDiving action instances in the annotation file also belong to Diving. Why don't you use this prior knowledge to remove CliffDiving instances from the Diving class during training and add a Diving label for each predicted CliffDiving instance during post-processing?
I think an action instance belonging to two categories may make training difficult to converge.

About feature extraction

You mentioned that the feature extraction in your work follows https://github.com/piergiaj/pytorch-i3d, but when I tried to apply it to my own dataset, I found that the dimension of the layer 'logits.conv3d' is mismatched.

Traceback (most recent call last):
File "extract_features.py", line 88, in
run(mode=args.mode, load_model=args.load_model)
File "extract_features.py", line 51, in run
i3d.load_state_dict(torch.load(load_model))
File "/home/pengfang/.conda/envs/mvit/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1604, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for InceptionI3d:
size mismatch for logits.conv3d.weight: copying a param with shape torch.Size([400, 1024, 1, 1, 1]) from checkpoint, the shape in current model is torch.Size([54, 1024, 1, 1, 1]).
size mismatch for logits.conv3d.bias: copying a param with shape torch.Size([400]) from checkpoint, the shape in current model is torch.Size([54]).

Do I need to fine-tune the I3D model on my dataset? Could you tell me how you applied this code to THUMOS14?

Query regarding transcript in optimal transport

Thanks for making this awesome work publicly available!

I wanted to know the meaning of the term "transcript" in "search.py". I cannot understand why sometimes [0, 1] is given and sometimes [1] is used. Could you kindly elaborate?

About video-level probability

Thanks for your excellent job!
I am confused about why the video-level probability is expressed as:
vid_score = (torch.mean(topk_scores, dim=1) * vid_labels) + (torch.mean(cas_sigmoid[:,:,:-1], dim=1) * (1 - vid_labels))
This seems inconsistent between training and testing.

For GTEA and BEOID

Hello, thanks for your excellent work. I am very interested in it!
May I ask if the extracted features on ActivityNet, GTEA and BEOID will be released?

AttributeError: 'Namespace' object has no attribute 'read'. What should I do?

Hello, I encountered this error when running your code and cannot solve it. What is the reason? I configured the environment and dependencies exactly according to your documentation.

Traceback (most recent call last):
File "./main.py", line 21, in
config = Config(args)
File "/home/mcy/miniconda3/envs/python36/lib/python3.6/site-packages/config/init.py", line 709, in init
self.load(stream_or_path)
File "/home/mcy/miniconda3/envs/python36/lib/python3.6/site-packages/config/init.py", line 803, in load
items = p.container()
File "/home/mcy/miniconda3/envs/python36/lib/python3.6/site-packages/config/parser.py", line 285, in container
self.advance()
File "/home/mcy/miniconda3/envs/python36/lib/python3.6/site-packages/config/parser.py", line 130, in advance
self.token = self.tokenizer.get_token()
File "/home/mcy/miniconda3/envs/python36/lib/python3.6/site-packages/config/tokens.py", line 997, in get_token
c = get_char()
File "/home/mcy/miniconda3/envs/python36/lib/python3.6/site-packages/config/tokens.py", line 802, in get_char
c = self.stream.read(1)
AttributeError: 'Namespace' object has no attribute 'read'

ActivityNet

Hi, thanks for your excellent work!
May I ask if the source code and extracted features on ActivityNet will be released?

Different feature lengths compared with fully-supervised methods

Hello, thank you very much for this inspiring work. I am a beginner and would like to ask why some fully-supervised methods, such as ActionFormer, use feature lengths that are inconsistent with the features you provide. Is it because I3D uses different sampling rates when extracting features?

How to reproduce the GTEA results

Is there any trick to reproduce the results on GTEA? Could you please provide a config.txt for this dataset? It would make reproduction more convenient. Thank you. @Pilhyeon

About new_dense_anno

Hello, thanks for your excellent work. I have a question about the code: what does stored_info['new_dense_anno'] correspond to in the paper?
