zhang-can / cola
[CVPR2021] CoLA: Weakly-Supervised Temporal Action Localization with Snippet Contrastive Learning
Home Page: https://arxiv.org/abs/2103.16392
License: MIT License
Hi, thanks for releasing your work.
I am having trouble relating the original videos to the extracted features.
In your code, cfg.FEATS_FPS = 25, but the original videos seem to have an fps of 30.
From the paper, I can see that one snippet consists of 16 frames,
and I understand that this is where the t_factor formula in utils.py comes from:
-> t_factor = (16 * v_len) / (scale * num_segments * sampling_frames)
BUT when I run the code, for example on test_video_000004, the video has 1,011 frames while the extracted feature has only 52 segments (RGB feature of size 52 x 1024)...
Could you please explain what happens between the feature extractor and your model?
Thanks for your great work!
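For what it's worth, the numbers above are consistent with the frames being re-sampled to cfg.FEATS_FPS = 25 before grouping non-overlapping 16-frame snippets. This is a guess from the reported numbers, not something confirmed by the authors, and `num_snippets` below is a hypothetical helper, not a function from the repo:

```python
import math

def num_snippets(n_frames, src_fps=30, feats_fps=25, snippet_len=16):
    """Guess at the pipeline: re-sample frames from src_fps to feats_fps,
    then split into non-overlapping snippets of snippet_len frames."""
    resampled_frames = n_frames / src_fps * feats_fps  # duration * feats_fps
    return math.floor(resampled_frames / snippet_len)

# test_video_000004: 1,011 frames at 30 fps -> 842.5 frames at 25 fps
# -> 52 complete 16-frame snippets, matching the 52 x 1024 feature.
print(num_snippets(1011))  # -> 52
```

If this assumption holds, the fps mismatch is resolved by the re-sampling step, and t_factor then maps snippet indices back to seconds in the original video.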
I notice that the numbers of training epochs are 6k and 8k for the two datasets, respectively. That is quite a large number. What is the reason it takes so many epochs to train? Do you have an ablation study on the effect of the number of training epochs?
Besides, could you please report your training times?
Thanks!
Hello,
Thanks for your great work.
I have a question:
Why do you use zeros as labels in the loss function instead of the original labels?
I am talking about this part from NCE function:
labels = torch.zeros(logits.shape[0], dtype=torch.long).cuda()
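For context, this is the standard InfoNCE construction: the logits for each query are arranged with the positive similarity at index 0 followed by the negatives, so the "correct class" for cross-entropy is always index 0, regardless of the video's action label. A minimal pure-Python sketch of that idea (not the repo's actual implementation):

```python
import math

def info_nce_loss(pos_sim, neg_sims, temperature=0.07):
    """Cross-entropy over [positive, negatives...] with target index 0."""
    logits = [pos_sim / temperature] + [s / temperature for s in neg_sims]
    # label = 0: training pushes the positive logit to win the softmax
    log_denom = math.log(sum(math.exp(l) for l in logits))
    return -(logits[0] - log_denom)

loss_separated = info_nce_loss(0.9, [0.1, 0.2])  # positive clearly wins
loss_confused = info_nce_loss(0.1, [0.9, 0.2])   # a negative wins instead
```

So the zeros are not class labels at all; they encode "the positive is the first logit", which is why the original action labels are not needed here.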
Hi, could you please provide the extracted features for ActivityNet?
Hi @zhang-can, when will you release the code, extracted features, and pre-trained models?
@zhang-can Thanks for sharing the training code for ActivityNet1.2. However, when I try to reproduce the ActivityNet1.2 results (I use the features from https://github.com/sujoyp/wtalc-pytorch/tree/master, since the features in the CoLA repo still need to be verified), I only get an mIoU(avg) of around 3 with the configuration on branch anet1.2. It seems the SniCo loss does not decrease at all. I also find that the numbers of hard background and hard action snippets are sometimes smaller than the value we set (the non-zero elements in aness_region_inner and aness_region_outer are fewer than k_hard). Is that normal?
By the way, I can reproduce the THUMOS14 results easily, so the issue only occurs on ActivityNet1.2. Could you please check the released code, or provide a checkpoint and training log, to help us sort this out?
Hello, could you please provide the ActivityNet features and the corresponding config.py file? Thank you.
My e-mail is [email protected]
The correct command should be:
python main_cola.py train/test xxx
Hello, how can I solve this problem?
Traceback (most recent call last):
File "main_cola.py", line 207, in <module>
main()
File "main_cola.py", line 86, in main
loader_iter = iter(train_loader)
File "/home/linux01/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__
return self._get_iterator()
File "/home/linux01/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/home/linux01/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 944, in __init__
self._reset(loader, first_iter=True)
File "/home/linux01/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 975, in _reset
self._try_put_index()
File "/home/linux01/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1209, in _try_put_index
index = self._next_index()
File "/home/linux01/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 512, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/linux01/.local/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 227, in __iter__
for idx in self.sampler:
File "/home/linux01/.local/lib/python3.8/site-packages/torch/utils/data/sampler.py", line 124, in __iter__
yield from torch.randperm(n, generator=torch.Generator(device='cuda')
RuntimeError: Expected a 'cpu' device type for generator but found 'cuda'
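For what it's worth, this error usually means torch's default tensor type (or default device) was pointed at CUDA somewhere, so the shuffle sampler's randperm generator ends up on the GPU while the DataLoader expects a CPU one. One workaround, sketched below with a hypothetical toy dataset rather than CoLA's actual loader, is to pass an explicit CPU generator to the DataLoader:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset standing in for the CoLA training set.
dataset = TensorDataset(torch.arange(10, dtype=torch.float32))

# An explicit CPU generator keeps the sampler's torch.randperm on the CPU,
# even if the default tensor type was set to CUDA elsewhere in the code.
loader = DataLoader(dataset, batch_size=2, shuffle=True,
                    generator=torch.Generator(device='cpu'))

batch, = next(iter(loader))
print(batch.shape)  # torch.Size([2])
```

Alternatively, remove any `torch.set_default_tensor_type('torch.cuda.FloatTensor')` call and move tensors to the GPU explicitly instead.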
Hello, I'm sorry, but I couldn't download the ActivityNet1.2 features you provided.