iic's People

Contributors

bestjuly

iic's Issues

Training on HMDB51

Hello, Li Tao, thanks for your great work.
I recently tried to reproduce your work on the HMDB51 dataset: I self-supervised pre-trained the model on UCF101 train split 1, fine-tuned on HMDB51 trainlist01.txt, and then evaluated with testlist01.txt, but I only got an accuracy of 2%. What could be going wrong?

Question about pretraining

Thanks for sharing your great work.

Just wondering which model you select for the downstream task, since you save pre-training checkpoints every 40 epochs.

Thank you.

UCF101 action classification result only at 0.68

Hi, thanks for the good work.

I'm currently trying to reproduce the action classification results on UCF101. Using the training parameters that you provided, I've trained my own backbone network and the linear classifier. However, I'm only getting 0.68 accuracy with the RGB+Res+Repeat setup. I've also trained a linear classifier with the backbone network that you provided, and I'm still only getting an accuracy of around 0.685.

Could you please help me with this problem? Is there anything you think could have gone wrong? Can you share the weights of your linear classifier network?

Thanks very much.

Both repeat and shuffle give 73.5 accuracy

First of all, I'm very interested in your paper and have recently been trying to reproduce it. In the process I noticed something curious: when pre-training with the GitHub code and changing --neg, whether it is set to repeat or to shuffle, the best model is always best_model_141.pt, and after fine-tuning for the downstream recognition task the acc-avg on UCF101 is 73.5 in both cases. This seems very strange; has something gone wrong somewhere? I'd appreciate your advice.

Runtime Error while running the training script

I am trying to run the training script like this:

python train_ssl.py --dataset=ucf101
{'print_freq': 10, 'tb_freq': 500, 'save_freq': 40, 'batch_size': 16, 'num_workers': 8, 'epochs': 240, 'learning_rate': 0.01, 'lr_decay_epochs': '120,160,200', 'lr_decay_rate': 0.1, 'beta1': 0.5, 'beta2': 0.999, 'weight_decay': 0.0001, 'momentum': 0.9, 'resume': '', 'model': 'r3d', 'softmax': True, 'nce_k': 1024, 'nce_t': 0.07, 'nce_m': 0.5, 'feat_dim': 512, 'dataset': 'ucf101', 'model_path': './ckpt/', 'tb_path': './logs/', 'debug': True, 'modality': 'res', 'intra_neg': True, 'neg': 'repeat', 'seed': 632, 'model_name': 'intraneg_r3d_res_1012', 'model_folder': './ckpt/intraneg_r3d_res_1012', 'tb_folder': './logs/intraneg_r3d_res_1012'}
[Warning] The training modalities are RGB and [res]
Use split1
v_LongJump_g18_c03 896] BT 5.066 (5.538)        DT 4.007 (4.468)        loss 15.277 (20.188))   1_p -0.001 (0.014)      2_p -0.009 (-0.007)


Traceback (most recent call last):
  File "train_ssl.py", line 341, in <module>
    main()
  File "train_ssl.py", line 308, in main
    criterion_1, criterion_2, optimizer, args)
  File "train_ssl.py", line 166, in train
    for idx, (inputs, u_inputs, v_inputs, _, index) in enumerate(train_loader):
  File "/home/krishna/anaconda3/envs/py3.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
    data = self._next_data()
  File "/home/krishna/anaconda3/envs/py3.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 989, in _next_data
    return self._process_data(data)
  File "/home/krishna/anaconda3/envs/py3.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
    data.reraise()
  File "/home/krishna/anaconda3/envs/py3.7/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/krishna/anaconda3/envs/py3.7/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/krishna/anaconda3/envs/py3.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/krishna/anaconda3/envs/py3.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/tankpool/home/krishna/PycharmProjects/Inter-intra-video-contrastive-learning/datasets/ucf101.py", line 123, in __getitem__
    raise
RuntimeError: No active exception to reraise

It always gives me this runtime error. I followed all the steps in the README but am still unable to figure it out. Can you please help me? Thank you!
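
A minimal workaround sketch, assuming the bare raise at datasets/ucf101.py line 123 fires when a clip fails to load (a corrupt video or one with too few frames). RetryDataset below is a hypothetical wrapper, not part of the repo: it resamples a random index instead of letting the exception kill the DataLoader worker, and logs which sample failed.

    import random
    from torch.utils.data import Dataset

    class RetryDataset(Dataset):
        """Hypothetical wrapper: if the underlying dataset raises on a
        bad clip, resample a random index instead of crashing the
        DataLoader worker."""
        def __init__(self, base, max_retries=10):
            self.base = base
            self.max_retries = max_retries

        def __len__(self):
            return len(self.base)

        def __getitem__(self, index):
            for _ in range(self.max_retries):
                try:
                    return self.base[index]
                except Exception as err:  # covers the bare-raise RuntimeError
                    print(f'[Warning] sample {index} failed: {err!r}')
                    index = random.randrange(len(self.base))
            raise RuntimeError('too many consecutive unreadable clips')

Wrapping the UCF101 dataset in this before building the DataLoader at least surfaces which videos are unreadable; the root cause is usually a corrupt or missing video file rather than the training code itself.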

Method generalization

How well does this self-supervised method work for feature extraction on short videos?

pytorch 1.7+

  1. Has anyone reproduced the results with PyTorch versions above 1.7 (1.8, 1.9, 1.10, and so on)?
  2. For various reasons I have to use CUDA 11; has anyone reproduced comparable or better results with CUDA 11?

Training Loss Not Improving

Hi, I'm trying to reproduce your results by training on UCF101. I noticed that my loss is not improving at all. I trained with a batch size of 20, and the loss is stuck at around 15.2 for every iteration and does not decrease. Just wondering, what configuration did you use to train the model?

Loss stuck in multi-GPU

When training the SSL model in a multi-GPU setting, the loss gets stuck at around 15, but on a single GPU the loss decreases normally. My environment is PyTorch 1.5, CUDA 10.2, GeForce RTX 2080 Ti.
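
A plateau that appears only under DataParallel is consistent with a known pitfall in CMC-style NCE code (the nce_k/nce_t/nce_m flags suggest this repo uses a memory bank): the bank lives as a buffer inside the contrast module and is updated in place during forward(), but DataParallel discards each per-GPU replica after forward, so the negatives never refresh. A minimal sketch of the usual fix, with stand-in modules rather than the repo's exact classes (check train_ssl.py for the real names):

    import torch
    import torch.nn as nn

    class NCEAverageStub(nn.Module):
        """Stand-in for a CMC-style NCE head: the memory bank is a
        registered buffer that forward() updates in place -- exactly
        the kind of state DataParallel replication throws away."""
        def __init__(self, feat_dim=512, n_data=9537):  # 9537 = UCF101 split1
            super().__init__()
            self.register_buffer('memory', torch.randn(n_data, feat_dim))

    backbone = nn.Sequential(                 # stand-in feature extractor
        nn.Conv3d(3, 64, kernel_size=3),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(64, 512))

    backbone = nn.DataParallel(backbone.cuda())  # parallelize the encoder only
    contrast = NCEAverageStub().cuda()           # keep the memory bank on one
                                                 # device, outside DataParallel

If the training script wraps the whole pipeline (encoder plus contrast head) in one DataParallel, moving only the encoder inside it is worth trying first.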

Poor finetuning results

Thanks for your great work.
I fine-tuned the pretrained model on UCF101 train split 1, but the evaluation results show about 6.5% accuracy.
I thought this was caused by the use of multiple GPUs and the checkpoint-loading procedure, but despite changing those, the result was the same.
I only changed the dataset path in the original code and wrapped the model with torch.nn.DataParallel().
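
Near-chance accuracy after fine-tuning often means the pretrained weights never actually loaded. A minimal sketch of a prefix-tolerant load, assuming `model` is the backbone already constructed as in ft_classify.py and that the checkpoint nests its weights under a 'model' key (both assumptions; print the checkpoint's keys to confirm):

    import torch

    ckpt = torch.load('ckpt/best_model_141.pt', map_location='cpu')
    state = ckpt.get('model', ckpt)        # unwrap if the dict is nested
    # weights saved from a DataParallel model carry a 'module.' prefix
    # that a plain (unwrapped) model will not match; strip it first
    state = {k.replace('module.', '', 1): v for k, v in state.items()}

    # model: the backbone built by ft_classify.py (assumed to exist here)
    missing, unexpected = model.load_state_dict(state, strict=False)
    print('missing keys:', missing)        # non-empty lists here mean the
    print('unexpected keys:', unexpected)  # backbone kept random weights

If either printed list is non-empty, the backbone is effectively running with random weights, which would explain accuracy near chance level.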

Have the same accuracy

Hello, Li Tao, Thanks for your great work.
I'm sorry to bother you, but this problem has been bothering me for a long time.
I run train_ssl.py for pre-training with --neg set to repeat, then run ft_classify.py to fine-tune. I then repeat the process with --neg set to shuffle, with all other settings the same.
Why do I get the same accuracy and the same best_model in both cases? I don't understand this at all. Can you help me analyze what's wrong?
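
One quick check before digging further: diff the two checkpoints directly. Judging by the config printed above, model_name (e.g. intraneg_r3d_res_1012) does not encode --neg, so two runs started the same day may write to the same model_folder and the second run can overwrite the first, which would make identical fine-tuning results expected. A sketch with hypothetical paths (point them at your two pre-training outputs):

    import torch

    a = torch.load('ckpt/run_repeat/best_model_141.pt', map_location='cpu')
    b = torch.load('ckpt/run_shuffle/best_model_141.pt', map_location='cpu')
    sa = a.get('model', a)                 # unwrap nested dicts if present
    sb = b.get('model', b)

    same = sa.keys() == sb.keys() and all(
        torch.equal(sa[k], sb[k]) for k in sa)
    print('checkpoints identical:', same)
    # True => both runs produced (or overwrote to) the same weights, so
    # matching downstream accuracy is unsurprising; check model_folder
    # naming and whether the --neg flag actually reaches the sampler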
