shikunli / sel-cl
CVPR 2022: Selective-Supervised Contrastive Learning with Noisy Labels
Hi, I tried installing apex 0.1 using the following code.
First this:
"""%%writefile setup.sh
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir ./"""
and then this:
"""!sh setup.sh"""
After this, I got the following message:
Successfully built apex
Installing collected packages: apex
Attempting uninstall: apex
Found existing installation: apex 0.1
Uninstalling apex-0.1:
Removing file or directory /usr/local/lib/python3.8/dist-packages/apex-0.1.dist-info/
Removing file or directory /usr/local/lib/python3.8/dist-packages/apex/
Successfully uninstalled apex-0.1
Successfully installed apex-0.1
When I ran the code from the git repo (https://github.com/ShikunLi/Sel-CL), the log file contained an error which suggests that apex was not installed correctly. The error message is as follows:
"""multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'")"""
(see the attached results.log)
To fix this, I tried installing apex with
"""pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./"""
but this command errors out during installation and apex doesn't get installed.
I would appreciate any help so that I can run the code successfully. Thanks.
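As a quick sanity check after installation, a minimal sketch like the following can confirm whether the compiled extensions apex needs are importable (`amp_C` is the module named in the error above):

```python
# Minimal sketch: check whether apex's compiled CUDA extensions are present.
# If this import fails, apex was built without --cpp_ext/--cuda_ext and the
# mixed-precision code will fall back to the slower pure-Python paths.
try:
    import amp_C  # compiled extension installed only with --cuda_ext
    print("apex CUDA extensions are available")
except ImportError as err:
    print(f"apex CUDA extensions missing, Python fallback will be used: {err}")
```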
@ShikunLi Hi, I found an error in the computation of the unsupervised mask for mixed images: here we should let mask2Unsup_batch be the same mask matrix as maskUnsup_batch, and mask2Unsup_mem be the same as maskUnsup_mem. This is because we pass pairwise_comp_batch to the function Supervised_ContrastiveLearning_loss, and the elements of pairwise_comp_batch are positive along the diagonal.
Thanks for your work!
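For readers following along, here is a minimal sketch of the structure the issue describes. The variable names are taken from the issue, but the construction itself is an assumption for illustration, not the repository's actual code:

```python
import torch

# Assumed setup: in unsupervised (instance-discrimination) contrastive
# learning, the batch-vs-batch mask marks only each sample's own other
# view as positive, so it is 1 on the diagonal and 0 elsewhere.
batch_size = 4
maskUnsup_batch = torch.eye(batch_size)

# The issue's point: since pairwise_comp_batch fed to
# Supervised_ContrastiveLearning_loss is positive on the diagonal, the mask
# for the mixed images should be the very same diagonal matrix.
mask2Unsup_batch = maskUnsup_batch.clone()
```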
I wonder whether the clean labels were involved in the kNN evaluation for the results in Tables 2 and 4. Looking forward to your reply!
This is good work. However, the training efficiency is somewhat low~
In the training stage, GPU utilization is about 50-60%. In the "pair-wise selection" stage, GPU utilization is approximately 0.
At first, I thought this was because the program executes CPU operations in the "pair-wise selection" stage. I set up some checkpoints in that stage and found that most of the time was spent on "Weighted k-nn correction" (code link).
Looking forward to your suggestions for training more efficiently~
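One possible direction, sketched below under the assumption that the bottleneck is a per-sample loop running off the GPU: compute the weighted k-NN votes on the GPU in chunks using batched matrix multiplies. All names, shapes, and defaults here are illustrative, not the repository's actual code:

```python
import torch
import torch.nn.functional as F

def weighted_knn_chunked(features, trainFeatures, trainLabels, num_classes,
                         k=250, temperature=0.1, chunk=1024):
    """Weighted k-NN classification done chunk-by-chunk on the GPU.

    features:      (m, d) L2-normalized query features
    trainFeatures: (n, d) L2-normalized training-set features
    trainLabels:   (n,)   integer labels of the training set
    """
    preds = []
    for start in range(0, features.size(0), chunk):
        f = features[start:start + chunk]            # (c, d) chunk of queries
        sim = torch.mm(f, trainFeatures.t())         # (c, n) cosine similarities
        topk_sim, topk_idx = sim.topk(k, dim=1)      # k nearest neighbors
        weights = (topk_sim / temperature).exp()     # similarity-based weights
        onehot = F.one_hot(trainLabels[topk_idx], num_classes).float()  # (c, k, C)
        votes = (weights.unsqueeze(2) * onehot).sum(dim=1)              # (c, C)
        preds.append(votes.argmax(dim=1))
    return torch.cat(preds)
```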
As mentioned in DiegoOrtego/LabelNoiseMOIT#5, there also exists an error in your code:
There may be a mistake in that the first dist.size()[0] elements along dim 1 are always set to -1, which is wrong because the features come from a minibatch whose dataset indices are not 0..dist.size()[0]-1. The right way is:
dist = torch.mm(features, trainFeatures)  # use trainFeatures here instead of features
dist[torch.arange(dist.size()[0]), index] = -1  # self-contrast set to -1
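To make the failure mode concrete, here is a minimal hedged sketch. Shapes and names are assumed for illustration; trainFeatures is taken to be stored transposed, (d, n), so that torch.mm(features, trainFeatures) yields a batch-by-trainset matrix:

```python
import torch
import torch.nn.functional as F

b, n, d = 4, 10, 8
features = F.normalize(torch.randn(b, d), dim=1)           # minibatch, (b, d)
trainFeatures = F.normalize(torch.randn(n, d), dim=1).t()  # train set, (d, n)
index = torch.tensor([2, 5, 7, 9])  # dataset positions of the minibatch samples

dist = torch.mm(features, trainFeatures)  # (b, n) similarity matrix

# Buggy: always masks columns 0..b-1, whatever rows the batch came from.
# dist[torch.arange(b), torch.arange(b)] = -1
# Fixed: mask each sample's own column so it cannot be its own neighbor.
dist[torch.arange(b), index] = -1
```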
I just got 91.31% top-1 accuracy on CIFAR-10, whereas your paper reports 95.5%. What is the difference in experimental parameters between the original paper and the following two steps?
(1) python train_Sel-CL.py --epoch 250 --num_classes 10 --batch_size 128 --low_dim 128 --lr-scheduler "step" --noise_ratio 0.2 \
    --network "PR18" --lr 0.1 --wd 1e-4 --dataset "CIFAR-10" --download False --noise_type "symmetric" \
    --sup_t 0.1 --headType "Linear" --sup_queue_use 1 --sup_queue_begin 3 --queue_per_class 1000 \
    --alpha 0.5 --beta 0.25 --k_val 250 --experiment_name CIFAR10 --cuda_dev 0 --alpha_m 1.0 --seed_initialization 1 --seed_dataset 42 \
    --uns_t 0.1 --uns_queue_k 10000 --lr-warmup-epoch 5 --warmup-epoch 1 --lambda_s 0.01 --lambda_c 1 --warmup_way 'uns'
(2) python train_Sel-CL_fine-tuning.py --epoch 70 --num_classes 10 --batch_size 128 --noise_ratio 0.2 \
    --network "PR18" --lr 0.001 --wd 1e-4 --dataset "CIFAR-10" --cuda_dev 0 \
    --headType "Linear" --noise_type "symmetric" --DA "Simple" --ReInitializeClassif 1 \
    --startLabelCorrection 30 --alpha_m 1.0 --seed_initialization 1 --seed_dataset 42 \
    --experiment_name CIFAR10 --train_root ./dataset --out ./out