Comments (39)
Sure, please let me know if you have any questions. BTW, I will upload the Swin-T and Swin-S versions soon, which are more comparable to ResNet-50 and ResNet-101.
Did you change anything in the script? Can you show me the script you used for VisDA? Thanks.
So I think it's important to release the baseline script/code, because your backbone is different from the standard ViT backbone.
What do you mean by "baseline script/code"?
> Did you change anything in the script? Can you show me the script you used for VisDA? Thanks.
I changed data_loader to my own data-loading function, which applies the following transforms:
transforms.Compose([
    transforms.Resize([256, 256]),
    transforms.RandomCrop(args.img_size),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
Everything else is as in your script.
By the way, I think the result on VisDA-17 is acceptable; the results on Office-31/Office-Home aren't.
CUDA_VISIBLE_DEVICES=7 python main.py --train_batch_size 64 --dataset visda --name visda --source_name train --target_name target --root_path /home/wendong/dataset/Vis2017 --source_list data/office/webcam_list.txt --target_list data/office/amazon_list.txt --test_list data/office/amazon_list.txt --num_classes 12 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz --num_steps 20000 --img_size 256 --beta 1.0 --gamma 0.01 --use_im
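For context, here is a self-contained sketch of a dataloader built around exactly these transforms. It is only illustrative: it assumes a torchvision ImageFolder-style directory layout and a hypothetical path, whereas TVT's own loader reads the data/office/*_list.txt files.

import torch
from torchvision import datasets, transforms

IMG_SIZE = 256  # matches --img_size in the command above

train_transform = transforms.Compose([
    transforms.Resize([256, 256]),
    transforms.RandomCrop(IMG_SIZE),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical directory of class subfolders; adapt to TVT's image-list files.
train_set = datasets.ImageFolder("/path/to/visda/train", transform=train_transform)
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=64, shuffle=True, num_workers=4)

Note that with Resize([256, 256]) followed by RandomCrop(256), the crop is effectively a no-op, so the only remaining train-time augmentation is the horizontal flip.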
> What do you mean by "baseline script/code"?
Sorry, that was a poor choice of words. I mean the code for reproducing the [source only] results.
Thanks. Let me test Office-31 and Office-Home and let you know the results soon.
The code for [source only] will be released today. Will send you a message once I upload it.
> Thanks. Let me test Office-31 and Office-Home and let you know the results soon. The code for [source only] will be released today. Will send you a message once I upload it.
Thanks for your work. The code for [source only] is important when working with a transformer backbone network 😂 (especially for the Office datasets).
Hi, the source-only code is uploaded; please let me know if you need further information.
I tested the D->W task of Office-31; the result at epoch 300 is below (the code is still running):
Thanks, I will try it again and paste the results later.
I ran the source-only code on the Office-31 D->W and D->A tasks. On D->W, the performance is the same as you reported; however, on D->A, I only get 74.14%.
Here is my command:
python train.py --train_batch_size 64 --dataset office --name da_source_only --train_list data/office/dslr_list.txt --test_list data/office/amazon_list.txt --num_classes 31 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz --num_steps 5000 --img_size 256
PS: I use my own dataloader, which includes:
transforms.Compose([
    transforms.Resize([256, 256]),
    transforms.RandomCrop(256),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
I use python3 train.py --train_batch_size 64 --dataset office --name da_source_test --train_list data/office/dslr_list.txt --test_list data/office/amazon_list.txt --num_classes 31 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz --num_steps 5000 --img_size 256
and get the following result:
I'm running the sourceOnly code on both office and office-home datasets and will post my results here once finished.
--update--
On the office dataset, I got
A->D: 89.36%; A->W: 90.06%; D->A: 76.36%; D->W: 98.49%; W->A: 75.1%; W->D: 100%
For office-home, I got
A->C: 60.85%; A->P: 78.01%; A->R: 83.61%; C->A: 70.29%; C->P: 78.06%; C->R: 80.3%; P->A: 67.5%; P->C: 52.5%; P->R: 83.0%; R->A: 73.4%; R->C: 57.0%; R->P: 83.8%
The script I use is like this:
python train.py --train_batch_size 64 --dataset office --name da_source_only --train_list data/office/dslr_list.txt --test_list data/office/amazon_list.txt --num_classes 31 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz --num_steps 5000 --img_size 256
The results are far from what is reported in the paper; please advise on potential reasons for the failure to reproduce.
This is so weird. I will double-check it and let you know ASAP. Thanks.
I just tested these two experiments. Not sure why you got quite different results. Let me upload the environment I used today.
da_source_only:
wa_source_only:
> I just tested these two experiments. Not sure why you got quite different results. Let me upload the environment I used today.
> da_source_only:
> wa_source_only:
Thanks for your patience. I will re-download the dataset you posted in this repo and train entirely with your dataloader. (In a day or two, I will post the results here.)
> I'm running the sourceOnly code on both office and office-home datasets and will post my results here once finished. --update-- On the office dataset, I got A->D: 89.36%; A->W: 90.06%; D->A: 76.36%; D->W: 98.49%; W->A: n/a; W->D: n/a For office-home, I got A->C: 60.85%; A->P: 78.01%; A->R: 83.61%; C->A: 70.29%; C->P: 78.06%;
> The script I use is like this: python train.py --train_batch_size 64 --dataset office --name da_source_only --train_list data/office/dslr_list.txt --test_list data/office/amazon_list.txt --num_classes 31 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz --num_steps 5000 --img_size 256
> The results are far from what is reported in the paper; please advise on potential reasons for the failure to reproduce.
Hi @hellowangqian @ShiyeLi, can you follow the requirements below to rebuild your environment and try again? Thanks.
https://github.com/uta-smile/TVT/blob/main/README.md#environment-python-3812
> I'm running the sourceOnly code on both office and office-home datasets and will post my results here once finished. --update-- A->D: 89.36%; A->W: 90.06%; D->A: 76.36%; D->W: 98.49%; W->A: 75.1%; W->D: 100%. For office-home, I got A->C: 60.85%; A->P: 78.01%; A->R: 83.61%; C->A: 70.29%; C->P: 78.06%; C->R: 80.3%; P->A: 67.5%; P->C: 52.5%; P->R: 83.0%; R->A: 73.4%; R->C:
> The script I use is like this: python train.py --train_batch_size 64 --dataset office --name da_source_only --train_list data/office/dslr_list.txt --test_list data/office/amazon_list.txt --num_classes 31 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz --num_steps 5000 --img_size 256
> The results are far from what is reported in the paper; please advise on potential reasons for the failure to reproduce.
> Hi @hellowangqian @ShiyeLi, can you follow the requirements below to rebuild your environment and try again? Thanks. https://github.com/uta-smile/TVT/blob/main/README.md#environment-python-3812
Sure, I'll set up a new environment following yours and re-run the experiments to see what happens.
---Update---
I used the same environment as yours; unfortunately, there is no difference from what I got before (i.e., the quoted results).
May I ask @viyjy whether you used the same code as in this repo (e.g., cloning from the repo as I did) to reproduce the results above? I ask this to check the possibility that some unnoticed changes were made when you uploaded the code to GitHub.
> I'm running the sourceOnly code on both office and office-home datasets and will post my results here once finished. --update-- A->D: 89.36%; A->W: 90.06%; D->A: 76.36%; D->W: 98.49%; W->A: 75.1%; W->D: 100%. For office-home, I got A->C: 60.85%; A->P: 78.01%; A->R: 83.61%; C->A: 70.29%; C->P: 78.06%; C->R: 80.3%; P->A: 67.5%; P->C: 52.5%; P->R: 83.0%; R->A: 73.4%; R->C:
> The script I use is like this: python train.py --train_batch_size 64 --dataset office --name da_source_only --train_list data/office/dslr_list.txt --test_list data/office/amazon_list.txt --num_classes 31 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz --num_steps 5000 --img_size 256
> The results are far from what is reported in the paper; please advise on potential reasons for the failure to reproduce. Hi @hellowangqian @ShiyeLi, can you follow the requirements below to rebuild your environment and try again? Thanks. https://github.com/uta-smile/TVT/blob/main/README.md#environment-python-3812
> Sure, I'll set up a new environment following yours and re-run the experiments to see what happens. ---Update--- I used the same environment as yours; unfortunately, there is no difference from what I got before (i.e., the quoted results).
> May I ask @viyjy whether you used the same code as in this repo (e.g., cloning from the repo as I did) to reproduce the results above? I ask this to check the possibility that some unnoticed changes were made when you uploaded the code to GitHub.
Yes, the result in #5 (comment) was obtained by downloading the code from this repo and running it again. What kind of machine are you using?
BTW, please add a new comment below to discuss your issue; I don't receive an email notification if you update your previous comment. Thanks.
Ubuntu 20.04 + Nvidia Titan RTX GPU
> Titan RTX GPU
Are you using a single GPU to train the model?
> Titan RTX GPU
> Are you using a single GPU to train the model?
Yes.
> Titan RTX GPU
> Are you using a single GPU to train the model?
> Yes.
The only difference is that my Ubuntu version is 18.04, but I don't think it makes a difference to the result.
> Titan RTX GPU
> Are you using a single GPU to train the model?
> Yes.
> The only difference is that my Ubuntu version is 18.04, but I don't think it makes a difference to the result.
Thanks for clarifying the details. I'll spend more time investigating the issue.
> I'm running the sourceOnly code on both office and office-home datasets and will post my results here once finished. --update-- A->D: 89.36%; A->W: 90.06%; D->A: 76.36%; D->W: 98.49%; W->A: 75.1%; W->D: 100%. For office-home, I got A->C: 60.85%; A->P: 78.01%; A->R: 83.61%; C->A: 70.29%; C->P: 78.06%; C->R: 80.3%; P->A: 67.5%; P->C: 52.5%; P->R: 83.0%; R->A: 73.4%; R->C:
> The script I use is like this: python train.py --train_batch_size 64 --dataset office --name da_source_only --train_list data/office/dslr_list.txt --test_list data/office/amazon_list.txt --num_classes 31 --model_type ViT-B_16 --pretrained_dir checkpoint/ViT-B_16.npz --num_steps 5000 --img_size 256
> The results are far from what is reported in the paper; please advise on potential reasons for the failure to reproduce. Hi @hellowangqian @ShiyeLi, can you follow the requirements below to rebuild your environment and try again? Thanks. https://github.com/uta-smile/TVT/blob/main/README.md#environment-python-3812
> Sure, I'll set up a new environment following yours and re-run the experiments to see what happens. ---Update--- I used the same environment as yours; unfortunately, there is no difference from what I got before (i.e., the quoted results).
> May I ask @viyjy whether you used the same code as in this repo (e.g., cloning from the repo as I did) to reproduce the results above? I ask this to check the possibility that some unnoticed changes were made when you uploaded the code to GitHub.
> Yes, the result in #5 (comment) was obtained by downloading the code from this repo and running it again. What kind of machine are you using? BTW, please add a new comment below to discuss your issue; I don't receive an email notification if you update your previous comment. Thanks.
Hi, I downloaded the code and dataset again from this repo and ran them without any modification. However, I still cannot reproduce the baseline results in the paper.
Here are my results on Office-31 (source only, best result over the 5000-step run):
W->A:75.29%
D->A:76.67%
A->D:89.56%
A->W:89.94%
My environment is a little different from yours (because of the CUDA version on my servers), but I don't think that's the primary cause.
*pytorch==1.12.0.dev20220224+cu111 (to use the apex provided by https://github.com/NVIDIA/apex, I can only install pytorch >= 1.12)
*torchvision==0.12.0.dev20220224+cu111
*torchaudio==0.13.0.dev20220224+cu111
tqdm==4.50.2
tensorboard==2.8.0
*apex==0.1 (the command 'conda install -c conda-forge nvidia-apex' attempts to install torch 1.4.0, as the picture shows, which later conflicts with torchvision at runtime; so I installed apex from https://github.com/NVIDIA/apex)
scipy==1.5.2
ml-collections==0.1.0
scikit-learn==0.23.2
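For anyone comparing environments, here is a small sanity-check snippet; it only assumes the packages listed above are installed, and simply prints the versions actually in use:

import torch, torchvision, tqdm, tensorboard, scipy, sklearn

# Print installed versions to compare against the list above.
for name, mod in [("torch", torch), ("torchvision", torchvision),
                  ("tqdm", tqdm), ("tensorboard", tensorboard),
                  ("scipy", scipy), ("scikit-learn", sklearn)]:
    print(name, getattr(mod, "__version__", "unknown"))

# Older apex builds expose no reliable version attribute; just check the import.
try:
    import apex  # noqa: F401
    print("apex import OK")
except ImportError as exc:
    print("apex missing:", exc)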
Thanks, let me check this.
@ShiyeLi Hi, would you please send the data.zip in Datasets to [email protected]? I accidentally deleted it from my Google Drive yesterday. Thanks.
I have sent the zip file; did you receive it?
> I have sent the zip file; did you receive it?
Yes, thanks.
Sorry for the late reply; I still cannot reproduce your results. May I know which pre-trained ViT you are using? Thanks.
> Sorry for the late reply; I still cannot reproduce your results. May I know which pre-trained ViT you are using? Thanks.
I use the pre-trained model 'ViT-B_16.npz' you provided in this repo.
Thanks. I will test this on another machine and will let you know by the end of today.
@hellowangqian @ShiyeLi which pre-trained ViT are you using?
> @hellowangqian @ShiyeLi which pre-trained ViT are you using?
I used ViT-B_16.npz previously. Now I can reproduce the SourceOnly results for OfficeHome in the paper by using ImageNet21K_ViT-B_16.npz.
Thanks. I have tested this code on another machine by downloading the repo, building the environment, and downloading the dataset all from scratch, but still cannot reproduce the issues you reported.
I guess all results reported in the paper are based on the ImageNet21K_ViT-B_16.npz pre-trained model, right? If so, lower performance is expected when ViT-B_16.npz (pre-trained on ImageNet-1K) is used. If you can get better results than what I shared above using ViT-B_16.npz (ImageNet-1K), could you please share them here for reference? Thanks.
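As a side note, both checkpoints are plain NumPy .npz archives, so one can quickly confirm which file a run actually loads. A minimal sketch (path as in the commands above):

import numpy as np

# Inspect the checkpoint passed via --pretrained_dir; swap in
# ImageNet21K_ViT-B_16.npz to compare the two files.
weights = np.load("checkpoint/ViT-B_16.npz")
print(len(weights.files), "parameter arrays")
print(sorted(weights.files)[:5])  # a few parameter names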
Right. Please check the following tables, where TVT* denotes the results obtained with ViT-B_16.npz (ImageNet-1K).
Thanks, I haven't tried reproducing TVT* yet. What I got was for SourceOnly* (SourceOnly with ViT-B_16.npz). Since I've managed to reproduce the SourceOnly results, my SourceOnly* results above should be close to yours, if you have those results.