vdigpku / T-SEA
[CVPR 2023] T-SEA: Transfer-based Self-Ensemble Attack on Object Detection
To evaluate the methods in the paper, detection label files from each detector are required. The labels for the three pedestrian datasets used in the source code are all provided directly (as shown in the figure below). For a custom dataset, however, there are no per-detector detection labels, so the evaluation cannot be carried out.
I would like to ask how to generate the label files used for each detector's evaluation in the experiments, and whether there is an object-detection model library that can train on a dataset from scratch (without pre-trained weights) to produce the final label files. I have surveyed the relevant object-detection libraries that require pre-trained models, and not all of the detectors used in the experiments are covered. Please advise on how to obtain per-detector label files for evaluating custom datasets.
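Lacking a library that trains every detector from scratch, a practical route is to run each pre-trained detector on the clean custom images and dump its predictions as the per-detector "detection label" files. Below is a minimal sketch of the dumping step only; the function name, the normalized-xyxy input, and the rescaled `class x1 y1 x2 y2` format are assumptions based on the README's description, not the repo's actual API.

```python
# Hedged sketch: write one rescaled label file per image from a detector's
# normalized predictions. Format ("class x1 y1 x2 y2", coords in
# [0, input_size]) is assumed from the README; names are illustrative.
import os

def write_det_labels(preds, img_name, out_dir, input_size=416):
    """preds: list of (class_id, x1, y1, x2, y2) with coords in [0, 1]."""
    os.makedirs(out_dir, exist_ok=True)
    lines = []
    for cls, x1, y1, x2, y2 in preds:
        # Rescale normalized xyxy into [0, input_size], as mAP.py expects.
        coords = [round(v * input_size, 1) for v in (x1, y1, x2, y2)]
        lines.append(f"{cls} " + " ".join(str(c) for c in coords))
    path = os.path.join(out_dir, os.path.splitext(img_name)[0] + ".txt")
    with open(path, "w") as f:
        f.write("\n".join(lines))
    return path
```

Any source of pre-trained predictions (e.g. a torchvision or torch.hub detector) can feed `preds`; only the dump format matters for the evaluation scripts.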
Hello, I tried running eval.sh and evaluate.py but got the following error. I checked attack-labels; it seems to be a folder generated at runtime, and it is empty. What is the problem and how can I fix it? Thanks.
0it [00:01, ?it/s]
ln -s /content/drive/MyDrive/proj/T-SEA/data/INRIAPerson/Test/labels/yolov2-rescale-labels /content/drive/MyDrive/proj/T-SEA/eval/inria/demo/v5-demo/yolov2/det-labels
ground truth path : /content/drive/MyDrive/proj/T-SEA/eval/inria/demo/v5-demo/yolov2/det-labels
Error. File not found: /content/drive/MyDrive/proj/T-SEA/eval/inria/demo/v5-demo/yolov2/attack-labels/crop001501.txt
(You can avoid this error message by running extra/intersect-gt-and-dr.py)
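The hint in that error message points at a mismatch between the ground-truth and detection-result folders: mAP evaluation expects the same set of per-image `.txt` files in both. A minimal sketch of what an intersect step does (folder handling and names are assumptions; the repo's actual `extra/intersect-gt-and-dr.py` may differ in detail):

```python
# Hedged sketch: keep only label files present in BOTH directories, so
# the mAP script never looks up a file (e.g. crop001501.txt) that exists
# on one side only. Directory arguments are illustrative.
import os

def intersect_label_dirs(gt_dir, dr_dir):
    gt = set(os.listdir(gt_dir))
    dr = set(os.listdir(dr_dir))
    common = gt & dr
    for name in gt - common:          # drop ground-truth-only files
        os.remove(os.path.join(gt_dir, name))
    for name in dr - common:          # drop detection-only files
        os.remove(os.path.join(dr_dir, name))
    return sorted(common)
```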
Sorry to bother you. I trained a patch following README.md and ran evaluate.py to test its performance on other detectors, but the results are unsatisfactory. I have two questions. First, how can I tell which epoch's patch performs best? I evaluated the patch after 1000 epochs and after 300 epochs, and the 300-epoch one is much better than the 1000-epoch one; this may be a problem with how I use evaluate.py. Second, is there any preparation I should do before running evaluate.py? Each time I run it, I simply generate clean-image labels with the different detectors and compute mAPs on them. I wonder whether something is wrong with the detector weights I downloaded from the websites listed in the provided files.
Could you kindly provide it? Thank you.
Hello, thank you for your work! I have some questions about coco_process.py.
As you said in the README, "the xyxy coordinates of the bbox are scaled into [0, 1], or a rescaled version in [0, input_size]. The latter meets the formatting requirements of mAP.py. The rescaled label file format will be like:"
The default rescale_factor is 416, but not every image's size is 416, so how can we rescale to [0, input_size]?
Also, yolo_bbox *= rescale_factor does not seem to rescale yolo_bbox; it repeats yolo_bbox rescale_factor times. I revised the code and got the same ground truth as your update. Thank you for your reply!
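The second observation is a general Python pitfall worth demonstrating: `*=` on a plain list repeats the list rather than scaling its elements, while a NumPy array scales elementwise. The value of `yolo_bbox` below is illustrative, not the repo's data.

```python
# List multiplication repeats; NumPy multiplication scales elementwise.
import numpy as np

yolo_bbox = [0.25, 0.25, 0.5, 0.5]    # normalized xyxy in [0, 1]
rescale_factor = 416

repeated = yolo_bbox * 2              # list repetition: 8 elements, values unchanged
scaled = (np.array(yolo_bbox) * rescale_factor).tolist()  # elementwise scaling
```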
Hello, I read in the paper that a smaller patch scale is better, and the figure shows the same. So why does the description say "A large patch scale during training will cause the test mAP to drop lower and faster"?
Testing with -test_origin causes problems because save_label mutates all_preds.
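A common workaround for this kind of bug, offered as a sketch rather than the repo's actual fix: if a save routine edits the predictions it receives in place, hand it a deep copy so the caller's `all_preds` stays intact. Names and signatures below are illustrative.

```python
# Hedged sketch: isolate a mutating save routine from the caller's data.
import copy

def save_label_safely(save_label, all_preds, *args, **kwargs):
    # deepcopy means any in-place edits inside save_label (e.g. rescaling
    # coordinates) cannot leak back into all_preds.
    return save_label(copy.deepcopy(all_preds), *args, **kwargs)
```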
Hello, I used the official YOLOv5 code to run detection from a webcam, but the adversarial patch is not very effective. What could be the cause? The patch was printed out from the demo image in your repository, and the model weights are yolov5s.pt.
The video link is as follows:
Link: https://pan.baidu.com/s/12oGVUkQVBZ_OvGhKg-ZTWQ
Extraction code: empj
When viewing in TensorBoard, the drawn boxes are offset toward the lower right. I have tried various bbox format conversions, but none of them seem right.
Running the test file reports a file-not-found error:
ground truth path : /home/work/Users/Jetball/MainCode/attack/T-SEA-main/data/test/v5-demo/yolov5/det-labels
Error: No ground-truth files found!
Hello, when I run your train_optim.py, GPU memory usage grows slowly as the epochs increase. Did you encounter this problem when running on your own devices?
When I tried to reproduce the situation in Figure 5, I found that I could not get similar results.
patch use: ssd-combine-scale-1.png
yolov5 run cmd: python detect.py --source 0 --iou-thres 0.45 --classes 0
(default use yolov5s.pt)
I would like to ask: is the reason I cannot get similar results that specific images need to be added to the training set?
The INRIAPerson dataset can no longer be downloaded online, and the Google Drive link only contains the test set. Could you share the complete training set?
Hi, thanks for your interesting research. I want to evaluate your code on the INRIA Person dataset, but when I click the generated labels of INRIA, the link goes to the COCO labels. How can I get the INRIA labels (e.g. faster_rcnn_rescale or yolov5-rescale-labels)?
CUDA memory usage increases over time during training. Is it a bug or a feature? How can I fix or avoid it? My training process gets killed at around 800 epochs due to CUDA memory usage, but I need more.
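A common cause of this pattern, offered as a guess rather than a diagnosis of train_optim.py: appending loss tensors to a Python list each epoch keeps every step's autograd graph alive, so memory grows linearly. Logging `loss.item()` (a plain float) instead lets each graph be freed. The loop below is illustrative, not the repo's code.

```python
# Hedged sketch of the PyTorch "accumulating history" pitfall: storing
# the loss tensor retains its autograd graph; storing loss.item() does not.
import torch

hist_leaky, hist_safe = [], []
x = torch.ones(3, requires_grad=True)
for _ in range(2):
    loss = (x * 2).sum()
    hist_leaky.append(loss)         # tensor + graph stay alive (leaks)
    hist_safe.append(loss.item())   # plain float; graph can be collected
```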
/home/subinyi/anaconda3/envs/pytorch/bin/python /home/subinyi/Users/T-SEA-main/evaluate.py
Model init /home/subinyi/Users/T-SEA-main/detlib/HHDet/yolov5/yolov5/models/yolov5s.yaml
model cfg : HHDet/yolov5/yolov5/models/yolov5s.yaml
Reading patch from file: ./results/v5-demo.png
-------------------------Evaluating-------------------------
patch : ./results/v5-demo.png
cfg : ./configs/eval/coco80.yaml
save : /home/subinyi/Users/T-SEA-main/data/test/v5-demo
label_path : /home/subinyi/Users/T-SEA-main/data/INRIAPerson/Test/labels
data_root : /home/subinyi/Users/T-SEA-main/data/INRIAPerson/Test/pos
test_origin : False
test_gt : False
stimulate_uint8_loss : False
save_imgs : /home/subinyi/Users/T-SEA-main/data/test/
gen_no_label : False
eva_class : 0
quiet : False
eva_class_list : ['person']
ignore_class : ['cow', 'dog', 'skis', 'clock', 'tvmonitor', 'sportsball', 'spoon', 'horse', 'truck', 'oven', 'elephant', 'banana', 'broccoli', 'chair', 'bicycle', 'orange', 'bus', 'umbrella', 'firehydrant', 'refrigerator', 'toaster', 'fork', 'wineglass', 'vase', 'frisbee', 'donut', 'baseballglove', 'zebra', 'scissors', 'bench', 'knife', 'book', 'baseballbat', 'handbag', 'boat', 'cat', 'skateboard', 'suitcase', 'laptop', 'pizza', 'hotdog', 'bed', 'keyboard', 'parkingmeter', 'aeroplane', 'train', 'cup', 'cake', 'trafficlight', 'backpack', 'hairdrier', 'surfboard', 'stopsign', 'diningtable', 'sink', 'toilet', 'bowl', 'motorbike', 'snowboard', 'remote', 'mouse', 'sofa', 'sandwich', 'tennisracket', 'car', 'pottedplant', 'kite', 'cellphone', 'tie', 'giraffe', 'carrot', 'bear', 'bird', 'bottle', 'microwave', 'apple', 'teddybear', 'toothbrush', 'sheep']
100%|██████████| 288/288 [01:58<00:00, 2.44it/s]
ln -s /home/subinyi/Users/T-SEA-main/data/INRIAPerson/Test/labels/yolov5-rescale-labels /home/subinyi/Users/T-SEA-main/data/test/v5-demo/yolov5/det-labels
ground truth path : /home/subinyi/Users/T-SEA-main/data/test/v5-demo/yolov5/det-labels
0.00% = person AP
mAP = 0.00%
n classes: 1
No class-related preprocesser to be plotted!
n classes: 1
n classes: 1
ln -s /home/subinyi/Users/T-SEA-main/data/INRIAPerson/Test/labels/ground-truth-rescale-labels /home/subinyi/Users/T-SEA-main/data/test/v5-demo/yolov5/ground-truth
Attack Performance. AP : {'yolov5': 0.0}
See results in path : /home/subinyi/Users/T-SEA-main/data/test/v5-demo
Process finished with exit code 0
ln -s /PycharmProjects/T-SEA-main/data/INRIAPerson/Test/labels/faster_rcnn-rescale-labels /PycharmProjects/T-SEA-main/eval/inria/demo/v5-demo/faster_rcnn/det-labels
ground truth path : /PycharmProjects/T-SEA-main/eval/inria/demo/v5-demo/faster_rcnn/det-labels
Error. File not found: /PycharmProjects/T-SEA-main/eval/inria/demo/v5-demo/faster_rcnn/attack-labels/crop001501.txt
(You can avoid this error message by running extra/intersect-gt-and-dr.py)
In gen_det_labels.py, when I try to save imgs, I run into this problem:
img_numpy, img_numpy_int8 = detector.unnormalize(img_tensor_batch[0])
The detector has no attribute unnormalize, and I cannot find normalize in the dataloader either. I hope you can help me! Thank you!!
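For reference, a typical `unnormalize` in detection code maps a CHW float image in [0, 1] back to an HWC uint8 array for saving. This is a guess at what the missing helper does, not the repo's actual definition:

```python
# Hedged sketch of a typical unnormalize: CHW float [0, 1] -> HWC float
# plus an HWC uint8 copy suitable for image writers. Names/signature are
# illustrative only.
import numpy as np

def unnormalize(img_tensor):
    """img_tensor: CHW float array-like with values in [0, 1]."""
    arr = np.asarray(img_tensor, dtype=np.float32)
    img = np.transpose(arr, (1, 2, 0))                    # CHW -> HWC
    img_int8 = (img * 255.0).clip(0, 255).astype(np.uint8)
    return img, img_int8
```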
"ImportError: cannot import name 'ConvBNActivation' from 'torchvision.models.mobilenetv2'"
May I ask which versions of PyTorch and torchvision you are using? I get this error when running SSD and Faster R-CNN!