
da_detection's People

Contributors

bhack, endernewton, hyunjaelee410, josephkj, kevinhkhsu, kukuruza, mbuckler, oya163, phil-bergmann, philokey, ppwwyyxx, ruotianluo, shijunk, snshine, tao-j, vasilgeorge, xyutao



da_detection's Issues

Question about Equation 2 in the paper

Hello! Your work is very interesting. While reading it recently, I have not been able to understand why Equation 2 is constructed the way it is, because it seems inconsistent with the conclusions of the paper Domain-Adversarial Neural Networks.

Error

FileNotFoundError: [Errno 2] No such file or directory: 'output/vgg16/KITTI_synthCity/adapt/vgg16_faster_rcnn_K2C_stage2_iter.pth'
Can anyone help with this?

About generating intermediate domain image size.

Thank you very much for sharing the code. Are the fake (intermediate-domain) images generated at the original image size? When the image size is (1024, 2048), GPU memory usage is very high. Is there a better way to handle this situation?
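If memory is the bottleneck, one common workaround (not necessarily what this repo does) is to downscale images before running the generator and upscale the results afterwards. A minimal sketch of the size computation, assuming a hypothetical `max_side` cap of 1024:

```python
def gan_input_size(width, height, max_side=1024):
    """Return a (width, height) whose longest side is at most max_side,
    preserving aspect ratio; never upscales."""
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)

# A 1024x2048 Cityscapes frame would be fed to the generator at 512x1024.
print(gan_input_size(2048, 1024))  # (1024, 512)
```

The actual resize can then be done with any image library (e.g. Pillow's `Image.resize` with bicubic resampling), trading some detail for a much smaller activation footprint.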

What are motor and bike classes for Cityscapes

In Table 4 of the original paper, results for adaptation from Cityscapes to BDD100k are shown. What are the bike and motor classes for Cityscapes? In the original Cityscapes challenge, there is only a motorcycle class.
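One plausible reading, sketched below as a guess rather than the authors' confirmed convention, is that the BDD-style "motor" and "bike" labels simply rename the Cityscapes "motorcycle" and "bicycle" classes:

```python
# Hypothetical mapping from Cityscapes annotation labels to the class
# names used in the Cityscapes -> BDD100k table ("motor", "bike").
CITYSCAPES_TO_BDD = {
    "person": "person",
    "rider": "rider",
    "car": "car",
    "truck": "truck",
    "bus": "bus",
    "train": "train",
    "motorcycle": "motor",  # assumed: "motor" = Cityscapes "motorcycle"
    "bicycle": "bike",      # assumed: "bike"  = Cityscapes "bicycle"
}

def to_bdd_label(cityscapes_label):
    """Translate a Cityscapes label to its assumed BDD table name."""
    return CITYSCAPES_TO_BDD.get(cityscapes_label)
```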


image size when generating from pre-trained weight

Hello. Let me ask a question to clarify one point.
You mentioned "Remember to change to the corresponding output image size" in the "Generate from pre-trained weight" section.
Does this "output image size" just mean the command-line argument "--size" in the CycleGAN implementation (aitorzip/PyTorch-CycleGAN)? My understanding is that no direct data preprocessing, such as resizing the training or test data, is needed.
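For reference, the generation step presumably amounts to something like the following invocation of that repo's inference script; the flag names follow aitorzip/PyTorch-CycleGAN's `test.py` but may differ in your checkout, and the paths here are placeholders:

```shell
# Hypothetical inference run with the output size bumped from the
# default 256 to match the target dataset's resolution.
python test.py --dataroot datasets/kitti2cityscapes/ \
               --size 1024 \
               --cuda \
               --generator_A2B output/netG_A2B.pth \
               --generator_B2A output/netG_B2A.pth
```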

How to generate score json file

In your README, you point out: "Save a dictionary of CycleGAN discriminator scores with image name as key and score as value". When training on a custom dataset, where can I get this score? Could you explain it? Thanks!
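The output format the README describes is just a JSON-serializable dict keyed by image name. A minimal sketch, where the score values are placeholders (in practice each would presumably come from running the trained CycleGAN discriminator on the translated image, e.g. something like `D(img).mean().item()` in PyTorch):

```python
import json

def save_discriminator_scores(scores, out_path):
    """Save a {image_name: score} dict as JSON, matching the README's
    'image name as key and score as value' description."""
    with open(out_path, "w") as f:
        json.dump(scores, f, indent=2)

# Placeholder scores for two hypothetical image names.
scores = {
    "frankfurt_000000_000294.png": 0.83,
    "frankfurt_000000_000576.png": 0.41,
}
save_discriminator_scores(scores, "discriminator_scores.json")
```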

Foggy cityscapes dataset

I'm curious about the experimental setting for the Foggy Cityscapes dataset.
Foggy Cityscapes provides foggy images at three attenuation levels (β = 0.005, 0.01, 0.02), i.e. 2975 images × 3 levels = 8,925 images.
Did you use all of this data in your experiments, or only the foggy images at a specific level?
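Selecting a single attenuation level comes down to filtering filenames, since Foggy Cityscapes encodes β in the file name. A sketch, assuming the standard naming scheme (e.g. `munster_000000_000019_leftImg8bit_foggy_beta_0.02.png`):

```python
def filter_by_beta(filenames, beta=0.02):
    """Keep only the Foggy Cityscapes files for one attenuation level,
    assuming the '..._foggy_beta_<beta>.png' naming scheme."""
    suffix = f"_foggy_beta_{beta}.png"
    return [f for f in filenames if f.endswith(suffix)]

files = [
    "munster_000000_000019_leftImg8bit_foggy_beta_0.005.png",
    "munster_000000_000019_leftImg8bit_foggy_beta_0.01.png",
    "munster_000000_000019_leftImg8bit_foggy_beta_0.02.png",
]
print(filter_by_beta(files, 0.02))  # keeps only the beta=0.02 file
```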

module 'layer_utils.roi_align._ext.crop_and_resize' has no attribute 'crop_and_resize_gpu_forward'

Please help, how can I fix this error?
Traceback (most recent call last):
File "./tools/trainval_net_adapt.py", line 147, in
pretrained_model=args.weight,max_iters=args.max_iters)
File "/home/cv-lab/DA_detection/tools/../lib/model/train_val_adapt.py", line 396, in train_net
sw.train_model(max_iters)
File "/home/cv-lab/DA_detection/tools/../lib/model/train_val_adapt.py", line 304, in train_model
self.net.train_adapt_step_img(blobs, blobsT, self.optimizer, self.D_img_op, synth_weight)
File "/home/cv-lab/DA_detection/tools/../lib/nets/network.py", line 820, in train_adapt_step_img
fc7, net_conv = self.forward(blobs_S['data'], blobs_S['im_info'], blobs_S['gt_boxes'])
File "/home/cv-lab/DA_detection/tools/../lib/nets/network.py", line 736, in forward
rois, cls_prob, bbox_pred, net_conv, fc7 = self._predict()
File "/home/cv-lab/DA_detection/tools/../lib/nets/network.py", line 697, in _predict
pool5 = self._crop_pool_layer(net_conv, rois)
File "/home/cv-lab/DA_detection/tools/../lib/nets/network.py", line 183, in _crop_pool_layer
torch.cat([y1/(height-1),x1/(width-1),y2/(height-1),x2/(width-1)], 1), rois[:, 0].int())
File "/home/cv-lab/DA_detection/tools/../lib/layer_utils/roi_align/crop_and_resize.py", line 21, in forward
_backend.crop_and_resize_gpu_forward(
AttributeError: module 'layer_utils.roi_align._ext.crop_and_resize' has no attribute 'crop_and_resize_gpu_forward'
Command exited with non-zero status 1
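This `AttributeError` on a module under `_ext` usually means the compiled CUDA extension was not built (or was built for a different Python/CUDA environment). A hedged sketch of a rebuild, assuming the usual pytorch-faster-rcnn-style layout where a `make.sh` under `lib/` compiles the extensions; the exact script name and arch flags depend on your checkout and GPU:

```shell
# Hypothetical rebuild steps; adjust to your repo layout and GPU arch.
cd lib
rm -rf layer_utils/roi_align/_ext   # clear any stale build artifacts
./make.sh                           # recompile crop_and_resize and friends
```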

Evaluation AP

Hi,

I really like your repository. Could you say something about the evaluation metrics? Which IoU thresholds did you use for the car AP metric?

Thanks in Advance

About Train CycleGAN

The image size of the Cityscapes and Foggy Cityscapes datasets is 1024 × 2048. So when training CycleGAN, is the network input also 1024 × 2048? I have a Tesla P100 with 16 GB of memory.

could you explain how to train full network step by step?

For training domain adaptation from KITTI to the Cityscapes dataset, which steps do I need to follow?

  1. First, train Faster R-CNN with the KITTI dataset.
  2. Then, using those trained weights, train the adaptation network with the Cityscapes dataset.

Is that correct?
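The two steps above might look roughly like the following; the script names and arguments here are hypothetical placeholders, so consult the repo's README for the real ones:

```shell
# 1. Train the source-only Faster R-CNN on KITTI (hypothetical script).
./experiments/scripts/train_faster_rcnn.sh 0 KITTI vgg16
# 2. Adapt to Cityscapes, initializing from the KITTI weights
#    (hypothetical script and dataset-pair name).
./experiments/scripts/train_adapt.sh 0 KITTI_cityscapes vgg16
```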

Reported Accuracy for cross camera adaptation on which dataset

Hi,
Could you please clarify whether the reported accuracy of the cross-camera model is based on the validation set of the Cityscapes dataset? Also, regarding the visualization, are you using the stage-2 model output to generate the features for the t-SNE visualization?
Thanks

About Cross Camera Adaptation

Thank you for sharing the code.
Could you provide the synthetic KITTI images used in KITTI -> Cityscapes? I used CycleGAN to synthesize images following the paper, but the result improved by only 1.0%. I tried adjusting the parameters, but still couldn't get an effective improvement. Do you have any suggestions?

Can I deploy the project on Windows?

Thank you for providing open-source code; your work is very meaningful. I would like to apply it to my own dataset to see the experimental results, but the servers in our laboratory all run Windows, and I have run into problems with the environment configuration. Can I deploy the project on Windows? Looking forward to your reply, thank you very much.

About One stage model

Thanks for sharing the code.
I would like to know whether this method can be applied to one-stage models such as YOLOv3 or SSD.
