
rpnplus's Introduction

RPNplus

This repository will no longer be updated. The new detection model will be published here: TARTDetection

Code accompanying the paper "Expecting the Unexpected: Training Detectors for Unusual Pedestrians with Adversarial Imposters" (CVPR 2017). For the synthetic data generator, please refer to this repo.

Requirement

  • Ubuntu or macOS
  • tensorflow 1.1 or later
  • pip install image
  • pip install sklearn
  • pip install scipy
  • image_pylib (this repository should be placed in the same folder as RPNplus)

Usage

Run Demo:

  • Download the model files (RPN_model & VGG16_model) first and put them in the ./models/ folder.
  • The number 0 is your GPU index; you can change it to any available GPU index.
  • This demo will test the images in the ./images/ folder and write the results to the ./results/ folder.
python demo.py 0


Train:

  • The number 0 is your GPU index; you can change it to any available GPU index.
  • Open train.py and set imageLoadDir and anoLoadDir to proper values (imageLoadDir is the folder containing your images, anoLoadDir is the folder containing your annotation files); see the sketch after the command below.
python train.py 0
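
Below is a minimal sketch of the two variables to edit near the top of train.py; the paths are hypothetical placeholders, not values from the repository.

imageLoadDir = '/data/my_dataset/images/'        # folder containing your training images
anoLoadDir = '/data/my_dataset/annotations/'     # folder containing the matching annotation files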

Dataset Download

Related Datasets

Cite

Please cite our paper if you use this code or our datasets in your own work:

@InProceedings{Huang_2017_CVPR,
author = {Huang, Shiyu and Ramanan, Deva},
title = {Expecting the Unexpected: Training Detectors for Unusual Pedestrians With Adversarial Imposters},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}

Acknowledgement

Author

Shiyu Huang ([email protected])


rpnplus's Issues

training with previously trained model

After training for 10000 iterations, I want to train for another 10000.
Instead of training from 0 to 20000, I was wondering if I could load params_10000.npy and train for 10000 more iterations.

I tried loading the model the same way as vgg16.npy, but the loss comes out huge.

Is there a way to continue training on top of a previously trained model?
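
A generic sketch for inspecting such a checkpoint before warm-starting from it; it assumes params_10000.npy stores a pickled Python dict of layer names to parameter arrays, in the same style as vgg16.npy, which is not confirmed from the repository's save code:

import numpy as np

# Load the saved parameters; allow_pickle=True is needed for dict-style .npy files.
params = np.load('./models/params_10000.npy', allow_pickle=True).item()

# Print the layer names and value types to check that they match what the
# vgg16.npy loading path expects before pointing the training script at this file.
for name, value in params.items():
    print(name, type(value))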

The anchor_min_height in data_engine.py

Hi,
I would like to understand this parameter in the code:

self.anchor_min_height = 40 * self.image_resize_factor

Where does the 40 come from? Is it 45 - 5 = 40, where 45 is the feature map height?

Am I right?

thx

proposal_prepare in data_engine.py

hi,
I want to adapt your model to my own dataset, so I am reading the source code in data_engine.py, in particular this part:

    def proposal_prepare(self, imdb):
        anchors = self.generate_anchors()
        proposals = np.zeros(
            [self.anchor_size * self.convmap_width * self.convmap_height, 4])

        for i in range(self.convmap_height):
            h = i * 16 + 8
            for j in range(self.convmap_width):
                w = j * 16 + 8
                for k in range(self.anchor_size):
                    index = i * self.convmap_width * self.anchor_size + \
                            j * self.anchor_size + k
                    anchor = anchors[k, :]
                    proposals[index, :] = anchor + np.array([w, h, w, h])

I understand that the 16 comes from the four pooling layers (stride 16), but what does the 8 mean?

thx
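
For reference, in standard RPN-style anchor generation with a feature stride of 16, j * 16 + 8 maps conv-map cell j to the center of the 16-pixel-wide strip it covers in the input image, so the 8 is simply half the stride. A small self-contained sketch of that mapping (not taken from the repository):

# With a feature stride of 16, conv-map cell j covers input columns [j*16, (j+1)*16);
# its center column is j*16 + 8, which is where the anchors are placed.
stride = 16
for j in range(4):
    cell_start = j * stride
    cell_center = j * stride + stride // 2   # equals j * 16 + 8
    print(j, cell_start, cell_center)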

Is model trained?

Hello, thank you for this great work.

Is the model in this repo already trained? If I use test.py, I see that it already detects people, so maybe it is ready to use?

Another question: does train.py have a stopping criterion or a loss threshold? I set it to train on the Precarious dataset, and after 43 hours I have this output:

step : 8400 time : 153299.0062 loss : 0.46541554 l_r : 0.0001

When will it stop?

Thank you

PIL not found

I got the following error while running your code:

Traceback (most recent call last):
  File "demo.py", line 10, in <module>
    from image_pylib import IMGLIB
ModuleNotFoundError: No module named 'image_pylib'
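
As noted in the Requirement section, image_pylib is a separate repository that has to be importable from demo.py. If cloning it next to RPNplus is not enough on your system, one workaround sketch (assuming the image_pylib repo sits in a sibling folder and exposes image_pylib.py at its top level) is to extend sys.path before the import:

import os, sys

# Assumed layout: .../parent/RPNplus/demo.py and .../parent/image_pylib/image_pylib.py
repo_root = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(repo_root, '..', 'image_pylib'))

from image_pylib import IMGLIB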

Train own dataset

Hello, I want to try training on my own dataset. Could you guide us on how to train with our own dataset? We would really appreciate it. Thank you :)

How to fine-tune your model?

I want to train on my own data, but I could not fine-tune the model with it. How can I fine-tune the model?

How can I change the minimum bbox size?

In the Duke-MTMC-reID dataset, people occupy only a small fraction of each image. The boxes the model produces are sometimes noticeably larger than the person, so the computed scores are very low. The model does not seem able to box very small people. We studied the code for a long time without figuring it out; could you tell us how to change the minimum bbox size so that very small people can be detected? Thanks!

What does 'wandhG' mean?

hi,
I want to use RPNplus to detect objects and to extract proposal regions. I changed the input image size to (224, 224, 1), but I don't know how to change 'wandhG':

wandhG = [[100.0, 100.0], [300.0, 300.0], [500.0, 500.0],
          [200.0, 100.0], [370.0, 185.0], [440.0, 220.0],
          [100.0, 200.0], [185.0, 370.0], [220.0, 440.0]]

thx
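
wandhG appears to be the list of [width, height] anchor templates measured in pixels of the network's input image; that reading is an assumption, not confirmed from the source. Under that assumption, one option when changing the input resolution is to rescale each template by the ratio of the new size to the original size, as in this sketch (the original resolution below is a hypothetical placeholder):

# Hypothetical original input resolution -- replace with the size the repo actually uses.
orig_w, orig_h = 960.0, 720.0
new_w, new_h = 224.0, 224.0

wandhG = [[100.0, 100.0], [300.0, 300.0], [500.0, 500.0],
          [200.0, 100.0], [370.0, 185.0], [440.0, 220.0],
          [100.0, 200.0], [185.0, 370.0], [220.0, 440.0]]

# Scale widths by the width ratio and heights by the height ratio.
scaled_wandhG = [[w * new_w / orig_w, h * new_h / orig_h] for w, h in wandhG]
print(scaled_wandhG)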

My training result is bad, why?

I trained the model on synthetic_dataset. When I run demo.py with params_10000.npy (instead of model.npy), the result is very bad; it cannot even detect a person. How did you train your model @huangshiyu13?

How can I handle pictures of different sizes?

When I use the pre-trained model on my data, there are always several overlapping boxes on one body. How can I fix this? Is it related to the NMS threshold, or do I need to retrain it myself?
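
Overlapping detections on a single person are normally pruned by non-maximum suppression (NMS), and lowering the IoU threshold keeps fewer overlapping boxes. For reference, here is a minimal self-contained NMS sketch; it is a generic implementation, not the one used in this repository:

import numpy as np

def nms(boxes, scores, iou_threshold=0.3):
    """Greedy NMS. boxes is an (N, 4) array of [x1, y1, x2, y2]; returns kept indices."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the current box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept box more than the threshold.
        order = order[1:][iou <= iou_threshold]
    return keep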

How to change the bbox min size?

This model works well on some datasets, but on others such as Duke-MTMC-reID the people in the picture are very small. In that case the minimum bbox size is bigger than the people, so some of them cannot be detected. How can I deal with this problem? Thank you very much!
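
One direction, sketched below under the assumption that wandhG lists [width, height] anchor templates in input-image pixels, is to add smaller templates and to lower anchor_min_height in data_engine.py so that short anchors are not filtered out. Changing the number or size of anchors also changes the network's output layout, so the model would need retraining; this is not a verified recipe:

# Hypothetical smaller anchor templates appended to the existing list.
small_templates = [[50.0, 100.0], [60.0, 60.0], [40.0, 80.0]]
wandhG = [[100.0, 100.0], [300.0, 300.0], [500.0, 500.0],
          [200.0, 100.0], [370.0, 185.0], [440.0, 220.0],
          [100.0, 200.0], [185.0, 370.0], [220.0, 440.0]] + small_templates

# And in data_engine.py, a lower minimum anchor height (the original value is 40):
# self.anchor_min_height = 20 * self.image_resize_factor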

About RAM

First of all, thank you for sharing the code. I have run into a problem: my machine has 8 GB of RAM and a GPU with 12 GB of memory, and the dataset I want to use has about 40,000 images, but training crashes immediately because it runs out of memory. Is there any way to solve this without adding more RAM?

Can't train the model

I ran the following command in the terminal:
python train.py 0
and got the following error:

Traceback (most recent call last):
  File "train.py", line 240, in <module>
    with tf.device(gpuNow):
  File "/home/farshid/anaconda3/lib/python3.5/contextlib.py", line 59, in __enter__
    return next(self.gen)
  File "/home/farshid/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3045, in device
    device_function = pydev.merge_device(device_name_or_function)
  File "/home/farshid/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/device.py", line 282, in merge_device
    spec = DeviceSpec.from_string(spec or "")
  File "/home/farshid/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/device.py", line 228, in from_string
    return DeviceSpec().parse_from_string(spec)
  File "/home/farshid/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/device.py", line 146, in parse_from_string
    splits = [x.split(":") for x in spec.split("/")]
AttributeError: 'int' object has no attribute 'split'
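
The traceback indicates that tf.device received an integer instead of a device string. A possible fix, sketched here without knowing exactly how train.py parses its arguments, is to build a device string such as '/gpu:0' from the command-line index before entering the device scope:

import sys
import tensorflow as tf

# Build a device string like '/gpu:0' from the command-line GPU index instead of
# passing the raw integer to tf.device.
gpu_index = sys.argv[1] if len(sys.argv) > 1 else '0'
gpuNow = '/gpu:' + gpu_index

with tf.device(gpuNow):
    pass  # graph construction goes here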

training dataset

Thank you for sharing your code and dataset. I am writing about your synthetic_dataset. I want to train a model for detecting non-upright pedestrians and wondered whether I could use the synthetic_dataset mentioned in your paper "Expecting the Unexpected: Training Detectors for Unusual Pedestrians with Adversarial Imposters" (CVPR 2017) to train it. I found that the unreasonable examples in the synthetic dataset have not been removed, and I do not know how to select only the imposter images as you did in the paper. I would appreciate any advice.
Thank you!

type error in prepare_data

After the network starts training, it raises a type error in data_engine.py, in prepare_data, at the line fg_inx = fg_idx[:fg_num]:

TypeError: slice indices must be integers or None or have an __index__ method

I just downloaded the precarious_dataset and updated the folder paths in train.py.
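
This error usually means fg_num is a NumPy float (for example the result of multiplying a count by a ratio) rather than a Python int. A minimal, self-contained sketch of the usual fix, assuming that is the cause here, is to cast the index to int before slicing:

import numpy as np

fg_idx = np.arange(100)
fg_num = 256 * 0.25                 # a float, which cannot be used as a slice index
fg_inx = fg_idx[:int(fg_num)]       # casting to int makes the slice valid
print(fg_inx.shape)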
