
car-damage-detector's Introduction

Using Mask R-CNN to detect Car Damage

Using Matterport's excellent Mask_RCNN implementation and following Priya's example, I trained a model that highlights areas where a car is damaged (e.g. dents, scratches). You can run the step-by-step notebook in Google Colab or use the following:

Usage: import the module (see Jupyter notebooks for examples), or run from
       the command line as such:

    # Train a new model starting from pre-trained COCO weights
    python3 custom.py train --dataset=/path/to/dataset --weights=coco

    # Resume training a model that you had trained earlier
    python3 custom.py train --dataset=/path/to/dataset --weights=last

    # Train a new model starting from ImageNet weights
    python3 custom.py train --dataset=/path/to/dataset --weights=imagenet

    # Apply color splash to an image
    python3 custom.py splash --weights=/path/to/weights/file.h5 --image=<URL or path to file>

    # Apply color splash to video using the last weights you trained
    python3 custom.py splash --weights=last --video=<URL or path to file>
"""

Output Detection

Gather training data

Use the google-images-download library or search manually for images on Google Images or Flickr. I chose Flickr and filtered by photos licensed for commercial use and modification. I downloaded 80 images into the 'images' folder.
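
If you prefer to script the download, a minimal sketch using the google-images-download library could look like the following (the keywords and limit are illustrative, not the exact queries used for this dataset):

# Illustrative sketch: download candidate training images with
# google-images-download. Keywords and limits are examples only.
from google_images_download import google_images_download

downloader = google_images_download.googleimagesdownload()
downloader.download({
    "keywords": "car dent,car scratch",   # example search terms
    "limit": 40,                          # images per keyword
    "output_directory": "images",         # save into the 'images' folder
    "format": "jpg",
})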

Installation

This script supports Python 2.7 and 3.7, although if you run into problems with TensorFlow on Python 3.7, it might be easier to run everything from a Google Colaboratory notebook.

Clone this repository

$ git clone https://github.com/nicolasmetallo/car-damage-detector.git

Install pre-requisites

$ pip install -r requirements.txt

Split dataset into train, val, test

Run the 'build_dataset.py' script to split the 'images' folder into train, val, test folders.

$ python3 build_dataset.py --data_dir='images' --output_dir='dataset'
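
A rough, illustrative sketch of the idea behind such a split (the 80/10/10 ratios and file handling below are assumptions, not necessarily the exact behaviour of build_dataset.py):

# Illustrative train/val/test split of an image folder.
import os
import random
import shutil

data_dir, output_dir = "images", "dataset"
filenames = sorted(f for f in os.listdir(data_dir) if f.lower().endswith((".jpg", ".png")))
random.seed(42)          # fixed seed so the split is reproducible
random.shuffle(filenames)

n = len(filenames)
splits = {
    "train": filenames[:int(0.8 * n)],
    "val":   filenames[int(0.8 * n):int(0.9 * n)],
    "test":  filenames[int(0.9 * n):],
}
for split, files in splits.items():
    split_dir = os.path.join(output_dir, split)
    os.makedirs(split_dir, exist_ok=True)
    for f in files:
        shutil.copy(os.path.join(data_dir, f), os.path.join(split_dir, f))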

Annotate images

There's no standard way to annotate images, but the VIA tool is pretty straightforward. It saves the annotation in a JSON file, and each mask is a set of polygon points. Here's a demo to get used to the UI and the parameters to set. Annotate the images in the 'train' and 'val' folders separately. Once you are done, save the exported 'via_region_data.json' to each folder.

VIA annotation
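
For reference, the exported annotations can be read back with a few lines of Python, following the pattern of Matterport's balloon sample (the exact JSON layout may vary slightly between VIA versions):

# Sketch: read the VIA polygon annotations for one split.
import json
import os

annotations = json.load(open(os.path.join("dataset/train", "via_region_data.json")))
for a in annotations.values():
    if not a.get("regions"):
        continue
    # 'regions' is a dict in older VIA exports and a list in newer ones
    regions = a["regions"].values() if isinstance(a["regions"], dict) else a["regions"]
    polygons = [r["shape_attributes"] for r in regions]
    # each polygon holds 'all_points_x' and 'all_points_y' for one damage mask
    print(a["filename"], len(polygons), "damage region(s)")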

Train your model

We download the 'coco' weights and start training from that point with our custom images (transfer learning). The script will download the weights automatically if it can't find them. Training takes around 4 minutes per epoch, for 10 epochs.

$ python3 custom.py train --dataset='dataset' --weights=coco # it will download the COCO weights if you don't have them
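
Under the hood, training is driven by a Mask R-CNN Config subclass roughly like the one below (the class name and hyperparameter values are illustrative assumptions, not necessarily the exact settings in custom.py):

# Sketch of a training configuration for a single 'damage' class.
from mrcnn.config import Config

class DamageConfig(Config):
    NAME = "damage"
    IMAGES_PER_GPU = 1               # keep the batch small for modest GPUs
    NUM_CLASSES = 1 + 1              # background + damage
    STEPS_PER_EPOCH = 100            # assumption; tune to your dataset size
    DETECTION_MIN_CONFIDENCE = 0.9   # skip low-confidence detections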

Google Colaboratory

Mount Google Drive

# Install the PyDrive wrapper & import libraries.
# This only needs to be done once in a notebook.
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials

# Authenticate and create the PyDrive client.
# This only needs to be done once in a notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

Save trained model to Google Drive

As the weights file is around 250 MB, you may not be able to download it directly from Colab. Save it to your Google Drive first, then download it to your local machine. Modify the variable names and paths to match your own.

# Create & upload a file.
uploaded = drive.CreateFile({'title': 'file.h5'})     # name to give the file in Google Drive
uploaded.SetContentFile('path/to/weights/file.h5')    # local path to the trained weights
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))

Load weights file from Google Drive

# Mount Drive folder
from google.colab import drive
drive.mount('/content/drive')
# Look for 'mask_rcnn_damage_0010.h5' and copy it to the working directory
!cp 'drive/My Drive/mask_rcnn_damage_0010.h5' 'car-damage-detector'
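
With the file in the working directory, the weights can be loaded for inference through the Matterport API. A minimal sketch (the config values, such as one 'damage' class plus background, are assumptions and should match those used for training):

# Sketch: build an inference model and load the trained weights.
import mrcnn.model as modellib
from mrcnn.config import Config

class InferenceConfig(Config):
    NAME = "damage"
    NUM_CLASSES = 1 + 1      # background + damage (assumed, as during training)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1       # detect one image at a time

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="logs")
model.load_weights("mask_rcnn_damage_0010.h5", by_name=True)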

Apply color splash to an image

$ python3 custom.py splash --weights=logs/damage20181007T0431/mask_rcnn_damage_0010.h5 --image=dataset/test/67.jpg

or run inspect_custom_model.ipynb
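
For reference, the splash effect itself is straightforward: keep colour wherever the model detects damage and turn the rest of the image grayscale. A short sketch based on Matterport's color_splash helper:

# Sketch: grey out everything except the detected damage regions.
import numpy as np
import skimage.color

def color_splash(image, mask):
    """image: RGB array; mask: [H, W, N] boolean masks from model.detect()."""
    gray = skimage.color.gray2rgb(skimage.color.rgb2gray(image)) * 255
    if mask.shape[-1] > 0:
        mask = np.sum(mask, -1, keepdims=True) >= 1   # merge instance masks
        return np.where(mask, image, gray).astype(np.uint8)
    return gray.astype(np.uint8)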

car-damage-detector's People

Contributors

mnm403, nicolasmetallo


car-damage-detector's Issues

Cannot start training

Traceback (most recent call last):
  File "custom.py", line 43, in <module>
    from mrcnn import model as modellib, utils
  File "C:\Users\Computer\Documents\GitHub\car-damage-detector\mrcnn\model.py", line 26, in <module>
    from mrcnn import utils
  File "C:\Users\Computer\Documents\GitHub\car-damage-detector\mrcnn\utils.py", line 18, in <module>
    import skimage.io
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\skimage\io\__init__.py", line 15, in <module>
    reset_plugins()
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\skimage\io\manage_plugins.py", line 88, in reset_plugins
    _load_preferred_plugins()
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\skimage\io\manage_plugins.py", line 68, in _load_preferred_plugins
    _set_plugin(p_type, preferred_plugins['all'])
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\skimage\io\manage_plugins.py", line 80, in _set_plugin
    use_plugin(plugin, kind=plugin_type)
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\skimage\io\manage_plugins.py", line 253, in use_plugin
    _load(name)
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\skimage\io\manage_plugins.py", line 297, in _load
    fromlist=[modname])
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\skimage\io\_plugins\matplotlib_plugin.py", line 3, in <module>
    from mpl_toolkits.axes_grid1 import make_axes_locatable
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\mpl_toolkits\axes_grid1\__init__.py", line 1, in <module>
    from . import axes_size as Size
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\mpl_toolkits\axes_grid1\axes_size.py", line 14, in <module>
    from matplotlib import cbook
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\matplotlib\__init__.py", line 107, in <module>
    from . import cbook, rcsetup
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\matplotlib\rcsetup.py", line 28, in <module>
    from matplotlib.fontconfig_pattern import parse_fontconfig_pattern
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\matplotlib\fontconfig_pattern.py", line 15, in <module>
    from pyparsing import (Literal, ZeroOrMore, Optional, Regex, StringEnd,
  File "C:\Users\Computer\AppData\Local\Programs\Python\Python36\lib\site-packages\pyparsing\__init__.py", line 130, in <module>
    __version__ = __version_info__.__version__
AttributeError: 'version_info' object has no attribute '__version__'

Training for more than 12 hrs and still not completed

The training has been running for hours and is still not finished. To be precise, you said it would take approximately 4 minutes per epoch, but for me each epoch takes 3 hours 30 minutes, so after 12 hours only 4 epochs have completed.
Please guide me with this issue. Thanks.

Training stuck at 1st epoch itself

My system configuration:

Core i7 CPU @ 3.60 GHz * 8

24 GB RAM

GeForce GTX 1050 TI 4GB variant

OS: Ubuntu 18.04.4 LTS

ISSUE:

While training the multi-class car part detection script provided in the repo, training gets stuck at the 1st epoch itself. I have tried many solutions from other issues as well, but none worked. Please help with this matter.

Thank you.

Tensorflow Keras versions to use

Hello,

Can you please point to the appropriate TensorFlow and Keras versions to use, so that I can avoid running into errors regarding KE, KL, etc.?

Thanks in advance for your help.

Missing inspect_custom_model.ipynb file

Sir, at the end of the README.md you mention using the inspect_custom_model.ipynb file. I have checked the whole repo but haven't found it anywhere. Please upload it if it is missing.
Also, when I try to run the following command

python custom.py splash --weights=mask_rcnn_coco.h5 --image=test.jpeg

it writes an image to the directory, but without any splash on the damaged areas. Any suggestions as to why that is?

Thanks

How to download coco

python3 custom.py --dataset='output' --weights=coco

but it does not download the COCO weights; it returns the output below. I am not using a GPU.
Please help me to move forward.

Many thanks.

2020-06-22 11:10:17.254678: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-06-22 11:10:17.255093: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Using TensorFlow backend.
usage: custom.py [-h] [--dataset /path/to/custom/dataset/] --weights /path/to/weights.h5 [--logs /path/to/logs/] [--image path or URL to image] [--video path or URL to video]
custom.py: error: the following arguments are required:

No results using trained model.

I have trained the model for 10 epochs and used the trained weights to run the splash command, but it doesn't output anything; the detected masks only contain zeros.
