
2018dsb's Introduction

2018DSB

2018 Data Science Bowl 2nd Place Solution

My solution is a modification of Unet. To make Unet instance-aware, I add eight more outputs that describe the relative position of each pixel within its instance, as shown in the images below. In the final version of the model, the entire Unet structure before the output layers was replaced by the pre-trained Mask-RCNN feature extractor (the P2 level, as in the Matterport implementation of Mask-RCNN) for better performance.

https://github.com/jacobkie/2018DSB/blob/master/imgs/0.png

The relative position masks are shown below: https://github.com/jacobkie/2018DSB/blob/master/imgs/1.png
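To make the extra outputs concrete, here is a minimal Keras sketch of attaching a one-channel foreground head plus the eight-channel relative-position head to a backbone feature map. It is an illustration only, not the repository's code; the layer names, the 64-filter intermediate convolution, and the stand-in backbone are assumptions.

# Minimal sketch (not the repository's code): a backbone feature map gets a
# 1-channel foreground head plus the 8-channel relative-position head
# described above. Layer names, filter widths and the stand-in backbone
# are illustrative assumptions.
from keras import layers, models

def build_heads(feature_map):
    # feature_map: 4-D tensor (batch, H, W, C), e.g. the P2 level of the
    # Mask-RCNN feature pyramid mentioned above.
    x = layers.Conv2D(64, 3, padding='same', activation='relu')(feature_map)
    # Semantic foreground/background probability per pixel.
    seg = layers.Conv2D(1, 1, activation='sigmoid', name='seg')(x)
    # Eight regression channels encoding each pixel's relative position
    # within its instance.
    rel_pos = layers.Conv2D(8, 1, name='rel_pos')(x)
    return seg, rel_pos

inp = layers.Input(shape=(256, 256, 3))
feat = layers.Conv2D(64, 3, padding='same', activation='relu')(inp)  # stand-in backbone
seg, rel_pos = build_heads(feat)
model = models.Model(inp, [seg, rel_pos])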

Code in utils.py, parallel_model.py, params.py, visualize.py, and model_rcnn_weight.py is partly adapted from Matterport Mask_RCNN (https://github.com/matterport/Mask_RCNN), which is under the MIT license. I also used its pre-trained weights on MS COCO (https://github.com/matterport/Mask_RCNN/releases).

Four sources of data were used:

  1. The revised train set (https://github.com/lopuhin/kaggle-dsbowl-2018-dataset-fixes)
  2. 2009 ISBI (http://murphylab.web.cmu.edu/data/2009_ISBI_Nuclei.html)
  3. Weebly (https://nucleisegmentationbenchmark.weebly.com/)
  4. TNBC (https://zenodo.org/record/1175282#.Ws2n_vkdhfA)

Some masks in the 2009 ISBI data set were manually modified.

To train from scratch

  1. Correct the directory paths of the stage 1 train set and stage 2 test set in params.py.
  2. Run eda.py to load the images and masks of the stage 1 train set and stage 2 test set and save them into pandas dataframes.
  3. Correct the directory paths and run 2009isbi.py, weebly.py, and TNBC.py.
  4. Run resize.py to create 256x256 image pads for all the datasets above (see the sketch after this list).
  5. Run train_ext.py to train from the weights pre-trained on MS COCO.
  6. Correct the directory path of weight_dir (where the model weights are saved) and run predict_auto.py to predict the stage 2 test set at four zooms (1/4, 1/2, 1, 2) and generate the corresponding instance masks.
  7. Correct the directory path of weight_dir and run submission.py to combine the instance masks from the four zooms and make the submission file.
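The sketch below shows what step 4 amounts to conceptually: pad each image so both sides are multiples of 256, then cut it into 256x256 patches. resize.py is the authoritative implementation; the pad value and tiling scheme here are assumptions.

# Conceptual sketch of step 4, not the repository's resize.py: pad to a
# multiple of 256 and tile into 256x256 patches. Assumes a 3-channel image
# and zero padding.
import numpy as np

def to_patches(image, size=256, pad_value=0):
    h, w = image.shape[:2]
    ph = (size - h % size) % size   # rows to add
    pw = (size - w % size) % size   # cols to add
    padded = np.pad(image, ((0, ph), (0, pw), (0, 0)),
                    mode='constant', constant_values=pad_value)
    patches = []
    for y in range(0, padded.shape[0], size):
        for x in range(0, padded.shape[1], size):
            patches.append(padded[y:y + size, x:x + size])
    return np.stack(patches), (h, w)

# patches, original_shape = to_patches(img)  # img: (H, W, 3) uint8 array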

Alternatively, you can use my pre-trained weights in the cache folder to make predictions directly.

2018dsb's People

Contributors

jacobkie

2018dsb's Issues

RuntimeError: affine matrix has wrong number of rows

When I run 'python3 train_ext.py', I hit the issue below. Is my environment incorrect?
Could you share your running environment, e.g. the output of 'pip3 list'?

Epoch 1/100
Traceback (most recent call last):
File "train_ext.py", line 62, in
train_1s()
File "train_ext.py", line 59, in train_1s
model.train_generator(tr_ms1, val_ms1, 1e-3, 100, 'all')
File "/2th-DSB2018/script_final/model_rcnn_weight.py", line 495, in train_generator
use_multiprocessing=False,
File "/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 2145, in fit_generator
generator_output = next(output_generator)
File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 770, in get
six.reraise(value.__class__, value, value.__traceback__)
File "/usr/lib/python3/dist-packages/six.py", line 686, in reraise
raise value
File "/usr/local/lib/python3.5/dist-packages/keras/utils/data_utils.py", line 635, in _data_generator_task
generator_output = next(self._generator)
File "/medical_data/yunhai/2th-DSB2018/script_final/model_ms1.py", line 42, in generator_1s_v11
image, mask = next(gen)
File "train_ext.py", line 34, in train_generator
image, mask = next(gen)
File "/2th-DSB2018/script_final/generator.py", line 64, in data_generator_multi
zoom_range = config.ZOOM_RANGE)
File "/2th-DSB2018/script_final/preprocess.py", line 198, in affine_transform_batch
xt = scipy.ndimage.affine_transform(x, matrix, order=1, cval=-512)
File "/usr/local/lib/python3.5/dist-packages/scipy/ndimage/interpolation.py", line 449, in affine_transform
raise RuntimeError('affine matrix has wrong number of rows')
RuntimeError: affine matrix has wrong number of rows

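For context on this error (not a fix specific to this repository's affine_transform_batch): SciPy raises it when the affine matrix has a different number of rows than the transformed array has dimensions, for example a 2x2 matrix applied to an (H, W, C) array. A minimal illustration:

# Minimal illustration of the SciPy requirement behind this error; it is not
# the repository's preprocessing code.
import numpy as np
import scipy.ndimage

img2d = np.zeros((64, 64), dtype=np.float32)      # 2-D array
img3d = np.zeros((64, 64, 3), dtype=np.float32)   # 3-D array (H, W, C)

m2 = np.eye(2)   # row count matches a 2-D array
m3 = np.eye(3)   # row count matches a 3-D array

scipy.ndimage.affine_transform(img2d, m2, order=1, cval=-512)   # works
scipy.ndimage.affine_transform(img3d, m3, order=1, cval=-512)   # works
# scipy.ndimage.affine_transform(img3d, m2)  # RuntimeError: affine matrix has
#                                            # wrong number of rows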

Post process

label_mask = remove_small_holes(label_mask)
label_mask = basin(label_mask, wall)
label_mask = remove_small_holes(label_mask)

Are these functions working on instance labels or semantic labels?
@jacobkie
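As a hedged illustration of the distinction being asked about (this is not the repository's basin()/post-processing code): skimage's remove_small_holes operates on a binary, i.e. semantic, mask, so applying it per instance means looping over the integer labels yourself.

# Hedged sketch, not the repository's code: fills small holes per instance.
# remove_small_holes itself expects a boolean (semantic) mask.
import numpy as np
from skimage.morphology import remove_small_holes

def fill_holes_per_instance(label_img, area_threshold=64):
    # label_img: integer array, 0 = background, 1..N = instance ids.
    out = np.zeros_like(label_img)
    for lab in np.unique(label_img):
        if lab == 0:
            continue
        filled = remove_small_holes(label_img == lab, area_threshold=area_threshold)
        out[filled] = lab
    return out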

What are the required versions of the Python packages for this project?

When I try to run predict_auto.py in the folder script_final, the following error occurs:

from keras import Model, optimizers,losses, activations, models
ImportError: cannot import name 'Model'

It may be due to the version of Keras I installed not matching the project, but the project doesn't list the required package versions (or maybe I just didn't find them?) 😭. I tried searching the Internet but still cannot solve the problem...
So could you please tell me which versions are needed to run the project? Thank you so much!
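For what it's worth (this is an assumption, not a confirmed requirements list for the project): keras.Model is only exposed at the top level in newer Keras 2 releases, while keras.models.Model exists across Keras 2.x, so an import along these lines avoids the error above:

# Hedged workaround sketch, assuming Keras 2.x: import Model from keras.models
# instead of the top-level keras namespace.
from keras.models import Model
from keras import optimizers, losses, activations, models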

Pretrained network hdf5 file

I am trying to use your pre-trained weights to run predict_auto.py, but it complains that there is no hdf5 file in the cache folder. If I run merge_model_weight.sh in that folder first, it will load the merged h5 file, but this process takes a very long time (~30 min?) across 25 stages; the terminal output from the first stage looks like:
0/25 2 (205, 694, 3) 704
19-02-22 23:58:35 INFO| loading weights form /home/ubuntu/git_repos/2018DSB/cache/UnetRCNN_180410-221747/71_0.4155.hdf5

Is this what I should be doing and expect to see?
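One small sanity check that can be run after merge_model_weight.sh (the script name and path come from the output above; the check itself is not part of the repository) is to confirm the reassembled file opens as valid HDF5 before pointing predict_auto.py at it:

# Hedged sanity check, not part of the repository: verify the merged weight
# file opens as valid HDF5. The path is the one shown in the log above.
import h5py

path = 'cache/UnetRCNN_180410-221747/71_0.4155.hdf5'
with h5py.File(path, 'r') as f:
    print('top-level groups:', list(f.keys())[:5])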
