
Medical Image Classification Experiment

Hello!

This is a repo for a workshop on applied computer vision. We first aim to develop a mobile app, based on a TFLite model, that can assist medical diagnosis. We then develop a machine learning (ML) workflow that updates the model and returns predictions automatically, as our first CI/CD demo built on the AWS platform and GitHub.

Platforms and tools:

  • Google Colab Notebook.

  • GitHub.

  • AWS SageMaker.

  • TFLite.

Datasets:

We use two datasets from Kaggle, the ChestXray2017 dataset and the OCT2017 dataset, covering the diagnosis of pneumonia and of common treatable blinding retinal diseases, respectively.

The ChestXray2017 dataset includes 3 classes: normal, bacterial pneumonia and viral pneumonia, as shown in the figure below.

ChestXray2017

The OCT2017 dataset includes 4 classes: normal, CNV (choroidal neovascularization), drusen and DME (diabetic macular edema), as shown in the figure below.

OCT2017

As the main purpose of our experiment is to try new tools and get familiar with the ML workflow, we only use 1000 images per class.

Due to configuration differences between cloud platforms, for convenience we build our TFLite-based app on Google Colab and construct our first CI/CD demo on AWS SageMaker. Data for these two tasks are stored in Google Drive and AWS S3, respectively.

Task 1 Develop an Android app customized with a new model

The procedure of task 1 is straightforward.

  • Step 1: upload images to your Google Drive, placing images with the same label in the same folder and using the label as the folder's name.
  • Step 2: mount your Drive in Colab and run the notebook to train a model; the output is a model in ".tflite" format.
  • Step 3: set up the skeleton app in Android Studio.
  • Step 4: load the model into the "start" part of the app.
  • Step 5: customize the "MainActivity.kt" file at the places the app author labeled with "TODO"; you may need to change the code under each "TODO" to match your model, then import your model.
  • Step 6: run your app on a virtual device or on your own Android phone; for the latter, you need to authorize USB debugging on the phone.
  • Step 7: you can further design your UI in layout, drawable and ic_launcher.
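Step 1's folder-per-label layout can be sanity-checked before training. A minimal sketch (the helper name and extensions are our own, not from the workshop notebook) that collects (image path, label) pairs the way folder-based data loaders expect:

```python
from pathlib import Path

def collect_labeled_images(root: str, exts=(".jpg", ".jpeg", ".png")):
    """Scan a folder-per-label tree (e.g. root/NORMAL/img1.jpeg) and
    return (image_path, label) pairs, where label = parent folder name."""
    root_path = Path(root)
    pairs = []
    for img in sorted(root_path.rglob("*")):
        # Skip directories, non-image files, and files sitting directly in root.
        if img.suffix.lower() in exts and img.parent != root_path:
            pairs.append((str(img), img.parent.name))
    return pairs
```

Because the label is derived from the folder name, renaming a folder relabels every image inside it, which is why Step 1 asks for one folder per class.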

The figure below shows our demo interface. Because the app classifies live camera captures, its real-world performance does not match the model's test accuracy (generally > 90%). We may update it in the future.

output
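One common source of this camera-versus-test-set gap is preprocessing: camera frames must be resized and normalized exactly as the training images were. A minimal numpy sketch, assuming a 224×224 float32 input scaled to [0, 1] (typical for TFLite image classifiers; check your own model's input spec):

```python
import numpy as np

def preprocess_frame(frame: np.ndarray, size: int = 224) -> np.ndarray:
    """Nearest-neighbor resize a (H, W, 3) uint8 camera frame to
    (1, size, size, 3) float32 in [0, 1] for a TFLite classifier."""
    h, w = frame.shape[:2]
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    resized = frame[rows][:, cols]
    return (resized.astype(np.float32) / 255.0)[None, ...]
```

If the app's preprocessing (resize method, value range, channel order) differs from the training pipeline, accuracy drops even though the model itself is unchanged.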

We have trained 4 models. "pn_model.tflite" and "pn_sub_model.tflite" are for the ChestXray2017 dataset, with the latter accounting for the pneumonia subclasses; "oct.tflite" and "oct_sub.tflite" are the analogous models for OCT2017. You can find them in the Models folder.

Task 2 Construct a CI/CD demo for ML based on SageMaker and GitHub

Diagnosis based on medical images may not yet be reliable or widespread enough, but it is promising that ML can help the medical field a great deal. Automatic image classification can help doctors, who may be overwhelmed by a large number of cases every day, make decisions quickly and avoid misdiagnosis. Machines may even outperform humans on some diseases.

We can assume that our ML model will face routine tasks as in many other ML usage scenarios, involving data drift and version control. As a first step toward CI/CD, we want to build a semi-automatic workflow that can retrain the model with the newest data and return outputs automatically.

The procedure of task 2 is as follows:

  • Step 1: we use an AWS S3 bucket to store our data, including data for model training, data for testing, and unlabeled data from a prediction task.
  • Step 2: we use GitHub to host our code.
  • Step 3: create a Git repo under AWS SageMaker, with a personal access token as the password of the AWS secret.
  • Step 4: create a Notebook instance under AWS SageMaker with an "AmazonSageMaker-ExecutionRole" (you can create a new one), linked to the above Git repo.
  • Step 5: give the "AmazonSageMaker-ExecutionRole" above IAM "Read" permissions. You can check the "setup" notebook for reference; the parts for the Glue job and Lambda function are for task 3.
  • Step 6: run the notebook for model training, deployment and prediction. The notebook downloads data from S3, processes it (generates an annotation file and splits the dataset) and uploads it back to S3. The training is handled by SageMaker. We advise using the "conda_amazonei_tensorflow2_p36" kernel for this task, as training is based on pretrained TensorFlow models. The "train.py" file is generated by the notebook as the entry point of training; you can change parameters in that part of the code. When training succeeds, you can deploy the model and test it with data picked from the S3 test dataset. Running the prediction task part returns a list of labels for the pictures you want to label; the output is written to both S3 and GitHub as a txt file.
  • Step 7: you can delete the endpoint when you no longer need the model.
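The processing step above (generate an annotation file and split the dataset) can be sketched in plain Python; the CSV layout, the 80/20 split, and the function name here are assumptions for illustration, not the notebook's exact code:

```python
import csv
import random

def write_annotations(pairs, train_csv, test_csv, test_frac=0.2, seed=42):
    """Shuffle (image_path, label) pairs and write train/test annotation
    CSVs with a path,label header. Split fraction and format are assumed."""
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    pairs = list(pairs)
    rng.shuffle(pairs)
    n_test = int(len(pairs) * test_frac)
    splits = ((test_csv, pairs[:n_test]), (train_csv, pairs[n_test:]))
    for path, subset in splits:
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["path", "label"])
            writer.writerows(subset)
```

In the actual workflow the resulting files would be uploaded back to S3 so SageMaker's training job can read them.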

The current accuracy in task 2 is somewhat lower than in task 1; we have tuned it a lot without much improvement. We will try more pretrained models in the future.

Task 3 Build a Step function workflow with AWS Step Functions SDK

We are trying to use the AWS Step Functions SDK to automate the training-to-deployment process with one click, integrating a Glue job and a Lambda function. We want to use the Glue job to simplify data preparation, and the Lambda function to add a condition (e.g. an accuracy threshold) for model deployment.

Our target workflow is shown in the following figure. AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data; we want to use it to simplify ETL workflows and save money on data processing. We will also try to add the Lambda function to evaluate whether a model is good enough for deployment. In the future, we will try to add a trigger so that we only retrain a model when there is enough data drift.

workflow
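Under the hood, the Step Functions SDK compiles a workflow like this into an Amazon States Language definition. A minimal sketch of the intended train → evaluate → conditional-deploy chain (the resource ARNs, state names, and the 0.9 accuracy threshold are all placeholders, not our actual configuration):

```python
import json

# Placeholder resource identifiers; real ones come from your AWS account.
TRAIN_ARN = "arn:aws:states:::sagemaker:createTrainingJob.sync"
EVAL_ARN = "arn:aws:lambda:::function:evaluate-model"   # hypothetical Lambda
DEPLOY_ARN = "arn:aws:states:::sagemaker:createEndpoint"

definition = {
    "StartAt": "Train",
    "States": {
        "Train": {"Type": "Task", "Resource": TRAIN_ARN, "Next": "Evaluate"},
        "Evaluate": {"Type": "Task", "Resource": EVAL_ARN, "Next": "GoodEnough"},
        # Choice state: only deploy when the Lambda-reported accuracy clears
        # an assumed 0.9 threshold.
        "GoodEnough": {
            "Type": "Choice",
            "Choices": [{
                "Variable": "$.accuracy",
                "NumericGreaterThanEquals": 0.9,
                "Next": "Deploy",
            }],
            "Default": "SkipDeploy",
        },
        "Deploy": {"Type": "Task", "Resource": DEPLOY_ARN, "End": True},
        "SkipDeploy": {"Type": "Succeed"},
    },
}

asl_json = json.dumps(definition, indent=2)
```

The Choice state is where the Lambda-based accuracy gate plugs in: if the evaluation result falls below the threshold, the workflow ends without touching the endpoint.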

We haven't completed this task at the current stage. We will update it soon.


Group Members

Macy and Yuhan
