Disaster Response Pipeline Project

Introduction

This project uses message data from Appen to build a model for an API that classifies disaster messages. Messages sent during disaster events are cleaned and organized by an ETL pipeline.

The project's goal is to create a machine learning pipeline that categorizes messages so they can be routed to the appropriate disaster relief agency. It includes a web app where an emergency worker can enter a new message and receive classification results; the app also displays visualizations of the underlying data. The visualization is implemented as a Plotly Dash app, which is deployed here.

(Screenshot: deployed Dash app)

File Structure

app.py - where the Dash app lives
requirements.txt - Python modules that will be installed on Heroku at build time
runtime.txt - tells Heroku which Python version to use
Procfile - tells Heroku what type of process to run (a Gunicorn web process) and the Python app entrypoint (app.py); see the sketch after this list
/assets - serves the CSS files and images; the figures are generated in charts.py
/data - contains the raw input CSV files, the database file, and process_data.py
/models - contains the script that trains the model (train_classifier.py) and the trained model as a pickle file
.gitignore
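
For reference, a minimal sketch of what these two Heroku files might contain. It assumes the Dash app in app.py exposes its underlying Flask server as server; the Python version shown is an example, not necessarily what this repo pins:

Procfile:

web: gunicorn app:server

runtime.txt:

python-3.9.13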

Installation

1. Getting Started

  • Change the current directory to the location where you want to clone the repository and clone this repo to your local machine:

$ git clone https://github.com/AReburg/ETL-Pipeline-Disasaster-Contact

  • Make sure the app runs on the local web server before attempting to deploy to Heroku. Set up a virtual environment (optional but recommended) and ensure all required modules are installed before running the app.
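
A minimal sketch using Python's built-in venv module (the environment name venv is arbitrary):

$ python3 -m venv venv
$ source venv/bin/activate    # on Windows: venv\Scripts\activate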

2. Requirements

Install the modules from requirements.txt with pip3 or conda from a terminal in the project root folder:

pip3 install -r requirements.txt
conda install --file requirements.txt (Anaconda)

Usage

Prepare data and train the model

Run the following commands in the project's root directory to set up your database and model.

python data/process_data.py data/disaster_messages.csv data/disaster_categories.csv data/DisasterResponse.db runs the ETL pipeline that cleans the data and stores it in a database
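
For orientation, a minimal sketch of the kind of ETL steps process_data.py performs, assuming the standard layout of this dataset (messages and categories joined on an id column). Function and variable names here are illustrative, not the script's actual API:

import sys
import pandas as pd
from sqlalchemy import create_engine

def run_etl(messages_path, categories_path, db_path):
    # Extract: load both CSVs and merge them on the shared id column
    messages = pd.read_csv(messages_path)
    categories = pd.read_csv(categories_path)
    df = messages.merge(categories, on="id")

    # Transform: split the single 'categories' string into one binary column per category
    cats = df["categories"].str.split(";", expand=True)
    cats.columns = [c.split("-")[0] for c in cats.iloc[0]]
    for col in cats:
        cats[col] = cats[col].str[-1].astype(int).clip(0, 1)
    df = pd.concat([df.drop(columns="categories"), cats], axis=1).drop_duplicates()

    # Load: write the cleaned table into a SQLite database
    engine = create_engine(f"sqlite:///{db_path}")
    df.to_sql("messages", engine, index=False, if_exists="replace")

if __name__ == "__main__":
    run_etl(*sys.argv[1:4])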

python models/train_classifier.py data/DisasterResponse.db models/classifier.pkl runs the ML pipeline that trains the classifier and saves it as a pickle file
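
Likewise, a minimal sketch of the training step in train_classifier.py, assuming a TF-IDF plus multi-output classifier pipeline (a common choice for this project; the actual script may differ in its features and tuning):

import sys
import pickle
import pandas as pd
from sqlalchemy import create_engine
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier

def train(db_path, model_path):
    # Load the cleaned data written by the ETL pipeline
    engine = create_engine(f"sqlite:///{db_path}")
    df = pd.read_sql_table("messages", engine)
    X = df["message"]
    y = df.iloc[:, 4:]  # category columns (assumes the first four columns are metadata)

    # Text features plus one classifier per category label
    pipeline = Pipeline([
        ("tfidf", TfidfVectorizer()),
        ("clf", MultiOutputClassifier(RandomForestClassifier())),
    ])
    pipeline.fit(X, y)

    # Persist the trained model as a pickle file
    with open(model_path, "wb") as f:
        pickle.dump(pipeline, f)

if __name__ == "__main__":
    train(*sys.argv[1:3])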

Local web application

  1. Run the app directly from your IDE, or from a terminal in the project's root directory: python app.py

  2. The app should be reachable in the browser at http://localhost:8050

  3. Open the app in the browser and start playing around
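
A minimal sketch of what the app entry point could look like, assuming the Dash app exposes its Flask server as server (the attribute the Procfile's gunicorn command targets); the layout shown is purely illustrative:

import dash
from dash import html

# Create the Dash app; __name__ lets Dash find the /assets directory automatically
app = dash.Dash(__name__)

# Expose the underlying Flask server so gunicorn can serve it (web: gunicorn app:server)
server = app.server

# Placeholder layout; the real app builds its figures in charts.py
app.layout = html.Div([html.H1("Disaster Response Pipeline")])

if __name__ == "__main__":
    # Local development server on Dash's default port 8050
    app.run_server(debug=True, port=8050)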

Conclusion

Due to the highly imbalanced dataset, the results are not good enough for a commercial application. There are two ways to get a better outcome: manually inspect all the messages and hand-label them in more detail, or build an unsupervised pipeline. The advantage of an unsupervised approach is that no labeling is necessary, which makes it less time-consuming; results are clustered automatically. A possible starting point is sketched below.
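
As a rough illustration of the unsupervised alternative (not part of this repository), messages could be embedded with TF-IDF and clustered, for example with k-means; the number of clusters here is an arbitrary assumption:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

messages = ["We need water and food", "Medical help required", "Storm damaged the bridge"]

# Embed the raw messages as TF-IDF vectors
X = TfidfVectorizer(stop_words="english").fit_transform(messages)

# Cluster the messages; n_clusters=2 is an arbitrary choice for illustration
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)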

(Screenshot: model evaluation)

(Screenshot: deployed Dash app)

Authors, Acknowledgements

Appen made this data set available to Udacity for training purposes.
