- I predict whether an applicant will default, using Home Credit's Kaggle dataset.
- The model must quantify feature importance in a way that can be interpreted with domain knowledge.
- I use the model to score Probability of Default (PD).
- I use the PD to compute the expected loss of principal due to default, assuming a 0% recovery rate.
- I build a dashboard to address business questions related to completed cash loans. See link to dashboard.
- I considered approved cash loans only.
- Current applicants must have historical cash loan information from Home Credit.
- Instalments for historical cash loans must be at most 3 years old.
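The expected-loss calculation above can be sketched in a few lines. With a 0% recovery rate, loss given default (LGD) is 100%, so expected loss reduces to PD times the outstanding principal. This is a minimal illustration; the function name and inputs are placeholders, not code from the notebooks.

```python
# Expected loss under the 0% recovery assumption used in this project:
# EL = PD * LGD * EAD, with LGD = 1 - recovery_rate and EAD taken as
# the outstanding principal. Names here are illustrative.

def expected_loss(pd_score: float, principal: float, recovery_rate: float = 0.0) -> float:
    """Expected loss of principal due to default."""
    lgd = 1.0 - recovery_rate          # loss given default
    return pd_score * lgd * principal  # EL = PD * LGD * EAD

# e.g. a loan with an 8% probability of default and $10,000 principal
print(expected_loss(0.08, 10_000.0))
```

With a 0% recovery rate the LGD factor is 1, so the whole principal is assumed lost on default; a nonzero `recovery_rate` would scale the loss down accordingly.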
There are 4 Jupyter notebooks, each representing a step in the analytics process:

- `columns-selection-for-selected-csv-files.ipynb`: removes unused columns from the original dataset.
- `duplicate-removal.ipynb`: removes duplicates from the dataset.
- `extracting-instalment-payment-features.ipynb`: extracts features for downstream classification, as well as completed cash loan data for analysis.
- `default-prediction.ipynb`: classifies defaulters and non-defaulters with logistic regression.
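The pipeline above can be sketched end to end on synthetic data. The column names and values below are illustrative placeholders, not the actual Home Credit schema; the real notebooks operate on the csv files listed further down.

```python
# Sketch of the four notebook steps on toy data: column selection,
# duplicate removal, logistic-regression fitting, and PD scoring.
# All column names here are hypothetical, not from the Home Credit dataset.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "late_payment_ratio": [0.0, 0.1, 0.6, 0.6, 0.9, 0.2, 0.8, 0.05],
    "avg_days_late":      [0,   2,   15,  15,  30,  5,   25,  1],
    "unused_col":         list(range(8)),
    "target":             [0,   0,   1,   1,   1,   0,   1,   0],
})

# Steps 1-2: keep selected columns, then drop duplicate rows
df = df[["late_payment_ratio", "avg_days_late", "target"]]
df = df.drop_duplicates()

# Steps 3-4: fit logistic regression and score Probability of Default
X, y = df[["late_payment_ratio", "avg_days_late"]], df["target"]
model = LogisticRegression().fit(X, y)
pd_scores = model.predict_proba(X)[:, 1]   # P(default) per applicant

# Coefficients quantify feature importance (sign and magnitude),
# which is what makes logistic regression interpretable here.
print(dict(zip(X.columns, model.coef_[0])))
```

Logistic regression fits the project's interpretability requirement: each coefficient's sign and magnitude indicate how a feature moves the predicted default probability.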
I ran my code in Kaggle Notebooks, which offer more computing power. This meant uploading the Home Credit datasets to Kaggle and storing the intermediate and final datasets there as well.
The following are links to the datasets I stored in Kaggle for my work:
- raw-dataset: contains 4 csv files from Home Credit's original dataset.
- intermediate-dataset: contains 3 csv files produced by `columns-selection-for-selected-csv-files.ipynb` and `duplicate-removal.ipynb`.
- cleaned-dataset: contains 5 csv files produced by `extracting-instalment-payment-features.ipynb`.