
mcboost's Introduction

mcboost


What does it do?

mcboost implements Multi-Calibration Boosting (Hebert-Johnson et al., 2018; Kim et al., 2019) for multi-calibrating the predictions of a machine learning model. Multi-Calibration works best in scenarios where the underlying data and labels are unbiased but a bias is introduced by the algorithm's fitting procedure. This is often the case, e.g., when an algorithm fits the majority population well while ignoring or under-fitting minority populations.

For more information and examples, see the package's website.

More details on usage and the underlying procedures can be found in the package vignettes.

Installation

The current release can be installed from CRAN using:

install.packages("mcboost")

You can install the development version of mcboost from GitHub with:

remotes::install_github("mlr-org/mcboost")

Usage

Post-processing with mcboost requires three components: an initial prediction model (1), an auditing algorithm (2) that may be customized by the user, and a labeled auditing dataset (3) on which the auditing algorithm runs Multi-Calibration Boosting. The resulting model can then be used to obtain multi-calibrated predictions.

Example

In this simple example, our goal is to improve calibration for an initial predictor, e.g., an ML model trained on an initial task. Internally, mcboost often makes use of mlr3 and learners from mlr3learners.

library(mcboost)
library(mlr3)

First we set up an example dataset.

  #  Example Data: Sonar Task
  tsk = tsk("sonar")
  tid = sample(tsk$row_ids, 100) # 100 rows for training
  train_data = tsk$data(cols = tsk$feature_names, rows = tid)
  train_labels = tsk$data(cols = tsk$target_names, rows = tid)[[1]]

For this example, we assume that we already have a learner l, which we train below. We then wrap this initial learner's predict function for use with mcboost, since mcboost expects the initial model to be specified as a function that takes data as input.

  l = lrn("classif.rpart")
  l$train(tsk$clone()$filter(tid))

  init_predictor = function(data) {
    # Get response prediction from Learner
    p = l$predict_newdata(data)$response
    # One-hot encode and take first column
    one_hot(p)
  }
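
Before auditing, it can be useful to check that the wrapper behaves as mcboost expects, i.e. that it returns one probability-like value in [0, 1] per row of the input data. A quick, purely illustrative check:

  # Should print a numeric vector of length 6 with values in [0, 1]
  head(init_predictor(train_data))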

We can now run Multi-Calibration Boosting by instantiating the object and calling the multicalibrate method. Note that, typically, we would run Multi-Calibration on a separate validation set! We furthermore select the auditor model, in our case a TreeAuditorFitter, i.e. a decision tree:

  mc = MCBoost$new(
    init_predictor = init_predictor,
    auditor_fitter = "TreeAuditorFitter")
  mc$multicalibrate(train_data, train_labels)

Lastly, we predict on new data.

tstid = setdiff(tsk$row_ids, tid) # held-out data
test_data = tsk$data(cols = tsk$feature_names, rows = tstid)
mc$predict_probs(test_data)
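
As a rough, illustrative check, we can compare the squared error (Brier score) of the initial predictor and of the multi-calibrated predictions on the held-out data. This assumes that one_hot() encodes the labels in the same way as the wrapped predictor encodes its predictions:

# Brier score of the initial model vs. the post-processed predictions
test_labels = tsk$data(cols = tsk$target_names, rows = tstid)[[1]]
mean((init_predictor(test_data) - one_hot(test_labels))^2)
mean((mc$predict_probs(test_data) - one_hot(test_labels))^2)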

Multi-Calibration

While mcboost by default implements Multi-Accuracy (Kim et al., 2019), it can also multi-calibrate predictors (Hebert-Johnson et al., 2018). To achieve this, we have to set the following hyperparameters:

  mc = MCBoost$new(
    init_predictor = init_predictor,
    auditor_fitter = "TreeAuditorFitter",
    num_buckets = 10,
    multiplicative = FALSE
  )
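
As before, the resulting object is then used by calling multicalibrate on the auditing data and predict_probs on new data:

  mc$multicalibrate(train_data, train_labels)
  mc$predict_probs(test_data)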

MCBoost as a PipeOp

mcboost can also be used within an mlr3pipelines pipeline in order to obtain a full end-to-end pipeline (in the form of a GraphLearner).

  library(mlr3)
  library(mlr3pipelines)
  gr = ppl_mcboost(lrn("classif.rpart"))
  tsk = tsk("sonar")
  tid = sample(1:208, 108)
  gr$train(tsk$clone()$filter(tid))
  gr$predict(tsk$clone()$filter(setdiff(1:208, tid)))
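
To use the pipeline like any other mlr3 learner, e.g. for resampling or benchmarking, the Graph can be wrapped into a GraphLearner. A minimal sketch:

  # Wrap the Graph as a GraphLearner and cross-validate it
  glrn = as_learner(gr)
  rr = resample(tsk, glrn, rsmp("cv", folds = 3))
  rr$aggregate(msr("classif.acc"))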

Further Examples

The mcboost vignettes Basics and Extensions and Health Survey Example showcase a variety of applications of mcboost.

Contributing

This R package is licensed under the LGPL-3. If you encounter problems using this software (lack of documentation, misleading or wrong documentation, unexpected behaviour, bugs, …) or just want to suggest features, please open an issue in the issue tracker. Pull requests are welcome and will be included at the discretion of the maintainers.

This project is developed with mlr3's style guide in mind; the mlr3 wiki provides a style guide, a roxygen guide, and a pull request guide that contributors may find helpful.

Code of Conduct

Please note that the mcboost project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Citing mcboost

If you use mcboost, please cite our package as well as the two papers it is based on:

  @article{pfisterer2021,
    author = {Pfisterer, Florian and Kern, Christoph and Dandl, Susanne and Sun, Matthew and 
    Kim, Michael P. and Bischl, Bernd},
    title = {mcboost: Multi-Calibration Boosting for R},
    journal = {Journal of Open Source Software},
    doi = {10.21105/joss.03453},
    url = {https://doi.org/10.21105/joss.03453},
    year = {2021},
    publisher = {The Open Journal},
    volume = {6},
    number = {64},
    pages = {3453}
  }
  # Multi-Calibration
  @inproceedings{hebert-johnson2018,
    title = {Multicalibration: Calibration for the ({C}omputationally-Identifiable) Masses},
    author = {Hebert-Johnson, Ursula and Kim, Michael P. and Reingold, Omer and Rothblum, Guy},
    booktitle = {Proceedings of the 35th International Conference on Machine Learning},
    pages = {1939--1948},
    year = {2018},
    editor = {Jennifer Dy and Andreas Krause},
    volume = {80},
    series = {Proceedings of Machine Learning Research},
    address = {Stockholmsmässan, Stockholm Sweden},
    publisher = {PMLR}
  }
  # Multi-Accuracy
  @inproceedings{kim2019,
    author = {Kim, Michael P. and Ghorbani, Amirata and Zou, James},
    title = {Multiaccuracy: Black-Box Post-Processing for Fairness in Classification},
    year = {2019},
    isbn = {9781450363242},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    url = {https://doi.org/10.1145/3306618.3314287},
    doi = {10.1145/3306618.3314287},
    booktitle = {Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society},
    pages = {247--254},
    location = {Honolulu, HI, USA},
    series = {AIES '19}
  }

mcboost's People

Contributors

carobec, chkern, dandls, mikekimbackward, owenward, pfistfl, sebffischer, sunnymatt

mcboost's Issues

Do proper gradient boosting for brier score optimization

The approach proposed in the papers optimizes the Brier score under the assumption that the predictions are probabilities (which can be updated additively or multiplicatively).
A proper gradient boosting setup in which scores are optimized could be a worthwhile addition.
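
For reference, a minimal sketch of the quantity such a setup would fit in each round; this is standard squared-error boosting, not an existing mcboost API:

# Pseudo-residuals for the Brier score L(y, p) = (p - y)^2:
# the negative gradient with respect to p is 2 * (y - p), so each
# boosting round would fit the auditor to these residuals and update
# scores rather than the probabilities directly.
brier_pseudo_residuals = function(y, p) 2 * (y - p)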

Bug in SubgroupModel$fit

https://github.com/pfistfl/mcboost/blob/ae92185f85bfc28be9f7c7e84848e63543df0c17/R/Predictor.R#L193

If mask is a binary vector, e.g. c(0, 1, 0, 1, 0, 0), this line returns mean(c(labels[1], labels[1])).
Instead we want mean(c(labels[2], labels[4])), right?

Minimal example:

data = data.frame(X1 = rnorm(n = 10L), X2 = rnorm(n = 10L))
masks = list(
  rep(c(1, 0), 5)
)
sf = SubgroupFitter$new(masks)
resid = c(1, rep(0, 9))
sm = SubgroupModel$new(masks)
mn = sm$fit(data = data, labels = resid) # returns 1 (mean of labels[1] repeated); the 0s are never included
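
A possible fix, assuming the offending line subsets the labels with the numeric mask directly (a sketch, not the committed patch):

# Coerce the 0/1 mask to logical before subsetting, so that the mean
# is taken over exactly the rows the mask selects:
mean(labels[as.logical(mask)])
# with mask = c(0, 1, 0, 1, 0, 0) this yields mean(c(labels[2], labels[4]))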

contributions.md

We should create a file in which each author's contributions are briefly explained.

Release mcboost 0.3.0

First release:

Prepare for release:

  • urlchecker::url_check()
  • devtools::check(remote = TRUE, manual = TRUE)
  • devtools::check_win_devel()
  • rhub::check_for_cran()

Submit to CRAN:

  • usethis::use_version('patch')
  • devtools::submit_cran()
  • Approve email

Wait for CRAN...

  • Accepted 🎉
  • usethis::use_github_release()
  • usethis::use_dev_version()
  • Update install instructions in README

Run multicalibration on pre-computed scores w/o access to initial predictor

I'm trying to multi-calibrate scores pre-computed from a black-box model (assume we don't have access to the model itself), but I'm getting nonsensical results.

I'm wondering if this should work in theory (and there's some other bug in my code) or if there's a more fundamental reason this doesn't work.

Here's an example to illustrate what I'm trying to do:

library(mcboost)
library(data.table) # needed for data.table() below

# simulate some random data
n = 100
scores = runif(n)
labels = rbinom(n, 1, scores)
is_test = as.logical(rbinom(n, 1, 0.1))
segmentation_features = data.table(
    cbind(
        rbinom(n, 1, 0.1),
        rbinom(n, 1, 0.5)
    )
)

init_predictor = function(data) {
    # Hack to make it return pre-computed scores for train/test since we don't have access to the model
    if(nrow(data) > 50) {
        scores[!is_test]
    } else {
        scores[is_test]
    }
}

mc = MCBoost$new(
    auditor_fitter="TreeAuditorFitter", 
    init_predictor=init_predictor
)

mc$multicalibrate(
    segmentation_features[!is_test],
    labels[!is_test]
)
mc

prs = mc$predict_probs(segmentation_features[is_test])
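
One way to make the wrapper less brittle than the nrow()-based hack above is to carry an explicit row id along with the features and look the scores up by it. A sketch under that assumption (note that the hypothetical .row_id column would also be visible to the auditor unless it is excluded from the auditing features):

# Add an explicit row id so the wrapper can return the matching
# pre-computed score for any subset of rows.
segmentation_features$.row_id = seq_len(n)

init_predictor = function(data) {
    scores[data$.row_id]
}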

Create Extension to Survival Task

In addition to classification tasks, mcboost should also be able to handle survival tasks:

The main differences are:

  • compute residuals based on the derivative of the Integrated Brier Score (with right-censoring); a sketch of one common form of this score is given below
  • deal with individual distributions of survival probabilities instead of individual probabilities (this also involves a new variable, times)
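
For reference, one commonly used (IPCW-weighted) form of the Integrated Brier Score that such residuals would be derived from is sketched below; the exact definition intended here may differ:

$$
\mathrm{IBS}(\tau) = \frac{1}{\tau} \int_0^{\tau} \frac{1}{n} \sum_{i=1}^{n}
\left[
\frac{\hat{S}(t \mid x_i)^2 \, \mathbb{1}(t_i \le t,\ \delta_i = 1)}{\hat{G}(t_i)}
+ \frac{\bigl(1 - \hat{S}(t \mid x_i)\bigr)^2 \, \mathbb{1}(t_i > t)}{\hat{G}(t)}
\right] \mathrm{d}t
$$

where \(\hat{S}\) is the predicted survival function, \(\hat{G}\) the Kaplan-Meier estimate of the censoring distribution, and \(t_i, \delta_i\) the observed times and event indicators.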

Vignettes

    1. Usage and Examples
    2. Multi-calibration paper reproduction
    3. NHS Dataset

Adapt code to list of datasets

In the original paper, the validation data is a list of batches (instead of a single validation set).
The code should be adapted to allow for this, although it is probably not needed often in practice.
