mlim's Introduction

mlim: Single and Multiple Imputation with Automated Machine Learning

mlim is the first missing data imputation software to implement automated machine learning for performing multiple imputation or single imputation of missing data. The software, currently implemented as an R package, brings the state of the art of machine learning to provide a versatile missing data solution for various data types (continuous, binary, multinomial, and ordinal). In a nutshell, mlim is expected to outperform other available missing data imputation software on several grounds. For example, mlim is expected to deliver:

  1. Lower imputation error compared to other missing data imputation software.
  2. Higher imputation fairness when the data suffer from severe class imbalance or non-normal distributions, or when the variables (features) interact with one another.
  3. Faster imputation of big datasets, because mlim excels at making efficient use of the available CPU cores, and its runtime scales well as the data grow.

The high performance of mlim comes mainly from fine-tuning an ELNET algorithm, which often outperforms standard statistical procedures and untuned machine learning algorithms, and generalizes very well. However, mlim is an active research project and therefore comes with an experimental optimization toolkit for exploring the possibility of performing multiple imputation with industry-standard machine learning algorithms such as Deep Learning, Gradient Boosting Machine, Extreme Gradient Boosting, and Stacked Ensembles. These algorithms can be used either to impute missing data or to optimize already-imputed data, but they are NOT used by default NOR recommended to all users. Advanced users who are interested in exploring the possibilities of imputing missing data with these algorithms are encouraged to read the free handbook (see below). These algorithms, as noted, are experimental, and the author intends to examine their effectiveness for academic research (at this point). If you are interested in collaborating, get in touch with the author.

Fine-tuning missing data imputation

Simply put, for each variable in the dataset, mlim automatically fine-tunes a fast machine learning model, which results in significantly lower imputation error compared to classical statistical models or untuned machine learning imputation software that relies on Random Forest or unsupervised learning algorithms. Moreover, mlim is intended to give social scientists a powerful solution to their missing data problem: a tool that automatically adapts to different variable types, which may be missing at different rates, have unknown distributions, and be highly correlated or interact with one another. But it is not just about higher accuracy! mlim also delivers fairer imputation, particularly for categorical and ordinal variables, because it automatically balances the levels of the variable, minimizing the bias resulting from class imbalance, which is common in social science data and has been widely ignored by missing data imputation software.

mlim outperforms other R packages for all variable types: continuous, binary (factor), multinomial (factor), and ordinal (ordered factor). The reason for this improved performance is that mlim:

  • Automatically fine-tunes the parameters of the machine learning models
  • Delivers a very high prediction accuracy
  • Does not make any assumption about the distribution of the data
  • Takes the interactions between the variables into account
  • Can, to some extent, take the hierarchical structure of the data into account
    • Imputes missing data in nested observations with higher accuracy compared to HLM imputation methods
  • Does not force a particular linear model
  • Uses a blend of different machine learning models

Procedure: From preimputation to imputation and postimputation

When a dataframe with NAs is given to mlim, the NAs are first replaced with plausible values (e.g., mean and mode) to prepare the dataset for the imputation, as shown in the flowchart below:
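Purely as an illustration, a mean/mode preimputation in base R might look like the following sketch (mlim performs the preimputation internally and may use other strategies, such as Random Forest preimputation; the helper name below is hypothetical):

# Illustrative only: replace NAs with the column mean (numeric columns)
# or the most frequent level (factor columns)
preimpute <- function(df) {
  for (v in names(df)) {
    idx <- is.na(df[[v]])
    if (!any(idx)) next
    if (is.numeric(df[[v]])) {
      df[[v]][idx] <- mean(df[[v]], na.rm = TRUE)   # mean for continuous
    } else {
      tab <- table(df[[v]])
      df[[v]][idx] <- names(tab)[which.max(tab)]    # mode for categorical
    }
  }
  df
}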

mlim follows three steps to optimize the missing data imputation. This procedure is optional, depending on the amount of computing resources available to you. In general, ELNET imputation already outperforms the other single and multiple imputation methods available in R. However, the imputation error can be further reduced by training stronger algorithms such as GBM, XGB, DL, or even Ensemble, which stacks several models on top of one another. For the majority of users, GBM or XGB (XGB is available only on macOS and Linux) will significantly improve the ELNET imputation, if enough time is provided to generate and fine-tune a large number of models.

You do not necessarily need the post-imputation. Once you have imputed the data with ELNET, you can stop there. ELNET is a relatively fast algorithm and is easy to fine-tune compared to GBM, XGB, DL, or Ensemble. In addition, ELNET generalizes nicely and is less prone to overfitting. The flowchart below shows the procedure of the mlim algorithm. When using mlim, you can use ELNET to impute a dataset with NAs or to optimize the imputed values of a dataset that has already been imputed. If you wish to go the extra mile, you can use heavier algorithms to activate the postimputation procedure, but this is strictly optional and by default, mlim does not use postimputation.

Fast imputation with ELNET (without postimputation)

Below are some comparisons between different R packages carrying out multiple imputation (bars with error) and single imputation. In these analyses, I only used the ELNET algorithm, which fine-tunes much faster than the other algorithms (GBM, XGBoost, and DL). As is evident, ELNET already outperforms all other single and multiple imputation procedures available in R. However, the performance of mlim can still be improved by adding another algorithm, which activates the postimputation procedure.

Installation

mlim is under fast development and receives monthly updates on CRAN. Therefore, it is recommended that you install the GitHub version until version 0.1 is released. To install the latest development version from GitHub:

# if devtools is not installed yet: install.packages("devtools")
library(devtools)
install_github("haghish/mlim")

Or alternatively, install the latest stable version from CRAN:

install.packages("mlim")

Supported algorithms

mlim supports several algorithms:

  • ELNET (Elastic Net)
  • RF (Random Forest and Extremely Randomized Trees)
  • GBM (Gradient Boosting Machine)
  • XGB (Extreme Gradient Boosting, available on macOS and Linux)
  • DL (Deep Learning)
  • Ensemble (Stacked Ensemble)

ELNET is the default imputation algorithm. Of all the algorithms above, ELNET is the simplest model: it is the fastest to fine-tune, requires the least RAM and CPU, and yet is the most stable, which also makes it one of the most generalizable. By default, mlim uses only ELNET; however, you can add another algorithm to activate the post-imputation procedure.
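For example, a call along the following lines activates post-imputation (a sketch, assuming the algos argument accepts a character vector of algorithm names, as in the ELNET-only call shown in the issue report further below):

library(mlim)
# A sketch: ELNET imputes the data, then GBM optimizes the imputed values
irisNA <- mlim.na(iris, p = 0.5, stratify = TRUE, seed = 2022)
MLIM <- mlim(irisNA, m = 1, algos = c("ELNET", "GBM"),
             tuning_time = 180, seed = 2022)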

GBM vs ELNET

But which one should you choose, assuming computational resources are not in question? Well, GBM is very likely to outperform ELNET if you specify a large enough max_models argument to tune the algorithm well for imputing each feature. That basically means generating at least 100 models, and the reward is a slight -- yet probably statistically significant -- improvement in imputation accuracy. The option is there for those who can use it, and to my knowledge, GBM fine-tuned with a large enough number of models is the most accurate imputation algorithm compared to any other procedure I know. But ELNET comes second, and given its speed advantage, it is indeed charming!
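A GBM-only imputation with a generous model budget might then look like the following sketch (argument names are taken from the text above; consult the package documentation for the current interface):

library(mlim)
# A sketch: impute with fine-tuned GBM; max_models caps how many candidate
# models are generated while fine-tuning the imputation of each feature
irisNA <- mlim.na(iris, p = 0.5, stratify = TRUE, seed = 2022)
MLIMGBM <- mlim(irisNA, m = 1, algos = "GBM", max_models = 100,
                tuning_time = 900, seed = 2022)
print(mlim.error(MLIMGBM, irisNA, iris))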

Both of these algorithms offer one advantage over all the other machine learning missing data imputation methods, such as kNN, k-Means, PCA, and Random Forest: you do not need to specify any parameter yourself. Everything is automatic, and mlim searches for the optimal parameters for imputing each variable within each iteration. All the aforementioned methods require some parameters to be specified, and these influence the imputation accuracy: the number of k for kNN, the number of components for PCA, the number of trees (and other parameters) for Random Forest, and so on. This is why ELNET outperforms the other packages: you get software that optimizes its models on its own.

Advantages and limitations

mlim fine-tunes models for imputation, a procedure that has not been implemented in other R packages. This often yields much higher accuracy than other machine learning imputation methods or missing data imputation procedures, because the models are more accurate, fine-tuned for each feature in the dataset. The cost, however, is computational resources. If you have access to a very powerful machine with a huge amount of RAM per CPU, then try GBM; if you specify a high enough number of models in each fine-tuning process, you are likely to get a more accurate imputation than with ELNET. However, for personal machines and laptops, ELNET is generally recommended (see below). If your machine is not powerful enough, the imputation is likely to crash due to memory problems. So perhaps begin with ELNET, unless you are working with a powerful server. This is my general advice as long as mlim is in beta and under development.

Citation

Example

iris is a small dataset with only 150 rows. Let's add 50% artificial missing data and compare several state-of-the-art machine learning missing data imputation procedures. ELNET comes out as the winner for a very simple reason: it was fine-tuned and the rest were not. The larger the dataset and the higher the number of features, the more vivid the difference between ELNET and the others becomes.

Single imputation

In a single imputation, the NAs are replaced with the most plausible values according to the model. You do not get the diversity of multiple imputation, but you still get an estimated imputation error based on a 10-fold (or higher, if specified) cross-validation procedure for each variable (column) in the dataset. As shown below, mlim provides the mlim.error() function to summarize the imputation error for the entire dataset or for each variable.

# Comparison of different R packages imputing iris dataset
# ===============================================================================
rm(list = ls())
library(mlim)
library(mice)
library(missForest)
library(VIM)

# Add artificial missing data
# ===============================================================================
irisNA <- mlim.na(iris, p = 0.5, stratify = TRUE, seed = 2022)

# Single imputation with mlim, giving it 180 seconds to fine-tune each imputation
# ===============================================================================
MLIM <- mlim(irisNA, m = 1, seed = 2022, tuning_time = 180)
print(MLIMerror <- mlim.error(MLIM, irisNA, iris))

# kNN Imputation with VIM
# ===============================================================================
kNN <- kNN(irisNA, imp_var=FALSE)
print(kNNerror <- mlim.error(kNN, irisNA, iris))

# Single imputation with MICE (for the sake of demonstration)
# ===============================================================================
MC <- mice(irisNA, m=1, maxit = 50, method = 'pmm', seed = 500)
print(MCerror <- mlim.error(MC, irisNA, iris))

# Random Forest Imputation with missForest
# ===============================================================================
set.seed(2022)
RF <- missForest(irisNA)
print(RFerror <- mlim.error(RF$ximp, irisNA, iris))

Multiple imputation

mlim supports multiple imputation. All you need to do is specify an integer greater than 1 for the value of m. For example, set m = 5 in the mlim function to impute 5 datasets; mlim then returns a list of 5 datasets. You can convert this list to a mids object using the mlim.mids() function and then follow up the analysis with the mids object the same way it is done with the mice R package. Here is an example:

# Comparison of different R packages imputing iris dataset
# ===============================================================================
rm(list = ls())
library(mlim)
library(mice)

# Add artificial missing data
# ===============================================================================
irisNA <- mlim.na(iris, p = 0.5, stratify = TRUE, seed = 2022)

# Multiple imputation with mlim, giving it 180 seconds to fine-tune each imputation
# ===============================================================================
MLIM2 <- mlim(irisNA, m = 5, seed = 2022, tuning_time = 180)
print(MLIMerror2 <- mlim.error(MLIM2, irisNA, iris))
mids <- mlim.mids(MLIM2, irisNA)
# Species has three levels, so a binomial glm would fail; fit a linear model instead
fit <- with(data = mids, expr = lm(Sepal.Length ~ Species))
res <- mice::pool(fit)
summary(res)

mlim's Issues

java.lang.ArrayIndexOutOfBoundsException: Index 57 out of bounds for length 57

Hi,
Thank you for your constant work on this package. It is awesome. I have had no problems using it so far, and it gives very good results.
However, I have a problem with my latest dataset and cannot figure out what the issue is.
This is the error:

> dane_elnet <- mlim(as.data.frame(x_miss), m=1, seed = 2022, tuning_time = 900, algos = c("ELNET"), report = "imputation_ELNET.md")

Random Forest preimputation in progress...


data 1, iteration 1 (RAM = 138.897 GiB):

  |                                                                                                                      |   0%
21:40:03.915: GLM_1_AutoML_1_20220908_214002 [GLM def_1] failed: java.lang.ArrayIndexOutOfBoundsException: Index 57 out of bounds for length 57
21:40:03.935: Empty leaderboard.
AutoML was not able to build any model within a max runtime constraint of 900 seconds, you may want to increase this value before retrying.
21:50:18.391: New models will be added to existing leaderboard mlim@@TNM (leaderboard frame=null) with already 0 models.
21:50:18.769: GLM_2_AutoML_2_20220908_215018 [GLM def_1] failed: java.lang.ArrayIndexOutOfBoundsException: Index 57 out of bounds for length 57
21:50:18.785: Empty leaderboard.
AutoML was not able to build any model within a max runtime constraint of 900 seconds, you may want to increase this value before retrying.
21:20:07.760: New models will be added to existing leaderboard mlim@@TNM (leaderboard frame=null) with already 0 models.
21:20:07.990: GLM_3_AutoML_3_20220909_212007 [GLM def_1] failed: java.lang.ArrayIndexOutOfBoundsException
21:20:08.1: Empty leaderboard.
AutoML was not able to build any model within a max runtime constraint of 900 seconds, you may want to increase this value before retrying.connection to JAVA server failed...

Error in value[[3L]](cond) : Java server crashed. perhaps a RAM problem?
In addition: Warning message:
In .automl.fetch_state(project_name) :
  The leaderboard contains zero models: try running AutoML for longer (the default is 1 hour).

The dataset seems to be cleaned and formatted nicely. I also cannot figure out what "Index 57 out of bounds for length 57" refers to.

> str(x_miss)
'data.frame':	1641 obs. of  42 variables:
 $ Wiek               : num  69.6 73.1 76.5 65.1 63.3 48.4 68.8 69.5 78.2 71.1 ...
 $ EBRT_BT            : Factor w/ 2 levels "BT BOOST","EBRT": 2 2 2 2 2 2 2 2 2 2 ...
 $ GGG                : num  2 5 1 4 1 3 2 1 5 1 ...
 $ cores              : num  6 6 10 6 6 NA NA 6 12 6 ...
 $ cores_positive     : num  6 1 6 5 2 NA NA 2 12 1 ...
 $ cores_positive_proc: num  1 0.167 0.6 0.833 0.333 ...
 $ max_prccancer      : num  NA 50 100 100 NA NA 100 50 NA 20 ...
 $ TURP               : Factor w/ 2 levels "0_No","1_Yes": 1 1 1 1 1 1 1 1 1 1 ...
 $ V_prostata         : num  31.9 35.7 67.2 38.5 26.5 35.9 NA 60 NA 156 ...
 $ MR_pre_EPE         : Factor w/ 2 levels "0_No","1_Yes": 2 NA NA NA NA NA 1 NA NA NA ...
 $ MR_pre_SVI         : Factor w/ 2 levels "0_No","1_Yes": 2 NA NA NA NA NA 1 NA NA NA ...
 $ PSA_density        : num  0.72 0.59 1.29 1.09 1.53 4.74 NA 0.25 NA 0.07 ...
 $ TNM                : Factor w/ 7 levels "T1c","T2a","T2b",..: 7 3 3 6 1 6 NA 3 5 2 ...
 $ ZUBROD             : num  0 1 1 0 0 0 0 0 0 0 ...
 $ PSAmax             : num  23 21 86.4 41.8 40.5 ...
 $ Risk_Group         : Factor w/ 4 levels "1_low_IR","2_high_IR",..: 4 4 3 4 3 4 3 2 4 2 ...
 $ ADT_pre_RT         : Factor w/ 2 levels "0_No","1_Yes": 2 2 2 2 2 2 2 2 2 1 ...
 $ ADT_intractu_RT    : Factor w/ 2 levels "0_No","1_Yes": 2 2 2 2 2 2 2 2 2 1 ...
 $ ADT_ADJ            : Factor w/ 2 levels "0_No","1_Yes": 1 2 2 2 2 2 2 2 2 1 ...
 $ ADT_typ            : Factor w/ 5 levels "0_brak","analog",..: 3 3 3 3 3 3 3 3 3 1 ...
 $ ADT_czas_pre_RT    : num  100 110 104 92 74 76 101 90 113 1 ...
 $ ADT_ADJ_CZAS       : num  0 2.66 51.12 74.8 9.63 ...
 $ ADT_czas_suma      : num  5.81 11.4 59.1 80.35 13.7 ...
 $ czas_PSA_pre       : num  NA NA 0.76 8.74 1.38 NA NA 6.44 NA 0.66 ...
 $ PSA_pre_RT         : num  0.08 0.04 0 41.8 23.28 ...
 $ czas_RT            : num  38 40 62 50 61 39 50 50 58 46 ...
 $ DCp                : num  38 42 44 54 58 60 60 62 72 68 ...
 $ N_RT               : num  1 1 1 1 1 1 0 1 1 0 ...
 $ DCn                : num  38 42 44 50 44 50 0 50 43.2 0 ...
 $ DCbt               : num  0 0 0 0 0 0 0 0 0 0 ...
 $ BTfx               : num  0 0 0 0 0 0 0 0 0 0 ...
 $ BED3               : num  63.3 70 73.3 90 96.7 ...
 $ BED1_5             : num  88.7 98 102.7 126 135.3 ...
 $ FU                 : num  0.5 4 53.2 102.2 11.6 ...
 $ Zgon               : num  1 1 1 1 1 1 1 1 1 1 ...
 $ BC                 : num  0 0 0 0 0 1 0 0 1 1 ...
 $ MFS_24             : num  1 0 1 1 0 1 1 1 1 1 ...
 $ FFM                : num  0 0 0 0 0 1 0 0 0 1 ...
 $ BC_czas            : num  0.53 3.98 53.15 102.23 11.63 ...
 $ FFM_czas           : num  0.53 3.98 53.15 102.23 11.63 ...
 $ MFS_czas_24        : num  23.72 3.98 53.15 102.76 11.63 ...
 $ OS                 : num  23.7 112.5 53.2 102.8 84.3 ...

I have also not found any suspicious variables...

> caret::nearZeroVar(x_miss)
integer(0)

All other packages seem to handle the missing values well...

naniar::vis_miss(x_miss, sort_miss = T)


Any suggestion would be really helpful. Thank you in advance.
Konrad
