- What is DISTIL?
- Key Features of DISTIL
- Starting with DISTIL
- Where can DISTIL be used?
- Package Requirements
- Documentation
- Make your PyTorch Model compatible with DISTIL
- Demo Notebooks
- Active Learning Benchmarking using DISTIL
- Testing Individual Strategies and Running Examples
- Mailing List
- Acknowledgment
- Team
- Resources
- Publications
DISTIL is an active learning toolkit that implements a number of state-of-the-art active learning strategies, with a particular focus on active learning in the deep learning setting. DISTIL is built on PyTorch and decouples the training loop from the active learning algorithm, giving users the flexibility to control the training procedure and the model. It allows users to incorporate new active learning algorithms easily with minimal changes to their existing code. DISTIL also supports active learning on your custom dataset and allows you to experiment on well-known datasets. We are continuously incorporating newer and better active learning selection strategies into DISTIL.
- Decouples the active learning strategy from the training loop, allowing users to modify the training and/or the active learning strategy
- Implements faster and more efficient versions of several active learning strategies
- Contains most state-of-the-art active learning algorithms
- Allows running basic experiments with just one command
- Provides an interface to the various active learning strategies through just a couple of lines of code (see the sketch after this list)
- Requires only minimal changes to the configuration files to run your own experiments
- Achieves higher test accuracies with less training data, enabling a large reduction in labeling cost and time
- Requires minimal change to add it to existing training structures
- Contains recipes, tutorials, and benchmarks for all active learning algorithms on many deep learning datasets
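As an illustration of that short interface, here is a sketch of constructing a strategy and asking it for a batch of points to label. The module path, the BADGE class, and the select() method follow the examples in the repository, but the constructor arguments and their order are assumptions that may differ between versions; `my_model` and `MyDataHandler` are hypothetical placeholders for your own model and data handler (see the compatibility section below).

```python
# Illustrative only: the constructor arguments and their order below are an
# assumption and may differ between DISTIL versions; consult the examples and
# documentation for the exact signature. `my_model` and `MyDataHandler` are
# hypothetical placeholders for your own model and data handler.
import numpy as np
from distil.active_learning_strategies.badge import BADGE

X_labeled = np.random.rand(100, 784).astype(np.float32)   # toy labeled inputs
y_labeled = np.random.randint(0, 10, size=100)             # toy labels
X_pool    = np.random.rand(1000, 784).astype(np.float32)   # toy unlabeled pool

# (labeled X, labeled y, unlabeled pool, model, data handler, #classes, args)
strategy = BADGE(X_labeled, y_labeled, X_pool, my_model, MyDataHandler,
                 10, {'batch_size': 64})
selected_idx = strategy.select(100)   # indices of 100 pool points to label next
```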
git clone https://github.com/decile-team/distil.git
cd distil
pip install -r requirements/requirements.txt
python train.py --config_path=/content/distil/configs/config_svhn_resnet_randomsampling.json
To create your own configuration file for training, please refer to the DISTIL Configuration File Documentation.
You can also install it directly as a pip package:
pip install decile-distil
Some of the algorithms currently implemented in DISTIL include the following:
- Uncertainty Sampling [1]
- Margin Sampling [2]
- Least Confidence Sampling [2]
- FASS [3]
- BADGE [4]
- GLISTER ACTIVE [6]
- CoreSets based Active Learning [5]
- Random Sampling
- Submodular Sampling [3,6,7]
- Adversarial DeepFool [9]
- BALD [10]
- K-Means Sampling [5]
- Adversarial BIM
- Baseline Sampling
To learn more about the different active learning algorithms, check out the Active Learning Strategies Survey Blog.
DISTIL is a toolkit that provides support for various active learning algorithms. At present, it works only in the supervised learning setting for classification; extensions to active semi-supervised learning and active learning for object detection are planned. DISTIL can be used in any scenario where you want to reduce labeling cost and time by labeling only the most informative points for your ML model. A conceptual sketch of this select-label-retrain loop is shown below.
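As a concrete illustration of this workflow, here is a minimal, self-contained sketch of the select-label-retrain loop in plain PyTorch. Entropy-based uncertainty sampling stands in for any DISTIL strategy, and the toy tensors, model, and budget are placeholders rather than part of the DISTIL API.

```python
# Conceptual active learning loop: train, select informative points, label them,
# move them into the labeled set, and repeat. All data and the model are toy
# placeholders; entropy sampling stands in for any DISTIL strategy.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x_lab, y_lab = torch.randn(100, 1, 28, 28), torch.randint(0, 10, (100,))
x_pool = torch.randn(1000, 1, 28, 28)      # unlabeled pool
budget = 50                                 # points to label per round

for round_ in range(3):
    # 1) Train on the current labeled set (one step here; DISTIL lets you keep
    #    your own training loop).
    model.train()
    opt.zero_grad()
    F.cross_entropy(model(x_lab), y_lab).backward()
    opt.step()

    # 2) Select the most informative pool points (highest predictive entropy).
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x_pool), dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    selected = entropy.topk(budget).indices

    # 3) Query labels for the selected points (simulated here with random labels).
    new_y = torch.randint(0, 10, (budget,))

    # 4) Move the newly labeled points from the pool into the labeled set.
    x_lab = torch.cat([x_lab, x_pool[selected]])
    y_lab = torch.cat([y_lab, new_y])
    keep = torch.ones(x_pool.size(0), dtype=torch.bool)
    keep[selected] = False
    x_pool = x_pool[keep]
```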
- "numpy >= 1.14.2",
- "scipy >= 1.0.0",
- "numba >= 0.43.0",
- "tqdm >= 4.24.0",
- "torch >= 1.4.0",
- "apricot-select >= 0.6.0"
Learn more about DISTIL by reading our documentation.
DISTIL provides various models and data handlers which can be used directly. DISTIL makes it extremely easy to integrate your custom models with active learning. There are two main things that need to be incorporated in your code before using DISTIL.
- Model
  - Your model should have a function get_embedding_dim() that returns the number of hidden units in the last layer.
  - Your forward() function should accept an optional boolean parameter "last" where:
    - If True: it should return both the model output and the output of the second-to-last layer.
    - If False: it should return only the model output.
  - Check the models included in DISTIL for examples! A minimal sketch follows.
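Below is a minimal sketch of a model that satisfies these two requirements. The architecture (a hypothetical two-layer network) is arbitrary and only for illustration; what matters is get_embedding_dim() and the "last" flag in forward().

```python
import torch.nn as nn
import torch.nn.functional as F

# Illustrative model following the two DISTIL conventions above.
class TwoLayerNet(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, num_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)
        self.embedding_dim = hidden_dim

    def forward(self, x, last=False):
        x = x.view(x.size(0), -1)          # flatten the input
        emb = F.relu(self.fc1(x))          # output of the second-to-last layer
        out = self.fc2(emb)                # model output (logits)
        if last:
            return out, emb                # strategies use `emb` as the embedding
        return out

    def get_embedding_dim(self):
        return self.embedding_dim          # hidden units in the last layer
```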
- Data Handler
  - Your DataHandler class should have a boolean attribute "select" with default value True:
    - If True: your __getitem__(self, index) method should return (input, index)
    - If False: your __getitem__(self, index) method should return (input, label, index)
  - Your DataHandler class should have a boolean attribute "use_test_transform" with default value False.
  - Check the DataHandler classes included in DISTIL for examples! A minimal sketch follows.
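Below is a minimal sketch of a DataHandler that satisfies these conventions. The tensor conversion is illustrative and any transform logic is omitted; adapt it to your dataset.

```python
import torch
from torch.utils.data import Dataset

# Illustrative data handler following the DISTIL conventions above.
class MyDataHandler(Dataset):
    def __init__(self, X, Y=None, select=True, use_test_transform=False):
        self.X = X
        self.Y = Y
        self.select = select                        # True when serving the unlabeled pool for selection
        self.use_test_transform = use_test_transform

    def __getitem__(self, index):
        x = torch.as_tensor(self.X[index], dtype=torch.float32)
        if self.select:
            return x, index                         # unlabeled pool: no label available
        return x, self.Y[index], index              # labeled data: include the label

    def __len__(self):
        return len(self.X)
```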
To get a clearer idea about how to incorporate DISTIL with your own models, refer to Getting Started With DISTIL & Active Learning Blog
You can also download the .ipynb files from the notebooks folder.
The models used below were first trained on an initial random set of points (equal to the budget). After each set of new points was added, the model was retrained from scratch until the training accuracy crossed the maximum-accuracy threshold, and the test accuracy was then reported before the next selection round. The results below are preliminary, each obtained from a single run. We are carrying out a more thorough benchmarking experiment with multiple runs and will report standard deviations; we will also link to a preprint containing the full benchmarking results.
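As a rough sketch of the retraining protocol just described (under the assumption of simple full-batch gradient steps and a hypothetical threshold of 0.99), the per-round training could look like the following; the caller would then evaluate test accuracy on the returned model before the next selection round.

```python
# Sketch of "retrain from scratch until training accuracy crosses the threshold".
# `make_model`, the toy data, the threshold, and the optimizer settings are
# illustrative placeholders, not the exact benchmarking code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_until_threshold(make_model, x_train, y_train, threshold=0.99, max_epochs=200):
    model = make_model()                       # re-initialize the model each round
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    for _ in range(max_epochs):
        opt.zero_grad()
        out = model(x_train)
        F.cross_entropy(out, y_train).backward()
        opt.step()
        train_acc = (out.argmax(dim=1) == y_train).float().mean().item()
        if train_acc >= threshold:             # stop once the threshold is crossed
            break
    return model

# Example usage with toy data:
x_train, y_train = torch.randn(400, 20), torch.randint(0, 10, (400,))
model = train_until_threshold(lambda: nn.Linear(20, 10), x_train, y_train)
```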
For more details on the benchmarking results, please check out the Active Learning Benchmark Blog: Cut Down Labeling Costs with DISTIL.
Model: Resnet18
The best strategies show 2x labeling efficiency compared to random sampling. BADGE does better than entropy sampling with a larger budget, and all strategies do better than random sampling.
Model: MnistNet
All strategies exhibit a gain over random sampling, and both entropy sampling and BADGE achieve a 4x labeling efficiency compared to random sampling.
Model: Resnet18
All strategies exhibit a gain over random sampling, and both entropy sampling and BADGE achieve a 4x labeling efficiency compared to random sampling.
Model: Resnet18
All strategies exhibit a gain over random sampling, and both entropy sampling and BADGE achieve a 3x labeling efficiency compared to random sampling.
Budget: 400, Model: Two Layer Net, Number of rounds: 11, Total Points: 4800 (30%)
Before running the examples or the test script, please clone the dataset repository in addition to this one. The default data path expects the dataset repository to be in the same root directory as DISTIL. If you place it elsewhere, update the data paths in the examples and test scripts accordingly.
Dataset repository:
git clone https://github.com/decile-team/datasets.git
To run examples:
cd distil/examples
python example.py
To test individual strategies:
python test_strategy.py --strategy badge
For more information about the arguments that --strategy accepts:
python test_strategy.py -h
To receive updates about DISTIL and to be a part of the community, join the Decile_DISTIL_Dev group.
https://groups.google.com/forum/#!forum/Decile_DISTIL_Dev/join
This library takes inspiration, builds upon, and uses pieces of code from several open source codebases. These include Kuan-Hao Huang's deep active learning repository, Jordan Ash's Badge repository, and Andreas Kirsch's and Joost van Amersfoort's BatchBALD repository. Also, DISTIL uses Apricot for submodular optimization.
DISTIL is created and maintained by Nathan Beck, Durga Sivasubramanian, Apurva Dani, Rishabh Iyer, and Ganesh Ramakrishnan. We look forward to making DISTIL more community-driven. Please use it and contribute to it for your active learning research, and feel free to use it for your commercial projects. We will add the major contributors here.
YouTube Tutorials on DISTIL:
- Tutorial on Active Learning
- Tutorial and Setup of DISTIL
- Benchmarking Active Learning through DISTIL
[1] Settles, Burr. Active learning literature survey. University of Wisconsin-Madison Department of Computer Sciences, 2009.
[2] Wang, Dan, and Yi Shang. "A new active labeling method for deep learning." 2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 2014.
[3] Kai Wei, Rishabh Iyer, and Jeff Bilmes. "Submodularity in data subset selection and active learning." International Conference on Machine Learning (ICML), 2015.
[4] Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. Deep batch active learning by diverse, uncertain gradient lower bounds. CoRR, 2019. URL: http://arxiv.org/abs/1906.03671, arXiv:1906.03671.
[5] Sener, Ozan, and Silvio Savarese. "Active learning for convolutional neural networks: A core-set approach." ICLR 2018.
[6] Krishnateja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, and Rishabh Iyer. "GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning." 35th AAAI Conference on Artificial Intelligence (AAAI), 2021.
[7] Vishal Kaushal, Rishabh Iyer, Suraj Kothawade, Rohan Mahadev, Khoshrav Doctor, and Ganesh Ramakrishnan. "Learning From Less Data: A Unified Data Subset Selection and Active Learning Framework for Computer Vision." 7th IEEE Winter Conference on Applications of Computer Vision (WACV), 2019, Hawaii, USA.
[8] Wei, Kai, et al. "Submodular subset selection for large-scale speech training data." 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014.
[9] Ducoffe, Melanie, and Frederic Precioso. "Adversarial active learning for deep networks: a margin based approach." arXiv preprint arXiv:1802.09841 (2018).
[10] Gal, Yarin, Riashat Islam, and Zoubin Ghahramani. "Deep bayesian active learning with image data." International Conference on Machine Learning. PMLR, 2017.