
DeepAL+: Deep Active Learning Toolkit

DeepAL+ is an extended toolkit that originated from the DeepAL toolkit. It includes Python implementations of the following active learning algorithms:

  • Random Sampling
  • Least Confidence [1]
  • Margin Sampling [2]
  • Entropy Sampling [3]
  • Uncertainty Sampling with Dropout Estimation [4]
  • Bayesian Active Learning Disagreement [4]
  • Core-Set Selection [5]
  • Adversarial Margin [6]
  • Mean Standard Deviation [7]
  • Variation Ratios [8]
  • Cost-Effective Active Learning [9]
  • KMeans Sampling (implemented with both the scikit-learn and faiss-gpu libraries)
  • Batch Active learning by Diverse Gradient Embeddings [10]
  • Loss Prediction Active Learning [11]
  • Variational Adversarial Active Learning [12]
  • Wasserstein Adversarial Active Learning [13]
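To illustrate the simplest of these query strategies, entropy sampling scores each unlabeled example by the entropy of its predicted class distribution and queries the highest-entropy ones. A minimal NumPy sketch (not the toolkit's own implementation):

```python
import numpy as np

def entropy_query(probs, n_query):
    """Pick the n_query samples whose predicted class distribution
    has the highest entropy (i.e. the model is least certain)."""
    probs = np.clip(probs, 1e-12, 1.0)            # avoid log(0)
    entropy = -(probs * np.log(probs)).sum(axis=1)
    return np.argsort(entropy)[-n_query:]          # indices of most uncertain

# A confident prediction and a near-uniform (maximally uncertain) one:
probs = np.array([[0.98, 0.01, 0.01],
                  [0.34, 0.33, 0.33]])
print(entropy_query(probs, 1))  # → [1]
```

The other uncertainty-based strategies in the list differ mainly in the score (least confidence, margin, variation ratios) while following the same select-top-k pattern.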

We support 10 datasets: MNIST, FashionMNIST, EMNIST, SVHN, CIFAR10, CIFAR100, Tiny ImageNet, BreakHis, PneumoniaMNIST, and Waterbirds. A new dataset can be added by implementing a new get_newdataset() function in data.py.
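As a sketch of what such a function might look like (the random arrays are placeholders for real loading code; the Data wrapper and handler come from the toolkit itself):

```python
import numpy as np

def get_newdataset(handler, args_task):
    """Hypothetical template for a new dataset in data.py.
    Replace the random arrays below with real loading code."""
    rng = np.random.default_rng(0)
    X_tr = rng.integers(0, 256, size=(1000, 28, 28), dtype=np.uint8)  # train images
    Y_tr = rng.integers(0, 10, size=1000)                             # train labels
    X_te = rng.integers(0, 256, size=(200, 28, 28), dtype=np.uint8)   # test images
    Y_te = rng.integers(0, 10, size=200)                              # test labels
    # In the toolkit, labels are converted to torch tensors and the
    # function returns Data(X_tr, Y_tr, X_te, Y_te, handler, args_task).
    return X_tr, Y_tr, X_te, Y_te
```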

Tiny ImageNet, BreakHis, and PneumoniaMNIST must be downloaded manually; the corresponding download addresses can be found in data.py.

In DeepAL+, we use ResNet18 as the basic classifier. It can be replaced with any other classifier by adding it to nets.py.
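A minimal sketch of a replacement classifier one might register in nets.py; the (dim, pretrained, num_classes) keyword interface is an assumption inferred from how nets.py instantiates its models, so adjust it to match your version of the toolkit:

```python
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Hypothetical drop-in classifier for nets.py (single-channel input)."""
    def __init__(self, dim=28, pretrained=False, num_classes=10):
        super().__init__()
        # pretrained is accepted for interface compatibility only.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),   # makes the head independent of dim
        )
        self.classifier = nn.Linear(16 * 4 * 4, num_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.classifier(f)
```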

Prerequisites

  • numpy 1.21.2
  • scipy 1.7.1
  • pytorch 1.10.0
  • torchvision 0.11.1
  • scikit-learn 1.0.1
  • tqdm 4.62.3
  • ipdb 0.13.9
  • openml 0.12.2
  • faiss-gpu 1.7.2
  • toma 1.1.0
  • opencv-python 4.5.5.64
  • wilds 2.0.0 (for waterbirds dataset only)

You can also use the following command to create the conda environment:

conda env create -f environment.yml

faiss-gpu and wilds should be installed with pip.

Demo

  python demo.py \
      -a RandomSampling \
      -s 100 \
      -q 1000 \
      -b 100 \
      -d MNIST \
      --seed 4666 \
      -t 3 \
      -g 0

See arguments.py for more instructions. We have also conducted a comparative survey based on DeepAL+; please refer to our paper (cited below) for more details.

Citing

Please consider citing our paper if you use our code in your research or applications.

@article{zhan2022comparative,
  title={A comparative survey of deep active learning},
  author={Zhan, Xueying and Wang, Qingzhong and Huang, Kuan-hao and Xiong, Haoyi and Dou, Dejing and Chan, Antoni B},
  journal={arXiv preprint arXiv:2203.13450},
  year={2022}
}

Reference

[1] A Sequential Algorithm for Training Text Classifiers, SIGIR, 1994

[2] Active Hidden Markov Models for Information Extraction, IDA, 2001

[3] Active learning literature survey. University of Wisconsin-Madison Department of Computer Sciences, 2009

[4] Deep Bayesian Active Learning with Image Data, ICML, 2017

[5] Active Learning for Convolutional Neural Networks: A Core-Set Approach, ICLR, 2018

[6] Adversarial Active Learning for Deep Networks: a Margin Based Approach, arXiv, 2018

[7] Semantic segmentation of small objects and modeling of uncertainty in urban remote sensing images using deep convolutional neural networks, CVPR, 2016

[8] Elementary applied statistics: for students in behavioral science. New York: Wiley, 1965

[9] Cost-Effective Active Learning for Deep Image Classification, TCSVT, 2016

[10] Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds, ICLR, 2020

[11] Learning Loss for Active Learning, CVPR, 2019

[12] Variational Adversarial Active Learning, ICCV, 2019

[13] Deep Active Learning: Unified and Principled Method for Query and Training, AISTATS, 2020

Contact

If you have any further questions, want to discuss active learning with me, or would like to contribute your own active learning approaches to our toolkit, please contact [email protected] (my spare email is [email protected]).

deepalplus's People

Contributors: sinezhan

Stargazers

 avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar  avatar

Watchers

 avatar  avatar  avatar

deepalplus's Issues

VAAL implementation is incomplete?

Hi, is the VAAL implementation in this toolkit incorrect? It is very different from the authors' original implementation, and it also looks incomplete and unusable. Can you please confirm?

Issue in nets_waal.py file

Dear Repository owners,

I would like to use your DeepAL+ to run experiments with deep active learning, but I have a question about nets_waal.py.
At line 81, you recompute the features for labeled and unlabeled data inside a "with torch.no_grad()" block, then compute the gradient penalty and add it to the loss.
Because of the torch.no_grad() block, the contribution of the gradient penalty to the feature extractor's weight updates will be null. And since at line 64 you set requires_grad=False for the discriminator, the discriminator's weights will not be updated either.

I would like to know why you recomputed the features inside the "with torch.no_grad()" block, since this seems to make the gradient penalty have no impact when updating the weights of the entire model.
Thank you.
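For reference, a gradient penalty only contributes to training when the interpolated features carry gradients, i.e. they must not be produced inside a torch.no_grad() block. A minimal WGAN-GP-style sketch, independent of the toolkit's WAAL code:

```python
import torch

def gradient_penalty(disc, feat_labeled, feat_unlabeled):
    """WGAN-GP-style penalty on a discriminator over feature space.
    feat_* must require grad (NOT be computed under torch.no_grad())
    for the penalty to backpropagate into the feature extractor."""
    alpha = torch.rand(feat_labeled.size(0), 1)
    interp = (alpha * feat_labeled + (1 - alpha) * feat_unlabeled).requires_grad_(True)
    out = disc(interp)
    # create_graph=True keeps the graph so the penalty itself is differentiable
    grads = torch.autograd.grad(out.sum(), interp, create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```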

Stuck at net to device

Dear Repository owners,

I would like to use your DeepAL+ to run experiments with deep active learning.
However, I always get stuck at line 23 in nets.py. It takes ages to execute, but should normally take milliseconds:
self.clf = self.net(dim = dim, pretrained = self.params['pretrained'], num_classes = self.params['num_class']).to(self.device)
The script fails after ~20 min with RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED.
Do you have any recommendations? Executing the code in https://github.com/ej0cl6/deep-active-learning works fine for me.
Could this be because of the cuDNN and CUDA versions? Are there certain versions one has to use?

I installed using conda. I tried cudnn/8.0_v7.0 with cuda/11.0.2 as well as cudnn/11.7_v8.6 with cuda/11.7.0 and had the same behavior with both.

Thank you.

There was a memory error when I loaded the breakhis dataset

Hello, @SineZHAN.

    def get_BreakHis(handler, args_task):
        # download data from https://www.kaggle.com/datasets/ambarish/breakhis and unzip it in data/BreakHis/
        data_dir = './data/BreakHis/BreaKHis_v1/BreaKHis_v1/histology_slides/breast'
        data = datasets.ImageFolder(root=data_dir, transform=None).imgs
        train_ratio = 0.7
        test_ratio = 0.3
        data_idx = list(range(len(data)))
        random.shuffle(data_idx)
        train_idx = data_idx[:int(len(data) * train_ratio)]
        test_idx = data_idx[int(len(data) * train_ratio):]
        X_tr = [np.array(Image.open(data[i][0])) for i in train_idx]
        Y_tr = [data[i][1] for i in train_idx]
        X_te = [np.array(Image.open(data[i][0])) for i in test_idx]
        Y_te = [data[i][1] for i in test_idx]
        X_tr = np.array(X_tr, dtype=object)
        X_te = np.array(X_te, dtype=object)
        Y_tr = torch.from_numpy(np.array(Y_tr))
        Y_te = torch.from_numpy(np.array(Y_te))
        return Data(X_tr, Y_tr, X_te, Y_te, handler, args_task)

A memory error occurred while running this code!

MemoryError: Unable to allocate 943. KiB for an array with shape (460, 700, 3) and data type uint8

X_tr keeps accumulating every decoded image, which fills up memory. How can I modify it to handle this dataset?
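A common remedy (not part of the toolkit, and the class name here is hypothetical) is to store only (path, label) pairs and decode each image lazily on access, so memory use stays proportional to one image rather than the whole dataset:

```python
class LazyImageList:
    """Stores (path, label) pairs and decodes images only on access."""
    def __init__(self, samples):
        self.samples = samples          # list of (path, label) tuples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, i):
        # Deferred imports: decoding happens only when an item is requested
        # (requires Pillow and NumPy at access time).
        from PIL import Image
        import numpy as np
        path, label = self.samples[i]
        return np.array(Image.open(path)), label
```

The dataset handler would then receive the path list instead of pre-decoded arrays, decoding images per batch inside the DataLoader.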

Custom Dataset

Hello,

Is there any way to test this code with a custom image dataset?

Thank you
