
gmvandeven / class-incremental-learning

70 stars, 2 watchers, 14 forks, 1.55 MB

PyTorch implementation of a VAE-based generative classifier, as well as other class-incremental learning methods that do not store data (DGR, BI-R, EWC, SI, CWR, CWR+, AR1, the "labels trick", SLDA).

License: MIT License

Shell 1.18% Python 98.82%
continual-learning class-incremental-learning generative-classifier generative-classification variational-autoencoder elastic-weight-consolidation synaptic-intelligence cwr cwr-plus ar1
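
As a rough illustration of the generative-classification idea behind this repository: each class gets its own generative model, and a test input is assigned to the class whose model gives it the highest (estimated) log-likelihood. The sketch below is a minimal stand-in, not the repository's actual API; a diagonal Gaussian plays the role of the per-class VAE and all names are made up.

import torch

# Minimal sketch of generative classification (illustrative only):
# keep one generative model per class and label a test input with the class
# whose model assigns it the highest log-likelihood. A diagonal Gaussian
# stands in here for the per-class VAE used in the repository.

class ClassGaussian:
    """Stand-in 'generative model' for one class: a diagonal Gaussian."""
    def __init__(self, data):
        self.mean = data.mean(dim=0)
        self.std = data.std(dim=0) + 1e-6

    def log_likelihood(self, x):
        dist = torch.distributions.Normal(self.mean, self.std)
        return dist.log_prob(x).sum(dim=-1)  # sum over feature dimensions

# Toy data: two classes in 2-D.
class0 = torch.randn(100, 2) + torch.tensor([2.0, 0.0])
class1 = torch.randn(100, 2) + torch.tensor([-2.0, 0.0])
models = [ClassGaussian(class0), ClassGaussian(class1)]

x_test = torch.tensor([[1.8, 0.1], [-2.2, -0.3]])
scores = torch.stack([m.log_likelihood(x_test) for m in models], dim=1)
print(scores.argmax(dim=1))  # expected: tensor([0, 1])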

class-incremental-learning's People

Contributors

gmvandeven


class-incremental-learning's Issues

About the task split when training the proposed method

In the paper, you mentioned that MNIST and CIFAR-10 are both split into 5 tasks, but I can't find the argument for the number of tasks (i.e., something like add_argument("--tasks", type=int, ...)) in the file options_gen_classifier.py [screenshot], although I did find an argument called --tasks in the file options.py [screenshot].

Could you please explain the reason? Thank you.
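
For reference, a task-count option of the kind referred to above is usually declared roughly as follows. This is an illustrative argparse sketch with made-up defaults and help text, not the exact definition in options.py.

import argparse

# Illustrative only: how a "--tasks" option is typically declared;
# the actual definition in options.py may differ in default and help text.
parser = argparse.ArgumentParser()
parser.add_argument("--tasks", type=int, default=5,
                    help="number of tasks the dataset is split into")
args = parser.parse_args(["--tasks", "5"])
print(args.tasks)  # -> 5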

Core50 result

Command used: compare_all.py --experiment=CORe50 --n-seeds=10 --seed=11 --single-epochs --batch=1 --fc-layers=2 --z-dim=200 --fc-units=1024 --lr=0.0001 --c=10 --lambda=10 --omega-max=0.1 --ar1-c=1. --dg-prop=0. --bir-c=0.01 --si-dg-prop=0.6

Running the CORe50 dataset with the code provided gives results that are inconsistent with those reported in the article (BI-R, Table 2).

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Got this error:

File "<__array_function__ internals>", line 200, in argmax
File "/home/19mkn1/.local/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 1242, in argmax
    return _wrapfunc(a, 'argmax', axis=axis, out=out, **kwds)
File "/home/19mkn1/.local/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 54, in _wrapfunc
    return _wrapit(obj, method, *args, **kwds)
File "/home/19mkn1/.local/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 43, in _wrapit
    result = getattr(asarray(obj), method)(*args, **kwds)
File "/home/19mkn1/.local/lib/python3.8/site-packages/torch/_tensor.py", line 678, in __array__
    return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
srun: error: aurora: task 0: Exited with exit code 1

This error occurs in a Python script that is attempting to convert a PyTorch tensor to a NumPy array. However, the tensor is located on a CUDA device (i.e., GPU) rather than on the CPU. PyTorch doesn't support directly converting CUDA tensors to NumPy arrays because NumPy operates on CPU memory.

The error message specifically suggests using Tensor.cpu() to copy the tensor to host memory first, before converting it to a NumPy array. Here's how you can fix it:

import torch

# Assuming 'cuda_tensor' is your PyTorch tensor on the CUDA device
cuda_tensor = torch.tensor([1, 2, 3]).cuda()

# Move the tensor to CPU
cpu_tensor = cuda_tensor.cpu()

# Now you can convert it to a NumPy array
numpy_array = cpu_tensor.numpy()

By first moving the tensor to the CPU using cpu(), you can then safely convert it to a NumPy array.
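
In the traceback above, the conversion is triggered by a call to np.argmax on a CUDA tensor, so in practice the fix goes at that call site. A minimal sketch, assuming the offending tensor is called scores (a made-up name):

import numpy as np
import torch

# Hypothetical reproduction of the situation in the traceback above:
# `scores` stands in for whatever CUDA tensor was passed to np.argmax.
scores = torch.randn(10).cuda()          # requires a CUDA device

# Option 1: move the tensor to host memory before handing it to NumPy.
pred = np.argmax(scores.cpu().numpy())

# Option 2: skip the NumPy round-trip and stay in PyTorch.
pred = torch.argmax(scores).item()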

Question about model size

Hi Gido,

Does this method have the drawback that the model becomes too large for a large label space such as ImageNet?
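
If the generative classifier keeps a separate generative model (VAE) per class, its total parameter count grows roughly linearly with the number of classes, which is presumably the concern behind this question. A back-of-the-envelope sketch, using a made-up per-class parameter count rather than a measurement of this repository's models:

# Rough illustration only: linear growth of total model size when one
# generative model is kept per class. The per-class parameter count below
# is a placeholder, not a measurement of this repository's models.
params_per_class_model = 2_000_000        # hypothetical per-class VAE size

for num_classes in (10, 100, 1000):       # e.g. CIFAR-10, CIFAR-100, ImageNet-1k
    total = num_classes * params_per_class_model
    print(f"{num_classes:>4} classes -> ~{total / 1e6:.0f}M parameters")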

Question regarding the online incremental learning

Hi,

Nice to meet you.
May I know whether this code is applicable to online incremental learning? I ask because, after reading your paper, I noticed that it mentions online incremental learning.

Thanks if you could assist me :D
