xialeiliu / awesome-incremental-learning
Awesome Incremental Learning
Hi, I would like to suggest adding our lab's paper "REMIND Your Neural Network to Prevent Catastrophic Forgetting"
It is accepted to ECCV-2020 and performs incremental classification on the ImageNet and CORe50 datasets, as well as incremental visual question answering on the TDIUC and CLEVR datasets.
arXiv: https://arxiv.org/abs/1910.02509
ECVA: https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123530460.pdf
code: https://github.com/tyler-hayes/REMIND
The code for paper "Learning to Learn without Forgetting By Maximizing Transfer and Minimizing Interference" can be found here: https://github.com/mattriemer/mer
I thought it easier to open an issue but please let me know if you prefer a pull request.
-Maria
Hi @xialeiliu , please add the following paper in repo.
Paper: https://openaccess.thecvf.com/content/WACV2021/html/Kurmi_Do_Not_Forget_to_Attend_to_Uncertainty_While_Mitigating_Catastrophic_WACV_2021_paper.html
Project: https://delta-lab-iitk.github.io/Incremental-learning-AU/
Thanks
Paper name/title: Online Class Incremental Learning on Stochastic Blurry Task Boundary via Mask and Visual Prompt Tuning
Paper link: https://arxiv.org/abs/2308.09303
Code link: https://github.com/moonjunyyy/si-blurry
keywords: Class Incremental Learning
Thank you for contributing to such a great repository for continual learning literature.
Could you please add the following paper that was published in ICCV 2023?
Paper: Growing a Brain with Sparsity-Inducing Generation for Continual Learning
Code: https://github.com/Jin0316/GrowBrain
Thanks for your consideration!
Hi Xialei,
Thanks for your contributions to the incremental learning field. Can I request you to add our recent paper published at ACM MM 2021 in the repo?
Paper: Co-Transport for Class-Incremental Learning https://arxiv.org/pdf/2107.12654.pdf
Dear Xialei,
thank you for maintaining this great list!
We have authored several published (full) continual/lifelong learning papers, and I am wondering if you could please add them to the list. One of them might be of particular interest to the community, as it is a broad, recently published survey paper from 2023:
Survey paper: "A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning"
Published in: Neural Networks 160, 2023
Link: https://www.sciencedirect.com/science/article/pii/S089360802300014X (or on arXiv)
"A Procedural World Generation Framework for Systematic Evaluation of Continual Learning"
Published in: NeurIPS 2021
Link: https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/d645920e395fedad7bbbed0eca3fe2e0-Abstract-round1.html (or on arXiv)
Code: https://github.com/ccc-frankfurt/EndlessCL-Simulator-Source
"Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition"
Published in: Journal of Imaging 8:4, 2022
Link: https://www.mdpi.com/2313-433X/8/4/93 (or on arXiv)
Code: https://github.com/MrtnMndt/OpenVAE_ContinualLearning
Hi @xialeiliu ,
Thanks for maintaining this repo. Can I request you to add our recent paper published at ICML 2021 (also at ICLR NERL workshop https://sites.google.com/view/neverendingrl as a spotlight) in the repo?
Paper:
Continuous Coordination As a Realistic Scenario for Lifelong Learning https://arxiv.org/pdf/2103.03216.pdf
Paper name/title: MRN: Multiplexed Routing Network for Incremental Multilingual Text Recognition
Paper link: https://arxiv.org/abs/2305.14758
Code link: https://github.com/simplify23/MRN
Blog in Chinese: https://zhuanlan.zhihu.com/p/643948935
keywords: OCR, Incremental learning
Hi, could you add some incremental learning papers published in the ICML2022? For example, please consider our paper "Online Continual Learning through Mutual Information Maximization" (https://proceedings.mlr.press/v162/guo22g.html).
There are many good papers related to incremental learning in WACV 2022; we should include them in the repository.
Hi,
Thank you for maintaining the list on continual learning.
Would you please add the following paper that is published in AAAI 2021?
Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network, AAAI 2021
https://arxiv.org/abs/2107.01349
https://github.com/bigdata-inha/Split-and-Bridge
Kind Regards,
Don
Are there any incremental learning papers that do not use the old task data?
I published it on GitHub a few days ago: gaenari.
Continuous chunk data can be inserted and updated, and rebuilds can be run if concept drift reduces accuracy.
See README.md for details.
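The insert/rebuild loop described above can be sketched roughly as follows. Note that every name here (`DriftAwareModel`, `insert_chunk`, and the toy majority-label "model") is a hypothetical illustration of the idea, not gaenari's actual API:

```python
# Sketch: score each incoming chunk against the current model first
# (prequential evaluation); if accuracy falls below a threshold,
# suspect concept drift and rebuild from the retained window of chunks.
from collections import deque

class DriftAwareModel:
    """Keeps a sliding window of labeled chunks; rebuilds when accuracy drops."""

    def __init__(self, rebuild_threshold=0.7, window=5):
        self.rebuild_threshold = rebuild_threshold
        self.chunks = deque(maxlen=window)   # recent (x, y) chunks
        self.majority = None                 # toy "model": predict majority label

    def _fit(self):
        labels = [y for chunk in self.chunks for _, y in chunk]
        self.majority = max(set(labels), key=labels.count) if labels else None

    def _score(self, chunk):
        if self.majority is None:
            return None
        return sum(1 for _, y in chunk if y == self.majority) / len(chunk)

    def insert_chunk(self, chunk):
        """Insert a new chunk; rebuild if accuracy on it suggests drift."""
        acc = self._score(chunk)
        self.chunks.append(chunk)
        if acc is not None and acc < self.rebuild_threshold:
            self._fit()                      # concept drift suspected: rebuild
            return "rebuilt"
        if self.majority is None:
            self._fit()                      # first chunk: initial fit
        return "inserted"

model = DriftAwareModel()
print(model.insert_chunk([(x, "a") for x in range(10)]))   # inserted
print(model.insert_chunk([(x, "b") for x in range(10)]))   # rebuilt (labels shifted a -> b)
```

The majority-label predictor stands in for whatever classifier the library actually maintains; only the insert/score/rebuild control flow is the point of the sketch.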
Hi Xialei Liu,
Thank you for maintaining such a great repo for continual learning literature!
Could you please add the following paper that was published in PR 2024?
Paper: Introspective GAN: Learning to Grow a GAN for Incremental Generation and Classification
Url: https://www.sciencedirect.com/science/article/abs/pii/S0031320324001341
Code: https://github.com/TonyPod/IntroGAN
Thanks for your consideration!
Thanks for contributing such a great repository!
Would you please add the following paper published at AAAI 2024?
Paper title: MIND: Multi-Task Incremental Network Distillation
Paper link: https://arxiv.org/abs/2312.02916
Code link: https://github.com/Lsabetta/MIND
keywords: Replay-free Class Incremental learning
Thanks for your consideration!
I want to find some papers related to streaming incremental learning.
Here is a paper published at KDD 2019.
Hi,
could you please add the ICCV workshop paper: TKIL: Tangent Kernel Optimization for Class Balanced Incremental Learning. link to paper: https://openaccess.thecvf.com/content/ICCV2023W/VCL/papers/Xiang_TKIL_Tangent_Kernel_Optimization_for_Class_Balanced_Incremental_Learning_ICCVW_2023_paper.pdf
thanks!
https://ai-paper-collector.vercel.app/
(https://github.com/MLNLP-World/AI-Paper-collector)
such as:
[AAAI2022] Adaptive Orthogonal Projection for Batch and Online Continual Learning
[AAAI2022] Same State, Different Task: Continual Reinforcement Learning without Interference
[AAAI2022] Continual Learning through Retrieval and Imagination
[ACL2022] Continual Prompt Tuning for Dialog State Tracking
[ACL2022] Tackling Fake News Detection by Continually Improving Social Context Representations using Graph Neural Networks
[ACL2022] Overcoming Catastrophic Forgetting beyond Continual Learning: Balanced Training for Neural Machine Translation
[ACL2022] Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation
[ACL2022] ConTinTin: Continual Learning from Task Instructions
[ACL2022] On Continual Model Refinement in Out-of-Distribution Data Streams
[ACL2022] Continual Sequence Generation with Adaptive Compositional Modules
[ACL2022] Continual Pre-training of Language Models for Math Problem Understanding with Syntax-Aware Memory Network
[ACL2022] Hierarchical Inductive Transfer for Continual Dialogue Learning
[ACL2022] Learn and Review: Enhancing Continual Named Entity Recognition via Reviewing Synthetic Samples
[ACL2022] Consistent Representation Learning for Continual Relation Extraction
[COLING2022] Continual Few-shot Intent Detection
[COLING2022] Improving Continual Relation Extraction through Prototypical Contrastive Learning
[COLING2022] Continually Detection, Rapidly React: Unseen Rumors Detection Based on Continual Prompt-Tuning
[ICLR2022] CoMPS: Continual Meta Policy Search
[ICLR2022] Continual Normalization: Rethinking Batch Normalization for Online Continual Learning
[ICLR2022] Towards Continual Knowledge Learning of Language Models
[ICLR2022] Information-theoretic Online Memory Selection for Continual Learning
[ICLR2022] Pretrained Language Model in Continual Learning: A Comparative Study
[ICLR2022] CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability
[ICLR2022] Model Zoo: A Growing Brain That Learns Continually
[ICLR2022] Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System
[ICLR2022] Learning curves for continual learning in neural networks: Self-knowledge transfer and forgetting
[ICLR2022] New Insights on Reducing Abrupt Representation Change in Online Continual Learning
[ICLR2022] Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference
[ICLR2022] Online Coreset Selection for Rehearsal-based Continual Learning
[ICLR2022] Memory Replay with Data Compression for Continual Learning
[ICLR2022] Representational Continuity for Unsupervised Continual Learning
[ICLR2022] Continual Learning with Filter Atom Swapping
[ICLR2022] Continual Learning with Recursive Gradient Optimization
[ICLR2022] TRGP: Trust Region Gradient Projection for Continual Learning
[ICME2022] Attention Distraction: Watermark Removal Through Continual Learning with Selective Forgetting
[ICME2022] Continual Contrastive Learning for Image Classification
[ICML2022] VariGrow: Variational Architecture Growing for Task-Agnostic Continual Learning based on Bayesian Novelty
[ICML2022] Online Continual Learning through Mutual Information Maximization
[ICML2022] NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks
[ICML2022] Forget-free Continual Learning with Winning Subnetworks
[ICML2022] Continual Repeated Annealed Flow Transport Monte Carlo
[ICML2022] Continual Learning via Sequential Function-Space Variational Inference
[ICML2022] Improving Task-free Continual Learning by Distributionally Robust Memory Evolution
[ICML2022] Continual Learning with Guarantees via Weight Interval Constraints
[IJCAI2022] Continual Semantic Segmentation Leveraging Image-level Labels and Rehearsal
[IJCAI2022] Continual Federated Learning Based on Knowledge Distillation
[IJCAI2022] CERT: Continual Pre-training on Sketches for Library-oriented Code Generation
[IJCAI2022] Multiband VAE: Latent Space Alignment for Knowledge Consolidation in Continual Learning
[IJCAI2022] Learning from Students: Online Contrastive Distillation Network for General Continual Learning
It seems most of the papers focus on class-incremental learning; are there any good papers on task-incremental learning? For example, extending an object detection model with a new capability, say segmentation.
Hi Xialei,
Thanks for gathering all the papers. It is quite helpful.
Could you add our recent paper "GraphSAIL: Graph Structure Aware Incremental Learning for Recommender Systems" which has been recently accepted by CIKM 2020: https://arxiv.org/abs/2008.13517.
Thank you,
Yingxue
Hi Xialei Liu,
Thanks for the useful resource! I wanted to suggest a recent paper on incremental learning published in Nature Communications: “Brain-inspired replay for continual learning with artificial neural networks” (https://www.nature.com/articles/s41467-020-17866-2)
And there’s code here: https://github.com/GMvandeVen/brain-inspired-replay
Thanks,
Gido
Hi xialeiliu,
I am new to this area, and this question has confused me for a long time:
What are the differences among the three concepts: incremental learning, continual learning, and lifelong learning?
It seems that "continual learning" and "lifelong learning" are more commonly used in the deep learning field, while "incremental learning" is more commonly used in big data processing. But it also seems that they address the same question in machine learning: overcoming catastrophic forgetting without access to old data.
For deep learning, continual learning and lifelong learning were perhaps first proposed in a paper, but the issue was already observed in early (non-deep) neural network research, and it is also widely relevant in many other areas.
What's your opinion on this question?
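Whatever the terminology, the shared problem all three terms point at can be shown with a deterministic toy example: a single scalar weight trained by gradient descent on task A, then on task B without revisiting task A's data, after which task A's loss rises again. This is a minimal illustration I wrote for the question, not drawn from any of the listed papers:

```python
# Catastrophic forgetting in one scalar parameter:
# task A has its optimum at w = +1, task B at w = -1.

def loss(w, target):
    return (w - target) ** 2

def train(w, target, steps=100, lr=0.1):
    for _ in range(steps):
        w -= lr * 2 * (w - target)   # gradient of (w - target)^2
    return w

w = 0.0
w = train(w, target=+1.0)            # train on task A
loss_A_after_A = loss(w, +1.0)       # near 0: task A learned

w = train(w, target=-1.0)            # train on task B, no task A data
loss_A_after_B = loss(w, +1.0)       # near 4: task A forgotten

print(loss_A_after_A < 1e-6)         # True
print(loss_A_after_B > 3.9)          # True
```

Incremental, continual, and lifelong learning methods all try to keep `loss_A_after_B` low without re-training on task A's data.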
Hi, thank you for maintaining such a great repo!
We have released our code of the NeurIPS 2023 paper "A Unified Approach to Domain Incremental Learning with Memory: Theory and Algorithm" at https://github.com/Wang-ML-Lab/unified-continual-learning.
Could you please kindly update the readme? Thank you so much in advance!
Hi Xialei,
Thanks so much for contributing such a great repository!
Could you consider adding the paper "A Survey on Incremental Update for Neural Recommender Systems"?
In practical recommender systems, incremental updating is a very important subject. For researchers who are studying RecSys it will be helpful.
Paper: https://arxiv.org/abs/2303.02851
Thanks for your consideration!
Hi @xialeiliu,
First of all, thank you for maintaining the repository - it was super helpful for me when I was getting up to speed with the topic!
Please, consider adding our recent paper accepted/published at ECML PKDD 2021: Streaming Decision Trees for Lifelong Learning (https://link.springer.com/chapter/10.1007/978-3-030-86486-6_31). It's an alternative approach to CL using hybridization of deep learning and decision trees.
Paper: https://2021.ecmlpkdd.org/wp-content/uploads/2021/07/sub_1050.pdf
Code: https://github.com/lkorycki/lldt
Best,
Lukasz
Hi @xialeiliu, nice list you have here! 💯 😄
At http://continualai.org we have an ongoing project for an open-source collaborative Wiki on Incremental/Continual Learning: https://github.com/ContinualAI/wiki
It would be great to have you in the community and join forces on this common endeavor!
Best,
Vincenzo
Thank you for the great work! I have gotten so much help from this repository while conducting research on continual learning.
I kindly suggest you to add AAAI2024 paper on Class-Incremental Learning: Cross-Class Feature Augmentation for Class Incremental Learning (Link: https://arxiv.org/abs/2304.01899)
Thank you for consideration!
Thanks for contributing such a great repository!
Would you please add the following paper published at ICCV 2023?
Paper title: Prototype Reminiscence and Augmented Asymmetric Knowledge Aggregation for Non-Exemplar Class-Incremental Learning
Paper link: https://openaccess.thecvf.com/content/ICCV2023/html/Shi_Prototype_Reminiscence_and_Augmented_Asymmetric_Knowledge_Aggregation_for_Non-Exemplar_Class-Incremental_ICCV_2023_paper.html
Code link: https://github.com/ShiWuxuan/PRAKA
keywords: Prototype, Class Incremental learning
Thanks for your consideration!
Hi Xialei Liu,
Thanks for gathering all the papers. It is quite helpful.
I find this interesting paper named "Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay" focusing on continual learning.
Arxiv Link: https://arxiv.org/abs/1903.04566
Best,
Jiahui Cui
Hi, thanks for maintaining this up-to-date repository, I think its the go-to place for anyone interested in incremental learning.
Please add my latest work accepted in WACV 2022 : Multi-Domain Incremental Learning for Semantic Segmentation
https://arxiv.org/abs/2110.12205
Code: https://github.com/prachigarg23/MDIL-SS
It's the first attempt at domain incremental learning for semantic segmentation.
Thank you for contributing to such a great repository for continual learning literature.
Could you please add code to the following paper that was published in ICCV 2023?
Paper: Exemplar-Free Continual Transformer with Convolutions
Code: https://github.com/CVIR/contracon
Also, could you please add the following paper and code that has been accepted in CVPR 2024?
Paper: https://arxiv.org/pdf/2403.20317
Code: https://github.com/CVIR/ConvPrompt
Thanks for your consideration!
Hi Xialei,
Thanks so much for contributing such a great repository! Could you consider adding the paper "Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection" at COLING'22? It proposed an episodic memory prompting framework for continual event detection.
Paper: https://arxiv.org/abs/2204.07275
Code: https://github.com/VT-NLP/Incremental_Prompting
Thanks for your consideration!
LAMOL: LANGUAGE MODELING FOR LIFELONG LANGUAGE LEARNING https://openreview.net/pdf?id=Skgxcn4YDS
Distill and Replay for Continual Language Learning https://www.aclweb.org/anthology/2020.coling-main.318.pdf
Hi,
Thanks for contributing such a great repository for incremental learning.
Would you please add the following paper published at CVPR 2023?
Paper:
Decoupling Learning and Remembering: A Bilevel Memory Framework With Knowledge Projection for Task-Incremental Learning
Code:
https://github.com/SunWenJu123/BMKP
Thanks again!
Hi, I would like to suggest adding our lab's survey paper "Replay in Deep Learning: Current Approaches and Missing Biological Elements"
It was published in Neural Computation (2021). We discuss how replay happens in biological networks and compare it to how replay is implemented for continual learning in artificial networks. We then discuss how the two differ. Thank you in advance!
arXiv: https://arxiv.org/abs/2104.04132
Neural Computation: https://direct.mit.edu/neco/article-abstract/33/11/2908/107071/Replay-in-Deep-Learning-Current-Approaches-and?redirectedFrom=fulltext
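For readers new to the artificial-network side of that comparison, rehearsal-style replay is often implemented as a small buffer of past examples mixed into each new-task batch, commonly filled by reservoir sampling so it stays a roughly uniform sample of the stream. The sketch below is a generic illustration under those assumptions, not the specific method of any paper above:

```python
# Minimal experience-replay buffer with reservoir sampling.
import random

class ReplayBuffer:
    def __init__(self, capacity=100, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        """Reservoir sampling: each seen example ends up stored with prob capacity/seen."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

buf = ReplayBuffer(capacity=4)
for x in range(20):                  # stream of 20 old-task examples
    buf.add(x)

batch_new = [100, 101]               # current-task mini-batch
batch = batch_new + buf.sample(2)    # rehearse 2 old examples alongside new ones
print(len(batch))                    # 4
```

Training on such mixed batches keeps old tasks receiving gradient signal, which is the basic mechanism the survey contrasts with replay in biological networks.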
Thanks for the resource.
Here's my NeurIPS2019 paper
http://papers.neurips.cc/paper/9357-online-continual-learning-with-maximal-interfered-retrieval
It would be great if you could add it :)
Hi Xialei Liu,
You can consider including a recent paper on incremental learning in NeurIPS'19:
“Random Path Selection for Incremental Learning,” Advances in Neural Information Processing Systems, (NeurIPS), Vancouver, Canada, 2019.
Arxiv Link: https://arxiv.org/abs/1906.01120
Thanks.
Hi @xialeiliu ,
Thanks for maintaining the repository, always a great resource to dive into recent advancements.
Can you please add the following paper published at WACV 2022?
Paper:
Dataset Knowledge Transfer for Class-Incremental Learning without Memory https://arxiv.org/pdf/2110.08421.pdf
Code:
https://github.com/HabibSlim/DKT-for-CIL/
Thanks in advance!