Continual learning is a crucial capability for advancing artificial intelligence, yet it faces a significant challenge known as catastrophic forgetting: models lose previously acquired knowledge when learning new tasks. While existing methods offer partial remedies, the impact of a model's architecture on forgetting remains largely unexplored. This paper examines Residual Networks (ResNets) to evaluate how changes in depth, width, and connectivity influence continual learning. It introduces a simplified design tailored specifically for continual learning and compares its efficiency against established ResNets. Through an in-depth exploration of the proposed architecture's configuration, the paper explains the rationale behind its design decisions and evaluates the model against a diverse set of metrics to identify both strengths and areas for improvement. Overall, the paper sheds light on how architectural choices affect a model's ability to learn over time, with the goal of fostering AI systems that can learn continuously without the detrimental effects of forgetting. Achieving accuracies ranging from 62.52% to 90.39% across various tasks, the proposed model demonstrates its effectiveness in real-world continual learning scenarios.
its-rahul-cloud / effcient-and-effective-achitecture-in-continual-learning-through-various-resnets--
This repository was created based on research work on the effect of different ResNet architectures on continual learning.
License: Apache License 2.0
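
The description above centers on varying ResNet depth, width, and connectivity for continual-learning experiments. The following is a minimal PyTorch sketch of what such a configurable architecture sweep could look like; it is an illustration under assumed names (`BasicBlock`, `ConfigurableResNet`, `blocks_per_stage`, `base_width`), not the repository's actual implementation or the paper's proposed model.

```python
# Minimal sketch (assumption: PyTorch; not the authors' actual code) of a ResNet
# whose depth and width can be varied for continual-learning architecture sweeps.
import torch
import torch.nn as nn


class BasicBlock(nn.Module):
    """Standard residual block: two 3x3 convolutions plus a skip connection."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, stride=1, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # Projection shortcut when the shape changes (one knob on "connectivity").
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + self.shortcut(x))


class ConfigurableResNet(nn.Module):
    """ResNet with tunable depth (blocks per stage) and width (base channels)."""

    def __init__(self, blocks_per_stage=(2, 2, 2), base_width=16, num_classes=10):
        super().__init__()
        widths = [base_width, base_width * 2, base_width * 4]
        self.stem = nn.Sequential(
            nn.Conv2d(3, base_width, 3, padding=1, bias=False),
            nn.BatchNorm2d(base_width),
            nn.ReLU(inplace=True),
        )
        stages, in_ch = [], base_width
        for width, n_blocks in zip(widths, blocks_per_stage):
            for i in range(n_blocks):
                # Downsample at the start of every stage after the first.
                stride = 2 if (i == 0 and width != base_width) else 1
                stages.append(BasicBlock(in_ch, width, stride))
                in_ch = width
        self.stages = nn.Sequential(*stages)
        self.head = nn.Linear(in_ch, num_classes)

    def forward(self, x):
        out = self.stages(self.stem(x))
        out = nn.functional.adaptive_avg_pool2d(out, 1).flatten(1)
        return self.head(out)


# Example sweep: a shallow/wide variant versus a deeper/narrower one, both of
# which could then be trained sequentially on a stream of tasks.
shallow_wide = ConfigurableResNet(blocks_per_stage=(1, 1, 1), base_width=64)
deep_narrow = ConfigurableResNet(blocks_per_stage=(3, 3, 3), base_width=16)
```

Instantiating several such variants and training each on the same task sequence is one way to isolate how depth and width affect forgetting, independent of the continual-learning method applied on top.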