Research Progress of Continual Learning
Graphical Abstract
Abstract
In recent years, with the continuous development of information technology, data of all kinds have grown explosively. Traditional machine learning algorithms perform well only when the test data and the training data follow similar distributions; in other words, they cannot learn continuously and adaptively in a dynamic environment. Yet this ability to adapt to a changing environment is essential for any intelligent system. Deep neural networks have shown strong learning ability in many applications, but when their parameters are updated incrementally, the model suffers from catastrophic interference, or forgetting: after learning a new task, it forgets previously acquired knowledge. Research on continual learning aims to alleviate this problem. Continual learning mimics the learning process of the brain: the model learns from a stream of non-independent and non-identically distributed data presented in a certain order, and is updated incrementally task by task. The significance of continual learning lies in efficiently transferring and reusing previously learned knowledge to master new tasks, while greatly reducing the damage caused by forgetting. The study of continual learning is therefore of great importance for intelligent computing systems that must adapt to changes in their environment. In view of the application value, theoretical significance, and future development potential of continual learning, this article systematically reviews its research progress. First, the paper outlines the definition of continual learning and introduces three representative models: learning without forgetting (LwF), elastic weight consolidation (EWC), and gradient episodic memory (GEM). It then discusses the key problems of continual learning and their solutions.
After that, it surveys methods in three categories: regularization-based methods, dynamic-architecture methods, and memory-replay methods inspired by complementary learning systems. Finally, the paper points out open challenges and future research directions in the field of continual learning.
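To make the regularization-based idea concrete, the following is a minimal sketch (an illustrative assumption, not the code of any cited paper) of the elastic weight consolidation (EWC) penalty mentioned above. After learning task A, EWC anchors each parameter to its old value, weighted by an estimate of that parameter's importance for task A (the diagonal Fisher information), so that training on task B changes important parameters as little as possible:

```python
def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Quadratic EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current parameters while training on the new task B
    theta_star -- parameters learned on the old task A
    fisher     -- diagonal Fisher information, one importance weight per parameter
    lam        -- trade-off between old and new tasks (hypothetical name)
    """
    return 0.5 * lam * sum(
        f * (t - ts) ** 2 for t, ts, f in zip(theta, theta_star, fisher)
    )

# The total loss on task B would be: loss_B + ewc_penalty(theta, theta_star, fisher).
# Toy example: only the first parameter moved (1.0 vs 0.5), and it is important (F=4.0).
penalty = ewc_penalty(theta=[1.0, 2.0], theta_star=[0.5, 2.0], fisher=[4.0, 1.0])
print(penalty)  # 0.5 * (4.0 * 0.25 + 1.0 * 0.0) = 0.5
```

In this sketch, a parameter with high Fisher information is strongly pulled back toward its task-A value, while unimportant parameters remain free to adapt to the new task.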