Analogous to human learning, neural networks should be able to acquire new knowledge without forgetting what they have already learned. With traditional approaches to neural learning this is nearly impossible, because each new round of training overwrites the existing connection weights, erasing the previous knowledge (a problem known as catastrophic forgetting). Thus a need arises to make neural networks adapt to new knowledge by incrementing, rather than replacing, their current knowledge. To address this issue, we are going to explore one method called adaptive resonance theory (ART).
The question that drove the development of this theory was: How can an adaptive system remain plastic enough to learn from significant inputs, yet stable enough to ignore irrelevant ones? In other words: How can it retain previously learned information while learning new information?
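To make this stability/plasticity trade-off concrete, here is a minimal sketch of an ART1-style clustering loop for binary patterns. The vigilance parameter `rho` controls the trade-off: an input that resonates closely enough with an existing category prototype refines it, while an input that fails the vigilance test for every category creates a new one. This is a simplified illustration (the function name, the choice ordering by raw overlap, and the omission of details such as complement coding are assumptions, not the full theory):

```python
import numpy as np

def art1_train(patterns, rho=0.7):
    """Simplified ART1-style clustering of binary patterns.

    rho (vigilance) sets how closely an input must match a category
    prototype to be absorbed by it instead of founding a new category.
    """
    categories = []    # learned binary prototype vectors
    assignments = []   # category index chosen for each pattern
    for x in patterns:
        x = np.asarray(x, dtype=int)
        # Try candidate categories in order of overlap with the input.
        order = sorted(range(len(categories)),
                       key=lambda j: -np.sum(categories[j] & x))
        chosen = None
        for j in order:
            match = np.sum(categories[j] & x) / max(np.sum(x), 1)
            if match >= rho:
                # Resonance: refine the prototype toward the input
                # (fast learning keeps only shared features).
                categories[j] &= x
                chosen = j
                break
        if chosen is None:
            # No category passed the vigilance test: learn a new one,
            # so old prototypes remain stable (no forgetting).
            categories.append(x.copy())
            chosen = len(categories) - 1
        assignments.append(chosen)
    return categories, assignments
```

For example, with `rho=0.6` the patterns `[1,1,0,0]` and `[1,1,1,0]` resonate with the same category, while `[0,0,1,1]` fails the vigilance test and spawns a second category; raising `rho` toward 1 produces more, finer-grained categories.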
We've seen that competitive learning in unsupervised learning deals with pattern recognition, whereby similar inputs yield similar...