
December 21, 2018 | Neural networks will soon be capable of incremental learning

A method based on a model of human memory has made it possible for neural networks to learn incrementally. This advance will open the door to new possibilities in the field of autonomous systems.

Neural networks are very powerful for deep learning applications. However, they are not particularly well-suited to incremental learning. Currently, when a neural network learns a new piece of information, old information is overwritten. A solution to this "catastrophic forgetting" would make neural networks more operational in autonomous systems running in constantly-changing environments.

Researchers at Leti, a CEA Tech institute, worked with the cognitive neuropsychology laboratory LPNC, which has been developing a model of human memory since the 1990s, and fellow CEA Tech institute List*, developer of an artificial neural network simulation tool called N2D2, to come up with a model that could be a game changer. The model re-learns all information (old and new) together using two neural networks, eliminating the need to save old information to an external memory, which would otherwise drastically increase memory requirements.

Here's how it works: The first network is presented with alternating true examples that correspond to the new information being learned and "pseudo examples" that are generated by the second network. These "pseudo examples" represent what has already been learned and are used to "refresh" the first network's memory, so to speak. The primary advantages of the method are that it does not limit network plasticity and does not require additional memory.
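The alternating scheme described above can be sketched in code. The toy script below illustrates the pseudo-rehearsal idea only; it is not the Leti/List implementation or N2D2. The tiny one-hidden-layer network, the sine-wave "old" and "new" tasks, and all hyperparameters are assumptions chosen for demonstration: a frozen copy of the trained network plays the role of the second network, labelling randomly drawn inputs to produce "pseudo examples" that are interleaved with true examples of the new task.

```python
import math
import random

random.seed(0)

H = 10     # hidden units in the toy network (illustrative choice)
LR = 0.05  # SGD learning rate (illustrative choice)

def init_net():
    """A small 1-input / 1-output MLP with one tanh hidden layer."""
    return {"w1": [random.uniform(-1, 1) for _ in range(H)],
            "b1": [random.uniform(-1, 1) for _ in range(H)],
            "w2": [random.uniform(-1, 1) for _ in range(H)],
            "b2": 0.0}

def forward(net, x):
    h = [math.tanh(net["w1"][j] * x + net["b1"][j]) for j in range(H)]
    return sum(net["w2"][j] * h[j] for j in range(H)) + net["b2"], h

def sgd_step(net, x, target):
    """One gradient step on squared error (manual backprop)."""
    y, h = forward(net, x)
    dy = 2.0 * (y - target)
    for j in range(H):
        da = dy * net["w2"][j] * (1.0 - h[j] ** 2)  # use pre-update w2
        net["w2"][j] -= LR * dy * h[j]
        net["w1"][j] -= LR * da * x
        net["b1"][j] -= LR * da
    net["b2"] -= LR * dy

def clone(net):
    return {"w1": net["w1"][:], "b1": net["b1"][:],
            "w2": net["w2"][:], "b2": net["b2"]}

target_fn = math.sin               # ground truth for both tasks
OLD, NEW = (0.0, 1.5), (3.0, 4.5)  # disjoint input regions

# Phase 1: learn the "old" task normally.
net = init_net()
for _ in range(20000):
    x = random.uniform(*OLD)
    sgd_step(net, x, target_fn(x))

# Freeze a copy: it generates pseudo examples encoding old knowledge.
generator = clone(net)

# Phase 2: learn the "new" task, alternating true examples with
# pseudo examples labelled by the frozen generator network.
for _ in range(20000):
    x_new = random.uniform(*NEW)
    sgd_step(net, x_new, target_fn(x_new))              # true example
    x_old = random.uniform(*OLD)
    sgd_step(net, x_old, forward(generator, x_old)[0])  # pseudo example

def mae(net, lo, hi, n=200):
    """Mean absolute error against the ground truth on [lo, hi]."""
    xs = [lo + (hi - lo) * i / n for i in range(n)]
    return sum(abs(forward(net, x)[0] - target_fn(x)) for x in xs) / n

old_err = mae(net, *OLD)  # stays small: old knowledge is refreshed
new_err = mae(net, *NEW)  # small: new knowledge is acquired
```

Without the pseudo-example steps in phase 2, the network's predictions on the old region would drift, which is precisely the catastrophic forgetting the method avoids; no buffer of old training data is kept, only the frozen network's parameters.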

*List earned the prestigious Institut Carnot seal in 2006 (Institut Carnot TN@UPSaclay).