Generalization in a linear perceptron in the presence of noise

A Krogh and J A Hertz

Published under licence by IOP Publishing Ltd
Citation: A Krogh and J A Hertz 1992 J. Phys. A: Math. Gen. 25 1135 DOI 10.1088/0305-4470/25/5/020


Abstract

The authors study the evolution of the generalization ability of a simple linear perceptron with N inputs which learns to imitate a 'teacher perceptron'. The system is trained on p = alpha N example inputs drawn from some distribution, and the generalization ability is measured by the average agreement with the teacher on test examples drawn from the same distribution. The dynamics can be solved analytically and exhibits a phase transition from imperfect to perfect generalization at alpha = 1 when there are no errors (static noise) in the training examples. If the examples are produced by an erroneous teacher, overfitting is observed, i.e. the generalization error starts to increase after a finite time of training. It is shown that a weight decay of the same size as the variance of the noise (errors) on the teacher improves the generalization and suppresses the overfitting. The generalization error as a function of time is calculated numerically for various values of the parameters. Finally, dynamic noise in the training is considered. White noise on the input corresponds on average to a weight decay, and can thus improve generalization, whereas white noise on the weights or the output degrades generalization. Generalization is particularly sensitive to noise on the weights (for alpha < 1), where it makes the error increase steadily with time, but this effect is also shown to be damped by a weight decay. Weight noise and output noise act similarly above the transition at alpha = 1.
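The setup described in the abstract can be illustrated numerically. The following is a minimal sketch (not the authors' code, and with hypothetical parameter values such as N, alpha, the noise variance and the learning rate): a linear perceptron is trained by gradient descent on p = alpha N examples whose targets come from a teacher corrupted by static output noise, and its generalization error is tracked over training time, once without and once with a weight decay set equal to the noise variance.

```python
import numpy as np

# Hypothetical simulation parameters (illustrative only, not from the paper).
N = 200            # number of inputs
alpha = 0.6        # p = alpha * N training examples (below the transition at alpha = 1)
p = int(alpha * N)
sigma2 = 0.2       # variance of the static noise on the teacher's outputs
lam = sigma2       # weight decay of the same size as the noise variance
eta = 0.05         # gradient-descent learning rate
steps = 2000

rng = np.random.default_rng(0)

# Teacher weights and a training set with noisy teacher outputs.
w_teacher = rng.normal(size=N) / np.sqrt(N)
X = rng.normal(size=(p, N))
y = X @ w_teacher + rng.normal(scale=np.sqrt(sigma2), size=p)

def generalization_error(w):
    # For i.i.d. unit-variance inputs, the average squared disagreement with
    # the teacher equals the squared norm of the weight difference.
    return np.sum((w - w_teacher) ** 2)

def train(weight_decay):
    w = np.zeros(N)
    errors = []
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / p + weight_decay * w
        w -= eta * grad
        errors.append(generalization_error(w))
    return errors

err_plain = train(0.0)   # no decay: error passes through a minimum, then rises (overfitting)
err_decay = train(lam)   # decay ~ noise variance: overfitting is suppressed

print(f"no decay:   min {min(err_plain):.3f}, final {err_plain[-1]:.3f}")
print(f"with decay: min {min(err_decay):.3f}, final {err_decay[-1]:.3f}")
```

In this sketch the gap between the minimum and final error without weight decay is the overfitting effect described above; with the decay term the final error stays close to its minimum.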

