Incorporating input information into learning and augmented objective functions

In many applications, some form of input information, such as test inputs or extra inputs, is available. We incorporate input information into learning by means of an augmented error function, which is an estimator of the out-of-sample error. The augmented error consists of the training error plus an additional term scaled by the augmentation parameter. For general linear models, we show analytically that the augmented solution has smaller out-of-sample error than the least squares solution. For nonlinear models, we devise an algorithm that minimizes the augmented error by gradient descent, determining the augmentation parameter by cross validation.

Augmented objective functions also arise when hints are incorporated into learning. We first show that using invariance hints to estimate the test error, and early stopping on this estimator, results in better solutions than minimization of the training error. We also extend our algorithm for incorporating input information to the case of learning from hints.

Input information and hints are additional information about the target function. When the only available information is the training set, all models with the same training error are equally likely to be the target. In that case, we show that early stopping of training at any training error level above the minimum cannot decrease the out-of-sample error. Our results are nonasymptotic for general linear models and the bin model, and asymptotic for nonlinear models. When additional information is available, early stopping can help.
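As a concrete illustration of the procedure described above, here is a minimal Python sketch of minimizing an augmented error E_aug(w) = E_train(w) + alpha * (additional term) by gradient descent, with alpha chosen by cross validation. The particular additional term below (the mean squared model output on the unlabeled extra inputs) and the names augmentation_term, minimize_augmented_error, and choose_alpha_by_cv are illustrative assumptions, not the thesis's actual formulas.

```python
import numpy as np

def training_error(w, X, y):
    # Mean squared error on the labeled training set.
    return np.mean((X @ w - y) ** 2)

def augmentation_term(w, X_extra):
    # Hypothetical additional term built from unlabeled extra inputs:
    # mean squared model output on those inputs. The thesis's actual
    # term differs; this is only a placeholder with the right shape.
    return np.mean((X_extra @ w) ** 2)

def augmented_error(w, X, y, X_extra, alpha):
    # E_aug(w) = E_train(w) + alpha * (additional term).
    return training_error(w, X, y) + alpha * augmentation_term(w, X_extra)

def minimize_augmented_error(X, y, X_extra, alpha, lr=0.01, steps=2000):
    # Plain gradient descent on the augmented error; the gradient is
    # written out for the squared losses used above.
    w = np.zeros(X.shape[1])
    n, m = len(X), len(X_extra)
    for _ in range(steps):
        grad = (2 / n) * X.T @ (X @ w - y) \
             + alpha * (2 / m) * X_extra.T @ (X_extra @ w)
        w -= lr * grad
    return w

def choose_alpha_by_cv(X, y, X_extra, alphas, k=5):
    # k-fold cross validation over a grid of augmentation parameters:
    # pick the alpha whose augmented solutions generalize best to the
    # held-out folds.
    folds = np.array_split(np.random.permutation(len(X)), k)
    best_alpha, best_err = None, np.inf
    for alpha in alphas:
        err = 0.0
        for hold in folds:
            train = np.setdiff1d(np.arange(len(X)), hold)
            w = minimize_augmented_error(X[train], y[train], X_extra, alpha)
            err += training_error(w, X[hold], y[hold])
        if err < best_err:
            best_alpha, best_err = alpha, err
    return best_alpha
```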
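The early-stopping result in the final paragraph also admits a short sketch: train on the training error, but monitor an estimator of the test error built from the additional information, and keep the iterate that minimizes it. The estimator itself (for instance, an invariance-hint error measured on transformed inputs) is problem-specific and not reproduced here; train_step, estimate_test_error, and patience are hypothetical names, assumed for illustration.

```python
import numpy as np

def early_stop_on_estimator(train_step, estimate_test_error, w0,
                            max_steps=10000, patience=50):
    # Generic early stopping: keep training while a hint-based estimator
    # of the out-of-sample error improves; return the best iterate seen.
    w, best_w = w0.copy(), w0.copy()
    best_est, since_best = np.inf, 0
    for _ in range(max_steps):
        w = train_step(w)              # one descent step on the training error
        est = estimate_test_error(w)   # hint-based estimate of the test error
        if est < best_est:
            best_est, best_w, since_best = est, w.copy(), 0
        else:
            since_best += 1
            if since_best >= patience:  # no improvement for a while: stop
                break
    return best_w
```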