A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning

A supervised learning algorithm (Scaled Conjugate Gradient, SCG) is introduced. The performance of SCG is benchmarked against that of the standard back propagation algorithm (BP) (Rumelhart, Hinton, & Williams, 1986), the conjugate gradient algorithm with line search (CGL) (Johansson, Dowla, & Goodman, 1990), and the one-step Broyden-Fletcher-Goldfarb-Shanno memoryless quasi-Newton algorithm (BFGS) (Battiti, 1990). SCG is fully automated, includes no critical user-dependent parameters, and avoids a time-consuming line search, which CGL and BFGS use in each iteration in order to determine an appropriate step size. Experiments show that SCG is considerably faster than BP, CGL, and BFGS.
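To illustrate the per-iteration line search that the abstract says CGL and BFGS require (and that SCG avoids), the following is a minimal sketch of a conjugate gradient iteration on a toy objective. It is not taken from the paper; the objective, variable names, and the Polak-Ribière direction update are assumptions chosen for the example.

```python
# Sketch (not the paper's algorithm): conjugate gradient with a line search.
# Each iteration spends extra function/gradient evaluations inside
# line_search() just to choose the step size alpha -- the cost SCG avoids.
import numpy as np
from scipy.optimize import line_search

def rosenbrock(w):
    return 100.0 * (w[1] - w[0] ** 2) ** 2 + (1.0 - w[0]) ** 2

def rosenbrock_grad(w):
    return np.array([
        -400.0 * w[0] * (w[1] - w[0] ** 2) - 2.0 * (1.0 - w[0]),
        200.0 * (w[1] - w[0] ** 2),
    ])

w = np.array([-1.2, 1.0])   # current weight vector
g = rosenbrock_grad(w)      # current gradient
p = -g                      # initial search direction (steepest descent)

for _ in range(50):
    # Line search along p to determine an appropriate step size.
    alpha, *_ = line_search(rosenbrock, rosenbrock_grad, w, p, gfk=g)
    if alpha is None:
        break
    w = w + alpha * p
    g_new = rosenbrock_grad(w)
    # Polak-Ribiere update of the conjugate direction (with restart).
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))
    p = -g_new + beta * p
    g = g_new

print("minimum found near", w)
```

In contrast, SCG replaces this search with an explicitly computed, scaled step size, so each iteration needs only a fixed, small number of gradient evaluations.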