Realistic world of limited-sample neural network applications: how to proceed on a firm methodological foundation with small-n

Improving evaluation in small-sample, or small-n, applications may depend heavily on incorporating expanding knowledge of which methodological pitfalls to avoid. The intent of the present paper is to provide an informational guide to the key evaluation issues that arise with small-n. Although the paper focuses on supervised classification paradigms typified by the back-propagation network, the principles hold, to varying degrees, for other artificial neural networks.
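As a purely illustrative sketch of the setting just described, and not a technique drawn from this paper, the fragment below evaluates a small back-propagation classifier with leave-one-out cross-validation, one common strategy when n is too small to set aside a separate test sample; the data set, the scikit-learn library, and all hyperparameters are assumptions introduced only for illustration.

```python
# Purely illustrative sketch (assumed scikit-learn API; not the paper's method):
# estimate generalization accuracy of a small back-propagation classifier
# when n is too small to hold out a separate test sample.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical small-n data set: 20 cases, 5 predictors, binary outcome.
X = rng.normal(size=(20, 5))
y = rng.integers(0, 2, size=20)

# A back-propagation-trained network with one small hidden layer.
net = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)

# Leave-one-out cross-validation: each case serves once as the lone test case,
# a common (if high-variance) evaluation strategy for very small samples.
scores = cross_val_score(net, X, y, cv=LeaveOneOut())
print(f"Leave-one-out accuracy estimate: {scores.mean():.2f} (n = {len(y)})")
```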