Logarithmic search in a winner-take-all network

An analysis is presented of the relaxation time of the winner-take-all (WTA) mechanism used in a number of neural network models of visual attention. The analysis assumes only bottom-up activation of the WTA array and approximates this activation as a single maximum activation plus some number of equal, submaximal activations. Under these conditions, the relaxation time of the network is a logarithmic function of the number of secondary activations. This result is interpreted as a prediction that the dependence of reaction time on display size in visual search tasks should, under appropriate conditions, show this logarithmic dependence rather than the linear dependence that is generally reported. The analysis also finds a dependence of relaxation time on target saliency that agrees with the experimental data and theories of a number of researchers.
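
As an illustrative sketch only, and not the specific network analyzed here, the Python snippet below measures a relaxation time for a simple MAXNET-style iterative-inhibition WTA, using the activation pattern described in the abstract: one maximum (target) unit plus some number of equal, submaximal units. The update rule, the inhibition strength `epsilon`, and the activation values are assumptions made for the example; how the measured time scales with display size depends on these choices.

```python
import numpy as np

def wta_relaxation_time(target, distractor, n_distractors,
                        epsilon=None, max_steps=10_000):
    """Iterate a MAXNET-style winner-take-all network and return the number
    of update steps until only one unit remains active.

    Toy dynamics for illustration: each unit is inhibited by the summed
    activity of all other units, and negative activations are clipped to zero.
    """
    # One maximum activation plus n_distractors equal, submaximal activations.
    a = np.array([target] + [distractor] * n_distractors, dtype=float)
    n = a.size
    if epsilon is None:
        # Inhibition strength chosen below the usual MAXNET stability bound.
        epsilon = 1.0 / (2 * n)
    for step in range(1, max_steps + 1):
        total = a.sum()
        # Subtract lateral inhibition from every other unit, clip at zero.
        a = np.maximum(a - epsilon * (total - a), 0.0)
        if np.count_nonzero(a) <= 1:
            return step
    return max_steps

# Relaxation time as the number of secondary (distractor) activations grows.
for n in (1, 2, 4, 8, 16, 32, 64):
    print(n, wta_relaxation_time(target=1.0, distractor=0.8, n_distractors=n))
```

Varying the `distractor` value relative to `target` in this sketch gives a crude analogue of target saliency: the closer the submaximal activations are to the maximum, the more iterations the competition takes to settle.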