Parametric Study of Diffusion-Enhancement Networks for Spatiotemporal Grouping in Real-Time Artificial Vision

Abstract: Spatiotemporal grouping phenomena are examined in the context of static and time-varying imagery. Dynamics that exhibit static feature grouping on multiple scales as a function of time and long-range apparent motion between time-varying inputs are developed for a biologically plausible diffusion-enhancement bilayer network. The architecture consists of a diffusion layer and a contrast-enhancement layer coupled by feedforward and feedback connections; time-varying input is provided by a separate feature-extracting layer. The model is cast as an analog circuit that is realizable in very-large-scale integration, the parameters of which are selected to satisfy a psychophysical database of the following long-range apparent motion phenomena: gamma motion of a single light, smooth motion between two lights, reverse motion, split and merge among three lights, Ternus motion among multiple lights, and peripheral motion. The relation between motion on a uniform network (i.e., cortex) and inputs to a nonuniform sampling array (i.e., retina) is discussed in the context of a logarithmic scaling of space. A new interpretation of short- and long-range visual motion systems is introduced.
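
To make the bilayer architecture concrete, the following is a minimal illustrative sketch of a one-dimensional diffusion layer coupled to a contrast-enhancement layer by feedforward and feedback connections, driven by transient "light" inputs from a feature-extracting stage. The equations, the crude peak-enhancing nonlinearity (a stand-in for shunting contrast enhancement), and all parameter values are hypothetical assumptions for illustration only; they are not the paper's fitted circuit parameters or exact dynamics.

```python
import numpy as np

# Illustrative 1-D diffusion-enhancement bilayer.
# All parameters below are hypothetical, not the paper's fitted values.
N = 100          # network nodes (uniform "cortical" array)
dt = 0.01        # integration step
D_coef = 2.0     # diffusion coefficient, diffusion layer
decay_d = 0.5    # passive decay, diffusion layer
decay_c = 1.0    # passive decay, contrast-enhancement layer
ff_gain = 1.5    # feedforward gain: diffusion layer -> enhancement layer
fb_gain = 0.8    # feedback gain: enhancement layer -> diffusion layer

d = np.zeros(N)  # diffusion-layer activity
c = np.zeros(N)  # contrast-enhancement-layer activity

def feature_input(t):
    """Transient 'lights' from the feature-extracting stage:
    one flash at node 30, a later flash at node 70 (hypothetical timing)."""
    x = np.zeros(N)
    if 0.0 <= t < 0.2:
        x[30] = 5.0
    if 0.5 <= t < 0.7:
        x[70] = 5.0
    return x

for step in range(200):
    t = step * dt
    lap = np.roll(d, 1) - 2.0 * d + np.roll(d, -1)   # discrete Laplacian
    # Diffusion layer: decay + diffusion + feature input + feedback of peaks
    d += dt * (-decay_d * d + D_coef * lap + feature_input(t) + fb_gain * c)
    # Enhancement layer: decay + feedforward drive, then a simple
    # peak-enhancing nonlinearity (suppress all but near-maximal activity)
    c += dt * (-decay_c * c + ff_gain * d)
    if c.max() > 0:
        c = np.where(c >= 0.95 * c.max(), c, 0.0)

print(f"enhancement-layer peak at node {int(np.argmax(c))}")
```

Under this sketch, the enhancement-layer peak tracks the evolving activity distribution in the diffusion layer, which is the qualitative mechanism the abstract attributes to grouping and long-range apparent motion; the actual parametric study selects parameters against the psychophysical database listed above.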