Optical flow

A large number of contributions to this workshop are concerned with computing or making use of optical flow. This is the term now commonly used for an intermediate representation of time-varying imagery in which each pixel is assigned a velocity vector describing its temporal displacement in the image plane or, for human vision, in the retinal field. Optical flow can be consciously experienced by human observers (e.g. when travelling in a car) and was recognized early as a valuable source of information pertaining to the motion and 3D characteristics of a scene (GIBSON 50). Thorough quantitative analyses, however, have only become available during the last five years, as an increasing number of vision researchers turned to motion problems. As can be seen from this workshop, interesting results on how to exploit optical flow are still being uncovered.

Before optical flow can be used it must, unfortunately, first be computed. As it turns out, no computational theory has yet been offered which promises satisfactory results for unrestricted real-world images. Nevertheless, considerable progress has been made in certain restricted situations, as documented by several contributions to this workshop. In this introductory survey I shall try to point out the major differences in the approaches taken so far.

Fig. 1 gives a rough sketch of the representations and the processing connected with optical flow. Much of the variety of the research contributions is due to certain assumptions about the visual world; these will be discussed in the following section. The visual world is projected, yielding intensity arrays from which optical flow computation per se proceeds. Three rather distinct directions of processing have been proposed. As a first possibility, optical flow is computed directly from the intensity array; the result is usually a dense flow field. Alternatively, descriptive elements such as prominent points or edges may be computed first. Points usually give rise to a sparse flow field once correspondence is established. Edges lead to a quite different flow computation due to the remaining degree of freedom. In section 3 these distinctions are elaborated in more detail. Finally, I shall briefly review ways of extracting useful information from optical flow.
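To make the first, intensity-based route concrete, the following Python sketch estimates a dense flow field from two grayscale frames by solving the brightness-constancy constraint Ix*u + Iy*v + It = 0 in a local least-squares fashion, in the spirit of the iterative registration scheme of [17]. It is an illustration only, not a method taken from any particular contribution; the function name, window size, and singularity threshold are choices made here for the sketch.

```python
import numpy as np

def dense_flow_sketch(frame0, frame1, window=5, eps=1e-6):
    """Illustrative gradient-based flow estimate from two grayscale frames.

    At each pixel the brightness-constancy constraint Ix*u + Iy*v + It = 0
    is solved in least-squares fashion over a small spatial window.
    """
    frame0 = frame0.astype(np.float64)
    frame1 = frame1.astype(np.float64)

    # Spatial derivatives of the first frame and the temporal difference.
    Iy, Ix = np.gradient(frame0)
    It = frame1 - frame0

    half = window // 2
    h, w = frame0.shape
    flow = np.zeros((h, w, 2))  # (u, v) per pixel

    for y in range(half, h - half):
        for x in range(half, w - half):
            ix = Ix[y-half:y+half+1, x-half:x+half+1].ravel()
            iy = Iy[y-half:y+half+1, x-half:x+half+1].ravel()
            it = It[y-half:y+half+1, x-half:x+half+1].ravel()

            # Normal equations of the 2x2 local least-squares system.
            A = np.array([[ix @ ix, ix @ iy],
                          [ix @ iy, iy @ iy]])
            b = -np.array([ix @ it, iy @ it])

            # Skip nearly singular systems: along a straight edge only the
            # flow component normal to the edge is constrained.
            if np.linalg.det(A) > eps:
                flow[y, x] = np.linalg.solve(A, b)
    return flow
```

The near-singularity test in the inner loop is exactly where the remaining degree of freedom mentioned above shows up: along a contour only the normal flow component is determined by the intensity data, which is why edge-based schemes treat the computation quite differently.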

[1] M. Yachida et al., Determining Velocity Map by 3-D Iterative Estimation, IJCAI, 1981.

[2] T. S. Huang et al., Some Experiments on Estimating the 3-D Motion Parameters of a Rigid Body from Two Consecutive Image Frames, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1984.

[3] E. Hildreth, The Computation of the Velocity Field, Proceedings of the Royal Society of London, Series B, Biological Sciences, 1984.

[4] E. H. Adelson et al., Spatiotemporal Energy Models for the Perception of Motion, Journal of the Optical Society of America A: Optics and Image Science, 1985.

[5] T. Kanade et al., Adapting Optical Flow to Measure Object Motion in Reflectance and X-Ray Image Sequences (abstract only), COMG, 1984.

[6] E. C. Hildreth et al., Computing the Velocity Field along Contours, Workshop on Motion, 1986.

[7] E. Parzen, On Estimation of a Probability Density Function and Mode, 1962.

[8] J. A. Webb et al., Observing Jointed Objects, 1980.

[9] T. S. Huang et al., Estimating Three-Dimensional Motion Parameters of a Rigid Planar Patch, III: Finite Point Correspondences and the Three-View Problem, 1984.

[10] T. S. Huang et al., Estimating Three-Dimensional Motion Parameters of a Rigid Planar Patch, 1981.

[11] R. C. Jain et al., Determining Motion Parameters for Scenes with Translation and Rotation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1984.

[12] J. H. Rieger et al., Determining the Instantaneous Axis of Translation from Optic Flow Generated by Arbitrary Sensor Motion, Workshop on Motion, 1986.

[13] T. S. Huang et al., Determining 3-D Motion Parameters of a Rigid Body, 1984.

[14] E. H. Adelson et al., The Perception of Coherent Motion in Two-Dimensional Patterns (abstract only), COMG, 1984.

[15] E. H. Adelson et al., Pyramid Methods in Image Processing, 1984.

[16] J. Craggs, Applied Mathematical Sciences, 1973.

[17] T. Kanade et al., An Iterative Image Registration Technique with an Application to Stereo Vision, IJCAI, 1981.

[18] H.-H. Nagel et al., On 3D Reconstruction from Two Perspective Views, IJCAI, 1981.

[19] H.-H. Nagel et al., An Investigation of Smoothness Constraints for the Estimation of Displacement Vector Fields from Image Sequences, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1983.

[20] H. C. Longuet-Higgins et al., The Interpretation of a Moving Retinal Image, Proceedings of the Royal Society of London, Series B, Biological Sciences, 1980.

[21] R. Hetherington, The Perception of the Visual World, 1952.

[22] B. K. P. Horn et al., Determining Optical Flow, 1981.

[23] S. Ullman et al., The Interpretation of Visual Motion, 1977.