Automatic sound generation for spherical objects hitting straight beams based on physical models

The objective of this paper is the development of concepts, methods and a prototype for an audio framework that describes sounds on a highly abstract semantic level. We describe every sound as the result of one or more interactions between one or more objects at a certain place and in a certain environment. The attributes of each interaction influence the generated sound. At the same time, the participating objects can differ in their physical states (states of aggregation), materials and configurations, and all of these attributes influence the generated sound. The hearing of sounds in everyday life is based on the perception of events, not on the perception of sounds as such; for this reason, everyday sounds are often described by the events that cause them. In this paper, a framework concept for the description of sounds is presented in which sounds are represented as auditory signal patterns along several descriptive dimensions of various objects interacting in a certain environment. Based on the distinction between purely physical and purely semantic descriptive dimensions, automatic sound generation is discussed on both the physical and the semantic level. Within this research project, we focus especially on describing the sound class 'solid objects', in particular the class of primitive sounds 'knock' ('strike', 'hit'), because this class of sounds occurs very frequently in everyday life, the interacting objects can be described easily and well by their material characteristics, and the knowledge of solid-state physics can be applied. As an example, the fall of an elastic sphere onto a linearly elastic beam is modelled physically and mathematically and implemented on an SGI workstation. The main parameters that influence the impact behaviour of such objects are discussed.
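The kind of physical impact model described above can be illustrated with a minimal Hertz-contact simulation of an elastic sphere striking a target. This is only a sketch: all parameter values are hypothetical, and the beam is treated as rigid (its own elasticity and vibration are ignored), so it does not reproduce the paper's actual sphere-on-beam model.

```python
import math

# Hypothetical material/geometry parameters (illustrative only).
E_star = 1.0e10     # effective contact modulus E* [Pa]
R = 0.01            # sphere radius [m]
rho = 7800.0        # sphere density [kg/m^3]
m = rho * (4.0 / 3.0) * math.pi * R**3          # sphere mass [kg]
k = (4.0 / 3.0) * E_star * math.sqrt(R)         # Hertz stiffness: F = k * delta^(3/2)

v0 = 1.0            # impact velocity [m/s]
dt = 1.0e-8         # time step [s]

# Integrate the contact phase with semi-implicit Euler:
# delta = compression depth, v = compression rate.
delta, v, t, f_max = 0.0, v0, 0.0, 0.0
while delta >= 0.0:
    a = -k * delta**1.5 / m       # deceleration from the Hertz contact force
    v += a * dt
    delta += v * dt
    t += dt
    f_max = max(f_max, k * max(delta, 0.0)**1.5)

# t is the contact duration, f_max the peak contact force;
# both shape the excitation that drives the radiated 'knock' sound.
```

For a purely elastic Hertz contact, the simulated contact duration can be checked against the classical closed-form estimate tau ≈ 2.94 · delta_max / v0, with delta_max = (5 m v0² / 4k)^(2/5).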
On the theoretical level, a first expected result is a better overview and understanding of the capabilities, restrictions and problems of existing tools for the automatic generation of audio data.
