Sonification and audification create auditory displays of datasets. Audification translates data points directly into digital audio samples, so the display's duration is determined by the playback rate. Like audification, auditory graphs preserve the temporal relationships of the data while using parameter mappings (typically data-to-frequency) to represent the ordinate values. Such direct approaches have the advantage of presenting the data stream `as is', without the imposed interpretations or accentuation of particular features found in indirect approaches. However, datasets can often be subdivided into short, non-overlapping, variable-length segments, each of which encapsulates a discrete unit of domain-significant information, and current direct approaches cannot represent these. We present Direct Segmented Sonification (DSSon), which highlights each segment's data distribution as an individual sonic event. Using domain knowledge to segment the data, DSSon presents the segments as discrete auditory gestalts while retaining the overall temporal regime and relationships of the dataset. Because the method's structure is decoupled from the formation of the sound stream, playback speed is independent of the durations of the individual sonic events, offering highly flexible time compression/stretching for zooming into or out of the data. Demonstrated by three models applied to biomechanical data, DSSon displays high directness, letting the data `speak' for themselves.
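The core idea can be illustrated with a minimal sketch. The following is not the authors' implementation; it is an assumed, simplified rendering in pure Python in which each domain-derived segment becomes a short sine event whose pitch trajectory follows the segment's data (a data-to-frequency mapping), event onsets preserve the dataset's temporal order under a `time_scale` factor, and event duration is fixed independently of that factor. The base rate of 100 data steps per second, the frequency range, and the envelope are all illustrative assumptions.

```python
import math

def dsson_sketch(data, segments, sample_rate=8000, event_dur=0.25,
                 time_scale=1.0, f_lo=220.0, f_hi=880.0):
    """Illustrative DSSon-style rendering (assumptions noted above).

    data:      list of samples, one per time step
    segments:  list of (start, end) index pairs, non-overlapping,
               derived from domain knowledge
    time_scale: compresses/stretches event onsets ("zooming") without
                changing the duration of the individual sonic events
    """
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0
    base_rate = 100.0                      # assumed data steps per second
    total_dur = len(data) * time_scale / base_rate
    out = [0.0] * int(sample_rate * (total_dur + event_dur))
    for s, e in segments:
        # Onset preserves the dataset's temporal regime, scaled by time_scale
        onset = int(s * time_scale / base_rate * sample_rate)
        n = int(event_dur * sample_rate)   # event length: independent of time_scale
        seg = data[s:e]
        phase = 0.0
        for i in range(n):
            # Pitch trajectory traces the segment's data distribution
            x = seg[min(i * len(seg) // n, len(seg) - 1)]
            f = f_lo + (x - lo) / span * (f_hi - f_lo)
            phase += 2 * math.pi * f / sample_rate
            env = math.sin(math.pi * i / n)  # simple fade-in/fade-out envelope
            out[onset + i] += env * math.sin(phase)
    return out
```

Note how the temporal decoupling appears in the code: `time_scale` rescales only the onset positions, so halving it packs the events closer together (zooming out) while each event still lasts `event_dur` seconds and remains an intact auditory gestalt.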