The field of sonification, which uses auditory presentation of data to replace or augment visualization techniques, is gaining popularity and acceptance for the analysis of "big data" and for assisting analysts who cannot rely on traditional visual approaches due to: 1) visual overload caused by existing displays; 2) a concurrent need to perform critical, visually intensive tasks (e.g., operating a vehicle or performing a medical procedure); or 3) visual impairment, whether from temporary environmental factors (e.g., dense smoke) or biological causes. Sonification tools typically map data values to sound attributes such as pitch, volume, and localization so that the data can be interpreted by listening. For more complex problems, the challenge is to create multi-dimensional sonifications that are both compelling and listenable, and that offer enough discrete features that can be modulated in ways that allow meaningful discrimination by a listener. We propose a solution to this problem that combines Complex Event Processing (CEP) with speech synthesis. Some of the most promising sonifications to date use speech synthesis, an "instrument" that is amenable to extended listening and can convey a great deal of subtle nuance. These vocal nuances, which can represent a nearly limitless number of expressive meanings (through combinations of pitch, inflection, volume, and other acoustic factors), are the basis of our daily communication, and thus have the potential to engage the innate human understanding of these sounds. Additionally, recent advances in CEP have made it possible to extract multi-level hierarchies of information, which is necessary to bridge the gap between raw data and this type of vocal synthesis. We therefore propose that CEP-enabled sonifications based on the sound of human utterances are the next logical step in human-centric "big data" compression and transmission.
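The parameter-mapping approach described above (data values driving pitch and volume) can be sketched in a few lines. This is a minimal illustration only; the function and parameter names are our own and do not come from any particular sonification toolkit.

```python
# Minimal parameter-mapping sonification sketch (illustrative; not from a
# specific toolkit). Each data value is mapped linearly onto a pitch in Hz,
# and its deviation from the mean onto an amplitude in [0, 1], so that
# outliers sound both higher (or lower) and louder.

def map_to_sound(values, f_lo=220.0, f_hi=880.0):
    """Return a (frequency_hz, amplitude) pair for each data value."""
    v_min, v_max = min(values), max(values)
    span = (v_max - v_min) or 1.0          # avoid division by zero
    mean = sum(values) / len(values)
    max_dev = max(abs(v - mean) for v in values) or 1.0
    events = []
    for v in values:
        freq = f_lo + (v - v_min) / span * (f_hi - f_lo)  # pitch encodes rank
        amp = abs(v - mean) / max_dev                     # loudness encodes deviation
        events.append((freq, amp))
    return events

events = map_to_sound([0.0, 5.0, 10.0])
```

In this sketch the lowest value maps to the lowest pitch and the highest to the highest; the resulting (frequency, amplitude) events could then be rendered by any synthesis backend. A CEP layer, as proposed here, would sit upstream of such a mapping, emitting higher-level events rather than raw samples.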