The Virtual Reality Lab: Realization and Application of Virtual Sound Environments
Giso Grimm | Volker Hohmann | Markus Meis | Melanie Krueger | Richard Paluch
[1] Gerard Llorach,et al. Movement and Gaze Behavior in Virtual Audiovisual Listening Environments Resembling Everyday Life , 2019, Trends in hearing.
[2] Michael Vorländer,et al. An Extended Binaural Real-Time Auralization System With an Interface to Research Hearing Aids for Experiments on Subjects With Hearing Loss , 2018, Trends in hearing.
[3] C. Frith,et al. Making up the mind , 2009 .
[4] S. Debener,et al. Concealed, Unobtrusive Ear-Centered EEG Acquisition: cEEGrids for Transparent EEG , 2017, Front. Hum. Neurosci.
[5] J. Oleson,et al. Efficacy and Effectiveness of Advanced Hearing Aid Directional and Noise Reduction Technologies for Older Adults With Mild to Moderate Hearing Loss. , 2018, Ear and hearing.
[6] Giso Grimm,et al. Spatial Acoustic Scenarios in Multichannel Loudspeaker Systems for Hearing Aid Evaluation. , 2016, Journal of the American Academy of Audiology.
[7] Jennifer L. Campos,et al. Effects of Hearing Loss on Dual-Task Performance in an Audiovisual Virtual Reality Simulation of Listening While Walking. , 2016, Journal of the American Academy of Audiology.
[8] DeLiang Wang,et al. A New Framework for CNN-Based Speech Enhancement in the Time Domain , 2019, IEEE/ACM Transactions on Audio, Speech, and Language Processing.
[9] Mary T Cord,et al. Relationship between laboratory measures of directional advantage and everyday success with directional microphone hearing aids. , 2004, Journal of the American Academy of Audiology.
[10] M. Meis,et al. The technization of self-care in hearing aid research , 2018 .
[11] Aaron J. Heller. The Ambisonic Decoder Toolbox: Extensions for Partial-Coverage Loudspeaker Arrays , 2014 .
[12] Jörg M. Buchholz,et al. Validation of realistic acoustic environments for listening tests using directional hearing aids , 2014, 2014 14th International Workshop on Acoustic Signal Enhancement (IWAENC).
[13] Gitte Keidser,et al. Conversational Interaction Is the Brain in Action: Implications for the Evaluation of Hearing and Hearing Interventions. , 2020, Ear and hearing.
[14] Todd R. Jennings,et al. A visually guided beamformer to aid listening in complex acoustic environments , 2018 .
[15] G. Fink,et al. It's in your eyes--using gaze-contingent stimuli to create truly interactive paradigms for social cognitive and affective neuroscience. , 2010, Social cognitive and affective neuroscience.
[16] William M Whitmer,et al. Speech, movement, and gaze behaviours during dyadic conversation in noise , 2019, Scientific Reports.
[17] Gerard Llorach,et al. Influence of visual cues on head and eye movements during listening tasks in multi-talker audiovisual environments with animated characters , 2018, Speech Commun..
[18] N. Lesica. Why Do Hearing Aids Fail to Restore Normal Auditory Perception? , 2018, Trends in Neurosciences.
[19] J. Buchholz,et al. Hearing Aid Amplification Reduces Communication Effort of People With Hearing Impairment and Their Conversation Partners. , 2020, Journal of speech, language, and hearing research : JSLHR.
[20] Giso Grimm,et al. Review of Self-Motion in the Context of Hearing and Hearing Device Research. , 2020, Ear and hearing.
[21] Matthias Husinsky,et al. Virtual Stage: Interactive Puppeteering in Mixed Reality , 2018, 2018 IEEE 1st Workshop on Animation in Virtual and Augmented Environments (ANIVAE).
[22] Seizo Murakami. Reference Note , 1958, Domenico Scarlatti.
[23] R. Bentler. Effectiveness of directional microphones and noise reduction schemes in hearing aids: a systematic review of the evidence. , 2005, Journal of the American Academy of Audiology.
[24] J. Bailenson,et al. Social interaction in augmented reality , 2019, PloS one.
[25] Richard Paluch. Die technisch vermittelte Umweltbeziehung des leiblichen Selbstes in virtuellen Welten [The technologically mediated environmental relation of the bodily self in virtual worlds] , 2019, Mensch und Welt im Zeichen der Digitalisierung.
[26] Gerard Llorach,et al. Web-Based Live Speech-Driven Lip-Sync , 2016, 2016 8th International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES).
[27] Giso Grimm,et al. Ethnographic research: The interrelation of spatial awareness, everyday life, laboratory environments, and effects of hearing aids , 2017 .
[28] Jean-Luc Schwartz,et al. Adverse conditions improve distinguishability of auditory, motor, and perceptuo-motor theories of speech perception: An exploratory Bayesian modelling study , 2012 .
[29] Hong Zhang,et al. Facial expression recognition via learning deep sparse autoencoders , 2018, Neurocomputing.
[30] Ilona Straub,et al. ‘It looks like a human!’ The interrelation of social presence, interaction and agency ascription: a case study about the effects of an android robot on social agency ascription , 2016, AI & SOCIETY.
[31] Giso Grimm,et al. Moving from the field to the lab: towards ecological validity of audio-visual simulations in the laboratory to meet individual behavior patterns and preferences , 2017 .
[32] Ton Roosendaal,et al. The Official Blender Game Kit: Interactive 3d for Artists , 2003 .
[33] Jennifer L. Campos,et al. From Healthy Hearing to Healthy Living: A Holistic Approach. , 2020, Ear and hearing.
[34] Giso Grimm,et al. A gaze-based attention model for spatially-aware hearing aids , 2018, ITG Symposium on Speech Communication.
[35] Karolina Smeds,et al. Common Sound Scenarios: A Context-Driven Categorization of Everyday Sound Environments for Application in Hearing-Device Research. , 2016, Journal of the American Academy of Audiology.
[36] Justin A. Zakis,et al. Preferred overall loudness. II: Listening through hearing aids in field and laboratory tests , 2006, International journal of audiology.
[37] Giso Grimm,et al. Evaluation of the Influence of Head Movement on Hearing Aid Algorithm Performance Using Acoustic Simulations , 2020, Trends in hearing.
[38] Torsten Dau,et al. Sound source localization with varying amount of visual information in virtual reality , 2018, bioRxiv.
[39] J. Daniel. Représentation de champs acoustiques, application à la transmission et à la reproduction de scènes sonores complexes dans un contexte multimédia [Representation of acoustic fields, with application to the transmission and reproduction of complex sound scenes in a multimedia context] , 2000 .
[40] Giso Grimm,et al. Evaluation of spatial audio reproduction schemes for application in hearing aid research , 2015, ArXiv.
[41] Gerard Llorach,et al. Say Hi to Eliza - An Embodied Conversational Agent on the Web , 2017, IVA.
[42] Douglas S. Brungart,et al. The Quest for Ecological Validity in Hearing Science: What It Is, Why It Matters, and How to Advance It , 2020, Ear and Hearing.
[43] J. Buchholz,et al. Eliciting Naturalistic Conversations: A Method for Assessing Communication Ability, Subjective Experience, and the Impacts of Noise and Hearing Impairment. , 2019, Journal of speech, language, and hearing research : JSLHR.
[44] Gerard Llorach,et al. Towards Realistic Immersive Audiovisual Simulations for Hearing Research: Capture, Virtual Scenes and Reproduction , 2018, AVSU@MM.
[45] A. Strauss,et al. The social psychology of George Herbert Mead , 1956 .
[46] Gerard Llorach,et al. Web-Based Embodied Conversational Agents and Older People , 2019, Human–Computer Interaction Series.
[47] Karolina Smeds,et al. Selecting Scenarios for Hearing-Related Laboratory Testing. , 2020, Ear and hearing.
[48] S. Shiffman,et al. Ecological momentary assessment. , 2008, Annual review of clinical psychology.
[49] Giso Grimm,et al. A toolbox for rendering virtual acoustic environments in the context of audiology , 2018, Acta Acustica united with Acustica.