Image2Reverb: Cross-Modal Reverb Impulse Response Synthesis

The acoustic characteristics of a space are commonly captured with an impulse response (IR), a recording of how the space responds to a full-range stimulus sound. Recording IRs, however, is time-intensive and expensive, and often infeasible for inaccessible locations. We present Image2Reverb, the first method to generate an IR from a single image. The generated IR can then be applied to other signals by convolution, simulating the reverberant characteristics of the space shown in the image. We use an end-to-end neural network architecture to generate plausible audio impulse responses from single images of acoustic environments. We evaluate our method both by comparison to ground-truth data and by human expert evaluation, and we demonstrate it on diverse settings and formats, including well-known places, concert halls, rooms in paintings, images from animations and computer games, synthetic environments generated from text, panoramic images, and video-conference backgrounds.
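To make the convolution step concrete, here is a minimal sketch of how a generated IR might be applied to a dry recording. It assumes mono WAV files at a shared sample rate and uses scipy and soundfile; the file names are placeholders, not artifacts from the paper.

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Load a dry (anechoic) recording and a generated impulse response.
# File names are illustrative placeholders.
dry, sr = sf.read("dry_speech.wav")
ir, sr_ir = sf.read("generated_ir.wav")
assert sr == sr_ir, "Resample so the IR and the signal share a sample rate."

# Convolving the dry signal with the IR imprints the room's reverberation
# onto it; fftconvolve keeps this fast even for long IRs.
wet = fftconvolve(dry, ir, mode="full")

# Normalize to avoid clipping, then save the reverberant result.
wet /= np.max(np.abs(wet)) + 1e-9
sf.write("reverberant_speech.wav", wet, sr)
```

The output length is the signal length plus the IR length minus one sample, so the reverberant tail extends naturally past the end of the dry recording.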
