Personalized and Immersive Sound Experiences Based on an Interoperable End-to-End Next Generation Audio (NGA) Chain Using the Audio Definition Model (ADM)

Next-generation audio (NGA) delivers the best possible listening experience in varying situations (e.g., improved intelligibility and comprehension, adaptation to the reproduction set-up and listening context, and audio content tailored to individual preferences and needs) while saving bandwidth and production effort. These scenarios are enabled by the so-called renderer, whose purpose is to convert a set of audio signals with associated metadata into a different configuration of audio signals (e.g., loudspeaker feeds) based on the metadata, control inputs from the playback environment, and the user’s preferences. Defining a specific renderer with its own metadata works well in clearly delimited vertical businesses such as cinema and packaged media, but not for broadcast, which is by its nature a transversal business; there, standards that describe the metadata and the behavior of the renderer become essential. The audio definition model (ADM), defined in Recommendation ITU-R (International Telecommunication Union – Radiocommunication Sector) BS.2076, is particularly relevant in this context to ensure interoperability and reproducibility along the chain. The aim of this tutorial is to describe ADM-based use cases and workflows and the ongoing efforts to promote wide adoption and integration of the ADM.
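To make the renderer's role concrete, the following is a minimal illustrative sketch, not the standardized ADM renderer of Recommendation ITU-R BS.2127 or any production implementation: a mono audio object carrying a single azimuth metadata parameter is converted into stereo loudspeaker feeds using constant-power panning. The function name, the ±30° stereo layout, and the convention that positive azimuth points left are assumptions made for this example.

```python
import math

def render_object(samples, azimuth_deg):
    """Render a mono object to stereo loudspeaker feeds.

    Illustrative sketch only: azimuth_deg is object-position metadata,
    with -30 degrees at the right loudspeaker and +30 at the left
    (assumed convention). Constant-power panning keeps the summed
    power of the two feeds equal to the source power.
    """
    # Clamp the position metadata to the assumed stereo loudspeaker arc.
    a = max(-30.0, min(30.0, azimuth_deg))
    # Map azimuth to a pan parameter: 0.0 = fully right, 1.0 = fully left.
    p = (a + 30.0) / 60.0
    # Constant-power gain law: g_left^2 + g_right^2 == 1 for any position.
    g_left = math.sin(p * math.pi / 2)
    g_right = math.cos(p * math.pi / 2)
    left = [g_left * s for s in samples]
    right = [g_right * s for s in samples]
    return left, right

# A centered object feeds both loudspeakers with equal gain (~0.707 each).
left, right = render_object([1.0, 0.5], azimuth_deg=0.0)
```

A real NGA renderer generalizes this idea: it evaluates the full metadata set (position, gain, interactivity ranges, and so on) against the actual loudspeaker layout and user preferences, rather than a fixed two-channel arc.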