Humans express and perceive emotions in a multimodal manner, and the human sensory system fuses this multimodal information in an intrinsically complex way. By emulating a temporal desynchronisation between modalities, we design in this paper an end-to-end neural network architecture, called TA-AVN, that aggregates temporal audio and video information in an asynchronous setting in order to determine the emotional state of a subject. The audio and video feature descriptors are extracted with simple Convolutional Neural Networks (CNNs), allowing real-time processing. Collecting annotated training data remains a major challenge when building emotion recognition systems, in terms of both the effort and the expertise required. The proposed approach addresses this problem through a natural augmentation technique that yields high accuracy even when the amount of annotated training data is limited. The framework is evaluated on three challenging multimodal datasets for emotion recognition: the benchmark datasets CREMA-D and RAVDESS, and a dataset from the FG2020 challenge on emotion recognition. The results demonstrate the effectiveness of our approach, and the end-to-end framework achieves state-of-the-art performance on CREMA-D and RAVDESS.
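To make the asynchronous audio-video aggregation idea concrete, the sketch below shows one possible realisation of such a model. It is not the actual TA-AVN implementation: the use of PyTorch, the layer sizes, the input shapes, and the concatenation-based fusion are all assumptions made purely for illustration.

```python
# Minimal illustrative sketch (assumptions: PyTorch, log-mel spectrogram audio
# segments, single video frames, concatenation fusion; all sizes are arbitrary).
import torch
import torch.nn as nn


class AudioCNN(nn.Module):
    """Small CNN over a 1 x 64 x T log-mel spectrogram segment."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))


class VideoCNN(nn.Module):
    """Small CNN over a 3 x 112 x 112 face crop."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))


class AsyncAVNet(nn.Module):
    """Fuses an audio segment and a video frame that need not be time-aligned."""
    def __init__(self, num_classes=6, feat_dim=128):
        super().__init__()
        self.audio_net = AudioCNN(feat_dim)
        self.video_net = VideoCNN(feat_dim)
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, audio, frame):
        fused = torch.cat([self.audio_net(audio), self.video_net(frame)], dim=1)
        return self.classifier(fused)


# Asynchronous pairing as augmentation: any audio segment of a clip may be
# paired with any frame of the same clip, multiplying the number of training
# pairs obtained from a single annotated recording.
model = AsyncAVNet()
audio_seg = torch.randn(4, 1, 64, 100)   # batch of spectrogram segments
video_frm = torch.randn(4, 3, 112, 112)  # frames sampled at other time offsets
logits = model(audio_seg, video_frm)
print(logits.shape)  # torch.Size([4, 6])
```

In this reading, the augmentation effect comes from the pairing strategy rather than from any synthetic data transformation: because audio and video inputs are sampled independently in time, each annotated clip produces many distinct audio-video training pairs.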