The effect of multimodal learning in an Artificial Grammar Learning task

The aim of the present study was to determine whether multimodal grammar learning improves classification accuracy compared with unimodal learning. To test this hypothesis, we constructed an experimental procedure based on the research of Conway and Christiansen [2006], which examined a modality-specific Artificial Grammar Learning (AGL) task. The grammatical sequences used in the present study were generated by an algorithm producing a finite set of strings. Two additional sets of ungrammatical sequences were generated at random: one was used in the learning phase for the control group, and the other in the classification phase for both the control and experimental groups. Participants performed the classification task above chance level, supporting the hypothesis that grammar learning would occur [Conway and Christiansen 2006; Reber 1989]. We observed no effect supporting the hypothesized accuracy enhancement in the multimodal learning condition.
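The stimulus-generation procedure described above can be sketched as follows. Note that the grammar below is hypothetical: the abstract states only that the grammatical sequences came from an algorithm with a finite number of results, so this sketch uses an invented acyclic finite-state grammar (whose language is therefore finite) together with randomly generated ungrammatical foils, in the spirit of classic AGL designs.

```python
import random

# Hypothetical finite-state grammar (the actual grammar used in the study
# is not specified in the abstract). Each state maps to (symbol, next_state)
# edges; next_state None ends the string. Because the graph has no cycles,
# the set of sequences it generates is finite.
GRAMMAR = {
    0: [("T", 1), ("V", 2)],
    1: [("P", 3), ("X", 2)],
    2: [("V", 3), ("X", 4)],
    3: [("S", None), ("X", 4)],
    4: [("S", None)],
}
ALPHABET = "TPVXS"

def language(state=0, prefix=""):
    """Enumerate every sequence the grammar can generate."""
    strings = []
    for symbol, nxt in GRAMMAR[state]:
        if nxt is None:
            strings.append(prefix + symbol)
        else:
            strings.extend(language(nxt, prefix + symbol))
    return strings

GRAMMATICAL = set(language())

def random_ungrammatical(rng, min_len=2, max_len=5):
    """Sample a random string over the same alphabet that the grammar
    cannot produce (used for the control and classification foil sets)."""
    while True:
        n = rng.randint(min_len, max_len)
        s = "".join(rng.choice(ALPHABET) for _ in range(n))
        if s not in GRAMMATICAL:
            return s
```

Drawing two independent samples with `random_ungrammatical` would yield the two ungrammatical sets the abstract describes: one for the control group's learning phase, one for the classification phase shared by both groups.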

[1]  A. Baddeley, Working memory: looking back and looking forward, 2003, Nature Reviews Neuroscience.

[2]  C. M. Conway and M. H. Christiansen, Statistical learning within and between modalities: pitting abstract against stimulus-specific representations, 2006, Psychological Science.

[3]  Frédéric Berthommier, et al., Binding and unbinding the auditory and visual streams in the McGurk effect, 2012, The Journal of the Acoustical Society of America.

[4]  D. Shanks, Implicit Learning and Tacit Knowledge: An Essay on the Cognitive Unconscious, by A. Reber, 1995.

[5]  Sophie M. Wuerger, et al., Low-level integration of auditory and visual motion signals requires spatial co-localisation, 2005, Experimental Brain Research.

[6]  P. Perruchet, et al., Implicit learning and statistical learning: one phenomenon, two approaches, 2006, Trends in Cognitive Sciences.

[7]  David Alais, et al., No direction-specific bimodal facilitation for audiovisual motion detection, 2004, Brain Research: Cognitive Brain Research.

[8]  Gregory Hickok, et al., An fMRI study of audiovisual speech perception reveals multisensory interactions in auditory cortex, 2013, PLoS ONE.

[9]  Tobias Johansson, et al., Strengthening the case for stimulus-specificity in artificial grammar learning: no evidence for abstract representations with extended exposure, 2009, Experimental Psychology.

[10]  C. Werry, Reflections on language: Chomsky, linguistic discourse and the value of rhetorical self-consciousness, 2007.