A Study of Video-based Concordancer on Scene Classification

A video-based concordancer can provide contextual scenarios that support the study of novel words. Typically, a fixed number of contextual sentences is retrieved along with the keywords. However, such excerpts may lack the complete context learners need to comprehend the keywords in the videos. Few studies have discussed how videos should be presented to help learners use keywords appropriately and find relevant knowledge effectively. In this paper, a keyword-in-scene video concordancer (KWIS), which recognizes the scenes in videos and provides scene-based clips, is proposed. Each video clip is tagged with its actual scene type. Learners can query the KWIS system with keywords, phrases, or natural-language sentences, and watch the relevant scenario clips to understand where the conversation can take place. A pilot study was conducted to evaluate the proposed system. The results show a positive effect on students' comprehension of English phrases while using the system.
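The retrieval step described above — matching a learner's keyword against scene-tagged clips and returning results grouped by scene type — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `Clip` structure, scene labels, and sample transcripts are all hypothetical, and a real system would index subtitle text and run a scene classifier over the video.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Clip:
    video_id: str
    scene: str        # scene type assigned by a classifier, e.g. "restaurant" (hypothetical label set)
    transcript: str   # subtitle text associated with the clip

def keyword_in_scene(clips, keyword):
    """Return clips whose transcript contains the keyword, grouped by scene type."""
    hits = defaultdict(list)
    kw = keyword.lower()
    for clip in clips:
        if kw in clip.transcript.lower():
            hits[clip.scene].append(clip)
    return dict(hits)

# Hypothetical sample data for illustration only.
clips = [
    Clip("v1", "restaurant", "Could I have the check, please?"),
    Clip("v2", "airport", "Here is my boarding pass and passport."),
    Clip("v3", "restaurant", "The check comes with a complimentary mint."),
]

results = keyword_in_scene(clips, "check")
```

Grouping matches by scene, rather than returning a flat concordance line list, is what lets the learner see in which conversational settings a phrase naturally occurs.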
