Applied Linguistics Review


Multi-modal language input: A learned superadditive effect

Dominic Cheetham
Published Online: 2017-10-25 | DOI: https://doi.org/10.1515/applirev-2017-0036

Abstract

A review of psychological and language acquisition research into seeing faces while listening, seeing gesture while listening, illustrated text, reading while listening, and same-language subtitled video confirms that bi-modal input has a consistently positive effect on language learning across a variety of input types. This effect is normally discussed using a simple additive model, in which bi-modal input increases the total amount of data and adds redundancy to duplicated input, thereby increasing comprehension and, in turn, learning. Parallel studies in neuroscience suggest that bi-modal integration is a general effect that uses common brain areas and follows common neural paths. Neuroscience also shows that bi-modal effects are more complex than simple addition: inputs are integrated early, integration shows a learning/developmental effect, and integrated bi-modal input produces a superadditive effect. Together, these bodies of research yield a revised model of bi-modal input as a learned, active system. The implication for language learning is that bi- or multi-modal input can powerfully enhance learning, and that the benefits of such input will increase as neurological integration of the inputs develops.

Keywords: bi-modal input; subtitled video; superadditive effect; reading while listening
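
The contrast between the two models in the abstract can be stated compactly. As a minimal sketch (the symbols below are illustrative, not drawn from the article), write $R_A$ and $R_V$ for the response to auditory-only and visual-only input, and $R_{AV}$ for the response to combined audiovisual input. The simple additive model treats the combined response as roughly the sum of the unimodal responses, while the superadditive effect is the case where it exceeds that sum:

$$R_{AV} \approx R_A + R_V \qquad \text{(simple additive model)}$$

$$R_{AV} > R_A + R_V \qquad \text{(superadditive effect)}$$

The inequality is the superadditivity criterion commonly used in multisensory neuroscience, where $R$ is typically a neural response measure such as the BOLD signal; the abstract's argument is that the learning benefits of bi-modal language input pattern with this neural superadditivity and grow as integration of the inputs is learned.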


About the article

Citation Information: Applied Linguistics Review, ISSN (Online) 1868-6311, ISSN (Print) 1868-6303, DOI: https://doi.org/10.1515/applirev-2017-0036.

© 2017 Walter de Gruyter GmbH, Berlin/Boston.
