
Linguistics Vanguard

A Multimodal Journal for the Language Sciences

Editor-in-Chief: Bergs, Alexander / Cohn, Abigail C. / Good, Jeff

1 Issue per year

Online ISSN: 2199-174X

Toward an infrastructure for data-driven multimodal communication research

Francis F. Steen (ORCID: http://orcid.org/0000-0003-2963-4077) / Anders Hougaard / Jungseock Joo / Inés Olza / Cristóbal Pagán Cánovas / Anna Pleshakova / Soumya Ray / Peter Uhrig / Javier Valenzuela / Jacek Woźny (ORCID: http://orcid.org/0000-0003-0691-7090) / Mark Turner (ORCID: http://orcid.org/0000-0002-2089-5978)
Published Online: 2018-03-09 | DOI: https://doi.org/10.1515/lingvan-2017-0041

Abstract

Research into the multimodal dimensions of human communication faces a set of distinctive methodological challenges. Collecting the datasets is resource-intensive, analysis often lacks peer validation, and the absence of shared datasets makes it difficult to develop standards. External validity is hampered by small datasets, yet large datasets are intractable. Red Hen Lab spearheads an international infrastructure for data-driven multimodal communication research, facilitating an integrated cross-disciplinary workflow. Linguists, communication scholars, statisticians, and computer scientists work together to develop research questions, annotate training sets, and develop pattern discovery and machine learning tools that handle vast collections of multimodal data, beyond the reach of previous researchers. This infrastructure makes it possible for researchers at multiple sites to work in real time in transdisciplinary teams. We review the vision, progress, and prospects of this research consortium.

Keywords: multimodality; machine learning; automated parsing; corpora; research consortia


About the article

Received: 2017-09-07

Accepted: 2018-01-31

Published Online: 2018-03-09


Citation information: Linguistics Vanguard, Volume 4, Issue 1, 20170041. Online ISSN: 2199-174X. DOI: https://doi.org/10.1515/lingvan-2017-0041.


©2018 Walter de Gruyter GmbH, Berlin/Boston.
