
Paladyn, Journal of Behavioral Robotics


Towards an Articulation-Based Developmental Robotics Approach for Word Processing in Face-to-Face Communication

Bernd J. Kröger / Peter Birkholz / Christiane Neuschaefer-Rube

Department of Phoniatrics, Pedaudiology, and Communication Disorders, RWTH Aachen University, Aachen, Germany
Published Online: 2011-08-16 | DOI: https://doi.org/10.2478/s13230-011-0016-6


While we are capable of modeling the shape of humanoid robots, e.g. face, arms, etc., in a nearly natural or human-like way, it is much more difficult to generate human-like facial or body movements and human-like behavior such as speaking and co-speech gesturing. In this paper we will argue for a developmental robotics approach to learning to speak. On the basis of the current literature, a blueprint of a brain model will be outlined for this kind of robot, and preliminary scenarios for knowledge acquisition will be described. Furthermore, it will be illustrated that natural speech acquisition mainly results from learning during face-to-face communication, and it will be argued that learning to speak should likewise be based on human-robot face-to-face communication. Here the human acts as a caretaker or teacher and the robot acts as a speech-acquiring toddler. This is a fruitful basic scenario not only for learning to speak, but also for learning to communicate in general, including the production of co-verbal manual gestures and co-verbal facial expressions.
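The caretaker-toddler scenario can be caricatured as an associative learning loop: the robot babbles proto-syllables, the caretaker reinforces those that occur in a target word, and reinforced babbles become more frequent. The following minimal Python sketch is purely illustrative; the class, the syllable inventory, and the reward scheme are assumptions for exposition, not the brain model proposed in the paper.

```python
import random

class BabblingLearner:
    """Toy sketch of the caretaker-toddler scenario: the robot babbles
    proto-syllables at random; reinforced syllables accumulate weight
    and are therefore produced more often (simple associative learning)."""

    def __init__(self, inventory, seed=0):
        self.rng = random.Random(seed)
        # Start with a uniform preference over the babbling inventory.
        self.weights = {syl: 1.0 for syl in inventory}

    def babble(self):
        syllables = list(self.weights)
        # Sample a syllable in proportion to its current weight.
        return self.rng.choices(
            syllables, [self.weights[s] for s in syllables])[0]

    def reinforce(self, syllable, reward=1.0):
        self.weights[syllable] += reward

def caretaker_session(learner, target_syllables, n_turns=500):
    """The caretaker rewards any babble that occurs in the target word."""
    for _ in range(n_turns):
        babble = learner.babble()
        if babble in target_syllables:
            learner.reinforce(babble)
    return learner

# Hypothetical inventory and target word ("bada") for illustration only.
learner = caretaker_session(
    BabblingLearner(["ba", "da", "gu", "mi", "po"]), {"ba", "da"})
best = max(learner.weights, key=learner.weights.get)
```

After such a session the socially reinforced syllables dominate the robot's babbling distribution, which is the rich-get-richer effect that the face-to-face learning scenario relies on; the actual model of course operates on articulatory and sensory representations rather than symbolic syllables.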

Keywords: developmental robotics; humanoid robotics; conversational agents; face-to-face communication; speech; speech acquisition; speech production; speech perception



About the article

Received: 2011-01-31

Accepted: 2011-05-24

Published Online: 2011-08-16

Published in Print: 2011-06-01

Citation Information: Paladyn, Journal of Behavioral Robotics, ISSN (Online) 2081-4836, DOI: https://doi.org/10.2478/s13230-011-0016-6.


© Bernd J. Kröger et al. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0).

Citing Articles


Bernd J. Kröger, Eric Crawford, Trevor Bekolay, and Chris Eliasmith
Frontiers in Computational Neuroscience, 2016, Volume 10
Mengxue Cao, Aijun Li, Qiang Fang, Emily Kaufmann, and Bernd J. Kröger
Frontiers in Psychology, 2014, Volume 5
Bernd J Kröger, Jim Kannampuzha, and Emily Kaufmann
EPJ Nonlinear Biomedical Physics, 2014, Volume 2, Number 1
Byron D. Erath, Matías Zañartu, Kelley C. Stewart, Michael W. Plesniak, David E. Sommer, and Sean D. Peterson
Speech Communication, 2013, Volume 55, Number 5, Page 667
