Paladyn, Journal of Behavioral Robotics

Open Access | Online ISSN: 2081-4836

Adaptive reinforcement learning with active state-specific exploration for engagement maximization during simulated child-robot interaction

George Velentzas / Theodore Tsitsimis / Iñaki Rañó / Costas Tzafestas / Mehdi Khamassi
Published Online: 2018-09-01 | DOI: https://doi.org/10.1515/pjbr-2018-0016

Abstract

Using assistive robots for educational applications requires robots to adapt their behavior specifically to each child with whom they interact. Among relevant signals, non-verbal cues such as the child’s gaze can provide the robot with important information about the child’s current engagement in the task, and hence about whether the robot should continue its current behavior. Here we propose a reinforcement learning algorithm extended with active state-specific exploration and show its applicability to child engagement maximization as well as to more classical tasks such as maze navigation. We first demonstrate its adaptive nature on a continuous maze problem, an enhancement of the classic grid world. There, parameterized actions enable the agent to learn single moves that carry it to the end of a corridor, similarly to “options” but without explicit hierarchical representations. We then apply the algorithm to a series of simulated scenarios: an extended Tower of Hanoi, where the robot should find the speed of movement appropriate for the interacting child, and a pointing task, where the robot should find the child-specific appropriate level of expressivity of action. We show that the algorithm enables the agent to cope with both global and local non-stationarities in the state space while preserving stable behavior in its stationary portions. Altogether, these results suggest a promising way to enable robot learning from non-verbal cues despite the high degree of non-stationarity that can occur during interaction with children.
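As a rough illustration of the two ingredients the abstract mentions, the sketch below combines parameterized actions (a discrete action type carrying a continuous parameter drawn from a learned Gaussian) with state-specific active exploration (each state keeps its own exploration level, raised when recent reward prediction errors signal non-stationarity). This is a minimal sketch under our own assumptions, not the authors’ implementation; all names and parameter choices here are illustrative.

```python
import numpy as np

class StateSpecificExplorer:
    """Illustrative sketch (not the paper's code): Q-learning over
    parameterized actions with a per-state exploration level that is
    meta-learned from reward prediction errors."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.95,
                 beta_max=10.0, sigma_min=0.05, sigma_max=1.0, tau=0.1):
        self.Q = np.zeros((n_states, n_actions))   # value of each discrete action type
        self.mu = np.zeros((n_states, n_actions))  # mean of each action's continuous parameter
        self.err = np.zeros(n_states)              # running |reward prediction error| per state
        self.alpha, self.gamma, self.tau = alpha, gamma, tau
        self.beta_max = beta_max
        self.sigma_min, self.sigma_max = sigma_min, sigma_max

    def explore_level(self, s):
        # Squash the running prediction error into [0, 1]:
        # 0 = state looks stationary, 1 = state looks volatile.
        return 1.0 - np.exp(-self.err[s])

    def act(self, s, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        x = self.explore_level(s)
        beta = self.beta_max * (1.0 - x)            # flatter softmax when volatile
        prefs = beta * self.Q[s]
        p = np.exp(prefs - prefs.max())
        p /= p.sum()
        a = rng.choice(len(p), p=p)                 # discrete action type
        sigma = self.sigma_min + (self.sigma_max - self.sigma_min) * x
        theta = rng.normal(self.mu[s, a], sigma)    # its continuous parameter
        return a, theta

    def update(self, s, a, theta, r, s_next):
        rpe = r + self.gamma * self.Q[s_next].max() - self.Q[s, a]
        self.Q[s, a] += self.alpha * rpe
        if rpe > 0:                                 # pull the mean toward rewarded parameters
            self.mu[s, a] += self.alpha * (theta - self.mu[s, a])
        # Meta-learning step: the volatility estimate is updated for state s only,
        # so exploration stays low in stationary parts of the state space.
        self.err[s] += self.tau * (abs(rpe) - self.err[s])
```

In this sketch, a burst of prediction errors in one state (e.g., a child disengaging from one part of the task) raises exploration there while behavior in other, stationary states stays near-greedy, which is the property the abstract highlights.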

This article offers supplementary material which is provided at the end of the article.

Keywords: human-robot interaction; reinforcement learning; active exploration; meta-learning; autonomous robotics; engagement; joint action


About the article

Received: 2018-02-02

Accepted: 2018-06-29

Published Online: 2018-09-01


Citation Information: Paladyn, Journal of Behavioral Robotics, Volume 9, Issue 1, Pages 235–253, ISSN (Online) 2081-4836, DOI: https://doi.org/10.1515/pjbr-2018-0016.


© by George Velentzas et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0).
