
Paladyn, Journal of Behavioral Robotics

Editor-in-Chief: Schöner, Gregor

Open Access | Online ISSN: 2081-4836

Learning Robot Speech Models to Predict Speech Acts in HRI

Ankuj Arora / Humbert Fiorino / Damien Pellier / Sylvie Pesty
Published Online: 2018-10-05 | DOI: https://doi.org/10.1515/pjbr-2018-0015

Abstract

In order to be acceptable and able to “camouflage” into their physio-social context in the long run, robots need to be not just functional but autonomously psycho-affective as well. This motivates a long-term need for behavioral autonomy in robots, so that they can communicate with humans without “wizard” intervention. This paper proposes a technique to learn robot speech models from human-robot dialog exchanges. It casts the entire exchange in the Automated Planning (AP) paradigm, representing dialog sequences (speech acts) as action sequences that modify the state of the world upon execution, gradually propelling the state toward a desired goal. We then exploit intra-action and inter-action dependencies, encoding them as constraints. We satisfy these constraints using a weighted maximum satisfiability (MAX-SAT) model and convert the solution into a speech model. This model has many potential uses, such as the planning of fresh dialogs. In this study, the learnt model is used to predict speech acts in dialog sequences via the sequence-labeling (predicting future acts from previously seen ones) capabilities of the Long Short-Term Memory (LSTM) class of recurrent neural networks. Encouraging empirical results demonstrate the utility of the learnt model and its long-term potential to facilitate autonomous behavioral planning of robots, an aspect to be explored in future work.
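The constraint-satisfaction step can be illustrated with a toy weighted MAX-SAT instance. The variables, clauses, and weights below are hypothetical stand-ins for the paper's intra- and inter-action constraints, and the brute-force solver is a minimal sketch of the formulation, not the exact algorithm used in the paper:

```python
from itertools import product

# Weighted clauses over boolean variables: each clause is (weight, [literals]),
# where a literal is (variable_name, required_polarity). A clause is satisfied
# if at least one of its literals matches the assignment. The constraints below
# are hypothetical examples of ordering dependencies between speech acts.
clauses = [
    (10.0, [("greet_first", True)]),                              # dialog opens with a greeting
    (3.0,  [("greet_first", False), ("ask_after_greet", True)]),  # greet implies a question follows
    (1.0,  [("ask_after_greet", False)]),                         # weak conflicting evidence
]

variables = sorted({v for _, lits in clauses for v, _ in lits})

def weight_satisfied(assignment):
    """Total weight of the clauses satisfied by a {variable: bool} assignment."""
    return sum(w for w, lits in clauses
               if any(assignment[v] == pol for v, pol in lits))

# Brute-force search over all assignments: fine for a handful of variables,
# exponential in general (real solvers use branch-and-bound or local search).
best = max((dict(zip(variables, bits))
            for bits in product([False, True], repeat=len(variables))),
           key=weight_satisfied)
print(best, weight_satisfied(best))
```

The maximizing assignment sets both hypothetical variables to true, satisfying the two heavier clauses at the cost of the lightest one, which is exactly the trade-off a weighted MAX-SAT encoding is meant to arbitrate.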

Keywords: human-robot interaction; automated planning; SAT; LSTM
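The sequence-labeling setup can be sketched as follows, assuming a hypothetical five-act vocabulary and untrained (random) weights; this illustrates the LSTM forward computation used for predicting the next speech act, not the trained model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
acts = ["greet", "ask", "inform", "confirm", "bye"]   # hypothetical speech-act vocabulary
V, H = len(acts), 8                                   # vocabulary size, hidden-state size

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Randomly initialised parameters; a real model would learn these from dialog traces.
W  = rng.standard_normal((4 * H, V)) * 0.1   # input-to-gates weights (4 gates stacked)
U  = rng.standard_normal((4 * H, H)) * 0.1   # hidden-to-gates weights
b  = np.zeros(4 * H)                         # gate biases
Wy = rng.standard_normal((V, H)) * 0.1       # hidden-to-output readout

def lstm_step(x, h, c):
    """One LSTM cell update: compute the gates, then the new cell and hidden states."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g          # forget part of the old memory, write new content
    return o * np.tanh(c), c   # expose a gated view of the memory as the hidden state

def predict_next(act_sequence):
    """Read a sequence of speech acts and return a distribution over the next act."""
    h, c = np.zeros(H), np.zeros(H)
    for a in act_sequence:
        x = np.eye(V)[acts.index(a)]   # one-hot encoding of the current act
        h, c = lstm_step(x, h, c)
    logits = Wy @ h
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()         # softmax over the act vocabulary

probs = predict_next(["greet", "ask"])
print(dict(zip(acts, np.round(probs, 3))))
```

With untrained weights the output distribution is near-uniform and carries no linguistic information; training on dialog traces is what shapes it into a usable speech-act predictor.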


About the article

Received: 2017-11-29

Accepted: 2018-07-05

Published Online: 2018-10-05

Published in Print: 2016-08-01


Citation Information: Paladyn, Journal of Behavioral Robotics, Volume 9, Issue 1, Pages 295–306, ISSN (Online) 2081-4836, DOI: https://doi.org/10.1515/pjbr-2018-0015.


© Ankuj Arora et al., published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0).
