Paladyn, Journal of Behavioral Robotics


Understandable robots - What, Why, and How

Thomas Hellström / Suna Bensch
Published Online: 2018-07-11 | DOI: https://doi.org/10.1515/pjbr-2018-0009

Abstract

As robots become increasingly capable and autonomous, there is a growing need for humans to understand what robots do and think. In this paper, we investigate what such understanding means and includes, and how robots can be designed to support it. After an in-depth survey of related earlier work, we discuss examples showing that understanding covers not only the robot’s intentions, but also its desires, knowledge, beliefs, emotions, perceptions, capabilities, and limitations. The term understanding is formally defined, and the term communicative actions is defined to denote the various ways in which a robot may support a human’s understanding of the robot. A novel model of interaction for understanding is presented. The model describes how both human and robot may use a first- or higher-order theory of mind to understand each other, and how they perform communicative actions to support the other’s understanding. It also covers simpler cases in which the robot performs static communicative actions to support the human’s understanding of the robot. In general, the robot’s communicative actions aim at reducing the mismatch between its own mind and its inferred model of the human’s model of that mind. Based on the proposed model, a set of questions is formulated to serve as support when developing and implementing the model in real interacting robots.

Keywords: human-robot interaction; communication; predictable; explainable
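
The model’s core mechanism, as stated in the abstract, is that the robot compares its own mind with its inferred model of the human’s model of the robot’s mind (a second-order theory of mind), and acts to reduce the mismatch. The following Python sketch is our illustration of that idea, not code from the paper; all names (MindState, mismatch, communicative_actions) and the flat dictionary representation of a mind are hypothetical simplifications.

    # Illustrative sketch (not from the paper): the robot emits one
    # communicative action per item where its inferred model of the
    # human's model of the robot's mind disagrees with its actual mind.
    from dataclasses import dataclass, field

    @dataclass
    class MindState:
        """A toy mind state: labelled items such as intentions, beliefs, or percepts."""
        contents: dict = field(default_factory=dict)

    def mismatch(robot_mind: MindState, inferred_human_model: MindState) -> dict:
        """Items where the human's (inferred) picture of the robot is wrong or missing."""
        return {
            key: value
            for key, value in robot_mind.contents.items()
            if inferred_human_model.contents.get(key) != value
        }

    def communicative_actions(robot_mind: MindState, inferred_human_model: MindState) -> list:
        """One communicative action per mismatched item, aimed at closing the gap."""
        return [
            f"communicate: {key} = {value!r}"
            for key, value in mismatch(robot_mind, inferred_human_model).items()
        ]

    if __name__ == "__main__":
        # The robot intends to fetch a cup and knows its battery is low.
        robot = MindState({"intention": "fetch cup", "battery_low": True})
        # Second-order model: the robot infers that the human knows its
        # intention but not its battery state.
        human_model_of_robot = MindState({"intention": "fetch cup"})
        print(communicative_actions(robot, human_model_of_robot))
        # ['communicate: battery_low = True']

Running the sketch yields a single communicative action, exposing the one item (the low battery) missing from the human’s inferred model; in the paper’s terms, performing it would reduce the mismatch to zero.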


About the article

Received: 2017-12-14

Accepted: 2018-05-17

Published Online: 2018-07-11


Citation Information: Paladyn, Journal of Behavioral Robotics, Volume 9, Issue 1, Pages 110–123, ISSN (Online) 2081-4836, DOI: https://doi.org/10.1515/pjbr-2018-0009.

© 2018 Thomas Hellström and Suna Bensch. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0).

