
Paladyn, Journal of Behavioral Robotics

Editor-in-Chief: Schöner, Gregor


Open Access

Attractive, Informative, and Communicative Robot System on Guide Plate as an Attendant with Awareness of User’s Gaze

Tomoko Yonezawa / Hirotake Yamazoe / Akira Utsumi / Shinji Abe
Published Online: 2013-12-10 | DOI: https://doi.org/10.2478/pjbr-2013-0008


In this paper, we introduce an interactive guide plate system that adopts a gaze-communicative stuffed-toy robot and a gaze-interactive display board. The stuffed-toy robot attached to the system naturally provides anthropomorphic guidance corresponding to the user's gaze orientation. The guidance is presented through a) gaze-communicative behaviors of the stuffed-toy robot, which uses joint attention and eye-contact reactions to virtually express its own mind, in conjunction with b) vocal guidance and c) projection on the guide plate. We adopted our image-based remote gaze-tracking method to detect the user's gaze orientation. The results of empirical studies with subjective / objective evaluations, together with observations from demonstration experiments in a semi-public space, show i) the total operation of the system, ii) the elicitation of the user's interest by the gaze behaviors of the robot, and iii) the effectiveness of the gaze-communicative guidance adopting the anthropomorphic robot.
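The behavior-selection logic described above — eye contact when the user looks at the robot, joint attention when the user looks at the guide plate — can be sketched as a simple decision rule over the estimated gaze direction. The thresholds, angle names, and the `elicit_attention` fallback below are illustrative assumptions, not values published in the paper:

```python
from dataclasses import dataclass

# Hypothetical angular thresholds in degrees; the paper does not
# publish the exact values used by the system.
EYE_CONTACT_THRESHOLD = 10.0   # user's gaze is near the robot's face
PLATE_THRESHOLD = 35.0         # user's gaze is near the guide plate


@dataclass
class GazeEstimate:
    """Horizontal gaze angles (degrees) from a remote gaze tracker,
    measured relative to the robot's face and to the guide plate."""
    angle_to_robot: float
    angle_to_plate: float


def select_behavior(gaze: GazeEstimate) -> str:
    """Pick a gaze-communicative behavior from a single gaze estimate.

    - "eye_contact": respond when the user looks at the robot.
    - "joint_attention": turn toward the plate the user is viewing.
    - "elicit_attention": otherwise, act to attract the user's gaze.
    """
    if abs(gaze.angle_to_robot) < EYE_CONTACT_THRESHOLD:
        return "eye_contact"
    if abs(gaze.angle_to_plate) < PLATE_THRESHOLD:
        return "joint_attention"
    return "elicit_attention"
```

A real implementation would additionally smooth gaze estimates over time and sequence the robot's head motion, but the core mapping from gaze orientation to communicative behavior follows this shape.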

Keywords: gaze-correspondence; anthropomorphic media; stuffed-toy robot; gazing behaviors; gaze-tracking


About the article

Published Online: 2013-12-10

Published in Print: 2013-12-01

Citation Information: Paladyn, Journal of Behavioral Robotics, Volume 4, Issue 2, Pages 113–122, ISSN (Print) 2081-4836, DOI: https://doi.org/10.2478/pjbr-2013-0008.


This content is open access.
