i-com

Journal of Interactive Media

Volume 18, Issue 1

Perception of an Uncertain Ethical Reasoning Robot

Hanna Stellmach (corresponding author) / Felix Lindner

Foundations of Artificial Intelligence Group, Department of Computer Science, University of Freiburg, Freiburg, Germany
Published Online: 2019-04-16 | DOI: https://doi.org/10.1515/icom-2019-0002

Abstract

This study investigates how uncertainty expressed by a robot facing a moral dilemma affects humans’ moral judgment and impression formation. In two experiments, participants watched a video of a robot explaining a moral dilemma and proposing a decision. The robot expressed either certainty or uncertainty about the proposed decision. Participants then rated how much blame the robot deserves for its decision, the moral wrongness of the chosen action, and their impression of the robot on four scale dimensions measuring social perception. The results suggest that the subpopulation of participants unfamiliar with the moral dilemma assigns significantly more blame to the uncertain robot than to the certain one, whereas expressed uncertainty has less effect on moral wrongness judgments. The second experiment suggests that the higher blame ratings are mediated by the uncertain robot being perceived as more humanlike. We discuss implications of these results for the design of social robots.

Keywords: Moral HRI; Conversational Robot; Uncertainty
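
The mediation result reported in the abstract (condition → perceived humanlikeness → blame) is the kind of claim typically tested with a regression-based mediation analysis. The following is a minimal sketch of such an analysis in Python, not the authors’ actual pipeline: the simulated data, column names, and effect sizes are hypothetical illustrations only.

```python
# Hedged sketch of a simple regression-based (Baron & Kenny style) mediation
# analysis, illustrating how "perceived humanlikeness mediates blame" could be
# tested. All data and variable names are hypothetical, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
uncertain = rng.integers(0, 2, n)  # 0 = certain robot, 1 = uncertain robot
humanlike = 3 + 0.8 * uncertain + rng.normal(0, 1, n)  # mediator: humanlikeness
blame = 2 + 0.6 * humanlike + 0.1 * uncertain + rng.normal(0, 1, n)  # outcome

df = pd.DataFrame({"uncertain": uncertain, "humanlike": humanlike, "blame": blame})

# Path c: total effect of condition on blame
total = smf.ols("blame ~ uncertain", data=df).fit()
# Path a: effect of condition on the mediator
a_path = smf.ols("humanlike ~ uncertain", data=df).fit()
# Paths b and c': mediator and condition jointly predicting blame
direct = smf.ols("blame ~ uncertain + humanlike", data=df).fit()

print("total effect c :", total.params["uncertain"])
print("path a         :", a_path.params["uncertain"])
print("path b         :", direct.params["humanlike"])
print("direct effect c':", direct.params["uncertain"])
print("indirect a*b   :", a_path.params["uncertain"] * direct.params["humanlike"])
```

In practice, the indirect effect a·b would be reported with a bootstrapped confidence interval rather than as a point estimate alone.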

About the article

Hanna Stellmach

Hanna Stellmach holds a Bachelor’s degree in Cognitive Science from the University of Tübingen and a Master’s degree in Computer Science from the University of Freiburg. In her Master’s thesis, she worked on the perception of ethical robots.

Felix Lindner

Felix Lindner is a post-doctoral researcher at the University of Freiburg. His current research interests include moral human-robot interaction and machine ethics.


Published in Print: 2019-04-26


Citation Information: i-com, Volume 18, Issue 1, Pages 79–91, ISSN (Online) 2196-6826, ISSN (Print) 1618-162X, DOI: https://doi.org/10.1515/icom-2019-0002.

© 2019 Walter de Gruyter GmbH, Berlin/Boston.
