Abstract
Trust in technology is an important factor to consider for safety-critical systems. Of particular interest today is the transport domain, as increasingly complex information and assistance systems find their way into vehicles. Research on driving automation and automated driving systems is a focus of many research institutes worldwide. On the operational side, active safety systems employed to save lives are frequently used by non-professional drivers who know neither the system boundaries nor the underlying functional principles. This is a serious safety issue, as systems are activated under the wrong circumstances and with false expectations. At least some of the recent incidents involving advanced driver assistance systems (ADAS) or automated driving systems (ADS; SAE J3016) could have been prevented if drivers had fully understood system functionality and limitations (instead of overrelying on them). Drivers have to be trained to accept and use these systems in a way that subjective trust matches objective trustworthiness (cf. “appropriate trust”) in order to prevent disuse and/or misuse. In this article, we present an interaction model for trust calibration that issues personalized messages in real time. Using automated driving as a showcase, we report the results of two user studies on trust in ADSs and driving ethics. In the first experiment (N = 48), the mental and emotional states of front-seat passengers were compared to gain deeper insight into the dispositional trust of potential users of automated vehicles. Using quantitative and qualitative methods, we found that subjects accept and trust ADSs almost as much as male or female drivers. In another study (N = 40), drivers’ moral decisions were investigated in a systematic way. Our results indicate that drivers’ willingness to risk even severe accidents increases with the number and age of the pedestrians who would otherwise be sacrificed.
Based on our initial findings, we further discuss related aspects of trust in driving automation. Effective shared vehicle control and the expected benefits of highly or fully automated driving (SAE level 3 or higher) can only be achieved once trust issues are identified and resolved.
About the authors

Philipp Wintersberger is a research assistant at the research center CARISSMA (Center of Automotive Research on Integrated Safety Systems and Measurement Area) at the University of Applied Sciences Ingolstadt (THI). After finishing the Federal Higher Technical College for Informatics in Leonding, he studied Computer Science and obtained his diploma at Johannes Kepler University Linz, specializing in Human-Computer Interaction and Computer Vision. He worked for 10 years as a software engineer and architect in professional software development (in the fields of Business Process Management and Mobile Computing) and was repeatedly invited to give talks on mobile and software development. In January 2016, he accepted a position as a PhD candidate in the area of Human Factors & Driving Ergonomics at THI. His research interests focus on human factors in automated driving, especially trust in automation, ethics, and driver state assessment.

Andreas Riener is a professor of Human-Machine Interaction and Virtual Reality in the Faculty of Electrical Engineering and Computer Science at the University of Applied Sciences Ingolstadt (THI). He holds a co-appointment at the research center CARISSMA (Center of Automotive Research on Integrated Safety Systems and Measurement Area) in the area of Human Factors & Driving Ergonomics. Riener leads the degree program User Experience Design and heads several labs at THI (UXD, driving simulator).
His research interests include driving ergonomics, driver state assessment from physiological measures, human factors in driver-vehicle interfaces and topics related to (over)trust, user acceptance, and ethics in automated driving. His focus is hypothesis-driven experimental research in the area of driver and driving support systems at various levels (simulation, simulator studies, field operational tests, naturalistic driving studies). One particular interest is in the methodological investigation of human factors in driving (emotional state recognition: detection of stress, fatigue, cognitive overload, situation awareness; trust in and acceptance of technology, etc.). Furthermore, his research interests include cyber-physical (automotive) systems, augmented reality (AR) applications and virtual reality (VR) environments, and novel interaction concepts for automated driving including communication strategies, ethical and legal aspects, and safety and security issues (hacking, identity preservation).
Prof. Riener’s research has yielded more than 100 publications across various journals and conference proceedings in the broader field of sensor/actuator (embedded) systems, (implicit) human-computer interaction, human vital state recognition, and context-sensitive data processing. He has presented his research findings in more than 50 conference talks, has been invited to teach courses at universities in Austria, Germany, and the US, and has given keynote talks at several conferences. He has further been invited as an expert, consultant, and key contributor to various workshops. In addition, he was engaged in several EU-funded (FP7 SOCIONICAL, FP7 OPPORTUNITY) and industry-funded (SIEMENS P2P, FACT) research projects and has been a long-time reviewer for conferences (including PERVASIVE, UBICOMP, CHI, ISWC, AmI, EuroSSC) and journals (such as IEEE PCM, IEEE ITS, Springer PUC) in the pervasive/ubiquitous/automotive/embedded/networking domain. In June 2016, he was one of the co-organizers of the Dagstuhl seminar 16262 on “Automotive User Interfaces in the Age of Automation”.
Acknowledgements
This work is based, in part, on discussions with participants of the Dagstuhl Seminar 16262 “Automotive User Interfaces in the Age of Automation”, http://www.dagstuhl.de/16262.
© 2016 Walter de Gruyter GmbH, Berlin/Boston