Publicly available. Published by Oldenbourg Wissenschaftsverlag, December 9, 2016.

Trust in Technology as a Safety Aspect in Highly Automated Driving

Philipp Wintersberger and Andreas Riener

From the journal i-com

Abstract

Trust in technology is an important factor to be considered for safety-critical systems. Of particular interest today is the transport domain, as more and more complex information and assistance systems find their way into vehicles. Research on driving automation and automated driving systems is a focus of many research institutes worldwide. On the operational side, active safety systems employed to save lives are frequently used by non-professional drivers who know neither the system boundaries nor the underlying functional principles. This is a serious safety issue, as systems are activated under the wrong circumstances and with false expectations. At least some of the recent incidents with advanced driver assistance systems (ADAS) or automated driving systems (ADS; SAE J3016) could have been prevented if drivers had fully understood system functionality and limitations instead of overrelying on the technology. Drivers have to be trained to accept and use these systems in a way that subjective trust matches objective trustworthiness (cf. “appropriate trust”) to prevent disuse and / or misuse. In this article, we present an interaction model for trust calibration that issues personalized messages in real time. Using automated driving as a showcase, we report the results of two user studies related to trust in ADSs and driving ethics. In the first experiment (N = 48), mental and emotional states of front-seat passengers were compared to gain deeper insight into the dispositional trust of potential users of automated vehicles. Using quantitative and qualitative methods, we found that subjects accept and trust ADSs to a similar degree as male / female human drivers. In another study (N = 40), moral decisions of drivers were investigated in a systematic way. Our results indicate that the willingness of drivers to risk even severe accidents increases with the number of pedestrians that would otherwise be sacrificed and depends on their age. Based on our initial findings, we further discuss related aspects of trust in driving automation. Effective shared vehicle control and the expected advantages of fully / highly automated driving (SAE levels 3 or higher) can only be achieved when trust issues are identified and resolved.

1 The Role of Trust in HMI

The future of human-machine interaction will be shaped by the interaction of humans with complex machines in potentially safety-critical environments (e. g., automated traffic, industry 4.0, emergency / paramedical robots). In particular, automated vehicles (AVs), which will be among the first such systems rolled out to a broad public, will challenge learned communication patterns between different road participants. In contrast to long-established fields of automation, like aviation or power plant control, where similar systems are controlled only by well-trained and educated experts in limited conditions and environments, automated vehicles will be used by a wide variety of users with different levels of experience (from novice to expert drivers) in circumstances we can hardly imagine today. When pondering the future of human-machine interaction, a critical factor of success and user acceptance is trust in technology. People can become very creative when using (or misusing) technology, and history has shown – not only for the car domain – that people tend to overtrust technology after some time of use [27], in particular if they are not familiar with the technology and its operational principles. For driving, overtrust can quickly result in safety- (and life-) critical situations, as numerous accidents with advanced driver assistance systems and, just recently, some accidents with Tesla cars have shown. In the accidents reported, most drivers did not permanently monitor their ADSs (as required by the vehicle manufacturer) or overtrusted the system for other reasons. For example, in May 2016 a Tesla Model S crashed into a van while on Autopilot. The driver saw the crash coming, but trusted the system: “Yes, I could have reacted sooner, but when the car slows down correctly 1,000 times, you trust it to do it the next time too. My bad…”¹

Trust is a concept that develops over time, starting long before a user actually gets in contact with the system [10]. For example, people’s trust in automated driving (AD) functions, such as Tesla’s “Autopilot”, would have been reported higher before the fatal Tesla accident (May 7, 2016) was made public. Today, after a few months have passed, this might again be reported differently, as trust is also regained when critical events are forgotten, no matter whether a person was actually involved or only a spectator. Similarities can be found in other mental dimensions such as anxiety or strain. Manseer and Riener [23] found, for example, that heart rate variability (HRV) analysis of both driver and front-seat passenger showed an elevated level of stress (with identical development) while driving through a tunnel compared to an open road segment – even though the passenger had no active role and could have leaned back and relaxed.

Furthermore, from interviews conducted with study participants we found that at least some people do not trust a computer to control a vehicle at all [31]. Both researchers and manufacturers thus must address trust issues, otherwise the general public could prevent the broad deployment of AD. For instance, if too few people trust AVs, some of the proposed advantages that require high market penetration (reduced congestion, increased road safety, etc.) will not materialize. If, on the other hand, too many people overtrust AVs and misuse their functions, new kinds of accidents might give the impression that AD is error-prone, unsophisticated, and unsafe.

Furthermore, trust in automation is not limited to a single user but extends to society as a whole. Besides individual vehicle functions that might be trusted differently, it represents a triple of strongly interwoven abstract concepts (Figure 1). People will accept safety-critical systems only if they perceive them to be trustworthy and can comprehend their ethical implications. The exact extent of each factor may change continuously based on the current context, the actual (driving and secondary / tertiary) tasks, and the current user. In our view, automated driving will only be successful if people experience benefits, like the possibility to perform additional side activities. Those, in turn, can only be performed effectively and without distraction if the underlying vehicle functions are trusted. In addition, other human factors of the operator (such as situation awareness, fatigue, or amount of training) must be considered.

Figure 1: Mutual influence of trust, acceptance, and ethical aspects based on the actual task, context and operator (left subfigure); high-level view on the personalized trust calibration model with different human (HD) and system dimensions (SD) (right subfigure).

As a consequence, it is essential to look at the aspect of trust in technology from a holistic point of view, as it becomes clear that “trust” is very personal and context- / situation-dependent. The process of adjusting an operator’s trust to fit a system’s capabilities (cf. appropriate trust) is called “calibration of trust” [5]. Such a calibration might be achieved by proper design of the human-machine interface: 1) quantifying the current levels of trust the operator attributes to important trust dimensions and 2) correcting mistrust and overtrust (if present) through the HMI, so that the driver trusts vehicle functions according to their capabilities in the current context. We can also assume that not only the vehicle has to be trusted but also the operator (exchange of truster and trustee, see Figure 1). A holistic model for trust calibration could use all data available about the operator and the automation, as well as the environment, to communicate trust-developing messages to the operator.

Nevertheless, such information must fit the needs of the operator as well as his / her knowledge and system experience (reporting the same information again and again might reduce his / her attention and cause important details to be missed in critical situations). With this work, we discuss in depth the process of trust calibration and give insight into recent user studies.

The rest of the paper is structured as follows: We begin with an overview of related work, followed by a discussion of the three scopes of trust calibration. First, we refer to distrust, as this is one of the main hurdles that must be overcome to contribute to the potential success of automated vehicles – other issues become relevant only if people start using the technology. We then discuss the safety-critical problem of overtrust before continuing to future driver-vehicle interaction. Only when systems are trusted appropriately can we fully benefit from measures that try to improve the quality of shared control. As trust is not a single concept, we further discuss various dimensions of trust and present the results of a user study targeting ethical aspects. Finally, we present our model of trust calibration, which is capable of representing the relevant aspects, and discuss how trust calibration could be communicated to the user.

2 Related Work

Trust is discussed in many domains and already has a long history in the field of Human-Robot Interaction (HRI). Hancock et al. [26], for example, performed a meta-analysis of work addressing HRI and suggest placing more emphasis on the human dimensions of trust development, even if they are said to play a minor role. Thill, Rivero and Nilsson [36] argue that people begin to “perceive vehicles as intelligent agents rather than mere tools”, and thus an automated vehicle can, at least to some extent, be seen as a robot. As anthropomorphism was identified as a relevant trust factor [10], AVs might also need to consider the “uncanny valley” [24] or similar problems discussed in the field of HRI.

When looking at definitions of trust in the field of AD, it becomes evident that the literature uses many different interpretations. Common to them is that 1) trust in technology builds on similar concepts as interpersonal trust, 2) trust differs based on the understanding of the technology, its capabilities and limitations, and 3) trust highly depends on objective measures of a system’s trustworthiness (e. g., system performance and its ability to communicate what it is currently doing or which steps are planned next; “why and how” information [16]).

The connection from interpersonal to human-automation trust was presented by Lee and See [15], who defined trust as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability”, resulting from analytic, analogical, as well as affective processes in a complex relationship between the “truster” (human) and the “trustee” (the automation to be trusted). Ekman et al. [10] summarized Lee and Moray’s [17] transformation of the three main trust-related components given by Mayer et al. [32] (ability, integrity and benevolence): “trust is built on the possibility to observe the system’s behavior (performance), understand the intended use of the system (purpose), as well as understand how it makes decisions (process)”. According to Marsh and Dibben [35], trust in automation from an operator’s perspective can be divided into three different layers:

  1. “Dispositional Trust” is given by our general trust in automation and is influenced by various factors, such as age, culture, gender, personality or recent events like automation catastrophes (see Introduction).

  2. “Situational Trust” is built on interaction with the system in a given context.

  3. “Learned Trust” can further be split into “Initial Learned Trust” (initial system understanding, attitudes or brand reputation) and “Dynamic Learned Trust” (experienced system performance and reliability), see [21].

Parasuraman and Riley [33] stated that interaction with an automated system can be classified with the terms use, disuse and misuse. Disuse reflects distrust and means that an automated function is not used by an operator, while misuse reflects overtrust (system usage under the wrong circumstances). Systems should thus always foster use and prevent misuse and disuse. Norman [8] argued that many automation catastrophes are the result of missing or inappropriate feedback rather than human error.

Recent work by Helldin et al. [37] investigated trust in connection with “Take-Over Requests” (TORs). In a user study, they showed that presenting uncertainty information to subjects resulted in increased take-over performance, even though it subjectively reduced trust in the system. Regarding TOR scenarios, Payre and colleagues [38] showed that overtrust can lead to increased reaction times. Hergeth et al. [34] found a correlation between monitoring frequency (analyzed via gaze behavior) and trust, which is one of the first attempts to quantify trust in real time, and Ekman, Johansson and Sochor [10] presented a holistic framework called the “lifecycle of trust” and identified 11 key factors affecting trust (Feedback, Error Information, Uncertainty Information, Why and How Information, Training, Common Goals, Adaptive Information, Anthropomorphism, Customization, Expert / Reputable and Mental Models). Hoff and Bashir [21] summarize current advances and open questions in trust research in general, while Walker et al. [11] directly address trust in vehicle technology.

3 The Scale of Trust Calibration

Three components shape trust in automation – calibration, resolution and specificity [15]. Resolution denotes the relation of trust to automation capability: with good resolution, changes in system capability map onto corresponding changes in trust, whereas with poor resolution this relationship deviates (e. g., large changes in capability correspond to only small changes in trust). Specificity “refers to the degree to which trust is associated with a particular component or aspect of the trustee” [15], describing the different modes and subfunctions and their changes over time (see Chapter 4, Dimensions of Trust and Ethical Decisions in Automated Driving). Calibration, in turn, describes the correspondence of trust and automation capability. We speak of distrust when trust is lower than capability, overtrust when trust exceeds capability, and appropriate trust when both correspond. Within this chapter, we discuss the scale of calibration in the context of automated driving.
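As an illustration of these three calibration states, the following minimal sketch (our own simplification, assuming trust and capability can be expressed on a normalized scale and using a hypothetical tolerance margin) classifies a quantified trust level against the automation’s capability:

```python
# Minimal sketch (not the authors' implementation): classify the calibration
# state of a quantified trust level against the automation's capability.
# Both values are assumed to be normalized to the range [0, 1].

def calibration_state(trust: float, capability: float, tolerance: float = 0.1) -> str:
    """Return 'distrust', 'overtrust', or 'appropriate trust'.

    `tolerance` is a hypothetical margin within which trust and capability
    are considered to correspond.
    """
    if trust < capability - tolerance:
        return "distrust"
    if trust > capability + tolerance:
        return "overtrust"
    return "appropriate trust"

# Example: a driver trusting an ADS far beyond its current capability.
print(calibration_state(trust=0.9, capability=0.5))  # -> "overtrust"
```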

3.1 Mistrust, Disuse and User Acceptance

Before talking about overtrust, proper system usage or improving the quality of shared control, we have to ensure that people actually use automated vehicle technology. Although a high personal interest in AVs is reported by many people, it is still not clear whether people will accept and use highly automated driving systems just because they are available. According to Schoettle and Sivak [6], only a few people are willing to invest in autonomous features and many respondents think that “automated vehicles may not perform as well as actual drivers”. A recent survey conducted in Austria [14] revealed that only 19 % of study participants believed that automated driving comes with higher safety, while a quarter feared an increasing number of accidents due to software failures. These views seem to be stable across industrialized countries, as our own studies confirm [18]. A vast majority (80 %) of subjects want to “gain control of the car in any situation” and would not allow a vehicle to overrule their input even in safety-critical situations. Only 22 % could imagine owning a car without lateral / longitudinal controls (like the Google self-driving vehicle), and more than a third fear that criminal organizations or even governments could use their vehicles against them. Garcia et al. [9] state that “attitudes towards various autonomous features” are not yet fully examined, which is highly important as “the individual trust will take a prominent role in the success of the future use of vehicles”.

Trust, user acceptance and attitude towards autonomous features are connected with dispositional trust in automation. Regarding automated vehicles, we are interested in differences in dispositional trust between people, for instance between different age groups, genders, education levels or other personality traits, as, according to Parasuraman and Riley [33], “large individual differences make systematic prediction of automation use by specific operators difficult”. Our first user study [31] thus dealt with the question of how people accept and trust ADSs. Most studies that try to investigate users’ perception of AVs are conducted in the form of opinion surveys and interviews, while only a few people have had the chance to ride in an AV. As people who use ADSs proverbially “lay their own lives, as well as those of their friends and families, in the hands of a complex computer system”, we wanted to find out how people trust ADSs, and whether they react differently in comparison to human drivers. Therefore, we set up a user study (N = 48) with three different groups in our high-fidelity hexapod driving simulator (moving platform). Each participant was instructed to take a seat in the driving simulator and act as front-seat passenger of either an ADS, a male, or a female driver. The trip was exactly the same for all subjects and contained various situations that would commonly be perceived as “dangerous” to provoke stress and arousal (unregulated urban junctions with dense traffic, overtaking maneuvers with an approaching truck, hairpin curves in a mountain range driven at high speed). In all cases the computer was driving; for the two groups with human (male, female) drivers, the experimenter merely acted as a “Wizard of Oz”, making subjects believe he / she was controlling the vehicle. As self-reports might conflict with observed behavior, we followed the suggestion of Hancock et al. [26] and included both subjective and objective measurements. To quantify trust and user acceptance we compared mental conditions and emotional states between the groups, combining physiological measurements (HRV recorded via electrocardiogram) with video recordings (to classify emotions and facial expressions) and subjective self-evaluation (PANAS, Affect Grid, Circumplex Model, as well as a self-designed survey and debriefing interviews). Regarding the physiological HRV measurements, we could not find any statistically significant differences in heart rate variability between the three groups (Figure 2). Nearly all subjects showed an increased stress tendency during the pre-defined dangerous driving situations.

Figure 2: Heart-rate variability (HRV) analysis. To compare the groups, we calculated the slope of the regression line fitting the 10 one-minute measurements (RMSSD; k < 0 indicates an increased stress tendency, while k > 0 indicates increased relaxation, left). On the right side, the individual slopes are plotted for the three groups. Evaluation revealed no statistically significant differences between them; most subjects had a slightly increased tendency of stress.
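The following sketch illustrates the kind of analysis described above (our own simplified reconstruction, not the original analysis scripts): RMSSD is computed per one-minute window of RR intervals, and the slope k of a regression line fitted over the ten windows indicates a stress (k < 0) or relaxation (k > 0) tendency.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (in ms)."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

def stress_tendency(windowed_rr):
    """windowed_rr: list of ten one-minute lists of RR intervals (ms).

    Returns the slope k of the regression line fitted to the per-window
    RMSSD values; k < 0 suggests an increased stress tendency, k > 0
    increased relaxation.
    """
    rmssd_per_window = [rmssd(window) for window in windowed_rr]
    minutes = np.arange(len(rmssd_per_window))
    k, _intercept = np.polyfit(minutes, rmssd_per_window, deg=1)
    return k
```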

We also recorded videos of all subjects in the experiment to compare their facial expressions during the trip. To guarantee similar circumstances, we told subjects not to talk to the experimenter (male or female driver). We then extracted a single frame for every second and classified the expressions with the Microsoft Oxford Emotion API, which can distinguish between seven basic facial expressions (happiness, surprise, sadness, fear, disgust, contempt, anger) and a neutral face. Presumably as a result of high tension, most faces were classified as neutral and showed no emotions. The only emotional expression with relevant extent was happiness. Front-seat passengers of ADSs showed very low happiness values (1.86 %), passengers of female drivers showed hints of satisfaction in at least 3.99 % of the images, while the highest value (7.6 %) was expressed by front-seat passengers of male drivers. The classified images revealed even more interesting details when further splitting the groups according to their gender. Driver-passenger pairs with the same gender showed similar values for happiness (5.04 % for female and 5.35 % for male pairs). Female passengers of male drivers were classified as happy in 9.27 % of the images, while only 1.56 % of male faces showed happiness when being passenger of a female driver (even lower than for the ADS, Figure 3).

Figure 3: Facial expressions of subjects as classified by the Microsoft Oxford Emotion API. Most classified facial expressions represent “neutral” faces (94.3 %). People seemed to be most happy when being front-seat passenger of a male driver, while female drivers and ADSs seemed not to be a source of great satisfaction.
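A rough sketch of such a classification pipeline is shown below (our own illustration; `classify_emotion` is a placeholder for the cloud-based emotion recognition call, whose exact interface is not reproduced here): one frame per second is extracted from each recording and the share of frames labeled as “happiness” is computed per subject.

```python
import cv2  # OpenCV, used here only to read the video recordings
from collections import Counter

def classify_emotion(frame):
    """Placeholder for the emotion classifier; expected to return the most
    likely label out of the seven basic expressions or 'neutral'."""
    raise NotImplementedError("call the emotion recognition service here")

def happiness_share(video_path):
    """Extract roughly one frame per second and return the share of frames
    classified as 'happiness'."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if metadata is missing
    labels = Counter()
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % int(round(fps)) == 0:
            labels[classify_emotion(frame)] += 1
        index += 1
    capture.release()
    total = sum(labels.values())
    return labels["happiness"] / total if total else 0.0
```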

Additionally, we investigated our subjects’ subjective feelings using multiple standardized techniques, namely the PANAS questionnaire and the Affect Grid. With PANAS, subjects have to rate the intensity of 10 positive and 10 negative feelings on a 5-point Likert scale. In the Affect Grid, subjects have to classify their emotional state on a two-dimensional plane where one dimension stands for “pleasure” and the other for “arousal”. In our experiment, subjects had to do this twice, once before and once after their driving experience. Considering the subjective evaluation of PANAS adjectives, the Affect Grid and self-designed surveys, no significant differences could be found (Figure 4). We further conducted unstructured interviews with 14 passengers of the group driven by the ADS and asked for three adjectives describing the experience. The adjectives were assigned to the Circumplex Model (Figure 4). In contrast to our quantitative measurements, the interviews showed that trust and acceptance regarding automated vehicles polarized – 8 out of 14 interviewed subjects stated that there was no difference between human drivers and ADSs: “It felt normal for me, there is no difference as if a person would sit next to me. I think each person I know drives much worse than an automated vehicle”. They further would not hesitate to take a trip in reality: “Yes definitely […] I think in some years it will be a standard and it will not be allowed anymore to drive on our own, only offside the roads or for parking”, or “Sure, I think errors can happen everywhere, and they happen less with computers than by human drivers who can be distracted”. In contrast, skepticism was reported by 5 subjects: “I would still like to keep control”. A reason for this is that some people like driving a car or have very low dispositional trust in AVs: “I trust in people, not in technology, if there is a human driver, it feels more concrete”.

Figure 4: The Affect Grid (left) is similar for all groups both before and after the experiment. The small differences in the chart are not statistically significant (at p < 0.05). On the right side, adjectives stated by subjects are arranged in the Circumplex Model with the font size according to their number of mentions. Italic type corresponds to new adjectives; regular keywords are the ones defined by Russell in the original model.

In summary, we could validate our hypothesis that an ADS (compared to human drivers) has only little effect on the mental conditions and emotional states of front-seat passengers, and that such minor differences can also be observed between male and female drivers. Taking into account that the trip was performed exactly the same way for all subjects (apart from the apparent driver), we might attribute these differences to already existing attitudes, like a gender bias that could explain why male passengers of female drivers showed less satisfaction (in terms of classified “happiness”) than the other pairings. The subjects’ knowledge of being in a driving simulator also likely influenced their stress levels, as the provoked risky driving situations could not result in real harm. Finally, being a passenger of an ADS does not seem to be a source of satisfaction, and besides increasing the dispositional trust of potential users, future vehicle HMIs must find strategies to maintain the fun of driving.

3.2 Misuse and Overtrust

Drivers who use an ADS or ADAS for a longer period of time are likely to adjust their behavior to the new situation, which can result in riskier behavior, as they believe the vehicle to be “intrinsically safer” [11]. This leads to an overtrust situation (levels of trust exceed the system’s actual trustworthiness), where drivers “rely uncritically on automation without recognizing its limitations or fail to monitor the automation’s behavior” [33]. A recent case of such a situation is the fatal accident of a Tesla Model S in May 2016, where both the automated system and the driver failed to recognize the white trailer of an oncoming truck turning left across the lane the vehicle was driving in; the system classified it as an overhead sign. We can reasonably assume that the driver would have been able to recognize the approaching danger had he been aware of the situation. Overtrust issues have, however, been present for a long time. For instance, Dickie and Boyle [7] showed that many subjects of a user study were not aware of the system limitations of an adaptive cruise control (ACC) system and used it in situations where it could not work. Similar effects were observed by Wilde with anti-lock braking systems (ABS) in the early 1990s [12]: after monitoring a large number of cabs in Munich, half of them equipped with ABS, it was found that the ones using ABS had an increased accident rate and were driven more riskily (“risk homeostasis”). Taking such effects into account might make it difficult to increase traffic safety before reaching the phase of fully automated vehicles, but properly designed vehicle HMIs could help to reduce the number of safety-critical situations resulting from overtrust – recent media articles (August 2016²) already reported that Tesla might deactivate the automated driving feature (“Autopilot”) if drivers repeatedly fail to keep their hands on the wheel. This can of course only be a first attempt to deal with misuse and overtrust (hands on the wheel do not guarantee situation awareness) – still, a holistic system must take advantage of extensive driver state assessment in combination with environmental / traffic risk estimation to calibrate situational trust and fight misuse. All in all, it leaves us puzzled why, after all these years of attested overtrust dangers, especially in the automotive domain, no countermeasures have found their way into available vehicles.

3.3 Appropriate Trust and Shared Control

An often mentioned advantage of automated driving is the emerging possibility to perform additional secondary tasks or side activities not possible with manually driven vehicles, such as relaxation, working, sleeping, reading, socializing, watching movies or even drinking alcohol [13], leading to a transformation of interior vehicle design. While in earlier forms of automation (like aviation) the primary objective was to increase performance and ease the management of the primary task, automated driving comes with a new dimension: instead of only substituting the driving task with a joint human-machine system, yet unknown side activities must be supported to fulfill an operator’s needs. Clearly, such a transformation will only be possible if trust issues are solved first – people who do not trust ADSs will hardly engage in complex side activities, while unrestricted attention to such side activities can lead to overtrust if drivers stop monitoring the automation. In our view, the roles of primary (driving) and secondary (side activity) tasks will switch in the near future, at least from a consumer’s perspective – drivers will also rate automated vehicles by the interaction quality and comfort of side activities, which will become the new primary activity in AVs.

As long as the phase of full automation is not reached, vehicles will provide automated driving systems only for specialized and limited environments (such as highway driving or congestion assistance), and will regularly issue TORs to demand manual control from the driver. This leads to new problems also associated with trust: drivers engaging in side activities have to switch between mental models of arbitrary activities and quickly regain awareness of the current driving situation. It is already known that driving performance immediately after re-engaging in the driving task decreases with shorter take-over times [22]. Drivers will thus have to trust their vehicles to present such TORs properly, giving them the right amount of time to re-engage in the driving task, while not annoying them with unnecessary requests when they are in the middle of important activities (as false alarms can play an important role in trust development). To increase the quality of shared control and especially of take-over situations, we have proposed a system that tries to issue TORs only at emerging task boundaries [29], see Figure 5. It has been shown that notifications provoking task switches work best at task boundaries or at times of low mental workload [4], while random notifications lead to stress and cognitive overload (which is anything but ideal in safety-critical driving situations). Such a system could, for instance, monitor a driver’s activities with commonly used devices like smartphones, tablet computers or notebooks and / or use video observation of the interior to issue a TOR when appropriate (after completing a paragraph when writing emails, after switching to a new page when reading, at editor cuts when watching a video instead of anywhere in between, between songs or in breaks of a conversation, if possible, etc.).

Figure 5: Context-sensitive Take-Over Requests: Emerging obstacles can be propagated to following vehicles, giving them an extra phase to time a TOR in a way that does not disturb the driver during times of high cognitive load.
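A highly simplified scheduling sketch is given below (our own illustration, not the proposed system itself; it assumes the vehicle knows the remaining lead time until manual control is required and receives task-boundary events from the driver’s devices, and `min_reaction_s` is a hypothetical minimum time budget for regaining situation awareness):

```python
import time

class TakeOverScheduler:
    """Issue a take-over request (TOR) at the next task boundary if possible,
    otherwise at the latest acceptable moment before the take-over deadline."""

    def __init__(self, issue_tor, min_reaction_s=7.0):
        self.issue_tor = issue_tor        # callback that actually presents the TOR
        self.min_reaction_s = min_reaction_s
        self.deadline = None              # absolute time at which manual control is needed

    def on_obstacle_reported(self, seconds_until_manual_control):
        """Called when an obstacle propagated from vehicles ahead is received."""
        self.deadline = time.monotonic() + seconds_until_manual_control

    def on_task_boundary(self):
        """Called by device integration, e.g. 'paragraph finished' or 'page turned'."""
        if self.deadline is not None:
            self._fire("task boundary")

    def tick(self):
        """Called periodically; forces the TOR if no boundary occurred in time."""
        if self.deadline is None:
            return
        if self.deadline - time.monotonic() <= self.min_reaction_s:
            self._fire("deadline approaching")

    def _fire(self, reason):
        self.issue_tor(reason)
        self.deadline = None

# Usage example: print the TOR instead of showing it in a real HMI.
scheduler = TakeOverScheduler(issue_tor=lambda reason: print("TOR issued:", reason))
```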

Designing systems that issue TORs with respect to a driver’s engagement in side activities and situation awareness could improve productivity in secondary tasks and increase the quality of driver-vehicle interaction while reducing the number of accidents and safety-critical situations. Less perceived stress in situations that demand manual control recovery could further increase trust in automation, leaving the impression that the vehicle “cares about the driver”.

4 Dimensions of Trust and Ethical Decisions in Automated Driving

Although users will integrate their experience with a system as a whole (“system trust”), trust in automation cannot be seen as a single factor; it rather has multiple dimensions whose relationships may vary between different human operators. Some could assign trust to different subsystems, vehicle tasks or actions, such as “trust in navigation”, “trust in lateral control”, “trust in overtaking maneuvers”, “trust in Take-Over timing and communication” or “trust in correct ethical decisions” (although ethics must be seen as a related concept rather than a trust dimension, they overlap due to their mutual influence). Such dimensions refer to “specificity”, and each of those dimensions, as well as their functional synergy in the context of automated driving, has to be investigated.

Ethical decisions in particular are a widely discussed topic these days. Automated vehicles will encounter situations where they have to decide on their own between two or more options with potentially lethal outcomes for other road users. Some vehicle manufacturers argue that this discussion is pointless, as their vehicles will always drive slowly enough to stop safely – but what if a child chases a ball into the street when the vehicle is not expecting it? To ensure that society as a whole can trust automated vehicles, a public discussion about machine ethics and moral agents is highly important. Goodall [25] states that these decisions must be rational and comprehensible; if the issue is not taken seriously, the public might reject the technology when accidents become reality [28]. Bonnefon et al. [19] suggest approaching the problem using experimental ethics, and a moral dilemma widely used in such experiments is the so-called “Trolley Problem”. In this dilemma (translated to automated driving), an AV (represented by study participants) has to decide whether it should go left and run over person A or go right and hit person B, assuming that a safe resolution of the situation is no longer possible. The number of and details about the persons associated with the options can easily be varied to investigate a particular aspect. Bonnefon and colleagues [20] have already conducted multiple studies with variations of the Trolley Problem and found that most people want vehicles to decide in a utilitarian way (minimizing the negative outcome for society as a whole, even if this means running into a wall and killing the driver to save multiple pedestrians), but most people would not buy a utilitarian vehicle for themselves. These experiments were mostly conducted online and offered options in which the involved persons are definitely killed; the authors suggested including uncertainties and risk in future experiments.

Thus, we adapted the problem to investigate the decisions of drivers directly confronted with the situation [3]. Subjects (40 people between 19 and 50 years, 7 female / 33 male, from several continents: 2 Africa, 8 Asia, 26 Europe, 3 North America, 1 South America) sitting in a driving simulator were presented with multiple variations of the dilemma. They always had to decide – anonymously – whether they wanted to take option A (swerve and kill pedestrians) or option B (stay on track and hit an obstacle). In our experiment, hitting the obstacle was not necessarily associated with being killed – we presented a percentage value to the subjects indicating their “probability of survival” when hitting the obstacle. We thus wanted to find out under which conditions drivers would sacrifice themselves and provoke an accident to save the pedestrians. Additionally, we varied the number of affected pedestrians (1 or 5) as well as their age and personality (children, seniors or the “best friend”). Our hypothesis was that the decision strongly depends on the presented “probability of survival”, the number of people affected, as well as their age / personality.

Our results indicate that many people still want vehicles to act in a utilitarian way, even when they are directly affected by the decision. Even with a zero probability of survival, many people would sacrifice (and thus kill) themselves, especially when their vehicle would otherwise kill children (Figure 6). This seems illogical, but was backed by qualitative interviews we conducted with our subjects: “I would never want to possess a vehicle that kills people”. Still, some subjects mentioned that they would not have to feel guilty when the vehicle takes the decision, even though they would have to live with it. Another group of subjects was less merciful and argued that if they pay for an automated vehicle, it has the responsibility to save them under any circumstances, no matter who would be affected by such a decision. Surprisingly, only a very small chance of survival (25 %) was needed to make people risk the accident. Of course, some decisions might have been influenced by a social desirability bias emerging from the experimental setting. Nevertheless, we might partly explain the small chance of survival needed to risk one’s own life with the optimism bias: people often overestimate their own control over a situation and believe that they belong to the small group that will not have to deal with the worst consequences.

Figure 6: Results of the ethics user study. The higher the chance of survival, the fewer subjects are willing to sacrifice pedestrians for their own good (left subfigure). Different groups associated with option A (right subfigure): while it often seems legitimate to sacrifice a single person to save one’s own life, it makes a difference with multiple persons. Only very few people would sacrifice 5 children to save their own life, while sacrificing 5 elderly people still feels acceptable to a larger group of subjects.

The effect of the “chance of survival” was indeed statistically significant, as were the two other dimensions, namely the size of the group and the age of the victims. With an increasing number of people affected by the decision, people tend to risk their own lives instead of the lives of others. The age of the victims also plays an important role in decision making. This effect becomes more evident with a larger group size – 30 % would sacrifice one child and 40 % one senior, but only 7.5 % would sacrifice 5 children, in contrast to still 25 % for 5 seniors. Furthermore, we asked subjects whether they would like their vehicle to behave exactly the same way as they did in the experiment. Most subjects were not sure; some stated that they would never want their decisions to find their way into a real algorithm. In summary, there is no right or wrong in those decisions, but it still has to be investigated which approaches might find consensus within the public. Although some correlations were observed, we found strong diversity in the underlying moral arguments of subjects. As such argumentation can result from different personality traits, this implies strong differences in their dispositional trust [21]. Taking the results of our study into account, the size of the group and the age of the victims could act as valid decision criteria. When thinking about other potential parameters that could be used to differentiate between people, some have already mentioned aspects like social status or even education level [2]. It has to be mentioned that, even if some of these might make sense from a logical point of view, implementing decisions based on such properties would violate the Universal Declaration of Human Rights, which states that all human lives have to be treated equally. Nevertheless, we cannot avoid such discussions when facing uncomfortable implications of future technology – ethical and moral standards were developed throughout the history of mankind, making modern societies what we call “civilized”, but we still have to accept that these represent only today’s norms and not necessarily the top of the ladder.

5 An Implementation for Real-Time Trust Calibration

Trust must permanently correspond to a system’s capabilities to remain “appropriate”, but system performance also strongly depends on context. Environmental properties, such as road conditions or traffic volume, that influence the performance of ADSs, as well as driver properties (situation awareness, engagement in secondary tasks, etc.), might change even within a single trip and demand permanent observation of trust levels for proper re-calibration. A re-calibration of trust to increase / decrease a driver’s monitoring frequency could be achieved with different measures, for instance retraining, presenting “why and how” information or even an intended performance drop. The form of messages can vary in detail, amount, mode or degree of anthropomorphism. A system for trust calibration can issue such messages and, by observation, learn from a driver about their impact over time to personalize the calibration process for each individual automatically. Therefore, we need to 1) build models capable of representing all the necessary information, 2) be able to quantify a driver’s trust levels, and 3) find methods to communicate and re-calibrate his / her trust.

5.1 A Theoretical Model of Trust Calibration

We have already discussed potential diversity in the dispositional trust of drivers, serious threats resulting from overtrust, as well as the multi-dimensionality of the problem. To finally be able to integrate such findings into a model that supports trust calibration with respect to varying personalities of users, we have proposed a model for flexible trust calibration (Figure 7; for a self-contained model description see [30]). The model assumes that trust levels of a user can be quantified in real time (x-axis) and calibrated by personalized messages (level of feedback, y-axis) to fit the actual trustworthiness of an automated function. The automated function itself can be described by its trustworthiness (or reliability r’), which varies based on the current context in which it is used. We define trust to be perfectly calibrated when the quantified trust level exactly matches trustworthiness. The function further has a well-defined criticality (c’) that specifies the (inverse) width of the range of appropriate trust. A function with criticality 1 has an appropriate trust range of zero, meaning trust is already miscalibrated when it deviates only slightly from the system’s capabilities. A system with high criticality could be a highly automated highway driving system, where even small errors in trust calibration can quickly become a safety issue, while non-safety-critical functions, like navigation systems, allow a larger deviation. Trustworthiness / reliability r’ and criticality c’ of the automated function in a given context may be inferred from various sources like sensor and vehicle data, accident and test statistics, map data, current weather and lighting conditions, traffic volume, etc.
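Read this way, one possible formalization (our own sketch, assuming trust $t$, trustworthiness $r'$ and criticality $c'$ are normalized to $[0, 1]$ and that the range of appropriate trust $a'$ is centered on $r'$; the exact formulation in [30] may differ) is:

$$
a' = \Big[\, r' - \tfrac{1 - c'}{2},\; r' + \tfrac{1 - c'}{2} \,\Big], \qquad
t < r' - \tfrac{1 - c'}{2} \Rightarrow \text{distrust}, \quad
t \in a' \Rightarrow \text{appropriate trust}, \quad
t > r' + \tfrac{1 - c'}{2} \Rightarrow \text{overtrust}.
$$

For $c' = 1$ the band collapses to the single point $r'$, matching the description above.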

Figure 7: The diagram shows the level of feedback needed to push a user’s quantified trust level into the range of appropriate trust a’ for two exemplary systems, an ADS like Tesla Autopilot or the Google Self-Driving Car Project (black) and an arbitrary navigation system (dotted grey). The parameters are chosen for explanatory purposes. System criticality c’ reflects the range of appropriate trust for a particular system.

The level of feedback needed to calibrate a user’s trust may now follow a U-shaped function: the larger the deviation of his / her trust level from the point of actual trustworthiness (appropriate trust), the higher the level of feedback needed for re-calibration. When a user starts working with the system, his / her trust levels are quantified and reside somewhere on the graph(s). This initial position is given by his / her dispositional trust in the system and can vary strongly based on personal characteristics, brand reputation, possible training, etc. For instance, a novice user still mistrusting the system (Un) is issued a high level of feedback to push his / her trust levels into the range of appropriate trust (a’). Analogously, a user facing overtrust would reside on the other side of the spectrum (Uo; this can also represent a novice user and does not necessarily mean he / she already has experience working with the system).
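A minimal numerical sketch of such a U-shaped feedback policy is given below (our own illustration under the assumptions stated for the formalization above; the quadratic shape and the gain parameter are arbitrary choices, not taken from [30]):

```python
def feedback_level(trust, trustworthiness, criticality, gain=4.0):
    """Return a feedback level in [0, 1] for re-calibrating a user's trust.

    trust, trustworthiness (r') and criticality (c') are assumed to be
    normalized to [0, 1]; `gain` is an arbitrary steepness parameter.
    Inside the appropriate-trust band no corrective feedback is issued;
    outside, the required feedback grows quadratically (U-shaped) with the
    deviation from the band.
    """
    half_band = (1.0 - criticality) / 2.0      # half-width of the range a'
    deviation = abs(trust - trustworthiness)
    excess = max(0.0, deviation - half_band)   # distance outside the band
    return min(1.0, gain * excess ** 2)

# Example: a distrusting novice (trust 0.2) facing a highly critical ADS with
# trustworthiness 0.7 receives strong feedback, whereas the same trust level
# needs little correction for a low-criticality navigation function.
print(feedback_level(0.2, 0.7, criticality=0.9))  # high feedback
print(feedback_level(0.2, 0.7, criticality=0.1))  # little feedback
```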

After estimating the parameters r’ and c’ of the model for each dimension of trust in the context of automated driving, personalized (multi-modal) messages can then try to calibrate the trust levels of arbitrary users. The remaining questions are: how can situational trust be quantified, and what kind of messages are useful for trust calibration?

5.2 Quantifying and Communicating Trust

To allow a function to calibrate an individual’s trust, his / her current trust levels (remember: along various dimensions) somehow have to be quantified. Most methods aiming for trust calibration today use subjective scales and need active input from the user, which is not applicable in a real-time context. Walker, Stanton and Salmon [11] suggest using primary task measures in the form of predictability – when a system is able to predict when certain automated features are used, it is likely that the automation is also being used properly. A vehicle could detect when a certain feature is enabled by a user and check, based on the current environment and traffic situation, whether this fits the intentions of the designer. If someone enables an ADS designed for highway driving on a rural street, the user might overtrust the system. If he / she never enables a certain function, he / she might distrust the system. Combined with additional measurements provided by driver state assessment, such a feature could become very powerful.

Hergeth et al. [34] showed a connection between trust and monitoring frequency by analyzing the gaze behavior of study participants. Monitoring frequency might be easily determined by video analysis of the driver and the vehicle interior and can act as a trust measure – users who monitor a function permanently might be in a state of distrust, while users who never monitor a function will likely face overtrust. Not only monitoring frequency, but also the choice of side activities, situation awareness or mental workload might reflect a user’s trust levels and could act as additional measures to make the quantification process more robust.
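As an illustration (our own sketch; the gaze labels are assumed to come from an eye tracker or an interior camera), monitoring frequency could be estimated as the share of gaze samples directed at the road scene over a sliding window and mapped to a rough trust indicator using hypothetical thresholds:

```python
from collections import deque

class MonitoringEstimator:
    """Estimate monitoring frequency from a stream of gaze labels
    ('road' vs. anything else) over a sliding window of samples."""

    def __init__(self, window_size=600):  # e.g. 60 s of samples at 10 Hz
        self.samples = deque(maxlen=window_size)

    def add_gaze_sample(self, region):
        self.samples.append(1 if region == "road" else 0)

    def monitoring_frequency(self):
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

    def trust_indicator(self, low=0.1, high=0.9):
        """Hypothetical thresholds: (almost) permanent monitoring hints at
        distrust, (almost) no monitoring hints at overtrust."""
        frequency = self.monitoring_frequency()
        if frequency > high:
            return "possible distrust"
        if frequency < low:
            return "possible overtrust"
        return "plausible calibration"
```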

As soon as the trust levels of a user are quantified and compared to the actual trustworthiness of the function, the system can determine whether feedback influencing trust must be issued. According to Lee and See [15], such information can be defined along the dimensions of detail and abstraction to support analytic-, analogical- and affect-based trust development. Influencing trust might be achieved with messages containing “why and how” [16], error, and uncertainty information, presented with a varying degree of anthropomorphism [10]. Such messages should be tailored to each user individually to fit his / her preferences [21] and system experience. A framework that maps driving tasks performed by automated vehicles into trust-affecting messages by targeting the three layers of vehicle control (operational, tactical, strategic) has been developed by Mirnig et al. [1]. The concept is based on the idea that each vehicle operation can be described by actions within the three layers and that, based on user experience and context, only a subset is presented in the form of why and how information to “match the human’s expectations in the system”. Displaying information that influences risk judgement might also influence trust [15]. A vehicle could, at least for a short time window, change its driving style (or just present uncertainty information as done by Helldin et al. [37]), appearing not to have the situation fully under control. When done carefully, this could increase monitoring frequency and reduce overtrust.

6 Conclusion

In this work we have shown that trust in automated vehicles is an important topic for traffic safety and the quality of vehicle control. Although trust in automation has a long history in other domains like aviation, the concepts cannot easily be transferred to the domain of automated driving due to the wide variety of users and environments, indicating that trust research in AD is just at its beginning. Multiple recent accidents and safety-critical situations indicate that trust calibration for drivers is a must-have in future vehicle generations to match drivers’ situational trust to the systems’ actual capabilities. By combining recent advances in trust research, driver state assessment, physiological sensing, computer vision and artificial intelligence, it might be possible to implement real-time trust calibration that is sensitive to different vehicle tasks / subsystems and finds calibration strategies for arbitrary users automatically.

Furthermore, we have shown that dispositional trust and attitudes towards automated vehicles differ between users and that trust research is not only a safety issue but also (by highlighting the need for moral vehicles) a philosophical issue for society as a whole. Automation trust will shape the quality of interaction with vehicles in particular and other automated systems in general.

About the authors

Philipp Wintersberger

Philipp Wintersberger is a research assistant at the research center CARISSMA (Center of Automotive Research on Integrated Safety Systems and Measurement Area) at the University of Applied Sciences Ingolstadt (THI). After finishing the Federal Higher Technical College for Informatics in Leonding, he studied Computer Science and obtained his diploma at the Johannes Kepler University Linz, specializing in Human-Computer Interaction and Computer Vision. He worked for 10 years as a software engineer / architect in professional software development (in the fields of Business Process Management and Mobile Computing) and was repeatedly invited to give talks about mobile and software development. In January 2016, he accepted a position as PhD candidate in the area of Human Factors & Driving Ergonomics at THI. His research interests focus on human factors in automated driving, especially trust in automation, ethics and driver state assessment.

Andreas Riener

Andreas Riener is professor for Human-Machine Interaction and Virtual Reality in the Faculty of Electrical Engineering and Computer Science at the University of Applied Sciences Ingolstadt (THI). He has a co-appointment at the research center CARISSMA (Center of Automotive Research on Integrated Safety Systems and Measurement Area) in the area of Human Factors & Driving Ergonomics. Riener leads the degree program User Experience Design and is the head of several labs at THI (UXD, driving simulator).

His research interests include driving ergonomics, driver state assessment from physiological measures, human factors in driver-vehicle interfaces and topics related to (over)trust, user acceptance, and ethics in automated driving. His focus is hypothesis-driven experimental research in the area of driver and driving support systems at various levels (simulation, simulator studies, field operational tests, naturalistic driving studies). One particular interest is in the methodological investigation of human factors in driving (emotional state recognition: detection of stress, fatigue, cognitive overload, situation awareness; trust in and acceptance of technology, etc.). Furthermore, his research interests include cyber-physical (automotive) systems, augmented reality (AR) applications and virtual reality (VR) environments, and novel interaction concepts for automated driving including communication strategies, ethical and legal aspects, and safety and security issues (hacking, identity preservation).

Prof. Riener’s research has yielded more than 100 publications across various journals and conference proceedings in the broader field of sensor / actuator (embedded) systems, (implicit) human-computer interaction, human vital state recognition, or context-sensitive data processing. He has presented his research findings in more than 50 conference talks, was invited to teach courses at universities in Austria, Germany and US and to give keynote talks at several conferences. He was further invited as expert, consultant and key contributor to various workshops. Furthermore, he was engaged in several EU- (FP7 SOCIONICAL, FP7 OPPORTUNITY) and industrial funded (SIEMENS P2P, FACT) research projects and has been long-time reviewer for conferences (including PERVASIVE, UBICOMP, CHI, ISWC, AmI, EuroSSC) and journals (such as IEEE PCM, IEEE ITS, Springer PUC, etc.) in the pervasive / ubiquitous / automotive / embedded / networking domain. In June 2016 he was one of the co-organizers of the Dagstuhl seminar 16262 on “Automotive User Interfaces in the Age of Automation”.

Acknowledgements

This work is based, in part, on discussions with participants of the Dagstuhl Seminar 16262 “Automotive User Interfaces in the Age of Automation”, http://www.dagstuhl.de/16262.


Published Online: 2016-12-09
Published in Print: 2016-12-01

© 2016 Walter de Gruyter GmbH, Berlin/Boston
