Publicly Available. Published by Oldenbourg Wissenschaftsverlag, August 3, 2019

Drivers’ Individual Design Preferences of Takeover Requests in Highly Automated Driving

  • Stefan Brandenburg

    Stefan Brandenburg (PhD) studied psychology at the Technische Universität Chemnitz and the University of Oklahoma, USA. Since 2008, he has been a research associate at the chair of Cognitive Psychology and Cognitive Ergonomics at the Technische Universität Berlin. He is a co-founder and chair of the IPA ethics commission. His research interests include the integration of ethical aspects into human factors research, temporal changes of affect and emotion, and the design of highly automated driving systems.

    and Sandra Epple

    Sandra Epple (M.Sc.) studied psychology at Humboldt-Universität zu Berlin and Computational Engineering Science at Technische Universität Berlin, Germany. Since January 2018, she has been a research associate at the chair of Cognitive Psychology and Cognitive Ergonomics at the Technische Universität Berlin. Her research topic is driver-vehicle interaction in the field of highly automated driving.

From the journal i-com

Abstract

Highly automated cars will be on the world's roads within the next decade. In highly automated driving, the vehicle's lateral and longitudinal controls can be passed from the driver to the vehicle and back again. The design of a vehicle's take-over requests will largely determine the driver's performance after taking back vehicle control. In the scope of this paper, potential drivers of highly automated cars were asked about their preferences regarding the human-machine interface design of take-over requests. Participants were asked to evaluate eight different take-over requests that differed with respect to (a) take-over request procedure (one-step or two-step procedure), (b) visual take-over request modality (text or text and pictogram), and (c) auditory take-over request modality (tone or speech). Results showed that participants preferred a two-step procedure using text and speech to communicate take-over requests. A subsequent conjoint analysis confirmed that take-over requests should ideally use speech output in a two-step procedure. Finally, a detailed evaluation showed that the best take-over request interface received significantly higher user experience ratings regarding product characteristics as well as users' emotions and consequences of product use than the worst take-over request interface. Results are related to the background literature and practical implications are discussed.

1 Introduction

Highly automated cars are estimated to be on European roads between 2020 and 2025 (ERTRAC [11]; UK Department of Transportation [36]). Long anticipated, they have become more likely through advances in computerisation, the development of on-board sensors [20], [38], [22], and changes in legal restrictions (ECE/TRANS/WP.1/145 [8]). In 2014, a set of amendments was added to the 1968 Vienna Convention on Road Traffic. These amendments paved the way for accelerated research and development efforts in the field of highly automated driving. Accordingly, drivers can pass vehicle control to a highly automated system. However, drivers remain responsible for their vehicle at all times and must be able to resume control whenever they have to or wish to do so [6]. This presumes that drivers understand the take-over situation and the steps necessary to resume vehicle control in a safe and efficient manner. The human-machine interface (HMI) plays a crucial role in providing drivers with an understanding of the driving situation and the necessary actions, as it mediates the car-to-driver and driver-to-car interaction. In the scope of highly automated driving, the HMI communicates necessary transitions of control from automated to manual driving via take-over requests. The design of the HMI that issues take-over requests has a significant influence on the safety outcomes of automated systems [6]. HMI design, for example, influences drivers' reaction times when resuming vehicle control [39]. The amount of information given to drivers should match the necessary driver actions. Ideally, drivers are able to intuitively understand the interface that issues a take-over request, because intuitive interaction is fast, unconscious, and automatic [19]. But how should take-over requests be designed? Is there a combination of HMI design aspects that results in take-over requests with high usability, usefulness, and attractiveness?

To date, no study has systematically varied design aspects of take-over requests and assessed their impact on the HMI's perceived usability and user experience ratings. The present study takes a first step by systematically varying three important HMI design aspects: take-over request procedure (one-step or two-step procedure), visual take-over request modality (text or text and pictogram), and auditory take-over request modality (tone or speech). It aims at finding the take-over request HMI that participants prefer within this three-dimensional design space.

2 Related Work

In the literature, various HMI designs have been used to request drivers to take back control of the vehicle. One design aspect that has been investigated frequently in the scope of take-over requests is their modality. Take-over requests have been presented in visual, auditory, and tactile modalities. Naujoks, Mai, and Neukum [26] compared visual to visual-auditory take-over requests while varying the difficulty of the take-over situation. The visual take-over request consisted of a pictogram of the steering wheel and the auditory take-over request was a warning tone. They found that visual-auditory take-over requests led to better driving outcomes with respect to take-over time and lateral vehicle control, especially in difficult take-over situations. Petermeijer, Bazilinskyy, Bengler, and de Winter [29] examined auditory, vibrotactile, and combined auditory-vibrotactile take-over requests. Auditory take-over requests consisted of a single pair of tones and vibrotactile take-over requests were vibrotactile pulses issued by the driver seat. They showed that a multimodal take-over request composed of auditory and vibrotactile stimuli was superior to unimodal take-over requests regarding drivers' reaction time and usefulness ratings. Politis, Brewster, and Pollick [30] combined visual, auditory, and tactile take-over requests while varying the urgency of the take-over situation. Auditory take-over requests utilised single tones, visual take-over requests consisted of a coloured circle, and tactile take-over requests were vibrations issued by a waist belt. The results of the study indicated that unimodal visual, auditory, and tactile take-over requests did not differ regarding drivers' reaction time. However, combining modalities into multimodal take-over requests led to faster driver reactions than unimodal take-over requests.

All studies that are mentioned above used non-verbal take-over requests in differing modalities (colours, single tones, vibrations). However, take-over requests can also consist of verbal material, like written text and speech output. Politis, Brewster, and Pollick [31] compared verbal visual (written text) and verbal auditory (speech) take-over requests. The written text and the speech output informed drivers about the driving situation and the urgency to take over. Speech output was associated with considerably shorter reaction times, higher urgency ratings and higher alerting effectiveness ratings than written text output. Combining text and speech output into a multimodal take-over request resulted in higher urgency ratings and alerting effectiveness ratings compared to unimodal take-over requests. However, a multimodal take-over request was not associated with shorter reaction times than unimodal take-over requests consisting of speech output. In summary, literature on the modality of take-over requests suggests that using multimodal material containing speech output may be beneficial for triggering transitions of control in highly automated driving.

Another design aspect of take-over requests that has recently emerged in the literature is the take-over procedure. The take-over procedure refers to the number of times a take-over request is issued within a take-over situation. Most studies have used a one-step take-over procedure (e. g. Naujoks et al. [26]), in which drivers are prompted to take over control of the vehicle at a single point in time. In two-step take-over procedures, drivers are prompted to take over control at two points in time: an initial warning is followed by an alarm [10]. Drivers preferred a two-step procedure over a one-step procedure in a simulator study conducted by Walch, Lange, Baumann, and Weber [37]. However, take-over times for two-step take-over procedures were longer than for one-step procedures. Epple et al. [10] studied the impact of two-step take-over requests in various modalities on drivers' take-over behaviour. They found that two-step take-over requests with speech output resulted in shorter reaction times than two-step take-over requests with text output and an unspecific beep tone.

This brief literature review shows that most take-over request interfaces incorporate a one-step or two-step procedure for passing the vehicle's controls to the driver, a graphical HMI visualisation, and a tone to draw the driver's attention to the take-over request. The present study is the first to systematically vary these aspects of take-over request interfaces and examine how drivers perceive them. In addition, it aims at finding the optimal configuration for a human-machine interface issuing the take-over request in a three-dimensional design space: one-step or two-step procedure, text or text and pictogram visualisation, and tone or speech output. Different take-over request designs should affect participants' general evaluation and ranking of the presented interfaces.

3 Material and Methods

3.1 Participants

A total of 53 participants (26 females, 26 males, 1 no answer) took part in the study. Their mean age was 32 years (SD = 9.86). Forty-eight participants (90 %) possessed a valid driver's license; the other 10 % were student drivers. Forty-two participants (79 %) held a university degree; the other participants were regular employees with varying diplomas or no qualifying degree at all (9 %). Twenty-two of the participants (41 %) were students and 20 (37 %) were employed. The other participants were self-employed. The study was reviewed by the local ethics committee and all participants signed an informed consent form.

3.2 Design of the Human-Machine Interface for Take-Over Requests

Table 1 shows screenshots of each of the take-over request interfaces. Take-over request interfaces differed regarding the following three dimensions: take-over request procedure (one-step or two-step procedure), visual take-over request modality (text or text and pictogram), and auditory take-over request modality (tone or speech). A one-step procedure issued auditory and visual take-over requests at one point in time whereas a two-step procedure issued an initial warning followed by an alarm. The warning was: “Warning – Roadworks ahead – Take-over vehicle control soon” and the alarm was “Alert – Take-over vehicle control now”. The visual take-over request modality differed with respect to the additional information that was provided by the pictogram. It visualised a standard road works sign for the warning and the vehicle’s pedals and steering wheel for the alarm if applicable (see take-over request interface no. 6 in Table 1). Finally, auditory information consisted of a non-specific beep or a mechanical voice reading the warning and the alarm that was displayed on the dashboard.

Take-over request interfaces were dynamic in the sense that they were presented as short film clips, 8 seconds long for a one-step take-over request and 16 seconds for a two-step take-over request. Each film clip followed the same plot: the first 3 seconds showed the dashboard of the vehicle, followed by the take-over request, which lasted for about 3 seconds. The take-over request then faded, and another 2 seconds showing the dashboard ended the clip smoothly. In the two-step take-over procedure condition this plot was repeated a second time, which doubled the clip's length. The videos of the best and the worst rated interfaces can be accessed in the supplementary material.

Table 1

Screenshots of the 8 take-over request interfaces.

One-step procedure: alarm only; two-step procedure: warning and alarm.

|        | One-step: text | One-step: text + pictogram | Two-step: text | Two-step: text + pictogram |
|--------|----------------|----------------------------|----------------|----------------------------|
| tone   | no. 1          | no. 2                      | no. 5          | no. 6                      |
| speech | no. 3          | no. 4                      | no. 7          | no. 8                      |
  Note. The loudspeaker symbolises a simple beep tone; the talking head visualises speech output of the warning “Roadworks ahead! Take-over vehicle control soon” and/or the alert “Take-over vehicle control now” if applicable.

3.3 meCUE Questionnaire

The meCUE questionnaire is based on the Component of User Experience model of Thüring and Mahlke [35]. The instrument allows for the modular evaluation of key aspects of User Experience. It consists of five modules relating the perception of different product characteristics (usefulness, usability, visual aesthetics, status, commitment) to users’ emotions (both positive and negative emotions) and to consequences of product use (product loyalty and intention to use). The fifth module allows for a global assessment of the product. Participants are asked to answer the items on a 7-point Likert-type scale ranging from 1 (disagree) to 7 (agree), except for the general evaluation item that provides a scale ranging from 0 to 10. The questionnaire has been subject to an extensive validation procedure (Minge, Thüring, & Wagner [25]; Minge, Thüring, Wagner, & Kuhr [24]). We used the following dimensions of the meCUE: utility (3 items), perceived usability (3 items), visual aesthetics (3 items), positive emotions (6 items), negative emotions (6 items), intention to use (3 items), product loyalty (3 items), and general evaluation (1 item) for evaluating the participants’ personal highest ranked and their personal lowest ranked interface.

3.4 Procedure

Participants were invited to participate in the online study via facebook™ and other social media platforms. When they clicked on the invitation link, they were forwarded to SociSurvey, the online tool that was used for data collection.

The online study consisted of four parts: (1) an introduction, (2) a general evaluation of each take-over request interface, (3) a ranking of the take-over request interfaces, and (4) a detailed evaluation of the personal best and worst take-over request interfaces. The introduction explained the general procedure and obtained the participants' consent to data collection. Participants answered some demographic questions, followed by a brief description of the take-over request interfaces and the evaluation procedure. It was highlighted that participants would need to switch on their loudspeakers.

In the general evaluation section, each of the 8 take-over request interfaces was shown separately; the sequence of interfaces was randomised for each participant. Each evaluation page showed a short introductory text at the top of the screen asking participants to imagine sitting in a highly automated car and reading text messages on their smartphone. Participants were instructed to play a short film clip, located below the introductory text, to see the take-over request that the car would issue when asking them to resume control over their vehicle. After playing the video, they were asked to evaluate the take-over request interface on the following four items: (a) This take-over request interface is intuitive, (b) I find this take-over request interface useful, (c) I find this take-over request interface attractive, and (d) I find the amount of information appropriate. Participants rated each interface on a scale ranging from 0 (totally disagree) to 100 (totally agree) by moving a slider.

In the third part of the questionnaire, participants were asked to put the interfaces in descending order with the best interface on top (rank 1) and the worst interface at the bottom (rank 8). Take-over request interfaces were shown via the screenshots depicted in Table 1. Finally, in the last part of the questionnaire, participants were asked to evaluate their personal best and their personal worst interface using the dimensions of the meCUE (Minge, Thüring, Wagner, & Kuhr [24]) that were mentioned in Section 3.3. This evaluation of the best and worst interfaces was conducted to derive more detailed insights into the reasons for the preference structure regarding take-over request interfaces. The whole study lasted about 25 minutes.

Table 2

Effects of the take-over request characteristics on the participants’ ratings of intuitiveness, usefulness, attractiveness, and appropriateness of the information.

| Source | Measure | M (SD), first level | M (SD), second level | F(1, 50) | p | η²part |
|---|---|---|---|---|---|---|
| One-step or two-step (s) | Intuitiveness | 63.99 (20.16) | 72.40 (18.59) | 29.22 | <.001 | .36 |
| | Usefulness | 55.65 (20.14) | 65.15 (20.82) | 24.87 | <.001 | .33 |
| | Attractiveness | 40.84 (20.34) | 51.23 (20.33) | 21.35 | <.001 | .29 |
| | Information | 43.30 (20.99) | 57.37 (20.31) | 25.80 | <.001 | .34 |
| Text or text and pictogram (p) | Intuitiveness | 66.14 (18.94) | 70.38 (20.47) | 6.38 | .01 | .11 |
| | Usefulness | 59.27 (20.43) | 61.63 (20.57) | 3.04 | .08 | .05 |
| | Attractiveness | 43.50 (20.37) | 48.67 (19.83) | 7.86 | .007 | .13 |
| | Information | 49.18 (20.76) | 51.72 (19.14) | 2.48 | .12 | .04 |
| Tone or speech (t) | Intuitiveness | 64.24 (22.65) | 72.02 (19.96) | 5.14 | .02 | .09 |
| | Usefulness | 56.13 (22.31) | 64.61 (21.41) | 7.04 | .01 | .12 |
| | Attractiveness | 46.05 (20.80) | 46.02 (23.00) | .07 | .79 | .001 |
| | Information | 45.90 (21.63) | 54.67 (21.79) | 5.54 | .02 | .10 |
| p*t | Usefulness | | | 4.62 | .03 | .08 |
| | Information | | | 4.56 | .03 | .08 |
  Note. Non-significant interaction effects are not reported.

4 Results

Before analysing the data of this study, plausibility checks were conducted to ensure that participants read instructions and completed each part of the study in a meaningful way. To do so, minimum dwell times on each page of the questionnaire were inspected.

Subsequently, the analysis strategy comprised three steps. First, participants' general evaluation of all interfaces was subjected to a 2×2×2 (one-step or two-step procedure, text or text and pictogram, and tone or speech) within-subjects MANOVA. η²part is reported as a measure of effect size [7]. Effects of .01 < η²part ≤ .08 are regarded as small, .08 < η²part ≤ .14 as medium, and η²part > .14 as large. Second, the importance of take-over request design features was examined by computing a conjoint analysis, using the individual results of the ranking procedure as input. Third, multiple t-tests were used to examine detailed differences between the worst and the best interfaces using the meCUE data. Cohen's d is reported as a measure of effect size; effects of .20 < d ≤ .50 are regarded as small, .50 < d ≤ .80 as medium, and d > .80 as large [7]. Degrees of freedom may vary due to missing data.
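For readers reconstructing the reported effect sizes, η²part can be recovered from an F statistic and its degrees of freedom. A minimal sketch (function names are ours; the size thresholds are the ones given above):

```python
def partial_eta_squared(f_stat, df_effect, df_error):
    """Recover partial eta squared from an F statistic and its degrees of freedom."""
    return (f_stat * df_effect) / (f_stat * df_effect + df_error)

def classify_eta(eta):
    """Size label using the thresholds stated in the text."""
    if eta > .14:
        return "large"
    if eta > .08:
        return "medium"
    if eta > .01:
        return "small"
    return "negligible"

# Example: the procedure main effect on intuitiveness, F(1, 50) = 29.22 (Table 2),
# yields roughly .37 (the paper reports .36, presumably from unrounded inputs).
eta = partial_eta_squared(29.22, 1, 50)
print(round(eta, 2), classify_eta(eta))
```

The same recovery applied to the other rows of Table 2 reproduces the reported η²part values to within rounding.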

Figure 1

Interaction of tone or speech and text or text and pictogram on (left) participants’ usefulness ratings and (right) participants’ ratings of the appropriateness of the displayed information.

Table 3

Median rank of take-over request interfaces.

| Order | Interface | Median rank (interquartile range*) | Take-over request procedure | Visual take-over request modality | Auditory take-over request modality |
|---|---|---|---|---|---|
| 1 | **No. 7** | 2 (1–2.5) | Two-step | Text | Speech |
| 2 | No. 8 | 3 (1–5) | Two-step | Text + pictogram | Speech |
| 3 | No. 5 | 4 (2–5) | Two-step | Text | Tone |
| 4 | No. 6 | 4 (3–5) | Two-step | Text + pictogram | Tone |
| 5 | No. 3 | 5 (3.5–6) | One-step | Text | Speech |
| 6 | No. 4 | 6 (5–6) | One-step | Text + pictogram | Speech |
| 7 | No. 1 | 6 (4–7) | One-step | Text | Tone |
| 8 | *No. 2* | 7 (6–8) | One-step | Text + pictogram | Tone |
  Note. *The interquartile range denotes the width of the box in a boxplot; it is therefore a measure of spread that captures 50 % of the values around the group's median. The take-over request interface with the lowest median rank (best interface) is printed in bold and the take-over request interface with the highest median rank (worst interface) is printed in italics.

4.1 General Evaluation of Take-Over Request Interfaces

The evaluation of the take-over request interfaces revealed significant main effects for each of the independent variables: (s) take-over request procedure (one-step or two-step procedure), (p) visual take-over request modality (text or text and pictogram), and (t) auditory take-over request modality (tone or speech), as well as an interaction (p*t). Table 2 summarises the effects of the independent variables on the dependent variables intuitiveness, usefulness, attractiveness, and appropriateness of the information. Participants found the two-step procedure more intuitive, useful, and attractive, and judged the amount of information it presented to be more appropriate. They rated text and pictogram more favourably than mere text with respect to the intuitiveness and attractiveness of the take-over request interface, and stated that speech was more intuitive and more useful than an unspecific beep and presented information more appropriately. Finally, one interaction was observed on two outcome variables; it is depicted in Figure 1.

Figure 1 shows that adding a pictogram to text does not increase ratings of usefulness and appropriateness of information when speech is used as auditory output. However, when an unspecific tone is used as auditory output, the ratings of usefulness and appropriateness of information are improved by adding a pictogram to text.

4.2 Take-Over Request Interface Ranking and Conjoint Analysis

Analysing the ranking of the 8 different take-over request interfaces showed that each interface appeared on every possible rank, from 1st to 8th. However, clear preferences became evident when analysing the median rank of each take-over request interface. Friedman's ANOVA for non-parametric data showed a significant difference, indicating that the rank distributions of the interfaces are not similar, p < .001. Table 3 shows each interface, its median rank, and the corresponding interquartile range as a measure of spread.
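Friedman's test operates directly on the per-participant rank rows. A minimal pure-Python sketch of the test statistic, using hypothetical rank data for three interfaces rather than the study's eight:

```python
def friedman_chi_square(ranks):
    """Friedman chi-square statistic.
    ranks[i][j] = rank that participant i assigned to condition j (1..k, no ties)."""
    n, k = len(ranks), len(ranks[0])
    col_sums = [sum(row[j] for row in ranks) for j in range(k)]
    return 12.0 * sum(r * r for r in col_sums) / (n * k * (k + 1)) - 3.0 * n * (k + 1)

# Three hypothetical participants ranking three interfaces in perfect agreement:
rows = [[1, 2, 3], [1, 2, 3], [1, 2, 3]]
print(friedman_chi_square(rows))  # → 6.0, the maximum for n = 3, k = 3
```

The statistic is then referred to a chi-square distribution with k − 1 degrees of freedom to obtain the p-value reported above.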

Table 3 shows that take-over request interface no. 7 received the lowest median rank and was therefore the best interface. It combined text with speech in a two-step procedure. Interface no. 8 showed the second-lowest median rank. The best and second-best interfaces differed only with respect to the pictograms, which were contained in the second-best but not in the best interface. The take-over request interface with the highest median rank (worst interface) presented a tone and the combination of text and pictograms in a single step (interface no. 2).

Table 4

Utility scores and their standard error for each factor and its attributes.

| Factor | Attribute | Utility (SE) |
|---|---|---|
| Constant | | 2.31 (.27) |
| Take-over request procedure | One-step | 2.20 (.12) |
| | Two-step | 4.40 (.25) |
| Visual take-over request modality | Text | −.75 (.12) |
| | Text + pictogram | −1.50 (.25) |
| Auditory take-over request modality | Tone | −.57 (.06) |
| | Speech | .57 (.06) |
  Note. SE denotes the standard error of the utility value.

Table 5

Comparison of the take-over request interfaces with the highest and lowest ranks regarding their ratings on the meCUE.

| meCUE module | Interface no. 7, M (SD) | Interface no. 2, M (SD) | t(14) | p | Cohen's d |
|---|---|---|---|---|---|
| Usefulness | 5.35 (1.04) | 3.24 (1.03) | 9.30 | <.001 | 2.03 |
| Usability | 5.82 (0.80) | 4.13 (1.38) | 7.69 | <.001 | 1.40 |
| Visual aesthetics | 4.31 (1.45) | 2.64 (1.20) | 6.40 | <.001 | 1.24 |
| Positive emotions | 3.03 (1.21) | 2.51 (1.14) | 3.46 | .001 | 0.44 |
| Negative emotions | 3.10 (1.46) | 3.82 (1.28) | −2.73 | .008 | 0.52 |
| Intention to use | 3.15 (1.29) | 2.24 (1.05) | 3.54 | .001 | 0.76 |
| Product loyalty | 3.71 (1.46) | 1.88 (1.20) | 4.01 | <.001 | 1.35 |
| General evaluation | 7.66 (2.74) | 3.66 (2.43) | 6.40 | <.001 | 1.53 |
  Note. Interface no. 7 was ranked the best interface and interface no. 2 the worst interface in the test pool by the participants.

A conjoint analysis was computed using the three design aspects as factors and the participants' ranking results as dependent variable. Nine of the 53 participants (17 %) did not complete the whole ranking procedure and were therefore excluded from the analysis. The conjoint analysis revealed that the factors differed with respect to their relative importance for participants' rankings. The number of steps used to announce a take-over request was the most important factor (relative importance = 38.63 %), followed by the use of text or text and pictograms to visualise the take-over request message (relative importance = 32.18 %). The use of a tone or speech had the lowest relative importance (29.18 %). Table 4 shows the utility scores for each factor and its attributes. A higher utility score indicates a greater preference.

Table 4 shows the part-worth scores for each factor, which can be used to compute the take-over request design with the highest utility. The total utility of a design is the constant plus the part-worth scores of its attributes. Table 4 indicates that a two-step take-over request procedure has by far the highest utility (4.40) and should be combined with speech auditory output (0.57) to design a take-over request with maximum total utility (7.28 = constant + two-step + speech). Visual output always decreases the total utility score.
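The totals for all eight configurations can be reconstructed from Table 4's part-worths. A minimal sketch (attribute names are ours); note that because both visual part-worths are negative, the best full configuration, two-step with text and speech, totals 6.53 rather than the 7.28 obtained when the visual attribute is left out:

```python
from itertools import product

CONSTANT = 2.31
PART_WORTHS = {  # part-worth utilities taken from Table 4
    "procedure": {"one-step": 2.20, "two-step": 4.40},
    "visual": {"text": -0.75, "text+pictogram": -1.50},
    "auditory": {"tone": -0.57, "speech": 0.57},
}

def total_utility(procedure, visual, auditory):
    """Constant plus the part-worth of each chosen attribute."""
    return (CONSTANT
            + PART_WORTHS["procedure"][procedure]
            + PART_WORTHS["visual"][visual]
            + PART_WORTHS["auditory"][auditory])

# Rank all eight possible configurations by total utility, best first.
configs = sorted(
    product(PART_WORTHS["procedure"], PART_WORTHS["visual"], PART_WORTHS["auditory"]),
    key=lambda c: total_utility(*c), reverse=True)
best = configs[0]
print(best, round(total_utility(*best), 2))
# → ('two-step', 'text', 'speech') 6.53
```

The resulting ordering of configurations mirrors the median ranks in Table 3, with interface no. 7 (two-step, text, speech) on top.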

4.3 Detailed User Experience Evaluation of the Best and the Worst Take-Over Request Interfaces

To examine the differences between the best and the worst rated take-over request interfaces, we used the data of those participants who put interface no. 7 on rank 1 and interface no. 2 on rank 8. A subsample of n = 15 of the 53 participants fulfilled both criteria. Dependent-groups t-tests were computed for each of the meCUE dimensions. Table 5 shows the results of the detailed evaluation of the best and the worst interfaces.

Table 5 shows that interfaces no. 7 and no. 2 differed on all user experience dimensions measured with the meCUE questionnaire. Participants rated the winning interface no. 7 higher on all dimensions except negative emotions, which were higher for interface no. 2. The difference between the best and the worst rated interface thus manifested itself not just in some but in all dimensions of the meCUE. It might be noted that both interfaces scored below the midpoint of the 7-point scale (4.00) for positive emotions and intention to use, showing potential for improvement.

5 Discussion

The present study examined subjective preferences regarding the design of take-over requests in highly automated vehicles. Eight take-over request interfaces were designed that differed with respect to the three dimensions: take-over request procedure (one-step or two-step procedure), visual take-over request modality (text or text and pictogram), and auditory take-over request modality (tone or speech). Participants evaluated the take-over request interfaces regarding important aspects of interface design like intuitiveness, usefulness, etc. and they put the interfaces in order. Finally, participants provided a detailed user experience evaluation of the interfaces that were placed on the highest and on the lowest ranks.

5.1 General Evaluation of Take-Over Request Interfaces

The general evaluation of take-over request interfaces revealed that participants perceived clear differences regarding the design aspects manipulated in the study. Participants indicated that human-machine interfaces (HMIs) issuing take-over requests should implement a two-step procedure because it is more intuitive, useful, and attractive. The two-step procedure also presents information more appropriately than the one-step procedure. Moreover, text and pictogram were rated more intuitive and more attractive than mere text. For the items “usefulness” and “appropriateness of information”, the results were less clear: there was no difference between text and text with an additional pictogram when speech was used as auditory output. However, when the auditory output consisted of a mere tone, text with an additional pictogram was rated more favourably on these measures. This finding could be explained by the additional information that speech can convey in comparison to an unspecific tone. Speech output can provide important information about the driving situation and the necessary actions to the driver. Conveying this information via the auditory channel can make additional visual information provided by a pictogram unnecessary. In line with this argumentation, the results on the auditory modality showed that speech was rated more intuitive and more useful than an unspecific beep and provided more information. These findings suggest that designers of take-over request interfaces should consider design elements beyond a simple beep and some text issued in a single step. This take-over request design is associated with problematic driver performance (Naujoks et al. [26]; Brandenburg & Skottke [5]; Gold, Damböck, Lorenz, & Bengler [14]), delayed reallocation of attention resources to the driving task (Merat, Jamson, Lai, Daly, & Carsten [21]; Louw, Kountouriotis, Carsten, & Merat [18]), low time to collision (TTC), strong braking behaviour, and high criticality ratings of drivers (Radlmayr, Gold, Lorenz, Farid, & Bengler [33]).

Designers should rather provide more information in a step-by-step procedure (cf. Brandenburg and Chuang [4]) that can be beneficial in case of engagement in a secondary task. Jamson, Merat, Carsten and Lai [15] showed that high levels of automation lead to drivers’ engagement in highly distracting secondary tasks associated with in-vehicle entertainment (e. g. the use of radio or DVD). Petermann-Stock, Hackenberg, Muhr, and Mergl [28] varied drivers’ tasks that they had to accomplish during a phase of highly automated driving from a conversation (low load) to reading (intermediate load) and to reading and writing (high load). They found that drivers’ reaction time increased with increasing load. Drivers also distributed less attention to the road and more attention to the secondary task with increasing load. The take-over request should therefore reallocate drivers’ attention to the road and to the necessary driver actions (Borojeni, Chuang, Heuten, & Boll [3]). A two-step procedure provides a more gradual take-over process because the driver is disengaged from a secondary task and engaged in the driving task in a more stepwise manner. The second step of a two-step take-over request also provides a fallback level in case drivers missed the first warning due to engagement in a secondary task. However, a two-step procedure might not be possible in critical driving situations where immediate driver reactions are necessary. Future studies should investigate in which driving situations two-step procedures and one-step procedures could be better suited to trigger transitions of control.

In addition, participants' preference ratings indicated a wish for more information than a beep and a short text stating why that beep occurred. This finding could indicate participants' desire to update their situation awareness [9]. According to Radlmayr et al. [33], drivers' situation awareness decreases during phases of highly automated driving. Informative take-over requests have the potential to support drivers' understanding of the driving situation and the preparation of necessary actions. Knowing why actions have to be performed might provide useful information about the future projection of the course of action. It might also deliver information that helps to estimate the urgency of driver reactions (cf. Stahl, Donmez, & Jamieson [34]). This information could be provided by speech output. In line with the findings of Epple et al. [10], our results show that the use of speech output in a two-step procedure could be beneficial for take-over requests in highly automated driving.

5.2 Take-Over Request Interface Ranking and Conjoint Analysis

The ranking of the eight interfaces revealed that participants clearly favoured one of the eight designs. The winning interface combined a textual visualisation with speech output in a two-step procedure. Adding a pictogram to this layout decreased participants' appraisal of the take-over request visualisation. This finding is interesting as it contrasts with the results of the general evaluation of take-over request interfaces. In the general evaluation, an interface containing text and pictogram was rated as more attractive and intuitive than mere text. This discrepancy between general evaluation and ranking of take-over request interfaces may be due to a mismatch between the aspects that we assessed in the general evaluation (intuitiveness, usefulness, attractiveness, and appropriateness of information) and the criteria that participants used to rank the interfaces. Possibly, participants perceived other criteria to be more decisive for their ranking than the ones we assessed. One possible criterion could have been visual clutter. The interface with text and pictogram may contain too much visual information and therefore degrade information processing. Typically, take-over request interfaces reported in the literature use pictograms to visualise the status of the automated system [18], [27] or the drivers' need to resume vehicle control (Albert, Lange, Schmidt, Wimmer, & Bengler [1]; Flemisch et al. [12]; Radlmayr et al. [33]).

Moreover, the ranking of interfaces revealed that speech was preferred over a single tone. Recently, an increasing number of studies have used speech output to issue take-over requests to drivers (e. g., Miller et al. [23]; Politis et al. [31]; Walch et al. [37]). Politis et al. [31] and Epple et al. [10] found that take-over requests using speech were associated with considerably shorter reaction times than written text output. The present study highlights the importance of speech output for drivers' subjective preferences regarding take-over requests. Speech output may be superior to mere visual output due to drivers' tendency to engage in demanding and mostly visual secondary tasks during highly automated driving.

The worst-ranked take-over request interface combined an unspecific tone with a text and pictogram visualisation in a one-step procedure. This is what some car manufacturers and researchers propose as a visualisation for take-over requests [33], [26]. Based on the results of the current study, we recommend refraining from this take-over request design.

The conjoint analysis revealed additional insights into participants' preferences regarding take-over request design aspects. Again, it underlined that participants value a two-step take-over request procedure over a one-step procedure. In fact, take-over request procedure was the most important factor for participants' preference ratings. This finding is in line with previous research by Brandenburg and Chuang (in review) on the design of take-over request procedures, which also found a two-step procedure to be superior to a one-step procedure. However, a two-step procedure will be applicable in regular take-over situations where the vehicle passes the driving task on to the driver with ample time available, for example at the end of a phase of highly automated driving on an autobahn. It might not be applicable in highly dynamic situations where driver reactions have to follow the take-over request promptly. Future studies should examine the applicability of two-step take-over request procedures in varying driving situations. Brandenburg and Chuang (in review) showed that take-over request characteristics affect participants' take-over performance differently depending on the characteristics of the driving course. Additionally, Jamson et al. [15] and Naujoks et al. [26] showed that drivers' post-take-over performance is affected by road characteristics, which further stresses the need for future work on the interaction of take-over request design and road characteristics.

Finally, the conjoint analysis extended the simple analysis of the participants' take-over request interface ranking by showing that both levels of the visual factor (text, and text plus pictogram) were negatively related to the utility participants assigned to take-over request interfaces. In fact, the take-over request interface with the highest subjective utility consists of speech output in a two-step procedure; any type of visualisation would decrease participants' utility ratings. This finding is interesting because it relates to the issue of an increasing number of visual displays in in-car HMI. Implementing more and more assistance systems currently goes along with adding more visual elements to the in-car HMI as well. Drivers might feel overwhelmed by the high number of icons and pictograms, which might impose feelings of stress and cognitive strain on them. However, it should be emphasised that this is a study on drivers' preferences without a driving task. Participants may rate the interfaces differently in a real driving scenario. Moreover, driver preference does not necessarily accord with safety. Visual output could provide a backup when drivers miss warnings in other modalities due to inattentiveness or auditory masking. Therefore, visual output may still contribute to a safer transition of control.
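To make the logic of the conjoint analysis concrete, the following sketch estimates part-worth utilities and relative attribute importance for the 2 × 2 × 2 full-factorial design of take-over request interfaces. The preference scores are invented toy data for illustration only, not the study's results; the method (effects-coded ordinary least squares, with attribute importance derived from utility ranges) is one standard way to run a metric conjoint analysis, not necessarily the exact procedure the authors used.

```python
import numpy as np

# Effects coding (+1/-1) for the three binary attributes:
# procedure (two-step = +1, one-step = -1),
# visual (text = +1, text + pictogram = -1),
# auditory (speech = +1, tone = -1).
levels = np.array([[p, v, a]
                   for p in (1, -1) for v in (1, -1) for a in (1, -1)])

# Hypothetical mean preference scores for the 8 profiles (higher = preferred).
scores = np.array([9.0, 6.5, 8.0, 5.5, 5.0, 3.0, 4.5, 2.5])

# Add an intercept column and solve by ordinary least squares.
X = np.column_stack([np.ones(len(scores)), levels])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
intercept, part_worths = coef[0], coef[1:]

# Relative attribute importance: each attribute's utility range divided by
# the sum of all ranges (range = 2 * |part-worth| under effects coding).
ranges = 2 * np.abs(part_worths)
importance = ranges / ranges.sum()

print("part-worths (procedure, visual, auditory):", part_worths.round(3))
print("relative importance:", importance.round(3))
```

With the toy scores above, the procedure attribute carries the largest utility range and thus the highest relative importance, mirroring the paper's finding that the take-over request procedure dominated participants' preference ratings.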

Based on the current work, we conclude that our findings support the industry's current focus on additional non-visual driver-to-car and car-to-driver interaction (Gellatly, Hansen, Highstrom, & Weiss [13]). Pictograms and beeps might no longer be the primary way of communicating information from the car to the driver. Instead, the interaction of drivers and vehicles could be based on more meaningful auditory information to reach a high level of immersion in a driver-vehicle system [2].

5.3 User Experience Evaluation of the Best and the Worst Interfaces

The final evaluation of the best and the worst interfaces revealed that both differed regarding all aspects of user experience assessed in this study. This result sheds some light on possible reasons for the ranking of the best and the worst interface. Interface no. 7 was not just rated more favourably than interface no. 2 on product characteristics like usefulness and usability. It also received better ratings for users' emotions (both positive and negative emotions) and consequences of product use (product loyalty and intention to use). While this finding might not be very surprising, the examination of group means revealed that the best and the worst interfaces differed at a low level of absolute values for some important UX dimensions. Non-instrumental aspects like positive emotions, and consequences of user experience like intention to use the product, were at a low level, indicating potential for improvement. In addition, these effects were only of medium size, indicating that study participants did not perceive large differences in user experience between the interfaces. These results might partially be due to the study setting. Participants were asked to imagine that they were sitting in a highly automated vehicle. They watched short film clips instead of experiencing a take-over situation in a driving simulator or a real car. Their ratings might therefore change when experiencing a take-over request instead of seeing it on a computer screen.

6 Limitations and Future Work

The current study examined subjective preferences for take-over request interfaces. It found that a two-step auditory display including speech output received the highest utility ratings. The study therefore indicates that HMI research should further investigate the potential of non-visual displays in the automotive sector. However, it also had shortcomings that should be noted here. Firstly, we only assessed subjective preferences in an online study. Participants were asked to imagine a situation in which they sit in a highly automated vehicle interacting with their smartphone. They then played a video animating the take-over request. This situation is highly artificial because it does not confront participants with a driving task, or with other disturbing factors such as environmental noise or fatigue, which have the potential to alter their preferences for take-over request design aspects. Moreover, participants did not experience the take-over requests and their timing in the context of a real driving situation with, for example, an approaching construction site. Future studies should apply the take-over request interfaces in real driving settings to see whether participants also prefer a non-visual auditory display. These studies should also ensure that take-over procedure is not confounded with forewarning time. Secondly, take-over request design has safety-critical impacts on drivers' take-over performance [10]. The present online experiment did not relate the take-over request interfaces under investigation to participants' driving behaviour. Obviously, this should be the aim of upcoming studies. Thirdly, we only tested combinations of three take-over request design aspects. Other aspects such as loudness or colour scheme might change the results again. Lastly, the sample of 53 participants may not be representative of the population of drivers; a large proportion of participants were students. Therefore, the utility of this study for predicting buyer preferences might be reduced.

Future studies could further investigate two-step take-over requests and speech output in highly automated driving. The findings of this study suggest that two-step take-over requests using speech output are preferred by drivers. However, more research could focus on the combination of these interface design aspects in real driving situations and their impact on driving measures and safety. Moreover, alternative streams of research have focused on increasing drivers' situation awareness by providing them with more details about the take-over situation. Koo et al. [16] provided drivers with additional information on how and why the automated driving system was reacting to a system limit. Politis et al. [32] used a dialogue-based interaction design for the communication between the driver and his or her automated vehicle. Future research could include these interaction types in two-step take-over procedures to further increase awareness of the driving situation.

Despite its shortcomings, the current investigation addresses an important aspect of the development of highly automated vehicles. The success of automated vehicles partially depends on their human-machine interfaces [17]. The open question of how to design HMI for take-over requests in highly automated driving is therefore one of the most important research questions in this developing field [21]. The present study added some information on the design of human-machine interfaces issuing take-over requests. It showed that participants' preferences regarding the design aspects of take-over requests can lead to unexpected design decisions that should be subject to further studies.

About the authors

Stefan Brandenburg

Stefan Brandenburg (PhD) studied psychology at the Technische Universität Chemnitz and the University of Oklahoma, USA. Since 2008, he has been a research associate at the chair of Cognitive Psychology and Cognitive Ergonomics at the Technische Universität Berlin. He is a co-founder and chair of the IPA ethics commission. His research interests include the integration of ethical aspects in human factors research, temporal changes of affect and emotion, and the design of highly automated driving systems.

Sandra Epple

Sandra Epple (M.Sc.) studied psychology at Humboldt-Universität zu Berlin, and Computational Engineering Science at Technische Universität Berlin, Germany. Since January 2018, she has been a research associate at the chair of Cognitive Psychology and Cognitive Ergonomics at the Technische Universität Berlin. Her research topic is driver-vehicle interaction in the field of highly automated driving.

References

[1] Albert, M., Lange, A., Schmidt, A., Wimmer, M., & Bengler, K. (2015). Automated driving – assessment of interaction concepts under real driving conditions. Procedia Manufacturing 3, 2832–2839. DOI: 10.1016/j.promfg.2015.07.767.

[2] Bengler, K., Dietmayer, K., Färber, B., Maurer, M., Stiller, C., & Winner, H. (2014). Three decades of driver assistance systems – review and future perspectives. IEEE Intelligent Transportation Magazine 6(4), 6–22. DOI: 10.1109/MITS.2014.2336271.

[3] Borojeni, S.S., Chuang, L., Heuten, W., & Boll, S. (2016). Assisting drivers with ambient take-over requests in highly automated driving. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 237–244). DOI: 10.1145/3003715.3005409.

[4] Brandenburg, S. & Chuang, L. (in review). Take-over requests during highly automated driving: How should they be presented and under which conditions? Transportation Research Part F.

[5] Brandenburg, S. & Skottke, E.M. (2014). Switching from manual to automated driving and reverse: Are drivers behaving more risky after highly automated driving? In IEEE Proceedings of the 17th International Conference on Intelligent Transportation Systems (ITSC), IEEE (pp. 2978–2983). DOI: 10.1109/ITSC.2014.6958168.

[6] Casner, S.M., Hutchins, E.L., & Norman, D. (2016). The challenges of partially automated driving. Communications of the ACM 59(5), 70–77. DOI: 10.1145/2830565.

[7] Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

[8] ECE/TRANS/WP.1/145 (2014). Report of the sixty-eight session of the Working Party on Road Traffic Safety, United Nations Economic and Social Council, 1–11.

[9] Endsley, M.R. (1995). Towards a theory of situation awareness in dynamic systems. Human Factors 37(1), 32–64. DOI: 10.1518/001872095779049543.

[10] Epple, S., Roche, F., & Brandenburg, S. (2018). The Sooner the Better: Drivers’ Reactions to Two-Step Take-Over Requests in Highly Automated Driving. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 62(1), 1883–1887. Los Angeles, CA: SAGE Publications. DOI: 10.1177/1541931218621428.

[11] ERTRAC (2015). Automated Driving Roadmap. Technical report of the European Road Transportation Research Advisory Council. Brussels, Belgium.

[12] Flemisch, F., Schieben, A., Schoeming, N., Strauss, M., Luecke, S., & Heyden, A. (2011). Universal Access in Human-Computer Interaction. In Context Diversity, 6767 LNCS (pp. 270–279). DOI: 10.1007/978-3-642-21666-4_30.

[13] Gellatly, A.W., Hansen, C., Highstrom, M., & Weiss, J.P. (2010). Journey: General Motors’ move to incorporate contextual design into its next generation of automotive HMI designs. In Proceedings of the 2nd International Conference on Automotive User Interfaces and Vehicular Applications (pp. 156–161). DOI: 10.1145/1969773.1969802.

[14] Gold, C., Damböck, D., Lorenz, L., & Bengler, K. (2013). “Take over!” How long does it take to get the driver back into the loop? Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 57(1), 1938–1942. DOI: 10.1177/1541931213571433.

[15] Jamson, A.H., Merat, N., Carsten, O.M.J., & Lai, F.C.H. (2013). Behavioural changes in drivers experiencing highly-automated vehicle control in varying traffic conditions. Transportation Research Part C: Emerging Technologies 30, 116–125. DOI: 10.1016/j.trc.2013.02.008.

[16] Koo, J., Kwac, J., Ju, W., Steinert, M., Leifer, L., & Nass, C. (2015). Why did my car just do that? Explaining semi-autonomous driving actions to improve driver understanding, trust, and performance. International Journal on Interactive Design and Manufacturing (IJIDeM) 9(4), 269–275. DOI: 10.1007/s12008-014-0227-2.

[17] Kröger, F. (2015). Automated driving in its social, historical, and cultural contexts. In M. Maurer, J.C. Gerdes, B. Lenz, & H. Winner, Autonomous driving: Technical, legal and social aspects (pp. 41–68). Berlin, Heidelberg: SpringerOpen. DOI: 10.1007/978-3-662-48847-8_3.

[18] Louw, T.L., Kountouriotis, G., Carsten, O., & Merat, N. (2015). Driver Inattention During Vehicle Automation: How Does Driver Engagement Affect Resumption Of Control? In 4th International Driver Distraction and Inattention Conference (pp. 9–11).

[19] Macaranas, A., Antle, A.N., & Riecke, B.E. (2015). What is intuitive Interaction? Balancing users’ performance and satisfaction with natural user interfaces. Interacting with Computers. DOI: 10.1093/iwc/iwv003.

[20] McMillin, B. & Sanford, K.L. (1998). Automated highway systems. IEEE Potentials 17(4), 7–11. DOI: 10.1109/45.721725.

[21] Merat, N., Jamson, A.H., Lai, F.C.H., Daly, M., & Carsten, O.M.J. (2014). Transition to manual: Driver behavior when resuming control from a highly automated vehicle. Transportation Research Part F 27, 274–282. DOI: 10.1016/j.trf.2014.09.005.

[22] Merat, N. & Lee, J.D. (2012). Preface to the Special Section on Human Factors and Automation in Vehicles: Designing Highly Automated Vehicles With the Driver in Mind. Human Factors: The Journal of the Human Factors and Ergonomics Society 54(5), 681–686. DOI: 10.1177/0018720812461374.

[23] Miller, D., Sun, A., Johns, M., Ive, H., Sirkin, D., Aich, S., & Ju, W. (2015). Distraction becomes engagement in automated driving. In Proceedings of the Human Factors and Ergonomics Society 59th Annual Meeting (pp. 1676–1680). DOI: 10.1177/1541931215591362.

[24] Minge, M., Thüring, M., Wagner, I., & Kuhr, C.V. (2016). The meCUE Questionnaire. A Modular Evaluation Tool for Measuring User Experience. In M. Soares, C. Falcão & T.Z. Ahram (Eds.): Advances in Ergonomics Modeling, Usability & Special Populations. Proceedings of the 7th Applied Human Factors and Ergonomics Society Conference 2016 (pp. 115–128). Switzerland: Springer International Press. DOI: 10.1007/978-3-319-41685-4_11.

[25] Minge, M., Thüring, M., Wagner, I. (2016). Developing and Validating an English Version of the meCUE Questionnaire for Measuring User Experience. Proceedings of the Human Factors and Ergonomics Society Annual Meeting. DOI: 10.1177/1541931213601468.

[26] Naujoks, F., Mai, C., & Neukum, A. (2014). The effect of urgency of take-over requests during highly automated driving under distraction conditions. In T. Ahram, W. Karowowski & T. Marek, Proc. of the 5th International Conference on Applied Human Factors and Ergonomics AHFE, Krakow, Poland.

[27] Naujoks, F., Purucker, C., & Neukum, A. (2016). Secondary task engagement and vehicle automation – Comparing the effects of different automation levels in an on-road experiment. Transportation Research Part F, 38, 67–82. DOI: 10.1016/j.trf.2016.01.011.

[28] Petermann-Stock, I., Hackenberg, L., Muhr, T., & Mergl, C. (2013). Wie lange braucht der Fahrer? Eine Analyse zu Übernahmezeiten aus verschiedenen Nebentätigkeiten während einer hochautomatisierten Staufahrt. 6. Tagung Fahrerassistenzsysteme. Der Weg zum automatischen Fahren, 1–26.

[29] Petermeijer, S., Bazilinskyy, P., Bengler, K., & de Winter, J. (2017). Takeover again: Investigating multimodal and directional TORs to get the driver back into the loop. Applied Ergonomics, 62, 204–215. DOI: 10.1016/j.apergo.2017.02.023.

[30] Politis, I., Brewster, S., & Pollick, F. (2014). Evaluating multimodal driver displays under varying situational urgency. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 4067–4076). Toronto, Ontario, Canada: ACM Press. DOI: 10.1145/2556288.2556988.

[31] Politis, I., Brewster, S., & Pollick, F. (2015). Language-based multimodal displays for the handover of control in autonomous cars. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 3–10). Nottingham: ACM Press. DOI: 10.1145/2799250.2799262.

[32] Politis, I., Langdon, P., Adebayo, D., Bradley, M., Clarkson, P.J., Skrypchuk, L., Mouzakitis, A., Eriksson, A., Brown, J., Revell, K., & Stanton, N. (2018, March). An evaluation of inclusive dialogue-based interfaces for the takeover of control in autonomous cars. In 23rd International Conference on Intelligent User Interfaces (pp. 601–606). ACM. DOI: 10.1145/3172944.3172990.

[33] Radlmayr, J., Gold, C., Lorenz, L., Farid, M., & Bengler, K. (2014). How traffic situations and non-driving related tasks affect the take-over quality in highly automated driving. In Proceedings of the Human Factors and Ergonomics Society 58th Annual Meeting (pp. 2063–2067). DOI: 10.1177/1541931214581434.

[34] Stahl, P., Donmez, B., & Jamieson, G.A. (2016). Supporting anticipation in driving through attentional and interpretational in-vehicle displays. Accident Analysis and Prevention 91, 103–113. DOI: 10.1016/j.aap.2016.02.030.

[35] Thüring, M. & Mahlke, S. (2007). Usability, aesthetics and emotions in human–technology interaction. International Journal of Psychology, 42(4), 253–264. DOI: 10.1080/00207590701396674.

[36] UK Department of Transportation (2015). The Pathway to Driverless Cars – Summary report and action plan. DfT Publications: London, UK.

[37] Walch, M., Lange, K., Baumann, M., & Weber, M. (2015). Autonomous driving: investigating the feasibility of car-driver handover assistance. In Proceedings of the 7th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (pp. 11–18). ACM. DOI: 10.1145/2799250.2799268.

[38] Walker, G.H., Stanton, N.A., & Young, M.S. (2001). Where is computing driving cars? International Journal of Human Computer Interaction, 13(2), 203–229. DOI: 10.1207/S15327590IJHC1302_7.

[39] Zeeb, K., Buchner, A., & Schrauf, M. (2015). What determines the take-over time? An integrated model approach of driver take-over after automated driving. Accident Analysis and Prevention 78, 212–221. DOI: 10.1016/j.aap.2015.02.023.

Published Online: 2019-08-03
Published in Print: 2019-08-27

© 2019 Walter de Gruyter GmbH, Berlin/Boston
