Augmented Reality for Future Mobility: Insights from a Literature Review and HCI Workshop

Abstract: There is a growing body of research on the interaction between drivers/passengers and automated vehicles using augmented reality (AR) technology. With the advancement and growing availability of AR, the number of use cases in and around vehicles rises. Our literature review reveals that past AR research focused on increasing road safety and displaying navigational aids; more recent research, however, explores the support of immersive (non-)driving related activities, the enhancement of driving and passenger experiences, and the assistance of other road users through external human-machine interfaces (HMIs). AR may also be the enabling technology to increase trust in and acceptance of automated vehicles through explainable artificial intelligence (AI), and thereby ease the shift from manual to automated driving. We organized a workshop addressing AR in automotive human-computer interaction (HCI) design, and identified a number of challenges, including human factors issues that need to be tackled, as well as opportunities and practical usages of AR in future mobility. We believe that our status-quo literature analysis and future-oriented workshop results can serve as a research agenda for user interface designers and researchers when developing automotive AR interfaces.


Introduction
As automated driving (AD) systems and corresponding human-machine interfaces (HMIs) advance, the usage of augmented reality (AR) technology for both in-vehicle experiences and other road users is being explored. In 2021, the Society of Automotive Engineers [35] released the latest version of their definition of vehicle automation, thereby providing the basis for future standardization and introducing a common language. While SAE Level 2 (L2) vehicles are already permitted to drive on the road, more advanced mobility solutions, such as SAE Level 3 (L3) automated vehicles (AVs), are expected to become commercially available in the near future. The intersection of automated driving and augmented reality enables usability and user experience (UX) researchers and practitioners to explore and evaluate novel automotive user interfaces. Recent literature reviews (e.g., [39,175]) on HMI concepts covered information presentation along the reality-virtuality (RV) continuum [136], from traditional 2D screens to augmented reality and virtual reality (VR) visualizations. Application areas include automotive, aeronautics, logistics, production, education, cultural heritage, and others [39]. In AR, virtual images are superimposed on the real world such that the digital content is aligned with real-world entities [9]. In VR, by contrast, environments are made up entirely of 3D digital objects [206]. Mixed reality (MR) includes all positions on the RV continuum, i.e., between the real world and a fully virtual environment. In the automotive domain specifically, AR technology has been adopted for a variety of use cases. For example, head-up displays (HUDs) and larger windshield displays (WSDs) utilize AR technology to increase drivers' situational awareness by directing their gaze towards the road (e.g., [193]), and go even further by displaying work- or entertainment-related content for performing non-driving related tasks (NDRTs) in more highly automated vehicles in prototype studies (e.g., [169,180,189]).
The arrival of AR glasses such as the Microsoft HoloLens and Magic Leap One, among others, enables research on in-vehicle use of AR in a more immersive way [100], thereby allowing researchers to prototype human-machine interaction concepts more realistically with these AR devices in both lab environments and field studies. Furthermore, VR head-mounted displays (HMDs) such as the HTC Vive and Oculus Quest, combined with interaction tracking technologies like Leap Motion for hand tracking and Ultraleap Stratos for haptics, have the potential to create immersive experiences in fully simulated digital environments. These VR setups can be utilized to simulate AR content [57,172].
In this paper, we present the results of a workshop [176] on intelligent vehicles, with a focus on the practical usage of AR as well as future developments of this technology in and around intelligent vehicles, categorized into opportunities, challenges, and practical use cases.

Literature Review
We reviewed scientific publications since 2009, and our analysis revealed that an increasing number of them use augmented reality technology in their user studies, as previous research has shown that AR has the potential to foster trust and safety in automated vehicles, improve the user experience of driving, and provide drivers/passengers the opportunity to perform non-driving related tasks (e.g., [12,18,62,98,99,197,237]).
Since 2009, the number of scientific publications on augmented reality in driving research has increased strongly (see Figure 1). We searched for augmented reality applications used in automated vehicles on Google Scholar, the ACM Digital Library, and IEEE Xplore, using the search terms "augmented reality", "driving", "automated vehicle", and "vehicle automation", and found more than 150 papers for further analysis. Figure 2 gives an overview of selected augmented reality (AR) applications. AR is researched for infotainment and gamification concepts in automated driving, as well as for displaying navigation hints and hazard indications. More recently, mobile work and well-being have become researched topics in the AR domain.
In the following, we present each application area in more detail. Section 2.1.1 covers safety aspects, Section 2.1.2 navigational aspects, Section 2.1.3 user interface (UI) design with focus on head-up and windshield displays, Section 2.1.4 interaction modalities in combination with AR interfaces, Section 2.1.5 covers passenger experiences, and Section 2.1.6 external HMIs as well as vulnerable road users (VRUs). Furthermore, we give an overview of trust and motion/simulator sickness in Sections 2.1.7 and 2.1.8, respectively. Finally, in Section 2.2, we discuss the findings of our literature review.

Safety & Driver Assistance
The Safety category mostly covers research on manual driving (e.g., [77,86,90]). However, with mixed traffic situations, in which manual and automated vehicles share the road, soon becoming reality, research efforts in this application area will have to increase for highly automated vehicles [107]. This includes research on the teleoperation of AVs [19]. Virtual reality is currently used in safety research, as AR awareness/attention cues can be simulated [177]. Therefore, new opportunities will open up for AR in automotive safety research, including gamification concepts [193]. Utsumi et al. [215] investigate visual information for emergency behaviors, and different weather conditions are also being explored (e.g., [239]). Additionally, vulnerable road users are included in AR safety research, for example through warnings for pedestrians [88,89] and cyclists [161]. Furthermore, AR automotive research encompasses takeover performance and other safety aspects [137]. In particular, the transition from non-driving related content visualized on AR displays back to manual driving should be explored further. To this end, keeping the driver in the loop must be a precondition for AR interfaces in conditionally automated driving [115]. Additional research is conducted on world-fixed AR using head-up and windshield displays in VR simulations. In this case, AR displays are used to support and increase situation awareness without straining the driver's cognitive load [105,240].

Navigation & Routing
The research reviewed in this application area mostly pertains to manual driving and up to level 2 of vehicle automation [129,140]. With the progress of HMD and CAVE technology, navigation and routing functions can be explored in a more immersive environment than with monitors. Higher levels of vehicle automation are also investigated [102,195]. Navigational aids in combination with AR HUDs or WSDs could be beneficial for reducing distractions [209,214] and mitigating motion sickness (covered in Section 2.1.8). Further, the advancement of AR technology can help transition traditional, two-dimensional direction instructions towards world-fixed, three-dimensional instructions that blend into the outside environment [45,191,213]. A future research direction for AVs would be to combine navigation tasks with passenger experiences and gamification (see Section 2.1.5). For example, Schroeter et al. [193] use AR to playfully make the driver aware of situational warnings, a concept which could be translated to navigation and routing.

User Interface Design: AR Head-up and Windshield Displays
Another application area of automotive AR is the rapid UI design and prototyping process that can be realized in immersive and cost-effective environments, as Goedicke et al. [58] and Riegler et al. [174] show. For example, Morra et al. [139], Riegler et al. [177], Charissis et al. [30], and Gerber et al. [56] use VR simulation to iteratively design AR HUDs or WSDs. Merenda et al. [134,135] focus on the granularity and visual fidelity the virtual environment must have, as driving performance and gaze behavior are impacted by these parameters, among others. Similarly, Paredes et al. [152] propose that virtual experiences should be sophisticated even early in the prototyping phase, making use of high-fidelity content. While previous research efforts on automotive UI/HMI design emphasized the visualization of AR warnings to the driver, more recent research reacts to higher levels of vehicle automation, and their prospects as well as challenges. For example, Riegler et al. [181] explored which content types are suitable for potential drivers of conditionally/fully automated vehicles, where such content should be positioned on a simulated windshield display, and which transparency levels are appropriate. Moreover, screen-fixed and world-fixed content presentations are being explored [173]. Especially for SAE L3 and L4 vehicles, take-over requests must be visualized properly [113]. Personalization and customization should be investigated further, as the proliferation of digital in-vehicle interfaces and the concurrent reduction of hardware-intensive UIs (e.g., buttons, sliders) will enable novel UIs tailored to each user. The transfer from VR driving simulator studies to real-world settings should also be researched in more detail. Pettersson et al. [157] compared a VR driving simulator setup to a field study, and their findings show that while VR studies do not fully replace field studies, they can provide added value to driving simulators and desktop testing.
In particular, VR can be used to convey the overall principles behind a user study. Further research topics in this application area include the design of NDRT interfaces (e.g., virtual UI element sizes and appearances, feedback design, etc.), AR content/app switching (i.e., switching from one WSD application to another), and handover methods, such as transitioning content from a mobile phone to the windshield display, and vice versa.

Interaction Modalities
Future AR-enabled vehicles will need an array of modalities for interacting with their UIs. Extending more traditional buttons, knobs, sliders, and touch screens, novel user interfaces (e.g., touch, speech, gestures, handwriting, and vision) [146] are researched in the automotive context. Virtual reality (VR) driving simulation is often used, as it fosters rapid prototyping and iterating through interaction designs [174,192]. Increasingly, due to advancements in automated driving, research on driver-vehicle interaction employing AR interfaces focuses on NDRTs instead of the primary driving task. For example, Riegler et al. [170] explore gaze-based interactions with WSDs to control an AR infotainment system. Eye-tracking within VR simulations has also been utilized [124]. Riegler et al. [177] apply speech-based interactions to perform an NDRT in semi-automated vehicles. The literature review shows that many interaction modalities are being researched separately; we therefore encourage the employment of sensor and interaction fusion, i.e., combining multiple interaction modalities depending on the current situational context (e.g., time of day, work/leisure trip, entertainment/work activities, private/public transport). AR interfaces may also help provide feedback on the success or failure of given interactions, as is already done in smart home appliances using conversational artificial intelligence [121]. In the areas of automotive context awareness and customization employing AR technology, we believe that research is still in its early stages, and research efforts should be increased. In particular, the types of interaction suitable for the context-specific situation should be evaluated. For example, in shared public transport, large AR interface displays in combination with speech interaction might not be usable from a proxemics and privacy point of view; for private vehicles, however, this might not be an issue.
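To make the interaction-fusion idea above concrete, the following minimal Python sketch selects permissible input modalities from a simple situational context. All context keys, rules, and thresholds are our own illustrative assumptions, not taken from any cited system:

```python
# Hypothetical sketch of context-dependent modality selection.
# Contexts and rules are illustrative assumptions only.

def select_modalities(context):
    """Return the set of input modalities suitable for the given context."""
    modalities = {"touch", "speech", "gesture", "gaze"}
    if context.get("transport") == "public":
        # Speech raises proxemics/privacy concerns in shared vehicles.
        modalities.discard("speech")
    if context.get("activity") == "driving":
        # During manual driving, avoid mid-air gestures (hands on the wheel).
        modalities.discard("gesture")
    if context.get("time_of_day") == "night":
        # Assumed rule: gaze tracking may degrade in low-light conditions.
        modalities.discard("gaze")
    return modalities

print(select_modalities({"transport": "public", "activity": "work"}))
```

A real system would, of course, fuse continuous sensor signals rather than discrete flags; the sketch only illustrates the decision structure.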

Passenger Experiences
AR technology can further be applied to explore in-vehicle passenger activities [186,223]. Analogous to windshield displays for drivers, side windows can serve as displays for visualizing content for side and rear-seat passengers. In particular, [222] used side windows for displaying entertainment apps and games intended to occupy younger passengers on longer journeys. Wang et al. [223] explored rear-seat passengers' and commuters' preferences for activities in AR, and found that AR HUDs for back-seat users can increase road safety by decreasing distractions [223]. Moreover, AR can guide passengers inside autonomous vehicles [152]. As human operators/drivers may not be required in future automated driving scenarios, understandable guidance of passengers, especially in public transport, needs to be investigated. Cross-vehicle entertainment might also be a viable use case for AR experiences, as Arnedo et al. [6] found. In this context, gamification approaches that combine the real outside environment with digital AR content in vehicular settings might be a novel concept for future social gaming. For highly automated driving, when drivers become passengers, this research area will be especially relevant, as vehicles will be used as mobile entertainment or office platforms [190]. We therefore encourage the automotive research community to investigate comfort, anxiety, entertainment, and work-related functionalities using AR interfaces for AVs.

External HMIs & Vulnerable Road Users
Several studies in automotive research have focused on external vehicle bodies as a design space [28,33,145]. The majority of this research utilizes AR to explore external HMI (eHMI) concepts for displaying information to other road users, such as pedestrians, passengers, and drivers of other vehicles. In mixed traffic situations, eHMIs can convey additional information, such as driving state and intent, to other road users. In a simulator study, Tong et al. [211] investigated a communication interface between an AV and pedestrians by displaying warnings. Similarly, Chang et al. [28] and Nguyen et al. [145] explored different eHMI concepts, such as public displays, warning systems, and alerts in case of vehicle malfunctions. However, such eHMIs are not limited to read-only displays; they may be utilized for interaction purposes as well, including route navigation for pedestrians using gestures [7,33]. Furthermore, external HMIs should be explored not only for pedestrians, but also for other road users, such as cyclists. The overarching goal is to foster safe interactions and establish connections between automated vehicles and vulnerable road users by creating a common language between them. Moreover, more research effort should go into the potential of VRUs wearing AR HMDs/glasses in future scenarios. This includes presenting visual cues on pedestrians' AR glasses to communicate intent between the various road users.

Trust & Acceptance
More recently, an increasing number of research studies has been carried out in mixed reality environments to explore trust in automated driving systems. Ha et al. [59] examine the effect of explanation types and perceived risk on trust in AVs. They found that simple feedback (e.g., descriptions of the vehicle's tasks) resulted in more trust in AVs, while too much feedback caused cognitive overload and did not increase trust [59]. Djavadian et al. [41] investigate the acceptance of automated vehicles, and found that particularly heavy traffic situations have the potential to increase drivers' acceptance, as the vehicle would automatically select faster and less congested routes. Additionally, trust and acceptance should not only be determined for drivers and passengers, but also from an outsider's standpoint. For example, Hollaender et al. [67] look into the information exchange between AVs and pedestrians and its effect on trust. Primarily, crossing scenarios, and AVs conveying safe-crossing information to pedestrians, are explored. Additionally, Colley et al. [34] investigate trust in AVs by evaluating pedestrians' intent in order to calibrate drivers' trust. Challenges in this application area include the need to measure trust over longer periods of time using long-term studies [94], and the possibility of decreasing and subsequently re-building trust in automated vehicles. Consequently, overtrust in AVs must be researched as well, such that drivers of conditionally automated vehicles stay situationally aware; AR interfaces could help establish alerting systems. In particular, the level of system feedback needs to be calibrated [48].

Simulator & Motion Sickness
When carrying out user studies on automated driving in virtual and augmented reality environments, in which active drivers transition to passive passengers, simulator and motion sickness should be evaluated [84]. Simulator sickness is a form of motion sickness caused by visual discrepancies with the real world [167]. Motion sickness can arise when stationary users perceive motion, or when there are lags between head movements and the visual display presentation [64]. Symptoms of motion sickness include nausea, disorientation, headache, and general discomfort, among others [87]. McGill et al. [126,127] explore the use of HMDs worn by passengers in vehicles. Further, McGill et al. [128] suggest that postural movements should be limited in order to reduce simulator sickness in case no positional tracking can be performed by the simulator. Sawabe et al. [187] use a vection illusion to induce pseudo-acceleration, which reduces the acceleration stimulus and helps passengers counteract simulator and motion sickness. A visual solution is proposed by Hanau et al. [61], who show acceleration cues to passengers, thus diminishing the sensory conflict between the perceived acceleration and the lack of corresponding visuals.

Discussion of Findings
As Table 1 shows, the vast majority of research papers from 2009 to 2020 covered safety-relevant aspects using AR interfaces. More recently, safety aspects have been combined with non-driving related tasks, in the form of keeping the driver in the loop and situationally aware. Safety aspects are therefore researched further, particularly in more highly automated driving scenarios, by directing the driver's attention towards the outside environment using AR interfaces. AR navigation interfaces are also an established research field that continues to be explored with new concepts for world-fixed AR guidance. AR HUDs and WSDs are the most commonly researched AR interfaces in vehicles; however, as AR HMDs/glasses become lighter and their field of view increases, automotive HCI researchers should consider this medium as well. Many open questions about in-vehicle AR interfaces pertain to highlighting hazards, conveying uncertainty/feedback, and trust and acceptance. Additionally, interfaces for NDRTs are a focus of HCI research, with multiple modalities, including gestures, gaze, speech, and olfactory interfaces, being explored prototypically. We therefore encourage researchers to further investigate world-fixed AR for information presentation and feedback design, not necessarily for warning cues in manual driving (as shown in Section 2.1.1), but for non-driving related tasks in (conditionally) automated driving, and for keeping the driver in the loop to take over manual control of the vehicle. This includes visual transitions and multimodal displays (e.g., auditory, olfactory, among others) in order to increase take-over performance. Passengers have often been neglected in automotive HCI research; with AR interfaces, however, this might change, as side windows could turn into infotainment displays, and the windshield might become a tool for collaborative work and entertainment in automated vehicles.
Therefore, novel passenger experiences with AR interfaces, not just for "fun activities" but also for mitigating motion sickness, will require more research effort.
Outside the vehicle, eHMIs are of growing interest to researchers, as one of the factors contributing to the advancement of automated vehicles will be acceptance by pedestrians and other road users. Standards or guidelines for such external interfaces need to be developed in order to avoid confusing future users (e.g., many different ways of signaling safe passage might be confusing for pedestrians). Finally, calibrating trust, and dealing with distrust but also overtrust in automated vehicles, has also been explored in conjunction with AR interfaces and conveying explainable AI. Context awareness informed by the driver's physiological metrics (e.g., stress) could be used to adapt (i.e., increase or decrease) system feedback and information display, and might lead to increasing levels of trust. However, longitudinal studies need to be carried out to better analyze trust over time, and how it can be calibrated to achieve broad acceptance among drivers/passengers. The literature review served as the basis for the HCI workshop, as it gave us a wide range of AR application areas for mobility concepts and subsequently helped identify research gaps. In the following Section 3, we therefore give an overview of the conducted workshop, and categorize the brainstorming results into action points.

HCI Workshop Results
The 2021 Mensch und Computer conference [1] was accompanied by a workshop [details removed for blind review] with the purpose of developing a holistic approach to augmented reality technology for vehicle drivers/passengers and other road users. Our workshop participants were, at the time of the workshop, active in academia and industry (i.e., original equipment manufacturers or automotive suppliers). Figure 3 shows our brainstorming and discussion results. The main focus of the workshop was to identify challenges, opportunities, and practical use cases for automotive AR technology. The workshop findings are presented in detail in the following: Section 3.1 covers AR applications for in-vehicle experiences, and Section 3.2 summarizes the workshop findings on AR usage for pedestrians, other road users, and remote operators.

Augmented Reality Applications for In-Vehicle Experiences
As our literature review shows, utilizing augmented reality technology has proven beneficial for in-vehicle users such as drivers and passengers in prototype studies. In this section, we describe and discuss our workshop findings for in-vehicle AR usage, both stationary (e.g., using head-up and windshield displays) and mobile (e.g., using head-mounted displays or AR glasses).

Safety & Driver Assistance
The role of safety in automotive AR research was thoroughly discussed in our workshop. We found that the overarching goal of AR safety applications should be the support of human information processing. To that end, AR has the potential to minimize gaze diversion by projecting necessary information into the driver's field of view, rather than requiring the driver to move their head towards a center display, for example [201], thereby fostering situation awareness [153]. The driver must be aware at all times of the mode (automated vs. manual) in which the vehicle is operating, and AR can thereby play an important part in keeping the driver in the loop by visualizing the active mode [194]. Moreover, only relevant and necessary information should be visualized. This requires an understanding of the user's and the vehicle's context. For example, highly automated vehicles (SAE level 3 and higher) could enable the "driver" to conduct work- and entertainment-related activities, whereas vehicles with a lower automation level might only allow the display of short, small notifications rather than content-rich applications [181]. Highlighting pedestrians or other hazards is an additional application area of AR; it requires both correct real-world positioning of the AR content and transparency about the detected traffic participants [25], and may even need to attract the driver's attention when depicting crash predictions or preventions [214]. Information about other traffic participants could therefore be displayed in order to support cooperation between the individual entities. For example, in case of a traffic jam, the AR system could visualize an emergency corridor, guiding the driver to the correct lane. Another example we came up with is the visualization of the end of a traffic jam in order to prepare the driver/passengers for halting the vehicle. A further safety aspect concerns not what information should be displayed, but rather what medium for AR should be used.
In most research papers, AR is realized or simulated as a HUD or WSD rather than through AR HMDs. However, the viability of HMDs for drivers should be investigated nonetheless; in that case, legal and safety aspects (for field studies) should additionally be considered. Further, longitudinal studies should be conducted to explore the role of perceived safety with AR interfaces, and the possibility of behavioral adaptation of the interface, such as limiting the verbosity of visual information (e.g., displaying only emergency information vs. displaying all detected objects in the road environment) [11].
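The verbosity limiting described above can be thought of as a simple priority filter over AR annotations. The following Python sketch illustrates the idea; the data model, priority scale, and verbosity levels are our own illustrative assumptions, not taken from [11]:

```python
# Illustrative verbosity filter for AR annotations (assumed data model).
# Each annotation carries a priority: 0 = emergency ... 3 = ambient detail.

EMERGENCY_ONLY = 0   # verbosity level: show only priority-0 items
ALL_OBJECTS = 3      # verbosity level: show everything the sensors detected

def filter_annotations(annotations, verbosity):
    """Keep annotations whose priority does not exceed the verbosity level."""
    return [a for a in annotations if a["priority"] <= verbosity]

scene = [
    {"label": "pedestrian crossing", "priority": 0},
    {"label": "speed limit 80",      "priority": 1},
    {"label": "parked car",          "priority": 3},
]
print(filter_annotations(scene, EMERGENCY_ONLY))
```

At the `EMERGENCY_ONLY` level only the pedestrian warning survives; a behavioral-adaptation scheme would adjust the verbosity level over time based on the driver's preferences or state.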

Navigation
Another prominent feature of automotive AR is the ability to provide more immersive navigation instructions, which are particularly useful for lower levels of vehicle automation, such as level 2 and below. World-fixed AR can visualize the proposed path for the vehicle to follow, blended seamlessly into the environment [81]. Further, road regulations and traffic signs can be highlighted, thereby guiding the driver through traffic with the most relevant information [2]. For higher levels of vehicle automation, detecting vehicles ahead and visualizing content that "follows" along can reduce simulator sickness and increase situational awareness, by avoiding shifts of the driver's visual focus between the real world and the digital content on the HUD/WSD. Overall, the consensus of the workshop participants was that for lower levels of vehicle automation, navigation instructions in AR can enhance the manual driving experience, while for more highly automated vehicles, motion sickness should be researched more extensively in combination with navigation visualizations.
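World-fixed ("contact-analog") instructions require mapping 3D route points into the driver's 2D view. The following minimal pinhole-camera sketch in Python illustrates this projection step; the intrinsic parameters are made-up values, and real HUD calibration (accounting for windshield optics and the driver's eye position) is considerably more involved:

```python
# Minimal pinhole projection of a 3D waypoint (vehicle coordinates, metres)
# onto a 2D display plane. Intrinsics below are illustrative, not calibrated.

FX = FY = 800.0        # assumed focal lengths in pixels
CX, CY = 640.0, 360.0  # assumed principal point (display centre)

def project(point):
    """Project (x right, y down, z forward) to pixel coordinates."""
    x, y, z = point
    if z <= 0:
        return None  # point is behind the viewer, nothing to draw
    return (CX + FX * x / z, CY + FY * y / z)

# A waypoint 20 m ahead, 2 m to the right, 1 m below eye level:
print(project((2.0, 1.0, 20.0)))  # -> (720.0, 400.0)
```

Note how division by depth `z` makes distant waypoints converge towards the display centre, which is what makes the path appear to lie on the road.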

Trust & Acceptance
Our workshop participants also identified trust as an important research topic, as two emerging technologies intersect in potential future mobility solutions: automated vehicles and augmented reality. New technologies need to earn their users' trust [234]. While tech-savvy people might welcome revolutionary changes in vehicles, other drivers should not be left behind; otherwise, novel technology might be seen as a hindrance rather than a catalyst for improved mobility. AR may play a vital role in creating trust and acceptance in automated vehicles, as the vehicle's "thinking and doing" can be visualized, for example by highlighting in the HUD/WSD the obstacles that motivated certain road maneuvers. This level of transparency can be a form of explainable artificial intelligence (XAI) [235]. At the other end of the spectrum, overtrust should be addressed as well. Relying too much on novel technology could lead to a false sense of comfort in automated vehicles, and the driver should be made aware of the technology's capabilities and deficiencies. In that regard, trust must be calibrated to the individual user's attitude towards automated driving, and we believe that AR can visually show the driver what the vehicle's sensors "see", and therefore give the driver an idea of the vehicle's technological scope.

Content Presentation & AR Visualization
One opportunity of AR interfaces is to re-define traditional content presentation, for example by using a head-mounted display or a large 3D AR windshield display. Compared to dashboard screens or center tablets, windshield displays offer a vastly larger screen space, which can be utilized by multiple users. However, in order for world-fixed, or contact-analog, visualization to work, the outside road environment must be detected, e.g., using optical cameras or lidar sensors, resulting in point clouds or surfaces that AR apps can utilize to place digital objects. This enables users (drivers, passengers) to experience novel, richer information presentations based on real-world information (points of interest, other road users, etc.). Information pertaining to real-world entities, such as vehicles ahead or pedestrians, might be easier to understand when highlighted in AR than when presented as warnings on a traditional screen. However, as the driver population does not consist solely of young, tech-savvy users, all target groups must be considered, including elderly drivers and their attitude towards new interfaces [178]. Further, users must be able to understand the information presented on AR displays, and it might be helpful to initially guide users through the AR interface with onboarding [36]. Moreover, AR-only interfaces might not be suitable for certain users, and automotive UI/UX designers might need to design for traditional UIs as well as AR interfaces. Another challenge regarding AR interfaces is the context in which information with different levels of priority needs to be visualized. For example, safety-relevant driving information and side-activity information may appear side by side on an AR HUD/WSD; yet the driver must differentiate between them, or the system has to decide what granularity of information is best displayed. With large screens, we must consider the possibility of visual overload and distraction caused by visual clutter, i.e., too many digital objects placed in the scene. This abundance of information could lead to cognitive overload, making the driver feel stressed and reducing situational awareness [48]. It is therefore necessary to display only the necessary information at the time when the driver's awareness is needed, e.g., for take-over requests in semi-automated driving [10]. It might be advantageous to blend the virtual information into the real environment using AR interfaces, as the driver would not have to switch focus between different distances (real world, digital information), as is the case with screen-fixed displays, e.g., dashboard screens. Moreover, the digital content should not obstruct the real world by being displayed too opaquely, i.e., the transparency of the content should be adjusted to the real environment [171].
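In its simplest form, such transparency adjustment could adapt content opacity to the luminance of the background behind the content: brighter backgrounds call for more opaque content to stay legible, while darker backgrounds allow more transparency. The Python sketch below illustrates this with a linear mapping; the constants and the mapping itself are our own illustrative assumptions, not taken from [171]:

```python
# Illustrative opacity adaptation for AR content.
# Assumed bounds keep content always somewhat visible but never fully opaque.
MIN_ALPHA, MAX_ALPHA = 0.35, 0.9

def adapt_alpha(background_luminance):
    """Map background luminance in [0, 1] to a content opacity value."""
    lum = min(max(background_luminance, 0.0), 1.0)  # clamp sensor input
    return MIN_ALPHA + (MAX_ALPHA - MIN_ALPHA) * lum

print(adapt_alpha(0.0))  # dark road at night: near the lower bound
print(adapt_alpha(1.0))  # bright sky behind the content: near the upper bound
```

A production system would likely sample luminance per content region and smooth the adaptation over time to avoid flicker.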

Control System
We clustered brainstorming ideas pertaining to interaction with and control of an AR interface under the term control system. Similar to novel finger gestures, such as pinch-to-zoom, introduced with multi-touch enabled smartphones [65], interaction paradigms with AR interfaces must be learned. Therefore, onboarding processes will be required to familiarize drivers/passengers with operating non-traditional 3D displays. Additionally, all target and age groups need to be considered. We strongly believe that the advancement of automated driving will be coupled with the ubiquity of AR interfaces, as more and more tasks might shift towards novel non-driving related experiences in vehicles, making way for work and well-being activities [190]. In that regard, metaphors from the operation of smartphones might be helpful and considered intuitive by automotive users (e. g., [15,182]). Subsequently, we considered how interactions with AR interfaces should be performed. As vehicle automation progresses, the driver will move further away from the steering wheel, and pressing haptic buttons might not be the most efficient or pleasant way to navigate through AR content. Therefore, a multimodal approach considering finger/hand gestures, speech, and gaze input, and combinations thereof, should be explored. For real world-fixed, or contact-analog, AR, eye tracking is required, and would therefore be an interesting input modality [169]. However, physical controls should not be dismissed, as they are already well-known, and, if placed in easily reachable locations (e. g., integrated into the driver's/passenger's rotatable seat), should be explored as well. Conversational UIs [93] can further be realized with AR interfaces, as they allow for more natural, user-centered human input, potentially combining voice assistance and immersive visualizations [159].
Furthermore, the transition from opaque 2D screens towards semi-transparent 3D AR displays will also require further attention by researchers and UI designers, as users are not yet familiar with these interfaces.
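One way such a multimodal combination could work is "gaze plus confirm": the element the user last fixated is selected when a confirmation gesture or voice command follows shortly afterwards. The sketch below is a hypothetical Python illustration of that fusion logic; the event model, the 0.5 s window, and the "confirm" command name are our assumptions, not a described system:

```python
from dataclasses import dataclass

@dataclass
class InputEvent:
    modality: str   # "gaze", "gesture", or "speech"
    target: str     # element id for gaze; command name otherwise
    timestamp: float

def fuse_gaze_and_confirm(events, window=0.5):
    """Return the id of the AR element selected by pairing the most recent
    gaze fixation with a confirmation gesture/voice command that follows
    within `window` seconds; None if no valid pair exists."""
    last_gaze = None
    for ev in sorted(events, key=lambda e: e.timestamp):
        if ev.modality == "gaze":
            last_gaze = ev
        elif ev.target == "confirm" and last_gaze is not None:
            if ev.timestamp - last_gaze.timestamp <= window:
                return last_gaze.target
    return None
```

The time window matters: a confirmation arriving too long after the fixation should not trigger a selection, as the user's attention may already have moved on.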

Personalization & Customization
One of the main opportunities of AR interfaces is the potential to modify the content more easily compared to more traditional dashboard/center stack/steering wheel haptic interfaces. While large tablet displays also open up the possibility to adapt the graphical user interface, AR HUDs/WSDs might have the edge as they direct the driver's/passengers' attention towards the road environment, thereby potentially increasing situational awareness and mitigating motion sickness [173]. We further identified that AR display layouts should be divided into at least two groups: adaptable and customizable layout elements, and fixed, non-customizable layout elements. Different levels of customization should be introduced, such as binary modes (e. g., light or dark UI skins), finite modes (e. g., discrete volume steps between a minimum and maximum value), and continuous values (e. g., colors). Personalization should therefore be possible, depending on the functionality to customize and the level of vehicle automation [78,179]. Additionally, an adequate "one-size-fits-all" AR interface should be explored and evaluated. While future mobility solutions might prioritize in-vehicle experiences, customizable user experience (UX) and user preferences do not necessarily imply better or safer interfaces. Therefore, a holistic approach to automotive AR interface design coupled with safety aspects should be investigated.
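The three customization levels described above (binary, finite, continuous) could be captured in a small settings model that validates values against the level's constraints. This is a minimal, hypothetical Python sketch; the class and setting names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Setting:
    name: str
    kind: str            # "binary", "finite", or "continuous"
    value: object
    options: tuple = ()  # allowed values for binary/finite settings
    bounds: tuple = ()   # (min, max) for continuous settings

    def set(self, new_value):
        """Validate a requested value against this setting's kind."""
        if self.kind in ("binary", "finite"):
            if new_value not in self.options:
                raise ValueError(f"{new_value!r} not in {self.options}")
        elif self.kind == "continuous":
            lo, hi = self.bounds
            if not lo <= new_value <= hi:
                raise ValueError(f"{new_value!r} outside [{lo}, {hi}]")
        self.value = new_value

# Example settings matching the three customization levels:
ui_skin = Setting("ui_skin", "binary", "light", options=("light", "dark"))
volume = Setting("volume", "finite", 3, options=tuple(range(0, 11)))
hud_hue = Setting("hud_hue", "continuous", 210.0, bounds=(0.0, 360.0))
```

A production system would additionally gate which settings are exposed depending on the automation level, per the discussion above.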

Driver Improvements & Gamification
AR interfaces may further enable drivers to improve their driving skills. For example, gamification concepts could be applied to help drivers both increase road safety and drive in an environmentally friendly manner. Misbehavior, driving violations, etc., could be instantly displayed on the AR HUD/WSD for drivers to learn from their mistakes. Gamification interfaces and social features could be used to "rank" drivers and compare their driving skills with those of other drivers or friends. The use of AR interfaces in driving schools has the potential benefit of playfully instructing learners on improving their driving proficiency [3].

Feedback Strategies
Another main category of our workshop findings is the opportunity of AR interfaces to give visual feedback to the driver; in case the feedback is location-based, the interface can visually highlight that information. Feedback is a core component for facilitating trust [233], but also for making explainable AI visually tangible. An example would be explaining why a certain vehicle behavior or maneuver was performed. The level of feedback needs to be determined, as too much feedback might cognitively overwhelm the driver, while not enough feedback might reduce the driver's trust in the vehicle. An open question remains whether the vehicle should only provide feedback on what the vehicle can control, or on the driver's performance (e. g., compliant driving, environmentally friendly driving), or both. Additionally, the level of feedback might be interesting for the driver to customize. For example, should the AR interface show what the vehicle "sees and senses", "thinks", and/or the resulting action (cf. Tesla Autopilot's 3D mapping of individual vehicles, cf. [79])?

Entertainment & Work
One of the main benefits of automated driving is the ability to engage in non-driving related tasks (NDRTs), such as working or media consumption [142]. The shift from today's information displays to tomorrow's entertainment and experience displays leads to new opportunities and challenges for automotive UI/UX designers. AR interfaces introduce new display and interaction paradigms in a transition away from traditional 2D screens towards immersive 3D continuous-depth displays [78]. While the immediate use case for AR HUDs/WSDs opens up for drivers (cf. safety and navigation aspects), passengers could also benefit from AR interfaces implemented on the side windows (cf. display of external information) [222]. Similar to today's mobile app marketplaces, we envision automotive app stores with developer-driven apps (in addition to OEM apps) focussing on experiences based on the different levels of vehicle automation, and placed in different app categories (e. g., games, safety, navigation, news, etc.). Analogous to mobile app HCI and design guidelines, it would be appropriate to start developing automotive app HCI and design guidelines for AR interfaces. As smartphone and tablet screens are comparatively small compared to windshield displays, a new design space [228] and corresponding guidelines are needed for automotive UI/UX designers and developers, creating novel experiences for driver/passenger NDRTs.

Display of External Information
First, AR allows the placement of points of interest (POIs) in the driver's or passengers' field of view, thereby enabling them to process geospatial information blended into the environment [246]. Application areas for displaying POIs extend beyond navigation [92] and include context-aware information such as restaurants and local businesses [151]. Furthermore, AR applications can display user-specific content relating to the outside road environment. For example, police officers could utilize AR windshield displays to show additional data about vehicles in the field of view, such as their speed and license information. Of course, privacy issues might arise that will eventually need to be addressed [68].
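At its core, placing a POI in the field of view means projecting a 3D position relative to the vehicle onto the display. A minimal sketch using a standard pinhole camera model is shown below; the focal length, screen resolution, and coordinate convention are illustrative assumptions, and a real system would use the calibrated optics of the HUD/WSD:

```python
def project_poi(poi_xyz, focal_px=800.0, screen=(1920, 1080)):
    """Project a POI position given in the vehicle's camera frame
    (x right, y down, z forward; meters) onto display pixel
    coordinates with a simple pinhole model. Returns None if the
    POI is behind the viewer or outside the display."""
    x, y, z = poi_xyz
    if z <= 0:
        return None  # behind the windshield; nothing to draw
    u = screen[0] / 2 + focal_px * x / z
    v = screen[1] / 2 + focal_px * y / z
    # Cull POIs that project outside the visible display area.
    if not (0 <= u < screen[0] and 0 <= v < screen[1]):
        return None
    return (u, v)
```

Note how the division by depth `z` makes distant POIs cluster near the screen center, which is one reason clutter management (discussed earlier) becomes critical in dense environments.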

Hardware
One of the biggest potential advantages of in-car AR is the enabling of minimalist UIs. As such, visual clutter on the dashboard, center stack, and steering wheel with partly redundant functionality could be eliminated [181]. Notable challenges with in-vehicle AR pertain to hardware aspects. For instance, vehicle-to-vehicle communication must be extended in order to reduce reliance on standalone sensors, e. g., for detecting other road users [162]. Further, 3D maps for navigation must be kept up to date, in particular to facilitate world-relative navigation instructions that blend seamlessly into the environment [130]. For that reason, hardware is not only important inside the vehicle; reliable and secure backends also need to be established, e. g., by OEMs or service providers, to ensure that high-quality content is streamed to the vehicle. In the case of AR HUDs and WSDs, eye box restrictions need to be accounted for. Currently, the position/placement of HUD content needs to be manually adjusted by the driver; however, with eye tracking devices already being used for driver monitoring [38,198], an additional use case for them would be to automatically adjust the AR content according to the driver's head position and rotation. The use of AR HUDs/WSDs as a single point of information would further advance the opportunity to eliminate or reduce physical materials for buttons, knobs, and displays (which are sometimes redundant or rarely used), and their resource consumption, resulting in an ecological benefit. Only the needed settings/user-defined apps would be used, leading to a more minimalist user interface. When it comes to integrating digital objects with the real world in a moving environment, one must further consider latency issues. If digital objects cannot be rendered and visualized within a reasonable timeframe, the overlay will become confusing for the driver. Additionally, while HUDs are designed for limited additional visual input for drivers, WSDs allow for a large design space. Field of view restrictions must therefore be overcome for further advances in large AR content spaces [75,244]. This applies not only to stationary AR projections on the HUD/WSD, but also to mobile HMDs. In the latter case, their added weight needs to be considered in combination with driver fatigue [69].
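The automatic eye-box adjustment mentioned above could, in its simplest form, shift HUD content proportionally to how far the tracked head position deviates from the nominal eye-box center, clamped to the usable display area. The following is a hypothetical Python sketch; the gain and clamp values are illustrative assumptions and would depend on the specific HUD optics:

```python
def hud_offset(head_pos_mm, nominal_pos_mm=(0.0, 0.0), gain=0.5,
               max_shift_mm=25.0):
    """Compute a lateral/vertical shift of HUD content (in mm on the
    combiner) that compensates for the driver's head moving away from
    the nominal eye-box center, clamped to the display's usable area."""
    def clamp(v):
        return max(-max_shift_mm, min(max_shift_mm, v))
    dx = head_pos_mm[0] - nominal_pos_mm[0]
    dy = head_pos_mm[1] - nominal_pos_mm[1]
    # Shift content proportionally to head displacement so the virtual
    # image stays aligned with the outside scene (gain tuned per optic).
    return (clamp(gain * dx), clamp(gain * dy))
```

The clamp reflects the physical eye-box limit: beyond a certain head displacement, no content shift can keep the virtual image visible, and the system would have to degrade gracefully (e. g., fall back to a screen-fixed warning).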

Augmented Reality Applications for Pedestrians and Other Road Users
While not the core focus of our workshop, we also discussed the potential of AR technology used outside of vehicles to interact with them, for example, smartphone AR as well as head-worn AR glasses warning pedestrians about approaching cars. Our main findings during the brainstorming sessions on the usage of AR outside of vehicles relate to the application areas of safety and navigation.

AR Worn by Vulnerable Road Users
AR apps must be used with caution not only for in-vehicle experiences [44], but also from a pedestrian's point of view [8]. Distractions caused by smartphone AR games such as Pokémon GO, where players are incentivized towards increased physical activity, lead to safety issues that need to be addressed. For example, the AR game could display warning messages or visualize the source of danger (e. g., an approaching vehicle) to the smartphone user to keep them in the situational awareness loop [164]. Head-worn AR is also of increasing interest to academia and industry, for example in the form of bicycle or motorcycle helmet-integrated AR. First concepts (e. g., [218,219]) show that new opportunities for AR designers and developers are opening up in the VRU AR design space. While initial concepts focus on safety and navigation features, similar to the automotive domain in its early days, we encourage researchers to explore new application areas.
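A pedestrian warning like the one described above could be triggered by a simple time-to-collision (TTC) estimate for an approaching vehicle. The sketch below is a hypothetical Python illustration; the 4-second threshold is an assumed value, and a deployed system would also account for sensing uncertainty and the pedestrian's own motion:

```python
def should_warn(distance_m, closing_speed_mps, ttc_threshold_s=4.0):
    """Decide whether a smartphone/HMD AR app should warn about an
    approaching vehicle, based on a simple time-to-collision (TTC)
    estimate. A vehicle moving away (closing speed <= 0) never
    triggers a warning."""
    if closing_speed_mps <= 0:
        return False
    ttc = distance_m / closing_speed_mps
    return ttc <= ttc_threshold_s
```

The visualization layer would then decide how to present the warning, e. g., highlighting the approaching vehicle in the camera view rather than showing an abstract alert, to keep the user in the situational awareness loop.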

AR on the Vehicle Used by Other Road Users
External car displays provide a novel design space for functionality vehicles can offer to pedestrians and other road users. For example, environment exploration and navigation instructions can be facilitated with external AR displays [33]. Additionally, pedestrians' crossing behavior can be influenced using such displays [49,66]. Roadside maintenance can further be simplified by providing instructions on external displays, which can play an important part in building a cooperative vehicle infrastructure system [40].

AR/VR Used by Remote Operators
Another use case for AR/VR is immersive visualization for remotely controlling, or so-called teleoperating, vehicles. In that scenario, the remote operator could utilize VR to steer the vehicle, while the in-vehicle passengers see AR feedback on the HUD/WSD, thereby establishing and fostering collaboration with the remote operator [55,143]. Moreover, smartphone AR can be employed to transmit a live camera feed to the remote operator, and both the smartphone user and the remote operator can mark areas in the live feed that need to be investigated on site. This form of AR collaboration is already being applied prototypically for remote assistance in industrial repairs and shows great potential (e. g., [147]).

Discussion and Outlook
Automotive

Conclusion
Automotive augmented reality applications will fundamentally change driver and passenger experiences, as well as external experiences for other road users and pedestrians. Usable AR display technology for various industrial and home entertainment applications is already available, and prototypes of AR head-up displays showcased by automotive manufacturers and suppliers in combination with semi-automated vehicles may foster further research efforts towards commercially available systems within the next few years. Novel AR applications will have to account for a wide range of perceptual and cognitive issues within the (semi-automated) vehicle design space. We presented and discussed application areas, challenges, opportunities, and practical use cases based on a literature review as well as an HCI workshop with participants from industry and academia, and concluded by giving recommendations for potential future frontiers. More interdisciplinary research at the intersection of AR technology, HCI, UI/UX, and psychology is required to enable safe and practical AR applications for future mobility solutions.