Open Access article published by De Gruyter Oldenbourg on August 31, 2022, under a CC BY 4.0 license.

Adapting visualizations and interfaces to the user

  • Francesco Chiossi

    Francesco Chiossi is a PhD researcher in the Media Informatics Group at the Department of Computer Science of LMU Munich. He obtained an M.Sc. in neuroscience and applied cognitive science from the University of Padua. His work focuses on measures of human behavior, such as electrodermal activity and electroencephalography, as implicit input for designing physiologically-adaptive systems across the virtuality continuum.

  • Johannes Zagermann

    Johannes Zagermann is a PhD researcher in the Human-Computer Interaction Group at the University of Konstanz. He studied Business Information Systems at the Hochschule Furtwangen University (BSc, 2010) and Information Engineering at the University of Konstanz and Linköping University (MSc, 2015). His research interests span the areas of cross-device interaction, multi-modal interaction, and hybrid user interfaces.

  • Jakob Karolus

    Jakob Karolus is a postdoctoral researcher at the Human-Centered Ubiquitous Media lab at the Ludwig Maximilian University of Munich. His research focuses on establishing guidelines and techniques for proficiency-aware systems based on ubiquitous sensing technologies. His key interests lie in investigating opportunities and the design of engaging experiences for users to understand their own proficiency. He also conducts research in employing electromyography for sensory augmentation and novel interaction paradigms in human-computer interaction.

  • Nils Rodrigues

    Nils Rodrigues has been a PhD researcher at the Visualization Research Center (VISUS) of the University of Stuttgart since 2016. He studied software engineering at the University of Stuttgart (2013) and has also worked as a consultant for business integration. His research interests include information visualization, layout algorithms, and eye tracking.

  • Priscilla Balestrucci

    Priscilla Balestrucci is a postdoctoral researcher at Ulm University. She studied neuroscience (PhD, 2017) and medical engineering (MSc, 2013; BSc, 2011) at the University of Rome “Tor Vergata”. Her research focuses on how information from multiple sensory systems and prior knowledge affect sensorimotor control.

  • Daniel Weiskopf

    Daniel Weiskopf is Professor at the Visualization Research Center (VISUS) of the University of Stuttgart, Germany. He received his Dr. rer. nat. degree (similar to Ph.D.) in Physics from the University of Tübingen, Germany, in 2001, and the Habilitation degree in Computer Science from the University of Stuttgart in 2005. His research interests include visualization, visual analytics, eye tracking, human-computer interaction, computer graphics, and special and general relativity.

  • Benedikt Ehinger

    Benedikt Ehinger is a Tenure-Track Professor at the Center for Simulation Technology and the Institute for Visualization and Interactive Systems in Stuttgart, Germany. He studied Cognitive Science at the University of Osnabrück (BSc; MSc; PhD, 2018). In 2018, he joined the Predictive Brain Lab of Floris de Lange at the Donders Institute in Nijmegen, the Netherlands, as a postdoc. His research interests are EEG, eye movements, statistics, time series, and visualization.

  • Tiare Feuchtner

    Tiare Feuchtner has been a Tenure-Track Professor at the Department of Computer Science of the University of Konstanz since 2021. She studied Computer Science at Aarhus University (PhD, 2018), TU Berlin (MSc, 2015), and TU Wien (BSc, 2011), was a visiting researcher at the EventLab in Barcelona, and a junior researcher at the Telekom Innovation Laboratories (T-Labs) in Berlin and the Austrian Institute of Technology (AIT) in Vienna.

  • Harald Reiterer

    Harald Reiterer is Professor at the University of Konstanz and Chair for Human-Computer Interaction in the Department of Computer Science. He received his Ph.D. in computer science from the University of Vienna in 1991. In 1995, the University of Vienna conferred on him the venia legendi (Habilitation) in Human-Computer Interaction. From 1990 to 1995, he was a visiting researcher at the GMD in St. Augustin/Bonn (now the Fraunhofer Institute for Applied Information Technology), and from 1995 to 1997, Assistant Professor at the University of Vienna. His research interests include different fields of Human-Computer Interaction, such as Interaction Design, Usability Engineering, and Information Visualization.

  • Lewis L. Chuang

    Lewis L. Chuang has been Professor of Humans and Technology at the Chemnitz University of Technology since 2022. He read psychology at York and Manchester and received his PhD in neuroscience from Tübingen University in 2011. His research addresses the role and reliability of implicit psychophysiological activity as an input for human-computer interaction.

  • Marc Ernst

    Marc Ernst has been Chair of the Department of Applied Cognitive Psychology at Ulm University since 2016. He studied physics in Heidelberg and Frankfurt/Main and received his PhD in 2000 from Tübingen University for his work on human visuomotor behavior at the Max Planck Institute for Biological Cybernetics. He was a visiting researcher at UC Berkeley (2000–2001), a research scientist and group leader at the MPI in Tübingen (2001–2010), and a professor at Bielefeld University (2011–2016).

  • Andreas Bulling

    Andreas Bulling is Professor of Computer Science at the University of Stuttgart where he heads the research group Human-Computer Interaction and Cognitive Systems. He received his MSc. in Computer Science from the Karlsruhe Institute of Technology (KIT), Germany, and a PhD in Information Technology and Electrical Engineering from ETH Zurich. His research interests are in fundamental computational methods to analyse and model everyday human behavior and cognition.

  • Sven Mayer

    Sven Mayer is an assistant professor of Human-Computer Interaction at LMU Munich. In his research, he uses machine learning tools to design, build, and evaluate future human-centered interfaces. He focuses on hand- and body-aware interactions in contexts such as large displays, augmented and virtual reality, and mobile scenarios.

  • Albrecht Schmidt

    Albrecht Schmidt received his Ph.D. degree in computer science from Lancaster University, in 2002. He is currently a Professor with LMU Munich and a Chair for Human-Centered Ubiquitous Media, Department of Informatics. His research encompasses the development and evaluation of augmented and virtual reality applications to enhance the user’s life quality through technology. This includes the amplification of human cognition and physiology through novel technology.

Abstract

Adaptive visualizations and interfaces pervade our everyday tasks to improve interaction in terms of user performance and experience. This approach draws on a variety of user inputs, whether physiological, behavioral, qualitative, or multimodal combinations thereof, to enhance the interaction. Given the multitude of approaches, we outline the current research trends of inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of the current research in adaptive systems.

1 Introduction

The first stage of implementing an adaptive interaction system is to define the desired “goal.” In a system (a set of connected variables), such a goal is a state or set of states (specified values for those variables at a certain point in time) that is preferred over others [10]. To achieve this, such systems require a user model that represents preferences, capacities, and affective processes and their relationship with the task at hand. This cybernetic process requires two components: a sensing component that detects the current state of the system and an actuation component that, depending on the sensing component, guides the system toward the desired goal.

The user provides multimodal information, both explicit and implicit, that can drive an interface or visualization adaptation toward a (shared) goal. To date, adaptive systems have mainly exploited direct user input as interaction modalities: the computer reacts only to explicit commands provided by the user, e. g., mouse, keyboard, speech, touch. However, recent approaches are increasingly considering implicit aspects of the user, such as their cognitive processing capabilities [32] and the user’s physiological state [17]. For example, in the field of visualization, a personalized adaptation based on various cognitive functions (such as perceptual speed and working memory) impacts the user’s performance [59] and different modalities of information processing require different visualizations [58]. This complements traditional approaches of considering implicit user data based on their profile, e. g., interests or prior knowledge.

Monitoring the user’s physiological state to infer and adapt the interaction couples the system to the user’s goals and thus enables developers to design biocybernetic loops. The user’s physiological data are processed online and classified to trigger the adaptive system, which is then in charge of performing adaptive actions in the interface [17]. Fairclough [16] proposed first- and second-order adaptation loops. The first-order adaptation consists of a loop that begins with monitoring the user’s condition and completes by executing adaptive actions. This first-order adaptation necessitates a set of rules that link each of the user’s possible conditions to at least one adaptive action. The second-order adaptation encompasses detecting changes that occur as a direct result of adaptation. It allows the system to acquire information on the user’s state and preferences over multiple iterations and thus to adjust its actions to a single user. After a phase of reciprocal coupling, this leads to system and user co-existence.
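To make the loop concrete, the following is a minimal sketch of a first-order adaptation loop in the sense described above: sense the user's condition, classify it, and look up an adaptive action in a rule table. The continuous workload estimate, the state thresholds, and the action names are illustrative assumptions, not taken from [16].

```python
# Minimal sketch of a first-order biocybernetic loop.
# States, thresholds, and actions are hypothetical placeholders.

def classify_state(workload: float) -> str:
    """Map a continuous workload estimate (0..1) to a discrete user state."""
    if workload > 0.7:
        return "overloaded"
    if workload < 0.3:
        return "underloaded"
    return "optimal"

# First-order rules: each user condition maps to at least one adaptive action.
RULES = {
    "overloaded": "simplify_interface",
    "underloaded": "increase_task_demand",
    "optimal": "no_change",
}

def adaptation_step(workload: float) -> str:
    """One pass of the loop: sense -> classify -> actuate."""
    return RULES[classify_state(workload)]
```

A second-order loop would additionally log how the state changes after each `adaptation_step` and tune the thresholds per user over multiple iterations.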

Biocybernetic approaches do not consider individual users as static entities. Instead, such dynamic systems update continuously based on incoming information from the environment and, thus, on changing user requirements and goals [14], [62]. Such dynamics are critical, especially when the update of the adaptive interface is not under the explicit control of the user but depends on the characteristics of the interaction itself. Thus, both the adaptive interface and the user learn from each other. Mutual adaptation dynamics can lead to complex interaction patterns, affecting the adaptive system’s usability. A paradigmatic example consists of adaptive interfaces designed to improve the user’s performance by automatically reducing the error associated with a given task, e. g., an adaptive touch keyboard [19]. If an interface fails to incorporate the user’s learning capability, the performance of the joint system will likely be even worse than that of a user in isolation. In such cases, the contribution of the adaptive interface may result in error overcorrection and hence in a potentially unstable interaction [4]. In contrast, a well-designed system changes its features according to the characteristics of the user, for example, by providing only partial correction and taking into account the user’s learning rate over the entire course of the interaction. In the future, such a joint adaptation approach can improve both the outcome of the interaction and the user’s subjective experience. The next generation of intelligent systems will encompass increased autonomy and adaptability [26], facilitating proactive and implicit interaction with users [1].

This work provides an overview of applications in adaptive content, interaction, and visualization. We specifically address which information researchers use for adaptation, for what purpose, and, lastly, which domains are feasible for adaptive systems in the human-computer interaction (HCI) and visualization fields.

2 Technologies for adaptation

Systems nowadays may draw on various sources of information to infer the user’s current state and environment. Such information ranges from static adaptation using user profiles to context-aware systems [12], [53] using the user’s surroundings, and even ubiquitous sensing technologies. The last category potentially provides deep insights into a person’s cognitive abilities or motor skills. In the following, we provide a short overview of currently common physiological measures leveraged in adaptive interfaces.

2.1 Extracting features for adaptation

Among the psychophysiological measures useful for adaptive interfaces, electroencephalography (EEG), eye tracking, electrodermal activity (EDA), and electromyography (EMG) have garnered the most interest. Their sensors’ comparatively small size and ability to measure physiological activity non-invasively make them more likely to be incorporated into wearable consumer devices, such as glasses, wristwatches, and headbands.

EEG records electric potentials from the scalp, which reflect brain activity. Machine learning (ML) can extract event-related activity to estimate cognitive workload [33], attention allocation [60], or affective states [40]. Brain-machine interfaces [8] and user state estimation systems use these ML-generated estimates. However, such systems have to be considered in light of current challenges, such as the need for generalizable online applications of classification methods [41], improvements in transfer learning, and the application of new approaches such as deep learning or Riemannian geometry-based classifiers [36].
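As a toy illustration of such state estimation (not a published pipeline), the sketch below uses the theta/alpha band-power ratio, a commonly used EEG workload index; the band-power inputs and the decision threshold are illustrative assumptions.

```python
# Hypothetical workload index from EEG band powers.
# Theta power tends to rise and alpha power to fall under load,
# so their ratio is often used as a simple workload marker.

def workload_index(theta_power: float, alpha_power: float) -> float:
    """Theta/alpha ratio: larger values suggest higher cognitive workload."""
    return theta_power / alpha_power

def is_high_workload(theta_power: float, alpha_power: float,
                     threshold: float = 1.5) -> bool:
    """Threshold is a placeholder; real systems calibrate it per user."""
    return workload_index(theta_power, alpha_power) > threshold
```

In practice, such a binary estimate would feed the rule table of a first-order adaptation loop rather than being shown to the user directly.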

Similarly, gaze behavior can indicate high-level cognitive processes, see early work by Deubel [11] and Hoffmann [25]. Recent work analyzed specific eye movements and gaze patterns to infer, for instance, user activities and cognitive states. Jacob and Karn [29] and Duchowski [13] provide in-depth overviews of this domain.

When the goal is to infer responses to novel stimuli, cognitive workload, or stress, EDA measures might be preferable as a noninvasive and easy-to-use method. The EDA signal is the joint pattern of its phasic and tonic components [21]. Phasic Skin Conductance Responses (SCRs) reflect discrete, stimulus-specific responses that can be used to evaluate the novelty, importance, and intensity of the stimuli [44]. Tonic activity, as indexed via the Skin Conductance Level (SCL), is an inertial and slow response particularly well suited to evaluate the effect of continuous stimuli, i. e., a task. Therefore, HCI has used it to quantify, for instance, changes in arousal under high cognitive load [34] or stress [6].
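The tonic/phasic split can be sketched as follows: a centered moving average stands in for the slow tonic SCL, and the residual captures fast phasic SCRs. This is a deliberately simplified assumption; real pipelines use dedicated decomposition methods (e. g., deconvolution), and the window length here is illustrative.

```python
# Sketch: split an EDA sample list into tonic (SCL) and phasic (SCR)
# components using a centered moving-average baseline.

def decompose_eda(signal, window=5):
    """Return (tonic, phasic) lists of the same length as `signal`.
    tonic: centered moving average; phasic: residual around it."""
    n = len(signal)
    tonic = []
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        tonic.append(sum(signal[lo:hi]) / (hi - lo))
    phasic = [s - t for s, t in zip(signal, tonic)]
    return tonic, phasic
```

A flat signal yields a phasic component near zero, while a brief conductance spike shows up as a positive phasic deflection at its location.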

Besides inferring cognitive processes, measuring the user’s motoric responses can be especially useful in enabling a system to adapt to the user’s abilities and potential actions [39]. By measuring muscular activity, EMG can provide insights into the working mechanism of motor tasks. Using EMG measures for adaptation allows for providing user-tailored feedback, ranging from detecting emotional states through facial EMG [61], over gesture recognition [52], to an adaptive tutoring system for motor tasks [30].

Finally, multimodal adaptive systems based on sensor fusion can often achieve more robust adaptation. For example, Putze et al. [48] showed that combining EEG recordings with eye tracking addresses the Midas-Touch problem in gaze-based selection by estimating whether a fixation was purposeful or not. Moreover, Haufe et al. [23] showed that combining EMG with EEG leads to faster automatic braking in a driving simulator than using EEG alone.

2.2 Adapting the interface and visualization

Although there have been earlier models for adaptive systems [22], [54], we consider three critical adaptation elements: content, presentation, and interaction. When adaptive systems adjust their content, which relates to users’ preferences and engagement, they must consider the user’s prior knowledge and interests. This dimension might involve notification design and recommendations, especially considering the exploratory visual analytics process [51].

Secondly, presentation adaptation affects user interfaces (UIs) or visualizations according to users’ spare perceptual capacity, discomfort, and stress level by simplifying displayed information, adjusting luminance, or changing other properties.

Thirdly, interaction adaptation is a broader field, as it might encompass different paradigms. For example, in multitasking environments, users might experience tasks being switched off [47], see the number of options change in a decision-making task [46], or have the interaction modality modified, e. g., from gesture to hands-free interaction.

3 Use cases and applications

Here, we provide a brief overview of adaptive visualization and interfaces with use cases and applications from our work, specifically targeting content-based adaptation from physiological data, adaptation of visualization presentation from physiological data, and interaction adaptation.

3.1 Content adaptation

In the following, we present and discuss systems based on eye-tracking features that adapt to support language proficiency, increase recommender systems’ performance based on inferred user interest, or support visual analytics.

3.1.1 Adaptive displays based on language proficiency

Globalization means that interfaces are offered in a multitude of different languages. Language settings that are hard to access and correct can lead to user aversion. Consequently, there is merit in creating systems capable of estimating a user’s language proficiency and displaying content appropriate to the user’s abilities.

Figure 1: User interacting with a language-aware interface [31].

Recently, Karolus et al. [31] explored the potential of using a user’s gaze properties to detect whether the information is presented in a language the user understands (see Figure 1). Robustness and feasibility with low-grade eye-tracking equipment were important aspects of this work. They proposed technical specifications for the recording equipment and the interaction period using robust gaze features, including fixation and blink duration. They found that a few seconds of recorded gaze data is sufficient to determine if a user can speak the displayed language.
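The idea can be sketched as a simple decision over aggregated gaze features. Note that this is a hypothetical illustration, not the published model of Karolus et al. [31]: the single feature (mean fixation duration) and the threshold are assumptions, chosen only because longer fixations are a plausible proxy for reading difficulty in an unfamiliar language.

```python
# Hypothetical proficiency check from a few seconds of gaze data.
# Feature choice and threshold are illustrative, not from [31].

def mean(xs):
    return sum(xs) / len(xs)

def understands_language(fixation_durations_ms, threshold_ms=400.0):
    """Treat short average fixations as a sign that the displayed
    language is understood; long fixations suggest difficulty."""
    return mean(fixation_durations_ms) < threshold_ms
```

A language-aware interface could call such a predicate after a short interaction period and, if it returns false, offer to switch the display language.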

Figure 2: Screenshot of a recommender system with eye-tracking support [50]. Gaze is used as an indicator of interest and mapped to the underlying data.

3.1.2 Gaze as input for recommender systems

Silva et al. [56] sketched the possibility of back-propagating eye-gaze through the visualization pipeline and mapping it onto the underlying data. According to the eye-mind hypothesis, this viewed data is of interest to the user. Recommender systems, such as the one in Figure 2, can take advantage of such implicitly selected data to suggest helpful visualizations [50]. Recommendations based on such data fit the user’s current interest and might, by extension, also fit their current task. However, a robust inference of an explicit task is not trivial, but a recommender system based on data interest can suggest the correct views for any generic and unidentified task.
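The back-propagation of gaze onto data can be sketched as follows: fixations on screen regions are mapped to the data items rendered there, accumulating an interest score per item. The region-to-item mapping and ids are hypothetical; a real visualization pipeline would derive them from its scene graph.

```python
# Sketch: accumulate per-item interest from fixated screen regions.
from collections import Counter

def interest_scores(fixated_regions, region_to_item):
    """fixated_regions: iterable of region ids the user fixated.
    region_to_item: mapping from screen region to the data item drawn
    there. Returns a Counter of fixation counts per data item;
    regions with no known item are ignored."""
    return Counter(region_to_item[r]
                   for r in fixated_regions if r in region_to_item)
```

A recommender could then rank candidate visualizations by how well they cover the highest-scoring items.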

3.1.3 Eye tracking support in visual analytics systems

Visual analytics is a design framework for interactive visual displays that facilitate the exploration of, and insight into, data sets. Such systems rely on a loop that includes the viewer with all their prior knowledge, interests, and tasks. This loop allows the user to alter the selection of data, adjust parameters for data processing, and adapt the visualization on the fly to cater to current needs.

With the added information from eye trackers, such visual analytics systems can augment existing interaction techniques [56]. This can include, for instance, gaze as an additional cursor for interaction through speech or for disambiguation of targets when pushing buttons on hand-held controllers. In addition, with the advent of coarse eye tracking for devices with front-mounted cameras (e. g., tablets and phones), existing visual analytics software can “retro-fit” gaze data without changing the actual hardware. For example, law enforcement agents already use software on car-mounted tablets to provide them with overviews of occurrences in their districts. In such a scenario, even coarse gaze data can be used to check whether relevant events have been overlooked and to provide adaptive visualizations that attract the agent’s attention.
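A minimal sketch of this overlooked-event check, under stated assumptions: event positions, gaze samples, and the gaze-uncertainty radius are hypothetical inputs, and a simple distance test stands in for a proper attention model.

```python
# Sketch: flag on-screen events with no nearby gaze sample, so the
# visualization can adaptively highlight them. All values illustrative.

def overlooked_events(events, gaze_points, radius=50.0):
    """events: mapping event-id -> (x, y) screen position.
    gaze_points: list of (x, y) gaze samples (possibly coarse).
    Returns the ids of events with no gaze sample within `radius`."""
    def seen(pos):
        ex, ey = pos
        return any((gx - ex) ** 2 + (gy - ey) ** 2 <= radius ** 2
                   for gx, gy in gaze_points)
    return {eid for eid, pos in events.items() if not seen(pos)}
```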

3.2 Presentation adaptation

In this section, we highlight work on adaptive systems that adapt presentation based on users’ physiological input, such as EDA, to support user experience or the processing of relevant information, i. e., notifications.

3.2.1 Adaptation of virtual reality visual complexity based on physiological arousal

Virtual reality (VR) is rapidly gaining popularity for social or collaborative virtual environment applications. Such settings envision the involvement of realistic Non-Player Characters (NPCs), such as virtual crowds with human-like behavior. However, highly dynamic environments could provide task-irrelevant elements that negatively increase a user’s cognitive load and distractibility. Thus, monitoring users’ physiological activity and adapting the interaction is an emerging research trend to optimize user experience or performance.

Figure 3: Flowchart of the physiological loop used by Chiossi et al. [9]. The visual complexity (in the form of NPCs) adapts according to changes in the EDA calibrated from a baseline recording (Δb). The adaptation function is called every 20 seconds.

The goal of physiological control loops is to detect deviations from the optimal physiological state and to adapt features of the environment or task to drive users towards a more desirable state. Here, Chiossi et al. [9] focused on a peripheral measure of physiological arousal, i. e., EDA. Physiological arousal correlates with task demands and engagement in a multi-component task [18] and can be affected by the proxemics of NPCs both in VR [35] and augmented reality (AR) [27]. Hence, the stream of NPCs was adapted in response to changing EDA levels of users engaged in a dual-task setting. The EDA data were processed online using a 20-second moving-average window. For user-dependent adaptation, the adaptive algorithm adjusts the visual complexity relative to a baseline slope recorded at the beginning of the experiment. Thus, when the EDA slope was larger than the baseline slope, indicating increased arousal, 2 NPCs were removed. Conversely, 4 NPCs were added to the environment when the system detected decreased arousal. Figure 3 visualizes the adaptation algorithm. In this way, they supported the user experience, i. e., increased system acceptance, competence, and immersion, by adapting visual complexity.
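The adaptation rule just described can be sketched as follows. The NPC increments (remove 2 on rising arousal, add 4 on falling arousal) and the comparison against a baseline slope follow the text; the least-squares slope estimator and the function names are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the EDA-slope-based NPC adaptation described for [9].

def slope(samples):
    """Least-squares slope of evenly spaced samples."""
    n = len(samples)
    mx = (n - 1) / 2
    my = sum(samples) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(samples))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

def adapt_npc_count(current_npcs, eda_window, baseline_slope):
    """Remove 2 NPCs when the EDA slope exceeds the baseline slope
    (increased arousal), add 4 when it falls below (decreased arousal),
    otherwise leave the scene unchanged."""
    s = slope(eda_window)
    if s > baseline_slope:
        return max(0, current_npcs - 2)
    if s < baseline_slope:
        return current_npcs + 4
    return current_npcs
```

In the study, such a step would run on each 20-second window of the moving-average EDA signal.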

On the surface, the findings of Chiossi et al. [9] only impact the design of social VR scenarios. However, such results can generalize to the design of information visualization patterns to help users in mixed reality (MR) [57]. Thus, physiologically-aware systems can potentially personalize environments for the user, improving the user experience. Furthermore, in the visualization domain, VR complexity can burden users; thus, a physiological-adaptive system varying the information load according to users’ workload could foster the viability and usefulness of applications.

3.2.2 Adapting notifications to visual appearance and human perception

Users benefit from desktop notifications showing them incoming messages, upcoming calendar events, or other important information. Notifications need to attract and divert attention from a primary task effectively to ensure that users notice important information. At the same time, notifications are embedded into the visual design of the user interface and are subject to aesthetic considerations. However, such design decisions are currently static, i. e., they do not adapt at runtime, and can severely impair the user’s ability to perceive notifications.

Figure 4: Müller et al. [42] collected perceivability and behavioral data on realistically looking synthesized desktop images. They used this data to identify the factors that impact the noticeability of notifications. This allowed them to develop a computational model of noticeability that can predict noticeability maps for a given desktop image and user attention focus. These maps visualize the locations at which a notification is likely to be missed (red) or likely to be seen (green).

Müller et al. [42] presented a software tool to automatically synthesize realistic-looking desktop images for major operating systems and applications. These images allowed them to systematically study the noticeability of notifications during a realistic interaction task. They found that the visual importance of the background at the notification location significantly impacts whether users detect notifications. Their work also introduced the idea of noticeability maps: 2D maps encoding the predicted noticeability across the desktop. These maps inform designers how to trade off notification design and noticeability. In the future, such automatically predicted noticeability maps could be used in UI design and at runtime to adapt the appearance and placement of desktop notifications to the predicted noticeability.
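One way such a runtime adaptation could use a noticeability map is sketched below: given a grid of predicted noticeability values (here a hypothetical 2D list; the published model's output format is not assumed), the notification is placed at the cell where it is most likely to be seen.

```python
# Sketch: choose a notification position from a predicted
# noticeability map (higher value = more likely to be seen).

def best_notification_position(noticeability_map):
    """noticeability_map: 2D list of predicted noticeability values.
    Returns (row, col) of the highest-valued cell; ties resolve to
    the first occurrence in row-major order."""
    best, best_pos = float("-inf"), None
    for r, row in enumerate(noticeability_map):
        for c, value in enumerate(row):
            if value > best:
                best, best_pos = value, (r, c)
    return best_pos
```

A designer-facing tool could instead threshold the same map to warn about placements likely to be missed.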

3.3 Interaction adaptation

In this last section, we present relevant work that shows how adaptive systems can support mid-air or multimodal interactions in immersive MR environments.

3.3.1 Adapting 3D user interfaces for improved ergonomics

Interactive MR applications surround the user with virtual content that can be manipulated directly by reaching for it with the tracked hand or controllers. Such mid-air interaction techniques are beneficial, as they feel natural, but they may lead to physical strain, muscle fatigue, and challenging postures [5], [37]. The XRgonomics toolkit [15] addresses these issues by visualizing the ergonomics of the user’s interaction space (see Figure 5), allowing UI designers to create interfaces that are convenient and easy to manipulate. Further, it supports the automatic adaptation of UIs so that interactive elements remain within easy reach while the user moves about in a changing physical environment. The ergonomics metrics currently supported in XRgonomics are RULA [38], Consumed Endurance [24], and muscle activation [3].

Figure 5: The XRgonomics toolkit [15] visualizes the cost of interaction for each reachable point in the user’s interaction space through color coding (K) from blue (most comfortable) to red (least comfortable) (L). The applied metric is selected in a dropdown menu (A), and the computed value can be adapted to the user’s arm dimensions (C). For better visibility, the voxel size can be adapted (B), and the range of values to visualize can be limited along all three axes (E–G) to show only individual regions or slices of the space. Further, the user can retrieve the “optimal” voxel with the lowest ergonomic cost (D). Finally, the visualization of the avatar can be deactivated (H), and three sliders enable control of the perspective (I).

Prior research has explored ergonomics metrics [3], [24], [38], and while these metrics help evaluate existing UIs, it is difficult to use them for generating novel UI layouts. Further, the formulated design recommendations can be challenging to interpret and apply, particularly if the ideal interaction space is unavailable, e. g., due to the user’s physical environment.

To address this, Belo et al. [15] present a toolkit that visualizes the interaction cost in the user’s entire interaction space by computing ergonomics metrics for each reachable point in space. Their work shows a half-sphere of voxels around the user, color-coded to reveal the ergonomics of reaching for that position. Thus, the toolkit allows UI designers to inspect the interaction space and identify ideal placements for various interactive elements. The toolkit further allows the definition of constraints, e. g., letting the designer mark areas of the interaction space that are not available for the placement of interactive virtual content, for example, due to physical obstacles in the user’s environment. Based on that, the toolkit can recommend the position with the best ergonomic properties for reaching with the hand. As this computation is feasible in real time, the toolkit API can be used for dynamic adaptation of UIs, depending on the user’s changing physical environment or varying visible space. For example, consider a UI element that should always remain within the user’s field of view in an AR scenario. The user is wearing a head-mounted display (HMD), and as they turn their head to look around, the view frustum of the HMD constrains the available interaction space. The toolkit automatically computes the most ergonomic placement for the respective UI element within this available volume, keeping it in easy reach for the user.
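The core selection step can be sketched as follows, under stated assumptions: each voxel carries a precomputed ergonomic cost (from whichever metric is active), some voxels are blocked by constraints such as obstacles or the current view frustum, and the cheapest remaining voxel is recommended. Voxel ids, costs, and the function name are illustrative, not the XRgonomics API.

```python
# Sketch: recommend the voxel with the lowest ergonomic cost,
# honoring constraint-blocked regions. All inputs are hypothetical.

def optimal_voxel(costs, blocked=frozenset()):
    """costs: mapping voxel-id -> ergonomic cost (lower is better).
    blocked: set of voxel-ids excluded by constraints.
    Returns the cheapest available voxel id, or None if all blocked."""
    candidates = {v: c for v, c in costs.items() if v not in blocked}
    if not candidates:
        return None
    return min(candidates, key=candidates.get)
```

Re-running this selection whenever the blocked set changes (e. g., as the user turns their head) yields the dynamic UI adaptation described above.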

Beyond improving the ergonomics of mid-air interaction, this approach may be applied to achieve the opposite goal of increasing the physical effort required to reach a UI element or virtual object, e. g., with the aim of training particular muscles. This may contribute to rehabilitation or be applied in exergame scenarios, as proposed by [43].

3.3.2 Hybrid user interfaces for augmented reality

The complexity of interaction in AR environments provides many opportunities for adaptation, such as adapting visualizations based on the user’s physical surroundings (e. g., Shin et al. [55]), within situated analytics (e. g., Fleck et al. [20]), or by considering the devices available in the user’s workspace (e. g., STREAM [28]). One possibility of adapting visualizations and interfaces to the user can be realized through hybrid user interfaces that combine the advantages of heterogeneous devices (e. g., head-mounted AR devices and handheld tablets), creating the ability to facilitate multiple coordinated views across different realities for visual analytics.

Figure 6 
STREAM [28] combines spatially-aware tablets with head-mounted AR displays for visual data analysis using a 3D parallel coordinates visualization. STREAM’s adaptation mechanisms allow users to seamlessly switch between the AR visualization and the tablet visualization without losing context. For example, the user on the right holds their tablet vertically, allowing STREAM to merge their AR scatter plot with the tablet’s visual space. In contrast, the collaborator’s (left user) AR visualization is unaffected.

For example, STREAM [28] combines an immersive AR environment using an AR headset with a spatially-aware tablet for interacting with 3D parallel coordinates visualizations, which consist of linked 2D scatter plots. Here, the AR headset allows users to see the visualization in stereoscopic 3D space. At the same time, the tablet provides familiar touch interaction on individual 2D components of the visualization, e. g., 2D scatter plots, see Figure 6. Furthermore, to reduce the cognitive demand of switching between both interfaces, STREAM automatically adapts the representation of both interfaces to the user’s implicit interaction by tracking the tablet’s position in space: Once a user holds their tablet in front of them (i. e., indicating that the user wants to switch between devices), the selected 2D component of the visualization in AR (e. g., a 2D scatter plot) rotates toward the user’s viewing direction, while the tablet adapts its content to show the same 2D component on screen, effectively merging both visual spaces into one interaction space. This adaptation allows users to seamlessly switch between the AR and tablet visualizations without losing context.
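The implicit trigger for this kind of adaptation can be sketched as a simple pose heuristic. The Python snippet below is a hypothetical simplification, not STREAM’s actual implementation: all function names, thresholds, and the exact “tablet held in front of the user” criterion are assumptions for illustration.

```python
import numpy as np

def tablet_raised(tablet_pos, tablet_normal, head_pos, head_forward,
                  max_dist=0.6, max_angle_deg=35.0):
    """Heuristic for 'the user holds the tablet in front of them':
    the tablet is close to the head, lies near the gaze direction,
    and its screen roughly faces the user. Thresholds are assumed."""
    to_tablet = tablet_pos - head_pos
    dist = np.linalg.norm(to_tablet)
    if dist > max_dist:
        return False
    # Angle between the gaze direction and the direction to the tablet.
    cos_angle = np.dot(head_forward, to_tablet / dist)
    facing_user = np.dot(tablet_normal, -head_forward) > 0  # screen toward user
    return cos_angle > np.cos(np.radians(max_angle_deg)) and facing_user

def adapt(selected_plot, trigger):
    """On trigger: rotate the selected 2D component in AR toward the viewer
    and mirror it on the tablet, merging both visual spaces."""
    if trigger:
        return {"ar_rotated_to_user": True, "tablet_view": selected_plot}
    return {"ar_rotated_to_user": False, "tablet_view": None}
```

Running such a check every frame lets the system react to the implicit act of lifting the tablet, without requiring an explicit mode-switch command from the user.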

4 Outlook

Adaptive interfaces play a crucial role in developing new interaction paradigms, especially when improving performance or user experience. With this overview, we introduced how adaptive interfaces might leverage various user inputs for improved performance and UX, easier learning, and deeper engagement with information. Our overview presents current applications of adaptive content, interaction, and presentation. It serves as an initial design space to showcase how current systems use inputs for adaptation and hints at how future systems might adapt to users’ actions. For adaptation, current work focuses on physiological input for adapting content or presentation, and on motion-tracking data and ergonomic metrics for adapting interaction.

We highlight that in the future, especially with sensor fusion, multi-modal adaptive systems are important to consider, as they stand a better chance of capturing the user’s full context and therefore have higher potential for success.

Future work will aim to formalize interaction paradigms that generalize across application domains and to classify which combinations of inputs are most suitable for multi-modal adaptive systems.

With this goal in mind, it is first necessary to ensure that the basis for adaptation accurately represents and relates to users’ states, e. g., physical discomfort or focused attention. This relationship is especially troublesome for physiological measurements and their construct validity [7], as we cannot claim a one-to-one explanatory relationship between a physiological signal and the construct of interest. Therefore, the diagnostic accuracy required of adaptive systems will inevitably vary between systems until an acceptable cost-benefit ratio is achieved [17].

Second, current systems focus on a single user; in the future, however, we envision that dynamically adapting to multi-user scenarios will improve collaboration [2]. It is unclear whether results from single-user investigations, e. g., using a user’s EDA for adaptation, will hold reliably in multi-user scenarios.

Third, data privacy is a central concern for adaptive systems that rely on input which users cannot consciously control, i. e., physiological computing systems. Adaptive systems present significant potential for asymmetry in data protection [49], i. e., the system may not disclose to users where their data are stored or who has access to them. Moreover, adaptive control loops aim to steer users’ states toward a positive goal. Here, the debate is not about the direction of adaptation but about who retains control over the adaptive process [45]. These considerations bolster claims that mutual accountability [45] and giving users authority over the system are fundamental requirements for future work.

Award Identifier / Grant number: 251654672

Funding statement: This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 251654672 – TRR 161.

About the authors

Francesco Chiossi

Francesco Chiossi is a PhD researcher in the Media Informatics Group at the Department of Computer Science of the LMU Munich. He obtained a M.Sc. from the University of Padua in neuroscience and applied cognitive science. In his work, he focuses on implicit measures of human behavior, such as electrodermal activity and electroencephalography, as an implicit input to design physiologically-adaptive systems across the virtuality continuum.

Johannes Zagermann

Johannes Zagermann is a PhD researcher in the Human-Computer Interaction Group at the University of Konstanz. He studied Business Information Systems at the Hochschule Furtwangen University (BSc, 2010) and Information Engineering at the University of Konstanz and Linköping University (MSc, 2015). His research interests span the areas of cross-device interaction, multi-modal interaction, and hybrid user interfaces.

Dr. Jakob Karolus

Jakob Karolus is a postdoctoral researcher at the Human-Centered Ubiquitous Media lab at the Ludwig Maximilian University of Munich. His research focuses on establishing guidelines and techniques for proficiency-aware systems based on ubiquitous sensing technologies. His key interests lie in investigating opportunities and the design of engaging experiences for users to understand their own proficiency. He also conducts research in employing electromyography for sensory augmentation and novel interaction paradigms in human-computer interaction.

Nils Rodrigues

Nils Rodrigues is a PhD researcher at the Visualization Research Center (VISUS) of the University of Stuttgart since 2016. He studied software engineering at the University of Stuttgart (2013) and has also worked as a consultant for business integration. His research interests include information visualization, layout algorithms, and eye tracking.

Dr. Priscilla Balestrucci

Priscilla Balestrucci is a postdoctoral researcher at Ulm University. She studied neuroscience (PhD, 2017) and medical engineering (MSc, 2013; BSc, 2011) at the University of Rome “Tor Vergata”. Her research focuses on how information from multiple sensory systems and prior knowledge affect sensorimotor control.

Prof. Dr. Daniel Weiskopf

Daniel Weiskopf is Professor at the Visualization Research Center (VISUS) of the University of Stuttgart, Germany. He received his Dr. rer. nat. degree (similar to Ph.D.) in Physics from the University of Tübingen, Germany, in 2001, and the Habilitation degree in Computer Science from the University of Stuttgart in 2005. His research interests include visualization, visual analytics, eye tracking, human-computer interaction, computer graphics, and special and general relativity.

Jun.-Prof. Dr. Benedikt Ehinger

Benedikt Ehinger is Tenure-Track Professor at the Center for Simulation Technology and the Institute for Visualization and Interactive Systems in Stuttgart, Germany. He studied Cognitive Science at the University of Osnabrück (BSc; MSc; PhD, 2018). In 2018 he joined the Predictive Brain Lab with Floris de Lange at the Donders Institute, Nijmegen, the Netherlands as a PostDoc. His research interests are EEG, eye movements, statistics, time series, and visualization.

Jun.-Prof. Dr. Tiare Feuchtner

Tiare Feuchtner is a Tenure-Track Professor at the Department of Computer Science of the University of Konstanz since 2021. She studied Computer Science at Aarhus University (Ph.D., 2018), TU Berlin (MSc, 2015), and TU Wien (BSc, 2011), was visiting researcher at the EventLab in Barcelona, and junior researcher at the Telekom Innovation Laboratories (T-Labs) in Berlin and the Austrian Institute of Technology (AIT) in Vienna.

Prof. Dr. Harald Reiterer

Harald Reiterer is Professor at University of Konstanz and Chair for Human-Computer Interaction, Department of Computer Science. He received the Ph.D. degree in computer science from the University of Vienna, in 1991. In 1995 the University of Vienna conferred him the venia legendi (Habilitation) in Human-Computer Interaction. From 1990-1995 he was visiting researcher at the GMD in St. Augustin/Bonn (now Fraunhofer Institute for Applied Information Technology), and from 1995-1997 Assistant Professor at the University of Vienna. His research interests include different fields of Human-Computer Interaction, like Interaction Design, Usability Engineering, and Information Visualization.

Prof. Dr. Lewis L. Chuang

Lewis L. Chuang is Professor of Humans and Technology at the Chemnitz University of Technology since 2022. He read psychology at York and Manchester and received his PhD in neuroscience from Tübingen University in 2011. His research addresses the role and reliability of implicit psychophysiological activity as an input for human-computer interactions.

Prof. Dr. Marc Ernst

Marc Ernst is Chair of the Department of Applied Cognitive Psychology at Ulm University since 2016. He studied physics in Heidelberg and Frankfurt/Main and received his PhD in 2000 from Tübingen University for his work on human visuomotor behavior at the Max Planck Institute for Biological Cybernetics. He was visiting researcher at UC Berkeley (2000–2001), research scientist and group leader at the MPI in Tübingen (2001–2010), and professor at Bielefeld University (2011–2016).

Prof. Dr. Andreas Bulling

Andreas Bulling is Professor of Computer Science at the University of Stuttgart where he heads the research group Human-Computer Interaction and Cognitive Systems. He received his MSc. in Computer Science from the Karlsruhe Institute of Technology (KIT), Germany, and a PhD in Information Technology and Electrical Engineering from ETH Zurich. His research interests are in fundamental computational methods to analyse and model everyday human behavior and cognition.

Jun.-Prof. Dr. Sven Mayer

Sven Mayer is an assistant professor of Human-Computer Interaction at LMU Munich. In his research, he uses machine learning tools to design, build, and evaluate future human-centered interfaces. He focuses on hand- and body-aware interactions in contexts such as large displays, augmented and virtual reality, and mobile scenarios.

Prof. Dr. Albrecht Schmidt

Albrecht Schmidt received his Ph.D. degree in computer science from Lancaster University, in 2002. He is currently a Professor with LMU Munich and a Chair for Human-Centered Ubiquitous Media, Department of Informatics. His research encompasses the development and evaluation of augmented and virtual reality applications to enhance the user’s life quality through technology. This includes the amplification of human cognition and physiology through novel technology.

References

1. Emile Aarts. Ambient intelligence: a multimedia perspective. IEEE MultiMedia, 11(1):12–19, 2004. doi:10.1109/MMUL.2004.1261101.

2. Fabio Babiloni and Laura Astolfi. Social neuroscience and hyperscanning techniques: past, present and future. Neuroscience & Biobehavioral Reviews, 44:76–93, 2014. doi:10.1016/j.neubiorev.2012.07.006.

3. Myroslav Bachynskyi, Gregorio Palmas, Antti Oulasvirta, and Tino Weinkauf. Informing the design of novel input methods with muscle coactivation clustering. ACM Transactions on Computer-Human Interaction, 21(6):1–25, 2015. doi:10.1145/2687921.

4. Priscilla Balestrucci and Marc O. Ernst. Visuo-motor adaptation during interaction with a user-adaptive system. Journal of Vision, 19:187a, 2019. doi:10.1167/19.10.187a.

5. Sebastian Boring, Marko Jurmu, and Andreas Butz. Scroll, tilt or move it: Using mobile phones to continuously control pointers on large public displays. In Proceedings of the 21st Annual Conference of the Australian Computer-Human Interaction Special Interest Group: Design: Open 24/7, OZCHI ’09, pages 161–168, New York, NY, USA, 2009. Association for Computing Machinery. doi:10.1145/1738826.1738853.

6. Wolfram Boucsein. The use of psychophysiology for evaluating stress-strain processes in human-computer interaction. Engineering Psychophysiology. Issues and Applications, pages 289–309, 2000.

7. John T. Cacioppo and Louis G. Tassinary. Inferring psychological significance from physiological signals. American Psychologist, 45(1):16, 1990. doi:10.1037/0003-066X.45.1.16.

8. Ricardo Chavarriaga, Aleksander Sobolewski, and José del R. Millán. Errare machinale est: the use of error-related potentials in brain-machine interfaces. Frontiers in Neuroscience, page 208, 2014. doi:10.3389/fnins.2014.00208.

9. Francesco Chiossi, Robin Welsch, Steeven Villa, Lewis Chuang, and Sven Mayer. Virtual reality adaptation using electrodermal activity to support the user experience. Big Data and Cognitive Computing, 6(2):19, 2022. doi:10.3390/bdcc6020055.

10. Kenneth De Jong. Adaptive system design: a genetic approach. IEEE Transactions on Systems, Man, and Cybernetics, 10(9):566–574, 1980. doi:10.1109/TSMC.1980.4308561.

11. Heiner Deubel and Werner X. Schneider. Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research, 36(12):1827–1837, 1996. doi:10.1016/0042-6989(95)00294-4.

12. Anind K. Dey. Providing Architectural Support for Building Context-Aware Applications. PhD thesis, Georgia Institute of Technology, 2000.

13. Andrew T. Duchowski. Gaze-based interaction: A 30 year retrospective. Computers & Graphics, 73:59–69, June 2018. doi:10.1016/j.cag.2018.04.002.

14. Marc O. Ernst and Heinrich H. Bülthoff. Merging the senses into a robust percept. Trends in Cognitive Sciences, 8(4):162–169, 2004. doi:10.1016/j.tics.2004.02.002.

15. João Marcelo Evangelista Belo, Anna Maria Feit, Tiare Feuchtner, and Kaj Grønbæk. XRgonomics: facilitating the creation of ergonomic 3D interfaces. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–11, 2021. doi:10.1145/3411764.3445349.

16. Stephen Fairclough. A closed-loop perspective on symbiotic human-computer interaction. In International Workshop on Symbiotic Interaction, pages 57–67. Springer, 2015. doi:10.1007/978-3-319-24917-9_6.

17. Stephen H. Fairclough. Fundamentals of physiological computing. Interacting with Computers, 21(1-2):133–145, 2009. doi:10.1016/j.intcom.2008.10.011.

18. Stephen H. Fairclough and Louise Venables. Prediction of subjective states from psychophysiology: A multivariate approach. Biological Psychology, 71(1):100–110, 2006. doi:10.1016/j.biopsycho.2005.03.007.

19. Leah Findlater and Jacob Wobbrock. Personalized input: Improving ten-finger touchscreen typing through automatic adaptation. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems, CHI ’12, pages 815–824, 2012. doi:10.1145/2207676.2208520.

20. Philipp Fleck, Aimee Sousa Calepso, Sebastian Hubenschmid, Michael Sedlmair, and Dieter Schmalstieg. RagRug: A Toolkit for Situated Analytics. IEEE Transactions on Visualization and Computer Graphics, pages 1–1, 2022. doi:10.1109/TVCG.2022.3157058.

21. Society for Psychophysiological Research Ad Hoc Committee on Electrodermal Measures, Wolfram Boucsein, Don C. Fowles, Sverre Grimnes, Gershon Ben-Shakhar, Walton T. Roth, Michael E. Dawson, and Diane L. Filion. Publication recommendations for electrodermal measurements. Psychophysiology, 49(8):1017–1034, 2012. doi:10.1111/j.1469-8986.2012.01384.x.

22. Wilfred J. Hansen. User engineering principles for interactive systems. In Proceedings of the November 16–18, 1971, Fall Joint Computer Conference, AFIPS ’71 (Fall), pages 523–532, New York, NY, USA, 1972. Association for Computing Machinery.

23. Stefan Haufe, Jeong-Woo Kim, Il-Hwa Kim, Andreas Sonnleitner, Michael Schrauf, Gabriel Curio, and Benjamin Blankertz. Electrophysiology-based detection of emergency braking intention in real-world driving. Journal of Neural Engineering, 11(5):056011, 2014. doi:10.1088/1741-2560/11/5/056011.

24. Juan David Hincapié-Ramos, Xiang Guo, Paymahn Moghadasian, and Pourang Irani. Consumed endurance: A metric to quantify arm fatigue of mid-air interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’14, pages 1063–1072, New York, NY, USA, 2014. Association for Computing Machinery. doi:10.1145/2556288.2557130.

25. James E. Hoffman and Baskaran Subramaniam. The role of visual attention in saccadic eye movements. Perception & Psychophysics, 57(6):787–795, 1995. doi:10.3758/BF03206794.

26. Kristina Höök. Steps to take before intelligent user interfaces become real. Interacting with Computers, 12(4):409–426, 2000. doi:10.1016/S0953-5438(99)00006-5.

27. Ann Huang, Pascal Knierim, Francesco Chiossi, Lewis L. Chuang, and Robin Welsch. Proxemics for human-agent interaction in augmented reality. In CHI Conference on Human Factors in Computing Systems, CHI ’22, New York, NY, USA, 2022. Association for Computing Machinery. doi:10.1145/3491102.3517593.

28. Sebastian Hubenschmid, Johannes Zagermann, Simon Butscher, and Harald Reiterer. STREAM: Exploring the Combination of Spatially-Aware Tablets with Augmented Reality Head-Mounted Displays for Immersive Analytics. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI ’21, pages 1–14, New York, NY, USA, May 2021. Association for Computing Machinery. doi:10.1145/3411764.3445298.

29. Robert J. K. Jacob and Keith S. Karn. Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. The Mind’s Eye: Cognitive and Applied Aspects of Eye Movement Research, 2003.

30. Jakob Karolus, Hendrik Schuff, Thomas Kosch, Paweł W. Wozniak, and Albrecht Schmidt. EMGuitar: Assisting guitar playing with electromyography. In Proceedings of the 2018 Designing Interactive Systems Conference, DIS ’18, pages 651–655, Hong Kong, China, June 2018. Association for Computing Machinery. doi:10.1145/3196709.3196803.

31. Jakob Karolus, Paweł W. Wozniak, Lewis L. Chuang, and Albrecht Schmidt. Robust gaze features for enabling language proficiency awareness. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI ’17, pages 2998–3010, New York, NY, USA, 2017. Association for Computing Machinery. doi:10.1145/3025453.3025601.

32. Waldemar Karwowski. A review of human factors challenges of complex adaptive systems: Discovering and understanding chaos in human performance. Human Factors, 54(6):983–995, 2012. doi:10.1177/0018720812467459.

33. Jens Kohlmorgen, Guido Dornhege, Mikio Braun, Benjamin Blankertz, Klaus-Robert Müller, Gabriel Curio, Konrad Hagemann, Andreas Bruns, Michael Schrauf, Wilhelm Kincses, et al. Improving human performance in a real operating environment through real-time mental workload detection. Toward Brain-Computer Interfacing, pages 409–422, 2007. doi:10.7551/mitpress/7493.003.0031.

34. Thomas Kosch, Jakob Karolus, Havy Ha, and Albrecht Schmidt. Your skin resists: exploring electrodermal activity as workload indicator during manual assembly. In Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems, pages 1–5, 2019. doi:10.1145/3319499.3328230.

35. Joan Llobera, Bernhard Spanlang, Giulio Ruffini, and Mel Slater. Proxemics with multiple dynamic characters in an immersive virtual environment. ACM Transactions on Applied Perception, 8(1):Article 3, November 2010. doi:10.1145/1857893.1857896.

36. Fabien Lotte, Laurent Bougrain, Andrzej Cichocki, Maureen Clerc, Marco Congedo, Alain Rakotomamonjy, and Florian Yger. A review of classification algorithms for EEG-based brain–computer interfaces: a 10 year update. Journal of Neural Engineering, 15(3):031005, 2018. doi:10.1088/1741-2552/aab2f2.

37. Sven Mayer, Valentin Schwind, Robin Schweigert, and Niels Henze. The effect of offset correction and cursor on mid-air pointing in real and virtual environments. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18, pages 1–13, New York, NY, USA, 2018. Association for Computing Machinery. doi:10.1145/3173574.3174227.

38. Lynn McAtamney and E. Nigel Corlett. RULA: a survey method for the investigation of work-related upper limb disorders. Applied Ergonomics, 24(2):91–99, 1993. doi:10.1016/0003-6870(93)90080-S.

39. Roberto Merletti and Dario Farina. Surface Electromyography: Physiology, Engineering and Applications. IEEE Press Series on Biomedical Engineering. John Wiley & Sons, March 2016. doi:10.1002/9781119082934.

40. Christian Mühl, Brendan Allison, Anton Nijholt, and Guillaume Chanel. A survey of affective brain computer interfaces: principles, state-of-the-art, and challenges. Brain-Computer Interfaces, 1(2):66–84, 2014. doi:10.1080/2326263X.2014.912881.

41. Klaus-Robert Müller, Michael Tangermann, Guido Dornhege, Matthias Krauledat, Gabriel Curio, and Benjamin Blankertz. Machine learning for real-time single-trial EEG-analysis: from brain–computer interfacing to mental state monitoring. Journal of Neuroscience Methods, 167(1):82–90, 2008. doi:10.1016/j.jneumeth.2007.09.022.

42. Philipp Müller, Sander Staal, Mihai Bâce, and Andreas Bulling. Designing for noticeability: Understanding the impact of visual importance on desktop notifications. In CHI Conference on Human Factors in Computing Systems, CHI ’22, New York, NY, USA, 2022. Association for Computing Machinery. doi:10.1145/3491102.3501954.

43. John E. Muñoz, Shi Cao, and Jennifer Boger. Kinematically adaptive exergames: Personalizing exercise therapy through closed-loop systems. In 2019 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), pages 118–1187, 2019. doi:10.1109/AIVR46125.2019.00026.

44. Domen Novak, Matjaž Mihelj, and Marko Munih. Dual-task performance in multimodal human-computer interaction: a psychophysiological perspective. Multimedia Tools and Applications, 56(3):553–567, 2012. doi:10.1007/s11042-010-0619-7.

45. Rosalind W. Picard and Jonathan Klein. Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 14(2):141–169, 2002. doi:10.1016/S0953-5438(01)00055-8.

46. Narun Pornpattananangkul, Shannon Grogans, Rongjun Yu, and Robin Nusslock. Single-trial EEG dissociates motivation and conflict processes during decision-making under risk. NeuroImage, 188:483–501, 2019. doi:10.1016/j.neuroimage.2018.12.029.

47. Sébastien Puma, Nadine Matton, Pierre-V. Paubel, Éric Raufaste, and Radouane El-Yagoubi. Using theta and alpha band power to assess cognitive workload in multitasking environments. International Journal of Psychophysiology, 123:111–120, 2018. doi:10.1016/j.ijpsycho.2017.10.004.

48. Felix Putze, Johannes Popp, Jutta Hild, Jürgen Beyerer, and Tanja Schultz. Intervention-free selection using EEG and eye tracking. In Proceedings of the 18th ACM International Conference on Multimodal Interaction, ICMI ’16, pages 153–160, New York, NY, USA, 2016. Association for Computing Machinery. doi:10.1145/2993148.2993199.

49. Carson Reynolds and Rosalind Picard. Affective sensors, privacy, and ethical contracts. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems, pages 1103–1106, 2004. doi:10.1145/985921.985999.

50. Nils Rodrigues, Lin Shao, Jia Jun Yan, Tobias Schreck, and Daniel Weiskopf. Eye gaze on scatterplot: Concept and first results of recommendations for exploration of SPLOMs using implicit data selection. In Proceedings of the 14th ACM Symposium on Eye Tracking Research & Applications, ETVIS ’22, New York, NY, USA, 2022. Association for Computing Machinery. Article 59. doi:10.1145/3517031.3531165.

51. Rufat Rzayev, Sven Mayer, Christian Krauter, and Niels Henze. Notification in VR: The effect of notification placement, task and environment. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play, pages 199–211, 2019. doi:10.1145/3311350.3347190.

52. T. Scott Saponas, Desney S. Tan, Dan Morris, and Ravin Balakrishnan. Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-computer Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’08, pages 515–524, New York, NY, USA, 2008. ACM. doi:10.1145/1357054.1357138.

53. B. Schilit, N. Adams, and R. Want. Context-Aware Computing Applications. In 1994 First Workshop on Mobile Computing Systems and Applications, pages 85–90, December 1994. doi:10.1109/WMCSA.1994.16.

54. Bill Schilit, Norman Adams, and Roy Want. Context-aware computing applications. In 1994 First Workshop on Mobile Computing Systems and Applications, pages 85–90. IEEE, 1994. doi:10.1109/WMCSA.1994.16.

55. Jae-eun Shin, Boram Yoon, Dooyoung Kim, and Woontack Woo. A User-Oriented Approach to Space-Adaptive Augmentation: The Effects of Spatial Affordance on Narrative Experience in an Augmented Reality Detective Game. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–13, Yokohama, Japan, May 2021. ACM. doi:10.1145/3411764.3445675.

56. Nelson Silva, Tanja Blascheck, Radu Jianu, Nils Rodrigues, Daniel Weiskopf, Martin Raubal, and Tobias Schreck. Eye tracking support for visual analytics systems: Foundations, current applications, and research challenges. In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, ETRA ’19, New York, NY, USA, 2019. Association for Computing Machinery. Article 11. doi:10.1145/3314111.3319919.

57. Richard Skarbez, Missie Smith, and Mary C. Whitton. Revisiting Milgram and Kishino’s reality-virtuality continuum. Frontiers in Virtual Reality, 2:647997, 2021. doi:10.3389/frvir.2021.647997.

58. Ben Steichen and Bo Fu. Towards adaptive information visualization: a study of information visualization aids and the role of user cognitive style. Frontiers in Artificial Intelligence, page 22, 2019. doi:10.3389/frai.2019.00022.

59. Dereck Toker, Cristina Conati, Giuseppe Carenini, and Mona Haraty. Towards adaptive information visualization: on the influence of user characteristics. In International Conference on User Modeling, Adaptation, and Personalization, pages 274–285. Springer, 2012. doi:10.1007/978-3-642-31454-4_23.

60. Lisa-Marie Vortmann, Felix Kroll, and Felix Putze. EEG-based classification of internally- and externally-directed attention in an augmented reality paradigm. Frontiers in Human Neuroscience, page 348, 2019. doi:10.3389/fnhum.2019.00348.

61. Scott R. Vrana. The psychophysiology of disgust: Differentiating negative emotional contexts with facial EMG. Psychophysiology, 30(3):279–286, May 1993. doi:10.1111/j.1469-8986.1993.tb03354.x.

62. Daniel M. Wolpert and Michael S. Landy. Motor control is decision-making. Current Opinion in Neurobiology, 22(6):996–1003, 2012. doi:10.1016/j.conb.2012.05.003.

Received: 2022-05-22
Revised: 2022-08-14
Accepted: 2022-08-15
Published Online: 2022-08-31
Published in Print: 2022-08-26

© 2022 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
