Open Access (CC BY 4.0 license). Published by De Gruyter Oldenbourg, September 6, 2022

Immersive analytics: An overview

  • Karsten Klein

    Dr. Karsten Klein is a postdoctoral researcher at the University of Konstanz, where he works on the design and implementation of visual and immersive analytics approaches for complex data from application areas, especially life sciences, as well as on network analysis and graph drawing algorithms and the underlying graph theoretic principles.

  • Michael Sedlmair

    Prof. Dr. Michael Sedlmair is a professor at the University of Stuttgart and is leading the research group for Visualisation and Virtual/Augmented Reality there. He received his Ph.D. degree in Computer Science from the University of Munich. His research interests focus on visual and interactive machine learning, perceptual modelling for visualisation, immersive analytics and situated visualisation, novel interaction technologies, as well as the methodological and theoretical frameworks underlying them.

  • Falk Schreiber

    Prof. Dr. Falk Schreiber is full professor and head of the Life Science Informatics group at the University of Konstanz. He worked at universities and research institutes in Germany and Australia and is adjunct professor at Monash University Melbourne. His main interests are network analysis and visualisation, immersive analytics as well as multi-scale modelling of processes, all with a focus on life science applications from metabolic processes to collective behaviour.

Abstract

Immersive Analytics is concerned with the systematic examination of the benefits and challenges of using immersive environments for data analysis, and with the development of corresponding designs that improve the quality and efficiency of the analysis process. While immersive technologies are now broadly available, practical solutions have not gained broad acceptance in real-world applications outside of a few core areas, and proper guidelines on the design of such solutions are still under development. Both fundamental research and applications bring together topics and questions from several fields, and open a wide range of directions regarding the underlying theory, evidence from user studies, and practical solutions tailored to the requirements of application areas. We give an overview of the concepts, topics, research questions, and challenges.


1 Introduction

Technological advances in recent years have resulted in the wide availability of high-quality display hardware such as 3D-capable display walls, holographic displays, and virtual (VR) and augmented reality (AR) head-mounted displays (HMDs). Such devices provide affordances that differ from a standard desktop setting, e. g. regarding physical immersion, stereoscopic 3D (S3D), field of view/regard, user tracking, hand-held controllers, and in general multi-modal interaction. They can be used to create immersive environments (IE) for data analysis and communication that go beyond the capabilities of classical desktop settings. At the same time, software frameworks and, in cases such as VR HMDs, entire software ecosystems have been developed around such hardware, lowering the threshold to prototype and develop immersive visualisations as well as to integrate automated analysis solutions.

Thus, the current situation is an ideal foundation to investigate the potential of such environments for data exploration and analysis. Well-designed IE approaches might greatly improve current data analysis capabilities, and have been recommended for use in a variety of application areas [16], [20], [52], [61], [65]. However, the combination of visualisation, interaction, and device and environment characteristics such as S3D, movement tracking, display size, and multi-modal interaction constitutes a large design space.

Immersive environments are sometimes proposed to provide a limitless workspace, immersion of the analyst in the data, improved engagement, support for the mental state of flow during the task, and interaction without barriers. All these potential benefits, however, come with limits and potential drawbacks. In addition, depending on the IE design, they may require additional effort from the analyst to exploit them, such as occlusion handling, coordination of physical movement, spatial organisation of data, precise interaction [12], and more complex navigation [76].

The research direction of Immersive Analytics (IA) is concerned with the systematic examination of the benefits and challenges of the use of IE for data analysis, and the development of corresponding suitable IA designs that improve the quality and efficiency of the analysis process. Dwyer et al. define IA as ‘the use of engaging, embodied analysis tools to support data understanding and decision making’ [27]. They see engagement and immersion at the heart of IA, with the expectation of a corresponding positive impact on the analysis. While immersive analytics is a term that was coined only a few years ago [15], the research is grounded in much older work, and combines results from several areas and communities, such as human-computer interaction (HCI), data visualisation, VR, AR, visual analytics, and data science. Already in the 1990s, Milgram et al. [55], [56] described a ‘reality-virtuality continuum’ that connects completely real to completely virtual, i. e. fully technology-mediated, environments. They saw mixed-reality visual displays as a subset of VR-related technologies merging real and virtual worlds along the continuum: ‘a generic mixed reality (MR) environment [is] one in which real world and virtual world objects are presented together within a single display, that is, anywhere between the extrema of the RV continuum’ [56]. Skarbez et al. [70] later proposed a revision of the continuum concept to cater for developments in technology, arguing that the continuum is actually discontinuous and that the virtual reality endpoint cannot be reached. More importantly, they extend the definition of mixed reality to the case of real-world and virtual-world stimuli being presented together in a single percept instead of a single display. Fonnet and Prié emphasise the connection of IA to previous research and related research areas in their survey of the development of immersive analytics and its predecessors over thirty years [29].

There are several challenging questions connected to this field, ranging from fundamentals and an underlying theory to applications and societal impact. Examples include:

  1. How can we quantify immersion in an analytics process? This encompasses objective characteristics of the IE, such as technological affordances, as well as the subjective experience and state of the analyst induced by conducting the process in an IE.

  2. How can we quantify the impact of immersion on this process and the analysts’ experience and insight generation? Do changes of immersion trigger better analysis quality or experience? Are there required levels of immersion?

  3. What are links between technological affordances of the IE and user engagement and immersion? Can we name minimal requirements and give guidelines for which combinations of affordances support user engagement and immersion?

  4. How can we best support analytics and decision making tasks with immersive analytics – which IE designs improve the analysis quality and user experience for a specific application setting?

  5. What new potentials and benefits does IA bring for tasks in specific application areas? Can we tailor IE-based approaches to specific applications such that the analysis is improved significantly, justifying the additional resources and effort? What are the drawbacks of using IEs in practice?

While evaluation of current approaches still often mainly targets classical metrics such as task performance, e. g. accuracy and time, the potential importance of immersive environments and their impact on the analysis might go much deeper, and involve a better internal data and knowledge representation, also known as ‘mental map’ [45], as well as long-term effects, such as on the recall of data characteristics and analysis results.

Here we will provide an overview of the field of immersive analytics and discuss central aspects, research questions, and challenges.

2 Concepts and assumptions

2.1 Immersion

A term that covers central aspects of the underlying concepts of IA is immersion. As it is the case with many other terms that are used to describe related concepts, immersion is overloaded and has diverging and sometimes fuzzy definitions in the literature. It has been discussed in a variety of contexts, e. g. for learning, gaming, story-telling, and data analysis, and has been identified with or delineated from a variety of other terms such as engagement or flow [60].

Generally speaking, the concept of immersion in IA is supposed to cover characteristics of an interaction of a human with an environment. This environment can comprise a single medium, or a combination of media, and can be computer-mediated, as in the example of a VR environment. The interaction can be as reduced as pure consumption, e. g. simply reading a text in a book or on a screen, or as complex as a multi-modal interaction with physical navigation, gesture, and speech recognition. Immersion is assumed to support the engagement of the analyst and thus subsequently to support the analysis process.

A main distinction found in the literature is the separation between the capabilities of the environment for multi-modal / visual representation and interaction, and the impact on as well as the (re)action of the human. The former are rather static, technological aspects, but might also comprise the specific design of a representation and its dynamics, e. g. in adaptive visualisations, which adjust to the user’s requirements, abilities, and actions; the latter comprises elements of the psychological state, engagement, emotion, and perception. Consequently, one attempt to capture these aspects is the separation into ‘psychological’ and ‘technological’ immersion [27]. However, there are also open questions as to whether further or more detailed aspects, such as the long-term acquisition of skills (including time outside of an IE), or learning more generally, should be explicitly taken into account (see [65]).

Immersion has been investigated for many years in the area of gaming research [10], [14], [41], and more recently also for visual data stories, see Isenberg et al. [37]. Agrawal et al. [2] give a literature review on the term immersion in the context of immersive audiovisual experiences. For data analysis, the term became popular in the context of IA [49]. Donalek et al. [25] discuss immersive and collaborative data visualisations in VR, and note that immersion provides benefits over traditional visualisation (perception, understanding, relationship retention), but do not explicitly define the term and seem to identify it with 3D VR visualisation. While aspects of technological immersion can be a precursor or requirement for psychological immersion, there are further non-technical aspects, such as a narrative or a specific visualisation metaphor, that might strongly influence psychological immersion.

Slater and Wilbur [72] distinguish between immersion and presence, where the former describes characteristics of the employed technology, such as field of view, resolution, and degrees of freedom, and the latter is the sense of ‘being there’, a state of consciousness that is the ultimate goal of virtual reality. Presence is supposed to increase engagement, and to turn the user’s experience from the consumption of computer-generated imagery into the impression of an experienced reality, e. g. places visited or real-world objects explored. Note that, according to the authors, immersion in this definition requires a self-representation in the VE, and immersion is increased when physical reality is shut out, which is only partially the case across the reality-virtuality continuum. Thus, immersion here is restricted to the characteristics of the system and the sensori-motor contingencies it supports, a definition which, according to Thomas et al. [78], is widely used in the VR/AR communities. Given the goal of IA, to improve data analysis by employing immersive technologies, these characterisations seem more demanding than required, as the lack of the impression of being somewhere else might not compromise the analysis quality. More recently, presence as the sense of ‘being there’ has also been termed place illusion [71] to avoid confusion about the term’s intended meaning.

Brown and Cairns [10], [13] investigate immersion in the context of games, and propose a definition that includes three levels, engagement, engrossment, and total immersion. Jennett et al. [41] state that a clear understanding of the concept in the context of gaming is still missing, explore quantification measures and present corresponding experiments. They conclude that immersion in their definition can be measured subjectively and objectively, and that it can be associated also with negative experience, such as anxiety. A few years later, Calleja states that there is rather more than less confusion about the term [14], and proposes to distinguish absorption-sense (sense of involvement and engagement, psychological immersion) and transportation-sense (feeling of being transported in a different world) of immersion. In particular for gaming, immersion is often described to include the player’s response to the system and the corresponding psychological state.

A further term often used in the discussion of immersion is spatial immersion, where the user’s first-person 3D view of the surrounding world is supported by characteristics such as a large field of view, movement tracking, haptic feedback, and embodied interaction. While often associated with VR environments, spatial immersion can also be induced by spatial AR, where projections onto the real-world environment are employed [77].

2.2 Multi-modal representation and interaction

A major assumption in immersive analytics is that natural and rich interaction supports immersion by removing the barrier between the analyst and the data [12]. Thus, natural user interfaces that use physical metaphors and everyday operations and movements are investigated in the context of data analysis and corresponding tasks. Using S3D already extends the possibilities on how to perceive and interact with data representations. The integration of multiple senses beyond pure vision has the potential to create more natural interfaces that further improve the analyst’s experience and analysis capabilities. McCormack et al. give an overview and a design framework on multisensory immersive analytics [53], and Martin et al. present a survey on multimodality in VR [50].

2.2.1 Visualisation

Data visualisation is the most established way of representing data both in classical and in immersive environments, using our main, high-bandwidth sense for perceiving our environment. A major differentiation between visualisation in classical and in many immersive environments is the use of S3D. For certain tasks and data [47], S3D can provide major advances compared to standard 2D data visualisation: the third dimension can serve as an additional channel for further data dimensions, provides multiple viewing perspectives, and offers space for both the arrangement of objects and their encoding, for example eliminating the overlap that often occurs in 2D arrangements. However, this additional freedom comes with further potential issues [54], such as dependence on the viewing perspective, where selecting a favourable one can incur further effort; occlusion from the stacking of objects along the depth direction; and depth perception, with the corresponding requirement to incorporate proper depth cues for orientation. Individual disabilities, such as the lack of 3D perception in a part of the population, are further issues that need to be taken into consideration.

Figure 1: Visualisations and physicalisation of ser/thr protein phosphatase 1 (PP1). From left to right: 3D holographic visualisation of PP1 (PDB ID: 1FJM [7], [32], [33]) on a Looking Glass device, data physicalisation with two 3D-printed surface models in different sizes, and a standard stereoscopic visualisation on a 2D screen with the molecular visualisation software ChimeraX [31], [64].

Figure 2: VR exploration of a cell with information regarding protein localisation and metabolism using a combined VR visualisation environment based on CAVE2 and zSpace (taken from [73]).

2.2.2 Haptics and data physicalisation

The main communication channel of classical visualisations is, unsurprisingly, the visual channel. By venturing into immersive spaces, however, multi-modal and multi-sensory experiences of data can be delivered. When immersing oneself in data, it is particularly interesting to explore the idea of haptically “touching data”. One way of providing a haptic experience of data is through data physicalisation. Here, physical objects, e. g. from 3D printing, are used to represent data. For example, Figure 1 shows a 3D-printed surface model of a protein. Touching a physical data object might reveal additional information such as texture, stiffness, temperature, and weight [40] that goes beyond traditional visualisations. In addition, the objects might be tracked to act as an interaction token, e. g. to support basic manipulation operations such as rotation. The challenge of physical objects as visualisations lies mainly in their inflexibility to change and the cost of creating them.

Figure 3: Spatially-resolved transcriptomics data of the heart in an immersive analytics tool using the zSpace (taken from [8]).

Figure 4: PropellerHand is an ungrounded hand-mounted force-feedback device based on two rotatable propellers; it can be used to simulate touching visualisations such as immersive line charts (taken from [1]).

One way to overcome these issues is to leverage haptic feedback devices that make it possible to “touch” purely virtual objects. There is a plethora of haptic feedback devices that could be used to physically experience objects in virtual reality [68]. Defining an immersive visualisation as such a virtual object would also allow it to be touched with these devices. Figure 4 shows a hand-mounted, propeller-based feedback device that was used to touch and haptically interact with abstract data visualisations. Usually, the haptic experience of such devices is inferior to touching real physical objects, and they require instrumenting the user to varying degrees. Still, they can handle dynamically changing data, which is inherent to almost all interactive visualisations. Furthermore, they offer interesting opportunities for visually impaired people to interact with data, although little research is available on this so far.

2.2.3 Sonification

Hearing is a further sense that can be used both independently, as in data sonification, and in combination with vision and other senses for data analysis. What makes sound particularly interesting in the context of immersive analytics is the potential of using the spatial location of sound in a physical or virtual environment, and of overcoming issues of pure visualisation, such as occlusion and restricted visual variables, by additionally encoding data qualities in sound variables such as pitch, timbre, loudness, and patterns. Data sonification has often been used in art or art-related projects [74], [21], and has been discussed as an important component in multi-modal immersive analytics approaches. In addition, there is a considerable body of work on general aspects of multimodality, such as contradiction, crosstalk, and overload. However, both research on the fundamentals of its use in immersive analytics, such as considerations of the wider design space and suitable encodings, and practical use in systems and frameworks are still limited. Interesting questions in this context are whether the encoding should be redundant to the visual encoding or complementary, what metaphors and design patterns can be developed, where the limits are, e. g. in the precision or overload contribution of sound to the analysis, and to what extent evidence on crossmodal perception holds in IE [3].
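As a minimal illustration of such an encoding (a hypothetical sketch, not taken from the cited work; all mapping parameters are illustrative), a data series can be mapped to pitch, rendering each value as a short sine tone:

```python
import math

def value_to_pitch(v, vmin, vmax, fmin=220.0, fmax=880.0):
    """Linearly map a data value into a frequency range (here two octaves)."""
    t = (v - vmin) / (vmax - vmin)
    return fmin + t * (fmax - fmin)

def sonify(series, sample_rate=44100, tone_s=0.2):
    """Render each value of the series as a short sine tone; returns raw
    floating-point samples in [-1, 1] that could be written to a WAV file."""
    vmin, vmax = min(series), max(series)
    samples = []
    for v in series:
        f = value_to_pitch(v, vmin, vmax)
        n = int(sample_rate * tone_s)
        samples.extend(math.sin(2.0 * math.pi * f * i / sample_rate)
                       for i in range(n))
    return samples

# A rising series becomes a rising pitch contour.
tones = sonify([1.0, 2.0, 3.0, 4.0])
```

The same skeleton could map a second data variable redundantly or complementarily to loudness or timbre, which is exactly the design question raised above.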

2.3 Collaboration

Support for collaboration beyond the capabilities of standard desktop environments was stated as an explicit goal of IA research [9]. By breaking the barriers of the confined space in desktop environments, IE provide ample opportunities for the support of collaborative data analysis [17]. This includes collocated as well as remote collaboration and their combination. Large display walls and environments such as the CAVE provide the physical space for collocated collaboration, whereas virtual environments in particular allow remote collaboration to be scaled easily [67]. Important factors for the design of corresponding IEs are proper representations of collaborators and their actions, support for communication and interaction between participants, as well as support for provenance, e. g. to hand over the current state of the analysis in asynchronous settings. Important related questions are what the required characteristics of collaborator representation are, how communication is supported, how IEs and potentially different devices [30] can be used to create both shared and private interactive views on the data, and how analysis methods are integrated in a consistent way.

2.4 Situated analytics

Much of the existing research in IA focuses on leveraging VR displays for the purpose of data visualisation and analytics. The interesting characteristics of VR for such scenarios include stereoscopy for spatial tasks on 3D data, but also an almost unlimited “space to think” [4]. While some of the existing IA approaches have also been used in AR, they have so far rarely been truly situated or embedded in the real-world context. Doing exactly this is the goal of situated analytics [28].

Situated analytics focuses on how analytical work can be embedded into the user’s spatial environment. To do this, a computational understanding of the real world is needed, which can then be leveraged by AR to display virtual data representations along with real physical referents. The interesting question, of course, is how exactly the relation between real physical objects and virtual data visualisations is implemented. Thomas et al. [78], for instance, see in situated analytics those visualisations that are perceived in close spatial and temporal proximity to their referents. Data from a smart meter might, for instance, be shown right next to it, like a virtually attached display [28]. Willett et al. [83] follow a similar idea of situatedness, but go one step further and call visualisations that are directly overlaid onto referents “embedded visualisations”. CAD model data might, for instance, be shown directly in place to simulate different setups of production chains [36], or pollution data might be shown right in the streets where it was measured [81]. An accurate registration of the virtual data with the physical environment is needed for such solutions.
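In the simplest case, such registration reduces to a coordinate transform: given the tracked pose of a physical referent, a virtual label is anchored at a fixed offset in the referent’s local frame. The following plain-Python sketch only illustrates this idea; the pose and offset values are hypothetical, and in practice an AR framework provides the tracking and rendering:

```python
import math

def rotation_y(angle_rad):
    """3x3 rotation matrix about the vertical (y) axis."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def place_label(referent_pos, referent_rot, local_offset):
    """World-space anchor for a label: rotate the offset into the referent's
    tracked orientation, then translate by the referent's position."""
    rotated = [sum(referent_rot[r][c] * local_offset[c] for c in range(3))
               for r in range(3)]
    return [referent_pos[i] + rotated[i] for i in range(3)]

# A smart meter tracked at (2, 1.5, 0) metres, rotated 90 degrees about the
# vertical axis; the label floats 0.3 m to its local right.
anchor = place_label([2.0, 1.5, 0.0], rotation_y(math.pi / 2), [0.3, 0.0, 0.0])
```

Keeping the offset in the referent’s local frame means the label follows the object as the tracking updates, which is the basic behaviour of a “virtually attached display”.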

Situated analytics displays bear huge potential as they merge the real and the virtual in a seamless way. In comparison to VR approaches, they do not require the user to be detached from the real world; instead, they bring data visualisation directly into the user’s current spatial and social environment. With highly sophisticated AR displays (which do not exist yet), one can even imagine that such an approach might eventually replace traditional screen-based displays. For the time being, however, the main benefit lies in tasks that intrinsically need to combine physical and virtual components.

Characterising the full potential of situated analytics is still an open research question. One of the challenges is that developing truly situated/embedded visualisations is non-trivial. While much progress has been made on the technical sophistication of AR hardware (e. g., Microsoft’s HoloLens 2), developing situated analytics tools still requires combining different concepts from computer vision, augmented reality, the internet of things, and visualisation. Dedicated toolkits that support this development process have only very recently started to become available [28]. Even with good toolkits, however, many design challenges remain. These primarily stem from the fact that, for situated analytics, visualisation designs need to be directly incorporated into real-world circumstances. Classical visualisations on screen, and even more so immersive visualisations in VR, are viewed in a fairly controlled environment. Situated analytics solutions, in contrast, might be strongly impacted by environmental factors, such as visual clutter, changing light conditions, or simply users moving around with their AR displays. All of these factors pose a plethora of open research questions for visualisation, and call for novel, robust, and adaptable visualisation designs.

3 Major research topics and challenges

3.1 Concepts for and evaluation of immersion

As we have seen in the previous discussion, immersion is a term with multiple descriptions and approaches for measuring it, including quantitative and qualitative measures. Characterising and quantifying immersion in the context of IA is thus a major topic and challenge in current research. In particular, the support for different levels of immersion induced by the affordances and the design of the IE is a major open question. Further topics under investigation are immersion along complex workflows in application areas, and the impact of changes of environment, with corresponding changes in immersion and focus, on the analysis. Cummings and Bailenson [19] performed a meta-analysis of 83 studies to investigate whether greater levels of the immersive qualities of an environment elicit higher levels of presence (as defined by Slater and Wilbur). They conclude that technological immersion has a medium-sized effect on presence. Miller and Bugnariu conclude from their meta-study that the impact of technological immersion might depend strongly on user characteristics and tasks [58].

3.2 Understanding and evaluating perception for data analysis in IE

Perception plays a major role in data visualisation, providing the fundamental mechanisms underlying any visual data analysis and communication. A large body of research has been concerned with perception on 2D displays, also for data analysis. Immersive environments often provide S3D visualisation, and through 3 or 6 degrees of freedom also different perspectives and frames of reference for the data representation. Large display walls, and CAVE or VR environments also often support larger display space and field of regard than classical desktop environments. Occlusion, overload, depth distortion, and distance can however make perception, visual search, and the mapping of spatial organisation to a mental map more difficult. Thus, the effect and limits of IE for perception in analysis tasks need to be investigated and characterised, and corresponding guidelines created regarding effective encoding and layout of data in IE. There is significantly less research so far in this direction compared to the research on classical 2D displays. An initial step in that direction is the investigation of graphical perception for IA [82], which compares three display modalities for scatterplot visualisation in a limited setting of 100−200 data points, including VR and AR HMDs.

3.3 Understanding and supporting the mental map

The problem of supporting the creation and maintenance of a mental map concerns the internal representation of knowledge about a certain data set, its structure and content, and the connection to the external representation, i. e. the computer-mediated content. The visualisation, but also the interaction with and the navigation in the data and representation spaces, need to be designed in a way that facilitates this maintenance as well as the user’s orientation and cognition using this map. While for standard 2D environments the mental map has been investigated with respect to data analysis for many years [5], [59], the transfer to IE has been undertaken only recently [45]. In particular, the effects of S3D, spatial immersion, and multi-modal representation and interaction are of interest. The exploitation of spatial memory through spatial organisation and physical navigation [76], as well as the effects of a large field of view and field of regard and of direct interaction [12], are supposed to improve the mental map, which however still needs further evidence from user studies.

3.4 Scalability

Scalability in IA has several aspects, including the limits of human perception and cognition and technological restrictions such as rendering performance. While increasingly powerful hardware supports rendering large numbers of objects in virtual and augmented environments, the requirements for low-latency, high-framerate rendering of content are still demanding, as even a small deviation can have detrimental effects on the user experience [11] and subsequently on the analysis. A major challenge in IEs that goes beyond the considerations for traditional visual analytics [66] is to exploit the specific IE affordances to create scalable representations and navigation schemes. While the large design space offers opportunities for scalability that go beyond traditional environments, the design has to be carefully considered so as not to put too much strain on the user, and to balance the tradeoff between the potentially influencing factors for efficiency, effort, and orientation, such as the number of navigation operations, representation levels, and the amount of physical navigation [6]. An important research direction for scalability is the investigation of the suitability of abstraction and aggregation approaches and the potential transfer of established approaches into IE.
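As a simple illustration of such an aggregation approach (a hypothetical sketch, not a method from the cited work), a dense 3D point set can be binned into a uniform grid, with each occupied cell rendered as a single glyph scaled by its point count:

```python
from collections import defaultdict

def grid_aggregate(points, cell_size):
    """Bin 3D points into a uniform grid and return one (centroid, count)
    aggregate per occupied cell, usable as a coarse level of detail."""
    cells = defaultdict(list)
    for p in points:
        # Floor division maps each coordinate to an integer grid index.
        key = tuple(int(c // cell_size) for c in p)
        cells[key].append(p)
    aggregates = []
    for members in cells.values():
        n = len(members)
        centroid = tuple(sum(p[i] for p in members) / n for i in range(3))
        aggregates.append((centroid, n))
    return aggregates

# Three raw points in two grid cells collapse into two renderable glyphs.
lod = grid_aggregate([(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (5.1, 5.1, 5.1)], 1.0)
```

Varying the cell size with viewing distance would yield a rudimentary level-of-detail scheme; a production system would of course combine this with spatial indexing and incremental updates.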

3.5 Interaction and navigation

Immersive environments provide a rich set of options for interaction with data representations, and navigation in the data and its representations. Gesture and speech recognition, eye tracking, physical navigation, physicalisation, mid-air interaction, and hand-held controllers can be used to support corresponding operations. These can provide direct interaction, for example with 3D representations of data points, or target indirect interaction, for example through 2D menus blended into a S3D environment. The designs have to be carefully chosen to ensure that the requirements are met, e. g. regarding required precision, and that no heavy burden is put on the user, e. g. avoiding effects such as the ‘gorilla arm’ or ‘midas touch/gaze’ [38]. Wang et al. [79] investigated the impact of matching the input and output device dimensionality. They found no evidence to suggest that the parity of dimensionality between input and output devices plays an important role on performance, but a tradeoff between accuracy and speed when using either 2D or 3D devices.

3.6 Transitional interfaces

Given the broad range of devices and technologies used in data analysis for representation and interaction, as well as the complexity of analysis workflows, which might include multiple stages and collaboration within or between groups, switching between different devices and environments might be necessary or advantageous for analysis. The research direction of transitional interfaces [35] investigates the requirements and possible designs for such situations, with the goal that users can seamlessly switch between contexts, which are constituted by interaction spaces such as VR and AR, and potentially also by different data scales and representations. One vision for transitional interfaces is that systems from different locations along the RV continuum can be integrated as coherent and consistent interfaces in a shared physical space [42]. That way, transitions between traditional 2D, VR, and AR would allow users to choose the best available support for the task at hand, and collaborations across degrees of virtuality could also be facilitated [69].

4 Use cases

We organise our use case examples according to the amount of spatial information in the data under analysis.

4.1 Analysis of data in the context of the environment

4.1.1 Animal behaviour research

Figure 5 
TEAMwISE animal movement analysis tool showing a visualisation of a bird returning from winter migration to Lake Constance (taken from Klein et al. [44] under Creative Commons license [18]).

Figure 6 
Storks soaring, shown in VR with satellite imagery of the environment.

Animal behaviour research investigates the drivers and mechanisms of animal behaviour and interaction. Lightweight sensor tags allow collecting large data sets of movement and further parameters such as acceleration and physiological or environmental conditions. The resulting trajectories and derived data structures can be analysed to detect behavioural patterns and events and to categorise periods into classes of behaviour such as foraging or fighting [24]. Thus, a combination of spatial and temporal data is relevant for analysis. While this analysis can make use of automated methods and statistics, the results as well as the original movement data often need to be investigated in the context of the geographic environment in which they occurred. Both the integration of potential environmental drivers of an animal’s decisions and the visual interpretation by an analyst are thus required components of the analysis workflow. Immersive environments are well suited to support the resulting combination of 3D spatial information, such as terrain, satellite imagery, and trajectories, with abstract data representations such as interaction networks or charts, see Figures 5 and 6. The display space can be used to show further environmental context that an animal might use for orientation, alternative perspectives to better analyse the animal’s perception of the environment or of other animals, or additional linked data views. An important associated question is which and how much information can be embedded in the 3D visualisation, and where linked views and 2D charts instead provide better overview and interpretation possibilities. The evaluation of how the information displayed in a 3D visualisation is interpreted is thus an important part of immersive analytics research.
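The categorisation of trajectory periods into behaviour classes mentioned above can be sketched with a deliberately simple speed-threshold heuristic. The function names, class labels, and thresholds below are illustrative placeholders, not part of any system described in this article; real pipelines combine many more signals (e. g. acceleration) and learned models.

```python
import math

def speed_kmh(p1, p2, dt_s):
    """Approximate ground speed between two GPS fixes (lat/lon in degrees),
    using an equirectangular approximation (fine for short segments)."""
    lat1, lon1 = p1
    lat2, lon2 = p2
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * 6371.0  # km east
    dy = math.radians(lat2 - lat1) * 6371.0                       # km north
    return math.hypot(dx, dy) / (dt_s / 3600.0)

def label_segments(fixes, dt_s=60, slow=2.0, fast=20.0):
    """Label each trajectory segment by a crude speed threshold.
    Class names and thresholds are hypothetical examples."""
    labels = []
    for p1, p2 in zip(fixes, fixes[1:]):
        v = speed_kmh(p1, p2, dt_s)
        if v < slow:
            labels.append("resting/foraging")
        elif v < fast:
            labels.append("local movement")
        else:
            labels.append("directed flight")
    return labels
```

Such coarse labels could then be colour-coded along the 3D trajectory in the immersive view, while the raw time series remain available in linked 2D charts.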

4.2 Analysis of data with spatial referents

4.2.1 Molecules

In biology and biochemistry, 3D models of molecules are routinely used for the analysis of molecular interactions, e. g. binding analysis and the analysis and communication of molecular docking processes. While technologies such as the Cave Automatic Virtual Environment (CAVE) already provided stereoscopic 3D and collaboration environments thirty years ago, the advent of high-quality VR head-mounted displays allows for remote collaboration, high-resolution rendering, and intuitive interaction with molecular structures. Stereoscopic 3D and haptic feedback have been utilised in docking, and the use of IEs for the domain has been investigated further [22], [43], [61], [63].

4.2.2 Spatial transcriptomics

Spatially-resolved transcriptomics is a novel method that allows gene expression to be investigated systematically with spatial resolution [51]. It provides a large amount of data that has to be explored and analysed in its spatial (3D) context. An example of an immersive analytics system designed to support the interpretation of spatially-resolved transcriptomics datasets is 3D-Cardiomics-VR [8], see Figure 3. It allows users to visually explore patterns of gene expression within an organ (such as the heart), which is then represented as the spatial referent, and to compare genes based on their 3D expression profiles.
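Comparing genes by their 3D expression profiles can be illustrated with a minimal sketch: each gene is represented by a vector of expression values, one per spatial piece of the organ, and genes are ranked by cosine similarity of these vectors. The gene names and values below are hypothetical example data, and cosine similarity is only one of several plausible distance measures; this is not the specific method of 3D-Cardiomics-VR.

```python
import math

def cosine(u, v):
    """Cosine similarity of two expression vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar(gene, profiles, top=3):
    """Rank other genes by cosine similarity of their spatial
    expression profiles (one value per 3D piece of the organ)."""
    ranked = sorted(
        ((cosine(profiles[gene], p), g) for g, p in profiles.items() if g != gene),
        reverse=True,
    )
    return [g for _, g in ranked[:top]]
```

In an immersive setting, the result of such a query could drive which gene's expression pattern is mapped onto the organ model next.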

4.2.3 Brain data analysis

Brain data analysis can involve several aspects of importance for brain-related questions, such as brain anatomy, structure, function, and temporal activity. The data can be acquired from sources such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) measurements for activity, diffusion tensor imaging (DTI) for structure, and MRI, computed tomography (CT), or positron emission tomography (PET) for anatomy. General goals are a better understanding of the inner workings of the brain and its architecture, finding activity patterns, and comparing cohorts or individuals regarding structure or function. Typical applications are to detect, classify, monitor, and predict diseases, such as dementia or Alzheimer’s disease, as well as the impact of physical damage, to identify inter-species differences, similarities, and potential evolutionary developments, and to correlate behaviour and brain activity for a deeper understanding of the underlying mechanisms.

For most of these application tasks, both the understanding of the anatomy and structure and the topology of functional relations are relevant. Due to this combination of spatial information, such as the brain’s anatomy and structure, and abstract data, such as activity correlations, IEs that use S3D representations of the brain together with charts and abstract representations, e. g. node-link diagrams, have the potential to provide more direct and intuitive analysis interfaces. See Figure 7 for an example of such a combination. Immersive environments might not only help in the analysis of the data [62], [39], but already in the preprocessing steps, e. g. the modelling of 3D structures from imaging slices [84].
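The step of turning region-level activity time series into a correlation network, which can then be mapped onto the 3D brain model, can be sketched as follows. The region names, time series, and threshold are hypothetical examples; real analyses typically use preprocessed fMRI/EEG signals and statistically controlled thresholds.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equally long time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_network(series, threshold=0.7):
    """Build an edge list between brain regions whose activity time series
    correlate above the (illustrative) threshold in absolute value."""
    regions = list(series)
    edges = []
    for i, r1 in enumerate(regions):
        for r2 in regions[i + 1:]:
            r = pearson(series[r1], series[r2])
            if abs(r) >= threshold:
                edges.append((r1, r2, round(r, 3)))
    return edges
```

The resulting weighted edges are exactly the kind of abstract relational data that can be rendered as links between anatomical regions in the S3D brain model.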

Figure 7 
Brain data analytics – a brain model in which anatomical regions can be distinguished is combined with brain activity data. The activity data acquisition provides time series that can be depicted in a classical chart, but also turned into a network by correlation analysis, which allows activity correlations between regions to be mapped into the 3D model. Interaction allows users to rotate the model and to select regions for in-depth analysis of their activity and correlations. Taken from [39], © 2019 IEEE.

The design of proper environments for use in practice, however, poses several open challenges, including proper metaphors for representation, intuitive interaction and linking, and the seamless integration of analysis methods [39]. In addition, the difficulties stemming from the dynamic nature of the data, the challenges of applying and parameterising sophisticated processing steps along the analysis pipeline, and our still limited understanding of the brain’s inner workings require not only interactive visualisations and different views and perspectives, but also human-centred computational analytics that allow the assessment of intermediate results, e. g. from image processing and parcellation, and subsequent adjustments to method settings.

Besides the dynamics, the uncertainty that builds up along the processing pipeline [23] is a further challenge. Both the consideration of uncertainty in analysis methods and the integration of the corresponding information into visualisations remain open problems in current research [80].

4.3 Abstract data

While mapping spatial data, in particular three-dimensional data, into S3D environments seems like a natural choice, the value of S3D visualisations, in particular for abstract data, needs further motivation and justification [46]. Although met with reservations from the data visualisation community for some time, the visualisation of abstract data in IEs offers several potential advantages, which can be exploited in particular in combination with spatial data. Recent initial research shows the potential of S3D visualisations for the exploration and analysis of abstract data such as high-dimensional data [34], [57], e. g. when dimensionality reduction is used, or networks [45], [17], [48].

Due to their prevalence in a variety of application areas, where they are used to model and represent data both for the communication of knowledge and for analysis, networks have attracted considerable attention in recent research on IA. As they have no intrinsic spatial structure or direction, there is a lot of freedom in the design of visual representations, including encoding, layout, and metaphor. Most representations employ the node-link or matrix metaphor, of which the former lends itself more naturally to a 3D representation, see Figure 8. The layout is usually given as node positions and a routing of the edges as curves or segments, and established layout methods such as force-directed and stress minimisation methods can easily be extended to three dimensions. While layout methods can optimise certain goals more easily in 3D, such as distance representation due to the additional dimension, this does not automatically lead to improved readability, e. g. due to perspective-dependent occlusion and distortion. Navigation, in particular in large-scale networks, poses a further challenge that is not yet well investigated, requiring efforts to better understand the mechanisms behind mental map creation and orientation in networks and S3D network representations, including the evaluation of traversal methods [75], [26].
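The extension of force-directed layout to three dimensions mentioned above amounts to computing the usual spring and repulsion forces on 3-component vectors. The following is a minimal Fruchterman–Reingold-style sketch with hypothetical parameter values, intended only to show that the 2D formulation carries over unchanged to 3D; production layouts add cooling schedules, spatial indexing, and stress-based refinement.

```python
import random

def force_layout_3d(nodes, edges, iters=200, k=1.0, step=0.05, seed=7):
    """Minimal force-directed layout in 3D: pairwise repulsion k^2/d and
    edge attraction d^2/k, applied to 3-component position vectors."""
    rng = random.Random(seed)
    pos = {v: [rng.uniform(-1, 1) for _ in range(3)] for v in nodes}
    for _ in range(iters):
        disp = {v: [0.0, 0.0, 0.0] for v in nodes}
        # repulsion between all node pairs
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                d = [pos[u][a] - pos[v][a] for a in range(3)]
                dist = max(1e-9, sum(c * c for c in d) ** 0.5)
                f = k * k / dist
                for a in range(3):
                    disp[u][a] += d[a] / dist * f
                    disp[v][a] -= d[a] / dist * f
        # attraction along edges
        for u, v in edges:
            d = [pos[u][a] - pos[v][a] for a in range(3)]
            dist = max(1e-9, sum(c * c for c in d) ** 0.5)
            f = dist * dist / k
            for a in range(3):
                disp[u][a] -= d[a] / dist * f
                disp[v][a] += d[a] / dist * f
        for v in nodes:
            for a in range(3):
                pos[v][a] += step * disp[v][a]
    return pos
```

Note that only the vector length changes from the 2D version; the perceptual questions raised above (occlusion, distortion) arise afterwards, when the computed positions are rendered stereoscopically.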

Figure 8 
Investigation of mental-map-preserving node-link network visualisation: a user interacts with an immersive 3D network visualisation in VR. Taken from [45], © 2020 IEEE.

5 Discussion

Despite the promise of more efficient and more engaging environments for data analysis, IA solutions have not received broad acceptance in real-world applications outside of several core areas. These areas have either been early adopters of advanced technology, such as chemistry, where stereoscopic 3D modelling has been used in drug development for some time, or need data representation in a spatial context that is natural to represent in IEs, e. g. simulations in disaster management. There are several potential impediments to the further employment of IA in practical applications. These include in particular

  1. the lack of evidence and guidelines on efficient IA designs for specific application requirements, including a comprehensive characterisation of the design space,

  2. the rapid technological advance in recent years, which has made the maintenance of code bases and design approaches difficult, as they had to be adjusted nearly constantly to new software, APIs, and device capabilities,

  3. the existence of well-established high-quality solutions in standard environments, whose sophisticated and well-implemented approaches partly compensate for the more limited design opportunities,

  4. the still sometimes cumbersome handling of hardware (e. g. sharing of VR HMDs),

  5. and the potentially detrimental effects of VR devices in particular on communication (still lacking collaboration support and blocking out the surroundings) and on well-being.

Both fundamental research and applications bring together topics and questions from several fields, and open a wide range of directions regarding underlying theory, evidence from user studies, and practical solutions tailored towards the requirements of an application area. We have given an overview of the concepts, topics, research questions, and challenges.

Funding statement: This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 251654672 - TRR 161.

About the authors

Dr. Karsten Klein

Dr. Karsten Klein is a postdoctoral researcher at the University of Konstanz, where he works on the design and implementation of visual and immersive analytics approaches for complex data from application areas, especially life sciences, as well as on network analysis and graph drawing algorithms and the underlying graph theoretic principles.

Prof. Dr. Michael Sedlmair

Prof. Dr. Michael Sedlmair is a professor at the University of Stuttgart and is leading the research group for Visualisation and Virtual/Augmented Reality there. He received his Ph.D. degree in Computer Science from the University of Munich. His research interests focus on visual and interactive machine learning, perceptual modelling for visualisation, immersive analytics and situated visualisation, novel interaction technologies, as well as the methodological and theoretical frameworks underlying them.

Prof. Dr. Falk Schreiber

Prof. Dr. Falk Schreiber is full professor and head of the Life Science Informatics group at the University of Konstanz. He worked at universities and research institutes in Germany and Australia and is adjunct professor at Monash University Melbourne. His main interests are network analysis and visualisation, immersive analytics as well as multi-scale modelling of processes, all with a focus on life science applications from metabolic processes to collective behaviour.

References

1. Alexander Achberger, Frank Heyen, Kresimir Vidakovic, and Michael Sedlmair. Propellerhand: A hand-mounted, propeller-based force feedback device. In The 14th International Symposium on Visual Information Communication and Interaction, pages 1–8, 2021.10.1145/3481549.3481563Search in Google Scholar

2. Sarvesh Agrawal, Adèle Simon, Søren Bech, Klaus Bærentsen, and Søren Forchhammer. Defining immersion: Literature review and implications for research on immersive audiovisual experiences. Journal of the Audio Engineering Society, october 2019.10.17743/jaes.2020.0039Search in Google Scholar

3. Marcos Allue, Ana Serrano, Manuel G. Bedia, and Belen Masia. Crossmodal Perception in Immersive Environments. In Alejandro Garcia-Alonso and Belen Masia, editors, Spanish Computer Graphics Conference (CEIG). The Eurographics Association, 2016.Search in Google Scholar

4. Christopher Andrews and Chris North. Analyst’s workspace: An embodied sensemaking environment for large, high-resolution displays. In IEEE Conference on Visual Analytics Science and Technology (VAST), pages 123–131, 2012.10.1109/VAST.2012.6400559Search in Google Scholar

5. Daniel Archambault, Helen Purchase, and Bruno Pinaud. Animation, small multiples, and the effect of mental map preservation in dynamic graphs. IEEE Transactions on Visualization and Computer Graphics, 17(4):539–552, 2011.10.1109/TVCG.2010.78Search in Google Scholar PubMed

6. Robert Ball, Chris North, and Doug A. Bowman. Move to improve: Promoting physical navigation to increase user performance with large displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI’07, pages 191–200. Association for Computing Machinery, 2007.10.1145/1240624.1240656Search in Google Scholar

7. H. M. Berman. The protein data bank. Nucleic Acids Research, 28(1):235–242, 2000.10.1093/nar/28.1.235Search in Google Scholar PubMed PubMed Central

8. Denis Bienroth, Hieu T Nim, Dimitar Garkov, Karsten Klein, Sabrina Jaeger-Honz, Mirana Ramialison, and Falk Schreiber. Spatially resolved transcriptomics in immersive environments. Visual computing for Industry, Biomedicine, and Art, 5(1):1–13, 2022.10.1186/s42492-021-00098-6Search in Google Scholar PubMed PubMed Central

9. Mark Billinghurst, Maxime Cordeil, Anastasia Bezerianos, and Todd Margolis. Collaborative immersive analytics. In Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, editors, Immersive Analytics, pages 221–257. Springer, 2018.10.1007/978-3-030-01388-2_8Search in Google Scholar

10. Emily Brown and Paul Cairns. A grounded investigation of game immersion. In CHI’04 Extended Abstracts on Human Factors in Computing Systems, pages 1297–1300. Association for Computing Machinery, 2004.10.1145/985921.986048Search in Google Scholar

11. Kjell Brunnström, Elijs Dima, Tahir Qureshi, Mathias Johanson, Mattias Andersson, and Marten Sjöström. Latency impact on quality of experience in a virtual reality simulator for remote control of machines. Signal Processing: Image Communication, 89:116005, 2020.10.1016/j.image.2020.116005Search in Google Scholar

12. Wolfgang Büschel, Jian Chen, Raimund Dachselt, Steven Drucker, Tim Dwyer, Carsten Görg, Tobias Isenberg, Andreas Kerren, Chris North, and Wolfgang Stuerzlinger. Interaction for immersive analytics. In Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, editors, Immersive Analytics, pages 95–138. Springer, 2018.10.1007/978-3-030-01388-2_4Search in Google Scholar

13. Paul Cairns, Anna Cox, and A. Imran Nordin. Immersion in digital games: Review of gaming experience research. In Handbook of Digital Games, chapter 12, pages 337–361. John Wiley & Sons, Ltd, 2014.10.1002/9781118796443.ch12Search in Google Scholar

14. Gordon Calleja. In-Game: From Immersion to Incorporation. The MIT Press, 2011.10.7551/mitpress/8429.001.0001Search in Google Scholar

15. Tom Chandler, Maxime Cordeil, Tobias Czauderna, Tim Dwyer, Jaroslaw Glowacki, Cagatay Goncu, Matthias Klapperstueck, Karsten Klein, Kim Marriott, Falk Schreiber, and Elliot Wilson. Immersive Analytics. In 2015 Big Data Visual Analytics, BDVA 2015, pages 1–8. IEEE, 2015.10.1109/BDVA.2015.7314296Search in Google Scholar

16. Tom Chandler, Thomas Morgan, and Torsten Wolfgang Kuhlen. Exploring immersive analytics for built environments. In Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, editors, Immersive Analytics, pages 331–357. Springer, 2018.10.1007/978-3-030-01388-2_11Search in Google Scholar

17. Maxime Cordeil, Tim Dwyer, Karsten Klein, Bireswar Laha, Kim Marriott, and Bruce Hunter Thomas. Immersive collaborative analysis of network connectivity: Cave-style or head-mounted display? IEEE Transactions on Visualization and Computer Graphics, 23(1):441–450, Jan 2017.10.1109/TVCG.2016.2599107Search in Google Scholar PubMed

18. Creative commons license. http://creativecommons.org/licenses/by/4.0/.Search in Google Scholar

19. James J. Cummings and Jeremy N. Bailenson. How immersive is enough? a meta-analysis of the effect of immersive technology on user presence. Media Psychology, 19(2):272–309, 2016.10.1080/15213269.2015.1015740Search in Google Scholar

20. Tobias Czauderna, Jason Haga, Jinman Kim, Matthias Klapperstück, Karsten Klein, Torsten Kuhlen, Steffen Oeltze-Jafra, Björn Sommer, and Falk Schreiber. Immersive analytics applications in life and health sciences. In Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, editors, Immersive Analytics, pages 289–330. Springer, 2018.10.1007/978-3-030-01388-2_10Search in Google Scholar

21. Data sonification archive. https://sonification.design/.Search in Google Scholar

22. Bruno Daunay and Stéphane Régnier. Stable six degrees of freedom haptic feedback for flexible ligand–protein docking. Computer-Aided Design, 41(12):886–895, 2009.10.1016/j.cad.2009.06.010Search in Google Scholar

23. Michael de Ridder, Karsten Klein, and Jinman Kim. A review and outlook on visual analytics for uncertainties in functional magnetic resonance imaging. Brain informatics, 5(2):1–19, 2018.10.1186/s40708-018-0083-0Search in Google Scholar PubMed PubMed Central

24. Urška Demšar, Kevin Buchin, Francesca Cagnacci, Kamran Safi, Bettina Speckmann, Nico Van de Weghe, Daniel Weiskopf, and Robert Weibel. Analysis and visualisation of movement: an interdisciplinary review. Movement Ecology, 3, 2015. Article Number: 5.10.1186/s40462-015-0032-ySearch in Google Scholar PubMed PubMed Central

25. Ciro Donalek, S. G. Djorgovski, Alex Cioc, Anwell Wang, Jerry Zhang, Elizabeth Lawler, Stacy Yeh, Ashish Mahabal, Matthew Graham, Andrew Drake, Scott Davidoff, Jeffrey S. Norris, and Giuseppe Longo. Immersive and collaborative data visualization using virtual reality platforms. In 2014 IEEE International Conference on Big Data (Big Data), pages 609–614, 2014.10.1109/BigData.2014.7004282Search in Google Scholar

26. Adam Drogemuller, Andrew Cunningham, James Walsh, Bruce H. Thomas, Maxime Cordeil, and William Ross. Examining virtual reality navigation techniques for 3D network visualisations. Journal of Computer Languages, 56:100937, 2020.10.1016/j.cola.2019.100937Search in Google Scholar

27. Tim Dwyer, Kim Marriott, Tobias Isenberg, Karsten Klein, Nathalie Riche, Falk Schreiber, Wolfgang Stuerzlinger, and Bruce H Thomas. Immersive Analytics: An Introduction. In Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, editors, Immersive Analytics, pages 1–23. Springer, 2018.10.1007/978-3-030-01388-2Search in Google Scholar

28. Philipp Fleck, Aimée Sousa Calepso, Sebastian Hubenschmid, Michael Sedlmair, and Dieter Schmalstieg. Ragrug: A toolkit for situated analytics. IEEE Transactions on Visualization and Computer Graphics, 2022.10.1109/TVCG.2022.3157058Search in Google Scholar PubMed

29. Adrien Fonnet and Yannick Prie. Survey of Immersive Analytics. IEEE Transactions on Visualization and Computer Graphics, pages 1, 2019.10.1109/TVCG.2019.2929033Search in Google Scholar PubMed

30. B. Fröhler, C. Anthes, F. Pointecker, J. Friedl, D. Schwajda, A. Riegler, S. Tripathi, C. Holzmann, M. Brunner, H. Jodlbauer, H.-C. Jetter, and C. Heinzl. A survey on cross-virtuality analytics. Computer Graphics Forum, 41(1):465–494, 2022.10.1111/cgf.14447Search in Google Scholar

31. Thomas D. Goddard, Conrad C. Huang, Elaine C. Meng, Eric F. Pettersen, Gregory S. Couch, John H. Morris, and Thomas E. Ferrin. UCSF ChimeraX: Meeting modern challenges in visualization and analysis. Protein Science, 27(1):14–25, 2018.10.1002/pro.3235Search in Google Scholar PubMed PubMed Central

32. Jonathan Goldberg, Hsien-bin Huang, Young-guen Kwon, Paul Greengard, Angus C. Nairn, and John Kuriyan. Serine/threonine phosphatase-1. PDB - Protein Data Bank, 1995.Search in Google Scholar

33. Jonathan Goldberg, Hsien-bin Huang, Young-guen Kwon, Paul Greengard, Angus C. Nairn, and John Kuriyan. Three-dimensional structure of the catalytic subunit of protein serine/threonine phosphatase-1. Nature, 376(6543):745–753, 1995.10.1038/376745a0Search in Google Scholar PubMed

34. Antonio Gracia, Santiago Gonzalez, Victor Robles, Ernestina Menasalvas, and Tatiana Landesberger. New insights into the suitability of the third dimension for visualizing multivariate/multidimensional data: A study based on loss of quality quantification. Information Visualization, 15, 11 2014.10.1177/1473871614556393Search in Google Scholar

35. Raphael Grasset, Julian Looser, and Mark Billinghurst. Transitional interface: Concept, issues and framework. In ISMAR’06: Proceedings of the 5th IEEE and ACM International Symposium on Mixed and Augmented Reality, pages 231–232, 10 2006.10.1109/ISMAR.2006.297819Search in Google Scholar

36. Dominik Herr, Jan Reinhardt, Robert Krueger, Guido Reina, and Thomas Ertl. Immersive Visual Analytics for Modular Factory Layout Planning. In IEEE Immersive Analytics Workshop, 2017.10.1016/j.procir.2018.03.200Search in Google Scholar

37. Petra Isenberg, Bongshin Lee, Huamin Qu, and Maxime Cordeil. Immersive visual data stories. In Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, editors, Immersive Analytics, pages 165–184. Springer, 2018.10.1007/978-3-030-01388-2_6Search in Google Scholar

38. Robert JK Jacob. The use of eye movements in human-computer interaction techniques: what you look at is what you get. ACM Transactions on Information Systems (TOIS), 9(2):152–169, 1991.10.1145/123078.128728Search in Google Scholar

39. S. Jaeger, K. Klein, L. Joos, J. Zagermann, M. de Ridder, J. Kim, J. Yang, U. Pfeil, H. Reiterer, and F. Schreiber. Challenges for brain data analysis in VR environments. In 2019 IEEE Pacific Visualization Symposium (PacificVis), pages 42–46, 2019.10.1109/PacificVis.2019.00013Search in Google Scholar

40. Yvonne Jansen, Pierre Dragicevic, Petra Isenberg, Jason Alexander, Abhijit Karnik, Johan Kildal, Sriram Subramanian, and Kasper Hornbæk. Opportunities and challenges for data physicalization. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pages 3227–3236, 2015.10.1145/2702123.2702180Search in Google Scholar

41. Charlene Jennett, Anna L. Cox, Paul Cairns, Samira Dhoparee, Andrew Epps, Tim Tijs, and Alison Walton. Measuring and defining the experience of immersion in games. International Journal of Human-Computer Studies, 66(9):641–661, 2008.10.1016/j.ijhcs.2008.04.004Search in Google Scholar

42. Hans-Christian Jetter, Jan-Henrik Schröder, Jan Gugenheimer, Mark Billinghurst, Christoph Anthes, Mohamed Khamis, and Tiare Feuchtner. Transitional Interfaces in Mixed and Cross-Reality: A New Frontier?, pages 46–49. Association for Computing Machinery, 2021.Search in Google Scholar

43. Laura J. Kingsley, Vincent Brunet, Gerald Lelais, Steve McCloskey, Kelly Milliken, Edgardo Leija, Stephen R. Fuhs, Kai Wang, Edward Zhou, and Glen Spraggon. Development of a virtual reality platform for effective communication of structural data in drug discovery. Journal of Molecular Graphics and Modelling, 89:234–241, 2019.10.1016/j.jmgm.2019.03.010Search in Google Scholar PubMed

44. Karsten Klein, Michael Aichem, Ying Zhang, Stefan Erk, Björn Sommer, and Falk Schreiber. TEAMwISE: synchronised immersive environments for exploration and analysis of animal behaviour. Journal of Visualization, 24(4):845–859, 2021.10.1007/s12650-021-00746-2Search in Google Scholar

45. Joseph Kotlarek, Oh-Hyun Kwon, Kwan-Liu Ma, Peter Eades, Andreas Kerren, Karsten Klein, and Falk Schreiber. A study of mental maps in immersive network visualization. In Proc. IEEE Pacific Visualization – IEEE PacificVis, pages 1–10, 2020.10.1109/PacificVis48177.2020.4722Search in Google Scholar

46. Matthias Kraus, Johannes Fuchs, Björn Sommer, Karsten Klein, Ulrich Engelke, Daniel Keim, and Falk Schreiber. Immersive analytics with abstract 3D visualizations: A survey. Computer Graphics Forum, 41(1):201–229, 2022.10.1111/cgf.14430Search in Google Scholar

47. Matthias Kraus, Karsten Klein, Johannes Fuchs, Daniel A Keim, Falk Schreiber, and Michael Sedlmair. The value of immersive visualization. IEEE computer graphics and applications, 41(4):125–132, 2021.10.1109/MCG.2021.3075258Search in Google Scholar PubMed

48. O. Kwon, C. Muelder, K. Lee, and K. Ma. A study of layout, rendering, and interaction methods for immersive graph visualization. IEEE Transactions on Visualization and Computer Graphics, 22(7):1802–1815, July 2016.10.1109/TVCG.2016.2520921Search in Google Scholar PubMed

49. Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, editors. Immersive Analytics, volume 11190 of LNCS. Springer, 2018.10.1007/978-3-030-01388-2Search in Google Scholar

50. Daniel Martin, Sandra Malpica, Diego Gutierrez, Belen Masia, and Ana Serrano. Multimodality in vr: A survey. ACM Comput. Surv., dec 2021. Just Accepted.10.1145/3508361Search in Google Scholar

51. Vivien Marx. Method of the year: spatially resolved transcriptomics. Nature Methods, 18:23–25, 2021.10.1038/s41592-020-01033-ySearch in Google Scholar PubMed

52. Arif Masrur, Jiayan Zhao, Jan Oliver Wallgrün, Peter LaFemina, and Alexander Klippel. Immersive applications for informal and interactive learning for earth science. In Workshop on Immersive Analytics. Exploring Future Interaction and Visualization Technologies for Data Analytics. In conjunction with IEEE VIS, 2017.Search in Google Scholar

53. Jon McCormack, Jonathan C. Roberts, Benjamin Bach, Carla Dal Sasso Freitas, Takayuki Itoh, Christophe Hurter, and Kim Marriott. Multisensory Immersive Analytics, pages 57–94. Springer International Publishing, Cham, 2018.10.1007/978-3-030-01388-2_3Search in Google Scholar

54. John P. McIntire, Paul R. Havig, and Eric E. Geiselman. Stereoscopic 3d displays and human performance: A comprehensive review. Displays, 35(1):18–26, 2014.10.1016/j.displa.2013.10.004Search in Google Scholar

55. Paul Milgram and Fumio Kishino. A taxonomy of mixed reality visual displays. IEICE Trans. Information Systems, vol. E77-D, no. 12:1321–1329, 1994.Search in Google Scholar

56. Paul Milgram, Haruo Takemura, Akira Utsumi, and Fumio Kishino. Augmented reality: a class of displays on the reality-virtuality continuum. In Hari Das, editor, Telemanipulator and Telepresence Technologies, volume 2351, pages 282–292. International Society for Optics and Photonics, SPIE, 1995.10.1117/12.197321Search in Google Scholar

57. Patrick Millais, Simon L. Jones, and Ryan Kelly. Exploring data in virtual reality: Comparisons with 2D data visualizations. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, CHI EA’18, pages 1–6. Association for Computing Machinery, 2018.10.1145/3170427.3188537Search in Google Scholar

58. Haylie L. Miller and Nicoleta L. Bugnariu. Level of immersion in virtual environments impacts the ability to assess and teach social skills in autism spectrum disorder. Cyberpsychology, Behavior, and Social Networking, 19(4):246–256, 2016.10.1089/cyber.2014.0682Search in Google Scholar PubMed PubMed Central

59. Kazuo Misue, Peter Eades, Wei Lai, and Kozo Sugiyama. Layout adjustment and the mental map. Journal of Visual Languages & Computing, 6(2):183–210, 1995.10.1006/jvlc.1995.1010Search in Google Scholar

60. Jeanne Nakamura and Mihaly Csikszentmihalyi. The Concept of Flow, pages 239–263. Springer Netherlands, Dordrecht, 2014. doi:10.1007/978-94-017-9088-8_16.

61. Magnus Norrby, Christoph Grebner, Joakim Eriksson, and Jonas Boström. Molecular Rift: Virtual reality for drug designers. Journal of Chemical Information and Modeling, 55(11):2475–2484, 2015. doi:10.1021/acs.jcim.5b00544.

62. C. Nowke, M. Schmidt, S. J. van Albada, J. M. Eppler, R. Bakker, M. Diesmann, B. Hentschel, and T. Kuhlen. VisNEST – interactive analysis of neural activity data. In 2013 IEEE Symposium on Biological Data Visualization (BioVis), pages 65–72, 2013. doi:10.1109/BioVis.2013.6664348.

63. Michael B. O'Connor, Simon J. Bennie, Helen M. Deeks, Alexander Jamieson-Binnie, Alex J. Jones, Robin J. Shannon, Rebecca Walters, Thomas J. Mitchell, Adrian J. Mulholland, and David R. Glowacki. Interactive molecular dynamics in virtual reality from quantum chemistry to drug binding: An open-source multi-person framework. The Journal of Chemical Physics, 150(22):220901, 2019. doi:10.1063/1.5092590.

64. Eric F. Pettersen, Thomas D. Goddard, Conrad C. Huang, Elaine C. Meng, Gregory S. Couch, Tristan I. Croll, John H. Morris, and Thomas E. Ferrin. UCSF ChimeraX: Structure visualization for researchers, educators, and developers. Protein Science, 30(1):70–82, 2021. doi:10.1002/pro.3943.

65. Jaziar Radianti, Tim A. Majchrzak, Jennifer Fromm, and Isabell Wohlgenannt. A systematic review of immersive virtual reality applications for higher education: Design elements, lessons learned, and research agenda. Computers & Education, 147:103778, 2020. doi:10.1016/j.compedu.2019.103778.

66. George Robertson, David Ebert, Stephen Eick, Daniel Keim, and Ken Joy. Scale and complexity in visual analytics. Information Visualization, 8(4):247–253, 2009. doi:10.1057/ivs.2009.23.

67. Alexander Schäfer, Gerd Reis, and Didier Stricker. A survey on synchronous augmented, virtual and mixed reality remote collaboration systems. ACM Computing Surveys, April 2022. Just accepted. doi:10.1145/3533376.

68. Hasti Seifi, Farimah Fazlollahi, Michael Oppermann, John Andrew Sastrillo, Jessica Ip, Ashutosh Agrawal, Gunhyuk Park, Katherine J. Kuchenbecker, and Karon E. MacLean. Haptipedia: Accelerating haptic device discovery to support interaction & engineering design. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–12, 2019. doi:10.1145/3290605.3300788.

69. Adalberto L. Simeone, Mohamed Khamis, Augusto Esteves, Florian Daiber, Matjaž Kljun, Klen Čopič Pucihar, Poika Isokoski, and Jan Gugenheimer. International workshop on cross-reality (XR) interaction. In Companion Proceedings of the 2020 Conference on Interactive Surfaces and Spaces, ISS '20, pages 111–114. Association for Computing Machinery, 2020. doi:10.1145/3380867.3424551.

70. Richard Skarbez, Missie Smith, and Mary C. Whitton. Revisiting Milgram and Kishino's reality-virtuality continuum. Frontiers in Virtual Reality, 2, 2021. doi:10.3389/frvir.2021.647997.

71. Mel Slater. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philosophical Transactions of the Royal Society B: Biological Sciences, 364:3549–3557, 2009. doi:10.1098/rstb.2009.0138.

72. Mel Slater and Sylvia Wilbur. A framework for immersive virtual environments (FIVE): Speculations on the role of presence in virtual environments. Presence: Teleoperators and Virtual Environments, 6(6):603–616, 1997. doi:10.1162/pres.1997.6.6.603.

73. Björn Sommer and Falk Schreiber. Integration and virtual reality exploration of biomedical data with CmPI and VANTED. it – Information Technology, 59(4):181–190, 2017. doi:10.1515/itit-2016-0030.

74. Sonification art. https://sonificationart.wordpress.com/.

75. Johannes Sorger, Manuela Waldner, Wolfgang Knecht, and Alessio Arleo. Immersive analytics of large dynamic networks via overview and detail navigation. In 2nd International Conference on Artificial Intelligence & Virtual Reality (AIVR), 2019. doi:10.1109/AIVR46125.2019.00030.

76. Wolfgang Stuerzlinger, Tim Dwyer, Steven Drucker, Carsten Görg, Chris North, and Gerik Scheuermann. Immersive human-centered computational analytics. In Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, editors, Immersive Analytics, pages 139–163. Springer, 2018. doi:10.1007/978-3-030-01388-2_5.

77. Bruce H. Thomas, Michael Marner, Ross T. Smith, Neven Abdelaziz Mohamed Elsayed, Stewart Von Itzstein, Karsten Klein, Matt Adcock, Peter Eades, Andrew Irlitti, Joanne Zucco, et al. Spatial augmented reality – a tool for 3D data visualization. In 2014 IEEE VIS International Workshop on 3DVis (3DVis), pages 45–50. IEEE, 2014. doi:10.1109/3DVis.2014.7160099.

78. Bruce H. Thomas, Gregory F. Welch, Pierre Dragicevic, Niklas Elmqvist, Pourang Irani, Yvonne Jansen, Dieter Schmalstieg, Aurélien Tabard, Neven A. M. ElSayed, Ross T. Smith, and Wesley Willett. Situated analytics. In Kim Marriott, Falk Schreiber, Tim Dwyer, Karsten Klein, Nathalie Henry Riche, Takayuki Itoh, Wolfgang Stuerzlinger, and Bruce H. Thomas, editors, Immersive Analytics, pages 185–220. Springer, 2018. doi:10.1007/978-3-030-01388-2_7.

79. Xiyao Wang, Lonni Besançon, Mehdi Ammi, and Tobias Isenberg. Understanding differences between combinations of 2D and 3D input and output devices for 3D data visualization. International Journal of Human-Computer Studies, 163:102820, 2022. doi:10.1016/j.ijhcs.2022.102820.

80. Daniel Weiskopf. Uncertainty visualization: Concepts, methods, and applications in biological data visualization. Frontiers in Bioinformatics, 2, 2022. doi:10.3389/fbinf.2022.793819.

81. Sean White and Steven Feiner. SiteLens: Situated visualization techniques for urban site visits. In Proceedings of the 2009 CHI Conference on Human Factors in Computing Systems, pages 1117–1120, 2009. doi:10.1145/1518701.1518871.

82. Matt Whitlock, Stephen Smart, and Danielle Albers Szafir. Graphical perception for immersive analytics. In 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pages 616–625, 2020. doi:10.1109/VR46266.2020.00084.

83. Wesley Willett, Yvonne Jansen, and Pierre Dragicevic. Embedded data representations. IEEE Transactions on Visualization and Computer Graphics, 23(1):461–470, 2017. doi:10.1109/TVCG.2016.2598608.

84. Shea B. Yonker, Oleksandr O. Korshak, Timothy Hedstrom, Alexander Wu, Siddharth Atre, and Jürgen P. Schulze. 3D medical image segmentation in virtual reality. Electronic Imaging, 2019. doi:10.2352/ISSN.2470-1173.2019.2.ERVR-188.

Received: 2022-05-23
Revised: 2022-08-11
Accepted: 2022-08-12
Published Online: 2022-09-06
Published in Print: 2022-08-26

© 2022 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
