New digital reality, understood as a spectrum of technologies and experiences that digitally simulate and extend reality across different human senses, has received considerable attention in recent years. In particular, we have witnessed great advances in mixed reality (MR) technologies, such as Virtual Reality (VR) and Augmented Reality (AR), which hold enormous potential for application domains like training, simulation, education, entertainment, health, and sports. Moreover, other forms of digitally enhanced reality (XR) also support novel forms of immersion and experience when generating, visualizing, and interacting with digital content, whether displayed in fully immersive virtual environments or superimposed onto our view of the real world, and they will significantly change the way we work, travel, play, and communicate. Consequently, we face dramatic changes in interactive media creation, access, and perception. In this special issue, we solicit work that addresses novel interaction design, interfaces, and implementations of new digital reality, in which our reality is blended with virtuality, with a focus on users’ needs, joy, and visions.
We live in a complex world, in which physical and digital environments, media, and interactions are woven together. The lines between natural and artificial, reality and virtuality, human and machine intelligence seem to blur. We observe that those originally distinct semantic differentials are no longer clearly distinguishable. Instead, they form a continuum in which these two aspects define the endpoints, but the much larger area is given by a mixed space in-between.
Let us consider some examples of such blended or mixed spaces: About 300,000 years ago, our world consisted only of natural surroundings. Then, Homo sapiens started to produce artificial objects, ranging from stone tools and the wheel to calculators. Nowadays, most objects in our urban areas are made by humans, and even objects that appear natural are often artificial or at least partly artificial. For instance, even natural trees have typically been planted and pruned by humans, and thereby they move along the continuum towards more artificial objects.
Next, in classical computer systems (based on binary representation), data is encoded in fundamental units called bits, where each bit represents a distinct state, either 1 or 0. With this technology, human intelligence has already been simulated in some tasks (e. g. image recognition or speech analysis and synthesis) and sometimes even extended for other tasks (e. g. numerical data processing) by computers. However, we observe a shift from this classical model of computing towards other approaches such as quantum computing. Here, data is encoded into quantum bits (qubits), which can represent a one, a zero, or a coherent superposition of both states simultaneously. The probabilistic approach of superposition, together with quantum entanglement, which allows qubits to affect the state of other qubits, promises enormous performance improvements for several complex tasks. Again, with qubits, the area between 0 and 1 can be considered a continuum.
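The idea that a qubit occupies the continuum between 0 and 1 can be illustrated with a minimal simulation. The following sketch (our own illustration, not from the article; the function names `hadamard` and `measure_probabilities` are hypothetical) models a single qubit as two complex amplitudes and shows how a Hadamard gate turns the classical state |0⟩ into an equal superposition:

```python
import math

def hadamard(alpha, beta):
    """Apply the Hadamard gate, putting a basis state into superposition."""
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

def measure_probabilities(alpha, beta):
    """Born rule: probabilities of observing |0> or |1> upon measurement."""
    return abs(alpha) ** 2, abs(beta) ** 2

# Start in the classical state |0> (amplitude 1 for |0>, 0 for |1>) ...
alpha, beta = 1 + 0j, 0 + 0j
# ... and move it onto the continuum between 0 and 1:
alpha, beta = hadamard(alpha, beta)
p0, p1 = measure_probabilities(alpha, beta)
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

After the gate, a measurement yields 0 or 1 with equal probability; only the act of measuring collapses the qubit back to one of the classical endpoints.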
Another example can be found in virtual reality (VR), which has gained enormous attention recently. VR denotes an immersive, computer-generated multisensory stimulation in which users can dive into a fully artificial reality. Again, VR can be seen as the semantic opposite of the real world. However, other techniques such as augmented reality (AR) or augmented virtuality (AV) combine virtual and real objects in a coherent and consistent space, denoted as a reality-virtuality continuum, along which the different mixed reality (MR) technologies can be described.
These new digital realities with an increasingly blended space of mixed realities require new paradigms for the interaction and communication between humans and technology. One major goal of human-computer interaction (HCI) research is to support humans in using technology so that they can solve their tasks in an effective, efficient, and satisfying way. In this context, the analysis, design, implementation, and evaluation of future human-computer interactions play an essential role.
2 A Brief History of New Digital Realities
Digital transformation and new digital realities nowadays affect almost every aspect of our daily lives and will further shape how we work, learn, communicate, and live in the future. In particular, two emerging technologies with the potential to have a long-term impact on human-computer interaction are VR and AR. One of the first fictional descriptions of Virtual Reality occurred in 1935, by the American science fiction writer Stanley G. Weinbaum. In the story, a professor invents a pair of goggles that enables “a movie that gives one sight and sound [...] taste, smell, and touch. [...] You are in the story, you speak to the shadows (characters) and they reply, and instead of being on a screen, the story is all about you, and you are in it.” With this description, Weinbaum perfectly captures a psychological state known as telepresence, which is key in defining Virtual Reality. The invention of VR, which denotes the generation of completely synthetic environments using computer technology, dates back to the 1960s, when Morton Heilig patented one of the earliest examples of immersive computer-generated environments, the Sensorama. The motivation was to make people feel like they were in the movie by simulating a motorcycle ride through a city environment and letting people see the road, hear the engine, feel the vibration, and smell the motor’s exhaust. While the Sensorama was a static device into which people had to put their head, Comeau and Bryan created the first head-mounted display (HMD), the Headsight, in 1961; it consisted of two video screens, one for each eye, as well as a magnetic tracking device. Around the same time, Ivan Sutherland envisioned the Ultimate Display as a “room within which the computer can control the existence of matter”. In 1968, it was Sutherland who created the first partially see-through HMD, known as the Sword of Damocles, which is widely considered a precursor of current AR technology.
Though having its origins in the same decade as VR, it took almost 30 years before AR emerged as an independent research field (for an extensive historical background of both VR and AR, see ). In contrast to immersive VR technology, which fully replaces the view of the real world with computer-generated content, AR aims at enhancing real-world views by embedding additional virtual content. Both technologies became particularly popular in the gaming and entertainment sector. According to the Gartner hype cycle 2015, both technologies had already passed the phases of technology breakthrough and the peak of inflated expectations, and were estimated to reach mainstream adoption by 2025. In the report of 2019, both VR and AR were even excluded from the list of emerging technologies because, according to Gartner, they had already reached a mature state. However, related technologies such as immersive workspaces, for instance for online meetings, or AR clouds for sharing virtual content, have entered the innovation trigger phase and will further boost AR and VR technologies.
A more differentiated analysis of the strengths and weaknesses of both technologies led to a set of customized applications beyond the entertainment industry. VR can immerse users in environments that represent a different place or time while isolating them from the real world. This can be beneficial for applications such as therapy, military training, and design review. In contrast, AR technology is preferable to VR systems if users are required to see and interact with their real environment, for example, for navigation, maintenance tasks, and computer-assisted surgeries. Regarding the virtualization of real-world objects, VR and AR are only two stages within a continuum, which was introduced by Paul Milgram in 1994. While purely real environments (REs) and completely virtual environments (VEs) mark the extremes of this continuum, a range of Mixed Reality (MR) environments, i. e. spaces that combine real and virtual elements, lies in between. Whitton et al. describe fundamental challenges in enabling such MR environments, including the merging of real and virtual scene elements, the tracking of physical objects, as well as the simulation of plausible virtual-physical interactions. If the MR environment is predominantly real and only individual virtual objects are embedded, the overall state is denoted as AR. In contrast, Augmented Virtuality (AV) refers to a primarily virtual environment to which some real objects have been added.
Though Milgram’s continuum provides a basis for the continuous virtualization of an environment, it is limited in the sense that it only considers augmentation, i. e. the addition of virtual content, as a means of increasing the virtuality of objects. Related research projects, for example by Broll et al., demonstrate that REs can also be modified by (partially) removing real content, resulting in a Diminished Reality (DR) state. Analogous to the relationship between AV and AR, Diminished Virtuality (DV) is the equivalent of DR, as it refers to a predominantly virtual environment from which some of the virtual elements are removed and replaced by visualizations of real-world objects. While augmentation and diminishment are opposites on a conceptual level, they both increase the overall virtuality of the affected objects. This is because, technologically, the diminishment of real-world objects also requires the superposition of virtual content, which in this particular case represents the portion of the scene that was previously obscured by these objects. Instead of adding virtual or removing real content, an alternative way to achieve such a (partial or full) virtualization is the modulation of existing objects, for example, by using image filters to highlight specific features, or by virtually transforming visual properties of the physical object.
To integrate augmentation, diminishment, and modulation into a comprehensive taxonomy, Mann et al. introduced an axis orthogonal to Milgram’s continuum, called Mediality. However, we argue that modulation of REs also decreases the number of unaltered real objects, and therefore can be understood as a third path to increase the virtuality of such environments. Consequently, we extend Milgram’s continuum by two additional paths between reality and virtuality, resulting in (i) augmentation, (ii) diminishment, and (iii) modulation of physical and virtual elements within an environment. The three operations should not be understood as mutually exclusive, since they can be applied simultaneously to different parts of a physical object to increase its overall level of virtuality, resulting in new digital realities in which reality and virtuality are blended for novel forms of HCI. Hence, we will live in new digital realities in which physical and virtual objects are blended together. However, physical objects are not limited to the traditional augmentation, modulation, and diminishment; all objects can change their state along the RV continuum, i. e., virtual objects might also materialize, for instance, through digital fabrication or 3D printing technologies. Furthermore, real, mixed, and virtual objects can interact with each other in a physically plausible way. And finally, humans will be able to seamlessly experience blended spaces without being required to switch the display technology. The blending of reality and virtuality will eventually become our new digital reality.
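The extended continuum can be sketched as a simple data model. The following illustration is our own (the class, method names, and the additive virtuality score are hypothetical simplifications, not part of the taxonomy itself): each scene object holds a position between 0.0 (purely real) and 1.0 (purely virtual), and all three operations move it toward the virtual endpoint without excluding one another.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    """An object positioned on the reality-virtuality continuum."""
    name: str
    virtuality: float = 0.0           # 0.0 = purely real, 1.0 = purely virtual
    applied: list = field(default_factory=list)

    def _shift(self, operation: str, amount: float) -> None:
        # The operations are not mutually exclusive: each application
        # moves the object further toward the virtual endpoint.
        self.applied.append(operation)
        self.virtuality = min(1.0, self.virtuality + amount)

    def augment(self, amount: float) -> None:   # (i) add virtual content
        self._shift("augmentation", amount)

    def diminish(self, amount: float) -> None:  # (ii) remove real content
        self._shift("diminishment", amount)

    def modulate(self, amount: float) -> None:  # (iii) alter existing content
        self._shift("modulation", amount)

tree = SceneObject("tree")
tree.augment(0.3)   # e. g., virtual labels overlaid on the branches
tree.modulate(0.2)  # e. g., an image filter highlighting the foliage
print(round(tree.virtuality, 2), tree.applied)
```

Applying augmentation and modulation together, as in the example, leaves the tree partway along the continuum, which is exactly the point of the extended model: the three paths compose rather than compete.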
3 About this Special Issue
The previous sections described the main ideas of blending reality and virtuality into new digital realities. The following sections briefly introduce the articles that are part of this special issue and explain how they relate to this trend. The articles demonstrate, for example, how researchers can understand the current state, learn from specific contexts to improve it, explore novel interaction concepts, and look at visionary approaches for the next generation of digital realities.
3.1 “How can I grab that?” – Solving Issues of Interaction in VR by Choosing Suitable Selection and Manipulation Techniques
The selection and manipulation of objects in Virtual Reality confront application developers with a substantial challenge, as they need to ensure seamless interaction in three-dimensional space. Assessing the advantages and disadvantages of selection and manipulation techniques in specific scenarios, with regard to usability and user experience, is a mandatory task when searching for suitable forms of interaction. This article presents a taxonomy that allows the classification of techniques along multiple dimensions. Typical interaction issues are then associated with these dimensions. Furthermore, the article analyzes the results of a study comparing multiple selection techniques and presents a tool that allows developers of VR applications to search for appropriate interaction techniques and to obtain scenario-dependent suggestions based on the data of the conducted study.
3.2 The Shared View Paradigm in Asymmetric Virtual Reality Setups
Asymmetric VR applications are a substantial subclass of multi-user VR in which not all participants have the same interaction possibilities. In an educational scenario, for example, learners can use immersive VR HMD technology to inform themselves at different virtual exhibits. Educators can use a desktop PC to guide learners through the exhibits while still paying attention to safety aspects in the real world. In such scenarios, educators must ensure that learners have been informed about all virtual exhibits. A common visualization technique that supports educators is to render the view of the learners on their desktop screen, referred to as the shared view paradigm. However, this straightforward visualization involves challenges; for example, educators have no control over the scene. This article investigates five alternative techniques to approach the shared view paradigm and to visualize the gaze of users (‘view visualizations’). Furthermore, the authors propose three techniques that can support educators in understanding which parts of the scene learners have already explored (‘exploration visualizations’).
3.3 appRaiseVR – An Evaluation Framework for Immersive Experiences
For the diverse VR application areas, it is essential to understand the user’s condition to ensure a safe, pleasant, and meaningful VR experience. However, VR experience evaluation is still in its infancy. This article takes up this research desideratum by conflating diverse expertise and insights about experience evaluation in general, and VR experiences in particular, into a systematic evaluation framework called appRaiseVR. To this end, the authors conducted two focus groups (bottom-up approach) with experts working in different fields of experience evaluation (e. g., movie experiences, theatre experiences). After clustering the results of both focus groups, the authors conflated those results with insights about experience evaluation stemming from the field of user experience into the final framework (top-down approach). The resulting framework, appRaiseVR, offers high-level guidance for evaluators with different expertise and contexts.
3.4 Mixed Reality based Collaboration for Design Processes
Due to constantly and rapidly growing digitization, the requirements for international cooperation are changing. Tools for collaborative work, such as video telephony, are already an integral part of today’s communication across companies. However, these tools are not sufficient to represent the full physical presence of an employee, or of a product and its components, in another location. This article focuses on Mixed Reality collaboration in manufacturing and quality assurance processes. It presents a novel object-centered approach that combines Augmented and Virtual Reality technology, as well as design suggestions for remote collaboration. Remote collaboration prototypes using HMD-based as well as tablet-based AR are developed for the context of quality assurance. Visualization procedures as well as role-based interaction methods to support collaboration in design processes are presented. As a result, key areas for future research are identified, and a design space for the use of Augmented and Virtual Reality remote collaboration in the manufacturing process of the automotive industry is described.
3.5 Investigating the Relationship between Emotion Recognition Software and Usability Metrics
Although various forms of general-purpose sentiment/emotion recognition software are available, such tools are rarely employed in usability engineering (UE) for measuring the emotional state of participants. This article investigates whether the application of sentiment/emotion recognition software is beneficial for gathering objective and intuitive data that can predict usability similarly to traditional usability metrics. A large-scale usability test (N=125) was performed for the three modalities text, speech, and face, using a counterbalanced within-subject design with two websites of varying usability. Results show that emotion metrics could not be used to successfully differentiate between the two websites. Regression models, whether unimodal or multimodal, could not predict usability metrics. This article discusses reasons for these results and how to continue this research with more sophisticated methods.
This issue presents work on Digital Reality, including VR, AR, and usability measurement using emotion recognition. The presented works show that the future, and the way we live, work, and are entertained, heavily depends on advancements in computer science, which help improve the quality of displayed content and allow for the creation of photorealistic, vivid, and more life-like VR experiences, as well as for integrating digital media and computation into our physical world. The natural interplay of virtual objects and their real environment has already been the subject of several research projects, which thereby reveal future prospects of Blended Spaces (e. g., the simulation of realistic illumination effects or shadows in MR environments). Considering the virtual content separately, the film industry demonstrates algorithms that create computer-generated content that is already close to being indistinguishable from a real video recording.
While the rendering of these graphics still requires substantial time and computational power, it is reasonable to assume that real-time renderings of equal quality will become achievable with improving computing systems that support digital reality. Beyond computer graphics and technology evolution, New Digital Reality also touches areas of analogue life, especially when technology captures us in the physical world.
About the authors
Katrin Wolf is a professor for Human-Computer Interaction at the Beuth University of Applied Sciences Berlin. Previously, she held a professorship for Media Informatics at Hamburg University of Applied Sciences and at Berlin University of Art and Design. Her research interests lie at the intersection of human–computer interaction and interaction design, focusing on how to make novel technologies more usable and useful. To date, Katrin’s research has focused on technologies and domains including mobile and wearable systems; virtual, augmented and mixed reality, as well as interactive exhibitions.
After her PhD, received from T-Labs at Technical University Berlin, Katrin was a Postdoctoral Researcher at the University of Stuttgart from 2014 to 2015, where she worked in the Human-Computer Interaction Lab headed by Albrecht Schmidt. Her work experience includes research visits/internships at the Social NUI Group at the University of Melbourne, INRIA (Bordeaux, FR), Glasgow University (Glasgow, UK), CSIRO (Sydney, AUS), and HITLab (Christchurch, NZ), as well as working as an interface designer at the Jewish Museum Berlin.
Katrin has published in the most competitive conferences of her field, including ACM CHI, ACM TEI, and ACM MobileHCI. She is actively involved in the research community. Most notably, she was General Chair of the German conference Mensch & Computer (MuC) 2019, Student Research Competition Chair of CHI 2019 and 2020, as well as Program Chair of MUM 2018, AHs 2019, and TEI 2021.
 M. Borg, S. S. Johansen, D. L. Thomsen, and M. Kraus. Practical Implementation of a Graphics Turing Test. In Proceedings of the International Symposium on Visual Computing (ISVC), pages 305–313. Springer, 2012. doi:10.1007/978-3-642-33191-6_30.
 P. Cipresso, I. A. C. Giglioli, M. A. Raya, and G. Riva. The Past, Present, and Future of Virtual and Augmented Reality Research: A Network and Cluster Analysis of the Literature. Frontiers in Psychology, 9:2086, 2018. doi:10.3389/fpsyg.2018.02086.
 J. Herling and W. Broll. Advanced Self-Contained Object Removal for Realizing Real-Time Diminished Reality in Unconstrained Environments. In Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pages 207–212, 2010. doi:10.1109/ISMAR.2010.5643572.
 M. L. Heilig. Sensorama Simulator, August 28, 1962. US Patent 3,050,870.
 S. Mann. Mediated Reality with Implementations for Everyday Life. Presence Connect, 1, 2002.
 M. Koelle, K. Wolf, and S. Boll. Beyond LED Status Lights: Design Requirements of Privacy Notices for Body-worn Cameras. In Proceedings of the Twelfth International Conference on Tangible, Embedded, and Embodied Interaction (TEI), pages 177–187, 2018. doi:10.1145/3173225.3173234.
 P. Milgram, H. Takemura, A. Utsumi, and F. Kishino. Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum. In Telemanipulator and Telepresence Technologies, volume 2351, pages 282–292. International Society for Optics and Photonics, 1995. doi:10.1117/12.197321.
 S. Pessoa, G. Moura, J. Lima, V. Teichrieb, and J. Kelner. Photorealistic Rendering for Augmented Reality: A Global Illumination and BRDF Solution. In Proceedings of the IEEE Conference on Virtual Reality (VR), pages 3–10, 2010. doi:10.1109/VR.2010.5444836.
 T. Rhee, L. Petikam, B. Allen, and A. Chalmers. MR360: Mixed Reality Rendering for 360° Panoramic Videos. IEEE Transactions on Visualization and Computer Graphics, 23(4):1379–1388, 2017. doi:10.1109/TVCG.2017.2657178.
 D. Smith and B. Burke. Hype Cycle for Emerging Technologies, 2019. http://www.gartner.com/en/documents/3956015/hype-cycle-for-emerging-technologies-2019/, 2019. Accessed: 2020-02-04.
 I. E. Sutherland. The Ultimate Display. In Proceedings of the IFIP Congress, pages 506–508, 1965.
 M. Walker and B. Burton. Hype Cycle for Emerging Technologies, 2015. http://www.gartner.com/en/documents/3100227/hype-cycle-for-emerging-technologies-2015/, 2015. Accessed: 2020-02-04.
 S. G. Weinbaum. Pygmalion’s Spectacles. Simon and Schuster, 2016.
 M. Whitton, B. Lok, B. Insko, and F. Brooks. Integrating Real and Virtual Objects in Virtual Environments. In Proceedings of the International Conference on Human-Computer Interaction (HCI), 2005.
 N. Wouters, R. Kelly, E. Velloso, K. Wolf, H. S. Ferdous, J. Newn, …, and F. Vetere. Biometric Mirror: Exploring Ethical Opinions towards Facial Analysis and Automated Decision-Making. In Proceedings of the 2019 Designing Interactive Systems Conference (DIS), pages 447–461, 2019. doi:10.1145/3322276.3322304.
© 2020 Walter de Gruyter GmbH, Berlin/Boston