One key aspect of the Internet of Things (IoT) is that human-machine interfaces are decoupled from the physicality of the devices. This provides designers with more freedom, but may also lead to more abstract interfaces, as they lack the natural context created by the presence of the machine. Mixed Reality (MR), on the other hand, is a key technology that enables designers to create user interfaces anywhere, either linked to a physical context (augmented reality, AR) or embedded in a virtual context (virtual reality, VR). Designing MR interfaces is especially challenging today, as there is not yet a common design language, nor a set of standard functionalities or patterns. In addition, neither customers nor future users have substantial experience in using MR interfaces.
Prototypes can help to bridge this gap, as they continuously provide user experiences of increasing realism along the design process. We present ExProtoVAR, a tool that supports quick and lightweight prototyping of MR interfaces for IoT using VR technology.
With the advent of more powerful mobile computing and inexpensive display technologies, mixed reality (MR), the combination of augmented reality (AR) and virtual reality (VR), has received increasing attention in industry and in the consumer market. Augmented reality is defined as an interactive experience of a real-world environment in which certain elements are “augmented” with computer-generated perceptual information, e.g. via smartglasses or tablets. Virtual reality, on the other hand, is defined as an immersive, interactive computer-generated experience situated in a simulated environment in which auditory, visual, haptic, and other types of sensory feedback are incorporated. Combining and mixing aspects of AR and VR defines mixed reality. Today, new platforms and new devices create an expanding design space of MR systems.
The topic of MR is only scarcely addressed by the design community, and few, if any, design patterns and principles addressing MR applications exist. This makes it all the more important to support the early phases of prototyping, as designers and users can only rely on minimal shared experiences. While others have used AR to support general user-centered design processes, we are using virtual reality to support the user-centered design of MR applications specifically. DART, an early approach to AR prototyping, was very successful in its time, providing a tool chain that enabled designers to create working prototypes. This, however, brought about a stronger focus on the technology side of the design process, restricting designers to what was possible with the technologies provided with DART. With ExProtoVAR (Experience Prototypes in VR for AR applications) we want to abstract away from specific hardware solutions and software implementations and focus on the design of the user’s interaction with the MR system and the experience of the users, especially with systems that have IoT connectivity. The Internet of Things (IoT) is a computing concept in which everyday physical objects are connected to the internet and are thereby able to identify themselves to other devices, forming computational clusters to work together.
Important aspects that have to be considered when designing for MR are: (1) the spatial context of the situation, (2) the temporal dynamics of the situation, (3) the user’s whole body in the loop, (4) the features of the MR device, and (5) the interplay with IoT devices (sensors, actuators) situated in the real world.
While (1) and (2) depend on the application scenario, (3) depends on the user, the task, and the user’s experience with MR. The choice of the MR device (4) may be part of the design or fixed; in either case, it will always influence (3). The integration of devices (5) poses particular challenges, as the different effects of IoT devices (e.g. information delay, sensor unreliability, inaccuracy, message communication) have to be taken into account.
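To make point (5) concrete, a prototype can deliberately degrade simulated sensor readings. The following sketch is our own illustration (not part of ExProtoVAR) and models two of the effects named above, message loss and inaccuracy, for a simulated distance sensor:

```python
import random

def degrade_reading(value, dropout=0.1, noise=0.5, rng=None):
    """Simulate an unreliable IoT sensor: with probability `dropout`
    the reading is lost (None); otherwise uniform noise of up to
    +/- `noise` is added to the true value."""
    rng = rng or random.Random()
    if rng.random() < dropout:
        return None  # message lost or sensor did not respond
    return value + rng.uniform(-noise, noise)

# A distance sensor whose true reading is 42.0 cm:
rng = random.Random(7)
readings = [degrade_reading(42.0, rng=rng) for _ in range(5)]
```

Confronting users with such degraded readings in the prototype helps to assess early whether the MR interface remains usable under realistic sensor behavior.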
The situational aspects (1+2+5) will often be very specific, e.g. when designing for MR maintenance of a certain machine with close connection to IoT devices, and they might cover several locations and states (representing task progress). During design and prototyping, it is thus essential for the designers to have a realistic representation to work with. The choice of the MR device (4) will, at least in industrial settings, depend on the interaction between (1), (2) and (3) to achieve a certain performance goal.
In the project Prototyping for Innovation (ProFI), we try to address these challenges with ExProtoVAR: a lightweight tool to create interactive virtual prototypes of MR applications. The central idea is to cover situations (1+2) using sequences of panoramas. In contrast to other tools, ExProtoVAR is primarily used in VR. It puts an immersive editor in the designer’s hands, with which virtual tours and state transitions can easily be defined in close interaction with the users. Areas of interest can be defined, and sketches (images) or HTML prototypes of MR content can be integrated to design the MR application. Different presentation modes, such as in-view (attached to the screen) and in-situ (attached to the environment), can be defined. Relevant MR devices (iPhone, Samsung Tablet, Google Glass, Microsoft HoloLens) are simulated (4) to support an informed choice. Virtual representations of IoT devices can be integrated in the interaction loop. Using the IoT middleware MQTT, they can be coupled with external devices and other prototyping tools, such as Node-RED. In evaluation mode, users can immerse themselves in the prototype, try it out, and use audio and text annotations to comment on its features.
The idea of using panoramic images or videos for prototyping MR has been introduced before. Others have used 3D simulations in immersive virtual reality with expensive hardware for AR prototyping. In a sense, we are using VR to simulate MR/AR. This is a successful approach applied by the AR community to explore designs for AR hardware and basic mechanisms (e.g. displays).
ExProtoVAR concentrates on holistic MR experiences produced with a low technical barrier and low development costs. Regarding the distinction between low-fidelity and high-fidelity prototypes (see e.g. work in the mobile context), ours falls into the medium-fidelity range. It extends upon PARnorama in that it immerses the user in the situation rather than showing only a snippet of the situation on the screen of the AR device. Furthermore, the tool enables the simulation of an environment covering IoT interactions and message communication with sensors in the real world.
2 Motivation and Approach
To build prototypes of MR interfaces for IoT interactions, a tool is needed that supports users, designers and developers in the different steps of a prototyping cycle. In the ProFI project, we follow the double diamond model. This model divides the design process into four distinct phases called Discover, Define, Develop and Deliver. The prototyping process for MR can be seen as a special case leading to several challenges on the way to a product (see Figure 2).
The first phase is characterized by creative, divergent thinking, and a tool is needed to make customers acquainted with the possibilities of MR and to make MR concepts come alive. As MR is very dependent on the situation, access to the environment for which the application is planned is crucial. A tool is required with which to meet, immerse into the environment, and discuss possible augmentation ideas.
In the second phase, designers need to develop a clear brief that frames the fundamental design challenge. To this end, a tool that enables the designer to create an interactive prototype easily and within a short period of time can be of great benefit.
In the third phase, several prototypes are created, tested and iterated, converging toward a concrete prototype. Additionally, technical adaptations and solutions need to be developed, and usability tests need to be performed. The interplay between the application and IoT devices has to be tested. As different people are working on different aspects, e.g. designers on interface design and technicians on which device to use, these aspects always need to be integrated into one prototype, so that side effects can be accounted for. A shared tool is required which is able to integrate the different prototypes and ideas.
In the last phase, the resulting project (a product, service or environment) is finalized, produced and launched. A prototype with notes and additional explanations can serve as a benchmark for the solution, representing the different ideas and stages of the prototyping process.
With ExProtoVAR, we present a lightweight tool to create interactive virtual prototypes of MR applications. In the following sections, we showcase the functionality of ExProtoVAR by presenting its usage in a project carried out with the company MODUS Consult AG, Germany, to demonstrate the advantages of an MR client introduced in an Industry 4.0 scenario. The scenario consists of three stations: (1) an enterprise resource planning (ERP) system connected to IoT devices used to support a picking task, (2) a placing task to insert parts at the right places into a carrier, and (3) operating a machine and being informed about the machine status and the processes with respect to the ERP system. In this scenario, a LEGO machine building LEGO robots was chosen as a machine prototype because it covers the same processes that occur in a regular machine. The developed MR client can therefore be transferred to any similar machine scenario. The target device was specified to be a Microsoft HoloLens. The HoloLens is a mixed reality headset enabling the user to engage with digital content and interact with holograms in the real world. The application guides the user through a pick-and-place process while receiving the required content from the ERP system and the IoT sensors. The integration of the IoT devices allows the application to automatically provide feedback matching the current situation and the progress of the worker. Thereby, the user is, e.g., able to see which LEGO part to take out of a stock box or to place into a defined position of the carrier, and is informed about the task’s progress.
3.1 Modeling the Situation Context — Panoramas
To cover the situation context, panoramas are used (see Figure 4 (1)). This serves two purposes: on the one hand, it helps the user to immerse in the situation the application is developed for and thus to get a feeling for using the application in the relevant context; on the other hand, the environment can be used to identify those areas which should later serve as AR markers. Consumer panorama cameras (see Figure 3 left) are sufficient, small and easy to use. We wanted to create a tool with which a situation can be modeled in less than an hour and with which it is easy to deal with several alternative images.
As it was not settled at the beginning of the project which stock boxes to use and what the AR markers would look like, the first panorama pictures covered just a plain table, an installation of typical stock boxes, and the machine to be operated (see Figure 2 left). Even though not very precise, this panorama image already provided a first impression of how the system would look and which parts needed to be integrated.
As the project moved on, insights were gained on which IoT devices, e.g. which sensors, to use, leading to a refinement of the stock boxes. The design of the stock boxes in turn had to be coordinated with the designers and the technicians developing the AR markers. By using a shared scenario, all involved persons can see how the project changes, evolves and progresses while integrating their changes.
3.2 Integrating MR Devices
In ExProtoVAR, different simulated devices can be used to experience the MR content, e.g. a Samsung Galaxy phone, an Apple iPad, an Apple iPhone, a Google Glass or a Microsoft HoloLens. They are simulated with respect to their display size, location and display type. Handheld devices are used in the virtual scene by moving the VR controller; glasses automatically follow the user’s head movements. In this way, the user is able to look around while scanning the environment for augmentations and to develop a feeling for the behavior of the different devices. Figure 5 depicts the augmentation of the LEGO machine as perceived by the user wearing a virtual HoloLens in VR. Figure 9 shows an example of the simulation of an iPhone displaying the ERP interface.
3.3 Defining MR Interactions
Sketches of the information to be displayed at the stock boxes and to be used as MR content are created in a traditional 2D way. Figure 2 on the left shows first sketches of information to be displayed in the appropriate contexts. As a next step, the designer can easily structure the experience by putting the panorama pictures and MR content (e. g. images, HTML pages) in dedicated folders on the smart phone (see Figure 4.2).
However, the main part of prototyping then happens immersed in VR. To create the VR experience, the designer inserts the phone into the GearVR, puts on the headset (Samsung GearVR, see Figure 3 upper left) and uses the controller to define the logical, temporal and spatial aspects of the situation by means of situation-links. Situation-links can be used to define a walk-through by defining links leading from one room to the next room or to a position in a room (see Figure 7 right). Logical connections are also possible, e.g. linking a situation standing in front of a machine to a situation standing at the same position, but with the machine switched on.
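Conceptually, the situation-links turn the set of panoramas into a graph of situations. The following sketch illustrates this underlying idea; the situation and link names are invented for illustration and do not come from ExProtoVAR:

```python
# Each situation is one panorama; situation-links connect them either
# spatially (walk-throughs) or logically (state changes of the machine).
situation_links = {
    ("hall_machine_off", "walk_to_stock"): "stock_table",
    ("stock_table", "walk_to_machine"): "hall_machine_off",
    # logical link: same position, but the machine is switched on
    ("hall_machine_off", "switch_machine_on"): "hall_machine_on",
}

def follow_link(current, link):
    """Return the target situation of a link; stay put if the link is undefined."""
    return situation_links.get((current, link), current)
```

A walk-through is then just a sequence of `follow_link` calls, while logical links model state transitions such as switching the machine on.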
To cover the simulation of AR content, the user marks areas in the panoramas and links them to the prepared MR content (we call these event-links, see Figure 4.3). Whenever the virtual device is moved over the marked area in the defined situation, the device displays the MR content. If the device is a pair of glasses, the content is displayed whenever the user focuses on the marked area. By this means, ExProtoVAR helps to create an impression of how augmentation information would appear in the situational context. Figure 6 displays the design process of the MR widget showing information about the placing process. The top row depicts the development of the content design, the lower row shows how the design is integrated into the situational context: the left image represents the design fitted into the basic scenario, the middle depicts the AR simulation with ExProtoVAR in VR, and the right shows how the widget is displayed in the final product.
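The triggering condition of an event-link can be pictured as a hit test between the device's viewing direction and the marked panorama area. A minimal sketch follows; the coordinates and content names are hypothetical, and ExProtoVAR's internal representation may differ:

```python
def in_marked_area(yaw, pitch, area):
    """True if the viewing direction (yaw/pitch in degrees) lies inside a
    marked area given as (yaw_min, yaw_max, pitch_min, pitch_max).
    Handles areas crossing the +/-180 degree seam of the panorama."""
    yaw_min, yaw_max, pitch_min, pitch_max = area
    yaw = (yaw + 180.0) % 360.0 - 180.0  # normalize to [-180, 180)
    if yaw_min <= yaw_max:
        yaw_ok = yaw_min <= yaw <= yaw_max
    else:  # area wraps around the seam
        yaw_ok = yaw >= yaw_min or yaw <= yaw_max
    return yaw_ok and pitch_min <= pitch <= pitch_max

# Event-link: show the widget while the stock-box area is focused
STOCK_BOX_AREA = (-30.0, 10.0, -20.0, 5.0)

def mr_content_for(yaw, pitch):
    if in_marked_area(yaw, pitch, STOCK_BOX_AREA):
        return "stock_box_widget.html"
    return None
```

For a handheld device the yaw/pitch would come from the controller's pose; for simulated glasses, from the VR headset's head tracking.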
3.4 Defining IoT Interactions
Not only the interaction between the world and the HoloLens needs to be modeled; the interaction between the real world and IoT devices, e.g. the sensors in the stock boxes, also needs to be accounted for (see Figure 4.4). The interplay with the environment and IoT devices can be modeled in ExProtoVAR on a high level. The VR environment simulating the MR scenario is not supposed to be too complex, as its purpose is to serve in early prototyping. In ExProtoVAR, it is for example possible to integrate distance sensors in the VR world. To this end, their position and their range inside the VR world have to be marked (see the marking process of event-links in Figure 7). After defining the conditions for triggering sensor events, the effects can be defined. To this end, ExProtoVAR publishes all interactions as messages on the network using the middleware MQTT and the message broker Mosquitto. Using services such as Node-RED, the designer can specify the event logic and the interplay with the IoT devices. By this means, it is possible to create virtual sensors and actuators and link them either to other virtual IoT devices or to real hardware, also connected via MQTT. A virtual light switch could thus trigger a real LED, a real touch sensor could change the visualization in VR, etc. In our scenario, e.g., the ultrasonic distance sensors recognizing when the user reaches into one of the stock boxes are modeled in VR and connected via MQTT. This allows the software engineers responsible for integrating the solution with the ERP system to prototype the software interfaces interactively.
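The behavior of such a virtual sensor can be sketched as follows. The class below is our own illustration: the topic name and payload are assumptions, and `publish` stands in for the publish method of an MQTT client such as paho-mqtt; ExProtoVAR's actual message format may differ.

```python
class VirtualDistanceSensor:
    """A virtual ultrasonic sensor like the ones in the stock boxes:
    it fires one event each time the (virtual) hand enters its range."""

    def __init__(self, box_id, range_cm, publish):
        self.box_id = box_id
        self.range_cm = range_cm
        self.publish = publish   # e.g. an MQTT client's publish(topic, payload)
        self.inside = False

    def update(self, distance_cm):
        now_inside = distance_cm <= self.range_cm
        if now_inside and not self.inside:  # fire on the rising edge only
            self.publish(f"exprotovar/stockbox/{self.box_id}/grab", "1")
        self.inside = now_inside

# Wiring against a real broker would use something like (not run here):
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client(); client.connect("localhost")
#   sensor = VirtualDistanceSensor("A3", 15.0, client.publish)
events = []
sensor = VirtualDistanceSensor("A3", 15.0, lambda topic, payload: events.append(topic))
for d in (40.0, 12.0, 11.0, 40.0, 10.0):  # the hand reaches in twice
    sensor.update(d)
```

A service such as Node-RED can then subscribe to these topics to implement the event logic and forward the events, e.g. to the ERP system or to real hardware.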
The ExProtoVAR tool allows for a gradual coupling between the real world and its VR simulation. It is possible to link the events of the real sensor data with the simulation of the HoloLens or, the other way around, to use the virtual sensors for broadcasting their events to the real HoloLens. By this means, the prototype can be tested gradually and evolves towards the final product. Figure 8 displays the interplay of the ExProtoVAR tool and the real world. On the left side, the user is shown in the interaction loop with the prototyping tool, deeply immersed in the interaction with the VR world. The right side of the figure displays the user wearing a real MR device, operating with the real sensors. Both users can exchange events via MQTT and interact with one another.
3.5 Defining MR Device Interactions
ExProtoVAR supports not only basic scanning for MR content, but also the modeling of interactive screens. For the evaluation of the screen designs in VR, the simulated device can be brought closer to the user at the press of a button. The device then fills almost the entire view and is detached from hand movements. In screen interaction mode, the VR controller operates a virtual cursor which can be used to simulate touch events on the MR device’s screen. If the MR content is an interactive HTML page, e.g. a click dummy, HTML events can be triggered as usual and the content changes interactively. An additional “situation” protocol (scene://) allows the HTML content to trigger situation transitions to new panoramas. MQTT messages can be triggered as well, e.g. to simulate actions that control a machine and evoke new states. By this means, in our showcase it is possible to simulate the interaction with the ERP interface. The user can press buttons on the simulated screen and thereby switch between different assembly requests or take a look at the available stock (Figure 9).
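The scene:// dispatch can be pictured as a small link handler in front of the HTML view. This is a sketch of the idea only; the situation name shown is hypothetical:

```python
from urllib.parse import urlparse

def handle_link(href, goto_situation):
    """Intercept links clicked in the HTML click dummy: the custom
    scene:// scheme triggers a situation transition; every other link
    is left to ordinary HTML navigation (returns False)."""
    url = urlparse(href)
    if url.scheme == "scene":
        goto_situation(url.netloc)  # e.g. scene://machine_running
        return True
    return False

visited = []
handle_link("scene://machine_running", visited.append)  # panorama switch
handle_link("stock.html", visited.append)               # plain HTML link
```

The same interception point could publish MQTT messages for links that represent machine commands, so that a button press in the click dummy evokes a new machine state.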
3.6 Annotations and Videotaping an Interaction
After a prototype has been created with the ExProtoVAR tool, it can be reviewed by users (see Figure 4.5). To enable the user to provide context-specific feedback, it is possible to mark areas in the prototype and leave written notes and spoken audio notes. The notes are linked to the chosen situation at the marked position and can be edited or complemented in subsequent sessions. This supports the feedback cycle and eases communication about context-dependent issues. An additional feature of ExProtoVAR is that it can provide a compressed version of a prototype as a video export. Users can videotape an interactive session, covering all relevant interactions with the MR application in the virtual world. This can then serve as a basis for the development phase or can be used to present the prototype to several people at once, without necessarily using the VR equipment.
4 Conclusion and Future Work
Designing contextualized MR applications requires careful consideration of the situation. We have presented ExProtoVAR, which combines VR and panorama imaging technologies to immerse designers and users in the situation, allowing them to define and evaluate interaction designs in context. Simulations of MR devices and different visualization styles allow for a functional evaluation, while the immersion supports an emotional evaluation at the same time. The integration of MQTT makes it easy to connect to IoT prototyping scenarios and integrates ExProtoVAR into a larger infrastructure of prototyping tools. Meeting several of the requirements for a prototyping tool along the double diamond model, ExProtoVAR can be used in the discover phase to immerse into the environment and discuss augmentation ideas. It can also provide an interactive prototype in the define phase, which is refined, tested and connected to IoT devices during the develop phase. In the deliver phase, the prototype developed with ExProtoVAR can still be of benefit, offering notes and insights gained during the prototyping process as well as the possibility to recall intermediate states of the prototyping process and the final result.
Funding statement: This work has been partly supported by the Federal Ministry of Education and Research (BMBF) of Germany in the project “ProFI – Prototyping for Innovation” (FKZ: 01IS16015), http://prototyping4innovation.de/.
About the authors
Thies Pfeiffer studied Informatics at Bielefeld University and obtained his doctoral degree on human-machine interaction in 2010. In 2006, he founded Mediablix, a usability consultancy, and in 2017 Raumtänzer, a specialist for mixed reality solutions. He is currently researcher in the Central Lab Facilities at CITEC, the Cluster of Excellence Cognitive Interaction Technology at Bielefeld University, and technical director of the Virtual Reality Lab and the Immersive Media Lab.
Nadine Pfeiffer-Leßmann studied Informatics at Bielefeld University and received her doctoral degree in 2011. She worked in different research projects concerned with situated communication, artificial intelligence and cognition, as well as software engineering for smart kitchens. Since 2017, she has been working at Neuland-Medien as a researcher and senior architect with a focus on mixed reality, prototyping and IoT.
Günter Alce, Klas Hermodsson, Mattias Wallergård, Lars Thern, and Tarik Hadzovic. A prototyping method to simulate wearable augmented reality interaction in a virtual environment—a pilot study. January 2015. doi:10.11159/vwhci.2015.003
Andrew Banks and Rahul Gupta. MQTT version 3.1.1. OASIS Standard, 29 October 2014.
Matthias Berning, Jin Nakazawa, Takuro Yonezawa, Michael Beigl, Till Riedel, and Hide Tokuda. PARnorama: 360 degree interactive video for augmented reality prototyping. In UbiComp 2013 Adjunct—Adjunct Publication of the 2013 ACM Conference on Ubiquitous Computing, pages 1471–1474, 2013. doi:10.1145/2494091.2499570
Jacob Buur and Astrid Soendergaard. Video Card Game: An Augmented Environment for User Centred Design Discussions. In Proceedings of DARE 2000 on Designing Augmented Reality Environments, DARE ’00, pages 63–69, New York, NY, USA, 2000. ACM. doi:10.1145/354666.354673
Sebastian Büttner, Henrik Mucha, Markus Funk, Thomas Kosch, Mario Aehnelt, Sebastian Robert, and Carsten Röcker. The Design Space of Augmented and Virtual Reality Applications for Assistive Environments in Manufacturing: A Visual Approach. Pages 433–440, 2017. ACM Press. doi:10.1145/3056540.3076193
Marco de Sá and Elizabeth Churchill. Mobile Augmented Reality: Exploring Design and Prototyping Techniques. In Proceedings of the 14th International Conference on Human-computer Interaction with Mobile Devices and Services, MobileHCI ’12, pages 221–230, New York, NY, USA, 2012. ACM. doi:10.1145/2371574.2371608
Design Council. A study of the design process—The Double Diamond, 2005.
Panos E. Kourouthanassis, Costas Boletsis, and George Lekakos. Demystifying the design of mobile augmented reality applications. Multimedia Tools and Applications, 74(3):1045–1066, February 2015. doi:10.1007/s11042-013-1710-7
C. Lee, S. Bonebrake, T. Hollerer, and D. A. Bowman. A Replication Study Testing the Validity of AR Simulation in VR for Controlled Experiments. In 2009 8th IEEE International Symposium on Mixed and Augmented Reality, pages 203–204, October 2009. doi:10.1109/ISMAR.2009.5336464
Cha Lee, Scott Bonebrake, Doug A. Bowman, and Tobias Höllerer. The role of latency in the validity of AR simulation. In 2010 IEEE Virtual Reality Conference (VR), pages 11–18, March 2010. doi:10.1109/VR.2010.5444820
Roger A. Light. Mosquitto. https://mosquitto.org/, accessed 2018-05-01, 2009.
Blair MacIntyre, Maribeth Gandy, Steven Dow, and Jay David Bolter. DART: A Toolkit for Rapid Design Exploration of Augmented Reality Experiences. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology, UIST ’04, pages 197–206, New York, NY, USA, 2004. ACM. doi:10.1145/1029632.1029669
Nick O’Leary and Dave Conway-Jones. Node-RED, v0.18.4. https://nodered.org/, accessed 2018-05-01, 2013.
ProFI Project Consortium. BMBF project Prototyping for Innovation. http://www.prototyping4innovation.de, 2018.
Eric Ragan, Curtis Wilkes, Doug A. Bowman, and Tobias Höllerer. Simulation of Augmented Reality Systems in Purely Virtual Environments. In 2009 IEEE Virtual Reality Conference, pages 287–288, March 2009. doi:10.1109/VR.2009.4811058
Patrick Renner and Thies Pfeiffer. Attention guiding techniques using peripheral vision and eye tracking for feedback in augmented-reality-based assistance systems. In 2017 IEEE Symposium on 3D User Interfaces (3DUI), pages 186–194, March 2017. doi:10.1109/3DUI.2017.7893338
Yan Shen, Sohkhim Ong, and Andrew Y. C. Nee. Augmented reality for collaborative product design and development. Design Studies, 31(2):118–145, March 2010. doi:10.1016/j.destud.2009.11.001
Erik Steindecker, Ralph Stelzer, and Bernhard Saske. Requirements for Virtualization of AR Displays within VR Environments. In Randall Shumaker and Stephanie Lackey, editors, Virtual, Augmented and Mixed Reality. Designing and Developing Virtual and Augmented Environments, number 8525 in Lecture Notes in Computer Science, pages 105–116, June 2014. Springer International Publishing. doi:10.1007/978-3-319-07458-0_11
Abou Moussa Wafaa, Nelly De Bonnefoy, Emmanuel Dubois, Patrice Torguet, and Jean-Pierre Jessel. Virtual Reality Simulation for Prototyping Augmented Reality. In 2008 International Symposium on Ubiquitous Virtual Reality, pages 55–58, July 2008.
© 2018 Walter de Gruyter GmbH, Berlin/Boston