Objective. VR is evolving into an everyday technology. Across its diverse application areas, it is essential to understand the user’s condition to ensure a safe, pleasant, and meaningful VR experience. However, VR experience evaluation is still in its infancy. The present paper addresses this research desideratum by consolidating diverse expertise and insights about experience evaluation in general, and VR experiences in particular, into a systematic evaluation framework (appRaiseVR).
Method. To capture diverse expertise, we conducted two focus groups (bottom-up approach) with experts working in different fields of experience evaluation (e.g., movie experiences, theatre experiences). First, we clustered the results of both focus groups. Then, we combined those results with insights about experience evaluation from the field of user experience into the final framework (top-down approach).
Results. The framework includes five steps providing high-level guidance through the VR evaluation process. The first three steps support the definition of the experience and evaluation conditions (setting, level, plausibility). The last two steps guide the selection of an appropriate measurement time course and measurement tools.
Conclusion. appRaiseVR offers high-level guidance for evaluators with different expertise and in different contexts. Ultimately, establishing similar evaluation procedures might contribute to safe, pleasant, and meaningful VR experiences.
In this paper we report results from a web- and video-based study on the perception of a request for help from a robot head. Colored lights, eye expressions, and the politeness of the language used were varied. We measured effects on expression identification, hedonic user experience, perceived politeness, and help intention. Additionally, sociodemographic data, a ‘face blindness’ questionnaire, and negative attitudes towards robots were collected to control for possible influences on the dependent variables. A total of n = 139 participants were included in the analysis. In this paper, the focus is placed on interaction effects and on the influence of covariates. Significant effects on help intention were found for the interaction of LED lighting and eye expressions and for the interaction of language and eye expressions. Expression identification was significantly influenced by the interaction of LED lighting and eye expressions. Several significant effects of the covariates were found, both direct and in interaction with the independent variables. In particular, negative attitudes towards robots significantly influenced help intention and perceived politeness. The results provide information on the effects of different design choices for help-requesting robots.
The selection and manipulation of objects in Virtual Reality confront application developers with a substantial challenge, as they need to ensure seamless interaction in three-dimensional space. Assessing the advantages and disadvantages of selection and manipulation techniques in specific scenarios, and with regard to usability and user experience, is a mandatory task in finding suitable forms of interaction. In this article, we examine the most common issues arising in the interaction with objects in VR. We present a taxonomy that allows the classification of techniques along multiple dimensions, and we associate the identified issues with these dimensions. Furthermore, we analyze the results of a study comparing multiple selection techniques and present a tool that allows developers of VR applications to search for appropriate selection and manipulation techniques and to receive scenario-dependent suggestions based on the data of the conducted study.
Due to progress in affective computing, various forms of general-purpose sentiment/emotion recognition software have become available. However, such tools are rarely employed in usability engineering (UE) to measure the emotional state of participants. We investigate whether the application of sentiment/emotion recognition software is beneficial for gathering objective and intuitive data that can predict usability similarly to traditional usability metrics. We present the results of a UE project examining this question for the three modalities text, speech, and face. We performed a large-scale usability test (N = 125) with a counterbalanced within-subject design using two websites of varying usability. We identified a weak but significant correlation between text-based sentiment analysis of the text acquired via thinking aloud and SUS scores, as well as a weak positive correlation between the proportion of neutrality in users’ voices and SUS scores. However, for the majority of the output of the emotion recognition software, we could not find any significant results. Emotion metrics could not be used to successfully differentiate between the two websites of varying usability. Regression models, whether unimodal or multimodal, could not predict usability metrics. We discuss reasons for these results and how to continue this research with more sophisticated methods.
Due to constantly and rapidly growing digitization, the requirements for international cooperation are changing. Tools for collaborative work, such as video telephony, are already an integral part of today’s cross-company communication. However, these tools are not sufficient to represent the full physical presence of an employee, or of a product and its components, at another location, since the two-dimensional representation of information and the resulting limited communication lose the concrete reference to the object. Thus, we present a novel object-centered approach that comprises Augmented and Virtual Reality technology as well as design suggestions for remote collaboration. Furthermore, we identify current key areas for future research and specify a design space for the use of Augmented and Virtual Reality remote collaboration in the manufacturing process of the automotive industry.