Abstract
While traditional videoconferencing raises privacy issues, virtual reality (VR) meetings are not yet widely used: their communication quality still lacks usability, and important non-verbal communication cues, such as body language, are underrepresented. We explore virtual avatars’ body language and how it can be used to indicate meeting attendees’ communication status. By comparing users’ perceptions of avatar behavior, we found that avatar body language, regardless of avatar gender, can indicate communication willingness. We derive body language design recommendations: use attentively behaving avatars as the default body language, and indicate being busy through actions of the avatar, such as drinking, typing, or talking on a phone. These actions signal that users are temporarily busy with another task but are still attending the meeting. When users are unavailable, their avatars should not be displayed at all, and in cases of longer meeting interruptions, the avatar of a user should leave the virtual meeting room.
1 Introduction
Nowadays, virtual meetings are an essential tool for collaborative work across the entire globe and for those working from home. Compared to the more established video conferencing, virtual reality (VR) meetings do not use webcams to share video of the attendees but instead represent attendees as avatars in VR. The intensive usage of video conferencing software during the COVID-19 pandemic showed that such systems can cause privacy issues, while VR meetings lead to an increase in involvement, spatial presence, and experienced realism compared to video conference systems [47]. Besides these obvious advantages, we observe that attendees often multi-task during virtual meetings,[1] which makes it harder to moderate meetings and to ensure the productivity of the conversation.

Figure 1: The avatar on the right indicates its communication status through body language. Left: attentive, indicated through a facing posture. Right: busy, indicated through typing on a laptop.
Various kinds of information can be expressed through body language, such as emotion [14], how carefully a person is listening to a conversation, and how willing they are to communicate [5]. Thus, in physical meetings we easily understand through body language whether a person is listening or doing something in parallel [46]. Avatars, however, do not let us read the conversation involvement of meeting attendees very well, as nonverbal signals conveyed through facial expressions, gestures, and body posture are missing.
Inspired by status indicators used in video conferencing systems, such as Skype, to inform about a person’s availability to be contacted, we propose to provide VR meeting attendees with information about other attendees’ conversation status, attention, and engagement. Referring to our ability to “read” body language, we propose using avatars’ body language to show whether they are highly involved in the conversation, multi-tasking, or not listening for some time.
We expect that avatars acting in a way that represents the cognitive involvement of users in VR meetings can increase empathy between virtual meeting attendees, which – as a consequence – would ease work with remote collaborators and ensure communication productivity.
We extend previous research that investigated emotion estimation through the pose of avatars [10], [28], [30], the effects of avatar body language in virtual meetings [50], and possibilities of influencing a conversation through body language [44] by proposing avatars’ body language as non-verbal information about conversation readiness and involvement in VR meetings.
While standing up and talking on the phone to somebody else would be impolite in a physical meeting, we could possibly use such behaviors in a virtual meeting to indicate a parallel task. As no social rules for such behavioral status cues exist, we aim to explore whether and how a set of avatar behaviors would be interpreted in VR meetings. Hence, we created a simulation of a VR meeting with avatars showing typical as well as untypical behavior, such as carefully listening, checking their phone, and having a nap.
In a user study, we found that the body language of an avatar can indeed represent the communication status of its corresponding user, and we provide design recommendations about which body language and behaviors are appropriate in VR meetings and which are not. With this work, we contribute to the field of VR collaboration as well as avatar research. We hope to inspire future work, as this paper serves as a proof-of-concept, and more work is needed to develop better VR meeting systems.
This is an extended version of an earlier published paper [31].
2 Related Work
Two research areas are particularly related to this work: (1) visualizations of the communication status in virtual meeting systems and (2) body posture and emotion expression of avatars.
2.1 Communication Status
A large body of research on virtual meetings explores why it is important to know about the status of the other participants during a meeting [38], [48], as well as advantages and disadvantages of virtual meetings, like the visibility of the audience [37]. Other works show how the virtual communication status can be received or symbolized [20], [25], [34], [42].
McCarthy et al. showed that it is essential for every virtual conference attendee to be informed about the status of the conference and about the status of every other attendee [38]. Otherwise, synchronization problems and communication problems can occur. Stimmel patented such a workplace intercommunication tool that could communicate remote co-workers’ status through different devices [48]. The system enables users to determine the status of others by showing text blocks in an extra window when choosing a user.
Leacock et al. explored how to have conversations in multiple virtual environments at the same time and proposed to display a user’s attendance status through a representation in the form of a 2D or 3D avatar in the room where the user is present [34]. Unlike this work, the avatars gave no differentiated communication status cues but only showed which users were present in which room.
Cobb et al. investigated online status indicators (OSIs) using apps that were commonly known by the participants. The OSIs were graphical symbols like dots or written text. They found that users often do not correctly understand OSIs and that OSIs lack desired privacy preferences [15]. Nardi et al., investigating early Instant Messaging (IM) tools, found that knowledge about the availability of co-workers is important for users. The availability was shown through the co-workers’ names in a list [39]. Greenberg used bold graphics of people working at a computer to symbolize that the user was at their place. When the graphic faded out, the user was inactive, and no graphic indicated that the user was logged out [22].
In contrast to graphical symbols representing the remote user, DeGuzman et al. used a physical artifact to symbolize users’ availability as well as their activity in an IM [17]. A plastic ball changed its size depending on the online status of a user, mapping a large size to online and a small size to offline. Moreover, the ball contracted when the user was writing.
Further research used automatic detection of the users’ availability status [8], [24], [26]. Begole et al., for example, detected a person’s activity through sensors and used graphical symbols displayed in front of users’ names to communicate their availability status [8]. A yellow diamond indicated possible unavailability and a red triangle probable unavailability.
The readiness to communicate or the level of communication engagement can also be conveyed through non-verbal cues, such as gestures and gaze. Schneider and Pea, for example, showed that mutual gaze perception improves the quality of collaborative learning [43]. Collaborative learning groups of two members used an eye tracking system that showed each participant’s gaze to the other user as small dots on the screen. The collaboration and learning gain were higher than for groups without gaze perception. Bai et al. found that remote collaboration can be enhanced by combining gaze and gesture cues. In their experiment, the remote expert and the local worker shared the same view, which was augmented for the local worker with the gaze and gestures of the remote expert [4]. This led to a higher feeling of co-presence and allowed the local worker to see exactly what the remote expert was referring to.
2.2 Body Posture and Emotion Expression
Body language in a narrow sense means how we communicate just by moving our body, but it can also include hand gestures, facial expressions, or how we use our voice [18]. Body postures and emotion expression are important nonverbal cues and can indicate how much somebody is engaged in a conversation or willing to talk [40].
Body language has often been researched for collaborative virtual environments [9], [50]. Benford et al. defined an initial list of parameters for collaborative virtual environments (e. g., availability, gesture, facial expression, etc.) [9]. Especially for conversations, they pointed out facial expressions and gestures as important, being a strong external representation of emotions. Tromp and Snowdon defined detailed facial and gestural expression as well as natural movement and activity as requirements for good virtual body language [50].
Jian-xia et al. showed that the body language of an instructor in video lectures has a significant impact on the learning effect compared to an instructor with no body language [27]. Also, body expressions and body postures are used to perceive and recognize the affective status of a human-like avatar [10], [28], [29], [30]. Negative and positive affect are dimensions used to describe emotional states of mind [53].
Berthouze et al. showed that the recognition of emotions is a requirement for rich social interaction and that body language is important in affective communication [10]. Kleinsmith and Berthouze developed a system that enables affective posture detection using sequences of static postures from motion capture data [29]. The system recognized the affective states nearly as well as observers who were presented the affective expressions as animations, which included both form and temporal information. This demonstrates that even a single pose can convey a good affective expression or user status.
Kleinsmith and Berthouze also showed that body expressions are important in nonverbal communication and in perceiving somebody’s affective status [30]. Bosch et al. researched automatic detection of affective states because they assume that such detection is a key component of intelligent educational learning interfaces [11]. They wanted the system to detect the states of students in a normal class situation from their facial expressions. They found that automatic detection is at least possible for some affective states (e. g., boredom, confusion, delight, engagement, and frustration).
A large body of research addresses the relation between emotion and body postures [1], [6], [16], [45]. André et al. showed the importance of affect for life-like characters, as affect can enhance the credibility of a virtual character and create a more natural conversation manner [1]. In addition, affect can be used to change or modify a character’s behavior and create emotions in social interactions. Beck et al. confirmed body language as a good medium for robots to display emotions [6]. In a study, they found that emotional body language shown by an agent or a human is interpreted quite similarly. Using 176 computer-generated mannequin figures, Coulson showed that some emotions were recognized with high concordance, while others, such as disgust, were hardly recognized [16]. In addition, anatomical variables and viewing angle influenced participants’ responses. Si and McDaniel found that body language plays a more important role than facial expressions when representing the attitude of non-humanoid robots [45].
2.3 Summary
Previous work on the communication status in virtual systems shows that visualizing whether a user is available and actively involved enriches virtual communication. Previous work on body language and body posture recognition shows that emotions can be understood from images as well as animations. While emotion (recognizable through body language) is an indicator of how willing a person is to communicate and how carefully they are following a conversation, previous work did not combine body language recognition and communication status visualization. Thus, this work aims at exploring the usage of avatars’ body language to increase the mutual understanding of participants in VR meetings by displaying their communication status and willingness.
3 Concept
The concept of status visualization in computer-supported communication is widely used, such as availability icons in Skype, tick marks when messages have been sent or received in WhatsApp, and green dots when friends are online in Facebook. In inter-personal communication, we are moreover able to read body language, which gives us additional information about the ongoing conversation. For example, turning the body and head towards a talking person and looking at their mouth is usually interpreted as carefully listening [2], and briefly inhaling is a hint that somebody is about to say something.
Since current digital communication lacks such additional communication hints, we propose to use avatars’ body language as side information in virtual meetings. Here, we want to clarify that the avatar’s body language is NOT meant to represent the body of the communication partner. Instead, the avatar’s body is meant to provide additional information about the conversation status of a user. To give one example: we are attending a virtual meeting and get an important phone call which we have to answer and which requires all of our attention for some minutes. How could that temporary unavailability be visualized? While we answer the phone in the real world (and of course mute our microphone in the virtual meeting), our avatar could fall asleep and wake up after we end our phone call, as sketched in Figure 2. While sleeping in a real meeting is socially not accepted, a sleeping avatar indicating temporary absence would probably be accepted.

Figure 2: The sleeping avatar in the virtual meeting, highlighted by the red circle, represents the temporary absence of the meeting attendee.

Figure 3: Behaviors designed to explore the perception of avatars’ body language in virtual meetings.
To better understand which avatar behaviors, and the corresponding body language, are useful to represent virtual meeting attendees, we designed a set of eleven behaviors, see Figure 3. These range from “usual” meeting behaviors, like carefully listening, to behaviors that could indicate temporary absence, like sleeping, but are not “normal” during a physical meeting.
4 Experiment
To better understand how an avatar’s body language can represent a communication status in VR meetings, we formulated the following research question:
What communication state does an avatar’s behavior or body language represent during VR meetings?
4.1 Experiment Design
We chose to vary not only the avatar’s behavior during the meeting situation but also the avatar’s gender, to understand whether the perception of a behavior changes with the observed gender. For behavior, we chose a set of behaviors that range from those that are normally expected from a meeting attendee to those that are definitely not common. Thus, we hope to cover a wide range of behaviors and to learn which behaviors can serve as indicators of meeting attendees’ communication status.
We designed a controlled experiment with an 11×2 within-subjects design and the independent variables behavior (attentive, crossingArms, drinking, facing, wavingFingers, napping, nervous, talking on the phone, relaxing, sleeping, typing) and gender (female and male).
The dependent variables were animation understanding, social realism (SRe), social richness (SRi), self-assessed mood (PANAS_self), mood of avatar (PANAS), willingness to communicate (WTC), and reasons to communicate.
4.2 Measurements
To ensure that the behaviors we designed were correctly understood by our participants, we asked:
What is the person we have highlighted doing?
It is commonly known that a positive attitude of people is perceived as an invitation to talk to them, while we tend to avoid talking to rude or unfriendly people [19], [51]. Hence, we measured the perceived mood of avatars using the German version [12] of the PANAS questionnaire [49], [52].
The mood of our conversation partner also affects our own mood [19], [51], which can invite us to communicate when we are in a positive mood or lower our willingness to communicate when we are in a worse mood. Therefore, we also measured how the shown behavior of avatars affects our self-assessed mood, again using the PANAS questionnaire.
We did not find established questionnaires measuring willingness to communicate. Hence, we designed corresponding questions to complete our measures, which were answered on 7-point Likert scales:
How much do you think the person represented by this avatar follows the conversation?
In your opinion, to what extent is the person represented by this avatar willing to actively participate in the conversation, e. g. by commenting or asking questions?
How much do you think the situation allows talking to the person represented by this avatar? This willingness to communicate corresponds to the green icon known from Skype, see Figure 4, left.
How much do you think the situation only allows you to talk to the person the avatar represents in urgent matters since that person is busy? This willingness to communicate corresponds to the yellow icon known from Skype, see Figure 4, center.
How much do you think the situation does not allow you to address the person represented by this avatar? This willingness to communicate corresponds to the red icon known from Skype, see Figure 4, right.

Figure 4: Skype symbols shown beside the respective 7-point Likert scale willingness-to-communicate questions.
Why would you talk to the marked person in a VR meeting?
Why would you not talk to the marked person in a VR meeting?
4.3 Participants
We recruited 28 participants (9 female, 17 male, 2 with another gender) with an age range from 19 to 45 years and an average age of 25 years (SD = 5.4). The participants were recruited via e-mail lists at two universities.
4.4 Apparatus

Figure 5: Virtual meeting scene in which the avatar on the front right was always animated with the varying behaviors. Top: female avatar facing the user. Bottom: male avatar facing the user.
For our apparatus, we built a virtual meeting scene using Unity3D version 2019.2.6f1, which was rendered as 22 videos (one per behavior for each of the two avatar genders). In the videos, the simulated avatar (always the avatar on the front right side, see Figure 5) varied in behavior (see Figure 3) and gender according to our independent variables, while the other avatars had a neutral idle animation. The characters and animations were taken from Mixamo[2] and modified in Unity3D. To draw the attention of participants to the avatar that acted according to our independent variables, an arrow and a text marked that avatar. All videos had the same length and were uploaded to LimeSurvey[3] to be made available to participants via their browsers. The questionnaire results were saved online as a CSV file for each participant per behavior and gender of the avatar.
4.5 Task
The participants were asked to watch the different videos and to imagine sitting with the avatars, which represent attendees in an online meeting. During each video, their attention was drawn to the avatar sitting to their right using an arrow sign.
4.6 Procedure
The experiment was conducted online on the participants’ own personal computers. On the first page of the online survey, the participants were introduced to the study purpose and asked to agree to a consent form. After filling in a demographic questionnaire, the eleven behavior animations for both genders were shown to the participants in randomized order, resulting in twenty-two conditions per participant. The videos could be watched as often as desired, and no time limit was given. After each video, participants filled in the questionnaires and answered the semi-structured questions, which were presented on separate pages. At the top of each page, the video could be watched again.
5 Results
We first analyzed our quantitative data and then analyzed the qualitative data in light of the quantitative results to explain and better understand them.
5.1 Quantitative Results
For analyzing the quantitative results, we used a Friedman test to identify significant differences for the within-subject variable behavior for the dependent variables (willingness to communicate, social richness, social realism, mood of avatar, self-assessed mood). Post-hoc analyses with Wilcoxon signed-rank tests were conducted with a Bonferroni correction applied to the p-value, resulting in a significance level of 0.0045. Additionally, a Wilcoxon signed-rank test was used to check whether gender affects the dependent variables significantly.
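To make this analysis pipeline concrete, the following sketch (Python with pandas and SciPy) shows how the omnibus Friedman test and the Bonferroni-corrected Wilcoxon post-hoc comparisons could be run for one dependent variable. The file name, column layout, and variable names are illustrative assumptions, not the study’s actual data files.

```python
from itertools import combinations

import pandas as pd
from scipy.stats import friedmanchisquare, wilcoxon

BEHAVIORS = ["attentive", "crossingArms", "drinking", "facing", "wavingFingers",
             "napping", "nervous", "talking", "relaxing", "sleeping", "typing"]

# Hypothetical file: one row per participant, one column per behavior,
# holding e.g. the social richness rating given for that condition.
ratings = pd.read_csv("social_richness_ratings.csv")

# Omnibus Friedman test across the eleven within-subject behavior conditions.
chi2, p = friedmanchisquare(*[ratings[b] for b in BEHAVIORS])
print(f"Friedman: chi2 = {chi2:.2f}, p = {p:.4f}")

# Post-hoc pairwise Wilcoxon signed-rank tests, compared against the
# Bonferroni-corrected significance level of 0.05 / 11 = 0.0045.
ALPHA = 0.05 / len(BEHAVIORS)
for a, b in combinations(BEHAVIORS, 2):
    stat, p_pair = wilcoxon(ratings[a], ratings[b])
    flag = "significant" if p_pair < ALPHA else "n.s."
    print(f"{a} vs. {b}: W = {stat:.1f}, p = {p_pair:.4f} ({flag})")
```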
5.1.1 Social Richness
A Friedman test indicated significant differences in social richness across the behaviors; the pairwise post-hoc comparisons are reported in the Appendix. Gender showed no significant effect on social richness.
5.1.2 Social Realism
A Friedman test indicated a significant difference in social realism between the differently behaving avatars; the pairwise post-hoc comparisons are reported in the Appendix.
5.1.3 Perceived Mood of Avatars
A Friedman test indicated significant differences on the positive affect scale for the perceived mood of the avatar caused by the different behaviors; the pairwise post-hoc comparisons for the positive and negative affect scales are reported in the Appendix.
5.1.4 Self-Assessed Mood
A Friedman test indicated significant differences in participants’ positive affect caused by the avatars’ behaviors; the pairwise post-hoc comparisons are reported in the Appendix.
5.1.5 Summary
While the results do not show any effect of gender on social richness, social realism, perceived mood of avatars, or self-assessed mood, behavior significantly affected each of our dependent variables. For social richness, sleeping was rated significantly lower than any other behavior, and napping was rated lower than wavingFingers and nervous. Regarding social realism, sleeping again was rated lowest, in particular lower than the behaviors attentive and facing, but interestingly also lower than typing. Furthermore, talking and drinking had comparably low realism ratings. Regarding the effects of the avatar’s behavior on its perceived mood, we found that a facing avatar was perceived to be in a much better mood than those with crossingArms or wavingFingers. On the other hand, sleeping had a negative effect on the perception of the avatar’s mood, which was rated significantly lower compared to avatars with crossingArms and those that were drinking, wavingFingers, nervous, or even napping. When looking at how the user’s own mood is affected by the avatar’s behavior, we found that attentive and facing led to a significantly higher self-assessed mood than sleeping or napping. Moreover, sleeping and napping lowered the self-assessed mood significantly compared to avatars with crossingArms or those that were relaxing, drinking, nervous, or typing.
5.2 Qualitative Results
How our animated behaviors are understood affects their perception. Hence, we explored whether the animations representing a behavior were correctly understood using open coding, see Table 1, second column. We analyzed the qualitative comments on reasons to speak or to avoid speaking to the avatar using closed coding and coded the data according to the behaviors to better understand our quantitative analyses. Finally, we applied descriptive statistics to show how well the commonly known Skype symbols fit the willingness to communicate with avatars showing a certain behavior, see Table 1, right column.
Table 1: How each behavior was understood, as well as the most frequently given reasons to start or to avoid speaking to the person represented by the avatar with a certain behavior. In the brackets, the first number shows how often that particular answer was given and the second number is the total amount of answers given for that question. In total, 56 answers were possible, 28 per avatar gender, but not every participant answered every question. The third and fourth columns list the most frequently given reasons to talk or not to talk to a person represented by an avatar with a certain behavior. The last column shows the Skype symbols whose fit with the behavior had a median above 4.0, the neutral rating on the 7-point Likert scale. See Appendix, Table A7–A9 for complete data on the Skype icon fit.
| Behavior | Understanding | Reasons To Speak To The Person | Reasons To Not Speak To The Person | Equivalent Skype Symbol |
| --- | --- | --- | --- | --- |
| attentive | listen attentively (43/43) | attentive/interested body posture (32/43) & if I need help/have a question (11/43) | no reason (43/43) | green (available) |
| crossingArms | sitting with crossed arms and listening (30/43) & bored/annoyed (13/43) | person seems to be interested/is looking at me (25/43) & ask why the person is so dismissive (18/43) | dismissive body posture (27/43) & no reason (16/43) | see Appendix |
| drinking | drinking (39/39) | just when having a question (21/39) & no reason (10/39) & seems available (8/39) | person is drinking and cannot answer (22/43) & no reason (17/43) | yellow (busy) |
| facing | listen attentively (41/41) | attentive body posture/seems to be interested (29/41) & to ask something (12/41) | no reason (41/41) | green (available) |
| napping | tired/falling asleep (33/42) & sits with lowered head (9/42) | to wake up the person (24/42) & ask about the condition (18/42) | person is not interested/is bored (33/42) & person is sleeping (9/42) | red (unavailable) |
| nervous | nervous (33/42) & sitting attentive (9/42) | asking if the person is in a hurry/needs a break (23/42) & is attentive (19/42) | person seems to be nervous (23/42) & no reason (19/42) | see Appendix |
| relaxing | sitting leaned back/listening (43/43) | seems to be interested/turned towards me (32/43) & to ask for the opinion (11/43) | no reason (23/43) & bored/not interested (20/43) | see Appendix |
| sleeping | sleeping (44/44) | to wake up the person (33/44) & no reason (11/44) | because the person is sleeping (44/44) | red (unavailable) |
| talking | talking on the phone (45/45) | when having an important question (14/45) & ask the person to leave the room (21/45) & no reason (10/45) | busy/talking (39/45) & no reason (6/45) | yellow (busy) |
| typing | working on the laptop (45/45) | when having an important question (18/45) & has information/is making notes (19/45) & would not speak to the person (8/45) | busy/working (33/45) & no reason (12/45) | yellow (busy) |
| wavingFingers | listening (23/44) & waiting impatient (21/44) | looks interested (24/44) & apparently wants to share something (20/44) | nervous/impatient (20/44) & no reason (24/44) | see Appendix |
5.2.1 Attentive
The always correctly understood attentive behavior, see Table 1, motivated participants to ask the avatar for help (mentioned in 11 out of 43 comments) or encouraged them to participate in the conversation (mentioned in 32 out of 43 comments). These results explain and are in line with the highly rated self-assessed mood as well as the highly rated perceived mood of the avatar:
If I need help I would first talk to this person (P.22, attentive).
It looks like the person wants to participate in the conversation (P.3, attentive).
Sits very upright and straight. Looks in my direction. Probably wants to participate (P.5, attentive).
5.2.2 CrossingArms
Avatars with crossingArms were understood differently among the participants. While the majority (30 out of 43 comments) understood such behavior positively as listening, 13 comments described the avatar as bored, see Table 1:
The person is listening crossing the arms (P.28, crossingArms) versus
The person is bored and waiting for the end (P.14, crossingArms).
Because the focus is on me and the attention is mainly on me (P.5, crossingArms).
Because the person crosses their arms and does not look like they are open to interaction (P.10, crossingArms).
5.2.3 Drinking
The well understood drinking behavior (see Table 1) only sometimes indicated that the avatar seems to be available (8 out of 39 times); in most answers (21 out of 39), participants would only address that avatar when having a question:
The person looks available for conversation (P.10, drinking).
If a person is drinking, the person is not able to talk (P.14, drinking).
5.2.4 Facing
The always correctly understood facing avatars, see Table 1, gave (just like the attentive behavior) participants no reason not to talk to them. Besides the body posture, which also creates a pleasant climate (highlighted in 29 out of 41 comments), participants mentioned that the avatar with the facing behavior seems to be ready to receive questions or looks experienced and willing to help (confirmed in 12 out of 41 comments):
To ask for the person’s opinion because the person seems to have experience (P.19, facing).
Seems calm and ready to participate. Pleasant climate (P.8, facing).
5.2.5 Napping
Napping was mostly understood as intended (33 out of 42 times), but some participants (9 out of 42 times) thought the avatar was sitting with a lowered head/looking down, see Table 1.
Consequently, most participants would not talk to persons represented by a napping avatar, except to wake them up (24 out of 42 times) or to ask about their condition (18 out of 42 times):
To wake the person up so the person takes part in the meeting (P.8, napping).
To ask for the state of health (P.13, napping).
5.2.6 Nervous
The nervous behavior was also understood in two different ways: as nervous in 33 out of 42 answers and as attentive in 9 answers:
The person is nervous (P.3, nervous). Versus:
The person listens cheerfully (P.23 nervous).
To ask if we should pause the meeting situation for a moment (P.8, nervous).
I would not talk to the person as the person seems to be nervous and stressed (P.22, nervous).
The person has an upright posture and seems to be in the conversation already (P.16, nervous).
5.2.7 Relaxing
The correctly understood behavior relaxing, see Table 1, caused both reasons to talk and reasons to avoid talking to a person.
As reasons to talk to the person the participants mentioned that the avatar seems interested (32 out of 43 times) and that they would ask for the person’s opinion (11 out of 43 times):
The person is interested and even if not active they could have an interesting idea (P.6, relaxing).
I would ask the person for their opinion (P.13, relaxing).
The body posture looks rather bored and like taking it not 100 % seriously (P.2, relaxing).
5.2.8 Sleeping
Sleeping was clearly understood, see Table 1, and participants would not talk to persons represented by these avatars, except to wake the person up (mentioned in 33 out of 44 comments) or to ask that user to leave the meeting room:
I would wake the person up (P.16, sleeping).
I would ask the person to leave the room because it is very impolite to sleep during a meeting (P.3, sleeping).
5.2.9 Talking
While the action of the talking on the phone behavior was clearly understood as such, see Table 1, that behavior was sometimes perceived as impolite (21 out of 45 times) but also interpreted as busyness (10 out of 45 times):
The person is not participating in the meeting at all (P.5, talking). Versus:
The person seems to have an important call (P.8, talking).
The person seems busy and stressed (P.10, talking).
I would talk to the person when having an important question (P.7, talking).
5.2.10 Typing
Typing, understood as intended, see Table 1, was interpreted as being busy (33 out of 45 times), but sometimes the typing avatar was also perceived as someone making notes about the meeting:
The person is busy with private stuff (P.16, typing). Versus:
The person is documenting the points that are important for me (P13, typing).
To ask about the status of the research (P.23, typing).
5.2.11 WavingFingers
Avatars with wavingFingers were also understood in two different ways. While in 23 out of 44 answers participants thought that the avatar was listening, in 21 out of 44 answers the avatar was perceived as impatient:
The person follows intently the talk (P.2, wavingFingers) versus
The person seems annoyed and restless (P.19, wavingFingers).
The person is looking at me, attention is on me, open body posture (P.14, wavingFingers).
The person seems to be stressed and nervous. I would not talk to the person (P.19, wavingFingers).
5.2.12 Summary
Our qualitative results show that the behaviors attentive and facing clearly indicated that a person was attentively participating in a meeting, and participants would feel invited to talk to them. We identified three busyness-indicating behaviors, which would only encourage talking to these users in urgent cases or after the behavior is set back to an availability-indicating one. Drinking, for example, would prevent meeting attendees from talking to a person at that moment, at least if there is nothing important that has to be said. The behaviors talking on the phone and typing were interpreted as actions that should not be interrupted. Hence, talking and typing avatars would also prevent people from talking to that person if there is no urgent reason. The behaviors crossingArms, wavingFingers, nervous, and relaxing caused confusion in how they were understood and in the perceived willingness to communicate and hence did not serve as a clear motivation to talk to a user represented by such an avatar. Napping and sleeping avatars would clearly prevent others from communicating with a meeting attendee and would rather be perceived as impolite.
6 Discussion of the Study Results
While most behaviors were clearly understood, some were not. In the following, we group the behaviors depending on which availability they understandably represent, adding one group of behaviors that are not clearly understandable.
Our quantitative as well as qualitative results show that, similar to real-world body language [5], [46], a behavior can give hints on how carefully a person is listening in a conversation and how willing that person is to communicate. The quantitative results indicated significant effects of an animated avatar’s behavior in a meeting on perceived social richness, social realism, perceived mood of avatars, and self-assessed mood across gender. The qualitative feedback helped to better understand how an avatar with a certain behavior would be perceived. Accordingly, we discuss design recommendations for each of our behavior categories and finally point out limitations of our study.
Table 2: All eleven behaviors categorized by their representation of availability.
| Available | Busy | Unavailable | Uncertain |
| --- | --- | --- | --- |
| attentive | drinking | napping | crossing arms |
| facing | talking | sleeping | nervous |
| | typing | | relaxing |
| | | | waving fingers |
6.1 Behavior Categories
Asking how the behaviors were understood as well as why users would talk to a person represented by an avatar with a certain behavior helped to categorize the behaviors into different groups, see Table 2. The behaviors attentive and facing were understood exactly as intended, see Table 1, second column. Moreover, the reasons to speak to the person, see Table 1, third and fourth column, and the fit with the availability Skype icon, see Table 1, right column, make these behaviors a good representation for a category named available. Drinking, talking on the phone, and typing were also completely understood as intended. For these three behaviors, the participants would avoid talking to the person because they are busy and consequently not able to answer without interrupting their task. However, for all three behaviors, participants would talk to the person if it were urgent. Consequently, these three behaviors can be described with the label busy, which is in line with the corresponding yellow Skype icon. While sleeping was always understood, napping was understood by three quarters of participants as intended and by one quarter as “sitting with lowered head”. Sleeping and napping make it impossible to communicate, and the absence of eye contact is a sign of low communication engagement. Consequently, participants would mainly avoid talking to persons represented by an avatar behaving in this way, which is in line with the high fit with the red Skype icon. Therefore, we categorize these two behaviors as unavailable. All other behaviors were neither clearly understood nor taken as clear hints to either talk or avoid talking to a person. Therefore, these behaviors do not allow concluding a communication recommendation, and we labeled them uncertain.
6.2 Design Recommendations
In the following, we translate our results into design recommendations for displaying the communication status of a virtual meeting attendee through the behavior of their avatar.
6.2.1 Availability Indicating Avatars
Availability-representing avatars look attentive by facing the other meeting attendees. They do not do things in parallel, such as drinking water, typing on a laptop, or talking on a phone. We recommend using attentively behaving avatars in a virtual meeting to create a more realistic meeting situation (shown in our results for social realism). Such avatars can increase the communication quality and outcome as (1) the user represented by an attentive avatar will be perceived to be in a better mood (compared to behaviors of all other categories) and (2) an attentive avatar positively affects the mood of other meeting attendees [19], [51]. Consequently, attentively behaving avatars non-verbally invite others to ask questions or to ask them for help, which supports not only communication but also collaboration.
6.2.2 Temporary Busyness Indicating Avatars
During virtual meetings, especially when working from home, attendees are often temporarily distracted, for example, by their children or when the doorbell rings. Our results show that simulated virtual activities performed by an avatar, such as drinking water, can be used to show that a meeting attendee is temporarily busy with something else. If virtual actions requiring little cognitive load are shown, e. g. drinking water, the user is still perceived as responsive and can thus be asked questions. Communication behaviors, such as talking on the phone or typing on a laptop, already require verbal cognitive load, as explained by Wickens’ multiple resource theory [54]. Hence, virtual communication actions would suggest that questions should only be asked if their answers are urgently needed and cannot wait until the user has finished their parallel task.
Behaviors indicating temporary busyness do not negatively affect the mood of other meeting attendees, but help to distinguish when meeting attendees are fully focusing on the meeting situation and when they are not.
6.2.3 Unavailability Indicating Avatars
From our results, we have learned that using avatars that indicate unavailability in a meeting is a contradiction in itself: similarly, persons who are absent from a physical meeting cannot be seen in it. Our participants understood the behaviors napping and sleeping as what they would represent in the physical world and perceived such behaviors as impolite, just as would be the case in a physical meeting. Even though we mentioned before the experiment that the virtual behaviors do not necessarily represent the behavior of the user represented by an avatar, participants did not distinguish between the user and the avatar. Instead, napping and sleeping avatars decreased the social richness during the meeting. Therefore, if a user is not participating in a virtual meeting, their avatar should not be shown in the meeting, not even as a sleeping character. Consequently, if a user pauses their meeting attendance, their avatar should leave the room.
6.2.4 Uncertainty
Some behaviors were not clearly understood and caused uncertainty regarding the communication willingness of a virtual meeting attendee. In particular, avatars showing the behaviors crossed arms, being nervous, sitting relaxed and leaned back, and wavingFingers were sometimes taken as a sign of impoliteness or boredom, which is in line with the recommendation to avoid such behavior in real-world conversations [21], [35], [44]. Interestingly, all behaviors that caused uncertainty lacked eye contact with other avatars, involved aimless body movement, or showed a closed body posture. Consequently, avatars representing meeting attendees should not move around aimlessly, should keep eye contact with conversation partners, and should not cross their arms. Otherwise, the willingness to communicate in a virtual meeting could decrease.
6.3 Limitations of Study Results
We used a vignette study [3], [41], conducted as an online survey, for our experiment. It has been shown that results from a vignette experiment match the effects of the same attributes in data collected in a real-world setting remarkably well; however, the same research suggested that subtle differences in survey design can produce significant differences in outcomes [23]. Since our mixed methods and the various quantitative and qualitative measures all indicated similar results, we feel confident about the external validity of our findings generated through a vignette study.
In our online survey, the virtual meeting was shown as a 2D environment. While conducting the same experiment in immersive 3D VR would most probably increase social presence [47], we expect that this increase would affect the measures equally across our independent variables and hence not change the results.
Moreover, the cultural background of the participants might influence their interpretation of the avatars’ body language, which might be a limiting factor of our experiment.
Finally, the situational context in our virtual meeting videos lacks precision. It remains, for example, unclear which user behavior should be represented by which avatar animation. In the following section, we therefore discuss possible ways to select a behavior for avatars and how that could be implemented.
7 Avatar Behavior Selection
Knowing that avatars’ body language can be used to indicate a communication status, and which aspects have to be considered when doing so, we discuss in the following possible ways to select an avatar’s behavior.
7.1 Manual Selection
Based on our study results, we defined three behavior categories (available, busy, and unavailable), see Table 2. Just as in existing communication applications, such as Skype, users could set their avatar’s behavior themselves from the possible behavior animations. The communication status would then, for example, show a typing avatar if the user is currently busy but able to spontaneously join the conversation if needed. As shown by our study results, unavailable users should not be represented by an avatar at all, as sleeping and napping avatars would make the represented user appear impolite.
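A minimal sketch of such a manual selection is given below, assuming the three categories from Table 2; the Status type, the animation lists, and the random pick among candidate animations are illustrative assumptions, not part of any existing meeting system.

```python
import random
from enum import Enum
from typing import Optional


class Status(Enum):
    AVAILABLE = "available"
    BUSY = "busy"
    UNAVAILABLE = "unavailable"


# Candidate behavior animations per manually selected status (following Table 2).
ANIMATIONS = {
    Status.AVAILABLE: ["attentive", "facing"],
    Status.BUSY: ["drinking", "talking", "typing"],
}


def animation_for(status: Status) -> Optional[str]:
    """Pick an animation for the manually set status.

    Returns None for unavailable users, whose avatar should leave the room
    instead of being shown as sleeping or napping."""
    if status is Status.UNAVAILABLE:
        return None
    return random.choice(ANIMATIONS[status])
```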
However, manual selection requires the users’ attention (i. e., to update their availability when it changes), which can disturb their concentration on the meeting. Thus, an automated system would be advantageous.
7.2 Automated Selection
An automated selection of the avatar behavior representing the user’s communication status is challenging, as the behavior of the avatar does not mirror the real behavior of the user, see Section 3. There are two possible ways of automating the avatar’s behavior selection, a static and a dynamically learned one, which we briefly discuss here.
In a static setup, the user’s actions trigger the selection of an avatar behavior based on predefined assumptions. The system, for example, could recognize whether the user is wearing the HMD usually used to attend the virtual meeting. That information could be used to switch between the states available and unavailable. To distinguish between available and (temporarily) busy, further activities and actions could be interpreted as not attentively attending the meeting (which would result in assigning a busy behavior to the avatar). The detection of programs used in parallel could serve as an indicator of reduced attention to the virtual meeting. Although eye movement analysis has been used for activity recognition [13], and body postures, facial expressions, or gestures of a user allow an automated system to recognize the user’s affective status [10], [28], [29], [30], such analyses can only provide vague information about the user and would most probably cause errors when setting a communication status based on physical data analyses.
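A minimal sketch of such a static, rule-based selection is shown below; the sensor signals (HMD worn, foreground application, an active phone call, keyboard activity) and their mapping to behavior animations are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class UserSignals:
    hmd_worn: bool              # is the meeting HMD currently being worn?
    meeting_app_in_focus: bool  # is the meeting client the foreground program?
    on_phone_call: bool         # an active call detected on a paired phone
    keyboard_active: bool       # typing detected outside the meeting client


def select_behavior(signals: UserSignals) -> Optional[str]:
    """Map detected signals to a behavior animation (None = hide the avatar)."""
    if not signals.hmd_worn:
        return None              # unavailable: the avatar leaves the room
    if signals.on_phone_call:
        return "talking"         # busy: only urgent questions
    if signals.keyboard_active and not signals.meeting_app_in_focus:
        return "typing"          # busy with a parallel task
    return "attentive"           # available: default body language
```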
In contrast, a learning approach added on top of a manual communication status setup could capture physical data together with the communication status set at the time the physical data was recorded. Thus, the manually set communication status and the chosen avatar behavior animation could serve as labels for the recorded physical data. Such labeled physical data could then be used to train neural networks that automatically propose an avatar behavior depending on the user’s gaze, body pose, and body motion. While such constant monitoring and data storage can improve the quality of the automated system, it can also cause privacy issues [7], [32], [33].
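A minimal sketch of this learning approach, assuming that gaze, head, and hand-motion features have already been extracted per time window and that the manually set communication status serves as the label; the feature names, the CSV file, and the small scikit-learn neural network are illustrative assumptions.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical recordings: one row per time window with extracted physical
# features and the communication status the user had set manually at that time.
data = pd.read_csv("labeled_meeting_windows.csv")
X = data[["gaze_on_speaker_ratio", "head_yaw_variance", "hand_motion_energy"]]
y = data["manual_status"]  # e.g. "available", "busy", "unavailable"

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A small feed-forward network that proposes a status (and thus a behavior
# animation) from the user's physical data.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```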
7.3 Summary
In this section, we showed different possibilities to implement the behavior setting of avatars as a communication status cue. While an automated system could ease the interaction for the user, it lacks scalability across users and raises privacy issues. Therefore, a manual selection of an appropriate animation representing one’s communication status would, to date, be the most robust way to implement our proposed approach.
8 Conclusion
Aiming at exploring the effects of virtual avatars’ body language in virtual meetings on conversation quality, we compared how two avatars, a female and a male one, are perceived when behaving differently. The results of a controlled vignette study led to the following conclusions:
The behavior and respective body language of an avatar can give hints on how carefully their representing person is listening to a conversation as well as how willing that person is to communicate.
The same body language can be used for both male and female avatars.
Attentive behaviors indicate that a person is listening and can be asked questions.
Persons whose avatar is drinking, typing, or talking on a phone are perceived to be busy, which VR meeting attendees can use to indicate being temporarily (for as long as the action takes) only available for urgent questions.
There are many avatar behaviors that are not clearly understood, such as crossing arms or sitting nervously on a chair. Such behaviors should be avoided as some users might find them impolite.
A sleeping or napping avatar clearly indicates that its user is not taking part in the conversation. As such behavior is perceived as impolite, it should not be used; absent users should rather not be represented by an avatar at all.
Funding source: Bundesministerium für Bildung und Forschung
Award Identifier / Grant number: 01JKD1701B
Funding source: Deutsche Forschungsgemeinschaft
Award Identifier / Grant number: 425869442
Funding statement: This work is funded by the German Ministry of Education and Research (BMBF) within the GEVAKUB project (01JKD1701B) as well as by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 425869442 and is part of Priority Program SPP2199 Scalable Interaction Paradigms for Pervasive Computing Environments.
Appendix
Results of the social richness subscale with a Bonferroni correction for the p-value of 0.0045. Significant results are printed in bold.
| Sample 1 (S1) | Sample 2 (S2) | Mdn (S1) | Mdn (S2) | Z | p-value |
| --- | --- | --- | --- | --- | --- |
crossingArms | attentive | 3.96 | 4.071 | 101.5 | .1657 |
crossingArms | relaxing | 3.96 | 3.821 | 96.5 | .2068 |
crossingArms | drinking | 3.96 | 3.857 | 135 | .4592 |
crossingArms | wavingFingers | 3.96 | 4.0 | 94.5 | .1126 |
crossingArms | nervous | 3.96 | 4.071 | 163 | .7508 |
crossingArms | facing | 3.96 | 4.036 | 107 | .1353 |
crossingArms | napping | 3.96 | 3.071 | 63.5 | .0077 |
crossingArms | talking | 3.96 | 4.0 | 131 | .2582 |
crossingArms | sleeping | 3.96 | 1.964 | 18.0 | .0003 |
crossingArms | typing | 3.96 | 3.536 | 116 | .2108 |
attentive | relaxing | 4.071 | 3.821 | 94 | .0653 |
attentive | drinking | 4.071 | 3.857 | 91.5 | .0560 |
attentive | wavingFingers | 4.071 | 4.0 | 123.5 | .6591 |
attentive | nervous | 4.071 | 4.071 | 141 | .3808 |
attentive | facing | 4.071 | 4.036 | 145.5 | .6472 |
attentive | napping | 4.071 | 3.071 | 53 | .0055 |
attentive | talking | 4.071 | 4.0 | 100.5 | .0567 |
attentive | sleeping | 4.071 | 1.964 | 27.5 | .0008 |
attentive | typing | 4.071 | 3.536 | 97 | .0779 |
relaxing | drinking | 3.821 | 3.857 | 144.5 | .8750 |
relaxing | wavingFingers | 3.821 | 4.0 | 95.0 | .0692 |
relaxing | nervous | 3.821 | 4.071 | 121.5 | .1048 |
relaxing | facing | 3.821 | 4.036 | 89 | .0280 |
relaxing | napping | 3.821 | 3.071 | 83 | .0187 |
relaxing | talking | 3.821 | 4.0 | 152 | .7775 |
relaxing | sleeping | 3.821 | 1.964 | 24.0 | .0002 |
relaxing | typing | 3.821 | 3.536 | 132 | .4116 |
drinking | wavingFingers | 3.857 | 4.0 | 101.5 | .0601 |
drinking | nervous | 3.857 | 4.071 | 94.5 | .1127 |
drinking | facing | 3.857 | 4.036 | 96.5 | .0757 |
drinking | napping | 3.857 | 3.071 | 60.5 | .0105 |
drinking | talking | 3.857 | 4.0 | 148.5 | .7063 |
drinking | sleeping | 3.857 | 1.964 | 44.0 | .0008 |
drinking | typing | 3.857 | 3.536 | 151.5 | .7672 |
wavingFingers | nervous | 4.0 | 4.071 | 164 | .5477 |
wavingFingers | facing | 4.0 | 4.036 | 149.5 | .7264 |
wavingFingers | napping | 4.0 | 3.071 | 53.5 | .0019 |
wavingFingers | talking | 4.0 | 4.0 | 85.5 | .0382 |
wavingFingers | sleeping | 4.0 | 1.964 | 19.0 | .0002 |
wavingFingers | typing | 4.0 | 3.536 | 82 | .0302 |
nervous | facing | 4.071 | 4.036 | 131.5 | .2637 |
nervous | napping | 4.071 | 3.071 | 48.5 | .0037 |
nervous | talking | 4.071 | 4.0 | 131 | .1634 |
nervous | sleeping | 4.071 | 1.964 | 26.0 | .0001 |
nervous | typing | 4.071 | 3.536 | 86 | .0229 |
facing | napping | 4.036 | 3.071 | 54.5 | .0063 |
facing | talking | 4.036 | 4.0 | 89 | .0280 |
facing | sleeping | 4.036 | 1.964 | 19.5 | .0004 |
facing | typing | 4.036 | 3.536 | 71 | .0239 |
napping | talking | 3.071 | 4.0 | 95.5 | .0421 |
napping | sleeping | 3.071 | 1.964 | 37.0 | .0012 |
napping | typing | 3.071 | 3.536 | 90.5 | .0526 |
talking | sleeping | 4.0 | 1.964 | 26.0 | .0002 |
talking | typing | 4.0 | 3.536 | 160.5 | .9570 |
typing | sleeping | 3.536 | 1.964 | 15.0 | .0001 |
Results of the social realism subscale with a Bonferroni correction for the p-value of 0.0045. Significant results are printed in bold.
| Sample 1 (S1) | Sample 2 (S2) | Mdn (S1) | Mdn (S2) | Z | p-value |
| --- | --- | --- | --- | --- | --- |
crossingArms | attentive | 5.67 | 5.917 | 47 | .1617 |
crossingArms | relaxing | 5.67 | 5.5 | 94.5 | .9839 |
crossingArms | drinking | 5.67 | 4.333 | 54.5 | .0110 |
crossingArms | wavingFingers | 5.67 | 5.167 | 103 | .9404 |
crossingArms | nervous | 5.67 | 5.5 | 83 | .4112 |
crossingArms | facing | 5.67 | 6.083 | 49.5 | .3386 |
crossingArms | napping | 5.67 | 4.917 | 48.5 | .0348 |
crossingArms | talking | 5.67 | 4.0 | 48.5 | .0113 |
crossingArms | sleeping | 5.67 | 4.0 | 45 | .0046 |
crossingArms | typing | 5.67 | 6.0 | 100 | .8518 |
attentive | relaxing | 5.917 | 5.5 | 75 | .1584 |
attentive | drinking | 5.917 | 4.333 | 39.0 | .0026 |
attentive | wavingFingers | 5.917 | 5.167 | 86.5 | .1937 |
attentive | nervous | 5.917 | 5.5 | 110.5 | .1614 |
attentive | facing | 5.917 | 6.083 | 90 | .8404 |
attentive | napping | 5.917 | 4.917 | 50.5 | .0077 |
attentive | talking | 5.917 | 4.0 | 53.0 | .0032 |
attentive | sleeping | 5.917 | 4.0 | 42.5 | .0021 |
attentive | typing | 5.917 | 6.0 | 109 | .5692 |
relaxing | drinking | 5.5 | 4.333 | 67.5 | .0183 |
relaxing | wavingFingers | 5.5 | 5.167 | 120.5 | .8455 |
relaxing | nervous | 5.5 | 5.5 | 110 | .3942 |
relaxing | facing | 5.5 | 6.083 | 77.5 | .1862 |
relaxing | napping | 5.5 | 4.917 | 97 | .1296 |
relaxing | talking | 5.5 | 4.0 | 82.5 | .0313 |
relaxing | sleeping | 5.5 | 4.0 | 69 | .0118 |
relaxing | typing | 5.5 | 6.0 | 118.5 | .7947 |
drinking | wavingFingers | 4.333 | 5.167 | 64 | .0243 |
drinking | nervous | 4.333 | 5.5 | 98 | .1372 |
drinking | facing | 4.333 | 6.083 | 42.5 | .0063 |
drinking | napping | 4.333 | 4.917 | 114.5 | .3103 |
drinking | talking | 4.333 | 4.0 | 139.5 | .5359 |
drinking | sleeping | 4.333 | 4.0 | 135.5 | .6785 |
drinking | typing | 4.333 | 6.0 | 40.5 | .0091 |
wavingFingers | nervous | 5.167 | 5.5 | 121 | .4070 |
wavingFingers | facing | 5.167 | 6.083 | 70.5 | .1975 |
wavingFingers | napping | 5.167 | 4.917 | 70.5 | .0689 |
wavingFingers | talking | 5.167 | 4.0 | 40.0 | .0029 |
wavingFingers | sleeping | 5.167 | 4.0 | 41 | .0054 |
wavingFingers | typing | 5.167 | 6.0 | 121 | .4070 |
nervous | facing | 5.5 | 6.083 | 58.5 | .0473 |
nervous | napping | 5.5 | 4.917 | 114 | .3036 |
nervous | talking | 5.5 | 4.0 | 75.5 | .0332 |
nervous | sleeping | 5.5 | 4.0 | 90 | .0298 |
nervous | typing | 5.5 | 6.0 | 55 | .1073 |
facing | napping | 6.083 | 4.917 | 59.5 | .0295 |
facing | talking | 6.083 | 4.0 | 50.5 | .0077 |
facing | sleeping | 6.083 | 4.0 | 36.5 | .0020 |
facing | typing | 6.083 | 6.0 | 79 | .5193 |
napping | talking | 4.917 | 4.0 | 67.5 | .0553 |
napping | sleeping | 4.917 | 4.0 | 52.5 | .0161 |
napping | typing | 4.917 | 6.0 | 36 | .0057 |
talking | sleeping | 4.0 | 4.0 | 102 | .6388 |
talking | typing | 4.0 | 6.0 | 37.0 | .0012 |
sleeping | typing | 4.0 | 6.0 | 33.0 | .0008 |
Results of the positive affects for the mood of the avatar from the PANAS questionnaire with a Bonferroni corrected p-value of 0.0045. Significant results are printed in bold.
| Sample 1 (S1) | Sample 2 (S2) | Mdn (S1) | Mdn (S2) | Z | p-value |
| --- | --- | --- | --- | --- | --- |
crossingArms | attentive | 2.2 | 3.075 | 36.5 | .0004 |
crossingArms | relaxing | 2.2 | 2.375 | 140.5 | .7859 |
crossingArms | drinking | 2.2 | 2.2 | 131.5 | .8432 |
crossingArms | wavingFingers | 2.2 | 2.85 | 34.0 | .0005 |
crossingArms | nervous | 2.2 | 2.5 | 114.5 | .1212 |
crossingArms | facing | 2.2 | 2.975 | 71.5 | .0248 |
crossingArms | napping | 2.2 | 1.42 | 45.0 | .0015 |
crossingArms | talking | 2.2 | 2.2 | 141 | .5628 |
crossingArms | sleeping | 2.2 | 1.0 | 0.0 | <.0001 |
crossingArms | typing | 2.2 | 2.225 | 138.5 | .7424 |
attentive | relaxing | 3.075 | 2.375 | 35.5 | .0011 |
attentive | drinking | 3.075 | 2.2 | 30.5 | .0002 |
attentive | wavingFingers | 3.075 | 2.85 | 87.5 | .0434 |
attentive | nervous | 3.075 | 2.5 | 89 | .0162 |
attentive | facing | 3.075 | 2.975 | 120 | .2527 |
attentive | napping | 3.075 | 1.425 | 9.5 | <.0001 |
attentive | talking | 3.075 | 2.2 | 35.0 | .0004 |
attentive | sleeping | 3.075 | 1.0 | 4.0 | <.0001 |
attentive | typing | 3.075 | 2.225 | 24.0 | .0005 |
relaxing | drinking | 2.375 | 2.2 | 146.5 | .6667 |
relaxing | wavingFingers | 2.375 | 2.85 | 62.0 | .0039 |
relaxing | nervous | 2.375 | 2.5 | 107.5 | .0840 |
relaxing | facing | 2.375 | 2.975 | 77 | .0370 |
relaxing | napping | 2.375 | 1.425 | 51.0 | .0027 |
relaxing | talking | 2.375 | 2.2 | 162 | .7317 |
relaxing | sleeping | 2.375 | 1.0 | 0.0 | <.0001 |
relaxing | typing | 2.375 | 2.225 | 169 | .8689 |
drinking | wavingFingers | 2.2 | 2.85 | 47.0 | .0032 |
drinking | nervous | 2.2 | 2.5 | 95.5 | .0246 |
drinking | facing | 2.2 | 2.975 | 84 | .0346 |
drinking | napping | 2.2 | 1.425 | 23.5 | .0002 |
drinking | talking | 2.2 | 2.2 | 149.5 | .9886 |
drinking | sleeping | 2.2 | 1.0 | 0.0 | <.0001 |
drinking | typing | 2.2 | 2.225 | 148 | .9544 |
wavingFingers | nervous | 2.85 | 2.5 | 112 | .0641 |
wavingFingers | facing | 2.85 | 2.975 | 169 | .8688 |
wavingFingers | napping | 2.85 | 1.425 | 14.0 | <.0001 |
wavingFingers | talking | 2.85 | 2.2 | 32.0 | .0003 |
wavingFingers | sleeping | 2.85 | 1.0 | 1.0 | <.0001 |
wavingFingers | typing | 2.85 | 2.225 | 31.5 | .0007 |
nervous | facing | 2.5 | 2.975 | 126 | .3259 |
nervous | napping | 2.5 | 1.425 | 26.0 | <.0001 |
nervous | talking | 2.5 | 2.2 | 89 | .0479 |
nervous | sleeping | 2.5 | 1.0 | 5.0 | <.0001 |
nervous | typing | 2.5 | 2.225 | 85.5 | .0222 |
facing | napping | 2.975 | 1.425 | 37.0 | .0021 |
facing | talking | 2.975 | 2.2 | 51.0 | .0027 |
facing | sleeping | 2.975 | 1.0 | 7.0 | <.0001 |
facing | typing | 2.975 | 2.225 | 83 | .0324 |
napping | talking | 1.425 | 2.2 | 77.5 | .0222 |
napping | sleeping | 1.425 | 1.0 | 14.0 | .0018 |
napping | typing | 1.425 | 2.225 | 47.0 | .0019 |
talking | sleeping | 2.2 | 1.0 | 14.0 | <.0001 |
talking | typing | 2.2 | 2.225 | 142.5 | .8303 |
sleeping | typing | 1.0 | 2.225 | 3.0 | <.0001 |
Results of the negative affect ratings for the avatar's mood from the PANAS questionnaire, with a Bonferroni-corrected significance level of 0.0045. Significant results (p < 0.0045) are printed in bold.
Sample 1 (S1) | Sample 2 (S2) | Mean (S1) | Mean (S2) | Z | p-value |
crossingArms | attentive | 2.1 | 1.45 | 118.5 | .3680 |
crossingArms | relaxing | 2.1 | 1.25 | 50.5 | .0078 |
crossingArms | drinking | 2.1 | 1.15 | 111.5 | .2713 |
crossingArms | wavingFingers | 2.1 | 1.85 | 100 | .5900 |
crossingArms | nervous | 2.1 | 2.0 | 92 | .0972 |
crossingArms | facing | 2.1 | 1.3 | 20.5 | .0003 |
crossingArms | napping | 2.1 | 1.7 | 97 | .2122 |
crossingArms | talking | 2.1 | 1.775 | 117 | .3456 |
crossingArms | sleeping | 2.1 | 1.05 | 15.5 | .0002 |
crossingArms | typing | 2.1 | 1.15 | 63.5 | .0234 |
attentive | relaxing | 1.45 | 1.25 | 67 | .1557 |
attentive | drinking | 1.45 | 1.15 | 113.5 | .6728 |
attentive | wavingFingers | 1.45 | 1.85 | 90.5 | .1483 |
attentive | nervous | 1.45 | 2.0 | 52 | .0051 |
attentive | facing | 1.45 | 1.3 | 64.5 | .2194 |
attentive | napping | 1.45 | 1.7 | 102 | .4260 |
attentive | talking | 1.45 | 1.775 | 125.5 | .3194 |
attentive | sleeping | 1.45 | 1.05 | 49.5 | .0668 |
attentive | typing | 1.45 | 1.15 | 61 | .4629 |
relaxing | drinking | 1.25 | 1.15 | 100.5 | .3978 |
relaxing | wavingFingers | 1.25 | 1.85 | 51 | .0047 |
relaxing | nervous | 1.25 | 2.0 | 21.5 | .0002 |
relaxing | facing | 1.25 | 1.3 | 79 | .5196 |
relaxing | napping | 1.25 | 1.7 | 76.5 | .1044 |
relaxing | talking | 1.25 | 1.775 | 92 | .0975 |
relaxing | sleeping | 1.25 | 1.05 | 53 | .0908 |
relaxing | typing | 1.25 | 1.15 | 109.5 | .8346 |
drinking | wavingFingers | 1.15 | 1.85 | 82.5 | .0537 |
drinking | nervous | 1.15 | 2.0 | 40.5 | .0018 |
drinking | facing | 1.15 | 1.3 | 66.5 | .1505 |
drinking | napping | 1.15 | 1.7 | 109 | .5697 |
drinking | talking | 1.15 | 1.775 | 112 | .6378 |
drinking | sleeping | 1.15 | 1.05 | 28.5 | .0025 |
drinking | typing | 1.15 | 1.15 | 66.5 | .0879 |
wavingFingers | nervous | 1.85 | 2.0 | 95 | .1160 |
wavingFingers | facing | 1.85 | 1.3 | 34.0 | .0009 |
wavingFingers | napping | 1.85 | 1.7 | 94 | .1803 |
wavingFingers | talking | 1.85 | 1.775 | 132 | .4117 |
wavingFingers | sleeping | 1.85 | 1.05 | 21.0 | .0002 |
wavingFingers | typing | 1.85 | 1.15 | 58 | .0149 |
nervous | facing | 2.0 | 1.3 | 10.0 | <.0001 |
nervous | napping | 2.0 | 1.7 | 41.5 | .0033 |
nervous | talking | 2.0 | 1.775 | 79.5 | .0439 |
nervous | sleeping | 2.0 | 1.05 | 12.0 | .0001 |
nervous | typing | 2.0 | 1.15 | 6.0 | <.0001 |
facing | napping | 1.3 | 1.7 | 39 | .0428 |
facing | talking | 1.3 | 1.775 | 53 | .0170 |
facing | sleeping | 1.3 | 1.05 | 60 | .4339 |
facing | typing | 1.3 | 1.15 | 74.5 | .9245 |
napping | talking | 1.7 | 1.775 | 145.5 | .8977 |
napping | sleeping | 1.7 | 1.05 | 19.5 | .0040 |
napping | typing | 1.7 | 1.15 | 25 | .0258 |
talking | sleeping | 1.775 | 1.05 | 49.5 | .0041 |
talking | typing | 1.775 | 1.15 | 91.5 | .1572 |
sleeping | typing | 1.05 | 1.15 | 27 | .0607 |
Results of the positive affect ratings for the self-assessed mood from the PANAS questionnaire, with a Bonferroni-corrected significance level of 0.0045. Significant results (p < 0.0045) are printed in bold.
Sample 1 (S1) | Sample 2 (S2) | Mean (S1) | Mean (S2) | Z | p-value |
crossingArms | attentive | 2.43 | 2.575 | 68 | .0332 |
crossingArms | relaxing | 2.43 | 2.125 | 118.5 | .7949 |
crossingArms | drinking | 2.43 | 2.15 | 62.5 | .1125 |
crossingArms | wavingFingers | 2.43 | 2.525 | 100.5 | .3985 |
crossingArms | nervous | 2.43 | 2.35 | 135 | .9273 |
crossingArms | facing | 2.43 | 2.85 | 25.5 | .0017 |
crossingArms | napping | 2.43 | 1.575 | 39.0 | .0044 |
crossingArms | talking | 2.43 | 2.05 | 124.5 | .9482 |
crossingArms | sleeping | 2.43 | 1.575 | 41.5 | .0057 |
crossingArms | typing | 2.43 | 2.35 | 99.5 | .3806 |
attentive | relaxing | 2.575 | 2.125 | 60.5 | .0321 |
attentive | drinking | 2.575 | 2.15 | 42.5 | .0063 |
attentive | wavingFingers | 2.575 | 2.525 | 82.5 | .0913 |
attentive | nervous | 2.575 | 2.35 | 67 | .0307 |
attentive | facing | 2.575 | 2.85 | 124.5 | .4661 |
attentive | napping | 2.575 | 1.575 | 14.0 | .0002 |
attentive | talking | 2.575 | 2.05 | 74 | .0515 |
attentive | sleeping | 2.575 | 1.575 | 27.0 | .0012 |
attentive | typing | 2.575 | 2.35 | 51 | .0142 |
relaxing | drinking | 2.125 | 2.15 | 81.5 | .1437 |
relaxing | wavingFingers | 2.125 | 2.525 | 117 | .5229 |
relaxing | nervous | 2.125 | 2.35 | 109.5 | .8347 |
relaxing | facing | 2.125 | 2.85 | 37.5 | .0022 |
relaxing | napping | 2.125 | 1.575 | 48.5 | .0113 |
relaxing | talking | 2.125 | 2.05 | 135 | .9272 |
relaxing | sleeping | 2.125 | 1.575 | 44 | .0074 |
relaxing | typing | 2.125 | 2.35 | 97.5 | .3463 |
drinking | wavingFingers | 2.15 | 2.525 | 92.5 | .1663 |
drinking | nervous | 2.15 | 2.35 | 98 | .5429 |
drinking | facing | 2.15 | 2.85 | 19.0 | .0002 |
drinking | napping | 2.15 | 1.575 | 50.5 | .0237 |
drinking | talking | 2.15 | 2.05 | 96 | .4978 |
drinking | sleeping | 2.15 | 1.575 | 50.5 | .0135 |
drinking | typing | 2.15 | 2.35 | 75.5 | .6627 |
wavingFingers | nervous | 2.525 | 2.35 | 104 | .3010 |
wavingFingers | facing | 2.525 | 2.85 | 55 | .0115 |
wavingFingers | napping | 2.525 | 1.575 | 58 | .0149 |
wavingFingers | talking | 2.525 | 2.05 | 110.5 | .4027 |
wavingFingers | sleeping | 2.525 | 1.575 | 62 | .0207 |
wavingFingers | typing | 2.525 | 2.35 | 81 | .1395 |
nervous | facing | 2.35 | 2.85 | 39.5 | .0016 |
nervous | napping | 2.35 | 1.575 | 67.5 | .0553 |
nervous | talking | 2.35 | 2.05 | 125.5 | .7037 |
nervous | sleeping | 2.35 | 1.575 | 57.5 | .0250 |
nervous | typing | 2.35 | 2.35 | 100.5 | .6019 |
facing | napping | 2.85 | 1.575 | 14.5 | .0002 |
facing | talking | 2.85 | 2.05 | 63.5 | .0134 |
facing | sleeping | 2.85 | 1.575 | 19.0 | .0003 |
facing | typing | 2.85 | 2.35 | 31.0 | .0011 |
napping | talking | 1.575 | 2.05 | 40.5 | .0052 |
napping | sleeping | 1.575 | 1.575 | 79 | .5193 |
napping | typing | 1.575 | 2.35 | 38 | .0070 |
talking | sleeping | 2.05 | 1.575 | 6.0 | .0002 |
talking | typing | 2.05 | 2.35 | 91.5 | .4036 |
sleeping | typing | 1.575 | 2.35 | 14.0 | .011 |
Results of the negative affect ratings for the self-assessed mood from the PANAS questionnaire, with a Bonferroni-corrected significance level of 0.0045. Significant results (p < 0.0045) are printed in bold.
Sample 1 (S1) | Sample 2 (S2) | Mean (S1) | Mean (S2) | Z | p-value |
crossingArms | attentive | 1.75 | 1.1 | 58.5 | .0474 |
crossingArms | relaxing | 1.75 | 1.175 | 37.5 | .0022 |
crossingArms | drinking | 1.75 | 1.225 | 59.5 | .0514 |
crossingArms | wavingFingers | 1.75 | 1.525 | 84 | .4328 |
crossingArms | nervous | 1.75 | 1.6 | 102 | .4259 |
crossingArms | facing | 1.75 | 1.275 | 63.5 | .1212 |
crossingArms | napping | 1.75 | 1.725 | 108.5 | .8076 |
crossingArms | talking | 1.75 | 2.05 | 74.5 | .0178 |
crossingArms | sleeping | 1.75 | 1.975 | 113.5 | .6727 |
crossingArms | typing | 1.75 | 1.3 | 93 | .4340 |
attentive | relaxing | 1.1 | 1.175 | 94.5 | .6949 |
attentive | drinking | 1.1 | 1.225 | 67.5 | .4327 |
attentive | wavingFingers | 1.1 | 1.525 | 45 | .0250 |
attentive | nervous | 1.1 | 1.6 | 49 | .1118 |
attentive | facing | 1.1 | 1.275 | 52 | .2457 |
attentive | napping | 1.1 | 1.725 | 53 | .0521 |
attentive | talking | 1.1 | 2.05 | 43.5 | .0008 |
attentive | sleeping | 1.1 | 1.975 | 47 | .0171 |
attentive | typing | 1.1 | 1.3 | 49.5 | .1165 |
relaxing | drinking | 1.175 | 1.225 | 74.5 | .2543 |
relaxing | wavingFingers | 1.175 | 1.525 | 55.5 | .0120 |
relaxing | nervous | 1.175 | 1.6 | 41 | .0095 |
relaxing | facing | 1.175 | 1.275 | 69 | .1784 |
relaxing | napping | 1.175 | 1.725 | 24.5 | .0015 |
relaxing | talking | 1.175 | 2.05 | 2.0 | <.0001 |
relaxing | sleeping | 1.175 | 1.975 | 21.0 | .0004 |
relaxing | typing | 1.175 | 1.3 | 57.5 | .0437 |
drinking | wavingFingers | 1.225 | 1.525 | 86 | .1882 |
drinking | nervous | 1.225 | 1.6 | 50 | .0699 |
drinking | facing | 1.225 | 1.275 | 81.5 | .5863 |
drinking | napping | 1.225 | 1.725 | 48 | .1023 |
drinking | talking | 1.225 | 2.05 | 31.0 | .0007 |
drinking | sleeping | 1.225 | 1.975 | 45.5 | .0261 |
drinking | typing | 1.225 | 1.3 | 41.5 | .2930 |
wavingFingers | nervous | 1.525 | 1.6 | 117.5 | .7700 |
wavingFingers | facing | 1.525 | 1.275 | 81.5 | .2370 |
wavingFingers | napping | 1.525 | 1.725 | 111.5 | .6261 |
wavingFingers | talking | 1.525 | 2.05 | 76 | .0114 |
wavingFingers | sleeping | 1.525 | 1.975 | 101 | .2602 |
wavingFingers | typing | 1.525 | 1.3 | 99 | .5662 |
nervous | facing | 1.6 | 1.275 | 82 | .2439 |
nervous | napping | 1.6 | 1.725 | 105 | .4848 |
nervous | talking | 1.6 | 2.05 | 51.5 | .0048 |
nervous | sleeping | 1.6 | 1.975 | 104.5 | .3078 |
nervous | typing | 1.6 | 1.3 | 91 | .6010 |
facing | napping | 1.275 | 1.725 | 71.5 | .2107 |
facing | talking | 1.275 | 2.05 | 42.0 | .0012 |
facing | sleeping | 1.275 | 1.975 | 61.5 | .0604 |
facing | typing | 1.275 | 1.3 | 78.5 | .7603 |
napping | talking | 1.725 | 2.05 | 65 | .0150 |
napping | sleeping | 1.725 | 1.975 | 79.5 | .3408 |
napping | typing | 1.725 | 1.3 | 62 | .3060 |
talking | sleeping | 2.05 | 1.975 | 77.5 | .0382 |
talking | typing | 2.05 | 1.3 | 25.0 | .0006 |
sleeping | typing | 1.975 | 1.3 | 42.5 | .0610 |
Results for the question of how much the situation allows talking to the person, which corresponds to the green status icon known from Skype, with a Bonferroni-corrected significance level of 0.0045. Significant results (p < 0.0045) are printed in bold.
Sample 1 (S1) | Sample 2 (S2) | Mean (S1) | Mean (S2) | Z | p-value |
crossingArms | attentive | 4.75 | 6.0 | 51 | .0136 |
crossingArms | relaxing | 4.75 | 4.75 | 137 | .7078 |
crossingArms | drinking | 4.75 | 4.0 | 126.5 | .4996 |
crossingArms | wavingFingers | 4.75 | 5.5 | 69 | .1040 |
crossingArms | nervous | 4.75 | 5.0 | 108 | .7934 |
crossingArms | facing | 4.75 | 6.0 | 64 | .0136 |
crossingArms | napping | 4.75 | 3.0 | 31 | .0011 |
crossingArms | talking | 4.75 | 1.5 | 14.5 | <.0001 |
crossingArms | sleeping | 4.75 | 1.0 | 4 | <.0001 |
crossingArms | typing | 4.75 | 3.0 | 17 | .0001 |
attentive | relaxing | 6.0 | 4.75 | 66.5 | .0289 |
attentive | drinking | 6.0 | 4.0 | 34 | .0079 |
attentive | wavingFingers | 6.0 | 5.5 | 58.5 | .2368 |
attentive | nervous | 6.0 | 5.0 | 39 | .0237 |
attentive | facing | 6.0 | 6.0 | 63.5 | .5282 |
attentive | napping | 6.0 | 3.0 | 9 | <.0001 |
attentive | talking | 6.0 | 1.5 | 5.5 | <.0001 |
attentive | sleeping | 6.0 | 1.0 | 2.5 | <.0001 |
attentive | typing | 6.0 | 3.0 | 9 | <.0001 |
relaxing | drinking | 4.75 | 4.0 | 75 | .1573 |
relaxing | wavingFingers | 4.75 | 5.5 | 73 | .1364 |
relaxing | nervous | 4.75 | 5.0 | 114.5 | .9720 |
relaxing | facing | 4.75 | 6.0 | 44.5 | .0130 |
relaxing | napping | 4.75 | 3.0 | 43.5 | .0008 |
relaxing | talking | 4.75 | 1.5 | 6.5 | <.0001 |
relaxing | sleeping | 4.75 | 1.0 | 11 | <.0001 |
relaxing | typing | 4.75 | 3.0 | 11 | <.0001 |
drinking | wavingFingers | 4.0 | 5.5 | 38 | .0375 |
drinking | nervous | 4.0 | 5.0 | 89 | .2204 |
drinking | facing | 4.0 | 6.0 | 36.5 | .0059 |
drinking | napping | 4.0 | 3.0 | 29.5 | .0009 |
drinking | talking | 4.0 | 1.5 | 0 | <.0001 |
drinking | sleeping | 4.0 | 1.0 | 0 | <.0001 |
drinking | typing | 4.0 | 3.0 | 14.5 | .0001 |
wavingFingers | nervous | 5.5 | 5.0 | 68.5 | .2837 |
wavingFingers | facing | 5.5 | 6.0 | 44.5 | .1278 |
wavingFingers | napping | 5.5 | 3.0 | 11.5 | .0002 |
wavingFingers | talking | 5.5 | 1.5 | 1.5 | <.0001 |
wavingFingers | sleeping | 5.5 | 1.0 | 12.5 | <.0001 |
wavingFingers | typing | 5.5 | 3.0 | 13 | <.0001 |
nervous | facing | 5.0 | 6.0 | 45 | .0077 |
nervous | napping | 5.0 | 3.0 | 25 | .0005 |
nervous | talking | 5.0 | 1.5 | 2 | <.0001 |
nervous | sleeping | 5.0 | 1.0 | 4 | <.0001 |
nervous | typing | 5.0 | 3.0 | 10.5 | .0001 |
facing | napping | 6.0 | 3.0 | 17 | .0001 |
facing | talking | 6.0 | 1.5 | 0 | <.0001 |
facing | sleeping | 6.0 | 1.0 | 3.5 | <.0001 |
facing | typing | 6.0 | 3.0 | 4.5 | <.0001 |
napping | talking | 3.0 | 1.5 | 10 | .0001 |
napping | sleeping | 3.0 | 1.0 | 7.5 | .0001 |
napping | typing | 3.0 | 3.0 | 107 | .5213 |
talking | sleeping | 1.5 | 1.0 | 58.5 | .9317 |
talking | typing | 1.5 | 3.0 | 10.5 | <.0001 |
sleeping | typing | 1.0 | 3.0 | 35 | .0005 |
Results for the question of how much the situation allows talking to the person only in urgent matters, which corresponds to the yellow status icon known from Skype, with a Bonferroni-corrected significance level of 0.0045. Significant results (p < 0.0045) are printed in bold.
Sample 1 (S1) | Sample 2 (S2) | Mean (S1) | Mean (S2) | Z | p-value |
crossingArms | attentive | 4.0 | 2.5 | 64.5 | .0139 |
crossingArms | relaxing | 4.0 | 3.0 | 78.5 | .1173 |
crossingArms | drinking | 4.0 | 4.0 | 91 | .3928 |
crossingArms | wavingFingers | 4.0 | 3.0 | 67 | .0904 |
crossingArms | nervous | 4.0 | 4.0 | 155 | .8386 |
crossingArms | facing | 4.0 | 2.75 | 63 | .0669 |
crossingArms | napping | 4.0 | 3.75 | 77.5 | .1843 |
crossingArms | talking | 4.0 | 4.0 | 133.5 | .4335 |
crossingArms | sleeping | 4.0 | 3.75 | 124 | .4564 |
crossingArms | typing | 4.0 | 4.25 | 78 | .0672 |
attentive | relaxing | 2.5 | 3.0 | 79.5 | .5314 |
attentive | drinking | 2.5 | 4.0 | 41 | .0165 |
attentive | wavingFingers | 2.5 | 3.0 | 79 | .5163 |
attentive | nervous | 2.5 | 4.0 | 46 | .0468 |
attentive | facing | 2.5 | 2.75 | 52.5 | .6681 |
attentive | napping | 2.5 | 3.75 | 19 | .0037 |
attentive | talking | 2.5 | 4.0 | 68.5 | .0593 |
attentive | sleeping | 2.5 | 3.75 | 85.5 | .0647 |
attentive | typing | 2.5 | 4.25 | 37.5 | .0037 |
relaxing | drinking | 3.0 | 4.0 | 49 | .0358 |
relaxing | wavingFingers | 3.0 | 3.0 | 137.5 | .9877 |
relaxing | nervous | 3.0 | 4.0 | 76.5 | .0341 |
relaxing | facing | 3.0 | 2.75 | 85.5 | .7004 |
relaxing | napping | 3.0 | 3.75 | 64 | .0137 |
relaxing | talking | 3.0 | 4.0 | 81.5 | .0847 |
relaxing | sleeping | 3.0 | 3.75 | 73.5 | .0846 |
relaxing | typing | 3.0 | 4.25 | 68.5 | .0064 |
drinking | wavingFingers | 4.0 | 3.0 | 40 | .0143 |
drinking | nervous | 4.0 | 4.0 | 103 | .6628 |
drinking | facing | 4.0 | 2.75 | 55 | .0346 |
drinking | napping | 4.0 | 3.75 | 118.5 | .5512 |
drinking | talking | 4.0 | 4.0 | 152.5 | .7874 |
drinking | sleeping | 4.0 | 3.75 | 142 | .8185 |
drinking | typing | 4.0 | 4.25 | 94.5 | .1109 |
wavingFingers | nervous | 3.0 | 4.0 | 59 | .0483 |
wavingFingers | facing | 3.0 | 2.75 | 91 | .8709 |
wavingFingers | napping | 3.0 | 3.75 | 18 | .0011 |
wavingFingers | talking | 3.0 | 4.0 | 83.5 | .0568 |
wavingFingers | sleeping | 3.0 | 3.75 | 72 | .0250 |
wavingFingers | typing | 3.0 | 4.25 | 37.5 | .0022 |
nervous | facing | 4.0 | 2.75 | 40 | .0258 |
nervous | napping | 4.0 | 3.75 | 77.5 | .1830 |
nervous | talking | 4.0 | 4.0 | 145.5 | .4452 |
nervous | sleeping | 4.0 | 3.75 | 127 | .5094 |
nervous | typing | 4.0 | 4.25 | 74.5 | .0525 |
facing | napping | 2.75 | 3.75 | 46 | .0088 |
facing | talking | 2.75 | 4.0 | 79.5 | .0747 |
facing | sleeping | 2.75 | 3.75 | 87 | .1200 |
facing | typing | 2.75 | 4.25 | 41.5 | .0057 |
napping | talking | 3.75 | 4.0 | 114.5 | .9721 |
napping | sleeping | 3.75 | 3.75 | 117 | .7568 |
napping | typing | 3.75 | 4.25 | 85 | .2866 |
talking | sleeping | 4.0 | 3.75 | 106 | .7408 |
talking | typing | 4.0 | 4.25 | 92 | .1588 |
sleeping | typing | 3.75 | 4.25 | 87 | .1984 |
Results for the question of how much the situation does not allow talking to the person, which corresponds to the red status icon known from Skype, with a Bonferroni-corrected significance level of 0.0045. Significant results (p < 0.0045) are printed in bold.
Sample 1 (S1) | Sample 2 (S2) | Mean (S1) | Mean (S2) | Z | p-value |
crossingArms | attentive | 3.0 | 1.75 | 35 | .0081 |
crossingArms | relaxing | 3.0 | 3.0 | 76 | .1673 |
crossingArms | drinking | 3.0 | 3.75 | 101.5 | .6218 |
crossingArms | wavingFingers | 3.0 | 2.0 | 54 | .0553 |
crossingArms | nervous | 3.0 | 2.0 | 73.5 | .1412 |
crossingArms | facing | 3.0 | 1.5 | 37.5 | .0063 |
crossingArms | napping | 3.0 | 4.5 | 21 | .0048 |
crossingArms | talking | 3.0 | 5.0 | 40.5 | .0017 |
crossingArms | sleeping | 3.0 | 6.25 | 20 | .0001 |
crossingArms | typing | 3.0 | 4.0 | 80.5 | .0459 |
attentive | relaxing | 1.75 | 3.0 | 82.5 | .6111 |
attentive | drinking | 1.75 | 3.75 | 24 | .0040 |
attentive | wavingFingers | 1.75 | 2.0 | 47.5 | .7520 |
attentive | nervous | 1.75 | 2.0 | 49.5 | .3364 |
attentive | facing | 1.75 | 1.5 | 45 | .3914 |
attentive | napping | 1.75 | 4.5 | 25.5 | .0006 |
attentive | talking | 1.75 | 5.0 | 24.5 | .0003 |
attentive | sleeping | 1.75 | 6.25 | 14 | <.0001 |
attentive | typing | 1.75 | 4.0 | 37.5 | .0038 |
relaxing | drinking | 3.0 | 3.75 | 51.5 | .0448 |
relaxing | wavingFingers | 3.0 | 2.0 | 82.5 | .6117 |
relaxing | nervous | 3.0 | 2.0 | 126 | .9869 |
relaxing | facing | 3.0 | 1.5 | 44.5 | .1267 |
relaxing | napping | 3.0 | 4.5 | 40 | .0016 |
relaxing | talking | 3.0 | 5.0 | 34 | .0005 |
relaxing | sleeping | 3.0 | 6.25 | 2.5 | <.0001 |
relaxing | typing | 3.0 | 4.0 | 61 | .0108 |
drinking | wavingFingers | 3.75 | 2.0 | 56 | .0205 |
drinking | nervous | 3.75 | 2.0 | 40.5 | .0489 |
drinking | facing | 3.75 | 1.5 | 38.5 | .0040 |
drinking | napping | 3.75 | 4.5 | 42 | .0104 |
drinking | talking | 3.75 | 5.0 | 59.5 | .0055 |
drinking | sleeping | 3.75 | 6.25 | 27.5 | .0002 |
drinking | typing | 3.75 | 4.0 | 90 | .1434 |
wavingFingers | nervous | 2.0 | 2.0 | 61.5 | .4739 |
wavingFingers | facing | 2.0 | 1.5 | 32.5 | .2064 |
wavingFingers | napping | 2.0 | 4.5 | 25.5 | .0003 |
wavingFingers | talking | 2.0 | 5.0 | 25.5 | .0010 |
wavingFingers | sleeping | 2.0 | 6.25 | 2 | <.0001 |
wavingFingers | typing | 2.0 | 4.0 | 41.5 | .0056 |
nervous | facing | 2.0 | 1.5 | 44 | .1221 |
nervous | napping | 2.0 | 4.5 | 18 | .0002 |
nervous | talking | 2.0 | 5.0 | 35.5 | .0006 |
nervous | sleeping | 2.0 | 6.25 | 9.5 | <.0001 |
nervous | typing | 2.0 | 4.0 | 35 | .0050 |
facing | napping | 1.5 | 4.5 | 23.5 | .0002 |
facing | talking | 1.5 | 5.0 | 23.5 | .0002 |
facing | sleeping | 1.5 | 6.25 | 17.5 | <.0001 |
facing | typing | 1.5 | 4.0 | 27.5 | .0021 |
napping | talking | 4.5 | 5.0 | 73.5 | .0280 |
napping | sleeping | 4.5 | 6.25 | 24 | .0005 |
napping | typing | 4.5 | 4.0 | 72.5 | .3629 |
talking | sleeping | 5.0 | 6.25 | 51.5 | .0449 |
talking | typing | 5.0 | 4.0 | 47 | .0093 |
sleeping | typing | 6.25 | 4.0 | 27 | .0004 |
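Note on the pairwise comparisons: each table above compares all pairs of avatar behaviors against a Bonferroni-corrected significance level of 0.0045. As a minimal illustrative sketch (not the study's analysis code or data), the following Python snippet shows how such pairwise comparisons can be computed, assuming Wilcoxon signed-rank tests on paired per-condition ratings. The condition names are taken from the tables; the participant count of 12 and the simulated Likert ratings are hypothetical placeholders.

# Minimal sketch: pairwise signed-rank tests over all condition pairs,
# checked against the Bonferroni-corrected threshold of 0.0045.
from itertools import combinations

import numpy as np
from scipy.stats import wilcoxon

CONDITIONS = ["crossingArms", "attentive", "relaxing", "drinking",
              "wavingFingers", "nervous", "facing", "napping",
              "talking", "sleeping", "typing"]
ALPHA_CORRECTED = 0.0045  # corrected threshold used in the tables above

# Hypothetical data: 12 simulated participants, 7-point Likert ratings.
rng = np.random.default_rng(42)
ratings = {c: rng.integers(1, 8, size=12) for c in CONDITIONS}

for s1, s2 in combinations(CONDITIONS, 2):
    stat, p = wilcoxon(ratings[s1], ratings[s2])
    flag = "*" if p < ALPHA_CORRECTED else ""
    print(f"{s1:>13} vs {s2:<13} "
          f"mean(S1)={ratings[s1].mean():.3f} mean(S2)={ratings[s2].mean():.3f} "
          f"stat={stat:.1f} p={p:.4f} {flag}")

Replacing the simulated ratings with the actual per-participant ratings for a given questionnaire item would, under these assumptions, yield the pairwise statistics reported in the corresponding table.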
References
[1] Elisabeth Andre, Martin Rumpler, Patrick Gebhard, Steve Allen and Thomas Rist, 2000. Integrating Models of Personality and Emotions into Lifelike Characters. Lecture Notes in Computer Science 1814, 150–165. 10.1007/10720296_11.
[2] Michael Argyle, 2013. Bodily communication. Routledge, London. 10.4324/9780203753835.
[3] Christiane Atzmüller and Peter Steiner, 2010. Experimental Vignette Studies in Survey Research. Methodology: European Journal of Research Methods for The Behavioral and Social Sciences 6, 128–138. 10.1027/1614-2241/a000014.
[4] Huidong Bai, Prasanth Sasikumar, Jing Yang and Mark Billinghurst, 2020. A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376550.
[5] John L Barkai, 1990. Nonverbal communication from the other side: speaking body language. San Diego L. Rev. 27, 101.
[6] Aryel Beck, Brett Stevens, Kim Bard and Lola Cañamero, 2012. Emotional body language displayed by artificial agents. ACM Transactions on Interactive Intelligent Systems (TiiS) 2. 10.1145/2133366.2133368.
[7] Kathrin Bednar, Sarah Spiekermann and Marc Langheinrich, 2019. Engineering Privacy by Design: Are engineers ready to live up to the challenge? The Information Society 35 (3), 122–142. 10.1080/01972243.2019.1583296.
[8] James ”Bo” Begole, Nicholas E. Matsakis and John C. Tang, 2004. Lilsys: Sensing Unavailability. In Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work (CSCW ’04). Association for Computing Machinery, New York, NY, USA, 511–514. https://doi.org/10.1145/1031607.1031691.
[9] Steve Benford, John Bowers, Lennart E. Fahlén, Chris Greenhalgh and Dave Snowdon, 1995. User Embodiment in Collaborative Virtual Environments. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’95). ACM Press/Addison-Wesley Publishing Co., USA, 242–249. https://doi.org/10.1145/223904.223935.
[10] N. Berthouze, T. Fushimi, M. Hasegawa, A. Kleinsmith, H. Takenaka and L. Berthouze, 2003. Learning to recognize affective body postures. In The 3rd International Workshop on Scientific Use of Submarine Cables and Related Technologies, 2003. IEEE, Lugano, Switzerland, 193–198. 10.1109/CIMSA.2003.1227226.
[11] Nigel Bosch, Sidney D’Mello, Ryan Baker, Jaclyn Ocumpaugh, Valerie Shute, Matthew Ventura, Lubin Wang and Weinan Zhao, 2015. Automatic Detection of Learning-Centered Affective States in the Wild. In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI ’15). Association for Computing Machinery, New York, NY, USA, 379–388. https://doi.org/10.1145/2678025.2701397.
[12] B. Breyer and M. Bluemke, 2016. Deutsche Version der Positive and Negative Affect Schedule PANAS (GESIS Panel). Zusammenstellung sozialwissenschaftlicher Items und Skalen (ZIS). https://doi.org/10.6102/zis242.
[13] Andreas Bulling, Jamie A. Ward, Hans Gellersen and Gerhard Tröster, 2011. Eye Movement Analysis for Activity Recognition Using Electrooculography. IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (4), 741–753. 10.1109/TPAMI.2010.86.
[14] Ginevra Castellano, Loic Kessous and George Caridakis, 2008. Emotion recognition through multiple modalities: face, body gesture, speech. In: Affect and emotion in human-computer interaction. Springer, Berlin, Heidelberg, 92–103. 10.1007/978-3-540-85099-1_8.
[15] Camille Cobb, Lucy Simko, Tadayoshi Kohno and Alexis Hiniker, 2020. User Experiences with Online Status Indicators. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376240.
[16] Mark Coulson, 2004. Attributing Emotion to Static Body Postures: Recognition Accuracy, Confusions, and Viewpoint Dependence. Journal of Nonverbal Behavior 28, 117–139. 10.1023/B:JONB.0000023655.25550.be.
[17] Edward S. De Guzman, Margaret Yau, Anthony Gagliano, Austin Park and Anind K. Dey, 2004. Exploring the Design and Use of Peripheral Displays of Awareness Information. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’04). Association for Computing Machinery, New York, NY, USA, 1247–1250. https://doi.org/10.1145/985921.986035.
[18] Joseph A DeVito, Susan O’Rourke and Linda O’Neill, 2000. Human communication. Longman, New York.
[19] Eric L Einspruch and Bruce D Forman, 1985. Observations concerning research literature on neuro-linguistic programming. Journal of Counseling Psychology 32 (4), 589. 10.1037/0022-0167.32.4.589.
[20] Stefanie M. Faas, Andrea C. Kao and Martin Baumann, 2020. A Longitudinal Video Study on Communicating Status and Intent for Self-Driving Vehicle – Pedestrian Interaction. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3313831.3376484.
[21] Carol Kinsey Goman, 2009. The nonverbal advantage: Secrets and science of body language at work. ReadHowYouWant.com, United States.
[22] Saul Greenberg, 1996. Peepholes: Low Cost Awareness of One’s Community. In Conference Companion on Human Factors in Computing Systems (CHI ’96). Association for Computing Machinery, New York, NY, USA, 206–207. https://doi.org/10.1145/257089.257283.
[23] Jens Hainmueller, Dominik Hangartner and Teppei Yamamoto, 2015. Validating vignette and conjoint survey experiments against real-world behavior. Proceedings of the National Academy of Sciences 112 (8), 2395–2400. 10.1073/pnas.1416587112.
[24] Stacie Hibino and Audris Mockus, 2002. HandiMessenger: Awareness-Enhanced Universal Communication for Mobile Users. In Proceedings of the 4th International Symposium on Mobile Human-Computer Interaction (Mobile HCI ’02). Springer-Verlag, Berlin, Heidelberg, 170–183. 10.1007/3-540-45756-9_14.
[25] Mohammed (Ehsan) Hoque, Matthieu Courgeon, Jean-Claude Martin, Bilge Mutlu and Rosalind W. Picard, 2013. MACH: My Automated Conversation Coach. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’13). Association for Computing Machinery, New York, NY, USA, 697–706. https://doi.org/10.1145/2493432.2493502.
[26] Eric Horvitz, Paul Koch, Carl M. Kadie and Andy Jacobs, 2002. Coordinate: Probabilistic Forecasting of Presence and Availability. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI’02). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 224–233.
[27] Cao Jian-xia, Man Shuo and Mei Lei, 2019. Influence of Instructors’ Body Language on Students’ Learning Outcome in Micro Lectures. In Proceedings of the 2019 11th International Conference on Education Technology and Computers (ICETC 2019). Association for Computing Machinery, New York, NY, USA, 76–79. https://doi.org/10.1145/3369255.3369269.
[28] Andrea Kleinsmith and Nadia Bianchi-Berthouze, 2007. Recognizing Affective Dimensions from Body Posture. In Affective Computing and Intelligent Interaction, Ana C. R. Paiva, Rui Prada and Rosalind W. Picard (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 48–58. 10.1007/978-3-540-74889-2_5.
[29] Andrea Kleinsmith and Nadia Bianchi-Berthouze, 2011. Form as a Cue in the Automatic Recognition of Non-acted Affective Body Expressions. In Affective Computing and Intelligent Interaction, Sidney D’Mello, Arthur Graesser, Björn Schuller and Jean-Claude Martin (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 155–164. 10.1007/978-3-642-24600-5_19.
[30] Andrea Kleinsmith and Nadia Bianchi-Berthouze, 2013. Affective Body Expression Perception and Recognition: A Survey. IEEE Transactions on Affective Computing 4, 15–33. 10.1109/T-AFFC.2012.16.
[31] Marco Kurzweg, Jens Reinhardt, Wladimir Nabok and Katrin Wolf, 2021. Using Body Language of Avatars in VR Meetings as Communication Status Cue. In Mensch Und Computer 2021 (MuC ’21). Association for Computing Machinery, New York, NY, USA, 366–377. https://doi.org/10.1145/3473856.3473865.
[32] Saadi Lahlou, Marc Langheinrich and Carsten Röcker, 2005. Privacy and Trust Issues with Invisible Computers. Commun. ACM 48 (3), 59–60. 10.1145/1047671.1047705.
[33] Marc Langheinrich and Florian Schaub, 2018. Privacy in mobile and pervasive computing. Synthesis Lectures on Mobile and Pervasive Computing 10 (1), 1–139. 10.2200/S00882ED1V01Y201810MPC013.
[34] Matthew Leacock, David Van Wie and Paul J Brody, 2013. Interfacing with a spatial virtual communication environment. US Patent 8,397,168.
[35] Hedwig Lewis, 2012. Body language: A guide for professionals. SAGE Publications India, India.
[36] Matthew Lombard, Theresa B Ditton and Lisa Weinstein, 2009. Measuring presence: the temple presence inventory. In Proceedings of the 12th annual international workshop on presence. International Society for Presence Research, United States, 1–15.
[37] Jennifer Marlow, Jason Wiese and Daniel Avrahami, 2017. Exploring the Effects of Audience Visibility on Presenters and Attendees in Online Educational Presentations. In Proceedings of the 8th International Conference on Communities and Technologies (C&T ’17). Association for Computing Machinery, New York, NY, USA, 78–86. https://doi.org/10.1145/3083671.3083672.
[38] John C McCarthy, Victoria C Miles, Andrew F Monk, Michael D Harrison, Alan J Dix and Peter C Wright, 1991. Four generic communication tasks which must be supported in electronic conferencing. ACM SIGCHI Bulletin 23 (1), 41–43. 10.1145/122672.122681.
[39] Bonnie A. Nardi, Steve Whittaker and Erin Bradner, 2000. Interaction and Outeraction: Instant Messaging in Action. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (CSCW ’00). Association for Computing Machinery, New York, NY, USA, 79–88. https://doi.org/10.1145/358916.358975.
[40] Virginia P Richmond, James C McCroskey and Mark Hickson, 2008. Nonverbal behavior in interpersonal relations. Allyn & Bacon, Boston.
[41] Peter Henry Rossi, 1982. Measuring social judgments: The factorial survey approach. SAGE Publications, Incorporated, CA, USA.
[42] Daniel Roth, Gary Bente, Peter Kullmann, David Mal, Chris Felix Purps, Kai Vogeley and Marc Erich Latoschik, 2019. Technologies for Social Augmentations in User-Embodied Virtual Reality. In 25th ACM Symposium on Virtual Reality Software and Technology (VRST ’19). Association for Computing Machinery, New York, NY, USA, Article 5, 12 pages. https://doi.org/10.1145/3359996.3364269.
[43] Bertrand Schneider and Roy Pea, 2013. Real-time mutual gaze perception enhances collaborative learning and collaboration quality. International Journal of Computer-Supported Collaborative Learning 8 (4), 375–397. 10.1007/s11412-013-9181-4.
[44] Christopher F Sharpley, 1987. Research findings on neurolinguistic programming: nonsupportive data or an untestable theory? Journal of Counseling Psychology 34 (1), 103–107. 10.1037/0022-0167.34.1.103.
[45] Mei Si and Joseph Dean McDaniel, 2016. Using Facial Expression and Body Language to Express Attitude for Non-Humanoid Robot: (Extended Abstract). In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems (AAMAS ’16). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 1457–1458.
[46] John Spiegel and Pavel Machotka, 1974. Messages of the body. Free Press, NY, USA.
[47] Frank Steinicke, Nale Lehmann-Willenbrock and Annika Luisa Meinecke, 2020. A First Pilot Study to Compare Virtual Group Meetings Using Video Conferences and (Immersive) Virtual Reality. In Symposium on Spatial User Interaction (SUI ’20). Association for Computing Machinery, New York, NY, USA, Article 19, 2 pages. https://doi.org/10.1145/3385959.3422699.
[48] Carol L Stimmel, 2004. Virtual workplace intercommunication tool. US Patent 6,678,719.
[49] Vincent Tran, 2013. Positive Affect Negative Affect Scale (PANAS). Springer New York, New York, NY, 1508–1509. https://doi.org/10.1007/978-1-4419-1005-9_978.
[50] Jolanda Tromp and Dave Snowdon, 1997. Virtual Body Language: Providing Appropriate User Interfaces in Collaborative Virtual Environments. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST ’97). Association for Computing Machinery, New York, NY, USA, 37–44. https://doi.org/10.1145/261135.261143.
[51] Hitomi Tsujita and Jun Rekimoto, 2011. Smiling Makes Us Happier: Enhancing Positive Mood and Communication with Smile-Encouraging Digital Appliances. In Proceedings of the 13th International Conference on Ubiquitous Computing (UbiComp ’11). Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/2030112.2030114.
[52] David Watson, Lee Anna Clark and Auke Tellegen, 1988. Development and validation of brief measures of positive and negative affect: the PANAS scales. Journal of Personality and Social Psychology 54 (6), 1063. 10.1037/0022-3514.54.6.1063.
[53] David Watson and Auke Tellegen, 1985. Toward a consensual structure of mood. Psychological Bulletin 98 (2), 219. 10.1037/0033-2909.98.2.219.
[54] Christopher D Wickens, 2008. Multiple resources and mental workload. Human Factors 50 (3), 449–455. 10.1518/001872008X288394.
© 2022 Walter de Gruyter GmbH, Berlin/Boston