A growing body of literature has established that spatiotemporal metaphoric reasoning processes can be affected by the active experience of motion (such as actual motion, fictive motion, and abstract motion). In this study, the effects of metaphoric gestures on spatiotemporal metaphor use and the effects of addressee perspective on comprehension of these gestures are investigated. Participants were asked an ambiguous question that yields different responses depending on which metaphor variant is used. This question was asked with simultaneously produced metaphoric gestures depicting either sagittal or lateral motion, and was presented to participants from either a shared (side-by-side) or an opposing (face-to-face) perspective. Findings suggest not only that gesture influences metaphoric reasoning in discourse interpretation, but also that addressees reliably interpret gestures from their own perspective, even when it is not shared with the speaker. Furthermore, conversational bystanders similarly adopt the perspective of the addressee in gesture comprehension.
Spatiotemporal metaphors, in which spatial language is used to talk about time, are pervasive throughout everyday language. The general conceptual metaphor TIME IS SPACE and the related metaphor TIME IS MOTION together reflect that the experience of time is the experience of motion through space. Time itself may be in motion: the week flew by, the day slowed to a crawl, spring break is fast approaching; or the self may be moving through time: we’re coming up fast on spring break; we’re approaching the weekend. These different conceptualizations of the relationship between time, space, and the self all spring from the continual experience of time and space together, and thus we reason about our experience of time in terms of our experience of space. It has been shown that this kind of systematic metaphoric relationship, in which physically experienced domains (such as space) are used to reason about abstract domains (such as time), is not only pervasive throughout language, but is an integral aspect of general cognition (Kövecses 2002; Lakoff and Johnson 1980, 1999).
TIME IS SPACE in particular has been proposed to be a cognitive universal (Evans 2004; but see also Sinha et al. 2011 for a potential exception), and is often realized with the self conceptualized as the spatiotemporal deictic center, with the future in front of the self and the past behind (although the reverse may also be true, as in Aymara as discussed in Núñez and Sweetser 2006). While this general metaphoric relationship between time and space is posited to be a universal, different variants are attested cross-linguistically. Given its posited universality, this particular metaphor system has been the subject of extensive psychological and linguistic inquiry, especially with regards to ego-reference point (Ego-RP) metaphorical variants where time is relativized to the position of the self (‘ego’). These spatiotemporal metaphors conceive of time in one of two ways. Either the self is moving forward through time in the ego-motion variant: we’re approaching spring break; or time is moving towards the self in the time-motion variant: spring break is approaching. Both of these variations are attested in English. In contrast, time-reference point (Time-RP) spatiotemporal metaphors conceptualize time as a sequence of events without the self as a landmark: Christmas follows Thanksgiving or the day dragged on make no reference to the ego either as a landmark or as a figure in motion (Núñez et al. 2006).
Prior studies have shown that current experiences of space and motion lead English speakers to think about time using one of the two Ego-RP metaphors. McGlone and Harding (1998) and many following studies (e. g., Boroditsky 2000; Boroditsky and Ramscar 2002; Matlock et al. 2005; Ramscar et al. 2010 among others) investigate this by posing an ambiguous question: Wednesday’s meeting has been moved forward two days. What day is the meeting on now? This question has two potential answers. If the ego-motion metaphor is used, then the meeting will be moved to Friday because the perspective of the moving ego sends the date farther ahead into the future. In contrast, if the time-motion metaphor is used, then the meeting will be moved to Monday because time is in motion towards the self. When the meeting moves forward, it moves closer to the self and therefore closer to the present. This ambiguous question has been used to show that reasoning with either of the Ego-RP variants can be primed in a variety of contexts. In general, it appears that spatiotemporal metaphor priming is driven by active use of spatial imagery. Boroditsky (2000) and Boroditsky and Ramscar (2002) have shown that thinking about or experiencing motion, such as viewing visual images that invoke spatial imagery, traveling by plane, or imagining rolling in a chair, can reliably elicit one metaphor or another depending on whether the subject is thinking about themselves as moving or stationary. This work was extended by Matlock et al. (2005) to fictive motion, in which non-actual motion encoded in linguistic contexts such as the bike path runs alongside the creek elicits ego-motion reasoning, whereas comparable sentences without fictive motion (the bike path is next to the creek) do not reliably prime motion-based metaphorical reasoning. These results were further extended into the domain of abstract motion by Matlock et al. (2011), wherein serial processes that involve mentally “moving” from one item in a sequence to the next (such as counting or reciting the alphabet) were also shown to prime the Ego-RP variants based on whether the abstract motion was “forward” (e. g., counting up “away” from zero) or “backward” (e. g., counting down “toward” zero). Although the motion itself may be fictive or abstract, the motor control associated with it must be mentally simulated (Sullivan and Barth 2012). Furthermore, such variation in temporal reasoning can also be primed by non-spatial factors, such as individual differences in personality and lifestyle (Duffy and Feist 2014).
The following studies seek to add to the body of literature on spatiotemporal metaphor and gesture comprehension by considering the influence of gesture on spatiotemporal metaphor reasoning in relationship to perspective. Gestures constitute a critical point of inquiry with regards to communication, in that they are tightly linked with speech both temporally and in terms of the information they carry (e. g., Clark 1996; Kendon 1986, 2004; McNeill 1992, 2005). Metaphoric gestures iconically represent the experiential source domain of a conceptual metaphor, often in conjunction with spoken language encoding the abstract domain of the metaphor (McNeill 1992; Cienki and Müller 2008). For example, let’s deal with that later only has temporal language (“later”), but an accompanying ‘co-timed’ forward-moving gesture provides the spatial source domain of the ego-motion spatiotemporal metaphor. The gesture represents the forward motion component of the ego-motion metaphor, and so the complete speech+gesture ‘package’ comprises complementary information drawn from both communication streams (Bavelas and Chovil 2000; McNeill and Duncan 2000).
While it has been shown that gesture is taken by addressees as an integrated part of the speaker’s intended message (e. g., Goldin-Meadow and Sandhofer 1999; Hostetter 2011; Kelly et al. 2014, 2010; Kendon 1994), less is known about how semantically contentful gestures are understood by addressees, and how gesture comprehension influences the addressee’s thought processes. While gesturing clearly constitutes a component of the speaker’s intended message, it does not require conscious engagement on the part of the addressee or speaker; metaphoric gestures are largely produced and registered on a subconscious level. Thus metaphoric gesture presents a natural point of investigation: It is within the communicative context and carries information reflective of the metaphor the speaker is using, but it is non-linguistic in nature. Hence, if a metaphoric gesture influences the way addressees respond to an ambiguous spatiotemporal metaphor in speech, this suggests that the gesture influences how they are reasoning about time.
Prior work by Jamalian and Tversky (2012) has provided initial evidence that presentation of metaphoric gestures can influence the metaphors people reason with, including sagittal gestures comprising forwards/backwards motion. Their findings indicate that a forward sagittal gesture (away from the self) primes use of the ego-motion metaphor, and a backwards sagittal gesture (toward the self) primes use of the time-motion metaphor when speaker and addressee are standing side-by-side. Other prior spatiotemporal priming studies focusing on the Ego-RP metaphors have presented the ambiguous test question in a variety of contexts, including one-on-one in-person, one-to-many in-person (i. e., students in a lecture hall), printed text, and text viewed on a computer screen (Boroditsky and Ramscar 2002; Kranjec 2006; Núñez et al. 2006).
This study seeks to focus specifically on the roles of context and perspective in gestural comprehension and their effects on temporal reasoning. The issue of perspective in multimodal communication has begun to receive increased attention as it has become apparent that an understanding of communicative context must recognize that para-linguistic communication is inherently produced from different viewpoints, just as language is (e. g., Sweetser 2012). For example, a gesturer can enact the actions of a character using her whole body, such that her viewpoint aligns with that of the character (similar to the use of the first person), or represent entire actions and objects using her hands (similar to the use of the third person) (see McNeill 1992 for an overview of character and observer viewpoint in gesture).
It is also known that interlocutor perspective interacts with gesture, such that speakers take their addressee’s locations – and hence perspectives – into account when producing deictic gestures (Özyürek 2000). Narayan (2012) has shown that interlocutors align their gestures’ perspectives in reflection of a shared cognitive alignment, suggesting a cognitive model that includes a shared perspective. Most work focuses on gesture production as it relates to perspective. Less studied, however, is how addressees comprehend gestures in the context of their own perspectives, and the degree to which they attend to and accommodate the perspective of the speaker or maintain their own perspectives. In a non-conversational experiment, de la Fuente et al. (2015) found that observers in a face-to-face setup maintain their own, egocentric, perspective while primed by another individual’s motor actions. Observers interpreted the right or left side of the actor as “good” or “bad” depending on their position relative to the actor. When the observer stood behind the actor, the observer and actor’s valence judgments were in agreement, reflecting their shared perspective. When observer and actor were face-to-face, their judgments were reversed, suggesting each maintained their own perspective with regards to the actor’s hand movements. In this study the actor’s hand movements were not gestures per se, but interactions with physical objects in either fluent or disfluent contexts. While de la Fuente et al. (2015) suggest that observers may maintain their own perspective, it remains to be seen whether the same holds in more conversational settings, where the interlocutors’ gestures are produced as part of a communicative utterance. Consequently, this article seeks to investigate the influence of metaphoric gesture on metaphoric reasoning and the role of addressee perspective in the act of comprehending such metaphoric gestures.
It is important to consider which types of gestures could potentially prime use of the Ego-RP metaphors. Moore (2006: 232) observes that “ego-centered time has to do with the experience of ‘now’ and the constantly changing status of times relative to ‘now’”. In other words, Ego-RP reasoning requires a sense of a deictic present. This accords with Casasanto and Jasmin’s (2012) observation that sagittal gestures co-occur with temporally deictic language. That is, sagittal gestures encode use of the Ego-RP metaphor by making use of the physical self as the deictic present and the space in front of the self as the future, with events further in the future located farther in front of the speaker.
Given that the ambiguous test question discussed in section 1.1 refers to metaphoric forward motion, use of sagittal forward/backward gestures is appropriate in this context. The forward/backward motion of the gesture reflects the spatiotemporal metaphor encoded in the speech. The speaker’s use of next Wednesday establishes sufficient common ground with the addressee such that both will understand that the meeting is to occur at a particular point in time. As the speaker is using the TIME IS SPACE metaphor, this entails that the present is metaphorically located at her body and the future is in front of her. Therefore, as the meeting is scheduled in the future, the speaker can manually position it at a location in front of herself, and then move the meeting forward farther into the future with a forward gesture away from herself. Conversely, she can move the meeting closer to the present (as anchored by the body) with a backwards gesture towards herself (see Figure 1). In conjunction with the ambiguous test question, this leads to the prediction that the forward, away from speaker gesture should prime the ego-motion metaphor and thus more Friday responses, while the backward, towards speaker gesture should prime the time-motion metaphor and thus more Monday responses.
This aspect of the present study serves to replicate the study in Jamalian and Tversky (2012), which had a similar design and found that, as predicted, most participants who saw the away from speaker gesture responded that the meeting was moved to Friday for the ambiguous test question, and the majority of participants who saw the toward speaker gesture provided a Monday response. Their study was performed with the speaker and addressee standing side-by-side, so that they shared the same perspective. However, Jamalian and Tversky did not include a control condition without a gesture prime. Therefore, the present study will include a no gesture control, to provide a comparative baseline. This is necessary because control baselines reported in prior studies vary widely (e. g., ranging from 29 % to 77 % Friday responses; Hauser et al. 2009; Sullivan and Barth 2012). Hence, results for the gesture conditions must be interpreted in light of a study-specific baseline.
Since natural conversation frequently occurs in face-to-face contexts, the issue of perspective must also be considered. In face-to-face conversation, the addressee must make a choice between maintaining his own perspective or taking that of his interlocutor. A gesture away from the speaker is a gesture toward the addressee, and vice-versa; hence, it could be interpreted egocentrically as toward the addressee, or other-centrically as away from the addressee. In contrast, if speaker and addressee are standing side-by-side, a gesture away from the speaker is also a gesture away from the addressee, and a gesture toward the speaker is a gesture toward the addressee. Thus, in addition to replicating the influence of gestural direction on metaphoric reasoning, this study will also vary the perspective of the addressee. When face-to-face, the addressee may either (a) take the perspective of his interlocutor, in which case the away from speaker condition would prime ego-motion and the toward speaker condition time-motion; or (b) retain his own perspective and thus perceive the away from speaker condition as a motion towards the self, priming time-motion, and take the toward speaker condition as a motion away from the self, priming ego-motion.
Furthermore, prior studies as described above have shown that mentally simulating motion primes which metaphoric variant subjects reason with (Boroditsky 2000; Boroditsky and Ramscar 2002; Matlock et al. 2005, 2011; Sullivan and Barth 2012). Subjects’ experience of motion in their environment, or of their own motion, may therefore affect which metaphor they use, and the study design must account for the possibility that current physical experience influences results. Since the experience of self-motion primes ego-motion, subjects will be presented with the test question in either motion (e. g., while on a bus) or stationary conditions (e. g., sitting in an office).
Given the above design, it is predicted that the sagittal away from speaker gesture will produce more Friday responses overall, because it is congruent with the ego-motion metaphor. The sagittal toward speaker gesture is predicted to produce more Monday responses overall, because it is congruent with the time-motion metaphor. It is unclear if perspective will affect results, given that face-to-face communication is a common experience (Goodwin 1981 and many others). Furthermore, it has been shown that even in the absence of physical co-presence, addressees choose to take either egocentric or other-centric perspectives based on how they perceive the interlocutor’s ability to adapt their utterance for the addressee (Duran et al. 2011). Given that this is a familiar communicative context, participants may assume their interlocutor is easily able to convey the question from the addressee’s perspective; on the other hand, speakers are less likely to take the perspective of the addressee when they are physically co-present, possibly because the addressee takes on a greater role in ensuring mutual understanding (Schober 1995). This factor may cause addressees to assume the speaker is maintaining speaker perspective. Furthermore, non-conversational observers maintain their own perspective (de la Fuente et al. 2015). Hence, because spatial perspective-taking is complex and flexible, it is unclear whether face-to-face participants will accommodate to the speaker and take the speaker’s perspective, or maintain their own.
Finally, physical experience is not predicted to affect results, as participants will not be intentionally primed to attend to their current physical experience. Prior studies have shown that spatiotemporal metaphor priming occurs when people actively attend to their spatial experiences, not when they merely undergo motion (Boroditsky and Ramscar 2002).
Experiment 1 involved 168 speakers of English (93 female, 75 male) recruited in Baltimore, Maryland.
Each participant was asked the ambiguous test question, “Next Wednesday’s meeting has been moved forward two days. What day is the meeting on now?” Participants were expected to respond with either Friday or Monday. There were 12 conditions, comprising all possible combinations of the three gesture, two physical experience, and two perspective situations (see Table 1).
| Main condition | Variable | Predicted effect |
| Gesture | Sagittal away gesture | Increased Fridays |
| | Sagittal toward gesture | Increased Mondays |
| Physical experience | Motion | No effect |
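The full 3 × 2 × 2 design can be enumerated mechanically. The sketch below simply crosses the three factors described in the text (the condition labels are paraphrases, not the study's exact wording):

```python
from itertools import product

# Factor levels as described in the text: three gesture conditions,
# two physical-experience conditions, and two perspective conditions.
gestures = ["away from speaker", "toward speaker", "no gesture"]
experiences = ["motion", "stationary"]
perspectives = ["side-by-side", "face-to-face"]

# Cross the factors to obtain every cell of the design.
conditions = list(product(gestures, experiences, perspectives))
print(len(conditions))  # 3 x 2 x 2 = 12
```

With 168 participants spread evenly over these cells, each condition would contain 14 participants.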
Participants were asked the ambiguous test question in one of three gesture conditions (gesture away from speaker; gesture towards speaker; or no gesture). In the away from speaker and toward speaker conditions, the experimenter asked the test question while producing a single gesture with her right hand held horizontally in front of her, bending at the elbow either away from or towards the body respectively, co-timed with the word “forward” (see Figure 1). In the control condition, the experimenter asked the test question without gesturing, keeping her hands at her sides.
In order to account for the fact that a person’s physical experience may influence which metaphor is chosen, 84 participants were asked the test question while they were simultaneously or very recently in motion (i. e., on a bus, in a car, walking) and 84 participants were asked the test question after they had been stationary for a lengthy period of time (i. e., sitting in an office, studying in a library).
In addition, in order to investigate the use of perspective by participants, 84 participants were asked the test question while standing side-by-side with the experimenter and 84 participants were asked the test question while standing face-to-face with the experimenter.
As predicted, when participants saw a gesture towards the speaker they were more likely to provide a Monday response. It was found that 70 % of the 56 participants in this condition responded with Monday, and 30 % responded with Friday; this can be compared to the 56 control participants who saw no gesture. Of the control participants 62.5 % responded with Friday and 37.5 % responded with Monday. This indicates that the towards gesture condition increased the likelihood of a Monday response over the baseline  (Fisher’s Exact Test, p<0.01). However, contrary to prediction, only 55.4 % of the 56 participants who saw a gesture away from the speaker responded with Friday, and 44.6 % responded with Monday. No significant difference between the away and control groups was found (Fisher’s Exact Test, p>0.05; Figure 2).
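These comparisons can be reproduced from the reported percentages. Converting them to counts (70 % of the 56 toward-gesture participants is 39 Monday responses; 62.5 % of the 56 controls is 35 Friday responses; 55.4 % of the 56 away-gesture participants is 31 Friday responses), a minimal sketch of the two Fisher's exact tests, using scipy, is:

```python
from scipy.stats import fisher_exact

# Counts reconstructed from the reported percentages (n = 56 per group);
# rows = condition, columns = (Monday, Friday) responses.
toward_vs_control = [[39, 17],   # toward speaker: 70 % Monday
                     [21, 35]]   # no gesture:     37.5 % Monday
away_vs_control = [[25, 31],     # away from speaker: 44.6 % Monday
                   [21, 35]]     # no gesture:        37.5 % Monday

odds_toward, p_toward = fisher_exact(toward_vs_control)
odds_away, p_away = fisher_exact(away_vs_control)

# The toward gesture differs reliably from baseline; the away gesture does not.
print(p_toward < 0.01, p_away > 0.05)
```

Because the exact per-cell counts are reconstructed by rounding, the p-values here approximate rather than reproduce the reported statistics.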
A logistic regression model was fitted to the data to test the potential effects of the three main conditions (i. e., gesture, perspective, and physical experience) and demographic variables (gender and native language). Among the test conditions, gesture was found to significantly improve the fit of the model (p < 0.01), suggesting that the direction of gesture significantly contributed to participants’ responses. Perspective was also significant (p = 0.04). Physical experience (p > 0.05) and gender (p > 0.05) were found to be non-significant, suggesting that neither variable influenced participant responses. Native language (English or non-English) was marginally significant (p = 0.07). As a result, the 14 non-native English speakers were removed from the data set, leaving a total of 154 participants.
A second logistic regression model was then fitted to the native English speakers’ data set, using only the three test conditions as main factors. Again the gesture condition significantly improved the fit of the model (p < 0.01; Table 2), as did perspective (p < 0.05; Table 2). Physical experience was not significant (p > 0.05; Table 2).
| Predictor | Estimate | Std. error | z value | p |
| Gesture vs. no gesture | −0.1102 | 0.4050 | −0.272 | 0.78562 |
Since gesture condition and perspective were both significant factors affecting participants’ responses, a third model, testing the interaction of gesture and perspective with physical experience as a main effect, was fitted to the responses of native English speaking participants. A likelihood ratio test comparing this model to the prior one found no significant difference between the two (Table 3). The lack of a difference between the main effects only model and the model including a gesture × perspective interaction suggests that participants’ responses within the three gesture conditions did not differ across perspective conditions.
| Model | Residual Df | Residual Dev | Df | Deviance | p (χ²) |
| Main effects model | 149 | 196.00 | | | |
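The likelihood ratio test itself is simple arithmetic on the two models' residual deviances: the drop in deviance is compared against a χ² distribution with degrees of freedom equal to the number of added parameters. A sketch follows, using the reported main-effects deviance (196.00 on 149 df) and a purely hypothetical deviance for the interaction model, which Table 3 does not report:

```python
from scipy.stats import chi2

# Reduced (main effects) model, as reported: deviance 196.00 on 149 df.
dev_reduced, df_reduced = 196.00, 149
# Interaction model: gesture (3 levels) x perspective (2 levels) adds
# 2 parameters. The deviance below is hypothetical, for illustration only.
dev_full, df_full = 195.10, 147

lr_stat = dev_reduced - dev_full   # likelihood ratio statistic
df_diff = df_reduced - df_full     # 2 extra parameters
p = chi2.sf(lr_stat, df_diff)      # upper-tail chi-squared probability
print(round(p, 3))                 # ~0.64: no evidence the interaction helps
```

A non-significant p-value here, as in the study, indicates that the interaction terms do not improve the model beyond the main effects.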
However, when comparing the English speakers’ gesture responses by perspective, a difference appears: 65 % of participants in the side-by-side condition viewing the away from speaker gesture responded with Friday and 35 % with Monday, as predicted. In contrast, only 46 % of face-to-face participants viewing the away from speaker gesture responded with Friday and 53 % with Monday. In other words, in the away from speaker condition significantly more side-by-side participants preferred Friday than face-to-face participants (one-sided Binomial Exact Test, p < 0.05). Nonetheless, participants in both side-by-side and face-to-face conditions viewing the toward speaker gesture favored Monday (68 % and 76 %, respectively) over Friday (32 % and 24 %), with no significant difference between the two groups (Binomial Exact Test, p > 0.05). Both perspective conditions produced responses congruent with the prediction that the toward speaker gesture would prime the time-motion variant and thus Monday responses. The reversal in the away from speaker responses between perspective conditions suggests that addressees may be maintaining their own perspective, but this is inconsistent with the toward speaker results.
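The one-sided comparison can be sketched as an exact binomial test. The per-perspective cell sizes after exclusions are not reported, so the counts below are illustrative (12 of 26 face-to-face away-gesture participants choosing Friday approximates the reported 46 %), with the side-by-side Friday rate of 65 % as the null proportion:

```python
from scipy.stats import binomtest

# Hypothetical counts approximating the reported percentage:
# 12/26 face-to-face away-gesture participants (~46 %) chose Friday.
k_friday, n = 12, 26

# One-sided exact test: is the face-to-face Friday rate lower than
# the 65 % observed in the side-by-side condition?
result = binomtest(k_friday, n, p=0.65, alternative="less")
print(result.pvalue < 0.05)  # the face-to-face rate is reliably lower
```

With these illustrative counts the exact p-value falls below 0.05, mirroring the reported significance, though the true value depends on the actual cell sizes.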
Overall, Experiment 1 results partially supported our predictions and replicated the results of Jamalian and Tversky (2012), as summarized in Table 4. However, the results of the away from speaker condition, particularly in the face-to-face condition, contradicted both expectations and the findings of Jamalian and Tversky.
| Condition | Prediction | Result |
| Away gesture | More Friday responses | Prediction not supported; fewer Friday responses than control group |
| Toward gesture | More Monday responses | Prediction supported |
| Perspective | Effect uncertain | Significant main effect; no gesture × perspective interaction |
| Physical experience | No effect | Prediction supported |
In Experiment 1, the results for the gesture condition, which included both gesture variants (either a gesture away from the speaker or a gesture toward the speaker) and the no gesture control, fully supported predictions in the toward speaker condition. The gesture toward the speaker elicited more Monday responses as predicted because it evokes the metaphoric motion of time towards ego from the speaker’s perspective. The no gesture condition elicited more Friday responses, reflecting a baseline preference also found in other studies (e. g., Sullivan and Barth 2012).
Nevertheless, not all of the results proved to be as predicted, as not all results for the away gesture condition were congruent with the research hypothesis. It was predicted that the gesture away from the speaker, which evokes the metaphoric motion of the ego through time, would yield more Friday responses than Monday responses due to its congruence with the ego-motion metaphor. In fact, the gesture away from the speaker should have produced more Friday responses than the control group, since this experiment’s control provides a baseline preference for Friday responses (59 % of responses) over Mondays. Instead, there was no overall significant difference found in Monday versus Friday responses for the away from speaker gesture. These results are contrary to those of Jamalian and Tversky (2012), who did find a strong preference (over 80 %) for Friday responses in this condition.
The results for the effect of perspective (i. e., shared side-by-side perspective or opposing face-to-face perspective) contributed to the fit of the model, but there was not a significant effect of the interaction between gesture and perspective. When considering only those in the shared perspective condition, those in the away and control conditions strongly preferred the ego-motion metaphor, as predicted (65 % in the away condition and 70 % of controls responded with Friday). Furthermore, 71 % of participants in the toward condition responded with Monday, as expected. These results can be compared to those in the opposing perspective condition: Both the control and away conditions demonstrated a decreased preference for Friday (46 % of away participants and 46 % of control participants), contrary to expectations. Conversely, the toward condition participants maintained the strong Monday preference (74 %).
One potential explanation for this divergence in results is that addressees are maintaining their own perspective in the face-to-face condition: They are interpreting the speaker’s away gesture as a gesture toward the addressee, thus priming a preference for Mondays. In the shared perspective condition, a gesture towards the speaker is also towards the addressee and a gesture away from the speaker is also away from the addressee. However, in the opposing perspective condition, the gesture towards the speaker is away from the addressee and the gesture away from the speaker is towards the addressee. As a result of this contrast, it is possible that perspective interacts with how participants perceived the gesture direction. This accords with de la Fuente et al.’s (2015) finding that face-to-face observers maintain their own perspective. If this were the case, it would be predicted that in the opposing perspective condition, (a) the gesture toward the speaker would prime ego-motion Friday responses and (b) the gesture away from the speaker would prime time-motion Monday responses. These results support the second prediction (b), but not the first (a). Furthermore, perspective should not affect the control condition, where no gesture is present.
Physical co-presence has been extensively argued to be the natural state of language use (e. g., Bavelas and Gerwing 2007; Goodwin 1981; Fillmore 1998; and many others). Thus, it is clear that English speakers are accustomed to viewing the gestures of their interlocutors in this state, which often involves opposing perspectives. Speakers will verbally and gesturally refer to their own left and right when giving directions intended to be from the viewpoint of the character navigating the route (Emmorey et al. 2001; Taylor and Tversky 1996). Furthermore, they are also more likely to maintain their own egocentric perspective when addressees are physically co-present (Schober 1995).
In typical face-to-face discourse, if the speaker maintains her own perspective, the addressee has to take the perspective of the speaker in order to accurately mentally model the speaker’s intent. In other words, addressees are accustomed to calculating spatial reference from the speaker’s deictic center and frame of reference as opposed to automatically taking their own. Consequently, in this study addressees may be interpreting the gestures from the speaker’s perspective, which is congruent with what is known about native English speakers and their choice of character viewpoint gestures (i. e., they typically take the perspective of the speaker). However, the mixed results of the opposing perspective condition, and the nonsignificant results of the gesture × perspective interaction, suggest that English speakers may not reliably do so. The effects of perspective will be further explored in Experiment 2.
An alternative explanation may lie in the design of the experiment. In this experiment, after the speaker poses the test question she naturally returns her hand to a resting position. The recovery of the hand to resting reflects the natural stroke-hold-recovery phases in gesture production (McNeill 1992). Given that this second movement is a movement toward the speaker, it may have confounded the away from speaker gesture results by introducing a second movement in the opposite direction of the intended test condition (see Figure 3).
In contrast, in the Jamalian and Tversky (2012) study, the investigator used both hands rather than just one to produce the gesture; therefore, it is possible that the increased markedness of the symmetrical two-handed gesture may have improved the reliability of their study design. In order to further investigate this issue, Experiment 2 will control for the return movement and potential variation in gesture production between trials.
Results indicated no effect of physical experience; that is, whether a participant had been in a recent state of motion or stasis did not influence their response to the test question. Most previous research on spatiotemporal metaphor priming involves actively attending to spatial imagery that requires imagined motor control (Sullivan and Barth 2012), or priming by sequences that invoke abstract motion (Matlock et al. 2011). Conversely, in this study the participants’ physical experience of motion was not particularly salient, and they were not asked to attend to it prior to being presented with the test question. For example, participants were not first asked how long they had been traveling or how long they had been sitting. As participants were not primed to attend to their physical experience status, it had no effect on which metaphor they were using. These results support the claim that the spatial priming of spatiotemporal metaphors depends on active spatial processing rather than passive experience, consistent with previous studies that required such priming prior to the test question.
Experiment 2 was designed to replicate and expand upon the results of Experiment 1, in addition to resolving potential issues that may have contributed to the inconsistent away from speaker gesture results of Experiment 1. The basic 3-way design of forward gesture/backward gesture/no gesture from Experiment 1 was maintained. However, participants were presented with the gesture and test question in a computer-mediated video format rather than in person. This had three benefits: (a) the gesture is consistent across every trial; (b) the video ends as soon as the question is asked, before the forward and backward gestures return to resting position, so the stimulus presents only the intended direction of motion; and (c) perspective effects can be observed by comparing the side-by-side results of Experiment 1 (in-person) with those of Experiment 2 (computer-mediated). In computer-mediated communication such as videoconferencing, perspective is necessarily in opposition, as addressees face each other through the computer screen.
In Experiment 1, an effect of perspective (i. e., shared vs. opposed) was observed: participants reliably took the perspective of the speaker in the shared condition, but responses varied in the opposing condition. The mixed effects of opposing perspective in Experiment 1 may be due to the confounding factors in the gesture stimuli discussed above. Hence, in the controlled format of Experiment 2 (necessarily an opposing-perspective setting), it is predicted that responses will be the reverse of those in the shared condition of Experiment 1: the toward speaker gesture – a gesture away from the addressee – will prime the ego-motion variant and hence more Friday responses, and the away from speaker gesture – a gesture toward the addressee – will prime the time-motion variant and hence more Monday responses.
Due to the observed preference in gesture production for lateral rather than sagittal spatiotemporal metaphoric gestures (Casasanto and Jasmin 2012), the potential contribution of lateral (i. e., left-to-right or right-to-left) gestures to disambiguation of the test question is also tested. While the lateral gestures do not constitute forward movement with reference to the speaker, the rightwards-moving gesture may still be interpreted as moving the deadline forward in time, as it represents forward motion of time on the lateral axis (Cooperrider and Núñez 2009). This lateral association between space and time is posited to be the result of converging cultural influences, such as orthographic systems, temporal artifacts like calendars, and other practices such as counting on a horizontal number line (Winter et al. 2015).
However, the lateral gesture is side-to-side for both speaker and addressee in a first-person perspective (i. e., when the speaker is directly facing the camera or addressee). Given the prediction that only sagittal motion (i. e., toward or away from the addressee) should impact temporal reasoning, the lateral gesture should not prime use of either Ego-RP variant in first-person perspective. In contrast, it may have an effect in another communicative configuration: that of the third-person bystander, a non-active conversational participant (e. g., Goffman 1976). In this situation, the bystander views the scene from a perspective such that a movement lateral to the speaker is a sagittal movement from the camera’s point of view (see Figure 4).
To test both these possibilities, the stimuli will be presented in either a first-person perspective condition, wherein the speaker directly addresses the camera (a parallel to the face-to-face condition of Experiment 1), or a third-person perspective condition, wherein the camera constitutes a bystander watching a side view of the speaker asking the test question to an addressee. Thus it is predicted that lateral gestures will only have an effect in the third-person condition, where they constitute sagittal motion from the camera’s – and hence the participant’s – perspective.
To summarize, it is predicted that the sagittal away from speaker gesture will produce more Monday responses overall, and the sagittal toward speaker gesture more Friday responses overall. The lateral gestures should only affect results in the third-person perspective, where they are perceived by the viewer as forward/backward motion.
Experiment 2 involved the selection of 510 native speakers of English on Amazon Mechanical Turk (http://www.mturk.com). Forty-four people were removed from the data set for either not consenting or not giving a Monday or Friday response, leaving a total of 466 participants (M=275; F=190; 1 unreported), who were randomly assigned to one of five possible conditions (see Table 5).
| Main condition interactions | Variable | Predicted effect |
|---|---|---|
| Sagittal gestures | Sagittal away gesture | Increased Mondays |
| Sagittal gestures | Sagittal toward gesture | Increased Fridays |
| Lateral gestures + 1st person | Lateral left to right gesture | No effect |
| Lateral gestures + 1st person | Lateral right to left gesture | No effect |
| Lateral gestures + 3rd person | Lateral left to right gesture | Increased Fridays |
| Lateral gestures + 3rd person | Lateral right to left gesture | Increased Mondays |
| Control | No gesture | No effect |
The same question from Experiment 1 (“Next Wednesday’s meeting has been moved forward two days. What day is the meeting on now?”) was asked in video format. Participants were asked the ambiguous test question in one of five gesture conditions (gesture away from speaker; gesture toward speaker; gesture left to right; gesture right to left; or no gesture). The away from speaker, toward speaker, and no gesture conditions were the same as in Experiment 1. In the left to right and right to left conditions, the experimenter asked the test question while holding her right hand perpendicular to her body in front of her and, bending at the elbow, produced a single gesture moving either left to right or right to left respectively, co-timed with the word “forward” (see Figure 5).
Two hundred and ninety-two participants were asked the test question in a first-person perspective (i. e., the experimenter faced the camera in order to ask the question directly to the participant), while 174 were asked the test question in a third-person perspective (i. e., the experimenter faced another person while asking the question, with the camera facing both people from the side) (see Figure 6).
As predicted, participants who saw a gesture toward the speaker were more likely to provide a Friday response than those who saw the away gesture. Of the 117 participants who saw a gesture toward the speaker, 66 % responded with Friday and 34 % with Monday. Of the 119 participants who saw a gesture away from the speaker, 53 % responded with Monday and 47 % with Friday. Of the 71 control participants who saw no gesture, 62 % responded with Friday and 38 % with Monday (see Figure 7). Against this baseline preference for Friday responses, the away condition produced a markedly higher rate of Monday responses (53 % vs. 38 % in the control).
A sum-coded logistic regression model was fitted to the data to test the potential effects of the two main conditions (i. e., gesture and perspective) and their interactions (see Table 6). Among the test conditions, gesture was found to significantly improve the fit of the model (p < 0.05). The interaction of perspective and gesture marginally improved the model fit (p=0.097).
| Condition | df | Deviance | p (χ² distribution) |
|---|---|---|---|
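The model-comparison logic behind the deviance table can be illustrated with a short sketch. This is not the authors’ analysis code; it simply shows, for a hypothetical 1-degree-of-freedom comparison of nested models, how a drop in deviance maps to a p-value via the χ² distribution (for df = 1, the χ² survival function has the closed form erfc(√(x/2))).

```python
from math import erfc, sqrt

def lrt_pvalue_df1(deviance_reduced, deviance_full):
    """p-value for a 1-df likelihood-ratio (deviance) test of nested models.

    The drop in deviance between a reduced and a full model is
    asymptotically chi-square distributed; for df = 1 the survival
    function is erfc(sqrt(x / 2)).
    """
    delta = deviance_reduced - deviance_full  # improvement in fit
    return erfc(sqrt(delta / 2))

# Hypothetical deviances: a drop of 3.84 sits right at the p = 0.05 threshold.
p = lrt_pvalue_df1(100.0, 96.16)
```

Larger drops in deviance yield smaller p-values, which is why a predictor that "significantly improves the fit of the model" corresponds to a low p in the χ² column.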
Turning to the effects of the specific conditions, there was a significant effect of sagittal gesture direction (p < 0.05) and a significant interaction between gesture type (sagittal vs. lateral) and perspective (p < 0.05). Specifically, the away gesture was more likely to elicit a Monday response and the toward gesture a Friday response. There was no significant interaction between sagittal gesture direction and perspective (p > 0.05); that is, participants did not respond differently to sagittal gestures across perspective conditions (see Table 7). In other words, participants’ responses to lateral gestures differed in the third-person condition, but their responses to sagittal gestures did not. Given these findings, we now describe the results in more detail.
| Condition | Estimate | Std. error | z value | p |
|---|---|---|---|---|
| Gesture vs. no gesture | 0.29001 | 0.34127 | 0.850 | 0.3954 |
| Sagittal vs. lateral gesture | 0.29599 | 0.26024 | 1.137 | 0.2554 |
| Lateral gesture (leftward vs. rightward) | 0.55880 | 0.36456 | 1.533 | 0.1253 |
| Sagittal gesture (away vs. toward) | 0.82400 | 0.37148 | 2.218 | 0.0265* |
| Perspective (3rd vs. 1st person) | −0.29502 | 0.22536 | −1.309 | 0.1905 |
| Lateral vs. sagittal × 3rd vs. 1st | −1.25628 | 0.50508 | −2.487 | 0.0129* |
| Leftward vs. rightward × 3rd vs. 1st | −0.85605 | 0.85513 | −1.001 | 0.3168 |
| Away vs. toward × 3rd vs. 1st | −0.09003 | 0.53776 | −0.167 | 0.8670 |
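The contrasts in Table 7 come from sum (deviation) coding of the categorical predictors. As a minimal sketch of that coding scheme (the level names here are hypothetical and do not reproduce the actual model matrix of the analysis):

```python
def sum_code(levels, reference):
    """Sum-coded (deviation) contrast columns for a categorical factor.

    Each non-reference level gets one column: +1 for observations at that
    level, -1 at the reference level, 0 otherwise, so that coefficients
    are interpreted as deviations from the grand mean rather than from a
    baseline category.
    """
    columns = [lvl for lvl in levels if lvl != reference]

    def encode(value):
        if value == reference:
            return [-1] * len(columns)
        return [1 if value == col else 0 for col in columns]

    return columns, encode

# Hypothetical factor: gesture direction, with "none" as the reference level.
columns, encode = sum_code(["away", "toward", "none"], reference="none")
```

Under this coding, a row for an "away" trial is encoded as [1, 0], a "toward" trial as [0, 1], and a "none" trial as [-1, -1].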
Turning first to the sagittal gestures, pairwise comparisons of the away and toward gestures in each perspective condition show that participants responded similarly irrespective of perspective (Fisher’s Exact test, p > 0.05; see Figure 8). For the away from speaker condition, 51 % of first-person perspective participants and 56 % of third-person participants responded with Monday, reflecting the overall preference for the time-motion metaphor in the away from speaker condition. Similarly, for the toward speaker condition, 69 % of first-person participants and 63 % of third-person participants responded with Friday, reflecting the overall preference for the ego-motion metaphor in the toward speaker condition.
Next, a comparison of the left to right gesture responses in each perspective condition shows that participants respond differently (Fisher’s Exact Test, p < 0.05; Figure 9). Specifically, 45 % of first-person participants responded with Friday, compared to 78 % of third-person participants. In contrast, comparison of the right to left gesture responses in each perspective condition shows similar responses (Fisher’s Exact Test, p > 0.05): 59 % of first-person participants responded with Friday, as did 72 % of third-person participants. Furthermore, comparing the two third-person lateral conditions to each other, there is no significant difference (Fisher’s Exact Test, p > 0.05). Hence, while third-person responses to the two lateral gestures do not differ from each other – contra predictions – they do differ from first-person responses in the left to right condition.
Finally, we can also investigate the difference between the leftward and rightward gesture responses. Neither third-person condition differed from the baseline Friday response rate of 62 % or from the other (Fisher’s Exact Test, p values > 0.05). In contrast, the left to right gesture response in the first person, with a 45 % Friday response rate, is marginally different from the baseline (Fisher’s Exact Test, p=0.057). The right to left first-person condition, with a 59 % Friday response rate, does not differ from the baseline (Fisher’s Exact Test, p > 0.05).
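The pairwise comparisons above rely on Fisher’s Exact Test applied to 2×2 contingency tables of Friday/Monday responses by condition. A self-contained sketch of the two-sided test follows; the counts in the example are hypothetical, since the per-cell counts are reported here only as percentages.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's Exact Test p-value for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def pmf(k):  # P(top-left cell == k) given fixed margins
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = pmf(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(pmf(k) for k in range(lo, hi + 1) if pmf(k) <= p_obs * (1 + 1e-9))

# Hypothetical counts: Friday vs. Monday responses in two perspective
# conditions (18/40 = 45% Friday vs. 31/40 = 78% Friday).
p = fisher_exact_two_sided(18, 22, 31, 9)
```

With these illustrative counts the difference in Friday rates is significant (p < 0.05), mirroring the left to right comparison reported above.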
This complex set of findings in the lateral conditions can be summarized as follows: participants do not respond differentially to lateral gestures in the third-person perspective, exhibiting a Friday preference irrespective of gesture direction. In the first person, the left to right gesture produces a preference for Monday over Friday, with a 45 % Friday rate marginally different from the baseline measure. The right to left gesture in the first person produces a preference for Friday, similar to baseline.
Experiment 2 sought to investigate the effects of perspective on interpretation of gestural motion in the context of temporally ambiguous language. It was predicted that motion towards the addressee would reliably prime use of the time-motion metaphor, whereas motion away from the addressee would reliably prime use of the ego-motion metaphor. Hence, in the face-to-face setting sagittal gestures should produce this effect, whereas in the third-person bystander setting, lateral gestures should produce this effect. Results both supported this overall hypothesis and revealed unexpected effects of lateral gestures, as summarized in Table 8.
| Condition | Prediction | Result |
|---|---|---|
| Sagittal away gesture | More Monday responses | Prediction supported; results congruent with gesture from addressee perspective |
| Sagittal toward gesture | More Friday responses | Prediction supported; results congruent with gesture from addressee perspective |
| 1st person lateral gestures | No effect | Results congruent with gesture from addressee perspective |
| 3rd person leftward gesture | More Friday responses | No effect |
| 3rd person rightward gesture | More Monday responses | No effect |
As expected, responses to sagittal gestures in the Experiment 2 first-person setting were the reverse of those in the Experiment 1 shared perspective condition, in that the away from speaker gesture elicited Monday responses and the toward speaker gesture elicited Friday responses. This indicates that addressees reliably maintain their own perspective rather than adapting to that of the speaker. When they are side by side with the speaker, both parties have the same perspective, and hence participant responses appear to be congruent with the perspective of the speaker. In face-to-face settings, addressees’ responses are not congruent with the perspective of the speaker; instead, they reflect the addressees’ own perspective. This suggests that in the side-by-side setting, the apparent speaker-oriented congruency is a side effect of the fact that speaker and addressee share a perspective, rather than an addressee bias toward the speaker. Consequently, participants performed similarly across perspective settings once gesture direction is defined relative to their own viewpoint: those who viewed a gesture moving toward them reliably responded to the test question with Monday, indicating a preference for the time-motion metaphor, and those who viewed a gesture moving away from them reliably responded with Friday, indicating a preference for the ego-motion metaphor. As such, interpretation of these metaphoric gestures is sensitive to perspective: a physical motion produced in a face-to-face orientation will be interpreted differently than the same motion produced in a side-by-side orientation. Overall, addressees tended to interpret these gestures from their own perspective; in face-to-face settings, this entails privileging their own perspective over that of the speaker.
It was originally predicted that lateral gestures in the first-person condition would have no effect on interpretation of the ambiguous test question. Since these gestures do not depict the forward/backward motion relevant to the “moved forward” test question, it was predicted that participants would not take the lateral gestures into account and would perform consistently with the baseline. Instead, results showed that participants viewing a left to right gesture (from the speaker’s perspective) produced more Monday responses, and those viewing a right to left gesture produced more Friday responses. Although these gestures do not have the sagittal motion predicted to be salient to the test question, they do have motion salient to spatiotemporal metaphor. One hypothesis, advanced by Casasanto and Jasmin (2012), is that lateral motion evokes the Time-RP metaphor: time is conceived of as a series of sequential locations, with no reference to the ego as a spatiotemporal landmark. Typically, this metaphor is linguistically realized in utterances such as Tuesday follows Monday. In gesture, it is commonly realized in English as a left-to-right lateral gesture, with earlier time points on the left and later ones on the right. However, Dancygier and Sweetser (2014) argue that this lateral gesture derives from a conceptualization of the experience of time as forward motion, where the ‘forward’ orientation is perpendicular to the speaker rather than defined from the speaker’s perspective. On this view, the lateral gesture still evokes an ‘ego motion’ spatiotemporal metaphor, as the ego moves forward along the lateral axis. Irrespective of the specific spatiotemporal metaphor at play, the result is that a person’s left is earlier and right is later in time.
However, a gesture moving left to right from the speaker’s perspective moves right to left from the addressee’s perspective, and vice versa. Maintaining their own perspective, first-person participants viewing such a gesture effectively see it moving earlier in time, which accords with the preference for Monday – an earlier date – in this condition. In contrast, those viewing the speaker’s right to left gesture effectively see the gesture moving later in time, which accords with the Friday preference in this condition. Thus, although these gestures are not on the sagittal axis, participants still take them into account and interpret them from their own perspective, consistent with the first-person sagittal results; they use the information in the gesture to disambiguate the ambiguous metaphor in the speech. Given that speakers reliably produce gestures encoding different metaphors than those used in their speech (Cienki and Müller 2008), it follows that addressees are able to take into account conceptualizations of time in different communication modalities and integrate them into one complete message. Prior work has shown that speakers produce these lateral metaphoric gestures to describe sequences of events even when their speech uses the sagittal Ego-RP metaphors, as in the ambiguous “moved forward” test question (Casasanto and Jasmin 2012). Here we have produced evidence that addressees perceive both streams of metaphoric communication and integrate them accordingly.
The results of the third-person sagittal gesture conditions provide further support for the finding that addressees maintain their own perspective. We originally predicted that the third-person sagittal gestures would not have an effect on participant response due to the orientation of the camera. In the third person, the bystander is perpendicular to the conversation between addressee and speaker, such that the speaker’s sagittal gestures are lateral movements from the perspective of the bystander. Given the prediction that only forward/backward movements should affect interpretation of the test question, it was predicted that in this condition addressees would not interpret the sagittal gestures as relevant.
However, we instead found that participants in the third-person sagittal conditions produced the same responses as those in the corresponding first-person sagittal conditions: the toward speaker gesture produced more Friday responses and the away from speaker gesture produced more Monday responses. Participants, as ‘bystanders’ to a conversation, have at least three potential perspectives: the addressee’s, the speaker’s, and their own. Originally we predicted that the participant ‘bystanders’ would effectively keep their own perspectives and hence not interpret the gestures in this condition as meaningful. However, these results suggest that they are taking the perspective of the addressee in the conversation, as similarly demonstrated in the first-person sagittal gesture conditions. These findings would further suggest that in general, bystanders are mentally ‘participating’ in the conversation as addressees, which stands to reason as, similar to the addressees, they are listening rather than speaking. This construal of the self as an addressee leads participants as bystanders to interpret the speaker’s gestures from addressee perspective.
In contrast to the third-person sagittal gesture conditions, participants in the third-person lateral gesture conditions did not perform in a manner consistent with addressee perspective. If they had, left-to-right gestures should have produced more Monday responses and right-to-left gestures more Friday responses, as discussed above for the first-person lateral gestures. Instead, participants in these conditions responded similarly to the baseline control group: both gesture directions produced an overall Friday preference comparable to the no gesture response rate. Further investigation is needed to clarify why this condition stands as an outlier relative to the rest of the experiment, which accords with an overall preference for addressee perspective. It is possible that in this condition, the position of the camera ‘bystander’ renders the lateral motion of the gesture less visually salient or perceptible, such that participants did not attend to it or did not perceive it fully. A sagittal gesture, viewed from a camera positioned at 90° to the conversational participants, traverses a sizable portion of the viewer’s visual field; a lateral gesture in the same configuration does not. Hence, this type of gestural motion may not have been taken as informative by participants due to decreased perceptual salience, and they defaulted to baseline responses in the absence of disambiguating information.
This study contributes to what is known about gesture as well as about spatiotemporal metaphor and abstract reasoning. Within the field of gesture, it is well established that representational gestures carry information complementary to spoken information (McNeill 1992). A metaphoric gesture visually depicts only the source domain of the metaphor (in this case, motion). As a result, the speech provides the target domain language (in this case, time) while the gesture provides the source domain; the integrated speech/gesture unit constitutes the complete metaphoric message (McNeill 1992; Cienki and Müller 2008). Gesture is an integral aspect of communication synchronized with language, such that addressees naturally perceive it as salient. Gesture is understood as an implicit part of the communicative discourse space, and the addressee therefore not only receives but interprets information encoded in the gesture in the context of the overall speech/gesture utterance. Accordingly, in the present study, participants relied on the motion parameter of the gesture to disambiguate the inherent ambiguity of the test question.
Furthermore, we have seen evidence that participants can incorporate gestural content into their understanding of the question even when the perspective of the gesture does not align with that of the metaphor. While the ambiguous test question evokes one of the two Ego-RP metaphor variants, the lateral gesture does not evoke the forward motion of the Ego-RP metaphor from the speaker’s perspective. Rather, depending on one’s analysis, it either evokes the Time-RP metaphor – in which the ego is not present – or an ego-motion metaphor in which the ego moves forward along the lateral axis (Dancygier and Sweetser 2014). Nonetheless, participants incorporated the lateral gesture content into their disambiguation of the question. As has been previously shown, speakers readily produce multiple metaphoric conceptualizations simultaneously in gesture and speech (e. g., Cienki and Müller 2008), and addressees reliably understand the source domain information of metaphoric gestures (Cienki 2005). Here we show that addressees not only have access to the source domain information of metaphoric gestures, but can also use it to resolve ambiguity in the accompanying speech.
This study also considers the role of perspective in gesture comprehension. Whereas prior work on the role of gesture in metaphoric reasoning controlled for perspective by restricting investigation to a single, shared perspective (Jamalian and Tversky 2012), here we consider the role of perspective in face-to-face communication. We find that irrespective of the speaker’s perspective, addressees tend to interpret metaphoric gestures from their own points of view. In other words, rather than mentally rotating their perspective to take that of the speaker in face-to-face settings, addressees interpret gestures as if they were produced from their own perspectives. This expands upon the findings of de la Fuente et al. (2015), who found similar results in a non-conversational setting with non-gestural manual movement. We also find evidence that third-person bystanders witnessing a conversation take the perspective of the addressee rather than the speaker. However, it is also known that over time, interlocutors can align their gestures to reflect a common perspective (Narayan 2012); this suggests that perspective-taking is not fixed but fluid, interacting with the interlocutors’ shared conceptualization of the speech content. In this study, the test question was presented without prior discourse context or any further exchange between the conversational participants. It may be that addressee perspective is a sort of ‘base setting’ that participants defaulted to in the absence of any pressure from the communicative context to take the speaker’s perspective. Future work should investigate the role of discourse setting in establishing and changing perspective-taking. For example, a longer conversational exchange, in which gestural information is reliably relayed from the speaker’s perspective, might induce addressees to interpret gesture from the speaker’s perspective instead of maintaining their own.
Lastly, this study serves as a crucial reminder that gesture must be attended to by researchers in spatiotemporal metaphor and other priming research. People will attend to gesture as a given aspect of communication even when the experiment does not intend gesture to be a factor in the study, provided that they perceive the gesture as relevant to the communicative package. This adds to a growing body of evidence that situation and context play an important role in metaphoric priming. For example, Duffy and Feist (2014) show that individual differences in personality as well as lifestyle – such as whether an individual is a university administrator or a student – affect responses to the same ambiguous test question. While it has long been known that pragmatic context influences discourse interpretation, it is becoming increasingly clear that gesture must be considered an integral aspect of the communicative context and must therefore be included in metaphor-related analyses, particularly with regard to issues of embodied cognition.
The authors would like to thank Eve Sweetser; the Berkeley Gesture and Multimodality Group; the audiences of ICLC 12 and ISGS 6; Gale Stam; Kashmiri Stec; and Oana David for initial comments, as well as Bodo Winter and an anonymous reviewer for their valuable feedback. Additional gratitude is expressed to Clara Cohen who aided in the statistical analysis.
Bavelas, Janet B. & Nicole Chovil. 2000. Visible acts of meaning: An integrated message model of language in face-to-face dialogue. Journal of Language and Social Psychology 19(2). 163–194. doi:10.1177/0261927X00019002001 Search in Google Scholar
Bavelas, Janet B. & Jennifer Gerwing. 2007. Conversational hand gestures and facial displays in face-to-face dialogue. In Klaus Fiedler (ed.), Social communication, 283–308. New York: Psychology Press. Search in Google Scholar
Cienki, Alan. 2005. Image schemas and gesture. In Beate Hampe (ed.), From perception to meaning: Image schemas in cognitive linguistics 421–442. Berlin & New York: Mouton de Gruyter. Search in Google Scholar
Cienki, Alan & Cornelia Müller. 2008. Metaphor, gesture, and thought. In Raymond W. Gibbs, Jr. (ed.), The Cambridge handbook of metaphor and thought, 483–501. Cambridge: Cambridge University Press. Search in Google Scholar
Clark, Herbert. H. 1996. Using language. Cambridge: Cambridge University Press. Search in Google Scholar
Dancygier, Barbara & Eve Sweetser. 2014. Figurative language. Cambridge: Cambridge University Press. Search in Google Scholar
de la Fuente, Juanma, Daniel Casasanto & Julio Santiago. 2015. Observed actions affect body-specific associations between space and valence. Acta Psychologica 156. 32–36. doi:10.1016/j.actpsy.2015.01.004 Search in Google Scholar
Duffy, Sarah E. & Michele I. Feist. 2014. Individual differences in the interpretation of ambiguous statements about time. Cognitive Linguistics 25(1). 29–54. doi:10.1515/cog–2013–0030 Search in Google Scholar
Duran, Nicholas D., Rick Dale & Roger J. Kreuz. 2011. Listeners invest in an assumed other’s perspective despite cognitive cost. Cognition 121(2). 22–40. doi:10.1016/j.cognition.2011.06.009 Search in Google Scholar
Emmorey, Karen, Barbara Tversky & Holly A. Taylor. 2001. Using space to describe space: Perspective in speech, sign, and gesture. Spatial Cognition and Computation 2(3). 1–24. doi:10.1023/A:1013118114571 Search in Google Scholar
Evans, Vyvyan. 2004. The structure of time: Language, meaning and temporal cognition. Amsterdam: John Benjamins. Search in Google Scholar
Fillmore, Charles. 1998. Pragmatics and the description of discourse. In Asa Kasher (ed.), Pragmatics: Communication, interaction, and discourse, 385–407. New York, NY: SUNY Press. Search in Google Scholar
Goldin-Meadow, Susan & Catherine Momeni Sandhofer. 1999. Gestures convey substantive information about a child’s thoughts to ordinary listeners. Developmental Science 2(1). 67–74. doi:10.1111/1467-7687.00056 Search in Google Scholar
Goodwin, Charles. 1981. Conversational organization: Interaction between speakers and hearers. New York, NY: Academic Press. Search in Google Scholar
Hauser, David J., Margaret S. Carter & Brian P. Meier. 2009. Mellow Monday and furious Friday: The approach-related link between anger and time representation. Cognition and Emotion 23(6). 1166–1180. doi:10.1080/02699930802358424 Search in Google Scholar
Jamalian, Azadeh & Barbara Tversky. 2012. Gestures alter thinking about time. In Naomi Miyake, David Peebles & Richard P. Cooper (eds.), Proceedings of the 34th annual conference of the cognitive science society, 503–508. Austin, TX: Cognitive Science Society. Search in Google Scholar
Kelly, Spencer D., Meghan Healey, Aslı Özyürek & Judith Holler. 2014. The processing of speech, gesture, and action during language comprehension. Psychonomic Bulletin & Review 22(2). 517–523. doi:10.3758/s13423-014-0681–7 Search in Google Scholar
Kelly, Spencer D., Aslı Özyürek & Eric Maris. 2010. Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension. Psychological Science 21(2). 260–267. doi:10.1177/0956797609357327
Kendon, Adam. 1986. Current issues in the study of gesture. In Jean-Luc Nespoulous, Paul Perron & André Roch Lecours (eds.), The biological foundations of gestures: Motor and semiotic aspects, 23–47. Hillsdale, NJ: Lawrence Erlbaum.
Kendon, Adam. 2004. Gesture: Visible action as utterance. Cambridge: Cambridge University Press.
Kövecses, Zoltán. 2002. Metaphor: A practical introduction. New York, NY: Oxford University Press.
Kranjec, Alexander. 2006. Extending spatial frames of reference to temporal concepts. In Ron Sun & Naomi Miyake (eds.), Proceedings of the 28th annual conference of the cognitive science society, 447–452. Mahwah, NJ: Lawrence Erlbaum.
Lakoff, George & Mark Johnson. 1980. Metaphors we live by. Chicago: University of Chicago Press.
Lakoff, George & Mark Johnson. 1999. Philosophy in the flesh: The embodied mind and its challenge to Western thought. New York, NY: Basic Books.
Matlock, Teenie, Kevin J. Holmes, Mahesh Srinivasan & Michael Ramscar. 2011. Even abstract motion influences the understanding of time. Metaphor and Symbol 26(4). 260–271. doi:10.1080/10926488.2011.609065
Matlock, Teenie, Michael Ramscar & Lera Boroditsky. 2005. On the experiential link between spatial and temporal language. Cognitive Science 29(4). 655–664. doi:10.1207/s15516709cog0000_17
McGlone, Matthew S. & Jennifer L. Harding. 1998. Back (or forward?) to the future: The role of perspective in temporal language comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition 24(5). 1211–1223. doi:10.1037/0278-7393.24.5.1211
McNeill, David. 1992. Hand and mind: What gestures reveal about thought. Chicago, IL: University of Chicago Press.
McNeill, David. 2005. Gesture and thought. Chicago, IL: University of Chicago Press.
McNeill, David & Susan D. Duncan. 2000. Growth points in thinking-for-speaking. In David McNeill (ed.), Language and gesture: Window into thought and action, 141–161. Cambridge: Cambridge University Press.
Narayan, Shweta. 2012. Maybe what he means is he actually got the spot: Perspective and cognitive viewpoint in a gesture study. In Barbara Dancygier & Eve Sweetser (eds.), Viewpoint in language: A multimodal perspective, 113–135. Cambridge: Cambridge University Press.
Núñez, Rafael E., Benjamin A. Motz & Ursina Teuscher. 2006. Time after time: The psychological reality of the ego- and time-reference-point distinction in metaphorical construals of time. Metaphor and Symbol 21(3). 133–146. doi:10.1207/s15327868ms2103_1
Núñez, Rafael & Eve Sweetser. 2006. With the future behind them: Convergent evidence from Aymara language and gesture in the crosslinguistic comparison of spatial construals of time. Cognitive Science 30(3). 401–450. doi:10.1207/s15516709cog0000_62
Özyürek, Aslı. 2000. The influence of addressee location on spatial language and representational gestures of direction. In David McNeill (ed.), Language and gesture: Window into thought and action, 64–83. Cambridge: Cambridge University Press.
R Core Team. 2015. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/
Ramscar, Michael, Teenie Matlock & Melody Dye. 2010. Running down the clock: The role of expectation in our understanding of time and motion. Language and Cognitive Processes 25(5). 589–615. doi:10.1080/01690960903381166
Schober, Michael F. 1995. Speakers, addressees, and frames of reference: Whose effort is minimized in conversations about locations? Discourse Processes 20(2). 219–247. doi:10.1080/01638539509544939
Sinha, Chris, Vera Da Silva Sinha, Jörg Zinken & Wany Sampaio. 2011. When time is not space: The social and linguistic construction of time intervals and temporal event relations in an Amazonian culture. Language and Cognition 3(1). 137–169. doi:10.1515/langcog.2011.006
Sprouse, Jon. 2011. A validation of Amazon Mechanical Turk for the collection of acceptability judgments in linguistic theory. Behavior Research Methods 43(2). 155–167. doi:10.3758/s13428-010-0039-7
Sullivan, Jessica L. & Hilary C. Barth. 2012. Active (not passive) spatial imagery primes temporal judgements. The Quarterly Journal of Experimental Psychology 65(6). 1101–1109. doi:10.1080/17470218.2011.641025
Sweetser, Eve. 2012. Introduction: Viewpoint and perspective in language and gesture, from the ground up. In Barbara Dancygier & Eve Sweetser (eds.), Viewpoint in language: A multimodal perspective, 1–24. Cambridge: Cambridge University Press.
Winter, Bodo, Teenie Matlock, Samuel Shaki & Martin H. Fischer. 2015. Mental number space in three dimensions. Neuroscience and Biobehavioral Reviews 57. 209–219. doi:10.1016/j.neubiorev.2015.09.005