Open Access (CC BY 4.0). Published by De Gruyter Mouton, March 3, 2022

LOOKing for multi-word expressions in American Sign Language

Lynn Hou
From the journal Cognitive Linguistics

Abstract

Usage-based linguistics postulates that multi-word expressions constitute a substantial part of language structure and use, and are formed through repeated chunking and stored as exemplar wholes. They are also re-used to produce new sequences by means of schematization. While there is extensive research on multi-word expressions in many spoken languages, little is known about the status of multi-word expressions in the mainstream U.S. variety of American Sign Language (ASL). This paper investigates recurring multi-word expressions, or sequences of multiple signs, that involve a high-frequency sign of visual perception glossed as look and the family of ‘look’ signs. The look sign exhibits two broad functions: look/‘vision’ references literal or metaphorical vision and look/‘reaction’ signals a person’s reaction to a visual stimulus. Data analysis reveals that there are recurring sequences in distinct syntactic environments associated with the two functions of look, suggesting that look is in the process of grammaticalization from a verb of visual perception to a stance verb. The sequences demonstrate the emergence of linguistic structure from repeated use through the domain-general cognitive process of chunking in ASL.

1 Introduction

Multi-word expressions form a central part of language. They come in all shapes and sizes, varying in complexity and specificity. A multi-word expression is a unit that is longer than one word and conveys meaning that may not be predictable from its individual words (Arnon and Snider 2010; Barlow and Kemmer 1994; Biber 2009; Bybee 2006, 2010; Bybee and Torres Cacoullos 2009; Ellis 2002; Erman and Warren 2000; Goldberg 2006; Haiman 1985; Hopper 1987; Sinclair 1991; Thompson and Mulac 1991; Wray 2002).[1] A few documented examples from English and Spanish include all hell broke loose, going to/gonna, what’s X doing Y, quedarse sorprendido ‘to become surprised’, and ponerse nervioso ‘to become nervous.’ From a usage-based linguistics perspective, multi-word expressions are integral to language acquisition and processing. They are formed through chunking: when two or more words co-occur repeatedly over time, through repetition and reuse with some variation, the multi-word expression is committed to memory (Bybee 2010: Chapter 3). The words develop a bond and create new symbolic associations, leading to entrenchment. These associations are strengthened with each subsequent use, so that the words can be processed as units (Langacker 2008). Repeated production can lead to articulatory reduction and fusion of the words, as well as greater fluency of the sequence, rendering it easier and faster to access and produce, a process known as automatization.

Chunking enables language users to create “prefabs” or conventionalized multi-word expressions and store them as exemplars (Bybee 2010; Bybee and Eddington 2006; Bybee and Torres Cacoullos 2009). Prefabs are prepackaged units that have been entrenched in memory and are ‘recycled’, in the words of Dąbrowska (2014), across repeated usage events, sometimes leading to new constructions (Bybee 2010; Bybee and Torres Cacoullos 2009; Erman and Warren 2000). The chunking of words appeals to many language researchers because it sheds light on the cognitive representation of language – how users store, access, and retrieve chunks as part of everyday language use, and how this process relates to frequency, entrenchment, and automatization (Divjak 2019: Chapter 5). Moreover, higher-frequency expressions in particular syntactic contexts contribute to the grammaticalization of lexical items. Grammaticalization can be observed in the reduced analyzability of internal structure and in shifts of semantic-pragmatic meaning by which an item becomes increasingly abstract and subjective (Bybee 2003, 2010; Traugott 1995, 2003; Bybee and Torres Cacoullos 2009).

Multi-word expressions have been well investigated in spoken languages such as English and Spanish, yet they are severely understudied in signed languages by comparison (Hou and Morford 2020; Lepic 2019; Wilkinson 2016; Wilkinson et al. in press). This study fills this gap by investigating the sequences that contain a high-frequency American Sign Language (ASL) sign, glossed as look. In many ASL dictionaries, look (alternatively glossed as look-at) is listed as a one-handed form with a V-handshape in which the extended index and middle fingers point outward from the signer and move in any direction in the space ahead of the signer’s body, as shown in Figure 1.[2] This form is traditionally viewed as an unmodified form produced without context. Some researchers would call this a ‘citation form’ or a lexeme that serves as an organizing unit for all morphophonological variants of look (Fenlon et al. 2015).

Figure 1: look, ‘to look at’, from ASL Signbank (2021).

Now consider the two instances of look, boldfaced in Figure 2, accompanied by the English glosses of the ASL signs and an English translation.[3], [4] The images are extracted from a vlog (video blog) posted to the public group ASL That! on Facebook. The first instance exhibits path movement that targets a spatial location on the left side of the signer’s body that is associated with the ‘video’ under discussion.[5] The signer produces a visible mouthing of the English word ‘look’ co-occurring with look and subsequently fingerspells v-i-d-e-o. The sequence pro.2 look v-i-d-e-o gives a straightforward reading of the video as the object of visual perception from the perspective of a second-person agent. The subsequent sign, decide.for.yourself, conveys the signer’s urging the viewers to interpret the meaning of the video once they watch it.

Figure 2: “You have a look at the video, decide for yourself; it’s baffling, (let’s) discuss what’s going on.”
Source: ASL THAT! (2017). Signing Naturally Numbers 6-9 Double tap? Timestamp: 00:01:10
Images extracted from: https://www.facebook.com/groups/ASLTHAT/permalink/2024014911163339/.

The second instance of look, on the other hand, exhibits reduced path movement that is not targeted at any discourse-meaningful location in space; it points away from the signer’s body, mirroring their own outward gaze. Note that the second instance neither contains an explicit agent nor points in the same direction as the first instance in the previous utterance. There is also no visible English mouthing co-occurring with look. Rather, the signer’s face shows furrowed brows, squinted eyes, and pressed lips. This constellation of facial expressions gives the reading of a puzzled reaction to a previously identified visual stimulus. The look instance is followed by mind.puzzled; combined with the facial expressions, the whole construction functions as a unit and can be interpreted to mean “it’s baffling”. The unit signals the use of look as a pivot to the signer’s reaction to the video; the reaction is an attitudinal stance, providing a window into the mind of the signer.

These preliminary observations form the basis of the main arguments of the study presented here: (1) the look sign is grammaticalizing from a verb of visual perception to a stance verb, and these changes can be observed in the context of multi-word expressions (as well as in the form-meaning mappings); (2) the higher-frequency expressions are highly conventionalized units, likely prefabs; and (3) the schematization of multi-word expressions allows for the productivity of new constructions. ASL multi-word expressions offer evidence of frequency effects in the grammaticalization of look. The frequency effects substantiate chunking, entrenchment, and automatization as domain-general cognitive processing mechanisms that are not specific to spoken languages but also occur in signed languages (Lepic 2016, 2019; Wilkinson 2016; Wilkinson et al. in press).

This paper is organized as follows. Section 2 reviews the background on multi-word expressions in signed languages and the theoretical approaches for analyzing them. Section 3 discusses the data used for the present study and the corpus-like approach for analyzing them. The same section also presents the results, segueing to Section 4 for a qualitative analysis of the multi-word expressions and the theoretical implications of chunking in the formation of these sequences and of the grammaticalization and schematization of look. The paper concludes with a discussion of multi-word expressions in signed languages.

2 Background on multi-word expressions in signed languages

Broadly, a multi-word expression in a signed language is defined as a sequence of identifiable signs functioning as a larger unit that may be conventionalized in meaning and form (Hou and Morford 2020; Lepic 2019; Wilkinson 2016; Wilkinson et al. in press).[6] Currently, the development of signed language corpora lags far behind that of corpora for widely spoken languages such as English and Spanish. There is no publicly available, machine-readable corpus for ASL yet (Lepic 2019; Morford and MacFarlane 2003; Occhino et al. 2021; Wilkinson 2016).[7] Even when such corpora become a reality, there is a very long way to go before one can search for the frequency of individual signs and the frequency of co-occurrence of strings of multiple signs as easily as one can for English (Börstell 2022). Several existing signed language corpora are publicly available online, such as those for Australian Sign Language (Auslan), Sign Language of the Netherlands (NGT), and Swedish Sign Language (STS) (Börstell 2022). Although these corpora are at a stage where one could start searching for n-grams, or sequential strings of words of any length, their small size would not yield results that generalize far beyond the dataset.

While ASL has a few documented idioms, including the classic train go sorry (one English equivalent is ‘missing the boat’), the use and frequency of such idioms are unknown (Wilkinson et al. in press). Apart from the lack of large-scale corpus data for signed languages, there is also the methodological question of how to identify multi-word expressions. Signed language corpora may differ in how they treat the more entrenched sequences, especially if those exhibit phonetic reduction and/or fusion (Börstell et al. 2016). In their study of two-sign compounds in the STS corpus, Börstell et al. (2016) investigated the distribution and duration of compounds that were tagged as either reduced or non-reduced. They found that the reduced compounds exhibited significantly shorter duration than non-reduced ones. Moreover, the reduced compounds had more recurring sign types, whereas the non-reduced compounds had a larger inventory of sign types but fewer tokens per type and more hapaxes.

Finally, structuralist and generative-formal approaches have prevailed in sign language linguistics since the advent of the discipline, and usage-based approaches have only been applied in earnest for the past decade (Janzen 2018; Lepic 2019; Lepic and Occhino 2018; Wilcox 2014; Wilcox and Occhino 2016; Wilkinson et al. in press). Most scholars have focused on describing how individual signs are made of discrete building blocks, positing derivational rules for forming grammatical sentences out of these blocks. The underlying implication is that users access individual signs and parse them according to the rules. Scholars have also focused on the ‘simultaneity’ of linguistic structures, such as the use of space for reference tracking through agreeing/indicating verbs (Lillo-Martin and Meier 2011; Schembri et al. 2018) or the simultaneous use of the hands and parts of the body for conveying different kinds of information at the same time (Napoli and Sutton-Spence 2010; Vermeerbergen et al. 2007). Yet scholars have not paid the same amount of attention to the sequential process by which signs chunk together and form larger conventionalized units akin to word chunks in spoken languages.

There is some compelling evidence that chunking is not limited to auditory processing. Wilkinson (2016) conducted a comprehensive study of the collocational frequency of not constructions in ASL. She identified three high-frequency collocations: not have.to, why not, and not understand. Figures 3 and 4 exhibit non-reduced and reduced collocations of not have.to, respectively. Figure 3 shows how not have.to is easily analyzed as a sequence of two distinct signs. Figure 4 shows how chunking leads to the fusion of the signs into a unit, evidenced by the co-occurrence of the reduced path movement and the extension of the thumb and the bent index finger. The two forms differ in the analyzability of their internal structure, which also shapes meaning. Wilkinson proposes that the non-reduced form of not have.to gives a literal reading of obligatoriness, whereas the reduced form denotes a more bleached meaning of obligatoriness.

Figure 3: A non-reduced collocation of not have.to.

Figure 4: A reduced collocation of not have.to.

Wilkinson proposed that the frequency effects in why not and not understand can be observed in the semantic bleaching of meaning, the pragmatic strengthening of subjectivity, and the speaker’s involvement in the discourse. The non-reduced form of why not gives the literal reading of cognitive reasoning about ‘why’, whereas the reduced form conveys the meaning of suggestion. For the third collocation, the non-reduced form of not understand gives the literal meaning of the cognitive inability to process information. The reduced form marks indifference to a given topic in the discourse, foregrounding the user’s subjectivity in a way that extends beyond the cognitive inability to understand. Wilkinson argued that the analysis of not collocations shows that ASL is sensitive to frequency effects of chunking just as spoken languages are, exhibiting loss of analyzability of internal structure and semantic bleaching, and attesting to the automatization and fluency of processing that come from repetition.

ASL has other multi-word expressions that vary along the continuum of analyzability of internal structure. Lepic (2019) identified two ASL verb-argument constructions, interpreter bring.in and take.to hospital. These constructions are not entirely fixed, in the sense that the ordering of the signs can be rearranged without altering the basic meaning, although one order is more common than the other. Lepic suggested that they constitute conventionalized multi-word expressions that exhibit more clearly analyzable structure than the not collocations (Wilkinson et al. in press). The preliminary observations of the verb-argument constructions and the not collocations suggest that multi-word expressions emerge from chunking and that higher-frequency ones can lead to changes in internal structure and even semantic-pragmatic shift, forming what Wilkinson calls “schematic, fused constituent structures.”

My observations of ASL data such as Figure 2 led me to hypothesize that look is an excellent candidate for investigating multi-word expressions. The first argument is that look has been identified as a high-frequency sign in ASL, reported to occur 6.3 times per 1,000 signs in a small-scale lexical frequency study of 4,111 signs (Morford and MacFarlane 2003). The high lexical frequency of this sign does not appear to be specific to ASL. In British Sign Language (BSL), one sign glossed as look ranked 15th and another sign glossed as look2 ranked 56th among the top 100 most frequent signs in conversational data (Fenlon et al. 2014). These same two signs were determined to be the second and third most frequent verbs in a dataset of 1,612 verbs (Fenlon et al. 2018). In Auslan, which is related to BSL, one sign glossed as look was the fifth most frequent type in a lexical frequency study of 63,436 tokens (Johnston 2012). In Swedish Sign Language (STS), one sign glossed as look.at ranked 39th out of 300 sign types (Börstell et al. 2016). These lexical frequency studies did not investigate the frequency effects of look in the context of n-grams. However, Wilkinson’s (2016) study of not collocations in ASL and Börstell et al.’s (2016) study of the frequency and duration of signs in STS suggest that multi-word expressions in signed languages are sensitive to frequency effects, as multi-word expressions in spoken languages are. A high-frequency sign like look could exhibit frequency effects that may include loss of internal structure and semantic bleaching with a shift to subjectivity. Moreover, in the absence of an ASL corpus, it is somewhat easier to investigate the frequency effects of a high-frequency sign like look than those of a low-frequency sign, although the sampling can limit the potential for statistical testing.

The second argument is that sensory perception is a rich domain of inquiry for language change. Cross-linguistic studies have shown how sensory perception verbs extend from the physical domain to more metaphorical and abstract domains of experience such as visual and auditory cognition and communication (Evans and Wilkins 2000; Majid et al. 2018; San Roque et al. 2018; Sweetser 1990; Traugott and Dasher 2005; Viberg 1983). These studies demonstrated how the meaning of such verbs shifts from activity to experience; very likely, this change arose from repeated use in particular syntactic environments. Moreover, the extension of these verbs led to the development of pragmatic discourse markers in spontaneous conversation, such as the English look forms (Brinton 2001; Romaine and Lange 1991), and of evidential markers such as see (Kendrick 2019) and its Romance language equivalents (Fagard 2010; Waltereit 2006). The third argument is that vision dominates the metaphorical extensions of sensory perception cross-linguistically (Majid et al. 2018; Sweetser 1990; Winter et al. 2018). Signed languages are no exception to this tendency. Many sighted deaf signers rely on visual information in the world; they talk about themselves in relation to the world through what they can access the most. Their lived experiences tend to be grounded in visual orientation, naturally shaping their language structure and use. Thus, it is plausible that verbs of visual perception in ASL, including look, could be used for stance-marking to convey one’s experiences and understanding of the world.

2.1 Background on look-at

The term ‘American Sign Language’ refers to a constellation of language varieties used by deaf and hard-of-hearing people in the United States and anglophone Canada, as well as other parts of the world; in the contemporary period, it generally refers to the standard variety in use at Gallaudet University (Hill 2015).[8], [9] The language has the basic word order of SVO in transitive clauses, as the example in Figure 2 illustrates, and SV in intransitive clauses (Fischer 1975; Liddell 1980). Pronominal and nominal arguments can be omitted and understood implicitly once they have been established in discourse (Wulf et al. 2002). In some instances the word order is more flexible, as with topic-comment structure (Janzen 1999). Some verbs, as observed in many different signed languages, mark their core arguments through spatial modification of the verb forms (Fenlon et al. 2018; Hou and Meier 2018; Mathur and Rathmann 2012; Meir 1998; Padden 1988). Such verbs have been traditionally analyzed as directional verbs, agreeing verbs, or indicating verbs; the terminological choice depends on the researcher’s theoretical position. These verbs, including look, generally denote transfer with two animate arguments. But look can be an exception to the generalization, since the visual stimulus of the verb does not have to be animate, as evidenced by Figure 2.

The grammatical category of look is generally defined as an agreeing/indicating verb. This leads to the grouping of different morphophonological variants under one “lexeme” for analysis, as demonstrated in Figures 5 and 6 (Fenlon et al. 2015). These variants differ in the direction of the path movement, i.e., where the verb points, or in the number of hands involved, but they do not fundamentally change in meaning, since they all pertain to the general activity of looking at a visual stimulus. Figure 5 shows one variant of look that points at the signer as the referent, giving the interpretation of ‘look at me’, whereas Figure 6 shows another variant of look that means two referents are looking at each other. A similar approach has been taken for BSL. One BSL sign glossed as look2 bears a strong resemblance to the ASL look; it is listed as a one-handed sign in the BSL SignBank (Fenlon et al. 2015).[10] Signers can produce this form as a two-handed sign in a way that conveys the meaning of either two people looking at something or two people looking at each other. Fenlon et al. (2015) do not treat such variants as separate lexemes but rather as different variants of one lexeme.

Figure 5: ‘Look at me’.
Source: Street Leverage. (2012). Trudy Suggs: Deaf Disempowerment and Today’s Interpreter Timestamp 00:02:38
Image extracted from: https://youtu.be/pDSNKRaOmo8?t=158.

Figure 6: ‘Look at each other’.
Source: ASLized! (2017). ASL in Academic Settings: Language Features Timestamp: 00:23:49
Image extracted from: https://youtu.be/VX18-4m-EN0?t=1429.

For the other ASL signs presented in Figures 7–10, researchers may treat them as separate lexemes for lexicography purposes. These signs pertain to different types of looking activities as well as metaphorical and subjective dimensions of vision. The change in meaning corresponds to change in form. The formational changes appear to involve a combination of changes in direction, manner, and orientation of path movement, selection and representation of facial expressions, the number of hands, and the configuration of the hands (Frishberg and Gough 2000; Klima and Bellugi 1979; Naughton 2001). Figure 7 means ‘to observe’ or ‘to examine’ and is a prototypically two-handed sign, although it can be produced with one hand. The hands are symmetrical: both have the same V-handshape and move alternately in a circular manner. Figure 8 means ‘view’/‘perspective’ or ‘to look at something.’ The sign is a non-symmetrical two-handed sign, in which one hand moves and the other hand does not. The active hand has the V-handshape pointing towards the other hand, a stationary 1-handshape. Figure 9 means ‘to read.’ The sign is also a non-symmetrical two-handed sign. The active hand has the V-handshape and moves downward over a stationary B-handshape. These signs are a few of the many signs with other extended meanings such as ‘look forward to’, ‘reminisce’, ‘admire’, ‘look down on someone’, and ‘look someone up and down’ (Naughton 2001). Figure 10 shows a one-handed sign that functions as an imperative for directing one’s attention to a stimulus. This sign co-occurs with a visible mouth configuration: rounded lips with a protruding tongue that exhibits trilled flapping (Liddell 2003: 131–132). The tongue movement resembles the movement of the lateral approximant [l] with flapping action, while the lips resemble the back rounded vowel [ʊ]. This sign points at an intended referent and may exhibit reduced path movement, co-occurring with heightened affective facial expressions. Signers can ‘hold’ this sign with the mouthing for as long as necessary for dramatic effect and can use the mouthing covertly, without the accompanying manual sign, to direct someone’s attention to the stimulus.

Figure 7: ‘To observe’ or ‘to examine’.
Source: The Daily Moth. (2017). The Daily Moth 2-9-17 Timestamp: 00:13:45
Image extracted from: https://youtu.be/84IoWf70j38?t=825.

Figure 8: ‘View/perspective’ or ‘look at something in particular’.
Source: Frye, Callie. (2020). DCARA March 20, 2019 Timestamp: 00:01:50
Image extracted from: https://youtu.be/kVlVQaTr7Mk?t=110.

Figure 9: ‘To read’.
Source: ASLized! (2013). Deaf Schools (with audio and captions) Timestamp: 00:01:33
Image extracted from: https://youtu.be/mkwYHheJQVw?t=93.

Figure 10: ‘Look!’ as an affective imperative.
Source: Deafies in Drag. (2020). Look! Timestamp: 00:00:45
Image extracted from: https://youtu.be/4zwGKXlkmqg?t=44.

All the above signs differ more from one another in meaning and form than the variants of the lexeme look do. What they share is the V-handshape and the meaning of concrete or abstract visual perception. They form a network of associations on the basis of morphological and semantic relations. In this network, look can be conceptualized as a prototypical, central member that extends its meaning to the other signs (Naughton 2001).

There are other ASL signs relating to visual perception. Some are more distinct in form from the family of ‘look’ signs and thus may be considered separate lexemes. One sign, conventionally glossed as see in Figure 11, bears a similar meaning to look. Both signs share the V-handshape but differ in palm orientation, the facing of the fingertips, and location. They differ in the meaning of visual perception with respect to agency. Naughton (2001) states that the difference lies in the event type representing sensory perception: an activity constitutes a process controlled by a human agent, whereas an experience is a process that happens to an agent who cannot control it (cf. Viberg 1983). look is an activity verb; see is an experience verb. Naughton also states that both verbs can extend to different subjective meanings. see has epistemic functions of anticipation, possibility, and doubt, and can function as an evidential imperative. A few signs from the family of ‘look’ signs mark the signer’s evaluation of a visual stimulus – among the signs cited as examples are look.up.and.down, look.down.on, and view, but not look.[11]

Figure 11: see, ‘to see’, from ASL Signbank (2021).

2.2 Subjective uses of look

A few other scholars have made preliminary observations about the subjective meaning of look, in addition to referent marking and the metaphorical extensions of other signs from the family of ‘look’ signs. In an elicitation study of psych verb constructions in ASL, Winston (2013) suggested that look (glossed here as look-at in the example below) is a potential “light verb” that follows a psych verb as a main verb. The outcome is the production of a caused psych verb event. In (1), the signer says that when the children looked at the clown, they laughed loudly in amusement, rendering look more of an experience than an activity. look is directed towards the object, as indicated by the subscripts, and co-occurs with affective non-manual markers, which spread to the adjacent main verb belly.laugh.at.

(1)

clown_b children_a a_look-at_b ‘belly-laugh-at++’

“As for the clown, the children looked at him and enjoyed.” (Winston 2013: 34)

Winston proposes the following template for look-at when it functions as a potential light verb:

object_b subject_a [a_look-at_b main-verb_b]_b

In the template, the object is fronted, the subscripts index verb agreement, and the brackets represent the scope of the affective non-manual markers as well as a user’s expressive language, thoughts, and/or actions. In the case of (1), the signer enacts the action of the children laughing. This enactment has been referred to as role shift, depiction, or constructed action (the most common term), though not quite interchangeably, for different signed languages, and is marked by a perceptible shift in the signer’s body through a change of non-manual markers and sometimes an explicit pronoun or noun for introducing the character being enacted (Cormier et al. 2015; Hodge and Cormier 2019; Lillo-Martin 2012; Quer 2016). This enactment also encompasses a variety of quotative and non-quotative constructions with the common denominator of representing the character. While there is no unique marker or group of markers that signals constructed action, look seems to be a common trigger for signaling it in ASL and even in other signed languages (Engberg-Pedersen 1993; Healy 2015; Liddell 2003; Meier 1990; Naughton 2001).[12], [13]

In a study of affective constructions in ASL, Healy (2015) found that look occurs commonly in these types of constructions and analyzed it as a discourse marker that anticipates an experiencer’s reaction to a visual stimulus. The stimulus can be either a concrete or a non-concrete entity. Signers may point to a meaningful spatial location associated with the entity or to an arbitrary spatial location that is not associated with any entity, as exemplified by the second construction of look in Figure 2. In some instances, the signer can experience the stimulus through other senses, such as touch, by extension of look (glossed as look-at in the example below), as in (2):

(2)
last-night pro1 work typing-on-computer feel sudden-vibration look-at what’s-up
“Last night I was working at my desk and felt a sudden vibration. I wondered what caused it.” (Healy 2015: 150)

Healy also observed that the verb does not exhibit path movement and, moreover, that the verb and the signer’s eye gaze are not always aligned with one another. The verb may point in one direction while the eye gaze is directed in another. The reduced path movement and the misalignment of the manual and non-manual properties underscore the experiencer’s cognitive attending to the stimulus in question while looking at it.

The observations of Winston (2013) and Healy (2015) generally corroborate my observations of look as a stance verb. Yet there are a few fundamental differences that must be highlighted. First, I do not readily accept the analysis of look as a light verb, because the data presented in this paper show that look does not always have an adjacent main verb with which it forms a complex predicate.[14] Rather, I analyze it as a stance verb that is in the process of grammaticalizing from a verb of visual perception. Second, my analysis builds on more naturalistic data sources from the internet, whereas the earlier analyses are largely based on elicited data. This has potential implications for analysis from a usage-based perspective, as more usage data may yield a wider range of functions of the constructions in which look occurs.

3 Data sources

For the present study, the data consist of 65 videos and vlogs of ASL by 38 distinct deaf signers.[15] The data total 8 h and 21 min and consist of three major genres: news, monologue, and conversation. The appendix lists the major details of the data and the video sources. Since there is no publicly accessible, machine-readable ASL corpus yet, the Internet serves as an opportunistic corpus in which researchers can forage for data in ASL and other signed languages (Hou et al. 2020, 2022). The drawback is that identifying, collecting, and transcribing data is an immensely labor-intensive and time-consuming process. There is no standardized notation system equivalent to the International Phonetic Alphabet for representing signed languages, and thus one cannot search for data directly. One must manually search for signed language videos on the Internet and annotate them with English glosses, which are generally arbitrary, as researchers have different research goals and approaches to interpreting signs. There is likewise no standard for representing morphosyntactic analysis in signed languages like the Leipzig Glossing Rules for spoken languages. Yet this drawback is offset by the availability of multiple videos and vlogs on the Internet. These materials have been voluntarily created and produced by deaf signers on video platforms such as YouTube and social media such as Facebook. Such data can be more representative of different types of ASL usage from a larger and more diverse pool of deaf signers, and, relevant to the present study, these data are likely to include usage that is rich with signer subjectivity. This offers researchers the opportunity to add internet data to their existing corpus data and/or their collections of data elicited from idealized deaf native signers.

3.1 Methodology

The methodology of the study is modeled after Wilkinson’s (2016) study of not collocations in ASL, with some modifications. As previously mentioned, look has two broad functions with some potentially distinct formational properties. The ‘vision’ function seems to be associated with prototypically one- and two-handed forms that exhibit clear path movement and less affective facial expressions and that can co-occur with visible mouthing of English words. The ‘reaction’ function seems to be associated with prototypically one-handed forms that may exhibit reduced path movement and more affective facial expressions, and with constructions that convey what the signer says, thinks, and/or feels. The ‘vision’ and ‘reaction’ functions are based not exclusively on these formational properties, which are not analyzed here, but also on the analysis of the phrasal context in which look occurs. Some tokens are ambiguous in the sense that a look token simultaneously exhibits, or overlaps with, both the vision and reaction functions. In some instances, the function is unclear.

The first step was to identify all forms belonging to the family of ‘look’ signs. Many forms are represented by other English glosses for approximate meaning. There are two rationales for considering the family of ‘look’ signs instead of just the look sign like the one in Figure 1. First, preliminary observations indicated that the reaction function did not always occur with variants of the look lexeme but potentially with a few other signs of the family. Second, looking at the whole family can better capture emergent patterns of multi-word expressions across various ‘look’ signs, regardless of how lexemes may be categorized. Some forms of the look lexeme share the core meaning of looking at a stimulus but differ in the direction of path movement, e.g., one form means ‘to look to the right’ and another form means ‘to look at me’. Such forms would generally be lumped together, since they are considered variants of the same lexeme due to their semantic association. However, I treated a form as distinct when I observed recurring patterns, such as the four tokens of the sequence look.at.me pro.1 ‘to look at me’ and two tokens of happen look.at.me pro.1 ‘happen to look at me’. These patterns justified splitting look.at.me from most look forms and considering that the associations of related forms of a lexeme “are gradient and depend upon the degree of semantic and phonological similarity and the token frequency of the specific items” (Bybee and Torres Cacoullos 2009: 188). Other forms such as ‘to read’ or ‘to observe’ are separated from look on the basis of the combination of meaning and formational properties.

Next, the signs preceding and following the look forms were coded. The scope of the string of signs for identifying sequences was not limited to bigrams, i.e., only one sign immediately preceding or following the target sign, but included trigrams and quadgrams. This leeway allowed for identifying and grouping sequences and for examining the types of syntactic environments in which the sequences appeared. Up to five preceding and five following signs of the target sign were coded, unless the boundary of the utterance was marked by the signer clasping their hands or putting their hands down. This scope allowed for analyzing the function of look beyond its formational properties and the immediately adjacent signs, yielding a more in-depth understanding of the functions of look at the utterance and clausal levels and of the syntactic environments in which higher-frequency sequences emerged. Next, the recurring sequences were identified. A sequence was considered recurring if it met the frequency threshold of two occurrences, i.e., it occurred at least twice in the dataset. Once all the sequences were identified for each look token, they were scanned and flagged for all recurring sequences.
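To make the coding procedure concrete, the following minimal sketch shows how recurring n-grams around a target sign could be tallied. It assumes utterances have already been segmented at the hand-clasp/hands-down boundaries and transcribed as lists of glosses; the recurring_ngrams helper and the sample glosses are hypothetical illustrations, not the annotation tooling actually used in this study.

```python
from collections import Counter

def recurring_ngrams(utterances, target="LOOK", n=2, threshold=2):
    """Tally n-grams containing the target sign, keeping only those
    that meet the frequency threshold (here, at least two tokens)."""
    counts = Counter()
    for signs in utterances:  # one list of glosses per utterance
        for i in range(len(signs) - n + 1):
            gram = tuple(signs[i:i + n])
            if target in gram:
                counts[gram] += 1
    return {gram: c for gram, c in counts.items() if c >= threshold}

# Hypothetical glossed utterances:
data = [
    ["PRO.1", "LOOK", "OIC"],
    ["PRO.1", "LOOK", "PRO.1"],
    ["PRO.3", "LOOK", "OIC"],
]
print(recurring_ngrams(data, n=2))
# {('PRO.1', 'LOOK'): 2, ('LOOK', 'OIC'): 2}
```

The same counting logic extends to trigrams and quadgrams by raising n, mirroring the bigram-to-quadgram scope described above.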

The see forms and their strings of adjacent signs were then coded. Some words occur more frequently than others, and there are differing views on whether recurring words co-occur more often than chance alone would predict (Gries 2012) or co-occur because users select them, perceiving them to have a sequential relation on the basis of meaning (Bybee 2010). The entire dataset used in this study has not been annotated, which limits the statistical testing options, such as comparing the frequency of individual signs with the frequency of co-occurrence of sequences of multiple signs, or measuring the strength of association between signs and constructions in a collostructional analysis (Stefanowitsch and Gries 2003). The coding of see and its strings of signs allowed me to determine whether there were sufficient data to compare shared signs among potentially recurring, overlapping sequences of look/‘reaction’, look/‘vision’, and see by means of chi-square tests.

3.2 Results

The data yielded a total of 706 tokens and 36 types from the family of ‘look’ signs (the ‘see’ signs are discussed in §3.3). The functions of look are distinguished by the labels, look/‘reaction’ and look/‘vision’. Table 1 summarizes the tokens and types for the functions, and the tokens categorized as ambiguous. Table 2 summarizes the recurring n-grams (n ≥ 2). The results of the quadgrams are not reported here, as there were no more than two or three tokens for each frequent quadgram.

Table 1:

Summary of look tokens and types by function.

Function Token count Type count
Reaction 174 1
Vision 369 18
Ambiguous 163 17
Total 706 36
Table 2:

Summary of frequent (n ≥ 2) look n-grams.

Function Bigrams Trigrams Quadgrams
Reaction 38 24 4
Vision 77 13 2
Ambiguous 28 5 0
Total 143 42 6

3.2.1 Reaction

There are multiple recurring sequences observed in the 174 tokens of look/‘reaction’. Table 3 presents 24 recurring trigrams, and Table 4 presents 38 recurring bigrams. The “s” is short for sign, representing look; “s−1” represents the sign preceding look, while “s+1” and “s+2” represent the first and second signs following look, respectively. These tables show that pro.1 recurs in the majority of the quadgrams, trigrams, and bigrams. Table 4 shows that 55% of the bigrams have pro.1 in the s−1 slot and 21% have oic (short for oh.i.see) in the s+1 slot. 10% of the trigrams have the sequence pro.1 look oic. Other recurring sequences with a frequency higher than 5% are pro.1 look pro.1, look pro.1, look oic, pro.3 look, and look palm.up. For the bigrams, there are only 38 hapaxes preceding look (22%) and 47 hapaxes following look (27%).

Table 3:

Frequent (n ≥ 2) trigrams with look/‘reaction’ (n = 174).

Rank s−1 s+1 s+2 Count
1 pro.1 oic 17 10%
2 pro.1 pro.1 12 7%
3 pro.3 oic 7 4%
4 pro.1 palm.up, pro.3, yes, wow 3 2%
5 get.inspired palm.up 3 2%
6 pro.1 can’t, feel, get.inspired, gut.instinct, hold.on, mind.puzzled, none, really, that, these.two, wave.no, wonder 2 1%
7 people oic, question 2 1%
8 palm.up palm.up 2 1%
9 oic realize 2 1%
Table 4:

Frequent (n ≥ 2) bigrams with look/‘reaction’ (n = 174).

Rank s−1 s+1 Count
1 pro.1 95 55%
2 oic 36 21%
3 pro.1 13 7%
4 pro.3 11 6%
5 palm.up 10 6%
6 people 8 5%
7 palm.up wow, yes 5 3%
8 fine, get.inspired, hold.on, mind.puzzled 4 2%
9 deaf, pro.2, sign.fluently pro.3, question, really, think, wave.no, wonder 3 2%
10 maybe, woman, will, secretary awful, be.fascinated, can’t, dismiss, feel, gut.instinct, how, no, none, that, these.two, thinking.hard 3 1%

In the following examples, look/‘reaction’ is interpreted as ‘be + like’, similar to what Padden (1986) and Lillo-Martin (1995) used for translating quotative and non-quotative constructions in ASL into English. This interpretation also echoes the grammaticalized English ‘like’ used to introduce reported speech and thought (Romaine and Lange 1991). In such examples, pro.1 represents the first-person pronoun, which does not vary in form for case; pro.3 represents a third-person pronoun; and oic is an interactive sign commonly used for two purposes: backchanneling in conversations or signaling realization. palm.up is a polysemous discourse marker in both signed languages and co-speech gesture. As indicated by the gloss, the form is the rotation of one or two open hands to an upward palm orientation (Cooperrider et al. 2018; McKee and Wallingford 2011). While a wide range of functions is associated with this form, it is generally translated as ‘well’, as in a filler or an exclamation.

The sequences that occur with a frequency of more than 5% are pro.1 look oic, pro.1 look pro.1, pro.1 look, look oic, and look pro.1. The sequence pro.1 look is generally followed by some sort of reaction; this is exemplified in Figure 12, in which the signer immediately signals an intimate connection to what another signer was saying. Although pro.1 precedes look in 55% of reactions, non-first-person referents also occupy the experiencer role, such as the third-person pronoun pro.3 and people, which together account for 11% of the reactions. Such referents occur both in the more conventionalized sequences like look oic and in more schematic sequences such as look concerned. Figure 13 shows pro.3 look oic; the signer was recalling her first meeting with the former U.S. president Barack Obama, conveying his realization that the signer was deaf. Figure 14 shows people look concerned followed by another reaction that demonstrates the fatigue of being concerned. The particular sequences pro.1 look pro.1 and look pro.1 warrant further explanation, as without context these sequences can give the wrong readings of “I look at myself” and “X looks at me.” Rather, the repetition of the first-person pronoun signals a pivot to the signer’s reaction, highlighting stance-taking from a first-person perspective.

Figure 12: “I was like, I totally understood (what the person was saying).”
Source: Frye, Callie. (2020). DCARA March 20, 2019 Timestamp: 00:00:43-00:00:45
Images extracted from: https://youtu.be/kVlVQaTr7Mk?t=325.

Figure 13: “He was like oh I see, you’re deaf, got it.”
Source: McFeely, Sheena. (2011). The Pearls – Leah Katz Hernandez Interview Timestamp: 00:02:40-00:02:43
Images extracted from: https://youtu.be/zlR2EGi6_wA?t=160.

Figure 14: “People were like concerned, finding it unpleasant as it went on.”
Source: The Daily Moth. (2017). The Daily Moth 2-13-17 Timestamp: 00:00:57-00:00:59
Images extracted from: https://youtu.be/Ge8tepq-9bQ?t=57.

In other bigrams, after the popular oic, there are recurring specific signs such as yes, wow, get.inspired, gut.instinct, hold.on, and mind.puzzled. These recurrences suggest that look/‘reaction’ co-occurs with a variety of signs, some cognitive in nature and others exclamatory, that are commonly used to express attitudinal stance. By contrast, this pattern is not as robust with the family of ‘look’ signs that exhibit the vision function.

3.2.2 Vision

There are 369 tokens with the vision function from the family of ‘look’ signs, which includes observe and read. The gloss look/‘vision’ refers to the look form specifically. Table 5 lists all 18 ‘look’ types that occurred in the dataset. For the sake of space, the tables of recurring n-grams are limited to look. Tables 6 and 7 list the recurring trigrams (n = 6) and bigrams (n = 40), respectively.

Table 5:

List of different types of ‘look’ (n = 369).

Rank Type Count Percentages
1 look 150 41%
2 read 44 12%
3 view 42 11%
4 examine 32 9%
5 browse 21 6%
6 look.at.me 15 4%
7 look.at.each.other 10 3%
8 look+one 9 2%
9 watch 9 2%
10 sightsee 5 1%
11 look.back.on 5 1%
12 look+object 5 1%
13 look.back.and.forth 5 1%
14 turn.to.look 4 1%
15 look+look++ 4 1%
16 admire 3 1%
17 look.down.on 3 1%
18 predict 3 1%
Table 6:

Frequent (n ≥ 2) trigrams with look/‘vision’ (n = 150).

Rank s−2 s−1 s+1 Count
1 index palm.up 3 2%
2 pro.1 pro.1 2 1%
3 w-e-b-s-i-t-e index 2 1%
4 stop pro.1 2 1%
5 you not.yet 2 1%
6 will see 2 1%
Table 7:

Frequent (n ≥ 2) bigrams with look/‘vision’ (n = 150).

Rank s−1 s+1 Count
1 pro.1 23 15%
2 palm.up 9 6%
3 palm.up 9 6%
4 can, index pro.1 6 4%
5 pro.2 see 5 3%
6 index 4 3%
7 never, o-r sun, that 3 2%
8 can’t, fine, grab.opportunity, have.to, look, must, not, not.yet, now, pick.up, start, tend.to, pro.3-pl, wait, will, pro.2-pl follow, look, on, poss.1-pl, poss.3, q-u-a-l-i-t-y, t-v, v-i-d-e-o, word, y-o-u-t-u-b-e, poss.2 2 1%

These tables show a lower frequency of recurring sequences than the tables reported for the n-grams of look/‘reaction’. First, the most frequent sequence, pro.1 look, accounts for only 15% of the sample. This sequence is used to mark visual perception of an object, as exemplified in Figure 15. Second, the trigrams and bigrams show a lower distribution of the first-person pronoun. Finally, the bigrams show a wider variety of modals, negators, possessives, and referents co-occurring with look. None of these signs group together as a category that would distinctly signal the signer’s reaction to a visual stimulus, even when considered in the larger context of discourse. The bigrams also have more hapaxes; there are 61 hapaxes preceding look (41%) and 96 hapaxes following look (64%).

Figure 15: “I looked at the comments, scrolling through them under the vlog.”
Source: The Daily Moth. (2019). The Daily Moth 3-22-2019 Timestamp: 00:18:04-00:18:07
Images extracted from: https://youtu.be/rTJt6dTwc0k?t=1084.

The patterning of the n-grams for look/‘vision’ differs from that for look/‘reaction’ with respect to the distribution of the signs that follow look. The link between these two sets of patterns can be observed in the ambiguous look tokens.

3.2.3 Ambiguous tokens

There are 163 tokens from the family of ‘look’ signs that were categorized as ambiguous. Table 8 lists all the ‘look’ types that occurred in this category; there is a clear overlap between this table and Table 5. The tables of recurring n-grams are limited to look. Tables 9 and 10 list the recurring trigrams (n = 2) and bigrams (n = 10), respectively.

Table 8:

List of different types of ‘look’ categorized as ambiguous (n = 163).

Rank Type Count Percentages
1 look 58 36%
2 read 27 17%
3 examine 21 13%
4 look.at.me 13 8%
5 browse 12 7%
6 look+object 5 3%
7 look.up.and.down 4 2%
8 look.at+one 4 2%
9 admire 3 2%
10 look.back.on 3 2%
11 turn.to.look.at 3 2%
12 look.at.each.other 2 1%
13 keep.looking.on 2 1%
14 look.back.and.forth 2 1%
15 look.down.on 2 1%
16 view 1 1%
17 watch 1 1%
Table 9:

Frequent (n ≥ 2) trigrams with look categorized as ambiguous (n = 58).

Rank s−1 s+1 Count
1 pro.1 pro.3 2 3%
2 pro.1 pro.1 2 3%
Table 10:

Frequent (n ≥ 2) bigrams with look categorized as ambiguous (n = 58).

Rank s−1 s+1 Count
1 pro.1 20 34%
2 palm.up 6 10%
3 pro.1 5 9%
4 pro.3 4 7%
5 palm.up umm 3 5%
6 index, pro.3, woman learn 2 3%

The sequence pro.1 look is one of the most frequent recurring sequences in the bigrams and trigrams, similar to what has been observed for look/‘reaction’ and look/‘vision’. What makes a sequence ambiguous is the expression of stance. One example is the exclusive use of facial expressions to show one’s reaction to a visual stimulus. In Figure 16, the signer produces a facial expression following pro.1 look which can be interpreted as a negative emotional stance from witnessing a scene of a person whispering to another person and then walking away; however, the signer does not make an explicit comment about their stance or elaborate on it, and instead continues narrating the events following the scene. Another example of ambiguity is the simultaneous expression of vision and reaction, as demonstrated by the type of token and by the subsequent signs. Consider Figure 17, which demonstrates two instances of pro.1 read. The first instance encodes vision, with Edgar Allan Poe as the object of reading; the second instance is ambiguous because the sign following pro.1 read encodes a reaction showing that the signer found Poe too difficult to understand. The combination and interaction of the two functions in the phrasal context render the second instance of pro.1 read ambiguous.

Figure 16: “They whispered to them, telling them it’s okay, and then they walked away. I looked at them in disbelief. The interpreter came up to me and sat across from me.”
Source: Street Leverage. (2012). Trudy Suggs: Deaf Disempowerment and Today’s Interpreter Timestamp: 00:04:36-00:04:41
Images extracted from: https://youtu.be/pDSNKRaOmo8?t=276.

Figure 17: “I was assigned to read Edgar Allan Poe, I read it, it went over my head.”
Source: ASLized! (2013). Deaf Schools (with audio and captions) Timestamp: 00:01:29-00:01:34
Images extracted from: https://youtu.be/mkwYHheJQVw?t=89.

3.3 Frequency distribution of LOOK and SEE

There are 210 tokens of see forms. The gloss see refers to the sign in Figure 11. This sign can also be produced as a symmetrical two-handed form. The two-handed forms (n = 16) were excluded from the study for the time being, since their functions warrant additional investigation. One form, see-see, is a distinctly reduced form of see that only indicates one’s anticipation about the outcome of a situation, i.e., ‘let’s see’ (Naughton 2001). Whereas see is used to refer to the perception of a visual stimulus, see-see cannot be used likewise.

Table 11 presents the types and tokens of the see signs. Table 12 summarizes the frequent see and see-see n-grams; no other form was observed to have any recurring sequences. Tables 13 and 14 list the trigrams and bigrams, respectively, for see. The quadgrams, which overlap with some of the trigrams, are not listed here, since they are largely part of scripted lines from the ASL news show The Daily Moth. The bigrams and trigrams for see-see are not listed here due to the low token count, which makes it difficult to draw generalizations about the co-occurrence of signs with that particular sign.

Table 11:

Summary of types and tokens of see and see-see (n = 210).

Type Token count
see 198
see-see 12
Total 210
Table 12:

Summary of frequent (n ≥ 2) see and see-see n-grams.

Type Bigrams Trigrams Quadgrams
see 55 29 7
see-see 4 3 0
Total 59 32 7
Table 13:

Frequent (n ≥ 2) trigrams with see (n = 198).

Rank s−2 s−1 s+1 s+2 Count
1 what.do happen 7 3%
2 all now 5 2%
3 now pro.2 5 2%
4 pro.1 finish 4 1%
5 can palm.up 3 1%
6 pro.1 can 3 1%
7 twitter there 2 1%
8 will what.do 2 1%
9 l-i-n-k up.there 3 1%
10 pro.1-pl will 3 1%
11 look, palm.up, people can 2 1%
12 can picture 2 1%
13 pro.1 hope 2 1%
14 pro.1 index, many, pro.1, that 2 1%
15 article say 2 1%
16 l-i-n-k for 2 1%
17 look see 2 1%
18 negative++ about 2 1%
19 short c-l-i-p 2 1%
20 will look 2 1%
21 never before 2 1%
22 people poss.2 2 1%
23 someone pro.3 2 1%
24 on twitter 2 1%
Table 14:

Frequent (n ≥ 2) bigrams with see (n = 198).

Rank s−1 s+1 Count
1 pro.1 23 12%
2 can 22 11%
3 what.do 11 6%
4 index, pro.2, that 10 5%
5 will 9 4%
6 palm.up 8 4%
7 finish 7 4%
8 look, now l-i-n-k, poss.2 6 3%
9 palm.up picture, there 5 2%
10 never, pro.3 before, many, poss.3, pro.1 4 2%
11 index, want, twitter appear, negative++, pro.3, pro.3-pl, short 3 2%
12 can’t, from.now.on, hard, hope, i-f, not, people, pro.2, someone, watch article, body, c-l-i-p, different, i-f, index+one, look, more, next, notice, obvious, people, two, where, word 2 1%

According to these tables, the most frequent trigram is see what.do happen ‘see what will happen’, which signals anticipation of an outcome of a situation. The most frequent bigram is pro.1 see, followed closely by can see. Other frequent bigrams show a distribution of various modals, possessives, and referents, and include few signs that are cognitive in meaning and would be associated with attitudinal stance. The clustering of signs co-occurring with see is similar to what is observed for look/‘vision’.

3.4 Statistical analysis

The look/‘reaction’, look/‘vision’, see, and see-see sequences share certain recurring signs in their co-occurrence patterns. Are the observed frequencies of co-occurrence of certain signs in the overlapping sequences significantly more likely than their alternatives (e.g., X vs. Y)? Twelve chi-squared tests were conducted for pairs of relevant alternatives. Given the number of comparisons, the critical p-value of 0.01 is corrected to p < 0.0008 (0.01/12). Table 15 summarizes the results of the chi-square tests that showed statistically significant differences between pairs of overlapping sequences that co-occurred with pro.1, the most frequent sign co-occurring with all the targeted signs. The results show that for pro.1 in the s−1 slot, there is a preference for pro.1 to collocate with look/‘reaction’ over look/‘vision’, see, and see-see. There is also a preference for pro.1 to collocate with look/‘vision’ over see-see. Finally, there is a preference for pro.1 to collocate with see over see-see.

Table 15:

Summary of chi-square test results for six paired sequences co-occurring with pro.1.

Paired sequence [pro.1 s]
look/‘reaction’ (Freq. = 95) v. look/‘vision’ (Freq. = 23) χ² = 43.932, p = 3.399e-11
look/‘reaction’ (Freq. = 95) v. see (Freq. = 23) χ² = 43.932, p = 3.399e-11
look/‘reaction’ (Freq. = 95) v. see-see (Freq. = 4) χ² = 83.65, p < 2.2e-16
look/‘vision’ (Freq. = 23) v. see-see (Freq. = 4) χ² = 13.37, p = 0.00026
see (Freq. = 23) v. see-see (Freq. = 4) χ² = 13.37, p = 0.00026
Paired sequence [s pro.1]
look/‘vision’ (Freq. = 23) v. see (Freq. = 4) χ² = 13.37, p = 0.00026

Additionally, the results show that for pro.1 in the s+1 slot, there is a preference for pro.1 to collocate with look/‘vision’ over see. The other pairs of overlapping sequences had lower frequencies of co-occurrence and, crucially, did not show any statistically significant differences. There was no difference, for example, between look/‘reaction’ and look/‘vision’ in the co-occurrence of pro.1 in the s+1 slot.
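The figures in Table 15 can be reproduced as goodness-of-fit tests on the raw co-occurrence counts. The sketch below is my reconstruction, not the author’s original script: it assumes each test compares two counts against an equal expected split and applies the Bonferroni-corrected alpha of 0.01/12.

```python
# Reconstruction of the Table 15 tests as chi-squared goodness-of-fit
# tests over two raw counts (scipy's default expectation is an even split).
from scipy.stats import chisquare

pairs = {
    "look/'reaction' v. look/'vision'": (95, 23),
    "look/'reaction' v. see": (95, 23),
    "look/'reaction' v. see-see": (95, 4),
    "look/'vision' v. see-see": (23, 4),
    "see v. see-see": (23, 4),
}
alpha = 0.01 / 12  # Bonferroni correction for twelve comparisons

for label, counts in pairs.items():
    chi2, p = chisquare(counts)
    verdict = "significant" if p < alpha else "n.s."
    print(f"{label}: chi2 = {chi2:.3f}, p = {p:.3g} ({verdict})")
```

Running this yields, for example, χ² = 43.932 for the 95-to-23 pairs, matching the values reported in Table 15.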

4 Analysis and discussion

The study revealed that there are recurring sequences involving look and its family of morphologically related signs, including at least one likely conventionalized multi-word expression, pro.1 look/‘reaction’. The length of these sequences, two to four signs, is consistent with the literature reporting that multi-word expressions in spoken languages typically fall in the range of two to four words (Green 2017; Pothos and Juola 2001). The study also showed that there are emergent patterns of recurring sequences associated with the different functions of look and with the particular syntactic environments in which they occur.

4.1 Syntactic environments of different look types

The high-frequency sequence pro.1 look is the only sequence that recurs across the three categories of vision, reaction, and ambiguous. It is most robust for look/‘reaction’, accounting for 55% of the 174 tokens, and least robust for look/‘vision’, accounting for only 15% of the 150 tokens. It is relatively frequent in the ambiguous category, accounting for 34% of the 58 tokens. How frequently the sequence pro.1 look occurs across these functions appears to correlate with the type of look, with the cluster of signs that co-occur with the sequence, and with the syntactic environment. First, look/‘reaction’ exhibits a highly specialized meaning, presenting the signer’s attitudinal stance towards a visual stimulus. This type may be accompanied by phonetic reduction, manifested as reduced path movement in the look sign, and by heightened affective facial expressions, though this warrants further investigation. Second, the stance reading is strengthened by the frequent co-occurrence of the first-person singular pronoun. Third, look/‘reaction’ occurs in a more restricted syntactic environment, whereas look/‘vision’ and the ambiguous look constructions occur in the following environments:

  1. The presence of an explicit object in a post-verbal position, e.g., ‘comments’ (Figure 15),

  2. The co-occurrence with modals in a pre-verbal position, e.g., can look (Figure 18),

  3. The co-occurrence with negators in a pre-verbal position, e.g., not look (Figure 19),

  4. The formation of a complex predicate by the co-occurrence of look with another verb, e.g., look see (Figure 20),

  5. The nominalization of look, e.g., ‘look back’ or ‘reminiscence’ as a noun (Figure 21).

Figure 18: “We Americans can look at the Polish (dance).”
Source: The Daily Moth. (2017). The Daily Moth 2-22-17. Timestamp: 00:11:42-00:11:44.
Images extracted from: https://youtu.be/ohrr3PHkEQE?t=702.

Figure 19: “He did not look both ways.”
Source: The Daily Moth. (2017). The Daily Moth 2-20-17. Timestamp: 00:05:34-00:05:36.
Images extracted from: https://youtu.be/3NkoX0RZKgE?t=334.

Figure 20: “(What we) have to do is watch and see what happens (with ICE).”
Source: The Daily Moth. (2017). The Daily Moth 2-13-17. Timestamp: 00:08:28-00:08:29.
Images extracted from: https://youtu.be/Ge8tepq-9bQ?t=508.

Figure 21: “Looking back at my time in the deaf institute” (lit. “my reminiscence of the deaf institute”).
Source: ASLized! (2013). Deaf Schools (with audio and captions). Timestamp: 00:02:46-00:02:48.
Images extracted from: https://youtu.be/mkwYHheJQVw?t=166.

These syntactic environments are not mutually exclusive. Theoretically, look/‘vision’ can combine several of these properties, such as the co-occurrence of look with both a modal and an explicit object (see Figure 18).

Additionally, a more in-depth investigation of the internal structure of the constructions could yield a more fine-grained analysis, but this is hindered by the difficulty of identifying clausal boundaries in ASL. There is an ongoing discussion about identifying clausal and sentential boundaries in signed languages (Hodge 2014; Jantunen 2017; Johnston 2019; Ormel and Crasborn 2012). There has yet to be a systematic investigation of the syntactic and prosodic cues for identifying the boundaries of basic and complex utterances in spontaneous ASL discourse specifically, though there is some research based on contextually isolated, elicited data. For the time being, then, I make no specific claims about the clausal boundaries of the look constructions, particularly the look/‘reaction’ ones; what is instantiated in the ‘reaction’ slot may belong to the same clause or constitute another clause. Apart from the clause issue, the present data show that there are observed differences between the syntactic environments of look/‘vision’, look/‘reaction’, and the in-between, ambiguous cases. This is illustrated in the three basic constructional schemas in Figure 22. The reaction construction, however, has a more restricted syntactic environment, suggesting a loss of the sign’s broad syntactic usage. This loss paves the way for the emergence of a new construction from the broader constructions, an indicator of grammaticalization.

Figure 22: Three constructional schemas for the recurring look sequences associated with vision, ambiguous, and reaction, respectively.

4.2 Grammaticalization

Grammaticalization research on signed languages has focused on the incorporation of manual gestures and facial expressions into signed languages as grammatical and lexical morphemes, both from cognitive linguistics perspectives (Janzen 2012, 2018; Janzen and Shaffer 2002; Wilcox 2004, 2007) and from formal linguistics perspectives (Meir 2003; Pfau and Steinbach 2006). These studies have demonstrated that certain aspects of grammaticalization are specific to the modality, i.e., the transmission channel, of language, while others are not. Of particular interest here is the ASL case study of know (Janzen 2018). As a verb, know co-occurs with subject and object pronouns or noun phrases. As a discourse marker, i.e., ‘I know’ or ‘you know’, know generally does not co-occur with any nominals. As a topic marker, know appears at the beginning of a topic phrase and co-occurs with raised eyebrows and potentially a slight backward head tilt. The changes observed in the lexical and grammatical uses of know show a transformation along the syntactic dimension: relatively free syntactic units become constrained grammatical morphemes.

Usage-based theories postulate that usage drives the grammaticalization of lexical items along the phonetic, semantic, and syntactic dimensions (Bybee 2003, 2010; Traugott 2003). The lexical item not only becomes a grammatical morpheme, as indicated by phonetic and semantic changes, but a new construction also emerges from an old one: both form and meaning change in the emergence of new structures. The English construction going to/gonna is a well-documented example. The grammaticalization of gonna from going to did not happen through mere repetition of the item itself, but rather through repeated instantiation of the item in the purpose construction [movement verb + Progressive + purpose clause]. This step produced a new construction [be going to verb] that conveys the intention reading (Bybee 2003). Other movement verbs, such as traveling, riding, or journeying, can be instantiated in the purpose construction. However, these verbs cannot be instantiated in the verb slot of the intention construction because they do not give the same reading as gonna does.

In the case of look in ASL, the grammaticalization process is ongoing. The syntactic environment of look/‘vision’ narrows into that of look/‘reaction’ as the meaning becomes more specialized, with an emphasis on pragmatic strengthening. Different ‘look’ types can be used to preface reactions, as observed in some of the ambiguous constructions. In Figure 23, the look form means ‘to look up at an object’, which in turn refers to an antiquated telephone in a museum display. This type is also distinct in the upward direction of its path movement, and the ambiguous function rests on the meaning of possibility conveyed by the modal can and on the non-subjectivity of a third-person viewpoint, combined with a hypothetical stance.

Figure 23: “Everywhere people can look up at it (=the red telephone) to remember how awful war is.”
Source: The Daily Moth. (2017). The Daily Moth 2-20-17. Timestamp: 00:11:38-00:11:41.
Images extracted from: https://youtu.be/3NkoX0RZKgE?t=698.

Many ambiguous constructions can be viewed as intermediaries between vision and reaction, or as part of a continuum of look constructions undergoing grammaticalization. Figure 24 provides a visual representation of the grammaticalization process for look, moving from vision towards reaction. The brackets represent the schema, and the parentheses represent an optional slot; the formational properties associated with the functions have yet to be fully investigated quantitatively. The schemas illustrate that the subject transitions from the agent who looks at a stimulus to the experiencer who expresses their reaction to it. The change in the syntactic construction exhibits a greater degree of subjectivity. A sequence [pro.1 look/‘vision’ (object)] can represent some degree of subjectivity insofar as it reports a looking activity from a first-person viewpoint, whereas a sequence [pro.1 look/‘reaction’ reaction] clearly represents a stance from that viewpoint.

Figure 24: A visual representation of the grammaticalization process of the ASL sign look.

4.3 Prefabs and schematization

In the data, the most robust patterns are the look/‘reaction’ constructions. The most frequent sequence is pro.1 look, which can properly be viewed as a prefab. Other sequences cluster around a group of recurring, cognitively oriented signs that precede and/or follow look, indicating the strength of association of such signs with reaction. These sequences may constitute prefabs, stored as multiple instances of exemplar wholes rather than as individual component parts; they would be entrenched as autonomous chunks, facilitating retrieval and processing as units (Bybee 2010). These prefabs also allow for the instantiation of a more schematic template, [(experiencer) look/‘reaction’ reaction], in which the slots that immediately precede and follow look can be filled with other signs. The template accounts for the creation of novel sequences, as evidenced by the lower-frequency sequences in the data. Although the [experiencer] slot is most commonly filled by a first-person pronoun, it is also filled by other pronominal and nominal arguments such as pro.3 and people, and by the occasional discourse marker palm.up.[16] In some instances, the experiencer is not explicitly mentioned. The [reaction] slot is filled by the interactive oic in 21% of the bigram tokens and 10% of the trigram tokens. It is also filled by other lower-frequency but recurring signs such as pro.1, palm.up, wow, get.inspired, hold.on, mind.puzzled, and yes, as well as by hapax signs and even longer strings of signs that constitute the reaction, as seen in Figure 25. The reaction is not necessarily limited to individual signs; it may be a string of signs that conveys the signer’s stance.

Figure 25: “I got the feeling that something was off.”
Source: Street Leverage. (2012). Trudy Suggs: Deaf Disempowerment and Today’s Interpreter. Timestamp: 00:11:42-00:11:45.
Images extracted from: https://youtu.be/pDSNKRaOmo8?t=702.
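To make the schematic template concrete, the following sketch shows one way the template [(experiencer) look/‘reaction’ reaction] could be matched against glossed token streams. The gloss inventory and the function name are my illustrative assumptions, not the study’s tooling.

```python
# Illustrative matcher for the template [(experiencer) LOOK/'reaction' reaction].
# The experiencer gloss set is an assumption drawn from the signs reported above.
EXPERIENCERS = {"pro.1", "pro.3", "people", "palm.up"}

def match_template(glosses):
    """Return (experiencer, reaction) pairs for each LOOK token in the stream."""
    hits = []
    for i, gloss in enumerate(glosses):
        if gloss != "look":
            continue
        # The experiencer slot is optional; take the preceding gloss if it qualifies.
        experiencer = glosses[i - 1] if i > 0 and glosses[i - 1] in EXPERIENCERS else None
        # The reaction slot is whatever immediately follows LOOK.
        reaction = glosses[i + 1] if i + 1 < len(glosses) else None
        if reaction is not None:
            hits.append((experiencer, reaction))
    return hits

print(match_template(["pro.1", "look", "oic"]))    # [('pro.1', 'oic')]
print(match_template(["people", "look", "wow"]))   # [('people', 'wow')]
```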

The repeated co-occurrence of certain signs shows how they cohere in recurring sequences and give rise to relatively fixed, conventionalized strings of units as prefabs. They also give rise to the schematization of these units, allowing for the creation and re-use of new structures. Whether users rely on abstraction or analogy, or both, to produce such novel structures remains an open question.

5 Conclusion

Sign language linguistics has come a long way since its advent in the 1960s, when ASL was first heralded as a full-fledged language with its own grammar. However, the investigation of multi-word expressions has only begun to advance to the point where researchers are moving beyond structuralist and formal-generative approaches and looking at the structure of ASL in terms of recurring chunks of structure in discourse (Lepic 2019; Wilkinson et al. in press). This paradigmatic shift provides the opportunity to ascertain whether recurring sequences in signed languages emerge from domain-general cognitive mechanisms and how these sequences contribute to linguistic structure and meaning. The opportunity is made possible by the rise and spread of internet data for ASL and of signed language corpora such as those for Auslan, BSL, STS, and many more.

Usage-based linguistics posits that grammar emerges from repeated use in particular discourse contexts. The use is shaped by the application of domain-general cognitive processes, including chunking, entrenchment, and automatization. Chunking leads to the formation of multi-word expressions, and higher-frequency chunks contribute to the grammaticalization of certain units. The empirical question of which multi-word expressions exist and how they emerge in different signed languages has only recently become tractable with the rise of corpus data. The present study contributes to this inquiry with a case study of the ongoing grammaticalization of a high-frequency ASL sign, look. The study offers evidence for chunking, based on the co-occurrence of the first-person pronoun with different functions of the family of ‘look’ signs in various syntactic environments.

First, it appears that look/‘vision’ occurs in a wide variety of sequences in diverse syntactic environments, whereas look/‘reaction’ occurs in a much more restricted syntactic environment and tends to co-occur with a first-person pronoun in a construction that represents the signer’s attitudinal stance. Second, pro.1 look/‘reaction’ is a highly conventionalized unit, a prefab, as confirmed by the chi-squared tests. Other look sequences are more schematic, co-occurring with non-first-person experiencers and various stances that may or may not be quotative. The sequences vary along a spectrum of degrees of fixedness, with prefabs at one end and more schematic multi-word expressions with open slots at the other. Lastly, look can convey the simultaneous meaning of visual perception and subjectivity, showing the ambiguity and gradience that arise along the continuum of functions. The ambiguity also reflects the ongoing process of grammaticalization and, on a larger scale, the more general patterns of polysemy and semantic change in sensory perception verbs. More importantly, the evidence of recurring sequences confirms that there are chunks of structure in ASL. This raises the possibility that the higher-frequency sequences are stored, entrenched, and automatized as exemplar wholes, creating networks of fixed constructions while allowing for the schematization of these constructions, which enables signers to be productive and creative with their language.

Data availability statement

Where available, the links to the videos are provided in the Appendix and as part of the Figure labels. The annotations and metadata are available in the Dryad repository at https://doi.org/10.25349/D93W4Z.


Corresponding author: Lynn Hou, University of California, Santa Barbara, CA, USA

Acknowledgements

The author would like to express gratitude to the colleagues who supported this article with comments, ideas, and suggestions: Ben Anible, Rich Bailey, Eric Campbell, Savithry Namboodiripad, Corrine Occhino, Sandy Thompson – and most importantly, Ryan Lepic and Erin Wilkinson. Thanks to the three anonymous reviewers for their constructive feedback and to Petar Milin for the statistical consultation. Finally, I express my appreciation to Dagmar Divjak and Sherman Wilcox for seeing the article through its final revisions.

Appendix

ASL internet data
Number of videos 65
Number of signers 38
Total duration of video data 8 h and 21 min
Duration of news data 3 h and 37 min
Duration of monologue data 3 h and 35 min
Duration of conversation data 1 h and 9 min

Video sources

Note: Some of the videos are on YouTube or Vimeo. Two are in a public group on Facebook. Three videos are not listed here; they are either no longer available for public viewing or missing. These videos are hereafter labeled as “Missing Facebook video 1”, “Missing Facebook video 2”, and “Missing YouTube Video 1” in the Dryad dataset.

References

Arnon, Inbal & Neal Snider. 2010. More than words: Frequency effects for multi-word phrases. Journal of Memory and Language 62(1). 67–82. https://doi.org/10.1016/j.jml.2009.09.005.

Barlow, Michael & Suzanne Kemmer. 1994. A schema-based approach to grammatical description. In Susan D. Lima, Roberta L. Corrigan & Gregory K. Iverson (eds.), The reality of linguistic rules, 19–42. Amsterdam/Philadelphia: Benjamins. https://doi.org/10.1075/slcs.26.05bar.

Biber, Douglas. 2009. A corpus-driven approach to formulaic language in English: Multi-word patterns in speech and writing. International Journal of Corpus Linguistics 14(3). 275–311. https://doi.org/10.1075/ijcl.14.3.08bib.

Börstell, Carl. 2022. Searching and utilizing corpora. In Jordan Fenlon & Julie A. Hochgesang (eds.), Signed language corpora. Washington, D.C.: Gallaudet University Press.

Börstell, Carl, Thomas Hörberg & Robert Östling. 2016. Distribution and duration of signs and parts of speech in Swedish Sign Language. Sign Language & Linguistics 19(2). 143–196. https://doi.org/10.1075/sll.19.2.01bor.

Brinton, Laurel J. 2001. From matrix clause to pragmatic marker: The history of look-forms. Journal of Historical Pragmatics 2(2). 177–199. https://doi.org/10.1075/jhp.2.2.02bri.

Butt, Miriam. 2010. The light verb jungle: Still hacking away. In Mengistu Amberber, Brett Baker & Mark Harvey (eds.), Complex predicates: Cross-linguistic perspectives on event structure, 48–78. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511712234.004.

Bybee, Joan. 2003. Mechanisms of change in grammaticization: The role of frequency. In Brian D. Joseph & Richard D. Janda (eds.), The handbook of historical linguistics, 602–623. Malden, MA: Blackwell. https://doi.org/10.1002/9780470756393.ch19.

Bybee, Joan. 2006. From usage to grammar: The mind’s response to repetition. Language 82(4). 711–733. https://doi.org/10.1353/lan.2006.0186.

Bybee, Joan. 2010. Language, usage and cognition. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511750526.

Bybee, Joan & David Eddington. 2006. A usage-based approach to Spanish verbs of “becoming”. Language 82(2). 323–355. https://doi.org/10.1353/lan.2006.0081.

Bybee, Joan L. & Rena Torres Cacoullos. 2009. The role of prefabs in grammaticization: How the particular and the general interact in language change. In Roberta Corrigan, Edith A. Moravcsik, Hamid Ouali & Kathleen Wheatley (eds.), Formulaic language, vol. 1: Distribution and historical change (Typological Studies in Language 82), 187–218. Philadelphia: John Benjamins. https://doi.org/10.1075/tsl.82.09the.

Cooperrider, Kensy, Natasha Abner & Susan Goldin-Meadow. 2018. The palm-up puzzle: Meanings and origins of a widespread form in gesture and sign. Frontiers in Communication 3. 23. https://doi.org/10.3389/fcomm.2018.00023.

Cormier, Kearsy, Sandra Smith & Zed Sevcikova Sehyr. 2015. Rethinking constructed action. Sign Language & Linguistics 18(2). 167–204. https://doi.org/10.1075/sll.18.2.01cor.

Dąbrowska, Ewa. 2014. Recycling utterances: A speaker’s guide to sentence processing. Cognitive Linguistics 25(4). 617–653. https://doi.org/10.1515/cog-2014-0057.

Divjak, Dagmar. 2019. Frequency in language: Memory, attention and learning. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781316084410.

Ellis, Nick C. 2002. Frequency effects in language processing: A review with implications for theories of implicit and explicit language acquisition. Studies in Second Language Acquisition 24(2). 143–188. https://doi.org/10.1017/S0272263102002024.

Engberg-Pedersen, Elisabeth. 1993. Space in Danish Sign Language. Hamburg: Signum-Verlag.

Erman, Britt & Beatrice Warren. 2000. The idiom principle and the open choice principle. Text – Interdisciplinary Journal for the Study of Discourse 20(1). 29–62. https://doi.org/10.1515/text.1.2000.20.1.29.

Evans, Nicholas & David Wilkins. 2000. In the mind’s ear: The semantic extensions of perception verbs in Australian languages. Language 76(3). 546–592. https://doi.org/10.2307/417135.

Fagard, Benjamin. 2010. É vida, olha…: Imperatives as discourse markers and grammaticalization paths in Romance: A diachronic corpus study. Languages in Contrast 10(2). 245–267. https://doi.org/10.1075/lic.10.2.07fag.

Fenlon, Jordan, Kearsy Cormier & Adam Schembri. 2015. Building BSL SignBank: The lemma dilemma revisited. International Journal of Lexicography 28(2). 169–206. https://doi.org/10.1093/ijl/ecv008.

Fenlon, Jordan, Adam Schembri & Kearsy Cormier. 2018. Modification of indicating verbs in British Sign Language: A corpus-based study. Language 94(1). 84–118. https://doi.org/10.1353/lan.2018.0002.

Fenlon, Jordan, Adam Schembri, Ramas Rentelis, David Vinson & Kearsy Cormier. 2014. Using conversational data to determine lexical frequency in British Sign Language: The influence of text type. Lingua 143. 187–202. https://doi.org/10.1016/j.lingua.2014.02.003.

Fischer, Susan D. 1975. Influences on word order change in American Sign Language. In Charles N. Li (ed.), Word order and word order change, 1–25. Austin, TX: University of Texas Press.

Frishberg, Nancy & Bonnie Gough. 2000. Morphology in American Sign Language. Sign Language & Linguistics 3(1). 103–131. https://doi.org/10.1075/sll.3.1.08fri.

Goldberg, Adele E. 2006. Constructions at work: The nature of generalization in language. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199268511.001.0001.

Green, Clarence. 2017. Usage-based linguistics and the magic number four. Cognitive Linguistics 28(2). 209–237. https://doi.org/10.1515/cog-2015-0112.

Gries, Stefan Th. 2012. Frequencies, probabilities, and association measures in usage-/exemplar-based linguistics: Some necessary clarifications. Studies in Language 36(3). 477–510. https://doi.org/10.1075/bct.67.02gri.

Haiman, John. 1985. Ritualization and the development of language. In William Pagliuca (ed.), Perspectives on grammaticalization, 3–28. Amsterdam/Philadelphia: John Benjamins. https://doi.org/10.1075/cilt.109.07hai.

Healy, Christina. 2015. Construing affective events in ASL. Washington, D.C.: Gallaudet University PhD dissertation.

Hill, Joseph C. 2015. Language attitudes in deaf communities. In Adam C. Schembri & Ceil Lucas (eds.), Sociolinguistics and deaf communities, 146–174. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781107280298.007.

Hodge, Gabrielle. 2014. Patterns from a signed language corpus: Clause-like units in Auslan (Australian Sign Language). Sydney, Australia: Macquarie University PhD dissertation.

Hodge, Gabrielle & Kearsy Cormier. 2019. Reported speech as enactment. Linguistic Typology 23(1). 185–196. https://doi.org/10.1515/lingty-2019-0008.

Hopper, Paul J. 1987. Emergent grammar. Berkeley Linguistics Society 13. 139–157. https://doi.org/10.3765/bls.v13i0.1834.

Hou, Lynn, Ryan Lepic & Erin Wilkinson. 2020. Working with ASL internet data. Sign Language Studies 21(1). 32–67. https://doi.org/10.1353/sls.2020.0028.

Hou, Lynn, Ryan Lepic & Erin Wilkinson. 2022. Managing sign language video data collected from the internet. In Andrea Berez-Kroeker, Bradley McDonnell, Eve Koller & Lauren Collister (eds.), Open handbook of linguistic data management. Cambridge, MA: MIT Press Open. https://doi.org/10.7551/mitpress/12200.003.0045.

Hou, Lynn & Jill P. Morford. 2020. Using signed language collocations to investigate acquisition: A commentary on Ambridge (2020). First Language 40. 585–591. https://doi.org/10.1177/0142723720908075.

Hou, Lynn & Richard P. Meier. 2018. The morphology of first-person object forms of directional verbs in ASL. Glossa: A Journal of General Linguistics 3(1). 114. https://doi.org/10.5334/gjgl.469.

Jantunen, Tommi. 2017. Constructed action, the clause and the nature of syntax in Finnish Sign Language. Open Linguistics 3(1). 65–85. https://doi.org/10.1515/opli-2017-0004.

Janzen, Terry. 1999. The grammaticization of topics in American Sign Language. Studies in Language 23(2). 271–306. https://doi.org/10.1075/sl.23.2.03jan.

Janzen, Terry. 2012. Lexicalization and grammaticalization. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language – An international handbook, 816–841. Berlin: Walter de Gruyter. https://doi.org/10.1515/9783110261325.816.

Janzen, Terry. 2018. KNOW and UNDERSTAND in ASL: A usage-based study of grammaticalized topic constructions. In K. Aaron Smith & Dawn Nordquist (eds.), Functionalist and usage-based approaches to the study of language: In honor of Joan L. Bybee (Studies in Language Companion 192), 59–87. Amsterdam: John Benjamins. https://doi.org/10.1075/slcs.192.03jan.

Janzen, Terry & Barbara Shaffer. 2002. Gesture as the substrate in the process of ASL grammaticalization. In Richard P. Meier, Kearsy Cormier & David Quinto-Pozos (eds.), Modality and structure in signed and spoken languages, 199–223. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511486777.010.

Johnston, Trevor. 2012. Lexical frequency in sign languages. The Journal of Deaf Studies and Deaf Education 17(2). 163–193. https://doi.org/10.1093/deafed/enr036.

Johnston, Trevor Alexander. 2019. Clause constituents, arguments and the question of grammatical relations in Auslan (Australian Sign Language): A corpus-based study. Studies in Language 43(4). 941–996. https://doi.org/10.1075/sl.18035.joh.

Kendrick, Kobin H. 2019. Evidential vindication in next turn: Using the retrospective “see?” in conversation. In Laura J. Speed, Carolyn O’Meara, Lila San Roque & Asifa Majid (eds.), Perception metaphors, 253–274. Amsterdam: John Benjamins. https://doi.org/10.1075/celcr.19.13ken.

Klima, Edward & Ursula Bellugi. 1979. The signs of language. Cambridge, MA: Harvard University Press.

Langacker, Ronald W. 2008. Cognitive grammar: A basic introduction. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780195331967.001.0001.

Lepic, Ryan. 2016. The great ASL compound hoax. In Aubrey Healey, Ricardo Napoleão de Souza, Pavlina Pešková & Moses Allen (eds.), Proceedings of the High Desert Linguistics Society conference, vol. 11, 227–250. Albuquerque, NM: University of New Mexico.

Lepic, Ryan. 2019. A usage-based alternative to “lexicalization” in sign language linguistics. Glossa: A Journal of General Linguistics 4(1). 23. https://doi.org/10.5334/gjgl.840.

Lepic, Ryan & Corrine Occhino. 2018. A construction morphology approach to sign language analysis. In The construction of words (Studies in Morphology), 141–172. Cham: Springer. https://doi.org/10.1007/978-3-319-74394-3_6.

Liddell, Scott K. 1980. American Sign Language syntax. The Hague: Mouton Publishers. https://doi.org/10.1515/9783112418260.

Liddell, Scott K. 2003. Grammar, gesture, and meaning in American Sign Language. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511615054.

Lillo-Martin, Diane. 1995. The point of view predicate in American Sign Language. In Karen Emmorey & Judy Reilly (eds.), Language, gesture, and space, 155–170. Hillsdale, NJ: Lawrence Erlbaum Associates.

Lillo-Martin, Diane. 2012. Utterance reports and constructed action in sign and spoken languages. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language – An international handbook, 365–387. Berlin: Walter de Gruyter. https://doi.org/10.1515/9783110261325.365.

Lillo-Martin, Diane & Edward S. Klima. 1990. Pointing out differences: ASL pronouns in syntactic theory. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research, vol. 1: Linguistics, 191–210. Chicago: University of Chicago Press.

Lillo-Martin, Diane & Richard P. Meier. 2011. On the linguistic status of ‘agreement’ in sign languages. Theoretical Linguistics 37(3/4). 95–141. https://doi.org/10.1515/thli.2011.009.

Majid, Asifa, Seán G. Roberts, Ludy Cilissen, Karen Emmorey, Brenda Nicodemus, Lucinda O’Grady, Bencie Woll, Barbara LeLan, Hilário de Sousa, Brian L. Cansler, Shakila Shayan, Connie de Vos, Gunter Senft, N. J. Enfield, Rogayah A. Razak, Sebastian Fedden, Sylvia Tufvesson, Mark Dingemanse, Ozge Ozturk, Penelope Brown, Clair Hill, Olivier Le Guen, Vincent Hirtzel, Rik van Gijn, Mark A. Sicoli & Stephen C. Levinson. 2018. Differential coding of perception in the world’s languages. Proceedings of the National Academy of Sciences 115(45). 11369–11376. https://doi.org/10.1073/pnas.1720419115.

Mathur, Gaurav & Christian Rathmann. 2012. Verb agreement. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign language – An international handbook, 136–157. Berlin: Walter de Gruyter. https://doi.org/10.1515/9783110261325.136.

McKee, Rachel & Sophia L. Wallingford. 2011. ‘So, well, whatever’: Discourse functions of palm-up in New Zealand Sign Language. Sign Language & Linguistics 14(2). 213–247. https://doi.org/10.1075/sll.14.2.01mck.

Meier, Richard P. 1990. Person deixis in American Sign Language. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research, vol. 1: Linguistics, 175–190. Chicago: University of Chicago Press.

Meir, Irit. 1998. Syntactic-semantic interaction in Israeli Sign Language verbs: The case of backwards verbs. Sign Language & Linguistics 1(1). 3–33. https://doi.org/10.1075/sll.1.1.03mei.

Meir, Irit. 2003. Grammaticalization and modality: The emergence of a case-marked pronoun in Israeli Sign Language. Journal of Linguistics 39(1). 109–140. https://doi.org/10.1017/S0022226702001664.

Morford, Jill & James MacFarlane. 2003. Frequency characteristics of American Sign Language. Sign Language Studies 3(2). 213–225. https://doi.org/10.1353/sls.2003.0003.

Napoli, Donna Jo & Rachel Sutton-Spence. 2010. Limitations on simultaneity in sign language. Language 86(3). 647–662. https://doi.org/10.1353/lan.2010.0018.

Naughton, Karen. 2001. Linguistic description and analysis of verbs of visual perception in American Sign Language (ASL). Albuquerque, NM: University of New Mexico PhD dissertation.

Nilsson, Anna-Lena. 2004. Form and discourse function of the pointing toward the chest in Swedish Sign Language. Sign Language & Linguistics 7(1). 3–30. https://doi.org/10.1075/sll.7.1.03nil.

Occhino, Corrine, Jami N. Fisher, Joseph C. Hill, Julie A. Hochgesang, Emily Shaw & Meredith Tamminga. 2021. New trends in ASL variation documentation. Sign Language Studies 21(3). 350–377. https://doi.org/10.1353/sls.2021.0003.

Ormel, Ellen & Onno Crasborn. 2012. Prosodic correlates of sentences in signed languages: A literature review and suggestions for new types of studies. Sign Language Studies 12(2). 279–315. https://doi.org/10.1353/sls.2011.0019.

Padden, Carol. 1986. Verbs and role-shifting in ASL. In Carol Padden (ed.), Proceedings of the 4th national symposium on sign language research and teaching, Las Vegas, Nevada. Washington, D.C.: The National Association of the Deaf.

Padden, Carol A. 1988. Interaction of morphology and syntax in American Sign Language. New York: Garland Press.

Pfau, Roland & Markus Steinbach. 2006. Modality-independent and modality-specific aspects of grammaticalization in sign languages. Linguistics in Potsdam 24. 5–98.

Pothos, Emmanuel M. & Patrick Juola. 2001. Linguistic structure and short term memory. Behavioral and Brain Sciences 24(1). 138–139. https://doi.org/10.1017/S0140525X01463928.

Pudans-Smith, Kimberly K., Katrina R. Cue, Ju-Lee A. Wolsey & M. Diane Clark. 2019. To deaf or not to deaf: That is the question. Psychology 10. 2091–2114. https://doi.org/10.4236/psych.2019.1015135.

Quer, Josep. 2016. Reporting with and without role shift: Sign language strategies of complementation. In Roland Pfau, Markus Steinbach & Annika Herrmann (eds.), A matter of complexity: Subordination in sign languages, 204–230. Berlin: Mouton de Gruyter. https://doi.org/10.1515/9781501503238-009.

Romaine, Suzanne & Deborah Lange. 1991. The use of like as a marker of reported speech and thought: A case of grammaticalization in progress. American Speech 66(3). 227–279. https://doi.org/10.2307/455799.

San Roque, Lila, Kobin H. Kendrick, Elisabeth Norcliffe & Asifa Majid. 2018. Universal meaning extensions of perception verbs are grounded in interaction. Cognitive Linguistics 29(3). 371–406. https://doi.org/10.1515/cog-2017-0034.

Schembri, Adam, Kearsy Cormier & Jordan Fenlon. 2018. Indicating verbs as typologically unique constructions: Reconsidering verb ‘agreement’ in sign languages. Glossa: A Journal of General Linguistics 3(1). 89. https://doi.org/10.5334/gjgl.468.

Sinclair, John. 1991. Corpus, concordance, collocation. Oxford: Oxford University Press.

Stefanowitsch, Anatol & Stefan Th. Gries. 2003. Collostructions: Investigating the interaction of words and constructions. International Journal of Corpus Linguistics 8(2). 209–243. https://doi.org/10.1075/ijcl.8.2.03ste.

Sweetser, Eve. 1990. From etymology to pragmatics: Metaphorical and cultural aspects of semantic structure. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511620904.

Thompson, Sandra A. & Anthony Mulac. 1991. A quantitative perspective on the grammaticalization of epistemic parentheticals in English. In Elizabeth C. Traugott & Bernd Heine (eds.), Approaches to grammaticalization, 313–339. Amsterdam/Philadelphia: John Benjamins. https://doi.org/10.1075/tsl.19.2.16tho.

Traugott, Elizabeth C. 2003. Constructions in grammaticalization. In Brian D. Joseph & Richard D. Janda (eds.), The handbook of historical linguistics, 624–647. Malden, MA: Blackwell. https://doi.org/10.1002/9780470756393.ch20.

Traugott, Elizabeth Closs. 1995. Subjectification in grammaticalisation. In Dieter Stein & Susan Wright (eds.), Subjectivity and subjectivisation: Linguistic perspectives, 31–54. Cambridge, UK: Cambridge University Press. https://doi.org/10.1017/CBO9780511554469.003.

Traugott, Elizabeth Closs & Richard B. Dasher. 2005. Regularity in semantic change. Cambridge: Cambridge University Press.

Vermeerbergen, Myriam, Lorraine Leeson & Onno Alex Crasborn. 2007. Simultaneity in signed languages: Form and function. Amsterdam: John Benjamins. https://doi.org/10.1075/cilt.281.

Viberg, Åke. 1983. The verbs of perception: A typological study. Linguistics 21(1). 123–162. https://doi.org/10.1515/ling.1983.21.1.123.

Waltereit, Richard. 2006. Imperatives, interruption in conversation, and the rise of discourse markers: A study of Italian guarda. Linguistics 40(5). 987–1010. https://doi.org/10.1515/ling.2002.041.

Wilcox, Sherman. 2004. Gesture and language: Cross-linguistic and historical data from signed languages. Gesture 4. 43–73. https://doi.org/10.1075/gest.4.1.04wil.

Wilcox, Sherman. 2007. Routes from gesture to language. In Elena Pizzuto, Paola Pietrandrea & Raffaele Simone (eds.), Verbal and signed languages: Comparing structures, constructs, and methodologies, 107–131. Berlin: De Gruyter Mouton.

Wilcox, Sherman. 2014. Moving beyond structuralism: Usage-based signed language linguistics. Linguas de Señas e Interpretación 5. 97–126.

Wilcox, Sherman & Corrine Occhino. 2016. Constructing signs: Place as a symbolic structure in signed languages. Cognitive Linguistics 27. 1–34. https://doi.org/10.1515/cog-2016-0003.

Wilkinson, Erin. 2016. Finding frequency effects in the usage of NOT collocations in American Sign Language. Sign Language & Linguistics 19(1). 82–123. https://doi.org/10.1075/sll.19.1.03wil.

Wilkinson, Erin, Ryan Lepic & Lynn Hou. In press. Usage-based grammar: Multi-word expressions in American Sign Language. In Terry Janzen & Barbara Shaffer (eds.), Signed language and gesture research in cognitive linguistics. Berlin: De Gruyter Mouton.

Winston, Charlotte. 2013. Psychological verb constructions in American Sign Language. West Lafayette: Purdue University MA thesis.

Winter, Bodo, Marcus Perlman & Asifa Majid. 2018. Vision dominates in perceptual language: English sensory vocabulary is optimized for usage. Cognition 179. 213–220. https://doi.org/10.1016/j.cognition.2018.05.008.

Wray, Alison. 2002. Formulaic language and the lexicon. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511519772.

Wulf, Alyssa, Paul Dudis, Robert Bayley & Ceil Lucas. 2002. Variable subject presence in ASL narratives. Sign Language Studies 3(1). 54–76.

Zeshan, Ulrike. 2002. Sign language in Turkey: The story of a hidden language. Turkic Languages 6. 229–274.

Received: 2020-08-25
Accepted: 2022-01-15
Published Online: 2022-03-03
Published in Print: 2022-05-25

© 2022 Lynn Hou, published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
