Open Access (CC BY-NC-ND 3.0 license). Published by De Gruyter Mouton, April 24, 2015

“Making meaning”: Communication between sign language users without a shared language

Ulrike Zeshan
From the journal Cognitive Linguistics

Abstract

In a small group of deaf sign language users from different countries and with no shared language, the signers’ initial conversational interactions are investigated as they meet in pairs for the very first time. This case study allows for a unique insight into the initial stages of pidginisation and the conceptual processes involved. The participants use a wide range of linguistic and communicative resources, and it can be argued that they construct shared multilingual-multimodal cognitive spaces for the purpose of these conversations. This research explores the nature of these shared multilingual-multimodal spaces, how they are shaped by the signers in interaction, and how they can be understood in terms of conceptual blending. The research also focuses on the meta-linguistic skills that signers use in these multilingual-multimodal interactions to “make meaning”.

1 Introduction

This article presents a case study of how meaning is co-created and negotiated between sign language users from different countries who do not have any language in common. This is part of a larger study during which the signers’ improvised conversations were videotaped over a six-week period. Each participant has competence in more than one language, typically the sign language and the written language of their country of origin, but none of the participants shares fluency in a language with any other participant. The signed interactions resulting from this situation are referred to as “cross-signing” here (cf. Bradford et al. 2013), a newly coined term emphasising the cross-linguistic nature of the situation and the communication across language barriers and cultural differences.

It has long been known anecdotally that deaf people from different countries are able to establish communication with each other far more quickly than would ever happen in the case of spoken languages. However, the way in which this develops ab initio has not been studied systematically. Thus the aim of research on cross-signing has been to track the development of ad hoc emerging communication right from the beginning, when participants meet for the very first time, and over a substantial period of time. The unique dataset gathered during this research highlights the meta-linguistic skills at work in this peculiar situation, and has the potential to impact on the understanding of a number of wider issues, for instance with respect to the development of pidgin languages or the importance of metalinguistic skills in this type of communication. This research is also very much in line with current debates on multimodal interaction (e.g., Enfield and Levinson 2006; Streeck et al. 2011).

The cross-signing study presents a unique angle on the development of pidgin languages, instantiating how a visual-gestural jargon can arise in this kind of situation. Jargons are the early precursors of pidgins and represent “unsystematic and variable forms of a second language used in interethnic communication” (Bakker 2008: 151). Lefebvre (2004: 7) characterises pidgins and creoles as “an extreme case of languages in contact”, which involves accelerated language change in the context of a multilingual community. However, spoken language research only ever documents the results of these various language contact situations but not the processes involved in the genesis of an early semi-conventionalised contact variety right from the beginning. In cross-signing, the process of jargon creation is accelerated to such an extent that its genesis can be observed with an immediacy not available in spoken language research. By contrast, the emergence of speech-based jargons in the initial pre-pidgin stages of language contact has not been documented within a single first-time conversation in a way that would parallel the “cross-signing” phenomenon documented here.

The scope of this article is limited in several ways. First of all, the article deals only with data collected from the initial meetings of the participants. Secondly, the incipient communication between pairs of signers is exemplified here by investigating how participants communicate about concepts associated with numerals. Within the wider aim of the cross-signing study to investigate communicative strategies in this unique context, this focus on numerals is a manageable domain for a first systematic approach to the data, and the findings presented here will need to be cross-referenced with further data analysis in due course.

After introducing the methodology and data in Section 2, the range of structures found across all participants for expressing numerals is detailed in Section 3. Section 4 explores the notion of a shared multilingual-multimodal space developing between the participants during their conversational interactions, while Section 5 focuses on the interactional sequences that occur when participants negotiate the use of communicative resources available to them. After a discussion (Section 6), the article concludes with the wider implications of the study in Section 7.

2 Methodology and data

Central to this article is the notion of a shared multilingual-multimodal space that emerges between each pair of participants and that contains the lexemes and structures “agreed on” between the participants in the conversation. As elaborated in the latter sections of this article, this space is conceived of not as static but as changing and expanding continuously as the conversation proceeds. It can be thought of metaphorically as a jointly created communicative toolkit, a shared conceptual space that, in the absence of a conventional shared inventory for communication, includes an array of multilingual and multimodal resources. Use of these resources is exemplified in the following utterance (1) by one of the research informants, a signer from Indonesia who is trying to describe his home town on the island of Java (Jawa in Bahasa Indonesia).

(1)

This utterance consists of both manual and non-manual actions, first using the manual alphabet from Indonesian Sign Language accompanied by the silent mouth shape of the word Jawa, then an iconic movement tracing the shape of the island, and finally an exophoric index finger point which is directed at a map of Indonesia on the opposite wall and is co-ordinated with the addressee’s pointing gesture. Combining various communicative resources in this way in order to clarify the intended meaning is typical of these interactions.

The video data used here come from casual conversations between four sign language users from different countries: Japan, Indonesia, Jordan, and the UK. The four participants spent six weeks together at the International Institute for Sign Languages and Deaf Studies (iSLanDS) in the UK in May-July 2012. The three non-UK participants were selected on the basis of their linguistic background as follows:

  1. No or minimal exposure to International Sign (IS), the sign variety that is used by deaf people for the purpose of transnational contact.[1]

  2. No or minimal competence in English.

  3. Excellent competence in their own sign language.

The research team included intermediaries fluent in the respective native sign languages in order to facilitate selection of the participants and interactions with them ahead of and during the research period, in the form of participant information and feedback as well as briefings and debriefings. These facilitators are members of the iSLanDS Institute and supported the research process in many ways throughout the project, including acting as interpreters for the international participants. This was part of the ethics procedures of the project in order to ensure that these participants would benefit from the research visit to the UK and would not experience any psychologically negative impact from joining a complex, challenging linguistic environment.[2]

The linguistic selection criteria ensured that the four participants had no language in common at the beginning of the research period. Although the UK participant is fluent in both IS and English, this did not result in any shared linguistic background because the other three participants are unfamiliar with these languages. Table 1 lists the linguistic backgrounds of the four participants (the participant IDs have subscripts indicating the country of origin).

Table 1:

Linguistic backgrounds of the participants.

Participant       Fluent                                                         Intermediate                  Minimal
CPBRT (female)    British Sign Language, English (written), International Sign                                 Jordanian Sign Language
MSJD (male)       Jordanian Sign Language                                        Arabic (written)              English (written), British Sign Language
HMJP (male)       Japanese Sign Language, Japanese (written)                                                   English (written)
MIIND (male)      Indonesian Sign Language                                       Bahasa Indonesia (written)    English (written)

Before coming to the UK to participate in the research, MIIND, MSJD and HMJP had acquired a few isolated words and phrases in English. MSJD and MIIND had also occasionally encountered deaf foreigners in their home countries, but without acquiring IS from these contacts. MSJD did learn a few signs from British Sign Language through encountering a group of deaf UK travellers in Jordan for a few days, and vice versa for CPBRT. MSJD and MIIND are less fluent in the written languages of their home countries than CPBRT and HMJP, and for all participants, signing is the primary means of communication while writing is used as a second language.

The four participants were videotaped in paired casual conversations repeatedly: immediately upon arrival, after one week, and after a further four weeks. Every signer was videotaped in conversation with every other signer, resulting in six paired conversations for each round of filming. In addition, a communicative task involving picture stimuli was conducted during the first round and the third round of filming, immediately after the casual conversations.[3] For this article, the analysis focuses on the casual conversations filmed immediately upon arrival. This choice of data is motivated by the research question pursued here. Due to the interest in the ways in which signers co-create meaning in these conversations, the most revealing observations can be expected from the initial conversations. These are the situations where the difficulties of communicating across linguistic barriers are greatest, and therefore, the participants are maximally challenged to make optimal use of all communicative resources at their disposal. Throughout this article, the various examples confirm this expectation.

Table 2 shows the amount of videotaped data (min:sec) obtained from each pair’s initial casual conversation. The lengths of the conversations are broadly similar, ranging from 38 minutes to 56 minutes. To facilitate spontaneity, it would have been counter-productive to impose strictly equal lengths of conversations. Nearly 50% of all data was annotated using the ELAN multimedia annotator software (see Wittenburg et al. 2006). As the annotation of video data with ELAN is a very time-consuming effort, this amount of annotated data is substantial and in line with other research on sign languages where a corpus of conversational sign language data is used (e.g., de Vos 2012; Lutalo-Kiingi 2014).

Table 2:

Summary of data.

Participants        Recorded data    Annotated data
HMJP with CPBRT     38:26            20:23
HMJP with MIIND     44:20            21:44
CPBRT with MSJD     51:14            20:49
MSJD with MIIND     42:37            29:26
HMJP with MSJD      56:13            28:08
MIIND with CPBRT    48:59            20:11
Total data          4:41:49          2:20:41

As the analysis focused on the expression of numerals, utterances containing numerals were annotated on a sign-by-sign basis. In addition, a coding schema was used to identify the type of numeral construction in each of these utterances; this coding is the basis for the quantitative data included in this article. Figure 1 shows a screenshot of data annotation with ELAN.

Figure 1: Data annotation with ELAN.
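The coding just described lends itself to simple automated tallying. The following is a minimal sketch, not the project’s actual workflow, of how counts like those reported in Section 3.7 could be derived from a tab-delimited export of the ELAN annotations; the file layout, tier names and code labels used here are hypothetical.

```python
# Hypothetical sketch: tally numeral-construction codes per signer from a
# tab-delimited ELAN export with columns: tier, start_ms, end_ms, annotation value.
# Tier names such as "NumType-HM_JP" and codes such as "digits-2h" are invented
# for illustration; they are not the labels used in the actual project.
import csv
from collections import Counter, defaultdict

def tally_numeral_types(path):
    """Return {signer: Counter mapping construction code -> frequency}."""
    counts = defaultdict(Counter)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) < 4:
                continue                        # skip malformed lines
            tier, _start, _end, code = row[:4]
            if tier.startswith("NumType-"):     # only the numeral-coding tiers
                signer = tier.split("-", 1)[1]
                counts[signer][code] += 1
    return counts

# Example use (file name modelled on the conversation files in Table 2):
# print(tally_numeral_types("Convers-HM-MI-06Jun2012_01.txt"))
```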

In addition to filming conversations, post-hoc introspective interviews were conducted with all four participants after the initial round of filming. In these interviews, each participant was shown the video recording of the conversations they had been involved in, and asked to comment on the interaction. They were also asked specific questions by the research team, such as the reasons for their choice of a particular sign, whether they had understood their interlocutor’s communication, what they thought the interlocutor was trying to say, and what they themselves were aiming to convey to the interlocutor in each segment of the conversation.

Conducting the post-hoc interviews was a time-consuming process, and therefore, it was only possible to cover the initial round of conversations. For the interview sessions, the research team met separately with each participant, and in the case of the three non-UK visitors, the iSLanDS member interpreting between IS and their respective sign language joined and sometimes led the sessions. The participants’ comments were noted down in English together with the time code of the video recording that the comment referred to. The research team gained many interesting insights from these introspective interviews, and often comparing the notes from each participant is the only way to establish that signers have actually miscommunicated. Indeed, signers may be unaware that they have miscommunicated until each person is asked specifically to comment on what they understood and aimed to convey.

Finally, this work draws on aspects of the analytical and methodological framework of Conversation Analysis (e.g., Schegloff 1991; 2007; Sidnell and Stivers 2012). Where the focus is on detailed qualitative analyses of specific interactions, this framework provides a helpful way of visualising the data including relevant features such as overlapping turns and the duration of signs.

3 Communicative resources for numerical-quantitative concepts

This section focuses on the range of expression of numerical-quantitative concepts found across all participants, including a variety of constructions involving numeral signs which occur in the data when talking about topics such as dates, time periods, age, fractions, money and currencies, schooling and educational systems, family constellations, and the like. The focus is deliberately on a particular subset of quantification, where numerals are part of the construction in one way or another (e.g., ‘20 dollars’), but excludes instances of quantification where the construction includes a quantifier (e.g., ‘a little bit of money’). This provides a coherent, narrowly circumscribed domain, which is preferable given the complexity of the interactions.

During the analysis process, a number of structures were identified that the signers used to express numerals. The categories used for ELAN coding are organised hierarchically as seen in Figure 2, and examples of each category are given below. All examples are from the data, and the video file name and time code is noted in each case.

Figure 2: Hierarchical organisation of coding categories.

3.1 Digits

One of the strategies used most frequently in the data consists of extending the number of fingers that correspond to the intended numeral. There is some variation in the data as to which fingers are used for numerals, and hand orientation also varies between palm-inward and palm-outward. Quantities between one and five are always expressed by one-handed signs in the “digits” category in the data, while those between six and ten are always two-handed when this strategy is used (Figure 3). While it would be logically possible to use, for instance, two fingers of each hand to express ‘four’, this does not occur anywhere in the data. For numbers greater than ten, several signs in sequence are needed and are added up, as seen in Figure 4.

Figure 3: Two versions of EIGHT.[4]

[4] Glosses in capital letters are used to represent signs in this article, as is the convention in sign language research.

Figure 4: TEN TWO ‘12’.

3.2 Digital

The digital strategy involves signing the numerals as a sequence of individual digits, following the sequence of written numbers, and as such it only applies to numbers 10 and above. It can be exploited using one hand or two hands. If two hands are used and each digit is conveyed by its own handshape, it is possible to present two digits simultaneously (Figure 5), or to hold one hand in place while signing further digits with the second hand (Figure 6). While the digital strategy as such is attested in several sign languages (cf. Zeshan et al. 2013), the structures seen in Figures 5 and 6 are particularly interesting because they are cross-linguistically very rare in sign languages (cf. Zeshan and Sagara forthcoming).

Figure 5: ELEVEN.

Figure 6: THOUSAND.
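To make the contrast between the ‘digits’ and ‘digital’ strategies concrete, the sketch below generates gloss sequences for both. It is an illustration only: the gloss labels and the exact additive decomposition for numbers above ten are assumptions for demonstration, not a transcription of any signer’s actual forms.

```python
# Illustrative sketch of the two strategies described above; gloss labels and the
# additive decomposition are assumptions for demonstration, not attested forms.
GLOSS = ["ZERO", "ONE", "TWO", "THREE", "FOUR", "FIVE",
         "SIX", "SEVEN", "EIGHT", "NINE", "TEN"]

def digits_strategy(n):
    """'Digits' type: show the quantity on the fingers; numbers above ten are
    built additively, e.g. 12 -> TEN TWO (cf. Figure 4)."""
    glosses = []
    while n > 10:
        glosses.append("TEN")
        n -= 10
    glosses.append(GLOSS[n])
    return glosses

def digital_strategy(n):
    """'Digital' type: sign the digits in the order the number is written,
    e.g. 12 -> ONE TWO; applies to numbers of 10 and above."""
    return [GLOSS[int(d)] for d in str(n)]

print(digits_strategy(12))     # ['TEN', 'TWO']
print(digital_strategy(12))    # ['ONE', 'TWO']
print(digital_strategy(1000))  # ['ONE', 'ZERO', 'ZERO', 'ZERO'] (cf. Figure 6)
```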

3.3 Numeral incorporation

This is a strategy frequently used in many sign languages (e.g., Liddell 1996 for American Sign Language; Ktejik 2013 for JSL). It involves a type of simultaneous morphology, where a quantifiable unit is expressed at the same time as a numerical value. The numerical value is represented by a numeral handshape, and the rest of the sign represents a unit, such as time units (e.g., hour, month, year), monetary units (e.g., dollar, rupiah), and the like. A separate coding category was established for this type in order to identify whether the signers used numeral incorporation or expressed the numeral and the quantifiable unit as two separate signs. The numerical component of the sign is typically one-handed, but it may be two-handed depending on the form of the sign for the quantifiable unit. The sign in Figure 7 means ‘four months’: the four extended fingers provide the numerical value, and a downward movement along the index finger of the other hand provides the meaning ‘month’ (# is used between sign glosses to indicate numeral incorporation).

Figure 7: FOUR#MONTH.

3.4 Lexical

Numeral signs were coded as “lexical” if they could not be analysed according to any of the above categories. This may occur with single-digit numerals that use a specific numeral handshape rather than extended fingers (Figure 8), or in signs for 10 and above that are monomorphemic. The latter are rare in the data, presumably because they tend to be non-iconic, and therefore signers may have a dispreference for their use in this kind of communicative situation.

Figure 8: SIX (one-handed, little finger extended).

3.5 Writing

In addition to using signs, signers also resorted to using various representations of writing. This is of particular interest given that the signers come from backgrounds that use different scripts. The type of intended script was not coded in the annotations, but three representations were differentiated: writing in the air (Figure 9), writing on the palm of the hand (Figure 10), and writing on any other surface. As can be seen from the example dialogues in Sections 4 and 5, signing and representations of writing are also combined in complex ways.

Figure 9: Writing in the air.

Figure 10: Writing on the palm.

3.6 Numerals in cross-signing and in monolingual signing

In order to put the above structures in the context of the participants’ native sign languages, it is relevant to summarise and compare briefly the main characteristics of their numeral systems. Like the overwhelming majority of sign languages, British Sign Language (BSL), Jordanian Sign Language (LIU, from the Arabic Lughat al-Ishara Urduniyya), Japanese Sign Language (JSL) and Indonesian Sign Language (IndoSL) all have decimal numeral systems, i.e., built on 10 as the numeral base (cf. Zeshan et al. 2013), and all use morphologically complex forms to construct higher numerals. Numerals in JSL have a particularly complex phonology and morphology, using both compounding and numeral incorporation, as well as significant influence from written kanji on the form of numeral signs (Sagara 2014). Out of these four sign languages (BSL, LIU, JSL and IndoSL), IndoSL is the only one that does not use any numeral incorporation, and this is atypical across sign languages (Sagara and Zeshan 2013).

The digits strategy is used in all four sign languages for numerals up to five, and this is ubiquitous, if not universal, across sign languages. By contrast, none of the four sign languages uses the digital strategy in their numeral systems, as this is a cross-linguistically rare option. Small sets of monomorphemic lexical numerals are also found in all four sign languages; for instance, BSL has lexical numerals ELEVEN, TWELVE, HUNDRED, and THOUSAND, among others. Finally, several numerals in JSL and LIU are iconically motivated by written numbers, but writing as such (in the air or on a surface) cannot be considered part of the linguistic system in any of the four sign languages.

Dialectal variation has been reducing over the past decades in JSL (Sagara 2014). By contrast, the sociolinguistic situation of IndoSL is characterised by multi-dialectalism, and this is particularly pervasive in numerals. A large range of diverse numeral types occur in IndoSL varieties (Palfreyman forthcoming), and MIIND is familiar with many of these. BSL numerals are also subject to dialectal variation (Stamp 2013), though the individual formational variants fall into fewer different types of numerals compared to IndoSL. Dialectal variation in numerals has not been investigated in LIU so far.

While the influence of writing on numeral signs is relatively straightforward to recognise in these languages as well as in cross-signing, the influence of co-speech gesture in our data is more difficult to ascertain because systematic documentation of co-speech gestures used by hearing people in the domain of numbers is largely unavailable for the countries relevant here. Thus a comparison between co-speech gestures and cross-signing is not pursued further in this article. However, the role of iconicity, reflected in the potential of numeral signs to “look like” their referents, is of great importance in cross-signing. To the extent that gestures for numbers are often iconic, the role of gestures is implicit when discussing iconicity in the data. However, separating gestures from signs in signed output is a difficult issue in sign language linguistics, so that it is preferable for the purposes of the present investigation to view this issue with a focus on the role of iconicity instead. As detailed in the next section, it then becomes clear that in the cross-signing data, there is a strong overall preference for more iconic forms over less iconic forms.

Iconicity in sign languages has been classified in a number of different ways as there are various ways in which signs can be iconic in the sense of a non-arbitrary form-meaning relationship (cf. Taub 2001; Rosenstock 2008). For instance, demonstrating an intended number by showing the corresponding number of extended fingers is different from using a handshape or movement that derives from writing and where the iconic relationship is between a sign and the number’s written representation. In this article, these distinctions are not explored further and we are only concerned with whether or not there is a non-arbitrary relationship between a numeral sign and the number it represents.

3.7 Distribution of numeral representations in the data

Table 3 shows the distribution of numeral forms in the cross-signing data. In the table, there are separate sections for numbers below 10 and numbers from 10 onwards. Where the range of numbers that a structure is used for is further limited, this is indicated in brackets after the label at the top of each column. For instance, the one-handed digits strategy only occurs with numbers 1–5. Expressions of “zero” and expressions of years in dates (e.g., ‘June 2008′) are not included in the table because their expression varies only with respect to the use of one versus two hands. “Zero” (17 occurrences in the data) is always expressed by a round handshape that iconically represents the written number. Years in dates (20 occurrences in the data) are always expressed with the digital strategy.

Table 3: Distribution of numeral strategies across signers.

In the context of the structures in monolingual BSL, LIU, JSL and IndoSL, some interesting patterns emerge from these data. A total of 748 numerals were coded for the types listed in Table 3 (a few values are circled as they are discussed in detail below). For each type of numeral, the total of occurrences is shown for each of the signers in bold. Below each total number of occurrences, there is a breakdown showing how many times the numeral type was used with which of the other interlocutors. This is important because one of the issues of interest here concerns the question whether the signers use particular types of numerals more with some interlocutors than with others. This is the issue of linguistic accommodation, in the sense of ‘following the lead’ of one’s interlocutor by using the same types of constructions that are used by the interlocutor. Accommodation is discussed in more detail in Section 5.


For numerals below 10, the four signers overwhelmingly use finger extension, i.e., the ‘digits’ type. This is clearly the dominant pattern. Numeral incorporation is also used frequently by all signers except by MSJD. Writing is not used at all as a source for expressing numbers below 10. Overall, the patterns in numerals below 10 look similar across all signers. The two-handed ‘digits’ type occurs far more frequently in the Japanese-Jordanian pair (HMJP with MSJD have 56 out of 140 occurrences). However, this type has no real competitor because numeral incorporation is used almost exclusively with numbers 1–5 and lexical signs are rare overall. Therefore, these data are simply the result of numbers between 6 and 9 occurring more frequently in the conversations between this particular pair.

For numerals above 10, there are always several options for expressing the same number, and therefore we can identify both personal preferences, where individual signers differ from others, and accommodation effects, where one signer adapts to the strategies used by another. Lexical numerals are very rare in the data when expressing numbers above 10. Writing is used only by one of the participants (MSJD) for expressing actual numerals, although there are other instances in the data where writing is used as part of complex constructions with numerals, e.g., for dash (-) or slash (/) symbols.

The main options to express numbers above 10 are the ‘digital’ type, both one-handed and two-handed, and the two-handed ‘digits’ type. Looking at the totals in bold horizontally to see which of the signers prefers which option, it is immediately clear that the Japanese participant HMJP has a strong preference for the two-handed digital type (40 occurrences out of 67, i.e., nearly 60%). It is quite possible that this is due to the linguistic and cultural background of HMJP, as this type was used in earlier varieties of Japanese Sign Language, and can also sometimes be seen in the gestures of hearing people in Japan (Sagara 2014). In modern-day JSL, this numeral type has been replaced by other types and is now only seen rarely in some older signers (ibid.). HMJP makes frequent use of this type with all of his interlocutors. Interestingly, MIIND and MSJD also use this type, but only when in conversation with HMJP. In conversation with other interlocutors, they use two-handed digital numerals only once or not at all. In other words, MIIND and MSJD accommodate the Japanese participant’s linguistic choice, while the British participant CPBRT shows no such accommodation effect.

In the two-handed digital type, the linguistic accommodation is a one-way affair, but mutual accommodation is also visible in the data. The two-handed ‘digits’ pattern is relatively rare in most of the dyads, occurring no more than six times in any pair, with one exception. In the Japanese-Jordanian pair (HMJP with MSJD) this type occurs 29 times, out of a total of 61 occurrences across all signers; i.e., 48% of all occurrences happen within this particular pair of signers. As HMJP and MSJD both use the two-handed ‘digits’ type frequently with each other, but infrequently or not at all with any of their other interlocutors, the pattern seems to point to linguistic accommodation that is mutual in this case. Repeated accommodation naturally leads to conventionalisation of linguistic expressions across participants in the conversations, which is essential in the development of an initial signed jargon in the cross-signing situation.
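For reference, the two proportions cited in this and the preceding paragraph work out as follows:

\[
\frac{40}{67} \approx 0.597 \approx 60\%
\qquad\qquad
\frac{29}{61} \approx 0.475 \approx 48\%
\]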

Looking at all strategies across all signers, it is clear that there is a strong preference for iconically motivated signs, regardless of whether or not such signs occur in the participants’ native sign languages. Signs in the ‘lexical’ category are strongly dispreferred, as are signs with forms based on writing that is not intelligible across cultures, such as the kanji-based numerals in JSL, which are entirely absent from these data. Instead, signers prefer to either directly show the number of extended fingers or to use sequences of signs that represent the way numbers are written. The data in Table 3 present evidence of these general tendencies, but also reveal individual signers’ preferences, as well as showing both one-way and mutual effects of linguistic accommodation. On the basis of this preliminary understanding, we can now take a closer look at qualitative data to consider how the various numeral strategies play out in specific interactions.

4 Combining communicative resources in multilingual-multimodal space

The analysis of the data aims to reveal the ways in which the deaf participants operate within a shared multimodal-multilingual space in the particular communicative situations they are engaged in. In this section, selected segments of signed conversations are presented in order to exemplify the use of multiple linguistic and other communicative resources, and how these interact with one another. This analysis draws on approaches from multimodal interaction research.

Previous extensive research has demonstrated that the traditional bias in linguistics towards spoken language (or, even more restrictively, written language) does not provide a sufficient account of human communication, given that the primary setting in which language is overwhelmingly used is face-to-face communication. It can thus be argued that, far from being peripheral to speech, gestures and other multimodal behaviours constitute an integral and intricately structured part of human communication (McNeill 1992; Kendon 2004). Work on linguistic aspects of multimodal interaction has so far focused primarily on the interplay between speech and the gestural channel of communication with respect to an increasingly diverse array of individual languages (e.g., Enfield 2003 on Lao; Iwasaki 2008 on Japanese). Multimodal interactions that are also multilingual, as is the case in the present study, are only beginning to receive attention from researchers, as for instance in Gullberg (2011) with respect to multimodality in second language acquisition.

The recognition that transmission of the linguistic message involves more than one channel sits well with research in sign language linguistics, where the multi-channel nature of signed communication has long been recognised. In sign language linguistics, it is common to recognise several channels which are simultaneously active and coordinated, such as the hands and arms, the facial expressions, the mouth movements derived from spoken words (“mouthings”), and head and body postures (cf. Sandler 1999; Wilbur 2000; Sandler and Lillo-Martin 2006). The use of mouthings is related to a (secondary representation of) spoken language, while all the other simultaneous channels represent different components of a sign language (Boyes Braem and Sutton-Spence 2001). However, in the communicative situation of cross-signing, it is evident that the sources of the utterances observed are much more varied. In the absence of any shared language, the participants in the conversation are involved in a difficult “meaning-making” task that challenges the entirety of their multilingual, multimodal and meta-linguistic skills.

In order to represent the interaction and, in particular, the timing of co-produced speech, signs, gestures, and other communicative behaviours, a multi-tiered representational system is needed. The ELAN annotator is particularly suited to representing various simultaneous aspects of multimodal interactions, as well as annotating observations and analysis categories on separate tiers. Its representational system is organised like a musical score: time runs along the horizontal axis, and co-occurring events are aligned vertically across the different tiers.

Examples from the data are transcribed using a notation adapted from Conversation Analysis (CA), as seen, for instance, in Eggins and Slade (1997). The turns in conversation are numbered consecutively, along with the participant’s ID label. In addition to the capital-letter glosses of signs, mouthings are notated in double quotes below the signs, and other non-manual actions are notated in brackets. This transcription also captures temporal coordination within and between turns, as illustrated in Figure 11. In addition, screenshots of signs to illustrate what the utterances look like are available in the appendix, where a complete list of abbreviations can also be found.

Figure 11: Transcription of examples.

A separate representation is used to indicate which linguistic and communicative resources are active at which point in the conversation. For the purpose of the analysis, it is not only the timing of the several communicative channels that is of interest, but also various forms of cross-modal interplay that are used creatively by the signers for “making meaning” in conversation. In other words, it is of interest to see what aspects of meaning are contributed by which of the communicative resources present in the interaction. Therefore, these resources are notated using the following labels:

ownSIGN              the participant’s own sign language
otherSIGN            the sign language of the participant’s interlocutor
invSIGN              invented signs belonging to neither of the participants’ sign languages
English[5]:writing   a written language
English:mouthing     mouthing based on a spoken language

In the conversations, signers actually use a much wider array of linguistic and communicative resources, including strategies such as pantomime; drawing in the air or on surfaces; various forms of manual alphabets (fingerspelling); exophoric pointing to objects and other referents in the vicinity; and signs from other languages, for example American Sign Language (ASL). However, the above list covers the options used for communicating about numeral concepts, and further communicative options that occur elsewhere in the conversation outside the domain of numerals are disregarded for the purpose of this article. These categories are used for a qualitative exploration of examples from the data only, so no quantitative data counts have been undertaken. Moreover, I avoid a distinction between signs and gestures here. As mentioned above, it would be very contentious to argue that an invented form should be classified as a gesture rather than a sign, especially in the absence of compelling data about how hearing gesturers communicate about numbers in each of the cultures involved. The important point is that newly invented forms that are outside the linguistic inventory of any of the national sign languages play a prominent role in these conversations. Thus the label invSIGN is conceived of as broad enough to cover possible gestural influences.
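One concrete way to think about these labels is as a set of channels that can be marked active for each stretch of an utterance, which is essentially what the tabular notation in examples (3) and (5) to (7) below does. The following sketch encodes a turn in that spirit; the particular gloss-to-channel assignments are invented for demonstration rather than an exact transcription of any example.

```python
# Illustrative sketch only: encode which resources are "active" for each gloss in
# a turn, in the spirit of the tabular notation used in examples (3) and (5)-(7).
# The specific channel assignments below are invented for demonstration.
turn = [
    # (gloss, set of channels actively contributing at that point)
    ("ONE",  {"ownSIGN", "Japanese:writing"}),
    ("ZERO", {"ownSIGN", "invSIGN", "Japanese:writing"}),
    ("BAR",  {"invSIGN", "Japanese:writing"}),
    ("ONE",  {"ownSIGN", "Japanese:writing"}),
]

def channels_drawn_on(turn):
    """Collapse a turn into the set of channels used at least once."""
    active = set()
    for _gloss, channels in turn:
        active |= channels
    return active

print(channels_drawn_on(turn))
# e.g. {'ownSIGN', 'invSIGN', 'Japanese:writing'}
```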

The following examples illustrate the distribution of multilingual and multimodal resources in utterances, and how they contribute to the overall meaning that is being communicated.

4.1 Example: fractions

In example (2), the signers from Indonesia and Japan discuss fractions. HMJP aims to convey that the proportion of deaf people in Japan is 1 in 1,000. He first uses a two-handed digital representation for 1,000 (with the left hand signing ONE held in space while the right hand signs ZERO three times). This is followed by a horizontal line representing the fraction, and the numeral ONE. The sign for 1,000 is an iconically based invention, as the corresponding sign in JSL is based on a written kanji and would be unintelligible to the Indonesian signer. However, the way in which they are sequenced and displayed in space is aligned with the Japanese way of writing fractions, that is, bottom to top (denominator, then numerator). In the response, MIIND repeats the same elements introduced by HMJP, but in the reverse order, top to bottom, as this is the way fractions are written in Indonesia.

(2)

Source video file: Convers-HM-MI-06Jun2012_01 (see video stills in the appendix and full length video online: http://dx.doi.org/10.1515/cog-2015-0011)

Time code: 00:04:45 – 00:05:05

1    HMJP
2    MIIND
3    HMJP
4    MIIND
5    HMJP     ONE  DEAF
              ZERO ZERO ZERO ZERO HEARING(a) HEARING(b) HEARING(c)
              INDEX:own finger
6    MIIND    (puzzled facial expression with frown—————————) (slow nod)
     ‘One is deaf and thousands are hearing. – Ah.’

This communicative segment is based on a misunderstanding in the previous discourse. MIIND was trying to ask how many deaf friends HMJP has in Japan, using a sign from ASL for FRIEND. However, HMJP misunderstood this to mean ‘people’, and hence his response ‘there are one in 1,000 (deaf people in Japan)’. Utterance (5) includes three different signs for HEARING (a, b and c).

This example shows how the different source languages and modes interact with each other to produce the utterances. JSL and IndoSL are the primary, preferred languages of the participants and the source of the numerals ONE and ZERO which happen to be the same in both sign languages. In addition, both signers use the invented sign for 1,000 that is not found in their respective sign languages, and this iconic invention interacts with the writing systems used in Japan and Indonesia respectively. So the two modes of communication are signing and writing (via an indirect representation as “writing in the air”). The interaction between these communicative resources is shown in the representation under (3) below. The two top lines contain the same sign glosses as in the previous example notation; non-manual behaviours have been omitted for the sake of clarity. The lines below indicate the various communicative channels that contribute to the utterance. Whenever a channel is actively contributing to the meaning of the utterance, this is marked with xxxxx underneath the sign glosses. Sometimes there is an additional comment under the xxxxx to specify what aspect of the utterance this particular channel is contributing at this point in time.

(3)
HMJP             ONE ZERO ZERO        ONE ZERO ZERO ZERO  ONE ZERO BAR ONE
MIIND                ONE BAR ONE ZERO ZERO ZERO  (ONE BAR)
ownSIGN          xxxxxxxxxxxxxx  xxxxx  xxxxxxxxxxxxxxxxxxxxxxxxxxx
                 form of 1 and 0
otherSIGN
invSIGN          xxxxxxxxxxxxxx    xxxxxxxxxxxxxxxxxxxxx
Japan.: writing  xxxxxxxxxxxxxxxxx       xxxxxxxxxxxxxxxxxxxxxx
Indon.: writing     xxxxxxxxxxxxxxxxxxxxxxxxxxxx   xxxxxxxxxx

As is suggested by the “musical score” notation, each channel of communication is active throughout the conversation, not just when the signer draws on it for a particular utterance. For instance, the underlying knowledge about writing conventions in a particular language is always available in the background, and at any time, signers can choose to integrate this knowledge into the utterance. Just like in an orchestra, where players of all instruments are present at all times and monitoring what is going on, ready to join in at the right moment, the multilingual-multimodal capabilities are always available to be integrated into utterances, either as a “solo” or in combination with other elements.

Signers do not alternate between different languages and modes, but exploit various possibilities of integrating them ad hoc and creatively into utterances. Thus in the expression for ‘1 in 1,000’ used by HMJP, the numeral items themselves are signed, but the way they are arranged in space is aligned with literacy conventions in Japan. Therefore, as the notation in (3) shows, both signing and Japanese writing are active and contribute to the expression. The creative process of blending elements from several sources in this way is explored in more detail in Section 4.3.

4.2 Example: dates

Example (4), from the conversation between the Jordanian and the Indonesian participant, is more complex because the signers are repeatedly miscommunicating and trying to resolve the situation. The participants are discussing the dates of a planned trip to London. MSJD repeatedly tries to convey the 26th of June as being the correct date. From the post-hoc introspective interview conducted with the Indonesian participant MIIND, it is clear that MIIND repeatedly failed to understand what date was being referred to. At the end of this segment, they move on to discuss the length of the trip to London, without having resolved the miscommunication about the date. The interplay of various source languages and communication modes is particularly interesting here, as MSJD makes several attempts at clarifying the date, involving different communicative resources.

(4)

Source video file: Convers-MI-MS-07June2012-1 (see video stills in the appendix and full length video online: http://dx.doi.org/10.1515/cog-2015-0011)

Time code: 00:23:43 – 00:24:18

1    MSJD     SIX SLASH SIX TWO     NO——-
2    MIIND               FIVE BEFORE FIVE
                         “five”     “five”
3    MIIND    FIRST SECOND THIRD FOURTH FIFTH FIVE
4    MSJD     ————————————————————— IX:fwd NOT (. . .)
     ‘On June 26. – Earlier, in the fifth (month, i.e., May). – No, not that.’
5    MSJD     IX:fwd LONDON AFTER (writing on palm: 6 / 2 6) SIX TWO— SIX
6    MIIND              (gaze to MS’s hands—-)    SIX TWO
              (nod—————-)    (nod—————————————–)
     ‘London is afterwards, on 26 June. – 6. .2 . . .’
7    MSJD     (writing in air: ٦) NO (writing in air: 6 2 / 6) SIX
8    MIIND    (nod———————————————————–)
     ‘On six . . . no, six-and-twenty June.’
9    MSJD     GO ALL BYE-BYE LONDON STAY SLEEP FOUR DAY FOUR————– STAY LONDON FOUR
10   MIIND    FOUR DAY DAY FOUR

     ‘We will all go to London, bye-bye, and stay there overnight for four days. – Four days. – We stay in London four (days).’

This is a particularly clear example of how this kind of communication is both multilingual and multimodal. In the first segment, the expression that MSJD uses (SIX SLASH SIX TWO) partly reflects Jordanian Sign Language (LIU) and written Arabic in terms of the order of elements. In particular, the numeral ‘26’ is signed in LIU by combining SIX and TWENTY, in this order. This is the same as in spoken Arabic sitta-wa-ishreen (literally ‘six-and-twenty’), but is the opposite of IndoSL, where ‘26’ is signed TWO SIX, in this order. The complete date would be signed in LIU with the sequence SIX TWENTY SLASH SIX MONTH, and this in turn is modelled on the order of writing the date in Arabic as used by MSJD, which is 6 – 2 (written from right to left) followed by slash – 6 (written from left to right). The numeral SIX that MSJD uses is an invented sign using the “digits” strategy. This is more iconic in this context than the Jordanian sign, which resembles the written Arabic numeral. Interestingly, MIIND uses a mouthing from English (“five”), which happens to be part of his small repertoire of English.[6]

(5)
MSJD             SIX SLASH SIX TWO       NO   IX:fwd NOT
MIIND                FIVE FIRST-SECOND-THIRD-FOURTH-FIFTH FIVE
ownSIGN          xxxxxxxxxxxxxxx  xxxxxxxxxxxxxxxxxxxxxx
                 sequence 6-2
otherSIGN
invSIGN          xxxxxxxxxxxxxxxxx
                 numeral signs and month-before-day order
Engl.: writing
Engl.: mouthing       xxxx          xxxx
                      “five”        “five”
Arabic: writing  xxxxxxxxxxxxxxxxx
                 sequence 6–2 and slash

In the next segment (6), MSJD changes his strategy and attempts to show MIIND the written numbers on his palm. The form of the written numbers on the palm conforms to written English and maintains the month-before-day order used previously (it is not clear where the US-style month-before-day order comes from). Although MIIND looks at MSJD’s hands, he is still unable to decode the date, and is still trying to understand the numeral with the “reversed” order (SIX TWO for ‘26’).

(6)
MSJD             LONDON IX:down AFTER   6 / 2 6   SIX TWO   SIX
MIIND            SIX TWO
ownSIGN          xxxxxxxxxxxxxx
                 order of elements
otherSIGN
invSIGN          xxxxxxxxxxxxxxxxxxxxxxxx
                 number signs   number signs
Engl.: writing   xxxxxxx
                 form of numbers (on palm)
Engl.: mouthing
Arabic: writing  xxxxxxxxxxxxx
                 order of elements

In one further attempt shown in (7), MSJD now resorts to writing in the air. The initial attempt at writing in Arabic numerals is quickly abandoned, and there is an overt marking of self-initiated repair (the sign NO). The subsequent new combination of English-style writing (this time with day-before-month), but with interference from LIU and Arabic in terms of the order of some of the elements, is not understood by MIIND either, and they move on to discussing a different subject.

(7)
MSJD             ٦   NO   6 2 / 6   SIX
MIIND
ownSIGN          xxxxxxxxx
                 sequence 6-2
otherSIGN
invSIGN          xxxx
                 number sign
Engl.: writing   xxxxxxxxxx
                 form of numbers (in air)
Engl.: mouthing
Arabic: writing  xxxxxxxxxxxxx
                 numeral ‘6’   sequence 6–2

This example demonstrates how multilingual-multimodal resources interact to contribute to the overall meaning. The creative inventions that signers use are not recruited from any pre-existing linguistic inventory, but arise from the interplay of existing communicative resources, meta-linguistic skills and linguistic creativity. These inventions are often closely intertwined with elements from their primary sign languages and other secondary languages of literacy that they have some degree of fluency in.

4.3 Multilingual-multimodal spaces

The examples in Sections 4.1 and 4.2 elucidate the way in which multilingual and multimodal options are realised in these interactions, and this differs markedly from monolingual signing. It is true that all sign languages make use of a range of communicative resources and use several simultaneous manual and nonmanual channels. However, if interlocutors share the same language, there is no need for repeated differential expression of the same concept through a variety of signs in the immediate vicinity of each other. As many sign languages do have several alternative ways of signing numerals (see, for instance, Palfreyman (forthcoming) on Indonesian Sign Language), several of those forms may occur in a discourse, particularly in the case of inter-dialectal conversations. However, cross-signing is peculiar in that the differential expression of numerals clusters narrowly together, so that signs from one’s own sign language, invented signs, writing, and mouthing all contribute to the “making of meaning” within the same immediate interaction. Repetition is also characteristic of these interactions, either by one and the same signer, or by both signers repeating signs to each other, sometimes several times back and forth. This is evident in most of the examples discussed in this article.

This clustering of alternative expressions can be quantified in the data. Across the coded data, there are 45 instances of numerical expressions where the numeral is signed in more than one way within the same immediate interaction. This data count covers only manual signs and not the other semiotic types. Table 4 shows that all participants engage in these interactions, where there is some negotiation as to the formation of numeral signs. Usually, there are two different forms of numerals in the interaction, but occasionally, there are three different forms.

Table 4:

Multiple differential expression of numerals.

Participants        Two different numeral forms    Three different numeral forms
HMJP with CPBRT     7                              1
HMJP with MIIND     6                              2
CPBRT with MSJD     5                              1
MSJD with MIIND     2                              1
HMJP with MSJD      10                             1
MIIND with CPBRT    9                              0
Total: 45           39                             6

The data support the hypothesis that these interactions are evidence of the way in which the target meaning is a matter of negotiation. In the majority of cases, in 26 out of 45 interactions, i.e., 58%, both participants are involved in the variable expression of numerals. This is evidence of the active co-creation of numeral forms in interaction. Sometimes each of the participants produces a different form, while at other times, both participants swap their respective numeral forms back and forth until agreement on the intended meaning has been reached.

In the remaining 19 cases, i.e., 42%, only one of the signers produces several numeral sign forms to express the same number. This can happen as a form of self-repair, or in response to a non-manual or manual signal from the interlocutor that indicates non-comprehension. In all cases, repetition of the numeral forms is a common strategy, either by one signer or by both. Interestingly, the differential expression of numerals does not follow any particular pattern with regard to the greater or lesser iconicity or transparency of the signs. For instance, it is not the case that the more directly iconic ‘digits’ type is always the one that is added after a less iconic type has been produced; the reverse also happens.

The above examples suggest that the communicative situation in cross-signing may best be viewed as a process of dynamic interaction between three multilingual-multimodal spaces: each of the two signers’ own spaces, and an intersubjective space that is shared between the two participants. At the beginning of data collection for cross-signing, each participant comes to the table with his or her own multilingual-multimodal space, which includes all the gestural, written, spoken and signed languages and modes that the individuals have experienced in their lifetime. Importantly, participants were also given a detailed preparatory briefing in their own sign language that explained the tasks and aims involved in this research. Thus they had time to think about these tasks, although they did not seem to undertake any particular preparation.

As the participants have never met before, they are necessarily unaware of the specific content of their interlocutors’ multilingual-multimodal space, apart from general information about each person’s country of origin. During the interaction, a shared multilingual-multimodal space is created and successively enriched with linguistic structures and other strategies. As participants become increasingly familiar with each other, the shared space expands and includes more and more communicative resources, while discarding failed communicative attempts. Those strategies that are felt to be successful (such as the digital strategy for expressing numerals) become part of the shared multilingual-multimodal space, and are used repeatedly. Strategies that are unsuccessful (like the use of numerals “written in the air” in Arabic script) are discarded and do not enter the shared space.

The shared multilingual-multimodal space is a dynamic and intersubjective repository of linguistic structures, including both fully and partially specified forms as well as generalisable construction types. In many cases, the linguistic material contributed to the shared space is itself the result of complex metalinguistic reasoning on the part of each signer. In fact, the way in which multilingual and multimodal resources come together in specific linguistic expressions of numerals has a lot in common with “blended spaces” as described in Fauconnier and Turner (2002). In their framework, blended spaces are “small conceptual packets constructed as we think and talk, for purposes of local understanding and action” (Fauconnier and Turner 2002: 102). In the blended space, parts of cognitive structures are constructed from several input spaces by bringing them together in a novel way, and the same could be said of the linguistic and communicative entities in the examples discussed so far. The elements from different languages and modalities can each be considered to be located in separate input spaces. For instance, with respect to example (4), written Arabic, Jordanian Sign Language, and invented signs come from three different types of input spaces. They are then blended together in the actual utterance (in turn 1), which has elements from each of the inputs. Importantly, exactly how to configure these elements is not a predictable, automatic process but is a matter of imaginative creativity on the part of the signer. This is what enables the signer to re-blend the elements differently (in turn 5 and turn 7) when his initial utterance is not understood. Blended space theory is useful for the present analysis because there are many parallels in the process and indeed, the blending of linguistic forms can simply be considered as a special case of conceptual blending. In the tabular representations of turns from examples (2) and (4), we find blending whenever more than one row is marked as active (by xxxxxx). As communication progresses, the numerals that appear as outputs in the blended spaces of each signer are in turn combined into a secondary space which is explicitly intersubjective. Through negotiation, signers reach an understanding as to which signs and structures have become shared knowledge, and this is visible most clearly in examples where signers are facing a communication barrier.

The construction of utterances through blending is exemplified in Figure 12, which uses example (2) to show the complex recurrence of blending, moving from each signer’s own blended space to the intersubjective space. The intersubjective shared space eventually includes the two-handed digital strategy of signing numerals with multiple digits (in this case, 1,000) and the BAR element of written fractions (i.e., the vinculum), as well as both ways of signing fractions in the signing space (top-to-bottom and bottom-to-top) and their individual components. At this stage, the two interlocutors have not “agreed on” a consistent direction of signing fractions in the signing space, and they are not pursuing this topic further. Elements that have not been used in the conversation, for instance the JSL kanji-based sign for ‘1,000′, are kept outside of the shared space.

Figure 12: Blending of input spaces in cross-signing[7] (1h = 1-handed, 2h = 2-handed).

[7] The connection between elements in the individual signers’ blended spaces and in the intersubjective blended space is only exemplified once, for the arrows representing the spatial arrangement of signing fractions. The other elements that are pulled through to the intersubjective space are not connected by lines as this would make the figure too busy and difficult to read.

Thus several parallels between conceptual blending as described in Fauconnier and Turner (2002) and the innovation of linguistic structures through blending in the cross-signing data are apparent. The process of conceptual blending is iterative, so that the output of one blend can serve as the input to another blend, just as the structures produced by each signer combine again into the content of the shared space. The resulting cognitive and, in our case, linguistic structure may gain its own unique properties not copied from or inherent in any of the input spaces; indeed, the linguistic creativity of the signers relies on exploiting these possibilities. And just as the mental spaces involved in conceptual blending are partial constructs, the content of the shared multilingual-multimodal space is only partially specified at any given time.

The process of constructing the shared space can be observed indirectly through certain sequences of interaction, and this is discussed further in Section 5. For the purpose of this article, the focus is on the linguistic and other communicative resources that are present in the shared multilingual-multimodal space. This is not to ignore the important role of broader cognitive and non-linguistic interactional strategies in these conversations, such as the principles described in Levinson (2006) as part of the human “interaction engine”, or issues of shared intentionality (Tomasello 2008) and joint attention (Moore and Dunham 1995). All of these factors are very relevant to both the conversational data and the data from the communicative tasks in cross-signing, but exploring them in detail is beyond the scope of this article. Throughout the conversation, the contents of the shared multilingual-multimodal space become part of the interaction’s “common ground” and are crucial elements in establishing what Clark and Brennan (1991: 148) refer to as “the grounding criterion: that we and our addressees mutually believe that they have understood what we meant well enough for current purposes”. Of course, these beliefs are also underpinned by these same non-linguistic interactional principles.

It can be argued that the shared space is conceptually present from the beginning, as both participants clearly expect to communicate with each other with some success right from the start. Thus the initial shared space would be filled not with actual linguistic structures and communicative resources, but with conjectures about what each participant expects to have in common with the other participant. These expectations will either be falsified during conversation, and the associated strategies and structures discarded (“This did not work, I won’t use it again”), or confirmed and committed permanently to the shared space (“I have now established that this can and will be used for further communication”). There is evidence in the data and from the post-hoc interviews that participants operate with such expectations and consciously track their falsification or confirmation (see Section 6 for further comments on meta-linguistic skills).
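Purely as an illustrative aside, and not as part of the study’s methodology, the bookkeeping implied by this view can be sketched as a small data structure in which each candidate resource carries a status of conjectured, confirmed, or discarded; all names and example entries in the sketch are invented for exposition.

```python
from dataclasses import dataclass, field

# Possible statuses of a candidate communicative resource in the shared space.
CONJECTURED, CONFIRMED, DISCARDED = "conjectured", "confirmed", "discarded"

@dataclass
class SharedSpace:
    """Toy model of one pair's shared multilingual-multimodal space."""
    resources: dict = field(default_factory=dict)  # form/strategy -> status

    def conjecture(self, resource: str) -> None:
        # Before any evidence, a resource is merely expected to be shared.
        self.resources.setdefault(resource, CONJECTURED)

    def confirm(self, resource: str) -> None:
        # "I have now established that this can be used for further communication."
        self.resources[resource] = CONFIRMED

    def discard(self, resource: str) -> None:
        # "This did not work, I won't use it again."
        self.resources[resource] = DISCARDED

# Hypothetical illustration only:
space = SharedSpace()
space.conjecture("two-handed digital numerals")
space.conjecture("JSL kanji-based THOUSAND")
space.confirm("two-handed digital numerals")   # taken up by the interlocutor
space.discard("JSL kanji-based THOUSAND")      # not understood, abandoned
print(space.resources)
```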

What is perhaps surprising with respect to cross-signing is the speed and relative ease with which a shared basis for communication develops. Apparently, this phenomenon does not occur with speakers of spoken languages, whose communication would be more limited for much longer in the absence of any shared language.[8] The sub-topic investigated here may seem to be relatively easy to negotiate, given that so many potentially iconic strategies are available to express numerals – indeed, this is why this domain was chosen for the initial investigation. However, the same processes of developing a shared “toolkit” of communicative resources, from broad strategies to the narrowing down of signs for reference to particular lexical items, can be observed throughout these conversations in many other domains of meaning, and this will be explored further in future research with these data.

5 Interactional sequences in multilingual-multimodal space

The previous section has considered the shared communicative resources that cross-signing participants build up over the course of their conversations and has explored the concept of a shared multilingual-multimodal space that is constantly changing and expanding. This section examines some of the details of this process and co-opts approaches from Conversation Analysis and variationist sociolinguistics to show how signers negotiate the use of communicative resources in typical interactional sequences when they are addressing a communication difficulty. In particular, the focus is on interactional sequences that provide overt evidence for the construction of a shared ad hoc repertoire for the purpose of each specific communicative situation. If, as is being assumed here, the process of “making meaning” during cross-signing is essentially collaborative, the mechanisms involved necessarily rely on the specific kinds of interactions that happen between the participants. This rationale has provided the motivation for trying to identify patterns in interactional sequences between participants.

As noted in Section 2, for the purpose of this investigation, approaches from Conversation Analysis (e.g., Schegloff 1987, 1991, 2007; Sidnell and Stivers 2012) have been co-opted. This is useful because Conversation Analysis (CA) provides a framework for dealing with patterns of interactional sequences. However, the way in which a CA-type approach is used here is tailored to the specific research question pursued.

The value of CA as an approach for analysing the cross-signing data lies in the emphasis on interactional sequence types that achieve specific communicative functions. For instance, sequences such as the adjacency pairs “question – answer”, “offer – acceptance” or “request – compliance”, or more complex sequences involving pre-, post-, and insert expansions (Schegloff 2007), represent identifiable interactional types; that is, they can be found repeatedly within and across languages. In the cross-signing data, such interactional types can similarly be identified, using labels that have been defined specifically for the purpose of this analysis in order to categorise typical interactional sequences.

Much of the online meta-linguistic monitoring that participants constantly undertake in their interactions may have no overt manifestation, particularly if the communication is flowing smoothly. Although the participants have reported some of their internal reasoning during the introspective interviews, it is important to back this up with direct evidence from the linguistic data. Therefore, the analysis in this section focuses on segments in the conversation where the signers are trying to overcome a problem with communicating the intended information. Such segments allow for a clearer insight into the strategies that signers use in the co-creation of meaning.

A typical interaction that is found repeatedly in the cross-signing data consists of the following sequence, here called the IAP-sequence:

  1. INTRODUCE: This is the beginning of a sequence, and it involves one of the participants introducing a novel linguistic structure or communicative strategy not previously used. These can be existing items or newly invented items.

  2. ACCOMMODATE: In many cases, the other participant takes up the “suggested” construction and uses it in the immediate or deferred response to the previous utterance; that is, the second participant accommodates the first participant’s choice.

  3. PERSIST: When a strategy has been introduced (through INTRODUCE) and acknowledged (through ACCOMMODATE), both participants often maintain use of the strategy repeatedly in the following discourse.

It should be noted that this sequence has been identified with respect to the domain of quantification involving the use of numeral signs, where the necessary linguistic negotiation is both more complex and more overt than in other instances because the signers are faced with a communicative challenge. It remains to be seen to what extent this model is applicable to other communicative domains and how far it can be generalised. The model is illustrated in the examples below.
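Before turning to those examples, a minimal sketch is given of how coded turns of the kind shown in tables (9)–(15) below might be represented and checked for the INTRODUCE–ACCOMMODATE–PERSIST pattern. This is an illustrative reconstruction only, not the annotation workflow actually used in the study; the data in the sketch re-code the minimal sequence in (9).

```python
from typing import NamedTuple, Optional

class Turn(NamedTuple):
    signer: str
    gloss: str
    iap: Optional[str] = None   # "I", "A", "P" or modified "I'", "A'", "P'"
    repair: bool = False        # X in the tables: part of a repair sequence
    markers: str = ""           # overt markers, e.g. nod, "ah"-mouthing

# Hypothetical re-coding of the minimal IAP-sequence in (9).
sequence_9 = [
    Turn("HM-JP", "BEFORE", "I"),
    Turn("MI-IND", "BEFORE", "A", markers='nod, "ah"'),
    Turn("HM-JP", "BEFORE", "P"),
]

def is_iap(turns) -> bool:
    """True if the turns contain INTRODUCE, then ACCOMMODATE, then PERSIST
    (in that order), counting modified variants such as A' or P'."""
    stages = [t.iap.rstrip("'") for t in turns if t.iap]
    try:
        i = stages.index("I")
        a = stages.index("A", i + 1)
        stages.index("P", a + 1)
        return True
    except ValueError:
        return False

print(is_iap(sequence_9))  # True
```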

5.1 Example: dates

In example (8), the two signers from Japan and from Indonesia have just met for their first video recording, and this segment is from the very beginning of the conversation (starting at 00:03:09). The Indonesian signer (MI) is trying to find out the Japanese signer’s (HM) arrival date in the UK.

(8)

Source video file: HM-MI-06Jun2012_01 (see video stills in the appendix and full length video online: http://dx.doi.org/10.1515/cog-2015-0011)

Time code: 00:03:09 – 00:03:21

Utterances (1)–(11), in turn order: (1) HMJP, (2) MIIND, (3) HMJP, (4) MIIND, (5) HMJP, (6) MIIND, (7) MIIND, (8) HMJP, (9) MIIND, (10) HMJP, (11) MIIND. [Signed glosses are shown in the video stills in the appendix and in the full-length video.]

The two signers use slightly different versions of the digital numeral strategy in utterances (1)–(3), that is, signing TWO NINE for ‘29’. The JSL sign for ‘29’ is completely different, involving numeral incorporation for ‘20’ (TWO#tens) and a one-handed numeral ‘9’. Therefore, the numeral sign introduced here by HMJP represents a creative invention driven by the need to increase the level of iconicity.

In utterances (7), (9) and (10), a new communicative resource is added, a representation of writing. Both signers use this resource in the same way, by a tracing movement with the index finger. However, while the Indonesian signer uses a dash (-), the Japanese signer uses a forward slash (/), as writing dates in Japanese involves either a slash or a dot (.), but not a dash.[9] The signs here follow the month-before-day order of JSL, which is itself aligned to spoken/written Japanese. In Bahasa Indonesia and IndoSL, the order of elements is day-before-month.

This interaction contains several IAP-sequences of varying complexity. In the simplest case, there is a single sign that is used repeatedly back and forth between the signers. Repeating signs in this way is one of the most common negotiation strategies in the data for agreeing which signs to use in the conversation and building up a shared lexicon. An example of a minimal IAP-sequence, utterances (5) and (6) above, can be summarised as in (9).

(9)
Signer | Relevant part of utterance | IAP-Sequence | Overt markers
HM-JP | BEFORE | I |
MI-IND | BEFORE | A | nod, “ah”
HM-JP | BEFORE | P |

The overt communicative behaviour marking a successful step in the process, or the lack thereof, often consists of non-manual signals, such as the nod and the “ah”-mouthing in this example.

It is useful to recognise some degree of variation and still categorise the interactional sequence as being of the same IAP type. One common variation involves partial or modified accommodation, notated as A’ instead of A.[10] That is, the structure innovated in the initial turn is not taken up in the following turn(s) in an identical way, but in a partial or modified way. This is the case in utterances (1)–(3) of example (8). As shown in (10), MIIND adopts the digital strategy of expressing the numeral ‘29’ introduced by HMJP, but uses a slightly different sign with a different handshape on one of the hands (with the little finger folded in rather than the thumb).

(10)
Signer | Relevant part of utterance | Numeral type | IAP-Sequence | Overt markers
HMJP | FIVE TWO NINE(a) | digital | I |
MIIND | TWO NINE(b) | digital | A’ |
HMJP | TWO NINE(a) | digital | P | nod

Signing of the complete date including the dash or slash in utterances (7) – (11) involves an inserted repair sequence, as shown in (11). The initial introduction (I) is not understood by HMJP, who signals this by partial repetition until MIIND repeats the same signs again. Again, the accommodation by HMJP is partial (A’), adopting the order of elements, but replacing the element derived from writing. This is accompanied by both manual and non-manual signals; pointing to the interlocutor or his/her hands, often accompanied by nodding, is another common meta-linguistic marker of comprehension that occurs in the data.[11] Finally, the repeated use of the agreed construction (PERSIST) is also partial in the last utterance (notated P’), as MIIND only repeats the final sign.

(11)
Signer | Relevant part of utterance | IAP-Sequence | Repair sequence | Overt markers
MIIND | TWO NINE DASH FIVE | I | X |
HMJP | TWO NINE TWO NINE | | X |
MIIND | TWO NINE DASH FIVE | I | X |
HMJP | TWO NINE SLASH FIVE | A’ | | nod, “ah”, INDEX:MI
MIIND | FIVE | P’ | | nod

This article does not focus on repair sequences per se, but many of the more extended IAP-sequences include self-initiated or other-initiated repairs (see Kitzinger 2012; Dingemanse et al. 2013), sometimes as multiple occurrences. Repair sequences are identified in the notation of the examples, but not subcategorised or subdivided into phases as they are not being explored in more detail in their own right.

In more complex sequences, it is possible to have multiple identical or modified instances of INTRODUCE, ACCOMMODATE, and PERSIST, as well as parallel processes of meaning negotiation for more than one structure or lexical item. These are shown in Sections 5.2 and 5.3 below.

5.2 Example: time period

Example (12) is from the conversation between the British and the Jordanian signer (CPBRT and MSJD). The interaction shows a parallel process of negotiating the meaning of both the numeral ‘8′ and the time unit ‘year’. The two different versions used for signing EIGHT are both of the “two-handed digits” type but use a different configuration of fingers on one of the hands: extended middle, ring and little finger in EIGHT(a) versus extended thumb, index and middle finger in EIGHT(b) – the other hand has all five fingers extended in both signs. Likewise, there are two different signs for ‘year’: YEAR(a) is from International Sign and YEAR(b) is from LIU.

(12)

Source video file: Convers-CP-MS-06June2012_3 (see video stills in the appendix and full length video online: http://dx.doi.org/10.1515/cog-2015-0011)

Time code: 00:00:27 – 00:00:33

Utterances (1)–(3), in turn order: (1) CPBRT, (2) MSJD, (3) MSJD. [Signed glosses are shown in the video stills in the appendix and in the full-length video.]

In this interaction, the initial introduction by CPBRT is not successful at first, as signalled by MSJD’s frowning facial expression. MSJD responds with his own version of the signs (I’). For the sign EIGHT(b), a straightforward AP sequence follows, but for expressing ‘year’, the signers return to the original YEAR(a) sign. It is interesting to observe that for the numeral sign, CPBRT accommodates MSJD’s choice of sign, while for ‘year’, MSJD accommodates the choice of CPBRT. There may well be asymmetries in the data in terms of who accommodates whom, how often, and under which circumstances, but this has not yet been investigated systematically.

(13)
Signer | Relevant part of utterance | Numeral type | IAP-Sequ. 1 | IAP-Sequ. 2 | Overt markers
CPBRT | EIGHT(a) YEAR(a) | 2h:digits | I | I |
MSJD | EIGHT(b) YEAR(b) EIGHT(b) | 2h:digits | I’ | I’ | frown
CPBRT | EIGHT(b) YEAR(a) | 2h:digits | A | I |
MSJD | EIGHT(b) | 2h:digits | P | | GOOD, nod
MSJD | YEAR(a) | | | A |

The back-and-forth between different variants used for the same meaning is a very important strategy for building up a shared inventory of signs. In effect, after this sequence CPBRT and MSJD have both learned each other’s signs for ‘year’, so that either sign could be used later in the conversation. This seems to be a very effective way of increasing the repertoire of shared multilingual resources.

5.3 Example: time period

Another example of discussing a time period, taken from the same conversation as the previous example, shows that just as the ACCOMMODATE and PERSIST stages may not be straightforward, the INTRODUCE stage may also be complex. In example (14), CPBRT attempts several times to convey the concept of ‘month’. A total of six versions are used by the signers before they can successfully resolve the meaning. In the process, the multilingual-multimodal resources used include a one-handed manual alphabet (fingerspelling J-U-N-E), two variants of spatial pointing (INDEX) and three variants of MONTH from IS (a), BSL (b) and LIU (c).

(14)

Convers-CP-MS-06June2012_3 (see full length video online: http://dx.doi.org/10.1515/cog-2015-0011)

00:01:40 – 00:01:55

Utterances (1)–(11), in turn order: (1) CPBRT: FOUR INDEX:four points along horizontal forward line; (2) MSJD, (3) CPBRT, (4) CPBRT, (5) CPBRT, (6) MSJD, (7) CPBRT, (8) MSJD, (9) CPBRT, (10) MSJD, (11) CPBRT. [The remaining glosses are shown in (15) and in the full-length video.]

In (15), each new attempt at communicating the target concept is notated as I’, and the negotiation also includes repeated repair. In utterance (4), CPBRT engages in self-initiated repair during which MONTH(b) is clearly not primarily intended for MSJD, as CPBRT looks away from MSJD while signing it. Instead, MONTH(b) is part of the repair while CPBRT struggles to think of yet another way to convey the concept. The multiple exact repetition (AP) of the successfully understood variant is typical after an extended negotiation of meaning, and there are multiple back-channel responses visible in the interaction, including a head nod and an index point to MSJD’s hand in the final utterance (11). The variant that is eventually “agreed on”, using MONTH(c), is morphologically simple, while the variant with numeral incorporation, FOUR#MONTH(a),[12] which is morphologically complex, is discarded.

(15)
Signer | Relevant part of utterance | Numeral type | IAP-sequence | Repair sequence | Overt markers
CPBRT | FOUR INDEX:four points | | I | |
MSJD | FOUR DAY DAY-AND-NIGHT | | I’ | X | NO, headshake
CPBRT | FOUR#MONTH(a) | incorporated | I’ | X |
CPBRT | FOUR MONTH(b) | digits | I’ | X | look away
CPBRT | INDEX:down J-U-N-E INDEX:down | | | X | nod
CPBRT | INDEX:four fingers | | I’ | X |
MSJD | FOUR MONTH(c) | digits | I’ | X |
CPBRT | FOUR MONTH(c) | digits | A | |
MSJD | NEXT NEXT FOUR MONTH(c) | | P | | INDEX, nod

6 Discussion

As mentioned before, the case study here deals with a relatively straightforward semantic domain, which also provides many options for iconic representation. Yet this kind of shared repertoire is built up “on the fly” for all kinds of semantic and grammatical domains, including more abstract domains such as colour that are more difficult to represent iconically, and signers also need to keep track of each of the three other participants they communicate with. Initial qualitative evidence, from semantic domains other than numerals, supports the notion that signers actively monitor these intersubjective multilingual-multimodal repertoires. This evidence still needs to be assembled systematically, but a few comments about the kind of reckoning that signers engage in continuously can be made at this point. One interesting source of evidence comes from the post-hoc introspective interviews that were conducted with each signer separately after the initial conversations. In these interviews, signers explained why they chose to sign the way they did, and what they did and did not understand from their interlocutor’s signing. It seems apparent from these interviews that all participants continuously entertain multiple simultaneous hypotheses, both about what their interlocutor is likely to understand (which then in turn influences the choices in their own signed output), and about the likely meaning of what their interlocutor is signing to them. For instance, notes from the post-hoc interviews include feedback such as: MIIND reckons that HMJP possibly thinks MIIND is asking him for the names of people (MIIND-HMJP, feedback from MIIND at 00:19:08).

The following notes from the post-hoc interviews illustrate the kinds of reasoning and trial-and-error that can be involved in the choice of lexical signs (the notes are written up in the third person although the signers reported their feedback in the first person). Such quotes also provide explicit evidence that signers keep track of both the current conversation and previous conversations with other participants:

MIIND-MSJD, feedback from MIIND (00:06:41)

MIIND uses the Indonesian sign for ‘Monday’ (signed on the nose). Just after signing this, he realises that MSJD won’t understand the sign, and wonders about fingerspelling the Indonesian word for ‘Monday’ (SENIN). Then he hesitates again, calculates how many days ago it was, and signs ‘THREE AGO’.

CPBRT-HMJP, Feedback from HMJP (00:17:56)

HMJP decides to sign the number ‘12′ in this particular way [i.e., two-handed digital] because he feels it is easier for CPBRT to understand as they had already signed ‘10′ before, using the index finger and ‘zero’ handshape [i.e., using a two-handed digital form].

MIIND- CPBRT, feedback from MIIND (00:01:42)

MIIND is using the Japanese sign for ‘England’ because he knows that CPBRT has already met the Japanese signer. He does not know CPBRT’s own sign for ‘England’, so he hopes she knows the Japanese sign.

There is evidence in the video data that signers try to maximise any opportunities for learning and using their interlocutors’ signs, such as in this instance:

CPBRT-HMJP, feedback from HMJP (00:18:02)

HMJP understands the British Sign Language sign COLLEGE because it comes before university, and the sign for ‘university’ had already been negotiated. He shows the Japanese sign UNIVERSITY, as it refers to the equivalent age group.

The signers do this even when they are unsure about the meaning of a sign. Later on in the same conversation, CPBRT uses the BSL sign BOY, which is not iconic. This is later tentatively used by HMJP although he reports in his interview that at this stage, he is not sure whether the sign indeed has the meaning he suspects it to have.

In addition to evidence from the post-hoc interviews, the occurrence of IAP-sequences constitutes further overt evidence of the meta-linguistic negotiation that characterises the cross-signing conversations. This model also reflects the conflicting motivations that signers are managing: on the one hand, they are motivated to introduce new linguistic material into the conversation, in the hope that at least some of it will be understood. On the other hand, there is also a motivation to persist with using the same forms once they have been brought up, which conflicts with the independent motivation to accommodate the interlocutor’s linguistic choices.

The interactional sequences exemplified here illustrate the mechanism by which the shared space is filled with linguistic and other communicative resources. Metaphorically speaking, a signer retrieves a target structure or lexical item from his/her multilingual-multimodal space. The accommodation of this choice signals that these elements have been understood, as they have been mirrored back in the subsequent turn(s); therefore, they have become part of the shared space and can henceforth be used continuously.[13] The intersubjectivity of linguistic conventions, in the sense that “users know their interlocutors share the convention, that is, everyone is potentially both a producer and a comprehender and they all know this” (Tomasello 2003: 12), is not a given at the beginning of cross-signing, unlike in interactions where a shared language is available. In the cross-signing situation, signers cannot operate on the basis of readily available intersubjective conventions where each person knows that the other person knows the same sign-meaning combinations. Instead, intersubjectivity needs to be established explicitly through negotiation, often via IAP-sequences, and signers actively keep track of the outcomes of these implicit or explicit meta-linguistic negotiations. There is thus a specifically meta-linguistic level of constant shared attention to the state of the developing joint repository of “agreed-upon” forms.

After successful completion of an IAP-sequence, the agreed linguistic forms become part of the interaction’s “common ground”. Clark and Brennan (1991: 129–131) emphasise that in order to achieve grounding in conversation and make a complete contribution to the communicative interaction, interlocutors must cooperate and go through a “presentation phase” (A presenting an utterance to B to consider) and an “acceptance phase” (B accepting A’s utterance as comprehensible). This is parallel to the INTRODUCE and ACCOMMODATE/PERSIST phases in the IAP-model.

Within sociolinguistics, the notions of accommodation and persistence can be used to frame the understanding of how variation plays out among several conversational participants, e.g., whether they accommodate each other or persist with their own variant regardless of the conversational partner and his/her actions (Szmrecsanyi 2005). Persistence effects, the tendency of speakers to re-use forms that have been used before, can play a role in accounting for data in quantitative variationist sociolinguistics in terms of speakers choosing between several available variants of a linguistic variable (Gries 2005; Szmrecsanyi 2005). In understanding the back-and-forth negotiation between participants in cross-signing with respect to numerals, the notion of accommodation is very similar: the signer who re-uses a structure first introduced by the other participant is accommodating this choice. However, the notion of persistence is used differently in the cross-signing study. Here persistence is defined as the continued use of the target structure by either or both of the participants, regardless of who introduced the structure and who accommodates whom. Persistence in this sense is evidence that a particular structure is now present in the shared multilingual-multimodal space between the two signers.[14]
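Although such asymmetries have not yet been investigated systematically (cf. Section 5.2), a simple tally over coded IAP-sequences would be one way of operationalising the question of who accommodates whom. The following sketch is purely illustrative; the coded items are hypothetical simplifications of examples (10) and (13), not results from the study.

```python
from collections import Counter

# Hypothetical coded items: (introducer, accommodator) per negotiated form.
coded_items = [
    ("HMJP", "MIIND"),   # TWO NINE in example (10): HMJP introduces, MIIND accommodates
    ("MSJD", "CPBRT"),   # EIGHT(b) in example (13): MSJD introduces, CPBRT accommodates
    ("CPBRT", "MSJD"),   # YEAR(a) in example (13): CPBRT introduces, MSJD accommodates
]

introduced = Counter(introducer for introducer, _ in coded_items)
accommodated = Counter(accommodator for _, accommodator in coded_items)

for signer in sorted(set(introduced) | set(accommodated)):
    print(f"{signer}: introduced {introduced[signer]}, accommodated {accommodated[signer]}")
```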

Looking at the same phenomenon from a different angle, the ACCOMMODATE/PERSIST phases can also be seen as instances of cognitive entrenchment, as discussed in Fauconnier and Turner (2002) and in Langacker (1999). Through re-using the structures that are being introduced, they become automatic routines that are subsequently available to the signers as ready pre-packaged items. Again, the significance of cross-signing data lies in the fact that we can observe this process from the very initial stages of entrenchment, rather than concluding post-hoc that entrenchment has taken place already.

Repair sequences are obviously of great interest for work on cross-signing, and another study focusing on a different set of cross-signing data is currently exploring the specifics of repair sequences under these circumstances.[15] At this point, it shall merely be noted that there are certain commonalities between a typical IAP-sequence and a typical repair sequence. In both cases, there is often visible evidence that some linguistic entity is tentatively being put forward for negotiation because the initial turn may be “try-marked” (cf. Sacks and Schegloff 1979 on the use of rising intonation for try-marking). A possible signed equivalent to spoken language try-marking through rising intonation can be seen in utterance (1) of example (8): there is a long gestural hesitation, followed by eye gaze first to the signer’s own hands and then to the addressee.

When INTRODUCE is not immediately followed by ACCOMMODATE because there is a problem with comprehension, a repair sequence may intervene, sometimes repeatedly, until a form is found that is suitable for the shared multilingual-multimodal space. The repair may be self-initiated (SIR), in which case the initial introduction (I) is followed by another introduction (I’) by the same signer. Alternatively, it may be other-initiated (OIR), in which case there is some signal of incomprehension, such as a questioning facial expression (equivalent to huh? in spoken English), repetition of the sign with a questioning or frowning facial expression, or a counter-suggestion (I’), e.g., ‘do you mean TWO SIX?’. The absence of a back-channel response can also be understood by the interlocutor as a prompt for repair. In the present article, however, the focus is on resources and processes in the co-creation of meaning rather than on repair mechanisms.
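As a rough illustration only, the distinction between self-initiated and other-initiated repair described above can be reduced to a simple heuristic over coded turns: the repair counts as self-initiated if the repair-initiating turn comes from the signer who produced the trouble source, and as other-initiated otherwise. This is a deliberate simplification for exposition, not a claim about how repair was actually coded in the study; the example calls use hypothetical turns.

```python
# Overt signals that, per the description above, may mark other-initiated repair.
OIR_SIGNALS = {
    "questioning facial expression",              # equivalent to spoken 'huh?'
    "repetition with questioning or frowning face",
    "counter-suggestion",                         # e.g. 'do you mean TWO SIX?'
    "absence of back-channel response",
}

def classify_repair_initiation(trouble_source_signer: str, initiating_signer: str) -> str:
    """SIR if the repair-initiating turn comes from the signer who produced the
    trouble source (an I followed by I' from the same signer), otherwise OIR."""
    return "SIR" if initiating_signer == trouble_source_signer else "OIR"

print(classify_repair_initiation("CPBRT", "CPBRT"))  # SIR: same signer re-introduces a form
print(classify_repair_initiation("CPBRT", "MSJD"))   # OIR: interlocutor signals incomprehension
```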

Given the complexity of the many meta-linguistic and communicative tasks to be carried out simultaneously during cross-signing, it is hardly surprising that the initial conversations are full of difficulties, hesitations, misunderstandings, and repairs. Despite the relatively straightforward domain of numerals, signers frequently make multiple attempts at communicating about this domain, yet in the end they usually find enough common ground to convey the semantic content successfully. However, this success is by no means uniform across the entire conversation and the various other semantic domains covered. There are many instances where attempts at communicating something are abandoned, or where signers think they have understood each other but have actually miscommunicated. The latter can only be identified through the post-hoc interviews, which is why this methodology was so important for this study. Nevertheless, all signers have engaged in these conversations with readiness and ease, and all pairs have communicated about a range of topics.

7 Conclusion

Having considered the data and their possible interpretations, it is useful to consider what these findings can tell us about other issues within various sub-fields of linguistics. Does such a unique communicative setting throw light on other issues, and can it be used as a window into aspects of language and/or cognition?

First of all, the data discussed here are eminently compatible with usage-based views of language, where “language structure emerges from language use” and “the grammatical dimension of languages is a product of a set of historical and ontogenetic processes” (Tomasello 2003: 5, cf. Hopper’s 1987 view on “emergent grammar”). Rather than viewing language as something that relies on innate and modular language faculties providing specific algorithms for encoding language, Langacker (1999: 99) argues that “[i]t is not the linguistic system per se that constructs and understands novel expressions, but rather the language user, who marshals for this purpose the full panoply of available resources.” This is intended to apply even to interactions where everyone can rely on one or several shared languages. However, in the initial stages of the cross-signing situation there is no shared linguistic system to rely on and therefore, one is left with the usage-based model of language as the most (and possibly the only) appropriate approach to account for this kind of communication. This also accounts naturally for the fact that participants in this situation use whatever semiotic resources are available in the situation, regardless of whether or not these are conventional linguistic structures. Usage-based models of language also emphasise that linguistic competence is composed of individual elements and “schemas” at all levels of specificity, from the most specific to the most general (Langacker 1999). This is equally true of cross-signing, where the intersubjective shared space includes both specific signs and schematic patterns. In Figure 12, for instance, the shared space includes individual signs such as ONE and ZERO, general constructional patterns such as the “two-handed digital” numeral strategy, and complex expressions such as “1 in 1,000”.

This study also provides strong support for the recent trend in linguistics to take the multimodality of language seriously. It can be said that the particular setting of the cross-signing study maximises the occurrence of multimodal interaction. The examples discussed here have revealed the complexities of communication relying on a complicated interplay of multilingual-multimodal resources. The data and models discussed here also emphasise the shared and collaborative nature of communicative states and processes. Signers shape an intersubjective multilingual-multimodal space during their conversations, and in the process jointly negotiate their way through miscommunications and repairs. The IAP-model that has been used to account for this process shares some interesting similarities with “conceptual pacts” as discussed in Brennan and Clark (1996), although the latter relies on a monolingual environment (English). In the same way as set out in Brennan and Clark (1996: 149–150), participants in cross-signing also establish their shared multilingual-multimodal spaces step-by-step, often with initial uncertainty, separately with each particular addressee, and by way of joint negotiation.

With respect to the study of jargons and pidgins, the cross-signing data are unique in that, due to the affordances of the visual-gestural language modality, we can observe “in vitro” the fast-tracking and the very first steps of linguistic conventionalisation, with an immediacy that is unavailable for spoken languages. Over the course of six weeks, there clearly is scope for this early jargon to develop into a somewhat more stable and standardised incipient pidgin, involving all participants together as a social group and potentially relying on BSL as an incipient lexifier language. To what extent this happens remains to be seen in further research on the second and third rounds of conversation. In addition, linguistic innovation also plays an important role in the development of pidgins and creoles (e.g., Samarin 1968; Roberts and Bresnan 2008). Typically, innovation in spoken language pidgins and creoles involves the creative re-arrangement and re-analysis of existing lexical material found in the so-called substratum and superstratum languages. For instance, French de l’eau (‘water’) and de l’huile (‘oil’), with the partitive article, are reanalysed as the monomorphemic lexemes dilo and delwile in Seychelles Creole (Michaelis and Rosalie 2013), and the English noun fellow has given rise to a suffix -pela in Australian and Melanesian pidgins (Mühlhäusler 1996; Baker 1996). However, sign languages also allow for the creation of new lexical material de novo, due to the powerful role that iconicity and multi-modality play in cross-signing communication.[16]

Thus this study also raises issues about modality differences between signed and spoken languages in the domain of language contact and pidginisation. The question of language modality has been explored from various angles in the past and this has been an important contribution by previous research in sign language linguistics (e.g., Meier et al. 2002; Perniss et al. 2007). However, a comparison of pidginisation processes in signed and spoken languages and the possible consequences for the resulting linguistic varieties has not been undertaken. In fact, the very existence of sign language pidgins arising from the same sociolinguistic settings as spoken language pidgins is not widely recognised in sign language linguistics.

Within sign language linguistics, research on cross-signing can contribute important insights into the development and linguistic status of International Sign (IS). IS has developed as a contact variety between deaf signers from different countries and is used widely in international gatherings of deaf people such as the conferences and congresses of the World Federation of the Deaf (cf. McKee and Napier 2002). IS has the sociolinguistic characteristics of a pidgin and has at times been recognised as such, though its linguistic status is a contested issue (Supalla and Webb 1995). The study of cross-signing can offer a window into the past of the development of IS, as in its initial stages, it undoubtedly developed from interactions just like the ones reported in this study.

Finally, this study is a tribute to the range of linguistic and meta-linguistic skills that are at work in these conversations. The signers simultaneously and continuously need to resolve a whole range of communicative challenges, for which some evidence from the post-hoc introspective interviews has been discussed above: deciding which linguistic items, structures, and other communicative strategies to use; making best guesses about the intended meaning of the interlocutors’ signed output; monitoring and interpreting the interlocutor’s non-verbal responses such as non-manual back-channel responses; and keeping track of those signs and structures that have entered the shared repertoire they have with a particular interlocutor at a given point in time. The recent concept of “Deaf Gain” (Bauman and Murray 2010; 2014) proposes that deaf sign language users may have unique advantages over speakers in some respects. The cross-signing data represent a remarkable display of meta-linguistic capacity, and extreme language contact of this kind may be one of the communicative settings where signers have a considerable advantage over speakers.

Acknowledgements

The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme; we are grateful for funding of this research under the project “Multilingual behaviours in sign language users” (MULTISIGN), Grant Agreement number 263647. I am very grateful to the four participants in the study: Claire Perdomo, Hayashi Masaomi, Muhammad Isnaini, and Mohammed Salha. Other members of the iSLanDS Institute who were part of the wider research team are also gratefully acknowledged: Paul Scott, Nicholas Palfreyman, and Keiko Sagara, whose dedication as facilitators for the international participants throughout the project duration supported the research process in many crucial ways; Sibaji Panda, who trained and coordinated a substantial team of student annotators in India; Anastasia Bradford, who cross-checked data for the project; Sam Lutalo-Kiingi, who was responsible for important parts of data collection and participant briefings; and Jennifer Webster, who helped with data organisation and editing of the article. Finally, I am grateful to Dr Connie de Vos, Dr Susanne Michaelis, and the peer reviewers for helpful comments on the successive draft versions of this article.

Appendix 1: Transcription conventions and abbreviations

“  ”  |  mouthing
(   )  |  nonmanual action
GLOSS-  |  false start
GLOSS——  |  sign held in its final position
GLOSS(a, b, c...)  |  variants of formally different signs with the same meaning
GLOSS-GLOSS  |  single sign requiring more than one English word for the gloss
GLOSS#GLOSS  |  sign with numeral incorporation
W-O-R-D  |  fingerspelled word using a manual alphabet
INDEX or IX  |  pointing sign using the index finger
INDEX:fwd  |  index finger pointing forward
INDEX:down  |  index finger pointing downward in front of the signer
IS  |  International Sign
BSL  |  British Sign Language
LIU  |  Jordanian Sign Language
IndoSL  |  Indonesian Sign Language
JSL  |  Japanese Sign Language
ASL  |  American Sign Language

Appendix 2: Video examples

Example (2)

Example (4)

Example (8)

Example (12)

References

Baker, Philip. 1996. Australian and Melanesian Pidgin English and the fellows in between. In Philip Baker & Anand Syea (eds.), Changing meanings, changing functions: Papers relating to grammaticalization in contact languages (Westminster Creolistics Series 2), 243–258. London: Westminster Press.

Bakker, Philip. 2008. Pidgins versus creoles and pidgincreoles. In Silvia Kouwenberg & John Victor Singler (eds.), Handbook of pidgin and creole studies, 130–157. Oxford: Wiley-Blackwell.

Bauman, Dirksen & Joseph Murray. 2010. Deaf studies in the 21st century: Deaf-gain and the future of human diversity. In Marc Marschark & Patricia Spencer (eds.), Oxford handbook of deaf studies, language, and education, Vol. 2, 210–225. New York: Oxford University Press.

Bauman, Dirksen & Joseph Murray (eds.). 2014. Deaf gain: Raising the stakes for human diversity. Minneapolis, MN: University of Minnesota Press.

Boyes Braem, Penny & Rachel Sutton-Spence. 2001. The hands are the head of the mouth: The mouth as articulator in sign languages. Hamburg: Signum Press.

Bradford, Anastasia, Keiko Sagara & Ulrike Zeshan. 2013. Multilingual and multimodal aspects of “cross-signing” – A study of emerging communication in the domain of numerals. Paper presented at the 11th Theoretical Issues in Sign Language Research conference (TISLR11), University College London, 13–15 July.

Brennan, Susan E. & Herbert H. Clark. 1996. Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory and Cognition 22(6). 1482–1493.

Clark, Herbert H. & Susan E. Brennan. 1991. Grounding in communication. In Lauren B. Resnick, John M. Levine & Stephanie D. Teasley (eds.), Perspectives on socially shared cognition, 127–149. Washington, DC: American Psychological Association.

de Vos, Connie. 2012. Sign-spatiality in Kata Kolok: How a village sign language inscribes its signing space. Nijmegen: Max Planck Institute for Psycholinguistics PhD thesis.

Dingemanse, Mark, Francisco Torreira & N. J. Enfield. 2013. Is “Huh?” a universal word? Conversational infrastructure and the convergent evolution of linguistic items. PLoS One 8(11). e78273.

Eggins, Suzanne & Diana Slade. 1997. Analysing casual conversation. London: Cassell.

Enfield, Nick. 2003. Demonstratives in space and interaction: Data from Lao speakers and implications for semantic analysis. Language 79(1). 82–117.

Enfield, Nick & Stephen C. Levinson (eds.). 2006. Roots of human sociality: Culture, cognition and interaction. Oxford: Berg.

Fauconnier, Gilles & Mark Turner. 2002. The way we think: Conceptual blending and the mind’s hidden complexities. New York: Basic Books.

Gries, Stefan. 2005. Syntactic priming: A corpus-based perspective. Journal of Psycholinguistic Research 34(4). 365–399.

Gullberg, Marianne. 2009. Reconstructing verb meaning in a second language: How English speakers of L2 Dutch talk and gesture about placement. Annual Review of Cognitive Linguistics 7(1). 221–244.

Gullberg, Marianne. 2011. Multilingual multimodality: Communicative difficulties and their solutions in second-language use. In Jürgen Streeck, Charles Goodwin & Curtis LeBaron (eds.), Embodied interaction: Language and body in the material world, 137–151. Cambridge: Cambridge University Press.

Hopper, Paul. 1987. Emergent grammar. Berkeley Linguistics Society 13. 139–157.

Iwasaki, Shimako. 2008. Collaborative construction of talk in Japanese conversation. Los Angeles, CA: University of California PhD thesis.

Kendon, Adam. 2004. Gesture: Visible action as utterance. Cambridge: Cambridge University Press.

Kitzinger, Cathy. 2012. Repair. In Jack Sidnell & Tanya Stivers (eds.), The handbook of conversation analysis, 229–256. Chichester: John Wiley & Sons, Ltd.

Ktejik, Mish. 2013. Numeral incorporation in Japanese Sign Language. Sign Language Studies 13. 186–209.

Langacker, Ronald W. 1999. Grammar and conceptualization. Berlin: Mouton de Gruyter.

Lefebvre, Claire. 2004. Issues in the study of pidgin and creole languages. Amsterdam & Philadelphia, PA: John Benjamins.

Levinson, Stephen C. 2006. On the human “interaction engine”. In N. J. Enfield & Stephen C. Levinson (eds.), Roots of human sociality: Culture, cognition and interaction, 39–69. Oxford: Berg.

Liddell, Scott K. 1996. Numeral incorporating roots and non-incorporating prefixes in American Sign Language. Sign Language Studies 92. 201–226.

Lutalo-Kiingi, Sam. 2014. A descriptive grammar of morphosyntactic constructions in Ugandan Sign Language (UgSL). Preston: University of Central Lancashire PhD thesis.

McKee, Rachel & Jemina Napier. 2002. Interpreting in International Sign Pidgin: An analysis. Journal of Sign Language Linguistics 5(1). 27–54.

McNeill, David. 1992. Hand and mind: What gestures reveal about thought. Chicago, IL: University of Chicago Press.

Meier, Richard P., Kearsey Cormier & David Quinto-Pozos (eds.). 2002. Modality and structure in signed and spoken languages. Cambridge: Cambridge University Press.

Michaelis, Susanne Maria & Marcel Rosalie. 2013. Seychelles Creole. In Susanne Maria Michaelis, Philippe Maurer, Martin Haspelmath & Magnus Huber (eds.), The survey of pidgin and creole languages. Vol. II: Portuguese-based, Spanish-based and French-based languages, 261–270. Oxford: Oxford University Press.

Moore, Chris & Philip J. Dunham (eds.). 1995. Joint attention: Its origins and role in development. Hillsdale, NJ: Erlbaum.

Mühlhäusler, Peter. 1996. Linguistic ecology. London: Routledge.

Palfreyman, Nicholas. Forthcoming. Variation and change in the numeral system of Indonesian sign language varieties. In Ulrike Zeshan & Keiko Sagara (eds.), Semantic fields in sign languages: Colour, kinship and quantification (Sign Language Typology Series No. 6). Berlin: De Gruyter Mouton & Lancaster: Ishara Press.

Perniss, Pamela. 2007. Space and iconicity in German Sign Language (DGS). Nijmegen: Max Planck Institute for Psycholinguistics PhD thesis.

Perniss, Pamela, Roland Pfau & Markus Steinbach (eds.). 2007. Visible variation: Cross-linguistic studies in sign language structure. Berlin: Mouton de Gruyter.

Roberts, Sarah J. & Joan Bresnan. 2008. Retained inflectional morphology in pidgins: A typological study. Linguistic Typology 12(2). 269–302.

Rosenstock, Rachel. 2008. The role of iconicity in International Sign Language. Sign Language Studies 8(2). 131–159.

Sacks, Harvey & Emanuel A. Schegloff. 1979. Two preferences in the organization of reference to persons in conversation and their interaction. In George Psathas (ed.), Everyday language, 15–21. New York: Irvington.

Sagara, Keiko. 2014. The numeral system of Japanese Sign Language from a cross-linguistic perspective. Preston: University of Central Lancashire MPhil dissertation.

Sagara, Keiko & Ulrike Zeshan. 2013. Typology of cardinal numerals and numeral incorporation in sign languages. Poster presented at the 11th Theoretical Issues in Sign Language Research conference (TISLR11), University College London, 13–15 July.

Samarin, William J. 1968. Lingua francas of the world. In Joshua A. Fishman (ed.), Readings in the sociology of language, 660–672. The Hague: Mouton.

Sandler, Wendy. 1999. The medium and the message: Prosodic interpretation of linguistic content in Israeli Sign Language. Sign Language and Linguistics 2(2). 187–215.

Sandler, Wendy & Diane Lillo-Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press.

Schegloff, Emanuel A., Gail Jefferson & Harvey Sacks. 1977. The preference for self-correction in the organization of repair in conversation. Language 53. 361–382.

Schegloff, Emanuel A. 1982. Discourse as an interactional achievement: Some uses of “Uh huh” and other things that come between sentences. In Deborah Tannen (ed.), Analyzing discourse: Text and talk, 71–93. Washington, DC: Georgetown University Press.

Schegloff, Emanuel A. 1987. Analyzing single episodes of interaction: An exercise in conversation analysis. Social Psychology Quarterly 50(2). 101–114.

Schegloff, Emanuel A. 1991. Conversation analysis and socially shared cognition. In Lauren B. Resnick, John M. Levine & Stephanie D. Teasley (eds.), Perspectives on socially shared cognition, 150–171. Washington, DC: American Psychological Association.

Schegloff, Emanuel A. 2007. Sequence organization in interaction. Cambridge: Cambridge University Press.

Sidnell, Jack & Tanya Stivers (eds.). 2012. The handbook of conversation analysis. Chichester: John Wiley & Sons, Ltd.

Stamp, Rose. 2013. Sociolinguistic variation, language change and dialect contact in the British Sign Language (BSL) lexicon. London: University College London PhD dissertation.

Streeck, Jürgen, Charles Goodwin & Curtis LeBaron (eds.). 2011. Embodied interaction: Language and body in the material world. Cambridge: Cambridge University Press.

Supalla, Ted & Rebecca Webb. 1995. The grammar of International Sign: A new look at pidgin languages. In Karen Emmorey & Judy S. Reilly (eds.), Sign, gesture and space, 333–353. Mahwah, NJ: Lawrence Erlbaum.

Szmrecsanyi, Benedikt. 2005. Language users as creatures of habit: A corpus-based analysis of persistence in spoken English. Corpus Linguistics and Linguistic Theory 1(1). 113–149.

Taub, Sarah F. 2001. Language from the body: Iconicity and metaphor in American Sign Language. Cambridge: Cambridge University Press.

Tomasello, Michael. 2003. Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press.

Tomasello, Michael. 2008. Origins of human communication. Cambridge, MA: MIT Press.

Wilbur, Ronnie B. 2000. Phonological and prosodic layering of non-manuals in American Sign Language. In Karen Emmorey & Harlan Lane (eds.), The signs of language revisited: An anthology to honour Ursula Bellugi and Edward Klima, 213–244. Mahwah, NJ: Lawrence Erlbaum.

Wittenburg, Peter, Hennie Brugman, Albert Russel, Alex Klassmann & Han Sloetjes. 2006. ELAN: A professional framework for multimodality research. Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC), 1556–1559. http://www.lrec-conf.org/proceedings/lrec2006/pdf/153_pdf.pdf (accessed 28 October 2013).

Zeshan, Ulrike, Cesar Ernesto Escobedo Delgado, Hasan Dikyuva, Sibaji Panda & Connie de Vos. 2013. Cardinal numerals in village sign languages: Approaching cross-modal typology. Linguistic Typology 17(3). 357–396.

Zeshan, Ulrike & Keiko Sagara (eds.). Forthcoming. Semantic fields in sign languages: Colour, kinship and quantification (Sign Language Typology Series No. 6). Berlin: De Gruyter Mouton & Lancaster: Ishara Press.

Received: 2013-12-20
Revised: 2014-11-24
Accepted: 2015-1-23
Published Online: 2015-4-24
Published in Print: 2015-5-1

©2015 Zeshan published by De Gruyter Mouton

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.
