In higher education settings, deaf students commonly find themselves seated with hearing students and receiving instruction from hearing lecturers, often through sign language interpreters. A few of the world’s universities, however, offer courses with a national sign language serving as the primary language of instruction, often under the direction of deaf lecturers. These deaf lecturers are usually bilingual or multilingual, skilled both in national sign language(s) and in spoken/written language(s). Their teaching practices, which we call deaf-led, have yet to be studied in depth, and little is known about how they conduct instruction, which resources they use, and how. This study therefore examines how deaf lecturers in higher education settings create a visually based learning environment for their students by using their whole repertoire of semiotic resources (e.g., different languages, gestures, pointing, and pictures). These deaf-led practices are multilingual and multimodal, since several languages, modes, and media are in play simultaneously. Although teaching in higher education is always multimodal (e.g., through the presence of oral talk, written texts, pictures, tables, and diagrams), these deaf-led practices differ from other higher education settings in the absence of sound-based modes. Instead, deaf-led practices, such as those reported in this study, rely solely on visual modes. We look at these visually based practices through the lens of translanguaging, the natural and flexible use of different semiotic resources together to create an accessible learning environment.
1.1 Bilingual higher education through the lens of translanguaging
The concept of translanguaging has its roots in bilingual education and was first described as a teaching strategy that teachers could use to develop both students’ language and their content knowledge, but it has more recently come to describe bilingual practices more generally (Mazak 2017). The notion of translanguaging is broad and can therefore be interpreted in many ways. For example, Mazak (2017) describes translanguaging as a language ideology, a theory of bilingualism, a pedagogical stance, a set of practices, and as transformational (p. 5f). In this study, we take this broad description of translanguaging as our starting point. Specifically, however, we view translanguaging as a pedagogical practice in which multilingualism is the norm, and, in so doing, we agree with Li (2014), who claims that translanguaging is an effective pedagogical practice in bilingual and multilingual educational settings because it has a crucial impact upon students’ development of social relationships and identity. For example, in the classroom, translanguaging has been described as a process in which two languages are used together for meaning-making, experience-shaping, understanding, and knowledge building. It is a powerful mechanism for mediating ideas and constructing understandings using an individual’s whole language repertoire (Baker 2011; Garcia 2009). Translanguaging thereby allows bilingual teachers and students to use their entire repertoire of linguistic and semiotic resources, and when teachers use translanguaging as pedagogy, they build flexibly upon bilingual students’ language practices (see also Garcia and Wei 2014).
The number of studies that examine bilingual education through a translanguaging lens has grown in recent years, but only a few have focused on teachers in higher education. One such study, conducted by Mazak and Herbas-Donoso (2015), examined a bilingual professor’s translanguaging practices while teaching students in an undergraduate science course. At the university, the language of instruction was Spanish, but the students were also required to have a working knowledge of English. The following practices of translanguaging were identified:
using English key terminology in discussion of scientific content in Spanish
reading text in English and talking about it in Spanish
using Spanish cognates while referring to English text
talking about figures labeled in English using Spanish
pronouncing English acronyms in Spanish. (2015, p. 704)
Mazak and Herbas-Donoso argue that such practices activated all the meaning-making resources at the students’ disposal and apprenticed them to the larger scientific discourse community (p. 705). They also claim that the professor’s use of translanguaging promoted the students’ perception of their bilingualism as a beneficial resource and as a fundamental preparation for further academic studies and scientific work in both languages. In our study, we found that the deaf lecturers use translanguaging in a way similar to that of the professor in Mazak and Herbas-Donoso’s study. However, we also found differences, mainly because the deaf-led practices were based solely on visual modes; the translanguaging can thus be described as visually oriented, a form of translanguaging in higher education that, to our knowledge, has not been examined previously. This practice, like all other educational practices, is multimodal, but it is exclusively visually multimodal.
2 The concept of multimodality
Norris (2004) points out that all interactions between people are multimodal; in a conversation, the interlocutors are simultaneously aware of the language in use, its different components (such as intonation, prosody, and content), body positions, clothes, facial expressions, etc. Such a view of interaction is common in social linguistics. For example, Goffman (1974) suggests that people use a range of modes in interactions with the aim of co-constructing definitions of what is going on. Such modes can best be described as resources for meaning-making that are socially shaped and culturally created (Kress and Van Leeuwen 2001). These include images, colors, speech, gestures, writing, etc. Language has always been regarded as having a central role in interaction, but it is only one mode among many and can assume different positions in interactions. Sometimes it may take a superior position and, at other times, a subordinate one, as other modes, such as gaze and head movements, take a superior position instead (Norris 2004). Bezemer and Jewitt (2010) argue that modes of communication other than languages are increasingly regarded as relevant to social linguistic research and that ‘mode’ in the social semiotic approach “is privileged as an organizing principle of representation and communication, and therefore treated as a central unit of analysis” (p. 183).
Multimodality has emerged both as a pedagogical approach and as a communication theory. Archer (2006) contends that “As a theory of communication, multimodality accounts for the multiplicity of modes of meaning-making, and contributes to the theorising of links between shifting semiotic landscapes, globalisation, re-localisation, and identity formation. As a particular approach to pedagogy, a multimodal pedagogy seeks to go beyond written and spoken language to value a range of modes through multimodal assessment practices” (p. 3). Therefore, in educational settings, we can analyze different modes, such as books, PowerPoint presentations, images, speech, and gestures, which are often used simultaneously, and examine how these modes together form a multimodal learning environment where teachers and students create meaning (see, e.g., Mondada 2012; Hjulstad 2016; Tapio 2013; Thesen 2016). In our study, however, sound-based modes such as speech are absent, and some modes that have been described as lying outside of language per se (e.g. gestures, body positions, eye gaze) are actually an integral part of sign languages, as we will describe further below.
Kusters et al. (2017) claim that scholars who have studied multimodal communication have primarily focused on monolingual interaction, while scholars interested in translanguaging have primarily focused on bilingual/multilingual communication without taking modality into account. They argue that it is time to bring the different perspectives together, as they believe that the concept of semiotic resources allows for a “fresh perspective on the multimodal and multilingual aspects of communication and a more nuanced understanding of translanguaging, that recognises the different ways in which individuals draw on their multimodal linguistic resources to make meaning” (p. 2). In this article, we aim to bring the study of multimodality and multilingualism together in just this way, because the data in our study clearly show that translanguaging with both a signed language and a written language is not possible without the interplay of several modes at once.
3 The signing mode
Norris (2004) describes spoken language as sequentially structured, which allows for adding prefixes to words, or subordinate clauses to main clauses, thereby making the words or sentences more complex. She argues that modes such as gesture and gaze cannot be used in similar ways. In sign languages, however, the use of the hands together with body movements and facial expressions in fact contributes a great deal to linguistic structure and complexity. Manual signing, i.e. the use of the hands, mainly expresses lexical items, while non-manual features mainly serve grammatical functions. Non-manual features include different facial expressions and movements, e.g. the raising and lowering of eyebrows, mouth actions, and head movements. These non-manual features are crucial to sign language structure. Even if signed languages are also basically sequentially structured, with linguistic entities produced in a linear pattern just as in spoken languages, a prominent component of signed languages is simultaneity, i.e. the possibility of producing linguistic information through various modes simultaneously (see e.g. Meier 2002).
When it comes to the lexicon, signed languages, in contrast to spoken languages, usually have subcategories. For example, Swedish Sign Language (SSL) has three subcategories: 1) Lexical signs, which are highly conventionalized signs, such as APPLE and DEAF. 1 Many (but not all) of these signs tend to have a one-to-one correspondence with spoken words. Such signs are common in the sign dictionaries that exist worldwide. 2) Productive signs, which are often created in a specific context and frequently carry multiple meanings in one package. There is a continuing debate among linguists on how to label these signs according to different theoretical frameworks; the most common labels are “classifier constructions” or “depicting verbs” (Liddell 2003). 3) Fingerspelling, which is the use of a hand alphabet to express names or words from spoken/written languages. Furthermore, a lexical item is often expressed with a specific mouth action. Sign language linguists commonly divide mouth actions into two main categories: mouthing (silent mouth movements borrowing elements from spoken languages) and mouth gestures (a set of different subcategories, e.g. genuine mouth components lexically bound to a specific set of signs, adverbial mouth actions, etc.) (Crasborn et al. 2008; Johnston et al. 2016). For this study, fingerspelling and mouthing are of particular interest, as they provide a channel to spoken languages.
Finally, pointing (the use of the INDEX hand) is a prominent component of signed languages, and also of special interest to this study. Pointing has been a subject of research for sign language linguists as well as gesture researchers for some time (see e.g. Sotaro 2003 for an overview). Sign language linguists have described pointing as pointing signs or as deictic signs with respect to the sign lexicon. Thus, pointing signs have often been described in terms of pronouns, but they can have other functions, too, for example, as determiners or adverbials (for an overview, see Cormier 2012). Furthermore, some signs can “point” to locations in the signing space to make meaning and reference, as do indicating verbs and auxiliaries. At the same time, pointing is common among non-signers as well, often being treated as gesturing by gesture researchers. The line that is drawn between gesture and sign is not always obvious regarding pointing in signed settings. The status of pointing is a widely discussed topic among sign language linguists, and there are different views on how to describe pointing, e.g. using linguistic or gestural definitions (see e.g. Johnston 2013).
Pointing as such is a salient part of classroom settings, both as a linguistic sign, i.e. pointing to referents that function as nominal parts of utterances or as clause elements in a syntactic clause (e.g. subject or object), and as a pure gesture, i.e. simply pointing to a referent or an area in space (e.g. on the whiteboard).
More generally, pointing functions as an efficient pedagogical tool for highlighting the topic being talked about; this is a recurring use among non-signers as well. We also found in this study that pointing is an important part of the translanguaging practice, because it is used to link signs and other language modes, i.e. it is part of chaining (see also below, and Section 5.5 Chaining).
3.1 Language mixing and chaining
The difference in modes between sign and speech allows for the blending of languages and for the creation of manual sign systems, which are often created to provide visual representations of spoken languages. This often means using one sign for one word and relying on the spoken language’s structure, so that the sign language’s features of simultaneity are used less frequently. Berent (2012) provided an account of such bimodal mixing, exemplified through “true” ASL 2-English bilingual mixing (e.g. the use of non-manual grammatical features from ASL together with fingerspelling or English word order) and other mixed bimodal systems, such as sign-supported speech. The degree of mixing, however, depends on, e.g., the situation, the speaker, and the addressee. Studies have also shown that there are limitations on mixing, as there are specific properties that cannot be combined (see e.g. Emmorey et al. 2008).
In recent years, questions have emerged concerning DHH students with respect to translanguaging and language mixing. According to Swanwick (2016), spoken and signed language can either be mixed sequentially or blended simultaneously, and she argues that translanguaging theory can help us to create an understanding of how teachers and students mix and blend their languages in the classroom, using their entire repertoire with the goal of creating meaning and developing new knowledge. Through translanguaging theory, we can arrive at a better understanding of what adults and children already do in the classrooms, because translanguaging is not only concerned with “what language repertoires are in play but with how individuals creatively draw on their language repertoires to scaffold learning”, as Swanwick contends (p. 421).
Other studies in the deaf education field have described the complex use of different languages and resources in instruction as chaining (e.g. Bagga-Gupta 2004; Humphries and MacDougall 1999; Tapio 2013). Humphries and MacDougall (1999) describe chaining as a linking of two languages, e.g. between English and ASL. According to them, chaining can be done in different ways, for example, through a chain of fingerspelling, pointing, signing, etc., with the aim of “emphasizing, highlighting, objectifying and generally calling attention to equivalencies between languages” (p. 90). For example, the teacher can point at a written word, fingerspell it, perform its signed counterpart, and again point at the written word. The deaf teachers in Humphries and MacDougall’s study used chaining an average of 30 times, compared to 5.5 times for the hearing teachers; chaining thus seems to be a more common strategy among deaf teachers, particularly when introducing new vocabulary. Bagga-Gupta (2004), in turn, developed the concept of chaining further, dividing it into local chaining, event chaining, and synchronized chaining when analyzing data from deaf education classrooms in an upper secondary school in Sweden. By local chaining, Bagga-Gupta means the same as Humphries and MacDougall (1999) above. In event chaining, Swedish, SSL, or both languages are used during different temporal phases of a lesson; e.g. the students can read a text in Swedish and discuss it afterward in SSL. Finally, in synchronized chaining, Bagga-Gupta describes how both Swedish and SSL can be used simultaneously, e.g. through the students’ signing in parallel with their reading.
In our study, we consider such forms of chaining as a part of the larger translanguaging practice, where a rich mixing and blending of languages is common.
4 Study design and method of analysis
This study was conducted through an ethnographic approach in a higher education setting in Sweden where both deaf and hearing (signing) students were enrolled. Three deaf lecturers (one male, two female) were video-recorded while instructing students in four different subjects: “Sign language”, “Swedish as a second language for deaf”, “Sign language and teaching”, and “Cognitive grammar”. All three lecturers are bilingual (SSL and Swedish), are also skilled in English, and have many years of experience teaching at different levels. Because the number of SSL-Swedish bilingual university lecturers is limited, two of the three subjects are also authors of this paper. The first author was responsible for the main analysis work, while the second author primarily analyzed the first author’s lessons. Once all sequences had been analyzed, we compared the results and noted that the three lecturers largely used the same strategies and handled the technologies and languages in similar ways.
Two cameras were used in the fieldwork: one directed towards the lecturer, and one towards the students. In total, the study data consists of 18 hours of video-recordings from 9 lessons (approximately 3 hours each in length).
Using multimodal analysis, we studied what was going on in the classroom interaction. We examined the use of a range of modes, all existing in visual form, and how these modes interplayed, while together creating a translanguaging practice. We started by identifying recurring phenomena and patterns in classroom interaction and noted sequences of particular interest in terms of the study’s aims, and thereafter we conducted a deeper analysis with the help of the annotation tool ELAN (EUDICO Linguistic Annotator). 3 ELAN is a flexible computer-based tool suitable for analyzing sign language texts because it provides linking between video sequences and transcriptions. In addition to linguistic features in the video, other features like media and modes can be transcribed, and comments may be added to the annotations.
We designed an ELAN template with annotation protocols according to the aims of the study. In line with our theoretical background, we focused on how the lecturers used languages and modes in play during the sequences, and we analyzed four specific features: language, mode, interaction, and pointing. Although languages can be expressed through different modes, we chose to separate language and mode from each other in the analysis in order to make it possible to see which specific languages were in play in the classrooms and in which modes they were expressed. For example, SSL has no written form and is always expressed in a signing mode, but, as we will show below, another sign language also occurs in the data. Similarly, Swedish can be expressed in both the spoken and written mode, but it is not only Swedish that occurs in the data, but also English. The feature interaction was chosen because we found that the teachers interacted extensively with different media during their instruction; among other things, they signed toward the PowerPoint slides, touched the SmartBoard screen, and handled remote controls, computers, etc. Finally, the feature pointing was chosen for gestures that are not clearly a linguistic component of SSL (e.g. do not function like pronouns, etc.), i.e., pointings that link languages and modes together, or that direct attention to different words, pictures, points, etc.
During the analysis, we continuously refined the concepts used in the feature tiers. For example, we started by annotating the use of SSL simply as “SSL”, but when we found that the use of fingerspelling was very frequent, we started from the beginning again in order to differentiate between them. We also commented on types of fingerspelling, e.g. whether it represented names, concepts, etc.
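As an illustration of how a four-tier protocol of this kind can be handled computationally once it has been exported from ELAN, the sketch below models annotations in plain Python. The tier names (language, mode, interaction, pointing) come from the study; the class and function names, the time values, and the sample data are our own hypothetical constructions, not part of ELAN or its file format.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Annotation:
    """One annotated interval on a feature tier (hypothetical model)."""
    tier: str        # "language", "mode", "interaction", or "pointing"
    value: str       # e.g. "SSL", "fingerspelling", "written Swedish"
    start_ms: int    # annotation onset in the video, in milliseconds
    end_ms: int      # annotation offset, in milliseconds

def tally_by_tier(annotations):
    """Count how often each value occurs on each feature tier."""
    counts = {}
    for a in annotations:
        counts.setdefault(a.tier, Counter())[a.value] += 1
    return counts

# A small invented sample, mimicking the refinement described above,
# where fingerspelling is annotated separately from plain SSL.
sample = [
    Annotation("language", "SSL", 0, 2400),
    Annotation("language", "fingerspelling", 2400, 3100),
    Annotation("language", "SSL", 3100, 6000),
    Annotation("mode", "signing", 0, 6000),
    Annotation("pointing", "link-to-slide", 2500, 2900),
]

print(tally_by_tier(sample)["language"])
# Counter({'SSL': 2, 'fingerspelling': 1})
```

Separating the tiers in this way mirrors the analytical decision described above to keep language and mode apart, so that frequencies such as the share of fingerspelling within the language tier can be read off directly.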
Taken together, the multimodal analysis using ELAN as a tool made it possible for us to identify when and where different languages occurred, how the lecturers connected the languages, which media were used most, and how different modes interplayed. We could thereafter incorporate these features and patterns into a framework of translanguaging. In the following, we provide a qualitative sample of illustrative examples from the analysis.
5 Deaf-led translanguaging practices in higher education
The analysis reveals a creative and complex use of semiotic resources by the lecturers, which will be highlighted below. We did note differences depending on the topic being taught, as well as individual differences among lecturers, but in the following sections, we will focus on common recurring phenomena and patterns.
5.1 Several languages in play
The first theme in our analysis focuses on the languages in play. We found that three languages were frequently used: SSL, Swedish, and English. The lecturers used SSL as their primary language for instruction and communicated with the students in this language predominantly, both when lecturing and discussing, or explaining, course content, various tasks, etc. Swedish and English, in turn, occurred primarily in the PowerPoint presentations, on the whiteboard, and in books and articles. However, one striking observation we made was that the three languages were often in play simultaneously. As illustrated in Figure 1, for example, the lecturer is signing in SSL while the PowerPoint presentation that is visible behind her provides text in both Swedish and English, at times even within the same sentences (highlighted with the help of the boxes with language labels in the figure).
Here, the lecturer is teaching the subject “Cognitive grammar”, using examples from a book in English. The text in the PowerPoint presentation consists partly of clarifications in Swedish and partly of English passages from the book. To complicate this sequence of instruction even further, the author of the course book actually describes examples from Danish Sign Language via English, which on a less apparent, more basic level adds yet another language to the sequence. This illustrative example shows how several languages occur simultaneously. While mixing Swedish and English within the same written sentence is not the most common way these practices play out in the classroom, the use of both languages on a single slide was a very common feature in the studied classrooms. What differs in this deaf-led context compared to other (spoken language-based) contexts is that three languages are in play, two appearing in written modes and one in the signing mode. All three languages are expressed visually, indicating a purely visually oriented form of teaching.
Another example showing how different languages are intertwined came from the course entitled “Swedish as a second language for the deaf”, where the lecturer instructed the students in different types of texts and used translations between different languages. In one sequence, illustrated in Figure 2, the lecturer uses the Internet to show a short movie. Before starting, he tells the students that the language in the movie will be ASL and then asks them in SSL if they understand this language. Some of the students reply that they have only a little knowledge of the language, and the lecturer replies that they will discuss the content afterwards to ensure that all of them arrive at a common understanding.
It is interesting that although the teaching subject is Swedish, and the students receive instruction through both Swedish and SSL, two additional languages are used. Namely, not only is ASL used in the movie, but English is as well.
In sum, our analysis reveals that more than one language is in play nearly all the time in deaf-led higher education instruction in Sweden, which clearly shows that this setting is a highly multilingual learning environment where translanguaging is both natural and necessary for meeting the learning objectives of these students.
5.2 A range of media used
The study classrooms were found to be very well equipped with different media, e.g. whiteboard, computer, projector, etc., along with computer software such as PowerPoint and Word (see Figure 3).
Although the use of white screens, computers, projectors, and PowerPoint slides is common in teaching in general (see, e.g., Mazak and Herbas-Donoso 2015; Mondada 2012), these media are crucial for translanguaging purposes in deaf-led classrooms, because they enable lecturers to use both SSL and other languages in visual forms in their teaching. Without them, it would not be possible to use Swedish and English at all, because these languages are not accessible in spoken form to deaf lecturers and students.
5.3 Multimodal classroom practice
Continuing the analysis with a focus on multimodality, we found that different modes existed simultaneously, as shown in Figure 4. As indicated previously in this article, almost all the resources present are solely visually accessible, i.e. images, diagrams/figures, and enactments, in addition to languages that primarily occurred in the signed and written modes.
In the sequence illustrated in Figure 4, the lecturer is teaching in the course “Sign language”, focusing particularly on the motion structures of signed languages. The lecturer has created a PowerPoint slide consisting of a figure of the motion structure for the sign BIRD-SIT-ON-A-LINE 4 together with a picture of a signer, showing how the sign looks in the example. The figure and image interact through lines that connect different parts of the sign to different parts of the structure. The lecturer explains the figure and its connection to the sign in the image through SSL, repeating the example and the connections several times. Behind him, there is written text on the whiteboard, and the heading and concepts on the PowerPoint slide are in written language as well. As we can see here, two language modes are used, i.e. the signing mode and the written mode. In addition, several other modes are in play, for example, the figure and the image.
The sequence described above is a common and illustrative example of how several visual modes interplay in deaf-led classrooms. However, we also found another kind of action, used by two of the lecturers in some sequences, which can be described as enactment. These two lecturers, one man and one woman, use enactments to further illustrate different situations, actions, or contexts, in addition to using signed or written explanations. They simply show with the whole body how something can look in reality. This is also a visual strategy, in a mode other than language. Figure 5 illustrates the use of an enactment by one of the lecturers.
Here, the lecturer describes the motion associated with the concept ‘jump’. He points at the images in the PowerPoint slide, signs JUMP in different manners several times, and ends the explanation by actually jumping. Adding the use of another mode in the form of enactments brought examples, actions, or explanations to life for the students.
5.4 Visualizing other languages
As illustrated above, multiple languages are simultaneously in play in the classrooms, expressed in different modes. In the deeper analysis, however, we found another recurring phenomenon in the lecturers’ teaching behavior, one that cannot be described simply in terms of language or mode. It is an action the lecturers use to express, or, as we have chosen to describe it, to visualize one language in the mode of another language. This is possible precisely because the languages’ modal bases are so different, and its occurrence challenges traditional ways of treating language and mode. Based on the classroom data, the strategy can be divided into three categories: a) fingerspelling, b) visualizing language structures, and c) mouthing.
Fingerspelling is usually used to express names or to represent words/concepts from other languages. In our analysis, we found that fingerspelling was a very frequent and recurring phenomenon in the instruction, a result in line with the findings of previous studies. For example, Padden and Gunsauls (2003) found that fingerspelling occurs very frequently among deaf native signers in the U.S., both to represent English vocabulary and to signal contrastive meaning (i.e. to show differences between English and ASL structure). Also, Humphries and MacDougall (1999) found that deaf teachers fingerspell twice as much as hearing teachers, especially when chaining.
In our data, the lecturers mainly fingerspelled central concepts in Swedish or English, for example, entire words or just an affix in a compound together with another lexical sign. By visualizing Swedish or English concepts in this way, lecturers can talk in SSL about concepts, theories, or phenomena from other languages. Such fingerspelling helps students to develop their vocabulary in both Swedish/English and SSL, something they benefit from when reading textbooks. An example comes from a lesson in the subject “Sign language”. The use of @b in the gloss indicates a fingerspelling:
DECIDE FELLOW PHONOTACTICS@b POSS SWEDISH STRUCTURE GOOD PHONOTACTICS@b FELLOW GOOD
‘We decided to follow the Swedish phonotactic structure; it was best to do [when we started SSL research in Sweden].’
The lecturer is here explaining that researchers in Sweden decided to use Swedish phonotactic structure in their analysis of SSL. He fingerspells the concept ‘phonotactics’, which provides a visualization of the Swedish word through the signed mode.
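For readers who work with glossed transcripts like the one above, the suffix convention lends itself to mechanical processing. The sketch below is a hypothetical helper of our own (not part of ELAN or any glossing standard) that extracts the fingerspelled items from a gloss line by looking for the @b marker:

```python
def fingerspelled_items(gloss_line):
    """Return the glosses marked as fingerspelling (suffix '@b')."""
    return [token[:-len("@b")]
            for token in gloss_line.split()
            if token.endswith("@b")]

# The gloss line from the "Sign language" lesson discussed above.
gloss = ("DECIDE FELLOW PHONOTACTICS@b POSS SWEDISH STRUCTURE "
         "GOOD PHONOTACTICS@b FELLOW GOOD")

print(fingerspelled_items(gloss))
# ['PHONOTACTICS', 'PHONOTACTICS']
```

A helper like this makes it straightforward to quantify, across a whole transcript, how often a given concept is visualized through fingerspelling rather than expressed with a lexical sign.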
The second category, b) visualizing language structures, involves a negotiation between the lecturer and the students. We found this category primarily in the subject “Swedish as a second language for deaf”, when the lecturer needed to visualize Swedish language structure while discussing, for example, core sentences in a Swedish text. The lecturer begins the sequence by explaining that the students will be working with Swedish texts, which they are tasked to analyze and break down into core sentences. She announces that the class will first do one sentence together and that the students will then work in pairs. The lecturer turns to the PowerPoint slide, where a longer sentence in Swedish is visible. She points to it and waits for the students to read all of it. Thereafter, she negotiates the working procedure:
OK@b CORE SENTENCE WE MIX LANGUAGE SIGN-FLUENT AND point at text HARD HOLD-SEPARATE OF-COURSE BUT MUST TRY SIGN-SWEDISH AWFUL BUT BECOME SO
‘Okay, this is about core sentences. We have to mix languages here, because it is hard to hold fluent signing and this Swedish text separated, of course. We have to try, so we use signed Swedish. It is awful, but it must be done in this way.’
After this, the lecturer signs word after word in the sentence; that is, she uses signs from SSL in sequential Swedish word order with the aim of visualizing the Swedish sentence through the signing mode. She then switches back to SSL to discuss the Swedish sentence further.
In the last category, c) mouthing, the lecturer uses the mouth (without any voice added) in two different ways. The first is when the mouth voicelessly forms a Swedish word without a manual sign, i.e. mouthing. This mostly happens when giving non-manual feedback, e.g. agreeing with or reacting to something the students have said. Such mouthing was, however, limited to a few words, such as ‘precis’ [‘exactly’] and ‘bra’ [‘good’]. The second way of using voiceless mouthing occurred when the lecturer explained an English concept to the students in the subject “Swedish as a second language for deaf”:
CONTINUE TURN-TO INTEGRATE USE WAY HOW DO INTEGRATE
‘We continue and turn to an integrated approach, which is about how to integrate … ’
Here, the word ‘integrated’ was written in English on the PowerPoint slide, where most of the other text was in Swedish. When the lecturer explains the concept ‘integrated approach’, she does not choose the mouth form that matches the SSL sign INTEGRATED (Sw. ‘integrerad’); instead, she mouths the English word, but with a Swedish-based pronunciation (/integrated/ instead of /ˈintiˌɡreitid/).
To summarize, the three categories clearly reveal different strategies for representing features of other languages in a visual rather than auditory way. It can be likened to dressing one language in another language’s clothes. This is a technical way of mixing features from two languages, and it is possible simply because the languages come from different modes. From this perspective, we suggest that this visualization can be considered intramodal translanguaging, since the signing mode is used to represent both sign language and written language simultaneously. Neither language is expressed here in its natural linguistic form; the sign language grammar disappears, while written Swedish/English is transferred into the signing mode (cf. Emmorey et al. 2008).
The last of the five themes in our analysis focuses on a very frequent and recurring feature of the lecturers’ instruction, namely the different ways in which they connect languages and modes together. This can be associated with the practice of chaining described above. The lecturers create chains between the languages and modes in complex and varied ways, and we propose three subcategories for this chaining strategy: a) pointing, b) underlining, and c) placing signs.
The first category, pointing, is the most frequent means of creating chains employed by the lecturers in the data. Here, we focus on the situations in which the lecturers point to figures, images, diagrams, or texts – mainly on the whiteboard, SmartBoard, or white screen (showing a PowerPoint slide). The pointing directs the students’ attention to the content in different modes and is an important tool for the lecturers in adopting a translanguaging pedagogy. One example of such pointing comes from the subject “Cognitive grammar”. Here, the lecturer uses PowerPoint slides in which she has copied citations and examples from an English textbook. In addition, she has written sentences, headings, and additional explanations in Swedish on the same slides. When talking about the content with the students, she uses SSL.
In Figure 6, the lecturer uses pointing to highlight and direct attention to different concepts in the written text in two ways: sometimes she points before signing, and sometimes she points with one hand while signing with the other. The signed and written modes are thus linked together, and pointing is an important and efficient strategy for the lecturer when switching between languages. Here, the languages are expressed in their original and correct linguistic forms, and the pointing therefore constitutes an explicit form of visual translanguaging, i.e. visual linking between the signed/written languages as well as between modes.
The second category, b) underlining, is similar to pointing, but here the lecturers instead move the hand (which can take different shapes) to underline or highlight text, key points, or figures/images. It can be likened to underlining a sentence in a written text with a pen, but here it is done “in the air”, vertically, horizontally, or circularly, and the underline itself is visible only on a cognitive plane. We found the use of underlining to be very frequent in deaf-led classrooms.
The last category, c) placing signs, differs from the first two. Here, the lecturer places signs in direct connection to the text or figures/images. The manual linking tool in the form of the additional hand is omitted, but the link exists on a cognitive plane, because the placing of the signs makes the connection between the different modes clear. Figure 7 shows one illustrative example of such placing of signs.
The lecturer is explaining the motion structure of sign languages in the subject “Sign language”, and, when this slide appears in the presentation, he first explains and shows how the sign SIT is performed in SSL in different ways. Thereafter, he links his explanation to the image by placing his hand in front of it, holding the hand in the same shape as SIT. Even though a placed sign lacks an explicit manual form of linking, we suggest that a typical chaining strategy is also at work here, driven by underlying cognitive processes, i.e. embodying signs into the picture on the screen.
Taken together, we suggest that the chaining strategies used by the lecturers to link different languages and modes together constitute tools for creating a translanguaging practice in the classroom.
6 Discussion and conclusion
In this article, we have illustrated frequent and recurring classroom activities in a unique higher education setting. The examples show a natural multimodal practice in which multilingual lecturers and students interact with each other, sharing the same languages and cultures. Our attempt to describe this practice through the lens of translanguaging has led to new insights that we discuss further here. Firstly, some words regarding our positionality in this study are required. As authors, we have analyzed and written about each other’s practices. It was important for us not to analyze our own practices but to focus only on each other’s, and doing so has provided us with some fascinating insights. Despite apparent personal differences, it was interesting to recognize strategies from our own teaching being used by another lecturer. We learned a great deal about our own practices that we would never have been able to identify without the kind of deep analysis carried out in this study.
Humphries and MacDougall (1999) suggest that the life experiences of deaf teachers shape their ways of talking about and explaining, for example, concepts that they know can be hard for their students to understand. From our analysis, and the illustrative examples provided in this study, it can similarly be concluded that the deaf lecturers are highly skilled in several languages and use them in a flexible and natural way in the classroom, in terms of translanguaging. We also found these individuals to be very visually oriented in their teaching: they draw on their own deaf-visual experience in order to help their students understand the teaching content. They use a set of visual translanguaging strategies to direct the students’ attention to the different, visually apparent modes in focus during different phases of the instruction. We therefore contend that the visually based strategies used by the deaf lecturers, with the goal of facilitating student learning and academic achievement in higher education, are a part of translanguaging in ways similar to those presented in Mazak and Herbas-Donoso’s (2015) study, as well as to the chaining procedure described by Bagga-Gupta (2004), Humphries and MacDougall (1999), and Tapio (2013). However, the visually oriented translanguaging practice we have examined here also differs from other translanguaging practices in its use of language modes. While other translanguaging practices may use both the spoken and written modes of the cooperating languages, sign languages have no widely used written counterpart other than coded transcription symbols/notations used primarily for research and documentation purposes. Therefore, for deaf people, the sign language and the written language are always used in parallel for different purposes, and they supplement each other in many ways.
The sign language occurs only in the signing mode, while the national language occurs primarily in its written mode, and both languages are therefore an obvious and natural part of the language repertoire of deaf people. Another phenomenon in translanguaging practices where a sign language is one of the languages is the range of possibilities for mixing languages simultaneously. This concerns not only the presence of a written language on a PowerPoint slide alongside the lecturers’ signing, as mentioned above, but also the visualization of one language through the mode of another. When the lecturers in our study visualize written Swedish or English through the SSL mode, we suggest describing it as a form of intramodal translanguaging, since the core lies in the fact that both languages are expressed through the same signing mode, regardless of the language (English or Swedish). Swedish, for example, appears not in its original mode (spoken or written); rather, it is visualized in a “dressed” mode, i.e. within the signing mode, hence the concept of “intramodal” as opposed to “crossmodal” (switching between signing SSL and speaking Swedish). This intramodal translanguaging makes it possible for deaf people to talk about concepts, phrases, or sentences in the national language, or to borrow words from it in a dialogue, and thus to compensate for the lack of a spoken or written mode in sign language. However, translanguaging in general is only possible if the interlocutors share two or more languages, even if their skills in them differ. In our deaf-led higher education settings, the more skilled lecturers provide language practices that help the students to develop all their languages and to gain a better understanding of the course content.
We are truly grateful for the comments made by two anonymous reviewers as well as the insightful and valuable comments provided by the guest editor, Annelies Kusters, on the final version of this article, which helped us structure and amend the article. We also thank Sara Domström for the illustrations, and the lecturers and students who participated in this study.
Bagga-Gupta, Sangeeta. 2004. Visually oriented language use: Discursive and technological resources in Swedish Deaf pedagogical arenas. In Mieke Van Herreweghe & Myriam Vermeerbergen (eds.), To the Lexicon and Beyond: Sociolinguistics in European Deaf Communities, 171–207. Washington, DC: Gallaudet University Press.
Baker, Colin. 2011. Foundations of Bilingual Education and Bilingualism, 5th edn. Bristol: Multilingual Matters.
Berent, Gerald P. 2012. Sign language–spoken language bilingualism and the derivation of bimodally mixed sentences. In Tej K. Bhatia & William C. Ritchie (eds.), The Handbook of Bilingualism and Multilingualism, 2nd edn., 351–374. Malden, MA: Wiley-Blackwell.
Bezemer, Jeff & Carey Jewitt. 2010. Multimodal analysis: Key issues. In Lia Litosseliti (ed.), Research Methods in Linguistics, 80–97. London: Continuum.
Cormier, Kearsy. 2012. Pronouns. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Sign Language: An International Handbook, 227–240. Berlin & Boston, MA: De Gruyter Mouton.
Crasborn, Onno, Els van der Kooij, Dafydd Waters, Bencie Woll & Johanna Mesch. 2008. Frequency distribution and spreading behavior of different types of mouth actions in three sign languages. Sign Language & Linguistics 11(1). 45–67.
Emmorey, Karen, Helsa B. Borinstein, Robin Thompson & Tamar H. Gollan. 2008. Bimodal bilingualism. Bilingualism: Language and Cognition 11(1). 43–61.
Garcia, Ofelia. 2009. Bilingual Education in the Twenty-First Century: A Global Perspective. Malden, MA: Wiley-Blackwell.
Garcia, Ofelia & Li Wei. 2014. Translanguaging: Language, Bilingualism and Education. Basingstoke: Palgrave Macmillan.
Goffman, Erving. 1974. Frame Analysis: An Essay on the Organization of Experience. Cambridge, MA: Harvard University Press.
Hjulstad, Johan. 2016. Practices of organizing built space in videoconference-mediated interactions. Research on Language and Social Interaction 49(4). 325–341.
Humphries, Tom & Francine MacDougall. 1999. “Chaining” and other links: Making connections between American Sign Language and English in two types of school settings. Visual Anthropology Review 15(2). 84–94.
Johnston, Trevor, Jane van Roekel & Adam Schembri. 2016. On the conventionalization of mouth actions in Australian Sign Language. Language and Speech 59(1). 3–42.
Kress, Gunther & Theo van Leeuwen. 2001. Multimodal Discourse: The Modes and Media of Contemporary Communication. London: Hodder Education.
Kusters, Annelies, Max Spotti, Ruth Swanwick & Elina Tapio. 2017. Beyond languages, beyond modalities: Transforming the study of semiotic repertoires. International Journal of Multilingualism 14(3). 219–232.
Liddell, Scott K. 2003. Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
Mazak, Catherine. 2017. Theorizing translanguaging practices in higher education. In Catherine Mazak & Kevin Carroll (eds.), Translanguaging in Higher Education: Beyond Monolingual Ideologies, 1–10. Bristol: Multilingual Matters.
Mazak, Catherine & Claudia Herbas-Donoso. 2015. Translanguaging practices at a bilingual university: A case study of a science classroom. International Journal of Bilingual Education and Bilingualism 18(6). 698–714.
Meier, Richard P. 2002. Why different, why the same? Explaining effects and non-effects of modality upon linguistic structure in sign and speech. In Richard P. Meier & Kearsy Cormier (eds.), Modality and Structure in Signed and Spoken Languages, 1–25. New York: Cambridge University Press.
Mondada, Lorenza. 2012. Video analysis and the temporality of inscriptions within social interaction: The case of architects at work. Qualitative Research 12(3). 304–333.
Norris, Sigrid. 2004. Analyzing Multimodal Interaction: A Methodological Framework. London: Routledge.
Kita, Sotaro (ed.). 2003. Pointing: Where Language, Culture, and Cognition Meet. Mahwah, NJ: Lawrence Erlbaum Associates.
Swanwick, Ruth. 2016. Scaffolding learning through classroom talk: The role of translanguaging. In Patricia Elizabeth Spencer & Marc Marschark (eds.), The Oxford Handbook of Deaf Studies in Language, 420–430. Oxford: Oxford University Press.
Tapio, Elina. 2013. A nexus analysis of English in the everyday life of FinSL signers: A multimodal view on interaction. Doctoral dissertation, University of Oulu. Jyväskylä: Jyväskylä University Printing House.
Thesen, Lucia. 2016. The past in the present: Modes, gaze and changing communicative practices in lectures. In Arlene Archer & Esther Odilia Breuer (eds.), Multimodality in Higher Education, 31–52. Leiden: Brill.
ELAN. Max Planck Institute for Psycholinguistics, The Language Archive, Nijmegen, The Netherlands. http://tla.mpi.nl/tools/tla-tools/elan/
Published Online: 2018-03-03
Published in Print: 2018-03-26
Citation Information: Applied Linguistics Review, Volume 9, Issue 1, Pages 90–111, ISSN (Online) 1868-6311, ISSN (Print) 1868-6303, DOI: https://doi.org/10.1515/applirev-2017-0078.
© 2018 Holmström and Schönström, published by De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0).