While it is now accepted that sign languages should inform and constrain theories of ‘Universal Grammar’, their role in ‘Universal Semantics’ has been under-studied. We argue that they have a crucial role to play in the foundations of semantics, for two reasons. First, in some cases sign languages provide overt evidence on crucial aspects of the Logical Form of sentences, ones that are only inferred indirectly in spoken language. For instance, sign language ‘loci’ are positions in signing space that can arguably realize logical variables, and the fact that they are overt makes it possible to revisit foundational debates about the syntactic reality of variables, about mechanisms of temporal and modal anaphora, and about the existence of dynamic binding. Another example pertains to mechanisms of ‘context shift’, which were postulated on the basis of indirect evidence in spoken language, but which are arguably overt in sign language. Second, along one dimension sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core. This applies to loci themselves, which may simultaneously function as logical variables and as schematic pictures of what they denote (context shift comes with some iconic requirements as well). As a result, the semantic system of spoken languages can in some respects be seen as a simplified version of the richer semantics found in sign languages. Two conclusions could be drawn from this observation. One is that the full extent of Universal Semantics can only be studied in sign languages. An alternative possibility is that spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account (as recently argued by Goldin-Meadow and Brentari). Either way, sign languages have a crucial role to play in investigations of the foundations of semantics.
While it is now accepted that sign languages should inform and constrain theories of ‘Universal Grammar’, their role in ‘Universal Semantics’ has been under-studied (but see Zucchi 2012 for an introduction).  In this article, we will argue that they have a crucial role to play in the foundations of semantics, for two reasons.
First, in some cases sign languages provide overt evidence on crucial aspects of the Logical Form of sentences that are only inferred indirectly in spoken language. One example pertains to sign language ‘loci’, which are positions in signing space that can arguably realize logical variables; the fact that they are overt makes it possible to revisit foundational debates about the syntactic reality of variables, about mechanisms of temporal and modal anaphora, and about the existence of dynamic binding. Another example pertains to mechanisms of ‘context shift’, which were postulated on the basis of indirect evidence in some spoken languages, but which are arguably overt in sign language.
Second, along one dimension sign languages are strictly more expressive than spoken languages because iconic phenomena can be found at their logical core. This applies to loci themselves, which may simultaneously be logical variables and schematic pictures of what they denote; and context shift comes with some iconic requirements as well. As a result, the semantic system usually described for spoken languages can in some respects be seen as a simplified version of the richer semantics found in sign languages. Two possible conclusions could be drawn from this observation. One is that the full extent of Universal Semantics can only be studied in sign languages. An alternative possibility is that spoken languages have comparable expressive mechanisms, but only when co-speech gestures are taken into account. Either way, sign languages have a crucial role to play in investigations of the foundations of semantics.
In addition to these two claims about the general relevance of sign language to foundational questions in semantics, the empirical debates we investigate (pertaining to individual, temporal and modal reference, to context dependency, and to event decomposition) have all been taken to have foundational repercussions beyond semantics proper (e.g. Cresswell 1990; Kaplan 1989; Vendler 1967). To the extent that this is the case, the specific conclusions we reach in each domain are of more general interest as well.
In the rest of this introduction, we briefly explain why sign languages have taken an important role in studies of ‘Universal Grammar’; we state our main hypotheses about the role they should play in semantic studies; and we introduce our elicitation methods and transcription conventions.
As Sandler and Lillo-Martin write to introduce their ground-breaking survey (Sandler and Lillo-Martin 2006, p. xv),
sign languages are conventional communication systems that arise spontaneously in all deaf communities. They are acquired during childhood through normal exposure without instruction. Sign languages effectively fulfill the same social and mental functions as spoken languages, and they can even be simultaneously interpreted into and from spoken languages in real time.
While our understanding of their history is often quite incomplete (but see Delaporte 2007 on the history of LSF [=French Sign Language], and Delaporte and Shaw 2009, Shaw and Delaporte 2010, 2014 on the history of ASL [=American Sign Language]), the natural development of several recent sign languages has been documented in great detail by linguists and psycholinguists; to mention but one prominent example, the development of Nicaraguan Sign Language has been traced through several generations of signers since its inception in the late 1970s (Brentari and Coppola 2013; Meir et al. 2010). For our purposes, what matters is that sign languages have come to play an important role in studies of universals in phonology, morphology and syntax, for linguistic and neurological reasons.
Starting from the least linguistic approach, a major finding of neurological studies is that,
“overwhelmingly, lesion and neuroimaging studies indicate that the neural systems supporting signed and spoken language are very similar: both involve a predominantly left-lateralised perisylvian network. Recent studies have also highlighted processing differences between languages in these different modalities.” (MacSweeney et al. 2008)
To give an example, the core method in neuroimaging studies of language perception consists in identifying (i) areas that are activated by spoken language but not by a comparable non-linguistic auditory control, and (ii) areas that are activated by sign language but not by a comparable non-linguistic visual control. The areas identified by (i) and (ii) turn out to be the classic language centers in both modalities (Broca’s and Wernicke’s areas). Similarly, brain lesions have comparable effects in the two domains. As MacSweeney et al. (2008) and much recent research note, these results certainly don’t entail that there are no differences in the neural underpinnings of spoken and sign language, but they strongly suggest that such differences are found against a background of considerable similarities.
From a linguistic perspective, the spoken/sign language comparison is easiest to perform within syntax. Sandler and Lillo-Martin (2006) discuss pervasive typological similarities between signed and spoken language in this domain. While the sign language of a given Deaf community may be influenced by the spoken language of the surrounding hearing community (for instance through fingerspelling, which makes it possible to borrow words from written languages), it is always a safe bet that its grammar is entirely distinct. One reason is that natural sign languages usually make heavy use of devices that have no direct counterpart in spoken languages, for instance positions in signing space or ‘loci’ used to realize pronominal meanings, but with many more distinctions than are afforded by spoken language pronouns, as is discussed below. Even when some rules of the surrounding spoken language could be borrowed, this need not at all be the case, due to the independent history and development of sign languages. For instance, both American and Italian Sign Languages (ASL and LIS – among quite a few others) are descended from Old French Sign Language, and thus share some properties with contemporary French Sign Language (LSF); English influence on ASL can certainly occur, but the two languages have very different histories. Similarly, Italian Sign Language has an underlying word order of the type SOV (Subject Object Verb), whereas Italian has an underlying SVO order (Cecchetto et al. 2006). Examples could easily be multiplied.
More surprising perhaps, even sign language phonology has been argued to form a natural class with its spoken language counterpart. Without going into details, the general picture is in fact more intuitive than might initially appear. A crucial insight of contemporary phonology is that in language the atomic elements of sound are not phonemes but features, many of which can be seen as articulatory instructions to the production system (as a second approximation, one must take into account perceptual features as well). Thus b, d and g share some articulatory properties (e.g. all three involve vibrating vocal folds), but they differ in one crucial respect: b is produced with the lips (it is a labial consonant), d is produced with the tongue blade (it is a coronal consonant), and g is produced with the tongue body (it is a dorsal consonant; see e.g. Halle 1978). In effect, these properties encode articulatory gestures, a notion that has immediate counterparts in sign language, though the articulators are of course different. As a result, it is natural to ask whether the organization and behavior of these features is similar across modalities. For instance, in both domains phonologists have argued that features have a tree-like organization whereby some features are dependent on others, and are subject to rules of assimilation in which some features of a phoneme can spread to a neighboring phoneme (see Sandler and Lillo-Martin 2006 pp. 10–11). Thus some aspects of phonological organization appear to be common to spoken and to sign language (see Rawski 2018 for a formal approach to the comparative phonology of speech and sign).
In what follows, we will take for granted the conclusions of recent linguistic research on the role of sign languages in studies of Universal Grammar, i.e. of the shared properties and parameters of variation found in the phonology, morphology and especially syntax of human languages. Universal Semantics can be similarly defined as the comparative study of interpretive processes in language, with the goal of determining which interpretive properties are universal and which are open to variation (and if possible, why). Given the other similarities found between spoken and sign languages, it should go without saying that the latter have a role to play in studies of Universal Semantics. But we will argue that some properties of sign languages should give them a central role in foundational studies of semantics. Specifically, we will argue that sign languages can bring special insights into the foundations of semantics, for two reasons.
First, we will argue that sign languages can provide overt evidence on some key aspects of the logical structure of language, ones that one can only infer indirectly in spoken languages. We state this as a hypothesis of ‘Logical Visibility’ in (2).
Examples will involve in particular (i) covert variables that have been posited to disambiguate relations of binding in spoken language, and are realized as loci in sign languages; and (ii) covert operations of context shift, which have been argued to be useful to analyze the behavior of indexicals in some spoken languages, and are realized as Role Shift in sign languages. The scope of ‘Logical Visibility’ should not be exaggerated, however: we were careful to state the hypothesis existentially, as being about some mechanisms that are covert in spoken language but overt in sign language. This is in fact a well-worn type of argument in semantic typology. For instance, Szabolcsi (2001) (following Kiss 1991) argues that Hungarian ‘wears its LF on its sleeve’ because the scope of quantifiers is disambiguated by their surface position. In effect, Hungarian offers another case of Logical Visibility. Some of the cases we discuss in sign language have a particular importance because they bear on the relation between logical variables and their antecedents, and involve means of disambiguation (namely ‘loci’) that go beyond what spoken language can offer, possibly for reasons that are intrinsically related to the signed modality.
Second, we will argue that, in some areas, the semantics of sign languages is strictly more expressive than that of spoken language, due to iconic resources found at their logical core, as stated in (3).
Examples will primarily involve sign language loci, which can simultaneously fulfill the role of variables and display pictorial/diagrammatic properties. Here too, we will not be claiming that iconic effects don’t exist in spoken language, but we will suggest that the richness of iconicity in sign language and its seamless integration into the logical engine of the language raise particular challenges.
A note might be in order about the history of these hypotheses.
Hypothesis 1 has an early instantiation in Lillo-Martin and Klima (1990), who suggested that sign language loci can be analyzed as logical indices. This hypothesis is commonly accepted in sign language research (Sandler and Lillo-Martin 2006), although we will discuss many semantic consequences that are less standard, as well as some problems for it raised by Kuhn (2016). In the verbal domain, Wilbur (2003, 2008) postulated an ‘Event Visibility Hypothesis’ according to which crucial aspects of the event structure of verbal predicates are visible in the phonological form of the predicate sign.
Hypothesis 2 combines two strands of research that are often separated. While several researchers have noted the role of iconic considerations in the analysis of sign language anaphora (Liddell 2003; Kegl 2004), few have attempted to construct a framework that incorporates iconicity within the logical frameworks used in contemporary formal semantics. Researchers of the ‘iconic camp’ put much emphasis on iconic phenomena, often with interesting empirical insights, but with little connection to formal syntax and usually none to model-theoretic semantics (e.g. Cuxac 1999; Cuxac and Salandre 2007; Taub 2001; Liddell 2003). Researchers from the ‘formalist camp’ often develop their theories within contemporary generative grammar, but even in cases in which iconic phenomena play a prominent role (as in ‘agreement’ verbs, discussed below), they can only incorporate iconic phenomena as a separate system, without combining their semantics with the grammatical spine of the language (e.g. Lillo-Martin and Klima 1990; Neidle et al. 2000; Sandler and Lillo-Martin 2006). Following Schlenker et al. (2013), we motivate the development of a formal semantics with iconicity to address this challenge.
We will attempt to make Hypothesis 1 plausible in Sections 2–3, where we consider various cases in which sign languages provide overt evidence on crucial aspects of the Logical Form of sentences; these particularly pertain to the role and properties of variables, which can arguably be overt in sign languages (Section 2); but context shift and aspect display interesting cases of ‘visibility’ as well (Section 3). We will then turn to Hypothesis 2 in Sections 4–5, where we consider a dimension along which sign languages are strictly more expressive than spoken languages because they can make use of much richer iconic resources. Importantly, these iconic phenomena are found at the logical core of sign language, and in particular at the level of loci, which can simultaneously function as logical variables and as schematic pictures of what they denote (Section 4); iconicity also interacts in interesting ways with constructions that have been the object of active discussions in semantics, such as context shift (Section 5). Finally, in Section 6 we discuss two possible views of the role of sign languages in studies of Universal Semantics. One view is that, along one dimension at least, sign languages provide information that is mostly missing from spoken languages due to the poverty of iconic resources in the latter. An alternative view is that rich iconic phenomena can be obtained in spoken languages as well, but only when co-speech gestures are fully integrated into semantic studies.
Two remarks are in order about elicitation methods and transcription conventions.
Most of the data discussed below are cited from recent articles published in linguistics journals. In many cases, data were elicited using the ‘playback method’, with repeated quantitative acceptability judgments (1–7, with 7=best) and repeated inferential judgments (on separate days) on videos involving minimal pairs (see e.g. Schlenker et al. 2013; Schlenker 2014); we have kept quantitative judgments when these appeared in earlier articles (they appear as superscripts at the beginning of sentences). In a nutshell, the playback method involves two steps. First, a sign language consultant signs sentences of interest on a video, as part of a paradigm (often with 2 to 5 sentences) signed as minimal pairs. Second, the consultant watches the video (usually of his or her own minimal pairs), provides quantitative acceptability ratings, and (when relevant) inferential judgments, enters them in a computer, and redundantly signs them on a video. This step can be repeated on other days, usually with the same consultant – but it may also be repeated with the same videos but different consultants if feasible and necessary. This method has the advantage of allowing for the precise assessment of minimal pairs (signed on the same video), in a quantitative, replicable way. Even when the judgments are obtained from a very small number of consultants (often one or two), the repetition of the task makes it possible to assess the stability of the judgments; and if necessary this method could be turned into an experimental one in the future. Note that when judgments were obtained with a more traditional categorization of sentences as ‘acceptable’/‘unacceptable’ (rather than with a quantitative scale), we kept the convention of having sentences preceded by * if unacceptable, and by nothing if acceptable.
In the following, sign language sentences are glossed in capital letters, as is standard. Expressions of the form WORD–i, WORDi and […EXPRESSION…]i indicate that the relevant expression is associated with the locus (=position in signing space) i. A suffixed locus, as in WORD–i, indicates that the association is effected by modulating the sign in such a way that it points towards locus i; a subscripted locus, as in WORDi or […EXPRESSION…]i, indicates that the relevant expression is signed in position i. Locus names are assigned from right to left from the signer’s perspective; thus when loci a, b, c are mentioned, a appears on the signer’s right, c on the left, and b somewhere in between (special conventions will be introduced for high and low loci when relevant). IX (for ‘index’) is a pointing sign towards a locus; it is glossed as IX-i if it points towards (or ‘indexes’, as linguists say) locus i; the numbers 1 and 2 correspond to the position of the signer and addressee respectively. As will be explained in greater detail below, IX-i is a standard way of realizing a pronoun corresponding to locus i; but sometimes the pointing sign IX-i serves to establish rather than to retrieve a locus i (see fn. 13). Agreement verbs include loci in their realization – for instance the verb a,b-MEET starts out from the two loci a and b, and means that the individuals denoted by these loci met. When an expression indexes a default locus, it is usually written without a letter index (e.g. IX rather than IX-a). IX-arc-i refers to a plural pronoun indexing locus i, as it involves an arc motion towards i rather than a simple pointing sign. CLa stands for a classifier signed in locus a. Specifications are sometimes added to distinguish different classifiers – e.g. CL-hang stands for a classifier denoting a person in hanging position. Finally, rep is used when an expression is repeated. 
(When citing sign language sentences from other publications, we try to keep their transcription conventions, which might occasionally lead to small inconsistencies across examples.)
In most cases, we omit non-manual expressions and manual modulations, except in our discussion of Role Shift in Section 3.1, where they are crucial. RSa encodes Role Shift, typically realized with at least body shift and eyegaze shift, and corresponding to the perspective of a character associated with locus a. When non-manual modulations are encoded, they appear on a line above the signs they modify, and ^ encodes raised eyebrows, while ~ encodes lowered eyebrows.
We start our discussion of Logical Visibility with sign language loci, which were analyzed by several researchers (starting with Lillo-Martin and Klima 1990) as the overt manifestation of logical variables. It should be borne in mind that variables have played two slightly different roles in recent semantics. In the tradition of formal logic as well as in syntax and semantics, a quantifier can bind a variable only in case the latter is ‘in its scope’ (in the logician’s terms), or equivalently is ‘c-commanded by it’ (in the syntactician’s terminology). Thus in (4)a the variable x in P(x) is semantically dependent on (‘bound by’) the universal quantifier ∀x, but the second occurrence of x, in Q(x), is not dependent on ∀x because it is not in its scope. Exactly the same results hold of (4)b modulo the replacement of the universal quantifier ∀x with an existential quantifier ∃x.
The situation changed when semanticists and logicians developed ‘dynamic semantics’, a class of systems designed to allow existential quantifiers (but not universal quantifiers) to ‘bind’ outside of their standard scope (Kamp 1981; Heim 1982; and especially Groenendijk and Stokhof 1991). Linguistically, such systems were motivated by the observation that an existential quantifier may control a so-called ‘donkey’ pronoun which is not within its scope, as in: A student cheated and he will be disciplined (such dependencies are much more difficult with universal quantifiers, hence the asymmetric treatment between (4)a and (4)b: only the latter gives rise to a non-standard, ‘dynamic’ notion of dependency). The argument from sign language will follow these developments. We will initially argue that loci can be the overt realization of variables, but without taking a stand on the dynamic vs. non-dynamic nature of the underlying logic; and we will use this initial observation to argue that sign language has time- and world-denoting variables in addition to individual variables. Only then will we go back to the debate about dynamic semantics and argue that sign language data suggest that some quantifiers can control pronouns that are not within their standard scope.
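The scope facts just described can be rendered schematically; the formulas below are our own illustrative sketch, not a reproduction of the article's numbered examples:

```latex
% (a) Universal case: only the occurrence of x in P(x) is bound;
%     the occurrence in Q(x) lies outside the scope of \forall x
%     and thus remains free.
(\forall x\, P(x)) \wedge Q(x)
% (b) Existential case: in classical logic Q(x) is likewise unbound,
%     but dynamic semantics lets \exists x bind beyond its scope:
(\exists x\, P(x)) \wedge Q(x)
% Donkey-style paraphrase of 'A student cheated and he will be disciplined':
% (\exists x\,(\mathit{student}(x) \wedge \mathit{cheated}(x))) \wedge \mathit{disciplined}(x),
% with \mathit{disciplined}(x) dynamically bound by \exists x.
```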
Sentences such as (5)a and (6)a can be read in three ways, depending on whether the embedded pronoun is understood to depend on the subject, on the object, or to be deictic.
These ambiguities have been analyzed in great detail in frameworks that posit that pronouns have the semantics of variables, which may be bound by a quantifier, or left free – in which case they receive their value from an assignment function provided by the context. For instance, in the textbook analysis of Heim and Kratzer (1998), one way to represent the ambiguity of (5)a is through the representation in (5)b, where a bona fide Logical Form would be obtained by choosing the index i, k or m for the pronoun he (since the subject and object are referring expressions, there are several alternative ways to represent the ambiguity). (6)b summarizes three possible Logical Forms of (6)a within the same framework, depending on whether he is given the index i, k or m.
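To make the indexing mechanism concrete, here is a hypothetical paradigm in the style of Heim and Kratzer (1998); the sentence and names are ours, chosen purely for illustration, and do not reproduce the article's (5) or (6):

```latex
% Hypothetical sentence: John told Bill that he would win.
% Three Logical Forms, depending on the index assigned to 'he':
\mathrm{John}_i\ \text{told}\ \mathrm{Bill}_k\ \text{that}\ \mathrm{he}_i\ \text{would win}
  % he = John (bound/coreferential with the subject)
\mathrm{John}_i\ \text{told}\ \mathrm{Bill}_k\ \text{that}\ \mathrm{he}_k\ \text{would win}
  % he = Bill (coreferential with the object)
\mathrm{John}_i\ \text{told}\ \mathrm{Bill}_k\ \text{that}\ \mathrm{he}_m\ \text{would win}
  % he = free: its value is supplied by the contextual assignment function
```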
Sometimes these representations can get quite complex, for instance to capture the fact that plural pronouns may be simultaneously bound by several quantifiers, as in (the relevant reading of) (7)a, represented as in (7)b.
In this case, it is essential, on the relevant reading, that they should be simultaneously dependent on a representative and on a senator, hence the ‘sum’ index i+k that appears on they in (7)b.
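Schematically, the split-antecedent configuration can be pictured as follows (our own sketch of the kind of Logical Form described above, not the article's (7)b):

```latex
% A 'sum' index i+k on the plural pronoun marks simultaneous dependency:
[\text{a representative}]_i\ \ldots\ [\text{a senator}]_k\ \ldots\ \text{they}_{i+k}
% they_{i+k} denotes the plural individual obtained by summing the values
% assigned to i and k, so the pronoun depends on both quantifiers at once.
```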
While it is clear that, say, the sentence in (6)a is ambiguous, it is not at all uncontroversial that one should capture its readings by positing invisible indices that are nonetheless supposed to be cognitively and semantically real. In fact, following in the footsteps of Quine (1960), a movement of ‘variable-free semantics’ has proposed to analyze such ambiguities without recourse to variables to begin with (e.g. Jacobson 1999, 2012).
In this section, we survey recent results that suggest that sign language displays an overt version of something close to the indices of (5)–(7), and that this fact can be used to revisit some foundational questions in semantics. However, it will prove useful to distinguish between two versions of this hypothesis of ‘Variable Visibility’. According to the Weak Version, it is possible to associate with both a pronoun and its antecedent a symbol (namely a locus) that marks their dependency, and to associate different symbols with different deictic pronouns if they denote different objects. According to the Strong Version, the symbols in question – loci – really do display the behavior of variables – which as we will see below is a strictly stronger (and possibly overly strong) claim.
We will establish the plausibility of Variable Visibility (Section 2.2), and then we will use it to revisit a foundational debate about intensional semantics: does natural language have time and world variables (Section 2.3)? We will then focus on the distinction between standard quantification and dynamic quantification, already introduced in connection to (4), asking: can sign language variables be dynamically bound (Section 2.4)? We will sketch a positive answer to both questions: first, the various versions of the pointing sign can have temporal and modal in addition to nominal uses; second, the dependency between pronouns and their antecedents made visible by loci behaves very much like dynamic binding, including in some cases in which dynamic approaches and their main competitors (‘E-type approaches’) make different predictions. We will then revisit the choice between the Weak and the Strong Version of Variable Visibility by discussing data due to Kuhn (2016), who argues that loci display some properties that are unexpected of bona fide variables (Section 2.5).
As mentioned, Lillo-Martin and Klima (1990) argued that logical variables or ‘indices’, which are usually covert in spoken languages, can be overtly realized in sign language by positions in signing space or ‘loci’.  In case a pronoun is used deictically or indexically, its locus usually corresponds to the actual position of its denotation, be it the speaker, the addressee, or some third person (e.g. Meier 2012). If the pronoun is used anaphorically, the antecedent typically establishes a locus, which is then ‘indexed’ (=pointed at) by the pronoun. In (9)a (ASL), the sign names Bush and Obama establish loci by being signed in different positions; in (9)b, the antecedent noun phrases are accompanied with pointing signs that establish the relevant loci. In quantificational examples, indexing disambiguates among readings, as in (10) (LSF).
A crucial property of sign language anaphora is that loci can be created ‘on the fly’ in many different positions of signing space, and that there is no clear upper bound on the number of loci that can simultaneously be used, besides limitations of performance (since signers need to be able to distinguish loci from each other, and to keep their position and denotation in memory). Now there are spoken languages in which third person reference can be disambiguated by grammatical means, for instance by way of a distinction between proximate and obviative marking (in Algonquian, see Hockett 1966) or in switch-reference systems (e.g. Finer 1985). But these only make it possible to distinguish among a small number of third person elements – typically two or three (for instance, ‘proximate’, ‘obviative’, and sometimes ‘double obviative’ in obviative systems). By contrast, there seems to be an unlimited number of potential distinctions in sign language, and in this case the signed modality – and specifically the fact that loci can be realized as points in space – seems to play a crucial role in Variable Visibility.
The cases we discussed above involved singular loci. But when a pronoun denotes a plurality, it can be realized by an ‘arc’ pointing sign, which thus indexes a semi-circular area; and there are also dual and even trial pronouns when the pronoun denotes two or three individuals. Strikingly, these pronouns can simultaneously index several loci in cases corresponding to the ‘split antecedents’ discussed in (7). Thus in (11), the dual pronoun THE-TWO-a,b is realized with a horizontal ‘2’ handshape that goes back and forth between the two loci; and it can be checked that this is no accident: if the position of the loci is modified, the movement that realizes THE-TWO changes accordingly.
More complex cases can easily be constructed, with trial or plural pronouns indexing more than two loci.
Since there appears to be an arbitrary number of possible loci, it was suggested that these do not spell out morpho-syntactic features, but rather are the overt realization of formal indices (Lillo-Martin and Klima 1990; Sandler and Lillo-Martin 2006; we revisit this point in Section 2.5). While pointing can have a variety of uses in sign language (Sandler and Lillo-Martin 2006; Schlenker 2011a), we will restrict our attention to pronominal uses. Importantly, there are some striking similarities between sign language pronouns and their spoken counterparts, which make it desirable to offer a unified theory.
The first similarity is that sign language pronouns obey at least some of the syntactic constraints on binding studied in spoken language syntax. For instance, versions of the following rules have been described for ASL (Lillo-Martin 1991; Sandler and Lillo-Martin 2006; Koulidobrova 2011): Condition A, which mandates that a reflexive pronoun such as himself corefer with a local antecedent (e.g. Hei admires himselfi); Condition B, which prohibits a non-reflexive pronoun from overlapping in reference with a local antecedent (hence the deviance of #Hei admires himi, understood with coreference); and Strong Crossover, which prohibits a quantificational expression from moving to the left of a coindexed pronoun that c-commands its base position (hence the deviance of #[Which man]i does hei think I will hire ti, where ti is the base position of the interrogative expression, and hei is coindexed with it).
The second similarity is that, in simple cases at least, the same ambiguity between strict and bound variable readings is found in both modalities (see Sandler and Lillo-Martin 2006; further cases will be discussed below); this is illustrated in (12), which has the same two readings as in English: the third person mentioned can be understood to like his mother, or the speaker’s mother. 
A third similarity pertains to cases of ‘donkey anaphora’, or apparent binding without c-command, which as mentioned are found in sign and in spoken language alike (Schlenker 2011b) – a point we will investigate in greater detail in Section 2.4.
It is thus a reasonable hypothesis that the pronominal systems of sign and spoken language share at least a common core. We will now explore cases in which sign language loci arguably provide evidence for specific conclusions in some foundational debates in semantics. 
We turn to the debate concerning the existence of an abstract anaphoric mechanism that applies in similar fashion to the nominal, temporal and modal domains.  In a nutshell, we argue that ASL loci have all three uses, and thus provide an argument in favor of the existence of such an abstract system. In what follows, it will be a good rule of thumb to take temporal and modal uses of loci to have roughly the same meaning as the English word then, which has both temporal and modal uses; the crucial difference is that in ASL the very same word can have nominal, temporal and modal uses (and locative uses as well, as we will see shortly); and that it arguably ‘wears its indices on its sleeves’ because of the variable-like uses of loci.
The point is by no means trivial. In the tradition of modal and tense logic, it was thought that expressions are only implicitly evaluated with respect to times and possible worlds: language was thought to be endowed with variables denoting individuals, but not with variables denoting times or possible worlds. By contrast, several researchers argued after Partee (1973) and Stone (1997) that natural language has time- and world-denoting variables – albeit ones that manifest themselves as affixes (tense, mood) rather than as full-fledged pronominal forms. In other cases (e.g. the word then in its temporal and modal uses), it is reasonable to posit that spoken language indeed has an overt temporal/modal pronoun (Iatridou 1994; Izvorski 1996; Schlenker 2004a; Bhatt and Pancheva 2006), but that it happens to be pronounced differently from individual-denoting pronouns. Now if a single abstract anaphoric system is indeed at work across the nominal, temporal and modal domains, one might expect that some languages have a single pronoun that can be used across categories. It has been argued before that there are indeed morphological or syntactic similarities across these categories (Bittner 2001; Bhatt and Pancheva 2006). Here we make the simple suggestion that ASL pronouns in their various forms can have nominal, temporal, modal and also locative uses.
The full argument has three steps:
1. As we discussed above, nominal anaphora in sign language usually involves (i) the establishment of positions in signing space, called ‘loci’, for antecedents; (ii) pointing towards these loci to express anaphora. Both properties are also found in the temporal and modal domains.
2. This observation doesn’t just hold of the singular index; temporal uses of dual, trial and plural pronouns can be found as well. The phenomenon is thus general, and it is not plausible to posit that it is an accident that all these morphologically distinct pronouns simultaneously have nominal, temporal and modal uses: indexing per se seems to have all these uses.
3. Temporal and modal anaphora in ASL can give rise to patterns of inference that are characteristic of so-called ‘donkey’ pronouns (i.e. pronouns that depend on existential antecedents without being in their syntactic scope).
Here we will be content to just illustrate the first step of the argument, and refer the reader to Schlenker (2013a) for further details.
Let us start with temporal indexing: It can be seen in (13) that the same possibilities are open for temporal anaphora as were displayed for nominal anaphora in (9)–(10): antecedents establish loci; pronominal forms retrieve them by way of pointing. 
As can be seen, temporal indexicals, when-clauses (which are semantically similar to definite descriptions of times), and existential time quantifiers (sometimes) can all give rise to patterns of anaphora involving the same pronoun IX as in the nominal case (the existential case involves an instance of ‘dynamic binding’ whose nominal counterpart is discussed at greater length in Section 2.4). Importantly, loci appear in the usual signing space, which is in front of the signer. Although the words for tomorrow and yesterday are signed on the ‘time line’, which lies on a sagittal plane (tomorrow is signed towards the front, yesterday towards the back), no pointing occurs towards it, at least in this case (but see Emmorey 2002 for discussion).
Let us turn to modal indexing. While there are no clear world indexicals or world proper names, modals such as can are standardly analyzed as existential quantifiers over possible worlds; and if-clauses have occasionally been treated as definite descriptions of possible worlds (e.g. Bittner 2001; Schlenker 2004a; Bhatt and Pancheva 2006). Both cases can give rise to locus indexing in ASL:
We conclude that explicit anaphoric reference to times and possible worlds is possible in ASL – though our analysis leaves it entirely open whether times and worlds should be primitive types of entities in our ontology, or should be treated as varieties of a more general category of situations.
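The claim that a single indexing mechanism serves nominal, temporal and modal anaphora can be made concrete with a toy assignment function. The encoding below (loci as dictionary keys, with times and worlds as plain labels) is purely our illustrative assumption, not a formalism from the sign language literature:

```python
# A toy sketch of 'all-purpose' locus indexing: one assignment maps loci
# to individuals, times, or possible worlds alike. The encoding (strings
# for times and worlds, a dict for the assignment) is a hypothetical
# illustration, not ASL grammar.

g = {
    "a": "john",        # nominal locus: an individual
    "b": "yesterday",   # temporal locus: a time
    "c": "w1",          # modal locus: a possible world
}

def IX(locus, assignment):
    """The pointing sign IX retrieves whatever its locus was set up to
    denote, regardless of semantic category."""
    return assignment[locus]

print(IX("a", g), IX("b", g), IX("c", g))
```

On this picture, nothing in the retrieval mechanism itself cares whether a locus was set up by a nominal, temporal or modal antecedent – which is exactly the point of the argument above.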
While the argument could stop here, the sign language data argue for a further conclusion, namely that temporal and modal reference might be more similar to locative reference than to individual anaphora. The argument hinges on a peculiarity which we will revisit in greater detail in Section 4.5. When an individual has been associated with a spatial location in previous discourse, one can refer to him by pointing towards the locus associated with that location (though pointing to the original locus of the individual is often possible too). To illustrate, let us consider first the locative example in (15). The loci a (on the signer’s right) and b (on the signer’s left) are respectively associated with FRENCH CITY and AMERICAN CITY in the first sentence. But in the second sentence, the pronouns in bold, which index these loci, refer to John rather than to these locations. Intuitively, they can be thought of as referring to John-in-the-French-city vs. John-in-the-American-city (a point to which we return below).
Strikingly, similar facts hold in the temporal domain. In the first sentence of (16), the loci a (on the signer’s right) and c (on the signer’s left) are respectively associated with times at which John was a college student and a college professor; JOHN is associated with locus b, in the middle. In the second sentence, however, the pronouns in bold index the same loci but refer to John rather than to time periods.
The same pattern is found in the modal domain:
The loci a and c, which are initially associated with possible situations in which John is a college student and a college professor respectively, are used in the second sentence to refer to John himself.
There are two broad conclusions that one could draw from these observations. One possibility is that locative, temporal and modal anaphora form a natural class which is somewhat different from individual anaphora. This might for instance be because the ‘primitive elements’ of natural language ontology include individuals on the one hand, and a broad class of ‘situations’ on the other – with locative, temporal and modal anaphora as subspecies of situation anaphora. An alternative possibility is that it is just because individuals can naturally be taken to be situated at locations, times and worlds that individual loci undergo locative shift.  This analysis might make sense within the broad iconic theory we will sketch in Section 4.
Irrespective of the issues raised by locative shift, our main conclusion is that a single anaphoric system, locus indexing, makes it possible to realize nominal, temporal and modal anaphora – as well as locative anaphora, as we just saw. Importantly, it is too early to tell whether it is the signed modality per se that makes it possible to have all-purpose pronouns. It might be a grammatical accident of ASL that the pointing sign, in its various incarnations (singular, dual, trial and plural forms), can have all these uses. If so, it should be possible in principle to find a spoken language with a pronoun underspecified in exactly the same way; in essence, such a pronoun would fulfill both the functions of he/she/it and of then. Another possibility is that there is something essential about loci – for instance the fact that they can serve as ‘pure’ logical variables – which allows them to have nominal, temporal, modal and locative uses alike. We leave this question open, and turn to a case of visibility which is arguably connected in a more essential way to the signed modality.
As we mentioned at the outset, recent semantic research has argued that variables can be ‘dynamically bound’ by existential quantifiers, in a way which is not predicted by standard notions of scope.  In this section, we argue that sign language loci may be dynamically bound, and thus provide an argument in favor of dynamic treatments of anaphora.
Our argument has the following logic. If indeed loci can be the overt realization of some variables, binding relations, which must be inferred in spoken languages, can be made overt in sign languages. Specifically, by virtue of the device of locus establishment and retrieval, the connection between pronouns and their antecedents is sometimes made formally explicit in sign language. This will turn out to show that a pronoun can indeed be dependent on an existential quantifier without being in its scope / c-command domain.
In some cases, it seems that the standard notion of scope / c-command is necessary. No man drinks if he drives has a meaning akin to no man x is such that x drinks if x drives, and he does appear in the scope of no man, as shown in (18)a. By contrast, If no man drinks, he drives does not allow the pronoun to be dependent on no man because he is not in the scope of no man, as shown in (18)b (to be felicitous, he would have to refer to some salient individual).
When no man is replaced with a man, however, the facts change. If a man drinks, he suffers is naturally interpreted as: If a man drinks, that man suffers; the pronoun is dependent on the quantifier although it is not within its scope, as shown in (19).
1. Dynamic Semantics: One view is that the logic underlying natural language is just different from standard logic. Dynamic semantics developed new rules that make it possible for a variable or a pronoun to depend on an existential quantifier or an indefinite without being in its scope (this may be done by treating indefinites themselves as variables, as in Kamp 1981; Heim 1982; or by allowing existential quantifiers to bind outside of their syntactic scope, as in Groenendijk and Stokhof 1991).
2. E-type analysis: The opposing view is that no new logic is needed for natural language because the assimilation of pronouns to variables was too hasty. On this view, the pronoun he in (19) should be analyzed as a concealed description such as the man, or the man who drinks (e.g. Evans 1980; Heim 1990; Elbourne 2005); analyses that make this assumption are called ‘E-type theories’. In some E-type theories, the pronoun is literally taken to come with an elided noun – for instance, in this case he = the man, where man is unpronounced and he is a version of the (this identity is morphologically realized in German, where der means both the and he) (Elbourne 2005). In other E-type theories, the pronoun is taken to have a richer semantic content, with for instance he = the man who drinks (e.g. Heim 1990). We henceforth restrict attention to the former analysis (Elbourne’s), which is one of the most elegant and articulated E-type theories currently on the market (see Schlenker 2011c for a discussion of other E-type theories in the present context).
Each analysis involves some refinements, which we will only briefly mention. The dynamic analysis develops rules of semantic interpretation that allow he in (19) to depend on a man without being in its scope. This formal connection is taken to be represented in language through unpronounced variables similar to those of logic. Thus the sentence If [a man]x drinks, hex suffers is taken to include a variable x that encodes the dependency of he on a man.
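The dynamic idea can be made concrete with a minimal executable sketch in the spirit of Groenendijk and Stokhof’s Dynamic Predicate Logic, on which a formula denotes a relation between assignments rather than a truth value. The toy model (the sets MEN, DRINKS, SUFFERS) and all function names below are our own assumptions, not the authors’ formalism:

```python
# A minimal sketch of dynamic binding: a formula maps an input
# assignment to a list of output assignments, so an existential
# quantifier can extend the assignment in a way that remains visible
# to later conjuncts -- unlike classical scope.

MEN = {"john", "bill"}
DRINKS = {"john"}
SUFFERS = {"john"}

def exists(var, pred):
    """[a man]x with restrictor pred: outputs one extended assignment
    per verifying individual."""
    def update(g):
        return [dict(g, **{var: d}) for d in sorted(MEN) if d in pred]
    return update

def atom(pred, var):
    """A test: passes the assignment through iff the predicate holds."""
    def update(g):
        return [g] if g[var] in pred else []
    return update

def conj(phi, psi):
    """Dynamic conjunction: thread phi's output assignments into psi."""
    def update(g):
        return [h2 for h1 in phi(g) for h2 in psi(h1)]
    return update

# '[A man]x drinks and hex suffers': the pronoun hex is outside the
# syntactic scope of the existential, yet still depends on it.
discourse = conj(exists("x", DRINKS), atom(SUFFERS, "x"))
print(discourse({}))  # [{'x': 'john'}]: true with x = John
```

The crucial design choice is that `exists` does not close off its variable: its output assignments carry the binding forward, which is precisely what ‘binding beyond syntactic scope’ amounts to in this framework.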
For its part, the E-type analysis must address two challenges. (i) First, it must explain which man the pronoun he (analyzed as meaning the man) refers to in (19) – for there is certainly more than one man in the world. The standard solution is to take the word if to make reference to situations that are small enough to contain just one man. If a man drinks, he suffers is thus analyzed as: In every situation s in which a man drinks, the man in s suffers, with one man per situation. (ii) Second, the E-type analysis must explain what kind of formal link connects he to a man in (19). While the thrust of the approach is that this link is not directly interpreted (or else the analysis would be granting the main point of the dynamic solution), there appears to be some formal connection between the pronoun and its antecedent, which forces the latter to be a noun phrase. The motivation for this conclusion is that when one keeps the meaning of the if-clause constant, the presence of a noun phrase is still crucial to license the pronoun. For instance, John is married and John has a wife are usually synonymous; but although (20)a is grammatical, (20)b is not – it seems that the pronoun is missing a noun phrase as its antecedent.
This is known as the problem of the ‘formal link’ between the pronoun and its antecedent (Heim 1990). While different E-type theories give different solutions to this problem, we will follow here Elbourne’s elegant analysis (Elbourne 2005): the desired data can be derived if she is represented as the wife, with ellipsis of wife, which must be recovered through a syntactic operation; ellipsis resolution can in effect establish the desired formal link between she and its antecedent.
In Schlenker (2011b, 2011c), ASL and LSF data were used to suggest that dynamic approaches predict the correct indexing patterns in the case of donkey sentences; and that E-type approaches in general, and Elbourne’s analysis in particular, are faced with a dilemma: either they are refuted by our sign language data, or they must be brought so close to dynamic semantics that they might end up becoming a variant of it.
To understand the nature of the dilemma, we must focus on so-called ‘bishop’ sentences, which are characterized by the fact that two pronouns are dependent on non-c-commanding indefinite antecedents with symmetric semantic roles, as in (21)a.
The ‘bishop’ sentence in (21)a is crucial because the situations referred to by the if-clause include two bishops that play symmetric roles (if a bishop x meets a bishop y, it is also true that a bishop y meets a bishop x).
– The dynamic analysis in (21)b has no difficulty here because each noun phrase introduces a separate variable; this allows each pronoun to depend on a different quantifier because hex and himy carry different variables (we could also have hey/himx, but not hex/himx or hey/himy: the pronouns must carry different variables to refer to different bishops, or else the sentence would be understood as involving self-blessings – and in addition a reflexive would be needed).
– The E-type analysis must first postulate that the two bishops mentioned in the antecedent of (21)a are in principle distinguishable by some descriptions. This is not quite trivial: if bishop b meets bishop b’, by virtue of the (symmetric) meaning of meet, it is also the case that bishop b’ meets bishop b. In the theory developed in Elbourne (2005), the if-clause is taken to quantify over extremely fine-grained situations – so fine-grained, in fact, that a situation <x, y, meet> in which x meets y is different from a situation <y, x, meet> in which y meets x. But this is not quite enough: to obtain the right meaning, the pronouns must still be endowed with some additional material – perhaps provided by the context – to pick out different bishops in a given case <bishop1, bishop2, meet>. Even with the device of very fine-grained situations, (21)c is thus insufficient because it does not specify which bishop each pronoun refers to; in (21)c’, the pronouns are enriched with the (stipulated) symbols #1 vs. #2, which are intended to pick out the ‘first’ or the ‘second’ bishop in <bishop1, bishop2, meet>. But the question is how these index-like objects end up in the Logical Form. For Elbourne’s theory, the dilemma is as follows:
Horn I. If #1 and #2 are provided by a mechanism – possibly a contextual one – which is independent from NP ellipsis resolution, we make the counterintuitive prediction that the two pronouns can be dependent on the same quantifier while still carrying different index-like objects.  Importantly, this is so while we keep the intended meaning constant, a meaning that involves one bishop blessing the other bishop (no self-blessings here!). The reason for the counterintuitive prediction is that the role of distinguishing the two bishops falls on the symbols #1 and #2, which are provided independently from the ellipsis resolution process by which pronouns are formally linked to their antecedents. So long as ellipsis provides the right NP (just bishop in (21)a), and no matter where this NP is obtained, the right truth conditions will be produced.
Horn II. If #1 and #2 are inherited by the mechanism of ellipsis resolution itself, we will end up with something very close to a dynamic analysis: the antecedents carry a formal index, and the pronouns recover the very index carried by their antecedent, as is illustrated in (22):
The innovation of the analysis is in essence to add a story about ellipsis to a dynamic-style account, but it is not clear at all that the latter has been replaced with a classical account.
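The role of the stipulated symbols #1/#2 can be rendered concrete with a toy encoding of Elbourne-style fine-grained situations as ordered triples; the representation below is our own illustration, not Elbourne’s formalism:

```python
# A toy rendering of fine-grained situations: a meeting situation is an
# ordered triple, so <x, y, meet> and <y, x, meet> are distinct. Yet
# within one such situation a bare description 'the bishop in s' is
# still ambiguous: an index-like symbol (#1 vs #2) must come from
# somewhere. Names and encoding are hypothetical.

def meeting_situations(bishops):
    """All fine-grained meeting situations over a set of bishops."""
    return [(x, y, "meet") for x in bishops for y in bishops if x != y]

def the_bishop(situation, k):
    """'he#k' read as 'the k-th bishop of s': the argument k plays the
    role of the stipulated symbols #1/#2."""
    return situation[k]

sits = meeting_situations(["b1", "b2"])
s = sits[0]                                  # ('b1', 'b2', 'meet')
print(the_bishop(s, 0), the_bishop(s, 1))    # b1 b2: distinct referents
```

The point of the sketch is that fine-grainedness alone only distinguishes situations from one another; within a given situation, picking out one bishop rather than the other still requires the extra parameter k – which is exactly where the dilemma about #1/#2 arises.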
For spoken languages, one might well want to choose Horn I of the dilemma – after all, he and him in (20)a do not ‘wear their antecedents on their sleeves’, hence the counterintuitive prediction (namely that the two pronouns could go back to the same antecedent while still yielding the intended meaning) is just this, counterintuitive. But things are different in ASL (and LSF): in this case, there is clear evidence that the only way to obtain the intended reading is to ensure that the pronouns index different antecedents. This is shown in a structurally related example in (23). 
Importantly, our sign language data do not refute all E-type analyses; in fact, it has been repeatedly argued in the literature that some E-type accounts are notational variants of dynamic accounts (Dekker 2004). But they suggest that there is overt motivation for the main ingredients of the dynamic analysis:
A sign language locus appears to play very much the role of a formal index, which is carried by a pronoun and by the antecedent it is anaphoric to.
Just as is the case with loci, the formal relation which is mediated by dynamic indices is not constrained by c-command.
The semantics of indices and quantifiers guarantees that two indices introduced by different quantifiers (as in (21)b) can ‘refer’ – under an assignment function – to different individuals; the fact that two pronouns carry the same or different variables will thus have a direct semantic reflex, as is desired (for instance for the data in (23), where indexing matters).
We take examples such as (9)–(11), as well as much of the foregoing discussion, to have established the plausibility of the Weak Hypothesis of Variable Visibility in (8)a: a given locus may be associated both with a pronoun and with its antecedent to mark their dependency; furthermore, deictic pronouns that refer to different objects may be associated with different loci. But this does not prove that loci share in all respects the behavior of logical variables, and thus these facts do not suffice to establish the Strong Hypothesis of Variable Visibility in (8)b.
This stronger hypothesis is attacked by Kuhn (2016), who argues that loci should be seen as features akin to person and gender features, rather than as variables. On a positive level, Kuhn argues that the disambiguating effect of loci in (9)–(11) can be explained if loci are features that pronouns inherit from their antecedents, just as is the case of gender features in spoken languages (and it is uncontroversial that these are not variables). On a negative level, Kuhn argues that treating loci as variables predicts that they should obey two constraints that are in fact refuted by his ASL data.
First, a variable is constrained to depend on the structurally closest operator it is co-indexed with. Thus the boxed variable x1 in (24)a cannot be semantically dependent on the universal quantifier ∀x1 because of the intervening quantifier ∃x1 – by contrast with (24)b, where the intervening quantifier carries a different index. For the same reason, the boxed variable in both formulas cannot be free and refer (deictically, in linguistic parlance) to a fixed individual.
By the same token, the two occurrences of the variable x1 in (25) must have the same semantic value – in particular, if no quantifier binding x1 appears at the beginning of the formula, both occurrences will be free and will denote a fixed individual.
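Both predictions follow from standard textbook semantics for quantified formulas, which can be verified with a small evaluator; the implementation and the toy domain below are our own choices:

```python
# A small evaluator for quantified formulas over a toy domain,
# illustrating that a variable depends on the *closest* co-indexed
# binder (standard assignment-based semantics; names are ours).

DOMAIN = [1, 2, 3]

def var(i):
    return lambda g: g[i]

def forall(i, body):
    return lambda g: all(body(dict(g, **{i: d})) for d in DOMAIN)

def exists(i, body):
    return lambda g: any(body(dict(g, **{i: d})) for d in DOMAIN)

# forall x1 [ exists x1 [ x1 = 2 ] ], as in (24)a: the innermost x1 is
# captured by the closest binder (the existential), so the universal
# binds vacuously and the formula comes out true.
captured = forall("x1", exists("x1", lambda g: var("x1")(g) == 2))
print(captured({}))  # True

# With distinct indices, as in (24)b, the outer binder matters:
# forall x1 [ exists x2 [ x1 = 2 ] ] is false (it fails for x1 = 1).
distinct = forall("x1", exists("x2", lambda g: var("x1")(g) == 2))
print(distinct({}))  # False
```

The mechanism is simply that each quantifier overwrites the value of its index in the assignment before evaluating its body; shadowing by the closest binder, and sameness of value for free occurrences of one variable, both fall out of this.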
Kuhn (2016) argues that both predictions are incorrect: first, expected cases of variable capture fail to arise under the quantificational adverb only; second, multiple occurrences of the same locus may refer to different individuals.
(i) Variable capture
First, Kuhn shows that under only, the loci-as-variables view fails to generate some attested readings, as in the multiply ambiguous sentence in (26). (Like ours, Kuhn’s examples are assessed on a 7-point scale, with 7=best.)
Let us focus on the (available) ‘bound-free’ reading, on which the boxed possessive is read as bound by ONLY-ONE while the underlined possessive refers to Billy. On this reading, the sentence means: Jessica told me that only Billy is an individual x such that x told x’s mother Billy’s favorite color. Here the boxed possessive POSS-b is read as bound, so ONLY-ONE must somehow bind this variable, say by way of a Logical Form akin to (27). We assume that IX-b BILLY comes with a requirement that b denotes Billy, and that there is an empty copula preceding ONLY-ONE to yield a meaning such as: ‘Billy is the only person who…’.
But now comes the problem: if the boxed possessive POSS-b is bound by λb, the underlined pronoun POSS-b, which is lower in the structure, cannot get a deictic reading on which it denotes Billy. But this would be needed in order to derive the ‘bound-free’ reading which is available here.
On the view that loci may be interpreted, these data suggest that there are some environments in which they can be disregarded as well. Precisely this view is standard for so-called ‘phi-features’, i.e. person, gender, and number features, which are believed to be interpreted on free pronouns but to remain uninterpreted on bound variables under only, a point to which we return shortly; this similarity between loci and phi-features is what we call ‘Kuhn’s Generalization’.
(ii) Locus re-use
Second, Kuhn shows that in (28) a single locus is assigned to John and Mary, and another locus is assigned to Bill and Suzy. As a result, the boxed occurrences IX-a and IX-b refer to John and Mary respectively, while the underlined pronouns IX-a and IX-b refer to Mary and Suzy.
As Kuhn observes, this example is problematic for the variable-based view. The initial association of the proper name JOHN with variable a should force a to refer to John; but then how can a also refer in the same clause, and without any intervening binder, to Mary? By contrast, these data are unproblematic for the feature-based analysis of loci: just like two Noun Phrases may bear the same feminine gender features while denoting different individuals, so it is with loci-as-features. (Locus re-use is certainly limited by pragmatic or other constraints – a more standard strategy is to assign one locus per individual. Kuhn’s argument is really an existential proof that in some cases loci display a behavior which is incompatible with the view that they spell out variables.)
Loci as features (Kuhn 2016)
Kuhn solves these problems by treating loci as features which are not interpreted (so that neither the problem of variable capture nor the problem of variable re-use can arise in the first place), but are inherited by a mechanism of morpho-syntactic agreement; this allows him to provide a variable-free treatment of loci, which is developed in great detail in Kuhn (2016) (as Kuhn observes, the fact that loci are not variables does not show that there are no variables in the relevant Logical Forms, just that loci are not them; giving a variable-free treatment of these data is thus a possibility but certainly not a necessity).
To see the appeal of the treatment of loci as features, consider the behavior of the feminine features of her in (29) and of the first person features of my in (30).
In the simple examples in (29)a and (30)a, these features constrain the denotation of the possessive pronoun, which may only denote female individuals in (29)a, and only the speaker in (30)a. But something interesting happens in (29)b and (30)b: due to the semantics of only, the possessive pronoun can be interpreted as a bound variable which ranges over non-female individuals and over non-speakers. This has led several researchers (e.g. Heim 1991, 2008; Kratzer 2009; Schlenker 1999, 2003; Stechow 2004) to posit that in this case these features remain uninterpreted, something that some of these frameworks represent in the Logical Forms in (29)c-(30)c by including the features on variables, but in barred form to indicate that they can be ‘deleted under agreement’ with their binder.
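The truth-conditional effect of feature deletion under only can be illustrated with a toy model; the mini-lexicon and the simplified semantics of only below are hypothetical assumptions of ours:

```python
# Toy illustration of feature deletion under binding: an interpreted
# feminine feature restricts the bound variable's alternatives to
# females; a deleted (barred) feature does not. The mini-model and
# the simplified semantics of 'only' are our own assumptions.

GENDER = {"mary": "fem", "sue": "fem", "john": "masc"}
DID_HOMEWORK = {"mary", "john"}

def only(focus, prop, domain):
    """'Only <focus> P-ed', relative to a domain of alternatives."""
    return focus in domain and prop(focus) and \
        all(not prop(x) for x in domain if x != focus)

everyone = set(GENDER)
females = {x for x in everyone if GENDER[x] == "fem"}

# 'Only Mary did her homework':
# - feature deleted under binding: alternatives include John, so False
#   (John did his homework too).
# - feature interpreted: alternatives restricted to females, so True.
print(only("mary", lambda x: x in DID_HOMEWORK, everyone))  # False
print(only("mary", lambda x: x in DID_HOMEWORK, females))   # True
```

The two calls differ only in the domain of alternatives, which is the formal reflex of whether the feminine feature on the bound pronoun is interpreted or deleted.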
If features can be inherited without being interpreted in such cases, why couldn’t the same mechanism be extended to ASL loci so as to account for Kuhn’s cases of ‘variable capture’? If so, it could be that these are just features, which constrain antecedence relations but are not otherwise interpreted. Kuhn must accept the consequence that features need not be part of a closed inventory, since there is no natural upper bound on the number of loci that can appear in a sentence (though there are clear performance limitations). While this goes against standard assumptions in morpho-syntax, these could be revised, especially in view of the expressive means afforded by the signing space, where loci can easily be created ‘on the fly’ as geometric points.
Kuhn’s analysis has the further advantage of explaining why the same locus can be used to refer to distinct individuals in the case of ‘locus re-use’ in (28). For him, this case is no different from that of ambiguous uses of the pronoun she, which may refer to different individuals as long as they are female: features may but need not disambiguate reference.
Finally, Kuhn’s analysis would need to be extended to cover deictic loci. As mentioned, when individuals are present in the discourse situation, the signer normally points towards them to realize deixis. In that case, then, one would need to posit that the point in signing space that corresponds to the individual’s actual position plays the role of a feature; and here too, one needs to posit that there is potentially an unbounded number of possible features. 
This treatment of loci as features brings features rather close to variables, however. First, Kuhn needs to accept that features are not part of a closed inventory, as is usually assumed, but rather can be created ‘on the fly’ in sign language, and can be associated with antecedents in an arbitrary way (since for third person loci there is nothing ‘intrinsic’ about the antecedent that forces it to introduce a locus on the right or on the left, say) – just like indices according to variable-full treatments. If Kuhn’s device were applied to predicate logic, it would amount to giving a variable-free treatment of formulas such as (31)a–b, one in which x and y do not have an autonomous semantics (unlike variables), but still provide information about which argument position is quantified over by ∀ or ∃.
In addition, Kuhnian features also resemble variables in their ability to be associated to individuals in the extra-linguistic context when they are not dependent on a linguistic expression.
Finally, it is important to note that while Kuhn’s analysis goes against the Strong Version of Variable Visibility (=the view that loci can genuinely display the behavior of variables – (8)b), it is fully compatible with the Weak Version, which only posits that sign language has covert counterparts of the indices taken to disambiguate the English sentences in (5)–(7).
Loci as variables and as features (Schlenker 2016)
In view of Kuhn’s objections, can we preserve the view that loci are sometimes overt variables? A positive answer is sketched in Schlenker (2016), which suggests that loci may both display the behavior of variables and of features – they are thus ‘featural variables’. Specifically, when they are interpreted their semantics is given by an assignment function, just like that of standard indices. But they may be disregarded in precisely the environments in which person or gender features can be disregarded. Furthermore, in many environments loci constrain the value of covert variables. 
To make things concrete, Schlenker (2016) gives loci a presuppositional semantics modeled after that of second person features. But whereas the latter are context-dependent, as shown in (32)a, loci have an assignment-dependent semantics, as shown in (32)b.
Note that an effect of rule (32)a is that the pronoun youi only comes with a requirement that the index i denote an addressee of the context, not that it denote the one and only addressee. This is useful in sentences such as Youi and youk should stop talking to each other, where the two pronouns denote different individuals. We will see shortly that a similarly weak constraint is useful in the analysis of loci.
With the rule in (32)b, an overt locus can constrain the value of a covert variable, as illustrated in (33).
In such cases, then, loci are interpreted variables that constrain the value of other expressions. Crucially, Schlenker (2016) assumes that phi-features but also locus features can be deleted under binding, as was illustrated in (29)–(30) above.
How does this analysis address Kuhn’s problem of ‘variable capture’? Presumably Kuhn must assume that, by one mechanism or another, ONLY-ONE in (26) can inherit the locus feature of BILLY. The key is then to assume that variables can be bound by (λ-operators introduced by) BILLYb or by ONLY-ONEb, but that in any event the feature b which they inherit need not be interpreted. As an example, the ‘bound-free’ reading is represented in (34)b; barred loci are assumed to be deleted under binding.
What about Kuhn’s problem of ‘locus re-use’? In order to account for (28), all we need to posit is that a (and m) denotes the plurality John+Mary; and under this assumption, we don’t need feature deletion, as seen in (35).
The key is that in (32)a and (32)b alike we have a requirement that the denotation of the pronoun should be a (mereological) part of (rather than identical to) the denotation of a variable. This makes it possible to posit that in cases of locus re-use the denotation of the locus is less specific than that of the covert variable whose value it constrains.
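The part-of requirement can be sketched as follows; this is our own toy rendering of the ‘featural variable’ idea as we read Schlenker (2016), with pluralities encoded as frozensets:

```python
# A sketch of the 'featural variable' idea: a locus imposes the
# presupposition that the pronoun's covert variable denotes a
# (mereological) part of the locus's value, not that the two are
# identical. Encoding pluralities as frozensets is a toy assumption.

def plural(*atoms):
    return frozenset(atoms)

def pronoun(index, locus, g):
    """IX at a locus, with a covert index: presupposes that g[index]
    is part of g[locus]."""
    if not g[index] <= g[locus]:   # subset = mereological part-of
        raise ValueError("presupposition failure")
    return g[index]

# Locus re-use: locus a denotes the plurality John+Mary, so pronouns
# indexing a can pick out John or Mary separately (cf. (28) above).
g = {"a": plural("john", "mary"),
     "x": plural("john"), "y": plural("mary")}
print(pronoun("x", "a", g))  # picks out John
print(pronoun("y", "a", g))  # picks out Mary, via the same locus a
```

Because the locus only bounds the pronoun’s value from above, two pronouns indexing the same locus can still receive distinct denotations, which is what the locus re-use data require.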
In the end, the view that loci can be the overt manifestation of variables can be maintained, but it must be refined; in particular, one must accept Kuhn’s insight that loci may display the behavior of features – which doesn’t mean they don’t also display the behavior of variables.  (In addition, we will come back in Section 4.3 to uses of plural pronouns that can be analyzed within a loci-as-variables analysis, but not so easily within a pure loci-as-features framework.)
In this section, we turn to further cases – not related to loci – in which sign language makes overt some elements of Logical Forms that are usually covert in spoken language. The first case involves context shifting operators, which were argued in semantic research to be active but covert in spoken language (e.g. Schlenker 2003; Anand and Nevins 2004; Anand 2006). Following Quer (2005), we propose that context shift can be realized overtly in sign language, by way of an operation called ‘Role Shift’. We then move to the aspectual domain, and summarize results which suggest that some primitive categories in the representation of aspectual classes are made visible in sign language but are usually covert in spoken language (the ‘Event Visibility Hypothesis’ of Wilbur 2003).
Two strands of research on context-dependency have come together in recent years. In the semantics of spoken languages, considerable attention has been devoted to the phenomenon of context shift, as evidenced by the behavior of indexicals. While these were traditionally thought to depend rigidly on the context of the actual speech act (Kaplan 1989), it turned out that there are languages and constructions in which this is not so: some attitude operators appear to be able to ‘shift the context of evaluation’ of some or all indexicals (e.g. Schlenker 1999, 2003, 2011d; Anand and Nevins 2004; Anand 2006). In research on sign languages, there has been a long-standing interest in Role Shift, an overt operation (often marked by body shift and/or eyegaze shift) by which the signer signals that he adopts the perspective of another individual (e.g. Padden 1986; Lillo-Martin 1995; Sandler and Lillo-Martin 2006). Role Shift comes in two varieties: it may be used to report an individual’s speech or thought – henceforth ‘Attitude Role Shift’. Or it may be used to report in a particularly vivid way an individual’s actions (henceforth ‘Action Role Shift’; a more traditional term in sign language research is ‘Constructed Action’).
Quer (2005) connected these two strands of research by proposing that Attitude Role Shift is overt context shift, a position we will now develop within a broader typological perspective.
As summarized in Quer (to appear), Role Shift across sign languages is morpho-syntactically characterized by non-manual markers such as the following: (i) ‘temporary interruption of eye contact with the actual interlocutor and direction change of eye gaze towards the reported interlocutor’; (ii) ‘slight shift of the upper body in the direction of the locus associated with the author of the reported utterance’; (iii) ‘change in head position’; (iv) ‘facial expression associated to the reported agent.’
How should Role Shift be analyzed semantically? Quer (2005) and others argue that Attitude Role Shift is an overt instance of context shift because some or all indexicals that appear in its scope acquire a shifted interpretation. For such an argument to be cogent, however, an alternative analysis must be excluded, one according to which the role-shifted clause is simply quoted – for quoted clauses are arguably mentioned rather than used, which obviates the need to evaluate their content relative to a shifted context.  Quer’s argument is in two steps (2005, 2013). First, he shows that some indexicals in Attitude Role Shift in Catalan Sign Language (LSC) have a shifted interpretation, i.e. are intuitively evaluated with respect to the context of the reported speech act. Second, he shows that in some of these cases clausal quotation cannot account for the data because other indexicals can be evaluated with respect to the context of the actual speech act. This pattern is illustrated in (36), where the first person pronoun IX-1 is evaluated with respect to the reported context (and thus refers to Joan), while HERE is evaluated with respect to the actual context.
As emphasized by Quer (2013), it is also possible to understand HERE as being shifted; but the reading with a ‘mixing of perspectives’ found in (36) is crucial to argue that there is context shift rather than standard quotation. 
In order to account for his data, Quer (2005) makes use of a framework developed in Schlenker (2003), in which attitude operators could bind object-language context-variables, with the result that a given embedded clause could include both shifted and unshifted indexicals. In Schlenker (2003), the argument for this possibility of a ‘mixing of perspectives’ came from preliminary Amharic data, where two occurrences of a first person marker could be evaluated with respect to different contexts, as in (37).
Schlenker (2003) argued that the same ‘mixing of perspectives’ could be found in Russian. The argument was based on the view that the Russian present tense is an indexical that can be shifted under attitude reports, which gave rise to mixed cases in which tense was shifted but personal pronouns were not, as in (38) (here the ‘non-first person’ contribution of the third person pronoun is evaluated from the perspective of the speaker of the actual context; and an embedded first person pronoun would fail to get a shifted reading).
Schematically, Schlenker (2003) posited Logical Forms such as those in (39), where an attitude verb binds a context variable c, while a distinguished variable c* denoting the actual speech act remains available for all indexicals. As a result, when two indexicals indexical1 and indexical2 appear in the scope of an attitude verb, they may be evaluated with respect to different context variables, as is illustrated in (39).
While agreeing that some attitude verbs are context shifters, Anand and Nevins (2004) and Anand (2006) argued that Mixing of Perspectives is undesirable. Specifically, they showed that in Zazaki, an Indo-Iranian language of Turkey, if an indexical embedded under an attitude verb receives a shifted reading, so do all other indexicals that are found in the same clause – a constraint they labeled ‘Shift Together’:
For Anand and Nevins (2004) and Anand (2006), a covert context-shifting operator is optionally present under the verb say in Zazaki, but crucially it does not bind context variables, and just manipulates an implicit context parameter. When the operator is absent, the embedded clause behaves like an English clause in standard indirect discourse. When the context-shifting operator is present, it shifts the context of evaluation of all indexicals within its scope – hence the fact that we cannot ‘mix perspectives’ within the embedded clause. This is schematically represented in (41):
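The contrast between the two architectures can be sketched computationally. The following toy model is our own illustration (the data structures and function names are assumptions, not the formalism of the cited works): binding object-language context variables derives Mixing of Perspectives, while shifting a single context parameter enforces Shift Together.

```python
# Toy models of the two context-shift architectures discussed above.
# All names and data structures are illustrative assumptions, not the
# actual proposals of Schlenker (2003) or Anand & Nevins (2004, 2006).

ACTUAL = {"speaker": "actual-speaker", "place": "actual-place"}
REPORTED = {"speaker": "Joan", "place": "reported-place"}

# (a) Context-variable binding (schematically, Schlenker 2003): each
# indexical carries its own context variable, so perspectives can be
# mixed within a single embedded clause.
def eval_with_variables(indexicals, assignment):
    # indexicals: list of (feature, context_variable_name) pairs
    return [assignment[var][feat] for feat, var in indexicals]

mixed = eval_with_variables(
    [("speaker", "c"), ("place", "c*")],   # IX-1 shifted, HERE unshifted
    {"c": REPORTED, "c*": ACTUAL})
# Mixing of Perspectives is derivable: ["Joan", "actual-place"]

# (b) Operator-based shift (schematically, Anand & Nevins 2004): a
# single context parameter is shifted wholesale, so all indexicals in
# the operator's scope shift together.
def eval_with_operator(indexicals, context, shift=False):
    ctx = REPORTED if shift else context
    return [ctx[feat] for feat in indexicals]

shifted = eval_with_operator(["speaker", "place"], ACTUAL, shift=True)
# Shift Together is enforced: ["Joan", "reported-place"]
```

On architecture (b) there is simply no way to feed different contexts to different indexicals in the same clause, which is the formal content of Shift Together.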
While the initial debate was framed as a choice between two competing theories of context shift, an alternative possibility is that different context-shifting constructions pattern differently in this connection, as is illustrated by the distinction between our Russian and Zazaki data. The sign language data that have been explored thus far argue for this ecumenical view: some languages allow for Mixing of Perspectives, while others obey Shift Together. Arguing for Mixing of Perspectives, the data from Catalan Sign Language in (36) mirror the Russian data in that two indexicals that appear in the same clause may be evaluated with respect to different contexts. Similarly, German Sign Language allows for Mixing of Perspectives, with a shifted indexical co-existing with an unshifted one in the same clause (Herrmann and Steinbach 2012; Hübl and Steinbach 2012; Quer 2013). Arguing for Shift Together, Schlenker (to appear a, 2015) shows that American and French Sign Language replicate the Zazaki pattern: under Role Shift, all indexicals are obligatorily shifted. A case in point is displayed in (42), where the first person pronoun IX-1 and the adverb HERE are both signed under Role Shift, and both are obligatorily interpreted with a shifted meaning.
In sum, given the available data, it seems that the typology of context-shifting operations in sign language mirrors that found in spoken language: some languages/constructions obey Shift Together, whereas others allow for Mixing of Perspectives. The difference between the two modalities is, of course, that in sign language Role Shift is overtly realized.
If this analysis is on the right track, several important questions arise. First, are there spoken languages in which some overt operators also force context shift? To our knowledge, none has been described. Second, if one answers in the negative, what might be the source of this typological distinction between spoken and sign language? Two considerations might prove relevant. First, Role Shift is simultaneous with the expressions it affects, as indicated for instance by the line above the embedded clause in (42); instances of simultaneous marking exist in spoken language (e.g. at the intonational level) but are more restricted than in sign language. Second, in sign language various expressions can be used iconically, and one could take the body shift and eyegaze shift that characterize Role Shift to be an iconic imitation of somebody else’s speech; we come back to this possibility in Section 5.3.
While this basic picture of Role Shift as overt context shift is appealingly simple, it abstracts away from important complexities.
First, Role Shift doesn’t just occur in attitude reports (=‘Attitude Role Shift’), but it can also be used in action reports, especially to display in a particularly vivid fashion some parts of the action through iconic means (=‘Action Role Shift’). Attitude Role Shift can target an entire clause, as well as any indexicals within it (optionally or obligatorily depending on the language). By contrast, Action Role Shift is more constrained; depending on the author, in ASL it is believed to just target verbs (Davidson 2015), or possibly larger constituents, but if so only ones that contain no indexicals or first person agreement markers (Schlenker to appear, a). Be that as it may, any context-shifting analysis of Role Shift must be extended in non-trivial ways to account for Action Role Shift (for a proposal, see Schlenker to appear, a, to appear, b).
Second, the literature suggests that Catalan and German Sign Language Role Shift (LSC and DGS), which allows for Mixing of Perspectives (as in (36)), cannot be analyzed as simple instances of quotation. But the facts are far less clear in American and French Sign Language (ASL and LSF), precisely because in those languages all indexicals that have been tested are obligatorily shifted under Role Shift, which makes it possible to envisage a quotational analysis. In spoken languages, the standard strategy to disprove a quotational analysis of a clause under say is to establish a grammatical dependency between the embedded clause and the matrix clause – with the assumption that ‘grammatical dependencies do not cross quotation marks’ (presumably because quoted material is mentioned, not used). Thus quotation is possible in (43)b and (44)b but not in (43)a and (44)a because in the latter two cases a grammatical dependency exists between the embedded clause and the matrix clause, involving a moved interrogative expression (‘wh-extraction’) in (43)a and a dependency between a Negative Polarity Item (NPI) and its negative licenser in (44)a.
Now in the data reported in Schlenker (to appear a, to appear b), ASL Role Shift allows for wh-extraction out of role-shifted clauses, but so does another construction that is plausibly quotational (because it involves a sign for quotation at the beginning of a non-role-shifted clause). For this reason, the evidence that the role-shifted clause doesn’t involve quotation is weak – maybe quotation does allow for wh-extraction in our ASL data, for unknown reasons. Furthermore, another standard test of indirect discourse fails; it involves the licensing of a Negative Polarity Item, ANY, by a negative element found in the matrix clause. When the embedded clause is in standard indirect discourse, ‘any’ can be licensed by a matrix negation both in the English sentence in (45)a, and in an analogous sentence in ASL. When the clause is quoted, as in the English example in (45)b, ‘any’ cannot be licensed by a negation in the matrix clause. Crucially, an analogous sentence with Role Shift in ASL displays a pattern similar to (45)b, which suggests that Attitude Role Shift does have a quotational component.
In addition, in LSF wh-extraction out of role-shifted clauses fails, just as it fails out of a quoted sentence in the English data in (43)a; this too suggests that Attitude Role Shift has a quotational component. Thus in ASL and LSF, the argument that Role Shift involves context shift rather than quotation depends rather heavily on the existence of Action Role Shift, which couldn’t be analyzed in quotational terms (because it is used to report actions rather than thought- or speech-acts). By contrast, in Catalan and German Sign Language the argument against a quotational analysis is fairly strong due to the ability of role-shifted clauses to mix perspectives.
Finally, Schlenker (to appear a, b), following much of the literature, argues that Role Shift comes with a requirement that some elements be interpreted iconically (and suggests that the quotational effects discussed in the preceding paragraphs are a special case of iconicity). We come back to this point in Section 5.3.
Cases of Visibility are not limited to the domains of reference (as in Section 2) and context-dependency (as in Section 3.1). Wilbur (2003) argued that sign language makes visible the logical structure of verbs – and coined the term ‘Event Visibility’ to label her main hypothesis. To introduce it, a bit of background is needed. Semanticists traditionally classify event descriptions as telic if they apply to events that have a natural endpoint determined by that description, and they call them atelic otherwise. John spotted Mary and John built the house have such a natural endpoint – the point at which John spotted Mary and completed the house, respectively; John knew Mary and John danced lack such a natural endpoint and are thus atelic. As summarized in Rothstein (2004), "the standard test for telicity is the use of temporal modification: in α time modifies telic VPs and for α time modifies atelic VPs as in [(46)]":
Now Wilbur’s hypothesis is that the distinction between telic and atelic predicates is often realized overtly in ASL. In Wilbur and Malaia’s (2008) words, the observation was that
ASL lexical verbs could be analyzed as telic or atelic based on their form: telic verbs appeared to have a sharper ending movement to a stop, presumably reflecting the semantic end-state of the affected argument (… ). These end-states were observed to be overtly marked in ASL by several mechanisms: (1) change of handshape aperture (open/closed or closed/open); (2) change of handshape orientation; and (3) abrupt stop at a location in space or contact with a body part. (…) The observation that semantic verb classes are characterized by certain movement profiles was formulated as the Event Visibility Hypothesis (EVH) for sign languages: "In the predicate system, the semantics of the event structure is visible in the phonological form of the predicate sign" (Wilbur 2008: 229).
On a theoretical level, Wilbur (2008) posits that in ASL and other sign languages, telicity is overtly marked by the presence of an affix dubbed EndState, and which "means that an event has a final state". Its phonological form is "a rapid deceleration of the movement to a complete stop", which can come in several varieties, as illustrated in (47).
Remarkably, then, Wilbur’s findings suggest that sign language articulates overtly some grammatically relevant aspects of event decomposition. In Section 5.2, we will revisit Wilbur’s Event Visibility Hypothesis, asking whether it might not follow from a more general property of structural event iconicity.
In the cases we have discussed up to this point, sign language makes some aspects of the Logical Forms of sentences more transparent than they are in spoken language, for accidental or sometimes for essential reasons (notably, the fact that indices are overtly realized in sign language but not in spoken language). In this section, we turn to cases in which sign language has greater expressive power than spoken languages because it makes greater use of iconic resources. There are certainly iconic phenomena in spoken language, for instance in the sentence The talk was loooooong: the excessive duration of the vowel gives a vivid idea of the real or experienced duration of the talk (as one might expect, saying that the talk was shooooooort would yield a rather odd effect). But sign languages make far more systematic use of iconicity, presumably because their depictive resources are much greater than those of spoken languages. While one might initially seek to draw a neat separation between a ‘grammatical/logical’ and an ‘iconic’ component in sign language, we will see that the two are closely intertwined: iconic phenomena are found at the core of the logical engine of sign language. In particular, we will revisit in detail the case of sign language loci, and we will argue that in some cases they are simultaneously logical variables and schematic pictures of what they denote.
Before we plunge into the richness of iconic data, it will be useful to have a highly simplified example of a system that combines logic with iconicity. Let us compare two ways of defining an assignment function for a simple first-order logic in (48).
– In (48)a, two sorts of variables are used, x_i and y_i; y_i ranges over the entire domain D, whereas x_i is constrained to range over a designated subdomain D_x. It goes without saying that nothing substantive would change if we decided to call these variables a_i and b_i instead of x_i and y_i.
– In (48)b, the variables x_i are iconic variables, in the following (admittedly strange!) sense: they are constrained to only denote objects that resemble the shape of x – hence… cross-like objects. Here the particular symbol used for the variables matters, as the assignment function would be constrained in a very different way if the variables x_i were replaced with variables a_i, with the requirement that the latter denote objects that resemble the symbol a.
The practical use of a partly iconic assignment function as in (48)b is admittedly limited, in part because the iconic potential of letters of the alphabet isn’t very rich. But things are different in sign language, as we will now see.
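The toy system in (48)b can be made concrete with a short sketch. The following is purely illustrative (the resemblance predicate and the encoding of objects are our own assumptions): an assignment function is admissible only if every variable of the x-family denotes a cross-like object, while y-variables are unconstrained.

```python
# A toy version of the partly iconic assignment function in (48)b.
# The resemblance predicate and object encoding are illustrative
# assumptions; in (48)b, x-variables may only denote cross-like
# objects, while y-variables range over the whole domain.

def is_admissible(assignment, resembles):
    """An assignment respects the iconic constraint iff every variable
    of the x-family denotes an object resembling the symbol 'x'."""
    return all(resembles(value, "x")
               for var, value in assignment.items()
               if var.startswith("x"))

# A crude resemblance predicate over labeled toy objects.
def resembles(obj, shape):
    return obj["shape"] == shape

cross = {"shape": "x"}   # a cross-like object
disk  = {"shape": "o"}   # a round object

ok  = is_admissible({"x1": cross, "y1": disk}, resembles)   # admissible
bad = is_admissible({"x1": disk,  "y1": disk}, resembles)   # ruled out
```

The point of the sketch is only that the symbol chosen for a variable now constrains its possible values, which is exactly what a non-iconic semantics forbids.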
But first, what are iconic effects in sign language?  To see an intuitively clear example, consider the verb GROW in (49), which can be realized in a variety of ways, six of which were tested in (50); in the ‘slow movement’ row, we have included pictures of the beginning and endpoint of GROW.
As is seen in (50), the sign for GROW starts out with the two hands forming a sphere, with the closed fist of the right hand inside the hemisphere formed by the left hand; the two fists then move away from each other on a horizontal plane (simultaneously, the configuration of the right hand changes from closed to open position). The signer varied two main parameters in (50): the distance between the endpoints; and the speed with which they were reached (only the first parameter is depicted). All variants were entirely acceptable, but yielded different meanings, shown in (50). Intuitively, there was a mapping between the physical properties of the sign and the event denoted: the broader the endpoints, the larger the final size of the group; the more rapid the movement, the quicker the growth process.
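The mapping just described is monotone: increasing a physical parameter of the sign increases the corresponding parameter of the denoted event. A deliberately crude sketch of such a mapping (the thresholds, scales, and function name are invented for illustration and do not come from the cited study) might look as follows.

```python
# A toy monotone mapping for the iconic modulations of GROW in (50):
# the broader the endpoints of the sign, the larger the denoted final
# size; the faster the movement, the faster the denoted growth.
# Numeric scales and thresholds are invented for illustration.

def interpret_grow(endpoint_distance, movement_speed):
    """Map two physical parameters of the sign (each on a 0-1 scale)
    to a coarse description of the denoted growth event."""
    size = "a lot" if endpoint_distance > 0.5 else "a little"
    pace = "quickly" if movement_speed > 0.5 else "slowly"
    return f"grew {size}, {pace}"

narrow_slow = interpret_grow(0.2, 0.2)   # "grew a little, slowly"
wide_fast   = interpret_grow(0.8, 0.8)   # "grew a lot, quickly"
```

A realistic model would of course use gradient rather than binary outputs; the sketch only encodes the order-preservation that the informants' judgments reflect.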
Such effects are pervasive in sign language. Schlenker et al. (2013) asked (i) how they interact with the representation and interpretation of variables; and (ii) how this interaction should be modeled. Their main claim was that sign language loci are simultaneously variables and pictorial representations: their values are provided by assignment functions, but the interpretation function in general and the assignment function in particular are constrained to preserve some geometric properties of signs, and thus they have an iconic component. In effect, this proposal attempted to reconcile two camps in sign language research. The ‘formalist camp’ (e.g. Lillo-Martin and Klima 1990; Neidle et al. 2000; Sandler and Lillo-Martin 2006) emphasizes the importance of predictive formal models, but traditionally it had relatively little to say about iconic considerations. The ‘iconic camp’ (e.g. Cuxac 1999; Taub 2001; Liddell 2003) emphasizes the importance of iconic conditions, but does so within frameworks that are considered insufficiently explicit by the formalist side. Schlenker et al. (2013) claimed that some of the insights of the iconic camp are essential for a proper understanding of the semantics of sign language variables; but this understanding requires the kind of formal frameworks espoused by formalists – hence the necessity to incorporate into the latter an explicit iconic component.
We will argue below that sign language loci can play the role of iconic variables, i.e. of symbolic expressions that are both logical variables and schematic pictures of what they denote. We make our case on the basis of three phenomena: (i) plural loci, where relations of inclusion and complementation among loci are directly reflected in their denotations (Section 4.3); (ii) high and low loci, which can be used for individuals whose head is (actually or metaphorically) situated up or down (Section 4.4); and (iii) instances of ‘locative shift’, where a nominal locus can ‘move’ in the horizontal space to join locative loci that its denotation is associated with (Section 4.5; we briefly discussed locative shift in connection with temporal anaphora in Section 2.3 above).
We will make use of the first case study (=plural loci) to introduce an ‘iconic semantics’ as one in which some geometric properties of signs are preserved by the interpretation function. In this particular case, we will be able to be quite explicit about the resulting semantics. While we believe that other iconic uses of loci could in the future be handled in an equally explicit fashion, we will leave things at a more informal level in the rest of our discussion, partly for readability, and partly because the integration of logical and iconic notions is still in its infancy.
The simplest instance of an iconic constraint concerns plural ASL and LSF loci, which are usually realized as (semi-)circular areas.  These can be embedded within each other, and we hypothesize that this gives rise to cases of structural iconicity, whereby topological relations of inclusion and relative complementation in signing space are mapped into mereological analogues in the space of loci denotations.
Our initial focus is on the anaphoric possibilities made available in English by the sentence Most students came to class. Recent research has argued that such a sentence makes available two discourse referents for further anaphoric uptake: one corresponding to the maximal set of students, as illustrated in (51)b (‘maximal set anaphora’); and one for the entire set of students, as illustrated in (51)c (‘restrictor set anaphora’).
Crucially, however, no discourse referent is made available for the set of students that didn’t come to class (‘complement set anaphora’, as this is the complement of the maximal set within the restrictor set); this is what explains the deviance of (51)a. This anaphoric pattern, whereby they in (51)a is read as referring to the students that did not come, is at best limited when the initial quantifier is few, and nearly impossible with most. Nouwen (2003) argues that when available, complement set anaphora involves inferred discourse referents: no grammatical mechanism makes available a discourse referent denoting the complement set – here: the set of students who didn’t come.
On the basis of ASL and LSF data, Schlenker et al. (2013) made two main observations.
Observation I. When a default plural locus is used in ASL, data similar to (51) can be replicated – e.g. complement set anaphora with most is quite degraded. This is illustrated in (52), with average judgments (per trial) on a 7-point scale, with a total of 5 trials and 3 informants.
Observation II. When embedded loci are used, the effect is circumvented: one large locus (written as ab, but signed as a single circular locus) denotes the set of all students; a sub-locus (=a) denotes the set of students who came; and a complement locus (=b) thereby becomes available, denoting the set of students who didn’t come, as illustrated in (53) and (54).
Schlenker et al. (2013) account for Observation I and Observation II by assuming that Nouwen is right that in English, as well as ASL and LSF, the grammar fails to make available a discourse referent for the complement set, i.e. the set of students who didn’t come; but that the mapping between plural loci and mereological sums preserves relations of inclusion and complementation, which in (53)a makes available the locus b.
The main assumptions are that (a) the set of loci is closed with respect to relative complementation: if a is a sublocus of b, then (b-a) is a locus as well; and (b) assignment functions are constrained to respect inclusion and relative complementation: if a is a sublocus of b, the denotation of a is a subpart of the denotation of b, and (b-a) denotes the expected complement set. These conditions are stated more completely in (55):
Since it is unusual to take a symbol to be part of another symbol, it should be emphasized that the notation a ⊆ b is to be taken literally, with the locus (and thus symbol) a being a subpart of the locus b (this can for instance be further analyzed as: the set of points a in signing space is a subset of the set of points b in signing space). The condition a ⊂ b iff s(a) ⊂ s(b) should thus be read as: the locus a is a proper subpart of the locus b just in case the denotation of a is a proper subpart of the denotation of b.  If we wanted to state an analogous condition in a more standard system in which the variables are letters rather than loci, we could for instance require that the denotation s(v) of a variable called v should be a subpart of the denotation s(w) of a variable called w because graphically v can be viewed as a subpart of w. Because inclusion of one symbol in another is so uncommon with letters, this would of course be a very odd condition to have; but it is a much more natural condition when the variables are loci rather than letters.
Let us now see how the conditions on loci in (55) derive our sign language data. In (53)a, where embedded loci are used, we can make the following reasoning:
– Since a is a proper sublocus of a large locus ab, we can infer by (55)a(ii) that (ab-a) (i.e. b) is a locus as well;
– by (55)b(i), we can infer that s(a) ⊂ s(ab);
– and by (55)b(ii), we can infer that s(b)=s(ab)-s(a).
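The reasoning just given can be checked mechanically. In the sketch below (an illustration under our own encoding assumptions: loci are modeled as sets of points in signing space, and plural denotations as sets of individuals standing in for mereological sums), the conditions in (55) force the complement locus b to denote the set of students who didn't come.

```python
# A sketch of the iconic conditions on plural loci in (55). Loci are
# encoded as frozensets of points in signing space; plural denotations
# as frozensets of individuals (sets stand in for mereological sums).
# The encoding is our own illustration of the published conditions.

def respects_iconicity(s):
    """s maps loci to denotations. Check (55)b: if locus a is a proper
    sublocus of locus b, then (i) the denotation of a is a proper part
    of the denotation of b, and (ii) the complement locus b-a, if
    present, denotes the corresponding complement set."""
    loci = list(s)
    for a in loci:
        for b in loci:
            if a < b:                        # a is a proper sublocus of b
                if not s[a] < s[b]:          # (i) denotations nest too
                    return False
                comp = b - a                 # (ii) the complement locus...
                if comp in s and s[comp] != s[b] - s[a]:
                    return False             # ...denotes the complement set
    return True

ab = frozenset({1, 2, 3, 4})                 # large locus: all students
a  = frozenset({1, 2})                       # sublocus: those who came
b  = ab - a                                  # complement locus

students = frozenset({"j", "k", "l", "m"})
came     = frozenset({"j", "k"})

# The only admissible value for b is the students who didn't come.
s = {ab: students, a: came, b: students - came}
```

The grammar never introduces a discourse referent for the complement set; on this encoding its denotation falls out of closure under relative complementation plus the preservation conditions.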
In this way, complement set anaphora becomes available because ASL can rely on an iconic property which is inapplicable in English. But this does not mean that there is a proper grammatical (non-iconic) difference between these two languages: as we saw, with default loci the English data are replicated, which suggests that Nouwen’s assumption that the grammar does not make available a discourse referent for the complement set applies to ASL just as it does to English. Rather, it is because of iconic conditions on plural loci, not grammar in a narrow sense, that a difference does arise in the case of embedded loci.
One additional remark should be made in connection with our discussion of the debate between the analyses of loci as variables vs. as features (in Section 2.5). It is notable that the locus b in (53)a and (54) is not inherited by way of agreement, since it is not introduced by anything. From the present perspective, the existence of this locus is inferred by a closure condition on the set of loci, and its interpretation is inferred by an iconic rule. But the latter makes crucial reference to the fact that loci have denotations. It is not trivial to see how this result could be replicated in a variable-free analysis in which loci don’t have a denotation to begin with. Presumably, the complement set locus would have to be treated as being deictic (which is the one case in which the variable-free analysis has an analogue of variable denotations). This might force a view in which complement set loci are handled in a diagrammatic-like fashion, with co-speech gestures incorporated in signs – a point to which we return in Section 6.1.
In the preceding section, relations of inclusion and relative complementation among loci were shown to be preserved by the interpretation function. We now turn to cases in which the vertical position of loci is meaningful and argues for an iconic analysis as well.
While loci are usually established in a single horizontal plane, in some contexts they may be signed high or low.  Our point of departure lies in the inferences that are obtained with high and low loci in such cases. An ASL example without quantifiers, from Schlenker et al. (2013), is given in (56). In brief, high loci are used to refer to tall, important or powerful individuals, whereas low loci are used to refer to short individuals (similar data were described for LSF in Schlenker et al. 2013). Loci of normal height are often unmarked and thus do not trigger any relevant inference.
As can be seen, the relevant inferences are preserved under negation, which provides initial motivation for treating them as presuppositional in nature, a proposal that has been made about the semantic specifications of pronouns, such as gender, in spoken language (Cooper 1983).
Importantly, high and low loci can appear under binding, with results that are expected from the standpoint of a presuppositional analysis. From this perspective, (57)a is acceptable because the bound variable her_i ranges over female individuals; and (57)b is acceptable to the extent that one assumes that the relevant set of directors only comprises females.
Similar conditions on bound high and low loci apply in (58)–(59) (here too, similar examples were described for LSF):
As argued in Schlenker et al. (2013), it will not do to treat height specifications of loci as contributing information about an intrinsic property of their denotations, for instance in terms of being tall or short. This is because in some of their uses they provide information about the spatial position of the upper part of a person’s body. This is shown by the paradigm in (61), which is about people that are either standing on a branch, or hanging, upside down, from a branch. In these examples, a finger classifier was used to represent the relevant individuals, knuckles up for the ‘standing’ position, knuckles down for the ‘hanging’ position. The signer attempted to keep the middle of the initial classifier (representing a philosopher) at a constant height, as shown in (60). It turned out that the orientation of the denoted person – in standing or hanging position – had consequences for the acceptability of high and low loci: the same tall philosopher could be referred to with a high locus when he was in standing position, and with a low locus when he was in hanging position. 
More specifically, in this paradigm, the sentence is kept constant, except for two parameters: the classifiers in loci a and b may correspond to a person in standing or hanging position, as represented in (60); and the pronouns IX-a and IX-b index five different levels in each case, with Level 1 being the highest, and Level 5 being the lowest. While extreme positions are dispreferred, the heights that can be targeted are a bit higher in the ‘standing’ than in the ‘hanging’ condition, as shown by the partial ratings in (61) (see Schlenker et al. 2013 for full ratings). In essence, the interpretation function seems to be preserving a certain ordering: if a locus i is above a neutral locus n, the denotation of i must be above the denotation of n on some salient ordering; and when talking about people in physical situations, it would seem that the salient ordering in question is often given by the relative positions of their upper bodies.
A formal analysis was developed in Schlenker et al. (2013), based on the idea that height differences among loci should be proportional to the height differences among their denotations. The analysis took as its starting point the presuppositional theory of gender features developed in Cooper (1983), given in (62): a pronoun she_i evaluated under an assignment function s refers to s(i), unless the presupposition triggered by the feminine features of she – that its denotation is female – is not satisfied.
Schlenker et al. (2013) extend this presuppositional analysis to high and low loci, but with an iconic condition in the presuppositional part, boldfaced in (63).
As was the case in our analysis of plural loci in Section 4.3, loci have the semantics of variables, but their realization – specifically: their height in signing space – affects their meaning. In words, the condition in (63) considers a pronoun IX-i indexing a locus i, and compares its height to that of a neutral locus n. It says that the height difference between the denotations s(i) and s(n) should be proportional to the height difference between the loci i and n, with a multiplicative parameter αc>0; in particular, this condition imposes that orderings be preserved. Here it is the same notion of height which is applied to loci and to their denotations: while loci have the semantics of variables, their interpretation is affected by their real world properties qua geometric objects in signing space. 
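The condition just described in words can be sketched as follows (our schematic notation, not the exact statement of (63)); note that the same height function is applied both to loci, qua geometric objects in signing space, and, relative to the context c, to their denotations:

```latex
% Iconic presupposition on a pronoun IX-i indexing locus i,
% stated relative to a neutral locus n (schematic sketch).
\llbracket \text{IX-}i \rrbracket^{c,s} = s(i),
\quad \text{defined only if} \quad
\mathit{height}_c(s(i)) - \mathit{height}_c(s(n))
  = \alpha_c \cdot \big(\mathit{height}(i) - \mathit{height}(n)\big),
\quad \alpha_c > 0
```

Since α<sub>c</sub> is positive, the condition in particular forces orderings to be preserved: a locus signed above the neutral locus must denote an individual ranked above the neutral locus's denotation on the salient ordering.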
Several questions were addressed in subsequent, more detailed research.
(i) First, how seriously should one take the analogy between the context-dependency of she in (62) and of high loci in (63)? In both cases, the boxed parts indicate that femaleness or height must be evaluated with respect to the context of utterance, not with respect to the world of evaluation. By considering cases in which an attitude holder has mistaken beliefs about another person’s height or gender, Schlenker et al. (2013) show that English feminine features and ASL height specifications alike seem to go with the real rather than the perceived specifications when the pronoun is embedded under an attitude operator. 
(ii) Second, how rich is the iconic semantics that is required for the analysis of pronouns? In the semantics in (63), only the vertical heights of loci play a role. But Schlenker (2014) considers cases of rotation in which one needs to posit that loci are areas of space rather than points, and constitute schematic pictures of what they denote. In effect, one needs to borrow aspects of a ‘pictorial semantics’, recently investigated in a very different context by Greenberg (2013). Specifically, we will posit in some cases that there must be a geometric projection (satisfying requirements to be determined) between objects and the loci that denote them.
The point was made in Schlenker (2014) by ASL and LSF paradigms such as that in (64) (from ASL). Here CLa is a finger (person-denoting) classifier on the right, representing a tall astronaut; CLb is a finger (person-denoting) classifier on the left, representing a short astronaut.
Our goal was to show that (i) in ‘standing’ position, ‘tall person’ indexing could be higher than ‘short person’ indexing – as is expected given the discussion in Section 4.4.1; but in addition, that (ii) the indexed position could rotate in accordance with the position of the denoted person on the assumption that there was a geometric projection between the structured locus and the denoted situation. Accordingly, (i) (64) makes reference to a tall and to a short individual; (ii) they are rotated as shown in (65), which depicts the approximate target of upper part vs. lower part indexing in the various situations, with the finger classifiers rotated to represent the different positions of their denotations.
Let us concentrate for the moment on the boxed part of (64)a. In these examples, each of the two finger classifiers represented an individual, one taller than the other, with the knuckles representing the upper part of the body; in the case of the tall individual, the locus extended above the knuckles, with the result that the reflexive SELF-a_upper_part targeted a position above the knuckles in the ‘vertical position, heads up’ case; this is represented in the left-hand figure in (65). But as different cases of rotation were considered, the finger classifiers rotated accordingly, and the ‘upper part’ of the locus indexed by SELF-a_upper_part did as well, as represented in the right-hand figure in (65). 
(iii) Third, one could ask how integrated height specifications are into the grammatical system. We mentioned in (i) that their semantics in Schlenker et al. (2013) was modeled after that of gender features, albeit with an iconic twist. Schlenker (2014) cautiously suggests that height specifications resemble gender features in another respect: they can somehow be disregarded under ellipsis. An example is given in (66)a, where the elided VP has a bound reading, unlike its overt counterpart in (66)b. On the (standard) assumption that VP ellipsis is effected by copying part of the antecedent VP, this suggests that the feminine features of that antecedent can be ignored by ellipsis resolution, as represented with a barred pronoun in (66)b.
The unboxed part of (64)a was designed to test whether ASL ellipsis makes it possible to disregard height specifications as well. Here the antecedent VP includes a reflexive which indexes the upper part of a locus, which is adequate to refer to a giant but not to a short person. Despite this apparent mismatch, the elided sentence is acceptable – unlike the overt counterpart in (64)b, which includes a reflexive SELF referring to a short person but with high specifications. Thus in ASL height specifications can be ignored by the mechanism that computes ellipsis resolution, just as is the case for gender (and other) features.
The interpretation of these results requires some care, however. The main question is whether the ability of an element to be disregarded under ellipsis is only true of featural elements, or targets a broader class. Schlenker (2014) didn’t give a final answer, and we will see below that co-speech gestures in spoken language, which certainly don’t count as ‘features’, can almost certainly be disregarded in this way.
In the preceding sections, we have summarized evidence for the following two claims:
(i) ASL and LSF ‘high’ loci have an iconic semantics [=loci may stand in geometric relations that reflect the geometric arrangement of their denotations];
(ii) ‘high’ loci display a phi-feature-like behavior [=height specifications can be disregarded – possibly under agreement – by ellipsis and focus-sensitive constructions].
But two important questions remain.
(iii) Do these high loci display a (quasi-) gradient behavior, in the following sense: when two loci are interpreted iconically, can a third one be ‘sandwiched’ between them, with the expected interpretation? (We write ‘quasi-gradient’ rather than ‘gradient’ behavior because fully gradient behavior would be impossible to test, as it would require infinitely many examples; in addition, obvious limitations of perception would force the system to break down when distinctions become too fine-grained.)
(iv) Is this iconic behavior due to the loci themselves, or possibly to the classifiers that they are associated with in several of our examples (e.g. (64))? The question is of some importance because, as we discuss in Section 5.1, some classifiers were independently argued to display an iconic behavior.
Let us consider the example in (67). The pronouns index 4 different heights that reflect the height of [the heads of] their denotations, as is illustrated in (68). This begins to establish Point (iii), combined with Point (i). (67)c shows that these height specifications are disregarded in the course of ellipsis resolution, for otherwise the elided occurrences of SELF taking IX-b and IX-d as antecedents would have the ‘wrong’ feature specifications – which in turn should yield deviance, as in the control sentence in (67)b, which contrasts with (67)a; this establishes Point (ii). In addition, the absence of classifiers establishes Point (iv).
The first sentence of (69) is analogous to (67)a. The third sentence establishes that the gymnasts performed a vertical rotation, hence that additional heights can be targeted (as is illustrated in (70)), but now below the position of the bar – which reinforces Points (i) and (iii); Points (ii) and (iv) are preserved as in (67).
Arguably, then, height specifications of loci display grammatical properties of phi-features and a highly iconic/gradient behavior.
In the cases we have discussed so far, the position of loci relative to the vertical plane – high or low – introduced semantic and iconic conditions on their denotations.  We further showed that singular loci are in some cases structured areas rather than mere points, with a head and a foot; and that due to their picture-like qualities they may be rotated in signing space depending on the positions of their denotations. Now it might well be thought that, by contrast, the position of loci on the horizontal plane is purely logical/grammatical, with two loci corresponding to distinct variables just in case they are at different points of the horizontal plane. But as we saw in (15) and later examples, this is not so: when an individual has been associated with a spatial location in previous discourse, one can refer to him by pointing towards the locus associated with that location (though pointing to the original locus of the individual is often possible too); we further suggested that this property extends more broadly to situations, i.e. to loci that refer to times or possible worlds/situations. It remains to understand how this phenomenon comes about, and whether it is iconic in nature.
Importantly, ‘locative shift’ seems to affect all sorts of expressions that involve loci, not just ‘pointing signs’ – which might seem somehow special due to their iconic uses, and also the fact that they have non-individual-denoting uses, as we saw above. This is shown by the example in (71), which involves a possessive pronoun POSS that either indexes the locus b associated with JOHN or the locus c associated with [AMERICAN CITY]c. It is clear that the second option does not yield a meaning on which the apartment somehow belongs to the American city in question.
On the other hand, it is important to observe that different readings are obtained in (71)a and (71)b. In (71)a, no inference is derived about the location of John’s apartment – given the context, it could be in France or in the US. By contrast, (71)b makes reference to John’s American apartment.
At this point, a tentative way to capture the data is to posit a semantics that is more fine-grained than is usually thought, with loci referring either to an individual, to a situation, or to a ‘situation slice’ of an individual, which can be thought of as an individual-at-a-situation. Using pairs of the form <individual, situation> to capture the latter case, we can state a preliminary hypothesis in (72).
The important point is that a locus referring to a situation s can be recycled as a locus referring to an individual-at-situation-s. But recycling this locus will yield potentially different readings than referring to the original individual-denoting locus.
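A minimal sketch of the idea, using the pair notation introduced above (this is our schematic rendering, not the statement of (72)):

```latex
% A locus l may denote an individual, a situation, or an
% individual-at-a-situation (a 'situation slice'), sketched as:
s(l) \;\in\; D_e \;\cup\; D_s \;\cup\; (D_e \times D_s)
% Locative shift: a situation-denoting locus c is recycled so that
% pointing to c picks out \langle x, s(c) \rangle for a salient
% individual x, yielding readings relativized to the situation s(c)
% (e.g. reference to John's American apartment, as in (71)b).
```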
Without giving a full account of these cases, we would now like to ask whether locative shift can in some cases involve iconic uses. This appears to be the case. In (73)a, two loci a and b are introduced for JOHN and PETER respectively, but in addition BUILDING LEANING introduces an area of signing space representing the tower of Pisa, leaning rightwards from the signer’s perspective. This is represented in (74) and encoded in (73) with the symbol //.
In both (73)a and (73)b, JOHN comes with a pointing sign IX-top towards a location near the top of the tower. This could be a locative-shifted version of locus a, or a spatial locus meaning there. But what is striking is that the possessive pronoun POSS indexes the same locus. The crucial observation is that when the context allows it – by making clear that John was in the upper part of the tower – the possessive POSS can index the position top, which corresponds to the upper part of the tower rather than to the locus a originally introduced by JOHN. In addition, it can be ascertained that the possessive is used as a bound variable: the elided VP following NOT is naturally understood with a bound reading. (As in some of our earlier examples, ellipsis resolution can disregard specifications of the antecedent, since the second sentence of (73) is understood to involve Peter showing his hand in the middle/lower part of the building, rather than towards the top.)
Interestingly, there also appear to be some (preliminary) cases in which a reflexive pronoun and its antecedent do not share the same locative specifications. This is the case in (75). Here the reflexive SELF indexes loci that appear in (74)a: locus a is the locus associated with JOHN, while top is a location corresponding to the top of the sign for LEANING-//, representing a building in leaning position. Importantly, the second clause in (75)a and (75)c is interpreted on a bound variable reading; the translations reflect this.
The striking observation is that IX-a SEE SELF-top is understood to mean that John (originally associated with locus a) saw himself being at the top of the tower; this shows that the reflexive pronoun makes an iconic contribution. It can be further ascertained with a clause involving ellipsis, namely IX-b NOT referring to Peter, that the boldfaced VP is indeed interpreted on a bound variable reading. In other words, SELF-top simultaneously displays the behavior of a bound variable and of an iconic element.
In conclusion, the various pronouns we have just discussed display a grammatical behavior as bound variables while also contributing iconic information about the position of their denotations, possibly analyzed as situation stages of individuals (a direction explored in Schlenker to appear, c). In this domain, sign language has a more expressive semantics than spoken language, which is devoid of rich iconic mechanisms of pronominal reference.
As mentioned at the outset, iconic conditions are pervasive in sign language, and are definitely not limited to the semantics of variable-like constructions. With no claim to exhaustivity, we discuss below three cases that have been important in the recent literature and are also of foundational interest.
We already saw in (64)–(65) that person classifiers can in part function as schematic pictures of their denotations. They belong to a much broader class of ‘classifier constructions’, which were shown in Emmorey and Herzig (2003) to give rise to gradient iconicity effects in native signers of ASL. To assess this effect, they gradually modified the position of a classifier representing a small object (a sticker) relative to a handshape representing a flat object (a bar). The small object classifier is called the ‘F-handshape’ because it looks like the F of the manual alphabet; the flat object classifier is called the ‘B-handshape’ because it looks like the B of the manual alphabet. Emmorey and Herzig describe their experiment as follows:
Participants were asked to place a dot (a 1/2 inch round sticker) in relation to a bar (a line) drawn in the center of a square frame. Where the sticker should be placed was indicated by a native signer (on videotape), who produced a classifier construction in which the F-handshape (specifying a small round object—the dot sticker) was positioned in signing space either above or below a horizontal B-handshape (specifying a flat, surface-prominent object—the bar).
They produced ASL stimuli with 30 different positions for the F-handshape relative to the B-handshape, 6 of which are represented in (76)a. The average positions selected by the deaf signers appear in (76)b; positions 1, 8, 15, 16, 23 and 30 correspond to the stimuli in (76)a.
As can be seen, deaf signing participants placed the dot in a position that corresponded to the position of the F-handshape classifier relative to the B-handshape, with effects that were both iconic and gradient (to the extent that gradience can be assessed on the basis of 30 examples).
While the formal analysis of such constructions is still under study, it is clear that one will need rules that make reference to iconic conditions. This can be achieved by directly incorporating iconic conditions in semantic rules, as we did for high and low loci above, and as was sketched for the case of classifiers in Schlenker (2011a). Alternatively, one could take these expressions to have a demonstrative component that makes reference to the gesture performed while realizing the sign itself, a proposal made in Zucchi (2011) and in Davidson (2015). An example from Zucchi (2011) is given in (77)a and paraphrased in (77)b.
Here CL-vehicle-DRIVE-BY is a classifier predicate used to describe the movement of vehicles. The movement of the classifier predicate in signing space tracks in a gradient fashion the movement performed by the relevant car in real space. As informally shown in (77)b, Zucchi takes the classifier predicate to have a normal meaning (specifying that a vehicle moved) as well as a demonstrative component, which is self-referential; in effect, the classifier predicate ends up meaning something like: ‘moved as demonstrated by this very sign’. We come back in Section 6.1 to the possibility that sign language semantics should quite generally make reference to a gestural component.
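Zucchi's demonstrative analysis, as paraphrased in (77)b, can be sketched as follows (our schematic notation; the predicate similar and the demonstration variable d are our labels, not Zucchi's exact formulation):

```latex
% Classifier predicate with a self-referential demonstrative component.
\llbracket \text{CL-vehicle-DRIVE-BY} \rrbracket =
  \lambda e.\; \mathit{move}(\mathit{vehicle}, e)
  \;\wedge\; \mathit{similar}(e, d)
% where d is the demonstration provided by the very movement of the
% classifier in signing space: 'moved as demonstrated by this very sign'.
```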
In our discussion of loci, we saw that these lead a dual life: on the one hand, they have – in some cases at least – the behavior of logical variables; on the other hand, they can also function as schematic pictures of what they denote. As it turns out, we believe that a similar conclusion holds of Wilbur’s cases of ‘Event Visibility’ discussed in Section 3.2: sign language phonology makes it possible to make visible key parts of the representation of events, but also to arrange them in iconic ways (see Kuhn 2015b and Kuhn and Aristodemo 2017 for a detailed discussion involving pluractional verbs, i.e. verbs that make reference to plurality of events). A case in point can be seen in (78), which includes 5 different realizations of the sign for UNDERSTAND, three stages of which appear in (79) (all the signs involve lowered eyebrows, represented as a ~ above the sign).
As illustrated in (79), UNDERSTAND is realized by the progressive closing of a tripod formed by the thumb, index and middle finger of the dominant hand (right hand for a right-handed signer). But different meanings are obtained depending on how the closure is effected. As is seen in (78)d-e, the examples become odd when two changes of speed occur within the realization of the sign. But with a single change of speed, as in (78)b–c, the result is acceptable and semantically interpretable: if the sign starts slow and ends fast, one infers that the corresponding process had a similar time course; and conversely, when the sign starts fast and ends slow. (In this example the facial expressions remain constant, with lowered eyebrows throughout the realization of the sign; more natural examples are obtained when the facial expressions are also modulated, and in such cases more changes of speed can be produced and interpreted – but in these more complex examples it is difficult to tease apart the relative role of the manual vs. non-manual modulation in the semantic effects obtained).
The same facts hold of atelic verbs. Thus in (80) the atelic verb REFLECT, which in accordance with Wilbur’s generalization lacks a sharp ending, can be modulated so as to map the course of the event. While the modulations with 2 changes of speed in (80)d-e were deemed by our consultant to be artistic forms that one could use only in theater, cases with a single change of speed in (80)a–c were natural and interpreted iconically.
In this case, then, it seems that sign language doesn’t just make visible discrete elements for which there is indirect evidence in spoken language (notably, the endpoint of the event type denoted by the verb). It also makes use of them in gradient and iconic ways that would be hard to replicate in spoken language (not for lack of gradience, but simply because the vocal stream has fewer iconic resources than the signed modality). In the long term, two theoretical possibilities should be considered. One is that event iconicity should work alongside Wilbur’s Event Visibility Hypothesis, which should thus retain a special status, with discrete but covert distinctions of spoken language made visible in sign language. An alternative is that Wilbur’s data are a special case of event iconicity; on this view, telic and atelic verbs alike have the ability to map in a gradient fashion the development of an event, and it is for this more general reason that telic verbs mark endstates in a designated fashion. (We come back to this point in Section 6.1, where we note that non-signers have some knowledge of Wilbur’s generalization and can thus guess fairly accurately the classification of sign language verbs as telic vs. atelic.)
We now turn once again to the issue of Role Shift. In Section 3.1, we suggested that Role Shift can be analyzed as a visible instance of context shift. But we will now see that this analysis is incomplete, and must be supplemented with a principle that makes reference to iconicity. In brief, we suggest that Role Shift is a visible instance of context shift, but one which comes with a requirement that the expressions under Role Shift should be interpreted maximally iconically. The argument is in two steps. First, we suggest that Role Shift under attitude reports (=Attitude Role Shift) has a strong quotational component, at least in ASL and LSF. Second, we suggest that Role Shift in action reports (=Action Role Shift) has an iconic component.
As was mentioned in Section 3.1.3, Schlenker (to appear, a; to appear b) notes that even in his ASL data, which allow for wh-extraction out of role-shifted clauses under attitude verbs, some tests suggest that these have a quotational component. First, an ASL version of the test discussed in (44), with ANY (which in some environments has a clear NPI behavior) suggests that it cannot appear under Role Shift without being quoted. Second, another test of indirect discourse based on licensing of ellipsis from outside the attitude report similarly fails. For simplicity, we will just lay out its logic on an English example:
In (81)a, the elided VP in the second sentence is licensed by the first sentence, and one definitely does not infer that John’s words involved an elided VP. The facts are different in (81)b, which clearly attributes to John the use of the very words I don’t – hence a possible deviance if the context does not explain why John might have used a construction with ellipsis. When some information to this effect is added, the ellipsis within quotation marks becomes of course acceptable, as seen in (82)b.
While standard indirect discourse in ASL patterns like English with respect to the licensing of ellipsis, the facts are different under Role Shift; there an elided VP is interpreted exactly as if it were quoted.
Finally, some non-linguistic properties of role-shifted clauses must usually be attributed to the agent rather than to the signer, and in this respect Role Shift differs from standard indirect discourse and resembles quotation. Schlenker (to appear, b) established this generalization (which is unsurprising from the perspective of the traditional sign language literature) by asking his consultants to sign sentences in which the signer displays a happy face, something encoded with a happy face :-), followed by ----------- over the expressions that were accompanied by this facial expression. Importantly, this happy face is not a grammaticalized non-manual expression. The consultant was asked to start the happy face at the beginning of the report, to maximize the chance that it would be seen to reflect the signer’s (rather than the agent’s) happiness. In standard indirect discourse, this is indeed what was found, as shown in (83). In Attitude Role Shift, by contrast, the judgments in (84) suggest that it is more difficult to attribute the happy face to the signer only, despite the fact that it starts outside the role-shifted clause, and that the context is heavily biased to suggest that the agent of the reported attitude was anything but happy.
At this point, one may conclude that despite the possibilities of wh-extraction out of Role Shift discussed in connection to (42), role-shifted clauses under attitude verbs just involve quotation (possibly mixed quotation, hence the possibility of wh-extraction; see Maier (2014a, 2014b)).  But as mentioned above, Schlenker (to appear, a, to appear, b) argues (i) that Role Shift can also be applied, with specific grammatical constraints, to reports of actions rather than of attitudes (‘Action Role Shift’ vs. ‘Attitude Role Shift’); (ii) that in such cases a quotational analysis would be inadequate, as the situations reported need not have involved thought or speech; and (iii) that nonetheless, Role Shift applied to action reports comes with a requirement that whatever can be interpreted iconically should be so interpreted. The suggestion is thus that Action Role Shift provides an argument against a quotational analysis, and provides independent evidence for positing a rule of context shift, combined with a mechanism of ‘iconicity maximization’ under Role Shift.
To get a sense for the main facts, consider first (85). It does not involve Role Shift, and it is possible to understand the signer’s happy face as reflecting the signer’s rather than the agent’s attitude. But things are different in (86): in this action report under Role Shift, the signer’s happy face is naturally taken to reflect the agent’s attitude. More generally, under Action Role Shift, a happy face on the signer’s part is normally attributed to the agent (see Schlenker to appear, b for refinements, and for LSF data).
Schlenker (to appear, b) took these and related facts to suggest that iconic material is preferably understood to reflect properties of the reported action under Role Shift.
The analysis proposed in Schlenker (to appear, b) posits that Attitude and Action Role Shift alike should be analyzed as context shift, but with an important addition: expressions that appear under Role Shift should be interpreted maximally iconically, i.e. so as to maximize the possibilities of projection between the signs used and the situations they make reference to. Following a long tradition (e.g. Clark and Gerrig 1990), Schlenker (to appear, b) argues that quotation can be seen as a special and particularly stringent case of iconicity, and that the condition of Maximal Iconicity can thus capture properties of both Attitude and Action Role Shift. Putting together the non-iconic (context-shifting) part of the analysis developed in Section 3.1 and these iconic conditions, the theory has the following structure:
Role Shift has a broadly uniform semantics across attitude and action cases: it shifts the context of evaluation of the role-shifted clause.
In ASL and LSF, role-shifted indexicals are obligatorily shifted. Things are different in Catalan and German Sign Language, where mixing of perspectives is possible.
In ASL and LSF, all indexicals can appear under Attitude Role Shift, but only some indexicals can appear under Action Role Shift (this was captured formally by assuming that Action Role Shift gives rise to different kinds of shifted contexts than Attitude Role Shift).
Under Attitude and Action Role Shift alike, signs are interpreted maximally iconically in the scope of the context shift operator.
– In attitude reports, every sign can be interpreted as being similar to an element of the situation which is reported – namely by way of quotation.
– In action reports, this is not so (as these need not involve speech or thought acts), but all potentially iconic features of signs are interpreted iconically and thus taken to represent features of the reported situations.
In both cases, expressions that appear under Role Shift are both used (as these are instances of indirect discourse) and mentioned because they have a strong iconic (and sometimes quotational) component.
If this analysis is on the right track, one key question is why context shift in sign language should come with a condition of iconicity maximization. One possibility is that such a condition exists in spoken language as well but hasn’t been described yet (however Anand 2006 argues that in Zazaki context shift need not be quotational). Another possibility is that iconicity maximization under context shift is a specific property of sign language. Be that as it may, it seems clear that if Role Shift is to be analyzed as context shift, special provisions must be made for iconic effects.
While we have been focusing on cases of iconicity that interact with the ‘logical engine’ of language, there are many further cases of sign language iconicity that are worthy of interest. To cite but one (reviewed in Emmorey 2014), Meir (2010) showed that in American and in Israeli Sign Language there are constraints on metaphorical extensions of iconic signs. This can be illustrated with the example of EAT in (87), which has an iconic structure, with the handshape corresponding to the action of holding food, the signer’s mouth standing for the eater’s mouth, and the inward movement of the hand corresponding to the action of putting food into one’s mouth.
Now Meir (2010) notes that this verb does not allow for the same metaphorical extensions as the English verb eat, in particular in sentences such as The acid ate the metal. Meir proposes an explanation based on the requirement that the iconic mapping of the sign, encoded in the first column of (87)b, should match the desired metaphorical mapping. In order to get the desired meaning for The acid ate the metal, we need something that represents the fact that the object of eat is consumed. But this is precisely what is not represented in the iconic mapping given by EAT in Israeli Sign Language. Meir suggests that this is what blocks the metaphorical extension, in accordance with the mapping principle in (88).
Meir’s constraint also serves to show indirectly that the iconicity of verbs such as EAT is not just an inheritance of their (possibly gestural) history, but is still active in the signers’ minds today. On a more general level, we saw that iconic conditions interact in intricate ways with the ‘logical engine’ of language; in this case, we see that they constrain pragmatic/non-literal interpretation as well.
If the foregoing is on the right track, it should be clear that sign language has, in some areas, strictly richer expressive resources than spoken language does, in particular due to its ability to incorporate iconic conditions at its logical core. Furthermore, in several areas (height specifications of loci, speed modulations of verbs, and classifiers), these iconic properties appear to be gradient in nature. There are two conclusions one might draw from these observations.
One could conclude that spoken language is, in some areas, a simplified version of what sign language can offer. Specifically, as a first approximation one could view spoken language semantics as a semantics for sign language from which most iconic elements have been removed, and indices have been made covert. From this perspective, if one wishes to understand the full scope of Universal Semantics, one might be better advised to start from sign language than from spoken language: the latter could be understood from the former once the iconic component is disregarded, but the opposite path might prove difficult. This situation is not unlike that found within spoken language syntax with respect to case theory. While syntacticians have developed theories of abstract case for all languages, including English, the effects of case are much easier to see in languages with rich declensions such as Latin, Russian or Hungarian; an analysis of case that disregarded the latter would probably miss essential facts about case theory.
An alternative possibility is that our comparison between sign and spoken language was flawed in the first place; in Goldin-Meadow and Brentari’s words (2017), “sign should not be compared to speech – it should be compared to speech-plus-gesture”. What might be special about sign language is that signs and co-speech gestures are realized in the same modality. By contrast, they are realized in different modalities in spoken language, which has led many researchers to concentrate solely on the spoken component. This leaves open the possibility that when co-speech gestures are reintegrated into the study of spoken language, sign and spoken languages end up displaying roughly the same expressive possibilities.
Let us give a few illustrations of how the debate could be developed.
We noted in Section 4.3 that plural pronouns in ASL and LSF can give rise to instances of ‘structural iconicity’ when a plural locus is embedded within another plural locus. One could view this as a case in which sign language has a mechanism that is entirely missing in spoken language. But it is the overt realization of sign language loci that makes it possible to use them simultaneously as diagrams. From this perspective, the right point of comparison for our examples with ‘complement set anaphora’ in Section 4.3 is spoken language examples accompanied by explicit diagrams with the same shape as the embedded loci in (54), and to which one can point as one utters the relevant pronouns. For this reason, a comparison between spoken and sign language should start with situations in which speakers can use gestures to define diagrams. This comparison has not yet been carried out.
As summarized in Section 4.4, it was argued in Schlenker et al. (2013) and Schlenker (2014) that high loci have an iconic semantics, and in addition that their height specifications behave like ‘features’ in some environments, notably under ellipsis: just like gender features, height specifications can apparently be disregarded by the mechanism responsible for ellipsis resolution. We fell short of arguing that this shows that height specifications are features, for two reasons. First, Schlenker (2014) shows that it is hard to find cases in which height specifications really behave differently from other elements that contribute presuppositions on the value of a referring expression (some paradigms displaying this difference were found in ASL but not in LSF). Second, when co-speech gestures are taken into account in spoken language, it appears that they too can be disregarded under ellipsis (Schlenker 2015b, to appear, d). Thus in (89)a the co-speech gesture (for a tall person) that accompanies the Verb Phrase can be disregarded under ellipsis; by contrast, in the control in (89)b, deviance is obtained if the gesture that accompanies the antecedent Verb Phrase is explicitly repeated in the second clause (whereas a gesture for a short person is acceptable).
The same argument can be made on the basis of the ‘hanging’ co-speech gesture in (90), which can be disregarded under ellipsis.
These observations suggest that one could account for height specifications of loci in at least two ways. One could analyze them by analogy with features in spoken language, and argue that they share their behavior under ellipsis. Alternatively, one could seek to analyze height specifications as co-speech gestures that happen to be merged with signs, and to explain their behavior under ellipsis by the fact that other co-speech gestures can somehow be transparent to ellipsis resolution.
We suggested above that Role Shift is ‘visible context shift’, with an important addition: Attitude and Action Role Shift alike have an iconic component (‘Maximize Iconicity!’) which has not been described for spoken language context shift. But one could challenge this analysis by taking Role Shift to indicate that the role-shifted signs have a demonstrative component, and thus are in effect both signs and co-speech gestures. This is the theoretical direction explored by Davidson (2015). Following Lillo-Martin (1995, 2012), Davidson takes Role Shift to behave in some respects like the expression ‘be like’ in English, which has both quotational and co-speech uses, as illustrated in (91).
But there is an important difference: for Davidson, the Role Shift morpheme, "in contrast to English “like”, is produced simultaneously with other lexical material, consistent with a tendency toward simultaneous verbal morphology in sign languages versus sequential morphology in spoken languages". More specifically, Davidson suggests that in Role Shift the signer’s body acts as a classifier and is thus used to demonstrate another person’s signs, gestures or actions. She draws inspiration from Zucchi’s analysis of classifier constructions, briefly discussed in Section 5.1 above. Thus for Davidson, no context shift is involved; rather, the signer’s body is used to represent another individual in the same way as the classifiers discussed in Section 5.1 represent an object. A potential advantage of her analysis is that it immediately explains the iconic effects found in Role Shift, since by definition Role Shift is used to signal the presence of a demonstration. We refer the reader to Schlenker (to appear, b) for a comparison between the context-shifting and gestural analyses.
Strickland et al. (2015) revisit Wilbur’s Hypothesis of Event Visibility, discussed in Sections 3.2 and 5.2 above. They show that non-signers who have never been exposed to sign language still ‘know’ Wilbur’s generalization about the overt marking of telic endpoints in sign language: when asked to choose between a telic and an atelic meaning (e.g. ‘decide’ vs. ‘think’) for a sign language verb they have never seen, they tend, correctly, to choose the telic meaning when endpoints are marked. Furthermore, this result holds even when neither meaning offered to them is the actual meaning of the sign, which rules out the possibility that subjects use other iconic properties to zero in on the correct meaning.
These results can be interpreted in at least two ways. One is that Wilbur’s principle is so deeply entrenched in Universal Grammar that it is accessed even by non-signers. An alternative possibility is that these subjects use general and abstract iconic principles to determine when a sign/gesture can or cannot represent a telic event. This leaves open the possibility that Event Visibility derives from a general property of cognition rather than from specific properties of sign language – and possibly that similar effects could be found with gestures in spoken language. (Future research will have to determine whether the iconic modulations of verbs discussed in Section 5.2 are correctly interpreted by non-signers.)
Besides the (relatively theory-neutral) comparison of the expressive resources of spoken and sign language, one could ask whether in the end the logic-with-iconicity at work in sign language should be analyzed as one or as two systems (the same question might apply to the interaction of iconic effects with logic in spoken language, especially if one studies it with co-speech gestures). The traditional view is certainly that grammar and iconicity are two separate modules (see Cogill-Koez 2000; Macken et al. 1993). But as argued in this piece, there is a non-trivial interaction between grammar and iconicity at the logical core of sign language: one and the same expression – say, a singular or a plural locus – can display a logical behavior (e.g. as a bound variable) while also having an iconic function. This doesn’t mean that a two-module theory couldn’t be developed; but the relevant notion of ‘module’ would have to be appropriately abstract. In the end, one will have to develop criteria for what counts as a ‘module’ on the basis of linguistic or non-linguistic data – so as to determine whether one can isolate a natural class of grammatical phenomena that exclude iconicity in sign language, or whether grammar and iconicity are so intertwined that they should be seen as a single unified module. On the assumption that differences across modules also correspond to differences of brain implementation, neuro-imaging data might directly bear on this issue; sophisticated research is ongoing on this topic, including as part of a comparison between signs and co-speech gestures (e.g. Xu et al. 2009; Emmorey and Ozyurek 2014).
While the main issues are wide open, we hope to have convinced the reader that sign language has the potential to alter radically the way we look at natural language semantics, and that investigating Universal Semantics from the standpoint of sign language might help reconsider foundational questions about the logical core of language, and its expressive power.  We have suggested that two questions could be illuminated in this way. One pertains to the logical engine of language, some of whose main components are arguably visible in sign but not in spoken language. The other pertains to the expressive power of language, which in its signed modality has a rich iconic component that is rarely taken into account in formal studies of spoken language. Our investigations leave open whether spoken language can match the expressive resources of sign language when co-speech gestures are taken into account; and they also don’t decide whether in the end ‘grammar’ and ‘iconicity’ should be seen as two modules, or one – in fact, the criteria for deciding this question remain to be developed. 
Special thanks to Brent Strickland, who provided very helpful comments on an earlier version, to Karen Emmorey for providing references on vocal iconicity, and to Masha Esipova and Adam Schembri for helpful remarks. I also greatly benefited from some very constructive referee comments on earlier pieces.
Consultants: This article summarizes research that appeared in various articles, which owe a lot to the work of the following consultants. ASL: Jonathan Lamberton. LSF: Yann Cantin, Ludovic Ducasse. Their contribution is gratefully acknowledged.
Pictures: Pictures that are not cited from published work are stills from videos cited in the text; they are used with the consultants’ explicit consent, which is gratefully acknowledged.
Grant acknowledgments: The research leading to these results received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement N°324115–FRONTSEM (PI: Schlenker). Research was conducted at Institut d’Etudes Cognitives, Ecole Normale Supérieure - PSL Research University. Institut d’Etudes Cognitives is supported by grants ANR-10-LABX-0087 IEC and ANR-10-IDEX-0001-02 PSL*.
Prior work: This paper explicitly borrows from earlier publications on sign language semantics (references are added at the beginning of the relevant sections). While the data and formalisms are mostly not new, the general perspective is.
Anand, Pranav. 2006. De De Se. PhD dissertation. Santa Cruz: University of California.
Anand, Pranav & Andrew Nevins. 2004. Shifty operators in changing contexts. In R. Young (ed.), SALT XIV, 20–37. Ithaca, NY: Cornell University.
Bahan, B., J. Kegl, D. MacLaughlin & C. Neidle. 1995. Convergent evidence for the structure of determiner phrases in American Sign Language. In L. Gabriele, D. Hardison & R. Westmoreland (eds.), FLSM VI, Proceedings of the sixth annual meeting of the Formal Linguistics Society of Mid-America, vol. 2, 1–12. Bloomington, IN: Indiana University Linguistics Club Publications.
Bhatt, Rajesh & Roumyana Pancheva. 2006. Conditionals. In M. Everaert & H. Van Riemsdijk (eds.), The Blackwell companion to syntax, vol. 1, 638–687. Boston and Oxford: Blackwell.
Bittner, Maria. 2001. Topical referents for individuals and possibilities. In Rachel Hastings, Brendan Jackson & Zsofia Zvolenszky (eds.), Proceedings of Semantics and Linguistic Theory XI, 36–55. Ithaca: CLC.
Brody, Michael & Anna Szabolcsi. 2003. Overt scope in Hungarian. Syntax 6(1). 19–51.
Cecchetto, Carlo, Carlo Geraci & Sandro Zucchi. 2006. Strategies of relativization in Italian Sign Language. Natural Language and Linguistic Theory 24. 945–975.
Clark, Herbert H. & Richard G. Gerrig. 1990. Quotations as demonstrations. Language 66. 764–805.
Clark, N., M. Perlman & M. Johansson Falck. 2013. Iconic pitch expresses vertical space. In B. Dancygier, M. Borkent & J. Hinnell (eds.), Language and the creative mind, 393–410. Stanford, CA: CSLI Publications.
Cogill-Koez, D. 2000. Signed language classifier predicates: Linguistic structures or schematic visual representation? Sign Language and Linguistics 3(2). 153–207.
Cooper, Robin. 1983. Quantification and syntactic theory. Synthese Language Library 21. Dordrecht: D. Reidel.
Cresswell, Max J. 1990. Entities and indices. Studies in Linguistics and Philosophy, vol. 41. Dordrecht: Kluwer Academic Publishers.
Cuxac, Christian. 1999. French Sign Language: Proposition of a structural explanation by iconicity. In A. Braffort et al. (eds.), Gesture-based communication in human-computer interaction, 165–184. Berlin: Springer.
Cuxac, Christian & Marie-Anne Sallandre. 2007. Iconicity and arbitrariness in French Sign Language: Highly iconic structures, degenerated iconicity and diagrammatic iconicity. In E. Pizzuto, P. Pietrandrea & R. Simone (eds.), Verbal and signed languages: Comparing structures, constructs and methodologies, 13–33. Berlin: Mouton de Gruyter.
Davidson, Kathryn. 2015. Quotation, demonstration, and iconicity. Linguistics and Philosophy 38. 477–520.
Deal, Amy Rose. 2017. Shifty asymmetries: Universals and variation in shifty indexicality. Manuscript, Berkeley: University of California.
Dekker, Paul. 2004. Cases, adverbs, situations and events. In H. Kamp & B. Partee (eds.), Context dependence in the analysis of linguistic meaning. Amsterdam: Elsevier.
Delaporte, Yves. 2007. Dictionnaire étymologique et historique de la langue des signes française: Origine et évolution de 1200 signes. Les Essarts-le-Roi, France: Éditions du Fox.
Delaporte, Yves & Emily Shaw. 2009. Gesture and signs through history. Gesture 9(1). 35–60.
Eckardt, Regine. 2014. The semantics of free indirect discourse: How texts allow us to mind-read and eavesdrop. Leiden: Brill.
Elbourne, Paul. 2005. Situations and individuals. Cambridge, MA: MIT Press.
Emmorey, K. & M. Herzig. 2003. Categorical versus gradient properties of classifier constructions in ASL. In K. Emmorey (ed.), Perspectives on classifier constructions in signed languages, 222–246. Mahwah, NJ: Lawrence Erlbaum Associates.
Emmorey, Karen. 2002. Language, cognition, and the brain: Insights from sign language research. Mahwah, NJ: Erlbaum.
Emmorey, Karen & Brenda Falgier. 2004. Conceptual locations and pronominal reference in American Sign Language. Journal of Psycholinguistic Research 33(4). 321–331.
Emmorey, Karen & Asli Ozyurek. 2014. Language in our hands: Neural underpinnings of sign language and co-speech gesture. In M. S. Gazzaniga & G. R. Mangun (eds.), The cognitive neurosciences, 5th edn., 657–666. Cambridge, MA: MIT Press.
Evans, Gareth. 1980. Pronouns. Linguistic Inquiry 11(2). 337–362.
Finer, Daniel. 1985. The syntax of switch-reference. Linguistic Inquiry 16(1). 35–55.
Geach, Peter. 1962. Reference and generality: An examination of some medieval and modern theories. Ithaca, NY: Cornell University Press.
Giorgolo, Gianluca. 2010. Space and time in our hands. Utrecht: UiL-OTS, Universiteit Utrecht.
Goldin-Meadow, Susan & Diane Brentari. 2017. Gesture, sign and language: The coming of age of sign language and gesture studies. Behavioral and Brain Sciences. doi:10.1017/S0140525X15001247.
Greenberg, Gabriel. 2013. Beyond resemblance. Philosophical Review 122(2).
Groenendijk, Jeroen & Martin Stokhof. 1991. Dynamic predicate logic. Linguistics and Philosophy 14(1). 39–100.
Halle, Morris. 1978. Knowledge unlearned and untaught: What speakers know about the sounds of their language. In Morris Halle, Joan Bresnan & George Miller (eds.), Linguistic theory and psychological reality. Cambridge, MA: MIT Press.
Heim, Irene. 1982. The semantics of definite and indefinite noun phrases. PhD dissertation. Amherst: University of Massachusetts.
Heim, Irene. 1990. E-type pronouns and donkey anaphora. Linguistics and Philosophy 13. 137–177.
Heim, Irene. 1991. The first person. Class handouts, MIT.
Heim, Irene. 2008. Features on bound pronouns. In Daniel Harbour, David Adger & Susana Bejar (eds.), Phi-theory: Phi-features across modules and interfaces. Oxford: Oxford University Press.
Heim, Irene & Angelika Kratzer. 1998. Semantics in generative grammar. Oxford: Blackwell.
Herrmann, Annika & Markus Steinbach. 2012. Quotation in sign languages – A visible context shift. In I. Van Alphen & I. Buchstaller (eds.), Quotatives: Cross-linguistic and cross-disciplinary perspectives, 203–228. Amsterdam: John Benjamins.
Hockett, Charles F. 1966. What Algonquian is really like. IJAL 31(1). 59–73.
Hübl, Annika & Markus Steinbach. 2012. Quotation across modalities: Shifting contexts in sign and spoken languages. Talk delivered at the workshop Quotation: Perspectives from philosophy and linguistics, Ruhr-University Bochum.
Iatridou, Sabine. 1994. On the contribution of conditional Then. Natural Language Semantics 2. 171–199.
Izvorski, Roumyana. 1996. The syntax and semantics of correlative proforms. In K. Kusumoto (ed.), Proceedings of NELS 26. Amherst, MA: GLSA.
Jacobson, Pauline. 1999. Towards a variable-free semantics. Linguistics and Philosophy 22. 117–184.
Jacobson, Pauline. 2012. Direct compositionality and ‘uninterpretability’: The case of (sometimes) ‘uninterpretable’ features on pronouns. Journal of Semantics 29. 305–343.
Kamp, Hans. 1981. A theory of truth and semantic representation. In J. A. G. Groenendijk, T. M. V. Janssen & M. J. B. Stokhof (eds.), Formal methods in the study of language. Amsterdam: Mathematical Centre.
Kaplan, David. 1989. Demonstratives. In Joseph Almog, John Perry & Howard Wettstein (eds.), Themes from Kaplan. Oxford: Oxford University Press.
Kegl, Judy. 2004. ASL syntax: Research in progress and proposed research. Sign Language & Linguistics 7(2). 173–206. Reprint of an MIT manuscript written in 1977.
Kiss, Katalin É. 1991. Logical structure in linguistic structure. In C.-T. James Huang & Robert May (eds.), Logical structure and linguistic structure, 387–426. Dordrecht: Kluwer.
Koulidobrova, Elena. 2011. SELF: Intensifier and ‘long distance’ effects in American Sign Language (ASL). Manuscript, University of Connecticut.
Kratzer, Angelika. 2009. Making a pronoun: Fake indexicals as windows into the properties of pronouns. Linguistic Inquiry 40(2). 187–237.
Kuhn, Jeremy. 2015a. Iconicity in the grammar: Pluractionality in (French) Sign Language. Talk, LSA 89.
Kuhn, Jeremy. 2015b. Cross-categorical singular and plural reference in sign language. Doctoral dissertation. New York University.
Kuhn, Jeremy & Valentina Aristodemo. 2017. Pluractionality, iconicity, and scope in French Sign Language. Semantics and Pragmatics 10(6).
Lascarides, Alex & Matthew Stone. 2009. A formal semantic analysis of gesture. Journal of Semantics 26(4). 393–449.
Lewis, David K. 1986. On the plurality of worlds. Oxford: Blackwell.
Liddell, Scott K. 2003. Grammar, gesture, and meaning in American Sign Language. Cambridge: Cambridge University Press.
Lillo-Martin, Diane. 1991. Universal Grammar and American Sign Language: Setting the null argument parameters. Dordrecht: Kluwer Academic Publishers.
Lillo-Martin, Diane. 1995. The point of view predicate in American Sign Language. In K. Emmorey & J. Reilly (eds.), Language, gesture, and space, 155–170. Hillsdale, NJ: Lawrence Erlbaum Associates.
Lillo-Martin, Diane. 2012. Utterance reports and constructed action. In R. Pfau, M. Steinbach & B. Woll (eds.), Sign language: An international handbook, 365–387. Berlin: De Gruyter Mouton.
Lillo-Martin, Diane & Edward S. Klima. 1990. Pointing out differences: ASL pronouns in syntactic theory. In Susan D. Fischer & Patricia Siple (eds.), Theoretical issues in sign language research, Volume 1: Linguistics, 191–210. Chicago: The University of Chicago Press.
Macken, E., J. Perry & C. Haas. 1993. Richly grounding symbols in ASL. Sign Language Studies 81(1). 375–394.
MacSweeney, M., C. M. Capek, R. Campbell & B. Woll. 2008. The signing brain: The neurobiology of sign language. Trends in Cognitive Sciences 12. 432–440.
Maier, Emar. 2014a. Mixed quotation. Survey article written for the Blackwell Companion to Semantics. Manuscript, University of Groningen.
Maier, Emar. 2014b. Mixed quotation: The grammar of apparently transparent opacity. Semantics & Pragmatics 7(7). 1–67.
Meier, Richard. 2012. Language and modality. In Roland Pfau, Markus Steinbach & Bencie Woll (eds.), Handbook of sign language linguistics. Berlin: Mouton de Gruyter.
Meir, Irit. 2010. Iconicity and metaphor: Constraints on metaphorical extension of iconic forms. Language 86(4). 865–896.
Meir, Irit, Wendy Sandler, Carol Padden & Mark Aronoff. 2010. Emerging sign languages. In M. Marschark & P. Spencer (eds.), Oxford handbook of deaf studies, language, and education, vol. 2, 267–280. Oxford: Oxford University Press.
Neidle, Carol, Judy Kegl, Dawn MacLaughlin, Benjamin Bahan & Robert G. Lee. 2000. The syntax of American Sign Language: Functional categories and hierarchical structure. Cambridge, MA: The MIT Press.
Nouwen, Rick. 2003. Plural pronominal anaphora in context. Netherlands Graduate School of Linguistics Dissertations 84. Utrecht: LOT.
Okrent, Arika. 2002. A modality-free notion of gesture and how it can help us with the morpheme vs. gesture question in sign language linguistics, or at least give us some criteria to work with. In R. P. Meier, D. G. Quinto-Pozos & K. A. Cormier (eds.), Modality and structure in signed and spoken languages, 175–198. Cambridge: Cambridge University Press.
Padden, Carol A. 1986. Verbs and role-shifting in American Sign Language. In Carol Padden (ed.), Proceedings of the Fourth National Symposium on Sign Language Research and Teaching. Silver Spring, MD: National Association of the Deaf.
Partee, Barbara. 1973. Some structural analogies between tenses and pronouns in English. The Journal of Philosophy 70. 601–609.
Perlman, M. & A. Cain. 2014. Iconicity in vocalizations, comparisons with gesture, and implications for the evolution of language. Gesture 14. 320–350.
Perlman, M., R. Dale & G. Lupyan. 2015. Iconicity can ground the creation of vocal symbols. Royal Society Open Science 2. 150152.
Quer, Josep. 2005. Context shift and indexical variables in sign languages. In E. Georgala & J. Howell (eds.), Proceedings of Semantics and Linguistic Theory (=SALT) XV, 152–168. Ithaca, NY: CLC Publications.
Quer, Josep. 2013. Attitude ascriptions in sign languages and role shift. In Leah C. Geer (ed.), Proceedings of the 13th Meeting of the Texas Linguistics Society, 12–28. Austin: Texas Linguistics Forum.
Quine, Willard V. 1960. Variables explained away. Proceedings of the American Philosophical Society 104(3). 343–347.
Rawski, Jonathan. 2018. The logical complexity of phonology across speech and sign. Manuscript, SUNY.
Reinhart, Tanya. 1983. Point of view in language—The use of parentheticals. In G. Rauch (ed.), Essays on deixis, 169–194. Tübingen: Gunter Narr Verlag.
Rothstein, Susan. 2004. Structuring events: A study in the semantics of lexical aspect. (Explorations in Semantics 2). Malden, MA & Oxford: Blackwell.
Sandler, Wendy & Diane Lillo-Martin. 2006. Sign language and linguistic universals. Cambridge: Cambridge University Press.
Schlenker, Philippe. 1999. Propositional attitudes and indexicality: A cross-categorical approach. Doctoral dissertation, MIT.
Schlenker, Philippe. 2003. A plea for monsters. Linguistics and Philosophy 26. 29–120.
Schlenker, Philippe. 2004a. Conditionals as definite descriptions (a referential analysis). Research on Language and Computation 2. 417–462.
Schlenker, Philippe. 2004b. Context of thought and context of utterance. Mind & Language 19(3). 279–304.
Schlenker, Philippe. 2011a. Iconic agreement. Theoretical Linguistics 37(3–4). 223–234.
Schlenker, Philippe. 2011b. Donkey anaphora: The view from Sign Language (ASL and LSF). Linguistics and Philosophy 34(4). 341–395.
Schlenker, Philippe. 2011c. Quantifiers and variables: Insights from Sign Language (ASL and LSF). In B. H. Partee, M. Glanzberg & J. Skilters (eds.), Formal semantics and pragmatics: Discourse, context, and models. The Baltic International Yearbook of Cognition, Logic and Communication, vol. 6. Manhattan, KS: New Prairie Press.
Schlenker, Philippe. 2011d. Indexicality and De Se reports. In K. von Heusinger, C. Maienborn & P. Portner (eds.), Semantics, vol. 2, article 61, 1561–1604. Berlin: Mouton de Gruyter.
Schlenker, Philippe. 2013a. Temporal and modal anaphora in Sign Language (ASL). Natural Language and Linguistic Theory 31(1). 207–234.
Schlenker, Philippe. 2013b. Anaphora: Insights from sign language (summary). In S. R. Anderson, J. Moeschler & F. Reboul (eds.), L’Interface langage-cognition / The language-cognition interface: Actes du 19e Congrès International des Linguistes, Genève, 22–27 juillet 2013. Geneva: Librairie Droz.
Schlenker, Philippe. 2014. Iconic features. Natural Language Semantics 22(4). 299–356.
Schlenker, Philippe. to appear, a. Super Monsters – Part I. To appear in Semantics & Pragmatics.
Schlenker, Philippe. to appear, b. Super Monsters – Part II. To appear in Semantics & Pragmatics.
Schlenker, Philippe. to appear, c. Locative Shift. To appear in Glossa.
Schlenker, Philippe. to appear, d. Iconic Pragmatics. To appear in Natural Language & Linguistic Theory.
Schlenker, Philippe, Jonathan Lamberton & Mirko Santoro. 2013. Iconic variables. Linguistics and Philosophy 36(2). 91–149.
Sharvit, Yael. 2008. The puzzle of free indirect discourse. Linguistics and Philosophy 31. 353–395.
Stechow, Arnim von. 2004. Binding by verbs: Tense, person and mood under attitudes. In Horst Lohnstein & Susanne Trissler (eds.), The syntax and semantics of the left periphery, 431–488. Berlin & New York: Mouton de Gruyter.
Stone, M. 1997. The anaphoric parallel between modality and tense. IRCS Report 97-06. Philadelphia, PA: University of Pennsylvania.
Strickland, B., C. Geraci, E. Chemla, P. Schlenker, M. Kelepir & R. Pfau. 2015. Event representations constrain the structure of language: Sign language as a window into universally accessible linguistic biases. Proceedings of the National Academy of Sciences 112(19). 5968–5973.
Szabolcsi, Anna. 2001. The syntax of scope. In Mark Baltin & Chris Collins (eds.), Handbook of contemporary syntactic theory, 607–634. Oxford: Blackwell.
Taub, Sarah F. 2001. Language from the body. Cambridge: Cambridge University Press.
Vendler, Zeno. 1967. Linguistics in philosophy. Ithaca, NY: Cornell University Press.
Wilbur, Ronnie B. 2003. Representations of telicity in ASL. Chicago Linguistic Society 39. 354–368.
Wilbur, Ronnie B. 2008. Complex predicates involving events, time and aspect: Is this why sign languages look so similar? In J. Quer (ed.), Signs of the time, 217–250. Hamburg: Signum.
Wilbur, Ronnie B. & Evie Malaia. 2008. Event Visibility Hypothesis: Motion capture evidence for overt marking of telicity in ASL. Talk, Linguistic Society of America, Chicago, IL.
Winston, E. 1995. Spatial mapping in comparative discourse frames. In K. Emmorey & J. S. Reilly (eds.), Language, gesture, and space, 87–114. Hillsdale, NJ: Lawrence Erlbaum.
Xu, Jiang, Patrick J. Gannon, Karen Emmorey, Jason F. Smith & Allen R. Braun. 2009. Symbolic gestures and spoken language are processed by a common neural system. Proceedings of the National Academy of Sciences of the USA 106(49). 20664–20669. doi:10.1073/pnas.0909197106.
Zucchi, Sandro. 2009. Along the time line: Tense and time adverbs in Italian Sign Language. Natural Language Semantics 17. 99–139.
Zucchi, Sandro. 2011. Event descriptions and classifier predicates in sign languages. Presentation at FEAST, Venice, June 21, 2011.
Zucchi, Sandro. 2012. Formal semantics of sign languages. Language and Linguistics Compass 6(11). 719–734.
© 2018 Walter de Gruyter GmbH, Berlin/Boston