
Monkey business

Robert C. Berwick
From the journal Theoretical Linguistics

In their article “Formal Monkey Linguistics,” Schlenker and colleagues have performed a great service for both primatology and linguistics by assembling in a single place much of what is currently known about the form and function of primate vocalization, along with a clarion call to apply the formal tools of modern linguistics to any future developments in this area. They have also quite appropriately taken great care to remind us of the gulf between human language and monkey call systems. This brief note underscores both points by illustrating some of the fundamental challenges that arise when applying the armamentarium of formal linguistics to systems far more circumscribed than human language, at least in the domain of syntax.

If the classical definition of a language is a system of sounds paired with meanings, then the pairing of, e.g., a Campbell’s monkey’s krak-oo call with its ‘there is an alert’ meaning would seem to qualify. So far as it goes, then, the notion of a “monkey language” would seem to be no different from that of a “computer language.” Further, just as with computer languages, the techniques drawn from formal language theory would seem to be equally applicable (or not) to monkey languages.

So far, so good. Yet there is at least one key difference between primate call systems and both computer and human languages that poses a potentially steep hurdle for the Schlenker methodological program. It is a familiar difference, yet one with an important implication, because it constrains the tools one can use. This hurdle has a simple name: infinity. The distinction is illustrated in Section 2.1 of the Schlenker article itself, which sketches the formal properties of languages; it arises from Wilhelm von Humboldt’s dictum (1836) that perhaps the hallmark of human language is that it “makes infinite use of finite means.” As far as nearly all linguists can make out, there is no such thing as a finite human language, despite occasional demurrals from some corners, notably by Everett (2005) with respect to the Amazonian language Pirahã. Suppose Everett were right: that Pirahã is basically a (huge) finite list of sentences with no generative devices. It would be irrelevant for linguistics. The scientific question has to do with the capacity for language, which Pirahã speakers share completely; they do fine with Portuguese. Everett’s “discovery” would mean only that, instead of deploying their full language capacity, Pirahã speakers use huge lists made up of words exhibiting some kind of regular patterns (compare the case of the squirrel monkeys, below). That would be irrelevant to the study of the human language capacity, merely some kind of biological oddity, like birds that decide not to fly.

Rather, the human capacity for language, without exception, comprises the ability to learn any human language, each of which has a countably infinite number of sentences; one can always make any sentence longer. Human language is productively open-ended: it has infinite generative capacity. The same holds for computer languages (unhappily for beginners, there is no bound on the length of a correct Java program) and likewise for the system of natural numbers defined inductively by the successor function, as Schlenker and colleagues observe in their example (4) at the beginning of Section 2.1.
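For concreteness, the induction alluded to in (4) can be written out in the standard way; this is a reconstruction of the usual successor construction, not necessarily Schlenker et al.’s exact formulation. A finite basis plus one self-applicable rule suffices to generate a countably infinite set:

```latex
% The standard successor-style induction (a reconstruction; cf. example (4)):
%   (i)   0 is a natural number;
%   (ii)  if n is a natural number, so is succ(n);
%   (iii) nothing else is a natural number.
\[
  0 \in \mathbb{N}, \qquad
  n \in \mathbb{N} \;\Rightarrow\; \mathit{succ}(n) \in \mathbb{N}
\]
```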

What then of monkey languages? Again, as is very familiar, so far as anyone can tell they are finite languages of small cardinality. Campbell’s monkeys might chain together selections from four (possibly five) distinct call types, boom, hok, krak (and wak), along with the suffix -oo, into sequences at least four calls long, but they never get anywhere near chains of length 12 or 24, let alone anything of potentially unbounded length. As Marc Hauser observes, no animal call system grows beyond about 100 distinct forms. The monkeys do not, it seems, make infinite use of their finite means, even though it would seem rather simple to apply concatenation recursively, as in (4), to count up as far as one would like, even with a single call B. Example (4) also illustrates that there is essentially one way to gain Humboldt’s gift. Evidently the monkeys lack the one crucial ability that Berwick and Chomsky (2016) identify as the evolutionary innovation in human language syntax: an operator, Merge, that applies to its own output. The resulting “small world” of monkey languages, as compared to the “infinite world” of human languages, matters because, as we shall see just below, it affects which tools we can pull from the linguistic toolkit.
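To make the contrast concrete, here is a toy sketch (mine, not Schlenker et al.’s) of the two regimes: a single concatenation operation that can apply to its own output already yields an unbounded language over one call, whereas a monkey-style call system is simply a small stored inventory.

```python
# Toy contrast, for illustration only: one self-applicable operation
# versus a fixed finite list of calls.

def concat(x: str, y: str) -> str:
    """A single combinatorial operation that can apply to its own output."""
    return x + y

def unbounded_calls():
    """Infinite use of finite means: B, BB, BBB, ... with no upper bound."""
    s = "B"
    while True:              # no length-12 or length-24 ceiling here
        yield s
        s = concat(s, "B")   # the operation feeds on its own output

# A call system of small cardinality, by contrast, is just a stored set:
CAMPBELLS_CALLS = {"boom", "hok", "krak", "wak", "krak-oo", "hok-oo"}
```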

Furthermore, it is also well known that the sheer size of the (finite) vocabulary of any human language beggars that of any nonhuman animal call system. Nim, the chimpanzee taught American Sign Language (ASL), stopped short at about 170 ASL signs and was also unable to breach Humboldt’s divide. Other animals seem to do better: the border collie Chaser can identify 1,022 toys by name. But all this still falls well short of human competence. Not only do children acquire tens of thousands of words within a few years, we can always coin new words: infinite use of a different sort, once again.

And this ability strongly affects the applicability of linguistic analysis. “Small worlds” do not look like “large worlds,” computationally or information-theoretically, and as a result the methods developed for human and computer languages, which assume infinity, differ from methods that do not. Our best descriptions of finite human brains and finite computers rely on the assumption that they are best thought of as infinite; this is not a paradox, as the late Marvin Minsky, along with Chomsky, strongly emphasized. Take Nim. As Yang (2013) demonstrates, the ASL sign sequences Nim acquired, such as more-apple or apple-Nim, are best described information-theoretically not as an infinite set of expressions generated by a grammar, even a so-called regular or finite-state one, but simply as a list of memorized pairs. In contrast, the two-word expressions of the children Adam, Eve, and Sarah in the CHILDES corpus, e.g., the-cookie, a-apple, the-book, a-book, are best described by a productive, open-ended rule system, a generative grammar, that operates freely over all the items available in the children’s vocabulary, rather than by sheer memorization of word pairs. In short, it does not appear that Nim had anything amounting to a generative grammar at all. If this is so, and the experimental evidence appears compelling, then all the modern linguistic technology grounded in the notion of productive, generative grammars would not even apply to monkey languages; rather different formal machinery might be required.
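The intuition behind Yang’s diagnostic can be conveyed with a deliberately simplified sketch; this is not his actual statistical test, only the core idea that under a productive rule determiners and nouns combine freely, whereas rote memorization confines probability mass to the specific stored pairs. The toy data below are invented for illustration.

```python
# A deliberately simplified sketch of the productivity intuition in
# Yang (2013) -- NOT his actual statistical test. Under a productive rule,
# any determiner can combine with any noun; under rote memorization, only
# the stored pairs occur.

from itertools import product

def pair_coverage(pairs):
    """Fraction of all possible first+second word combinations attested."""
    firsts = {a for a, _ in pairs}
    seconds = {b for _, b in pairs}
    attested = set(pairs)
    possible = set(product(firsts, seconds))
    return len(attested) / len(possible)

# Child-style data: determiners combine freely with nouns.
child = [("the", "cookie"), ("a", "cookie"), ("the", "apple"),
         ("a", "apple"), ("the", "book"), ("a", "book")]

# Nim-style data: fixed, memorized combinations only (invented examples).
nim = [("more", "apple"), ("apple", "Nim"), ("more", "banana"),
       ("give", "orange")]

print(pair_coverage(child))  # 1.0   -- every combination occurs
print(pair_coverage(nim))    # 0.33  -- mass confined to stored pairs
```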

As a result, one might go astray by analyzing monkey language as though it were a human language (or even a computer language), and this is the key cautionary note that must be sounded for programmatic approaches like Schlenker et al.’s. As a concrete recent example of what might go wrong, consider the experiments with squirrel monkeys (Saimiri sciureus) by Ravignani et al. (2013), which claimed to show that these animals could learn to detect so-called abstract, non-adjacent “long-distance dependencies”: relationships between one element and another separated by an arbitrarily long sequence of intervening items. The experimenters used an artificial language of high (H) and low (L) acoustic tones, noting that this was the first time this kind of ability had been demonstrated in nonhuman primates. As the article goes on to say, such a finding might be of some importance from an evolutionary standpoint: “Human and squirrel monkey lineages diverged at least 36 [million years ago] and our findings suggest that dependency sensitivity was present in these primate ancestors…. these monkeys possess the cognitive potential to recognize the rule generating plurals of Turkish nouns, or many other linguistic phenomena” (2013, 3).

However, a more careful look at the experiment reveals a rather different possibility. The researchers simply assumed that the squirrel monkeys were using “rules” like those of a conventional human generative grammar; as soon as one drops this assumption, the apparent result collapses. The “long-distance” dependency tested was a sequence of tones of the form LHⁿL or HLⁿH, so the dependency is a match between a single L(ow) or H(igh) tone at the beginning and end of a sequence, with some arbitrary number of H or L tones in between. After habituation exposure to tone sequences with two or three H’s between two L tones, the monkeys appeared to handle successfully examples in which four or five H’s intervened, taken as a demonstration that they had indeed learned some “rule” for the “long-distance dependency” in this tonal pattern language. That is, the squirrel monkeys had apparently generalized from two or three to four or five, and, presumptively, to n intervening tones. But is this analysis correct? Note that pattern languages like these can be correctly recognized without any sort of formal grammar. As Huybregts has observed (p.c.), all it takes is to store just four tone triples: LHL, LHH, HHH, and HHL. Given this small, fixed, finite set of templates, all the squirrel monkey has to do when it hears a tone sequence of any length is check that every three consecutive tones match one of these four patterns. If they do, the sequence is acceptable: it matches LHⁿL, no matter how many high tones fall between the two low tones. This is therefore a so-called locally 3-testable pattern language, in the sense of Rogers and Pullum (2011), whom the target article appropriately cites (p. 50) as an example of applying formal analysis to subclasses of the regular languages. However, the target article does not seem to draw the conclusion that follows: it is not hard to imagine that this is in fact what the monkeys learned and then used. No human-type grammar need be invoked at all.
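Huybregts’s template check is simple enough to write down directly; the four stored triples are from the text above, while the sliding-window scan (and its simplified treatment of string edges) is my rendering:

```python
# The template check described above: store four tone triples and verify
# that every 3-tone window of the input is one of them. (A full locally
# 3-testable definition also constrains string edges with boundary markers;
# that refinement is omitted here for brevity.)

ALLOWED = {"LHL", "LHH", "HHH", "HHL"}

def accepts(tones: str) -> bool:
    """True if every 3-tone window is one of the four stored triples."""
    if len(tones) < 3:
        return False
    return all(tones[i:i + 3] in ALLOWED for i in range(len(tones) - 2))

# Habituation items (2-3 intervening H's) and novel probes (4-5 H's) pass:
assert all(accepts("L" + "H" * n + "L") for n in (2, 3, 4, 5))
# ...as does any LH...HL string, however many H's intervene:
assert accepts("L" + "H" * 50 + "L")
# The mirror-image HL...LH pattern is rejected (window "HLL" is not stored):
assert not accepts("HLLH")
```

No grammar, regular or otherwise, is consulted: the recognizer is a fixed lookup table plus a width-3 window.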

It is far less clear that Turkish speakers’ knowledge of plural formation amounts to the same thing, because we know on other linguistic grounds that, say, the Turkish stress system is best described by a generative grammar with cyclically applied rules (Chomsky and Halle, 1968; Underhill, 1976), and that Turkish speakers obviously possess the human faculty of language, which is realized as generative grammars. To be sure, it is not out of the realm of possibility that Turkish speakers might somehow “compile” their rule(s) for plural formation into a format for fast computation that looks like the one the monkeys used. But this leads back to the original problem: in the case of Turkish, we can apply modern linguistic methods because we already know that human language is at work. It is not obvious how to tell when these same methods can be securely applied to nonhuman animals. In the end, it would seem that monkey language requires monkey business.
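For the record, the Turkish plural alternation at issue is the textbook case of two-way vowel harmony: the suffix surfaces as -lar after a stem whose last vowel is back and as -ler after a front vowel, however many consonants intervene. A toy rendering (mine, and deliberately naive about loanwords and exceptions) shows how locally computable the dependency is:

```python
# A toy rendering of the Turkish plural rule discussed above (mine, and
# deliberately naive: loanword exceptions and the like are ignored).
# The suffix is -lar after a back vowel, -ler after a front vowel, keyed
# to the LAST vowel of the stem -- a dependency that skips any number of
# intervening consonants.

BACK_VOWELS = set("aıou")
FRONT_VOWELS = set("eiöü")

def pluralize(noun: str) -> str:
    for ch in reversed(noun):        # scan right-to-left for the last vowel
        if ch in BACK_VOWELS:
            return noun + "lar"
        if ch in FRONT_VOWELS:
            return noun + "ler"
    raise ValueError("no vowel found in stem")

assert pluralize("kitap") == "kitaplar"   # 'book'  -> 'books'
assert pluralize("ev") == "evler"         # 'house' -> 'houses'
```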

References

Berwick, Robert C. & Noam Chomsky. 2016. Why only us: Language and evolution. Cambridge, MA: MIT Press. doi:10.7551/mitpress/9780262034241.001.0001

Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. Cambridge, MA: MIT Press.

Everett, Daniel. 2005. Cultural constraints on grammar and cognition in Pirahã. Current Anthropology 46(4). 621–646. doi:10.1086/431525

Ravignani, Andrea, Ruth-Sophie Sonnweber, Nina Stobbe & W. Tecumseh Fitch. 2013. Action at a distance: Dependency sensitivity in a New World primate. Biology Letters 9. 20130852. doi:10.1098/rsbl.2013.0852

Rogers, James & Geoffrey K. Pullum. 2011. Aural pattern recognition experiments and the subregular hierarchy. Journal of Logic, Language and Information 20. 329–342. doi:10.1007/s10849-011-9140-2

Underhill, Robert. 1976. A Turkish grammar. Cambridge, MA: MIT Press.

Von Humboldt, Wilhelm. 1836. Über die Verschiedenheit des menschlichen Sprachbaues und ihren Einfluss auf die geistige Entwicklung des Menschengeschlechts. Berlin: F. Dümmler.

Yang, Charles. 2013. Ontogeny and phylogeny of language. Proceedings of the National Academy of Sciences 110(16). 6324–6327. doi:10.1073/pnas.1216803110

Published Online: 2016-7-5
Published in Print: 2016-7-1

©2016 by De Gruyter Mouton
