
Zeitschrift für Sprachwissenschaft


Volume 38, Issue 2

Paul M. Pietroski: Conjoining Meanings. Semantics Without Truth Values

Kai-Uwe Carstensen
Published Online: 2019-09-14 | DOI: https://doi.org/10.1515/zfs-2019-2005

Reviewed publication

Paul M. Pietroski: Conjoining Meanings. Semantics Without Truth Values. Oxford: Oxford University Press (Context and Content), 2018, X + 393 pages.

There is an enormous range of theories of meaning in linguistics and philosophy, and, most notably, there is still a wide gap between logical and (especially cognitive) linguistic approaches. With his monograph Conjoining Meanings (henceforth CM), Paul M. Pietroski sets out to bring these communities together and to bridge this gap with an “internalist semantics” approach to meaning that is cognitive in the Chomskyan sense, but rooted in modern logic.

In Chapter 0, “Overture”, Pietroski introduces the core assumptions of CM, which are treated in more depth in later chapters. He starts by stating that human natural languages (which he calls “Slangs”) are generative procedures that connect meanings with pronunciations. Turning to what meanings are not, he rejects both the notion that ‘meanings are concepts’ (i. e., the identification of meanings with concepts) and the notion that ‘meanings are extensions’ (as well as corresponding truth-conditional conceptions, i. e., the propositional and Davidsonian stances in the sense of Speaks 2018). Instead, Pietroski proposes to view meanings as instructions for how to access simple concepts or build complex ones. As to semantic composition, he notes that Frege’s functor-argument apparatus and derivatives like the type-theoretic lambda calculus are much too powerful to model human meaning composition. The alternative he previews is based on a restricted kind of predication (only classificatory monadic concepts of type M and relational dyadic concepts of type D) with corresponding compositional operations (M-junction and D-junction).

Chapter 1 elaborates on the linguistic reasons for assuming a mentalist, generative, non-extensional approach, in which meaning is based neither on extensions nor on representations of extensions, nor on relations to truth values (contrary to Lewisian, Davidsonian, and also Montagovian approaches). To exemplify his points, Pietroski uses linguistic ambiguities, Putnam’s “water” case, and Liar sentences.

Chapter 2 introduces concepts as “composable mental symbols that can be used to think about things” (p. 77), based on predicates which, as Pietroski shows, can be motivated by classical logic (cf. 3.1) but differ from those in other current accounts. He argues that monadic predicates should not be regarded as truth-valued functions, and shows that human concepts are more restricted than what can be represented in standard logic (with its characteristic use of variables and its truth-theory-motivated use of conjunction). He also points to the observation that proper nouns cannot always be formalized with e-type expressions, which makes functor-argument application at least less straightforward. Instead of “λx [COW(x) & BROWN(x)]”, the meaning of brown cow therefore appears as “[COW(_)BROWN(_)]” in his formal language, conjoined by M-junction. Finally, Pietroski holds that dyadic predicates are sufficient to represent concept relations (as opposed to the profligate adicity of predicates in logic). In his approach, every relational aspect therefore has to be introduced by D-joining an atomic dyadic concept with a monadic concept specifying the internal relational slot. This operation is exemplified with the concept ‘above the cow’ (p. 104), where the existential quantifier is introduced syncategorematically to bind the internal unsaturated slot.
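
To make the compositional machinery concrete, here is a minimal sketch of how I read the restricted apparatus: monadic concepts modeled as predicates over a toy domain, M-junction as predicate conjunction, and D-junction as conjunction plus existential closure of the internal slot. The Haskell names (mJoin, dJoin, the toy lexicon and domain) are my own illustrations, not Pietroski’s notation.

module ConjoinSketch where

-- Toy domain of entities (invented for illustration).
data Entity = Cow1 | Cow2 | Bird1
  deriving (Eq, Show, Enum, Bounded)

domain :: [Entity]
domain = [minBound .. maxBound]

type M = Entity -> Bool             -- classificatory monadic concept (type M)
type D = Entity -> Entity -> Bool   -- relational dyadic concept (type D)

-- M-junction: conjoin two monadic concepts, as in "[COW(_)BROWN(_)]".
mJoin :: M -> M -> M
mJoin p q x = p x && q x

-- D-junction: conjoin a dyadic concept with a monadic restriction on its
-- internal slot and close that slot existentially, as in 'above the cow'.
dJoin :: D -> M -> M
dJoin r p x = any (\y -> r x y && p y) domain

-- Toy lexicon.
cow, brown :: M
cow   x = x == Cow1 || x == Cow2
brown x = x == Cow1

above :: D
above Bird1 Cow1 = True
above _     _    = False

brownCow, aboveTheCow :: M
brownCow    = mJoin cow brown     -- 'brown cow'
aboveTheCow = dJoin above cow     -- 'above the cow' (definiteness ignored)

On this sketch, no concept of adicity higher than two is ever built: dJoin immediately returns a monadic concept again, in line with the restriction to types M and D.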

I will not say much about Chapters 3 to 7, in which Pietroski motivates his approach in substantial and respectable depth. Chapter 3 retraces the development from Frege via Tarski to Church (rather than via Carnap to Montague) as the basis for his restricted Tarski-style formalism. Chapter 4 elaborates on the Liar paradoxes, and Chapter 5 discusses (event) framing effects, taking both as indications of the need to rule out truth-theoretic accounts of meaning. Chapter 6 turns to linguistic evidence, including problems posed by lexical semantic polyadicity, argument linking, plurals, and (shifts between) mass/count word senses. Chapter 7 goes even further and discusses “minimal semantic instructions” in his framework, i. e., the meaning composition of linguistic expressions containing tense, relative clauses, negation, and quantification. Chapter 8 (“Reprise”) briefly summarizes the basic tenets of the book.

Suffice it to say that all this is presented in an informed manner on a broad foundation of knowledge, with detailed argumentation (sometimes “somewhat tediously” [p. 144], as he himself admits) based on examples that are well known to members of the Linguistics & Philosophy community. The value of CM lies in its competent critique of logical approaches to meaning (composition) without dismissing them. I expect it to be a source of fruitful discussion, especially among theoreticians of philosophical logic. As a Cognitivist, I can subscribe to large parts of Pietroski’s argumentation, especially when it comes to the linguistic or cognitive topics. In the following, I will elaborate on points of lesser agreement.

As a start, there are some issues with terminology. First, calling natural languages “Slangs” is awkward for the ordinary linguist. Second, the whole talk of “meanings as instructions (to fetch concepts)”, e. g., “M-join(fetch@BROWN, fetch@COW)” (simplified here), is an unfortunately common case of both interpretative wording and implicit, inadequate homunculization. Consider a speaking person with some content-to-be-uttered: how are the pronunciations accessed, and who is the instructor? This criticism also applies to Pietroski’s occasional use of “procedure” or “algorithm”, and shows that he takes a procedural perspective where a declarative view would be appropriate. In other words, semantics (a term Pietroski eschews, by the way) should rather be viewed as an interface between syntax and meaning (conceptual content), as proposed by some, with (Bierwisch, Lang) or without (Jackendoff) a discrete semantic level. This is explicitly denied by Pietroski (“I don’t posit any ‘interface’ between syntax and meaning”, p. 292).
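
For what it is worth, the procedural/declarative contrast can be illustrated with a small sketch (again my own, with invented names): the “instruction” talk amounts to a term language of fetch and M-join steps, while the declarative alternative simply interprets such terms as concepts, with no instructor or executing homunculus in sight.

module InstructionSketch where

-- Toy domain and monadic concept type (invented for illustration).
data Entity = Cow1 | Cow2 deriving (Eq, Show)
type M = Entity -> Bool

-- The 'instruction' reading: a term language of fetch / M-join build-steps.
data Instr = Fetch String | MJoin Instr Instr

-- A toy conceptual lexicon addressed by concept names.
lexicon :: String -> M
lexicon "COW"   = \x -> x == Cow1 || x == Cow2
lexicon "BROWN" = \x -> x == Cow1
lexicon _       = const False

-- The declarative reading: an interpretation function over the same terms.
-- Nothing here 'executes' anything; it merely states what each term means.
interp :: Instr -> M
interp (Fetch c)   = lexicon c
interp (MJoin a b) = \x -> interp a x && interp b x

-- "M-join(fetch@BROWN, fetch@COW)", read declaratively.
brownCow :: M
brownCow = interp (MJoin (Fetch "BROWN") (Fetch "COW"))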

This brings me to the presentation of cognitive semantic ideas in CM, and to referencing in general. It is quite astonishing how many people are cited or referred to: scarcely anyone from the who’s who of logic is missing (Carnap being the notable exception), not even Leibniz, Descartes or Kant. Unfortunately, the situation is very different for cognitive semantic linguists, although CM represents a cognitive approach to semantics and meaning. While Fillmore (event participants and framing), Jackendoff (internal semantics) and Kamp (discourse representation formalism, tense) at least get mentioned in one or two footnotes, Lakoff (quantifiers as predicates, polysemy) does not appear at all, let alone Langacker or Goldberg (constructionist analogues of argument linking).

There is a similar asymmetry with regard to the content of CM. While Tarski and Church receive an in-depth treatment, the linguistic parts appear rather superficially collected and handled. For example, while tense receives the standard Reichenbachian treatment, the intricacies of aspect are left out. From Williams, Pietroski borrows the distinction between external and internal arguments. (1) is his formalization of She stabbed him (p. 320).

(1)

PAST-SIMPLE(_)[EXTERNAL(_,_)[FEMALE(_)FIRST(_)]]

STAB(_)[INTERNAL(_,_)[MALE(_)SECOND(_)]]

Leaving aside the strange handling of syntactic pronominal indices as predicates (which he himself admits), his representation of the structural external/internal distinction (with predicates generalizing theta roles) seems dubious to me.
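
A hedged reconstruction of the truth conditions I take (1) to encode may help here: events and individuals in a toy model, EXTERNAL and INTERNAL as dyadic concepts generalizing theta roles, and the whole sentence as an M-joined monadic event concept that is closed existentially. The model, the names, and the omission of the indexical predicates FIRST and SECOND are my simplifications, not Pietroski’s derivation.

module StabSketch where

-- Toy model containing two individuals and one event (invented).
data Thing = She | He | StabEvent
  deriving (Eq, Show, Enum, Bounded)

domain :: [Thing]
domain = [minBound .. maxBound]

type M = Thing -> Bool
type D = Thing -> Thing -> Bool

mJoin :: M -> M -> M
mJoin p q x = p x && q x

dJoin :: D -> M -> M
dJoin r p x = any (\y -> r x y && p y) domain

-- Toy monadic concepts (tense is crudely modeled as a property of the event).
pastSimple, stab, female, male :: M
pastSimple = (== StabEvent)
stab       = (== StabEvent)
female     = (== She)
male       = (== He)

-- Dyadic concepts generalizing the external/internal argument relations.
external, internal :: D
external e x = e == StabEvent && x == She
internal e y = e == StabEvent && y == He

-- (1) as a conjunction of monadic event concepts (FIRST/SECOND omitted).
sentence1 :: M
sentence1 = foldr1 mJoin
  [ pastSimple
  , dJoin external female     -- EXTERNAL(_,_)[FEMALE(_)...]
  , stab
  , dJoin internal male       -- INTERNAL(_,_)[MALE(_)...]
  ]

-- Existential closure over events: 'She stabbed him' is true in this model.
true1 :: Bool
true1 = any sentence1 domain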

Quantification is treated à la mode in CM. But in (2), the formal representation Pietroski presents for Every spy arrived (p. 331), it is by no means clear to the reader how the maximality operator is compositionally prefixed to the predicates.

(2)
[EVERY(_)[INTERNAL(_,_)MAX:A-SPY(_)]]
[EXTERNAL(_,_)MAX:ARRIVED(_)]
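
To be clear about what is at stake, here is one hedged way to spell out the truth conditions that (2) is presumably after, with MAX read as “the maximal plurality satisfying a monadic concept” and EVERY as inclusion between two such maximal pluralities. This is only my illustration of what the operator is supposed to deliver; it does not answer the compositional question of how MAX gets prefixed to the predicates.

module EverySketch where

-- Toy model (invented for illustration).
data Thing = Spy1 | Spy2 | Civilian
  deriving (Eq, Show, Enum, Bounded)

domain :: [Thing]
domain = [minBound .. maxBound]

type M = Thing -> Bool

-- MAX: the maximal plurality of things satisfying a monadic concept.
maxOf :: M -> [Thing]
maxOf p = filter p domain

-- EVERY, read as inclusion between two maximal pluralities.
every :: M -> M -> Bool
every restrictor scope = all (`elem` maxOf scope) (maxOf restrictor)

spy, arrived :: M
spy     x = x == Spy1 || x == Spy2
arrived x = x /= Civilian      -- in this model, exactly the spies arrived

-- 'Every spy arrived' comes out true in this model.
everySpyArrived :: Bool
everySpyArrived = every spy arrived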

Let me summarize this a bit. Yes, I think Pietroski succeeds in exposing what I regard as a neglected issue in logical semantics: the fact that meanings are uniformly construed as properties represented by lambda expressions (i. e., as taking a semantic argument; e. g., “λx COW(x)” for cow, “λx BROWN(x)” for brown), while their linguistic expressions obviously can differ in syntactic argument structure (yet CM offers no alternative for the corresponding distinction between referential and non-referential semantic arguments).

And yes, I also think that current semantic composition mechanisms are too powerful. However, Pietroski throws out the baby with the bath water when abandoning variables, predicates with 3+ arguments, and lambda calculus altogether. Before doing that, he should have shown that his formalism is able to deal with the intricate aspects of semantics discussed in the last 40 years or so (the distinction of linguistic and non-linguistic concepts; detailed semantic analyses of lexical items; principles of polysemy; coherently handling linguistic domains like gradation and quantification in language understanding, production, and learning), and that the desired cognitive formalization cannot be achieved by restricting and extending the available mechanisms. For example, it is hard to conceive of decompositional semantic approaches without named variables, and personally, I think that variable-binding quantifiers, not variables, are the real problem (note also the heterogeneity of “∃” and “every(_)” in CM).

Most importantly, with his internalist Fodorian semantics Pietroski has lost what is so important both for logic and cognition: the relation to the world. The count/mass-distinction and the problems it poses for semantics and ontology can be used to exemplify this point. His solution, as I understand it, is to have distinction-less stem-concepts (marked with “a”) for all nouns (e. g., “aFISH(_)”), which can be joined with certain distinguisher-concepts to yield countable or mass concepts (similar to classifier languages). Stein’s famous quotation, slightly modified, fits perfectly as an objection here: “A rose is an object is an object!” In other words, while there seem to be ways of coercion between count and mass, rose denotes countable objects, and before abstracting to some hypothesized common denominator, one might rather consider formalizing the mechanisms of coercion as part of a general theory of polysemy. Correspondingly, given the importance of the structure of the (if only perceptual) world, and recent widespread interest both in cognition and in what there is or can be (experienced) (Carstensen 2011, Decock 2018, Zlatev 2016), one should start there and then build or modify formal systems accordingly.
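
The stem-plus-distinguisher reading just sketched can be made concrete as follows; the toy domain of fish “portions”, the concept names, and the M-join operation are again my own illustration of how I understand the proposal, not Pietroski’s formalization.

module MassCountSketch where

-- Toy domain of fish 'portions' (invented for illustration).
data Portion = WholeFish1 | WholeFish2 | FishMeatChunk
  deriving (Eq, Show)

type M = Portion -> Bool

mJoin :: M -> M -> M
mJoin p q x = p x && q x

-- Number-/count-neutral stem concept, corresponding to "aFISH(_)".
aFish :: M
aFish _ = True        -- everything in this toy domain is fish-stuff

-- Distinguisher concepts that yield count or mass readings when M-joined.
countable, mass :: M
countable x = x == WholeFish1 || x == WholeFish2
mass      x = x == FishMeatChunk

-- 'a fish' (count) vs. 'fish' (mass), roughly.
fishCount, fishMass :: M
fishCount = mJoin aFish countable
fishMass  = mJoin aFish mass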

References

  • Carstensen, Kai-Uwe. 2011. Toward cognitivist ontologies. Cognitive Processing 12(4). 379–393.

  • Decock, Lieven. 2018. Cognitive metaphysics. Frontiers in Psychology 9(1700). 1–11.

  • Speaks, Jeff. 2018. Theories of meaning. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2018 edition). https://plato.stanford.edu/archives/win2018/entries/meaning/ (accessed 3 January 2019).

  • Zlatev, Jordan. 2016. Turning back to experience in cognitive linguistics via phenomenology. Cognitive Linguistics 27(4). 559–572.

About the article

Published Online: 2019-09-14

Published in Print: 2019-11-03


Citation Information: Zeitschrift für Sprachwissenschaft, Volume 38, Issue 2, Pages 299–303, ISSN (Online) 1613-3706, ISSN (Print) 0721-9067, DOI: https://doi.org/10.1515/zfs-2019-2005.

© 2019 Carstensen, published by De Gruyter. This work is licensed under the Creative Commons Attribution 4.0 License (CC BY 4.0).
