CC BY 4.0 license. Open Access. Published by De Gruyter, February 15, 2023

Towards a unified representation of linguistic meaning

  • Prakash Mondal
From the journal Open Linguistics


Natural language meaning has properties of both cognitive representations and formal/mathematical structures. But it is not clear how they actually relate to one another. The central aim of this article is to show that properties of cognitive representations and formal/mathematical structures of natural language meaning, albeit apparently divergent, can be united, as far as the basic properties of semantic structures are concerned. Thus, this article will formulate unified representations for semantic structures. With this goal, this article takes into account standard formal-semantic representations and also Discourse Representation Theory (DRT) representations on the one hand and semantic representations in different versions of Conceptual/Cognitive Semantics (Jackendoff’s, Langacker’s and Talmy’s approaches to Conceptual/Cognitive Semantics) and representations of Mental Spaces (Fauconnier’s approach) on the other hand. The rationale behind the selection of these approaches is that the representations of semantic structures under these approaches are all amenable to unification. It must be emphasized that showing that the representations of semantic structures under these approaches can be unified does not simply amount to unifying these theories/approaches in toto. Rather, it is to demonstrate that cognitive representations and formal/mathematical structures can be shown to be inter-translatable for at least some accounts of linguistic meaning.

1 Introduction

Linguistic meaning evinces facets of cognitive representations and formal/mathematical structures. The central objective of this article is to show that they can be integrated and unified by way of being translated into one another. The underlying assumption is that the logical organization of natural language is fully compatible with the cognitive organization of linguistic structures. On the one hand, semantic structures have been analyzed in the tradition of formal semantics in terms of set-theoretic structures that have an externalist orientation (Partee 2004). In other words, linguistic expressions are mapped onto set-theoretic structures, and these set-theoretic structures have extensions in the world out there. Linguistic meaning in this tradition has been associated with denotation and truth values. So specifying the conditions under which a sentence is true is actually a specification of the meaning of the sentence. Works in formal semantics (Chierchia and McConnell-Ginet 1990, Larson and Segal 1995, Heim and Kratzer 1998) have followed in these footsteps and carried forward the tradition. Further enrichments have come from Montague grammar (Dowty 1979). Meaning is thus represented as a formal object derived compositionally from linguistic expressions in logical formulas but devoid of any psychological anchoring. On the other hand, semantic structures in cognitive/conceptual semantics are patterns of conceptualization grounded in the mind (Jackendoff 1990, 2002, Langacker 1987, 1999, Talmy 2000). On this view, semantic structures are themselves cognitive structures. Clearly, there seems to be a tension between the purely formal and abstract properties of linguistic meanings and the cognitive properties of linguistic meanings that are grounded in the workings of our cognitive machinery. Purely abstract properties appear to be non-embedded, while cognitive properties of semantic structures are embodied and also embedded.
However, semantic structures in formal semantics have extensions in the real world or in some model of the real world (as in the model-theoretic view of semantics), and by virtue of this extensional orientation, abstract formal properties of semantic structures come to be related to the entities and their particular categories and grouping in the world out there. In this way, semantic structures in formal semantics come to be located in the world.

This consideration notwithstanding, the tension remains, for the properties of cognitive representations are ultimately properties or categories of the cognitive organization located in brains, whereas abstract formal properties of linguistic meanings are not properties or categories of brains or minds because they are categories of the outer world. Thus, their inherent properties seem to be irreconcilable. But, if our goal is to come up with unified semantic representations, we need to have an operational demarcation of semantic structures. Hence, it would be useful if the properties of semantic structures can be specified one way or another. Cleaving to one specific view of semantic structures in finding out the common principles of semantic representations will be irremediably lopsided. Also, in view of the fact that facets of both cognitive representations and denotative set-theoretic structures can be harmoniously found in semantic structures (Zwarts and Verkuyl 1994, Hamm et al. 2006), it is vital to explore semantic structures in terms of both cognitive representations and set-theoretic structures. The aim of this article is to show that this is indeed possible. We hope to do this not merely by showing that aspects of cognitive representations and denotative set-theoretic structures are compatible, as Zwarts and Verkuyl have done, but by showing how the representations in both frameworks can be unified with each other. As a result, we shall arrive at a sort of unified representation that encodes aspects of both cognitive representations and set-theoretic structures in a mutually harmonious fashion. It is this unified representation of linguistic meanings that can help define shared cognitive principles and formal constraints that underlie representations of linguistic meanings.
With this goal in mind, we shall first look at the properties of linguistic meanings in formal semantics and then those in cognitive/conceptual semantics, and finally, an attempt will be made to show how both systems of representation can be united. The goal here is to formulate some general principles of equivalence between cognitive/conceptual representations and set-theoretic structures in order that a unified system of representation can be derived. The unified system of representation will unite and reflect the basic characteristics and aspects of both cognitive/conceptual representations and formal-logical structures. Since the goal is not to unify the divergent theories/approaches as a whole, there may be complex and idiosyncratic aspects of cognitive/conceptual and formal-logical approaches that may not be specified in the general principles. But the question of whether complex and idiosyncratic aspects of either cognitive/conceptual approaches or formal-logical approaches can be derived from the unified system of representation remains open.

Before we proceed further, a caveat is in order. The discussion of semantic structures in this article will be based on linguistic structures that occur primarily in written texts, and hence, this may be taken to differ from spoken language semantics in the sense Cienki (2017) specifies it. Although this may be supposed to spring from the written language bias in much of linguistics (Linell 2005), the hope is that the spoken language semantics can also be derived from the unified system of representation of semantic structures (see Du Bois 2003 for general discussion).

This article is structured as follows. Section 2 outlines the formal semantic approach to linguistic meaning along with its philosophical commitments. This section also touches upon the issue of semantic variation that the devices of formal semantics can permit for cross-linguistic generalizations. Section 3 focuses on aspects of semantic structures in Conceptual/Cognitive Semantics and how variation in semantic structures across languages is accommodated within this approach. This helps see how flexible the theoretical apparatus within this approach is. Then Section 4 starts with the analysis made by Zwarts and Verkuyl (1994) and goes further in showing how semantic representations on different approaches under the broader gamut of formal semantics and Conceptual/Cognitive Semantics can be unified. Section 5 reflects on some residual philosophical issues that relate to the intrinsic ontological incompatibility between cognitive representations and formal/mathematical structures and shows a plausible way out of the dilemma. Finally, some concluding remarks on the nature of unified representations of linguistic meanings are offered in Section 6.

2 Aspects of semantic structures in formal semantics

The central problem in the study of meaning in the formal semantics tradition has been the connection of language to the world. This actually dates back to Frege (1892), who focused on the referential connection between linguistic expressions and their real-world correlates. This referential connection can help relate judgeable thoughts to what they stand for. All judgeable thoughts for Frege thus consist of concepts or relations, which are unsaturated (such as verb predicates), and objects, which are by default saturated (such as the arguments or terms of a predicate). These components of judgeable contents are themselves non-judgeable. He made a distinction between sense and reference to talk about distinct ways in which different categories of linguistic expressions make correspondences with the outside world. Judgeable contents such as complete sentences are checkable for the validation of truth or falsity, whereas non-judgeable contents can also establish correspondences with the world in terms of whether they help determine a referent or simply stand for referents. If a linguistic expression (say, a name) simply stands for a referent in the world, that referent is its reference. A verb stands for a set of referents because it is unsaturated and, thus, for example, ‘dance’ denotes a set of dancers. On the other hand, any expression also manifests a way or mode of determining its referent; this is the sense of the given expression. The verb ‘dance’ has senses such as ‘moving one’s body in relation to the rhythm of music’, ‘moving up and down fast’, etc. But Frege also made it clear that sense is not a subjective idea.

That is because, for something to be the sense of a linguistic expression, it has to be objectively verifiable as being the case or inter-subjectively acceptable. The problem of language–world connections in relation to linguistic meanings has also been dealt with in more or less Fregean terms in the current philosophical tradition (Lewis 1972, Putnam 1975, Davidson 2001). In fact, it is a problem that relates to the notion of intentionality too – the directedness of mental states towards entities in the world (Searle 1983).

Most significantly, Frege (1892) did not approve of the inner states of the mind or psychological states in the description of linguistic meanings. Although Frege (1979) was well aware that natural language evinces both logical and psychological/cognitive aspects, the part of language that admits of logical inferences being drawn smoothly was of great concern to Frege. And the anti-psychologism has remained deeply entrenched in the tradition of formal semantics even today.[1] The denotations of various categories of linguistic expressions, such as nouns, verbs, and adpositions, are thus extra-psychological categories of entities. Denotations pick out partitions, combinations, and collections of entities in the actual world, and thus establish non-psychological correspondences between linguistic expressions and the world out there. Consequently, the properties attributed to linguistic expressions in their denotative meanings cannot be considered to be cognitive properties. These properties are generalizations over the formal abstractions of real-world entities. In spite of this being the case, the only possible approach towards a rapprochement between denotational semantics and psychological/cognitive representations is Discourse Representation Theory (DRT) (Kamp and Reyle 1993), which includes a level of semantic representation that specifies discourse context representations determining truth conditions and interpretation possibilities. It is also noteworthy that these discourse context representations are believed to ultimately originate from mental representations. However, there is no denying that the discourse context representations are actually constrained by a realist or objective interpretation of the context representations in discourse (Steedman and Stone 2006).
Hence, it is really hard to detach the realist or objective interpretation from the formal representations of linguistic meanings in the overall framework of formal semantics, and the tension between cognitive aspects and formal properties of linguistic meaning remains.

In a nutshell, linguistic meanings in formal semantics are concerned with the ways truth values of sentences are dependent on the meanings of the sentence parts and also with the ways in which truth values of sentences are related to one another. Hence, semantic structures have something to do with the composition of the formulas of the parts of an expression and also with the relations between sentences in terms of their truth values. Variation in semantic structures can be traced to the ways in which the formulas of the parts of a linguistic expression are supposed to be combined. For instance, Bach and Chao (2009) argue that a sentence like ‘John walks’ would be analyzed in Straits Salish as walks(x) ∧ john(x) – where ‘walks’ and ‘john’ are predicates – but in English, this would have the form walks(j) – where ‘j’ is ‘John’. The difference here lies in whether a pronoun is incorporated into the predicate, as in Straits Salish, or excluded from the predicate, as in English. Since pronouns are affixed to predicates in Straits Salish, the name ‘John’ is treated as a predicate in itself and the whole expression looks like an open formula without the variable (i.e., ‘x’) being bound. In contrast, ‘John’ is a constant in English and is independent of the predicate that is true of it. Likewise, it is plausible that different languages would have different parts as predicates and variables. Another possibility is that the variable for the object noun phrase is incorporated into the predicate, with the arity of the predicate consequently reduced. Thus, it will not be surprising if it turns out that both arguments of a 2-place predicate are also absorbed into the predicate, as is often observed in impersonal passive constructions. The following example from Turkish reflects this possibility.

(1) bu şato-da boğ-ul-un-ur.
‘(One) is strangled (by one) in this chateau’.

There are, of course, typical cases of passivization in languages that demote the subject (as in English, French, etc.), but Turkish favours the demotion of the subject at two stages (the original subject demotion and then another level of demotion of the grammatical subject). This can be better understood in terms of two levels of argument absorption (or rather arity reduction of the predicate) that consist in one level of subject demotion at first and then the next level of demotion, as discussed in Müller (2013).
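The two-step argument absorption just described can be sketched as a toy computation. The representation below (predicates as name–arity pairs) is our own illustrative device, not part of Müller's or Bach and Chao's formalisms.

```python
# Toy model (our own illustration): predicates as (name, arity) pairs,
# with argument absorption reducing arity by one at each step, as in
# the two-level demotion of the Turkish impersonal passive above.

def absorb_argument(predicate):
    """One level of demotion: one argument is absorbed into the
    predicate, reducing its arity by one."""
    name, arity = predicate
    if arity == 0:
        raise ValueError("no argument left to absorb")
    return (name + "+pass", arity - 1)

# 'strangle' as a 2-place predicate: strangle(x, y)
strangle = ("strangle", 2)

# First demotion (ordinary passive): strangle(x, y) -> strangled(y)
step1 = absorb_argument(strangle)
# Second demotion (impersonal passive): both arguments absorbed
step2 = absorb_argument(step1)

print(step1)  # ('strangle+pass', 1)
print(step2)  # ('strangle+pass+pass', 0)
```

On this sketch, the impersonal passive corresponds to a predicate whose arity has been driven all the way to zero, which is one way of reading the claim that both arguments of a 2-place predicate can be absorbed.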

In addition, there can be cases in which variables in predicates corresponding to optional parts of an event or relation – that is, adjuncts – are incorporated into the relevant predicates. Thus, for example, the implicative object (IMPL) in Rembarrnga, an Australian language, is incorporated into the predicate. Evans (1993) points out that such objects are actually adjuncts and often carry semantic roles such as beneficiary, possessor of the object, or cause. They acquire the argument status by way of the applicative prefix ‘-pak’, as shown in (2).

(2) matayin-Ø ŋan-pak-ŋu-ŋ
(food from) ceremony-NOM 3SG-1SG.IMPL-eat-PAST
‘She ate the ceremony food on me’
(‘I was responsible for the ceremony (and will also be responsible for her punishment).’)
[NOM = nominative case; PAST = past tense]

In such cases, the variable for the adjunct noun phrase, which is supposed to be outside the predicate-argument structure, becomes part of the relevant predicate. In (2), the adjunct is marked in the English translation. It is clear that the first person singular noun (also labelled as being in minimal number, which is an alternative term for singularity) as marked by ‘-pak’ is the implicative object. That is, initially, what we have is this: P(x, y) ∧ Q(z), where P is the main verb predicate (meaning ‘eat’) whose core terms or arguments are x and y and whose adjunct is introduced by the predicate Q (meaning something like ‘through-the-personal-sponsorship-of’) whose argument is z. Then, P(x, y) ∧ Q(z) is mapped onto P′(z, y), where P′ may be considered to be a slightly altered version of P (say, ‘responsible-for (someone’s-eating)’). Both P′ and z are implicated in such cases of inferentially coded syntactic structuring, which Evans calls ‘discourse placedness’. The change in semantic structure from P(x, y) ∧ Q(z) to P′(z, y) is not merely pragmatic. It is syntactic as well since the implicative object is syntactically marked and also follows a specific precedence hierarchy in argument marking: IMPL > indirect object > object. The difference from English is that English does not grammatically mark an adjunct as an argument, whereas Rembarrnga does mark it in the verb morphology. This makes it plausible that a number of adjuncts may replace the original core arguments of a predicate by being incorporated into the predicate. Furthermore, Partee (1991) argued that languages could differ in terms of which domains (events or nominal arguments) they pick up to express quantification.

(3) ngapa o-ju puta-nga-nja.
water AUX-1SG PART-drink-IMP
‘Just drink some (not all) of my water.’
(4) pirdirri, parraja, pangurnu, muku-kujukuju-rnu.
seedcakes, coolamon, scoop, UNIV-toss-PAST
‘The seedcakes, coolamon, and the scoop, he tossed them all down (swallowed them).’

[AUX = auxiliary; IMP = imperative; PART = partial quantification; UNIV = universal quantification]

Warlpiri, an Australian language, expresses quantification through pre-verbs such as ‘puta’ (meaning ‘partly’ or ‘some’) and ‘muku’ (meaning ‘fully’ or ‘all’), which form a unit with verbs. Given that the kind of quantification expressed by these pre-verbs is adverbial in nature, these quantificational expressions range over the domain of events. That is, the meaning of (3) would be roughly this: just drink my water partially. In a similar manner, (4) would roughly mean that the person referred to in the sentence swallowed the seedcakes, coolamon, and the scoop fully. However, Partee also notes that the adverbial quantification here can be understood as quantification over the arguments from the role of the participants in the specific arguments. Hence, for instance, the quantification expressed in (3) can be understood to be operating on the object argument (i.e., on ‘ngapa o-ju’) only in the sense that it is the (amount of) water which is to be drunk in partial quantity. Similarly, the quantification expressed in (4) is to be interpreted as operating over the object (i.e., on ‘pirdirri, parraja, pangurnu’) such that all of these items are swallowed by the person referred to in the sentence. In this way, the quantification expressed by noun phrases over sets of entities in English is expressed over events in Warlpiri. The domain of quantification can thus be restricted or extended across languages. This is another way of coding variation in the variables and operators that apply to them in certain domains. These operators can be missing, or the variables may be reduced/absorbed, or the scope of the operators can be extended in certain other cases (as in Warlpiri). Needless to say, the most significant advantage the formal analysis of natural language meanings affords is the precision with which certain well-defined formal properties of linguistic meaning can be generalized across types of syntactic structures in natural language.
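Partee's observation that the adverbial quantification can be re-read as quantification over an argument can be sketched with a toy model: the same quantificational condition is checked over two different domains. The condition and all the names below are our own assumptions, not Partee's formalism.

```python
# Toy model (our own construction): 'puta'-style partial quantification
# checked over two domains -- the subparts of a drinking event (the
# adverbial construal) and the parts of the water (the argument
# construal) -- showing that both construals share one condition.

def partially(realized, total):
    """'puta' / 'some (not all)': a nonempty proper part is affected."""
    return 0 < len(realized) < len(total)

# Adverbial construal: quantification over the event's subparts.
event_subparts = {"sip1", "sip2", "sip3", "sip4"}
realized_subevents = {"sip1", "sip2"}

# Argument construal: quantification over (parts of) the water.
water_parts = {"w1", "w2", "w3", "w4"}
drunk_parts = {"w1", "w2"}

print(partially(realized_subevents, event_subparts))  # True
print(partially(drunk_parts, water_parts))            # True
```

The point of the sketch is only that restricting or extending the domain of quantification, as the text describes for Warlpiri versus English, need not change the quantificational condition itself.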

3 Aspects of semantic structures in conceptual/cognitive semantics

While mathematical generalizations over specific categories of linguistic expressions across languages are characterized and specified in a precise fashion in formal semantics, the conceptual representations of language as part of the cognitive machinery are best captured in conceptual/cognitive semantics. This is what we turn to now. Cognitive linguistic approaches towards semantics (Langacker 1987, 1999, Talmy 2000) have taken a totally different view of natural language meaning. In contrast to formal semantics, conceptual/cognitive semantics advocates the view that semantic structures are not derived or mapped (only) from syntactic structures. Rather, semantics is an independent domain in the sense of not being dependent on syntax, and its units are conceptualizations ultimately grounded in the cognitive system and also connected to general (encyclopedic or pragmatic) knowledge. Symbolic (syntactic-phonological) units are mapped onto representations of conceptualizations that are anchored in the sensory-motor-perceptual processes. This lends credence to the fundamental idea that conceptualizations are naturally derived from aspects of perception, memory, and categorization. In this way, linguistic meanings are structured in terms of how they are conceptualized in the mind. Since linguistic structures themselves reflect cognitive structures, studying linguistic structures is then tantamount to making explorations into aspects of cognition as well. Aspects of embodiment emanating from sensory-motor experiences often determine the range of possible meanings. The following examples help illustrate the role of sensory-motor experiences in constituting the cognitive foundations of linguistic structures.

(5) The zebra stood in the middle of a wide grass field.
(6) The zebra stood in the middle of the river.

Sentence (5) makes sense as long as we make reference to our spatial experience and knowledge of grass fields. But (6) does not have the interpretation in which the zebra’s feet are in contact with the surface of a river because the surface of the river cannot support an animal as big as a zebra. Nor can the zebra maintain an upright position by being supported by the surface of the river. What blocks this interpretation is not intrinsic to the sentence alone and does not thereby come from within the sentence. After all, this interpretation is not logically impossible to derive. Rather, such an otherwise possible meaning is blocked by aspects of our spatial experience. In Langacker’s (1987, 1999) formulation of the cognitive structuring of linguistic structures, this can be understood in terms of the relationship between the landmark (LM), which forms the spatial background, and the trajector (TR), which is the focal entity. An LM and TR can also be thought of as the ground and figure in Talmy’s (2000) sense. In the present context, the surface of a river, the LM, cannot (though a grass field in (5) can) be in contact with, and thereby support, the zebra, the TR/the focal entity. Therefore, the range of possible meanings can be constrained by the cognitive structures latent in our sensory-motor-perceptual domains. In this sense, it appears that meanings are not always a function of the constraints that are imposed by grammar/syntax.

Further, Jackendoff (1990, 2002) has developed a theory of conceptual semantics within the general framework of cognitive semantics. A wide range of facts about meanings is captured in the theory. On this approach, the mind cannot relate to the world on its own; rather, some level of structure within the cognitive substrate has to do the job. It is conceptual structure (CS) that allows us to connect to the world via some sort of projection of the outer world within the mind. Hence, CS is a mental structure that encodes the world as human beings conceptualize it (Jackendoff 2002, 2007). Significantly, CS is independent of syntax but connected to it by an interface that has interface rules, which consist of words, among other things. These interface rules connect CSs to syntactic and phonological structures. There is no real distinction between linguistic rules and words, which form two poles of a continuum. CS, in virtue of being an independent level of thought and reasoning, builds structures in a combinatorial manner out of conceptually distinct ontological categories such as THING, PLACE, DIRECTION, TIME, ACTION, STATE, EVENT, SITUATION, PROPERTY, MANNER, and PATH. Combinatorial structures that are built out of such categories encode category membership, predicate-argument structure, and so forth. CS is linked to another mental structure called spatial structure (SpS), where various collections of information from the visual, haptic, auditory, motor, olfactory, kinaesthetic, and somatosensory systems converge. In this sense, SpS is a kind of level of the mind where correspondences are established between CSs and information from different sensory-motor-perceptual systems. This enables SpS to encode different sensory-motor-perceptual features (shape, depth, index, colour, dimension, etc.) of objects, entities, and space in language.
Interestingly, the sensory-motor-perceptual features of nouns/noun phrases can be characterized in terms of the combination of formal (the basic ontological category of entities), constitutive (the relation between an entity and its constituent parts), telic (the purpose or function of an entity), and agentive properties/features (an entity’s coming into being) as qualia structures in Generative Lexicon Theory (Pustejovsky 1995). Jackendoff’s CSs encode such qualia properties in lexical expressions; for instance, ‘music’ can be a dot product (a sort of concatenation) of sound and information in its formal qualia structure (represented as Sound • Information), apart from having notes/melody in its constitutive structure, the property of evoking affective states and aesthetic pleasure in its telic structure and also the property of being composed by someone in its agentive structure.
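As a schematic illustration, the qualia structure of ‘music’ described above can be laid out as a simple record. The encoding below is our own sketch, not Pustejovsky's notation.

```python
# Schematic rendering (our own encoding, not Pustejovsky's notation)
# of the qualia structure of 'music' as described above.

music_qualia = {
    "formal": ("Sound", "Information"),   # dot product: Sound . Information
    "constitutive": ["notes", "melody"],  # constituent parts
    "telic": "evoke affective states and aesthetic pleasure",
    "agentive": "composed by someone",    # the entity's coming into being
}

# Each of the four qualia roles is filled for this lexical entry.
print(sorted(music_qualia))  # ['agentive', 'constitutive', 'formal', 'telic']
```

The record format makes the four-way decomposition explicit: one slot per quale, with the formal quale holding the dot-product pair.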

The bipartite organization of the conceptual machinery between CS and SpS helps language connect to the world via a series of levels of mental organization. The following examples illustrate how CSs for linguistic structures can be laid out.

(7) Sunny opened the window.
(8) A professor wants a huge library.
(7′) [Situation PAST [Event CAUSE (Object SUNNY, Object WINDOW (+DEF), Event INCH
(State BE (Object WINDOW, Property OPEN)))]]
(8′) [Situation PRES [Event WANT (Object PROFESSOR, Object LIBRARY (Property HUGE))]][2]

Here, (7′–8′) provide the rough rendering of (7–8) in CS. In this context, CAUSE is a function in a 3-argument version whose structure looks like 〈(Object, Object, Event), Event〉, which indicates that the ordered triple (Object, Object, Event) is to be mapped onto an Event. The parentheses marked in bold enclose the arguments or terms of each of the functions written in capital letters. Similarly, WANT as a 2-place predicate requires two objects in an ordered pair 〈Object, Object〉. It is noteworthy that the ontological categories (such as Object, Situation, Event, etc.) introduce, and are also specified by, grammatical categories (such as the past tense (PAST) or the present tense (PRES)), some basic conceptual functions (such as CAUSE, BE, INCH (inchoative), etc.), and all lexical items (such as ‘window’, ‘professor’, etc.). These representations are thus supposed to be the conceptual correlates of the linguistic structures concerned.
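As an illustrative sketch (our own encoding, not Jackendoff's notation), the conceptual structure (7′) can be rendered as nested category–function–argument triples, which makes the typing of CAUSE as a 3-argument function explicit.

```python
# Our own encoding (not Jackendoff's notation) of (7') for
# 'Sunny opened the window': nested (CATEGORY, FUNCTION, arguments)
# triples, with CAUSE typed as <(Object, Object, Event), Event>.

cs_7 = (
    "Situation", "PAST",
    [("Event", "CAUSE", [
        ("Object", "SUNNY", []),
        ("Object", "WINDOW(+DEF)", []),
        ("Event", "INCH", [
            ("State", "BE", [
                ("Object", "WINDOW", []),
                ("Property", "OPEN", []),
            ]),
        ]),
    ])],
)

def arity(node):
    """Number of arguments a conceptual function takes in this encoding."""
    _category, _function, args = node
    return len(args)

cause = cs_7[2][0]
print(arity(cause))  # → 3, matching the 3-argument version of CAUSE
```

The nesting mirrors the bracketing of (7′): the Situation embeds the CAUSE Event, which in turn embeds the inchoative Event and the BE State.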

At this point, it may be worthwhile to emphasize that even though Jackendoff’s (1990, 2002) conceptual semantics can be located within the general framework of cognitive/conceptual semantics, the general orientation in cognitive semantics, especially in Langacker’s approach, may be more image-schematic (Langacker 1999, 32). In this context, Jackendoff (1996, 16–20) thinks that the system of representation of semantic structure in Langacker’s approach is actually symbolic because the notations specifying TR–LM interactions, profiling, viewpoint, etc. are not fully iconic. Here, it needs to be recognized that even though conceptualizations can be image-schematic, Langacker’s notations are symbolic. In any case, image-schematic conceptualizations need not be opposed to symbolic conceptualizations, for images may refer to or represent a variety or family of things: for example, a line drawing of ‘h’ may refer to a street plan, or a chair, or the letter ‘h’, and so on. Thus, images can have degrees of iconicity and hence can become quasi-symbolic (see Louwerse 2018, 577–9). This consideration would also apply to some criticisms of Jackendoff’s approach by cognitive linguists in the same journal issue as Jackendoff (1996). In addition, Jackendoff notes that metaphorical extension and/or image-schema transformation are crucial in cognitive semantics in general. For instance, the derivation of different field meanings (spatial path, change of property, scheduling, etc.) of an expression such as ‘to’ in English is achieved via metaphorical extension and/or image-schema transformation in cognitive linguistics in general, whereas different field senses would be parallel instantiations of a more general schema within Jackendoff’s approach (Jackendoff 1996, 22).
Even here, image-schema transformations and parallel instantiations of a more general schema can be shown to be more or less equivalent, insofar as each distinct instantiation of a general schema instantiates a mapping of the semantic argument structure onto the syntactic argument structure for each instance of image-schema transformation. This is exactly what Goldberg and Jackendoff (2004, 563–4) have done (see also Goldberg and Jackendoff 2005). As a matter of fact, Jackendoff (1996, 19–20) does say “I have no objection in principle to using circles, squares, and arrows instead of square brackets, parentheses, and functions. We should just be very clear about their status.” Overall, conceptual and cognitive-constructional representations have been shown to be integrated. But this is just one step.

Crucially, the position adopted and formulated in this article is simply that construction-based representations of semantic structures (as in Goldberg and Jackendoff 2004) and cognitive representations are not just similar and equivalent to one another in most cases – they can also be unified with formal-logical structures in formal semantics. This constitutes the distinctness of the present approach of a unified system of representation of semantic structures. More about conceptual semantic representations in relation to the formal properties of linguistic meaning will be discussed in the next section.

In a nutshell, CSs are specified by certain universal conceptual categories that are organized around certain basic conceptual functions. These conceptual functions help build conceptualizations that are anchored in the neuro-cognitive substrate. Suffice it to say, CSs have certain correspondences with syntactic structures that will become clearer once the task of sketching out the unified representation is undertaken in the next section.

Given that there are basic conceptual functions that specify the ontological categories in finer detail and take certain arguments embedded within them, semantic variation will arise from the variation in the availability of the actual conceptual functions and the embedded arguments within such functions. For instance, the conceptual function for specifying PATH as an ontological category is introduced by TO, whose structure is 〈x, PATH〉. Some verbs such as ‘enter’, ‘exit’, and ‘fall’ in English can incorporate the PATH component into their own specifications. But in most cases it is specified by way of particles such as ‘into’, ‘down’, ‘up’, etc. with motion verbs in English. Hence, Talmy (2000a) calls such languages satellite-framed languages since the particle encodes the PATH component. On the other hand, Romance languages such as Spanish, Italian, French, etc. have verbs of motion that do not require these extra-verbal particles and incorporate the PATH component directly within their conceptual specifications. These languages are hence called verb-framed languages. A parallel typology of verbs can also be constructed for languages that encode the manner of motion into verb specifications and those that do not. English verbs such as ‘amble’, ‘jog’, ‘float’, ‘strut’, etc. are of this type, whereas Indo-Aryan languages such as Hindi, Bengali, etc. encode the manner of motion in extra-verbal satellites. This has the typological consequence that we can actually have verbs in languages that happen to incorporate both the manner of motion and the PATH component (see Slobin 2004). Besides the conceptual components of motion, there are other conceptual ingredients of events that appear to vary across languages. These are change of state (as in ‘The lady straightened the cloth’), temporal contouring (as in ‘He talked on’), action correlating (as in ‘They marched along’), and action realization (as in ‘The thief was hunted down’).
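The satellite-framed/verb-framed split can be sketched as a difference in where the PATH component of a motion event is packaged. The toy encoding below, including all names in it, is our own illustration rather than Talmy's formalism.

```python
# Toy encoding (our own): a motion event decomposed into MOVE plus
# PATH and MANNER components, packaged differently by satellite-framed
# and verb-framed languages.

motion_event = {"MOVE": True, "PATH": "INTO", "MANNER": "floating"}

def satellite_framed(event):
    """English-style: the verb lexicalizes MANNER, a satellite
    (particle) carries PATH, e.g. 'float into'."""
    verb = event["MANNER"]
    satellite = event["PATH"]
    return (verb, satellite)

def verb_framed(event):
    """Spanish-style: the verb lexicalizes PATH; MANNER, if
    expressed, goes into an adjunct (e.g. a gerund)."""
    verb = event["PATH"]
    adjunct = event["MANNER"]
    return (verb, adjunct)

print(satellite_framed(motion_event))  # ('floating', 'INTO')
print(verb_framed(motion_event))       # ('INTO', 'floating')
```

The same conceptual ingredients surface in both outputs; only their distribution between verb and satellite/adjunct differs, which is the core of the typological contrast described above.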
From a related perspective, discourse-referential satellites that fix reference in a discourse (such as definite and indefinite articles in English, as in ‘He just saw a peacock. The peacock is gone now’) can be located at the extreme ends of the descriptive layer of a noun phrase consisting of the nominal head along with different kinds of modifiers such as demonstratives (DEM), numerals (NUM), and adjectives (A). Different languages tend to have patterns as variations of the symmetric structure DEM NUM A Noun A NUM DEM (Rijkhoff 2008, 801), and such discourse-referential satellites tend to have a parallel organization in the clause – for instance, in Jakaltek, the exhortative mood in a clause and non-specificity in a noun phrase are expressed by the same form (Rijkhoff 2002, 225–9).

Furthermore, Talmy (2000a) lists a number of other conceptual categories that can be expressed either in the verb root or in the relevant satellites and inflections. For instance, the degree of realization of a state varies along a continuum in terms of whether the action or state is fully realized or removed from realization by certain approximations. English has adverbs such as ‘almost’, ‘barely’, ‘hardly’, etc. for such cases, while other languages may have satellites (such as Atsugewi) for the expression of this conceptual category. Languages may also realize phases [3] of an action or state in the verb root or the satellites. Phases are changes in the status of an action that are usually expressed in English through ‘starting’, ‘continuing’, and ‘stopping’. While continuing and starting are monotonicity-increasing in the sense that if one continues/starts reading a section of a book, he/she continues reading the book, ‘stopping’ is monotonicity-decreasing because if one stops reading a book, he/she stops reading a section of the book. Another interesting conceptual category is the rate of speed at which some action is continued or completed. While fast or slow speed may be lexicalized in verbs as in English verbs such as ‘trudge’, ‘walk’, ‘run’, ‘jog’, etc., it can be expressed through satellites as well, as in Atsugewi and Dyirbal. Causativity is another conceptual category in terms of which an event is expressed as caused rather than happening on its own. Yiddish, Japanese, and many Indo-Aryan languages express this through inflections and also lexicalized verbs (the difference between ‘die’ and ‘kill’ in English, for instance, is indicative of the absence and presence of lexicalized causativity). Moreover, there are some other facets of semantic variation that are due to the constraints of conceptual organization on grammars, as Talmy (2011) points out.
For instance, the topological principle determines the conceptual categories and the member concepts included as against those excluded from it. This applies to the schemas of closed-class expressions referring to space, time, and other possible categories, and excludes Euclidean properties such as absolutes of distance, size, shape, or angle from these conceptual schemas. The topological principle governs and also conserves magnitude neutrality, in that English prepositions such as ‘across’, ‘through’ are not sensitive to the size or shape of the object the complement of the preposition denotes. That is why both ‘The tiny spider moved across my palm’ and ‘The train ran across the state’ or both ‘We drove through the town center’ and ‘The mosquito flew through the narrow passage between my fingers’ are possible. Talmy also states that perhaps no two languages exist that differ only in whether they have two distinct types of closed-class expressions encoding differences only with respect to magnitude. Another important observation is that the open-class system (such as nouns, verbs, etc.) as a whole reflects the conceptual content and the closed-class system (such as prepositions, inflections, etc.) represents the CS, although individual items of any class may exhibit opposite functions. Moreover, languages vary in terms of the combinations of categories (such as tense, aspect, modality, number, etc.) they pick to express the CS, even though the inventory of these categories cannot be said to be absolutely universal in all languages.

In all, it seems clear that semantic variation in conceptual/cognitive semantics is a function of the variation in the expression of conceptual categories, as opposed to the variation in the expression of operators and variables in formal semantics. Be that as it may, the variation in the expression of operators and variables can be shown to be realized or couched in terms of variation in the expression of conceptual categories. But we aim to do much more than that. The categories and variables in formal semantics and those in cognitive/conceptual semantics can be harmonized with one another. This is what we shall turn to now.

4 Towards a unified representation of semantic structures

The goal in this section is to show how the formal properties of semantic structures can be in harmony with the conceptual/cognitive properties of semantic structures. Before we proceed further, one may first note that unified representations of linguistic meanings can have the same sort of descriptive and explanatory power in revealing aspects of formal universals in semantics as any adequately equipped semantic formalism. That this task is fundamentally significant is aptly expressed by Partee (1993) – “I’m inclined to believe in strongly construed formal universals in semantics but not in anything like an innate universal stock of basic concepts underlying the lexicon. I don’t know how to make sensible empirical arguments about that, though” (Partee 1993, 9).

In addition, the need to bring matters of conceptual/cognitive representations to bear upon formal properties of linguistic meanings becomes more pressing when one considers a host of issues in semantic phenomena that demand references to mental states and cognitive representations (such as propositional attitudes conveyed by the propositional verbs like ‘believe’, ‘think’, etc. and their propositional complements). Partee (1979) recognized this long ago and hence expressed this: “What I have tried to suggest is that the linguist’s concern for psychological representation may be relevant to every semanticist’s concern for an account of the semantics of propositional attitudes. So far I don’t see how to achieve either goal; my only positive suggestion is that a good theory might be expected to achieve both at once” (Partee 1979, 9).

In recent times, Warglien et al. (2012) and Gärdenfors (2020) have also pointed to the necessity of integrating formal lexical-semantic properties of spatial expressions and event structures in general with conceptual spaces. The prevalent lacuna in the study of linguistic meanings is also articulated quite well by Krifka (2012): “On the one hand, the Frege/Montague research program, based on the idea that truth-conditions are the core ingredient of clause meaning and that meanings of complex expressions are computed from the meanings of the parts, has been extremely successful. On the other, it did not really address the central question: What, precisely, are the meanings of the smallest parts, the meanings of words, or rather, lexemes?” (Krifka 2012, 223).

In essence, having a unifying representation for the formal properties of linguistic meanings and the conceptual/cognitive properties can also serve to combine the benefits and advantages of having both sorts of properties in a single format. Linguistic meanings can thus be analyzed in a manner that partakes of aspects of the formal and cognitive approaches to meanings at the same time. From a slightly different and yet related perspective, Schiffer (2015) has argued that tensions, if any, between the compositional meaning theory in formal semantics and mental representations as part of linguistic competence have to be dispelled.

4.1 Formulating the general principles of a unified representation

Against this backdrop, we shall now explore ways of unifying semantic representations in cognitive/conceptual semantics with those in formal semantics. To that end, we shall first look at Jackendoff’s formulation of the fundamental tenets of conceptual semantics because its compatibility with formal semantics has already been discussed by Zwarts and Verkuyl (1994). The most fundamental organizing principle that underpins the formal structure of CSs in conceptual semantics is the basic organization of X-bar theory (Jackendoff 1990). Thus, the following principle of X-bar syntax would be somewhat fine-tuned to yield the basic organization of CSs.

(9) X → [N, V, A, P…]

Here, X can take any of the values from the collection [N, V, A, P…], where N is a noun, V is a verb, A is an adjective, and P is a preposition.

(10) [X0 __(〈YP, 〈ZP〉〉)] 〈═〉 [Entity0 F(〈Entity 1〉, 〈Entity 2, 〈Entity 3〉〉)]

The formulation in (10) is a correspondence rule (‘〈═〉’ designates the correspondence) relating (9) to the basic organization of CSs. Here, X0 = X and YP and ZP are the (optional) arguments or subcategorized phrases of the head X0 (i.e., of X in (9)). YP and ZP are mapped on to the argument-concepts (Entity 2 and Entity 3) of the conceptual function F in (10). The embedding of Entity 2 inside Entity 3 indicates that one can have multiple levels of self-embedding of CSs. Finally, Entity 1 corresponds to the subject for which X0 does not admit of any subcategorization. Entity 0 could be a THING (a noun phrase) or an EVENT (a verb phrase) or a SITUATION (a sentence/clause) or a PROPERTY (an adjective phrase) or even a PATH (a prepositional/postpositional phrase). The function F can assume several forms. The most representative ones are BE, STAY, GO, CAUSE, INCH, TO, and IN whose functional structures are given as follows.

(11) BE: 〈(X, Y), STATE〉 [BE maps (X, Y) to a state]
STAY: 〈(X, Y), EVENT〉 [STAY maps (X, Y) to an event]
GO: 〈(OBJECT, PATH), EVENT〉 [GO maps (OBJECT, PATH) to an event]
CAUSE: 〈(OBJECT, OBJECT, EVENT), EVENT〉 or 〈(OBJECT, EVENT), EVENT〉 [CAUSE maps either (OBJECT, OBJECT, EVENT) or (OBJECT, EVENT) to an event]
INCH: 〈STATE, EVENT〉 [INCH maps a state to an event]
TO: 〈X, PATH〉 [TO maps an X to a path]
IN: 〈X, PLACE〉 [IN maps an X to a place]
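The functional structures in (11) can be rendered as a small lookup table. The following Python sketch is a hypothetical encoding of my own for illustration (it is no part of the conceptual-semantics formalism): each conceptual function is paired with its tuple of argument categories and its output category.

```python
# Hypothetical encoding of the functional structures in (11): each
# conceptual function maps a tuple of argument categories to an
# output (ontological) category.
SIGNATURES = {
    "BE":    (("X", "Y"), "STATE"),
    "STAY":  (("X", "Y"), "EVENT"),
    "GO":    (("OBJECT", "PATH"), "EVENT"),
    "CAUSE": (("OBJECT", "EVENT"), "EVENT"),  # or (OBJECT, OBJECT, EVENT)
    "INCH":  (("STATE",), "EVENT"),
    "TO":    (("X",), "PATH"),
    "IN":    (("X",), "PLACE"),
}

def output_category(fname):
    """Return the ontological category a conceptual function yields."""
    return SIGNATURES[fname][1]
```

On this encoding, for instance, `output_category("GO")` returns `"EVENT"`, matching the gloss that GO maps (OBJECT, PATH) to an event.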

The sentences ‘Ron is in town’ and ‘Ron goes to the market every day’ will have the following representations.

(12) [Situation PRES [State BE (Object RON, Place IN (Thing TOWN))]]
[Situation PRES [Event GO (Object RON, Path TO (Thing MARKET (+DEF)))]]

It may be noted that the aforementioned representations have been simplified for the sake of readability. For instance, if the right-hand side in the formulation (10) is adhered to, [State BE (Object RON, Place IN (Thing TOWN))] in (12) will look like this: [State BE (〈Object RON〉, 〈Place IN 〈Thing TOWN〉〉)]. Likewise, [Event GO (Object RON, Path TO (Thing MARKET (+DEF)))] would look like this: [Event GO (〈Object RON〉, 〈Path TO 〈Thing MARKET (+DEF)〉〉)]. But this is immaterial for our purpose. We may merely note that the arguments of a given conceptual function are separated by a comma in (12). What is more interesting from our perspective is that every rule expanding a simpler CS into a more complex one will have a successive increment in the indexing of the ontological/conceptual category used.

Figure 1 shows the parallels between the basic organization of phrase structure and that of the CS for the sentence ‘Ron is in town’. One may note that the index of the ontological category STATE is incremented by 1 each time the conceptual function takes its arguments one by one. Thus, when the index is incremented by 1, the conceptual function BE takes the PLACE argument and then the new index is incremented by 1 when the THING argument is taken by the function. This may also be thought of in terms of currying, whereby any n-ary function is converted into a format of a unary function (Zwarts and Verkuyl 1994). In this way, F(X, Y) can be written as (F(X)) (Y). It is also important to observe that the index increment is analogous to the principle of succession in X-bar theory (Kornai and Pullum 1990). The principle of succession in X-bar theory states that for a context-free grammar with each non-terminal category having an index greater than 0, it obeys succession if and only if every rule rewriting some non-terminal Xn has a daughter labelled Xn−1. So the index decrement from X2 to X0 in Figure 1 is analogous to that from State 2 to State 0.
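The currying step mentioned above can be demonstrated directly. The sketch below is an illustrative Python encoding (BE is a mere stand-in function, not the formalism itself) of rewriting a binary conceptual function F(X, Y) as the unary chain (F(X))(Y).

```python
def curry2(f):
    """Rewrite a binary function f(x, y) as the unary chain (f(x))(y)."""
    return lambda x: lambda y: f(x, y)

def BE(obj, place):
    # illustrative stand-in for the conceptual function BE
    return ("STATE", "BE", obj, place)

curried_BE = curry2(BE)
# The function consumes its arguments one at a time, mirroring the
# stepwise index change from State 2 down to State 0.
result = curried_BE("RON")(("PLACE", "IN", "TOWN"))
assert result == BE("RON", ("PLACE", "IN", "TOWN"))
```

The final assertion checks that the curried and uncurried applications yield the same conceptual structure, which is exactly the equivalence F(X, Y) = (F(X))(Y) appealed to in the text.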

Figure 1

The parallels between an X-bar structure and a conceptual structure (CS).

We may now recast the formal structure of CSs in terms of Langacker’s (1987, 1999) TR–LM relations[4] or Talmy’s (2000) figure-ground relations. The aim here is to facilitate a smooth integration of an ensemble of related components in conceptual/cognitive semantics so that these conceptual/cognitive representations can be easily unified with formal semantic structures. Any event or state is a schematic representation of a situation whose primary participants are a TR and an LM in Langacker’s conception of relational predication characterizing the relational structure of events and states. When one participant of an event is recognized as the most salient entity or the primary mover that organizes, moves, affects, changes, or is aligned in a certain way to the other salient participant, the former is recognized as the TR and the latter as the LM. The TR is the most focal entity in any event or relation which is dynamically schematized with respect to the LM, whereas the LM is often a reference point that makes the dynamic nature of the TR viable. This underwrites the fundamental asymmetry between a TR and an LM. As made clear by Langacker himself, the TR and LM correspond to a figure and a ground, respectively. A TR is a figure because a figure is the most focal entity in our perceptual organization, while the LM is the ground because the ground sets the reference point. Talmy (2000) states that the figure has unknown spatio-temporal properties, and is more movable, smaller, simpler, more prominent, more dependent, and more focal in awareness, whereas the ground is more permanently located, larger and more complex in form, more familiar, and more independent. Besides, Talmy also points to some distinctive characteristics of the figure and ground in complex events and/or relations. These are stated as follows (Talmy 2000, 325–9).

(13) (i) In a temporal sequence of events, the earlier event is the ground and the later event is the figure.
(ii) In a causal relation between events, the causing event is the ground and the event caused is the figure.
(iii) In an inclusion relation between events, the larger containing event is the ground and the event contained is the figure.
(iv) In a contingency relation between events, the event that has a determinative effect is the ground and the event contingent or dependent on the ground is the figure.
(v) In a substitution relation between events, the expected and familiar event is the ground and the unexpected substituting event is the figure.

A telling consequence of (13) is that these characteristics of the figure and ground can be extrapolated smoothly to the TR and LM if they are conceptualized in processual terms.[5] In any case, what seems important is that the TR and LM are invariably present in the internal structure of an event or state even though they are not overtly expressed (see Langacker 1991). The following examples illustrate this clearly.

(14) My friend (TR) has eaten a big pizza (LM).
(15) My friend (TR) is eating.
(16) To eat is human.

While (14) has in it the overt TR and LM annotated as such, (15) has only the overt TR and (16) has none in an overt form. This makes it clear that the TR and LM may not be (always) overtly expressed in the syntactic structure. We may now try to figure out a way of reformulating the right-hand side of (10) in terms of the alignment of the TR and LM. The following generalization may thus hold.

(17) [Entity0 F(〈Entity 1〉, 〈Entity 2, 〈Entity 3〉〉)] 〈═〉 [Entity0 F(〈T1〉, 〈T2, 〈T3〉〉)]

Here, T1…Tn are the terms of the conceptual function F, and hence, T1…Tn ∈ F. Since T2 can embed another term within itself and so on, we may simplify (17) and write the following.

(18) [Entity0 F(〈Entity 1〉, 〈Entity 2, 〈Entity 3〉〉)] 〈═〉 [Entity0 F(〈T1, T2,…〉)]

Any T with an arbitrary index i when i ≥ 1 is either a TR (or the figure) or an LM (or the ground). It may be noted that T1 is usually the TR in natural language as the TR–LM asymmetry underlies the subject–object asymmetry. However, it is not cross-linguistically impossible to find constructions in which the subject happens to be the LM. Langacker (2008, 384) provides the following examples from Greenlandic Eskimo representative of an anti-passive construction in which the undergoer participant expressed by the object loses its status as the TR and, consequently, the actor participant becomes the TR.

(19) (a) arna-p niqi niri-vaa.
woman-ERG meat(ABS) eat-INDIC
‘The woman ate the meat.’
(b) arnaq niqi-mik niri-nnig-puq.
woman(ABS) meat-INSTR eat-ANTIPASSIVE-INDIC
‘The woman ate (some of) the meat.’

[ERG = ergative case marking; ABS = absolutive case marking; INSTR = instrumental case marking; INDIC = indicative marker]

As indicated by the name itself, an anti-passive construction has the converse pattern of a passive construction in which the actor participant is the TR and the undergoer becomes the TR after the passivization. The anti-passive suffix -nnig in (19b) changes the status of ‘niqi’ (‘meat’) as the TR (in (19a)) to that of an LM in (19b). Since Greenlandic Eskimo is a theme-oriented language, as clarified by Langacker, it is ‘niqi’ (‘meat’) rather than ‘arna-p’ (‘woman’) that is the TR in (19a). But in (19b), it is the actor (that is, ‘arna-p’) that is the TR. It is not just Greenlandic Eskimo that exhibits the LM in the subject. This possibility is manifested in the following type of constructions in English in which there exist two kinds of TR – one at the level of the verb and another at the level of the clause (Langacker 2008, 388).

(20) The garden is buzzing with insects.
(21) The streets were bustling with shoppers.

In (20), the verb-level TR is ‘insects’, but ‘the garden’ in virtue of being the subject is the TR at the level of the clause/sentence. When the verb-level TR is our focus, the subject (‘the garden’ in (20)) happens to be the LM providing the background for the activity of the insects. Similarly, ‘shoppers’ is the verb-level TR in (21) and ‘the streets’ is the LM in this condition, but ‘the streets’ is the TR at the level of the sentence/clause. These considerations lend credence to the stipulation that any T with an arbitrary index i is either a TR or an LM. In view of the fact that Jackendoff (2002) does not discount the possibility that a conceptual function can have more than 2 arguments/terms since some arguments/terms can undergo self-embedding anyway, the following generalization can be stated to capture the formal correspondences with their set-theoretic structures.

(22) F(〈T1, T2,…〉) ≡ P(T1,…, Ti) ∨ (P′(t1…tj) … & … P″(t1…tm))

Here, ‘≡’ denotes a special equivalence sign; P is a natural language predicate (mathematically realized as a relation); i in Ti is an arbitrary number, which is ≤3 since the upper bound on the arity of natural language predicates is 3; and P′(t1…tj) … & … P″(t1…tm) = F(〈T1, T2,…〉) represents the incorporation or conjunction of at least two predicates P′ and P″ that jointly express or realize F(〈T1, T2,…〉). Note that the value of i in P(T1,…, Ti) need not match the arity of F. More formally, the following holds.

(23) P′…&…P″ = F
[here at least one P from (P′ …&… P″) can express a Tk such that k ≥ 1]

What (22) states is that a conceptual function with its terms T1…Tn is formally equivalent to either a predicate P with its arguments or a conjoined/co-incorporated predicate. This holds even if we take into account Van Valin’s (2005) semantic/conceptual representations with the proviso that the conceptual predicate do’ for predicates requiring actors has to be fused with F in (22) in each case. In addition, (22) captures the essence of the completeness constraint which states that all the arguments explicitly specified in the semantic representation must be syntactically realized in the sentence, and all the referring expressions in the syntactic representation of a sentence must be linked to their argument positions in the semantic representation of the sentence (Van Valin 2005, 129–30). This is ensured by the fact that the argument terms of F or, for that matter, of P must be in some form realized in the syntactic structure in conformity to (10). More importantly, the sequential organization of T1, T2 … as the TR/Figure and/or LM/Ground is to be determined by the control cycle (Langacker 2013), parallel to Talmy’s (1988) force dynamics, which tells us how an actor/antagonist can exert control/force over an element/agonist that comes inside its dominion (domain of control) within the purview of a field. For instance, in (8/8′), the actor, ‘a professor’, is the TR/Focus that enters into a phase of potential interaction with the thing wanted (‘a huge library’) within its field of interaction. But, since the thing wanted, which is the LM/Ground, is not yet within the dominion or domain of control of the actor – the professor does not yet have a huge library – a sort of tension or a force-dynamic situation of opposing tendencies arises. It is in this way that the alignment among T1, T2 … on the left-hand side of (22) is established and comes to be specified by F because the conceptual contents of F can tell us what sort of force dynamics will be involved. 
Interestingly, the control cycle can dovetail with Langacker’s (1987, 123–6) notion of viewpoint, which can be manifested as vantage point determining how something is viewed or conceptualized from a certain perspective, thereby setting something as the foreground and something as the background. For sentences such as ‘Maya thinks a professor wants a huge library’, the viewpoint of ‘Maya’ as the TR/Focus will differ from that of ‘a professor’ and also from that of the speaker who utters the whole sentence. In this case, (8/8′) as a whole will be the LM/Ground within the whole field of ‘Maya’ as an actor or antagonist who will be in a stage of inclination towards (8/8′) and hence a force-dynamic tension will be present (see Langacker 2009, 259–89). Here, if the equivalence F(〈T1, F1(T2, T3)〉) ≡ P(T1,T4) holds for ‘Maya thinks a professor wants a huge library’, F will be ‘thinks’ and F1 will be ‘wants’ and T4 = F1(T2, T3). The control cycle specified by F1 will be within the field of the control cycle specified by F, and in this way, the viewpoint of T1 (‘Maya’) as an actor/antagonist will differ from that of T2 (‘a professor’) – this is also the reason why F1 is within the domain of arguments of F.

From a related perspective, the relationship among T1, T2 … in (22) is profiled by F, because F contains the conceptual content that specifies what sort of relationship holds among T1, T2 …, which is thereby highlighted (see Langacker 2009, 7–8). An example can help make this clearer. The verbs ‘like’ and ‘please’ profile two different but mutually reversible relations between the Experiencer (the individual that undergoes an experience) and the Stimulus (the entity that exerts an impact on the experiencer) – the Experiencer has a positive affective orientation towards the Stimulus in the case of ‘like’, whereas it is the Stimulus that exerts an affective impact on the Experiencer in the case of ‘please’. Moreover, some T from among T1, T2 … within F can be a composite structure (as in cases of apposition, e.g., ‘Maya, the daughter of the PM’) constituted by the joining of components of two OBJECTS (like Pustejovsky’s (1995) dot objects), with the result that a number of predicate-logic predicates P′(t1…tj)…&…P″(t1…tm) have to be conjoined on the right-hand side of (22). The profiling in this case is known as corresponding profile (Langacker 1999, 194–5).

In addition, the upshot of the formulation in (22) is that F(〈T1, T2,…〉) on the left-hand side of the equation can, in a general sense, be thought of as a frame with the terms of F realized as attributes with semantic roles such as Agent, Theme, Recipient, Source, Experiencer, Goal, etc. (see Löbner et al. 2020, 2021). Given that such frames can be interpreted in terms of first-order predicate logic formulas (Löbner 2017, 104) and also that the mother or main node of a frame can comprise not just events but also relations or states, a frame can also turn out to be equivalent to F(〈T1, T2,…〉) and, therefore, to P(T1,…, Ti) ∨ (P′(t1…tj)…&…P″(t1…tm)) as well. Another advantage is that one instance of F(〈T1, T2,…〉) interpreted as a frame can undergo unification [6] with another instance of F(〈T1, T2,…〉), with the result that the relation between them can be somewhat indeterminate in allowing for different interpretations. This can also be formulated in terms of superposition, whereby meanings can be combined independently of the medium or vehicle in which they are expressed (words, concepts, etc.) (see Thornton 2021). This can prove to be helpful in the concatenation of conceptual functions for longer and more complex expressions. For example, the conceptual representations of ‘Roy jumps’ on the one hand and ‘Maya sings’ or ‘He sings’ on the other hand can be unified as frames when we want to combine them as ‘Roy jumps and Maya sings’ or as ‘Roy jumps and (he) sings’, and similarly, we can do this with ‘a red car’ and ‘a blue bike’ for ‘a red car and a blue bike’. The fuller exploration of this topic is beyond the scope of this article, and hence, this may be left open.
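The unification of frame instances just mentioned can be sketched as attribute-value merging. The following toy Python encoding is my own illustration (it is not the formalism of Löbner or Thornton): two frames unify if and only if their shared attributes carry compatible values, and the result pools the information of both.

```python
def unify(frame_a, frame_b):
    """Return the merged frame, or None if the two frames clash."""
    for attr in frame_a.keys() & frame_b.keys():
        if frame_a[attr] != frame_b[attr]:
            return None  # incompatible values: unification fails
    return {**frame_a, **frame_b}

# 'Roy jumps and (he) sings': the pronoun's underspecified participant
# frame unifies with the richer frame for 'Roy'.
roy = {"NAME": "Roy", "NUMBER": "sg"}
he = {"NUMBER": "sg"}          # a pronoun contributes less information
merged = unify(roy, he)        # the two can refer to one participant

clash = unify({"NAME": "Roy"}, {"NAME": "Maya"})  # distinct referents
```

The `clash` case returns `None`, reflecting the fact that incompatible frames admit no single joint interpretation, while `merged` pools the compatible information of both frames.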

Some illustrative examples can illuminate the stated equivalence. Let us take up the examples already introduced earlier. So we may first focus on (7–8) and their CS representations (7′–8′) repeated as follows.

(7′) [Situation PAST [Event CAUSE (Object SUNNY, Object WINDOW (+DEF), Event INCH (State BE (Object WINDOW, Property OPEN)))]]

Ignoring Entity0 in [Entity0 F(〈T1, T2,…〉)], we may now schematize (7′) as follows.

(24) CAUSE(〈T1, T2, T3〉)

Here, T1 corresponds to the object SUNNY; T2 corresponds to the object WINDOW; and T3 to the event yielded as an output by INCH. Notice that T3 is itself reduced to a conceptual function (i.e., INCH in the present context). Therefore, (24) would correspond to open(T1, T2), where open = P and T3 does not correspond to any argument of the 2-place predicate open. The CS representation of (8) as repeated below (8′) can also be treated in a similar fashion.

(8′) [Situation PRES [Event WANT (Object PROFESSOR, Object LIBRARY (Property HUGE))]]

Schematizing (8′) yields (25).

(25) WANT(〈T1, T2〉)

Since P = want is a 2-place predicate, the argument structure of want perfectly matches that of WANT in (25). The representations of the sentences ‘Ron is in town’ and ‘Ron goes to the market every day’ as captured in (12) can also be analyzed by following (22). The relevant formulations are provided in (26) and (27).

(26) [Situation PRES [State BE (Object RON, Place IN (Thing TOWN))]]
BE(〈T1, T2〉) ≡ be-in-town(t1)
[here be-in-town comes from the incorporation of T2 (expressed as P″) into BE (expressed as P′) and t1 = T1 = Ron]
(27) [Situation PRES [Event GO (Object RON, Path TO (Thing MARKET (+DEF)))]]
GO(〈T1, T2〉) ≡ go-to-the-market(t1)
[here go-to-the-market comes from the incorporation of T2 (expressed as P″) into GO (expressed as P′) and t1 = T1 = Ron]

Since (22) is a statement of a higher-order equivalence between a generalization over conceptual functions and a generalization over the set-theoretic structures of natural language predicates, we shall show how predicates in natural language expressions can be easily translated back into CS representations. That is, we shall now demonstrate the equivalence in the converse direction – from predicate logic expressions into CS representations. A sentence such as ‘Ron takes Jon to school’ will be easily translated into a conceptual function of the type F(〈T1, T2,…〉). We may first convert ‘Ron takes Jon to school’ into (28).

(28) takes-to-school(t1, t2) [t1 = Ron, t2 = Jon in the present context]

It may be noted that to-school(t2) is our P″(t2) and takes(t1, t2) is our P′(t1, t2) as they should be on the right-hand side of (22). They have been expressed in a complex incorporated form in (28). Once this is recognized, we note that P′ = F in F(〈T1, T2,…〉) and P″ = T3 in (29) which encodes the conceptual-functional structure of the sentence ‘Ron takes Jon to school’ in a schematic form. The whole CS representation of the sentence is sketched out in (30).

(29) TAKES(〈T1, T2, T3〉)
(30) [Situation PRES [Event TAKE (Object RON, Object JON, Path TO (Thing SCHOOL))]]

Thus, we arrive at TAKES(〈T1, T2, T3〉) ≡ takes-to-school(t1, t2) with t1 = T1 = Ron and t2 = T2 = Jon. It is also easy to express (28) in terms of lambda functions since they define characteristic functions.[7] Therefore, (31) expresses the lambda-functional structure of (28).

(31) λt2[λt1 [takes-to-school(t1, t2)]]
= λt2[λt1 [takes-to-school(t1, t2)]] (Jon)
= λt1 [(takes-to-school(Jon)) (t1)]
= λt1 [(takes-to-school(Jon)) (t1)] (Ron)
= (takes-to-school(Jon)) (Ron)

The formulation (31) first (line 1) states that the property of t2 and the property of t1 are such that the predicate takes-to-school is predicated of t1 and t2. Then, the object argument is cancelled out with respect to the first lambda operator (line 3) because the verb and its object combine compositionally to form a verb phrase which now turns into a 1-place predicate. Finally, the subject argument is cancelled out and the argument requirement of the predicate takes-to-school is fully satisfied (line 5). We can now formulate the lambda-functional structure of (29) in a similar fashion, as shown in (32) below.
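The stepwise saturation in (31) can be mimicked directly with Python lambdas. In this illustrative sketch (my own, not the article's notation), the predicate is a mere string builder, so only the order of argument cancellation is modelled: the object slot is saturated first, then the subject slot.

```python
# The lambda-functional structure of takes-to-school, with the object
# argument (t2) abstracted over first, then the subject argument (t1),
# as in (31). The string-building body is an illustrative stand-in.
takes_to_school = lambda t2: lambda t1: f"takes-to-school({t1}, {t2})"

after_object = takes_to_school("Jon")   # object slot saturated (line 3)
after_subject = after_object("Ron")     # subject slot saturated (line 5)
```

After the first application, `after_object` is still a 1-place function awaiting its subject, mirroring the verb phrase turning into a 1-place predicate; the second application fully satisfies the argument requirement.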

(32) λT3[λT2 [λT1 [TAKES(〈T1, T2, T3〉)]]]
= λT3[λT2 [λT1 [TAKES(〈T1, T2, T3〉)]]] (to-school)
= λT2 [λT1 [TAKES-TO-SCHOOL(〈T1, T2〉)]]
= λT2 [λT1 [TAKES-TO-SCHOOL(〈T1, T2〉)]] (Jon)
= λT1 [(TAKES-TO-SCHOOL(Jon)) (T1)]
= λT1 [(TAKES-TO-SCHOOL(Jon)) (T1)] (Ron)
= (TAKES-TO-SCHOOL(Jon)) (Ron)

It is clear that the lines 3–5 in (32) instantiate the argument structure saturation of the predicate takes-to-school in (31). The only difference lies in the λ-cancellation of T3 in (32) since T3 is incorporated as P″ into TAKES as per the formulation in (22). This shows that an additional item of complexity in the formal structure on the right-hand side of (22) is offset against a bit of simplification in the formal structure of F on the left-hand side, and conversely, a bit of simplification on the right-hand side can be offset against some amount of complexity in the formal structure of F on the left-hand side. The latter is clearly observed in cases such as the following.

(33) Avik buttered the bread.
(34) Maya pocketed the pen.
(35) The professor systematized the views on the theory.
(36) Kumar intensified the debate.

In all the cases of (33–36), the predicate P on the right-hand side of (22) is expressed by F and (at least) some term from within F(〈T1, T2,…〉) is realized as another F. That is, P(T1,…, Ti) can be expressed by F(〈T1, T2,…Fi〉), where Fi is a term of F. It is noteworthy that these formulations of (33–36) have what Pustejovsky (2006) calls simple predicate decomposition (involving only the insertion of sub-predicates) as opposed to full predicate decomposition (involving the insertion of more arguments/terms such as Davidson’s (1967) event variable, apart from the introduction of the sub-predicates). Since some of these sub-predicates (i.e., predicates of the sort Fi in F(〈T1, T2,…Fi〉)) can be used in a productive (or generative) manner but may vary across languages (Pustejovsky 1991, Pustejovsky 1995), the number of Fs inside the term structure of F(〈T1, T2,…Fi〉) can be supposed to be underspecified for our purpose. In any case, we are now in a position to also furnish the truth-conditional statements for (32) in the appropriate manner. Thus, the following statements for truth conditions can be made about each line of (32).

(37) (i) λT3[λT2 [λT1 [TAKES(〈T1, T2, T3〉)]]] is TRUE (=1) if and only if the sentence ‘Ron takes Jon to school’ is true in a specific model of the world with appropriate assignments of values to T3, T2, and T1.
(ii) λT3[λT2 [λT1 [TAKES(〈T1, T2, T3〉)]]] (to-school) is TRUE (=1) if and only if ‘to-school’ is the path of the conceptual function TAKES.
(iii) λT2 [λT1 [TAKES-TO-SCHOOL(〈T1, T2〉)]] is TRUE (=1) if and only if (T1,T2) ∈ TAKES-TO-SCHOOL with appropriate assignments of values to T2 and T1.
(iv) λT2 [λT1 [TAKES-TO-SCHOOL(〈T1, T2〉)]] (Jon) is TRUE (=1) if and only if ‘Jon’ is T2.
(v) λT1 [(TAKES-TO-SCHOOL(Jon)) (T1)] is TRUE (=1) if and only if T1 ∈ TAKES-TO-SCHOOL(Jon) with an appropriate assignment of a value to T1.
(vi) (TAKES-TO-SCHOOL(Jon)) (Ron) is TRUE (=1) if and only if (Ron, Jon) ∈ TAKES-TO-SCHOOL.

The only caveat for (ii–vi) is that the specific model of interpretation in (i) remains the same for all assignments of values to T3, T2, and T1 in (ii–vi).
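The truth-conditional statement in (37vi) amounts to a membership check in a model: the proposition is true just in case the pair of values falls within the predicate's extension. A small Python sketch (the model's extension is hypothetical) makes the idea concrete:

```python
# A toy model: each predicate is mapped to its extension, here a set of
# ordered pairs. The contents of the extension are assumed for illustration.
model = {
    "TAKES-TO-SCHOOL": {("Ron", "Jon"), ("Mia", "Ann")},
}

def is_true(pred, *terms):
    """Truth as membership: P(t1, …, tn) is TRUE iff (t1, …, tn) is in
    the extension of P in the chosen model of the world."""
    return tuple(terms) in model[pred]

print(is_true("TAKES-TO-SCHOOL", "Ron", "Jon"))  # True
print(is_true("TAKES-TO-SCHOOL", "Jon", "Ron"))  # False
```

Reversing the terms flips the truth value, reflecting the ordered character of (Ron, Jon) ∈ TAKES-TO-SCHOOL in (37vi).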

Further, when conceptual functions combine to form complex concepts for complex sentences, their combination parallels that of complex formulas. For instance, the sentence ‘Joy tried to dance and Roy expected to drink’ can have a complex representation of a combination of conceptual functions, as shown in (38).

(38) [Situation PAST [State TRIED (Object JOY, ([Event DANCE (Object JOY)]))]] &
[Situation PAST [State EXPECTED (Object ROY, ([Event DRINK (Object ROY)]))]]

If we ignore the tense information, the basic structure of (38) can be captured in the following fashion.

(39) TRIED(T1, ([DANCE(T1)])) & EXPECTED(T1′, ([DRINK(T1′)]))

Now (39) can be shown to have the following graphical representation of its combinatorial organization (Figure 2).

Figure 2: The combinatorial organization of the conceptual functions for (39).

Here, Φ is a well-formed formula consisting of a conceptual function and its terms, and we may suppose that ‘&’ is the conjunctive connective for conceptual functions. The parallel representation of (39) in predicate logic is presented in (40), and its combinatorial organization is sketched out in Figure 3.

(40) tried(Joy, ˄ dance(x)) ∧ expected(Roy, ˄ drink(x))

Figure 3: The combinatorial organization of the conceptual functions for (40).

Here, ˄P indicates ‘to-P’, and ‘Joy’ happens to be a member of the set denoted by ˄ dance(x) and ‘Roy’ is a member of the set denoted by ˄ drink(x).

Overall, this shows that conceptual functions with their terms can be conjoined or placed in disjunction. That is why if F(〈T1, T2,…〉) is a well-formed formula for a saturated concept, F(〈T1, T2,…〉) & F′(〈T1′, T2′,…〉) is also a formula instantiating a saturated concept. Likewise, F(〈T1, T2,…〉) OR F′(〈T1′, T2′,…〉) is also a formula of a saturated concept, with ‘OR’ being the connective for the disjunction of conceptual functions with their terms.
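This closure property can be sketched as a recursive well-formedness check; the encoding below (tuples for formulas, with '&' and 'OR' as connective tags) is illustrative only:

```python
# A formula is either an atomic conceptual function with at least one term,
# e.g. ("TRIED", "T1", …), or a binary '&'/'OR' combination of two formulas.
def is_formula(phi):
    head = phi[0]
    if head in ("&", "OR"):
        # a combination is well formed iff both immediate parts are
        return len(phi) == 3 and is_formula(phi[1]) and is_formula(phi[2])
    # atomic case: a conceptual function followed by its term(s)
    return len(phi) >= 2

F1 = ("TRIED", "T1", ("DANCE", "T1"))        # F(⟨T1, …⟩)
F2 = ("EXPECTED", "T1'", ("DRINK", "T1'"))   # F′(⟨T1′, …⟩)

print(is_formula(("&", F1, F2)))   # True
print(is_formula(("OR", F1, F2)))  # True
```

Because the check is recursive, arbitrarily deep combinations of saturated concepts remain well formed, just as the text claims for '&' and 'OR'.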

Interestingly, the equivalence established in (22) also helps unify the forms of semantic representations arrived at so far with those adopted in the theory of mental spaces (Fauconnier 1994, 2018). Mental spaces are network-like cognitive representations that are structured around relations and the elements that those relations are predicated of. This passage is representative of the general notion of mental spaces: “We have behind the simple words vast conceptual networks that operate completely unconsciously through the activation of powerful neural circuits” (Fauconnier 2018, 118–9).

Significantly, Fauconnier (1994, 16) provides a more precise characterization of mental spaces, as can be understood from this: “I introduce the notion of mental spaces, constructs distinct from linguistic structures but built up in any discourse according to guidelines provided by the linguistic expressions. In the model, mental spaces will be represented as structured, incrementable sets – that is, sets with elements (a, b, c,…) and relations holding between them (R 1 ab, R 2 a, R 3 cbf,…), such that new elements can be added to them and new relations established between their elements.”

As is made clear in the aforementioned passages, mental spaces are those conceptual units that organize the entities and relations that are and/or can be linguistically expressed. Thus, sets with elements (a, b, c,…) are incrementable only in the sense that we can have more elements as part of (a, b, c,…). Consequently, more and more new relations among those elements can be constructed. A simple sentence such as ‘Deb came across the man from the show’ can be represented in a mental space M in the following manner.

Here, aRb (also written as Rab) holds in the mental space M, and a and b are elements in M (i.e., a, b ∈ M) with R holding true of them. Significantly, if there are two mental spaces, say, M1 and M2, and one of them is included in the other, it does not follow that the elements of the included mental space will also be the elements of the subsuming mental space. That is, if M1 ⊂ M2 and i ∈ M1, it does not automatically follow that i ∈ M2, given that Fauconnier (1994) maintains that the symbol ‘⊂’ does not actually symbolize set inclusion. Hence, if we wish to include our M from Figure 4 in another mental space, say, M′, we may embed the sentence ‘Deb came across the man from the show’ under the matrix clause ‘We think…’, thereby producing ‘We think Deb came across the man from the show’. This is shown in Figure 5.
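Fauconnier's characterization of mental spaces as structured, incrementable sets lends itself to a direct sketch. The class below (names and element labels are illustrative) grows elements and relations as discourse unfolds:

```python
# A mental space as a "structured, incrementable set": a set of elements
# plus a set of relations over them, both of which can be extended.
class MentalSpace:
    def __init__(self, name):
        self.name = name
        self.elements = set()
        self.relations = set()   # tuples like ('R', a, b)

    def add_element(self, e):
        self.elements.add(e)

    def relate(self, r, *args):
        # a relation is established only over elements already in the space
        assert all(a in self.elements for a in args)
        self.relations.add((r,) + args)

# M for 'Deb came across the man from the show'
M = MentalSpace("M")
M.add_element("a")               # a: Deb
M.add_element("b")               # b: the man from the show
M.relate("came-across", "a", "b")

print(("came-across", "a", "b") in M.relations)  # True
```

Adding a new element and then a new relation over it models exactly the incrementability of the sets (a, b, c,…) described above; note that inclusion of one space in another is deliberately not modeled as set inclusion here.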

Figure 4: The mental space M for the sentence ‘Deb came across the man from the show’.

Figure 5: The inclusion of the mental space M in M′.

The formulation in (22) can also be understood and hence recast in terms of mental spaces in the following way.

(41) F(〈T1, T2,…〉) ≡ P(T1,…, Ti) ∨ (P′(t1… tj) …&… P″(t1…tm))
≡ R(〈T1, T2,…〉) ≡ R(T1,…, Ti) ∨ (R′(t1…tj)… &… R″(t1…tm))
≡ R(〈…, Rk,…〉) ≡ R(T1,…, Ti) ∨ (R′(t1…tj)…&…R″(t1…tm))

Each R with its elements (T1,…, Ti) or (t1…tj) or (t1…tm) instantiates some mental space(s). Also, R(〈…, Rk,…〉) indicates that some element within 〈T1, T2,…〉 may also be a relation in a mental space or across mental spaces. The sentence ‘Joy tried to dance and Roy expected to drink’, whose structure has been formulated in (38) and (40), can now be shown in mental spaces.

Mental spaces M′ and M″ are embedded within another parent mental space R characterizing the speaker’s reality. R1 and R3 are complex relations because their second terms are mental spaces themselves (M2 and M4, respectively, for R1 and R3). M2 is assumed to be included in M1 because Joy’s dancing is to take place under the scope of the mental space characterizing Joy’s trying, and similarly, Roy’s drinking is to take place within the mental space of Roy’s expectation. The relations that can hold in R as a whole also hold in either M′ or M″ because the relations in M′ do not necessarily have any logical connection (such as entailment or presupposition) to those in M″, and hence, they hold in R in general. This ensures space optimization in Fauconnier’s sense since the similarity of M′ and M″ with R is maximized. When the relations, elements, and background assumptions of R as a whole are also preserved in its mental sub-spaces, they are determined in R as a whole in addition to being determinable in the sub-spaces. In simpler terms, the speaker uttering the sentence ‘Joy tried to dance and Roy expected to drink’ is supposed to know both Joy and Roy and what they did (whether individually or together) on a past occasion. Unless this condition is met, the space optimization would not obtain.

In any case, the important thing to emphasize is that we can translate (40) into (38/39) and then into mental spaces (as shown in Figure 6) and also vice versa. Firstly, let us recast (40) into a form amenable to a smooth conversion into (38/39). This is shown in a schematic form in (42).

(42) tried(Joy, ˄ dance(x)) ∧ expected(Roy, ˄ drink(x))
≡ P(T1, T2) ∧ P′(T1′,T2′)
≡ F(〈T1, T2〉) ∧ F′(〈T1′, T2′〉) [formal representations converted into CS representations]
≡ F(〈T1, F″〉) ∧ F′(〈T1′, F‴〉)
≡ R1(T1, T2) ∧ R3(T1′, T2′) [CS representations converted into relations in mental spaces]
≡ R1(j, R2) ∧ R3(r, R4)
≡ R1(j, R2) holds in M′: j, R2 ∈ M′ & R3(r, R4) holds in M″: r, R4 ∈ M″

Figure 6: Mental spaces for the sentence ‘Joy tried to dance and Roy expected to drink’.

For simplicity in representation, we have used only those variables that can be deployed in a schematically general format for ease in understanding. In any case, P and P′ are distinguished as predicates; F, F′, F″, and F‴ are distinguished as conceptual functions; T1 and T2 are distinguished as terms from T1′ and T2′, and R1, R2, R3, and R4 have been distinguished as relations. It is easy to note that one can translate the relations holding in mental spaces through a number of steps back into their formal properties in predicate logic by using the same procedure in (42) backwards. This formulation can now smoothly dovetail with the representations of semantic structures in DRT (Kamp and Reyle 1993, Kamp et al. 2011). DRT tells us how a mental representation is built up by language users as the discourse unfolds. The fundamental unit in this theory is a discourse representation structure (DRS) consisting of two parts, namely, a universe of discourse referents (U) representing the objects under discussion and a set of DRS-conditions (Con) that provide information gradually accumulated on the discourse referents. More formally, the following specifications (somewhat simplified[8]) are relevant (see Kamp et al. 2011, 24).

(43) (i) If U is the set of all discourse referents and Con is a possibly empty set of conditions, then 〈U, Con〉 is a DRS.
(ii) If xi, xj ∈ U, then xi = xj is a condition.
(iii) If N is a name and x ∈ U, then N(x) is a condition.
(iv) If P is an n-place predicate (such that P ∈ Reln) and x1,…, xn ∈ U, then P(x1,…, xn) is a condition.
(v) If K is a DRS, then ¬K is a condition.
(vi) If K1 and K2 are DRSs, then K1 ∨ K2 is a condition.
(vii) If K1 and K2 are DRSs, then K1 ∧ K2 is a condition.
(viii) If K1 and K2 are DRSs, then K1 → K2 is a condition.
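The definition in (43i) can be transcribed almost verbatim as a data structure. In this sketch (the tuple encoding of conditions is an assumption for illustration), a DRS is just the pair ⟨U, Con⟩:

```python
from dataclasses import dataclass

# (43i): if U is a set of discourse referents and Con a (possibly empty)
# set of conditions, then ⟨U, Con⟩ is a DRS.
@dataclass
class DRS:
    universe: frozenset          # U: discourse referents
    conditions: frozenset        # Con: conditions on the referents

# The DRS for 'Deb came across the man from the show' (cf. Figure 7)
K = DRS(
    universe=frozenset({"x", "y"}),
    conditions=frozenset({
        ("Deb", "x"),                      # N(x), by (43iii)
        ("the-man-from-the-show", "y"),    # P(y), by (43iv)
        ("came-across", "x", "y"),         # P(x, y), by (43iv)
    }),
)

print(len(K.universe), len(K.conditions))  # 2 3
```

The remaining clauses (43v–viii) would simply wrap DRSs in negation, disjunction, conjunction, and implication, each yielding a further condition.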

A simple illustration can help understand how a DRS is constructed. Let us take our previous example ‘Deb came across the man from the show’ diagrammed in terms of mental spaces in Figure 4. Now the following DRS represents this sentence in terms of a box diagram.

More crucially, Kamp et al. (2011) have also provided a way of mapping DRSs onto predicate logic representations. This would help unify DRSs with CS representations and also mental spaces. They have defined a specific function that maps DRS components to predicate logic representations. The following are most relevant to our purpose.

(44) (i) (〈{x1,…, xn}, {Con1,…, Conm}〉) = ∃x1… ∃xn ( (Con1) ∧…∧ (Conm))
[This maps sets of discourse referents and conditions onto a set of predicate logic formulae.]
(ii) (xi = xj) = (xi = xj)
[This maps the equivalence of two discourse referents to the same relation in predicate logic.]
(iii) (N(x)) = (N = x)
[This maps a name which is a unary (1-place) predicate in DRT to a constant in predicate logic.]
(iv) (P(x1,…, xn)) = P(t1,…, tn)
[This maps a predicate with its referents to a predicate with its terms in predicate logic.]
(v) (¬K) = ¬( (K))
[This maps a negated DRS onto the negation of the predicate logic version of the given DRS.]
(vi) (K1 ∨ K2) = (K1) ∨ (K2)
[This maps disjunctive DRSs onto the disjunction of the predicate logic versions of the given DRSs.]
(vii) (K1 ∧ K2) = (K1) ∧ (K2)
[This maps conjunctive DRSs onto the conjunction of the predicate logic versions of the given DRSs.]
(viii) (〈{x1,…, xn}, {Con1,…, Conm}〉 → Ki) = ∀x1… ∀xn [( (Con1) ∧…∧ (Conm)) → (Ki)]
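The clauses (44i) and (44iv) can be sketched as a small translation function; the string rendering below is an illustrative choice, not the authors' notation:

```python
def cond_to_fol(cond):
    # (44iv): P(x1, …, xn) maps to P(t1, …, tn), rendered here as a string
    pred, *args = cond
    return f"{pred}({', '.join(args)})"

def drs_to_fol(universe, conditions):
    # (44i): ⟨{x1,…,xn}, {Con1,…,Conm}⟩ maps to ∃x1 … ∃xn (Con1 ∧ … ∧ Conm)
    quants = " ".join(f"∃{x}" for x in sorted(universe))
    body = " ∧ ".join(cond_to_fol(c) for c in sorted(conditions))
    return f"{quants} ({body})"

# The DRS for 'Deb came across the man from the show'
fol = drs_to_fol(
    {"x", "y"},
    {("Deb", "x"), ("came-across", "x", "y"),
     ("the-man-from-the-show", "y")},
)
print(fol)
# ∃x ∃y (Deb(x) ∧ came-across(x, y) ∧ the-man-from-the-show(y))
```

Running the function on the DRS for the Deb sentence reproduces the existential closure arrived at by hand in Step 2 of (45) below; the further reduction of names to constants (44iii) is left out of this sketch.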

The guidelines in (44i–viii) help convert the DRS shown in Figure 7 into predicate logic representations. This is shown in (45).

(45) Step 1: (Deb(x)) = Deb(x)
(the-man-from-the-show(y)) = the-man-from-the-show(y)
(came-across(x, y)) = came-across(x, y) [by (44iv)]
Step 2: (〈{x, y}, {Deb(x), the-man-from-the-show(y), came-across(x, y)}〉) = ∃x
∃y (Deb(x) ∧ the-man-from-the-show(y) ∧ came-across(x, y)) [by (44i)]
Step 3: ∃y (the-man-from-the-show(y) ∧ came-across(d, y)) [d = Deb] [by (44iii)]

Figure 7: A DRS for the sentence ‘Deb came across the man from the show’.

We are now ready to translate the sentence ‘Joy tried to dance and Roy expected to drink’ into its DRS, so that the equivalence shown in (42) can be extended to DRSs as well. This is done below.

We have been guided by Kamp, Van Genabith, and Reyle in introducing a higher-order predicate P* (such as dance* or drink* in Figure 8) into the DRS because natural language predicates such as ‘try’ and ‘expect’ require a predicate in their argument structure. Since a predicate is itself of a second-order category (in being a set of individuals or a set of n-tuples when n ≥ 2), P* would be a set of all such sets. Hence, when we state that the entity x ∈ y or x′ ∈ y′, the second-order variable y or y′ is actually a predicate (= a set of individuals in the present context). Although it is certainly the case that higher-order logical expressions invite meta-mathematical conundrums, the empirical necessity is to be balanced against methodological restrictions on higher-order logical expressions. Moreover, in the standard tradition of formal semantics, logical expressions containing higher-order variables have been adopted – the variable-free approach to semantics[9] is notable in this regard (Jacobson 1999; see also Barker and Jacobson 2007). In any case, one important passage from Kamp et al. (2011, 64) seems relevant in this respect:

We continue to use the old discourse referent symbols (i.e. x, y, z,… x1, x2, x3,…) and distinguish between discourse referents which stand only for individuals, those which stand only for groups and those which allow for values of either kind by means of the predicate ‘at’: a discourse referent x standing only for individuals comes with the condition ‘at(x)’, a discourse referent x standing only for groups with the complex condition ‘¬[at(x)]’, and when neither of these conditions is present this means that x can take values of either kind.

Figure 8: A DRS for the sentence ‘Joy tried to dance and Roy expected to drink’.

We may now set out to convert the DRS in Figure 8 into (40) by using (44i–viii) so that this helps unify the DRS representations with all other representations, as shown in (42). In other words, once DRS representations are converted into predicate logic representations, they are automatically unified with the other representations via the relation of transitivity. This is achieved in (46).

(46) Step 1: (Joy(x)) = Joy(x) } [by (44iv)]
(Roy(x′)) = Roy(x′)
(y) = ˄ dance(x)
(y′) = ˄ drink(x′)
(tried(x, y)) = tried(x, y)
(expected(x′, y′)) = expected(x′, y′)
(x ∈ y) = (x ∈ ˄ dance(x)) } [by following something formally analogous to (44ii)]
(x′ ∈ y′) = (x′ ∈ ˄ drink(x′))
Step 2: (〈{x, x′, y, y′}, {Joy(x), Roy(x′), dance*(y), drink*(y′), tried(x, y), expected(x′, y′), x ∈ y, x′ ∈ y′}〉) = ∃x ∃x′ ∃y ∃y′ (Joy(x) ∧ Roy(x′) ∧ dance*(y) ∧ drink*(y′) ∧ tried(x, y) ∧ expected(x′, y′) ∧ x ∈ y ∧ x′ ∈ y′) [by (44i)]
Step 3: tried(j, y) ∧ expected(r, y′) [by (44iii)]
Step 4: tried(j, ˄ dance(x)) ∧ expected(r, ˄ drink(x)) [by Step 1]

We have assumed here that j = Joy and r = Roy. This yields exactly what we have in (40), repeated in (47).

(47) tried(Joy, ˄ dance(x)) ∧ expected(Roy, ˄ drink(x))

This shows that DRS representations have in fact no conflict with the representations in conceptual/cognitive semantics. Since DRS representations owe their origins to mental representations of dynamically changing contexts – Heim’s (1982) File Change semantics was a significant step in this direction[10] – DRS representations are smoothly amenable to an analysis in terms of conceptual/cognitive contents of mental representations. However, one remaining concern is that DRT in virtue of being anchored in the tradition of formal semantics cleaves to a realist interpretation of the context representations. This theory contains certain embedding functions that (as partial functions) map discourse referents (e.g., x, x′ and y, y′ in Figure 8) onto individuals in a model for variable assignments. In fact, this is in essence true of the tradition of formal semantics, and hence, it may seem that the metaphysical tensions between formal semantics and conceptual/cognitive semantics are not yet dispelled or at least neutralized. This is indeed a topic that warrants further discussion. The next section will offer some reflections on this issue.

5 Some residual philosophical issues

Even though the representations of formal properties of linguistic meanings in set-theoretic terms have been shown to be harmonized and unified with the cognitive/conceptual representations of natural language meaning, there could be remaining philosophical issues surrounding intentionality, reality, the mental entrenchment of semantic structures, and embodiment. On the one hand, the properties of semantic structures in formal semantics are such that they help relate linguistic expressions to objects and states of affairs in the outer world, thereby locating semantic properties of linguistic expressions in the world. On the other hand, CSs or conceptualizations in cognitive/conceptual semantics are located or entrenched in the mind/brain. Hence, it is not clear how they establish contact with the world out there. This is perspicuously expressed in Jackendoff (2002, 300): “In short, there appears to be no way to combine a realist semantics with a mentalist view of language, without invoking some sort of transcendental connection between the mind and the world.”

Jackendoff dismisses realist conceptions of mind-to-world connections, precisely because the notion of ‘objects in the world’ is suspect to him. If objects are referents of linguistic expressions, it does not seem clear how these objects are ultimately referred to by the mind, for after all the domain of natural language use abounds with linguistic expressions, especially referring expressions, which do not refer to anything in the real world (e.g., unicorns, Superman, Zeus, Pegasus, etc. are all linguistic expressions of this sort). Hence, Jackendoff’s solution is to push the world itself into the mind. All entities expressed in natural language are thus inhabitants of the mentally projected world, not of the world out there. That this is the common thread uniting tenets of thinking in conceptual/cognitive semantics can also be shown by quoting Langacker (2000, 26): “The meanings of linguistic expressions cannot be reduced to truth conditions, nor to direct correspondences between linguistic elements and entities out there in the world. For a linguistic semantics to be descriptively adequate and accurate with respect to the facts of natural language, it is essential that the human mind be brought into the loop.” A related, albeit somewhat different, concern can also be found in Fauconnier (1994, 152): “The construction of spaces represents a way in which we think and talk but does not in itself say anything about the real objects of this thinking and talking.”

Thus, it appears that linguistic meanings when cashed out in terms of CSs and cognitive representations are sort of embedded in the mind/brain since they are non-intentional entities having no actual or metaphysical correspondence with the real world. This argument is persuasively articulated in Jackendoff (2002, 306): “From the standpoint of neuropsychology, we must recognize that the neural assemblies responsible for storing and processing conceptual structures indeed are trapped in our brains. They have no direct access to the outside world. Hence … we must explicitly deny that conceptual structures are symbols or representations of anything in the world, that they mean anything. Rather, we want to say that they are meaning: they do exactly the things meaning is supposed to do, such as support inference and judgment. Language is meaningful, then, because it connects to conceptual structures.” (The author’s own emphasis is in italics)

Be that as it may, it is important to point out that even though linguistic meanings are conceptualizations or CSs possibly trapped in the mind/brain, they are ultimately predicated upon mental experiences, and mental experiences are, by their very nature, intentional. Mental experiences include not merely abstract concepts and conceptualizations but also immediate and instantaneous sensory-motor and emotive experiences. If this holds true of experiences in general, there is no denying that experiences, especially of the sensory-motor and emotive kinds, have a two-way character. They are partly oriented to the outside world and partly oriented inwards. For example, a visual experience is an experience of something in the outer world, and a motor experience of moving an object is an experience of performing an action towards that object in the real world. Likewise, an emotive experience is an experience of something whether imagined or perceived. Regardless of whether emotions are cognitive or non-cognitive in their form, emotive representations have an outward orientation in their intentional form (Mondal 2016). Moreover, emotive experiences are grounded in the body and have distinctive somatic markers (body- and brain-related signals such as a racing heartbeat or body hair standing on end) that precede states of feeling (Damasio 2003). These somatic signals are often triggered by objects, events, and states of affairs in the world out there. If so, the experiences concerned must relate to something in the outer world, and hence they are bound to mean something. Sensory-motor and emotive experiences are in essence intentional, irrespective of how they remain trapped inside the brain.
In this connection, it is also vital to recognize that while the valuation of a percept as being external can be mediated by a cognitive structure, as Jackendoff (2002, 2007) thinks, the association of the valuation with the given cognitive structure does not automatically give rise to something being experienced as out there in the world. Rather, it is the entity out there in the world whose experience leads to the association of the valuation with the given cognitive structure. Without the entity being experienced in the first place, the association of the valuation with the given cognitive structure cannot get off the ground in normal circumstances (barring hallucinations).

More crucially, even if conceptual/cognitive structures are trapped inside the brain/mind, there is nothing that actually prevents them from standing in relation to the outside world. One illuminating analogy in this regard is provided by Gross (2005). Thus, for instance, even if one is stuck in traffic congestion, there is nothing that actually prevents that person from standing in the relation of, say, ‘daughter-of’ to someone else staying in a place remote from the location of traffic congestion. After all, the relation of conceptual/cognitive structures to objects, events, and states of affairs in the world out there need not be construed only in causal terms, as Gross argues, for conceptual/cognitive structures certainly can stand in relation to these entities in the world out there via a mediated relation, however complex (by way of valuations of percepts or otherwise). Apart from that, there is a sense in which even the brain captures patterns, contingencies, and regularities in the outer world. We may view the cognitive properties of natural language(s) as those that do not reside in biological entities or structures per se because they arise only when the brain extends to connect to the outer world consisting of language users, objects, events, processes, etc., thereby providing the scaffolding for such otherwise biologically meaningless symbolic patterns. This scenario is in consonance with the world–brain relations articulated in Northoff (2018) who, furnishing a wealth of compelling experimental evidence, contends that the brain in its resting (or spontaneous) state captures regularities in the outer world when its spatiotemporal structure is experience dependent and thus (stochastically) dependent on the features of the world. This perspective allows conceptual/cognitive structures to stand in relations to things and states of affairs outside the brain. 
In fact, Mondal (2019, 171–84) has shown that conceptual/cognitive structures grounded in sensory-motor systems are set-theoretically describable and so interpretable.

Conceived in these terms, the semantic representations unified in (41) can both serve as conceptual/cognitive structures or mental spaces and stand in relations to things and states of affairs outside the boundaries of the brain. They are cognitive representations of some sort because they support reasoning, thinking, and judgement, and at the same time, they also relate to things and states of affairs out there in the world because they are often about these external things and states of affairs. Without the anchoring in the real world, the conceptualization of non-existent entities or fictive domains will not be appropriately mapped or specified. For instance, Fauconnier (1994, 2018) has talked about hypothetical spaces in sentences such as ‘If I were a millionaire, my VW would be a Rolls’ that are linked to the real space – the real space in this case contains ‘my VW’. The entity (i.e., ‘my VW’) in the real space is mapped onto ‘a Rolls’ in the hypothetical mental space (specified by the counterfactual antecedent ‘If I were a millionaire…’); the mental content of the entity in the hypothetical mental space is understood to fill in the mental content of the real-world entity. This filling-in takes place only in relation to the real-world object, and hence, the anchoring in the real world is not necessarily lost even when we speak of hypothetical spaces. The same considerations apply to the analysis of metaphorical expressions that map a real-world entity as a target onto another hypothetical entity, the source of the mapping (Fauconnier and Turner 2002, see also Turner 2014). For example, if one utters the sentence ‘You are my Superman’ in speaking of his/her friend, the friend is a real-world individual who (as the target) is mapped onto the fictional entity ‘Superman’, which is the source. Here too, the source helps understand the target in a new light by filling in the contents of the target anchored in the real world.

Although it is unmistakably the case that language users have to primarily conceptualize relevant entities, however perceptually demarcated or characterized (as in ‘I don’t know what that was, but here it comes again!’, and Jackendoff (2002, 304) provides this example to argue that being in the world is not sufficient for reference), from this, it does not follow that the conceptualization does not actually stand in some relation to the real-world entity which has been conceptualized in some form. The form of the conceptualization itself may be, at least in part, determined by the appearance or ‘feel’ of the entity in question. Besides, even if being in the world is not a necessary condition for language users to talk about, or even refer to, non-existent things, conceptualizations of non-existent things certainly can stand in relations to existent things in the world. For instance, if we can utter a sentence like ‘Lois Lane met Superman in Metropolis’, the conceptualizations of ‘Lois Lane’, ‘Superman’, and the past eventive relation of meeting that holds for the pair of ‘Lois Lane’ and ‘Superman’ can stand in certain relations to the characters and events shown, say, in a movie one is watching in the actual world. Likewise, if while reading Arthur Conan Doyle’s story we can say that Sherlock Holmes was standing smiling at Watson, our conceptualizations of the relevant linguistic expressions can stand in apposite relations, however conceived, to the characters and events that unfold in the story before us situated in the real world. This may happen in exactly the way one reports an actual incident to another person. As a matter of fact, Partee (2009) has also suggested a similar possibility by proposing that the extensions of linguistic expressions of non-existent entities such as ‘fake guns’ or ‘imaginary creatures’ can be recalibrated with those of the entities that exist. 
The principle of non-vacuity, which demands that the positive and negative extensions of a predicate be interpreted in a non-vacuous manner (Kamp and Partee 1995, 161), buttresses this recalibration. If non-existent entities can be interpreted and understood in (almost) the same way as existent entities by way of such recalibration, the problem of standing in relations to the world is neutralized, for the relations to the world that obtain for existent entities can be co-opted for the non-existent ones. Being in the world and standing in a relation can often come apart. A person’s false or mistaken reporting of an incident can establish the latter without representing the former in a veridical manner. But this does not at all establish that entities that do not even exist in the actual world are barred from being conceptualized in relation to the things in the actual world when they are talked about. Just as non-existent entities can find a home in our known world, existent entities can also be described or construed in a manner that is non-existent in our known world (lies are an example here). Both scenarios admit of linguistic expressions standing in relations to things in the real world.

In a nutshell, while being in the world is neither sufficient nor necessary for reference, an encoding of a conceptual/cognitive representation within the boundaries of the brain is usually sufficient and also necessary for language users to let linguistic expressions stand in relations to things in the outside world. Taken in this sense, the encoding of a conceptual/cognitive representation within the boundaries of the brain rather facilitates the standing of linguistic expressions in relations to things in the outside world. Far from banishing this, the talk of non-existent entities in the context of movies, novels, stories, etc. establishes it more firmly. This happens because language users know how to relate the description of non-existent entities to some part or mode of the actual world they are already familiar with.

6 Concluding remarks

This article has formulated the form of unified representations for semantic structures that turn out to be easily amenable to the description of semantic structures in both formal and cognitive terms. Therefore, we arrive at the conclusion that the logical organization of natural language is fully compatible with the cognitive organization of linguistic structures. Although it appears that the externalist motivation for semantic structures in the tradition of formal semantics is in conflict and disharmony with the cognitive foundations of semantic structures, this article argues that there is nothing that actually prevents conceptual/cognitive structures trapped inside the brain/mind from standing in relations to the outside world. Far from it, the entrapment of conceptual/cognitive structures in the brain/mind actually enables linguistic expressions to stand in relations to various things in the real world. This helps square formal semantic representations of linguistic meaning with the conceptual representations of linguistic meaning.

This has consequences for the nature of linguistic meanings in semantic theory, because semantic theory has so far remained divided on the nature of semantic representations, and this division has motivated different architectures of grammar in linguistic theory. While formal models of grammar usually incorporate formal semantic representations of linguistic meanings in the semantic component (Hornstein 1995, Jacobson 2014, Pietroski 2018, Asudeh and Giorgolo 2020), cognitivist-functionalist theories tend to adopt cognitive representations of meaning (Croft 1991, 2001, Fillmore 2020). A unified form of representation of linguistic meaning can help establish that the representations of semantic structures in these divergent models of grammar are not, after all, different, and also that the grammar–meaning interface is much simpler (in the form of meaning representations linked to syntax) than is currently assumed across theories/formalisms of grammar. It is highly implausible that cognitive representations are mapped onto syntactic structures in certain cases while set-theoretic representations are mapped onto syntactic structures in certain others. It must be recognized that conceptual/cognitive representations and formal-logical structures of linguistic meaning, despite their apparent ontological divergence, can converge at higher levels of brain dynamics so as to form the basis of the learning of semantic structures that evince aspects of both kinds of representation (see Mondal 2022). This matter, though, can be left open for further investigation.


I wish to thank the editor of Open Linguistics and the two anonymous reviewers for their insightful comments and thoughtful remarks on the manuscript.

Funding information: I’m indebted to the Indian Institute of Technology Hyderabad for providing the Research Development Fund used for the current research reported in the manuscript.

Conflict of interest: The author states no conflict of interest.


Asudeh, A. and G. Giorgolo. 2020. Enriched meanings: Natural language semantics with category theory. New York: Oxford University Press. doi:10.1093/oso/9780198847854.001.0001.

Bach, E. and W. Chao. 2009. “On semantic universals and typology.” In Language universals, edited by M. H. Christiansen, C. Collins, and S. Edelman, p. 152–73. New York: Oxford University Press. doi:10.1093/acprof:oso/9780195305432.003.0008.

Barker, C. and P. Jacobson. 2007. Direct compositionality. New York: Oxford University Press.

Chierchia, G. and S. McConnell-Ginet. 1990. Meaning and grammar. Cambridge, MA: MIT Press.

Cienki, A. 2017. “Spoken language semantics.” In Ten lectures on spoken language and gesture from the perspective of cognitive linguistics, p. 1–20. Leiden: Brill. doi:10.1163/9789004336230_002.

Croft, W. 1991. Syntactic categories and grammatical relations: The cognitive organization of information. Chicago: The University of Chicago Press.

Croft, W. 2001. Radical construction grammar: Syntactic theory in typological perspective. New York: Oxford University Press. doi:10.1093/acprof:oso/9780198299554.001.0001.

Damasio, A. R. 2003. Looking for Spinoza: Joy, sorrow and the feeling brain. New York: Harcourt.

Davidson, D. 1967. “The logical form of action sentences.” In The logic of decision and action, edited by N. Rescher, p. 81–94. Pittsburgh: University of Pittsburgh Press. doi:10.1093/0199246270.003.0006.

Davidson, D. 2001. Inquiries into truth and interpretation. Oxford: Clarendon Press. doi:10.1093/0199246297.001.0001.

Dowty, D. R. 1979. Word meaning and Montague grammar. Dordrecht: Kluwer. doi:10.1007/978-94-009-9473-7.

Du Bois, J. 2003. “Discourse and grammar.” In The new psychology of language: Cognitive and functional approaches to language structure, edited by M. Tomasello, p. 47–87. Mahwah, NJ: Lawrence Erlbaum.

Evans, N. 1993. “Code, inference, placedness and ellipsis.” In The role of theory in language description, edited by W. A. Foley, p. 243–80. Berlin: Mouton de Gruyter. doi:10.1515/9783110872835.243.

Fauconnier, G. 1994. Mental spaces: Aspects of meaning construction in natural language. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511624582.

Fauconnier, G. 2018. Ten lectures on cognitive construction of meaning. Leiden: Brill. doi:10.1163/9789004360716.

Fauconnier, G. and M. Turner. 2002. The way we think: Conceptual blending and the mind’s hidden complexities. New York: Basic Books.

Fillmore, C. 2020. Form and meaning in language, Vol. III, edited by P. Gras, J. Östman, and J. Verschueren. Stanford: CSLI Publications.

Frege, G. 1892. “Über Sinn und Bedeutung.” Zeitschrift für Philosophie und philosophische Kritik 100, 25–50. Translated in Translations from the philosophical writings of Gottlob Frege, edited by M. Black and P. Geach (1952), p. 56–78. Oxford: Blackwell.

Frege, G. 1979. Posthumous writings, translated by P. Long and R. White. Oxford: Blackwell.

Gärdenfors, P. 2020. “Events and causal mappings modeled in conceptual spaces.” Frontiers in Psychology 11, 630. doi:10.3389/fpsyg.2020.00630.

Goldberg, A. E. and R. Jackendoff. 2004. “The English resultative as a family of constructions.” Language 80(3), 532–68. doi:10.1353/lan.2004.0129.

Goldberg, A. E. and R. Jackendoff. 2005. “The end result(ative).” Language 81(2), 474–7. doi:10.1353/lan.2005.0062.

Gross, S. 2005. “The nature of semantics: On Jackendoff’s arguments.” The Linguistic Review 22, 249–70. doi:10.1515/tlir.2005.22.2-4.249.

Hamm, F., H. Kamp, and M. van Lambalgen. 2006. “There is no opposition between formal and cognitive semantics.” Theoretical Linguistics 32(1), 1–40. doi:10.1515/TL.2006.001.

Heim, I. 1982. “The semantics of definite and indefinite noun phrases.” Doctoral dissertation, University of Massachusetts, Amherst.

Heim, I. and A. Kratzer. 1998. Semantics in generative grammar. Oxford: Blackwell.

Hornstein, N. 1995. Logical form: From GB to minimalism. Oxford: Blackwell.

Jackendoff, R. 1990. Semantic structures. Cambridge, MA: MIT Press.

Jackendoff, R. 1996. “Conceptual semantics and cognitive linguistics.” Cognitive Linguistics 7(1), 93–129. doi:10.1515/cogl.1996.7.1.93.

Jackendoff, R. 2002. The foundations of language: Brain, meaning, grammar, evolution. New York: Oxford University Press. doi:10.1093/acprof:oso/9780198270126.001.0001.

Jackendoff, R. 2007. Language, consciousness, culture. Cambridge, MA: MIT Press. doi:10.7551/mitpress/4111.001.0001.

Jacobson, P. 1999. “Towards a variable-free semantics.” Linguistics and Philosophy 22, 117–84. doi:10.1023/A:1005464228727.

Jacobson, P. 2014. Compositional semantics: An introduction to the syntax/semantics interface. New York: Oxford University Press.

Kamp, H. and B. Partee. 1995. “Prototype theory and compositionality.” Cognition 57, 129–91. doi:10.1016/0010-0277(94)00659-9.

Kamp, H. and U. Reyle. 1993. From discourse to logic. Dordrecht: Kluwer. doi:10.1007/978-94-017-1616-1.

Kamp, H., J. Van Genabith, and U. Reyle. 2011. “Discourse representation theory.” In Handbook of philosophical logic, Vol. 15, edited by D. Gabbay and F. Guenthner, p. 125–394. Berlin: Springer. doi:10.1007/978-94-007-0485-5_3.

Kornai, A. and G. K. Pullum. 1990. “The X-bar theory of phrase structure.” Language 66, 24–50. doi:10.1353/lan.1990.0015.

Krifka, M. 2012. “Some remarks on event structure, conceptual spaces and logical form.” Theoretical Linguistics 38, 223–336. doi:10.1515/tl-2012-0014.

Langacker, R. 1987. Foundations of cognitive grammar, Vol. 1. Stanford: Stanford University Press.

Langacker, R. 1991. Concept, image, and symbol: The cognitive basis of grammar. Berlin: Mouton de Gruyter.

Langacker, R. 1999. Grammar and conceptualization. Berlin: Mouton de Gruyter. doi:10.1515/9783110800524.

Langacker, R. 2000. “Why a mind is necessary: Conceptualization, grammar and linguistic semantics.” In Meaning and cognition: A multidisciplinary approach, edited by L. Albertazzi, p. 25–38. Amsterdam: John Benjamins. doi:10.1075/celcr.2.02lan.

Langacker, R. 2008. Cognitive grammar: A basic introduction. New York: Oxford University Press. doi:10.1093/acprof:oso/9780195331967.001.0001.

Langacker, R. 2009. Investigations in cognitive grammar. Berlin: Mouton de Gruyter. doi:10.1515/9783110214369.

Langacker, R. 2013. “Striving for control.” In English modality, edited by J. Marín-Arrese et al., p. 3–56. Berlin: Mouton de Gruyter. doi:10.1515/9783110286328.3.

Larson, R. and G. Segal. 1995. Knowledge of meaning: An introduction to semantic theory. Cambridge, MA: MIT Press. doi:10.7551/mitpress/4076.001.0001.

Lewis, D. 1972. “General semantics.” In Semantics of natural language, edited by D. Davidson and G. Harman, p. 169–218. Dordrecht: Reidel. doi:10.1007/978-94-010-2557-7_7.

Linell, P. 2005. The written language bias in linguistics: Its nature, origins, and transformations. London: Routledge. doi:10.4324/9780203342763.

Löbner, S. 2017. “Frame theory with first-order comparators: Modeling the lexical meaning of punctual verbs of change with frames.” In Proceedings of the 11th International Tbilisi Symposium on Language, Logic, and Computation, p. 98–117. doi:10.1007/978-3-662-54332-0_7.

Löbner, S., T. Gamerschlag, T. Kalenscher, M. Schrenk, and H. Zeevat (eds.). 2020. Concepts, frames and cascades in semantics, cognition and ontology. Berlin: Springer Nature. doi:10.1007/978-3-030-50200-3.

Löbner, S. 2021. “Frames at the interface of language and cognition.” Annual Review of Linguistics 7, 261–84. doi:10.1146/annurev-linguistics-042920-030620.

Louwerse, M. M. 2018. “Knowing the meaning of a word by the linguistic and perceptual company it keeps.” Topics in Cognitive Science 10, 573–89. doi:10.1111/tops.12349.

Macnamara, J. 1994. “Logic and cognition.” In The logical foundations of cognition, edited by J. Macnamara and G. E. Reyes, p. 11–34. New York: Oxford University Press.

Mondal, P. 2016. Language and cognitive structures of emotion. Berlin: Springer Nature. doi:10.1007/978-3-319-33690-9.

Mondal, P. 2019. Language, biology and cognition. Berlin: Springer Nature. doi:10.1007/978-3-030-23715-8.

Mondal, P. 2022. “The puzzling chasm between cognitive representations and formal structures of linguistic meanings.” Cognitive Science 46(9), e13200. doi:10.1111/cogs.13200.

Müller, S. 2013. “Unifying everything: Some remarks on simpler syntax, construction grammar, minimalism, and HPSG.” Language 89(4), 920–50. doi:10.1353/lan.2013.0061.

Northoff, G. 2018. The spontaneous brain: From the mind-body to the world-brain problem. Cambridge, MA: MIT Press. doi:10.7551/mitpress/11046.001.0001.

Partee, B. H. 1979. “Semantics – mathematics or psychology?” In Semantics from different points of view, edited by R. Bäuerle, U. Egli, and A. von Stechow, p. 1–14. Berlin: Springer. doi:10.1007/978-3-642-67458-7_1.

Partee, B. H. 1991. “Domains of quantification and semantic typology.” In Proceedings of the 1990 Mid-America Linguistics Conference, edited by F. Ingemann, p. 3–39. Lawrence: University of Kansas.

Partee, B. H. 1993. “Semantic structures and semantic properties.” In Knowledge and language, Volume 2: Lexical and conceptual structure, edited by E. Reuland and W. Abraham, p. 7–29. Dordrecht: Kluwer. doi:10.1007/978-94-011-1842-2_2.

Partee, B. H. 2004. Compositionality in formal semantics: Selected papers by Barbara H. Partee. Oxford: Blackwell. doi:10.1002/9780470751305.

Partee, B. 2009. “Formal semantics, lexical semantics, and compositionality: The puzzle of privative adjectives.” Philologia 7, 11–23.

Pietroski, P. 2018. Conjoining meanings: Semantics without truth values. New York: Oxford University Press. doi:10.1093/oso/9780198812722.001.0001.

Pustejovsky, J. 1991. “The syntax of event structure.” Cognition 41, 47–81. doi:10.1016/0010-0277(91)90032-Y.

Pustejovsky, J. 1995. The generative lexicon. Cambridge, MA: MIT Press.

Pustejovsky, J. 2006. “Type theory and lexical decomposition.” Journal of Cognitive Science 6, 39–76. doi:10.1007/978-94-007-5189-7_2.

Putnam, H. 1975. “The meaning of ‘meaning’.” In Language, mind and knowledge, edited by K. Gunderson, p. 131–93. Minneapolis: University of Minnesota Press. doi:10.1017/CBO9780511625251.014.

Recanati, F. 2012. Mental files. New York: Oxford University Press. doi:10.1093/acprof:oso/9780199659982.001.0001.

Rijkhoff, J. 2002. “On the interaction of linguistic typology and Functional Grammar.” Functions of Language 9(2), 209–37. doi:10.1075/fol.9.2.05rij.

Rijkhoff, J. 2008. “Descriptive and discourse-referential modifiers in a layered model of the noun phrase.” Linguistics 46(4), 789–829. doi:10.1515/LING.2008.026.

Schiffer, S. 2015. “Meaning and formal semantics in generative grammar.” Erkenntnis 80(1), 61–87. doi:10.1007/s10670-014-9660-7.

Searle, J. 1983. Intentionality: An essay in the philosophy of mind. New York: Cambridge University Press. doi:10.1017/CBO9781139173452.

Shieber, S. M. 2003. An introduction to unification-based approaches to grammar. Massachusetts: Microtome Publishing.

Slobin, D. 2004. “The many ways to search for a frog: Linguistic typology and the expression of motion events.” In Relating events in narrative: Typological perspectives, edited by S. Strömqvist and L. Verhoeven, p. 219–57. Mahwah, NJ: Lawrence Erlbaum Associates.

Steedman, M. and M. Stone. 2006. “Is semantics computational?” Theoretical Linguistics 32(1), 73–89. doi:10.1515/TL.2006.006.

Talmy, L. 1988. “Force dynamics in language and cognition.” Cognitive Science 12(1), 49–100. doi:10.1207/s15516709cog1201_2.

Talmy, L. 2000. Toward a cognitive semantics: Concept structuring systems, Vol. 1. Cambridge, MA: MIT Press. doi:10.7551/mitpress/6847.001.0001.

Talmy, L. 2000a. Toward a cognitive semantics: Typology and process in concept structuring, Vol. 2. Cambridge, MA: MIT Press. doi:10.7551/mitpress/6848.001.0001.

Talmy, L. 2011. “Universals of semantics.” In Cambridge encyclopedia of the language sciences, edited by P. C. Hogan, p. 754–7. Cambridge: Cambridge University Press.

Ter Meulen, A. 1995. Representing time in natural language: The dynamic interpretation of tense and aspect. Cambridge, MA: MIT Press. doi:10.7551/mitpress/5897.001.0001.

Thornton, C. 2021. “Extensional superposition and its relation to compositionality in language and thought.” Cognitive Science 45(5), e12929. doi:10.1111/cogs.12929.

Turner, M. 2014. The origin of ideas: Blending, creativity, and the human spark. New York: Oxford University Press.

Van Valin, R. D. 2005. Exploring the syntax-semantics interface. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511610578.

Warglien, M., P. Gärdenfors, and M. Westera. 2012. “Event structure, conceptual spaces and the semantics of verbs.” Theoretical Linguistics 38(3–4), 159–93. doi:10.1515/tl-2012-0010.

Zwarts, J. and H. Verkuyl. 1994. “An algebra of conceptual structure: An investigation into Jackendoff’s conceptual semantics.” Linguistics and Philosophy 17, 1–28. doi:10.1007/BF00985039.

Received: 2022-06-18
Revised: 2022-11-11
Accepted: 2022-11-14
Published Online: 2023-02-15

© 2023 the author(s), published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
