So far we have argued that the formal semanticist's use of an intermediate logical language (the semantic representation), as discussed in earlier posts, is widely considered by the field to be at the level of a computational theory in the sense of Marr (1982), and is not intended to encode internal mental representations of meaning in any psychologically real fashion.
So what understanding of the human mind, then, do we gain from a study of the syntax-semantics interface construed in this way? The whole enterprise is made more difficult by the fact that we are essentially attempting to solve an equation in three unknowns: we don't yet know what syntactic representations (their primes and primitive operations) actually look like, we don't know what those abstract language-specific semantic representations look like, and we do not understand the principles of the mapping between the two, except that we know the mapping must be systematic.
The history of generative grammar offers a multiplicity of different formal proposals concerning what syntactic representations actually look like, with no consensus currently in sight. And we can see from the history of formal semantics that the mapping rules change drastically depending on the type of syntactic theory being interfaced with (cf. Lechner 2015; Partee 2014). The semantic representation language was taken over from formal logic systems, and it too has slowly adapted over time to form a better fit with the syntax (the particular kinds of syntax) that formal semanticists are mapping from. As the history of syntactic theorizing has shown, there is always a choice between enriching the syntactic representation and enriching the mapping rules between it and the semantic representation language. Within generative grammars alone, at least two different trends can be distinguished. On one side are the more derivational and/or abstract syntactic theories, whose abstractions in the form of covert rules and implicit structures (the Logical Forms of classic GB syntax, but also abstractness in the form of empty categories and implicit or unpronounced structure) are motivated by generalizations over interpretations. On the other side, less abstract, more direct and monostratal syntactic representations (e.g. Categorial Grammars, Lexical Functional Grammar, Montague Grammar itself, and Head-driven Phrase Structure Grammar) form the input to mapping rules which in turn must be more flexible and rich in their output possibilities. It is easy to see that in this kind of situation, the nature of the mapping rules and of the intermediate representations built can differ quite substantially from theory to theory.

The primes of the semantic representation language are also subject to variability from the pragmatics side. Every time a proposal is made about a pragmatic principle that can deliver the correct truth-conditional results from a more indeterminate semantic representation, the primes of the semantic representation must be adjusted in turn (see, e.g., the effect that Discourse Representation Theory had on the interpretation of the definite and indefinite articles in English). Every time a change occurs in one of these three areas, the details of the whole package shift slightly. The only thing that remains constant is the anchoring in truth conditions. We have to get there in the end, but if these are purely computational or instrumental theories, then we should not put too much stock in exactly how we get there, implementationally speaking.

Even compositionality, as a Fregean principle constraining the relationship between syntactic and semantic representations (see Heim and Kratzer 1998), can always be saved by employing the lambda calculus (Church 1936), a mathematical innovation which allows the logical representation of a complex expression to be decomposed into pieces (higher-order functions) that can match the constituents that syntax provides, whatever those turn out to be (see the sketch below). So compositionality in this technical sense turns out not to be a criterion according to which these theories can be distinguished from each other. Only if we believe that these convenient semantic representations have some kind of cognitive or algorithmic reality, or that there is at least some cognitive reality to the boundaries being drawn between the different components, is the specific research area of the syntax-semantics interface distinguishable from formal semantics simpliciter.
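To see concretely what this lambda-calculus repackaging amounts to, here is a minimal sketch in Haskell, in a toy extensional model invented purely for illustration (the entities, predicates and the `every` function are my own assumptions, not part of any theory discussed here). A quantified subject like `every dog' cannot denote an individual, but it can be recast as a higher-order function whose argument structure lines up with the constituents [every dog] and [barks] that the syntax provides.

```haskell
-- A toy extensional model: entities are strings, truth values are Bool.
-- Everything here is invented for illustration.
type E = String            -- entities
type T = Bool              -- truth values

universe :: [E]
universe = ["fido", "rex", "tom"]

dog, barks :: E -> T       -- one-place predicates, type <e,t>
dog   x = x `elem` ["fido", "rex"]
barks x = x `elem` ["fido", "rex", "tom"]

-- "every" as a higher-order function of type <<e,t>,<<e,t>,t>>: it consumes
-- the restrictor and then the scope one at a time, so the semantic pieces
-- match the syntactic constituents [every dog] and [barks], whatever the
-- syntactic theory says those constituents are.
every :: (E -> T) -> (E -> T) -> T
every p q = all q (filter p universe)

main :: IO ()
main = print (every dog barks)   -- True: both dogs are barkers
```

The sketch illustrates the point just made: currying lets the semantics carve its logical representation to fit whatever constituents the syntax delivers, which is exactly why compositionality in this technical sense does not by itself discriminate between theories, and carries no commitment to the psychological reality of the representations involved.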
In fact, most formal semanticists are unwilling to do any `neck baring', let alone `neck sticking out', in the area of psychological or cognitive prediction.
Unlike the formal semanticists proper, those of us working at the interface are interested in intermediate representations that we believe bear some organic relation to actual representations in actual minds. For many of us, the quest is to understand how the syntactic system of language discretizes and categorizes in order to create a workable symbolic tool for the mind, while still liaising with the brain's general, language-independent cognitive machinery.
Of special note here is a recent minority movement within formal semantics/philosophy of language towards exploring natural language ontology (Moltmann 2017, 2020). Moltmann in particular has argued that natural language ontology is an important domain within descriptive metaphysics (using the term from Strawson 1959), distinct from the kind of foundational metaphysics that the philosophical tradition tends to engage in, with its spare ontological commitments of Truth and Reference. I see natural language ontology as primarily interrogating our assumptions about the nature of the intermediate Semantic Representation that mediates between syntax and truth-evaluable representations, building its primes on the ontological commitments implicit in natural language(s) itself. As Fine (2017) argues, there is a case to be made that progress in foundational metaphysics relies on a close and nuanced understanding of the descriptive metaphysics involved in natural language ontologies. But even if that were not the case, it seems to me that the project of natural language ontology is crucial if we are to understand the compositional products of meaning and meaning building in language, and the mechanisms by which it is embedded in our cognition and cognitive processing more generally. The spare and elegant axiomatization of semantic descriptions anchored just in truth and reference to particulars simply does not do justice to the content, and the partial and incremental contents, that we see in language. Exploring natural language ontology in its own right, taking the internal evidence as primary, is a prerequisite to getting this kind of deeper understanding. Thus, even though we might think of the syn-semE as a computational theory, we can still have the goal of developing a language of primitives on the Semantic Representation side that is more responsive to the implicit categorization found in natural language. Formal semantics took its initial semantic representation language from formal logics, but has also repurposed that representation over time to fit natural language better. The research area of natural language ontology takes that goal to its natural conclusion: it questions the basic ontology of these representations, and potentially moves the model closer to one that will eventually be more commensurate with cognitive and neurolinguistic theories.
In turn, the patterns that emerge robustly from this kind of natural language investigation provide clues both to the nature of language itself and to the realities of the cognitive systems in which it is embedded. In Part I, I laid out three types of question for I-semantics: Type A questions, concerning descriptive generalizations relating semantic systems and the cognitive system they feed; Type B questions, related to acquisition and cognitive development; and Type C questions, concerning the feedback effects of having a language on the very cognitive systems that it subserves. I close this post with a number of examples of phenomena that I think count as instances of Type A generalizations. Note that the existence of these `universals' would be a surprising fact if general cognition were just one symmetric side of a listed form-meaning pairing. While there seem, for example, to be no deep generalizations concerning how syntactic primitives are mapped to externalized signals, there are candidates for universals in the mapping to I-semantics. I give some possible candidates in the following list:
(i) Without exception crosslinguistically, tense information is represented hierarchically in the syntax outside of causation in the verbal domain, and referential facts such as novelty or familiarity of reference are represented outside of size, colour and substance in the nominal domain (see Julien 2002).
(ii) All human languages make category distinctions within their lexical inventory, minimally N(oun) vs. V(erb) (Baker 2003), and we know that these kinds of syntactic category distinctions cannot be predicted from external facts about the world. But what is this a discretization of in our I-semantics of the world?
(iii) All human languages show open-ended combinatorial ability of open-class items to build creative new meanings.
(iv) Semantic modes of combination can be classified minimally into selectional, modificational and quantificational relationships. In other words, even though there is no single semantic combinatoric nexus that will cover all the attested forms of semantic combination, there seems to be a restricted set of semantic nexus types that all languages use (see Higginbotham 1985; Jackendoff 2002), conditioned in systematic ways by syntax; see the first sketch after this list.
(v) Quantificational relationships in the semantics always correspond to a particular hierarchical format in the syntax, with the restrictor of the quantifier combined with the operator, and the scope of the quantifier combined with that. This correlates with the semantic conservativity of all natural language quantifiers (Barwise and Cooper 1981; Lewis 1975); see the second sketch after this list.
(vi) The semantics of scalar structure is tracked by linguistic formatives across the syntactic categories of N(oun), V(erb), A(djective) and P(reposition), in all the languages that have been studied.
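As a concrete gloss on (iv), here is a small Haskell sketch of the three nexus types, again in a toy model invented only for illustration (the predicates and helper functions are my own, not drawn from the literature cited): selection as function application, modification as predicate intersection, and quantification as an operator combining a restrictor with a scope.

```haskell
-- Toy sketch of the three semantic nexus types: selection, modification,
-- quantification. Model invented for illustration only.
type E = String
type T = Bool

universe :: [E]
universe = ["fido", "rex"]

dog, brown, barks :: E -> T
dog   x = x `elem` ["fido", "rex"]
brown x = x == "fido"
barks x = x == "rex"

-- (a) Selection: a predicate applied directly to its argument ("Rex barks").
selection :: T
selection = barks "rex"

-- (b) Modification: two <e,t> predicates intersected ("brown dog").
modify :: (E -> T) -> (E -> T) -> (E -> T)
modify p q x = p x && q x

-- (c) Quantification: operator plus restrictor, then scope ("every dog barks").
every :: (E -> T) -> (E -> T) -> T
every p q = all q (filter p universe)

main :: IO ()
main = print (selection, modify brown dog "fido", every dog barks)
-- (True,True,False): Rex barks; Fido is a brown dog; not every dog barks.
```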
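And as a gloss on the conservativity claim in (v): a determiner Q is conservative just in case Q(A)(B) holds exactly when Q(A)(A ∩ B) does, so that `every dog barks' says the same as `every dog is a dog that barks'. The sketch below checks this for a few determiners over small finite models; the determiner definitions are standard generalized-quantifier fare, but the encoding is my own.

```haskell
import Data.List (intersect)

-- Conservativity check for toy generalized-quantifier determiners:
-- Q is conservative iff Q A B == Q A (A `intersect` B).
type E = Int
type Det = [E] -> [E] -> Bool

everyD, someD, mostD :: Det
everyD a b = all (`elem` b) a
someD  a b = any (`elem` b) a
mostD  a b = 2 * length (a `intersect` b) > length a

conservative :: Det -> [E] -> [E] -> Bool
conservative q a b = q a b == q a (a `intersect` b)

main :: IO ()
main = print [ conservative q [1,2,3] b
             | q <- [everyD, someD, mostD]
             , b <- [[2,3], [3,4], []] ]
-- All True: each determiner only ever "looks inside" its restrictor set.
```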
The points in (i)-(vi) are basic empirical generalizations, at a fairly abstract level, about how human languages compile meanings, and independently of the existence of the Minimalist Programme, they are things that, it seems to me, it is the job of the theoretical linguist to pursue in some way. Thus, properly understood, the Minimalist Programme does carve out an interesting and important domain of inquiry, one that might legitimately be called the syntax-semantics interface (Syn-SemI).