Define Semantics

Features Workshop Blogpost 3: Svenonius vs. Preminger

My Red Line for this last blog post related to the Features Workshop is Semantics, and not `trail of blood’, as you might be thinking. The last day saw two final talks, one by our host/organizer Peter Svenonius (PS) of UiT The Arctic University of Norway and the other by Omer Preminger (OP) of Maryland. PS gave his talk first, entitled `Case is the Solution to a Locality Problem’, while OP’s talk was called `What are Phi Features Supposed to Do and Where?’.

Granted, from the titles it doesn’t particularly seem as if either of these talks was about semantics, but I think the pairing is interesting precisely because it highlights the different relationships morphosyntacticians can have to semantics, and the very different assumptions about what the word even means. But let’s be clear from the outset: both PS and OP are formalist syntacticians who believe that the grammatical computation is special and has formal properties that cannot be reduced to meaning or functionalist considerations. They also both care scrupulously about architecture and the principle of keeping the modules from communicating with each other in unattested ways. In this case, I think the real common ground here is rather extensive, but the rhetorical stance towards semantics stands out, at least superficially, as a point of difference. The kinds of questions they ask, and the nature of the solutions they seek, are also quite different.

OP in Brief: You cannot look to the interfaces to help you understand how syntax works, since there are persistent mismatches between syntax and morphophonological properties on the one side, and syntax and semantics on the other. The second half of the talk was an argument about privativity and features. OP wants to say that we can model important things about what the syntactic atoms are by using privative features, rather than binary ones. Important for him is that, under this way of looking at things, 3rd person singular is actually the absence of any person feature, which predicts how it behaves when it comes to Agree. In particular, there is no omnivorous 3rd person singular agreement in natural language.

(Btw, TG wasn’t convinced that one could show in a principled way that privative systems can do anything different from the corresponding binary systems. But there is a complex interaction with what sorts of agreement mechanisms you invoke.)

PS in Brief: Case is a big mystery, and we have no consensus on how to model the apparent licensing functions of case, or the patterns of morphological tagging that show up in nominal licensing systems. PS chooses to model case using the tool of Agree and the checking of uninterpretable Kase features. Case is interesting precisely because in a technical sense it is not `interpretable’: it bears only an indirect relationship to the atomic units of meaning. In OP’s terms, it shows mismatches with any semantic class that one might try to define by purely semantic criteria. PS too is interested in this `irrational’ phenomenon because it shows evidence of something syntax needs to do for its own theory-internal reasons. It is the syntactic phenomenon par excellence. However, in attempting to answer the WHY question for Case, PS makes a proposal that indirectly rests on an understanding of the semantic properties of the clausal hierarchy. What PS proposes (capitalizing on the fact that marked case emerges in the presence of two nominals) is that it is necessary to distinguish nominals from each other in the T-domain, since otherwise the link back to the thematic domain is made indeterminate. PS builds on a view from Ramchand and Svenonius 2014 concerning the properties of the two lowest zones of the clause, characterized by the nature of the abstract semantic information that is articulated there. He argues that having structural case is correlated with the diagnostic properties of the second phase of the clause (referentiality), and that lack of case is correlated with that nominal carrying only `low’, thematic-relational information. So even though Case is still not directly correlated with a particular interpretation, the analysis is more semantically motivated than the standard ones, which just posit a universal Case Filter.

OP is on record as publicly deploring what happens when syntacticians allow semantics to infect what they do, corrupting their analyses and undermining progress. (Actually, I am at a bit of a loss about what OP is complaining about here, since from my perspective most of the high-prestige work on syntactic theory is being done in a complete semantic vacuum, just as he would advocate. I happen to think much of that work is highly theory-internal and sterile and will be obsolete as soon as the toolbox changes.) The talk in this workshop was a variation on the Semantics is Evil theme, since the first part of the talk was an exercise in showing that there are easily found mismatches between the two domains, i.e. no actual transparency between the units of the syntactic representation and the primes of the semantic representation. (Now, OP did not say that there was no relationship, but that there are frequent enough mismatches that the one domain cannot be explained by the other.)

So do OP and PS disagree about the role of semantics in syntax? They both would say that they believe in the Autonomy of Syntax, but they each seem to have a different working relationship with, and different rhetoric towards, facts about what hierarchical structures can and must mean.

In fact, I think that the straw man that OP set up in the first half of his talk is not entertained or propped up by anyone. First of all, what is the semantic representation that we are checking for transparency with the syntactic representation? Are we talking about the notational primes of a Montagovian or neo-Davidsonian semantic formula? If so, then I do not think that any working formal semanticist out there would make those claims about their own formulas: they do not think their analytical core units necessarily correspond to the core units that syntax traffics in. There is perhaps a systematic algorithm that gets you from one representation to the other, but there is no transparent mapping between the primes. Formal semantic representations are highly articulated descriptions of truth conditions, and compositionality can be guaranteed without the match-up of atomic primitives. In most cases, it doesn’t seem to me that formal semanticists are very precious about their notational atoms, as long as the right truth conditions are delivered.

A different question would be to ask whether the mapping from the narrow syntactic representation to the meaning-building parts of mind/brain is transparent, or even systematic. Now here, there are two ways of construing the question, one of which makes the answer (trivially) YES and the other of which makes the answer interestingly NO.

The YES answer comes about if we want to know whether the `pieces’ of a lexicalized syntactic hierarchical structure correspond in a systematic way to the meanings ultimately being built. Here the answer has to be yes because otherwise we have no way of reliably guaranteeing mutual understanding in the face of unbounded creativity.

On the other hand, if we wonder whether those meaning chunks and operations are sitting out there in cognition-world, independent of language, I think that the answer must be no. Language as a symbolic system co-evolves with the cognitive system more generally during the maturation of every individual. It is not a stretch to assume that the pieces that are stored, learned and redeployed, and the mechanisms for semantically combining them, are indeed created by and forced on us by the symbolic system itself, albeit against the background of our general cognitive proclivities, perceptions, and learning biases. Thus, the semantics I am interested in, the one that is systematically tied to the syntax, is not really independent of it. Semantics with a small s is back-formed from the complexity of the symbolic system that generates it. This is inevitably different from semiotic systems, which simply provide labels for independently available concepts. Many syntacticians are fond of talking about the recursivity of the syntactic system as a core property of what makes language unique, but I would argue that is not enough. Language requires, in addition, the proceduralization of recursive symbolization, and a semanticity that is tracked through recursion and unboundedness. As James Higginbotham, my late teacher and colleague, used to say, when it comes to the syntax-semantics interface, it is like solving an equation in three unknowns.

The problem is that most people don’t use the term Semantics this way. They use it to mean the description of extensional truth conditions, and this, I believe, has even less direct connection with the internal pieces of the sentence than most people assume (see Pietroski 2018 on this point, and also my review of him). At best, truth conditions stand in a holistic correspondence to whole sentential chunks, and that too only after pragmatic inferencing and reference tracking at the discourse level have applied. So I think OP and I are probably talking past each other here. But the fact remains that some of the questions he is most interested in are the ones where one can ignore the semantic influences on the answer, and this, I think, distinguishes him from PS.

I think OP is using his rhetoric to justify looking at his favourite pieces of the puzzle. It’s nice to have favourite pet projects (Confession: I like Verbs), for example being most interested in the parts of the system which are about system-internal formal relations (OP?). But it is almost impossible to isolate what those are without understanding the nature of how the narrow computation feeds other cognitive systems. It is not possible to introspect about what syntax is. It is a category of thing which by hypothesis is sui generis, and we figure out its scope in part by peeling away the things that it is not. In other words, if you want to distinguish syntax from not-syntax, then it’s a good idea to be able to recognize not-syntax. To take an example from OP’s own talk on Wednesday, he makes the argument that anaphoric binding is not coextensive with abstract agreement, and should not be handled with the same abstract mechanism, Agree. One of the planks of the argument consists in showing that, more generally, the phenomenon of coreference does not require phi-feature agreement, and that superficial agreement for phi features occurs even when there is demonstrably no syntactic relationship possible. So this is an example of how one has to pay attention to meaning to make an argument about what lies in the syntactic system proper, and what should be excluded from it.

On a very basic level, there is a methodological claim that one simply cannot run constituency tests such as movement, or ask whether reference is guaranteed by phi-feature agreement, without using people’s judgements about meaning as your core data. But it’s not just methodological, it’s also the explanandum: I do not think that understanding language is tantamount to being able to delimit the class of grammatical utterances (as I think OP has claimed elsewhere). Part of the task is also to preserve semanticity and account for the reliability of the constraints on ultimate meaning that speakers feel grammatical utterances have.

5 thoughts on “Define Semantics”

  1. I think that you are just wrong that “the straw man that OP set up in the first half of his talk is not entertained or propped up by anyone.” It is perhaps not one that _you_ (Gillian) take seriously; but there is a whole body of work, very respected in certain circles, emanating out of Western Massachusetts in roughly concentric circles, where:

    1. Making the compositional-semantics trains (and here, this means Montague-Partee type trains) run on time is an adequacy criterion imposed on syntactic theory.

    2. Arguments from meaning alone, which in some cases have no morphosyntactic mooring whatsoever, and in other cases make the morphosyntactic state of affairs actively worse, are considered sufficient to motivate “syntactic” theories. (See: Kyle Johnson’s multidominance proposal for movement, or Keir Moulton’s work on complementation.)

    And there are hiring committees populated by adherents of this school of thought, where a syntax hire would not happen unless the candidate showed that she or he put meaning before morphosyntax in all cases.

    Now, you very well might not take seriously the kind of semantics that these people are bending their syntactic theories _to_, and so it doesn’t register on your radar as work that is “bending syntax to the whims of semantics.” But it is out there, and it is absolutely not a constructed straw man…

  2. Oh, regarding the use of meaning in constituency tests and other basic tests that bootstrap syntactic theory: this use of meaning is purely heuristic, and does not tell us much one way or another about modularity, I think. To borrow an example from Marc van Oostendorp, you also cannot do minimal pairs in phonology without a heuristic appeal to meaning; does that mean phonology and semantics are module-mates? Probably not.

    • I totally understand that you (and all those others) are using these things as heuristics. In the case of phonology, I think it is used to delimit the domain of what counts as the `same’ thing stored in declarative memory, which is bounded and finite. In the case of syntax, it tracks the unbounded creative generative capacity of syntactic representations. Given that we should probably not start off with the assumption that there are two completely distinct generative engines that track each other, we should ask ourselves why it is that these particular heuristics work and are indeed so necessary and natural in picking out the units of syntactic composition in so many cases. So I understand that people are assuming they are just heuristics. I just think that in fact these things are part of the explanandum.

      • I absolutely agree that there is exactly one engine of unbounded generative capacity, not two. (And it’s called syntax.) And since both the PF and LF interfaces are “interpretive” (in the sense that they perform some transformation on the output of the syntactic component), they can derivatively exhibit unbounded productivity as well. This does not uniquely pick out semantics, btw; wug tests in all their many varieties show that morphophonology is similarly productive, in that absent a reason not to (e.g. listed exception, or overriding transformational rule), we’re happy to apply the default combinatoric rules that map a syntactic structure onto a PF or LF one, even to cases that we’re encountering for the first time.

        Both with wug tests and with constituency tests, we’re placing a wager that the particular example(s) we’ve chosen have not accidentally landed us in one of these corners of exceptionality. (And so, of course, it is useful to try an assortment of examples.) I don’t think the question of why they work is a particularly deep one: given an infinite space, the terrain covered by the exceptions is small; so the chance that our wug or our constituent is in one of these corners is small. And all that constituency tests demand of the relevant semantic mapping is that it is “stable” (in the particular sense that it will not change, or change in very minor ways, as the result of the transformations that constituency tests involve, like topic fronting, etc.). That is not a very involved assumption about the syn-sem mapping, in the sense that there is a very wide variety of mapping theories that will deliver this result (e.g. any theory that reconstructs movement when doing thematic interpretation).

        So the fact that these tests work (wugs and constituency) is indicative of fairly broad properties of the two mappings (broad in the sense that they are shared by a wide variety of theories). This is exactly what one wants from a heuristic, of course.

        And yes, that the mappings (on both sides, PF and LF) have these properties is certainly part of the explanandum. But I don’t think that means that meaning is part of the “core data” of syntax (any more than sound is, anyway). As I am fond of saying, the core data of syntax is precisely that which is neither sound nor meaning 🙂
