Features Workshop Blog Post 3: Svenonius vs. Preminger
My Red Line for this last blog post related to the Features Workshop is Semantics, and not a `trail of blood', like you might be thinking. The last day saw two final talks, one by our host/organizer Peter Svenonius (PS, UiT The Arctic University of Norway) and the other by Omer Preminger (OP, Maryland). PS gave his talk first, entitled `Case is the Solution to a Locality Problem', while OP's talk was called `What are Phi Features Supposed to Do and Where?'.
Granted, from the titles it doesn't particularly seem as if either of these talks was about semantics, but I think the pairing is interesting precisely because it highlights the different relationships morphosyntacticians can have to semantics, and the very different assumptions about what the word even means. But let's be clear from the outset: both PS and OP are formalist syntacticians who believe that the grammatical computation is special and has formal properties that cannot be reduced to meaning or functionalist considerations. They also both care scrupulously about architecture and the principle of keeping the modules from communicating with each other in unattested ways. In this case, I think the real common ground is rather extensive, but the rhetorical stance towards semantics stands out, at least superficially, as a point of difference. The kinds of questions they ask, and the nature of the solutions they seek, are also quite different.
OP in Brief: You cannot look to the interfaces to help you understand how syntax works, since there are persistent mismatches between syntax and morphophonological properties on the one side, and syntax and semantics on the other. The second half of the talk was an argument about privativity and features. OP wants to say that we can model important things about what the syntactic atoms are by using privative features, rather than binary ones. Important for him is that under this way of looking at things, the fact that 3rd person singular is actually the absence of any feature for person predicts how it behaves when it comes to Agree. In particular, there is no omnivorous 3rd person singular agreement in natural language.
(Btw, TG wasn't convinced that one could show in a principled way that privative systems can do anything different from the corresponding binary systems. But there is a complex interaction with what sorts of agreement mechanisms you invoke.)
PS in Brief: Case is a big mystery, and we have no consensus on how to model the apparent licensing functions of case, or the patterns of morphological tagging that show up in nominal licensing systems. PS chooses to model case using the tool of Agree and the checking of uninterpretable Kase features. Case is interesting precisely because in a technical sense it is not `interpretable': it bears only an indirect relationship to the atomic units of meaning. In OP's terms, it shows mismatches with any semantic class that one might try to define by purely semantic criteria. PS too is interested in this `irrational' phenomenon because it shows evidence of something syntax needs to do for its own theory-internal reasons. It is the syntactic phenomenon par excellence. However, in attempting to answer the WHY question for Case, PS makes a proposal that indirectly rests on the understanding of the semantic properties of the clausal hierarchy. What PS proposes (capitalizing on the fact that marked case emerges in the presence of two nominals) is that it is necessary to distinguish nominals from each other in the T-domain, since otherwise the link back to the thematic domain is made indeterminate. PS builds on a view from Ramchand and Svenonius 2014 concerning the properties of the two lowest zones of the clause, characterized by the nature of the abstract semantic information that is articulated there. He argues that having structural case is correlated with the diagnostic properties of the second phase of the clause (referentiality), and that lack of case is correlated with that nominal carrying only `low', thematic-relational information. So even though Case is still not directly correlated with a particular interpretation, the analysis is more semantically motivated than the standard ones, which just posit a universal Case Filter.
OP is on record as publicly deploring what happens when syntacticians allow semantics to infect what they do, corrupting their analyses and undermining progress. (Actually, I am at a bit of a loss about what OP is complaining about here, since from my perspective most of the high prestige work on syntactic theory is being done in a complete semantic vacuum, just as he would advocate. I happen to think much of that work is highly theory internal and sterile and will be obsolete as soon as the toolbox changes.) The talk in this workshop was a variation on the Semantics is Evil theme, since the first part of the talk was an exercise in showing that there are easily found mismatches between the two domains, i.e. no actual transparency between the units of the syntactic representation and the primes of the semantic representation. (Now OP did not say that there was no relationship, but that there are frequent enough mismatches that the one domain cannot be explained by the other.)
So do OP and PS disagree about the role of semantics in syntax? They both would say that they believe in the Autonomy of Syntax, but they each seem to have a different working relationship with, and different rhetoric towards, facts about what hierarchical structures can and must mean.
In fact, I think that the straw man that OP set up in the first half of his talk is not entertained or propped up by anyone. First of all, what is the semantic representation that we are checking for transparency with the syntactic representation? Are we talking about the notational primes of a Montagovian or neo-Davidsonian semantic formula? If so, then I do not think that any working formal semanticist out there would make those claims about their own formulas: they do not think their analytical core units necessarily correspond to the core units that syntax traffics in. There is perhaps a systematic algorithm that gets you from one representation to the other, but there is no transparent mapping between the primes. Formal semantic representations are highly articulated descriptions of truth conditions, and compositionality can be guaranteed without a match up of atomic primitives. In most cases, it doesn't seem to me that formal semanticists are very precious about their notational atoms, as long as the right truth conditions are delivered.
A different question would be to ask whether the mapping from the narrow syntactic representation to the meaning-building parts of mind/brain is transparent, or even systematic. Now here, there are two ways of construing the question, one of which makes the answer (trivially) YES and the other of which makes the answer interestingly NO.
The YES answer comes about if we want to know whether the `pieces’ of a lexicalized syntactic hierarchical structure correspond in a systematic way to the meanings ultimately being built. Here the answer has to be yes because otherwise we have no way of reliably guaranteeing mutual understanding in the face of unbounded creativity.
On the other hand, if we wonder whether those meaning chunks and operations are sitting in cognition-world out there independent of language, I think that the answer must be no. Language as a symbolic system co-evolves with the cognitive system more generally during the maturation of every individual. It is not a stretch to assume that the pieces that are stored, learned and redeployed, and the mechanisms for semantically combining them, are indeed created by and forced on us by the symbolic system itself, albeit against the background of our general cognitive proclivities, perceptions, and learning biases. Thus, the semantics I am interested in, the one that is systematically tied to the syntax, is not really independent of it. Semantics with a small s is back-formed from the complexity of the symbolic system that generates it. This is inevitably different from semiotic systems, which simply provide labels for independently available concepts. Many syntacticians are fond of talking about the recursivity of the syntactic system as a core property of what makes language unique, but I would argue that is not enough. Language requires in addition the proceduralization of recursive symbolization, and semanticity that is tracked through recursion and unboundedness. As James Higginbotham, my late teacher and colleague, used to say, when it comes to the syntax-semantics interface, it is like solving an equation in three unknowns.
The problem is that most people don't use the term Semantics this way. They use it to mean the description of extensional truth conditions, and this, I believe, has even less direct connection with the internal pieces of the sentence than most people assume (see Pietroski 2018 on this point, and also my review of him). At best, truth conditions stand in a holistic correspondence to whole sentential chunks, and that too only after pragmatic inferencing and reference tracking at the discourse level has applied. So I think OP and I are probably talking past each other here. But the fact remains that some of the questions he is most interested in are the ones where one can ignore the semantic influences on the answer, and this distinguishes him, I think, from PS.
I think OP is using his rhetoric to justify looking at his favourite pieces of the puzzle. It's nice to have favourite pet projects (confession: I like Verbs), for example being most interested in the parts of the system that are about system-internal formal relations (OP?). But it is almost impossible to isolate what those are without understanding the nature of how the narrow computation feeds other cognitive systems. It is not possible to introspect about what syntax is. It is a category of thing which by hypothesis is sui generis, and we figure out its scope in part by peeling away the things that it is not. In other words, if you want to distinguish syntax from not-syntax, then it's a good idea to be able to recognize not-syntax. To take an example from OP's own talk on Wednesday, he makes the argument that anaphoric binding is not coextensive with abstract agreement, and should not be handled with the same abstract mechanism Agree. One of the planks of the argument consists in showing that, more generally, the phenomenon of coreference does not require phi feature agreement, and that superficial agreement for phi features occurs even when there is demonstrably no syntactic relationship possible. So this is an example of how one has to pay attention to meaning to make an argument about what lies in the syntactic system proper, and what should be excluded from it.
On a very basic level, there is a methodological claim: one simply cannot run constituency tests such as movement, or ask whether reference is guaranteed by phi-feature agreement, without using people's judgements about meaning as your core data. But it's not just methodological, it's also the explanandum: I do not think that understanding language is tantamount to being able to delimit the class of grammatical utterances (as I think OP has claimed elsewhere). Part of the task is also to preserve semanticity, and to account for the reliability of the constraints on ultimate meaning that speakers feel grammatical utterances have.