On Monday 27th and Tuesday 28th of May, a number of very clever linguists who have thought long and hard about grammatical theory will be descending on the tiny town of Tromsø, at UiT the Arctic University of Norway, to try to make progress on one of the most central and parochial issues in the implementation of generative grammars: namely, what is the right theory of features.
The cast of characters includes Michelle Sheehan (Anglia Ruskin), Omer Preminger (Maryland), Susana Bejar (U of Toronto), Daniel Harbour (Queen Mary) and Thomas Graf (Stony Brook) as visitors, and myself, Peter Svenonius, Craig Sailor, Antonio Fabregas and the rest of the Tromsø CASTLFish milieu as the hosts.
You will find a programme for the workshop here:
As Peter Svenonius says in his introduction to the topic of the workshop:
“Features are an indispensible part of any grammatical system, but there is little consensus on some of their essential properties.”
So why is there so little consensus, and do we expect that a workshop like this can come to some conclusions that will move the theory forward? More importantly, will having a better theory of features within the implementation of grammars in this tradition, help us make progress on the big questions of language and grammar? Or is this just a housekeeping exercise for the members of a particular tribe?
Now, I've never been a Features girl, and I am not giving a talk at the workshop, although I have great respect for all the linguists we will be hosting next week and I will be there giving it my rapt attention. So I thought I would start this blog post by trying to articulate for myself why I have never been very engaged with the theory of features debate thus far. (Disclaimer: The following is a grumble-list based on my own particular interests, and in many cases it applies not to the very best work on features, but to the mainstream.)
1. The royal family of feature theory is Phi features, in particular as they appear in agreement configurations. Agreement is the classic place where morphology is semantically inert. If you are me, you aren't excited by semantic impotency.
2. Even when we move away from Phi feature agreement, too much work on features worries about how features interact within a grammatical system, and spends much less time on what they are. Is there more than one kind of Agree? Is probing upwards or downwards? I find it hard to think about these questions, or even to contemplate case studies with any urgency, in the absence of an overarching argument concerning what sorts of things are given featural manifestation.
3. Syntactic work on features and feature checking quite often involves lots of intricate low-level detail and decisions between subtly different kinds of implementation. When I look at the details, I can nearly always think of another way of doing it that doesn't conform to the axiomatic commitments of the authors. I have no faith that there is any contentful or prediction-making distinction between these notationally different but essentially equivalent choices of toolbox. Even though syntacticians take the atoms of their theories very seriously (see Thomas Graf's blog post on this topic here), they are still somehow committed to the idea that these theories are computational rather than algorithmic in Marr's sense, and will explicitly disavow any predictions in terms of what real minds do. This seems like a contradiction to me, or at least a self-deception. If we take the computational-level stance at its word, it becomes even less compelling to quibble over implementational hair-splitting.
So this is why the current lineup of speakers promises to be so interesting. Harbour and Sheehan are definitely worried about morphosemantics, and not just phi-feature agreement. Bejar and Preminger have worked on very similar kinds of data and have different implementations and theoretical commitments: can we press the issue and diagnose the extent to which these differences actually matter for something we should care about? Graf is there as the mathematician keeping us honest with respect to what is formally equivalent and with what consequences, challenging us from outside the tribe. The whole team consists of individuals who are distinct in their items of faith, informed, and opinionated enough to get a discussion going that will address the big-picture architectural and methodological questions. The combinations are potent and provocative enough to deconstruct the whole damn system. It's like the opposite of The Avengers coming together to save the world. Since the world is feature theory as it is currently practised, I am quite happy to embrace a potential future apocalypse.
On the other hand, it might just be productive, world-changing consensus building, and what will emerge will be foundational for the feature theories to come. 🙂
3 thoughts on “There Will be Blood”
I have always thought of features (abstracted over a class of linguistic objects) as the result of the child learner having established some systematic property over that class. Thus features must be "formal"; they may, but need not, have semantic potency. Take gender. Some gender classes (in some languages) do have semantic correlates (e.g., biological sex, size, etc.), but in some languages, e.g., German, [+fem] is just the class of nouns that systematically takes a particular determiner and inflects plural by adding -n.
Phi features of course generally do have semantic potency, but we probably shouldn't read too much into that. The 3sg form of the verb in Romance, and the nom.sg form of the noun in Germanic, both appear to be the "base" form. This doesn't seem to have any semantic motivation but only stems from the fact that they are, for whatever reason, perhaps diachronic, the most frequent forms, so the child learns them first.
Well, my comment about semantic potency was really to explain my own biases, in the sense that I most enjoy working in subareas where I have to think about the relationship between meaning more generally, and the contributions to and interfaces with meaning that come along with a formal symbolic system like language. Of course, as I think Omer would also say, the places where we can show that general semantic or functional considerations could not possibly be responsible for a pattern or generalization we see are the most interesting, because they show the formal system at work. Features are one way of implementing the formal labels that form the organizational scaffold for a complex interacting system. And when you see actual agreement, you can be sure that the formal system is responding to something essentially formal, so it's a clue about what the units and classifications being attended to actually are. So I would never deny that what we see in language is formal, or claim that syntax can be derived from semantics. However, I do think that part of the interest in looking at these things is in _comparing_ the fuzzy with the symbolic and thereby understanding the trick of how creative meaning generation actually happens, which for me is part of the central human puzzle. So studying features in language is a core aspect of grammar, but I am more interested in the WHAT and WHY than the HOW (which I think depends on understanding the former two, and is more likely to be parochial and theory-internal), and I worry that some of the theory-internal, purely implementational properties of features get more air time than the bigger architectural questions.