Day 1 of the features workshop started with Thomas Graf from Stony Brook (henceforth TG), who came out with guns blazing (Features: More Trouble than They're Worth?). And sure, if we are going to have a whole workshop about features, we should understand what that means. So TG showed that the formal relationship between features and constraints is one of interdefinability. Specifically, features can always be replaced by definable constraints within monadic second-order (MSO) logic, and every MSO-definable set of constraints can be encoded via features (Graf 2011, 2013, 2017, Kobele 2011).
So if systems articulated in terms of features and those expressed in terms of constraints are essentially notational variants, doesn't that mean that we don't need to worry about making a choice between them? After all, they turn out to be 'two different sides of the same coin', mathematically speaking. But as TG puts it, the problem is that the coin is too big. It turns out that there are all sorts of crazy constraints one can define in MSO, things that we absolutely do not want natural language to be able to do: not only the kinds of constraints we want, but also all of their symmetric opposites, and random Boolean combinations of them. Such systems can be made to count, they have no locality built in, and they freely allow mixing of constraints from different domains which really should not be talking to each other.
But this is just what we expect as linguists, right? The formalism itself does not do the analytical work for us; that is the job of the analyst. We need to construct the theory explicitly ourselves and constrain it based on what we see languages doing out there in the wild.
But here's the rub. It turns out that Feature Abuse is frighteningly easy to commit, and much harder to detect than one might imagine. Features are hard to regulate because they produce global behaviour through many small interactions encoded in a distributed fashion over thousands of roots. This makes it hard to relate the high-level properties of the system qua system to specific aspects of the feature calculus. Clawing back the power of constraints within MSO, by contrast, is much easier to do and mathematically better understood. For example, we can limit constraints to specific complexity classes, and we can formulate hypotheses and inject restrictions based on things like c-command or locality in a controlled fashion.
One of the cool and important things about interdefinability is that it is the opposite of boring. It's not like translating Norwegian into English, but more like translating Human into Martian. When you transform your landscape in Feature Land into Constraint Land, lakes turn into rain and volcanoes emerge from the mist, and vice versa. For some reason, the human brain does not readily perceive systems that are in fact mathematically equivalent as the same. This says to me that the two different implementations, although computationally equivalent, may in fact have different consequences and predictions in any ultimate move towards a more algorithmic understanding of how these systems inhabit the brain. For TG the warnings against features are methodological: working with constraints gives us better control and a better ability to test global properties of the system, and there's no downside, since constraints are always equivalent to the feature systems that most syntacticians prefer working with. So Features should be dumped in favour of Constraints.
But my reading of TG's subsequent work, and indeed the second half of his talk, also makes a slightly less methodologically dry point. The fact is that once you embrace interdefinability and work hard in different domains to reconceive systems described one way in terms of the other, you find that it is both hard and interesting. It is eye-opening to see how an issue or problem or generalization reshapes and topologically modifies itself during that process. Some things that previously seemed to be patterns dissolve frustratingly, but other generalizations become easier to see. TG has been training his Transmogrification Device on various linguistic phenomena, looking at islands, at selection, and at *ABA patterns in attempts to give new feature-free analyses. (As Prof. McGonagall points out, the transmogrification spell is more difficult to pull off as you increase the complexity of the thing you are transmogrifying. Living things are particularly hard, and probably also linguistic analyses.)
In the discussion session after TG's talk, Peter Svenonius (PS) asked: isn't a feature just a way of picking out a natural class of things to which a constraint or rule applies? So really the difference in notation is pretty benign. TG said absolutely, that is exactly what underpins interdefinability, but that once you explicitly reify the feature as part of the grammatical language, the properties of the system start to explode mathematically. Omer Preminger (OP) pointed out a case where a generalization emerged precisely by looking at things in terms of features, and TG said that of course such phenomena would be interesting points in favour of Feature theory, but that he would have to reserve judgement until he had a chance to recast the system in terms of constraints and see whether he could make the generalization emerge that way as well.
So will the talks in this workshop succeed in transcending the particularity of the feature language they are using to tell us something more general about the global systems they are describing? Will we be able to come up with a Theory of Feature Theory that will allow us to detect and avoid Feature Abuse? Will we showcase analyses that will convince TG that there is something important and useful in looking at things from a feature theory perspective?
Or maybe such striving is premature, as one of my colleagues maintains, and we should just use whatever toolbox is at hand to build detailed and solid descriptions of the huge range of grammatical phenomena we are currently still woefully ignorant about.
Since this got longer than I had expected, summaries of Harbour, Sheehan and Bejar will appear in blogpost 2.