The next talk at Arctic Features was by Daniel Harbour (DH) on Maximal use of [+/- minimal]. DH is a typological morphosemanticist, who is on a quest for highly abstract universal features. In DH's previous work on person systems, he argues for the necessity of a feature [+/- minimal], which basically has the interpretation of divisibility of reference (cf. Krifka 1989). This feature is argued to interact with the system of person features to give rise to complicated pronominal systems involving duals and inclusive vs. exclusive participant plurals. DH endorses the intuition of Bach (1981), inter alia, that divisibility is potentially a property that crosses category boundaries, at the very least straddling the nominal and verbal domains. A predicate description P conforms to divisibility of reference if for every x that is a P, one can find a material subpart of x, say y, that also satisfies the description P. This is true of the nominal predicate water, but is plausibly also true of the verbal predicate sleep or be-tired (anything stative or activity-like, down to a certain granularity, according to Taylor 1977). Maybe, in fact, conjectures DH, [+/- minimal] underpins the definition of imperfectivity in verbal aspectual marking more generally. This leads DH to set up the following hypothesis about the space of morphological systems: if a language demonstrably uses [+/- minimal] in its pronominal system (because we can detect a morphologically marked distinction between 2 and 3+, for example), then it is statistically more likely to use [+/- minimal] in its verbal inflectional system and overtly mark imperfectivity. So here comes the typology, and after a flurry of checking and counting (60 relevant languages), the report is that there is a fairly robust correlation between funky pronominal systems in the DH sense and overt imperfectivity marking: 89 percent of the languages with such pronominal systems mark imperfectivity, which is higher than average.
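The definition of divisibility of reference is formal enough to sketch in code. Here is a minimal toy model (my own illustration, not DH's formalism): quantities are modeled as sets of minimal parts, a predicate's extension is a set of such quantities, and the Taylor-style granularity caveat is built in by exempting minimal-size quantities from the check. The names `water`, `cat`, and the three-part domain are all hypothetical.

```python
from itertools import chain, combinations

def proper_subparts(x):
    """All nonempty proper subsets of a quantity x (a frozenset of minimal parts)."""
    xs = list(x)
    return [frozenset(c) for r in range(1, len(xs))
            for c in combinations(xs, r)]

def is_divisive(P):
    """A predicate extension P (a set of quantities) has divisible reference
    if every quantity in P has a proper subpart that is also in P.
    Per the granularity caveat, quantities of minimal size are exempt."""
    return all(any(y in P for y in proper_subparts(x))
               for x in P if len(x) > 1)

# Hypothetical toy domain of three minimal parts:
parts = {'a', 'b', 'c'}
# Mass-like predicate: every nonempty sum of parts counts as water.
water = {frozenset(s) for s in chain.from_iterable(
    combinations(parts, r) for r in range(1, 4))}
# Count-like predicate: only the whole individual counts as a cat.
cat = {frozenset(parts)}

print(is_divisive(water))  # True  - subparts of water are water
print(is_divisive(cat))    # False - subparts of a cat are not cats
```

The same check could in principle be run over event predicates (sleep vs. build-a-house), which is the cross-categorial point DH is after.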
But even if we think this is true, it raises a number of questions, which the room was awash with when DH was done with his talk. TG wanted to know why a language would reuse a feature in this way. What is it about the system that might drive you to reuse it? If it is a compelling cognitive distinction, it might be the basis of morphological distinctions across a wide swathe of domains because it is a cognitively general organizational principle, not because the system is literally and mechanically reusing an atom of the featural system from one place to another. Also, why look for a correlation between pronominal systems and imperfectivity marking, instead of, say, a correlation with the marking of mass vs. count? Do we expect features to be universal across languages, and is this because of cognitive or even linguistic necessity? Or do we expect the inventory of features to vary from language to language, since famously `nobody ever conceived of a universal morphology’, said somebody, some time. DH is conceiving of universal aspects of morphology that transcend not just languages, but also categories within a particular language. At some point somebody in the audience raised the spectre of the Sapir-Whorf hypothesis, but DH slapped that back. Not entirely convincingly, in my opinion.
In the next talk, Michelle Sheehan (MS; Anglia Ruskin University) tackled the issue of successive cyclic movement, in particular trying to find evidence for an A-movement incarnation of successive cyclicity. The spoiler here is that no, there is plausibly no such thing. MS takes as her starting point the ungrammaticality of long passives under certain causative and perception predicates in many languages.
(1) *Kim was made leave (by someone).
The above phenomenon has been noticed and accounted for in various different ways, with no consensus on which module of grammar is to blame. MS proposes that the ungrammaticality of (1) actually follows from phase theory under certain unremarkable assumptions, if we claim that there are no feature triggers for successive cyclicity that interact with the A system. MS assumes two clause-related phases, roughly corresponding to the C domain versus the v domain (van Urk and Richards). Specifically, however, the lowest phase is dynamic, and at its biggest is a little bit bigger than is classically assumed: ProgP is the largest v-related phase in English (Harwood, Boskovic, Sailor). The patterns fall out if the complement of make is a phase. MS assumes version 2 of the Phase Impenetrability Condition (PIC2), whereby we get a window of opportunity for establishing A relations in the T zone after a phase is assembled but before it is spelled out.
Why is the sentence with the to-infinitive possible?
(2) Someone was seen to run in the corridor.
(2) is good because the to-phrase is a TP and there is an EPP feature that would drag the DP argument to the edge in any case, allowing it to escape the phase. Passives of causatives/perception verbs will only be blocked where the complement they take is a phase that lacks T. If the complements get big enough, then there is potentially a phase edge that the DP might end up in for independent reasons; if they are too small, the complement is not even a phase. So, if MS is right and the best way to account for this nest of data is to say that A movement is never successive cyclic, then we raise the question of how to model this difference between A movement on the one hand and A-bar movement on the other. MS suggests that this might furnish an indirect argument that successive cyclic movement must be feature driven, since it's hard to see how you could model the difference otherwise.
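The logic of the argument can be schematized very compactly. Here is a minimal sketch (my own illustration, not MS's implementation) of the escape condition: since there are no dedicated successive-cyclic triggers for A-movement, a DP inside a phasal complement escapes only if some head inside it (here, an EPP-bearing T) independently drags it to the edge. The dictionaries and head labels are hypothetical simplifications.

```python
def dp_can_escape(complement):
    """complement: dict with 'is_phase' (bool) and 'heads' (list of head labels).
    Under PIC2, a DP trapped inside a spelled-out phase cannot A-move out
    unless an EPP-bearing T head has independently moved it to the edge:
    there is no successive-cyclic A-movement trigger to do the job."""
    if not complement['is_phase']:
        return True                      # too small to be a phase: nothing to escape
    return 'T' in complement['heads']    # T's EPP drags the DP to the phase edge

bare_infinitive  = {'is_phase': True,  'heads': ['v', 'V']}       # *Kim was made leave
to_infinitive    = {'is_phase': True,  'heads': ['T', 'v', 'V']}  # Someone was seen to run
small_complement = {'is_phase': False, 'heads': ['V']}            # no phase boundary at all

print(dp_can_escape(bare_infinitive))   # False - long passive blocked
print(dp_can_escape(to_infinitive))     # True  - TP edge available
print(dp_can_escape(small_complement))  # True  - nothing to escape
```

The three cases mirror the generalization in the text: blocking arises exactly when the complement is a phase lacking T.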
So MS gave it her best shot, and showed us her best argument for a grammatical phenomenon requiring abstract features, but TG says no, you can definitely model this with constraints. But would it look so elegant?
The final talk of the day was Susana Béjar (SB) (U of Toronto) on `How to be a Picky Probe’. SB: “In addition to serving as diacritics for defining natural classes of syntactic objects, features serve as diacritics for modeling syntactic dependencies (local and non-local). This is all a probe is: a syntactic diacritic that signals a trigger for dependency formation, as well as identifying a target and a dependent in the relation.” SB has spent much of her research life looking at phi feature probes and trying to see how these kinds of dependencies work in a variety of natural languages, essentially seeking to describe faithfully while searching for higher-level generalizations. With respect to phi, one thing SB has discovered in her travels is that hierarchically low probes tend to be picky with respect to participant features, while the probes on the higher T heads are always much less picky.
In today’s talk the focus was on a tricky subcase of interactions where no pattern or generalization seems to be detectable. 😦 This horrible domain is a subcase of defective non-interveners: things that should be in the path of the probe, not valuing the probe, but also not producing an intervention.
Probes are by definition picky, and their pickiness is tantamount to a visibility condition on objects in the search space, with important analytic consequences for locality, so it's important to see what patterns exist.
SB shows two case studies that should make us worried. One comes from Georgian agreement. She shows that for the purposes of the AGR probe for one of the agreement slots, the 3rd person dative intervenes. However, for the purposes of the other probe, the dative does not intervene. The second case study comes from agreement on the verb with the subject in Persian, which works one way with a low probe on a simple main verb, but another way with a high probe on a modal auxiliary. We are forced to say that these probes have different sensitivities. To summarize:
Low AGR is sensitive to the person of DAT; it can't see past it.
High AGR is insensitive to the person of DAT; it can see past it.
Low AGR is sensitive to the phi of the intensional subject; it can see past it.
High AGR is insensitive to person defectivity; it can't see past it.
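The idea that pickiness is a visibility condition can be made concrete with a toy model (my own schematization, not SB's analysis): a probe searches down an ordered list of goals, halts at the first goal it is sensitive to, and sees past goals it is insensitive to. The feature bundles and the Georgian-style configuration below are hypothetical simplifications.

```python
def probe(search_space, sensitive_to):
    """search_space: goals in c-command order; each goal is a dict of features.
    sensitive_to: a predicate encoding this probe's visibility condition.
    The probe returns the first visible goal (agreement/intervention) or None."""
    for goal in search_space:
        if sensitive_to(goal):
            return goal
    return None

# Hypothetical Georgian-style configuration: a 3rd person dative
# structurally closer to the probe than the nominative argument.
dative_3 = {'label': 'DAT.3', 'person': 3, 'case': 'dat'}
nom_arg  = {'label': 'NOM',   'person': 1, 'case': 'nom'}
space = [dative_3, nom_arg]

# Low AGR: sensitive to any person feature, so the dative halts the search.
low_agr  = lambda g: 'person' in g
# High AGR: blind to datives, so it sees past the dative to the nominative.
high_agr = lambda g: g['case'] != 'dat' and 'person' in g

print(probe(space, low_agr)['label'])   # DAT.3 - dative intervenes
print(probe(space, high_agr)['label'])  # NOM   - dative is a non-intervener
```

What makes the defective non-intervener cases so troubling is that they resist this simple picture: a goal that is visible to the probe (sensitive) yet still lets the probe see past it doesn't fit a halt-at-first-visible-goal model at all.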
So it’s very disturbing, and it also makes one begin to suspect that there is something that is being missed here. So maybe features and probing are just not the right way to be thinking about these particular kinds of syntactic dependencies.