The World’s Last `Passives’ Workshop

 

Now don’t get me wrong, I love thematic workshops. And the event I was just at in Vienna was a great conference with lots of stimulating talks and post-talk conversations. But it wasn’t a thematic workshop. In fact, it convinced me that in the name of progress in our field, we should now be willing to say that passive is not a Thing, and therefore that there can be no workshops about it.

Ian Roberts and Michelle Sheehan (only the former in person) bravely set out to convince us of the position that the apparent disunity of the thing called passive across languages can be tamed via a choice tree of parameters. It was a nice try, but as you will see, I was not convinced.

Gereon Müller treated us to a tour de force of `removal’ derivations. I always like it when somebody can give an internally coherent and fully spelled-out implementation of something that has hitherto been treated in a completely different way. It is both exciting and disorienting to be guided round a problem, seeing it with new eyes, using a different toolbox. In this way, the problem and its solution get transformed. I like to think it allows me to understand the analytic problem itself better, in a way that potentially transcends implementations. In some way Gereon contributed to the deconstruction of passive this weekend, because his new toolbox made natural classes and unities where there were none before, and focused on just one component of the passive problem (the removal of the agent).

When it comes to morphology, one could imagine being interested in the precise nature of the participle in the English passive, as I was in my talk. This is a different set of analytic concerns, one that quite literally deconstructs the passive and concentrates on just one ingredient of it, the participle itself. In arguing for a unity behind the participle in all its uses, including the perfect (PassivesViennaV3), the talk undermines the centrality of the unity behind the passive itself. In the talks of Remberger and Ledgeway we also saw one ingredient of the passive embedded within modal or buletic categories in Romance, in ways that were historically and synchronically challenging.

Indeed, morphological ingredients turn out to be crucial. In Greek and Albanian (described for us in Kallulli’s talk), the morphology is an inflectional verbal ending, famously known as NonActive, which has been the subject of a number of very influential theoretical articles over the years. Not only is the morphology for making the passive different in these languages, it also covers more semantic and argument structure options than simply the passive as understood from English. Significantly, Terzi has shown in her work that, contrary to what has been found for English-like languages, passive (i.e. nonactive Voice) is not recognisably impaired in agrammatic patients in Greek (handout here). This suggests that the psychological status of nonactive morphology is also different between the two languages. Why should this be? One can only speculate. Terzi suggests, based on her work with children, that it is the robust and common reflexive use, which has rich lexical content and which precedes `passive’-like interpretations in children’s competence, that is the basis for the form being relatively well preserved in aphasia. Object relatives, as expected, are bad for Greek agrammatic patients, so it’s not that the patients have been misdiagnosed. As we know, in many other Indo-European languages, reflexive morphology is used to cover some of the things that NonActive voice covers in Greek and Albanian, including some passive-like uses. So over and over again, the particular cluster of properties that has been given the label passive, based on the English participle construction with auxiliary be, does not correspond neatly to a single form or strategy in other languages.

In this workshop, we were treated to many examples like this. Faruk Akkus and Julie Anne Legate had a joint paper on double passives in Turkish, where the same piece of morphology doubles as a passive and as an impersonal. In the construction where the suffix in question is doubled, it turns out that the inner, passive one can feed a subsequent impersonalization. Ben Bruening’s poster on Passamaquoddy passives argued forcefully that the sequence of suffixes found there, and the morphologically distinct ways that the `passive’ gets expressed, show that the connection between syntax and morphology in the verb suffix system is simply arbitrary and templatic, and does not conform to `mirror’. The data was fascinating, but for me it only succeeded in reinforcing the impression that the false unity behind our label of `passive’ was what was going wrong here. See Bruening’s manuscript in progress on the rebel anti-mirror approach to morphology.

So. Thank you, Vienna, and thank you to the organisers of the World’s Last Passive Workshop (Akemi Matsuya, Kleanthes Grohmann and Eva Maria Remberger and her graduate students)! And thank you to all the speakers and poster presenters for a stimulating time. In the future, I look forward to workshops on templatic morphology, on removal derivations, on whether there are parameters and what they should look like, on argument structure alternations, on implicit agents, etc. etc. But Passive? I for one have become convinced over this weekend that it is an old, convenient cover term that has outlived its purpose. But that’s progress!

 

On the Difference between Philosophers and Linguists:

 

I was just recently at the 9th meeting of Semantics and Philosophy in Europe (SPE), held this year in lovely Padova, where the special topic was Ontology. It was stimulating and enlightening to have so many people in the same room thinking about things ontological, and from so many different backgrounds and foundational concerns. It is also useful to lay out what the differences in motivation and intellectual goals are when the term ontology gets bandied about, because in my experience it leads to massive misunderstanding when we all use the same word to mean different things. I have been working on Things Ontological myself recently, and so I feel motivated to express clearly what it is I am doing, and what it is I am not doing.

So why do we think it is important to think about ontology? There are a number of perfectly respectable answers.

Ontology Because: Metaphysics

When the philosophers start talking about this stuff, quite often what they are interested in is metaphysics. They want to know what the world is like. They want to know what the Real World is Really like. OK, so I am a little bit skeptical that the Real World exists in any way that my tiny human brain can Really understand, but there are many philosopher types who consider this the burning question and who seem to be much more optimistic than me. So, for example, `Does creating a piece of music involve `causation’?’ was one philosophy paper I listened to with furrowed brow, like an alien from another planet. I guess I just don’t understand what the philosophers are worried about: last I checked, creation verbs, regardless of what kind of object is `created’, all come out as transitive, or as causative in languages that have an overt causative morpheme. So I just didn’t understand what the question was. My bad. I think you just have to have a special philosophy gene for this one. I look forward to some philosopher taking me to the pub one day and explaining it to me over lots of alcohol. Because one of the things that becomes clear when you go to these conferences is that philosophers are smart, but also great drinkers and conversationalists.

But philosophers were not the only ones at this workshop. There were also applied ontologists, and they have a very specific set of implementational problems involving the meaning of lexical items of individual languages.

Ontology Because: Computers are Stupid

Human beings are very intelligent. And the trick we pull off with using language creatively to build meanings is one of the most intelligent things we do. Applied ontologists are very smart human beings trying to spell things out for very stupid computers. If you just talk to human beings all the time like I do, you don’t notice half of the complexity that goes on underneath the surface, wedged inside the meaning of individual lexical items. But when you are working on trying to get computers to understand humans, you start noticing that complexity: computers are just so darn literal-minded that if you don’t tell them something explicitly, they never get it! So we need to have an idea about what kind of primitive notions and knowledge structures need to be built into computers’ brains so that they don’t produce howlers. One very interesting topic that the workshop dealt with in some detail was the phenomenon of `dot objects’, which is a kind of knowledge structure corresponding to lexical concepts, one that facilitates the kind of meaning slippages that happen all the time when such a word is used. Consider the following sentence of English:

The heavy book on the table is very interesting.

Totally unremarkable, except for the fact that to predicate `heavy’ of `book’ you need to be thinking of it qua physical object, while predicating `interesting’ of it requires you to think of it qua literary content. Co-predication is the crucial diagnostic here: the fact that it is possible shows that the lexical item is not ambiguous in the traditional sense. Both meanings reside in the same item, and they are quite different qualitatively, so vagueness doesn’t seem right either. The collection of different kinds of `senses’ of a lexical item can be quite large, and there are certain generalizations about what kinds of senses cohabit within particular lexical items. As a linguistic semanticist I find this quite fascinating, because as far as I have seen, these are not the kinds of meanings that seem to be morphologically coded for in the languages I know, and because all human languages seem to allow precisely these kinds of slippages. So it’s definitely part of the puzzle about human meaning to see what the generalizations are here. Laure Vieu (CNRS) argued that cohabiting senses for these kinds of objects are characterized by sharing their spatiotemporal instantiation, and I think it’s a great insight.
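Just to fix ideas, here is a toy sketch of what a dot-object-style lexical entry might look like to one of those literal-minded computers. It is purely illustrative: the names (Facet, DotObject, and so on) are invented for this post rather than taken from any actual applied-ontology formalism, and the one substantive point it tries to encode is Vieu’s, namely that the two facets share a single spatiotemporal anchor.

    # Toy sketch of a `dot object' lexical entry: one item, two cohabiting facets.
    # All class and attribute names here are invented for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class Facet:
        kind: str                                   # e.g. "physical-object" or "informational-content"
        properties: dict = field(default_factory=dict)

    @dataclass
    class DotObject:
        lemma: str
        facets: dict                                # facet kind -> Facet
        location: str                               # shared spatiotemporal instantiation (Vieu's point)

        def predicate(self, prop: str):
            """Return the facet that a given predicate selects:
            'heavy' wants the physical facet, 'interesting' the informational one."""
            for facet in self.facets.values():
                if prop in facet.properties:
                    return facet.kind, facet.properties[prop]
            raise KeyError(f"No facet of {self.lemma!r} supports {prop!r}")

    book = DotObject(
        lemma="book",
        facets={
            "physical-object": Facet("physical-object", {"heavy": True}),
            "informational-content": Facet("informational-content", {"interesting": True}),
        },
        location="on the table",                    # both facets share this spatiotemporal anchor
    )

    # Co-predication: the two predicates select different facets of the same item.
    print(book.predicate("heavy"))                  # ('physical-object', True)
    print(book.predicate("interesting"))            # ('informational-content', True)

Nothing hangs on the details, of course; the point is just that a computer needs this kind of structure spelled out explicitly, whereas human speakers slip between the facets without even noticing.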

Finally, at the conference were some linguistically inclined folk thinking about ontology, and they/we had another set of direct motivations.

Ontology Because: Language and Cognition

Basically, we think that language ontology is not a way to get at deep metaphysical facts. True, language is one of the most complex symbolic things we do, and it clearly seems to be `about’ something. But there seems to be no necessity to express the aboutness of language in terms of primitives that are Really ontological, as in metaphysically primitive in the real world. In fact, it is an open question how we should represent the `aboutness’ side of the language equation (at the end of the day, the things language seems to be talking `about’ will bear some non-trivially determined relationship to the real world and its ontology, but there is no guarantee that language itself sees that deep) (see Kit Fine, here, for discussion).

The idea from Friederike Moltmann is that natural language ontology is the set of primitive categories which languages seem to implicitly employ, because the patterns of language and mappings make most sense that way (see here for her work on this topic). Natural language ontology therefore can be discovered only by detailed inspection of natural languages themselves, their morphosyntax and semantics. It cannot be discovered by introspection, or by seeing what a computer can understand, or by inspecting the real world.

But why would we want to express `meanings’ in terms of an ontology parochial to natural language instead of using an objective, independent, more real ontology? I hear you ask. (Well, I hear the formal semanticists asking this anyway.) The reason is simply that only by investigating natural language ontology will we discover anything interesting about how the human natural language system is built up. If we go for an objective real world ontology then the match will be indirect at best, and we will miss generalizations about how syntax and semantics connect in these symbolic systems. In my opinion, much formal semantics assumes that the choice of ontology on the aboutness side is a matter of formal convenience, as long as it is grounded in the Real. But in fact, the ontology is part of what we need to figure out. And it may have only an indirect relation to the Real. Wolfram Hinzen in his work seems to go a step further and argues that much of natural language meaning is constructed by the grammatical system itself and could not be found in the Real world even in principle. I find myself sympathetic to the whole natural language ontology enterprise, and suspect that the cohabitation of senses inside lexical items is really about human cognition in response to the world, together with the design requirements of having a symbolic system in the first place. On the other hand, the meanings constructed by functional items are relational in a fundamental way that breaks the immediate connection between signifier and signified. But that, I think, is a topic for a different blog.

 

Semantic Composition and the Brain

I write this from the European Society for Philosophy and Psychology meeting taking place at the University of Hertfordshire this year (August 14-17). The keynote on the first day was delivered by Prof. Liina Pylkkänen (NYU), talking about semantic composition in the brain. Liina is known to many theoretical linguists for her hugely influential dissertation on verbal argument structure (Introducing Arguments, MIT 2002), but she has been following up that theoretical interest in the syntax/semantics interface with a research programme looking at the neural underpinnings of language. Her ambitious goals have been facilitated by the fact that our techniques for measuring brain activity have become much better and more fine-grained in the last 25 years. However, this kind of research is still devilishly difficult to do in a way that yields replicable and robust effects, and even more difficult to interpret. Her research has focused on the technique of magnetoencephalography (MEG), directing the methodology at the problem of semantic composition specifically. MEG is a nice choice of technique because it provides fine-grained spatial resolution for brain effects (unlike EEG) as well as fine-grained information about the time course of those effects (unlike fMRI).

In theoretical linguistics, it is often assumed that neurolinguistics is still too remote from theoretical concerns to be directly relevant to them (see Embick and Poeppel 2015 on the commensurability gap), and so communication between the two subfields is intermittent and sporadic. And to be honest, it is so hard to keep up just with new developments in one’s own small corner! But this is where general, cross-disciplinary conferences come in, and why they are so important.

Specifically, it is great to hear a neurolinguistic keynote giving a state-of-the-art update on what has been discovered. In the case of Pylkkänen’s work the experience is richly rewarding and contains much food for thought. I summarize some of the main points that emerged from her talk here, in case it motivates some people to dip into her research.

Pylkkänen takes a deliberately bottom-up approach, targeting core robust cases of semantic combination and devising families of minimal pairs to test the relevance of different factors in the activation of certain areas. The big picture that emerges is the existence of a pair of activation areas, one early in the lateral anterior temporal lobe (LATL) (200 ms) and one a bit later in the ventromedial prefrontal cortex (vmPFC). The LATL is particularly interesting because its activation comes very early in comprehension, and it is differentially implicated in both production (see here) and comprehension (see here), in both reading and auditory presentation, for complex phrases such as red car as compared to control conditions consisting just of two unconnected words in a list (Bemis & Pylkkänen, 2011, 2013a, 2013b).

The LATL has been implicated differentially in the processing of sentences as opposed to lists of words in a huge amount of literature (Friederici et al., 2000; Humphries, Binder, Medler, & Liebenthal, 2006; Humphries, Love, Swinney, & Hickok, 2005; Jobard, Vigneau, Mazoyer, & Tzourio-Mazoyer, 2007; Mazoyer et al., 1993; Pallier, Devauchelle, & Dehaene, 2011; Rogalsky & Hickok, 2009; Stowe et al., 1998; Vandenberghe, Nobre, & Price, 2002), but since sentence processing involves a host of different processes, we still don’t know what specific job the LATL is doing, which is why Pylkkänen’s lab has been focusing on simple two-word combinations in their own series of experiments.

In one set of experiments (Blanco-Elorrieta, E., Kastner, I., Emmorey, K. & Pylkkänen, L., `A shared neurobiology for building phrases in signed and spoken language’, submitted), the way that the two conditions were controlled for while keeping the stimulus as similar as possible was quite ingenious. (Because brain activity is complex and affected all the time by other things not being tested by the experimenter, it is very important in these methodologies to keep the stimulus as similar as possible while manipulating the test factors.) In a production task, subjects were asked to name pictures of simple objects coloured in simple block colours. The coloured objects in the pictures were, however, also situated against backgrounds of different colours. In one type of run, the subject was asked to name the object together with its colour (e.g. red car), and in the other type of run, the instructions were to give the name of the background colour followed by the name of the object. While the second condition is plausibly harder and less natural, that added complexity goes in the opposite direction from the effect being sought. Indeed, significantly more activation of the LATL was found in the phrasal composition case than in the list case. The effect was also found across languages (for Arabic, see here) and across modes of articulation (for ASL). The effect was found in both the nominal and the verbal domain (see Westerlund, M., Kastner, I., Al Kaabi, M., & Pylkkänen, L. (2015)).

The effect is, however, even more specific than this. The LATL is specifically involved in the combination of concepts, and not in syntactic or semantic combination more generally (Del Prato & Pylkkänen, 2014; Westerlund & Pylkkänen, 2014; Zhang & Pylkkänen, 2015). For example, it is found in the early computation of red car but not in the computation of two cars (Del Prato & Pylkkänen, 2014). There also appears to be an interaction with the specificity of the left-hand member of the phrase when it needs to be integrated with a noun: more specific left-hand members, as in tomato soup, produce greater activity than vaguer ones, as in vegetable soup (Zhang and Pylkkänen 2015).

There is also a contrast between the following two cases:

(1) a. The girls are tall and blonde.

b. The girls are Finnish and Dutch.

In a context where all four girls are tall and blonde, but only two are Finnish and two are Dutch, both (1-a) and (1-b) are of course true, but on different readings of the adjectival conjunction: an intersective reading in (1-a) and a collective (split) reading in (1-b). It turns out that the LATL effect only emerges in (1-a) and not in (1-b) (Poortman and Pylkkänen 2016).
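To make the contrast concrete, here is one rough way of spelling out the two readings in first-order terms. This is my own illustrative sketch, not the formalization used by Poortman and Pylkkänen:

    % Intersective reading, which verifies (1-a): every girl is both tall and blonde.
    \forall x\,[\,girl(x) \rightarrow tall(x) \wedge blonde(x)\,]

    % Split (collective) reading, the only one that verifies (1-b) in this scenario:
    % the girls divide into two non-empty subgroups, one Finnish and one Dutch.
    \exists X \exists Y\,[\,X \cup Y = \textit{the girls} \wedge X \neq \emptyset \wedge Y \neq \emptyset
        \wedge \forall x{\in}X\,Finnish(x) \wedge \forall y{\in}Y\,Dutch(y)\,]

On the first reading the two adjectival concepts are folded into a single intersective property of individuals; on the second they are kept apart and distributed over subparts of the plurality, which is plausibly why only (1-a) shows the early composition effect.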

So what is this LATL area and what is it doing? Why is it activated so early, right at the onset of lexical access? It does not generally kick in whenever there is syntactic combination, but is tied specifically to the kind of concept formation that we would normally think of as being low in the hierarchical linguistic representation. How do we characterize, in more theoretical terms, the kind of semantic composition that is going on very early in this area of the brain? This is an example of a robust and replicated effect in a specific brain area, one which I would argue should start to inform our theories of the architecture of the language faculty.

That is, if indeed we believe theoretical linguistics is ready to have that conversation with our sister disciplines. Fields that call themselves ‘biolinguistics’ certainly seem to think that theoretical linguistics is ‘ready’. But being ready does not just mean being willing to speculate about evolutionary origins or invoke ‘third factor’ principles. I mean: are we ready to do something with facts like these from the literature? Maybe we’re not ready. But if so, then let’s not call what we do biolinguistics just yet.

Actually I do hope we are ready. I think our field is progressing fast in this domain and we are at the beginning of a very exciting time.