Argument Structure Across Modalities

Workshop Held at the University of Amsterdam, Feb 1-3, 2018

I was just at a workshop where people working on argument structure got together. Yes, I seem to have been to a lot of those in my time, but this one was different. This one was explicitly designed as a meeting place for those working on both signed and spoken languages. You can find the programme for the workshop, together with abstracts, here.

Now the cool thing about studying sign language (for any linguistic phenomenon) is that it allows you to directly address the question of which aspects of universal patterning are functions of the nature of the interface to the modality chosen for externalization, and which other aspects survive such differences. Given the starting assumption of universality, there are also phenomena which exist across modalities but which are easier to see and track in one modality than in another. Finally, if you work on spoken languages, the existence of phenomena in sign languages forces you to come up with more strictly general articulations of common analyses in order to accommodate this other manifestation of human natural language. Sometimes it is the encounter with the other modality that convinces one that one has not been sufficiently general in the first place, despite one’s best efforts. Overall, the work on typology at this workshop (both within signed languages, and across both modalities) showed me that sign languages generally have more in common with each other typologically than with the spoken languages they happen to be in close contact with (Börstell et al., Kimmelman et al., Benedicto, this workshop).

(Broad) Differences between Signed and Spoken Languages

Sign languages are externalized in the visual modality, using the body to create signs (mostly with the hands); spoken languages are externalized in the auditory modality, using the mouth, lips, and tongue to modify produced sound. (I start with the obvious.) But there’s more. Sign languages have more agreement and almost never have case. They use locations as a proxy for reference, and this affects the way in which both pronouns and agreement play out in these languages. There’s a lot of `agreement’ on predicates, for both subject AND object. It’s easy to do, because the sign articulated in space can overlap its beginning or ending portions with the position in space that has been indexed for a particular discourse participant. Sign languages also have a lot of object incorporation, or `classifiers’, which modulate the shape of verbal predicates. Kimmelman et al (this workshop) checked a lot of unrelated sign languages and showed the following (confirming the typology introduced by Brentari and Benedicto 2004): verbs embodying `handling’ classifiers are always transitive, while those using whole entity classifiers are nearly always intransitive (with a single internal argument); body part classifiers go together with intransitive predicates (with a single internally causing argument). Tang and He (this workshop) looked at HKSL and TJSL and found something similar with a wrinkle: they seemed to find systematic instances of whole entity classifiers used with `null causativization’ to express transitive predications. I liked this paper because it showed something comfortingly familiar wrapped up in a system that is, from a superficial point of view, disorientingly different.

Striking Commonalities

Indeed, despite the surface differences, there are some striking things in common across spoken and signed languages in the domain of argument structure. For example, you can find systematic differences between unaccusative and unergative single argument verbs. Verbs also come classified for durativity and telicity, distinctions which are displayed in an obvious way morphologically. Signed languages have resultative constructions. Cornelia Loos (this workshop) showed us some nice, clear, well argued evidence that a certain class of complex predications in German Sign Language (DGS) are monoclausal co-predications, and that the secondary predicate is constrained to modify the internal argument. Yay! A true language universal…

Signed and Spoken Languages Unlock Their Own Secrets:

Because sign languages use the visual modality, there is a clear iconic relationship between verbal aktionsart and causational path, on the one hand, and physical trajectory in space, on the other. This means that we can see endpoints very viscerally (either as a sign leading to a final point of contact, or merely as a final sharp deceleration). We can also see very clearly the individual portions or subevents corresponding to an internally complex event: whereas the English verb push gives no indication from its sound whether it expresses an initial ballistic transition followed by motion of the undergoer, or whether it involves continuous causation and accompanied motion, in sign languages the verbal sign for `push’ looks different in these two different scenarios (Benedicto, this workshop). The iconicity is obviously particularly direct for motion verbs. The semantics of scales and homomorphism work here in the iconic dimension just as they do in spoken languages (Kuhn, this workshop). The high levels of iconicity found in sign languages cannot be unrelated to the fact that spatial metaphors are ubiquitous for describing temporal and other abstract scales in spoken languages too. Sign languages are thus great for probing the internal structure of complex motion events, in ways that I am sure are universal even though spoken languages are less morphologically obvious about it.

On the other hand, when it comes to reflexivity (a topic that Siloni presented at this workshop), the overt morphological reflex of reflexive predication is obvious in many spoken languages. Siloni presented detailed arguments to show that the se reflexive forms in French, for example, share certain properties with argumental reflexives like those of English, but share others with the lexical reflexive predicates of Hebrew. She mounted an argument that the French reflexives contain a lexical, but not a syntactic, representation of the internal argument. Interestingly, because sign languages use the body and body shape classifiers to express self caused actions, and because agreement is done by locational indexing, there is no surface-obvious way of distinguishing between a body shape classifier and reflexive agreement or lexical reflexive marking on a predicate. Here was a clear domain where internal complexity and nuance could be seen and tested in the spoken languages under consideration, but was much less obvious on the surface in the signed languages.

Depiction vs. Description

What special challenges do signed languages pose for syntax and semantics? The existence of pervasive iconicity and iconic `phonetic’ modification of basic signs to give more detailed specification of event predicates turns out to be a very tricky thing to incorporate systematically into the compositional semantics of event predication. In addition, signed languages very often use parallel information streams, posing further problems for integration (Steinbach, invited talk, this workshop). Co-gesture in spoken language poses similar problems and questions, including whether the information so contributed is part of the at-issue content or whether it occurs only in some pragmatic or distinct dimension. In spoken language, co-gesturing can be shown to be ubiquitous, and also seems to show patterns similar to those of signed languages and their classifier types (Wu and Cienko, this workshop). Integration of iconic content is the problem tackled by K. Davidson (2015) in her work on demonstrations, and taken up in Henderson’s work on ideophones, with which I myself have also recently engaged. In my own talk, I proposed that signed and spoken languages are exactly the same in requiring a quotational semantics for the integration of the linguistic sign within the functional/grammatical scaffolding of the clause. I proposed that the DEMO predicate invoked by Davidson for iconic and depictive symbolic elements is really a flavour of the more general CONVEY, and that properties of both the symbol deployed and the deploying event need to be taken into account and integrated deep inside the compositional semantics of the at-issue content in a verbal description (you can access my slides here).

In my own most recent work (Ramchand, to appear), I was struggling with something different, namely the best way to implement a compositional system for natural language propositions that would have certain basic kinds of cartographic semantic layering as a consequence. After toying with the idea of introducing event properties into the ontology (an idea that proved ultimately intractable), I hit on the solution of reifying the linguistic symbol itself. By reifying the symbol and the deployment event, Quantificational Quotational Semantics (QQS) allows the symbol to denote partial descriptions given by immediate sensory and cognitive uptake of eventive particulars, without committing to the existence of those particulars until the event is existentially closed at the vP level. A payoff that I hadn’t fully anticipated was that QQS allows a zone of compositional concept building which leaves it open that the inputs to concept building can come from a variety of different sources, including iconic input (both auditory and gestural). By reifying the symbol qua symbol, QQS offers a better way of integrating iconic and gestural content into the formal semantic representation, in effect normalizing it (we don’t need to relegate it to a post-linguistic pragmatic component). In addition, the deployment event is integrated early in concept formation, potentially predicting a more central conceptual role for deictic information.

To summarize the experience, one thing is clear: having the extra dimension of signed languages in the data pool at a thematic workshop like this gives a satisfying kind of insight into the interplay between the language-specific and the universal, which led to much intellectual fascination and pleasure (something I am sure I shared with my fellow traveller linguists at this workshop).

The World’s Last `Passives’ Workshop


Now don’t get me wrong, I love thematic workshops. And the event I was just at in Vienna was a great conference with lots of stimulating talks and post-talk conversations. But it wasn’t a thematic workshop. In fact, it convinced me that in the name of progress in our field, we should now be willing to say that passive is not a Thing, and therefore there can be no workshops about it.

Ian Roberts and Michelle Sheehan (only the former in person) started off bravely, trying to convince us that the apparent disunity of the thing called passive across languages can be tamed via a choice tree of parameters. It was a nice try, but as you will see, I was not convinced.

Gereon Müller treated us to a tour de force of `removal’ derivations. I always like it when somebody can give an internally coherent and fully spelled out implementation of something that has hitherto been treated in a completely different way. It is both exciting and disorienting to be guided round a problem, seeing it with new eyes, using a different toolbox. In this way, the problem and its solution get transformed. I like to think it allows me to understand the analytic problem itself better, in a way that potentially transcends implementations. In some way Gereon contributed to the deconstruction of passive this weekend, because his new toolbox made natural classes and unities where there were none before, and focused on just one component of the passive problem (that of removal of the agent).

When it comes to morphology, one could imagine being interested in the precise nature of the participle in the English passive, as I was in my talk. This is a different set of analytic concerns that quite literally deconstructs the passive and concentrates on just one ingredient of it, the participle itself. In arguing for a unity behind the participle in all its uses, including the perfect (PassivesViennaV3), the talk undermines the centrality of the unity behind the passive itself. In the talks of Remberger and Ledgeway we also saw one ingredient of the passive embedded within modal or bouletic categories in Romance, in ways that were historically and synchronically challenging. Indeed, morphological ingredients turn out to be crucial. In Greek and Albanian (described for us in Kallulli’s talk), the morphology is an inflectional verbal ending, known, famously, as NonActive, which has been the subject of a number of very influential theoretical articles over the years. Not only is the morphology for making passive different in these languages, it also covers more semantic and argument structure options than simply the passive as understood from English. Significantly, Terzi has shown in her work that, contrary to what has been found for English-like languages, passive (i.e. nonactive Voice) is not recognisably impaired in agrammatic patients in Greek (handout here). This suggests that the psychological status of nonactive morphology is also different between the two languages. Why should this be? One can only speculate. Terzi suggests, based on her work with children, that it is the robust and common reflexive use, which has rich lexical content and which precedes `passive’-like interpretations in children’s competence, that is the basis for the form being relatively well preserved in aphasia. Object relatives, as expected, are bad for Greek agrammatic patients, so it’s not that the patients have been misdiagnosed.
As we know, in many other Indo-European languages, reflexive morphology is used to cover some of the things that NonActive voice covers in Greek and Albanian, including some passive-like uses. So over and over again, the particular cluster of properties that has been given the label passive, based on the English participle construction with auxiliary be, does not correspond neatly to a single form or strategy in other languages.

In this workshop, we were treated to many examples like this. Faruk Akkus and Julie Anne Legate had a joint paper on double passives in Turkish, where the same piece of morphology doubles as a passive and as an impersonal. In the construction that doubles the suffix in question, it turned out that the inner, passive one can feed a subsequent impersonalization. Ben Bruening’s poster on Passamaquoddy passives argued forcefully that the sequence of suffixes found there, and the morphologically distinct ways that the `passive’ gets expressed, show that the connection between syntax and morphology in the verb suffix system is simply arbitrary and templatic, and does not conform to `mirror’. The data was fascinating, but for me it only succeeded in reinforcing the impression that the false unity behind our label of `passive’ was what was going wrong here. See Bruening’s manuscript in progress on the rebel anti-mirror approach to morphology.

So. Thank you Vienna, and thank you to the organisers of the World’s Last Passive Workshop (Akemi Matsuya, Kleanthes Grohmann, and Eva Maria Remberger and her graduate students)! And thank you to all the speakers and poster presenters for a stimulating time. In the future, I look forward to workshops on templatic morphology, on removal derivations, on whether there are parameters and what they should look like, on argument structure alternations, on implicit agents, etc. But Passive? I for one have become convinced during this weekend that it is an old convenient cover term that has outlived its purpose. But, that’s progress!


On the Difference between Philosophers and Linguists:


I was just recently at the 9th meeting of Semantics and Philosophy in Europe (SPE), held this year in lovely Padova, where the special topic was Ontology. It was stimulating and enlightening to have so many people in the same room thinking about things ontological, and from so many different backgrounds and foundational concerns. It is also useful to lay out what the differences in motivation and intellectual goals are when the term ontology gets bandied about, because in my experience it leads to massive misunderstanding when we all use the same word to mean different things. I have been working on Things Ontological myself recently, and so I feel motivated to express clearly what it is I am doing, and what it is I am not doing.

So why do we think it is important to think about ontology? There are a number of perfectly respectable answers.

Ontology Because: Metaphysics

When the philosophers start talking about this stuff, quite often what they are interested in is metaphysics. They want to know what the world is like. They want to know what the Real World is Really like. Ok, so I am a little bit skeptical that the Real World exists in any way that my tiny human brain can Really understand, but there are many philosopher types who consider this the burning question and who seem to be much more optimistic than me. So for example, `Does creating a piece of music involve `causation’?’ was one philosophy paper I listened to with furrowed brow, like an alien from another planet. Because I guess I just don’t understand what the philosophers are worried about, and last I checked, creation verbs, regardless of what kind of object is `created’, all come out as transitive, or causative in languages that have an overt causative morpheme. So I just didn’t understand what the question was. My bad. I think you just have to have a special philosophy gene for this one. I look forward to some philosopher taking me to the pub one day and explaining it to me over lots of alcohol. Because one of the things that becomes clear when you go to these conferences is that philosophers are smart, but also great drinkers and conversationalists.

But philosophers were not the only ones at this workshop. There were also applied ontologists, and they have a very specific set of implementational problems involving the meaning of lexical items of individual languages.

Ontology Because: Computers are Stupid

Human beings are very intelligent. And the trick we pull off with using language creatively to build meanings is one of the most intelligent things we do. Applied ontologists are very smart human beings trying to spell things out for very stupid computers. If you just talk to human beings all the time like I do, you don’t notice half of the complexity that goes on underneath the surface wedged inside the meaning of individual lexical items. But when you are working on trying to get computers to understand humans you start noticing that complexity—computers are just so darn literal-minded! If you don’t tell them something explicitly they never get it! So we need to have an idea about what kind of primitive notions and knowledge structures need to be built into computers’ brains so that they don’t produce howlers. One very interesting topic that the workshop dealt with in some detail was the phenomenon of `dot objects’, which is a kind of knowledge structure corresponding to lexical concepts that facilitates the kind of meaning slippages that happen all the time when such a word is used. Consider the following sentence of English:

The heavy book on the table is very interesting.

Totally unremarkable, except for the fact that to predicate `heavy’ of `book’ you need to be thinking of it qua physical object, while predicating `interesting’ of it requires you to think of it qua literary content. Co-predication is necessary here to show that the lexical item is not ambiguous in the traditional sense. Both meanings reside in the same item, and are quite different qualitatively, so vagueness doesn’t seem right either. The collection of different kinds of `senses’ of lexical items can be quite large, and there are certain generalizations about what kinds of senses cohabit within particular lexical items. As a linguistic semanticist I find this quite fascinating, because as far as I have seen, these are not the kinds of meanings that seem to be morphologically coded for in the languages I know, and because all human languages seem to allow precisely these kinds of slippages. So it’s definitely part of the puzzle about human meaning to see what the generalizations are here. Laure Vieu (CNRS) argued that cohabiting senses for these kinds of objects are characterized by sharing their spatiotemporal instantiation, and I think it’s a great idea.
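For readers who like things spelled out the way the applied ontologists do, here is a purely illustrative sketch of the dot-object idea in Python. It is not any actual applied-ontology formalism: the names `DotObject`, `Predicate`, and the aspect labels are invented for the example. The point is only that one lexical item bundles several cohabiting senses, and each predicate selects the sense it can sensibly apply to, which is what licenses co-predication.

```python
# Illustrative sketch only (invented names, not a real ontology framework):
# a dot object bundles several aspects of one lexical concept, and each
# predicate selects the aspect it applies to, licensing co-predication.

class DotObject:
    """A lexical concept with multiple cohabiting senses (aspects)."""
    def __init__(self, name, **aspects):
        self.name = name
        self.aspects = aspects  # e.g. 'physical', 'informational'

class Predicate:
    """A predicate that selects for a particular aspect of its argument."""
    def __init__(self, name, selects):
        self.name = name
        self.selects = selects
    def apply(self, obj):
        # A computer has to be told explicitly which sense to target.
        if self.selects not in obj.aspects:
            raise TypeError(f"{self.name} cannot apply to {obj.name}")
        return (self.name, obj.aspects[self.selects])

# `book' as a PHYSICAL-OBJECT * INFORMATIONAL-CONTENT dot object
book = DotObject("book", physical="bound volume", informational="its content")

heavy = Predicate("heavy", selects="physical")
interesting = Predicate("interesting", selects="informational")

# Co-predication: both predicates apply to the same item,
# but each one targets a different cohabiting sense.
print(heavy.apply(book))        # ('heavy', 'bound volume')
print(interesting.apply(book))  # ('interesting', 'its content')
```

On this toy picture, Vieu’s proposal would amount to a constraint on which aspects may cohabit in one `DotObject`: they must share their spatiotemporal instantiation.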


Finally, at the conference were some linguistically inclined folk thinking about ontology, and they/we had another set of direct motivations.

Ontology Because: Language and Cognition

Basically, we think that language ontology is not a way to get at deep metaphysical facts. True, language is one of the most complex symbolic things we do, and it clearly seems to be `about’ something. But there seems to be no necessity to express the aboutness of language in terms of primitives that are Really ontological, as in metaphysically primitive in the real world. In fact, it is an open question how we should represent the `aboutness’ side of the language equation (at the end of the day, the things language seems to be talking `about’ will bear some non-trivially determined relationship to the real world and its ontology, but there is no guarantee that language itself sees that deep) (see Kit Fine, here, for discussion).

The idea from Friederike Moltmann is that natural language ontology is the set of primitive categories which languages seem to implicitly employ, because the patterns of language and mappings make most sense that way (see here for her work on this topic). Natural language ontology therefore can be discovered only by detailed inspection of natural languages themselves, their morphosyntax and semantics. It cannot be discovered by introspection, or by seeing what a computer can understand, or by inspecting the real world.

But why would we want to express `meanings’ in terms of an ontology parochial to natural language instead of using an objective, independent, more real ontology? I hear you ask. (Well, I hear the formal semanticists asking this anyway.) The reason is simply that only by investigating natural language ontology will we discover anything interesting about how the human natural language system is built up. If we go for an objective real world ontology then the match will be indirect at best, and we will miss generalizations about how syntax and semantics connect in these symbolic systems. In my opinion, much formal semantics assumes that the choice of ontology on the aboutness side is a matter of formal convenience, as long as it is grounded in the Real. But in fact, the ontology is part of what we need to figure out. And it may have only an indirect relation to the Real. Wolfram Hinzen in his work seems to go a step further and argues that much of natural language meaning is constructed by the grammatical system itself and could not be found in the Real world even in principle. I find myself sympathetic to the whole natural language ontology enterprise, and suspect that the cohabitation of senses inside lexical items is really about human cognition in response to the world, together with the design requirements of having a symbolic system in the first place. On the other hand, the meanings constructed by functional items are relational in a fundamental way that breaks the immediate connection between signifier and signified. But that, I think, is a topic for a different blog.


Semantic Composition and the Brain

I write this from the European Society for Philosophy and Psychology meeting taking place at the University of Hertfordshire this year (August 14-August 17). The keynote on the first day was delivered by Prof. Liina Pylkkänen (NYU), talking about semantic composition in the brain. Liina is known to many theoretical linguists for her hugely influential dissertation on verbal argument structures (Introducing Arguments, MIT 2002), but she has been following up that theoretical interest in the syntax/semantics interface with a research programme looking at the neural underpinnings of language. Her hugely ambitious goals have been facilitated by the fact that our techniques for measuring brain activity have become much better and more fine grained in the last 25 years. However, this kind of research is still devilishly difficult to do in a way that yields replicable and robust effects, and even more difficult to interpret. Her research has focused on the technique of magnetoencephalography (MEG), directing the methodology at the problem of semantic composition specifically. MEG is a nice choice of technique because it provides fine grained spatial resolution for brain effects (unlike EEG) as well as fine grained information about the time course of those effects (unlike fMRI).

In theoretical linguistics, it is often assumed that neurolinguistics is still too remote from theoretical concerns to be directly relevant to it (see Embick and Poeppel 2015 on the commensurability gap), and so the communication between the two subfields is intermittent and sporadic. And to be honest, it is so hard to keep up just with new developments in one’s own small corner! But this is where general and crossdisciplinary conferences come in, and why they are so important.

Specifically, it is great to hear a neurolinguistic keynote with a state of the art update on what has been discovered. In the case of Pylkkänen’s work the experience is richly rewarding, and contains much food for thought. I summarize some of the main points that emerged from her talk here, in case it motivates some people to dip into her research.

Pylkkänen takes a deliberately bottom up approach, targeting core robust cases of semantic combination and devising families of minimal pairs to test the relevance of different factors in the activation of certain areas. The big picture that emerges is the existence of a pair of activation areas, one early in the left anterior temporal lobe (LATL) (200 ms) and one a bit later in the ventro-medial prefrontal cortex (vmPFC). The LATL is particularly interesting because its activation comes very early in comprehension, and it is differentially implicated in both production (see here) and comprehension (see here), in both reading and listening, of complex phrases such as red car as compared to control conditions consisting just of two unconnected words in a list (Bemis & Pylkkänen, 2011, 2013a, 2013b).

The LATL has been implicated differentially in the processing of sentences as opposed to lists of words in a huge amount of literature (Friederici et al., 2000; Humphries, Binder, Medler, & Liebenthal, 2006; Humphries, Love, Swinney, & Hickok, 2005; Jobard, Vigneau, Mazoyer, & Tzourio-Mazoyer, 2007; Mazoyer et al., 1993; Pallier, Devauchelle, & Dehaene, 2011; Rogalsky & Hickok, 2009; Stowe et al., 1998; Vandenberghe, Nobre, & Price, 2002), but since sentence processing involves a host of different processes, we still don’t know what specific job the LATL is doing, which is why Pylkkänen’s lab has been focusing on simple two word combinations in their own series of experiments.

In one set of experiments (Blanco-Elorrieta, E., Kastner, I., Emmorey, K. & Pylkkänen, L., `A shared neurobiology for building phrases in signed and spoken language’ (submitted)), the way that the two conditions were controlled for while keeping the stimulus as similar as possible was quite ingenious. (Because brain activity is complex and affected all the time by things other than what is being tested, it is very important in these methodologies to keep the stimulus as similar as possible while manipulating the test factors.) In a production task, subjects were asked to name pictures of simple objects coloured in simple block colours. The coloured objects in the pictures were, however, also situated against backgrounds of different colours. In one run type, the subject was asked to name the object with its colour (e.g. red car), and in the other type of run, the instructions were to give the name of the background colour followed by the name of the object. While the second condition is plausibly harder and less natural, that complexity runs in the opposite direction to the effect sought. Indeed, significantly more activation of LATL was found in the phrasal composition case as compared to the list case. The effect was also found across languages (for Arabic, see here) and across modes of articulation (for ASL). The effect was found in both the nominal and the verbal domain (see Westerlund, M., Kastner, I., Al Kaabi, M., & Pylkkänen, L. (2015)).

The LATL effect, however, is even more specific than this: it is involved specifically in the combination of concepts, and not in syntactic or semantic combination more generally (Del Prato & Pylkkänen, 2014; Westerlund & Pylkkänen, 2014; Zhang & Pylkkänen, 2015). For example, it is found in the early computation of red car but not in the computation of two cars (Del Prato & Pylkkänen, 2014). There also appears to be an interaction with the specificity of the left member of the phrase when it needs to be integrated with a noun: more specific left members, as in tomato soup, produce greater activity than vaguer left members, as in vegetable soup (Zhang and Pylkkänen 2015).

There is also a contrast between the following two cases:

(1) a. The girls are tall and blonde.

b. The girls are Finnish and Dutch.

In a context where all four girls are tall and blonde, but only two are Finnish and two are Dutch, both (1-a) and (1-b) are of course true, but on an intersective reading of the adjectival conjunction in (1-a) versus a collective (split) reading in (1-b). It turns out that the LATL effect only emerges in (1-a) and not (1-b) (Poortman and Pylkkänen 2016).

So what is this LATL area and what is it doing? Why is it activated so early, immediately at the onset of lexical access? It does not generally kick in whenever there is syntactic combination, but is tied specifically to the kind of concept formation that we would normally think of as being low in the hierarchical linguistic representation. How do we characterize in more theoretical terms the kind of semantic composition that is going on very early in this area of the brain? This is an example of a robust and replicated effect in a specific brain area, which I would argue should start to inform our theories of the architecture of the language faculty.

That is, if indeed we believe theoretical linguistics is ready to have that conversation with our sister disciplines. Fields that call themselves ‘biolinguistics’ certainly seem to think that theoretical linguistics is ‘ready’. But being ready does not just mean being willing to speculate about evolutionary origins or invoke ‘third factor’ principles. I mean: are we ready to do something with facts like these from the literature? Maybe we’re not ready. But if so, then let’s not call what we do biolinguistics just yet.

Actually I do hope we are ready. I think our field is progressing fast in this domain and we are at the beginning of a very exciting time.