Define Semantics

Features Workshop Blogpost 3: Svenonius vs. Preminger

My Red Line for this last blog post related to the Features Workshop is Semantics, and not `trail of blood’, as you might be thinking. The last day saw two final talks, one by our host/organizer Peter Svenonius (PS) of UiT The Arctic University of Norway and the other by Omer Preminger (OP) of Maryland. PS gave his talk first, entitled `Case is the Solution to a Locality Problem’, while OP’s talk was called `What are Phi Features Supposed to Do and Where?’.

Granted, from the titles it doesn’t particularly seem as if either of these talks was about semantics, but I think the pairing is interesting precisely because it highlights the different relationships morphosyntacticians can have to semantics, and their very different assumptions about what the word even means. But let’s be clear from the outset: both PS and OP are formalist syntacticians who believe that the grammatical computation is special and has formal properties that cannot be reduced to meaning or functionalist considerations. They also both care scrupulously about architecture and the principle of keeping the modules from communicating with each other in unattested ways. In this case, I think the real common ground here is rather extensive, but the rhetorical stance towards semantics stands out, at least superficially, as a point of difference. The kinds of questions they ask, and the nature of the solutions they seek, are also quite different.

OP in Brief: You cannot look to the interfaces to help you understand how syntax works, since there are persistent mismatches between syntax and morphophonological properties on the one side, and syntax and semantics on the other. The second half of the talk was an argument about privativity and features. OP wants to say that we can model important things about what the syntactic atoms are by using privative features, rather than binary ones. Important for him is that under this way of looking at things, 3rd person sg is actually the absence of any feature for person, which predicts how it behaves when it comes to Agree. In particular, there is no omnivorous 3rd person sg agreement in natural language.
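To make the privativity point concrete, here is a toy sketch in Python (my own construction, not OP’s actual formalism; the feature labels are purely illustrative). If 3rd person sg is the absence of any person feature, there is nothing for a probe to be relativized to, so no probe can hunt specifically for 3rd person sg the way it can hunt for [participant]:

```python
# Privative representations: a goal is just the set of features it bears.
# Crucially, 3rd person is the ABSENCE of person features, not a [-...] value.
FIRST = {"person", "participant", "speaker"}
SECOND = {"person", "participant"}
THIRD = set()  # no person features at all

def agree(probe, goals):
    """Return the first goal bearing every feature the probe seeks, else None."""
    for goal in goals:
        if probe <= goal:  # subset test: goal bears everything the probe wants
            return goal
    return None

# A probe relativized to [participant] can skip a featureless 3rd person goal
# and omnivorously target a participant further down the search path:
assert agree({"participant"}, [THIRD, SECOND]) == SECOND

# But no feature specification picks out ONLY 3rd person sg: the empty probe
# is not relativized at all, so it matches the very first goal it meets.
assert agree(set(), [FIRST, THIRD]) == FIRST
```

In a binary system, by contrast, a probe could in principle be relativized to [-participant, -plural] and produce exactly the omnivorous 3sg agreement that is unattested.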

(Btw, TG wasn’t convinced that one could show in a principled way that privative systems can do anything different from the corresponding binary systems. But there is a complex interaction with what sorts of agreement mechanisms you invoke.)

PS in Brief: Case is a big mystery, and we have no consensus on how to model the apparent licensing functions of case, or the patterns of morphological tagging that show up in nominal licensing systems. PS chooses to model case using the tool of Agree and the checking of uninterpretable Kase features. Case is interesting precisely because in a technical sense it is not `interpretable’ — it bears only an indirect relationship to the atomic units of meaning. In OP’s terms, it shows mismatches with any semantic class that one might try to define by purely semantic criteria. PS too is interested in this `irrational’ phenomenon because it shows evidence of something syntax needs to do for its own theory-internal reasons. It is the syntactic phenomenon par excellence. However, in attempting to answer the WHY question for Case, PS makes a proposal that indirectly rests on an understanding of the semantic properties of the clausal hierarchy. What PS proposes (capitalizing on the fact that marked case emerges in the presence of two nominals) is that it is necessary to distinguish nominals from each other in the T-domain, otherwise the link back to the thematic domain is made indeterminate. PS builds on a view from Ramchand and Svenonius 2014 concerning the properties of the two lowest zones of the clause, characterized by the nature of the abstract semantic information that is articulated there. He argues that having structural case is correlated with the diagnostic properties of the second phase of the clause (referentiality), and that lack of case is correlated with that nominal carrying only `low’, thematic-relational information. So even though Case is still not directly correlated with a particular interpretation, the analysis is more semantically motivated than the standard ones, which just posit a universal Case Filter.

OP is on record as publicly deploring what happens when syntacticians allow semantics to infect what they do, corrupting their analyses and undermining progress. (Actually, I am at a bit of a loss about what OP is complaining about here, since from my perspective most of the high-prestige work on syntactic theory is being done in a complete semantic vacuum, just as he would advocate. I happen to think much of that work is highly theory-internal and sterile, and will be obsolete as soon as the toolbox changes.) The talk in this workshop was a variation on the Semantics is Evil theme, since the first part of the talk was an exercise in showing that there are easily found mismatches between the two domains, i.e. no actual transparency between the units of the syntactic representation and the primes of the semantic representation. (Now, OP did not say that there was no relationship, but that there are frequent enough mismatches that the one domain cannot be explained by the other.)

So do OP and PS disagree about the role of semantics in syntax? They both would say that they believe in the Autonomy of Syntax, but they each seem to have a different working relationship with, and different rhetoric towards, facts about what hierarchical structures can and must mean.

In fact, I think that the straw man OP set up in the first half of his talk is not entertained or propped up by anyone. First of all, what is the semantic representation that we are checking for transparency with the syntactic representation? Are we talking about the notational primes of a Montagovian or neo-Davidsonian semantic formula? If so, then I do not think that any working formal semanticist out there would make those claims about their own formulas — they do not think their analytical core units necessarily correspond to the core units that syntax traffics in. There is perhaps a systematic algorithm that gets you from one representation to the other, but there is no transparent mapping between the primes. Formal semantic representations are highly articulated descriptions of truth conditions, and compositionality can be guaranteed without a match-up of atomic primitives. In most cases, it doesn’t seem to me that formal semanticists are very precious about their notational atoms, as long as the right truth conditions are delivered.

A different question would be to ask whether the mapping between the narrow syntactic representation and the meaning-building parts of mind/brain is transparent, or even systematic. Now here, there are two ways of construing the question, one of which makes the answer (trivially) YES and the other of which makes the answer interestingly NO.

The YES answer comes about if we want to know whether the `pieces’ of a lexicalized syntactic hierarchical structure correspond in a systematic way to the meanings ultimately being built. Here the answer has to be yes because otherwise we have no way of reliably guaranteeing mutual understanding in the face of unbounded creativity.

On the other hand, if we wonder whether those meaning chunks and operations are sitting in cognition-world out there independent of language, I think that the answer must be no. Language as a symbolic system co-evolves with the cognitive system more generally during the maturation of every individual. It is not a stretch to assume that the pieces that are stored, learned and redeployed, and the mechanisms for semantically combining them, are indeed created by and forced on us by the symbolic system itself, albeit against the background of our general cognitive proclivities, perceptions, and learning biases. Thus, the semantics I am interested in, the one that is systematically tied to the syntax, is not really independent of it. Semantics with a small s is back-formed from the complexity of the symbolic system that generates it. This is inevitably different from semiotic systems which simply provide labels for independently available concepts. Many syntacticians are fond of talking about the recursivity of the syntactic system as a core central property of what makes language unique, but I would argue that is not enough. Language requires in addition the proceduralization of recursive symbolization, and semanticity that is tracked through recursion and unboundedness. As James Higginbotham, my late teacher and colleague, used to say, when it comes to the syntax-semantics interface, it is like solving an equation in three unknowns.

The problem is that most people don’t use the term Semantics this way. They use it to mean the description of extensional truth conditions, and this, I believe, has even less direct connection with the internal pieces of the sentence than most people assume (see Pietroski 2018 on this point, and also my review of him). At best, truth conditions stand in a holistic correspondence to whole sentential chunks, and that too only after pragmatic inferencing and reference tracking at the discourse level have applied. So I think OP and I are probably talking past each other here. But the fact remains that some of the questions he is most interested in are the ones where one can ignore the semantic influences on the answer, and this distinguishes him, I think, from PS.

I think OP is using his rhetoric to justify looking at his favourite pieces of the puzzle. It’s nice to have favourite pet projects (Confession: I like Verbs), for example being most interested in the parts of the system which are about system-internal formal relations (OP?). But it is almost impossible to isolate what those are without understanding how the narrow computation feeds other cognitive systems. It is not possible to introspect about what syntax is. It is a category of thing which by hypothesis is sui generis, and we figure out its scope in part by peeling away the things that it is not. In other words, if you want to distinguish syntax from not-syntax, then it’s a good idea to be able to recognize not-syntax. To take an example from OP’s own talk on Wednesday, he makes the argument that anaphoric binding is not coextensive with abstract agreement, and should not be handled with the same abstract mechanism Agree. One of the planks of the argument consists in showing that, more generally, the phenomenon of coreference does not require phi feature agreement, and that superficial agreement for phi features occurs even when there is demonstrably no syntactic relationship possible. So this is an example of how one has to pay attention to meaning to make an argument about what lies in the syntactic system proper, and what should be excluded from it.

On a very basic level, there is a methodological claim that one simply cannot run constituency tests such as movement, or ask whether reference is guaranteed by phi-feature agreement, without using people’s judgements about meaning as your core data. But it’s not just methodological, it’s also the explanandum: I do not think that understanding language is tantamount to being able to delimit the class of grammatical utterances (as I think OP has claimed elsewhere). Part of the task is also to preserve semanticity and account for the reliability of the constraints on ultimate meaning that speakers feel grammatical utterances have.

Three Completely Different Things to Do with Features

The next talk at Arctic Features was by Daniel Harbour (DH) on Maximal use of [+/- minimal]. DH is a typological morphosemanticist, who is on a quest for highly abstract universal features. In DH’s previous work on person systems, he argues for the necessity of a feature [+/- minimal], which basically has the interpretation of divisibility of reference (cf. Krifka 1989). This feature is argued to interact with the system of person features to give rise to complicated pronominal systems involving duals and inclusive vs. exclusive participant plurals. DH endorses the intuition of Bach (1981), inter alia, that divisibility is potentially a property that crosses category boundaries, at the very least straddling the nominal and verbal domains. A predicate description P conforms to divisibility of reference if for every x that is a P, one can find a material subpart of x, say y, that also satisfies the description P. This is true of the nominal predicate water, but is plausibly also true of the verbal predicate sleep or be-tired (anything stative or activity-like, down to a certain granularity, according to Taylor 1977). Maybe, in fact, conjectures DH, [+/- minimal] underpins the definition of imperfectivity in verbal aspectual marking more generally. This leads DH to set up the following hypothesis about the space of morphological systems: if a language demonstrably uses [+/- minimal] in its pronominal system (because we can detect a morphologically marked distinction between 2 and 3+, for example), then it is statistically more likely to use [+/- minimal] in its verbal inflectional system and overtly mark imperfectivity. So here comes the typology, and after a flurry of checking and counting (60 relevant languages), the report is that there seems to be a fairly robust correlation between funky pronominal systems in the DH sense and overt imperfectivity marking (89 percent of the relevant languages seem to mark imperfectivity, which is higher than average).
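The prose definition of divisibility in the paragraph above can be stated as a formula (one standard mereological rendering in the Krifka tradition; the notation here is mine, not DH’s):

```latex
% P has divisibility of reference iff every x falling under P
% has a proper material part y that also falls under P:
\[
\mathrm{DIV}(P) \;\iff\; \forall x\,\big[\,P(x) \rightarrow \exists y\,(\,y \sqsubset x \,\wedge\, P(y)\,)\,\big]
\]
```

On this rendering, water and sleep satisfy DIV (down to some granularity), while cat and build-a-house do not, since a proper part of a cat is not a cat.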
But even if we think this is true, it raises a number of questions, which the room was awash with when DH was done with his talk. TG wanted to know why a language would reuse a feature in this way. What is it about the system that might drive you to reuse it? If it is a compelling cognitive distinction, it might be the basis of morphological distinctions across a wide swathe of domains because it is a cognitively general organizational principle, not because the system is literally and mechanically reusing an atom of the featural system from one place to another. Also, why look for a correlation between pronominal systems and imperfectivity marking, instead of, say, a correlation with the marking of mass vs. count? Do we expect features to be universal across languages, and is this because of cognitive or even linguistic necessity? Or do we expect the inventory of features to vary from language to language, since famously `nobody ever conceived of a universal morphology’, said somebody, some time. DH is conceiving of universal aspects of morphology that transcend not just languages, but also categories within a particular language. At some point somebody in the audience raised the spectre of the Sapir-Whorf hypothesis, but DH slapped that back. Not entirely convincingly, in my opinion.

In the next talk, Michelle Sheehan (MS) (Anglia Ruskin University) tackled the issue of successive cyclic movement, in particular trying to find evidence for an A-movement incarnation of successive cyclicity. The spoiler here is that no, there is plausibly no such thing. MS takes as her starting point the ungrammaticality of the long passive under certain causative and perception predicates in many languages.

(1) *Kim was made leave (by someone).

The above phenomenon has been noticed and accounted for in various different ways, with no consensus on which module of grammar is to blame. MS proposes that the ungrammaticality of (1) actually follows from phase theory under certain unremarkable assumptions, if we claim that there are no feature triggers for successive cyclicity that interact with the A system. MS assumes two clause-related phases, roughly corresponding to the C domain and the v domain (van Urk and Richards). Specifically, however, the lowest phase is dynamic, and is a little bit bigger at its biggest than is classically assumed — ProgP is the largest v-related phase in English (Harwood, Boskovic, Sailor). The patterns fall out if the complement of make is a phase. MS assumes version 2 of the Phase Impenetrability Condition (PIC2), whereby we get a window of opportunity for establishing A relations in the T zone after a phase is assembled but before it is spelled out.

Why is the sentence with the to-infinitive possible?

(2) Someone was seen to run in the corridor.

(2) is good because the to-phrase is a TP, and there is an EPP feature that would drag the DP argument to the edge in any case, allowing it to escape the phase. Passives of causatives/perception verbs will only be blocked where the complement they take is a phase that lacks T. If the complements get big enough, then there is potentially a phase edge that the DP might end up in for independent reasons; if they are too small, they are not even phases. So, if MS is right and the best way to account for this nest of data is to say that A movement is never successive cyclic, then we raise the question of how to model this difference between A movement on the one hand and A-bar movement on the other. MS suggests that this might furnish an indirect argument that successive cyclic movement must be feature-driven, since it’s hard to see how you could model the difference otherwise.

So MS gave it her best shot, and showed us her best argument for a grammatical phenomenon requiring abstract features, but TG says no, you can definitely model this with constraints. But would it look as elegant?

The final talk of the day was Susana Béjar (SB) (U of Toronto) on `How to be a Picky Probe’. SB: “In addition to serving as diacritics for defining natural classes of syntactic objects, features serve as diacritics for modeling syntactic dependencies (local and non-local). This is all a probe is: a syntactic diacritic that signals a trigger for dependency formation, as well as identifying a target and a dependent in the relation.” SB has spent much of her research life looking at phi feature probes and trying to see how these kinds of dependencies work in a variety of natural languages, essentially seeking to describe faithfully while searching for higher-level generalizations. With respect to phi, one thing SB has discovered in her travels is that hierarchically low probes tend to be picky with respect to participant features, while the probes on the higher T heads are always much less picky.

In today’s talk the focus was on a tricky subcase of interactions where no pattern or generalization seems to be detectable. 😦 This horrible domain is a subcase of defective non-interveners: things that are in the path of the probe, not valuing the probe, but also not producing an intervention.

Probes are by definition picky, and their pickiness is tantamount to a visibility condition on objects in the search space, with important analytic consequences for locality, so it’s important to see what patterns exist.

SB showed two case studies that should make us worried. One comes from Georgian agreement. She shows that for the purposes of the AGR probe for one of the agreement slots, the 3rd person dative intervenes. However, for the purposes of the other probe, the dative does not intervene. The second case study comes from agreement on the verb with the subject in Persian, which works one way with a low probe on a simple main verb, but another way with a high probe on a modal auxiliary. We are forced to say that these probes have different sensitivities. To summarize:


Georgian

Low AGR is sensitive to the person of DAT; can’t see past it.

High AGR is insensitive to the person of DAT; can see past it.

Persian Modals

Low AGR is sensitive to the phi of the intensional subject; can see past it.

High AGR is insensitive to person defectivity; can’t see past it.
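The standard picture that these case studies complicate can be sketched as follows (a toy model of my own, not SB’s actual system; the Georgian-flavoured labels are illustrative). Pickiness is a visibility predicate on goals in the search path, and the first visible goal halts the search, whether it values the probe or merely intervenes:

```python
def probe(goals, visible):
    """Scan the search path top-down; the first VISIBLE goal halts the
    search (valuing the probe, or intervening), invisible goals are skipped."""
    for g in goals:
        if visible(g):
            return g
    return None  # nothing visible at all

# Search path below the probe: a 3rd person dative above a 1st person nominative.
path = ["3DAT", "1NOM"]

low_agr = lambda g: True          # sees the dative, so can't see past it
high_agr = lambda g: g != "3DAT"  # blind to the dative, so sees past it

assert probe(path, low_agr) == "3DAT"   # the dative intervenes
assert probe(path, high_agr) == "1NOM"  # the dative is skipped
```

The trouble SB pointed to is precisely that defective non-interveners break this dichotomy: they are neither returned by the probe nor cleanly skipped, so no single visibility predicate captures their behaviour.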

So it’s very disturbing, and it also makes one begin to suspect that something is being missed here. Maybe features and probing are just not the right way to be thinking about these particular kinds of syntactic dependencies.

Building vs. Renovation (Features Workshop Blogpost 1)

Day 1 of the features workshop started with Thomas Graf from Stony Brook (henceforth TG), who came out with guns blazing (Features: More Trouble than They’re Worth?). And sure, if we are going to have a whole workshop about features, we should understand what that means. TG has shown that the formal relationship between features and constraints is one of interdefinability. Specifically, features can always be replaced by definable constraints within monadic second order (MSO) logic, and every MSO-definable set of constraints can be encoded via features (Graf 2011, 2013, 2017, Kobele 2011).

So if systems articulated in terms of features and those expressed in terms of constraints are essentially notational variants, doesn’t that mean that we don’t need to worry about making a choice between them? After all, they turn out to be `two different sides of the same coin’, mathematically speaking. But as TG puts it, the problem is that the coin is too big. It turns out that there are all sorts of crazy constraints one can define in MSO, things that we absolutely do not want natural language to be able to do: not just the kinds of constraints we want, but all of their symmetric opposites, and random Boolean combinations of them. Such systems can be made to count, they have no locality built in, and they freely allow mixing of constraints from different domains which really should not be talking to each other.
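To get a feel for what interdefinability means, here is a toy illustration of my own (over strings rather than trees, so it vastly simplifies TG’s MSO setting): the same ban can be stated globally as a constraint, or encoded locally by threading a feature through the derivation, and the two formulations accept exactly the same objects.

```python
def constraint_ok(s):
    """Global constraint: no 'b' immediately following an 'a'."""
    return all(not (x == "a" and y == "b") for x, y in zip(s, s[1:]))

def feature_ok(s):
    """The same ban, encoded as a feature passed left to right
    through the derivation instead of stated over the whole object."""
    after_a = False            # the 'feature' carried along the derivation
    for ch in s:
        if after_a and ch == "b":
            return False       # feature clash: the derivation crashes
        after_a = (ch == "a")
    return True

# The two formulations accept exactly the same strings:
words = ["", "ab", "ba", "aab", "acb", "bbaa"]
assert all(constraint_ok(w) == feature_ok(w) for w in words)
```

The equivalence is exact, but note how differently the two versions read: the constraint states the generalization in one line, while the feature version distributes it over the derivational steps. That asymmetry of legibility, scaled up to real grammars, is TG’s methodological point.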

But this is just what we expect as linguists, right? The formalism itself does not do the analytical work for us — that is the job of the analyst. We need to construct the theory explicitly ourselves and constrain it based on what we see languages doing out there in the wild.

But here’s the rub. It turns out that Feature Abuse is frighteningly easy to commit, and much harder to detect than one might imagine. Features are hard to regulate because they produce global behaviour through many small interactions encoded in a distributed fashion over thousands of roots. This makes it hard to relate the high-level properties of the system qua system to specific aspects of the feature calculus. But it is much easier, and mathematically better understood, to claw back the power of constraints within MSO. For example, we can limit constraints to specific complexity classes, and we can formulate hypotheses and inject restrictions based on things like c-command or locality in a controlled fashion.

One of the cool and important things about interdefinability is that it is the opposite of boring. It’s not like translating Norwegian into English, but more like translating Human into Martian. When you transform your landscape in Feature Land into Constraint Land, lakes turn into rain and volcanoes emerge from the mist, and vice versa. For some reason, the human brain does not easily perceive systems that are in fact mathematically equivalent as the same. This says to me that the two different implementations, although computationally equivalent, may in fact have different consequences and predictions in any ultimate move towards a more algorithmic understanding of how these systems inhabit the brain. For TG the warnings against features are methodological: working with constraints leads to better control and the ability to test global properties of the system, and there’s no downside, since they are always equivalent to the feature systems that most syntacticians prefer working with. So Features should be dumped in favour of Constraints.

But my reading of TG´s subsequent work, and indeed the second half of his talk, also makes a slightly less methodologically dry point.  The fact is that once you embrace interdefinability and work hard in different domains to reconceive systems described one way in terms of the other, you find that it is both hard, and interesting. It is eye-opening to see how an issue or problem or generalization reshapes and topologically modifies itself during that process. Some things that previously seemed to be patterns dissolve frustratingly, but other generalizations become easier to see. TG has been training his Transmogrification Device on various linguistic phenomena, looking at islands, at selection, and at *ABA patterns in attempts to give new feature free analyses. (As Prof. McGonagall points out, the transmogrification spell is more difficult to pull off as you increase  the complexity of the thing you are transmogrifying. Living things are particularly hard, and probably also linguistic analyses).

In the discussion session after TG’s talk, Peter Svenonius (PS) asked: isn’t a feature just a way of picking out a natural class of things to which a constraint or rule applies? So really the difference in notation is pretty benign. TG says absolutely — that’s exactly what underpins interdefinability — but once you explicitly reify the feature as part of the grammatical language, the properties of the system start to explode mathematically. Omer Preminger (OP) pointed out a case where a generalization emerged precisely by looking at things in terms of features, and TG said that of course such phenomena would be interesting points in favour of Feature theory, but that he would have to reserve judgement until he had a chance to see whether he could recast the system in terms of constraints and make the generalization emerge that way as well.

So will the talks in this workshop succeed in transcending the particularity of the feature language they are using to tell us something more general about the global systems they are describing? Will we be able to come up with a Theory of Feature Theory that will allow us to detect and avoid Feature Abuse?  Will we showcase analyses that will convince TG that there is something important and useful in looking at things from a feature theory perspective?

Or maybe such striving is premature, as one of my colleagues maintains, and we should just use whatever toolbox is at hand to build detailed and solid descriptions of the huge range of grammatical phenomena we are currently still woefully ignorant about.

Since this got longer than I had expected, summaries of Harbour, Sheehan and Bejar will appear in blogpost 2.


There Will be Blood

On Monday 27th and Tuesday 28th of May, a number of very clever linguists who have thought long and hard about grammatical theory will be descending on the tiny town of Tromsø, at UiT the Arctic University of Norway, to try to make progress on one of the most central and parochial issues in the implementation of generative grammars — namely, what is the right theory of features.

The cast of characters includes Michelle Sheehan (Anglia Ruskin), Omer Preminger (Maryland), Susana Béjar (U of Toronto), Daniel Harbour (Queen Mary) and Thomas Graf (Stony Brook) as visitors, and myself, Peter Svenonius, Craig Sailor, Antonio Fabregas and the rest of the Tromsø CASTLFish milieu as the hosts.

You will find a programme for the workshop here:

As Peter Svenonius says in his introduction to the topic of the workshop:

“Features are an indispensible part of any grammatical system, but there is little consensus on some of their essential properties.”

So why is there so little consensus, and do we expect that a workshop like this can come to some conclusions that will move the theory forward?  More importantly, will having a better theory of features within the implementation of grammars in this tradition, help us make progress on the big questions of language and grammar? Or is this just a housekeeping exercise for the members of a particular tribe?

Now, I´ve never been a Features girl, and I am not giving a talk at the workshop, although I have great respect for all the linguists we will be hosting next week and I will be there giving it my rapt attention. So I thought I would start this blogpost by trying to articulate for myself why I have never been very engaged with the theory of features debate thus far.  (Disclaimer: The following is a grumble-list based on my own particular interests, and also in many cases not relevant to the very best work on features, but to the mainstream. )

1. The royal family of feature theory is Phi features, in particular as they appear in agreement configurations. Agreement is the classic place where morphology is semantically inert. If you are me, you aren’t excited by semantic impotency.

2. Even when we get to move away from Phi feature agreement, too much work on features seems to spend too much time worrying about how features interact within a grammatical system, and much less on what they are. Is there more than one kind of Agree? Is probing upwards or downwards? I find it hard to think about these questions, or even contemplate case studies with any urgency in the absence of an overarching argument concerning what sorts of things are given featural manifestation.

3. Syntactic work on features and feature checking quite often involves lots of intricate low-level detail and decisions between subtly different kinds of implementation. When I look at the details, I can nearly always think of another way of doing it that doesn’t conform to the axiomatic commitments of the authors. I have no faith that there is any contentful or prediction-making distinction between these notationally different but essentially equivalent choices of toolbox. Even though syntacticians take the atoms of their theories very seriously (see Thomas Graf’s blog post on this topic here), they are still somehow committed to the idea that these theories are computational, rather than algorithmic in Marr’s sense, and will explicitly disavow any predictions in terms of what real minds do. This seems like a contradiction to me, or at least a self-deception. If we take the computational-level stance at its word, it becomes even less compelling to quibble over implementational hair-splitting.

So this is why the current lineup of speakers promises to be so interesting. Harbour and Sheehan are definitely worried about morphosemantics, and not just phi-feature agreement. Béjar and Preminger have worked on very similar kinds of data and have different implementations and theoretical commitments — can we press the issue and diagnose the extent to which these differences actually matter for something we should care about? Graf is there as the mathematician keeping us honest with respect to what is formally equivalent and with what consequences, challenging us from outside the tribe. The whole team consists of individuals who are distinct in their items of faith, informed, and opinionated enough to get a discussion going that will address the big-picture architectural and methodological questions. The combinations are potent and provocative enough to deconstruct the whole damn system. It’s like the opposite of The Avengers coming together to save the world. Since the world is feature theory as it is currently practised, I am quite happy to embrace a potential future apocalypse.

On the other hand, it might just be productive, world-changing consensus building, and what will emerge will be foundational for the feature theories to come. 🙂

Stay tuned.


OASIS Conference Part II

The OASIS network was set up to foster connections between those thinking about the primitives of meaning composition across disciplines. From my own perspective, it means linguistic semantics reaching outwards to learn from scientists thinking about meaning from psychology and neuroscience, from computer science and lexicography, and from philosophy. It also involves trying to think about the problem in new and innovative ways, and finding ways to close the commensurability gap between our formal semantic computational theories of meaning and the algorithmic and implementational realities of mind and brain. In this second report from the kickoff network conference, I concentrate on panels and talks that were not given from the primary perspective of either formal syntax or semantics (there were great talks of that flavour too at OASIS, but a psychology blogger should talk about them).

There were two invited talks, from Alastair Knott (University of Otago, New Zealand) and Friederike Moltmann (CNRS), who spoke from the lands of neuroscience/psychology and philosophy respectively. What are these strange lands, and do they speak a language we can understand? Knott is an expert on embodied cognition and on deconstructing the logic of action, as in what it takes for a person to reach out and grab a cup. His idea was that the deictic routines necessary for executing a simple motor event such as `grabbing a cup' are of the same granularity, time-wise and with respect to the primitives involved, as the symbolic elements required to represent an event of `grabbing a cup' in linguistic terms.

So here was an explicit attempt to close the commensurability gap (cf. Embick and Poeppel (2015)) by essentially denying it— the basic building blocks of the one can be directly identified with the building blocks of the other. The position of these embodied cognition guys is that you get to move up a grain size from neuronal firings to motor plans, and in doing so you get to a level where the two types of language actually do translate each other. It was a fascinating talk, and a brave attempt, but I simply wasn't buying it. Still, exactly the kind of thing that I come to OASIS conferences to hear. Check out his book here.

Moltmann was a completely different kettle of fish. She comes from a heretical position both from the point of view of philosophical metaphysics and from the point of view of standard formal semantics. Classical formal semantics invites us to take truth values and objects in the world as the basic ontological types— even subsorts within the type e category need to be justified in terms of extensional identity conditions and independently justifiable real-world distinctions. Philosophical metaphysical ontology is similarly concerned with discovering what actually exists in the world. Moltmann outlined what she argued was an emerging field of natural language ontology, whereby what we should be engaged in is a kind of descriptive metaphysics. In other words, in the particular case of language, we as scientists of meaning should be interested in discovering the ontological primitives that are required for a successful description of the natural language system of meaning construction, i.e. natural language ontology. What we should seek to discover are the ingredients that are necessary and implicitly relied on in natural language systems, rather than those required for cognition in general, or those that underpin some objective external reality (see also the work of the philosopher Kit Fine). In laying out this programme, Moltmann is, I believe, exactly correct, although of course many of the actual details remain to be worked out. Her talk represented an attempt to clarify the terms, define the object of inquiry, and set the agenda for the whole enterprise in philosophical terms. In doing so, I think she, Fine, myself and a growing number of like-minded people are still definitively in the minority, even though I believe this is the only way forward to genuine communication with psychology.

The panel on the acquisition of counterfactuality was an example of work where OASIS as a network is actively experimenting with bringing psychologists, psycholinguists and theoretical linguists together to help sharpen the questions and issues relevant to all. Counterfactual reasoning, and the language expressing counterfactual statements, have long fascinated formal semanticists, raising a number of interesting crosslinguistic issues concerning the elements required to encode such ideas, and questions concerning the interaction of structural semantics and pragmatic reasoning. The panel started with a clear and pointed theoretical overview from Fabienne Martin (Humboldt, Berlin), and was followed by more experimental talks from Sarah Beck (University of Birmingham) and Nina Kazanina (University of Bristol).

Beck points out that very small children engage in pretending games, pretending to be cats or princesses or dinosaurs. They are fully aware that they are not in fact dinosaurs, but does this entail that they are actually engaging in some form of counterfactual thinking? Could there be a basic, easy version of counterfactual thinking, and then a scaled-up adult version?

In Theory of Mind tasks, children start to give correct answers between ages 3 and 4. But recent work suggests that they behave correctly much earlier, as long as the understanding of what someone else might be thinking is not mediated by an explicitly verbal task.

Is this evidence that what is hard is language? Or that the kids have a baby version that is not as complex as the one required by language? Or maybe even that language allows kids to scale up their instinctive cognitive awareness of the reality of other minds into an automated, sophisticated reasoning process using that information? How could we know, and how would we test that?

Interestingly, in tests conducted by Beck and her team, 3-4 year olds found it significantly more difficult to perform at adult levels when the question was `What if he had gone the other way, where would he be now?', as opposed to `What if next time he goes the other way, where would he be?' Quite generally, kids found future hypotheticals easier than past counterfactual reasoning, raising the possibility that what they find difficult is reasoning with an alternative to something they know for sure is a fact, as opposed to just reasoning about two different live options. Beck speculates, following Hoerl and McCormack, that children only get good at tracking events in a mature temporal updating system at about age 5. Maybe it is this ability that is necessary for the more complicated counterfactual reasoning question.

So, what's the problem? Is it temporal tracking? Is it contradicting a past known event? Or is it the linguistic complexity of the counterfactual reading that is getting in the way?

Nina Kazanina from the University of Bristol has been pushing hard on this very question. If you compare the two conditionals in (a) and (b), it is clear that there is something quite linguistically unusual about (b) from the point of view of English.

  (a) If Amy eats an apple, she will get a medal.
  (b) If Amy had eaten an apple, she would have won the medal.
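For readers who want the formal contrast spelled out, here is one textbook-style (Stalnaker/Lewis) sketch of the two readings. This is a generic illustration of my own, not necessarily the exact analysis any of the panelists assumes:

```latex
% (a) Indicative conditional: quantifies over live (epistemically
% open) possibilities; the actual world w may well be among them.
\llbracket \text{if } p,\ \text{will } q \rrbracket^{w} = 1
  \iff \forall w' \in \mathrm{Epist}(w) \cap p :\ q(w')

% (b) Counterfactual: quantifies over the closest p-worlds under a
% similarity ordering \leq_w; these typically exclude the actual
% world, yielding the `hidden not' inference that \neg p(w).
\llbracket \text{if had } p,\ \text{would } q \rrbracket^{w} = 1
  \iff \forall w' \in \min\nolimits_{\leq_w}(p) :\ q(w')

% On this sketch the past-tense morphology in (b) does not locate the
% event in past time; it marks modal remoteness from w.
```

The extra machinery in (b) — the similarity ordering, the covert negative inference, and the repurposed past tense — is precisely what makes it a candidate explanation for the children's difficulty.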

Kazanina set up an experiment with children involving puppets, different things to eat, and medals being awarded, and tested them in various conditions. She first tested children on whether they had good control of the if-then construction in English, and only those who performed close to ceiling on that pre-test were further tested on the counterfactual conditionals. The scenario is the following: one animal eats the watermelon and gets a medal; the other animal eats the ice cream and gets a cross. A puppet then makes a statement like `If he had eaten a watermelon, he would have got a medal' and the kid has to say which animal the puppet is referring to.

The finding is that kids get it right at age 4 but not at age 3. Further, this failure does not correlate with performance on the false belief (theory of mind) task administered at the same time. Kazanina concludes that the problem is with the linguistic construction and not with the ability to entertain the possibility of false situations.

But is the problem with (b) a kind of hidden `not' in the inferential process, not transparently indicated in the linguistic form, as Kazanina claims (since (b) entails that Amy did not eat the apple and in fact did not win the medal)?

Or is the problem with time tracking as Hoerl and McCormack suggest?

Or is the problem with the interpretation of the past tense in if-clauses, which is not interpreted like a normal past tense in English but more like `remote world'? Come to think of it, that IS pretty weird.

Really interesting questions. Really interesting panel. I hope they will keep me posted.

Looking forward to the next OASIS conference in Nantes next year, and thanks to Bridget Copley and Isabelle Roy for organising this one.

Categorization and its relation to Language: Evidence from the non-verbal mind

I am writing this blog from Paris, where we are having the official kickoff conference for the OASIS network (Ontology as Structured by the Interfaces with Semantics), primarily funded by the CNRS.

For those of you who do not know what OASIS is about or why we feel we need a network, a few words of explanation are perhaps in order: OASIS network members are those who are involved in thinking in new and cross-disciplinarily commensurable ways about the primary cognitive ingredients of meaning. We believe that progress will only be made if we collaborate actively and open-mindedly with psychologists, philosophers, computationalists, and lexicographers, and also, conversely, that language is centrally important in this conversation too. In other words, linguistic semantics should be centrally involved in the questions of meaning and ontology more generally. Linguistic semanticists have so far not been very good at communicating with other related disciplines, and it is not always clear how to bridge the commensurability gap between our own analytic primes and the analytic primes of others. Traditional conference-going tends to support and reinforce in-group talk. The OASIS is a network that is intended to support and reinforce cross-group talk in this particular area. The University of Tromsø (specifically myself, Peter Svenonius, and Serge Minor) is on the steering committee of a network that includes Paris 8, Nantes, Humboldt Berlin, Pompeu Fabra Barcelona and Queen Mary London.

Day 0 involved a satellite workshop on types, tokens, roots and functional structure, which was truly excellent, but what I would like to report on here is Day 1 of the conference proper, where the first of a series of panels was presented. The fun begins!

The topic of Day 1's panel was the Ontology of the non-verbal and pre-verbal mind, organized by Wolfram Hinzen (Pompeu Fabra). Louise McNally (Pompeu Fabra) moderated the panel and posed what I thought was an important opening framing question: What is the relation between reference and categorization? How does language mediate in that relation? Language itself combines these two aspects in a way that might make it hard to pull them apart. But McNally offered a useful practical exemplification of the dissociation:

Categorization without reference is emojis.

Reference without categorization is pointing.

Hold that thought because it is going to be relevant in what follows.

Wolfram Hinzen introduced the first talk by motivating why it is interesting and important to look at minds that are not verbal, either because they are congenitally incapable of language, or because they are at an immature stage of development. Hinzen's conviction is that human language and human thought are so bound up in each other that destroying the one destroys the fabric of the other.

One obvious source of non-verbal minds is the minds of our non-verbal primate cousins such as chimps. What sorts of things are they capable of when it comes to their own brand of gestural communication in the wild? A very striking fact is that they have no comprehension of pointing. Ape gestures are mostly imperative or attention-getting, and are never descriptively referential. Apes undoubtedly have their own systems of categorization and can even express/communicate certain of these categories, but they do not use signs referentially in the same way. In other words, apes can do emojis, but not pointing. (It strikes me incidentally that humans are not very good at predicting in advance what is going to be easy or difficult or impressive when found in another species. Most people would think that a sense of humour, or mourning one's dead, are impressively complex feats, whereas we are unaccountably surprised to find out that apes just don't get the pointing thing. At all.)

Next, Hinzen and Domenika Slušná (Pompeu Fabra) reported on a population of children/young adults diagnosed with autism who have never been able to acquire language. I was surprised to learn that about 25-30 percent of autistic children in fact never manage to develop language. These are physically healthy people with normal audition and no gross motor impairment, who can produce single words. They comprehend little of speech, however, especially out of routine contexts. But they do clearly socially interact. Their non-verbal IQ was below average in most cases. In standard tests of categorisation, they were able to manage some simple categories if these corresponded to very familiar objects in use for them, but these behaviours and representations did not seem to be very stable. Slušná presented her work on the analysis of the gestural repertoire and its use in this population. Their use of gesture, instead of being enriched in compensation for lack of language, turned out to be as limited and non-descriptive as the primate gestures described above. Their gestures are ritualized and used primarily in imperative contexts. Even pointing, which was a heavily trained and supported gesture, was used differently than by neurotypical humans, without consistent referential use.

So these non-verbal humans seem also to have limited categorization abilities and, like the apes, to lack the referential instinct. Emojis, but no pointing, again.

Next up was Alissa Ferry (University of Manchester) on The role of language in object categorisation in pre-verbal infants. Here we zero in on the category formation ability of humans. We know that categories are important for cognition, but we also know that the process of categorisation does not require language (pigeons: Wasserman et al. 2015; non-human primates: Vogels 1999), and indeed we have also seen above that our non-verbal human population did retain some categorisation abilities, which Slušná actually showed were not correlated with amount of word use or comprehension. But it might still be the case that language somehow facilitates categorisation in an important way, and that language labels act as a trigger for the process of generalization and the formation of certain useful categories that are then robustly represented in the mind. Ferry, in her PhD and subsequent follow-up work, was interested in tracking the moment at which prelinguistic infants start to categorise and whether this process is affected by the growth in their linguistic abilities.

The task involved a preferential looking paradigm. First the small infants were presented with a set of eight different dinosaur pictures one after the other in a training phase. Then, in the target condition, they saw two pictures: another, different dinosaur and a fish. If the infants had formed a category, then the dinosaur would be boring and the fish would be excitingly new. In infants this leads to preferential looks. If they had not formed a dinosaur category, then there should be no real difference in looks in the target phase. The training phase came in two flavours. In one version, each picture went along with a human saying something like “Wow! Look at that! That is a Toma!”. In the other version, the soundtrack was a series of computer-generated tones. This paradigm had been tested with 12 month olds and had been found to work like a charm— the infants formed a category with the human language commentary, but not when the soundtrack was computer-generated tones. (In fact, the 12 month olds formed a category when the single label `toma' was used for all the dinosaurs, but did not form a category when a different label was used for each different dinosaur. Cool— language labeling has a very direct and demonstrable effect here!) Ferry wanted to know how far back that would go, time-wise, in infants' cognitive development.

She took it back to 6 and 3 months, and the 3 month olds still seemed to form a category by this test, in the verbal commentary paradigm but not in the computer-tone soundtrack paradigm!

But wait, these kids aren't even parsing out words yet, presumably. Maybe it's the fact of language itself that is tipping them off and making them think there is an interesting category to be paid attention to here, even if they are not isolating the individual word label that is being used. So Ferry tried it with Chinese, and while the 12 month and 6 month olds did not form a category with the Chinese soundtrack, the 3 month olds did! She then tried it with lemur calls, and the 3 month olds still formed the category! THEN she tried it with backwards speech (which sounds less weird than it is— it is apparently biologically quite impossible to generate), and the 3 month olds declined to form a category. Clever kids.

So 3 month olds are paying attention to complex, naturalistic communicative sounds, but don't really distinguish between lemurs, Chinese and their own language. 6 month and 12 month olds only form a category with their own language. But something interesting also happens between the 6 month mark and the 12 month mark. The original paradigm had the phrase used for each dinosaur picture be exactly the same. Ferry wanted to know what would happen if each phrase was actually slightly different grammatically, with the word `toma' appearing in a different position in each case. At what point is the infant able to notice that the `toma' word is the same, even though the whole chunk of language is not identical for each dinosaur? Now recall that the 6 month olds are not fooled by Chinese, but it turns out that if you vary the form of the linguistic stimulus in their own language so that it is a bit harder to pull out the common word `toma', then six month olds in fact fail to form a dinosaur category. The 12 month olds are not thrown off by this, and continue to form the dinosaur (`toma') category. In fact, even by 9 months they are pulling out and tracking the individual label within the speech stream.

So prelinguistic kids can form categories in the absence of a linguistic label, and the presence of one linguistic label vs. many different ones does push the child into a particular categorisation decision; but, interestingly, the very presence of communicative noises seems to trigger category-forming impulses in the small human infant.

Mohinish Shukla (UMass Boston) wrapped up the panel with a talk that also addressed the question of whether there can be concepts without language: Event generalisation across visually different scenes using eye-tracking, across different populations.

While we seem to have a consensus that concepts and categories are possible without language, maybe there are certain kinds of categorization that are only possible with the help of language. While 2 year old infants could form implicit categories based on simple transitivity contrasts (differences in the crude number of participants), it turned out that they failed to recognise implicit categories of events based on the reversal of certain thematic role to participant mappings. A preferential looking paradigm checked whether kids of 24 months could notice and start to predict a generalization based on whether `the dog pushed the car' or `the car pushed the dog' in a video animation. While adults managed this no problem, the children at this age failed. What went wrong? Was the category too complex? Did that category require too much linguistic sophistication to form, and was it not independently cognitively natural enough? One of the things that struck the linguist audience about this particular implicit category was that it is not actually one that usually forms the basis of distinct verbal labeling. Maybe language learning was actually inhibiting the child's ability to generalize here. Maybe the effort of learning actual verbs and attending to events in a certain way was biasing children away from seeing the generalization offered by the scenario: the generalization constructed simply would never conform to a verbal regularity in English, and choice of agent was not a parameter of variation being attended to for the purposes of category formation. So although this was not Shukla's conclusion, one might speculate that the influence of language labels and language learning is actually having an effect on categorisation here too, this time an inhibitory one.

Well, that's enough for one blog post. Tomorrow I will post about the other talks and panels at the OASIS conference.

Preminger's Arguments for a Semantics-free Syntax

In recent months, Omer Preminger has been mounting a series of arguments against (too much) semantics in syntactic theorising.  His most recent blog post can be found here.

I disagree with this and wrote a brief rebuttal in the comments section, repeated here:

I am completely on the other side in this vendetta against semantics, and I have been meaning to reply to you for a long time, in fact. Here's a quick first go, just to see the fur fly.

First of all, in other contexts I often do find myself arguing against formal semanticists and siding with the syntacticians in giving primacy to the nature of (specifically linguistic) syntactic representations, and to what evidence we have for their internal structure. This is because many semanticists, in my opinion, focus too much on the detailed exposition of truth conditions and less on how meaning is put together in practice in the symbolic system. Secondly, compositionality is not a holy grail in and of itself, because once you allow yourself higher types, the `local compositionality' constraint is vacuously satisfiable. It only has bite if you combine it with a constrained view of polysemy and a theory of what sorts of types and shifters are allowed. This has to be part of the negotiation in which both psychological evidence and linearization are factors.
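To make the higher-types point concrete, here is a toy derivation of my own (not from Preminger's post): for any target meaning whatsoever, function application can be trivially respected by lifting one daughter's type, so `local compositionality' by itself rules nothing out:

```latex
% Goal: derive an arbitrary meaning m for the node [A B], with the
% meaning of B fixed as b. Just assign A a higher-typed denotation
% that discards (or absorbs) its argument:
\llbracket A \rrbracket \;=\; \lambda x.\, m
\qquad
\llbracket A\ B \rrbracket
  \;=\; \llbracket A \rrbracket\bigl(\llbracket B \rrbracket\bigr)
  \;=\; (\lambda x.\, m)(b) \;=\; m

% Composition is `local' and by function application throughout, yet
% any m whatsoever is derivable: the constraint only bites once the
% inventory of types and shifters is independently restricted.
```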

I think the circularity argument is false, and it is based on a false premise. Essentially, you are presupposing your conclusion: that syntax has properties independent of the constraints imposed by the mapping to semantics on the one hand and the mapping to linearizable sound sequences on the other. (I personally think these properties might be vanishingly few.) If, on the other hand, like me, you think that there is nothing to syntax (except for the fact of it), then the two pieces of information you need to be precise about are found in how detailed systematic meaning building is tracked by sound sequencing, in memorized vs. generated chunks. We are solving an equation in three unknowns, and it simply will not do to assume some aspect of your conclusion (i.e. that syntax is autonomous and has lots of interesting things in it). You may not agree, but in any case the above position is an internally coherent one, and has not so far been debunked.

Imagine making the structurally same attack on phonology as you have made on semantics. It goes like this: organizing your syntactic representations to be sensitive to the order in which phonemes/morphemes/words are pronounced (which we do when we tacitly assume the LCA) and THEN mapping to phonology and word order makes the latter trivial and circular. (In fact, I think something like this criticism is more pertinent than what you claim about semantics.)

In fact, I would argue that the only reason for hierarchical structure IS the systematicity and generativity of meaning. The only reason for linear order is that phonology has to deal with a serial representation. Syntactic representation is the trick you have to perform to allow a symbolic representation that fulfils both sorts of design needs (adequacy to the meaning-generation part of language, and the necessity for a predictable and trackable externalization).

If you think syntax is essentially about hierarchical structure, then it is so only because of the constraint of semanticity. I am curious to hear what specific evidence you think there exists for hierarchical structure in syntactic representations that does not make reference to semantic facts somewhere.

If you base your syntactic theory on string acceptability, you have the kind of theory that is equally applicable to birdsong.

I think that is a bad result. Recursion and hierarchical structure are not enough; in terms of what the crucial design feature of language is that makes it so unique and cognitively important, I would have to add semanticity (cf. Hockett). More precisely, the hybrid notion: recursive semanticity.


Argument Structure Across Modalities

Workshop Held at the University of Amsterdam, Feb 1-3, 2018

I was just at a workshop where people working on argument structure got together. Yes, I seem to have been to a lot of those in my time, but this one was different. This one was explicitly designed as a meeting place for those working on both signed and spoken languages. You can find the programme for the workshop, together with abstracts, here.

Now the cool thing about studying sign language (for any linguistic phenomenon) is that it allows you to directly address the question of which aspects of universal patterning are functions of the nature of the interface to the modality chosen for externalization, and which other aspects survive such differences. Given the starting assumption of universality, there are also phenomena which exist across modalities but which are easier to see and track in one modality vs. another. Finally, if you work on spoken languages, the existence of phenomena in sign languages forces you to come up with more strictly general articulations of common analyses in order to accommodate this other manifestation of human natural language. Sometimes it is the encounter with the other modality that convinces one that one has not been sufficiently general in the first place, despite one's best efforts. Overall, the work on typology at this workshop (both within signed languages, and across both modalities) showed me that sign languages generally have more in common with each other typologically than they have with the spoken languages they happen to be in close contact with (Börstell et al., Kimmelman et al., Benedicto, this workshop).

(Broad) Differences between Signed and Spoken Languages

Sign languages are externalized in a visual modality, using the body to create signs (mostly with the hands); spoken languages are externalized in the auditory modality, using the mouth, lips and tongue to modify produced sound. (I start with the obvious.) But there's more. Sign languages have more agreement and almost never have case. They use locations as a proxy for reference. This affects the way in which both pronouns and agreement play out in these languages. There's a lot of `agreement' on predicates, both for subject AND object. It's easy to do, because the sign signed in space can overlap its beginning or ending portions with the position in space that has been indexed for a particular discourse participant. Sign languages also have a lot of object incorporation or `classifiers' which modulate the shape of verbal predicates. Kimmelman et al. (this workshop) checked a lot of unrelated sign languages and showed the following (confirming the typology introduced by Brentari and Benedicto 2004): verbs embodying `handling' classifiers are always transitive, while those using whole entity classifiers are nearly always intransitive (with a single internal argument); body part classifiers go together with intransitive predicates (with a single internally causing argument). Tang and He (this workshop) looked at HKSL and TJSL and found something similar, with a wrinkle— they seemed to find systematic instances of whole entity classifiers used with `null causativization' to express transitive predications. I liked this paper because it showed something comfortingly familiar wrapped up in a system that is disorientingly different from a superficial point of view.

Striking Commonalities

Indeed, despite the surface differences, there are some striking things in common across spoken and signed languages in the domain of argument structure. For example, you can find systematic differences between unaccusative single-argument verbs and unergative single-argument verbs. Verbs also come classified for durativity and telicity, which they display in an obvious way morphologically. Signed languages have resultative constructions. Cornelia Loos (this workshop) showed us some nice, clear, well-argued evidence that a certain class of complex predications in German Sign Language (DGS) are monoclausal co-predications, and that the secondary predicate is constrained to modify the internal argument. Yay! A true language universal…

Signed and Spoken Languages Unlock Their Own Secrets:

Because sign uses the visual modality, there is a clear iconic relationship between verbal aktionsart and causational path on the one hand, and physical trajectory in space on the other. This means that we can see endpoints very viscerally (either as a sign leading to a final point of contact, or merely as a final sharp deceleration). We can also see very clearly the individual portions or subevents corresponding to an internally complex event: whereas the English verb push gives no indication from its sound whether it expresses an initial ballistic transition followed by motion of the undergoer, or whether it involves continuous causation and accompanied motion, in sign languages the verbal sign for `push' looks different in these two different scenarios (Benedicto, this workshop). The iconicity is obviously particularly direct for motion verbs. The semantics of scales and homomorphism works here in the iconic dimension just as it does in spoken languages (Kuhn, this workshop). The high levels of iconicity found in sign languages cannot be unrelated to the fact that spatial metaphors are ubiquitous for describing temporal and other abstract scales in spoken languages too. Sign languages are thus great for probing the internal structure of complex motion events, in ways that I am sure are universal even though spoken languages are less morphologically obvious about it.

On the other hand, when it comes to reflexivity (a topic that Siloni presented at this workshop), the overt morphological reflex of reflexive predication is obvious in many spoken languages. Siloni presented detailed arguments to show that the se reflexive forms in French, for example, share certain properties with argumental reflexives like those of English, but share yet others with the lexical reflexive predicates of Hebrew. She mounted an argument that the French reflexives contain a lexical, but not syntactic, representation of the internal argument. Interestingly, because sign languages use the body and body shape classifiers to express self-caused actions, and because agreement is done by locational indexing, there is no surface-obvious way of distinguishing between a body shape classifier and reflexive agreement or lexical reflexive marking on a predicate. Here was a clear domain where internal complexity and nuance could be seen and tested in the spoken languages under consideration, but were much less obvious on the surface in the signed languages.

Depiction vs. Description

What special challenges do signed languages pose for syntax and semantics? The existence of pervasive iconicity and iconic `phonetic' modification of basic signs to give more detailed specification of event predicates turns out to be a very tricky thing to incorporate systematically into the compositional semantics of event predication. In addition, signed languages very often use parallel information streams, posing further problems for integration (Steinbach, invited talk, this workshop). Co-gesture in spoken language poses similar problems and questions, including whether the information so contributed is part of the at-issue content or whether it only occurs in some pragmatic or distinct dimension. In spoken language, co-gesturing can be shown to be ubiquitous, and also seems to show patterns similar to signed languages and their classifier types (Wu and Cienko, this workshop). Integration of iconic content is the problem tackled by K. Davidson (2015) in her work on demonstrations, and taken up in Henderson's work on ideophones, which I have recently engaged with myself. In my own talk, I proposed that signed and spoken languages are exactly the same in requiring a quotational semantics for the integration of the linguistic sign within the functional/grammatical scaffolding of the clause. I proposed that the DEMO predicate invoked by Davidson for iconic and depictive symbolic elements is really a flavour of the more general CONVEY, and that properties of both the symbol deployed and the deploying event need to be taken into account and integrated deep inside the compositional semantics of the at-issue content in a verbal description (you can access my slides here).

In my own most recent work (Ramchand, to appear), I was struggling with something different, namely the best way to implement a compositional system for natural language propositions that would have certain basic kinds of cartographic semantic layering as a consequence. After toying with the idea of introducing event properties into the ontology (an idea that proved ultimately intractable), I hit on the solution of reifying the linguistic symbol itself. By reifying the symbol and the deployment event, Quantificational Quotational Semantics (QQS) allows the symbol to denote partial descriptions given by immediate sensory and cognitive uptake of eventive particulars, without committing to the existence of those particulars until the event is existentially closed at the vP level. A payoff that I hadn't fully anticipated was that QQS allows a zone of compositional concept building which leaves it open that the inputs to concept building can come from a variety of different sources, including iconic input (both auditory and gestural). By reifying the symbol qua symbol, QQS offers a better way of integrating iconic and gestural content into the formal semantic representation and in effect normalizing it (we don't need to relegate it to a post-linguistic pragmatic component). In addition, the deployment event is integrated early in concept formation, potentially predicting a more central conceptual role for deictic information.

To summarize the experience, one thing is clear— having the extra dimension of signed languages in the data pool at a thematic workshop like this gives a satisfying kind of insight into the interplay between the language-specific and the universal, which led to much intellectual fascination and pleasure (something I am sure I shared with my fellow traveller linguists at this workshop).