Workshop Held at the University of Amsterdam, Feb 1-3, 2018
I was just at a workshop where people working on argument structure got together. Yes, I seem to have been to a lot of those in my time, but this one was different. This one was explicitly designed as a meeting place for those working on both signed and spoken languages. You can find the programme for the workshop, together with abstracts, here.
Now the cool thing about studying sign language (for any linguistic phenomenon) is that it allows you to directly address the question of which aspects of universal patterning are functions of the interface to the modality chosen for externalization, and which aspects survive such differences. Given the starting assumption of universality, there are also phenomena which exist across modalities but which are easier to see and track in one modality than in another. Finally, if you work on spoken languages, the existence of phenomena in sign languages forces you to come up with more strictly general articulations of common analyses in order to accommodate this other manifestation of human natural language. Sometimes it is the encounter with the other modality that convinces one that one has not been sufficiently general in the first place, despite one's best efforts. Overall, the work on typology at this workshop (both within signed languages, and across both modalities) showed me that sign languages generally have more in common with each other typologically than with the spoken languages they happen to be in close contact with (Börstell et al., Kimmelman et al., Benedicto, this workshop).
(Broad) Differences between Signed and Spoken Languages
Sign languages are externalized in the visual modality, using the body to create signs (mostly with the hands); spoken languages are externalized in the auditory modality, using the mouth, lips, and tongue to modify produced sound. (I start with the obvious.) But there's more. Sign languages have more agreement and almost never have case. They use locations as a proxy for reference, which affects the way both pronouns and agreement play out in these languages. There's a lot of `agreement' on predicates, for both subject AND object. It's easy to do, because the sign articulated in space can overlap its beginning or ending portions with the position in space that has been indexed for a particular discourse participant. Sign languages also have a lot of object incorporation, or `classifiers', which modulate the shape of verbal predicates. Kimmelman et al. (this workshop) checked a range of unrelated sign languages and showed the following (confirming the typology introduced by Benedicto and Brentari 2004): verbs embodying `handling' classifiers are always transitive; verbs using whole entity classifiers are nearly always intransitive, with a single internal argument; and verbs with body part classifiers are intransitive, with a single internally causing argument. Tang and He (this workshop) looked at HKSL and TJSL and found something similar, with a wrinkle: they seemed to find systematic instances of whole entity classifiers used with `null causativization' to express transitive predications. I liked this paper because it showed something comfortingly familiar wrapped up in a system that looks disorientingly different on the surface.
Striking Commonalities
Indeed, despite the surface differences, there are some striking commonalities across spoken and signed languages in the domain of argument structure. For example, you can find systematic differences between unaccusative single-argument verbs and unergative single-argument verbs. Verbs also come classified for durativity and telicity, which show up in an obvious way morphologically. Signed languages have resultative constructions. Cornelia Loos (this workshop) showed us some nice, clear, well-argued evidence that a certain class of complex predications in German Sign Language (DGS) are monoclausal co-predications, and that the secondary predicate is constrained to modify the internal argument. Yay! A true language universal…
Signed and Spoken Languages Unlock Their Own Secrets
Because sign languages use the visual modality, there is a clear iconic relationship between verbal aktionsart and causational path, on the one hand, and physical trajectory in space, on the other. This means that we can see endpoints very viscerally (either as a sign leading to a final point of contact, or simply as a final sharp deceleration). We can also see very clearly the individual portions or subevents corresponding to an internally complex event: whereas the English verb push gives no indication from its sound whether it expresses an initial ballistic transition followed by motion of the undergoer, or continuous causation with accompanied motion, in sign languages the verbal sign for `push' looks different in these two scenarios (Benedicto, this workshop). The iconicity is obviously particularly direct for motion verbs. The semantics of scales and homomorphism works in the iconic dimension here just as it does in spoken languages (Kuhn, this workshop). The high levels of iconicity found in sign languages cannot be unrelated to the fact that spatial metaphors are ubiquitous in spoken languages as well, for describing temporal and other abstract scales. Sign languages are thus great for probing the internal structure of complex motion events, in ways that I am sure are universal even though spoken languages are less morphologically obvious about it.
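To make the scales-and-homomorphism point concrete, here is a standard Krifka-style mapping between events and their spatial paths; this is my own illustrative rendering, not the formalism from Kuhn's talk.

```latex
% Krifka-style homomorphism from events to their spatial paths
% (illustrative sketch, not a formula from the workshop talks):
% subparts of an event correspond to subparts of its path.
\[
\forall e\, \forall e'\, [\, e' \sqsubseteq e \;\rightarrow\; \mathit{path}(e') \sqsubseteq \mathit{path}(e) \,]
\]
% On this view, a predicate is telic iff the paths of the events it
% describes are bounded, i.e. reach a designated endpoint.
\[
\mathit{telic}(P) \;\leftrightarrow\; \forall e\, [\, P(e) \rightarrow \mathit{bounded}(\mathit{path}(e)) \,]
\]
```

In the signed modality, path(e) is realized directly by the trajectory of the hand, which is why the endpoint of a telic event is literally visible as final contact or sharp deceleration.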
On the other hand, when it comes to reflexivity (a topic that Siloni presented on at this workshop), the morphological reflex of reflexive predication is overt in many spoken languages. Siloni presented detailed arguments to show that the se reflexive forms in French, for example, share certain properties with argumental reflexives like those of English, but share others with the lexical reflexive predicates of Hebrew. She mounted an argument that the French reflexives contain a lexical, but not a syntactic, representation of the internal argument. Interestingly, because sign languages use the body and body shape classifiers to express self-caused actions, and because agreement is done by locational indexing, there is no surface-obvious way of distinguishing between a body shape classifier, reflexive agreement, and lexical reflexive marking on a predicate. Here was a clear domain where internal complexity and nuance could be seen and tested in the spoken languages under consideration, but was much less obvious on the surface in the signed languages.
Depiction vs. Description
What special challenges do signed languages pose for syntax and semantics? The existence of pervasive iconicity, and of iconic `phonetic' modification of basic signs to give more detailed specification of event predicates, turns out to be a very tricky thing to incorporate systematically into the compositional semantics of event predication. In addition, signed languages very often use parallel information streams, posing further problems for integration (Steinbach, invited talk, this workshop). Co-gesture in spoken language poses similar problems and questions, including whether the information so contributed is part of the at-issue content, or whether it lives in some pragmatically distinct dimension. In spoken language, co-gesturing can be shown to be ubiquitous, and also seems to show patterns similar to signed languages and their classifier types (Wu and Cienki, this workshop). Integration of iconic content is the problem tackled by K. Davidson (2015) in her work on demonstrations, taken up in Henderson's work on ideophones, and something I have recently engaged with myself. In my own talk, I proposed that signed and spoken languages are exactly the same in requiring a quotational semantics for the integration of the linguistic sign within the functional/grammatical scaffolding of the clause. I proposed that the DEMO predicate invoked by Davidson for iconic and depictive symbolic elements is really a flavour of the more general CONVEY, and that properties of both the symbol deployed and the deploying event need to be taken into account and integrated deep inside the compositional semantics of the at-issue content in a verbal description (you can access my slides here).
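For concreteness, here is the rough shape of the Davidson-style analysis and of the CONVEY generalization; the notation is my schematic reconstruction, not the exact formulae from Davidson (2015) or from my slides.

```latex
% Davidson-style demonstration semantics, schematically: an iconic or
% depictive element introduces a demonstration d of the described event e.
\[
[\![\, \mathrm{V}_{\mathrm{iconic}} \,]\!] \;=\; \lambda d\, \lambda e\,.\; \mathrm{DEMO}(d, e)
\]
% The generalization from my talk, equally schematically: DEMO as one
% flavour of CONVEY, a relation over the symbol s deployed, the
% deployment event u, and the described event e.
\[
[\![\, \mathrm{V} \,]\!] \;=\; \lambda s\, \lambda u\, \lambda e\,.\; \mathrm{CONVEY}(s, u, e)
\]
```

The point of the extra arguments is that both the symbol itself and the event of deploying it become objects the compositional semantics can see and constrain, rather than being shunted off to pragmatics.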
In my own most recent work (Ramchand, to appear), I was struggling with something different, namely the best way to implement a compositional system for natural language propositions that would have certain basic kinds of cartographic semantic layering as a consequence. After toying with the idea of introducing event properties into the ontology (an idea that proved ultimately intractable), I hit on the solution of reifying the linguistic symbol itself. By reifying the symbol and the deployment event, Quantificational Quotational Semantics (QQS) allows the symbol to denote partial descriptions given by immediate sensory and cognitive uptake of eventive particulars, without committing to the existence of those particulars until the event is existentially closed at the vP level. A payoff I hadn't fully anticipated is that QQS allows a zone of compositional concept building in which the inputs to concept building can come from a variety of different sources, including iconic input (both auditory and gestural). By reifying the symbol qua symbol, QQS offers a better way of integrating iconic and gestural content into the formal semantic representation, in effect normalizing it (we don't need to relegate it to a post-linguistic pragmatic component). In addition, the deployment event is integrated early in concept formation, potentially predicting a more central conceptual role for deictic information.
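Schematically, and with the caveat that the notation below is my own illustration rather than the official formulation in Ramchand (to appear), the idea is that the reified symbol contributes only a partial description until existential closure applies:

```latex
% QQS composition, schematically (illustrative notation only): the reified
% symbol S, deployed in event u, partially describes an event particular e
% without asserting that e exists...
\[
[\![\, \mathrm{VP} \,]\!] \;=\; \lambda e\,.\; \mathrm{CONVEY}(S, u, e) \,\wedge\, \dots
\]
% ...until existential closure applies at the vP level:
\[
[\![\, \mathrm{vP} \,]\!] \;=\; \exists e\, [\, \mathrm{CONVEY}(S, u, e) \,\wedge\, \dots \,]
\]
```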
To summarize the experience, one thing is clear: having the extra dimension of signed languages in the data pool at a thematic workshop like this gives a satisfying kind of insight into the interplay between the language-specific and the universal, and it was a source of much intellectual fascination and pleasure (something I am sure I shared with my fellow traveller linguists at this workshop).