I was just recently at the 9th meeting of Semantics and Philosophy in Europe
(SPE) held this year in lovely Padova, where the special topic was Ontology.
It was stimulating and enlightening to have so many people in the same room
thinking about things ontological, and from so many different backgrounds
and foundational concerns. It is also useful to lay out what the differences in
motivation and intellectual goals are when the term ontology gets bandied
about, because in my experience it leads to massive misunderstanding when
we all use the same word to mean different things. I have been working on
Things Ontological myself recently, and so I feel motivated to express clearly
what it is I am doing, and what it is I am not doing.
So why do we think it is important to think about ontology? There are a
number of perfectly respectable answers.
Ontology Because: Metaphysics
When the philosophers start talking about this stuff, quite often what they
are interested in is metaphysics. They want to know what the world is like.
They want to know what the Real World is Really like. Ok, so I am a little
bit skeptical that the Real World exists in any way that my tiny human brain
can Really understand, but there are many philosopher types who consider this
the burning question and who seem to be much more optimistic than me. So, for
example, `Does creating a piece of music involve `causation'?', which was one
philosophy paper I listened to with furrowed brow like an alien from another
planet. Because I guess I just don’t understand what the philosophers are
worried about, and last I checked, creation verbs, regardless of what kind of
object is `created', all come out as transitive, or as causative in languages
that have an overt causative morpheme. So I just didn't understand what the
question was. My bad. I think you just have to have a special philosophy
gene for this one. I look forward to some philosopher taking me to the pub
one day and explaining it to me over lots of alcohol. Because one of the things
that becomes clear when you go to these conferences is that philosophers are
smart, but also great drinkers and conversationalists.
But philosophers were not the only ones at this workshop. There were also
applied ontologists, and they have a very specific set of implementational
problems involving the meaning of lexical items of individual languages.
Ontology Because: Computers are Stupid
Human beings are very intelligent. And the trick we pull off with using lan-
guage creatively to build meanings is one of the most intelligent things we
do. Applied ontologists are very smart human beings trying to spell things
out for very stupid computers. If you just talk to human beings all the time
like I do, you don’t notice half of the complexity that goes on underneath
the surface wedged inside the meaning of individual lexical items. But when
you are working on trying to get computers to understand humans you start
noticing that complexity: computers are just so darn literal-minded! If you
don't tell them something explicitly, they never get it! So we need to have
an idea about what kind of primitive notions and knowledge structures need
to be built in to computers' brains so that they don't produce howlers. One
very interesting topic that the workshop dealt with in some detail was the
phenomenon of `dot objects’, which is a kind of knowledge structure corre-
sponding to lexical concepts that facilitates the kind of meaning slippages
that happen all the time when such a word is used. Consider the following
sentence of English:
The heavy book on the table is very interesting.
Totally unremarkable, except for the fact that to predicate `heavy’ of `book’ you
need to be thinking of it qua physical object, while predicating `interesting’ of
it requires you to think of it qua literary content. The co-predication here
shows that the lexical item is not ambiguous in the traditional sense. Both
meanings reside in the same item, and they are qualitatively quite different,
so vagueness doesn't seem right either. The collection of different
kinds of `senses' of lexical items can be quite large, and there are certain
generalizations about what kinds of senses cohabit within particular lexical
items. As a linguistic semanticist, I find this quite fascinating, because, as
far as I have seen, these are not the kinds of meanings that seem to be
morphologically coded for in the languages I know, and because all human languages seem to
allow precisely these kinds of slippages. So it's definitely part of the puzzle
about human meaning to see what the generalizations are here. Laure Vieu
(CNRS) argued that cohabiting senses for these kinds of objects are
characterized by sharing their spatiotemporal instantiation, and I think that's a great insight.
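As a toy illustration of the idea (my own sketch, not any formalism presented at the workshop, and all names here are hypothetical), one can model a `dot object' as a structure bundling two facets that share a single spatiotemporal instantiation, with each predicate selecting the facet it applies to:

```python
# Toy sketch of a `dot object' for `book': one lexical item, two cohabiting
# senses (physical object and informational content). Hypothetical names.
from dataclasses import dataclass

@dataclass
class PhysicalFacet:
    weight_kg: float
    location: str

@dataclass
class InfoFacet:
    title: str
    interest_score: int  # crude stand-in for judged interestingness

@dataclass
class Book:
    """PHYS * INFO dot object: both facets are instantiated by the same
    spatiotemporal particular (this copy, here on the table)."""
    phys: PhysicalFacet
    info: InfoFacet

def is_heavy(b: Book) -> bool:
    # `heavy' selects the physical facet
    return b.phys.weight_kg > 1.0

def is_interesting(b: Book) -> bool:
    # `interesting' selects the informational facet
    return b.info.interest_score > 7

book = Book(PhysicalFacet(1.4, "on the table"), InfoFacet("War and Peace", 9))
# Co-predication: both predicates apply felicitously to the same object.
print(is_heavy(book) and is_interesting(book))  # True
```

The point of the sketch is only that neither facet is the `real' denotation; the word makes both available, and predicates pick out the one they need.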
Finally, at the conference were some linguistically inclined folk thinking about
ontology, and they/we had another set of direct motivations.
Ontology Because: Language and Cognition
Basically, we think that language ontology is not a way to get at deep meta-
physical facts. True, language is one of the most complex symbolic things
we do, and it clearly seems to be `about’ something. But there seems to
be no necessity to express the aboutness of language in terms of primitives
that are Really ontological, as in metaphysically primitive in the real world.
In fact, it is an open question how we should represent the `aboutness’ side
of the language equation (at the end of the day, the things language seems
to be talking `about’ will bear some non-trivially determined relationship to
the real world and its ontology, but there is no guarantee that language itself
sees that deep) (see Kit Fine, here, for discussion).
The idea from Friederike Moltmann is that natural language ontology
is the set of primitive categories which languages seem to implicitly employ,
because the patterns of language, and its mappings to meaning, make most sense that way
(see here for her work on this topic). Natural language ontology therefore can
be discovered only by detailed inspection of natural languages themselves,
their morphosyntax and semantics. It cannot be discovered by introspection,
or by seeing what a computer can understand, or by inspecting the real world.
But why would we want to express `meanings’ in terms of an ontology
parochial to natural language, instead of using an objective, independent,
more real ontology? I hear you ask. (Well, I hear the formal semanticists
asking this anyway.) The reason is simply that only by investigating nat-
ural language ontology will we discover anything interesting about how the
human natural language system is built up. If we go for an objective real
world ontology then the match will be indirect at best, and we will miss
generalizations about how syntax and semantics connect in these symbolic
systems. In my opinion, much formal semantics assumes that the choice of
ontology on the aboutness side is a matter of formal convenience, as long
as it is grounded in the Real. But in fact, the ontology is part of what we
need to figure out. And it may have only an indirect relation to the Real.
Wolfram Hinzen in his work seems to go a step further and argues that much
of natural language meaning is constructed by the grammatical system itself
and could not be found in the Real world even in principle. I find myself
sympathetic to the whole natural language ontology enterprise, and suspect
that the cohabitation of senses inside lexical items is really about human
cognition in response to the world, together with the design requirements of
having a symbolic system in the first place. On the other hand, the mean-
ings constructed by functional items are relational in a fundamental way that
breaks the immediate connection between signifier and signified. But that, I
think, is a topic for a different blog.