

Fauconnier (1994), Lakoff (1987), Jackendoff (2002), Hudson (1990), among others, give plausible reasons why linguistic reference is not "in the world" or "in the fragment of the world that the sentence is describing". The arguments I present here will be familiar to most readers, but it is worth revisiting them, because they lead to a particular stance on quantification.

In general, the view that semantics has to be treated as a domain of conceptual structure is common to several linguistic theories, because one of the desiderata is a cognitively plausible computational system, and meanings (however they might be understood) have the property of being able to combine. We can think of this in terms of the distribution of NPs. A common first-year undergraduate syntax generalization is that English nouns cannot occur on their own, so (5a) is ungrammatical. The next example, though, is fine.

(5) a. * Dog crossed the road.

b. The/some/this/that dog crossed the road.

c. * The/a crossed the road.

d. Some/this/that crossed the road.

The generalization is that English common nouns, like dog, need to occur with a determiner to make them grammatical. The generalization leads to a lot of spilled ink about whether the determiner or the noun is the head of the phrase, because most determiners, apart from the articles, can occur on their own as in (5d), with the ungrammaticality shown in (5c). One way to conceive of these facts is to say that the distribution of NPs is semantically determined: nouns have to have their definiteness value established in order to occur in an argument position. Nouns which are in a phrase with a determiner, or which are generic, have a definiteness value established.

This generalization has the virtue of not making the claim that all nouns need to occur with a determiner to be grammatical, so it includes the distribution of bare plural generics, and the distribution of bare predicative nouns as in We made her president. However, crucially, this account of the behaviour of nouns and determiners and the distribution of noun phrases relies on an intramental notion of reference. Not only is definiteness a semantic rather than a syntactic property, but it is also a property of the semantics of reference rather than the semantics of sense. If we are going to capture the relevant facts to do with the distribution of NPs, we are going to need to include a semantic level of representation which is about reference within the grammar.
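The generalization can be made concrete with a small sketch. The function names and feature labels below are hypothetical, invented purely for illustration; the point is only that definiteness, not the bare presence of a determiner, governs whether an NP can occupy an argument position.

```python
def is_grammatical(np, position):
    """Sketch of the generalization: a noun needs an established
    definiteness value to occur in argument position; a determiner
    or a generic reading supplies one.  Predicative nouns, as in
    'We made her president', are exempt."""
    if position == "predicative":
        return True
    has_definiteness = (np.get("determiner") is not None
                        or np.get("generic", False))
    return has_definiteness

print(is_grammatical({"noun": "dog"}, "argument"))                       # False: *Dog crossed the road
print(is_grammatical({"noun": "dog", "determiner": "the"}, "argument"))  # True: The dog crossed the road
print(is_grammatical({"noun": "dogs", "generic": True}, "argument"))     # True: bare plural generic
print(is_grammatical({"noun": "president"}, "predicative"))              # True: We made her president
```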

This is not surprising. I have already mentioned Heim (1983) and her use of discourse referents. Essentially, the argument is that the treatment of anaphora requires us to treat reference as a kind of concept, not as a relation to entities in the world. Take the example in (6).

(6) Jane hates cats. Whenever she saw a catᵢ in her garden, she shot itᵢ.

The phrase a cat in (6) does not refer to any entity in the world. However, it is the antecedent to the pronoun it, which has to collect its referent from another word in the discourse. Heim (1983) argues, following Karttunen (1976), that we need a new notion of discourse referent for examples like (6), where a cat fails to refer (and so too must it, therefore). Similar arguments about the need for representations in semantics in order to capture the relevant linguistic generalizations are made by Cann et al. (2005).

But there are other reasons for adopting a mentalist view of reference. One is to do with identification questions. Jackendoff (2002: 301-2) observes that words such as Manitoba have indeterminate reference: is it a geographical space in Canada, a political region in Canada, or a region on a map of Canada? Likewise, what can a phrase such as Beethoven's 9th Symphony refer to? The score? A particular performance? Obviously neither: imagine someone said the sentence in (7).

(7) Beethoven's 9th Symphony is predictable and boring, but I still get a thrill when the choir blasts out "Ode to Joy".

You cannot argue that the phrase Beethoven's 9th Symphony does not refer. As the subject of the clause, it must. But what is being referred to is not anything in the world; it is something in the speaker's head (and the hearer's): their concept of the symphony, a generalization over a number of experiences of it (which might include listening to it live, listening to recorded versions, performing in it as a singer, an instrumentalist, or a conductor, or reading the score).

There are, then, two arguments that "referents", or at least the referents which are relevant to linguistic, and especially grammatical, description, are concepts. There is a third argument for abandoning a realist view of linguistic meaning in favour of a conceptualist one, and that is the argument from classification. Lakoff (1987) argues that human cognition classifies according to prototypes, and that Aristotelian categories, which are organised around set membership, fail to capture these differences. Categorisation is relevant because, if word meanings aren't sets (because categories aren't hard and fast), then we need to decide how categorisation is done, and how it interacts with other aspects of meaning. Most cognitive theories assume default inheritance, which is a kind of logic originally designed to capture the non-monotonic properties of human reasoning. (Langacker calls default inheritance "schematicity".) Inheritance gives us a way of capturing both prototype effects and how word meanings combine to form complex concepts.
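The non-monotonic character of default inheritance can be shown in a few lines. This is a minimal sketch, not WG's or Langacker's actual formalism, and the bird/penguin network is a stock illustration rather than an example from the text: a property is inherited from the nearest ancestor that specifies it, so a more specific node can override a default.

```python
# Hypothetical inheritance network: each node names its parent ('isa')
# and any properties it states directly.
network = {
    "bird":    {"isa": None,   "flies": True},
    "penguin": {"isa": "bird", "flies": False},  # overrides the default
    "robin":   {"isa": "bird"},                  # inherits the default
}

def inherit(node, prop):
    """Walk up the Isa chain; the first value found wins (non-monotonic)."""
    while node is not None:
        entry = network[node]
        if prop in entry:
            return entry[prop]
        node = entry["isa"]
    return None

print(inherit("robin", "flies"))    # True  (default inherited from 'bird')
print(inherit("penguin", "flies"))  # False (default overridden)
```

The override for 'penguin' is what a classical, monotonic logic cannot express: adding more specific information changes a conclusion already drawn, which is exactly the prototype effect the text describes.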


Figure 1. 'Everybody dies' from Hudson (2007: 31)

These observations about reference mean that there is no way in which external quantification, analysed with sentence connectives in a first-order predicate logic, can make any sense in a cognitive theory. In a cognitive theory, the relationship between the representation and the world is mediated by the apparatus of cognitive psychology and perception. Examples such as (8) are not translatable into a cognitive theory in a straightforward way, because (8) quantifies over all things in the world: it means "for all things in the world x, if x is a person then x dies". A cognitive theory has no mechanism for capturing quantification over a variable whose range is every single entity in the world.

(8) ∀x (Person(x) → die(x))

So how do we capture the kinds of information that are described using external quantification such as (8)? It depends, of course, on the theory, and on how it models linguistic representations. I will give an answer using Word Grammar, the main point being that all of the representations in WG are conceptual structures. The theory claims that language is a symbolic network. Naturally enough, WG does not use external quantification (i.e. there is no quantification over variables) because all of WG's semantics is intramental, and not understood in terms of a relationship between the utterance and the fragment of world it describes. Quantification over variables is intended to establish which object in the world is under discussion. Hudson (2007: 31) gives the example in Figure 1 as a WG representation of (8).

Figure 1 presents its own complexities. It states that the category 'Person' is the agent of an instance of 'die'. WG states its equivalent of universally quantified statements using default inheritance: anything that inherits from 'Person' automatically inherits the property of dying; however, anything that inherits from 'die' will not inherit being a person. If we wanted to state that all, and only, animals and plants die, we would need to have a link from the 'die' node at the top of the diagram.

There are two kinds of relation in a WG diagram. The line from 'die' to '1' with an upside down triangle at the top represents the Isa or "is an instance of" relation, which is the relationship of default inheritance. Therefore, this part of the diagram says that '1' is an instance of 'die'. The second kind of relation is shown by an arrow, which can be thought of as a function from an argument ('1' in this case) to a value ('Person').
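The two relation types can be sketched as a data structure. The encoding below is my own illustration, not Hudson's notation: Isa links form a chain for default inheritance, and each arrow is a function from an argument node to a value node, here the 'agent' arrow from '1' to 'Person' in Figure 1.

```python
# Hypothetical encoding of the 'Everybody dies' network in Figure 1.
isa = {"1": "die"}                    # Isa links: '1' is an instance of 'die'
arrows = {("1", "agent"): "Person"}   # arrow: function from '1' to 'Person'

def isa_chain(node):
    """All categories a node inherits from, nearest first."""
    chain = []
    while node in isa:
        node = isa[node]
        chain.append(node)
    return chain

print(isa_chain("1"))            # ['die']: '1' inherits from 'die'
print(arrows[("1", "agent")])    # 'Person': the agent of '1' is the category 'Person'
```

Note the asymmetry the text insists on: inheritance flows down the Isa chain from 'Person' to its instances, so instances of 'Person' acquire the dying event, while nothing about the 'die' node makes its instances persons.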
