
Theories of Meaning

1. Two kinds of theory of meaning

In “General Semantics,” David Lewis wrote

I distinguish two topics: first, the description of possible languages or grammars as abstract semantic systems whereby symbols are associated with aspects of the world; and, second, the description of the psychological and sociological facts whereby a particular one of these abstract semantic systems is the one used by a person or population. Only confusion comes of mixing these two topics. (Lewis 1970, 19)

Lewis was right. Even if philosophers have not consistently kept these two questions separate, there clearly is a distinction between the questions ‘What is the meaning of this or that symbol (for a particular person or group)?’ and ‘In virtue of what facts about that person or group does the symbol have that meaning?’

Corresponding to these two questions are two different sorts of theory of meaning. One sort of theory of meaning—a semantic theory—is a specification of the meanings of the words and sentences of some symbol system. Semantic theories thus answer the question, ‘What is the meaning of this or that expression?’ A distinct sort of theory—a foundational theory of meaning—tries to explain what about some person or group gives the symbols of their language the meanings that they have. To be sure, the shape of a correct semantic theory may place constraints on the correct foundational theory of meaning, or vice versa; but that does not change the fact that semantic theories and foundational theories are simply different sorts of theories, designed to answer different questions.

To see the distinction between semantic theories and foundational theories of meaning, it may help to consider an analogous one. Imagine an anthropologist specializing in table manners sent out to observe a distant tribe. One task the anthropologist clearly might undertake is to simply describe the table manners of that tribe—to describe the different categories into which members of the tribe place actions at the table, and to say which sorts of actions fall into which categories. This would be analogous to the task of the philosopher of language interested in semantics; her job is to say what different sorts of meanings expressions of a given language have, and which expressions have which meanings.

But our anthropologist might also become interested in the nature of manners; he might wonder how, in general, one set of rules of table manners comes to be the system of etiquette governing a particular group. Since presumably the fact that a group obeys one system of etiquette rather than another is traceable to something about that group, the anthropologist might put his new question by asking, ‘In virtue of what facts about a person or group does that person or group come to be governed by a particular system of etiquette, rather than another?’ Our anthropologist would then have embarked upon the analogue of the construction of a foundational theory of meaning: he would then be interested, not in which etiquette-related properties particular action-types have in a certain group, but rather in the question of how action-types can, in any group, come to acquire properties of this sort.[1] Our anthropologist might well be interested in both sorts of questions about table manners; but they are, pretty clearly, different questions. Just so, semantic theories and foundational theories of meaning are, pretty clearly, different sorts of theories.

The term ‘theory of meaning’ has, in the recent history of philosophy, been used to stand for both semantic theories and foundational theories of meaning. As this has obvious potential to mislead, in what follows I’ll avoid the term which this article is meant to define and stick instead to the more specific ‘semantic theory’ and ‘foundational theory of meaning.’ ‘Theory of meaning’ simpliciter is to be understood as ambiguous between these two interpretations.

Before turning to discussion of these two sorts of theories, it is worth noting that one prominent tradition in the philosophy of language denies that there are facts about the meanings of linguistic expressions. (See, for example, Quine 1960 and Kripke 1982; for critical discussion, see Soames 1997.) If this sort of skepticism about meaning is correct, then there is neither a true semantic theory nor a true foundational theory of meaning to be found, since the relevant sort of facts simply are not around to be described or analyzed. Discussion of these skeptical arguments is beyond the scope of this entry, so in what follows I’ll simply assume that skepticism about meaning is false.

2. Semantic theories

The task of explaining the main approaches to semantic theory in contemporary philosophy of language might seem to face an in-principle stumbling block. Given that no two languages have the same semantics—no two languages are composed of just the same words, with just the same meanings—it may seem hard to see how we can say anything about different views about semantics in general, as opposed to views about the semantics of this or that language. This problem has a relatively straightforward solution. While it is of course correct that the semantics for English is one thing and the semantics for French something else, most assume that the various natural languages should all have semantic theories of (in a sense to be explained) the same form. The aim of what follows will, accordingly, be to introduce the reader to the main approaches to natural language semantics—the main views about the right form for a semantics for a natural language to take—rather than a detailed examination of the various views about the semantics of some particular expression. (For some of the latter, see names, descriptions, propositional attitude reports, and natural kinds.)

One caveat before we get started: before a semantic theorist sets off to explain the meanings of the expressions of some language, she needs a clear idea of what she is supposed to explain the meaning of. This might not seem to present much of a problem; aren’t the bearers of meaning just the sentences of the relevant language, and their parts? This is correct as far as it goes; but the task of explaining what the semantically significant parts of a sentence are, and how those parts combine to form the sentence, is an enterprise which is far from trivial, and which has important consequences for semantic theory. Indeed, most disputes about the right semantic treatment of some class of expressions are intertwined with questions about the syntactic form of sentences in which those expressions figure. Unfortunately, discussion of theories of this sort, which attempt to explain the logical form, or syntax, of natural language sentences, is well beyond the scope of this entry. As a result, figures like Richard Montague, whose work on syntax and its connection to semantics has been central to the development of semantic theory over the past few decades, are passed over in what follows. (Montague’s essays are collected in Montague 1974; for a discussion of the importance of his work, see §3.3 of Soames 2010.)

Most philosophers of language these days think that the meaning of an expression is a certain sort of entity, and that the job of semantics is to pair expressions with the entities which are their meanings. For these philosophers, the central question about the right form for a semantic theory concerns the nature of these entities. Because the entity corresponding to a sentence is called a proposition, I’ll call these propositional semantic theories. However, not all philosophers of language think that the meanings of sentences are propositions, or even believe that there are such things. Accordingly, in what follows, I’ll divide the space of approaches to semantics into propositional and non-propositional semantic theories. Following discussion of the leading approaches to theories of each type, I’ll conclude in §2.3 by discussing a few general questions facing semantic theorists which are largely orthogonal to one’s view about the form which a semantic theory ought to take.

2.1 Propositional semantic theories

The easiest way to understand the various sorts of propositional semantic theories is by beginning with another sort of theory: a theory of reference.

2.1.1 The theory of reference

A theory of reference is a theory which, like a propositional semantic theory, pairs the expressions of a language with certain values. However, unlike a semantic theory, a theory of reference does not pair expressions with their meanings; rather, it pairs expressions with the contribution those expressions make to the determination of the truth-values of sentences in which they occur. (Though later we will see that this view of the reference of an expression must be restricted in certain ways.)

This construal of the theory of reference is traceable to Gottlob Frege’s attempt to formulate a logic sufficient for the formalization of mathematical inferences (see especially Frege 1879 and 1892). The construction of a theory of reference of this kind is best illustrated by beginning with the example of proper names. Consider the following sentences:

  • (1) Barack Obama is the 44th president of the United States.
  • (2) John McCain is the 44th president of the United States.

(1) is true, and (2) is false. Obviously, this difference in truth-value is traceable to some difference between the expressions ‘Barack Obama’ and ‘John McCain.’ What about these expressions explains the difference in truth-value between these sentences? It is very plausible that it is the fact that ‘Barack Obama’ stands for the man who is in fact the 44th president of the United States, whereas ‘John McCain’ stands for a man who is not. This indicates that the reference of a proper name—its contribution to the determination of truth conditions of sentences in which it occurs—is the object for which that name stands.

Given this starting point, it is a short step to some conclusions about the reference of other sorts of expressions. Consider the following pair of sentences:

  • (3) Barack Obama is a Democrat.
  • (4) Barack Obama is a Republican.

Again, the first of these is true, whereas the second is false. We already know that the reference of ‘Barack Obama’ is the man for which the name stands; so, given that reference is power to affect truth-value, we know that the reference of predicates like ‘is a Democrat’ and ‘is a Republican’ must be something which combines with an object to yield a truth-value. Accordingly, it is natural to think of the reference of predicates of this sort as functions from objects to truth-values. The reference of ‘is a Democrat’ is that function which returns the truth-value ‘true’ when given as input an object which is a member of the Democratic party (and the truth-value ‘false’ otherwise), whereas the reference of ‘is a Republican’ is a function which returns the truth-value ‘true’ when given as input an object which is a member of the Republican party (and the truth-value ‘false’ otherwise). This is what explains the fact that (3) is true and (4) false: Obama is a member of the Democratic party, and is not a member of the Republican party.

Matters get more complicated, and more controversial, as we extend this sort of theory of reference to cover more and more of the types of expressions we find in natural languages like English. (For an introduction, see Heim and Kratzer 1998.) But the above is enough to give a rough idea of how one might proceed. For example, some predicates, like ‘loves’, combine with two names to form a sentence, rather than one. So the reference of two-place predicates of this sort must be something which combines with a pair of objects to determine a truth-value—perhaps, that function from ordered pairs of objects to truth-values which returns the truth-value ‘true’ when given as input a pair of objects whose first member loves the second member, and ‘false’ otherwise.
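
To make the shape of such a theory concrete, here is a minimal sketch in Python. Everything in it (the stand-in objects, the invented facts about party membership and love, the function names) is an illustrative assumption rather than part of any published formalism; it simply models references as objects and functions in the way just described.

    # A toy theory of reference. Objects are represented by strings; the
    # reference of a one-place predicate is a function from objects to
    # truth-values; the reference of a two-place predicate is a function
    # from pairs of objects to truth-values.

    names = {
        "Barack Obama": "obama",
        "John McCain": "mccain",
        "Michelle Obama": "michelle",
    }

    democrats = {"obama", "michelle"}   # invented facts about party membership
    republicans = {"mccain"}
    president_44 = "obama"

    one_place = {
        "is a Democrat": lambda x: x in democrats,
        "is a Republican": lambda x: x in republicans,
        "is the 44th president of the United States": lambda x: x == president_44,
    }

    loving_pairs = {("michelle", "obama")}   # invented facts about who loves whom
    two_place = {
        "loves": lambda x, y: (x, y) in loving_pairs,
    }

    def truth_value(name, predicate):
        """Truth-value of a simple name + one-place-predicate sentence."""
        return one_place[predicate](names[name])

    print(truth_value("Barack Obama", "is the 44th president of the United States"))  # True:  (1)
    print(truth_value("John McCain", "is the 44th president of the United States"))   # False: (2)
    print(truth_value("Barack Obama", "is a Democrat"))    # True:  (3)
    print(truth_value("Barack Obama", "is a Republican"))  # False: (4)
    print(two_place["loves"](names["Michelle Obama"], names["Barack Obama"]))         # True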

2.1.2 Theories of reference vs. semantic theories

So let’s suppose that we have a theory of reference for a language, in the above sense. Would we then have a satisfactory semantic theory for the language?

Some plausible arguments indicate that we would not. To adopt an example from Quine (1970), let’s assume that the set of animals with hearts (cordates) is the same as the set of animals with kidneys (renates). Now, consider the pair of sentences:

  • (5) All cordates are cordates.
  • (6) All cordates are renates.

Given our assumption, both sentences are true. Moreover, from the point of view of the theory of reference, (5) and (6) are just the same: they differ only in the substitution of ‘renates’ for ‘cordates’, and these expressions have the same reference (because they stand for the same function from objects to truth-values).

All the same, there is clearly an intuitive difference in meaning between (5) and (6); the sentences seem, in some sense, to say different things. The first seems to express the trivial, boring thought that every creature with a heart is a creature with a heart, whereas the second expresses the non-trivial, potentially informative claim that every creature with a heart also has a kidney. This suggests that there is an important difference between (5) and (6) which our theory of reference simply fails to capture.

Examples of the same sort can be generated using pairs of expressions of other types which share a reference, but intuitively differ in meaning; for example, ‘Clark Kent’ and ‘Superman,’ or (an example famously discussed by Frege (1892/1960)), ‘the Morning Star’ and ‘the Evening Star.’

This might seem a rather weak argument for the incompleteness of the theory of reference, resting as it does on intuitions about the relative informativeness of sentences like (5) and (6). But this argument can be strengthened by embedding sentences like (5) and (6) in more complex sentences, as follows:

  • (7) John believes that all cordates are cordates.
  • (8) John believes that all cordates are renates.

(7) and (8) differ only with respect to the expressions ‘cordates’ and ‘renates’ and, as we noted above, these expressions have the same reference. Despite this, it seems clear that (7) and (8) could differ in truth-value: someone could know that all cordates have a heart without having any opinion on the question of whether all cordates have a kidney. But that means that the references of expressions don’t even do the job for which they were introduced: they don’t explain the contribution that expressions make to the determination of the truth-value of all sentences in which they occur. (One might, of course, still think that the reference of an expression explains its contribution to the determination of the truth-value of a suitably delimited class of simple sentences in which the expression occurs.) If we are to be able to explain, in terms of the properties of the expressions that make them up, how (7) and (8) can differ in truth-value, then expressions must have some other sort of value, some sort of meaning, which goes beyond reference.

(7) and (8) are called belief ascriptions, for the obvious reason that they ascribe a belief to a subject. Belief ascriptions are one sort of propositional attitude ascription—other types include ascriptions of knowledge, desire, or judgement. As will become clear in what follows, propositional attitude ascriptions have been very important in recent debates in semantics. One of the reasons why they have been important is exemplified by (7) and (8). Because these sentences can differ in truth-value despite the fact that they differ only with respect to the words ‘cordates’ and ‘renates’, and these words both share a reference and occupy the same place in the structure of the two sentences, we say that (7) and (8) contain a non-extensional context: roughly, a ‘location’ in the sentence which is such that substitution of terms which share a reference in that location can change truth-value. (They’re called ‘non-extensional contexts’ because ‘extension’ is another term for ‘reference.’)
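
The force of this point can be made vivid with a small computational sketch (the data are invented). An extensional treatment of ‘x believes that S’ would have to compute the truth-value of the whole ascription from x together with the reference of S, which is just a truth-value; any such function is bound to give (7) and (8) the same verdict.

    # If the reference of an embedded sentence is just its truth-value,
    # an extensional belief operator sees only the subject plus a
    # truth-value; it cannot see WHICH true sentence was embedded.

    s7_embedded = True   # 'all cordates are cordates'
    s8_embedded = True   # 'all cordates are renates' (true, by our assumption)

    def believes_extensionally(subject, embedded_truth_value):
        # However this table is filled in, it is a function of the
        # subject and a truth-value, nothing more.
        return (subject, embedded_truth_value) in {("john", True)}

    print(believes_extensionally("john", s7_embedded))  # verdict on (7)
    print(believes_extensionally("john", s8_embedded))  # verdict on (8): forced to match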

We can give a similar argument for the incompleteness of the theory of reference based on the substitution of whole sentences. A theory of reference assigns to subsentential expressions values which explain their contribution to the truth-values of sentences; but to those sentences, it only assigns ‘true’ or ‘false.’ But consider a pair of sentences like

  • (9) Mary believes that Barack Obama is the president of the United States.
  • (10) Mary believes that John Key is the prime minister of New Zealand.

Because both of the embedded sentences are true, (9) and (10) are a pair of sentences which differ only with respect to substitution of expressions (namely, the embedded sentences) with the same reference. Nonetheless, (9) and (10) could plainly differ in truth-value.

This seems to show that a semantic theory should assign some value to sentences other than a truth-value. Another route to this conclusion is the apparent truth of claims of the following sort:

There are three things that John believes about Indiana, and they are all false.

There are many necessary truths which are not a priori, and my favorite sentence expresses one of them.

To get an A you must believe everything I say.

Sentences like these seem to show that there are things which are the objects of mental states like belief, the bearers of truth and falsity as well as modal properties like necessity and possibility and epistemic properties like a prioricity and a posterioricity, and the things expressed by sentences. What are these things? The theory of reference provides no answer.

Friends of propositions aim both to provide a theory of these entities, and, in so doing, also to solve the two problems for the theory of reference discussed above: (i) the lack of an explanation for the fact that (5) is trivial and a priori while (6) is not, and (ii) the fact (exemplified by (7)/(8) and (9)/(10)) that sentences which differ only in the substitution of expressions with the same reference can differ in truth-value.

A theory of propositions thus does not abandon the theory of reference, as sketched above, but simply says that there is more to a semantic theory than the theory of reference. Subsentential expressions have, in addition to a reference, a content. The contents of sentences—what sentences express—are known as propositions.

2.1.3 The relationship between content and reference

The natural next question is: What sorts of things are contents? Below I’ll discuss some of the leading answers to this question. But in advance of laying out any theory about what contents are, we can say some general things about the role that contents are meant to play.

First, what is the relationship between content and reference? Let’s examine this question in connection with sentences; here it amounts to the question of the relationship between the proposition a sentence expresses and the sentence’s truth-value. One point brought out by the example of (9) and (10) is that two sentences can express different propositions while having the same truth-value. After all, the beliefs ascribed to Mary by these sentences are different; so if propositions are the objects of belief, the propositions corresponding to the embedded sentences must be different. Nonetheless, both sentences are true.

Is the reverse possible? Can two sentences express the same proposition, but differ in truth-value? It seems not, as can be illustrated again by the role of propositions as the objects of belief. Suppose that you and I believe the exact same thing—both of us believe the world to be just the same way. Can my belief be true, and yours false? Intuitively, it seems not; it seems incoherent to say that we both believe the world to be the same way, but that I get things right and you get them wrong. (Though see the discussion of relativism in §2.3.2 below for a dissenting view.) So it seems that if two sentences express the same proposition, they must have the same truth value.

In general, then, it seems plausible that two sentences with the same content—i.e., which express the same proposition—must always have the same reference, though two expressions with the same reference can differ in content. This is the view stated by the Fregean slogan that sense determines reference (‘sense’ being the conventional translation of Frege’s Sinn, which was his word for what we are calling ‘content’).

If this holds for sentences, does it also hold for subsentential expressions? It seems that it must. Suppose for reductio that two subsentential expressions, e and e*, have the same content but differ in reference. It seems plausible that two sentences which differ only by the substitution of expressions with the same content must have the same content. (While plausible, this principle is not uncontroversial; see compositionality.) But if this is true, then sentences which differ only in the substitution of e and e* would have the same content. But such a pair of sentences could differ in truth-value, since, for any pair of expressions which differ in reference, there is some pair of sentences which differ only by the substitution of those expressions and differ in truth-value. So if there could be a pair of expressions like e and e*, which differ in their reference but not in their content, there could be a pair of sentences which have the same content—which express the same proposition—but differ in truth-value. But this is what we argued above to be impossible; hence there could be no pair of expressions like e and e*, and content must determine reference for subsentential expressions as well as sentences.

This result—that content determines reference—explains one thing we should, plausibly, want a semantic theory to do: it should assign to each expression some value—a content—which determines a reference for that expression.

2.1.4 Character and content, context and circumstance

However, there is an obvious problem with the idea that we can assign a content, in this sense, to all of the expressions of a language like English: many expressions, like ‘I’ or ‘here’, have a different reference when uttered by different speakers in different situations. So we plainly cannot assign to ‘I’ a single content which determines a reference for the expression, since the expression has a different reference in different situations. These ‘situations’ are typically called contexts of utterance, or just contexts, and expressions whose reference depends on the context are called indexicals or context-dependent expressions.

The obvious existence of such expressions shows that a semantic theory must do more than simply assign contents to every expression of the language. Expressions like ‘I’ must also be associated with rules which determine the content of the expression, given a context of utterance. These rules, which are (or determine) functions from contexts to contents, are called characters. (The terminology here, as well as the view of the relationship between context, content, and reference, is due to Kaplan (1989).) So the character of ‘I’ must be some function from contexts to contents which, in a context in which I am the speaker, delivers a content which determines me as reference; in a context in which Barack Obama is the speaker, delivers a content which determines Barack Obama as reference; and so on.
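
A minimal sketch of this picture follows (the Context class and its fields are invented for illustration, and, for simplicity, the content an indexical receives in a context is represented directly by its referent there):

    from dataclasses import dataclass

    @dataclass
    class Context:
        speaker: str
        place: str
        time: int

    # Characters: functions from contexts to contents.
    characters = {
        "I": lambda c: c.speaker,
        "here": lambda c: c.place,
        "now": lambda c: c.time,
    }

    c1 = Context(speaker="Ann", place="South Bend", time=2010)
    c2 = Context(speaker="Barack Obama", place="Washington", time=2010)

    print(characters["I"](c1))  # Ann
    print(characters["I"](c2))  # Barack Obama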


Here we face another potentially misleading ambiguity in ‘meaning.’ What is the real meaning of an expression—its character, or its content (in the relevant context)? This is an empty terminological question. Expressions have characters which, given a context, determine a content. We can talk about either character or content, and both are important. Nothing is to be gained by arguing that one rather than the other deserves the title of ‘meaning.’ The important thing is to be clear on the distinction, and to see the reasons for thinking that expressions have both a character and (relative to a context) a content.

How many indexical expressions are there? There are some obvious candidates—‘I’, ‘here’, ‘now’, etc.—but beyond the obvious candidates, it is very much a matter of dispute; for discussion, see §2.3.1 below.

But there is a kind of argument which seems to show that almost every expression is an indexical. Consider an expression which does not seem to be context-sensitive, like ‘the second-largest city in the United States.’ This does not seem to be context-sensitive, because it seems to refer to the same city—Los Angeles—whether uttered by me, you, or some other English speaker. But now consider a sentence like

  • (11) 100 years ago, the second-largest city in the United States was Chicago.

This sentence is true. But for it to be true, ‘the second-largest city in the United States’ would have to, in (11), refer to Chicago. But then it seems like this expression must be an indexical—its reference must depend on the context of utterance. In (11), the thought goes, the phrase ‘100 years ago’ shifts the context: in (11), ‘the second-largest city in the United States’ refers to that city that it would have referred to if uttered 100 years ago.

However, this can’t be quite right, as is shown by examples like this one:

  • (12) In 100 years, I will not exist.

Let’s suppose that this sentence, as uttered by me, is true. Then, if what we said about (11) was right, it seems that ‘I’ must, in (12), refer to whoever it would refer to if it were uttered 100 years in the future. So the one thing we know is that (assuming that (12) is true), it does not refer to me—after all, I won’t be around to utter anything. But, plainly, the ‘I’ in (12) does refer to me when this sentence is uttered by me—after all, it is a claim about me. What’s going on here?

What examples like (12) are often taken to show is that the reference of an expression must be relativized, not just to a context of utterance, but also to a circumstance of evaluation—roughly, the possible state of the world relevant to the determination of the truth or falsity of the sentence. In the case of many simple sentences, context and circumstance coincide; details aside, they both just are the state of the world at the time of the utterance, with a designated speaker and place. But sentences like (12) show that they can come apart. Phrases like ‘In 100 years’ shift the circumstance of evaluation—they change the state of the world relevant to the evaluation of the truth or falsity of the sentence—but don’t change the context of utterance. That’s why when I utter (12), ‘I’ refers to me—despite the fact that I won’t exist to utter it in 100 years time.
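
One way to render this as a sketch (all types and facts are invented): indexicals consult only the context, existence is checked at the circumstance, and ‘in 100 years’ shifts only the circumstance.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Context:
        speaker: str
        year: int

    @dataclass(frozen=True)
    class Circumstance:
        year: int
        existents: frozenset   # the individuals existing at this circumstance

    def ref_I(context, circumstance):
        # 'I' depends only on the context of utterance.
        return context.speaker

    def sentence_12(context, circumstance):
        # 'I will not exist': the referent of 'I' comes from the context,
        # but existence is evaluated at the circumstance.
        return ref_I(context, circumstance) not in circumstance.existents

    def in_100_years(sentence, context, circumstance):
        # 'In 100 years, S' shifts the circumstance, not the context.
        shifted = Circumstance(year=circumstance.year + 100,
                               existents=frozenset())  # assume: no present speaker survives
        return sentence(context, shifted)

    ctx = Context(speaker="Ann", year=2010)
    circ = Circumstance(year=2010, existents=frozenset({"Ann"}))

    print(in_100_years(sentence_12, ctx, circ))  # True: 'I' still picks out Ann,
                                                 # who does not exist in 2110.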


This is sometimes called the need for double-indexing semantics—the two indices being contexts of utterance and circumstances of evaluation.

The classic explanation of a double-indexing semantics is Kaplan (1989); another important early discussion is Kamp (1971). For a different interpretation of the framework, see Lewis (1980).

Double-indexing explains how we can regard the reference of ‘the second-largest city in the United States’ in (11) to be Chicago, without taking ‘the second-largest city in the United States’ to be an indexical like ‘I.’ On this view, ‘the second-largest city in the United States’ does not vary in content depending on the context of utterance; rather, the content of this phrase is such that it determines a different reference with respect to different circumstances of evaluation. In particular, it has Los Angeles as its reference with respect to the present state of the actual world, and has Chicago as its reference with respect to the state of actual world 100 years ago, in 1910.[2] Because ‘the second-largest city in the United States’ refers to different things with respect to different circumstances, it is not a rigid designator—these being expressions which (relative to a context of utterance) refer to the same object with respect to every circumstance of evaluation at which that object exists, and never refer to anything else with respect to another circumstance of evaluation. (The term ‘rigid designator’ is due to Kripke (1972).)

(Note that this particular example assumes the highly controversial view that circumstances of evaluation include, not just possible worlds, but also times. For a discussion of different views about the nature of circumstances of evaluation and their motivations, see §2.3.2 below.)

2.1.5 Possible worlds semantics

So we know that expressions are associated with characters, which are functions from contexts to contents; and we know that contents are things which, for each circumstance of evaluation, determine a reference. We can now raise a central question of (propositional) semantic theories: what sorts of things are contents? The foregoing suggests a pleasingly minimalist answer to this question: perhaps, since contents are things which together with circumstances of evaluation determine a reference, contents just are functions from circumstances of evaluation to a reference.

This view sounds abstract but is, in a way, quite intuitive. The idea is that the meaning of an expression is not what the expression stands for in the relevant circumstance, but rather a rule which tells you what the expression would stand for were the world a certain way. So, on this view, the content of an expression like ‘the tallest man in the world’ is not simply the man who happens to be tallest, but rather a function from ways the world might be to men—namely, that function which, for any way the world might be, returns as a referent the tallest man in that world (if there is one, and nothing otherwise). This fits nicely with the intuitive idea that to understand such an expression one needn’t know what the expression actually refers to—after all, one can understand ‘the tallest man’ without knowing who the tallest man is—but must know how to tell what the expression would refer to, given certain information about the world (namely, the heights of all the men in it).

These functions, or rules, are called (following Carnap (1947)) intensions. Possible worlds semantics is the view that contents are intensions (and hence that characters are functions from contexts to intensions, i.e. functions from contexts to functions from circumstances of evaluation to a reference). (‘Intension’ is sometimes used more generally, as a synonym for ‘content.’ This usage is misleading, and the term is better reserved for functions from circumstances of evaluation to referents. It is then controversial whether, as the proponent of possible worlds semantics thinks, contents are intensions.)


For discussion of the application of the framework of possible worlds semantics to natural language, see Lewis (1970). The intension of a sentence—i.e., the proposition that sentence expresses, on the present view—will then be a function from worlds to truth-values. In particular, it will be that function which returns the truth-value ‘true’ for every world with respect to which that sentence is true, and ‘false’ otherwise. The intension of a simple predicate like ‘is red’ will be a function from worlds to the function from objects to truth-values which, for each world, determines the truth-value ‘true’ if the thing in question is red, and ‘false’ otherwise. In effect, possible worlds semantics takes the meanings of expressions to be functions from worlds to the values which would be assigned by a theory of reference to those expressions at the relevant world: in that sense, intensions are a kind of ‘extra layer’ on top of the theory of reference.

This extra layer promises to solve the problem posed by non-extensional contexts, as illustrated by the example of ‘cordate’ and ‘renate’ in (7) and (8). Our worry was that, since these expressions have the same reference, if meaning just is reference, then it seems that any pair of sentences which differ only in the substitution of these expressions must have the same truth-value. But (7) and (8) are such a pair of sentences, and needn’t have the same truth-value. The proponent of possible worlds semantics solves this problem by identifying the meaning of these expressions with their intension rather than their reference, and by pointing out that ‘cordate’ and ‘renate’, while they share a reference, seem to have different intensions. After all, even if in our world every creature with a heart is a creature with a kidney (and vice versa), it seems that the world could have been such that some creatures had a heart but not a kidney. Since with respect to that circumstance of evaluation the terms will differ in reference, their intensions—which are just functions from circumstances of evaluations to referents—must also differ. Hence possible worlds semantics leaves room for (7) and (8) to differ in truth value, as they manifestly can.
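
A sketch of this solution, with two invented worlds and intensions modeled as functions from worlds to extensions:

    # In the actual world the cordates and the renates coincide; in w2
    # one creature has a heart but no kidney. (Worlds and creatures are
    # invented for illustration.)
    actual = {"hearted": {"a", "b"}, "kidneyed": {"a", "b"}}
    w2     = {"hearted": {"a", "b"}, "kidneyed": {"a"}}
    worlds = [actual, w2]

    # Intensions: functions from circumstances of evaluation to referents.
    cordate = lambda w: w["hearted"]
    renate  = lambda w: w["kidneyed"]

    print(cordate(actual) == renate(actual))             # True: same reference
    print(all(cordate(w) == renate(w) for w in worlds))  # False: different intensions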

The central problem facing possible worlds semantics, however, concerns sentences of the same form as (7) and (8): sentences which ascribe propositional attitudes, like beliefs, to subjects. To see this problem, we can begin by asking: according to possible worlds semantics, what does it take for a pair of sentences to have the same content (i.e., express the same proposition)? Since contents are intensions, and intensions are functions from circumstances of evaluation to referents, it seems that two sentences have the same content, according to possible worlds semantics, if they have the same truth-value with respect to every circumstance of evaluation. In other words, two sentences express the same proposition if and only if it is impossible for them to differ in truth-value.

The problem is that there are sentences which have the same truth-value in every circumstance of evaluation, but seem to differ in meaning. Consider, for example

  • (13) 2+2=4.
  • (14) There are infinitely many prime numbers.

(13) and (14) are in a way reminiscent of (5) and (6): the first seems to be a triviality which everyone knows, and the second seems to be a more substantial claim of which one might well be ignorant. However, both are necessary truths: like any truths of mathematics, neither depends on special features of the actual world, but rather both are true with respect to every circumstance of evaluation. Hence (13) and (14) have the same intension and, according to possible worlds semantics, must have the same content.

This is highly counterintuitive. The problem (just as with (5) and (6)) can be sharpened by embedding these sentences in propositional attitude ascriptions:

  • (15) John believes that 2+2=4.
  • (16) John believes that there are infinitely many prime numbers.

As we have just seen, the proponent of possible worlds semantics must take the embedded sentences, (13) and (14), to have the same content; hence the proponent of possible worlds semantics must take (15) and (16) to be a pair of sentences which differ only in the substitution of expressions with the same content. But then it seems that the proponent of possible worlds semantics must take this pair of sentences to express the same proposition, and have the same truth-value; but (15) and (16) (like (7) and (8)) clearly can differ in truth-value, and hence clearly do not express the same proposition.

Indeed, the problem, as shown in Soames (1988), is worse than this. Consider a pair of sentences like

  • (17) Grass is green.
  • (18) Grass is green and there are infinitely many prime numbers.

The second of these is just the first conjoined with a necessary truth; hence the second is true if and only if the first is true. But then they have the same intension and, according to possible worlds semantics, have the same content. Hence the following two sentences cannot differ in truth-value:

  • (19) John believes that grass is green.
  • (20) John believes that grass is green and there are infinitely many prime numbers.

since they differ only by the substitution of (17) and (18), and these are (according to possible worlds semantics) expressions with the same content. Furthermore, it seems that belief distributes over conjunction, in this sense: anyone who believes the conjunction of a pair of propositions must also believe each of those propositions. But then if (20) is true, so must be (16). So it follows that (19) implies (16), and anyone who believes that grass is green must also believe that there are infinitely many primes. This line of argument generalizes to show that anyone who believes any propositions at all must believe every necessary truth. This is, at best, a highly counterintuitive consequence of the possible worlds semanticist’s equation of contents with intensions. All things being equal, it seems that we should seek an approach to semantics which does not have this consequence.
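
The collapse can be displayed directly if we model an intension, as the possible worlds semanticist in effect does, by the set of circumstances at which a sentence is true (the worlds here are invented labels):

    worlds = {"w1", "w2", "w3"}

    # A sentence's intension, modeled as the set of worlds where it is true.
    intension_13 = worlds          # '2+2=4': true at every world
    intension_14 = worlds          # 'there are infinitely many primes': likewise
    intension_17 = {"w1", "w2"}    # 'grass is green': true at some worlds only

    def conjoin(p, q):
        # A conjunction is true exactly where both conjuncts are true.
        return p & q

    intension_18 = conjoin(intension_17, intension_14)

    print(intension_13 == intension_14)  # True: (13) and (14) get a single content
    print(intension_18 == intension_17)  # True: (18) collapses into (17)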

For an attempt to reply to the argument from within the framework of possible worlds semantics, see Stalnaker (1984); for discussion of a related approach to semantics which aims to avoid these problems, see situations in natural language semantics.

2.1.6 Russellian propositions

What we need, then, is an approach to semantics which can explain how sentences like (13) and (14), and hence also (15) and (16), can express different propositions. That is, we need a view of propositions which makes room for the possibility that a pair of sentences can be true in just the same circumstances but nonetheless have genuinely different contents.

A natural thought is that (13) and (14) have different contents because they are about different things; for example, (14) makes a general claim about the set of prime numbers whereas (13) is about the relationship between the numbers 2 and 4. One might want our semantic theory to be sensitive to such differences: to count two sentences as expressing different propositions if they have different subject matters, in this sense. One way to secure this result is to think of the contents of subsentential expressions as components of the proposition expressed by the sentence as a whole. Differences in the contents of subsentential expressions would then be sufficient for differences in the content of the sentence as a whole; so, for example, since (14) but not (13) contains an expression which refers to prime numbers, these sentences will express different propositions.

Proponents of this sort of view think of propositions as structured: as having constituents which include the meanings of the expressions which make up the sentence expressing the relevant proposition. (See, for more discussion, structured propositions.) One important question for views of this sort is: what does it mean for an abstract object, like a proposition, to be structured, and have constituents? But this question would take us too far afield into metaphysics. (See §2.3.3 below for a brief discussion.) The fundamental semantic question for proponents of this sort of structured proposition view is: what sorts of things are the constituents of propositions?

The answer to this question given by a proponent of Russellian propositions is: objects, properties, relations, and functions. (The view is called ‘Russellianism’ because of its resemblance to the view of content defended in Chapter IV of Russell (1903).) So described, Russellianism is a general view about what sorts of things the constituents of propositions are, and does not carry a commitment to any views about the contents of particular types of expressions. However, most Russellians also endorse a particular view about the contents of proper names which is known as Millianism: the view that the meaning of a simple proper name is the object (if any) for which it stands.

Russellianism has much to be said for it. It not only solves the problems with possible worlds semantics discussed above, but fits well with the intuitive idea that the function of names is to single out objects, and the function of predicates is to (what else?) predicate properties of those objects.
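
A sketch of the Russellian idea, with propositions modeled as nested tuples (the tagged strings standing in for objects, properties, and relations are invented):

    # Russellian propositions as structured tuples whose constituents are
    # (stand-ins for) objects, properties, relations, and functions.
    prop_13 = ("identical", ("sum", 2, 2), 4)       # '2+2=4'
    prop_14 = ("infinite", "the prime numbers")     # 'there are infinitely many primes'

    # Both are true at every circumstance, but as structures they differ:
    print(prop_13 == prop_14)   # False: distinct contents, as desired

    # Millianism: the content of a name is its bearer. One man, two names:
    kent = "the man named both 'Clark Kent' and 'Superman'"
    prop_21 = ("identical", kent, kent)   # 'Clark Kent is Clark Kent'
    prop_22 = ("identical", kent, kent)   # 'Clark Kent is Superman'
    print(prop_21 == prop_22)   # True: the source of Frege's puzzle, discussed below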

However, Millian-Russellian semantic theories also face some problems. Some of these are metaphysical in nature, and are based on the premise that propositions which have objects among their constituents cannot exist in circumstances in which those objects do not exist. (For discussion, see singular propositions, §§4–5.) Of the semantic objections to Millian-Russellian semantics, two are especially important.

The first of these problems involves the existence of empty names: names which have no referent. It is a commonplace that there are such names; an example is ‘Vulcan,’ the name introduced for a planet between Mercury and the sun whose existence was posited to explain perturbations in the orbit of Mercury. Because the Millian-Russellian says that the content of a name is its referent, the Millian-Russellian seems forced into saying that empty names lack a content. But this is surprising; it seems that we can use empty names in sentences to express propositions and form beliefs about the world. The Millian-Russellian owes some explanation of how this is possible, if such names genuinely lack a content. An excellent discussion of this problem from a Millian point of view is provided in Braun (1993).

Perhaps the most important problem facing Millian-Russellian views, though, is Frege’s puzzle. Consider the sentences

  • (21) Clark Kent is Clark Kent.
  • (22) Clark Kent is Superman.

According to the Millian-Russellian, (21) and (22) differ only in the substitution of expressions which have the same content: after all, ‘Clark Kent’ and ‘Superman’ are proper names which refer to the same object, and the Millian-Russellian holds that the content of a proper name is the object to which that name refers. But this is a surprising result. These sentences seem to differ in meaning, because (21) seems to express a trivial, obvious claim, whereas (22) seems to express a non-trivial, potentially informative claim.

This sort of objection to Millian-Russellian views can (as above) be strengthened by embedding the intuitively different sentences in propositional attitude ascriptions, as follows:

  • (23) Lois believes that Clark Kent is Clark Kent.
  • (24) Lois believes that Clark Kent is Superman.

The problem posed by (23) and (24) for Russellian semantics is analogous to the problem posed by (15) and (16) for possible worlds semantics. Here, as there, we have a pair of belief ascriptions which seem as though they could differ in truth-value despite the fact that these sentences differ only with respect to expressions counted as synonymous by the relevant semantic theory.

Russellians have offered a variety of responses to Frege’s puzzle. Many Russellians think that our intuition that sentences like (23) and (24) can differ in truth-value is based on a mistake. This mistake might be explained at least partly in terms of a confusion between the proposition semantically expressed by a sentence in a context and the propositions speakers would typically use that sentence to pragmatically convey (Salmon 1986; Soames 2002), or in terms of the fact that a single proposition may be believed under several ‘propositional guises’ (again, see Salmon 1986), or in terms of a failure to integrate pieces of information stored using distinct mental representations (Braun and Saul 2002).[3] Alternatively, a Russellian might try to make room for (23) and (24) to genuinely differ in truth-value by giving up the idea that sentences which differ only in the substitution of proper names with the same content must express the same proposition (Taschek 1995, Fine 2007).

2.1.7 Fregean propositions

However, these are not the only responses to Frege’s puzzle. Just as the Russellian responded to the problem posed by (15) and (16) by holding that two sentences with the same intension can differ in meaning, one might respond to the problem posed by (23) and (24) by holding that two names which refer to the same object can differ in meaning, thus making room for (23) and (24) to differ in truth-value. This is to endorse a Fregean response to Frege’s puzzle, and to abandon the Russellian approach to semantics (or, at least, to abandon Millian-Russellian semantics).

Fregeans, like Russellians, think of the proposition expressed by a sentence as a structured entity with constituents which are the contents of the expressions making up the sentence. But Fregeans, unlike Russellians, do not think of these propositional constituents as the objects, properties, and relations for which these expressions stand; instead, Fregeans think of the contents as modes of presentation, or ways of thinking about, objects, properties, and relations. The standard term for these modes of presentation is sense. (As with ‘intension,’ ‘sense’ is sometimes also used as a synonym for ‘content.’ But, as with ‘intension,’ it avoids confusion to reserve ‘sense’ for ‘content, as construed by Fregean semantics.’ It is then controversial whether there are such things as senses, and whether they are the contents of expressions.) Frege explained his view of senses with an analogy:

The reference of a proper name is the object itself which we designate by its means; the idea, which we have in that case, is wholly subjective; in between lies the sense, which is indeed no longer subjective like the idea, but is yet not the object itself. The following analogy will perhaps clarify these relationships. Somebody observes the Moon through a telescope. I compare the Moon itself to the reference; it is the object of the observation, mediated by the real image projected by the object glass in the interior of the telescope, and by the retinal image of the observer. The former I compare to the sense, the latter is like the idea or experience. The optical image in the telescope is indeed one-sided and dependent upon the standpoint of observation; but it is still objective, inasmuch as it can be used by several observers. At any rate it could be arranged for several to use it simultaneously. But each one would have his own retinal image. (Frege 1892/1960)

Senses are then objective, in that more than one person can express thoughts with a given sense, and correspond many-one to objects. Thus, just as Russellian propositions correspond many-one to intensions, Fregean propositions correspond many-one to Russellian propositions. This is sometimes expressed by the claim that Fregean contents are more fine-grained than Russellian contents (or intensions).

Indeed, we can think of our three propositional semantic theories, along with the theory of reference, as related by this kind of many-one relation: Fregean propositions correspond many-one to Russellian propositions, which correspond many-one to intensions, which in turn correspond many-one to referents.

The principal argument for Fregean semantics (which also motivated Frege himself) is the neat solution the view offers to Frege’s puzzle: the view says that, in cases like (23) and (24) in which there seems to be a difference in content, there really is a difference in content: the names share a reference, but differ in their sense, because they differ in their mode of presentation of their shared reference.
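
A sketch of how senses secure this result (purely for illustration, a sense is modeled here as a descriptive condition and reference as unique satisfaction of that condition; as discussed below, Fregeans need not accept this descriptivist rendering):

    # Invented facts: one object, two modes of presentation of it.
    satisfier = {
        "the superhero Lois most admires": "kal-el",
        "the reporter Lois least admires": "kal-el",
    }

    def referent(sense):
        # Sense determines reference, many-one.
        return satisfier[sense]

    sense_superman = "the superhero Lois most admires"
    sense_kent     = "the reporter Lois least admires"

    print(referent(sense_superman) == referent(sense_kent))  # True: one referent
    print(sense_superman == sense_kent)                      # False: two senses

    # So the Fregean assigns (21) and (22) different structured contents:
    prop_21 = ("identical", sense_kent, sense_kent)
    prop_22 = ("identical", sense_kent, sense_superman)
    print(prop_21 == prop_22)  # False: room for (23) and (24) to differ in truth-value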

The principal challenge for Fregeanism is the challenge of giving a non-metaphorical explanation of the nature of sense. This is a problem for the Fregean in a way that it is not for the possible worlds semanticist or the Russellian since the Fregean, unlike these two, introduces a new class of entities to serve as meanings of expressions rather than merely appropriating an already recognized sort of entity—like a function, or an object, property, or relation—to serve this purpose.[4]

A first step toward answering this challenge is provided by a criterion for telling when two expressions differ in meaning. In his 1906 paper, ‘A Brief Survey of My Logical Doctrines,’ Frege seems to endorse the following criterion:

Frege’s criterion of difference for senses
Two sentences S and S* differ in sense if and only if some rational agent who understood both could, on reflection, judge that S is true without judging that S* is true.

One worry about this formulation concerns the apparent existence of pairs of sentences, like ‘If Obama exists, then Obama=Obama’ and ‘If McCain exists, McCain=McCain’, which are such that any rational person who understands both will take both to be true. These sentences seem intuitively to differ in content—but this is ruled out by the criterion above. One idea for getting around this problem would be to state our criterion of difference for senses of expressions in terms of differences which result from substituting one expression for another:

Two expressions e and e* differ in sense if and only if there is a pair of sentences, S and S*, which (i) differ only in the substitution of e for e* and (ii) are such that some rational agent who understood both could, on reflection, judge that S is true without judging that S* is true.

This version of the criterion has Frege’s formulation as a special case, since sentences are, of course, expressions; and it solves the problem with obvious truths, since it seems that substitution of sentences of this sort can change the truth value of a propositional attitude ascription. Furthermore, the criterion delivers the wanted result that coreferential names like ‘Superman’ and ‘Clark Kent’ differ in sense, since a rational, reflective agent like Lois Lane could think that (21) is true while withholding assent from (22).

But even if this tells us when names differ in sense, it does not quite tell us what the sense of a name is. Here is one initially plausible way of explaining what the sense of a name is. We know that, whatever the content of a name is, it must be something which determines as a reference the object for which the name stands; and we know that, if Fregeanism is true, this must be something other than the object itself. A natural thought, then, is that the content of a name—its sense—is some condition which the referent of the name uniquely satisfies. Coreferential names can differ in sense because there is always more than one condition which a given object uniquely satisfies. (For example, Superman/Clark Kent uniquely satisfies both the condition of being the superhero Lois most admires, and the newspaperman she least admires.) Given this view, it is natural to then hold that names have the same meanings as definite descriptions—phrases of the form ‘the so-and-so.’ After all, phrases of this sort seem to be designed to pick out the unique object, if any, which satisfies the condition following the ‘the.’ (For more discussion, see descriptions.) This Fregean view of names is called Fregean descriptivism.

However, as Saul Kripke argued in Naming and Necessity, Fregean descriptivism faces some serious problems. Here is one of the arguments he gave against the view, which is called the modal argument. Consider a name like ‘Aristotle,’ and suppose for purposes of exposition that the sense I associate with that name is the sense of the definite description ‘the greatest philosopher of antiquity.’ Now consider the following pair of sentences:

  • (25) Necessarily, if Aristotle exists, then Aristotle is Aristotle.
  • (26) Necessarily, if Aristotle exists, then Aristotle is the greatest philosopher of antiquity.

If Fregean descriptivism is true, and ‘the greatest philosopher of antiquity’ is indeed the description I associate with the name ‘Aristotle,’ then it seems that (25) and (26) must be a pair of sentences which differ only via the substitution of expressions (‘Aristotle’ and ‘the greatest philosopher of antiquity’) with the same content. If this is right, then (25) and (26) must express the same proposition, and have the same truth-value. But this seems to be a mistake; while (25) appears to be true (Aristotle could hardly have failed to be himself), (26) appears to be false (perhaps Aristotle could have been a shoemaker rather than a philosopher; or perhaps if Plato had worked a bit harder, he rather than Aristotle could have been the greatest philosopher of antiquity).
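
The modal argument in miniature (worlds and facts invented; the name is modeled as rigid, the description as non-rigid):

    # At each world, the description picks out whoever satisfies it there;
    # the name picks out the same man at every world (where he exists).
    w_actual = {"greatest philosopher of antiquity": "aristotle"}
    w_other  = {"greatest philosopher of antiquity": "plato"}  # Plato worked harder
    worlds = [w_actual, w_other]

    aristotle    = lambda w: "aristotle"                             # rigid
    the_greatest = lambda w: w["greatest philosopher of antiquity"]  # non-rigid

    # (25): at every world, Aristotle is Aristotle.
    print(all(aristotle(w) == aristotle(w) for w in worlds))     # True
    # (26): at some worlds, Aristotle is not the greatest philosopher.
    print(all(aristotle(w) == the_greatest(w) for w in worlds))  # False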

Fregean descriptivists have given various replies to Kripke’s modal and other arguments; see especially Plantinga (1978), Dummett (1981), and Sosa (2001). For rejoinders to these Fregean replies, see Soames (1998, 2002) and Caplan (2005). For a brief sketch of Kripke’s other arguments against Fregean descriptivism, see names, §2.4.

Kripke’s arguments provide a strong reason for Fregeans to deny Fregean descriptivism, and hold instead that the senses of proper names are not the senses of any definite description associated with those names by speakers. The main problem for this sort of non-descriptive Fregeanism is to explain what the sense of a name might be such that it can determine the reference of the name, if it is not a condition uniquely satisfied by the reference of the name. Non-descriptive Fregean views are defended in McDowell (1977) and Evans (1981); for a version of the view which gives up the idea that the sense of a name determines its reference, see Chalmers (2004, 2006).

Two other problems for Fregean semantics are worth mentioning. The first calls into question the Fregean’s claim to have provided a plausible solution to Frege’s puzzle. The Fregean resolves instances of Frege’s puzzle by positing differences in sense to explain apparent differences in truth-value. But this sort of solution, if pursued generally, seems to lead to the surprising result that no two expressions can have the same content. For consider a pair of expressions which really do seem to have the same content, like ‘catsup’ and ‘ketchup.’ (The example, as well as the argument to follow, is borrowed from Salmon (1990).) Now consider Bob, a confused condiment user, who thinks that the tasty red substance standardly labeled ‘catsup’ is distinct from the tasty red substance standardly labeled ‘ketchup’, and consider the following pair of sentences:

  • (27) Bob believes that catsup is catsup.
  • (28) Bob believes that catsup is ketchup.

(27) and (28) seem quite a bit like (23) and (24): these each seem to be pairs of sentences which differ in truth-value, despite differing only in the substitution of ‘catsup’ and ‘ketchup.’ So, for consistency, it seems that the Fregean should explain the apparent difference in truth-value between (27) and (28) in just the way he explains the apparent difference in truth-value between (23) and (24): by positing a difference in meaning between ‘catsup’ and ‘ketchup.’ But, first, it is hard to see how expressions like ‘catsup’ and ‘ketchup’ could differ in meaning; and, second, it seems that an example of this sort could be generated for any alleged pair of synonymous expressions. (A closely related series of examples is developed in much more detail in Kripke (1979).)

The example of ‘catsup’ and ‘ketchup’ is related to a second worry for the Fregean, which is the reverse of the Fregean’s complaint about Russellian semantics: a plausible case can be made that Frege’s criterion of difference for sense slices contents too finely, and draws distinctions in content where there are none. One way of developing this sort of argument involves (again) propositional attitude ascriptions. It seems plausible that if I utter a sentence like ‘Hammurabi thought that Hesperus was visible only in the morning,’ what I say is true if and only if one of Hammurabi’s thoughts has the same content as does the sentence ‘Hesperus was visible only in the morning,’ as used by me. On a Russellian view, this places a reasonable constraint on the truth of the ascription; it requires only that Hammurabi believe of a certain object that it instantiates the property of being visible in the morning. But on a Fregean view, this sort of view of attitude ascriptions would require that Hammurabi thought of the planet Venus under the same mode of presentation as I attach to the term ‘Hesperus.’ This seems implausible, since it seems that I can truly report Hammurabi’s beliefs without knowing anything about the mode of presentation under which he thought of the planets. (For a recent attempt to develop a Fregean semantics for propositional attitude ascriptions which avoids this sort of problem by integrating aspects of a Russellian semantics, see Chalmers (2011).)

2.2 Non-propositional theories

So, while there are powerful motivations for propositional semantic theories, each theory of this sort also faces some difficult challenges. These challenges have led some to think that the idea behind propositional semantics—the idea that the job of a semantic theory is to systematically pair expressions with the entities which are their meanings—is fundamentally misguided. Wittgenstein was parodying just this idea when he wrote “You say: the point isn’t the word, but its meaning, and you think of the meaning as a thing of the same kind as the word, though also different from the word. Here the word, there the meaning. The money, and the cow that you can buy with it” (Philosophical Investigations, §120).

While Wittgenstein himself did not think that systematic theorizing about semantics was possible, this anti-theoretical stance has not been shared by all subsequent philosophers who share his aversion to “meanings as entities.” This section is intended to provide some idea of how semantics might work in a framework which eschews propositions and their constituents, by explaining the basics of two representative theories within this tradition.

The difference between these theories is best explained by recalling the sort of theory of reference sketched in §2.1.1 above. Recall that propositional theories supplement this theory of reference with an extra layer—with a theory which assigns a content, as well as a reference, to each meaningful expression. One alternative to propositional theories—Davidsonian truth-conditional theories—takes this extra layer to be unnecessary, and holds that a theory of reference is all the semantic theory we need. A second, more radical alternative to propositional theories—Chomskyan internalist theories—holds, not that a theory of reference is too little, but that it is too much; on this view, the meanings of expressions of a natural language neither are, nor determine, a reference.

2.2.1 The Davidsonian program

One of the most important sources of opposition to the idea of “meanings as entities” is Donald Davidson. Davidson thought that semantic theory should take the form of a theory of truth for the language of the sort which Alfred Tarski showed us how to construct. (See Tarski 1936 and the entry on Tarski’s truth definitions.)

For our purposes, it will be convenient to think of a Tarskian truth theory as a variant on the sorts of theories of reference introduced in §2.1.1. Recall that theories of reference of this sort specified, for each proper name in the language, the object to which that name refers, and for every simple predicate in the language, the set of things which satisfy that predicate. If we then consider a sentence which combines a proper name with such a predicate, like

Amelia sings

the theory tells us what it would take for that sentence to be true: it tells us that this sentence is true if and only if the object to which ‘Amelia’ refers is a member of the set of things which satisfy the predicate ‘sings’—i.e., the set of things which sing. So we can think of a full theory of reference for the language as implying, for each sentence of this sort, a T-sentence of the form

“Amelia sings” is T (in the language) if and only if Amelia sings.

Suppose now that we expand our theory of reference so that it implies a T-sentence of this sort for every sentence of the language, rather than just for simple sentences which result from combining a name and a monadic predicate. We would then have a Tarskian truth theory for our language. Tarski’s idea was that such a theory would define a truth predicate (‘T’) for the language; Davidson, by contrast, thought that we find in Tarskian truth theories “the sophisticated and powerful foundation of a competent theory of meaning” (Davidson 1967).
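To fix ideas, here is a minimal sketch, in Python, of a theory of reference of the sort just described, for a toy fragment of English. The fragment, its tiny lexicon, and the function names are all stipulated for illustration; nothing here is part of Tarski’s or Davidson’s own apparatus.

    # A toy theory of reference, in the spirit of the Tarskian truth
    # theories described above. The lexicon is an illustrative stipulation.

    # Reference clauses: each proper name is paired with its referent.
    reference = {"Amelia": "amelia", "Bob": "bob"}

    # Satisfaction clauses: each monadic predicate is paired with the set
    # of things which satisfy it.
    satisfies = {"sings": {"amelia"}, "is hungry": {"bob"}}

    def is_true(name, predicate):
        """A name-predicate sentence is T iff the referent of the name is
        a member of the set of things which satisfy the predicate."""
        return reference[name] in satisfies[predicate]

    # "Amelia sings" is T iff Amelia sings:
    print(is_true("Amelia", "sings"))   # True
    print(is_true("Bob", "sings"))      # False

Someone who knew what this little theory says would know, for each name-predicate sentence of the fragment, the condition under which it is true; Davidson’s claim, applied to the toy case, is that nothing more is needed.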

This claim is puzzling: why should a theory which issues T-sentences, but makes no explicit claims about meaning or content, count as a semantic theory? Davidson’s answer was that knowledge of such a theory would be sufficient to understand the language. If Davidson were right about this, then he would have a plausible argument that a semantic theory could take this form. After all, it is plausible that someone who understands a language knows the meanings of the expressions in the language; so, if knowledge of a Tarskian truth theory for the language were sufficient to understand the language, then knowledge of what that theory says would be sufficient to know all the facts about the meanings of expressions in the language, in which case it seems that the theory would state all the facts about the meanings of expressions in the language.

One advantage of this sort of approach to semantics is its parsimony: it makes no use of the intensions, Russellian propositions, or Fregean senses assigned to expressions by the propositional semantic theories discussed above. Of course, as we saw above, these entities were introduced to provide a satisfactory semantic treatment of various sorts of linguistic constructions, and one might well wonder whether it is possible to provide a Tarskian truth theory of the sort sketched above for a natural language without making use of intensions, Russellian propositions, or Fregean senses. The Davidsonian program obviously requires that we be able to do this, but it is still very much a matter of controversy whether a truth theory of this sort can be constructed. Discussion of this point is beyond the scope of this entry; one good way into this debate is through the debate about whether the Davidsonian program can provide an adequate treatment of propositional attitude ascriptions. See the discussion of the paratactic account and interpreted logical forms in the entry on propositional attitude reports. (For Davidson’s initial treatment of attitude ascriptions, see Davidson (1968); for further discussion see, among other places, Burge 1986; Schiffer 1987; LePore and Loewer 1989; Larson and Ludlow 1993; Soames 2002.)

Let’s set this worry aside, assume that a Tarskian truth theory of the relevant sort can be constructed, and ask whether, given this supposition, such a theory would provide an adequate semantics. There are two fundamental reasons for thinking that it would not, both of which are ultimately due to Foster (1976). I will follow Larson and Segal (1995) in calling these the extension problem and the information problem.

The extension problem stems from the fact that it is not enough for a semantic theory whose theorems are T-sentences to yield true theorems; the T-sentence

“Snow is white” is T in English iff grass is green.

is true, but tells us hardly anything about the meaning of “Snow is white.” Rather, we want a semantic theory to entail, for each sentence of the object language, exactly one interpretive T-sentence: a T-sentence such that the sentence used on its right-hand side gives the meaning of the sentence mentioned on its left-hand side. Our theory must entail at least one such T-sentence for each sentence in the object language because the aim is to give the meaning of each sentence in the language; and it must entail no more than one because, if the theory had as theorems more than one T-sentence for a single sentence S of the object language, an agent who knew all the theorems of the theory would not yet understand S, since such an agent would not know which of the T-sentences which mention S was interpretive.

The problem is that it seems that any theory which implies at least one T-sentence for every sentence of the language will also imply more than one T-sentence for every sentence in the language. For any sentences p,q, if the theory entails a T-sentence

S is T in L iff p,

then, since p is logically equivalent to p & ∼(q & ∼q), the theory will also entail the T-sentence

S is T in L iff p & ∼(q & ∼q),

which, if the first is interpretive, won’t be. But then the theory will entail at least one non-interpretive T-sentence, and someone who knows the theory will not know which of the relevant sentences is interpretive and which not; such a person therefore would not understand the language.

The information problem is that, even if our semantic theory entails all and only interpretive T-sentences, it is not the case that knowledge of what is said by these theorems would suffice for understanding the object language. For, it seems, I can know what is said by a series of interpretive T-sentences without knowing that they are interpretive. I may, for example, know what is said by the interpretive T-sentence

“Londres est jolie” is T in French iff London is pretty

but still not know the meaning of the sentence mentioned on the left-hand side of the T-sentence. The truth of what is said by this sentence, after all, is compatible with the sentence used on the right-hand side being materially equivalent to, but different in meaning from, the sentence mentioned on the left. This seems to indicate that knowing what is said by a truth theory of the relevant kind is not, after all, sufficient for understanding a language. (For replies to these criticisms, see Davidson (1976), Larson and Segal (1995) and Kölbel (2001); for criticism of these replies, see Soames (1992) and Speaks (2006).)

2.2.2 Chomskyan internalist semantics

There is another alternative to propositional semantics which is at least as different from the Davidsonian program as that program is from various propositional views. This view is sometimes called ‘internalist semantics’ by contrast with views which locate the semantic properties of expressions in their relation to elements of the external world. An internalist approach to semantics is associated with the work of Noam Chomsky (see especially Chomsky (2000)).

It is easy to say what this approach to semantics denies. The internalist denies an assumption common to all of the approaches above: the assumption that in giving the content of an expression, we are primarily specifying something about that expression’s relation to things in the world which that expression might be used to say things about. According to the internalist, expressions as such don’t bear any semantically interesting relations to things in the world; names don’t, for example, refer to the objects with which one might take them to be associated. Sentences are not true or false, and do not express propositions which are true or false; the idea that we can understand natural languages using a theory of reference as a guide is mistaken. On this sort of view, we occasionally use sentences to say true or false things about the world, and occasionally use names to refer to things; but this is just one thing we can do with names and sentences, and is not a claim about the meanings of those expressions.

It is more difficult, in a short space, to say what the internalist says the meanings of linguistic expressions are. According to McGilvray (1998), “[t]he basic thesis is that meanings are contents intrinsic to expressions …and that they are defined and individuated by syntax, broadly conceived” (225). This description is sufficient to show the difference between this view of meaning and those sketched above: it is not just that the focus is not on the relationship between certain syntactic items and non-linguistic reality, but that, according to this view, syntactic and semantic properties of expressions are held to be inseparable. McGilvray adds that “[t]his unusual approach to meaning has few contemporary supporters,” which is probably true—though less so now than in 1998, when this was written. For defenses and developments of this view, see McGilvray (1998), Chomsky (2000), and Pietroski (2003, 2005).

2.3 General questions facing semantic theories

As mentioned above, the aim of §2 of this entry is to discuss issues about the form which a semantic theory should take which are at a higher level of abstraction than issues about the correct semantic treatment of particular expression-types. (Also as mentioned above, some of these discussions may be found in the entries on conditionals, descriptions, names, propositional attitude reports, and tense and aspect.) But there are some general issues in semantics which, while more general than questions about how, for example, the semantics of adverbs should go, are largely (though not wholly) orthogonal to the question of whether our semantics should be developed in accordance with a possible worlds, Russellian, Fregean, Davidsonian, or Chomskyan framework. The present subsection introduces a few of these.

2.3.1 How much context-sensitivity?

Above, in §2.1.4, I introduced the idea that some expressions might be context-sensitive, or indexical. Within a propositional semantics, we’d say that these expressions have different contents relative to distinct contexts; but the phenomenon of context-sensitivity is one which any semantic theory must recognize. A very general question which is both highly important and orthogonal to the above distinctions between types of semantic theories is: How much context-sensitivity is there in natural languages?

Virtually everyone recognizes a sort of core group of indexicals, including ‘I’, ‘here’, and ‘now.’ Most also think of demonstratives, like (some uses of) ‘this’ and ‘that’, as indexicals. But whether and how this list should be extended is a matter of controversy. Some popular candidates for inclusion are:

  • devices of quantification
  • gradable adjectives
  • alethic modals, including counterfactual conditionals
  • ‘knows’ and epistemic modals
  • propositional attitude ascriptions
  • ‘good’ and other moral terms

Many philosophers and linguists think that one or more of these categories of expressions are indexicals. Indeed, some think that virtually every natural language expression is context-sensitive.

Questions about context-sensitivity are important, not just for semantics, but for many areas of philosophy. And that is because some of the terms thought to be context-sensitive are terms which play a central role in describing the subject matter of other areas of philosophy.

Perhaps the most prominent example here is the role that the view that ‘knows’ is an indexical has played in recent epistemology. This view is often called ‘contextualism about knowledge’; and in general, the view that some term F is an indexical is often called ‘contextualism about F.’ Contextualism about knowledge is of interest in part because it promises to provide a kind of middle ground between two opposing epistemological positions: the skeptical view that we know hardly anything about our surroundings, and the dogmatist view that we can know that we are not in various Cartesian skeptical scenarios. (So, for example, the dogmatist holds that I can know that I am not a brain in a vat which is, for whatever reason, being made to have the series of experiences subjectively indistinguishable from the experiences I actually have.) Both of these positions can seem unappealing—skepticism because it does seem that I can occasionally know, e.g., that I am sitting down, and dogmatism because it’s hard to see how I can rule out the possibility that I am in a skeptical scenario subjectively indistinguishable from my actual situation.

But the disjunction of these positions can seem, not just unappealing, but inevitable; for the proposition that I am sitting entails that I am not a brain in a vat, and it’s hard to see—presuming that I know that this entailment holds—how I could know the former without thereby being in a position to know the latter. The contextualist about ‘knows’ aims to provide the answer: the extension of ‘knows’ depends on features of the context of utterance. Perhaps—to take one among several possible contextualist views—a pair of a subject and a proposition p will be in the extension of ‘knows’ relative to a context C only if that subject is able to rule out every possibility which is both (i) inconsistent with p and (ii) salient in C. The idea is that ‘I know that I am sitting down’ can be true in a normal setting, simply because the possibility that I am a brain in a vat is not normally salient; but typically ‘I know that I am not a brain in a vat’ will be false, since discussion of skeptical scenarios makes them salient, and (if the skeptical scenario is well-designed) I will lack the evidence needed to rule them out. See for discussion, among many other places, the entry on epistemic contextualism, Cohen (1986), DeRose (1992), and Lewis (1996).
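The contextualist clause just sketched can be made concrete with a small model. In the following Python sketch, possibilities are represented as strings and a subject’s evidence as the set of possibilities it rules out; these representational choices, and the sample possibilities themselves, are stipulations for illustration rather than part of any contextualist’s official theory.

    # A toy model of the contextualist clause for 'knows'. Possibilities
    # are strings; a subject's evidence is the set of possibilities it
    # rules out. All sample contents are stipulated.

    def in_extension_of_knows(ruled_out, inconsistent_with_p, salient_in_C):
        """True iff the subject rules out every possibility which is both
        (i) inconsistent with p and (ii) salient in the context C."""
        return (inconsistent_with_p & salient_in_C) <= ruled_out

    # Possibilities inconsistent with p = 'I am sitting down':
    inconsistent = {"standing", "lying down", "envatted brain"}
    ruled_out = {"standing", "lying down"}       # what my evidence excludes

    ordinary = {"standing", "lying down"}        # everyday context
    skeptical = ordinary | {"envatted brain"}    # skeptical context

    print(in_extension_of_knows(ruled_out, inconsistent, ordinary))   # True
    print(in_extension_of_knows(ruled_out, inconsistent, skeptical))  # False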

Having briefly discussed one important contextualist thesis, let’s return to the general question which faces the semantic theorist, which is: How do we tell when an expression is context-sensitive? Contextualism about knowledge, after all, can hardly get off the ground unless ‘knows’ really is a context-sensitive expression. ‘I’ and ‘here’ wear their context-sensitivity on their sleeves; but ‘knows’ does not. What sort of argument would suffice to show that an expression is an indexical?

Philosophers and linguists disagree about the right answers to this question. The difficulty of coming up with a suitable diagnostic is illustrated by considering one intuitively plausible test, defended in Chapter 7 of Cappelen & LePore (2005). This test says that an expression is an indexical iff it characteristically blocks disquotational reports of what a speaker said in cases in which the original speech and the disquotational report are uttered in contexts which differ with respect to the relevant contextual parameter. (Or, more cautiously, that this test provides evidence that a given expression is, or is not, context-sensitive.)

This test clearly counts obvious indexicals as such. Consider ‘I.’ Suppose that Mary utters

I am hungry.

One sort of disquotational report of Mary’s speech would use the very sentence Mary uttered in the complement of a ‘says’ ascription. So suppose that Sam attempts such a disquotational report of what Mary said, and utters

Mary said that I am hungry.

The report is obviously false; Mary said that Mary is hungry, not that Sam is. The falsity of Sam’s report suggests that ‘I am hungry’ has a different content out of Mary’s mouth than out of Sam’s; and this, in turn, suggests that ‘I’ has a different content when uttered by Mary than when uttered by Sam. Hence, it suggests that ‘I’ is an indexical.

It isn’t just that this test gives the right result in many cases; it’s also that the test fits nicely with the plausible view that an utterance of a sentence of the form ‘A said that S’ in a context C is true iff the content of S in C is the same as the content of what the referent of ‘A’ said (on the relevant occasion).
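That view can itself be given a toy formalization. The sketch below models the Kaplanian idea that the character of a sentence is a function from contexts to contents; representing contents as tuples and contexts as dictionaries of parameters is an illustrative assumption, not Kaplan’s own formalism. It shows why the disquotational report fails: the complement sentence has different contents in Mary’s and Sam’s contexts.

    # A Kaplan-style sketch: the character of a sentence is a function
    # from contexts to contents. Tuples-as-contents is a stipulation.

    def character_I_am_hungry(context):
        """'I' contributes the agent of the context of utterance, so the
        content of 'I am hungry' varies from context to context."""
        return (context["agent"], "is hungry")

    what_mary_said = character_I_am_hungry({"agent": "Mary"})
    report_content = character_I_am_hungry({"agent": "Sam"})

    # Sam's report 'Mary said that I am hungry' is true only if the
    # content of the complement in his context matches what Mary said:
    print(report_content == what_mary_said)   # False: the report is false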

The interesting uses of this test are not uses which show that ‘I’ is an indexical; we already knew that. The interesting use of this test, as Cappelen and LePore argue, is to show that many of the expressions which have been taken to be indexicals—like the ones on the list given above—are not context-sensitive. For we can apparently employ disquotational reports of the above sort to report utterances using quantifiers, gradable adjectives, modals, ‘knows,’ etc. This test thus apparently shows that no expressions beyond the obvious ones—‘I’, ‘here’, ‘now,’ etc.—are genuinely context-sensitive.

But, as Hawthorne (2006) argues, naive applications of this test seem to lead to unacceptable results. Terms for relative directions, like ‘left’, seem to be almost as obviously context-sensitive as ‘I’; the direction picked out by simple uses of ‘left’ depends on the orientation of the speaker of the context. But we can typically use ‘left’ in disquotational ‘says’ reports of the relevant sort. Suppose, for example, that Mary says

The coffee machine is to the left.

Sam can later truly report Mary’s speech by saying

Mary said that the coffee machine was to the left.

despite the fact that Sam’s orientation in the context of the ascription differs from Mary’s orientation in the context of the reported utterance. Hence our test seems to lead to the absurd result that ‘left’ is not context-sensitive.

One interpretation of this puzzling fact is that our test using disquotational ‘says’ ascriptions is a bit harder to apply than one might have thought. For, to apply it, one needs to be sure that the context of the ascription really does differ from the context of the original utterance in the value of the relevant contextual parameter. And in the case of disquotational reports using ‘left’, one might think that examples like the above show that the relevant contextual parameter is sometimes not the orientation of the speaker, but rather the orientation of the subject of the ascription at the time of the relevant utterance.

This is but one criterion for context-sensitivity. But discussion of this criterion brings out the fact that the reliability of an application of a test for context-sensitivity will in general not be independent of the space of views one might take about the contextual parameters to which a given expression is sensitive. For an illuminating discussion of ways in which we might revise tests for context-sensitivity using disquotational reports which are sensitive to the above data, see Cappelen & Hawthorne (2009). For a critical survey of other proposed tests for context-sensitivity, see Cappelen & LePore (2005), Part I.

2.3.2 How many indices?

Above, in §2.1.5, I introduced the idea of an expression determining a reference, relative to a context, with respect to a particular circumstance of evaluation. But I left the notion of a circumstance of evaluation rather underspecified. One might want to know more about what, exactly, these circumstances of evaluation involve—and hence about what sorts of things the reference of an expression can (once we’ve fixed a context) vary with respect to.

One way to focus this question is to stay at the level of sentences, and imagine that we have fixed on a sentence S, with a certain character, and context C. If sentences express propositions relative to contexts, then S will express some proposition P relative to C. If the determination of reference in general depends not just on character and context, but also on circumstance, then we know that P might have different truth-values relative to different circumstances of evaluation. Our question is: exactly what must we specify in order to determine P’s truth-value?

Let’s say that an index is the sort of thing which, for some proposition P, we must at least sometimes specify in order to determine P’s truth-value. Given this usage, we can think of circumstances of evaluation—the things which play the theoretical role outlined in §2.1.5—as made up of indices.

The most uncontroversial candidate for an index is a world, because most advocates of a propositional semantics think that propositions can have different truth-values with respect to different possible worlds. The main question is whether circumstances of evaluation need contain any indices other than a possible world.

The most popular candidate for a second index is a time. The view that propositions can have different truth-values with respect to different times—and hence that we need a time index—is often called ‘temporalism.’ The negation of temporalism is eternalism.

The motivations for temporalism are both metaphysical and semantic. On the metaphysical side, A-theorists about time (see the entry on time) think that corresponding to predicates like ‘is a child’ are A-series properties which a thing can have at one time, and lack at another time. (Hence, on this view, the property corresponding to ‘is a child’ is not a property like being a child in 2014, since that is a property which a thing has permanently if at all, and hence is a B-series rather than A-series property.) But then it looks like the proposition expressed by ‘Violet is a child’—which predicates this A-series property of Violet—should have different truth-values with respect to different times. And this is enough to motivate the view that we should have an index for a time.
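The temporalist picture just described can be illustrated with a small sketch in which the content of ‘Violet is a child’ is modeled as a function from circumstances of evaluation, here world-time pairs, to truth-values. The sample facts about when Violet is a child are invented for the example.

    # A toy temporalist content for 'Violet is a child': a function from
    # circumstances of evaluation, modeled as (world, time) pairs, to
    # truth-values. The dates are invented for the example.

    child_years = {"w_actual": (2008, 2026)}  # when Violet is a child, per world

    def violet_is_a_child(world, time):
        """True at a circumstance iff Violet is a child there and then."""
        start, end = child_years[world]
        return start <= time < end

    # Same world, different time index, different truth-value:
    print(violet_is_a_child("w_actual", 2014))   # True
    print(violet_is_a_child("w_actual", 2054))   # False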

On the semantic side, as Kaplan (1989) notes, friends of the idea that tenses are best modeled as operators have good reason to include a time index in circumstances of evaluation. After all, operators operate on contents, so if there are temporal operators, they will only be able to affect truth-values if those contents can have different truth-values with respect to different times.

A central challenge for the view that propositions can change truth-value over time is whether the proponent of this view can make sense of retention of propositional attitudes over time. For suppose that I believe in 2014 that Violet is a child. Intuitively, I might hold fixed all of my beliefs about Violet for the next 40 years, without its being true, in 2054, that I have the obviously false belief that Violet is still a child. But the temporalist, who thinks of the proposition that Violet is a child as something which incorporates no reference to a time and changes truth-value over time, seems stuck with this result. Problems of this sort for temporalism are developed in Richard (1981); for a response see Sullivan (2014).

Motivations for eternalism are also both metaphysical and semantic. Those attracted to B-theories of time will take propositions to have their truth-values eternally, which makes inclusion of a time index superfluous. And those who think that tenses are best modeled in terms of quantification over times rather than using tense operators will, similarly, see no use for a time index. For a defense of the quantificational over the operator analysis of tense, see King (2003).

Is there a case to be made for including any indices other than a world and a time? There is; and this has spurred much of the recent interest in relativist semantic theories. Relativist semantic theories hold that our indices should include not just a world and (perhaps) a time, but also a context of assessment. Just as propositions can have different truth values with respect to different worlds, so, on this view, they can vary in their truth depending upon features of the conversational setting in which they are considered. (Though this way of putting things assumes that the relativist should be a ‘truth relativist’ rather than a ‘content relativist’; I ignore this in what follows. See for discussion Weatherson and Egan (2011), § 2.3.)

The motivations for this sort of view can be illustrated by a type of example whose importance is emphasized in Egan et al. (2005). Suppose that, at the beginning of a murder investigation, I say

The murderer might have been on campus at midnight.

It looks like the proposition expressed by this sentence will be true, roughly, if we don’t know anything which rules out the murderer having been on campus at midnight. But now suppose that more information comes in, some of which rules out the murderer having been on campus at midnight. At this point, it seems, I could truly say

What I said was false—the murderer couldn’t have been on campus at midnight.

If both utterances are true, the natural diagnosis is that the proposition I originally expressed is true as assessed from my earlier, less informed context, but false as assessed from my later one. Holding the world and time fixed, the proposition varies in truth-value with the context of assessment, which is just what the inclusion of a context of assessment among our indices predicts.
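A toy version of the assessment-relative clause for epistemic ‘might’ may help. In the sketch below, ‘might p’ is evaluated against the information available at the context of assessment; the representation of evidence as a set of ruled-out possibilities, and the sample evidence sets, are stipulations for illustration.

    # A toy assessment-relative clause for epistemic 'might': the same
    # 'might'-proposition is evaluated against the information available
    # at the context of assessment. The evidence sets are invented.

    def might_true(compatible_with, assessment_evidence):
        """'Might p' is true as assessed from a context iff p is not
        ruled out by the evidence available at that context."""
        return compatible_with(assessment_evidence)

    def murderer_on_campus(evidence):
        return "murderer was off campus at midnight" not in evidence

    early_evidence = set()                                    # nothing ruled out yet
    later_evidence = {"murderer was off campus at midnight"}  # new information

    print(might_true(murderer_on_campus, early_evidence))  # True, as first assessed
    print(might_true(murderer_on_campus, later_evidence))  # False, as later assessed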

1. Basics

The notions of word and word meaning are problematic to pin down, and this is reflected in the difficulties one encounters in defining the basic terminology of lexical semantics. In part, this is because the words ‘word’ and ‘meaning’ themselves have multiple meanings, depending on the context and the purpose they are used for (Matthews 1991). For example, in ordinary parlance ‘word’ is ambiguous between lexeme (as in “Color and colour are spellings of the same word”) and lexical unit (as in “there are thirteen words in the tongue-twister How much wood would a woodchuck chuck if a woodchuck could chuck wood?”). Let us then elucidate the notion of word in a little more detail, and specify the key questions that will guide our discussion of word meaning in the rest of the entry.

1.1 The Notion of Word

The notion of word can be defined in two fundamental ways. On one side, we have linguistic definitions, which attempt to characterize the notion of word by illustrating the explanatory role words play or are expected to play in the context of a formal grammar. These approaches often end up splitting the notion of word into a number of more fine-grained and theoretically manageable notions, but still tend to regard ‘word’ as a term that zeroes in on a scientifically respectable concept (e.g., Di Sciullo & Williams 1987). For example, words are the primary locus of stress and tone assignment, the basic domain of morphological conditions on affixation, cliticization, compounding, and the theme of phonological and morphological processes of assimilation, vowel shift, metathesis, and reduplication (Bromberger 2011). On the other side, we have metaphysical definitions, which attempt to elucidate the notion of word by describing the metaphysical type of words. This implies answering such questions as “what are words?”, “how should words be individuated?”, and “under what conditions do two utterances count as utterances of the same word?”. For example, Kaplan (1990, 2011) has proposed to replace the orthodox type-token account of the relation between words and word occurrences with a “common currency” view on which words relate to their occurrences as continuants relate to stages in four-dimensionalist metaphysics (see the entries on types and tokens and identity over time). For alternative views, see McCulloch (1991), Cappelen (1999), Alward (2005), and Hawthorne & Lepore (2011).

For the purposes of this entry, we can proceed as follows. Every natural language has a lexicon organized into lexical entries, which contain information about lexemes. These are the smallest linguistic expressions that are conventionally associated with a non-compositional meaning and can be uttered in isolation to convey semantic content. Lexemes relate to words just like phonemes relate to phones in phonological theory. To understand the parallelism, think of the variations in the place of articulation of the phoneme /n/, which is pronounced as the voiced bilabial nasal [m] in “ten bags” and as the voiced velar nasal [ŋ] in “ten gates”. Just as phonemes are abstract representations of sets of phones (each defining one way the phoneme can be instantiated in speech), lexemes can be defined as abstract representations of sets of words (each defining one way the lexeme can be instantiated in sentences). Thus, ‘do’, ‘does’, ‘done’ and ‘doing’ are morphologically and graphically marked realizations of the same abstract lexeme do. To wrap everything into a single formula, we can say that the lexical entries listed in a lexicon set the parameters defining the instantiation potential of lexemes as words in utterances and inscriptions (Murphy 2010). In what follows, we shall rely on an intuitive notion of word. However, the reader should bear in mind that, unless otherwise indicated, our talk of ‘word meaning’ should be understood as talk of ‘lexeme meaning’, in the above sense.
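The relation between lexemes and words just described can be pictured as a simple data structure: a lexical entry pairing an abstract lexeme with the set of word forms that realize it, much as a phoneme is realized by different phones. The following Python sketch is purely stipulative; the shape of the entry is invented for illustration.

    # A stipulative sketch of the lexeme/word relation: a lexical entry
    # pairs an abstract lexeme with the word forms that realize it.

    lexicon = {
        "DO": {
            "category": "verb",
            "forms": {"do", "does", "done", "doing"},
        }
    }

    def realizes(form, lexeme):
        """True iff the word form is a realization of the lexeme."""
        return form in lexicon[lexeme]["forms"]

    print(realizes("does", "DO"))   # True
    print(realizes("dog", "DO"))    # False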

1.2 Theories of Word Meaning

As with general theories of meaning (see the entry on theories of meaning), two kinds of theory of word meaning can be distinguished. The first type of theory, which we can label a semantic theory of word meaning, is interested in clarifying what meaning-determining information is encoded by the lexical items of a natural language. A framework establishing that the word ‘bachelor’ encodes the lexical concept adult unmarried male would be an example of a semantic theory of word meaning. The second type of theory, which we can label a foundational theory of word meaning, is interested in singling out the facts whereby lexical expressions come to have the semantic properties they have for their users. A framework investigating the dynamics of linguistic change and social coordination in virtue of which the word ‘bachelor’ has been assigned the function of expressing the lexical concept adult unmarried male would be an example of a foundational theory of word meaning. Obviously, the endorsement of a given semantic theory is bound to place important constraints on the claims one might propose about the foundational attributes of word meaning, and vice versa. Semantic and foundational concerns are often interdependent, and it is difficult to find theories of word meaning which are either purely semantic or purely foundational. For example, Ludlow (2014) establishes a strong correlation between the underdetermination of lexical concepts (a semantic matter) and the processes of linguistic entrenchment whereby discourse partners converge on the assignment of shared meanings to lexical expressions (a foundational matter). However, semantic and foundational theories remain in principle different and designed to answer partly non-overlapping sets of questions. Our focus will be on semantic theories of word meaning, i.e., on theories that try to provide an answer to such questions as “what is the nature of word meaning?”, “what do we know when we know the meaning of a word?”, and “what (kind of) information must an agent associate with the words of a language L in order to be a competent user of the lexicon of L?”. However, we will engage in foundational considerations whenever necessary to clarify how a given theoretical framework addresses issues in the domain of a semantic theory.

2. Historical Background

The study of word meaning acquired the status of a mature academic enterprise in the 19th century, with the birth of historical-philological semantics (Section 2.2). Yet, matters related to word meaning had been the subject of much debate in earlier times. Word meaning constituted a prominent topic of inquiry in three classical traditions: speculative etymology, rhetoric, and lexicography (Meier-Oeser 2011; Geeraerts 2013).

2.1 Classical Traditions

To understand what speculative etymology amounts to, it is useful to refer to the Cratylus (383a-d), where Plato presents his well-known naturalist thesis about word meaning: natural kind terms express the essence of the objects they name and words are appropriate to their referents insofar as they describe what their referents are (see the entry on Plato’s Cratylus). The task of speculative etymology is to break down the surface features of word forms and recover the descriptive (often phonoiconic) rationale that motivated their genesis. For example, the Greek word ‘anthrôpos’ can be broken down into anathrôn ha opôpe, which translates as “one who reflects on what he has seen”: the word used to denote humans reflects their being the only animal species which possesses the combination of vision and intelligence. More in Malkiel (1993), Fumaroli (1999), and Del Bello (2007).

The primary aim of the rhetorical tradition was the study of figures of speech. Some of these affect structural variables such as the linear order of the words occurring in a sentence (e.g., parallelism, climax, anastrophe); others are semantic and arise upon using lexical expressions in a way not intended by their normal meaning (e.g., metaphor, metonymy, synecdoche). Although originated for stylistic and literary purposes, the identification of regular patterns in the figurative use of words initiated by classical rhetoric provided a first organized framework to investigate the semantic flexibility of words, and stimulated an interest in our ability to use lexical expressions beyond the boundaries of their literal meaning. More in Kennedy (1994), Herrick (2004), and Toye (2013).

Finally, lexicography and the practice of writing dictionaries played an important role in systematizing the descriptive data on which later inquiry would rely to illuminate the relationship between words and their meaning. Putnam’s (1970) claim that it was the phenomenon of writing (and needing) dictionaries that gave rise to the idea of a semantic theory is probably an overstatement. But lexicography certainly had an impact on the development of modern theories of word meaning. The practice of separating dictionary entries via lemmatization and defining them through a combination of semantically simpler elements provided a stylistic and methodological paradigm for much subsequent research on lexical phenomena, such as decompositional theories of word meaning. More in Béjoint (2000), Jackson (2002), and Hanks (2013).

2.2 Historical-Philological Semantics

Historical-philological semantics incorporated elements from all the above classical traditions and dominated the linguistic scene roughly from 1870 to 1930, with the work of scholars such as Michel Bréal, Hermann Paul, and Arsène Darmesteter (Gordon 1982). In particular, it absorbed from speculative etymology an interest in the conceptual decomposition of word meaning, it acquired from rhetoric a toolkit for the classification of lexical phenomena, and it assimilated from lexicography and textual philology a basis of descriptive data for lexical analysis (Geeraerts 2013). On the methodological side, the key features of the approach to word meaning introduced by historical-philological semantics can be summarized as follows. First, it had a diachronic and contextualist orientation: that is, it was primarily concerned with the historical evolution of word meaning rather than with word meaning statically understood, and attributed major importance to the pragmatic flexibility of word meaning (e.g., witness Paul’s (1920 [1880]) distinction between usuelle Bedeutung and okkasionelle Bedeutung, or Bréal’s (1924 [1897]) account of polysemy as a byproduct of semantic change). Second, it considered word meaning a psychological phenomenon: it assumed that the semantic properties of words should be defined in mentalistic terms (i.e., words signify “concepts” or “ideas” in a broad sense), and that the dynamics of sense modulation, extension, and contraction that underlie lexical change correspond to patterns of conceptual activity in the human mind. Interestingly, while the rhetorical tradition had looked at tropes as devices whose investigation was motivated by stylistic concerns, historical-philological semantics regarded the psychological mechanisms underlying the production and the comprehension of figures of speech as part of the ordinary life of languages, and as engines of the evolution of all aspects of lexical systems (Nerlich 1992).

The contribution made by historical-philological semantics to the study of lexical phenomena had a long-lasting influence. First, with its emphasis on the principles of semantic change, historical-philological semantics was the first systematic framework to focus on the dynamic nature of word meaning, and to see the contextual flexibility of words as the primary phenomenon that a lexical semantic theory should aim to account for (Nerlich & Clarke 1996, 2007). This feature of historical-philological semantics makes it a forerunner of the stress on context-sensitivity encouraged by many subsequent approaches to word meaning in philosophy (Section 3) and linguistics (Section 4). Second, the psychological conception of word meaning fostered by historical-philological semantics added to the agenda of linguistic research the question of how word meaning relates to cognition at large (Geeraerts 2010). If word meaning is essentially a psychological phenomenon, how can we characterize it? What is the dividing line separating the aspects of our mental life that are relevant to the knowledge of lexical meaning from those that are not? As we shall see, this question will constitute a central concern for cognitive theories of word meaning (Section 5).

3. Philosophy of Language

In this section we shall review some semantic and metasemantic theories in analytic philosophy that bear on how lexical meaning should be conceived and described. We shall follow a roughly chronological order. Some of these theories, such as Carnap’s theory of meaning postulates and Putnam’s theory of stereotypes, have a strong focus on lexical meaning, whereas others, such as Montague semantics, regard it as a side issue. However, such negative views form an equally integral part of the philosophical debate on word meaning.

3.1 Early Contemporary Views

By taking the connection of thoughts and truth as the basic issue of semantics and regarding sentences as “the proper means of expression for a thought” (Frege 1979a [1897]), Frege paved the way for the 20th century priority of sentential meaning over lexical meaning: the semantic properties of subsentential expressions such as individual words were regarded as derivative, and identified with their contribution to sentential meaning. Sentential meaning was in turn identified with truth conditions, most explicitly in Wittgenstein’s Tractatus logico-philosophicus (1922). However, Frege never lost interest in the “building blocks of thoughts” (Frege 1979b [1914]), i.e., in the semantic properties of subsentential expressions. Indeed, his theory of sense and reference for names and predicates may be counted as the inaugural contribution to lexical semantics within the analytic tradition (see the entry on Gottlob Frege). It should be noted that Frege did not attribute semantic properties to lexical units as such, but to what he regarded as a sentence’s logical constituents: e.g., not to the word ‘dog’ but to the predicate ‘is a dog’. In later work this distinction was obliterated and Frege’s semantic notions came to be applied to lexical units.

Possibly because of lack of clarity affecting the notion of sense, and surely because of Russell’s (1905) authoritative criticism of Fregean semantics, word meaning disappeared from the philosophical scene during the 1920s and 1930s. In Wittgenstein’s Tractatus the “real” lexical units, i.e., the constituents of a completely analyzed sentence, are just names, whose semantic properties are exhausted by their reference. In Tarski’s (1933) work on formal languages, which was taken as definitional of the very field of semantics for some time, lexical units are semantically categorized into different classes (individual constants, predicative constants, functional constants) depending on the logical type of their reference, i.e., according to whether they designate individuals in a domain of interpretation, classes of individuals (or of n-tuples of individuals), or functions defined over the domain. However, Tarski made no attempt nor felt any need to represent semantic differences among expressions belonging to the same logical type (e.g., between one-place predicates such as ‘dog’ and ‘run’, or between two-place predicates such as ‘love’ and ‘left of’). See the entry on Alfred Tarski.

Quine (1943) and Church (1951) rehabilitated Frege’s distinction of sense and reference. Non-designating words such as ‘Pegasus’ cannot be meaningless: it is precisely the meaning of ‘Pegasus’ that allows speakers to establish that the word lacks reference. Moreover, as Frege (1892) had argued, true factual identities such as “Morning Star = Evening Star” do not state synonymies; if they did, any competent speaker of the language would be aware of their truth. Along these lines, Carnap (1947) proposed a new formulation of the sense/reference dichotomy, which was translated into the distinction between intension and extension. The notion of intension was intended to be an explicatum of Frege’s “obscure” notion of sense: two expressions have the same intension if and only if they have the same extension in every possible world or, in Carnap’s terminology, in every state description (i.e., in every maximal consistent set of atomic sentences and negations of atomic sentences). Thus, ‘round’ and ‘spherical’ have the same intension (i.e., they express the same function from possible worlds to extensions) because they apply to the same objects in every possible world. Carnap later suggested that intensions could be regarded as the content of lexical semantic competence: to know the meaning of a word is to know its intension, “the general conditions which an object must fulfill in order to be denoted by [that] word” (Carnap 1955). However, such general conditions were not spelled out by Carnap (1947). Consequently, his system did not account, any more than Tarski’s, for semantic differences and relations among words belonging to the same semantic category: there were possible worlds in which the same individual a could be both a married man and a bachelor, as no constraints were placed on either word’s intension. One consequence, as Quine (1951) pointed out, was that Carnap’s system did not capture our intuitive notion of analyticity, on which “Bachelors are unmarried” is not just true but true in every possible world.

To remedy what he agreed was an unsatisfactory feature of his system, Carnap (1952) introduced meaning postulates, i.e., stipulations on the relations among the extensions of lexical items. For example, the meaning postulate

  • (MP) \(\forall x (\mbox{bachelor}(x) \supset \mathord{\sim}\mbox{married}(x))\)

stipulates that any individual that is in the extension of ‘bachelor’ is not in the extension of ‘married’. Meaning postulates can be seen either as restrictions on possible worlds or as relativizing analyticity to possible worlds. On the former option we shall say that “If Paul is a bachelor then Paul is unmarried” holds in every admissible possible world, while on the latter we shall say that it holds in every possible world in which (MP) holds. Carnap regarded the two options as equivalent; nowadays, the former is usually preferred. Carnap (1952) also thought that meaning postulates expressed the semanticist’s “intentions” with respect to the meanings of the descriptive constants, which may or may not reflect linguistic usage; again, today postulates are usually understood as expressing semantic relations (synonymy, analytic entailment, etc.) among lexical items as currently used by competent speakers.
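To see how meaning postulates interact with intensions, consider the following sketch, in which intensions are modeled as functions from possible worlds to extensions and (MP) is read, on the first option above, as a filter on admissible worlds. The three-world model and its inhabitants are invented for illustration.

    # A toy model of Carnapian intensions and the meaning postulate (MP).
    # Intensions map possible worlds to extensions; (MP) is read as a
    # filter on admissible worlds. The three-world model is invented.

    worlds = {
        "w1": {"bachelor": {"al"},       "married": {"bo"}},
        "w2": {"bachelor": {"al", "bo"}, "married": set()},
        "w3": {"bachelor": {"al"},       "married": {"al", "bo"}},  # violates (MP)
    }

    def extension(word, world):
        """Applying a word's intension to a world yields its extension there."""
        return worlds[world][word]

    def satisfies_MP(world):
        """(MP): nothing is in the extension of both 'bachelor' and 'married'."""
        return not (extension("bachelor", world) & extension("married", world))

    admissible = sorted(w for w in worlds if satisfies_MP(w))
    print(admissible)   # ['w1', 'w2']: w3 is ruled out by the postulate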

In the late 1960s and early 1970s, Montague (1974) and other philosophers and linguists (Kaplan, Kamp, Partee, and D. Lewis among others) set out to apply to the analysis of natural language the notions and techniques that had been introduced by Tarski and Carnap and further developed in Kripke’s possible worlds semantics (see the entry on Montague semantics). Montague semantics can be represented as aiming to capture the inferential structure of a natural language: every inference that a competent speaker would regard as valid should be derivable in the theory. Some such inferences depend for their validity on syntactic structure and on the logical properties of logical words, like the inference from “Every man is mortal and Socrates is a man” to “Socrates is mortal”. Other inferences depend on properties of non-logical words that are usually regarded as semantic, like the inference from “Kim is pregnant” to “Kim is not a man”. In Montague semantics, such inferences are taken care of by supplementing the theory with suitable Carnapian meaning postulates. Yet, some followers of Montague regarded such additions as spurious: the aims of semantics, they said, should be distinguished from those of lexicography. The description of the meaning of non-logical words requires considerable world knowledge: for example, the inference from “Kim is pregnant” to “Kim is not a man” is based on a “biological” rather than on a “logical” generalization. Hence, we should not expect a semantic theory to furnish an account of how any two expressions belonging to the same syntactic category differ in meaning (Thomason 1974). From such a viewpoint, Montague semantics would not differ significantly from Tarskian semantics in its account of lexical meaning. But not all later work within Montague’s program shared such a skepticism about representing aspects of lexical meaning within a semantic theory, using either componential analysis (Dowty 1979) or meaning postulates (Chierchia & McConnell-Ginet 2000).

For those who believe that meaning postulates can exhaust lexical meaning, the issue arises of how to choose them, i.e., of how—and whether—to delimit the set of meaning-relevant truths with respect to the set of all true statements in which a given word occurs. As we just saw, Carnap himself thought that the choice could only be the expression of the semanticist’s intentions. However, we seem to share intuitions of analyticity, i.e., we seem to regard some, but not all sentences of a natural language as true by virtue of the meaning of the occurring words. Such intuitions are taken to reflect objective semantic properties of the language, that the semanticist should describe rather than impose at will. Quine (1951) did not challenge the existence of such intuitions, but he argued that they could not be cashed out in the form of a scientifically respectable criterion separating analytic truths (“Bachelors are unmarried”) from synthetic truths (“Aldo’s uncle is a bachelor”), whose truth does not depend on meaning alone. Though Quine’s arguments were often criticized (for recent criticisms, see Williamson 2007), the analytic/synthetic distinction was never fully vindicated, at least within philosophy (for an exception, see Russell 2008). Hence, it was widely believed that lexical meaning could not be adequately described by meaning postulates. Fodor and Lepore (1992) argued that this left semantics with two options: lexical meanings were either atomic (i.e., they could not be specified by descriptions involving other meanings) or they were holistic, i.e., only the set of all true sentences of the language could count as fixing them.

Neither alternative looked promising. Holism incurred objections connected with the acquisition and the understanding of language: how could individual words be acquired by children, if grasping their meaning involved, somehow, semantic competence on the whole language? And how could individual sentences be understood if the information required to understand them exceeded the capacity of human working memory? (For an influential criticism of several varieties of holism, see Dummett 1991; for a review, Pagin 2006). Atomism, in turn, ran up against the strong intuition that (at least some) relations among words are part of a language’s semantics: it is because of what ‘bachelor’ means that it doesn’t make sense to suppose we could discover that some bachelors are married. Fodor (1998) countered this objection by reinterpreting allegedly semantic relations as metaphysically necessary connections among extensions of words. However, sentences that are usually regarded as analytic, such as “Bachelors are unmarried”, are not easily seen as just metaphysically necessary truths like “Water is H2O”. If water is H2O, then its metaphysical essence consists in being H2O (whether we know it or not); but there is no such thing as a metaphysical essence that all bachelors share—an essence that could be hidden to us, even though we use the word ‘bachelor’ competently. On the contrary, on acquiring the word ‘bachelor’ we acquire the belief that bachelors are unmarried (Quine 1986); by contrast, many speakers who have ‘water’ in their lexical repertoire do not know that water is H2O. The difficulties of atomism and holism opened the way to vindications of molecularism (e.g., Perry 1994; Marconi 1997), the view on which only some relations among words matter for acquisition and understanding (see the entry on meaning holism).

While mainstream formal semantics went with Carnap and Montague, supplementing the Tarskian apparatus with the possible worlds machinery and defining meanings as intensions, Davidson (1967, 1984) put forth an alternative suggestion. Tarski had shown how to provide a definition of the truth predicate for a (formal) language L: such a definition is materially adequate (i.e., it is a definition of truth, rather than of some other property of sentences of L) if and only if it entails every biconditional of the form

  • (T) S is true in L iff p,

where S is a sentence of L and p is its translation into the metalanguage of L in which the definition is formulated. Thus, Tarski’s account of truth presupposes that the semantics of both L and its metalanguage is fixed (otherwise it would be undetermined whether S translates into p). On Tarski’s view, each biconditional of form (T) counts as a “partial definition” of the truth predicate for sentences of L (see the entry on Tarski’s truth definitions). By contrast, Davidson suggested that if one took the notion of truth for granted, then T-biconditionals could be read as collectively constituting a theory of meaning for L, i.e., as stating truth conditions for the sentences of L. For example,

  • (W) “If the weather is bad then Sharon is sad” is true in English iff either the weather is not bad or Sharon is sad

states the truth conditions of the English sentence “If the weather is bad then Sharon is sad”. Of course, (W) is intelligible only if one understands the language in which it is phrased, including the predicate ‘true in English’. Davidson thought that the recursive machinery of Tarski’s definition of truth could be transferred to the suggested semantic reading, with extensions to take care of the forms of natural language composition that Tarski had neglected because they had no analogue in the formal languages he was dealing with. Unfortunately, few such extensions were ever spelled out by Davidson or his followers. Moreover, it is difficult to see how, giving up possible worlds and intensions in favor of a purely extensional theory, the Davidsonian program could account for the semantics of propositional attitude ascriptions of the form “A believes (hopes, imagines, etc.) that p”.

Construed as theorems of a semantic theory, T-biconditionals were often accused of being uninformative (Putnam 1975; Dummett 1976): to understand them, one has to already possess the information they are supposed to provide. This is particularly striking in the case of lexical axioms such as the following:

  • (V1) Val(x, ‘man’) iff x is a man;
  • (V2) Val(\(\langle x,y\rangle\), ‘knows’) iff x knows y.

(To be read, respectively, as “the predicate ‘man’ applies to x if and only if x is a man” and “the predicate ‘know’ applies to the pair \(\langle x, y\rangle\) if and only if x knows y”). Here it is apparent that in order to understand (V1) one must know what ‘man’ means, which is just the information that (V1) is supposed to convey (as the theory, being purely extensional, identifies meaning with reference). Some Davidsonians, though admitting that statements such as (V1) and (V2) are in a sense “uninformative”, insist that what (V1) and (V2) state is no less “substantive” (Larson & Segal 1995). To prove their point, they appeal to non-homophonic versions of lexical axioms, i.e., to the axioms of a semantic theory for a language that does not coincide with the (meta)language in which the theory itself is phrased. Such would be, e.g.,

  • (V3) Val(x, ‘man’) si et seulement si x est un homme.

(V3), they argue, is clearly substantive, yet what it says is exactly what (V1) says, namely, that the word ‘man’ applies to a certain category of objects. Therefore, if (V3) is substantive, so is (V1). But this is beside the point. The issue is not whether (V1) expresses a proposition; it clearly does, and it is, in this sense, “substantive”. But what is relevant here is informative power: to one who understands the metalanguage of (V3), i.e., French, (V3) may communicate new information, whereas there is no circumstance in which (V1) would communicate new information to one who understands English.

3.2 Grounding and Lexical Competence

In the mid-1970s, Dummett raised the issue of the proper place of lexical meaning in a semantic theory. If the job of a theory of meaning is to make the content of semantic competence explicit—so that one could acquire semantic competence in a language L by learning an adequate theory of meaning for L—then the theory ought to reflect a competent speaker’s knowledge of circumstances in which she would assert a sentence of L, such as “The horse is in the barn”, as distinct from circumstances in which she would assert “The cat is on the mat”. This, in turn, appears to require that the theory yields explicit information about the use of ‘horse’, ‘barn’, etc., or, in other words, that it includes information which goes beyond the logical type of lexical units. Dummett identified such information with a word’s Fregean sense. However, he did not specify the format in which word senses should be expressed in a semantic theory, except for words that could be defined (e.g., ‘aunt’ = “sister of a parent”): in such cases, the definiens specifies what a speaker must understand in order to understand the word (Dummett 1991). But of course, not all words are of this kind. For other words, the theory should specify what it is for a speaker to know them, though we are not told how exactly this should be done. Similarly, Grandy (1974) pointed out that by identifying the meaning of a word such as ‘wise’ as a function from possible worlds to the sets of wise people in those worlds, Montague semantics only specifies a formal structure and eludes the question of whether there is some possible description for the functions which are claimed to be the meanings of words. Lacking such descriptions, possible worlds semantics is not really a theory of meaning but a theory of logical form or logical validity. Again, aside from suggesting that “one would like the functions to be given in terms of computation procedures, in some sense”, Grandy had little to say about the form of lexical descriptions.

In a similar vein, Partee (1981) argued that Montague semantics, like every compositional or structural semantics, does not uniquely fix the intensional interpretation of words. The addition of meaning postulates does rule out some interpretations (e.g., interpretations on which the extension of ‘bachelor’ and the extension of ‘married’ may intersect in some possible world). However, it does not reduce them to the unique, “intended” or, in Montague’s words, “actual” interpretation (Montague 1974). Hence, standard model-theoretic semantics does not capture the whole content of a speaker’s semantic competence, but only its structural aspects. Fixing “the actual interpretation function” requires more than language-to-language connections as encoded by, e.g., meaning postulates: it requires some “language-to-world grounding”. Arguments to the same effect were developed by Bonomi (1983) and Harnad (1990). In particular, Harnad had in mind the simulation of human semantic competence in artificial systems: he suggested that symbol grounding could be implemented, in part, by “feature detectors” picking out “invariant features of objects and event categories from their sensory projections” (for recent developments see, e.g., Steels & Hild 2012). Such a cognitively oriented conception of grounding differs from Partee’s Putnam-inspired view, on which the semantic grounding of lexical items depends on the speakers’ objective interactions with the external world in addition to their narrow psychological properties.
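For concreteness, a meaning postulate of the kind at issue is a constraint on admissible interpretations, for instance (in the familiar Carnapian format):

\[
\Box\, \forall x\, (\textrm{bachelor}(x) \rightarrow \neg \textrm{married}(x))
\]

The postulate excludes every interpretation in which the extensions of ‘bachelor’ and ‘married’ intersect in some world; but infinitely many unintended interpretations of ‘bachelor’ satisfy it all the same, which is precisely the point of Partee’s demand for language-to-world grounding.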

A resolutely cognitive approach characterizes Marconi’s (1997) account of lexical semantic competence. In his view, lexical competence has two aspects: an inferential aspect, underlying performances such as semantically based inference and the command of synonymy, hyponymy and other semantic relations; and a referential aspect, which underlies performances such as naming (e.g., calling a horse ‘horse’) and application (e.g., answering the question “Are there any spoons in the drawer?”). Language users typically possess both aspects of lexical competence, though in different degrees for different words: a zoologist’s inferential competence on ‘manatee’ is usually richer than a layman’s, though a layman who spent her life among manatees may be more competent, referentially, than a “bookish” scientist. However, the two aspects are independent of each other, and neuropsychological evidence appears to show that they can be dissociated: there are patients whose referential competence is impaired or lost while their inferential competence is intact, and vice versa (see Section 5.3). Being a theory of individual competence, Marconi’s account does not deal directly with lexical meanings in a public language: communication depends both on the uniformity of cognitive interactions with the external world and on communal norms concerning the use of language, together with speakers’ deferential attitude toward semantic authorities.

3.3 The Externalist Turn

Since the early 1970s, views on lexical meaning have been revolutionized by semantic externalism. Initially, externalism was limited to proper names and natural kind words such as ‘gold’ or ‘lemon’. In slightly different ways, both Kripke (1972) and Putnam (1970, 1975) argued that the reference of such words was not determined by any description that a competent speaker associated with the word; more generally, and contrary to what Frege may have thought, it was not determined by any cognitive content associated with it in a speaker’s mind (for arguments to that effect, see the entry on names). Instead, reference is determined, at least in part, by objective (“causal”) relations between a speaker and the external world. For example, a speaker refers to Aristotle when she utters the sentence “Aristotle was a great warrior”—so that her assertion expresses a false proposition about Aristotle, not a true proposition about some great warrior she may “have in mind”—thanks to her connection with Aristotle himself. In this case, the connection is constituted by a historical chain of speakers going back to the initial users of the name ‘Aristotle’, or its Greek equivalent, in baptism-like circumstances. To belong to the chain, speakers (including present-day speakers) are not required to possess any precise knowledge of Aristotle’s life and deeds; they are, however, required to intend to use the name as it is used by the speakers they are picking up the name from, i.e., to refer to the individual those speakers intend to refer to.

In the case of most natural kind names, it may be argued, baptisms are hard to identify or even conjecture. In Putnam’s view, for such words reference is determined by speakers’ causal interaction with portions of matter or biological individuals in their environment: ‘water’, for example, refers to this liquid stuff, stuff that is normally found in our rivers, lakes, etc. The indexical component (this liquid, our rivers) is crucial to reference determination: it wouldn’t do to identify the referent of ‘water’ by way of some description (“liquid, transparent, quenches thirst, boils at 100°C, etc.”), for something might fit the description yet fail to be water, as in Putnam’s famous Twin Earth thought experiment (see the entry on reference). It might be remarked that, thanks to modern chemistry, we now possess a description that is sure to apply to water and only to water: “being H2O” (Millikan 2005). However, even if our chemistry were badly mistaken (as it could, in principle, turn out to be) and water were not, in fact, H2O, ‘water’ would still refer to whatever has the same nature as this liquid. Something belongs to the extension of ‘water’ if and only if it is the same substance as this liquid, which we identify—correctly, as we believe—as being H2O.
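Schematically, the condition stated in the last sentence can be written as follows, with the demonstrative carrying the indexical component of reference determination:

\[
x \in \textrm{ext}(\textrm{‘water’}) \textrm{ iff } \textrm{same}_{\textrm{substance}}(x, \textit{this liquid})
\]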

Let it be noted that in Putnam’s original proposal, reference determination is utterly independent of speakers’ cognition: ‘water’ on Twin Earth refers to XYZ (not to H2O) even though the difference between the two substances is cognitively inert, so that before chemistry was created nobody on either Earth or Twin Earth could have told them apart. However, the label ‘externalism’ has been occasionally used for weaker views: a semantic account may be regarded as externalist if it takes semantic content to depend in one way or another on relations a computational system bears to things outside itself (Rey 2005; Borg 2012), irrespective of whether such relations affect the system’s cognitive state. Weak externalism is hard to distinguish from forms of internalism on which a word’s reference is determined by information stored in a speaker’s cognitive system—information of which the speaker may or may not be aware (Evans 1982). Be that as it may, in what follows ‘externalism’ will be used to mean strong, or Putnamian, externalism.

Does externalism apply to other lexical categories besides proper names and natural kind words? Putnam (1975) extended it to artifactual words, claiming that ‘pencil’ would refer to pencils—those objects—even if they turned out not to fit the description by which we normally identify them (e.g., if they were discovered to be organisms, not artifacts). Schwartz (1978, 1980) pointed out, among many objections, that even in such a case we could make objects fitting the original description; we would then regard the pencil-like organisms as impostors, not as “genuine” pencils. Others sided with Putnam and the externalist account: for example, Kornblith (1980) pointed out that artifactual kinds from an ancient civilization could be re-baptized in total ignorance of their function. The new artifactual word would then refer to the kind those objects belong to independently of any beliefs about them, true or false. Against such externalist accounts, Thomasson (2007) argued that artifactual terms cannot refer to artifactual kinds independently of all beliefs and concepts about the nature of the kind, for the concept of the kind’s creator(s) is constitutive of the nature of the kind. Whether artifactual words are amenable to an externalist account is still an open issue, as is, more generally, the scope of application of externalist semantics.

There is another form of externalism that does apply to all or most words of a language: social externalism (Burge 1979), the view on which the meaning of a word as used by an individual speaker depends on the semantic standards of the linguistic community the speaker belongs to. In our community the word ‘arthritis’ refers to arthritis—an affliction of the joints—even when used by a speaker who believes that it can afflict the muscles as well and uses the word accordingly. If the community the speaker belongs to applied ‘arthritis’ to rheumatoid ailments in general, whether or not they afflict the joints, the same word form would not mean arthritis and would not refer to arthritis. Hence, a speaker’s mental contents, such as the meanings associated with the words she uses, depend on something external to her, namely the uses and the standards of use of the linguistic community she belongs to. Thus, social externalism eliminates the notion of idiolect: words only have the meanings conferred upon them by the linguistic community (“public” meanings); discounting radical incompetence, there is no such thing as individual semantic deviance, there are only false beliefs (for criticisms, see Bilgrami 1992, Marconi 1997; see also the entry on idiolects).

Though both forms of externalism focus on reference, neither is a complete reduction of lexical meaning to reference. Both Putnam and Burge make it a necessary condition of semantic competence on a word that a speaker commands information that other semantic views would regard as part of the word’s sense. For example, if a speaker believes that manatees are a kind of household appliance, she would not count as competent on the word ‘manatee’, nor would she refer to manatees by using it (Putnam 1975; Burge 1993). Beyond that, it is not easy for externalists to provide a satisfactory account of lexical semantic competence, as they are committed to regarding speakers’ beliefs and abilities (e.g., recognitional abilities) as essentially irrelevant to reference determination, hence to meaning. Two main solutions have been proposed. Putnam (1973) suggested that a speaker’s semantic competence consists in her knowledge of stereotypes associated with words. A stereotype is an oversimplified theory of a word’s extension: the stereotype associated with ‘tiger’ describes tigers as cat-like, striped, carnivorous, fierce, living in the jungle, etc. Stereotypes are not meanings, as they do not determine reference in the right way: there are albino tigers and tigers that live in zoos. What the ‘tiger’-stereotype describes is (what the community takes to be) the typical tiger. Knowledge of stereotypes is necessary to be regarded as a competent speaker, and—one surmises—it can also be considered sufficient for the purposes of ordinary communication. Thus, Putnam’s account does provide some content for semantic competence, though it dissociates it from knowledge of meaning.

On an alternative view (Devitt 1983), competence on ‘tiger’ does not consist in entertaining propositional beliefs such as “tigers are striped”, but rather in being appropriately linked to a network of causal chains for ‘tiger’ involving other people’s abilities, groundings, and reference borrowings. In order to understand the English word ‘tiger’ and use it in a competent fashion, a subject must be able to combine ‘tiger’ appropriately with other words to form sentences, to have thoughts which those sentences express, and to ground these thoughts in tigers. Devitt’s account appears to make some room for a speaker’s ability to, e.g., recognize a tiger when she sees one; however, the respective weights of individual abilities (and beliefs) and objective grounding are not clearly specified. Suppose a speaker A belongs to a community C that is familiar with tigers; unfortunately, A has no knowledge of the typical appearance of a tiger and is unable to tell a tiger from a leopard. Should A be regarded as a competent user of ‘tiger’ on account of her being “part of C” and therefore linked to a network of causal chains for ‘tiger’?

3.4 Internalism

Some philosophers (e.g., Loar 1981; McGinn 1982; Block 1986) objected to the reduction of lexical meaning to reference, or to non-psychological factors that are alleged to determine reference. In their view, there are two aspects of meaning (more generally, of content): the narrow aspect, which captures the intuition that ‘water’ has the same meaning in both Earthian and Twin-Earthian English, and the wide aspect, which captures the externalist intuition that ‘water’ picks out different substances in the two worlds. The wide notion is required to account for the difference in reference between English and Twin-English ‘water’; the narrow notion is needed, first and foremost, to account for the relation between a subject’s beliefs and her behavior. The idea is that how an object of reference is described (not just which object one refers to) can make a difference in determining behavior. Oedipus married Jocasta because he thought he was marrying the queen of Thebes, not his mother, though as a matter of fact Jocasta was his mother. This applies to words of all categories: someone may believe that water quenches thirst without believing that H2O does; Lois Lane believed that Superman was a superhero but she definitely did not believe the same of her colleague Clark Kent, so she behaved one way to the man she identified as Superman and another way to the man she identified as Clark Kent (though they were the same man). Theorists who countenance these two components of meaning and content usually identify the narrow aspect with the inferential or conceptual role of an expression e, i.e., with the aspect of e that contributes to determining the inferential relations between sentences containing an occurrence of e and other sentences. Crucially, the two aspects are independent: neither determines the other. The stress on the independence of the two factors also characterizes more recent versions of so-called “dual aspect” theories, such as Chalmers (1996, 2002).

While dual theorists agree with Putnam’s claim that some aspects of meaning are not “in the head”, others have opted for plain internalism. For example, Segal (2000) rejected the intuitions that are usually associated with the Twin-Earth cases by arguing that meaning (and content in general) “locally supervenes” on a subject’s intrinsic physical properties. But the most influential critic of externalism has undoubtedly been Chomsky (2000). First, he argued that much of the alleged support for externalism comes in fact from “intuitions” about words’ reference in this or that circumstance. But ‘reference’ (and the verb ‘refer’ as used by philosophers) is a technical term, not an ordinary word, hence we have no more intuitions about reference than we have about tensors or c-command. Second, if we look at how words such as ‘water’ are applied in ordinary circumstances, we find that speakers may call ‘water’ liquids that contain a smaller proportion of H2O than other liquids they do not call ‘water’ (e.g., tea): our use of ‘water’ does not appear to be governed by hypotheses about microstructure. According to Chomsky, it may well be that progress in the scientific study of the language faculty will allow us to understand in what respects one’s picture of the world is framed in terms of things selected and individuated by properties of the lexicon, or involves entities and relationships describable by the resources of the language faculty. Some semantic properties do appear to be integrated with other aspects of language. However, so-called “natural kind words” (which in fact have little to do with kinds in nature, Chomsky claims) may do little more than indicating “positions in belief systems”: studying them may be of some interest for “ethnoscience”, surely not for a science of language. Along similar lines, others have maintained that the genuine semantic properties of linguistic expressions should be regarded as part of syntax, and that they constrain but do not determine truth conditions (e.g., Pietroski 2005, 2010). Hence, the connection between meaning and truth conditions (and reference) may be significantly looser than assumed by many philosophers.

3.5 Contextualism, Minimalism, and the Lexicon

“Ordinary language” philosophers of the 1950s and 1960s regarded work in formal semantics as essentially irrelevant to issues of meaning in natural language. Following Austin and the later Wittgenstein, they identified meaning with use and were prone to consider the different patterns of use of individual expressions as giving rise to distinct meanings of the word. Grice (1975) argued that such a proliferation of meanings could be avoided by distinguishing between what is asserted by a sentence (to be identified with its truth conditions) and what is communicated by it in a given context (or in every “normal” context). For example, consider the following exchange:

  • A: Will Kim be hungry at 11am?
  • B: Kim had breakfast.

Although B does not literally assert that Kim had breakfast on that particular day (see, however, Partee 1973), she does communicate as much. More precisely, A could infer the communicated content by noticing that the asserted sentence, taken literally (“Kim had breakfast at least once in her life”), would be less informative than required in the context: thus, it would violate one or more principles of conversation (“maxims”) whereas there is no reason to suppose that the speaker intended to opt out of conversational cooperation (see the entries on Paul Grice and pragmatics). If the interlocutor assumes that the speaker intended him to infer the communicated content—i.e., that Kim had breakfast that morning, so presumably she would not be hungry at 11—cooperation is preserved. Such non-asserted content, called ‘implicature’, need not be an addition to the overtly asserted content: e.g., in irony asserted content is negated rather than expanded by the implicature (think of a speaker uttering “Paul is a fine friend” to implicate that Paul has wickedly betrayed her).

Grice’s theory of conversation and implicatures was interpreted by many (including Grice himself) as a convincing way of accounting for the variety of contextually specific communicative contents while preserving the uniqueness of a sentence’s “literal” meaning, which was identified with truth conditions and regarded as determined by syntax and the conventional meanings of the occurring words, as in formal semantics. The only semantic role context was allowed to play was in determining the content of indexical words (such as ‘I’, ‘now’, ‘here’, etc.) and the effect of context-sensitive structures (such as tense) on a sentence’s truth conditions. However, in about the same years Travis (1975) and Searle (1979, 1980) pointed out that the semantic relevance of context might be much more pervasive, if not universal: intuitively, the same sentence type could have very different truth conditions in different contexts, though no indexical expression or structure appeared to be involved. Take the sentence “There is milk in the fridge”: in the context of morning breakfast it will be considered true if there is a carton of milk in the fridge and false if there is a patch of milk on a tray in the fridge, whereas in the context of cleaning up the kitchen truth conditions are reversed. Examples can be multiplied indefinitely, as indefinitely many factors can turn out to be relevant to the truth or falsity of a sentence as uttered in a particular context. Such variety cannot be plausibly reduced to traditional polysemy such as the polysemy of ‘property’ (meaning quality or real estate), nor can it be described in terms of Gricean implicatures: implicatures are supposed not to affect a sentence’s truth conditions, whereas here it is precisely the sentence’s truth conditions that are seen as varying with context.

The traditionalist could object by challenging the contextualist’s intuitions about truth conditions. “There is milk in the fridge”, she could argue, is true if and only if there is a certain amount (a few molecules will do) of a certain organic substance in the relevant fridge (for versions of this objection, Cappelen & Lepore 2005). So the sentence is true both in the carton case and in the patch case; it would be false only if the fridge did not contain any amount of any kind of milk (whether cow milk or goat milk or elephant milk). The contextualist’s reply is that, in fact, neither the speaker nor the interpreter is aware of such alleged literal content (the point is challenged by Fodor 1983, Carston 2002); but “what is said” must be intuitively accessible to the conversational participants (Availability Principle, Recanati 1989). If truth conditions are associated with what is said—as the traditionalist would agree they are—then in many cases a sentence’s literal content, if there is such a thing, does not determine a complete, evaluable proposition. For a genuine proposition to arise, a sentence type’s literal content (as determined by syntax and conventional word meaning) must be enriched or otherwise modified by primary pragmatic processes based on the speakers’ background knowledge relative to each particular context of use of the sentence. Such processes differ from Gricean implicature-generating processes in that they come into play at the sub-propositional level; moreover, they are not limited to saturation of indexicals but may include the replacement of a constituent with another. These tenets define contextualism (Recanati 1993; Bezuidenhout 2002; Carston 2002; relevance theory (Sperber & Wilson 1986) is in some respects a precursor of such views). Contextualists take different stands on the existence and nature of the contribution of the semantic properties of words and sentence-types, though they all agree that it is insufficient to fix truth conditions (Stojanovic 2008).

Even if sentence types have no definite truth conditions, it does not follow that lexical types do not make definite or predictable contributions to the truth conditions of sentences (think of indexical words). It does follow, however, that conventional word meanings are not the final constituents of complete propositions (see Allot & Textor 2012). Does this imply that there are no such things as lexical meanings understood as features of a language? If so, how should we account for word acquisition and lexical competence in general? Recanati (2004) does not think that contextualism as such is committed to meaning eliminativism, the view on which words as types have no meaning; nevertheless, he regards it as defensible. Words could be said to have, rather than “meaning”, a semantic potential, defined as the collection of past uses of a word w on the basis of which similarities can be established between source situations (i.e., the circumstances in which a speaker has used w) and target situations (i.e., candidate occasions of application of w). It is natural to object that even admitting that long-term memory could encompass such an immense amount of information (think of the number of times ‘table’ or ‘woman’ are used by an average speaker in the course of her life), surely working memory could not review such information to make sense of new uses. On the other hand, if words were associated with “more abstract schemata corresponding to types of situations”, as Recanati suggests as a less radical alternative to meaning eliminativism, one wonders what the difference would be with respect to traditional accounts in terms of polysemy.

Other conceptions of “what is said” make more room for the semantic contribution of conventional word meanings. Bach (1994) agrees with contextualists that the linguistic meaning of words (plus syntax and after saturation) does not always determine complete, truth-evaluable propositions; however, he maintains that it does provide some minimal semantic information, a so-called ‘propositional radical’, that allows pragmatic processes to issue in one or more propositions. Bach identifies “what is said” with this minimal information. However, many have objected that minimal content is extremely hard to isolate (Recanati 2004; Stanley 2007). Suppose it is identified with the content that all the utterances of a sentence type share; unfortunately, no such content can be attributed to a sentence such as “Every bottle is in the fridge”, for there is no proposition that is stably asserted by every utterance of it (surely not the proposition that every bottle in the universe is in the fridge, which is never asserted). Stanley’s (2007) indexicalism rejects the notion of minimal proposition and any distinction between semantic content and communicated content: communicated content can be entirely captured by means of consciously accessible, linguistically controlled content (content that results from semantic value together with the provision of values to free variables in syntax, or semantic value together with the provision of arguments to functions from semantic types to propositions) together with general conversational norms. Accordingly, Stanley generalizes contextual saturation processes that are usually regarded as characteristic of indexicals, tense, and a few other structures; moreover, he requires that the relevant variables be linguistically encoded, either syntactically or lexically. It remains to be seen whether such solutions apply (in a non-ad hoc way) to all the examples of content modulation that have been presented in the literature.

Finally, minimalism (Borg 2004, 2012; Cappelen & Lepore 2005) is the view that appears (and intends) to be closest to the Frege-Montague tradition. The task of a semantic theory is said to be minimal in that it is supposed to account only for the literal meaning of sentences: context does not affect literal semantic content but “what the speaker says” as opposed to “what the sentence means” (Borg 2012). In this sense, semantics is not another name for the theory of meaning, because not all meaning-related properties are semantic properties (Borg 2004). Contrary to contextualism and Bach’s theory, minimalism holds that lexicon and syntax together determine complete truth-evaluable propositions. Indeed, this is definitional for lexical meaning: word meanings are the kind of things which, if enough of them are put together in the right sort of way, yield propositional content (Borg 2012). Borg believes that, in order to be truth-evaluable, propositional contents must be “about the world”, and that this entails some form of semantic externalism. However, the identification of lexical meaning with reference makes it hard to account for semantic relations such as synonymy and analytic entailment, for the difference between ambiguity and polysemy, and for syntactically relevant properties: the difference between “John is easy to please” and “John is eager to please” cannot be explained by the fact that ‘easy’ means the property easy (see the entry on ambiguity). To account for semantically based syntactic properties, words may come with “instructions” which, unlike meaning postulates (which Borg rejects), are not constitutive of a word’s meaning, though awareness of them is part of a speaker’s competence. Once more, lexical semantic competence is divorced from grasp of word meaning. In conclusion, some information counts as lexical if it is either perceived as such in “firm, type-level lexical intuitions” or capable of affecting the word’s syntactic behavior. Borg concedes that even such an extended conception of lexical content will not capture, e.g., analytic entailments such as the relation between ‘bachelor’ and ‘unmarried’.

4. Linguistics

The emergence of modern linguistic theories of word meaning is customarily placed at the transition from historical-philological semantics (Section 2.2) to structuralist semantics.

4.1 Structuralist Semantics

The advances introduced by the structuralist conception of word meaning can be best appreciated by contrasting its tenets with those of historical-philological semantics. Let us recall the three most important differences (Lepschy 1970).

  • Anti-psychologism. Structuralist semantics views language as a symbolic system whose internal dynamics can be analyzed apart from the psychology of its users. Just as the rules of chess can be expressed without mentioning the mental properties of chess players, so the semantic attributes of words can be investigated simply by examining their relations to other elements in the same lexicon.
  • Anti-historicism. Since the primary subject matter of structuralist semantics is the role played by lexical expressions in structured linguistic systems, structuralist semantics privileges synchronic linguistic description. Diachronic accounts of the evolution of a word w presuppose an analysis of the relational properties statically exemplified by w at different stages of the lexical system it belongs to.
  • Anti-localism. As the semantic properties of lexical expressions depend on the relations they entertain with other expressions in the same lexical system, word meanings cannot be studied in isolation. This is both an epistemological and a foundational claim, i.e., a claim about how matters related to word meaning should be addressed in the context of a semantic theory, and a claim about the dynamics whereby the elements of a system of signs acquire the meaning they have for their users.

The account of lexical phenomena popularized by structuralism gave rise to a variety of descriptive approaches to word meaning. We can group them in three categories (Lipka 1992; Murphy 2003; Geeraerts 2006).

  • Lexical Field Theory. Introduced by Trier (1931), it argues that words should be studied by looking at their relations to other words in the same lexical field. A lexical field is a set of semantically related lexical items whose meanings are mutually interdependent and which together provide a given domain of reality with conceptual structure. Lexical field theory assumes that lexical fields are closed sets with no overlapping meanings or semantic gaps. Whenever a word undergoes a change in meaning (e.g., its range of application is extended or contracted), the whole arrangement of its lexical field is affected (Lehrer 1974).
  • Componential Analysis. Developed in the second half of the 1950s by European and American linguists (e.g., Pottier, Coseriu, Bloomfield, Nida), this framework argues that word meaning can be described on the basis of a finite set of conceptual building blocks called semantic components or features. For example, ‘man’ can be analyzed as [+ male], [+ mature], ‘woman’ as [− male], [+ mature], ‘child’ as [+/− male], [− mature] (Leech 1974) (see the sketch after this list).
  • Relational Semantics. This approach, prominent in the work of linguists such as Lyons (1963), shares with lexical field theory the commitment to a mode of analysis that privileges the description of lexical relations, but departs from it in two important respects. First, it postulates no isomorphism between sets of related words and domains of reality, thereby eliminating non-linguistic predicates from the theoretical vocabulary that can be used in the description of lexical relations, and dropping the assumption that the organization of lexical fields has to reflect ontology. Second, instead of deriving statements about the meaning relations entertained by a lexical item (e.g., synonymy, hyponymy) from an independent account of its meaning, relational semantics sees word meanings as constituted by the set of semantic relations they participate in (Evens et al. 1980; Cruse 1986).
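To make the componential idea concrete, here is a minimal sketch in Python (the feature inventory and the incompatibility test are illustrative assumptions, not drawn from the authors cited above):

    # A toy componential lexicon: word meanings as bundles of binary
    # features. True = [+], False = [-], None = unspecified [+/-].
    LEXICON = {
        "man":   {"male": True,  "mature": True},
        "woman": {"male": False, "mature": True},
        "child": {"male": None,  "mature": False},
    }

    def incompatible(w1, w2):
        """Two words are incompatible if a shared feature has opposite values."""
        f1, f2 = LEXICON[w1], LEXICON[w2]
        return any(
            f1[f] is not None and f2[f] is not None and f1[f] != f2[f]
            for f in set(f1) & set(f2)
        )

    print(incompatible("man", "woman"))  # True: they differ on [male]
    print(incompatible("man", "child"))  # True: they differ on [mature]

On this toy picture, semantic relations such as incompatibility (or hyponymy) reduce to set-theoretic checks over feature bundles.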

4.2 Generativist Semantics

The componential current of structuralism was the first to produce an important innovation in theories of word meaning, namely Katzian semantics (KS; Katz & Fodor 1963; Katz 1972, 1987). KS combined componential analysis with a mentalistic conception of word meaning and developed a method for the description of lexical phenomena in the context of a formal grammar. The psychological component of KS is twofold. First, word meanings are defined in terms of the combination of simpler conceptual components. Second, the subject of semantic theorizing is not identified with the “structure of the language” but, following Chomsky (1957, 1965), with the ability of the language user to interpret sentences. In KS, word meanings are structured entities whose representations are called semantic markers. A semantic marker is a tree with labeled nodes whose structure reproduces the structure of the represented meaning, and whose labels identify the word’s conceptual components. For example, the sense of ‘chase’ can be represented by such a marker (simplified from Katz 1987). [Figure not reproduced: semantic marker tree for ‘chase’.]

Katz (1987) claimed that KS was superior to the kind of semantic analysis that could be provided via meaning postulates. For example, in KS the validation of conditionals such as \(\forall x\forall y (\textrm{chase}(x, y) \to \textrm{follow}(x,y))\) could be reduced to a matter of inspection: one had simply to check whether the semantic marker of ‘follow’ was a subtree of the semantic marker of ‘chase’. Moreover, the method made it possible to incorporate syntagmatic relations among the phenomena to be considered in the representation of word meanings (witness the grammatical tags ‘NP’, ‘VP’ and ‘S’ attached to the conceptual components of such markers). KS was favorably received by the Generative Semantics movement (Fodor 1977; Newmeyer 1980) and boosted an interest in the formal representation of word meaning that would dominate the linguistic scene for decades to come (Harris 1993). Nonetheless, it was eventually abandoned. First, it had no theory of how lexical expressions contributed to the truth conditions of sentences (Lewis 1972). Second, some features that could be easily represented with the standard notation of meaning postulates could not be expressed through semantic markers, such as the symmetry and the transitivity of predicates (e.g., \(\forall x\forall y (\textrm{sibling}(x, y) \to \textrm{sibling}(y, x))\) or \(\forall x\forall y\forall z (\textrm{louder}(x, y) \mathbin{\&} \textrm{louder}(y, z) \to \textrm{louder}(x, z))\); see Dowty 1979). Third, the arguments offered by KS in support of its assumption that lexical meaning should be regarded as having an internal structure turned out to be vulnerable to objections from proponents of an atomistic view of word meaning (Fodor & Lepore 1992).
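The “inspection” procedure just described can be illustrated with a toy implementation (the tree structures below are invented stand-ins, not Katz’s actual markers):

    # Toy semantic markers as labeled trees: (label, [children]).
    FOLLOW = ("Activity", [("Movement", []),
                           ("Direction-of", [("Object", [])])])
    CHASE  = ("Activity", [("Movement", []),
                           ("Direction-of", [("Object", [])]),
                           ("Purpose", [("Catching", [])])])

    def contains(tree, sub):
        """Check whether `sub` occurs as a (label-wise) subtree of `tree`."""
        label, children = tree
        s_label, s_children = sub
        if label == s_label and all(
            any(contains(c, sc) for c in children) for sc in s_children
        ):
            return True
        return any(contains(c, sub) for c in children)

    # chase(x, y) -> follow(x, y) is validated "by inspection" just in case
    # the marker of 'follow' is contained in the marker of 'chase'.
    print(contains(CHASE, FOLLOW))  # True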

After KS, the landscape of linguistic theories of word meaning bifurcated. On one side, we have a group of theories advancing the decompositional agenda established by Katz. On the other, we have a group of theories aligning with the relational approach originated by lexical field theory and relational semantics. Following Geeraerts (2010), we shall briefly characterize the following ones.

  • Decompositional Frameworks: Natural Semantic Metalanguage, Conceptual Semantics, Two-Level Semantics, Generative Lexicon Theory
  • Relational Frameworks: Symbolic Networks, Statistical Analysis

4.3 Decompositional Approaches

The basic idea of the Natural Semantic Metalanguage approach (NSM; Wierzbicka 1972, 1996; Goddard & Wierzbicka 2002) is that word meaning should be described in terms of a small core of elementary conceptual particles, known as semantic primes. According to NSM, primes are primitive, innate, unanalyzable semantic constituents that are lexicalized in all natural languages (in the form of a word, a morpheme, a phraseme) and whose appropriate combination should be sufficient to delineate the semantic properties of any lexical expression in any natural language. Wierzbicka (1996) proposed a catalogue of about 60 primes, to be exploited to spell out the internal structure of word meanings and grammatical constructions using so-called reductive paraphrases: for example, ‘top’ is analyzed as a part of something; this part is above all the other parts of this something. NSM has produced interesting applications in comparative linguistics (Peeters 2006), language teaching (Goddard & Wierzbicka 2007), and lexical typology (Goddard 2012). However, it has been criticized on various grounds. First, it has been argued that the method followed by NSM in the identification of lexical semantic universals is invalid (e.g., Matthewson 2003), and that reductive paraphrases are too vague to be considered full specifications of lexical meanings, since they fail to account for fine-grained differences among words whose semantic attributes are closely related. For example, the definition provided by Wierzbicka for ‘sad’ (i.e., X feels something; sometimes a person thinks something like this: something bad happened; if I didn’t know that it happened I would say: I don’t want it to happen; I don’t say this now because I know: I can’t do anything; because of this, this person feels something bad; X feels something like this) seems to apply equally well to ‘unhappy’, ‘distressed’, ‘frustrated’, ‘upset’, and ‘annoyed’ (Aitchison 2012). In addition, it has been observed that some items in the lists of primes elaborated by NSM theorists fail to comply with the requirement of universality and are not explicitly lexicalized in all known languages (Bohnemeyer 2003; Von Fintel & Matthewson 2008). See Goddard (1998) for some replies and Riemer (2006) for further objections.

For NSM, lexical meaning is a purely linguistic entity that bears no constitutive relation to the domain of world knowledge. Conceptual Semantics (CSEM; Jackendoff 1983, 1990, 2002) proposes a more open-ended approach. According to CSEM, formal semantic representations do not contain all the information on the basis of which lexically competent subjects use and interpret words. Rather, the meaning of lexical expressions is determined thanks to the interaction between the formal representations that constitute the primary level of word knowledge and conceptual structure, which is the domain of non-linguistic modes of cognition such as perceptual knowledge and motor schemas. This interface is reflected in the way CSEM proposes to model word meanings. Below, the semantic representation of ‘drink’ according to Jackendoff.

drink:
V
\(\rule{1.2em}{0.4pt}\ \langle\textrm{NP}_j\rangle\)
\([_{\textrm{Event}}\ \textrm{CAUSE}([_{\textrm{Thing}}\ \rule{1.2em}{0.4pt}\ ]_i, [_{\textrm{Event}}\ \textrm{GO}([_{\textrm{Thing}}\ \textrm{LIQUID}]_j, [_{\textrm{Path}}\ \textrm{TO}([_{\textrm{Place}}\ \textrm{IN}([_{\textrm{Thing}}\ \textrm{MOUTH OF}([_{\textrm{Thing}}\ \rule{1.2em}{0.4pt}\ ]_i)])])])])]\)

Syntactic tags represent the way the word interacts with the grammatical environment where it is used, while the items in subscript come from a set of perceptually grounded primitives (e.g., event, state, thing, path, place, property, amount) which are assumed to be innate, cross-modal and universal categories of human cognition. CSEM elaborates in detail on the interface between syntax and lexical semantics, but some of its claims about the interplay between formal lexical representations and non-linguistic information seem less well supported. To begin with, psychologists have observed that speakers tend to use causative predicates and the paraphrases expressing their decompositional structure in different and partially non-interchangeable ways (e.g., Wolff 2003). Furthermore, CSEM provides no well-founded method for the identification of pre-conceptual primitives (Pulman 2005), and the claim that the bits of information to be inserted in the definition of word meaning should be ultimately perception-related looks disputable. For example, how can we account for the difference in meaning between ‘jog’ and ‘run’ without pointing to information about the social characteristics of jogging, which imply a certain leisure setting, the intention to contribute to physical wellbeing, and so on? See Taylor (1996), Deane (1996).

The principled division between word knowledge and world knowledge introduced by CSEM does not have much to say about the dynamic interaction of the two in language use. The Two-Level Semantics (TLS) of Bierwisch (1983a,b) and Lang (Bierwisch & Lang 1989; Lang 1993) aims to provide precisely such a dynamic account. TLS views lexical meaning as the output of the interaction of two systems: semantic form (SF) and conceptual structure (CS). SF is a formalized representation of the basic features of a lexical item. It contains grammatical information that specifies how a word can contribute to the formation of syntactic structures, plus a set of variables and parameters whose value is determined through CS. By contrast, CS consists of language-independent systems of knowledge that mediate between language and the world as construed by the human mind (Lang & Maienborn 2011). According to TLS, polysemous words express variable meanings because their stable SF interacts flexibly with CS. Consider for example the word ‘university’, which can be read as referring either to an institution (as in “the university selected John’s application”) or to a building (as in “the university is located on the North side of the river”). Skipping some technical details, TLS construes the dynamics governing the selection of these readings as follows.

  1. The word ‘university’ is assigned to the category \(\lambda x [\textrm{purpose} [x w]]\) (i.e., ‘university’ belongs to the category of words denoting objects primarily characterized by their purpose).
  2. Based on a general understanding of the defining purposes of universities, the SF of ‘university’ is specified as \(\lambda x [\textrm{purpose} [x w] \mathbin{\&} \textit{advanced study and teaching} [w]]\).
  3. The alternative readings obtain as a function of the two ways CS allows \(\lambda x [\textrm{purpose} [x w]]\) to be specified, i.e., \(\lambda x [\textrm{institution} [x] \mathbin{\&} \textrm{purpose} [x w]]\) or \(\lambda x [\textrm{building} [x] \mathbin{\&} \textrm{purpose} [x w]]\).
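A toy Python rendering of these three steps (the contextual cue and the data structures are my own simplifying assumptions, not part of TLS’s formal apparatus):

    # SF: the stable, underspecified semantic form of 'university'.
    SF_UNIVERSITY = {"purpose": "advanced study and teaching"}

    # CS: language-independent knowledge supplies the sortal specification.
    CS_SPECIFICATIONS = {
        "institution": lambda sf: {"sort": "institution", **sf},
        "building":    lambda sf: {"sort": "building", **sf},
    }

    def read(word_sf, context):
        """Select the CS specification that fits the sentential context."""
        # Crude cue: agentive contexts select the institution reading,
        # locative contexts the building reading.
        key = "institution" if context == "agentive" else "building"
        return CS_SPECIFICATIONS[key](word_sf)

    # "the university selected John's application" (agentive context)
    print(read(SF_UNIVERSITY, "agentive"))
    # "the university is located on the North side" (locative context)
    print(read(SF_UNIVERSITY, "locative"))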

TLS aligns with Jackendoff’s and Wierzbicka’s commitment to a descriptive paradigm that takes into account the plasticity of lexical meaning while anchoring it to a stable semantic template. But even if explaining the contextual flexibility of word uses in terms of access to non-linguistic information were as unavoidable a move as TLS suggests, there may be reasons to doubt that the approach privileged by TLS is the best to provide a detailed account of such dynamics. A first problem has to do, once again, with definitional accuracy: defining the SF of ‘university’ as \(\lambda x [\textrm{purpose} [x w] \mathbin{\&} \textit{advanced study and teaching} [w]]\) seems inadequate to reflect the subtle differences in meaning among ‘university’ and related terms designating institutions for higher education, such as ‘college’ or ‘academy’. Furthermore, the apparatus of TLS excludes from CS bits of encyclopedic knowledge that would be difficult to represent via lambda expressions, and yet are indispensable to select among the alternative meanings of a word (Taylor 1994, 1995). See also Wunderlich (1991, 1993).

Generative Lexicon Theory (GLT; Pustejovsky 1995) developed out of a goal to provide a computational semantics for the way words modulate their meaning in language use, and proposed to model the contextual flexibility of lexical meaning as the output of formal operations defined over a generative lexicon. According to GLT, the computational resources available to a lexical item w consist of the following four levels.

  • A lexical typing structure, giving an explicit type for w positioned within a type system for the language;
  • An argument structure, representing the number and nature of the arguments supported by w;
  • An event structure, defining the event type denoted by w (e.g., state, process, transition);
  • A qualia structure, specifying the predicative force of w.

In particular, qualia structure captures how humans understand objects and relations in the world and provides a minimal explanation for the behavior of lexical items based on some properties of their referents (Pustejovsky 1998). GLT distinguishes four types of qualia:

  • constitutive: the relation between an object x and its constituent parts;
  • formal: the basic ontological category of x;
  • telic: the purpose and the function of x;
  • agentive: the factors involved in the origin of x.

For example, the qualia structure of the noun ‘sandwich’ will contain information about the composition of sandwiches, their typical role in the activity of eating, and their nature as physical artifacts. If eat(P, g, x) denotes a process, P, involving an individual g and an object x, then the qualia structure of ‘sandwich’ is as follows.

sandwich(x)
const = {bread, …}
form = physobj(x)
tel = eat(P, g, x)
agent = artifact(x)

Qualia structure is the primary explanatory device by which GLT accounts for polysemy: the sentence “Mary finished the sandwich” receives the default interpretation “Mary finished eating the sandwich” because the argument structure of ‘finish’ requires an action as direct object, and the qualia structure of ‘sandwich’ allows the generation of the appropriate sense via type coercion (Pustejovsky 2006). GLT is an ongoing research program (Pustejovsky et al. 2012) that has led to significant applications in computational linguistics (e.g., Pustejovsky & Jezek 2008; Pustejovsky & Rumshisky 2008). But like the theories mentioned so far, it has been subject to criticisms. A first objection is that the decompositional assumptions underlying GLT are unwarranted and should be replaced by an atomist view of word meaning (Fodor & Lepore 1998; see Pustejovsky 1998 for a reply). Second, many have pointed out that while GLT reduces polysemy to a formal mechanism operating on information provided by the sentential context, contextual variations in lexical meaning often depend on non-linguistic factors (e.g., Lascarides & Copestake 1998; Asher 2011) and can conflict with the predictions offered by GLT (Blutner 2002). Third, it has been argued that qualia structure sometimes overgenerates or undergenerates interpretations (e.g., Jayez 2001), and is included in lexical representations by drawing an arbitrary dividing line between linguistic and non-linguistic information (Asher & Lascarides 1995).
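The type-coercion mechanism invoked above can be caricatured in a few lines of Python (grossly simplified: GLT’s actual operations are defined type-theoretically, and the lexical entries here are illustrative):

    # Toy qualia structures for two nouns.
    QUALIA = {
        "sandwich": {"const": {"bread"}, "formal": "physobj",
                     "telic": "eat",  "agentive": "artifact"},
        "book":     {"const": {"pages"}, "formal": "physobj",
                     "telic": "read", "agentive": "write"},
    }

    def coerce(verb, noun):
        """'finish' requires an event-denoting complement; an object-denoting
        noun is coerced into an event via its telic quale."""
        if verb == "finish":
            event = QUALIA[noun]["telic"]
            return f"finish {event}ing the {noun}"
        return f"{verb} the {noun}"

    print(coerce("finish", "sandwich"))  # finish eating the sandwich
    print(coerce("finish", "book"))      # finish reading the book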

4.4 Relational Approaches

To conclude this section, we shall mention some contemporary approaches to word meaning that develop the relational component of the structuralist paradigm. We can group them into two categories. On the one hand, we have symbolic approaches, whose goal is to build formalized models of lexical knowledge in which the lexicon is seen as a structured system of entries interconnected by sense relations such as synonymy, antonymy, and meronymy. On the other, we have statistical approaches, whose primary aim is to investigate the patterns of co-occurrence among word forms in linguistic corpora.

The chief example of symbolic approaches is Collins and Quillian’s (1969) hierarchical network model, in which each word is represented as a node in a network, comprising a set of conceptual features that define the word’s conventional meaning and connected to other nodes through semantic relations (more in Lehman 1992). Subsequent developments of the hierarchical network model include the Semantic Feature Model (Smith, Shoben & Rips 1974), the Spreading Activation Model (Collins & Loftus 1975; Bock & Levelt 1994), the WordNet database (Fellbaum 1998), as well as the connectionist models of Seidenberg & McClelland (1989), Hinton & Shallice (1991), and Plaut & Shallice (1993) (see the entry on connectionism).
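A toy fragment conveys the flavor of the model (the nodes and features are illustrative, loosely echoing Collins & Quillian’s bird examples):

    # Toy hierarchical network: each node stores local features plus an
    # IS-A link; features are inherited up the hierarchy, so 'canary'
    # need not store 'has wings' locally.
    NETWORK = {
        "animal": {"isa": None,     "features": {"breathes"}},
        "bird":   {"isa": "animal", "features": {"has wings", "can fly"}},
        "canary": {"isa": "bird",   "features": {"is yellow", "can sing"}},
    }

    def has_feature(word, feature):
        """Climb the IS-A links until the feature is found or the top is reached."""
        node = word
        while node is not None:
            if feature in NETWORK[node]["features"]:
                return True
            node = NETWORK[node]["isa"]
        return False

    print(has_feature("canary", "can sing"))  # True: stored locally
    print(has_feature("canary", "breathes"))  # True: inherited from 'animal'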

Statistical analysis, by contrast, is based on an attempt to gather evidence about the distribution of words in corpora and use this information to account for their meaning. Basically, collecting data about the patterns of preferred co-occurrence among lexical items helps identify their semantic properties and differentiate between their different senses (for overviews, see Atkins & Zampolli 1994; Manning & Schütze 1999; Stubbs 2002; Sinclair 2004). It is important to mention that although network models and statistical analysis share an interest in developing computational tools for language processing, they differ in an important respect. While symbolic networks are models of the architecture of the lexicon that seek to be cognitively adequate and to fit psycholinguistic evidence, statistical analysis is a practical methodology for the analysis of corpora which is not necessarily interested in providing a psychological account of the information that a subject must associate with words in order to master a lexicon (see the entry on computational linguistics).
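A minimal illustration with invented counts: represent each word by its profile of co-occurrence with context words, and compare profiles by cosine similarity:

    from math import sqrt

    # Invented co-occurrence counts of target words with context words.
    COOCCURRENCE = {
        "coffee": {"drink": 20, "cup": 15, "hot": 10, "page": 0},
        "tea":    {"drink": 18, "cup": 12, "hot": 9,  "page": 1},
        "book":   {"drink": 0,  "cup": 1,  "hot": 0,  "page": 25},
    }

    def cosine(w1, w2):
        """Cosine similarity between two co-occurrence profiles."""
        v1, v2 = COOCCURRENCE[w1], COOCCURRENCE[w2]
        keys = set(v1) | set(v2)
        dot = sum(v1.get(k, 0) * v2.get(k, 0) for k in keys)
        norm1 = sqrt(sum(x * x for x in v1.values()))
        norm2 = sqrt(sum(x * x for x in v2.values()))
        return dot / (norm1 * norm2)

    print(round(cosine("coffee", "tea"), 2))   # high: similar distributions
    print(round(cosine("coffee", "book"), 2))  # low: different distributions

Words with similar distributions (‘coffee’, ‘tea’) come out close; this is the sense in which distributional evidence is taken to bear on meaning.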

5. Cognitive Science

As we have seen, most theories of lexical meaning in linguistics attempt to trace a plausible dividing line between word knowledge and world knowledge, and the various ways they tackle this task display some recurrent features. They focus on the structural attributes of lexical meaning rather than on the dynamics of word use, they maintain that words encode distinctively linguistic information about their alternative senses, they see the study of word meaning as an enterprise whose epistemological niche is linguistic theory, and they assume that the lexicon constitutes a system whose properties can be illuminated with a fairly economical appeal to the landscape of factual knowledge and non-linguistic cognition. In this section, we survey a group of theories that adopt a different stance on word meaning. The focus is once again psychological, which means that the overall goal is to provide a cognitively realistic account of the representational repertoire underlying our ability to use words. But unlike the approaches mentioned in Section 4, these theories tend to encourage a view on which the distinction between lexical semantics and pragmatics is highly unstable (or impossible to draw), where word knowledge is richly interfaced with general intelligence, and where lexical activity is not sustained by an autonomous lexicon that operates entirely apart from other cognitive systems (Evans 2010). The first part of this section will examine some cognitive linguistic theories of word meaning, whose primary aim is to shed light on the complexities of lexical phenomena through a characterization of the processes interfacing word knowledge with non-linguistic cognition. The second part will go into some psycholinguistic and neurolinguistic approaches to word meaning, which attempt to identify the representational format and the neural correlates of word knowledge through the experimental study of lexical activity.

5.1 Cognitive Linguistics

At the beginning of the 1970s, Eleanor Rosch put forth a new theory of the mental representation of categories. Concepts such as furniture or bird, she claimed, are not represented just as sets of criterial features with clear-cut boundaries, so that an item can be conceived as falling or not falling under the concept based on whether or not it meets some relevant criteria. Rather, items within categories can be considered differentially representative of the meaning of category-terms (Rosch 1975; Rosch & Mervis 1975; Mervis & Rosch 1981). Several experiments seemed to show that the application of concepts was no simple yes-or-no business: some items (the “good examples”) are more easily identified as falling under a concept than others (the “poor examples”). An automobile is perceived as a better example of vehicle than a rowboat, and much better than an elevator; a carrot is more readily identified as falling under the concept vegetable than a pumpkin. If lexical concepts were represented merely by criteria, such differences would be inexplicable when occurring between items that meet the criteria equally well. It is thus plausible to assume that the mental representations of category words are somehow closer to good examples than to bad examples of the category: a robin is perceived as a more “birdish” bird than an ostrich or, as people would say, closer to the prototype of a bird or to the prototypical bird (see the entry on concepts).
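One simple way of modeling such graded effects (a sketch: Rosch’s experiments support the effects themselves, not this particular mechanism) scores items by weighted overlap with the features of a category prototype:

    # Prototype of 'bird' as weighted features; higher weight = more
    # central to the category (weights are invented for illustration).
    BIRD_PROTOTYPE = {"flies": 2.0, "sings": 1.0, "small": 1.0, "lays eggs": 1.0}

    EXEMPLARS = {
        "robin":   {"flies", "sings", "small", "lays eggs"},
        "ostrich": {"lays eggs"},
    }

    def typicality(item):
        """Sum the weights of the prototype features the item possesses."""
        return sum(w for f, w in BIRD_PROTOTYPE.items() if f in EXEMPLARS[item])

    print(typicality("robin"))    # 5.0: a "good example" of bird
    print(typicality("ostrich"))  # 1.0: a "poor example"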

Although nothing in Rosch’s experiments licensed the conclusion that prototypes should be reified and treated as mental entities (what her experiments did support was merely that a theory of the mental representation of categories should be consistent with the existence of prototype effects), prototypes were soon identified with feature bundles in the mind and led to the formulation of a prototype-based approach to word meaning (Murphy 2002). First, prototypes were used for the development of the Radial Network Theory of Brugman (1988 [1981]; Brugman & Lakoff 1988), who proposed to model the sense network of polysemous words by introducing in the architecture of lexical items the center-periphery relation envisaged by Rosch. According to Brugman, the meaning potential of a polysemous word can be modeled as a radial complex where a dominant sense is related to less typical senses by means of semantic relations such as metaphor and metonymy (e.g., the sense network of ‘fruit’ has product of plant growth at its center and a more abstract outcome at its periphery, and the two are connected by a metaphorical relation). Shortly after, the Conceptual Metaphor Theory of Lakoff & Johnson (1980; Lakoff 1987) and the Mental Spaces Approach of Fauconnier (1994; Fauconnier & Turner 1998) combined the assumption that words encode radial categories with the claim that word uses are governed by mechanisms of figurative mapping that integrate lexical categories across different conceptual domains (e.g., “love is war”, “life is a journey”). These associations are creative, perceptually grounded, systematic, cross-culturally uniform, and emerge from pre-linguistic patterns of conceptual activity that correlate with core elements of human embodied experience (see the entries on metaphor and embodied cognition). More in Kövecses (2002), Gibbs (2008), and Dancygier & Sweetser (2014).
