Katie Wagner and David Barner
Human experience of color results from a complex interplay of perceptual and linguistic systems. At the lowest level of perception, the human visual system transforms the visible light portion of the electromagnetic spectrum into a rich, continuous three-dimensional experience of color. Despite our ability to perceptually discriminate millions of different color shades, most languages partition color into a small number of discrete categories. While the meanings of color words are constrained by perception, perception does not fully define them. Once color words are acquired, they may in turn influence our memory and processing speed for color, although it is unlikely that language influences the lowest levels of color perception.
One approach to examining the relationship between perception and language in forming our experience of color is to study children as they acquire color language. Children produce color words in speech for many months before acquiring adult meanings for color words. Research in this area has focused on whether children’s difficulties stem from (a) an inability to identify color properties as a likely candidate for word meanings, or alternatively (b) the slow inductive learning of language-specific color word boundaries. Lending plausibility to the first account, there is evidence that children more readily attend to object traits like shape, rather than color, as likely candidates for word meanings. However, recent evidence indicates that children have meanings for some color words before they begin to produce them in speech, suggesting that they may in fact be able to identify color as a candidate for word meaning early in the color word learning process. There is also evidence that prelinguistic infants, like adults, perceive color categorically. While these perceptual categories likely constrain the meanings that children consider, they cannot fully define color word meanings because languages vary in both the number and location of color word boundaries. Recent evidence suggests that the delay in color word acquisition primarily stems from an inductive process of refining these boundaries.
Myrto Grigoroglou and Anna Papafragou
To become competent communicators, children need to learn that what a speaker means often goes beyond the literal meaning of what the speaker says. The field of pragmatic acquisition studies how children learn to bridge the gap between the semantic meaning of words and structures and the intended meaning of an utterance. Of interest is whether young children are capable of reasoning about others’ intentions and how this ability develops over time.
For a long period, estimates of children’s pragmatic sophistication were mostly pessimistic: early work on a number of phenomena showed that very young communicators were egocentric, oblivious to other interlocutors’ intentions, and overall insensitive to subtle pragmatic aspects of interpretation. Recent years have seen major shifts in the study of children’s pragmatic development. Novel methods and more fine-grained theoretical approaches have led to a reconsideration of older findings on how children acquire pragmatics across a number of phenomena and have produced a wealth of new evidence and theories.
Three areas that have generated a considerable body of developmental work on pragmatics include reference (the relation between words or phrases and entities in the world), implicature (a type of inferred meaning that arises when a speaker violates conversational rules), and metaphor (a case of figurative language). Findings from these three domains suggest that children actively use pragmatic reasoning to delimit potential referents for newly encountered words, can take into account the perspective of a communicative partner, and are sensitive to some aspects of implicated and metaphorical meaning. Nevertheless, children’s success with pragmatic communication is fragile and task-dependent.
William F. Hanks
Deictic expressions, like English ‘this’, ‘that’, ‘here’, and ‘there’, occur in all known human languages. They are typically used to individuate objects in the immediate context in which they are uttered, by pointing at them so as to direct attention to them. The object, or demonstratum, is singled out as a focus, and a successful act of deictic reference is one that results in the Speaker (Spr) and Addressee (Adr) attending to the same referential object. Thus,
(1) A: Oh, there’s that guy again. (pointing)
    B: Oh yeah, now I see him. (fixing gaze on the guy)

(2) A: I’ll have that one over there. (pointing to a dessert on a tray)
    B: This? (touching pastry with tongs)
    A: Yeah, that looks great.
    B: Here ya’ go. (handing pastry to customer)
In an exchange like (1), A’s utterance spotlights the individual guy, directing B’s attention to him, and B’s response (both verbal and ocular) displays that he has recognized him. In (2), A’s utterance individuates one pastry among several, B’s response makes sure he’s attending to the right one, A reconfirms, and B completes the exchange by presenting the pastry to him. If we compare the two examples, it is clear that the deictics (‘that’, ‘there’, ‘this’, ‘here’) can pick out or present individuals without describing them. In a similar way, “I, you, he/she, we, now, (back) then,” and their analogues are all used to pick out individuals (persons, objects, or time frames), apparently without describing them. As a corollary of this semantic paucity, individual deictics vary extremely widely in the kinds of object they may properly denote: ‘here’ can denote anything from the tip of your nose to planet Earth, and ‘this’ can denote anything from a pastry to an upcoming day (this Tuesday). In the same circumstances, ‘this’ and ‘that’ can refer appropriately to the same object, depending upon who is speaking, as in (2). How can forms that are so abstract and variable over contexts be so specific and rigid in a given context? On what parameters do deictics and deictic systems in human languages vary, and how do they relate to grammar and semantics more generally?
While both pragmatic theory and experimental investigations of language using psycholinguistic methods have been well-established subfields in the language sciences for a long time, the field of Experimental Pragmatics, where such methods are applied to pragmatic phenomena, has only fully taken shape since the early 2000s. By now, however, it has become a major and lively area of ongoing research, with dedicated conferences, workshops, and collaborative grant projects, bringing together researchers with linguistic, psychological, and computational approaches across disciplines. Its scope includes virtually all meaning-related phenomena in natural language comprehension and production, with a particular focus on the inferences utterances give rise to beyond what is literally expressed by the linguistic material.
One general area that has been explored in great depth consists of investigations of various ‘ingredients’ of meaning. A major aim has been to develop experimental methodologies to help classify various aspects of meaning, such as implicatures and presuppositions as compared to basic truth-conditional meaning, and to capture their properties more thoroughly using more extensive empirical data. The study of scalar implicatures (e.g., the inference that some but not all students left based on the sentence Some students left) has served as a catalyst of sorts in this area, and scalar implicatures remain among the most well-studied phenomena in Experimental Pragmatics to date. Much recent work has expanded the general approach to other aspects of meaning, including presuppositions and conventional implicatures, as well as nonliteral meaning such as irony, metonymy, and metaphor.
The study of reference constitutes another core area of research in Experimental Pragmatics, and has a more extensive history of precursors in psycholinguistics proper. Reference resolution commonly requires drawing inferences beyond what is conventionally conveyed by the linguistic material at issue as well; the key concern is how comprehenders grasp the referential intentions of a speaker based on the referential expressions used in a given context, as well as how the speaker chooses an appropriate expression in the first place. Pronouns, demonstratives, and definite descriptions are crucial expressions of interest, with special attention to their relation to both intra- and extralinguistic context. Furthermore, one key line of research is concerned with speakers’ and listeners’ capacity to keep track of both their own private perspective and the shared perspective of the interlocutors in actual interaction.
Given the rapid ongoing growth of the field, many additional topical areas cannot be covered here, but the final section of the article briefly surveys further current and future areas of research.
Experimental Semiotics (ES) is a burgeoning discipline aimed at investigating, in the laboratory, the development of novel forms of human communication. Conceptually connected to experimental research on language use, ES provides a scientific complement to field studies of spontaneously emerging new languages and studies on the emergence of communication systems among artificial agents.
ES researchers have created quite a few research paradigms to investigate the development of novel forms of human communication. Despite their diversity, these paradigms all rely on the use of semiotic games, that is, games in which people can succeed reliably only after they have developed novel communication systems. Some of these games involve creating novel signs for pre-specified meanings. These games are particularly suitable for studying relatively large communication systems and their structural properties. Other semiotic games involve establishing shared meanings as well as novel signs to communicate about them. These games are typically rather challenging and are particularly suitable for investigating the processes through which novel forms of communication are created.
Considering that ES is a methodological stance rather than a well-defined research theme, researchers have used it to address a greatly heterogeneous set of research questions. Despite this, and despite the recent origins of ES, two of these questions have begun to coalesce into relatively coherent research themes.
The first theme originates from the observation that novel communication systems developed in the laboratory tend to acquire features that are similar to key features of natural language. Most notably, they tend (a) to rely on the use of symbols—that is, purely conventional signs—and (b) to adopt a combinatorial design, using a few basic units to express a large number of meanings. ES researchers have begun investigating some of the factors that lead to the acquisition of such features. These investigations suggest two conclusions. The first is that the emergence of symbols depends on the fact that, when repeatedly using non-symbolic signs, people tend to progressively abstract them. The second is that novel communication systems tend to adopt a combinatorial design more readily when their signs have low degrees of motivation and fade rapidly.
The second research theme originates from the observation that novel communication systems developed in the laboratory systematically tend to begin with motivated—that is, non-symbolic—signs. ES investigations of this tendency suggest that it occurs because motivation helps people bootstrap novel forms of communication. Put another way, these investigations show that it is very difficult for people to bootstrap communication through arbitrary signs.
Game theory provides formal means of representing and explaining action choices in social decision situations where the choices of one participant depend on the choices of another. Game theoretic pragmatics approaches language production and interpretation as a game in this sense. Patterns in language use are explained as optimal, rational, or at least nearly optimal or rational solutions to a communication problem. Three intimately related perspectives on game theoretic pragmatics are sketched here: (i) the evolutionary perspective explains language use as the outcome of some optimization process, (ii) the rationalistic perspective pictures language use as a form of rational decision-making, and (iii) the probabilistic reasoning perspective considers specifically speakers’ and listeners’ beliefs about each other. There are clear commonalities behind these three perspectives, and they may in practice blend into each other.
At the heart of game theoretic pragmatics lies the idea that speaker and listener behavior, when it comes to using a language with a given semantic meaning, are attuned to each other. By focusing on the evolutionary or rationalistic perspective, we can then give a functional account of general patterns in our pragmatic language use. The probabilistic reasoning perspective invites modeling actual speaker and listener behavior, for example, as reflected in quantitative aspects of experimental data.
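The probabilistic reasoning perspective is often formalized as nested models of speakers and listeners reasoning about each other, as in the Rational Speech Act framework. As an illustration only (the two-state scenario, the lexicon, and the rationality parameter `alpha` are assumptions chosen for this sketch, not details from the article), the following minimal model derives a scalar implicature: a pragmatic listener who hears "some" concludes that "some but not all" is the likely state, because a rational speaker in the "all" state would have said "all".

```python
# Minimal Rational Speech Act (RSA) sketch: a pragmatic listener
# reasons about a speaker, who in turn reasons about a literal listener.
worlds = ["some-not-all", "all"]
utterances = ["some", "all"]

# Literal semantics: "some" is true in both states, "all" only in "all".
lexicon = {
    ("some", "some-not-all"): 1.0,
    ("some", "all"): 1.0,
    ("all", "some-not-all"): 0.0,
    ("all", "all"): 1.0,
}
prior = {w: 0.5 for w in worlds}  # uniform prior over world states
alpha = 4.0  # speaker rationality: higher = closer to optimal choice

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

def literal_listener(u):
    # L0(w | u) proportional to [[u]](w) * P(w)
    return normalize({w: lexicon[(u, w)] * prior[w] for w in worlds})

def speaker(w):
    # S1(u | w) proportional to L0(w | u) ** alpha (zero utterance cost)
    return normalize({u: literal_listener(u)[w] ** alpha for u in utterances})

def pragmatic_listener(u):
    # L1(w | u) proportional to S1(u | w) * P(w)
    return normalize({w: speaker(w)[u] * prior[w] for w in worlds})
```

With these assumed values, `pragmatic_listener("some")` assigns most of its probability to the "some-not-all" state, even though "some" is literally true in both states: the implicature falls out of the recursive reasoning rather than the lexicon.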
Interest in the linguistics of humor is widespread and dates back to classical times. Several theoretical models have been proposed to describe and explain the function of humor in language. The most widely adopted one, the semantic-script theory of humor, was presented by Victor Raskin in 1985. Its expansion, to incorporate a broader gamut of information, is known as the General Theory of Verbal Humor. Other approaches are emerging, especially in cognitive and corpus linguistics. Within applied linguistics, the predominant approach is analysis of conversation and discourse, with a focus on the disparate functions of humor in conversation. Speakers may use humor pro-socially, to build in-group solidarity, or anti-socially, to exclude and denigrate the targets of the humor. Most of the research has focused on how humor is co-constructed and used among friends, and how speakers support it. Increasingly, corpus-supported research is beginning to reshape the field, introducing quantitative concerns, as well as multimodal data and analyses. Overall, the linguistics of humor is a dynamic and rapidly changing field.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics. Please check back later for the full article.
The concept of innateness (innate is first recorded in the period 1375–1425; from Latin innātus “inborn”) relates to types of behavior and knowledge that are present in the organism from birth (in fact, from fertilization), prior to any sensory experience with the environment. The term has been applied to two general types of qualities. The first consists of instinctive and inflexible reflexes and behaviors, which are apparent in survival, mating, and rearing activities. The other relates to cognition, with certain concepts, ideas, propositions, and particular ways of mental computation suggested to be part of one’s biological makeup. While both types of innatism have a long history in human philosophy and science (e.g., Plato and Descartes), some bias appears to exist in favor of claims for inherent behavioral traits, which are typically accepted when satisfactory empirical evidence is provided. One famous example is Lorenz’s demonstration of imprinting, a natural phenomenon that obeys a predetermined mechanism and schedule (Lorenz’s incubator-hatched goslings imprinted on his boots, the first moving object they encountered). Likewise, there seems to be little controversy in regard to predetermined ways of organizing sensory information, as is the case with the detection and classification of shapes and colors by the mind. In contrast, the idea that certain types of abstract knowledge may be part of an organism’s biological endowment (i.e., not learned) typically meets with greater skepticism, and touches on a fundamental question in epistemological philosophy: Can reason be based (to a certain extent) on a priori knowledge—that is, knowledge that precedes and is independent of experience?
The most influential and controversial claim for such innate knowledge in modern science is Chomsky’s breakthrough nativist theory of Universal Grammar in language and the famous “Argument from the Poverty of the Stimulus.” The main Chomskyan hypothesis is that all human beings share a preprogrammed linguistic infrastructure consisting of a finite collection of rules that, in principle, may generate (through combination or transformation) an infinite number of (only) grammatical sentences. Thus, the innate grammatical system constrains and structures the acquisition and use of all natural languages.
Computational models of human sentence comprehension help researchers reason about how grammar might actually be used in the understanding process. Taking a cognitivist approach, this article relates computational psycholinguistics to neighboring fields (such as linguistics), surveys important precedents, and catalogs open problems.
Marieke Woensdregt and Kenny Smith
Pragmatics is the branch of linguistics that deals with language use in context. It looks at the meaning linguistic utterances can have beyond their literal meaning (implicature), and also at presupposition and turn-taking in conversation. Thus, pragmatics lies on the interface between language and social cognition.
From the point of view of both speaker and listener, doing pragmatics requires reasoning about the minds of others. For instance, a speaker has to think about what knowledge they share with the listener to choose what information to explicitly encode in their utterance and what to leave implicit. A listener has to make inferences about what the speaker meant based on the context, their knowledge about the speaker, and their knowledge of general conventions in language use. This ability to reason about the minds of others (usually referred to as “mindreading” or “theory of mind”) is a cognitive capacity that is uniquely developed in humans compared to other animals.
A central question is how pragmatics (and the underlying ability to make inferences about the minds of others) has evolved. Biological evolution and cultural evolution are the two main processes that can lead to the development of a complex behavior over generations, and we can explore to what extent they account for what we know about pragmatics.
In biological evolution, changes happen as a result of natural selection on genetically transmitted traits. In cultural evolution, on the other hand, selection operates on skills that are transmitted through social learning. Many hypotheses have been put forward about the role that natural selection may have played in the evolution of social and communicative skills in humans (for example, as a result of changes in food sources, foraging strategy, or group size). The role of social learning and cumulative culture, however, has often been overlooked. This omission is particularly striking in the case of pragmatics, as language itself is a prime example of a culturally transmitted skill, and there is solid evidence that the pragmatic capacities that are so central to language use may themselves be partially shaped by social learning.
In light of empirical findings from comparative, developmental, and experimental research, we can consider the potential contributions of both biological and cultural evolutionary mechanisms to the evolution of pragmatics. The dynamics of both types of evolutionary processes can also be explored using experiments and computational models.