While both pragmatic theory and experimental investigations of language using psycholinguistic methods have long been well-established subfields in the language sciences, the field of Experimental Pragmatics, where such methods are applied to pragmatic phenomena, has only fully taken shape since the early 2000s. By now, however, it has become a major and lively area of ongoing research, with dedicated conferences, workshops, and collaborative grant projects, bringing together researchers with linguistic, psychological, and computational approaches across disciplines. Its scope includes virtually all meaning-related phenomena in natural language comprehension and production, with a particular focus on the inferences that utterances give rise to beyond what is literally expressed by the linguistic material.
One general area that has been explored in great depth consists of investigations of various ‘ingredients’ of meaning. A major aim has been to develop experimental methodologies that help classify various aspects of meaning, such as implicatures and presuppositions as compared to basic truth-conditional meaning, and to capture their properties more thoroughly using more extensive empirical data. The study of scalar implicatures (e.g., the inference that some but not all students left based on the sentence Some students left) has served as a catalyst of sorts in this area, and they constitute one of the most well-studied phenomena in Experimental Pragmatics to date. Much recent work, however, has extended the general approach to other aspects of meaning, including presuppositions and conventional implicatures, as well as to nonliteral meaning, such as irony, metonymy, and metaphor.
The study of reference constitutes another core area of research in Experimental Pragmatics, and has a more extensive history of precursors in psycholinguistics proper. Reference resolution, too, commonly requires drawing inferences beyond what is conventionally conveyed by the linguistic material at issue; the key concern is how comprehenders grasp the referential intentions of a speaker based on the referential expressions used in a given context, as well as how the speaker chooses an appropriate expression in the first place. Pronouns, demonstratives, and definite descriptions are crucial expressions of interest, with special attention to their relation to both intra- and extralinguistic context. Furthermore, one key line of research is concerned with speakers’ and listeners’ capacity to keep track of both their own private perspective and the shared perspective of the interlocutors in actual interaction.
Given the rapid ongoing growth in the field, there is a large number of additional topical areas that cannot all be mentioned here, but the final section of the article briefly mentions further current and future areas of research.
Interest in the linguistics of humor is widespread and dates back to classical times. Several theoretical models have been proposed to describe and explain the function of humor in language. The most widely adopted one, the Semantic Script Theory of Humor, was presented by Victor Raskin in 1985. Its expansion, which incorporates a broader range of information, is known as the General Theory of Verbal Humor. Other approaches are emerging, especially in cognitive and corpus linguistics. Within applied linguistics, the predominant approach is the analysis of conversation and discourse, with a focus on the disparate functions of humor in conversation. Speakers may use humor pro-socially, to build in-group solidarity, or anti-socially, to exclude and denigrate the targets of the humor. Most of the research has focused on how humor is co-constructed and used among friends, and how speakers support it. Increasingly, corpus-supported research is beginning to reshape the field, introducing quantitative concerns, as well as multimodal data and analyses. Overall, the linguistics of humor is a dynamic and rapidly changing field.
Agustin Vicente and Ingrid Lossius Falkum
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics.
Polysemy is characterized as the phenomenon whereby a single word form is associated with two or more related senses (e.g., run a marathon, run some water, run on gasoline, run a store, etc.). It is distinguished from monosemy, where one word form is associated with a single meaning, and from homonymy, where a single word form is associated with two or more unrelated meanings, represented as different lexemes (e.g., bank). Although the distinctions between polysemy, monosemy, and homonymy may seem clear at an intuitive level, they have proven difficult to draw in practice. For instance, none of the linguistic tests devised for this purpose give clear-cut answers, either because they are context-sensitive (sometimes, only a slight manipulation of the context may give rise to a different sense), or because they do not track the intuitive distinctions, identifying some kinds of polysemy as monosemy and others as instances of homonymy.
Polysemy proliferates in natural language: virtually every word is polysemous to some extent. Still, the phenomenon has been largely ignored in the mainstream linguistics literature, as well as in related disciplines. One notable exception is the cognitive linguistics framework, where polysemy has played an important role in theorizing from the outset. However, it is only recently that polysemy has been seen as a topic of relevance to linguistic and philosophical debates regarding lexical meaning representation, compositional semantics, and the semantics-pragmatics divide.
Early accounts treated polysemy in terms of sense enumeration: each sense of a polysemous expression is stored as an individual representation in the lexicon (this approach has been called the Sense Enumeration Lexicon, or SEL, for short). Polysemy and homonymy are treated on a par, both being resolved by language users selecting a sense from among the list of lexically stored senses, which then feeds into the semantic composition process.
The SEL approach has been strongly criticized on both theoretical and empirical grounds. Today, most researchers converge on the hypothesis that the senses of at least many polysemous expressions derive from a single meaning representation. One contemporary debate revolves around the status of this representation: Are the lexical representations of polysemous expressions informationally sparse and underspecified with respect to their different senses? Or must they be informationally rich in order to store and generate all of these senses? Alternatively, are senses computed from a literal, primary meaning via semantic or pragmatic mechanisms such as coercion, modulation, or ad hoc concept construction?
A related issue that has recently attracted interest is how polysemy is generated or constructed in the course of discourse, a question that has important implications for accounts of semantic change. If this process is not entirely arbitrary (i.e., if the senses are related to each other in semi-predictable ways), what are the underlying mechanisms? While it is widely agreed that two important sources of polysemy are metaphor and metonymy, the question of what consequences (if any) the source of a polysemy has for lexical representation and sense activation remains largely unexplored.