Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism holds that the learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues that human cognition emerges from large networks of interactive processing units operating simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginnings of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables, and they can therefore provide significant advantages in testing the mechanisms underlying language processes.
Agustin Vicente and Ingrid Lossius Falkum
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics.
Polysemy is characterized as the phenomenon whereby a single word form is associated with two or more related senses (e.g., run a marathon, run some water, run on gasoline, run a store, etc.). It is distinguished from monosemy, where one word form is associated with a single meaning, and homonymy, where a single word form is associated with two or more unrelated meanings, represented as different lexemes (e.g., bank). Although the distinctions between polysemy, monosemy, and homonymy may seem clear at an intuitive level, they have proven difficult to draw in practice. For instance, none of the linguistic tests devised for this purpose gives clear-cut answers, either because they are context-sensitive (sometimes, only a slight manipulation of the context may give rise to a different sense), or because they do not track the intuitive distinctions, identifying some kinds of polysemy as monosemy and others as instances of homonymy.
Polysemy proliferates in natural language: virtually every word is polysemous to some extent. Still, the phenomenon has been largely ignored in the mainstream linguistics literature, as well as in related disciplines. One notable exception is the cognitive linguistics framework, where polysemy has played an important role in theorizing from the outset. However, it is only recently that polysemy has been seen as a topic of relevance to linguistic and philosophical debates regarding lexical meaning representation, compositional semantics, and the semantics-pragmatics divide.
Early accounts treated polysemy in terms of sense enumeration: each sense of a polysemous expression is stored as an individual representation in the lexicon (this approach has been called the Sense Enumeration Lexicon, or SEL, for short). Polysemy and homonymy are treated on a par, both being resolved by language users selecting a sense from among the list of lexically stored senses, which then feeds into the semantic composition process.
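The SEL idea can be made concrete with a toy sketch (not from the article; all names and glosses here are hypothetical illustrations): each word form maps to an enumerated list of stored senses, and interpretation amounts to selecting one entry from that list, treating polysemous and homonymous forms identically.

```python
# Illustrative sketch of a Sense Enumeration Lexicon (SEL), assuming a
# simple keyword-overlap heuristic for sense selection. The lexicon and
# glosses are hypothetical examples, not a real lexical database.
SEL = {
    # homonymy: unrelated senses of one form, listed side by side
    "bank": ["financial institution", "edge of a river"],
    # polysemy: related senses, stored the same way under SEL
    "run": [
        "move quickly on foot",
        "flow as a liquid",
        "operate as a machine",
        "manage a business",
    ],
}

def resolve(form, context_words):
    """Pick the first stored sense whose gloss shares a word with the context.

    On the SEL view, this selected sense is what feeds into semantic
    composition; polysemy and homonymy are resolved by the same lookup.
    """
    for sense in SEL.get(form, []):
        if any(word in sense.split() for word in context_words):
            return sense
    return None

print(resolve("run", ["business"]))   # selects the 'manage a business' sense
print(resolve("bank", ["river"]))     # selects the 'edge of a river' sense
```

The sketch also makes the standard criticism of SEL easy to see: since every sense must be listed in advance, the model has no way to generate the novel but related senses that polysemous words routinely acquire in context.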
The SEL approach has been strongly criticized on both theoretical and empirical grounds. Today, most researchers converge on the hypothesis that the senses of at least many polysemous expressions derive from a single meaning representation. One contemporary debate revolves around the status of this representation: Are the lexical representations of polysemous expressions informationally scarce and underspecified with respect to their different senses? Or must they be informationally rich in order to store, and be able to generate, all these polysemous senses? Alternatively, are senses computed from a literal, primary meaning via semantic or pragmatic mechanisms such as coercion, modulation, or ad hoc concept construction?
A related issue that has recently attracted interest is how polysemy is generated or constructed in the course of discourse, a question with important implications for accounts of semantic change. If this process is not entirely arbitrary (i.e., if the senses are related to each other in semi-predictable ways), what are the underlying mechanisms? While it is widely agreed that metaphor and metonymy are two important sources of polysemy, what consequences the source of a polysemous sense may have (if any) for lexical representation and sense activation remains largely unexplored.