Blocking can be defined as the non-occurrence of a linguistic form, whose existence would be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, the competing “forms” need not be morphemes or words; they can also be syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can also lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it is certainly much less systematic than synonymy blocking.
In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. Besides such cases of lexical blocking, however, one can also observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases: -er, -der, and -aar. Of these three suffixes, -er is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). In contrast to lexical blocking, this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency, but on abstract features (in the case at hand, phonological features).
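Booij’s distributional statement amounts to a small deterministic rule: abstract phonological features of the stem, not stored words, determine the winning suffix. The following toy sketch illustrates this in Python; the phonemic transcriptions, the set of coronal sonorants, and the check order (in which /r/-final stems take -der before the -aar condition is tested) are illustrative assumptions, not claims from the literature.

```python
# Toy sketch of Dutch agent-noun allomorphy treated as pattern blocking.
# Stems are simplified phonemic strings; 'ə' marks schwa. Transcriptions
# and helper names are illustrative assumptions, not from the article.

CORONAL_SONORANTS = {"n", "l", "r"}

def agent_suffix(stem: str) -> str:
    """Pick -er / -der / -aar for a phonemic verbal stem."""
    if stem.endswith("r"):
        # -der occurs after stems ending in /r/ (e.g., huur- 'rent')
        return "der"
    if len(stem) >= 2 and stem[-1] in CORONAL_SONORANTS and stem[-2] == "ə":
        # -aar occurs after a coronal sonorant preceded by schwa
        # (e.g., wandel- 'walk' -> wandelaar)
        return "aar"
    # -er is the default elsewhere (e.g., werk- 'work' -> werker)
    return "er"

assert agent_suffix("vɑndəl") == "aar"   # wandel- -> wandelaar
assert agent_suffix("hyr") == "der"      # huur- -> huurder
assert agent_suffix("vɛrk") == "er"      # werk- -> werker
```

The point of the sketch is that no lexicon of stored agent nouns is consulted: the choice follows entirely from features of the stem, which is what distinguishes pattern blocking from lexical blocking.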
Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BCE.
The Japanese psycholinguistics research field is moving rapidly in many different directions, as it spans various linguistic subfields (e.g., phonetics/phonology, syntax, semantics, pragmatics, discourse studies). Naturally, diverse studies have reported intriguing findings that shed light on the human language mechanism. This article presents a brief overview of some notable early 21st-century studies, mainly from the language acquisition and processing perspectives. The topics are divided into sections on the sound system, the script forms, reading and writing, morpho-syntactic studies, word and sentential meanings, and pragmatics and discourse studies. Studies on special populations are also mentioned.
Studies on the Japanese sound system have advanced our understanding of L1 and L2 (first and second language) acquisition and processing. For instance, further evidence indicates that infants form adult-like phonological grammar by 14 months in L1, and a dissociation between prosody and comprehension has been reported in L2. Various cognitive factors, as well as the L1, influence the L2 acquisition process. Because Japanese employs three script forms (hiragana, katakana, and kanji) within a single sentence, orthographic processing research reveals multiple pathways for processing information, as well as the influence of memory. Adult script decoding and lexical processing have been well studied, and data from special populations further help us understand the vision-to-language mapping mechanism. Morpho-syntactic and semantic studies include a long-standing debate between nativist (generative) and statistical-learning approaches to L1 acquisition; in particular, work on inflectional morphology and quantificational scope interaction in L1 acquisition brings out the pros and cons of each approach taken on its own. Investigating processing mechanisms means studying cognitive/perceptual devices. Relative clause processing has been much discussed for Japanese because Japanese has a different word order (SOV) from English (SVO), allows unpronounced pronouns and pre-verbal word permutations, and has no relative clause marking at the verbal ending (i.e., it is morphologically identical to the matrix ending). Behavioral and neurolinguistic data increasingly support incremental processing, as in SVO languages, and an expectancy-driven processor in the L1 brain. L2 processing, however, requires more study to uncover its mechanism, as the literature is scarce on both L2 English by Japanese speakers and L2 Japanese by non-Japanese speakers. Pragmatic and discourse processing is also an area that needs to be explored further.
Despite the typological difference between English and Japanese, the studies cited here indicate that our acquisition and processing devices seem to adjust locally while maintaining the universal mechanism.
Aidan Pine and Mark Turin
The world is home to an extraordinary level of linguistic diversity, with roughly 7,000 languages currently spoken and signed. Yet this diversity is highly unstable and is being rapidly eroded through a series of complex and interrelated processes that result in or lead to language loss. The combination of monolingualism and increasingly technologized networks of global trade languages has led to over half of the world’s population speaking one of only 13 languages. Such linguistic homogenization leaves in its wake a linguistic landscape that is increasingly endangered.
A wide range of factors contribute to language loss and attrition. While some—such as natural disasters—are unique to particular language communities and specific geographical regions, many have similar origins and are common across endangered language communities around the globe. The harmful legacy of colonization and the enduring impact of disenfranchising policies relating to Indigenous and minority languages are at the heart of language attrition from New Zealand to Hawai’i, and from Canada to Nepal.
Language loss does not occur in isolation, nor is it inevitable or in any way “natural.” The process also has wide-ranging social and economic repercussions for the language communities in question. Language is so heavily intertwined with cultural knowledge and political identity that speech forms often serve as meaningful indicators of a community’s vitality and social well-being. More than ever before, there are vigorous and collaborative efforts underway to reverse the trend of language loss and to reclaim and revitalize endangered languages. Such approaches vary significantly, from making use of digital technologies in order to engage individual and younger learners to community-oriented language nests and immersion programs. Because revitalization programs draw on such diverse techniques and communities, the question of how to measure their success has driven research forward in the areas of statistical assessment of linguistic diversity, endangerment, and vulnerability. Current efforts are re-evaluating the established triad of documentation-conservation-revitalization in favor of more unified, holistic, and community-led approaches.
Agustin Vicente and Ingrid Lossius Falkum
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics. Please check back later for the full article.
Polysemy is characterized as the phenomenon whereby a single word form is associated with two or more related senses (e.g., run a marathon, run some water, run on gasoline, run a store, etc.). It is distinguished from monosemy, where one word form is associated with a single meaning, and homonymy, where a single word form is associated with two or more unrelated meanings, represented as different lexemes (e.g., bank). Although the distinctions between polysemy, monosemy, and homonymy may seem clear at an intuitive level, they have proven difficult to draw in practice. For instance, none of the linguistic tests devised for this purpose give clear-cut answers, either because they are context-sensitive (sometimes, only a slight manipulation of the context may give rise to a different sense), or because they do not track the intuitive distinctions, identifying some kinds of polysemy as monosemy and others as instances of homonymy.
Polysemy proliferates in natural language: virtually every word is polysemous to some extent. Still, the phenomenon has been largely ignored in the mainstream linguistics literature, as well as in related disciplines. One notable exception is the cognitive linguistics framework, where polysemy has played an important role in theorizing from the outset. However, it is only recently that polysemy has been seen as a topic of relevance to linguistic and philosophical debates regarding lexical meaning representation, compositional semantics, and the semantics-pragmatics divide.
Early accounts treated polysemy in terms of sense enumeration: each sense of a polysemous expression is stored as an individual representation in the lexicon (this approach has been called the Sense Enumeration Lexicon, or SEL, for short). Polysemy and homonymy are treated on a par, both being resolved by language users selecting a sense from among the list of lexically stored senses, which then feeds into the semantic composition process.
The SEL approach has been strongly criticized on both theoretical and empirical grounds. Today, most researchers converge on the hypothesis that the senses of at least many polysemous expressions derive from a single meaning representation. One contemporary debate revolves around the status of this representation: Are the lexical representations of polysemous expressions informationally sparse and underspecified with respect to their different senses? Or do they have to be informationally rich in order to store, and be able to generate, all these polysemous senses? Alternatively, are senses computed from a literal, primary meaning via semantic or pragmatic mechanisms such as coercion, modulation, or ad hoc concept construction?
A related issue that has recently attracted interest is how polysemy is generated or constructed in the course of discourse, a question that has important implications for accounts of semantic change. If this process is not entirely arbitrary (i.e., the senses are related to each other in semi-predictable ways), what are the underlying mechanisms? While it is widely agreed that two important sources of polysemy are metaphor and metonymy, what consequences the source of a polysemy may have (if any) for lexical representation and sense activation remains largely unexplored.
Christina L. Gagné
Psycholinguistics is the study of how language is acquired, represented, and used by the human mind; it draws on knowledge about both language and cognitive processes. A central topic of debate in psycholinguistics concerns the balance between storage and processing. This debate is especially evident in research concerning morphology, which is the study of word structure, and several theoretical issues have arisen concerning the question of how (or whether) morphology is represented and what function morphology serves in the processing of complex words. Five theoretical approaches have emerged that differ substantially in the emphasis placed on the role of morphemic representations during the processing of morphologically complex words. The first approach minimizes processing by positing that all words, even morphologically complex ones, are stored and recognized as whole units, without the use of morphemic representations. The second approach posits that words are represented and processed in terms of morphemic units. The third approach is a mixture of the first two approaches and posits that a whole-access route and decomposition route operate in parallel. A fourth approach posits that both whole word representations and morphemic representations are used, and that these two types of information interact. A fifth approach proposes that morphology is not explicitly represented, but rather, emerges from the co-activation of orthographic/phonological representations and semantic representations. These competing approaches have been evaluated using a wide variety of empirical methods examining, for example, morphological priming, the role of constituent and word frequency, and the role of morphemic position. For the most part, the evidence points to the involvement of morphological representations during the processing of complex words. However, the specific way in which these representations are used is not yet fully known.
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics. Please check back later for the full article.
The distinction between representations and processes is central to most models of the cognitive science of language. Linguistic theory informs the types of representations assumed, and these representations are what are taken to be the targets of second language acquisition. Epistemologically, this is often taken to be knowledge, or knowledge-that. Techniques such as Grammaticality-Judgement tasks are paradigmatic as we seek to gain insight into what a learner’s grammar looks like. Learners behave as if certain phonological, morphological, or syntactic strings (which may or may not be target-like) were well-formed. It is the task of the researcher to understand the nature of the knowledge that governs those well-formedness beliefs.
Traditional accounts of processing, on the other hand, look to the real-time use of language, either in production or perception, and invoke discussions of skill or knowledge-how. A range of experimental psycholinguistic techniques have been used to assess these skills: self-paced reading, eye-tracking, ERPs, priming, lexical decision, AXB discrimination, etc. Such online measures can show us how we do language when it comes to activities such as production or comprehension.
There has long been a connection between linguistic theory and theories of processing, as evidenced by the work of Berwick and Weinberg (The Grammatical Basis of Linguistic Performance). The task of the parser is to assign abstract structure to a phonological, morphological, or syntactic string; structure that does not come directly labelled in the acoustic input. Processing studies such as those of the garden-path phenomenon have revealed that grammaticality and processability are distinct constructs.
In some models, however, the distinction between grammar and processing is less sharp. Phillips says that “parsing is grammar”, while O’Grady builds an emergentist theory with no grammar, only processing. Bayesian models of acquisition, and indeed of knowledge, assume that the grammars we set up are governed by a principle of entropy, which governs other aspects of human behavior; knowledge and skill are combined. Exemplar models view the processing of the input as the storing of all phonetic detail in the environment rather than of abstract categories; the categories emerge via a process of comparing exemplars.
Linguistic theory helps us to understand the processing of input to acquire new L2 representations, and the access of those representations in real time.
Patrice Speeter Beddor
In their conversational interactions with speakers, listeners aim to understand what a speaker is saying; that is, they aim to arrive at the linguistic message conveyed by the input speech signal, a message interwoven with social and other information. Across more than 60 years of speech perception research, a foundational issue has been to account for listeners’ ability to achieve stable linguistic percepts corresponding to the speaker’s intended message despite highly variable acoustic signals. Research has especially focused on acoustic variants attributable to the phonetic context in which a given phonological form occurs and on variants attributable to the particular speaker who produced the signal. These context- and speaker-dependent variants reveal the complex, albeit informationally rich, patterns that bombard listeners in their everyday interactions.
How do listeners deal with these variable acoustic patterns? Empirical studies that address this question provide clear evidence that perception is a malleable, dynamic, and active process. Findings show that listeners perceptually factor out, or compensate for, the variation due to context yet also use that same variation in deciding what a speaker has said. Similarly, listeners adjust, or normalize, for the variation introduced by speakers who differ in their anatomical and socio-indexical characteristics, yet listeners also use that socially structured variation to facilitate their linguistic judgments. Investigations of the time course of perception show that these perceptual accommodations occur rapidly, as the acoustic signal unfolds in real time. Thus, listeners closely attend to the phonetic details made available by different contexts and different speakers. The structured, lawful nature of this variation informs perception.
Speech perception changes over time not only in listeners’ moment-by-moment processing, but also across the life span of individuals as they acquire their native language(s), non-native languages, and new dialects and as they encounter other novel speech experiences. These listener-specific experiences contribute to individual differences in perceptual processing. However, even listeners from linguistically homogeneous backgrounds differ in their attention to the various acoustic properties that simultaneously convey linguistically and socially meaningful information. The nature and source of listener-specific perceptual strategies serve as an important window on perceptual processing and on how that processing might contribute to sound change.
Theories of speech perception aim to explain how listeners interpret the input acoustic signal as linguistic forms. A theoretical account should specify the principles that underlie accurate, stable, flexible, and dynamic perception as achieved by different listeners in different contexts. Current theories differ in their conception of the nature of the information that listeners recover from the acoustic signal, with one fundamental distinction being whether the recovered information is gestural or auditory. Current approaches also differ in their conception of the nature of phonological representations in relation to speech perception, although there is increasing consensus that these representations are more detailed than the abstract, invariant representations of traditional formal phonology. Ongoing work in this area investigates how both abstract information and detailed acoustic information are stored and retrieved, and how best to integrate these types of information in a single theoretical model.