The word accent system of Tokyo Japanese may look quite complex, with its many accent patterns and rules. However, recent research has shown that it is not as complex as has been assumed once the notion of markedness is incorporated into the analysis: nouns have only two productive accent patterns, the antepenultimate and the unaccented pattern, and different accent rules can be generalized by focusing on these two productive patterns.
The word accent system also raises some interesting new issues. One of them concerns the fact that a majority of nouns are ‘unaccented,’ that is, pronounced with a rather flat pitch pattern, apparently violating the principle of obligatoriness. A careful analysis of noun accentuation reveals that this seemingly strange accent pattern occurs in linguistically predictable structures. In morphologically simplex nouns, it typically emerges in four-mora nouns ending in a sequence of light syllables. In compound nouns, on the other hand, it emerges from multiple factors, such as compound-final deaccenting morphemes, deaccenting pseudo-morphemes, and certain prosodic configurations.
Japanese pitch accent exhibits an interesting aspect in its interactions with other phonological and linguistic structures. For example, the accent of compound nouns is closely related to rendaku, or sequential voicing; the choice between the accented and unaccented patterns in certain types of compound nouns correlates with the presence or absence of sequential voicing. Moreover, whether the compound accent rule applies to a given compound depends on its internal morphosyntactic configuration as well as its meaning; in other words, the compound accent rule is blocked in certain types of morphosyntactic and semantic structures.
Finally, careful analysis of word accent sheds new light on the syllable structure of the language, notably on two interrelated questions about diphthong-hood and super-heavy syllables. It provides crucial insight into ‘diphthongs,’ that is, the question of which vowel sequences constitute diphthongs as opposed to vowel sequences spanning a syllable boundary. It also presents new evidence against trimoraic syllables in the language.
Marie K. Huffman
Articulatory phonetics is concerned with the physical mechanisms involved in producing spoken language. A fundamental goal of articulatory phonetics is to relate linguistic representations to articulator movements in real time and the consequent acoustic output that makes speech a medium for information transfer. Understanding the overall process requires an appreciation of the aerodynamic conditions necessary for sound production and the way that the various parts of the chest, neck, and head are used to produce speech. One descriptive goal of articulatory phonetics is the efficient and consistent description of the key articulatory properties that distinguish sounds used contrastively in language. There is fairly strong consensus in the field about the inventory of terms needed to achieve this goal. Despite this common, segmental, perspective, speech production is essentially dynamic in nature. Much remains to be learned about how the articulators are coordinated for production of individual sounds and how they are coordinated to produce sounds in sequence. Cutting across all of these issues is the broader question of which aspects of speech production are due to properties of the physical mechanism and which are the result of the nature of linguistic representations. A diversity of approaches is used to try to tease apart the physical and the linguistic contributions to the articulatory fabric of speech sounds in the world’s languages. A variety of instrumental techniques are currently available, and improvement in safe methods of tracking articulators in real time promises to soon bring major advances in our understanding of how speech is produced.
Clinical linguistics is the branch of linguistics that applies linguistic concepts and theories to the study of language disorders. As the name suggests, clinical linguistics is a dual-facing discipline. Although the conceptual roots of this field are in linguistics, its domain of application is the vast array of clinical disorders that may compromise the use and understanding of language. Both dimensions of clinical linguistics can be addressed through an examination of specific linguistic deficits in individuals with neurodevelopmental disorders, craniofacial anomalies, adult-onset neurological impairments, psychiatric disorders, and neurodegenerative disorders. Clinical linguists are interested in the full range of linguistic deficits in these conditions, including phonetic deficits of children with cleft lip and palate, morphosyntactic errors in children with specific language impairment, and pragmatic language impairments in adults with schizophrenia.
Like many applied disciplines in linguistics, clinical linguistics sits at the intersection of a number of areas. Its relationships to the study of communication disorders and to speech-language pathology (speech and language therapy in the United Kingdom) are two particularly important points of intersection. Speech-language pathology is the area of clinical practice that assesses and treats children and adults with communication disorders. All language disorders restrict an individual’s ability to communicate freely with others in a range of contexts and settings, so language disorders are first and foremost communication disorders. To understand language disorders, it is useful to think of them in terms of points of breakdown on a communication cycle that tracks the progress of a linguistic utterance from its conception in the mind of a speaker to its comprehension by a hearer. This cycle permits the introduction of a number of important distinctions in language pathology, such as the distinction between a receptive and an expressive language disorder, and between a developmental and an acquired language disorder. The cycle is also a useful model with which to conceptualize a range of communication disorders other than language disorders. These other disorders, which include hearing, voice, and fluency disorders, are also relevant to clinical linguistics.
Clinical linguistics draws on the conceptual resources of the full range of linguistic disciplines to describe and explain language disorders. These disciplines include phonetics, phonology, morphology, syntax, semantics, pragmatics, and discourse. Each of these linguistic disciplines contributes concepts and theories that can shed light on the nature of language disorder. A wide range of tools and approaches are used by clinical linguists and speech-language pathologists to assess, diagnose, and treat language disorders. They include the use of standardized and norm-referenced tests, communication checklists and profiles (some administered by clinicians, others by parents, teachers, and caregivers), and qualitative methods such as conversation analysis and discourse analysis. Finally, clinical linguists can contribute to debates about the nosology of language disorders. In order to do so, however, they must have an understanding of the place of language disorders in internationally recognized classification systems such as the 2013 Diagnostic and Statistical Manual of Mental Disorders (DSM-5) of the American Psychiatric Association.
Coarticulation can be characterized as an articulatory effect exerted by one phonetic segment (the trigger) onto another (the target) in the speech chain, for example, anticipatory velar lowering during a vowel preceding a syllable-final nasal consonant (send) or tongue body raising and fronting during a schwa placed next to a palatal consonant (the shore, ashamed). Coarticulatory effects have generally been investigated with reference to a single articulator (e.g., velum, lips, tongue tip, tongue body, jaw, larynx) or a given acoustic parameter (e.g., the second formant). It is therefore convenient to keep this concept separate from gestural coproduction, which refers to the spatiotemporal interaction among different articulatory structures during the realization of one or several successive phonetic segments.
Coarticulation may be measured in space and time. Thus, tongue body raising and fronting effects exerted by palatal consonants on an immediately preceding schwa are predicted to be larger and to start earlier than those exerted by the same consonant type on a preceding low or mid vowel. Moreover, the spatiotemporal effects in question may differ in direction—they may be anticipatory and thus proceed leftwards towards the preceding segment(s), or they may be carryover and thus proceed rightwards towards the following segment(s); it is commonly accepted that anticipatory effects reflect phonemic planning, while carryover effects are mainly associated with the physico-mechanical requirements of the articulatory structures. The magnitude, temporal extent, and direction of coarticulatory effects are conditioned by the place and manner of articulation of the triggering and target consonants and/or vowels, as well as by the articulatory subsystem involved in closure or constriction formation. Depending on their articulatory characteristics, vowels and consonants may differ in coarticulation resistance and aggressiveness, namely, the degree to which they block coarticulatory effects from contextual segments (resistance) and modify the articulatory characteristics of other segments (aggressiveness); thus, in a CV sequence composed of a palatal consonant and a schwa, the palatal segment is more coarticulation resistant and aggressive than the schwa. Other factors affecting coarticulation are segmental position within the word and the utterance, position with respect to word and sentence stress, sequence type (VCV, CC, and so on), speech rate, speaker, and language.
The study of coarticulation provides information about the spatiotemporal mechanisms used by speakers for the production of phonemic sequences, about phonemic planning strategies in speech, and about sound change patterns and assimilatory processes. It has been traditionally assumed that coarticulatory effects are phonetic and thus gradual, variable, and universal, while assimilations are phonological and thus categorical, systematic, and language-specific. Thus, for example, tongue body raising and fronting effects from a palatal consonant during a schwa occur to a greater or lesser extent in any speech production event (coarticulation), but may only be labeled assimilatory if they give rise to a higher and more fronted vowel, such as /e/ or /i/, in a subset of lexical items or across the lexicon of a given language (assimilation). Experimental evidence shows, however, that the division between coarticulation and assimilation is not so straightforward. Indeed, coarticulatory effects may exhibit language-dependent differences (e.g., languages may differ regarding the degree of anticipatory vowel nasalization triggered by a syllable-final nasal consonant), while processes that have been traditionally considered to be assimilatory are far from applying categorically and systematically (e.g., the extent to which /n/ assimilates in place of articulation to a following consonant in English or German may vary with the consonant itself, speaker, prosodic factors, and speech rate).
Carol A. Fowler
The theory of speech perception as direct derives from a general direct-realist account of perception. A realist stance on perception is that perceiving enables occupants of an ecological niche to know its component layouts, objects, animals, and events. “Direct” perception means that perceivers are in unmediated contact with their niche (mediated neither by internally generated representations of the environment nor by inferences made on the basis of fragmentary input to the perceptual systems). Direct perception is possible because energy arrays that have been causally structured by niche components and that are available to perceivers specify (i.e., stand in 1:1 relation to) components of the niche. Typically, perception is multi-modal; that is, perception of the environment depends on specifying information present in, or even spanning, multiple energy arrays.
Applied to speech perception, the theory begins with the observation that speech perception involves the same perceptual systems that, in a direct-realist theory, enable direct perception of the environment. Most notably, the auditory system supports speech perception, but also the visual system, and sometimes other perceptual systems. Perception of language forms (consonants, vowels, word forms) can be direct if the forms lawfully cause specifying patterning in the energy arrays available to perceivers. In Articulatory Phonology, the primitive language forms (constituting consonants and vowels) are linguistically significant gestures of the vocal tract, which cause patterning in air and on the face. Descriptions are provided of informational patterning in acoustic and other energy arrays. Evidence is next reviewed that speech perceivers make use of acoustic and cross-modal information about the phonetic gestures constituting consonants and vowels to perceive the gestures.
Significant problems arise for the viability of a theory of direct perception of speech. One is the “inverse problem,” the difficulty of recovering vocal tract shapes or actions from acoustic input. Two other problems arise because speakers coarticulate when they speak. That is, they temporally overlap production of serially nearby consonants and vowels so that there are no discrete segments in the acoustic signal corresponding to the discrete consonants and vowels that talkers intend to convey (the “segmentation problem”), and the patterning in the acoustic signal (and in optical and other modalities) is massively context-sensitive (the “invariance problem”). The present article suggests solutions to these problems.
The article also reviews signatures of a direct mode of speech perception, including that perceivers use cross-modal speech information when it is available and exhibit various indications of perception-production linkages, such as rapid imitation and a disposition to converge in dialect with interlocutors.
An underdeveloped domain within the theory concerns the very important role of longer- and shorter-term learning in speech perception. Infants develop language-specific modes of attention to acoustic speech signals (and optical information for speech), and adult listeners attune to novel dialects or foreign accents. Moreover, listeners make use of lexical knowledge and statistical properties of the language in speech perception. Some progress has been made in incorporating infant learning into a theory of direct perception of speech, but much less progress has been made in the other areas.
Holger Diessel and Martin Hilpert
Until recently, theoretical linguists have paid little attention to the frequency of linguistic elements in grammar and grammatical development. It is a standard assumption of (most) grammatical theories that the study of grammar (or competence) must be separated from the study of language use (or performance). However, this view of language has been called into question by various strands of research that have emphasized the importance of frequency for the analysis of linguistic structure. In this research, linguistic structure is often characterized as an emergent phenomenon shaped by general cognitive processes such as analogy, categorization, and automatization, which are crucially influenced by frequency of occurrence.
There are many different ways in which frequency affects the processing and development of linguistic structure. Historical linguists have shown that frequent strings of linguistic elements are prone to undergo phonetic reduction and coalescence, and that frequent expressions and constructions are more resistant to structure mapping and analogical leveling than infrequent ones. Cognitive linguists have argued that the organization of constituent structure and embedding is based on the language users’ experience with linguistic sequences, and that the productivity of grammatical schemas or rules is determined by the combined effect of frequency and similarity. Child language researchers have demonstrated that frequency of occurrence plays an important role in the segmentation of the speech stream and the acquisition of syntactic categories, and that the statistical properties of the ambient language are much more regular than commonly assumed. And finally, psycholinguists have shown that structural ambiguities in sentence processing can often be resolved by lexical and structural frequencies, and that speakers’ choices between alternative constructions in language production are related to their experience with particular linguistic forms and meanings. Taken together, this research suggests that our knowledge of grammar is grounded in experience.
Linguistic change not only affects the lexicon and the phonology of words, it also operates on the grammar of a language. In this context, grammaticalization is concerned with the development of lexical items into markers of grammatical categories or, more generally, with the development of markers used for procedural cueing of abstract relationships out of linguistic items with concrete referential meaning. A well-known example is the English verb go in its function as a future marker, as in She is going to visit her friend. Phenomena like these are very frequent across the world’s languages and across many different domains of grammatical categories. In the last 50 years, research on grammaticalization has come up with a plethora of (a) generalizations, (b) models of how grammaticalization works, and (c) methodological refinements.
On (a): Processes of grammaticalization develop gradually, step by step, and the sequence of the individual stages follows certain clines, as generalized from cross-linguistic comparison (unidirectionality). Even though there are counterexamples that go against the directionality of various clines, their number seems to be smaller than was assumed in the late 1990s.
On (b): Models or scenarios of grammaticalization integrate various factors. Depending on the theoretical background, grammaticalization and its results are motivated either by the competing motivations of economy vs. iconicity/explicitness in functional typology or by a change from movement to merger in the minimalist program. Pragmatic inference is of central importance for initiating processes of grammaticalization (and maybe also at later stages), and it activates mechanisms like reanalysis and analogy, whose status is controversial in the literature. Finally, grammaticalization not only works within individual languages/varieties but also operates across languages. In situations of contact, the existence of a certain grammatical category in one language may induce grammaticalization in another.
On (c): Even though it is hard to measure degrees of grammaticalization in terms of absolute and exact figures, it is possible to determine relative degrees of grammaticalization in terms of the autonomy of linguistic signs. Moreover, more recent research has come up with criteria for distinguishing grammaticalization and lexicalization (defined as the loss of productivity, transparency, and/or compositionality of formerly productive, transparent, and compositional structures).
In spite of these findings, there are still quite a number of questions that need further research. Two questions to be discussed address basic issues concerning the overall properties of grammaticalization. (1) What is the relation between constructions and grammaticalization? In the more traditional view, constructions are seen as the syntactic framework within which linguistic items are grammaticalized. In more recent approaches based on construction grammar, constructions are defined as combinations of form and meaning. Thus, grammaticalization can be seen in the light of constructionalization, i.e., the creation of new combinations of form and meaning. Even though constructionalization covers many aspects of grammaticalization, it does not exhaustively cover the domain of grammaticalization. (2) Is grammaticalization cross-linguistically homogeneous, or is there a certain range of variation? There is evidence from East and mainland Southeast Asia that there is cross-linguistic variation to some extent.
Young-mee Yu Cho
Due to a number of unusual and interesting properties, Korean phonetics and phonology have generated productive discussion within modern linguistic theories, starting from structuralism, moving on to classical generative grammar, and more recently to post-generative frameworks such as Autosegmental Theory, Government Phonology, and Optimality Theory, among others. In addition, it has become clear that important phonological issues cannot be properly described without reference to the interface between phonetics and phonology on the one hand, and between phonology and morpho-syntax on the other. Some phonological issues from Standard Korean are still under debate and will likely be of value in helping to elucidate universal phonological properties with regard to phonation contrast, vowel and consonant inventories, consonantal markedness, and the motivation for prosodic organization in the lexicon.
Nora C. England
Mayan languages are spoken by over 5 million people in Guatemala, Mexico, Belize, and Honduras. There are around 30 different languages today, ranging in size from fairly large (about a million speakers) to very small (fewer than 30 speakers). All Mayan languages are endangered given that at least some children in some communities are not learning the language, and two languages have disappeared since European contact. Mayas developed the most elaborated and most widely attested writing system in the Americas (starting about 300 BC).
The sounds of Mayan languages consist of a voiceless stop and affricate series with corresponding glottalized stops (either implosive or ejective) and affricates, a glottal stop, voiceless fricatives (including, in some languages, an h inherited from Proto-Maya), two to three nasals, three to four approximants, and a five-vowel system with contrastive vowel length (or tense/lax distinctions) in most languages. Several languages have developed contrastive tone.
The major word classes in Mayan languages include nouns, verbs, adjectives, positionals, and affect words. The difference between transitive and intransitive verbs is rigidly maintained in most languages. The two classes usually (but not always) take the same aspect markers. Intransitive verbs indicate only their subjects, while transitive verbs indicate both subjects and objects. Some languages have a set of status suffixes that differs for the two classes. Positionals are a root class whose most characteristic word form is a non-verbal predicate. Affect words convey impressions of sounds, movements, and activities. Nouns fall into a number of subclasses defined on the basis of their characteristics when possessed or the structure of compounds. Adjectives comprise a small class of roots (under 50) plus many forms derived from verbs and positionals.
Predicate types are transitive, intransitive, and non-verbal. Non-verbal predicates are based on nouns, adjectives, positionals, numbers, demonstratives, and existential and locative particles. They are distinct from verbs in that they do not take the usual verbal aspect markers. Mayan languages are head-marking and verb-initial; most have flexible VOA order, but some have rigid VAO order. They are morphologically ergative and also have at least some rules that show syntactic ergativity. The most common of these is a constraint on the extraction of subjects of transitive verbs (ergative) for focus and/or interrogation, negation, or relativization. In addition, some languages make a distinction between agentive and non-agentive intransitive verbs. Some can also be shown to use obviation and inverse as important organizing principles. Voice categories include the passive, the antipassive and agent focus, and an applicative with several different functions.
Chiyuki Ito and Michael J. Kenstowicz
Typologically, pitch-accent languages stand between stress languages like Spanish and tone languages like Shona, and share properties of both. In a stress language, typically just one syllable per word is accented and bears the major stress (cf. Spanish sábana ‘sheet,’ sabána ‘plain,’ panamá ‘Panama’). In a tone language, the number of distinctions grows geometrically with the size of the word. So in Shona, which contrasts high versus low tone, trisyllabic words have eight (2 × 2 × 2) possible pitch patterns. In a canonical pitch-accent language such as Japanese, just one syllable (or mora) per word is singled out as distinctive, as in Spanish. Each syllable in the word is assigned a high or low tone (as in Shona); however, this assignment is predictable based on the location of the accented syllable.
The Korean dialects spoken in the southeast Kyengsang and northeast Hamkyeng regions retain the pitch-accent distinctions that developed by the period of Middle Korean (15th–16th centuries). For example, in Hamkyeng a three-syllable word can have one of four possible pitch patterns, which are assigned by rules that refer to the accented syllable. The accented syllable has a high tone, and following syllables have low tones. The high tone of the accented syllable then spreads leftward toward the beginning of the word; the initial syllable remains low unless it is itself accented. Thus, /MUcike/ ‘rainbow’ is realized as high-low-low, /aCImi/ ‘aunt’ is realized as low-high-low, and /menaRI/ ‘parsley’ is realized as low-high-high. An atonic word such as /cintallɛ/ ‘azalea’ has the same low-high-high pitch pattern as ‘parsley’ when realized alone. But the two types are distinguished when combined with a particle such as /MAN/ ‘only’ that bears an underlying accent: /menaRI+MAN/ ‘only parsley’ is realized as low-high-high-low, while /cintallɛ+MAN/ ‘only azalea’ is realized as low-high-high-high. This difference can be explained by saying that the underlying accent on the particle is deleted if the stem bears an accent. The result is that only one syllable per word may bear an accent (similar to Spanish). On the other hand, since the accent is realized with pitch distinctions, tonal assimilation rules are prevalent in pitch-accent languages.
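The tone-assignment procedure just described can be sketched as a small function. This is an illustrative sketch only: the function name and the 1-based syllable indexing are my own, and real Hamkyeng accentuation involves complications (e.g., the treatment of atonic stems in isolation) not modeled here.

```python
def hamkyeng_pitch(n_syllables, accent):
    """Sketch of the Hamkyeng tone-assignment rules described above.

    `accent` is the 1-based position of the accented syllable. For an
    atonic stem followed by an accented particle, the particle's
    syllable counts as the accent of the whole word.
    """
    tones = []
    for i in range(1, n_syllables + 1):
        if i == accent:
            tones.append("H")   # accented syllable bears high tone
        elif i > accent:
            tones.append("L")   # syllables after the accent are low
        elif i == 1:
            tones.append("L")   # the initial syllable stays low
        else:
            tones.append("H")   # leftward spreading of the high tone
    return "".join(tones)

# The worked examples from the text:
# /MUcike/ 'rainbow'  -> hamkyeng_pitch(3, 1) == "HLL"
# /aCImi/ 'aunt'      -> hamkyeng_pitch(3, 2) == "LHL"
# /menaRI/ 'parsley'  -> hamkyeng_pitch(3, 3) == "LHH"
# /menaRI+MAN/        -> hamkyeng_pitch(4, 3) == "LHHL"
# /cintallɛ+MAN/      -> hamkyeng_pitch(4, 4) == "LHHH"
```

The sketch makes the key analytical point concrete: once the single accent location is known, the entire tonal string of the word follows deterministically.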
This article begins with a description of the Middle Korean pitch-accent system and its evolution into the modern dialects, with a focus on Kyengsang. Alternative synchronic analyses of the accentual alternations that arise when a stem is combined with inflectional particles are then considered. The discussion proceeds to the phonetic realization of the contrasting accents, their realizations in compounds and phrases, and the adaptation of loanwords. The final sections treat the lexical restructuring and variable distribution of the pitch accents and their emergence from predictable word-final accent in an earlier stage of Proto-Korean.