The word accent system of Tokyo Japanese may look quite complex, with its many accent patterns and rules. However, recent research has shown that it is not as complex as has long been assumed if one incorporates the notion of markedness into the analysis: nouns have only two productive accent patterns, the antepenultimate pattern and the unaccented pattern, and the seemingly different accent rules admit a single generalization once one focuses on these two productive patterns.
The word accent system also raises some interesting new issues. One of them concerns the fact that a majority of nouns are ‘unaccented’: they are pronounced with a rather flat pitch pattern, apparently violating the principle of obligatoriness. A careful analysis of noun accentuation reveals that this seemingly strange accent pattern occurs in linguistically predictable structures. In morphologically simplex nouns, it typically emerges in four-mora nouns ending in a sequence of light syllables. In compound nouns, on the other hand, it emerges from multiple factors, such as compound-final deaccenting morphemes, deaccenting pseudo-morphemes, and certain prosodic configurations.
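As a rough procedural restatement of this analysis, the following Python sketch predicts a noun’s accent from the two productive patterns. The mora-list input and the light-syllable flag are simplifying assumptions made here for illustration; real nouns can, of course, be lexically specified in ways that override these defaults.

```python
# A minimal sketch of the two productive accent patterns of Tokyo
# Japanese nouns described above. The representation (a noun as a list
# of moras plus a light-syllable flag) is an assumption for exposition.

def predict_accent(moras, ends_in_light_syllables):
    """Return the 0-based index of the accented mora, or None if unaccented."""
    # Unaccented pattern: typical of four-mora nouns ending in a
    # sequence of light (monomoraic) syllables.
    if len(moras) == 4 and ends_in_light_syllables:
        return None
    # Antepenultimate pattern: accent on the third mora from the end
    # (or on the initial mora of shorter nouns).
    return max(0, len(moras) - 3)

print(predict_accent(['to', 'mo', 'da', 'chi'], True))  # None: tomodachi 'friend'
print(predict_accent(['ka', 'ra', 'su'], True))         # 0: KArasu 'crow'
```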
Japanese pitch accent also interacts in interesting ways with other phonological and linguistic structures. For example, the accent of compound nouns is closely tied to rendaku, or sequential voicing: in certain types of compound nouns, the choice between the accented and unaccented patterns correlates with the presence or absence of sequential voicing. Moreover, whether the compound accent rule applies to a given compound depends on its internal morphosyntactic configuration as well as its meaning; in other words, the compound accent rule is blocked in certain morphosyntactic and semantic structures.
Finally, careful analysis of word accent sheds new light on the syllable structure of the language, notably on two interrelated questions about diphthong-hood and super-heavy syllables. It provides crucial insight into which vowel sequences constitute ‘diphthongs,’ as opposed to vowel sequences spanning a syllable boundary, and it presents new evidence against trimoraic syllables in the language.
Katie Wagner and David Barner
Human experience of color results from a complex interplay of perceptual and linguistic systems. At the lowest level of perception, the human visual system transforms the visible-light portion of the electromagnetic spectrum into a rich, continuous three-dimensional experience of color. Despite our ability to perceptually discriminate millions of different color shades, most languages divide color into a number of discrete categories. While the meanings of color words are constrained by perception, perception does not fully define them. Once color words are acquired, they may in turn influence our memory and processing speed for color, although it is unlikely that language influences the lowest levels of color perception.
One approach to examining how perception and language together shape our experience of color is to study children as they acquire color language. Children produce color words in speech for many months before acquiring adult meanings for them. Research in this area has focused on whether children’s difficulties stem from (a) an inability to identify color properties as likely candidates for word meanings, or alternatively (b) the inductive learning of language-specific color word boundaries. Lending plausibility to the first account, there is evidence that children more readily attend to object traits like shape, rather than color, as likely candidates for word meanings. However, recent studies have found that children have meanings for some color words before they begin to produce them in speech, indicating that they may in fact successfully identify color as a candidate for word meaning early in the learning process. There is also evidence that prelinguistic infants, like adults, perceive color categorically. While these perceptual categories likely constrain the meanings that children consider, they cannot fully define color word meanings, because languages vary in both the number and location of color word boundaries. Recent evidence suggests that the delay in color word acquisition stems primarily from the inductive process of refining these boundaries.
A growing phenomenon in urban centers on the African continent in the latter half of the 20th century and the start of the 21st has been what have been described as ‘Urban Youth Languages,’ although the ‘urban’ moniker is increasingly being dropped as these phenomena spread out from cities to rural areas. The term tends to refer to language phenomena such as Sheng or Engsh in Kenya, Tsotsitaal in South Africa, Nouchi in Ivory Coast, Camfranglais in Cameroon, and many more, both named and unnamed. These language styles are used and innovated predominantly by young people, and in this way they are distinguished from the large urban vernaculars present in African urban centers, such as urban Wolof.
African (Urban) Youth Languages usually utilize a dominant urban language as the grammatical base, such as Swahili in Nairobi’s Sheng and Zulu or Sotho in Johannesburg’s Tsotsitaal, and they feature a great deal of lexical borrowing from the other languages present in Africa’s highly multilingual urban contexts, such as the colonial languages and the local African languages common to a particular urban center. They may also utilize a dominant European language as the grammatical base, such as French in the case of Camfranglais, with borrowings from English and African languages. They draw strikingly on metaphor and pop culture in the innovation of new terms. These varieties are ‘languages relexicalised,’ in Halliday’s terms, and are used by young people for creativity and entertainment, to have fun with peers, to affirm in-group relations, and to indicate status.
“Altaic” is a common term applied by linguists to a number of language families spread across Central Asia and the Far East that share a large, most likely non-coincidental number of structural and morphemic similarities. At the onset of Altaic studies, these similarities were ascribed to the one-time existence of an ancestral language, “Proto-Altaic,” from which all these families are descended; circumstantial evidence and glottochronological calculations tentatively date this language to some time around the 6th–7th millennium BCE.
The debate over the nature of the relationship between the various units that constitute “Altaic,” sometimes referred to as “the Altaic controversy,” has been one of the most contentious topics in 20th-century historical linguistics and a major focal point of studies dealing with the prehistory of Central and East Eurasia. Supporters of “Proto-Altaic,” commonly known as “(pro-)Altaicists,” claim that only divergence from an original common ancestor can account for the observed regular phonetic correspondences and other structural similarities, whereas “anti-Altaicists,” without denying the existence of such similarities, insist that they do not belong to the “core” layers of the respective languages and are therefore better explained as the results of lexical borrowing and other forms of areal linguistic contact.
As a rule, “pro-Altaicists” claim that “Proto-Altaic” is as reconstructible by means of the classic comparative method as any uncontroversial linguistic family; in support of this view, they have produced several attempts to assemble large bodies of etymological evidence for the hypothesis, backed by systems of regular phonetic correspondences between compared languages. All of these, however, have been heavily criticized by “anti-Altaicists” for lack of methodological rigor, implausibility of proposed phonetic and/or semantic changes, and confusion of recent borrowings with items allegedly inherited from a common ancestor. Despite the validity of many of these objections, it remains unclear whether they are sufficient to completely discredit the hypothesis of a genetic connection between the various branches of “Altaic,” which continues to be actively supported by a small, but stable scholarly minority.
K. A. Jayaseelan
The Dravidian languages have a long-distance reflexive anaphor taan (taan in Tamil and Malayalam, taanu in Kannada, and tanu in Telugu). As is the case with other long-distance anaphors, it is subject-oriented; it is also [+human] and third person. Interestingly, it is infelicitous if bound within the minimal clause when it is an argument of the verb (that is, it seems to obey Principle B of the binding theory). Although it is subject-oriented in the normal case, it can be bound by a non-subject if the verb is a “psych predicate,” that is, a predicate that denotes a feeling; in this case, it can be bound by the experiencer of the feeling. Again, in a discourse that depicts the thoughts, feelings, or point of view of a protagonist—the so-called “logophoric contexts”—it can be coreferential with the protagonist even if the latter is mentioned only in the preceding discourse (not within the sentence). These latter facts suggest that the anaphor is in fact coindexed with the perspective of the clause rather than with the subject per se. In cases where the anaphor needs to be coindexed with the minimal subject (to express a meaning like ‘John loves himself’), the Dravidian languages exhibit two strategies to circumvent the Principle B effect. Malayalam adds an emphasis marker tanne to the anaphor; taan tanne can corefer with the minimal subject. This strategy parallels that of European and East Asian languages (cf. Scandinavian seg selv). The other three major Dravidian languages—Tamil, Telugu, and Kannada—use a verbal reflexive: they add a light verb koL- (lit. ‘take’) to the verbal complex, which has the effect of reflexivizing the transitive predicate (it either makes the verb intransitive or gives it a self-benefactive meaning).
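The distribution just described can be summarized schematically in code. The Python sketch below encodes the binding conditions on taan as stated above; the feature names, the predicate flags, and the reflexivized flag (covering both Malayalam taan tanne and the koL- strategy) are invented here for exposition.

```python
# Schematic summary of the binding conditions on Dravidian taan
# described above. All flags and feature names are expository inventions.

def taan_can_be_bound_by(antecedent, clause_mate, is_subject,
                         psych_experiencer=False, logophoric=False,
                         reflexivized=False):
    """Can taan take this antecedent, given the clause configuration?"""
    # taan is third person and [+human].
    if antecedent['person'] != 3 or not antecedent['human']:
        return False
    # Principle B-like effect: a clause-mate antecedent requires a
    # reflexivization strategy (taan tanne in Malayalam; the light verb
    # koL- in Tamil, Telugu, and Kannada).
    if clause_mate and not reflexivized:
        return False
    # Otherwise taan is subject-oriented, with two exceptions: the
    # experiencer of a psych predicate, and a discourse protagonist in
    # logophoric contexts.
    return is_subject or psych_experiencer or logophoric

john = {'person': 3, 'human': True}
print(taan_can_be_bound_by(john, clause_mate=True, is_subject=True))   # False
print(taan_can_be_bound_by(john, clause_mate=True, is_subject=True,
                           reflexivized=True))                         # True
print(taan_can_be_bound_by(john, clause_mate=False, is_subject=True))  # True
```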
The Dravidian languages also have reciprocal and distributive anaphors. These have bipartite structures. An example of a Malayalam reciprocal anaphor is oral … matte aaL (‘one person … other person’). The distributive anaphor in Malayalam has the form awar-awar (‘they-they’); it is a reduplicated pronoun. The reciprocals and distributives are strict anaphors in the sense that they apparently obey Principle A; they must be bound in the domain of the minimal subject. They are not subject-oriented.
A noteworthy fact about the pronominal system of Dravidian is that the third person pronouns come in proximal-distal pairs, the proximal pronoun being used to refer to something nearby and the distal pronoun being used elsewhere.
Susan Edwards and Christos Salis
Aphasia is an acquired language disorder subsequent to brain damage in the left hemisphere. It is characterized by diminished abilities to produce and understand both spoken and written language compared with the speaker’s presumed abilities before the cerebral damage. The type and severity of the aphasia depend not only on the location and extent of the cerebral damage but also on the effect the lesion has on connecting areas of the brain. Type and severity of aphasia are diagnosed in comparison with assumed normal adult language; language changes associated with normal aging are not classed as aphasia. Aphasia in children is unusual, and its diagnosis and assessment take account of age norms.
The most common cause of aphasia is a cerebrovascular accident (CVA), commonly referred to as a stroke, but brain damage following traumatic head injury, such as that sustained in road accidents or from gunshot wounds, can also cause aphasia. Aphasia following such traumatic events is non-progressive, in contrast to aphasia arising from a brain tumor, some types of infection, or the language disturbances in progressive conditions such as Alzheimer’s disease, where the language disturbance increases as the disease progresses.
The diagnosis of primary progressive aphasia (as opposed to the non-progressive aphasia that is the main focus of this article) is based on inclusion and exclusion criteria set out by M. Marsel Mesulam in 2001. The inclusion criteria are difficulty with language that interferes with activities of daily living, and aphasia as the most prominent symptom. The exclusion criteria are other non-degenerative disease or medical disorder; a psychiatric diagnosis; prominent initial impairment of episodic memory, visual memory, or visuo-perceptual processing; and, finally, initial behavioral disturbance.
Aphasia involves one or more of the building blocks of language: phonemes, morphology, lexis, syntax, and semantics; the deficits occur in various clusters or patterns across the spectrum. The degree of impairment varies across modalities, with written language often, but not always, more affected than spoken language. In some cases, understanding of language is relatively preserved; in others, both production and understanding are affected. In addition to varied degrees of impairment in spoken and written language, any one component of language, or more than one, can be affected. At the most severe end of the spectrum, a person with aphasia may be unable to communicate by either speech or writing and may understand virtually nothing, or only very limited social greetings. At the least severe end of the spectrum, the aphasic speaker may experience occasional word-finding difficulties, often involving nouns; but unlike the difficulties in recalling proper nouns in normal aging, word retrieval problems in mild aphasia include other word classes.
Descriptions of different clusters of language deficits have led to the notion of aphasia syndromes. Despite great variation in the condition, the patterns of language deficits associated with damage to different areas of the brain have been influential in understanding language-brain relationships. Increasing sophistication in language assessment and neurological investigation is contributing to a greater, yet still incomplete, understanding of these relationships.
Japanese is a language in which the grammatical status of arguments and adjuncts is marked exclusively by postnominal case markers, so various argument realization patterns can be assessed by their case marking. Typologically, Japanese is a language of the nominative-accusative type, and the unmarked case-marking frame for transitive predicates of the non-stative (or eventive) type is ‘nominative-accusative’. Nevertheless, transitive predicates falling into the stative class often show other case-marking alignments, such as ‘nominative-nominative’ and ‘dative-nominative’. Consequently, Japanese displays a far wider range of argument realization patterns than its typological character as a nominative-accusative language would lead one to expect.
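The alignments just mentioned can be restated compactly as a mapping from predicate class to case frame; the Python dictionary below is only a summary device, and the class labels are informal rather than established terminology.

```python
# Compact restatement of the case-marking alignments described above.
# Labels are informal; actual frames are lexically conditioned.
case_frames = {
    'eventive (non-stative) transitive': ('nominative', 'accusative'),
    'stative transitive (type A)':       ('nominative', 'nominative'),
    'stative transitive (type B)':       ('dative', 'nominative'),
}

for predicate_class, (subj, obj) in case_frames.items():
    print(f'{predicate_class}: {subj}-{obj}')
```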
Argument marking is in fact even more elastic and variable, with the variations motivated by several linguistic factors. Arguments often have the option of receiving either syntactic or semantic case, with no difference in logical or cognitive meaning (as in plural agent and source agent alternations) or depending on the meanings their predicates carry (as in the locative alternation). Case marking that is not normally available in main clauses can sometimes be obtained in embedded contexts (i.e., in exceptional case marking and small-clause constructions). In complex predicates, including causative and indirect passive predicates, arguments are case-marked differently from their base clauses by virtue of suffixation, and their case patterns follow the mono-clausal case array despite the fact that the predicates have multi-clausal structures.
Various case-marking options are also made available to arguments by grammatical operations. Some processes change the grammatical relations and case marking of arguments with no affixation or embedding: Japanese has a grammatical process of subjectivization, which creates extra (non-thematic) major subjects, many of them identifiable as instances of ‘possessor raising’ (or argument ascension). Another type of grammatical process reduces the number of arguments by incorporating a noun into the predicate, as found in the light verb constructions with suru ‘do’ and the complex adjective constructions formed on the negative adjective nai ‘non-existent.’
Malka Rappaport Hovav
Words are sensitive to syntactic context. Argument realization is the study of the relation between argument-taking words, the syntactic contexts they appear in, and the interpretive properties that constrain the relation between them.
Marie K. Huffman
Articulatory phonetics is concerned with the physical mechanisms involved in producing spoken language. A fundamental goal of articulatory phonetics is to relate linguistic representations to articulator movements in real time and the consequent acoustic output that makes speech a medium for information transfer. Understanding the overall process requires an appreciation of the aerodynamic conditions necessary for sound production and the way that the various parts of the chest, neck, and head are used to produce speech. One descriptive goal of articulatory phonetics is the efficient and consistent description of the key articulatory properties that distinguish sounds used contrastively in language. There is fairly strong consensus in the field about the inventory of terms needed to achieve this goal. Despite this common, segmental, perspective, speech production is essentially dynamic in nature. Much remains to be learned about how the articulators are coordinated for production of individual sounds and how they are coordinated to produce sounds in sequence. Cutting across all of these issues is the broader question of which aspects of speech production are due to properties of the physical mechanism and which are the result of the nature of linguistic representations. A diversity of approaches is used to try to tease apart the physical and the linguistic contributions to the articulatory fabric of speech sounds in the world’s languages. A variety of instrumental techniques are currently available, and improvement in safe methods of tracking articulators in real time promises to soon bring major advances in our understanding of how speech is produced.
Autosegments were introduced by John Goldsmith in his 1976 MIT dissertation to represent tone and other suprasegmental phenomena. Goldsmith’s intuition, embodied in the term he created, was that autosegments constituted an independent, conceptually equal tier of phonological representation, with both tiers realized simultaneously like the separate voices in a musical score.
The analysis of suprasegmentals came late to generative phonology, even though it had been tackled in American structuralism with the long components of Harris 1944 and despite being a particular focus of Firthian prosodic analysis. The standard version of generative phonology of the era (Chomsky & Halle’s The Sound Pattern of English) made no special provision for phenomena that had been labeled suprasegmental or prosodic by earlier traditions.
An early sign that tones required a separate tier of representation was the phenomenon of tonal stability: in many tone languages, when vowels are lost historically or synchronically, their tones remain. The behavior of contour tones in many languages also falls into place when the contours are broken down into sequences of level tones on an independent level of representation. The autosegmental framework captured this naturally, since a sequence of elements on one tier can be connected to a single element on another. But the single most compelling aspect of the early autosegmental model was its natural account of tone spreading, a very common process that was only awkwardly captured by rules of whatever sort. Goldsmith’s autosegmental solution was the well-formedness condition, requiring, among other things, that every tone on the tonal tier be associated with some segment on the segmental tier, and vice versa. Tones thus spread more or less automatically to segments lacking them. The well-formedness condition, at the very core of the autosegmental framework, was a rarity in a rule-based era: a constraint, posited nearly two decades before optimality theory.
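These association conventions lend themselves to a direct procedural rendering. The Python sketch below implements one textbook version: one-to-one association from left to right, rightward spreading of the final tone, and docking of leftover tones on the final tone-bearing unit as a contour. Directionality and repair choices differ across analyses, so this is an illustration rather than the definitive algorithm.

```python
# A textbook-style rendering of autosegmental association: link tones to
# tone-bearing units (TBUs) one-to-one from left to right, then satisfy
# the well-formedness condition by spreading the final tone rightward and
# docking leftover tones onto the final TBU (yielding a contour tone).

def associate(tones, tbus):
    links = []  # (tone index, TBU index) pairs
    # One-to-one association, left to right.
    for i in range(min(len(tones), len(tbus))):
        links.append((i, i))
    # More TBUs than tones: spread the last tone (tone spreading).
    for j in range(len(tones), len(tbus)):
        links.append((len(tones) - 1, j))
    # More tones than TBUs: dock extras on the last TBU (contour tone).
    for i in range(len(tbus), len(tones)):
        links.append((i, len(tbus) - 1))
    return links

# 'H L' over three syllables: the L spreads to the final syllable.
print(associate(['H', 'L'], ['ta', 'ko', 'mi']))  # [(0, 0), (1, 1), (1, 2)]
# 'H L' crowded onto one syllable: a falling (HL) contour.
print(associate(['H', 'L'], ['ta']))              # [(0, 0), (1, 0)]
```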
One-to-many associations and spreading onto adjacent elements are characteristic of tone but not confined to it. Similar behaviors are widespread in long-distance phenomena including intonation, vowel harmony, and nasal prosodies, as well as more locally with partial or full assimilation across adjacent segments. A major discovery, in Mark Liberman’s 1975 MIT dissertation, was that autosegmental tiers have hierarchical structure, with Goldsmith’s autosegments as the terminal elements of those structures.
The early autosegmental notion of tiers of representation that were distinct but conceptually equal soon gave way to a model with one basic tier—called the skeleton or CV tier—connected to tiers for particular kinds of articulation, including tone and intonation, nasality, vowel features, and others. This has led to hierarchical representations of phonological features in current models of feature geometry, replacing the unordered distinctive feature matrices of early generative phonology.
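Such a geometry is naturally modeled as a tree rather than a flat matrix. The nested structure below is a simplified sketch for a voiceless /t/-like segment; the node names follow common usage, but the exact geometry, especially the placement of stricture features like [nasal], varies across proposals.

```python
# A simplified feature-geometry tree in the spirit described above:
# features hang off class nodes rather than sitting in an unordered
# matrix. The exact geometry varies across proposals.
t_segment = {
    'root': {
        'continuant': '-',
        'nasal': '-',
        'laryngeal': {'voice': '-'},
        'place': {'coronal': {'anterior': '+'}},
    },
}

# The payoff of the hierarchy: a rule can target a whole class node,
# e.g. copying the 'place' subtree from a neighboring segment models
# place assimilation as a single operation.
```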
Autosegmental representations and processes also provide a means of representing nonconcatenative morphology, notably the complex interweaving of roots and patterns in Semitic languages.
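To make the root-and-pattern idea concrete, the sketch below keeps the consonantal root and the vocalic melody on separate tiers and interleaves them through a CV template. The Arabic forms are standard textbook examples; the function itself is merely illustrative.

```python
# Root-and-pattern morphology in the autosegmental spirit: the
# consonantal root and the vocalic melody occupy separate tiers and are
# merged by a CV template.

def interleave(template, root, melody):
    """Fill C slots from the root and V slots from the melody, in order."""
    consonants, vowels = iter(root), iter(melody)
    return ''.join(next(consonants) if slot == 'C' else next(vowels)
                   for slot in template)

# Arabic root k-t-b 'write':
print(interleave('CVCVC', 'ktb', 'aa'))    # katab  'wrote'
print(interleave('CVCVC', 'ktb', 'ui'))    # kutib  'was written'
print(interleave('CVCVVC', 'ktb', 'iaa'))  # kitaab 'book'
```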