Dene-Yeniseian is a proposed genealogical link between the widespread North American language family Na-Dene (Athabaskan, Eyak, Tlingit) and Yeniseian in central Siberia, represented today by the critically endangered Ket and several documented extinct relatives. The Dene-Yeniseian hypothesis is an old idea, but since 2006 new evidence supporting it has been published in the form of shared morphological systems and a modest number of lexical cognates showing interlocking sound correspondences. Recent data from human genetics and folklore studies also increasingly indicate the plausibility of a prehistoric (probably Late Pleistocene) connection between populations in northwestern North America and the traditionally Yeniseian-speaking areas of south-central Siberia. At present, however, Dene-Yeniseian cannot be accepted as a proven language family. The purported lexical and morphological correspondences between Yeniseian and Na-Dene must first be expanded and tested by further critical analysis, and the relationship of these families to Old World families such as Sino-Tibetan and Caucasian, as well as to the isolate Burushaski (all earlier proposed as relatives of Yeniseian, and sometimes also of Na-Dene), must become clearer.
Željko Bošković and Troy Messick
Economy considerations have always played an important role in the generative theory of grammar. They are particularly prominent in the most recent instantiation of this approach, the Minimalist Program, which explores the possibility that Universal Grammar is an optimal way of satisfying requirements imposed on the language faculty by the external systems that interface with it, the language faculty itself being characterized by optimal, computationally efficient design. In this respect, the operations of the computational system that produce linguistic expressions must be optimal in that they must satisfy general considerations of simplicity and efficient design. Simply put, the guiding principles here are (a) do something only if you need to and (b) if you do need to, do it in the most economical/efficient way. These considerations ban superfluous steps in derivations and superfluous symbols in representations. Under economy guidelines, movement takes place only when there is a need for it (with both syntactic and semantic considerations playing a role here), and when it does take place, it takes place in the most economical way: it is as short as possible and carries as little material as possible. Furthermore, economy is evaluated locally, on the basis of immediately available structure. The locality of syntactic dependencies is also enforced by minimal search and by limiting the number of syntactic objects and the amount of structure accessible in the derivation. This is achieved by transferring parts of syntactic structure to the interfaces during the derivation, the transferred parts not being accessible for further syntactic operations.
Diglossia refers to a situation where two linguistic varieties coexist within a given speech community. One variety, labeled the ‘high variety’, is used in formal domains including education, while the other variety, labeled the ‘low variety’, is used principally in instances of informal extemporaneous communication. The domains of use, however, are not strictly separate, especially with the increase in electronic modes of communication. This results in what has been described as diglossic code-switching and, in the case under consideration here, in the gradual encroachment of vernacular Arabic upon the domains of use of Standard Arabic.
While the genetic relationship between the two varieties is central in the definition of a classical diglossic situation as in the case of Arabic, the concept of diglossia has often been extended in the literature to cover situations of a functional distribution between languages that are genetically distant, such as with the situation of Spanish and Guaraní in Paraguay.
In North Africa, vernacular Arabic is in a classical diglossic distribution with Standard Arabic, while the Berber languages are often described as existing in a situation of extended diglossia with Arabic. However, distinguishing between diglossia as it exists between the Arabic dialects and Standard Arabic and the situation of bilingualism that involves Arabic, Berber, and European languages provides the best framework for describing the linguistic situation in North Africa. Diglossia is a key element in understanding the mechanisms of the region’s language contact and change as it plays a central role in shaping language attitude, language policy, and language planning.
Dispersion Theory concerns the constraints that govern contrasts, the phonetic differences that can distinguish words in a language. Specifically, it posits that there are distinctiveness constraints that favor contrasts that are more perceptually distinct over less distinct contrasts. The preference for distinct contrasts is hypothesized to follow from a preference to minimize perceptual confusion: in order to recover what a speaker is saying, a listener must identify the words in the utterance. The more confusable words are, the more likely a listener is to make errors. Because contrasts are the minimal permissible differences between words in a language, banning indistinct contrasts reduces the likelihood of misperception.
The term ‘dispersion’ refers to the separation of sounds in perceptual space that results from maximizing the perceptual distinctiveness of the contrasts between those sounds, and is adopted from Lindblom’s Theory of Adaptive Dispersion, a theory of phoneme inventories according to which inventories are selected so as to maximize the perceptual differences between phonemes. These proposals follow a long tradition of explaining cross-linguistic tendencies in the phonetic and phonological form of languages in terms of a preference for perceptually distinct contrasts.
Flemming proposes that distinctiveness constraints constitute one class of constraints in an Optimality Theoretic model of phonology. In this context, distinctiveness constraints predict several basic phenomena, the first of which is the preference for maximal dispersion in inventories of contrasting sounds that first motivated the development of the Theory of Adaptive Dispersion. But distinctiveness constraints are formulated as constraints on the surface forms of possible words that interact with other phonological constraints, so they evaluate the distinctiveness of contrasts in context. As a result, Dispersion Theory predicts that contrasts can be neutralized or enhanced in particular phonological contexts. This prediction arises because the phonetic realization of sounds depends on their context, so the perceptual differences between contrasting sounds also depend on context. If the realization of a contrast in a particular context would be insufficiently distinct (i.e., it would violate a high-ranked distinctiveness constraint), there are two options: the offending contrast can be neutralized, or it can be modified (‘enhanced’) to make it more distinct.
A basic open question regarding Dispersion Theory concerns the proper formulation of distinctiveness constraints and the extent of variation in their rankings across languages, issues that are tied up with questions about the nature of perceptual distinctiveness. Another concerns the size and nature of the comparison set of contrasting word-forms required to evaluate whether a candidate output satisfies distinctiveness constraints.
Displacement is a ubiquitous phenomenon in natural languages. Grammarians often speak of displacement in cases where the rules for the canonical word order of a language lead to the expectation of finding a word or phrase in a particular position in the sentence whereas it surfaces instead in a different position and the canonical position remains empty: ‘Which book did you buy?’ is an example of displacement because the noun phrase ‘which book’, which acts as the grammatical object in the question, does not occur in the canonical object position, which in English is after the verb. Instead, it surfaces at the beginning of the sentence and the object position remains empty. Displacement is often used as a diagnostic for constituent structure because it affects only (but not all) constituents. In the clear cases, displaced constituents show properties associated with two distinct linear and hierarchical positions. Typically, one of these two positions c-commands the other and the displaced element is pronounced in the c-commanding position. Displacement also shows strong interactions with the path between the empty canonical position and the position where the element is pronounced: one often encounters morphological changes along this path and evidence for structural placement of the displaced constituent, as well as constraints on displacement induced by the path.
The exact scope of displacement as an analytically unified phenomenon varies from theory to theory. If more than one type of syntactic displacement is recognized, the question of the interaction between movement types arises. Displacement phenomena are extensively studied by syntacticians. Their enduring interest derives from the fact that the complex interactions between displacement and other aspects of syntax offer a powerful probe into the inner workings and architecture of the human syntactic faculty.
Jonathan David Bobaljik
Distributed Morphology (DM) is a framework in theoretical morphology, characterized by two core tenets: (i) that the internal hierarchical structure of words is, in the first instance, syntactic (complex words are derived syntactically), and (ii) that the syntax operates on abstract morphemes, defined in terms of morphosyntactic features, and that the spell-out (realization, exponence) of these abstract morphemes occurs after the syntax. Distributing the functions of the classical morpheme in this way allows for analysis of mismatches between the minimal units of grammatical combination and the minimal units of sound. Much work within the framework is nevertheless guided by seeking to understand restrictions on such mismatches, balancing the need for the detailed description of complex morphological data in individual languages against an attempt to explain broad patterns in terms of restrictions imposed by grammatical principles.
In the Early Modern English period (1500–1700), steps were taken toward Standard English, and this was also the time when Shakespeare wrote, but these perspectives are only part of the bigger picture. This chapter looks at Early Modern English as a variable and changing language not unlike English today. Standardization is found particularly in spelling, and new vocabulary was created as a result of the spread of English into various professional and occupational specializations. New research using digital corpora, dictionaries, and databases reveals the gradual nature of these processes. Ongoing developments were no less gradual in pronunciation, with processes such as the Great Vowel Shift, or in grammar, where many changes resulted in new means of expression and greater transparency. Word order was also subject to gradual change, becoming more fixed over time.
Chris Rogers and Lyle Campbell
The reduction of the world’s linguistic diversity has accelerated over the last century and correlates with a loss of knowledge, collective and individual identity, and social value. Often a language is pushed out of use before scholars and language communities have a chance to document or preserve this linguistic heritage. Many are concerned about this loss, believing it to be one of the most serious issues facing humanity today. To address the issues concomitant with an endangered language, we must know how to define “endangerment,” how different situations of endangerment can be compared, and how each language fits into the cultural practices of individuals. The discussion about endangered languages focuses on addressing the needs, causes, and consequences of this loss.
Concern over endangered languages is not just an academic catchphrase. It involves real people and communities struggling with real social, political, and economic issues. Understanding the causes and consequences of language endangerment for these individuals and communities requires a multifaceted perspective on the place of each language in the lives of its users. The loss of a language affects not only the world’s linguistic diversity but also an individual’s social identity and a community’s sense of itself and its history.
The Eskimo-Aleut language family consists of two quite different branches, Aleut and Eskimo. The latter consists of Yupik and Inuit languages. It is spoken from the eastern coast of Russia to Greenland. The family is thought to have developed and diverged in Alaska between 4,000 and 6,000 years ago, although recent findings in a variety of fields suggest a more complex prehistory than previously assumed. The language family shares certain characteristics, including polysynthetic word formation, an originally ergative-absolutive case system (now substantially modified in Aleut), SOV word order, and more or less similar phonological systems across the language family, involving voiceless stop and voiced fricative consonant series often in alternation, and an originally four-vowel system frequently reduced to three. The languages in the family have undergone substantial postcolonial contact effects, especially evident in (although not restricted to) loanwords from the respective colonial languages. There is extensive language documentation for all languages, although not necessarily all dialects. Most languages and dialects are severely endangered today, with the exception of Eastern Canadian Inuit and Greenlandic (Kalaallisut). There are also theoretical studies of the languages in many linguistic fields, although the languages are unevenly covered, and there are still many more studies of the phonologies and syntaxes of the respective languages than other aspects of grammar.