Bert Le Bruyn, Henriëtte de Swart, and Joost Zwarts
Bare nominals (also called “bare nouns”) are nominal structures without an overt article or other determiner. The distinction between a bare noun and a noun that is part of a larger nominal structure must be made in context: Milk is a bare nominal in I bought milk, but not in I bought the milk. Bare nouns have a limited distribution: In subject or object position, English allows bare mass nouns and bare plurals, but not bare singular count nouns (*I bought table). Bare singular count nouns only appear in special configurations, such as coordination (I bought table and chairs for £182).
From a semantic perspective, it is noteworthy that bare nouns achieve reference without the support of a determiner. A full noun phrase like the cookies refers to the maximal sum of cookies in the context, because of the definite article the. English bare plurals have two main interpretations: In generic sentences they refer to the kind (Cookies are sweet); in episodic sentences they refer to some exemplars of the kind (Cookies are in the cabinet). Bare nouns typically take narrow scope with respect to other scope-bearing operators like negation.
The typology of bare nouns reveals substantial variation, and bare nouns in languages other than English may have different distributions and meanings. But genericity and narrow scope are recurring features in the cross-linguistic study of bare nominals.
Since the start of the Islamic conquest of the Maghreb in the 7th century CE, Berber and Arabic have been in continuous contact.
Linguistic influence is found on all levels: phonology, morphology, syntax, and the lexicon. Where only innovative patterns are shared between the two language groups, it is often difficult to make out where the innovation started; the great similarities in syllable structure between Maghrebian Arabic and northern Berber, for example, are the result of innovations within both language families, and it is hard to tell which began first. Morphological influence seems to be mediated exclusively by lexical borrowing. Especially in Berber, this has led to parallel systems in the morphology, where native words always have native morphology, while loans either have nativized morphology or retain Arabic-like patterns. In the lexicon, it is especially Berber that has taken over scores of loanwords from Arabic, amounting in one case to over one-third of the basic lexicon as defined by 100-word lists.
Cedric Boeckx and Pedro Tiago Martins
All humans can acquire at least one natural language. Biolinguistics is the name given to the interdisciplinary enterprise that aims to unveil the biological bases of this unique capacity.
Blocking can be defined as the non-occurrence of some linguistic form, whose existence could be expected on general grounds, due to the existence of a rival form. *Oxes, for example, is blocked by oxen, *stealer by thief. Although blocking is closely associated with morphology, in reality the competing “forms” can be not only morphemes or words but also syntactic units. In German, for example, the compound Rotwein ‘red wine’ blocks the phrasal unit *roter Wein (in the relevant sense), just as the phrasal unit rote Rübe ‘beetroot; lit. red beet’ blocks the compound *Rotrübe. In these examples, one crucial factor determining blocking is synonymy; speakers apparently have a deep-rooted presumption against synonyms. Whether homonymy can also lead to a similar avoidance strategy is still controversial. But even if homonymy blocking exists, it certainly is much less systematic than synonymy blocking.
In all the examples mentioned above, it is a word stored in the mental lexicon that blocks a rival formation. However, besides such cases of lexical blocking, one can observe blocking among productive patterns. Dutch has three suffixes for deriving agent nouns from verbal bases, -er, -der, and -aar. Of these three suffixes, the first one is the default choice, while -der and -aar are chosen in very specific phonological environments: as Geert Booij describes in The Morphology of Dutch (2002), “the suffix -aar occurs after stems ending in a coronal sonorant consonant preceded by schwa, and -der occurs after stems ending in /r/” (p. 122). Unlike lexical blocking, the effect of this kind of pattern blocking does not depend on words stored in the mental lexicon and their token frequency but on abstract features (in the case at hand, phonological features).
Blocking was first recognized by the Indian grammarian Pāṇini in the 5th or 4th century BCE.
Andrej L. Malchukov
Languages from at least five genetically unrelated families are spoken in the Caucasus, but there are only three endemic linguistic families belonging to the region: Kartvelian, West Caucasian, and Northeast Caucasian. These families are rather heterogeneous in terms of the number of languages and the distribution of the speakers across them. The Caucasus represents a situation where languages with millions of speakers have coexisted with one-village languages for hundreds of years, and where multilingualism has always been the norm. The richness of Caucasian languages on every linguistic stratum is dazzling: here we find some of the largest consonant inventories, inflectional systems where the mere number of word forms strains credibility (one of the Caucasian languages, Archi, is claimed to have over a million and a half word forms), and challenging syntactic structures. The typological interest of the Caucasian languages and the challenges they present to linguistic theory lie in different areas. Thus, for Kartvelian languages, the number of factors at play in the verbal system makes the task of producing a correct verbal form far from trivial. West Caucasian languages represent an instance of polysynthetic polypersonal verb inflection, which is unusual not only for the Caucasus but for Eurasia in general. East Caucasian languages have large systems of non-finite forms which, unusually, retain the ability to realize agreement in gender and number, while their non-finite nature is determined by the inability to head an independent clause and to express certain morpho-syntactic categories such as illocutionary force and evidentiality. Finally, all Caucasian languages are ergative to some extent.
Child phonology refers to virtually every phonetic and phonological phenomenon observable in the speech productions of children, including babbles. This includes qualitative and quantitative aspects of babbled utterances as well as all behaviors such as the deletion or modification of the sounds and syllables contained in the adult (target) forms that the child is trying to reproduce in his or her spoken utterances. This research is also increasingly concerned with issues in speech perception, a field of investigation that has traditionally followed its own course; it is only recently that the two fields have started to converge. The recent history of research on child phonology, the theoretical approaches and debates surrounding it, as well as the research methods and resources that have been employed to address these issues empirically, parallel the evolution of phonology, phonetics, and psycholinguistics as general fields of investigation. Child phonology contributes important observations, often organized in terms of developmental time periods, which can extend from the child’s earliest babbles to the stage when he or she masters the sounds, sound combinations, and suprasegmental properties of the ambient (target) language. Central debates within the field of child phonology concern the nature and origins of phonological representations as well as the ways in which they are acquired by children. Since the mid-1900s, the most central approaches to these questions have tended to fall on each side of the general divide between generative vs. functionalist (usage-based) approaches to phonology. Traditionally, generative approaches have embraced a universal stance on phonological primitives and their organization within hierarchical phonological representations, assumed to be innately available as part of the human language faculty. 
In contrast to this, functionalist approaches have utilized flatter (non-hierarchical) representational models and rejected nativist claims about the origin of phonological constructs. Since the beginning of the 1990s, this divide has been blurred significantly, both through the elaboration of constraint-based frameworks that incorporate phonetic evidence, from both speech perception and production, as part of accounts of phonological patterning, and through the formulation of emergentist approaches to phonological representation. Within this context, while controversies remain concerning the nature of phonological representations, debates are fueled by new outlooks on factors that might affect their emergence, including the types of learning mechanisms involved, the nature of the evidence available to the learner (e.g., perceptual, articulatory, and distributional), as well as the extent to which the learner can abstract away from this evidence. In parallel, recent advances in computer-assisted research methods and data availability, especially within the context of the PhonBank project, offer researchers unprecedented support for large-scale investigations of child language corpora. This combination of theoretical and methodological advances provides new and fertile grounds for research on child phonology and related implications for phonological theory.
Children’s acquisition of language is an amazing feat. Children master the syntax, the sentence structure of their language, through exposure and interaction with caregivers and others but, notably, with no formal tuition. How children come to be in command of the syntax of their language has been a topic of vigorous debate since Chomsky argued against Skinner’s claim that language is ‘verbal behavior.’ Chomsky argued that knowledge of language cannot be learned through experience alone but is guided by a genetic component. This language component, known as ‘Universal Grammar,’ is composed of abstract linguistic knowledge and a computational system that is special to language. The computational mechanisms of Universal Grammar give even young children the capacity to form hierarchical syntactic representations for the sentences they hear and produce. The abstract knowledge of language guides children’s hypotheses as they interact with the language input in their environment, ensuring they progress toward the adult grammar. An alternative school of thought denies the existence of a dedicated language component, arguing that knowledge of syntax is learned entirely through interactions with speakers of the language. Such ‘usage-based’ linguistic theories assume that language learning employs the same learning mechanisms that are used by other cognitive systems. Usage-based accounts of language development view children’s earliest productions as rote-learned phrases that lack internal structure. Knowledge of linguistic structure emerges gradually and in a piecemeal fashion, with frequency playing a large role in the order of emergence for different syntactic structures.
Clinical linguistics is the branch of linguistics that applies linguistic concepts and theories to the study of language disorders. As the name suggests, clinical linguistics is a dual-facing discipline. Although the conceptual roots of this field are in linguistics, its domain of application is the vast array of clinical disorders that may compromise the use and understanding of language. Both dimensions of clinical linguistics can be addressed through an examination of specific linguistic deficits in individuals with neurodevelopmental disorders, craniofacial anomalies, adult-onset neurological impairments, psychiatric disorders, and neurodegenerative disorders. Clinical linguists are interested in the full range of linguistic deficits in these conditions, including phonetic deficits of children with cleft lip and palate, morphosyntactic errors in children with specific language impairment, and pragmatic language impairments in adults with schizophrenia.
Like many applied disciplines in linguistics, clinical linguistics sits at the intersection of a number of areas. Its relationships to the study of communication disorders and to speech-language pathology (speech and language therapy in the United Kingdom) are two particularly important points of intersection. Speech-language pathology is the area of clinical practice that assesses and treats children and adults with communication disorders. All language disorders restrict an individual’s ability to communicate freely with others in a range of contexts and settings. So language disorders are first and foremost communication disorders. To understand language disorders, it is useful to think of them in terms of points of breakdown on a communication cycle that tracks the progress of a linguistic utterance from its conception in the mind of a speaker to its comprehension by a hearer. This cycle permits the introduction of a number of important distinctions in language pathology, such as the distinction between a receptive and an expressive language disorder, and between a developmental and an acquired language disorder. The cycle is also a useful model with which to conceptualize a range of communication disorders other than language disorders. These other disorders, which include hearing, voice, and fluency disorders, are also relevant to clinical linguistics.
Clinical linguistics draws on the conceptual resources of the full range of linguistic disciplines to describe and explain language disorders. These disciplines include phonetics, phonology, morphology, syntax, semantics, pragmatics, and discourse. Each of these linguistic disciplines contributes concepts and theories that can shed light on the nature of language disorder. A wide range of tools and approaches are used by clinical linguists and speech-language pathologists to assess, diagnose, and treat language disorders. They include the use of standardized and norm-referenced tests, communication checklists and profiles (some administered by clinicians, others by parents, teachers, and caregivers), and qualitative methods such as conversation analysis and discourse analysis. Finally, clinical linguists can contribute to debates about the nosology of language disorders. In order to do so, however, they must have an understanding of the place of language disorders in internationally recognized classification systems such as the 2013 Diagnostic and Statistical Manual of Mental Disorders (DSM-5) of the American Psychiatric Association.
Coarticulation can be characterized as an articulatory effect exerted by one phonetic segment (the trigger) onto another (the target) in the speech chain, for example, anticipatory velar lowering during a vowel preceding a syllable-final nasal consonant (send) or tongue body raising and fronting during a schwa placed next to a palatal consonant (the shore, ashamed). Coarticulatory effects have been generally investigated with reference to a single articulator (e.g., velum, lips, tongue tip, tongue body, jaw, larynx) or a given acoustic parameter (e.g., second formant). It is then convenient to keep this concept separate from gestural coproduction, which refers to the spatiotemporal interaction among different articulatory structures during the realization of one or several successive phonetic segments.
Coarticulation may be measured in space and time. Thus, tongue body raising and fronting effects exerted by palatal consonants on an immediately preceding schwa are predicted to be larger and start earlier than those exerted by the same consonant type on a preceding low or mid vowel. Moreover, the spatiotemporal effects in question may differ in direction—they may be anticipatory and thus proceed leftwards towards the preceding segment(s), or they may be carryover and thus proceed rightwards towards the following segment(s); it is commonly accepted that anticipatory effects reflect phonemic planning, while carryover effects are mainly associated with the physico-mechanical requirements of the articulatory structures. The magnitude, temporal extent, and direction of the coarticulatory effects are conditioned by the place and manner of articulation of the triggering and target consonants and/or vowels, as well as by the articulatory subsystem involved in closure or constriction formation. Depending on their articulatory characteristics, vowels and consonants may differ regarding coarticulation resistance and aggressiveness, namely, the degree to which they block coarticulatory effects from contextual segments (resistance) and modify the articulatory characteristics of other segments (aggressiveness); thus, in a CV sequence composed of a palatal consonant and a schwa, the palatal segment is more coarticulation resistant and aggressive than the schwa. Other factors affecting coarticulation are segmental position within the word and the utterance, position with respect to word and sentence stress, as well as sequence type (VCV, CC, and so on), speech rate, speaker, and language.
The study of coarticulation provides information about the spatiotemporal mechanisms used by speakers for the production of phonemic sequences, about phonemic planning strategies in speech, and about sound change patterns and assimilatory processes. It has been traditionally assumed that coarticulatory effects are phonetic and thus gradual, variable, and universal, while assimilations are phonological and thus categorical, systematic, and language-specific. Thus, for example, tongue body raising and fronting effects from a palatal consonant during a schwa occur to a greater or lesser extent in any speech production event (coarticulation), but may only be labeled assimilatory if giving rise to a higher and more frontal vowel, such as /e/ or /i/, in a subset of lexical items or across the lexicon of a given language (assimilation). Experimental evidence shows, however, that the division between coarticulation and assimilation is not so straightforward. Indeed, coarticulatory effects may exhibit language-dependent differences (e.g., languages may differ regarding the degree of anticipatory vowel nasalization triggered by a syllable-final nasal consonant), while processes that have been traditionally considered to be assimilatory are far from applying categorically and systematically (e.g., the extent to which /n/ assimilates in place of articulation to a following consonant in English or German may vary with the consonant itself, speaker, prosodic factors, and speech rate).