Mark de Vries
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics. Please check back later for the full article.
A relative clause is a clausal modifier that relates to a constituent of the sentence, typically a noun phrase. This is the antecedent or “head” of the relative construction. What makes the configuration special is that the subordinate clause contains a variable that is bound by the head. For instance, in the English sentence Peter recited a poem that Anne liked, the object of the embedded verb liked is relativized. In this example the relative clause expresses a restrictive property, and the possible reference of a poem is narrowed to poems that Anne likes. However, it is also possible to construct a relative clause non-restrictively. If the example is changed to Peter recited this poem by Keats, which Anne likes, the relative clause provides additional information about the antecedent, and the internal variable, here spelled out by the relative pronoun which, is necessarily coreferential with the antecedent.
Almost all languages make use of (restrictive) relative constructions in one way or another. Various strategies of building relative clauses have been distinguished, which correlate at least partially with particular properties of languages, including word order patterns and the availability of certain pronouns. Relative clauses can follow or precede the head, or even include the head. Some languages make use of relative pronouns; others use resumptive pronouns, or simply leave the relativized argument unpronounced in the subordinate clause. Furthermore, there is cross-linguistic variation in the range of syntactic functions that can be relativized. Notably, more than one type of relative clause can be present in one language. Special types of relative constructions include free relatives (with an implied pronominal antecedent), cleft constructions, and correlatives.
There is an extensive literature on the structural analysis of relative constructions. Questions that are debated include: How can different subtypes be distinguished? How does the internal variable relate to the antecedent? How can reconstruction and anti-reconstruction effects be explained? At what structural level is the relative clause attached to the antecedent or the matrix clause?
Gender is a grammatical feature, in a family with person, number, and case. In the languages that have grammatical gender—according to a representative typological sample, almost half of the languages in the world—it is a property that separates nouns into classes. These classes are often meaningful and often linked to biological sex, which is why many languages are said to have a “masculine” and a “feminine” gender. A typical example is Italian, which has masculine words for male persons (e.g., il bambino “the child”).
Across the languages of the world, gender systems vary widely. They differ in the number of classes, in the underlying assignment rules, and in how and where gender is marked. Since agreement is a definitional property, gender is generally absent in isolating languages as well as in young languages with little bound morphology, including sign languages. Therefore, gender is considered a mature phenomenon in language.
Gender interacts in various ways with other grammatical features. For example, it may be limited to the singular number or the third person, and it may be crosscut by case distinctions. These and other interrelations can complicate the task of figuring out a gender system in first or second language acquisition. Yet, children master gender early, making use of a broad variety of cues. By contrast, gender is famously difficult for second-language learners. This is especially true for adults and for learners whose first language does not have a gender system. Nevertheless, tests show that even for this group, native-like competence is possible to attain.
Holger Diessel and Martin Hilpert
Until recently, theoretical linguists had paid little attention to the frequency of linguistic elements in grammar and grammatical development. It is a standard assumption of (most) grammatical theories that the study of grammar (or competence) must be separated from the study of language use (or performance). However, this view of language has been called into question by various strands of research that have emphasized the importance of frequency for the analysis of linguistic structure. In this research, linguistic structure is often characterized as an emergent phenomenon shaped by general cognitive processes such as analogy, categorization, and automatization, which are crucially influenced by frequency of occurrence.
There are many different ways in which frequency affects the processing and development of linguistic structure. Historical linguists have shown that frequent strings of linguistic elements are prone to undergo phonetic reduction and coalescence, and that frequent expressions and constructions are more resistant to structure mapping and analogical leveling than infrequent ones. Cognitive linguists have argued that the organization of constituent structure and embedding is based on the language users’ experience with linguistic sequences, and that the productivity of grammatical schemas or rules is determined by the combined effect of frequency and similarity. Child language researchers have demonstrated that frequency of occurrence plays an important role in the segmentation of the speech stream and the acquisition of syntactic categories, and that the statistical properties of the ambient language are much more regular than commonly assumed. And finally, psycholinguists have shown that structural ambiguities in sentence processing can often be resolved by lexical and structural frequencies, and that speakers’ choices between alternative constructions in language production are related to their experience with particular linguistic forms and meanings. Taken together, this research suggests that our knowledge of grammar is grounded in experience.
The Dravidian languages, spoken mainly in southern India and South Asia, were identified as a separate language family between 1816 and 1856. Four of the twenty-six Dravidian languages, namely Tamil, Telugu, Kannada, and Malayalam, have long literary traditions, the earliest dating back to the 1st century.
A typical characteristic of Dravidian, which is also an areal characteristic of south Asian languages, is that experiencers and inalienable possessors are case-marked dative. Another is the serialization of verbs by the use of participles, and the use of light verbs to indicate aspectual meanings such as completion, (self- or non-self) benefaction, and reflexivization. Subjects, and arguments in general (e.g., direct and indirect objects), may be non-overt. The copula, too, may be non-overt, except in Malayalam.
A number of properties of Dravidian are of interest from a universalist perspective, beginning with the observation that not all syntactic categories N, V, A, and P may be primitive. Dravidian postpositions are nominal or verbal in origin. A mere thirty proto-Dravidian roots have been identified as adjectival; these include numerals, quantifiers, and demonstratives in the proximate-distal-wh series. The adjectival function is performed by inflected verbs (participles) and nouns. The nominal encoding of experiences (as fear rather than afraid/afeared), and the absence of the verb have, arguably correlate with the appearance of dative case on experiencers. “Possessed” or genitive-marked N may fulfil the adjectival function, as also noticed for languages like Ulwa (a less exotic parallel may be adduced from the English of-possessive construction; cf. circles of light, cloth/rings of gold). More uniquely perhaps, Kannada instantiates dative-marked nouns as predicative adjectives. A recent argument that Malayalam verbs may originate as dative-marked nouns suggests that N is the only primitive syntactic category and points to the seminal role of dative case.
Other important aspects of Dravidian morphosyntax that have received attention are anaphors and pronouns, in particular the long-distance anaphor taan and the verbal reflexive morpheme; question (wh-) words and the question/disjunction morphemes, which combine in a semantically transparent way to form quantifier words like someone; the use of reduplication to indicate distributive quantification; and the occurrence of “monstrous agreement” (first-person agreement in clauses embedded under a speech predicate, triggered by matrix third person antecedents).
Traditionally, agreement has been considered the marker of finiteness in Dravidian. The negative morpheme assumes finite and non-finite forms; the occurrence of matrix non-finite verb forms in finite negative clauses challenges the current equation of finiteness and tense.
The Dravidian languages are standardly considered to be wh-in-situ languages, but wh-words, in fact, seem to move to a pre-verbal position in the unmarked word order; the consequent, apparently rightward, movement of some wh-arguments can be avoided by assuming a universal VO order, and wh-movement to a pre-verbal focus phrase.
Veneeta Dayal and Deepak Alok
Natural language allows questioning into embedded clauses. One strategy for doing so involves structures like the following: [CP-1 whi [TP DP V [CP-2 … ti …]]], where a wh-phrase that thematically belongs to the embedded clause appears in the matrix scope position. A possible answer to such a question must specify values for the fronted wh-phrase. This is the extraction strategy seen in languages like English. An alternative strategy involves a structure in which there is a distinct wh-phrase in the matrix clause. It is manifested in two types of structures. One is a close analog of extraction, but for the extra wh-phrase: [CP-1 whi [TP DP V [CP-2 whj [TP…tj…]]]]. The other simply juxtaposes two questions, rather than syntactically subordinating the second one: [CP-3 [CP-1 whi [TP…]] [CP-2 whj [TP…]]]. In both versions of the second strategy, the wh-phrase in CP-1 is invariant, typically corresponding to the wh-phrase used to question propositional arguments. There is no restriction on the type or number of wh-phrases in CP-2. Possible answers must specify values for all the wh-phrases in CP-2. This strategy is variously known as scope marking, partial wh-movement, or expletive wh-questions. Both strategies can occur in the same language. German, for example, instantiates all three possibilities: extraction, subordinated scope marking, and sequential scope marking. The scope marking strategy is also manifested in in-situ languages. Scope marking has been subjected to 30 years of research, and much is known at this time about its syntactic and semantic properties. Its pragmatic properties, however, are relatively under-studied. The acquisition of scope marking, in relation to extraction, is another area of ongoing research. One of the reasons scope marking has intrigued linguists is that it seems to defy central tenets about the nature of wh-scope taking.
For example, it presents an apparent mismatch between the number of wh expressions in the question and the number of expressions whose values are specified in the answer. It poses a challenge for our understanding of how syntactic structure feeds semantic interpretation and how alternative strategies with similar functions relate to each other.
Noun incorporation (NI) is a grammatical construction in which a nominal, usually bearing the semantic role of an object, has been incorporated into a verb to form a complex verb or predicate. Traditionally, incorporation was considered to be a word formation process, similar to compounding or cliticization. The fact that a syntactic entity (object) was entering into the lexical process of word formation was theoretically problematic, leading to many debates about the true nature of NI as a lexical or syntactic process. The analytic complexity of NI is compounded by the clear connections between NI and other processes such as possessor raising, applicatives, and classification systems, and by its relation with case, agreement, and transitivity. In some cases, it was noted that no morpho-phonological incorporation is discernible beyond perhaps adjacency and a reduced left periphery for the noun. Such cases were termed pseudo noun incorporation, as they exhibit many properties of NI, minus any actual morpho-phonological incorporation. On the semantic side, it was noted that NI often correlates with a particular interpretation in which the noun is less referential and the predicate is more general. This led semanticists to group together all phenomena with similar semantics, whether or not they involve morpho-phonological incorporation. The role of cases of morpho-phonological NI that do not exhibit this characteristic semantics, i.e., where the incorporated nominal can be referential and the action is not general, remains a matter of debate. The interplay of phonology, morphology, syntax, and semantics that is found in NI, as well as its lexical overtones, has resulted in a wide range of analyses at all levels of the grammar. What all NI constructions share is that according to various diagnostics, a thematic element, usually correlating with an internal argument, functions to a lesser extent as an independent argument and instead acts as part of a predicate.
In addition to cases of incorporation between verbs and internal arguments, there are also some cases of incorporation of subjects and adverbs, which remain less well understood.
Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism advocates that learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues for the emergence of human cognition as the outcome of large networks of interactive processing units operating simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables in the model to address relevant theoretical questions; they can therefore provide significant advantages in testing mechanisms underlying language processes.
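The kind of parallel, distributed, interactive computation described above can be illustrated with a minimal sketch: a tiny feedforward network of simple units, trained by backpropagation to learn a pattern (here, XOR) that no single unit can represent alone. The architecture, sizes, and learning rate are illustrative assumptions, not any particular published connectionist model.

```python
import numpy as np

# Minimal sketch of a PDP-style network (illustrative assumptions:
# 2 inputs, 4 hidden units, 1 output, logistic activations, XOR task).
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = sigmoid(X @ W1)          # distributed hidden representation
    out = sigmoid(h @ W2)        # all outputs computed in parallel
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the error signal and adjust both weight matrices:
    # knowledge of the pattern ends up spread across many connections.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h
```

After training, the mean squared error is lower than at the start; no single unit encodes XOR, and the "rule" emerges only from the interaction of many weighted connections, which is the core claim of the framework.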
Young-mee Yu Cho
Due to a number of unusual and interesting properties, Korean phonetics and phonology have been generating productive discussion within modern linguistic theories, starting from structuralism, moving to classical generative grammar, and more recently to post-generative frameworks of Autosegmental Theory, Government Phonology, Optimality Theory, and others. In addition, it has been discovered that a description of important issues of phonology cannot be properly made without referring to the interface between phonetics and phonology on the one hand, and phonology and morpho-syntax on the other. Some phonological issues from Standard Korean are still under debate and will likely be of value in helping to elucidate universal phonological properties with regard to phonation contrast, vowel and consonant inventories, consonantal markedness, and the motivation for prosodic organization in the lexicon.
While in phonology Middle Indo-Aryan (MIA) dialects preserved the phonological system of Old Indo-Aryan (OIA) virtually intact, their morphosyntax underwent far-reaching changes, which fundamentally altered the synthetic morphology of earlier Prākrits in the direction of the analytic typology of New Indo-Aryan (NIA). Speaking holistically, the “accusative alignment” of OIA (Vedic Sanskrit) was restructured as an “ergative alignment” in Western IA languages, and it is precisely during the Late MIA period (ca. 5th–12th centuries) that this restructuring took place.
(a) We shall start with the restructuring of the nominal case system in terms of the reduction of the number of cases from seven to four. This phonologically motivated process resulted ultimately in the rise of the binary distinction of the “absolutive” versus “oblique” case at the end of the MIA period. (b) The crucial role of animacy in the restructuring of the pronominal system and the rise of the “double-oblique” system in Ardha-Māgadhī and Western Apabhramśa will be explicated. (c) In the verbal system we witness complete remodeling of the aspectual system as a consequence of the loss of earlier synthetic forms expressing the perfective (Aorist) and “retrospective” (Perfect) aspect. Early Prākrits (Pāli) preserved their sigmatic Aorists (and the sigmatic Future) until late MIA centuries, while on the Iranian side the loss of the “sigmatic” aorist was accelerated in Middle Persian by the “weakening” of s > h > Ø. (d) The development and the establishment of “ergative alignment” at the end of the MIA period will be presented as a consequence of the above typological changes: the rise of the “absolutive” vs. “oblique” case system; the loss of the finite morphology of the perfective and retrospective aspect; and the recreation of the aspectual contrast of perfectivity by means of quasi-nominal (participial) forms. (e) Concurrently with the development toward analyticity in grammatical aspect, we witness the evolution of lexical aspect (Aktionsart) ushering in the florescence of “serial” verbs in New Indo-Aryan.
On the whole, a contingency view of alignment considers the increase in ergativity as a by-product of the restoration of the OIA aspectual triad: Imperfective–Perfective–Perfect (in morphological terms Present–Aorist–Perfect). The NIA Perfective and Perfect are aligned ergatively, while their finite OIA ancestors (Aorist and Perfect) were aligned accusatively. Detailed linguistic analysis of Middle Indo-Aryan texts offers us a unique opportunity for a deeper comprehension of the formative period of the NIA state of affairs.
In the linguistic literature, the term theme has several interpretations, one of which relates to discourse analysis and two others to sentence structure. In a more general (or global) sense, one may speak about the theme or topic (or topics) of a text (or discourse), that is, analyze relations going beyond the sentence boundary and try to identify some characteristic subject(s) for the text (discourse) as a whole. This analysis is mostly a matter of the domain of information retrieval and only partially takes linguistically based considerations into account. The main linguistically based usage of the term theme concerns relations within the sentence. Theme is understood to be one of the (syntactico-)semantic relations and is used as the label of one of the arguments of the verb; the whole network of these relations is called thematic relations or roles (or, in the terminology of Chomskyan generative theory, theta roles and theta grids). Alternatively, from the point of view of the communicative function of language as reflected in the information structure of the sentence, the theme (or topic) of a sentence is distinguished from the rest of it (the rheme, or focus, as the case may be), and attention is paid to the semantic consequences of this dichotomy (especially in relation to presuppositions and negation) and to its realization (morphological, syntactic, prosodic) in the surface shape of the sentence. In some approaches to morphosyntactic analysis, the term theme is also used to refer to the part of the word to which inflections are added, especially the part composed of the root and an added vowel.