Evaluative morphology is the field of linguistic study that deals with the formation of diminutives, augmentatives, pejoratives, and amelioratives. Strictly speaking, evaluative constructions cross the boundaries of morphology and are sometimes realized by formal strategies that cannot be counted among word-formation processes; nevertheless, morphology plays the dominant role in the formation of evaluatives. The first attempt at an exhaustive account of this set of complex forms is found in Sergio Scalise’s 1984 work Generative Morphology, which hypothesized that evaluatives constitute a separate block of rules between inflection and derivation. This hypothesis rests on the fact that evaluatives show some properties that are derivational, others that are inflectional, and some specific properties that are neither derivational nor inflectional. Since Scalise’s proposal, almost all scholars have tried to answer the question of where evaluative rules belong within the morphological component. What the data reveal is that, in cross-linguistic perspective, evaluatives display uniform behavior from a semantic and functional point of view but exhibit a wide range of formal properties. In other words, functional identity does not imply formal identity; consequently, constructions performing the same function can be expected to display different formal properties in different languages. Thus, while evaluatives are undoubtedly derivational in most Indo-European languages (even if they cannot be considered a typical example of derivation), they are quite close to inflection in some Bantu languages. This suggests that the question of the place of evaluatives within the morphological component is probably not as crucial as scholars have thought, and that other issues, sometimes neglected in the literature, deserve equal attention. Among them, the role of pragmatics in the description of evaluatives is no doubt central.
According to Dressler and Merlini Barbaresi, in their 1994 work, Morphopragmatics: Diminutives and Intensifiers in Italian, German and Other Languages, evaluative constructions are the most typical instantiation of morphopragmatics, which is “defined as the area of general pragmatic meanings of morphological rules, that is of the regular pragmatic effects produced when moving from the input to the output of a morphological rule.” Evaluatives include “a pragmatic variable which cannot be suppressed in the description of [their] meaning.” Another central issue in studies on evaluative morphology is the wide set of semantic nuances that usually accompany diminutives, augmentatives, pejoratives, and amelioratives. For example, a diminutive form can occasionally assume a value that is attenuative, singulative, partitive, appreciative, affectionate, etc. This cluster of semantic values has often reinforced the idea that evaluatives are irregular in nature and resist any generalization. Dan Jurafsky showed in 1996 that these different meanings are often the outcome of regular and cross-linguistically recurrent semantic processes, both in a synchronic and in a diachronic perspective.
While both pragmatic theory and experimental investigations of language using psycholinguistic methods have been well-established subfields in the language sciences for a long time, the field of Experimental Pragmatics, where such methods are applied to pragmatic phenomena, has only fully taken shape since the early 2000s. By now, however, it has become a major and lively area of ongoing research, with dedicated conferences, workshops, and collaborative grant projects, bringing together researchers with linguistic, psychological, and computational approaches across disciplines. Its scope includes virtually all meaning-related phenomena in natural language comprehension and production, with a particular focus on what inferences utterances give rise to that go beyond what is literally expressed by the linguistic material.
One general area that has been explored in great depth consists of investigations of various ‘ingredients’ of meaning. A major aim has been to develop experimental methodologies to help classify various aspects of meaning, such as implicatures and presuppositions as compared to basic truth-conditional meaning, and to capture their properties more thoroughly using more extensive empirical data. The study of scalar implicatures (e.g., the inference that some but not all students left based on the sentence Some students left) has served as a catalyst of sorts in this area, and they constitute one of the most well-studied phenomena in Experimental Pragmatics to date. Much recent work has extended the general approach to other aspects of meaning, including presuppositions and conventional implicatures, as well as aspects of nonliteral meaning such as irony, metonymy, and metaphor.
The study of reference constitutes another core area of research in Experimental Pragmatics, and has a more extensive history of precursors in psycholinguistics proper. Reference resolution commonly requires drawing inferences beyond what is conventionally conveyed by the linguistic material at issue as well; the key concern is how comprehenders grasp the referential intentions of a speaker based on the referential expressions used in a given context, as well as how the speaker chooses an appropriate expression in the first place. Pronouns, demonstratives, and definite descriptions are crucial expressions of interest, with special attention to their relation to both intra- and extralinguistic context. Furthermore, one key line of research is concerned with speakers’ and listeners’ capacity to keep track of both their own private perspective and the shared perspective of the interlocutors in actual interaction.
Given the rapid ongoing growth in the field, there is a large number of additional topical areas that cannot all be mentioned here, but the final section of the article briefly mentions further current and future areas of research.
Experimental Semiotics (ES) is a burgeoning new discipline aimed at investigating in the laboratory the development of novel forms of human communication. Conceptually connected to experimental research on language use, ES provides a scientific complement to field studies of spontaneously emerging new languages and studies on the emergence of communication systems among artificial agents.
ES researchers have created quite a few research paradigms to investigate the development of novel forms of human communication. Despite their diversity, these paradigms all rely on the use of semiotic games, that is, games in which people can succeed reliably only after they have developed novel communication systems. Some of these games involve creating novel signs for pre-specified meanings. These games are particularly suitable for studying relatively large communication systems and their structural properties. Other semiotic games involve establishing shared meanings as well as novel signs to communicate about them. These games are typically rather challenging and are particularly suitable for investigating the processes through which novel forms of communication are created.
Considering that ES is a methodological stance rather than a well-defined research theme, researchers have used it to address a highly heterogeneous set of research questions. Despite this, and despite the recent origins of ES, two of these questions have begun to coalesce into relatively coherent research themes.
The first theme originates from the observation that novel communication systems developed in the laboratory tend to acquire features that are similar to key features of natural language. Most notably, they tend (a) to rely on the use of symbols—that is, purely conventional signs—and (b) to adopt a combinatorial design, using a few basic units to express a large number of meanings. ES researchers have begun investigating some of the factors that lead to the acquisition of such features. These investigations suggest two conclusions. The first is that the emergence of symbols depends on the fact that, when repeatedly using non-symbolic signs, people tend to progressively abstract them. The second is that novel communication systems adopt a combinatorial design more readily when their signs have low degrees of motivation and fade rapidly.
The second research theme originates from the observation that novel communication systems developed in the laboratory systematically tend to begin with motivated—that is, non-symbolic—signs. ES investigations of this tendency suggest that it occurs because motivation helps people bootstrap novel forms of communication. Put another way, these investigations show that it is very difficult for people to bootstrap communication through arbitrary signs.
Holger Diessel and Martin Hilpert
Until recently, theoretical linguists have paid little attention to the frequency of linguistic elements in grammar and grammatical development. It is a standard assumption of (most) grammatical theories that the study of grammar (or competence) must be separated from the study of language use (or performance). However, this view of language has been called into question by various strands of research that have emphasized the importance of frequency for the analysis of linguistic structure. In this research, linguistic structure is often characterized as an emergent phenomenon shaped by general cognitive processes such as analogy, categorization, and automatization, which are crucially influenced by frequency of occurrence.
There are many different ways in which frequency affects the processing and development of linguistic structure. Historical linguists have shown that frequent strings of linguistic elements are prone to undergo phonetic reduction and coalescence, and that frequent expressions and constructions are more resistant to structure mapping and analogical leveling than infrequent ones. Cognitive linguists have argued that the organization of constituent structure and embedding is based on the language users’ experience with linguistic sequences, and that the productivity of grammatical schemas or rules is determined by the combined effect of frequency and similarity. Child language researchers have demonstrated that frequency of occurrence plays an important role in the segmentation of the speech stream and the acquisition of syntactic categories, and that the statistical properties of the ambient language are much more regular than commonly assumed. And finally, psycholinguists have shown that structural ambiguities in sentence processing can often be resolved by lexical and structural frequencies, and that speakers’ choices between alternative constructions in language production are related to their experience with particular linguistic forms and meanings. Taken together, this research suggests that our knowledge of grammar is grounded in experience.
Game theory provides formal means of representing and explaining action choices in social decision situations where the choices of one participant depend on the choices of another. Game theoretic pragmatics approaches language production and interpretation as a game in this sense. Patterns in language use are explained as optimal, rational, or at least nearly optimal or rational solutions to a communication problem. Three intimately related perspectives on game theoretic pragmatics are sketched here: (i) the evolutionary perspective explains language use as the outcome of some optimization process, (ii) the rationalistic perspective pictures language use as a form of rational decision-making, and (iii) the probabilistic reasoning perspective considers specifically speakers’ and listeners’ beliefs about each other. There are clear commonalities behind these three perspectives, and they may in practice blend into each other.
At the heart of game theoretic pragmatics lies the idea that speaker and listener behavior, when it comes to using a language with a given semantic meaning, are attuned to each other. By focusing on the evolutionary or rationalistic perspective, we can then give a functional account of general patterns in our pragmatic language use. The probabilistic reasoning perspective invites modeling actual speaker and listener behavior, for example, as reflected in quantitative aspects of experimental data.
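The probabilistic reasoning perspective can be made concrete with a minimal sketch in the style of Rational Speech Act (RSA) models, in which listeners reason about speakers who in turn reason about literal listeners. The two-state, two-utterance lexicon and the rationality parameter below are illustrative assumptions, not part of the article; the sketch shows how a scalar implicature (hearing “some” and inferring “some but not all”) falls out of mutual reasoning.

```python
# A toy RSA-style model of the "some"/"all" scalar implicature.
# States and utterances are hypothetical, chosen only for illustration.

states = ["some-not-all", "all"]
utterances = ["some", "all"]

# Literal truth conditions: "some" is literally true even when all left.
meaning = {
    ("some", "some-not-all"): True,
    ("some", "all"): True,
    ("all", "some-not-all"): False,
    ("all", "all"): True,
}

def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()} if total else d

def literal_listener(u):
    # L0: condition a uniform prior over states on literal truth.
    return normalize({s: 1.0 if meaning[(u, s)] else 0.0 for s in states})

def speaker(s, alpha=4.0):
    # S1: prefer utterances under which L0 assigns the true state
    # high probability (alpha controls degree of rationality).
    return normalize({u: literal_listener(u)[s] ** alpha
                      if meaning[(u, s)] else 0.0
                      for u in utterances})

def pragmatic_listener(u):
    # L1: infer which state would have led the speaker to say u.
    return normalize({s: speaker(s)[u] for s in states})

print(pragmatic_listener("some"))
```

Although “some” is literally compatible with both states, the pragmatic listener assigns most of its probability to the “some but not all” state, because a speaker in the “all” state would have preferred the more informative “all.”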
Gender is a grammatical feature, in a family with person, number, and case. In the languages that have grammatical gender—according to a representative typological sample, almost half of the languages in the world—it is a property that separates nouns into classes. These classes are often meaningful and often linked to biological sex, which is why many languages are said to have a “masculine” and a “feminine” gender. A typical example is Italian, which has masculine words for male persons, such as il bambino “the (male) child.”
Across the languages of the world, gender systems vary widely. They differ in the number of classes, in the underlying assignment rules, and in how and where gender is marked. Since agreement is a definitional property, gender is generally absent in isolating languages as well as in young languages with little bound morphology, including sign languages. Therefore, gender is considered a mature phenomenon in language.
Gender interacts in various ways with other grammatical features. For example, it may be limited to the singular number or the third person, and it may be crosscut by case distinctions. These and other interrelations can complicate the task of figuring out a gender system in first or second language acquisition. Yet, children master gender early, making use of a broad variety of cues. By contrast, gender is famously difficult for second-language learners. This is especially true for adults and for learners whose first language does not have a gender system. Nevertheless, tests show that even for this group, native-like competence is possible to attain.
Different methods exist for classifying languages, depending on whether the task is to work out the relations among languages already known to be related—internal language classification—or whether the task is to establish that certain languages are related—external language classification.
The comparative method in historical linguistics, developed during the latter part of the 19th century, represents one method for internal language classification; lexicostatistics, developed during the 1950s, represents another. Elements of lexicostatistics have been transformed and carried over into modern computational linguistic phylogenetics, and currently efforts are also being made to automate the comparative method. Recent years have seen rapid progress in the development of methods, tools, and resources for language classification. For instance, computational phylogenetic algorithms and software have made it possible to handle the classification of many languages using explicit models of language change, and data have been gathered for two thirds of the world’s languages, allowing for rapid, exploratory classifications. There are also many open questions and avenues for future research, for instance: What are the real-world counterparts to the nodes in a family tree structure? How can shortcomings in the traditional method of comparative historical linguistics be overcome? How can the understanding of the results that computational linguistic phylogenetics have to offer be improved?
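The core calculation behind lexicostatistics is simple: for a fixed list of basic meanings, count the proportion of meanings for which two languages have cognate words. The sketch below illustrates that computation only; the language names, meaning list, and cognate-class codings (single letters standing for cognate classes) are invented placeholders, not real data.

```python
# Toy lexicostatistics: shared-cognate percentages over a fixed meaning list.
# Language names and cognate codings are hypothetical, for illustration only.

# meaning -> cognate class, per (hypothetical) language
cognate_classes = {
    "LangA": {"water": "a", "fire": "x", "stone": "m", "eye": "p", "two": "t"},
    "LangB": {"water": "a", "fire": "x", "stone": "m", "eye": "q", "two": "t"},
    "LangC": {"water": "b", "fire": "y", "stone": "m", "eye": "r", "two": "u"},
}

def shared_cognate_percentage(l1, l2):
    # Proportion of meanings attested in both lists whose words are cognate.
    meanings = cognate_classes[l1].keys() & cognate_classes[l2].keys()
    shared = sum(cognate_classes[l1][m] == cognate_classes[l2][m]
                 for m in meanings)
    return shared / len(meanings)

langs = sorted(cognate_classes)
for i, x in enumerate(langs):
    for y in langs[i + 1:]:
        print(x, y, f"{shared_cognate_percentage(x, y):.0%}")
```

The resulting pairwise matrix is the kind of input that distance-based phylogenetic methods then turn into an exploratory tree, grouping the most similar languages first.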
External language classification, a notoriously difficult task, has also benefitted from the advent of computational power. While, in the past, the simultaneous comparison of many languages for the purpose of discovering deep genealogical links was carried out in a haphazard fashion, leaving too much room for chance similarities, this sort of activity can now be done in a systematic, objective way on an unprecedented scale. The ways of producing final, convincing evidence for a deep genealogical relation, however, have not changed much. There is some room for improvement in this area, but even more room for improvement in the way that proposals for long-distance relations are evaluated.
Knut Tarald Taraldsen
This article presents different types of generative grammar that can be used as models of natural languages, focusing on a small subset of all the systems that have been devised. The central idea behind generative grammar may be rendered in the words of Richard Montague: “I reject the contention that an important theoretical difference exists between formal and natural languages” (“Universal Grammar,” Theoria, 36, 373–398).
D. Gary Miller
Apart from runic inscriptions, Gothic is the earliest attested language of the Germanic family, dating to the 4th century. Along with Crimean Gothic, it belongs to the branch known as East Germanic. The bulk of the extant Gothic corpus is a translation of the Bible, of which only a portion remains. The translation is traditionally ascribed to Wulfila, who is credited with inventing the Gothic alphabet. The many Greek conventions both help and hinder interpretation of the Gothic phonological system. As in Greek, letters of the alphabet functioned as numerals, but the late letter names were from runic.
The inflected word classes of Gothic are nouns, adjectives, and verbs. Nouns are inflected for three genders, two numbers, and four cases. Various stem types inherited from Indo-European constitute different form classes in Gothic. Adjectives have the same properties and are additionally inflected according to so-called weak and strong forms, as are Gothic verbs. Verbs are inflected for three persons and numbers, an indicative and a nonindicative mood (here called “optative”), past and nonpast tense, and voice. The mediopassive survives in Gothic morphologically as a synthetic passive and syntactically in innovated periphrastic formations; middle and anticausative functions were taken over by reflexive-type structures. Other categories are the imperative and the nonfinite forms, namely the infinitive and two participles.
In syntax, Gothic had null subjects as an option, mostly in the third person singular. Aspect was effected primarily by prefixes, which have many other functions, and aspect is not consistently indicated. Absolute constructions with a participle occurred in various cases with functional differences. Relativization was effected primarily by relative pronouns built on demonstratives plus a complementizer. Complementizers could be used with subordinate clause verbs in the indicative or optative. The switch to the optative was triggered by irrealis, matrix verbs that do not permit a full range of subordinate tenses, expression of a hope or wish, potentiality, and several other conditions. Many of these are also relevant to matrix clauses (independent optatives).
Essentials of linearization include prepositional phrases, default postposed genitives and possessive adjectives, and preposed demonstratives. Verb-object order predominates, but there is much Greek influence. Verb-auxiliary order is native Gothic.
Linguistic change not only affects the lexicon and the phonology of words, it also operates on the grammar of a language. In this context, grammaticalization is concerned with the development of lexical items into markers of grammatical categories or, more generally, with the development of markers used for procedural cueing of abstract relationships out of linguistic items with concrete referential meaning. A well-known example is the English verb go in its function of a future marker, as in She is going to visit her friend. Phenomena like these are very frequent across the world’s languages and across many different domains of grammatical categories. In the last 50 years, research on grammaticalization has come up with a plethora of (a) generalizations, (b) models of how grammaticalization works, and (c) methodological refinements.
On (a): Processes of grammaticalization develop gradually, step by step, and the sequence of the individual stages follows certain clines as they have been generalized from cross-linguistic comparison (unidirectionality). Even though there are counterexamples that go against the directionality of various clines, their number seems smaller than assumed in the late 1990s.
On (b): Models or scenarios of grammaticalization integrate various factors. Depending on the theoretical background, grammaticalization and its results are motivated either by the competing motivations of economy vs. iconicity/explicitness in functional typology or by a change from movement to merger in the minimalist program. Pragmatic inference is of central importance for initiating processes of grammaticalization (and maybe also at later stages), and it activates mechanisms like reanalysis and analogy, whose status is controversial in the literature. Finally, grammaticalization operates not only within individual languages/varieties but also across languages. In situations of contact, the existence of a certain grammatical category in one language may induce grammaticalization in another.
On (c): Even though it is hard to measure degrees of grammaticalization in terms of absolute and exact figures, it is possible to determine relative degrees of grammaticalization in terms of the autonomy of linguistic signs. Moreover, more recent research has come up with criteria for distinguishing grammaticalization and lexicalization (defined as the loss of productivity, transparency, and/or compositionality of former productive, transparent, and compositional structures).
In spite of these findings, there are still quite a number of questions that need further research. Two questions to be discussed address basic issues concerning the overall properties of grammaticalization. (1) What is the relation between constructions and grammaticalization? In the more traditional view, constructions are seen as the syntactic framework within which linguistic items are grammaticalized. In more recent approaches based on construction grammar, constructions are defined as combinations of form and meaning. Thus, grammaticalization can be seen in the light of constructionalization, i.e., the creation of new combinations of form and meaning. Even though constructionalization covers many aspects of grammaticalization, it does not cover the domain of grammaticalization exhaustively. (2) Is grammaticalization cross-linguistically homogeneous, or is there a certain range of variation? There is evidence from East and mainland Southeast Asia that there is cross-linguistic variation to some extent.