Malka Rappaport Hovav
Words are sensitive to syntactic context. Argument realization is the study of the relation between argument-taking words, the syntactic contexts they appear in, and the interpretive properties that constrain the relation between them.
Bert Le Bruyn, Henriëtte de Swart, and Joost Zwarts
Bare nominals (also called “bare nouns”) are nominal structures without an overt article or other determiner. The distinction between a bare noun and a noun that is part of a larger nominal structure must be made in context: Milk is a bare nominal in I bought milk, but not in I bought the milk. Bare nouns have a limited distribution: In subject or object position, English allows bare mass nouns and bare plurals, but not bare singular count nouns (*I bought table). Bare singular count nouns only appear in special configurations, such as coordination (I bought table and chairs for £182).
From a semantic perspective, it is noteworthy that bare nouns achieve reference without the support of a determiner. A full noun phrase like the cookies refers to the maximal sum of cookies in the context, because of the definite article the. English bare plurals have two main interpretations: In generic sentences they refer to the kind (Cookies are sweet), in episodic sentences they refer to some exemplars of the kind (Cookies are in the cabinet). Bare nouns typically take narrow scope with respect to other scope-bearing operators like negation.
The typology of bare nouns reveals substantial variation, and bare nouns in languages other than English may have different distributions and meanings. But genericity and narrow scope are recurring features in the cross-linguistic study of bare nominals.
Connectionism is an important theoretical framework for the study of human cognition and behavior. Also known as Parallel Distributed Processing (PDP) or Artificial Neural Networks (ANN), connectionism holds that the learning, representation, and processing of information in the mind are parallel, distributed, and interactive in nature. It argues that human cognition emerges from the operation of large networks of interactive processing units working simultaneously. Inspired by findings from neuroscience and artificial intelligence, connectionism is a powerful computational tool, and it has had a profound impact on many areas of research, including linguistics. Since the beginning of connectionism, many connectionist models have been developed to account for a wide range of important linguistic phenomena observed in monolingual research, such as speech perception, speech production, semantic representation, and early lexical development in children. Recently, the application of connectionism to bilingual research has also gathered momentum. Connectionist models are often precise in the specification of modeling parameters and flexible in the manipulation of relevant variables, and they can therefore provide significant advantages in testing the mechanisms underlying language processes.
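The core connectionist idea, namely that behavior emerges from many simple units whose weighted connections are gradually adjusted by error-driven learning, can be illustrated with a deliberately minimal sketch: a single sigmoid unit trained by an error-correction rule to compute logical OR. This toy code is not any published PDP model; all names and parameter values in it are illustrative assumptions.

```python
import math

def sigmoid(x):
    # Standard squashing activation function used by many connectionist units.
    return 1.0 / (1.0 + math.exp(-x))

def predict(weights, inputs):
    # The unit's output: a weighted sum of its inputs (plus a bias), squashed.
    x = [1.0] + list(inputs)  # prepend a constant bias input
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)))

def train(patterns, epochs=5000, rate=0.5):
    # weights[0] is the bias weight; weights[1:] connect the two input units.
    weights = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for inputs, target in patterns:
            x = [1.0] + list(inputs)
            out = sigmoid(sum(w * xi for w, xi in zip(weights, x)))
            error = target - out
            # Error-correction learning: nudge each weight in proportion to
            # the error and to that weight's input activation.
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
    return weights

# Input-output patterns for logical OR, a linearly separable mapping that a
# single unit can learn; harder mappings require networks of many units.
OR_PATTERNS = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights = train(OR_PATTERNS)
```

After training, `predict(weights, (0, 1))` is close to 1 and `predict(weights, (0, 0))` is close to 0: the "knowledge" of OR resides entirely in the learned connection weights, not in any stored rule, which is the sense in which connectionist representation is distributed.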
Displacement is a ubiquitous phenomenon in natural languages. Grammarians speak of displacement when the rules for the canonical word order of a language lead one to expect a word or phrase in a particular position in the sentence, but the word or phrase surfaces in a different position and the canonical position remains empty. ‘Which book did you buy?’ is an example of displacement: the noun phrase ‘which book’, which acts as the grammatical object of the question, does not occur in the canonical object position, which in English is after the verb. Instead, it surfaces at the beginning of the sentence and the object position remains empty. Displacement is often used as a diagnostic for constituent structure because it affects only constituents (though not all of them). In the clear cases, displaced constituents show properties associated with two distinct linear and hierarchical positions. Typically, one of these two positions c-commands the other, and the displaced element is pronounced in the c-commanding position. Displacement also interacts strongly with the path between the empty canonical position and the position where the element is pronounced: one often finds morphological changes along this path, evidence for structural placement of the displaced constituent, and constraints on displacement induced by the path.
The exact scope of displacement as an analytically unified phenomenon varies from theory to theory. If more than one type of syntactic displacement is recognized, the question of the interaction between movement types arises. Displacement phenomena are extensively studied by syntacticians. Their enduring interest derives from the fact that the complex interactions between displacement and other aspects of syntax offer a powerful probe into the inner workings and architecture of the human syntactic faculty.
Laura A. Michaelis
Meanings are assembled in various ways in a construction-based grammar, and this array can be represented as a continuum of idiomaticity, a gradient of lexical fixity. Constructional meanings are the meanings to be discovered at every point along the idiomaticity continuum. At the leftmost, or ‘fixed,’ extreme of this continuum are frozen idioms, like the salt of the earth and in the know. The set of frozen idioms includes those with idiosyncratic syntactic properties, like the fixed expression by and large (an exceptional pattern of coordination in which a preposition and adjective are conjoined). Other frozen idioms, like the unexceptionable modified noun red herring, feature syntax found elsewhere. At the rightmost, or ‘open’ end of this continuum are fully productive patterns, including the rule that licenses the string Kim blinked, known as the Subject-Predicate construction. Between these two poles are (a) lexically fixed idiomatic expressions, verb-headed and otherwise, with regular inflection, such as chew/chews/chewed the fat; (b) flexible expressions with invariant lexical fillers, including phrasal idioms like spill the beans and the Correlative Conditional, such as the more, the merrier; and (c) specialized syntactic patterns without lexical fillers, like the Conjunctive Conditional (e.g., One more remark like that and you’re out of here). Construction Grammar represents this range of expressions in a uniform way: whether phrasal or lexical, all are modeled as feature structures that specify phonological and morphological structure, meaning, use conditions, and relevant syntactic information (including syntactic category and combinatoric potential).
Agustin Vicente and Ingrid Lossius Falkum
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics.
Polysemy is the phenomenon whereby a single word form is associated with two or more related senses (e.g., run a marathon, run some water, run on gasoline, run a store). It is distinguished from monosemy, where one word form is associated with a single meaning, and from homonymy, where a single word form is associated with two or more unrelated meanings, represented as different lexemes (e.g., bank). Although the distinctions between polysemy, monosemy, and homonymy may seem clear at an intuitive level, they have proven difficult to draw in practice. For instance, none of the linguistic tests devised for this purpose gives clear-cut answers, either because the tests are context-sensitive (sometimes only a slight manipulation of the context gives rise to a different sense) or because they do not track the intuitive distinctions, identifying some kinds of polysemy as monosemy and others as instances of homonymy.
Polysemy proliferates in natural language: virtually every word is polysemous to some extent. Still, the phenomenon has been largely ignored in the mainstream linguistics literature, as well as in related disciplines. One notable exception is the cognitive linguistics framework, where polysemy has played an important role in theorizing from the outset. However, it is only recently that polysemy has been seen as a topic of relevance to linguistic and philosophical debates regarding lexical meaning representation, compositional semantics, and the semantics-pragmatics divide.
Early accounts treated polysemy in terms of sense enumeration: each sense of a polysemous expression is stored as an individual representation in the lexicon (this approach has been called the Sense Enumeration Lexicon, or SEL, for short). Polysemy and homonymy are treated on a par, both being resolved by language users selecting a sense from among the list of lexically stored senses, which then feeds into the semantic composition process.
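The sense-enumeration idea can be sketched as a simple lookup structure in which every sense, whether the senses are related (polysemy, e.g., run) or unrelated (homonymy, e.g., bank), is stored as a separate pre-listed entry. The entries and the selection heuristic below are illustrative assumptions, not a real lexical database or a claim about how selection actually works:

```python
# A toy Sense Enumeration Lexicon (SEL): each word form maps to a flat list
# of individually stored senses. Note that polysemy ('run') and homonymy
# ('bank') receive exactly the same treatment, a feature of SEL that critics
# have objected to. All entries here are illustrative.
SEL = {
    "run": [
        {"sense": "run.1", "gloss": "move fast on foot"},
        {"sense": "run.2", "gloss": "flow as a liquid"},
        {"sense": "run.3", "gloss": "operate or manage"},
    ],
    "bank": [
        {"sense": "bank.1", "gloss": "financial institution"},
        {"sense": "bank.2", "gloss": "sloping land beside a river"},
    ],
}

def select_sense(word, context_words):
    """Pick the stored sense whose gloss overlaps most with the context.

    A crude stand-in for whatever disambiguation process feeds the chosen
    sense into semantic composition; SEL itself says nothing about how
    selection works, only that the senses are pre-stored and enumerable.
    """
    context = set(context_words)
    return max(
        SEL.get(word, []),
        key=lambda entry: len(context & set(entry["gloss"].split())),
        default=None,
    )
```

For example, `select_sense("bank", ["river", "water"])` returns the `bank.2` entry, and `select_sense("run", ["manage", "the", "store"])` returns `run.3`: on the SEL picture, interpretation is nothing more than selecting one item from the stored list.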
The SEL approach has been strongly criticized on both theoretical and empirical grounds. Today, most researchers converge on the hypothesis that the senses of at least many polysemous expressions derive from a single meaning representation. One contemporary debate revolves around the status of this representation: Are the lexical representations of polysemous expressions informationally sparse and underspecified with respect to their different senses? Or must they be informationally rich in order to store, and be able to generate, all these polysemous senses? Alternatively, are senses computed from a literal, primary meaning via semantic or pragmatic mechanisms such as coercion, modulation, or ad hoc concept construction?
A related issue that has recently attracted interest is how polysemy is generated in the course of discourse, a question with important implications for accounts of semantic change. If this process is not entirely arbitrary (i.e., if the senses are related to each other in semi-predictable ways), what are the underlying mechanisms? While it is widely agreed that metaphor and metonymy are two important sources of polysemy, what consequences the source of a polysemy has (if any) for lexical representation and sense activation remains largely unexplored.
In the linguistic literature, the term theme has several interpretations, one relating to discourse analysis and two others to sentence structure. In a more general (or global) sense, one may speak about the theme or topic (or topics) of a text or discourse, that is, analyze relations that go beyond the sentence boundary and try to identify some characteristic subject(s) for the text or discourse as a whole. This kind of analysis belongs mostly to the domain of information retrieval and takes linguistically based considerations into account only in part. The main linguistically based usage of the term theme concerns relations within the sentence. On one understanding, theme is one of the (syntactico-)semantic relations and serves as the label of one of the arguments of the verb; the whole network of these relations is called thematic relations or roles (or, in the terminology of Chomskyan generative theory, theta roles and theta grids). Alternatively, from the point of view of the communicative function of language as reflected in the information structure of the sentence, the theme (or topic) of a sentence is distinguished from the rest of it (the rheme, or focus, as the case may be), and attention is paid to the semantic consequences of this dichotomy (especially in relation to presuppositions and negation) and to its realization (morphological, syntactic, prosodic) in the surface shape of the sentence. In some approaches to morphosyntactic analysis, the term theme also refers to the part of the word to which inflections are added, typically composed of the root and an added vowel.