Laura A. Michaelis
Meanings are assembled in various ways in a construction-based grammar, and this array can be represented as a continuum of idiomaticity, a gradient of lexical fixity. Constructional meanings are the meanings to be discovered at every point along the idiomaticity continuum. At the leftmost, or ‘fixed,’ extreme of this continuum are frozen idioms, like the salt of the earth and in the know. The set of frozen idioms includes those with idiosyncratic syntactic properties, like the fixed expression by and large (an exceptional pattern of coordination in which a preposition and adjective are conjoined). Other frozen idioms, like the unexceptionable modified noun red herring, feature syntax found elsewhere. At the rightmost, or ‘open,’ end of this continuum are fully productive patterns, including the rule that licenses the string Kim blinked, known as the Subject-Predicate construction. Between these two poles are (a) lexically fixed idiomatic expressions, verb-headed and otherwise, with regular inflection, such as chew/chews/chewed the fat; (b) flexible expressions with invariant lexical fillers, including phrasal idioms like spill the beans and the Correlative Conditional, such as the more, the merrier; and (c) specialized syntactic patterns without lexical fillers, like the Conjunctive Conditional (e.g., One more remark like that and you’re out of here). Construction Grammar represents this range of expressions in a uniform way: whether phrasal or lexical, all are modeled as feature structures that specify phonological and morphological structure, meaning, use conditions, and relevant syntactic information (including syntactic category and combinatoric potential).
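The uniform feature-structure format described above can be sketched in code. The toy representation below is an illustration only: the attribute names and values are our placeholders, not the notation of Construction Grammar or any particular formalism.

```python
# A toy feature-structure representation of the phrasal idiom
# "spill the beans", in the spirit of the uniform constructional
# format described above. All attribute names and values here are
# illustrative placeholders, not an attested analysis.
spill_the_beans = {
    "phon": ["spill", "the", "beans"],       # phonological/lexical form
    "morph": {"head_verb_inflects": True},   # spill/spills/spilled the beans
    "syn": {"cat": "VP", "combines_with": "NP-subject"},  # combinatoric potential
    "sem": "divulge-a-secret",               # idiomatic, non-compositional meaning
    "use": "informal-register",              # conditions on use
}

# Constructions at every point on the idiomaticity continuum, from
# frozen idioms to fully productive rules, can be stated in this same
# attribute-value format.
print(sorted(spill_the_beans))
```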
Computational models of human sentence comprehension help researchers reason about how grammar might actually be used in the understanding process. Taking a cognitivist approach, this article relates computational psycholinguistics to neighboring fields (such as linguistics), surveys important precedents, and catalogs open problems.
Howard Lasnik and Terje Lohndal
Noam Avram Chomsky is one of the central figures of modern linguistics. He was born in Philadelphia, Pennsylvania on December 7, 1928. In 1945, Chomsky enrolled at the University of Pennsylvania, where he met Zellig Harris (1909–1992), a leading Structuralist, through their shared political interests. His first encounter with Harris’s work came when he proofread Harris’s book Methods in Structural Linguistics, published in 1951 but completed as early as 1947. Chomsky grew dissatisfied with Structuralism and began to develop his own central idea that syntax and phonology are in part matters of abstract representations. This was soon combined with a psychobiological view of language as a unique part of the mind/brain.
Chomsky spent 1951–1955 as a Junior Fellow of the Harvard Society of Fellows, after which he joined the faculty at MIT under the sponsorship of Morris Halle. He was promoted to full professor of Foreign Languages and Linguistics in 1961, appointed Ferrari Ward Professor of Linguistics in 1966, and Institute Professor in 1976, retiring in 2002. Chomsky is still remarkably active, publishing, teaching, and lecturing across the world.
In 1967, both the University of Chicago and the University of London awarded him honorary degrees, and since then he has been the recipient of scores of honors and awards. In 1988, he was awarded the Kyoto Prize in Basic Sciences, created in 1984 to recognize work in areas not covered by the Nobel Prizes. These honors are all a testament to Chomsky’s influence and impact in linguistics and cognitive science more generally over the past 60 years. His contributions have of course also been heavily criticized, but nevertheless remain crucial to investigations of language.
Chomsky’s work has always centered on the same basic questions and assumptions, especially the idea that human language is an inherent property of the human mind. The technical part of his research has been continuously revised and updated. In the 1960s, phrase structure grammars were developed into what is known as the Standard Theory, which evolved into the Extended Standard Theory and X-bar theory in the 1970s. A major transition occurred at the end of the 1970s, when the Principles and Parameters theory emerged. This theory provides a new understanding of the human language faculty, focusing on the invariant principles common to all human languages and the points of variation known as parameters. Its most recent variant, the Minimalist Program, pushes the approach even further, asking why grammars are structured the way they are.
Relative clauses whose predicate contains a present, past, or passive participle can be used in a reduced form. Although it has been shown that participial relative clauses cannot always be treated as truncated variants of full relative clauses, they are generally called reduced relative clauses in the literature. Since they differ from full relative clauses in containing a non-finite predicate, they are also called non-finite relative clauses. Another type of non-finite relative clause is the infinitival relative clause. In English participial relative clauses, the antecedent noun is interpreted as the subject of the predicate of the relative clause. Because of this restriction, the relative-clause status of participial adnominal modifiers has been called into question, especially because in a language such as English they can occur in pre-nominal position, whereas a full relative clause cannot. While some linguists analyze both pre-nominal and post-nominal participles as verbal, others have argued that participles are essentially adjectival categories. A third type of analysis divides participles, including adnominal ones, into verbal and adjectival classes. Besides the relation to full relative clauses and the category of the participle, participial relative clauses raise a number of other interesting questions, which have been discussed in the literature. These questions concern the similarity or difference in interpretation of pre-nominal and post-nominal participial clauses, restrictions on the type of verb used in past participial relative clauses, and similarities and differences between the syntax and semantics of participial clauses in English and other languages. Besides syntactic and semantic issues, participial relative clauses have raised other questions, such as their use in texts. Participial relative clauses have been studied from a diachronic and a stylistic point of view.
It has been shown that the use of reduced forms such as participial relative clauses has increased over time and that, because of their condensed form, they are used more in academic styles than in colloquial speech. Nonetheless, even very young children have been shown to use them, although in second language acquisition they emerge late, because their condensed form is associated with an academic style of writing. Because past or passive participles often have the same form as the past tense, sentences containing a subject noun modified by a post-nominal past or passive participle have been shown to be difficult to process (as in the classic garden-path sentence The horse raced past the barn fell), although certain factors may facilitate the processing of the sentence.
Chiyuki Ito and Michael J. Kenstowicz
Typologically, pitch-accent languages stand between stress languages like Spanish and tone languages like Shona, and share properties of both. In a stress language, typically just one syllable per word is accented and bears the major stress (cf. Spanish sábana ‘sheet,’ sabána ‘plain,’ panamá ‘Panama’). In a tone language, the number of distinctions grows geometrically with the size of the word. So in Shona, which contrasts high versus low tone, trisyllabic words have eight possible pitch patterns. In a canonical pitch-accent language such as Japanese, just one syllable (or mora) per word is singled out as distinctive, as in Spanish. Each syllable in the word is assigned a high or low tone (as in Shona); however, this assignment is predictable based on the location of the accented syllable.
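The combinatorial claim above can be made concrete: if each syllable independently bears one of two tones, a word of n syllables has 2^n possible pitch patterns, so Shona trisyllables have 2^3 = 8. The sketch below is an illustration of that arithmetic only; the function name is ours.

```python
from itertools import product

def tone_patterns(n_syllables, tones=("H", "L")):
    """Enumerate all logically possible tone patterns for a word of
    n_syllables in a language where each syllable independently bears
    one of the given tones (the Shona-style case described above)."""
    return ["".join(p) for p in product(tones, repeat=n_syllables)]

# A trisyllabic word in a two-tone language has 2**3 = 8 patterns,
# versus at most one accented syllable per word in a stress language.
patterns = tone_patterns(3)
print(len(patterns))  # 8
print(patterns)
```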
The Korean dialects spoken in the southeast Kyengsang and northeast Hamkyeng regions retain the pitch-accent distinctions that developed by the period of Middle Korean (15th–16th centuries). For example, in Hamkyeng a three-syllable word can have one of four possible pitch patterns, which are assigned by rules that refer to the accented syllable. The accented syllable has a high tone, and following syllables have low tones. The high tone of a non-initial accented syllable then spreads leftward up to, but not including, the initial syllable, which remains low. Thus, /MUcike/ ‘rainbow’ is realized as high-low-low, /aCImi/ ‘aunt’ is realized as low-high-low, and /menaRI/ ‘parsley’ is realized as low-high-high. An atonic word such as /cintallɛ/ ‘azalea’ has the same low-high-high pitch pattern as ‘parsley’ when realized alone. But the two types are distinguished when combined with a particle such as /MAN/ ‘only’ that bears an underlying accent: /menaRI+MAN/ ‘only parsley’ is realized as low-high-high-low, while /cintallɛ+MAN/ ‘only azalea’ is realized as low-high-high-high. This difference can be explained by saying that the underlying accent on the particle is deleted if the stem bears an accent. The result is that only one syllable per word may bear an accent (similar to Spanish). On the other hand, since the accent is realized with pitch distinctions, tonal assimilation rules are prevalent in pitch-accent languages.
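The Hamkyeng tone-assignment rules just described can be stated as a short procedure. The sketch below is an illustration only, assuming 0-based syllable indices and an accent position determined beforehand (with the particle's underlying accent already deleted when the stem is accented); the function name is ours, not the article's.

```python
def hamkyeng_tones(n_syllables, accent):
    """Assign surface tones by the Hamkyeng rules described above:
    the accented syllable is high, post-accentual syllables are low,
    and the high tone spreads leftward up to (but not including) the
    initial syllable, which remains low unless it bears the accent.
    `accent` is a 0-based syllable index."""
    tones = []
    for i in range(n_syllables):
        if i > accent:
            tones.append("L")            # post-accentual syllables are low
        elif i == accent or i >= 1:
            tones.append("H")            # accented syllable, or leftward spread
        else:
            tones.append("L")            # unaccented initial syllable stays low
    return "".join(tones)

print(hamkyeng_tones(3, 0))  # /MUcike/ 'rainbow'        -> HLL
print(hamkyeng_tones(3, 1))  # /aCImi/ 'aunt'            -> LHL
print(hamkyeng_tones(3, 2))  # /menaRI/ 'parsley'        -> LHH
print(hamkyeng_tones(4, 2))  # /menaRI+MAN/ (stem accent)     -> LHHL
print(hamkyeng_tones(4, 3))  # /cintallɛ+MAN/ (particle accent) -> LHHH
```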
This article begins with a description of the Middle Korean pitch-accent system and its evolution into the modern dialects, with a focus on Kyengsang. Alternative synchronic analyses of the accentual alternations that arise when a stem is combined with inflectional particles are then considered. The discussion proceeds to the phonetic realization of the contrasting accents, their realizations in compounds and phrases, and the adaptation of loanwords. The final sections treat the lexical restructuring and variable distribution of the pitch accents and their emergence from predictable word-final accent in an earlier stage of Proto-Korean.
Veneeta Dayal and Deepak Alok
Natural language allows questioning into embedded clauses. One strategy for doing so involves structures like the following: [CP-1 whi [TP DP V [CP-2 … ti …]]], where a wh-phrase that thematically belongs to the embedded clause appears in the matrix scope position. A possible answer to such a question must specify values for the fronted wh-phrase. This is the extraction strategy seen in languages like English. An alternative strategy involves a structure in which there is a distinct wh-phrase in the matrix clause. It is manifested in two types of structures. One is a close analog of extraction, but for the extra wh-phrase: [CP-1 whi [TP DP V [CP-2 whj [TP…tj…]]]]. The other simply juxtaposes two questions, rather than syntactically subordinating the second one: [CP-3 [CP-1 whi [TP…]] [CP-2 whj [TP…]]]. In both versions of the second strategy, the wh-phrase in CP-1 is invariant, typically corresponding to the wh-phrase used to question propositional arguments. There is no restriction on the type or number of wh-phrases in CP-2. Possible answers must specify values for all the wh-phrases in CP-2. This strategy is variously known as scope marking, partial wh-movement, or expletive wh questions. Both strategies can occur in the same language. German, for example, instantiates all three possibilities: extraction, subordinated, as well as sequential scope marking. The scope marking strategy is also manifested in in-situ languages. Scope marking has been subjected to 30 years of research, and much is known at this time about its syntactic and semantic properties. Its pragmatic properties, however, are relatively under-studied. The acquisition of scope marking, in relation to extraction, is another area of ongoing research. One of the reasons scope marking has intrigued linguists is that it seems to defy central tenets about the nature of wh scope taking.
For example, it presents an apparent mismatch between the number of wh expressions in the question and the number of expressions whose values are specified in the answer. It poses a challenge for our understanding of how syntactic structure feeds semantic interpretation and how alternative strategies with similar functions relate to each other.
Philippe Schlenker, Emmanuel Chemla, and Klaus Zuberbühler
Rich data gathered in experimental primatology in the last 40 years are beginning to benefit from analytical methods used in contemporary linguistics, especially in the area of semantics and pragmatics. These methods have started to clarify five questions: (i) What morphology and syntax, if any, do monkey calls have? (ii) What is the ‘lexical meaning’ of individual calls? (iii) How are the meanings of individual calls combined? (iv) How do calls or call sequences compete with each other when several are appropriate in a given situation? (v) How did the form and meaning of calls evolve? Four case studies from this emerging field of ‘primate linguistics’ provide initial answers, pertaining to Old World monkeys (putty-nosed monkeys, Campbell’s monkeys, and colobus monkeys) and New World monkeys (black-fronted Titi monkeys). The morphology mostly involves simple calls, but in at least one case (Campbell’s -oo) one finds a root–suffix structure, possibly with a compositional semantics. The syntax is in all clear cases simple and finite-state. With respect to meaning, nearly all cases of call concatenation can be analyzed as being semantically conjunctive. But a key question concerns the division of labor between semantics, pragmatics, and the environmental context (‘world’ knowledge and context change). An apparent case of dialectal variation in the semantics (Campbell’s krak) can arguably be analyzed away if one posits sufficiently powerful mechanisms of competition among calls, akin to scalar implicatures. An apparent case of noncompositionality (putty-nosed pyow–hack sequences) can be analyzed away if one further posits a pragmatic principle of ‘urgency’. Finally, rich Titi sequences in which two calls are re-arranged in complex ways so as to reflect information about both predator identity and location are argued not to involve a complex syntax/semantics interface, but rather a fine-grained interaction between simple call meanings and the environmental context. 
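The two analytical claims above, that call syntax is finite-state and that call concatenation is semantically conjunctive, can be illustrated together in a small sketch. Everything below is a toy: the call inventory, the grammar, and the meanings are hypothetical placeholders, not the attested analyses of any monkey species.

```python
import re

# Hypothetical call lexicon: each call's 'lexical meaning' is modeled
# as a set of properties it predicates of the current situation.
# These calls and meanings are illustrative placeholders only.
LEXICON = {
    "A": {"alert"},
    "B": {"alert", "ground"},
}

# A finite-state syntax, stated as a regular expression: one or more
# A calls optionally followed by B calls (a toy grammar, not an
# attested one; regular expressions define exactly the finite-state
# string sets mentioned above).
SYNTAX = re.compile(r"^(A )*A( B)*$")

def interpret(sequence):
    """If the space-separated sequence is generated by the finite-state
    grammar, return the conjunctive meaning of its calls: a situation
    satisfies the sequence iff it has every predicated property, so
    concatenation amounts to set union over the calls' properties.
    Ill-formed sequences get no interpretation (None)."""
    if not SYNTAX.match(sequence):
        return None
    meaning = set()
    for call in sequence.split():
        meaning |= LEXICON[call]  # conjunctive combination
    return meaning

print(interpret("A A B"))  # the set {'alert', 'ground'}
print(interpret("B A"))    # None: not generated by the grammar
```

Pragmatic competition among calls (as in the scalar-implicature-style account mentioned above) would be a further filter on top of this conjunctive core, not part of the lexical meanings themselves.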
With respect to call evolution, the remarkable preservation of call form and function over millions of years should make it possible to lay the groundwork for an evolutionary monkey linguistics, illustrated with cercopithecine booms.