PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, LINGUISTICS (© Oxford University Press USA, 2016. All Rights Reserved). Personal use only; commercial use is strictly prohibited.

date: 26 March 2017

Theoretical Phonology

Summary and Keywords

Phonology has both a taxonomic/descriptive and cognitive meaning. In the taxonomic/descriptive context, it refers to speech sound systems. As a cognitive term, it refers to a part of the brain’s ability to produce and perceive speech sounds. This article focuses on research in the cognitive domain.

The brain does not simply record speech sounds and “play them back.” It abstracts over speech sounds, and transforms the abstractions in nontrivial ways. Phonological cognition is about what those abstractions are, and how they are transformed in perception and production.

There are many theories about phonological cognition. Some theories see it as the result of domain-general mechanisms, such as analogy over a Lexicon. Other theories locate it in an encapsulated module that is genetically specified, and has innate propositional content. In production, this module takes as its input phonological material from a Lexicon, and refers to syntactic and morphological structure in producing an output, which involves nontrivial transformation. In some theories, the output is instructions for articulator movement, which result in speech sounds; in other theories, the output goes to the Phonetic module. In perception, a continuous acoustic signal is mapped onto a phonetic representation, which is then mapped onto underlying forms via the Phonological module, which are then matched to lexical entries.

Exactly which empirical phenomena phonological cognition is responsible for depends on the theory. At one extreme, it accounts for all human speech sound patterns and realization. At the other extreme, it is little more than a way of abstracting over speech sounds. In the most popular Generative conception, it explains some sound patterns, with other modules (e.g., the Lexicon and Phonetic module) accounting for others. There are many types of patterns, with names such as “assimilation,” “deletion,” and “neutralization”—a great deal of phonological research focuses on determining which patterns there are, which aspects are universal and which are language-particular, and whether/how phonological cognition is responsible for them.

Phonological computation connects with other cognitive structures. In the Generative T-model, the phonological module’s input includes morphs of Lexical items along with at least some morphological and syntactic structure; the output is sent to either a Phonetic module, or directly to the neuro-motor interface, resulting in articulator movement. However, other theories propose that these modules’ computation proceeds in parallel, and that there is bidirectional communication between them.

The study of phonological cognition is a young science, so many fundamental questions remain to be answered. There are currently many different theories, and theoretical diversity over the past few decades has increased rather than consolidated. In addition, new research methods have been developed and older ones have been refined, providing novel sources of evidence. Consequently, phonological research is both lively and challenging, and is likely to remain that way for some time to come.

Keywords: phonology, linguistics, cognition, speech sounds, representation, computation, phonotactics, alternations, morphophonology

1. Introduction

The term phonology has a taxonomic/descriptive meaning, and a cognitive meaning.

1.1 Taxonomic/Descriptive Phonology

In a taxonomic/descriptive context, “phonology” refers to speech sound systems in human language communication. It has also been extended to refer to systems of gestures in signed languages (e.g., Brentari, 2012), and to sound patterns in non-human animal communication (e.g., Clark, Marler, & Beeman, 1987).

Research in taxonomic/descriptive phonology involves developing frameworks for comprehensive description of speech sound systems, techniques for more accurate and faster description, and methods for storage and retrieval of data. For example, Harris (1960, p. 3) proposes methods “which will not impose a fixed system upon various languages, yet will tell us more about each language than will a mere catalogue of sounds and forms,” with a goal that application of descriptive methods will allow languages to be “more readily compared for structural differences.” In terms of storage and retrieval, the development of the Internet and the lower cost of digital storage have allowed the creation and expansion of online language descriptive databases and repositories (e.g., Lewis, Simons, & Fennig, 2015; Digital Endangered Languages and Musics Archives Network).

1.2 Phonology in Cognition

Humans do not simply record what they hear and “play it back” when they speak. There is a cognitive process of abstraction where the brain stores sounds as symbols. In this cognitive context, “phonology” refers to the process of abstraction, and how the abstract representations are computed. In production, the computation produces phonological representations that are ultimately realized as the articulatory movements that produce speech sounds. In perception, speech sounds are converted into phonological representations as part of the process of determining the speaker’s intended meaning.

There are many different theories of what phonology is in cognition. At one extreme, phonology is a domain-general cognitive ability to analogize about sound symbols (e.g., Cognitive Phonology—Nathan, 2007, 2008; Evolutionary Phonology—Blevins, 2004; Usage-Based Phonology—Bybee, 2001, 2010). In the Evolutionary Phonology view, for example, there is a Lexicon where morphemes are stored, and their entries contain phonological symbols—symbols that are ultimately realized as articulatory movements in production. Patterns seen among the phonological symbols are a remnant of the language’s development, with any systematicity due to consistent biases in learning procedures. Humans can use analogical mechanisms to generate new words, or to adapt existing words to new contexts.

In the majority of cognitive theories, though, “phonology” refers to a specific cognitive module. In the Generative conception, this module takes phonological symbols as its input and converts them into an output representation. In some conceptions, the module’s output is essentially instructions to articulators (e.g., Chomsky & Halle, 1968—hereafter “SPE”). In other theories, there is an additional module (the Phonetic module) that transforms the phonological output into articulatory instructions—so, the phonological module is only one part of the production and perception of speech sounds (e.g., Keating, 1990).

There are many theories of the phonological module. They differ in whether they consider the module to be innate, or constructed by general cognitive learning processes. They also differ in where the propositional content of the modules comes from: whether it is innate, or derived by mechanisms that refer to more external factors such as articulatory ease and perceptual distinctiveness (Gordon, 2007). There are also a variety of theories of how the phonological module connects to other modules and cognitive abilities.

Regardless, in the Generative conception, there is a phonological module which exists inside all intact human brains. So, phonology is the study of this module, and just as with the study of any biological mechanism, phonology is a natural science.

1.3 Connections and Differences

In short, “phonology” refers to two completely different concepts: in the taxonomic/descriptive realm it refers to descriptions of speech sound systems, while in the cognitive realm it refers to a physical object. It is crucial to be aware of the different uses because in practice they are often used together. In fact, taxonomic/descriptive research and cognitive theories have often borrowed terms and concepts from each other: a descriptive work might use a Generative theory’s formalism (such as transformational rules or constraints) without necessarily implying any commitment to a particular cognitive theory. Similarly, a grammar’s “Phonology” section might well include phenomena that a Generative phonologist would view as being initiated by the Phonetic module. On the cognitive side, the majority of work in cognition has relied on descriptive work as sources of evidence.

1.4 Current Research

Taxonomic/descriptive phonology is an area of active research and application within the field of linguistics, in anthropology (as anthropological linguistics), and in cross-disciplinary movements such as endangered language preservation and documentary linguistics (e.g., Woodbury, 2003).

There has been a great deal of research on cognitive theories of phonology. It is actively studied in academia both within linguistics departments and in related departments such as psychology, phonetics, computer science, communication, and language-specific departments, and also across disciplines through movements such as laboratory phonology. At the time of writing (2015), there is perhaps more theoretical diversity than at any time since the seminal work of Chomsky and Halle (1968). The rest of this article will focus on themes in research on phonology in cognition, with the observation that such theoretical understanding often informs description, and descriptions often serve as evidence for cognitive theories.

2. Representation

The study of phonological representation seeks to determine the nature of the objects that phonological cognition acts upon. Theories of representation cannot be discussed entirely apart from theories of computation because the two are intimately connected: representational elements are significant only if the computational system has some means of referring to them. Even so, it is possible to identify broad research themes that apply to representational theories, and many representational theories have been adopted into different computational theories. For example, the representational theory of Autosegmental Phonology was originally proposed with a rule and filter serialist model of computation (Goldsmith, 1976), but has since been adopted into some constraint-based theories (e.g., McCarthy, 2011).

2.1 Features

A central goal of phonological research is to determine the form of phonological representations. A fundamental issue is whether phonological representations are identical to representations in other cognitive domains, or specialized for phonological cognition. For example, Bybee (2001, p. 7) argues that there is no difference between phonological representations and representations of any other mental object. In contrast, a leading idea in the Generative framework has involved specialized structures for phonological cognition; in SPE (Chomsky & Halle, 1968), representations are strings of segments, which are bundles of features. A feature is the minimal unit of phonological representation; it has both a “classificatory” and “phonetic” role. In its classificatory role, a feature is used by computational processes (e.g., rules, constraints) to identify classes of segments that can be transformed, and to define the nature of that transformation—that is, from one feature value (or group of feature values) to another. In SPE, all features are binary in their classificatory role—i.e., all have either the value + or - (Chomsky & Halle, 1968, p. 65). In their phonetic role, features are interpreted as particular articulatory instructions (or acoustic targets). For example, the feature [±round] identifies two classes of phonological segments: those that are [+round] and those that are [-round]; [+round] is phonetically interpreted as narrowing of the lips (contraction of the orbicularis oris muscle), and [-round] as not involving such a narrowing (Chomsky & Halle, 1968, p. 309).

A significant research question has been determining how many values features have, and whether all features have the same valency (Harris, 2007). In some theories, particular features are privative, or one-valued (monovalent). So, instead of all segments containing either [+coronal] or [-coronal], segments either do or do not contain [coronal]. SPE had exclusively binary features, some theories have both binary and privative features (e.g., Clements & Hume, 1995), and some have exclusively monovalent features (e.g., Anderson & Ewen, 1987). Importantly, privative and binary features only behave differently if the computational system cannot refer to the lack of a feature, otherwise the absence of a feature functions as if it is a “-” value. Similarly, a feature might be binary-valued, but if there is no rule or constraint that can refer to one of the values, it can be functionally privative.

The value of feature mono-valency (privativity), when coupled with a particular computational theory, is that phonological processes cannot be sensitive to the absence of a feature. The lack of the feature should be unable to trigger a particular transformation, or interfere with the conditioning environment for a transformation. For example, there are cases of rounding harmony, where all vowels become [+round] in the presence of a [+round] segment. However, there are apparently no cases of [-round] harmony (Kaun, 1995). This can be explained if the feature is privative [round], and there are no rules or constraints that demand that segments agree in lacking [round].
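The triggering asymmetry can be illustrated with a small sketch (my own illustration in Python; the segment encoding and the harmony rule are invented for exposition, not drawn from any of the theories cited): segments are modeled as sets of privative features, and the rule can only test for the presence of [round], so the absence of [round] can never trigger spreading.

```python
# Toy model of privative features: a segment is a set of the features it
# contains. A rule can refer only to features that are present.

def round_harmony(word):
    """Spread privative [round] rightward from any rounded vowel."""
    output = []
    spreading = False
    for segment in word:
        seg = set(segment)
        if "round" in seg:               # a present feature can trigger spreading
            spreading = True
        elif spreading and "vowel" in seg:
            seg.add("round")             # target acquires the spreading feature
        output.append(frozenset(seg))
    return output

# A hypothetical /CuCaCa/-like word: [round] spreads to the later vowels.
word = [frozenset({"cons"}), frozenset({"vowel", "round", "high"}),
        frozenset({"cons"}), frozenset({"vowel", "low"}),
        frozenset({"cons"}), frozenset({"vowel", "low"})]
result = round_harmony(word)
assert all("round" in seg for seg in result if "vowel" in seg)
```

Because unrounded vowels simply lack [round], there is no symbol for a hypothetical [-round] harmony rule to refer to; the unattested pattern is inexpressible rather than merely unused.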

There are theories of phonological representation which differ in profound ways from Generative conceptions. For example, exemplar theories see representational elements as labels for groups of tokens. Humans store acoustic perceptual events in memory, and then organize them into clusters (Pierrehumbert, 2002; Välimaa-Blum, 2009). This approach offers a straightforward way to account for variation in production and perception.

2.2 Beyond Features

A major research theme, departing from SPE (Chomsky & Halle, 1968), has been whether features and feature bundles are the only phonological primitive. For example, Dependency Phonology has not only privative features (e.g., |a| “openness,” |i| “frontness”), but also a relation (dependency) that holds between them (Anderson & Ewen, 1987; Harris & Lindsey, 1995). A segment where |a| depends on |i| is interpreted as [e], while a segment where |i| depends on |a| is interpreted as [æ].

Perhaps the most influential representational addition is the autosegmental association relation (Autosegmental Phonology—Goldsmith, 1976). The original conception involved tones: instead of seeing a tone as a feature of a segment, Autosegmental representation expresses it as an independent object that is connected to the segment via a relation (“association”). Such independence provides an insightful way to explain spreading of tones, why tones can survive even after their sponsoring segment is deleted, “floating” tones that remain unattached to segments, and why some morphemes seem to consist of a tone and no segments. Autosegmental representation spread beyond tones to all segmental features (e.g., in the “feature geometries” of Clements, 1985; Sagey, 1986). In fact, it led to the idea that there are features whose sole role is to organize other features—that is, they have a classificatory role but no phonetic role. For example, Clements and Hume’s (1995) “C-Place” node is simply an anchor for place features. It is an ongoing research question whether all features should be autosegmental. Some features do not seem to act with the autonomy of tone—for example, they do not survive deletion of their segmental sponsor. In resolution, McCarthy (1988) proposes that certain features inhabit a segment’s “root” node, effectively returning them to their SPE conception as a bundle of features.
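The autonomy of tones under deletion can be sketched with a toy model (my own encoding, not Goldsmith's formalism; the tier and association-line representation are invented for illustration): tones live on a separate tier and are linked to segments by association lines, so deleting a segment leaves its tone floating rather than destroying it.

```python
# Toy autosegmental representation: tones occupy their own tier, linked to
# segments by association lines (tone index -> segment index).
segments = ["b", "a", "l", "a"]
tones = ["H", "L"]
assoc = {0: 1, 1: 3}            # H linked to the first /a/, L to the second

def delete_segment(i):
    """Remove a segment; any tone linked to it survives as a floating tone."""
    segments.pop(i)
    for tone_idx, seg_idx in list(assoc.items()):
        if seg_idx == i:
            del assoc[tone_idx]              # the association line is severed...
        elif seg_idx > i:
            assoc[tone_idx] = seg_idx - 1    # reindex later associations
    # ...but the tone itself remains on its tier, unattached ("floating")

delete_segment(3)               # delete the second vowel
assert tones == ["H", "L"]      # L survives, now floating
assert assoc == {0: 1}          # only H remains associated
```

Were tone a feature inside the segment bundle, deleting the segment would delete the tone with it; the separate tier is what makes survival and reassociation expressible.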

Another ongoing issue is whether certain aspects of feature behavior should be ascribed to representation or computation. Padgett (2002) accepts the autosegmental idea for features, but argues that there are no non-terminal nodes (such as C-Place). Instead, the behavior of certain features as a class is a side-effect of the constraints that refer to them. This opens the way to considering whether all feature behavior can be explained in terms of how the computational system refers to features rather than in terms of their representation, a point explored by Flemming (2005); the final result might be a return to SPE’s feature bundle theory, at least for features other than tone.

The autosegmental conception of feature organization has been vigorously developed in a variety of computational frameworks (e.g., Halle, Vaux, & Wolfe, 2000; Morén, 2003; Jurgec, 2011). However, alternatives to autosegmental featural organization have also been proposed (e.g., Span Theory—McCarthy, 2004). One of the strengths of autosegmental representation has been to account for agreement (assimilation, harmony) and disagreement (dissimilation) of feature values among different segments. However, theories have been proposed that see assimilation/harmony as the result of constraints that impose an identity requirement (e.g., Baković, 2000), or that see assimilation, harmony, and disagreement as due to non-autosegmental relations (e.g., “Correspondence”—Rose & Walker, 2004; Bennett, 2015).

2.3 Constituency

Another major research theme since SPE (Chomsky & Halle, 1968), intersecting with Autosegmental Theory, has involved supra-segmental organization and prosodic representation. In SPE, rules could refer to groups of segments indirectly by using “boundary symbols”—a special type of segment that marked morpheme, word, and phrase edges. After SPE, theories of constituency developed. For example, Kahn (1976) proposed the syllable node—an object which associates to one or more segments, so defining groups of segments to which rules and constraints could refer. Such theories came to posit many nodes at different levels (the “prosodic hierarchy”), with nodes at one level associated to nodes at the level below (and possibly to nodes at lower levels, too), so that every representation involves a complex structure with several levels terminating at a single node at the top (the Utterance node) (Selkirk, 1984). As with segmental features, a central question in prosodic research has been how many nodes there are in the hierarchy (Nespor & Vogel, 1986), and how they relate to each other. For example, the sub-syllabic “mora” node (μ) was introduced (e.g., Hyman, 1985), and the syllable (σ) node was argued to associate to both μ nodes and root nodes (e.g., Hayes, 1989), violating an earlier restriction that nodes at level n could only associate to nodes at level n-1 (“Strict Layering”); this was followed by relaxation of Strict Layering at the foot node (Ft) level, too (e.g., Hayes, 1995). There are continuing research questions about which prosodic structures are possible and attested in natural language. For example, while it is widely accepted that Ft organization comes in two basic types—where the leftmost σ is the head (“trochaic”) vs. where the rightmost σ is the head (“iambic”)—there is controversy over how these two types can be arranged inside the larger Prosodic Word constituent (Hayes, 1995, p. 262).

Apart from the autosegmental expansion of representation is the grid theory of Prince (1983). A grid structure consists of successive layers of gridmarks, and is interpreted as relative prominence (“stress”). In its original conception, grids were conceptually quite different from the prosodic hierarchy, though grid marks approximated “heads” of prosodic constituents. However, it was extended in Halle and Vergnaud (1987) by adding symbols to mark constituents of grid marks, making it notationally more similar to the prosodic hierarchy. While the grid representation competes in function with the prosodic hierarchy, Hyde (2002) has argued that both are necessary.

Another ongoing approach to constituency has been developed in Government Phonology. There is a “licensing” relationship between constituents that effectively expresses constituency without using a dominating node (van der Hulst & Ritter, 1999).

2.4 Differences between Phonological Representations

An interesting issue is whether all phonological representations are constructed from the same primitives. In SPE (Chomsky & Halle, 1968) the input representation consisted of bundles of binary features but the output representation’s features were multi-valued (specifically, integers). The role of the phonological component was to convert binary features into multi-valued ones that would then serve as “a set of instructions to the physical articulatory system” (Chomsky & Halle, 1968, p. 65). The phonological output was not instructions in their final form; universal implementation rules provided additional phonetic detail (Chomsky & Halle, 1968, p. 295). This view of the phonological module as a combination of categorical and gradient detail is developed in Flemming (2001), adopting features with scalar values.

In contrast, recent Generative models say that the phonological output does not specify articulator instructions, but rather feeds another module—the Phonetic Module—which converts the phonological output into articulatory instructions (e.g., Keating, 1990). Such models mean that phonological representations at all stages can be unified—inputs and outputs can be built of the same features and values. However, even within such models it is an ongoing question whether inputs to the phonological module have the same representational possibilities as outputs. For example, inputs seem to lack information about prosodic constituency: there is no robust case in which a minimal pair contrasts only in syllabification (e.g., [pat.a] vs. [pa.ta]) (e.g., Blevins, 1995). This might indicate that σ nodes are not permitted in input representations, though it could also mean that σ nodes and their associations are simply not preserved by the phonological module (see “Computation”).

A complex issue is whether the input is restricted to a particular set of segments on a system-particular basis. In a number of theories, phonological inputs (and lexical entries’ morphs) must be constructed from a finite inventory of segments (e.g., SPE), and can have restrictions placed on them (e.g., morpheme-structure constraints—Booij, 2011). The inventory is determined by the learner by identifying minimally contrastive segments (e.g., Dresher, 2009). In fact, some theories pare lexical representations down to the very smallest number of features that still maintain contrastive oppositions (e.g., Archangeli, 1984). In comparison, in some theories there are no restrictions on underlying forms, and no segment inventory (Prince & Smolensky, 2004), so representations are not pared down at any level. The issue of whether and how contrastiveness plays an active role in representations and computation is still actively investigated (e.g., Łubowicz, 2012).

While the input might lack information, there is also disagreement about how completely specified the output representation must be (e.g., Steriade, 1995). In many theories of tone, lack of complete tonal specification has long been assumed possible—the phonetic module interpolates pitch targets over toneless segments (e.g., Pierrehumbert, 1980). It is less clear whether such lack of specification can occur for segmental features, though Choi (1992) argues that Marshallese output vowels lack specifications for frontness and backness—vowel position is realized by interpolation between surrounding consonants. Similarly, some types of schwa can be featureless in some theories (e.g., van Oostendorp, 1995), and so are assigned a phonetic interpretation as having a neutral tongue position, perhaps highly influenced by context.

2.5 Current Research

Research on theories of phonological representation continues to be produced. However, the past two decades have seen less focus on representation and more on computation, due largely to the rise of Optimality Theory (OT) (Prince & Smolensky, 2004). In some other theories, representations carry a great deal of the explanatory burden; in contrast, OT allows (or seems to allow) greater agnosticism about representations, with more burden falling on computation (see “Computation”). For example, epenthetic segments have a special representation in many theories (e.g., they are empty prosodic nodes); however, OT with Correspondence Theory (McCarthy & Prince, 1999) allows epenthetic segments to be fully featurally specified like any other segment; their epenthetic status is instead marked by their relations to input segments and by constraints that refer to that relationship. On the other hand, there have been a number of developments in theories of representation over the past twenty years, as mentioned above, and of course, any complete theory of the phonological module must incorporate a fully developed theory of representation at all relevant levels.

3. Computation

Accounting for creativity is a fundamental challenge for linguistic research (Chomsky, 1965, p. 6). In phonology, creativity refers to the ability to produce and perceive novel phonological structures. For example, speakers can compose words they have never heard before and calculate their phonological output (e.g., while pro-anti-bureaucratic might be a novel word to almost all English speakers, they will have no difficulty saying it, and listeners will have no difficulty perceiving it and correctly parsing its morphemes). In many theories, phonological creativity is explained through a computational system—that is, operations that generate, change, or relate representations.

3.1 Transformations

In Generative phonology, a crucial part of explaining creativity is the idea that the phonological module is transformational: it takes an input (the underlying form) and produces an output (the surface form) that can be remarkably different. Input→output transformations are also used to explain phonological similarity between morphologically related forms. For example, German [loːp] Lob “praise” and [loːb-əs] Lobes “praise+genitive” are related by an underlying form /loːb/ and a phonological transformation that changes /b/ to [p] in a specific environment (Wiese, 1996, p. 201).
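The Lob/Lobes alternation can be sketched as a single input→output transformation (a toy illustration in Python, not SPE's rule notation; the DEVOICE table and the word-final approximation of the syllable-final environment are my own simplifications):

```python
# A single SPE-style rule, caricatured as a function from underlying forms
# to surface forms: devoice an obstruent in final position.

DEVOICE = {"b": "p", "d": "t", "g": "k", "z": "s", "v": "f"}

def apply_final_devoicing(underlying):
    """Map an underlying form to its surface form by devoicing a final obstruent."""
    if underlying and underlying[-1] in DEVOICE:
        return underlying[:-1] + DEVOICE[underlying[-1]]
    return underlying

assert apply_final_devoicing("loːb") == "loːp"      # Lob 'praise'
assert apply_final_devoicing("loːbəs") == "loːbəs"  # Lobes: /b/ not final, so [b] surfaces
```

A single stored /loːb/ plus this mapping derives both surface forms, which is the Generative account of why the two words share their initial material but differ in the final consonant.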

However, some theories reject the idea of an input→output mapping. In the Declarative Phonology framework, the equivalent of phonological outputs are a collection of constraints, each of which partially defines a phonological object (Scobbie, Coleman, & Bird, 1996). There is in effect no input→output mapping in this theory. In fact, explanation for alternations is expected to arise from other modules (Scobbie et al., 1996, p. 703). Some theories have representations, but no input→output mapping (e.g., Hooper, 1976; Albright, 2002; Burzio, 2002); instead, there are networks of connections between representations. Even so, the majority of theories in the Generative tradition adopt an input→output mapping. Motivations for the transformational Generative approach are reviewed by McCarthy (2007a).

While this article focuses on production, transformations can also occur in the perceptual process. The perceptual process could involve a perceived acoustic signal being mapped onto a phonetic representation, which is then mapped onto an underlying form via the Phonological module (e.g., Peperkamp & Dupoux, 2003; Boersma, 2009). This mapping involves significant abstraction away from the phonetic form, though there are several theories of the perceptual process (e.g., Boersma & Hamann, 2009).

3.1.1 Serialism and Parallelism

In SPE (Chomsky & Halle, 1968), an input undergoes a series of changes effected by rules to produce an output. This “serialist” conception of input→output mapping has dominated in many Generative theories since SPE (e.g., Kenstowicz, 1994).

In contrast, in classical Optimality Theory (OT) an input is transformed into many different (in principle, an infinite number of) representations (“candidates”) (Prince & Smolensky, 2004). In a later development of OT, Correspondence Theory suggests a different conception of candidate generation: all candidates are the same for any input; the difference is in how the input relates to those candidates via “correspondence” (McCarthy & Prince, 1999). The exact process of candidate generation has not yet been clearly defined; it might work in a serialist fashion. Regardless, in such “parallelist” approaches, after candidate generation there is a selection process that identifies the output (the “winner”) from among the candidates.

Some theories employ both serialist and parallelist modes of computation. For example, Harmonic Serialism has a series of forms in its derivation (McCarthy, 2010). At each stage, several candidates are generated, but in a highly restricted way: only one alteration to the input at each stage is permitted. The winner at each stage is identified, then fed back into the computation. This procedure repeats until a stage is reached where the winner is identical to that stage’s input.
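The loop just described can be sketched schematically (my own Python rendering, not McCarthy's definitions; gen_one_change and evaluate are hypothetical placeholders standing in for GEN and the constraint hierarchy):

```python
# Schematic Harmonic Serialism: candidates differ from the current form by
# one change; the best candidate becomes the next input; stop at convergence.

def harmonic_serialism(input_form, gen_one_change, evaluate):
    """Iterate candidate generation and selection until the current form
    beats every single-change alternative (lower score = more harmonic)."""
    current = input_form
    while True:
        candidates = [current] + gen_one_change(current)
        winner = min(candidates, key=evaluate)
        if winner == current:       # convergence: no single change improves harmony
            return current
        current = winner

# Toy example: GEN deletes one segment per step; "markedness" penalizes /t/.
gen = lambda form: [form[:i] + form[i + 1:] for i in range(len(form))]
evaluate = lambda form: form.count("t")

assert harmonic_serialism("tata", gen, evaluate) == "aa"
```

The restriction to one change per step is what limits the theory: multi-step mappings are possible only if every intermediate form is itself a winner, which rules out some input→output pairs that classical OT's unrestricted candidate set can generate directly.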

Distinguishing between parallelist and serialist theories has proven challenging. It may seem that the single-level parallelist approach, with its potentially infinite candidate set, would predict far greater diversity in input→output mappings than a serialist theory. However, parallelist theories’ mapping abilities are limited not by what candidates they generate but by how the winning candidates are chosen. Similarly, SPE is highly serialist, but with a relatively unconstrained rule formalism can generate a wide array of outputs from any input. Initially, it seemed that parallelist models were unable to generate derivations involving opacity—where, roughly speaking, the output representation shows signs of having undergone a transformation, but the environment that motivated that transformation no longer exists (e.g., Idsardi, 2000). However, later work proposed additions to parallelist theories to account for opacity (e.g., McCarthy, 2007b). The mixed serialist-parallelist model of Harmonic Serialism has been argued to be more constrained (i.e., offers fewer output options for any input) than classical OT—severe limits on construction of candidates at each computational iteration means that for an input there are some output candidates that can never be constructed (McCarthy, 2010). However, the difficult—and ongoing—challenge is to determine for any theory whether it necessarily overgenerates or undergenerates, or overgenerates in some areas and undergenerates in others. After all, whether one theory overgenerates/undergenerates better than another is uninteresting if both theories overgenerate/undergenerate relative to the evidence.

3.2 Levels of Computation

Another research theme has been the question of how many phonological computational systems can exist in one grammar. SPE (Chomsky & Halle, 1968), classical Optimality Theory (OT), and even Harmonic Serialism all have one computational system: a single set of rules for SPE, and a single candidate evaluation mechanism for classical OT and Harmonic Serialism. In contrast, some theories have several computational systems; the output of one is the input to another. This idea started with Lexical Phonology and Morphology, where an input passes through several “levels”—different computational systems that can apply quite different transformations (Kiparsky, 1982). In the more recent Stratal OT, there are also a series of distinct computational systems: the stem-level, word-level, and phrase-level (Bermúdez-Otero, 2011; though even some of the earliest work in OT has multiple levels—McCarthy & Prince, 1993b). Each level has an internally parallelist computation, and the output of each level is the input to the next level. There is no limit on how much the systems can differ, though the learning process will ensure that they do not deviate too markedly (Bermúdez-Otero, 2011). It remains unclear whether a single-level computational system is adequate. Within classical OT, for example, some of the functions of multiple levels have been emulated by extending the theory to allow constraints to refer to “loser” candidates (e.g., Benua, 2000; McCarthy, 2007b).
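The stratal architecture described above can be sketched as function composition (a toy illustration; the three level transformations are invented placeholders, not analyses from Stratal OT):

```python
# Each level is its own input→output mapping; levels are composed so that
# one level's output is the next level's input.

def stem_level(form):    return form.replace("N", "n")   # toy stem-level change
def word_level(form):    return form + "z"               # toy suffix realization
def phrase_level(form):  return form.rstrip("z") + "s"   # toy phrase-level change

def stratal_derivation(underlying, levels=(stem_level, word_level, phrase_level)):
    """Feed the underlying form through the stem, word, and phrase levels in order."""
    form = underlying
    for level in levels:
        form = level(form)
    return form

assert stratal_derivation("kaN") == "kans"   # "kaN" -> "kan" -> "kanz" -> "kans"
```

The architecture's empirical bite comes from ordering: a phrase-level mapping can undo or obscure a word-level one, which is one way such theories derive opaque interactions.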

The idea that there are multiple phonological systems is taken to an extreme in theories where every morphological construction is potentially associated with a different phonological mechanism (e.g., a rule system, or constraint ranking) (e.g., Anderson, 1992; Inkelas & Zoll, 2005), discussed in the “Morphology” section of this article.

3.3 One-to-one or One-to-many?

A research issue that has attracted increasing attention is whether input→output mapping is one-to-one or one-to-many. In SPE, for one input the computational system generated one output in any particular environment. Such input-output uniqueness is challenged in work that seeks to account for “free variation”—the idea that a single input can be realized with different outputs even though no phonological or morphological environment is involved. For example, in American English [t] and [d] can delete pre-consonantally, though whether deletion occurs or not seems to be at least partially random (Anttila, 2007). Approaches to such unconditioned randomness include having rules that apply randomly within any particular derivation, or having partially ordered constraint rankings where the order is only fixed for a specific derivation (e.g., Anttila, 1997). A similar idea is found in Stochastic Optimality Theory, where constraints are assigned a numerical index on a scale but have a particular range of movement along the scale; for any individual derivation, the exact location of the constraint on the scale varies, so that for a pair of constraints whose ranges overlap, one ranking will obtain a certain percentage of the time and the opposite ranking the rest of the time (Boersma & Hayes, 2001).
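The evaluation-time sampling of Stochastic OT can be sketched in a few lines. The constraint names, base ranking values, and noise magnitude below are hypothetical, chosen only to show how overlapping ranges yield both rankings with some probability:

```python
import random

def sample_ranking(constraints, noise_sd=2.0):
    """Perturb each constraint's base ranking value with Gaussian noise,
    then sort: highest noisy value = dominant constraint at this evaluation."""
    noisy = {c: v + random.gauss(0, noise_sd) for c, v in constraints.items()}
    return sorted(noisy, key=noisy.get, reverse=True)

# Hypothetical constraints whose ranges overlap on the ranking scale.
constraints = {"*ComplexCoda": 100.0, "Max-IO": 98.0}

random.seed(1)
trials = 10_000
a_first = sum(sample_ranking(constraints)[0] == "*ComplexCoda"
              for _ in range(trials))
# With these values, *ComplexCoda >> Max-IO obtains roughly three quarters
# of the time, and the opposite ranking the rest of the time.
print(f"*ComplexCoda >> Max-IO in {a_first / trials:.1%} of evaluations")
```

Widening the gap between the two base values, or shrinking the noise, drives the variation toward categorical ranking; that is how learning in this framework can converge on (near-)invariant patterns.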

However, it is uncertain whether pure free variation exists. While there is clearly variation that is not conditioned by phonological, morphological, or lexical factors, such variation seems instead to be conditioned by external factors such as age, gender, and speech style. One approach to such externally conditioned variation is to posit that speakers have multiple phonological systems; each system maps inputs to outputs in a one-to-one fashion, and a speaker’s choice to use a particular system depends on a variety of external factors (Kroch, 1989). Another possibility is that some variation is not phonological, but properly located in the phonetic module. For example, the apparent deletion of [t]/[d] in American English could be due to mistiming in articulatory execution (Browman & Goldstein, 1990).

3.4 Input-Output Similarity and Difference

Perhaps the majority of research has been—and remains—focused on explaining why and how an input can differ from its output. One fruitful approach has been to ask why inputs and outputs are so similar. In SPE, input-output similarity is an epiphenomenon of rule non-application: for example, /ba/→[ba] because no rule (apart from identity) has applied. However, SPE’s rules were relatively unconstrained (see Chomsky & Halle, 1968, chap. 9), so later theories limited possible rules and constraints so that deviation from the input was more tightly restricted (e.g., Stampe, 1973). In contrast, in Correspondence Theory input-output identity is due to constraints on a relation between input and output segments (McCarthy & Prince, 1999). For example, if /ba/ maps to the output [ba], this is due to a relation called “correspondence” (C), with /b/ C [b] and /a/ C [a], together with a constraint system ranked in such a way that constraints that promote identity between corresponding segments (e.g., IO-ident[F], IO-max) outrank any antagonistic constraints (i.e., markedness or anti-faithfulness constraints). In other words, in Correspondence Theory input-output identity is due to active restrictions in the grammar.

It is important to point out that parallelist theories do not require an approach to input-output identity like Correspondence Theory’s—classical OT originally imposed identity by requiring that every output contain the input, and by penalizing additions (Prince & Smolensky, 2004).

The different conceptions of input-output identity have potentially quite different empirical effects. The epiphenomenal approach means that access to an input segment is lost after the first rule has altered it. For example, /paku-ta/ could undergo high vowel deletion to [pakta] and then inter-consonantal epenthesis. The epenthetic vowel must have default feature values (e.g., [pakita])—the /u/ cannot be restored to give [pakuta] because at this point in the derivation, [u] no longer exists (though see Kenstowicz, 1981). In contrast, Correspondence Theory permits such restoration, even in a multi-stratal model, because there are constraints that require output identity with the input, regardless of what happens in the intervening derivation.

The Correspondence Theory conception of input-output identity has been extended to other dimensions. For example, McCarthy and Prince (1999) propose that reduplication is due to Correspondence relations between reduplicant morphemes and the stems they attach to. Benua (2000) proposes that cases where outputs share properties with their derivational base rather than their inputs are due to Correspondence relations between the output and base. Some theories propose that output segments can correspond to other segments, causing assimilation, harmony, and dissimilation (e.g., Rose & Walker, 2004). The identity-as-epiphenomenon theories, on the other hand, see identity effects in these areas as due to other mechanisms, such as autosegmental spreading (e.g., McCarthy & Prince, 1986; Raimy, 2000). An ongoing research issue is whether identity really can be unified in the way suggested by Correspondence Theory, or whether instead there are profound differences between input-output identity and apparent identity on other dimensions.

There are many theories of why and how outputs can be different from inputs. In SPE, an input representation is submitted to a series of rules, each of which generates a new representation if it applies; the “output” is the form that no more rules can apply to. Later theories introduced constraints to the computational system—statements of well- or ill-formed structures. Constraints could doom a particular derivation or motivate other rules to apply to fix the problem (Goldsmith, 1976; Paradis, 1988). In classical Optimality Theory, on the other hand, constraints are functions that return a set of violation marks for a candidate. Constraints themselves do not determine whether a candidate is the “winner”—the form that is output to the Phonetic module. Instead, the winning candidate is the one that fares best in terms of the evaluation function EVAL. There have been a variety of proposals about how EVAL works. In classical OT, constraint ranking is a total order; a candidate l is a loser if there is some other candidate w for which some constraint C assigns fewer violations to w than to l (i.e., C favors w over l) and there is no constraint K that outranks C where K favors l over w (Prince & Smolensky, 2004). In Harmonic Grammar, in comparison, every constraint has a numeric value (a “weight”) (Pater, 2009); if a candidate violates constraint C n times, then its violation value for C is n × C’s weight. A candidate’s violation values are then summed, and the candidate with the lowest total value wins.
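The two evaluation functions can be contrasted in a short sketch. The tableau, constraint names, and weights below are invented placeholders rather than an analysis of any language:

```python
def ot_winner(candidates, ranking):
    """Classical OT EVAL under a total ranking: compare candidates'
    violation profiles constraint by constraint in ranked order; the
    highest-ranked constraint that distinguishes candidates decides."""
    def profile(cand):
        return tuple(candidates[cand][c] for c in ranking)
    return min(candidates, key=profile)

def hg_winner(candidates, weights):
    """Harmonic Grammar: n violations of constraint C cost n x weight(C);
    the candidate with the lowest summed cost wins."""
    def cost(cand):
        return sum(n * weights[c] for c, n in candidates[cand].items())
    return min(candidates, key=cost)

# Hypothetical tableau for one input with two competing candidates.
candidates = {
    "[ba]": {"Markedness": 1, "Faith": 0},
    "[pa]": {"Markedness": 0, "Faith": 1},
}

print(ot_winner(candidates, ranking=["Faith", "Markedness"]))            # [ba]
print(hg_winner(candidates, weights={"Faith": 2.0, "Markedness": 1.0}))  # [ba]
```

Reversing the ranking, or weighting Markedness above Faith, makes [pa] win instead. Note the structural difference: ranked comparison is strict (no amount of lower-ranked violations can overturn a higher-ranked preference), whereas weighted sums allow several lower-weighted constraints to gang up on a higher-weighted one.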

3.5 Possible Transformations

In the first eight chapters of SPE, the rule-definition language allows rules to be formulated in which any feature can be transformed in any expressible environment (e.g., [-voice]→[+voice]/_#). However, SPE’s chapter 9 (Chomsky & Halle, 1968) introduced restrictions on possible feature value mappings to prevent the modeling of unattested processes, such as word-final consonant voicing (e.g., de Lacy, 2006b). In all subsequent theories, how to restrict the rule/constraint system to permit only attested phonological processes and ban unattested ones has been a central research concern (a concern that stretches back much further—e.g., Trubetzkoy, 1939).

One source of explanation for such universal asymmetries comes from the rule-definition or constraint-definition language itself—the rule/constraint-definition language determines the form of constraints, and so places limits on what can be expressed (e.g., de Lacy, 2011). However, it seems that further restrictions on representation and computation are necessary.

One approach has been to rely on limiting representations. For example, Lombardi (1995) argues that word-final voicing is impossible because of the representation of the feature [voice]—it is privative, so it can be deleted (delinked); there is also a computational restriction—there are rules that delink, but none that introduce a new feature. In OT, in comparison, the lack of word-final voicing can be due to the lack of a constraint that favors [+voice] consonants over [-voice] ones in that environment, or to having such a constraint universally outranked by a constraint that favors [-voice] in that environment (de Lacy, 2006a). In general terms, the challenge for any substantive theory of markedness is to explain why some phonological processes always occur in “one direction” (such as devoicing, but never voicing) while others do not. De Lacy (2006a) also argues that there are further requirements: to explain why some phonological systems are sensitive to category distinctions while others collapse them (“markedness conflation”), and to explain why highly marked elements can be exempted from undergoing phonological processes (“preservation of the marked”).

3.6 Universality

An important research issue has centered on whether rules/constraints are universally present in every phonological module. It is possible that rules and/or constraints are propositionally innate—that is, each rule and constraint is hardwired into the human brain. With such an approach, rule/constraint similarities are not necessarily to be expected—any similarities would be due to accidental evolutionary convergence, or to deeper cognitive restrictions on rule/constraint form. However, it is also possible that only rule and constraint schemas are hardwired, and individual constraints are generated by them. Such an approach allows for constraints such as the um-specific align discussed in the “System-specificity” section of this article, and leads one to expect to find families of similar constraints, as they would all be derived from a potentially small set of common schemas (e.g., ident[F], align, and so on).

Even further, it is possible that rule/constraint schemas are not hardwired, but rather there are construction processes that refer to module-external factors such as articulatory effort or difficulty of perceptual discrimination (e.g., Archangeli & Pulleyblank, 1994; Gordon, 2007). A “direct” approach involves mechanisms that translate such articulatory and perceptual difficulties into phonological rules/constraints with little distortion. An indirect approach is suggested by Hayes (1999). Ease of voicing for the stops [b], [d], and [g] varies with place of articulation and context. However, while voicing in [g] is more difficult to maintain than in [b] and [d] after oral sonorants, Hayes (1999) argues that there is no phonological constraint such as *L[g] (where L is an oral sonorant). Instead, functional pressures can only be expressed in phonological constraints in accord with the limitations of the phonological system’s constraint-definition language, and whatever mechanisms construct constraints. Of course, such externally referring rule/constraint-construction processes could lead to universal rules/constraints, as long as the processes are applied equally by all learners. However, they also raise the possibility that different individual experience might lead some learners to construct different constraints, or at least fewer constraints, than others, shaped by the specific variation found in an individual’s articulatory/perceptual system. On the other hand, the disparity between phonological constraints and articulatory effort and perceptual distinctiveness could be seen as indicating that there is no online process of constraint construction or evaluation. Instead, apparent similarity between rules/constraints and phonetic desirability could be a side effect of species-level pressures (e.g., Chomsky & Lasnik, 1977).

In contrast to rules/constraints, most research has assumed that phonological representations are universal. However, Mielke (2008) argues that features are system-specific in their categorization function—learners construct features so that they can express abstract generalizations about phonological processes they observe. A central point in this work is that there are apparently “crazy,” or unnatural, phonological processes that refer to unexpected classes of segments as undergoers and targets. For example, Mielke (2008) discusses a case in Evenki where /v/, /s/, and /g/ nasalize when they follow nasal consonants (e.g., /oron-vi/→[oronmi], /ŋanakin-si/→[ŋanakinni]), but other consonants (/p t k b d tʃ dʒ x h ʒ r l/) do not (e.g., /amkin-du/→[amkin-du], *[amkinnu]). Mielke (2008) observes that it is not possible to provide any straightforward characterization of either class in terms of a conjunction of universal features; instead, learners posit an abstract feature that distinguishes the two classes. Of course, the validity of such arguments depends on the computational system. The argument depends on the idea that there is necessarily one rule or constraint responsible for motivating the Evenki process. Instead, there may be several different processes occurring here—one for [g], one for [s], and one for [v]—with the apparent outcome being an unnatural class. In short, while universality of representation and computation is a central issue, it is difficult—perhaps impossible—to explore representation and computation independently, as the predictions of any theory rely on the interaction of both.

3.7 System-specificity

Of course, phonological systems do differ. A major research goal is to identify possible and impossible differences, and provide an account for them. In SPE, differences are due to variation in restrictions on the input (morpheme structure constraints), rules, and rule ordering; the representational system is universal. In subsequent theories, a great deal of research has been devoted to determining whether there are restrictions on these areas.

For example, Natural Phonology proposed limits on possible rules (Stampe, 1973). Some constraints were argued to be universal (e.g., the No-Line Crossing constraint—Goldsmith, 1976). Rule ordering was argued to be intrinsic—rules apply when their structural description is met, rather than having an explicit order of application (e.g., Hyman, 1993). In classical OT, in comparison, all variation resides in the constraint ranking; there are no restrictions on underlying forms, and all constraints are universal (Prince & Smolensky, 2004). However, there has been slight practical relaxation of constraint universality in claims that there are universal constraint schemas, or templates, which can be filled in with language-specific morphemes. For example, the constraint align([um]Af, L, Stem, L) is violated when the left edge of the Tagalog affix um- is not at the left edge of a stem, and is a morpheme-specific implementation of the more general align constraint schema (McCarthy & Prince, 1993a).

3.8 Modeling Tools

As a final comment, one important move forward in theoretical phonology has been the creation of software to model phonological processes and test theoretical predictions. There are now several software packages that aid in modeling analyses in various Optimality Theory theories, both in production and learning: Hayes, Tesar, and Zuraw’s (2013) OTSoft (classical OT, Maximum Entropy grammars), Prince, Tesar, and Merchant’s (2015) OTWorkplace (classical OT), Riggle and Bane’s (2011) PyPhon (classical OT, Harmonic Grammar), and Staubs et al.’s (2010) OT-Help 2.0 (classical OT, serialist OTs, and Harmonic Grammar). Phonological theories are so complex that manual calculation is inefficient and error-prone; such software has the potential for ensuring accuracy in the construction of proofs.

4. Interfaces

A great deal of research has focused on how phonology interfaces with other cognitive domains. A recurrent theme is “visibility”: which aspects of other modules’ representations can the phonological module see, and what can they see of the phonological module?

4.1 Morphology

The interaction of morphology and phonology has attracted a great deal of attention (Ussishkin, 2007). Phonological processes can refer to at least some part of speech information: for example, there are phonological processes that apply only to nouns (Smith, 2001). For morphological structure, constraints and rules have been proposed that refer to root boundaries and membership (Beckman, 1998), word boundaries (Selkirk, 1995), affix classes (Benua, 2000), derivational heads (Revithiadou, 1999), and lexical class membership (Itô & Mester, 1999).

It also appears that phonological processes can refer to a form other than the input—their derivational base. For example, Sundanese /ar+ɲiar/ surfaces as [ɲãlĩãr], even though [l] blocks nasal harmony in other words—that is, the output should be *[ɲãliar] (/r/→[l] through dissimilation). In this case, [ĩã] appear to be nasal because they reflect the nasality in the derivational base: that is, [ɲĩãr] (Benua, 2000). Such phenomena have been analyzed in parallelist models by including derivational bases in the computation along with the actual input: that is, the computation would involve both /ar+ɲiar/ and /ɲiar/, with constraints that allow the form of /ar+ɲiar/ to be influenced by the output of /ɲiar/. Similarly, some theories propose that entire inflectional paradigms are included in the computation of a form (e.g., McCarthy, 2005). In contrast, other theories seek to explain such effects through rule ordering, or by employing several computational strata (Kiparsky, 2000; Bermúdez-Otero, 2011). In stratal approaches, the root /ɲiar/ could first undergo nasal harmony to [ɲĩãr], and then the affix would be added at a later level, with dissimilation and reapplication of nasal harmony: [ɲ-ar-ĩãr] → [ɲãlĩãr]. For some cases, representational solutions have also been proposed. For example, prosodic structure (e.g., syllables, feet) of a derived form often seems sensitive to the prosodic structure of its base; while derivational solutions have been proposed, representational solutions involving the construction of different prosodic categories also appear viable (e.g., Peperkamp, 1997).

Another major issue is whether phonological conditions can affect morphological structure. It is now well established that phonological restrictions can affect morph order (i.e., the position of the phonological exponents of morphemes). For example, infixes are affixes whose morph’s position is dictated by phonological requirements on prosodic well-formedness (McCarthy & Prince, 1986; cf. Yu, 2003), and the same has been argued for root-and-pattern morphology (McCarthy, 1979; Ussishkin, 2007). However, phonological conditions seem to be unable to provoke a change in morphological structure, or in the morpheme affiliation of output segments. This latter point is codified in McCarthy and Prince’s (1993b) principle of Consistency of Exponence, but challenged by Łubowicz (2005) who argues that affix segments can take on new morphological affiliations in the output.

One of the liveliest areas of morpho-phonological research has been reduplication, where the content of an affix involves copying part or all of a nearby stem, as in Māori [hoː-honu] “deep,” where the reduplicative prefix has the shape CVː and gets its segmental material from the following root. In earlier theories (e.g., Marantz, 1982), a reduplicant’s underlying form consisted of a string of underspecified segments (Cs and Vs), with the output features filled in via autosegmental spreading. In later theories, reduplicants consisted of prosodic units (McCarthy & Prince, 1986). In some recent theories, reduplicants have no underlying phonological material; instead, they impose Correspondence relations (McCarthy & Prince, 1999), and their surface form is entirely determined by constraints on output form (Urbanczyk, 2001).

4.2 Lexicon

Further research into the phonological aspects of Lexical entries has addressed how sparsely specified underlying forms can be, and how many underlying forms a morpheme can have. In phonologically conditioned suppletion, which of a morpheme’s several allomorphs appears depends on phonological factors, sometimes in apparent disregard of morphological conditions. For example, the French feminine adjective [bɛl] is used before vowel-initial masculine nouns (e.g., [bɛl ami]), instead of the masculine [bo] (Mascaró, 2007). In such cases, a morpheme seems to have several underlying forms (e.g., /bɛl/, /bo/), and the phonologically optimal one surfaces. For sparse specification, a number of theories have argued that morphs can consist of single features rather than fully specified segments (e.g., Akinlabi, 1996).

An overarching research issue for lexical items is where morph information is stored. While a great deal of research has adopted the idea that morphs are stored in a Lexicon as part of a morpheme, other theories have placed such idiosyncratic information in the computational system. For example, Anderson (1992) proposes that morphemes are rules—that is, part of the computational system, rather than representational objects. The idea that at least certain morphemic idiosyncrasies are best handled in the computational system as rules or constraints has appeared in various guises in subsequent theories, such as the proposal that there are constraints that refer to the position of particular morphemes (e.g., align—McCarthy & Prince, 1993a).

4.3 Syntax

As with morphology, a central issue in the study of the syntax-phonology interface is determining which aspects of syntactic structure are visible to phonological processes. A fundamental question is whether phonological computation can refer directly to syntactic structure, or whether reference must be indirect (Selkirk, 1986). Indirect reference means that almost all phonological rules/constraints can only refer to phonological representation; the exception is specific processes that construct those phonological structures by referring to syntactic structure. A good deal of attention has focused on which phonological constituents exist and how they are constructed. An influential strand of research posits several prosodic constituents above the Prosodic Word level; their boundaries are determined by referring to the boundaries of syntactic phrases with lexical heads (e.g., Selkirk, 1984, 1995; Truckenbrodt, 1999). Such theories assume that prosodic structure is built with respect to surface syntactic representation. In contrast, some recent theories effectively interweave syntactic structure-building with phonological structure-building. At certain points in the syntactic derivation (“phases”—e.g., Chomsky, 2001), the phonological form of the derivation is realized (“spelled out”), and forms a prosodic domain (e.g., Adger, 2007).

An important research theme is how phonology influences syntactic structure. In standard Generative models, the syntactic module feeds the phonological module (Chomsky, 1965). However, in some cases it seems that phonological conditions determine or restrict syntactic structure. For example, Zubizarreta (1998) argues that focused phrases in Romance languages appear rightmost because focused constituents must have phrase-stress, and there is a phonological condition requiring phrase-stress to be rightmost. However, Zubizarreta’s (1998) theory still maintains a strict forward-feeding relationship between the syntactic and phonological modules. This strict direction is rejected by Samek-Lodovici (2005), who argues that syntactic and phonological representations can be evaluated at the same time, and that the computational system intermingles conditions (specifically, intermingling syntactic and prosodic constraints). Exactly how phonological conditions can influence syntactic structure remains an open question.

4.4 Phonetics

The most basic question for research into the phonology-phonetics interface is whether there is a phonetic module. In SPE, one module computed phonological structures and converted them into phonetic representation, a view also taken in some more recent theories (e.g., Flemming, 2001). An alternative view is that there are two modules—the Phonological and the Phonetic (e.g., Keating, 1985). Evidence for a distinct Phonetic module includes differences in the phonetic realization of phonologically initiated and phonetically initiated processes, such as categorical (phonologically initiated) and gradient (phonetically initiated) realization of nasalization in Sundanese (Cohn, 1998). Kingston and Diehl (1994) argue that some phonological features have multiple phonetic realizations, and exactly which realization is employed in which phonological system must be learned. The two-module approach also predicts that processes initiated in the phonetic module cannot be visible to the phonological module, and so cannot condition phonological processes (e.g., phrase-final lengthening).

A strand of research has asked whether the phonetic module can see more than the phonological output. For example, in “partial neutralization,” pairs of neutralized forms are realized with a slight difference, suggesting that the phonetic module can see underlying as well as surface forms (e.g., Braver, 2014). Focusing on the phonological output only, it is an interesting question whether phonetic implementation refers to every aspect of phonological structure, and if so, how. Some phonetic processes refer to prosodic phrase boundaries and metrical heads (Cho & Keating, 2009). It has also been argued that segmental duration is determined by referring to fine details of sub-syllabic constituency (Broselow, Chen, & Huffman, 1997). However, whether phonetic processes make use of every prosodic node and the complex associations between them, and exactly how they are used, remains an interesting ongoing issue.

As with the phonology-syntax interface, it is also an interesting research issue whether phonology-phonetics interaction is mono- or bi-directional. De Lacy (2007a) proposes a limited bi-directional model where the phonetic module can provide feedback to the phonological module by rejecting phonological outputs that are impossible to realize, and requesting the next best candidate. Other research can be seen as exploring whether phonological mechanisms can be sensitive to their phonetic consequences—that is, where the phonetic module provides feedback when a phonological process would create a structure that exceeds a particular threshold of articulatory difficulty or perceptual confusability. In this vein, Walker and Pullum (1999) argue that every phonetically possible segment has some phonological representation, suggesting that there is some phonological sensitivity to the phonetic consequences of its representations, though perhaps at the level of the species. On the other hand, it is far from clear whether every phonological structure can be phonetically realized (e.g., voiced epiglottal plosives—Ladefoged & Maddieson, 1996, p. 38).

A major research issue has been whether phonological features are interpreted articulatorily or auditorily, or whether both types exist. In SPE, features were seen as “physical scales describing independently controllable aspects of the speech event” (Chomsky & Halle, 1968, p. 297), and were interpreted as articulatory configurations. In Articulatory Phonology, representations are articulatory gestures (Browman & Goldstein, 1989). Not all theories have a straightforward articulatory realization of features; for example in Halle (1995) some features are “articulator-bound” and always produced using the same articulator; others are “articulator-free”—they are executed by different articulators in different segments. In contrast, some theories define at least some features in acoustic/auditory terms (e.g., Jakobson, Fant, & Halle, 1952). Flemming (2001) argues that such definitions are necessary to account for affinities between acoustically similar but articulatorily different classes of segments, such as pharyngealized and round segments.

4.5 Other Interfaces

While the core linguistic interfaces with phonology have been discussed above, there are potentially others, too. For example, the relationship between phonology and musical cognition has a long history, and Lerdahl and Jackendoff’s (1983) seminal work inspired developments in metrical theory. Phonology may also figure prominently in orthographic processing and production; for example, in Liberman, Liberman, Mattingly, and Shankweiler’s (1980) theory, orthography is converted into phonological information at an early stage of the perception process. Musical and orthographic cognition may in fact influence the phonological module. For example, orthography may influence lexical representation (Taft, 2006), and even modes of phonological processing (e.g., Frith, 1998). Musical and prosodic impairments seem to be correlated, and musical training seems to help with phonological awareness of prosodic units (Goswami, 2012).

5. Evidence

Different theories of phonological cognition make different claims about what such cognition is responsible for. For example, Evolutionary Phonology holds that there is no transformational phonological module; instead, sound patterns are the side-effects of evolutionary patterns (Blevins, 2004). So, for this theory, online (synchronic) phonological computation has no measurable effect in the world because there is no phonological computation. Similarly, in Declarative Phonology the effects of many alternations are ascribed to phonetic realization rather than phonological processes (Scobbie et al., 1996, p. 703). At the other extreme, SPE (Chomsky & Halle, 1968) had one single module for phonological computation and language-specific phonetic computation, so all non-universal phonetic effects would be ascribed to the module. In more recent Generative theories of the phonological and phonetic modules, however, some speech sound realizations and patterns are due entirely to the Phonetic module (Keating, 1985). In short, it is not possible to declare that some speech sound pattern is evidence for phonological cognition outside of the context of a specific theory. Focusing on Generative phonology, there are two core types of phenomenon that the theory seeks to explain: distributional patterns and alternations.

5.1 Phonotactics

Distributional patterns (also called “phonotactics”) are restrictions on the appearance of particular phonological segments or combinations in particular environments. For example, plain, ejective, and aspirated stops can occur in Cuzco Quechua roots, but only plain stops appear in affixes (Beckman, 1998). Combinations of consonants and vowels are also always restricted—for example, Māori does not permit consonant clusters or word-final consonants (Bauer, Parker, & Evans, 1993). A more complex restriction is seen in Arabic roots: there are no roots with the shape /C1C2C3/ where C1 and C2 are identical (e.g., no [tatam]) (McCarthy, 1979). Such patterns are “distributional” in a particular language because they do not participate in alternations.

One explanation for phonotactic patterns is that they are “accidental” gaps in the Lexicon. In this view, the lack of affixal ejective stops in Cuzco Quechua is a generalization about affix morphs, but no synchronic process lies behind it (of course, there may well be a diachronic reason for such lexical patterns). However, apparently not all distributional generalizations can be ascribed to accidental gaps. Moreton (2000) provided experimental evidence that American English speakers accept the word-initial cluster [bw], even though it appears in no (or exceedingly few) morphemes, yet reject [dl]. This indicates that word-initial [bw] is grammatical but unattested (i.e., an accidental gap), while there is an active phonological prohibition against word-initial [dl].

There are different theoretical approaches to explaining phonotactic restrictions. In SPE (Chomsky & Halle, 1968, p. 381ff.) and many of its theoretical successors, rules apply in the Lexicon (“lexical redundancy rules,” or “morpheme structure constraints”) to restrict possible morph shapes, and thereby possible inputs to the phonological system. A consequence is that such restrictions could in principle be different from phonological restrictions. However, SPE (Chomsky & Halle, 1968, p. 382) notes that “In many respects, they [morpheme structure constraints] seem to be exactly like ordinary phonological rules, in form and function.” Also, many restrictions seem to refer to prosodic constituency, yet lexical entries do not contain prosodic constituents (see Booij, 2011 for further discussion). In contrast, several versions of Optimality Theory place no restrictions on the input, so—aside from accidental gaps—all distributional restrictions must be due to neutralization: in Cuzco Quechua, for example, underlying aspirated stops in affixes must be mapped to some permissible surface structure (e.g., to plain stops, or even to deletion) (also see Natural Phonology for a similar outlook—Stampe, 1973). Consequently, every distributional restriction should have a counterpart in phonological neutralization: if a segment is banned in some environment in one language, it should be possible for some other language to have a process neutralizing that segment in the same environment.

5.2 Alternations

Morphophonological alternations (“alternations,” or “phonological processes”) involve pairs of words that contain a common morpheme whose phonological form differs between the words. For example, German [loːp] and [loːb-əs] (mentioned in the section “Computation: Transformations”) share the morpheme Lob. In Generative phonology, such morphemes have been assumed to share a common underlying form (i.e., /loːb/), so the surface difference must be due to a phonological transformation (in this case, coda obstruent devoicing); alternations are therefore the Generative “gold standard” for showing input→output mappings (de Lacy, 2009). However, apparent alternations may instead be distinct allomorphs listed in the Lexicon (suppletion), so tests for productivity must be applied to help determine their status. In addition, for theories with a Phonetic module, some apparent alternations may not be phonological but due to phonetic processes. For example, Cohn (1993) identifies anticipatory nasalization in English (e.g., [bĩn] bean) as a phonetic process, as it involves a gradient increase in nasalization through the vowel.
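The input→output mapping described above can be sketched as a toy string rewrite. This is an illustration only: the devoicing table and function name are ours, and real analyses state the rule over distinctive features and syllable (coda) structure rather than over strings.

```python
# Toy sketch of German final obstruent devoicing as a string rewrite.
# Real analyses state the rule over features and syllable structure.
DEVOICE = {"b": "p", "d": "t", "g": "k", "v": "f", "z": "s"}

def devoice_final_obstruent(underlying):
    """Map a word-final voiced obstruent to its voiceless counterpart."""
    if underlying and underlying[-1] in DEVOICE:
        return underlying[:-1] + DEVOICE[underlying[-1]]
    return underlying

print(devoice_final_obstruent("loːb"))    # prints loːp: /loːb/ surfaces as [loːp]
print(devoice_final_obstruent("loːbəs"))  # prints loːbəs: the suffixed form keeps [b]
```

The point of the pair is that a single underlying form plus one transformation derives both surface variants; storing [loːp] and [loːb-] as separate listed allomorphs would capture the same facts without any transformation, which is why productivity tests are needed.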

An individual may produce the same word in different ways (e.g., some American English speakers may or may not delete the [t] in winter). These variant pronunciations are not morphophonological alternations because they are not conditioned by a change in environment. However, some have argued that such free variation provides evidence for transformations from single underlying forms (e.g., Labov, 1969; Cedergren & Sankoff, 1974), though see the section “One-to-one or One-to-many?”

5.3 Phenomena

A great deal—perhaps the majority—of phonological research is about alternations and phonotactic restrictions: which ones exist, which are possible, and how they work. There are many typological surveys of various types of alternations. For example, de Lacy (2006a) includes a survey of prosodically conditioned feature or tone change (often called “neutralization,” though this term has a stricter interpretation). In Yamphu, for instance, /tː/ becomes [ʔ] in syllable codas, as seen in the alternations [sitː-a] “hit+past” and [siʔ-ma] “hit+infinitive” (from underlying /sitː/). The broader claim in this case, and in many others, is that there are asymmetries in what undergoes transformations, what triggers them, and what they produce. For example, de Lacy (2006a) claims that input coronals can become glottals in syllable coda neutralization, but can never become labial or dorsal. Other alternations involve assimilative feature or tone change, where features/tones become more like a neighboring segment, and dissimilation of features and tones, where features/tones become less like neighboring segments. The several handbooks of phonology all provide extensive overviews of these processes (e.g., Goldsmith, 1995; de Lacy, 2007b; Goldsmith et al., 2011; van Oostendorp, Ewen, Hume, & Rice, 2011).

Research into phonological computation can also often be categorized into one of several domains: segmental, prosodic, tonal, and intonational. For example, there are extensive typological surveys of sub-word metrical structure (e.g., Hayes, 1995; Gordon, 2006; van der Hulst et al., 2010). There is also a great deal of research into lexical tone (e.g., Yip, 2002) and intonation (e.g., Gussenhoven, 2004). While there has been a good deal of research into how segmental transformations are conditioned by prosodic structure, there has been less research into how the other domains interact—e.g., how metrical structure and tone interact (e.g., de Lacy, 2007c). Of course, there is also research that spans the phonological interfaces, discussed in the “Interfaces” section.

5.4 Productivity

An ongoing research issue is how to tell accidental gaps from systematic phonotactic restrictions, and synchronic alternations from static patterns in the Lexicon. In some theories, there are no distinctions—all output phonological patterns are side-effects of lexical patterns (Blevins, 2004). However, de Lacy and Kingston (2013) present evidence that speakers can distinguish synchronically active patterns from others, and argue that the distinction indicates the activity of a transformational module. There are also differences in the productivity of patterns, which can be related to their generative status (Bauer, 2001). One interesting issue is whether speakers are only aware of absolute restrictions, or whether they also know about “gradient” generalizations. For example, Frisch, Pierrehumbert, and Broe (2004) observe that Jordanian Arabic verb roots that have consonants with the same place of articulation are relatively rare, though not unattested, and argue that speakers are aware of this distinction.

5.5 Phonology or Phonetics?

An ongoing research issue is which properties can be ascribed to the phonological module and which are due to the phonetic module (e.g., Kawahara, 2011). For example, differences in segment durations can be due to phonological properties such as mora count and association (e.g., Hubbard, 1995). However, a number of processes initiated in the phonetic module also affect duration, and some differences in duration are a side-effect of articulatory timing (e.g., Turk & Shattuck-Hufnagel, 2000). The difficulty of sorting out the influence of the various modules can be seen in the variety of theories of loanword adaptation. In some theories, loanword adaptation is almost entirely due to perceptual adaptation and does not involve the phonological module (e.g., Peperkamp, Vendelin, & Nakamura, 2008); in others it is primarily due to the phonological module (e.g., LaCharité & Paradis, 2005); still others consider loanword adaptation to involve both perceptual and phonological adaptation (e.g., Broselow, 2009).

5.6 Sources of Evidence

In the past, the two most common methods of gathering evidence for phonological theories have been the use of secondary sources (e.g., grammars, journal article descriptions, dictionaries) and fieldwork (de Lacy, 2009). Reliance on secondary sources is particularly widespread—and in practice necessary—for typological surveys (e.g., Hayes, 1995, p. 3). The majority of this work is “impressionistic”: it reports the author’s perception of the target language’s speech sounds, and the target language may or may not be the author’s native language. One challenge with impressionistic work is knowing whether the linguist has misperceived the source data. The descriptions also often report native speakers’ intuitions about their own speech; this method presents challenges of its own (e.g., Kawahara, 2011).

Over the past couple of decades, the Laboratory Phonology movement has developed, in part in response to the issues surrounding impressionistic data and intuition-based evidence (Cohn, 2010; Cole, 2010). However, experimental approaches to phonological evidence have a much longer history (e.g., the wug-tests of Berko, 1958), and some subfields of linguistics have long relied on laboratory-based experimentation (e.g., Pierrehumbert, 1980). The increasing prevalence of non-impressionistic methods has been helped by a significant fall in the cost of articulatory and acoustic measurement devices (e.g., spectrographs, laryngographs, nasometers) and of digital storage. There are now excellent free, open-source software programs for acoustic analysis (e.g., Praat—Boersma & Weenink, 2015).

A number of online databases that aid phonological description have been created. For example, UPSID (the UCLA Phonological Segment Inventory Database—Maddieson, 1980) was originally released as software but is now available through an online interface. The World Phonotactics Database provides an interface for searching for phonotactic patterns (Donohue, Hetherington, McElvenny, & Dawson, 2013); the Typological Database System aggregates a number of different linguistic databases; P-base is a database of sound patterns in over 500 languages (Mielke, 2004); the Metathesis in Language database focuses on descriptions of metathesis (Hume-O’Haire, 2003); and PHOIBLE is a database of phoneme inventories (Moran, McCloy, & Wright, 2014).

6. Critical Analysis of Scholarship

As emphasized throughout the discussion above, a large number of phonological research questions are being actively pursued. In fact, it is difficult to point to any specific research area and assert with certainty that there is consensus. There is significant theoretical diversity, even within broad frameworks like Generative Phonology. Indeed, even for specific theories like Optimality Theory, there are many versions with remarkably different mechanisms (e.g., de Lacy, 2007d).

For example, there are many extant feature theories. They differ on such fundamental issues as whether features are articulatorily or acoustically interpreted, whether they are hierarchically organized, whether they are universal, and what the features are (Hall, 2007).

Even fundamental issues, such as whether there are underlying forms, have not yet been resolved to overwhelming consensus. Similarly, whether the phonological module is encapsulated or is shaped by external forces remains unresolved.

The evidentiary base of many phonological theories has also been called into question (e.g., de Lacy, 2009). For example, there is extensive research into the typology of metrical stress; however, de Lacy (2014) argues that much of the evidence does not reach an adequate standard for Generative theories. Similarly, Gordon (2014) suggests that some descriptions of stress are in fact misdescribed intonational pitch accents. The reliability of descriptive methods that employ impressionistic reports and intuition has also been questioned (e.g., Kawahara, 2011). Even for research that employs more objective, non-impressionistic methods, there is little replication.

While the preceding paragraphs might seem critical of the field as a whole, they are simply an observation that the field is young. Even though descriptive phonology can trace its roots back 2,400 years to Pāṇini’s Aṣṭādhyāyī (Vasu, 1897), structuralist linguistics only began in the early 20th century (Saussure, 1916), and Generative phonology in the 1960s. Given that 218 years passed between Newton’s Principia (Newton, 1687) and Einstein’s theory of Special Relativity (Einstein, 1920), it would be remarkable if Generative phonology had advanced an equivalent amount in only 55 years of study. It is therefore not unlikely that phonology will see profound changes in its theories and methods in the coming decades and centuries.

Further Reading


There are several excellent textbooks about phonology. Gussenhoven and Jacobs (2005) provide an introduction to standard conceptions of phonological cognition. Clark, Yallop, and Fletcher (2007) provide an introduction from a functionalist perspective, and Kenstowicz (1994) from a Generative, formalist perspective. Kager (1999) is an introduction to a specific theory—Optimality Theory.

Clark, J., Yallop, C., & Fletcher, J. (2007). An introduction to phonetics and phonology (3d ed.). Blackwell Textbooks in Linguistics. Malden, MA: Blackwell.

Gussenhoven, C., & Jacobs, J. (2005). Understanding phonology (2d ed.). Understanding Language series. New York: Oxford University Press.

Kager, R. (1999). Optimality theory. Cambridge Textbooks in Linguistics. Cambridge, U.K.: Cambridge University Press.

Kenstowicz, M. (1994). Phonology in generative grammar. Blackwell Textbooks in Linguistics. Oxford: Blackwell.

Oxford Bibliographies Online

The Oxford Bibliographies website contains many recommendations for further reading in phonology and related topics. A good starting point is de Lacy (2011).

de Lacy, P. (2011). Phonology. In M. Aronoff (Ed.), Oxford Bibliographies Online: Linguistics. New York: Oxford University Press.


Several recent handbooks provide more advanced readings, with chapters on specific topics.

Cohn, A. C., Fougeron, C., & Huffman, M. K. (2012). The Oxford handbook of laboratory phonology. Oxford: Oxford University Press.

de Lacy, P. (Ed.) (2007). The Cambridge handbook of phonology. Cambridge, U.K.: Cambridge University Press.

Goldsmith, J. (Ed.) (1995). The handbook of phonological theory. Cambridge, MA: Blackwell.

Goldsmith, J., Riggle, J., & Yu, A. C. L. (Eds.). (2011). The handbook of phonological theory (2d ed.). Malden, MA: Wiley-Blackwell.

van Oostendorp, M., Ewen, C., Hume, E., & Rice, K. (Eds.). (2011). The Blackwell companion to phonology. Vol. 4. Malden, MA: Wiley-Blackwell.

Journals and Book Series

Most published research appears in either journals or book series. The journals devoted to phonology are Phonology and Laboratory Phonology, though a great deal of phonological research also appears in other linguistics journals (e.g., Language, Lingua, Linguistic Inquiry, Natural Language and Linguistic Theory, the Journal of Phonetics). There is a book series devoted to phonological description and theory—The Phonology of the World’s Languages—though many book series devoted to linguistics include work on descriptive and theoretical phonology.

Ball, M. J., & van Lieshout, P. (Eds.) (Forthcoming). Studies in Phonetics and Phonology. Sheffield, U.K.: Equinox.

Cole, J. (Ed.) (2010–). Laboratory Phonology.

Durand, J. (Ed.) (1993–). The Phonology of the World’s Languages. Oxford: Oxford University Press.

Ewen, C., & Kaisse, E. (Eds.) (1984–). Phonology.

Idsardi, W., & Vaux, B. (Eds.). Oxford Surveys in Phonology and Phonetics. Oxford: Oxford University Press.

Nevins, A., & Rice, K. (Eds.). Oxford Studies in Phonology and Phonetics. Oxford: Oxford University Press.

Online Repositories

Finally, there are online repositories where phonological work is made available, often pre-publication:

Prince, A., & Baković, E. (Eds.) (1993–). The Rutgers Optimality Archive.

Starke, M. (Ed.) (2007–). LingBuzz.


Adger, D. (2007). Stress and phasal syntax. Linguistic Analysis, 33(3–4), 238–266.

Akinlabi, A. (1996). Featural alignment. Journal of Linguistics, 32, 239–289.

Albright, A. (2002). The identification of bases in morphological paradigms. PhD diss., UCLA.

Anderson, J. M., & Ewen, C. J. (1987). Principles of dependency phonology. Cambridge, U.K.: Cambridge University Press.

Anderson, S. R. (1992). A-morphous morphology. Cambridge, U.K.: Cambridge University Press.

Anttila, A. (1997). Deriving variation from grammar. In F. Hinskens, R. van Hout, & W. L. Wetzels (Eds.), Variation, change and phonological theory (pp. 35–68). Amsterdam: John Benjamins.

Anttila, A. (2007). Variation and optionality. In P. de Lacy (Ed.), The Cambridge handbook of phonology (pp. 519–536). Cambridge, U.K.: Cambridge University Press.

Archangeli, D. (1984). Underspecification in Yawelmani phonology and morphology. PhD diss., Massachusetts Institute of Technology.

Archangeli, D., & Pulleyblank, D. (1994). Grounded phonology. Cambridge, MA: MIT Press.

Baković, E. (2000). Harmony, dominance, and control. PhD diss., Rutgers University. Rutgers Optimality Archive 360.

Bauer, L. (2001). Morphological productivity. Cambridge, U.K.: Cambridge University Press.

Bauer, W., Parker, W., & Evans, T. K. (1993). Maori. London: Routledge.

Beckman, J. (1998). Positional faithfulness. PhD diss., University of Massachusetts Amherst.

Bennett, W. (2015). The phonology of consonants: Harmony, dissimilation, and correspondence. Cambridge, U.K.: Cambridge University Press.

Benua, L. (2000). Phonological relations between words. New York: Garland.

Berko, J. (1958). The child’s learning of English morphology. Word, 14, 150–177.

Bermúdez-Otero, R. (2011). Cyclicity. In M. van Oostendorp, C. Ewen, E. Hume, & K. Rice (Eds.), The Blackwell companion to phonology. Vol. 4 (pp. 2019–2048). Malden, MA: Wiley-Blackwell.

Blevins, J. (1995). The syllable in phonological theory. In J. Goldsmith (Ed.), The handbook of phonological theory (pp. 206–244). London: Basil Blackwell.

Blevins, J. (2004). Evolutionary phonology: The emergence of sound patterns. Cambridge, U.K.: Cambridge University Press.

Boersma, P. (2009). Cue constraints and their interactions in phonological perception and production. In P. Boersma & S. Hamann (Eds.), Phonology in perception (pp. 55–110). Berlin: Mouton de Gruyter.

Boersma, P., & Hamann, S. (Eds.) (2009). Phonology in perception. Berlin: Mouton de Gruyter.

Boersma, P., & Hayes, B. (2001). Empirical tests of the Gradual Learning Algorithm. Linguistic Inquiry, 32, 45–86.

Boersma, P., & Weenink, D. (2015). Praat: Doing phonetics by computer. Version 5.4.08. Computer program.

Booij, G. (2011). Morpheme structure constraints. In M. van Oostendorp, C. Ewen, E. Hume, & K. Rice (Eds.), The Blackwell companion to phonology. Vol. 4, Phonological interfaces (pp. 2049–2070). Oxford: Blackwell.

Braver, A. (2014). Imperceptible incomplete neutralization: Production, non-identifiability, and non-discriminability in American English flapping. Lingua, 152, 24–44.

Brentari, D. (2012). Phonology. In R. Pfau, M. Steinbach, & B. Woll (Eds.), Sign language: An international handbook (pp. 21–54). Berlin: De Gruyter Mouton.

Broselow, E. (2009). Stress adaptation in loanword phonology: Perception and learnability. In P. Boersma & S. Hamann (Eds.), Phonology in perception (pp. 191–234). Berlin: Mouton de Gruyter.

Broselow, E., Chen, S., & Huffman, M. (1997). Syllable weight: Convergence of phonology and phonetics. Phonology, 14, 47–82.

Browman, C. P., & Goldstein, L. (1989). Articulatory gestures as phonological units. Phonology, 6, 201–251.

Browman, C. P., & Goldstein, L. (1990). Tiers in articulatory phonology, with some implications for casual speech. In J. Kingston & M. E. Beckman (Eds.), Papers in laboratory phonology I: Between the grammar and physics of speech (pp. 341–376). Cambridge, U.K.: Cambridge University Press.

Burzio, L. (2002). Surface-to-surface morphology: When your representations turn into constraints. In P. Boucher (Ed.), Many morphologies (pp. 142–177). Somerville, MA: Cascadilla.

Bybee, J. (2001). Phonology and language use. Cambridge, U.K.: Cambridge University Press.

Bybee, J. (2010). Language, usage, and cognition. Cambridge, U.K.: Cambridge University Press.

Cedergren, H. J., & Sankoff, D. (1974). Variable rules: Performance as a statistical reflection of competence. Language, 50, 333–355.

Cho, T., & Keating, P. (2009). Effects of initial position versus prominence in English. Journal of Phonetics, 37, 466–485.

Choi, J. D. (1992). Phonetic underspecification and target-interpolation: An acoustic study of Marshallese vowel allophony. PhD diss., UCLA. UCLA Working Papers in Phonetics 82.

Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press.

Chomsky, N. (2001). Derivation by phase. In M. Kenstowicz (Ed.), Ken Hale: A life in language (pp. 1–52). Cambridge, MA: MIT Press.

Chomsky, N., & Halle, M. (1968). The sound pattern of English. New York: Harper & Row.

Chomsky, N., & Lasnik, H. (1977). Filters and control. Linguistic Inquiry, 8, 425–504.

Clark, C. W., Marler, B., & Beeman, K. (1987). Quantitative analysis of animal vocal phonology: An application to Swamp Sparrow song. Ethology: International Journal of Behavioral Biology, 76(2), 101–115.

Clements, G. N. (1985). The geometry of phonological features. Phonology Yearbook, 2, 225–252.

Clements, G. N., & Hume, E. V. (1995). The internal organization of speech sounds. In J. Goldsmith (Ed.), The handbook of phonological theory (pp. 245–306). Oxford: Blackwell.

Cohn, A. (1993). Nasalization in English: Phonology or phonetics. Phonology, 10, 43–81.

Cohn, A. (1998). The phonetics-phonology interface revisited: Where’s phonetics? Texas Linguistic Forum, 41, 25–240.

Cohn, A. C. (2010). Laboratory phonology: Past successes and current questions, challenges, and goals. In C. Fougeron, B. Kühnert, M. D’Imperio, & N. Vallée (Eds.), Laboratory phonology 10 (pp. 3–30). Berlin: Mouton de Gruyter.

Cole, J. (Ed.) (2010–). Laboratory Phonology.

de Lacy, P. (2006a). Markedness: Reduction and preservation in phonology. Cambridge, U.K.: Cambridge University Press.

de Lacy, P. (2006b). Transmissibility and the role of the phonological component. Theoretical Linguistics, 32(2), 185–196.

de Lacy, P. (2007a). Freedom, interpretability, and the Loop. In S. Blaho, P. Bye, & M. Krämer (Eds.), Freedom of analysis? Studies in Generative Grammar 95 (pp. 86–118). Berlin: Mouton de Gruyter.

de Lacy, P. (Ed.) (2007b). The Cambridge handbook of phonology. Cambridge, U.K.: Cambridge University Press.

de Lacy, P. (2007c). The interaction of tone, sonority, and prosodic structure. In P. de Lacy (Ed.), The Cambridge handbook of phonology (pp. 281–307). Cambridge, U.K.: Cambridge University Press.

de Lacy, P. (2007d). Themes in phonology. In P. de Lacy (Ed.), The Cambridge handbook of phonology (pp. 5–30). Cambridge, U.K.: Cambridge University Press.

de Lacy, P. (2009). Phonological evidence. In S. Parker (Ed.), Phonological argumentation: Essays on evidence and motivation (pp. 43–78). London: Equinox.

de Lacy, P. (2011). Markedness and faithfulness constraints. In M. van Oostendorp, C. J. Ewen, E. Hume, & K. Rice (Eds.), The Blackwell companion to phonology. Vol. 3, Phonological processes (pp. 1491–1512). Malden, MA: Blackwell.

de Lacy, P. (2014). Evaluating evidence for stress systems. In H. van der Hulst (Ed.), Word stress: Theoretical and typological issues (pp. 149–193). Cambridge, U.K.: Cambridge University Press.

de Lacy, P., & Kingston, J. (2013). Synchronic explanation. Natural Language and Linguistic Theory, 31(2), 287–355.

Donohue, M., Hetherington, R., McElvenny, J., & Dawson, V. (2013). World phonotactics database. Department of Linguistics, The Australian National University.

Dresher, E. (2009). The contrastive hierarchy in phonology. Cambridge, U.K.: Cambridge University Press.

Einstein, A. (1920). Relativity: The special and general theory (R. W. Lawson, Trans.). New York: Henry Holt.

Flemming, E. (2001). Scalar and categorical phenomena in a unified model of phonetics and phonology. Phonology, 18, 7–44.

Flemming, E. (2005). Deriving natural classes in phonology. Lingua, 115, 287–309.

Frisch, S., Pierrehumbert, J., & Broe, M. B. (2004). Similarity avoidance and the OCP. Natural Language and Linguistic Theory, 22, 179–228.

Frith, U. (1998). Literally changing the brain. Brain, 121(6), 1011–1012.

Goldsmith, J. (1976). An overview of autosegmental phonology. Linguistic Analysis, 2(1), 23–68.

Goldsmith, J. (Ed.) (1995). The handbook of phonological theory. Cambridge, MA: Blackwell.

Goldsmith, J., Riggle, J., & Yu, A. C. L. (Eds.). (2011). The handbook of phonological theory (2d ed.). Cambridge, MA: Wiley-Blackwell.

Gordon, M. (2006). Syllable weight: Phonetics, phonology, typology. New York: Routledge.

Gordon, M. (2007). Functionalism in phonology. In P. de Lacy (Ed.), The Cambridge handbook of phonology (pp. 61–78). Cambridge, U.K.: Cambridge University Press.

Gordon, M. (2014). Disentangling stress and pitch-accent: A typology of prominence at different prosodic levels. In H. van der Hulst (Ed.), Word stress: Theoretical and typological issues (pp. 83–118). Cambridge, U.K.: Cambridge University Press.

Goswami, U. (2012). Entraining the brain: Applications to language research and links to musical entrainment. Empirical Musicology Review, 7(1–2), 57–63.

Gussenhoven, C. (2004). The phonology of tone and intonation. Cambridge, U.K.: Cambridge University Press.

Hall, T. A. (2007). Segmental features. In P. de Lacy (Ed.), The Cambridge handbook of phonology (pp. 311–333). Cambridge, U.K.: Cambridge University Press.

Halle, M. (1995). Feature geometry and feature spreading. Linguistic Inquiry, 26, 1–46.

Halle, M., Vaux, B., & Wolfe, A. (2000). On feature spreading and the representation of place of articulation. Linguistic Inquiry, 31(3), 387–444.

Halle, M., & Vergnaud, J.-R. (1987). An essay on stress. Cambridge, MA: MIT Press.

Harris, J. (2007). Representation. In P. de Lacy (Ed.), The Cambridge handbook of phonology (pp. 119–137). Cambridge, U.K.: Cambridge University Press.

Harris, J., & Lindsey, G. (1995). The elements of phonological representation. In J. Durand & F. Katamba (Eds.), Frontiers of phonology: Atoms, structures, derivations (pp. 34–79). Harlow, U.K.: Longman.

Harris, Z. (1960). Structural linguistics. Chicago: University of Chicago Press.

Hayes, B. (1989). Compensatory lengthening in moraic phonology. Linguistic Inquiry, 20, 253–306.

Hayes, B. (1995). Metrical stress theory: Principles and case studies. Chicago: University of Chicago Press.

Hayes, B. (1999). Phonetically-driven phonology: The role of optimality theory and inductive grounding. In M. Darnell, E. A. Moravcsik, M. Noonan, F. Newmeyer, & K. Wheatley (Eds.), Functionalism and formalism in linguistics. Vol. 1, General papers (pp. 243–285). Amsterdam: John Benjamins.

Hayes, B., Tesar, B., & Zuraw, K. (2013). OTSoft 2.3.2. Software.

Hooper, J. B. (1976). An introduction to natural generative phonology. New York: Academic Press.

Hubbard, K. (1995). Toward a theory of phonological and phonetic timing: Evidence from Bantu. In B. Connell & A. Arvaniti (Eds.), Phonology and phonetic evidence (pp. 168–187). Papers in Laboratory Phonology 4. Cambridge, U.K.: Cambridge University Press.

van der Hulst, H., Goedemans, R., & van Zanten, E. (Eds.). (2010). A survey of word accentual patterns in the languages of the world. Berlin: De Gruyter Mouton.

Hume-O’Haire, E. (2003). Metathesis in language database.

Hyde, B. (2002). A restrictive theory of metrical stress. Phonology, 19(3), 313–359.

Hyman, L. (1985). A theory of phonological weight. Dordrecht, The Netherlands: Foris.

Hyman, L. (1993). Problems for rule ordering in phonology: Two Bantu test cases. In J. Goldsmith (Ed.), The last phonological rule: Reflections on constraints and derivations (pp. 195–222). Chicago: University of Chicago Press.

Idsardi, W. (2000). Clarifying opacity. The Linguistic Review, 17, 337–350.

Inkelas, S., & Zoll, C. (2005). Reduplication: Doubling in morphology. Cambridge Studies in Linguistics 106. Cambridge, U.K.: Cambridge University Press.

Itô, J., & Mester, A. (1999). The phonological Lexicon. In M. Tsujimura (Ed.), The handbook of Japanese linguistics (pp. 62–100). Oxford: Blackwell.

Jakobson, R., Fant, G., & Halle, M. (1952). Preliminaries to speech analysis. Cambridge, MA: MIT Press.

Jurgec, P. (2011). Feature Spreading 2.0: A unified theory of assimilation. PhD diss., University of Tromsø.

Kahn, D. (1976). Syllable-based generalizations in English phonology. PhD diss., Massachusetts Institute of Technology.

Kaun, A. (1995). The typology of rounding harmony: An optimization approach. PhD diss., UCLA.

Kawahara, S. (2011). Experimental approaches in theoretical phonology. In M. van Oostendorp, C. J. Ewen, E. Hume, & K. Rice (Eds.), The Blackwell companion to phonology (pp. 2283–2303). Oxford: Blackwell-Wiley.

Keating, P. (1985). Universal phonetics and the organization of grammars. In V. Fromkin (Ed.), Phonetic linguistics: Essays in honor of Peter Ladefoged (pp. 15–132). Orlando, FL: Academic Press.

Keating, P. A. (1990). Phonetic representations in a generative grammar. Journal of Phonetics, 18, 321–334.

Kenstowicz, M. (1981). Vowel harmony in Palestinian Arabic: A suprasegmental analysis. Linguistics, 19, 449–465.

Kenstowicz, M. (1994). Phonology in generative grammar. Blackwell Textbooks in Linguistics. Oxford: Blackwell.

Kingston, J., & Diehl, R. (1994). Phonetic knowledge. Language, 70, 419–454.

Kiparsky, P. (1982). Lexical morphology and phonology. In The Linguistic Society of Korea (Ed.), Linguistics in the morning calm (pp. 3–91). Seoul: Hanshin.

Kiparsky, P. (2000). Opacity and cyclicity. In N. A. Ritter (Ed.), A review of optimality theory. Special issue, The Linguistic Review, 17(2–4), 351–367.

Kiparsky, P. (2010). Reduplication in Stratal OT. In L. Uyechi & L. H. Wee (Eds), Reality, exploration, and discovery: Pattern interaction in language and life. Stanford, CA: Center for the Study of Language and Information. Festschrift for K. P. Mohanan.Find this resource:

Kroch, A. (1989). Reflexes of grammar in patterns of language change. Language variation and change, 1(3), 199–244.Find this resource:

Labov, W. (1969). Contraction, deletion, and inherent variability of the English copula. Language, 45, 715–762.

LaCharité, D., & Paradis, C. (2005). Category preservation and proximity versus phonetic approximation in loanword adaptation. Linguistic Inquiry, 36, 223–258.

Ladefoged, P., & Maddieson, I. (1996). The sounds of the world’s languages. Cambridge, MA: Blackwell.

Lerdahl, F., & Jackendoff, R. (1983). A generative theory of tonal music. Cambridge, MA: MIT Press.

Lewis, M. P., Simons, G. F., & Fennig, C. D. (Eds.). (2015). Ethnologue: Languages of the world (18th ed.). Dallas, TX: SIL International. An online version is also available.

Liberman, I. Y., Liberman, A. M., Mattingly, I. G., & Shankweiler, D. (1980). Orthography and the beginning reader. In J. F. Kavanagh & R. L. Venezky (Eds.), Orthography, reading, and dyslexia (pp. 137–153). Austin, TX: Pro-Ed.

Lombardi, L. (1995). Laryngeal features and privativity. The Linguistic Review, 12, 35–59.

Łubowicz, A. (2005). Infixation as morpheme absorption. In S. Parker (Ed.), Phonological argumentation: Essays on evidence and motivation. London: Equinox.

Łubowicz, A. (2012). The phonology of contrast. Oakville, CT: Equinox.

Maddieson, I. (1980). UPSID: The UCLA Phonological Segment Inventory Database. UCLA Working Papers in Phonetics, 60, 4–56.

Marantz, A. (1982). Re reduplication. Linguistic Inquiry, 13, 435–482.

Mascaró, J. (2007). External allomorphy and lexical representation. Linguistic Inquiry, 38(4), 715–735.

McCarthy, J. J. (1979). Formal problems in Semitic phonology and morphology. PhD diss., Massachusetts Institute of Technology.

McCarthy, J. J. (1988). Feature geometry and dependency: A review. Phonetica, 45, 84–108.

McCarthy, J. J. (2004). Headed spans and autosegmental spreading. University of Massachusetts, Amherst. Unpublished manuscript.

McCarthy, J. J. (2005). Optimal paradigms. In L. Downing, T. A. Hall, & R. Raffelsiefen (Eds.), Paradigms in phonological theory (pp. 170–210). Oxford: Oxford University Press.

McCarthy, J. J. (2007a). Derivations and levels of representation. In P. de Lacy (Ed.), The Cambridge handbook of phonology (pp. 99–118). Cambridge, U.K.: Cambridge University Press.

McCarthy, J. J. (2007b). Hidden generalizations: Phonological opacity in optimality theory. London: Equinox.

McCarthy, J. J. (2010). An introduction to Harmonic Serialism. University of Massachusetts Amherst. Unpublished manuscript.

McCarthy, J. J. (2011). Autosegmental spreading in Optimality Theory. In J. Goldsmith, E. Hume, & L. Wetzels (Eds.), Tones and features: Phonetic and phonological perspectives (pp. 195–222). Berlin: Mouton de Gruyter.

McCarthy, J. J., & Prince, A. (1986). Prosodic morphology. Rutgers Technical Report TR-32. New Brunswick, NJ: Rutgers University Center for Cognitive Science.

McCarthy, J. J., & Prince, A. (1993a). Generalized alignment. In G. Booij & J. van Marle (Eds.), Yearbook of Morphology (pp. 79–153). Dordrecht, The Netherlands: Kluwer.

McCarthy, J. J., & Prince, A. (1993b). Prosodic morphology I: Constraint interaction and satisfaction. Rutgers Technical Report TR-3. New Brunswick, NJ: Rutgers University Center for Cognitive Science.

McCarthy, J. J., & Prince, A. (1999). Faithfulness and identity in prosodic morphology. In R. Kager, H. van der Hulst, & W. Zonneveld (Eds.), The prosody-morphology interface (pp. 218–309). Cambridge, U.K.: Cambridge University Press.

Mielke, J. (2004). P-base: A database of sound patterns in 500+ languages.

Mielke, J. (2008). The emergence of distinctive features. Oxford: Oxford University Press.

Moran, S., McCloy, D., & Wright, R. (Eds.). (2014). PHOIBLE Online. Leipzig: Max Planck Institute for Evolutionary Anthropology.

Morén, B. (2003). The parallel structures model of feature geometry. Working Papers of the Cornell Phonetics Laboratory, 15, 194–270.

Moreton, E. (2000). Phonological grammar in speech perception. PhD diss., University of Massachusetts, Amherst.

Nathan, G. (2007). Phonology. In D. Geeraerts & H. Cuykens (Eds.), The Oxford handbook of cognitive linguistics (pp. 611–631). Oxford: Oxford University Press.

Nathan, G. (2008). Phonology: A Cognitive Grammar introduction. Amsterdam: John Benjamins.

Nespor, M., & Vogel, I. (1986). Prosodic phonology. Studies in Generative Grammar 28. Dordrecht, The Netherlands: Foris.

Newton, I. (1687). Philosophiae Naturalis Principia Mathematica. London: Jussu Societatis Regiae ac Typis Josephi Streater. English translation: The Principia: Mathematical principles of natural philosophy. (I. B. Cohen & A. Whitman, Trans.). Berkeley: University of California Press, 1999.

Oostendorp, M. van (1995). Vowel quality and phonological projection. PhD diss., Tilburg University. Rutgers Optimality Archive 84.

Oostendorp, M. van, Ewen, C. J., Hume, E., & Rice, K. (Eds.). (2011). The Blackwell companion to phonology. Malden, MA: Blackwell.

Padgett, J. (2002). On the characterization of feature classes in phonology. Language, 78(1), 81–110.

Paradis, C. (1988). On constraints and repair strategies. The Linguistic Review, 6, 71–97.

Pater, J. (2009). Weighted constraints in generative linguistics. Cognitive Science, 33, 999–1035.

Peperkamp, S. (1997). Prosodic words. The Hague: Holland Academic Graphics.

Peperkamp, S., & Dupoux, E. (2003). Reinterpreting loanword adaptations: The role of perception. In M. J. Solé, D. Recasens, & J. Romero (Eds.), Proceedings of the 15th International Congress of Phonetic Sciences (pp. 367–370). Adelaide, Australia: Causal Productions.

Peperkamp, S., Vendelin, I., & Nakamura, K. (2008). On the perceptual origin of loanword adaptations: Experimental evidence from Japanese. Phonology, 25, 129–164.

Pierrehumbert, J. (1980). The phonology and phonetics of English intonation. PhD diss., Massachusetts Institute of Technology.

Pierrehumbert, J. (2002). Word-specific phonetics. In C. Gussenhoven & N. Warner (Eds.), Laboratory Phonology 7 (pp. 101–139). Berlin: Mouton de Gruyter.

Prince, A. (1983). Relating to the grid. Linguistic Inquiry, 14(1), 19–100.

Prince, A., & Smolensky, P. (2004). Optimality theory: Constraint interaction in Generative grammar. Malden, MA: Blackwell. Also available through Rutgers Optimality Archive 537.

Prince, A., Tesar, B., & Merchant, N. (2015). OTWorkplace 12. Software.

Raimy, E. (2000). The phonology and morphology of reduplication. Berlin: Mouton de Gruyter.

Revithiadou, A. (1999). Headmost accent wins: Head dominance and ideal prosodic form in lexical accent systems. PhD diss., University of Leiden.

Riggle, J., & Bane, M. (2011). PyPhon. Software.

Rose, S., & Walker, R. (2004). A typology of consonant agreement as correspondence. Language, 80, 475–531.

Sagey, E. (1986). The representation of features and relations in nonlinear phonology. PhD diss., Massachusetts Institute of Technology.

Samek-Lodovici, V. (2005). Prosody-syntax interaction in the expression of focus. Natural Language and Linguistic Theory, 23(3), 687–755.

Saussure, F. de (1916). Cours de linguistique générale. Paris: Payot.

Scobbie, J. M., Coleman, J. S., & Bird, S. (1996). Key aspects of declarative phonology. In J. Durand & B. Laks (Eds.), Current trends in phonology: Models and methods, Vol. 2 (pp. 685–709). European Studies Research Institute (ESRI). Manchester, U.K.: University of Salford.

Selkirk, E. O. (1984). Phonology and syntax: The relation between sound and structure. Cambridge, MA: MIT Press.

Selkirk, E. O. (1986). On derived domains in sentence phonology. Phonology Yearbook, 3, 371–405.

Selkirk, E. O. (1995). Sentence prosody: Intonation, stress and phrasing. In J. Goldsmith (Ed.), The handbook of phonological theory (pp. 550–569). Cambridge, MA: Blackwell.

Smith, J. L. (2001). Lexical category and phonological contrast. In R. Kirchner, J. Pater, & W. Wikely (Eds.), Papers in Experimental and Theoretical Linguistics 6: Workshop on the Lexicon in Phonetics and Phonology (pp. 61–72). Edmonton: University of Alberta.

Stampe, D. (1973). How I spent my summer vacation (A dissertation on Natural Phonology). PhD diss., University of Chicago.

Staubs, R., Becker, M., Potts, C., Pratt, P., McCarthy, J. J., & Pater, J. (2010). OT-Help 2.0. Software.

Steriade, D. (1995). Underspecification and markedness. In J. Goldsmith (Ed.), The handbook of phonological theory (pp. 114–174). Cambridge, MA: Blackwell.

Taft, M. (2006). Orthographically influenced abstract phonological representation: Evidence from non-rhotic speakers. Journal of Psycholinguistic Research, 35(1), 67–78.

Trubetzkoy, N. (1939). Grundzüge der Phonologie. Prague, Czechoslovakia: Vandenhoeck & Ruprecht. English translation: Principles of phonology. (C. A. M. Baltaxe, Trans.). Berkeley: University of California Press, 1969.

Truckenbrodt, H. (1999). On the relation between syntactic phrases and phonological phrases. Linguistic Inquiry, 30(2), 219–255.

Turk, A. E., & Shattuck-Hufnagel, S. (2000). Word-boundary-related duration patterns in English. Journal of Phonetics, 28, 397–440.

Urbanczyk, S. (2001). Patterns of reduplication in Lushootseed. Outstanding Dissertations in Linguistics. New York: Garland.

Ussishkin, A. (2007). Morpheme position. In P. de Lacy (Ed.), The Cambridge handbook of phonology (pp. 457–472). Cambridge, U.K.: Cambridge University Press.

Välimaa-Blum, R. (2009). The phoneme in cognitive phonology: Episodic memories of both meaningful and meaningless units? CogniTextes 2.

Vasu, S. C. (1897). The “Ashtádhyáyí” of Páṇini. Vol. 6. (S. C. Vasu, Trans.). Benares, India: Sindhu Charan Bose.

van der Hulst, H., & Ritter, N. (1999). Theories of the syllable. In H. van der Hulst & N. Ritter (Eds.), The syllable: Views and facts (pp. 13–52). New York: Mouton de Gruyter.

Walker, R., & Pullum, G. K. (1999). Possible and impossible segments. Language, 75, 764–780.

Wiese, R. (1996). The phonology of German. Oxford: Clarendon.

Woodbury, A. C. (2003). Defining documentary linguistics. In P. Austin (Ed.), Language documentation and description (pp. 35–51). Vol. 1. London: Hans Rausing Endangered Languages Project.

Yip, M. (2002). Tone. Cambridge, U.K.: Cambridge University Press.

Yu, A. (2003). The morphology and phonology of infixation. PhD diss., University of California at Berkeley.

Zubizarreta, M. L. (1998). Prosody, focus, and word order. Cambridge, MA: MIT Press.