PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, LINGUISTICS (linguistics.oxfordre.com). (c) Oxford University Press USA, 2016. All Rights Reserved. Personal use only; commercial use is strictly prohibited. Please see applicable Privacy Policy and Legal Notice (for details see Privacy Policy).

date: 25 March 2017

Learnability and Learning Algorithms in Phonology

This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics. Please check back later for the full article.

Phonological learnability deals with the formal properties of phonological grammars when combined with algorithms that attempt to learn the language-specific aspects of those grammars. The classical learning task can be outlined as follows: beginning at a pre-determined initial state, the learner is exposed to positive evidence of legal strings and structures from the target language, and its goal is to reach a pre-determined end state, where the grammar will produce or accept all, and only, the target language’s strings and structures. In addition, a phonological learner must also acquire a set of language-specific representations for morphemes, words, and so on, and in many cases, the grammar and the representations must be acquired at the same time.
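The classical task can be made concrete with a deliberately tiny, hypothetical sketch (an illustration, not a proposal from the literature): here the "grammar" is simply the set of two-segment sequences (bigrams) a language licenses, the pre-determined initial state licenses nothing, and positive evidence expands the grammar only as far as the data require.

```python
def bigrams(word):
    # The set of adjacent two-segment sequences in a word.
    return {word[i:i + 2] for i in range(len(word) - 1)}

def learn(evidence):
    # Pre-determined initial state: maximally restrictive (nothing licensed).
    grammar = set()
    for word in evidence:      # positive evidence: legal strings only
        grammar |= bigrams(word)
    return grammar

def accepts(grammar, word):
    # A word is grammatical iff every bigram it contains is licensed.
    return bigrams(word) <= grammar

# Toy target data (hypothetical forms).
target_data = ["pata", "taka", "kapa"]
g = learn(target_data)
assert all(accepts(g, w) for w in target_data)   # accepts all target strings
assert not accepts(g, "apta")                    # rejects an unattested cluster
```

The end state here accepts all of the evidence and nothing containing an unattested sequence, a crude stand-in for the "all, and only" criterion; real phonological learners must do this while also inferring representations.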

Phonological learnability research seeks to determine how the architecture of the grammar, and the workings of an associated learning algorithm, influence success in completing this learning task, that is, in reaching the end-state grammar. One basic question is about convergence: Is the learning algorithm guaranteed to converge on an end-state grammar, or will it never stabilize? Is there a class of initial states, or a kind of learning data (evidence), which can prevent a learner from converging?

Next is the question of success: Assuming the algorithm will reach an end state, will it match the target? In particular, will the learner ever acquire a grammar that deems grammatical a superset of the target language’s legal outputs? How can the learner avoid such superset end-state traps, whether by calibration of the initial state, biases in the learning algorithm, or other methods?
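Why superset end states are traps can be seen in a toy, hypothetical illustration in which the grammar is a set of licensed segment bigrams: a learner that begins in a maximally permissive state is never pushed back by positive evidence alone, since every datum it hears is already accepted and no error-driven update ever fires.

```python
from itertools import product

def bigrams(word):
    # The set of adjacent two-segment sequences in a word.
    return {word[i:i + 2] for i in range(len(word) - 1)}

SEGMENTS = "ptka"
# Superset initial state: every bigram over the toy inventory is licensed.
permissive = {a + b for a, b in product(SEGMENTS, repeat=2)}

def learn(grammar, evidence):
    # Error-driven learning from positive evidence: the grammar changes
    # only on a failure to accept an observed form, so licensed sequences
    # can be added but never removed.
    for word in evidence:
        if not bigrams(word) <= grammar:
            grammar = grammar | bigrams(word)
    return grammar

g = learn(permissive, ["pata", "taka", "kapa"])
# The unattested cluster in "apta" is still accepted: positive evidence
# alone cannot shrink the language, so the learner is stuck in a superset.
assert bigrams("apta") <= g
```

This is one motivation for calibrating the initial state to be restrictive (or for biasing updates toward restrictiveness): a learner that undergenerates can be corrected by positive evidence, but one that overgenerates cannot.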

A third question considers the time-course of learning: How long does the learner take to reach the end state? How does the time to convergence and/or success increase as the grammar becomes more complex and the evidence set becomes larger? Are some grammars too complex to ever be learned?

In assessing phonological learnability, the analyst must also consider the many ways in which candidate learning algorithms differ. At the core of any algorithm is its update rule: the method by which it changes the current grammar on the basis of evidence. Other key aspects of an algorithm include how it is triggered to learn, how it processes and/or stores the errors that it makes, and how it responds to noise or variability in the learning data. Ultimately, the choice of algorithm is also tied to the type of phonological grammar being learned—whether the generalizations being learned are couched within rules, features, parameters, constraints, rankings, and/or weightings.
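As a hedged sketch of one well-known family of update rules, the following shows a perceptron-style weight update for a weighted-constraint (Harmonic Grammar) learner, in the spirit of the Gradual Learning Algorithm: on each error, every constraint's weight moves a small step in the direction that favors the observed winner over the learner's current loser. The constraint names and violation profiles are invented for illustration.

```python
def harmony(weights, violations):
    # Harmony of a candidate: negated weighted sum of constraint violations.
    return -sum(w * v for w, v in zip(weights, violations))

def update(weights, winner_viols, loser_viols, rate=0.1):
    # On an error, promote constraints the loser violates more than the
    # winner, and demote constraints the winner violates more.
    return [w + rate * (l - v)
            for w, v, l in zip(weights, winner_viols, loser_viols)]

# Hypothetical toy case: two constraints C1 and C2; the observed winner
# violates only C2, while the learner's current loser violates only C1.
weights = [1.0, 1.0]
winner, loser = [0, 1], [1, 0]
for _ in range(50):
    if harmony(weights, winner) <= harmony(weights, loser):  # learner errs
        weights = update(weights, winner, loser)

# After learning, the target winner is preferred.
assert harmony(weights, winner) > harmony(weights, loser)
```

The update rule here is triggered by errors and tolerates noisy data gracefully, since a single misleading datum only nudges the weights; a categorical ranking algorithm such as constraint demotion would instead reorder constraints outright on each error.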