Jane Chandlee and Jeffrey Heinz
Computational phonology studies the nature of the computations necessary and sufficient for characterizing phonological knowledge. As a field, it is informed by the theories of computation and of phonology.
The computational nature of phonological knowledge is important because at a fundamental level it is about the psychological nature of memory as it pertains to phonological knowledge. Different types of phonological knowledge can be characterized as computational problems, and the solutions to these problems reveal their computational nature. In contrast to syntactic knowledge, there is clear evidence that phonological knowledge is computationally bounded by the so-called regular classes of sets and relations. These classes have multiple mathematical characterizations in terms of logic, automata, and algebra, with significant implications for the nature of memory. In fact, there is evidence that phonological knowledge is bounded by particular subregular classes, with more restrictive logical, automata-theoretic, and algebraic characterizations, and thus by weaker models of memory.
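To illustrate the memory implications of one such subregular class: a Strictly Local language is defined entirely by a finite set of forbidden substrings, so deciding well-formedness requires remembering only a fixed-width window of the string, never the whole string. The sketch below (with a purely hypothetical constraint set) shows this for window size 2:

```python
def is_licit(word, forbidden_bigrams):
    """Check a word against a Strictly 2-Local grammar: the word is
    well-formed iff none of its bigrams is forbidden. Word edges are
    marked with '#' so edge-sensitive constraints can be stated."""
    padded = "#" + word + "#"
    return all(padded[i:i+2] not in forbidden_bigrams
               for i in range(len(padded) - 1))

# Hypothetical constraint banning the substring "nb" (e.g., a
# nasal-stop place disagreement), purely for illustration:
forbidden = {"nb"}
is_licit("amba", forbidden)   # True
is_licit("anba", forbidden)   # False
```

The scanner above inspects each two-symbol window independently, which is exactly the sense in which such grammars demand weaker models of memory than arbitrary regular languages.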
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Linguistics. Please check back later for the full article.
Phonological learnability deals with the formal properties of phonological grammars when combined with algorithms that attempt to learn the language-specific aspects of those grammars. The classical learning task can be outlined as follows: beginning at a pre-determined initial state, the learner is exposed to positive evidence of legal strings and structures from the target language, and its goal is to reach a pre-determined end state, where the grammar will produce or accept all, and only, the target language’s strings and structures. In addition, a phonological learner must acquire a set of language-specific representations for morphemes, words, and so on, and in many cases, the grammar and the representations must be acquired at the same time.
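The classical task can be made concrete with a simple learner for Strictly 2-Local grammars, which generalizes from positive evidence by recording every attested bigram; the end-state grammar then accepts exactly the words built from attested bigrams. This is a sketch under those assumptions, not a model of any particular proposal:

```python
def learn_sl2(positive_data):
    """Learn a Strictly 2-Local grammar from positive evidence by
    collecting every bigram attested in the data ('#' marks word edges)."""
    attested = set()
    for word in positive_data:
        padded = "#" + word + "#"
        for i in range(len(padded) - 1):
            attested.add(padded[i:i+2])
    return attested

def accepts(grammar, word):
    """The learned grammar accepts a word iff all its bigrams are attested."""
    padded = "#" + word + "#"
    return all(padded[i:i+2] in grammar
               for i in range(len(padded) - 1))

g = learn_sl2(["aba", "ab"])
accepts(g, "ababa")  # True: every bigram occurs in the evidence
accepts(g, "ba")     # False: no word was observed to begin with "b"
```

Note that this learner generalizes beyond its input (it accepts the unseen word "ababa") while remaining conservative: it never accepts a bigram it has not observed.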
Phonological learnability research seeks to determine how the architecture of the grammar, and the workings of an associated learning algorithm, influence success in completing this learning task, that is, in reaching the end-state grammar. One basic question is about convergence: Is the learning algorithm guaranteed to converge on an end-state grammar, or will it never stabilize? Is there a class of initial states, or a kind of learning data (evidence), which can prevent a learner from converging?
Next is the question of success: Assuming the algorithm will reach an end state, will it match the target? In particular, will the learner ever acquire a grammar that deems grammatical a superset of the target language’s legal outputs? How can the learner avoid such superset end-state traps, whether by calibration of the initial state, biases in the learning algorithm, or other methods?
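The danger posed by superset end states can be illustrated schematically: positive evidence can only falsify a grammar that rejects an attested form, so a grammar generating a superset of the target language is consistent with every positive example and is never corrected by the data alone. The grammars below are hypothetical stand-ins, sketched only to make this point:

```python
def consistent(grammar_accepts, positive_data):
    """A grammar is consistent with positive evidence iff it accepts
    every attested form; only rejections of attested forms can signal
    an error to the learner."""
    return all(grammar_accepts(w) for w in positive_data)

target = lambda w: "nb" not in w   # hypothetical target: bans "nb"
superset = lambda w: True          # overgeneral grammar: accepts everything
data = ["amba", "aba", "ma"]       # positive evidence drawn from the target

consistent(superset, data)  # True: the overgeneralization goes undetected
```

This is why proposals for avoiding superset traps appeal to something beyond the evidence itself, such as a restrictive initial state or learning biases.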
A third question considers the time-course of learning: How long does the learner take to reach the end state? How does the time to convergence and/or success increase as the grammar becomes more complex and the evidence set becomes larger? Are some grammars too complex to ever be learned?
In assessing phonological learnability, the analyst must also weigh the many differences among potential learning algorithms. At the core of any algorithm is its update rule, meaning its method(s) of changing the current grammar on the basis of evidence. Other key aspects of an algorithm include how it is triggered to learn, how it processes and/or stores the errors that it makes, and how it responds to noise or variability in the learning data. Ultimately, the choice of algorithm is also tied to the type of phonological grammar being learned—whether the generalizations being learned are couched within rules, features, parameters, constraints, rankings, and/or weightings.
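As one concrete illustration of an update rule, an error-driven learner for weighted constraints (in the spirit of perceptron-style learning for harmonic grammars; the constraints, violation counts, and learning rate here are all hypothetical) adjusts constraint weights whenever its current grammar prefers the wrong candidate:

```python
def perceptron_update(weights, winner_violations, loser_violations, rate=0.1):
    """Error-driven update: when the grammar wrongly prefers the loser
    over the observed winner, each constraint's weight moves by
    rate * (loser violations - winner violations). Constraints violated
    more by the loser gain weight; those violated more by the winner
    lose weight, making the winner more harmonic on the next evaluation."""
    return [w + rate * (lv - wv)
            for w, wv, lv in zip(weights, winner_violations, loser_violations)]

# Hypothetical case: two constraints; the observed winner violates
# constraint 2, the learner's wrongly preferred loser violates constraint 1.
weights = [1.0, 1.0]
weights = perceptron_update(weights, winner_violations=[0, 1],
                            loser_violations=[1, 0])
# weights -> [1.1, 0.9]
```

Design choices such as the size of the learning rate and whether updates occur on every error bear directly on the convergence and time-course questions raised above.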