ROA:156
Title:Learnability in Optimality Theory (long version)
Authors:Bruce Tesar, Paul Smolensky
Comment:68 pages (single spaced)
Length:68
Abstract:Bruce Tesar
The Center for Cognitive Science / Linguistics Department
Rutgers University

and

Paul Smolensky
Cognitive Science Department, Johns Hopkins University

Abstract for the short version:

A central claim of Optimality Theory is that grammars may differ only in how conflicts among universal well-formedness constraints are resolved: a grammar is precisely a means of resolving such conflicts via a strict priority ranking of constraints. It is shown here how this theory of Universal Grammar yields a highly general Constraint Demotion principle for grammar learning. The resulting learning procedure specifically exploits the grammatical structure of Optimality Theory, independent of the content of the substantive constraints defining any given grammatical module. The learning problem is decomposed and formal results are presented for a central subproblem: deducing the constraint ranking particular to a target language, given structural descriptions of positive examples and knowledge of universal grammatical elements. Despite the potentially large size of the space of possible grammars, the structure imposed on this space by Optimality Theory allows efficient convergence to a correct grammar. Implications are discussed for learning from overt data only, learnability of partially ranked constraint hierarchies, and the initial state. It is argued that Optimality Theory promotes a goal which, while generally desired, has been surprisingly elusive: confluence of the demands of more effective learnability and deeper linguistic explanation.
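As a minimal sketch of the strict-domination idea behind this claim (the constraint names, candidates, and violation counts below are illustrative assumptions, not material from the paper): under a total ranking, the optimal candidate is the one whose violation counts are lexicographically smallest when the constraints are read off in ranking order, so reranking the same constraints selects a different winner.

    # Minimal sketch, assuming hypothetical constraints ONSET and FAITH.
    def violation_vector(candidate, ranking, profile):
        """Violation counts for a candidate, ordered by the ranking."""
        return tuple(profile[candidate][c] for c in ranking)

    def optimal(ranking, profile):
        """Candidate with the lexicographically least violation vector."""
        return min(profile, key=lambda cand: violation_vector(cand, ranking, profile))

    # Two candidates in conflict: satisfying ONSET costs a FAITH violation.
    profile = {
        "faithful":   {"ONSET": 1, "FAITH": 0},
        "epenthetic": {"ONSET": 0, "FAITH": 1},
    }
    print(optimal(["ONSET", "FAITH"], profile))  # -> epenthetic
    print(optimal(["FAITH", "ONSET"], profile))  # -> faithful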



This longer version includes two types of additional material. The Recursive Constraint Demotion learning algorithm and a general family of Constraint Demotion algorithms are developed; analytical results are given concerning convergence, correctness, and efficiency. The longer version also provides additional discussion of several general issues that arise in extending the analysis to primary learning data consisting solely of overt elements. Topics include a general class of iterative solutions to simultaneously learning a grammar and learning to assign hidden structure to primary data, a simple illustration of the operation of our proposed iterative solution to this problem in the OT case, the initial state and universal constraint rankings, and the acquisition of underlying forms. This work illustrates how the constraint ranking defining an OT grammar can, in principle, be used in optimization in multiple ways: for relating underlying forms to surface forms, for relating overt forms to their structural descriptions, and for deriving new underlying forms from overt data.
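The core step of Recursive Constraint Demotion can be sketched roughly as follows, in the familiar winner-loser pair formulation (the 'W'/'L'/'e' mark coding, the constraint names, and the example data here are assumptions made for illustration, not the paper's own code): constraints that prefer no loser in the remaining data form the next stratum, pairs already explained by that stratum are discarded, and the procedure recurses on what is left.

    # Minimal sketch of Recursive Constraint Demotion over winner-loser pairs.
    # Each pair maps a constraint to 'W' (prefers the winner), 'L' (prefers
    # the loser), or 'e' (indifferent).
    def rcd(constraints, pairs):
        """Return a stratified hierarchy consistent with the pairs,
        or raise ValueError if no ranking exists."""
        constraints = set(constraints)
        pairs = list(pairs)
        hierarchy = []
        while constraints:
            # Constraints that never prefer a loser in the remaining pairs.
            stratum = {c for c in constraints
                       if all(p.get(c, 'e') != 'L' for p in pairs)}
            if not stratum:
                raise ValueError("inconsistent data: no rankable constraint")
            hierarchy.append(stratum)
            constraints -= stratum
            # Drop pairs explained by the new stratum (some member prefers the winner).
            pairs = [p for p in pairs
                     if not any(p.get(c, 'e') == 'W' for c in stratum)]
        return hierarchy

    # Hypothetical data: both markedness constraints dominate faithfulness.
    pairs = [
        {"ONSET": "W", "FAITH": "L", "NOCODA": "e"},
        {"NOCODA": "W", "FAITH": "L", "ONSET": "e"},
    ]
    print(rcd(["ONSET", "FAITH", "NOCODA"], pairs))
    # -> [{'ONSET', 'NOCODA'}, {'FAITH'}]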
Type:Paper/tech report
Area/Keywords:
Article:Version 1