ROA: 1150
Title: Revisiting the Evaluator: Derivations (and Learning) in OT
Authors: Diego Krivochen
Comment:
Length: 41 pages
Abstract: This theoretical, programmatic paper aims to open new perspectives for a crash-proof version of Optimality Theory, in which the GEN function produces only one optimal candidate per derivational cycle. We discuss the concept of crash-proof syntax (Putnam, 2010) and attempt to incorporate it into a learning model for neural networks, briefly entering the debate between digital and quantum computers and comparing the consequences that adopting one or the other would have for our theory. This objective is pursued with Minimalism as a program, not as a theory, while the theoretical substance is provided by OT and connectionist models of neural networks. A secondary aim of this paper is to show that mathematical or physical formalizations of natural language do not necessarily imply a "metaphor": it is possible to work with the hypothesis that natural objects are themselves mathematical structures. Such a formalization would be a first step toward more fluent interdisciplinary scientific exchange between linguistics and the formal sciences.
Type: Paper/tech report
Area/Keywords: Optimality Theory; Radical Minimalism; Neural Network Learning; Quantum Computer; Syntax; Semantics
Article: Version 2, Version 1