Abstract
This article aims to show the effectiveness of evolutionary algorithms in automatically parsing sentences from real texts. Parsing methods based on complete search techniques are limited by the exponential growth of the search space with the size of the grammar and the length of the sentences to be parsed. Approximate methods, such as evolutionary algorithms, provide results adequate to deal with the nondeterminism that ambiguity introduces into natural language processing. This work investigates different alternatives for implementing an evolutionary bottom-up parser, and different genetic operators have been considered and evaluated. We focus on statistical parsing models to establish preferences among different parses. Our aim is not to propose a new statistical model for parsing but a new algorithm to perform the parsing once the model has been defined. The training data are extracted from syntactically annotated corpora (treebanks), which provide the sets of lexical and syntactic tags as well as the grammar on which the parsing is based. We have tested the system on two corpora, Susanne and the Penn Treebank, obtaining very encouraging results.
Notes
|s|: the length of the sentence.
It uses a data structure called a chart to store partial results of matches already performed, thus avoiding repeating the same matches over and over, and it explores the highest-probability constituents first.
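The idea can be illustrated with a minimal probabilistic CKY-style sketch (not the paper's actual parser): the chart caches, for each span of the sentence, the best probability found for each constituent, so no span is re-derived. The grammar here is a toy in Chomsky normal form and all names are illustrative.

```python
from collections import defaultdict

def cky_parse(words, lexicon, rules):
    """Chart-based parsing sketch: chart[(i, j)] maps a nonterminal to the
    best probability of deriving words[i:j], so each span is computed once."""
    n = len(words)
    chart = defaultdict(dict)            # (i, j) -> {symbol: best probability}
    for i, w in enumerate(words):        # length-1 spans come from the lexicon
        for tag, p in lexicon.get(w, {}).items():
            chart[(i, i + 1)][tag] = p
    for span in range(2, n + 1):         # longer spans reuse shorter ones
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):    # every split point of the span
                for (lhs, b, c), p in rules.items():
                    pb = chart[(i, k)].get(b)
                    pc = chart[(k, j)].get(c)
                    if pb is not None and pc is not None:
                        cand = p * pb * pc
                        if cand > chart[(i, j)].get(lhs, 0.0):
                            chart[(i, j)][lhs] = cand
    return chart[(0, n)].get("S")        # best probability of a full parse

# toy grammar and lexicon (invented for illustration)
lexicon = {"she": {"NP": 1.0}, "runs": {"VP": 1.0}}
rules = {("S", "NP", "VP"): 1.0}
print(cky_parse(["she", "runs"], lexicon, rules))  # 1.0
```

A priority queue over chart entries would additionally give the "highest-probability constituents first" exploration order; the sketch above keeps only the memoization aspect.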
By elitism, we refer to the technique of retaining in the population the best individuals found so far.
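A minimal generational sketch shows how elitism is typically implemented (the selection, crossover, and mutation operators here are generic textbook choices, not the operators evaluated in the paper):

```python
import random

def evolve(pop, fitness, n_elite=2, generations=100, p_mut=0.05):
    """Generational GA with elitism: the best n_elite individuals are copied
    unchanged into each new generation, so the best fitness found so far can
    never be lost (illustrative sketch only)."""
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = [ind[:] for ind in pop[:n_elite]]       # elitism: keep the best
        while len(next_pop) < len(pop):
            a, b = random.sample(pop[: len(pop) // 2], 2)  # truncation selection
            cut = random.randrange(1, len(a))              # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

random.seed(0)
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
best = evolve(pop, fitness=sum)   # OneMax toy problem: maximize the number of 1s
print(sum(best))
```

Because the elite are copied verbatim, the best fitness in the population is monotone non-decreasing across generations.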
In order to simplify the process, sentences that refer to elements outside themselves (trace sentences) have been excluded when extracting the grammar.
If we are considering the rule r of the form A → ···, the probability of r is computed as

P(r) = count(A → ···) / Σ_{A → β} count(A → β),

that is, the number of occurrences of r in the treebank divided by the total number of occurrences of rules whose left-hand side is A.
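This maximum-likelihood estimate can be read directly off treebank rule counts; a minimal sketch with invented counts:

```python
from collections import Counter

def rule_probabilities(rule_counts):
    """P(A -> alpha) = count(A -> alpha) / total count of rules with
    left-hand side A (maximum-likelihood estimate from a treebank)."""
    lhs_totals = Counter()
    for (lhs, rhs), c in rule_counts.items():
        lhs_totals[lhs] += c
    return {rule: c / lhs_totals[rule[0]] for rule, c in rule_counts.items()}

# toy counts as they might be read off a treebank (numbers are invented)
counts = {("NP", ("DT", "NN")): 30, ("NP", ("NNP",)): 10,
          ("VP", ("VBZ", "NP")): 25}
probs = rule_probabilities(counts)
print(probs[("NP", ("DT", "NN"))])   # 30 / 40 = 0.75
```

By construction the probabilities of all rules sharing a left-hand side sum to one.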
Rate of words that have been assigned the correct POS tag.
The probability of each parse is the product of the probabilities of all the rules used in the parse tree. The probability of a sentence in a PCFG is the sum of the probabilities of all possible parses for the sentence. Then, for a consistent PCFG, the probabilities of all the sentences generated by the grammar add up to one.
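The first of these definitions amounts to a simple product over the derivation; a sketch with invented rule names and probabilities:

```python
def parse_probability(rules_used, rule_probs):
    """Probability of a parse tree in a PCFG: the product of the
    probabilities of all rules used in the derivation."""
    p = 1.0
    for r in rules_used:
        p *= rule_probs[r]
    return p

# hypothetical rule probabilities and a derivation of "she runs"
rule_probs = {"S->NP VP": 1.0, "NP->she": 0.4, "VP->runs": 0.5}
print(parse_probability(["S->NP VP", "NP->she", "VP->runs"], rule_probs))
```

Summing this quantity over every parse of a sentence gives the sentence probability mentioned above; in practice, products of many small probabilities are usually accumulated as sums of log probabilities to avoid underflow.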
Supported by projects TIN2007-68083-C02-01 and TIN2007-67581-C02-01.