Abstract
Zipf's first and second laws describe two striking statistical regularities in natural-language text. The two laws have applications in various fields of computer science. Recently, the study of continuous speech recognition in artificial intelligence has called for the use of statistical models of text generation. A major issue is the lack of effective and objective methods for evaluating such models. In this paper, we evaluate four leading statistical models of text generation with respect to Zipf's laws and identify the Simon-Yule model as a promising approach. We also discuss a significant implication of the findings for text modeling.