ABSTRACT
Recent advances in natural language processing have catalysed active research in the machine learning and computational linguistics communities on algorithms that generate contextual vector representations of words, or word embeddings. Existing work pays little attention to patterns of words, which encode rich semantic information and impose semantic constraints on a word’s context. This paper explores the feasibility of combining word embeddings with pattern grammar, a grammar model that describes the syntactic environment of lexical items. Specifically, this research develops a method to extract patterns enriched with the semantic information of word embeddings, and investigates the statistical regularities and distributional semantics of the extracted patterns. The major results are as follows. Experiments on the LCMC Chinese corpus reveal that pattern frequency follows Zipf’s law, and that frequency and pattern length are inversely related. The proposed method therefore enables the study of the distributional properties of patterns in large-scale corpora. Furthermore, experiments illustrate that the extracted patterns impose semantic constraints on context, indicating that patterns encode rich semantic and contextual information. This sheds light on potential applications of pattern-based word embeddings in a wide range of natural language processing tasks.
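As a minimal illustration of the Zipfian check mentioned above (a sketch, not the paper’s actual procedure), one can fit the slope of log-frequency against log-rank for extracted patterns; a slope near −1 is consistent with Zipf’s law. The counts below are hypothetical stand-ins for real pattern frequencies from a corpus such as LCMC:

```python
import math

def zipf_slope(freqs):
    """Least-squares slope of log(frequency) vs. log(rank).

    A slope close to -1 is consistent with Zipf's law.
    """
    freqs = sorted(freqs, reverse=True)  # rank 1 = most frequent
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical pattern-frequency counts; real values would come from
# counting the extracted patterns in a corpus.
counts = [1200, 640, 410, 300, 245, 200, 170, 150, 130, 120]
print(zipf_slope(counts))
```

On genuinely Zipfian data the fitted slope hovers around −1; a strong departure from that value would argue against the hypothesis for the pattern distribution in question.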
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
1. Biber (2009, p. 279) pointed out, ‘However, the pattern grammar studies are corpus-based because the analyses are in part determined by pre-defined linguistic categories (including basic grammatical categories like “noun” and “verb”, phrase types, and even syntactic structures).’
2. This paper frequently deals with part-of-speech (POS) tagging of Chinese texts. It adheres to the ICTCLAS POS tagging notation proposed by the Chinese Academy of Sciences. The notation can be found at http://ictclas.nlpir.org/nlpir/html/readme.htm.
3. The statistics were obtained by counting all tagged word tokens in the corpus, excluding punctuation, spaces, etc.