Figures & data
Figure 2. The architecture of CBOW and Skip-gram as described in Mikolov et al. (2013b).
![Figure 2. The architecture of CBOW and Skip-gram as described in Mikolov et al. (2013b).](/cms/asset/072ebf00-920d-4499-ab04-4b5ca5cca6df/uaai_a_2019885_f0002_b.gif)
Table 1. Word embedding mapping methods.
Table 2. An approximate count of articles and tokens in Wikipedia dumps for each language (K = 1000).
Table 3. The number of words in the seed dictionaries and the sizes of the training, validation, and test sets (K = 1000).
Figure 5. Encoder-decoder architecture with an attention mechanism (Bahdanau, Cho, and Bengio 2016).
![Figure 5. Encoder-decoder architecture with an attention mechanism (Bahdanau, Cho, and Bengio 2016).](/cms/asset/1bbc042d-6cf3-426d-b754-50d7444464c2/uaai_a_2019885_f0005_oc.jpg)
Table 4. Performance of the implemented model across different network architectures.
Table 5. Accuracy of the proposed method compared with previous works.