Abstract
The Web has become one of the most important data sources, and the content shared on it is often multilingual, as its users belong to different cultures and speak different languages. A multilingual document is of limited use to readers who need content in only one language. Furthermore, dividing a multilingual document into monolingual documents helps researchers extract only the text of the desired language for use in tasks such as training or testing models. Cleaning and dividing the raw content manually, however, is labor-intensive and error-prone. This paper presents an automatic approach to dividing a multilingual document and reassembling it into monolingual documents, built by examining three existing state-of-the-art tools for Language Identification (LI). For the evaluation, we prepared several corpora with different heterogeneity characteristics and characterized their code-switching patterns using three code-switching metrics. The proposed approach reached 99% accuracy at best for long segments (long text) and 90% for mixed segments. In addition, a strong correlation was found between the I-Index and accuracy, with Pearson's r = −0.998.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Notes
3 We used the UDHR corpus in our experiments; we ignored blank lines and lines with fewer than 3 letters, and we also removed numbers and all special characters (e.g., +, !, *, @, #, %, &, $, etc.) from the text.
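The filtering rules described in this note can be sketched as follows. This is a minimal illustration of the stated rules (drop blank lines and lines with fewer than 3 letters; strip numbers and special characters), not the authors' actual preprocessing script; the function name and the decision to collapse whitespace are our own assumptions.

```python
def preprocess(lines):
    """Sketch of the corpus cleaning described in note 3 (hypothetical helper,
    not the authors' code): keep only letters and spaces, then drop lines
    with fewer than 3 letters."""
    cleaned = []
    for line in lines:
        # Remove numbers and special characters (e.g. + ! * @ # % & $),
        # keeping letters and whitespace.
        text = "".join(ch for ch in line if ch.isalpha() or ch.isspace())
        # Collapse runs of whitespace left behind by removed characters
        # (an assumption; the note does not specify this step).
        text = " ".join(text.split())
        # Skip blank lines and lines with fewer than 3 letters.
        if sum(ch.isalpha() for ch in text) < 3:
            continue
        cleaned.append(text)
    return cleaned
```

For example, `preprocess(["", "ab", "Article 1: All human beings!"])` keeps only the third line, returning it with the digit and punctuation removed.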