Research Article

Hybrid deep learning model for Arabic text classification based on mutual information

Pages 1901-1908 | Received 01 Aug 2021, Published online: 16 Dec 2022
 

Abstract

Text categorization is the task of grouping texts or documents into classes or categories according to their content, and it is a significant task in natural language processing. Most existing work has focused on English text, with only a few experiments on Arabic. The text classification pipeline consists of several steps, from document preprocessing (stop-word removal and stemming) through feature extraction to the classification phase. This paper proposes an improved approach to Arabic text categorization that uses mutual information for feature selection within a hybrid deep learning model for classification. Two datasets of Arabic documents are employed to evaluate the proposed model. The experimental results demonstrate that the proposed mutual-information-based approach outperforms prior techniques. On the Akhbarona corpus, the Multi-Layer Perceptron achieved the lowest accuracy at 96.09%, while the hybrid Convolutional-Long Short-Term Memory model reached 99.28%. On the Khaleej corpus, the Gated Recurrent Unit achieved the highest accuracy at 98.23%, while the Multi-Layer Perceptron had the lowest at 97.23%.
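The abstract's key idea is scoring features by mutual information before classification. As a minimal sketch (not the paper's exact implementation, whose preprocessing and estimator details are not given in the abstract), the following computes the mutual information I(T; C) between a binary term-presence variable and document class labels; terms with higher scores are more informative for classification. All names and the toy data are illustrative.

```python
import math
from collections import Counter

def mutual_information(term_flags, labels):
    """Mutual information I(T; C) between a binary term-presence
    variable T (1 if the term occurs in a document, else 0) and the
    document class label C, estimated from empirical frequencies."""
    n = len(labels)
    joint = Counter(zip(term_flags, labels))   # counts of (t, c) pairs
    t_counts = Counter(term_flags)             # marginal counts of T
    c_counts = Counter(labels)                 # marginal counts of C
    mi = 0.0
    for (t, c), n_tc in joint.items():
        p_tc = n_tc / n
        p_t = t_counts[t] / n
        p_c = c_counts[c] / n
        mi += p_tc * math.log2(p_tc / (p_t * p_c))
    return mi

# Toy corpus: whether a term appears (1/0) in each of six documents,
# and each document's class. Here the term perfectly predicts the class,
# so I(T; C) equals the class entropy, 1 bit.
flags = [1, 1, 1, 0, 0, 0]
labels = ["sport", "sport", "sport", "economy", "economy", "economy"]
print(mutual_information(flags, labels))  # → 1.0
```

In a full pipeline one would score every vocabulary term this way, keep the top-k terms as features, and feed the resulting document vectors to the classifier (e.g. the hybrid CNN-LSTM mentioned in the abstract).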
