Research Articles

Deep Learning-Based Model Using DenseNet201 for Mobile User Interface Evaluation

Pages 1981-1994 | Received 23 Oct 2021, Accepted 27 Jan 2023, Published online: 12 Feb 2023

Abstract

Human-centered AI plays a vital role in ensuring that human capabilities and ideas are efficiently tailored to data requirements. The core idea is to make machines learn from human behavior in fields such as e-learning, mobile computing, and e-health. In this context, sales of mobile devices are rising daily, and Mobile User Interfaces (MUIs) for smartphones and tablets are attracting growing attention. Development tools for new mobile services are now widespread, and mobile apps support users in daily activities such as health, entertainment, games, social networking, weather, e-learning, logistics, and transport. High-quality MUIs have become a necessity for user satisfaction, which makes MUI evaluation a fundamental factor in the success of mobile apps. Generally, there are two main classes of user interface evaluation methods: manual and automatic. In the first category, users or experts assess the visual design quality of MUIs; however, this is a time-consuming task. The second category relies on automatic tools that require preconfiguration in the source code, which is a difficult task for non-programmer evaluators. To address this issue, we propose an evaluation method based on the analysis of MUI screenshots, without using the source code or user participation. The proposed method combines the DenseNet201 architecture with a K-Nearest Neighbours (KNN) classifier to assess MUIs. First, we apply the Borderline-SMOTE method to obtain a balanced dataset. Then, DenseNet201 is used to extract MUI features automatically. Finally, we apply the KNN classifier to label each MUI as good or bad. We evaluate this approach on publicly available large-scale datasets.
The obtained results are promising and demonstrate the efficiency of the proposed model, with an average accuracy of 93%. The model is intended for mobile application designers and aims to improve the quality of MUIs. MUI evaluation can reduce misunderstanding of user needs and improve design and usability, ultimately increasing user satisfaction.
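The three-stage pipeline described above (class balancing, deep feature extraction, KNN classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: synthetic random vectors stand in for DenseNet201 screenshot features, the oversampling step is a simplified SMOTE-style interpolation rather than the exact Borderline-SMOTE variant, and all sizes and parameters are arbitrary.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestNeighbors
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for DenseNet201 feature vectors; in the paper's
# pipeline these would be extracted from MUI screenshots.
n_good, n_bad = 200, 40  # imbalanced: "good" vs "bad" MUI designs
X_good = rng.normal(0.0, 1.0, size=(n_good, 32))
X_bad = rng.normal(2.0, 1.0, size=(n_bad, 32))
X = np.vstack([X_good, X_bad])
y = np.array([0] * n_good + [1] * n_bad)

def smote_like_oversample(X_min, n_new, k=5):
    """Interpolate between minority samples and their nearest minority
    neighbours (a simplified SMOTE-style sketch, not Borderline-SMOTE)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = idx[i][rng.integers(1, k + 1)]  # position 0 is the sample itself
        lam = rng.random()
        new.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(new)

# Stage 1: balance the training set by oversampling the minority class.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
X_min = X_tr[y_tr == 1]
X_new = smote_like_oversample(X_min, n_new=int((y_tr == 0).sum()) - len(X_min))
X_bal = np.vstack([X_tr, X_new])
y_bal = np.concatenate([y_tr, np.ones(len(X_new), dtype=int)])

# Stages 2-3: with deep features in hand, fit the KNN classifier
# and evaluate accuracy on the held-out set.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_bal, y_bal)
acc = knn.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

In practice the feature vectors would come from a pretrained DenseNet201 with its classification head removed (e.g. global-average-pooled activations), and a library implementation of Borderline-SMOTE would replace the hand-rolled interpolation.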

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Makram Soui

Makram Soui received a PhD in Computer Science from the Polytechnic University of Hauts-de-France in 2010. He is currently a visiting professor of engineering and computer science at Oakland University, where he teaches many courses to undergraduate and postgraduate students. He has published 10 refereed journal papers and 24 conference papers in venues with low acceptance rates.

Zainab Haddad

Zainab Haddad is currently a member of the Artificial Intelligence Research Unit, National School of Computer Sciences, University of Manouba, Tunisia. She received her master's degree in 2015 from the Higher Institute of Management of Gabes, Tunisia. Her main interests are human–computer interface design and mobile user interfaces. She has published one refereed journal paper.
