Abstract
During the COVID-19 pandemic, online classes became the only option for many students. The main challenge for these classes was conducting risky and complex chemical or biological experiments in a domestic environment. To address this challenge, a smart experiment system called MRLab was developed. MRLab uses wearables such as a smart glove and a head-mounted device to record sensory data, and a multimodal hybrid fusion model, GVVS, to interpret the user's experimental intent, transforming the user's abstract behavioral actions into a computable probabilistic set of experimental intents. Different experiments in MRLab use different libraries of experimental intents. Within GVVS, the SrNet model estimates the probability of the user's gesture behavior captured by the smart glove, while the SIPA algorithm compares speech uttered during the experiment against the experimental intent library to estimate the probability of the user's intent. At the same time, the scene-vision channel monitors the object the user intends to operate, with the SVF algorithm computing the probability of the intended object in real time. ANOVA and post-hoc comparisons conducted on 21 volunteers showed that MRLab outperformed other experiment modes, including WEB, AR, and VR, with a higher intention-understanding rate, greater efficiency, and higher user satisfaction. MRLab thus proved to be a useful alternative to traditional in-person laboratory experiments during the pandemic, as well as an additional teaching tool for remote learning.
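The hybrid fusion described above can be pictured as a late fusion of three per-channel probability distributions (gesture via SrNet, speech via SIPA, scene vision via SVF) over a shared intent library. The sketch below is only an illustration of that idea; the intent names, channel weights, and scores are hypothetical assumptions, not values from the paper.

```python
# Illustrative late-fusion sketch (NOT the paper's GVVS implementation):
# each channel contributes a probability over a shared library of
# experimental intents, and a weighted sum yields the fused estimate.

INTENT_LIBRARY = ["pour_reagent", "heat_beaker", "stir_solution"]  # hypothetical

def normalize(scores):
    """Rescale raw channel scores so they sum to 1."""
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

def fuse_intents(gesture_p, speech_p, vision_p, weights=(0.4, 0.3, 0.3)):
    """Weighted late fusion of per-channel intent probabilities.

    The weights are illustrative; a real system would tune them
    (or learn them) per experiment.
    """
    fused = {}
    for intent in INTENT_LIBRARY:
        fused[intent] = (weights[0] * gesture_p.get(intent, 0.0)
                         + weights[1] * speech_p.get(intent, 0.0)
                         + weights[2] * vision_p.get(intent, 0.0))
    return normalize(fused)

# Hypothetical per-channel outputs for one moment in an experiment.
gesture = normalize({"pour_reagent": 0.7, "heat_beaker": 0.2, "stir_solution": 0.1})
speech  = normalize({"pour_reagent": 0.5, "heat_beaker": 0.4, "stir_solution": 0.1})
vision  = normalize({"pour_reagent": 0.6, "heat_beaker": 0.3, "stir_solution": 0.1})

result = fuse_intents(gesture, speech, vision)
top_intent = max(result, key=result.get)  # the system's best guess
```

Because all three channels agree here, the fused distribution concentrates on `pour_reagent`; the value of fusion shows when channels disagree and the weighted combination arbitrates between them.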
Disclosure statement
No potential conflict of interest was reported by the author(s).
Additional information
Funding
Notes on contributors
Hongyue Wang
Hongyue Wang is a graduate student in the Department of Computer Science and Technology, University of Jinan. His research interests lie in human-computer interaction, virtual reality, and artificial intelligence for smart education.
Zhiquan Feng
Zhiquan Feng is a professor of Computer Science and Technology at the University of Jinan. His work explores human-machine interaction and collaboration in areas such as smart education, eldercare robots, and robotic arms.
Xiaohui Yang
Xiaohui Yang is an associate professor of Computer Science and Technology at the University of Jinan. His primary research interests lie in image and video processing.
Liran Zhou
Liran Zhou is a graduate student in the Department of Computer Science and Technology, University of Jinan. Her research interests lie in human-computer interaction and collaboration for elderly care.
Jinglan Tian
Jinglan Tian is a lecturer in Computer Science and Technology at the University of Jinan. Her research interests include human-computer interaction and human behavior recognition.
Qingbei Guo
Qingbei Guo is an associate professor of Computer Science and Technology at the University of Jinan. His research at the intersection of pattern recognition and computer vision focuses especially on human-computer collaboration.