Abstract
The popularity of science videos is critical for the dissemination of knowledge, and predicting a video's popularity is a hot topic among researchers. Existing research is mainly based on a video's content (e.g., theme) or video-related external information (e.g., comments). However, videos of different popularity can induce different learning states and viewing experiences (e.g., emotional arousal, flow experiences). In this paper, we use participants' learning states while they watch videos to predict the popularity of those videos, based on two modalities of indicators: contact physiological indicators recorded by a Biopac MP150 polygraph and noncontact gesture indices recorded by a Kinect V2 body camera, which can capture head-position data. We propose a classification prediction model for each modality and filter out the indicators that are meaningful for modeling. Stepwise logistic regression shows that the meaningful indicators in the physiological modality are the standard deviation of normal-to-normal R-R intervals (SDNN) and high-frequency heart rate variability (HF). We find that participants had higher SDNN and lower HF when watching science videos of high popularity (compared with low-popularity ones), and the accuracy of the classification model is 81.6%. In the same way, the selected meaningful indicators in the gesture modality are the maximum and the standard deviation (SD) of the distance between the participant's head and the Kinect. We find that the maximum and SD of head distance are smaller when participants study highly popular science videos (compared with less popular ones), and the accuracy is 73.7%. Combining the four indicators in a single model by direct input, the accuracy is 78.9%, and the SD of head distance is probably the most important indicator for predicting the popularity of videos. These results indicate that it is feasible to predict video popularity from learning states.
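The classification approach described in the abstract can be illustrated with a minimal sketch. This is not the authors' code or data: the sample size, feature values, and class separation below are entirely synthetic, constructed only to mirror the reported direction of effects (higher SDNN and lower HF under high-popularity videos) in a two-feature logistic-regression classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic, illustrative data only (not from the study).
rng = np.random.default_rng(0)
n = 38  # hypothetical number of observations per condition

# High-popularity condition: higher SDNN, lower HF; low-popularity: the reverse.
sdnn = np.concatenate([rng.normal(55, 8, n), rng.normal(45, 8, n)])
hf = np.concatenate([rng.normal(300, 60, n), rng.normal(380, 60, n)])
X = np.column_stack([sdnn, hf])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = high popularity, 0 = low

# Binary logistic-regression classifier, evaluated with 5-fold cross-validation.
clf = LogisticRegression()
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"mean CV accuracy: {acc:.3f}")
```

With real data, a stepwise procedure would first screen candidate indicators for inclusion; here the two features are simply assumed to have survived that selection.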
Disclosure statement
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Additional information
Funding
Notes on contributors
Guangliang Hu
Guangliang Hu is a master's student at Zhejiang Sci-Tech University (China). His research interests include online video learning, human-computer interaction design, and user experience.
Yankai Wang
Yankai Wang is a master's student at Zhejiang Sci-Tech University (China). His research interests include online video learning, quality-of-experience evaluation, and learning state recognition.
Zhen Yang
Zhen Yang is an assistant professor at Zhejiang Sci-Tech University (China). His research interests include human-computer interaction, human factors, user experience, cognitive science, and neuroergonomics.
Zhiguo Hu
Zhiguo Hu is a full professor and researcher at Hangzhou Normal University (China). He holds a PhD in Basic Psychology (Beijing Normal University, China). His current research interests are in the neuro-cognitive mechanisms of emotion regulation and related areas.
Tian Gan
Tian Gan is an associate professor at Zhejiang Sci-Tech University (China). She holds a degree in Applied Psychology (Southwest University, China), a master's degree in Psychology (Southwest University, China), and a PhD in Cognitive Neuroscience (Beijing Normal University, China). Her current research interests are in social neuroscience and cognitive enhancement.
Hongyan Liu
Hongyan Liu is a full professor and researcher at Zhejiang Sci-Tech University (China). She holds a bachelor's degree in Psychology (Northeast Normal University, China) and a master's degree and PhD in Basic Psychology (Beijing Normal University, China). Her current research interests are in the neuro-cognitive mechanisms underlying emotion and attractiveness and related areas.