Media & Communication Studies

What multimodal components, tools, dataset and focus of emotion are used in the current research of multimodal emotion: a systematic literature review

Article: 2376309 | Received 16 Oct 2023, Accepted 02 Jul 2024, Published online: 16 Jul 2024

References

  • Amini, M., Amini, M., Nabiee, P., & Delavari, S. (2018). The relationship between emotional intelligence and communication skills in healthcare staff. Shiraz E-Medical Journal, 20(4). https://doi.org/10.5812/semj.80275
  • Arthanarisamy, R. M. P., & Palaniswamy, S. (2022). Subject independent emotion recognition using EEG and physiological signals – a comparative study. Applied Computing and Informatics. https://doi.org/10.1108/aci-03-2022-0080
  • Askool, S., Pan, Y. C., Jacobs, A., & Tan, C. (2019). Understanding proximity mobile payment adoption through technology acceptance model and organisational semiotics: An exploratory study. https://aisel.aisnet.org/ukais2019/38/
  • Athavipach, C., Pan-Ngum, S., & Israsena, P. (2019). A wearable in-ear EEG device for emotion monitoring. Sensors (Basel, Switzerland), 19(18), 4014. https://doi.org/10.3390/s19184014
  • Azmi, A., Ibrahim, R., Abdul Ghafar, M., & Rashidi, A. (2022). Smarter real estate marketing using virtual reality to influence potential homebuyers’ emotions and purchase intention. Smart and Sustainable Built Environment, 11(4), 870–890. https://doi.org/10.1108/SASBE-03-2021-0056
  • Bagila, S., Kok, A., Zhumabaeva, A., Suleimenova, Z., Riskulbekova, A., & Uaidullakyzy, E. (2019). Teaching primary school pupils through audio-visual means. International Journal of Emerging Technologies in Learning (iJET), 14(22), 122–140. https://doi.org/10.3991/ijet.v14i22.11760
  • Baig, M. Z., & Kavakli, M. (2019). A survey on psycho-physiological analysis & measurement methods in multimodal systems. Multimodal Technologies and Interaction, 3(2), 37. https://doi.org/10.3390/mti3020037
  • Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest: A Journal of the American Psychological Society, 20(1), 1–68. https://doi.org/10.1177/1529100619832930
  • Bashir, S., Bano, S., Shueb, S., Gul, S., Mir, A. A., Ashraf, R., Noor, N., & Shakeela. (2021). Twitter chirps for Syrian people: Sentiment analysis of tweets related to Syria Chemical Attack. International Journal of Disaster Risk Reduction, 62, 102397. https://doi.org/10.1016/j.ijdrr.2021.102397
  • Bayoudh, K., Knani, R., Hamdaoui, F., & Mtibaa, A. (2022). A survey on deep multimodal learning for computer vision: Advances, trends, applications, and datasets. The Visual Computer, 38(8), 2939–2970. https://doi.org/10.1007/s00371-021-02166-7
  • Beniczky, S., & Schomer, D. L. (2020). Electroencephalography: Basic biophysical and technological aspects important for clinical applications. Epileptic Disorders: International Epilepsy Journal with Videotape, 22(6), 697–715. https://doi.org/10.1684/epd.2020.1217
  • Bhattacharya, P., Gupta, R. K., & Yang, Y. (2023). Exploring the contextual factors affecting multimodal emotion recognition in videos. IEEE Transactions on Affective Computing, 14(2), 1547–1557. https://doi.org/10.1109/TAFFC.2021.3071503
  • Boehm, K. M., Khosravi, P., Vanguri, R., Gao, J., & Shah, S. P. (2022). Harnessing multimodal data integration to advance precision oncology. Nature Reviews. Cancer, 22(2), 114–126. https://doi.org/10.1038/s41568-021-00408-3
  • Cai, L., Dong, J., & Wei, M. (2020). Multi-modal emotion recognition from speech and facial expression based on deep learning [Paper presentation]. 2020 Chinese Automation Congress (CAC) (pp. 5726–5729). https://doi.org/10.1109/CAC51589.2020.9327178
  • Caihua, C. (2019). Research on multi-modal mandarin speech emotion recognition based on SVM [Paper presentation]. 2019 IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS) (pp. 173–176). https://doi.org/10.1109/ICPICS47731.2019.8942545
  • Chou, W. Y. S., & Budenz, A. (2020). Considering emotion in COVID-19 vaccine communication: Addressing vaccine hesitancy and fostering vaccine confidence. Health Communication, 35(14), 1718–1722. https://doi.org/10.1080/10410236.2020.1838096
  • Chudasama, V., Kar, P., Gudmalwar, A., Shah, N., Wasnik, P., & Onoe, N. (2022). M2FNet: Multi-modal fusion network for emotion recognition in conversation. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4652–4661). https://doi.org/10.48550/arXiv.2206.02187
  • Clayton, S., & Ogunbode, C. (2023). Looking at emotions to understand responses to environmental challenges. Emotion Review, 15(4), 275–278. https://doi.org/10.1177/17540739231193757
  • Dahmane, M., Alam, J., St-Charles, P. L., Lalonde, M., Heffner, K., & Foucher, S. (2022). A multimodal non-intrusive stress monitoring from the pleasure-arousal emotional dimensions. IEEE Transactions on Affective Computing, 13(2), 1044–1056. https://doi.org/10.1109/TAFFC.2020.2988455
  • Davies, S. R., Halpern, M., Horst, M., Kirby, D. S., & Lewenstein, B. (2019). Science stories as culture: Experience, identity, narrative and emotion in public communication of science. Journal of Science Communication, 18(05), A01. https://doi.org/10.22323/2.18050201
  • Deinsberger, J., Reisinger, D., & Weber, B. (2020). Global trends in clinical trials involving pluripotent stem cells: A systematic multi-database analysis. NPJ Regenerative Medicine, 5(1), 15. https://doi.org/10.1038/s41536-020-00100-4
  • Dzedzickis, A., Kaklauskas, A., & Bucinskas, V. (2020). Human emotion recognition: Review of sensors and methods. Sensors (Basel, Switzerland), 20(3), 592. https://doi.org/10.3390/s20030592
  • Egger, M., Ley, M., & Hanke, S. (2019). Emotion recognition from physiological signal analysis: A review. Electronic Notes in Theoretical Computer Science, 343, 35–55. https://doi.org/10.1016/j.entcs.2019.04.009
  • Fröhlich, M., Sievers, C., Townsend, S. W., Gruber, T., & van Schaik, C. P. (2019). Multimodal communication and language origins: Integrating gestures and vocalizations. Biological Reviews of the Cambridge Philosophical Society, 94(5), 1809–1829. https://doi.org/10.1111/brv.12535
  • Gandhi, A., Adhvaryu, K., Poria, S., Cambria, E., & Hussain, A. (2022). Multimodal sentiment analysis: A systematic review of history, datasets, multimodal fusion methods, applications, challenges and future directions. Information Fusion, 91, 424–444. https://doi.org/10.1016/j.inffus.2022.09.025
  • Garg, S., Patro, R. K., Behera, S., Tigga, N. P., & Pandey, R. (2021). An overlapping sliding window and combined features based emotion recognition system for EEG signals. Applied Computing and Informatics. https://doi.org/10.1108/ACI-05-2021-0130
  • Ghaleb, E., Popa, M., & Asteriadis, S. (2019a). Metric learning-based multimodal audio-visual emotion recognition. IEEE Multimedia, 27(1), 1–1. https://doi.org/10.1109/MMUL.2019.2960219
  • Ghaleb, E., Popa, M., & Asteriadis, S. (2019b). Multimodal and temporal perception of audio-visual cues for emotion recognition [Paper presentation]. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII) (pp. 552–558). https://doi.org/10.1109/ACII.2019.8925444
  • Gordon, R., Ciorciari, J., & van Laer, T. (2018). Using EEG to examine the role of attention, working memory, emotion, and imagination in narrative transportation. European Journal of Marketing, 52(1–2), 92–117. https://doi.org/10.1108/EJM-12-2016-0881
  • Guanghui, C., & Xiaoping, Z. (2021). Multi-modal emotion recognition by fusing correlation features of speech-visual. IEEE Signal Processing Letters, 28, 533–537. https://doi.org/10.1109/LSP.2021.3055755
  • He, L., Niu, M., Tiwari, P., Marttinen, P., Su, R., Jiang, J., Guo, C., Wang, H., Ding, S., Wang, Z., Pan, X., & Dang, W. (2022). Deep learning for depression recognition with audiovisual cues: A review. Information Fusion, 80, 56–86. https://doi.org/10.1016/j.inffus.2021.10.012
  • Hu, L., Li, W., Yang, J., Fortino, G., & Chen, M. (2022). A sustainable multi-modal multi-layer emotion-aware service at the edge. IEEE Transactions on Sustainable Computing, 7(2), 324–333. https://doi.org/10.1109/TSUSC.2019.2928316
  • Illendula, A., & Sheth, A. (2019). Multimodal emotion classification [Paper presentation]. Companion Proceedings of the 2019 World Wide Web Conference (pp. 439–449). https://doi.org/10.1145/3308560.3316549
  • Ismail, S. N. M. S., Aziz, N. A. A., Ibrahim, S. Z., Khan, C. T., & Rahman, M. A. (2021). Selecting video stimuli for emotion elicitation via online survey. Human-Centric Computing and Information Sciences, 11(36), 1–18. https://doi.org/10.22967/HCIS.2021.11.036
  • Jakovljevic, M., & Jakovljevic, I. (2021). Sciences, arts and religions: The triad in action for empathic civilization in Bosnia and Herzegovina. Science, Art and Religion, 1(1–2), 5–22. https://doi.org/10.5005/sar-1-1-2-5
  • Jamshidi, L., Heyvaert, M., Declercq, L., Fernández-Castilla, B., Ferron, J. M., Moeyaert, M., Beretvas, S. N., Onghena, P., & Van den Noortgate, W. (2022). A systematic review of single-case experimental design meta-analyses: Characteristics of study designs, data, and analyses. Evidence-Based Communication Assessment and Intervention, 17(1), 6–30. https://doi.org/10.1080/17489539.2022.2089334
  • Jeon, L., Buettner, C. K., & Grant, A. A. (2018). Early childhood teachers’ psychological well-being: Exploring potential predictors of depression, stress, and emotional exhaustion. Early Education and Development, 29(1), 53–69. https://doi.org/10.1080/10409289.2017.1341806
  • Jiang, Y., Li, W., Hossain, M. S., Chen, M., Alelaiwi, A., & Al-Hammadi, M. (2020). A snapshot research and implementation of multimodal information fusion for data-driven emotion recognition. Information Fusion, 53, 209–221. https://doi.org/10.1016/j.inffus.2019.06.019
  • Jo, W., Kannan, S. S., Cha, G. E., Lee, A., & Min, B. C. (2020). Rosbag-based multimodal affective dataset for emotional and cognitive states [Paper presentation]. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 226–233). https://doi.org/10.1109/SMC42975.2020.9283320
  • Keller, S. B., Ralston, P. M., & LeMay, S. A. (2020). Quality output, workplace environment, and employee retention: The positive influence of emotionally intelligent supply chain managers. Journal of Business Logistics, 41(4), 337–355. https://doi.org/10.1111/jbl.12258
  • Keltner, D., Sauter, D., Tracy, J., & Cowen, A. (2019). Emotional expression: Advances in basic emotion theory. Journal of Nonverbal Behavior, 43(2), 133–160. https://doi.org/10.1007/s10919-019-00293-3
  • Kim, B., de Visser, E., & Phillips, E. (2022). Two uncanny valleys: Re-evaluating the uncanny valley across the full spectrum of real-world human-like robots. Computers in Human Behavior, 135, 107340. https://doi.org/10.1016/j.chb.2022.107340
  • Kim, J. H., Kim, B. G., Roy, P. P., & Jeong, D. M. (2019). Efficient facial expression recognition algorithm based on hierarchical deep neural network structure. IEEE Access, 7, 41273–41285. https://doi.org/10.1109/ACCESS.2019.2907327
  • Kossaifi, J., Walecki, R., Panagakis, Y., Shen, J., Schmitt, M., Ringeval, F., Han, J., Pandit, V., Toisoul, A., Schuller, B., Star, K., Hajiyev, E., & Pantic, M. (2019). Sewa db: A rich database for audio-visual emotion and sentiment research in the wild. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(3), 1022–1040. https://doi.org/10.1109/TPAMI.2019.2944808
  • Kumari, K., Abbas, J., Hwang, J., & Cioca, L. I. (2022). Does servant leadership promote emotional intelligence and organizational citizenship behavior among employees? A structural analysis. Sustainability, 14(9), 5231. https://doi.org/10.3390/su14095231
  • Laureanti, R., Bilucaglia, M., Zito, M., Circi, R., Fici, A., Rivetti, F., Valesi, R., Oldrini, C., Mainardi, L. T., & Russo, V. (2020). Emotion assessment using Machine Learning and low-cost wearable devices [Paper presentation]. 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC) (pp. 576–579). https://doi.org/10.1109/EMBC44109.2020.9175221
  • Lee, J. H., Kim, H. J., & Cheong, Y. G. (2020). A multi-modal approach for emotion recognition of TV drama characters using image and text [Paper presentation]. 2020 IEEE International Conference on Big Data and Smart Computing (BigComp) (pp. 420–424). https://doi.org/10.1109/BigComp48618.2020.00-37
  • Li, M., Ch’ng, E., Chong, A. Y. L., & See, S. (2018). Multi-class Twitter sentiment classification with emojis. Industrial Management & Data Systems, 118(9), 1804–1820. https://doi.org/10.1108/IMDS-12-2017-0582
  • Li, W., Huan, W., Hou, B., Tian, Y., Zhang, Z., & Song, A. (2022). Can emotion be transferred? A review on transfer learning for EEG-based emotion recognition. IEEE Transactions on Cognitive and Developmental Systems, 14(3), 833–846. https://doi.org/10.1109/TCDS.2021.3098842
  • Li, X., Song, D., Zhang, P., Zhang, Y., Hou, Y., & Hu, B. (2018). Exploring EEG features in cross-subject emotion recognition. Frontiers in Neuroscience, 12, 162. https://doi.org/10.3389/fnins.2018.00162
  • Lim, F. V., Toh, W., & Nguyen, T. T. H. (2022). Multimodality in the English language classroom: A systematic review of literature. Linguistics and Education, 69, 101048. https://doi.org/10.1016/j.linged.2022.101048
  • Liu, G., & Tan, Z. (2020). Research on multi-modal music emotion classification based on audio and lyirc [Paper presentation]. 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC) (Vol. 1, pp. 2331–2335). https://doi.org/10.1109/ITNEC48623.2020.9084846
  • Löffler, D., Schmidt, N., & Tscharn, R. (2018). Multimodal expression of artificial emotion in social robots using color, motion and sound [Paper presentation]. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (pp. 334–343). https://doi.org/10.1145/3171221.3171261
  • Ma, Y., Hao, Y., Chen, M., Chen, J., Lu, P., & Košir, A. (2019). Audio-visual emotion fusion (AVEF): A deep efficient weighted approach. Information Fusion, 46, 184–192. https://doi.org/10.1016/j.inffus.2018.06.003
  • Marechal, C., Mikolajewski, D., Tyburek, K., Prokopowicz, P., Bougueroua, L., Ancourt, C., & Wegrzyn-Wolska, K. (2019). Survey on AI-based multimodal methods for emotion detection. In High-performance modelling and simulation for big data applications (Vol. 11400, pp. 307–324). https://doi.org/10.1007/978-3-030-16272-6
  • Marshall, I. J., & Wallace, B. C. (2019). Toward systematic review automation: A practical guide to using machine learning tools in research synthesis. Systematic Reviews, 8(1), 163. https://doi.org/10.1186/s13643-019-1074-9
  • Mehta, D., Siddiqui, M. F. H., & Javaid, A. Y. (2018). Facial emotion recognition: A survey and real-world user experiences in mixed reality. Sensors (Basel, Switzerland), 18(2), 416. https://doi.org/10.3390/s18020416
  • Men, L. R., & Yue, C. A. (2019). Creating a positive emotional culture: Effect of internal communication and impact on employee supportive behaviors. Public Relations Review, 45(3), 101764. https://doi.org/10.1016/j.pubrev.2019.03.001
  • Millar, B., & Lee, J. (2021). Horror films and grief. Emotion Review, 13(3), 171–182. https://doi.org/10.1177/17540739211022815
  • Mills, K. A., & Unsworth, L. (2018). iPad animations: Powerful multimodal practices for adolescent literacy and emotional language. Journal of Adolescent & Adult Literacy, 61(6), 609–620. https://doi.org/10.1002/jaal.717
  • Mittal, T., Bhattacharya, U., Chandra, R., Bera, A., & Manocha, D. (2020). M3er: Multiplicative multimodal emotion recognition using facial, textual, and speech cues. Proceedings of the AAAI Conference on Artificial Intelligence, 34(02), 1359–1367. https://doi.org/10.1609/aaai.v34i02.5492
  • Mohamed Shaffril, H. A., Samsuddin, S. F., & Abu Samah, A. (2021). The ABC of systematic literature review: The basic methodological guidance for beginners. Quality & Quantity, 55(4), 1319–1346. https://doi.org/10.1007/s11135-020-01059-6
  • Morton, D. P., Hinze, J., Craig, B., Herman, W., Kent, L., Beamish, P., Renfrew, M., & Przybylko, G. (2020). A multimodal intervention for improving the mental health and emotional well-being of college students. American Journal of Lifestyle Medicine, 14(2), 216–224. https://doi.org/10.1177/1559827617733941
  • Mufid, M., Masruri, S., & Azhar, M. (2021). Controlling anger on Al-Qur’an and psychology of Islamic education perspective (A study at on private higher education). Review of International Geographical Education Online, 11(9), 56–70. https://doi.org/10.17762/pae.v58i2.3313
  • Muhajarah, K. (2022). Anger in Islam and its relevance to mental health. Jurnal Ilmiah Syi’ar, 22(2), 141–153. https://doi.org/10.29300/syr.v22i2.7417
  • Muszynski, M., Tian, L., Lai, C., Moore, J. D., Kostoulas, T., Lombardo, P., Pun, T., & Chanel, G. (2021). Recognizing induced emotions of movie audiences from multimodal information. IEEE Transactions on Affective Computing, 12(1), 36–52. https://doi.org/10.1109/TAFFC.2019.2902091
  • Nabi, R. L., Gustafson, A., & Jensen, R. (2018). Framing climate change: Exploring the role of emotion in generating advocacy behavior. Science Communication, 40(4), 442–468. https://doi.org/10.1177/1075547018776019
  • Ngiam, K. Y., & Khor, W. (2019). Big data and machine learning algorithms for health-care delivery. The Lancet Oncology, 20(5), e262–e273. https://doi.org/10.1016/S1470-2045(19)30149-4
  • Ninaus, M., Greipl, S., Kiili, K., Lindstedt, A., Huber, S., Klein, E., Karnath, H.-O., & Moeller, K. (2019). Increased emotional engagement in game-based learning–A machine learning approach on facial emotion detection data. Computers & Education, 142, 103641. https://doi.org/10.1016/j.compedu.2019.103641
  • Pandeya, Y. R., & Lee, J. (2021). Deep learning-based late fusion of multimodal information for emotion classification of music video. Multimedia Tools and Applications, 80(2), 2887–2905. https://doi.org/10.1007/s11042-020-08836-3
  • Papadakis, S. (2021). Tools for evaluating educational apps for young children: A systematic review of the literature. Interactive Technology and Smart Education, 18(1), 18–49. https://doi.org/10.1108/ITSE-08-2020-0127
  • Priyasad, D., Fernando, T., Denman, S., Sridharan, S., & Fookes, C. (2020). Attention driven fusion for multi-modal emotion recognition [Paper presentation]. ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 3227–3231). https://doi.org/10.1109/ICASSP40776.2020.9054441
  • Ruba, A. L., & Repacholi, B. M. (2020). Do preverbal infants understand discrete facial expressions of emotions? Emotion Review, 12(4), 235–250. https://doi.org/10.1177/1754073919871098
  • Sarker, I. H. (2021). Machine learning: Algorithms, real-world applications and research directions. SN Computer Science, 2(3), 160. https://doi.org/10.1007/s42979-021-00592-x
  • Seeber, I., Bittner, E., Briggs, R. O., de Vreede, T., de Vreede, G. J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß, S., Randrup, N., Schwabe, G., & Söllner, M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2), 103174. https://doi.org/10.1016/j.im.2019.103174
  • Sharma, K., & Giannakos, M. (2020). Multimodal data capabilities for learning: What can multimodal data tell us about learning? British Journal of Educational Technology, 51(5), 1450–1484. https://doi.org/10.1111/bjet.12993
  • Singh, N. (2019). Big data technology: Developments in current research and emerging landscape. Enterprise Information Systems, 13(6), 801–831. https://doi.org/10.1080/17517575.2019.1612098
  • Song, T., Zheng, W., Lu, C., Zong, Y., Zhang, X., & Cui, Z. (2019). MPED: A multi-modal physiological emotion database for discrete emotion recognition. IEEE Access, 7, 12177–12191. https://doi.org/10.1109/ACCESS.2019.2891579
  • Strandberg, C., Styvén, M. E., & Hultman, M. (2020). Places in good graces: The role of emotional connections to a place on word-of-mouth. Journal of Business Research, 119, 444–452. https://doi.org/10.1016/j.jbusres.2019.11.044
  • Sun, L., Lian, Z., Tao, J., Liu, B., & Niu, M. (2020). Multi-modal continuous dimensional emotion recognition using recurrent neural network and self-attention mechanism [Paper presentation]. Proceedings of the 1st International on Multimodal Sentiment Analysis in Real-Life Media Challenge and Workshop (pp. 27–34). https://doi.org/10.1145/3423327.3423672
  • Swain, D. L., Langenbrunner, B., Neelin, J. D., & Hall, A. (2018). Increasing precipitation volatility in twenty-first-century California. Nature Climate Change, 8(5), 427–433. https://doi.org/10.1038/s41558-018-0140-y
  • Toorajipour, R., Sohrabpour, V., Nazarpour, A., Oghazi, P., & Fischl, M. (2021). Artificial intelligence in supply chain management: A systematic literature review. Journal of Business Research, 122, 502–517. https://doi.org/10.1016/j.jbusres.2020.09.009
  • Tsiourti, C., Weiss, A., Wac, K., & Vincze, M. (2019). Multimodal integration of emotional signals from voice, body, and context: Effects of (in) congruence on emotion recognition and attitudes towards robots. International Journal of Social Robotics, 11(4), 555–573. https://doi.org/10.1007/s12369-019-00524-z
  • Tung, K., Liu, P. K., Chuang, Y. C., Wang, S. H., & Wu, A. Y. A. (2019). Entropy-assisted multi-modal emotion recognition framework based on physiological signals [Paper presentation]. 2018 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES) (pp. 22–26). https://doi.org/10.1109/IECBES.2018.8626634
  • Uusberg, A., Taxer, J. L., Yih, J., Uusberg, H., & Gross, J. J. (2019). Reappraising reappraisal. Emotion Review, 11(4), 267–282. https://doi.org/10.1177/175407391986261
  • Van Dinter, R., Tekinerdogan, B., & Catal, C. (2021). Automation of systematic literature reviews: A systematic literature review. Information and Software Technology, 136, 106589. https://doi.org/10.1016/j.infsof.2021.106589
  • Van Kleef, G. A. (2021). Comment: Moving (further) beyond private experience: On the radicalization of the social approach to emotions and the emancipation of verbal emotional expressions. Emotion Review, 13(2), 90–94. https://doi.org/10.1177/1754073921991231
  • Wang, Z., & Xie, Y. (2020). Authentic leadership and employees’ emotional labour in the hospitality industry. International Journal of Contemporary Hospitality Management, 32(2), 797–814. https://doi.org/10.1108/IJCHM-12-2018-0952
  • Waterloo, S. F., Baumgartner, S. E., Peter, J., & Valkenburg, P. M. (2018). Norms of online expressions of emotion: Comparing Facebook, Twitter, Instagram, and WhatsApp. New Media & Society, 20(5), 1813–1831. https://doi.org/10.1177/1461444817707349
  • Williams, R. I., Jr, Clark, L. A., Clark, W. R., & Raffo, D. M. (2021). Re-examining systematic literature review in management research: Additional benefits and execution protocols. European Management Journal, 39(4), 521–533. https://doi.org/10.1016/j.emj.2020.09.007
  • Winasis, S., Djumarno, D., Riyanto, S., & Ariyanto, E. (2021). The effect of transformational leadership climate on employee engagement during digital transformation in Indonesian banking industry. International Journal of Data and Network Science, 5(2), 91–96. https://doi.org/10.5267/j.ijdns.2021.3.001
  • Wirz, D. (2018). Persuasion through emotion? An experimental test of the emotion-eliciting nature of populist communication. International Journal of Communication, 12, 1114–1138. https://doi.org/10.5167/uzh-149959
  • Xu, C., Furuya-Kanamori, L., Kwong, J. S. W., Li, S., Liu, Y., & Doi, S. A. (2021). Methodological issues of systematic reviews and meta-analyses in the field of sleep medicine: A meta-epidemiological study. Sleep Medicine Reviews, 57, 101434. https://doi.org/10.1016/j.smrv.2021.101434
  • Zerback, T., & Wirz, D. S. (2021). Appraisal patterns as predictors of emotional expressions and shares on political social networking sites. Studies in Communication Sciences, 21(1), 27–45. https://doi.org/10.5167/uzh-206208
  • Zhang, X., Wang, M. J., & Guo, X. D. (2020). Multi-modal emotion recognition based on deep learning in speech, video and text [Paper presentation]. 2020 IEEE 5th International Conference on Signal and Image Processing (ICSIP) (pp. 328–333). https://doi.org/10.1109/ICSIP49896.2020.9339464