Research Articles

CNAMD Corpus: A Chinese Natural Audiovisual Multimodal Database of Conversations for Social Interactive Agents

Pages 2041-2053 | Received 28 Nov 2022, Accepted 19 Jun 2023, Published online: 04 Jul 2023
 

Abstract

Impressive progress has been made in developing companion Socially Interactive Agents (SIAs) that provide companionship and reduce loneliness. However, recent works focus on analyzing multimodal feedback in the answer part of a conversation while ignoring the question part. Furthermore, research on SIAs is primarily based on English, which poses a challenge for Chinese SIAs because of the cultural and linguistic differences between English and Chinese. Therefore, we introduce the Chinese Natural Audiovisual Multimodal Database (CNAMD) corpus, the first and largest freely available Chinese multimodal database for multi-person interaction, containing 48 hours of videos and annotations across eight modalities. Using CNAMD, we analyze the characteristics of vocal-verbal, audio, behavioral, and multimodal combinations during questioning, test the performance of six baselines on three tasks, and propose improvements for processing everyday Chinese data. These findings will help designers account for Chinese customs and language when designing Chinese SIAs, making them more suitable for the Chinese cultural context and its users.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are openly available in CNAMD corpus at https://github.com/JingyuWu-ZJU/CNAMD_corpus.
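Once the repository above has been downloaded, the annotations can be explored programmatically. The sketch below is purely illustrative: the actual directory layout and file format of the CNAMD corpus are not documented in this statement, so the per-modality-subdirectory structure and the use of JSON files are assumptions, not a description of the released data.

```python
import json
import tempfile
from pathlib import Path


def count_annotations(root: str) -> dict:
    """Count annotation files grouped by their parent directory name.

    Assumes a hypothetical layout in which each modality's annotations
    live in a subdirectory of `root`; the real CNAMD layout may differ.
    """
    counts = {}
    for path in Path(root).rglob("*.json"):
        counts[path.parent.name] = counts.get(path.parent.name, 0) + 1
    return counts


# Tiny self-contained demo over a fabricated layout (illustrative only).
with tempfile.TemporaryDirectory() as tmp:
    demo = Path(tmp) / "gesture"
    demo.mkdir()
    (demo / "session01.json").write_text(json.dumps({"labels": []}))
    print(count_annotations(tmp))  # {'gesture': 1}
```

Any real inventory script would of course follow the layout documented in the repository's README rather than this assumed structure.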

Additional information

Funding

This work was supported by the National Key Research and Development Program of China (No. 2021YFF0900602), the Natural Science Foundation of Zhejiang Province (No. LY22F020014), the Ng Teng Fong Charitable Foundation in the form of a ZJU-SUTD IDEA Grant (188170-11102), and the National Natural Science Foundation of China (No. 62006208 and No. 62107035). We thank David Mulrooney (PhD) for editing the English text of a draft of this manuscript.

Notes on contributors

Jingyu Wu

Jingyu Wu is a PhD candidate in the College of Computer Science and Technology, Zhejiang University. With a background in human–computer interaction and computer vision, his PhD research focuses on multimodal SIA development, computer vision, and human–AI interaction.

Shi Chen

Shi Chen is currently an assistant professor in the Industrial Design Department, Zhejiang University. Her research interests lie in information and interaction design, visual design computing, and design cognition. She has published many research papers in various reputable journals and conference proceedings.

Wei Xiang

Wei Xiang is a lecturer in the Industrial Design Department, Zhejiang University. He received his PhD degree in Digital Art and Design. His research lies in design intelligence and human–computer interaction.

Lingyun Sun

Lingyun Sun is a professor at the College of Computer Science and Technology, Zhejiang University. He is the deputy director of the International Design Institute of Zhejiang University. His research interests include human–computer interaction, creative intelligence, and information and interaction design.

Hongzeng Zhang

Hongzeng Zhang is an undergraduate student currently working as a research assistant at the International Design Institute of Zhejiang University. His research interests focus on computer vision and artificial intelligence.

Zhengyu Zhang

Zhengyu Zhang is an undergraduate student currently working as an intern research assistant at the International Design Institute of Zhejiang University. His research interests focus on computer vision, especially in human pose estimation.

Yanxu Li

Yanxu Li is a type designer currently working as a research assistant at the International Design Institute of Zhejiang University. His research interests focus on multimodal learning.

