Full paper

Nearest neighbor future captioning: generating descriptions for possible collisions in object placement tasks

Takumi Komatsu, Motonari Kambara, Shumpei Hatanaka, Haruka Matsuo, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi & Komei Sugiura
Received 31 Jan 2024, Accepted 04 Jul 2024, Published online: 09 Aug 2024
 

Abstract

Domestic service robots (DSRs) that support people in everyday environments have been widely investigated. However, their ability to predict and describe future risks resulting from their own actions remains insufficient. In this study, we focus on the linguistic explainability of DSRs. Most existing methods do not explicitly model the region of possible collisions; thus, they cannot properly generate descriptions of these regions. In this paper, we propose the Nearest Neighbor Future Captioning Model, which introduces the Nearest Neighbor Language Model for future captioning of possible collisions and enhances the model output with a nearest-neighbor retrieval mechanism. Furthermore, we introduce the Collision Attention Module, which attends to regions of possible collisions and thereby enables our model to generate descriptions that adequately reflect the objects involved in possible collisions. To validate our method, we constructed a new dataset containing samples of collisions that can occur when a DSR places an object in a simulation environment. The experimental results demonstrate that our method outperforms baseline methods on standard metrics. In particular, on CIDEr-D, the baseline method obtained 25.09 points, whereas our method obtained 33.08 points.
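The nearest-neighbor retrieval mechanism mentioned in the abstract follows the general kNN-LM recipe: at each decoding step, the decoder's context representation queries a datastore of stored (hidden state, next token) pairs, and the retrieved neighbors are converted into a token distribution that is interpolated with the decoder's own softmax output. The sketch below is a minimal illustration of that general interpolation, not the authors' implementation; the function name, the L2 distance, the softmax temperature, and the mixing weight lam are all illustrative assumptions.

```python
import numpy as np

def knn_lm_next_token_probs(model_logits, query, datastore_keys, datastore_values,
                            vocab_size, k=8, temperature=1.0, lam=0.25):
    """Interpolate a decoder's next-token distribution with a distribution
    retrieved from a nearest-neighbor datastore (generic kNN-LM sketch).

    model_logits:     (vocab_size,) logits from the captioning decoder.
    query:            (d,) hidden state used as the retrieval query.
    datastore_keys:   (N, d) stored context representations.
    datastore_values: (N,) token ids paired with each stored key.
    """
    # Base distribution from the decoder (numerically stable softmax).
    p_model = np.exp(model_logits - model_logits.max())
    p_model /= p_model.sum()

    # Retrieve the k nearest keys by L2 distance.
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    nn_idx = np.argsort(dists)[:k]

    # Turn negative distances into weights over the retrieved tokens.
    weights = np.exp(-dists[nn_idx] / temperature)
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for idx, w in zip(nn_idx, weights):
        p_knn[datastore_values[idx]] += w

    # Interpolate the retrieval distribution with the model distribution.
    return lam * p_knn + (1.0 - lam) * p_model
```

In a typical kNN-LM setup, the datastore would be built offline by running the trained decoder over the training captions and storing each context representation alongside its ground-truth next token.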

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was partially supported by JSPS KAKENHI Grant Number 23K03478, JST Moonshot, JST CREST, and NEDO.

Notes on contributors

Takumi Komatsu

Takumi Komatsu graduated with a B.E. in computer science from Keio University, Japan, in 2023 and is currently pursuing his M.S. at the same university. His research interests include service robots, multimodal language understanding, and machine learning.

Motonari Kambara

Motonari Kambara is a Ph.D. student at Keio University, Japan. He obtained a B.E. and an M.S. in computer science from Keio University in 2021 and 2023, respectively. Since 2023, he has been a research fellow at JSPS. His research interests include vision & language and domestic service robotics.

Shumpei Hatanaka

Shumpei Hatanaka received his B.E. and M.S. degrees in computer science from Keio University, Japan, in 2021 and 2023, respectively. His research interests encompass referring expression segmentation and vision-and-language navigation.

Haruka Matsuo

Haruka Matsuo graduated with a B.E. in computer science from Keio University, Japan, in 2023 and is currently pursuing her M.S. at the same university. Her research interests include service robots, multimodal language understanding, and machine learning.

Tsubasa Hirakawa

Tsubasa Hirakawa received the Ph.D. degree in computer science from Hiroshima University, Japan, in 2017. From 2017 to 2019, he was a Research Fellow with Chubu University, Japan. He has been a specially appointed Associate Professor with the Chubu Institute for Advanced Studies, Chubu University, since 2019, and a Lecturer with the Department of Computer Science, Chubu University, since 2021. He held a fellowship with the Japan Society for the Promotion of Science from 2014 to 2017 and was a Visiting Researcher with ESIEE Paris, France, from 2014 to 2015.

Takayoshi Yamashita

Takayoshi Yamashita received the Ph.D. degree in computer science from Chubu University, Japan, in 2011. He was with OMRON Corporation from 2002 to 2014. He was a Lecturer with the Department of Computer Science, Chubu University, from 2014 to 2017, an Associate Professor from 2017 to 2021, and has been a Professor since 2021. His current research interests include object detection, object tracking, human activity understanding, pattern recognition, and machine learning. He is a member of IEICE and IPSJ.

Hironobu Fujiyoshi

Hironobu Fujiyoshi received the Ph.D. degree in electrical engineering from Chubu University, Japan, in 1997. From 1997 to 2000, he was a Postdoctoral Fellow with the Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA, working on the DARPA video surveillance and monitoring (VSAM) effort and the humanoid vision project for the HONDA humanoid robot. From 2005 to 2006, he was a Visiting Researcher with the Robotics Institute, Carnegie Mellon University. He is currently a Professor with the Department of Robotics, Chubu University. His research interests include computer vision, video understanding, and pattern recognition. He is a member of IEICE and IPSJ.

Komei Sugiura

Komei Sugiura is a Professor at Keio University, Japan. He obtained a B.E. in electrical and electronic engineering and an M.S. and a Ph.D., both in informatics, from Kyoto University in 2002, 2004, and 2007, respectively. From 2006 to 2008, he was a research fellow at JSPS. From 2006 to 2009, he was also with ATR. From 2008 to 2020, he was a Senior Researcher at the National Institute of Information and Communications Technology, Japan, before joining Keio University in 2020. His research interests include multimodal language understanding, service robots, machine learning, spoken dialogue systems, cloud robotics, imitation learning, and recommender systems.

