
Human Interaction Recognition in Videos with Body Pose Traversal Analysis and Pairwise Interaction Framework

Amit Verma, Toshanlal Meenpal & Bibhudendra Acharya

Abstract

Interaction recognition in videos with body pose is gaining remarkable attention due to its speed and robustness. Recently proposed recurrent neural network (RNN) and deep ConvNet-based methods show good performance in learning sequential information. Despite this, RNNs lag behind in learning spatial relations between body parts, while deep ConvNets require huge amounts of data for training. We propose a traversal-based three-layer neural network (TNN), followed by a pairwise interaction framework (PIF), for interaction recognition. We also propose a novel algorithm for tracking humans across successive frames. The proposed algorithm computes the collective traversal of individual body parts across frames and feeds it to the TNN to learn effective representations of complex actions. The PIF model combines the confidence scores of a pair of action labels corresponding to an interaction for the final interaction prediction. We evaluate the approach on two publicly available datasets, i.e. UT-Interaction and SBU Kinect Interaction. Results show that our proposed approach outperforms state-of-the-art methods.
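To make the "collective traversal" idea concrete, here is a minimal sketch in Python, assuming OpenPose-style 2D keypoints. The joint count, temporal window, and joint-major grouping are illustrative assumptions, not the paper's exact design: each body part's coordinates are gathered across consecutive frames so that its trajectory is contiguous in the feature vector handed to a small three-layer network.

```python
# A minimal sketch of building a traversal feature vector from a pose
# sequence. Shapes and constants are assumptions for illustration only.
import numpy as np

NUM_JOINTS = 15      # assumed skeleton size
NUM_FRAMES = 10      # assumed temporal window

def traversal_features(pose_seq):
    """pose_seq: (NUM_FRAMES, NUM_JOINTS, 2) array of (x, y) keypoints.

    Groups coordinates joint-by-joint across frames, so each body part's
    trajectory ("traversal") is contiguous in the resulting vector.
    """
    # (frames, joints, 2) -> (joints, frames, 2) -> flat vector
    return pose_seq.transpose(1, 0, 2).reshape(-1)

# A three-layer fully connected network could then consume this vector:
# input (NUM_JOINTS * NUM_FRAMES * 2) -> hidden -> softmax over action labels.
x = traversal_features(np.random.rand(NUM_FRAMES, NUM_JOINTS, 2))
print(x.shape)  # (300,) = 15 joints * 10 frames * 2 coordinates
```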

Notes

1 An interaction class may have two active performers, such as Hug and Handshake. We have divided these classes into two labels, i.e. Left and Right, as shown in Figure (c). A code sketch of this pairwise scoring follows below.
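The sketch below illustrates how the PIF step described in the abstract could combine per-performer confidence scores into a single interaction prediction, using the Left/Right label split from this note. The label names, the hypothetical one-active-performer class, and the use of a product for score combination are assumptions, not the authors' exact formulation.

```python
# Hypothetical interaction classes, each split into a (Left, Right) action
# label pair as in note 1. "Punch"/"Stand" is an invented example of a class
# with one active and one passive performer.
INTERACTIONS = {
    "Hug":       ("Hug-Left", "Hug-Right"),
    "Handshake": ("Handshake-Left", "Handshake-Right"),
    "Punch":     ("Punch", "Stand"),
}
ACTION_LABELS = sorted({a for pair in INTERACTIONS.values() for a in pair})

def predict_interaction(scores_p1, scores_p2):
    """Combine per-performer action confidences into an interaction label.

    scores_p1, scores_p2: dicts mapping action label -> confidence in [0, 1]
    (e.g. softmax outputs of the per-person action classifier).
    """
    best_label, best_score = None, -1.0
    for interaction, (left, right) in INTERACTIONS.items():
        # Combine the two confidences; a product is one simple choice.
        score = scores_p1[left] * scores_p2[right]
        if score > best_score:
            best_label, best_score = interaction, score
    return best_label, best_score

# Toy usage with made-up confidence scores.
p1 = {lbl: 0.05 for lbl in ACTION_LABELS}; p1["Handshake-Left"] = 0.9
p2 = {lbl: 0.05 for lbl in ACTION_LABELS}; p2["Handshake-Right"] = 0.85
print(predict_interaction(p1, p2))  # -> ('Handshake', 0.765)
```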

Additional information

Notes on contributors

Amit Verma

Amit Verma is currently a PhD research scholar in the Department of Electronics and Communication at National Institute of Technology Raipur, India. His research interests include image processing, computer vision, and neural networks. Email: [email protected]

Toshanlal Meenpal

Toshanlal Meenpal is currently an assistant professor in the Department of Electronics and Communication at National Institute of Technology Raipur, India. He obtained his PhD from Bhabha Atomic Research Centre (BARC), Mumbai under the aegis of HBNI University, Mumbai, and received his master's degree in automation and computer vision engineering from Indian Institute of Technology, Kharagpur in 2005. Before switching to academics, he worked for five years as a design engineer at ST Microelectronics and Nvidia Graphics in multimedia playback-related R&D groups. His research interests include multimedia security techniques such as digital watermarking, steganography, and cryptography, as well as image processing and image analysis.

Bibhudendra Acharya

Bibhudendra Acharya was born in India on June 30, 1978. He graduated in electronics and telecommunication engineering from Dr B A Marathawada University, Aurangabad, India in 2002, received his MTech in telematics and signal processing from National Institute of Technology, Rourkela, Odisha, India in 2004, and obtained his PhD from the same institute in 2015. He is currently serving as an assistant professor in the Department of Electronics and Communication, NIT Raipur. He has more than 40 research publications in national/international journals and conferences. His research interests include cryptography and network security, microcontrollers and embedded systems, signal processing, and soft computing. Email: [email protected]
