ABSTRACT
In recent years, smart workplaces that adapt to human activity and motion behavior have been proposed for cognitive production systems. In this context, methods for identifying the feelings and activities of human workers are being investigated to improve the cognitive capability of smart machines, such as robots, in shared workspaces. Recognizing human activities and predicting the likely next sequence of operations can simplify robot programming and improve collaboration efficiency. However, human activity recognition still requires explainable models that are versatile, robust, and interpretable. Recognizing and analyzing the details of human actions using continuous probability density estimates across different workplace layouts is therefore essential. Three scenarios are considered: a standalone workplace, a one-piece-flow U-form workplace, and a human-robot hybrid workplace. This work presents a novel approach to human activity recognition based on a probabilistic spatial partition (HAROPP). Its performance is compared with a geometric bounding-box activity recognition method. Results show that spatial partitions based on probability density contain 20% fewer data frames and cover 10% more spatial area than the geometric bounding box. On average, the approach correctly detects human activities in 81% of cases for a workplace layout that is known in advance. HAROPP offers scalability and applicability potential for cognitive workplaces with a digital twin in the loop, advancing the cognitive capabilities of machine systems and enabling human-centered environments.
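The spatial-partition idea summarized above can be illustrated with a short sketch. The following Python example is not the authors' implementation; the synthetic data, the bandwidth choice, and the 10%-of-peak density threshold are assumptions made purely for illustration. It fits a 2D kernel density estimate to joint-position frames, extracts a density contour as the probabilistic partition, and computes the two comparison quantities referenced in the abstract: contained frames and partition area relative to an axis-aligned bounding box.

```python
# Illustrative sketch only: KDE-based spatial partition vs. axis-aligned bounding box.
# Assumed input: `frames`, an (n, 2) array of x/y joint positions from motion capture.
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
frames = rng.normal(loc=[0.4, 0.8], scale=[0.05, 0.12], size=(500, 2))  # synthetic frames

# Kernel density estimate over the 2D positions (bandwidth set by Scott's rule here).
kde = gaussian_kde(frames.T)

# Evaluate the density on a grid so a contour path can be extracted from it.
xs = np.linspace(frames[:, 0].min() - 0.1, frames[:, 0].max() + 0.1, 200)
ys = np.linspace(frames[:, 1].min() - 0.1, frames[:, 1].max() + 0.1, 200)
X, Y = np.meshgrid(xs, ys)
Z = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)

# Density threshold defining the probabilistic partition (10% of peak: an assumption).
level = 0.1 * Z.max()
contour = plt.contour(X, Y, Z, levels=[level])  # contour path of the partition

# Frames contained in the KDE partition vs. the bounding box, and the area ratio.
inside_kde = kde(frames.T) >= level
bb_min, bb_max = frames.min(axis=0), frames.max(axis=0)
area_bb = np.prod(bb_max - bb_min)
cell = (xs[1] - xs[0]) * (ys[1] - ys[0])
area_kde = cell * np.count_nonzero(Z >= level)

print(f"contained frames (KDE partition): {inside_kde.sum()} / {len(frames)}")
print(f"area ratio KDE / bounding box: {area_kde / area_bb:.2f}")
```

Under these assumptions, the same two metrics reported in the abstract (contained data frames and spatial area of the partition) can be computed for any activity's point cloud and compared across workplace layouts.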
Availability of data and material
Not applicable.
Code availability
Not applicable.
Consent for publication
The authors affirm that human research participants provided informed consent for the publication of their images.
Consent to participate
Voluntary participation and informed consent were obtained from all individual participants included in the study, which was conducted in a laboratory environment.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Ethics approval
The motion capture data comprise joint positions, orientations, and time stamps. The data are stored anonymously and analyzed, with the participants' consent, for academic research.
Symbols and acronyms
| Symbol | Description |
| --- | --- |
| | List of activities |
| | List of resources |
| | List of cells or tasks |
| | Observability scope of activities |
| | Kernel of the density estimate |
| | Probability of the kernel density estimate |
| | Kernel bandwidth |
| | Contour path extracted from the density plot |
| | Recognized human activity |
| | Function |
| | Area of the bounding geometry |
| | Area of the density estimate |
| | Frames contained within the bounding geometry |
| | Frames contained within the density area |
| HRC | Human-robot collaboration |
| HAR | Human activity recognition |
| HAROPP | Human activity recognition on probabilistic partition |
| IMU | Inertial measurement unit |
| YAML | Yet another markup language |
| MTM | Methods-time measurement |
| MTM-1 | MTM first generation |
| MTM-2 | MTM second generation |
| MTM-UAS | MTM Universal Analysing System |
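For reference, the probability density underlying the probabilistic partition is presumably the standard kernel density estimator built from the kernel and bandwidth listed above; the formula below is the textbook formulation, shown for clarity rather than quoted from the paper:

$$\hat{f}_h(x) = \frac{1}{n\,h^{d}} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),$$

where $x_1, \dots, x_n$ are the observed joint-position frames, $K$ is the kernel, $h$ is the bandwidth, and $d$ is the spatial dimension. The contour path of the partition is then a level set of $\hat{f}_h$.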