Abstract
This study introduces a novel framework for the robotic decommissioning of nuclear facilities, focusing on object classification and six-degrees-of-freedom (6-DOF) pose estimation from partial-view three-dimensional (3-D) scan data. Addressing the challenge of precise robotic manipulation in environments where acquiring full scans is impractical, the framework leverages a deep neural network for initial pose estimation, which is subsequently refined by a modified iterative closest point (ICP) algorithm. Our method demonstrates high accuracy in identifying scanned objects and estimating their poses from partial-view scans, as validated through experiments with 3-D printed mock-ups. This advancement highlights the potential to significantly enhance robotic automation in nuclear decommissioning and related fields.
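The refinement stage described above can be illustrated with a minimal sketch. The paper's modified ICP is not specified here, so the following is a textbook point-to-point ICP in NumPy: correspondences come from brute-force nearest neighbours, and each iteration solves for the rigid transform with the Kabsch (SVD) method, starting from an initial pose such as one predicted by a network. All function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Kabsch/SVD solution for the rigid transform mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp_refine(src, dst, R=np.eye(3), t=np.zeros(3), iters=50, tol=1e-9):
    """Refine an initial pose (R, t) by alternating nearest-neighbour
    matching and closed-form rigid alignment (standard point-to-point ICP)."""
    prev_err = np.inf
    for _ in range(iters):
        moved = src @ R.T + t
        # Brute-force nearest neighbours; adequate for small mock-up clouds.
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        err = d[np.arange(len(src)), nn].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        R, t = best_fit_transform(src, dst[nn])
    return R, t
```

ICP of this kind only converges from a reasonably close initial pose, which is why a learned initial estimate (as in the abstract's pipeline) is paired with it; a poor initialization can lock the refinement into a wrong local minimum.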
Acknowledgments
The author is grateful to Hogeon Seo for invaluable insights and discussions that significantly enhanced this research. Special thanks also go to Jaehyun Ha for his diligent work in creating the 3-D mock-ups used in the experiments.
Disclosure Statement
No potential conflict of interest was reported by the author.