Abstract
Recent years have seen tremendous progress in artificial intelligence (AI), such as the automatic, real-time recognition of objects and activities in videos in the field of computer vision. Due to its increasing digitalization, the operating room (OR) stands to benefit directly from this progress in the form of new assistance tools that can enhance the abilities and performance of surgical teams. Key to such tools is the recognition of the surgical workflow, because efficient assistance by an AI system requires that system to be aware of the surgical context, namely of all the activities taking place inside the operating room. We present here how several recent techniques relying on machine and deep learning can be used to analyze the activities taking place during surgery, using videos captured from either endoscopic or ceiling-mounted cameras. We also present two potential clinical applications that we are developing at the University of Strasbourg with our clinical partners.
Acknowledgements
The work described in this invited paper was carried out by the members of the CAMMA lab since 2013 in collaboration with their clinical partners at University Hospital of Strasbourg, IHU Strasbourg and IRCAD, and supported by French state funds managed within the Investissements d'Avenir program by the ANR (references ANR-11-LABX-0004, ANR-10-IDEX-0002-02, ANR-10-IAHU-02 and ANR-16-CE33-0009) and by BPI France (project CONDOR).
Declaration of interest
No potential conflict of interest was reported by the author.
Notes
3. Qualitative tool tracking results can be viewed in this video: https://youtu.be/vnMwlS5tvHE