
A deep separable neural network for human tissue identification in three-dimensional optical coherence tomography images


Abstract

This research proposes a dilated depthwise separable network for human tissue identification in three-dimensional (3D) optical coherence tomography (OCT) images. Automatic human tissue identification enables fast pathological tissue analysis, detection of tissue changes over time, and efficient development of precise treatment plans. 3D medical image classification is challenging because of indistinct tissue characteristics and high computational cost. To address these challenges, a deep dilated depthwise separable convolutional neural network (DDSCN) is proposed. The depthwise separable architecture improves parameter utilization efficiency, while dilated convolutions systematically aggregate multiscale contextual information and provide a large receptive field with a small number of trainable weights, yielding a computational benefit. 2D convolutions are used in the proposed model to further enhance computational efficiency. The model is evaluated on a multi-class human thyroid tissue classification task using 3D OCT images and compared with texture-feature-based shallow learning models and typical deep learning classifiers. The results show that the proposed DDSCN outperforms these state-of-the-art models, improving accuracy by 3.2% over the best texture-based model and 2.27% over the best CNN model. The proposed deep model demonstrates the applicability of deep learning to medical images of human tissue and advances the next generation of OCT-based real-time surgical image guidance.
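To illustrate the parameter-efficiency argument behind depthwise separable convolutions and the receptive-field benefit of dilation, the following minimal sketch compares parameter counts for a standard 2D convolution against a depthwise separable one. The layer sizes (kernel 3x3, 64 input channels, 128 output channels) are hypothetical examples, not values from the paper:

```python
def standard_conv_params(k, c_in, c_out):
    # Standard 2D convolution: one k x k filter per
    # (input channel, output channel) pair.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise separable convolution:
    #   depthwise stage: one k x k filter per input channel;
    #   pointwise stage: a 1 x 1 convolution mixing channels.
    return k * k * c_in + c_in * c_out

def effective_receptive_field(k, dilation):
    # A k x k kernel with dilation d covers d * (k - 1) + 1
    # positions per spatial axis, enlarging the receptive
    # field without adding trainable weights.
    return dilation * (k - 1) + 1

if __name__ == "__main__":
    k, c_in, c_out = 3, 64, 128  # hypothetical layer sizes
    std = standard_conv_params(k, c_in, c_out)
    sep = separable_conv_params(k, c_in, c_out)
    print(f"standard: {std} params, separable: {sep} params "
          f"({std / sep:.1f}x fewer)")
    print(f"3x3 kernel, dilation 2 -> "
          f"{effective_receptive_field(3, 2)}x receptive field per axis")
```

For these example sizes the separable layer uses roughly 8x fewer parameters, which is the kind of saving the abstract refers to when citing improved parameter utilization efficiency.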

Acknowledgement

The authors would like to thank the Integrated Electronics Engineering Center (IEEC) at Binghamton University for supporting the data collection in this research. The authors also thank the reviewers for their valuable comments, which helped improve this paper.
