
Automatic Liver Tumor Segmentation based on Multi-level Deep Convolutional Networks and Fractal Residual Network


REFERENCES

  • Z. Bai, H. Jiang, S. Li, and Y. Yao, “Liver tumour segmentation based on multi-scale candidate generation and fractal residual network,” IEEE Access, Vol. 7, pp. 82122–82133, 2019. doi:10.1109/ACCESS.2019.2923218.
  • M. Bellver, K. Maninis, J. Pont-Tuset, X. Giró, J. Torres, and L. Van Gool, “Detection-aided liver lesion segmentation using deep learning,” arXiv:1711.11069, 2017, pp. 1–5.
  • L. Bi, J. Kim, A. Kumar, and D. Feng, “Automatic liver lesion detection using cascaded deep residual networks,” arXiv:1704.02703v2, 2017.
  • A. Das, P. Das, S. S. Panda, and S. Sabut, “Detection of liver cancer using modified fuzzy clustering and decision tree classifier in CT images,” Pattern Recognit. Image Anal., Vol. 29, no. 2, pp. 201–211, 2019.
  • Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: learning dense volumetric segmentation from sparse annotation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016 (Lecture Notes in Computer Science). Cham: Springer, 2016, pp. 424–432.
  • N. T. N. Anh, J. Cai, J. Zhang, and J. Zheng, “Constrained active contours for boundary refinement in interactive image segmentation,” in 2012 IEEE International Symposium on Circuits and Systems (ISCAS), Seoul, 2012, pp. 870–873. doi:10.1109/ISCAS.2012.6272179.
  • K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
  • C. Peng, X. Zhang, G. Yu, G. Luo, and J. Sun, “Large kernel matters–improve semantic segmentation by global convolutional network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4353–4361.
  • P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1125–1134.
  • B. C. Anil and P. Dayananda, “Study on segmentation and liver tumor detection methods,” Int. J. Eng. Technol. (UAE), Vol. 7, pp. 28–33, 2018. doi:10.14419/ijet.v7i3.4.14670.
  • D. Chen, J.-M. Mirebeau, H. Shu, and L. Cohen, “Eikonal region-based active contours for image segmentation,” 2019.
  • X. Han, “Automatic liver lesion segmentation using a deep convolutional neural network method,” arXiv:1704.07239, 2017. [Online]. Available: https://arxiv.org/abs/1704.07239.
  • P. F. Christ et al., “Automatic liver and tumor segmentation of CT and MRI volumes using cascaded fully convolutional neural networks,” arXiv:1702.05970, 2017. [Online]. Available: https://arxiv.org/abs/1702.
  • Y. Yuan, “Hierarchical convolutional-deconvolutional neural network for automatic liver and tumour segmentation,” arXiv:1710.04540, 2017.
  • G. Chlebus, H. Meine, J. H. Moltz, and A. Schenk, “Neural network based automatic liver tumor segmentation with random forest-based candidate filtering,” arXiv:1706.00842, 2017. [Online]. Available: https://arxiv.org/abs/1706.00842.
