Original Article

Improvement in automatic food region extraction based on saliency detection

Pages 634-647 | Received 31 Aug 2021, Accepted 13 Mar 2022, Published online: 28 Mar 2022

References

  • Lega, I. C.; Lipscombe, L. L. Diabetes, Obesity, and Cancer–pathophysiology and Clinical Implications. Endocrine Rev. 2020, 41(3), 33–52. DOI: 10.1210/endrev/bnz014.
  • World Health Organization. Global Action Plan for the Prevention and Control of Noncommunicable Diseases 2013-2020. World Health Organization, 2013.
  • Doulah, A.; McCrory, M. A.; Higgins, J. A.; Sazonov, E. A Systematic Review of Technology-driven Methodologies for Estimation of Energy Intake. IEEE Access. 2019, 7(1), 49653–49668. DOI: 10.1109/ACCESS.2019.2910308.
  • Bruno, V.; Silva Resende, C. J. A Survey on Automated Food Monitoring and Dietary Management Systems. J. Health Med. Inform. 2017, 8(3), 1–7.
  • Aizawa, K. Image Recognition-based Tool for Food Recording and Analysis: FoodLog. In Connected Health in Smart Cities; El Saddik, A.; Hossain, M. S.; Kantarci, B., Eds.; Springer: Cham, Switzerland, 2019; pp 1–9.
  • Ahmad, Z.; Bosch, M.; Khanna, N.; Kerr, D. A.; Boushey, C. J.; Zhu, F., and Delp, E. J. (2016). A Mobile Food Record for Integrated Dietary Assessment. Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management, Netherlands, 53–62.
  • Fang, S.; Shao, Z.; Kerr, D. A.; Boushey, C. J.; Zhu, F. An End-to-end Image-based Automatic Food Energy Estimation Technique Based on Learned Energy Distribution Images: Protocol and Methodology. Nutrients. 2019, 11(4), 877. DOI: 10.3390/nu11040877.
  • Hassannejad, H.; Matrella, G.; Ciampolini, P.; De Munari, I.; Mordonini, M.; Cagnoni, S. Automatic Diet Monitoring: A Review of Computer Vision and Wearable Sensor-based Methods. Int. J. Food Sci. Nutr. 2017, 68(6), 656–670. DOI: 10.1080/09637486.2017.1283683.
  • Höchsmann, C.; Martin, C. K. Review of the Validity and Feasibility of Image-assisted Methods for Dietary Assessment. Int. J. Obes. 2020, 44(12), 2358–2371. DOI: 10.1038/s41366-020-00693-2.
  • Dhruv, P., and Naskar, S. (2020). Image Classification Using Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN): A Review. Proceedings of the International Conference on Machine Learning and Information Processing, India, 367–381.
  • Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39(4), 640–651. DOI: 10.1109/TPAMI.2016.2572683.
  • Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39(12), 2481–2495. DOI: 10.1109/TPAMI.2016.2644615.
  • Ullah, I.; Jian, M.; Hussain, S.; Guo, J.; Yu, H.; Wang, X.; Yin, Y. A Brief Survey of Visual Saliency Detection. Multimedia Tools Appl. 2020, 79(45), 34605–34645. DOI: 10.1007/s11042-020-08849-y.
  • Chen, H. C.; Jian, W.; Sun, X.; Li, Z.; Li, Y.; Fernstrom, J. D.; Sun, M.; Baranowski, T.; Sun, M. Saliency-aware Food Image Segmentation for Personal Dietary Assessment Using a Wearable Computer. Meas. Sci. Technol. 2015, 26(2), 025702. DOI: 10.1088/0957-0233/26/2/025702.
  • Wang, Y.; Zhu, F.; Boushey, C. J., and Delp, E. J. (2017). Weakly Supervised Food Image Segmentation Using Class Activation Maps. Proceedings of the 2017 IEEE International Conference on Image Processing, China, 1277–1281.
  • Ege, T.; Shimoda, W., and Yanai, K. (2019). A New Large-scale Food Image Segmentation Dataset and Its Application to Food Calorie Estimation Based on Grains of Rice. Proceedings of the 5th International Workshop on Multimedia Assisted Dietary Management, France, 82–87.
  • Futagami, T.; Kitada, A., and Hayasaka, H. (2020). Food Region Extraction by Applying Saliency Detection Model. Proceedings of the 64th Annual Conference of the Institute of Systems, Control and Information Engineers, Web conference, 57–62 ( in Japanese).
  • Sugiyama, H.; Morikawa, C.; Aizawa, K. Segmentation of Food Images by Local Extrema and GrabCut. J. Inst. Image Inform. Television Eng. 2012, 66(5), J179–J181 (in Japanese). DOI: 10.3169/itej.66.J179.
  • Agrawal, M.; Konolige, K., and Blas, M. R. (2008). CenSurE: Center Surround Extremas for Realtime Feature Detection and Matching. Proceedings of the 10th European Conference on Computer Vision, France, 102–115.
  • Futagami, T.; Hayasaka, N., and Onoye, T. (2020). Performance Comparison of Saliency Detection Methods for Food Region Extraction. Proceedings of the 4th International Conference on Graphics and Signal Processing, Japan, 1–4.
  • Kroner, A.; Senden, M.; Driessens, K.; Goebel, R. Contextual Encoder–decoder Network for Visual Saliency Prediction. Neural Netw. 2020, 129, 261–270. DOI: 10.1016/j.neunet.2020.05.004.
  • Jiang, M.; Huang, S.; Duan, J., and Zhao, Q. (2015). SALICON: Saliency in Context. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, United States, 1072–1080.
  • Sklansky, J. Measuring Concavity on a Rectangular Mosaic. IEEE Trans. Comput. 1972, 21(12), 1355–1364. DOI: 10.1109/T-C.1972.223507.
  • Rother, C.; Kolmogorov, V.; Blake, A. GrabCut: Interactive Foreground Extraction Using Iterated Graph Cuts. ACM Trans. Graph. 2004, 23(3), 309–314. DOI: 10.1145/1015706.1015720.
  • Boykov, Y.; Kolmogorov, V. An Experimental Comparison of Min-cut/max-flow Algorithms for Energy Minimization in Vision. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26(9), 1124–1137. DOI: 10.1109/TPAMI.2004.60.
  • Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-scale Image Recognition. arXiv preprint. 2014, arXiv:1409.1556.
  • Goh, T. Y.; Basah, S. N.; Yazid, H.; Safar, M. J. A.; Saad, F. S. A. Performance Analysis of Image Thresholding: Otsu Technique. Measurement. 2018, 114(9), 298–307. DOI: 10.1016/j.measurement.2017.09.052.
  • Jaisakthi, S. M.; Mirunalini, P.; Aravindan, C. Automated Skin Lesion Segmentation of Dermoscopic Images Using GrabCut and K-means Algorithms. IET Comput. Vis. 2018, 12(8), 1088–1095. DOI: 10.1049/iet-cvi.2018.5289.
  • Silva, R. H. L.; Machado, A. M. C. Automatic Measurement of Pressure Ulcers Using Support Vector Machines and GrabCut. Comput. Method Program Biomed. 2021, 200, 105867. DOI: 10.1016/j.cmpb.2020.105867.
  • Paszke, A.; Chaurasia, A.; Kim, S.; Culurciello, E. ENet: A Deep Neural Network Architecture for Real-time Semantic Segmentation. arXiv preprint. 2016, arXiv:1606.02147.
  • Lo, S. Y.; Hang, H. M.; Chan, S. W.; Lin, J. J. Efficient Dense Modules of Asymmetric Convolution for Real-time Semantic Segmentation. arXiv preprint. 2018, arXiv:1809.06323.
  • Pohlen, T.; Hermans, A.; Mathias, M., and Leibe, B. (2017). Full-resolution Residual Networks for Semantic Segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, United States, 4151–4160.
  • Aslan, S.; Ciocca, G.; Mazzini, D.; Schettini, R. Benchmarking Algorithms for Food Localization and Semantic Segmentation. Int. J. Mach. Learn. Cybern. 2020, 11(12), 2827–2847. DOI: 10.1007/s13042-020-01153-z.
  • Ciocca, G.; Mazzini, D., and Schettini, R. (2019). Evaluating CNN-based Semantic Food Segmentation across Illuminants. Proceedings of the 7th International Workshop on Computational Color Imaging, Japan, 247–259.
  • Pinzon-Arenas, J. O.; Jimenez-Moreno, R.; Pachon-Suescun, C. G. ResSeg: Residual Encoder-decoder Convolutional Neural Network for Food Segmentation. Int. J. Electr. Comput. Eng. 2020, 10(2), 1017–1026.
  • Shimoda, W., and Yanai, K. (2015). CNN-based Food Image Segmentation without Pixel-wise Annotation. Proceedings of the 20th International Conference on Image Analysis and Processing, Italy, 449–457.
  • Ciocca, G.; Napoletano, P.; Schettini, R. Food Recognition: A New Dataset, Experiments, and Results. IEEE J. Biomed. Health Inform. 2016, 21(3), 588–598. DOI: 10.1109/JBHI.2016.2636441.