Research Article

Gradient convolutional neural network for classification of agricultural fields with contour levee

