Canadian Journal of Remote Sensing
Journal canadien de télédétection
Volume 50, 2024 - Issue 1
Research Article

Dense Connected Edge Feature Enhancement Network for Building Edge Detection from High Resolution Remote Sensing Imagery

Réseau dense connecté de rehaussement de contours pour la détection des contours de bâtiments dans des images de

Article: 2298806 | Received 01 Jun 2023, Accepted 18 Dec 2023, Published online: 16 Jan 2024

References

  • Ahmadi, S., Zoej, M.J.V., Ebadi, H., Moghaddam, H.A., and Mohammadzadeh, A. 2010. “Automatic urban building boundary extraction from high resolution aerial images using an innovative model of active contours.” International Journal of Applied Earth Observation and Geoinformation, Vol. 12 (No. 3): pp. 150–157. doi:10.1016/j.jag.2010.02.001.
  • Ahmed, A., and Byun, Y.-C. 2019. "Edge detection using CNN for roof images." Paper presented at Proceedings of the 2019 Asia Pacific Information Technology Conference, 75–78. Jeju Island, Republic of Korea: ACM. doi:10.1145/3314527.3314544.
  • Arbeláez, P., Maire, M., Fowlkes, C., and Malik, J. 2011. “Contour detection and hierarchical image segmentation.” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33 (No. 5): pp. 898–916. doi:10.1109/TPAMI.2010.161.
  • Bai, B., Fu, W., Lu, T., and Li, S. 2022. “Edge-guided recurrent convolutional neural network for multitemporal remote sensing image building change detection.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 60: pp. 1–13. doi:10.1109/TGRS.2021.3106697.
  • Bell, S., Zitnick, C., Bala, K., and Girshick, R. 2015. "Inside-outside net: Detecting objects in context with skip pooling and recurrent neural networks." arXiv. http://arxiv.org/abs/1512.04143.
  • Benjilali, W., Guicquero, W., Jacques, L., and Sicard, G. 2019. "Hardware-friendly compressive imaging based on random modulations & permutations for image acquisition and classification." 2019 IEEE International Conference on Image Processing (ICIP), 2085–89. Taipei, Taiwan: IEEE. doi:10.1109/ICIP.2019.8803113.
  • Bousias Alexakis, E., and Armenakis, C. 2021. “Performance improvement of encoder/decoder-based CNN architectures for change detection from very high-resolution satellite imagery.” Canadian Journal of Remote Sensing, Vol. 47 (No. 2): pp. 309–336. doi:10.1080/07038992.2021.1922880.
  • Canny, J. 1986. "A computational approach to edge detection." IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8 (No. 6): pp. 679–698. doi:10.1109/TPAMI.1986.4767851.
  • Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., and Zagoruyko, S. 2020. "End-to-End Object Detection with Transformers." arXiv. http://arxiv.org/abs/2005.12872.
  • Chaudhuri, B.B., and Chanda, B. 1984. "The equivalence of best plane fit gradient with Robert's, Prewitt's and Sobel's gradient for edge detection and a 4-neighbour gradient with useful properties." Signal Processing, Vol. 6 (No. 2): pp. 143–151. doi:10.1016/0165-1684(84)90015-X.
  • Chen, D.-J., Hsieh, H.-Y., and Liu, T.-L. 2021. "Adaptive Image Transformer for One-Shot Object Detection." 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 12242–51. Nashville, TN, USA: IEEE. doi:10.1109/CVPR46437.2021.01207.
  • Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A.L. 2018. “DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs.” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 40 (No. 4): pp. 834–848. doi:10.1109/TPAMI.2017.2699184.
  • Cui, S., Yan, Q., and Reinartz, P. 2012. "Complex building description and extraction based on Hough transformation and cycle detection." Remote Sensing Letters, Vol. 3 (No. 2): pp. 151–159. doi:10.1080/01431161.2010.548410.
  • Deng, R., Shen, C., Liu, S., Wang, H., and Liu, X. 2018. "Learning to predict crisp boundaries." arXiv. http://arxiv.org/abs/1807.10097.
  • Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., et al. 2021. "An Image Is Worth 16x16 Words: Transformers for Image Recognition at Scale." arXiv. http://arxiv.org/abs/2010.11929.
  • Durieux, L., Lagabrielle, E., and Nelson, A. 2008. "A method for monitoring building construction in urban sprawl areas using object-based analysis of SPOT 5 images and existing GIS data." ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 63 (No. 4): pp. 399–408. doi:10.1016/j.isprsjprs.2008.01.005.
  • Fang, F., Li, J., Yuan, Y., Zeng, T., and Zhang, G. 2021. "Multilevel edge-detection guided network for image denoising." IEEE Transactions on Neural Networks and Learning Systems, Vol. 32 (No. 9): pp. 3956–3970. doi:10.1109/TNNLS.2020.3016321.
  • Fang, T., Zhang, M., Fan, Y., Wu, W., Gan, H., and She, Q. 2021. "Developing a feature decoder network with low-to-high hierarchies to improve edge detection." Multimedia Tools and Applications, Vol. 80 (No. 1): pp. 1611–1624. doi:10.1007/s11042-020-09800-x.
  • Feng, M., Lu, H., and Ding, E. 2019. “Attentive feedback network for boundary-aware salient object detection.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1623–32. Long Beach, CA, USA: IEEE. doi:10.1109/CVPR.2019.00172.
  • Gao, Y., Wang, M., Tao, D., Ji, R., and Dai, Q. 2012. “3-D object retrieval and recognition with hypergraph analysis.” IEEE Transactions on Image Processing, Vol. 21 (No. 9): pp. 4290–4303. doi:10.1109/TIP.2012.2199502.
  • Hamaguchi, R., Fujita, A., Nemoto, K., Imaizumi, T., and Hikosaka, S. 2018. "Effective Use of Dilated Convolutions for Segmenting Small Object Instances in Remote Sensing Imagery." 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), 1442–50. Lake Tahoe, NV: IEEE. doi:10.1109/WACV.2018.00162.
  • Han, J., Ngan, K.N., Li, M., and Zhang, H.-J. 2006. “Unsupervised extraction of visual attention objects in color images.” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 16 (No. 1): pp. 141–145. doi:10.1109/TCSVT.2005.859028.
  • Harris, C., and Stephens, M. 1988. "A combined corner and edge detector." Proceedings of the Alvey Vision Conference 1988, 23.1–23.6. Manchester: Alvey Vision Club. doi:10.5244/C.2.23.
  • He, J., Zhang, S., Yang, M., Shan, Y., and Huang, T. 2022. "BDCN: Bi-directional cascade network for perceptual edge detection." IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 44 (No. 1): pp. 100–113. doi:10.1109/TPAMI.2020.3007074.
  • He, K., Zhang, X., Ren, S., and Sun, J. 2016. “Deep residual learning for image recognition.” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770–78. Las Vegas, NV, USA: IEEE. doi:10.1109/CVPR.2016.90.
  • He, X., Zhang, Z., and Yang, Z. 2021. "Extraction of urban built-up area based on the fusion of night-time light data and point of interest data." Royal Society Open Science, Vol. 8 (No. 8): pp. 210838. doi:10.1098/rsos.210838.
  • Itti, L., Koch, C., and Niebur, E. 1998. “A model of saliency-based visual attention for rapid scene analysis.” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20 (No. 11): pp. 1254–1259. doi:10.1109/34.730558.
  • Ji, S., Wei, S., and Lu, M. 2019. “Fully convolutional networks for multisource building extraction from an open aerial and satellite imagery data set.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 57 (No. 1): pp. 574–586. doi:10.1109/TGRS.2018.2858817.
  • Ko, B.C., and Nam, J.-Y. 2006. "Object-of-interest image segmentation based on human attention and semantic region clustering." Journal of the Optical Society of America A, Vol. 23 (No. 10): pp. 2462–2470. doi:10.1364/JOSAA.23.002462.
  • Krizhevsky, A., Sutskever, I., and Hinton, G.E. 2017. “ImageNet classification with deep convolutional neural networks.” Communications of the ACM, Vol. 60 (No. 6): pp. 84–90. doi:10.1145/3065386.
  • Lecun, Y., Bottou, L., Bengio, Y., and Haffner, P. 1998. "Gradient-based learning applied to document recognition." Proceedings of the IEEE, Vol. 86 (No. 11): pp. 2278–2324. doi:10.1109/5.726791.
  • Li, G., and Yu, Y. 2015. “Visual Saliency Based on Multiscale Deep Features.” arXiv. http://arxiv.org/abs/1503.08663.
  • Li, S., Liu, Q., Li, Z., Chen, E., and Zhang, J. 2017. “Building Height Extraction from Overlapping Airborne Images in Urban Environment Using Computer Vision Approach.” 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 5767–69. Fort Worth, TX: IEEE. doi:10.1109/IGARSS.2017.8128318.
  • Li, X., Yang, F., Cheng, H., Liu, W., and Shen, D. 2018. “Contour Knowledge Transfer for Salient Object Detection.” In Computer Vision – ECCV 2018, edited by V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, Vol. 11219, 370–385. Cham: Springer International Publishing. doi:10.1007/978-3-030-01267-0_22.
  • Lin, D., Ji, Y., Lischinski, D., Cohen-Or, D., and Huang, H. 2018. “Multiscale Context Intertwining for Semantic Segmentation.” In Computer Vision – ECCV 2018: Lecture Notes in Computer Science, edited by V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss, vol. 11207, 622–638. Cham: Springer International Publishing. doi:10.1007/978-3-030-01219-9_37.
  • Liu, N., and Han, J. 2016. “DHSNet: Deep hierarchical saliency network for salient object detection.” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 678–86. Las Vegas, NV, USA: IEEE. doi:10.1109/CVPR.2016.80.
  • Liu, N., Han, J., and Yang, M.-H. 2018. “PiCANet: Learning Pixel-Wise Contextual Attention for Saliency Detection.” arXiv. http://arxiv.org/abs/1708.06433.
  • Liu, N., Zhang, N., Wan, K., Shao, L., and Han, J. 2021. “Visual Saliency Transformer.” arXiv. http://arxiv.org/abs/2104.12099.
  • Liu, Y., Cheng, M.-M., Hu, X., Bian, J.-W., Zhang, L., Bai, X., and Tang, J. 2019. "Richer convolutional features for edge detection." IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 41 (No. 8): pp. 1939–1946. doi:10.1109/TPAMI.2018.2878849.
  • Liu, Z., Tan, Y., He, Q., and Xiao, Y. 2022. “SwinNet: Swin transformer drives edge-aware RGB-D and RGB-T salient object detection.” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32 (No. 7): pp. 4486–4497. doi:10.1109/TCSVT.2021.3127149.
  • Lu, T., Ming, D., Lin, X., Hong, Z., Bai, X., and Fang, J. 2018. "Detecting building edges from high spatial resolution remote sensing imagery using richer convolution features network." Remote Sensing, Vol. 10 (No. 9): pp. 1496. doi:10.3390/rs10091496.
  • Luo, Z., Mishra, A., Achkar, A., Eichel, J., Li, S., and Jodoin, P.-M. 2017. “Non-local deep features for salient object detection.” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 6593–6601. Honolulu, HI: IEEE. doi:10.1109/CVPR.2017.698.
  • Ma, X., Liu, S., Hu, S., Geng, P., Liu, M., and Zhao, J. 2018. "SAR image edge detection via sparse representation." Soft Computing, Vol. 22 (No. 8): pp. 2507–2515. doi:10.1007/s00500-017-2505-y.
  • Maggiori, E., Tarabalka, Y., Charpiat, G., and Alliez, P. 2017. “Can semantic labeling methods generalize to any city? The Inria aerial image labeling benchmark.” 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 3226–29. Fort Worth, TX: IEEE. doi:10.1109/IGARSS.2017.8127684.
  • Marcu, A., and Leordeanu, M. 2016. “Dual local-global contextual pathways for recognition in aerial imagery.” arXiv. http://arxiv.org/abs/1605.05462.
  • Min, D., Zhang, C., Lu, Y., Fu, K., and Zhao, Q. 2022. “Mutual-guidance transformer-embedding network for video salient object detection.” IEEE Signal Processing Letters, Vol. 29: pp. 1674–1678. doi:10.1109/LSP.2022.3192753.
  • Partovi, T., Bahmanyar, R., Kraus, T., and Reinartz, P. 2017. “Building outline extraction using a heuristic approach based on generalization of line segments.” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 10 (No. 3): pp. 933–947. doi:10.1109/JSTARS.2016.2611861.
  • Pei, J., Cheng, T., Tang, H., and Chen, C. 2023. “Transformer-based efficient salient instance segmentation networks with orientative query.” IEEE Transactions on Multimedia, Vol. 25: pp. 1964–1978. doi:10.1109/TMM.2022.3141891.
  • Peng, J., Zhang, D., and Liu, Y. 2005. “An improved snake model for building detection from urban aerial images.” Pattern Recognition Letters, Vol. 26 (No. 5): pp. 587–595. doi:10.1016/j.patrec.2004.09.033.
  • Pirzada, S.J.H., and Siddiqui, A. 2013. "Analysis of edge detection algorithms for feature extraction in satellite images." 2013 IEEE International Conference on Space Science and Communication (IconSpace), 238–42. Melaka, Malaysia: IEEE. doi:10.1109/IconSpace.2013.6599472.
  • Qin, X., Zhang, Z., Huang, C., Gao, C., Dehghan, M., and Jagersand, M. 2019. “BASNet: Boundary-Aware Salient Object Detection.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 7471–81. Long Beach, CA, USA: IEEE. doi:10.1109/CVPR.2019.00766.
  • Ren, X. 2008. “Multiscale improves boundary detection in natural images.” In Computer Vision – ECCV 2008: Lecture Notes in Computer Science, edited by D. Forsyth, P. Torr, and A. Zisserman, vol. 5304, 533–545. Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-540-88690-7_40.
  • Shan, J., Zhou, S., Cui, Y., and Fang, Z. 2023. “Real-time 3d single object tracking with transformer.” IEEE Transactions on Multimedia, Vol. 25: pp. 2339–2353. doi:10.1109/TMM.2022.3146714.
  • Shelhamer, E., Long, J., and Darrell, T. 2017. “Fully convolutional networks for semantic segmentation.” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 39 (No. 4): pp. 640–651. doi:10.1109/TPAMI.2016.2572683.
  • Simonyan, K., and Zisserman, A. 2015. “Very deep convolutional networks for large-scale image recognition.” arXiv. http://arxiv.org/abs/1409.1556.
  • Soria, X., Riba, E., and Sappa, A.D. 2020. "Dense extreme inception network: Towards a robust CNN model for edge detection." arXiv. http://arxiv.org/abs/1909.01955.
  • Strudel, R., Garcia, R., Laptev, I., and Schmid, C. 2021. “Segmenter: Transformer for Semantic Segmentation.” arXiv. http://arxiv.org/abs/2105.05633.
  • Su, N., Yan, Y., Qiu, M., Zhao, C., and Wang, L. 2018. “Object-based dense matching method for maintaining structure characteristics of linear buildings.” Sensors, Vol. 18 (No. 4): pp. 1035. doi:10.3390/s18041035.
  • Sun, L., Tang, Y., and Zhang, L. 2017. “Rural building detection in high-resolution imagery based on a two-stage CNN Model.” IEEE Geoscience and Remote Sensing Letters, Vol. 14 (No. 11): pp. 1998–2002. doi:10.1109/LGRS.2017.2745900.
  • Marr, D., and Hildreth, E. 1980. "Theory of edge detection." Proceedings of the Royal Society of London. Series B, Biological Sciences, Vol. 207: pp. 187–217. doi:10.1098/rspb.1980.0020.
  • Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., and Jégou, H. 2021. “Training Data-Efficient Image Transformers & Distillation through Attention.” arXiv. http://arxiv.org/abs/2012.12877.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. 2023. “Attention is all you need.” arXiv. http://arxiv.org/abs/1706.03762.
  • Wang, T., Borji, A., Zhang, L., Zhang, P., and Lu, H. 2017. "A stagewise refinement model for detecting salient objects in images." 2017 IEEE International Conference on Computer Vision (ICCV), 4039–48. Venice: IEEE. doi:10.1109/ICCV.2017.433.
  • Wang, T., Zhang, L., Wang, S., Lu, H., Yang, G., Ruan, X., and Borji, A. 2018. “Detect Globally, Refine Locally: A Novel Approach to Saliency Detection.” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3127–35. Salt Lake City, UT: IEEE. doi:10.1109/CVPR.2018.00330.
  • Wang, W., Zhao, S., Shen, J., Hoi, S.C.H., and Borji, A. 2019. “Salient object detection with pyramid attention and salient edges.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 1448–57. Long Beach, CA, USA: IEEE. doi:10.1109/CVPR.2019.00154.
  • Wang, X., Chen, S., Wei, G., and Liu, J. 2023. “TENet: Accurate light-field salient object detection with a transformer embedding network.” Image and Vision Computing, Vol. 129: pp. 104595. doi:10.1016/j.imavis.2022.104595.
  • Wang, Y., Jia, X., Zhang, L., Li, Y., Elder, J.H., and Lu, H. 2023. “A uniform transformer-based structure for feature fusion and enhancement for RGB-D saliency detection.” Pattern Recognition, Vol. 140: pp. 109516. doi:10.1016/j.patcog.2023.109516.
  • Wang, Z., Zhang, Y., Liu, Y., Wang, Z., Coleman, S., and Kerr, D. 2022. “TF-SOD: A novel transformer framework for salient object detection.” Neural Computing and Applications, Vol. 34 (No. 14): pp. 11789–11806. doi:10.1007/s00521-022-07069-9.
  • Shen, W., Wang, X., Wang, Y., Bai, X., and Zhang, Z. 2015. “DeepContour: A deep convolutional feature learned by positive-sharing loss for contour detection.” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3982–91. Boston, MA, USA: IEEE. doi:10.1109/CVPR.2015.7299024.
  • Wen, X., Li, X., Zhang, C., Han, W., Li, E., Liu, W., and Zhang, L. 2021. "ME-Net: A multiscale erosion network for crisp building edge detection from very high resolution remote sensing imagery." Remote Sensing, Vol. 13 (No. 19): pp. 3826. doi:10.3390/rs13193826.
  • Gao, W., Zhang, X., Yang, L., and Liu, H. 2010. "An improved Sobel edge detection." 2010 3rd International Conference on Computer Science and Information Technology, 67–71. Chengdu, China: IEEE. doi:10.1109/ICCSIT.2010.5563693.
  • Wu, R., Feng, M., Guan, W., Wang, D., Lu, H., and Ding, E. 2019. “A mutual learning method for salient object detection with intertwined multi-supervision.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8142–51. Long Beach, CA, USA: IEEE. doi:10.1109/CVPR.2019.00834.
  • Wu, Z., Su, L., and Huang, Q. 2019. “Cascaded partial decoder for fast and accurate salient object detection.” arXiv. http://arxiv.org/abs/1904.08739.
  • Xia, L., Zhang, X., Zhang, J., Yang, H., and Chen, T. 2021. “Building extraction from very-high-resolution remote sensing images using semi-supervised semantic edge.” Remote Sensing, Vol. 13 (No. 11): pp. 2187. doi:10.3390/rs13112187.
  • Xie, S., and Tu, Z. 2015. "Holistically-nested edge detection." arXiv. http://arxiv.org/abs/1504.06375.
  • Yan, X., Tang, H., Sun, S., Ma, H., Kong, D., and Xie, X. 2022. “AFTer-UNet: Axial Fusion Transformer UNet for Medical Image Segmentation.” 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 3270–80. Waikoloa, HI, USA: IEEE. doi:10.1109/WACV51458.2022.00333.
  • Yu, Z., Feng, C., Liu, M.-Y., and Ramalingam, S. 2017. "CASENet: Deep category-aware semantic edge detection." arXiv. http://arxiv.org/abs/1705.09759.
  • Zhang, K., Guo, Y., Wang, X., Yuan, J., Ma, Z., and Zhao, Z. 2019. “Channel-wise and feature-points reweights densenet for image classification.” 2019 IEEE International Conference on Image Processing (ICIP), 410–14. Taipei, Taiwan: IEEE. doi:10.1109/ICIP.2019.8802982.
  • Zhang, X., Wang, T., Qi, J., Lu, H., and Wang, G. 2018. “Progressive attention guided recurrent network for salient object detection.” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 714–22. Salt Lake City, UT: IEEE. doi:10.1109/CVPR.2018.00081.
  • Zhao, T., and Wu, X. 2019. “Pyramid feature attention network for saliency detection.” arXiv. http://arxiv.org/abs/1903.00179.
  • Zhao, Y., Sun, G., Zhang, L., Zhang, A., Jia, X., and Han, Z. 2023. “MSRF-Net: Multiscale receptive field network for building detection from remote sensing images.” IEEE Transactions on Geoscience and Remote Sensing, Vol. 61: pp. 1–14. doi:10.1109/TGRS.2023.3282926.
  • Zheng, L., Wang, S., Liu, Z., and Tian, Q. 2015. “Fast image retrieval: Query pruning and early termination.” IEEE Transactions on Multimedia, Vol. 17 (No. 5): pp. 648–659. doi:10.1109/TMM.2015.2408563.