
Semantic segmentation of high spatial resolution images with deep neural networks

Pages 749-768 | Received 10 Jul 2018, Accepted 23 Dec 2018, Published online: 10 Jan 2019

Figures & data

Figure 1. (a) A residual unit proposed in ResNet-101 (He et al. Citation2016a). (b) An improved residual unit proposed in ResNet-101-v2 (He et al. Citation2016b).
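These residual-unit captions refer to a standard design, so a brief illustration may help: the original unit applies batch normalization and ReLU after each convolution and after the addition, whereas the improved v2 unit moves them before each convolution so the skip path stays a pure identity. Below is a minimal PyTorch sketch of the pre-activation variant; it is not the authors' code, and the fixed channel width is purely illustrative.

import torch
import torch.nn as nn

class PreActResidualUnit(nn.Module):
    # Pre-activation residual unit in the spirit of ResNet-v2:
    # BN -> ReLU -> conv, applied twice, with an identity skip connection.
    def __init__(self, channels):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(torch.relu(self.bn1(x)))
        out = self.conv2(torch.relu(self.bn2(out)))
        return x + out  # no activation after the addition in the pre-activation design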

Figure 2. A building block with a bottleneck design for ResNet-101 (He et al. Citation2016a).
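The bottleneck design can be sketched as follows (a hedged illustration, not the paper's implementation): a 1x1 convolution reduces the channel count, a 3x3 convolution operates on the reduced features, and a second 1x1 convolution restores the width before the shortcut addition. The channel arguments below are placeholders.

import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    # Bottleneck residual block (illustrative): 1x1 reduce -> 3x3 -> 1x1 expand.
    def __init__(self, in_channels, mid_channels, out_channels):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, mid_channels, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(mid_channels)
        self.conv3x3 = nn.Conv2d(mid_channels, mid_channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(mid_channels)
        self.expand = nn.Conv2d(mid_channels, out_channels, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(out_channels)
        # Project the shortcut only when the channel count changes.
        self.shortcut = (nn.Identity() if in_channels == out_channels
                         else nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False))

    def forward(self, x):
        out = torch.relu(self.bn1(self.reduce(x)))
        out = torch.relu(self.bn2(self.conv3x3(out)))
        out = self.bn3(self.expand(out))
        return torch.relu(out + self.shortcut(x))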

Figure 3. Overview of the PSPNet (Zhao et al. Citation2017a).

Figure 4. Pyramid pooling module (Zhao et al. Citation2017a).
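The pyramid pooling module aggregates context at several scales. A minimal sketch follows, assuming the commonly used bin sizes 1, 2, 3 and 6 rather than the exact configuration reported in the paper: each branch average-pools the feature map to a fixed grid, projects it with a 1x1 convolution, upsamples it back to the input resolution, and all branches are concatenated with the input features.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    # Pyramid pooling (illustrative): pool to several bin sizes, project with
    # 1x1 convolutions, upsample, and concatenate with the input feature map.
    def __init__(self, in_channels, bins=(1, 2, 3, 6)):
        super().__init__()
        branch_channels = in_channels // len(bins)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),
                nn.Conv2d(in_channels, branch_channels, kernel_size=1, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for b in bins
        )

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [F.interpolate(branch(x), size=(h, w), mode='bilinear',
                                align_corners=False) for branch in self.branches]
        return torch.cat([x] + pooled, dim=1)  # local features plus multi-scale context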

Figure 5. Overview of our proposed deep neural network.

Table 1. The dimensions of 16 TOP tiles in the training set and 17 TOP tiles in the testing set.

Table 2. Experimental results with different numbers of auxiliary losses. Losses 1, 2 and 3 are shown in .
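Table 2 varies how many auxiliary losses supervise intermediate layers. A common way to combine them, sketched below under the assumption of standard deep supervision with per-pixel cross-entropy (the 0.4 weight is a typical value, not necessarily the one used in this paper), is a weighted sum of auxiliary terms added to the main loss.

import torch.nn.functional as F

def segmentation_loss(main_logits, aux_logits_list, target, aux_weight=0.4):
    # Main per-pixel cross-entropy plus a weighted term for each auxiliary output.
    loss = F.cross_entropy(main_logits, target)
    for aux_logits in aux_logits_list:
        loss = loss + aux_weight * F.cross_entropy(aux_logits, target)
    return loss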

Table 3. Evaluation of results in the Vaihingen dataset using the full reference set.

Table 4. Overall accuracies (OAs) of results in the Vaihingen dataset using the reference set with eroded boundaries.
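Overall accuracy is the fraction of correctly labelled pixels; with the eroded-boundary reference set, pixels near object borders are excluded before counting. A small NumPy sketch is given below; the boolean valid_mask marking non-eroded pixels is an assumed input format, not the benchmark's exact interface.

import numpy as np

def overall_accuracy(prediction, reference, valid_mask=None):
    # Share of correctly labelled pixels, optionally restricted to a mask
    # that excludes eroded boundary pixels from the evaluation.
    if valid_mask is None:
        valid_mask = np.ones(reference.shape, dtype=bool)
    correct = (prediction == reference) & valid_mask
    return correct.sum() / valid_mask.sum()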

Figure 6. Example results of test images in the Vaihingen dataset. (a) The original image, (b) the results of SP-SVL-3, (c) the results of CNN-HAW, (d) the results of CNN-FPL, (e) the results of PSPNet and (f) our results. White: impervious surfaces, Blue: buildings, Cyan: low vegetation, Green: trees, Yellow: cars, Red: clutter/background. (Best viewed in color version).

Table 5. Evaluation of results in the Potsdam dataset using the full reference set.

Table 6. Overall accuracies (OAs) of results in the Potsdam dataset using the reference set with eroded boundaries.

Figure 7. Example results of test images in the Potsdam dataset. (a) The original image, (b) the results of SP-SVL-3, (c) the results of CNN-HAW, (d) the results of CNN-FPL, (e) the results of PSPNet and (f) our results. White: impervious surfaces, Blue: buildings, Cyan: low vegetation, Green: trees, Yellow: cars, Red: clutter/background. (Best viewed in color version).
