Research Article

Global road extraction using a pseudo-label guided framework: from benchmark dataset to cross-region semi-supervised learning

Received 16 Jan 2024, Accepted 28 May 2024, Published online: 21 Jun 2024

Figures & data

Figure 1. Examples from the five road datasets (512 × 512 pixels).

Table 1. Details of the five road datasets.

Figure 2. Geographical distribution of the GRSet dataset.

Figure 3. Annotation examples of the GRSet.

Figure 4. The flowchart of the proposed pseudo-label guided global-scale road extraction framework, which is composed of two parts: (a) GRSet generation, and (b) pseudo-label guided global-scale road extraction network.

Figure 5. Visualizations of the weak and strong data augmentations.

Figure 6. Geographical locations of the validation sets, which cover four continents: Europe, Africa, Asia, and North America.

Table 2. The details of the validation set of global road networks.

Figure 7. Illustration of the sampling of the GRSet.

Table 3. Initialization details of the comparison methods.

Figure 8. Quantitative IoU comparison of models trained on the different road datasets.

Figure 9. The visual outputs for the Birmingham image produced by LinkNet50_IN with different training sets.

Figure 10. The histogram illustrates the quantitative results obtained on the SpaceNet and DeepGlobe datasets.

Figure 11. The visualization results of the different road datasets, where the first and second rows represent the results obtained on the SpaceNet and DeepGlobe datasets, respectively.

Figure 12. The histogram illustrates the quantitative results obtained on the large-scale images.

Figure 13. The local visualization results of the large-scale images.

Figure 14. The histogram illustrates the quantitative results obtained on the LoveDA dataset.

Figure 15. The visual results for the urban and rural scenes obtained on the LoveDA road validation set.

Figure 16. The histogram illustrates the quantitative results obtained on the Birmingham image.

Figure 17. The visual results obtained on the Birmingham image.

Figure 18. The IoU variation with different values of μ.

Data availability statement

The data that support the findings of this study are available at https://github.com/xiaoyan07/GRNet_GRSet from the author [X. Lu] and the corresponding author [Y. Zhong] ([email protected]), upon reasonable request.