Research Article

A Visual Navigation System for UAV under Diverse Illumination Conditions

Pages 1529-1549 | Received 17 Mar 2021, Accepted 22 Sep 2021, Published online: 29 Sep 2021

Figures & data

Figure 1. Comparison of matching results and detection results between low-light image and the enhanced image obtained by our method.


Figure 2. The proposed low-light image enhancement pipeline. The Decom-Net decomposes the input image into an illumination map and a reflectance map, and the Enhance-Net brightens the illumination map. The reflectance map and illumination map of the low-light image serve as the input to Enhance-Net. The decompositions of normal-light images do not participate in the Enhance-Net training stage.

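The pipeline in Figure 2 follows the Retinex model, in which an image S is the element-wise product of a reflectance map R and an illumination map I. Under that assumption, a minimal sketch of the recomposition step (combining the reflectance map with the brightened illumination map) might look like the following; this is an illustrative reconstruction, not the authors' code, and the toy values are hypothetical:

```python
import numpy as np

def recompose(reflectance: np.ndarray, illumination: np.ndarray) -> np.ndarray:
    """Recombine a reflectance map R and a (brightened) illumination map I
    into an enhanced image via the Retinex model S = R * I (element-wise)."""
    # Broadcast a single-channel illumination map over an RGB reflectance map.
    if illumination.ndim == 2 and reflectance.ndim == 3:
        illumination = illumination[..., np.newaxis]
    return np.clip(reflectance * illumination, 0.0, 1.0)

# Toy example: uniform reflectance, dim illumination vs. a brightened one.
R = np.full((4, 4, 3), 0.8)   # reflectance in [0, 1]
I_low = np.full((4, 4), 0.1)  # dark illumination map
I_enh = np.full((4, 4), 0.6)  # hypothetical brightened illumination map
dark = recompose(R, I_low)    # 0.8 * 0.1 = 0.08 everywhere (dark image)
bright = recompose(R, I_enh)  # 0.8 * 0.6 = 0.48 everywhere (enhanced image)
```

Only the illumination map is modified; the reflectance map, which carries the scene content, is kept fixed, which is why the enhanced image preserves the features needed for matching.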

Figure 3. The network architecture of Decom-Net.


Figure 4. The network architecture of Enhance-Net.


Figure 5. Example of the low-light image decomposition result: (a) the input image; (b) the reflectance map generated by Decom-Net; (c) the illumination map generated by Enhance-Net.


Figure 6. Example of synthetic low-light image.


Figure 7. The Laplacian gradient function values of images under different illumination conditions.

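The Laplacian gradient function referenced in Figure 7 is a standard sharpness measure: the image is convolved with a Laplacian kernel and the squared responses are averaged, so richer texture yields a higher score. A minimal sketch, assuming the common 4-neighbour kernel (the paper's exact kernel and normalisation may differ):

```python
import numpy as np

def laplacian_gradient(gray: np.ndarray) -> float:
    """Sharpness score: mean squared response of the 4-neighbour Laplacian
    over the interior pixels. Higher values indicate more texture/edges."""
    k = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    # Correlate with the 3x3 kernel via shifted slices (valid region only).
    for i in range(3):
        for j in range(3):
            out += k[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return float(np.mean(out ** 2))

# A flat (textureless) patch scores 0; a patch with an edge scores higher.
flat = np.ones((8, 8))
edge = np.zeros((8, 8))
edge[:, 4:] = 1.0
```

This is why the measure separates well-exposed images from under-exposed ones: low-light images lose local contrast, which suppresses the Laplacian response.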

Figure 8. Visual navigation pipeline.


Table 1. PSNR/SSIM values on the synthetic test images. Note that red, blue, and green in the table denote the best, second-best, and third-best results, respectively
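For reference, the PSNR values reported in Table 1 follow the standard definition, 10 log10(peak² / MSE). A minimal sketch (the data range and any channel averaging used by the authors are assumptions here):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a uniform error of 0.1 gives MSE = 0.01, i.e. 20 dB.
ref = np.full((4, 4), 0.5)
noisy = ref + 0.1
score = psnr(ref, noisy)  # 20.0 dB
```

SSIM additionally compares local luminance, contrast, and structure, which is why the two metrics are reported together: PSNR tracks pixel-wise error while SSIM tracks perceptual similarity.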

Figure 9. Visual comparison of the synthetic test images. Please zoom in for a better view.


Table 2. Comparison of matching results on the synthetic test images

Figure 10. Experimental results of visual navigation.


Table 3. Self-localization accuracy and average matching points of the autonomous driving experiment

Figure 11. Experimental results of autonomous driving.

