Research Article

A Visual Navigation System for UAV under Diverse Illumination Conditions

Pages 1529-1549 | Received 17 Mar 2021, Accepted 22 Sep 2021, Published online: 29 Sep 2021

