Computers and Computing

Partially Visible Lane Detection with Hierarchical Supervision Approach


