A method for improved pedestrian gesture recognition in self-driving cars

