
Grey wolf optimization (GWO) with the convolution neural network (CNN)-based pattern recognition system

Pages 238-252 | Received 10 Sep 2021, Accepted 04 Jan 2023, Published online: 18 Jan 2023

