Hybrid features-enabled dragon deep belief neural network for activity recognition

Pages 355-371 | Received 15 Sep 2017, Accepted 29 May 2018, Published online: 10 Jul 2018
