References
- A. Tyagi and S. Bansal, “Feature extraction technique for vision-based Indian sign language recognition system: A review,” Comput. Meth. Data Eng., Vol. 1227, pp. 39–53, 2021.
- T. Grzejszczak, M. Kawulok, and A. Galuszka, “Hand landmarks detection and localization in color images,” Multimed. Tools Appl., Vol. 75, no. 23, pp. 16363–87, 2016.
- H. Ansar, A. Jalal, M. Gochoo, and K. Kim, “Hand gesture recognition based on auto-landmark localization and reweighted genetic algorithm for healthcare muscle activities,” Sustainability, Vol. 13, no. 5, p. 2961, 2021.
- A. Tyagi and S. Bansal, “Sign language recognition using hand mark analysis for vision-based system (HMASL),” in Emergent Converging Technologies and Biomedical Systems, N. Marriwala, C. C. Tripathi, S. Jain, and S. Mathapathi, Eds., Lecture Notes in Electrical Engineering, Singapore: Springer, 2022, pp. 431–45.
- I. A. Adeyanju, O. O. Bello, and M. A. Adegboye, “Machine learning methods for sign language recognition: A critical review and analysis,” Intell. Sys. Appl., Vol. 12, p. 200056, 2021.
- A. Tyagi, S. Bansal, and A. Kashyap, “Comparative analysis of feature detection and extraction techniques for vision-based ISLR system,” in 2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC), IEEE, Nov. 2020, pp. 515–20.
- E. B. Candrasari, L. Novamizanti, and S. Aulia, “Discrete wavelet transform on static hand gesture recognition,” J. Phys.: Conf. Ser., Vol. 1367, no. 1, p. 012022, Nov. 2019.
- A. Jalal, N. Khalid, and K. Kim, “Automatic recognition of human interaction via hybrid descriptors and maximum entropy Markov model using depth sensors,” Entropy, Vol. 22, no. 8, p. 817, 2020.
- A. K. Sahoo, P. K. Sarangi, and R. Gupta, “Indian sign language recognition using a novel feature extraction technique,” in Soft Computing: Theories and Applications, T. K. Sharma, C. W. Ahn, O. P. Verma, and B. K. Panigrahi, Eds., Vol. 1380, Singapore: Springer, 2022, pp. 299–310.
- S. Albanie, G. Varol, L. Momeni, T. Afouras, J. S. Chung, N. Fox, and A. Zisserman, “BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues,” in European Conference on Computer Vision, Cham: Springer, Aug. 2020, pp. 35–53.
- D. Li, C. Rodriguez, X. Yu, and H. Li, “Word-level deep sign language recognition from video: A new large-scale dataset and methods comparison,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2020, pp. 1459–69.
- A. Dudhal, H. Mathkar, A. Jain, O. Kadam, and M. Shirole, “Hybrid SIFT feature extraction approach for Indian sign language recognition system based on CNN,” in Proceedings of the International Conference on ISMAC in Computational Vision and Bio-Engineering 2018 (ISMAC-CVB), D. Pandian, X. Fernando, Z. Baig, and F. Shi, Eds., Cham: Springer, 2018, Vol. 30, pp. 727–38.
- A. Wadhawan and P. Kumar, “Deep learning-based sign language recognition system for static signs,” Neural Comput. Appl., Vol. 32, pp. 7957–7968, 2020.
- I. Mahmud, T. Tabassum, M. P. Uddin, E. Ali, A. M. Nitu, and M. I. Afjal, “Efficient noise reduction and HOG feature extraction for sign language recognition,” in 2018 International Conference on Advancement in Electrical and Electronic Engineering (ICAEEE), IEEE, Nov. 2018, pp. 1–4.
- V. Adithya and R. Rajesh, “A deep convolutional neural network approach for static hand gesture recognition,” Procedia Comput. Sci., Vol. 171, pp. 2353–61, 2020.
- O. Mazhar, S. Ramdani, and A. Cherubini, “A deep learning framework for recognizing both static and dynamic gestures,” Sensors, Vol. 21, no. 6, p. 2227, 2021.
- J. Rekha, J. Bhattacharya, and S. Majumder, “Hand gesture recognition for sign language: A new hybrid approach,” in Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV), The Steering Committee of the World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp), 2011, p. 1.
- https://www.u-aizu.ac.jp/labs/is-pp/pplab/swr/sign_word_dataset.zip.
- S. Sharma, R. Gupta, and A. Kumar, “Continuous sign language recognition using isolated signs data and deep transfer learning,” J. Ambient. Intell. Humaniz. Comput., pp. 1–12, 2021.
- R. Gupta, “Multi-input CNN-LSTM for end-to-end Indian sign language recognition: A use case with wearable sensors,” in Challenges and Applications for Hand Gesture Recognition, B. K. Dewangan, L. Kane, and T. Choudhury, Eds., IGI Global, 2022, pp. 156–74.
- S. R. Bansal, S. Wadhawan, and R. Goel, “mRMR-PSO: A hybrid feature selection technique with a multiobjective approach for sign language recognition,” Arab. J. Sci. Eng., Vol. 47, pp. 10365–10380, 2022.
- N. O’Mahony, S. Campbell, A. Carvalho, S. Harapanahalli, G. V. Hernandez, L. Krpalkova, and J. Walsh, “Deep learning vs. traditional computer vision,” in Science and Information Conference, K. Arai and S. Kapoor, Eds., Cham: Springer, Apr. 2019, pp. 128–44.
- M. M. El-Gayar, and H. Soliman, “A comparative study of image low level feature extraction algorithms,” Egypt. Informat. J., Vol. 14, no. 2, pp. 175–81, 2013.
- F. Csóka, J. Polec, T. Csóka, and J. Kačur, “Recognition of sign language from high resolution images using adaptive feature extraction and classification,” Int. J. Electron. Telecommun., Vol. 65, pp. 303–308, 2019.
- A. Tyagi and S. Bansal, “Hybrid FAST-SIFT-CNN (HFSC) approach for vision-based Indian sign language recognition,” Int. J. Comp. Dig. Sys., Vol. 11, pp. 1217–27, 2021.
- A. Tyagi and S. Bansal, “Hybrid FiST_CNN approach for feature extraction for vision-based Indian sign language recognition,” Int. Arab J. Inf. Technol., Vol. 19, no. 3, pp. 403–11, 2022.
- F. Chen Chen, S. Appendino, A. Battezzato, A. Favetto, M. Mousavi, and F. Pescarmona, “Constraint study for a hand exoskeleton: Human hand kinematics and dynamics,” J. Robot., Vol. 2013, pp. 158–174, 2013.
- https://drive.google.com/drive/folders/1keWr7X8aR4YMotY2m8SlEHlyruDDdVi
- https://drive.google.com/drive/folders/0B6iDOaIw70SceUFWQ0NoREVIUTA?resourcekey=0-fjQdHEkRhuPpOnlbICy2bg&usp=sharing
- https://drive.google.com/drive/folders/1mHmmmSaU5ZV8QKIUSCF0fabVv54HhxWq?usp=sharing
- Z. A. Ansari and G. Harit, “Nearest neighbour classification of Indian sign language gestures using Kinect camera,” Sādhanā, Vol. 41, no. 2, pp. 161–82, Feb. 2016.