Automatic Indian Sign Language Recognition for Continuous Video Sequence
Abstract
Sign language recognition has become an active area of research in recent years. This paper describes a novel approach to a system that automatically recognizes the different alphabets of Indian Sign Language in video sequences. The proposed system comprises four major modules: Data Acquisition, Pre-processing, Feature Extraction, and Classification. The pre-processing stage involves skin filtering and histogram matching, after which Eigen vector-based feature extraction and an Eigen value-weighted Euclidean distance-based classification technique were applied. Twenty-four different alphabets were considered, and a recognition rate of 96% was obtained.
Keywords: Eigen value, Eigen vector, Euclidean Distance (ED), Human Computer Interaction, Indian Sign Language (ISL), Skin Filtering.
Cite as: Joyeeta Singha, Karen Das, "Automatic Indian Sign Language Recognition for Continuous Video Sequence", ADBU J. Engg. Tech., 2(1) (2015) 0021105 (5pp)
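For readers who want a concrete picture of the pipeline summarized in the abstract, the following is a minimal Python/NumPy sketch of two of its core ideas: a crude HSV-threshold skin filter for pre-processing, and Eigen value-weighted Euclidean distance matching for classification. All function names, threshold values, and the exact weighting scheme are illustrative assumptions, not the authors' implementation, which additionally uses histogram matching and handles continuous video.

import numpy as np

def skin_filter(frame_hsv, h_range=(0, 25), s_range=(40, 255), v_range=(60, 255)):
    """Crude HSV-threshold skin filter: returns a binary mask of likely skin
    pixels. The threshold values here are illustrative, not the paper's."""
    h, s, v = frame_hsv[..., 0], frame_hsv[..., 1], frame_hsv[..., 2]
    mask = ((h >= h_range[0]) & (h <= h_range[1]) &
            (s >= s_range[0]) & (s <= s_range[1]) &
            (v >= v_range[0]) & (v <= v_range[1]))
    return mask.astype(np.uint8)

def eigen_features(image, k=10):
    """Top-k eigenvalues/eigenvectors of the column covariance of a
    pre-processed (skin-filtered, resized) hand image."""
    img = np.asarray(image, dtype=float)
    cov = np.cov(img, rowvar=False)            # covariance between image columns
    vals, vecs = np.linalg.eigh(cov)           # symmetric eigendecomposition
    order = np.argsort(vals)[::-1][:k]         # keep the k dominant components
    return vals[order], vecs[:, order]

def weighted_distance(vals_a, vecs_a, vals_b, vecs_b):
    """Eigen value-weighted Euclidean distance: the Euclidean distance between
    each pair of eigenvectors is scaled by the mean of their eigenvalues
    (an assumed weighting scheme)."""
    per_vec = np.linalg.norm(vecs_a - vecs_b, axis=0)
    weights = 0.5 * (vals_a + vals_b)
    return float(np.sum(weights * per_vec))

def classify(test_image, templates, k=10):
    """Assign the test frame to the alphabet whose template image gives the
    smallest Eigen value-weighted distance. `templates` maps a letter to a
    representative pre-processed training image of the same size."""
    t_vals, t_vecs = eigen_features(test_image, k)
    best_label, best_score = None, np.inf
    for label, ref_image in templates.items():
        r_vals, r_vecs = eigen_features(ref_image, k)
        score = weighted_distance(t_vals, t_vecs, r_vals, r_vecs)
        if score < best_score:
            best_label, best_score = label, score
    return best_label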