Sign language plays a vital role in enabling communication between hearing-impaired individuals and the rest of society. Existing sign language recognition systems depend mainly on traditional image processing and basic machine learning techniques, which struggle with variations in hand gestures, lighting conditions, and complex backgrounds. These approaches often require manual feature extraction and offer limited accuracy and scalability. To address these limitations, a deep learning-based sign language detection system is proposed that automatically recognizes hand gestures and converts them into text. The proposed system employs convolutional neural network models, including MobileNetV2, ResNet50, and EfficientNet-B0, trained on an American Sign Language dataset. Data preprocessing and augmentation techniques are applied to improve model robustness. The system is implemented using the TensorFlow framework with convolutional, pooling, dense, and softmax layers. Experimental results show that EfficientNet-B0 achieves the highest accuracy among the three models. Overall, the proposed system offers improved performance, real-time applicability, and enhanced accessibility for sign language communication.
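The architecture described above (a pretrained convolutional backbone followed by pooling, dense, and softmax layers) can be sketched in TensorFlow/Keras roughly as follows. This is a minimal illustration, not the authors' exact implementation: the class count (29, a common choice for ASL alphabet datasets covering 26 letters plus space, delete, and nothing), the input size, and the dense-layer width are assumptions, and `weights=None` is used here so the sketch runs offline (a real system would typically use ImageNet pretraining).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical class count: 26 letters + space, delete, nothing
# (common in public ASL alphabet datasets; the paper's count may differ).
NUM_CLASSES = 29

# MobileNetV2 backbone, one of the three models compared in the paper.
# weights=None keeps the sketch self-contained; ImageNet weights would
# normally be used for transfer learning.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None
)

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    # Scale pixel values to [-1, 1], as MobileNetV2 expects.
    layers.Rescaling(1.0 / 127.5, offset=-1.0),
    backbone,                          # convolutional + pooling layers
    layers.GlobalAveragePooling2D(),   # pool feature maps to a vector
    layers.Dense(128, activation="relu"),       # dense layer (width assumed)
    layers.Dense(NUM_CLASSES, activation="softmax"),  # class probabilities
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Swapping `MobileNetV2` for `tf.keras.applications.ResNet50` or `tf.keras.applications.EfficientNetB0` (with that backbone's own preprocessing) reproduces the other two configurations the abstract compares.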
Published in: International Journal for Research In Science & Advanced Technologies