Individuals who are deaf or have speech disabilities often face difficulty communicating with people who do not know any form of sign language, so a system that enables communication through sign gestures is essential for socializing and for accessible communication. To that end, a real-time sign language interpretation system has been developed that leverages current embedded technology and advances in machine learning to bridge the communication gap between individuals with hearing loss and non-sign language users. A camera module integrated with a microcontroller captures the user's hand movements in real time, and a pre-trained quantized MobileNet model interprets the signs, converting them into text or voice output. This provides a practical alternative for individuals who are deaf or have speech impairments when communicating with people who do not understand sign language. By eliminating the need for wearable electronics or external sensors, the solution remains affordable and portable, and its compact design allows it to operate in a variety of environments, such as hospitals, public areas, schools, and government offices. The design also offers real-time processing and strong scalability, allowing future integration with smartphones or other IoT communication platforms.
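The camera-to-model pipeline described above can be illustrated with a minimal sketch. The paper does not specify its implementation details, so this is an assumption-laden illustration: it assumes a quantized MobileNet that takes a 224×224 RGB `uint8` input tensor (typical for quantized MobileNet variants) and shows only the frame preprocessing and label-decoding steps; the hypothetical function names `preprocess_frame` and `decode_prediction` are not from the paper.

```python
import numpy as np

MODEL_INPUT_SIZE = 224  # assumed input resolution for the quantized MobileNet


def preprocess_frame(frame: np.ndarray, size: int = MODEL_INPUT_SIZE) -> np.ndarray:
    """Resize a camera frame (H, W, 3) of dtype uint8 to the model's
    expected input tensor (1, size, size, 3), also uint8 for a quantized
    model, using simple nearest-neighbor sampling."""
    h, w, _ = frame.shape
    rows = np.arange(size) * h // size   # source row index for each output row
    cols = np.arange(size) * w // size   # source column index for each output column
    resized = frame[rows[:, None], cols[None, :], :]
    return resized[np.newaxis, ...].astype(np.uint8)


def decode_prediction(scores: np.ndarray, labels: list[str]) -> str:
    """Map the model's output scores to the most likely sign label (argmax)."""
    return labels[int(np.argmax(scores))]
```

In the real system, the preprocessed tensor would be passed to an inference engine (for example, a TensorFlow Lite interpreter's `set_tensor`/`invoke`/`get_tensor` cycle) and the resulting scores decoded into text, which can then be synthesized to voice.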
Published in: International Journal for Research in Applied Science and Engineering Technology
Volume 14, Issue 3, pp. 3838-3843