A substantial number of students with hearing impairments are enrolled in higher education, motivating the development of inclusive assistive technologies that reduce communication barriers. This study developed and evaluated a prototype electronic glove that translates Mexican Sign Language (LSM) signs into Spanish text using machine learning. Eight participants (four deaf and four hearing, all proficient in LSM) completed four sessions covering 12 signs; three sessions (S1–S3) were used for model development and one session (T) was held out for evaluation. Models were trained on S1–S3 and tested on T using a session-level split with no window mixing across sessions; results therefore represent a speaker-dependent, inter-session pilot assessment rather than a speaker-independent generalization test. The glove integrates flex sensors and an MPU6050 inertial measurement unit (IMU) connected to an ESP32-C3 SuperMini microcontroller. These components were selected for their low cost, availability, and ease of integration, making them suitable for developing accessible wearable assistive technologies. Under this protocol, the system achieved a window-level overall test accuracy of 97.0% (95% CI, computed at the window level: 96.00–97.00%), with higher performance on the dynamic subset (98.0%) than on the static subset (95.0%), and an algorithmic decision delay of 1.2 s. Usability and acceptance were evaluated with the System Usability Scale (SUS) and a Technology Acceptance Model (TAM)-based questionnaire. The mean SUS score was 50.6 ± 1.8 (marginal usability), while participants reported positive perceptions across TAM constructs. Overall, the findings demonstrate technical feasibility under controlled inter-session conditions and provide a foundation for iterative user-centered refinement, to be followed by strict speaker-independent validation and classroom deployment studies in future work.
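The evaluation protocol above (a session-level train/test split with no window mixing, followed by a window-level accuracy estimate with a 95% CI) can be sketched as follows. This is a minimal illustrative example, not the authors' pipeline: the data layout, the 25-windows-per-sign count, the simulated predictions, and the Wilson score interval are all assumptions introduced for the sketch.

```python
import math
import random

random.seed(0)

SIGNS = [f"sign_{i}" for i in range(12)]   # 12 LSM signs (names are placeholders)
SESSIONS = ["S1", "S2", "S3", "T"]         # three development sessions + held-out T

# Synthetic windows: (session, true_label, predicted_label).
# Predictions are simulated at ~97% correctness purely for illustration.
windows = []
for sess in SESSIONS:
    for label in SIGNS:
        for _ in range(25):                # assumed 25 windows per sign per session
            pred = label if random.random() < 0.97 else random.choice(SIGNS)
            windows.append((sess, label, pred))

# Session-level split: no window from the held-out session T enters training.
train = [w for w in windows if w[0] in ("S1", "S2", "S3")]
test = [w for w in windows if w[0] == "T"]

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion (k successes of n)."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Window-level accuracy on the held-out session, with its window-level CI.
correct = sum(1 for _, y, yhat in test if y == yhat)
acc = correct / len(test)
lo, hi = wilson_ci(correct, len(test))
print(f"window-level test accuracy: {acc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

Because the split is at the session level, every window from T is unseen during training; the CI is computed over test windows, matching the abstract's "CI computed at the window level" phrasing.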