Visual impairment affects approximately 2.2 billion people globally, with at least 1 billion cases preventable or unaddressed. This paper presents the comprehensive architectural design, implementation, and evaluation of Assistive Vision, a production-ready mobile application engineered to assist visually impaired users in navigating physical environments and accessing textual information. Built on the Flutter framework, the system leverages Google ML Kit for privacy-preserving, on-device machine learning inference, integrating real-time object detection and optical character recognition (OCR) into a coherent voice-first interaction paradigm. Unlike cloud-dependent solutions, Assistive Vision executes all processing locally, ensuring low latency, user privacy, and functionality in offline environments. The application implements 12 voice commands, priority-based Text-to-Speech (TTS) queuing, GPS-based location services with reverse geocoding, and an emergency SOS system. The architecture follows a service-oriented pattern with intelligent frame throttling and duplicate-detection mechanisms optimized for battery efficiency. Performance evaluation on mid-range Android devices demonstrates average inference latency of 200–800 ms, TTS latency under 200 ms, and a memory footprint of approximately 150 MB. Usability testing with visually impaired volunteers achieved an average satisfaction score of 4.4 out of 5, with OCR accuracy of 97.2% for high-contrast printed text and object detection rates of 89.5% in well-lit indoor environments. The system provides approximately 7–8 hours of continuous operation on a single battery charge, making it viable for full-day deployment in real-world assistive scenarios.

Index Terms—Assistive Technology, Computer Vision, On-Device Machine Learning, OCR, Object Detection, Flutter, Edge Computing.
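The frame throttling and duplicate detection mentioned in the abstract can be illustrated with a minimal sketch. The idea is twofold: process at most one camera frame per throttle interval, and skip announcing detection results identical to the previous ones. The class and parameter names below are illustrative assumptions, not identifiers from the Assistive Vision codebase.

```python
class FrameThrottler:
    """Sketch of frame throttling plus duplicate suppression for a
    camera-based detection pipeline (illustrative, not the app's API)."""

    def __init__(self, min_interval_s: float = 0.5):
        self.min_interval_s = min_interval_s   # assumed throttle interval
        self._last_time = float("-inf")        # timestamp of last processed frame
        self._last_labels = None               # last announced label set

    def should_process(self, now: float) -> bool:
        # Drop frames that arrive faster than the throttle interval,
        # saving inference cycles and battery.
        if now - self._last_time < self.min_interval_s:
            return False
        self._last_time = now
        return True

    def is_duplicate(self, labels) -> bool:
        # Suppress re-announcing the same set of detected objects,
        # so TTS is not flooded with repeated labels.
        labels = frozenset(labels)
        if labels == self._last_labels:
            return True
        self._last_labels = labels
        return False
```

A client would call `should_process` on each camera callback and, for frames that pass, run inference and call `is_duplicate` before queuing a TTS announcement.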
Published in: International Scientific Journal of Engineering and Management
Volume 05, Issue 03, pp. 1-9
DOI: 10.55041/isjem05798