A HYBRID YOLO-BASED ASSISTIVE VISION SYSTEM FOR VISUALLY IMPAIRED USERS: REAL-TIME OBJECT DETECTION AND VOICE FEEDBACK
Ranganatha Sarma
Dr. Abuzar Ansari
Abstract
Visual impairment affects over 2.2 billion people worldwide, severely restricting independent living, particularly in indoor environments where object recognition and navigation remain major challenges. Traditional aids such as white canes offer limited semantic information, while commercial solutions like Microsoft Seeing AI and OrCam MyEye are often costly and may require internet connectivity. Recent advances in deep learning, especially single-stage object detectors like YOLO, have enabled real-time, on-device assistive technologies. This paper presents a hybrid assistive vision system that integrates a pretrained YOLOv11n detection model (for persons and general objects) with a custom fine-tuned YOLOv11n-cls classification model focused on five common indoor furniture items prevalent in Indian households: chair, television, refrigerator, table, and almirah (wardrobe). A dataset of 15,000 diverse images was curated under varied lighting, angles, and indoor conditions. The custom model achieved 99.61% accuracy, precision, recall, and F1-score on a held-out test set of 2,045 images.
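The paper does not publish an implementation, but the two-stage pipeline the abstract describes can be sketched with the Ultralytics YOLO API and an offline text-to-speech engine. In the sketch below, "indoor_cls.pt" is a hypothetical filename for the fine-tuned YOLOv11n-cls weights, the 0.8 confidence threshold is an illustrative choice, and routing detector crops through the classifier is an assumption about how the two models are combined.

```python
# Minimal sketch of the hybrid detection + classification + voice pipeline.
# Assumed names: "indoor_cls.pt" (hypothetical path to the fine-tuned
# YOLOv11n-cls weights) and the 0.8 confidence threshold.
import cv2
import pyttsx3
from ultralytics import YOLO

detector = YOLO("yolo11n.pt")       # pretrained COCO detector (persons, general objects)
classifier = YOLO("indoor_cls.pt")  # hypothetical fine-tuned YOLOv11n-cls weights
tts = pyttsx3.init()                # offline text-to-speech engine

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Stage 1: detect persons and general objects in the full frame.
    result = detector(frame, verbose=False)[0]
    announcements = []
    for box in result.boxes:
        label = detector.names[int(box.cls)]
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        crop = frame[y1:y2, x1:x2]
        if crop.size == 0:
            continue

        # Stage 2 (assumed wiring): refine each crop with the custom
        # classifier covering chair, television, refrigerator, table, almirah.
        cls_result = classifier(crop, verbose=False)[0]
        if float(cls_result.probs.top1conf) > 0.8:
            label = cls_result.names[cls_result.probs.top1]
        announcements.append(label)

    # Voice feedback: speak the unique labels found in this frame.
    if announcements:
        tts.say(", ".join(sorted(set(announcements))))
        tts.runAndWait()

cap.release()
```

For brevity this sketch announces on every frame; a deployed system would debounce repeated labels, and since the abstract emphasizes that internet-dependent tools are a limitation, both models would run entirely on-device.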
How to Cite this Paper
Sarma, R. (2026). A Hybrid YOLO-Based Assistive Vision System for Visually Impaired Users: Real-Time Object Detection and Voice Feedback. International Journal of Creative and Open Research in Engineering and Management, 2(4). https://doi.org/10.55041/ijcope.v2i4.444
Sarma, Ranganatha. "A Hybrid YOLO-Based Assistive Vision System for Visually Impaired users: Real-Time Object Detection and Voice Feedback." International Journal of Creative and Open Research in Engineering and Management, vol. 02, no. 04, 2026, pp. . doi:https://doi.org/10.55041/ijcope.v2i4.444.
Sarma, Ranganatha. "A Hybrid YOLO-Based Assistive Vision System for Visually Impaired users: Real-Time Object Detection and Voice Feedback." International Journal of Creative and Open Research in Engineering and Management 02, no. 04 (2026). https://doi.org/https://doi.org/10.55041/ijcope.v2i4.444.
References
- S. Davanthapuram, X. Yu, and J. Saniie, "Visually Impaired Indoor Navigation using YOLO Based Object Recognition, Monocular Depth Estimation and Binaural Sounds," in Proc. IEEE Int. Conf. Electro Information Technology (EIT), 2021, pp. 173–177.
- Wang et al., “YOLO-OD: Obstacle Detection for Visually Impaired Navigation Assistance,” Sensors, vol. 24, no. 23, Art. no. 7621, 2024, doi: 10.3390/s24237621.
- Noor et al., "Towards a Real-Time Indoor Object Detection for Visually Impaired Users Using Raspberry Pi 4 and YOLOv11: A Feasibility Study," Microprocess. Microsyst., 2025.
- Obayya et al., "An intelligent framework for visually impaired people through indoor object detection-based assistive system using YOLO with recurrent neural networks," Sci. Rep., vol. 15, Art. no. 43720, Dec. 2025, doi: 10.1038/s41598-025-27603-8.
- Hingnekar et al., "Netra AI: Real-Time AI-Powered Navigational Assistance for Visually Impaired Individuals Using Optimized YOLOv11 Architecture," TechRxiv, 2025.
- Yu et al., “Visual Impairment Spatial Awareness System for Indoor Activities,” J. Imaging, vol. 11, no. 1, Art. no. 9, 2025, doi: 10.3390/jimaging11010009.
- Microsoft, “Seeing AI – Talking Camera App for the Blind,” [Online]. Available: https://www.microsoft.com/en-us/ai/seeing-ai
Ethical Compliance & Review Process
- All submissions are screened under plagiarism detection.
- Review follows editorial policy.
- Authors retain copyright.
- Peer Review Type: Double-Blind Peer Review
- Published on: Apr 17 2026
This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. You are free to share and adapt this work for non-commercial purposes with proper attribution.

