An Assistive Model for the Visually Impaired Integrating the Domains of IoT, Blockchain and Deep Learning

Shruti Jadon, Saisamarth Taluri, Sakshi Birthi, Sanjana Mahesh, Sankalp Kumar, Sai Shruthi Shashidhar, Prasad B. Honnavalli
The Internet of Things, blockchain, and deep learning are emerging technologies that have recently gained popularity due to their wide-ranging benefits and applications. Each of the three domains has seen independent success in areas such as automation, agriculture, travel, finance, image recognition, and speech recognition. This paper proposes an efficient, lightweight, and user-friendly solution that leverages these technologies to help visually impaired individuals navigate their surroundings. In the proposed method, a camera lens attached to a Raspberry Pi device captures live video frames of the user’s environment, which are then transmitted to cloud storage. The link to access these images is recorded on a symmetric private blockchain network (with no privileged node), in which all deep learning servers act as nodes. The deep learning model deployed on these servers analyses the video frames to detect objects and feeds the output back to the cloud service. Finally, the user receives audio notifications about obstacles through an earphone plugged into the Raspberry Pi. In particular, when the model runs on a high-performing network with an RTX 3090 GPU, the average obstacle notification time is reported to be under 2 seconds, highlighting the proposed system’s responsiveness and effectiveness in aiding visually impaired individuals.
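The end-to-end flow described above (capture → cloud upload → blockchain link record → detection → audio notification) can be sketched as a minimal pipeline. Everything here is a hypothetical stand-in for the authors' system: the in-memory dictionaries simulate cloud storage and the private ledger, and `detect_obstacles` returns a dummy result in place of the actual deep learning model.

```python
# Hypothetical sketch of the capture -> cloud -> ledger -> detect -> notify loop.
# All names and data structures are illustrative, not the paper's implementation.
import time

CLOUD_STORAGE = {}       # stands in for the cloud object store
BLOCKCHAIN_LEDGER = []   # stands in for the symmetric private blockchain

def capture_frame(frame_id):
    """Simulate grabbing one video frame from the Pi camera."""
    return {"id": frame_id, "pixels": b"..."}

def upload_to_cloud(frame):
    """Store the frame and return an access link to it."""
    link = f"cloud://frames/{frame['id']}"
    CLOUD_STORAGE[link] = frame
    return link

def record_link_on_chain(link):
    """Append the link to the ledger; every DL server node can read it."""
    BLOCKCHAIN_LEDGER.append({"link": link, "ts": time.time()})

def detect_obstacles(link):
    """Placeholder for the deep learning model's object detection."""
    _frame = CLOUD_STORAGE[link]
    return ["chair"]  # dummy detection result

def notify_user(objects):
    """Build the audio message the user would hear via earphones."""
    return "Obstacle ahead: " + ", ".join(objects)

def pipeline_step(frame_id):
    """Run one full iteration of the assistive pipeline."""
    frame = capture_frame(frame_id)
    link = upload_to_cloud(frame)
    record_link_on_chain(link)
    return notify_user(detect_obstacles(link))
```

Running `pipeline_step(1)` walks one frame through all five stages and yields the notification string, mirroring the sub-2-second loop the paper reports on real hardware.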