Sunday, December 13, 2020

Mobility of visually impaired people


What is Visual Impairment?

 Visual Impairment can be defined as a condition in which an individual's capacity to see is not normal: for a variety of reasons, the function of the eye becomes limited. Visual Impairment ranges from difficulty seeing near or distant objects to partial or complete blindness. The ability of an individual to see objects clearly is termed visual acuity, and it is a criterion for diagnosing Visual Impairment. Blindness is a form of Visual Impairment in which the visual acuity and visual field of an individual are so severely reduced that the individual is not able to see any object.



Introduction:

Computer vision based assistive technology for the blind and visually impaired is a developing area. Assistive technology gives the visually impaired greater independence by helping them with day-to-day activities such as indoor and outdoor navigation, obstacle detection, and locating doors and lost objects. Even though different assistive technologies are available for the blind, most of them have complex designs, are developed for a specific purpose, and are too expensive for commercial production. Rather than depending on a traditional white cane, blind and visually impaired people can make use of the cheaper assistive device proposed in this blog. The proposed system incorporates several assistance features in a single device, which will be an asset to them according to their needs.

 

How does computer vision help visually impaired people?

Computer vision is a field that deals with acquiring, processing, examining and understanding images. The output is a description, an interpretation or a set of quantitative measurements: the goal is to understand high-dimensional data from the real world and produce numerical or symbolic information on which a decision can be based. Computer vision aims to duplicate the abilities of human vision and overlaps with image analysis, scene analysis, image understanding, robotics, artificial intelligence, computer graphics and pattern recognition. The computations are done by electronically comprehending and apprehending an image. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences, views from multiple cameras, or multi-dimensional data from a medical scanner. As a technological discipline, computer vision seeks to apply its theories and models to the construction of computer vision systems. The World Health Organization estimates that there are around 39 million blind people around the globe.
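As a small illustration of that pipeline, the sketch below (not from the original post) loads an image with OpenCV, processes it, and produces one quantitative measurement that a decision could be based on; the file name and the edge-density measure are assumptions chosen purely for illustration.

```python
# Minimal sketch of the acquire -> process -> measure pipeline described above.
# "scene.jpg" is a hypothetical input image; OpenCV is assumed to be installed.
import cv2

image = cv2.imread("scene.jpg")                          # acquisition
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)           # pre-processing
edges = cv2.Canny(gray, 50, 150)                         # examination

# One simple quantitative output: the fraction of pixels lying on an edge,
# a rough proxy for how cluttered the scene in front of the camera is.
edge_density = (edges > 0).mean()
print(f"Edge density: {edge_density:.3f}")
```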

Thus there is a need for assistive and rehabilitative devices. The most popular aid for the blind is the white cane, often used together with a guide dog, to avoid obstacles. Brain plasticity enables the blind to use their occipital lobe to perceive objects through other sensory modalities, so blind people localize dynamic obstacles through the sense of hearing, but in an unknown environment this tends to be a challenging way to identify objects. Over the years, different commercial applications have been developed; among them the most popular are GPS-powered applications like Mobile Geo, BrailleNote GPS and MoBIC, computer vision based applications like The vOICe, NAVI, ENVS and TVS, and several other prototypical applications. The past few decades have seen tremendous growth in computer hardware, which has led to cheaper, compact, high-performance computers and enables scientists and researchers to create handheld and wearable devices for the assistance of blind and visually impaired people. This blog describes various computer vision based assistive technologies that have been developed for them and proposes a cheaper and efficient system.


Related Work:

SLAM:

Simultaneous localization and mapping (SLAM) is an indoor navigation technique used to estimate the user's position and orientation by matching detected landmarks against corresponding features on a digitized floor map, and then providing verbal instructions to guide the user to the desired destination. The system consists of a head-mounted camera, a microphone and a speaker/earphone.
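A rough sketch of the landmark-matching step is shown below. It is not the authors' implementation, only an illustration of matching features from the head-mounted camera frame against features stored for a known position on the floor map, using OpenCV's ORB descriptors; the file names and the match thresholds are assumed.

```python
# Sketch: recognise a stored map landmark in the current camera frame.
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)     # live view (assumed file)
landmark = cv2.imread("map_landmark.jpg", cv2.IMREAD_GRAYSCALE)  # stored map feature (assumed file)

_, frame_desc = orb.detectAndCompute(frame, None)
_, landmark_desc = orb.detectAndCompute(landmark, None)

matches = matcher.match(frame_desc, landmark_desc)
good = [m for m in matches if m.distance < 40]   # assumed Hamming-distance cutoff

# If enough descriptors agree, the landmark is treated as recognised and its
# known position on the floor map can be used to update the user's location.
if len(good) > 25:
    print("Landmark recognised: update position estimate and speak the next instruction")
else:
    print("Landmark not found in this frame")
```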

The vOICe:

The vOICe vision technology is a sensory substitution system for the totally blind which gives a visual experience of live camera views through image-to-sound renderings. Sensory substitution makes use of the neural plasticity of the human brain, i.e., the brain's ability of cortical remapping, which enhances the other sensory modalities; for example, blind people can use their occipital lobe to comprehend objects through other senses. The vOICe takes a live feed from a head-mounted camera; each video frame is scanned from left to right and converted into soundscapes. The audio mapping associates height with pitch and brightness with loudness. The vOICe requires only a minimal amount of training and effort. EyeMusic and PSVA are sensory substitution systems similar to The vOICe for aiding blind and visually impaired people.
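The column-by-column mapping can be sketched roughly as follows. This is a simplified illustration of the idea (height to pitch, brightness to loudness), not The vOICe's actual code; the frequency range, frame size and column timing are assumed values.

```python
# Sketch: turn a grayscale frame into a left-to-right soundscape.
import numpy as np

def frame_to_soundscape(gray, sample_rate=22050, column_ms=10,
                        f_low=500.0, f_high=5000.0):
    rows, cols = gray.shape
    freqs = np.linspace(f_high, f_low, rows)          # higher pixels -> higher pitch
    n = int(sample_rate * column_ms / 1000)
    t = np.arange(n) / sample_rate
    audio = []
    for c in range(cols):                             # left-to-right scan of the frame
        loudness = gray[:, c].astype(float) / 255.0   # brightness -> loudness
        tones = np.sin(2 * np.pi * np.outer(freqs, t))  # one sine wave per row
        audio.append((loudness[:, None] * tones).sum(axis=0))
    signal = np.concatenate(audio)
    return signal / (np.abs(signal).max() + 1e-9)     # normalise before playback

# Usage: a small random "frame" stands in for a camera image here.
soundscape = frame_to_soundscape(np.random.randint(0, 256, (64, 64), dtype=np.uint8))
```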






Electro-Neural Vision System:

Simon Meers and Koren Ward developed a visual substitution system called the Electro-Neural Vision System (ENVS). The ENVS provides a virtual perception of the three-dimensional profile and colour of the surroundings through electrical pulses. The system comprises a stereo camera which captures images and calculates a disparity depth map indicating the distance to each point of the image. Special gloves with electrodes are used to deliver the electrical pulses to the fingers. A Transcutaneous Electro-Neural Stimulation unit in the ENVS samples the depth values from the computer and converts them into electro-neural pulses, which are then delivered to the fingers through the electrodes in the gloves. The intensity of each pulse varies with the distance sampled in the corresponding depth map region, and the colour of the surroundings is encoded in the pulse frequency. The ENVS enables a blind person to have a virtual experience by creating a mental map of the 3D profile of his/her surroundings.
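A hedged sketch of that disparity-to-pulse idea is given below. The stereo image files, the split into ten finger regions and the intensity scaling are assumptions made for illustration, not details of the actual ENVS hardware.

```python
# Sketch: stereo disparity -> per-finger pulse intensity (nearer obstacle = stronger pulse).
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

regions = np.array_split(disparity, 10, axis=1)   # one vertical strip per finger
pulse_intensity = []
for strip in regions:
    valid = strip[strip > 0]
    # Larger disparity means a closer obstacle, so a stronger pulse.
    nearest = valid.max() if valid.size else 0.0
    pulse_intensity.append(nearest / 64.0)        # normalise to 0..1

print("Per-finger pulse intensities:", [round(p, 2) for p in pulse_intensity])
```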


Clear Path Guidance for the Blind:

Volodymyr Ivanchenko, James Coughlan, William Gerrey and Huiying Shen developed a system to assist the navigation of blind people who use wheelchairs. Travelling is extremely difficult for visually impaired wheelchair riders, as they are unaware of hazards until it is too late, and it is also difficult for them to maintain their orientation while travelling in a straight line. The clear path guidance system informs the user about the terrain using a computer vision based range sensor. Two stereo cameras mounted above the rider's head and connected to a computer analyse the terrain by tracking the traditional white cane used by blind people. The system alerts the user vocally about walls and obstacles in the direction to which the white cane is pointed.
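The following is a simplified sketch of that underlying idea: locate the cane tip in the camera image, then check the depth readings around that point for an obstacle or a drop-off. The bright-marker assumption, the depth source and the alert thresholds are all illustrative, not the authors' actual design.

```python
# Sketch: track the cane tip and check the terrain where it points.
import cv2
import numpy as np

def cane_tip_location(frame_bgr):
    # Assume the cane tip carries a bright marker; take the centroid of the
    # brightest pixels as its image position.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    return (int(xs.mean()), int(ys.mean())) if xs.size else None

def alert_for(depth_m, tip_xy, window=15):
    # Sample the stereo depth map (in metres) around the cane tip and decide
    # which vocal alert, if any, to give.
    x, y = tip_xy
    patch = depth_m[max(y - window, 0):y + window, max(x - window, 0):x + window]
    if patch.size == 0:
        return "no terrain data"
    if patch.min() < 0.8:          # something closer than roughly 0.8 m
        return "obstacle ahead"
    if patch.max() > 3.0:          # sudden far reading: possible drop-off
        return "possible drop-off"
    return "path clear"
```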

Tyflos:

 Tyflos is an electronic travel aid which was initially developed by Dr. Bourbakis; additional features were incorporated into the system later. The Tyflos prototype integrates a portable computer, cameras, GPS sensors, microphones, a text-to-speech converter, a language processor, a 2D vibration vest, a speech synthesizer and an audio recorder. The Tyflos system has a stereo vision module which is attached to conventional eyeglasses. This stereo vision system captures environmental data and processes it; from the data acquired, the system creates a depth map of the 3D environment of the surroundings. Tyflos also has a vibratory vest which is worn by the blind person on his/her abdomen and carries a two-dimensional array of 16 vibrating elements. The depth map is mapped to a tactile vocabulary which the user senses through the vest, allowing obstacles to be located for safe navigation.
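A minimal sketch of that depth-to-tactile mapping is shown below, assuming the depth map is reduced to a 4x4 grid (one cell per element of the 16-element array; the 4x4 layout, maximum range and scaling are assumptions), with nearer obstacles producing stronger vibration.

```python
# Sketch: reduce a depth map to a 4x4 grid of vibration levels for the vest.
import numpy as np

def depth_to_vibration(depth_m, max_range=4.0, grid=(4, 4)):
    levels = np.zeros(grid)
    for i, band in enumerate(np.array_split(depth_m, grid[0], axis=0)):
        for j, cell in enumerate(np.array_split(band, grid[1], axis=1)):
            nearest = cell.min()
            # 0 = nothing nearby, 1 = obstacle right in front of the user.
            levels[i, j] = max(0.0, 1.0 - nearest / max_range)
    return levels

# Usage with a synthetic depth map (metres): one close obstacle on the left.
depth = np.full((120, 160), 4.0)
depth[40:80, 10:50] = 0.7
print(np.round(depth_to_vibration(depth), 2))
```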

 

 

 Virtual White Cane:

 Roberto Manduchi and Dan Yuan developed a laser-based mobility device which makes use of computer vision technology. The handheld device can be used as an alternative to the traditional white cane used by blind people for navigation. The user receives feedback about the surroundings through a tactile interface and audio signals, from which the blind person can build a mental image of the scene. The device scans the surroundings with a laser pointer combined with a digital camera and a computer processor. The spatial information of the surroundings is gathered and analysed as the user moves around. The system produces special sounds for a step, a curb or a drop-off, thus making navigation more comfortable.
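The range measurement behind such a device can be sketched with simple laser-camera triangulation, as below. The baseline, focal length and brightest-pixel detection are illustrative assumptions rather than the authors' actual design.

```python
# Sketch: estimate range to the laser dot by triangulation from its pixel offset.
import cv2

BASELINE_M = 0.10        # assumed vertical offset between laser and camera
FOCAL_PX = 700.0         # assumed camera focal length in pixels

def laser_range(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, _, _, max_loc = cv2.minMaxLoc(gray)          # brightest pixel ~ laser dot
    offset_px = abs(max_loc[1] - frame_bgr.shape[0] / 2)
    if offset_px < 1:
        return None                                  # dot near the horizon: out of range
    # Similar triangles: range = baseline * focal length / pixel offset.
    return BASELINE_M * FOCAL_PX / offset_px

# The range read in the pointing direction can then be turned into a tactile or
# audio cue, e.g. a distinct sound when it drops sharply (a step or curb).
```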




 

FingerReader:

FingerReader is a wearable text reading device to assist blind people and dyslexic readers. FingerReader is an accurate and efficient system which makes use of computer vision technology to scan printed text. The system performs scene and finger detection, tracking the fingertip to localize a horizontal focus region whose size can be adjusted as a parameter. After the text line is extracted, the Tesseract optical character recognition (OCR) engine is used to extract the words. The OCR extracts one word at a time, which is then spoken to the user. A limitation of the technology is that the camera does not autofocus and continuous feedback is needed. Hindsight is similar to the FingerReader and also uses the Tesseract OCR engine to detect text; computer vision algorithms are used for deblurring and stabilizing the image to maximize reading speed.
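A minimal sketch of that flow is given below, assuming pytesseract (a common Python wrapper for Tesseract) and hypothetical fingertip coordinates: it crops a horizontal focus region above the finger, runs OCR, and returns the recognised words one at a time for speech output.

```python
# Sketch: OCR the text line above the tracked fingertip, one word at a time.
import cv2
import pytesseract

def read_line_at_fingertip(frame_bgr, fingertip_xy, region_height=60):
    x, y = fingertip_xy
    top = max(y - region_height, 0)
    focus = frame_bgr[top:y, :]                     # horizontal focus region above the finger
    gray = cv2.cvtColor(focus, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray)
    return text.split()                             # words handed to speech one at a time

# Usage: each word would be passed to a text-to-speech engine as it is extracted.
# for word in read_line_at_fingertip(frame, (320, 400)):
#     speak(word)   # hypothetical TTS call
```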



Applications:

1) Be My Eyes:

This app sends a notification to several volunteers, selected based on language and time zone.

An audio connection then allows the user and the volunteer to solve the task together.

 

2) TapTap See:

This app is designed specifically to help blind or visually impaired users accurately identify everyday objects without the need for sighted assistance. By simply tapping the screen to take a photo, the user will hear the app correctly name the item.

 

3) Ray app:

The Ray app replaces the traditional click interaction of Android devices with touch and directional swipe gestures for easier use, and offers features ranging from voice-operated messaging to online audio books and colour identification.

 

4) Voice Dream Reader:

This app allows the blind or visually impaired to read anything that contains text.

The app offers customizable text and reading options.

 

5) Big browser:

This app allows users to adjust color themes and zoom in on content for an easier read. The app is also equipped with a larger keyboard and controls that are easier to see.

 


