Abstract
Only a very few people have the ability to "see" their surroundings through echoes, an ability known as echolocation. Studying the brain mechanisms of echolocation can not only help improve blind-assistance devices but also provides a window into research on brain plasticity. In this paper, inspired by echolocation, we developed a wearable system that transforms spatial information captured by a camera into a voice description and feeds it back to blind users. After our online virtual-scene training, users can easily discriminate the location of objects in the camera's view, the motion of objects, and even the shape of objects. Compared with natural echolocation, the system is easier to learn and to apply in daily life. In addition, the device achieves high spatial resolution. In this study, two trained blind subjects and two untrained sighted subjects were tested using functional magnetic resonance imaging (fMRI). We obtained fMRI images of the subjects' brain activity while they were listening to the sound of the wearable prototype. Intriguingly, we found that after training with the blind-assistance system, the visual areas of the blind subjects' brains were activated when they processed the acoustic feedback from the device.
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.