Author: Olena Markaryan
We think that we see with our eyes, but is that a fact?
Dr. Ione Fine, discussing the phenomenon of crossmodal plasticity in her interview, explains that if a person receives no visual input, the part of the brain responsible for processing vision does not simply stop functioning and sit idle. Instead, it begins to be fed auditory and tactile information for analysis. Indeed, in the daily life of sightless people the visual cortex is actively engaged in processing auditory information [0].
This has been confirmed by the functional magnetic resonance imaging (fMRI) observations of Dr. Giulia Dormal, Dr. Olivier Collignon and colleagues, who noted activation of visual cortex regions in response to auditory stimuli in both congenitally blind and late-onset blind individuals [1, 2].
Example spectrogram of a one-second sound generated by The vOICe. Image source: www.seeingwithsound.com
Breaking stereotypes opens up many possibilities.
Thus, if one sensory channel is not available, why not use another one to provide the brain with the ‘food’ it needs for processing? This principle was put to work by Dr. Peter Meijer, who in 1992 developed a system that converts visual images into auditory signals [3]. The system, called The vOICe, delivers ‘visual’ information to the user via the sense of hearing [4].
With the help of this sensory substitution device, and after extensive training, totally blind individuals are able to differentiate between the shapes of different objects, identify actual objects and locate them in space, identify and mimic the body posture of a person standing a few meters away, navigate crowded corridors while avoiding obstacles, and even deduce live, three-dimensional emotional facial expressions from the shape of the face and mouth [5].
Another interesting finding comes from Dr. Ella Striem-Amit et al. The researchers evaluated the visual acuity that The vOICe provides, testing eight congenitally blind and one early-onset totally blind individual. Importantly, the subjects had been trained to use the program for several months (two hours a week) prior to the acuity assessment. Using the Snellen tumbling-E test, the researchers found that the participants’ visual acuity varied between 20/200 and 20/600. Moreover, five of the nine participants had a visual acuity exceeding the blindness threshold established by the World Health Organization at 20/400, and could therefore formally be regarded as low-vision sighted [5].
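To make the comparison concrete: in Snellen notation a larger denominator means worse acuity, so 20/200 is better than the 20/400 threshold while 20/600 is worse. Below is a minimal Python sketch of that bookkeeping; the helper function and threshold constant are illustrative and not taken from the study itself.

```python
# Hypothetical helper (not from the study): compare Snellen acuities against
# the WHO blindness threshold of 20/400 mentioned above.
def snellen_to_decimal(snellen: str) -> float:
    """Convert a Snellen fraction such as '20/200' to decimal acuity (larger = better)."""
    numerator, denominator = (float(x) for x in snellen.split("/"))
    return numerator / denominator

WHO_BLINDNESS_THRESHOLD = snellen_to_decimal("20/400")  # 0.05

for acuity in ["20/200", "20/400", "20/600"]:
    exceeds = snellen_to_decimal(acuity) > WHO_BLINDNESS_THRESHOLD
    print(f"{acuity}: exceeds the blindness threshold -> {exceeds}")
```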
Dr. Meijer on his invention.
Intrigued by The vOICe’s operating principles, I couldn’t resist asking the device’s developer a few questions:
Dr. Meijer, given that seeing with sound is quite an unusual approach to perceiving the external world, how did the idea that people can actually see with sound occur to you?
Dr. Peter
Meijer: In the brain, at the neuron level, the
signals carrying visual or auditory information all look the same (just spike
trains), so if the switch-box circuitry permits there could be a
"leaking" of auditory input to the visual brain areas. [In
similar way as] With the old telephone
system with copper wires in the ground, you can call people that you have never
called before, all without changing the physical wiring in the ground.
How and when does sound start to be analyzed by the visual cortex instead of (or after) the auditory cortex?
Dr. P.M.: Within days of complete blindfolding of normally sighted people, the visual
cortex starts to respond to sound, cf. [6, 7, 8]. How it works, and the
extent to which for instance the parietal cortex (association cortex) is
involved is still unclear. The basic idea is that the visual cortex
"likes" to do what it is good at, such as doing spatial computations,
and if it can (and must, for lack of eyesight) get that information elsewhere,
the brain will adapt. For similar reasons, in Charles Bonnet syndrome, loss of
eyesight leads to visual hallucinations because the visual areas in the brain
still "want" to create realistic visual renderings. Ideally, sensory
substitution would replace meaningless hallucinations by visual hallucinations
based on true visual input, even though that input is now differently encoded (e.g.
in sound). Normal vision can be viewed as visual hallucinations where the
content just happens to match physical reality because the content is derived
from environmental visual input from the eyes.
Transforming a visual image into sound: how does it work?
Visual images are captured by a camera and then transformed into so-called soundscapes that preserve information about objects’ shapes. The visual-to-auditory conversion works as follows: time and stereo panning form the horizontal axis of the sound representation of an image, tone frequency makes up the vertical axis, and loudness corresponds to pixel brightness [9]. For example, a bright dot gives a short beep whose pitch indicates its elevation, and a rising bright line gives a rising tone. More examples, together with their corresponding soundscapes, can be found in the program’s manual [10].
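To make the mapping concrete, here is a minimal Python sketch of a vOICe-style conversion. It only illustrates the principle described above; the function name, frequency range, scan duration and other parameters are assumptions, not The vOICe’s actual implementation.

```python
# Illustrative sketch of a vOICe-style image-to-sound mapping (assumed parameters).
import numpy as np

def image_to_soundscape(image, duration=1.0, f_min=500.0, f_max=5000.0, sample_rate=44100):
    """Scan a grayscale image (rows x cols, values 0..1) from left to right over
    `duration` seconds: row -> tone frequency (top row = highest pitch),
    pixel brightness -> loudness, column -> time and stereo pan."""
    rows, cols = image.shape
    col_samples = int(duration * sample_rate / cols)
    t = np.arange(col_samples) / sample_rate
    # Exponentially spaced frequencies, highest for the top row.
    freqs = f_max * (f_min / f_max) ** (np.arange(rows) / (rows - 1))
    left, right = [], []
    for c in range(cols):
        # Sum one sine tone per row, weighted by that pixel's brightness.
        column = sum(image[r, c] * np.sin(2 * np.pi * freqs[r] * t) for r in range(rows))
        pan = c / (cols - 1)               # 0 = hard left, 1 = hard right
        left.append((1.0 - pan) * column)
        right.append(pan * column)
    stereo = np.stack([np.concatenate(left), np.concatenate(right)], axis=1)
    peak = np.max(np.abs(stereo)) or 1.0
    return stereo / peak                   # normalized stereo samples

# A single bright dot near the top left yields a short, high-pitched, left-panned beep.
img = np.zeros((16, 16))
img[2, 1] = 1.0
samples = image_to_soundscape(img)
```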
Remarkably, The vOICe lets one exploit natural optical cues, namely visual perspective, parallax, occlusion, shading, and shadows, which can help greatly with independent navigation. For example, knowing the rule that an object appears twice as large at half the distance, and applying it while moving around and analyzing the soundscapes, the user can differentiate between and identify nearby obstacles as well as distant landmarks (a small worked example follows below). More information about these regularities can be found in The vOICe manual [10].
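As a quick worked example of the perspective rule (apparent angular size is roughly inversely proportional to distance under a simple pinhole approximation; the object and numbers below are hypothetical):

```python
# Apparent (angular) size is roughly proportional to real size / distance,
# so halving the distance doubles how large an object appears in the soundscape.
def apparent_size(real_size_m: float, distance_m: float) -> float:
    return real_size_m / distance_m   # small-angle (pinhole) approximation, in radians

door_height = 2.0                       # meters (hypothetical object)
print(apparent_size(door_height, 8.0))  # 0.25
print(apparent_size(door_height, 4.0))  # 0.5 -> twice as large at half the distance
```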
To start practicing with The vOICe you only need a computer on which to install the free-to-download Windows program, plus headphones. This will let you practice interpreting the soundscapes of simple shapes. When you are ready for the next level, you will need a portable computer (laptop or tablet) and a camera to get a live view of the visual environment. All the details about software and hardware can be found at seeingwithsound.com, along with recommendations on bone conduction headphones (which let you hear both the soundscapes and natural environmental sounds) and USB camera glasses, which make practicing with The vOICe more convenient.
However, it's not a magic bullet.
It is highly important to undergo step-by-step training before using The vOICe in a real environment, especially outdoors. Listening to The vOICe soundscapes of the outside environment without any preliminary training may in some cases cause irritability or a headache, because of the stream of complex sounds that you cannot yet interpret. Dr. Meijer once made an apt comparison (personal correspondence): “Learning to drive a car can initially be highly stressful too, with the need to near-simultaneously watch the road, watch the rear view mirror, and operate the gas, gear lever, clutch and steering wheel in real-time. Still, would-be drivers are not complaining [and keep on studying]. Mastering The vOICe means hard work and persistence.” Here you can find suggestions for self-training, and the English manual (a translation into Russian is also available) will help you explore the program further.
Dr. Meijer’s advice about being persistent with The vOICe training is corroborated by scientific observations. Dr. Lotfi Merabet et al. measured brain activity (using fMRI) before and after The vOICe training. Before the training, four sighted subjects showed strong activation of the auditory cortex but no activation of visual areas in response to The vOICe audio stimuli. After one week of training, activation was also recorded in visual cortical areas in three of the four sighted subjects [6]. Further interesting results were reported by Dr. Amir Amedi et al., who studied the lateral-occipital tactile-visual area (LOtv), which is normally responsible for object shape recognition via integrated processing of visual and tactile information. According to the fMRI results, the soundscapes generated by The vOICe also activated LOtv during shape recognition, whereas other sounds did not. Moreover, LOtv activation was only observed in subjects who had been trained to interpret the soundscapes. The scientists added that it is unlikely that visual imagery drives the processing of soundscape information in LOtv [9].
What about the feedback from sightless users?
In the photo: Pranav Lal and photographs taken by him. Source: http://techesoterica.com/
I decided to contact a sightless person who uses The vOICe in his daily life. Recently, New Scientist published an article about Pranav Lal, a congenitally blind young man who takes wonderful photographs of the places he travels to, using The vOICe to compose his shots. Mr. Lal has been using The vOICe since 2001 (14 years at the time of writing), so he seemed the right person to ask for an opinion on the sensory substitution device:
What does it feel like to perceive the world via The vOICe? What advantages does using The vOICe bring you personally?
Pranav Lal: As regards my feelings, I cannot describe them in one word. I
experience so much more. For example, I was looking at the staircase outside my
house. I have seen the architecture plans of the house using The vOICe. I
looked at the staircase sideways with The vOICe and connected the architect’s
drawing with what I was seeing. When I was being driven to a shop that was
quite far from my house, I was looking at all the vehicles and at the walls on
the side of the road as well as other things like vehicles stopped at the red
lights etc. I got so much more information. Words do not convey visual
information. You need to experience it. In addition, The vOICe helps me with
orientation. For example, I can walk in a straight line and not collide with
colleagues who are standing in random positions in the office. I feel more in
tune with my environment and can acquire information almost as fast as a
sighted person. Moreover, it gives me more inclusion with the sighted world. I
can point to things and ask people what they are and if people get excited
about something, I can look at that thing and participate in the conversation. The
thing with The vOICe is that you need to practice and start with small things
like looking at the door of your bedroom and evaluating how it looks visually.
For how long at a time do you actively use The vOICe? Do you use it throughout the whole day, or only for short periods? Have you experienced any side effects from using the program (e.g. headaches)?
PL: I have used it for a maximum of 12 hours without any discomfort. I use
the program regularly. I wear the setup on a need basis. For example, on a
regular day, I may use The vOICe for 5 or 10 minutes to walk around my office
but when I go on holiday or to a new place, I only take it off when I return to
my hotel room. I assure you that there are no headaches. There is some
discomfort if your setup is not comfortable but we are fixing those problems
fast. For example, headphones became uncomfortable for me. I have now switched
to bone conduction headphones so my ears are free.
Do you really perceive the soundscapes subconsciously without thinking
much about the basic rules of vision-to-sound conversion?
PL: As for subconscious interpretation, I do not consciously think of the
rules any more. I sense a scene and then break it down into shapes. I then look
at spaces between shapes, patches of light and dark and then look for varying
textures. If I encounter something really new, then I know the 3 basic rules
and try to make sense of it. The 3 rules are: the panning represents horizontal
placement of an object, the pitch represents the height of an object and the
volume represents the brightness of an object.
What will happen is that the more
you use the program, the more the rules will become a habit when listening to a
soundscape. I frequently find myself using the rules when listening to music
and believe me that makes for strange images! I do not exactly build full
pictures in my head, but more like a functional model akin to a photographic
negative.
Pranav Lal keeps a blog, techesoterica.com, where he shares his experience of using The vOICe, as well as his views on other topics.
Afterword.
I would like to note that I have previously written about another sensory substitution device – a tactile one named BrainPort. In my opinion, what makes both The vOICe and BrainPort unique is that their operating principle relies on our organism’s (in this case, the brain’s) natural ability to adapt to new conditions. Sensory substitution devices are noninvasive, relatively cheap, and can open up opportunities for perceiving the world that we have not thought of before.
Another point about The vOICe that amazed me (apart from everything else) is that it may give the experience of visual perception to congenitally blind individuals. Specialists are familiar with the concept of ‘critical periods’, which holds that if visual stimuli do not reach the brain during a particular developmental period in childhood, visual functions do not develop (reviewed in [11]). This is confirmed by psychological observations of children who lost vision at different ages [11]. For instance, visual deprivation starting at 6 months of age prevents the development of normal acuity, while visual deprivation occurring near birth prevents sensitivity to the global direction of motion. Nevertheless, the studies of congenitally blind subjects who used The vOICe [5], as well as Pranav Lal’s experience, demonstrate that they may still acquire such visual functions as acuity, shape recognition, object localization in space, etc., despite having had no visual experience during these developmental periods.
Acknowledgement.
I thank Dr. Peter Meijer and Pranav Lal for their help in the creation of this article.
References:
1. Dormal G, Lepore F, Harissi-Dagher M, Albouy G,
Bertone A, Rossion B, Collignon O (2014). Tracking the evolution of crossmodal
plasticity and visual functions before and after sight-restoration. Journal of
Neurophysiology, 113, 1727-1742. doi: 10.1152/jn.00420.2014.
2. Collignon O, Dormal G, Albouy G, Vandewalle G, Voss
P, Phillips C, Lepore F. (2013). Impact of blindness
onset on the functional organization and the connectivity of the occipital
cortex. Brain, 136 (Pt 9): 2769-83. doi: 10.1093/brain/awt176.
3. Meijer PB (1992). An experimental system for auditory image representations. IEEE Trans Biomed Eng, 39(2): 112-121.
5. Striem-Amit E, Guendelman M, Amedi A (2012). ‘Visual’ acuity of the congenitally blind using visual-to-auditory sensory substitution. PLoS ONE, 7(3): e33136. doi: 10.1371/journal.pone.0033136.
6. Merabet L, Poggel D, Stern W, Bhatt E, Hemond C,
Maguire S, Meijer P and Pascual-Leone A (2008). Retinotopic visual cortex
mapping using a visual-to-auditory sensory-substitution device. Front. Hum.
Neurosci. Conference Abstract: 10th International Conference on Cognitive
Neuroscience. doi: 10.3389/conf.neuro.09.2009.01.273
7. Pascual-Leone A,
8. Merabet LB, Maguire D, Warde A, Alterescu K, Stickgold R, Pascual-Leone A (2004). Visual hallucinations during prolonged blindfolding in sighted subjects. J Neuroophthalmol, 24(2): 109-13.
9. Amedi A, Stern WM, Camprodon JA, Bermpohl F, Merabet L, Rotman S, Hemond C, Meijer P, Pascual-Leone A (2007). Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nature Neuroscience, 10: 687-689. doi: 10.1038/nn1912.
10. Manual of The vOICe: http://www.seeingwithsound.com/manual/The_vOICe_Training_Manual.htm
Self-Training
for The vOICe: http://www.seeingwithsound.com/training.htm
11. Lewis TL, Maurer D (2005). Multiple sensitive periods in human visual development: evidence from visually deprived children. Developmental Psychobiology, 46(3): 163-183. doi: 10.1002/dev.20055.