Saturday 12 March 2016

Thinking aloud: could cells that repair brain damage be a solution for the blind?


Today I watched an amazing TED Talk by the neurosurgeon Jocelyne Bloch.

Dr. Bloch, together with her colleague, the biologist Jean-François Brunet, isolated remarkable brain cells that behave almost like stem cells, but with one notable difference. These so-called doublecortin-positive cells, which make up only about 4% of the cerebral cortex, can turn into fully fledged neurons and rebuild damaged areas of the brain. To demonstrate this, the researchers conducted the following experiment in monkeys. They took a small sample of brain tissue from a non-functional region and grew a culture of doublecortin-positive cells from it. They then tagged the cultured cells with a special dye and implanted them back into an intact region of the same monkey's brain. Thanks to the tags, the researchers could observe that the introduced cells gradually dissipated: they simply disappeared. Next, to find out whether these cells might behave differently elsewhere, the researchers introduced the same tagged cells into a damaged part of the monkey's brain. Notably, in this case the cells did not disappear. Instead, they remained at the site of injury and became fully fledged neurons. Moreover, the newly formed neurons not only repaired the injury physically, but also took over the functions for which this region had been responsible before the damage.

You can learn more details from Dr. Bloch's talk:


Undoubtedly, this is an amazing talk. I believe that, in the long run, the work of Dr. Bloch and colleagues also gives hope of restoring visual function to patients who have lost their sight due to damage to brain structures.

The doctor's talk also pushed my thoughts further. As we know, the retina and the optic nerve are also made up of neurons. Involuntarily a thought arises: what if these unique doublecortin-positive cells could differentiate into neurons when implanted into a damaged retina or optic nerve? It may be a crazy idea, but I will try to look into it and will certainly let you know as soon as I find out anything.




Sunday 13 September 2015

Seeing with sound: what hidden abilities do we have?

Author: Olena Markaryan                                     


We think that we see with our eyes, but is that a fact?


The experimental psychologist Dr. Michael Proulx once said at a TEDx talk: “We think that we see with our eyes, but in fact we see with our brain. Our eyes just provide us the information and the brain sorts out that information, makes sense of it and makes that feeling of seeing” [0]. These words give us plenty of food for thought and push us to re-evaluate our abilities. What happens if the visual input is cut off? Can a person still restore visual perception in this case?
Dr. Ione Fine, discussing the phenomenon of cross-modal plasticity in her interview, explains that if a person gets no visual input, the part of the brain responsible for processing that input does not simply stop functioning and sit idle. Instead, it begins to be fed with auditory and tactile information for analysis. Indeed, in the daily life of blind people the visual cortex is actively engaged in processing audio information [0]. This was confirmed by observations by Dr. Giulia Dormal, Dr. Olivier Collignon and colleagues using functional magnetic resonance imaging (fMRI): the scientists noted activation of visual cortex regions in response to audio stimuli in both congenitally and late-blind individuals [1, 2].
Example spectrogram of a one-second sound generated by The vOICe.

Image source: www.seeingwithsound.com

Breaking stereotypes gives many possibilities.


Thus, if one sensory receptor is unavailable, why not use another to supply the brain with the necessary ‘food’ for processing? This principle was in fact applied by Dr. Peter Meijer, who in 1992 developed a system that converts visual images into auditory signals [3]. The system, called The vOICe, provides the user with ‘visual’ information via the sense of hearing [4]. With the help of this sensory substitution device and after extensive training, totally blind individuals are able to differentiate between the shapes of different objects, identify actual objects and locate them in space, identify and mimic the body posture of a person standing a few meters away, navigate crowded corridors while avoiding obstacles, and even deduce live, three-dimensional emotional facial expressions from the shape of the face and mouth [5].
Another interesting finding comes from Dr. Ella Striem-Amit et al. The researchers set out to evaluate the visual acuity that The vOICe provides to eight congenitally and one early-onset totally blind individuals. Importantly, the subjects were trained to use the program for several months (two hours a week) prior to the acuity assessment. Using the Snellen tumbling-E test, it was revealed that the participants' visual acuity ranged between 20/200 and 20/600. Moreover, five of the nine participants had a visual acuity exceeding the blindness threshold established by the World Health Organization at 20/400, and could therefore formally be regarded as (low-vision) sighted [5].
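As a quick sanity check of these numbers, a Snellen fraction can be converted to decimal acuity (numerator divided by denominator), with the WHO blindness threshold of 20/400 corresponding to 0.05. A minimal sketch (the function names are my own, not from the study):

```python
# Convert Snellen fractions (e.g. "20/400") to decimal acuity and compare
# them with the WHO blindness threshold of 20/400 (decimal acuity 0.05).
WHO_BLINDNESS_THRESHOLD = 20 / 400  # = 0.05

def snellen_to_decimal(snellen: str) -> float:
    """'20/200' -> 0.1 (numerator divided by denominator)."""
    num, den = snellen.split("/")
    return float(num) / float(den)

def exceeds_blindness_threshold(snellen: str) -> bool:
    """True if the acuity is better than the WHO blindness threshold."""
    return snellen_to_decimal(snellen) > WHO_BLINDNESS_THRESHOLD

# The acuity range reported by Striem-Amit et al. [5]:
print(snellen_to_decimal("20/200"))            # 0.1, the best reported acuity
print(snellen_to_decimal("20/600"))            # about 0.033, the worst reported
print(exceeds_blindness_threshold("20/200"))   # True: better than 20/400
```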

Dr. Meijer about his invention.

Intrigued by The vOICe's operating principles, I couldn't resist asking the device's developer a few questions:

Dr. Meijer, given that seeing with sound is a rather unusual way to perceive the external world, how did the idea that people can actually see with sound come to you?

Dr. Peter Meijer: In the brain, at the neuron level, the signals carrying visual or auditory information all look the same (just spike trains), so if the switch-box circuitry permits, there could be a "leaking" of auditory input into the visual brain areas. It is similar to the old telephone system with copper wires in the ground: you can call people you have never called before, all without changing the physical wiring in the ground.

How and when does sound start to be analyzed by the visual cortex instead of (or after) the auditory cortex?

Dr. P.M.: Within days of complete blindfolding of normally sighted people, the visual cortex starts to respond to sound, cf. [6, 7, 8]. How it works, and the extent to which, for instance, the parietal cortex (association cortex) is involved, is still unclear. The basic idea is that the visual cortex "likes" to do what it is good at, such as spatial computations, and if it can (and must, for lack of eyesight) get that information elsewhere, the brain will adapt. For similar reasons, in Charles Bonnet syndrome, loss of eyesight leads to visual hallucinations, because the visual areas of the brain still "want" to create realistic visual renderings. Ideally, sensory substitution would replace meaningless hallucinations with visual hallucinations based on true visual input, even though that input is now differently encoded (e.g. in sound). Normal vision can be viewed as a visual hallucination whose content just happens to match physical reality, because the content is derived from environmental visual input from the eyes.

Transforming a visual image into sound: how does it work?


Visual images are captured by a camera and then transformed into so-called soundscapes that preserve information about an object's shape. The visual-to-auditory conversion algorithm is as follows: time and stereo panning form the horizontal axis of the sound representation of an image, tone frequency makes up the vertical axis, and loudness corresponds to pixel brightness [9]. For example, a bright dot gives a short beep, with pitch indicating elevation, while a rising bright line gives a rising tone. More examples with their corresponding soundscapes can be found in the program's manual [10].
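To make the mapping concrete, here is a toy sketch of the column-by-column scan described above: time sweeps across the image from left to right, row height sets the tone frequency (top rows give high pitch), and pixel brightness sets loudness. Stereo panning is omitted for brevity, and the phase resets at each column; this only illustrates the principle and is not Dr. Meijer's actual implementation:

```python
import math

def image_to_soundscape(image, duration=1.0, rate=8000,
                        f_min=500.0, f_max=5000.0):
    """Toy vOICe-style mapping. `image` is a list of rows of brightness
    values in 0..1, row 0 at the top. Columns are scanned left to right
    over `duration` seconds; each bright pixel contributes a sine tone
    whose frequency encodes its row and whose amplitude encodes brightness."""
    n_rows, n_cols = len(image), len(image[0])
    samples_per_col = int(duration * rate / n_cols)
    # Exponentially spaced frequencies; top rows map to higher pitch.
    span = max(1, n_rows - 1)
    freqs = [f_min * (f_max / f_min) ** ((n_rows - 1 - r) / span)
             for r in range(n_rows)]
    out = []
    for c in range(n_cols):              # time sweeps across columns
        for i in range(samples_per_col):
            t = i / rate
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(n_rows))
            out.append(s / n_rows)       # normalize loudness
    return out

# A single bright dot yields a short beep; its pitch encodes its height.
dot = [[0, 0, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 0]]
samples = image_to_soundscape(dot)
```

Writing `samples` to a WAV file (e.g. with the standard `wave` module) would let you actually hear the beep appear partway through the one-second sweep.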
Remarkably, The vOICe lets one exploit natural optical features, namely visual perspective, parallax, occlusion, shading and shadows, which can greatly help with independent navigation. For example, knowing the rule that an object appears twice as large at half the distance, and applying it while moving around and analyzing soundscapes, the user can differentiate between and identify nearby obstacles as well as distant landmarks. More information about these interesting regularities can be found in the manual for The vOICe [10].
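The "twice as large at half the distance" rule follows from the simple geometry of the visual angle; a small illustrative calculation (my own, not from the manual):

```python
import math

def apparent_angular_size(size_m, distance_m):
    """Angular size in degrees: theta = 2 * atan(size / (2 * distance)).
    For small angles this is roughly proportional to size / distance,
    so halving the distance roughly doubles the apparent size."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# A 1 m wide doorway, approached while listening to soundscapes:
far = apparent_angular_size(1.0, 8.0)   # about 7.2 degrees at 8 m
near = apparent_angular_size(1.0, 4.0)  # about 14.3 degrees at 4 m
```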

In the photo: setup for the Windows version of The vOICe. The vOICe for Android application is also available. Image source: www.seeingwithsound.com

To start practicing with The vOICe, you only need a computer to install the free-to-download Windows program, and headphones. This will let you practice interpreting the soundscapes of simple shapes. When you are ready for the next level, you will need a portable computer (laptop or tablet) and a camera to get a live view of the visual environment. All the details about software and hardware can be found at seeingwithsound.com. You will also find recommendations there on using bone-conduction headphones (which let you hear both the soundscapes and natural environmental sounds) and USB camera glasses, which make practicing with The vOICe more convenient.

However, it's not a magic bullet.


It is very important to undergo step-by-step training before using The vOICe in a real environment, especially outdoors. Listening to The vOICe soundscapes of the outside world without any preliminary training may in some cases cause irritability or a headache, because of the stream of complex sounds that you cannot yet interpret. Dr. Meijer once made a quite apt comparison (personal correspondence): “Learning to drive a car can initially be highly stressful too, with the need to near-simultaneously watch the road, watch the rear-view mirror, and operate the gas, gear lever, clutch and steering wheel in real time. Still, would-be drivers are not complaining [and keep on studying]. Mastering The vOICe means hard work and persistence”. Here you can find suggestions for self-training, and the English manual (a Russian translation is also available) will help you explore the program's usage further.
Dr. Meijer's advice to be persistent with The vOICe training is corroborated by scientific observations. Dr. Lotfi Merabet et al. measured brain activity (using fMRI) before and after The vOICe training. Before the training, four sighted subjects showed strong activation of the auditory cortex but no activation of visual areas in response to The vOICe audio stimuli. After one week of training, activation was also recorded in visual cortical areas in three of the four sighted subjects [6]. Other interesting results come from Dr. Amir Amedi et al., who studied the lateral-occipital tactile-visual area (LOtv), which is normally responsible for object shape recognition via integrated visual and tactile information processing. According to the fMRI results, the soundscapes generated by The vOICe also activated LOtv during shape recognition, whereas other sounds did not. Moreover, LOtv activation was observed only in subjects who had been trained to interpret the soundscapes. The scientists added that it is unlikely that visual imagery drives the processing of soundscape information in LOtv [9].


What about the feedback from sightless users?

In the photo: Pranav Lal and photographs taken by him. Source: http://techesoterica.com/
I decided to contact a blind person who uses The vOICe in his daily life. New Scientist recently published an article about Pranav Lal, a congenitally blind young man who takes wonderful photos of the places he travels to, using The vOICe to compose good shots. Mr. Lal has been using The vOICe since 2001 (i.e. 14 years as of now), so I figured he was the right person to ask for an opinion on the sensory substitution device:

What does it feel like to perceive the world via The vOICe? What are the advantages of using The vOICe for you personally?

Pranav Lal: As regards my feelings, I cannot describe them in one word. I experience so much more. For example, I was looking at the staircase outside my house. I have seen the architecture plans of the house using The vOICe. I looked at the staircase sideways with The vOICe and connected the architect’s drawing with what I was seeing. When I was being driven to a shop that was quite far from my house, I was looking at all the vehicles and at the walls on the side of the road as well as other things like vehicles stopped at the red lights etc. I got so much more information. Words do not convey visual information. You need to experience it. In addition, The vOICe helps me with orientation. For example, I can walk in a straight line and not collide with colleagues who are standing in random positions in the office. I feel more in tune with my environment and can acquire information almost as fast as a sighted person. Moreover, it gives me more inclusion with the sighted world. I can point to things and ask people what they are and if people get excited about something, I can look at that thing and participate in the conversation. The thing with The vOICe is that you need to practice and start with small things like looking at the door of your bedroom and evaluating how it looks visually.

How long have you been actively using The vOICe? Do you use it throughout the whole day, or for short periods? Have you experienced any side effects from using the program (e.g. headache)?

PL: I have used it for a maximum of 12 hours without any discomfort. I use the program regularly. I wear the setup on a need basis. For example, on a regular day, I may use The vOICe for 5 or 10 minutes to walk around my office but when I go on holiday or to a new place, I only take it off when I return to my hotel room. I assure you that there are no headaches. There is some discomfort if your setup is not comfortable but we are fixing those problems fast. For example, headphones became uncomfortable for me. I have now switched to bone conduction headphones so my ears are free.

Do you really perceive the soundscapes subconsciously without thinking much about the basic rules of vision-to-sound conversion?

PL: As for subconscious interpretation, I do not consciously think of the rules any more. I sense a scene and then break it down into shapes. I then look at the spaces between shapes, at patches of light and dark, and then look for varying textures. If I encounter something really new, then I fall back on the 3 basic rules and try to make sense of it. The 3 rules are: panning represents the horizontal placement of an object, pitch represents its height, and volume represents its brightness.
What happens is that the more you use the program, the more the rules become a habit when listening to a soundscape. I frequently find myself using the rules when listening to music and, believe me, that makes for strange images! I do not exactly build full pictures in my head, but rather a functional model akin to a photographic negative.
Pranav Lal keeps a blog, techesoterica.com, where he shares his experience of using The vOICe as well as his views on other topics.

Afterword.


I would like to note that I have previously written about another sensory substitution device, a tactile one named BrainPort. In my opinion, the uniqueness of both The vOICe and BrainPort is that their operating principle relies on our organism's (in this case, the brain's) natural ability to adapt to new conditions. Sensory substitution devices are noninvasive, relatively cheap, and can open up possibilities for perceiving the world that we have not thought of before.
Another point about The vOICe that amazed me (apart from everything else) is that it may give the experience of visual perception to congenitally blind individuals. Specialists know of the concept of ‘critical periods’, which holds that if visual stimuli do not reach the brain during a particular developmental period (in childhood), visual functions do not develop (reviewed in [11]). This is confirmed by psychological observations of children who lost their vision at different ages [11]. For instance, if visual deprivation starts at 6 months of age, it prevents the development of normal acuity; if it occurs near birth, it prevents sensitivity to the global direction of motion. Nevertheless, the studies of congenitally blind subjects who used The vOICe [5], as well as Pranav Lal's experience, demonstrate that they may still acquire visual functions such as acuity, shape recognition and object localization in space, despite having had no visual experience during the developmental periods.

Acknowledgement.

I thank Dr. Peter Meijer and Pranav Lal for their help in creating this article.
 

References:
0. Michael Proulx at TEDxBathUniversity: https://www.youtube.com/watch?v=2_EA6hHuUSA
1. Dormal G, Lepore F, Harissi-Dagher M, Albouy G, Bertone A, Rossion B, Collignon O (2014). Tracking the evolution of crossmodal plasticity and visual functions before and after sight-restoration. Journal of Neurophysiology, 113, 1727-1742. doi: 10.1152/jn.00420.2014.
2. Collignon O, Dormal G, Albouy G, Vandewalle G, Voss P, Phillips C, Lepore F. (2013). Impact of blindness onset on the functional organization and the connectivity of the occipital cortex. Brain, 136 (Pt 9): 2769-83. doi: 10.1093/brain/awt176.
3. Meijer PB (1992). An experimental system for auditory image representations. IEEE Trans Biomed Eng. 39(2):112-21.
5. Striem-Amit E., Guendelman M., Amedi A. (2012). ‘Visual’ Acuity of the Congenitally Blind Using Visual-to-Auditory Sensory Substitution. PLoS ONE 7(3): e33136. doi:10.1371/journal.pone.0033136
6. Merabet L, Poggel D, Stern W, Bhatt E, Hemond C, Maguire S, Meijer P and Pascual-Leone A (2008). Retinotopic visual cortex mapping using a visual-to-auditory sensory-substitution device. Front. Hum. Neurosci. Conference Abstract: 10th International Conference on Cognitive Neuroscience. doi: 10.3389/conf.neuro.09.2009.01.273
7. Pascual-Leone A, Hamilton R (2001) The metamodal organization of the brain. Prog Brain Res 134: 1–19.
8. Merabet LB, Maguire D, Warde A, Alterescu K, Stickgold R, Pascual-Leone A.(2004). Visual hallucinations during prolonged blindfolding in sighted subjects. J Neuroophthalmol. 24(2):109-13.
9. Amedi A, Stern W M, Camprodon J A, Bermpohl F, Merabet L, Rotman S, Hemond C, Meijer P & Pascual-Leone A (2007). Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nature Neuroscience 10, 687 – 689, doi:10.1038/nn1912
Self-Training for The vOICe: http://www.seeingwithsound.com/training.htm
11. Lewis T. L., Maurer D. (2005). Multiple Sensitive Periods in Human Visual Development: Evidence from Visually Deprived Children. Developmental Psychobiology, 46(3), 163-183. doi:10.1002/dev.20055

Friday 8 May 2015

Surgical sight restoration: A sticking point and treatment with alternating current.





I decided to write this article after reading a rather sad report about a patient whose cornea was restored after longstanding blindness. Despite the success of the operation, the researchers who observed the patient for 7 months afterwards concluded that, due to the long-term visual deprivation, vision restoration may never be complete.



Troubled by this unpromising news, I decided to investigate whether any solutions might exist. Now I would like to share my findings with you.




Finding No. 1: the issue really exists.




One would think that newly developed surgical techniques, such as corneal and limbal stem-cell transplantation, intraocular lens implantation and artificial cornea implantation, should solve the issue of restoring complete visual perception for the targeted patients. But what actually turns out to be the case? Studies of patients who underwent sight-restoring surgery after years of blindness do not give encouraging results. For example, Dr. Fine et al. studied the patient MM, who lost one eye at the age of 3.5 and was blinded in the other by chemical and thermal damage to the cornea [1]. This patient underwent corneal and limbal stem-cell transplantation surgery at the age of 43. The surgery was successful and the patient gained an acuity of 0.02 [2]. He could easily recognize simple shapes, identify colors and discriminate the direction of both simple and complex plaid motion [1]. Although the patient regained these important visual functions, his acuity remained low even 2 years after surgery, and he had difficulties with recognizing and interpreting three-dimensional form, faces and gender [1]. Even 7 years after the operation, he still had poor spatial resolution and limited visual abilities [3].



In this regard, it is worth mentioning the “critical period” theory. A critical period is a stage of visual development during which visual stimuli are necessary for a visual function to develop. This theory was extended by Dr. Lewis and Dr. Maurer, who noted that different visual functions have different sensitive periods of development [4]. Thus, if visual deprivation starts at 6 months of age, it prevents the development of normal acuity, but does not affect sensitivity to the global direction of motion, which develops in the period near birth.

Accordingly, Dr. Fine et al. suggested that MM's visual functions recovered unevenly because the ability to interpret three-dimensional forms and faces develops after early development, whereas the ability to interpret motion forms earlier in childhood [1]. Regarding acuity, it was concluded that the long-standing visual loss had degraded the spatial resolution of the relevant area of the patient's visual cortex.

Dr. Ostrovsky et al. made another observation while studying two congenitally blind children suffering from dense bilateral cataracts [5]. The children gained partial vision at the ages of 7 and 13 thanks to intraocular lens implantation. As a result, after the operation the children had acuities of 0.2 and 0.25, and both could perform simple shape recognition. Nevertheless, their recognition of overlapping simple shapes, i.e. the perceptual organization of the visual scene, remained poor, although it improved over time (within 10-18 months).




Thus, the theory of critical/sensitive periods may explain the partial recovery of visual function after sight-restoring surgery in patients who lost their sight in childhood. But how can a similar issue be explained when the vision loss happens in adulthood?




Dr. Šikl et al. studied a subject who lost his vision at the age of 17 in an explosion [6]. The patient's cornea was damaged in both eyes. At the age of 71 he underwent artificial cornea implantation. As a result, the patient gained an acuity of 0.33. At 6 and 8 months post surgery he showed good object recognition: 92% recognition of objects in canonical form, versus the 20-30% demonstrated postoperatively by early-blind patients. The patient could also differentiate face from non-face stimuli and successfully completed simple tasks of visual space perception. At the same time, he still had difficulties recognizing complex three-dimensional visual scenes, determining gender, and differentiating two faces shown simultaneously, as well as a limited ability to integrate partial information. A neuropsychological examination did not reveal any cognitive deficits, and the patient's performance matched his age.




As is well known, sensory substitution (e.g. spatial detection of sound, Braille reading) greatly helps blind individuals in their daily life. But does this result of cross-modal plasticity always have an advantageous impact?




Unfortunately, it does not. Dr. Dormal et al. investigated a patient whose vision severely deteriorated in childhood (between the ages of 2.5 and 13) because of dense bilateral cataracts [7]. After artificial cornea implantation at the age of 47, the subject's acuity improved from 0.04 to 0.2 (1.5 months post surgery) and then to 0.7 (7 months post surgery). The researchers noted improvements in contrast sensitivity and face individuation (which nevertheless remained below the normal range). The activity of the visual cortex before and after the sight-restoring surgery was monitored via functional magnetic resonance imaging (fMRI). Before the surgery, the patient's visual cortex responded actively to audio stimuli. After the surgery, the visual cortex still responded to audio input, and this response overlapped with the visual responses. Although the activation of the visual cortex by sound decreased post surgery, it was still recorded even 7 months after the operation. In other words, audio signals still competed with visual ones for analysis by the visual cortex. In the researchers' opinion, this may explain why the patient's visual performance remained below the normal level after the sight-restoring surgery.



From all of the above, it seems that if longstanding sight loss is caused by damage to the anterior tissues of the eye, surgical repair of those tissues is not enough to regain visual function to the full extent. Sounds quite pessimistic, doesn't it?

It may well be less sad once we understand that the loss of visual function is not restricted solely to the local tissue damage [8].


Finding No. 2: applying alternating current may partially restore visual perception to the blind.


After monitoring alpha-band oscillations in both visually impaired and sighted subjects, Michal Bola et al. noted that loss of visual function is accompanied by a disturbance of brain network synchronization. Moreover, the researchers went further and demonstrated that this synchronization can be adjusted by applying alternating current. The method they used is called noninvasive repetitive transorbital alternating current stimulation (rtACS), in which stimulating electrodes are applied to the skin around the eyes [8].

I have already written about this therapeutic approach before. Treatment with rtACS improves patients' performance on visual tasks. The success of rtACS was established, in particular, by a clinical observational study in which patients with optic nerve damage exhibited significant improvements in both visual field (by 9.3%) and acuity (by 0.02) after the treatment [9]. An explanation of this phenomenon was proposed by Dr. Sabel et al. within the “residual vision activation theory” [10].

According to the theory, the visual pathway is usually not damaged in its entirety: some residual structures survive. Nevertheless, they cannot properly transfer visual information, because neuronal cell loss leads to disorganization of the neuronal network, i.e. to loss of network synchrony. Stimulation with rtACS forces the disorganized neuronal network to fire simultaneously, which restores network synchrony both among the surviving cells within the damaged region and among cells upstream in the visual pathway. Repeated rtACS stimulation stabilizes the synchrony of network firing. The mechanism involved is similar to the one underlying normal learning.

Notably, even patients considered “legally blind” almost always have some degree of residual vision and therefore some restoration potential [10].

According to Dr. Sabel, the subject's age, as well as the age, type and location of the damage along the visual pathway, does not influence the degree of visual restoration (this refers to injuries of nerve tissues, that is, the retina, optic nerve and brain regions). The only known parameter that matters for restoration is the size and topography of the areas of residual vision (ARVs). Vision restoration may be induced in most visual field impairments (scotoma, tunnel vision, hemianopia, acuity loss), irrespective of their etiology (e.g. stroke, neurotrauma, glaucoma, amblyopia, age-related macular degeneration). However, vision restoration is rarely complete and does not occur in all patients.



Summing up Findings No. 1 and No. 2: should we expect a light at the end of the tunnel?



Comparing all the facts above brought me to a presumption. Noninvasive stimulation of visual system tissues with alternating current can improve visual perception even in patients whose blindness is caused by damage to the nervous tissues of the visual pathway. Perhaps this approach could also help patients whose nerve tissues are not affected, but in whom, because of the long-term absence of visual input, visual function does not recover completely. This certainly needs to be investigated, but what if rtACS is what could help rehabilitate patients after surgical vision restoration and allow them to regain visual function to the full extent?
 

References:


1. Fine I, Wade AR, Brewer AA, May MG, Goodman DF, Boynton GM, Wandell BA, MacLeod DIA (2003). Long-term deprivation affects visual perception and cortex. Nat Neurosci 6: 915–916. DOI:10.1038/nn1102

2. Saenz M., Lewis L. B., Huth A.G., Fine I., Koch C.(2008). Visual motion area MT+/V5 responds to auditory motion in human sight-recovery subjects. J Neurosci. 28(20): 5141–5148. doi:10.1523/JNEUROSCI.0803-08.2008.

3. Heimler B et al. Revisiting the adaptive and maladaptive effects of crossmodal plasticity. Neuroscience (2014), http:// dx.doi.org/10.1016/j.neuroscience.2014.08.003

4. Lewis T. L., Maurer D. (2005). Multiple Sensitive Periods in Human Visual Development: Evidence from Visually Deprived Children. Developmental Psychobiology, 46(3), 163-183. doi:10.1002/dev.20055

5. Ostrovsky, Y., Meyers, E., Ganesh, S., Mathur, U., and Sinha, P. (2009). Visual parsing after recovery from blindness. Psychol. Sci. 20, 1484–1491. doi: 10.1111/j.1467-9280.2009.02471.x

6. Šikl R, Šimeček M, Porubanová-Norquist M, Bezdíček O, Kremláček J, Stodůlka P, Fine I, Ostrovsky Y  (2013). Vision after 53 years of blindness. i-Perception 4(8) 498–507; doi:10.1068/i0611

7. Dormal G, Lepore F, Harissi-Dagher M, Albouy G, Bertone A, Rossion B, Collignon O (2014). Tracking the evolution of crossmodal plasticity and visual functions before and after sight-restoration. Journal of Neurophysiology, 113, 1727-1742. doi: 10.1152/jn.00420.2014

8. Bola M., Gall C., Moewes C., Fedorov A., Hinrichs H., Sabel B.A. (2014). Brain functional connectivity network breakdown and restoration in blindness. Neurology, 83(6), 542-551. doi:10.1212/WNL.0000000000000672

9. Fedorov A., Jobke S., Bersnev V., Chibisova A., Chibisova Y., Gall C., Sabel B. A. (2011). Restoration of vision after optic nerve lesions with noninvasive transorbital alternating current stimulation: a clinical observational study. Brain Stimul. 4:189-201. doi:10.1016/j.brs.2011.07.007

10. Sabel B.A, Henrich-Noack P., Fedorov A., Gall C. (2011). Vision restoration after brain and retina damage: The “Residual Vision Activation Theory”. Prog Brain Res, 192, 199-262. DOI: 10.1016/B978-0-444-53355-5.00013-0


 

Monday 4 May 2015

The Second Workshop and Lecture Series on “Cognitive neuroscience of auditory and cross-modal perception” took place in Košice, Slovakia on 20 – 24 April 2015

I was happy to participate in the workshop, which was dedicated to the neural processes of auditory, visual and cross-modal perception. The talks covered cognitive neuroscience research, including behavioral, neuroimaging and modeling approaches, as well as applications of the research in auditory prosthetic devices (cochlear implants, hearing aids).
Topics and presenters (detailed abstracts can be found here):

Monday, 20 April 2015

Learning From Nature’s Experiments: What Clinical Research Can Mean for Sensory Scientists, Frederick (Erick) Gallun, US Dept. of Veterans Affairs and Oregon Health & Science University

Pursuit eye movements and perceived object velocity, potential clinical applications
Arash Yazdanbakhsh, Boston University

Active listening: Speech intelligibility in cocktail party listening.
Simon Carlile, Auditory Neuroscience Laboratory, School of Medical Science and Bosch Institute, University of Sydney, Australia

Perceptual Learning: specificity, transfer and how learning is a distributed process
Aaron Seitz, Department of Psychology, University of California, Riverside, USA

Spatial hearing: Effect of hearing loss and hearing aids
Virginia Best, Boston University

Toward an evolutionary theory of speech: how and why it developed the way it did
Pierre Divenyi, Center for Computer Research in Music and Acoustics, Stanford University, U.S.A.

On the single neuron computation 
Petr Marsalek, Charles University in Prague

How spectral information triggers sound localization in sagittal planes
Robert Baumgartner, Piotr Majdak, and Bernhard Laback, Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria

Cognitively Inspired Speech Processing For Multimodal Hearing Technology
Dr Andrew Abel, Prof. Amir Hussain, Computing Science and Mathematics, University of Stirling, Scotland

Auditory Distance Perception and DRR-ILD Cues Weighting
Jana Eštočinová, Jyrki Ahveninen, Samantha Huang, Stephanie Rossi, and Norbert Kopčo, Institute of Computer Science, P. J. Šafárik University, Košice, Slovakia; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital; Center for Computational Neuroscience and Neural Technology, Boston University


Tuesday, 21 April 2015

RESTART theory: discrete sampling of binaural information during envelope fluctuations is a fundamental constraint on binaural processing.
G. Christopher Stecker, Vanderbilt University School of Medicine, Nashville TN USA

Sound Localization Cues and Perceptual Grouping in Electric Hearing
Bernhard Laback, Austrian Academy of Sciences

Brain Training: How to train cognition to yield transfer to real-world contexts
Aaron Seitz, Department of Psychology, University of California, Riverside, USA

Coincidence detection in the MSO - computational approaches
Petr Marsalek, Charles University in Prague

Auditory Processing After mild Traumatic Brain Injury: New Findings and Next Steps
Frederick (Erick) Gallun, US Dept. of Veterans Affairs and Oregon Health & Science University

Hearing motion in motion
Carlile S., Leung J., Locke S., and Burgess M., Auditory Neuroscience Laboratory, School of Medical Science and Bosch Institute, University of Sydney, Australia

Auditory processing capabilities supporting communication in preverbal infants
István Winkler, Research Centre for Natural Sciences, Hungarian Academy of Sciences

Chirp stimuli for entrainment: chirp up, chirp down and task effects
Aleksandras Voicikas, Ieva Niciute, Osvaldas Ruksenas, Inga Griskova-Bulanova, Vilnius University, Department of Neurobiology and Biophysics.

Cross-modal interaction in spatial attention
Marián Špajdel, Zdenko Kohút, Barbora Cimrová, Stanislav Budáč, Igor Riečanský
Laboratory of Cognitive Neuroscience, Institute of Normal and Pathological Physiology,
Slovak Academy of Sciences; Department of Psychology, Faculty of Philosophy and Arts, University of Trnava, Slovakia; Centre for Cognitive Science, Department of Applied Informatics, Faculty of Mathematics, Physics and Informatics, Comenius University in Bratislava, Slovakia; SCAN Unit, Institute of Clinical, Biological and Differential Psychology, Faculty of Psychology, University of Vienna, Austria

Prediction processes in the visual modality – an EEG study
Gábor Csifcsák, Viktória Balla, Szilvia Szalóki, Tünde Kilencz, Vera Dalos

Early electrophysiological correlates of susceptibility to the double-flash illusion
Júlia Simon, Gábor Csifcsák, Institute of Psychology, University of Szeged

Suggestion of rehabilitative treatment for patients subjected to sight restorative surgery
Olena Markaryan, Independent researcher

Learning of auditory distance with intensity and reverberation cues
Lubos Hladek, Aaron Seitz, Norbert Kopco, Institute of Computer Science, P. J. Safarik University in Kosice, Slovakia; Department of Psychology, University of California, Riverside, USA

Streaming and sound localization with a preceding distractor
Gabriela Andrejková, Virginia Best, Barbara G. Shinn-Cunningham, and Norbert Kopčo
Institute of Computer Science, P. J. Šafárik University, Košice, Slovakia; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Harvard Medical School/Massachusetts General Hospital, Charlestown MA; Center for Computational Neuroscience and Neural Technology, Boston University, Boston MA

Exposure to Consistent Room Reverberation Facilitates Consonant Perception
Norbert Kopčo, Eleni Vlahou, Kanako Ueno, and Barbara Shinn-Cunningham
Institute of Computer Science, P. J. Šafárik University; Department of Psychology, University of California, Riverside; School of Science and Technology, Meiji University; Center for Computational Neuroscience and Neural Technology, Boston University

Contextual plasticity in sound localization: characterization of spatial properties and neural locus
Beáta Tomoriová, Ľuboš Marcinek, Ľuboš Hládek, Norbert Kopčo
Pavol Jozef Šafárik University in Košice, Slovakia; Technical University of Košice, Slovakia.

Visual Adaptation And Spatial Auditory Processing
Peter Lokša, Norbert Kopčo, Institute of Computer Science, P. J. Šafárik University in Košice, Slovakia

Speech Localization in a Multitalker Reverberant Environment
Peter Toth, Norbert Kopco, Charles University in Prague

Wednesday, 22 April 2015

Visuospatial memory and where eyes look when the percept changes
Arash Yazdanbakhsh, Boston University

Modeling Auditory Scene Analysis by multidimensional statistical filtering
Volker Hohmann, Medical Physics, University of Oldenburg, Germany

Modeling auditory stream segregation by predictive processes
István Winkler, Research Centre for Natural Sciences, Hungarian Academy of Sciences

What is the cost of simultaneously listening to the "what" and the "when" in speech?
Pierre Divenyi, Center for Computer Research in Music and Acoustics, Stanford University, U.S.A.

Neuroimaging of task-dependent spatial processing in human auditory cortex.
G. Christopher Stecker, Vanderbilt University School of Medicine, Nashville TN USA

Temporal Effects in the Perception of Interaural Level Differences: Data and Model Predictions
Bernhard Laback, Austrian Academy of Sciences

Modeling Cocktail Party Processing in a Multitalker Mixture using Harmonicity and Binaural Features
Volker Hohmann, Medical Physics, University of Oldenburg, Germany

Audibility and spatial release from masking
Virginia Best, Frederick Gallun, Norbert Kopčo