Vision consists of your eyes detecting light and converting it into electrochemical impulses in neurons, which are then given meaning by your brain. The real “seeing” thus occurs in the brain, in the interpretation of those impulses. For this reason, as long as input from the surrounding world can be delivered to the brain through any means — for example, taste or touch — and the brain can learn to make sense of it, seeing can take place. Substituting one sense for another has allowed people to successfully see the world without vision.
In his book Incognito, David Eagleman tells of Paul Bach-y-Rita, a neuroscientist at the University of Wisconsin, who in the 1960s developed a device that presented a tactile display to blind people, in effect allowing them to see. The blind subjects had a video camera attached to their foreheads, and the information from the camera was converted into tiny vibrations on their backs. At first, of course, they bumped into things, and the experience was nothing like vision. But after wearing this visual-tactile substitution device for a week and learning to correctly translate the vibrations into movement instructions, they became quite skilled.
But, there is more. Eagleman writes:
The stunning part is that they actually began to perceive the tactile input – to see with it. After enough practice, the tactile input becomes more than a cognitive puzzle that needs translation; it becomes a direct sensation.
If it seems strange that nerve signals coming from the back can represent vision, bear in mind that your own sense of vision is carried by nothing but millions of nerve signals that just happen to travel along different cables. Your brain is encased in absolute blackness in the vault of your skull. It doesn’t see anything. All it knows are these little signals and nothing else. And yet you perceive the world in shades of brightness and colors. Your brain is in the dark but your mind constructs the light.
To the brain, it does not matter where those pulses come from – the eyes, the ears, or somewhere else entirely. As long as they consistently correlate with your own movements as you push, thump, and kick things, your brain can construct the direct perception we call vision.
He tells the story of Eric Weihenmayer, an extreme rock climber who is blind but has learned to see with his tongue. In 2001, he became the first blind person to climb Mount Everest. While climbing, Eric uses a device called a BrainPort, which takes video input and translates it into patterns of electrical pulses delivered to a grid of over 600 electrodes on his tongue. This allows Eric to discern qualities usually associated with vision, such as distance, shape, size, and direction of movement.
Demonstrating the concept in reverse, Eagleman tells the story of Mike May, who was blinded by a chemical explosion at the age of three. Forty-three years later, his vision was restored by a new surgical technique. The bandages were removed, and Mike was to look upon the faces of his children for the first time. What was supposed to be a heartwarming scene was not. Although his eyes were working perfectly, Mike did not have vision, because his brain could not make sense of the torrent of input from his eyes. Only with learning, over time, did Mike come to have the experience of sight.
Vision is learned. Seeing is in the brain.