Do you think that there is a computer screen sitting in front of you right now?

It would certainly seem so if you are reading these words online, but in fact you are not actually “seeing” the computer screen in front of you. What you see are photons of light bouncing off the screen (and generated by the screen’s internal electronics), which pass through the pupil, the hole in the iris of your eye, through the liquid medium inside the eye, wending their way past the ganglion and bipolar cells to strike the rods and cones at the back of your retina. These photons carry just enough energy to bend the photopigment molecules inside the rods and cones, changing the electrochemical balance inside these cells; that change is relayed through the retina’s circuitry to the ganglion cells, which fire what neuroscientists call “action potentials.”

From there the nerve impulses race along the neural pathway from the retina to the back of the brain, leaping from neuron to neuron across tiny gaps called synaptic clefts by means of neurotransmitters that flow across those gaps. Finally, the signals reach the visual cortex, where other neurons register what has been transduced from those photons of light and reconstruct the image of what is out there in the world.

Out of an incomprehensible number of data signals pouring in from the senses, the brain forms models of faces, tables, cars, trees, and every conceivable known (and even unknown, imagined) object and event. It does this through something called neural binding. A “red circle,” for example, is two neural inputs (“red” and “circle”) bound into a single percept of a red circle. Downstream neural signals, such as those closer to the muscles and sensory organs, converge as they move upstream into convergence zones, brain regions that integrate information arriving from the various senses (eyes, ears, touch, and so on). You end up perceiving a whole object instead of countless fragments of an image. This is why you see an entire computer screen with a meaningful block of text in front of you right now, and not just a jumble of data.
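To make the binding idea concrete by analogy, here is a toy sketch in Python. It is purely illustrative: the Percept class and convergence_zone function are invented for this example and are not a model of real neural computation. Two separate feature signals, “red” and “circle,” are integrated into one bound percept, much as the paragraph above describes.

```python
# Toy analogy only: separate feature signals are "bound" into one percept.
# Percept and convergence_zone are invented names for illustration.
from dataclasses import dataclass


@dataclass
class Percept:
    """One bound object, assembled from separate feature signals."""
    color: str
    shape: str


def convergence_zone(color_signal: str, shape_signal: str) -> Percept:
    """Stand-in for a brain region that integrates separate feature
    channels into a single coherent percept."""
    return Percept(color=color_signal, shape=shape_signal)


# Two separate "downstream" inputs...
color_channel = "red"
shape_channel = "circle"

# ...are bound upstream into one whole.
percept = convergence_zone(color_channel, shape_channel)
print(f"You perceive a {percept.color} {percept.shape}")  # a red circle
```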

According to the University of Cambridge cosmologist Stephen Hawking, however, not even science can pull us out of such belief dependency. In his new book, The Grand Design, co-authored with the Caltech physicist Leonard Mlodinow, Hawking presents a philosophy of science he calls “model-dependent realism.” It rests on the assumption that our brains form models of the world from sensory input, that we use the model most successful at explaining events and assume that it matches reality (even if it does not), and that when more than one model makes accurate predictions “we are free to use whichever model is most convenient.” Employing this method, Hawking and Mlodinow claim that “it is pointless to ask whether a model is real, only whether it agrees with observation.”
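Hawking and Mlodinow’s criterion can be sketched in code. The toy example below is purely illustrative and not drawn from the book: the free-fall “observations,” the tolerance, and both model functions are invented here. Two differently written models make identical predictions, so observation alone cannot say which one is “real,” and we are free to use whichever is most convenient.

```python
# Toy illustration of "model-dependent realism": if two models agree equally
# well with observation, observation cannot tell them apart.

def model_simple(t: float) -> float:
    """Distance fallen (m) after t seconds, in the convenient textbook form."""
    return 0.5 * 9.8 * t ** 2


def model_awkward(t: float) -> float:
    """The same physics written in a needlessly roundabout way."""
    return 4.9 * (t + 1) * (t - 1) + 4.9


def agrees_with_observation(model, observations, tolerance=0.5) -> bool:
    """The only question asked of a model: does it match what we measure?"""
    return all(abs(model(t) - d) <= tolerance for t, d in observations)


# Hypothetical, slightly noisy measurements: (time in seconds, distance in meters).
observations = [(1.0, 4.8), (2.0, 19.7), (3.0, 44.3)]

for name, model in [("simple", model_simple), ("awkward", model_awkward)]:
    print(name, "agrees with observation:",
          agrees_with_observation(model, observations))
# Both print True: we pick the simpler form because it is more convenient,
# not because it is more "real."
```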
