If you think about it, our remarkable ability to receive and decode speech is a biological miracle of the first order. All the more so because we appear to have learned the trick only in the last 50,000–100,000 years.
I have commented before about the importance of looking at entire brain systems rather than individual spots in the brain or individual brain cells. The brain is supposed to operate as a unified whole; if it doesn’t, things are going wrong. Another concept, and one that is not widely known outside the neuroscience community, is that instead of thinking of a stimulus as causing a set of neurons to fire, it is better to think of stimuli as modulating the activity pattern of neural circuits that are already active.
Think of the appearance of a TV screen when the set isn’t tuned to a particular station. You get a fast-moving “snow” effect. It is only when a coherent signal arrives that the “snow” gets itself organized.
Now colleagues from the University of Maryland have published new data that may help explain how we discriminate speech, and why it sometimes goes wrong.
A particular resonance pattern in the brain's auditory processing region appears to be the key.
Huan Luo and David Poeppel found that an inherent rhythm of neural activity known as the "theta band" reacts to spoken sentences specifically by changing its phase. The researchers also noted that the natural oscillation of this frequency provides further evidence that the brain samples speech in segments about the length of a syllable.
The findings represent the first time that such a broad neural response has been identified as central to perceiving the highly complex dynamics of human speech. Previous research studies have looked at the responses of individual neurons to speech sounds, but not the response of the auditory cortex as a whole.
In their experiments, the researchers asked volunteers to listen to spoken sentences. One example was, “He held his arms close to his sides and made himself as small as possible."
At the same time, the subjects' brains were scanned using magnetoencephalography. In this imaging technique, sensitive detectors are used to measure the magnetic fields produced by electrical activity in brain regions.
The theta band oscillates between four and eight cycles per second, and it changes its phase pattern with unique sensitivity and specificity in response to the spoken sentences. In a second experiment, when the researchers degraded the intelligibility of the sentences, the theta-band pattern lost its tracking resonance with the speech.
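To make the idea of phase modulation concrete, here is an illustrative sketch (not the authors' analysis; the sampling rate, frequency, onset time, and size of the shift are all assumed values): a continuously running theta-band oscillation whose phase is shifted at a hypothetical stimulus onset, while its frequency stays untouched.

```python
import numpy as np

# Assumed values for illustration only
fs = 1000                          # sampling rate in Hz
t = np.arange(0.0, 2.0, 1.0 / fs)  # two seconds of time
f_theta = 6.0                      # a mid-theta frequency in Hz
onset = 1.0                        # hypothetical stimulus onset, in seconds
phase_shift = np.pi / 2            # hypothetical stimulus-driven phase shift

# The oscillation runs continuously; the stimulus modulates its phase
phase = 2.0 * np.pi * f_theta * t
phase[t >= onset] += phase_shift
signal = np.cos(phase)

# The information is carried by the phase pattern, not by any change
# in the oscillation's amplitude or frequency
jump = phase[t >= onset][0] - 2.0 * np.pi * f_theta * onset
print(f"phase shift at onset: {jump:.3f} rad")
```

The point of the sketch is that the "snow" never stops: the oscillation is always there, and the incoming speech only reorganizes its timing.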
The researchers said their findings suggest two things. First, the brain discriminates speech by modulating the phase of the continuously generated theta wave in response to the incoming speech signal. Second, the time-dependent characteristics of this theta wave suggest that the brain samples incoming speech in "chunks" about the length of a syllable in any given language.
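The syllable-length claim follows from simple arithmetic on the numbers above, assuming (as is commonly cited) that spoken syllables typically last on the order of 100–300 ms:

```python
# Back-of-envelope check: theta oscillates at roughly 4-8 Hz,
# so one cycle lasts 1/8 to 1/4 of a second
for freq_hz in (4.0, 8.0):
    period_ms = 1000.0 / freq_hz
    print(f"{freq_hz:.0f} Hz -> {period_ms:.0f} ms per cycle")
# 4 Hz -> 250 ms per cycle, 8 Hz -> 125 ms per cycle:
# about the duration of a typical spoken syllable
```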
When you hear something, it goes from your ears to your brain, where these oscillating fields begin to sample what you are hearing. From that sampling you decode the message, turn it into something you can understand, and decide whether or not you need to act.
It’s remarkable what’s going on under the hood!