
Scientists are using new technology to study the cells behind language comprehension

AILSA CHANG, HOST:

Right now, the sound of my voice is causing a lot of brain activity in an area just above your ear. That's where your brain processes spoken language. NPR's Jon Hamilton reports on a team of scientists who are using a new technology to study the cells that make this possible.

JON HAMILTON, BYLINE: The human brain recognizes speech instantly and without apparent effort. Dr. Eddie Chang, a neurosurgeon at the University of California, San Francisco, is part of a team that's been trying to understand how our brains do this.

EDDIE CHANG: It's a really remarkable feat - translating the sounds that we hear that come in through our ears into things that we understand, like words.

HAMILTON: The team studied eight patients having a type of brain surgery that requires them to remain conscious. During the operation, surgeons temporarily implanted a new kind of probe in an area of the cortex, the brain's outer layer, that is critical for speech perception. Then, the patients listened to dozens of recordings that included all the speech sounds of American English.

(SOUNDBITE OF MONTAGE)

UNIDENTIFIED PERSON #1: It was nobody's fault.

UNIDENTIFIED PERSON #2: Have you got enough blankets?

UNIDENTIFIED PERSON #3: Yet, they thrived on it.

UNIDENTIFIED PERSON #4: Junior, what on Earth's the matter with you?

HAMILTON: Meanwhile, the probe, which is roughly the size and shape of an eyelash, was monitoring nearly 700 individual brain cells. Chang says these cells are highly organized.

CHANG: There is, in fact, a map where specific spots along that cortex are actually tuned to different speech sounds, like the different parts of consonants and vowels.

HAMILTON: Some cells respond to ah sounds while others wait for an oh or buh (ph) or cuh (ph). The researchers had used an older technology to map these cells across the surface of the cortex, but the new probe offered a three-dimensional view that included cells beneath the surface. The scientists thought these deeper cells might respond to the same speech sounds as those on the surface, but Chang says that's not what they found.

CHANG: When you eavesdrop on the activity of hundreds of them across this depth of the cortex, there's actually a tremendous amount of diversity.

HAMILTON: So just beneath the cells responding to an ah sound, there could be cells tuned to buh or cuh.

CHANG: And what that means is that speech sounds - you know, the different parts of consonants and vowels - are being processed by cells that are literally microns apart.

HAMILTON: Chang says this organization may help the brain process speech sounds quickly and efficiently. The study appears in the journal Nature. And David Poeppel, a neuroscientist at New York University and the Max Planck Institute in Germany, says it addresses some basic questions about how the human brain processes language.

DAVID POEPPEL: What is the parts list? And how are those parts put together to make it so smooth and easy to speak, to listen and to connect the stuff that you say to the ideas in your head?

HAMILTON: Poeppel says the study looked at only one brain area in just a few patients. Even so, it shows the potential of a technology that can monitor hundreds of individual neurons instead of just a few.

POEPPEL: Every time you get a finer way to measure something - let's say, a better microscope - you discover a new layer of interesting information, right? So it's more and more fine-grained.

HAMILTON: The research adds to the evidence that the human brain is organized to recognize individual speech sounds rather than entire words or sentences. Poeppel says that seems to confirm an idea that dates back nearly a century.

POEPPEL: The idea is that there's actually an organization of speech sounds that is quite abstract but that holds for all languages 'cause the challenge is, of course, well, we're speaking American English right now, but that's very different from, say, Norwegian.

HAMILTON: Or Urdu or a Bantu language. Poeppel says what all those languages have in common is a set of speech sounds that the brain can transform into meaningful words and sentences.

Jon Hamilton, NPR News.

(SOUNDBITE OF CHINESE MAN ET AL. SONG, "INDEPENDENT MUSIC")

Transcript provided by NPR, Copyright NPR.

