Major Scientific Breakthrough: Brain Signals Translated into Speech

People who have lost the ability to speak because of a stroke or another medical condition now have new hope, thanks to a technological innovation that uses brain activity to produce synthesized speech.

Scientists at the University of California, San Francisco (UCSF) implanted electrodes in the brains of volunteers and decoded signals from the brain centers that coordinate speech. Those signals guided a computer-simulated version of each participant’s vocal tract – lips, jaw, tongue, and larynx – to generate speech through a synthesizer.

The resulting speech was largely intelligible, though not always clear, giving researchers hope that, with further refinement, a clinically viable device could be developed in the coming years for patients who have lost the ability to speak.

“We were shocked when we first heard the results; we couldn’t believe it. It was incredibly exciting that so many aspects of real speech were present in what came out of the synthesizer,” said Josh Chartier, a Ph.D. student at UCSF and co-author of the study. “Clearly, there is still more work to be done to make it more natural and intelligible, but we were very impressed by how much can be decoded from brain activity,” he added, according to Reuters.

Conditions that leave patients unable to speak

Stroke, cerebral palsy, amyotrophic lateral sclerosis (ALS), Parkinson’s disease, multiple sclerosis, brain injury, and cancer – especially in the neck area – are among the conditions that can leave patients unable to speak.

Some of these patients use devices that track the movements of their eyes or facial muscles to spell out words letter by letter, but producing text or speech this way is slow. With such speech synthesizers – like the one used by astrophysicist Stephen Hawking – people can produce up to 10 words per minute, compared with the roughly 100 – 150 words per minute of natural speech.

The research involved five volunteers who were still able to speak: epilepsy patients who had electrodes temporarily implanted in their brains to map the source of their seizures before neurosurgery.

The volunteers read sentences aloud while the activity of the brain regions involved in speech was recorded. The researchers mapped this activity to the vocal tract movements needed to produce the speech and created, for each participant, a “virtual vocal tract” that could be controlled by their brain activity to produce synthesized speech.
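To make the two-stage idea concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption – the toy data, the array shapes, and the simple least-squares linear maps standing in for the recurrent neural networks the researchers actually used – so it shows the shape of the pipeline, not the study’s implementation.

```python
# Minimal sketch of a two-stage neural speech decoder:
#   stage 1: brain activity -> vocal tract movements
#   stage 2: vocal tract movements -> acoustic features for a synthesizer
# All shapes and the linear models are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data (hypothetical dimensions).
n, n_electrodes, n_articulators, n_features = 1000, 64, 12, 32
neural = rng.normal(size=(n, n_electrodes))                            # recorded brain signals
kinematics = neural @ rng.normal(size=(n_electrodes, n_articulators))  # lips, jaw, tongue, larynx
acoustics = kinematics @ rng.normal(size=(n_articulators, n_features)) # synthesizer inputs

def fit_linear(x, y):
    """Least-squares linear map from x to y (stand-in for a trained network)."""
    w, *_ = np.linalg.lstsq(x, y, rcond=None)
    return w

w_articulation = fit_linear(neural, kinematics)   # stage 1
w_acoustics = fit_linear(kinematics, acoustics)   # stage 2

# Decoding new brain activity chains the two stages together;
# a vocoder would then turn the acoustic features into audio.
new_neural = rng.normal(size=(1, n_electrodes))
decoded_movements = new_neural @ w_articulation
decoded_sound_features = decoded_movements @ w_acoustics
print(decoded_sound_features.shape)  # (1, 32)
```

The design choice the sketch mirrors is the intermediate representation: decoding brain activity into vocal tract movements first and only then into sound, rather than mapping brain activity straight to audio – the direct route that, as noted below, worked less well.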

“Very few of us have any real idea, actually, of what’s going on in our mouths when we speak,” said neurosurgeon Edward Chang, senior author of the study, recently published in the journal Nature. “The brain translates those thoughts of what you want to say into movements of the vocal tract, and that’s what we’re trying to decode,” he added.

Researchers were able to synthesize slow, sustained sounds such as “sh” successfully, but ran into difficulty with abrupt sounds such as “b” or “p”. The technology also did not work as well when they tried to decode brain activity directly into speech, without routing it through a virtual vocal tract.

That is why the scientists continue to refine the brain-computer interface needed to produce synthesized speech, offering hope to those who are unable to communicate verbally today.

In addition, future studies will test the technology on people who can no longer speak.
