Study: Scientists Create Artificial Intelligence to Turn Thoughts into Words | Science News
A person’s brain activity can now be translated into recognizable speech by monitoring signals directly from the brain.
Neuroscientists have created a system that translates thought into speech. The technology can produce clearly intelligible spoken words, laying the groundwork for helping people who cannot speak to communicate, according to a press release.
The research was published Tuesday in the journal Scientific Reports.
The new technology makes use of artificial intelligence and speech synthesizers and could allow people with communication issues, such as those recovering from a stroke or living with amyotrophic lateral sclerosis, to regain their ability to communicate with others.
“Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating,” Nima Mesgarani, author and principal investigator at Columbia University’s Mortimer B. Zuckerman Mind Brain Behavior Institute, said. “With today’s study, we have a potential way to restore that power. We’ve shown that, with the right technology, these people’s thoughts could be decoded and understood by any listener.”
Previous research has shown that when people speak or think about speaking, there are distinct and recognizable patterns of activity in their brains. These patterns also appear when people listen or think about listening to others.
In order to bring these thoughts to life, scientists used a vocoder, a computer algorithm that synthesizes speech after being trained on recordings of people talking. Mesgarani said in the press release that this is the same technology used by Apple’s Siri and Amazon’s Alexa that allows the devices to respond to people’s questions.
To train the vocoder, scientists asked patients with epilepsy who were undergoing brain surgery to listen to phrases spoken by different people. Mesgarani and his team tracked the resulting brain activity and used those neural patterns to train the device.
Next, researchers asked the patients to listen to others reciting numbers, while recording brain signals that would be fed to the vocoder. The sound produced by the device was analyzed and refined by artificial intelligence that mirrors the structure of neurons in the brain.
The vocoder was able to recite the numbers in a robotic voice and the researchers asked the patients to listen and report what they heard.
Mesgarani said in the release that patients understood and could repeat the sounds 75 percent of the time, a drastic increase from previous studies. Researchers are hoping this system could be part of an implant in the brain that translates brain signals from a wearer’s thoughts directly into words.
Editor’s Note: Mortimer B. Zuckerman is the chairman of the executive committee and editor-in-chief of U.S. News.