Neuroengineers at New York's Columbia University say they have created a system that can translate human thought into recognizable speech, a development that could transform not only medicine but communication as well.
By monitoring subjects' brain activity, researchers from the Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia were able to train artificial intelligence to translate thought patterns into intelligible sentences, according to a paper published Tuesday in Scientific Reports. The authors see patients whose speech has been impaired by disease or trauma as the first users of the nascent technology.
"We have shown that, with the right technology, these people's thoughts can be decoded and understood by any listener," said Nima Mesgarani, the lead author of the paper.
After early attempts to translate brain activity into recognizable speech fell short, the researchers turned to a vocoder, a computer algorithm that can synthesize speech. The algorithm improves the more it is "trained" on recordings of human speech.
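As background, a vocoder reconstructs audio from a compact set of frame-level parameters rather than from raw samples. The toy sketch below is purely illustrative and is not the model used in the study: the `simple_vocoder` function, its parameters, and the 16 kHz sample rate are all assumptions for the example. It synthesizes a sine-wave "voice" from per-frame pitch and loudness values, which is the basic idea a speech vocoder builds on.

```python
import numpy as np

def simple_vocoder(f0_frames, amp_frames, sr=16000, frame_len=160):
    """Toy vocoder sketch (hypothetical, not the study's model):
    turns per-frame pitch (Hz) and amplitude into a waveform."""
    phase = 0.0
    out = []
    for f0, amp in zip(f0_frames, amp_frames):
        t = np.arange(frame_len)
        inc = 2 * np.pi * f0 / sr          # phase increment per sample
        frame = amp * np.sin(phase + inc * t)
        # carry the phase across frame boundaries so frames join smoothly
        phase = (phase + inc * frame_len) % (2 * np.pi)
        out.append(frame)
    return np.concatenate(out)

# 50 frames of a 120 Hz tone with a fading amplitude envelope
wave = simple_vocoder([120.0] * 50, np.linspace(1.0, 0.2, 50))
```

A real speech vocoder drives a far richer model (harmonics, noise, spectral envelope) with many more parameters per frame, but the control-parameters-to-waveform structure is the same.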
Researchers translate brain signals directly into speech: //t.co/qEStGOoPOW
This breakthrough harnesses the power of speech synthesizers and artificial intelligence to allow computers to communicate directly with the brain in new ways. #ai #BCI #speech #science
— Neuroscience News (@NeuroscienceNew) January 29, 2019
"This technology is used by Amazon Echo and Apple Siri to give verbal answers to our questions," said Dr. Mesgarani, who is also a professor at Columbia's Fu Foundation School of Engineering and Applied Science.
The vocoder was trained to interpret brain activity with the help of Ashesh Dinesh Mehta, a neurosurgeon at the Northwell Health Neuroscience Institute in Long Island and a co-author of the paper.
"Working with Dr. Mehta, we asked epilepsy patients who are already undergoing brain surgery to listen to sentences spoken by different people while measuring patterns of brain activity," Mesgarani said. "These neural patterns trained the vocoder."
After this training was completed, the next phase began: patients listened to a speaker reading digits from 0 to 9 while the algorithm monitored their brain activity and tried to translate it into sound. The result was a robotic-sounding recitation of the digits, which independent listeners could understand and repeat back with an accuracy of about 75 percent.
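The digit-decoding step can be illustrated with a toy sketch. Everything below is synthetic and hypothetical: the 64 recording "channels", the noise level, and the nearest-centroid decoder are stand-ins chosen for the example, not the study's actual electrodes or model, which decoded real neural recordings into audio rather than into labels.

```python
import numpy as np

rng = np.random.default_rng(0)
n_digits, n_channels, trials = 10, 64, 40

# hypothetical setup: each spoken digit evokes a characteristic pattern
# across 64 channels, observed with trial-to-trial noise
prototypes = rng.normal(size=(n_digits, n_channels))
X = np.repeat(prototypes, trials, axis=0) \
    + 0.8 * rng.normal(size=(n_digits * trials, n_channels))
y = np.repeat(np.arange(n_digits), trials)

# hold out a quarter of the trials, then decode them by
# assigning each one to the nearest per-digit mean pattern
idx = rng.permutation(len(y))
train, test = idx[:300], idx[300:]
centroids = np.stack(
    [X[train][y[train] == d].mean(axis=0) for d in range(n_digits)]
)
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()
```

The same train-then-evaluate structure applies to the real system: fit a decoder on brain responses to known speech, then score how well held-out responses can be turned back into something listeners recognize.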
That may seem rather modest, but Dr. Mesgarani said such a result was "far beyond" previous attempts. The researchers plan to improve the system further so that it can decode the brain patterns of a person who is merely thinking of speaking, rather than listening to speech.
"This would be a game changer. Anyone who has lost the ability to speak, whether through injury or disease, would have the opportunity to reconnect with the world around them," Mesgarani said.
The technology will also need to handle more complex words and phrases to become practical. The ultimate goal for the team is to create an implant that synthesizes speech directly from thought.