In a first-of-its-kind experiment, scientists have translated brain signals directly into intelligible speech. It may sound like wild science fiction at first, but this feat could ultimately help people who have lost the ability to speak.
And yes, it could also lead to some futuristic computer interfaces.
The key to the system is an artificial intelligence algorithm that matches patterns of electrical brain activity to what the subject hears, then converts them into speech that actually makes sense to a listener.
We know from previous research that distinctive patterns of neural activity appear in the brain when we speak, or even when we imagine speaking. In this case, the system decodes the brain's responses to heard speech rather than actual thoughts, but with enough development it has the potential to do that too.
"Our voices help us connect to our friends, family, and the world around us, which is why losing the power of one's voice due to injury or disease is so devastating," says team member Nima Mesgarani from Columbia University in New York.
"With today's study, we have a potential path to restoring that power. We've shown that, with the right technology, these people's thoughts could be decoded and understood by any listener."
The algorithm used is called a vocoder: a computer algorithm that can synthesize speech after being trained on recordings of people talking. If you have ever received a spoken reply from Siri or Amazon Alexa, a vocoder produced it.
In other words, Amazon and Apple do not have to program every single word into their devices; the vocoder generates the speech.
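To make the vocoder idea concrete, here is a deliberately simplified sketch (not the algorithm from the study): speech is reduced to a handful of per-frame parameters, in this toy case the amplitudes of a few fixed frequency bands, from which audio can be re-synthesized. The band frequencies, frame size, and projection method are all illustrative assumptions; real vocoders are far more sophisticated.

```python
import numpy as np

# Toy vocoder: parameterize audio as per-frame amplitudes of a few
# frequency bands, then rebuild a waveform from those parameters alone.
SR, FRAME = 8000, 80                         # sample rate, samples per frame
freqs = np.array([300.0, 600.0, 1200.0])     # assumed analysis/synthesis bands

def analyze(audio):
    """Estimate the amplitude of each band in each frame (frames x bands)."""
    frames = audio[: len(audio) // FRAME * FRAME].reshape(-1, FRAME)
    t = np.arange(FRAME) / SR
    basis = np.sin(2 * np.pi * freqs[:, None] * t[None, :])  # bands x FRAME
    # Project each frame onto each band's sinusoid to recover its amplitude.
    return 2 * frames @ basis.T / FRAME

def resynthesize(params):
    """Rebuild audio from the compact per-frame band amplitudes."""
    t = np.arange(params.shape[0] * FRAME) / SR
    env = np.repeat(params, FRAME, axis=0)                   # samples x bands
    return (env * np.sin(2 * np.pi * freqs[None, :] * t[:, None])).sum(axis=1)

# Round trip: a two-band test tone survives analysis + resynthesis.
t = np.arange(SR) / SR
tone = 0.5 * np.sin(2 * np.pi * 300 * t) + 0.25 * np.sin(2 * np.pi * 600 * t)
params = analyze(tone)
rebuilt = resynthesize(params)
print(params.shape, np.allclose(tone, rebuilt, atol=0.05))
```

The point is that one second of audio collapses to just 100 frames of 3 numbers each, which is why a device only needs to generate parameters, not store recordings of every word.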
Here, however, the vocoder was trained not on human speech but on neural activity in the auditory cortex region of the brain, measured in patients undergoing brain surgery while they listened to sentences being read aloud.
Brain signals recorded while the patients listened to the digits 0 through 9 being read out were then fed through the vocoder, and the output was cleaned up by a further AI analysis. The resulting speech matched what the patients had heard, even if the final voice still sounds rather robotic.
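The overall decoding pipeline, recorded neural activity mapped to vocoder parameters that are then turned into audio, can be sketched in miniature. Everything below is a hypothetical stand-in: the study used deep neural networks and a real speech vocoder, while this toy version uses simulated electrode data, ridge regression, and simple sinusoid synthesis.

```python
import numpy as np

# Sketch of the pipeline: neural features per time frame -> learned mapping
# -> vocoder parameters (per-frame band amplitudes) -> synthesized audio.
rng = np.random.default_rng(0)
SR, FRAME = 8000, 80
BANDS = np.array([200.0, 400.0, 800.0])      # toy "vocoder" band frequencies

def synthesize(band_amps):
    """Turn per-frame band amplitudes (frames x bands) into a waveform."""
    n = band_amps.shape[0] * FRAME
    t = np.arange(n) / SR
    audio = np.zeros(n)
    for b, f in enumerate(BANDS):
        env = np.repeat(band_amps[:, b], FRAME)   # step-wise amplitude envelope
        audio += env * np.sin(2 * np.pi * f * t)
    return audio

# Simulated training data: "neural" features are a noisy linear mixture of the
# true vocoder parameters, a crude stand-in for auditory-cortex recordings.
frames, electrodes = 200, 16
true_params = rng.random((frames, len(BANDS)))
mixing = rng.standard_normal((len(BANDS), electrodes))
neural = true_params @ mixing + 0.01 * rng.standard_normal((frames, electrodes))

# Fit a linear decoder (ridge regression via the normal equations).
lam = 1e-3
W = np.linalg.solve(neural.T @ neural + lam * np.eye(electrodes),
                    neural.T @ true_params)

decoded = neural @ W                 # decoded vocoder parameters
audio = synthesize(decoded)          # reconstructed "speech"
err = np.abs(decoded - true_params).mean()
print(f"mean decoding error: {err:.4f}")
```

Swapping the linear decoder for a deep network and the sinusoid bank for a trained speech vocoder gives the general shape of what the researchers describe.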
The technique proved far more effective than previous attempts that used simpler computer models on spectrogram images, visual representations of sound frequencies.
"We found that people could understand and repeat the sounds about 75 percent of the time, which is well beyond any previous attempt," says Mesgarani.
"The sensitive vocoder and powerful neural networks represented the sounds the patients had listened to with surprising accuracy."
There is still much to do, but the potential is enormous. It's worth emphasizing that the system does not turn actual inner thoughts into spoken words, but it might one day, and that is the next challenge the researchers want to tackle.
Further down the line, you might even be able to think your emails onto the screen, or switch on your smart lights, with a single mental command.
It will take time, not least because each of our brains works a little differently: a large amount of training data from each individual person would be required to interpret their thoughts accurately.
In the not-too-distant future, though, we are potentially talking about giving a voice to people who don't have one, whether because of locked-in syndrome, recovery from a stroke, or amyotrophic lateral sclerosis (as in the case of the late Stephen Hawking).
"In this scenario, if the wearer thinks 'I need a glass of water', our system could capture the brain signals from that thought and translate them into synthetic verbal speech," says Mesgarani.
"This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect with the world around them."
The research has been published in Scientific Reports.