
Scientists pull speech directly from the brain – TechCrunch



In an achievement that could ultimately unlock the possibility of speech for people with severe medical conditions, scientists have successfully recreated the speech of healthy subjects by tapping directly into their brains. The technology is a long way from practical use, but the science is real and the promise is there.

Edward Chang, a neurosurgeon at UC San Francisco and co-author of the paper published today in Nature, described the impact of the team's work in a news release: "For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity. This is an exhilarating proof of principle that, with technology already within reach, we should be able to build a device that is clinically viable for patients with speech loss."

To be clear, this is not a magical machine that you sit in and it translates your thoughts into speech. It is a complex and invasive process that decodes not exactly what the subject is thinking, but what they actually spoke. The subjects already had large electrode arrays implanted in their brains for a different medical procedure; the researchers had these lucky people read several hundred sentences aloud while closely recording the signals detected by the electrodes.

The electrode array in question

As it happens, the researchers know that a certain pattern of brain activity occurs after a person thinks of and arranges words (in cortical areas like Wernicke's and Broca's) and before the final signals are sent from the motor cortex to the tongue and mouth muscles. There is a kind of intermediate signal between those, which Anumanchipalli and his co-author, PhD student Josh Chartier, had previously characterized, and which they thought could work for the purpose of reconstructing speech.

By analyzing the audio recordings, the team could determine which muscles and movements would be involved and when (this is fairly well-established science), and from this they built a kind of virtual model of the person's vocal system. The brain activity detected during the session was then mapped onto this virtual model using a machine learning system, essentially allowing a recording of a brain to control a recording of a mouth. It is important to understand that this does not turn abstract thoughts into words – it interprets the brain's concrete instructions to the muscles of the face and determines from those which words the movements would have been forming. It is brain reading, but it is not mind reading.
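To make the two-stage idea concrete, here is a minimal, purely illustrative sketch in Python. It is not the authors' model (the Nature paper used recurrent neural networks trained on real electrocorticography recordings); the array sizes, feature names, random stand-in data, and the use of simple scikit-learn regressors are assumptions made only to show the shape of the pipeline described above.

```python
# Illustrative two-stage decoder: brain activity -> articulatory movements -> acoustic features.
# NOT the published model; random stand-in data and simple MLP regressors are assumptions
# used only to sketch the pipeline the article describes.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_frames = 2000        # time steps recorded while the subject read sentences aloud
n_electrodes = 64      # hypothetical number of brain-recording channels
n_articulators = 12    # hypothetical vocal-tract movement parameters (lips, tongue, jaw...)
n_acoustic = 32        # hypothetical acoustic features a vocoder could turn into audio

# Stand-in training data: neural signals plus the articulator movements and acoustics
# inferred from the audio recorded during the same session.
neural = rng.normal(size=(n_frames, n_electrodes))
articulation = rng.normal(size=(n_frames, n_articulators))
acoustics = rng.normal(size=(n_frames, n_acoustic))

# Stage 1: map brain activity onto the "virtual vocal tract" (articulatory kinematics).
stage1 = MLPRegressor(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
stage1.fit(neural, articulation)

# Stage 2: map articulatory kinematics to acoustic features for speech synthesis.
stage2 = MLPRegressor(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
stage2.fit(articulation, acoustics)

# At synthesis time, new brain activity drives the virtual mouth, which drives the audio.
new_neural = rng.normal(size=(10, n_electrodes))
predicted_articulation = stage1.predict(new_neural)
predicted_acoustics = stage2.predict(predicted_articulation)
print(predicted_acoustics.shape)  # (10, 32): one acoustic feature vector per time frame
```

The point of the intermediate stage is exactly the one described above: the decoded signals encode instructions to the vocal tract rather than sounds themselves, so the mapping goes from brain to movements first, and only then from movements to audio.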

The resulting synthetic speech is not crystal clear, but it is intelligible. Properly set up, it could output 150 words per minute from a person who may otherwise be incapable of speech.

"We still have a way to perfectly imitate the spoken language," said Chartier. "Nevertheless, the accuracy we achieved here is an astonishing improvement in real-time communication compared to what is currently available."

For comparison, a person so affected, for instance by a degenerative muscular disease, often has to speak by spelling out words one letter at a time with their gaze. Picture 5-10 words per minute, with other methods for more severely disabled people going even slower. It is in a way a miracle that they can communicate at all, but this time-consuming and less than natural method is a far cry from the speed and expressiveness of real speech.

If a person could use this method, they would be far closer to ordinary speech, though perhaps at the cost of perfect accuracy. But it is not a magic bullet.

The problem with this method is that it requires carefully collected data from a healthy speech system, from the brain to the tip of the tongue. For many people it is no longer possible to collect this data, and for others the invasive method of collection makes it impossible for a doctor to recommend. And conditions that have prevented a person from ever speaking prevent this method from working as well.

The good news is that this is a start, and there are plenty of conditions it could theoretically work for. And collecting that critical brain and speech recording data could be done preemptively in cases where a stroke or degeneration is considered a risk.

