Scientists have generated speech from brain signals.
Scientists report that they have developed a virtual prosthetic voice, a system that decodes the brain's vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, even those in the mouth. (The physicist and author Stephen Hawking used a muscle in his cheek to type keyboard characters that a computer synthesized into speech.)
"It's an impressive piece of work that takes us to another level of speech restoration" by decoding brain signals, said Anthony Ritaccio, a neurologist and neuroscientist at the Mayo Clinic in Jacksonville, Florida, who was not a member of the research group.
The new system, described on Wednesday in the journal Nature, decodes the brain's motor commands that guide vocal movement during speech – the tapping of the tongue, the narrowing of the lips – and generates intelligible sentences that approximate a speaker's natural cadence.
Earlier implant-based communication systems produced about eight words a minute. The new system generates about 150 words per minute, the pace of natural speech.
The researchers also found that a synthesized voice system trained on one person's brain activity could be used and adapted by another person – an indication that off-the-shelf virtual voice systems could be available someday.
The team plans to conduct clinical trials to further test the system. The biggest clinical challenge may be finding suitable patients: strokes that disable a person's speech often also damage or destroy the areas of the brain that support speech articulation.
This is fascinating research. Congratulations to the researchers.
Mike "Mish" Shedlock