Brain implants that decode language directly – rts.ch

2023-08-24 04:29:21

Giving a voice back to people who have lost the ability to speak because of illness or accident is becoming increasingly plausible. Two brain-implant experiments show significant progress in this area, according to work published Wednesday in the journal Nature.

Pat Bennett, 68, was a dynamic and athletic executive until she was diagnosed more than a decade ago with amyotrophic lateral sclerosis (ALS), known in French as Charcot's disease. This neurodegenerative disease, which progressively deprives the patient of all movement until complete paralysis, first showed itself in her case as speech difficulties, then as the inability to speak at all.

In March 2022, researchers from the Department of Neurosurgery at Stanford University, in the United States, implanted four small silicon arrays of 64 micro-electrodes each. Penetrating only 1.5 millimeters into the cerebral cortex, they record the electrical signals produced by the brain areas involved in the production of language.

The signals are conveyed outside the skull through a bundle of cables and processed by an algorithm, which "learned" over four months to interpret their meaning. It associates the signals with phonemes – the sounds that form the words of a language – and processes them with the help of a language model.
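To make that two-stage idea concrete, here is a minimal, purely illustrative sketch, not the study's actual model: a toy linear classifier stands in for the trained decoder that maps windows of neural features to phoneme probabilities, and a tiny pronunciation lexicon with a word prior stands in for the language model. Every name, dimension, and value below is hypothetical.

```python
# Hypothetical sketch of signal -> phonemes -> language-model decoding.
import numpy as np

PHONEMES = ["HH", "EH", "L", "OW", "W", "ER", "D"]

# Toy pronunciation lexicon and unigram word prior (stand-ins for a real language model).
LEXICON = {"hello": ["HH", "EH", "L", "OW"], "world": ["W", "ER", "L", "D"]}
WORD_PRIOR = {"hello": 0.6, "world": 0.4}

def decode_phonemes(features, weights):
    """Stage 1: map each neural feature window to a phoneme probability
    distribution with a toy linear classifier (stand-in for the trained decoder)."""
    logits = features @ weights                              # (time windows, phonemes)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    return probs / probs.sum(axis=1, keepdims=True)

def score_word(word, phoneme_probs):
    """Stage 2: combine phoneme evidence with the word prior, in log space."""
    spelling = LEXICON[word]
    steps = min(len(spelling), phoneme_probs.shape[0])
    evidence = sum(
        np.log(phoneme_probs[t, PHONEMES.index(spelling[t])] + 1e-9)
        for t in range(steps)
    )
    return evidence + np.log(WORD_PRIOR[word])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    features = rng.normal(size=(4, 16))             # 4 windows of made-up neural features
    weights = rng.normal(size=(16, len(PHONEMES)))  # stands in for learned parameters
    probs = decode_phonemes(features, weights)
    best = max(LEXICON, key=lambda w: score_word(w, probs))
    print("decoded word:", best)
```

In the real systems, the phoneme decoder is a neural network trained on months of the patient's recorded brain signals, and the language model operates over a large vocabulary rather than two words; the sketch only shows how the two stages fit together.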

Reduced error rate

We "can now imagine a future in which we restore a fluid conversation with a person suffering from paralysis" of speech, said Frank Willett, a professor at Stanford and co-author of the study, at a press briefing.

With her brain-machine interface (BMI), Pat Bennett speaks via a screen at a rate of more than 60 words per minute. That is still far from the 150 to 200 words per minute of normal conversation, but already three times faster than the previous record, set in 2021 and already held by the team that took her under its wing. The error rate on a 50-word vocabulary has fallen to less than 10%, from more than 20% previously.

In the second experiment, conducted by Edward Chang's team at the University of California, the device relies on a strip of electrodes placed on the surface of the cortex. Its performance is comparable to that of the Stanford system, with a median of 78 words per minute, five times faster than before.

A huge leap for the patient, paralyzed since a hemorrhage in the brainstem, who until now communicated at a maximum rate of 14 words per minute using a head-movement tracking technique.

Synthetic voice and avatar

His team's brain-machine interface produces language not only as text, but also with a synthesized voice and an avatar that reproduces the patient's facial expressions while speaking. Because "the voice and our expressions are also part of our identity", according to Professor Chang.

The team is now aiming for a wireless version of the device, which would have "profound implications for a patient's independence and social interactions", according to David Moses, co-author of the study and professor of neurosurgery at the University of California, San Francisco.

ats/fgn

