A man with severe paralysis has communicated in sentences thanks to a ‘speech neuroprosthesis’ that translated signals sent from his brain to his vocal tract into words on a screen.
In the past, work to help people communicate has focussed on spelling-based technology, where letters are typed out one at a time. Rather than looking at the signals used to move the arm or hand for typing, this latest research concentrated on translating the signals that would normally control the muscles of the vocal system used for saying words.
This type of technology could lead to a quicker, more natural way of communicating for those with speech loss.
It is the work of a team from the University of California San Francisco (UCSF), which built on more than 10 years of research by UCSF neurosurgeon Dr Edward Chang.
Distinguished Professor Chang, who holds the Joan and Sanford Weill Chair of Neurological Surgery at UCSF, said: “To our knowledge, this is the first successful demonstration of direct decoding of full words from the brain activity of someone who is paralyzed and cannot speak. It shows strong promise to restore communication by tapping into the brain’s natural speech machinery.”
Chang and his team began by analysing the brain recordings of volunteer patients with normal speech who were undergoing neurosurgery. From there, Dr David Moses, a postdoctoral engineer in the Chang lab, developed new ways of decoding these patterns.
Next came a study to test the technology – BRAVO (Brain-Computer Interface Restoration of Arm and Voice) – with the first participant in the trial known as BRAVO1. BRAVO1, a man in his late 30s who suffered a severe stroke 15 years ago, worked with researchers to develop a 50-word vocabulary that the team could recognise from his brain activity.
Following surgery to fit a device over BRAVO1’s speech motor cortex, the team began recording neural activity in this brain region. During each session, he attempted to say each of the 50 words while the surgically implanted electrodes recorded the signals from his speech cortex.
Next, the team used artificial intelligence to translate the patterns of neural activity into words. Finally, BRAVO1 was shown short sentences made up of some of the 50 words and attempted to say them. As he tried, the words were decoded from his brain activity and appeared on screen, one by one.
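The researchers’ actual decoder is far more sophisticated than anything that fits in a few lines, but the general idea – learning to map a window of neural-activity features recorded during an attempted word onto one of a fixed vocabulary – can be illustrated with a toy sketch. Everything below (the vocabulary list, channel counts, the simulated features and the simple classifier) is a hypothetical stand-in, not the UCSF team’s model or data.

```python
# Toy illustration only -- NOT the UCSF system. It mimics the general idea:
# classify a window of neural-activity features, recorded while a word is
# attempted, as one word from a small fixed vocabulary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

VOCAB = ["I", "am", "very", "good", "no", "not", "thirsty", "water"]  # stand-in for the 50-word set
N_CHANNELS = 128   # hypothetical number of electrode channels
N_TIMEBINS = 20    # hypothetical number of time bins per attempted word

def simulate_trial(word_idx):
    """Fake neural features: each word gets its own mean pattern plus noise."""
    template = np.sin(np.arange(N_CHANNELS * N_TIMEBINS) * (word_idx + 1) * 0.01)
    return template + rng.normal(scale=0.5, size=template.shape)

# Build a small training set of (features, word) pairs, one row per attempt.
X = np.array([simulate_trial(i % len(VOCAB)) for i in range(400)])
y = np.array([i % len(VOCAB) for i in range(400)])

clf = LogisticRegression(max_iter=1000).fit(X, y)

# "Decode" a new attempt: the classifier gives a probability for every word,
# and the most likely one is shown on screen. A real system would also weigh
# which word sequences are plausible before committing to an output.
attempt = simulate_trial(3).reshape(1, -1)
probs = clf.predict_proba(attempt)[0]
print("decoded word:", VOCAB[int(np.argmax(probs))])
```

In this sketch each attempted word is decoded independently; decoding whole sentences word by word, as described above, additionally requires deciding where one attempted word ends and the next begins, and favouring word sequences that make sense together.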
He was then asked questions, including “How are you today?” and “Would you like some water?” As he attempted to answer, BRAVO1’s speech appeared on screen: “I am very good,” and “No, I am not thirsty.”
The system decoded his attempted speech with up to 93% accuracy and at speeds of up to 18 words a minute, findings that pave the way for a more natural means of communication for people with speech loss.
Dr Moses said: “This is an important technological milestone for a person who cannot communicate naturally and it demonstrates the potential for this approach to give a voice to people with severe paralysis and speech loss.”
The study appears in the New England Journal of Medicine.