The first “neuroprosthesis,” developed by UCSF researchers in 2021, translates brain signals from a man with severe paralysis directly into words that appear as text on a screen. Using this brain-machine interface, he can also move a robotic arm to manipulate objects. The latest breakthrough in the study makes it possible for him to compose sentences from a larger vocabulary in real time with 94% accuracy. The device’s motor control capabilities have also been enhanced.
The team has expanded the system’s initial 50-word vocabulary to more than 1,100 words by integrating the NATO phonetic alphabet (Alpha, Bravo, Charlie) into the algorithm, enabling the patient to spell out words letter by letter by attempting to say the corresponding code words. In addition, advances in the motor control function now allow the patient to grip objects, rotate them in different directions, and set them down in specific locations – practical actions for everyday life.
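The spelling scheme described above can be sketched in a few lines: once the system decodes which NATO code word the patient attempted to say, mapping code words to letters is straightforward. This is an illustrative simplification only – the decoder producing the code words, and the function name, are hypothetical stand-ins, not the UCSF algorithm.

```python
# Hypothetical sketch: turning decoded NATO code words into spelled text.
# The hard part (decoding the attempted code word from neural activity)
# is assumed to have already happened upstream.
NATO = {
    "alpha": "a", "bravo": "b", "charlie": "c", "delta": "d",
    "echo": "e", "foxtrot": "f", "golf": "g", "hotel": "h",
    "india": "i", "juliett": "j", "kilo": "k", "lima": "l",
    "mike": "m", "november": "n", "oscar": "o", "papa": "p",
    "quebec": "q", "romeo": "r", "sierra": "s", "tango": "t",
    "uniform": "u", "victor": "v", "whiskey": "w", "xray": "x",
    "yankee": "y", "zulu": "z",
}

def spell_from_codewords(decoded_words):
    """Join a sequence of decoded NATO code words into a spelled word."""
    return "".join(NATO[w.lower()] for w in decoded_words)

# Attempting to say "Hotel", then "India", spells out "hi".
print(spell_from_codewords(["Hotel", "India"]))  # -> hi
```

Spelling this way trades speed for coverage: any word can be produced, even though each letter costs one decoding step.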
“This represents a big step forward for the clinical viability for this technology to restore function to paralyzed patients,” said UCSF neurosurgeon Edward F. Chang, MD, chair of the Department of Neurological Surgery, who led the initial landmark study.
“This multifunctional system taps into the brain’s learning mechanisms and uses machine learning algorithms that then adapt with the patient,” said UCSF neurologist Karunesh Ganguly, MD, PhD, principal investigator for the ongoing trial.
Building on the patient’s skill set for plug-and-play capability
The patient has a high-performance electrocorticography (ECoG) neural interface, implanted by Chang at the start of the trial over the region of his brain’s sensorimotor cortex that controls speech. The patient’s neural activity was recorded as he attempted to say each of the initial 50 vocabulary words. The interface translates brain signals intended to control the vocal-tract muscles into words that appear onscreen. The integration of the phonetic alphabet gives the patient more flexibility in his communication.
He recently said to the research team, “You all stay safe from the virus.”
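The word-decoding step described above – matching a pattern of neural activity to one of the attempted vocabulary words – can be caricatured as a nearest-centroid classifier. Everything here is a hypothetical simplification for illustration: the toy vocabulary, the random 8-dimensional “features,” and the distance rule stand in for the far more sophisticated machine learning used in the actual system.

```python
# Hypothetical sketch: classify a neural feature vector as one of a few
# attempted words by finding the closest recorded mean activity pattern.
import numpy as np

rng = np.random.default_rng(0)
words = ["water", "hungry", "hello"]                 # toy stand-in vocabulary
centroids = {w: rng.normal(size=8) for w in words}   # per-word mean activity

def decode_word(features):
    """Return the word whose recorded mean activity is closest."""
    return min(words, key=lambda w: np.linalg.norm(features - centroids[w]))

# A noisy attempt at "hello" should still land on the right centroid.
attempt = centroids["hello"] + 0.1 * rng.normal(size=8)
print(decode_word(attempt))  # -> hello
```

Real decoders model the temporal structure of speech attempts rather than a single snapshot, but the core idea – learned patterns per word, then nearest match – is the same.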
The motor control capability works through the same principle. By recording the patient’s brain signals as he attempted to move his hand to control a joystick, the researchers developed an algorithm that translated those signals into actions carried out by a robotic arm. The team has improved the algorithm’s ability to decode those signals, enabling the patient to control a high degree-of-freedom robotic arm and manipulate objects with greater dexterity.
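The calibration loop described above – record neural activity during attempted movements, then fit a mapping from those signals to motion commands – can be sketched as a least-squares linear decoder. The channel counts, simulated data, and linear model are hypothetical assumptions for illustration; the study’s actual decoder and the arm’s control scheme are not described at this level of detail.

```python
# Hypothetical sketch: fit a linear decoder from neural features to
# robotic-arm velocity commands using simulated calibration data.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_dof = 16, 7            # e.g. a 7-degree-of-freedom arm
W_true = rng.normal(size=(n_channels, n_dof))

# Simulated calibration: neural features recorded while the patient
# attempted joystick movements, paired with the intended velocities.
X = rng.normal(size=(200, n_channels))
V = X @ W_true + 0.01 * rng.normal(size=(200, n_dof))

W, *_ = np.linalg.lstsq(X, V, rcond=None)   # fit the decoder

def decode_velocity(features):
    """Map one frame of neural features to an arm velocity command."""
    return features @ W

print(decode_velocity(X[0]).shape)  # -> (7,)
```

Improving such a decoder – the “denoising” Ganguly describes below – amounts to isolating which components of the recorded signals actually carry movement intent.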
“We’ve been very focused on accuracy and can now zero in on what specific brain signals look like and what they mean,” Ganguly said. “Denoising is a good term for it.
“The ECoG interface is very stable and robust,” he continued. “It allows for long-term progress – the patient doesn’t have to relearn things. He can do these tasks day after day. Even a month after learning a task, he can do it again without a drop in performance. This way, we’re able to move forward, build on his skill set, and enable plug-and-play capability.”
Expanding access to more patients
The research team is working to increase the system’s practical applications. “For our next phase, the device will be wheelchair-mounted and more usable in a real-world environment,” Ganguly said.
He anticipates that the team’s work with the first patient over the past two years will translate to shorter implementation times for future patients. “It should take only weeks,” he said.
“We are creating technology that works for people who are severely paralyzed. We hope to make it more accessible to those with less severe paralysis. This multifunctional device allows for both speech and motor control, which paves the path for broader functionality.”
Neurology and neurosurgery research and treatment take place within the UCSF Weill Institute for Neurosciences.
To learn more
UCSF Neurology and Neurosurgery
Phone: (800) 444-2559 | Fax: (415) 353-4395