
Improving Measurement of Parkinson’s Disease Severity with AI

UCSF researchers develop video-based tracking system to quantify motor symptoms and provide new clinical insights for personalized treatment.

 

Despite recent advances in the treatment of Parkinson’s Disease, accurately measuring the progression of symptoms in this neurological disorder remains a challenge. While noticeable symptoms like tremor, stiffness and slowing of movement can be observed, there have been few precise ways to quantify changes in symptoms outside of research laboratories and in routine clinical practice.

To provide more personalized treatment based on each individual’s disease state and progression, researchers at the University of California San Francisco (UCSF) developed a machine learning (ML)-enabled, video-based analysis system to quantify and validate motor symptom severity in patients with Parkinson’s Disease (PD). Their AI pipeline, running on standard clinical videos, was able to determine the severity of PD symptoms from video clips just a few seconds long.

Their study appeared online in the June 25, 2024, issue of npj Parkinson’s Disease.

The system uses single-view, seconds-long videos recorded on devices such as smartphones, tablets, and digital cameras, eliminating the need for expensive, specialized equipment. The researchers designed the framework to provide a comprehensive movement dataset and an interpretable video-based system able to predict high versus low PD motor symptom severity. The system automatically extracts a large array of features representing movement characteristics from raw, unedited video recordings of PD patients performing motor tasks. Such a system became possible only recently, thanks to advances in machine learning and computer vision that allow algorithms to track movement at key anatomical positions directly from video, without the need for physical markers such as wearable sensors.
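The paper’s full feature set is not reproduced here, but the general idea of turning tracked keypoints into movement features can be shown with a minimal sketch in Python. The function below is illustrative only: it assumes thumb-tip and index-fingertip coordinates have already been extracted for each frame of a finger-tapping clip, and it derives two simple, hypothetical features, tap amplitude and tap rate.

import numpy as np

def finger_tap_features(thumb_xy, index_xy, fps):
    """Illustrative features from a finger-tapping clip.
    thumb_xy, index_xy: (n_frames, 2) arrays of normalized keypoint
    coordinates; fps: frames per second of the source video."""
    # Thumb-to-index distance ("aperture") in each frame
    aperture = np.linalg.norm(thumb_xy - index_xy, axis=1)
    # Local maxima of the aperture correspond roughly to one peak per tap
    peaks = [i for i in range(1, len(aperture) - 1)
             if aperture[i] > aperture[i - 1] and aperture[i] > aperture[i + 1]]
    amplitude = float(np.mean(aperture[peaks])) if peaks else 0.0
    rate_hz = len(peaks) * fps / len(aperture)  # taps per second
    return {"tap_amplitude": amplitude, "tap_rate_hz": rate_hz}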

“Our framework effectively expands on previous research in PD quantification and addresses many of the shortcomings for a simple yet comprehensive video-based solution,” said co-senior study author Reza Abbasi-Asl, PhD, UCSF Assistant Professor of Neurology. “Our approach extracted and identified salient movement features that could be used to train accurate ML models for predicting low and high-severity motor impairment states.”

The research team used clinical data from 31 participants with PD who were evaluated at UCSF as part of the multi-day UCSF “Parkinson’s Spectrum” cohort study. For each patient, they recorded both a full-body video of walking/gait and a video of a finger-tapping task. As part of the study protocol, the standardized video recordings were taken while patients were both on and off dopaminergic medications, which can improve symptoms. The positions of individual joints in each frame were then extracted using Google’s MediaPipe computer vision software, in collaboration with co-investigator Anupam Pathak, PhD, and Google Research.
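For readers curious what per-frame joint extraction looks like in practice, here is a minimal sketch using MediaPipe’s publicly documented Pose solution in Python. It illustrates the general technique of markerless landmark tracking rather than the study’s actual code, and the video filename is a placeholder.

import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture("gait_clip.mp4")  # placeholder path to a short clinical video
frames_landmarks = []                     # one entry per frame with a detected pose

with mp_pose.Pose(static_image_mode=False, model_complexity=1) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB images; OpenCV decodes frames as BGR
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            frames_landmarks.append(
                [(lm.x, lm.y, lm.z, lm.visibility)
                 for lm in results.pose_landmarks.landmark])

cap.release()
print(f"extracted landmarks for {len(frames_landmarks)} frames")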

The UCSF team then devised a data-driven approach that not only validated and robustly quantified established clinical movement signs but also identified new clinical insights, including pinkie finger movements as well as lower limb and axial features of gait that had not previously been evaluated in relation to clinical severity in PD.
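As a rough illustration of how such movement features can then be related to severity labels, the sketch below fits an interpretable linear classifier to a placeholder feature matrix. The data, feature count, and choice of logistic regression are assumptions for demonstration and do not reflect the study’s actual models.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row of movement features per video clip (placeholder random data);
# y: 1 = high motor symptom severity, 0 = low, from clinical ratings
rng = np.random.default_rng(0)
X = rng.normal(size=(62, 20))
y = rng.integers(0, 2, size=62)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# After fitting, the model's coefficients indicate which movement
# features contribute most to the high/low-severity prediction.
clf.fit(X, y)
weights = clf.named_steps["logisticregression"].coef_.ravel()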

“The field of movement disorders is in need of better tools to measure, monitor, and track disease signs and symptoms in a straightforward, reliable, and objective way,” said study author Jill Ostrem, MD, a neurologist and Medical Director and Division Chief of the UCSF Movement Disorders and Neuromodulation Center. “This study demonstrates this may be possible using simple standardized video recordings.”

The researchers are planning follow-up studies to further refine their framework, increase its degree of automation, and validate it in larger, representative cohorts. They also plan to extend the framework to incorporate additional motor modalities, such as facial expressions and speech, and to adapt it for home use. They hope to explore the framework’s utility in predicting other outcomes in PD and to apply it to other neurological movement disorders such as dystonia and essential tremor.

“Using standard videos combined with interpretable AI techniques can assist neurologists in the treatment of patients with Parkinson’s Disease and other neurological movement disorders,” said co-senior study author Simon Little, MBBS, PhD, UCSF Associate Professor of Neurology. “Objective, video-based readouts of Parkinson’s severity could support quicker and better diagnostics and treatment in future.”

 

About UCSF Health: UCSF Health is recognized worldwide for its innovative patient care, reflecting the latest medical knowledge, advanced technologies and pioneering research. It includes the flagship UCSF Medical Center, which is a top-ranked hospital, as well as UCSF Benioff Children’s Hospitals, with campuses in San Francisco and Oakland; Langley Porter Psychiatric Hospital and Clinics; UCSF Benioff Children’s Physicians; and the UCSF Faculty Practice. These hospitals serve as the academic medical center of the University of California, San Francisco, which is world-renowned for its graduate-level health sciences education and biomedical research. UCSF Health has affiliations with hospitals and health organizations throughout the Bay Area. Visit https://ucsfhealth.org. Follow UCSF Health on Facebook or on Twitter.