AI smartphone tool can diagnose strokes within minutes


Researchers have created a new tool that can diagnose strokes with the accuracy of an emergency room clinician, based only on an interaction with a smartphone.

Researchers at Penn State and Houston Methodist Hospital have designed a machine learning model to aid in, and potentially speed up, stroke diagnosis by physicians in a clinical setting. Within minutes of an interaction with a smartphone, the tool can diagnose a stroke based on abnormalities in a patient’s speech and facial muscle movements.

James Wang, professor of information sciences and technology at Penn State, said: “When a patient experiences symptoms of a stroke, every minute counts, but when it comes to diagnosing a stroke, emergency room physicians have limited options: send the patient for often expensive and time-consuming radioactivity-based scans or call a neurologist – a specialist who may not be immediately available – to perform clinical diagnostic tests.”

Diagnosing strokes with machine learning

Physicians have to draw on their training and experience to decide at what stage a patient should be sent for a CT scan, and the team is trying to emulate that process using machine learning. Their approach is the first to assess the presence of stroke in emergency room patients with suspected stroke by using computational facial motion analysis and natural language processing to identify abnormalities in a patient’s face or voice, such as a drooping cheek or slurred speech.
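As a rough illustration only, the sketch below shows how per-patient facial-motion and speech features of this kind might be fused and passed to a simple classifier. The feature names, the synthetic data and the logistic regression model are assumptions made here for clarity; they are not the team’s actual pipeline.

```python
# Minimal sketch (not the authors' code) of fusing facial-motion and speech
# features for stroke screening. All features and labels below are synthetic
# placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 80

# Hypothetical per-patient features:
#   facial: mouth-corner motion asymmetry, eyebrow motion asymmetry
#   speech: words per second, fraction of dysfluent (slurred/repeated) words
facial = rng.normal(size=(n_patients, 2))
speech = rng.normal(size=(n_patients, 2))
X = np.hstack([facial, speech])          # fused feature vector per patient
y = rng.integers(0, 2, size=n_patients)  # 1 = stroke, 0 = no stroke

model = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")
```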

The tool could help emergency room physicians determine critical next steps for the patient more quickly and could be utilised by caregivers or patients to make self-assessments before reaching the hospital.

“This is one of the first works that is enabling Artificial Intelligence (AI) to help with stroke diagnosis in emergency settings,” said Sharon Huang, associate professor of information sciences and technology at Penn State.

Training the model

The researchers built a dataset from more than 80 patients experiencing stroke symptoms at Houston Methodist Hospital in Texas. Each patient was asked to perform a speech test, used to analyse their speech and cognitive communication, while being recorded on an Apple iPhone.
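For illustration, a recording of such a speech test could be summarised as a fixed-length feature vector along the lines of the sketch below. The file name and the use of MFCC features via librosa are assumptions chosen for concreteness; the study’s actual speech and cognitive-communication measures are not reproduced here.

```python
# Illustrative sketch (assumed, not from the paper) of turning a smartphone
# speech recording into simple numeric features.
import librosa
import numpy as np

# "patient_speech_test.wav" is a hypothetical file name.
audio, sr = librosa.load("patient_speech_test.wav", sr=16000)  # resample to 16 kHz

# MFCCs as a generic proxy for articulation-related speech characteristics
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)

# Summarise the recording as a fixed-length vector (mean and std per coefficient)
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(features.shape)  # (26,)
```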

Huang said: “The acquisition of facial data in natural settings makes our work robust and useful for real-world clinical use, and ultimately empowers our method for remote diagnosis of stroke and self-assessment.”

The researchers found that the model achieved 79% accuracy, comparable to clinical diagnosis by emergency room doctors, who use additional tests such as CT scans. The model could also save valuable time in diagnosing a stroke, with the ability to assess a patient in as little as four minutes.

John Volpi, a vascular neurologist and co-director of the Eddy Scurlock Stroke Center at Houston Methodist Hospital, said: “There are millions of neurons dying every minute during a stroke. In severe strokes it is obvious to our providers from the moment the patient enters the emergency department, but studies suggest that in the majority of strokes, which have mild to moderate symptoms, a diagnosis can be delayed by hours and by then a patient may not be eligible for the best possible treatments.”

Volpi, a co-author on the paper, said: “If we can improve diagnostics at the front end, then we can better expose the right patients to the right risks and not miss patients who would potentially benefit.

“We have great therapeutics, medicines, and procedures for strokes, but we have very primitive and, frankly, inaccurate diagnostics.”
