With a highly “explainable” approach to predicting the outcomes of atrial fibrillation treatment, a UW team gives doctors and patients a new tool for treating a leading cause of strokes and heart failure.
A team from UW’s Department of Bioengineering and Division of Cardiology published a new, machine learning-based approach for predicting the recurrence of atrial fibrillation (AF) in patients after they receive ablation treatment for the condition. The work appeared in Nature Communications Medicine.
“Our approach creates an easily digestible visualization of why the AF might come back,” Bioengineering’s Professor Patrick Boyle said. He co-led the study with Dr. Nazem Akoum, an arrhythmia specialist and Section Head of the Cardiac Electrophysiology Service at UW’s School of Medicine. Trainees from Boyle’s lab, Savannah Bifulco (PhD 2023, now at Boston Scientific) and MD/PhD student Matthew Magoon, were co-lead authors on the paper.
The approach relies on what is called a “random forest” model and Shapley additive explanations (SHAP). “This type of algorithm is relatively straightforward – kind of the humble station wagon compared to the fancy sports car of more sophisticated deep learning methods, but there’s a method to our madness,” Boyle said. “Machine learning is not inherently explainable, but it’s much easier with random forests to understand what’s going on within the black box.”
Explainability refers to human users’ ability to parse and understand how the artificial intelligence or machine learning application arrived at its predictions or outputs. Users are more likely to trust and base their decisions on AI results that are highly explainable.
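To make the idea concrete, here is a minimal, hypothetical sketch (in Python, using the scikit-learn and shap libraries) of what pairing a random forest with SHAP can look like. The feature names, synthetic data, and model settings below are illustrative assumptions, not the team’s actual pipeline:

```python
# Minimal sketch of a random forest + SHAP workflow.
# All feature names and data are hypothetical, for illustration only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "atrial_volume_ml": rng.normal(130, 25, n),   # size of the atrial chambers
    "fibrosis_pct": rng.uniform(0, 40, n),        # burden of diseased tissue
    "scar_coverage_pct": rng.uniform(0, 60, n),   # lasting ablation scar
    "hypertension": rng.integers(0, 2, n),        # history of hypertension (0/1)
})
y = rng.integers(0, 2, n)  # 1 = AF recurred after ablation (synthetic labels)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles,
# attributing each prediction to the input features.
explainer = shap.TreeExplainer(model)
explanation = explainer(X)  # shape: (patients, features, classes)

# Cohort-level view: which factors drive recurrence predictions overall.
shap.plots.beeswarm(explanation[:, :, 1])
```

The beeswarm plot is one common way SHAP results are rendered “digestible”: each dot is one patient, and its position shows how strongly that patient’s value of a factor pushed the prediction toward or away from recurrence.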
Researchers worldwide, including the leading UW CSE team that developed SHAP, have emphasized that healthcare professionals, patients and their loved ones, and government medical regulators are less likely to engage with (and potentially benefit from) “black box” AI/ML technology if essential information about the underlying system is poorly communicated.
Having the model show its work gives us insight into how the decision was made and lowers the barrier to adoption by healthcare professionals. – Professor Patrick Boyle

In the case of the team’s model of atrial fibrillation recurrence after ablation treatment, explainability means quantifying how much the prediction depended on each of several factors: the size of the atrial chambers of the patient’s heart, where the heart shows diseased tissue, the locations where ablation treatment created lasting scar tissue, and the patient’s history of hypertension.
“It’s not just broad strokes,” Boyle said. “We can zoom in on every specific case and how the prediction was made… Medical stakeholders crave a deeper understanding of why the recommendation was made or where the insight came from.”
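In SHAP terms, that per-case “zoom” is a per-patient breakdown of the prediction. Continuing the hypothetical sketch above, a waterfall plot shows how each factor pushed one patient’s predicted recurrence risk up or down (patient index 0 is arbitrary):

```python
# Per-patient view: how each factor pushed this one prediction up or down.
# Continues the hypothetical sketch above; patient 0 is arbitrary.
patient = explanation[0, :, 1]  # one patient's contributions toward "recurrence"

# Rank the factors by the magnitude of their contribution.
for i in np.argsort(-np.abs(patient.values)):
    print(f"{X.columns[i]:>20}: {patient.values[i]:+.3f}")

shap.plots.waterfall(patient)  # visual 'show your work' for this patient
```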
Atrial fibrillation affects about 10.5 million adults in the United States, or about 4.5 percent of the adult population, according to a recent study supported by the National Heart, Lung, and Blood Institute. This irregular, fast rhythm in the upper chambers of the heart can cause blood clots, strokes, and heart failure. AF is often managed with medication, including anticoagulants to lower stroke risk and antiarrhythmic drugs to control the rhythm. If the rhythm-control drugs don’t work or a patient can’t tolerate them, what is known as catheter ablation is the next step: a thin tube is inserted into the heart, and heat, cold, or pulsed electric fields are used to strategically destroy the small areas of tissue where the abnormal beating gets started.
With a highly explainable prediction of how likely AF is to return after ablation treatment, doctors and patients can make better-informed decisions with greater confidence – about whether to proceed, how the treatment may affect a patient’s life, and what follow-up treatment might be needed.
Alongside the paper, the team also published a pair of supporting resources: a dataset of models reconstructed from the MRI scans of 82 atrial fibrillation patients, and all of the source code used for the explainable ML analysis in the study.