Originally published by our sister publication Anesthesiology News

SINGAPORE—An external validation of a novel surgical transfusion risk prediction model has concluded that the machine learning model, dubbed S-PATH, shows excellent discrimination and generalizability across diverse healthcare institutions.

The study found that for identifying patients who need preoperative blood typing and antibody screening, the S-PATH approach was consistently more efficient than the conventional maximum surgical blood ordering schedule (MSBOS), a finding the investigators said supports the model's viability as a generalizable tool.

“During the COVID-19 pandemic, our hospital system was very concerned about blood utilization,” said Sunny Lou, MD, PhD, an anesthesiology instructor at Washington University School of Medicine in St. Louis. “At the same time, I was looking for a research project where machine learning could potentially make an impact on clinical care, so it seemed like a good opportunity to try to improve our presurgical blood ordering practices.”

With few methods available to estimate surgical transfusion risk beyond MSBOS, Lou and her colleagues developed S-PATH, a machine learning model that uses several predictor variables such as patient demographics, comorbidities, preoperative laboratory values and elective surgery status, as well as institution- and procedure-specific historical transfusion risk data. They then compared S-PATH with MSBOS for guiding presurgical blood typing and antibody screening decisions (Anesthesiology 2022;137[1]:55-66).


“I think we all recognize that a patient’s risk of transfusion is going to depend on the patient and the procedure,” Dr. Lou said. “We trained our personalized model to take both into account.”

In the current study, the researchers sought to further assess the generalizability of S-PATH across a diverse group of institutions, as well as identify institution-level predictors of the model’s performance. They used 2020-2021 data from the Multicenter Perioperative Outcomes Group (MPOG), which includes electronic health record–derived data from a broad range of institutions across the United States. Except for obstetric and nonoperative cases, all surgical procedures were included.

The study’s primary outcome was red cell transfusion during surgery. The ability of S-PATH to rank-order patients by transfusion risk was evaluated at each institution and then compared with the MSBOS approach.
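The rank-ordering performance described above is commonly quantified as the area under the receiver operating characteristic curve (AUROC): the probability that a randomly chosen transfused patient receives a higher risk score than a randomly chosen non-transfused patient. As a generic illustration (not the authors' code, and using made-up scores), it can be computed directly from that pairwise definition:

```python
def auroc(scores, labels):
    """Probability a positive case outscores a negative case (ties count half).

    scores: predicted risk for each patient; labels: 1 = transfused, 0 = not.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Compare every positive-negative pair; 1.0 for a win, 0.5 for a tie.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


# Hypothetical example: a model that perfectly separates the two groups.
perfect = auroc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])
```

An AUROC of 1.0 means perfect rank-ordering, while 0.5 is no better than chance; in practice, libraries such as scikit-learn compute the same quantity more efficiently from sorted scores.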

Presenting at the 18th World Congress of Anaesthesiologists (abstract AP02.09), Dr. Lou noted that 47 institutions and 3,455,295 surgical cases were included in the analysis. Across institutions, the incidence of red cell transfusion ranged from 0.0% to 6.5% (median, 1.6%).

To achieve a predetermined benchmark of 96% sensitivity, the machine learning tool recommended blood typing and antibody screening in a median of 32.4% of patients (IQR, 26.3%-42.4%), compared with 53.4% (IQR, 46.8%-61.4%) for the MSBOS approach.
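The comparison above rests on threshold tuning: for each institution, the risk cutoff is chosen so that at least 96% of transfused patients would have been flagged for screening, and the fraction of all patients flagged at that cutoff is the screening burden. A minimal sketch of that idea, using hypothetical risk scores rather than S-PATH output:

```python
def screening_fraction(risks, transfused, target_sens=0.96):
    """Find the highest risk cutoff meeting a sensitivity target.

    risks: predicted transfusion risk per patient (hypothetical values).
    transfused: 1 if the patient actually received red cells, else 0.
    Returns (threshold, fraction of patients flagged for screening),
    or None if no cutoff reaches the target sensitivity.
    """
    total_pos = sum(transfused)
    if total_pos == 0:
        return None
    best = None
    # Sensitivity is non-increasing as the cutoff rises, so the last
    # qualifying cutoff in ascending order flags the fewest patients.
    for t in sorted(set(risks)):
        flagged = [r >= t for r in risks]
        tp = sum(1 for f, y in zip(flagged, transfused) if f and y)
        if tp / total_pos >= target_sens:
            best = (t, sum(flagged) / len(risks))
    return best


# Hypothetical cohort: two transfused patients with the highest risks.
cutoff, frac = screening_fraction(
    [0.9, 0.8, 0.7, 0.2, 0.1, 0.05], [1, 1, 0, 0, 0, 0])
```

In this toy cohort, a cutoff of 0.8 captures both transfusions while flagging only a third of patients, mirroring how a better-calibrated ranking lets an institution hit the same sensitivity benchmark with fewer screens.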

“The personalized model was very consistent in terms of reducing the number of recommended screens by about a third,” Dr. Lou said. “Furthermore, the overall performance of the model was consistently excellent across all hospitals.”

Perhaps not surprisingly, the researchers were encouraged by their findings.

“There have been many examples in the literature of machine learning models that have failed to generalize,” she added. “We were excited to see that our model continued to be so robust across each of the different hospitals. To the best of my knowledge, it’s one of the most broadly tested machine learning models at this point that has continued to show good performance.”

Dr. Lou and her colleagues ultimately hope to see S-PATH used as a clinical decision support tool for preoperative blood orders, although the tool will require refinement before that becomes a reality. In the meantime, interested clinicians can visit the model’s GitHub page (bit.ly/spath), where a calculator and downloadable code are available.

“If you know how to use Jupyter Notebook, you can go to GitHub and enter the values for any patient and the model will output a prediction,” Dr. Lou explained. “Practically speaking, to get it broadly implemented would probably require it to be integrated into the electronic health record, and there are substantial challenges there. We’re still working on that piece.”

Steven M. Frank, MD, said the research is exciting because it addresses the imperfect science of determining preoperative blood orders, a practice that is based on methods first described in the 1970s.

“Many institutions over-order blood, using a ‘better safe than sorry’ approach, which is wasteful of time, money and blood,” commented Dr. Frank, a professor of anesthesiology and critical care medicine at Johns Hopkins School of Medicine, in Baltimore.

“Of course,” Dr. Frank added, “under-ordering blood can be a major patient safety issue. The investigators have clearly demonstrated the value of MPOG combined with machine learning in order to improve practice. It remains to be determined how practical implementation of machine learning can be accomplished, which I believe is still in its infancy.”

By Michael Vlessides


Drs. Frank and Lou reported no relevant financial disclosures. The abstract won first prize as the best clinical research abstract at the meeting.