Study Outlines Challenges and Potential Fixes for Utilizing AI to Predict Opioid Use Disorder

In 2019, more than 10 million Americans misused prescription opioids, and in 2020, opioids were involved in nearly 75% of overdose deaths. According to the United States Centers for Disease Control and Prevention, overdose deaths involving opioids, including prescription opioids, heroin and synthetic opioids such as fentanyl, have increased eightfold since 1999.

As researchers and the health care community search for effective ways to mitigate the opioid epidemic, rapid advances in machine learning show promise. Access to data and machine learning frameworks has enabled the development of models that use health care data to address various aspects of the opioid crisis. Health care databases, for example, can help researchers and clinicians identify at-risk patients by drawing on a wide variety of data.

However, do these health care-based machine learning models accurately predict opioid use disorder? That is what researchers at Florida Atlantic University’s College of Engineering and Computer Science set out to investigate. They examined peer-reviewed journal papers and conducted the first systematic review analyzing not only the technical aspects of machine learning applied to predicting opioid use, but also the published results.

Their objective was to determine whether these machine learning techniques are useful and, more importantly, reproducible. For the study, they reviewed 16 peer-reviewed journal papers that used machine learning models to predict opioid use disorder and examined how the papers trained and evaluated those models.

The results, published in the journal Computer Methods and Programs in Biomedicine, indicate that while machine learning models applied to the prediction of opioid use disorder may be useful, there are important ways to make these models more transparent and reproducible, which would ultimately increase their value to research.

For the systematic review, the researchers searched Google Scholar, Semantic Scholar, PubMed, IEEE Xplore and Science.gov. They extracted information that included each study’s goal, the dataset used, the cohort selected, the types of machine learning models created, the model evaluation metrics, and the details of the machine learning tools and techniques used to build the models.

Findings showed that of the 16 papers, three created their own dataset, five used a publicly available dataset and the remaining eight used a private dataset. Cohort sizes ranged from the low hundreds to more than half a million. Six papers used a single type of machine learning model, while the remaining ten used up to five different models. Most papers did not adequately describe the machine learning methods and tools used to produce their results, and only three published their source code.

“The reproducibility of papers using machine learning for health care applications can be improved upon,” said Oge Marques, Ph.D., co-author and a professor in FAU’s Department of Electrical Engineering and Computer Science. “For example, even though health care datasets can be hindered by privacy laws and ethical considerations, researchers should follow machine learning best practices. Ideally, the code should be publicly available.”

The researchers’ recommendations are threefold. First, use the area under the precision-recall curve (AUPRC), a metric that is more informative for imbalanced datasets in which the negative class is far more common and true-negative predictions carry little value. Second, in this critical area of health care, interpretable models should be used whenever possible, and non-interpretable models, also known as “black-box” models, should be avoided.

If that is not possible and a non-interpretable model must be used to predict opioid use disorder, they suggest documenting the justification for its use. Finally, to ensure transparency and reproducibility of results, the researchers recommend adopting checklists and other documentation practices before submitting machine-learning-based studies for review and publication. Better documented and publicly available studies will help the research community advance the field.
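To illustrate the first two recommendations, the following is a minimal, hypothetical sketch, not drawn from the reviewed papers: it trains an interpretable logistic regression on a synthetic, imbalanced dataset with scikit-learn and reports AUPRC alongside ROC AUC. The dataset, model and parameter choices are illustrative assumptions only.

```python
# Minimal sketch (not from the study): evaluating an interpretable classifier
# with AUPRC on a synthetic, imbalanced dataset using scikit-learn.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a health care cohort: roughly 5% positive (at-risk) cases.
X, y = make_classification(
    n_samples=20_000, n_features=20, weights=[0.95, 0.05], random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=42
)

# Logistic regression is an interpretable model: its coefficients can be
# inspected to see how each feature influences the predicted risk.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# AUPRC (average precision) focuses on the rare positive class, so it is
# more informative than ROC AUC when negatives dominate the dataset.
print("AUPRC  :", round(average_precision_score(y_test, scores), 3))
print("ROC AUC:", round(roc_auc_score(y_test, scores), 3))
```

Because only a small fraction of the synthetic cases are positive, ROC AUC can look deceptively high while AUPRC more directly reflects how well the model ranks the rare at-risk cases.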

The researchers note that the absence of good machine learning reproducibility practices in the papers makes it difficult to verify their claims. For example, the evidence presented may fall short of the accepted standard, or a claim may hold only in a narrower setting than stated.

“Journal papers would be more valuable to the research community and their suggested application if they follow good practices of machine learning reproducibility in order for their claims to be verified and used as a solid base for future work,” said Marques. “Our study recommends a minimum set of practices to be followed before accepting machine-learning-based studies for publication.”

Christian Garbin, the study’s first author and a Ph.D. candidate, and Nicholas Marques, an M.S. student in data science and analytics at the College of Engineering and Computer Science and a National Science Foundation Research Traineeship Program scholar, are co-authors of the study.

“Opioid use disorder is a public health concern of the first magnitude in the United States and elsewhere,” said Stella Batalama, Ph.D., dean, FAU College of Engineering and Computer Science. “Harnessing the power and potential of machine learning to predict and prevent one’s risk of opioid use disorder holds great promise. However, to be effective, machine learning methods must be reliable and reproducible. This systematic review by our researchers provides important recommendations on how to accomplish that.”
