As the COVID-19 pandemic has swept the world, researchers have published hundreds of papers every week reporting their findings, many of which have not undergone a thorough peer review process to gauge their reliability.
In some cases, poorly validated research has massively influenced public policy, as when a French team reported that COVID patients were cured by a combination of hydroxychloroquine and azithromycin. The claim was widely publicized, and soon U.S. patients were being prescribed these drugs under an emergency use authorization. Subsequent research involving larger numbers of patients has cast serious doubt on these claims, however.
With so much COVID-related information being released every week, how can researchers, clinicians and policymakers keep up?
In a commentary published this week in Nature Biotechnology, University of New Mexico scientist Tudor Oprea, MD, Ph.D., and his colleagues, many of whom work at artificial intelligence (AI) companies, make the case that AI and machine learning have the potential to help researchers separate the wheat from the chaff.
Oprea, professor of Medicine and Pharmaceutical Sciences and chief of the UNM Division of Translational Informatics, notes that the sense of urgency to develop a vaccine and devise effective treatments for the coronavirus has led many scientists to bypass the traditional peer review process by publishing "preprints," preliminary versions of their work, online.
While that enables rapid dissemination of new findings, "The problem comes when claims about certain drugs that have not been experimentally validated appear in the preprint world," Oprea says. Among other things, bad information could lead scientists and clinicians to waste time and money chasing blind leads.
AI and machine learning can harness massive computing power to check many of the claims being made in a research paper, suggest the authors, a group of public- and private-sector researchers from the U.S., Sweden, Denmark, Israel, France, the United Kingdom, Hong Kong, Italy and China, led by Jeremy Levin, chair of the Biotechnology Innovation Organization, and Alex Zhavoronkov, CEO of InSilico Medicine.
“I think there is tremendous potential there,” Oprea says. “I think we are on the cusp of developing tools that will assist with the peer review process.”
Although the tools are not fully developed, "We're getting really, really close to enabling automated systems to digest tons of publications and look for discrepancies," he says. "I am not aware of any such system that is currently in place, but we're suggesting with adequate funding this can become available."
Text mining, in which a computer combs through millions of pages of text searching for specified patterns, has already been "tremendously helpful," Oprea says. "We're making progress in that."
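The article does not describe any specific system, but the kind of pattern search that text mining involves can be illustrated with a minimal sketch. The toy abstracts, drug names and outcome verbs below are invented for illustration; a real system would use far richer language models and curated vocabularies.

```python
import re

# Toy abstracts standing in for preprint text (invented for illustration)
abstracts = [
    "Hydroxychloroquine combined with azithromycin cured COVID-19 patients.",
    "We describe the structure of the SARS-CoV-2 spike protein.",
    "Remdesivir shortened recovery time in hospitalized patients.",
]

# Flag sentences that pair a drug name with a strong outcome verb,
# marking them as efficacy claims needing closer human review
pattern = re.compile(
    r"(hydroxychloroquine|azithromycin|remdesivir)"  # hypothetical drug list
    r".{0,80}?"                                      # up to 80 chars apart
    r"(cured|shortened|prevented)",                  # strong outcome verbs
    re.IGNORECASE,
)

flagged = [a for a in abstracts if pattern.search(a)]
for a in flagged:
    print(a)
```

Here the first and third abstracts are flagged as unvalidated efficacy claims, while the purely structural study passes through untouched.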
Since the COVID epidemic took hold, Oprea himself has used advanced computational methods to help identify existing drugs with potential antiviral activity, culled from a library of thousands of candidates.
"We're not saying we have a cure for peer review deficiency, but we are saying that a cure is within reach, and we can improve the way the system is currently implemented," he says. "As soon as next year we may be able to process a lot of these data and serve as additional resources to support the peer review process."
Jeremy M. Levin et al., Artificial intelligence, drug repurposing and peer review, Nature Biotechnology (2020). DOI: 10.1038/s41587-020-0686-x
University of New Mexico
Researchers say artificial intelligence and machine learning could improve scientific peer review (2020, September 15)
retrieved 16 September 2020