AI standards launched to tackle problems of overhyped claims

AI has flaws which we are not being told about

The first international standards for the design and reporting of clinical trials involving artificial intelligence have been announced in a move experts hope will tackle the issue of overhyped studies and prevent harm to patients.

While the possibility that AI could revolutionise healthcare has fuelled excitement, in particular around screening and diagnosis, researchers have previously warned that the field is strewn with poor-quality research.

Now an international team of experts has launched a set of guidelines under which clinical trials involving AI will be expected to meet a stringent checklist of criteria before being published in top journals.

The new standards are being simultaneously published in the BMJ, Nature Medicine and Lancet Digital Health, expanding on existing standards for clinical trials – put in place more than a decade ago for drugs, diagnostic tests, and other interventions – to make them more suitable for AI-based systems.

Prof Alastair Denniston of the University of Birmingham, an expert in the use of AI in healthcare and member of the team, said the guidelines were crucial to making sure AI systems were safe and effective for use in healthcare settings.

“Evaluating AI systems for health is a bit like road-testing a car. We expect any new car coming to market to have been independently tested for performance and safety, and for these evaluations to be carried out under standard conditions and reported in a standardised way,” he said. “It is the same way with AI systems. We need to know that they do what they claim to do.”

The new international standards, said Denniston, provided a framework for how to design, deliver and report clinical trials in AI. That would mean not only that trials were better designed, but also that different systems could be compared directly against one another.

“AI will revolutionise many aspects of healthcare, but there is currently a trust issue,” said Denniston.

Despite a growing number of headlines reporting that AI systems outperform doctors in making medical diagnoses, Denniston said work by his own team examining more than 20,000 such studies highlighted serious concerns about their quality.

“Less than 1% of the studies followed these or similar guidelines to provide that kind of transparency,” he said, noting that studies often reported only the best-case scenario. Under the new standards, researchers will be expected to “specify the procedure for acquiring and selecting the input data for the AI intervention”.

The team say the checklist will also help stop AI systems being trained and tested on too narrow a population. “We need to ensure its safety and performance across age, sex, ethnicity or setting,” said Denniston.

“The whole purpose is to ensure patients and healthcare professionals can be really confident that AI healthcare products are only deployed when they are known to be effective and safe.”

The news comes a day after the NHS revealed that the first £50m of its £140m AI in Health and Care Award programme is to be invested in helping develop systems including a wearable ECG monitoring patch that can help diagnose irregular heartbeats and a system to spot cancer in prostate biopsy slides.

Prof Mihaela van der Schaar, the director of the Cambridge Centre for AI in Medicine, who was not involved in developing the standards, said AI and machine learning methods must not only be effective but transparent, robust and trustworthy.

“Too often, a promising model is undermined when its creators provide it as a ‘black box’ with minimal consideration for end users such as doctors,” she said. “These new reporting guidelines, which prioritise such concerns by factoring them into a standardised evaluation framework, are a partial but valuable solution that could help catalyse a top-to-bottom transformation of healthcare.”
