Assessing what makes a reasonable decision with AI
As the impact of artificial intelligence (AI) grows in our world, the University of Adelaide is exploring the role that technology can play in the health sphere, particularly in clinical decision-making and explanations.
The analytical review outlines one of the major challenges in health AI – explainability – and explores whether explanations of specific predictions for individual patients are necessary to make a good decision.
“The field of explainability which focuses on individual-level explanations is a developing one,” said Dr Melissa McCradden, of the University of Adelaide’s Australian Institute for Machine Learning (AIML).
“We are optimistic about where the field can go, but with where we are right now, requiring prediction-level explanations for clinical decision-making is problematic.”
Dr McCradden and her co-author Dr Ian Stedman, a lawyer and professor of public policy at York University in Toronto, Canada, argue that a good clinical decision is one that not only advances the goals of care but is also legally defensible.
“Clinicians must calibrate their judgement against a whole constellation of other factors, even if they are using an AI tool that is well validated and highly accurate,” said Dr Stedman.
Dr McCradden, a Clinical Research Fellow in AI Ethics with The Hospital Research Foundation Group, AI Director with the Women’s and Children’s Health Network, and Adjunct Scientist with The Hospital for Sick Children, said there are two types of explainability – inherent explainability and post-hoc explainability.
Inherent explainability refers to understanding how the model as a whole functions, while post-hoc explainability refers to attempts to understand how a specific prediction was generated by the model.
“Some models are directly interpretable, meaning that the operations from inputs to outputs are easy to follow and clear such as decision trees. Others are more opaque, meaning that the process from inputs to outputs is difficult or impossible to follow precisely, even for developers,” said Dr McCradden.
“The issue is in health AI, clinicians typically believe an explanation is what they are getting when they see something like a heatmap, or a prediction accompanied by the reasons the patient received this output. This is understandably what many clinicians want, but new evidence is showing that it might nudge them to make less accurate decisions when the AI tool is incorrect.”
Their work builds on prior work by fellow AIML researcher Dr Lauren Oakden-Rayner, whose research on the limits of explainability methods highlights the field's nascency.
Dr McCradden and Dr Stedman argue that explainability alone should not serve as an essential part of ethical decision-making.
Clinicians are required to draw conclusions from evidence and understanding, placing the patient, rather than the AI, at the centre of the process.
“Piling more weight onto the value ascribed to the AI tool's output further shifts the emphasis away from the patient – their wishes, their culture, their context,” said Dr Stedman.
“Historically, reasonable judgements have been made on the basis of the totality of evidence and resources available to the clinician, contextualised in light of the patient's specific situation.”
Dr McCradden and Dr Stedman concluded it is highly unlikely that an AI prediction would be the sole source of information by which a clinician makes a decision, particularly as an AI tool's performance is never perfect.
“It will, for the foreseeable future, always be necessary to triangulate sources of evidence to point to a reasonable decision,” said Dr McCradden.
“In this sense, physicians should consider what, specifically, the AI tool's output contributes to the overall clinical picture. But we always need to be grounded by the patient’s wishes and best interests.”
Dr McCradden is grateful for the funding support from The Hospital Research Foundation Group.
Media Contacts:
Dana Rawls, Manager, Communications, Australian Institute for Machine Learning, The University of Adelaide. Phone: +61 (8)8313 4343. Email: dana.rawls@adelaide.edu.au
Rhiannon Koch, Media Officer, The University of Adelaide. Phone: +61 (8)8313 4075. Mobile: +61 (0)481 619 997. Email: rhiannon.koch@adelaide.edu.au