Adaptive AI algorithms, often referred to as “black box AI”, change as they learn, making it difficult to explain how they reach their decisions. This presents a challenge for regulators all over the world, not only of medical devices but of many other AI tools that carry risk. To protect patient safety, it is necessary to understand whether an algorithm has changed significantly since it was approved, whether through a change in the algorithm’s logic or features, or because of data drift (which could mean that the model is out of date in light of new data). If it has, both manufacturers and regulators need to know whether it remains safe and fit for purpose, and whether the instructions for use need to be modified.
This project combines a clinical and regulatory stream with a data science stream. The clinical and regulatory stream will, through workshops with experts, develop a position on what constitutes a significant change in an algorithm’s performance from a clinical and regulatory perspective, referring to real (disguised) examples and simulated examples.
The data science stream will use concept drift detection to identify changes in predictive clinical models caused by updating with new data, and to detect when models have become out of date.
This will be based on COVID-19 primary care data:
1. It will adopt a moving window approach to simulate changes in data as more is collected over time.
2. It will explore and develop “concept drift” metrics to score how well a model fits current data, how this has changed, and how performance is affected.
3. It will explore methods to update models given new data and assess their effect on performance.
4. It will compare the metrics and update methods from (2) and (3) on deep learning models, Bayesian models and tree-based models.
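Steps (1)–(3) above can be sketched in a few lines of Python. Everything here is an illustrative assumption: a synthetic two-dimensional stream with a slowly rotating decision boundary stands in for the COVID-19 primary care data, a hand-rolled logistic regression stands in for the project’s predictive clinical models, and the window size and drift threshold are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_stream(n_steps, n_per_step):
    """Synthetic 2-D stream with gradual concept drift: the true decision
    boundary rotates slowly as time passes (an illustrative stand-in for
    real clinical data)."""
    X_parts, y_parts = [], []
    for t in range(n_steps):
        angle = 0.3 * np.pi * t / n_steps  # boundary rotates up to ~54 deg
        w_true = np.array([np.cos(angle), np.sin(angle)])
        Xt = rng.normal(size=(n_per_step, 2))
        X_parts.append(Xt)
        y_parts.append((Xt @ w_true > 0).astype(int))
    return np.vstack(X_parts), np.concatenate(y_parts)

def fit_logreg(X, y, lr=0.5, epochs=200):
    """Plain gradient-descent logistic regression (NumPy only)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def accuracy(w, b, X, y):
    return float(((X @ w + b > 0).astype(int) == y).mean())

X, y = make_stream(n_steps=20, n_per_step=200)
window = 400  # illustrative window size

# (1) Train on the first window and record its accuracy as a baseline.
w, b = fit_logreg(X[:window], y[:window])
baseline = accuracy(w, b, X[:window], y[:window])

# (2) Slide the window forward; score drift as the drop in windowed
# accuracy relative to the training window.
drift_flagged = []
for start in range(0, len(y) - window + 1, window):
    acc = accuracy(w, b, X[start:start + window], y[start:start + window])
    if baseline - acc > 0.05:  # illustrative threshold for a "significant" change
        drift_flagged.append(start)

# (3) One simple update strategy: refit on the most recent flagged window
# and check that accuracy there recovers.
if drift_flagged:
    s = drift_flagged[-1]
    w2, b2 = fit_logreg(X[s:s + window], y[s:s + window])
    print("accuracy after refit:",
          round(accuracy(w2, b2, X[s:s + window], y[s:s + window]), 3))

print("baseline accuracy:", round(baseline, 3))
print("windows flagged for drift:", drift_flagged)
```

In practice the drift score would be richer than a raw accuracy drop (for example, statistical tests on the error rate, as in detectors such as DDM or ADWIN), and retraining from scratch is only one of the update strategies the project would compare across model families.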
Over the next two years the regulatory framework for Software as a Medical Device (SaMD) and AI Software as a Medical Device (AIaMD) will be redefined in the UK by the Medicines and Healthcare products Regulatory Agency (MHRA). The framework, which is a Ministerial priority, will draw on the most up-to-date thinking. A methodology for identifying a significant change in an adaptive AI algorithm will be an important element of the framework, which will bring AI software medical devices safely to market.
Meet the Principal Investigator(s) for the project
Dr Allan Tucker - Allan Tucker is Reader in the Department of Computer Science, where he heads the Intelligent Data Analysis Group consisting of 17 academic staff, 15 PhD students and 4 post-docs. He has been researching Artificial Intelligence and Data Analytics for 21 years and has published 120 peer-reviewed journal and conference papers on data modelling and analysis. His research work includes long-term projects with Moorfields Eye Hospital, where he has been developing pseudo-time models of eye disease (EPSRC - £320k), and with DEFRA on modelling fish population dynamics using state space and Bayesian techniques (NERC - £80k). Currently, he has projects with Google, the University of Pavia, Italy, the Royal Free Hospital, UCL, the Zoological Society of London and the Royal Botanic Gardens, Kew. He is academic lead on an Innovate UK, Regulators’ Pioneer Fund project (£740k) with the Medicines and Healthcare products Regulatory Agency (MHRA) on benchmarking AI apps for the NHS. He serves regularly on the programme committees of top AI conferences (including IJCAI, AAAI and ECML) and is on the editorial boards of the Journal of Biomedical Informatics and of Medical Informatics and Decision Making. He is hosting a special track on "Explainable AI" at the IEEE conference on Computer Based Medical Systems in 2019. He has been widely consulted on the ethical and practical implications of AI in health and medical research by the NHS, and on the use of machine learning for modelling fisheries data by numerous government think tanks and academic bodies.
Related Research Group(s)
Intelligent Data Analysis - Concerned with effective analysis of data involving artificial intelligence, dynamic systems, image and signal processing, optimisation, pattern recognition, statistics and visualisation.
Project last modified 24/09/2021