AI in the NHS: how should physicians respond?
By Jeremy C Wyatt, Clinical Advisor on New Technologies, Health Informatics Unit, RCP
Many of us will welcome the Prime Minister’s announcements on May 21st that artificial intelligence (AI) and ‘The Grand Data Challenge’ will be used to: “transform the prevention, early diagnosis and treatment of diseases like cancer, diabetes, heart disease and dementia by 2030.”
This coincides with the launch of a national survey of the NHS and commercial landscape for AI in healthcare, which the RCP is supporting.
However, this promised transformation of UK healthcare raises at least five professional issues for physicians.
Do physicians who use AI products such as clinical decision support systems (CDSS) face reduced or increased risks of litigation?
Published court cases in this area are very rare. However, several recently identified empirical studies suggest that using a CDSS will reduce the chances of patients bringing a lawsuit against their doctor1.
How can an individual physician decide whether to use the CDSS provided by their employer – must they always wait for a randomised trial demonstrating benefit?
At least 162 RCTs of CDSS have been conducted, but these are mostly of experimental systems, not the kind of AI systems that NHS trusts procure. However, the question that most individual clinicians really want answered is whether using a CDSS will make their decisions (whether diagnosis, treatment choice or prognosis) more accurate. The analogy here is not a treatment whose effectiveness needs to be evaluated in a trial, but a test whose accuracy needs to be compared against a gold standard. It is therefore more reasonable for a doctor to request the results of an evaluation of CDSS accuracy against a gold standard2 than the results of an RCT before using a CDSS in patient care3.
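The kind of accuracy evaluation described above can be made concrete with a short sketch. Given gold-standard diagnoses (e.g. from biopsy or follow-up) and the corresponding CDSS outputs, the standard test-accuracy measures are sensitivity and specificity. The function name and all data below are hypothetical, purely for illustration:

```python
# Hypothetical sketch: comparing CDSS outputs against a gold standard.
# Labels are 1 (disease present) and 0 (disease absent).

def accuracy_vs_gold_standard(cdss, gold):
    """Return (sensitivity, specificity) of CDSS predictions."""
    tp = sum(1 for c, g in zip(cdss, gold) if c == 1 and g == 1)
    fn = sum(1 for c, g in zip(cdss, gold) if c == 0 and g == 1)
    tn = sum(1 for c, g in zip(cdss, gold) if c == 0 and g == 0)
    fp = sum(1 for c, g in zip(cdss, gold) if c == 1 and g == 0)
    sensitivity = tp / (tp + fn)  # proportion of true cases the CDSS detects
    specificity = tn / (tn + fp)  # proportion of non-cases it correctly rules out
    return sensitivity, specificity

# Illustrative data for 10 patients: gold standard vs CDSS output.
gold = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
cdss = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec = accuracy_vs_gold_standard(cdss, gold)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# → sensitivity=0.75, specificity=0.83
```

An evaluation of this shape – on a representative local case mix, against a defensible gold standard – is a far lighter request than a full RCT, which is the point of the analogy with diagnostic tests.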
How will the accuracy of records influence decisions from a CDSS?
The increasing use of electronic patient record (EPR) data for machine learning makes it even more important for physicians – indeed all doctors – to rigorously ensure the accuracy and depth of the structured, coded data they record. Incomplete or biased patient data will lead to bad algorithms and unsafe clinical decisions4. Unfortunately, much routine NHS data would be classed as Band C5, requiring much work to transform it into the discoverable, documented Band A data that machine learning requires to produce useful algorithms6.
How can we as consultants contribute to the NHS procurement process for high-quality CDSS?
Some relevant shortlisting criteria for these tools could include:
- Clarity of the clinical scope and purpose of the algorithm or CDSS, so that users know what to expect and when to use the tool
- Evidence that the algorithm is more accurate than typical users, despite missing, ambiguous or poor quality clinical data or images
- Ease of use in clinical workplaces with minimal training, including avoidance of “alert fatigue”
- Excellent fit with existing information systems, especially electronic patient records and image repositories such as PACS. Specifically, is enough high-quality coded data already available to drive the CDSS, without imposing an excessive data input burden on busy clinical users?
- Ease of updating the knowledge base when required, and evidence that the supplier is stable and likely to still be trading in 3-5 years.
If a well-designed trial has been conducted and sufficient evidence is available to support a NICE or other national technology appraisal, that is extremely helpful – but this is rare so far.
What tasks will AI take from clinicians?
Some AI scientists have made unwise statements in the past, such as: “we should stop training radiologists right now”7. While this is untrue, AI tools will probably have a significant impact on our work, professional training, the nature of some specialties and even the content of the MRCP. In future, we will need to shift our efforts away from technical tasks amenable to AI (e.g. calculating disease probability or drug dosage) towards more human tasks. Examples of tasks that AI is unlikely to take from human physicians soon are:
- Maintaining rapport with and trust of the patients and colleagues who rely on us
- Obtaining an accurate clinical history and examination and recording the key findings as high quality, structured coded clinical data for sharing in repositories used to train deep learning algorithms, as well as human colleagues
- Taking a more holistic overview of patient management for the increasing proportion of patients with complex multi-morbidity, by eliciting patient preferences and tailoring care pathways according to these
- Using our clinical skills to safely carry out bedside diagnostic and therapeutic procedures
- Using clinical observation and our scientific knowledge to develop new tests and procedures (or even AI algorithms) for testing in well-designed research studies
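To see why a task like calculating disease probability is so amenable to automation, consider how mechanical it is once the inputs are known. A minimal sketch using Bayes’ theorem – with entirely illustrative prevalence, sensitivity and specificity figures, not clinical values:

```python
# Hypothetical sketch: post-test probability of disease after a positive test,
# computed via Bayes' theorem. All figures below are illustrative only.

def post_test_probability(prevalence, sensitivity, specificity):
    """P(disease | positive test) by Bayes' theorem."""
    p_pos_given_disease = sensitivity          # true positive rate
    p_pos_given_healthy = 1 - specificity      # false positive rate
    p_pos = (p_pos_given_disease * prevalence
             + p_pos_given_healthy * (1 - prevalence))
    return p_pos_given_disease * prevalence / p_pos

# Example: 2% prevalence, a test with 90% sensitivity and 95% specificity.
p = post_test_probability(prevalence=0.02, sensitivity=0.90, specificity=0.95)
print(f"post-test probability = {p:.2f}")
# → post-test probability = 0.27
```

A computer performs this arithmetic flawlessly every time; eliciting the patient’s preferences about what to do with that 27% figure is the part that remains distinctly human.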
While senior NHS staff are aware of these issues and of the potential benefits of AI, there are probably many junior staff who are not, perhaps signalling a need for wider workforce education on this important topic.
While AI can help physicians, it is unfortunate that some AI proponents do not appear to understand that doctors are much more than diagnostic engines whose skills will be rapidly superseded by AI classifiers. Perhaps they have never met the kind of well-trained, experienced general physician who keeps meticulous records and helps patients obtain the best from limited healthcare resources by formulating their needs and preferences, relying as much on bedside as on laboratory or radiographic diagnosis.
1. Thornton J et al, Health Informatics Journal, submitted 31-5-18
2. e.g. https://www.ncbi.nlm.nih.gov/pubmed/28117445
3. Cathy O’Neil, Weapons of Math Destruction, https://www.youtube.com/watch?v=gdCJYsKlX_Y
4. https://www.ncbi.nlm.nih.gov/pubmed/27146325
5. Geoffrey Hinton 2016, https://www.youtube.com/watch?v=2HMPRXstSvQ