The role of the doctor as sole diagnostician is changing as rapidly as medical technology itself, so it is only natural that our reliance on big data grows alongside it. Greg McEwen discusses what the consequences might be for our trust in human decision making and the accuracy of diagnoses, as AI gradually takes over.
What does the word “diagnosis” mean? It can be defined as “the act of identifying a disease from its signs and symptoms”. As a society, we have traditionally looked to our healthcare professionals to diagnose and treat our ailments, from minor aches and pains to major, life-threatening conditions. However, technology has long played a part in the diagnostic process. From cancer screening to MRI scanning, to optometry, computers have been employed with a view to informing and improving key decision making. The caveat to this is that the technology is operated and, most importantly, interpreted and acted upon by people exercising judgment.
The existence of lawyers who specialise in clinical negligence, from both a claimant and defendant perspective, is a reminder of the industry that has grown up around litigation in this area. In the year 2015-16, the NHS Litigation Authority received nearly 11,000 new claims for clinical negligence and nearly 1,000 referrals about the performance of doctors, dentists and pharmacists. Of course, not all claims relate to diagnostic error. Likewise, not every error in diagnosis results in a claim and nor should it, since the mere fact of an incorrect diagnosis does not equate to negligence. But could advances in technology lead to earlier or more accurate diagnoses?
Diagnosis remains an art as much as a science but that has not stopped the onward march of technology, with AI and big data seeking to chip away at the role of diagnostician and decision maker. Whether it’s through a wearable consumer device such as a Fitbit, or AI trained to identify potentially cancerous tumours, the average patient today is exposed to technology that can monitor heart rate, nutritional intake and sleep patterns, all the way up to identifying serious, life-threatening conditions.
Will this technology, some of which has the potential to reduce or replace human input, lead to better outcomes? There certainly seems to be a belief that it will among major stakeholders, healthcare providers and technology companies alike. IBM’s Watson supercomputer is currently being used in the US to help produce tailored treatment plans for cancer patients. Here in the UK, Babylon Health is reported to have secured £50m to further develop its AI diagnostic tool, itself a development of its existing clinical triage app, trialled in the NHS.
Are we hurtling head first into futuristic healthcare, then? Does this threaten the role of doctor as sole diagnostician? And what happens if AI gets it wrong?
The primary concern with our increasing reliance on AI diagnoses is liability for errors. Where would medical and legal responsibility fall if a patient incorrectly received the all-clear on the basis of an AI algorithm? It seems unlikely that this technology will be used to diagnose patients in isolation, for various reasons, not least that the lines of clinical responsibility and legal liability need to remain clear. Patients need to know who is ultimately responsible for their medical treatment and who they can look to for redress in the event that something goes wrong. Primary responsibility is likely to remain with the healthcare provider, yet whether healthcare professionals will be able to measure the accuracy and reliability of AI output remains uncertain, given the complexity of the software and the protection of proprietary information.
For insurers and healthcare organisations, this step into the unknown opens up the important issue around digital malpractice, lengthening the chain of responsibility to manufacturers and software developers. Increasingly, we have to consider whether mishaps and mistakes fall into the category of negligence, product liability or both, particularly as we move through a period in which doctors increasingly work in tandem with AI and big data.
However, we remain optimistic, as AI also brings great opportunity. Error is as much a possibility with the humans who currently run our healthcare system as in any other walk of life – it is suggested that as many as 1 in 6 diagnoses within the NHS turn out to be incorrect. The number of known human diseases has been put at anywhere between 10,000 and 30,000, depending on the criteria employed. Using AI as an assistive tool has the potential to improve accuracy and reduce diagnostic errors within an increasingly stretched health service. The use of AI to detect heart disease, for example, has been estimated to save the NHS over £300 million a year.
There is, however, a flip side when comparing machines with their human counterparts. Diagnoses and treatment plans are not simply a matter of logic and deduction. They affect real people. The fact that a computer-aided cancer diagnosis is accurate doesn’t make it any less devastating for the recipient. Machines cannot empathise. There will always be a need for healthcare professionals in the diagnostic process, however advanced the technology becomes.
What we can say is that the risks are broadening along with the benefits, for all involved in the delivery of healthcare in the digital age. As technology increasingly plays a part in the diagnostic process, we’re likely to see a host of new issues arising around the attribution of liability, arguably the price of progress.
Written by Greg McEwen at BLM