Artificial Intelligence in Healthcare: Where will liability lie?

  • Insight Article | 10 December 2025
  • UK & Europe

  • Tech & AI evolution

  • Healthcare

We cannot escape the fact that Artificial Intelligence (AI) is becoming more common in our daily lives. From asking Copilot or ChatGPT a question, to driverless cars, to AI in a healthcare setting, the uses of AI are continually evolving.

With Government plans to digitalise the healthcare sector now announced, the reality is that AI is already making its mark, and it is important to consider how its use may affect insurers and those with responsibility for claims.

AI is being used in a variety of ways within healthcare: analysing X-rays, mammograms and skin samples, or acting as a virtual scribe to take notes at appointments. Clinicians are using it in primary, secondary and tertiary settings, for example to assist with the analysis of test results and images or to reduce the time spent on administrative tasks.

It is hoped that AI will benefit both patients and healthcare professionals, but it is equally important to exercise caution. An AI tool relies on the data put into it: systems can learn, but they must learn from existing data, and that data needs to be broad enough to represent society as a whole rather than a small cross-section of patients. Developers and users of AI systems need to ensure that the data going in is reliable, and remains reliable, to avoid unreliable results. Those using the systems should be encouraged to raise any concerns about the results AI is generating so that inaccuracies are closely examined, as these may be caused by problems with data input and cleansing.

Where a healthcare professional makes clinical decisions, the liability position if something goes wrong is familiar. The position where AI is involved is presently unknown, and there are a number of possibilities as to where liability could lie. It could fall to the clinician using the technology (i.e. when inputting data or interpreting the information generated), the healthcare organisation that has implemented the AI system, the body that developed the technology, or the body that approved the technology for use in a healthcare setting. If something goes wrong, a number of legal frameworks could apply, including negligence, product liability and vicarious liability. It is not yet clear how the courts will approach the use of AI, as this is very much a developing area. There will inevitably be claims regarding either the use of, or the failure to use, AI in a patient's clinical journey and the decision-making alongside it. Contracts for the use of AI will need to be carefully considered, including liability and indemnity provisions.

When AI is used in clinical practice, there may be a question of exactly how the standard of care will be assessed. Parties in a claim usually instruct independent medical experts to assist the court in determining liability, but will the same medical experts still be able to comment where AI has been used, or will an AI expert need to be instructed alongside the medical experts to explain how the technology works? Alternatively, will medical experts need to be familiar with the use of AI in their field of practice in order to prepare a report? Similarly, consideration needs to be given to whether the familiar Bolam test still works where AI has made a recommendation but a reasonable body of clinical opinion does not agree with that recommendation.

As AI continually learns from available data, there is also an argument that it may in future become so accurate, and so advantageous to clinical practice, that failing to use it amounts to a breach of duty, and some patients may specifically ask for AI to be used. If AI is available but not used, would that be a breach of duty, and would the consent process need to include risks related to the use of AI? These are issues that clinical negligence lawyers may need to tackle as the use of AI in healthcare increases, and they are also something that medicolegal experts will need to be alive to.

As well as the different legal frameworks potentially in play, one must also consider how AI is used and whether this will influence where any liability lies. For example, if a person uses AI to help reach a diagnosis, then, as with any other diagnostic tool, the legal responsibility lies with the person making the diagnosis (as in a standard negligence claim). Some argue that this could differ depending on the way a particular algorithm is used and whether it is the AI or the clinician reaching the diagnosis. This raises the question: if the clinician is completely removed from making the diagnosis, where does liability lie? Some have argued that, where the algorithm is influencing the decision or reaching the diagnosis, it could become a question of whether the clinician can understand or explain how the diagnosis was reached; if they cannot, can they really be responsible if something has gone wrong?

Healthcare providers using AI in a clinical context will need to be aware of how AI is being used and what it is being used for, as this will likely help to determine where liability could lie. Having policies and operating procedures in place that give guidance on the role of AI in reaching a diagnosis may help, as could training from the AI developers, so that healthcare providers understand how AI works and the ways it can help. Consideration may also need to be given to documenting in the medical records whether AI was used in a diagnosis and how it was used, as this may assist if something does go wrong and these questions need to be answered.

It is important to say that, despite AI being used in healthcare, it is currently still a qualified human who makes the final diagnosis and who discusses the treatment journey with, and obtains consent from, the patient (although AI may assist with that process by providing information about likely treatment outcomes, the risks associated with the options under discussion and so on). The NHS England Transformation Directive from 30 April 2025 states in particular that "the final decision about the care that people receive should be made in consultation with the patient or service user, using your professional judgment". This may change as the use of AI becomes more prevalent, and where liability lies when things go wrong may similarly evolve.


Additional authors:

Kayleigh Tranter
