ChatGPT Health – could this be a new dawn in clinical negligence claims?
-
Insight Article 09 January 2026
-
UK & Europe
-
People dynamics
-
Healthcare
OpenAI has recently launched ChatGPT Health, a new space within ChatGPT that allows users to ask health questions. In some regions, users can upload their medical records and connect fitness apps to give their questions personal context (the ability to link records is not currently available to UK users). OpenAI stresses that the tool is not intended to replace medical care, but the reality is clearly that many will turn to it as a first port of call.
Even before this development, OpenAI had reported that ChatGPT is widely used for health-related questions. It is reportedly relied upon by many users before they consult their GP (e.g. uploading an image of a spot and asking whether it is a concern, or asking the AI to explain a diagnosis and surgery in plain language). The risks of relying on AI as a first port of call do not need to be laboured here. AI has proven excellent at image recognition (and its use in healthcare is longstanding). However, its ability to contextualise and adapt to the unique circumstances of an individual user is less well tested, and that is exactly the gap ChatGPT Health now appears to be targeting.
As a clinical negligence practitioner, it is wise to pay attention to these developments. Not only do surveys show wide uptake of AI by patients, but there is evidence of generative AI working its way into healthcare settings, whether as a scribe or to interpret imaging. NHS England is trying to keep pace, recently releasing guidance on AI-enabled ambient scribing products in health and care settings.
It surely cannot be long before we see cases where a patient holds a fixed 'AI-backed' view of their condition and treatment needs and criticises a doctor for not following suit. This is all the more likely when a name like 'ChatGPT Health' suggests to the user that the tool is specialised in health conditions and questions. Such a claim not only engages the Bolam/Bolitho questions, but also issues drifting into the data protection, consumer protection and product liability remits, and there remains uncertainty in the UK as to whether generative AI is a 'product'. Models like ChatGPT were never intended for medical use as part of their core technology. However, some would argue that line has now clearly been crossed.
From a clinical negligence perspective, the direction of travel is concerning. These advancements carry claims exposure. Examples of generative AI 'hallucinations' are common, and arguably they are never more serious than when they concern medical care.
The introduction of ChatGPT Health signals a move towards a more structured way of using mainstream AI in the health setting. Jurisdictions such as the EU have already adopted a more formal approach to AI regulation. The UK is yet to follow suit, relying on existing regulations rather than any single new AI Act in the spirit of being 'pro-innovation'. However, with daily advancements in AI such as ChatGPT Health, the regulatory challenges the UK faces are growing, as is the need for clinicians to have a clear understanding of the regulations so they know how to engage safely with AI tools.
Clyde & Co's healthcare group is recognised for its extensive industry knowledge, offering a range of legal services covering public and private sectors as well as inquests, advocacy, professional regulation, product liability and pharmaceuticals/life sciences. Should we be able to assist you, please do contact one of our experts.
For any questions regarding this article, please reach out to Adam Hudson using the contact details provided below.
