If you use an AI tool during a clinical encounter and the output is wrong, and you act on it, and the patient is harmed — who is liable?
The answer, under current UK law, is straightforward and uncomfortable: you are.
The AI tool is not a registered medical practitioner. It does not owe a duty of care. It cannot be sued for negligence. It does not have professional indemnity insurance. It does not face a fitness-to-practise hearing. The doctor who used the tool, made the decision, and carried out the action holds the professional and legal responsibility.
This is not a reason to avoid AI. It is a reason to use it with your eyes open. This article explains the medico-legal framework, what the GMC and defence organisations expect, and how to use AI tools safely within existing liability structures.
The Legal Framework for Clinical Negligence in England and Wales
Clinical negligence in England and Wales requires proof of three elements: a duty of care (which exists whenever a doctor-patient relationship is established), a breach of that duty (falling below the standard of a reasonably competent doctor in that field), and causation (the breach caused or materially contributed to the patient's harm).
The standard of care is assessed by reference to a responsible body of medical opinion — the Bolam test (1957), as modified by the Bolitho judgment (1997), which requires that the practice be logically defensible. Since the Montgomery ruling in 2015, the duty to inform patients of material risks has been judged by what a reasonable patient would want to know, rather than what the medical profession considers appropriate to disclose.
Where does AI fit within this framework? The current legal position is that AI is a tool. Using a tool does not transfer responsibility. If a surgeon uses a faulty instrument, the surgeon is still accountable for the decision to operate and the conduct of the surgery. If a GP uses an AI tool that generates an incorrect management recommendation, the GP is still accountable for the clinical decision.
The legal question is not "did the AI make an error?" It is "did the doctor act as a reasonably competent doctor would, given the tools and information available?"
What the GMC Expects
The GMC has been explicit on this point. Doctors remain responsible for the decisions they take when using AI, and professional standards continue to apply. GMC research has found that over a quarter of respondents had used some form of AI in practice, and that many felt they had received too little training on the risks and responsibilities involved.
The GMC's position can be summarised as follows: you may use AI tools to support your practice, but you must exercise your own professional judgement, you must be able to justify your decisions, and you must not outsource clinical reasoning to a tool that you do not understand or cannot verify.
This has practical implications for documentation. If you use an AI tool and it contributes to a clinical decision, the clinical record should reflect your reasoning — not a copy-paste of the AI output. If a decision is later questioned, the standard will be whether your reasoning was defensible, not whether the AI was correct.
What Defence Organisations Advise
The major medical defence organisations — the MDU, MPS, and MDDUS — have all issued recent guidance addressing AI use in clinical practice. While the specific wording varies, the consistent themes are:
Professional responsibility is not delegable. AI does not absorb or share your professional accountability. The decision and the documentation are yours.
Due diligence matters. Using a well-established, evidence-grounded clinical AI tool as a reference — analogous to consulting UpToDate, the BNF, or CKS — is defensible. Using an unvalidated, consumer-grade chatbot as the basis for a prescribing decision is much harder to defend.
Documentation should reflect your reasoning. If AI contributed to your thinking, note it appropriately. Do not present AI-generated text as your own clinical reasoning without review and verification.
Patient data governance is critical. Entering identifiable patient data into a consumer AI tool may breach the UK GDPR, your trust's information governance framework, and your professional obligations. If a data breach occurs through an unapproved tool, the liability consequences compound.
Stay current with guidance. The medico-legal landscape around AI is evolving rapidly. What is acceptable today may be subject to more specific regulation or case law in the near future. Defence organisations advise clinicians to stay informed.
The Spectrum of Risk
Not all AI use carries the same medico-legal risk. A practical way to think about it:
Lower risk. Using a purpose-built, evidence-grounded clinical AI tool (like iatroX, UpToDate ExpertAI, or DynaMed) as a guideline reference — equivalent to checking CKS or the BNF. The tool provides information; you make the decision. The tool shows its sources; you verify them. This is analogous to existing clinical reference use and carries minimal novel liability risk.
Moderate risk. Using an AI ambient scribe to generate clinical notes, which you then review, edit, and sign. The liability risk relates to the review step: if the AI-generated note contains an error that you do not catch, the note is still yours. Thorough review is essential, and the time savings from AI documentation must not come at the cost of review quality.
Higher risk. Using a general-purpose AI tool (ChatGPT, Gemini, etc.) for a clinical question and acting on the output without independent verification. This is harder to defend because the tool is not designed for clinical use, it is not grounded in verified clinical evidence, and it has no regulatory status. If the output is wrong and the patient is harmed, the question at a medico-legal review will be: why did you rely on this tool, and why did you not verify against an authoritative source?
Highest risk. Entering identifiable patient data into an unapproved AI tool, using AI output to make prescribing decisions without pharmacopoeial verification, or allowing AI to generate clinical decisions that you accept without independent assessment. These uses compound clinical risk with data governance risk and professional conduct risk.
Practical Steps to Protect Yourself
Choose your tools deliberately. Use AI tools that are designed for clinical use, grounded in verified evidence, and ideally carry regulatory markings. iatroX is UKCA-marked and MHRA-registered for its UK guideline features. This regulatory status does not eliminate risk, but it provides evidence of a safety-conscious design approach.
Verify before you act. Every AI-generated clinical recommendation should be checked against a primary source — NICE, CKS, BNF, or the relevant specialty guideline — before it influences a patient care decision. iatroX's citation-first design makes this verification step fast and frictionless.
Document your reasoning. Your clinical record should show that you considered the relevant evidence and exercised your own judgement. It should not look like an unedited AI output. If AI helped you reach a conclusion, the record should reflect your reasoning process, including what you verified and how.
Never enter identifiable patient data into unapproved tools. This includes names, NHS numbers, dates of birth, and any combination of details that could identify an individual. If your trust or practice has not approved a specific tool for use with patient data, assume it is not appropriate.
Maintain your own knowledge. The strongest defence against AI-related liability is not avoiding AI — it is being competent enough to recognise when AI is wrong. iatroX's Q-Bank supports this through spaced repetition and active recall, helping you maintain the clinical knowledge that enables you to evaluate AI output critically.
Stay current with professional guidance. The GMC, Royal Colleges, and defence organisations are all developing more specific guidance on AI in clinical practice. Read it. It applies to you.
The Future Direction
UK case law on AI-related clinical negligence is still forming. No landmark case has yet tested the specific scenario of a doctor relying on AI-generated advice that turns out to be wrong. When that case arrives — and it will — the legal framework described above will be applied: duty, breach, causation. The question will be whether the doctor's use of AI fell below the standard of a reasonably competent practitioner.
The clinicians who are best protected are those who use AI as a reference tool (like any other clinical resource), verify its output against authoritative sources, maintain their own clinical knowledge, and document their reasoning clearly. This is not a new standard of care — it is the existing standard, applied to a new type of tool.
Conclusion
AI does not change who is responsible for clinical decisions. You are. The GMC is clear. The law is clear. The defence organisations are clear.
What AI changes is the information environment in which you make those decisions. Used well — with verification, documentation, and judgement — AI tools like iatroX make you better informed and more efficient. Used carelessly — without verification, without understanding the tool's limitations, without maintaining your own competence — AI introduces risks that are entirely avoidable.
The standard is the same as it has always been: practise competently, make defensible decisions, document your reasoning, and use your tools wisely. AI is a powerful tool. It is not a co-defendant.
