If AI makes a mistake, am I liable? A UK clinician’s guide to “human-in-the-loop” safety

Introduction: the fear is rational

Imagine the scenario: it’s 3am on a busy on-call shift. You use a chatbot to sanity-check an antibiotic choice for a complex patient. Later, the patient deteriorates. A complaint is filed, and the key question in the investigation is: "Why did you rely on AI?"

This fear is rational. For UK clinicians, the introduction of artificial intelligence into daily practice isn't just a question of efficiency; it's a question of professional survival. While AI use is already happening in UK practice—often informally and without consistent governance—the medico-legal landscape can feel like a minefield.

However, the real issue isn't "AI vs no AI." It is unverifiable vs verifiable reasoning.

In the UK, AI does not absorb clinical responsibility. The clinician (and often the employing organisation) remains accountable for decisions and outcomes. The practical medico-legal question, therefore, becomes: how do you use AI in a way that strengthens rather than weakens the defensibility of your clinical reasoning?

What UK guidance implies about liability

The rules are clearer than many think. Recent guidance from the GMC, NHS England, and medico-legal defence organisations converges on a few core principles.

  • Professional accountability remains yours: The GMC is explicit that doctors are responsible for the decisions they take when using new technologies like AI. If you act on an AI's output, it becomes your decision. You cannot outsource your professional judgement to an algorithm.
  • Organisational exposure: NHS adoption guidance highlights that organisations may also be liable for claims arising from the use of AI products, particularly where a non-delegable duty of care is argued. This is why using tools that have passed procurement governance, such as the Digital Technology Assessment Criteria (DTAC), is essential.
  • Indemnity context: State-backed indemnity schemes such as the Clinical Negligence Scheme for General Practice (CNSGP) cover NHS work, but they do not remove your professional duties. Using unapproved tools with patient data could arguably fall outside the scope of "approved" practice, potentially leaving you exposed.

The “Swiss Cheese” model of AI error

To understand safety, we can adapt James Reason's famous "Swiss Cheese" model to AI. "Human-in-the-loop" isn't just a slogan; it is the final, critical barrier preventing an algorithmic error from reaching the patient.

  1. The Model Slice: The AI hallucinates, uses outdated guidelines, or answers non-deterministically (the same prompt can yield different outputs).
  2. The Data Slice: The prompt was inaccurate, or the patient context was missing.
  3. The Workflow Slice: Time pressure or automation bias leads to "blind trust."
  4. The Governance Slice: The tool has no DCB0160 clinical safety case or hazard log.
  5. The Documentation Slice: There is no record of how the decision was reached, making it hard to defend later.
  6. The Human Slice (Final Barrier): The clinician reviews, cross-checks, corrects, and documents.

NHS England’s guidance on ambient scribing explicitly reinforces this final barrier, stating that users must "review outputs prior to further actions."

“Audit trail” as a medico-legal asset

This is the key differentiator between a risky workflow and a defensible one. In complaints, coroner’s inquests, and litigation, a recurring theme is: was the clinician’s reasoning process reasonable and recorded?

Step A — Why audit trails matter

Good Medical Practice (2024) is unambiguous: "You must keep clear, accurate and legible records." This applies to how you reached a decision, not just the decision itself.

Step B — The risk of generic chatbots

Typical public chatbots create a defensibility gap: they rarely provide verifiable provenance (no clear source paragraph, no versioning, no stable citation). Entering patient identifiers into public tools also creates a significant Information Governance (IG) risk.

Step C — The defensible alternative: citation-linked retrieval

A safer class of clinical AI is retrieval-based: it points you to the authoritative primary source so you can check it. Tools that link directly to that source (e.g., a specific NICE paragraph, CKS section, or local guideline PDF) allow you to:

  1. Verify the recommendation instantly.
  2. Record exactly what you checked.
  3. Demonstrate human oversight.

The Argument: A citation-linked tool can be safer than unaided memory when it reliably points you to the authoritative primary source and you actually check it: you reduce recall error and increase traceability. iatroX is designed around this pattern, linking answers back to the relevant guideline passage so the clinician can verify and document their rationale. Note: this only reduces risk if the clinician checks the source and stays within their competence.
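To make "citation-linked" concrete, the sketch below shows the provenance a verifiable answer needs to carry. It is illustrative only: the Python class and field names are assumptions made for this article, not iatroX's actual schema or any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class CitedAnswer:
    """A retrieval-based answer plus the provenance needed to verify it."""
    question: str        # the clinical question as asked
    answer: str          # the tool's summary of the guidance
    source_title: str    # the guideline consulted, e.g. a NICE or CKS title
    source_section: str  # the specific paragraph or section to check
    source_url: str      # a stable link to the primary source
    source_date: str     # publication or last-updated date of that source

    def is_verifiable(self) -> bool:
        # Without a named source, section and link, the clinician
        # cannot complete the human-in-the-loop check.
        return all([self.source_title, self.source_section, self.source_url])
```

A generic chatbot answer typically populates only the first two fields; the defensibility gap described in Step B is everything below them.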

The “Human-in-the-Loop” workflow

A. Before you use any AI in clinical work

  • Use only tools approved by your organisation, or tools with appropriate governance in place.
  • If the tool uses patient data (e.g., ambient scribing), ensure a Data Protection Impact Assessment (DPIA) and DCB0160 safety documentation are in place.

B. During use (“2-minute safety loop”)

  1. State the clinical question clearly.
  2. Extract the answer and the source (link/paragraph).
  3. Cross-check critical items: red flags, contraindications, dosing, and escalation thresholds against the primary source.
  4. Decide using your human judgement.
  5. Document what you checked and what you did.

C. After use: record-keeping template

Consider using a note snippet like this to evidence your reasoning:

"Clinical decision support tool used to locate relevant guidance on [topic]. Primary source reviewed (NICE/CKS/local guideline): [title/section/date]. Decision taken: [what/why]."

“Never copy-paste AI into notes” (a nuanced rule)

Instead of an absolute ban, adopt a safer, clinically realistic rule:

Never paste unverified AI text into the record.

If you use AI to draft wording (e.g., for a referral letter), you must ensure it is fact-checked, edited, and attributable to you. It must not introduce inaccuracies or fabricated guideline references. You are signing the note; you own the words.

What to do if the AI output is wrong

The GMC explicitly references the duty to report adverse incidents involving medical devices, including software and digital tools. If an AI tool gives you dangerous advice:

  1. Don’t ignore it. Treat it as a safety signal.
  2. Report it via your local incident system (Datix/Ulysses) or directly to the vendor.
  3. Escalate to your organisation's Clinical Safety Officer (CSO) or digital lead.
  4. Document what happened and the mitigation you used to prevent harm.

Patient communication: when do you mention AI?

Transparency is sensible, particularly if AI materially influenced the care pathway or if patient audio/data was processed (e.g., by an ambient scribe). Follow your local policy and IG advice, but frame it clearly: "I use a tool to help write my notes/find guidelines, but I review everything personally."

5 rules for UK clinicians using AI

  1. You own the decision. Treat AI like any other tool—helpful, not authoritative.
  2. No patient identifiers in public AI tools. Use approved systems and follow IG policy.
  3. Always verify against a primary source (guideline paragraph, formulary, local policy) before acting.
  4. Document what you checked (source/section/date + your rationale).
  5. Never paste unverified AI text into clinical notes. Review, edit, and ensure accuracy first.

References

  • GMC: Artificial intelligence and innovative technologies
  • GMC: Good medical practice (2024)
  • NHS England: Guidance on AI-enabled ambient scribing products
  • NHS Transformation Directorate: Artificial Intelligence (IG guidance)
  • NHS Digital: DCB0129/DCB0160 & clinical risk management standards
  • Medical Protection: AI Safer Practice Framework
  • The MDU: Using AI safely and responsibly in primary care
