Who is responsible when AI is used? (GMC expectations + what UK doctors actually say)


AI has moved from “interesting” to everyday in UK clinical work: ambient scribes drafting notes, triage tools routing demand, imaging algorithms flagging abnormalities, and generative AI helping clinicians structure letters and summaries.

That shift leads to the question clinicians actually search for:

If AI influences my decision or my record — who is responsible?

This article answers that question in plain English, using:

  • GMC guidance on how professional standards apply when AI is used.
  • GMC research capturing what UK doctors are doing with AI and what worries them.

It also gives you medico-legal documentation templates you can copy and paste.


The headline answer (you can keep in your head)

1) You own your clinical decisions

The GMC’s position is straightforward: professional standards still apply. When you use AI tools, you remain responsible for the decisions you take.

2) Others also carry responsibility — but it doesn’t remove yours

The GMC recognises that doctors are not always involved in developing, procuring, or updating these tools, and states that those involved in their creation, testing, and updating (developers, employers, and sometimes clinicians) remain responsible for those aspects.

3) Transparency matters

The GMC explicitly highlights that it’s important to discuss the use of innovative technologies with patients, including other options and the uncertainties/limitations, so patients can make informed decisions.


What the GMC expects from doctors when AI is used (professionalism, oversight, accountability)

The GMC does not regulate or approve technologies — that is the MHRA’s role — but it does set the professional standards for clinicians. In practice, “GMC expectations” when AI is involved can be grouped into six clinician behaviours.

1) Use your professional judgement — don’t outsource it

AI can support you, but it does not replace your judgement. The standard remains: your reasoning must be defensible, clinically coherent, and appropriate to the patient in front of you.

2) Stay within competence (and seek training)

If you don’t understand what a tool does, where it can fail, or what data it was trained on, you are at higher risk of error. The GMC acknowledges that clinicians may lack confidence early in adoption and stresses that professional judgement still applies, so seek training where it exists and ask for it where it doesn’t.

3) Be transparent with patients when it matters

If AI is used in a way that affects the consultation (for example, an ambient scribe capturing audio to draft notes, or an AI decision support system materially influencing next steps), you should be able to explain:

  • What the tool is doing (in plain language)
  • The limitations/uncertainties
  • What alternatives exist
  • That you remain responsible for the final decision/record

4) Check and correct outputs (especially for generative AI)

Generative systems can be fluent and wrong. Oversight means:

  • Verifying key facts (doses, thresholds, contraindications, red flags)
  • Ensuring the output matches the patient context
  • Preventing “automation bias” (the tendency to trust a confident-looking answer)

5) Document sensibly (audit trail, not essay-writing)

You do not need to write a legal dissertation. You do need a clean trail that shows appropriate oversight and anchoring to authoritative guidance.

6) Raise concerns if a tool puts patients at risk

If you believe you are being asked to use a technology that creates patient safety risks, the GMC points clinicians to its guidance on raising and acting on concerns.


What UK doctors actually say: benefits, risks, and what’s missing

The GMC commissioned research exploring how doctors experience AI in practice.

Key themes reported by the GMC include:

  • Doctors see AI primarily as an assistive tool, often to support efficiency.
  • They identify real risks, including bias, transparency issues, and concerns about over-reliance.
  • They recognise that they remain responsible for decisions informed by AI.
  • Many believe more education is needed, especially on ethical and security considerations.

Separately, a major survey of 929 UK-registered doctors, undertaken by the Alan Turing Institute with GMC support, found that more than a quarter reported using at least one AI system in the past year (often summarised publicly as “one in four”).

The take-home point for clinicians:

Adoption is already happening. The limiting factor is not interest — it is governance, training, and confidence in defensible use.


The clinician’s accountability map (a practical mental model)

Use this to avoid confusion when multiple parties are involved.

The clinician (you)

You are responsible for:

  • Clinical decisions you take
  • The accuracy and appropriateness of what enters the record
  • Applying professional standards (consent, confidentiality, safety)

The organisation (practice / PCN / Trust)

Typically responsible for:

  • Procurement and governance approvals
  • Information governance frameworks (DPIA, contracts, data flows)
  • Training, SOPs, monitoring and incident pathways

The developer/vendor

Typically responsible for:

  • The design and performance claims of the product
  • Security controls and contractually defined data handling
  • Updates and maintenance

Important nuance: even if the organisation/vendor has responsibilities, your day-to-day professional accountability still applies to your decisions and documentation.


What to document when AI is used (copy/paste templates)

Below are short templates for common clinical realities.

A) AI used for drafting notes (ambient scribe)

AI scribe/ambient documentation tool used to draft consultation note.
Patient informed; opportunity to object offered; no objection raised.
Clinician reviewed/edited output prior to filing in record.

B) Patient declined AI scribe

AI scribe discussed; patient declined. Standard note-taking used.

C) Generative AI used for structuring a letter or summary (not clinical decision-making)

Generative AI tool used to assist drafting/structuring correspondence.
No patient-identifiable data entered outside approved governance arrangements.
Clinician reviewed, edited, and approved final content.

D) AI decision support used as part of clinical reasoning

AI decision support used as an aid to clinical reasoning.
Output reviewed and cross-checked against authoritative guidance.
Final decision made by clinician; rationale and safety-netting documented.

Tip: If you’re using AI, the most defensible sentence is often:

“Clinician reviewed/edited and remains responsible for final record/decision.”


The “defensible use” checklist (30 seconds)

Before you act or file, ask:

  1. What did the AI actually do? Draft? Suggest? Triage? Diagnose?
  2. What did I personally verify? (key facts, thresholds, contraindications)
  3. What is my anchor? (NICE / NICE CKS / SIGN / BNF / local policy)
  4. Have I been transparent where appropriate? (patient awareness, ability to object)
  5. Have I documented oversight? (one line is enough)


What training is missing (and what clinicians should ask for)

The GMC’s research indicates clinicians want more education on AI use — especially ethical and security considerations.

If you are building an internal training pack, keep it practical. A minimum viable curriculum:

  • Capabilities & failure modes: hallucinations, bias, drift, automation bias
  • Information governance: what you can/can’t input; DPIA basics; vendor data flows
  • Clinical safety: incident reporting, monitoring, when to stop using the tool
  • Communication: how to explain AI use to patients without undermining trust


Where iatroX fits (keeping your trail clean)

Even with excellent AI tooling, defensibility still depends on anchoring decisions to authoritative sources.

A simple, repeatable pattern for day-to-day clinical questions:

  1. Use iatroX Knowledge Centre to orient and jump to leaf pages in NICE and NICE CKS.
  2. Use /shared for scenario reasoning that can be reused for learning and CPD.
  3. Use /q to reinforce recall so you stop re-searching the same decision points.

This complements, rather than competes with, the broader knowledge stack clinicians use (NICE CKS, GPnotebook, UpToDate, BMJ Best Practice, PubMed).

