The GP’s “AI at work” rulebook: what’s safe, what’s reckless, and what’s simply unprofessional

Introduction

“Can I use ChatGPT to write this referral letter?” “Is it legal to ask an AI for a differential diagnosis?”

If you are a UK clinician in 2025, you have likely asked yourself these questions—or whispered them to a colleague. The technology has moved faster than the official handbooks, leaving many GPs in a state of professional anxiety. You want the efficiency gains, but you don't want to end up in front of the GMC.

The good news is that the guidance does exist. The GMC, the Medical Defence Union (MDU), and NHS England have all set out clear principles. This article translates those high-level ethical standards into a practical, day-to-day rulebook for using AI in primary care.

The golden rule: professional accountability

The GMC’s guidance on Artificial Intelligence and Innovative Technologies boils down to one non-negotiable principle: You cannot outsource your professional judgement.

If you use an AI tool to help you make a decision, you are responsible for that decision, not the software developer. If the AI suggests a drug dose and it’s wrong, you are as liable as if you had looked it up in an out-of-date BNF.

The “three-zone model” for AI use in primary care

To navigate the safety and governance landscape, it helps to categorise AI tasks into three zones: Green (Safe), Amber (Proceed with Caution), and Red (Do Not Enter).

🟢 Zone 1: Green (Safe & Administrative)

Low risk. No patient data. High efficiency.

  • Drafting non-clinical text: Using ChatGPT to draft practice newsletters, complaints policies, or staff rotas.
  • General medical learning: Asking an AI to "Explain the pathophysiology of lupus" for your own revision (not for a specific patient).
  • Template creation: Asking an AI to "Create a text message template for inviting patients to a flu clinic."
  • Tools: Standard ChatGPT, Claude, Microsoft Copilot (non-enterprise).

🟡 Zone 2: Amber (Clinical Support & Drafting)

Medium risk. Requires strict de-identification and "Human-in-the-Loop".

  • Clinical decision support: Using a tool to check guidelines or brainstorm differentials.
    • The Rule: You must use a tool that cites its sources (like iatroX), and you must verify the output against the primary source (NICE/CKS).
  • Drafting referral letters: Using AI to turn a list of bullet points into a formal letter.
    • The Rule: No patient identifiers (name, date of birth, NHS number) may be entered. You must review every single word of the output before sending.
  • Tools: iatroX (for cited clinical answers), enterprise-grade AI with data-processing agreements in place.

🔴 Zone 3: Red (Reckless & Unprofessional)

High risk. Breaches GDPR and GMC standards.

  • Patient data in public models: Pasting a patient's history, including their name or NHS number, into the free version of ChatGPT. This is a significant data breach.
  • "Black Box" diagnosis: Accepting a diagnosis or treatment plan from an AI that does not show its working or sources, without checking it yourself.
  • Auto-pilot documentation: Allowing an AI to listen to a consultation and file the notes to the EHR without you reading and approving them.

Human responsibility doesn’t disappear

When you use AI in a clinical context, your documentation needs to reflect that you remained the decision-maker.

How to document AI use: If an AI tool materially helped you make a complex decision (e.g., a drug-interaction check), it is good practice to note it, just as you would note "discussed with microbiology".

“Checked renal dosing guidelines via iatroX; verified against BNF. Decision to reduce dose to...”

This shows you used the tool as a library, not a doctor.

How to talk to patients about AI

Patients are reading the same headlines you are. Some will be excited; others will be suspicious. If a patient asks if you are using AI, be transparent but reassuring.

  • Don't say: "I'm asking the computer what to do."
  • Do say: "I'm using a specialist tool to double-check the very latest national guidelines to ensure we get your treatment exactly right."

Conclusion

AI is a tool, like a stethoscope or a pulse oximeter. It can make you better, faster, and safer, but only if you know how to use it. Stick to the Green and Amber zones, keep patient data out of the Red zone, and always—always—verify the output.

