From Taboo to Tool: How UK GPs Are Actually Using AI (and the Safe Way to Do It)


Twelve months ago, admitting you used AI in your practice felt like admitting you didn't check blood pressures yourself. It was a guilty secret.

In 2026, the stigma has evaporated. According to the latest data from the Nuffield Trust and RCGP, over 28% of UK GPs are now explicitly using AI tools in their daily work. The conversation has shifted from "Should we use it?" to "How do we use it without getting sued?"

This guide breaks down exactly how your colleagues are deploying these tools today, the risks of the current "Wild West" landscape, and the verification-first workflow that keeps your licence safe.

What GPs are using AI for

The "robot doctor" replacing GPs is a myth. The reality is much more mundane—and much more useful. GPs are using AI as an "efficiency engine" to survive the 8am scramble.

  • Clinical Documentation (57%): By far the biggest use case. Tools like ambient scribes (Heidi, Corti) listen to the consult and draft the notes, saving ~60 minutes of admin per day.
  • Administrative Tasks (44%): Drafting letters to housing officers, summarising long hospital discharge letters into bullet points, and rewriting "medicalese" into plain English for patients.
  • Professional Development (45%): Using tools as a "super-search" to clarify guidelines or check knowledge on weak topics (e.g., "Remind me of the DVLA rules for Group 2 drivers with TIA").
  • Decision Support (28%): A smaller but growing cohort is using AI to double-check differentials or dosing, though this remains the area of highest caution.

The risk: inconsistent oversight

While usage is booming, governance is lagging. We are currently in a "postcode lottery" of regulation.

  • The "Wild West": Some Integrated Care Boards (ICBs) have issued blanket bans on all AI tools, while others are actively funding pilots. This leaves many GPs using tools "under the radar" on their personal devices.
  • The "Black Box" Problem: Many consumer tools (like standard ChatGPT) do not cite their sources. If an AI tells you a drug is safe in pregnancy, you have no way of knowing if it pulled that from a 2024 UK guideline or a 2018 US forum post.
  • Liability: The GMC has been clear: you are the deployer. If the AI makes a mistake and you act on it without verification, the liability is yours, not the software's.

The practical boundary model (green/amber/red)

To navigate this safely, successful practices are adopting a simple traffic-light protocol for AI use.

  • 🟢 GREEN (Low Risk) — Admin & education. "Draft a rota apology email." "Summarise this 4-page PDF into 5 bullet points." "Quiz me on cardiology guidelines." Rule: No patient identifiers.
  • ⚠️ AMBER (Medium Risk) — Clinical context, de-identified. "I have a 45M patient with resistant hypertension on A+C+D; what is the 4th-line option?" Rule: Must verify the answer against BNF/NICE.
  • 🛑 RED (High Risk) — Direct diagnosis. "What is this rash?" "Read this raw patient file and tell me what's wrong." Rule: Do not use generic AI for this. Only use registered medical devices.

The "verification-first" workflow that keeps you safe

The most dangerous thing a GP can do is "copy-paste-send." The safe GP uses a Verification-First workflow.

  1. Prompt: Ask the question (e.g., "Management of gout flare in CKD 4").
  2. Review: Read the AI's generated summary.
  3. Verify: Click the citation. Does the link actually go to a NICE CKS or BNF page? Does the text match the summary?
  4. Act: Make the clinical decision based on the source, not the summary.
  5. Document: "Advice checked against NICE CKS [Link]."

This turns the AI into a "search accelerator" rather than a decision maker. You are still the pilot; the AI is just checking the map.

Where iatroX fits

Most generic chatbots fail the "Verification" step because they hallucinate links or reference US data.

iatroX is built to be the clinician-facing speed layer for this specific workflow.

  • Citation-First: Unlike ChatGPT, iatroX is designed to start with the UK guideline (NICE, SIGN, CKS) and build the answer from there.
  • No Hallucinated Links: If we can't find a trusted UK source, we don't guess.
  • Exam-Aligned: Because it's mapped to the MRCGP/UKMLA curriculum, the logic aligns with UK best practice, not US insurance protocols.

It provides the speed of an AI chat but with the traceability of a textbook—allowing you to move from "Taboo" to "Tool" without compromising safety.
