Should You Use ChatGPT for Clinical Decision Support? A UK Clinician's Practical Guide to AI Safety, Guidelines, and Governance

According to a 2025 AMA survey of 1,183 physicians, 66% already use AI in clinical practice. UK figures are likely similar. UK clinicians are using ChatGPT for summarising complex cases, drafting patient letters, understanding unfamiliar conditions, reviewing drug interactions, preparing for MDT discussions, writing audit reports, and revising for exams.

The question is not "should clinicians use AI?" — they already do. The question is "how do they use it safely?"

The Safety Considerations

Hallucination Risk

ChatGPT can generate plausible-sounding but factually incorrect clinical information. The underlying language model predicts the most probable next token — it does not verify clinical accuracy against authoritative sources before generating output. For common clinical questions with extensive training data, outputs are usually correct. For rare conditions, unusual drug interactions, edge-case dosing, and complex multi-step management pathways, the probability of hallucination increases.
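To make that mechanism concrete, here is a deliberately simplified toy in Python. The prompt, the candidate continuations, and every probability are invented for illustration; the point is that nothing in the generation loop consults an authoritative source before producing fluent output.

```python
import random

# Toy model only: invented probabilities standing in for the frequencies a
# language model absorbs from its training data. Nothing here checks clinical truth.
continuations = {
    "first-line treatment for hypertension is": [
        ("an ACE inhibitor", 0.5),  # heavily represented in US-weighted text
        ("amlodipine", 0.3),
        ("ramipril", 0.2),
    ],
}

def generate(prompt: str) -> str:
    """Pick a continuation weighted purely by learned frequency."""
    tokens, weights = zip(*continuations[prompt])
    return random.choices(tokens, weights=weights, k=1)[0]

# The output is fluent and plausible whether or not it matches NICE NG136.
print(generate("first-line treatment for hypertension is"))
```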

The Nature Medicine study (Ramaswamy et al., Mount Sinai, February 2026) found ChatGPT Health under-triaged 52% of gold-standard emergencies, directing patients with DKA and impending respiratory failure to evaluation within 24-48 hours. While this tested the consumer product, the underlying model limitations apply to all ChatGPT interactions. Classical textbook presentations: correct. Ambiguous, atypical, or evolving presentations: unreliable.

The risk is highest for questions where the clinician lacks the knowledge to evaluate the AI's answer — which is the exact situation in which they are most likely to consult AI. This creates a dangerous feedback loop: the less the clinician knows, the more they rely on AI, and the less able they are to catch errors.

UK Guideline Misalignment

Standard ChatGPT does not preferentially cite UK guidelines. US clinical guidelines are substantially over-represented in its training data. A UK GP asking "what is the first-line treatment for hypertension?" may receive recommendations aligned to the US ACC/AHA guideline rather than NICE NG136. The treatment thresholds, first-line drug choices, and step-up protocols differ materially; this is not a theoretical concern but a practical prescribing risk.

A UK clinician asking about antibiotic prescribing may receive recommendations aligned to IDSA (US) rather than NICE antimicrobial stewardship guidance. A UK pharmacist checking a drug interaction may receive information referencing US-approved drugs not available in the UK market. These misalignments are structural consequences of using a global model for jurisdiction-specific clinical questions.

Data Governance

Entering patient-identifiable information into standard ChatGPT raises data protection concerns under UK GDPR. Health data is classified as special category data requiring an explicit legal basis for processing. Standard ChatGPT conversations may be used for model training unless the user has opted out. ChatGPT for Healthcare addresses some of this with HIPAA compliance (a US framework) and customer-managed encryption, but that product is enterprise-only and unavailable in the UK.

The individual UK clinician using standard ChatGPT has no formal data governance framework. They are personally responsible for ensuring no patient-identifiable data enters the system, a responsibility that is easy to breach inadvertently when describing a clinical scenario or copy-pasting from a clinical record.

Regulatory Status

Standard ChatGPT is not a registered medical device. Using an unregistered tool for clinical decision support places the governance burden entirely on the individual clinician and their employing organisation. The Royal College of Surgeons Bulletin (October 2025) noted that LLMs intended for medical purposes in the UK — including clinical decision support — are classified as medical devices. The MHRA's National Commission is developing recommendations that may formalise this classification.

The Practical Safety Framework

Six rules for safe use of ChatGPT in UK clinical practice.

1. Never enter patient-identifiable data. No names, NHS numbers, dates of birth, hospital numbers, or demographic combinations that could identify a patient. Describe scenarios generically: "a 65-year-old male with...", never the patient's actual details. A minimal pre-paste screening sketch follows this list.

2. Always verify clinical outputs against authoritative UK sources: NICE guidelines, CKS, the BNF, peer-reviewed literature, and SmPC data. If the output cites "per ACC/AHA guidelines", that is a US recommendation. Do not assume UK guideline alignment.

3. Do not rely on ChatGPT for drug doses or interactions. Cross-reference every dose against the BNF. Cross-reference every interaction against the BNF interaction checker or emc SmPC section 4.5. A hallucinated dose is a direct patient safety risk.

4. Use it for synthesis and drafting, not for clinical decisions. ChatGPT excels at synthesising information, structuring correspondence, and summarising evidence. These outputs should be treated as first drafts requiring professional review — not as clinical recommendations to be followed without verification.

5. Document appropriately. The clinical decision was yours, supported by your clinical judgment and verified against authoritative sources. The AI assisted with information retrieval and drafting, not with clinical decision-making. Documentation should reflect this distinction.

6. Consider your audience. A referral letter drafted by ChatGPT that references US guideline recommendations will confuse the receiving clinician and undermine professional credibility. Review all AI-drafted clinical correspondence for jurisdiction-appropriate guideline references before sending.
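As flagged under rule 1, a pre-paste screen can catch the most mechanical identifier leaks. Below is a minimal sketch in Python built around the standard Modulus 11 check digit for NHS numbers; the function names and the regex heuristic are illustrative assumptions, and this is nowhere near a complete de-identification tool (names, dates of birth, and addresses still need human review).

```python
import re

def nhs_check_digit_valid(number: str) -> bool:
    """Validate a 10-digit string against the NHS number Modulus 11 check digit."""
    digits = [int(d) for d in number]
    # Weight the first nine digits 10 down to 2, sum, and derive the check digit.
    total = sum(d * w for d, w in zip(digits[:9], range(10, 1, -1)))
    check = 11 - (total % 11)
    if check == 11:
        check = 0
    return check != 10 and check == digits[9]  # a check value of 10 is never valid

def find_possible_nhs_numbers(text: str) -> list[str]:
    """Flag candidate NHS numbers (10 digits, optionally grouped 3-3-4) in text."""
    candidates = re.findall(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b", text)
    cleaned = (re.sub(r"[ -]", "", c) for c in candidates)
    return [c for c in cleaned if nhs_check_digit_valid(c)]

# Screen a draft prompt before it leaves the clinical environment.
# (943 476 5919 is a published NHS test number, not a real patient's.)
draft = "72-year-old male, NHS number 943 476 5919, presenting with chest pain..."
for hit in find_possible_nhs_numbers(draft):
    print(f"Possible NHS number detected: {hit}. Redact before pasting.")
```

Even with a screen like this, rule 1 still stands: describe scenarios generically rather than trying to sanitise real records after the fact.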

Purpose-Built Alternatives

iatroX — MHRA-registered, UKCA-marked Class I medical device. Retrieves and synthesises NICE guidelines, CKS summaries, peer-reviewed literature, and SmPC data. Does not require patient data input — you ask clinical questions, not questions about specific patients. Free for clinical queries, exam preparation, and calculators. The tool designed for the "what does NICE say about X?" question UK clinicians ask dozens of times per week.

OpenEvidence — free for verified clinicians globally. Searches peer-reviewed medical literature only. HIPAA-compliant, SOC 2 Type II certified. Proprietary models (not ChatGPT). Strongest for US evidence queries and broad literature synthesis. ~15 million consultations/month.

Medwise AI — enterprise deployment for NHS Trusts. Integrates local Trust policies and formularies alongside national guidelines. Enterprise licensing — not available to individual clinicians.

When ChatGPT Is Genuinely Useful

In fairness, ChatGPT has legitimate clinical use cases where its strengths align with the task.

Low-risk, high-value use cases: Drafting patient information leaflets (then editing for UK context). Summarising lengthy research papers for rapid evidence review. Brainstorming differential diagnoses for educational purposes (then verifying). Writing audit methodology sections. Explaining complex concepts in plain English for patient communication. Administrative tasks: meeting agendas, complaint response drafts, job descriptions, educational material outlines.

These use cases are low-risk because they do not directly inform clinical decisions about individual patients. The clinician reviews and edits before the output reaches any patient or clinical record.

The Honest Position

ChatGPT is a powerful tool that UK clinicians are already using daily. Telling them to stop is neither realistic nor helpful. The better approach: provide a practical safety framework, explain the specific limitations, and point clinicians toward purpose-built tools for clinical decision support specifically.

Many clinicians will use both — ChatGPT for general research, writing, and administrative tasks where UK-specific accuracy is less critical; iatroX for clinical questions where UK guideline alignment, regulatory governance, and verifiable citations matter. The goal is informed use, not prohibition.

Try Ask iatroX — UK guidelines, MHRA-registered, free →
